(lots of) newbie questions

Software-based VM-centric and flash-friendly VM storage + free version


velis
Posts: 9
Joined: Mon Feb 09, 2015 12:12 pm

Tue Feb 10, 2015 8:02 am

So I'm trying to consolidate my home storage, and StarWind so far seems like the perfect solution. However, I'm encountering some issues and I'd like to clear them up before I proceed to deployment.

[*]I'm deploying to Windows Home (client OS) with relatively weak HW (will soon be 12GB RAM, 10TB storage). Is this even a scenario where I'd be legally using the VirtualSAN Free edition? I know it's not a supported scenario, but the question is about licensing: am I even allowed to do that?

[*]This would be a single-node setup. I understand that too is not the typical / intended deployment. However, I do have backup issues to solve. I have read that some enterprise backup solutions are supported, but how about straight file-copy solutions, such as Windows Backup or WinRAR even? Can I do an online backup using these?
[*][*]Do I have to back up the L2 cache folders as well? I have read in another thread that there are issues if those files go missing.

[*]RAM requirements: I read that 5 GB/TB are needed for block hashing + 3.5 GB/TB for deduplication. Questions:
[*][*]Do I understand correctly that deduplication is global? Any deduped image will share blocks with other images in order to maximize space savings?
[*][*]There is deduplication and compression mentioned in the docs. May I assume they actually refer to the same functionality?
[*][*]What are RAM requirements for thick-provisioned images?
[*][*]Have I observed correctly that RAM usage actually depends on "live" served data, not the provisioned image sizes? What I mean here: I have a 1 TB image that is currently 200 GB full. Is RAM usage therefore 1 GB + 0.7 GB (rough arithmetic sketched below)? Or is it even less, depending on the quantity of "currently served" data, as in: served in this session? This is an important question for me since I'm running with relatively little RAM. Deduplication is (for me) more or less only useful for OS blocks, that is, around 32 GB per image. However, I'd still like to have 1 TB images so that the kids can load all their stuff. If RAM requirements go per session, that will be a relatively small amount of RAM required. If they go per stored data in the image, my 12 GB of RAM gets filled by a single full 1 TB image.
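To make the arithmetic concrete, here is the back-of-the-envelope estimate I'm doing (just a Python sketch; the per-TB figures are the ones I read about, and whether they apply to stored or to provisioned data is exactly what I'm asking):
[code]
# Back-of-the-envelope RAM estimate, using the per-TB figures quoted above.
# Whether these apply to the data actually stored or to the provisioned
# image size is the open question.

HASH_GB_PER_TB = 5.0    # block hashing, as I read it
DEDUP_GB_PER_TB = 3.5   # deduplication, as I read it

def ram_estimate_gb(data_tb: float) -> float:
    """Estimated RAM in GB for data_tb terabytes of data."""
    return data_tb * (HASH_GB_PER_TB + DEDUP_GB_PER_TB)

print(ram_estimate_gb(0.2))  # ~1.7 GB: a 1 TB image that is 200 GB full, if cost follows stored data
print(ram_estimate_gb(1.0))  # ~8.5 GB: the same image once full, or if cost follows provisioned size
[/code]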

[*]There was another thread where someone was complaining about the RAM usage of v8 LSFS vs v6. The higher RAM usage comes from 4 KB blocks vs the much larger ones in v6. In that thread there was a mention that larger block sizes are in development for v8. Is there an approximate ETA for that?
[*][*]As an alternative, are you planning to move the hash tables to (heavily cached) file storage in order to support extremely large storages / test deployments with low RAM?

[*]What are the Control devices that I can add to a target? What is their function? I'm sorry if this is a super-basic question, but I can't seem to find any documentation about them.

[*]Do you provide any plugin API? In particular, I'm interested in "filter" plugins, that is, a plugin that could register a new device and provide open / read / write / close functionality, as well as having the same access to all registered devices (possibly even telling the system that its own registered device is the parent of other listed devices). A rough sketch of what I mean is below.
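To illustrate what I mean by a "filter" plugin, here is a rough Python sketch of the kind of interface I'm imagining. None of these names come from your SDK; it's purely to show the idea:
[code]
# Purely illustrative sketch of a "filter" device interface -- these names
# are NOT StarWind's API, just the shape of the plugin I am asking about.

from abc import ABC, abstractmethod

class FilterDevice(ABC):
    """A registered device layered on top of other registered devices."""

    def __init__(self, children):
        # Devices this filter declares itself the parent of.
        self.children = children

    @abstractmethod
    def open(self) -> None: ...

    @abstractmethod
    def read(self, offset: int, length: int) -> bytes: ...

    @abstractmethod
    def write(self, offset: int, data: bytes) -> int: ...

    @abstractmethod
    def close(self) -> None: ...


class MirrorFilter(FilterDevice):
    """Example filter: mirror every write to all child devices."""

    def open(self) -> None:
        for child in self.children:
            child.open()

    def read(self, offset: int, length: int) -> bytes:
        return self.children[0].read(offset, length)

    def write(self, offset: int, data: bytes) -> int:
        for child in self.children:
            child.write(offset, data)
        return len(data)

    def close(self) -> None:
        for child in self.children:
            child.close()
[/code]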

Sorry for all these dumb questions; I hope you can answer them.
Jure
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Sun Feb 15, 2015 10:13 am

Hi! Welcome to our community!
[*]I'm deploying to Windows Home (client OS) with relatively weak HW (will soon be 12GB RAM, 10TB storage). Is this even a scenario where I'd be legally using the VirtualSAN Free edition? I know it's not a supported scenario, but the question is about licensing: am I even allowed to do that?
That is supported.
[*]This would be a single-node setup. I understand that too is not the typical / intended deployment. However, I do have backup issues to solve. I have read that some enterprise backup solutions are supported, but how about straight file-copy solutions, such as Windows Backup or WinRAR even? Can I do an online backup using these?
That is supported too :)
[*][*]Do I have to back up the L2 cache folders as well? I have read in another thread that there are issues if those files go missing.
You should do the backups from the client side, so the backup would "know" about the data in the cache.
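For example, a minimal sketch of what a client-side file-copy backup could look like (not an official procedure; the paths are hypothetical and assume the iSCSI disk is mounted inside the client as drive S:):
[code]
# Minimal sketch of a client-side file-copy backup (not an official
# procedure). Paths are hypothetical: S:/data is a folder on the iSCSI
# disk as mounted inside the client, D:/backups is local backup storage.
# Copying from the client side keeps the backup consistent with data
# that may still be sitting in StarWind's cache, unlike copying the raw
# image files on the StarWind host.

import shutil
from datetime import datetime

SOURCE = "S:/data"                                            # hypothetical
DEST = "D:/backups/iscsi-" + datetime.now().strftime("%Y%m%d")

shutil.copytree(SOURCE, DEST)
print("copied", SOURCE, "->", DEST)
[/code]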
[*]RAM requirements: I read that 5 GB/TB are needed for block hashing + 3.5 GB/TB for deduplication. Questions:
1. Do I understand correctly that deduplication is global? Any deduped image will share blocks with other images in order to maximize space savings?
2. There is deduplication and compression mentioned in the docs. May I assume they actually refer to the same functionality?
3. What are RAM requirements for thick-provisioned images?
4. Have I observed correctly that RAM usage actually depends on "live" served data, not the provisioned image sizes? What I mean here: I have a 1 TB image that is currently 200 GB full. Is RAM usage therefore 1 GB + 0.7 GB? Or is it even less, depending on the quantity of "currently served" data, as in: served in this session? This is an important question for me since I'm running with relatively little RAM. Deduplication is (for me) more or less only useful for OS blocks, that is, around 32 GB per image. However, I'd still like to have 1 TB images so that the kids can load all their stuff. If RAM requirements go per session, that will be a relatively small amount of RAM required. If they go per stored data in the image, my 12 GB of RAM gets filled by a single full 1 TB image.
1. Deduplication analyzes all the data within one LUN (i.e., it works per device rather than globally across all images).
2. They are two separate features that work together to achieve one result.
3. The minimal requirement is 0 per LUN :) As for the rest, please refer to our Best Practices.
4. StarWind allows you to reserve space in RAM to be used as cache, so RAM usage depends on your configuration.

I hope that helped.

[*]There was another thread where someone was complaining about the RAM usage of v8 LSFS vs v6. The higher RAM usage comes from 4 KB blocks vs the much larger ones in v6. In that thread there was a mention that larger block sizes are in development for v8. Is there an approximate ETA for that?
[*][*]As an alternative, are you planning to move the hash tables to (heavily cached) file storage in order to support extremely large storages / test deployments with low RAM?

[*]What are the Control devices that I can add to a target? What is their function? I'm sorry if this is a super-basic question, but I can't seem to find any documentation about them.

[*]Do you provide any plugin API? In particular, I'm interested in "filter" plugins, that is, a plugin that could register a new device and provide open / read / write / close functionality, as well as having the same access to all registered devices (possibly even telling the system that its own registered device is the parent of other listed devices).
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com