I'm trying to consolidate my home storage, and StarWind so far seems like the perfect solution. However, I'm encountering some issues that I'd like to clear up before I proceed to deployment.
[*]I'm deploying to Windows Home (a client OS) with relatively weak hardware (soon to be 12GB RAM, 10TB storage). Is this a scenario where I'd legally be using the Virtual SAN Free edition? I know it's not a supported scenario; the question is purely about licensing: am I even allowed to do that?
[*]This would be a single-node setup. I understand that, too, is not the typical / intended deployment, but I do have backup issues to solve. I've read that some enterprise backup solutions are supported, but what about plain file-copy solutions, such as Windows Backup or even WinRAR? Can I do an online backup using these?
[*][*]Do I have to back up the L2 cache folders as well? I've read in another thread that there are issues if those files go missing.
[*]RAM requirements: I read that 5GB / TB are needed for block hashing + 3.5GB / TB for deduplication. Questions:
[*][*]Do I understand correctly that deduplication is global? Any deduped image will share blocks with other images in order to maximize space savings?
[*][*]Both deduplication and compression are mentioned in the docs. May I assume they actually refer to the same functionality?
[*][*]What are the RAM requirements for thick-provisioned images?
[*][*]Have I observed correctly that RAM usage actually depends on "live" served data, not the provisioned image sizes? What I mean: I have a 1TB image that is currently 200GB full. Is RAM usage therefore 1GB + 0.7GB? Or is it even less, depending on the quantity of "currently served" data, as in: served in this session? This is an important question for me since I'm running with relatively little RAM. Deduplication is (for me) more or less only useful for OS blocks, i.e. around 32GB per image. However, I'd still like to have 1TB images so the kids can load all their stuff. If RAM requirements go per session, that's a relatively small amount of RAM. If they go per stored data in the image, my 12GB of RAM gets filled by a single full 1TB image.
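To make the question above concrete, here's the arithmetic I'm assuming, as a small sketch. The per-TB figures are the ones quoted above; the split into "per stored data" vs. "per provisioned size" is my own interpretation of the two possible models, not documented behavior:

```python
# Per-TB RAM figures quoted from the docs (assumed, not verified):
HASH_GB_PER_TB = 5.0    # block hashing
DEDUP_GB_PER_TB = 3.5   # deduplication

def ram_gb(tb):
    """RAM in GB required for `tb` terabytes counted against the requirement."""
    return tb * (HASH_GB_PER_TB + DEDUP_GB_PER_TB)

# A 1 TB provisioned image that currently holds 200 GB of data:
ram_if_per_stored = ram_gb(0.2)       # 1.0 + 0.7 = 1.7 GB
ram_if_per_provisioned = ram_gb(1.0)  # 5.0 + 3.5 = 8.5 GB
```

Under the per-stored-data model a mostly empty 1TB image costs almost nothing; under the per-provisioned model a handful of 1TB images would exhaust 12GB of RAM, which is exactly why the distinction matters to me.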
[*]In another thread, a user was complaining about the RAM usage of v8 LSFS vs. v6. The higher RAM usage comes from the 4KB blocks versus the much larger blocks in v6. That thread mentioned that larger block sizes are in development for v8. Is there an approximate ETA for that?
[*][*]As an alternative, are you planning to move the hash tables to (heavily cached) file storage, in order to support extremely large storage / test deployments with little RAM?
[*]What are the Control devices that I can add to a target? What is their function? Sorry if this is a super-basic question, but I can't seem to find any documentation about them.
[*]Do you provide any plugin API? In particular, I'm interested in "filter" plugins: a plugin that could register a new device and provide open / read / write / close functionality, while also having the same access to all registered devices (possibly even telling the system that its own registered device is a parent of other listed devices).
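To illustrate what I mean by a "filter" plugin, here is a rough sketch in Python. All names here are hypothetical; this is only meant to show the shape of the interface I'm asking about, not to suggest an actual StarWind API:

```python
from abc import ABC, abstractmethod

class FilterDevice(ABC):
    """Hypothetical interface a filter plugin would implement (illustration only)."""

    @abstractmethod
    def open(self) -> None: ...

    @abstractmethod
    def read(self, offset: int, length: int) -> bytes: ...

    @abstractmethod
    def write(self, offset: int, data: bytes) -> None: ...

    @abstractmethod
    def close(self) -> None: ...

class Registry:
    """Hypothetical registry giving plugins access to all registered devices,
    including declaring that one device is the parent of others."""

    def __init__(self):
        self.devices = {}     # name -> FilterDevice
        self.parent_of = {}   # child name -> parent name

    def register(self, name, device, children=()):
        self.devices[name] = device
        for child in children:
            self.parent_of[child] = name

class MemoryDevice(FilterDevice):
    """Toy in-memory device showing the interface in use."""

    def __init__(self, size):
        self._buf = bytearray(size)

    def open(self):
        pass

    def read(self, offset, length):
        return bytes(self._buf[offset:offset + length])

    def write(self, offset, data):
        self._buf[offset:offset + len(data)] = data

    def close(self):
        pass
```

The interesting part for me is the registry: a filter device would sit "above" the devices it wraps and declare them as its children, so the system could route I/O through it.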
Sorry for all these questions, and I hope you can answer them,
Jure