V8.... Wow.... Feedback

Software-based VM-centric and flash-friendly VM storage + free version


Sienar
Posts: 3
Joined: Sat Nov 14, 2015 4:37 am

Sat Nov 14, 2015 5:06 am

So I set up a small lab environment and have been attempting to test out V8. I started yesterday with build 8198 because I'd downloaded it several weeks ago and had only just gotten around to it. It seemed to be doing OK; I had a 5GB LUN replicated between the 2 nodes and a 2012 R2 cluster using that for quorum.

I noticed today that build 8730 was available. Is this your current production, selling version of Virtual SAN? You guys are making giant changes between builds of the same version (V8), e.g. you completely removed the concept of clusters. That seems like a drastic change that would go into V9, not just another spin of V8.

Also, build 8730 seems to have broken device creation. I can no longer create large LSFS devices because it won't let you turn off dedupe, and I don't have 1TB of RAM in my test machines. Looks like I'm rolling back to 8198 because of that, but the fun there is that it's killed the replication for my little quorum device, so I'm probably going to have to delete and recreate it. Good thing there isn't important data in there...
darklight
Posts: 185
Joined: Tue Jun 02, 2015 2:04 pm

Mon Nov 16, 2015 3:52 pm

I believe deduplication is not even enabled by default. Give it a double-check and you should be fine.
[Attachment: lsfs.jpg (41.56 KiB)]
The cluster concept was something I never tried, since it looks more like a cosmetic/management thing anyway. Basically, I always set everything up manually; it's just the feeling of knowing exactly how it's configured. I didn't even notice that it is not present anymore :)
Sienar
Posts: 3
Joined: Sat Nov 14, 2015 4:37 am

Wed Nov 18, 2015 12:09 am

It shows as unchecked, but the behavior of the following screens in the wizard indicates that the checkbox is ignored. It won't let me create a 10TB, thin provisioned, LSFS LUN. Both builds I've tried have proven way too flaky. I've scrapped the test and will just stick with MS DFS-R with a manual failover using a secondary IP that I can move between the file servers. Would've loved for Virtual SAN to work, though.
oxyi
Posts: 67
Joined: Tue Dec 14, 2010 8:30 pm

Wed Nov 18, 2015 5:22 am

Am I reading it correctly? Are you saying that in the newer version of Virtual SAN there is no more clustering, like 2 nodes, 3 nodes?
darklight
Posts: 185
Joined: Tue Jun 02, 2015 2:04 pm

Wed Nov 18, 2015 10:53 am

Sienar, take a look at the LSFS technical description:
https://knowledgebase.starwindsoftware. ... scription/
Limits and requirements
Required free RAM (not related to L1 cache):
- 4.6 GB of RAM per 1 TB of initial LSFS size (with deduplication disabled)
- 7.6 GB of RAM per 1 TB of initial LSFS size (with deduplication enabled)
- LSFS maximum size is 11 TB
Basically, you need 46GB of RAM for a 10TB LSFS device (10 TB x 4.6 GB per TB).
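
To put the sizing in one place, here is a minimal back-of-the-envelope sketch in Python, based purely on the KB figures quoted above (not an official StarWind tool or formula):

Code:

# RAM estimate for an LSFS device, using the figures quoted above from the
# knowledge base article (assumed values, not an official formula):
# 4.6 GB per TB without dedup, 7.6 GB per TB with dedup, 11 TB max size.

GB_PER_TB_NO_DEDUP = 4.6
GB_PER_TB_DEDUP = 7.6
LSFS_MAX_TB = 11

def lsfs_ram_gb(size_tb, dedup=False):
    """Estimated free RAM (GB) needed for an LSFS device of size_tb TB."""
    if size_tb > LSFS_MAX_TB:
        raise ValueError("LSFS maximum size is %d TB" % LSFS_MAX_TB)
    return size_tb * (GB_PER_TB_DEDUP if dedup else GB_PER_TB_NO_DEDUP)

print(lsfs_ram_gb(10))              # 46.0 GB -> why 32 GB was not enough
print(lsfs_ram_gb(10, dedup=True))  # 76.0 GB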

oxyi, yeah, that's correct. AFAIK they (temporarily) removed the cluster option from the management console for some redesigning/revamping.
Sienar
Posts: 3
Joined: Sat Nov 14, 2015 4:37 am

Thu Nov 19, 2015 12:55 am

That's pretty odd. It let me create them in the previous build on a system with only 32GB of RAM. 46GB of RAM just to mount a measly 10TB is not production ready...
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Thu Nov 19, 2015 3:36 pm

That's not what he said. Clustering is still there; it's just that the management is organized in a different way. We're still working on the visualization here.
oxyi wrote: Am I reading it correctly? Are you saying that in the newer version of Virtual SAN there is no more clustering, like 2 nodes, 3 nodes?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Thu Nov 19, 2015 4:00 pm

It really depends on how you look at things and who the competitor is.

Bad news: the other guys have even higher memory requirements for logging and in-line dedupe. Storage Spaces Direct now needs ~10GB of RAM per 1TB of flash cache (not even talking about the actual storage!), and flash cache is mandatory with S2D.
ZFS needs roughly 200-400 bytes of RAM for every hashed block, so it's about a 1:20 ratio with the 4KB blocks we use. That's 100GB of RAM for 2TB of 4KB in-line deduplicated storage. We'll ask for 14GB only!
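
For reference, the arithmetic behind those figures works out roughly like this (a sketch using the numbers quoted in this thread, not vendor documentation; the 200-byte entry size is the low end of the quoted range):

Code:

# Back-of-the-envelope check of the dedup metadata figures in this thread
# (assumed values from the posts above, not measured results).

TB = 1024 ** 4
GB = 1024 ** 3
BLOCK = 4 * 1024          # 4 KB dedup block size
DDT_ENTRY_BYTES = 200     # low end of the ~200-400 bytes per block quoted

storage_tb = 2
blocks = storage_tb * TB // BLOCK            # 536,870,912 blocks in 2 TB
zfs_ram_gb = blocks * DDT_ENTRY_BYTES / GB   # ~100 GB, matching the post
lsfs_ram_gb = storage_tb * 7.6               # ~15 GB at the 7.6 GB/TB KB figure

print("%d blocks -> ZFS-style dedup table ~%.0f GB RAM" % (blocks, zfs_ram_gb))
print("LSFS with dedup: ~%.1f GB RAM for %d TB" % (lsfs_ram_gb, storage_tb))

The 200 bytes per 4,096-byte block is also where the ~1:20 ratio mentioned above comes from.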

Good news: we're working on moving hash tables and index tables from RAM to flash. Index tables will go first, as they are the same for dedupe and non-dedupe modes, so we'll drop from 7GB per 1TB to 3.5GB and then to zero. The first iteration is VERY close to being released.

We'll still keep it as an option: if you want to dedupe NVMe storage, you'll be dealing with something like 1-3TB of flash per node, and at that scale RAM-based index and hash tables are OK.

P.S. ZFS will do the same as well, keeping hash tables in flash and not in RAM.

Please stay tuned.
Sienar wrote: That's pretty odd. It let me create them in the previous build on a system with only 32GB of RAM. 46GB of RAM just to mount a measly 10TB is not production ready...
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
