A few config questions

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

kevrags
Posts: 12
Joined: Wed Dec 26, 2012 8:11 pm

Wed Dec 26, 2012 8:43 pm

I've been going through all the posts here, and have been using the free Native SAN for a little while. I hope to build an HA/Hyper-V solution for one of our offices in the near future, with the following specs:

Node Systems:
Dell R710, 2x E5645 6-Core, 192GB RAM
Supermicro JBOD - SAS Expanders with 16 2TB SAS drives initially (16TB licenses) in RAID10 array
Intel X520-DA2 10GbE for Sync/Cluster links
LSI 9280-8e controllers

Anyhoo, my question concerns the RAID array and caching. I'm coming from a NexentaStor background, where we just bought the SAS HBAs and NS took care of the vdev creation/mirroring, and of course we used SSDs for L2ARC and a DDRdrive for the ZIL. From reading posts here, it would appear that using the controller cache or CacheCade Pro and SSDs for read/write caching isn't recommended; rather, I should look to using StarWind RAM cache. This is why I specified 192GB RAM. Only 96GB is needed for our Hyper-V deployment, so the rest would be for cache.

Is this a correct assumption? Also, I'm just a little nervous about the recommendation to build the array as a RAID0 when in a 3-way HA environment, mainly because of the resync time required in the event of an array failure. Although, 16 drives cranking away at 100MB/s writes each would make that about 3-4 hours, I guess. And, I already have the 2TB drives, so ideally I'd try to use 16 1TB drives for something like that, given a 16TB license is my maximum budget spend at the moment.
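For what it's worth, here's the rough math behind that 3-4 hour guess (my own back-of-envelope sketch: it assumes all 16 drives rewrite in parallel at a sustained ~100MB/s, which is optimistic for a real resync):

```python
# Rough full-resync time estimate for a 16-drive array.
# Assumptions (mine, not measured): each drive sustains ~100 MB/s
# sequential writes and all drives resync in parallel.
drives = 16
capacity_per_drive_tb = 1          # 16 x 1TB to stay inside a 16TB license
write_speed_mb_s = 100             # optimistic sustained write speed

total_mb = drives * capacity_per_drive_tb * 1_000_000  # TB -> MB (decimal)
aggregate_mb_s = drives * write_speed_mb_s
hours = total_mb / aggregate_mb_s / 3600
print(f"~{hours:.1f} hours to rewrite the full array")
```

So a best-case figure is just under 3 hours; contention from live VM I/O during the resync would stretch that out.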

Anyway, any advice or comments on this would be greatly appreciated. I will say that I've had much better luck setting up a failover cluster with Server 2012/Native SAN than I did with NexentaStor. I kept having issues with the cluster in the Nexenta storage going offline during failover testing or reboot, though maybe I didn't have it setup just right. And that was still a single point of failure, as we do not have the HA plugins.

Thanks,

Kevin
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Thu Dec 27, 2012 9:33 pm

Yes, correct. The more RAM you throw in, the more cache we'll allocate to "spoof" writes and keep reads diskless.

The upcoming version of StarWind will have flash cache support, so we'll take care of your SSDs (better PCIe than SATA-attached) as well. Not as a ZIL but purely as an L2 cache. It would be up to you: use raw disks or pure logging mode (not a combination of a generic FS + an acceleration log, as ZFS is).

You may go with RAID10 / RAID5 / RAID6 instead of RAID0 or JBOD - no problem.
kevrags wrote: I've been going through all the posts here, and have been using the free Native SAN for a little while. […]
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

jimbyau
Posts: 22
Joined: Wed Nov 21, 2012 2:12 am

Tue Jan 01, 2013 3:28 pm

anton (staff) wrote: Upcoming version of StarWind will have flash cache support so we'll take care of your SSDs (better PCIe rather then SATA attached) as well. Not as a ZIL but purely as a L2 cache.
Hi Anton, any idea when the SSD caching feature in Starwind will be available for testing? Will this be a read only cache, or write as well? Will it be a smart caching algorithm?
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed Jan 02, 2013 7:49 am

Supposed to be released with Beta-1 of V8, which we'll push in January (we expected to do it in December but were late due to OEM builds).

Yes, full write-back. No read-only write-thru crap.

Improved ARC (next generation compared to the one found inside ZFS) and classic LRU.
jimbyau wrote: Hi Anton, any idea when the SSD caching feature in Starwind will be available for testing? […]
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

kevrags
Posts: 12
Joined: Wed Dec 26, 2012 8:11 pm

Thu Jan 03, 2013 4:36 pm

So, just to confirm. You don't recommend using caching at the controller level at all?

Thanks,

Kevin
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am
Contact:

Wed Jan 09, 2013 1:07 pm

I'd recommend sticking with the StarWind cache only.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
epalombizio
Posts: 67
Joined: Wed Oct 27, 2010 3:40 pm

Fri Mar 22, 2013 3:25 pm

Hi,
Any update on availability of a beta for testing? I'm in the process of planning my next SAN and I'd like to use Starwind.

Regarding RAM caching: I remember reading somewhere that anything more than 2GB allocated to cache per LUN was a waste. Is this true? If there is no limit, would it just make more sense to load up on RAM and leverage that as cache instead of using an SSD or PCIe cache?

So Starwind is making the jump from v6 to v8 directly, or will there be a version 7?

Thanks,
Elvis
epalombizio
Posts: 67
Joined: Wed Oct 27, 2010 3:40 pm

Fri Mar 22, 2013 9:04 pm

Actually, the part about not using more than 3GB is directly from the Starwind High Availability Best Practices guide.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am
Contact:

Mon Mar 25, 2013 2:26 pm

Hi.

The beta version is coming really soon. Stay tuned.

As for the cache settings: performance does not scale linearly with cache size. In most cases, the likelihood of a single device actually using the whole cache drops with every byte beyond 3GB, which makes the unused cache a simple waste of hardware. The best practices document covers the basics and the most common scenarios; it is always better to double-check the design of your future system with tech support (which comes free) and get high-quality assistance.
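To illustrate the point with the numbers from this thread (my own arithmetic sketch, not an official sizing formula: the ~3GB-per-device figure is the best-practices guideline, and the 192GB/96GB split is from Kevin's build):

```python
# Illustrative cache-budget sizing only. The ~3GB-per-device cap is the
# guideline discussed above; the RAM figures are from the proposed build.
ram_total_gb = 192
ram_hyperv_gb = 96                 # reserved for the Hyper-V workload
per_device_cap_gb = 3              # point of diminishing returns per device

cache_budget_gb = ram_total_gb - ram_hyperv_gb
useful_devices = cache_budget_gb // per_device_cap_gb
print(f"{cache_budget_gb} GB free for cache covers "
      f"{useful_devices} devices at {per_device_cap_gb} GB each")
```

In other words, a large RAM pool only pays off in cache terms if it is spread across enough devices; past the per-device cap, extra gigabytes mostly sit idle.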
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com