I've been going through the posts here and have been using the Free Native SAN for a little while. I hope to build an HA/Hyper-V solution for one of our offices in the near future, with the following specs:
Node Systems:
Dell R710, 2x E5645 6-Core, 192GB RAM
Supermicro JBOD - SAS expanders with 16x 2TB SAS drives initially (16TB licenses) in a RAID10 array
Intel X520-DA2 10GbE for sync/cluster links
LSI 9280-8e controllers
Anyhoo, my question concerns the RAID array and caching. I'm coming from a NexentaStor background, where we just bought SAS HBAs and NexentaStor took care of the vdev creation/mirroring, and of course we used SSDs for the L2ARC and a DDRdrive for the ZIL. From reading posts here, it appears that using the controller cache, or CacheCade Pro with SSDs, for read/write caching isn't recommended; instead, I should look at using the StarWind RAM cache. That's why I specified 192GB of RAM: only 96GB is needed for our Hyper-V deployment, so the rest would go to cache.
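For what it's worth, here's the rough per-node RAM budget I'm working from (the OS/parent-partition headroom figure is just my own guess, not anything from StarWind's docs):

```python
# Rough per-node RAM budget (my own numbers; the 16GB OS headroom is a guess)
total_ram_gb = 192
hyperv_workload_gb = 96    # what our Hyper-V VMs actually need
os_headroom_gb = 16        # assumed Windows / Hyper-V parent partition overhead

starwind_cache_gb = total_ram_gb - hyperv_workload_gb - os_headroom_gb
print(f"RAM left over for StarWind cache: {starwind_cache_gb} GB")  # -> 80 GB
```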
Is this a correct assumption? Also, I'm a little nervous about the recommendation to build the array as RAID0 in a 3-way HA environment, mainly because of the resync time required in the event of an array failure. Although with 16 drives each cranking away at 100MB/s writes, that works out to about 3-4 hours, I guess. And while I already have the 2TB drives, ideally I'd use 16x 1TB drives for something like that, given that a 16TB license is my maximum budget spend at the moment.
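Here's the napkin math behind that 3-4 hour figure, including the sync link as a possible bottleneck (the ~100MB/s per drive and single-10GbE-sync-port numbers are my own assumptions):

```python
# Back-of-the-envelope full resync estimate for a 16TB RAID0 device
# (assumes a sequential resync with no production I/O competing for bandwidth)
device_tb = 16            # licensed/striped capacity to re-copy from the partner
drives = 16
per_drive_mb_s = 100      # assumed sustained sequential write per SAS drive

disk_mb_s = drives * per_drive_mb_s   # 1600 MB/s aggregate stripe write
net_mb_s = 1250                       # roughly one 10GbE sync port's worth

bottleneck_mb_s = min(disk_mb_s, net_mb_s)  # resync runs at the slower of the two
hours = device_tb * 1_000_000 / bottleneck_mb_s / 3600
print(f"~{hours:.1f} hours for a full resync")  # -> ~3.6 hours, network-bound
```

Interesting that with these numbers a single 10GbE port, not the disks, would be the limit; the X520-DA2 is dual-port, though, so dedicating both ports to sync would put the disks back in front.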
Anyway, any advice or comments on this would be greatly appreciated. I will say that I've had much better luck setting up a failover cluster with Server 2012/Native SAN than I did with NexentaStor. I kept having issues with the cluster storage on Nexenta going offline during failover testing or reboots, though maybe I didn't have it set up just right. And that was still a single point of failure, since we don't have the HA plugins.
Thanks,
Kevin