Hi all,
Any guidance or opinions are greatly appreciated. I'm currently planning a virtualization migration for a customer. I'm speccing top-of-the-line hardware throughout, but I'm concerned about disk throughput on the iSCSI SAN. How can I determine, ahead of purchase, whether the solution will deliver the required performance?

The proposed iSCSI SAN is built on HP DL370s: twelve 600GB 6Gbps 15K SAS drives in a RAID 5 array. ***edit: RAID 10 if recommended*** That's roughly 6TB usable on RAID 5 with a hot spare, and considerably less on RAID 10. iSCSI traffic will run over quad bonded Gigabit NICs, or 10GbE NICs if those are strongly recommended instead. The two or three cluster nodes are DL380 G8s, also with 15K SAS drives, probably a small RAID 10 array each for the OS. The failover cluster will eventually host 20 to 40 VMs.
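For reference, here's the back-of-the-envelope math I'm using to sanity-check the array before committing. The ~180 IOPS per 15K spindle figure and the 70/30 read/write mix are my own assumptions, not measured or vendor numbers:

```python
# Rough front-end IOPS estimate for the proposed array.
# Assumptions (mine): ~180 IOPS per 15K SAS spindle, and the
# classic RAID write penalties (RAID 5 = 4, RAID 10 = 2).
# Real results depend on controller cache, stripe size, and
# workload, so treat this as a sanity check only.

PER_SPINDLE_IOPS = 180      # assumed for a 15K SAS drive
WRITE_PENALTY = {"raid5": 4, "raid10": 2}

def frontend_iops(drives, raid, read_pct=0.70):
    """Front-end IOPS the array can absorb at a given read/write mix."""
    raw = drives * PER_SPINDLE_IOPS
    write_pct = 1.0 - read_pct
    return raw / (read_pct + write_pct * WRITE_PENALTY[raid])

# 12 drives minus one hot spare
print(f"RAID 5,  11 drives: {frontend_iops(11, 'raid5'):5.0f} IOPS")
# RAID 10 needs an even drive count, so 10 data drives + spare
print(f"RAID 10, 10 drives: {frontend_iops(10, 'raid10'):5.0f} IOPS")
```

Under those assumptions it comes out to roughly 1,000 front-end IOPS on RAID 5 versus ~1,400 on RAID 10 at a 70/30 mix, which is why I'm open to RAID 10 despite the capacity hit.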
Some VMs will host SQL Server 2008, some will be terminal servers, and some will be basic Windows appliance servers. They might add an Oracle server VM or two down the road. I know the new E5-2650 CPUs are sufficient for the VM load; it's the iSCSI throughput that worries me. I can't have these guys spend $70K on hardware and services only to find that the VM load is too great for the disk array.
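On the network side, here's the rough ceiling I'm working from. The 85% efficiency figure is my own allowance for TCP/iSCSI overhead, not a benchmark:

```python
# Theoretical iSCSI transport ceilings for the two NIC options.
# The 85% efficiency factor is an assumed allowance for TCP/IP
# and iSCSI protocol overhead. Bonding rarely scales perfectly
# either, since a single iSCSI session usually sticks to one
# physical link unless MPIO is spreading the load.

EFFICIENCY = 0.85  # assumed usable fraction of wire speed

def usable_mb_per_s(links, gbps_per_link):
    """Approximate usable throughput in MB/s for a set of links."""
    return links * gbps_per_link * 1000 / 8 * EFFICIENCY

print(f"4 x 1GbE : {usable_mb_per_s(4, 1):6.0f} MB/s aggregate")
print(f"1 x 10GbE: {usable_mb_per_s(1, 10):6.0f} MB/s")
```

So the quad GbE setup tops out somewhere around 425 MB/s in aggregate, and only if MPIO actually spreads sessions across all four links; a single 10GbE port clears 1 GB/s on its own.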
Would it be better for performance to break the iSCSI storage into smaller arrays, each on a separate controller or controller channel? If so, I lose a drive's worth of capacity to parity for each RAID 5 array I create.
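To put numbers on that parity cost (just capacity arithmetic, assuming 600GB drives and one hot spare held back):

```python
# Usable capacity when splitting the same 11 non-spare drives
# (12 minus one hot spare) into one or more RAID 5 arrays.
# Each RAID 5 array gives up one drive's capacity to parity.

DRIVE_GB = 600

def raid5_usable(drives_per_array):
    """Usable GB of a single RAID 5 array."""
    return (drives_per_array - 1) * DRIVE_GB

one_array  = raid5_usable(11)                  # one 11-drive RAID 5
two_arrays = raid5_usable(6) + raid5_usable(5) # split into 6 + 5
print(f"One 11-drive RAID 5 : {one_array} GB usable")
print(f"Split into 6 + 5    : {two_arrays} GB usable")
```

Each split costs another 600GB of usable space, so I'd want to be sure the reduced spindle contention actually shows up in practice before paying that.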
Also, there will be a StarWind HA failover node. Can we use less expensive SATA drives for the replication partner, or will that substantially slow down replication? Or worse, slow down the primary iSCSI LUNs? (If the replication is synchronous, I assume a slower partner would hold up write acknowledgements on the primary.)
Thanks for any thoughts!
Matt