This is what I currently have to work with:
Two new Dell R520 servers
One older Dell R510 server
All three servers have the same configuration, except that the R510 has slightly slower processors. Otherwise, each server has:
1. Dual 10Gig NIC ports and dual 1Gig NIC ports.
2. Two 146GB SAS OS disks in RAID 1
3. Three 1.2TB SAS disks in RAID 0
4. 32GB of RAM
5. The 10Gig NIC ports are all configured with jumbo packets of 9014 (how I set and verified this is sketched just after this list)
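For reference, here is roughly how I set and verified the jumbo packet value on the 10Gig ports in PowerShell. The adapter name "Sync1" and the address 10.10.1.2 are just placeholders for my direct-connect links, and the exact "Jumbo Packet" display value strings vary by NIC driver:

    # Set the jumbo packet size on a 10Gig sync adapter ("Sync1" is a
    # placeholder name; valid DisplayValue strings depend on the driver)
    Set-NetAdapterAdvancedProperty -Name "Sync1" -DisplayName "Jumbo Packet" -DisplayValue "9014"

    # Confirm the value took
    Get-NetAdapterAdvancedProperty -Name "Sync1" -DisplayName "Jumbo Packet"

    # Verify a jumbo frame passes unfragmented end to end:
    # 8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header
    ping 10.10.1.2 -f -l 8972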
I want to create the 3-node StarWind cluster across all three servers, but I will also host the Hyper-V cluster on these same servers. What I've done so far is connect each server directly with the 10Gig links as per the StarWind PDF 3-node diagram (Sync 1, 2, and 3). Since the Hyper-V cluster is local to the servers, I have teamed the 1Gig NIC ports on each server to our one switch (we do not have redundant switches yet).
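For what it's worth, this is roughly how I built the 1Gig team on each host (the team and member names are just what I used; since there is only one switch, I went switch-independent):

    # Team the two 1Gig ports; switch-independent mode because we have a
    # single switch (Dynamic load balancing needs Server 2012 R2 or later)
    New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "1Gig-A","1Gig-B" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic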
So my questions are:
1. I have read that NIC teaming is not recommended for the cluster sync channels, but I assume it should be fine if only Hyper-V traffic goes over the team? (See the virtual switch sketch after this list.)
2. Are there any issues or benefits to running a 2-node vs. a 3-node Hyper-V cluster directly on top of the 3-node StarWind cluster? If a 3-node Hyper-V cluster is better for redundancy, can it be configured to keep the R510 as the last possible server for live migration? (See the preferred-owners sketch after this list.)
3. I can configure the 10Gig ports for jumbo packets of 9614; is there any benefit to doing this?
4. I configured the RAID 0 with the default stripe element size of 64KB, with read-ahead, write-back, and disk cache enabled. Are those settings ideal, or would different options cut down on overhead?
5. When I format the disk in Windows, should the allocation unit size be left at the default or changed to match the stripe size? (See the Format-Volume sketch after this list.)
6. Since the sync and heartbeat traffic will be going over the 10Gig links, is it fine to have the heartbeat-only channel share a network connection with the Hyper-V traffic? (See the cluster network sketch after this list.)
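To make question 1 concrete, this is the layout I mean: the Hyper-V virtual switch sits on the 1Gig team, while the sync channels stay on dedicated, un-teamed 10Gig links (the switch and team names are placeholders):

    # Bind the Hyper-V virtual switch to the 1Gig team only; the 10Gig
    # sync adapters stay un-teamed and carry no VM traffic
    New-VMSwitch -Name "VMSwitch" -NetAdapterName "MgmtTeam" -AllowManagementOS $true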
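For question 2, what I had in mind is something like an ordered preferred-owners list per VM group, so the R510 is only used when both R520s are unavailable (the node and VM names are made up):

    # Preferred owners are tried in order, so the slower R510 becomes
    # the failover target of last resort for this VM
    Get-ClusterGroup -Name "VM-AppServer" |
        Set-ClusterOwnerNode -Owners "R520-1","R520-2","R510-1"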
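For question 5, matching the allocation unit size to the stripe element would look like this when formatting the RAID 0 volume (drive letter assumed):

    # 64KB NTFS allocation units to line up with the 64KB stripe element
    Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 65536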
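And for question 6, what I plan to check once the cluster is up is the role assigned to each cluster network; something like this, where the network names are whatever the cluster auto-detects (Role 1 = cluster communication only, Role 3 = cluster and client):

    # Inspect the detected cluster networks and their current roles
    Get-ClusterNetwork | Format-Table Name, Role, Address

    # Sync/heartbeat links: cluster communication only
    (Get-ClusterNetwork -Name "Sync1").Role = 1

    # Teamed 1Gig network: cluster and client, shared with Hyper-V traffic
    (Get-ClusterNetwork -Name "MgmtTeam").Role = 3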
Thanks in advance.