Hi,
I am trying to figure out how to configure networking for a two-node hyper-converged Hyper-V cluster with StarWind VSAN deployed as a Windows application on the Hyper-V nodes. The best practices at https://www.starwindsoftware.com/best-p ... practices/ state that redundant links should be used for both iSCSI and synchronization/heartbeat traffic (i.e. 2x iSCSI + 2x Sync/HB).
The best practice article also says:
"StarWind Virtual SAN does not support any form of NIC teaming for resiliency or throughput aggregation".
The guide at https://www.starwindsoftware.com/resour ... rver-2016/ shows how to configure external virtual switches for iSCSI and sync/HB, but then goes on to describe configuring MPIO and the iSCSI initiator on the host itself.
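If I read the guide right, the host-level part boils down to roughly the following. I'm sketching it in Python driving PowerShell just for brevity; the target portal and initiator addresses are placeholders of my own, not values from the guide:

```python
import subprocess

def ps(command: str) -> None:
    """Run one PowerShell command on the Hyper-V host (elevated session assumed)."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# Let MPIO claim iSCSI devices and load-balance round-robin across both paths.
ps("Enable-MSDSMAutomaticClaim -BusType iSCSI")
ps("Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR")

# Register the partner node's target portal once per iSCSI subnet.
# The 172.16.x.x addresses are placeholders for my planned addressing.
ps("New-IscsiTargetPortal -TargetPortalAddress 172.16.10.2 -InitiatorPortalAddress 172.16.10.1")
ps("New-IscsiTargetPortal -TargetPortalAddress 172.16.20.2 -InitiatorPortalAddress 172.16.20.1")

# Connect the discovered targets persistently, with multipathing enabled.
ps("Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true")
```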
I feel a bit discombobulated after reading the two documents.
Q1: Why do I need virtual switches for iSCSI and sync/HB at all, if both traffic types are handled at the host level?
Q2: If I have 2x iSCSI + 2x sync NICs (4 NICs in total), would I then create two external virtual switches per traffic type, i.e. one per NIC (4 virtual switches in total, as in the sketch after Q3)? I take it I cannot use a SET team or any hardware-level teaming such as LACP; the guide makes no mention of it (or it escaped my attention).
Q3: In light of Q1, what virtual machine would I connect to these virtual switches, and what would its role be in the specific use case described in the guide?
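To make Q2 concrete, this is the layout I have in mind, scripted the same way as above. The adapter names "NIC1".."NIC4" and the switch names are placeholders of mine:

```python
import subprocess

def ps(command: str) -> None:
    """Run one PowerShell command on the Hyper-V host (elevated session assumed)."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# One external virtual switch per physical NIC, no SET or LACP anywhere.
# -AllowManagementOS $true gives the host its own vNIC on every switch,
# which is where the iSCSI/sync IP addresses would presumably live.
switches = {
    "iSCSI-1": "NIC1",
    "iSCSI-2": "NIC2",
    "Sync-1": "NIC3",
    "Sync-2": "NIC4",
}
for switch_name, nic_name in switches.items():
    ps(f"New-VMSwitch -Name '{switch_name}' -NetAdapterName '{nic_name}' "
       f"-AllowManagementOS $true -EnableEmbeddedTeaming $false")
```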
Looking forward to some guidance.
Thank you,
Zoltan