Hi,
Thanks for your replies!
Long story short (you can skip the extended answer below): you need AT LEAST 2x network cards anyway for Heartbeat, OR 3x nodes for Node Majority (those can be a 3-way mirror or 2x data + witness). If you have redundant switching, you can use 1 port for iSCSI and 1 port for Sync in each server. If there is no redundant switching, you can have iSCSI going to the switch and Sync running directly between the servers. So you will use 3x ports for StarWind VSAN in each server.
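To make the short answer concrete, here is a minimal sketch (my own illustration, not a StarWind tool) that sanity-checks a per-server port plan against these rules: iSCSI and Sync each get a dedicated port, and the two roles sit on separate physical cards. All port and NIC names are hypothetical.

```python
# Hypothetical per-server port plan: port name -> (physical NIC, role).
PLAN = {
    "10G-A-p1": ("NIC-A", "iSCSI"),
    "10G-B-p1": ("NIC-B", "Sync"),
    "1G-A-p1":  ("NIC-A", "Management"),
}

def check_plan(plan):
    """Return a list of problems with the plan (empty list = plan looks OK)."""
    problems = []
    nics_by_role = {}
    for nic, role in plan.values():
        nics_by_role.setdefault(role, set()).add(nic)
    # iSCSI and Sync must live on separate physical cards, not just ports.
    shared = nics_by_role.get("iSCSI", set()) & nics_by_role.get("Sync", set())
    if shared:
        problems.append("iSCSI and Sync share physical NIC(s): %s" % sorted(shared))
    # At least 2 separate physical network cards overall.
    if len({nic for nic, _ in plan.values()}) < 2:
        problems.append("fewer than 2 physical network cards")
    return problems

print(check_plan(PLAN) or "plan looks OK")
```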
Now the extended answer.
1. I wanted to clarify: you mentioned "iSCSI and Sync are teamed with management in a single Hyper-V switch." Did you mean Sync and management are teamed together in vSwitch_02? Please note, there are a total of 4 ports per server (2x 10G, 2x 1G).
Thanks for your update.
Yes, I was referring to adapters with different purposes being teamed together in the vSwitches.
Speaking of redundancy, I am referring to network cards, not ports: there must be 2 separate physical network cards.
2. You mentioned not having redundancy; however, there are dual 10G iSCSI interfaces in the vSwitch.
Please do not team them. Use MPIO and iSCSI connectivity over multiple links for redundancy.
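For illustration only, here is a toy round-robin path selector in Python showing the idea behind MPIO (a sketch of the concept, not the Windows MPIO stack): each iSCSI path keeps its own session, and a failed path is simply skipped while I/O continues on the others. The portal IPs are made up.

```python
from itertools import cycle

# Hypothetical iSCSI portal IPs -> path health.
paths = {"10.10.1.10": True, "10.10.2.10": True}
ring = cycle(paths)

def next_path():
    """Round-robin over healthy paths; raise if every path is down."""
    for _ in range(len(paths)):
        portal = next(ring)
        if paths[portal]:
            return portal
    raise RuntimeError("all iSCSI paths are down")

print(next_path())            # alternates between the two portals
paths["10.10.1.10"] = False   # one link fails...
print(next_path())            # ...I/O keeps flowing over the survivor
```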
and MUST be dedicated for iSCSI and Sync (1 NIC for each)?
Yes. For CVM, please use dedicated vSwitches built on top of dedicated adapters for iSCSI and Sync.
3. Is the requirement of "2 or more physical network cards" because this is a 3 node cluster?
No. This is a general requirement for the Heartbeat failover strategy. See more on failover strategies: Heartbeat (https://www.starwindsoftware.com/help/H ... ategy.html) and Node Majority (https://www.starwindsoftware.com/help/N ... ategy.html).
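If it helps, here is the node-majority rule in a couple of lines of Python (my summary of the strategy above, not StarWind code): more than half of the voters (nodes, or 2x data nodes plus a witness) must stay alive, which is why 2 voters are not enough and 3 are.

```python
def node_majority_quorum(alive_voters, total_voters):
    """Cluster keeps running only while a strict majority of voters is alive."""
    return alive_voters > total_voters // 2

# 3 voters (3-way mirror, or 2x data + witness):
print(node_majority_quorum(2, 3))  # True  -> survives losing 1 voter
print(node_majority_quorum(1, 3))  # False -> storage stops to avoid split-brain
# 2 voters only:
print(node_majority_quorum(1, 2))  # False -> losing either node stops the cluster
```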
I would also be able to mount the NAS as an iSCSI disk, assuming the NAS was on one of these two VLANs, would I not?
Please do not mix StarWind traffic with anything else. This might cause performance inconsistencies and synchronization drops. You can try using the iSCSI links for NAS connectivity, yet in all-flash systems a 10 GbE network might not be enough.
4. Assuming the number of nodes was reduced to 2 nodes, would there still be a requirement for 4x 10G ports per node?
You will still need 2x NICs, yet the number of ports used for StarWind Synchronization and iSCSI connectivity can be reduced to 1 per type of traffic.
You can also stick with 1x Sync and 1x iSCSI link per host dedicated to StarWind VSAN if redundant switches are used. Alternatively, if you are short on ports, you can have iSCSI running through the switch and Sync going directly.
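To spell out those two 2-node wirings as data (port and switch labels are hypothetical, just to show the difference):

```python
def two_node_layout(redundant_switches):
    """Pick one of the two wirings described above (hypothetical labels)."""
    if redundant_switches:
        return {"iSCSI": "10G port 1 -> switch A", "Sync": "10G port 2 -> switch B"}
    # Short on switch ports: run Sync over a direct cable between the nodes.
    return {"iSCSI": "10G port 1 -> switch", "Sync": "10G port 2 -> direct to partner"}

print(two_node_layout(redundant_switches=False)["Sync"])
```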
5. Lastly, you mentioned that if StarWind vSAN was run as a Windows-native service (not a VM), I would no longer need to dedicate 2x 10G ports per server for iSCSI and Sync.
It does not depend on the solution you choose; it is rather a matter of the overall system design. I would suggest booking a call with one of our techs to help you get a better understanding of the system design, or checking our best practices: https://www.starwindsoftware.com/best-p ... practices/.
effectively intercepts storage exports given by each host, in order to synchronize the volumes on each (therefore creating a "SAN")? Or am I way off?
That's correct.
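In case a picture in code helps, here is a toy model of that synchronization (a sketch of the concept, not StarWind internals): a write is acknowledged only once every node's copy of the volume has applied it, which is what keeps the mirrored volumes identical.

```python
# Three replicas of the same highly available volume, one per node.
volumes = {"node1": bytearray(16), "node2": bytearray(16), "node3": bytearray(16)}

def replicated_write(offset, data):
    """Apply the write to every replica before it would be acknowledged."""
    for vol in volumes.values():
        vol[offset:offset + len(data)] = data

replicated_write(0, b"VMDATA")
assert volumes["node1"] == volumes["node3"]  # all copies stay in sync
```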