How would I configure redundant iSCSI and synchronization links in a hyper-converged 2-node Hyper-V cluster?

ezoltan
Posts: 2
Joined: Sat Dec 21, 2024 9:41 pm

Sat Dec 21, 2024 10:15 pm

Hi,

I am trying to figure out how to configure networking for a two-node hyper-converged Hyper-V cluster with a vSAN deployed as a Windows application installed on the Hyper-V nodes. The best practices at https://www.starwindsoftware.com/best-p ... practices/ state that redundant links should be used for iSCSI as well as synchronization/heartbeat traffic (i.e. 2x iSCSI + 2x Sync/HB).

The best practice article also says:
"StarWind Virtual SAN does not support any form of NIC teaming for resiliency or throughput aggregation".

The guide at https://www.starwindsoftware.com/resour ... rver-2016/ shows how to configure external virtual switches for iSCSI and sync/HB, only to go on and describe the configuration of MPIO and the iSCSI initiator on the host itself.

I feel a bit discombobulated after reading the two documents.

Q1: Why do I need virtual switches for iSCSI and sync/HB, if both traffic types are going to be handled at host level?

Q2: If I have 2x iSCSI + 2x sync NICs (4 NICs in total), would I then create two external virtual switches for each traffic type (a total of 4 virtual switches)? I take it I cannot use a SET team or any hardware-level teaming like LACP. The guide makes no mention of it (or it escaped my attention).

Q3: In the light of Q1, what virtual machine would I connect to the virtual switches and what would be its role in the specific use case described in the guide?

Looking forward to some guidance.

Thank you,
Zoltan
yaroslav (staff)
Staff
Posts: 3424
Joined: Mon Nov 18, 2019 11:11 am

Sun Dec 22, 2024 8:20 pm

Welcome to the StarWind forum!
You can go with 1x iSCSI and 1x Sync (physically separated) links; ideally, 2x iSCSI and 2x Sync, preferably as direct (back-to-back) connections.
Q1: Why do I need virtual switches for iSCSI and sync/HB, if both traffic types are going to be handled at host level?
If you are deploying the CVM (controller virtual machine), you need vSwitches to connect the physical adapters to that VM.
If you are using the Windows-native application, there is no need for vSwitches for iSCSI and Sync.
Q2: If I have 2x iSCSI + 2x sync NICs (4 NICs in total), would I then create two external virtual switches for each traffic type (a total of 4 virtual switches)? I take it I cannot use a SET team or any hardware-level teaming like LACP. The guide makes no mention of it (or it escaped my attention).
Best practices say no teaming: please do not use any form of teaming for the iSCSI and Sync links, which is why the guide does not describe it.
Use 4x separate vSwitches.
Q3: In the light of Q1, what virtual machine would I connect to the virtual switches and what would be its role in the specific use case described in the guide?
With the Windows-native application there is no VM to connect; the physical hosts themselves use those links. If you do go through physical switches, they need to be redundant, but normally direct links for iSCSI and Sync should do it.
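To make that layout concrete, here is a rough sketch of an addressing plan for the 4 links. It is only an illustration: the subnets are made-up examples, and one-subnet-per-direct-link is an assumption on my side; the points grounded in the replies above are simply "no teaming", "separate links", and "do not mix iSCSI and Sync traffic".

Code:

# Hypothetical addressing plan for a 2-node setup with 2x iSCSI + 2x Sync
# direct links. All subnets below are made-up examples.
import ipaddress

links = {
    #  role       (node 1 address,    node 2 address)
    "iscsi-1": ("172.16.10.1/24", "172.16.10.2/24"),
    "iscsi-2": ("172.16.20.1/24", "172.16.20.2/24"),
    "sync-1":  ("172.16.30.1/24", "172.16.30.2/24"),
    "sync-2":  ("172.16.40.1/24", "172.16.40.2/24"),
}

# Sanity checks: each direct link lives in its own subnet, and both ends
# of a link share that subnet, so iSCSI and Sync traffic stay separated.
subnets = [ipaddress.ip_interface(a).network for a, _ in links.values()]
assert len(set(subnets)) == len(subnets), "every link needs its own subnet"

for role, (n1, n2) in links.items():
    assert ipaddress.ip_interface(n1).network == ipaddress.ip_interface(n2).network
    print(f"{role}: {n1} <-> {n2}")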
ezoltan
Posts: 2
Joined: Sat Dec 21, 2024 9:41 pm

Tue Dec 31, 2024 10:47 am

Thank you, Yaroslav. It makes sense, and I thought that might be the case; however, since the guide doesn't draw a clear line between running vSAN on a physical host vs. as a virtual appliance, I got confused.

One more question: provided I'll be using 2x iSCSI + 2x Sync/HB physical links between the two hypervisors, does that mean I'll have 3x iSCSI targets, that is, 127.0.0.1, Remote_Target_IP#1 and Remote_Target_IP#2, configured with a "Failover Only" MPIO policy where the active path is 127.0.0.1? Technically it makes sense; however, I have never worked with an odd number of iSCSI targets.

Can you please confirm?

Thank you.
yaroslav (staff)
Staff
Posts: 3424
Joined: Mon Nov 18, 2019 11:11 am

Tue Dec 31, 2024 11:32 am

You just need the right guide (I hope nobody ruined the documentation again :D ). Thanks for your feedback anyway.
For MPIO, use Least Queue Depth, and (for SSDs) at least 3x loopback and 1x per partner connections. Make sure to change IscsiDiscoveryInterfaces to 1 (stop StarWindService on one node -> go to C:\Program Files\StarWind Software\StarWind\StarWind.cfg -> copy the file as a backup -> edit the file -> save -> start the service -> wait for fast sync to complete), then repeat on the other node.
That parameter will make configuring multiple iSCSI interfaces easier.
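Roughly, the whole back-up / edit / restart sequence could be scripted like this. Treat it as a sketch only: the element name IscsiDiscoveryInterfaces and the value="0" / value="1" formatting are assumptions taken from the step above, so check the actual StarWind.cfg on your nodes before changing anything.

Code:

# Minimal sketch of the sequence above: stop the service, back up
# StarWind.cfg, change the discovery-interfaces parameter, restart.
# Run on one node, wait for fast sync, then repeat on the partner node.
import shutil
import subprocess
from pathlib import Path

CFG = Path(r"C:\Program Files\StarWind Software\StarWind\StarWind.cfg")

# 1. Stop the StarWind service on this node.
subprocess.run(["net", "stop", "StarWindService"], check=True)

# 2. Keep a backup copy of the original configuration file.
shutil.copy2(CFG, CFG.parent / (CFG.name + ".bak"))

# 3. Flip the parameter from 0 to 1. NOTE: verify the exact element name
#    and formatting in your own StarWind.cfg -- this string is an assumption.
text = CFG.read_text(encoding="utf-8")
CFG.write_text(
    text.replace('IscsiDiscoveryInterfaces value="0"',
                 'IscsiDiscoveryInterfaces value="1"'),
    encoding="utf-8",
)

# 4. Start the service again and wait for fast synchronization to complete
#    before touching the other node.
subprocess.run(["net", "start", "StarWindService"], check=True)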

P.S. Do not mix iSCSI and Sync traffic.
Charles984
Posts: 1
Joined: Tue Jan 14, 2025 4:11 am
Location: United States
Contact:

Tue Jan 14, 2025 4:18 am

ezoltan wrote:
Sat Dec 21, 2024 10:15 pm
I am trying to figure out how to configure networking for a two-node hyper-converged Hyper-V cluster with a vSAN deployed as a Windows application installed on the Hyper-V nodes. [...]
You may need to focus on properly configuring the virtual switches and on checking the documentation more closely to make sure the iSCSI and Sync/HB data flows work independently, as this is an important factor in the cluster setup.