Validation of network-based iSCSI scalable design (Hyper-V)

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

Electrum
Posts: 5
Joined: Tue Oct 08, 2024 2:22 pm

Tue Oct 08, 2024 2:28 pm

Hello,

I want to confirm the validity of my design for StarWind vSAN. The hosts illustrated in this topology have all-flash storage, which is what will be "shared" across the cluster. The Synology NAS will be a separate iSCSI provider to each host.

In the image, the documented interfaces would be VMNetworkAdapters created with -ManagementOS and tagged with -Access VLANs. Will the proposed design work, and is it recommended?
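
For clarity, here is roughly what I mean in PowerShell (switch name, adapter names, and VLAN IDs are placeholders, not the exact values from the diagram):

Code:
# Converged vSwitch over the 2x 10G ports (SET team) - placeholder NIC names
New-VMSwitch -Name "vSwitch_01" -NetAdapterName "10G_Port1","10G_Port2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Host-side virtual adapters created with -ManagementOS
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitch_01" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "vSwitch_01" -Name "iSCSI"

# Access VLAN tagging per adapter (placeholder VLAN IDs)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI" -Access -VlanId 20
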
Attachment: vSAN.drawio-min.png (132.55 KiB, proposed network topology diagram)
yaroslav (staff)
Staff
Posts: 3599
Joined: Mon Nov 18, 2019 11:11 am

Tue Oct 08, 2024 4:34 pm

Thanks for sharing your diagram.
You can check your setup against some of the concerns below.
1. Traffic mixing.
iSCSI and Sync are teamed with management in a single Hyper-V switch.
2. No redundant communication.
Make sure that NICs are redundant in each server. In other words, you need 2 or more physical network cards.
If direct connectivity is not possible, make sure that the switches used for communication are redundant in each location.

P.S. There is no need to use Hyper-V switches for iSCSI and Synchronization if you use the Windows-native StarWind service.
Also, consider trialing and building a POC system with one of our techs: https://www.starwindsoftware.com/v17-request-live-demo.
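
To illustrate the P.S.: with the Windows-native StarWind service, the dedicated storage ports can simply keep static IP addresses, with no Hyper-V switch on top (adapter names and subnets are placeholders):

Code:
# Dedicated physical ports for StarWind traffic - no vSwitch required
New-NetIPAddress -InterfaceAlias "iSCSI_10G" -IPAddress 10.1.1.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "Sync_10G" -IPAddress 10.2.2.1 -PrefixLength 24

# Optional: keep the storage adapters out of DNS registration
Set-DnsClient -InterfaceAlias "iSCSI_10G","Sync_10G" -RegisterThisConnectionsAddress $false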
Electrum
Posts: 5
Joined: Tue Oct 08, 2024 2:22 pm

Tue Oct 08, 2024 8:10 pm

Hello,

Thank you for your reply. I have a few follow-up questions:

1. I wanted to clarify, you mentioned "iSCSI and Sync are teamed with management in a single Hyper-V switch.", did you mean sync and management are teamed together in vSwitch_02? Please note, there are a total of 4 ports per server (2x 10G, 2x 1G).

2. You mentioned not having redundancy; however, there are dual iSCSI 10G interfaces in the vSwitch. I assume you meant that in a CVM deployment the NICs cannot be added to the vSwitch and MUST be dedicated for iSCSI and Sync (1 NIC for each)?

3. Is the requirement of "2 or more physical network cards" because this is a 3 node cluster?

For example:

-- iSCSI --
Host 1 <-> Host 2: 172.16.10.0/30
Host 2 <-> Host 3: 172.16.11.0/30
Host 3 <-> Host 1: 172.16.12.0/30

-- Sync --
Host 1 <-> Host 2: 172.16.20.0/30
Host 2 <-> Host 3: 172.16.21.0/30
Host 3 <-> Host 1: 172.16.22.0/30


If I am correct, this is only a requirement because each device is directly connected to every other device. If a switch is used instead (with 2 VLANs and /24 subnets), each host would have communication with every other host, requiring only 2x 10G connections per host. I would also be able to mount the NAS as an iSCSI disk, assuming the NAS was on one of these two VLANs, would I not?
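
To make that concrete, here is how I would assign Host 1's directly connected links from the /30 plan above (interface aliases are placeholders; I am assuming Host 1 takes .1 on its links to Host 2 and .2 on the links from Host 3):

Code:
# Host 1 - iSCSI point-to-point links
New-NetIPAddress -InterfaceAlias "iSCSI_to_Host2" -IPAddress 172.16.10.1 -PrefixLength 30
New-NetIPAddress -InterfaceAlias "iSCSI_to_Host3" -IPAddress 172.16.12.2 -PrefixLength 30

# Host 1 - Sync point-to-point links
New-NetIPAddress -InterfaceAlias "Sync_to_Host2" -IPAddress 172.16.20.1 -PrefixLength 30
New-NetIPAddress -InterfaceAlias "Sync_to_Host3" -IPAddress 172.16.22.2 -PrefixLength 30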


4. Assuming the number of nodes was reduced to 2 nodes, would there still be a requirement for 4x 10G ports per node?

5. Lastly, you mentioned that if StarWind vSAN is run as a Windows-native service (not a VM), I would no longer need to dedicate 2x 10G ports per server for iSCSI and Sync. Looking at the documentation, it seems this approach effectively intercepts the storage exports presented by each host in order to synchronize the volumes on each (therefore creating a "SAN")? Or am I way off?


This is for my home network; we might potentially look into this for work, assuming it works out. :)
yaroslav (staff)
Staff
Posts: 3599
Joined: Mon Nov 18, 2019 11:11 am

Tue Oct 08, 2024 9:02 pm

Hi,

Thanks for your replies!
Long story short (you can skip all the text below :D): you need AT LEAST 2x network cards anyway for the Heartbeat failover strategy, OR 3x nodes for Node Majority (those can be a 3-way mirror or 2x data + witness). If you have redundant switching, you can use 1 port for iSCSI and 1 port for Sync in each server. If there is no redundant switching, you can have iSCSI going to the switch and Sync running directly between the servers, so you will use 3x ports for StarWind VSAN in each server.

Now the extended answer.
1. I wanted to clarify, you mentioned "iSCSI and Sync are teamed with management in a single Hyper-V switch.", did you mean sync and management are teamed together in vSwitch_02? Please note, there are a total of 4 ports per server (2x 10G, 2x 1G).
Thanks for your update.
Yes, I was referring to adapters with different purposes being teamed in the same vSwitch.
Speaking of redundancy, I am referring to network cards, not ports. There must be 2 separate physical network cards.
2. You mentioned not having redundancy; however, there are dual iSCSI 10G interfaces in the vSwitch.
Please do not team them. Use MPIO and iSCSI connectivity over multiple links for redundancy (see the short sketch after this answer).
and MUST be dedicated for iSCSI and Sync (1 NIC for each)?
Yes. For CVM, please use dedicated vSwitches built on top of dedicated adapters for iSCSI and Sync.
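
Here is a rough sketch of what I mean by MPIO over multiple iSCSI links instead of teaming (Windows Server cmdlets; the portal IPs and target IQN are placeholders):

Code:
# Install and configure MPIO for iSCSI devices (a reboot may be required)
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD   # Least Queue Depth

# Register the target portal once per local iSCSI link (placeholder addresses)
New-IscsiTargetPortal -TargetPortalAddress 10.10.1.2 -InitiatorPortalAddress 10.10.1.1
New-IscsiTargetPortal -TargetPortalAddress 10.10.2.2 -InitiatorPortalAddress 10.10.2.1

# Connect one session per link with multipathing enabled (placeholder IQN)
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:target1" -IsPersistent $true -IsMultipathEnabled $true -TargetPortalAddress 10.10.1.2 -InitiatorPortalAddress 10.10.1.1
Connect-IscsiTarget -NodeAddress "iqn.2008-08.com.starwindsoftware:target1" -IsPersistent $true -IsMultipathEnabled $true -TargetPortalAddress 10.10.2.2 -InitiatorPortalAddress 10.10.2.1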
3. Is the requirement of "2 or more physical network cards" because this is a 3 node cluster?
No. This is a general requirement for the Heartbeat failover strategy. See more on failover strategies: Heartbeat (https://www.starwindsoftware.com/help/H ... ategy.html), Node Majority (https://www.starwindsoftware.com/help/N ... ategy.html).
I would also be able to mount the NAS as an iSCSI disk, assuming the NAS was on one of these two VLANs, would I not?
Please do not mix StarWind traffic with anything else. This might cause performance inconsistencies and synchronization drops. You can try using the iSCSI links for NAS connectivity, yet in all-flash systems a 10 GbE network might not be enough.
4. Assuming the number of nodes was reduced to 2 nodes, would there still be a requirement for 4x 10G ports per node?
You will still need 2x NICs, yet the number of ports used for StarWind Synchronization and iSCSI connectivity can be reduced to 1 per type of traffic.
You can also stick with 1x Sync and 1x iSCSI link per host dedicated to StarWind VSAN if redundant switches are used. Alternatively, if you are short on ports, you can have iSCSI running through the switch and Sync going directly.
5. Lastly, you mentioned that if StarWind vSAN is run as a Windows-native service (not a VM), I would no longer need to dedicate 2x 10G ports per server for iSCSI and Sync
It does not depend on the deployment option you choose; it is rather a matter of the general system design. I would suggest booking a call with one of our techs to get a better understanding of the system design, or checking our best practices: https://www.starwindsoftware.com/best-p ... practices/.
effectively intercepts the storage exports presented by each host in order to synchronize the volumes on each (therefore creating a "SAN")? Or am I way off?
That's correct.