
Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Wed Feb 26, 2025 2:31 pm
by AGWin
I'm in the process of evaluating vSAN (the free version, for now) for a new Hyper-V 2022 two-node cluster. I've been reading the quick-start guides as well as these forums and can't quite work out the best way to accomplish my goals.

I have 2 identical servers, each with two dual-port ConnectX-6 cards (4x 100 Gb ports total), two dual-port E810 cards (4x 10 Gb ports total), and one quad-port i350 card (4x 1 Gb ports total) for networking. The goal is to provide maximum fault tolerance and uptime for the critical VMs hosted in the cluster.

My original design was to team two of the 100 Gb ports for a vSAN synchronization channel, team the other two 100 Gb ports for the iSCSI channel, team two of the 1 Gb ports for a vSAN heartbeat channel, and team the other two 1 Gb ports for a management channel. That would leave the four 10 Gb ports for VM network access, and I was going to use the iSCSI channel for failover cluster communication/live migration.

However, after reading everything, it seems you can't have separate iSCSI and heartbeat channels, and NIC teaming should be avoided.

Given my hardware, what would be the recommended setup to achieve these goals?

Re: Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Wed Feb 26, 2025 2:41 pm
by yaroslav (staff)
The StarWind VSAN trial will work best for evaluating, IMO: https://www.starwindsoftware.com/free-s ... vsan-trial. You can also schedule a remote session with one of our techs.
As for your setup: please, no teaming. Teaming is not supported, and neither is mixing traffic types. Make sure each NIC carries one iSCSI (data) and one Sync port as a pair; that way you don't lose an entire traffic type if a NIC malfunctions.
"after reading everything it doesn't seem that you can have separate iSCSI and HB channels"
The heartbeat, in turn, does not induce any load; it serves just for pinging.
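
A minimal sketch of that pairing, assuming each ConnectX-6 card contributes one port to iSCSI and one port to Sync; the interface aliases and subnets below are illustrative assumptions, not values from this thread:

```powershell
# Node 1: spread each traffic type across both ConnectX-6 cards,
# so losing one card leaves one iSCSI and one Sync link alive.
# Interface aliases and subnets here are assumptions; adjust to your environment.

# Card A, port 1 -> iSCSI;  Card B, port 1 -> iSCSI (second subnet)
New-NetIPAddress -InterfaceAlias "CX6-A-P1" -IPAddress 172.30.10.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "CX6-B-P1" -IPAddress 172.30.11.1 -PrefixLength 24

# Card A, port 2 -> Sync;   Card B, port 2 -> Sync (second subnet)
New-NetIPAddress -InterfaceAlias "CX6-A-P2" -IPAddress 172.30.20.1 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "CX6-B-P2" -IPAddress 172.30.21.1 -PrefixLength 24
```

The design point is that no single card owns all of one traffic type, so a card failure degrades bandwidth rather than severing iSCSI or Sync entirely.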

Good luck with your project!

Re: Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Wed Feb 26, 2025 3:39 pm
by AGWin
Appreciate the help! I don't want just a single wire for the Sync and iSCSI channels. Can I have multiple wires with individual IPs? So my sync and heartbeat targets would look something like:

#primary node
$syncInterface="#p2=172.30.20.3:3260,172.30.20.4:3260" -f $addr2,
$hbInterface="172.30.30.3,172.30.30.4",

#secondary node
$syncInterface2="#p1=172.30.20.1:3260,172.30.20.2:3260" -f $addr,
$hbInterface2="172.30.30.1,172.30.30.2",

Re: Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Wed Feb 26, 2025 4:12 pm
by yaroslav (staff)
Multiple wires with individual IPs should be good.
Your lines are missing ports and still contain the -f $addrX format operator.
See this script: https://forums.starwindsoftware.com/vie ... ilit=HINT8
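
For illustration only, a sketch of how those parameter lines might look with the leftover -f operators dropped and ports added, following the #pN=IP:port list format visible elsewhere in this thread; treat the exact syntax as an assumption and verify it against the linked HINT8 script:

```powershell
# primary node: partner (#p2) sync and heartbeat links, with ports, no -f
$syncInterface="#p2=172.30.20.3:3260,172.30.20.4:3260"
$hbInterface="#p2=172.30.30.3:3260,172.30.30.4:3260"

# secondary node: partner (#p1) sync and heartbeat links, with ports, no -f
$syncInterface2="#p1=172.30.20.1:3260,172.30.20.2:3260"
$hbInterface2="#p1=172.30.30.1:3260,172.30.30.2:3260"
```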

Re: Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Wed Feb 26, 2025 4:45 pm
by AGWin
Thanks again! I actually reached out for a quote for the paid version yesterday and am waiting to hear back. I don't want to start the full trial until I have pricing: if vSAN fits my needs but we can't afford the paid version, I still want to move forward with the free one.

I'm now running into a "200 Failed: invalid partner info." error when I run the script. I think I'm doing this right (i.e., I just installed vSAN on the partner server with no image files configured), but please let me know if I'm off base here. The script I'm using is below.

Code:

param($addr="10.69.0.32", $port=3261, $user="root", $password="starwind",
	$addr2="10.69.0.31", $port2=$port, $user2=$user, $password2=$password,
#common
	$initMethod="Clear",
	$size=5960000,
	$sectorSize=4096,
	$failover=0,
#	$bmpType=1,
#	$bmpStrategy=0,
#primary node
	$imagePath="My computer\J\SW\Storages",
	$imageName="masterImg_SSD_1",
	$createImage=$true,
	$storageName="",
	$targetAlias="targetha_SSD_1",
	$autoSynch=$true,
    $poolName="SSD",
	$syncSessionCount=1,
	$aluaOptimized=$true,
	$cacheMode="none",
	$cacheSize=0,
	$syncInterface="#p2=172.30.20.3:3260,172.30.20.4:3260" -f $addr2,
	$hbInterface="#p2=172.30.30.3:3260,172.30.30.4:3260" -f $addr2,
	$createTarget=$true,
#	$bmpFolderPath="",
#secondary node
	$imagePath2="My computer\J\SW\Storages",
	$imageName2="partnerImg_SSD_1",
	$createImage2=$true,
	$storageName2="",
	$targetAlias2="partnerha_SSD_1",
    $autoSynch2=$true,
	$poolName2="SSD",
	$syncSessionCount2=1,
	$aluaOptimized2=$false,
	$cacheMode2=$cacheMode,
	$cacheSize2=$cacheSize,
	$syncInterface2="#p1=172.30.20.1:3260,172.30.20.2:3260" -f $addr,
	$hbInterface2="#p1=172.30.30.1:3260,172.30.30.2:3260" -f $addr,
	$createTarget2=$true
#	$bmpFolderPath2=""
	)
	
Import-Module StarWindX

try
{
	Enable-SWXLog

	$server = New-SWServer -host $addr -port $port -user $user -password $password

	$server.Connect()

	$firstNode = new-Object Node

	$firstNode.HostName = $addr
	$firstNode.HostPort = $port
	$firstNode.Login = $user
	$firstNode.Password = $password
	$firstNode.ImagePath = $imagePath
	$firstNode.ImageName = $imageName
	$firstNode.Size = $size
	$firstNode.CreateImage = $createImage
	$firstNode.StorageName = $storageName
	$firstNode.TargetAlias = $targetAlias
        $firstNode.AutoSynch = $autoSynch
	$firstNode.SyncInterface = $syncInterface
	$firstNode.HBInterface = $hbInterface
	$firstNode.PoolName = $poolName
	$firstNode.SyncSessionCount = $syncSessionCount
	$firstNode.ALUAOptimized = $aluaOptimized
	$firstNode.CacheMode = $cacheMode
	$firstNode.CacheSize = $cacheSize
	$firstNode.FailoverStrategy = $failover
	$firstNode.CreateTarget = $createTarget
#	$firstNode.BitmapStoreType = $bmpType
#	$firstNode.BitmapStrategy = $bmpStrategy
#	$firstNode.BitmapFolderPath = $bmpFolderPath
    
	#
	# device sector size. Possible values: 512 or 4096(May be incompatible with some clients!) bytes. 
	#
	$firstNode.SectorSize = $sectorSize
    
	$secondNode = new-Object Node

	$secondNode.HostName = $addr2
	$secondNode.HostPort = $port2
	$secondNode.Login = $user2
	$secondNode.Password = $password2
	$secondNode.ImagePath = $imagePath2
	$secondNode.ImageName = $imageName2
	$secondNode.CreateImage = $createImage2
	$secondNode.StorageName = $storageName2
	$secondNode.TargetAlias = $targetAlias2
        $secondNode.AutoSynch = $autoSynch2
	$secondNode.SyncInterface = $syncInterface2
	$secondNode.HBInterface = $hbInterface2
	$secondNode.SyncSessionCount = $syncSessionCount2
	$secondNode.ALUAOptimized = $aluaOptimized2
	$secondNode.CacheMode = $cacheMode2
	$secondNode.CacheSize = $cacheSize2
	$secondNode.FailoverStrategy = $failover
	$secondNode.CreateTarget = $createTarget2
#	$secondNode.BitmapFolderPath = $bmpFolderPath2
        
	$device = Add-HADevice -server $server -firstNode $firstNode -secondNode $secondNode -initMethod $initMethod
    
	while ($device.SyncStatus -ne [SwHaSyncStatus]::SW_HA_SYNC_STATUS_SYNC)
	{
		$syncPercent = $device.GetPropertyValue("ha_synch_percent")
	        Write-Host "Synchronizing: $($syncPercent)%" -foreground yellow

		Start-Sleep -m 2000

		$device.Refresh()
	}
}
catch
{
	Write-Host $_ -foreground red 
}
finally
{
	$server.Disconnect()
}

Re: Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Wed Feb 26, 2025 5:09 pm
by yaroslav (staff)
Got it! The script is not correct.
The pool name and IPs are specified incorrectly. Please check this out:
https://forums.starwindsoftware.com/vie ... ilit=HINT8

You are welcome to take my script and swap in your IPs and mount points.

Re: Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Wed Feb 26, 2025 5:24 pm
by AGWin
Absolutely beautiful, thank you! Plugged in my IPs and names and everything worked.

Re: Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Wed Feb 26, 2025 6:16 pm
by yaroslav (staff)
Great news :)
Good luck with your project.

Re: Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Thu Feb 27, 2025 2:03 pm
by AGWin
One more question, and then I think I'm good to release the cluster to testing: since I'm using redundant iSCSI links on each node, do I just add the additional IPs to the target portals on the Discovery tab of the iSCSI Initiator Properties? So on Node 1 I would have:

Initiator IP: Default
Target Portal IP: 127.0.0.1 / 3260

Initiator IP: 172.30.30.1
Target Portal IP: 172.30.30.3

Initiator IP: 172.30.30.2
Target Portal IP: 172.30.30.4

And the corresponding entries on Node 2.

Is my thought process correct here? Would this achieve the redundancy I'm looking for, i.e. if I lose one network card (172.30.30.1) the other network card (172.30.30.2) would continue to support connections to the iSCSI targets?

Re: Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Thu Feb 27, 2025 3:36 pm
by yaroslav (staff)
Looks good, but for redundancy those should be on different subnets.
Also, make sure to adjust this parameter in the StarWind.cfg file on each node to allow for multiple iSCSI connections: <iScsiDisconnectRetryPeriodInSec value="1"/>
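
For reference, a sketch of applying that change on each node. The install path and service name below are assumptions (verify with Get-Service), and StarWind.cfg should only be edited while the service is stopped:

```powershell
# Assumed default install path; adjust to your environment.
$cfg = "C:\Program Files\StarWind Software\StarWind\StarWind.cfg"

Stop-Service -Name "StarWindService"   # service name is an assumption

# Set iScsiDisconnectRetryPeriodInSec to 1, whatever its current value.
(Get-Content $cfg) -replace '<iScsiDisconnectRetryPeriodInSec value="\d+"/>',
    '<iScsiDisconnectRetryPeriodInSec value="1"/>' | Set-Content $cfg

Start-Service -Name "StarWindService"
```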

Re: Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Thu Feb 27, 2025 3:50 pm
by AGWin
Got it, so I should have something like this:

Node 1:

Initiator IP: Default
Target Portal IP: 127.0.0.1 / 3260

Initiator IP: 172.30.30.1
Target Portal IP: 172.30.30.2

Initiator IP: 172.30.31.1
Target Portal IP: 172.30.31.2

Node 2:

Initiator IP: Default
Target Portal IP: 127.0.0.1 / 3260

Initiator IP: 172.30.30.2
Target Portal IP: 172.30.30.1

Initiator IP: 172.30.31.2
Target Portal IP: 172.30.31.1
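
That portal layout can also be scripted with Windows' built-in iSCSI cmdlets instead of the Initiator Properties GUI. A sketch for Node 1, assuming MPIO is already enabled; the target IQN is a placeholder (look up the real one with Get-IscsiTarget):

```powershell
# Node 1: register the loopback portal and both partner portals,
# binding each discovery to its own initiator IP/subnet.
New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1
New-IscsiTargetPortal -TargetPortalAddress 172.30.30.2 -InitiatorPortalAddress 172.30.30.1
New-IscsiTargetPortal -TargetPortalAddress 172.30.31.2 -InitiatorPortalAddress 172.30.31.1

# Placeholder IQN; substitute the value reported by Get-IscsiTarget.
$iqn = "iqn.2008-08.com.starwindsoftware:node2-target"

# One persistent MPIO session per subnet, so losing one NIC/subnet
# leaves the other path serving the target.
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 172.30.30.2 `
    -InitiatorPortalAddress 172.30.30.1 -IsPersistent $true -IsMultipathEnabled $true
Connect-IscsiTarget -NodeAddress $iqn -TargetPortalAddress 172.30.31.2 `
    -InitiatorPortalAddress 172.30.31.1 -IsPersistent $true -IsMultipathEnabled $true
```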

Re: Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Thu Feb 27, 2025 5:32 pm
by yaroslav (staff)
Looks better, thanks!

Re: Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Thu Feb 27, 2025 5:50 pm
by AGWin
Thank you @yaroslav! I have to say I am totally amazed at the time you have taken to help me with all this. I'm certainly becoming a fan of StarWind!

Re: Network Config Guidance for a 2-Node Hyper-V Cluster

Posted: Thu Feb 27, 2025 6:57 pm
by yaroslav (staff)
Thanks for your kind words :)
Always happy to help.
P.S. Don't miss the chance to trial the solution; our techs may share some useful hints for product deployment during proof-of-concept tests.