Hello,
Looking for some input on how to arrange 6 x 1GbE NICs on 2 Dell PowerEdge servers, for a 2-node native SAN, Windows Server 2012 Hyper-V failover cluster. I brought this up with StarWind support, but decided to bring the discussion here so it may be helpful for others as well.
The 2 PowerEdge servers each have an onboard 4 x 1GbE NIC, as well as an Intel 2 x 1GbE PCI-E NIC.
I'm sure there are a few different ways this could be set up...
I believe Microsoft best practices say:
Virtual switches / VM traffic should have 1 or more dedicated NICs - no IPs assigned to these NICs
Host management traffic should have 1 or more dedicated NICs
Cluster live migration traffic should have 1 or more dedicated NICs on a separate subnet
Cluster communication traffic (mainly heartbeat traffic) can be shared with CSV traffic on 1 or more dedicated NICs on a separate subnet
I do not have that many NICs available. Out of the 6 NICs per host, I'm planning on using 3 for StarWind storage.
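To make the constraint explicit (a throwaway sketch, nothing Hyper-V-specific): Microsoft's list above names four traffic roles, and with 3 of the 6 NICs reserved for storage only 3 remain, so at least one NIC or team has to carry more than one role.

```python
# Four traffic roles from the Microsoft guidance above, but only three
# NICs per host are left once storage takes its share.
roles = ["VM traffic", "Host management", "Live migration", "Cluster/CSV"]
nics_per_host = 6
storage_nics = 3            # 2x StarWind sync + 1x iSCSI
remaining = nics_per_host - storage_nics

# Pigeonhole: four roles onto three NICs means at least one NIC (or team)
# must carry two roles.
assert len(roles) > remaining
print(f"{remaining} NICs left for {len(roles)} roles -> some sharing is required")
```

Which is why the plan that follows doubles up management with VM traffic on the team, and cluster traffic with live migration on the crossover.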
So, for the 3 NICs that aren't used for storage, would something like this be suitable to get started with:
For Management / VM traffic: 192.168.250.x/24
Node1 Onboard NIC Port 1 <- NIC team <-switch-> NIC team -> Node2 Onboard NIC Port 1
Node1 Onboard NIC Port 2 <- NIC team <-switch-> NIC team -> Node2 Onboard NIC Port 2
-Cluster Network Properties: Allow cluster network communication
-Cluster Network Properties: Allow clients to connect through this network
-Let this NIC team be used for live migration, with a lower preference
Cluster use, live migration:
Node1 Onboard NIC Port 3 <- Cluster traffic Crossover Cable 192.168.249.12/30 -> Node2 Onboard NIC Port 3
-Cluster Network Properties: Allow cluster network communication
-Cluster Network Properties: Don't allow clients to connect through this network
-Let this network be used for live migration, with a higher preference
-StarWind heartbeat also here?
Storage use:
Node1 Onboard NIC Port 4 <- Sync channel 1 Crossover Cable 192.168.249.0/30 -> Node2 Onboard NIC Port 4
Node1 PCI-E Dual Port 1 <- iSCSI traffic Crossover Cable 192.168.249.8/30 -> Node2 PCI-E Dual Port 1
Node1 PCI-E Dual Port 2 <- Sync channel 2 Crossover Cable 192.168.249.4/30 -> Node2 PCI-E Dual Port 2
-no cluster traffic or live migration traffic allowed on these networks
-StarWind sync and heartbeat on both sync channel networks
-StarWind heartbeat also on the iSCSI network
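Since all four point-to-point links are carved out of 192.168.249.0/24 as /30s, alongside the 192.168.250.x/24 management network, it's easy to double-check that none of the subnets overlap and that each /30 leaves exactly two usable host addresses, one per node. A quick sanity check using Python's `ipaddress` module (the link names are just labels taken from the plan above):

```python
import ipaddress
from itertools import combinations

# Subnets from the plan above.
links = {
    "Sync channel 1": "192.168.249.0/30",
    "Sync channel 2": "192.168.249.4/30",
    "iSCSI": "192.168.249.8/30",
    "Cluster / live migration": "192.168.249.12/30",
    "Management / VM team": "192.168.250.0/24",
}
nets = {name: ipaddress.ip_network(cidr) for name, cidr in links.items()}

# No two networks may overlap, or Windows can pick the wrong interface
# when routing between the nodes.
for a, b in combinations(nets, 2):
    assert not nets[a].overlaps(nets[b]), f"{a} overlaps {b}"

# Each /30 provides exactly two usable host addresses -- one per node.
for name, net in nets.items():
    if net.prefixlen == 30:
        node1, node2 = net.hosts()
        print(f"{name}: Node1 {node1} <-> Node2 {node2}")
```

For 192.168.249.12/30, for example, that gives 192.168.249.13 and 192.168.249.14 for the two nodes.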
Does this setup sound reasonable?
I do not have much experience with cluster networks, other than some simple lab setups. Hoping for some of the experienced members to comment on this setup and/or recommend any changes.