2-node Native SAN with 6 x 1 GbE NICs

Software-based VM-centric and flash-friendly VM storage + free version


CCI
Posts: 10
Joined: Wed Dec 19, 2012 5:28 am

Sun Jan 13, 2013 9:56 pm

Hello,

Looking for some input on how to arrange 6 x 1 GbE NICs on two Dell PowerEdge servers for a 2-node Native SAN, Windows Server 2012 Hyper-V failover cluster. I brought this up with StarWind support, but decided to bring the discussion here so it may be helpful for others as well.

The two PowerEdge servers each have four onboard 1 GbE ports, as well as a dual-port Intel 1 GbE PCI-E NIC.

I'm sure there are a few different ways this could be set up...

I believe Microsoft best practices say:
Virtual switch / VM traffic should have one or more dedicated NICs, with no IP address assigned to them
Host management traffic should have one or more dedicated NICs
Cluster live migration traffic should have one or more dedicated NICs on a separate subnet
Cluster communication traffic (mainly heartbeat traffic) can be shared with CSV traffic on one or more dedicated NICs on a separate subnet

I don't have that many NICs available. Out of the 6 NICs per host, I'm planning on using 3 for StarWind storage.
So, for the 3 NICs that aren't used for storage, would something like this be suitable to get started with:

For management / VM traffic: 192.168.250.x/24
Node1 Onboard NIC Port 1 <- NIC team <-switch-> NIC team -> Node2 Onboard NIC Port 1
Node1 Onboard NIC Port 2 <- NIC team <-switch-> NIC team -> Node2 Onboard NIC Port 2
- Cluster Network Properties: Allow cluster network communication
- Cluster Network Properties: Allow clients to connect through this network
- Let this NIC team be used for live migration, with a lower preference

Cluster use, live migration:
Node1 Onboard NIC Port 3 <- Cluster traffic, crossover cable, 192.168.249.12/30 -> Node2 Onboard NIC Port 3
- Cluster Network Properties: Allow cluster network communication
- Cluster Network Properties: Don't allow clients to connect through this network
- Let this network be used for live migration, with a higher preference
- StarWind heartbeat also here?

Storage use:
Node1 Onboard NIC Port 4 <- Sync channel 1, crossover cable, 192.168.249.0/30 -> Node2 Onboard NIC Port 4
Node1 PCI-E Dual Port 1 <- iSCSI traffic, crossover cable, 192.168.249.8/30 -> Node2 PCI-E Dual Port 1
Node1 PCI-E Dual Port 2 <- Sync channel 2, crossover cable, 192.168.249.4/30 -> Node2 PCI-E Dual Port 2
- No cluster traffic or live migration traffic allowed on these networks
- StarWind sync and heartbeat on both sync channel networks
- StarWind heartbeat also on the iSCSI network
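
To sanity-check the addressing, here's a quick Python sketch using the standard ipaddress module (just an illustration; the subnets are the ones from the plan above). Each /30 gives exactly two usable host addresses, one per node, which is all a point-to-point crossover link needs:

```python
import ipaddress

# Point-to-point /30 links from the plan above
links = {
    "Cluster / live migration": "192.168.249.12/30",
    "StarWind sync channel 1":  "192.168.249.0/30",
    "StarWind sync channel 2":  "192.168.249.4/30",
    "iSCSI traffic":            "192.168.249.8/30",
}

for name, cidr in links.items():
    net = ipaddress.ip_network(cidr)
    node1, node2 = net.hosts()  # a /30 yields exactly two usable hosts
    print(f"{name}: {net} -> Node1 {node1}, Node2 {node2}")
```

So the cluster link, for example, would put Node1 on 192.168.249.13 and Node2 on 192.168.249.14.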

Does this setup sound reasonable?

I don't have much experience with cluster networks other than some simple lab setups, so I'm hoping some of the experienced members will comment on this setup and/or recommend changes.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Mon Jan 14, 2013 3:42 pm

Hello CCI,

First of all, I'd recommend not using NIC teaming. There are a lot of topics here that explain why it's better to use an RR (round-robin) load-balancing policy instead.

One important thing: I'd recommend using a different subnet on each of the data links.
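
For example, here's a quick Python check (just an illustration, using the subnets from your plan) that all the networks are distinct and non-overlapping:

```python
import ipaddress
from itertools import combinations

# Management /24 plus the four point-to-point /30 links
subnets = [ipaddress.ip_network(cidr) for cidr in (
    "192.168.250.0/24",   # management / VM traffic
    "192.168.249.12/30",  # cluster / live migration
    "192.168.249.0/30",   # StarWind sync channel 1
    "192.168.249.4/30",   # StarWind sync channel 2
    "192.168.249.8/30",   # iSCSI traffic
)]

for a, b in combinations(subnets, 2):
    assert not a.overlaps(b), f"{a} overlaps {b}"
print("All subnets are distinct and non-overlapping.")
```

Your four /30s all sit inside 192.168.249.0/28, but they don't overlap each other or the management /24, so your plan is fine in that respect.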

One thing that needs to be explained, I guess: the sync channel already includes heartbeat functionality, so you don't need to configure a separate heartbeat on the sync channel IPs.

All the rest looks OK to me.

If you need any how-tos, you can find them here on the forum in the "StarWind Native SAN for Hyper-V Manuals" section.

I hope it helped.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com