Need help setting up network and 1-node Hyper-V cluster

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

VladR
Posts: 13
Joined: Thu Aug 18, 2016 7:41 pm

Tue Aug 23, 2016 12:49 pm

Moved to new post from "LAN set up with a 2 node Storage Cluster" thread
My current situation is this:

I have a Dell R730xd server: dual 6-core CPUs, 32 GB RAM.
Windows Server 2012 R2 with Hyper-V is set up on 2x200GB SSD in RAID-1 + 4x2TB HDD in RAID-10.
The future plan is to add a second server in the same config to have an HA failover cluster.

What I am trying to do right now is build out a single-node Hyper-V cluster using the server above plus a StarWind NFR license, so that when the second server comes in
I can just configure it and add it to the cluster hot-plug style.

This is my first cluster build, let alone my first build with StarWind, so I am getting a bit confused about things like how to configure my network and whether I configured StarWind properly for a cluster setup.
What makes it even more frustrating is that I only have 4x 1Gb NIC interfaces on the server, and everything I read says that is not enough. But adding 10Gb hardware is not in the stars for the near future; we just had a hardware update a few months ago to 1Gb, i.e. we rewired the site to Cat-6e with a proper layout, updated all endpoints to Cat-6 RJ-45 jacks, replaced all switches with 1Gb managed Netgear units, etc.

My questions are as follows:

#1. Provided I must stick with 1Gb interfaces, how essential is it to double up the NICs per server (i.e. order the new server with 8x1Gb and add a second 4-port card to the current server)?

#2. What would be the proper way to configure the network on the servers for the 2-node HA cluster setup I plan?
Please give me the config scenario using IP addresses I can actually use.
My primary network is 192.168.1.0/24 and my first server's IP is 192.168.1.8/24.

NOTE: I tried a config based on a how-to I found but got a bit confused in the middle of it, hence I am here.
What I have so far: I created a team on NIC1, and
using PowerShell I configured a switch and 3 vNICs on the team.

Code: Select all

## Create a new switch on the team network
New-VMSwitch -Name ConvergedHVSwitch -NetAdapterName HVTeam -AllowManagementOS $False -MinimumBandwidthMode Weight


### Create Virtual Network adaptors
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedHVSwitch"

Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedHVSwitch"

Add-VMNetworkAdapter -ManagementOS -Name "CSV" -SwitchName "ConvergedHVSwitch"


### Set vNIC weight
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40

Set-VMNetworkAdapter -ManagementOS -Name "CSV" -MinimumBandwidthWeight 5

Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 5



New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 192.168.1.7 -PrefixLength "24" -DefaultGateway 192.168.1.2


What would be my next step with this config?
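I assume it would be something like giving the other two vNICs their own addresses on separate subnets, e.g. (the subnets here are just placeholders I made up):

Code: Select all

## Assign addresses to the remaining vNICs on separate (placeholder) subnets
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress 192.168.2.7 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (CSV)" -IPAddress 192.168.3.7 -PrefixLength 24

but I am not sure if that is the right approach.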

#3. How do I configure the remaining NICs to be used by StarWind and the Hyper-V cluster?


My short-term goal is to have a single-node Hyper-V cluster up and running with StarWind vSAN as soon as possible;
this way I can push for the second server to be ordered sooner rather than later.

My long-term goal is to have the setup above running ASAP and to add the second server to the mix within a month, two at most.

Thanks
Michael (staff)
Staff
Posts: 319
Joined: Thu Jul 21, 2016 10:16 am

Fri Aug 26, 2016 6:49 pm

Hello VladR,
According to the StarWind System Requirements (https://www.starwindsoftware.com/system-requirements), the minimum recommended bandwidth is 1 Gbps. At a minimum, StarWind needs one channel for Synchronization traffic and one channel for iSCSI traffic. It would be better to connect them directly, without a switch. Both channels should be on separate subnets.
With a 1 Gbps Synchronization connection, the total storage performance will be limited to 1 Gbps. If that is enough for your production, you can leave it as is, but in most cases it is better to have more Synchronization throughput to avoid a network bottleneck.
Please keep in mind that StarWind does not support any form of NIC teaming, so please do not configure NIC teaming for the SYNC and iSCSI channels.

To configure the NICs for iSCSI and SYNC, you can use the following PowerShell commands:

Code: Select all

Get-NetAdapter "NIC 2"| Rename-NetAdapter -NewName ISCSI | Get-NetAdapter ISCSI | New-NetIPAddress -IPAddress 172.16.10.10 -PrefixLength 24   
Get-NetAdapter "NIC 3"| Rename-NetAdapter -NewName SYNC | Get-NetAdapter ISCSI | New-NetIPAddress -IPAddress 172.16.20.10 -PrefixLength 24 
On the second server, the addresses will be 172.16.10.20 and 172.16.20.20 respectively.
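For example, assuming the same physical adapter names on the second server:

Code: Select all

Get-NetAdapter "NIC 2" | Rename-NetAdapter -NewName ISCSI
New-NetIPAddress -InterfaceAlias ISCSI -IPAddress 172.16.10.20 -PrefixLength 24
Get-NetAdapter "NIC 3" | Rename-NetAdapter -NewName SYNC
New-NetIPAddress -InterfaceAlias SYNC -IPAddress 172.16.20.20 -PrefixLength 24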

Additionally, I would recommend enabling Jumbo frames on each adapter. Please see the example below:

Code: Select all

Set-NetAdapterAdvancedProperty -Name ISCSI -RegistryKeyword "*JumboPacket" -RegistryValue 9014
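The same setting applies to the SYNC adapter, and you can verify it afterwards, for example:

Code: Select all

Set-NetAdapterAdvancedProperty -Name SYNC -RegistryKeyword "*JumboPacket" -RegistryValue 9014
# Check that Jumbo frames are enabled on both adapters
Get-NetAdapterAdvancedProperty -Name ISCSI, SYNC -RegistryKeyword "*JumboPacket"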
Also, you will need to add the MPIO feature as well as all other necessary roles on each server:

Code: Select all

Enable-WindowsOptionalFeature -Online -FeatureName MultiPathIO
Enable-MSDSMAutomaticClaim -BusType iSCSI

Code: Select all

Install-WindowsFeature Hyper-V, Hyper-V-PowerShell, Hyper-V-Tools, Failover-Clustering, RSAT-Clustering -IncludeAllSubFeature -Restart
As recommended by @PoSaP in the previous thread (https://forums.starwindsoftware.com/vie ... f=5&t=4513), you can find a step-by-step guide to configuring StarWind Virtual SAN in a hyper-converged scenario for Hyper-V here:
https://www.starwindsoftware.com/techni ... manual.pdf
VladR
Posts: 13
Joined: Thu Aug 18, 2016 7:41 pm

Fri Aug 26, 2016 8:41 pm

Thank you Michael.
I understand that I am limiting myself to 1Gb; unfortunately, at this moment that is all I have.
Also, I have seen lots of how-tos around that contradict the no-NIC-teaming statement.
I have seen many suggestions, when using 10Gb ports, to team them up and use a vSwitch for everything. Is all of that bad advice?

Relative to my setup:
I have 4 NIC ports. At the moment I have teamed
pNIC 1 and 2 into HVTeam, to be used with the vSwitch (ConvergedHVSwitch) I created on top of HVTeam.

3 vNICs added to the switch:
vNIC1 - "Management"
vNIC2 - "LiveMigration"
vNIC3 - "Cluster"

vNIC1 - "Management" is assigned the primary server IP of 192.168.1.7/24 and the gateway
vNIC2 - "LiveMigration" somehow got 2 local IPs, 192.168.1.71/24 and 192.168.1.72/24, during cluster setup
vNIC3 - "Cluster" got 10.10.10.10 and 10.10.10.11


pNIC3 - reserved for Sync
pNIC4 - reserved for iSCSI

What is my best option to improve the setup here?

thanks
Michael (staff)
Staff
Posts: 319
Joined: Thu Jul 21, 2016 10:16 am

Mon Aug 29, 2016 2:48 pm

Hello VladR,
As for iSCSI over a NIC team: technically you can do it, but there could be performance issues, which is why it is not recommended. Please see the blog below for details:
https://www.starwindsoftware.com/blog/l ... his-case-2
As for Management over a NIC team, it makes sense to deploy it to make Live Migration faster.
As for pNIC3 and pNIC4, I would recommend leaving them reserved as they are and using them separately, as I wrote above.
VladR
Posts: 13
Joined: Thu Aug 18, 2016 7:41 pm

Mon Aug 29, 2016 8:42 pm

Thanks Michael.
So, if I understand everything I have read so far,

my best options to make it all work are:

#1. Get a second 10Gb card for the current NODE1
and order NODE2 with a second 10Gb card installed.

#2. Reserve the 10Gb cards on both nodes for iSCSI and Sync.
#3. Team up all ports on the 1Gb cards on both nodes into a single team (sketched below) and use that with the vSwitch for interfaces like Management, Cluster/HB, Live Migration, etc.
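I guess on each node the team itself would be created with something like this (the member names are just what I expect the ports to be called):

Code: Select all

## Team all four 1Gb ports into HVTeam (member names are placeholders)
New-NetLbfoTeam -Name HVTeam -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic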

The basic setup would be:


Connect all ports on the 1Gb cards to my main network (and configure as per #3).
Connect NODE1 10Gb port1 to NODE2 10Gb port1 --- Sync,
and NODE1 10Gb port2 to NODE2 10Gb port2 --- iSCSI.

The connections on the teamed ports will get these IPs:

NODE1
on the team:
192.168.1.7 for Management
192.168.1.71 for LiveMigration

10.10.10.7 for NODE1 10Gb port1
10.10.20.7 for NODE1 10Gb port2

NODE2
on the team:
192.168.1.8 for Management
192.168.1.72 for LiveMigration

10.10.10.8 for NODE2 10Gb port1
10.10.20.8 for NODE2 10Gb port2


When ready, the cluster will get IP 192.168.1.6 on the LiveMigration port
and 10.10.10.6 on 10Gb port1.

All iSCSI initiators will be configured to use 10Gb port2,
and all Sync will be configured to use 10Gb port1.

Anything I have missed or misunderstood?
Michael (staff)
Staff
Posts: 319
Joined: Thu Jul 21, 2016 10:16 am

Tue Aug 30, 2016 8:32 am

Hello VladR,
Everything looks correct. Since you are going to use 10 Gbps adapters for Synchronization and iSCSI, I would recommend using the Round Robin MPIO policy instead of Failover Only.
Please see the KB article for more details: https://knowledgebase.starwindsoftware. ... correctly/
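If you prefer PowerShell, the default policy for newly claimed iSCSI devices can also be set with the MPIO module, for example:

Code: Select all

# Set Round Robin as the default MPIO load balance policy for new devices
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
# Check the current default policy
Get-MSDSMGlobalDefaultLoadBalancePolicy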
VladR
Posts: 13
Joined: Thu Aug 18, 2016 7:41 pm

Thu Sep 22, 2016 8:04 pm

Hello again.

I got my second server this week and am setting it up right now.
My question is this: if I get all the installation and setup done, including StarWind,
can I set up the StarWind shares in reverse?
I mean, I have everything up and running on the new server (2x10Gb NICs, 2x1Gb NICs in team mode) as discussed in my last post,
with StarWind set up and the shares created (1x Witness, 1x CSV1 and 1x CSV2).
If I add the new server to the cluster, move all the VM role storage from the first node to the new one, and move all the roles as well,
can I then delete the shares from the first node, add the 10Gb NIC and some more RAM to it, and set up an HA replication of the StarWind shares from the second node to the first one?
I used the LSFS type for the disks and it seems I should have used the regular option.
What is my best bet on this?

I tried to set up HA shares and it wants to create new targets; it does not see the ones I already created on the new node.

One more thing:

Where do I select/use my iSCSI card and where do I use my Sync card?
I mean, I am dedicating a single 10Gb NIC to iSCSI and a single 10Gb NIC to Sync.
Where do I use these?

Right now I cannot do anything since my first node does not have that setup yet, but once I do have it, where do I use it?

thanks
Michael (staff)
Staff
Posts: 319
Joined: Thu Jul 21, 2016 10:16 am

Tue Sep 27, 2016 4:41 pm

Hello VladR,
When you build the second node, just create the replicas to it by following the steps in this document: https://www.starwindsoftware.com/techni ... al_SAN.pdf
During replica creation you will be asked for the network options for replication. Please specify your Synchronization network for Synchronization and Heartbeat, and the iSCSI network for Heartbeat. You will also use the iSCSI network for target discovery in the iSCSI Initiator.
You can find these steps in this manual: https://www.starwindsoftware.com/techni ... manual.pdf

If you need to do maintenance on one of the nodes, there is no need to remove the replicas in the StarWind Management Console. Just move all cluster resources to the other node and shut down the server.
Once you have added the RAM and installed the NIC, just turn the server on and StarWind will synchronize the devices by itself.

Additionally, I would recommend reading the LSFS technical description here: https://knowledgebase.starwindsoftware. ... scription/ and keeping some spare space on its underlying storage.
VladR
Posts: 13
Joined: Thu Aug 18, 2016 7:41 pm

Tue Sep 27, 2016 5:35 pm

Thanks Michael.
Yes, I used those docs for my setup, but I ran ahead and created all the devices on the second node before getting back to the docs for verification.
I also figure I want to convert the LSFS devices to the normal type. So, is it possible to create HA devices from targets that already have data on them?
Say I set up my second node anew (in progress), create my targets on it as regular (non-LSFS) devices, mount them, set up the iSCSI targets, etc.,
add it to the cluster, and migrate all my data and VMs onto it. Then I remove node1 from the cluster, delete all StarWind targets from it,
and then follow the HA doc using node2 as the source. Will that disrupt the running services and VMs?
Can I convert a StarWind target to HA on a live system?
Michael (staff)
Staff
Posts: 319
Joined: Thu Jul 21, 2016 10:16 am

Wed Sep 28, 2016 4:24 pm

Hello VladR,
There is no way to convert an LSFS device into a Thick-Provisioned device. The only option is a data migration procedure. Also, there is no need to remove Node1 from the cluster at all.
The main steps are:
- create a new HA Thick-Provisioned device;
- connect the newly created device in the iSCSI initiators and add it to the cluster as another CSV;
- move the Virtual Machine files to the new location with the Storage Migration feature (see the sketch after this list): http://blogs.msdn.com/b/clustering/arch ... 98203.aspx This can be done while the VMs are running;
- remove the old device from Cluster Shared Volumes, disconnect it in the iSCSI initiator and remove it from the StarWind Management Console.
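For reference, the storage move can also be scripted with the Hyper-V PowerShell module, for example (the VM name and CSV path below are placeholders):

Code: Select all

# Move a running VM's files to the new CSV (placeholder names)
Move-VMStorage -VMName "VM01" -DestinationStoragePath "C:\ClusterStorage\Volume2\VM01"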

If you want to create a replica for an already connected device (convert it to HA), you will be warned about a short device disconnection. So, a downtime window should be scheduled to replicate the already connected device and recheck the connections in the iSCSI Initiators.
VladR
Posts: 13
Joined: Thu Aug 18, 2016 7:41 pm

Wed Sep 28, 2016 5:25 pm

Thanks, that is exactly what I am doing right now. :-)
Michael (staff)
Staff
Posts: 319
Joined: Thu Jul 21, 2016 10:16 am

Fri Sep 30, 2016 2:23 pm

Great! I hope everything is good now.
VladR
Posts: 13
Joined: Thu Aug 18, 2016 7:41 pm

Wed Oct 05, 2016 12:46 pm

OK, thanks for all the help.
I do believe I have my StarWind 2-node cluster up and running.
However, I have a couple of questions left to validate.

#1. How can I validate my setup to make sure that everything is working the way I want it to, i.e. that my Sync and iSCSI traffic is actually using the 10Gb ports as it should,
that my HA disks are mounted properly, etc.?

#2. I get some errors in my cluster validation and I am not sure what the problem is. The cluster is up and running, though.
Can someone help with that or point me to the info I need?

thanks
Michael (staff)
Staff
Posts: 319
Joined: Thu Jul 21, 2016 10:16 am

Fri Oct 07, 2016 4:51 pm

Hello VladR,
If you have followed the manual mentioned above, everything should be correct. You can check how everything works by restarting the nodes one by one.
Please double-check that all StarWind devices are synchronized on all nodes before you restart a node.
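To see whether the Synchronization and iSCSI traffic really goes over the 10 Gbps ports, you can also watch the per-adapter byte counters while the cluster is under load, for example (assuming the adapter names ISCSI and SYNC from earlier):

Code: Select all

Get-NetAdapterStatistics -Name ISCSI, SYNC | Select-Object Name, ReceivedBytes, SentBytes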

Also, please share the errors from Cluster Validation with us, so we can advise you.
VladR
Posts: 13
Joined: Thu Aug 18, 2016 7:41 pm

Fri Oct 07, 2016 8:20 pm

Hi Michael,
I did follow the manual, but I do not think it is working.

StarWind says everything is synchronized and nice, but when I shut down clstr_node2 (coincidentally, that was the new machine where I had set up StarWind anew and done the HA config from, towards clstr_node1), all hell broke loose.
Apparently the CSV1 disk that holds all the VM storage is owned by node2, and when it went down everything was down.
I think I somehow made a circular reference of all nodes and clients to node2 instead of to the local disks on each node;
I am not sure how to explain it.

Is there a way for me to grab the StarWind config from each node and post it here?

thanks