Software-based VM-centric and flash-friendly VM storage + free version
Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)
-
logicmate
- Posts: 29
- Joined: Tue Sep 13, 2011 10:36 pm
Tue Apr 10, 2012 5:19 pm
I noticed the same behavior on our iSCSI SAN in HA mode a few weeks ago but did not have time to investigate further and report back. When I saw this thread, I thought I would check in and share my results as well. Our results before going HA were perfect, so the only changes in the setup were going HA and upgrading from 5.7 to 5.8.
1. We use MPIO
2. Adaptec 5405Z RAID 10 with 8x 600GB 15K drives
3. 2 client NICs per node, so 4 client NICs total
4. 4 sync channel NICs per node
Thanks,
Ara
-
Attachments
-

- HA-Benchmark.png (20.08 KiB) Viewed 18420 times
-
logicmate
- Posts: 29
- Joined: Tue Sep 13, 2011 10:36 pm
Tue Apr 10, 2012 5:53 pm
Max (staff) wrote:To camealy,
Fixed path is not an option; HA benefits from Round Robin since it can use both paths for reading data. Fixed path should be avoided because it slows down failover.
By the way, I've seen a similar performance drop caused by unequal jumbo frame settings on the SAN servers. Could you please check the values both in the device properties and in the HP network utility (if used)?
Max, when I set up my servers, the first thing I do is configure jumbo frames, so I decided to double-check my results. It seems a gremlin might have changed my jumbo frame settings.
Attached are the new benchmark results with jumbo frames corrected on all NICs; however, we are still experiencing the same results.
Code:
Node 1
netsh interface ipv4 show interfaces
Idx Met MTU State Name
--- ---------- ---------- ------------ ---------------------------
1 50 4294967295 connected Loopback Pseudo-Interface 1
14 10 9000 connected PrivateNetwork-B
23 10 1500 connected PrivateNetwork-D
16 10 9000 connected PrivateNetwork-A
24 10 1500 connected PrivateNetwork-F
22 5 1500 connected PublicNetwork-Teamed
25 10 1500 connected PrivateNetwork-E
26 10 1500 connected PrivateNetwork-C
Code:
Node 2
netsh interface ipv4 show interfaces
Idx Met MTU State Name
--- ---------- ---------- ------------ ---------------------------
1 50 4294967295 connected Loopback Pseudo-Interface 1
11 10 9000 connected PrivateNetwork-A
21 10 9000 connected PrivateNetwork-C
15 10 9000 connected PrivateNetwork-B
19 5 9000 connected PublicNetwork-Teamed
22 10 9000 connected PrivateNetwork-D
23 10 9000 connected PrivateNetwork-E
24 10 1500 connected PrivateNetwork-F

- HA-Benchmark-JumboFrames.png (18.92 KiB) Viewed 18420 times
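As a quick cross-check of listings like the two above, the `netsh` output can be compared programmatically to flag any private-network interface that is not at the jumbo MTU on both nodes. This is just an illustrative sketch (nothing StarWind-specific); the interface names and the 9000-byte target come from the listings above, and the sample strings below are abbreviated copies of them.

```python
import re

# Parse "netsh interface ipv4 show interfaces" output into {name: mtu}.
def parse_netsh(output):
    mtus = {}
    for line in output.splitlines():
        m = re.match(r"\s*\d+\s+\d+\s+(\d+)\s+connected\s+(.+?)\s*$", line)
        if m:
            mtus[m.group(2)] = int(m.group(1))
    return mtus

# Report private-network interfaces that are not at the expected jumbo MTU
# on both nodes (the iSCSI/sync networks are the ones that need it).
def jumbo_mismatches(node1, node2, expected=9000):
    problems = []
    for name in sorted(set(node1) | set(node2)):
        if not name.startswith("PrivateNetwork"):
            continue  # skip loopback, teamed public NICs, etc.
        a, b = node1.get(name), node2.get(name)
        if a != expected or b != expected:
            problems.append((name, a, b))
    return problems

node1 = parse_netsh("""\
 14          10        9000  connected     PrivateNetwork-B
 16          10        9000  connected     PrivateNetwork-A
 26          10        1500  connected     PrivateNetwork-C
""")
node2 = parse_netsh("""\
 11          10        9000  connected     PrivateNetwork-A
 15          10        9000  connected     PrivateNetwork-B
 21          10        9000  connected     PrivateNetwork-C
""")
for name, a, b in jumbo_mismatches(node1, node2):
    print(f"{name}: node1 MTU={a}, node2 MTU={b}")
# prints: PrivateNetwork-C: node1 MTU=1500, node2 MTU=9000
```

A mismatch like PrivateNetwork-C above (1500 on one node, 9000 on the other) is exactly the kind of asymmetry Max described as causing a performance drop.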
-
logicmate
- Posts: 29
- Joined: Tue Sep 13, 2011 10:36 pm
Tue Apr 10, 2012 8:20 pm
I just got some strange results: on a Windows 2003 target it works as expected, but the strange behavior still appears on the Windows 2008 R2 targets.

- Windows 2003 Target
- windows2k3.png (19.91 KiB) Viewed 18412 times

- Windows 2008 Target
- windows2k8.png (19.74 KiB) Viewed 18411 times
-
camealy
- Posts: 77
- Joined: Fri Sep 10, 2010 5:54 am
Wed Apr 11, 2012 8:06 pm
That is exactly what I am experiencing. 2008 and 2003 VMs are fine, but 2008 R2 VMs are acting screwy, unless it is just a coincidence, which is possible. I don't know how to narrow it down further from here; I probably need to open a support case.
-
Max (staff)
- Staff
- Posts: 533
- Joined: Tue Apr 20, 2010 9:03 am
Fri Apr 13, 2012 8:38 am
Gentlemen,
I'm trying to wrap my head around this config and I've got some questions:
1. What OS is used on the storage nodes?
2. How is the iSCSI storage connected to the guest VMs? (the storage showing the strange patterns mentioned above)
a. iSCSI target is connected to the Hyper-V host, a CSV is created and VMs fully reside on the CSV
b. iSCSI target is connected to the VM directly
3. Are there any Virtual adapters used for iSCSI traffic (this can be checked in the iSCSI initiator session properties)?
Max Kolomyeytsev
StarWind Software
-
camealy
- Posts: 77
- Joined: Fri Sep 10, 2010 5:54 am
Fri Apr 13, 2012 12:13 pm
Max (staff) wrote:Gentlemen,
I'm trying to wrap my head around this config and I've got some questions:
1. What OS is used on the storage nodes?
2. How is the iSCSI storage connected to the guest VMs? (the storage showing the strange patterns mentioned above)
a. iSCSI target is connected to the Hyper-V host, a CSV is created and VMs fully reside on the CSV
b. iSCSI target is connected to the VM directly
3. Are there any Virtual adapters used for iSCSI traffic (this can be checked in the iSCSI initiator session properties)?
- I have two HA nodes, each running 2008 R2 (10 Gb sync channel).
- I have 2x 1GB links through two switches (MPIO round robin) from each 2008 R2 host in the cluster, carrying the CSVs that the 2008 R2 VMs live on.
- No.
-
logicmate
- Posts: 29
- Joined: Tue Sep 13, 2011 10:36 pm
Fri Apr 13, 2012 6:28 pm
Max (staff) wrote:Gentlemen,
I'm trying to wrap my head around this config and I've got some questions:
1. What OS is used on the storage nodes?
2. How is the iSCSI storage connected to the guest VMs? (the storage showing the strange patterns mentioned above)
a. iSCSI target is connected to the Hyper-V host, a CSV is created and VMs fully reside on the CSV
b. iSCSI target is connected to the VM directly
3. Are there any Virtual adapters used for iSCSI traffic (this can be checked in the iSCSI initiator session properties)?
- HA-Node #1: Windows 2008 x64 R2 - iSCSI SAN
  - 4 x 1GB sync channels
  - 2 x 1GB iSCSI channels
- HA-Node #2: Windows 2008 x64 R2 - iSCSI SAN
  - 4 x 1GB sync channels
  - 2 x 1GB iSCSI channels
- Target client #1: Windows 2008 x64 R2 (physical machine, not a VM)
  - 2 x 1GB iSCSI channels - RR to all 4 iSCSI channels on storage
  - Strange behavior on performance
- Target client #2: Windows 2003 x86 R2 (physical machine, not a VM)
  - 2 x 1GB iSCSI channels - RR to all 4 iSCSI channels on storage
  - Performs as expected
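For reference, a back-of-the-envelope ceiling for this topology: each client has 2x 1 GbE, so even with round robin across all four storage-side channels, the initiator side caps aggregate throughput at roughly 2 Gbit/s. A tiny sketch of that arithmetic; the ~10% protocol-overhead figure is an assumption for illustration, not a measured value:

```python
# Rough iSCSI throughput ceiling for an initiator with N gigabit NICs.
# The ~10% TCP/IP + iSCSI header overhead is an assumed figure,
# not a measurement from this setup.
def ceiling_mb_per_s(nics, gbit_per_nic=1.0, overhead=0.10):
    bits = nics * gbit_per_nic * 1e9 * (1 - overhead)
    return bits / 8 / 1e6  # MB/s (decimal megabytes)

# Two 1 GbE initiator NICs: round robin over four target channels
# cannot exceed what the two client-side links can carry.
print(round(ceiling_mb_per_s(2)))  # prints 225
```

So if both the 2003 and 2008 R2 clients sit behind the same 2x 1GB links, they should plateau at roughly the same figure; numbers well below that on only one OS point at the initiator or guest, not the wire.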
-
Anatoly (staff)
- Staff
- Posts: 1675
- Joined: Tue Mar 01, 2011 8:28 am
Mon Apr 16, 2012 8:30 pm
logicmate, thanks very much for the information you have provided. We will try to reproduce your issue in our test lab.
camealy wrote:- I have two HA nodes, each running 2008 R2 (10 Gb sync channel).
- I have 2x 1GB links through two switches (MPIO round robin) from each 2008 R2 host in the cluster, carrying the CSVs that the 2008 R2 VMs live on.
- No.
camealy, could you please be a bit more specific? logicmate provided good-quality information; you can use his answer as an example.
Thank you Gents!
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
-
Anatoly (staff)
- Staff
- Posts: 1675
- Joined: Tue Mar 01, 2011 8:28 am
Tue Apr 17, 2012 8:47 am
camealy wrote:What do you want to know? I answered your questions.
You mentioned that you achieved good results with WS 2003. Was it a VM or a physical machine?
Also, gents, can you please tell us whether you have used caching? If yes, what kind exactly, and how much?
Thank you
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
-
camealy
- Posts: 77
- Joined: Fri Sep 10, 2010 5:54 am
Tue Apr 17, 2012 12:27 pm
The 2003 servers are VMs in the same 2008 R2 Hyper-V cluster as the 2008 R2 VMs having the issue. We are using read/write cache on the targets, 3GB I believe.
Anatoly (staff) wrote:camealy wrote:What do you want to know? I answered your questions.
You mentioned that you achieved good results with WS 2003. Was it a VM or a physical machine?
Also, gents, can you please tell us whether you have used caching? If yes, what kind exactly, and how much?
Thank you
-
Anatoly (staff)
- Staff
- Posts: 1675
- Joined: Tue Mar 01, 2011 8:28 am
Tue Apr 17, 2012 1:31 pm
Thank you for the information.
About the 2003 VM: are there any virtual adapters used for iSCSI traffic (this can be checked in the iSCSI initiator session properties)?
Also, have you used write-through or write-back cache?
Thank you
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
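For readers unfamiliar with the distinction Anatoly is asking about: with write-through, every write reaches disk before it is acknowledged; with write-back, writes are acknowledged from RAM and flushed to disk later, which is faster but means unflushed data is lost if a node dies. A minimal, generic sketch of those semantics (not StarWind's implementation; the dicts simply stand in for real storage):

```python
# Minimal sketch of write-through vs write-back caching semantics.
# "cache" and "disk" are plain dicts standing in for real storage.
class Cache:
    def __init__(self, write_back):
        self.write_back = write_back
        self.cache, self.disk, self.dirty = {}, {}, set()

    def write(self, key, value):
        self.cache[key] = value
        if self.write_back:
            self.dirty.add(key)      # acknowledged now, flushed to disk later
        else:
            self.disk[key] = value   # write-through: on disk before the ack

    def flush(self):
        for key in self.dirty:
            self.disk[key] = self.cache[key]
        self.dirty.clear()

wt = Cache(write_back=False)
wt.write("a", 1)
wb = Cache(write_back=True)
wb.write("a", 1)
# On sudden power loss before a flush, the write-back copy exists only in RAM:
print("a" in wt.disk, "a" in wb.disk)  # prints: True False
```

That durability difference is why the WT-vs-WB question matters when diagnosing odd write-performance patterns: write-back can make a benchmark look faster (or burstier) than the disks underneath actually are.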
-
camealy
- Posts: 77
- Joined: Fri Sep 10, 2010 5:54 am
-
Anatoly (staff)
- Staff
- Posts: 1675
- Joined: Tue Mar 01, 2011 8:28 am
Tue Apr 17, 2012 2:07 pm
Thank you.
logicmate, can you confirm that you have also used WB caching?
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
-
logicmate
- Posts: 29
- Joined: Tue Sep 13, 2011 10:36 pm
Tue Apr 17, 2012 5:33 pm
Anatoly (staff) wrote:Thank you.
logicmate, can you confirm that you have also used WB caching?
Yes, 1GB of WB caching for each device.
Ara