I am in need of help and not sure what to do at this point. I have been banging my head against the wall all day with performance issues. Right now I have 2 nodes in HA on StarWind, each with 4 x 1 Gb NICs: 1 for management, 1 for sync, and 2 carrying the actual iSCSI traffic. I currently have write-back caching turned on in the iSCSI target and jumbo frames enabled all the way through. The physical layout looks like:
If I use the Fixed path selection policy I get basically wire speed for gigabit, as seen below.
But if I use MPIO with Round Robin and set IOPS = 1 I get:
As you can see, some multipathing is happening, since I do manage to hit 200 MB/s once, but throughput is just all over the place and I don't know why (the IOPS = 1 setup I mean is spelled out below).
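If it helps, setting IOPS = 1 the way I understand it is done with something like this (assuming the vSphere Round Robin PSP and ESXi 5.x esxcli syntax; naa.xxxxxxxx is just a placeholder for the actual StarWind LUN ID):
esxcli storage nmp device set -d naa.xxxxxxxx --psp VMW_PSP_RR - sets the path selection policy to Round Robin
esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxxxxxxx --type=iops --iops=1 - switches to the next path after every single I/O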
The forums will only allow me to attach 3 screenshots, but the local disk benchmark on the physical servers comes in around 3.4 GB/s, so the storage itself is certainly not the bottleneck.
Both StarWind boxes are Windows Server 2008 with the following set, based on the StarWind forums:
netsh int tcp set heuristics disabled - disables Windows scaling heuristics
netsh int tcp set global autotuninglevel=normal - turns on TCP auto-tuning
netsh int tcp set global congestionprovider=ctcp - turns on using Compound TCP
netsh int tcp set global ecncapability=enabled - enables ECN
netsh int tcp set global rss=enabled - enables Receive-Side Scaling (RSS). Note that you should enable it only if your NIC supports it!
netsh int tcp set global chimney=enabled - enables TCP Chimney offload
netsh int tcp set global dca=enabled - enables Direct Cache Access (must be supported by CPU, chipset and NIC)
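To confirm these actually took effect, a quick check is (built-in Windows command, nothing StarWind-specific):
netsh int tcp show global - prints the current TCP global parameters, so the heuristics/auto-tuning/CTCP/ECN/chimney state can be verified at a glance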
I am running RAID 50 with a 64K stripe size, and the Windows disk on the host is formatted with a 64K allocation unit size (the biggest you can get).
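For what it's worth, a quick way to double-check the cluster size is (fsutil ships with Windows; E: is a placeholder for the StarWind image disk):
fsutil fsinfo ntfsinfo E: - the "Bytes Per Cluster" line should read 65536 for a 64K allocation unit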
No tweaks other than jumbo frames have been applied to VMware, and I have even tried swapping the Cisco switch stack for an HP ProCurve; the results don't change, so I'm pretty confident it's not the switch.
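For completeness, the do-not-fragment ping I use to sanity-check jumbo frames end to end (IPs here are placeholders; 8972 bytes of payload = 9000 MTU minus 28 bytes of IP/ICMP header):
ping -f -l 8972 192.168.1.10 - from the StarWind boxes (Windows: -f = don't fragment, -l = payload size)
vmkping -d -s 8972 192.168.1.20 - from the ESXi hosts (-d = don't fragment, -s = payload size)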
Please help.