Max (staff) wrote:
Hi Paul,
Could you please tell me which tests you performed in IOmeter: writes, reads, or both?
Which of them shows the degradation if MPIO is enabled?
Also, have you already disabled the delayed ACKs?
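In case it helps, on the Windows/StarWind side delayed ACKs are normally turned off by setting TcpAckFrequency=1 under the iSCSI NIC's interface key in the registry, followed by a reboot. Below is only a minimal sketch of that change, assuming Python 3 on the StarWind host; the IP address is a placeholder for your iSCSI NIC and the script needs to run as Administrator.

# Sketch: disable delayed ACKs (TcpAckFrequency=1) on the iSCSI NIC of a
# Windows host. Run as Administrator; reboot afterwards for it to apply.
import winreg

ISCSI_NIC_IP = "192.168.10.10"  # placeholder: your iSCSI NIC address
BASE = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"

def interface_ips(key):
    """Collect static and DHCP addresses stored under one interface subkey."""
    ips = []
    for name in ("IPAddress", "DhcpIPAddress"):
        try:
            value, _ = winreg.QueryValueEx(key, name)
            ips.extend(value if isinstance(value, list) else [value])
        except FileNotFoundError:
            pass
    return ips

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BASE) as base:
    for i in range(winreg.QueryInfoKey(base)[0]):
        guid = winreg.EnumKey(base, i)
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BASE + "\\" + guid, 0,
                            winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
            if ISCSI_NIC_IP in interface_ips(key):
                # 1 = acknowledge every segment immediately (delayed ACK off)
                winreg.SetValueEx(key, "TcpAckFrequency", 0, winreg.REG_DWORD, 1)
                print("TcpAckFrequency=1 set on interface", guid)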
Can you confirm that I understood this correctly: with this setting you are getting ~94% network utilization when using the Round Robin load balancing policy to StarWind?
paulow1978 wrote:
When I use Round Robin with the default path policy settings in VMware (1000 IOPS), I see around 108 MB/s across both links.
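For reference, the 1000 IOPS mentioned here is the default IOPS limit of VMware's Round Robin path selection policy: ESXi sends 1000 I/Os down one path before switching to the next, which can make two links behave like one in a single-stream test. Lowering the limit per device is a common tweak; the sketch below is only a hedged example of checking and changing it, with "naa.xxxxxxxx" as a placeholder for the real device ID from "esxcli storage nmp device list" (the two esxcli commands can just as well be typed directly at the ESXi shell).

# Sketch: inspect and lower the Round Robin IOPS limit for one iSCSI device.
# Intended for the ESXi shell; the wrapper only runs standard esxcli commands.
import subprocess

DEVICE = "naa.xxxxxxxx"  # placeholder: the StarWind LUN's naa identifier

def esxcli(*args):
    """Run one esxcli command and return its output."""
    return subprocess.run(["esxcli", *args], capture_output=True,
                          text=True, check=True).stdout

# Show the current Round Robin configuration for the device.
print(esxcli("storage", "nmp", "psp", "roundrobin", "deviceconfig",
             "get", "--device", DEVICE))

# Switch paths after every I/O instead of the default 1000.
esxcli("storage", "nmp", "psp", "roundrobin", "deviceconfig", "set",
       "--device", DEVICE, "--type", "iops", "--iops", "1")

Whether a lower limit actually helps depends on the target, so it is worth re-running the same IOmeter test after changing it.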
paulow1978 wrote:
Hiya,
What I mean is:
In Round Robin I see about 10-30% network utilization on each NIC on the StarWind server (IOmeter shows 40-60 MB/s on sequential read).
In Fixed Path using one link I see 90%+ (110 MB/s on sequential read in IOmeter).
I have spent a lot of time on this, messing around with the Intel NIC settings. After banging my head against a brick wall and running circa 40 IOmeter tests, I eventually disabled jumbo frames on the Intel NICs, and the Round Robin performance has gone to 110 MB/s and sometimes up to 135 MB/s, so it is looking much better.
My switches are Avaya/Nortel 4500GT and have jumbo frames enabled. In fact I am using them on my iSCSI VLAN successfully with my EqualLogic and VMware, so this issue seems localised to the Windows StarWind server (though it now looks very much like a bad implementation of jumbo frames in this set of Intel drivers).
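For anyone who wants to check whether jumbo frames actually pass end to end before disabling them, a don't-fragment ping at the largest payload a 9000-byte MTU can carry (9000 - 20 IP - 8 ICMP = 8972 bytes) is a quick test. This is only a small sketch, assuming Python 3 on the Windows host; the target addresses are placeholders for your iSCSI portal IPs.

# Sketch: test whether jumbo frames pass end to end by sending
# don't-fragment pings at the maximum payload for a 9000-byte MTU.
import subprocess

TARGETS = ["192.168.10.10", "192.168.10.20"]  # placeholders: iSCSI portal IPs
PAYLOAD = 8972  # bytes; use 1472 to confirm a standard 1500-byte MTU path

for host in TARGETS:
    # Windows ping: -f = don't fragment, -l = payload size, -n 2 = two echoes
    result = subprocess.run(
        ["ping", "-f", "-l", str(PAYLOAD), "-n", "2", host],
        capture_output=True, text=True,
    )
    ok = result.returncode == 0 and "needs to be fragmented" not in result.stdout
    print(f"{host}: {'jumbo path OK' if ok else 'jumbo frames NOT passing'}")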
Sorry to hassle you guys. It's not a StarWind issue, I am pretty certain of that. I am only posting here in the hope that someone else might know what is causing the issue. Hope that's OK!
thanks,
Paul
paulow1978 wrote:
Hiya,
Yes, the EQL has jumbo frames enabled. All good on that front.
I will try the direct connection. Just need to find some straight-through cables...