ESXi Round Robin performance worse than fixed path


craggy
Posts: 55
Joined: Tue Oct 30, 2012 3:33 pm

Fri Nov 16, 2012 1:05 pm

Thanks, I just replied to the mail now.
Max (staff)
Staff
Posts: 533
Joined: Tue Apr 20, 2010 9:03 am

Tue Nov 20, 2012 10:56 am

Just to update - we're currently working on this case; either I or one of my colleagues will post an update here.
Max Kolomyeytsev
StarWind Software
craggy
Posts: 55
Joined: Tue Oct 30, 2012 3:33 pm

Tue Apr 09, 2013 11:38 pm

Sorry, it's been a while since I was on here for various reasons, but I need to revisit this issue.

Basically we are still experiencing slow performance when using RR MPIO with ESXi 5.1.
When we switch to using just one link or the other we can saturate a 1Gb link with 122MB/s write / 120MB/s read, but once RR is enabled we can only get a maximum of 158MB/s write and 163MB/s read.

I have tried setting the ESXi IOPS limit to 1000, 100, 10, 3, 2, and 1, with the best performance at 3 of around 150MB/s each way.
Then I tried switching the policy from IOPS to bytes and set it to 8800, 8972 and 9000, with the best performance at 8972 of around 163MB/s.
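In case it helps, this is roughly how I've been applying those values per device from the ESXi shell (the naa ID below is just a placeholder for the actual StarWind LUN identifier):

# set the path selection policy to Round Robin for the device
esxcli storage nmp device set --device naa.xxxxxxxxxxxx --psp VMW_PSP_RR
# switch paths every 3 I/Os instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxxxxxx --type iops --iops 3
# or switch paths every 8972 bytes instead of by I/O count
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxxxxxx --type bytes --bytes 8972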

So it's really strange to see such low performance from RR when a single link can saturate a 1GbE NIC at almost wire speed.

Things I've enabled and disabled on both sides, where possible, are as follows:
Delayed ACK
RSS
LRO
LSO
Tcp/ip checksum offload
Tcp/ip send offload
Jumbo frames
Interrupt moderation disable (most significant performance gain here ~25%)
Receive window auto tuning
Chimney offload.
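
For what it's worth, most of the Windows-side globals above were toggled with netsh along these lines (exact option support can differ by Windows version, and per-NIC items like LSO, jumbo frames and interrupt moderation are set in the adapter's Advanced properties instead):

:: disable TCP Chimney offload globally
netsh int tcp set global chimney=disabled
:: disable Receive Side Scaling globally
netsh int tcp set global rss=disabled
:: disable receive window auto-tuning
netsh int tcp set global autotuninglevel=disabled
:: Delayed ACK needed a per-interface TcpAckFrequency=1 registry value under
:: HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<GUID>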

Don't think I can adjust anything else.

Any suggestions?
Thanks.
Max (staff)
Staff
Posts: 533
Joined: Tue Apr 20, 2010 9:03 am

Mon Apr 15, 2013 3:35 pm

We're going to update the ESXi guidelines really soon - they'll include a fully updated set of multipathing recommendations.
One question though - do you see consistent performance with just port 3 and port 4 being used for iSCSI traffic?
Max Kolomyeytsev
StarWind Software
craggy
Posts: 55
Joined: Tue Oct 30, 2012 3:33 pm

Thu Apr 18, 2013 10:36 pm

Max,

Sorry, when you say port 3 and 4 what exactly do you mean by that?
Max (staff)
Staff
Posts: 533
Joined: Tue Apr 20, 2010 9:03 am

Mon Apr 22, 2013 9:52 am

craggy wrote: Max,

Sorry, when you say port 3 and 4 what exactly do you mean by that?
In your initial post you described the configuration of the storage box:
Storage Host:
NIC1 Broadcom - WAN and Management (80.9x.xx.xx subnet), vSwitch 1, VMkernel 1
NIC2 Broadcom - Admin and vMotion (10.10.1.x/24 subnet), vSwitch 2, VMkernel 2
NIC3 Intel - iSCSI (10.10.10.x/24 subnet), vSwitch 3, VMkernel 3
NIC4 Intel - iSCSI 2 (10.10.11.x subnet), vSwitch 4, VMkernel 4
So ports 3 & 4 are NICs 3 & 4 from that description.
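As a side note, for Round Robin to actually use both of those links, both iSCSI VMkernel ports should be bound to the software iSCSI adapter. A rough example from the ESXi shell (vmhba33, vmk3 and vmk4 are placeholders for your actual adapter and VMkernel names):

# bind both iSCSI vmkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk3
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk4
# verify that the device now shows two active paths
esxcli storage nmp path list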
Max Kolomyeytsev
StarWind Software
craggy
Posts: 55
Joined: Tue Oct 30, 2012 3:33 pm

Mon Apr 22, 2013 8:37 pm

Sorry, I get you now.

Yep, if I disable port 3 or 4 and just use a single path I can saturate the link and get 120MB/s / 123MB/s read/write.
Max (staff)
Staff
Posts: 533
Joined: Tue Apr 20, 2010 9:03 am

Wed Apr 24, 2013 12:36 pm

Yes, I've got that.
I was wondering if it aggregates well when you only use 3 & 4 together.
Max Kolomyeytsev
StarWind Software