
My Performance with ESX Multipathing, Is it okay?

Posted: Wed May 25, 2011 12:44 pm
by habibalby
Any tips to improve it? This is without any tweaking: just normal multipathing, each vSwitch mapped to two vmnics, and the StarWind server with two teamed pNICs.



ESX Hosts:
Dell PowerEdge 2850 x 2
StarWind:
HP DL380 G3

Physical Switch:
Dell 2850 Gigabit

Re: My Performance with ESX Multipathing, Is it okay?

Posted: Wed May 25, 2011 4:00 pm
by anton (staff)
Pretty close to GbE wire speed. You could extract some more juice, but not more than 10% in general. Nice setup!

P.S. I'd try adding MPIO to increase both redundancy and performance.

Re: My Performance with ESX Multipathing, Is it okay?

Posted: Wed May 25, 2011 4:10 pm
by habibalby
Thanks, is there any whitepaper to test against?

Re: My Performance with ESX Multipathing, Is it okay?

Posted: Sun May 29, 2011 3:16 pm
by Constantin (staff)
We recommend the following configuration:
a) The number of VMkernel ports matches the number of NICs used for iSCSI on the ESX(i) host and on the StarWind server
b) All VMkernel ports run under one vSwitch
c) Round Robin is used for load balancing
d) All NICs are in the same subnet
e) The VMkernel ports are bound to the iSCSI HBA using the CLI
f) Experiment with the number of IOPS issued before switching paths. Lowering it (usually to 1-3) often improves performance significantly.
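Put together, the steps above come down to a handful of commands on the ESX(i) 4.x host. A sketch only: vmk1/vmk2, vmhba33, and the eui device ID are placeholder names to substitute with your own, and the commands must run on the host itself.

```shell
# Bind each iSCSI VMkernel port to the software iSCSI adapter
# (vmk1/vmk2 and vmhba33 are example names; substitute your own)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Set Round Robin as the path selection policy for the device
esxcli nmp device setpolicy --device eui.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# Switch paths after 3 I/Os instead of the default 1000
esxcli nmp roundrobin setconfig --device eui.xxxxxxxxxxxxxxxx --iops 3 --type iops
```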

Re: My Performance with ESX Multipathing, Is it okay?

Posted: Mon May 30, 2011 10:25 am
by habibalby
Hello,
a) What do you mean by aligned to the number of NICs on the ESX host and on the StarWind server used for iSCSI?
b) Yes, my VMkernel ports are on one vSwitch
c) I use Round Robin
d) All my NICs are on the same subnet
e) I have bound the VMkernel port groups to vmhba33 (the iSCSI HBA) using the CLI:
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
f) Could you elaborate on this point, please?

Thanks,
S.Hussain

Re: My Performance with ESX Multipathing, Is it okay?

Posted: Mon May 30, 2011 11:38 am
by Constantin (staff)
a) For example, if you have 3 NICs on the StarWind server used for iSCSI, then to achieve maximum performance you should use 3 NICs on the ESX(i) host too, and 3 VMkernel ports.
f) This command:

esxcli nmp roundrobin setconfig --device [UUID] --iops 3 --type iops

allows you to change the number of IOPS the initiator sends down a path before switching to another one. It has nothing to do with rescans, but it can significantly improve overall performance.
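To see why a low IOPS limit matters, here is a toy model (not VMware code) of how the Round Robin PSP hands out commands: the initiator sends the configured number of I/Os down the current path, then rotates to the next.

```python
# Toy model of VMware's Round Robin PSP "iops" limit: the initiator sends
# `iops_limit` commands down the current path, then rotates to the next.
# Path names below are illustrative, copied from the esxcli output style.
def simulate_round_robin(num_ios, paths, iops_limit):
    """Return how many I/Os each path receives for a given IOPS limit."""
    counts = {p: 0 for p in paths}
    current, sent = 0, 0
    for _ in range(num_ios):
        counts[paths[current]] += 1
        sent += 1
        if sent >= iops_limit:          # limit reached: switch path
            current = (current + 1) % len(paths)
            sent = 0
    return counts

paths = ["vmhba33:C0:T0:L0", "vmhba33:C1:T0:L0"]
# Default limit of 1000: the first thousand I/Os all go down one path.
print(simulate_round_robin(1000, paths, 1000))
# Limit of 3: the same thousand I/Os interleave across both paths.
print(simulate_round_robin(1000, paths, 3))
```

With the default limit, a burst of I/O queues on a single path while the other link sits idle; with a limit of 3 the load interleaves almost evenly, which is what keeps both GbE links busy.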

If you have configured everything according to our recommendations and still face long rescans, I suggest the following two things:
a) check that all the latest ESX(i) patches are installed
b) wait for a reply from VMware.
I have run into this a few times but had no way to resolve it. We checked the VMkernel and StarWind logs from such configurations and didn't find any errors at all. If you get a resolution from VMware, I would appreciate it if you could post it here.

Re: My Performance with ESX Multipathing, Is it okay?

Posted: Mon May 30, 2011 12:03 pm
by habibalby
Hi,

Actually, I have two NICs in my StarWind server and they are teamed, without jumbo frames. When I enabled jumbo frames on them, I hit an issue with the LUN when it was presented to the ESX host: it gave me a "Failed on Physical Path" error pointing to the eui UUID of the StarWind LUN. Once I removed jumbo frames from the StarWind server, the error disappeared from the ESX servers.

So, two NICs in the StarWind server and two NICs in the vSwitch on the ESX side.

I changed the IOPS value to 3.

Before:

Device Display Name: HP iSCSI Disk (eui.e9b88375cce28648)
Storage Array Type: VMW_SATP_DEFAULT_AA
Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration.
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=1: NumIOsPending=0,numBytesPending=0}
Working Paths: vmhba33:C0:T0:L0, vmhba33:C1:T0:L0

After:

Device Display Name: HP iSCSI Disk (eui.e9b88375cce28648)
Storage Array Type: VMW_SATP_DEFAULT_AA
Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration.
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=iops,iops=3,bytes=10485760,useANO=0;lastPathIndex=0: NumIOsPending=0,numBytesPending=0}
Working Paths: vmhba33:C0:T0:L0, vmhba33:C1:T0:L0
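The "Path Selection Policy Device Config" line above is just a flat key=value string, so the change can be checked programmatically. A small sketch that parses the printed format (this mirrors the esxcli output shown in this thread, not any official VMware API):

```python
# Parse the "{policy=...,iops=...,...}" string printed by esxcli above.
# This relies only on the output format shown in this thread.
def parse_psp_config(line):
    inner = line[line.index("{") + 1 : line.index("}")]
    # the string mixes ',', ';' and ':' as separators; normalize them
    settings = {}
    for part in inner.replace(";", ",").replace(":", ",").split(","):
        if "=" in part:
            key, _, value = part.partition("=")
            settings[key.strip()] = value.strip()
    return settings

after = ("{policy=iops,iops=3,bytes=10485760,useANO=0;"
         "lastPathIndex=0: NumIOsPending=0,numBytesPending=0}")
cfg = parse_psp_config(after)
print(cfg["policy"], cfg["iops"])  # confirms the limit is now 3
```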

Re: My Performance with ESX Multipathing, Is it okay?

Posted: Mon May 30, 2011 12:26 pm
by Constantin (staff)
Disable teaming on the StarWind host (sorry, I didn't mention that previously).
Also, jumbo frames must be enabled on both the VMkernel port and the vSwitch. The catch is that while you can change the MTU on a running vSwitch, the VMkernel port has to be re-created. You can read the how-to in this topic: http://www.starwindsoftware.com/forums/ ... t2293.html
If the error continues to appear, I would suggest trying a direct connection between the SAN and the ESXi host.

Re: My Performance with ESX Multipathing, Is it okay?

Posted: Mon May 30, 2011 12:27 pm
by habibalby
Before iops 3 configuration


After iops 3 configuration

Re: My Performance with ESX Multipathing, Is it okay?

Posted: Mon May 30, 2011 12:30 pm
by habibalby
Okay, I will do another test without NIC teaming, but I will enable jumbo frames on each NIC. Jumbo frames are not configured on the physical switch; it's just a plain Gigabit Ethernet switch.

Re: My Performance with ESX Multipathing, Is it okay?

Posted: Mon May 30, 2011 1:01 pm
by Constantin (staff)
Check whether the switch supports jumbo frames: if it does, it will simply pass them through; if not, it will drop them and you'll run into issues on the ESX side anyway.
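For context on what jumbo frames actually buy, some back-of-envelope arithmetic (assuming standard 18-byte Ethernet framing plus 40 bytes of IP/TCP headers, and ignoring preamble and inter-frame gap):

```python
# Fraction of on-the-wire bytes that carry actual payload, per frame size.
# Overheads assumed: 14 B Ethernet header + 4 B FCS, 20 B IP + 20 B TCP.
def payload_efficiency(mtu, ip_tcp_overhead=40, eth_overhead=18):
    payload = mtu - ip_tcp_overhead   # TCP payload carried per frame
    wire = mtu + eth_overhead         # bytes on the wire per frame
    return payload / wire

print(f"MTU 1500: {payload_efficiency(1500):.1%}")   # standard frames
print(f"MTU 9000: {payload_efficiency(9000):.1%}")   # jumbo frames
```

The raw throughput gain is only a few percent; the bigger win is sending fewer frames and thus doing less per-packet CPU work, and it only materializes when the NICs, the physical switch, and the vSwitch/VMkernel all carry the larger MTU end to end.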