
http://www.stuartcheshire.org/papers/NagleDelayedAck/
Yes, please give this one a try and let us know whether it helped in your case or not. Thank you!
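For reference, the issue described in that paper is the interaction between Nagle's algorithm on the sending side and delayed ACK on the receiving side. As a minimal illustration only (nothing here is ESXi- or StarWind-specific, and the socket is never connected to anything), this is roughly how an application would turn Nagle off on a plain TCP socket in Python:

import socket

# Minimal sketch: a TCP socket with Nagle's algorithm disabled via TCP_NODELAY.
# With Nagle off, small writes go out immediately instead of being held back
# while earlier data is still waiting for the peer's (possibly delayed) ACK.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)   # turn Nagle off
print("TCP_NODELAY:", s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
s.close()

The delayed-ACK half of the pair has to be changed on the receiver instead, which appears to be what the VMware KB article mentioned further down in the thread deals with.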
CyberNBD wrote:The ESXi setting seems to have a slightly different meaning indeed. It has more to do with detecting network performance and congestion.
The mechanisms seem to be a little different on both sides, which results in poor performance.
I picked up some background on this while doing some Cisco self-study, and basically it comes down to this:
* For every packet that is sent, the receiver sends an ACK packet back.
* When starting a transmission, the sender sends one packet and waits for an ACK.
* When the ACKs come back fine, the sender increases the number of packets it sends per ACK.
* When, after a while, the sender doesn't receive an ACK on time, or at all, it assumes there are network issues, so it reverts to the last number of packets that worked.
* Once in a while the sender will try to increase the number of packets again, to detect whether the network issues have been resolved.
That way the sender can detect and adapt itself to network performance and congestion. It's understandable that if both sides don't use exactly the same settings to accomplish this, it can result in serious performance issues (resending packets in different ways when ACKs are lost, or waiting for ACKs that are never going to come, etc.).
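Purely for illustration, here is a toy model of the ramp-up/back-off loop described above. The window sizes and the rounds with "lost" ACKs are made up, and real TCP slow start / congestion avoidance is more involved than this; the sketch only mirrors the behaviour listed in the bullets:

# Toy model of the ramp-up / back-off behaviour described above.
def simulate(rounds, ack_lost_at):
    window = 1        # packets sent per round, start with a single packet
    last_good = 1     # last window size that was ACKed fine
    history = []
    for r in range(rounds):
        ack_ok = r not in ack_lost_at
        if ack_ok:
            last_good = window
            window *= 2              # ACKs arriving fine -> send more per round
        else:
            window = last_good       # ACK late or lost -> fall back to what worked
        history.append((r, window, "ok" if ack_ok else "backed off"))
    return history

for r, w, status in simulate(10, ack_lost_at={5, 6}):
    print(f"round {r}: window={w:3d} packets ({status})")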
I found this VMware KB article about it: http://kb.vmware.com/selfservice/micros ... Id=1002598
Strangely enough, they talk about poor read speeds while we are experiencing poor write speeds. Interesting.
rchisholm wrote:Then I made the most important change. I went into the console of the ESXi servers and changed the number of IOPS per path from 1000 to 1. This made the writes 10X as fast.
I didn't mention this when posting my results, but I used 3 IOPS per path. I will see if I can compare the results using different settings.
Did you set up each of your iSCSI NICs in ESXi in separate virtual switches? When I had a single virtual switch and the standard 1000 IOPS I got about 6 MB/s. Going to separate virtual switches for each iSCSI NIC gave me about 60 MB/s with sufficient load. Then I did the IOPS change, which brought it over 600 MB/s. So, with the changes combined, it increased the write speed 100X.
rchisholm wrote:Did you set up each of your iSCSI NICs in ESXi in separate virtual switches?
I'm going to continue testing with different IOPS settings and frame sizes. I will also be testing with Xen next week. I still need to do some testing with the fast RAIDs as well.
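To make the IOPS-per-path numbers above a bit more concrete, here is a toy sketch of a round-robin policy that only switches to the next path after N commands. This is not ESXi's actual path-selection code, and the burst size and path count are invented; it just shows why a burst sticks to one or two paths at the default of 1000 but spreads across all paths at 1 (or 3):

# Toy round-robin multipathing model: rotate to the next path only after
# `iops_per_path` commands have been issued on the current one.
def distribute(total_ios, paths, iops_per_path):
    counts = [0] * paths
    current, on_current = 0, 0
    for _ in range(total_ios):
        counts[current] += 1
        on_current += 1
        if on_current >= iops_per_path:      # limit reached -> rotate path
            current = (current + 1) % paths
            on_current = 0
    return counts

burst = 2000  # hypothetical burst of 2000 commands over 4 paths
print("iops=1000:", distribute(burst, 4, 1000))   # -> [1000, 1000, 0, 0]
print("iops=1   :", distribute(burst, 4, 1))      # -> [500, 500, 500, 500]

Spreading the commands evenly across all paths fits the write-speed jump reported above, though the real gain will depend on the setup.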
anton (staff) wrote:VLANs or virtual switches should not affect / boost performance. So it's something else...
Yes, please keep us updated. Thank you!
rchisholm wrote:I left out that the NICs were teamed in the single virtual switch. That probably had to do with the performance problem.