I did not want to leave the HP switch in because it was a single unmanaged switch. I liked the idea of using the Avaya 5510s, which I could stack and spread the SAN/host connections across multiple links for failover. Plus I can achieve greater speeds - plus I'm persistent and hate to give up!
I have been able to spend a good deal of time on this, and I came across a problem that I was able to replicate several times. It's what has been giving me mixed results since day one of my performance testing - I thought I had found something new, but it was already identified earlier this year. It's actually a bug in my ESXi host that is fixed in a patch we have not applied yet, but there is a workaround. The problem is described in the link below.
http://kb.vmware.com/selfservice/micros ... Id=2008144
I had 2 virtual switches configured on my ESXi host, with 2 VMkernel ports in each. The article describes the issue: if a link goes down in this setup, the vSwitch will try to push traffic from the 2 VMkernel ports out one physical uplink, causing latency and dropped packets. The workaround is to create a vSwitch for each VMkernel, so I created 4 vSwitches, each with 1 VMkernel and 1 uplink, and now I can reboot the switch stack and the speed doesn't drop. I haven't gotten a chance to speak with VMware again since I corrected this to ask why, when I put the HP ProCurve in, I was still at least able to get steady speed results. That's the only head scratcher I've got left with that one...
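In case it helps anyone replicate the workaround, this is roughly what the one-VMkernel-per-vSwitch layout looks like from the ESXi shell. The vSwitch/portgroup/vmk names, vmnic number, IP address, and the vmhba33 software iSCSI adapter below are just example values, not my exact config:

    esxcli network vswitch standard add --vswitch-name=vSwitch2
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic2
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=iSCSI-2
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-2
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.10.12 --netmask=255.255.255.0 --type=static
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

Repeat that for each path so every vSwitch ends up with exactly one VMkernel port and one physical uplink bound to the iSCSI adapter.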
My 2nd issue is adding the secondary array back into the mix, where my read performance drops again once it's added. Of course I wasn't lucky enough to have one fix kill 2 birds with 1 stone! Once I add the partner SAN to the storage adapter in ESXi, my write performance stays lightning fast but my read performance drops again. I'm going to reach out to VMware again today to see if it's another issue with their software or if this points me back to Avaya.
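For reference, by "adding the partner SAN to the storage adapter" I just mean adding it as a dynamic discovery target on the software iSCSI adapter and rescanning, something along these lines (the partner's IP and the vmhba33 adapter name are example values):

    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.202:3260
    esxcli storage core adapter rescan --adapter=vmhba33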
I'll also note that any of my other testing/comments about changes made to the Avaya switches should be ignored, since the culprit was the issue described above. Everything that the user paulow1978 described in the link below needs to be done to get optimal performance. Keep in mind that if you are using Avaya/Nortel 5510-48T switches, there is limited QoS buffer space on the 5510s. You have to spread out your SAN/host connections every 12 ports, since each bank of 12 ports shares a buffer. I have a stack of 4 switches, so I have my SANs spread out in ports 1/13, 2/13, 3/13, 4/13 and the servers in 1/1, 2/1, 3/1, 4/1, etc.
http://www.starwindsoftware.com/forums/ ... 85-15.html
I'll post my outcome after working with the techs on this second issue with my secondary SAN performing sluggishly, although it may be a while as I'm heading out to the McAfee security conference next week.
