Hello -
I'm experiencing some pretty poor performance when mounting VMFS5 volumes in ESXi 5.1 from a Windows 2008 R2 StarWind iSCSI SAN (running on physical hardware). I'm starting out with the simplest possible test: copying an ISO image from one datastore to another. The source datastore is a RAID-5 capable of 300+MB/sec, and the destination on the StarWind SAN side is also a RAID-5 capable of 200MB/sec writes (I see those numbers every day with our Windows-based backup software).
I installed StarWind and configured the standard software iSCSI initiator in ESXi. Everything went smoothly on the configuration side. I deployed a few VMs to the iSCSI volume and immediately hit a hard speed limit of 10-20MB/sec (100-200Mbps on the network side).
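For what it's worth, here's roughly how I've been quantifying the cap from inside a guest whose virtual disk lives on the iSCSI datastore. It's just a minimal sequential-write timer (the file path is a placeholder, and MB here means MiB), and its numbers line up with what I see copying ISOs:

#!/usr/bin/env python3
# Minimal sequential-write timer. Run inside a guest VM whose disk
# sits on the iSCSI-backed datastore. PATH is a placeholder.
import os
import time

PATH = "testfile.bin"        # placeholder: file on the iSCSI-backed disk
CHUNK = b"\0" * (4 << 20)    # write in 4 MiB chunks
TOTAL = 2 << 30              # push 2 GiB total

written = 0
start = time.time()
with open(PATH, "wb") as f:
    while written < TOTAL:
        f.write(CHUNK)
        written += len(CHUNK)
    f.flush()
    os.fsync(f.fileno())     # force the data out of the OS cache
elapsed = time.time() - start
print(f"{written / 2**20:.0f} MiB in {elapsed:.1f}s = "
      f"{written / 2**20 / elapsed:.1f} MB/s")
os.remove(PATH)              # clean up the test file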
This host also serves as our backup device on the Windows/SQL side, so I know the NICs and RAID volumes can easily handle the full bandwidth of a 1GbE port. In fact, the disk volumes are configured to handle full saturation of all four 1GbE ports, up to 400MB/sec across three separate RAID-5 arrays of 15 spindles apiece (we do this nearly every night during our backup process).
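If it helps isolate things, I can also take iSCSI out of the picture entirely with a raw TCP push between the StarWind box and another host on the same switch. Below is a minimal sketch of what I'd run on each end (the IP and port are placeholders, nothing StarWind-specific); a healthy 1GbE path should report around 110MB/sec:

#!/usr/bin/env python3
# Raw TCP throughput test, independent of iSCSI. Run "server" mode
# on the StarWind box, client mode (no argument) on another host on
# the same switch/VLAN. HOST and PORT below are placeholders.
import socket
import sys
import time

HOST = "192.168.1.10"   # placeholder: IP of the StarWind box
PORT = 5001             # placeholder: any free port (not 3260)
CHUNK = 1 << 20         # 1 MiB per send
TOTAL = 1 << 30         # push 1 GiB total

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        received = 0
        start = time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            received += len(data)
        elapsed = time.time() - start
        print(f"received {received / 2**20:.0f} MiB at "
              f"{received / 2**20 / elapsed:.1f} MB/s from {addr[0]}")

def client():
    payload = b"\0" * CHUNK
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        sent = 0
        start = time.time()
        while sent < TOTAL:
            cli.sendall(payload)
            sent += CHUNK
        cli.shutdown(socket.SHUT_WR)
        elapsed = time.time() - start
        print(f"sent {sent / 2**20:.0f} MiB at "
              f"{sent / 2**20 / elapsed:.1f} MB/s")

if __name__ == "__main__":
    server() if sys.argv[1:] == ["server"] else client()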
I've read two or three other posts on this topic, and sadly the "solution" in each case was that the poster got frustrated, rebuilt the environment from scratch, and the issue went away... Unfortunately I don't have that luxury in my case, as this is a production backup server.
So after spending a TON of time troubleshooting and reading articles on slow ESXi iSCSI performance, I submitted a support ticket with VMware (we're fully supported on their Production SLA). They said they couldn't help me because the StarWind iSCSI software is only listed on the HCL up to ESXi 4.1.
So two questions before I spend any more time on this:
1) Does StarWind plan to get at least the paid version of this onto the HCL for ESXi 5.1? I can't purchase software for use with VMware unless it's on the HCL.
2) Are there any other advanced troubleshooting guides for connecting VMware volumes to the StarWind SAN? I've seen the ESXi configuration guide with the MPIO settings, but that isn't really relevant when I can't push even a single 1GbE NIC past 20% utilization. There has to be a logical reason why it's capped at 20MB/sec. The only other similarity with those posts is that my RAID controllers are LSI, but so are probably 75% of the controllers on the market, since both Dell and HP ship LSI-based controllers now.
Any help would be greatly appreciated.
Thanks,
- Jeff