ESXi VM Disk Performance

Software-based VM-centric and flash-friendly VM storage + free version

galbitz
Posts: 2
Joined: Thu May 23, 2013 3:00 am

Thu May 23, 2013 3:15 am

Hi,

I am testing the StarWind iSCSI SAN. Currently I have the following issue and configuration:

I have 24 SSDs in a RAID 10. Local benchmarks of the array produce reads around 5,000 MB/s and writes around 3,500 MB/s (with array caching off).

If I present an iSCSI target to ESXi 5.1 and configure a disk in the VM that points back to the StarWind target, I get about 500 MB/s read and 450 MB/s write.

The ESXi host is connected to the StarWind server via a single Intel X520 10 GbE adapter in each system (for simplicity; I have tried two adapters with MPIO with virtually no improvement). I am currently using a single 0.5 m SFP+ passive cable, so there is no switch between them. I realize I should not expect the full 10 Gbps, but since my disks are capable of much more I would expect at least half. Current network utilization on the StarWind host shows a maximum of 2.5 Gbps.

I also have write caching enabled on the StarWind server (128 GB of RAM in the physical StarWind server). Am I wrong in thinking that writes should therefore run at adapter speed?

One thing I did notice is that ESXi treats jumbo frames as a 9000 MTU while Windows treats them as 9014, but setting both to 1500 nets the same results.
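For reference, a rough back-of-the-envelope ceiling for a single 10 GbE link (plain arithmetic with textbook Ethernet/IP/TCP overheads, ignoring iSCSI PDU headers) suggests the wire itself allows well over 1 GB/s, so ~500 MB/s is nowhere near line rate:

Code:
    # Approximate usable iSCSI-over-TCP payload rate on one 10 GbE link.
    # Overhead constants are standard textbook figures, not measurements.
    LINK_GBPS = 10.0          # raw line rate
    WIRE_OVERHEAD = 38        # Ethernet header + FCS + preamble + inter-frame gap (bytes/frame)
    IP_TCP_HEADERS = 40       # IPv4 (20) + TCP (20), no options

    def ceiling_mb_s(mtu):
        payload = mtu - IP_TCP_HEADERS      # TCP payload carried per full-size frame
        on_wire = mtu + WIRE_OVERHEAD       # bytes the frame actually occupies on the wire
        return LINK_GBPS / 8 * 1000 * payload / on_wire   # Gbit/s -> MB/s, scaled by efficiency

    for mtu in (1500, 9000):
        print("MTU %d: ~%.0f MB/s usable" % (mtu, ceiling_mb_s(mtu)))
    # MTU 1500: ~1187 MB/s usable
    # MTU 9000: ~1239 MB/s usable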

I recall getting similar numbers when I had a much less capable disk array. Is there some limitation in ESXi, or are the Intel network adapters possibly not capable? I am using jumbo frames on both sides plus the iSCSI software initiator in VMware (the Intel NICs do not show up as an iSCSI adapter). I would be interested in hearing from anyone with these adapters or ESXi, or anyone who has gotten much better single-guest performance, and what adapters you are using. MPIO with a second X520 shows no gain. I am open to making hardware changes if necessary.
galbitz
Posts: 2
Joined: Thu May 23, 2013 3:00 am

Thu May 23, 2013 4:05 pm

I wanted to post a follow-up.

I tried switching from the LSI SAS adapter to the VMware Paravirtual SCSI adapter. The results were the same. Enabling the second Intel X520 and using MPIO with the IOPS limit set to 1 doubled the throughput to about 920 MB/s read and 750 MB/s write. For some reason the LSI Logic SAS adapter did not benefit from the second NIC, but the paravirtual adapter did. I would still expect more throughput, but at least I am heading in the right direction. Still looking for someone else using these NICs to compare results.
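For anyone following along, the Round Robin / IOPS=1 change comes down to two esxcli calls per device. Here is a minimal sketch wrapping them in Python (ESXi bundles a Python interpreter); the naa identifier is a placeholder, look up the real one with "esxcli storage nmp device list":

Code:
    # Set Round Robin path selection with an IOPS limit of 1 on one iSCSI device.
    import subprocess

    DEVICE = "naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"   # placeholder device identifier

    def esxcli(args):
        # check_call is available on both the older Python ESXi ships and Python 3
        subprocess.check_call(["esxcli"] + args)

    # Switch the path selection policy for the device to Round Robin.
    esxcli(["storage", "nmp", "device", "set",
            "--device=" + DEVICE, "--psp=VMW_PSP_RR"])

    # Rotate to the next path after every I/O instead of the default 1000.
    esxcli(["storage", "nmp", "psp", "roundrobin", "deviceconfig", "set",
            "--device=" + DEVICE, "--type=iops", "--iops=1"])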
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am
Contact:

Mon May 27, 2013 8:59 am

Hi,

First of all, what StarWind device type are you using (DD, IBV, HA, etc.)?
Did you have a chance to benchmark the network before putting the SAN into the system?
Have you seen our Benchmarking Guide?
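If the raw link has not been benchmarked on its own yet, even a quick single-stream TCP test helps separate the network from the storage stack. A minimal sketch using plain Python sockets (port and addresses are arbitrary placeholders; it measures only the NICs and TCP, not iSCSI or the array):

Code:
    # Crude single-stream TCP throughput check between the two hosts.
    # Run "python tcpbench.py server" on one box, then
    # "python tcpbench.py client <server-ip>" on the other.
    import socket, sys, time

    PORT = 5201               # arbitrary test port
    CHUNK = 1024 * 1024       # 1 MiB per send/recv
    DURATION = 10             # seconds the client transmits

    def server():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            total += len(data)
        elapsed = time.time() - start
        print("received %.0f MB in %.1f s -> %.0f MB/s" % (total / 1e6, elapsed, total / 1e6 / elapsed))

    def client(host):
        sock = socket.create_connection((host, PORT))
        payload = b"\0" * CHUNK
        deadline = time.time() + DURATION
        while time.time() < deadline:
            sock.sendall(payload)
        sock.close()

    if __name__ == "__main__":
        client(sys.argv[2]) if sys.argv[1] == "client" else server()

If a single stream lands far below line rate, the bottleneck sits underneath iSCSI and no StarWind setting will recover it.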
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
kspare
Posts: 60
Joined: Sun Sep 01, 2013 2:23 pm

Sun Sep 29, 2013 1:53 pm

Did you ever figure this out? I'm running into the same problems with the X520 NIC.

We are getting about 2.5 Gbps on reads. I have had it as high as 6 Gbps, and I've even seen 8, but I can't keep it stable.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am
Contact:

Tue Oct 01, 2013 3:17 pm

Do you have the same config?
The performance of our solution is limited by the hardware of the server where it is running, and mostly by the wire speed. As you may know, Intel and Microsoft achieved a 1,000,000 IOPS result using StarWind as the SAN solution (to learn more about it you can use this link: http://www.starwindsoftware.com/news/32). So basically all that is needed (and this is not the easiest part) is a properly configured system.
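As a quick way to relate IOPS figures to the MB/s numbers quoted in this thread (plain arithmetic; the block sizes below are illustrative, not the ones used in that benchmark):

Code:
    # Throughput is simply IOPS x I/O size: huge IOPS numbers usually come from
    # small blocks, while MB/s-style tests use large ones.
    def mb_per_s(iops, block_kb):
        return iops * block_kb / 1000.0    # decimal units for simplicity

    print(mb_per_s(1000000, 4))    # 1M IOPS at 4 KB   -> ~4000 MB/s
    print(mb_per_s(8000, 64))      # 8k IOPS at 64 KB  ->  ~512 MB/s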
That is why we always ask users pretty much the same questions, and here they are:
I need to know the following information from you in order to provide a solution:
1. Operating system on servers participating in iSCSI SAN and client server OS
2. RAID array model, RAID level and stripe size used, caching mode of the array used to store HA images, Windows volume allocation unit size
3. NIC models, driver versions (driver manufacturer/release date) and NIC advanced settings (Jumbo Frames, iSCSI offload etc.)
4. Network scheme.
Also, there is a document for pre-production SAN benchmarking:
http://www.starwindsoftware.com/starwin ... ice-manual
And a list of advanced settings which should be implemented in order to gain higher performance in iSCSI environments:
http://www.starwindsoftware.com/forums/ ... t2293.html
http://www.starwindsoftware.com/forums/ ... t2296.html

Please provide me with the requested information and I will be able to assist you further with the troubleshooting.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com