Hi,
I am testing the StarWind iSCSI NAS. Here is my current configuration and the issue I am running into:
I have 24 SSDs in a RAID 10. Local benchmarks of the array show roughly 5,000 MB/s reads and 3,500 MB/s writes (with array caching off).
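Just to sanity-check the local numbers, this is the rough RAID-10 math I am assuming (12 mirrored pairs, per-drive figures implied rather than measured individually):

# Rough RAID-10 sanity check for the local array numbers above.
# Assumption: 24 drives = 12 mirrored pairs; reads can be served by
# every drive, writes land on one drive of each pair.
drives = 24
pairs = drives // 2

array_read_mbs = 5000    # measured locally, MB/s
array_write_mbs = 3500   # measured locally, MB/s

per_drive_read = array_read_mbs / drives    # ~208 MB/s per SSD
per_drive_write = array_write_mbs / pairs   # ~292 MB/s per SSD

print(f"Implied per-drive read:  {per_drive_read:.0f} MB/s")
print(f"Implied per-drive write: {per_drive_write:.0f} MB/s")

Those per-drive figures look reasonable for SATA SSDs, so I am fairly confident the array itself is not the bottleneck.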
If I present an iSCSI target to ESXi 5.1 and configure a disk in the VM that points back to the StarWind target, I get about 500 MB/s read and 450 MB/s write.
The ESXi host is connected to the StarWind server via a single Intel X520 10 GbE adapter in each system (for simplicity; I have also tried two adapters with MPIO with virtually no improvement). I am currently using a single 0.5 m passive SFP+ cable, so there is no switch between them. I realize I should not expect the full 10 Gbps, but since my disks are capable of much more I would expect at least half of it. Current network utilization on the StarWind host peaks at about 2.5 Gbps. I also have write caching enabled on the StarWind server (the physical StarWind server has 128 GB of RAM). Am I wrong in thinking that writes should therefore run at adapter speed? One thing I did notice is that ESXi considers a jumbo frame to be a 9000-byte MTU, while Windows considers it 9014, but setting both to 1500 nets the same results.
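For reference, this is the back-of-the-envelope unit conversion I am using to compare the link speed with what I am actually seeing (just arithmetic on the numbers above, nothing else measured):

# Put the 10 GbE line rate and the observed figures in the same units.
# Real iSCSI payload throughput will sit below the raw line rate because
# of Ethernet/IP/TCP/iSCSI overhead.
link_gbps = 10.0
line_rate_mbs = link_gbps * 1000 / 8                 # = 1250 MB/s ceiling

observed_read_mbs = 500                              # what the VM sees, MB/s
observed_util_gbps = 2.5                             # peak NIC utilization on the StarWind host
observed_util_mbs = observed_util_gbps * 1000 / 8    # = 312.5 MB/s

print(f"10 GbE ceiling:       {line_rate_mbs:.0f} MB/s")
print(f"VM read throughput:   {observed_read_mbs / line_rate_mbs:.0%} of line rate")
print(f"NIC peak utilization: {observed_util_mbs:.0f} MB/s")

So by my math I am getting roughly 40% of line rate at best, which is why I suspect something other than the wire or the disks.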
I recall getting similar numbers when I had a much less capable disk array. Is there some limitation in ESXi, or are the Intel network adapters possibly not capable of more? I am using jumbo frames on both sides plus the iSCSI software initiator in VMware (the Intel NICs do not show up as an iSCSI adapter). As mentioned, MPIO with a second X520 shows no gain. I would be interested in hearing from anyone with these adapters or ESXi, or anyone who has gotten much better single-guest performance, and what adapters you are using. I am open to making hardware changes if necessary.
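In case anyone wants to compare numbers, the in-guest pattern I am testing is roughly what this sketch does (a simplified illustration only, not the actual benchmark tool I am running; the path and sizes are placeholders):

import os, time

# Crude sequential throughput test against a file on the iSCSI-backed disk.
# PATH is a placeholder; point it at a drive that lives on the StarWind-backed
# datastore/disk. Note: the read pass can be inflated by the guest OS page
# cache, which is why real benchmark tools use unbuffered/direct I/O.
PATH = r"D:\iscsi_test.bin"
BLOCK = 1024 * 1024          # 1 MB blocks
BLOCKS = 4096                # 4 GB total
buf = os.urandom(BLOCK)

start = time.time()
with open(PATH, "wb", buffering=0) as f:
    for _ in range(BLOCKS):
        f.write(buf)
    os.fsync(f.fileno())
write_mbs = (BLOCK * BLOCKS / (1024 * 1024)) / (time.time() - start)

start = time.time()
with open(PATH, "rb", buffering=0) as f:
    while f.read(BLOCK):
        pass
read_mbs = (BLOCK * BLOCKS / (1024 * 1024)) / (time.time() - start)

print(f"sequential write: {write_mbs:.0f} MB/s, sequential read: {read_mbs:.0f} MB/s")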