I have an iSCSI server set up with the following configuration:
Dell R510
PERC H700 RAID controller with 1GB cache
RAID 10
Windows Server 2012 R2
Intel X520 10GbE Ethernet adapter
12 near-line SAS drives
I am currently running the latest free version of StarWind, 8.0.7145.
It is connected to an HP 8212 switch over 10Gb, which in turn connects via 10Gb to our VMware hosts. I have a dedicated VLAN just for iSCSI and have enabled jumbo frames on that VLAN.
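In case it matters, this is how I have been verifying that jumbo frames actually pass end-to-end (assuming a 9000-byte MTU on the vmkernel ports, the switch VLAN, and the Windows NIC; the IPs below are placeholders):

    # From an ESXi host: ping the StarWind box with don't-fragment set.
    # 8972 = 9000-byte MTU minus 28 bytes of IP/ICMP headers.
    vmkping -d -s 8972 192.168.50.10

    # From the Windows StarWind server toward a vmkernel port:
    ping -f -l 8972 192.168.50.21

If either of these fails while a 1472-byte ping succeeds, the MTU is mismatched somewhere along the path.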
I frequently see very high latency on my iSCSI storage, so much so that it can time out or hang VMware.
There are only three VMs on this iSCSI datastore: two running VMware Data Protection Manager and one running plain Windows that has backups pushed to it throughout the day. I don't need amazing performance for this, but I expect better than I am getting.
I am trying to determine why I see such high latency, 100+ ms. During these episodes, Perfmon's Avg. Disk sec/Transfer counter for the physical disk sits around 0.01 to 0.02 (10-20 ms).
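My rough math on the spindles, assuming ~75-100 random IOPS per 7.2K NL-SAS drive: RAID 10 across 12 drives is 6 mirrored pairs, so on the order of 450-600 sustained random write IOPS for the whole array, since each write lands on both sides of a mirror. If the backup jobs keep, say, 60 IOs outstanding against ~500 IOPS, Little's law gives 60 / 500 = 0.12 s, i.e. 120 ms of average latency from queuing alone, even while each individual disk IO still completes in 10-20 ms. That would match both numbers I am seeing, but I have not confirmed the actual queue depths.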
Any thoughts on configuration changes I could make to my VMware environment or network card settings, or ideas on where to troubleshoot? I have not been able to find what is causing it. I referenced this document for changes to my iSCSI settings:
http://en.community.dell.com/techcenter ... 03565.aspx
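To narrow down where the latency lives, I have also been watching esxtop on one of the hosts (interactive keys as of ESXi 5.5):

    # On an ESXi host, via SSH:
    esxtop
    # press 'u' for the disk device view, then watch:
    #   DAVG/cmd - latency at or below the device (network + StarWind + disks)
    #   KAVG/cmd - time spent queued inside the VMkernel
    #   GAVG/cmd - total latency the guest sees (roughly DAVG + KAVG)

My understanding is that high DAVG with low KAVG points at the network or the StarWind box, while high KAVG points at queuing on the ESXi side.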
Here are some IOMeter results run directly on the physical machine that is running StarWind:
And here are some from a VM on VMware ESXi 5.5, connected over iSCSI:
Why is there such a massive difference in the latency of the second test?
I have been doing some reading about using Storage Spaces instead of the PERC H700: creating twelve single-drive RAID 0 virtual disks on the H700 (since it has no pass-through mode), disabling the PERC's cache, and then creating a mirror across the 12 drives in Storage Spaces. Does anyone have experience with that? Would it help with performance?
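If I go that route, I assume the Storage Spaces side would look roughly like this on Server 2012 R2 (a sketch only; the pool and disk names are placeholders, and the 12 single-drive RAID 0 virtual disks would have to show up as poolable disks first):

    # List the disks Storage Spaces considers poolable
    Get-PhysicalDisk -CanPool $true

    # Pool them, then carve a two-way mirror across all 12
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "iSCSIPool" -StorageSubsystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "iSCSIPool" -FriendlyName "iSCSIMirror" -ResiliencySettingName Mirror -NumberOfColumns 6 -UseMaximumSize

NumberOfColumns 6 is my guess for striping a two-way mirror across 12 disks; corrections welcome.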
Thank you for your time.