Good morning,
Max (staff) wrote: Sounds great, please keep us posted!
Apart from a small problem caused by misalignment of the Windows Server 2003 VMs (which will be fixed soon), almost everything looks fine now. Almost:
A Storage vMotion of a powered-off 25 GB VM takes 15 minutes, which works out to only about 28 MB/s, roughly a quarter of what a single 1 GbE link can carry. The ESXi host has 3 NICs dedicated to iSCSI, and each of the two StarWind hosts has 2 NICs dedicated to the iSCSI target, giving 12 paths per LUN since all NICs are in the same subnet. Round robin is enabled. Network utilization on the StarWind hosts shows only 4-6% usage per NIC during a Storage vMotion, yet during ATTO benchmarking I can see up to 50% load per target NIC.
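In case it matters, this is how I double-checked that all 12 paths show up and that round robin is actually active (the naa ID below is a placeholder, yours will differ):
Code: Select all
esxcli storage nmp device list
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx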
Tweaks performed for ESXi (the exact commands are sketched after the list):
- disabled DelayedAck for every iSCSI target
- enabled jumbo frames on the VMkernel ports and vSwitches dedicated to iSCSI
- Round Robin IOPS limit set to 5 (also tried 1, 3, 10, 15, 20, 50, 100, 500 and 1000)
- round robin enabled
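For reference, this is roughly how I applied those tweaks (I'm on ESXi 5.x; vSwitch, vmk and device names are examples from my setup, adjust as needed):
Code: Select all
# jumbo frames on the iSCSI vSwitch and VMkernel port
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000
# round robin with an IOPS limit of 5 for one LUN
esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxxxxxxxxxxxxxxx -t iops -I 5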
Tweaks performed on the target side (Windows Server 2012 with the current StarWind release); a reg add sketch follows the list:
- GlobalMaxTcpWindowSize = 0x1400000
- TcpWindowSize = 0x1400000 (per interface, not globally; your guide is wrong here)
- TcpAckFrequency = 1 (per interface)
- TCPNodelay = 1 (per interface)
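In case anyone wants to replay these values, this is the pattern I used (the interface {GUID} is a placeholder; pick yours from the subkeys under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces):
Code: Select all
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v GlobalMaxTcpWindowSize /t REG_DWORD /d 0x1400000 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{GUID}" /v TcpWindowSize /t REG_DWORD /d 0x1400000 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{GUID}" /v TcpAckFrequency /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{GUID}" /v TCPNodelay /t REG_DWORD /d 1 /f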
and as per your guide:
Code: Select all
netsh int tcp set heuristics disabled
netsh int tcp set global autotuninglevel=normal
netsh int tcp set global ecncapability=enabled
netsh int tcp set global rss=enabled
netsh int tcp set global chimney=enabled
netsh int tcp set global dca=enabled
(autotuninglevel=normal, really? shouldn't that be "disabled" or "experimental"?)
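For reference, the resulting global settings can be verified with:
Code: Select all
netsh int tcp show global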
btw.:
Code: Select all
netsh int tcp set global congestionprovider=ctcp
returns an error on Windows Server 2012.
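If I understand it correctly, that netsh knob was removed in Server 2012 and the congestion provider moved to PowerShell; something along these lines should be the equivalent (treat it as a sketch, I haven't verified it end to end):
Code: Select all
Get-NetTCPSetting | Select-Object SettingName, CongestionProvider
Set-NetTCPSetting -SettingName InternetCustom -CongestionProvider CTCP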
The LUNs use write-back caching with a 2 GB cache, and all NICs are 1 GbE.
Any ideas why Storage vMotion is slow while ATTO shows good results?
bye
Volker