nbarsotti wrote:Hi Garrett,
I have looked at iPerf, but the problem is that virtually nothing can be installed or run from ESXi. iPerf can be run from the ESX service console, but that is not an option with ESXi. This fall vSphere 5 is supposed to launch and there will be no ESX, only ESXi; hopefully they will add some benchmarking and troubleshooting tools to ESXi at that time.
Nick
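A common workaround, since iPerf cannot run on ESXi itself, is to generate the traffic from a guest VM on that host so it still crosses the 10Gb uplink. A minimal sketch with iperf 2, assuming a Linux test VM on the ESXi host and 192.168.10.1 as the Windows 2008 R2 end of the crossover link (the address and the window size are placeholders):
On the Windows 2008 R2 server (listener):
iperf -s -w 256K
Inside the test VM on the ESXi host (sender, four parallel streams for 30 seconds):
iperf -c 192.168.10.1 -w 256K -P 4 -t 30
If the guest can push close to line rate, the 10Gb link and vSwitch are fine and the bottleneck is somewhere in the iSCSI/StarWind path rather than the network.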
nbarsotti wrote:Hi Anton,
1) I have 1 Windows 2008 R2 server with a dual port 10Gb NIC, and two ESXi hosts with dual port 10Gb NICs. The ESXi hosts connect to the Windows 2008 R2 server with cross-over cables (no switches). I am unaware of how to do any network throughput testing from an ESXi host. If you have any suggestions I would love to hear them.
2) Has StarWind found a way to switch caching modes while leaving a target available? Or do I still need to delete my target and create a new one? RAID controllers have been able to switch caching policies on the fly for years; why has it not been implemented by StarWind yet?
3) From my understanding, jumbo frames are a mandatory part of the 10Gb Ethernet specification, but they are DEFINITELY NOT the default configuration for Windows 2008 R2 or ESXi 4.1u1. I don't know how to do pure network bandwidth testing from within ESXi (open to suggestions). I have not enabled jumbo frames on my ESXi because the vSwitch that is connected to my 10Gb NICs is also connected to 1Gb NICs that need to connect to my physical non-jumbo-frame switch. I probably could enable jumbo frames on all the 10Gb NICs and the ESXi vSwitch without affecting my 1Gb Ethernet (see the sketch after this list), but my understanding was that jumbo frames are only good for a 10-20% speed bump. I'm seeing 50-60% lower speeds on my 10Gb.
4) You are correct that my server running StarWind is different from the original thread starter's. What is the same is that we both have servers with VERY fast SSD-based local storage that show MUCH lower performance numbers when accessed via StarWind iSCSI over 10Gb Ethernet.
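On the jumbo frame question in item 3, a rough sketch of enabling and verifying them on the ESXi 4.1 side, assuming (as a placeholder) that the 10Gb uplinks sit on a dedicated vSwitch named vSwitch1 and that 192.168.10.1 is the Windows end of the crossover link:
esxcfg-vswitch -m 9000 vSwitch1 (raises the vSwitch MTU to 9000; on ESXi 4.x the VMkernel port used for iSCSI typically has to be recreated with a 9000 MTU as well)
vmkping -s 8972 192.168.10.1 (pings the Windows end with a jumbo-sized payload; where the don't-fragment option is available it makes the test stricter)
On the Windows 2008 R2 side, jumbo frames are enabled per NIC in the driver's Advanced properties (usually a "Jumbo Packet" or "Jumbo Frame" setting), and the value has to match what the vSwitch allows.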
ggalley wrote:Hello Anton,
I tried iterations of the StarWind cache size up to 8000MB with the 5000ms setting.
Even with the StarWind cache up to 8000MB, the reads would remain static at around 200 MB/s.
I am not sure what you mean by queue depth.
Overlapped I/O of 4 is not close to anything real. Anton, you are correct; I was trying to see what it would do under a crazy load. Put 50 VMs on something like this and you will start to see numbers like that.
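For reference, "queue depth" here usually just means the number of outstanding (overlapped) I/O requests the benchmark keeps in flight at once; a single light VM may only keep a handful outstanding, while dozens of busy VMs together can easily keep 32 or more in flight. A hedged sketch of how that knob looks in Microsoft's diskspd tool (the drive letter, file size and duration are placeholders, not values from this thread):
diskspd -c10G -d60 -r -w0 -b64K -t4 -o32 -Sh X:\test.dat
Here -t4 -o32 means 4 threads with 32 outstanding I/Os each (an effective queue depth of 128) of 64K random reads, and -Sh disables local Windows caching so the numbers reflect the iSCSI target rather than the initiator's own cache.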
I could be wrong, but it appears that when you create a RAM disk with StarWind it bypasses all caching, which totally makes sense once you think about it: what exactly would you be caching?
I think these RAM disks fall into that same category.
I could be totally off base here, but I think the StarWind cache uses a 512k read block size, since that is where all my testing started to have read problems.
georgep wrote:HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<Interface GUID>
Entry: TcpAckFrequency
Value Type: REG_DWORD, number
Valid Range: 0-255
Set to 1.
Where do you change this? On the StarWind SANs, and also on the Windows initiators, VMs, etc.?
Yes, it should be changed on all Windows servers involved in the configuration.
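For anyone applying this later, a hedged example from an elevated command prompt; the interface GUID placeholder must be replaced with the GUID of the NIC that carries the iSCSI traffic (the first command is one way to list them):
wmic nicconfig get Description,SettingID
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\<Interface GUID>" /v TcpAckFrequency /t REG_DWORD /d 1 /f
A reboot is generally needed before the new TcpAckFrequency value takes effect.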