I have a new SAN server and am getting poor read performance on all VMs and on a test Windows server. The SAN network is a 10GbE LAN with just the two servers connected to a Netgear XS708E 8-port 10GbE switch. I originally had the servers connected directly with crossover cables, and performance did not change.

- Performance in a VM (screenshot: Disk.jpg)

- Performance through the locally installed StarWind iSCSI initiator on the SAN server (screenshot: SAN iSCSI Local.png)

- Performance of the RAID 10 array (screenshot: SAN Local.jpg)
Tests performed:
IP address for the iSCSI network configured with a /30 subnet
No internet access
Standalone Server
Removed Link-Layer Topology Discovery items
Unchecked Microsoft Networks items
Tested with Adaptive Load Balancing and as standalone NICs
Virtual disks have a 512 MB write-back cache and are set to allow multiple iSCSI connections
Connected the virtual disk locally with the StarWind iSCSI initiator
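One more test that can help isolate the bottleneck is measuring raw TCP throughput on the 10GbE link with no disk or iSCSI involved, for example with a tool such as iperf3 (this assumes it is installed on both servers; the address below is a placeholder for the SAN server's iSCSI IP):

```shell
:: On the SAN server, start a listener:
iperf3 -s

:: On the test server, run 4 parallel streams for 30 seconds
:: against the SAN server's iSCSI IP (placeholder address):
iperf3 -c 10.0.0.1 -P 4 -t 30
```

If this shows close to line rate, the network path is fine and the problem is on the storage/iSCSI side; if it is far below 10 Gbit/s, the NIC, driver, or switch settings deserve attention first.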
SAN server with these specifications:
StarWind 6.0.6399
Windows Server 2008 R2 Standard x64 6.1.7601 Service Pack 1 Build 7601
Supermicro X9DRi-LN4+/X9DR3-LN4+
Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz, 2400 MHz, 4 Core(s), 4 Logical Processor(s)
Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz, 2400 MHz, 4 Core(s), 4 Logical Processor(s)
Installed Physical Memory (RAM) 32.0 GB
Total Physical Memory 32.0 GB
Available Physical Memory 21.1 GB
Total Virtual Memory 63.9 GB
Available Virtual Memory 53.0 GB
Page File Space 32.0 GB
C:\ drive is a 120 GB RAID 1 array of SSDs
Intel(R) C600/X79 series chipset
Matrox G200eW (Nuvoton)
LSI MegaRAID SAS Adapter
- RAID 10 8x2TB SAS 6Gb/s drives with Write-Back Cache
Intel(R) C600 series chipset SATA AHCI Controller
Intel(R) C600 Series Chipset SAS RAID (SATA mode)
Intel(R) I350 Gigabit Network Connection
Intel(R) I350 Gigabit Network Connection #2
Intel(R) I350 Gigabit Network Connection #3
Intel(R) I350 Gigabit Network Connection #4
Team #0 - Intel(R) Ethernet Converged Network Adapter X540-T1
Team #0 - Intel(R) Ethernet Converged Network Adapter X540-T1 #2
- Dynamic Link Aggregation
- Aggregation is not configured at the switch because it will not work when enabled
- Driver settings:
Jumbo Packet: 9014
Max Number of RSS Processors: 16
Performance Options (seem to have improved speed, but only slightly):
• Receive Buffers: 2048
• Transmit Buffers: 8192
RSS Queues: 16
Starting RSS CPU: 0
Disabled Data Center Bridging option
- The only thing that helped slightly was the buffer settings
Ran the following commands:
netsh int tcp set heuristics disabled
netsh int tcp set global autotuninglevel=normal
netsh int tcp set global congestionprovider=ctcp
netsh int tcp set global ecncapability=enabled
netsh int tcp set global rss=enabled
netsh int tcp set global chimney=enabled
netsh int tcp set global dca=enabled
netsh int ipv4 set subint "<Name of NIC>" mtu=9000 store=persistent
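It may also be worth verifying that these settings actually took effect and that jumbo frames pass end-to-end, since a single device silently dropping jumbo frames will degrade 10GbE throughput badly. A quick check (the target address is a placeholder for the other server's iSCSI IP):

```shell
:: Confirm the TCP global parameters applied
netsh int tcp show global

:: Confirm the MTU on the iSCSI subinterface
netsh int ipv4 show subinterfaces

:: Send a full-size frame with the Don't Fragment flag set.
:: 8972 = 9000-byte MTU minus 28 bytes of IP/ICMP header; if this
:: fails while smaller payloads succeed, something in the path is
:: dropping jumbo frames.
ping -f -l 8972 10.0.0.2
```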
VMWare ESXi 5.5 Server
Dell PowerEdge 2900 III
Intel(R) Xeon(R) E5420 @ 2.50GHz, 2493 MHz, 4 Core(s), 4 Logical Processor(s)
Intel(R) Xeon(R) E5420 @ 2.50GHz, 2493 MHz, 4 Core(s), 4 Logical Processor(s)
BIOS Version/Date Dell Inc. 2.7.0, 10/30/2010
Installed Physical Memory (RAM) 48.0 GB
Broadcom BCM5708C NetXtreme II GigE
Intel(R) Ethernet Converged Network Adapter X540-T1
- MTU: 9000
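On the ESXi side, the effective MTU can be checked from the host shell, and vmkping with the don't-fragment flag confirms jumbo frames actually reach the SAN (the target address is a placeholder for the SAN's iSCSI IP):

```shell
# List vmkernel interfaces and their configured MTU
esxcli network ip interface list

# Jumbo-frame test from the ESXi host to the SAN's iSCSI IP:
# 8972-byte payload + 28 header bytes = 9000, with fragmentation disabled
vmkping -d -s 8972 10.0.0.1
```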
I have a second server with the same hardware specifications as the ESXi server; it currently runs Windows Server 2008 R2 for testing:
Dell PowerEdge 2900 III
Intel(R) Xeon(R) E5420 @ 2.50GHz, 2493 MHz, 4 Core(s), 4 Logical Processor(s)
Intel(R) Xeon(R) E5420 @ 2.50GHz, 2493 MHz, 4 Core(s), 4 Logical Processor(s)
BIOS Version/Date Dell Inc. 2.7.0, 10/30/2010
Installed Physical Memory (RAM) 48.0 GB
Broadcom BCM5708C NetXtreme II GigE
Intel(R) Ethernet Converged Network Adapter X540-T1
- Same configuration as SAN server