Wed Dec 29, 2010 9:21 pm
I have the same issue with a slightly different chipset on the 10GbE NIC (mine is a Broadcom 57710). Here is my setup:
Switches
2x Dell PowerConnect 6224, stacked, each with a dual-port 10GBase-T module, configured for jumbo frames (MTU 9216)
vSphere 4.1 ESX Servers
2x Dell PowerEdge SC1435: 16GB RAM, 2x quad-core Opteron 2.4 GHz, single Broadcom 57710 10GBase-T NIC for iSCSI with jumbo frames enabled, Round Robin path policy for the StarWind LUNs (see the rough esxcli commands after this list), dual gigabit NICs for the VM LAN
Starwind Servers
2x Tyan S3892: 8GB DDR2 RAM, dual dual-core Opterons at 2.9 GHz, Windows Server 2008 R2 Enterprise x64, single Broadcom 57710 10GBase-T NIC for iSCSI (latest driver), StarWind 5.5
Each has an 8-drive RAID 5 set of WD2003FYYS RE2 2TB drives attached to a 3ware 9550SX-16ML controller. Each StarWind box also has a 256GB ADATA S599 SSD, which holds the C: drives for the VMs, with the RAID 5 set used for low-overhead storage.
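For reference, the Round Robin policy on the ESX 4.1 hosts can be set roughly like this (the naa. device ID below is just a placeholder, and the syntax should be double-checked against your build before running it):

    # show the StarWind devices and their current path selection policy
    esxcli nmp device list

    # set Round Robin for one LUN (replace the naa. ID with your own)
    esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

    # optionally switch paths every IO instead of the default 1000
    esxcli nmp roundrobin setconfig --device naa.xxxxxxxxxxxxxxxx --type "iops" --iops 1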
The 10GbE NIC on each StarWind server is configured for jumbo frames, but I cannot get over 5% NIC utilization no matter what; only while an HA sync is in progress does it reach about 15%. Locally I get 580 MB/s read / 380 MB/s write on the RAID 5 set and 250 MB/s read / 180 MB/s write on the SSD, but performance inside the VMs is terrible. I have two HA volumes on the RAID 5 set and one HA volume on the SSD.
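One thing worth sanity-checking is that jumbo frames actually survive end to end: the switch ports, the vSwitch, the vmkernel port and the Windows NIC all have to agree, or traffic silently falls back to fragmentation. A rough test (IP addresses are placeholders for the iSCSI addresses):

    # from the ESX console, 8972 = 9000 MTU minus 28 bytes of IP/ICMP headers
    vmkping -d -s 8972 192.168.x.x

    # from the Windows/StarWind side, -f sets don't-fragment
    ping -f -l 8972 192.168.x.x

If either of those fails or reports that the packet needs to be fragmented, one of the hops is not really running at 9000/9216.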
I've spent the last 8 months fighting slowness problems. I started off with Windows 2003 x64 for StarWind with bonded gigabit NICs, and honestly, after spending about $3,000 upgrading to 10GBase-T NICs, switch modules, and Windows Server 2008, there hasn't been a noticeable increase in performance, other than HA sync speed improving a little with the StarWind 5.5 upgrade.