10GB adapter (Broadcom 57711) very bad iSCSI performance

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

patrick1140
Posts: 12
Joined: Sat May 17, 2008 8:20 am

Wed Dec 22, 2010 11:16 am

Hello Max,

Are Intel 10Gb adapters worse than Broadcom?

I've also sent you an email about remote access to the SAN servers, but it is only possible this evening (not before, not after :( )

Patrick
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Wed Dec 22, 2010 11:44 am

We like Intel more than Broadcom. Just better experience AT THIS MOMENT.

OK, thank you!
patrick1140 wrote:Hello Max,

Are Intel 10Gb adapters worse than Broadcom?

I've also sent you an email about remote access to the SAN servers, but it is only possible this evening (not before, not after :( )

Patrick
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

seanwales
Posts: 3
Joined: Fri Jul 23, 2010 7:34 pm

Wed Dec 29, 2010 9:21 pm

I have the same issue with a slightly different chipset on the 10Gb NIC: a 57710. Here is my setup:

Switches
2- Dell PowerConnect 6224, stacked, each with a dual-port 10GBase-T module, configured for Jumbo Frames (MTU 9216)

vSphere 4.1 ESX Servers
2- Dell PowerEdge SC1435, 16GB RAM, 2 quad-core Opterons at 2.4 GHz, 10Gb Broadcom 57710 10GBase-T single NIC (iSCSI) with Jumbo Frames enabled, Round Robin config for the StarWind LUNs, dual gigabit for the VMs' LAN

StarWind Servers
2- Tyan S3892, 8GB DDR2 RAM, dual dual-core Opterons at 2.9 GHz, Windows 2008 Ent R2 x64, 10Gb Broadcom 57710 10GBase-T single NIC (iSCSI, latest driver), StarWind 5.5
Each has an 8-drive RAID-5 set of WD2003FYYS RE2 2TB drives attached to a 3ware 9550SX-16ML controller. Each StarWind box also has a 256GB ADATA S599 SSD used for the C:\
drives on each VM, with the RAID-5 set used for low-overhead storage.

The 10Gb NIC on each StarWind server is configured for Jumbo Frames, but I cannot get over 5% NIC utilization no matter what, unless a sync is in progress, and then I cannot seem to get over 15%. Locally I get 580 MB/s read / 380 MB/s write on the RAID-5 set, and 250 MB/s read / 180 MB/s write on the SSD, but performance on the VMs is terrible. I have a total of two HA volumes on the RAID-5 set and one HA volume on the SSD.


I've spent the last 8 months fighting slowness problems. I started off with Windows 2003 x64 for StarWind with bonded gigabit NICs, and honestly, after spending $3000 upgrading to 10GBase-T NICs, switch modules, and Windows Server 2008, there hasn't been a noticeable increase in performance, other than that HA sync performance has increased a little with the StarWind 5.5 upgrade.
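A quick back-of-the-envelope check of the utilization figures above (my own arithmetic, not from the thread): converting the observed 5% and 15% NIC utilization into throughput shows the network path is delivering far less than the local array can.

```python
# Sketch: translate NIC utilization percentages into MB/s on a 10 Gb/s link,
# using the figures reported above. My own illustration, not a StarWind tool.
LINK_GBPS = 10                       # nominal 10GBase-T line rate
UTIL_NORMAL, UTIL_SYNC = 0.05, 0.15  # observed NIC utilization

def mb_per_s(fraction):
    # fraction of 10 Gb/s expressed in MB/s (8 bits per byte)
    return fraction * LINK_GBPS * 1000 / 8

print(f" 5% utilization ~ {mb_per_s(UTIL_NORMAL):.1f} MB/s")
print(f"15% utilization ~ {mb_per_s(UTIL_SYNC):.1f} MB/s")
print("local RAID-5 read: 580 MB/s, write: 380 MB/s")
```

So even during a sync the wire carries roughly 187 MB/s at most, well under what the RAID-5 set can sustain locally, which points at the network path rather than the disks.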
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Thu Dec 30, 2010 9:29 am

Start by removing hardware (the switches are the first candidate) and running and publishing raw TCP performance numbers (see above for how). After we find out what's broken, we'll know for sure what to do. Thanks!
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
