IEEE 802.3ad NIC teaming a good idea? (no speed increase)

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

Paul W
Posts: 10
Joined: Thu Jul 14, 2011 7:22 pm

Fri Jul 22, 2011 12:06 pm

I've run into a snag. I was hoping to use NIC teaming (link aggregation) to actually improve throughput.

I was hoping to team two 1 Gbit NICs and get close to 2 Gbit of throughput for my iSCSI channels.

I have quad-port Intel NICs and an HP ProCurve 1800-24G, and 802.3ad LACP is supported end to end. When I configure everything, it all reports as good and working, until I actually take some measurements: it's no faster than a single 1 Gbit NIC.

Then I read this: http://www.ieee802.org/3/hssg/public/ap ... 1_0407.pdf

Especially page 7 and the last page make me think I'm barking up the wrong tree. They mention that multiple connections are needed to reach higher speeds; is that actually the case with StarWind, or with iSCSI in general?

Is teaming only good for redundancy, or will it also improve throughput? Is there anyone here getting more than 120 MB/s through aggregated 1 Gbit links?
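The behaviour described in that PDF can be sketched in a few lines: an 802.3ad switch distributes frames per conversation, typically by hashing address/port fields, so a single iSCSI TCP connection always lands on one physical link. The hash below is purely illustrative (real switches use vendor-specific schemes over MAC/IP/port fields), but the consequence is the same:

```python
# Illustrative sketch of 802.3ad-style per-flow link selection.
# Real switches hash MAC/IP/port fields with vendor-specific algorithms;
# the key property is: same flow -> same physical link, every time.

def select_link(src_ip, dst_ip, src_port, dst_port, num_links):
    """Pick an aggregated link for a flow; deterministic per flow."""
    flow = (src_ip, dst_ip, src_port, dst_port)
    return hash(flow) % num_links

# One iSCSI session = one TCP connection = one flow.
# No matter how many frames it sends, they all take the same link:
links_used = {select_link("10.0.0.1", "10.0.0.2", 51000, 3260, 2)
              for _ in range(100)}
print(len(links_used))  # 1 -- the single flow never spreads across links

# Two independent connections (e.g. separate MPIO sessions) are separate
# flows and *can* hash to different links (not guaranteed, hash-dependent):
two_flows = {select_link("10.0.0.1", "10.0.0.2", p, 3260, 2)
             for p in (51000, 51001)}
```

This is why a single-session throughput test tops out at one link's speed regardless of how many NICs are in the team.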
georgep
Posts: 38
Joined: Thu Mar 24, 2011 1:25 am

Fri Jul 22, 2011 1:31 pm

For iSCSI I am getting very good results with MPIO RR.
Paul W
Posts: 10
Joined: Thu Jul 14, 2011 7:22 pm

Fri Jul 22, 2011 2:57 pm

I'm currently testing with MPIO Round Robin from server1 to server2 over two 1 Gbit ports per server.
The best speeds I see using ATTO are 115 MB/s writes and 113 MB/s reads (at 1024 KB blocks), which is the same as I get using a single NIC port.
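Those numbers are consistent with one fully saturated gigabit link. A quick back-of-envelope check (the protocol-overhead percentage is a rough assumption, not a measured value):

```python
# Rough line-rate arithmetic for a single gigabit link.
raw_mbps = 1_000_000_000 / 8 / 1_000_000  # 1 Gbit/s = 125 MB/s raw
overhead = 0.08   # ~8% assumed for Ethernet/IP/TCP/iSCSI framing overhead
usable = raw_mbps * (1 - overhead)
print(f"~{usable:.0f} MB/s usable")       # ~115 MB/s, matching the ATTO result
```

So 115 MB/s is the wire already full, not a StarWind or disk bottleneck.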

Are your results better? What are they?
kmax
Posts: 47
Joined: Thu Nov 04, 2010 3:37 pm

Fri Jul 22, 2011 6:18 pm

Does Task Manager show both iSCSI interfaces at around 50% utilization when you run the test?
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am
Contact:

Mon Jul 25, 2011 9:19 am

Actually, the best results should show up at 32 and 64 KB block sizes.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
Paul W
Posts: 10
Joined: Thu Jul 14, 2011 7:22 pm

Mon Jul 25, 2011 11:25 am

Thanks guys,

I'm actually building that config as we speak; all links are teamed (802.3ad). I've got two storage servers and two Hyper-V servers, using MPIO over the teamed links.
The HP ProCurve switches understand LACP.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am
Contact:

Mon Jul 25, 2011 11:30 am

I suggest using IOmeter instead of ATTO. Set up 64 outstanding I/Os, 100% sequential, 100% read, 64 KB block size; the disk should be unformatted.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue Jul 26, 2011 9:10 pm

For the sync channel, use teamed links for now (upcoming versions of StarWind should use custom MPIO and drop the MS iSCSI initiator as a sync transport, as it's slow and unreliable in some cases), and use MPIO configured for Round Robin for client connections. Using NIC teaming to accelerate iSCSI traffic does not work well because iSCSI PDUs don't chunk across links very well.
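The difference can be sketched with a toy model (this is not StarWind's or Microsoft's actual MPIO code): Round Robin sends each outstanding I/O down the next path, and since each path is its own iSCSI session and TCP connection, the switch sees independent flows and both NICs move data, whereas a teamed link pins the one session to a single port.

```python
from itertools import cycle

# Toy model of MPIO Round Robin: each I/O request goes to the next path.
# Each path is a separate iSCSI session/TCP connection, so both links
# carry traffic -- unlike LACP, which pins one session to one link.
paths = ["NIC1->target", "NIC2->target"]
next_path = cycle(paths)

io_per_path = {p: 0 for p in paths}
for _ in range(1000):                # dispatch 1000 I/O requests
    io_per_path[next(next_path)] += 1

print(io_per_path)  # 500 I/Os per path -> both links utilized
```

With enough outstanding I/Os queued, this is how two 1 Gbit paths can approach 2 Gbit of aggregate iSCSI throughput.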
Paul W wrote:Thanks guys,

I'm actually building that config as we speak; all links are teamed (802.3ad). I've got two storage servers and two Hyper-V servers, using MPIO over the teamed links.
The HP ProCurve switches understand LACP.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
