-
patrick1140
- Posts: 12
- Joined: Sat May 17, 2008 8:20 am
Tue Dec 14, 2010 10:19 am
Hello to all,
I am configuring 2 new HA nodes for a new production SAN and have an issue with the Broadcom 57711.
My 2 systems are strictly identical:
Dell R510 server with Windows 2008 R2, latest StarWind version
Dell onboard NIC: dual Broadcom 1GB (one port is used for management)
2x Near Line SAS 1TB in RAID 1
2x WD VelociRaptor 600GB in RAID 1
2x WD SATA 1TB in RAID 1
Local HDD speed tests with HD Tach are around 100-200 MB/s.
On SAN1 I created a 20GB test volume and mounted it on SAN2 with the MS iSCSI Initiator through a Dell 5424 switch NOT optimized for iSCSI (no jumbo frames or other tuning), and measured the speed over the management LAN: +/- 60 MB/s.
I made the same test with the 10GB Broadcom 57711 using a direct link (copper SFP+ or optical adapter) and the result is 25 MB/s...
I've applied all firmware updates, installed the latest drivers, and tried various settings under the NICs' advanced properties...
Nothing helps or solves the problem.
Any ideas?
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
Tue Dec 14, 2010 9:35 pm
Start with finding out whose issue this is... Run NTttcp and iperf tests for TCP. 10 gigabit should do 600-700 megabytes per second in a single direction. If you get less, it's a Broadcom issue (as you use a cross-over cable and no switch between the nodes). Call their support, make them jump on your hardware, and press them to configure the network to the appropriate performance level. Then comes the StarWind part of the story. If you have a properly working TCP connection, it's our issue and we'll have to configure the stuff for you (and find out why you failed). We can do all the steps, but you need to understand that if it's a Broadcom issue their support staff should be more helpful / faster (at least I suppose so...). So let us know your final decision and/or preliminary network test results. Thanks!
P.S. 60 megabytes per second sucks... You should have wire speed for a single node with gigabit Ethernet. FYI.
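Off the top of my head, the kind of run I mean looks roughly like this (a sketch only - check iperf --help and ntttcps -? / ntttcpr -? for the exact switches in your builds; the 10.10.10.x addresses are just placeholders for the direct 10 GbE link):
rem --- raw TCP with iperf 2.x: larger window, 30 seconds, 4 parallel streams
rem on SAN04 (receiver):
iperf -s -w 256k
rem on SAN03 (sender):
iperf -c 10.10.10.4 -w 256k -P 4 -t 30 -i 5
rem --- raw TCP with the classic NTttcp sender/receiver pair: 64 KB buffers, 20480 of them, one thread on CPU 0
rem on SAN04 (receiver, 10.10.10.4 is its address on the direct link):
ntttcpr -m 1,0,10.10.10.4 -l 64k -n 20480
rem on SAN03 (sender, pointed at the receiver's address):
ntttcps -m 1,0,10.10.10.4 -l 64k -n 20480
Run the single-stream case first, then repeat with 4 streams / 4 mapped threads; if one stream stalls but four fill the pipe, look at offload and interrupt moderation settings rather than the cable.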
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software

-
patrick1140
- Posts: 12
- Joined: Sat May 17, 2008 8:20 am
Wed Dec 15, 2010 8:43 am
Hi Anton,
I will re-run the tests again this evening, but a 4GB file transfer (ISO image) takes less than 6 seconds.
It seems to be a Broadcom issue, but there is no information on their web site on how to contact their technical people...
For NTttcp and iperf, do you have any recommended parameters for the test (packet size, buffers...)?
Regards,
-
patrick1140
- Posts: 12
- Joined: Sat May 17, 2008 8:20 am
Wed Dec 15, 2010 11:01 am
Hello Constantin,
Yes, same links... and a web form for requesting support...
I filled it in and am waiting for contact...
I'll try NTttcp and iperf this evening and send the results to you by email (I have it from a previous support request).
I'll keep you (all) posted.

-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
Wed Dec 15, 2010 6:09 pm
OK, please provide some preliminary results and we'll see what can be done here, with or without Broadcom's staff.
patrick1140 wrote:Hello Constantin,
Yes, same links... and a web form for requesting support...
I filled it in and am waiting for contact...
I'll try NTttcp and iperf this evening and send the results to you by email (I have it from a previous support request).
I'll keep you (all) posted.

Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software

-
patrick1140
- Posts: 12
- Joined: Sat May 17, 2008 8:20 am
Wed Dec 15, 2010 7:33 pm
First tests... I've had some replies from Broadcom and am testing with TOE enabled and disabled, but no changes...
iperf hangs under Windows 2008 R2 and can't be used.
Only NTttcp results...
Tests with NTTTCP
With one thread
A) Over management network including Dell 5424 switch not optimized for iSCSI
SAN03 as sender – SAN04 as receiver
Thread Realtime(s) Throughput(KB/s) Throughput(Mbit/s) Avg Bytes per Completion
====== =========== ================ ================== ========================
0 11.342 118336.914 946.695 65536.000
Total Bytes(MEG) Realtime(s) Average Frame Size Total Throughput(Mbit/s)
================ =========== ================== ========================
1342.177280 11.342 1459.231 946.695
Total Buffers Throughput(Buffers/s) Pkts(sent/intr) Intr(count/s) Cycles/Byte
============= ===================== =============== ============= ===========
20480.000 1805.678 2 31911.04 7.3
Packets Sent Packets Received Total Retransmits Total Errors Avg. CPU %
============ ================ ================= ============ ==========
919784 334231 2 0 5.04
B) Over direct link with 10GB Broadcom 57711
(200000 Buffers)
Thread Realtime(s) Throughput(KB/s) Throughput(Mbit/s) Avg Bytes per Completion
====== =========== ================ ================== ========================
0 2.433 5387258.529 43098.068 639906.264
Total Bytes(MEG) Realtime(s) Average Frame Size Total Throughput(Mbit/s)
================ =========== ================== ========================
13107.200000 2.433 14449.357 43098.068
Total Buffers Throughput(Buffers/s) Pkts(sent/intr) Intr(count/s) Cycles/Byte
============= ===================== =============== ============= ===========
200000.000 82203.042 26 14188.24 0.1
Packets Sent Packets Received Total Retransmits Total Errors Avg. CPU %
============ ================ ================= ============ ==========
907113 452687 0 0 4.57
B with 4 Threads:
Thread Realtime(s) Throughput(KB/s) Throughput(Mbit/s) Avg Bytes per Completion
====== =========== ================ ================== ========================
0 4.509 297666.285 2381.330 65536.000
1 4.524 296679.328 2373.435 65536.000
2 4.524 296679.328 2373.435 65536.000
3 4.524 296679.328 2373.435 65536.000
Total Bytes(MEG) Realtime(s) Average Frame Size Total Throughput(Mbit/s)
================ =========== ================== ========================
5368.709120 4.524 1470.960 9501.634
Total Buffers Throughput(Buffers/s) Pkts(sent/intr) Intr(count/s) Cycles/Byte
============= ===================== =============== ============= ===========
81988.130 18122.929 6 127971.49 1.6
Packets Sent Packets Received Total Retransmits Total Errors Avg. CPU %
============ ================ ================= ============ ==========
3649799 1888993 0 0 10.91
-
Constantin (staff)
Thu Dec 16, 2010 9:51 am
Well, first of all you need to exclude the switch from your network - test A looks like it's capped by the 1Gb switch:
Total Throughput(Mbit/s) - 946.695
while on the direct connection with 4 threads you get 9501.
So, can you make a direct connection between the servers, and let's schedule a meeting for deeper testing?
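Putting the two runs in the same units: 946.695 Mbit/s / 8 ≈ 118 MB/s, which is gigabit wire speed, so run A is simply limited by the 1Gb path. The 4-thread direct run does 9501.634 Mbit/s / 8 ≈ 1188 MB/s, essentially 10GbE line rate. (The single-thread direct figure of 43098 Mbit/s is more than a 10GbE link can physically carry, so it looks like an offload/timer measurement artifact rather than real throughput.)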
-
patrick1140
- Posts: 12
- Joined: Sat May 17, 2008 8:20 am
Fri Dec 17, 2010 3:31 pm
Hello,
Results for "A" are for one Broadcom 1GB card and Network Switch
Results for "B" are for 10GB broadcom cards (direct link) giving problems in iSCSI
Regards,
-
mgerszew
- Posts: 1
- Joined: Tue Apr 27, 2010 8:33 pm
Sun Dec 19, 2010 5:07 am
I've mostly been a lurker on these forums, but I've run into some odd things with various 10 GB NICs.
Do you have jumbo frames enabled on your 10 GB link? If so, try setting it back to a regular frame size and do an iperf/transfer speed test. If you get more reasonable numbers (3-4 Gbps in iperf) then there is some driver or hardware issue that needs to be dealt with.
I was dealing with some rebranded 10 GB NICs, not Broadcom, that would slow down the entire server when jumbo frames were enabled. I would get 2 Kbps transfer rates. iperf would actually give decent numbers, but only because something to do with the NIC was slowing down the system's processing capability, making 10 actual seconds only count as 1 second.
If I switched back to a "normal" frame size, I could push 3-4 Gbps no problem.
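For what it's worth, on 2008 R2 you can check and reset the effective MTU from an elevated prompt instead of digging through the driver property pages (the interface name below is just a placeholder for whatever the 10 GbE connection is called):
rem show the current MTU of every interface
netsh interface ipv4 show subinterfaces
rem drop the 10 GbE link back to a standard 1500-byte MTU
netsh interface ipv4 set subinterface "10GbE Link" mtu=1500 store=persistent
Then re-run the iperf/file-copy test. Note this only caps the IP-level MTU; the Jumbo Packet entry in the adapter's advanced properties is the per-driver setting to flip back as well.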
-
Max (staff)
- Staff
- Posts: 533
- Joined: Tue Apr 20, 2010 9:03 am
Tue Dec 21, 2010 10:18 am
One more issue with 10GbE, thanks for letting us know.
The StarWind support dept. was also alerted to more jumbo frame issues, for both 10 and 1 GbE cards, a few days ago; those issues were mostly with 10GbE CX4 and gigabit quad-port cards from Intel. I strongly recommend subscribing to the Intel forums to stay up to date on any bugfix releases.
Max Kolomyeytsev
StarWind Software
-
patrick1140
- Posts: 12
- Joined: Sat May 17, 2008 8:20 am
Tue Dec 21, 2010 11:43 am
Hello to all,
I've tested with and without jumbo frames (and 10+ other advanced parameters for each NIC, such as TOE, large send offload, checksum offload...) with the same BAD results... and also BAD support from Broadcom... argghhhh
I've replaced them with 2 dual-port 1GB cards... a 10x speed improvement.

-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
Tue Dec 21, 2010 2:29 pm
We'd like to jump on your hardware to see why TCP works fine but iSCSI does not. Is this possible?
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software

-
patrick1140
- Posts: 12
- Joined: Sat May 17, 2008 8:20 am
Wed Dec 22, 2010 9:12 am
Hi Anton, these servers go live in production tomorrow.
I've replaced the 10Gb Broadcom with 2 dual-port Intel 1GB cards for testing, and these cards give very good performance. I'm expecting a 10Gbps Intel card today for testing this evening.
I've created four independent 1Gbps links to replicate the iSCSI volumes (one volume per link).
During replication "only" 50% of the dedicated link is used - is it possible to improve that? (From an earlier exchange with Constantin, I believe that's a limitation in StarWind.) Is it possible to use 100% if it's a dedicated link used only for replication and by only one volume?
You can also contact me by email (patrick dot foubert at webplanet dot be )
Regards,
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
Wed Dec 22, 2010 10:16 am
Too bad... I'd prefer to see why Broadcom sucked with iSCSI.
We use Intel 10 GbE hardware here in our lab so I would not expect any issues.
For now, 50% network usage on the cross-link is expected. We'll improve it with MPIO on the cross-link in upcoming minor releases, so stay tuned.
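(In round numbers that means roughly 0.5 x ~118 MB/s ≈ 60 MB/s per replicated volume on a dedicated gigabit cross-link until the MPIO change ships.)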
patrick1140 wrote:Hi Anton, these servers go live in production tomorrow.
I've replaced the 10Gb Broadcom with 2 dual-port Intel 1GB cards for testing, and these cards give very good performance. I'm expecting a 10Gbps Intel card today for testing this evening.
I've created four independent 1Gbps links to replicate the iSCSI volumes (one volume per link).
During replication "only" 50% of the dedicated link is used - is it possible to improve that? (From an earlier exchange with Constantin, I believe that's a limitation in StarWind.) Is it possible to use 100% if it's a dedicated link used only for replication and by only one volume?
You can also contact me by email (patrick dot foubert at webplanet dot be )
Regards,
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software
