Recommendations for a Large SAN for Virtualization

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

oxyi
Posts: 67
Joined: Tue Dec 14, 2010 8:30 pm
Contact:

Thu Jan 20, 2011 6:27 pm

Just did a big file write test - 38.912 GB.

Write is still at 236.66 MB/sec.

Read without a reboot is 751.05 MB/sec.

Read with the machine rebooted is 728.83 MB/sec.

There doesn't seem to be any difference at all...

The above test was performed on the local server where the RAID 60 resides.
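For reference, the test above is essentially a large sequential write followed by a sequential read. A minimal Python sketch of the same idea (the path and size are placeholders, not from the original test):

import os, time

PATH = r"D:\bench\testfile.bin"  # hypothetical path on the RAID 60 volume
SIZE_MB = 38912                  # roughly the 38.912 GB test above
CHUNK = 1024 * 1024              # 1 MiB per request

buf = os.urandom(CHUNK)
start = time.time()
with open(PATH, "wb", buffering=0) as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    os.fsync(f.fileno())         # force the data out of the OS cache
print("write: %.2f MB/sec" % (SIZE_MB / (time.time() - start)))

start = time.time()
with open(PATH, "rb", buffering=0) as f:
    while f.read(CHUNK):
        pass
print("read: %.2f MB/sec" % (SIZE_MB / (time.time() - start)))

Note that buffering=0 only skips Python's own buffer; reads still go through the OS page cache, which is why rebooting (or using a file much larger than RAM) matters for the read numbers.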


EDIT: Does it matter that I have a BBU on my RAID card? Maybe that's why the cache still exists after a restart?
Max (staff)
Staff
Posts: 533
Joined: Tue Apr 20, 2010 9:03 am

Fri Jan 21, 2011 9:21 am

Hmm, if you performed a clean shutdown, the cache should have been flushed to disk.
I have some doubts about the write speeds you're getting; could you ask your RAID vendor what approximate write speeds you should expect with the HDDs you have?
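For a rough frame of reference, a back-of-the-envelope ceiling for sequential writes on a hypothetical RAID 60 layout (the actual drive count and disk model weren't posted, so every figure below is an assumption):

drives_per_span = 8     # assumed: two RAID 6 spans of 8 drives each
spans = 2
parity_per_span = 2     # RAID 6 dedicates two drives' worth of parity per span
per_disk_mb_s = 90      # assumed sequential rate of a 7200 rpm SATA disk

data_drives = spans * (drives_per_span - parity_per_span)
print("full-stripe write ceiling: ~%d MB/sec" % (data_drives * per_disk_mb_s))
# -> ~1080 MB/sec for this hypothetical layout; a sustained 236 MB/sec
#    would sit well below that, which is why the vendor's figure matters.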
Max Kolomyeytsev
StarWind Software
oxyi
Posts: 67
Joined: Tue Dec 14, 2010 8:30 pm
Contact:

Thu Jan 27, 2011 11:24 pm

Hmm, RAID vendor... Areca is famous for their customer service... (sarcastic laugh)


Below are my NTttcp results. Two NICs are teamed together on each server, and I ran NTttcp across the team.
Not sure if I did it correctly, though... how does it look?

Thread  Realtime(s)  Throughput(KB/s)  Throughput(Mbit/s)  Avg Bytes per Completion
======  ===========  ================  ==================  ========================
     0       44.226         30346.668             242.773                 65532.800
     1       44.226         30346.668             242.773                 65532.800
     2       44.226         30348.150             242.785                 65529.601
     3       44.226         30343.704             242.750                 65529.600

Total Bytes(MB)  Realtime(s)  Average Frame Size  Total Throughput(Mbit/s)
===============  ===========  ==================  ========================
    5368.381440       44.226            8183.321                   971.082

Total Buffers  Throughput(Buffers/s)  Pkts(sent/intr)  Intr(count/s)  Cycles/Byte
=============  =====================  ===============  =============  ===========
    81915.000               1852.191                2        6466.60          1.8

Packets Sent  Packets Received  Total Retransmits  Total Errors  Avg. CPU %
============  ================  =================  ============  ==========
      656015            175026                 52             0        0.61
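A quick cross-check of the totals confirms what the last column already says: the team is delivering about one gigabit link's worth of throughput.

total_mb = 5368.381440  # "Total Bytes(MB)" from the table above
secs = 44.226
mb_per_s = total_mb / secs
print("%.1f MB/sec = %.1f Mbit/s" % (mb_per_s, mb_per_s * 8))
# -> 121.4 MB/sec = 971.1 Mbit/s, i.e. a single 1 GbE link's worth,
#    even though two NICs are teamed.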
Max (staff)
Staff
Posts: 533
Joined: Tue Apr 20, 2010 9:03 am

Sat Jan 29, 2011 6:10 pm

Well, this looks pretty good.
I will try to dig out some more info on the RAID 60. Maybe we're missing something.
Max Kolomyeytsev
StarWind Software
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Sat Jan 29, 2011 6:14 pm

I see two teamed NICs doing only 1 Gbps, actually... So: 1) try playing with the jumbo frame size; we'd like to see at least 1.6 Gbps. 2) Please grab something like ATTO Disk Benchmark and check how much your RAID does *locally*, without iSCSI attached to it. If writes are slow locally, it's Areca's issue and I don't think we'll be able to do anything (driver? firmware? is the RAID's write cache enabled?).
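One hint from the NTttcp output above: the average frame size of 8183 bytes suggests jumbo frames are already at least partly in effect, since a standard 1500-byte MTU caps the TCP payload per frame far below that. The arithmetic:

# Maximum TCP payload per Ethernet frame for standard vs. jumbo MTU
overhead = 20 + 20               # IPv4 header + TCP header (no options)
for mtu in (1500, 9000):
    print("MTU %4d -> max payload %d bytes" % (mtu, mtu - overhead))
# MTU 1500 -> 1460 bytes; MTU 9000 -> 8960 bytes, consistent with the
# ~8183-byte average frames seen in the NTttcp run.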
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

oxyi
Posts: 67
Joined: Tue Dec 14, 2010 8:30 pm
Contact:

Mon Jan 31, 2011 6:56 pm

[screenshot: ATTO benchmark of one of the RAID 60 volumes]

I just did a quick ATTO run on one of the RAID 60 drives. Does that look promising?
I will run it on the other drives and the other server to see if the data is consistent.
oxyi
Posts: 67
Joined: Tue Dec 14, 2010 8:30 pm
Contact:

Mon Jan 31, 2011 7:01 pm

Here is another benchmark using the default settings on the second server.

[screenshot: ATTO benchmark, second server, default settings]
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Mon Jan 31, 2011 8:40 pm

Could you please clarify what's in the pictures? Raw RAID volume performance, or something else?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

oxyi
Posts: 67
Joined: Tue Dec 14, 2010 8:30 pm
Contact:

Tue Feb 08, 2011 6:04 pm

Yes, it's a benchmark picture of my RAID volume's performance. I'm not sure if I did it correctly, though.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Tue Feb 08, 2011 11:36 pm

You should not have such a huge difference between reads and writes (assuming it's RAID 6 and not RAID 5, and you have the write cache enabled). I don't see how we can continue without making sure the storage hardware is working properly (and it obviously is not). Did you play with the stripe size and the hardware/emulated logical block size on your RAID? Is the partition aligned on the physical block / stripe size? BTW, even at ~100 MB/sec we are at wire speed for 1 GbE; going higher is possible only with properly configured 10 GbE.
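For reference, the wire-speed arithmetic behind that ~100 MB/sec figure; the efficiency factor below is an assumed allowance for Ethernet/IP/TCP framing overhead:

def wire_mb_s(gbit, efficiency=0.94):
    # usable bytes/sec after framing overhead, as a rough estimate
    return gbit * 1000.0 / 8 * efficiency

print("1 GbE : ~%.0f MB/sec" % wire_mb_s(1))   # ~118 MB/sec
print("10 GbE: ~%.0f MB/sec" % wire_mb_s(10))  # ~1175 MB/sec
# So ~100-118 MB/sec over a single 1 GbE path is already wire speed;
# anything faster needs 10 GbE or multiple paths.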
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

oxyi
Posts: 67
Joined: Tue Dec 14, 2010 8:30 pm
Contact:

Thu Feb 10, 2011 4:02 pm

Hi Anton,

Sorry, I should have done the benchmark tests with the same settings.

If you look at the first graph, you'll notice that the benchmark parameters are different; hence the disparity.
I had the transfer size set from 2.0 KB to 8192 KB and the length at 1 GB, which is why you see that big difference between reads and writes.
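As one possible illustration of why the transfer-size range changes the shape of the results: at small request sizes the array is IOPS-bound rather than bandwidth-bound. The IOPS figure below is an assumption for illustration, not a measured value:

assumed_iops = 10000    # hypothetical controller IOPS at small request sizes
for kb in (2, 64, 8192):
    print("%4d KB requests: ~%.0f MB/sec IOPS-bound ceiling"
          % (kb, kb * assumed_iops / 1024.0))
# 2 KB requests cap near ~20 MB/sec even at 10k IOPS; by 8192 KB the
# array hits its bandwidth limit long before any IOPS ceiling.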

Here is the correct one with the default settings.
[screenshot: ATTO benchmark, default settings]


Does this look more "normal" to you?
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Thu Feb 10, 2011 4:07 pm

Absolutely perfect!
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

oxyi
Posts: 67
Joined: Tue Dec 14, 2010 8:30 pm
Contact:

Thu Feb 10, 2011 7:05 pm

Great. Not sure if HD Tune is a good indicator of speed, but here it is anyway.
[screenshot: HD Tune benchmark of the RAID volume]


Since my local RAID volume shows good speed, I should start testing my network speed next, right?
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Thu Feb 10, 2011 7:17 pm

It's enough. So local I/O is OK. Now let's make sure the network does wire speed with TCP.
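NTttcp (used earlier in the thread) is the natural tool for this. As an alternative, a minimal raw-TCP probe along these lines would do; the addresses, port, and duration are placeholders:

import socket, sys, time

PORT, CHUNK = 5001, 64 * 1024

def receiver():
    srv = socket.socket()
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    secs = time.time() - start
    print("%.1f MB/sec = %.0f Mbit/s" % (total / secs / 1e6, total * 8 / secs / 1e6))

def sender(host, seconds=30):
    sock = socket.create_connection((host, PORT))
    buf, end = b"\x00" * CHUNK, time.time() + seconds
    while time.time() < end:
        sock.sendall(buf)
    sock.close()

if __name__ == "__main__":
    # run "recv" on one server first, then "send <receiver-ip>" on the other
    receiver() if sys.argv[1] == "recv" else sender(sys.argv[2])

Run the receiver on one server and the sender on the other, then compare the reported rate against the ~118 MB/sec wire-speed ceiling for a single 1 GbE path.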
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

oxyi
Posts: 67
Joined: Tue Dec 14, 2010 8:30 pm
Contact:

Thu Feb 10, 2011 11:18 pm

[screenshot: throughput test through the teamed adapter]


Network speed went down through the teamed network adapter.

I applied the TCP/IP registry tweaks:

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
GlobalMaxTcpWindowSize = 0x01400000 (DWORD)
TcpWindowSize = 0x01400000 (DWORD)
Tcp1323Opts = 3 (DWORD)
SackOpts = 1 (DWORD)

and

I have the Intel PRO/1000 PT quad-port, and I have teaming set to
IEEE 802.3ad Dynamic Link Aggregation. Any thoughts? It just doesn't seem like I'll reach that 1.6 Gbps speed...
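One likely explanation, offered as an assumption about the teaming mode rather than a confirmed diagnosis: 802.3ad hashes each conversation onto a single physical link, so one TCP stream can never exceed ~1 Gbps no matter how many NICs are in the team, and depending on the hash policy (some hash only on MAC or IP addresses, which pins all traffic between two given hosts to one link) even several streams may not spread out. A sketch that at least gives the hash multiple flows to work with, reusing the hypothetical sender() from the probe above:

import threading
# sender() is the function from the raw-TCP probe sketched earlier
PEER = "192.168.0.10"   # hypothetical address of the receiving server
threads = [threading.Thread(target=sender, args=(PEER,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()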