Same performance from mechanical and SSDs using StarWind


eickst
Posts: 21
Joined: Sat Mar 09, 2013 5:25 am

Sat Mar 09, 2013 5:29 am

Any underlying reason why performance would be identical between mechanical and solid-state disks?

I set up 4 VMs, 2 on an SSD HA device and 2 on a 10k SAS HA device, and performance is identical in SQLIO.

Same amount of cache (16 GB) on each device.
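For reference, the kind of SQLIO run I've been comparing the two devices with looks roughly like this (test file path, duration, and queue depth here are illustrative, not my exact parameters):

sqlio -kR -frandom -b8 -o32 -t4 -s60 -LS E:\testfile.dat

That's 8 KB random reads with 4 threads at 32 outstanding I/Os each for 60 seconds, with latency stats, against a test file on the HA device under test.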
lohelle
Posts: 144
Joined: Sun Aug 28, 2011 2:04 pm

Sat Mar 09, 2013 9:14 am

Not helpful for solving your problem (sorry!), but I just want to say that real-life performance using SSDs for the underlying storage should give a huge boost.

All my remote desktop servers, database servers, etc. see a 10-100x random-I/O read performance increase on the SSD LUNs (bursts of writes go to cache first anyway).
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sun Mar 10, 2013 12:36 am

Could you please tell us more about the configuration you have and how exactly you are testing?

My initial impression is that you have a network bottleneck.
eickst wrote:Any underlying reason why performance would be identical between mechanical and solid-state disks?

I set up 4 VMs, 2 on an SSD HA device and 2 on a 10k SAS HA device, and performance is identical in SQLIO.

Same amount of cache (16 GB) on each device.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sun Mar 10, 2013 12:37 am

You're right. Linear reads and writes could be more or less the same with spindles (or spindles could even be faster), but random I/O is *totally* different.
lohelle wrote:Not helpful for solving your problem (sorry!), but I just want to say that real-life performance using SSDs for the underlying storage should give a huge boost.

All my remote desktop servers, database servers, etc. see a 10-100x random-I/O read performance increase on the SSD LUNs (bursts of writes go to cache first anyway).
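The easiest way to see the difference is to run the same SQLIO pattern twice, once sequential and once random (parameters here are just an illustration):

sqlio -kR -fsequential -b64 -o8 -t4 -s60 E:\testfile.dat
sqlio -kR -frandom -b8 -o8 -t4 -s60 E:\testfile.dat

On spindles the first number looks respectable and the second one collapses; on SSDs both stay high. If both devices report the same figures for the second test, something other than the disks is the limit.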
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

eickst
Posts: 21
Joined: Sat Mar 09, 2013 5:25 am

Mon Mar 11, 2013 5:33 pm

OK, just an update: with one of the nodes turned off, performance on the SSD is an order of magnitude faster than on the HDD, as the HDD performance plummets!

First array: 6x 900 GB 10k RPM SAS2 drives in RAID 10
Second array: 6x 512 GB Samsung 840 Pro SSDs in RAID 10

When just using the SSD array as local storage, the performance is obviously killer: sequential read/write is over 2 GB/s, and random 4k at QD32 is well over 100k IOPS.

I'm going to roll back to Server 2008 and see if that improves performance. I have to anyway, since apparently DPM has issues with 2012 CSVs, and 2012 is not as stable as I'm accustomed to.


*edit* Sorry, I forgot to include that the two servers are connected directly to each other via three 10 GbE links: one for sync, one for iSCSI, and one for live migration.
robnicholson
Posts: 359
Joined: Thu Apr 14, 2011 3:12 pm

Tue Mar 12, 2013 6:13 pm

This is perplexing, as on a SAN under typical day-to-day load, SSDs should offer huge performance boosts wherever there is a lot of random access going on. Sure, on sequential access over a 6 Gbit/s interface you'll not see much difference between spinning metal and solid state.
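As a rough back-of-envelope (typical spec-sheet figures, not measurements from this setup): a 6 Gbit/s interface tops out around 600 MB/s after 8b/10b encoding, and six 10k spindles in RAID 10 can stream roughly 3 x 150-200 MB/s = 450-600 MB/s sequentially, so sequential results converge on the interface limit either way. Random 4k is where they diverge: a 10k SAS drive manages on the order of 150 IOPS, so the whole array might do several hundred to a couple of thousand IOPS, versus tens of thousands for a single decent SSD.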

Cheers, Rob.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Thu Mar 14, 2013 3:26 pm

May I ask what your iSCSI connection bandwidth and SyncChannel bandwidth are?
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
eickst
Posts: 21
Joined: Sat Mar 09, 2013 5:25 am

Fri Mar 15, 2013 5:23 am

Anatoly (staff) wrote:May I ask what your iSCSI connection bandwidth and SyncChannel bandwidth are?
All 10 GbE connections. The only 1 GbE connection on the server is the management NIC.

NTTTCP shows around 9.7 Gbps throughput on all the 10 GbE links (one for sync, one for iSCSI, one for failover/live migration between the Hyper-V hosts).
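In case anyone wants to repeat the link test, I ran NTTTCP roughly like this (IP address and thread count illustrative; older NTttcp builds ship separate NTttcps.exe/NTttcpr.exe instead of the -s/-r switches):

ntttcp.exe -r -m 8,*,10.10.10.2 -t 60 (on the receiving node)
ntttcp.exe -s -m 8,*,10.10.10.2 -t 60 (on the sending node)

That's 8 threads for 60 seconds against the receiver's IP on the link being tested.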

iSCSI traffic never goes over 50 Mbit/s no matter how hard I try to hammer it.

I've used Round Robin (which gives HORRIBLE performance, BTW), Least Blocks, and Least Queue Depth, with Least Blocks giving the best performance. I'm assuming that's because it can read so much locally that it doesn't need to hit the other link much; Round Robin seems to force half of the blocks onto the slower link (who would ever have expected a 10 GbE link to be the slower one!).
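For reference, I've been switching policies with mpclaim (disk number illustrative; as far as I know the policy codes are 2 = Round Robin, 4 = Least Queue Depth, 6 = Least Blocks):

mpclaim -s -d (list MPIO disks and their current policy)
mpclaim -l -d 1 6 (set MPIO disk 1 to Least Blocks)

On Server 2012 the PowerShell equivalent for the global default should be Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LB.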


I'm going to reconfigure the whole thing one more time (it's good practice anyway) and test repeatedly: raw, iSCSI (before failover), and then iSCSI (with failover), and post the results.

I've been using SQLIO for the most part, but I can use Iometer if anyone has a standard test to use with it.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Mon Mar 18, 2013 10:25 am

Well, it all sounds really weird. Anyway, we'll look forward to an update from you.
There are two documents I want to provide you with before you reconfigure everything:
http://www.starwindsoftware.com/starwin ... -practices
http://www.starwindsoftware.com/starwin ... t-practice
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
eickst
Posts: 21
Joined: Sat Mar 09, 2013 5:25 am

Mon Mar 18, 2013 4:48 pm

Hey Anatoly, thanks for the links.

I'm actually waiting for support to email a download link for something to fix the loopback performance on the servers. I believe it's a new IOCTL, but I don't have any details beyond what was gleaned from my other thread.

Performance for me was better to a remote node over 10 GbE iSCSI than to the local server, so like I said, I'm just waiting on that fix.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Thu Mar 21, 2013 10:18 am

Just want to let you know that the build of the solution we are talking about is currently under testing; hopefully it will be ready for you on Monday.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
Connective
Posts: 1
Joined: Mon Apr 15, 2013 1:17 pm

Mon Apr 15, 2013 1:20 pm

Is there a significant difference in performance with the loopback performance fix?
I'm setting up new servers right now and just want to know.
Max (staff)
Staff
Posts: 533
Joined: Tue Apr 20, 2010 9:03 am

Mon Apr 15, 2013 2:56 pm

Our QA department reports a ~2.5x performance increase (400 -> ~1000 MB/s).
Max Kolomyeytsev
StarWind Software
eickst
Posts: 21
Joined: Sat Mar 09, 2013 5:25 am

Mon Apr 15, 2013 8:27 pm

I finally got around to setting this up this morning. We are not seeing a 2.5x increase; for us it's closer to a 50% improvement, which is still pretty good: we went from 650 MB/s to 900+ MB/s.

It's still a huge hit compared to the performance of the local array, so much so that we are considering scrapping the SAN altogether and just going with local storage and good backups.
Max (staff)
Staff
Posts: 533
Joined: Tue Apr 20, 2010 9:03 am

Tue Apr 16, 2013 7:49 am

Hmm, and are these results from sequential load tests?
Max Kolomyeytsev
StarWind Software