Help me with configuration for StarWind

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

petr
Posts: 13
Joined: Sun Feb 27, 2011 2:55 pm

Mon Apr 18, 2011 6:59 pm

Hi all,

we are planning to use StarWind for our Hyper-V cluster. We currently have about 6-10 VPS servers, each with 16-32 GB RAM, and we are running Hyper-V virtual servers on them. I would like to move to an iSCSI solution. Could you help me with the configuration (choosing the right parts)?

My first idea: each StarWind HA node will contain:

+ Supermicro 3U 16-HDD chassis, E1 model with SAS expander
+ Adaptec 6805 with flash backup module (600)
+ 16x WD RE4 1 TB or 2 TB HDDs
+ Intel Xeon (not sure which model to choose)
+ 16 or 32 GB ECC RAM

Each node will have 2-4 1 Gbps connections, each going to a different switch and to each StarWind HA node. How fast will this configuration be with MPIO enabled?

Do you have any recommendations?

Thank you
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Mon Apr 18, 2011 9:25 pm

1) You don't need any battery-powered controllers with HA; your UPS acts as the battery. The same goes for controller cache: it's day-before-yesterday technology... Your whole machine acts as a storage controller, so system RAM is used for caching. Skip the expensive RAID models and put the money into more SAS or SATA ports and more system RAM.

2) Go for cheap high-capacity SATA hard disks in RAID10 instead of expensive SAS ones in RAID5/6.

3) CPU horsepower does not matter much. However, more cores/sockets are preferred over a high clock speed.

4) The more RAM you put into your HA nodes, the more distributed write-back cache we'll allocate. A good number is 80% of your free system RAM.
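
To put rough numbers on that rule, here is a quick back-of-the-envelope sketch (Python, purely illustrative; the 4 GB reserved for the OS and services is my assumption, not a StarWind figure):

Code: Select all

def writeback_cache_gb(total_ram_gb, os_reserve_gb=4):
    """Rule of thumb above: ~80% of *free* system RAM goes to the
    distributed write-back cache. os_reserve_gb is an assumed
    allowance for Windows and other services."""
    return 0.8 * (total_ram_gb - os_reserve_gb)

# The two node sizes proposed above:
for ram in (16, 32):
    print(f"{ram} GB node -> ~{writeback_cache_gb(ram):.1f} GB write-back cache")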

With proper MPIO, performance scales nearly linearly. So from 4 GbE links I'd expect ~400 MB/sec for reads or writes, and more if reads and writes are combined. But you'd need V5.8 (not released yet) for full speed. V5.6 and V5.7, which we're going to release soon (V5.7 = V5.6 + bugfixes, actually), have a lot of spin locks in place to keep data safe (we don't confirm a write until it has touched the hard disk platters on both HA nodes), so for versions up to V5.7, HA performance is 50%-80% of non-HA performance. With V5.8 and up, throughput will multiply with the number of links and nodes.
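
For reference, the arithmetic behind those figures (a sketch; the ~117 MB/sec usable payload per 1 GbE link is a common rule of thumb, not a measured StarWind number):

Code: Select all

LINK_MBS = 117  # practical payload of one 1 GbE link, an assumption

def mpio_throughput(links, ha_efficiency=1.0):
    """Near-linear MPIO scaling across links; ha_efficiency models
    the 50%-80% HA penalty described above for versions up to V5.7."""
    return links * LINK_MBS * ha_efficiency

print(f"4 links, V5.8+  : ~{mpio_throughput(4):.0f} MB/sec")
print(f"4 links, <=V5.7 : ~{mpio_throughput(4, 0.5):.0f}-{mpio_throughput(4, 0.8):.0f} MB/sec")
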
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

kmax
Posts: 47
Joined: Thu Nov 04, 2010 3:37 pm

Mon Apr 18, 2011 10:11 pm

Dumb question, but is MCS going to be supported in the future?
kmax
Posts: 47
Joined: Thu Nov 04, 2010 3:37 pm

Mon Apr 18, 2011 10:19 pm

Regarding the original poster: it sounds like you are looking at the Supermicro 836E1.

While Adaptec cards work fine, I would lean towards an LSI card, since the backplane uses an LSI expander chip. I've had situations where my Adaptec card would not recognize any drives at all on the backplane... I would say it happened once out of 10 reboots, but that falls into unacceptable territory for me. A reboot at the "insert disk" prompt solved the problem, though.

Another thing: with 14-16 drives, a RAID 10 will outperform a RAID 6 in IOPS by about 2-2.5 times.
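
That ratio lines up with the usual write-penalty arithmetic. A sketch for small random writes, assuming ~80 IOPS per 7.2k SATA drive (an assumption): pure random writes come out 3x apart, and mixing reads in pulls the ratio down toward the 2-2.5x I've seen.

Code: Select all

def raid_write_iops(drives, drive_iops, write_penalty):
    """Random-write IOPS estimate: each logical write costs
    write_penalty backend I/Os (RAID 10 = 2, RAID 6 = 6)."""
    return drives * drive_iops / write_penalty

for drives in (14, 16):
    r10 = raid_write_iops(drives, 80, 2)  # mirrored write = 2 I/Os
    r6 = raid_write_iops(drives, 80, 6)   # read-modify-write = 6 I/Os
    print(f"{drives} drives: RAID 10 ~{r10:.0f} vs RAID 6 ~{r6:.0f} write IOPS ({r10 / r6:.1f}x)")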
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Mon Apr 18, 2011 11:22 pm

Multiple paths are preferred over multiple connections per session, so I don't know yet. There are much more important and performance-critical things still left unimplemented.
kmax wrote: Dumb question, but is MCS going to be supported in the future?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Mon Apr 18, 2011 11:23 pm

1) Thank you for sharing your Adaptec experience with us!

2) Yes, that's pretty much what we've seen here and at customers' sites as well. RAID10 >>> RAID6. Thanks again for one more confirmation.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

petr
Posts: 13
Joined: Sun Feb 27, 2011 2:55 pm

Tue Apr 19, 2011 7:07 am

kmax wrote: While Adaptec cards work fine, I would lean towards an LSI card, since the backplane uses an LSI expander chip. I've had situations where my Adaptec card would not recognize any drives at all on the backplane... I would say it happened once out of 10 reboots, but that falls into unacceptable territory for me. A reboot at the "insert disk" prompt solved the problem, though.
We have Adaptecs in all of our servers, so bringing in anything new would mean buying new spare hardware, which is costly. We already have Adaptecs in stock.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Tue Apr 19, 2011 9:38 am

In that case, stick with what you've had good experience with.

P.S. That's the great benefit of going with a custom solution :)
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
