Need help deciding between StarWind and EqualLogic

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

JesseR
Posts: 1
Joined: Sun Mar 14, 2010 6:22 am

Sun Mar 14, 2010 7:17 am

Hi All,
I work for a small company that is looking to transition to a VMware infrastructure. We're very tight on cash, and our IT consultants are pushing a single EqualLogic 6100 for $30k; I'm worried because, with only a single SAN, we won't have HA. The EqualLogic would use twelve 1 TB SATA hard drives in RAID-50, and they're expecting about 6000 IOPS from it. We have three types of virtualized servers running (SQL database, Exchange, and some low-load servers) with an estimated current use of 2000 IOPS.
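As a sanity check on those IOPS figures, here is a rough back-of-envelope sketch. The per-spindle numbers, the parity-RAID write penalty of 4, and the 70/30 read/write mix are assumptions for illustration only, not vendor specs:

```python
# Rough check: what can twelve 7.2K SATA spindles in RAID-50 actually serve?
# Assumptions (illustrative only): ~80 IOPS per SATA spindle, parity-RAID
# write penalty of 4, and a 70% read / 30% write workload mix.

def frontend_iops(spindles, iops_per_spindle, read_frac, write_penalty):
    """Front-end IOPS an array can sustain for a given read/write mix."""
    raw = spindles * iops_per_spindle                 # back-end IOPS available
    return raw / (read_frac + (1 - read_frac) * write_penalty)

estimate = frontend_iops(12, 80, read_frac=0.7, write_penalty=4)
print(f"12x SATA, RAID-50, 70/30 mix: ~{estimate:.0f} front-end IOPS")
# -> roughly 500 IOPS; even at 100% reads the array tops out near 12 * 80 = 960,
#    which is far from 6000 and below the estimated 2000 IOPS workload.
```

A RAID-10 layout would halve the write penalty, but the spindle count is still the limiting factor.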

I'm looking into the idea of building two SANs myself and running StarWind on them, and came up with two potential builds:

-=[ BUILD 1: Single Raid Volume ]=-
Case: SUPERMICRO CSE-846TQ-R900B Black 4U Rackmount Server Case w/ 900W Redundant Power Supply
Motherboard: SUPERMICRO MBD-X8DAH+-F-O Dual LGA 1366 Intel 5520 Extended ATX Dual Intel Xeon 5500
CPU: Intel E5520 Nehalem 2.26GHz 4 x 256KB L2 Cache 8MB L3 Cache LGA 1366
Heatsink: Intel BXSTS100A Active heat sink with fixed fan
RAM: Patriot PS312G13ER3K-E 12GB (3 x 4GB) 240-Pin DDR3 SDRAM ECC Registered DDR3 1333
RAID controller: Areca ARC-1680IX-24 24 Port PCIe x8 SAS RAID Card
NIC 1: Intel E10G41AT2 10 Gigabit AT2 Server Adapter - network adapter
NIC 2: Intel EXPI9402PT 10/100/1000Mbps PCI-Express PRO/1000 PT Dual Port Server Adapter 2 x RJ45
Hard Drives - OS: (2) Seagate ST380815AS 80GB 7200 RPM 8MB Cache SATA
Hard Drives - SAN: (12) Western Digital WD1001FALS 1TB 7200 RPM 32MB Cache SATA

-=[ BUILD 2: Two Raid Volumes ]=-
Case: SUPERMICRO 835TQ-R800B Black 3U Rackmount Server Case w/ 800W Redundant Power Supply
Motherboard: SUPERMICRO MBD-X8DAH+-F-O Dual LGA 1366 Intel 5520 Extended ATX Dual Intel Xeon 5500
CPU: Intel E5520 Nehalem 2.26GHz 4 x 256KB L2 Cache 8MB L3 Cache LGA 1366
Heatsink: Intel BXSTS100A Active heat sink with fixed fan
RAM: Patriot PS312G13ER3K-E 12GB (3 x 4GB) 240-Pin DDR3 SDRAM ECC Registered DDR3 1333
RAID controller 1: Areca ARC-1680IX-8 8 Port PCIe x8 SAS RAID Card
RAID controller 2: Areca ARC-1680IX-8 8 Port PCIe x8 SAS RAID Card
SAS HD Cages: Athena BP-SATA1842C 2.5" (SATA, SAS) Backplane Unit
OS HD Cage: iStarUSA BPU-2535V2 1 x 3.5" to 2 x 2.5" SATA I/II Hot-Swap Drive Cage
NIC 1: Intel E10G41AT2 10 Gigabit AT2 Server Adapter - network adapter
NIC 2: Intel EXPI9402PT 10/100/1000Mbps PCI-Express PRO/1000 PT Dual Port Server Adapter 2 x RJ45
Hard Drives - OS: (2) Western Digital WD800BEVT 80GB 5400 RPM 8MB Cache 2.5" SATA
Hard Drives - SAS: (4) Seagate ST973452SS 73GB 15000 RPM 16MB Cache 2.5" SAS 6Gb/s
Hard Drives - ATA: (8) Western Digital WD1001FALS 1TB 7200 RPM 32MB Cache SATA


In short:
1. The first potential build would have twelve 1 TB SATA drives, with the ability to scale to 24 drives. It would have four gigabit NICs to run to the VM servers and one 10 GbE NIC to connect to the secondary SAN.
2. The second potential build would have four 73 GB 15K SAS drives for SQL and Exchange (scalable to 8) and eight 1 TB SATA drives (not scalable) for all other VMs. Again, it would have four gigabit NICs and one 10 GbE NIC.
3. Each build is roughly $8k per box, so $16k for an HA setup.

A couple of questions:
1. What are the potential risks of building our own SAN for a production environment? Is it worth the extra $14k to go with an EqualLogic (I know, loaded question)?
2. Do the builds look good? Do any components have known compatibility or performance issues? Should I change anything (add a second CPU, more RAM, etc.)?
3. Which build would give better performance? Does it make sense to split the workload between a few 15K RPM SAS spindles and a medium number of 7.2K RPM SATA spindles, or should we go with a single larger 7.2K SATA array?
4. Which RAID configurations would be most appropriate? I would think RAID-10 for the SAS drives and RAID-6 for the SATA drives. Does this make sense? What are the pros of using RAID-50?
5. How would these builds stack up, performance-wise (or otherwise), against an off-the-shelf approach like the EqualLogic 6000 series?

Thanks for the help, it's greatly appreciated :D
JLaay
Posts: 45
Joined: Tue Nov 03, 2009 10:00 am

Mon Mar 15, 2010 9:24 am

Hi,

First, I'm not an expert on storage.
Like you, I had to dive into this.

The thread below might give you a better understanding:

http://www.yellow-bricks.com/2009/12/23/iops/

There are so many things to consider that my head spins :roll:.

I don't know what brand/type of 1 TB SATA drive will be put in the EqualLogic.
However, the indication for a SATA drive (see the thread linked above) is about 75 IOPS!
This might be higher with the proprietary technologies they use, but ...
I would demand a written guarantee.

Please keep us informed about your choice and the reason(s).

Greetz Jaap
min
Posts: 13
Joined: Thu Jan 28, 2010 6:32 pm

Fri Mar 19, 2010 3:25 pm

EqualLogic would be easy to set up and use, but expensive to upgrade or expand: you cannot add disk trays, and extra disks only come with a new head, so expansion = repurchase.
EqualLogic uses a 64 MB page size for its snapshot space allocation, meaning a 1-byte change in data results in 64 MB of snapshot space. That's particularly bad for database applications, which typically "spray" small amounts of data across a large disk surface.
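To make that amplification concrete, here is a toy model (my own illustration, not EqualLogic's actual allocator) of copy-on-first-write snapshot space with 64 MB pages versus 64 KB pages:

```python
# Toy copy-on-first-write model: the first write to a page after a snapshot
# reserves the whole page in snapshot space. Sizes are illustrative only.
import random

def snapshot_space(page_size, volume_size, writes, seed=1):
    """Bytes of snapshot space consumed by `writes` scattered small writes."""
    random.seed(seed)
    touched_pages = {random.randrange(volume_size) // page_size
                     for _ in range(writes)}
    return len(touched_pages) * page_size

VOL = 500 * 2**30                             # 500 GiB volume
for page in (64 * 2**10, 64 * 2**20):         # 64 KiB vs 64 MiB pages
    used = snapshot_space(page, VOL, writes=10_000)
    print(f"page size {page >> 10:>6} KiB -> ~{used / 2**30:6.1f} GiB reserved")
# ~10,000 scattered database writes (tens of MiB of real change) reserve well
# under 1 GiB with 64 KiB pages, but hundreds of GiB with 64 MiB pages.
```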

StarWind, on the other hand, the last time I checked, does not do automatic HA failover; you have to choose one of replication, snapshot, HA, or thin provisioning for a volume, and you cannot have all the features on one volume. And StarWind does not report errors, send alerts, or provide performance statistics.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Fri Mar 19, 2010 7:06 pm

StarWind is not that difficult to set up either, so it's really a "web-based GUI vs. custom management console" battle :)

We use somewhat smarter memory allocation logic; the virtual page size can vary, but it's usually 4 KB - 64 KB (it depends on the volume size, to keep a balance between data and allocated metadata). But not megabytes, and certainly not 64 MB.
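Purely to illustrate the balance being described (this is not StarWind's actual allocation code, just a sketch with made-up limits): growing the page size with the volume size keeps the metadata table bounded.

```python
# Sketch: pick the smallest power-of-two page size, between 4 KiB and 64 KiB,
# that keeps the per-volume page table under a fixed number of entries.
# The 16M-entry cap is an arbitrary figure chosen for this illustration.

def pick_page_size(volume_bytes, max_entries=2**24,
                   min_page=4 * 2**10, max_page=64 * 2**10):
    page = min_page
    while page < max_page and volume_bytes // page > max_entries:
        page *= 2                                  # bigger pages, less metadata
    return page

for gib in (16, 256, 2048):
    vol = gib * 2**30
    page = pick_page_size(vol)
    print(f"{gib:>5} GiB volume -> {page >> 10:>2} KiB pages, "
          f"{vol // page:,} metadata entries")
```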

StarWind does not allow snapshots / CDP to be used together with HA. FOR NOW. The upcoming version will have a re-worked storage stack that allows mixing "containers" in any combination. We're also working on alerts / performance monitoring, so you'll see those soon as well.
min wrote:EqualLogic would be easy to set up and use, but expensive to upgrade or expand: you cannot add disk trays, and extra disks only come with a new head, so expansion = repurchase.
EqualLogic uses a 64 MB page size for its snapshot space allocation, meaning a 1-byte change in data results in 64 MB of snapshot space. That's particularly bad for database applications, which typically "spray" small amounts of data across a large disk surface.

StarWind, on the other hand, the last time I checked, does not do automatic HA failover; you have to choose one of replication, snapshot, HA, or thin provisioning for a volume, and you cannot have all the features on one volume. And StarWind does not report errors, send alerts, or provide performance statistics.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

min
Posts: 13
Joined: Thu Jan 28, 2010 6:32 pm

Sat Mar 20, 2010 2:12 am

anton (staff) wrote:StarWind is not that difficult to set up either, so it's really a "web-based GUI vs. custom management console" battle :)
Sorry, it was not my intention to imply that the StarWind setup is difficult. It took me about 5 minutes to download the software and bring the first volume up, but a little longer to realize that all the features I like are mutually exclusive. :cry:
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Sat Mar 20, 2010 12:08 pm

Which ones? CDP / snapshots Vs. HA configuration?
min wrote:
anton (staff) wrote:StarWind is not that difficult to set up either, so it's really a "web-based GUI vs. custom management console" battle :)
Sorry, it was not my intention to imply that the StarWind setup is difficult. It took me about 5 minutes to download the software and bring the first volume up, but a little longer to realize that all the features I like are mutually exclusive. :cry:
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

cbsit
Posts: 3
Joined: Thu Mar 18, 2010 9:14 pm

Mon Mar 22, 2010 7:27 pm

I think you are looking at two extreme ends of the spectrum, if you ask me. On one hand you have a Dell EqualLogic, which is a very fancy and costly iSCSI SAN solution, and on the other you have what looks like a white-box system built by hand or by some Supermicro reseller. Rather than going all the way to the white-box end, I would recommend a middle-of-the-road solution: look at something like the HP StorageWorks X1600 servers, which have 12-14 drive bays in a 2U enclosure, and run StarWind on that. This way you get "enterprise" grade hardware support more along the lines of the EqualLogic, but still save about a third.

The other thing you might consider is using a RAID 1+0 SATA configuration for some of the storage for your SQL rather than going SAS, which I am not sold on as the best value option. If you have a high IOPS demand on a low-capacity workload, as most SQL is, I would consider SSD instead. Something like two of Intel's enterprise SSDs (X25-E) in RAID 1 will blow away even 12 SAS drives in RAID 0, or more like 26 SAS drives in RAID 10. Of course, the X25-E is about $400.00 for 32 GB, but if performance is the objective, that will do it.
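Some rough numbers behind that comparison; the per-device IOPS figures are assumptions for illustration (datasheet-class numbers for the X25-E, a typical figure for a 15K spindle), not benchmark results:

```python
# Small random-write capability, very roughly. Figures are assumptions.
SSD_WRITE_IOPS = 3300        # assumed 4K random write IOPS for one Intel X25-E
SAS15K_IOPS = 180            # assumed random IOPS for one 15K RPM SAS spindle

ssd_raid1 = SSD_WRITE_IOPS                 # RAID 1 pair writes at one device's rate
sas_raid0_12 = 12 * SAS15K_IOPS            # 12 spindles striped, no redundancy
sas_raid10_26 = 26 * SAS15K_IOPS // 2      # RAID 10: each write hits two spindles

print(f"2x X25-E, RAID 1    : ~{ssd_raid1} write IOPS")
print(f"12x 15K SAS, RAID 0 : ~{sas_raid0_12} write IOPS")
print(f"26x 15K SAS, RAID 10: ~{sas_raid10_26} write IOPS")
# -> ~3300 vs ~2160 vs ~2340 on writes, and on random reads (tens of thousands
#    of IOPS per X25-E) the SSD mirror isn't even in the same league.
```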
akhan
Posts: 1
Joined: Tue Mar 23, 2010 4:32 pm

Tue Mar 23, 2010 4:48 pm

We looked at the EqualLogic solution as well but went with StarWind instead. So far, so great, and performance is great. We are using StarWind v5 Enterprise HA with Hyper-V clustered hosts, running SQL Server 2008 on a few VMs, Exchange 2010, and quite a few IIS 7 web servers. Again, so far, so great.

Hardware Used:
Dell PowerEdge R710 with six 1 TB 7200 RPM SATA drives and 12 GB RAM (Windows Server 2008 R2 Datacenter Edition)
Dell PowerConnect 6224 switches with jumbo frames enabled

HA Partner Hardware:
Dell PowerEdge 1950 connected to an MD1000 with twelve 750 GB 7200 RPM SATA drives and 16 GB RAM (Windows Server 2003 Enterprise 64-bit)
This was our first StarWind server, running StarWind 4, which has since been upgraded to StarWind 5 for the HA setup.

Regards,
Akhan
Constantin (staff)

Wed Mar 24, 2010 4:16 pm

I'm glad to see that you are pleased with our software.
s.price
Posts: 5
Joined: Tue Apr 27, 2010 1:21 pm

Tue Apr 27, 2010 4:23 pm

The other post about using enterprise hardware makes a good point. Dell and HP both offer 2U servers that hold 12 drives and come with decent SAS RAID controllers.
Now that near-line SAS drives are readily available, you can mix drive sizes in a server or storage node, as mentioned above. Near-line SAS drives are 7.2K RPM drives, essentially the same as SATA drives but with SAS interfaces; you only gain the error handling of the SAS protocol.
A couple of items to note. If you need 6000 IOPS, a single 12-drive storage node is not going to deliver that, with the exception of SSD. 15K RPM SAS drives deliver about 180 IOPS each, 10K RPM SAS drives about 150 IOPS each, and SATA / near-line SAS about 80 IOPS each. Even if the drives were configured as RAID 0, the goal of 6000 IOPS could not be reached (12 x 180 = 2160).
The second item to be aware of is the BER, or bit error rate. On enterprise-class drives an error can occur roughly once per 10^12 to 10^13 bits, depending on the drive and manufacturer; the definition is that every time about that many bits are read or written, an error occurs and the operation fails. So take a RAID group of eight 1 TB drives: a drive fails and a rebuild begins. Each surviving drive is read in full, roughly 7.2 terabits each, for a total of more than 50 terabits, which begins to put the BER within statistical reality. If a bit error occurs during a rebuild, the array essentially crashes. The BER was of no concern for many years because, with drives at 4, 9, 18, 36, 72, and even 146 GB, it was difficult to have enough spindles in a RAID group to approach it. Double parity, i.e. RAID 6, is implemented to protect large-capacity arrays.
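To put that "statistical reality" in numbers, here is a quick sketch. It uses the error rates quoted above plus the one-error-per-10^14-bits figure commonly quoted for desktop SATA drives; drive capacity is taken as 8e12 bits per 1 TB, in the same ballpark as the ~7.2 terabits mentioned above:

```python
# Probability of hitting at least one unrecoverable error while rebuilding a
# parity RAID group of eight 1 TB drives (the seven survivors are read in full).
import math

DRIVE_BITS = 8e12                      # 1 TB (decimal) ~ 8e12 bits
SURVIVORS = 7
bits_read = SURVIVORS * DRIVE_BITS     # = 5.6e13 bits, the ">50 terabits" above

for exp in (12, 13, 14):               # BER of one error per 10^exp bits
    p_hit = 1 - math.exp(-bits_read / 10**exp)   # Poisson approximation
    print(f"BER 1 per 1e{exp} bits: P(error during rebuild) ~ {p_hit:.1%}")
# -> ~100% at 1e12, ~99.6% at 1e13, ~43% at 1e14: large SATA parity-RAID
#    rebuilds really do court failure, which is the case for double parity / RAID 6.
```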
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed Apr 28, 2010 9:21 pm

From my point of view, 10K and 15K RPM drives are a waste of money: they don't show the kind of performance benefit that SSDs do. That's why, for example, StarWind is going to support a multi-level cache soon: L1 (RAM), L2 (SSD), and L3 (a dedicated HDD for request indexing). And our future is SSD (primary storage) and RAM (distributed cache), of course.
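For anyone unfamiliar with the idea, here is a minimal sketch of what a tiered read cache looks like conceptually. This is an illustration of the general technique only, not StarWind's implementation; the capacities and the LRU policy are arbitrary:

```python
# Two-tier read cache: a small fast L1 (think RAM) in front of a larger, slower
# L2 (think SSD); misses fall through to the backing disks. Both tiers are LRU.
from collections import OrderedDict

class TieredCache:
    def __init__(self, l1_blocks, l2_blocks, backend_read):
        self.l1, self.l2 = OrderedDict(), OrderedDict()   # block -> data, LRU order
        self.l1_cap, self.l2_cap = l1_blocks, l2_blocks
        self.backend_read = backend_read   # e.g. a function reading the SATA array

    def _put(self, cache, cap, block, data):
        cache[block] = data
        cache.move_to_end(block)
        if len(cache) > cap:
            cache.popitem(last=False)      # evict the least recently used block

    def read(self, block):
        if block in self.l1:               # L1 (RAM) hit
            self.l1.move_to_end(block)
            return self.l1[block]
        if block in self.l2:               # L2 (SSD) hit: refresh and promote
            self.l2.move_to_end(block)
            data = self.l2[block]
        else:                              # miss: go to the spinning disks
            data = self.backend_read(block)
            self._put(self.l2, self.l2_cap, block, data)
        self._put(self.l1, self.l1_cap, block, data)
        return data

cache = TieredCache(l1_blocks=4, l2_blocks=64, backend_read=lambda b: f"block {b}")
print(cache.read(7), cache.read(7))        # the second read is served from L1
```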
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

s.price
Posts: 5
Joined: Tue Apr 27, 2010 1:21 pm

Fri Apr 30, 2010 12:51 am

Interesting debate. SSD performance is currently stunted: the lower-cost platforms attach the devices to traditional SATA/SAS controllers, which limits them to the controller's ASIC and the PCIe bus. What is the point or value of spending the money only to be bottlenecked by the controller? In addition, you would need to stack storage nodes and aggregate or span across the nodes, each with 10 GbE cards, which again adds the cost of 10 GbE switches. A lot of money for not much capacity, and expensive expansion.
The higher-performance SSD systems are installed directly on the PCIe bus. We have tested a couple and they can be impressive, but they can require a fair amount of CPU. There is also the limit on how many cards/disks can be installed in a system.
For someone to need that kind of I/O, they would need a large number of VMs or highly transactional processing. But if that requirement is met, then there is also a need for a lot of capacity, which SSD cannot deliver for the money.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Fri Apr 30, 2010 8:34 am

Controller limitations: I don't see how 10 GbE and "lower cost platforms" (c) ... can mix. Honestly.

10 GbE switches: Please read above; the key word is "backbone". To have two hypervisor nodes, say ESX1 and ESX2, MPIO-connected to two StarWind nodes, say SW1 and SW2, you don't need any switches: it's a point-to-point connection. The same goes for the cross-link between SW1 and SW2 (to carry the HA traffic).

CPU cycles: We don't expect StarWind storage nodes to do any job other than serving I/O. Do you?

With the last statement I can agree 200%. I see dirt-cheap, highly redundant SATA drives as the primary storage, with a multi-level cache (RAM and SSD) to accelerate it.
s.price wrote:Interesting debate. SSD performance is currently stunted: the lower-cost platforms attach the devices to traditional SATA/SAS controllers, which limits them to the controller's ASIC and the PCIe bus. What is the point or value of spending the money only to be bottlenecked by the controller? In addition, you would need to stack storage nodes and aggregate or span across the nodes, each with 10 GbE cards, which again adds the cost of 10 GbE switches. A lot of money for not much capacity, and expensive expansion.
The higher-performance SSD systems are installed directly on the PCIe bus. We have tested a couple and they can be impressive, but they can require a fair amount of CPU. There is also the limit on how many cards/disks can be installed in a system.
For someone to need that kind of I/O, they would need a large number of VMs or highly transactional processing. But if that requirement is met, then there is also a need for a lot of capacity, which SSD cannot deliver for the money.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

sielbear
Posts: 11
Joined: Sat Apr 24, 2010 6:20 pm

Wed May 05, 2010 5:51 am

Check out the Supermicro recommendations post. A couple of us built SANs with Intel chassis and have been quite pleased with the build quality and price. I used the $250 board-to-board RAID module (based on the LSI 1078), and it seems to be providing great performance. I have 2x 2.5" boot disks and 12 drives in a 2U chassis.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Wed May 05, 2010 11:37 am

What hard drives did you finally come up with?
sielbear wrote:Check out the Supermicro recommendations post. A couple of us built SANs with Intel chassis and have been quite pleased with the build quality and price. I used the $250 board-to-board RAID module (based on the LSI 1078), and it seems to be providing great performance. I have 2x 2.5" boot disks and 12 drives in a 2U chassis.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
