Software-based VM-centric and flash-friendly VM storage + free version
-
Slicster
- Posts: 1
- Joined: Mon Nov 01, 2010 5:57 pm
Mon Nov 01, 2010 6:10 pm
Hi Guys,
StarWind is new to me, but it sounds very interesting for what I'd like to accomplish, since most SSD SAN solutions from the big OEMs are too expensive at the moment.
I'm looking to create an SSD-based SAN and would like to do it with StarWind, since it is Windows-based and should let me build a reliable hardware RAID that plays well with most SSD garbage-collection algorithms. Anyhow, here is what I would like to do, but I'm not sure if it will work...
StarWind Server
1U or 2U rackable server
Intel Xeon QuadCore Processor
4GB RAM
2 x 1GbE for LAN
2 x 10GbE for iSCSI
2 x 32GB SSD (for OS in mirror)
6-8 2.5" 256GB SandForce based SSD in RAID-5 (1TB Datastore)
ESXi Servers
2 x IBM SystemX servers
2 x Xeon QuadCore Processors
32GB RAM
ESXi 4
2 x 1GbE for LAN
2 x 10GbE for iSCSI (cross-connected to StarWind)
So my ESXi hosts would be cross-connected to the StarWind SAN at 20GbE aggregate.
Here are a few questions I have about the setup I want to put in place...
Do you think I will get good performance?
Will it even work as I'm planning, or do I need additional components to get it working?
Does anyone know whether SSDs will work well in this sort of situation?
Is it possible to team 2 x 10GbE NICs for use with iSCSI? (Rough bandwidth math below.)
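For that last question, here is the sort of back-of-the-envelope math I've been doing (a Python sketch; the ~10% allowance for TCP/IP and iSCSI framing overhead is just my guess):
```python
# Rough numbers I'm hoping for from the 2 x 10GbE iSCSI links.
# The ~10% overhead allowance is a guess, not a measured figure.

GBIT = 1_000_000_000  # bits per second in 1 Gbit/s

def usable_mb_per_s(links, gbit_per_link, overhead=0.10):
    """Aggregate usable bandwidth in MB/s across the iSCSI paths."""
    raw_bits = links * gbit_per_link * GBIT
    return raw_bits * (1 - overhead) / 8 / 1_000_000

# Not sure whether teaming or MPIO is the right way to spread the
# load across both links -- hence the question above.
print(f"one 10GbE path : {usable_mb_per_s(1, 10):.0f} MB/s")
print(f"both paths used: {usable_mb_per_s(2, 10):.0f} MB/s")
```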
-
Max (staff)
- Staff
- Posts: 533
- Joined: Tue Apr 20, 2010 9:03 am
Tue Nov 02, 2010 12:24 pm
Hi, the config looks very good; it would be nice to see some benchmarks from such a cheetah-fast configuration :)
The VMware hosts should also be reviewed by the guys from VMware, but I think they'll give you a thumbs-up on those as well.
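If you do benchmark it, even a tiny script gives a ballpark figure before you reach for a proper tool. A minimal sketch (Python; the file path and sizes are placeholders, and the OS page cache will flatter the result):
```python
# Quick-and-dirty 4K random-read check. Purely a sanity test, not a
# substitute for a real benchmark tool.
import os, time, random

PATH = "testfile.bin"   # placeholder test file on the SSD datastore
SIZE = 1 << 30          # 1 GiB
BLOCK = 4096
SAMPLES = 20_000

# Create the test file once; os.urandom gives incompressible data,
# which matters on SandForce drives (see Aitor's point below).
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        for _ in range(SIZE >> 20):
            f.write(os.urandom(1 << 20))

with open(PATH, "rb", buffering=0) as f:
    start = time.perf_counter()
    for _ in range(SAMPLES):
        f.seek(random.randrange(SIZE // BLOCK) * BLOCK)
        f.read(BLOCK)
    elapsed = time.perf_counter() - start

# The OS page cache will inflate this; real tools bypass it.
print(f"~{SAMPLES / elapsed:,.0f} random 4K read IOPS at queue depth 1")
```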
Max Kolomyeytsev
StarWind Software
-
Aitor_Ibarra
- Posts: 163
- Joined: Wed Nov 05, 2008 1:22 pm
- Location: London
Wed Nov 03, 2010 5:00 pm
Great-looking specs for the StarWind boxes... however, SSD for the OS is overkill: you don't need it for performance, since there will be very little I/O activity once the system has booted. But if your rationale is reliability (solid state vs. mechanical), then go for it.
SandForce SSDs have been criticised for somewhat inflated performance numbers: you aren't going to see the claimed figures in the real world unless your data is highly compressible. I can't vouch either way, because I've not used them myself, but a few people on hardforum.com and elsewhere have said performance can drop significantly below Intel speeds in real-world usage. However, they are still going to kill hard drives on IOPS.
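You can see why the numbers swing so much by benchmarking with both compressible and incompressible data. A small illustrative sketch (Python; the buffer size is arbitrary):
```python
# SandForce controllers compress writes, so all-zero test patterns fly
# while incompressible data shows the worst-case speed. Generate both
# kinds of buffer for a write test.
import os, zlib

SIZE = 64 * 1024 * 1024  # 64 MiB, arbitrary

compressible = b"\x00" * SIZE        # compresses to almost nothing
incompressible = os.urandom(SIZE)    # random bytes barely compress

for name, buf in [("zeros", compressible), ("random", incompressible)]:
    ratio = len(zlib.compress(buf, 1)) / len(buf)
    print(f"{name:>6}: compresses to {ratio:.1%} of original size")

# Writing the 'random' buffer approximates real-world throughput;
# the 'zeros' buffer approximates the marketing numbers.
```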
Personally, I'd avoid RAID 5 and use RAID 10 instead, but with SSDs I can see the temptation, especially if the RAID card can keep up with them.
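To put that trade-off in numbers, taking the 8-drive case above (a quick sketch; the per-write I/O counts are the textbook figures):
```python
# RAID arithmetic for 8 x 256 GB SSDs, illustrative only.
# Small-write penalty: RAID 5 needs read-data + read-parity +
# write-data + write-parity = 4 I/Os per random write; RAID 10
# needs 2 (one per mirror side).

DRIVES, SIZE_GB = 8, 256

raid5_capacity = (DRIVES - 1) * SIZE_GB   # one drive's worth of parity
raid10_capacity = DRIVES // 2 * SIZE_GB   # half the drives are mirrors

print(f"RAID 5 : {raid5_capacity} GB usable, 4 I/Os per small write")
print(f"RAID 10: {raid10_capacity} GB usable, 2 I/Os per small write")
```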
You may want to take a look at the FastPath and CacheCade options on LSI RAID cards. FastPath speeds up SSDs, improving both transfer rates and IOPS. CacheCade is like Adaptec MaxIQ: it uses your SSDs as a read cache for your hot data. The advantage over Adaptec is that the LSI cards are SAS 6G, so you can get more bandwidth to a bunch of SSDs, even if each individual SSD is only capable of 3G transfers.
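If it helps to picture the CacheCade/MaxIQ mechanism, here it is in toy form (a Python sketch; dicts stand in for the SSD cache and the backing array):
```python
# Keep recently hot blocks on fast media in front of slow media,
# evicting the coldest block when the cache fills (LRU).
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.ssd = OrderedDict()   # block -> data, in LRU order

    def read(self, block, hdd):
        if block in self.ssd:               # cache hit: fast path
            self.ssd.move_to_end(block)
            return self.ssd[block]
        data = hdd[block]                   # miss: slow array read
        self.ssd[block] = data              # promote to SSD cache
        if len(self.ssd) > self.capacity:
            self.ssd.popitem(last=False)    # evict the coldest block
        return data

hdd = {i: f"block{i}".encode() for i in range(1000)}
cache = ReadCache(capacity_blocks=100)
cache.read(7, hdd)   # miss, promoted to the cache
cache.read(7, hdd)   # hit, served from the "SSD"
```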
Maybe add more system RAM to act as a write-back (WB) cache, too, if you have a good UPS...
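The UPS matters because a write-back cache acknowledges writes before they reach disk, so anything still buffered at power loss is gone. In toy form (purely illustrative):
```python
# Writes land in RAM and are acknowledged immediately; a background
# flush later performs the slow, real writes.
import collections

class WriteBackCache:
    def __init__(self, backing, flush_threshold=64):
        self.backing = backing                  # dict standing in for disk
        self.dirty = collections.OrderedDict()  # blocks not yet on disk
        self.flush_threshold = flush_threshold

    def write(self, block, data):
        self.dirty[block] = data                # ack straight from RAM
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def flush(self):
        while self.dirty:
            block, data = self.dirty.popitem(last=False)
            self.backing[block] = data          # the slow, real write

disk = {}
cache = WriteBackCache(disk)
cache.write(0, b"hello")   # fast: lands in RAM only
cache.flush()              # now it is actually on "disk"
```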
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
Sun Nov 07, 2010 3:01 pm
1) "Sad but true" (c) ... Aitor is 200% correct on SandForce-based SSDs. Right now we're working on StarWind-related project and have to abandon NTFS-formatted SSDs and use RAW file system as performance for reads and writes drop ~2x ~3x if the drive cannot predict next I/O operation address.
2) Indeed. See http://www.baarf.com for details on why you should avoid "complex" RAID levels as much as possible (basic striping and mirroring excepted).
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software
