Hardware Recommendations

dmansfield
Posts: 14
Joined: Sun Mar 14, 2010 8:34 pm

Thu Oct 07, 2010 3:08 pm

I am configuring two Dell T710 servers with Microsoft Server 2008 Standard x64 to run StarWind HA in an enterprise environment. We have three Dell R610 servers running VMware ESXi, each licensed with Microsoft Windows 2008 Datacenter for unlimited VMs. Currently we are operating 30 VMs: four Citrix Presentation servers with 60 users, Sybase DB, Exchange 2010, two SQL servers, vCenter, file/print, etc.

In the T710s I intend to use two dual-port Intel X520 SFP+ DA 10GbE NICs, one for iSCSI traffic and one for synchronization of the HA nodes. I will be running four RAID 1 groups locally in the servers with 146GB 15K SAS drives: one for the OS and StarWind, and the other three for database targets. Attached storage on each T710 will be an MD1000 connected via a PERC 6/E controller with 512MB of RAM. The disks in the MD1000 are fourteen 600GB 15K SAS drives configured as one RAID 50 group with one hot spare.

My questions: given what I will be running, what would you recommend for the CPUs and memory in the T710 StarWind servers? The CPU options I am looking at are either two Xeon E5520s or a single Xeon X5650. How much do the Intel X520 NICs tax the CPUs? Any other thoughts on configuring these for maximum performance? Thank you for any assistance.
Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Thu Oct 07, 2010 4:03 pm

Hi,

I assume that the T710s are new, but that you've already got the MD1000s and the PERC 6/E; if you were buying new you'd probably go for the newer SAS-2 kit.

The T710 is a fantastic server. I've used it for a client in a SAN-less Hyper-V deployment, and now that Dell has backtracked from their stupid policy of drive lock-in for the newer PERC RAID cards, T710s would make great storage server head ends, even if at 5U they do use quite a lot of rack space. The build quality is excellent.

15K SAS for the OS & StarWind is overkill - get cheaper, slower drives (still RAID 1 though). Also consider the 16x 2.5" version of the chassis instead of the 8x 3.5". If you order the PERC 6/i as internal RAID, you'll find that one of the connectors goes to the 16-drive backplane and the other is free. Unless there is a firmware limitation, you can get a Supermicro "mobile rack" - 8x 2.5" in the space of 2x 5.25". This has a SAS expander and I know it works with the PERC 6/i - I have a Dell server in the office running 4x 3.5" off one SAS connector and 8x 2.5" off the other. The mobile rack is SAS-1, though, and probably won't work with the newer PERC cards, which are SAS-2 (LSI, who are behind the PERCs, have had issues with their SAS-2 cards running with SAS-1 expanders, even ones that use LSI chips, like Dell's and Supermicro's).

For performance I'd push the newer PERCs, as they are PCIe 2.0 (not 1.0, so double the speed per PCIe lane), SAS-2 (up to 600MB/sec per drive and, more importantly, 2400MB/sec between expander and RAID card) and come with up to 1GB of cache. Dell are now selling Pliant SSDs - these are SAS-2 and full duplex, with very high IOPS - and if you can stretch to a couple of these per server they will be fantastic for your SQL transaction logs, if your databases are very write intensive. Otherwise rely on cache, as those SSDs are almost as expensive per GB as RAM!
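To put rough numbers on that PCIe 2.0 / SAS-2 point, here's a quick Python sketch using nominal link rates; the x8 slot widths are assumptions for illustration, not figures checked against the spec sheets:

```python
# Nominal link rates for the controller generations mentioned above -
# these are bus/link ceilings, not measured throughput.

PCIE1_LANE_MBS = 250   # PCIe 1.x: ~250 MB/s per lane
PCIE2_LANE_MBS = 500   # PCIe 2.0: ~500 MB/s per lane
SAS1_PHY_MBS = 300     # SAS-1: 3 Gbit/s per phy
SAS2_PHY_MBS = 600     # SAS-2: 6 Gbit/s per phy

def wide_port_mbs(phy_mbs, phys=4):
    """A SAS connector is an x4 wide port: four phys aggregated."""
    return phy_mbs * phys

print("PERC 6 (PCIe 1.0 x8) host link   :", PCIE1_LANE_MBS * 8, "MB/s")        # 2000
print("PERC H700 (PCIe 2.0 x8) host link:", PCIE2_LANE_MBS * 8, "MB/s")        # 4000
print("SAS-1 x4 wide port to expander   :", wide_port_mbs(SAS1_PHY_MBS), "MB/s")  # 1200
print("SAS-2 x4 wide port to expander   :", wide_port_mbs(SAS2_PHY_MBS), "MB/s")  # 2400
```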

If you have a good UPS then get as much RAM as possible and use it for StarWind cache. If you have lots of LUNs and lots of initiators, StarWind will create lots of threads, so it will benefit from as many cores as possible (so go for the 6-core 56xx Xeons), but high clock speed helps too!

No experience of the newer SFP+ Intel NICs; I have the previous-gen CX4 cards. Enable receive side scaling on the server and initiators and they really rock; it does tax the CPU more, but the Xeons have lots of headroom. I can saturate 10GbE with a RAM disk even though I only have a single Xeon 55xx series, and that's with StarWind in a VM.

I don't trust RAID 50/5, but if those MD1000s are legacy and you've only got those drives, then you don't have much choice if you need the capacity. Otherwise I think you'd get better value and higher IOPS from RAID 10 using 7.2K SAS drives - 1 or 2 TB - but using more of them, especially if they are on a different RAID controller to your internal drives. When you are using an expensive JBOD chassis like the Dell PowerVault, you have a high per-drive cost AND a high per-drive-bay cost. It probably makes more sense to fill every bay with cheaper drives than to have just a few really fast ones.

Sounds like a really interesting build - be great to see how it goes.

cheers,

Aitor
dmansfield
Posts: 14
Joined: Sun Mar 14, 2010 8:34 pm

Fri Oct 08, 2010 3:38 pm

Aitor,

Thank you for the detailed response and recommendations. You are correct that I already have the MD1000s; I picked them up when Dell had some very aggressive pricing at less than half of list price. I chose RAID 50 based on what I saw Dell recommending for their EqualLogic SAN implementations. We have been very happy with the performance; these are the results in MB/sec when running Iometer:
Locally on the current server (Dell PowerEdge 2950) with the attached MD1000: 10 workers SEQ write 64K = 703.48, 10 workers SEQ read 64K = 643.47; and
remote server to the data store over a 1GbE iSCSI connection: 10 workers SEQ write 64K = 116.73, 10 workers SEQ read 64K = 107.54.
We are anticipating better results over iSCSI with the 10GbE NICs.
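For what it's worth, a quick back-of-the-envelope on why the remote numbers sit where they do (a Python sketch; the protocol overhead allowance is an assumption, not a measurement):

```python
# Rough usable line rates for the iSCSI links discussed above. The 6%
# overhead allowance for Ethernet/IP/TCP/iSCSI headers is an assumption;
# real numbers depend on MTU, offloads and the initiator/target stacks.

def usable_mbs(link_gbps, overhead=0.06):
    raw_mbs = link_gbps * 1000 / 8   # Gbit/s -> MB/s
    return raw_mbs * (1 - overhead)

print("1 GbE :", round(usable_mbs(1)), "MB/s")   # ~118 MB/s -- the remote results above are already near this ceiling
print("10 GbE:", round(usable_mbs(10)), "MB/s")  # ~1175 MB/s per port
```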

For the T710s I am looking at the PERC H700 RAID card with 1GB of non-volatile cache. I like your advice about the 16x 2.5" drive bays. What do you recommend for hard drives for the OS and StarWind, 10K SAS?

What are you using/recommending for RAM cache size for StarWind HA Image Devices?

Do you recommend using a direct point-to-point connection between the StarWind servers for HA synchronization?

Thanks again
Aitor_Ibarra
Posts: 163
Joined: Wed Nov 05, 2008 1:22 pm
Location: London

Mon Oct 11, 2010 2:57 pm

Hi,

Those MD1000 results are actually a bit disappointing - unless maybe it's with a PERC 5/E (my retired 2950 came with a 5/i, and it's not as fast as a 6/i). And it's odd that you're getting faster writes than reads! Read speed, given that RAID 5 reads should perform similarly to RAID 0 reads, at least sequentially, should be closer to the number of drives x the speed of the drives. A 3.5" 15K SAS drive should be able to do 120MB/sec to 200MB/sec... You should be able to saturate SAS-1 (1200MB/sec) with six drives or more. Unless your Iometer tests were doing a mixed read/write test, in which case the numbers are great!
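To illustrate the "drives x speed" point, a rough Python sketch (the per-drive figure is the optimistic end of the range above, and is an assumption rather than a benchmark):

```python
# Rough expectation for sequential reads off the MD1000, using the figures
# above: up to ~200 MB/s per 3.5" 15K SAS drive, SAS-1 x4 wide port = 1200 MB/s.
# The per-drive rate and the drive counts are illustrative assumptions.

def expected_seq_read_mbs(drives, per_drive_mbs=200, link_cap_mbs=1200):
    """Sequential RAID 5/50 reads scale roughly with spindle count
    until the SAS-1 wide port (or the RAID card) becomes the limit."""
    return min(drives * per_drive_mbs, link_cap_mbs)

for n in (4, 6, 13):
    print(n, "drives ->", expected_seq_read_mbs(n), "MB/s")
# 4 drives -> 800, 6 drives -> 1200, 13 drives -> 1200 (link limited),
# so ~640 MB/s of sequential read points at a bottleneck elsewhere.
```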

For the OS and StarWind, unless you really want to put an IMG on the same disks (best avoided), you can get away with the slowest, cheapest drives possible - 7.2K SATA will be fine. This is because once you've booted, they won't be doing much more I/O than logging, so long as they are reliable. I would definitely put them into a RAID 1.

Alternatively, if you can afford it, make them the same spec as the other drives so that they can share a global hot spare. PERCs support multiple virtual disks on the same RAID set, so you could, in theory, put in 16x 15K SAS or even those Pliant SSDs, create a RAID 10 with 14 drives, keep two drives as hot spares, and set up one small virtual disk for boot and one massive one for data. Create the big one first if you use HDs so that the faster parts of the disks are used for data. A rough illustration of how that split could work out is sketched below.
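A Python sketch of the arithmetic; the 146GB drive size is taken from the original spec, while the boot virtual disk size is just an assumption for illustration:

```python
# Hypothetical layout for the 16x 2.5" chassis idea above: 14 drives in
# RAID 10, two global hot spares, and two virtual disks carved from one set.
# The 60 GB boot VD is an illustrative assumption.

DRIVE_GB = 146         # 146 GB 15K SAS, as in the original spec
RAID10_DRIVES = 14
BOOT_VD_GB = 60

usable_gb = (RAID10_DRIVES // 2) * DRIVE_GB   # RAID 10 mirrors halve raw capacity
data_vd_gb = usable_gb - BOOT_VD_GB

print("RAID 10 usable:", usable_gb, "GB")   # 1022 GB
print("Boot VD       :", BOOT_VD_GB, "GB")
print("Data VD       :", data_vd_gb, "GB")  # 962 GB -- create this one first
```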

Yes, do a direct connection for sync if you can: you save valuable switch ports, can use shorter, cheaper, lower-power cables, and you avoid the tiny bit of latency a switch adds.

You might find that even 10GbE is a bottleneck, as the maximum throughput of StarWind will be limited by your sync channel, and SAS-2 can reach 2400MB/sec on an x4 connector if the drives and card are up to it. You could give each 10GbE port a different IP address and then balance your HA sync channels across them, if you have enough 10GbE ports.
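A small sketch of that write-path reasoning (Python; it assumes synchronous mirroring to the partner node and an even balance across sync ports, which is a simplification):

```python
# Why the sync channel caps HA write throughput: every write is mirrored to
# the partner node before being acknowledged, so writes can't go faster than
# the slowest of the local array, the sync link and the client-facing link.
# The ~1175 MB/s per-port figure reuses the earlier 10 GbE estimate.

def ha_write_ceiling_mbs(array_mbs, sync_ports, port_mbs=1175):
    """Rough ceiling, assuming sync traffic balances evenly across ports."""
    return min(array_mbs, sync_ports * port_mbs)

print("SAS-2 array, 1x 10GbE sync:", ha_write_ceiling_mbs(2400, 1), "MB/s")  # 1175
print("SAS-2 array, 2x 10GbE sync:", ha_write_ceiling_mbs(2400, 2), "MB/s")  # 2350
```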

Mellanox do a 40GbE card (over one SFP) and there's even a 6x 10GbE SFP card out there... Useful if you run low on PCIe slots...

cheers,

Aitor
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Mon Oct 25, 2010 9:06 pm

Performance is still limited by the 1 GbE network, so ~110-120 MB/sec is what you should be expecting. That's why investing in 15K SAS spindles or SSDs is not what's going to help much... Better to go for 10 GbE (Intel 10 GbE cards sell for less than 500 bucks each here) and more and more RAM (used for write-back cache). And yes, going for a cross-over cable is a better idea than using a switch: you eliminate a potential point of failure, the cable is cheaper than a switch (sic!), and the cable doesn't have broken jumbo frame support (so far we've discovered one of the big names here in our lab suffering from internal buffer overflow and doing 1-2 MB/sec instead of 100 full-blown megabytes after maybe 30 minutes of hardcore test runs).
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
