Thu Oct 07, 2010 4:03 pm
Hi,
I assume the T710s are new, but that you've already got the MD1000s and the PERC 6/E - if you were buying new you'd probably go for the newer SAS-2 kit.
The T710 is a fantastic server - I've used it for a client in a SAN-less Hyper-V deployment - and now that Dell has backtracked from their stupid policy of drive lock-in on the newer PERC RAID cards, it would make a great storage server head end, even if at 5U it does use quite a lot of rack space. The build quality is superb.
15K SAS for the OS & StarWind is overkill - get cheaper, slower drives (still RAID 1 though). Also consider the 16x 2.5" version of the chassis instead of the 8x 3.5". If you order the PERC 6/i as the internal RAID controller, you'll find that one of its connectors goes to the 16-drive backplane and the other is free. Unless there is a firmware limitation, you can add a Supermicro "mobile rack" - 8x 2.5" drives in the space of 2x 5.25" bays. It has a built-in SAS expander and I know it works with the PERC 6/i - I have a Dell server in the office running 4x 3.5" drives off one SAS connector and 8x 2.5" off the other. The mobile rack is SAS-1, though, and probably won't work with the newer SAS-2 PERC cards (LSI, who are behind the PERCs, have had issues with their SAS-2 cards running with SAS-1 expanders, even ones built on LSI chips, like Dell's and Supermicro's).
For performance I'd push the newer PERCs: they are PCIe 2.0 (not 1.0, so double the speed per PCIe lane), SAS-2 (up to 600MB/sec per drive link and, more importantly, 2400MB/sec between expander and RAID card) and come with up to 1GB of cache. Dell are now selling Pliant SSDs - SAS-2 and full duplex, with very high IOPS - and if you can stretch to a couple of these per server they will be fantastic for your SQL transaction logs if your databases are very write intensive. Otherwise rely on cache, as those SSDs are almost as expensive per GB as RAM!
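To put rough numbers on that, here's a back-of-envelope sketch (Python, just used as a calculator - the per-lane rates are the standard encoded figures, and the x8 slot / 4-lane wide port counts are the usual ones for these cards):

[code]
# Back-of-envelope, one-direction bandwidth in MB/s.
# PCIe 1.x / 2.0 use 8b/10b encoding: 2.5 / 5.0 GT/s per lane
# works out to roughly 250 / 500 MB/s per lane.
PCIE_MBS_PER_LANE = {"1.0": 250, "2.0": 500}
# SAS per-lane rates after encoding: 3 Gb/s -> ~300 MB/s, 6 Gb/s -> ~600 MB/s.
SAS_MBS_PER_LANE = {"SAS-1": 300, "SAS-2": 600}

def pcie_mbs(gen, lanes=8):            # PERCs sit in x8 slots
    return PCIE_MBS_PER_LANE[gen] * lanes

def sas_wide_port_mbs(gen, lanes=4):   # expander uplinks are 4-lane wide ports
    return SAS_MBS_PER_LANE[gen] * lanes

print(pcie_mbs("1.0"), pcie_mbs("2.0"))                        # 2000 4000
print(sas_wide_port_mbs("SAS-1"), sas_wide_port_mbs("SAS-2"))  # 1200 2400
[/code]

That 1200 vs 2400 MB/sec wide-port figure is exactly why the expander-to-card link stops being the bottleneck on the newer cards.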
If you have a good UPS then get as much RAM as possible and use it for StarWind cache. If you have lots of LUNs & lots of initiators, StarWind will create lots of threads, so it will benefit from as many cores as possible (go for 6-core 56xx Xeons), but high clock speed helps too!
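As a crude illustration of why cores matter - and this is only a rule of thumb of mine, not documented StarWind behaviour - assume roughly one busy worker per LUN-initiator session:

[code]
# ASSUMPTION: roughly one busy worker thread per iSCSI session
# (LUN x initiator pair); treat as a rule of thumb only.
luns, initiators = 10, 8
sessions = luns * initiators   # -> 80 concurrent workers
cores = 2 * 6                  # dual 6-core 56xx Xeons
print(sessions, cores)         # 80 busy threads spread over 12 cores
[/code]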
I have no experience of the newer SFP+ Intel NICs; I have the previous-gen CX4 ones. Enable receive side scaling (RSS) on the server and the initiators and they really rock; it does tax the CPU more, but the Xeons have lots of headroom. I can saturate 10GbE with a RAM disk even though I only have a single 55xx-series Xeon, AND that's with StarWind running in a VM.
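For reference, on Windows Server 2008 / 2008 R2 the global RSS switch is "netsh int tcp set global rss=enabled", run on both the StarWind box and the initiators; there's usually a per-adapter RSS option in the NIC driver's advanced properties to check as well.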
I don't trust RAID 50/5, but if those MD1000s are legacy and you've only got those drives, then you don't have much choice if you need the capacity. Otherwise I think you'd get better value and higher IOPS from RAID 10 using more, cheaper 7.2K SAS drives - 1 or 2TB - especially if they're on a different RAID controller to your internal drives. With an expensive JBOD chassis like the Dell PowerVault you have a high per-drive cost AND a high per-drive-bay cost, so it probably makes more sense to fill every bay with cheaper drives than to have just a few really fast ones.
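To illustrate with made-up but plausible per-drive figures (using the usual rough write-penalty model: 2 back-end I/Os per write for RAID 10, 4 for RAID 5):

[code]
# Rough random-IOPS model using the usual write penalties:
# RAID 10 costs 2 back-end I/Os per write, RAID 5 costs 4.
def array_iops(drives, iops_per_drive, write_penalty, read_fraction=0.5):
    backend = drives * iops_per_drive
    # blended back-end cost of one front-end I/O
    cost = read_fraction + (1 - read_fraction) * write_penalty
    return backend / cost

# Hypothetical figures: ~180 IOPS for a 15K SAS drive, ~75 for 7.2K SAS,
# 50/50 read/write mix.
print(round(array_iops(8, 180, write_penalty=4)))   # 8x 15K, RAID 5   -> ~576
print(round(array_iops(14, 75, write_penalty=2)))   # 14x 7.2K, RAID 10 -> ~704
[/code]

On numbers like those, the bigger RAID 10 set of slow drives comes out ahead on random IOPS as well as capacity per pound.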
Sounds like a really interesting build - be great to see how it goes.
cheers,
Aitor