I work at a small department within a state government. We run a data center that currently has about 25-30 servers; most are just 1GB boxes with a pair of mirrored HDs running dedicated apps or services. The state is in a push for green IT and server consolidation, and we're refreshing about half our servers this year, so we're looking at virtualizing a bunch of stuff.

We've just retired an old fiber-optic SAN that was nothing but problems: proprietary software/hardware, lack of knowledgeable support, no current staff who knew how to operate it, etc. The current line of thought with my manager is that we should just go with DAS drives on the virtual servers. I like the flexibility a centralized storage solution brings to the various virtual server products, the ability to easily migrate machines, etc., so I'm investigating iSCSI options for our Hyper-V infrastructure.

At the moment this hasn't been officially sanctioned, so I'm limited in the time and resources I can dedicate to the project. I like the idea of using Windows servers as the base for an iSCSI solution; we're 95+% a Windows-only shop and that leverages our expertise. StarWind isn't the least expensive solution, but they've been around for a while and have support available, so those are both pluses.
I have a few questions about hardware configuration, drive/spindle allocation to LUNs, etc.
Currently I'm looking at using a spare Dell 2950 with a single processor (3GHz dual-core Xeon). I can allocate either 4 or 8GB of RAM for the box, so my first question is whether there's any significant performance difference between x86 and x64 host OS installs (I'll probably be using WS2008 Std). I think I saw somewhere that StarWind has native 32- and 64-bit versions, but I can't find that data sheet now (it might have been on the old Rocket Division site). How much would performance benefit from the extra 4GB of RAM?
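Part of what I'm trying to reason through is how much RAM the iSCSI target could actually put to work as cache under x86 vs x64. Here's the rough back-of-envelope I've been using (the OS overhead figure and the ~2GB user address space limit for a 32-bit process are my own assumptions, not anything from StarWind's documentation):

    # Rough sketch of usable cache RAM, 32-bit vs 64-bit host (my assumptions, not vendor numbers)
    total_ram_gb = {"4GB box": 4, "8GB box": 8}
    os_overhead_gb = 1.5          # guess: WS2008 Std plus services
    x86_process_limit_gb = 2.0    # a 32-bit user process tops out around 2GB without /3GB tricks

    for box, ram in total_ram_gb.items():
        free_for_cache = ram - os_overhead_gb
        x86_cache = min(free_for_cache, x86_process_limit_gb)
        x64_cache = free_for_cache
        print(f"{box}: x86 target cache ~{x86_cache:.1f}GB, x64 target cache ~{x64_cache:.1f}GB")

If that's roughly right, the extra 4GB only really pays off on a 64-bit install, but I'd like confirmation.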
My 2nd question deals with NICs and network configuration. I have some spare 24-port enterprise-level gigabit switches I'm planning on scavenging for the proof-of-concept portion of this project. I don't want to short myself on network throughput and have the concept fail because of a lack of performance. I'm definitely planning on running the iSCSI traffic on a segregated network with dedicated NICs. I've seen the white paper on MPIO, and I'm wondering if there's a difference between NIC bonding (link aggregation) and MPIO in terms of performance, or whether NIC bonding is even supported. The servers have dual Broadcom NICs that support bonding/aggregation, I also have some spare dual-port PCIe Intel NICs around that support it, and my switches all support 802.3ad. Am I better off running aggregated links to aggregated ports within the same subnet/switch, or going with multiple switches/subnets and setting up MPIO with one of the load-balancing policies? Will a dual-port NIC give me enough throughput to host 10-15 lightly loaded servers, or do I need to find a quad-port NIC? I was thinking of speccing out 10GbE as an option for the final production deployment, but I've seen mention of a 400-500 MB/s bandwidth ceiling when using 10GbE anyway, so the cheaper quad-port GbE cards would probably get close to that using my existing switches.
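For the dual- vs quad-port question, this is the kind of rough math I'm doing (the ~110 MB/s usable per GbE link and the per-VM load are my own guesses for lightly loaded boxes, not measured numbers):

    # Back-of-envelope: aggregate iSCSI bandwidth per VM (all figures are rough assumptions)
    gbe_link_mbps = 110          # ~110 MB/s usable per gigabit link after protocol overhead
    vm_count = 15                # worst case from my 10-15 lightly loaded servers

    for ports in (2, 4):
        aggregate = ports * gbe_link_mbps
        per_vm = aggregate / vm_count
        print(f"{ports}-port GbE: ~{aggregate} MB/s aggregate, ~{per_vm:.0f} MB/s per VM if all {vm_count} hit it at once")

That's what makes me think a quad-port card plus MPIO might be good enough for the proof of concept, but I'd appreciate a sanity check on those numbers.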
My 3rd question deals with drives, spindles, and disk allocation to LUNs. I've seen that I'll want to use either the flat file or the CPD file format in order to take advantage of the full buffering capabilities of the product. I may convert a test database server over to the test iSCSI server for some performance testing, and again, I don't want to skew the results due to configuration issues. I currently have separate RAID arrays allocated for the OS, Temp, Data, Logs, and Backups. Should I allocate flat or CPD files on separate arrays under StarWind, or do I get better results building one big RAID 6 array with a ton of spindles and allocating everything off that? Or should I leave my database and email servers on DAS, take the hit on flexibility/recoverability for now, and just consolidate the less I/O-intensive servers?
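The spindle question mostly comes down to IOPS and write penalty, so here's the rough comparison I've been sketching (the 150 IOPS per 15K spindle, the 60/40 read/write mix, and the array sizes are assumptions I picked for illustration):

    # Rough IOPS comparison: one big RAID 6 vs a smaller dedicated RAID 10 (numbers are assumptions)
    spindle_iops = 150           # guess for a 15K SAS drive
    read_ratio = 0.6             # 60/40 read/write mix, typical-ish for a small database server

    def effective_iops(spindles, write_penalty):
        # every logical write costs 'write_penalty' disk I/Os (RAID 6 = 6, RAID 10 = 2)
        raw = spindles * spindle_iops
        return raw / (read_ratio + (1 - read_ratio) * write_penalty)

    print(f"12-spindle RAID 6 : ~{effective_iops(12, 6):.0f} IOPS")
    print(f" 6-spindle RAID 10: ~{effective_iops(6, 2):.0f} IOPS")

If that math holds, a smaller dedicated RAID 10 for data/logs could keep up with or beat one big RAID 6 on write-heavy loads, which is why I'm hesitant to just throw everything on a single large array.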
Thanks for any help/information you can provide on these questions; no doubt I'll think up more as I work through this process.
- Mike
