I am developing a migration away from the ZFS platform on our network that was set up by the previous IT person, and I am hitting some planning snags as I try to make the most of our hard drives and network connections. I do not want to invest a lot in refurbishing the existing platform until I establish a baseline with properly configured connectivity. Here is my existing setup:
Dell PE2950
- Windows 2003R2 x64
- Xeon E5310
- 16GB RAM
- 3x500GB 7200 RPM HDDs in RAID 5
- 2x1GbE NICs
- Domain Controller, file and print server

Dell PE2950
- Citrix XenServer
- Xeon E5310
- 16GB RAM
- 3x80GB 7200 RPM HDDs in RAID 5
- 2x1GbE NICs
- Guest OSes:
  - Windows 2003R2 x86 (file and SQL Server)
  - Windows XP
  - CentOS (various DB and hosting roles)

Super Micro CSE-836TQ-R800B
- napp-it ZFS
- Xeon E3-1220 v2
- 16GB RAM
- 4x80GB 7200 RPM HDDs (napp-it)
- 10x1TB 7200 RPM HDDs (ZFS)
- 4x120GB SSDs (2 for logs, 2 for cache)
- 2x1GbE NICs

Super Micro (too old to care or look up the case)
- napp-it ZFS
- 2x Xeon E5310
- 4GB RAM
- 2x120GB 7200 RPM HDDs (napp-it)
- 4x2TB IntelliPower HDDs (ZFS)
- 2x1GbE NICs
There are a lot of things wrong with this setup besides the fact that it is based on ZFS. The plan was for the big storage box to be the primary file storage repository on premises and for the older Super Micro box to go into a data center and receive a nightly upload. The primary storage box is currently configured with SMB shares, with ACLs handled by the 2003R2 x64 server. It also shares out the VMs to the Citrix box, and there are user folder redirects mapped to the storage box as well.

It gets better, though: notice that the box was ordered with the Super Micro 836TQ backplane, which requires each drive to be wired directly to the controller. I have now ordered the proper backplane and am just waiting on the SFF-8087 cables to utilize all 16 drive bays. Currently, only 8 of the 1TB drives are usable by this array, since they ran out of cabling to hook everything up once they added the SSDs to try to improve performance. The other Super Micro box is already maxed out on drives, so there is no further upgrading available there.
Moving forward, the ideas I've been throwing around are as follows:
Both Dell PE2950s will be converted to 2012 Hyper-V hosts to balance out the load going to the sole Citrix box. The big storage device will also go 2012 Hyper-V with StarWind Native SAN for Hyper-V Free. One of the Dell PE2950s will have the RAID 5 array removed and replaced with 4x600GB 15k SAS HDDs in RAID 10. This will be the primary VM host and the baseline for future VM hosting (and the justification for future investment in HDDs). The other will be upgraded as soon as that investment pays off (right now there is a SQL server stored on the ZFS VM volume--you can almost hear the tables collapsing under the weight of that much overhead).
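To put rough numbers behind the RAID 5 to RAID 10 change, here is the back-of-the-envelope math I keep running. The per-disk IOPS figures are assumptions (roughly 75 IOPS for a 7200 RPM SATA disk and 175 for a 15k SAS disk), not measurements from these arrays, and the read/write mix is a guess.

```python
# Rough RAID capacity/IOPS comparison for the PE2950 rebuild.
# Per-disk IOPS numbers are ballpark assumptions, not measurements:
# ~75 IOPS for a 7200 RPM SATA disk, ~175 IOPS for a 15k SAS disk.

def raid_estimate(disks, disk_gb, disk_iops, level, read_pct=0.7):
    """Return (usable_gb, approx_random_iops) for a simple single-span array."""
    if level == "raid5":
        usable = (disks - 1) * disk_gb
        write_penalty = 4      # read data + parity, write data + parity
    elif level == "raid10":
        usable = disks // 2 * disk_gb
        write_penalty = 2      # each write lands on both mirror members
    else:
        raise ValueError("unsupported RAID level")
    raw_iops = disks * disk_iops
    write_pct = 1 - read_pct
    effective_iops = raw_iops / (read_pct + write_pct * write_penalty)
    return usable, round(effective_iops)

# Existing: 3x500GB 7.2k SATA in RAID 5
print("RAID 5,  3x500GB 7.2k:", raid_estimate(3, 500, 75, "raid5"))
# Proposed: 4x600GB 15k SAS in RAID 10
print("RAID 10, 4x600GB 15k :", raid_estimate(4, 600, 175, "raid10"))
```

Even with those guesses, the write penalty of 4 on RAID 5 versus 2 on RAID 10 is where most of the gain comes from for VM workloads.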
The primary storage box will have 2012 Hyper-V loaded as the host OS so that I can use the StarWind Native SAN for Hyper-V Free edition. It is one more thing I need to ease into the budget, but it gets me away from "free" open source packages that are really just designed to get you to buy support that is not even worth the free support StarWind gives. As a technician, I don't want to spend time troubleshooting things that should just work. Since I am the only person who knows this technology, I also need a solution that I can either talk people through on the phone or leave simple instructions for that will resolve any issues, so that I can enjoy vacation time. I've had to do unscheduled (read: traumatic) reboots of this storage box three times since I started, which was only in August. HA storage is really not a concern of mine right now. I already have an active-passive storage configuration, and if I can get iSCSI and dedicated NICs for these pathways, I am going to get far more performance, especially when combined with dropping RAID 5 for RAID 10. The dedicated NICs will have to wait for now, because if this works out well, I want to get approval for 10GbE NICs and press forward with a paid HA StarWind solution.
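Once the dedicated iSCSI NICs are in place, something like the sketch below is about the level of "simple instructions" I want to be able to leave behind: it just measures TCP connect latency to each iSCSI portal on the default port 3260. The portal addresses are placeholders, not my real subnets.

```python
# Minimal iSCSI portal reachability/latency probe (TCP connect to port 3260).
# The addresses below are placeholders for the dedicated storage NICs.
import socket
import statistics
import time

PORTALS = ["192.168.10.10", "192.168.11.10"]   # one per dedicated iSCSI path (placeholder IPs)
PORT = 3260                                    # default iSCSI target port

def connect_latency_ms(host, port, attempts=5, timeout=2.0):
    """Average TCP connect time in ms, or None if the portal is unreachable."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                samples.append((time.perf_counter() - start) * 1000)
        except OSError as exc:
            print(f"{host}:{port} unreachable: {exc}")
            return None
    return statistics.mean(samples)

for portal in PORTALS:
    latency = connect_latency_ms(portal, PORT)
    if latency is not None:
        print(f"{portal}:{PORT} average TCP connect {latency:.2f} ms")
```

If a path stops answering on 3260, whoever is on-site knows which cable or NIC to chase before calling me.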
The secondary storage box does nothing but frustrate me. Trying to cram everything onto 4x2TB IntelliPower drives is fine for just dumping files, but actually using them in a production environment with 40 users would terrify me. I use them in my home NAS, and even in that limited environment, with only a couple of workstations on 1GbE, the real-world performance is terrible, although some of that could be attributed to them sitting in a Drobo box (I like easy mode at home--Apple and Drobo power my home network). Eventually this box will end up doing exactly what it was intended to do, which is sit somewhere off-site and just be a disaster recovery box.
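For the nightly off-site upload that box is destined for, the math that matters is whether the daily churn fits the window. Here is a trivial estimate; both the change rate and the effective link speed are assumptions I would replace with measured numbers.

```python
# Back-of-the-envelope check on the nightly off-site upload window.
# Both inputs below are assumptions to be replaced with measured values.
daily_change_gb = 50      # assumed daily churn out of the ~4TB data set
effective_mbps = 100      # assumed usable throughput on the off-site link

seconds = (daily_change_gb * 8 * 1000) / effective_mbps
print(f"{daily_change_gb} GB at {effective_mbps} Mb/s is about {seconds / 3600:.1f} hours")
```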
With the variety of hard drives and the limitations on space, I am just stumped on the best way to move forward with the eventual plan of a StarWind 3-node HA solution designed for Hyper-V, with a light VDI deployment eventually. As I prove the stability, scalability, and speed of my systems, I would like the flexibility of a hybrid public/private cloud within the next few years, so establishing a solid, high-performance baseline is a must. I have considered turning the two PE2950s into SANs, but I would need to order nine hard drive trays plus the hard drives themselves to meet our capacity needs. That would really only work well if I purchased the paid version so I could use all that extra RAM for cache, so again price starts to become a factor. It would also require me to run our hypervisor on the storage box to take advantage of its faster processor. Having a single hypervisor running everything would never work unless I also added more NICs to that box, further increasing the cost, not to mention the risk. I could probably get away with running the PE2950s as emergency hypervisors in case of disaster, but I would hate to be the one to test that DR plan.
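On the capacity question, this is the kind of sizing arithmetic driving the tray-and-drive order. The growth factor is an assumption, not a quote, and the drive sizes are just the disks I already own plus the 600GB 15k SAS drives I am considering buying.

```python
# Sizing sketch: how many drives a RAID 10 span needs to cover the current
# data set plus headroom. The growth factor below is an assumption.
import math

data_gb = 4000            # roughly the 4TB of VMs and shares today
growth_factor = 1.5       # assumed 50% growth over the planning horizon
usable_needed = data_gb * growth_factor

# Candidate drive sizes in GB: the 1TB and 2TB disks already on hand,
# plus the 600GB 15k SAS drives under consideration.
for drive_gb in (600, 1000, 2000):
    drives = math.ceil(usable_needed / drive_gb) * 2   # RAID 10 halves raw capacity
    print(f"{drive_gb}GB drives: {drives} needed for {usable_needed:.0f} GB usable in RAID 10")
```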
My added complication is that I have almost 4TB of data stored on the 2003R2 x64 box and the napp-it ZFS array (including all the VMs and shared folders). I plan to take down the storage box with the 4x2TB IntelliPower drives and use it as my initial iSCSI target while I sort out where the data goes (a sketch of the copy verification I have in mind follows below) and begin deploying a platform that will not only increase performance but also set the stage for my migration from 2003R2 to 2012, which is fast approaching. Any thoughts/comments for a lone IT guy trying to improve the experience of 40-50 users? Specifically, what do I do with all these drives besides throw them out and hope some magical SAS drives land on my doorstep? I am willing to take a performance hit from a slower processor or a non-HA configuration, because I know that ditching RAID 5 and running VMs from RAID 10 will make a huge difference.
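For the actual data move onto the interim iSCSI target, the bulk copy itself will be robocopy or similar; what I want on top of that is a verification pass I can hand to someone else. A minimal sketch of that pass is below; the two paths are placeholders for the real share and the staged copy.

```python
# Post-copy verification sketch: compare SHA-256 hashes of files under a
# source share and the staged copy on the interim iSCSI target.
# The two paths below are placeholders for the real shares/volumes.
import hashlib
from pathlib import Path

SOURCE = Path(r"\\oldserver\share")   # placeholder source (2003R2 / ZFS share)
STAGED = Path(r"E:\staged-share")     # placeholder copy on the interim target

def sha256_of(path, chunk=1024 * 1024):
    """Hash a file in chunks so large VHDs don't blow up memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for block in iter(lambda: handle.read(chunk), b""):
            digest.update(block)
    return digest.hexdigest()

mismatches = 0
for src_file in SOURCE.rglob("*"):
    if not src_file.is_file():
        continue
    dst_file = STAGED / src_file.relative_to(SOURCE)
    if not dst_file.is_file():
        print(f"MISSING  {dst_file}")
        mismatches += 1
    elif sha256_of(src_file) != sha256_of(dst_file):
        print(f"DIFFERS  {dst_file}")
        mismatches += 1
print(f"done, {mismatches} problem file(s)")
```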
Thanks for your assistance. One of the primary reasons that StarWind caught my attention was the level of support on here and the flexibility of your products.