StarWind Native for Hyper-V migration

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

danferguson
Posts: 1
Joined: Tue Nov 12, 2013 11:38 pm

Wed Nov 13, 2013 1:27 am

I'm developing a migration away from the ZFS platform on our network that was set up by the previous IT person, and I am hitting some planning snags as I try to make the most out of our hard drives and network connections. I do not want to invest a lot in refurbishing the existing platform until I establish a baseline with properly configured connectivity. Here is my existing setup:

Dell PE2950
Windows 2003R2x64
Xeon E5310
16GB RAM
3x500GB 7200 HDDs in RAID 5
2x1GB NIC
Domain Controller, File and Printer server

Dell PE2950
Citrix XenServer
Xeon 5310
16GB RAM
3x80GB 7200 HDDs in RAID 5
2x1GB NIC
-Guest OS:
Windows 2003R2x86 (File and SQL Server)
Windows XP
CentOS (Various db and hosting roles)

Super Micro CSE-836TQ-R800B
napp-it ZFS
Xeon E3-1220v2
16GB RAM
4x80GB 7200 HDDs (napp-it)
10x1TB 7200 HDDs (ZFS)
4x120GB SSDs (2 for logs, 2 for cache)
2x1GB NIC

Super Micro (too old to care or look up the case)
napp-it ZFS
2xXeon E5310
4GB RAM
2x120 GB 7200 RPM HDDs (napp-it)
4x2TB IntelliPower HDDs (ZFS)
2x1GB NIC

There are a lot of things wrong with this setup besides the fact that it is based on ZFS. The plan was for the big storage box to be the primary file storage repository on premises and for the older Super Micro box to go into a data center and receive a nightly upload. The primary storage box is currently configured with SMB shares and ACLs handled by the 2003R2x64 server. It also shares out the VMs to the Citrix box, and there are user folder redirects mapped to the storage box as well. It gets better though: notice that the box was ordered with the Super Micro 836TQ backplane, which requires each drive to be wired directly to the controller. I have now ordered the proper backplane and am just waiting on the SFF-8087 cables to utilize all 16 drive bays. Currently, only 8x1TB drives are usable by this array since they ran out of cabling to hook everything up once they added the SSDs to try to improve performance. The other Super Micro box is already maxed out on the number of drives, so there is no further upgrading available.

Moving forward, the ideas I've been throwing around are as follows:
Both Dell PE2950s will be converted to 2012 Hyper-V hosts to balance out the load going to the sole Citrix box. The big storage device will also go 2012 Hyper-V with StarWind Native SAN for Hyper-V Free. One of the Dell PE 2950s will have the RAID 5 array removed and replaced with 4x600 15k SAS HDDs in RAID10. This will be the primary VM Host and the baseline for future VM hosting (justification for future investments in HDDs). The other will be upgraded as soon as that investment is realized (right now there is a SQL server stored on the ZFS VM volume--you can almost hear the tables collapsing under the weight of that much overhead).
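
For a rough sense of why that rebuild is worth it, here is a back-of-envelope comparison I put together (just a sketch: it assumes the 600s are 600GB drives and uses generic rule-of-thumb IOPS figures for 7.2k SATA and 15k SAS spindles, not measurements from this hardware):

```python
# Back-of-envelope comparison: current 3x500GB 7.2k RAID 5
# vs. planned 4x600GB 15k SAS RAID 10. Per-drive IOPS are generic
# rule-of-thumb assumptions (~75 for 7.2k SATA, ~175 for 15k SAS).

def raid5_usable_gb(drives, size_gb):
    # RAID 5 loses one drive's worth of capacity to parity.
    return (drives - 1) * size_gb

def raid10_usable_gb(drives, size_gb):
    # RAID 10 mirrors everything, so half the raw capacity is usable.
    return drives * size_gb // 2

def random_write_iops(drives, per_drive_iops, write_penalty):
    # Classic write-penalty model: RAID 5 costs 4 back-end I/Os per
    # random write, RAID 10 costs 2.
    return drives * per_drive_iops / write_penalty

old_usable = raid5_usable_gb(3, 500)
old_writes = random_write_iops(3, 75, write_penalty=4)
new_usable = raid10_usable_gb(4, 600)
new_writes = random_write_iops(4, 175, write_penalty=2)

print(f"RAID 5  (3x500GB 7.2k): {old_usable} GB usable, ~{old_writes:.0f} random write IOPS")
print(f"RAID 10 (4x600GB 15k):  {new_usable} GB usable, ~{new_writes:.0f} random write IOPS")
```

Even with crude numbers, the random write side comes out several times faster, which is the part VM workloads actually care about.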

The primary storage box will have 2012 Hyper-V loaded as the host OS so that I can use the StarWind Native SAN for Hyper-V Free edition. It is one more thing I need to ease into the budget, but it gets me away from "free" open source packages that are really just designed to get you to buy support that isn't even worth the free support StarWind gives. As a technician, I don't want to spend time troubleshooting things that should just work. Since I am the only person who knows this technology, I also need a solution that I can either talk people through on the phone or leave simple instructions for that will resolve any issues, so that I can enjoy vacation time. I've had to do unscheduled (read: traumatic) reboots of this storage box three times since I started, which was only in August. HA storage is really not a concern of mine right now. I already have an active-passive storage configuration, and if I can get iSCSI and dedicated NICs for these pathways, I am going to get way more performance, especially when combined with dropping RAID 5 for RAID 10. Right now the dedicated NICs will have to wait, because if this works out well, I want to get approval for 10GbE NICs and press forward with a paid HA StarWind solution.
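
To put rough numbers on the 1GbE vs. 10GbE question, here is a quick throughput-ceiling sketch (the ~90% efficiency factor for iSCSI over TCP is just an assumption, not something I've measured):

```python
# Rough iSCSI throughput ceilings for the NIC options on the table.
# EFFICIENCY is an assumed ~90% to account for TCP/iSCSI overhead.

EFFICIENCY = 0.9

def iscsi_ceiling_mb_s(link_gbit, links=1):
    # Convert link speed (Gbit/s) to an approximate usable MB/s figure.
    return link_gbit * 1000 / 8 * links * EFFICIENCY

print(f"1x1GbE dedicated iSCSI path: ~{iscsi_ceiling_mb_s(1):.0f} MB/s")
print(f"2x1GbE with MPIO:            ~{iscsi_ceiling_mb_s(1, links=2):.0f} MB/s")
print(f"1x10GbE:                     ~{iscsi_ceiling_mb_s(10):.0f} MB/s")
```

A single dedicated 1GbE path tops out around 110 MB/s, which even a small 15k RAID 10 should be able to saturate on sequential work; that is the argument I plan to use for the 10GbE request.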

The secondary storage box does nothing but frustrate me. Trying to cram everything into 4x2TB IntelliPower drives is fine for just dumping files, but actually using them in a production environment with 40 users would terrify me. I use them in my home NAS, and in that limited environment with only a couple of workstations on 1GbE, the real-world performance is terrible, although some of that could be attributed to them being in a Drobo box (I like easy mode at home--Apple and Drobo power my home network). Eventually this box will end up doing exactly what it was intended to do, which is sit somewhere off-site and just be a disaster recovery box.

With the variety of hard drives and limitations on space, I am just stumped on the best way to move forward with the eventual plan of a StarWind 3-node HA solution designed for Hyper-V, with light VDI down the road. As I prove the stability, scalability and speed of my systems, I would like the flexibility of a hybrid public/private cloud within the next few years, so establishing a solid, high-performance baseline is a must. I have considered turning the two PE2950s into SANs, but I would need to order 9 hard drive trays plus the hard drives in order to meet our capacity needs. This would really only work well if I purchased the paid version, so that I could utilize all that extra RAM for cache, so again price starts to become a factor. It would also require me to run our hypervisor on the storage box to utilize the faster processor. Having a single hypervisor running everything would never work unless I also added more NICs to that box, further increasing the cost, not to mention the risk. I could probably get away with running the PE2950s as emergency hypervisors in case of disaster, but I would hate to have to test out that DR plan.
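
One constraint I'm keeping in mind while weighing these options: as I understand it, a StarWind HA device keeps a full synchronous replica on every node, so usable capacity is bounded by the smallest partner. A minimal sketch of that math (the node sizes below are placeholders, not my actual arrays):

```python
# Usable size of an HA device that is synchronously mirrored across
# every participating node: each node holds a full copy, so the device
# can be no larger than the smallest node's free space.
# The capacities below are placeholder numbers, not the real arrays.

def ha_max_device_gb(node_free_gb):
    return min(node_free_gb)

nodes = {"pe2950-a": 1200, "pe2950-b": 1200, "supermicro": 8000}
print(f"Max HA device size across {len(nodes)} nodes: {ha_max_device_gb(nodes.values())} GB")
```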

My added complication is that I have almost 4TB of data stored on the 2003R2x64 box and the napp-it ZFS array (this includes all the VMs and shared folders). I plan to take down the storage box with the 4x2TB IntelliPower drives and use it as my initial iSCSI target while I sort out where the data goes and begin deploying a platform that will not only increase performance but also set the stage for my migration from 2003R2 to 2012, which is fast approaching. Any thoughts/comments for a lone IT guy trying to improve the experience of 40-50 users? Specifically, what do I do with all these drives besides throw them out and hope some magical SAS drives land on my doorstep? I am willing to take a performance hit from a slower processor or non-HA configuration because I know ditching RAID 5 and running VMs from RAID 10 will make a huge difference.
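
For planning the cutover window, here is a rough copy-time estimate for staging that ~4TB onto the interim iSCSI target (the sustained rates are assumptions for a single 1GbE link being the bottleneck, not measured figures):

```python
# Rough copy-time estimate for moving ~4TB to the interim iSCSI target.
# Sustained transfer rates are assumptions, not measurements.

def copy_hours(data_tb, rate_mb_s):
    data_mb = data_tb * 1000 * 1000  # decimal TB -> MB
    return data_mb / rate_mb_s / 3600

for rate in (100, 60):  # near line rate vs. a more pessimistic figure
    print(f"4 TB at {rate} MB/s sustained: ~{copy_hours(4, rate):.1f} hours")
```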

Thanks for your assistance. One of the primary reasons that StarWind caught my attention was the level of support on here and the flexibility of your products.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am
Contact:

Wed Nov 20, 2013 2:02 pm

Hello Dan,

First of all, thank you for choosing StarWind and for the kind words about the product and support - we really appreciate it. Also, thank you for the detailed description you've posted.

OK, let's go through your plan one by one:
*Moving from WS2003 to WS2012 is not just a good idea, it is a "must do": we do not support our software running on that OS edition, it has already reached the end of mainstream support, and even MS will stop supporting it entirely soon - and I will not even start talking about the outdated features and bugs.
*Moving from RAID 5 to RAID 10 sounds fantastic - there has been plenty of discussion on the internet about why no one should put RAID 5 into production. The recommended RAID levels for implementing HA are RAID 1, 0 or 10; RAID 5 and 6 are not recommended due to low write performance (by the way, LSI, a RAID vendor, also recommends against putting those two into production). The performance of a RAID array also depends directly on the stripe size used. There is no exact recommendation for which stripe size to use - it is a test-based choice. As a best practice, we recommend first setting the value recommended by the vendor and running tests, then setting a bigger value and testing again, and finally setting a smaller value and testing once more. These three results should guide you to the optimal stripe size. In some configurations a smaller stripe size like 4k or 8k gives better performance, and in other cases 64k, 128k or even 256k will perform better (see the short sketch after this list for one way to structure that test loop).
The performance of the HA will depend on the performance of the underlying RAID array, and it's up to the customer to determine the optimal stripe size.
*As I understand it, your servers are not homogeneous right now and will not be in the future. So basically you will have two "powerful" servers and one slow one. I'd suggest sticking with one of two options: use the third server to hold backup data while the other two run the HA, or use a large cache plus ALUA and run a 3-node HA cluster.
*As for waiting for SAS drives to miraculously appear on your desk so you could donate the old ones: before that happens, you could reconfigure the system into something that is not as slow as RAID 5, or (as an option) you could try out the Beta of v8, which has a performance boost for parity RAIDs. I should remind you here that anything that is not production ready is highly not recommended for production.
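
To illustrate the test-based stripe size selection described above, here is a minimal sketch (run_benchmark is a hypothetical placeholder for whatever I/O test you actually run against a scratch LUN after rebuilding the array with each stripe size):

```python
# Sketch of the test-based stripe size selection: benchmark the vendor
# default, then a larger and a smaller value, and keep the best performer.
# run_benchmark is a hypothetical hook for your real I/O test.

from typing import Callable, Dict

def pick_stripe_size(candidates_kb, run_benchmark: Callable[[int], float]) -> int:
    results: Dict[int, float] = {}
    for size_kb in candidates_kb:
        # Rebuild/format the test array with this stripe size, then measure.
        results[size_kb] = run_benchmark(size_kb)
        print(f"stripe {size_kb}K: {results[size_kb]:.0f} MB/s")
    return max(results, key=results.get)

# Example with made-up numbers standing in for real benchmark output:
fake_results = {64: 310.0, 128: 340.0, 8: 260.0}
best = pick_stripe_size([64, 128, 8], lambda kb: fake_results[kb])
print(f"Best stripe size of those tested: {best}K")
```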
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com