XenServer & StarWind Config - Thoughts?


john

Mon Jan 17, 2011 4:28 am

We already have two VM hosts (a third is on the way for standby/failover) and will be purchasing new servers to run StarWind.

Current XenServer hosts:
PowerEdge R610
32GB RAM
2 x Intel X5670

Proposed StarWind hosts for HA:

PowerEdge R515
2 x Opteron 4130
8GB RAM
14 HDD support (2 internal, 12 hot-swap)
2 x 146GB 10K for the system (unfortunately the smallest 2.5" drive they now sell, so overkill here)
6 x 146GB 10K in RAID 10 for higher-I/O VM partitions
4 x 500GB 7.2K nearline SAS in RAID 5 for lower-I/O storage
PERC H700 integrated w/ 512MB cache
2 x dual-port Broadcom 5709
1 x GigE NIC for the mgmt interface (spares we have on hand; not sure of the model)

In a 2-NIC MPIO config, will the iSCSI network be my bottleneck? To how many NICs can you reliably expand an MPIO configuration (e.g., if I were to go with 4 NICs, are there any caveats)?
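
As a rough sanity check on that (illustrative numbers only; I'm assuming ~118 MB/s of usable iSCSI payload per GbE link after TCP/IP overhead):

    # Approximate aggregate iSCSI bandwidth by MPIO path count,
    # assuming ~118 MB/s usable payload per GbE link (illustrative figure)
    for paths in 1 2 4; do
        echo "${paths} x GbE ~ $(( paths * 118 )) MB/s"
    done

Six 10K spindles in RAID 10 can stream sequential reads well past two GbE links, so for large sequential I/O the network would likely saturate first; for small random VM I/O the disks usually give out before the wire does.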

How much performance can we really squeeze from a SW box? If we were to add some external DAS in the future, will we be overtaxing the box? Back to my question above: when adding the DAS, can I just keep expanding the NICs and RAM, or is there a good rule of thumb on where to stop expanding? Of course, we will be limited by the server's expansion capacity. How does the overall config look in general, without details on my IOPS requirements? I know that could be a question to spark outrage among some, but the scope of my need for info here does not require going so deep.
Max (staff)

Mon Jan 17, 2011 9:53 am

Hello John,
The overall config looks good; I like the idea of splitting the VM layer and the storage layer across RAID 10 and RAID 5.
However, I'd like to know which NICs you're going to use on the Xen side.
Regarding the NICs and MPIO question: my recommendation is to use 10 GbE links rather than building a snake's nest of multiple GbE links; sooner or later your server network will start to look like one. Many parallel GbE links are also not ideal for iSCSI, because Xen can have issues mounting all the available paths to an iSCSI LUN when deploying a farm larger than one XenServer host.
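If it helps, this is roughly how multipathing is usually switched on per XenServer host from the CLI (a sketch; <host-uuid> is a placeholder, and the host should be in maintenance mode first):

    # Enable multipathing on each XenServer host in the pool
    xe host-param-set other-config:multipathing=true uuid=<host-uuid>
    xe host-param-set other-config:multipathhandle=dmp uuid=<host-uuid>
    # After the iSCSI SR is attached, check that all expected paths are active
    multipath -ll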
Max Kolomyeytsev
StarWind Software
john

Mon Jan 17, 2011 7:43 pm

The Xen hosts each have a quad-port 5709. We will also install a spare GigE NIC for the mgmt interface.

Regarding the 10Gb NICs: I do not have the funds at this time to go much past where I stand now, which means using the Gig switches we have (PowerConnect 5448). Is this a problem particular to XenServer, or just a general rule of thumb? Standard practice across our entire network is dual NICs for redundancy; it does not always need to include load balancing, but redundancy at the least (i.e., teaming NICs in either a load-balance or primary/standby fashion). Loading all those servers up with the same number of NICs in a 10-gig solution would be cost-prohibitive for us right now.
anton (staff)

Tue Jan 18, 2011 8:54 am

1) You don't need to upgrade ALL of your network to 10 GbE. Installing a pair (or even just a single link) of 10 GbE cards for the StarWind cross-link traffic will provide a major performance boost to the whole solution.
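
To put rough numbers on that (an illustrative sketch; the ~95% usable-payload figure is an assumption about TCP/IP overhead): with synchronous HA every write crosses the cross-link before it is acknowledged, so the sync channel sets the write ceiling:

    # Rough write-throughput ceiling imposed by the sync channel,
    # assuming ~95% of line rate is usable payload (illustrative figure)
    echo "1 GbE  sync link: $((  1000 * 95 / 100 / 8 )) MB/s"   # ~118 MB/s
    echo "10 GbE sync link: $(( 10000 * 95 / 100 / 8 )) MB/s"   # ~1187 MB/s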

2) Keep two physically separate storage networks (a pair of storage nodes, a pair of switches, and dedicated duplicated wiring) for maximum redundancy. No single point of failure should be present.
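
As a sketch of what that looks like from an initiator (the subnet addresses below are placeholders, one per switch): each host discovers the target once per physically separate fabric, and MPIO then sees one path per network:

    # Discover the StarWind target once per independent storage fabric
    # (10.10.1.x and 10.10.2.x are placeholder subnets)
    iscsiadm -m discovery -t sendtargets -p 10.10.1.10:3260   # fabric A
    iscsiadm -m discovery -t sendtargets -p 10.10.2.10:3260   # fabric B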
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
