Configuration Questions

MBonner
Posts: 2
Joined: Wed Apr 15, 2009 8:56 pm

Wed Apr 15, 2009 9:36 pm

Hi All,

I work at a small department within a state government. We run a data center that currently has about 25-30 servers; most are just 1GB boxes with a pair of mirrored HDs running dedicated apps or services. The state is in a push for green IT and server consolidation, and we're refreshing about half our servers this year, so we're looking at virtualizing a bunch of stuff. We've just retired an old fiber-optic SAN that was nothing but problems: proprietary software/hardware, lack of knowledgeable support, no current staff who knew how to operate it, etc.

The current line of thought with my manager is that we should just go with DAS drives on the virtual servers. I like the flexibility a centralized storage solution brings to the various virtual server products, the ability to easily migrate machines, etc., so I'm investigating iSCSI options for our Hyper-V infrastructure. At the moment this hasn't yet been officially sanctioned, so I'm limited in the time and resources I can dedicate to the project. I like the idea of using Windows servers as the base for an iSCSI solution: we're a 95+% Windows-only shop, and that leverages our expertise. StarWind isn't the least expensive solution, but they've been around for a while and have support available, so those are both pluses.

I have a few questions about hardware configurations, drive/spindle allocation to LUNs, etc.

Currently I'm looking at using a spare Dell 2950 with a single processor (3 GHz Xeon, dual core). I can allocate either 4 or 8 GB of RAM for the box, so my first question is whether there's any significant performance difference between x86 and x64 host OS installs (I'll probably be using WS2008 Std). I think I saw someplace that StarWind has native 32- and 64-bit versions, but I can't find that data sheet now (it might have been on the old Rocket Division site). How much would performance benefit from the extra 4GB of RAM?
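
For what it's worth, here's the rough memory math I'm working from (my own numbers and assumptions, nothing from StarWind's docs): a single 32-bit user-mode process on Windows is normally limited to about 2 GB of address space (3 GB with the /3GB switch), so my guess is that only an x64 host OS would let a caching service actually take advantage of the extra 4 GB.

```python
# Rough back-of-envelope: how much RAM a single caching process could even
# address on x86 vs x64 Windows. These numbers are my own assumptions.
GB = 1024 ** 3

installed_options = [4 * GB, 8 * GB]

# Typical per-process user address space limits on Windows.
limits = {
    "x86 (default)": 2 * GB,
    "x86 (/3GB switch)": 3 * GB,
    "x64": 8 * 1024 ** 4,  # terabyte-class, far beyond any RAM I'd install
}

for ram in installed_options:
    for arch, limit in limits.items():
        usable = min(ram, limit)
        print(f"{ram // GB} GB installed, {arch:>18}: "
              f"~{usable // GB} GB addressable by one process")
```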

My 2nd question deals with NICs and network configuration. I have some spare 24-port enterprise-level gigabit switches I'm planning on scavenging for the proof-of-concept portion of this project. I don't want to short myself on network throughput and have the concept fail because of lack of performance. I'm definitely planning on running the iSCSI traffic on a segregated network with dedicated NICs. I've seen the white paper on MPIO. I'm wondering if there's a difference between NIC bonding (link aggregation) and MPIO in terms of performance, or if NIC bonding (link aggregation) is even supported. The servers have dual Broadcom NICs that support bonding/aggregation, I also have some spare dual-port PCIe Intel NICs around that support bonding/aggregation, and my switches all support 802.3ad.

Am I better off running aggregated links to aggregated ports within the same subnet/switch, or going with multiple switches/subnets and setting up MPIO with one of the load-balancing policies? Will a dual-port NIC give me enough throughput to host 10-15 lightly loaded servers, or do I need to find a quad-port NIC? I was thinking of spec'ing out 10GbE as an option for the final production deployment, but I see that there's a 400-500 MB/s bandwidth limit when using 10GbE anyway, so the cheaper quad-port GbE cards would probably get close to that using my existing switches.
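
For the throughput side, this is the back-of-envelope math I'm using to compare the NIC options (the ~80% efficiency factor is just my guess at TCP/iSCSI overhead, not a measured number):

```python
# Rough iSCSI throughput estimates per NIC option. Line rates are theoretical;
# the 80% efficiency factor is my guess at TCP/iSCSI protocol overhead.
EFFICIENCY = 0.80
VMS = 15  # lightly loaded servers I'd like to consolidate

options_mbit = {
    "dual-port GbE": 2 * 1000,
    "quad-port GbE": 4 * 1000,
    "single 10GbE": 10 * 1000,
}

for name, mbit in options_mbit.items():
    mb_per_sec = mbit / 8 * EFFICIENCY  # rough MB/s
    print(f"{name:>14}: ~{mb_per_sec:.0f} MB/s total, "
          f"~{mb_per_sec / VMS:.0f} MB/s per VM across {VMS} VMs")
```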

My 3rd question deals with drives, spindles, and disk allocation to LUNs. I've seen that I'll want to use either the flat file or the CPD file in order to take advantage of the full buffering capabilities of the product. I may convert a test database server over to the test iSCSI server for some performance testing; again, I don't want to skew the results due to configuration issues. I currently have separate RAID arrays allocated for the OS, temp, data, logs, and backups. Should I allocate flat or CPD files on separate arrays under StarWind, or do I get better results building a big RAID 6 array with a ton of spindles on it and allocating everything off the big array? Or should I leave my database and email servers on DAS, take the hit on flexibility/recoverability for now, and just consolidate the less I/O-intensive servers?
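
And on the spindle question, here's the generic rule-of-thumb IOPS math I've been sketching (150 IOPS per 15k spindle and the RAID write penalties are textbook numbers, not measurements from my hardware). It suggests a big RAID 6 pool isn't automatically faster for write-heavy database/log volumes even with more spindles:

```python
# Crude IOPS estimate: one big RAID 6 pool vs a smaller dedicated RAID 10 array.
# 150 IOPS per 15k spindle and the write penalties are rules of thumb, not
# measurements from my hardware.
IOPS_PER_SPINDLE = 150
WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def usable_iops(spindles, raid, write_fraction=0.4):
    """Front-end IOPS a mixed read/write workload could expect from the array."""
    raw = spindles * IOPS_PER_SPINDLE
    return raw / ((1 - write_fraction) + write_fraction * WRITE_PENALTY[raid])

print("12-spindle RAID 6 pool  :", round(usable_iops(12, "RAID 6")), "IOPS")
print(" 8-spindle RAID 10 data :", round(usable_iops(8, "RAID 10")), "IOPS")
```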

Thanks for any help/information you can provide on these questions; no doubt I'll think up more as I work through this process.

- Mike :-)
Robert (staff)
Posts: 303
Joined: Fri Feb 13, 2009 9:42 am

Thu Apr 16, 2009 2:08 pm

Hi Mike,

The current version of StarWind Server ships with both 32-bit and 64-bit components; the installer detects your OS and installs the modules for the appropriate architecture.

StarWind Server needs at least 512 MB of RAM for its operations, but for better results we would recommend at least 2 GB. That memory is used for I/O caching and buffering. As for choosing between 4 and 8 GB of RAM, that will depend on the overall load of the server, but I would say the more, the better :)

As for the network configuration, StarWind Server works over your existing network connections and does not perform any network configuration itself; it simply runs one level higher than your existing network. Both NIC teaming and MPIO are supported; however, we do not have any comparison charts showing which performs better.
You can take a look at the document below for additional information on this:
http://www.starwindsoftware.com/images/ ... d_MPIO.pdf
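
Just to illustrate the conceptual difference (a toy sketch only - this is not StarWind or Windows code): link aggregation normally hashes each TCP session onto a single physical NIC, so one iSCSI session stays on one link, while MPIO establishes several independent sessions and the initiator's load-balancing policy (round robin, for example) spreads I/O across all of them.

```python
# Toy illustration only - not StarWind or Windows code.
# Hash-based teaming pins one iSCSI session to one link; MPIO round-robin
# rotates I/O across several independent sessions/paths.

def teaming_link(src_ip, dst_ip, links):
    """Typical IP-hash teaming policy: a given session always lands on one link."""
    return hash((src_ip, dst_ip)) % links

def mpio_round_robin(io_number, paths):
    """MPIO round-robin policy: successive I/Os rotate over all paths."""
    return io_number % paths

links = paths = 2
print("Teaming: every I/O of this session uses link",
      teaming_link("10.0.0.10", "10.0.0.20", links))
print("MPIO   : I/Os 0-7 use paths",
      [mpio_round_robin(i, paths) for i in range(8)])
```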

As for the type of virtual storage used, image files tend to provide the best performance, while mirroring (RAID-1) emulation provides the highest protection against data loss or HD failures; however, you can select any type of virtual storage - StarWind is quite flexible in fitting your requirements.

Please let us know if you have other questions.

Thanks
Rob
Robert
StarWind Software Inc.
http://www.starwindsoftware.com
justin
Posts: 1
Joined: Fri Oct 09, 2009 11:02 pm

Fri Oct 09, 2009 11:12 pm

Hi Rob,

This is regarding the difference between MPIO and bonding of NICs.

With MPIO we can enjoy load balancing; my doubt is, when we do bonding, will we be able to use the bandwidth of both NICs, or does communication happen only through one NIC?

Thanks in advance
Justin
adns
Posts: 23
Joined: Thu Sep 24, 2009 2:08 pm
Location: Asheville NC

Sat Oct 10, 2009 9:56 pm

Justin,

In my 2 weeks of testing so far I haven't been able to achieve aggregated throughput across multiple NICs, but I've been testing on VMware, so you may have a different experience. At one point I had 4 NICs teamed on both sides, but I could never break the ~100MB/sec barrier per I/O event. I experimented with MPIO and had similar results. I bought some 10GbE NICs and some crossover cables, and now I'm tweaking this setup to see what I can get. Once I finish all my testing I'll be posting it to this thread. Let us know if you get other results and good luck on your project!
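
Edit: for what it's worth, the ~100MB/sec ceiling I kept hitting is about what a single gigabit link can deliver after protocol overhead, which is why I suspect my single I/O stream was getting pinned to one NIC no matter how many I teamed. Rough math below (the 90% efficiency figure is just my own estimate):

```python
# Why ~100 MB/sec is about the ceiling for one gigabit link (my rough numbers).
LINE_RATE_MBIT = 1000   # gigabit Ethernet
EFFICIENCY = 0.90       # my estimate of Ethernet/IP/TCP/iSCSI header overhead

single_link = LINE_RATE_MBIT / 8 * EFFICIENCY
print(f"One GbE link  : ~{single_link:.0f} MB/s")
print(f"Four teamed   : ~{4 * single_link:.0f} MB/s, but only if traffic "
      "actually spreads across the links (a single session usually doesn't)")
```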
JLaay
Posts: 45
Joined: Tue Nov 03, 2009 10:00 am

Sun Dec 06, 2009 8:13 pm

Hi all,

At the following link I found this statement:

http://virtualgeek.typepad.com/virtual_ ... phere.html

Step 2 – configure explicit vmkNIC-to-vmNIC binding.

To make sure the vmkNICs used by the iSCSI initiator are actual paths to storage, ESX configuration requires that the vmkNIC is connected to a portgroup that has only one active uplink and no standby uplinks. This way, if the uplink is unavailable, the storage path is down and the storage multipathing code can choose a different path. Let’s be REALLY clear about this – you shouldn’t use link aggregation techniques with iSCSI – you should/will use MPIO (which defines end-to-end paths from initiator to target). This isn’t saying that link aggregation techniques are bad (they are often needed in the NFS datastore use case) – but remember that block storage models use MPIO in the storage stack, not the networking stack, for multipathing behavior.

With regards,

Jaap
Constantin (staff)

Fri Jan 15, 2010 2:56 pm

How is your evaluation going? Or have you already finished it? If so, please let me know about your results.