Software-based VM-centric and flash-friendly VM storage + free version
Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)
-
bandersaurus
- Posts: 5
- Joined: Thu Jun 11, 2015 9:16 pm
Tue Jun 23, 2015 7:52 pm
We just determined that for some odd reason, setting MTU to 9000 on our 2012 R2 Hyper-V hosts actually slows performance down significantly. The most obvious impact is when migrating storage of a virtual machine. With jumbo frames enabled, we find that moving a particular VM takes about an hour. With them set at the default, this takes ten minutes. We can observe the decrease in bandwidth by looking at the network utilization in Task Manager -- it's roughly in the 50-100 Mbps range on both adapters when JFs are enabled, and around 270-300 Mbps with them disabled. Not sure if this is relevant, but we had Flow Control enabled in both tests.
We've looked at our switch settings, and JFs are supported and enabled. They're also enabled on the NICs on the StarWind hosts. Anything else to consider or look at? Or should we just forget they exist and stick to the defaults?
-
darklight
- Posts: 185
- Joined: Tue Jun 02, 2015 2:04 pm
Fri Jun 26, 2015 10:31 am
Actually, jumbo frames should always increase network performance for iSCSI operations, since that is exactly what they were designed for. And they really do. In your case it seems like one or more hops between your sync hosts does not support jumbo frames, which leads to the performance degradation. It could be a switch or the network card drivers.
I have found a nice guide explaining how to test whether jumbo frames are working using just a ping command:
http://www.mylesgray.com/hardware/test- ... s-working/
Give it a try and report back here.
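For reference, the essence of that test (assuming Windows hosts, as in this thread): send a ping with the Don't Fragment flag set and a payload sized to fill a whole jumbo frame. The payload must be the MTU minus 28 bytes (20-byte IP header plus 8-byte ICMP header). A minimal sketch -- the SAN address below is a placeholder:

```shell
# Jumbo-frame ping test sketch (Windows ping syntax: -f = Don't Fragment,
# -l = ICMP payload size in bytes).
# Payload = MTU - 28 (20-byte IP header + 8-byte ICMP header).
MTU=9000
PAYLOAD=$((MTU - 28))                  # 8972 bytes for a 9000-byte MTU
echo "ping -f -l $PAYLOAD 10.10.10.2"  # 10.10.10.2 is a placeholder SAN address
```

If any device in the path drops or fragments 8972-byte frames, the ping fails with "Packet needs to be fragmented but DF set", which pinpoints the non-jumbo hop.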

-
bandersaurus
- Posts: 5
- Joined: Thu Jun 11, 2015 9:16 pm
Tue Jul 14, 2015 12:04 am
Hey all,
I ran the ping -f -l commands back when I was troubleshooting, and they completed successfully from host to SAN and from SAN to host. Yet I still saw better performance with jumbo frames disabled altogether.
We left it off for now because I'm wondering if part of the problem is our ancient hardware -- our hosts are old Sun servers with NVIDIA network controllers that don't even have a proper Windows 2008 driver available. If/when we get a new host, we'll see whether that winds up working better.
Thanks,
Brian