horrible slow connection to ESX

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

deiruch
Posts: 35
Joined: Wed May 25, 2011 12:16 pm

Sun Dec 04, 2011 10:14 pm

Have you tried attaching the disks directly, not through the PERC 5/iR? I once set up some Dell servers that had disastrous performance. As soon as I used dirt-cheap SATA disks, connected to the board (the mainboard had connectors too), the problem disappeared. I saw the same problems you describe here (except that I wasn't running StarWind on those servers).
ghost
Posts: 11
Joined: Wed Nov 16, 2011 9:11 pm

Mon Jan 23, 2012 5:07 pm

In the meantime I switched NICs (now onboard for iSCSI and PCI-X or PCIe, I forget which, for the normal LAN), and it's still the same.

At the moment I cannot bypass the PERC controller, because Windows is installed on the same disks (Windows doesn't support software RAID1 for the OS volume). I also cannot install another 2 disks, because there is no room with the "big" DAT drive in place:
http://www.disasterzone.net/blog/upload ... inside.png
I already discovered that the controller is a lot faster when the write cache is enabled, which is not the default. Before enabling it I had even lower speeds, but what I'm seeing now may be a different problem.

I will also try connecting the NICs directly, without the switch.

What is really annoying is that the problem takes some days to appear.

@deiruch
Were those really SAS disks that had the problem? I once had big speed troubles with WD SATA RAID disks on an HP SAS controller. I also have SATA disks now.
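Since enabling the controller's write cache made such a difference: on LSI-based PERC controllers the cache policy can usually be inspected and changed with the MegaCli utility. The commands below are only a sketch, assuming MegaCli is installed and the PERC is adapter 0 on your system; check the current settings before changing anything.

```shell
# Show the current cache policy for all logical drives on adapter 0
MegaCli -LDGetProp -Cache -LAll -a0

# Switch to write-back caching (only safe with a working battery-backed cache)
MegaCli -LDSetProp WB -LAll -a0

# Verify the change took effect
MegaCli -LDGetProp -Cache -LAll -a0
```

Without a battery backup unit, write-back caching risks data loss on power failure, which is why write-through is usually the default.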
User avatar
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Mon Jan 23, 2012 11:32 pm

Try isolating your issue by removing (or replacing) components one by one:

1) NICs
2) Switch
3) Controller
ghost wrote:In the meantime I switched NICs (now onboard for iSCSI and PCI-X or PCIe, I forget which, for the normal LAN), and it's still the same.

At the moment I cannot bypass the PERC controller, because Windows is installed on the same disks (Windows doesn't support software RAID1 for the OS volume). I also cannot install another 2 disks, because there is no room with the "big" DAT drive in place:
http://www.disasterzone.net/blog/upload ... inside.png
I already discovered that the controller is a lot faster when the write cache is enabled, which is not the default. Before enabling it I had even lower speeds, but what I'm seeing now may be a different problem.

I will also try connecting the NICs directly, without the switch.

What is really annoying is that the problem takes some days to appear.

@deiruch
Were those really SAS disks that had the problem? I once had big speed troubles with WD SATA RAID disks on an HP SAS controller. I also have SATA disks now.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

deiruch
Posts: 35
Joined: Wed May 25, 2011 12:16 pm

Wed Jan 25, 2012 9:41 pm

ghost wrote:@deiruch
Were those really SAS disks that had the problem? I once had big speed troubles with WD SATA RAID disks on an HP SAS controller. I also have SATA disks now.
Sorry, I can't remember — it was some time ago. Can't you connect a single SATA disk, just for testing?
User avatar
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Thu Jan 26, 2012 12:53 am

Better yet, an SSD...
deiruch wrote:
ghost wrote:@deiruch
Were those really SAS disks that had the problem? I once had big speed troubles with WD SATA RAID disks on an HP SAS controller. I also have SATA disks now.
Sorry, I can't remember — it was some time ago. Can't you connect a single SATA disk, just for testing?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

danisoto
Posts: 27
Joined: Thu Jan 26, 2012 12:21 pm

Fri Jan 27, 2012 8:51 am

ghost wrote:I only have Windows clients and ESX, so I ran iperf between a Windows VM and the Windows machine running StarWind. I got around 550 Mbit/s. But how will this test the iSCSI network?
Hi,

To check iSCSI performance with vSphere, try "iperf" for ESXi:
http://www.vm-help.com/forum/viewtopic. ... erf#p11541

It's best to test between the real hosts (ESXi to SAN) over the real path (or each path, if you have multiple).

Regards!
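To make that concrete, a raw TCP throughput test between the ESXi host and the SAN box might look like the sketch below. It assumes iperf 2.x is installed on both ends and uses 192.168.1.10 as a placeholder for the SAN's iSCSI-facing address — substitute your own IPs.

```shell
# On the StarWind/SAN machine: start the iperf server
iperf -s

# On the ESXi host (or another client on the same iSCSI segment):
# run for 30 seconds, reporting throughput every 5 seconds
iperf -c 192.168.1.10 -t 30 -i 5
```

On a dedicated gigabit iSCSI link you would expect results close to 940 Mbit/s; numbers well below that point at the NICs, cabling, or switch rather than the storage stack.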
User avatar
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Mon Jan 30, 2012 10:32 am

Thank you for the suggestion! Great tool for checking raw TCP performance.
danisoto wrote:
ghost wrote:I only have Windows clients and ESX, so I ran iperf between a Windows VM and the Windows machine running StarWind. I got around 550 Mbit/s. But how will this test the iSCSI network?
Hi,

To check iSCSI performance with vSphere, try "iperf" for ESXi:
http://www.vm-help.com/forum/viewtopic. ... erf#p11541

It's best to test between the real hosts (ESXi to SAN) over the real path (or each path, if you have multiple).

Regards!
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

danisoto
Posts: 27
Joined: Thu Jan 26, 2012 12:21 pm

Mon Jan 30, 2012 11:18 am

anton (staff) wrote:Thank you for the suggestion! Great tool for checking raw TCP performance.
You're welcome!
User avatar
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am
Contact:

Mon Jan 30, 2012 3:37 pm

ghost, we will wait for your update then.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
ghost
Posts: 11
Joined: Wed Nov 16, 2011 9:11 pm

Wed Mar 07, 2012 5:44 pm

A reboot every week seems to fix it — probably a problem with the cache, though.

Absolutely legit :mrgreen:
User avatar
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am
Contact:

Fri Mar 09, 2012 10:32 am

Well, I'm glad that everything turned out fine! :D
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
Post Reply