CaptainTaco wrote: Good day,
Not sure if this will help or not, but I ran into this problem a while back, with the exact same symptoms: high (proper) read performance and abysmal write performance. The biggest difference with my issue is that I was running over a Cisco 2960-S switch, with the StarWind software sitting behind a 10 Gig SFP+ connection and the ESXi servers running through 4x Gigabit connections, multipathing enabled and IOPS set to 1.
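For reference, on ESXi 5.x the round robin policy and the IOPS=1 tweak are usually applied per LUN with something like the commands below (the naa.xxxx device identifier is only a placeholder for the actual LUN, and the 4.x esxcli syntax differs slightly):

# set the path selection policy for the LUN to round robin
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR
# switch paths after every single I/O instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxx --type=iops --iops=1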
What I eventually figured out was that the problem was not limited to round robin multipathing; it also showed up on the individual gigabit links, and quite randomly. Sometimes it would be present, and sometimes it would not. What I eventually discovered was that the problem was caused by QoS settings on the switch itself. Since you are not going through a physical switch, the problem may not be the same, but I figured I would throw it out there anyway. Once the entire QoS buffer was dedicated to data and QoS itself was disabled, the problem disappeared. I have since moved to 10 Gigabit connections, bypassing the switch completely and connecting directly to the SAN, with the Gigabit connections kept as fail-over, but I did do extensive testing and confirmed the issue was resolved.
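If anyone wants to try the same thing on a 2960-S, disabling QoS globally is roughly the following from the switch CLI (just a sketch, check the behaviour on your IOS release before touching a production switch):

configure terminal
! turn off global QoS so the port buffers are no longer carved up per queue
no mls qos
end
! verify: this should now report that QoS is disabled
show mls qos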
I didn't go into great detail here as I am not sure the problem is the same, but the symptoms certainly are, judging by the reports in your original post. If you want additional details, I can provide them at some point.
craggy wrote: For this installation there are no network cables. Everything is on blade servers going through GbE2c blade switches internally within the blade enclosure.
I have tested each link separately and get full performance as expected, but once I change to round robin using both links at the same time, that's when the problem arises.
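One way to confirm it really is the path policy (and not one of the links) is to flip the LUN between a single fixed path and round robin and rerun the benchmark; on ESXi 5.x that is roughly the following, with naa.xxxx again standing in for the actual device identifier:

# show the paths and the current path selection policy for the device
esxcli storage nmp device list --device=naa.xxxx
# pin I/O to one path for a single-link baseline run
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_FIXED
# switch back to round robin to reproduce the slow writes
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR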
To investigate further I have installed a brand new copy of Server 2008 R2 and StarWind on a spare HP DL320s storage server running a dual-core Xeon, 8 GB of RAM and a P800 RAID controller with 6x 15k SAS disks in RAID 5. I added a dual-port Intel PRO/1000 NIC for the two iSCSI links and the problem is exactly as before.
As for the RAID in the Nexenta servers, it's 6x 15k SAS disks in RAID 5 on a P400 controller, with a dual-port Intel PRO/1000 NIC for the iSCSI links.