I'm trying to figure out if I'm doing something wrong. I've installed Windows 2008 R2 on a Dell PowerEdge 2950 (quad core, 8GB RAM, 6x 146GB SAS internal). Installed StarWind and set it up with two add-in NICs for iSCSI, leaving the onboard NICs for data. These two iSCSI NICs are on a different subnet, 10.0.1.x. I've created the target and disk, and ensured the networking portion only sees/selects the two iSCSI NICs. Set the two NICs to jumbo frames (MTU=9000), and put them on a separate switch configured for jumbo frames.
On the ESXi side, I've:
Created a new vSwitch and set it for jumbo frames (MTU=9000).
Added two NICs as individual port groups: iSCSI1 and iSCSI2.
Set up each port group with only one pNIC assigned as active, and the other set to unused.
Set each port group to jumbo frames (MTU=9000).
Set the Failback=No option on each port group.
In the software iSCSI initiator, I've provided dynamic targets for each of the two IPs on the StarWind SAN, and bound both VMkernel NICs to the HBA.
I'm able to scan for and find the target, and create a VMFS volume. The volume is set for Round Robin with IOPS=1.
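For anyone who wants to reproduce the setup above, it can also be checked/applied from the ESXi shell, roughly like this (ESXi 5.x esxcli syntax; vmhba33, vmk1/vmk2, and naa.xxxx are placeholders for your own adapter, VMkernel ports, and LUN device ID):

```shell
# List vSwitches and VMkernel NICs to confirm MTU=9000 is set end to end
esxcfg-vswitch -l
esxcfg-vmknic -l

# Bind both iSCSI VMkernel ports to the software iSCSI adapter
# (vmhba33 / vmk1 / vmk2 are placeholders)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Set the path policy to Round Robin with IOPS=1 on the device
# (naa.xxxx is a placeholder for the StarWind LUN's device ID)
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxx --type=iops --iops=1
```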
Whenever I copy data, clone, or svMotion to the volume, it takes forever. When I look at Task Manager and the NICs on the SAN, I see very little network utilization.
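In case it helps anyone reproduce this, a generic jumbo-frame sanity check along these lines should show whether the path is really passing 9000-byte frames end to end; if any hop isn't, don't-fragment pings at jumbo size fail while normal pings still work, which can look exactly like this kind of terrible throughput (IPs below are placeholders):

```shell
# From the ESXi shell: 8972 = 9000 minus 28 bytes of IP/ICMP headers;
# -d sets the don't-fragment bit (10.0.1.10 is a placeholder SAN IP)
vmkping -d -s 8972 10.0.1.10

# From the Windows/StarWind side: -f = don't fragment, -l = payload size
ping -f -l 8972 10.0.1.20

# Also confirm the Windows NICs actually report MTU 9000
netsh interface ipv4 show subinterfaces
```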
I've been fighting this for a couple of days now, thinking maybe it's because I have older SuperMicro servers, PCI-X or PCI-E NICs, maybe I did something wrong with the Windows install, the switches could be bad, ESXi could be misconfigured, etc. So for the heck of it, I brought home a surplus EqualLogic PS5000XV from work to try: same cables, same switches, same settings, same iSCSI setup on ESXi. Added the target IPs, rescanned, found and used the volume, and I'm *easily* getting 200MB/sec sustained off two NICs' worth of bandwidth at the larger block sizes; svMotion/cloning is only limited by the fact that I have one SATA disk per host as local storage to migrate to the SAN.
Now, for what it's worth, I had the same sort of issues with the Microsoft iSCSI Target v3.3 QFE, so I'm willing to believe it's something in that server. But it's a clean install of Windows 2008 R2, all Dell drivers are installed, firmware is current, and I've tried a few different NICs, including the onboards vs. the add-ins. Nothing is working as I'd expect.
Now, I've seen some details indicating that for MPIO I should have two different vSwitches with one pNIC/port group per vSwitch, each on its own subnet. Granted, my experience is with EqualLogic, and every vendor is different, but I've built out eight data centers using twelve EQLs and never had to do that.
Am I missing something? I'm really hoping I can get this working, so that I can use it as a home lab, and we might be able to use it in the surplus lab at work for the guys to learn and work on, as well as possibly for some ROBO (remote office/branch office) type installs. (So this isn't just an "I like EqualLogic" troll; I honestly am trying.)