heitor_augusto wrote:We are testing a StarWind HA setup and are also having significant performance issues. Here are the details:
- StarWind 5.4
- Windows 2008 R2 Standard
- 2 storage boxes: Intel S3420GPLC, 3ware 9650SE-24M8, 24 Western Digital WD5001AALS HDDs
- Sync link using 2 Cat6 crossover cables and Intel 82574L Gigabit network cards
If we attach a VMware ESX 4.1 host to a single target (no HA), we can write data at the expected rate of 117 MB/s.
However, if we start up HA, even with 8 GB of write-back cache enabled, performance drops to around 30 MB/s. In this case, bandwidth usage on the sync link doesn't go above 45%, showing there's no bottleneck there.
Has anyone had any success in sorting this out yet?
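For anyone sanity-checking the sync-link numbers, here is a minimal back-of-the-envelope sketch (my own assumptions, not from the thread or StarWind docs: each client write is mirrored once to the partner node, and iSCSI/TCP overhead is ignored):

# Rough sync-link utilization estimate for a 2-node HA mirror.
# Assumption (not from the thread): each client write crosses the
# sync channel exactly once; protocol overhead is ignored.
SYNC_LINK_MBIT = 2 * 1000  # two 1 GbE crossover links

def utilization(write_mb_s: float) -> float:
    """Fraction of sync bandwidth consumed by mirroring writes."""
    return (write_mb_s * 8) / SYNC_LINK_MBIT

print(f"{utilization(30):.0%}")   # ~12% at the observed 30 MB/s
print(f"{utilization(117):.0%}")  # ~47% even at the full 117 MB/s

On those assumptions, even the full single-target rate would fit comfortably in the 2 Gbit/s sync channel, so the reported 45% peak is consistent with raw sync bandwidth not being the bottleneck.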
anton (staff) wrote:Both nodes have write-back cache enabled?
Yes, both nodes have write-cache and queueing (NCQ) enabled.
camealy wrote:Could you provide more details on the MPIO setup you are using? A lot of our installations were done by Bob when he was there.
MPIO is a client-side thing, not a setting in StarWind. For Windows 2008 R2, in the iSCSI initiator, once you've set up your paths, select one of them and click "Devices", then click "MPIO". You have the following load balancing policy choices:
- Fail Over Only
- Round Robin
- Round Robin With Subset
- Least Queue Depth
- Weighted Paths
- Least Blocks
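If it helps to picture the difference between the two policies discussed next, here is a toy sketch of my own (plain Python, purely illustrative; the real Microsoft DSM does this for SCSI paths inside the kernel, and the path names are made up):

from itertools import cycle

# Toy model only: two iSCSI paths with hypothetical names.
paths = ["path_nicA", "path_nicB"]

# Round Robin: alternate I/Os across every active path.
rr = cycle(paths)
print([next(rr) for _ in range(4)])
# ['path_nicA', 'path_nicB', 'path_nicA', 'path_nicB']

# Fail Over Only: all I/O uses the active path; the standby
# path is used only when the active one fails.
def fail_over_only(active: str, standby: str, active_up: bool) -> str:
    return active if active_up else standby

print([fail_over_only("path_nicA", "path_nicB", True) for _ in range(4)])
# ['path_nicA', 'path_nicA', 'path_nicA', 'path_nicA']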
camealy wrote:That is good info. I believe all our Hyper-V clusters are set up with Round Robin to the StarWind HA. Is there a performance benefit to going failover only? And if so, what is the downside from an actual failure standpoint?
The performance benefit should come from two things: 1) less disk thrashing, and 2) one path being faster than the other, like in my network. If you are fully 1GbE or fully 10GbE you won't see benefit 2), but you should still see 1).
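To put a rough number on reason 2), a back-of-the-envelope sketch (my assumptions, not measurements from this thread: equal-sized I/Os strictly alternate between the paths, and each path runs at an illustrative wire-speed figure; real queues behave less neatly):

# Round Robin across unequal paths: alternating equal-sized I/Os
# yields the harmonic mean of the two path rates.
def round_robin_throughput(fast_mb_s: float, slow_mb_s: float) -> float:
    # A pair of I/Os moves 2B bytes in B/fast + B/slow seconds.
    return 2 * fast_mb_s * slow_mb_s / (fast_mb_s + slow_mb_s)

# Illustrative figures: ~1100 MB/s for a 10GbE path,
# ~117 MB/s for a 1GbE path.
print(round_robin_throughput(1100, 117))  # ~211 MB/s

On those numbers, the slow 1GbE path drags the alternating pair down to roughly 211 MB/s, while Fail Over Only pinned to the 10GbE path would run at its full ~1100 MB/s, which is why the benefit disappears on a fully 1GbE or fully 10GbE network.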