Rebuild times in 3-node set-up
Posted: Thu Jul 31, 2014 4:54 pm
We currently have 5 x Hyper-V hosts in our farm, so the idea of getting rid of some tin (the dedicated SAN server) and running StarWind on the Hyper-V hosts is attractive. There appear to be two options:
2-node HA with RAID-10 on the attached JBOD and a single 10GbE sync link between them
3-node HA with RAID-0 on the attached JBOD and a dual 10GbE sync link between them
The 3-node option is attractive in that you only need 3 x N disks (one copy of the data per node) whereas the 2-node option needs 4 x N disks (two mirrored copies per node). Considering that 4TB NL SAS is ~£450 in the UK, for a 64TB system that's £28.8k reduced to £21.6k. You also need an additional disk controller (£750), disk enclosure (£1k), dual-GbE NIC (£250) & StarWind license (???), so the saving isn't that big - maybe £4k.
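To sanity-check my maths, here's a rough back-of-the-envelope sketch (the prices are my UK estimates and the StarWind licence is left out because I don't know its cost):

```python
# Rough cost comparison: 2-node RAID-10 vs 3-node RAID-0 (figures are my estimates)
USABLE_TB = 64
DISK_TB = 4
DISK_COST = 450                                 # ~£450 per 4TB NL-SAS drive

disks_per_copy = USABLE_TB // DISK_TB           # 16 drives to hold one full copy

# 2-node HA, RAID-10 per node: two mirrored copies per node -> 4 copies total
two_node_cost = 4 * disks_per_copy * DISK_COST  # 64 drives -> £28,800

# 3-node HA, RAID-0 per node: one copy per node -> 3 copies total
three_node_cost = 3 * disks_per_copy * DISK_COST  # 48 drives -> £21,600

# Extra kit for the third node (controller, enclosure, NIC); licence excluded
extra_kit = 750 + 1000 + 250

saving = two_node_cost - (three_node_cost + extra_kit)
print(f"2-node drives: £{two_node_cost:,}, 3-node drives: £{three_node_cost:,}")
print(f"Net saving before licence: £{saving:,}")  # ~£5.2k, so maybe £4k once licensed
```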
I'm concerned, though, about rebuild times in the 3-node RAID-0 set-up. In the 2-node RAID-10 set-up, when a disk fails the RAID controller only has to rebuild 1 x 4TB drive, which won't take that long to re-mirror, and during that time both StarWind nodes remain fully functional.
However, in the 3-node set-up, if a disk fails in the RAID-0 array, it's game over for that node's array, isn't it? When you replace the disk, the volume is unrecoverable, is it not - a whacking big 4TB hole missing in the middle? I assume you'd have to get StarWind to re-synchronise the entire 64TB, and that's assuming the broken array can carry on at all. Wouldn't you have to remove the node from StarWind sync, remove the target from StarWind, re-create the RAID-0 volume, re-add it to StarWind and re-sync the whole lot? That sounds like a lot of trouble compared to the RAID controller hot-swapping in a new 4TB drive and getting fully back online in hours, not days.
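For a rough sense of the timescales (very much back-of-envelope - I'm assuming ~150 MB/s sustained to one NL-SAS spindle and a 10GbE sync link running at maybe 50% efficiency under production load; real numbers will obviously vary):

```python
# Very rough rebuild-time comparison (assumed throughput figures, not measured)
TB = 10**12  # bytes

# Case 1: 2-node RAID-10, single failed 4TB drive re-mirrored by the controller
disk_rebuild_rate = 150e6                     # ~150 MB/s sustained to one NL-SAS drive
single_disk_hours = (4 * TB) / disk_rebuild_rate / 3600
print(f"RAID-10 re-mirror of one 4TB drive: ~{single_disk_hours:.0f} hours")  # ~7 h

# Case 2: 3-node RAID-0, whole 64TB volume re-synced over a 10GbE sync link
sync_rate = 0.5 * 1.25e9                      # assume ~50% of 10GbE line rate under load
full_resync_days = (64 * TB) / sync_rate / 3600 / 24
print(f"Full 64TB StarWind resync: ~{full_resync_days:.1f} days")             # ~1.2 days
```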
If it works differently, please let me know, because as things stand the 3-node option is not attractive for this reason.