Okay, I think I understand this. You could always have decided to run StarWind on a Hyper-V host because it's just another Windows program. But in that scenario on v6, the Hyper-V host would have been using the Microsoft initiator to establish a connection to the storage, and this went all the way through the iSCSI network path even though it was on the same server. So in v8, it knows the iSCSI target is on the same server and cuts out the networking stage.

1) It does not really use iSCSI in the hyper-converged scenario (running on the same hardware as Hyper-V). Yes, we look and uplink to the system like we're iSCSI, but after the connection is established the TCP and iSCSI stacks on the server are bypassed and everything is handled by a kernel-mode acceleration path driver. Think about MSFT and how they go from TCP -> SMB Direct (RDMA). We do a very similar thing here, except we go not over RDMA but over actual DMA (same host, in a loopback).
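To make the same-host shortcut described above concrete, here is a minimal Python sketch of the idea, not StarWind's actual code: the target still accepts an ordinary iSCSI login, but if the initiator turns out to live on the same host, data I/O is switched to a local fast path instead of the TCP/iSCSI stack. All class and method names here are invented for illustration.

```python
# Toy illustration (not StarWind code) of the loopback idea described above:
# the target is exposed over iSCSI as usual, but when the initiator is on the
# same host, block I/O is routed through a local fast path instead of the
# TCP/iSCSI stack.

import socket


class ToyTarget:
    def __init__(self, backing: bytearray):
        self.backing = backing  # stand-in for the virtual disk image

    def handle_login(self, initiator_host: str) -> str:
        # Decide once, at session setup, which data path to use.
        if initiator_host == socket.gethostname():
            return "local-fastpath"   # same box: skip TCP/iSCSI for data I/O
        return "iscsi-over-tcp"       # remote box: normal network path

    def read(self, path: str, offset: int, length: int) -> bytes:
        if path == "local-fastpath":
            # Same-host case: a direct memory copy stands in for the
            # kernel-mode DMA path the post describes.
            return bytes(self.backing[offset:offset + length])
        # Remote case: in real life this would be an iSCSI Data-In PDU
        # travelling over TCP; here we just simulate the extra hop.
        return self._send_over_tcp(offset, length)

    def _send_over_tcp(self, offset: int, length: int) -> bytes:
        # Placeholder for the network round trip.
        return bytes(self.backing[offset:offset + length])


if __name__ == "__main__":
    target = ToyTarget(bytearray(b"hello world" * 10))
    path = target.handle_login(socket.gethostname())
    print(path, target.read(path, 0, 11))
```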
robnicholson wrote:>MSFT iSCSI initiator (we cannot use our own one because of a long list of reasons) it DOES NOT go over the TCP stack.
Even when a connection is made from a different server to the Hyper-V host running StarWind?
Cheers, Rob.
robnicholson wrote:I don't understand this: what if you have five Hyper-V hosts and StarWind is running on two of them? How should the other three connect to the cluster storage?
Cheers, Rob.
robnicholson wrote:>That's a broken config. We recommend going symmetric (StarWind runs on all the nodes inside a Hyper-V cluster).
Still struggling with this, as it doesn't sound scalable. What happens if you have 30 Hyper-V hosts? Do you need 30 x StarWind and 30 x disk systems?
Cheers, Rob.
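Since the 30-host question above keeps coming up, here is a rough back-of-the-envelope sketch of what "going symmetric" implies: every Hyper-V host runs the storage service and contributes its local disks, data is mirrored across hosts, and capacity and IOPS grow with the node count rather than requiring 30 separately managed disk systems. The replication factor, per-node capacity, and IOPS figures below are assumptions for illustration only, not StarWind numbers.

```python
# Back-of-the-envelope sketch of a symmetric layout: every Hyper-V host runs
# the storage service and contributes its local disks, and data is kept in
# N-way mirrors across hosts. All input figures are illustrative assumptions.

def symmetric_cluster(nodes: int,
                      raw_tb_per_node: float,
                      iops_per_node: int,
                      replicas: int = 2):
    """Usable capacity and rough aggregate IOPS if every node is identical."""
    usable_tb = nodes * raw_tb_per_node / replicas
    read_iops = nodes * iops_per_node               # reads can be served locally
    write_iops = nodes * iops_per_node // replicas  # each write lands on 'replicas' copies
    return usable_tb, read_iops, write_iops


for n in (2, 5, 30):
    usable, r, w = symmetric_cluster(n, raw_tb_per_node=10, iops_per_node=20_000)
    print(f"{n:>2} nodes: ~{usable:.0f} TB usable, ~{r:,} read IOPS, ~{w:,} write IOPS")
```

The point of the sketch is that adding a host adds both compute and storage to the same pool, so a 30-host symmetric cluster is 30 contributions to one pool rather than 30 independent SANs to operate.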
robnicholson wrote:Some diagrams would really help here, even if they are scans of hand-drawn diagrams.
Cheers, Rob.
robnicholson wrote:We currently have a five-node Hyper-V cluster using a dedicated StarWind SAN with a combination of 15k SAS and 7k SATA disks, connected via a 4-channel 1 Gbit/s MPIO network. We're not running out of IOPS (occasional pauses if IT copy a 50 GB vdisk) and the network is nowhere near a bottleneck, so in a real-world example what you're suggesting doesn't stack up. And we're running a series of virtual XenApp servers on the nodes, which really do cause high load on the system.

Everything is just the opposite of what you say: keeping storage on a few nodes inside a massive hypervisor cluster does not scale out, as 1) the config runs out of IOPS very soon and 2) the network becomes a real bottleneck.
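As a side note on the setup described above, some quick arithmetic shows why a 50 GB vdisk copy can cause noticeable pauses even when the 4 x 1 Gbit/s MPIO network is "nowhere near a bottleneck" on average. The protocol-overhead factor and the assumption about how evenly the copy spreads across paths are illustrative guesses, not figures from the thread.

```python
# Quick arithmetic on the setup described above: 4 x 1 Gbit/s iSCSI paths with
# MPIO, and an occasional 50 GB vdisk copy. Overhead and path-spread
# assumptions are illustrative only.

link_gbit = 1           # per-path line rate, Gbit/s
paths = 4
efficiency = 0.9        # rough allowance for TCP/iSCSI overhead

per_path_mb_s = link_gbit * 1000 / 8 * efficiency      # ~112 MB/s per path
aggregate_mb_s = per_path_mb_s * paths                 # ~450 MB/s across all paths

copy_gb = 50
best_case_s = copy_gb * 1000 / aggregate_mb_s          # copy spread over all four paths
single_path_s = copy_gb * 1000 / per_path_mb_s         # copy pinned to one path

print(f"aggregate ~{aggregate_mb_s:.0f} MB/s, "
      f"50 GB copy: ~{best_case_s / 60:.1f} min best case, "
      f"~{single_path_s / 60:.1f} min on a single path")
```

So even in the best case the copy saturates the iSCSI network for a couple of minutes, which would account for the occasional pauses other VMs see during it.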
I agree this isn't a massive cluster, but equally I'm still unsure what the architecture looks like if we decided to run StarWind on the nodes themselves and add HA into the equation. I've seen diagrams of a two-node Hyper-V cluster (which made sense), so I just need to see what the proposed architecture would be for a larger node array. I await the diagrams with interest.
Cheers, Rob.