How is StarWind Hyper-V aware?

robnicholson
Posts: 359
Joined: Thu Apr 14, 2011 3:12 pm

Thu Jun 19, 2014 9:03 am

Question says it all. I'm very familiar with v6, and v8 even on a standalone SAN works fine, as it always has. But in what way is v8 enhanced to merit the Hyper-V tag?

I can guess one area: cache. Hyper-V and its VMs can use dynamic memory: a great feature whereby a server only uses as much memory as it needs, and across a host, Hyper-V will balance the memory, asking servers to reduce their allocation if they can when more memory is needed elsewhere.

It would make sense for StarWind to take part in this scheme with its cache. Given a two-node Hyper-V system where you specify the memory in each node to be sufficient to run all VMs in case of node failure, when running normally half the memory is free. Our servers have 80GB, so 40GB+ is not being used.

So it would be great if StarWind could use that 40GB for its cache (neat!) but, if there were a failover situation, give the memory back to Hyper-V so it could fail over the VMs.
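To put rough numbers on the idea, here's a quick Python sketch (the 80GB/40GB figures are just our servers; the helper function is hypothetical, nothing StarWind actually exposes):

def borrowable_cache_gb(host_ram_gb, vm_ram_gb, failover=False):
    # RAM that could, in principle, be lent to a write-back cache.
    # Normally the host only needs RAM for its own VMs; the failover
    # reserve (sized to take the partner's VMs) sits idle. During a
    # failover the reserve is needed again, so the cache must hand it back.
    reserve_gb = vm_ram_gb  # symmetric two-node cluster assumption
    in_use = vm_ram_gb + (reserve_gb if failover else 0)
    return max(host_ram_gb - in_use, 0)

# 80GB hosts, each normally running ~40GB of VMs:
print(borrowable_cache_gb(80, 40))                 # 40GB free for cache normally
print(borrowable_cache_gb(80, 40, failover=True))  # 0GB once the partner's VMs land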

Barking up the right tree?

Cheers, Rob.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Thu Jun 19, 2014 10:14 pm

V8 is very different from V6 (though you could see some of V8's primary scenarios as experimental ones in V6). When talking Hyper-V, StarWind:

1) Does not really use iSCSI in the hyper-converged scenario (running on the same hardware as Hyper-V). Yes, we LOOK and uplink to the system as if we're iSCSI, but after the connection is established the TCP and iSCSI stacks on the server are bypassed and everything is handled by a kernel-mode acceleration path driver. Think about MSFT going from TCP -> SMB Direct (RDMA). We do a very similar thing here, except we go not RDMA but actual DMA (same host, in a loopback).

2) Does not really use iSCSI in the dedicated scenario (running separate compute and storage layers, the MSFT "blessed" approach). We help to build Scale-Out File Servers (automatic configuration and all the back-door housekeeping for the block storage), and Hyper-V uses SMB 3.0 to talk to the SoFS set. The value we bring here is getting rid of the requirement for physical shared storage (FC or iSCSI SAN, or SAS JBOD with a hell of a lot of SAS switches for 2+ node configs). iSCSI (OK, our own big-block-protocol-over-iSCSI) is only used to run the background sync between SoFS nodes.

That's in terms of storage protocols. In terms of management, we do SMI-S now, so you can use System Center Virtual Machine Manager to control StarWind and not use the StarWind Management Console at all.

So... 1) native uplink protocols, 2) management, 3) automation, and 4) (OF COURSE) running inside the Hyper-V kernel is what we think makes StarWind the only real NATIVE solution on the market. We're not "just another iSCSI target running inside a virtual machine" or one of the similar "solutions" on the market built on top of open-source stuff.

Dynamic cache is not YET inside StarWind Virtual SAN, but it will be there for sure in mid-Fall 2014 (VMworld 2014 in SF is coming and we're busy with some VMware-specific things).
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

robnicholson
Posts: 359
Joined: Thu Apr 14, 2011 3:12 pm

Fri Jun 20, 2014 8:50 am

1) Does not really use iSCSI in the hyper-converged scenario (running on the same hardware as Hyper-V). Yes, we LOOK and uplink to the system as if we're iSCSI, but after the connection is established the TCP and iSCSI stacks on the server are bypassed and everything is handled by a kernel-mode acceleration path driver. Think about MSFT going from TCP -> SMB Direct (RDMA). We do a very similar thing here, except we go not RDMA but actual DMA (same host, in a loopback).
Okay, I think I understand this. You could always have chosen to run StarWind on a Hyper-V host, because it's just another Windows program. But in that scenario under v6, the Hyper-V host would have been using the Microsoft initiator to establish a connection to the storage, and this went all the way through the iSCSI network path even though it was on the same server. So in v8, it knows the iSCSI target is on the same server and cuts out the networking stage.

Separate iSCSI server:

Hyper-V Host -> Microsoft iSCSI initiator -> TCP/IP network -> Starwind Server listener -> Starwind SAN -> Disks

When running on Hyper-V host:

Hyper-V host -> iSCSI initiator (your own?) -> Starwind SAN -> Disks

Yes I can see how that would give a performance boost.
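As a thought experiment on what "knowing the target is local" might look like, here's a minimal Python sketch. It simply compares the target portal address with the host's own addresses, which is obviously not how StarWind's kernel-mode driver actually works, but it illustrates the decision:

import socket

def local_addresses():
    # Loopback plus whatever the hostname resolves to; a crude stand-in
    # for properly enumerating the host's interfaces.
    addrs = {"127.0.0.1"}
    try:
        addrs.update(socket.gethostbyname_ex(socket.gethostname())[2])
    except socket.gaierror:
        pass
    return addrs

def choose_path(target_portal_ip):
    # If the iSCSI target portal lives on this host, take the local fast
    # path (standing in for the loopback DMA shortcut); otherwise it's
    # ordinary iSCSI over TCP/IP to a remote server.
    if target_portal_ip in local_addresses():
        return "local fast path (loopback DMA)"
    return "iSCSI over TCP/IP"

print(choose_path("127.0.0.1"))     # local fast path (loopback DMA)
print(choose_path("192.168.1.50"))  # iSCSI over TCP/IP (assuming that's a remote portal)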

A further question - that's fine for a single host. If you have other Hyper-V hosts (ignoring SAN mirror for now), I assume they would connect via the traditional method:

Another Hyper-V Host -> Microsoft iSCSI initiator -> TCP/IP network -> Starwind Server listener on Hyper-V host -> Starwind SAN -> Disks

So their performance would be much as it would be with StarWind running on a dedicated server? Therefore, in terms of design, you'd get better performance if the VMs were running on the same Hyper-V host (i.e. make that the preferred Hyper-V server), but it would continue to work if you happened to fail over to another Hyper-V host without StarWind running locally.

>MSFT "blessed" approach

I'm afraid I'm not familiar with these terms, so I will need to do some more background reading.

>That's in terms of storage protocols. In terms of management, we do SMI-S now, so you can use System Center Virtual Machine Manager to control StarWind and not use the StarWind Management Console at all.

We've yet to implement VMM; we bought it as part of the System Center Datacenter edition but just haven't had time.

>4) (OF COURSE) running inside the Hyper-V kernel is what we think makes StarWind the only real NATIVE solution on the market. We're not "just another iSCSI target running inside a virtual machine" or one of the similar "solutions" on the market built on top of open-source stuff.

You need to make more of this, as I've sat through all the recent videos and it didn't jump out at me.

>Dynamic cache is not YET inside StarWind Virtual SAN, but it will be there for sure in mid-Fall 2014 (VMworld 2014 in SF is coming and we're busy with some VMware-specific things).

As I said in the other post, this is a little disappointing, as it would be a great selling point. Many Hyper-V hosts run at way less than maximum memory (just in case another host fails over), so being able to use this RAM for write-back caching whilst it's free would be excellent. It would go some way towards addressing my reservations, raised in another post, about how much cache is needed for large disk arrays.
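For what it's worth, here's a toy Python sketch of the kind of cache behaviour I mean: a write-back cache that can shrink on demand, flushing dirty blocks to the backing store before giving the RAM back. Purely illustrative; it's not StarWind's implementation and the class and method names are made up:

from collections import OrderedDict

class WriteBackCache:
    # Toy write-back cache: writes land in RAM and are marked dirty;
    # shrink() flushes dirty blocks to backing storage and evicts until
    # the cache fits the new budget.

    def __init__(self, capacity_blocks, backing_store):
        self.capacity = capacity_blocks
        self.backing = backing_store          # e.g. a dict standing in for disk
        self.blocks = OrderedDict()           # lba -> (data, dirty)

    def write(self, lba, data):
        self.blocks[lba] = (data, True)
        self.blocks.move_to_end(lba)
        self._evict_to(self.capacity)

    def read(self, lba):
        if lba in self.blocks:
            self.blocks.move_to_end(lba)
            return self.blocks[lba][0]
        return self.backing.get(lba)

    def shrink(self, new_capacity_blocks):
        # Give memory back (e.g. when the hypervisor needs it for failover).
        self.capacity = new_capacity_blocks
        self._evict_to(new_capacity_blocks)

    def _evict_to(self, limit):
        while len(self.blocks) > limit:
            lba, (data, dirty) = self.blocks.popitem(last=False)  # oldest first
            if dirty:
                self.backing[lba] = data      # flush before dropping

disk = {}
cache = WriteBackCache(capacity_blocks=4, backing_store=disk)
for i in range(4):
    cache.write(i, f"block-{i}")
cache.shrink(1)                               # failover: hand the RAM back
print(disk)                                   # dirty blocks were flushed first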

So I'm glad that it's something you're looking at, although the devil's advocate would say your release schedules have slipped in the past ;-) BUT I've been a developer, so I know it's better to get a solid product first before adding features, especially in the SAN world where storage is the backbone of your entire system alongside the network. If that's broken, then you're in deep doo-doo.

Cheers, Rob.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Fri Jun 20, 2014 8:06 pm

No, it does not work the way you describe. First of all, iSCSI is not supposed to be used with the dedicated setup AT ALL; rather, it's SMB3. When co-existing on the same hardware we look like we're iSCSI, but after data gets into the MSFT iSCSI initiator (we cannot use our own one for a long list of reasons) it DOES NOT go over the TCP stack.

We will have a dynamic cache. It's not that easy (unfortunately).
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

robnicholson
Posts: 359
Joined: Thu Apr 14, 2011 3:12 pm

Sat Jun 21, 2014 11:01 am

>MSFT iSCSI initiator (we cannot use our own one for a long list of reasons) it DOES NOT go over the TCP stack.

Even when a connection is made from a different server to the Hyper-V host running StarWind?

Cheers, Rob.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sat Jun 21, 2014 11:18 am

This does work, but it's not supposed to be used.
robnicholson wrote:>MSFT iSCSI initiator (we cannot use our own one for a long list of reasons) it DOES NOT go over the TCP stack.

Even when a connection is made from a different server to the Hyper-V host running StarWind?

Cheers, Rob.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

robnicholson
Posts: 359
Joined: Thu Apr 14, 2011 3:12 pm

Sun Jun 22, 2014 12:02 pm

I don't understand this. What if you have five Hyper-V hosts and StarWind is running on two of them? How should the other three connect to the cluster storage?

Cheers, Rob.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sun Jun 22, 2014 9:42 pm

1) That's a broken config. We recommend going symmetric (StarWind runs on all the nodes inside a Hyper-V cluster).

2) iSCSI would be used to present an LU to layer CSV on. But on the same host we'll use DMA (our own fast-path driver below the MSFT iSCSI initiator), so no iSCSI-over-TCP-in-a-loopback (which is what others will try selling you).

3) With a dedicated setup, SMB 3.0 would be used to talk from the Hyper-V hosts to the Scale-Out File Server set.
robnicholson wrote:I don't understand this. What if you have five Hyper-V hosts and StarWind is running on two of them? How should the other three connect to the cluster storage?

Cheers, Rob.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

robnicholson
Posts: 359
Joined: Thu Apr 14, 2011 3:12 pm

Mon Jun 23, 2014 12:15 pm

>That's a broken config. We recommend going symmetric (StarWind runs on all the nodes inside a Hyper-V cluster).

I'm still struggling with this, as it doesn't sound scalable. What happens if you have 30 Hyper-V hosts? Do you need 30 x StarWind and 30 x disk systems?

Cheers, Rob.
robnicholson
Posts: 359
Joined: Thu Apr 14, 2011 3:12 pm

Mon Jun 23, 2014 12:15 pm

Some diagrams would really help here, even if they are scans of hand drawn diagrams.

Cheers, Rob.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Mon Jun 23, 2014 9:49 pm

Everything is just the opposite of what you say: keeping storage on a few nodes inside a massive hypervisor cluster does not scale out, as 1) the config runs out of IOPS very soon and 2) the network becomes a real bottleneck. With the StarWind scale-out design, I/O for a set of VMs is "bonded" to the local hypervisor node, so it virtually never touches the network (except to confirm writes with a sync replication partner).
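To put some illustrative numbers on that, here's a quick Python sketch; all figures are purely hypothetical, not StarWind benchmarks or measurements:

# Hypothetical numbers chosen only to illustrate the scaling argument.
HOSTS = 30
IO_PER_HOST_MBPS = 200            # assumed VM I/O generated by each host
WRITE_FRACTION = 0.3              # assumed share of that I/O that is writes
SAN_LINK_MBPS = 4 * 1000 / 8      # e.g. 4 x 1Gbit/s into a central SAN, ~500 MB/s

# Centralised SAN: every byte of VM I/O crosses the network to the SAN box.
central_demand = HOSTS * IO_PER_HOST_MBPS

# Scale-out ("bonded" local I/O): reads are served from local disk/cache,
# only writes cross the network to the synchronous replication partner.
sync_traffic_per_host = IO_PER_HOST_MBPS * WRITE_FRACTION

print(f"Central SAN would have to absorb {central_demand:.0f} MB/s "
      f"through links good for roughly {SAN_LINK_MBPS:.0f} MB/s")
print(f"Scale-out: each host sends only about {sync_traffic_per_host:.0f} MB/s "
      f"of sync-replication traffic over its own links")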
robnicholson wrote:>That's a broken config. We recommend going symmetric (StarWind runs on all the nodes inside a Hyper-V cluster).

I'm still struggling with this, as it doesn't sound scalable. What happens if you have 30 Hyper-V hosts? Do you need 30 x StarWind and 30 x disk systems?

Cheers, Rob.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Mon Jun 23, 2014 9:52 pm

Sure, please contact Max (StarWind PM) and he'll provide you with the Scale-Out drafts with all the diagrams and descriptions. We'll publish them soon.
robnicholson wrote:Some diagrams would really help here, even if they are scans of hand drawn diagrams.

Cheers, Rob.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

robnicholson
Posts: 359
Joined: Thu Apr 14, 2011 3:12 pm

Tue Jun 24, 2014 8:22 am

Everything is just the opposite of what you say: keeping storage on a few nodes inside a massive hypervisor cluster does not scale out, as 1) the config runs out of IOPS very soon and 2) the network becomes a real bottleneck.
We currently have a five-node Hyper-V cluster using a dedicated StarWind SAN with a combination of 15k SAS and 7k SATA disks, connected via a 4-channel 1Gbit/s MPIO network. We're not running out of IOPS (there are occasional pauses if IT copies a 50GB vdisk) and the network is nowhere near a bottleneck, so in a real-world example, what you're suggesting doesn't stack up. And we're running a series of virtual XenApp servers on the nodes, which really do put a high load on the system.

I agree this isn't a massive cluster, but equally I'm still unsure what the architecture looks like if we decided to run StarWind on the nodes themselves and add HA into the equation. I've seen diagrams of a two-node Hyper-V cluster (which made sense), so I just need to see what the proposed architecture would be for a larger node array. I await the diagrams with interest.

Cheers, Rob.
anton (staff)
Site Admin
Posts: 4021
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Tue Jun 24, 2014 9:51 am

That's a different config from the one you've been describing above (5 + 2 instead of a pure 5). If you're using separate layers for compute and storage and you use Hyper-V, then you need to go the SoFS / SMB3 route (StarWind Virtual SAN works at the back end).

If you need to a) feed the storage to a non-SMB3-compatible client (Xen, DPM, ESXi, etc.) *or* b) you are using a hybrid scenario (some Hyper-V nodes have storage and others don't), then you obviously cannot use SMB3 and have to stick with iSCSI.

I repeat: please PM or e-mail me and I'll put you in touch with the StarWind PM, who'll provide you with the Scale-Out PDFs. One picture is worth a thousand words :)
robnicholson wrote:
Everything is just the opposite of what you say: keeping storage on a few nodes inside a massive hypervisor cluster does not scale out, as 1) the config runs out of IOPS very soon and 2) the network becomes a real bottleneck.
We currently have a five-node Hyper-V cluster using a dedicated StarWind SAN with a combination of 15k SAS and 7k SATA disks, connected via a 4-channel 1Gbit/s MPIO network. We're not running out of IOPS (there are occasional pauses if IT copies a 50GB vdisk) and the network is nowhere near a bottleneck, so in a real-world example, what you're suggesting doesn't stack up. And we're running a series of virtual XenApp servers on the nodes, which really do put a high load on the system.

I agree this isn't a massive cluster, but equally I'm still unsure what the architecture looks like if we decided to run StarWind on the nodes themselves and add HA into the equation. I've seen diagrams of a two-node Hyper-V cluster (which made sense), so I just need to see what the proposed architecture would be for a larger node array. I await the diagrams with interest.

Cheers, Rob.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
