
Windows 2012 storage tech and StarWind

Posted: Thu Dec 05, 2013 9:14 am
by barrysmoke
Came up with several questions tonight...

1) Drive upgrades on an iSCSI SAN replication group.
This might be in the FAQ or somewhere on the forum already, but I wanted a discussion, possibly a comparison to ZFS.
With ZFS, you can take the drives out one at a time, put a bigger one in, and wait for the rebuild... then do the next drive, until all your SAN drives are replaced and your available SAN storage space has increased.
With all the new Windows Server 2012 storage technologies, is this possible? Storage Spaces, Scale-Out File Server, ReFS, any ideas?

2) How would the storage tiers feature of storage pools work in relation to a StarWind iSCSI target? Would it be detrimental or complementary to the built-in L1/L2 caching? I'm thinking Microsoft has done something similar to Nutanix here. This is not a cache but actual placement of files/bits based on how frequently the data is used, so it is another layer that could improve storage efficiency.

3) I noticed LSFS is only enabled for thin-provisioned devices... so this is probably a stupid question (I just don't understand):
wouldn't you want the option to put it on any device, not just thin-provisioned ones?

4) Another LSFS question. LSFS is supposed to help so that parity-type storage structures work better (taken from other conversations on the forum about RAID5, etc.).
How does LSFS relate to Storage Spaces, where for resilient storage you can choose mirror, parity, or simple?
I have a lot of reading to do on these types, since it's not as cut and dried as RAID5/6.
For instance, the description of parity says that to protect against one drive failure you use 3 disks (understood), but for two drive failures you need at least 7 disks.
The mirror description says 2 disks for a single failure and 5 disks for a double failure.
I'll have to play around with these and understand them better.
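
To get my head around those numbers, a rough back-of-envelope model of usable capacity per layout (the efficiency fractions are my own approximation, not the exact slab-allocation math Storage Spaces uses; the minimum disk counts are just the ones quoted above):

Code:

# Rough usable-capacity estimate for Storage Spaces resiliency layouts.
# The fractions are approximations for comparison only, not the exact
# slab-allocation math Storage Spaces performs internally.
def usable_tb(disks: int, disk_tb: float, layout: str) -> float:
    raw = disks * disk_tb
    if layout == "simple":            # striping only, no resiliency
        return raw
    if layout == "two-way-mirror":    # 1 drive failure, needs >= 2 disks
        assert disks >= 2
        return raw / 2
    if layout == "three-way-mirror":  # 2 drive failures, needs >= 5 disks
        assert disks >= 5
        return raw / 3
    if layout == "single-parity":     # 1 drive failure, needs >= 3 disks
        assert disks >= 3
        return raw * (disks - 1) / disks
    if layout == "dual-parity":       # 2 drive failures, needs >= 7 disks
        assert disks >= 7
        return raw * (disks - 2) / disks
    raise ValueError(layout)

# Example: 8 x 2 TB disks under the different layouts.
for layout in ("two-way-mirror", "three-way-mirror", "single-parity", "dual-parity"):
    print(layout, round(usable_tb(8, 2.0, layout), 1), "TB usable")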

5) Storage Spaces write-back cache... since this could potentially span multiple SSDs, and so deliver more IOPS, would it be better to utilize this write-back cache instead of StarWind's, or in addition to it?
I'm wondering if the SSDs in this scenario could pull double duty, as in the storage tiers scenario. I know you could take a RAID card, stripe the drives, and then present them as L2 cache for an iSCSI target.

6) StarWind L2 cache... at what point does this actually become a bottleneck? Is it better to run the L2 SSD cache across multiple SSDs, and at what ratio to the drives in your SAN array? If I have 8 SAS drives in an array that can do 50k write IOPS, that's more than a single SSD can do.

Re: Windows 2012 storage tech and StarWind

Posted: Thu Dec 05, 2013 5:33 pm
by anton (staff)
1) ZFS and Windows are built with one running instance in mind, while StarWind is clustered out of the box. So with StarWind you think in NODES, not DRIVES.

To increase served capacity on the cluster you:

- Provision a LUN from the free space you have.
- Add one or two partners to build an HA config.
- Bring up the target with the corresponding LUN.
- Create a new datastore (CSV etc.) on that LUN and migrate VMs to it (or provision them there).

or

- You just grow capacity from the space you have on the existing, running StarWind target.
- You grow the file system dynamically (OK for Windows and CSV) on-the-fly (see the sketch after this list).

or

- Put a node down.
- Replace the drives with bigger-capacity ones and build a new RAID.
- Create a bigger LUN with StarWind.
- Synchronize.
- Repeat for all partner nodes (1 or 2 more times).
- Grow capacity for the LUN as in the scenario above.
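
For the "grow the file system on-the-fly" step, here is a minimal sketch of the idea. Get-PartitionSupportedSize and Resize-Partition are the standard Windows Server 2012 Storage module cmdlets; the drive letter, and driving them from Python rather than running the two lines straight in PowerShell, are just assumptions for illustration:

Code:

# Minimal sketch: after the StarWind LUN has been grown and Windows has
# rescanned the disk, extend the partition to its new maximum size.
# Assumes the volume is mounted as E: on the node running this script.
import subprocess

ps_script = r"""
$max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
Resize-Partition -DriveLetter E -Size $max   # grows the volume online
"""

subprocess.run(["powershell.exe", "-NoProfile", "-Command", ps_script], check=True)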

We don't talk physical disks, so we don't care what you use underneath: Storage Spaces (there are issues with 4 KB blocks so far), ReFS, NTFS, or ZFS.

2) We'll work with tiering just fine. Post-V8 may have its own tiering, as the Windows built-in one is a bit limited. You can combine our cache with Windows tiering
(after we fix StarWind to work on top of Storage Spaces, of course).

Nutanix is another story. It's basically a cluster-in-a-box (two server blades + shared storage in one chassis and a "NO SAN" sticker). Get a SAS JBOD and a pair
of Windows servers to build your own Nutanix free of charge.

3) Thin provisioning is a native feature of LSFS. LSFS cannot be thick-provisioned by design.

4) LSFS is layered on top of an existing file system. Again, WE DON'T TALK DRIVES, WE TALK NODES. So RAID0 is recommended, as we handle redundancy at the upper level.
LSFS uses strong checksums, so ReFS is not recommended (a waste of CPU). Also, ReFS is not stable enough (at least not yet).
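
Not our actual implementation, of course, but a toy sketch of why a log-structured layout is naturally thin-provisioned (your question 3) and why it turns random writes into sequential ones that parity RAID likes: every write is appended to a log with its own checksum, an in-memory index maps virtual blocks to log offsets, and blocks that were never written consume no space at all.

Code:

# Toy log-structured block store: illustrates the idea behind LSFS-style
# devices, NOT StarWind's on-disk format. Every write appends to the log
# (sequential I/O, friendly to parity RAID) and carries a checksum; blocks
# never written cost nothing, which is why thin provisioning comes for free.
import zlib, struct

BLOCK = 4096
HEADER = struct.Struct("<QI")          # virtual block number, crc32

class ToyLog:
    def __init__(self, path):
        self.log = open(path, "ab+")
        self.index = {}                # virtual block -> offset of its record

    def write_block(self, vblock: int, data: bytes):
        assert len(data) == BLOCK
        self.log.seek(0, 2)            # always append: sequential writes only
        self.index[vblock] = self.log.tell()   # latest copy wins
        self.log.write(HEADER.pack(vblock, zlib.crc32(data)))
        self.log.write(data)

    def read_block(self, vblock: int) -> bytes:
        off = self.index.get(vblock)
        if off is None:
            return b"\0" * BLOCK       # never written: thin-provisioned hole
        self.log.seek(off)
        vb, crc = HEADER.unpack(self.log.read(HEADER.size))
        data = self.log.read(BLOCK)
        assert vb == vblock and zlib.crc32(data) == crc   # verify checksum
        return data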

5) Don't use the Windows flash cache when you use StarWind's. Ours is distributed, so we are a) safer, b) faster, because we have many MPIO paths, and c) able to keep a VM's cache "hot"
when it moves between nodes. Windows does none of a), b) or c). That's for a multi-node config. For a single-node config StarWind has no benefit and the Windows
cache should be used.

6) RAID0 your SSDs with a very small stripe size to be used as an L2 cache and you'll be fine.
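
To put rough numbers on the ratio from question 6: the array-side figure below is the one from your question, and the per-SSD write IOPS is an assumption for illustration, not a benchmark. Once the striped SSDs' combined IOPS exceed what the spindle array can absorb, the L2 cache is no longer the bottleneck.

Code:

# Back-of-envelope check of when a striped (RAID0) SSD L2 cache stops
# being the bottleneck. Numbers are illustrative assumptions.
import math

array_write_iops = 50_000   # 8 SAS drives, figure from the question
ssd_write_iops   = 30_000   # assumed sustained write IOPS per SSD

ssds_needed = math.ceil(array_write_iops / ssd_write_iops)
print(f"RAID0 of {ssds_needed} SSDs (~{ssds_needed * ssd_write_iops:,} IOPS) "
      f"keeps ahead of the {array_write_iops:,} IOPS array")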