Windows Server 2012 storage tech and StarWind
Posted: Thu Dec 05, 2013 9:14 am
I came up with several questions tonight..
1) Drive upgrades on an iSCSI SAN replication group
This might be in the FAQ or somewhere on the forum already, but I wanted a discussion, possibly a comparison to ZFS.
With ZFS, you can take the drives out one at a time, put a bigger one in, and wait for the resilver... then do the next drive, until all your SAN drives are replaced and your available SAN storage space is increased.
With all the new Windows Server 2012 storage technologies, is this possible? Storage Spaces, Scale-Out File Server, ReFS... any ideas?
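To make the ZFS comparison concrete, here's a quick back-of-the-envelope sketch in Python (the drive sizes and single-parity layout are just my example assumptions, not anyone's actual setup). The point is that in a parity-style set the usable capacity stays pinned to the smallest member, so the extra space only shows up once the last old drive has been swapped out:

[code]
# Toy numbers only: usable capacity of a single-parity set is
# (number of drives - 1) * size of the smallest member, so one-at-a-time
# swaps don't grow the pool until the final drive is replaced.

def usable_tb(drive_sizes_tb, parity_drives=1):
    """Usable capacity of a parity set, limited by its smallest member."""
    return (len(drive_sizes_tb) - parity_drives) * min(drive_sizes_tb)

drives = [2, 2, 2, 2]          # start with four 2 TB drives, single parity
print("before:", usable_tb(drives), "TB usable")

for i in range(len(drives)):   # replace one drive at a time with a 4 TB unit
    drives[i] = 4
    print(f"after swap {i + 1}:", usable_tb(drives), "TB usable")

# capacity only jumps from 6 TB to 12 TB once the last 2 TB drive is gone
[/code]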
2) How would the storage tiers feature of storage pools work in relation to a StarWind iSCSI target? Would it be detrimental or complementary to the built-in L1/L2 caching? I'm thinking Microsoft has done something similar to Nutanix here: this is not a cache, but actual placement of the files/bits based on how frequently the data is used, so it is another layer that could improve storage efficiency.
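Just to spell out the distinction I'm drawing (purely my mental model, not how StarWind or Storage Spaces are actually implemented internally): a tier moves the data itself onto the SSD, while a cache keeps a copy on the SSD and the spinning disks stay authoritative.

[code]
# Toy mental model only: a tier MOVES hot blocks to SSD, a cache COPIES them
# while the HDD keeps the authoritative data.

hot_blocks = {"blkA", "blkC"}

def tier(blocks_on_hdd):
    ssd = {b for b in blocks_on_hdd if b in hot_blocks}
    hdd = blocks_on_hdd - ssd            # hot data now lives only on the SSD
    return ssd, hdd

def cache(blocks_on_hdd):
    ssd = {b for b in blocks_on_hdd if b in hot_blocks}
    hdd = set(blocks_on_hdd)             # HDD keeps everything; SSD holds copies
    return ssd, hdd

print("tier :", tier({"blkA", "blkB", "blkC"}))
print("cache:", cache({"blkA", "blkB", "blkC"}))
[/code]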
3) I noticed LSFS is only enabled for thin-provisioned devices... so this is probably a stupid question (I just don't understand):
Wouldn't you want the option to put that on any device, not just thin-provisioned ones?
4) Another LSFS question. LSFS is supposed to help parity-type storage structures work better (taken from other conversations on the forum about RAID5, etc.).
How does LSFS relate to Storage Spaces, where for resilient storage you can choose from mirror, parity, or simple?
I have a lot of reading to do on these layouts, since it's not as cut and dried as RAID5/6.
For instance, the description of parity says that to protect against one drive failure you use 3 disks (understood), but for two drive failures you use at least seven disks.
The mirror description says 2 disks for a single failure and 5 disks for a double failure.
I'll have to play around with these and understand them better.
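In the meantime, here's the rough efficiency math as I understand those layout descriptions (these are my own assumptions and may well be off):

[code]
# Back-of-the-envelope storage efficiency, based on my reading of the layout
# descriptions above (assumptions, not official numbers).

def mirror_efficiency(copies):
    return 1 / copies                     # 2-way mirror = 50%, 3-way ~ 33%

def parity_efficiency(disks, parity):
    return (disks - parity) / disks       # capacity lost to parity per stripe

print("2-way mirror (min 2 disks):", mirror_efficiency(2))        # 0.50
print("3-way mirror (min 5 disks):", mirror_efficiency(3))        # ~0.33
print("single parity on 3 disks:  ", parity_efficiency(3, 1))     # ~0.67
print("dual parity on 7 disks:    ", parity_efficiency(7, 2))     # ~0.71
[/code]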
5) Storage Spaces write-back cache... since this could potentially span multiple SSDs, and therefore deliver more IOPS, would it be better to use this write-back cache instead of StarWind's, or in addition to StarWind's?
I'm wondering if the SSDs in this scenario could pull double duty, as in the storage tiers scenario. I know you could take a RAID card, stripe the drives, and then present them as L2 cache for an iSCSI target.
6) StarWind L2 cache... at what point does this actually become a bottleneck? Is it better to build the L2 SSD cache on multiple SSDs, and at what ratio to the drives in your SAN array? If I have 8 SAS drives in an array that can do 50k write IOPS, that's more than a single SSD can do.
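Here's the arithmetic behind that worry, with made-up round numbers (the IOPS figures are only my assumptions, not benchmarks):

[code]
# Made-up round numbers just to frame the question; real IOPS depend on the
# drives, RAID level, and workload.

array_write_iops = 50000     # what I'm assuming my 8-drive SAS array can push
ssd_write_iops = 30000       # what I'm assuming one cache SSD can sustain

ssds_needed = -(-array_write_iops // ssd_write_iops)   # ceiling division
print("SSDs needed so the L2 cache isn't the write bottleneck:", ssds_needed)
print("ratio: about 1 SSD per", round(8 / ssds_needed, 1), "SAS drives")
[/code]

So by that math I'd need at least a couple of SSDs striped together before the cache stops being the limiting factor, which is why I'm asking about the ratio.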