anton (staff) wrote: 1) It's a bad idea to pass a raw disk to StarWind. The whole virtualization layer we provide does a lot of things, including write coalescing, log structuring, snapshotting, etc. At the same time, we don't use file I/O, so think of StarWind img files as containers only; the I/O is handled by the storage stack directly (no NTFS/ReFS overhead).

So, if I'm hearing you correctly, using a virtual disk image strips out the NTFS/ReFS overhead. Even though the container file is logically placed inside a disk/volume formatted with NTFS/ReFS, putting the data into a container file bypasses that overhead for all reads/writes... is that what I'm understanding? I had thought that reading from or writing to any file inside a filesystem added the filesystem's overhead, but if that is indeed the case, I can really see the flexibility in the StarWind solution.
anton (staff) wrote: 2) Parity RAID (especially dual parity) is a bad idea with a typical virtualization workload. Tons of small writes will trigger whole-stripe updates and the system becomes a write pig. We fix this with LSFS, but it's experimental in V8.
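To put a rough number on that write penalty, here is a minimal back-of-the-envelope sketch. This is not StarWind code, just the textbook small-write cost of parity RAID: a small random write on RAID 5 costs about 4 backend I/Os (read old data, read old parity, write new data, write new parity), RAID 6 about 6, while a RAID 10 mirror costs 2. The 5,000 IOPS workload is purely illustrative.

```python
# Back-of-the-envelope small-write penalty for different RAID layouts.
# Assumes the classic read-modify-write path for parity RAID; write-back
# caches and log structuring (e.g. LSFS) exist precisely to soften this.

BACKEND_IOS_PER_SMALL_WRITE = {
    "RAID 10": 2,  # write the data block to both mirror copies
    "RAID 5": 4,   # read old data + old parity, write new data + new parity
    "RAID 6": 6,   # as RAID 5, but with two parity blocks to read and rewrite
}


def backend_write_iops(frontend_write_iops: int, raid_level: str) -> int:
    """Backend IOPS the disks must absorb for a given front-end write load."""
    return frontend_write_iops * BACKEND_IOS_PER_SMALL_WRITE[raid_level]


if __name__ == "__main__":
    vm_write_iops = 5000  # hypothetical 4K random-write load from the VMs
    for level in BACKEND_IOS_PER_SMALL_WRITE:
        print(f"{level}: {backend_write_iops(vm_write_iops, level):,} backend IOPS")
```

That multiplication is the amplification a log-structured layout tries to avoid by turning small random writes into larger sequential ones.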
anton (staff) wrote: 3) ALUA is for HA configs with different types of storage. So you combine RAM and flash, or flash and spindle, inside the same cluster and want writes and reads to go to the faster node first.

Ah, so this gives you control over your config, letting you tune performance based on what you have and what is faster, such as a RAM disk or a good SSD... I'll have to play with that.
anton (staff) wrote: 4) ReFS is junk... We do have stronger checksums as part of LSFS, but for flat image files you can use ReFS (just make sure you enable hashing for file data, not only metadata). V8 will work with Storage Spaces as they report hardware 4 KB blocks; V6 cannot do this.

So that's a yes, but it's junk.
barrysmoke wrote: Any issues with, say, a 21 TB or larger virtual disk?
*** Never mind, I saw the 10 TB file creation thread; NTFS on 2008 has a limit, but on 2012 it's more like 256 TB, so there's plenty of room to grow. ***
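For context, the limits mentioned above fall out of the NTFS cluster math: the filesystem addresses at most 2^32 clusters, so the volume ceiling is cluster size times 2^32 (16 TB with the default 4 KB clusters, 256 TB with 64 KB clusters, which is where the Windows Server 2012 figure comes from). A quick sanity check, illustrative only:

```python
# NTFS volume-size ceiling = cluster size x maximum addressable clusters (2^32).
# Illustrative math only; the supported limit documented for a given Windows
# release can be lower than the on-disk format's theoretical maximum.

MAX_CLUSTERS = 2 ** 32

for cluster_kib in (4, 8, 16, 32, 64):
    max_tib = cluster_kib * 1024 * MAX_CLUSTERS / 2 ** 40
    print(f"{cluster_kib:>2} KB clusters -> {max_tib:,.0f} TB maximum volume size")
```

So a 21 TB img file simply needs a volume formatted with clusters larger than the 4 KB default (8 KB or above).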
kspare wrote: Why would LSI's CacheCade be a bad idea? We're using it and it's working pretty well. All the VMs we have attached over 10 GbE are working great!
Ok, I must be missing something, so bear with me.

anton (staff) wrote: Because it's a waste of money. The modern storage paradigm is to run RAID in software (ZFS, Storage Spaces, etc.) in combination with strong hashes done by the file system, using flash and RAM as system-wide resources sitting on a fast system bus (RAM) or directly on PCIe (flash), accessible to the whole system and not just to the controller. Running cache on the controller is 1) stealing resources from the system, as the OS can no longer use them, and 2) slowing things down, since the CPU is always faster and the memory bus is wider than the controller's on-board I/O processor and its attached cache. Batteries on controllers don't make sense any more either: we run clusters, so every node is effectively a stand-alone controller by itself, plus there's a UPS for the whole system. There's not much value left in the whole thing.
I have 4 SSDs in my server, currently running Windows Server 2012, and I have CacheCade enabled on my controller, so I use them in a RAID 10 CacheCade volume.
While I am running StarWind V6, how would I better utilize the SSDs and get rid of CacheCade?
kspare wrote: That explanation doesn't make sense.
CacheCade is using the SSDs right now.
To whom do I assign the SSDs? StarWind V6 doesn't do anything with them yet, only V8 does, and because V8 is beta it's irrelevant at this point.
So who uses the SSDs?
kspare wrote: That's basically how CacheCade works.
Take CacheCade out of the picture.
I have Server 2012 and StarWind V6; how do you propose I use my SSDs? The way I see it right now, CacheCade is my only option. You give a very vague description of allocating them elsewhere... where is elsewhere?
Until V8 is actually out, is CacheCade our best bet?