robnicholson wrote:Another wish: background shrink of thin-provisioned disks. We sized our Exchange log drive at 2TB, but the logs weren't getting deleted (a problem with the backup software), so they grew and grew and nearly filled 1TB. They have since been pruned down to a few hundred megabytes, but the disk image is stuck at 1TB, using up nearly 800GB of space on an expensive SAN.
Our only option (unless you tell me otherwise) is to create another image and copy the files across.
A background shrink of that disk, run when activity is low, would be wonderful.
Cheers, Rob.
I promise to back up my data first
We can parse the NTFS content and deallocate the blocks marked as "free" in the free-space bitmap, but it's a very dangerous feature.
robnicholson wrote:I promise to back up my data first
We can parse the NTFS content and deallocate the blocks marked as "free" in the free-space bitmap, but it's a very dangerous feature.
Seriously, I wouldn't do it without a backup. Also, whilst "while live" would be really neat, an offline shrink would be fine as well. VMware Workstation has been able to shrink VMDK files for years - a challenge!
Cheers, Rob.
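To make the free-space-bitmap idea above concrete: the read-only half of it can be sketched with the documented Windows FSCTL_GET_VOLUME_BITMAP control code. This is a hedged illustration, not StarWind's code; it only reports which clusters NTFS marks as free and deliberately leaves out the dangerous step of actually deallocating those blocks inside a thin image. The drive letter, the sample size, and the need for administrative rights are assumptions of the sketch.

```python
# Hedged sketch: sample the NTFS volume bitmap and report how much of it is free.
# Read-only; it does NOT shrink or deallocate anything. Requires admin rights.
import ctypes
import ctypes.wintypes as wt
import struct

FSCTL_GET_VOLUME_BITMAP = 0x0009006F
GENERIC_READ = 0x80000000
SHARE_RW = 0x00000001 | 0x00000002      # FILE_SHARE_READ | FILE_SHARE_WRITE
OPEN_EXISTING = 3
ERROR_MORE_DATA = 234

k32 = ctypes.WinDLL("kernel32", use_last_error=True)
k32.CreateFileW.argtypes = [wt.LPCWSTR, wt.DWORD, wt.DWORD, wt.LPVOID,
                            wt.DWORD, wt.DWORD, wt.HANDLE]
k32.CreateFileW.restype = wt.HANDLE

def free_cluster_ratio(volume=r"\\.\E:", sample_bytes=1 << 20):
    """Sample the start of the NTFS volume bitmap; return the free-cluster ratio."""
    handle = k32.CreateFileW(volume, GENERIC_READ, SHARE_RW, None,
                             OPEN_EXISTING, 0, None)
    if handle == wt.HANDLE(-1).value:                 # INVALID_HANDLE_VALUE
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        start_lcn = struct.pack("<q", 0)              # STARTING_LCN_INPUT_BUFFER
        out = ctypes.create_string_buffer(16 + sample_bytes)
        returned = wt.DWORD(0)
        ok = k32.DeviceIoControl(wt.HANDLE(handle), FSCTL_GET_VOLUME_BITMAP,
                                 start_lcn, len(start_lcn), out, len(out),
                                 ctypes.byref(returned), None)
        # ERROR_MORE_DATA just means the volume has more clusters than we sampled.
        if not ok and ctypes.get_last_error() != ERROR_MORE_DATA:
            raise ctypes.WinError(ctypes.get_last_error())
        _, total_clusters = struct.unpack_from("<qq", out.raw, 0)  # VOLUME_BITMAP_BUFFER header
        bits = min(total_clusters, (returned.value - 16) * 8)
        bitmap = out.raw[16:16 + (bits + 7) // 8]
        in_use = sum(bin(b).count("1") for b in bitmap)            # set bit = cluster allocated
        return 1.0 - in_use / bits
    finally:
        k32.CloseHandle(wt.HANDLE(handle))

if __name__ == "__main__":
    print(f"roughly {free_cluster_ratio():.1%} of the sampled clusters are free")
```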
megacc wrote:My wishlist features
-Asynchronous mode for replication.
[ Already under way. Not sure about V5.8 but either V5.9 or V6 will have +1 async replication node ]
-ability to stop synchronization for HA devices on either node
[ Why do you need this? ]
-ability to add partner without taking ha device offline.
[ Already part of V5.8 we should release soon ]
-additional nodes for backup and snapshots
[ What do you mean here? Please clarify ]
-additional nodes as accelerators or gateways
[ Already under way. V6 probably ]
-terminate sessions
[ Why do you need this and how do you want this implemented? ]
-web interface
[ 50% of people love it and 50% of people hate it. As we cannot keep both we've decided to keep the one I like. So I don't know here... ]
-API or a script console for customization & development.
[ This is already done. Version 5.8 will go out with a COM object to control StarWind and a set of PowerShell scripts to run; a rough scripting illustration follows at the end of this post ]
-tools for maintaining, debugging and repairing img devices
[ They are plain images. You can mount them with StarPort and edit them with whatever you want. Anything I'm missing here? ]
-tools to migrate/convert actual physical partitions (locally/remotely) into img devices
[ Please clarify. Is it a simple tool to take a RAW disk and make an IMG from it? ]
-virtual raid if possible
[ What is it? Something like the RAID5/6 or RAID4 that LeftHand does over server nodes, or do you mean our own built-in RAID for local volumes?
In other words, whom would you like us to "clone" - LeftHand with their VSA or Veritas with their Logical Volume Manager? ]
-permissions based on domain groups for each task performed within gui & web interface.
[ Please clarify ]
-more detailed documentation
[ What exactly do you miss here? "Hate list", please! ]
-certification from Microsoft & VMware.
[ We're already certified by Xen, Microsoft & VMware
http://www.starwindsoftware.com/certifications
]
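On the COM/PowerShell scripting item above, the snippet below is only a shape-of-the-thing illustration of driving a COM automation object from script (Python with pywin32 here rather than PowerShell). The ProgID "StarWind.Automation" and every method and property name in it are hypothetical placeholders, not the actual object model shipping with V5.8, so treat it purely as an example of the scripting pattern.

```python
# Hedged illustration only: how a COM automation object is driven from script.
# The ProgID and all member names below are hypothetical placeholders,
# NOT the real StarWind object model.
import win32com.client  # pywin32

def list_targets(host="127.0.0.1", port=3261):
    server = win32com.client.Dispatch("StarWind.Automation")  # hypothetical ProgID
    server.Connect(host, port)                                # hypothetical method
    try:
        for target in server.Targets:                         # hypothetical collection
            print(target.Name, target.DeviceCount)            # hypothetical properties
    finally:
        server.Disconnect()                                   # hypothetical method

if __name__ == "__main__":
    list_targets()
```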
anton (staff) wrote:My comments inside original text block.
megacc wrote:My wishlist features
-Asynchronous mode for replication.
[ Already under way. Not sure about V5.8 but either V5.9 or V6 will have +1 async replication node ]
\\good
-ability to stop synchronization for HA devices on either node
[ Why do you need this? ]
\\To perform maintenance on a particular partition (rebuild, defrag, chkdsk, ...)
-ability to add partner without taking ha device offline.
[ Already part of V5.8 we should release soon ]
\\I'm waiting.
-additional nodes for backup and snapshots
[ What do you mean here? Please clarify ]
\\A third or fourth node for backups & snapshots of an HA device; you don't want to store backups and snapshots on the HA device's own nodes, as that raises the risk of losing your backup data in case of a total failure of either node.
-additional nodes as accelerators or gateways
[ Already under way. V6 probably ]
\\It would be a big jump for StarWind if there were separate tiering components responsible for caching.
-terminate sessions
[ Why do you need this and how do you want this implemented? ]
\\When performing maintenance (rebuild/resize/defrag, ...) on the partition where the HA device's img file resides, and you need to preserve the performance of the HA device, you first have to shift the client to the other node by either:
-shutting down the StarWind service on the node being maintained, which is inconvenient;
-using the MPIO tab to shift the client to the other node, if you have permission to log on to the server (the iSCSI initiator client);
-using a terminate-sessions feature, if you don't have that permission.
Then stop the replication between the HA devices until the maintenance is finished and resume the replication.
-web interface
[ 50% of people love it and 50% of people hate it. As we cannot keep both we've decided to keep the one I like. So I don't know here... ]
\\Well, you could add it as an optional component.
-API or a script console for customization & development.
[ This is already done. Version 5.8 will go out with a COM object to control StarWind and a set of PowerShell scripts to run ]
\\Cool, can't wait!
-tools for maintaining, debugging and repairing img devices
[ They are plain images. You can mount them with StarPort and edit them with whatever you want. Anything I'm missing here? ]
\\What I mean is, for example: upgrading the img to support CHAP authentication, repairing the header, converting between HA and basic img and vice versa, comparing two img files, copying an img file to an external HDD in emergency conditions, checking integrity, and so on.
-tools to migrate/convert actual physical partitions (locally/remotely) into img devices
[ Please clarify. Is it a simple tool to take a RAW disk and make an IMG from it? ]
\\For a start, yes: an identical copy of an existing disk, with the same header size, exported as either an HA device or a basic device (see the raw-copy sketch just after this post).
-virtual raid if possible
[ What is it? Something like the RAID5/6 or RAID4 that LeftHand does over server nodes, or do you mean our own built-in RAID for local volumes?
In other words, whom would you like us to "clone" - LeftHand with their VSA or Veritas with their Logical Volume Manager? ]
\\What I mean is storing the img file across multiple partitions (controllers) for redundancy and performance, using a built-in StarWind RAID (1, 4, 5, 6) architecture - independent of the OS - to double the read/write speed.
-permissions based on domain groups for each task performed within gui & web interface.
[ Please clarify ]
\\For example, certain HA devices should only be maintained (starting sync, removing nodes, editing the ACL for a specific device, ...) by a specific domain group; other groups are denied.
-more detailed documentation
[ What exactly do you miss here? "Hate list", please! ]
\\You can't find any paper on how StarWind reacts (disallowing/allowing clients) when a NIC is disconnected/reconnected, when the StarWind service is restarted, or when recreating HA devices or converting from HA to basic; nor best practices and recommendations for configuring hardware components (teaming, RAID levels, stripe size) to guarantee the best performance, ...
-certification from Microsoft & VMware.
[ We're already certified by Xen, Microsoft & VMware
http://www.starwindsoftware.com/certifications
]
\\Fantastic; now it's time to certify some hardware components and configurations tested by your company.
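Since the migration request above boils down to a byte-for-byte copy of an existing disk into a flat image, a minimal sketch could look like this. It is not the StarWind converter: the chunk size is arbitrary, the source disk is assumed to be offline or quiesced (copying a disk that is being written to yields an inconsistent image), and administrative/root rights are assumed.

```python
# Hedged sketch, not the StarWind converter: a straight sequential copy of a
# raw block device into a flat .img file.
# device examples: r"\\.\PhysicalDrive1" on Windows, "/dev/sdb" on Linux
import sys

CHUNK = 4 * 1024 * 1024   # 4 MiB per read keeps the copy sequential and fast

def raw_to_img(device, image):
    """Copy a block device byte-for-byte into a flat image file."""
    copied = 0
    with open(device, "rb") as src, open(image, "wb") as dst:
        while True:
            block = src.read(CHUNK)
            if not block:          # end of device reached
                break
            dst.write(block)
            copied += len(block)
    return copied

if __name__ == "__main__":
    total = raw_to_img(sys.argv[1], sys.argv[2])
    print(f"copied {total / (1 << 30):.2f} GiB")
```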
megacc wrote:anton (staff) wrote:My comments inside original text block.
megacc wrote:My wishlist features
[ ... ]
-ability to stop synchronization for HA devices on either node
[ Why do you need this? ]
\\To perform maintenance on a particular partition (rebuild, defrag, chkdsk, ...)
+++ OK, we'll add this to V5.8 I think.
[ ... ]
-additional nodes for backup and snapshots
[ What do you mean here? Please clarify ]
\\A third or fourth node for backups & snapshots of an HA device; you don't want to store backups and snapshots on the HA device's own nodes, as that raises the risk of losing your backup data in case of a total failure of either node.
+++ How much is this different from +1 node for replication? It seems to me it's the same feature, no?
[... ]
-terminate sessions
[ Why do you need this and how do you want this implemented? ]
\\When performing maintenance (rebuild/resize/defrag, ...) on the partition where the HA device's img file resides, and you need to preserve the performance of the HA device, you first have to shift the client to the other node by either:
-shutting down the StarWind service on the node being maintained, which is inconvenient;
-using the MPIO tab to shift the client to the other node, if you have permission to log on to the server (the iSCSI initiator client);
-using a terminate-sessions feature, if you don't have that permission.
Then stop the replication between the HA devices until the maintenance is finished and resume the replication.
+++ We'll try to back-port this one from a custom OEM build, maybe even into the V5.8 we'll release soon.
-web interface
[ 50% of people love it and 50% of people hate it. As we cannot keep both we've decided to keep the one I like. So I don't know here... ]
\\Well, you could add it as an optional component.
+++ Well... it requires having a dedicated team doing the Web GUI only. For now we'll release a Web-based GUI as a prototype for our backup project. Then we'll *maybe* consider having a Web GUI for StarWind itself.
[ ...]
-tools for maintaining, debugging and repairing img devices
[ They are plain images. You can mount them with StarPort and edit them with whatever you want. Anything I'm missing here? ]
\\What I mean is, for example: upgrading the img to support CHAP authentication, repairing the header, converting between HA and basic img and vice versa, comparing two img files, copying an img file to an external HDD in emergency conditions, checking integrity, and so on.
+++ This one is a bit complicated, as we'll merge all the image formats into one soon. Some sort of converter between the old and new image formats would OF COURSE be provided (like the one we already have for old- and new-style de-duplication).
-tools to migrate/convert actual physical partitions (locally/remotely) into img devices
[ Please clarify. Is it a simple tool to take a RAW disk and make an IMG from it? ]
\\For a start, yes: an identical copy of an existing disk, with the same header size, exported as either an HA device or a basic device.
+++ We'll add P2V migration (actually RAW disk as a source) to our StarWind V2V Converter (we'll rename it to StarWind Converter).
-virtual raid if possible
[ What is it? Something like the RAID5/6 or RAID4 that LeftHand does over server nodes, or do you mean our own built-in RAID for local volumes?
In other words, whom would you like us to "clone" - LeftHand with their VSA or Veritas with their Logical Volume Manager? ]
\\What I mean is storing the img file across multiple partitions (controllers) for redundancy and performance, using a built-in StarWind RAID (1, 4, 5, 6) architecture - independent of the OS - to double the read/write speed.
+++ This one is HUGE. I'll try to describe what we're going to do and what we're NOT going to do as well.
1) Having our own RAID for redundancy is a bad idea. RAID is already managed by the OS or by RAID hardware, so duplicating the same functionality makes little sense. Also, we promote a concept where only the boot disk needs local protection, as the SAN space itself should be HA (duplicated or triplicated). There's no need for RAID redundancy if you have redundant SAN nodes.
Performance - YES (RAID0). Redundancy - NO (RAID1/5/6). At the same time, the deduplication engine we'll push as the main container does have the ability to span different volumes for performance (a minimal striping sketch follows at the end of the thread).
And we'll release automatic tiering (placing more frequently used data on a faster disk, probably an SSD) quite soon.
2) Having true network RAID is not a good idea either. Building RAID5/6 over the network carries huge performance penalties; LeftHand's network RAID, for example, does not scale well beyond 3 nodes, and there's a reason why.
So we'll build many-node storage clusters, but with RAID1 over two or three nodes only - per single LUN, I mean.
If you have different vision here please provide your feedback.
-permissions based on domain groups for each task performed within gui & web interface.
[ Please clarify ]
\\For example, certain HA devices should only be maintained (starting sync, removing nodes, editing the ACL for a specific device, ...) by a specific domain group; other groups are denied.
+++ We'll do ACLs in one of the next updates.
-more detailed documentation
[ What exactly do you miss here? "Hate list", please! ]
\\You can't find any paper on how StarWind reacts (disallowing/allowing clients) when a NIC is disconnected/reconnected, when the StarWind service is restarted, or when recreating HA devices or converting from HA to basic; nor best practices and recommendations for configuring hardware components (teaming, RAID levels, stripe size) to guarantee the best performance, ...
+++ We'll take care of this as well.
-certification from Microsoft & VMware.
[ We're already certified by Xen, Microsoft & VMware
http://www.starwindsoftware.com/certifications
]
\\Fantastic; now it's time to certify some hardware components and configurations tested by your company.
+++ We'll provide cross-certification with hardware vendors. Already working on this.
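To make the "span one image over several volumes for performance" point concrete, here is a minimal RAID0-style sketch. The 64 KiB stripe size, the plain file-backed members, and the class shape are illustration-only assumptions and have nothing to do with StarWind's actual container or deduplication format; it just shows why striping helps throughput (each member services only every Nth stripe) while providing no redundancy at all.

```python
# Minimal RAID0-style sketch: one virtual disk striped over several backing
# files. Assumptions for illustration only, not StarWind's container format.
import os

STRIPE = 64 * 1024  # bytes per stripe (assumed)

class StripedImage:
    """Round-robin stripes of a virtual disk across several backing files."""

    def __init__(self, paths, size):
        self.files = [open(p, "r+b" if os.path.exists(p) else "w+b") for p in paths]
        # Pre-size every member so reads of never-written regions return zeros.
        member_size = size // len(self.files) + STRIPE
        for f in self.files:
            f.truncate(member_size)

    def _locate(self, offset):
        stripe_no, within = divmod(offset, STRIPE)
        member = stripe_no % len(self.files)
        # Which stripe this is on that member, then the byte offset in its file.
        file_off = (stripe_no // len(self.files)) * STRIPE + within
        return self.files[member], file_off

    def write(self, offset, data):
        while data:
            f, file_off = self._locate(offset)
            chunk = min(len(data), STRIPE - offset % STRIPE)  # stay inside one stripe
            f.seek(file_off)
            f.write(data[:chunk])
            offset, data = offset + chunk, data[chunk:]

    def read(self, offset, length):
        out = bytearray()
        while length:
            f, file_off = self._locate(offset)
            chunk = min(length, STRIPE - offset % STRIPE)
            f.seek(file_off)
            out += f.read(chunk)
            offset, length = offset + chunk, length - chunk
        return bytes(out)

if __name__ == "__main__":
    disk = StripedImage(["member0.img", "member1.img"], size=8 * STRIPE)
    disk.write(100_000, b"hello")
    assert disk.read(100_000, 5) == b"hello"
```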