Again some questions about LSFS devices
Posted: Fri Jun 10, 2016 7:20 am
Hi there,
we had this LSFS bug last year which constantly filled our disks with .spspx files.
A short time after that, a new build with a fix came out, so this is no longer an issue.
We use a roughly 13 TB Nearline SAS RAID 5 volume for an LSFS LUN without deduplication, with StarWind VTL on top, to back up our VMs with DPM 2012 R2. (I would say dedup with slow Nearline disks is not a good idea.)
I know it's normal that a thin-provisioned LUN does not shrink by itself. I made a full backup of some VMs, and afterwards the new LSFS LUN was filled with 500 GB of backup data.
I then deleted all the backup data and started a backup of one single VM of about 45 GB.
Now the RAID volume was filled with 506 GB of .spspx files: 6 GB more, even though 45 GB is much smaller than the already grown and now "empty" 500 GB.
My first question is: when and how does StarWind defragmentation work? I read in your FAQ that this feature "cleans up" the .spspx file structure to reclaim disk space.
But I can't see any disk space being reclaimed, even when I start defragmentation manually.
You say that LSFS needs 2.5 to 3 times more space than you can actively use. Is this still correct?
How much space do I really have to plan for? Is there a point where the LSFS device stops growing?
Currently we have 700 GB of .spspx files on the LUN's RAID volume and only 50 GB actively used in the LUN.
Let's say we back up another VM of 30 GB: the RAID volume would again fill up with several more GB of .spspx files.
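To make the capacity question concrete, here is a rough back-of-the-envelope calculation based on the 2.5–3× overhead figure from your FAQ. The script and its numbers are only my own sketch of how I understand the sizing, not anything official from StarWind:

```python
# Hypothetical sizing sketch for an LSFS LUN, assuming the documented
# 2.5x-3x on-disk space amplification for .spspx segment files.

RAW_VOLUME_TB = 13.0                     # Nearline SAS RAID 5 volume size
OVERHEAD_LOW, OVERHEAD_HIGH = 2.5, 3.0   # assumed LSFS amplification range

# Dividing raw capacity by the overhead gives the data you can actively use.
usable_high = RAW_VOLUME_TB / OVERHEAD_LOW   # optimistic estimate
usable_low = RAW_VOLUME_TB / OVERHEAD_HIGH   # conservative estimate

print(f"Usable capacity on {RAW_VOLUME_TB:.0f} TB raw: "
      f"{usable_low:.1f} - {usable_high:.1f} TB")
```

If that reading is right, our 13 TB volume would only safely hold about 4.3–5.2 TB of active data, which is why I would like to know whether the device really stops growing at some point.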
What happens when the RAID volume reaches its capacity? Does LSFS clean up the .spspx files? Or does this lead to a functional stop of the LSFS device?
Thanks in advance for any help!
Regards