
Deduplication on thick image

Posted: Fri May 01, 2015 4:24 pm
by sloopy
Is deduplication on thick images an option that I'm just missing? We don't really want to use LSFS right now, which is the only place I see an option to enable it.

Otherwise, could you please explain? I've seen this come up a few times on the forums, but there hasn't really been a straight answer.

We want to use dedup for the storage savings, but it seems to be negatively offset by the extra 3x storage required for LSFS. For example, if I have 1TB and enable dedup, let's say I can fit 1.2TB of the data I'm storing. I'll use this table to explain the amount of storage I need.

Thick:
1TB SSD x 3 StarWind nodes = 3 SSDs

LSFS:
(3x storage over-provisioning) x (1TB SSD) x 3 StarWind nodes = 9 SSDs
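The table above is simple arithmetic, which can be sketched as follows. The `ssds_needed` helper, the 3x over-provisioning factor, and the 1TB drive size are just the illustrative assumptions from this thread, not StarWind parameters:

```python
import math

def ssds_needed(capacity_tb, nodes, overprovision=1.0, ssd_size_tb=1.0):
    """Total SSDs across all nodes for a given usable capacity.

    overprovision is the extra raw space the storage layer demands
    (e.g. the disputed 3x figure for LSFS); thick images use 1x.
    """
    per_node = math.ceil(capacity_tb * overprovision / ssd_size_tb)
    return per_node * nodes

thick = ssds_needed(1.0, nodes=3)                      # thick image: 3 SSDs
lsfs = ssds_needed(1.0, nodes=3, overprovision=3.0)    # LSFS at 3x: 9 SSDs
print(thick, lsfs)
```

With a 1.2x dedup ratio, the 1TB of raw capacity per node would hold about 1.2TB of logical data in either case; the question in this thread is whether the LSFS over-provisioning wipes out that saving.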

Maybe I could squeeze a bit of extra performance out of LSFS, but I'm not convinced based on what I've seen, and it definitely won't justify the cost in this particular case, nor the possible risk of running out of space.

Thank you.

Re: Deduplication on thick image

Posted: Fri May 01, 2015 4:34 pm
by anton (staff)
There's no such thing as an "extra 3x storage requirement for LSFS". First of all, it will shrink back when the scrubber kicks in. Second, you can make it 1x if you want (performance will suffer).

Dedupe for thick images is done as offline dedupe, and we'll publish it as an update later.

Re: Deduplication on thick image

Posted: Fri May 01, 2015 4:59 pm
by sloopy
My 3x figure comes from many other posts out there. Here is one example:

https://forums.starwindsoftware.com/vie ... tra#p23310

Also, during some testing I did, I was using about 80GB of a 117GB partition, and overnight the partition filled up and the LUN became inaccessible. Thankfully I was just testing with it, so I was able to simply delete it.


Re: Deduplication on thick image

Posted: Fri May 01, 2015 6:02 pm
by anton (staff)
Old builds did not have an automatic scheduler for defragmentation; new ones do. The upcoming build will also have a page manager and the ability to select how much underlying disk free space LSFS can use.