What dedup speeds should I expect? Granted, right now I'm testing with a dedup image on a USB drive. My .spbitmap file is 512MB and my .spmetadata file is 2GB.
I test by copying a 20GB file. It runs at around 40MB/sec for the first 2GB of the transfer, then crawls down to about 12MB/sec for the rest. I assume the .spmetadata file acts as a non-deduplicated cache, and once it fills, the speed of the disk determines how fast the data gets deduplicated into the .spdata file.
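For what it's worth, here is roughly how I measure the per-interval speed instead of eyeballing the file manager, so the initial burst vs. the steady-state crawl shows up clearly. This is just a generic sketch (the chunk size and the fsync-per-chunk are my own choices, nothing product-specific):

```python
import os
import time

def copy_with_throughput(src, dst, chunk_mb=64):
    """Copy src to dst, printing MB/s per chunk so the initial
    cached burst vs. the steady-state speed are both visible."""
    chunk = chunk_mb * 1024 * 1024
    copied = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            t0 = time.monotonic()
            buf = fin.read(chunk)
            if not buf:
                break
            fout.write(buf)
            fout.flush()
            # Force the data to the device so OS write caching
            # doesn't hide the real throughput of the dedup target.
            os.fsync(fout.fileno())
            dt = time.monotonic() - t0
            copied += len(buf)
            print(f"{copied / 2**20:>8.0f} MB copied, "
                  f"{len(buf) / 2**20 / dt:6.1f} MB/s")
```

Calling copy_with_throughput("backup.bak", "X:\\backup.bak") against the dedup volume is how I got the 40 → 12 MB/sec numbers above.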
What would be the best practice if I plan to work with very large files? I plan to use it to store MSSQL backups, which range anywhere from a few hundred megabytes up to 640GB. Do I need to specify a large cache during initial configuration? I thought that setting was a RAM cache, but it appears to control the size of the .spmetadata file.
For the test above I selected write-back caching, a 50,000ms cache time, a 2GB cache size, and a 4K block size. If there's a better config for my use case, please let me know and I'll run some tests with it.
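To show why I'm worried about the 4K block size at these file sizes, here's the back-of-the-envelope math I did on block counts. The 64 bytes-per-block metadata figure is a made-up illustrative number, not the product's actual on-disk format:

```python
def dedup_block_math(file_bytes, block_bytes, meta_per_block=64):
    """Illustrative only: how many dedup blocks a file produces, and a
    hypothetical metadata footprint assuming meta_per_block bytes of
    hash/index data per block (NOT a real figure from the product)."""
    blocks = -(-file_bytes // block_bytes)  # ceiling division
    return blocks, blocks * meta_per_block

GB = 1024**3
for size in (20 * GB, 640 * GB):
    blocks, meta = dedup_block_math(size, 4096)
    print(f"{size / GB:5.0f} GB file @ 4K blocks -> {blocks:,} blocks, "
          f"~{meta / 2**20:,.0f} MB metadata (hypothetical 64 B/block)")
```

Even with that guessed-at overhead, a single 640GB backup at 4K blocks is on the order of 168 million blocks, which is why I suspect the block size and the .spmetadata sizing matter a lot more for my use case than the RAM cache does.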