I think there may be a chunker (not sure if it’s the default) that picks chunk boundaries based on the content itself rather than at fixed offsets, so that chunking produces a very high number of duplicate chunks (a high dedup factor). Normally, if a large file has a block of bytes added to the front, every fixed-size chunk would shift and hash differently; with this kind of chunker the boundaries resynchronize shortly after the insertion point, so most chunks come out identical and dedup still happens a lot.
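
For anyone curious how that can work at all, here’s a minimal sketch of the general idea (content-defined chunking with a rolling hash) in Python. To be clear, this is not the actual chunker’s code: the gear table, mask, and min/max chunk sizes are assumptions picked purely to illustrate why a block prepended to a file only disturbs the first chunk or two.

```python
import random

random.seed(0)  # fixed seed so the example "gear" table is reproducible
GEAR = [random.getrandbits(64) for _ in range(256)]

AVG_MASK = (1 << 13) - 1   # cut when low 13 bits are zero -> ~8 KiB average chunks (assumption)
MIN_CHUNK = 2 * 1024       # avoid pathologically small chunks (assumption)
MAX_CHUNK = 64 * 1024      # force a cut eventually (assumption)

def chunks(data: bytes):
    """Yield content-defined chunks of `data`.

    A rolling "gear"-style hash is updated byte by byte; a boundary is
    declared whenever its low bits are all zero. Because that decision
    depends only on the bytes near the boundary, not on absolute file
    offsets, inserting data at the front shifts nothing downstream: the
    later chunks come out byte-identical and hash the same.
    """
    start = 0
    h = 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFFFFFFFFFF
        length = i - start + 1
        if (length >= MIN_CHUNK and (h & AVG_MASK) == 0) or length >= MAX_CHUNK:
            yield data[start:i + 1]
            start = i + 1
            h = 0
    if start < len(data):
        yield data[start:]


if __name__ == "__main__":
    import hashlib, os
    original = os.urandom(1 << 20)          # 1 MiB of random data
    prefixed = os.urandom(100) + original   # same data with 100 new bytes up front
    a = {hashlib.sha256(c).hexdigest() for c in chunks(original)}
    b = {hashlib.sha256(c).hexdigest() for c in chunks(prefixed)}
    print(f"shared chunk hashes: {len(a & b)} of {len(a)}")
```

Running the demo at the bottom should show that nearly all chunk hashes are shared between the original and the prefixed copy, which is exactly the dedup-friendly behavior described above.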
UPDATE: I found the old discussion where I learned about this: