Hi all, first of all thanks for your amazing work. I have set up a private ipfs-cluster (go-ipfs 0.7.0 + Badger) with 300 GB of storage, and it is almost full of pinned images (around 80%). I have unpinned the oldest images and run the garbage collector to reclaim disk space as explained in [1] and [2]. However, I haven't been able to free any disk space, no matter how many images I unpin. I also ran local tests with a standalone ipfs node on a small virtual disk; there I can sometimes free some space, but the behavior seems random, I cannot reproduce it, and eventually the disk fills up anyway. Am I missing something important here?
Yeah, it seems this is fixed in Badger v2, but go-ipfs still uses v1 (the two are not compatible). This is not cluster-specific, as cluster just calls ipfs repo gc (unless the problem is disk space used by cluster's config folder).
There is a manual way of running GC: rm -rf the datastore and then re-add the content or re-sync it from another node.
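A hedged sketch of that manual approach, assuming a default go-ipfs setup (the IPFS_PATH location and the badgerds subdirectory name are assumptions; check your own config before deleting anything):

```shell
# Manual "GC" for a Badger-backed go-ipfs node: wipe the datastore
# and let cluster re-pin the content. Paths below are assumptions
# for a default installation.
export IPFS_PATH="${IPFS_PATH:-$HOME/.ipfs}"
# 1. Stop the go-ipfs daemon first (however it is supervised), e.g.:
#      systemctl stop ipfs
# 2. Wipe the Badger datastore:
#      rm -rf "$IPFS_PATH/badgerds"
# 3. Restart the daemon. As noted later in the thread, cluster will
#    notice the missing pins and re-pin the content from other peers.
```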
So using IPFS with Badger to store many files does not seem very convenient right now. Another workaround I tried as a test was to make a backup with badger and then restore it:
badger backup --dir=./db --backup-file=./badger.bak
badger restore --dir=./db_new --backup-file=./badger.bak
It effectively reduced the size of the database.
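To complete the workaround, the compacted datastore still has to be swapped into place before restarting the node. A sketch of that step, using throwaway directories so the commands can be run as-is (replace the paths with your real ones, and make sure the daemon is stopped first):

```shell
# Simulate the swap with stand-in directories; in practice ./db would be
# the live Badger datastore and ./db_new the freshly restored copy.
cd "$(mktemp -d)"
mkdir db db_new        # stand-ins for the old and the compacted stores
mv db db_old           # keep the bloated copy as a fallback
mv db_new db           # the compacted copy becomes the live datastore
# now only db and db_old exist; delete db_old once the node starts cleanly
```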
If you delete anything manually from the go-ipfs node, cluster notices and re-pins it, because it expects that content to be pinned.
ipfs-cluster-ctl sync was removed because the tracking mechanism now works well in the background. Before, it was a patch to manually fix things when they got out of sync.