Prevent GC when adding large file

When using the IPFS Cluster REST API to add a large file, one has to prevent garbage collection of that file while it is being added.

From this page and the IPFS Cluster REST API one, I’d say we should use the “local=true” and “allocations=true” query parameters (the latter presumably coming from an --allocations flag, though I’m not sure that flag really exists) to force the pin to be allocated to the same peer ID on which the content is being added.

Is that so?
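For reference, here is a sketch of the request I have in mind. The listen address is the assumed default for the cluster REST API, and the parameter name comes from the docs linked above:

```python
from urllib.parse import urlencode

# Assumed default cluster REST API address; adjust to your deployment.
base = "http://127.0.0.1:9094/add"

# local=true: add the content through the local peer.
params = {"local": "true"}
url = f"{base}?{urlencode(params)}"
print(url)  # http://127.0.0.1:9094/add?local=true
```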

Why? IPFS-Cluster does that for you:

$ ipfsc add main.go 
added QmVk8b4UCxTFfmL3Dj4yRmUKnRK7Ykj46Qs2TQWXRnnjcK main.go
$ ipfsc pin ls | grep QmVk8b4UCxTFfmL3Dj4yRmUKnRK7Ykj46Qs2TQWXRnnjcK
QmVk8b4UCxTFfmL3Dj4yRmUKnRK7Ykj46Qs2TQWXRnnjcK |  | PIN | Repl. Factor: -1 | Allocations: [everywhere] | Recursive | Metadata: no | Exp: ∞ | Added: 2022-03-09 16:09:45

NVM I have seen your other post.

The best solution is to NOT run the daemon with --enable-gc (note that GC is opt-in: ipfs daemon --enable-gc), and instead run ipfs-cluster-ctl repo gc from time to time, when you know it is safe.
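For example, a crontab entry along these lines could automate that (a sketch: the schedule is arbitrary, and the command is the one named above):

```
# Run cluster-wide GC every Sunday at 03:00, a window with no large adds
0 3 * * 0  ipfs-cluster-ctl repo gc
```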


According to Adding and pinning - Pinset orchestration for IPFS

Adding with --local with positive replication-factors may mean that the content is added on a peer that is finally not allocated to store the content and will not end up pinning it. Sometimes, when this is relevant, it can be worked around using the --allocations flag to force the pin to be allocated to the same peer ID on which it is being added.


That piece of documentation is wrong, as the behavior changed in the last release. Adding with local=true will now result in the local peer being part of the allocations. I have updated the documentation.

Also, the allocations parameter takes a comma-separated list of peer IDs, as in ?allocations=<peerid1>,<peerid2>; it is not a boolean, so allocations=true is not valid.
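So, with hypothetical peer IDs, building such a query could look like this (the IDs are illustration only; note the comma gets percent-encoded as %2C, which the server decodes):

```python
from urllib.parse import urlencode

# Hypothetical peer IDs; allocations takes a comma-separated list
# of peer IDs, not a boolean.
peers = ["12D3KooWPeerOne", "12D3KooWPeerTwo"]
query = urlencode({"local": "true", "allocations": ",".join(peers)})
print(query)  # local=true&allocations=12D3KooWPeerOne%2C12D3KooWPeerTwo
```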

All that said, this does not necessarily prevent any GCing; it just potentially leaves fewer things behind to be GC’ed (as the blocks will be added to a peer that becomes a pinner afterwards).