Workaround to deal with go-ipfs GC

We have a go-ipfs node where new data (file objects as well as IPLD data) gets pinned every few minutes via the RPC API. The data pinned to the node grows at a rate of about 15 GB per day. We have disabled auto-GC because it blocked pin requests every time GC ran, which led to a lot of timeouts.
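
Roughly, each of our pin requests looks like the following minimal Go sketch against the RPC API (this assumes the API listens on the default 127.0.0.1:5001; the CID is a placeholder):

```go
// Minimal sketch: pinning a CID through the go-ipfs RPC API.
// Assumes the node's API is on the default 127.0.0.1:5001.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

const apiURL = "http://127.0.0.1:5001/api/v0"

func pin(cid string) error {
	// The RPC API expects POST requests; "pin/add" pins recursively by default.
	resp, err := http.Post(apiURL+"/pin/add?arg="+url.QueryEscape(cid), "", nil)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("pin failed: %s", body)
	}
	fmt.Printf("pinned %s: %s\n", cid, body)
	return nil
}

func main() {
	if err := pin("QmExampleCID"); err != nil { // placeholder CID
		panic(err)
	}
}
```
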
We will keep archiving data older than 30 days to Filecoin/web3.storage. This can be done via “ipfs dag export”.
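
A minimal sketch of that export step over the RPC API (same assumed default address; the CID and output path are placeholders), producing a CAR file that can then be uploaded:

```go
// Sketch of the archival step: stream a pinned DAG out of the node as a CAR
// file via the RPC equivalent of "ipfs dag export", ready to be pushed to
// Filecoin/web3.storage.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
)

const apiURL = "http://127.0.0.1:5001/api/v0"

func exportCAR(cid, outPath string) error {
	resp, err := http.Post(apiURL+"/dag/export?arg="+url.QueryEscape(cid), "", nil)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("dag export failed with status %s", resp.Status)
	}
	out, err := os.Create(outPath)
	if err != nil {
		return err
	}
	defer out.Close()
	// The response body is the CAR stream; copy it straight to disk.
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	if err := exportCAR("QmExampleCID", "archive.car"); err != nil { // placeholders
		panic(err)
	}
}
```
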
Once we have exported the older data to Filecoin, we no longer want to keep those pins in our IPFS node and would like to reuse the same disk space to pin newer data. But as there will be many pins (in the millions), unpinning and freeing space via manual GC has the following issues (a rough sketch of the unpin and GC calls follows the list):

  1. When manual GC is run via the “repo gc” command, pin operations on the RPC API are blocked or fail.
  2. As there are many objects to be cleaned up, it takes a lot of time. In a test where we had unpinned close to a million entries, GC ran for an hour without reclaiming any space.
    We also had to stop our main processing, as the app’s IPFS add operations were failing.
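
For illustration, the unpin-and-GC flow described above is roughly the following sketch against the RPC API (same assumed node address; the CIDs are placeholders for entries older than 30 days):

```go
// Sketch of the cleanup that causes the problem: unpin the archived CIDs and
// then trigger a manual GC via the RPC API (the equivalent of "ipfs repo gc").
// While the GC call runs, the node holds the GC lock and pin/add requests block.
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

const apiURL = "http://127.0.0.1:5001/api/v0"

func unpin(cid string) error {
	resp, err := http.Post(apiURL+"/pin/rm?arg="+url.QueryEscape(cid), "", nil)
	if err != nil {
		return err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unpin %s failed: %s", cid, resp.Status)
	}
	return nil
}

func runGC() error {
	// Blocks until GC finishes; with millions of freshly unpinned objects this
	// is the long-running call during which other API requests time out.
	resp, err := http.Post(apiURL+"/repo/gc", "", nil)
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

func main() {
	archived := []string{"QmExampleCID1", "QmExampleCID2"} // placeholder CIDs
	for _, c := range archived {
		if err := unpin(c); err != nil {
			fmt.Println(err)
		}
	}
	if err := runGC(); err != nil {
		panic(err)
	}
}
```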

Is there any workaround for this problem?
Can we leverage ipfs-cluster to run multiple nodes and address this use case somehow?

Note: this archival process happens once a week, and during that window we still need the IPFS RPC API to function so that new data can be pinned.


One workaround I can think of is to run two nodes and direct pin requests between them, so that while one node is doing GC, the other node can handle pin requests.
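
A rough sketch of what I mean, assuming two nodes with API addresses on ports 5001 and 5002 and a hypothetical way of knowing which node is currently in GC:

```go
// Two-node workaround sketch: send each pin request to whichever go-ipfs node
// is not currently running GC. Node addresses and the gcNode index (set by
// whatever schedules GC) are assumptions for illustration.
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

var nodes = []string{
	"http://127.0.0.1:5001/api/v0", // node A (assumed address)
	"http://127.0.0.1:5002/api/v0", // node B (assumed address)
}

// pin sends the pin request to any node except the one at index gcNode
// (use -1 when no node is garbage collecting).
func pin(cid string, gcNode int) error {
	for i, base := range nodes {
		if i == gcNode {
			continue // skip the node that is currently garbage collecting
		}
		resp, err := http.Post(base+"/pin/add?arg="+url.QueryEscape(cid), "", nil)
		if err != nil {
			continue // try the other node
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
	}
	return fmt.Errorf("no node available to pin %s", cid)
}

func main() {
	// Pretend node A (index 0) is running GC right now.
	if err := pin("QmExampleCID", 0); err != nil { // placeholder CID
		fmt.Println(err)
	}
}
```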

I don’t have hands-on experience with ipfs-cluster, but I think it may be a good solution to this problem as well. You can run GC on specific nodes only; otherwise it runs GC “sequentially” on all nodes in the cluster. It might be the easier option here.


ipfs-cluster already implements this: ipfs-cluster-ctl ipfs repo gc will cycle GC through the nodes, but never run it on all of them concurrently.
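
For example, a weekly archival job could just shell out to that command. This is only a sketch: it assumes ipfs-cluster-ctl is on PATH and talking to a local cluster peer, and you should check ipfs-cluster-ctl --help for the exact subcommand in your version.

```go
// Sketch of a weekly job that triggers the cluster-wide GC mentioned above,
// so GC cycles peer by peer while the remaining peers keep serving pin requests.
package main

import (
	"fmt"
	"os/exec"
)

func clusterGC() error {
	// Command as given in the reply above; verify the exact form for your
	// ipfs-cluster-ctl version before relying on it.
	out, err := exec.Command("ipfs-cluster-ctl", "ipfs", "repo", "gc").CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := clusterGC(); err != nil {
		panic(err)
	}
}
```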


Thanks a lot @Jorropo, this should help with the issue.
I will test this out on an ipfs-cluster setup and get back in case I face any issues.