My node chews through a lot of data, and we never pin anything. The .ipfs directory keeps growing: I first tried running GC at around 125 GB, and it's now around 450 GB, bumping up against my partition size, so I'm running out of time. GC doesn't seem to work, so is there a safe alternative, like manually deleting the database, that ideally doesn't require restarting the IPFS daemon? If I have to restart I will, but I'd prefer to avoid it.
I'd appreciate any specific steps people have taken here; judging by the GitHub issues, this seems to be a known problem.
Are you actually running with GC? By default GC is not run, since it pauses the node for a while. You need to either run `ipfs repo gc` manually or start the daemon with `ipfs daemon --enable-gc`. (Note that the second option only triggers GC at the interval set in your config, e.g. every hour, so between those GC runs your node can still overshoot the limit.)
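For reference, the periodic GC is driven by the `Datastore` section of the config. A minimal sketch using `ipfs config` (the size, interval, and watermark values below are illustrative placeholders, not recommendations):

```
# Run GC once, right now (works against a running daemon):
ipfs repo gc

# Tune the periodic GC that --enable-gc uses; values here are examples:
ipfs config Datastore.StorageMax 400GB              # soft cap on repo size
ipfs config Datastore.GCPeriod 1h                   # how often GC runs
ipfs config --json Datastore.StorageGCWatermark 90  # GC at 90% of StorageMax

# Restart the daemon with GC enabled so the settings take effect:
ipfs daemon --enable-gc
```

Keep in mind `StorageMax` is a soft limit: the repo can grow past it between GC runs, so leave headroom below your actual partition size.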
I have GC enabled in my systemd unit file, but it never actually seems to GC, because the data partition just keeps growing. I've also tried calling GC manually from the command line; it doesn't seem to pause anything.
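For context, the relevant part of the unit file looks roughly like this (the user, paths, and binary location are specific to my setup):

```
[Unit]
Description=IPFS daemon
After=network.target

[Service]
User=ipfs
Environment=IPFS_PATH=/data/ipfs
ExecStart=/usr/local/bin/ipfs daemon --enable-gc
Restart=on-failure

[Install]
WantedBy=multi-user.target
```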
I just upgraded to 0.10.0; all of these issues were on the previous version.
To clear it out right now I just deleted everything and ran `ipfs init` again, but this time with the repo on a ZFS mount and a snapshot taken while it's empty, so next time I can roll back instead of deleting by hand.
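Roughly what that looks like, assuming a pool named `tank` and the repo at `/data/ipfs` (both placeholders):

```
# Dataset for the repo (pool/dataset name and mountpoint are examples)
zfs create -o mountpoint=/data/ipfs tank/ipfs
export IPFS_PATH=/data/ipfs
ipfs init

# Snapshot the freshly initialized, empty repo
zfs snapshot tank/ipfs@empty

# Later, with the daemon stopped, reset the repo to the empty state:
zfs rollback tank/ipfs@empty
```

One caveat: rolling back also discards any config changes made after the snapshot, so it's worth taking the snapshot only once the config is in its final state.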