We have an IPFS server in production with badgerds (private swarm). We have noticed very high memory usage. In the systemd unit file we set MemoryMax to 7G, and the daemon gets killed by the OOM killer at least once a day.
We believe the problem is badgerds, so we would like to migrate back to the default file-based datastore (flatfs). Is there a way to do this? Also, is this level of memory usage normal? If not, what information would help diagnose the problem?
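For reference, this is roughly how we confirm the kills (the service name `ipfs` is just what we happen to use):

```sh
# Check the configured limit against current usage (cgroups v2 properties)
systemctl show ipfs --property=MemoryMax,MemoryCurrent

# Look for oom-kill events recorded against the unit
journalctl -u ipfs | grep -i oom
```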
One option would be to create a second IPFS node on the same machine, initialized with the default datastore, then retrieve all of the content from the local badger node using ipfs pin add (for example; see the sketch below). Once the migration is complete, shut down and remove the badger IPFS node.
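A rough sketch of what that could look like. The paths and ports here are made up; adjust them for your setup, and note I'm assuming the badger node's API is on the default 5001:

```sh
# 1. Initialize a second repo; a plain `ipfs init` uses the default flatfs
#    datastore. Copy the swarm key so it can join your private swarm.
export IPFS_PATH=/srv/ipfs-flatfs
ipfs init
cp /srv/ipfs-badger/swarm.key "$IPFS_PATH/"

# 2. Move the new node onto non-conflicting ports so both daemons can run
#    side by side on the same machine.
ipfs config Addresses.API /ip4/127.0.0.1/tcp/5002
ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8081
ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4002"]'
ipfs daemon &

# 3. Re-pin everything from the badger node into the flatfs node.
#    (Make sure the two daemons are peered first, e.g. via `ipfs swarm connect`.)
ipfs --api /ip4/127.0.0.1/tcp/5001 pin ls --type=recursive --quiet \
  | xargs -n1 ipfs --api /ip4/127.0.0.1/tcp/5002 pin add

# 4. Once pinning finishes, stop the badger daemon and remove its repo.
```

Since both daemons are on the same box the transfer stays local, and the new flatfs repo ends up with exactly the content that was pinned on the badger node.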
I think badger’s memory usage grows with how large the database is, so it might be normal (?).
Which version of IPFS are you running? How large is your IPFS repository (ipfs repo stat --human)?
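For example:

```sh
ipfs version
ipfs repo stat --human   # NumObjects / RepoSize give a sense of the datastore size
```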
Thanks for your quick reply @leerspace. We were running 0.4.18; I upgraded to 0.4.19 yesterday to see if there was any difference, but we’re seeing the same thing.
That doesn’t seem very large. It might be worth opening an issue on GitHub specifically for the high memory usage (I thought there was already one, but I can’t find it).
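If you do file one, a heap profile would give the developers something concrete to look at. go-ipfs exposes Go's pprof handlers on the API address, so something like this should capture dumps you can attach (assuming the default API port of 5001):

```sh
# Grab heap and goroutine dumps from the running daemon's pprof endpoints
curl -o ipfs.heap 'http://127.0.0.1:5001/debug/pprof/heap'
curl -o ipfs.goroutines 'http://127.0.0.1:5001/debug/pprof/goroutine?debug=2'
```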