I’m running the IPFS daemon, and its memory footprint grows over time (roughly logarithmically). The number of peers also grows, so my intuition is that the daemon opens connections to every peer it finds, which eventually eats up all the RAM.
Is my intuition correct? How could I prevent that? Run a bigger instance and hope for a bounded number of peers? Am I missing something?
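For what it’s worth, this is how I’ve been watching the two numbers together (assuming a Linux host and the daemon process named `ipfs`; both commands are standard):

```sh
# number of open swarm connections
ipfs swarm peers | wc -l

# resident memory (RSS, in KiB) of the daemon process
ps -o rss= -C ipfs
```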
I think right now the workaround is to restart the daemon regularly on low-memory hosts. However, I think memory-reduction features are in the works and close to release. I’m afraid I’ve lost track. Do you know, @kubuxu?
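To make that workaround concrete, here is a crude shell watchdog along those lines. The 1 GiB threshold, the 60 s poll interval, and the bare `ipfs daemon` invocation are my own example choices, not anything the project recommends; adapt them to your init setup:

```sh
#!/bin/sh
# Crude watchdog: restart the daemon whenever its RSS exceeds ~1 GiB.
while true; do
  rss=$(ps -o rss= -C ipfs | head -n 1)    # RSS in KiB
  if [ -n "$rss" ] && [ "$rss" -gt 1048576 ]; then
    pkill -x ipfs                          # stop the daemon...
    sleep 5
    ipfs daemon >/tmp/ipfs.log 2>&1 &      # ...and start it again
  fi
  sleep 60
done
```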
The biggest issues around memory usage right now are that we don’t close connections, and that each connection uses yamux multiplexing (and yamux is quite the memory hog). We’re working on fixing both; patches for both problems are already written and should be integrated in either the next release or the one after that.
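Once the connection-closing work ships, capping connection counts via the connection manager is the knob to look for. A sketch using `ipfs config --json`; the `Swarm.ConnMgr` key and these water-mark values are taken from later go-ipfs releases, so verify the key and defaults against the config docs for your version:

```sh
# Configure the connection manager, once your release includes it.
ipfs config --json Swarm.ConnMgr '{
  "Type": "basic",
  "LowWater": 600,
  "HighWater": 900,
  "GracePeriod": "20s"
}'
# restart the daemon for the change to take effect
```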
One thing that’s really useful to us here is a detailed description of your workload, along with stack dumps from when memory usage is high (via curl 'localhost:5001/debug/pprof/goroutine?debug=2'). This helps us pinpoint the parts of the codebase that need work.
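Concretely, something like this captures both a goroutine dump and a heap profile to attach to a report (5001 is the default API port; adjust if you’ve changed it):

```sh
# Capture a goroutine dump and a heap profile while memory is high.
curl -s 'http://localhost:5001/debug/pprof/goroutine?debug=2' > goroutines.txt
curl -s 'http://localhost:5001/debug/pprof/heap' > heap.pprof

# Optional: inspect the heap profile locally (point pprof at your ipfs binary).
go tool pprof "$(which ipfs)" heap.pprof
```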
I’ve also noticed that 0.4.10 puts a bigger strain on the system. I’m not running out of memory (yet), but CPU usage and the like seemed a little better on 0.4.9.