Hey, I’m glad to hear you are using cluster at that scale!
Is the memory spike coming from the cluster process or from the ipfs one (or is it simply general RAM usage on the box)?
In any case, I would assume the increase corresponds to the list of 400,000 items being held in memory by cluster; this is a known problem, and a bad one.
The good news: we are switching to the stateless pintracker in the next release, which does not need to do this (state sync / ipfs sync) at all and should improve things significantly.
Are the 2.5GB spikes something you can live with until then?
Thank you for the fast response. We believe in ipfs-cluster because it completely fits our needs. I played around a bit, and when I effectively disable ipfs_sync_interval (by setting it to a high value) the spikes are gone.
In the screenshot below, the yellow line is the ipfs-cluster process and the blue one is the ipfs process, before and after the config adjustment (I set ipfs_sync_interval to 45m).
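For reference, this is roughly the relevant excerpt of my service.json after the change (only the changed field shown; I believe ipfs_sync_interval sits in the top-level cluster section of the 0.10.0 config, so treat the surrounding structure as approximate):

```json
{
  "cluster": {
    "ipfs_sync_interval": "45m"
  }
}
```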
I also tried removing the maptracker from the config so that only the stateless tracker is left, but that didn't help.
When do you plan to release the new version? (We're currently on 0.10.0.) I can also offer to pre-test things. The number of 400,000 pins will not be the end in our case.
I want to release 0.11.1 in a couple of weeks if there are no issues (there will be at least one release candidate to test as well).
In order to use the stateless pintracker you need to launch the daemon with ipfs-cluster-service daemon --pintracker stateless. I realize that, because the stateless one was very experimental, this option is hidden from --help…
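In other words, something like this (assuming ipfs-cluster-service is on your PATH and the rest of your setup is unchanged):

```sh
# start the cluster daemon with the (hidden) stateless pintracker flag
ipfs-cluster-service daemon --pintracker stateless
```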