Memory leaks in 0.24

Hi,
I'm encountering the same issue you describe in this post.
Likewise, I restart the virtual machine running Kubo every few days to keep the daemon from being killed by oom-kill due to its constantly increasing memory usage.
Have you managed to fix this issue in your case?
Regards.

No solution for me yet. Still restarting the daemon every other day.
After a full machine restart it no longer eats all the memory within two days, but it went back to filling up 8 GB of memory in about a week.

Any chance you can open an issue in the Kubo repo for this? That would be the best way to track this issue.

Hi,

I'm seeing the same sort of memory leaks on v0.24. I don't have the same kind of graphs as the OP, but I get the same oom-kill events every couple of weeks.

I generated a diag profile zip file during a high-memory period (along with the output of `ipfs swarm peers` and `ipfs stats provide` from the same time), and I'd be happy to share it in a GitHub issue. Before I do, though, can a developer confirm that the .zip file doesn't contain any sensitive data? I don't really know what I'm looking at, so I didn't want to upload it anywhere without being sure of that first. Thanks!

They don't contain sensitive data.

Anyone with high memory issues: we need memory profiles, collected via `ipfs diag profile`. `go tool pprof` will usually show the culprits.
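For anyone unsure how to collect and read a profile, here is a rough sketch of the workflow. The exact file names inside the zip (e.g. `heap.pprof`) may vary between Kubo versions, so adjust paths after extracting:

```shell
# Run this while the daemon's memory usage is high.
# It writes an ipfs-profile-*.zip containing heap, goroutine,
# and CPU profiles alongside version info.
ipfs diag profile

# Extract the bundle and summarize the heap profile:
# -top lists the functions holding the most live memory.
unzip ipfs-profile-*.zip -d profile
go tool pprof -top profile/heap.pprof

# Or browse an interactive flame graph in the browser.
go tool pprof -http=:8080 profile/heap.pprof
```

Attaching the whole zip to the GitHub issue is usually more useful than a `-top` excerpt, since developers can then inspect goroutine counts and allocation sites themselves.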