I have the following scenario: there are files I need to pin and serve with as much uptime as possible, so I must set up a machine that permanently runs go-ipfs in the background. Since I don’t currently have a dedicated server, I’ve considered doing this on my mother’s computer. However, her PC has few system resources, often running at 100% CPU and operating close to its RAM limit. She already struggles to browse websites properly, and if anything else takes up system resources she’ll barely be able to use a web browser.
While go-ipfs seems to keep its CPU usage in check, there’s a huge issue with the amount of memory it takes up: After just 10 minutes of uptime, the process reaches roughly 500MB of RAM on my machine. That is not something her computer could handle: I’d need to limit IPFS to about 100MB of RAM there.
Are there any options to limit the amount of memory IPFS may use, imposing a hard limit past which the daemon will un-cache files? Also, is the daemon aware of system resources, so that if go-ipfs detects system memory is low, it automatically knows how much it can take up? Thanks.
At the moment, no. Really, go-ipfs should use no more than 50-100MiB of memory and we should have a low-power mode that uses more like 25-50MiB of memory. However, we aren’t there yet.
In the meantime, you can play with the connection killing settings: Swarm.ConnMgr.LowWater and Swarm.ConnMgr.HighWater. These allow one to limit the number of connections your peer maintains and therefore the amount of memory it uses.
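If you want to try tightening those, here’s a minimal sketch; the watermark values are illustrative guesses for a low-memory machine, not tested recommendations:

```
# Keep the peer count small on a memory-constrained machine.
# The daemon starts trimming connections once it exceeds HighWater,
# aiming back down toward LowWater.
ipfs config --json Swarm.ConnMgr.LowWater 50
ipfs config --json Swarm.ConnMgr.HighWater 100

# Restart the daemon so the new limits take effect.
ipfs shutdown
ipfs daemon &
```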
We also currently have an issue where the peerstore (the service that tracks the addresses of your peers) grows unbounded. We’re working on a couple of fixes (one is to put it on disk).
No malware, just an old computer: it has 4GB of RAM, and because of a weird BIOS issue it only detects and uses 3GB of that. Firefox quickly uses over 1GB, and with other system processes running it’s always just 500MB away from filling up. We don’t use Windows any more, just Linux (openSUSE Tumbleweed x64 with KDE).
I’ve definitely had go-ipfs climb up to this memory usage: When the daemon first starts it barely uses anything, just a measly 15MB… but as time goes by, it slowly grows until it seems to stop at roughly 480MB.
Could this be because I’ve pinned and am seeding some large video files? The content I’m serving is my DTube videos, so perhaps if too many peers request them at once, IPFS caches too much data.
Thank you for the advice! I’ll try those options if IPFS’s memory usage grows the same way on her computer.
Also, you should probably run the service as a dedicated ipfs user, to protect your mom’s account. Your own account (via ssh?) will still be able to communicate via the API endpoints on localhost.
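As a rough sketch of that setup on a systemd distro (like the openSUSE install mentioned above); the unit file, repo path, and binary location below are illustrative assumptions, not the one true layout:

```
# Create a dedicated system user with its own home directory and IPFS repo.
sudo useradd --system --create-home --home-dir /var/lib/ipfs ipfs
sudo -u ipfs -H ipfs init

# Illustrative systemd unit so the daemon always runs as that user:
sudo tee /etc/systemd/system/ipfs.service > /dev/null <<'EOF'
[Unit]
Description=go-ipfs daemon
After=network.target

[Service]
User=ipfs
Environment=IPFS_PATH=/var/lib/ipfs/.ipfs
ExecStart=/usr/bin/ipfs daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable --now ipfs
```

The API listens on 127.0.0.1:5001 by default, so other local accounts can still reach the daemon through it.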
So, part of this is memory fragmentation. That is, IPFS may only be using 200-300MiB, but the OS can’t reclaim some of the free memory because it’s interleaved with the used memory. Unfortunately, Go makes manual memory management rather difficult, so the fragmentation is unlikely to improve.
However, a large portion of that is probably the peerstore. We had a bug for a while where peers behind NATs were advertising every single ephemeral port they had used since they started. We’ve fixed that bug but still have a lot of nodes advertising these massive address lists. We’re working on a couple of mitigations, but they’re still in progress.
This is not really an answer to your question about memory usage, but there are several ways to run an ipfs node at zero cost.
One way is to register for the free tier of various cloud providers. For example, AWS offers a t2.micro instance in its free tier for one year, which is big enough to run ipfs easily (but be careful to put proper limits in place so they don’t bill you for excessive bandwidth usage).
Another option that is not free, but very cheap, is to run ipfs on a Raspberry Pi. A Pi 3 runs ipfs just fine and is really easy to set up. And since it consumes very little power, it is actually much nicer as a 24/7 ipfs node than a powerful development machine that draws a lot of power…
I’m using ulimit. The limit only seems to hold for a day or so, but it keeps my Mac from immediately falling into heavy swapping. I think I’ll eventually need a cron job that stops and restarts the daemon daily. I left my Mac unattended for a week and the GUI session quit, which killed my ipfs shell.
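In case it helps, here’s roughly what that looks like as a wrapper script plus a cron entry. The cap, schedule, and path are placeholders; note that `ulimit -v` caps virtual memory (in KB), and a Go process can abort outright if the cap is too tight, so it needs tuning:

```
#!/bin/sh
# restart-ipfs.sh -- illustrative daily restart wrapper.
ipfs shutdown 2>/dev/null   # stop the old daemon if one is running
sleep 5                     # give it a moment to release the repo lock
ulimit -v 1048576           # cap virtual memory at ~1GB (value in KB)
ipfs daemon &

# crontab entry (illustrative): run the wrapper every day at 04:00
#   0 4 * * * /home/me/restart-ipfs.sh
```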
ulimit still feels like a workaround. Ideally, the IPFS daemon would take both a disk and a memory limit, and only pin or cache content while it stays within them.
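For what it’s worth, the disk half of that already exists in go-ipfs, even if the memory half doesn’t: Datastore.StorageMax sets a soft cap on the repo size, and running the daemon with garbage collection enabled frees unpinned cached blocks (pinned content is never collected). The 10GB figure below is just an example:

```
# Soft cap on the repo size.
ipfs config Datastore.StorageMax 10GB

# Run the daemon with automatic garbage collection enabled,
# so cached (unpinned) blocks are periodically freed.
ipfs daemon --enable-gc
```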