I start the ipfs daemon, then follow the tail of the log:
joe@instance-1:~$ nohup ipfs daemon > ipfslog.txt&
[1] 4903
joe@instance-1:~$ nohup: ignoring input and redirecting stderr to stdout
joe@instance-1:~$ tail -f ipfslog.txt
Swarm listening on /ip4/10.142.0.2/tcp/4001
Swarm listening on /ip4/127.0.0.1/tcp/4001
Swarm listening on /ip6/::1/tcp/4001
Swarm listening on /p2p-circuit/ipfs/QmXAFziajn3R4ePCUjeeWv621xzpbt1zSHNxTz8kde17Fg
Swarm announcing /ip4/10.142.0.2/tcp/4001
Swarm announcing /ip4/127.0.0.1/tcp/4001
Swarm announcing /ip6/::1/tcp/4001
API server listening on /ip4/0.0.0.0/tcp/5001
Gateway (readonly) server listening on /ip4/0.0.0.0/tcp/8080
Daemon is ready
Then in another window:
joe@instance-1:~$ ps aux|grep ipfs
joe 4903 1.7 5.4 316292 32944 pts/2 Sl 16:29 0:00 ipfs daemon
joe 4914 0.0 0.1 5876 744 pts/2 S+ 16:29 0:00 tail -f ipfslog.txt
joe 4916 0.0 0.1 12784 1020 pts/0 S+ 16:29 0:00 grep ipfs
Walk away for a while, then do it again:
joe@instance-1:~$ ps aux|grep ipfs
joe 4914 0.0 0.0 5876 76 pts/2 S+ 16:29 0:00 tail -f ipfslog.txt
joe 5109 0.0 0.1 12784 928 pts/0 S+ 16:55 0:00 grep ipfs
The daemon has stopped running, but the last line of the log still says "Daemon is ready"?
Why did it die?
Does dmesg | egrep -i 'killed process'
or egrep -i -r 'killed process' /var/log/
output anything?
Yes, I got the following:
joe@instance-1:~$ sudo dmesg | egrep -i 'killed process' | grep 25516
[26450291.910378] Killed process 25516 (ipfs) total-vm:784564kB, anon-rss:222044kB, file-rss:0kB, shmem-rss:0kB
What does that mean?
That suggests that you're running out of memory and the OOM killer killed your IPFS process.
How much memory do you have on the machine you're trying to run IPFS on? If it has limited memory, then you might need to tune some settings in order to reduce IPFS' memory consumption.
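If you're not sure, on most Linux systems something like this will show the total and available memory:
free -h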
Thanks.
I am running a free Google Compute Engine f1-micro instance 0.6 GB memory.
How do I tune ipfs to use less memory?
How large a memory do I need to run a stable ipfs node?
Fitting the OS, other applications, and IPFS into 0.6 GB of memory might be difficult.
I think this depends on a variety of factors, but the smallest machine I run a stable IPFS node on is a cheap laptop with 8 GB of RAM. It's probably possible to go smaller without needing to tune anything, but I don't know what the floor is. I'm also currently using the badger datastore on all of my nodes, which I think might come with higher memory utilization.
Some of the things that would reduce memory usage overall aren't implemented yet, but here are some of the options I'm aware of that might help.
- Reduce connections by lowering the LowWater and HighWater values in the config (see the example commands after the config snippet below). They default to 600 and 900, but you could probably cut them in half (or more), and if that causes problems then nudge them back upwards until the problems go away.
"Swarm": {
"AddrFilters": null,
"DisableBandwidthMetrics": false,
"DisableNatPortMap": false,
"DisableRelay": false,
"EnableRelayHop": false,
"ConnMgr": {
"Type": "basic",
"LowWater": 600,
"HighWater": 900,
"GracePeriod": "20s"
}
},
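As a rough sketch (the exact numbers are just a guess for a 0.6 GB machine, so treat them as a starting point), you should be able to lower those limits with the ipfs config command:
ipfs config --json Swarm.ConnMgr.LowWater 100
ipfs config --json Swarm.ConnMgr.HighWater 200
# restart the daemon afterwards so the new limits take effect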
- Try using the dht client option when running the daemon. I think this is mainly a way to reduce bandwidth, but I suspect (though I haven't tested it) that it might also reduce memory usage. I think this setting basically turns your node into the equivalent of a "leech" in the BitTorrent world, so I'm not sure if you can "seed" content from your node with this option enabled.
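If you want to try it, the daemon can be started in DHT client mode with the --routing flag (assuming your go-ipfs version supports it), e.g. in your case:
nohup ipfs daemon --routing=dhtclient > ipfslog.txt &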
P.S. Because you're running on a cloud machine, it might be worth using the server config profile to prevent it from trying to connect to other IPs on the local network, which some hosting providers see as malicious.
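If I remember correctly, the server profile can be applied to an existing repo like this (restart the daemon afterwards so it takes effect):
ipfs config profile apply server
# or, when initializing a brand-new repo:
ipfs init --profile server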
Thanks. I am giving up for now; I will come back and play with it again in a year or two.