High Bandwidth and CPU usage from go-ipfs

I installed go-ipfs on a Gentoo system and shared a single small image file on the network.

Whenever the ipfs service is started, the ipfs process uses about 20% of my CPU and around 10 kbps as displayed by nethogs. The file is only 2 MB, but this usage never goes down; it just keeps going, and eventually my network connectivity in Firefox slows down a lot. I'm not sure why that would happen from 10 kbps, since I get a lot more traffic from torrents and video streaming, but maybe the ISP finds the traffic weird…

Any ideas what could be happening?

IPFS uses a DHT (Distributed Hash Table) to find content.

The background traffic you see is other peers asking you "hey, I'm looking for XYZ, do you know about it?"; your node then does some keyspace math and tells them about "closer" peers. It is not related that much to the file you are hosting.

You can use:

ipfs config profile apply lowpower

Or change Routing.Type to dhtclient in the config to disable that.
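For reference, a sketch of both options from the command line (assuming a stock go-ipfs install; run these while the daemon is stopped and then restart it):

```shell
# Apply the lowpower profile (lowers connection limits and switches
# routing to dhtclient, among other things)
ipfs config profile apply lowpower

# Or only switch the routing mode, so your node queries the DHT
# without serving it to others
ipfs config Routing.Type dhtclient

# Check the current value
ipfs config Routing.Type
```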

thank you, that fixed the issue (for now; I'll come back in a day or so to confirm, since it always took a while to ramp up to the stage where everything got bricked).

Any idea why lowpower wouldn't be the default setting, if the default causes issues on desktop computers? I am not aware that my set-up is particularly low-powered. As far as I can tell most of the trouble is with the router (which, as I found out in the meantime, can be crashed if IPFS is left running long enough).

The more bullet-proof way (as far as I know) is to run IPFS via "trickle" so it doesn't even have the physical possibility of becoming a resource hog:

trickle -u 10 -d 20 ipfs daemon

Here’s my project if you want to see examples of that in docker (docker-compose):

EDIT: This also lets you run with the server profile (not lowpower) while still stopping it from bricking your server. :slight_smile:

Because if that were the default, there would only be like 50 people running DHT nodes and the network would be very centralised; it would be trivial to spin up a cluster of DHT nodes and control any content-discovery request.

There is ongoing work on the resource manager and lighter config defaults that would make running DHT nodes cheaper.

EDIT: When you run lowpower your content is also not as discoverable, so other peers might have issues trying to download files from you.

well, just to confirm, this works well with the lowpower config: nothing bricks, the router doesn't crash :slight_smile:

In lowpower mode you lose some discoverability AND still have no actual control over the bandwidth. I like trickle because (unless I'm misunderstanding something) I get full IPFS capabilities AND control over the maximum bandwidth allotment.

I tried trickle with go-ipfs, but AFAIK trickle only works with non-forking processes. It works great with curl, wget, that kind of thing. Tracking the bandwidth stats with ipfs stats bw, it seems trickle has absolutely no impact; I can ipfs get at full speed…

I'm not sure you can throttle go-ipfs's traffic from userspace, but some iptables rules will do the trick…
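A hypothetical sketch of that approach, assuming go-ipfs runs as a dedicated user called `ipfs` and `eth0` is the outgoing interface (the rates are placeholders). iptables alone cannot rate-limit, so it only marks the packets, and tc does the actual shaping:

```shell
# Mark all egress packets generated by the "ipfs" user
iptables -t mangle -A OUTPUT -m owner --uid-owner ipfs -j MARK --set-mark 10

# HTB root qdisc; unmarked traffic falls into the default class 1:30
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:10 htb rate 512kbit ceil 1mbit
tc class add dev eth0 parent 1: classid 1:30 htb rate 100mbit

# Steer packets carrying firewall mark 10 into the throttled class
tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10
```

Note this only shapes upload; ingress shaping needs an ifb device or policing, which is messier.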

I checked my Linode CPU and network graphs and it looked like trickle was working, but maybe that was a false positive or an unreliable result. Thanks @reload. One way or another I do need to avoid lowpower and still control bandwidth. There are several other options: tc, wondershaper, but I've never tried them.

Oh, you have the router crash issue?

Can you try something for me, please? Someone thinks they have found the issue, but it doesn't make any sense to me, so I would like you to try their fix and tell me if it works.

So revert back to your first config (or just init a new one), then start the daemon without TCP port reuse:

# Run this while your main daemon is off
export IPFS_PATH=$(mktemp -d) # Make a new repo

ipfs init

LIBP2P_TCP_REUSEPORT=false ipfs daemon & # Start the daemon without TCP port reuse

# Now use your node, download a few files, browse a few things, just to generate a bit of traffic

# Then let it run for however long it usually takes to crash.
# If it doesn't crash, I guess the fix works; if it does crash, well, then it doesn't.

# To remove it:
ipfs shutdown
wait # Wait for the daemon to finish shutting down
rm -rf "$IPFS_PATH" # Remove the temporary repo
unset IPFS_PATH # Return to the default ~/.ipfs path

trickle cannot do any throttling on the sockets opened by go-ipfs, I'm pretty sure. It also doesn't work for things like HTTP or FTP servers, for example. I think a good way to prevent go-ipfs from using too many resources is to limit the number of open sockets, but I don't know if that can be done on Linux. Setting low values for Swarm LowWater/HighWater has some impact.
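For reference, a sketch of how those watermarks can be lowered via the connection manager config (run while the daemon is stopped; 100/200 are example values, not recommendations):

```shell
# Trim connections down toward LowWater once HighWater is exceeded
ipfs config --json Swarm.ConnMgr.LowWater 100
ipfs config --json Swarm.ConnMgr.HighWater 200

# New connections are exempt from trimming for this long
ipfs config Swarm.ConnMgr.GracePeriod 20s

# Restart the daemon for the changes to take effect
```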

This one is pretty easy to use; the configuration is done in YAML: https://github.com/cryzed/TrafficToll. I think it's a wrapper around tc.

With Swarm LowWater/HighWater set to 100/200 on go-ipfs 0.12.2, the swarm stabilizes at 1200 peers on average…

I agree "tc" shows the most promise for doing this on Linux, but when I googled it I got back a lot of garbage results (stuff that just has a bad smell to it), so I haven't quite gotten to it myself yet, but it's near the top of my list.

I've been playing with TrafficToll, trying to throttle go-ipfs. It has a massive impact on the number of swarm peers and overall CPU usage, more so than any other setting in the ipfs config. This is the config I've tried:

    processes:
      ipfs:
        # Higher numbers => lower priority
        download-priority: 10
        upload-priority: 10

        download: 20kbps
        upload: 30kbps

        recursive: True
        match:
          - name: ipfs

Run it as root with:

tt eth0 ipfs-config.yaml --delay 5

(replace eth0 with your network interface)

I set the limits intentionally very low; you'll want something way higher.

Without tt running, ipfs maintains >2200 active swarm peers no matter the LowWater/HighWater, with CPU usage well over 60% on average. With tt running, ~500 peers and CPU usage WAY down. Not too surprising, since you're setting limits at the kernel level. I suggest you give it a try.

Are you running the accelerated DHT client?

That is going to make IPFS perform much worse.

IPFS is not good in low-bandwidth environments.
Just use dhtclient instead, as a bad DHT node actually harms the network; at least don't make other nodes suffer, pls.

I'm running with dhtclient and the accelerated DHT client, LowWater/HighWater: 150/300. AcceleratedDHTClient has a minor impact on CPU usage.

Absolutely, but it's also triggering some bug that makes go-ipfs eat memory like crazy: 6 GB in 30 minutes, a record. I adjusted the limits and it behaves pretty well now.

Without the traffic shaping, the node's swarm routinely spikes to 2500 peers and hogs the CPU for ages without any correlation to user activity; I often have to kill the daemon. With the traffic shaping on, it doesn't peer with more than ~1100 peers, which gives the CPU a lot of air.

If a slow peer can harm the network, I wonder why that's not an attack vector. If someone brings online hundreds or thousands of "slow" nodes, is that a potential DDoS vector, or is the network smart enough to use slow nodes only when faster ones aren't available?

:pray: thx

The accelerated DHT client scans more or less the full network at intervals. It's expected for it to dial a lot of peers; that doesn't happen with the default client.
FYI, the accelerated DHT client works by keeping a more or less complete map of the nodes in the network in memory, so when you search for a file you can instantly ask the right node instead of having to walk the keyspace.
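If you want to check whether it's on, or turn it off and go back to the default walking client, it's an experimental flag in go-ipfs (a sketch; run while the daemon is stopped, then restart):

```shell
# Show the current setting (true/false)
ipfs config Experimental.AcceleratedDHTClient

# Disable the accelerated client
ipfs config --json Experimental.AcceleratedDHTClient false
```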

yeah, that's probably the bitswap (block exchange protocol) code. Could be something else, though.

If you want, you can run ipfs diag profile while IPFS is using lots of memory; then I (or you) can open it with pprof to see what is using all of that memory.
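A sketch of that workflow (the exact archive and file names inside it may vary between go-ipfs versions; inspecting the heap profile assumes a Go toolchain is installed):

```shell
# Collect profiles from the running daemon; writes an ipfs-profile-*.zip
# containing heap, goroutine, and CPU profiles
ipfs diag profile

# Unpack it and look at the top heap allocators with pprof
unzip ipfs-profile-*.zip -d profile
go tool pprof -top profile/heap.pprof
```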

That is just the accelerated client doing its scans.

I agree that it's maybe too aggressive. But it's expected.