go-ipfs question

Greetings,

When I add a new file (for instance, a media file of ~2 GB) to go-ipfs, I observe an avalanche of UDP packets on the router's WAN interface.
Because of that huge number of UDP packets, internet access goes down; when I stop go-ipfs, access is restored within ~10 minutes.

go-ipfs sits behind NAT, and I have also set up a port forward for port 4001.

I am not sure why the router behaves as described; perhaps someone has an idea.

Please advise

We have a tracking issue for this: Router crashes when Kubo is being used · Issue #9998 · ipfs/kubo · GitHub. It contains two workarounds you should try.

It would also help us if you could post a report as a comment on that issue :heart: :slight_smile:

We got a very similar report here: IPFS in Brave slows internet speeds to a crawl

As for why it does that: this is normal; it is the DHT provide process. Your node is registering itself as a provider for every CID you create in the Distributed Hash Table. The DHT is a big decentralised phone book containing two maps: CID → list of nodes hosting it, and node → IP + port + protocol list.
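If you want to watch this process yourself, recent Kubo versions expose it on the CLI (older releases use `ipfs dht` instead of `ipfs routing`; the CID below is a placeholder):

```
# Announce a single CID to the DHT by hand; adding a ~2GB file creates
# thousands of blocks, each of which normally gets announced like this.
ipfs routing provide <your-cid>

# List the peers currently recorded as providers for that CID.
ipfs routing findprovs <your-cid>
```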

You can enable the accelerated DHT client (https://github.com/ipfs/kubo/blob/master/docs/config.md#routingaccelerateddhtclient). Overall it makes orders of magnitude fewer connections and is a few hundred thousand times faster, but it is extremely bursty: the same work gets compressed into short windows.
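Per the linked docs, enabling it is a one-line config change followed by a daemon restart:

```
# Enable the accelerated DHT client, then restart the daemon.
ipfs config --json Routing.AcceleratedDHTClient true
ipfs daemon
```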

Thanks for your quick turnaround, @Jorropo.
I will fill out the report; should I raise a PR, or can I paste it here?

Speaking of the workarounds:

No Reuseport

This is exactly what is happening on the router: it receives a lot of UDP packets on a port that is already closed, and the kernel answers each one with an ICMP port-unreachable message (the UDP counterpart of a TCP RST).
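To verify this, I plan to watch for those ICMP replies on the router while running the daemon with the workaround applied (the interface name below is just an example):

```
# Watch for ICMP destination-unreachable replies on the WAN interface.
tcpdump -ni eth0 'icmp[icmptype] == icmp-unreach'

# The "No Reuseport" workaround from the tracking issue: disable TCP port
# reuse before starting the daemon.
LIBP2P_TCP_REUSEPORT=false ipfs daemon
```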

I will test it over the weekend and keep you posted.

Thanks

I have tried the workarounds proposed by @Jorropo and none of them worked.

I suspect the ISP (Bell Canada) applies some kind of rate limit, but I am not sure how to confirm that.
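One idea, assuming I can borrow a server outside Bell's network, is to compare sustained UDP against TCP with iperf3 and see whether only UDP gets throttled:

```
# On a remote server outside the ISP's network:
iperf3 -s

# From home: 60 seconds of high-rate UDP, then the same duration over TCP.
iperf3 -c <server-ip> -u -b 900M -t 60
iperf3 -c <server-ip> -t 60
```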

I could probably do another test, for instance create a VPN tunnel and divert the traffic through it.

@Jorropo
I have created an IP-in-IP tunnel, migrated the service to IPv6, and I can confirm that no throttling was observed.
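For reference, the setup is roughly the following (addresses and interface names are placeholders, and the exact tunnel mode depends on the remote endpoint):

```
# IPv6-over-IPv4 tunnel to the remote endpoint (sit mode carries v6 inside v4).
ip tunnel add ipfstun mode sit local 192.0.2.2 remote 198.51.100.1 ttl 64
ip link set ipfstun up
ip -6 addr add 2001:db8::2/64 dev ipfstun
ip -6 route add default dev ipfstun
```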

I have come to the conclusion that the QUIC protocol generates a high volume of PPS (packets per second), and that Bell applies a rate limit to UDP traffic from home users, which causes the connection to be throttled.

I am attaching a chart with the PPS; please note that the sampling interval is one minute and the spikes sometimes reach 3.5 million packets per minute.
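For anyone who wants to reproduce this kind of measurement, sampling the interface counters is enough (the interface name is an example):

```
# Sample packet counters on the WAN interface once per second.
watch -n1 'ip -s link show dev eth0'

# Or, with sysstat installed, report per-interface packet rates directly:
sar -n DEV 1
```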

What is the bandwidth advertised by your ISP?

The bandwidth is 1 Gbit/s up and down over fiber.

1 Gbit/s ÷ 1500 bytes → ~83,000 packets per second → 5,000,000 packets per minute. So, as advertised, your connection should be able to handle 5M packets per minute flat.

This is exactly what is happening on the router: it receives a lot of UDP packets on a port that is already closed, and the kernel answers each one with an ICMP port-unreachable message (the UDP counterpart of a TCP RST).

I'm a bit confused / this is interesting: LIBP2P_TCP_REUSEPORT only affects TCP; the UDP listener is always reused.
On what port are you seeing those ICMP port-unreachable replies?

Also, which kernel is doing this: the router's kernel or your IPFS host's?

The router and IPFS are on the same host, where IPFS runs in an isolated process and network namespace.
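The namespace setup is roughly the standard veth pattern; the names and addresses below are placeholders, not my exact configuration:

```
# Create an isolated network namespace for the IPFS daemon and connect it to
# the router side with a veth pair.
ip netns add ipfs
ip link add veth-host type veth peer name veth-ipfs
ip link set veth-ipfs netns ipfs
ip addr add 192.168.1.1/24 dev veth-host
ip link set veth-host up
ip netns exec ipfs ip link set lo up
ip netns exec ipfs ip addr add 192.168.1.10/24 dev veth-ipfs
ip netns exec ipfs ip link set veth-ipfs up
ip netns exec ipfs ip route add default via 192.168.1.1
ip netns exec ipfs ipfs daemon
```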

  1. The first attempt used IPv4, with IPFS behind NAT and a port forward on port 4001. I noticed UDP packets arriving for already-closed sockets, and on top of that the ISP seems to apply a rate limit, which is what throttles the Internet connection.

  2. In the second attempt I migrated IPFS to IPv6 via a VPN tunnel, with no IPv4 configuration. In this setup I have not noticed the Internet connection being throttled.

PS: It seems the ISP counts the UDP packets sent per interval from home users and applies the rate limit once they exceed some threshold. Sending the IPFS traffic through a tunnel apparently bypasses the ISP's rate-limiting filters.

Conclusion: the PPS generated by IPFS is, IMHO, normal; the problem is that ISPs react to that traffic and apply various filters/traffic shaping…
