On Friday I was banned by my internet provider for flooding their WiFi network with a gazillion small packets to/from ports 4001 and 8080, despite the fact that I was well below the speed limits I’m actually paying for. (Btw. I do have public IP addresses for both IPv4 and IPv6.) At that time I had Swarm.ConnMgr.HighWater=5000, but the disruptions continued even when I lowered it all the way to 200. I am^H^H was running go-ipfs 0.4.22 and my full config is here: My IPFS config
Do you have any clues what could be wrong, or how to optimize go-ipfs to be WiFi-friendly?
p.s. my apologies if this is something obvious I should have spotted right away.
Nope, I don’t use a VPN and I don’t plan to. If running a local IPFS gateway became a problem I’d rather run it in the cloud instead, but both of these solutions kind of eliminate the benefit of using IPFS and downloading all the stuff just once from the closest peer. Also, once a full IPFS node becomes an integral part of browsers I’ll probably hit this issue again. So I’d prefer to understand what is wrong with my setup instead. But thanks for the hint!
No, I don’t have limited data, I just have the speed throttled at ~16 MB/s. The ISP banned me explicitly even though I hadn’t crossed any limits. They just called me and said: “Sorry man, your node is causing havoc in our network, your router is probably hacked, we have disconnected you from the network. Bye.” … and then I talked with the technicians and they gave me the details about the origin/destination ports of the traffic, etc.
In fact the technicians were quite friendly and tried to help me debug the problem, but they obviously can’t afford to have lots of customers affected because one person runs “something strange” on his node.
Their problem was that the node was sending too many tiny packets to too many hosts at once, which saturated their backbone WiFi link even though it wasn’t fully utilized bandwidth-wise (maybe some were even multicasts/broadcasts, which need to be sent at the lowest speed the AP supports so that all peers on the WiFi network can receive them even with poor signal).
Btw. switching to another provider is kind of difficult, as they are pretty much the only provider here (if I don’t count cellular connections). Also, this seems to be a technical problem at the WiFi layer, so I don’t think other WiFi providers would fare any better.
I was hoping that someone would be able to figure out why IPFS sends so many tiny packets, so I could disable that part of the IPFS service.
Btw. is it possible to reconfigure go-ipfs to use the DHT in “client mode only”, so that it doesn’t serve any DHT requests from other peers and just queries the DHT when it needs it itself?
I’m also curious whether Swarm.DisableRelay: false could be a cause of this behaviour even if Swarm.EnableAutoRelay is false. And last but not least, is it possible that setting Gateway.HTTPHeaders.Access-Control-Allow-Origin: * causes the gateway to be announced somewhere, so other people try to use it despite port 8080 being blocked on the firewall, causing something like SYN flooding?
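In the meantime, to rule out outside traffic to the gateway entirely, I suppose I could bind it to the loopback interface instead of relying on the firewall. A sketch (8080 is the default gateway port in my config; adjust the multiaddr if yours differs):

```shell
# Bind the HTTP gateway to localhost only, so port 8080 is
# never reachable from the network, regardless of firewall rules:
ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8080
# Restart the daemon afterwards for the change to take effect.
```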
Some good points there, pezinek, especially about the local gateway. I hope you get some good answers. This might sound sarcastic, but have you tried turning it off and turning it on again?! You might be able to concentrate the bothersome traffic into time slices instead of having lots of little packets all the time.
There will surely be a much better suggestion than this though! Good luck
Sounds like you are running a server in a network provided by a consumer-grade ISP.
Start with applying the server profile:
ipfs config profile apply server
It ensures you don’t do mDNS discovery and won’t attempt connections to local IP ranges (profile.go#L49-L57). It disables NAT hole punching too, so you need to set Swarm.DisableNatPortMap back to false if you need it.
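In case you want to apply the relevant bits by hand, these commands approximate what the profile changes (a partial, illustrative sketch; profile.go has the authoritative list, including more private/reserved ranges in AddrFilters):

```shell
# Don't announce/discover peers on the local network via mDNS
ipfs config --json Discovery.MDNS.Enabled false
# Don't try to open ports on the router via UPnP/NAT-PMP
ipfs config --json Swarm.DisableNatPortMap true
# Refuse to dial private IP ranges (partial list for illustration)
ipfs config --json Swarm.AddrFilters '["/ip4/10.0.0.0/ipcidr/8", "/ip4/172.16.0.0/ipcidr/12", "/ip4/192.168.0.0/ipcidr/16"]'
```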
You also don’t want to run a relay, as it is super expensive bandwidth-wise.
Yes, to switch to DHT client mode, execute:
ipfs config Routing.Type dhtclient
Note you need to restart your node for config changes to be applied.
The flood of tiny packets is most likely DHT traffic: a lot of nodes asking you where to find content, and you answering with routing information.
Big packets (if any) would show up if you are/were a relay (actually receiving and forwarding data).
Traffic on the local network is mDNS discovering local peers.
and this was quite interesting (if I understand the dump correctly):
I kept Swarm.ConnMgr.HighWater at 200, but the number of peers stayed at around 400 the whole time
(so the ConnMgr obviously had a hard time keeping up with the incoming connections - why?)
once I started go-ipfs, the packet loss went from 0 to ~30% at the ISP’s internet gateway
(so likely traffic shaping)
64% of all packets had a size between 80-159 bytes (so mostly DHT traffic, it seems)
only 2% of all packets were “large packets” (so I’m likely not a relay)
multicasts/broadcasts were only 0.6% of all packets and mDNS only 0.03% (so the theory that the WiFi was clogged by slow multicast frames is wrong)
the IPFS daemon actually talked with ~4000 IP addresses over UDP, where about 2/3 of these connections exchanged only 3 packets(!), and with roughly 7000 IP addresses over TCP, where the median is ~119 packets per connection (43 packets sent and 77 received)
37% of all packets were actually TCP failures, like duplicate ACKs, retransmitted frames, etc.
So my new theory is that the root cause of this problem is actually traffic shaping on the ISP’s side, which is seriously disrupting those myriads of short-lived connections, which then respond by resending the traffic, which leads to packet multiplication and eventual congestion of the wireless backbone link.
Any cures for that? E.g. any way to make my ipfs daemon a less desirable peer for others?
I’m going to test it with DHT in client-only mode as recommended by lidel and see how it goes.
Come on, System! I just posted a picture with some stats there.
The stats btw. indicate that the problem likely lies in the traffic shaping on the ISP’s side.
There are a lot of short-lived connections which get seriously disrupted by the traffic shaper
and respond by resending packets, so ~37% of the traffic is duplicate ACKs/frames and other TCP breakage, which causes further traffic multiplication.
My next step was to switch routing to dhtclient mode, and the traffic was much better than before,
but the WiFi backbone was congested similarly to my previous attempt and packet loss went up
to 35% as well.
So I believe I have a solution. Thanks a lot @lidel - this worked!
Especially the part that prevented connections to private IP addresses.
And I have to admit that everything I said above was wrong.
The number-of-packets problem was a red herring, and obviously
even the ISP technicians didn’t really know what caused the issues on their network.
I have reconfigured the IPFS daemon to be a full DHT client, connect to up to 5000 peers, and keep NAT hole punching enabled, and the connection is now stable even at ~4000 packets/sec, whereas before it was disrupted at only ~1500 packets/sec:
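For anyone landing here later, the changes that worked for me amount to roughly the following (a sketch against go-ipfs 0.4.x option names; double-check against your version’s config docs):

```shell
# Start from the server profile: no mDNS, no dialing private IP ranges
ipfs config profile apply server
# DHT client-only mode: query the DHT, but serve no DHT requests from others
ipfs config Routing.Type dhtclient
# Allow up to ~5000 peers before the connection manager starts pruning
ipfs config --json Swarm.ConnMgr.HighWater 5000
# Re-enable NAT port mapping (the server profile turns it off)
ipfs config --json Swarm.DisableNatPortMap false
# Then restart the daemon for the changes to take effect
```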