Hi,
I’ve read quite a few topics about bandwidth issues, and in fact that is currently the main reason why I’m not running my IPFS node continuously.
I don’t know what the plan is on this front, but I have a suggestion.
My suggestion would offload bandwidth management from IPFS to the OS. Unfortunately it’s based on a Linux-only (and maybe BSD) tool: the traffic shaper, tc. tc lives in the network stack and has many “modules” (queueing disciplines, qdiscs) which basically control the order in which IP packets are sent on the wire.
I already utilize tc to prioritize traffic sent by various services on my computer:
- prio A: infrastructure-critical traffic, DNS, ICMP, management ports
- prio B: TCP handshake, ssh interactive connection
- prio C: default, everything else
- prio D: bulk traffic, torrent, downloads
So in my setup, bulk downloads (prio D) can run at full speed, but when the browser needs bandwidth (prio C), it gets priority as if there were no background downloading at all.
IPFS traffic types (DHT, bitswap, …) fit perfectly into this scheme, IMO, if only we could distinguish which packet carries which type of traffic. Traffic classification happens at the IP packet level. (Packets can also be classified by the system user that sends them, but all IPFS subsystems run in a single process, if I’m not mistaken, so uid-based differentiation cannot be used.) My suggestion is to let IPFS set different DSCP bits (formerly the IP QoS/ToS field) on different traffic types, which can be done by setting a socket option on the socket object, so that tc can sort them into different priority bands.
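To make this concrete, here is a minimal sketch of what the marking side could look like in Go (the language of go-ipfs), using the real golang.org/x/net/ipv4 package. The codepoint mapping and the markConn helper are my own illustration, not an existing IPFS API:

```go
package qos

import (
	"net"

	"golang.org/x/net/ipv4"
)

// Illustrative DSCP codepoints for the four bands above
// (my own mapping, nothing IPFS currently defines).
const (
	DSCPInfra       = 48 // CS6: DHT routing, keep-alives
	DSCPInteractive = 32 // CS4: bitswap data the user asked for
	DSCPDefault     = 0  // CS0: everything else
	DSCPBulk        = 8  // CS1: background bitswap, reproviding
)

// markConn tags an established connection. DSCP occupies the upper
// six bits of the ToS byte, so the codepoint is shifted left past
// the two ECN bits.
func markConn(c net.Conn, dscp int) error {
	return ipv4.NewConn(c).SetTOS(dscp << 2)
}
```

For IPv6 the analogous call is ipv6.NewConn(c).SetTrafficClass(dscp << 2); on the tc side, a u32 filter matching the ToS byte can then steer each mark into its band.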
In this way you don’t need to provision a hard bandwidth limit for ipfs (like with trickle), which leaves idle capacity unused while still eating into other traffic when the link is busy. Instead, ipfs meta-traffic can use all of the bandwidth when there is no other traffic, while user-facing applications and ipfs data traffic (directly requested by the user) flow steadily when needed.
So far this needs relatively little effort (setting the QoS marker bits). The other missing piece is ingress traffic. Namely, tc is primarily for egress traffic control, since egress is what you can control directly and effectively. Ingress traffic is only controlled indirectly, as a result of cleverly timed outgoing messages of certain protocols (e.g. TCP ACKs). But ipfs by nature does not communicate over a single stream-like channel the way TCP does, I guess. Therefore we would have to anticipate approximately how much ingress traffic a given outgoing message will trigger, and QoS-mark it accordingly.
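To illustrate the anticipation idea, here is a rough sketch of request pacing with a token bucket (golang.org/x/time/rate is a real package; the rate numbers, the send hook, and the assumption that we can estimate a reply’s size at all are hypothetical):

```go
package qos

import (
	"context"

	"golang.org/x/time/rate"
)

// ingressBudget caps the ingress traffic our own requests may trigger:
// ~1 MiB/s sustained with a 4 MiB burst (illustrative numbers).
var ingressBudget = rate.NewLimiter(rate.Limit(1<<20), 4<<20)

// requestBlock delays an outgoing block request until the reply it is
// expected to trigger fits into the ingress budget, so replies arrive
// at roughly the target rate. expectedBytes would be the wanted
// block's size, when known.
func requestBlock(ctx context.Context, send func() error, expectedBytes int) error {
	if err := ingressBudget.WaitN(ctx, expectedBytes); err != nil {
		return err
	}
	return send()
}
```

Of course this only shapes the traffic we explicitly ask for; whether a reply’s size can be known in advance is exactly what I’m asking below.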
I turn to the developers who know more about IPFS than I do: is it possible to affect the amount of ingress by delaying outgoing/reply messages? Is there something like “pong, helo, send me your DHT query, but this-and-that slowly” in the protocol? Is it possible to know how much data will be received when acking peer requests?
Another dirty solution is to apply ingress traffic control (tc can do it), but it’s quite wasteful, i.e. dropping data that has already been transmitted successfully. And again, I don’t know how ipfs’s protocols tolerate that: do they gradually back off, time out, or do nothing?
Thanks for your attention.
I hope this helps somewhat.
Looking forward to your feedback.