I was running a public node that was doing very little on a public cloud provider.
It seems, though, that the protocol /ipfs/id/push/1.0.0 uses a large amount of *egress* bandwidth:
```
-> % k exec zipfs-0 -- ipfs stats bw --proto /ipfs/id/push/1.0.0
TotalIn: 479 kB
TotalOut: 66 MB
RateIn: 990 B/s
RateOut: 4.0 kB/s
```

```
-> % k exec zipfs-0 -- ipfs stats bw
TotalIn: 128 MB
TotalOut: 97 MB
RateIn: 134 kB/s
RateOut: 123 kB/s
```
As far as I can tell, id/push should only push updated data to peers when the local addresses/protocols change, and then it might push to a large number of peers (since it's a public node).
I don't think address changes are happening in my Kubernetes environment, as the pod is pretty stable, and the protocols shouldn't be changing.
I eventually solved this through the resource manager (though I limited at the system scope, as I couldn't figure out how to limit at the protocol level). But I am curious whether people think this is a bug or just how id/push works.
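For context, by "system" I mean the system-scope limits under `Swarm.ResourceMgr` in the kubo config. A sketch of the shape I used (kubo 0.17-era format; the numbers are purely illustrative, not recommendations, and field names may differ in later releases):

```json
{
  "Swarm": {
    "ResourceMgr": {
      "Enabled": true,
      "Limits": {
        "System": {
          "Conns": 128,
          "ConnsInbound": 64,
          "ConnsOutbound": 64,
          "Streams": 512,
          "StreamsInbound": 256,
          "StreamsOutbound": 256,
          "FD": 512,
          "Memory": 268435456
        }
      }
    }
  }
}
```

Note the resource manager limits connections, streams, memory, and file descriptors rather than bandwidth directly; capping connections/streams is what indirectly reduced my egress, since id/push fans out to every connected peer.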
Still on 0.17, though I should be able to confirm in 0.18 soon.
04:33PM - 09 Aug 16 UTC
We need to place limits on the bandwidth ipfs uses. We can do this a few different ways (or a combination thereof):
- per-peer limiting on each actual connection object
  - low coordination cost (no shared objects between connections)
  - should have a lower impact on performance than blindly rate-limiting the whole process
  - no flow control between protocols; the dht could drown out bitswap traffic
- per-subnet limiting
  - avoids rate-limiting LAN/localhost connections
  - it's not always possible to tell what's "local" (e.g., with IPv6)
- per-protocol limiting on each stream
  - should have the lowest impact on system performance of these options
  - each protocol gets its own slice of the pie and doesn't impact others
  - increased coordination required: need to reference the same limits across multiple streams
  - still makes it difficult to precisely limit the overall bandwidth usage
- global limiting using a single rate limiter over all connections
  - will successfully limit the amount of bandwidth ipfs uses
  - ipfs will be quite slow when rate limited in this way
Here's a screenshot of my egress bandwidth. Before limiting with the resource manager it was ~1 GB every 15 minutes, whereas now it is around 10 MB.