High bandwidth usage by /ipfs/id/push/1.0.0

I was running a mostly idle public node with a public cloud provider.

It seems, though, that the protocol /ipfs/id/push/1.0.0 uses a large amount of *egress* bandwidth:

```
-> % k exec zipfs-0 -- ipfs stats bw --proto /ipfs/id/push/1.0.0
Bandwidth
TotalIn: 479 kB
TotalOut: 66 MB
RateIn: 990 B/s
RateOut: 4.0 kB/s
```


```
-> % k exec zipfs-0 -- ipfs stats bw
Bandwidth
TotalIn: 128 MB
TotalOut: 97 MB
RateIn: 134 kB/s
RateOut: 123 kB/s
```

As far as I can tell, id/push should only push updated data to peers when the local addresses or protocols change, and at that point it might push to a large number of peers (since it's a public node).
I don't think address changes are happening in my Kubernetes environment, as the pod is pretty stable, and the protocol set shouldn't be changing either.
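To sanity-check that the addresses really are stable, a rough poll like the one below should do (sketch only: the 60 s interval is arbitrary, but `ipfs id -f '<addrs>'` is a real format string). If it never reports a change, identify push shouldn't have anything new to announce:

```
# Rough check: snapshot the node's advertised addresses once a minute and
# report any change; identify push should only have something to announce
# when this set (or the protocol list) changes.
k exec zipfs-0 -- ipfs id -f '<addrs>' | sort > /tmp/addrs.old
while true; do
  sleep 60
  k exec zipfs-0 -- ipfs id -f '<addrs>' | sort > /tmp/addrs.new
  if ! diff /tmp/addrs.old /tmp/addrs.new; then
    echo "addresses changed at $(date)"
    cp /tmp/addrs.new /tmp/addrs.old
  fi
done
```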

I eventually solved this through the resource manager (though I used the system scope, as I couldn't figure out how to limit at the protocol level). But I am curious whether people think this is a bug or just how id/push works.
I'm still on 0.17, though I should be able to confirm on 0.18 soon.
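For reference, this is roughly the shape of the system-scope limit I ended up with. Treat it as a sketch: the numbers are placeholders rather than my real values, and the exact config keys and `ipfs swarm limit` / `ipfs swarm stats` plumbing differ a bit between 0.17 and 0.18:

```
# Cap the libp2p resource manager at the system scope.
# (Key names follow the kubo 0.17-era Swarm.ResourceMgr docs; adjust for your version.)
k exec zipfs-0 -- ipfs config --json Swarm.ResourceMgr.Enabled true
k exec zipfs-0 -- ipfs config --json Swarm.ResourceMgr.Limits.System '{
  "Conns": 256,
  "ConnsInbound": 128,
  "ConnsOutbound": 128,
  "Streams": 512,
  "StreamsInbound": 256,
  "StreamsOutbound": 256,
  "Memory": 268435456,
  "FD": 512
}'
# Restart the node, then inspect the active limits and current usage:
k exec zipfs-0 -- ipfs swarm limit system
k exec zipfs-0 -- ipfs swarm stats system
```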

Here's a screenshot of my egress bandwidth. Before limiting with the resource manager it was ~1 GB every 15 minutes, whereas now it is around 10 MB.