High egress from Go-IPFS

I’m running a go-ipfs node that downloads a large number of files using `ipfs dag get` and `ipfs cat`. I’m consistently seeing `TotalOut` close to, or even higher than, `TotalIn` in `ipfs stats bw`.
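
I’ve also tried breaking the traffic down per protocol to see where it’s going (the Bitswap protocol ID below is just an example and may differ by version):

```sh
# overall totals since the daemon started
ipfs stats bw

# per-protocol breakdown, e.g. Bitswap traffic only
ipfs stats bw --proto /ipfs/bitswap/1.2.0
```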

I understand that fetching content generates some egress, but outbound traffic on the same order of magnitude as the ingress seems excessive. This is especially problematic when running in the cloud, since egress fees add up quickly.

So far, none of the following has noticeably reduced overall egress (rough commands in the sketch below):

- the `lowpower` profile
- running the daemon with `--routing=dhtclient`
- further lowering the `Swarm.ConnMgr` low/high watermarks and grace period
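
For reference, this is roughly what I applied; the watermark and grace-period values are just ones I experimented with, not recommendations:

```sh
# apply the low-power preset
ipfs config profile apply lowpower

# tighten the connection manager further (example values)
ipfs config --json Swarm.ConnMgr.LowWater 10
ipfs config --json Swarm.ConnMgr.HighWater 20
ipfs config Swarm.ConnMgr.GracePeriod 30s

# run as a DHT client only (don't serve DHT records to other peers)
ipfs daemon --routing=dhtclient
```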

I’ve also tried bandwidth-shaping tools like `trickle` and `tc`, but when I limit egress, downloads slow down significantly as well.
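
In case the setup matters, this is the kind of shaping I tried; `eth0` and the rate values are placeholders:

```sh
# cap egress on eth0 to ~1 Mbit/s with a token bucket filter
tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms

# or run the daemon standalone under trickle with ~128 KB/s upload
trickle -s -u 128 ipfs daemon
```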

Any ideas where this outbound data is coming from, and what I could do to reduce it?