How can I disable DHT in kubo?

Hi @hector and @danieln,

we talked to our admin about the firewall issues. Below is his response; hopefully it clarifies the situation:

Our firewall does not do NAT, and the policies are strictly configured to only allow traffic that we explicitly approved. Here’s what the current firewall rules allow:

Inbound

  • Only to TCP or UDP destination port 4001

Outbound

  • Only to TCP or UDP destination port 4001
  • Only from source port 4001 (TCP or UDP)

Everything else is blocked, including any traffic from or to non-allowed ports.


Importantly

  • The firewall itself is carrier-grade. Performance, session tables, etc., are not an issue.
  • Logging, however, is an issue. All blocked packets (inbound and outbound) are logged.
  • Our IPFS node generates a huge amount of blocked events — often millions per day, accounting for 80–90% of all logged events, making log analysis painfully slow.

This high volume of blocked packets strongly suggests that IPFS is not limiting its connections to port 4001, despite our policies.

While TCP source port binding to 4001 may be unrealistic, especially for outgoing connections, UDP is definitely behaving unexpectedly as well — we see outbound UDP traffic using random source ports, which violates our policies.


Summary

The actual issue isn’t firewall performance or DHT per se — the problem is that IPFS is not consistently using the ports it’s supposed to.

Given that, we’d be very interested in a way to force Kubo/IPFS to use only port 4001 for all incoming and outgoing traffic, both source and destination, for both TCP and UDP — or otherwise restrict its behavior to match the firewall rules.

Sorry, but that makes little sense. People are able to run IPFS on whatever ports they want, or whatever ports their router opens for them via UPnP, or whatever ports NAT hole-punching pierced for them. So limiting the destination port to 4001 is something we cannot control or do anything about. If you want, you can block that traffic locally via iptables so that your admin doesn’t see it, but this will degrade your experience by making peers on non-default ports undialable.
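For reference, a minimal sketch of such a local block. It assumes Kubo runs under a dedicated ipfs system user (the user name is an assumption to adapt), and note that these rules would also drop that user’s DNS and any other non-4001 traffic:

```shell
# Sketch only: silently drop outbound packets from the local "ipfs" user that
# are not destined for port 4001, so the upstream firewall never sees or logs
# them. Assumes Kubo runs as a dedicated "ipfs" system user.
iptables -A OUTPUT -m owner --uid-owner ipfs -p tcp ! --dport 4001 -j DROP
iptables -A OUTPUT -m owner --uid-owner ipfs -p udp ! --dport 4001 -j DROP
```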

Regarding the source port being 4001: what platform/arch are you running Kubo on? On any Linux system, lsof -i -n -P | grep ipfs should show all current connections and which source port they are using. They should be using 4001 for both TCP and UDP. What are you seeing there? Are there any machines/routers in the middle that might be NATing you and switching ports before the firewall?

When I look at the ipfs id output above, there is port 48049 for QUIC. It may be UPnP or AutoNAT at play. Please disable them and send ipfs id again:
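One way to disable both — a sketch based on the Kubo config docs; run against the node’s config and restart the daemon afterwards:

```shell
# Disable UPnP port mapping and the AutoNAT service, then restart the daemon
# so the changes take effect.
ipfs config --json Swarm.DisableNatPortMap true
ipfs config AutoNAT.ServiceMode disabled
```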

You should probably apply the server profile as well: kubo/docs/config.md at master · ipfs/kubo · GitHub
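Applying it is a one-liner; the server profile turns off local-network discovery and NAT traversal features that are unwanted on servers (restart the daemon afterwards):

```shell
# Apply Kubo's built-in "server" profile to the existing config.
ipfs config profile apply server
```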

To be honest, if log analysis is slow because they are blocking traffic that they should not block for the application you are running, the solution here is to allowlist your IP and stop logging its traffic, since it matches what you are running. What do they expect to find in those millions of events that makes them worth logging?

Hi @hector,

thanks for your reply. We talked to our admin again.

We are running our test Kubo node on AlmaLinux 9.x, now as a local service without Docker.

We have made the config changes you suggested.

This is what ipfs id now looks like:

{
        "ID": "our-ipfs-ID",
        "PublicKey": "our-public-key",
        "Addresses": [
                "/ip4/127.0.0.1/tcp/4001/p2p/12D3KooWC5aNScCwC2vdx785iZ9tZ3WEP85DxBQPP1PwhfD5zpx4",
                "/ip4/127.0.0.1/udp/4001/quic-v1/p2p/12D3KooWC5aNScCwC2vdx785iZ9tZ3WEP85DxBQPP1PwhfD5zpx4",
                "/ip4/127.0.0.1/udp/4001/quic-v1/webtransport/certhash/uEiCiJUVpHl8UKrAvHuX8OV1sicULhvNcocgkyBnMsxNO6A/certhash/uEiCAKxvdUkTb2WOIQeJvlqtPUtNv3CxS4tcJBr3MQkISaA/p2p/12D3KooWC5aNScCwC2vdx785iZ9tZ3WEP85DxBQPP1PwhfD5zpx4",
                "/ip4/127.0.0.1/udp/4001/webrtc-direct/certhash/uEiAzvBW1kCb7-gukM6aYrZS0LZf2MsKabmC9p1EAjOY3OA/p2p/12D3KooWC5aNScCwC2vdx785iZ9tZ3WEP85DxBQPP1PwhfD5zpx4",
                "/ip4/136.172.60.121/tcp/4001/p2p/12D3KooWC5aNScCwC2vdx785iZ9tZ3WEP85DxBQPP1PwhfD5zpx4",
                "/ip4/136.172.60.121/udp/4001/quic-v1/p2p/12D3KooWC5aNScCwC2vdx785iZ9tZ3WEP85DxBQPP1PwhfD5zpx4",
                "/ip4/136.172.60.121/udp/4001/quic-v1/webtransport/certhash/uEiCiJUVpHl8UKrAvHuX8OV1sicULhvNcocgkyBnMsxNO6A/certhash/uEiCAKxvdUkTb2WOIQeJvlqtPUtNv3CxS4tcJBr3MQkISaA/p2p/12D3KooWC5aNScCwC2vdx785iZ9tZ3WEP85DxBQPP1PwhfD5zpx4",
                "/ip4/136.172.60.121/udp/4001/webrtc-direct/certhash/uEiAzvBW1kCb7-gukM6aYrZS0LZf2MsKabmC9p1EAjOY3OA/p2p/12D3KooWC5aNScCwC2vdx785iZ9tZ3WEP85DxBQPP1PwhfD5zpx4",
                "/ip6/::1/tcp/4001/p2p/12D3KooWC5aNScCwC2vdx785iZ9tZ3WEP85DxBQPP1PwhfD5zpx4",
                "/ip6/::1/udp/4001/quic-v1/p2p/12D3KooWC5aNScCwC2vdx785iZ9tZ3WEP85DxBQPP1PwhfD5zpx4",
                "/ip6/::1/udp/4001/quic-v1/webtransport/certhash/uEiCiJUVpHl8UKrAvHuX8OV1sicULhvNcocgkyBnMsxNO6A/certhash/uEiCAKxvdUkTb2WOIQeJvlqtPUtNv3CxS4tcJBr3MQkISaA/p2p/12D3KooWC5aNScCwC2vdx785iZ9tZ3WEP85DxBQPP1PwhfD5zpx4",
                "/ip6/::1/udp/4001/webrtc-direct/certhash/uEiAzvBW1kCb7-gukM6aYrZS0LZf2MsKabmC9p1EAjOY3OA/p2p/12D3KooWC5aNScCwC2vdx785iZ9tZ3WEP85DxBQPP1PwhfD5zpx4"
        ],
        "AgentVersion": "kubo/0.35.0/",
        "Protocols": [
                "/ipfs/bitswap",
                "/ipfs/bitswap/1.0.0",
                "/ipfs/bitswap/1.1.0",
                "/ipfs/bitswap/1.2.0",
                "/ipfs/id/1.0.0",
                "/ipfs/id/push/1.0.0",
                "/ipfs/ping/1.0.0",
                "/libp2p/circuit/relay/0.2.0/hop",
                "/libp2p/circuit/relay/0.2.0/stop",
                "/libp2p/dcutr",
                "/x/"
        ]
}

Here is the output of lsof -i -n -P | grep ipfs:

lsof -i -n -P | grep ipfs 
ipfs    68773 k204228   12u  IPv4 618866      0t0  UDP *:4001 
ipfs    68773 k204228   13u  IPv4 616137      0t0  TCP *:4001 (LISTEN)
ipfs    68773 k204228   14u  IPv6 616139      0t0  TCP *:4001 (LISTEN)
ipfs    68773 k204228   16u  IPv6 618867      0t0  UDP *:4001 
ipfs    68773 k204228   22u  IPv4 622191      0t0  TCP 136.172.60.121:4001->109.199.123.18:4001 (ESTABLISHED)
ipfs    68773 k204228   27u  IPv4 617142      0t0  TCP 127.0.0.1:5001 (LISTEN)
ipfs    68773 k204228   29u  IPv4 617144      0t0  TCP 127.0.0.1:8080 (LISTEN)
ipfs    68773 k204228   45u  IPv4 619663      0t0  TCP 136.172.60.121:4001->84.46.241.21:4001 (ESTABLISHED)
ipfs    68773 k204228   46u  IPv4 617290      0t0  TCP 136.172.60.121:4001->135.181.213.172:4001 (ESTABLISHED)

Does this help?

We appreciate your support.

Seems correct, although you only have 3 established connections. Is this because you disabled the DHT?
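(For reference, since the thread title asks about it: if you ever do want to disable the DHT, the usual approach, per the Kubo config docs, is to change Routing.Type and restart the daemon:)

```shell
# "none" disables DHT participation entirely; "autoclient" would keep a
# client-only DHT without serving provider records.
ipfs config Routing.Type none
```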

Hi @hector,

we have not disabled the DHT. We were doing this on our test instance, which is only up for testing purposes.

Cheers,
Carsten

I suspect that this is the culprit. Some nodes listen on a different port; Helia/js-libp2p, for example, binds by default to port 0, deferring to the OS to pick a port (see helia/packages/helia/src/utils/libp2p-defaults.ts at a0266903e981c5e6e2771e7d0d4b60fc659e0eef · ipfs/helia · GitHub), and then announces that port.