How can I disable DHT in kubo?

Hello,

How can I disable the distributed hash table (DHT) in my kubo IPFS node?

I tried the following config change:

ipfs config --json Routing.Type none

… but unfortunately it didn’t have any effect on the network traffic.

Do you have some advice?

After setting the Routing.Type to none, what output do you get from ipfs id?

Also, what do you get when running ipfs stats dht and ipfs swarm peers?

There’s a good chance that the network traffic isn’t coming from the DHT at all.
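As an aside, ipfs stats bw can break bandwidth down by protocol, so something like

ipfs stats bw --proto /ipfs/kad/1.0.0

would show roughly how much of the traffic is attributable to the DHT specifically.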

Hi Daniel,

thanks for your response :slight_smile:

We applied the following config change to the default config:

ipfs config --json Routing.Type null

Here is the output of ipfs id:

{
        "ID": "Our ID !!!",
        "PublicKey": "Our public key !!!",
        "Addresses": [
                "/ip4/10.89.0.3/tcp/4001/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3",
                "/ip4/10.89.0.3/udp/4001/quic-v1/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3",
                "/ip4/10.89.0.3/udp/4001/quic-v1/webtransport/certhash/uEiAhhTqbE1jSobD8IaIe49ROZbsDl5qQFslub1Dif5FUBA/certhash/uEiASlef5ibsdvfQ6sFKf3JsNGCUYM8980ijeTb7Ws8XFJA/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3",
                "/ip4/127.0.0.1/tcp/4001/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3",
                "/ip4/127.0.0.1/udp/4001/quic-v1/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3",
                "/ip4/127.0.0.1/udp/4001/quic-v1/webtransport/certhash/uEiAhhTqbE1jSobD8IaIe49ROZbsDl5qQFslub1Dif5FUBA/certhash/uEiASlef5ibsdvfQ6sFKf3JsNGCUYM8980ijeTb7Ws8XFJA/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3",
                "/ip4/136.172.60.121/tcp/4001/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3",
                "/ip4/136.172.60.121/udp/4001/quic-v1/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3",
                "/ip4/136.172.60.121/udp/4001/quic-v1/webtransport/certhash/uEiAhhTqbE1jSobD8IaIe49ROZbsDl5qQFslub1Dif5FUBA/certhash/uEiASlef5ibsdvfQ6sFKf3JsNGCUYM8980ijeTb7Ws8XFJA/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3",
                "/ip4/136.172.60.121/udp/48049/quic-v1/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3",
                "/ip4/136.172.60.121/udp/48049/quic-v1/webtransport/certhash/uEiAhhTqbE1jSobD8IaIe49ROZbsDl5qQFslub1Dif5FUBA/certhash/uEiASlef5ibsdvfQ6sFKf3JsNGCUYM8980ijeTb7Ws8XFJA/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3",
                "/ip6/::1/tcp/4001/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3",
                "/ip6/::1/udp/4001/quic-v1/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3",
                "/ip6/::1/udp/4001/quic-v1/webtransport/certhash/uEiAhhTqbE1jSobD8IaIe49ROZbsDl5qQFslub1Dif5FUBA/certhash/uEiASlef5ibsdvfQ6sFKf3JsNGCUYM8980ijeTb7Ws8XFJA/p2p/12D3KooWKnNCnWFMYwTow5WH5vYmSVJJaCZQ2aCdQu8Ss1o2yoW3"
        ],
        "AgentVersion": "kubo/0.29.0/3f0947b/docker",
        "Protocols": [
                "/ipfs/bitswap",
                "/ipfs/bitswap/1.0.0",
                "/ipfs/bitswap/1.1.0",
                "/ipfs/bitswap/1.2.0",
                "/ipfs/id/1.0.0",
                "/ipfs/id/push/1.0.0",
                "/ipfs/kad/1.0.0",
                "/ipfs/lan/kad/1.0.0",
                "/ipfs/ping/1.0.0",
                "/libp2p/autonat/1.0.0",
                "/libp2p/circuit/relay/0.2.0/hop",
                "/libp2p/circuit/relay/0.2.0/stop",
                "/libp2p/dcutr",
                "/x/"
        ]
}

Here is the first part of the output of ipfs stats dht:

DHT wan (196 peers):                                                                 
  Bucket  0 (20 peers) - refreshed 2m25s ago:                                        
    Peer                                                  last useful  last queried  Agent Version
  @ 12D3KooWSoKBzdA3Kz6pByws9AtWrZpPU8pzS4UxY4VWNPvm5rnx  5s ago       5s ago        kubo/0.22.0/3f884d3/gala.games
  @ QmdgqumYL1BK1x7LFq8o3AVAbtD9wQkF3H5k5rirUXxwfF        7s ago       7s ago        p2pd/0.1
  @ 12D3KooWGcxSPPp5wuKy8VrREi3tjHdBhptSyZdU2nQkqi5REo4p  9s ago       9s ago        kubo/0.22.0/3f884d3/gala.games
  @ 12D3KooWSnEt1GcVNSUVFS89Vgp84bvTUsH3vq5ZYbYLswJfpJxa  19s ago      19s ago       kubo/0.22.0/3f884d3/gala.games
  @ 12D3KooWBbqvi6axwMLYRZpRbvcGMU2zwfVNVwQ7k86Q9xeHjeoF  20s ago      20s ago       kubo/0.33.2/
  @ 12D3KooWFaRGd2GrByryBEKRxRdramUBGWAZMCTNkNhvQ7nRsfgF  21s ago      21s ago       go-ipfs/0.11.0/
    12D3KooWC4Skt2CmM6w9zouhybkUTGCZm68hSv4Bg2PZPKMf92jZ  25s ago      25s ago       kubo/0.22.0/3f884d3/gala.games
  @ 12D3KooWMToZwX9ALhhofZGBYgUJCTjo6Xq1nMRAqH2PMtLVALom  30s ago      30s ago       kubo/0.22.0/3f884d3/gala.games
  @ 12D3KooWHjaXV5T5cH9jgxhxs3XmVbicjMTjifntoWmZJv2BPYbX  30s ago      30s ago       kubo/0.22.0/3f884d3/gala.games
  @ QmWhiaLrx3HRZfgXc2i7KW5nMUNK7P9tRc71yFJdGEZKkC        32s ago      32s ago       p2pd/0.1
  @ QmXVP2e6BUHF12YMswcaZ8VNNAFtJfhAg4D2WSnnR7z3Q7        33s ago      33s ago       p2pd/0.1
    12D3KooWLvKgJubtNfEddJD67FsKVokTpDFCiR5YmmbcmLPLjz5K  34s ago      34s ago       kubo/0.32.1/9017453/docker
    12D3KooWLF5qNZVZ1NEqurmWaVPbWvanq2Eg5wutg8yukovvop2X  37s ago      37s ago       kubo/0.33.2/
  @ 12D3KooWQ6J5wo7rJKXdBWqirHSqdZBXJ5WsCjDZWu7J4uSiA9UY  38s ago      38s ago       kubo/0.34.1/desktop
  @ 12D3KooWBbkCD5MpJhMc1mfPAVGEyVkQnyxPKGS7AHwDqQM2JUsk  38s ago      38s ago       kubo/0.33.2/
  @ 12D3KooWGbqAGGQWH1WyQpzz8iNiR1CzoxM4bNvhzbnSCNftEkz6  47s ago      47s ago       kubo/0.15.0/3ae52a4
  @ 12D3KooWC2nrP7DrgZV2tx3ewiBp8c6sWhVaQ8CDBL4kUuYVsys2  50s ago      50s ago       kubo/0.31.0/
    12D3KooWEcZSmL2GPVjTWGPyPfp18Kvfamh82P929XHuzpuGcpc8  50s ago      50s ago       kubo/0.30.0/
  @ 12D3KooWF92UXuEpPaAzyqDLXyMg7bAzwQf7dhyfYfqxktfUDGVM  1m0s ago     1m0s ago      kubo/0.15.0/3ae52a4
  @ 12D3KooWJCJTpmEmr5h8DDSqDDRPcfFYhezKFhKK85kK1bY1xUn9  1m3s ago     1m3s ago      github.com/libp2p/go-libp2p
                                                                                     
  Bucket  1 (20 peers) - refreshed 10s ago:                                          
    Peer                                                  last useful  last queried  Agent Version
  @ 12D3KooWRTVpjDjKn6g6s4Jpbw4HtFP3icwXJiy1ZSmkFTv3KzZA  13s ago      12s ago       kubo/0.16.0/
  @ 12D3KooWLB5VaiNuq8n8Ts7BEjAjW8oz4ZNcmRwox7TBq4P37ZNh  14s ago      14s ago       kubo/0.18.1/675f8bd/docker
  @ 12D3KooWCVP6dgwqoRsG3NXYSX1VkcJam3E4bwUFBgu14yHUQvf7  14s ago      14s ago       kubo/0.22.0/3f884d3/gala.games
  @ 12D3KooWJ5dw92kixmBRg8FXNTUdXcMQQk6GSyHHsq5hFHFMcfu3  15s ago      15s ago       kubo/0.22.0/3f884d3/gala.games
  @ 12D3KooWMdA6pkZ344UeT6pHUDy4yF58DZpi4dqcptD4Ztignini  15s ago      15s ago       kubo/0.29.0/
  @ QmRRd42Yax4p2uahyu9tmrZUqXWaLmfeVFCnvKEhgoGPE1        15s ago      15s ago       go-ipfs/0.8.0/
  @ 12D3KooWHEmQpeq3CsQwvMtJBkvk5LXNJYD1cbJSiZT7APwTt4PL  15s ago      15s ago       kubo/0.28.0/
  @ 12D3KooWQys3HPveyj12cZS7bEZqsSe77nNWS7Uj3rszrMDCW5pK  15s ago      15s ago       kubo/0.22.0/3f884d3/gala.games
  @ 12D3KooWHdS63KCJZzPx2RUtsDqBmfLyZvcK4mGasgjUhPDsfBqc  15s ago      15s ago       kubo/0.23.0/
  @ 12D3KooWBbNtEFJT3hfrnMp9g2TDtSHC6VSgDMDAjCYbQtVsUVcb  15s ago      15s ago       kubo/0.22.0/3f884d3/gala.games
  @ 12D3KooWN4TUVdceX2T1UUCgGssfq3RNW23GKzctqKzE9apeuEM1  15s ago      15s ago       kubo/0.22.0-dev/895963b95-dirty/docker
  @ 12D3KooWGLUb3iXf6CVpCcRPQJcpXjeUCrpXqaGv9U3ath4MwgPx  15s ago      15s ago       kubo/0.22.0/3f884d3/gala.games
  @ 12D3KooWL92FgtU5oZ36eXC9nT3AAxTKyEsY49k2BSmLMMvuqJir  15s ago      15s ago       kubo/0.22.0/3f884d3/gala.games
  @ 12D3KooWR7VcC5NoJAH7TMrMzJNPYXDn3uVBfoCZ3j9kQHUH3sCu  15s ago      15s ago       kubo/0.26.0/
  @ 12D3KooWKgrGZnGHrN4m9dspVjfnx3Cwrib9rftiJ3KCL12D52kg  15s ago      15s ago       kubo/0.22.0/3f884d3/gala.games
  @ 12D3KooWJejUZveE9q1HjWpJnuZw9B2NLHBienXSmfA3reVbuDm5  15s ago      15s ago       kubo/0.18.1/675f8bd/docker
  @ 12D3KooWNyZ4x4JLRhB7BJ6oj3s5qsC5opG8pxsi7BDyFPxxZgX3  16s ago      15s ago       kubo/0.33.2/
  @ 12D3KooWLJfLLMqghC6TWXNRrTrcwUAtx8sm8dFeonCuHvN3aYk7  16s ago      16s ago       kubo/0.30.0/
  @ 12D3KooWKcW8JLQRbVr5QSrbHHaMkG52VGqvSAeVNsASpZo7HcYz  16s ago      16s ago       kubo/0.22.0/3f884d3/gala.games
  @ 12D3KooWMkGzWcWrs3pVsaNr7A8zg7ZTcw4hJXEy8ZBTLUaUzgzZ  16s ago      16s ago       kubo/0.34.1/4649554

And here is the beginning of the output of ipfs swarm peers (around 200 peers in total):

/ip4/103.168.2.20/tcp/62883/p2p/QmfLV6P7482yyq1nyZ2JyUjGyp5W3T4vKS4eaMN317FJGi
/ip4/104.238.187.126/udp/4001/quic-v1/p2p/12D3KooWF6asfk6suLodGAHqA3KnUKdDEWE7ommSaNdV7tTyvbpe
/ip4/107.173.147.120/udp/41846/quic-v1/p2p/12D3KooWHuy4CTR8nuTHNMXPq74vyM2RE7oUDNNsPjqZoD7MhYFU
/ip4/107.173.91.197/udp/40612/quic-v1/p2p/12D3KooWSkM8LnFS4JWdZNafrkuNpw5XQJmF96eWffsKvXhsUxex
/ip4/114.32.64.6/udp/50972/quic-v1/p2p/QmYrXJyK2W2kFALUSow18SUBnY3j6TwZn8UCKFP4LDtztz
/ip4/116.98.254.95/udp/6923/quic-v1/p2p/12D3KooWKwmFqksupSVhHwixEFDgLGx8g1GaAaUAYHHh5k2as9bB
/ip4/120.244.206.21/udp/5007/quic-v1/p2p/12D3KooWHAhjWy4H5nvArukb9EaabBBwKkfMydsE21G7KPKVgRV1
/ip4/13.218.230.16/tcp/4001/p2p/12D3KooWAbLqxtgqfjFiop7w4pjkwAa38t9DXGn5p2Ja1AxDDBWs
/ip4/134.209.198.177/udp/43553/quic-v1/p2p/12D3KooWGX4C7USACiLogtg8Ytr7B6u5ZuDaBQA92Zu2PgeJEKhb
/ip4/136.172.60.151/udp/4001/quic-v1/p2p/12D3KooWN1cJjVBqXmCmaNF6yihB9vTuSSeSHJ2kw6waaQ5Mvmsm
/ip4/136.244.81.198/tcp/4001/p2p/12D3KooWLfrEv3dPrwmrtGYQZxFaPzTbwUVMW9Uc4vZQfCFMPJTr
/ip4/138.197.123.17/udp/4001/quic-v1/p2p/12D3KooWPg8bc8D9fqAd5gFoZFb61JgpLa7xAKyrWkaxtyTPgQ95
/ip4/138.68.95.57/udp/4001/quic-v1/p2p/12D3KooWSeuphK8rLbTHWyTktLwNSgxoBDcsUFH1VdaFKQL5FCeM
/ip4/139.144.73.203/udp/12975/quic-v1/p2p/12D3KooWAcNNq2MNe83R3uhpWr2a2GGu2geAD2y8yyGQJbyPoN22
/ip4/139.180.214.173/udp/11412/quic-v1/p2p/12D3KooWGJ2SvjXUHtAkZdbWe6562GzSYjTDwNfwnaRbDZyu4WEg
/ip4/139.214.97.36/udp/60369/quic-v1/p2p/12D3KooWCkvGAb7SVoAc9xSx698uNXRo3C5QB5JmNneAtK1HjXUu
/ip4/14.23.53.205/udp/33531/quic-v1/p2p/12D3KooWSJUAEM2U1rRXKSza6LrHJ8Sb2dZdNjuDWJ5k7vMPJFeK
/ip4/140.82.22.204/tcp/4001/p2p/12D3KooWMZ8t6jTqo8eqquSNmgJpUrJqV2c5K21pKm5Ryru9mZad
/ip4/140.82.26.70/udp/5625/quic-v1/p2p/12D3KooWPRWF54BtAaYQ9Mx4DeZeRKvs9VreYLzPS4eYxmDBRvHF
/ip4/140.82.3.97/udp/4001/quic-v1/p2p/12D3KooWBVRioVWGZA9cnqtuTx3Kpb6HJvMUGTWbNBocVavPseWP
/ip4/141.164.37.195/udp/8109/quic-v1/p2p/12D3KooWDGEpKefVtRETFF8uXJT4yFguCEyMPascxwKFm7oQpif7
/ip4/142.93.215.219/udp/11251/quic-v1/p2p/12D3KooWK2hpzjpssVWTVFj6QV6oaRbA8ScisRCngt9U23pD6Fxs
/ip4/142.93.5.190/tcp/4001/p2p/12D3KooWEphMNmFMAfGn48vnnyRpEZy5gR5tTieXF8ohGgbcAvJp

I hope this helps. Do you have any ideas on what we can do next?

The correct way to disable it is to make sure the config is set to none rather than null.

The following command should do it:

ipfs config Routing.Type "none"

After which, you should get the following output:

$ ipfs stats dht

Error: routing service is not a DHT
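One thing worth noting: config changes only take effect after the daemon is restarted, so restart kubo after changing Routing.Type. You can double-check the stored value with:

ipfs config Routing.Type

which should print none after the change.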

@danieln thanks :slight_smile: this worked.


Hi @danieln,

now we’ve run into our next issue. We want to use delegated routing in ipfs-kubo, and we want to try it with the public delegated routing endpoint.

Do you know how we can configure this?

In Kubo 0.35 (which should be released in the coming days) you can use the following configuration:

ipfs config --json Routing.DelegatedRouters '["https://delegated-ipfs.dev"]'
ipfs config Routing.Type "autoclient"
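For reference, the resulting Routing section of the config file should then look roughly like this (a sketch, with unrelated fields omitted):

    "Routing": {
      "Type": "autoclient",
      "DelegatedRouters": [
        "https://delegated-ipfs.dev"
      ]
    }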

However, in this case you will still have a DHT client running.


Hi @cehbrecht, kubo 0.35 is now live: Release v0.35.0 · ipfs/kubo · GitHub

Thanks @danieln @mosh

We got delegated routing with the “autoclient” type working, and we were also able to pin external CIDs.

But … with “autoclient” there is still a lot of network traffic which our firewall admin does not like.

What can we do?

Can we use delegated routing completely without DHT?

AFAIK right now there is no easy way to do it. I’ve filed a feature request based on your need:

Commenting on the issue to raise the need for this feature would be helpful.

Until that is resolved, you could try setting Routing.Type=custom and then removing the DHT-related entries from the example config, but this is custom opt-in logic added a while ago only for research, so YMMV.
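As a rough sketch of what that could look like (based on Kubo’s custom routing config format; the router name HttpRouter is just a label, and per the caveats below you would point Endpoint at your own instance rather than the public one — whether announcing via “provide” actually works depends on the endpoint, so again YMMV):

    "Routing": {
      "Type": "custom",
      "Routers": {
        "HttpRouter": {
          "Type": "http",
          "Parameters": {
            "Endpoint": "https://delegated-ipfs.dev"
          }
        }
      },
      "Methods": {
        "find-providers": { "RouterName": "HttpRouter" },
        "find-peers": { "RouterName": "HttpRouter" },
        "get-ipns": { "RouterName": "HttpRouter" },
        "put-ipns": { "RouterName": "HttpRouter" },
        "provide": { "RouterName": "HttpRouter" }
      }
    }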

Two caveats:

  1. If you do this, and disable DHT in Kubo, make sure to replace the public good routing endpoint at delegated-ipfs.dev with your own instance of Someguy with proper failover and load-balancing. You don’t want routing on your production IPFS instances to hard fail if a domain you don’t control has an outage or starts rate-limiting you.

  2. Be mindful that if you disable the Amino DHT client, your node will no longer announce that it has the data, and other peers will not be able to retrieve it from you (OK if you are running client-only, but very, very bad if you run as a storage provider).


@lidel Thanks for your quick reply and the feature request :slight_smile:

Unfortunately our IPFS node needs to be a storage provider.

Is there a way to announce our stored data via another service (like Someguy) rather than the DHT?


This is a bit tricky. Only the provider (holder of the Peer private key) can announce to the DHT, and you can’t share the same Peer ID between Someguy and Kubo.

If you don’t need your content to be routable on IPFS Mainnet (Amino DHT), you can spin up your own delegated routing endpoint (boxo/routing/http at main · ipfs/boxo · GitHub) and add the endpoint to all your clients.

@cehbrecht Can you tell us a bit more about your intended use? Are you providing storage for your own data or expecting others to store data on your node?

Also, just a bump to comment on this issue, which will help prioritize your request: Add `Routing.Type=delegated` to only use `Routing.DelegatedRouters` · Issue #10824 · ipfs/kubo · GitHub

Hi @mosh,

thanks for coming back to us.

We want to provide an IPFS node for our scientific users. We pin all relevant data used by the scientists on our IPFS node, and the scientists use their IPFS clients to retrieve this data. We act as a pinning service.

All of this works with our current IPFS node.

However … this setup creates issues with our firewall due to heavy traffic (not allowed by firewall policy). This makes our firewall admin very unhappy.

We thought disabling the DHT and delegating the traffic to an external service (like Someguy) would solve our firewall issue. But, as we understand it, in this scenario the data on our IPFS node is no longer discoverable.

What can we do?


Hi,

so what is actually making your firewall admin unhappy?

Is it the bandwidth usage? Is it the number of connections? Is it IPv4 specifically? Is it UDP or TCP? How much bandwidth is Kubo using, how much do you expect it to use, and what is the limit?

Some options:

  • If it’s the bandwidth usage, you can run Kubo in a VM (e.g. with Proxmox, limiting bandwidth is well supported).
  • If it’s the connections, then it is important to know if it’s all of them or just some, and what the actual issue in the firewall is that bothers them so much.
  • The next release will have improvements that reduce bitswap chatter by up to 80%; you can already try it in master (kubo/docs/changelogs/v0.36.md at master · ipfs/kubo · GitHub). This might fix or alleviate the problem, but DHT connections will still be around, assuming that is the issue.
  • If you enable AcceleratedDHTClient you will have 10 minutes of many connections every hour and 50 minutes of DHT quietness (see the command sketch after this list). Worth it if you are providing content all the time.
  • Are you providing all the time? How many blocks are we talking about? What is the reprovider strategy?
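For the AcceleratedDHTClient option mentioned above, the switch is a single config flag (restart the daemon after setting it):

ipfs config --json Routing.AcceleratedDHTClient true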

Hi @hector

thanks for the quick reply :slight_smile:

  • If it’s the bandwidth usage, you can run Kubo in a VM (e.g. with Proxmox, limiting bandwidth is well supported).

no … it is not the bandwidth

  • If it’s the connections, then it is important to know if it’s all of them or just some, and what the actual issue in the firewall is that bothers them so much.

yes. We have configured the firewall (allowing specific egress and ingress ports), but there are many connections which are blocked by the firewall because they don’t comply with this policy. These blocked connections consume a lot of our firewall’s resources.

Thanks for pointing this out. We’ll give it a try, but the main issue is probably the DHT connections.

  • if you enable AcceleratedDHTClient you will have 10 minutes of many connections every hour and 50 minutes of DHT quietness. Worth it if you are providing content all the time.

Is this option also relevant for an IPFS node which serves data? Would it also reduce the total number of connections?

  • Are you providing all the time?

yes

How many blocks are we talking about?

we are still in the testing phase. There is not much file transfer happening yet.

What is the reprovider strategy?

we kept the defaults.

Thanks for your help :slight_smile:


On default settings, the use of reuse-port means all connections will come and go from the same port. That means, if you are listening on :4001, the source port of all egress connections is set to 4001. Per the ipfs id output above, it would seem AutoNAT has discovered port 48049 as well, or UPnP has assigned that port. Is this firewall NAT’ing you?

Given that the source port for all connections is 4001, in the case of NAT only one external-facing port needs to be assigned in the firewall. If you set up port forwarding manually, this may be 4001 as well, and you can disable UPnP etc. (see the sketch below). You can use https://check.ipfs.network/ to check which addresses can be found and work from the outside. Incoming connections should only go to “advertised” ports (tcp 4001, udp 48049, udp 4001 per the above).
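As a sketch, if you forward a port manually you can disable UPnP/NAT port mapping and then check which addresses the node is bound to:

ipfs config --json Swarm.DisableNatPortMap true
ipfs swarm addrs listen

(restart the daemon after the config change).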

Now, what exactly is the policy blocking connections?

  • Is it blocking egress or ingress? Is it resetting connections or pooling them for later?
  • Is it blocking IPv6, which is not NAT’ed anyways?
  • Does it handle UDP/TCP traffic differently?
  • What is the limit? 100? 200? 1000?

I don’t know what the firewall is doing such that connections hurt its performance (it is its job to handle them, after all). Sometimes UDP connection tracking in firewalls comes with large timeouts, which causes otherwise short-lived connections to build up in the tracking table (even though you have only 200 peers, the number may be bigger from the firewall’s point of view). You can try disabling UDP-based transports and see if your admin is happier:

"Transports": {
      "Multiplexers": {},
      "Network": {
        "QUIC": false,
        "WebTransport": false,
        "WebRTCDirect": false,
        "TCP": true,
        "Websocket": true
      },
      "Security": {}
    }
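If it is easier, these settings live under the Swarm.Transports key of the config file and can be applied with ipfs config, roughly like this (restart the daemon afterwards):

ipfs config --json Swarm.Transports.Network.QUIC false
ipfs config --json Swarm.Transports.Network.WebTransport false
ipfs config --json Swarm.Transports.Network.WebRTCDirect false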

In order to avoid many new but short-lived connections, you can actually increase the connmgr settings to reduce churn, while lowering GracePeriod:

   "ConnMgr": {
      "LowWater": 500,
      "HighWater": 1000,
      "GracePeriod": "5s"
    },
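Again as a sketch, these live under Swarm.ConnMgr and can be set with:

ipfs config --json Swarm.ConnMgr.LowWater 500
ipfs config --json Swarm.ConnMgr.HighWater 1000
ipfs config Swarm.ConnMgr.GracePeriod "5s"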

It is still not clear what exactly is bothering your admin or your firewall. In a full network crawl, IPFS will top out at around 6500 egress connections in the firewall table. This should be easily manageable by non-consumer-grade hardware. From their side, it would be interesting to know (if you manage to have a conversation about it):

  • What are the UDP timeouts? What about the TCP_SYN timeouts? Large timeouts may cause a build-up of otherwise short-lived connections.
  • Confirm whether the source port for all egress connections is 4001: it should be, otherwise SO_REUSEPORT is not working on your platform (you can check this yourself via lsof -n -i -P | grep ipfs).
  • How many connections can they attribute to your Kubo box at any given time?
  • What is the state of those connections? Is there UDP build up? Many stuck TCP SYNs?
  • What are the limits on? Per max connections? Per protocol? Per IP? Per direction?

If the answer is “you can only have 50 connections every 5 minutes”, then it is going to be very hard to work on the public network without requiring specific configuration/peering in clients. The ConnMgr can be used as a baseline, but DHT lookups or announcements will spike total connections. There are some improvements coming here as well that will smooth things out significantly (a new DHT reprovider), but right now controlling those spikes will hurt lookup/provide reliability.


Is global discoverability important for your use case? As in, do you want random people to be able to retrieve data from your node?

Hi @hector

thanks for your detailed feedback.

We are not that deep into the firewall topic. We need to talk to our admin and will come back here with our feedback.

Cheers,
Carsten

Hi @danieln

yes … global discoverability is important for our use case. Every scientist in our community needs to access our data.

Cheers,
Carsten
