Unable to resolve IPNS name via remote IPFS nodes or public gateways (used to work fine)

Hi,

I am running Kubo 0.27.0 (I’ve been running it for years on earlier versions) on a single node, and have a Python script on that node that regularly fetches some files, constructs a folder containing those files, and publishes the result to IPFS as a new CID. It then updates my IPNS name to point at that latest CID.

The IPNS name is /ipns/k51qzi5uqu5dlnwjrnyyd6sl2i729d8qjv1bchfqpmgfeu8jn1w1p4q9x9uqit

This has been working pretty well for some years, but in the last six months or so the IPNS name has been increasingly ‘falling off the radar’: public gateways like Cloudflare’s, Pinata’s, etc. stop resolving it.

I also spun up an IPFS instance on a totally separate remote server using Kubo and am unable to resolve the IPNS name from it:

ipfs@mypfs:~$ ipfs name resolve /ipns/k51qzi5uqu5dlnwjrnyyd6sl2i729d8qjv1bchfqpmgfeu8jn1w1p4q9x9uqit
Error: could not resolve name

Yet if I know the IPFS CID that the IPNS name points to, I can grab it just fine (granted, it takes a while; again, this is on a remote server that has no relationship to the publishing node):

ipfs@mypfs:~$ ipfs get /ipfs/QmRppJ5FQcHhCDCyAGauLzMmeDo6iYEaJRhRYQ8zENGcud
Saving file(s) to QmRppJ5FQcHhCDCyAGauLzMmeDo6iYEaJRhRYQ8zENGcud
 8.91 MiB / 8.91 MiB [====================================================================================================================================================] 100.00% 10s

My publishing script, which publishes both the CID and the IPNS pointer, still seems to be working fine with no apparent errors. Here is the script: ipfs-publish · GitHub
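For context, the shape of the RPC calls involved is roughly the following sketch. The /api/v0/add and /api/v0/name/publish endpoints and the default API address are standard Kubo; the helper functions and default lifetime/ttl values here are illustrative, not the actual script:

```python
# Sketch of the publish flow against a local Kubo RPC API.
# Assumes the default API address (127.0.0.1:5001); helper names
# and default lifetime/ttl values are illustrative only.
from urllib.parse import urlencode

API = "http://127.0.0.1:5001/api/v0"

def add_url() -> str:
    # POST a multipart file body here to add content and get back its CID
    return f"{API}/add"

def publish_url(cid: str, key: str = "self",
                lifetime: str = "48h", ttl: str = "1h") -> str:
    # name/publish signs a fresh IPNS record pointing at the CID;
    # lifetime bounds record validity, ttl controls resolver caching
    params = urlencode({"arg": f"/ipfs/{cid}",
                        "key": key,
                        "lifetime": lifetime,
                        "ttl": ttl})
    return f"{API}/name/publish?{params}"

print(publish_url("QmRppJ5FQcHhCDCyAGauLzMmeDo6iYEaJRhRYQ8zENGcud"))
```

(Actually POSTing to these URLs requires a running daemon, of course; the point is just which parameters the publish step carries.)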

Here is the output of running the script:

Getting the file name
Adding the file to IPFS and get the CID hash
{'Name': 'incidents.csv', 'Hash': 'QmVZApxY4DWnfTirxs97sA4mS5kHyoHcqF7BhNptszDTYj', 'Size': '9288754'}
Resolving the current CID of the IPNS pointer
{'Path': '/ipfs/QmW2sXMSMQ1r2rUkdb34QSp8xHemFSsjoqVuRDfpUCrwUj'}
Getting the CID of the file we care about within IPNS CID
{'Data': {'/': {'bytes': 'CAE'}}, 'Links': [{'Hash': {'/': 'QmeiShHHbtEhivQc1XsT1YSY3U3mCUH58tErKDwHqXXr1d'}, 'Name': 'README.md', 'Tsize': 1715}, {'Hash': {'/': 'QmQppmLpp4PeXKmrbJR5B9pjSZKhMj8VTkv85RiXrMvsng'}, 'Name': 'incidents-log.csv', 'Tsize': 53658}, {'Hash': {'/': 'QmW9yhosKC8c9in5ANHAJjPi7JwXSmzaSW7oyf2vqsDDh7'}, 'Name': 'incidents.csv', 'Tsize': 9288754}]}
Adding attributes and contents of changelog and README for publishing
Publishing folder containing the files for human-friendly viewing. Still just as IPFS, not IPNS
{"Name":"pft-incidents/README.md","Hash":"QmeiShHHbtEhivQc1XsT1YSY3U3mCUH58tErKDwHqXXr1d","Size":"1715"}
{"Name":"pft-incidents/incidents.csv","Hash":"QmVZApxY4DWnfTirxs97sA4mS5kHyoHcqF7BhNptszDTYj","Size":"9288754"}
{"Name":"pft-incidents/incidents-log.csv","Hash":"QmQppmLpp4PeXKmrbJR5B9pjSZKhMj8VTkv85RiXrMvsng","Size":"53658"}
{"Name":"pft-incidents","Hash":"QmRppJ5FQcHhCDCyAGauLzMmeDo6iYEaJRhRYQ8zENGcud","Size":"9344302"}

Pointing our IPNS at the new directory CID hash
{'Name': 'k51qzi5uqu5dlnwjrnyyd6sl2i729d8qjv1bchfqpmgfeu8jn1w1p4q9x9uqit', 'Value': '/ipfs/QmRppJ5FQcHhCDCyAGauLzMmeDo6iYEaJRhRYQ8zENGcud'}

So, publishing works, and naturally I can resolve the IPNS name on the originating machine itself (because of the cache). What do I need to do to get it to be reachable again from other nodes?

Here is my ipfs config too (private key/peer id removed):

{
  "API": {
    "HTTPHeaders": {}
  },
  "Addresses": {
    "API": "/ip4/127.0.0.1/tcp/5001",
    "Announce": null,
    "AppendAnnounce": [
      "/ip4/107.170.254.5/tcp/4001",
      "/ip4/107.170.254.5/udp/4001/quic",
      "/ip4/107.170.254.5/udp/4001/quic-v1",
      "/ip4/107.170.254.5/udp/4001/quic-v1/webtransport"
    ],
    "Gateway": "/ip4/127.0.0.1/tcp/8080",
    "NoAnnounce": [
      "/ip4/100.64.0.0/ipcidr/10",
      "/ip4/169.254.0.0/ipcidr/16",
      "/ip4/172.16.0.0/ipcidr/12",
      "/ip4/192.0.0.0/ipcidr/24",
      "/ip4/192.0.0.0/ipcidr/29",
      "/ip4/192.0.0.8/ipcidr/32",
      "/ip4/192.0.0.170/ipcidr/32",
      "/ip4/192.0.0.171/ipcidr/32",
      "/ip4/192.0.2.0/ipcidr/24",
      "/ip4/192.168.0.0/ipcidr/16",
      "/ip4/198.18.0.0/ipcidr/15",
      "/ip4/198.51.100.0/ipcidr/24",
      "/ip4/203.0.113.0/ipcidr/24",
      "/ip4/240.0.0.0/ipcidr/4",
      "/ip6/100::/ipcidr/64",
      "/ip6/2001:2::/ipcidr/48",
      "/ip6/2001:db8::/ipcidr/32",
      "/ip6/fc00::/ipcidr/7",
      "/ip6/fe80::/ipcidr/10"
    ],
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip4/0.0.0.0/tcp/4002/ws",
      "/ip4/0.0.0.0/udp/4001/quic-v1",
      "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
      "/ip6/::/tcp/4001",
      "/ip6/::/udp/4001/quic-v1",
      "/ip6/::/udp/4001/quic-v1/webtransport",
      "/ip6/::/tcp/4002/ws"
    ]
  },
  "AutoNAT": {},
  "Bootstrap": [
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip4/104.131.131.82/udp/4001/quic-v1/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa"
  ],
  "DNS": {
    "Resolvers": null
  },
  "Datastore": {
    "BloomFilterSize": 1048576,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "mounts": [
        {
          "child": {
            "path": "blocks",
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
            "sync": true,
            "type": "flatfs"
          },
          "mountpoint": "/blocks",
          "prefix": "flatfs.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    },
    "StorageGCWatermark": 90,
    "StorageMax": "10G"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": false,
      "Interval": 10
    }
  },
  "Experimental": {
    "FilestoreEnabled": false,
    "GraphsyncEnabled": false,
    "Libp2pStreamMounting": false,
    "P2pHttpProxy": false,
    "StrategicProviding": false,
    "UrlstoreEnabled": false
  },
  "Gateway": {
    "APICommands": [],
    "HTTPHeaders": {},
    "NoDNSLink": false,
    "NoFetch": false,
    "PathPrefixes": [],
    "PublicGateways": null,
    "RootRedirect": "",
    "Writable": false
  },
  "Identity": {
    "PeerID": "xxxxxxxxxxx",
    "PrivKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  },
  "Internal": {},
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128
  },
  "Migration": {
    "DownloadSources": null,
    "Keep": ""
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Peering": {
    "Peers": null
  },
  "Pinning": {
    "RemoteServices": null
  },
  "Plugins": {
    "Plugins": null
  },
  "Provider": {
    "Strategy": ""
  },
  "Pubsub": {
    "DisableSigning": false,
    "Router": ""
  },
  "Reprovider": {},
  "Routing": {
    "AcceleratedDHTClient": true,
    "Methods": null,
    "Routers": null
  },
  "Swarm": {
    "AddrFilters": [
      "/ip4/10.0.0.0/ipcidr/8",
      "/ip4/100.64.0.0/ipcidr/10",
      "/ip4/169.254.0.0/ipcidr/16",
      "/ip4/172.16.0.0/ipcidr/12",
      "/ip4/192.0.0.0/ipcidr/24",
      "/ip4/192.0.0.0/ipcidr/29",
      "/ip4/192.0.0.8/ipcidr/32",
      "/ip4/192.0.0.170/ipcidr/32",
      "/ip4/192.0.0.171/ipcidr/32",
      "/ip4/192.0.2.0/ipcidr/24",
      "/ip4/192.168.0.0/ipcidr/16",
      "/ip4/198.18.0.0/ipcidr/15",
      "/ip4/198.51.100.0/ipcidr/24",
      "/ip4/203.0.113.0/ipcidr/24",
      "/ip4/240.0.0.0/ipcidr/4",
      "/ip6/100::/ipcidr/64",
      "/ip6/2001:2::/ipcidr/48",
      "/ip6/2001:db8::/ipcidr/32",
      "/ip6/fc00::/ipcidr/7",
      "/ip6/fe80::/ipcidr/10"
    ],
    "ConnMgr": {
      "GracePeriod": "20s",
      "HighWater": 2000,
      "LowWater": 1500,
      "Type": "basic"
    },
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": true,
    "RelayClient": {},
    "RelayService": {},
    "ResourceMgr": {
      "Limits": {}
    },
    "Transports": {
      "Multiplexers": {},
      "Network": {},
      "Security": {}
    }
  }
}

In the firewall, I have TCP and UDP port 4001 open inbound, and everything is allowed outbound. There is no NAT; the machine is a DigitalOcean droplet with the public IP as its primary eth0 interface. The public IP seen in the config is 107.170.254.5 (I added those ‘AppendAnnounce’ lines myself, but they haven’t seemed to help).

I appreciate any assistance!

Setting ipfs config --json Ipns.UsePubsub true on both my publishing node and my ‘retrieving’ node seems to have suddenly helped. The only other changes I made were:

Setting these:

net.core.rmem_max = 2500000
net.core.wmem_max = 2500000

I also removed the extra AppendAnnounce sections on the publishing node’s configs.
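Put together as commands, those changes were (the sysctl values would normally be persisted in /etc/sysctl.conf or a drop-in file; that path is just the conventional one):

```shell
# Enable IPNS over pubsub (done on both the publishing and retrieving node)
ipfs config --json Ipns.UsePubsub true

# Raise the UDP buffer limits that QUIC likes to have
sysctl -w net.core.rmem_max=2500000
sysctl -w net.core.wmem_max=2500000
```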

The public web gateways still can’t resolve the IPNS name, but it’s instantaneous now on my receiver node, for some reason (maybe pubsub has helped?)


Hi @mig5

I grabbed the CID from the last line (QmRppJ5FQcHhCDCyAGauLzMmeDo6iYEaJRhRYQ8zENGcud)

Pointing our IPNS at the new directory CID hash
{'Name': 'k51qzi5uqu5dlnwjrnyyd6sl2i729d8qjv1bchfqpmgfeu8jn1w1p4q9x9uqit', 'Value': '/ipfs/QmRppJ5FQcHhCDCyAGauLzMmeDo6iYEaJRhRYQ8zENGcud'}

and I tried to fetch that, and I can’t seem to load it either. So I’m not certain it’s an IPNS-specific issue; my guess is that your node is no longer reachable for some reason. Would you mind sharing your PeerID so I can debug connectivity to your node, or doing an IPFS check yourself (https://ipfs-check.on.fleek.co/ or https://check.ipfs.network/) on that CID to see what’s going on?


Thanks @ianconsolata . The CID changes all the time as the content regularly changes. Maybe it fell off the network as no longer fresh. At the time of writing, the current CID of the IPNS pointer is /ipfs/QmW2Uf7xGTqBUfe3xbxnQxXUqYTCRFfRcQkp6ucurPNHFs

The peer ID of the instance doing the publishing is 12D3KooWLJ2DwWAMd2a3s5JabdVKM1srt7D2TnwjCjr4hwnqk6zh

Having said all the above, I have been running a cron job on my totally-separate IPFS ‘receiver’ node since late last week. All that the cron job is doing is ipfs name resolve /ipns/k51qzi5uqu5dlnwjrnyyd6sl2i729d8qjv1bchfqpmgfeu8jn1w1p4q9x9uqit.

I started the cron job at Fri Mar 8 05:45:01 AM UTC 2024.

It took until Sat Mar 9 01:55:01 PM UTC 2024 to be able to resolve the name. But ever since then, it has been 100% stable (not a single resolution failure ever since), even as the underlying CID that the IPNS points to has changed.
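The crontab entry is nothing more than this (the log path is just where I happen to write the output):

```shell
# Every 5 minutes, attempt the resolve and append the result to a log
*/5 * * * * /usr/local/bin/ipfs name resolve /ipns/k51qzi5uqu5dlnwjrnyyd6sl2i729d8qjv1bchfqpmgfeu8jn1w1p4q9x9uqit >> /var/log/ipns-check.log 2>&1
```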

Here is the output from my cron job: gist:f943b8313bbb11342641ea40ed08475f · GitHub. You can see the CID changing as the content is changed and re-published - exactly as it should be… this is good.

This really seems to suggest to me that:

  1. Running pubsub seems to help
  2. Checking the IPNS via an IPFS-native client (in this case another Kubo installation) rather than hitting the public gateways seems to help (the public gateways at Pinata, Cloudflare and IPFS.io still report it as down all this time)
  3. Checking it regularly seems to help (every 5 minutes) - previously my check has been hitting just the public gateways and only once per hour.

The fact that I can get a result at all tells me there’s nothing wrong with my IPFS publisher. It suggests it’s a matter of ‘freshness’ or popularity of the content being fetched…

But I welcome any insight if you check my Peer ID above and can identify any issue obvious to someone who knows IPFS better than I do! Thanks!

Interesting! I just ran a similar script on my local node (running on my laptop). At first I wasn’t able to resolve it, but after checking the “Enable IPNS over PubSub” setting I was able to get it to resolve. “Enable PubSub” alone did not fix the issue, and disabling IPNS over PubSub meant I could no longer resolve it. So my guess is that the gateways are not enabling this setting by default, which is why they aren’t able to resolve your IPNS identifier.

Also of note, I can’t seem to fetch the CID the IPNS identifier resolves to using lassie, the default retrieval client:

❯ lassie fetch QmS5nvQeGv3WQU9tNobNkZ5CXSuxGw76TS3QcqGmus8qsB
Fetching QmS5nvQeGv3WQU9tNobNkZ5CXSuxGw76TS3QcqGmus8qsB
2024-03-12T10:40:02.830-0700    FATAL   lassie  lassie/main.go:57       no candidates

Using the IPFS checker and the PeerID you shared, I got the following

According to the IPFS debugging docs (Diagnostic tools | IPFS Docs), it looks like that might mean you need to enable the “Accelerated DHT Client.” Do you know if that setting is already enabled on your node?
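(For reference, on Kubo that setting would be toggled with:)

```shell
ipfs config --json Routing.AcceleratedDHTClient true
```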

Thanks again @ianconsolata . I also saw the recommendation for AcceleratedDHTClient. That setting is (and has been for a long time) already enabled on my node that is doing the publishing - you can see it in the config in the earlier reply, Unable to resolve IPNS name via remote IPFS nodes or public gateways (used to work fine):

  "Routing": {
    "AcceleratedDHTClient": true,
    "Methods": null,
    "Routers": null
  },

I’m not clear on whether this set of settings is sufficient though…

Thanks for helping me out with this. It seems to match my suspicion that the public gateways don’t use the PubSub feature at least for IPNS…

I’ve been looking at your problem ever since you made your first post, but while there are some mistakes and some things that could be improved in your config, I don’t see anything there that could cause your problem.

Your problem is this (and I don’t think anyone has stated it yet): while your node reprovides its addresses to the DHT and is reachable (I have no problem connecting to it), it doesn’t seem to reprovide your IPNS records or your cached/pinned CIDs … at all!

I’d like to see two things to try and get a clue on what is going on:

  • what is the CLI command you use to launch your daemon?
  • please run: ipfs stats provide and copy the output here

If the output of stats provide is empty (or near empty), please allow the node to run for at least 24 hours after you launch it and try again. If still empty, that proves it’s not reproviding.

Thank you @ylempereur !

  • what is the CLI command you use to launch your daemon?

I am using the Ansible role GitHub - madoke/ansible-ipfs-cluster: Ansible roles for go-ipfs and ipfs-cluster. The systemd unit file I have says this:

[Unit]
Description=IPFS daemon
After=network.target

[Service]
Type=notify
User=ipfs
Group=ipfs
StateDirectory=ipfs
TimeoutStartSec=10800
LimitNOFILE=4092
MemoryMax=3.0G
MemorySwapMax=0
Environment="IPFS_FD_MAX=4092"
ExecStart=/usr/local/bin/ipfs daemon --migrate
Restart=on-failure
KillSignal=SIGINT

[Install]
WantedBy=multi-user.target

  • please run: ipfs stats provide and copy the output here

Hmm, indeed, that seems bad:

ipfs@ipfs:~$ ipfs stats provide
TotalProvides:          0
AvgProvideDuration:     0s
LastReprovideDuration:  0s
LastReprovideBatchSize: 0

The daemon has been running since March 4th already.

root@ipfs:/home/mig5# systemctl status ipfs.service
● ipfs.service - IPFS daemon
     Loaded: loaded (/etc/systemd/system/ipfs.service; enabled; preset: enabled)
     Active: active (running) since Mon 2024-03-04 22:45:12 UTC; 1 week 1 day ago
   Main PID: 238926 (ipfs)
      Tasks: 12 (limit: 4651)
     Memory: 2.4G (max: 3.0G swap max: 0B available: 602.5M)
        CPU: 2d 16h 50min 9.065s
     CGroup: /system.slice/ipfs.service
             └─238926 /usr/local/bin/ipfs daemon --migrate

Could it be a firewall thing somehow? As mentioned, I’m allowing anything inbound for TCP and UDP ports 4001, and I’m allowing all egress.

Version info in case it helps:

ipfs@ipfs:~$ ipfs version --all
Kubo version: 0.27.0
Repo version: 15
System version: amd64/linux
Golang version: go1.21.7

I welcome any recommendations on fixing any issues in the config, too! Or if you think there’s anything wrong with the way I publish my CIDs and IPNS pointer (per the script).

Thanks again.

Yep, it’s failing to reprovide, which means none of what your node has is discoverable, and that’s why nothing resolves (except for its addresses, which are definitely in the DHT, somehow).

Let’s first try and solve the problem, then we’ll discuss the config cleanup.

I’d like you to make a few changes to the config and the command, and report if anything improves.

Change the command to: ipfs daemon --migrate --enable-gc --routing=auto

Change the following sections in your config:

  "Reprovider": {
    "Interval": "11h0m0s",
    "Strategy": "all"
  },
...
  "Swarm": {
...
    "ConnMgr": {},
...
    "ResourceMgr": {
      "Enabled": true,
      "Limits": {},
      "MaxMemory": "4GiB"
    },
...
  }

This will double the rate at which it reprovides (default is 22h), allow the connection manager to decide what the proper values are for your setup, and bump your memory use a bit (3GB is a bit low for what you are trying to do, especially with the AcceleratedDHTClient). Of course, you’ll have to bump up your service’s memory allocation to allow for it.

If none of this makes any difference, I’ll try to think of something else to test. Something is preventing reprovide from running, and we have to find what it is.
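If it’s easier, the same config changes can be applied from the CLI rather than editing the JSON by hand (same values as above; double-check the keys against your Kubo version):

```shell
ipfs config Reprovider.Interval "11h0m0s"
ipfs config Reprovider.Strategy all
ipfs config --json Swarm.ConnMgr '{}'
ipfs config --json Swarm.ResourceMgr '{"Enabled": true, "Limits": {}, "MaxMemory": "4GiB"}'
```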

Thank you @ylempereur - I have made those changes. The only other change I made was raising the memory option from 4 to 6GB (as I doubled the size of the VM from 4GB to 8GB).

● ipfs.service - IPFS daemon
     Loaded: loaded (/etc/systemd/system/ipfs.service; enabled; preset: enabled)
     Active: active (running) since Wed 2024-03-13 03:02:11 UTC; 38s ago
   Main PID: 680 (ipfs)
      Tasks: 14 (limit: 9482)
     Memory: 766.6M (max: 6.0G swap max: 0B available: 5.2G)
        CPU: 1min 17.273s
     CGroup: /system.slice/ipfs.service
             └─680 /usr/local/bin/ipfs daemon --migrate --enable-gc --routing=auto

Mar 13 03:02:11 ipfs ipfs[680]: Swarm announcing /ip4/127.0.0.1/udp/4001/quic-v1/webtransport/certhash/uEiB6eLQbeSeLiJk7KiO1o7pdid4fDf0cSjIVfBoX5IdC_Q/certhash/uEiArWY4ok9bDv>
Mar 13 03:02:11 ipfs ipfs[680]: Swarm announcing /ip6/::1/tcp/4001
Mar 13 03:02:11 ipfs ipfs[680]: Swarm announcing /ip6/::1/tcp/4002/ws
Mar 13 03:02:11 ipfs ipfs[680]: Swarm announcing /ip6/::1/udp/4001/quic-v1
Mar 13 03:02:11 ipfs ipfs[680]: Swarm announcing /ip6/::1/udp/4001/quic-v1/webtransport/certhash/uEiB6eLQbeSeLiJk7KiO1o7pdid4fDf0cSjIVfBoX5IdC_Q/certhash/uEiArWY4ok9bDv7HOghp>
Mar 13 03:02:11 ipfs ipfs[680]: RPC API server listening on /ip4/127.0.0.1/tcp/5001
Mar 13 03:02:11 ipfs ipfs[680]: WebUI: http://127.0.0.1:5001/webui
Mar 13 03:02:11 ipfs ipfs[680]: Gateway server listening on /ip4/127.0.0.1/tcp/8080
Mar 13 03:02:11 ipfs ipfs[680]: Daemon is ready
Mar 13 03:02:11 ipfs systemd[1]: Started ipfs.service - IPFS daemon.

I’ll keep an eye on the results/stats to see what happens.

I’ve been looking at other IPFS configs online and am wondering whether I should have these in the Swarm addresses section:

"/ip4/0.0.0.0/udp/4001/quic",
"/ip6/::/udp/4001/quic",

Is it important? My section looks like this right now:

  "Addresses": {
    "API": "/ip4/127.0.0.1/tcp/5001",
    "Announce": null,
    "Gateway": "/ip4/127.0.0.1/tcp/8080",
    "NoAnnounce": [
      "/ip4/100.64.0.0/ipcidr/10",
      "/ip4/169.254.0.0/ipcidr/16",
      "/ip4/172.16.0.0/ipcidr/12",
      "/ip4/192.0.0.0/ipcidr/24",
      "/ip4/192.0.0.0/ipcidr/29",
      "/ip4/192.0.0.8/ipcidr/32",
      "/ip4/192.0.0.170/ipcidr/32",
      "/ip4/192.0.0.171/ipcidr/32",
      "/ip4/192.0.2.0/ipcidr/24",
      "/ip4/192.168.0.0/ipcidr/16",
      "/ip4/198.18.0.0/ipcidr/15",
      "/ip4/198.51.100.0/ipcidr/24",
      "/ip4/203.0.113.0/ipcidr/24",
      "/ip4/240.0.0.0/ipcidr/4",
      "/ip6/100::/ipcidr/64",
      "/ip6/2001:2::/ipcidr/48",
      "/ip6/2001:db8::/ipcidr/32",
      "/ip6/fc00::/ipcidr/7",
      "/ip6/fe80::/ipcidr/10"
    ],
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip4/0.0.0.0/tcp/4002/ws",
      "/ip4/0.0.0.0/udp/4001/quic-v1",
      "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
      "/ip6/::/tcp/4001",
      "/ip6/::/udp/4001/quic-v1",
      "/ip6/::/udp/4001/quic-v1/webtransport",
      "/ip6/::/tcp/4002/ws"
    ]
  },

Don’t; that transport (the pre-RFC QUIC draft) has been discontinued in favor of quic-v1.

And yes, some of the problems are with your filters; some lines are redundant. We’ll address that later.


Thanks @ylempereur .

I just ran the stats command again and now see this:

ipfs@ipfs:~$ ipfs stats provide
TotalProvides:          31k (31,507)
AvgProvideDuration:     11.352ms
LastReprovideDuration:  5m39.168878s
LastReprovideBatchSize: 31k (31,463)

Oh, your node is starting to reprovide!

➜  ~ ipfs routing findprovs QmW2Uf7xGTqBUfe3xbxnQxXUqYTCRFfRcQkp6ucurPNHFs
12D3KooWKuSzGoorvsBB8zdXCYVMEfybYKa7sDgXT3AkofAzX3FV
12D3KooWKeyz8B8vYaLgy7MqreBiDFyiGR16rScCAebBQrH5EJ2r
12D3KooWLJ2DwWAMd2a3s5JabdVKM1srt7D2TnwjCjr4hwnqk6zh
➜  ~ ipfs routing findprovs QmRppJ5FQcHhCDCyAGauLzMmeDo6iYEaJRhRYQ8zENGcud
12D3KooWKuSzGoorvsBB8zdXCYVMEfybYKa7sDgXT3AkofAzX3FV
12D3KooWGwdJPdyxECX4Q9NLsg37pjfRbPtqHWCYJFujD2aP9ESg
12D3KooWLJ2DwWAMd2a3s5JabdVKM1srt7D2TnwjCjr4hwnqk6zh
➜  ~ ipfs name resolve /ipns/k51qzi5uqu5dlnwjrnyyd6sl2i729d8qjv1bchfqpmgfeu8jn1w1p4q9x9uqit
Error: could not resolve name
➜  ~ 

Note how your node’s address is appearing on the list now. The IPNS key still isn’t resolving, but it might be because your node hasn’t gotten to it yet. I’ll check again later.

You might want to try and publish it again, to force the issue (make sure to set a lifetime and a ttl).
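From the CLI, a republish with explicit values would look something like this (assuming the name was published under the default ‘self’ key; the CID placeholder is yours to fill in):

```shell
ipfs name publish --key=self --lifetime=48h --ttl=1h /ipfs/<your-latest-CID>
```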

Thanks - I just forced a publish, and I set ttl=1h0m0s&lifetime=48h0m0s in my call to the API (even though I believe these are the defaults already if unspecified).

{'Name': 'k51qzi5uqu5dlnwjrnyyd6sl2i729d8qjv1bchfqpmgfeu8jn1w1p4q9x9uqit', 'Value': '/ipfs/QmcEm5oZ2siV8s17VF44iAHE95SuUa1R8anmNM7HjdAfQc'}

I just checked a public gateway and suddenly it is working again :open_mouth:

Thank you so much, so far this looks like it might’ve been the magic I needed!

(My post got hidden by Akismet, so to repeat:) I forced a republish and set ttl and lifetime explicitly (even though I believe they are applied by default and I used the default values).

The IPNS name is appearing now on the Cloudflare public gateway, and I’m trying others too (they are still timing out, but I’ll keep testing). Thank you again, this seems to have been the magic that was needed!!

WOOT! It works for me now:

➜  ~ ipfs name resolve /ipns/k51qzi5uqu5dlnwjrnyyd6sl2i729d8qjv1bchfqpmgfeu8jn1w1p4q9x9uqit
/ipfs/QmcEm5oZ2siV8s17VF44iAHE95SuUa1R8anmNM7HjdAfQc
➜  ~ ipfs routing findprovs QmcEm5oZ2siV8s17VF44iAHE95SuUa1R8anmNM7HjdAfQc
12D3KooWLJ2DwWAMd2a3s5JabdVKM1srt7D2TnwjCjr4hwnqk6zh
➜  ~ ipfs object stat QmcEm5oZ2siV8s17VF44iAHE95SuUa1R8anmNM7HjdAfQc
NumLinks:       3
BlockSize:      175
LinksSize:      173
DataSize:       2
CumulativeSize: 9381446
➜  ~ 

It’s late now, I’ll take another look at your config tomorrow, and post some final cleanup suggestions.
