IPNS Stopped Resolving

I have a kubo node running behind NAT; I set it up as per the NAT guidelines. It is listening on port 4001 and the router is port forwarding my public IP to it.

However, IPNS rarely or never resolves (I cannot tell which) except locally, e.g.:

% ipfs resolve /ipns/k51qzi5uqu5dj97sy14ngfq4qcsgml3a852s6tp4djsqqpoeyseruggaec8veh
/ipfs/QmSufjuDKAya5CvHWtoELr1h21tWojF63tyapvJDTrUHbm

This resolves, and that is the CID it should give.

However, if I request the same IPNS name through a public gateway, I get a timeout instead of that CID.
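One quick way to see what a public gateway is actually returning is to ask curl for just the HTTP status code. This is a sketch, assuming the usual `/ipns/` path-gateway layout and outbound HTTPS access:

```shell
# Probe a public gateway for the IPNS name and report only the status code.
NAME="k51qzi5uqu5dj97sy14ngfq4qcsgml3a852s6tp4djsqqpoeyseruggaec8veh"
if command -v curl >/dev/null 2>&1; then
  # -L follows redirects; --max-time caps the wait; -w prints the status code.
  CODE=$(curl -sL -o /dev/null -w "%{http_code}" --max-time 30 \
    "https://ipfs.io/ipns/$NAME" || echo "timeout")
else
  CODE="curl not installed"
fi
echo "gateway answered: $CODE"
```

A 200 means the gateway located and fetched the record; a 504 or a timeout usually means it could not find the IPNS record or content providers in time.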

My configuration is as follows - the “X” is my public IP address.

Telnetting to it on port 4001 returns "/multistream/1.0.0", so a kubo node of some sort (or at least something giving the standard libp2p response) is listening publicly.

Notice that I do have Ipns.UsePubsub enabled - this seemed to "fix" the issue before (IPNS used to work), but now it's just… confused, maybe? I don't know.
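For reference, this is roughly how one would check the IPNS-over-pubsub setting (a read-only sketch; it falls back to a placeholder when no initialized kubo repo is present):

```shell
# Read the Ipns.UsePubsub value from the local kubo config, if available.
if command -v ipfs >/dev/null 2>&1; then
  PUBSUB=$(ipfs config Ipns.UsePubsub 2>/dev/null || echo "no repo found")
else
  PUBSUB="ipfs CLI not installed"
fi
echo "Ipns.UsePubsub: $PUBSUB"
# The equivalent daemon-side switch is the --enable-namesys-pubsub flag.
```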

I use IPFS Companion to publish the IPNS name, and it claims it is seeding to 20 peers successfully.

I am wondering why ipfs.io, dweb.link, etc. can't resolve the IPNS name - or how I might even go about diagnosing this.
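A rough diagnostic sequence (a sketch, not a definitive procedure) is to confirm the node's identity, ask the routing system directly for the IPNS record, and then resolve while bypassing the local cache so a stale cached answer doesn't mask the problem:

```shell
# Diagnose IPNS resolution step by step (requires a running kubo daemon;
# degrades gracefully when none is reachable).
NAME="k51qzi5uqu5dj97sy14ngfq4qcsgml3a852s6tp4djsqqpoeyseruggaec8veh"
if command -v ipfs >/dev/null 2>&1; then
  ipfs id -f="peer id: <id>\n" 2>/dev/null || true
  # Fetch the raw signed IPNS record from the routing system (DHT).
  ipfs routing get "/ipns/$NAME" >/dev/null 2>&1 \
    && RECORD="found in routing system" \
    || RECORD="not retrievable from here"
  # Resolve without the local cache.
  ipfs name resolve --nocache "/ipns/$NAME" 2>/dev/null || true
else
  RECORD="ipfs CLI not installed"
fi
echo "record check: $RECORD"
```

If `ipfs routing get` fails from an unrelated node, the record never made it into the DHT, which would also explain public gateways timing out.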

{
	"API": {
		"HTTPHeaders": {
			"Access-Control-Allow-Methods": [
				"PUT",
				"POST"
			],
			"Access-Control-Allow-Origin": [
				"http://192.168.2.11:5001",
				"http://localhost:3000",
				"http://127.0.0.1:5001",
				"https://webui.ipfs.io",
				"http://67.249.18.121",
				"https://67.249.18.121",
				"/ip6/::/udp/4001/quic"
			]
		}
	},
	"Addresses": {
		"API": "/ip4/0.0.0.0/tcp/5001",
		"Announce": [],
		"AppendAnnounce": [
			"/ip4/X/tcp/4001",
			"/ip4/X/udp/4001/quic",
			"/ip4/X/udp/4001/quic-v1",
			"/ip4/X/udp/4001/quic-v1/webtransport"
		],
		"Gateway": "/ip4/0.0.0.0/tcp/8080",
		"NoAnnounce": [],
		"Swarm": [
			"/ip4/0.0.0.0/tcp/4001",
			"/ip6/::/tcp/4001",
			"/ip4/0.0.0.0/udp/4001/quic",
			"/ip4/0.0.0.0/udp/4001/quic-v1",
			"/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
			"/ip6/::/udp/4001/quic",
			"/ip6/::/udp/4001/quic-v1",
			"/ip6/::/udp/4001/quic-v1/webtransport"
		]
	},
	"AutoNAT": {},
	"Bootstrap": [
		"/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
		"/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
		"/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
		"/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
		"/ip4/104.131.131.82/udp/4001/quic/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
		"/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN"
	],
	"DNS": {
		"Resolvers": {}
	},
	"Datastore": {
		"BloomFilterSize": 0,
		"GCPeriod": "1h",
		"HashOnRead": false,
		"Spec": {
			"mounts": [
				{
					"child": {
						"path": "blocks",
						"shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
						"sync": true,
						"type": "flatfs"
					},
					"mountpoint": "/blocks",
					"prefix": "flatfs.datastore",
					"type": "measure"
				},
				{
					"child": {
						"compression": "none",
						"path": "datastore",
						"type": "levelds"
					},
					"mountpoint": "/",
					"prefix": "leveldb.datastore",
					"type": "measure"
				}
			],
			"type": "mount"
		},
		"StorageGCWatermark": 90,
		"StorageMax": "10GB"
	},
	"Discovery": {
		"MDNS": {
			"Enabled": true
		}
	},
	"Experimental": {
		"AcceleratedDHTClient": false,
		"FilestoreEnabled": false,
		"GraphsyncEnabled": false,
		"Libp2pStreamMounting": false,
		"OptimisticProvide": false,
		"OptimisticProvideJobsPoolSize": 0,
		"P2pHttpProxy": false,
		"StrategicProviding": false,
		"UrlstoreEnabled": false
	},
	"Gateway": {
		"APICommands": [],
		"HTTPHeaders": {
			"Access-Control-Allow-Headers": [
				"X-Requested-With",
				"Range",
				"User-Agent"
			],
			"Access-Control-Allow-Methods": [
				"GET"
			],
			"Access-Control-Allow-Origin": [
				"*"
			]
		},
		"NoDNSLink": false,
		"NoFetch": false,
		"PathPrefixes": [],
		"PublicGateways": null,
		"RootRedirect": ""
	},
	"Identity": {
		"PeerID": "12D3KooWPxCbs6bFm5Z5SAvmrgzSvEcumsvZJgaHiEf4ECz5xAhb"
	},
	"Internal": {},
	"Ipns": {
		"RecordLifetime": "",
		"RepublishPeriod": "",
		"ResolveCacheSize": 128,
		"UsePubsub": true
	},
	"Migration": {
		"DownloadSources": [],
		"Keep": ""
	},
	"Mounts": {
		"FuseAllowOther": false,
		"IPFS": "/ipfs",
		"IPNS": "/ipns"
	},
	"Peering": {
		"Peers": null
	},
	"Pinning": {
		"RemoteServices": {}
	},
	"Plugins": {
		"Plugins": null
	},
	"Provider": {
		"Strategy": ""
	},
	"Pubsub": {
		"DisableSigning": false,
		"Router": ""
	},
	"Reprovider": {},
	"Routing": {
		"Methods": null,
		"Routers": null
	},
	"Swarm": {
		"AddrFilters": null,
		"ConnMgr": {},
		"DisableBandwidthMetrics": false,
		"DisableNatPortMap": false,
		"RelayClient": {
			"Enabled": true
		},
		"RelayService": {},
		"ResourceMgr": {
			"Limits": {}
		},
		"Transports": {
			"Multiplexers": {},
			"Network": {},
			"Security": {}
		}
	}
}

Something is screwy in your config. Here is what I just got back on a quick test:

> ipfs id 12D3KooWPxCbs6bFm5Z5SAvmrgzSvEcumsvZJgaHiEf4ECz5xAhb
Error: failed to dial 12D3KooWPxCbs6bFm5Z5SAvmrgzSvEcumsvZJgaHiEf4ECz5xAhb:
  * [/ip4/192.168.2.11/udp/4001/quic-v1] dial backoff
  * [/ip4/192.168.2.11/tcp/4001] dial backoff
  * [/ip4/67.249.18.121/tcp/1165] dial backoff
  * [/ip4/67.249.18.121/udp/4001/quic-v1] CRYPTO_ERROR 0x12a (local): peer IDs don't match: expected 12D3KooWPxCbs6bFm5Z5SAvmrgzSvEcumsvZJgaHiEf4ECz5xAhb, got 12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw
  * [/ip4/67.249.18.121/tcp/4001] failed to negotiate security protocol: peer id mismatch: expected 12D3KooWPxCbs6bFm5Z5SAvmrgzSvEcumsvZJgaHiEf4ECz5xAhb, but remote key matches 12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw
> nc 67.249.18.121 4001
/multistream/1.0.0
^C
>  

I did rotate the keys a few hours ago while working with someone on the Discord server to see if something was wrong. So that ID doesn't "exist" any longer…
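For anyone following along: kubo can rotate the node identity in place with `ipfs key rotate` (the daemon must be stopped first). Because this permanently changes the peer ID, the sketch below only prints the command rather than running it; the backup key name "old-self" is an illustrative choice, not a requirement:

```shell
# Print the identity-rotation command rather than executing it, since
# rotation is destructive and requires the daemon to be stopped.
ROTATE_CMD="ipfs key rotate --oldkey=old-self --type=ed25519"
echo "would run: $ROTATE_CMD"
```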

And it appears it may be working, albeit very slowly - as if whichever random peers happen to know about the record I've published are slow at answering the resolve.
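When resolution is slow rather than broken, the DHT tuning flags on `ipfs name resolve` can help. A sketch, assuming a running daemon (the flag values here are illustrative, not recommendations):

```shell
# Resolve with an uncached lookup, a small record quorum, and an explicit
# DHT search timeout; falls back to a message when no daemon is reachable.
NAME="k51qzi5uqu5dj97sy14ngfq4qcsgml3a852s6tp4djsqqpoeyseruggaec8veh"
if command -v ipfs >/dev/null 2>&1; then
  RESOLVED=$(ipfs name resolve --nocache --dht-record-count=4 \
    --dht-timeout=1m "/ipns/$NAME" 2>/dev/null || echo "resolve failed here")
else
  RESOLVED="ipfs CLI not installed"
fi
echo "$RESOLVED"
```

`--dht-record-count` controls how many records are gathered before picking the best one, and `--dht-timeout` bounds how long the search keeps going ("0" disables the timeout).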

PS C:\Users\Dell\Downloads\kubo> .\ipfs.exe id 12D3KooWPxCbs6bFm5Z5SAvmrgzSvEcumsvZJgaHiEf4ECz5xAhb
Error: failed to dial 12D3KooWPxCbs6bFm5Z5SAvmrgzSvEcumsvZJgaHiEf4ECz5xAhb:
  * [/ip4/67.249.18.121/udp/4001/quic-v1] CRYPTO_ERROR 0x12a (local): peer IDs don't match: expected 12D3KooWPxCbs6bFm5Z5SAvmrgzSvEcumsvZJgaHiEf4ECz5xAhb, got 12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw
  * [/ip4/192.168.2.11/udp/4001/quic-v1] CRYPTO_ERROR 0x12a (local): peer IDs don't match: expected 12D3KooWPxCbs6bFm5Z5SAvmrgzSvEcumsvZJgaHiEf4ECz5xAhb, got 12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw
  * [/ip4/67.249.18.121/tcp/4001] failed to negotiate security protocol: peer id mismatch: expected 12D3KooWPxCbs6bFm5Z5SAvmrgzSvEcumsvZJgaHiEf4ECz5xAhb, but remote key matches 12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw
  * [/ip4/192.168.2.11/tcp/4001] failed to negotiate security protocol: peer id mismatch: expected 12D3KooWPxCbs6bFm5Z5SAvmrgzSvEcumsvZJgaHiEf4ECz5xAhb, but remote key matches 12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw
  * [/ip4/67.249.18.121/tcp/1165] dial tcp4 0.0.0.0:4001->67.249.18.121:1165: i/o timeout

…is what I get if I dial the old ID, but I don't believe that's an error (the ID ending in Sjw is the current ID of the node in question).

That's the problem: your config file still contains the old ID and private key. That's what needs to be fixed.
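A quick way to check this is to compare the identity stored in the repo's config with the identity the node actually reports (a sketch; it degrades gracefully when no repo or daemon is present):

```shell
# Compare the configured peer ID with the one the running node reports.
if command -v ipfs >/dev/null 2>&1; then
  CONFIG_ID=$(ipfs config Identity.PeerID 2>/dev/null || echo "no repo found")
  LIVE_ID=$(ipfs id -f="<id>" 2>/dev/null || echo "daemon not reachable")
else
  CONFIG_ID="ipfs CLI not installed"
  LIVE_ID="$CONFIG_ID"
fi
echo "config: $CONFIG_ID"
echo "live:   $LIVE_ID"
# If these disagree, the running daemon is reading a different IPFS_PATH.
```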

No, it doesn’t.

{
  "API": {
    "HTTPHeaders": {
      "Access-Control-Allow-Methods": [
        "PUT",
        "POST"
      ],
      "Access-Control-Allow-Origin": [
        "http://192.168.2.11:5001",
        "http://localhost:3000",
        "http://127.0.0.1:5001",
        "https://webui.ipfs.io",
        "http://67.249.18.121",
        "https://67.249.18.121",
        "/ip6/::/udp/4001/quic"
      ]
    }
  },
  "Addresses": {
    "API": "/ip4/0.0.0.0/tcp/5001",
    "Announce": [],
    "AppendAnnounce": [
      "/ip4/67.249.18.121/tcp/4001",
      "/ip4/67.249.18.121/udp/4001/quic",
      "/ip4/67.249.18.121/udp/4001/quic-v1",
      "/ip4/67.249.18.121/udp/4001/quic-v1/webtransport"
    ],
    "Gateway": "/ip4/0.0.0.0/tcp/8080",
    "NoAnnounce": [],
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip6/::/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/quic",
      "/ip4/0.0.0.0/udp/4001/quic-v1",
      "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
      "/ip6/::/udp/4001/quic",
      "/ip6/::/udp/4001/quic-v1",
      "/ip6/::/udp/4001/quic-v1/webtransport"
    ]
  },
  "AutoNAT": {},
  "Bootstrap": [
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip4/104.131.131.82/udp/4001/quic/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN"
  ],
  "DNS": {
    "Resolvers": {}
  },
  "Datastore": {
    "BloomFilterSize": 0,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "mounts": [
        {
          "child": {
            "path": "blocks",
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
            "sync": true,
            "type": "flatfs"
          },
          "mountpoint": "/blocks",
          "prefix": "flatfs.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    },
    "StorageGCWatermark": 90,
    "StorageMax": "10GB"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": true
    }
  },
  "Experimental": {
    "AcceleratedDHTClient": false,
    "FilestoreEnabled": false,
    "GraphsyncEnabled": false,
    "Libp2pStreamMounting": false,
    "OptimisticProvide": false,
    "OptimisticProvideJobsPoolSize": 0,
    "P2pHttpProxy": false,
    "StrategicProviding": false,
    "UrlstoreEnabled": false
  },
  "Gateway": {
    "APICommands": [],
    "HTTPHeaders": {
      "Access-Control-Allow-Headers": [
        "X-Requested-With",
        "Range",
        "User-Agent"
      ],
      "Access-Control-Allow-Methods": [
        "GET"
      ],
      "Access-Control-Allow-Origin": [
        "*"
      ]
    },
    "NoDNSLink": false,
    "NoFetch": false,
    "PathPrefixes": [],
    "PublicGateways": null,
    "RootRedirect": ""
  },
  "Identity": {
    "PeerID": "12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
    "PrivKey": "XXX"
  },
  "Internal": {},
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128,
    "UsePubsub": true
  },
  "Migration": {
    "DownloadSources": [],
    "Keep": ""
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Peering": {
    "Peers": null
  },
  "Pinning": {
    "RemoteServices": {}
  },
  "Plugins": {
    "Plugins": null
  },
  "Provider": {
    "Strategy": ""
  },
  "Pubsub": {
    "DisableSigning": false,
    "Router": ""
  },
  "Reprovider": {},
  "Routing": {
    "Methods": null,
    "Routers": null
  },
  "Swarm": {
    "AddrFilters": null,
    "ConnMgr": {},
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": false,
    "RelayClient": {
      "Enabled": true
    },
    "RelayService": {},
    "ResourceMgr": {
      "Limits": {}
    },
    "Transports": {
      "Multiplexers": {},
      "Network": {},
      "Security": {}
    }
  }
}

I redacted the private key with XXX (because that's probably meant to be private :P) but the rest is the current configuration verbatim.

Aha! That explains that. I was trying to connect with the old ID which was still in the DHT and resolved to the same IP address. But, once connected, it returned a different ID, which caused the problem.

Here is a new test, with the new ID. It seems to work properly.

> ipfs id 12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw
{
	"ID": "12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
	"PublicKey": "CAESINKX0BXL9Q7p0KrXrngb/bqxkVvmSovSVbUuMwhFoUGG",
	"Addresses": [
		"/ip4/127.0.0.1/tcp/4001/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip4/127.0.0.1/udp/4001/quic-v1/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip4/127.0.0.1/udp/4001/quic-v1/webtransport/certhash/uEiDXmFrIfqpnZC9HwqSsqtclbcpwPKZZPaMvXmaD296nTw/certhash/uEiDAPFJIPsYFUtuwXDVocrm45IDuWbkDGxCNNAYUAjM4Hw/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip4/127.0.0.1/udp/4001/quic/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip4/192.168.2.11/tcp/4001/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip4/192.168.2.11/udp/4001/quic-v1/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip4/192.168.2.11/udp/4001/quic-v1/webtransport/certhash/uEiDXmFrIfqpnZC9HwqSsqtclbcpwPKZZPaMvXmaD296nTw/certhash/uEiDAPFJIPsYFUtuwXDVocrm45IDuWbkDGxCNNAYUAjM4Hw/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip4/192.168.2.11/udp/4001/quic/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip4/67.249.18.121/tcp/1187/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip4/67.249.18.121/tcp/4001/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip4/67.249.18.121/udp/4001/quic-v1/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip4/67.249.18.121/udp/4001/quic-v1/webtransport/certhash/uEiDXmFrIfqpnZC9HwqSsqtclbcpwPKZZPaMvXmaD296nTw/certhash/uEiDAPFJIPsYFUtuwXDVocrm45IDuWbkDGxCNNAYUAjM4Hw/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip4/67.249.18.121/udp/4001/quic/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip6/::1/tcp/4001/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip6/::1/udp/4001/quic-v1/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip6/::1/udp/4001/quic-v1/webtransport/certhash/uEiDXmFrIfqpnZC9HwqSsqtclbcpwPKZZPaMvXmaD296nTw/certhash/uEiDAPFJIPsYFUtuwXDVocrm45IDuWbkDGxCNNAYUAjM4Hw/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw",
		"/ip6/::1/udp/4001/quic/p2p/12D3KooWPzS3pN5nMuJMQbbEjXA9oAQXTHxpRDGxC7NuZK4drSjw"
	],
	"AgentVersion": "kubo/0.20.0/",
	"ProtocolVersion": "ipfs/0.1.0",
	"Protocols": [
		"/floodsub/1.0.0",
		"/ipfs/bitswap",
		"/ipfs/bitswap/1.0.0",
		"/ipfs/bitswap/1.1.0",
		"/ipfs/bitswap/1.2.0",
		"/ipfs/id/1.0.0",
		"/ipfs/id/push/1.0.0",
		"/ipfs/kad/1.0.0",
		"/ipfs/lan/kad/1.0.0",
		"/ipfs/ping/1.0.0",
		"/libp2p/autonat/1.0.0",
		"/libp2p/circuit/relay/0.2.0/hop",
		"/libp2p/circuit/relay/0.2.0/stop",
		"/libp2p/dcutr",
		"/libp2p/fetch/0.0.1",
		"/meshsub/1.0.0",
		"/meshsub/1.1.0",
		"/x/"
	]
}

I tried your IPNS address, and it resolved quickly.

> ipfs name resolve /ipns/k51qzi5uqu5dj97sy14ngfq4qcsgml3a852s6tp4djsqqpoeyseruggaec8veh
/ipfs/QmSufjuDKAya5CvHWtoELr1h21tWojF63tyapvJDTrUHbm

The block also resolved.

> ipfs object stat QmSufjuDKAya5CvHWtoELr1h21tWojF63tyapvJDTrUHbm
NumLinks:       0
BlockSize:      66700
LinksSize:      4
DataSize:       66696
CumulativeSize: 66700

So, things appear to work properly from here.


Indeed; it seems that forcing an ID change plus one (or more!) of my restarts got things working again. Unfortunately, now that it does work it's hard to figure out why it wasn't working, but I'm going to assume it will stay working… until it doesn't again.


NB. Thank you for verifying that it does work and for looking at my configuration - I forgot to say so in my last message(s).
