IPNS name resolution fails the first time name.publish and name.resolve are called, but works on the next call

We’ve raised this issue on GitHub but have had no luck solving it. We would appreciate any feedback or suggestions.


@danieln once an IPNS record is updated, can we switch between consistency and availability depending on the situation?
Right after the record is updated, if someone tries to resolve it, we would want them to get the latest record, with the downside that the resolve might fail; in this scenario consistency is critical. In the normal scenario, where someone just wants to persist their IPNS names, availability will do the job. So, is it possible to switch between consistency and availability?

To get this working, we are publishing and resolving twice, where the first resolve causes the kubo node to start subscribing to the pubsub topic for the IPNS name, as explained by achingbrain on our GitHub issue.
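The resolve-twice workaround can be sketched as a small retry helper. This is a sketch, not the exact code from our app: `resolveOnce` is a hypothetical wrapper around a single `ipfs.name.resolve` call, and the first attempt is allowed to fail while the node is still joining the pubsub topic for the IPNS name.

```javascript
// Sketch of the publish-then-resolve-twice workaround. `resolveOnce` is a
// placeholder for whatever performs a single `ipfs.name.resolve` call; the
// first attempt may fail while the node is still subscribing to the pubsub
// topic for the IPNS name, so we swallow that failure and try again.
async function resolveWithRetry (resolveOnce, name, attempts = 2) {
  let lastError
  for (let i = 0; i < attempts; i++) {
    try {
      return await resolveOnce(name)
    } catch (err) {
      lastError = err // the first failure just primes the subscription
    }
  }
  throw lastError
}
```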

But we have now observed that if we set the lifetime to a low value such as “12min” or as high as “100day”, the record that was published the first time keeps being resolved even after we update the IPNS name to new records multiple times. Based on the helpful details and insights shared by @lidel, @danieln and tinytb, we assumed that even though the record has not expired, any new updates to the IPNS record would be returned when resolved.

If the expected behaviour is to resolve to the latest record, then this might be a bug. Also, if it is supposed to resolve to the latest record, that means a user can choose not to have a re-publishing mechanism, since a longer lifetime can be set when using IPNS with PubSub enabled (thanks to @danieln, tinytb and @lidel for pointing this out). Anyone looking for IPNS persistence over pubsub can set the lifetime to a big enough value.
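For reference, a minimal sketch of how the lifetime is passed when publishing, assuming an `ipfs-http-client`-style options object; the key name `'my-key'` and the `'2160h'` duration are illustrative values, not recommendations:

```javascript
// Build the options object passed to ipfs.name.publish. The lifetime uses
// Go-style duration syntax; '2160h' (~90 days) is an illustrative value.
function publishOptions (keyName, lifetime) {
  return { key: keyName, lifetime, allowOffline: false }
}

// Usage against a running daemon (hypothetical `cid` and client instance):
//   const ipfs = create({ url: 'http://localhost:5001' })
//   await ipfs.name.publish(cid, publishOptions('my-key', '2160h'))

console.log(publishOptions('my-key', '2160h'))
```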

We have enabled PubSub for the above-mentioned scenario and are running our IPFS node like so:

ipfs daemon --enable-namesys-pubsub --enable-pubsub-experiment

Does this ever happen after the lifetime of the first record has expired? If so, that is likely a bug.

A couple of things that could be happening here:

  • For whatever reason, the newer record is not properly propagated
  • When the newer record is created, the sequence number isn’t incremented (this would likely be a bug)
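On the second point: per the IPNS spec, a resolver choosing between two records for the same name prefers the higher sequence number, falling back to the later validity (EOL) on a tie. A minimal sketch of that rule, with illustrative field names:

```javascript
// IPNS record selection: the higher sequence number wins; on a tie, the
// record with the later validity (EOL timestamp) wins. If a new publish
// reused the old sequence number, resolvers would keep the old record.
function selectRecord (a, b) {
  if (a.sequence !== b.sequence) return a.sequence > b.sequence ? a : b
  return a.validity > b.validity ? a : b
}
```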

I think the best thing to do here as a next step is to create a runnable reproduction repo with as much detail as possible, so that the engineers can take a look at it. Based on our previous conversation, I presume you’re using js-ipfs?


Hi @danieln,

We have created a runnable reproduction repo, which can be found here: GitHub - imthe-1/keychain-ipns-sample at debug/lifetime-parameter.

It is basically a React app that does the publishing and resolution and logs the results to the console tab in the browser dev tools. We used the Chrome browser to run it.

Please do the following to get the code sample working as intended:

  1. Run IPFS node with namesys-pubsub and pubsub-experiment enabled:

ipfs daemon --enable-namesys-pubsub --enable-pubsub-experiment

  2. Change lines 6 and 7 in “src/App.js” to match your IPFS node URL and its websocket port respectively; they are currently hardcoded to:
const IPFS_NODE = 'http://localhost:5001';
const IPFS_WS_PORT = 4001;
  3. Run: npm i
  4. Run: npm start

Note: we have tested the above code sample using kubo v0.17.0 and v0.18.1, and in both cases the 2nd/3rd/4th/… updates via IPNS publish were never resolved; the resolved value was always the first record we published when the lifetime was set to a larger value, in this case “3day”.
We have modified the standard ipfs-core, libp2p and libp2p-crypto libraries to accept an optional privateKey parameter for secp256k1 IPNS keypairs, which allows us to create deterministic IPNS keypairs and names.
While using kubo v0.18.1, the code sample might throw an error related to the quic-v1 protocol.

Apologies for sharing this late: we ran into issues related to the quic-v1 protocol and could not share the repo earlier because of them. If you hit this on your side, try removing addresses with the quic-v1 protocol if possible, or downgrade to kubo v0.17.0.
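One way to drop the quic-v1 listen addresses is to overwrite the swarm config. This is a sketch assuming kubo's default swarm setup; adjust the ports (in particular the websocket port, which src/App.js expects) to match your own node:

```shell
# Overwrite the swarm listen addresses with tcp and websocket entries only,
# omitting the quic-v1 ones, then restart the daemon. Ports are examples.
ipfs config --json Addresses.Swarm \
  '["/ip4/0.0.0.0/tcp/4001", "/ip4/0.0.0.0/tcp/4002/ws"]'
ipfs daemon --enable-namesys-pubsub --enable-pubsub-experiment
```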

Please let me know if you face any issues. Thanks!
