Understanding guarantees of the improved IPNS over pubsub

I know that go-ipfs 0.5 landed improved IPNS over pubsub, but I am struggling to find information about its mechanism and guarantees. I found some bits in the experimental-features docs, but there are still a few questions unanswered.

I am mainly interested in what happens with records of offline nodes.

  1. I know that for the old DHT method you have to republish the IPNS record once in a while; is that also needed for pubsub?
  2. If a record is kept alive “in pubsub”, then it most probably can’t be retrieved using the DHT method, right?
  3. If republishing or something else is needed to keep a record alive in the network, can it be delegated to somebody else? Preferably without giving away the private key?

@adin could you please comment on these?

Sort of. IPNS over PubSub periodically rebroadcasts records as a backup in case any issues arise. Additionally, IPNS records in general must be republished once their “lifetime” has elapsed (i.e. every record has an expiration time so that the publisher can guarantee some degree of freshness of the data; more info in the spec).
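To make the lifetime mechanics concrete, below is a minimal Go sketch of the EOL (“end of life”) calculation; the 24-hour lifetime is just an example value, and the RFC3339 timestamp format is per the IPNS spec.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A record's "lifetime" translates into an EOL timestamp embedded in
	// the signed record (RFC3339 per the IPNS spec). After this instant
	// resolvers treat the record as expired, so the publisher must re-sign
	// and republish before then.
	lifetime := 24 * time.Hour // example value
	eol := time.Now().Add(lifetime).UTC().Format(time.RFC3339Nano)
	fmt.Println("record valid until:", eol)
}
```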

As described in that doc, go-ipfs will publish to BOTH the DHT and PubSub. However, if for some reason you published IPNS record v7 over PubSub and v8 over the DHT, go-ipfs will not find v8 for you, since it won’t waste time doing a DHT query when the data is already available over PubSub.

Yes, this is actually achievable now (as long as the record isn’t expired) for both PubSub and the DHT; it’s just that there are unfortunately not yet APIs to make third-party republishing of IPNS records easy.

  • DHT
  • PubSub
    • This will actually work by default if you just keep a third-party node online. However, if your third-party node restarts, it won’t keep rebroadcasting the record.
    • You can manually publish the IPNS record to the channel with the correct name if you want (the pubsub topic name can be computed with the tool above or by following the spec; see the sketch below).
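If you go the manual route, here is a minimal Go sketch of computing the topic name from the definition in the spec (quoted further down in this thread); the key is hypothetical and the mr-tron/base58 package is assumed for decoding.

```go
package main

import (
	"encoding/base64"
	"fmt"

	"github.com/mr-tron/base58"
)

func main() {
	// Hypothetical IPNS name: the base58-encoded multihash of the public key.
	ipnsKey := "QmWAu9aR9FjeeiWDoGnQMwgdJ9Cmsp485EFkx3y9Wjcu6v"

	// Decode back to the binary multihash (BINARY_ID in the spec).
	binID, err := base58.Decode(ipnsKey)
	if err != nil {
		panic(err)
	}

	// Topic name per the spec: /record/base64url-unpadded("/ipns/" + BINARY_ID)
	topic := "/record/" + base64.RawURLEncoding.EncodeToString(append([]byte("/ipns/"), binID...))
	fmt.Println(topic)
}
```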

Note about lifetimes: There’s currently a bug in go-ipfs where the republisher will clobber the lifetime you originally set after a republish (IPNS republisher ignores existing EOLs, ipfs/kubo#7537).


Thanks a lot! I will have to revisit our solution and may come up with some follow-up questions. Thanks again!

@adin

As described in that doc go-ipfs will publish to BOTH the DHT and PubSub

I do see the published values in both places on my machine, but I’m having trouble finding my IPNS value on the DHT from other machines.

:white_check_mark: go-ipfs$ ipfs name resolve // resolves ok
:white_check_mark: go-ipfs$ ipfs dht get /ipns/Qmhashhhh // shows value ok

BUT, when I go to a go-IPFS node on another machine, I cannot
:x: go-ipfs-another-machine$ ipfs name resolve // Error: routing: not found

Even though the 2 nodes are connected:
:white_check_mark: go-ipfs-another-machine$ ipfs swarm connect /ip4/..../multiaddr // connected ok

So I try to see the providers, but
:x: go-ipfs$ ipfs dht findprovs /ipns/Qmhashhhhh // Error: selected encoding not supported

So, my many questions are:
:question:Any ideas on how to troubleshoot why my published value isn’t showing elsewhere on the DHT?

:question: What encoding is required for ipfs dht findprovs so I can see where my values are being replicated?

:question: I presume I don’t have to manually ipfs dht provide, do I? If I do: 1) is there a way to do this automatically, and 2) is there any special encoding I need for the key in ipfs dht provide <key>? (I tried ipfs dht provide /ipns/QmHashhhh but the encoding is not supported; is it b64 or b32?)

@DougAnderson444 Without a little more information I cannot tell you why ipfs name resolve /ipns/QmKeyHash isn’t resolving on the other machine. It also seems like this issue has very little to do with IPNS over PubSub and is likely a result of either networking/configuration issues (more likely) or other IPNS issues (also possible).

Below is some more information on IPNS over PubSub that should clarify a bit more how it interacts with the DHT and clear up some of your questions/issues. Lmk if you have any more questions/concerns.

DHT usage by IPNS

DHT FindProvs and IPNS over PubSub

TLDR: ipfs dht findprovs $ipnsDHTRendezvous, where ipnsDHTRendezvous is defined for a given base58-encoded multihash of an IPNS public key QmIPNSKey as multihash(sha256, "floodsub:" + "/record/" + base64url-unpadded("/ipns/" + base58Decode(QmIPNSKey))). Note that while ipfs dht findprovs internally uses multihashes, it takes a CID; the go-libp2p-discovery code deals with this by creating a CIDv1 with the Raw multicodec.
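Here is that TLDR as a Go sketch (the key is hypothetical; it assumes the go-cid, go-multihash, and mr-tron/base58 packages):

```go
package main

import (
	"encoding/base64"
	"fmt"

	"github.com/ipfs/go-cid"
	"github.com/mr-tron/base58"
	mh "github.com/multiformats/go-multihash"
)

func main() {
	// Hypothetical base58-encoded multihash of an IPNS public key.
	binID, err := base58.Decode("QmWAu9aR9FjeeiWDoGnQMwgdJ9Cmsp485EFkx3y9Wjcu6v")
	if err != nil {
		panic(err)
	}

	// PubSub topic: /record/base64url-unpadded("/ipns/" + BINARY_ID)
	topic := "/record/" + base64.RawURLEncoding.EncodeToString(append([]byte("/ipns/"), binID...))

	// Rendezvous: multihash(sha256, "floodsub:" + topic)
	rendezvous, err := mh.Sum([]byte("floodsub:"+topic), mh.SHA2_256, -1)
	if err != nil {
		panic(err)
	}

	// Wrap in a CIDv1 with the Raw codec, the form `ipfs dht findprovs` accepts.
	fmt.Println(cid.NewCidV1(cid.Raw, rendezvous).String())
}
```

You can then pass the printed CID directly to ipfs dht findprovs.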

Recall IPNS over PubSub is not enabled by default (as of go-ipfs v0.6.0) and requires --enable-namesys-pubsub.

ipfs dht findprovs finds Provider Records (i.e. the multiaddrs of peers who have advertised some interest in a particular key, in this case a multihash). This is used for finding peers who have IPFS content, as well as for finding peers who have expressed interest in an IPNS over PubSub topic.

The key used for IPNS over PubSub provider records is SHA256(“floodsub:” + IPNS-over-PubSub Topic Name). The IPNS over PubSub topic name is specified/defined; however, the IPNS over PubSub DHT record isn’t yet in the spec.

The IPNS over PubSub topic name is defined in https://github.com/ipfs/specs/blob/master/naming/pubsub.md#translating-an-ipns-record-name-tofrom-a-pubsub-topic and https://github.com/ipfs/specs/blob/master/IPNS.md as /record/base64url-unpadded("/ipns/BINARY_ID"), where BINARY_ID is the wire representation of the multihash of the IPNS public key.

I’ve added some tools for calculating some of these identifiers to https://github.com/aschmahmann/ipns-utils, and just (as of a few minutes ago) added a new function for calculating the DHT rendezvous record name.

DHT IPNS Records

IPNS records (i.e. the full record as defined in the IPNS spec, which contains things like the path it points to, e.g. /ipfs/QmMyData) are published to the DHT using the equivalent of ipfs dht put $key $value, where the value is the IPNS record and the key is /ipns/BINARY_ID (defined above and in the spec). A sketch follows.
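Here is a rough sketch of what that put looks like programmatically, assuming any libp2p routing.ValueStore (the go-libp2p Kademlia DHT implements this interface) and an already-serialized record; this is illustrative, not the actual go-ipfs publishing code path:

```go
package ipnsrepub

import (
	"context"

	"github.com/libp2p/go-libp2p-core/routing"
	"github.com/mr-tron/base58"
)

// publishIPNSRecord puts a protobuf-serialized IPNS record into a
// routing.ValueStore. ipnsKey is the base58 IPNS name (QmFoo...);
// the routing key is its binary form prefixed with "/ipns/".
func publishIPNSRecord(ctx context.Context, vs routing.ValueStore, ipnsKey string, record []byte) error {
	binID, err := base58.Decode(ipnsKey)
	if err != nil {
		return err
	}
	// Note: the key uses the raw multihash bytes, not the base58 string.
	return vs.PutValue(ctx, "/ipns/"+string(binID), record)
}
```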


IPNS over PubSub periodically rebroadcasts records as a backup in case any issues arise.

@adin
is the behaviour the same for IPNS over the DHT?

@rick-li IPNS as a whole has a rebroadcast mechanism (https://github.com/ipfs/go-ipfs/blob/master/docs/config.md#ipnsrepublishperiod). IPNS over PubSub has an additional rebroadcast mechanism as well.

Thanks @adin

@AuHau

I found it’s actually supported by the ipfs cmd.
ipfs dht get /ipns/QmWAu9aR9FjeeiWDoGnQMwgdJ9Cmsp485EFkx3y9Wjcu6v > ipns_record

The output is the protobuf-marshalled IPNS record. Share the file with another node and run
ipfs dht put /ipns/QmWAu9aR9FjeeiWDoGnQMwgdJ9Cmsp485EFkx3y9Wjcu6v ./ipns_record
and it will broadcast the record to the DHT.

But it leads me to another question: if someone holds an old record and keeps putting it to the DHT, would that prevent other nodes from getting the new record? Assuming the old record has not expired.
Edit:
I just tested: republishing a valid old record will revert the IPNS record, and the DHT can’t tell which one is newer. Should ipfs include a publish time in the record and discard the old record if a newer one is published?

I found it’s actually supported by the ipfs cmd.

:+1:

I just tested: republishing a valid old record will revert the IPNS record, and the DHT can’t tell which one is newer

The DHT can tell which one is newer based on the sequence numbers in the records and will reject older records. The code that does that is here https://github.com/ipfs/go-ipns/blob/5976a80227cc5199414119585cca347bc814647a/record.go#L103 and https://github.com/libp2p/go-libp2p-kad-dht/blob/57a258ff447b13a34ee75c2de13911d98fa0706c/handlers.go#L195
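For intuition, here’s a simplified Go sketch of that selection rule (the types are illustrative stand-ins, not the actual protobuf records; see the go-ipns link for the real logic): the higher sequence number wins, and on a tie the record that stays valid longer is preferred.

```go
package main

import "fmt"

// Illustrative stand-in for the fields the validator compares.
type record struct {
	Sequence uint64
	EOL      int64 // expiration, unix seconds
}

// selectRecord mirrors the gist of the IPNS validator's Select.
func selectRecord(a, b record) record {
	if a.Sequence != b.Sequence {
		if a.Sequence > b.Sequence {
			return a
		}
		return b
	}
	if a.EOL >= b.EOL {
		return a
	}
	return b
}

func main() {
	older := record{Sequence: 7, EOL: 1700000000}
	newer := record{Sequence: 8, EOL: 1690000000}
	// The sequence-8 record wins even though it expires sooner.
	fmt.Println(selectRecord(older, newer))
}
```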

I’m not sure exactly how you crafted and published your records, but if you try to do an ipfs dht put with an old record, your node won’t even publish it and you’ll get:

ipfs dht put /ipns/YOUR_KEY ipns_record
Error: can't replace a newer value with an older value

Thanks @adin for the details.
I just tried again and I can reproduce it:
node1 = 0.7.0-rc1
node2 = 0.6.0 without --enable-namesys-pubsub
node1 - ipfs name publish -t 24h --ttl 1h QmHashOld
node2 - ipfs dht get /ipns/QmIPNS > ipns_record
node1 made some new changes and ran ipfs name publish -t 24h --ttl 1h QmHashNew
node2 - ipfs dht put /ipns/QmIPNS ./ipns_record // I don’t see any error here.
      - ipfs name resolve -n QmIPNS // returns QmHashOld
Even if I restart node2, it still resolves to QmHashOld.

I do see the “can’t replace a newer value with an older value” error if I republish the old record on the original node, but it seems the validation doesn’t work from a different node.

@rick-li I assume you mean QmHashNew instead of QmHashOld here, right?


So the behavior you’re experiencing is actually the result of a weird interaction: publishing IPNS records from more than one node is unsupported behavior, and so it occasionally runs into weird corner cases.

As described in the issue below, essentially what’s happening is that your node stops searching the network once it already has a local copy of the record. If you spin up a new node node3 and have it do ipfs name resolve QmIPNS you should get the latest record. Thanks for prompting me to dig into it :smile:

Sorry, I meant QmHashNew here; let me correct it.

Thanks for the explanation. I spun up a new node and confirmed it can get the new hash.

Will PubSub do the same validation? I don’t seem to find it in https://github.com/libp2p/go-libp2p-pubsub/blob/efd56962bced7064fce46dc412971c1182a807f1/validation.go
Thanks!

Yes. The code you linked to is for pubsub validators, which are functions defined per topic. The validator for IPNS over PubSub topics is defined here https://github.com/libp2p/go-libp2p-pubsub-router/blob/ac8fa95f2c05627619a2d650bf00b729d8e25d40/pubsub.go#L190 (well, technically these are for arbitrary records; the IPNS part is passed into the NewPubsubValueStore function when it’s called in go-ipfs).
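For illustration, here’s a minimal sketch of how a per-topic validator gets wired into go-libp2p-pubsub; validateRecord is a hypothetical stand-in for the real IPNS record validation done by the code linked above.

```go
package example

import (
	"context"

	"github.com/libp2p/go-libp2p-core/peer"
	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// registerValidator attaches a validator to one topic. Messages that fail
// validation are dropped and not propagated to other peers.
func registerValidator(ps *pubsub.PubSub, topic string) error {
	return ps.RegisterTopicValidator(topic, func(ctx context.Context, from peer.ID, msg *pubsub.Message) bool {
		return validateRecord(msg.GetData()) // hypothetical: check signature, expiry, sequence
	})
}

func validateRecord(data []byte) bool {
	// ... real code would unmarshal the IPNS record and validate it ...
	return len(data) > 0
}
```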


Since I manually put the IPNS record into the local DHT, it now just returns the old record immediately without querying the remote network for a newer one. I understand this is being investigated, but is there a way to clean up the local DHT record?

You’d have to manually mess with the repository in order to do it. The key should be as used here https://github.com/libp2p/go-libp2p-kad-dht/blob/57a258ff447b13a34ee75c2de13911d98fa0706c/handlers.go#L174, which should be

base32.RawStdEncoding.EncodeToString([]byte("/ipns/" + BINARY_ID))
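Putting that together, here’s a minimal Go sketch of computing the datastore key for a given IPNS name (the key is hypothetical; it assumes the mr-tron/base58 package and that go-datastore keys are prefixed with “/”):

```go
package main

import (
	"encoding/base32"
	"fmt"

	"github.com/mr-tron/base58"
)

func main() {
	// Hypothetical IPNS name whose local record we want to find.
	binID, err := base58.Decode("QmWAu9aR9FjeeiWDoGnQMwgdJ9Cmsp485EFkx3y9Wjcu6v")
	if err != nil {
		panic(err)
	}

	// Datastore key: "/" + base32-raw("/ipns/" + BINARY_ID)
	routingKey := append([]byte("/ipns/"), binID...)
	fmt.Println("/" + base32.RawStdEncoding.EncodeToString(routingKey))
}
```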

I recently hacked together a very basic tool for messing with the datastore: https://github.com/aschmahmann/ipfs-ds. If it’s not enough to get the job done, feel free to respond here or open an issue on that repo.
