How do I make my IPNS records live longer?

Does anyone know what unit of measurement the TTL field uses for IPNS? (specs/IPNS.md at main · ipfs/specs · GitHub)

Also, for Kubo's IPNS settings (kubo/config.md at master · ipfs/kubo · GitHub): what does RecordLifetime affect? Both TTL and Validity?

Will a record live in the DHT until Validity expires? Or is there another way it can be dropped?

I’m asking because I currently have all my data live; however, if my laptop goes to sleep for long periods of time, the IPNS record seems to expire and no one can access my content, even though it’s still available. I’m looking to resolve that.


Here is the command I used the last time I republished my default IPNS:

ipfs name publish --lifetime=168h --ttl=5m QmT...

It means that the DHT should keep the record for 7 days, and nodes that look up that record should cache it for 5 minutes before looking it up again.


Awesome, thanks! This will definitely help mitigate the issue in the meantime. I wonder how I could configure Kubo to publish like that by default. I didn’t even think to read the help page for the command, because I was so focused on changing the defaults :stuck_out_tongue:.

Also, keep in mind that your node has to be up 24/7 so that it can “reprovide” the record every 4 hours (that’s the default). And, if you don’t update your IPNS that often, use a longer lifetime.
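
For reference, that republish interval is controlled by Ipns.RepublishPeriod in the Kubo config; something like this should adjust it (the 2h value is just an illustration):

ipfs config Ipns.RepublishPeriod 2h   # if unset, Kubo defaults to 4h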


If I fail to reprovide within the 4-hour window, does the record drop from the DHT? If so, I’m not quite understanding the point of “lifetime”.


What you are missing is the “network churn”. Yes, the nodes you have “provided” to will keep the record for the lifetime you request, but they are not promising to stay up. So the vast majority of the nodes you provide to will be gone within a few hours (the harsh reality of the state of the network). By reproviding every 4 hours, you replenish the DHT (in the hope that not all of those nodes will be gone by the time you do it again, 4 hours from now).


Ah, thanks a lot! I wonder if I could get my VPS to republish the records then. Basically, I find it wild that I can publish a CID and make that CID available over, like, 6 nodes, but then the IPNS record seems to hinge entirely on one machine. I’m trying to figure out how to improve that situation.


IPNS “re-pinning” has been a subject of conversation lately, and I think someone even implemented a service that does it, but generally speaking, yes, only your node can do it (it’s the only one with the private key).


Alrighty I’ll build a custom solution then. Thanks a lot, I really appreciate your help ^-^.


It seems that you should be able to change the default lifetime configuration in Kubo: https://github.com/ipfs/kubo/blob/master/docs/config.md#ipnsrecordlifetime
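
For example, to default every publish to a 7-day lifetime (assuming a stock Kubo config; the value is a duration string):

ipfs config Ipns.RecordLifetime 168h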

I wonder if I could get my VPS to republish the records then. Basically, I find it wild that I can publish a CID and make that CID available over, like, 6 nodes, but then the IPNS record seems to hinge entirely on one machine. I’m trying to figure out how to improve that situation.

I share your bewilderment. It was my understanding that, in terms of the DHT, IPNS records are just like provider records insofar as they’re distributed across the DHT using the XOR metric.

Granted, it is fundamentally different, since a record is mutable, which changes a lot of the assumptions about freshness and how long it’s kept around.

IPNS “re-pinning” has been a subject of conversation lately, and I think someone even implemented a service that does it, but generally speaking, yes, only your node can do it (it’s the only one with the private key).

It’s true that only the private key holder can create new records (with the Sequence incremented), thereby invalidating older records. But since the IPNS record that is distributed is signed, doesn’t that mean that you could theoretically build a service that just observes and republishes a given IPNS key, i.e., the hash of a public key?


It’s true that only the private key holder can create new records (with the Sequence incremented), thereby invalidating older records. But since the IPNS record that is distributed is signed, doesn’t that mean that you could theoretically build a service that just observes and republishes a given IPNS key, i.e., the hash of a public key?

Not only that, but with IPNS via pubsub it is only a matter of listening on the topic for updates.

Someone could also build a service that listens on a specific topic and (on a small scale) “pins” any records received on that topic.
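
As a sketch: assuming Ipns.UsePubsub is enabled on such a node, I believe a single resolve is enough to join the name’s topic, after which the node keeps receiving (and re-serving) updates. The key below is a placeholder:

ipfs name resolve /ipns/k51qzi5uqu5d...   # placeholder name; resolving should subscribe this node to its topic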


Not only that, but with IPNS via pubsub it is only a matter of listening on the topic for updates.

Does that require that the node that originally creates the IPNS record publish it via pubsub? IIRC, IPNS pubsub publishing is not enabled by default (https://github.com/ipfs/kubo/blob/master/docs/config.md#ipnsusepubsub)
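
For what it’s worth, enabling it is a one-line config change (per the docs linked above), followed by a daemon restart:

ipfs config --json Ipns.UsePubsub true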

Problem is, that setting isn’t clearly defined. I have no idea what it does. It says “RecordLifetime”, but does that affect TTL or Validity?

Edit: On second read, it appears it’s “Validity”.

It would, but for my purposes I do have IPNS PubSub enabled.


Let’s consider the following scenario, where you set the:

  • lifetime to 7 days
  • TTL to 10 minutes

  1. You publish the IPNS record and then shut down your IPFS node.
  2. You try to resolve the IPNS record 3 days later via a public gateway; as long as the DHT nodes that you wrote the IPNS record to are still up, you will be able to resolve it.
  3. In the 10 minutes following the request to the gateway, IPNS requests for the same hash will be resolved from cache, without going to the DHT.
  4. On the 4th day after publishing the first record, you decide to create a new record with the same private key and an incremented sequence number. Lifetime and TTL are the same. After publishing the record, you shut down your node again.
  5. You make another request for the IPNS record to a public gateway. Since it’s been over 10 minutes since the last request, it looks up the DHT, finds the new record, and returns it.
  6. Subsequent requests in the 10 minutes following the first request to the gateway will return the record from cache.

Am I understanding the two values (lifetime and TTL) correctly? If so, what’s the tradeoff for setting a high lifetime value?

@Jorropo


@lidel Would you maybe be able to chime in on my last question?


IIRC, DHT nodes will drop stored values after ~24h, no matter what Lifetime and TTL you set.
Someone needs to keep reproviding your signed IPNS record every day, or it will be “forgotten” by the DHT.

Tip: if you enable Ipns.UsePubsub, then the DHT limitation is lifted and you can fully benefit from a longer Lifetime: other peers who are following the same IPNS name can send you a valid, signed IPNS record, even if it is no longer available on the DHT (see Layering persistence onto libp2p PubSub).

Mostly correct. The caveat is that the DHT drops records older than 1 day, and Kubo nodes do not re-provide third-party signed records on the DHT, only on PubSub, and that is not enabled by default.

In the case of the Kubo IPFS implementation, the IPNS settings you can adjust are listed below (a combined config example follows the list):

  • Lifetime (ipfs name publish --lifetime, or the global Ipns.RecordLifetime)
    Controls how long a signed IPNS record is valid (i.e., how long, in theory, a third-party service or peer without the private key can re-publish it on your behalf to keep it alive).
  • TTL (ipfs name publish --ttl)
    Controls how long an IPNS resolver can cache a record before trying to resolve it again (to check if there is a new version).
  • Ipns.RepublishPeriod
    (iirc) Controls how often your node will bump Sequence and publish a new signed IPNS record. It is low enough to account for network churn and the DHT “forgetting” records older than 24h.
  • Ipns.UsePubsub
    Disabled by default in Kubo (tracking issue), but enabling it is a good idea if you plan to publish IPNS records with long Lifetime.
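
Putting those together, a hedged example of tuning all of the above (values are illustrative, not recommendations; <cid> is a placeholder):

ipfs config Ipns.RecordLifetime 168h      # Lifetime: how long the signed record stays valid
ipfs config Ipns.RepublishPeriod 4h       # how often your node republishes while online
ipfs config --json Ipns.UsePubsub true    # opt in to IPNS over PubSub
ipfs name publish --ttl=10m /ipfs/<cid>   # TTL is chosen per publish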

Correct.

We don’t have “IPNS Pinning Services” yet, but someone could create one: a service that periodically resolves an IPNS name via the DHT, listens on the IPNS pubsub channel, and then re-provides long-Lifetime records on both. This “IPNS keepalive” can be done in a “trustless” manner, without revealing private keys to the service.
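
As a rough sketch of the “trustless” part with stock Kubo commands (the name and filename are placeholders; routing get/put move the raw signed record, so no private key is involved):

ipfs routing get /ipns/k51qzi5uqu5d... > record.bin   # fetch the signed record
ipfs routing put /ipns/k51qzi5uqu5d... record.bin     # re-provide the same signed bytes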

Some relevant work I am aware of:


So, if I understand things correctly:

  • re-provide (not actually an English word, but we get it) = act as a host for an IPNS record that is available only in the PubSub routing system for IPNS (aka IPNS over PubSub). In other words, if the original publisher goes offline and others are online, they can still provide the OP’s record. Additionally, each peer will accept or reject the record on a “peer-to-peer” basis if it is expired.

  • republish (1st party) = standard behavior in IPNS; uses the DHT to keep your record alive while your node is online. Others can’t revalidate your record to keep it alive on the network for longer than the OP’s specified lifetime, but they can act as a host of your record for content discovery?

  • republish (3rd party) = the idea of a third party signing your valid record anew to keep it alive in IPNS over the DHT (not currently a thing; would require access to the OP’s private key).

  • revalidate? = IMO, the idea of a third party using some separate key to keep your record alive somehow (also not a thing; would require implementing another set of permission keys or something in IPNS). This would be over the DHT and have the benefits of re-providing records.

Thoughts? @danieln @lidel


Yes, but:

  • having four words starting with re- may not be the best for documenting this :-]
  • only the first two operations exist today (if someone has the private key, they become first-party from the perspective of the system).

Alternative way of describing IPNS operations:

  • providing an IPNS record: happens when an already-signed IPNS record is provided to the network.
    • Since there is no requirement for the private key, this can be done by anyone.
  • updating an IPNS record: happens when a new IPNS record is created and signed using the private key.
    • This operation can be done only by someone who has the private key (aka the “publisher”).

All the additional nuance comes from specific transport (DHT, PubSub) and implementation (Kubo’s internal architecture and/or legacy decisions).
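
In Kubo command terms, the split looks roughly like this (the key name, <cid>, <name>, and record.bin are placeholders):

ipfs name publish --key=self /ipfs/<cid>    # updating: creates and signs a new record (needs the private key)
ipfs routing put /ipns/<name> record.bin    # providing: redistributes already-signed bytes (no key needed)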


Just to confirm I understood this:
Does that mean that the two terms: providing and updating are orthogonal to the transport, i.e., DHT or PubSub?

I believe so:

  • updating (signing a new revision of the record) is transport-independent; you can do that offline
  • the implementation details of providing may depend on the transport, but in the end you always provide the same bytes representing the same signed record