It’s actually a combination of DNS (long term) and HTTP (fast updates). Again, the 12h limitation is a limitation of the DHT, not IPNS specifically. They’re two different systems.
For example, we’re currently working on making IPNS work over pubsub (we’ll likely have even shorter lifetimes for those records) to better cover the HTTP case (faster updates). As I said before, we’d also like some form of blockchain solution (would solve the 12h limit and work well for long-lived records) but that has yet to be implemented or even fully designed, thoughts here:
How does the 12h limitation solve this? You could write a script that generates a large number of keys and publishes values for those keys as quickly as possible to overload the DHT. I suspect that you could do some real damage with just a single powerful node in 12h.
At the moment, you could probably do this. Eventually, we’ll likely need to find a way to protect against DoS attacks (e.g., proof of work, reputation, etc.). However, for now, the 12h limitation means that nobody will expect records to stick around forever so nodes that become overloaded can flush infrequently requested IPNS records (not currently implemented because this isn’t on fire (yet)).
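To make the flushing idea concrete, here's a hypothetical sketch (not go-ipfs code): a toy in-memory store where every record carries the time it was received, so a node under memory pressure can simply drop anything older than the 12h lifetime and rely on publishers to republish.

```python
import time

class RecordStore:
    """Toy in-memory store of IPNS-style records with a fixed lifetime.

    Hypothetical sketch, not the real implementation: each record
    remembers when it was received, and anything older than the 12h
    lifetime can be flushed when the node gets overloaded.
    """
    LIFETIME = 12 * 60 * 60  # 12 hours, in seconds

    def __init__(self):
        self.records = {}  # key -> (value, received_at)

    def put(self, key, value, now=None):
        self.records[key] = (value, now if now is not None else time.time())

    def flush_stale(self, now=None):
        """Drop records past their lifetime; publishers must republish."""
        now = now if now is not None else time.time()
        expired = [k for k, (_, t) in self.records.items()
                   if now - t > self.LIFETIME]
        for k in expired:
            del self.records[k]
        return len(expired)
```

The point of the 12h bound is exactly this: because nobody expects a record to outlive its lifetime, dropping stale entries is always safe.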
Couldn’t you do something like renew names whenever they are resolved?
As I said above, we’ve considered doing this explicitly: you’d give a republish key to a semi-trusted node to allow it to republish your IPNS records. However, this hasn’t been a high priority. Our current priority with respect to IPNS is making IPNS queries faster (they’re currently absurdly slow).
Or introduce the concept of “pinning” a name (a node indicates that it is interested in the value of a name and thus willing to share the burden to remember the value)?
That wouldn’t work with a DHT. By the nature of how DHTs operate, DHT nodes can’t choose which records they store (they can store them anyway, but it’ll be hard for peers to find those records).
Also, if we simply allowed nodes to serve records indefinitely (no re-signing), we’d still have the replay attack issue.
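A quick sketch of the replay problem (hypothetical record shape, using sequence numbers the way IPNS records do): a resolver can only prefer the record with the highest sequence number among the records it can see, so if old records never expire, an attacker who serves only a superseded record wins whenever the newer one isn't also in view.

```python
def best_record(records):
    """Resolver rule of thumb: trust the record with the highest
    sequence number among those visible. Without expiry, that's the
    only defense, and it fails if the newer record is withheld."""
    return max(records, key=lambda r: r["seq"])

old = {"seq": 1, "value": "/ipfs/QmOld"}  # superseded record
new = {"seq": 2, "value": "/ipfs/QmNew"}  # the publisher's current record

# Honest view: both records visible, the newer one wins.
honest = best_record([old, new])
# Replay: attacker serves only the old record; with no expiry the
# resolver has no way to tell it has been superseded.
replayed = best_record([old])
```

Expiry bounds the damage: a replayed record can only fool resolvers until its (re-signed) lifetime runs out.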
Will this still have the 12h limitation? From the mission statement “… span massive distances (>1AU) or suffer long partitions (>1 earth yr) …” it seems not. 12h is not a long time for real interplanetary use cases.
The system as a whole is designed to handle these cases; that doesn’t mean the current implementation does. Interplanetary DHTs (well, Kademlia DHTs at least) don’t work at all. We’d probably have separate DHTs (or something better) on every planet and specify timeouts using a clock that takes distance into account (expires = date + distanceFromSource/c).
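A worked example of that formula, using an assumed Earth-Mars distance of roughly 2.25e11 m (about 12.5 light-minutes):

```python
C = 299_792_458.0  # speed of light, m/s

def expires(date: float, distance_from_source_m: float) -> float:
    """expires = date + distanceFromSource/c: extend the expiry by the
    light-travel time from the record's source to the reader."""
    return date + distance_from_source_m / C

# Assumed figures for illustration only.
published_at = 1_700_000_000.0              # some Unix timestamp
mars_expiry = expires(published_at, 2.25e11)
slack_seconds = mars_expiry - published_at  # ~750 s of extra validity
```

In other words, a reader on Mars would treat the record as valid for an extra ~12.5 minutes, since that's the soonest a fresher record could possibly have reached them.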
That statement is specifically referring to the fact that the current system (HTTP+SSL+central servers) can’t work on an interplanetary scale (without trusted servers on every planet). As a matter of fact, it already starts to break down on a planetary scale (hence CDNs).
On topic, with respect to IPRS…
No, IPRS is just a generalization of IPNS. It’s completely unrelated to the 12 hour issue; that’s DHT-related. As this is all a bit confusing, here are the three layers:
- IPNS - a name system. Maps a cryptographic key to a file in IPFS.
- IPRS - a record system. An abstract record that can be validated and queried for. All IPNS records are (well, would be) IPRS records but not all IPRS records would be IPNS records.
- DHT, Blockchains, Pubsub, etc. - Ways to distribute these records (and more).
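The layering above can be sketched as code (hypothetical classes, not the real interfaces): a generic IPRS-style record is just a value plus a validity check, and an IPNS record is one concrete kind of it.

```python
class Record:
    """Generic IPRS-style record: a value plus a validity predicate.
    Hypothetical sketch of the layering, not the actual interface."""
    def __init__(self, value, validity):
        self.value = value
        self.validity = validity  # callable: timestamp -> bool

    def valid(self, now):
        return self.validity(now)

def ipns_record(ipfs_path, expires_at):
    """An IPNS record is one kind of IPRS record: it maps a key to an
    IPFS path and stays valid until its expiry time."""
    return Record(ipfs_path, lambda now: now < expires_at)

rec = ipns_record("/ipfs/QmSomeFile", expires_at=1_000)
```

Layer 3 (DHT, blockchains, pubsub) then only needs to move and validate `Record`s; it doesn't care whether they happen to be IPNS records.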
The 12 hour limitation comes from the third layer. To be able to publish permanent records, we’d need a key-value store with actual consistency guarantees (like a blockchain).