Why is IPNS so slow?

Re-opening this thread:

"
It usually takes ~2 minutes for the command ipfs name publish <hash-id> (running on my local machine), and another ~2 minutes for ipfs name resolve <peer-id> (running on my DigitalOcean VPS).
"
Has this been resolved?

IPNS publish/resolve times using the DHT are unfortunately still slower than they could/should be, since as of today (06/26/20) much of the network is still running pre-0.5 nodes that are behind NATs yet still advertising themselves to the rest of the network. As more people upgrade to go-ipfs >=0.5, network performance should improve. However, there’s also interest in figuring out a good way to upgrade the network for new nodes without waiting for users to upgrade their nodes.

However, if you use IPNS over PubSub (--enable-namesys-pubsub) and both the publishers and the subscribers are running go-ipfs >=0.5, then an IPNS resolve should be about as fast as an IPFS resolve.
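For example, a minimal sketch (the <cid> and <peer-id> placeholders stand in for your own values):

```sh
# Start the daemon with IPNS over PubSub enabled, on both the publishing
# and the resolving nodes:
ipfs daemon --enable-namesys-pubsub

# On the publisher:
ipfs name publish <cid>

# On a subscriber; the first resolve still goes through the DHT, but
# later updates arrive over PubSub:
ipfs name resolve <peer-id>
```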

Any news on this interest in figuring out a way for updated nodes to not be slowed down by legacy nodes?

Hi @adin, would you have any update about this? Thanks!

As of 2 months ago, the :latest tagged Docker image was still delivering a 0.4.x version. I had no idea until I tried to use features that didn’t exist. Fun times. I manually told it to use 0.8 after that. Could this be the cause of the lack of newer versions being deployed?

IPNS publishes and resolves have gotten faster, although they’re still not as fast as they could be. IPNS over PubSub is still going to be the fastest resolution option, though. If you want to use IPNS over the DHT only, you can speed things up significantly by passing --dhtrc and setting it to a smaller number (it defaults to 20, so 10 or even 5 should be much faster).
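For instance, a quick sketch (<peer-id> is a placeholder for the IPNS name you’re resolving):

```sh
# Accept an answer after 5 DHT records instead of the default, trading
# some confidence that you have the very latest record for speed:
ipfs name resolve --dhtrc=5 /ipns/<peer-id>
```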

Is your main concern about publish or resolve times?

Today I can see that the latest and v0.8.0 Docker tags have the same digests, so they should match.
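If you want to verify that yourself, something like this should work (assuming the ipfs/go-ipfs image on Docker Hub):

```sh
# Pull both tags and compare their digests; matching digests mean the
# tags point at the same image:
docker pull ipfs/go-ipfs:latest
docker pull ipfs/go-ipfs:v0.8.0
docker images --digests ipfs/go-ipfs
```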

[Sorry, didn’t get notified of your reply]

Actually I’m concerned about both. But my question was really about this specific statement of yours:

"
there’s also interest in figuring out a good way to upgrade the network for new nodes without waiting for users to upgrade their nodes
"

Ah, not much progress on a defined playbook here. So far there are two major options available, each with its own tradeoffs:

  1. Just run two DHTs for some period of time before sunsetting the older one (e.g. one with protocol /ipfs/kad/1.0.0 and the other with /ipfs/kad/2.0.0), and try them both in parallel.
  2. Try nesting one DHT inside the other, i.e. by saying that every member of /ipfs/kad/2.0.0 is also a member of /ipfs/kad/1.0.0, it’s possible to start queries in the new DHT and continue them in the old one.

Option 2 is nice because it’s less resource intensive and scales nicely when multiple version upgrades are happening together. However, it adds some complexity in implementation and management (e.g. some of the Sybil-resistance rules, such as those described in Hardening the IPFS public DHT against eclipse attacks | IPFS Blog & News, are a little weird to enforce in smaller networks). These aren’t showstoppers, but the rest of the work here will probably end up waiting until a protocol-level upgrade becomes important enough to rise to the top of the TODO list.
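For reference, recent go-ipfs versions list the protocols a node speaks in the ipfs id output, so you can check which kad protocol version a DHT server advertises today (everything on the public network is still /ipfs/kad/1.0.0):

```sh
# Show the protocols the local node advertises and pick out the DHT entry:
ipfs id | grep kad
```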

Speaking of updates, I always thought it would be cool if ipfs-update pulled the new ipfs version over IPFS itself when a daemon is running. There isn’t really a compelling need for it, but it would be cool from a dogfooding perspective. It would also be nice if the GitHub releases and the IPFS distributions page listed the CIDs as well.
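That wouldn’t be far from what’s already possible by hand; the release binaries are published over IPNS at dist.ipfs.io, so a running daemon can already fetch an upgrade through IPFS itself (a sketch, assuming the usual dist layout and a linux-amd64 build):

```sh
# Fetch a go-ipfs release tarball over IPFS via the dist.ipfs.io DNSLink:
ipfs get /ipns/dist.ipfs.io/go-ipfs/v0.8.0/go-ipfs_v0.8.0_linux-amd64.tar.gz
```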

I’d recommend starting a new thread to talk about that; this one has drifted pretty far off topic already :smiley:. Feel free to tag me, though.