Connectivity difficulties

I run into this problem more often than I believe is “normal”, and each time I try to figure out the cause with no definitive answer. For reasons unknown the problem eventually goes away, and I can’t correlate it to moving to different hardware, which I do pretty often as I build systems.

I’m using kubo 0.17.0 on a node in a datacenter, and version 0.22.0 on another node at home. My home node can get files from the datacenter node reasonably quickly, but the datacenter node can’t get even small files I publish on my home node.

I have added the datacenter peer to the Peering section of the config file on my home node. When I run ipfs swarm peers | grep &lt;home node peerID&gt; on the datacenter node, 3 multiaddresses are returned, and only 1 matches my home node’s IP address. If I shut down my home node that one goes away, but the other 2 persist. I can use ipfs swarm disconnect &lt;peerID multiaddress&gt; to remove them, but they always come back after a short while, even if my home node isn’t running.
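For concreteness, here is roughly what my setup and the commands look like; the peer IDs and addresses below are placeholders, not my real ones:

```
# Peering section of the home node's config (placeholder values):
# "Peering": {
#   "Peers": [
#     { "ID": "12D3KooW...datacenterPeerID", "Addrs": ["/ip4/203.0.113.10/tcp/4001"] }
#   ]
# }

# On the datacenter node, list connections to the home peer:
ipfs swarm peers | grep 12D3KooW...homePeerID

# Drop one of the stale connections (it reappears a short while later):
ipfs swarm disconnect /ip4/198.51.100.7/tcp/4001/p2p/12D3KooW...homePeerID
```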

I can ping either node from the other using ipfs ping &lt;peerID&gt;, with double-digit millisecond response times in either direction.

What are the other 2 multiaddresses in the swarm with my peer ID, and why do they persist? How do I go about investigating the cause of this connectivity problem?

Please upgrade both nodes to the latest version (v0.26.0, I think) and come back if it still doesn’t work.

Are you a bot? That’s a typical answer a bot might respond with.

It’s not a totally unreasonable response, but is the IPFS network really that finicky? I truly don’t believe that’s the answer. I’ve been using IPFS for 5 years now, and if the cause were version differences I would have been able to correlate it. I’ve mixed versions further apart than these without seeing this occur.

The 2 nodes I run at datacenters are huge repositories, and it isn’t a trivial matter to upgrade them. It’s far easier to downgrade the one I’m running at home to determine if the version difference is the issue.

Sorry if my knee-jerk reaction to your reply offended you. It’s just so much easier to give an answer like that than to offer suggestions with deeper analysis.

But like I said, it’s not totally unreasonable; I’m just looking for alternatives to a time-consuming upgrade that comes with no guarantee of resolving the problem.

Besides, it was only a few weeks ago these two versions were working together just fine.

Upgrading from 0.22 to 0.26 won’t be slower because you have lots of data.
There is a migration, but it only touches the JSON config file.

You can revert, too.

I think it’s completely reasonable to update the binary, run the migration (run ipfs daemon manually to be prompted), which won’t touch your dataset, and then run the updated version.
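A rough sketch of the flow (the exact prompt wording may vary between versions):

```
# Replace the kubo binary with the new release, then start the daemon manually;
# if the repo format is outdated, it offers to run the migration:
ipfs daemon
#   Found outdated fs-repo, migrations need to be run.
#   Run migrations now? [y/N]

# Or run any needed migrations automatically, without the prompt:
ipfs daemon --migrate=true
```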

After rereading I realised your datacenter node is on 0.17; upgrading it won’t touch data either.
The last migration that touched data was around 0.12 or 0.13.

That’s great news, thanks for sharing that. I’ll do a backup prior to the upgrade, of course, as a safety measure just in case of difficulties.
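Assuming the default repo location (~/.ipfs) and that, as noted above, the migration only rewrites the config, a lightweight backup could be as simple as the following (paths are my assumption; adjust for a custom IPFS_PATH):

```
# Stop the daemon first, then copy the config and the repo version marker:
cp ~/.ipfs/config ~/.ipfs/config.bak-pre-upgrade
cp ~/.ipfs/version ~/.ipfs/version.bak-pre-upgrade
```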

I am not :slight_smile:

Look, every single component related to your issue has evolved since 0.17.0. The most probable thing is that, whatever the problem (or problems) is, it has been fixed.

The easiest way to know if this has been fixed is to upgrade. Kubo devs can spend time debugging and fixing things that are broken now. Kubo devs cannot spend time debugging things only to realize it’s a problem that was corrected a year ago, because if that is the case, the solution is to upgrade anyway. We need to be sure the problem still happens on the latest versions.

Even though the IPFS network shouldn’t be that finicky… in your case it seems it is, and the best solution is to upgrade. Your data is not touched, and the upgrade should be less time-consuming than this conversation.
