About establishing an IPFS connection

I am using IPFS version 0.4.4 inside a Google Compute Engine instance (https://console.cloud.google.com/). I opened its port 4001, and the instance has an external IP.

Here @VictorBjelkholm recommended using the following command to check whether an IPFS peer is reachable:

$ nc gateway-int.ipfs.io 4001

Replace gateway-int.ipfs.io and 4001 with your IP and your port (the port defaults to 4001; only use 4003 if you deliberately changed it in the config).
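To make the check a bit more systematic, here is a small sketch (the classification logic below is mine, not from the thread): a running go-ipfs daemon writes its multistream header to every raw TCP connection, so reading even a few bytes tells you whether an IPFS node answered.

```shell
#!/bin/sh
# Classify the first bytes read from a peer's swarm port. A running
# go-ipfs daemon greets every TCP connection with a multistream header,
# so an empty read means no daemon answered (down, or firewalled).
classify_banner() {
  case "$1" in
    */multistream/*) echo "ipfs node listening" ;;
    "")              echo "no response (daemon down or port blocked)" ;;
    *)               echo "something else on that port" ;;
  esac
}

# Usage sketch (PEER_IP is a placeholder for your instance's external IP):
#   banner=$(nc -w 5 "$PEER_IP" 4001 </dev/null | head -c 32)
#   classify_banner "$banner"
```

The `-w 5` timeout keeps `nc` from hanging forever when the port is silently dropped by a firewall rather than actively refused.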

[Q] What should I do if nc peerIP 4001 returns nothing? Since it returns nothing, I am not able to connect to the IPFS peer using ipfs swarm connect :confused:

What should I do to make my peer return /multistream/1.0.0 so that a connection can be established?

Thank you for your help.

What does sudo netstat -lptn | grep 4001 show on the IPFS node you’re trying to connect to?

Again an empty string. @leerspace

That means there’s nothing listening on port 4001 on the node you’re trying to connect to.

Assuming the daemon is already running on the node you’re trying to connect to, can you post the output of ipfs config Addresses from the node you’re trying to connect to?

Sorry, I forgot to run ipfs daemon &

$ ipfs daemon &
$ sudo netstat -lptn | grep 4001
tcp        0      0 0.0.0.0:4001            0.0.0.0:*               LISTEN      19088/ipfs
tcp6       0      0 :::4001                 :::*                    LISTEN      19088/ipfs

$ ipfs config Addresses
  "API": "/ip4/",
  "Announce": [],
  "Gateway": "/ip4/",
  "NoAnnounce": [],
  "Swarm": [
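(The output above is truncated. For comparison, the default Addresses section for go-ipfs 0.4.x looks roughly like this — treat the exact values as an assumption; the important part is that Swarm listens on all interfaces on port 4001, while API and Gateway bind to localhost only.)

```
"Addresses": {
  "API": "/ip4/127.0.0.1/tcp/5001",
  "Announce": [],
  "Gateway": "/ip4/127.0.0.1/tcp/8080",
  "NoAnnounce": [],
  "Swarm": [
    "/ip4/0.0.0.0/tcp/4001",
    "/ip6/::/tcp/4001"
  ]
}
```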

If you’re running the daemon on a remote server using ipfs daemon &, it’s going to get killed when you log out.

Can you try to connect using IPFS while the IPFS daemon is definitely running on the node you’re trying to connect to? If that also doesn’t work, then I suspect something is blocking the connection.

I am running IPFS as: nohup ipfs daemon &
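(A couple of other common ways to keep the daemon alive after logout, for anyone landing here — these are sketches, adjust to your setup:)

```
# Redirect output so nohup doesn't litter nohup.out in odd places
nohup ipfs daemon >ipfs.log 2>&1 &

# Or run it inside a detached screen/tmux session you can reattach to
screen -dmS ipfs ipfs daemon
```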

I guess the Google instance blocks it by default; not sure why.

Oh, cool. I didn’t know about nohup.

I don’t know much about Google Compute instances beyond what I can find by searching around, but how did you try to open port 4001 on your Google instance?

I can also ping its IP address. I followed the same question’s answer. Let me reboot and try again.

$ gcloud compute firewall-rules list | grep 'ipfs'
ipfs4001                default  INGRESS    1000      tcp:4001

I selected:
Targets => All instances in the network, and it works! I guess it’s blocked by the firewall by default.
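(For reference, the rule listed above can be created from the CLI like this — a sketch; the rule name and network are assumptions, adjust to your project. With no target tags specified, the rule applies to all instances in the network, which is what worked here:)

```
gcloud compute firewall-rules create ipfs4001 \
  --direction=INGRESS \
  --network=default \
  --allow=tcp:4001 \
  --source-ranges=0.0.0.0/0
```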

Thanks for the help @leerspace


I am trying to connect from a local Docker container to a remote IPFS node I have running.

I am able to nc and get the multistream response, but then the nc command never terminates and keeps hanging; I’m not sure if that’s expected?

root@6c5fbf58bf1f:/go# nc <IP> 4001

And connecting results in deadline error:

ipfs swarm connect /ip4/<IP>/tcp/4001/ipfs/Qm...w3hb7

failure: dial attempt failed: context deadline exceeded

IPFS VERSION: go-ipfs/0.4.16-dev/

Any idea?

Update: it doesn’t work from my local machine either, so it’s not a Docker issue; my remote node is running in gcloud with the port exposed.

Update 2: I am able to connect from another gcloud node, so maybe it’s something gcloud-related.
Update 3: the external load balancer port is exposed correctly, so I have no idea what the issue is:

kubectl expose pod ipfs-pod --type=LoadBalancer --name ipfs-ext --port=4001 --target-port=4001

I also added a general firewall rule to allow all tcp:4001 ingress/egress traffic on all gcloud machines. What else could it be? :confused: @leerspace @avatar-lavventura

Update: I also opened TCP port 4001 on my localhost just in case, but no success. Still the same error.

Update: it doesn’t work on the IPFS 0.4.15 official release either, so it’s not a dev-version issue.

Hey @whyrusleeping, did you experience such a problem in the past?

This is expected and what I see with my own nodes.

This is what it sounds like to me – either something specific to gcloud in general or the load balancer. How many nodes do you have behind the load balanced IP? This is just an idea (should be taken with a grain of salt since I’ve never used gcloud and I’m making assumptions about your setup), but if you issue the ipfs swarm connect command with a given IP and node ID and there are multiple nodes with different IDs behind the load-balanced IP, then if the load-balancer directs you to a different node than the one you specified I’d expect the connection to fail.

It’s only one. The reason for the load balancer is to have a never-changing IP. Anyway, if I am getting the “/multistream/1.0.0” response, doesn’t that mean the connection is working fine? Why would swarm connect fail then? @leerspace

It means there’s an IPFS node listening at that IP address. My idea was that the load balancer was directing the connection to the load-balanced IP to the wrong node, but if there’s only one node then that obviously can’t be it.

Assuming you’ve already done what worked for the OP in this thread, I don’t currently have any other ideas of things to check unless you can try removing the load balancer from the equation.

Yeah, that’s actually what I was thinking as well. Tomorrow I will deploy a new pod without a load balancer and try to connect. If you have time, you could try to create your own little cluster as well. I think people must have tried setting up IPFS on AWS/gcloud already, hm.

Will keep this thread updated.

It’s the second day I’ve been trying to make it work, without success. I removed one load balancer in the middle, but the same problem persists. The thing is, in Google Cloud you must have the load balancer because that’s what exposes your external IP. But anyway, I am able to get the “multistream” response, so communication is possible :confused:

I created a completely new pod, service, and virtual machine under a different name to be sure the request doesn’t get lost somewhere, and no luck. Would anyone like to try it themselves?

SOLVED!!! God damn, this co-working space in Barcelona seems to be messing with the firewall, I think… I connected my laptop to the wifi hotspot from my phone and now it works… omg, like a 10-hour bug.

It seems like I have a similar issue; I’ve been messing with this for two days already and would be very grateful for help.
I’m playing with ipfs-cluster and its pins do not work well.

pin_timeout is set to 30m in the ipfs-cluster config.
I do:

ipfs add <file>


ipfs-cluster-ctl pin add <hash>

ipfs-cluster-ctl status says that it’s PINNED on the node that executes these commands, but it remains PINNING on the other node, typically ending up with the following message in the other node’s log:

09:20:54.539 ERROR   ipfshttp: error posting to IPFS:Post context deadline exceeded ipfshttp.go:775
09:20:54.540 ERROR  p2p-gorpc: Post context deadline exceeded client.go:245

Only once did it go through, but I suppose the file was fetched from the public nodes, since the file’s content was as simple as foobar.

I googled a bit and figured out that the nodes should be able to see each other in the swarm.
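(A quick way to check that mutual visibility — a sketch, run on each node against the other; the placeholders are whatever `ipfs id` reports on the peer:)

```
# On node A: find node A's peer ID and announced addresses
ipfs id

# On node B: is node A already a swarm peer?
ipfs swarm peers | grep <peerID-of-A>

# If not, try dialing it explicitly
ipfs swarm connect /ip4/<ip-of-A>/tcp/4001/ipfs/<peerID-of-A>
```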

But when I try to connect with

ipfs -D swarm connect /ip4/<ip>/tcp/4001/ipfs/<hash>

I get this:

Error: connect <hash> failure: dial attempt failed: failed to dial <peer.ID eXAr9a> (default failure)

It fails instantly, and the -D debug flag prints nothing useful either.

Meanwhile the firewalls seem to be OK — there are no rules in iptables — and when I do

nc -v <ip> 4001

I get

Connection to <ip> 4001 port [tcp/*] succeeded!

And I get Connection refused for closed ports. However, no /multistream/1.0.0 message appears even in debug mode.

Any ideas what could be wrong with the swarm connection, and could it affect cluster pinning?

That should show up even if you’re not running in debug mode. The fact that you don’t see the message suggests that there’s no IPFS node listening on that port.

I don’t know much about the setup of ipfs-cluster or any current issues since I haven’t used it before. I’m mostly waiting for the implementation of cluster sharding before I go through the process of setting up a cluster.