I’m in the process of trying to create a service that basically works like this:
* People provide me a hash
* I pin that hash from the network
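The intended flow can be sketched as a small shell helper. This is only a sketch: `is_cidv0` and `pin_hash` are hypothetical names, and it assumes a local go-ipfs daemon is running.

```shell
#!/bin/sh
# is_cidv0 is a hypothetical helper: a quick format sanity check before pinning.
# A CIDv0 hash starts with "Qm" and is 46 characters long.
is_cidv0() {
  case "$1" in
    Qm*) [ ${#1} -eq 46 ] ;;
    *) return 1 ;;
  esac
}

# pin_hash is a hypothetical helper wrapping the actual pin command.
pin_hash() {
  if is_cidv0 "$1"; then
    ipfs pin add "$1"   # blocks until the content is fetched and pinned
  else
    echo "does not look like a CIDv0 hash: $1" >&2
    return 1
  fi
}
```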
I’m having quite a few problems with this.
Neither the “ipfs pin add (hash)” nor the “ipfs dht findprovs (hash)” command finds anything. I’m letting each of these commands run until it naturally times out (which appears to take around 10 minutes), and I still get no results.
The only way either of these commands works is if I’m directly connected to the node I’m trying to grab content from.
I know the content resides on the network, as whenever I visit ipfs.io/ipfs/myHash, the content is found easily.
Does anybody have any tips on how to make this process actually work in an efficient manner? It’s not feasible for me to have to manually connect to all nodes I’m trying to pin content from, as I often don’t know the multiaddresses of nodes that are hosting the content.
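For reference, the manual workaround being described looks roughly like this (a sketch only; the multiaddress and hash below are hypothetical placeholders, and it assumes go-ipfs is installed):

```shell
#!/bin/sh
# Hypothetical placeholder values -- replace with the real provider's details.
ADDR="/ip4/203.0.113.10/tcp/4001/ipfs/QmExamplePeerID"
HASH="QmExampleContentHash"

# The peer ID is the last path segment of the multiaddress.
PEER_ID="${ADDR##*/}"
echo "$PEER_ID"   # prints "QmExamplePeerID"

# Connect directly to the provider, then pin -- the step that shouldn't be
# necessary if DHT provider records were being found:
# ipfs swarm connect "$ADDR"
# ipfs pin add "$HASH"
```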
Another solution would be to ensure that your nodes are connected to each other! I find this to be quite a common problem, and the only solution I’ve found is to ensure direct connectivity.
If your nodes have static IPs, consider adding them as bootstrap nodes too:
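For example, something like this (the IP and peer ID below are hypothetical placeholders; the real peer ID comes from running “ipfs id” on the node you want to add):

```shell
#!/bin/sh
# Hypothetical values -- substitute your node's static IP and peer ID.
NODE_IP="203.0.113.10"
PEER_ID="QmExamplePeerID"

# A bootstrap entry is a full multiaddress: /ip4/<ip>/tcp/4001/ipfs/<peer-id>
ADDR="/ip4/${NODE_IP}/tcp/4001/ipfs/${PEER_ID}"
echo "$ADDR"

# Run this on the other node to make this one a bootstrap peer:
# ipfs bootstrap add "$ADDR"
```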
@leerspace I tried running the “telnet myDOIP 4001” command on multiple different machines, and every time I received “unable to connect to remote host: connection timed out”.
Which is strange to me, as this DigitalOcean machine has “all ports” opened up for TCP.
@koalalorenzo I actually do this with many of my nodes. However, this is a pretty fragile process: the nodes frequently forget about each other, and I often run into connectivity errors such as the ones mentioned in this bug report:
As @leerspace mentioned in the bug report, my two issues may be related, but I’m not 100% positive.
Also, on the subject of nodes forgetting about each other: does anybody know if there is anything on the IPFS roadmap that would allow two bootstrapped nodes to stay connected to each other unless manually disconnected? I know that, for me at least, this would be a useful feature.
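In the meantime, the workaround I know of is to periodically re-run “ipfs swarm connect” (e.g. from cron) so the peers keep re-establishing the link. A minimal sketch (“retry” is a made-up helper name, and the multiaddress is a hypothetical placeholder):

```shell
#!/bin/sh
# retry is a hypothetical helper: run a command up to N times, pausing between tries.
retry() {
  max="$1"; shift
  tries=0
  until "$@"; do
    tries=$((tries + 1))
    [ "$tries" -ge "$max" ] && return 1
    sleep 1
  done
}

# Example cron entry to re-establish the connection every 5 minutes
# (the multiaddress is a hypothetical placeholder):
# */5 * * * * retry 3 ipfs swarm connect /ip4/203.0.113.10/tcp/4001/ipfs/QmExamplePeerID
```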
If it helps, I’m currently running Ubuntu 16 for my nodes, and have the most current version of IPFS on all of them. I’m also running NGINX in front of them. While I don’t believe that to be causing any issues, it’s something to be aware of.
TL;DR: It all Just Works® with the vanilla DigitalOcean Ubuntu 16.04 image and DigitalOcean project defaults. I didn’t have any connectivity issues or problems with blocked ports since it turns out that DigitalOcean’s firewall doesn’t appear to block anything by default (nor does Ubuntu).
If you want to skip to what I think the problem is, skip to the very end of this post. Details follow immediately below so it’s clear what I did.
Here’s what I did:
Created a DigitalOcean account
Created a new project
Created the cheapest droplet they have with an ssh key, Ubuntu 16.04, and otherwise default settings
From my home IP, ran telnet my.do.public.ip 4001 and got a /multistream/1.0.0 response
This all took about 15 minutes, so I’m curious if they’ll even bother with charging me $0.007 for the droplet’s hourly charge :).
Here’s what I think we need to troubleshoot further:
Does your DigitalOcean cloud firewall have any rules (should be none by default)? If so, please post screenshots.
Output from sudo iptables -L -n on the droplet
Output from sudo ufw status verbose
If you followed something like this guide (which they suggested to me after I created the droplet) for setting up Ubuntu on DigitalOcean, I think it’s likely that you enabled a restrictive firewall on Ubuntu that’s blocking almost everything except for explicitly allowed services.
Spoiler (or what I think the solution likely is)
If this is the case, then you’ll need to run sudo ufw allow 4001 to open up the port needed for IPFS. Unless there’s a custom DigitalOcean firewall in the way as well, that should allow connections to your daemon’s swarm port.