Cannot transfer between two firewalled machines

From @xloem on Sun Jan 08 2017 20:43:29 GMT+0000 (UTC)

Does ipfs allow transfer between two machines, neither of which will accept incoming connections?

I seem to be able to view the files on gateway.ipfs.io, but not retrieve them from the commandline.

ipfs get just hangs for a long time. ls -l /ipfs/[hash] gives ls: cannot access /ipfs/[hash]/[file]: No such file or directory.

Copied from original issue: https://github.com/ipfs/support/issues/48

From @xloem on Mon Jan 09 2017 00:41:49 GMT+0000 (UTC)

To answer my own question: no, ipfs does not support this.

Open port 4001 if you want to share files with others.
If you are firewalled and want to download a file from another firewalled machine, visit the file on gateway.ipfs.io first, so that it has a non-firewalled source.
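As a rough sketch of the workaround above (the hash and filename are placeholders, and the first command needs a running daemon):

```shell
# Check whether your daemon is actually connected to any peers
ipfs swarm peers

# If both machines are firewalled, fetch the file once through the
# public gateway so that a reachable node caches a copy of it
curl -O https://gateway.ipfs.io/ipfs/Qm.../file
```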

When I start the ipfs daemon it also says something about port 5001 (API server) and port 8080 (gateway server). Should I also open these ports on my firewall? Or is 4001 the only port to open?

the answer is in here: IPFS ports firewall

The 5001 port only has to be opened in the firewall if you're planning to use your IPFS daemon remotely. Same goes for the HTTP gateway: if you're not using it, there's no need to open the port. The important ports are the swarm ones, as those are the ones used to communicate and transfer between nodes.

What do you mean by that? Serving files outside of my LAN, or controlling the daemon remotely? I'm new to all of this.
I also don't know in which cases I will need the HTTP gateway. Can you give me an example of such cases?
Thank you @VictorBjelkholm

No worries, I'm happy to explain :slight_smile:

5001 is the API port you can use to send commands to the daemon to control it. Internally, go-ipfs uses this for every command in the CLI, but if you're not planning to control the node externally (for example, running an IPFS node on a server and connecting to it from your local machine), it's not necessary to expose it.
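For example, go-ipfs lets the CLI target a remote daemon's API via the --api flag (the IP below is a placeholder; port 5001 on the server must be reachable):

```shell
# Run a CLI command against a remote daemon instead of the local one
ipfs --api /ip4/203.0.113.10/tcp/5001 id
```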

8080 is the HTTP gateway port. It allows you to get/add IPFS objects via HTTP, and it's meant to be exposed if you want to allow traditional applications (applications using HTTP, really) to access IPFS objects. For example, we run a public IPFS gateway at ipfs.io, so not only can you do ipfs cat QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o in a terminal, you can also navigate to https://ipfs.io/ipfs/QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o to get the same object.
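To make that concrete, the same object can be fetched three ways (the first two assume a local daemon running with default ports):

```shell
# Directly over the IPFS network, via the CLI
ipfs cat QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o

# Over HTTP, via your own node's gateway on port 8080
curl http://localhost:8080/ipfs/QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o

# Over HTTP, via the public gateway
curl https://ipfs.io/ipfs/QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o
```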

Exposing 4001 (the swarm port) allows you to serve content via the IPFS network, and is all you need. If you need compatibility with HTTP, you should also expose the HTTP gateway. And finally, if you need to run the daemon on a different machine from where you use the CLI, you can expose the API port.
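As a concrete sketch, with ufw on Linux the rules might look like this (assuming the default ports; only the first rule is needed for plain IPFS use):

```shell
sudo ufw allow 4001/tcp   # swarm: always needed to exchange content with peers
sudo ufw allow 8080/tcp   # gateway: only if serving objects over HTTP
sudo ufw allow 5001/tcp   # API: only for remote control -- exposing this
                          # publicly lets anyone control your node
```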

Hope that clears it up!


That explains a lot. Thank you :wink:
Only one more thing:

  1. If I want to get a file (that I'm serving from another machine) from http://ipfs.io/ipfs/Qm..., will I need to open port 8080 on the serving machine?
  2. And if I use http://localhost:8080/ipfs/Qm... to get that file, will I need to open port 8080 on the "serving" machine?

Hi @VictorBjelkholm

I am trying to upload a file to the IPFS node which is running in GCP (using js-ipfs).

this.ipfsApi = ipfsAPI('localhost', '5001') - works fine
but
this.ipfsApi = ipfsAPI('GCP instance IP', '5001') - does not work; it says connection refused. I have actually allowed all ports, but still no luck reaching it.

I want to upload the files from my client to the remote node, without using the local node.
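(A possible cause, as an assumption rather than a confirmed diagnosis: by default go-ipfs binds its API to 127.0.0.1 only, so remote connections are refused even when the firewall allows the port. A sketch of rebinding it, run on the GCP instance:)

```shell
# Make the API listen on all interfaces instead of localhost only,
# then restart the daemon. Note: this exposes full control of the
# node, so restrict access (e.g. with GCP firewall rules).
ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
```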

Thanks in advance.