I have two separate VPSes with public addresses, each running Docker. I deployed one cluster peer and one IPFS instance on each, based on the docker-compose file provided. I want to connect them using the public IPs, not the internal Docker ones.
When I manually add one of the peers to the peerstore, they connect to each other, but after a shutdown only the public address I provided manually is saved. In other words, when listing the peers, the IPFS section shows the public IP multiaddress, but the cluster section does not. So when one of the nodes restarts, the public IP of the other peer is lost. Is this a limitation of the Docker setup? Why is it listed for IPFS then?
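For reference, this is roughly how the listing can be reproduced (container name as in the compose file below; output omitted):

```sh
# List cluster peers from inside the cluster container. The output has a
# cluster "Addresses" section and an "IPFS" section for each peer.
docker exec cluster0 ipfs-cluster-ctl peers ls
```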
The relevant part of the compose file:
```yaml
cluster0:
  container_name: cluster0
  image: ipfs/ipfs-cluster:latest
  depends_on:
    - ipfs0
  environment:
    CLUSTER_PEERNAME: cluster0
    CLUSTER_SECRET: ${CLUSTER_SECRET} # From shell variable if set
    CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs0/tcp/5001
    CLUSTER_CRDT_TRUSTEDPEERS: '*' # Trust all peers in Cluster
    CLUSTER_RESTAPI_HTTPLISTENMULTIADDRESS: /ip4/0.0.0.0/tcp/9094 # Expose API
    CLUSTER_MONITORPINGINTERVAL: 2s # Speed up peer discovery
  ports:
    # Open API port (allows ipfs-cluster-ctl usage on host)
    - "127.0.0.1:9094:9094"
    # The cluster swarm port would need to be exposed if this container
    # was to connect to cluster peers on other hosts.
    # But this is just a testing cluster.
    - "9096:9096" # Cluster swarm endpoint
```
The cluster peers are not aware of their external IP:

- Therefore they do not announce it during identify().
- Therefore it is not part of the other peer's peerstore.
- Therefore it is not saved on shutdown.
The thing is, identify() will only start advertising one of those addresses once it has seen at least 4 connections confirming it in the last hour (which is the signal that the address is actually a public one). The thousands of nodes in the IPFS network trigger that condition easily, which is why your IPFS nodes show the public IP, but with only 2 cluster peers it never happens.
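To illustrate the rule, here is a minimal sketch in Go of the activation logic described above. It assumes the behaviour as stated; the names are illustrative and not the actual go-libp2p identify API:

```go
package main

import (
	"fmt"
	"time"
)

const (
	activationThresh = 4         // confirmations needed before advertising
	observationTTL   = time.Hour // observations older than this are ignored
)

// observedAddr tracks when each remote peer last reported seeing us
// at a given address.
type observedAddr struct {
	seenBy map[string]time.Time // peer ID -> last time it reported the address
}

// activated reports whether the address has been confirmed recently by
// enough peers to be treated as a genuine public address.
func (o *observedAddr) activated(now time.Time) bool {
	count := 0
	for _, seen := range o.seenBy {
		if now.Sub(seen) <= observationTTL {
			count++
		}
	}
	return count >= activationThresh
}

func main() {
	now := time.Now()
	// With only two cluster peers, at most one other peer can ever confirm
	// our address, so the threshold of 4 is never reached.
	addr := observedAddr{seenBy: map[string]time.Time{"peerB": now}}
	fmt.Println(addr.activated(now)) // false
}
```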
I’ll open an issue to reduce the threshold. For the moment, the workaround is to write your public addresses manually in the peerstore file. If they change, I recommend using the /dns4/… form if possible.
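For example, something along these lines (a sketch: the domain and peer ID are placeholders, and I'm assuming the /data/ipfs-cluster config path used by the docker image):

```sh
# Append the other peer's public multiaddress to the peerstore file,
# which holds one multiaddress per line. Domain and peer ID below are
# placeholders for your own values.
echo "/dns4/cluster1.example.com/tcp/9096/p2p/<peer-id>" \
  >> /data/ipfs-cluster/peerstore
```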
Nice catch @hector! Is there any workaround to force a node to be aware of its own public IP, provided that I (or another process on the same host) already know it?
An ideal scenario would be to provide an env var to the cluster process, such as OWN_PUBLIC_IP or something similar, so it can advertise this info to other peers.
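For illustration, this is the shape of what I mean in the compose file (OWN_PUBLIC_IP is hypothetical: ipfs-cluster does not read this variable today, it is just the proposal):

```yaml
environment:
  CLUSTER_PEERNAME: cluster0
  # Hypothetical: the public address this peer would announce to others.
  OWN_PUBLIC_IP: 203.0.113.10
```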
Hey @dapplion, there is a “latest” ipfs/ipfs-cluster docker image from today that enables a bunch of NAT-hole-punching features. Can you test it out? It’s not a specific fix for this, but maybe as a side effect it gets you what you want.