When I run ipfs swarm peers, nothing is listed. So I am trying to connect the nodes manually with ipfs swarm connect "/ip4/$BOOTSTRAP_SERVICE_SERVICE_IP/tcp/4001/ipfs/$BOOTSTRAP_HASH", but it always gives me
Error: connect failure: failed to dial: no good addresses
I checked the open ports on the ClusterIP, for example with telnet $BOOTSTRAP_SERVICE_SERVICE_IP 5001, and that port is open. I have opened port 4001 as well, but when I do the same with telnet $BOOTSTRAP_SERVICE_SERVICE_IP 4001 I get Connection closed by foreign host.
The difference between ports 5001 and 4001 is that port 5001 is bound to 0.0.0.0, while the swarm log says it is listening on 127.0.0.1.
I am using minikube for local development, and its documentation says: "If the service is bound only to localhost (127.0.0.1), this will not work." So in my view the problem is that swarm is listening on 127.0.0.1 instead of 0.0.0.0. Is there a way to configure it to listen on 0.0.0.0?
In the connect multiaddr you should use the IP address of the other private node, not 0.0.0.0 … and make sure that the other node's IP address is reachable.
Follow the example of the default bootstrap format:
/ip4/other.node.reachable.ip/tcp/4001/p2p/NodeID
Opening port 5001 to everyone means that anyone can modify your pins. You probably don't want to do that…
IPFS Clusters generally run on private IPFS nodes… it's probably a good idea to get the nodes talking to one another over a private IPFS network first and then add the cluster on top.
IPFS private networks require generating a shared swarm key and placing that key in the IPFS directory of each private node. Since you're using containers… you'll need to place the swarm key inside each container…
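A common way to generate such a key is a sketch like the one below; the three-line swarm.key layout is the standard go-ipfs private-network format, but the output path and how you mount it into each container (e.g. as a Kubernetes Secret) are assumptions:

```shell
# Generate a 32-byte random pre-shared key and write it in the
# swarm.key format go-ipfs expects: header, encoding line, hex key.
key=$(head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n')
printf '/key/swarm/psk/1.0.0/\n/base16/\n%s\n' "$key" > swarm.key

# Place the same swarm.key in the IPFS repo of every private node,
# e.g. ~/.ipfs/swarm.key (path depends on your container image).
```

With the key in place, each daemon will refuse connections from peers that do not share it, which is the behaviour you want for a private cluster backbone.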
@ipfsme @hector Thanks for your responses, but… If I request one pod from another on port 5001 (it is only exposed internally), it succeeds, but if I try the same on port 4001, the connection is closed by the foreign host. So I cannot interconnect the 2 IPFS nodes. If I run IPFS Cluster on top, it connects perfectly, but because there is no underlying IPFS node connection, pinning does not work in this case: Node A goes down, I upload a new file, Node A comes up again, and the status is "pinning" and stays that way forever. So I believe that is because of the missing IPFS node connectivity. But for some reason, I cannot connect the two nodes.
I am not sure that swarm is actually listening on 0.0.0.0, because whenever I open the log file it shows Swarm listening on 127.0.0.1, no matter what I do.
Is ipfs's swarm configured to listen on 0.0.0.0 in the configuration?
If yes, it would seem that only the localhost interface is visible and it can only listen on that. You would need to check the Kubernetes side of things; perhaps the process, or the user that runs it, is missing some privileges.
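On the Kubernetes side, the swarm port also has to be declared on the Service, not just opened in the container; a minimal sketch (the Service name and selector label are assumptions, not taken from your setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bootstrap-service   # hypothetical; matches $BOOTSTRAP_SERVICE_SERVICE_IP naming
spec:
  selector:
    app: ipfs               # assumed pod label
  ports:
    - name: swarm
      port: 4001
      targetPort: 4001
    - name: api
      port: 5001
      targetPort: 5001
```

If the daemon behind this Service only binds 127.0.0.1, the Service will still accept the TCP handshake and then drop it, which would explain the "Connection closed by foreign host" you see on 4001.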
ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4001"]' So I think it is configured. Actually, if I run the IPFS Cluster with 0.0.0.0, everything works perfectly in terms of IPFS Cluster connectivity (IPFS Cluster uses libp2p as well, right?). That is why I assume swarm is not working on 0.0.0.0 even though I have configured it to do so.
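One way to cross-check the configured address against what the daemon actually announces (a sketch; it assumes the ipfs CLI is available inside the pod and the daemon is running, so it cannot be run standalone):

```shell
# What the config file says the swarm should bind
# (expect ["/ip4/0.0.0.0/tcp/4001"] after the command above):
ipfs config Addresses.Swarm

# What the running daemon actually announces; the config change only
# takes effect after the daemon is restarted:
ipfs id -f '<addrs>'
```

If the config shows 0.0.0.0 but ipfs id only lists 127.0.0.1 addresses, the daemon is most likely still running with the old config and needs a restart (in Kubernetes, deleting the pod so it is recreated).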