I can't connect peers with ipfs-cluster

Hello, everybody. I have a question.
Following the Quickstart page, I started two AWS servers to test IPFS Cluster.
I am having trouble with Step 1.
I get the following error when running $ ipfs-cluster-service daemon --bootstrap /ip4/

16:37:09.501  INFO    service: Initializing. For verbose output run with "-l debug". Please wait... daemon.go:45
16:37:09.502  INFO  consensus: cleaning empty Raft data folder (/home/ubuntu/.ipfs-cluster/raft) raft.go:678
16:37:09.502 WARNI    service: the /home/ubuntu/.ipfs-cluster/raft folder has been rotated.  Next start will use an empty state state.go:191
16:37:09.511  INFO    cluster: IPFS Cluster v0.9.0 listening on:

16:37:09.512  INFO    restapi: REST API (HTTP): /ip4/ restapi.go:443
16:37:09.512  INFO    restapi: REST API (libp2p-http): ENABLED. Listening on:

16:37:09.513  INFO  ipfsproxy: IPFS Proxy: /ip4/ -> /ip4/ ipfsproxy.go:272
16:37:09.513  INFO  consensus: peer is ready to join a cluster raft.go:222
16:37:09.514  INFO    service: Bootstrapping to /ip4/ daemon.go:172
16:37:14.515 ERROR  p2p-gorpc: dial backoff call.go:63
16:37:14.515 ERROR    cluster: dial backoff cluster.go:732
16:37:14.515 ERROR    service: bootstrap to /ip4/ failed: dial backoff daemon.go:175
16:37:29.514 ERROR    cluster: ***** ipfs-cluster consensus start timed out (tips below) ***** cluster.go:411
16:37:29.514 ERROR    cluster: 
This peer was not able to become part of the cluster.
This might be due to one or several causes:
  - Check the logs above this message for errors
  - Check that there is connectivity to the "peers" multiaddresses
  - Check that all cluster peers are using the same "secret"
  - Check that this peer is reachable on its "listen_multiaddress" by all peers
  - Check that the current cluster is healthy (has a leader). Otherwise make
    sure to start enough peers so that a leader election can happen.
  - Check that the peer(s) you are trying to connect to is running the
    same version of IPFS-cluster.
16:37:29.514  INFO    cluster: shutting down Cluster cluster.go:483
16:37:29.514  INFO  consensus: stopping Consensus component consensus.go:185
16:37:29.514 ERROR       raft: NOTICE: Some RAFT log messages repeat and will only be logged once logging.go:105
16:37:29.515 ERROR       raft: Failed to take snapshot: nothing new to snapshot logging.go:105
16:37:29.515  INFO    monitor: stopping Monitor pubsubmon.go:162
16:37:29.515  INFO    restapi: stopping Cluster API restapi.go:481
16:37:29.515  INFO  ipfsproxy: stopping IPFS Proxy ipfsproxy.go:247
16:37:29.515  INFO   ipfshttp: stopping IPFS Connector ipfshttp.go:201
16:37:29.515  INFO pintracker: stopping MapPinTracker maptracker.go:124

The secret is shared between both peers.
I have not set up any firewall.
Both daemons are running.
I have checked that the two servers can reach each other with $ ping

You need to open ports (the cluster swarm port, tcp:9096) in your security rules. I think EC2 instances allow ping by default.

For more info about ports: https://cluster.ipfs.io/documentation/security/#quick-overview
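A quick way to confirm this from the other server is a plain TCP probe against the swarm port. This is only a sketch: `HOST` is a placeholder you would replace with the bootstrap peer's IP, and it relies on bash's `/dev/tcp` pseudo-device.

```shell
#!/usr/bin/env bash
# Sketch: probe the cluster swarm port (tcp 9096) from the other server.
# HOST is a placeholder -- replace it with the bootstrap peer's IP.
HOST=127.0.0.1
PORT=9096

# bash opens a TCP connection when you redirect to /dev/tcp/<host>/<port>
if timeout 3 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
  RESULT=open
else
  RESULT=closed
fi
echo "tcp/$PORT is $RESULT"
```

If this reports `closed` while the daemon is running on the other side, the traffic is being blocked before it reaches the instance.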

Thanks for your reply.
I have set up ufw and opened ports (4001, 5001, 8080, 9094, 9095, 9096).
But I receive the same error message.

The issue is the EC2 instance’s Security Groups, not ufw (which is probably disabled by default). You should not open 5001 to the world (nor 9095).
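For reference, the inbound rule can also be added from the AWS CLI. This is a sketch with a hypothetical security-group ID and CIDR; substitute your own values, and keep the rule as narrow as possible (between cluster peers only tcp 9096 needs to be open).

```shell
# Hypothetical values: replace sg-0123456789abcdef0 and 203.0.113.0/24
# with your own security group ID and the other peer's address range.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 9096 \
  --cidr 203.0.113.0/24
```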

I succeeded in adding new members to the cluster.
Thank you so much!

Hi @hector, I am just curious why we see this warning message on the console every time a new node joins the cluster (even in the success scenario):

Does that mean that all the pins on that node will be flushed the next time the daemon starts?

Using --bootstrap will rotate any old raft folder. This is to ensure that an old peer with diverging data does not join a new cluster. By “next start” it actually means this current start. Unfortunately, this code path shares the same function with ipfs-cluster-service state cleanup, so the message is confusing.

I don’t know if this has always been the case, but as of the current version this warning is printed on every start with --bootstrap, even if there was no raft folder to rotate in the first place. In any case, you can safely ignore it; it does not mean anything bad.

Since this part of the code is being largely rewritten for the next release, I think this message will no longer appear then.