--peers Qm…,Qm… https://ggc.world
2020-06-05T11:30:04.743+0200 INFO config config/config.go:481 Saving configuration
configuration written to /home/marco/.ipfs-cluster/service.json.
2020-06-05T11:30:04.743+0200 INFO config config/identity.go:73 Saving identity
new identity written to /home/marco/.ipfs-cluster/identity.json
new empty peerstore written to .
(base) marco@pc01:~$
But when I try to run the ipfs-cluster peer, I get this error:
(base) marco@pc01:~$ ipfs-cluster-service daemon
2020-06-05T13:08:41.176+0200 INFO service ipfs-cluster-service/daemon.go:46 Initializing. For verbose output run with "-l debug". Please wait...
2020-06-05T13:08:41.176+0200 INFO config config/config.go:361 loading configuration from 23a…
error loading configurations: could not fetch configuration from source: 23a…
(base) marco@pc01:~$
How can I solve this problem?
Looking forward to your kind help.
Marco
The --custom-secret flag prompts for the secret interactively. If you wish to set it automatically you need to set the CLUSTER_SECRET environment variable.
The way you called the command, the secret is being interpreted as the [http-source-url] part. Because you are providing a "source-url", you are not prompted for the secret. I realize this is not the best UX; the command should have failed in the first place.
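A minimal sketch of the non-interactive flow, assuming you want to generate a fresh secret (the od pipeline below is one common way to produce a 64-character hex value; it is my suggestion, not something from your commands, and any hex secret of that length works):

```shell
# Generate a random 32-byte hex string and export it so that
# ipfs-cluster-service init does not prompt for a secret.
export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')

# With CLUSTER_SECRET set in the environment, init picks it up:
# (commented out here; run it on a machine with ipfs-cluster-service)
# ipfs-cluster-service init --consensus crdt

echo "secret length: ${#CLUSTER_SECRET}"
```

Every peer that should join the same cluster must be initialized with the same CLUSTER_SECRET value.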
So, if I understand correctly, I just need to put export CLUSTER_SECRET="23a…" into my .bashrc file and then run ipfs-cluster-service init.
One more thing: I still do not understand how to proceed with the remote-configuration part of initializing the ipfs-cluster. Maybe it is a step for later, but it would be nice to use a remote configuration file accessible at an HTTP(S) location. How do I concretely do that?
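For concreteness, here is my understanding of the idea, simulated locally (the file contents, port, and URL are placeholders; the local python3 server just stands in for a real HTTPS location where the shared service.json would live):

```shell
# Simulate a "remote" configuration source with a local HTTP server.
# In a real setup, service.json would sit at some HTTPS URL and each
# follower would run: ipfs-cluster-service init <that-url>
mkdir -p /tmp/cluster-config-demo
echo '{"cluster": {"peername": "demo"}}' > /tmp/cluster-config-demo/service.json

python3 -m http.server 8123 --directory /tmp/cluster-config-demo >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# A peer initializing from this source would fetch this URL:
curl -s http://127.0.0.1:8123/service.json

kill "$SERVER_PID"
```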
@hector When initializing without specifying the peers, I'm actually able to run the daemon:
(base) marco@pc01:~$ ipfs-cluster-service init --consensus crdt
2020-06-05T15:54:37.189+0200 INFO config config/config.go:481 Saving configuration
configuration written to /home/marco/.ipfs-cluster/service.json.
2020-06-05T15:54:37.190+0200 INFO config config/identity.go:73 Saving identity
new identity written to /home/marco/.ipfs-cluster/identity.json
new empty peerstore written to /home/marco/.ipfs-cluster/peerstore.
(base) marco@pc01:~$ ipfs-cluster-service daemon
2020-06-05T16:06:04.901+0200 INFO service ipfs-cluster-service/daemon.go:46 Initializing.
For verbose output run with "-l debug". Please wait...
2020-06-05T16:06:04.930+0200 INFO cluster ipfs-cluster@v0.13.0/cluster.go:132 IPFS
Cluster v0.13.0 listening on:
/ip4/127.0.0.1/tcp/9096
/p2p/12D3KooWMVhpQXrGhu7m5UCJHusiUJxLxieEJHFQuFiYpRfAWY2X
/ip4/192.168.1.7/tcp/9096
/p2p/12D3KooWMVhpQXrGhu7m5UCJHusiUJxLxieEJHFQuFiYpRfAWY2X
/ip4/172.17.0.1/tcp/9096
/p2p/12D3KooWMVhpQXrGhu7m5UCJHusiUJxLxieEJHFQuFiYpRfAWY2X
2020-06-05T16:06:04.931+0200 INFO restapi rest/restapi.go:515 REST API (HTTP):
/ip4/127.0.0.1/tcp/9094
2020-06-05T16:06:04.931+0200 INFO ipfsproxy ipfsproxy/ipfsproxy.go:320 IPFS Proxy:
/ip4/127.0.0.1/tcp/9095 -> /ip4/127.0.0.1/tcp/5001
2020-06-05T16:06:04.931+0200 INFO crdt go-ds-crdt@v0.1.12/crdt.go:275 crdt Datastore
created. Number of heads: 0. Current max-height: 0
2020-06-05T16:06:04.931+0200 INFO crdt crdt/consensus.go:272 'trust all' mode
enabled. Any peer in the cluster can modify the pinset.
2020-06-05T16:06:04.934+0200 INFO cluster ipfs-cluster@v0.13.0/cluster.go:619 Cluster
Peers (without including ourselves):
2020-06-05T16:06:04.934+0200 INFO cluster ipfs-cluster@v0.13.0/cluster.go:621 - No
other peers
2020-06-05T16:06:04.934+0200 INFO cluster ipfs-cluster@v0.13.0/cluster.go:634 ** IPFS
Cluster is READY **
So… my question is: how do I correctly specify the peers during the initialization phase and, possibly, the remote configuration file?
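For reference, this is the shape of the command as I understand it from the init help: --peers takes full multiaddresses (IP, cluster port, and the peer ID the daemon prints), not bare IDs. The addresses below are placeholders I made up, not real peers:

```shell
# Placeholder multiaddresses; substitute your real peers' addresses,
# using the cluster swarm port (9096) and the peer IDs from their logs.
PEER1="/ip4/192.168.1.7/tcp/9096/p2p/<peer-id-1>"
PEER2="/ip4/192.168.1.8/tcp/9096/p2p/<peer-id-2>"

# The init invocation would then be (run where ipfs-cluster-service exists,
# with the same CLUSTER_SECRET exported on every peer):
echo "ipfs-cluster-service init --consensus crdt --peers $PEER1,$PEER2"
```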
Thank you, Frank, for explaining to me how to add followers to my cluster.
Actually I was two steps behind: I was still trying to understand how to add a "normal" peer to the cluster, which I managed without pain this time.
At the moment I do not have a third laptop/PC on which to install another peer and try adding it as a follower.
But I will try for sure.