Sync data from an existing node to the entire cluster

I have a node which was not previously part of a cluster.
It already has lots of production data (65377 pins, to be exact).

To form the cluster, I’m running ipfs-cluster-service@0.13.1 on this node and on another one.
I followed the guide in the docs, and ipfs-cluster-ctl peers ls shows the nodes are connected.

However, the data is not being copied from one node to the other.

I should also clarify that stopping the current node to copy all the data to the other node is not an option. Ideally this should be done without downtime.

Does that mean I must manually pin all local pins to the entire cluster?
Or is there a better way to handle this?


Do you mean that you are adding a node whose IPFS daemon has 65377 pins into a cluster?

Those pins are in the ipfs daemon; ipfs-cluster does not know about them (ipfs-cluster-ctl pin ls shows what the cluster knows about). For ipfs-cluster to track those pins, you need to pin them through ipfs-cluster as well.

ipfs pin ls --type=recursive | cut -d' ' -f1 | xargs -n1 ipfs-cluster-ctl pin add <your options>

would do it, I think?
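One detail worth noting: ipfs pin ls --type=recursive prints one "&lt;cid&gt; recursive" line per pin, so the CID column has to be extracted before it is handed to ipfs-cluster-ctl. A minimal sketch, wrapped in a hypothetical helper function (pin_all_to_cluster is not a real command, just a name chosen here):

```shell
# Sketch: pin every recursive pin of the local IPFS daemon into the cluster.
# `ipfs pin ls --type=recursive` prints "<cid> recursive" per line, so the
# CID column is cut out before being passed to ipfs-cluster-ctl.
pin_all_to_cluster() {
  ipfs pin ls --type=recursive \
    | cut -d' ' -f1 \
    | xargs -n1 ipfs-cluster-ctl pin add
}
```

Any extra pin options (replication factor, name, etc.) would go after pin add, since xargs appends each CID at the end of the command line.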

If you are using CRDT-consensus mode, after pin-adding those 60k pins, you may want to compact the state by manually doing state-export and then state-import on your nodes (Data, backups and recovery - Pinset orchestration for IPFS).
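The compaction step could look roughly like this. A sketch assuming ipfs-cluster-service 0.13.x: the cluster peer must be stopped first, since the state commands need exclusive access to the datastore, and the exact flags may differ between versions (check ipfs-cluster-service state --help). The file name state-backup.json is an arbitrary choice made here:

```shell
# Sketch: compact the CRDT state by exporting it and re-importing it,
# with the cluster peer stopped. state-backup.json is an arbitrary name.
compact_state() {
  ipfs-cluster-service state export -f state-backup.json  # dump the pinset
  ipfs-cluster-service state import state-backup.json     # rebuild it as one batch
}
```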


I see, thank you for that.

Regarding the export/import, I was reading through the docs and found this section:

In crdt, importing will clean the state completely and create a single batch Merkle-DAG node. This effectively compacts the state by replacing the Merkle-DAG, but to prevent this peer from re-downloading the old DAG, all other peers in the Cluster should have replaced or removed it too.

I’m not sure I understood the implications. Could you please elaborate on that?
Should I make sure I run the export/import on all nodes in the cluster?

It means that if you export/import, you need to do it on all your cluster nodes (importing the same file you exported on one of them), or import it on one and do a state cleanup on the rest.
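Spelled out as a sketch, one step per hypothetical helper function. The flags are from 0.13.x and nodeB is a placeholder hostname; every step runs with the cluster peer on that node stopped:

```shell
# Step 1, on one node: export the state to a file.
export_state() {
  ipfs-cluster-service state export -f state.json
}

# Step 2: copy that very same file to every other node (nodeB is a placeholder).
copy_state() {
  scp state.json nodeB:state.json
}

# Step 3, on every node: import that same file...
import_state() {
  ipfs-cluster-service state import state.json
}

# ...or, on peers where you skip the import, remove the old state instead,
# so the peer does not keep (and re-announce) the old Merkle-DAG.
cleanup_state() {
  ipfs-cluster-service state cleanup
}
```

Afterwards, restart the peers as usual with ipfs-cluster-service daemon.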

Gotcha… Thanks again.

Now I’m trying to run ipfs-cluster-service state export, however I keep getting the following error:

error creating state manager: could not determine the consensus component

The cluster node was started with the default CRDT consensus. The .consensus section of service.json contains only:

  "consensus": {
    "crdt": {
      "cluster_name": "ipfs-cluster",
      "trusted_peers": [

Any ideas?

Are you sure your configuration is where ipfs-cluster-service expects it to be?

Can you run with --debug: ipfs-cluster-service --debug state export?

I don’t think this error can happen with state export and not happen when running the daemon.
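For reference, ipfs-cluster-service reads its configuration and data from $HOME/.ipfs-cluster by default; if the VM keeps it elsewhere, the directory can be pointed at with -c/--config. A sketch, wrapped in a hypothetical helper (the path is a placeholder):

```shell
# Sketch: run the export with debug logging, pointing -c/--config at a
# non-default configuration directory (/path/to/cluster-dir is a placeholder).
debug_export() {
  ipfs-cluster-service --debug -c /path/to/cluster-dir state export
}
```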

Oh, nevermind… it was something wrong with the configuration of the VM.
It worked perfectly in another one I had.
Thank you.
