Trying to fire this command:
ipfs-cluster-ctl sync Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58
I'm getting this response:
No help topic for 'sync'
Has the command syntax changed?
We switched to a stateless pin tracker, and the sync operation no longer made sense, as things do not get out of sync anymore. If you found a place in our documentation that told you to run this, let me know so I can fix it.
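For context, with the stateless pin tracker the old sync step is covered by the remaining status and recover commands. A minimal sketch, assuming a running cluster peer and using the CID from the question above:

```shell
# Inspect the current pin status of a CID across all cluster peers.
ipfs-cluster-ctl status Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58

# Ask peers to re-attempt pins that are in an error state
# (this replaces the old sync-then-recover cycle).
ipfs-cluster-ctl recover Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58
```

These commands require a live cluster peer to talk to, so the exact output depends on your deployment.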
Please check the second screenshot on this page.
And also check here --> http://cluster.ipfs.io.ipns.localhost:8080/documentation/guides/pinning/#syncing
Okay, so do you mean you can't tweak your pin status anymore?
I just tried to unpin an object on IPFS and got an error that Qm… is not pinned … in ipfs-cluster.
So if I do an ipfs-cluster-ctl status, I will always get the true status… no chance to fake it anymore?
That would be great!
Okay, a short update on my findings:
When a new follower joins the cluster and I pin or unpin after the follower has joined, the status is real (pinned, pinning, or error). All good; also, when I remove the object via IPFS, the pin tracker catches it up again.
But if I already have pinned objects on the cluster and a new follower joins, the status stays REMOTE for those objects. Also, the objects are not actually pinned on the follower side.
I am keeping the follower up and running now; maybe the status will change via a pin tracker action?
Here is my config part:
"connection_manager": {
"high_water": 400,
"low_water": 100,
"grace_period": "2m0s"
},
"state_sync_interval": "5m0s",
"pin_recover_interval": "12m0s",
"replication_factor_min": 2,
"replication_factor_max": 100,
"monitor_ping_interval": "15s",
"peer_watch_interval": "5s",
"mdns_interval": "10s",
"disable_repinning": false,
Hint: if I unpinned and re-pinned a CID now, the status would go to PINNED for this follower on this CID.
While checking my config again: is it related to my entry replication_factor_min: 2?
I have 3 nodes running, and all have pinned the CIDs.
Now, when I start the 4th node, it seems not to pin because of that?
Could it be that replication_factor_max: 100 is only used at the moment of pinning, while factor_min is always checked, and if min is reached, no pinning happens?
Should I go for:
"replication_factor_min": -1,
"replication_factor_max": 100,
to avoid this?
If the CID is sufficiently pinned per the replication factor, nothing will change when new peers join unless you repin.
If you want to pin on all the peers, leave the replication factors at the -1 default.
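The "pin everywhere" setup described above would look like this in the cluster section of service.json (a sketch; the field names match the config fragment earlier in the thread, and -1 means "pin on every peer"):

```json
"replication_factor_min": -1,
"replication_factor_max": -1,
```

With both set to -1, every peer, including followers that join later, is expected to pin every CID.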
Thanks, I just played with that!
Keeping factor 2, I had three nodes with the object pinned and one without.
I shut down two of the nodes holding the pinned object.
The node with status REMOTE picked it up!
Nice, it works like it should.
Okay, thanks. If I keep factor_min at 2 but set -1 for a single pin on the cluster,
would this CID be pinned by all nodes, because I override the default value from my service.json?
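For reference, ipfs-cluster-ctl pin add accepts per-pin replication flags that override the service.json defaults; a sketch of the override being asked about (using the CID from earlier in the thread, assuming a running cluster peer):

```shell
# Pin with explicit per-pin replication factors, overriding the
# service.json defaults for this CID only; -1 means "pin on every peer".
ipfs-cluster-ctl pin add --replication-min -1 --replication-max -1 Qma4Lid2T1F68E3Xa3CpE6vVJDLwxXLD8RfiB9g1Tmqp58
```

This needs a live cluster peer to run against, so treat it as a sketch rather than verified output.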