I have set up a private network with 3 IPFS nodes and 3 ipfs-cluster peers.
I set replication_factor_min and replication_factor_max to 0, added a file with the --replication option, and it worked perfectly.
But with the --allocations option, data is currently being allocated to all the peers, irrespective of the peer IDs provided in the command:
ipfs-cluster-ctl add --allocations PEERID1, PEERID3 samplefile.txt
The same happens with
ipfs-cluster-ctl pin add --allocations PEERID1, PEERID3 samplefile.txt
Could you please point me to the right way of allocating the content?
I have set the replication_factor_min and replication_factor_max to 0,
That is your default replication factor for pins for which you don’t specify anything. When set to 0, it is understood that you want cluster’s default, which is -1 (pin everywhere).
Then you are pinning something without specifying --rmin or --rmax manually, so it gets pinned everywhere, because that is the configured setting as cluster sees it.
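For reference, those defaults live in the "cluster" section of each peer’s service.json (the snippet below is an illustrative fragment, not a complete file):

```
{
  "cluster": {
    "replication_factor_min": 0,
    "replication_factor_max": 0
  }
}
```

With both set to 0 (interpreted as -1), any pin added without explicit --rmin/--rmax will be allocated to every peer.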
You need to pin like this: ipfs-cluster-ctl pin add --allocations "PEERID1,PEERID2" --replication 2 <CID>.
The given allocations are used as preferential destinations: if your desired replication factor is lower than the list, it will just use the first peers in that list. If it is higher, it will use the peers in the list and select the rest from the available pool.
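That selection rule can be sketched in a few lines of Python. This is a hypothetical model of the behaviour described above, not ipfs-cluster’s actual implementation; the function name and peer IDs are made up for illustration:

```python
def select_allocations(preferred, available, replication):
    """Model of the rule above: preferred peers are used first,
    and any shortfall is filled from the remaining pool."""
    # Keep only preferred peers that are actually available,
    # truncated to the desired replication factor.
    chosen = [p for p in preferred if p in available][:replication]
    # Top up from the rest of the pool if the replication factor
    # is higher than the preference list covers.
    for peer in available:
        if len(chosen) >= replication:
            break
        if peer not in chosen:
            chosen.append(peer)
    return chosen

pool = ["PEER1", "PEER2", "PEER3"]
print(select_allocations(["PEER1", "PEER3"], pool, 1))  # ['PEER1']
print(select_allocations(["PEER1", "PEER3"], pool, 3))  # ['PEER1', 'PEER3', 'PEER2']
```

So with --replication 1 only the first listed allocation is used, and with --replication 3 the two listed peers plus one more from the pool are used.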
They are all privately connected through the swarm.key, and when I run the nodes they show that they are connected to PEERID1. I have added the addresses in the peerstore.
But it is still not able to retrieve the contents after pinning to PEERID1, once the peer which has the actual file goes offline.
Sorry, can you explain your setup again? Is the node that has the content part of the cluster? I’m confused about what is running and what is not running when you pin.
I think you did things right wrt cluster. This looks like a problem with IPFS daemons not being connected.
What is ipfs swarm peers showing?
They are in a private network so they cannot rely on bootstrappers. You need to let them bootstrap to, for example, the ipfs daemon in peer one. Cluster attempts to connect ipfs daemons among peers, but this depends on what IPs they are reporting to have. Are peers on the same LAN? If not, you need to ensure that IPFS daemons are connected among themselves too.
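For example, you could point the other daemons at peer one by editing the Bootstrap list in each IPFS config (or with ipfs bootstrap add). The address and ID below are placeholders, not values from your setup:

```
{
  "Bootstrap": [
    "/ip4/203.0.113.10/tcp/4001/p2p/<PEER1-ipfs-id>"
  ]
}
```

On a private network this replaces the public bootstrappers that the daemons can no longer reach.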
As a small note, I would re-use the same peerstore file for all 3 cluster peers, containing all 3 addresses. You want Peer2 and Peer3 to be connected to each other too.
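A shared peerstore file (one multiaddress per line, in each peer’s ipfs-cluster config directory) could look like this; the IPs and peer IDs are placeholders:

```
/ip4/203.0.113.10/tcp/9096/p2p/<PEERID1>
/ip4/203.0.113.11/tcp/9096/p2p/<PEERID2>
/ip4/203.0.113.12/tcp/9096/p2p/<PEERID3>
```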
PEER1 and PEER2 are not on the same LAN, so as you mentioned I connected them through ipfs swarm connect.
Then I was able to pin the file on the PEER1. I could see the status as pinned.
I don’t want to connect any other nodes in the cluster.
The model looks like many-to-one (n-1 nodes in the cluster, all connected to the 1 remaining node in the cluster).
When I did from PEER2
ipfs-cluster-ctl pin add --wait <file CID/hash>
The file CID was pinned on PEER1. I could see the status as PINNED.
But when I do
ipfs-cluster-ctl pin add --wait --replication 1 --allocations <peer ID> <file CID/hash>
The status shows as REMOTE.
Not sure how to pin a CID onto a particular node/peer.
What is the value to be used for the --allocations flag? Is it the ipfs id, or the ipfs-cluster-ctl id?
And is it sufficient if we give just the ID itself, or the whole address like /ip4//tcp/9096/p2p/?
Note that if your replication factor is 1 and you have already pinned to 1 peer, I think it won’t pin it somewhere else, because the replication factor is satisfied. The --allocations flag is a priority list of where things should be pinned, but cluster does not need to make that decision since things are already pinned. So if you want to move the pin from one allocation to another, the best way is to unpin and then pin again.
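The unpin-then-pin sequence would look something like this (peer ID and CID are placeholders):

```
ipfs-cluster-ctl pin rm <CID>
ipfs-cluster-ctl pin add --replication 1 --allocations <PEERID2> <CID>
```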
Yes, I think that was the problem: as the replication value was satisfied, it wasn’t re-pinning. So I changed the replication flag value to 2 and gave one allocation value (PEER1), and then it pinned, showing the two peer ID values in the allocations.