From what I understand, we could use IPFS Cluster with redundancy to ensure a file can still be retrieved from IPFS peers even when some nodes are down, depending on the replication factor.
When a file is added and pinned by Cluster peers, is it possible to segment the file so that its chunks are spread across the nodes rather than stored on any single node? Is that available by default? Or do all cluster peers store the entire content of the file? Thank you.
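To make the replication-factor idea concrete, here is a minimal Python sketch (not Cluster's actual allocator; peer names and the `place_pin` function are hypothetical). Each allocated peer pins the entire file, so the content stays retrievable as long as at least one allocated peer remains online:

```python
import random

def place_pin(cid, peers, replication_factor):
    """Pick `replication_factor` peers to hold a FULL copy of the pin.

    Hypothetical sketch of allocation: each chosen peer stores the
    whole file, so the pin survives node failures as long as at least
    one allocated peer is still up.
    """
    if replication_factor > len(peers):
        raise ValueError("not enough peers for the requested replication factor")
    return random.sample(peers, replication_factor)

peers = ["peer-a", "peer-b", "peer-c", "peer-d"]
allocation = place_pin("QmExampleCID", peers, replication_factor=2)

# Simulate one allocated peer going down: the other replica still serves it.
online = set(peers) - {allocation[0]}
retrievable = any(p in online for p in allocation)
print(retrievable)  # True
```

Note this is replication of whole copies, not sharding: no peer ever holds only part of the file.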
Not possible right now. This is called sharding. You can find more information by searching the forums and in Open FIXMEs in sharding · Issue #496 · ipfs/ipfs-cluster · GitHub .
@hector thanks for the reply. Is that different from how IPFS works without Cluster? In IPFS (w/o Cluster), a file has a single hash, but internally its chunks are stored on different nodes. Is that so?
In other words, is IPFS Cluster not decentralized?
Multiple chunks are stored on the same node, or on nodes that request them. It is not different; it is just that Cluster does not have the ability to orchestrate storage so that some chunks go to one place and other chunks to another, because Cluster tells IPFS to pin a file, and that means storing all of it.
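The chunking behaviour described above can be sketched in a few lines of Python. This is a deliberately simplified model (flat list of chunk hashes rather than IPFS's real Merkle DAG / UnixFS layout, and a toy 4-byte chunk size): the file is split into chunks, each chunk gets its own content hash, and a root hash identifies the whole file. Pinning the root implies storing every chunk under it:

```python
import hashlib

CHUNK_SIZE = 4  # tiny, for illustration only; real IPFS chunks are far larger

def chunk_and_hash(data: bytes):
    """Split data into fixed-size chunks, hash each, and derive a root hash.

    Simplified model of content addressing: the root hash is computed
    over the chunk hashes, so it uniquely identifies the whole file.
    """
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    chunk_hashes = [hashlib.sha256(c).hexdigest() for c in chunks]
    root_hash = hashlib.sha256("".join(chunk_hashes).encode()).hexdigest()
    return root_hash, chunk_hashes

root, leaves = chunk_and_hash(b"hello ipfs cluster")

# "Pin the root" means a node must fetch and keep ALL the leaves below it;
# there is no built-in way to pin only some of them on one node and the
# rest elsewhere.
print(len(leaves))  # 5 chunks for this 18-byte input
```

The key point: chunks exist, but pinning is all-or-nothing per node, which is why distributing the chunks of one file across Cluster peers requires the sharding feature discussed above.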
Ok, got it. As a follow-up question: if an admin accesses a node, they would be able to retrieve the node's contents, right? Or can only those who have the cryptographic hash access it?
I don’t fully understand the question, since all content in IPFS is retrieved by cryptographic hash. Cluster just orchestrates IPFS daemons; it does nothing with the data itself, other than storing some extra metadata.
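A small sketch may help separate the two cases in the question. This is not the real IPFS datastore layout, just a toy key-value store keyed by content hash: a remote reader needs the hash to fetch a specific block, but an admin with direct access to the node's storage can enumerate everything it holds. Hashes are addresses, not access-control secrets:

```python
import hashlib

# Toy block store: blocks are keyed by their own content hash (assumption:
# a plain dict stands in for the node's on-disk datastore).
blockstore = {}

def put_block(data: bytes) -> str:
    """Store a block under its content hash and return the hash."""
    key = hashlib.sha256(data).hexdigest()
    blockstore[key] = data
    return key

cid = put_block(b"some pinned content")

# A reader who knows the hash retrieves exactly that block.
fetched = blockstore[cid]

# An admin with access to the node does not need any hash: they can
# simply list every block stored locally.
all_blocks = list(blockstore.values())
print(len(all_blocks))  # 1
```

So data stored on an IPFS node is not encrypted by the hash; the hash only names the content. Anyone wanting confidentiality must encrypt data before adding it.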