What I read about IPFS is that it addresses the issue of duplication by keeping one file per hash, so the file lives on one node and is removed from the other nodes.
My question is: what if the node that keeps our file goes down? Does that mean our file is inaccessible?
How does IPFS address this issue?
That’s not how it works (thankfully). IPFS does not remove content from other nodes so that only one node holds the content for a given hash. Rather, content for a given hash that is referenced from multiple places only needs to be stored once per node.
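To illustrate the per-node deduplication, here is a toy sketch of a content-addressed block store. It is not the actual IPFS implementation (IPFS keys blocks by a multihash, SHA-256 by default, and wraps it in a CID), but the principle is the same: identical bytes hash to the same key, so a node stores them only once no matter how many places reference them.

```python
import hashlib

# Toy content-addressed store for a single node: blocks are keyed by the
# SHA-256 of their bytes (a simplification of IPFS's multihash/CID scheme).
store = {}

def put(data: bytes) -> str:
    """Store a block and return its content address (hex digest)."""
    key = hashlib.sha256(data).hexdigest()
    store[key] = data  # identical content maps to the same key, stored once
    return key

# The same bytes added twice (e.g. referenced from two different "files")
# yield the same address, so the node keeps only one copy.
a = put(b"hello ipfs")
b = put(b"hello ipfs")
assert a == b
assert len(store) == 1
```

Other nodes that fetch this block keep their own copy under the same key, which is why content does not need to be deleted anywhere to get deduplication.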
Thanks so much for replying.
You have clarified the idea well. Just one more question: if all the nodes that store a specific piece of content go down, how does IPFS address this?
If all the nodes that provide a hash go down, then the hash becomes unretrievable until either one of those nodes comes back online or someone adds the content to their own IPFS node.
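A toy model of that availability behavior (assumed node names and a simplified "online set" stand in for the real DHT provider records): while any online node provides the hash, it resolves; when every provider is offline, lookups fail; and because addresses are derived from content alone, re-adding the same bytes on any node makes the original hash resolvable again.

```python
import hashlib

def addr(data: bytes) -> str:
    """Content address: hash of the bytes (simplified from IPFS CIDs)."""
    return hashlib.sha256(data).hexdigest()

# Toy network: each node holds blocks keyed by hash; 'online' tracks
# which nodes are currently reachable.
nodes = {"A": {}, "B": {}}
online = {"A", "B"}

def add(node: str, data: bytes) -> str:
    key = addr(data)
    nodes[node][key] = data
    return key

def fetch(key: str):
    """Return the block if any online node provides it, else None."""
    for n in online:
        if key in nodes[n]:
            return nodes[n][key]
    return None

key = add("A", b"hello ipfs")
assert fetch(key) is not None        # retrievable while A is online
online.discard("A")
assert fetch(key) is None            # all providers down: unretrievable
add("B", b"hello ipfs")              # someone re-adds the same content...
assert fetch(key) == b"hello ipfs"   # ...and the same hash resolves again
```

This is also why pinning matters in practice: a pinned block is exempt from garbage collection on that node, so keeping content pinned on at least one reliable node keeps its hash retrievable.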