When a file is uploaded to a node, can IPFS split it across many nodes?

Hi everyone,

I'm very interested in the IPFS protocol. I've started studying it and I'm ready to build a project, but first I need to understand a few technical points. When a file is uploaded to a node through the API, will that node automatically share it with the other nodes? Can the file be segmented so that many nodes each store a part of it, or must the full file fit on a single node? Is IPFS like a proxy server that fetches the source from many servers?
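From what I understand (this is a sketch of the idea, not official IPFS code): a node does not push content anywhere automatically; other nodes fetch blocks only when someone requests the CID. The file itself, though, is always split into chunks when it is added. Roughly, in Python (names like `chunk_and_hash` are made up for illustration):

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # IPFS's default chunker uses 256 KiB blocks

def chunk_and_hash(path):
    """Split a file into fixed-size chunks and hash each one --
    roughly how IPFS turns a file into content-addressed blocks."""
    block_hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            block_hashes.append(hashlib.sha256(chunk).hexdigest())
    # IPFS links the blocks together in a Merkle DAG; the root
    # hash of that DAG becomes the file's CID.
    root = hashlib.sha256("".join(block_hashes).encode()).hexdigest()
    return root, block_hashes
```

Peers can fetch individual blocks from whichever nodes have them, but a node that *pins* the CID ends up storing all of the blocks, not just a slice.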

Here is my imagined use case:
I have a KTV (karaoke) company as a client. They have a main file server with 100 GB of storage, and each video file is about 1 GB. They have 4 stores, each with 10 rooms and a good network. They want to set up their file system on IPFS.

So do I need to build a 100 GB server for each of the 4 stores, because the source data is 100 GB? If the budget is low and each store can only afford a 50 GB server, will that not work?
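A back-of-the-envelope way to check the storage question, assuming each whole video must be pinned on some number of stores (the function name and numbers are just for illustration):

```python
def max_videos(node_count, node_capacity_gb, video_size_gb, replication_factor):
    """How many videos the cluster can hold if every video must be
    pinned in full on `replication_factor` different nodes."""
    total_capacity_gb = node_count * node_capacity_gb
    # Each video consumes video_size_gb on every node that pins it.
    return total_capacity_gb // (video_size_gb * replication_factor)

# 4 stores with 50 GB each, 1 GB videos:
print(max_videos(4, 50, 1, 1))  # 200 videos, but each exists on only one store
print(max_videos(4, 50, 1, 2))  # 100 videos, each mirrored on 2 stores
```

So with 50 GB per store and a replication factor of 2, the 100 GB catalog fits exactly, but any single store holds only half of it locally and has to stream the rest from the other stores over the network.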

Thank you!

I read the ipfs-cluster guide, which may solve my problem:

https://github.com/ipfs/ipfs-cluster/blob/master/docs/ipfs-cluster-guide.md

I also have another question: when I upload a file to the cluster, will the cluster share the file with the other nodes and segment it, like RAID 50/60?

At the moment, ipfs-cluster does not provide the data ingestion + chunking that would allow implementing RAID-like strategies, so it is limited to replicating CIDs that fit on a single ipfs node.
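For what it's worth, that whole-CID replication is what the replication factor settings in the cluster's `service.json` control. The field names below are taken from the ipfs-cluster docs and may differ between versions, so treat this as a sketch:

```json
{
  "cluster": {
    "replication_factor_min": 2,
    "replication_factor_max": 2
  }
}
```

With this setting, each pinned CID is allocated in full to 2 peers; no peer ever stores only a partial file.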

That said, work is being done in that direction, and it is our plan to add that functionality in the future.


Thank you! 💪