If I have a large file that needs to be stored in the cluster, but no single node has enough capacity, can I split the file into blocks and store them on different nodes?
I read the documentation, and it says that IPFS uses the root hash to query the DHT, fetches the root block, and then downloads the file. So if I get the file from multiple nodes, does that mean every node stores the complete file?
Easy: IPFS already splits files into blocks. Just have one node with enough capacity to store the whole file; the rest of the nodes in the cluster can then store some of its blocks rather than the complete file. Overall you still get a bunch of nodes that can provide the object.
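To see for yourself that IPFS already chunks large files into blocks, something like this works (a sketch assuming a running local daemon; the CIDs printed will differ, and `<rootCID>` is a placeholder you'd fill in from the `ipfs add` output):

```shell
# Create a file larger than the default 256 KiB chunk size.
dd if=/dev/urandom of=big.bin bs=1M count=4

# Add it; IPFS splits it into blocks and prints the root CID.
ipfs add big.bin

# List the CIDs of the root block's direct children --
# these are the sub-blocks the file was split into.
ipfs refs <rootCID>
```

Each of those child CIDs is an independently addressable block, so in principle different nodes can hold different subsets of them.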
So the correct answer is that Cluster has implemented this functionality, but it depends on IPFS implementing depth-limited pinning, which has not happened yet. Still, it can be done: depending on your file, you could pin sub-DAGs to different IPFS nodes.
It depends on the DAG layout, but you can pin the sub-DAGs, and then pin the root block alone (using --type=direct) on one (or all) of the nodes.
This assumes there is a root node with multiple links and that the sub-DAGs behind those links fit on the individual nodes; otherwise you need to split further, and so on.
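A minimal sketch of doing this by hand with the `ipfs` CLI, run against each node's daemon (the `Qm...` CIDs are placeholders for your actual root and child CIDs; this assumes the root block's direct links are the sub-DAGs you want to distribute):

```shell
# Inspect the root block's links to find the sub-DAG CIDs.
ipfs ls QmRootCID

# On node A: recursively pin the first sub-DAG.
ipfs pin add QmChild1CID

# On node B: recursively pin the second sub-DAG.
ipfs pin add QmChild2CID

# On one (or all) nodes: pin only the root block itself,
# without recursing into its children (a "direct" pin).
ipfs pin add --recursive=false QmRootCID
```

With the root directly pinned somewhere and every sub-DAG recursively pinned on some node, the cluster as a whole can serve the complete file even though no single node stores all of it.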