So, just to make sure you’re aware, IPFS doesn’t automatically distribute your files. Other nodes have to download them first (a common misconception I like to clear up).
However, if you want to figure out how many pieces a file has been broken into, you can use ipfs refs -r /ipfs/MyFile. That’ll list the hashes of each chunk.
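For example, something like this (a rough sketch; myfile.bin and the captured hash variable are just placeholders):

    # Add a file and capture its root hash (-Q prints only the final hash)
    FILE_HASH=$(ipfs add -Q myfile.bin)
    # List the hashes of every chunk the file was split into
    ipfs refs -r /ipfs/$FILE_HASH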
To figure out which nodes are storing each chunk, you can run ipfs dht findprovs $CHUNK_HASH. Keep in mind this only tells you about reachable nodes that claim to be providing the data: other nodes you can’t see may also be storing your file, and nodes that announce a chunk may no longer actually hold it.
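Combining the two, you could loop over the chunks and query providers for each (again just a sketch, reusing the $FILE_HASH placeholder from above):

    # For every chunk of the file, ask the DHT which peers claim to provide it
    for CHUNK_HASH in $(ipfs refs -r /ipfs/$FILE_HASH); do
      echo "Providers for $CHUNK_HASH:"
      ipfs dht findprovs "$CHUNK_HASH"
    done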
The intention would be to run a modified version of the original code with a separate client/server build, where all added files are pinned automatically on all connected nodes in the private network.
So after running the ipfs add command, the resulting hash would be distributed across all connected nodes, which would each pin it in turn.
Hence the need to know exactly which node is storing which part, and how redundant that storage is.
If using cluster, you can run ipfs add and then trigger a cluster pin add to achieve this. Upcoming versions of cluster will allow you to add files directly with cluster too.
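Roughly like this (assuming ipfs-cluster-ctl is talking to your local cluster peer; the filename is a placeholder):

    # Add the file through the local IPFS daemon, then pin it across the cluster
    FILE_HASH=$(ipfs add -Q myfile.bin)
    ipfs-cluster-ctl pin add $FILE_HASH
    # Check where the pin landed and its status on each cluster peer
    ipfs-cluster-ctl status $FILE_HASH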
Actually, if your files are small enough, you can use the IPFS proxy provided by ipfs-cluster and do ipfs --api <ipfs-cluster-proxy-endpoint> add (instead of using the ipfs daemon API directly). This will add to ipfs and pin the result too. But as I say, only for files small enough to be buffered in memory before adding.
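For example (assuming the cluster proxy is on its usual default listen address; adjust the multiaddress to your own cluster configuration):

    # Point the ipfs CLI at the cluster's IPFS proxy instead of the local daemon.
    # This adds the file and pins the resulting hash across the cluster in one step.
    ipfs --api /ip4/127.0.0.1/tcp/9095 add myfile.bin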