From @sybrandy on Tue Oct 27 2015 16:37:29 GMT+0000 (UTC)
I know there have been several similar questions, but nothing really answered this. We have some files that can be over 1GB in size that we need to distribute to various nodes in our cluster. Currently, this is a very bandwidth-intensive process, as all of the files are distributed to all of the potential nodes at once. What we would like instead is to have the files pulled down on demand, but also cached locally in case they need to be reread.
Can this be done easily with IPFS? I've seen a couple of answers allude to this type of capability, but I don't know if it's automatic or something that needs to be done manually, e.g. pulling down the file and then adding it to the local instance.
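To make the manual workflow I have in mind concrete, here is a rough sketch using the `ipfs` CLI (the CID below is a placeholder, and I'm assuming a local daemon is running):

```shell
# Fetch the file on demand by its content hash; the retrieved
# blocks land in the node's local blockstore as a side effect.
ipfs get <file-cid> -o /data/bigfile.bin

# Pin it so the cached copy survives garbage collection and
# later reads are served from the local store.
ipfs pin add <file-cid>
```

My question is essentially whether the caching shown here happens automatically on retrieval, or whether an explicit step like the pin is required to keep the data local.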
Copied from original issue: https://github.com/ipfs/faq/issues/69