Can a node altruistically store files?

Currently, an IPFS node only records providers in the DHT, not the file data itself, which means each file is still held by just one node and becomes unavailable when that holder goes offline. If a file were also stored on, say, just one additional node, data availability would improve greatly. If everyone (or at least some nodes) altruistically stored files for others, the whole network would be much more robust.

Is there an add-on or an established practice that does this? For example: I set aside some space (say, 1 GB), randomly grab (or somehow detect the least-replicated) data blocks from the DHT, and store them on my own node until the allocated space is full. After that, I replace the oldest blocks with newer ones. Throughout the whole process I pay with my own space and bandwidth, but expect nothing in return.
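The bookkeeping for such a fixed-budget, replace-oldest scheme could look roughly like this. This is a minimal sketch: the type and method names are my own invention, and actually fetching or unpinning blocks would go through the IPFS node's API rather than this in-memory structure.

```go
package main

import (
	"container/list"
	"fmt"
)

// altruisticCache tracks donated blocks under a fixed byte budget,
// evicting the oldest entries first when the budget is exceeded.
// (Hypothetical names; real fetch/remove calls would go through a node.)
type altruisticCache struct {
	budget int64                    // max total bytes to donate (e.g. 1 GiB)
	used   int64                    // bytes currently stored
	order  *list.List               // entries in insertion order, oldest first
	index  map[string]*list.Element // CID -> its position in order
}

type entry struct {
	cid  string
	size int64
}

func newAltruisticCache(budget int64) *altruisticCache {
	return &altruisticCache{budget: budget, order: list.New(), index: map[string]*list.Element{}}
}

// Add records a newly fetched block and returns the CIDs evicted to
// stay within budget (the caller would unpin those on the real node).
func (c *altruisticCache) Add(cid string, size int64) (evicted []string) {
	if _, ok := c.index[cid]; ok {
		return nil // already stored
	}
	c.index[cid] = c.order.PushBack(entry{cid, size})
	c.used += size
	for c.used > c.budget && c.order.Len() > 1 {
		oldest := c.order.Front()
		e := oldest.Value.(entry)
		c.order.Remove(oldest)
		delete(c.index, e.cid)
		c.used -= e.size
		evicted = append(evicted, e.cid)
	}
	return evicted
}

func main() {
	c := newAltruisticCache(100)
	c.Add("cid-a", 60)
	c.Add("cid-b", 30)
	fmt.Println(c.Add("cid-c", 40)) // over budget: oldest block goes; prints [cid-a]
}
```

A real implementation would probably prefer eviction by pin time or replica count rather than pure insertion order, but the budget accounting stays the same.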


This feature isn't built in, but you could implement it yourself; nothing in the protocol would prevent it from working.


So, what do you think is a proper and easy way to do it?

I think it involves two steps:

  1. To gather blocks from the DHT, the program should look into the local portion of the DHT, read the provider records, and extract the CIDs from them.
  2. For each CID, fetch it to the local node and keep a record of it.

The main difficulty is the first step: Kubo and Helia don't expose an API for the internals of the DHT (only methods to get and put a specific value), so to read a provider record I would have to implement an API function myself and rebuild the whole binary.

Which part of the codebase contains the local DHT datastore? Once I locate it, I can expose it to external programs and continue my work from there. The whole project is so big that just exploring its organization is exhausting …(。•ˇ‸ˇ•。)…

If you want to help, you can also just run a public HTTP gateway. People will request the blocks they need from it, and it will effectively cache content for both HTTP and IPFS users.
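For reference, making a Kubo node's gateway publicly reachable is mostly a matter of binding it to a public interface, since the default only listens on localhost (check the config docs for your Kubo version, and be aware of the bandwidth this exposes you to):

```shell
# Listen on all interfaces instead of the default 127.0.0.1:8080
ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080

# Restart the daemon for the change to take effect
ipfs daemon
```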

DHT code is here: GitHub - libp2p/go-libp2p-kad-dht: A Kademlia DHT implementation on go-libp2p
