Offline cache update

Is there a way to update the IPFS cache offline?
For example, an IPFS node (let's call it node A) generates a file listing the blocks it is missing.

This file is then transferred on a flash drive to another IPFS node (let's call it node B). Node B processes the request and exports the necessary blocks to the flash drive.
The flash drive then returns to node A, and node A imports the received data into its cache.
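The round trip described above can be sketched with kubo's plumbing commands plus coreutils. The `ipfs` commands shown in comments are real (`ipfs refs local`, `ipfs refs -r`, `ipfs block get`/`put`), but the CIDs and the flash-drive path are made up, and the diff step runs on stub data here so it works without a daemon:

```shell
# Node A would write its have-list with:        ipfs refs local > have.txt
# The wanted DAG's full ref list would come from node B via: ipfs refs -r <root>
# Stub CIDs stand in for both lists below.
printf 'QmAAA\nQmBBB\nQmCCC\n' | sort > want.txt   # blocks the DAG needs
printf 'QmBBB\n' | sort > have.txt                 # blocks node A already has

# Blocks to export = wanted minus already-held (comm -23 keeps lines only in want.txt).
comm -23 want.txt have.txt > missing.txt
cat missing.txt    # prints QmAAA and QmCCC

# Node B would then run, for each CID in missing.txt:
#   ipfs block get "$cid" > "/mnt/usb/blocks/$cid"
# and node A would import each delivered block with:
#   ipfs block put < "/mnt/usb/blocks/$cid"
```

The `comm` step is the interesting part: it keeps the flash drive small by shipping only the difference between the two block lists.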

I guess you want to air-gap the network of node A?

Theoretically, you just have to write the relevant files to the flash drive from node B, then run `ipfs add` on those files on node A. Node A will chunk them and should arrive at the same hashes.
Be sure to use the same options on both nodes (the defaults, for example).
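To see why matching options matter, here is a toy simulation of the chunk-and-hash step using `split` and `sha256sum` instead of a live node. The chunk size mirrors kubo's default chunker (`size-262144`); everything else is made up. Two "nodes" chunking the same bytes with the same size get identical digests:

```shell
head -c 600000 /dev/zero > big.bin        # sample file, ~586 KiB

# "Node A": chunk at 256 KiB and hash each chunk.
split -b 262144 big.bin a_chunk.
sha256sum a_chunk.* > digests-a.txt

# "Node B": same bytes, same chunk size.
split -b 262144 big.bin b_chunk.
sha256sum b_chunk.* > digests-b.txt

# File names differ, so compare the hash column only.
cut -d' ' -f1 digests-a.txt > a.hashes
cut -d' ' -f1 digests-b.txt > b.hashes
cmp -s a.hashes b.hashes && echo identical
```

Change the `-b` value on one side (the analogue of a different `--chunker` option) and the digests no longer line up, which is exactly why both nodes must use the same `ipfs add` options.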

Yes, we are working with an air gap;
data exchange needs to be two-way.

It is not known in advance which blocks node A will request. Node B will fetch them from the Internet or from neighbors as needed. In other words, we need to make node B a proxy for node A.
(It’s like a link to a space station orbiting Saturn)

Good luck delivering the flash drive then :wink:

If A is air-gapped, I’m curious about how it will be able to constitute a want-list of hashes in the first place…

Anyway, I don't know exactly where in the code node B would "export to the drive" the requested blocks. Maybe in Bitswap? Or should offline physical transport via thumb drive be implemented as a transport?

@stebalien would know for the Go side.


A signal round-trip time of 71 minutes justifies switching to a disconnected mode. In that time you could easily write the data to a flash drive and drive it to a country village, and that would be faster than downloading it over GPRS. The whole village could then use the transferred data almost immediately after it is placed in the local IPFS cache.

It seems to me that IPFS is very well suited to this case, but it needs an autonomous transport. Unfortunately, I don't know of an implementation of such a transport, which in turn led me to create this topic.
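A large part of such an autonomous transport already exists as kubo plumbing: `ipfs dag export` packs a whole DAG into a single CAR file, and `ipfs dag import` reads one back into the local blockstore. A sketch, with a hypothetical root CID and mount point; the commands only run if kubo is installed:

```shell
ROOT=bafybeiexamplerootcid       # hypothetical root CID of the wanted DAG
USB=/mnt/usb                     # hypothetical flash-drive mount point

if command -v ipfs >/dev/null 2>&1; then
  # On the connected node: pack the requested DAG into one CAR file.
  ipfs dag export "$ROOT" > "$USB/bundle.car"
  # ...carry the flash drive to the offline node, then:
  ipfs dag import "$USB/bundle.car"
else
  echo "kubo not installed; commands shown for illustration only"
fi
```

Because the CAR file carries the blocks addressed by their CIDs, the offline node can verify everything it imports without ever contacting the network.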

I think this will become possible after the implementation of a search index or a global resource catalog.
Interested nodes could subscribe to it via /ipns links.

Disconnected nodes can generate a file with the required /ipns links (and possibly a list of blocks they already have for the requested links, or even a list of everything available).

These files are copied to a flash drive and taken to a trusted place with a normal connection to the rest of the world. There, an application can be started that reads the requests and writes the requested data to the flash drive.

The flash drive is then sent to where the data is needed; there, the data is imported into the IPFS cache and the loop repeats. It's like delivering a website by email in the 90s.

(Flashback: search engines were not yet efficient enough. The Internet was spotty and expensive, used mostly for email. It was the time of Internet resource directories and Usenet, of mail robots that loaded a page and sent it by email. You could subscribe to regular updates of a site's page and receive them by e-mail. RSS had not been invented yet.)

Sorry for my broken English.
