IPFS interface with HTTP/FTP for backward compatibility

I am wondering whether ipfs get or some other command can obtain files over FTP/HTTP to retain backward compatibility (in case the hash/file is not available over IPFS).

Use case: a lot of large files are currently hosted on FTP/HTTP, but not on IPFS. The user already has IPFS installed. Using this command they could easily obtain the file through a standard ipfs command that fetches the file, hashes it, and seeds/pins it to the local IPFS node, regardless of which protocol was used to fetch it. The file is then served to the network. Because this process is just as simple as getting a file over HTTP/FTP/SCP/IPFS, and IPFS provides a method that abstracts over the protocols, more users would migrate to IPFS. Once the file is hashed and seeded, it becomes available to everybody on the network (using a native ipfs get)… which in turn drives adoption.

Perhaps it doesn’t have to be “ipfs get”; it could be “ipfs awesomeGet” :slight_smile: or some other command which uses ipfs get/FTP/HTTP or other protocols and is invoked not with the hash of the desired file, but with the hash of another file (which contains the addresses where the file is stored at HTTP/FTP/SCP locations).

To implement this, I was thinking we would need a meta-file, which would list everywhere this file can be obtained from:

file1.txt (with #hash1):

FTP: 123.123.124.444/xyz.tar
HTTP: 223.123.124.444/xyz.tar
SCP: 323.123.124.444/xyz.tar
IPFS hash: #hash2 # if already available on IPFS

On running “ipfs awesomeGet #hash1”, ipfs gets the above file1.txt, parses it, and uses FTP/HTTP/SCP/IPFS to fetch the file from the specific locations listed in it. If the file was not obtained from the IPFS network, it then hashes the file, pins it to the local node, and makes it available to the network.
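To make the flow concrete, here is a minimal sketch of what such a wrapper could look like. Everything here is my own invention for illustration: the JSON meta-file format, the field names, and the awesomeGet name are hypothetical, and the sketch just shells out to the existing `ipfs get`, `ipfs pin add`, and `ipfs add` commands rather than using any internal API. It only handles the HTTP fallback; FTP/SCP would be handled analogously.

```go
// awesomeget.go — hypothetical wrapper around `ipfs get` with an HTTP fallback.
// The meta-file format and all names here are illustrative, not part of go-ipfs.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
	"os/exec"
)

// MetaFile is a hypothetical description of where a file can be obtained.
type MetaFile struct {
	Name string `json:"name"`           // e.g. "xyz.tar"
	IPFS string `json:"ipfs,omitempty"` // optional IPFS hash, if already available
	HTTP string `json:"http,omitempty"` // e.g. "http://223.123.124.444/xyz.tar"
	// FTP/SCP locations would be handled similarly.
}

// runIPFS shells out to the locally installed ipfs binary.
func runIPFS(args ...string) error {
	cmd := exec.Command("ipfs", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

// fetchHTTP downloads the file over plain HTTP as a fallback.
func fetchHTTP(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func awesomeGet(metaHash string) error {
	// 1. Fetch the meta-file itself over IPFS (it must already be on the network).
	if err := runIPFS("get", "-o", "meta.json", metaHash); err != nil {
		return err
	}
	raw, err := os.ReadFile("meta.json")
	if err != nil {
		return err
	}
	var meta MetaFile
	if err := json.Unmarshal(raw, &meta); err != nil {
		return err
	}

	// 2. Prefer IPFS if the content hash is already known.
	if meta.IPFS != "" && runIPFS("get", "-o", meta.Name, meta.IPFS) == nil {
		return runIPFS("pin", "add", meta.IPFS)
	}

	// 3. Fall back to HTTP, then add and pin the result so it is served over IPFS.
	// Note: a real tool should verify the downloaded bytes against an expected
	// hash before re-seeding them.
	if err := fetchHTTP(meta.HTTP, meta.Name); err != nil {
		return err
	}
	return runIPFS("add", "--pin", meta.Name)
}

func main() {
	if err := awesomeGet(os.Args[1]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```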

Perhaps there are some better implementations?

Is there any wrapper function like this currently implemented in IPFS?

It would probably be easier, and more reliable, to use ipfs-pack on the FTP and HTTP servers.

Otherwise, what you’re talking about is making FTP and HTTP datastore implementations for IPFS, similar to the github datastore that @stebalien has discussed elsewhere (if the git hashes aren’t on IPFS it tries to retrieve that content from github).
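Roughly, such a datastore would resolve a missing block by fetching the same content from its original HTTP/FTP host. The sketch below is a simplified stand-in I wrote to illustrate the idea; the interface and URL layout are made up, and the real go-datastore interface used by go-ipfs has a different, larger surface.

```go
// A simplified, hypothetical read-only datastore backed by an HTTP mirror.
package httpstore

import (
	"fmt"
	"io"
	"net/http"
)

// Getter is a minimal stand-in for the read side of a datastore.
type Getter interface {
	Get(key string) ([]byte, error)
}

// HTTPStore resolves keys to URLs on a conventional HTTP server, analogous
// to the github datastore idea: if a block is not on IPFS, try fetching the
// same content from its original host.
type HTTPStore struct {
	BaseURL string // e.g. "http://223.123.124.444/blocks/"
}

func (s *HTTPStore) Get(key string) ([]byte, error) {
	resp, err := http.Get(s.BaseURL + key)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("httpstore: %s: %s", key, resp.Status)
	}
	// The returned bytes must still be hashed and checked against the
	// requested CID before being admitted to the local blockstore.
	return io.ReadAll(resp.Body)
}
```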


It would make more sense to implement existing file-sharing protocols such as BitTorrent in my opinion, since you can verify them more easily. Related threads:

An easy solution to the problem with malicious attackers is to ONLY fetch the file from the person asserting the mapping is correct. That way, they can’t do more damage than they could by just sending random garbage over UDP. You could also slowly build up trust even if you don’t have any keys signed in the following manner:

Node B asserts that pieces 1-2000 in torrent X correspond to pieces 1-1000 of the file.
Node A picks a random piece and starts downloading it from node B. If it matches, node A can repeat this process until it’s satisfied that a large portion of the mappings is correct, at which point it can start downloading from the entire network. Node B can’t give a partially valid mapping, since it doesn’t know which pieces node A will pick. If node A downloads 10% of a 1000-chunk file from node B, and node B gives bad mappings for 1% of it, the probability of detection will be 63%, so node A doesn’t need to sample a large portion either. This could be compared with BitTorrent’s free pieces, which are used to “bootstrap” the tit-for-tat system…
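For reference, the 63% figure is consistent with treating each of the 100 sampled chunks as independently having a 1% chance of carrying a bad mapping; this quick check is my own verification, not from the quoted thread:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// 100 sampled chunks (10% of 1000), each assumed to have a 1% chance of
	// carrying a bad mapping; detection means at least one bad chunk is seen.
	pDetect := 1 - math.Pow(0.99, 100)
	fmt.Printf("P(detection) ≈ %.2f\n", pDetect) // ≈ 0.63
}
```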

Isn’t this (ipfs-pack) what @jbenet talked about, namely making content available on IPFS by referencing it outside of IPFS, i.e. not putting it into the filestore? I didn’t even know this was already available. OK, granted, not within go-ipfs, but this is great.

Thanks for the points… Going through the links…

What I initially wrote was a bit hurried and not very clear… as I was still processing the idea.

Will look at these and report here…