Finding old added hashes

From @keskella on Tue Dec 29 2015 21:19:35 GMT+0000 (UTC)

Hi, I’m wondering how you can get the hashes for stuff you’ve added before but did not pin. I.e. uploading stuff to get the hash (and potentially retrieving it when the same file is in the network again in the future) and then clearing the cache to save space. Do I have to manually copy the hashes from ipfs refs local, or is there a log of everything I’ve added?
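
go-ipfs does not appear to keep a built-in log of past adds, so one option is to record the hashes yourself at add time. A minimal sketch, assuming a build whose ipfs add supports the --pin=false and --only-hash options (the log file name, sample input, and Qm… hashes are placeholders):

$ # append every add result to a personal log so the hash survives a later gc
$ echo "some content" | ipfs add --pin=false | tee -a ~/ipfs-added.log
added Qm... Qm...
$ # or compute the hash only, without writing any blocks to the local repo
$ echo "some content" | ipfs add --only-hash
added Qm... Qm...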

(changed a brainfart)


Copied from original issue: https://github.com/ipfs/faq/issues/86


From @Kubuxu on Tue Dec 29 2015 21:22:24 GMT+0000 (UTC)

AFAIK everything you add is pinned by default.

$ echo "LONG LONG LONG TEST" | ipfs add
added QmVQbMbSw8jk24kzJoG7c1BBM7KswZWN4tu4Bv6ki19eUX QmVQbMbSw8jk24kzJoG7c1BBM7KswZWN4tu4Bv6ki19eUX
$ ipfs pin ls | grep QmVQbMbSw8jk24kzJoG7c1BBM7KswZWN4tu4Bv6ki19eUX
QmVQbMbSw8jk24kzJoG7c1BBM7KswZWN4tu4Bv6ki19eUX recursive

From @keskella on Tue Dec 29 2015 21:28:29 GMT+0000 (UTC)

Yes, but that doesn’t answer the question. I don’t want it pinned, as I only want to get the hash and retrieve the file later, when the network grows large enough to have copies of the same file floating around again.
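
A hedged sketch of that workflow, assuming an ipfs build where add accepts --pin=false (otherwise the default pin can be removed afterwards); the file name and Qm… hash are placeholders:

$ ipfs add --pin=false bigfile.iso    # record the hash without pinning the data
added Qm... bigfile.iso
$ ipfs pin rm Qm...                   # or: unpin something that was pinned at add time
$ ipfs repo gc                        # drop unpinned blocks to reclaim the space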

From @Kubuxu on Tue Dec 29 2015 21:30:32 GMT+0000 (UTC)

There might be a log then; I have no idea if there is one.

From @almereyda on Wed Dec 30 2015 05:56:39 GMT+0000 (UTC)

Just asking myself: how could a file that is not cached anywhere in the IPFS swarm reappear?

@keskella wrote: “(and potentially retrieve it when there’s the same file in the network in the future)”

This assumes other peers requested this hash before and, ideally, pinned it, so that it does not disappear through garbage collection.

As far as I understand IPFS, it only provides you with an API to add objects to your local node and to request foreign names. Distribution of hashes is externalised into IPNS and the social layer: by sharing the hashes (or IPNS paths / open secrets, capability tokens), requests are generated that lead to distributed caching of them. Only manual pinning on any of the foreign nodes would allow for long-term availability of the data object, due to garbage collection.
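
A minimal illustration of that last point, assuming a second, reachable node and a placeholder Qm… hash: pinning on that foreign node fetches the blocks and protects them from its garbage collector.

$ # run on some other node that should keep the data available long-term
$ ipfs pin add Qm...
pinned Qm... recursively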

Tahoe-LAFS and Mojette Transform come to mind.


Do you assume files uploaded to IPFS get distributed to peers via an erasure code and are thus remotely cached? Or is this even the case?

Skimming through the specs didn’t help in answering. Slightly related: there is no sign of active replication into the swarm. Is this a feature request, maybe?

From @keskella on Wed Dec 30 2015 17:35:39 GMT+0000 (UTC)

I don’t even presume the files to be uploaded anywhere; I’m presuming the hashes stay the same, meaning that later, when IPFS is inter-planetary, the files are available and I can fetch them then. I don’t see any reason why my key’s Merkle tree would get cleaned of the hashes I’ve added to it. Isn’t the point of this to be the permanent web?

(edit: brainfart keys /= hashes)
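
For what it’s worth, the hash really is derived from the content alone, so the same bytes yield the same hash on any node at any time. Reusing the example from above, and assuming the same default ipfs add settings:

$ # --only-hash computes the hash without storing anything locally
$ echo "LONG LONG LONG TEST" | ipfs add --only-hash
added QmVQbMbSw8jk24kzJoG7c1BBM7KswZWN4tu4Bv6ki19eUX QmVQbMbSw8jk24kzJoG7c1BBM7KswZWN4tu4Bv6ki19eUX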

From @drelephant on Thu Jun 16 2016 06:58:58 GMT+0000 (UTC)

So the files were never uploaded anywhere, but you somehow expect them to magically reappear on the network later? I think you are misunderstanding what a hash is. It’s like a checksum, not the actual file. You can keep searching forever for your old hashes but if there’s nobody with the actual files that those hashes correspond to, you will never get a positive response, and the files will never be sent back to you.
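
A hedged illustration of that point, with Qm… standing in for a hash whose underlying blocks no reachable peer holds:

$ # the hash identifies the data but cannot reconstruct it; with no
$ # provider on the network, this request just keeps searching and blocks
$ ipfs cat Qm...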

If what you are suggesting was possible, then why ever send a file to anybody? Why not just hash it, delete it, then wait until it is findable on the magical inter-planetary file system?

From @keskella on Thu Jun 16 2016 12:23:15 GMT+0000 (UTC)

I’m counting on the “if”. If someone has the same public-domain files, then they WILL exist on the network in the future. I’m not uploading anything unique or mutable. And I did understand hashes correctly; I’m just more optimistic about IPFS than you. The internet is large, and if IPFS takes over a portion of it, then the scenario you describe is perfectly reasonable: just send a hash to someone and they will fetch it from IPFS. There are seven billion people, and they have a lot of files in common. My files are not so unique that there wouldn’t be redundancy among those people.