Can I delete my content from the network?

From @mildred on Mon Mar 07 2016 12:05:35 GMT+0000 (UTC)

> If you don’t want people to look at your bytes – use encryption BEFORE you put them online.

It is also possible to restrict who can access the files you add to IPFS. When you add a file, it’s only available on your node at first. You could configure your node so that the file is only accessible to a few selected friends on the network. If they are trustworthy, these friends should not propagate the file further.

This could be especially interesting in the case where a cryptographic algorithm gets broken years later. You may not want your private conversation to become public then.
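The "encrypt BEFORE you put them online" advice above can be sketched with any off-the-shelf cipher tool; here openssl is used purely for illustration, and the filename and passphrase are placeholders:

```shell
# Encrypt locally first, so only ciphertext ever reaches the network:
printf 'my private notes' > secret.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:hunter2 \
    -in secret.txt -out secret.txt.enc
# The CID then refers to bytes that are useless without the key:
#   ipfs add secret.txt.enc
```

Whether an algorithm break years later exposes the content then depends only on the cipher you chose, not on IPFS.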

I think this is also discussed elsewhere, see: ipfs/go-ipfs#1386

From @Beastio on Thu Mar 24 2016 02:09:42 GMT+0000 (UTC)

You can neither unsay nor unhear something in the real world. We need to regard the hash link as its contents: if someone has the link, they should be considered to have the file. To delete the file you need to remove all references to it, like garbage collection in programming languages. So we would need to convince the legal system that having or publishing a hash link is the crime, while actually having the illegal file that the hash links to on a server is not necessarily illegal.

From @ingokeck on Sat Apr 09 2016 21:30:05 GMT+0000 (UTC)

@fazo96 There is no global understanding of what is illegal and what is not. Just one example: Publication of Nazi insignia outside historical context or research/education is illegal in Germany but perfectly legal in the USA as far as I know.

From @fazo96 on Sun Apr 10 2016 14:15:09 GMT+0000 (UTC)

@ingokeck True, but that just means we need multiple blocklists. One of the first will probably be a blocklist for DMCA takedowns: someone will maintain a list and publish it via IPNS, and all public gateways will opt into it. Then if another list is needed for any other reason, it will be made and used.
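The "publish a list via IPNS" idea above could look roughly like this (a hypothetical sketch: the blocklist format, `blocklist.txt`, and the CID/PeerID placeholders are assumptions, and a running daemon is required):

```shell
# Maintainer side: add the list and point their IPNS name at it.
ipfs add -q blocklist.txt            # -> prints <list-cid>
ipfs name publish /ipfs/<list-cid>   # binds it to the maintainer's PeerID
# Gateway side: periodically re-resolve the name to pick up updates:
#   ipfs name resolve /ipns/<maintainer-peer-id>
```

The stable IPNS name is what lets gateways subscribe once while the list contents change over time.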

From @stuffyvater on Sun Oct 09 2016 17:08:01 GMT+0000 (UTC)

Is it possible to restrict content prior to accessing it on IPFS? Let’s say I’m hosting a file on my node. I know exactly who should access my node because I have their addresses whitelisted in an Ethereum smart contract.
My contract stores Address A = 0xA…, Address B = 0xB…, and Address C = 0xC…. My IPFS node hosts another smart contract and a file. The smart contract prompts MetaMask upon arrival at my content and checks my address. My address is 0xB, so I am whitelisted on the contract. If I were 0xD, could my contract stop IPFS from sending the content to 0xD?

From @mixmastamyk on Fri Oct 14 2016 20:10:58 GMT+0000 (UTC)

Hi, I read all this and still don’t understand whether old files can be deleted or not. I have a network of media players across which I’d like to share download responsibilities. Can old content be deleted? If the disks are going to fill up in a few days, IPFS is not a solution for me. :wink:

From @alexpmorris on Fri Oct 14 2016 21:06:34 GMT+0000 (UTC)

Files can be deleted from your own local instance(s), but you cannot force remote nodes that already have a cached copy to delete their copies.
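For the local side of that, the relevant Kubo (go-ipfs) commands look like this; `<cid>` is a placeholder for the file’s hash, and a running daemon is assumed:

```shell
# Drop the pin so garbage collection is allowed to reclaim the blocks:
ipfs pin rm <cid>
# Garbage-collect all unpinned blocks from local storage:
ipfs repo gc
```

Other nodes that pinned or cached the same CID keep their copies; these commands only free your own disk.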

From @krionical on Sat Oct 22 2016 16:51:32 GMT+0000 (UTC)

I think the idea of having the ability to “remove” is part of the problem with the web. As long as the possibility of “taking down” content remains, those who want things taken down will over-extend their power. On the other hand, I think people should have more ability to change things at the input point: if your friend takes a picture of you, it should notify you somehow, and then you can decide whether or not your friend gets to keep the shot.

I think Ethereum has the right idea as far as contracts go for consumables like movies.

From @NDuma on Tue Oct 25 2016 10:55:15 GMT+0000 (UTC)

My understanding:

Your (possibly encrypted) files will be deleted once accesses cease: if nobody else requests them, they eventually get cleared out of nodes’ caches as retired “junk” to make room for incoming data (i.e. once some relevance-and-demand / “data stockpile” / “warehousing” criterion is met).

Otherwise - What you post … is Posted.

Web apps will update with new URIs, so the old data is not accessed anymore and gets deleted if nobody cares for it.

Otherwise; look for IPFS to be a new host to torrentz of leakz - “Hillary Under the Cover(s)” & “Trump’s Rumps” e-mails & failed steak campaigns which can never be unseen; much like the IPFS, a torrent inspired web.

Why This Thread Should End Here

"The Permanent Web"
‘deeMCA is not in the house; everyone grab a mic & drop it.’ - BlackLists

From @NDuma on Tue Oct 25 2016 11:03:42 GMT+0000 (UTC)

Ok, the thread shouldn’t ‘end here’.
The notion of a private network which has agreements to host files is interesting.
Paid ‘club’ membership - or to individuals - or geolocalized nodes…
This would require control over distribution to networks.
& xanadu is still cool; good to see it ‘roun’.

From @kakra on Mon May 01 2017 22:26:29 GMT+0000 (UTC)

@mixmastamyk You don’t have to worry about your local storage filling up: just don’t pin the content. Local storage acts only as a limited-size cache, and unused data will be discarded.

Regarding the rest: part of this could be solved by deploying encryption by default. If you want to share with everyone, just don’t encrypt. Otherwise some sort of key chain is needed that ensures only allowed parties can view the content, but it would still be distributed across the IPFS network.

BTW: does IPFS ever forget? I mean, after all it should be permanent. But if nobody references the files any longer, and nobody requests them, they should fade away, shouldn’t they?

Regarding DMCA requests and blacklists: is this really a usable solution? I could change one bit in a video stream, which wouldn’t hurt the stream at all, but the hash would be completely different. This smells like the myth of Sisyphus… It would lead to demands to detect offending files by fingerprinting; could there be a solution using fingerprinting? Maybe by distributing (and enforcing) additional metadata with the files…
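The one-bit-changed objection holds for any content-addressed system, since the CID is derived from a cryptographic hash of the bytes. A quick sketch with plain sha256sum (the strings are just placeholders for file contents; the digests themselves are omitted since only their inequality matters):

```shell
# A single changed character yields a completely different digest, so a
# hash-based blocklist never matches the modified file:
printf 'the same video stream' | sha256sum
printf 'the same video streaN' | sha256sum
```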

IPFS is awesome even in terms of DMCA. And fundamentally, it isn’t really different today: if the copyright trolls contact you, it’s because you’ve shared a file with a specific hash. The hash (among other things, of course) is important, otherwise they wouldn’t survive in court. This is actually a big plus for IPFS: you just blacklist a hash, the default nodes will comply, and unless you tell your node to opt out of the blacklists, you won’t be able to “ipfs get” the file, and this will be network-wide. If someone changes the hash, e.g. by appending a carriage return to the end of the file, then all the copyright owners need to do is blacklist the new hash, and it will be network-wide immediately. It’ll probably still be a whack-a-mole, but it’s way better, easier & faster than location-based blocking, removing search results from Google and what not. IPFS is (in probably every respect) the perfect solution.

Good night! I am hosting a small website on IPFS. The problem I am having is that every time I push a new version, the old version is still online. Is there any way of removing the old versions? Who is hosting the old versions? I did the IPNS stuff in order to point my new hash to my peer ID. But I am worried about all those old files. Anyway, I love IPFS and the whole decentralised concept. Thanks! :slight_smile:

Hello dFran,

In IPFS you cannot remove a resource; IPFS was designed to keep a history of your resources. I think in your case you need to use IPNS with publish instead of using the CID of your resource directly.
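The publish workflow suggested here can be sketched as follows (the directory name and the CID are placeholders, and a running daemon is assumed):

```shell
# Add the new site version, then repoint your stable IPNS name at it:
ipfs add -r ./public                   # -> prints <new-site-cid>
ipfs name publish /ipfs/<new-site-cid>
# Visitors always use /ipns/<your-peer-id>; old snapshots stay reachable
# by their CIDs only until the nodes caching them garbage-collect.
```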


Not sure what exactly you mean, but I will take a look for sure. I just started with IPFS a few hours ago. Thanks! :stuck_out_tongue:

This thread relates to an idea I’ve had that can be summarized as “Polite DRM”.

I started by thinking about using IPFS as a HIPAA compliant data store, where encryption keys within a personal medical data IPNS manifest would always be controlled by the patient themselves, and only temporarily and selectively loaned to care providers as needed. These loans would obviously require repository segment distribution reporting back to key control nodes, with full revocation and secure deletion on demand based on HIPAA compliance contracts.

After thinking this through further, I realized this would also be a solution for Netflix style rentals and DMCA compliant media distribution, without the inherent insecurity of trying to keep any purchased hardware outside of its owner’s control (the core defect in the appropriate Defective By Design label).

The core element needed in both use cases is some method of distributing encryption keys independently of the data they decrypt, and a manner to request full key and decrypted-version deletion by [smart] contractual demand. This sort of deletion request mechanism is the polite part, and the DMCA enforcing contract over key distribution is the DRM part, or in HIPAA and similar use cases the privacy-preserving part. The EU Right To Be Forgotten and “whoops” version tree pruning use cases could also be addressed in this system.

I have more ideas about key-revocation enforcement via Filecoin incentives and a sort of “auto-snitch” distributed reporting system, but those can be addressed elsewhere in a similar manner to how Bitswap is a pluggable feature of IPFS.

Interesting thread. I have some follow-up questions regarding EU compliance issues (GDPR).

Scenario: Multiple entities wish to protect their data from either intentional or accidental deletion. They also have compliance requirements saying they need to know where their data is stored.

I see that an IPFS private cluster can solve the “where” part, so that they can have a list of all members.
But the deletion part is still not solved in this scenario. They can use blocklists between each other, but it’s still a trust relationship outside the chain.
If IPFS is basically not designed to delete data easily, then maybe some of the underlying libraries IPFS relies on could be used to build a solution like this?

What about human oracles in a private cluster? Is that something that has been worked on?
If you have a private cluster and a group of oracles which get a GDPR request for a file to be deleted, they must all agree for the file to be deleted. Sounds time-consuming, but maybe it could be optimized by tagging content that is already GDPR-sensitive, or by some intelligent automation.

Are there maybe other solutions than IPFS that solve this scenario better?