Limit "ipfs get" command

Beginner IPFS enthusiast here.

As I understand it, there are the basic "ipfs get" and "ipfs cat" commands, which either download the content of a hash to local storage or just view it from its source location.

I have an art website. Is it possible to only allow the ipfs cat function? So anyone can browse but not download.

ipfs get and ipfs cat do the same thing. The difference is that ipfs get writes the file to your hard disk while ipfs cat writes the file to standard out (the screen).
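In shell terms, the relationship looks roughly like this (a sketch; the CID is a placeholder for one of your own, and `-o` just names the output file):

```shell
CID="QmYourContentHashHere"   # placeholder CID, substitute a real one

# View: stream the file's bytes to stdout (your terminal, or a pipe):
ipfs cat "$CID"

# Download: write those same bytes to disk instead:
ipfs get "$CID" -o artwork.jpg

# ...which is why a simple redirect makes "cat" behave exactly like "get":
ipfs cat "$CID" > artwork.jpg
```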

I have an art website. Is it possible to only allow the ipfs cat function? So anyone can browse but not download.

This is fundamentally impossible; viewing implies downloading. With special hardware support, it is sometimes possible to encrypt downloaded content until it reaches the user’s screen but there are moral implications with tech like that (see https://www.defectivebydesign.org/).


I’m confused… (sorry, I’m a very low-level enthusiast, but I really want to get the hang of this; it’s a game changer)

The difference is that ipfs get writes the file to your hard disk while ipfs cat writes the file to standard out (the screen).

Viewing content onscreen seems very different from being able to also download it. With HTTP (which I hope IPFS will replace soon) I can save a webpage, but not all the content in all its sub-directories. (I guess I’m wrong on this point?)

This is fundamentally impossible; viewing implies downloading.

How does viewing imply downloading? I’m sure I’m missing something about how a browser works. Viewing is not the same as having the original file (like a song or digital art).

Thanks for the link to https://www.defectivebydesign.org/what_is_drm_digital_restrictions_management.
I have strong thoughts about it but that’s for another post somewhere else.

They are actually very similar; the only difference is how long you (or your computer, in this case) store the file.
When you are viewing a webpage, your browser downloads it into its cache, then deletes it later. This is what IPFS’s garbage collection does.
When you press Ctrl + S on a webpage, it is downloaded locally to your computer so you can see it later, even while offline. This is what ipfs get does.
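In IPFS terms, that difference looks roughly like this (a sketch; the CID is a placeholder and a running ipfs node with an initialized repo is assumed):

```shell
CID="QmYourContentHashHere"   # placeholder CID

# "Viewing": the blocks land in your node's local cache...
ipfs cat "$CID" > /dev/null

# ...where they stay only until garbage collection sweeps unpinned data:
ipfs repo gc

# "Saving": an ordinary file on your disk, which gc never touches:
ipfs get "$CID" -o saved_copy
```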

When you use ipfs get on a webpage, I believe it recursively downloads the whole site (you might need to add the -r flag, though), which includes the sub-directories. Not sure about ipfs cat, though; it is based on ipfs get, like @stebalien says.

check above for my explanation.

Many music sites now use streaming, which sends parts of the files to your browser’s cache for you to play locally. Again, look above for my explanation of how viewing and downloading websites are essentially the same thing.


@emuNAND beat me to it!

How does viewing imply downloading? I’m sure I’m missing something about how a browser works. Viewing is not the same as having the original file (like a song or digital art).

It might not be super obvious at first glance, but whenever you view some data in a browser or any other program, that data needs to exist on your machine in some form (otherwise it can’t be shown).

That’s exactly what browsers, for example, do. When you view an image, the browser downloads that image so it can render it. When you watch a video, the video data is streamed so you can watch it.

Same goes for IPFS. While it’s not HTTP, machines/programs still download data so it can be viewed. So even if you use ipfs cat <CID>, the data might not be written to disk, but what gets rendered in your program is still data that had to be downloaded first (even if it is never persisted).

Viewing is not the same as having the original file (like a song or digital art).

True, but originality gets kind of lost in these systems, as every download is basically a full copy of the original.


Thanks so much for all your explanations. Taking time to educate a muggle…

I understand the difference between ‘ipfs get’ (saving the exact file structure, sub-directories, and anything else in the hash to your local storage) and ‘ipfs cat’ (viewing a cached version in the browser, deleted upon closing the browser). Maybe we have different ideas about content rights, but to me this is a huge difference. Having the exact file and having a cached version are, for me, not the same at all… (Am I repeating myself too much? Sorry… Do I really not get it?)

Does having a cached version give you the ability to recreate or manipulate the content after you leave the website?

I realize most creative content websites only include thumbnails, short audio samples, or watermarked images and videos to protect the content. This is what I will do if there is no other way to protect my originals from being downloaded. Customers would view samples, then request a file. I would encrypt it and send the key (I guess by email) so they could unlock the file on their local storage. I was really hoping IPFS would have this kind of node protection built in, but if not… I’m still using IPFS!!
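That encrypt-then-send-the-key workflow works today with ordinary tools; nothing IPFS-specific is needed. A minimal sketch with openssl (the file names and passphrase are made up for illustration; in practice you would not put the passphrase on the command line):

```shell
# Stand-in for the original artwork file (illustration only):
printf 'original artwork bytes' > original.png

# Your side: encrypt before publishing or sending. -pbkdf2 derives the
# encryption key from the passphrase, which travels separately (e.g. email):
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in original.png -out original.png.enc -pass pass:correct-horse

# Customer's side: unlock the file with the passphrase you sent them:
openssl enc -d -aes-256-cbc -pbkdf2 \
    -in original.png.enc -out unlocked.png -pass pass:correct-horse
```

The encrypted `original.png.enc` is what you would publish; anyone can fetch it over IPFS, but only a customer holding the passphrase can recover the artwork.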

I’ve tried to test what you guys say about saving a webpage with Ctrl + S, but it doesn’t seem the same as ‘ipfs get’ at all.

This is my IPNS testing space
http://127.0.0.1:8080/ipns/QmfQ1YxZTCRw63DXhsMKGzHqncWaEwJBX6rFxcn1Rf5bmT

As you can see, the output of the two commands is quite different.
‘Ctrl + S’ simply saves the current page. When I go offline I can only view the content on the page I saved, not the rest of the site. No sub-directories or apps folder available.
‘ipfs get’, on the other hand, gives me everything… even files that are not displayed on the page. (I realize this is what the command does. It’s just to show the difference.)

Saving a page with ctrl-s video
http://127.0.0.1:8080/ipfs/QmRZXzz6GSi1qqEa6L9xjFuw7ARPjo3r9KLy6bwPF8b82G


So, in order to view anything on a web page you need to download it first, right? You can’t view an image unless that image travels through the internet, on to your computer, and then on to your monitor.

If I wanted to download an entire website, instead of just downloading one page, I would write a script to go and press Ctrl+S on every page I can find. In fact, I don’t even have to do that, since a tool to do that is included in almost every linux install. I’d run a command that looks something like wget -mpck --user-agent="" -e robots=off --wait 1 http://example.com.

That command is equivalent to ipfs get. So even if the developers removed support for ipfs get, I could write a piece of software that does the exact same thing just by running ipfs cat on every page I can see, and saving the result.
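To make that concrete, here is a rough sketch of such a tool: it rebuilds a recursive ipfs get out of nothing but ipfs ls and ipfs cat. (The trailing-slash convention for directory entries in ipfs ls output is an assumption, names containing spaces are not handled, and the final call stays commented out because the CID is a placeholder.)

```shell
# Sketch: reimplement a recursive "ipfs get" using only "ipfs ls" + "ipfs cat".
fetch() {
    local cid="$1" dest="$2"
    mkdir -p "$dest"
    # Each "ipfs ls" line looks like: <child-cid> <size> <name>
    # (directory names are assumed to carry a trailing "/"):
    ipfs ls "$cid" | while read -r child _size name; do
        case "$name" in
            */) fetch "$child" "$dest/${name%/}" ;;  # sub-directory: recurse
            *)  ipfs cat "$child" > "$dest/$name" ;; # file: "cat" it to disk
        esac
    done
}

# Placeholder CID, so this line stays commented out:
# fetch "QmYourSiteHashHere" ./mirror
```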

So ultimately, blocking ipfs get wouldn’t change anything. Anyone could still download those files, the same as they do now. It would just be slightly more annoying for them since they’d have to install an extra tool.


@traverseda Thanks for that. I happened to discover wget yesterday. That totally closes the issue for me.

@stebalien
@emuNAND
@PascalPrecht

Thanks for the awesome feedback and support. I hope to connect as many people as I can to ipfs and help develop this project.
