Copy/pasting my response to the same thread on reddit:
The title promises a fundamental problem, but I’d argue the post actually describes a mix of issues that don’t rise to that level, along with some plain misunderstandings.
The first is the claim that no user would want to set up a full node.
This is obviously false to start with. There are already at least a thousand nodes – so many that the developers had to accelerate the implementation of connection closing to prevent your node from connecting to too many of them.
For those that don’t want to run a full node there’s js-ipfs. All you need is to load a website or install a browser plugin (or maybe someday there will be built-in browser support) in order to use IPFS.
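For reference, standing up a full node with go-ipfs is a short sequence of commands. This is a sketch assuming the `ipfs` binary is already installed; default ports and paths may differ by version:

```shell
# Initialize a local IPFS repository (one-time setup)
ipfs init

# Start the daemon; it joins the public network and
# serves a local gateway at http://127.0.0.1:8080
ipfs daemon
```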
> If a public node is available at ipfs.io/ipfs, why in the world should I set up my own node?
There are a variety of reasons.
- guaranteed availability
- no blocklists
- control
- performance
- security (the gateway can serve whatever it wants, but your IPFS node can verify the integrity of what was requested)
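To illustrate the security point: a local node verifies that the blocks it receives actually hash to the CID you asked for, while a plain HTTP gateway response has to be taken on trust. A rough sketch of the difference, where `<CID>` is a placeholder for a real content identifier:

```shell
# Trusted path: the gateway could return anything for this URL,
# and curl has no way to check it
curl https://ipfs.io/ipfs/<CID>

# Verified path: the local node checks every block it fetches
# against the requested CID before handing it back
ipfs cat <CID>
```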
I myself am guilty of joining the network, uploading my files, pinning them on public gateways, and leaving.
> After that, another problem is: what if I upload illegal content and make a few nodes pin it?
That’s not how pinning works. By default, you can’t make someone else’s node pin something. They need to pin it themselves. Either that, or they’d have to expose their node’s API to the internet (not recommended, and not the default).
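As a sketch of why this is safe by default (exact values may vary by go-ipfs version): pinning is an explicit local command, and the API only listens on localhost unless the operator changes it.

```shell
# Pinning is something the node operator runs deliberately
ipfs pin add <CID>

# By default the API listens only on the loopback interface,
# so remote peers can't tell your node to pin anything
ipfs config Addresses.API
# typically prints something like /ip4/127.0.0.1/tcp/5001
```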
> Finally, hosting a website on IPFS. IPNS is simply terrible, and not user friendly at all, though that may change in the future.
IPNS is currently very slow, but I believe this is a technical deficiency rather than a fundamental problem.
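For context, IPNS gives you a stable name that you can repoint at new content; the slowness is in publishing and resolving those records. A minimal sketch, with placeholder values in angle brackets:

```shell
# Publish the current version of a site under your node's key;
# this step is what can be painfully slow today
ipfs name publish <CID-of-site>

# Anyone can resolve the name back to the latest CID
ipfs name resolve /ipns/<your-peer-id>
```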
> My main concern about hosting a site on IPFS is: why should I do it? There would be at most a few thousand nodes running on the network. Even if all of them pin my site, a CDN service such as Cloudflare would be way more effective and efficient.
I don’t know where your maximum of “a few thousand nodes” comes from, but here are some reasons anyway.
- easy to version and snapshot
- someone who has already downloaded my site doesn’t need to reach out to me or a CDN to view it again
- I’d argue that p2p networks are better than CDNs for distributing data, but I don’t have time to look for supporting sources at the moment
- potentially improved performance in edges of the network
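The versioning point follows from content addressing: every snapshot of a site gets its own CID, so old versions stay addressable for as long as anyone still has them. A sketch, assuming the site lives in a local directory:

```shell
# Add the whole site directory; the printed root CID
# identifies this exact snapshot
ipfs add -r ./my-site

# After editing the site, add it again: you get a different
# root CID, while the previous CID still names the old version
ipfs add -r ./my-site
```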
> Though it only works for static websites: IPFS can only deal with static sites, as it is only a file system, not a compute sharing service.
IPFS isn’t limited to static content, though implementing dynamic functionality is harder if you want to do it purely over IPFS. A similar gap exists on the regular web between hosting a static site and running a dynamic one with client-server communication. There has been some discussion about this.
> IPFS also consumes a lot of bandwidth, and users with metered connections don’t appreciate that.
I think this is largely a technical implementation issue. There are already some options that can be enabled to limit bandwidth, for example running your daemon with `--routing=dhtclient`.
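A sketch of the bandwidth-reducing options (availability and exact behavior may vary by go-ipfs version):

```shell
# Run as a DHT client only: your node stops answering other
# peers' routing queries, which cuts background traffic
ipfs daemon --routing=dhtclient

# The "lowpower" config profile applies this and other
# resource-saving settings in one step
ipfs config profile apply lowpower
```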