IPFS can't realize its promise

unless it improves its performance to an acceptable degree

That it’s a scam is also reflected in the IPFS whitepaper.
It doesn’t say much about the most important part, routing, which directly affects performance.
Instead it spends tons of words on BitSwap, the Merkle DAG, and IPNS, which are easier and less important.

This reads like a troll post, but just in case it’s not…

What are the performance problems you’re having? What are the steps to reproduce them?

How does supposedly experiencing issues while using beta software amount to a “scam”?

Do you have any suggestions on what kind of additional detail you’d like to see in the existing routing section of the whitepaper?
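
If it helps, here is the kind of detail that would make the problem reproducible. The sketch below is just an illustration, not anything from the IPFS codebase; it assumes a local go-ipfs daemon with its HTTP API on the default 127.0.0.1:5001 (newer daemons require POST for API calls) and a CID you expect to be findable. It times a full provider lookup so you can post concrete numbers instead of “it’s slow”.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: findprovs_timer <cid>")
		os.Exit(1)
	}
	cid := os.Args[1]

	// go-ipfs exposes its commands over a local HTTP API; dht/findprovs
	// streams back the providers it can find for the given key.
	u := "http://127.0.0.1:5001/api/v0/dht/findprovs?arg=" + url.QueryEscape(cid)

	start := time.Now()
	resp, err := http.Post(u, "", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// Drain the streamed response so we measure the full lookup,
	// not just the time to the first result.
	n, _ := io.Copy(io.Discard, resp.Body)

	fmt.Printf("findprovs %s: %d bytes of results, HTTP %s, took %s\n",
		cid, n, resp.Status, time.Since(start))
}
```

Run it as `go run findprovs_timer.go <cid>` for a few different CIDs and include the timings, plus your node’s connectivity details, when reporting the problem.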


It’s better to dive a bit into the IPFS GitHub before stating anything as strong as “scam”. 🙂

I have been in the decentralised space for about 30 months, and I can say that IPFS is one of the most promising projects out there.

How can free software be a scam?


Until IPFS comes up with an applicable, high-performance DHT algorithm (the most basic and important part supporting the whole system), it’s just a scam, no matter how many fancy words and concepts are added to it.

Do you already understand the current DHT implementation? The source is all public.

You have yet to explain this, or to answer any of the other questions.

“You keep using that word. I do not think it means what you think it means.”
-Inigo Montoya

edit: curious, are you using go-ipfs or js-ipfs?

The details of the “routing” process should be described.
How does the DHT algorithm that IPFS uses improve on existing algorithms, make the “routing” process performant enough, and consume an acceptable amount of network resources?

I’m using go-ipfs.

IPFS keeps boasting about what it can do, but it actually can’t.

Adding more detail on performance and DHT resource limiting sounds like good feedback for a later draft (the latest is draft 3). However, performance and resource usage doesn’t depend only on routing; it also involves some of the things you think are “easy and less important” – like BitSwap.

I don’t know how closely the current implementation on GitHub matches draft 3 of the whitepaper, but there’s already a higher-level description in the routing and DHT sections; see sections 2.1 and 3.3.

Please be specific. Which technical challenges do you think are insurmountable and can’t be solved, as distinguished from beta issues that are in the process of being addressed or just need dev time before v1.0? “Can’t” suggests that you think there are current challenges that are impossible to solve – or maybe you’re just using it to refer to issues you can’t think of a solution to (or haven’t yet).

The performance of existing DHT algorithms.
I don’t think it can’t be solved. And yes, many open problems can very probably be solved in the future (unlimited energy, for example), and there are already many plans for them.

But that doesn’t mean you can boast about getting there before you have a good enough solution.

Do you already have a solution to the performance problem?

The solutions in Content Resolution And Gateway Performance · Issue #6383 · ipfs/kubo · GitHub are not enough; they can’t solve the basic problem.
And GitHub - libp2p/research-dht (discussion notes moved to https://github.com/libp2p/notes) doesn’t have an applicable plan either.

  • section 2.1: it just describes the state of the art
  • section 3.3: there is not much useful information there

My position is not that I have all of the answers. My position is that, as the person making claims like “it’s a scam” in this thread, the burden is on you to substantiate them.

To cut to the chase, is your position (“the basic problem”) that DHTs are slow and impossible to make fast? If so, I’d be interested in reading more about this if you’re aware of studies or something that document that known DHT implementations are impossible to scale past some limit.

Are you actually personally experiencing any performance problems? At least one of the issues you’re linking to seems like it could be caused by connectivity issues to the peer experiencing the issues.

I’m not going to copy/paste the whole whitepaper, but according to section 3.3,

IPFS nodes require a routing system that can find (a) other peers’ network addresses and (b) peers who can serve particular objects. IPFS achieves this using a DSHT based on S/Kademlia and Coral, using the properties discussed in 2.1.

So the content in section 2.1 is relevant to section 3.3.
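
To make that quoted paragraph a bit more concrete, here is a minimal sketch of the Kademlia-style idea it refers to (this is not go-ipfs code; `ID`, `newID`, and `closest` are made-up names for illustration): peers and content keys share one ID space, and a lookup repeatedly contacts the peers whose IDs are XOR-closest to the target key.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// ID is a 256-bit identifier shared by peers and content keys,
// as in Kademlia-style DHTs.
type ID [32]byte

// newID derives an ID by hashing arbitrary input (hypothetical helper).
func newID(s string) ID { return ID(sha256.Sum256([]byte(s))) }

// xorLess reports whether a is XOR-closer to target than b is,
// comparing the XOR distances as big-endian integers.
func xorLess(target, a, b ID) bool {
	for i := range target {
		da, db := a[i]^target[i], b[i]^target[i]
		if da != db {
			return da < db
		}
	}
	return false
}

// closest returns the k known peers whose IDs are XOR-closest to target.
// A real lookup queries these peers, learns about even closer ones from
// them, and iterates until no closer peers turn up.
func closest(target ID, peers []ID, k int) []ID {
	sorted := append([]ID(nil), peers...)
	sort.Slice(sorted, func(i, j int) bool {
		return xorLess(target, sorted[i], sorted[j])
	})
	if len(sorted) > k {
		sorted = sorted[:k]
	}
	return sorted
}

func main() {
	peers := []ID{newID("peerA"), newID("peerB"), newID("peerC"), newID("peerD")}
	key := newID("some content key")
	for _, p := range closest(key, peers, 2) {
		fmt.Printf("would ask peer %x... about the key\n", p[:4])
	}
}
```

Each round of such a lookup shrinks the remaining XOR distance, which is where the textbook O(log n) hop bound for Kademlia-style DHTs comes from; the practical performance question this thread keeps circling is how much latency and bandwidth each of those hops costs on the real network.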

No, I don’t think it’s impossible. Science keeps advancing through continued research.
But IPFS does not have a solution yet, began boasting about what it could do anyway, and still hasn’t reached the goal after several years.

Yes, I have experienced these problems.
At first, I thought it might be a problem on my end.
But then I found that many people face the same problem, and the IPFS team doesn’t have a solution yet. It arises from a design flaw in IPFS. Even the whitepaper doesn’t describe the routing process in detail.

Yeah, IPFS coined a nice name for the algorithm, but described no detail.

Do you have any evidence or research on latency and scalability limits of existing DHT technologies?

If this is just your personal opinion, I’ll just leave it at that.

It’s impossible to troubleshoot a problem you won’t share details on how to recreate. It’s also not clear how unspecified beta issues are even remotely related to your original claim.

What? The entire implementation is public.

P.S. the implementation is so public that completely unrelated entities can attempt to commandeer the source code to use for their own profit (lol).

That would require a systematic analysis of existing DHT technologies, which means reading lots of academic papers.
I have not dived that deep yet, although I will.

But I can show some comments first:

DHT can either rely on a small set of stable nodes with limited collective capacity, or a larger set of potentially less stable nodes and suffer maintenance and data redundancy overhead.[1]

I don’t see any single magic solution to making a secure and performant DHT[2]

The DHT is slow and unreliable.[3]


  1. On the Stability-Scalability Tradeoff of DHT Deployment: https://www.researchgate.net/profile/K_Harfoush/publication/224710888_On_the_Stability-Scalability_Tradeoff_of_DHT_Deployment/links/0c96052debf709dfca000000.pdf

  2. DHT improvement ideas · Issue #21 · libp2p/notes · GitHub

  3. Content Resolution And Gateway Performance · Issue #6383 · ipfs/kubo · GitHub

I’ve found some research for people to look into.
Research:
https://www.academia.edu/Documents/in/DHT


Mass adoption potential (no blockchain technology yet):

" The Vuze DHT debuted first, with version 2.3.0client on May 2, 2005. In its announcements back then, they were keen to stress the difference from eXeem, stating it was a decentralized layer on top of BitTorrent, rather than a decentralised BitTorrent system itself. Within 24 hours there were more than 200,000 peers, and there are currently around 1.1 million peers on the network."

This topic is going nowhere fast and is a waste of everyone’s time. Anyone who wants to continue discussing the merits of various DHT implementations/designs should open a new topic.
