Can you develop a fully decentralized application with IPFS only?

The question matters to me because I would like to convert a classical client-server application into a more decentralized, independent and censorship-resistant one, and I wanted to know whether dApps are required to use a blockchain.

So far I have only seen blog posts that relied on Ethereum smart contracts, even when working with IPFS.

I don’t know why. Is there some design benefit that a blockchain has and IPFS doesn’t?

I’m trying to do this right now, and I believe the answer is yes: we can build a decentralised system on IPFS without a blockchain.

With IPFS we can use pubsub for messaging and IPFS itself to share state and data artifacts; with messaging and state sharing we can implement an arbitrary system.
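To make that concrete, here is a rough sketch of those two building blocks using the Python ipfshttpclient library against a local daemon (my choice of library and topic name; the exact call names and message encoding can vary between daemon and client versions): state is added to IPFS as immutable content, and its CID is announced over pubsub.

```python
import ipfshttpclient

# Assumes a local IPFS daemon with pubsub enabled.
client = ipfshttpclient.connect()

# Share a piece of application state as immutable content.
state_cid = client.add_str('{"counter": 1}')

# Announce the new state to anyone listening on the topic.
client.pubsub.publish("my-dapp-state", state_cid)

# Elsewhere, a peer subscribes and fetches whatever gets announced.
with client.pubsub.subscribe("my-dapp-state") as sub:
    for msg in sub:
        announced = msg["data"]  # may need base64-decoding depending on versions
        print(client.cat(announced))
        break
```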

What IPFS doesn’t give you is consensus. You can build a network where individual parties can interact, but without implementing your own consensus or using a blockchain you can’t have a shared “authoritative source of truth”.

In particular, smart contracts define the ways in which “global state” can be modified by users of a chain. The global state cannot be modified without going through those contracts, therefore the ways in which global state can change are governed by the rules of the smart contracts.

If we try to do something similar with IPFS, we quickly find that, though we can develop sets of rules for how the DAG should be modified, we need to enforce them in every client. If we make a central processor and authority that decides which state modifications are OK and which aren’t, then we’ve just come full circle and implemented a blockchain on top of IPFS!

I’d love to hear more about this subject, in particular from people involved with OrbitDB. I think that is likely what you would want to use, but I don’t yet understand how it deals with consensus.

As long as your site is served through IPFS gateways or relies on specific bootstrap nodes, you are not really completely decentralized. Browsers would need to be full IPFS peers. Even with OrbitDB it is not entirely P2P.

@lyrx I’m hopeful that won’t always be the case. If people start using IPFS node extensions in their browsers or running local nodes, the issues with the gateways go away; those concerns all arise solely from the strict rules on what code running in your browser can do, and rightly so! It’s worth noting that the Android version of Opera has IPFS built in nowadays, so things are moving in the right direction.

From what I see, the centralisation is all to do with being interoperable with the existing web rather than a weakness in IPFS itself, and if IPFS gets mainstream adoption these issues will disappear. To reiterate, the IPFS dApps are decentralised, but the way in which people can access them through browsers isn’t.

Agreed! Let’s work all together to make it happen :smiley:

I have been working on this subject for more than a year now…
My current experiment combines Scuttlebutt (SSB) with IPFS.

The resulting network is like a “meta-intranet”, unique to each node and depending on its SSB relationships :wink:

IPFS is used as a proto-blockchain and data-hash storage.
Each node’s index and data are published asynchronously under /ipfs/.$IPFSNODEID
Each node’s “swarm peers” connections are based on its SSB relationships, so the two layers are “glued” together.
/ipfs_swarm/ replicates the indexes, forming a common “decentralized database”…
Any file added to IPFS is linked to an IPNS key along with a “zen token” coin counter, allowing contracts to be applied when a new SSB relation happens, the file’s “wallet” to be paid when the file is shared, and nodes to be paid for pinning it…

It acts like a “cellular automaton”.
Alpha-stage code is being tested on ~10 nodes.

https://ssbc.github.io/scuttlebutt-protocol-guide/

IPFS can’t handle mutable references. Applications need them, so you need a mutable-reference layer.

Some alternatives to a blockchain are:
- SSB
- OrbitDB
- Gun

Or more likely IPNS.
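Roughly, that looks like this (a sketch using the Python ipfshttpclient library; the call names are from that client, and the peer ID is a placeholder): publish the latest CID under your node’s IPNS name, and have peers resolve the name to find it.

```python
import ipfshttpclient

client = ipfshttpclient.connect()

# Publish the latest CID under this node's IPNS name (its peer ID by default).
cid = client.add_str("current application state")
client.name.publish("/ipfs/" + cid)

# Another peer resolves the name back to whatever was last published.
resolved = client.name.resolve("/ipns/<peer-id-of-the-publisher>")
print(resolved["Path"])  # e.g. /ipfs/Qm...
```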

I do this right now with what I would call the cheapest IPNS rip-off ever. I sidestepped IPNS because it wasn’t clear how to get IPNS update events; by rolling my own with pubsub this was trivial.

I set up a system where each node regularly broadcasts its root hash (the hash of the root of its MFS; I used MFS rather than working with IPLD nodes directly because I thought it would be easier, and I wish I hadn’t, but it works!), directly analogous to an IPNS publish.
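In sketch form the broadcast loop is just this (Python-ish pseudocode; `my_node.ipfs` stands for an IPFS HTTP client handle, and the topic name is made up):

```python
import time

TOPIC = "state-announcements"  # arbitrary topic name

while True:
    # Hash of the MFS root - it changes whenever anything under / changes.
    root_hash = my_node.ipfs.files.stat("/")["Hash"]
    # Broadcast it, directly analogous to an IPNS publish.
    my_node.ipfs.pubsub.publish(TOPIC, root_hash)
    time.sleep(30)  # publish interval is arbitrary
```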

Nodes which are interested in mutable state from other nodes check each incoming root hash. If it comes from a node they care about, they inspect the tree to see whether any of the files under the paths they care about have been updated, and if so the node copies those updates into its own mutable root.

Specifically, right now I use this to share “job” units, which fully describe a task that a node should carry out. When a node wants another node to do something, it creates the job spec and sends a message to the target node saying “check out this job at this path in my mutable state”.

Once this message is processed, the target and host nodes are both watching for updates on that job via each other’s regularly published root hashes. The target will add a signed file, either “.accepted” or “.rejected”, to the job directory to accept or reject the job. If accepted, the target will then process the job, adding any produced artifacts to the job directory and finally signing a “.finished” file to say it has completed work on the job. I use a (currently not enforced) set of rules on the valid states of a job directory to avoid conflicts; the files each node is allowed to create in the directory are restricted, e.g. only the target may create “.accepted”, “.rejected” and “.finished”, and only the host may create “.released”.
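To make the rule set explicit, here is a sketch of what enforcing it could look like (the marker files and roles come from the description above; the ordering checks are my guess at sensible rules, and the function itself is hypothetical since enforcement isn’t implemented yet):

```python
# Which marker files each role is allowed to create in a job directory.
ALLOWED_MARKERS = {
    "target": {".accepted", ".rejected", ".finished"},
    "host": {".released"},
}

def is_valid_change(role, new_file, existing_files):
    """Reject marker files created by the wrong party or in a contradictory order."""
    if new_file not in ALLOWED_MARKERS.get(role, set()):
        return False
    # A job cannot be both accepted and rejected.
    if new_file == ".accepted" and ".rejected" in existing_files:
        return False
    if new_file == ".rejected" and ".accepted" in existing_files:
        return False
    # A job can only be finished after it has been accepted.
    if new_file == ".finished" and ".accepted" not in existing_files:
        return False
    return True
```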

I’d love to hear if someone has already implemented a pattern like this. I have it all running, but I wasn’t intending to invent “public mutable state based job distribution”; it’s just a thing I needed!

The thing of note: this is an almost-decentralised system right now which allows any network participant to ask any other network participant to do anything in an auditable way (you cannot issue a job without anyone on the network being able to prove you issued it); the only weakness right now is the bootstrap nodes. It’s worth noting that I use a private network (pnet) for the moment, but this is not my ultimate goal. I went with pnet because it’s a lot easier to manage during development: on the main net I found I had to do quite a bit of configuration to stop the routers where the nodes live from blowing up, and maintaining connections to the nodes that care about the pubsub channels wasn’t reliable either. I believe that when I switch to the main net and the IPFS bootstrap nodes, the system will be “as decentralised as is useful”.

It’s worth noting that I’m pretty new to P2P systems. I’d love any feedback on my approach here, or to hear about tools that already do some of the steps I have implemented better; less code makes better days!


How do you do that?
As far as I know there is no API that allows you to do just that.
Just curious :slight_smile:

I am actually working on something that smells pretty similar: a kind of “Amazon Lambda” implementation in a distributed manner. For that I’m using both NKN (a blockchain for quick routing of network data) and IPFS. I use NKN to “register” the nodes that can handle “distributed lambdas”, and IPFS to store the code that needs to be executed on those instances. It works by letting each node listen on a given pubsub channel (in NKN, which also has pubsub). If someone wants to execute something, they just send a message with a request to execute some code and an IPFS hash of the actual code. Then one of the nodes in that pubsub channel responds and connects directly to the user who asked, to send the results back.

You could also swap NKN’s pubsub for IPFS pubsub. The difference is that IPFS uses gossip, which is usually fast but can sometimes be slow. NKN’s pubsub works by maintaining a list of subscribers and sending each of them a message, which is more direct but also more bandwidth-intensive.

As a proof of concept, this already works! But it’s thoroughly insecure. There is no form of authentication at all yet. There needs to be some form of node authentication, which is really difficult to do in a fully autonomous manner. I’d need some form of consensus and proof-of-work to do that reliably, which is too complicated. Any other option seems to add a manual step into the mix, which I’m trying to avoid.
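For what it’s worth, the IPFS-pubsub variant of the listener could be sketched like this (Python with ipfshttpclient; the topic and message format are made up, and because of the missing authentication mentioned above the fetched code is deliberately not executed):

```python
import json
import ipfshttpclient

client = ipfshttpclient.connect()

with client.pubsub.subscribe("lambda-requests") as requests:
    for msg in requests:
        req = json.loads(msg["data"])    # e.g. {"code": "<cid>", "reply": "<topic>"}
        code = client.cat(req["code"])   # fetch the lambda code from IPFS
        # Executing untrusted code here is exactly the unsolved authentication
        # problem: a real node must verify the sender before running anything.
        result = "not executed: sender not authenticated"
        client.pubsub.publish(req["reply"], result)
```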

I’m very interested in what you are trying to do!

So the incoming messages arrive via a custom pubsub topic, and each message only contains the new root hash. I rely on pubsub’s own message signing for security there, which means that as long as users keep their IPFS private keys safe, nobody can impersonate them and send a false state update.
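Concretely, the receiving side looks something like this sketch (field names follow the pubsub HTTP API; depending on the daemon and client versions the fields may arrive base64-encoded):

```python
with my_node.ipfs.pubsub.subscribe("state-announcements") as sub:
    for msg in sub:
        incoming_id = msg["from"]    # the sender's peer ID, verified by pubsub signing
        incoming_hash = msg["data"]  # the announced MFS root hash
        # Hand off to the sync logic shown further down.
        handle_state_update(incoming_id, incoming_hash)
```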

As for comparing the hashes, I use /api/v0/object/diff. Each node has a list of data records which look roughly like this:

[local state path] [remote id] [remote state path]

Let’s call them “sync records”.
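In code a sync record is just a small struct; as a sketch:

```python
from dataclasses import dataclass

@dataclass
class SyncRecord:
    local_state_path: str   # path in my own MFS that mirrors the remote state
    remote_id: str          # peer ID of the node whose state I follow
    remote_state_path: str  # path under that node's announced root hash
```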

When an update is received the process is:

```python
def handle_state_update(incoming_id, incoming_hash):
    # incoming_id / incoming_hash come from the pubsub state-update message
    for s in sync_records:
        # Check if this sync record matches the incoming id
        if incoming_id != s.remote_id:
            continue
        # Local DAG node: hash of my copy of the synced subtree (in MFS)
        ldag = my_node.ipfs.files.stat(s.local_state_path)["Hash"]
        # Remote DAG node: hash of the same subtree under the announced root
        rdag = my_node.ipfs.object.stat(
            "/ipfs/" + incoming_hash + s.remote_state_path)["Hash"]
        # /api/v0/object/diff between the two subtrees
        diff = my_node.ipfs.object.diff(ldag, rdag)
        for d in diff["Changes"]:
            # Make each differing path match (copy the changed link into MFS)
            ...
```

So there is a little bit of work we need to do on top of IPFS to make it work. I’ve started to look at the GraphSync work, as I think it achieves the same end with IPLD selectors, but I need to read more to understand it better!

I sidestep some of the issues with pubsub propagation by running an extra piece of code which makes a node try to maintain connections to anything it shares a “sync record” with; this is something I will revisit in the future, as it isn’t particularly elegant just now. I’m also looking at some of the issues around better control of the connection manager in go-ipfs. The biggest issue for me right now is that it doesn’t appear to score peers it shares a pubsub topic with highly, so it will drop those connections even though, for my application, they are the “ideal connection peers”.
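That extra piece of code is essentially just a keep-alive loop over swarm connect, something like this sketch (it assumes the peer can be found via the DHT from its peer ID):

```python
import time

while True:
    for s in sync_records:
        try:
            # Re-dial every peer we share a sync record with, so pubsub
            # messages keep flowing even if the connection manager drops them.
            my_node.ipfs.swarm.connect("/p2p/" + s.remote_id)
        except Exception:
            pass  # peer is offline or unreachable right now
    time.sleep(60)
```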