Lifecycle of publishing a post on a subplebbit
Check out pronto. It uses pubsub, IPFS streams, RDF/SPARQL and decentralized identities. The first UI is done in QML.
You’ve heard the phrase “Protocols over Platforms”, right? I coined that, and then Jack Dorsey started using it for the BlueSky project he’s pretending to be working on.
Anyway, the idea is that the world doesn’t need an “implementation” of a new social media network. We need a “protocol” for one. The protocol should be so easy that a few lines of JavaScript (with IPFS as the only external dependency) can at least perform the minimal capability of reading some messages and posting a message.
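To make that claim concrete, here is a minimal read-and-post sketch. The in-memory `Pubsub` class stands in for IPFS pubsub (in js-ipfs this would be `ipfs.pubsub.subscribe` and `ipfs.pubsub.publish`); the topic name and message shape are illustrative assumptions, not any spec.

```javascript
// Tiny stand-in for IPFS pubsub: topics map to subscriber handlers.
class Pubsub {
  constructor() { this.handlers = new Map(); }
  subscribe(topic, handler) {
    if (!this.handlers.has(topic)) this.handlers.set(topic, []);
    this.handlers.get(topic).push(handler);
  }
  publish(topic, data) {
    for (const h of this.handlers.get(topic) || []) h({ data });
  }
}

const pubsub = new Pubsub();
const seen = [];

// "Reading some messages": subscribe to a topic and collect what arrives.
pubsub.subscribe('demo-topic', (msg) => seen.push(msg.data.toString()));

// "Posting a message": publish a UTF-8 payload to the same topic.
pubsub.publish('demo-topic', Buffer.from('hello from a few lines of JS'));
```

Swapping the stub for a real js-ipfs node is the only change a networked version would need, which is the point: the protocol surface stays tiny.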
Is there a documented protocol for “pronto” somewhere?
Hi @wclayf
Agreed. Pronto doesn’t reinvent the wheel (no time/skills for that), but rather builds upon great standards like RDF, SPARQL … and IPFS
The end result is a giant P2P RDF graph easily queryable and extensible. One thing that had to be fixed is that some resources like DIDs change over time and their representation in the graph has to be unique and only the DID owner can “overwrite” the previous triples. That’s been taken care of, and graph upgrades are done “triple by triple” (instead of doing a naive RDF graph merge which could produce duplicates).
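A hedged sketch of that “triple by triple” upgrade rule: an update replaces any existing triple with the same subject and predicate instead of being naively merged in (which could leave duplicates), and an update to a DID’s triples is only accepted from that DID’s owner. Ownership is modeled as a plain lookup function here; the real system would verify DID signatures.

```javascript
// graph and update are arrays of { s, p, o } triples.
// ownerOf(subject) returns the DID allowed to rewrite that subject.
function upgradeGraph(graph, update, signerDid, ownerOf) {
  for (const t of update) {
    if (ownerOf(t.s) !== signerDid) continue; // only the owner may overwrite
    // Drop any previous triple with the same subject + predicate, then add.
    graph = graph.filter((g) => !(g.s === t.s && g.p === t.p));
    graph.push(t);
  }
  return graph;
}

const ownerOf = () => 'did:key:alice'; // toy ownership map
let graph = [{ s: 'did:key:alice', p: 'name', o: 'Alice' }];

// The owner overwrites her own triple: one triple remains, no duplicate.
graph = upgradeGraph(graph, [{ s: 'did:key:alice', p: 'name', o: 'Alicia' }],
                     'did:key:alice', ownerOf);

// A non-owner's update to the same subject is silently dropped.
graph = upgradeGraph(graph, [{ s: 'did:key:alice', p: 'name', o: 'Eve' }],
                     'did:key:eve', ownerOf);
```
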
Right now documentation is limited … The best way to learn about it ATM is to run galacteek, open the pubsub sniffer and look at the messages on the galacteek.ld.pronto topic. SmartQL IPFS service streams use this protocol name format:
/x/smartql/beta/urn:ipg:g:h0/1.1
The SmartQL service has a /sparql HTTP endpoint to run SPARQL queries remotely. Other endpoints are there to pull graphs by resource URI.
Here’s a link I already posted in another thread, but it belongs here too:
That guy gets it. He wrote the “simplest possible” PublicKey based social messaging over IPFS PubSub that can realistically exist…and he did it in two days, and with amazingly clean code too.
To build the next social network it needs to be that simple at its foundation, with zero barrier to entry. That kind of simplicity is what’s needed to get everyone on board and build huge momentum, because 100% of developers of all ages will be able to dabble in it successfully.
RDF has its place too, as do all the other advanced features, but not as part of the lowest common denominator.
Great project … Thanks for sharing.
In your system each Subplebbit owner would have absolute authority.
Since they are the one anchoring the discussion, it’s not really decentralized.
What is the difference between that and multiple reddit websites controlled by different people?
That’s why one core tenet of IPSM (Interplanetary Social Media) has to be that it’s based on anyone posting anything to any topic, and it’s up to the consumers of the topics to do the filtering. No central authority.
And it immediately follows that these will be pretty big fire hoses of data, so we need to limit each pubsub message to just a PublicKey and a CID. This means that just by looking at PublicKeys you can know whether you want to discard the CID or not.
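A sketch of that consumer-side filtering on a (PublicKey, CID) firehose: each message carries only a key and a CID, and the consumer decides which CIDs are worth fetching purely from the key. The key and CID values are made-up placeholders.

```javascript
// messages: [{ publicKey, cid }], trustedKeys: Set of keys this consumer follows.
function filterFirehose(messages, trustedKeys) {
  // Keep only CIDs posted under a key the consumer already trusts;
  // everything else is discarded without ever fetching the content.
  return messages
    .filter((m) => trustedKeys.has(m.publicKey))
    .map((m) => m.cid);
}

const firehose = [
  { publicKey: 'pk-alice', cid: 'cid-catpic' },
  { publicKey: 'pk-spammer', cid: 'cid-junk' },
];
const wanted = filterFirehose(firehose, new Set(['pk-alice']));
```

The cheapness of this check is what makes an unmoderated topic survivable: rejecting a message costs one set lookup, no content fetch.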
I wrote the IPSM spec last week:
…and “Undying Wraith” was able to cobble together a working prototype in two days, proving my point that: “If it’s simple enough then adoption is almost certain.”
That’s already how Reddit works, the owner of each subreddit has absolute authority, so the design is not sacrificing any feature of Reddit. Yet it removes the need for Reddit admins and their entire server and legal infrastructure.
Having absolute authority over your own subplebbit that you created is not any more centralized than having absolute authority over your own coins in your Bitcoin wallet. You own it, it is yours. Unlike on Reddit, where you only own a subreddit you created for as long as the admins let you.
Also, the authority is not actually absolute: the client can offer many protections, such as verifying that posts are signed by each user. The subplebbit owner cannot tamper with a user’s post other than to delete it, and anyone can keep a “moderation log” by being a peer in the pubsub network and use it to expose a misbehaving subplebbit owner.
I’ve become a decentralization purist… oh no!
The same features but without the central company or website, sign me up! With crypto signatures you can prevent impersonation, and since the content is immutable, anyone could copy all the content under a new owner.
Keep us posted on your progress, I would love to integrate your protocol with mine.
Where’s your actual protocol @SionoiS ?
That is, what’s the format for how to post a message and how to read a message, without involving or referencing any of your actual code?
I need to write the specifications! Can one call it a protocol if there’s no specs yet? 
After getting some feedback I have added 2 new sections:
Censorship resistance of the captcha server
Captcha servers are not as censorship resistant as a purely P2P network, because they require a direct connection to an HTTP endpoint. If that endpoint is blocked by your ISP or DDOSed, you can’t connect. These attacks can be mitigated in a few minutes by changing the captcha server URL of your subplebbit, or by using DDOS protection like Cloudflare. In a pure P2P network, if some peer is blocked by your ISP or DDOSed, some other peer should be available. A pure P2P captcha server seems impossible at this time, because requesting a captcha challenge is not deterministic: how would peers in this network deterministically block a bad peer spamming captcha challenge requests? If a solution for a P2P captcha server is found, it should be attempted.
Using anti-spam strategies other than the captcha server
The captcha server can be replaced by other “anti-spam strategies”, such as proof of balance of a certain cryptocurrency. For example, a subplebbit owner might require that posts be signed by users holding at least 1 ETH, or at least 1 token of their choice. Another strategy could be proof of payment: each post must be accompanied by a minimum payment to the owner of the subplebbit. This might be fitting for celebrities wanting to use their subplebbit as a form of “OnlyFans”, where fans pay to interact with them. Neither scenario would eliminate spam, but they would bring it down from an infinite amount to an amount that does not overwhelm the pubsub network and that a group of human moderators can manage. Proof of balance/payment is deterministic, so the P2P pubsub network can block spam attacks deterministically. Even more strategies can be added to fit the needs of different communities, but at this time the captcha server remains the most versatile strategy.
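What makes proof of balance deterministic is that every peer evaluates the same rule against the same shared ledger state, so all peers accept or reject a given post identically. A minimal sketch, where the in-memory ledger and the 1-token minimum are illustrative assumptions:

```javascript
// Every peer runs the same check against the same ledger snapshot,
// so acceptance is identical network-wide: no coordination needed.
function acceptPost(post, ledger, minBalance) {
  return (ledger.get(post.author) || 0) >= minBalance;
}

// Toy ledger: author -> token balance.
const ledger = new Map([
  ['alice', 2],
  ['spammer', 0],
]);
```
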
The idea for proof of payment/holding came from @wclayf
I realized that a full captcha challenge request-answer-validation exchange actually is deterministic, and could work over P2P. If a peer or IP address relays too many captcha challenge requests without enough correct captcha challenge answers, it gets blocked from the pubsub, deterministically. The captcha challenge request alone is not deterministic, but the entire exchange is. This would require the subplebbit owner’s peer to broadcast the result of all captcha challenge answers, and each peer to keep this information for some time.
So the “captcha server” over HTTP in the original design can be replaced with a “captcha service over peer-to-peer pubsub” design, which would make the entire design of Plebbit peer-to-peer. I will post an update to the entire redesign soon.
Captcha service over peer-to-peer pubsub
An open peer-to-peer pubsub network is susceptible to spam attacks that would DDOS it, and an infinite amount of bot spam is impossible for moderators to handle manually. We solve this problem by requiring publishers to first request a captcha challenge from the subplebbit owner’s peer. If a peer or IP address relays too many captcha challenge requests without providing enough correct captcha challenge answers, it gets blocked from the pubsub. This requires the subplebbit owner’s peer to broadcast the result of all captcha challenge answers, and each peer to keep this information for some time.
Note: The captcha implementation is completely up to the subplebbit owner. He can decide to prompt all users, first time users only, or no users at all. He can use 3rd party services like Google captchas.
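The blocking rule above can be sketched as a per-peer tally that every pubsub peer maintains independently: requests seen versus correct answers the owner later broadcast. The thresholds here (a window of 10 requests, 20% minimum success rate) are made-up parameters for illustration.

```javascript
class CaptchaLedger {
  constructor({ minRequests = 10, minSuccessRate = 0.2 } = {}) {
    this.stats = new Map(); // peerId -> { requests, correct }
    this.minRequests = minRequests;
    this.minSuccessRate = minSuccessRate;
  }
  stat(peerId) {
    if (!this.stats.has(peerId)) this.stats.set(peerId, { requests: 0, correct: 0 });
    return this.stats.get(peerId);
  }
  // A peer relayed a captcha challenge request.
  recordRequest(peerId) { this.stat(peerId).requests += 1; }
  // The subplebbit owner broadcast the result of an answer from this peer.
  recordResult(peerId, correct) { if (correct) this.stat(peerId).correct += 1; }
  // Deterministic: every peer holding the same tallies reaches the same verdict.
  isBlocked(peerId) {
    const s = this.stats.get(peerId);
    if (!s || s.requests < this.minRequests) return false;
    return s.correct / s.requests < this.minSuccessRate;
  }
}

const ledger = new CaptchaLedger();
// A spammer floods requests and never answers correctly.
for (let i = 0; i < 10; i++) ledger.recordRequest('spammer');
// An honest peer's requests are matched by correct-answer broadcasts.
for (let i = 0; i < 10; i++) {
  ledger.recordRequest('honest');
  ledger.recordResult('honest', true);
}
```

Because the verdict is a pure function of the broadcast tallies, no central party decides who is blocked: that is the determinism the HTTP captcha server lacked.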
Lifecycle of publishing a post on a subplebbit
- User opens the Plebbit app in a browser or desktop client, and sees an interface similar to Reddit.
- The app automatically generates a public key pair if the user doesn’t already have one.
- He publishes a cat post for a subplebbit called “Cats” with the public key “Y2F0cyA…”
- His client joins the pubsub network for “Y2F0cyA…”
- His client makes a request for a captcha challenge over pubsub.
- His client receives a captcha challenge over pubsub (relayed from the subplebbit owner’s peer).
- The app displays the captcha challenge to the user in an iframe.
- The user completes the captcha challenge and publishes his post and captcha challenge answer over pubsub.
- The subplebbit owner’s client gets notified that the user published to his pubsub; the post is not ignored because it contains a correct captcha challenge answer.
- The subplebbit owner’s client publishes a message over pubsub indicating that the captcha answer is correct or incorrect. Peers relaying too many messages with incorrect or no captcha answers get blocked to avoid DDOS of the pubsub.
- The subplebbit owner’s client updates the content of his subplebbit’s public key based addressing automatically.
- A few minutes later, each user reading the subplebbit receives the update in their app.
- If the user’s post violates the subplebbit’s rules, a moderator can delete it, using a process similar to the one the user used to publish.
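The steps above can be compressed into one in-memory exchange. The captcha, the subplebbit state, and all field names are simulated assumptions, not the actual Plebbit message format; the point is only the ordering: challenge, answer, verification, then the owner republishing.

```javascript
function publishLifecycle(subplebbit, user) {
  // User joins the subplebbit's pubsub and requests a captcha challenge.
  const challenge = subplebbit.makeChallenge();
  // User answers the challenge and publishes post + answer together.
  const submission = {
    post: user.post,
    challengeId: challenge.id,
    answer: user.solve(challenge),
  };
  // Owner verifies the answer; wrong answers mean the post is ignored.
  if (submission.answer !== challenge.answer) return { accepted: false };
  subplebbit.posts.push(submission.post);
  // Owner republishes the updated subplebbit under its public key address,
  // which readers pick up a few minutes later.
  return { accepted: true, posts: subplebbit.posts };
}

const subplebbit = {
  posts: [],
  makeChallenge: () => ({ id: 1, question: '2+3?', answer: '5' }),
};
const result = publishLifecycle(subplebbit, {
  post: { author: 'alice', content: 'cat pic' },
  solve: () => '5',
});
```
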
Note: Browser users cannot join peer-to-peer networks directly, but they can use an HTTP provider or gateway that relays data for them. This service can exist for free without users having to do or pay anything.
2 new sections have been added to the whitepaper:
Improving speed of public key based addressing
A public key based addressing network query is much slower than a content addressing based query, because even after you find a peer that has the content, you must keep searching in case another peer has content with a later nonce (more up-to-date content). In content based addressing, you stop as soon as you find a single peer, because the content is always the same. It is possible to achieve the same speed in Plebbit by having public key based addressing content expire after X minutes, and having the subplebbit owner republish the content every X minutes. With this strategy there is only ever one valid piece of content floating around the network, and as soon as you find one peer that has it, you can deterministically stop your search.
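A sketch of that early-stop resolver: records carry a sequence number and a publication time, and since at most one unexpired record exists at any moment, the resolver can return at the first peer that holds one. The record fields and the 5-minute TTL are assumptions for illustration.

```javascript
const TTL_MS = 5 * 60 * 1000; // the "X minutes" expiry window

// peers: the record (or null) each successive peer returns when queried.
function resolve(peers, now) {
  let queried = 0;
  for (const record of peers) {
    queried += 1;
    if (record && now - record.publishedAt < TTL_MS) {
      // Unexpired record found: stop immediately. No need to keep
      // searching for a higher sequence number, none can exist.
      return { record, queried };
    }
  }
  return { record: null, queried };
}

const now = Date.now();
const peers = [
  { seq: 7, publishedAt: now - 60 * 1000 },      // fresh: found first
  { seq: 6, publishedAt: now - 60 * 60 * 1000 }, // stale: never reached
];
const { record, queried } = resolve(peers, now);
```

Without the expiry, the resolver would have to query both peers and compare sequence numbers; with it, one hit suffices.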
Unlinking authors and IP addresses
In Bittorrent, an attacker can discover all the IP addresses that are seeding a torrent, but he can’t discover the IP address of the originator of that torrent. In Bitcoin, an attacker can directly connect to all peers in the network, and assume that the first peer to relay a transaction to him is the originator of that transaction. In Plebbit, this type of attack is mitigated by having the author encrypt his comment or vote with the subplebbit owner’s public key, which means that while the attacker can know that a peer published something, he doesn’t know what or from which author.
Very strong text!
I think you are underestimating the importance of integrating search functionality into Plebbit. One reason people turn to alternative platforms is because of censorship. Content is deleted and lost. A lot of work, for example research or creating content or organizing archives is lost because it is no longer discoverable.
I hope you include search and archiving tools in Plebbit. It should certainly have a bookmarking system that helps people catalogue content and share notes. There is excellent Free Software for this already: Shaarli. Perhaps Plebbit could integrate some Shaarli functionality.
Search isn’t excluded because it’s an unwanted feature; it’s excluded because it seems impossible to do over P2P.
Not having search doesn’t seem to be a dealbreaker for Reddit’s core functionality; I’ve never once used Reddit’s search function in the 10 years I’ve used it. Reddit does come up on Google, which is very useful, but my hope is that independent people will run “archivers”, similar to how they do for 4chan. 4chan posts expire after a few days, but several archivers preserve them, and those can be found on Google and searched.
It’s very easy to archive a complete subplebbit over P2P using plebbit, but it’s very slow.


