Besides a browser add-on, one could also get shared storage requiring user consent by hosting a file on a trusted domain (ipfs.io?) which explicitly requested permissions from the user before use, and which used a service worker to keep working offline after the initial download. Some browsers already allow hashes within script tags to insist on a particular version of a file being used, so the script could be trusted if referenced that way: https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity .
One problem here (besides these scripts not themselves being stored within IPFS) is that the browser gives each domain a storage limit. One could have a script that manages the storage amounts and directs users to other domains when the amounts become unwieldy, though that in turn requires being connected, plus a good number of domains mirroring the same identical code. (From my naive point of view at least, this is starting to sound a bit like a blockchain itself!)
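The "script which manages the storage amounts" part could be as simple as a threshold check plus a mirror list; a naive sketch, where the mirror domains and the 90% threshold are made-up values for illustration:

```javascript
// Sketch: when the current domain's storage quota is nearly full, hand the
// user off to the next mirror domain running the same code. Mirror names
// and threshold are illustrative, not real services.
const MIRRORS = ['mirror-a.example', 'mirror-b.example', 'mirror-c.example'];

function nextDomain(currentDomain, bytesUsed, quotaBytes, threshold = 0.9) {
  if (bytesUsed / quotaBytes < threshold) {
    return currentDomain; // still room: stay where we are
  }
  // Quota nearly exhausted: rotate to the next mirror in the list.
  const i = MIRRORS.indexOf(currentDomain);
  return MIRRORS[(i + 1) % MIRRORS.length];
}
```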
Good to know. The issue here though is that, like you said, it has to do with content origin policies. On IPFS everything is a file that gets torrented by the browser from multiple origins; it’s entirely agnostic to source! That’s precisely why I want to use only IPFS: full decentralization where origin doesn’t matter in any way. But if you do that, how do you keep accounts fully secure from hackers?
Anyway, I’ve been thinking more about my idea and remixed it a little in my head: if I ever attempt such an application, I don’t think I’m going to make it a real website per se. Instead, each user profile will be a site of its own which uses a list of other profiles (running the same site code) for features such as watches or search. This would be even more flexible and decentralized, and should be a very interesting concept to see in action.
Okay… here is an example of what my idea would imply. This is separate from storing user settings, but the answers should apply to that question in a similar fashion.
To make a profile, people would download the site code and use “ipfs add” and other commands to upload a copy with their username to the IPFS cloud, thus initializing their site-profile. When that happens, the site adds its IPNS address to a cloud list (potentially customizable), which is a simple JSON array containing the URLs of all profiles created with the site. Conversely, if a profile removes itself from a cloud list, its entry must be removed from this array too. This list exists so that users have search and notification functions which can access the profiles and posts of other users.
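The cloud-list manipulation described above could be sketched as two small functions over the JSON array (the function names and address format are illustrative):

```javascript
// Sketch: the "cloud list" as a JSON array of IPNS addresses, with add and
// remove kept idempotent so repeated registrations do no harm.
function addProfile(cloudList, ipnsAddress) {
  // Avoid duplicate entries if a profile registers twice.
  return cloudList.includes(ipnsAddress) ? cloudList : [...cloudList, ipnsAddress];
}

function removeProfile(cloudList, ipnsAddress) {
  return cloudList.filter((entry) => entry !== ipnsAddress);
}

// The array would be parsed from / serialized back to the shared file
// with JSON.parse and JSON.stringify.
```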
The file must remain reachable at the same URL whenever it’s edited (IPNS should handle this).
Next, the file must withstand a tremendous amount of editing, since many people may add or remove accounts from a cloud list. Depending on how heavily it gets used, that could reach 1000 edits per second.
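One way to survive an edit rate like that is to not republish on every single change, but to queue changes and flush them as one batched edit; a sketch where `publish` stands in for whatever actually rewrites the shared file (IPNS republish, object patch, etc.):

```javascript
// Sketch: absorbing a high edit rate by batching. Many queued changes
// result in a single publish per flush.
function makeBatcher(publish) {
  let pending = [];
  return {
    queue(change) {
      pending.push(change);
    },
    flush() {
      if (pending.length === 0) return 0;
      const batch = pending;
      pending = [];
      publish(batch); // one publish per flush, however many edits queued
      return batch.length;
    },
  };
}
```

In practice `flush` would run on a timer, trading a few seconds of latency for far fewer rewrites of the list.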
I should ask a more specific question, based on my train of thought with what I know at the moment:
Something tells me it’s more complicated than that. But I’d rather not have to use external services, which seems to be the main alternative here… if I can I’d make such a site work fully within IPFS.
Hello @MirceaKitsune! Man, I can’t believe I came across this post; for about two days now I’ve been wondering about the exact same thing! It’s freaky how much my approach resembles yours: an independent app/page for editing content, the use of JSON as a data store, private/public keys for authentication, a central page for indexing individual users’ pages, etc., all for achieving a truly decentralised platform. I already went a bit ahead, sketching out a renderer function that would update the user’s “portfolio page” upon changes made to the JSON file. Btw, how is your progress?
Greetings. I’m glad that you like my idea, and also that other people are sharing it! So far there’s no progress and only planning: I haven’t even touched IPFS as I’m waiting for my Linux distribution to add the IPFS daemon as an official package.
My plan however is to create a fully decentralized and non-moderated alternative to Twitter / Facebook / Youtube / Deviantart / etc: no central authority taking down any content, and weighted tags to both follow content you like and block content you don’t like. I’m especially focused on those plans in response to the European Union wanting to fine social media if they don’t police the free speech of users as the government sees fit.
The program itself will be an interface which does nothing but interpret the profile file you give it (with public / private key pairs) in order to upload or update files: users register and edit their profiles by sending change requests to any trusted interface, telling it “this is my profile, here is a signature made with the private key matching its public key so you know it’s me, and this is the file I would like you to change”… the script then generates a new profile JSON file if the signature verifies, and makes the profile point to it instead of the old one.
I absolutely agree on the implementation and approach. Regarding the issue of keeping updated files “indexed” after their hashes change, a simple solution could be to hold all profile files inside a directory, so on the “main page” you just iterate over the directory’s contents (regardless of the hashes) and load the profile files into the view.
ipfs add NEW_PROFILE_SETTINGS.json
ipfs name publish /ipfs/HASH_OF_NEW_PROFILE_SETTINGS
Sadly I’m both very busy and very new to IPFS. I just got the IPFS daemon working locally (pretty annoyed there’s no RPM repository for openSUSE yet) and messed around with a few basic commands.
I wish to have a good structure and proof-of-concept for the project first, I will likely attempt this locally. If it grows into something big, it will definitely need a team to develop it further, as there’s only so much I can do with my limited experience.
If you add a directory but only one file is new, the adding process will generally be faster. But if you have thousands of files and only 1% are newer, it might be faster to write a script that adds just what you want to add.
You can manipulate directories in IPFS with the ipfs object patch commands.
ipfs object patch - Create a new merkledag object based on an existing one.
ipfs object patch
'ipfs object patch <root> <cmd> <args>' is a plumbing command used to
build custom DAG objects. It mutates objects, creating new objects as a
result. This is the Merkle-DAG version of modifying an object.
ipfs object patch add-link <root> <name> <ref> - Add a link to a given object.
ipfs object patch append-data <root> <data> - Append data to the data segment of a dag node.
ipfs object patch rm-link <root> <link> - Remove a link from an object.
ipfs object patch set-data <root> <data> - Set the data field of an IPFS object.
Use 'ipfs object patch <subcmd> --help' for more information about each command.
Thank you… I really need to look into this. I do not want to use IPNS for every single file that is modified, that sounds like overkill. Object patches definitely seem like the better solution, so one can reference an updated file using a constant address.
Hello, I just found your post and I had something similar in mind. I wanted to use Blockstack for user authentication and such…
but that requires users to download the Blockstack browser.
Have you been able to produce any code yet?
Maybe we could do it in collaboration, since I am new to IPFS & blockchain as well (I’ve got a few years of programming experience).
I haven’t started coding this project yet: I’m very busy working on other things, getting a browser with IPFS support (including js-ipfs-api) is still tricky at this point, and I need to decide how I’ll work around the limitations imposed by the lack of server-side processing.
I’ve considered using pubsub as a database workaround, similarly to how a normal Apache server would use MySQL. The problem, as I understand it, is that pubsub isn’t persistent: a message is delivered once to whoever is listening at that moment, then it’s lost forever. Makes this all the more complicated.
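One common workaround for non-persistent pubsub is to have each peer keep its own log of seen messages and replay it to late joiners; an in-memory sketch of that idea (purely illustrative, no real networking):

```javascript
// Sketch: peers record every pubsub message they see, so a newly connected
// peer can copy the history it missed from any established peer.
function makePeer() {
  const log = [];
  return {
    receive(message) {
      log.push(message); // store every message as it arrives on the topic
    },
    history() {
      return [...log]; // what this peer can replay to a late joiner
    },
    syncFrom(otherPeer) {
      // A late joiner copies messages it missed from an established peer.
      // Dedupe by value is good enough for a sketch; real code would use IDs.
      for (const message of otherPeer.history()) {
        if (!log.includes(message)) log.push(message);
      }
    },
  };
}
```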