I'm the developer of a dApp that stores certified files/documents, multiparty agreements, etc. on a blockchain.
I'm testing an IPFS integration for our project but still have a hard time clearing up some points.
I understand that for a file to remain available it needs to be in demand. In our case, no one will request the file aside from the person who added it, so my understanding is that for it to remain available we need to pin it.
At the moment I use ipfs.infura.io to add() and pin files, but if there is only one point of entry and all files are pinned only there, it looks a bit centralized to me, plus we have to deal with Infura's limitation (100 MB max), which according to them will be lowered soon.
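For context, this is roughly what I do today. A rough sketch, assuming the ipfs-http-client package pointed at the public Infura endpoint; exact import and option names vary by version:

```js
// Current flow: add + pin through Infura's IPFS API (sketch, not exact code).
const { create } = require('ipfs-http-client')

const infura = create({ host: 'ipfs.infura.io', port: 5001, protocol: 'https' })

async function storeDocument(bytes) {
  const { cid } = await infura.add(bytes) // upload the file to the Infura node
  await infura.pin.add(cid)               // pin it so Infura's GC keeps the blocks
  return cid.toString()                   // this string is what we write on-chain
}
```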
So in that situation, what would be the best practice? Run our own node on the public network? But then, if a project runs its own node and pins all files on that node, what's the difference from a centralized option? If we do that, isn't it like simply uploading the files to our own server? What's the advantage of this over regular storage?
I'm also trying to understand how storage works. If every file we upload is split into 256 KB chunks and distributed to different nodes, does that mean that if I upload a 10 MB file pinned on my node, only a small part of it goes to my storage (so less than 10 MB of space is taken)? Or does the entire file go onto the storage of the node where it is pinned?
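This is the kind of check I have in mind, as a rough sketch assuming a local js-ipfs node and its object API (names may differ between versions):

```js
// Look at the chunk layout of a file after adding it to a local js-ipfs node.
// All of these blocks sit in the repo of the node that added/pinned the file;
// other peers only fetch them if and when they request the content.
const IPFS = require('ipfs')

async function showChunks(bytes) {            // e.g. a ~10 MB Uint8Array
  const node = await IPFS.create()
  const { cid } = await node.add(bytes)
  const links = await node.object.links(cid)  // the ~256 KB leaf blocks
  console.log(`root ${cid} -> ${links.length} chunks`)
  await node.stop()
}
```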
The ideal scenario is that users keep their own files, then. If no one will request the file aside from the person who added it, then I don't see why you would even store things on IPFS.
The practical scenario is that people let their users store the content and ALSO run or rely on a pinning service: either Infura, Piñata, TemporalX (etc.), something set up with Textile Cafes, or working directly with IPFS Cluster.
The idea was that, if they want, users can have the file itself stored alongside the hash written on the blockchain, either to access it later or to share it with someone else. My understanding was that using IPFS means the file isn't owned by “us”, the developers of this specific dApp, versus using AWS or the like, which would be owned by us.
Meaning users need to have an IPFS node?
I'm using Infura at the moment; I'll look into Textile and Cluster too.
Well yes, if you are creating a dApp, that means every user runs their own peer, which can be a js-ipfs node embedded in the browser, but it is still one peer.
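Something like this, as a minimal sketch, assuming the js-ipfs browser bundle is loaded (e.g. via a script tag exposing a global `Ipfs`):

```js
// Spawn a peer inside the user's browser tab.
async function startPeer() {
  const node = await Ipfs.create()     // a full js-ipfs peer running in the page
  const { id } = await node.id()
  console.log('browser peer id:', id)  // a real peer, but only while the tab is open
  return node
}
```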
If I use the js-ipfs implementation, it runs a node in the browser on the client side for each user. In that case, where are the files stored on the client side? Does it also mean a user's files are only available while they are connected to the web app, since otherwise their node is offline?
Doing this, pinning on users' browser nodes + pinning on Infura would basically mean there are 2 different peers, right?
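Something like this is what I have in mind (a sketch; `node` is the browser peer from above, `infura` is the ipfs-http-client from earlier, names illustrative):

```js
// Two different peers keep the content: the user's browser node and Infura.
async function addAndPinOnBoth(node, infura, bytes) {
  const { cid } = await node.add(bytes)              // copy 1: pinned in the browser repo
  const { cid: remoteCid } = await infura.add(bytes) // copy 2: uploaded to Infura's node
  await infura.pin.add(remoteCid)                    // make sure Infura keeps it
  // With identical add options both CIDs are the same, so only one hash
  // ever needs to be written on-chain.
  return cid
}
```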
Did some tests with an IPFS node in the browser; it works well.
I still don't get where the data/files are stored on the client side, though.
Also, when I close the web app, or even the entire browser, the data is still available from any gateway like https://gateway.ipfs.io/ipfs/qm… even though the browser on the initial machine is offline, I didn't pin it on any other node, and I'm on a different device.
And there's also Filecoin. I don't know exactly what the state of development is, but I think it's still in the testing phase. https://filecoin.io/
I think it's only on testnet for now. As my dApp is already on mainnet, I may implement a js-ipfs node in the browser + pinning on Infura for now, and eventually use Filecoin later if it fits the needs.
I still can't find details about where a node in the browser stores its data. Is it the cache? localStorage?
Does the node shut down when you leave the web app it's embedded in?
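For anyone else wondering: from what I can find, js-ipfs in the browser keeps its repo in IndexedDB (not localStorage or the HTTP cache), and the peer only runs while the tab is open. A minimal sketch, assuming the browser bundle and an illustrative repo name:

```js
// js-ipfs uses a repo named "ipfs" by default, backed by IndexedDB in browsers.
async function startPersistentPeer() {
  const node = await Ipfs.create({ repo: 'my-dapp-ipfs-repo' })
  // Added blocks survive page reloads (you can inspect them in DevTools under
  // IndexedDB), but the peer only runs, and can only serve them to other
  // nodes, while the tab is open.
  window.addEventListener('beforeunload', () => node.stop())
  return node
}
```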