I have been exploring how to provide an interface for reading / writing data in a “user library” across all apps / sites on the system, where “user library” means user data on the IPFS network.
To provide a seamless user experience, the PoC tries to use a native IPFS node on the device and falls back to an in-browser IPFS node implementation running in a SharedWorker. But since a user may have different devices and / or have a different node (in-browser or native) available at different times, the experience would suffer, as different data may end up in different nodes.
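For context, the fallback logic looks roughly like this (a minimal sketch, not the actual PoC code; the probe endpoint is the default Kubo HTTP RPC address, and the function names are illustrative):

```typescript
// Sketch of the node-selection fallback. Names are illustrative.
type NodeKind = "native" | "shared-worker";

// Probe the default Kubo HTTP RPC endpoint; a native daemon usually
// listens on 127.0.0.1:5001, and the RPC API expects POST requests.
async function nativeNodeAvailable(): Promise<boolean> {
  try {
    const res = await fetch("http://127.0.0.1:5001/api/v0/id", {
      method: "POST",
    });
    return res.ok;
  } catch {
    return false; // daemon not running / unreachable
  }
}

// Pure selection logic, kept separate so it is easy to test.
function selectNode(nativeAvailable: boolean): NodeKind {
  return nativeAvailable ? "native" : "shared-worker";
}

async function connect(): Promise<NodeKind> {
  return selectNode(await nativeNodeAvailable());
}
```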
This made me realize that ipfs-cluster is something of a solution to this problem: I could form a cluster from the in-browser node and the native node to keep the same data available across both. This would also neatly address the case where a user has multiple devices, by adding each device's node to the cluster. Additionally, server nodes could be added to improve availability / sync.
However, I think there is a design mismatch (unless I’m overlooking something). From the user's perspective, there is a set of devices / nodes and the data in the library. Some devices might have less capacity, though, so replicating all of the data across all the nodes does not seem like a good fit. This suggests that allowing a cluster to maintain multiple pinsets with different peersets might be a better fit. Does that make sense? Or should separate clusters be formed per dataset? The latter seems at odds with the user's perspective, as conceptually it's still the same cluster; some data may simply not be immediately available across all nodes, and in fact having a list of the nodes that hold that data might be useful for fetching it faster when needed.
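To make the "multiple pinsets with different peersets" idea concrete, here is a toy model (purely illustrative; none of these types or methods correspond to actual ipfs-cluster APIs) of one cluster tracking which subset of peers each dataset is allocated to. It also yields the "which nodes have this data" lookup mentioned above:

```typescript
// Toy model of one cluster with per-dataset peersets. Illustrative only;
// this does not correspond to any actual ipfs-cluster API.
type PeerId = string;
type Cid = string;

interface Pinset {
  name: string;
  cids: Set<Cid>;
  peers: Set<PeerId>; // which cluster peers replicate this pinset
}

class UserLibrary {
  private pinsets: Pinset[] = [];

  addPinset(name: string, cids: Cid[], peers: PeerId[]): void {
    this.pinsets.push({ name, cids: new Set(cids), peers: new Set(peers) });
  }

  // "Who has this data": union of the peersets whose pinset holds the CID.
  providersOf(cid: Cid): PeerId[] {
    const out = new Set<PeerId>();
    for (const ps of this.pinsets) {
      if (ps.cids.has(cid)) ps.peers.forEach((p) => out.add(p));
    }
    return Array.from(out);
  }
}

// Example: photos replicated everywhere, large videos only on capable nodes.
const lib = new UserLibrary();
lib.addPinset("photos", ["QmPhoto1"], ["laptop", "phone", "server"]);
lib.addPinset("videos", ["QmVideo1"], ["laptop", "server"]); // phone lacks capacity
```

In this model it is still one cluster from the user's point of view, while capacity-constrained nodes simply sit outside the peersets of the heavier pinsets.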
Any feedback, pointers, and thoughts would be appreciated.