I’m trying to test OrbitDB, which is using IPFS. I’m getting this error:
(node:11441) UnhandledPromiseRejectionWarning: InvalidValueError: The ipfs-repo-migrations package does not have all migration to migrate from version 7 to 9
at verifyAvailableMigrations (/home/user/orbit-playground/music-app-no-react-ipfsv0.46.0/node_modules/ipfs-repo-migrations/src/index.js:231:11)
at Object.revert (/home/user/orbit-playground/music-app-no-react-ipfsv0.46.0/node_modules/ipfs-repo-migrations/src/index.js:160:3)
If I switch to “ipfs”: “0.49.0” in dependencies, I get a “version 8 to 9” error instead. I don’t understand what ipfs-repo-migrations does.
This is the OrbitDB-related source code: https://0bin.net/paste/YtqE5tb4#y57TijqKq-IdswqGPcNY+iHG6fHYzFHtJ9VgwV0F3r2
What is causing this?
A repo migration between these versions is not possible. Judging from the stack trace (the error is thrown from `Object.revert`), your on-disk repo is at a newer version than the one your js-ipfs release expects, and ipfs-repo-migrations cannot migrate downwards.
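You can check what version is actually on disk before touching anything: a js-ipfs repo records its layout version as a bare integer in a plain-text `version` file. The default repo path `~/.jsipfs` below is an assumption; adjust it if you configured a custom repo location.

```shell
# A js-ipfs repo stores its layout version in <repo>/version
# as a bare integer (e.g. 7 or 9).
if [ -f ~/.jsipfs/version ]; then
  cat ~/.jsipfs/version
else
  echo "no js-ipfs repo at the default path"
fi
```

If the number printed is higher than the repo version your js-ipfs release expects, you are in the downgrade case described above.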
My solution was to delete the orbitdb folder and the .jsipfs folder in my home directory, but of course this only works if you don’t need the data.
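Concretely, the reset amounts to something like the following. Both paths are assumptions: `~/.jsipfs` is the default js-ipfs repo location, and the `orbitdb` directory name and location depend on how your app created the database.

```shell
# WARNING: this discards ALL locally stored IPFS blocks and OrbitDB data.
# Only do this if you can re-add the data from somewhere else.
rm -rf ~/.jsipfs      # default js-ipfs repo location (assumption)
rm -rf ./orbitdb      # OrbitDB data directory, relative to the app (assumption)
```

On the next start, js-ipfs initializes a fresh repo at the version it expects, so the migration check no longer fires.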
Speaking of saving your data: I’d never trust IPFS as a database. The IPFS guys are great developers doing great P2P work, but this technology is just not a database. You’re going to need to replicate the data either as plain filesystem files or in an actual database, IMO, if you want to trust with certainty that you’ll never lose your data no matter what.
Everyone should store their own critical pinned data somewhere they can reload it on demand into a ‘from scratch’ IPFS installation at any time, rather than losing all their data if there’s an IPFS bug or a problem with an upgrade.
Edit: And this is especially true when you consider the chunking. IPFS doesn’t even store a single contiguous binary of any given file. Everything is scattered across its proprietary chunks, for performance and to do P2P the way BitTorrent does. So your “real copy” (primary copy) of your pinned data should live in a rock-solid database or in plain Linux files, and you should think of every IPFS file as a kind of “caching layer” whose sole purpose is to make IPFS operate.
Actually this is untrue. It’s great to have backups, for sure, but just as on our decentralized social network, you don’t actually have to store the content outside of IPFS (as long as you pin it); doing so probably also violates a few design patterns. We use DAGs, of which we store only the ref to the latest node (plus a few other important refs for caching purposes). We’ve never had any problems, and the dag interface hasn’t changed in what feels like, and probably is, years.
@wclayf This is supposed to be a decentralized blog. It should work like this: someone uploads a new post, a cluster (3 servers running IPFS) serves it first, and then users can pin it and host it themselves. Do you think this isn’t safe?
@imestin My only point was that I don’t consider any proprietary file format, like IPFS chunk files, to be a viable storage format for archival purposes (not yet as of 2020). A better format would be plain OS files, a real database, or ZIP/TAR, etc.
However, this doesn’t mean you wouldn’t also want to keep backups of the actual IPFS files, solely for the purpose of getting your systems back up rapidly after a failure. These are only my opinions, and they may change in the future.