Does anybody have an idea how I can forcibly overwrite that dead block?
I’ve done a dag export from a node with a good copy and then a dag import, but it still shows the same error:
ipfs block rm QmxxxxxpMYKZu
cannot remove QmxxxxpMYKZu: pin check failed: failed to get block for QmxxxxxxxxxxQJb3WiX: read /data/ipfs/blocks/GT/CxxxxxxxxxxxxxxGTA.data: input/output error
Error: some blocks not removed
I really don’t want to have to re-pin 750+ GB into a fresh repo if I can avoid it.
If you hard shut down your computer, your FS can end up corrupted (in the worst case you have to remove the partition and reformat it, losing all files in the process). But that’s rare: ext4 has a good journal, so the most you should expect is a partial or total rollback of some recent modifications.
Maybe just delete that block in the filesystem. The path of your block on disk is derived from its key in IPFS’s local repository, where all the blocks are stored (on Windows the repo lives in a folder inside your user directory).
After deleting the bad block, you can fetch a good copy back from other peers.
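To find which file to delete, the on-disk name can be derived from the CID. Here is a minimal sketch assuming the default kubo flatfs layout (the filename is the base32 uppercase, unpadded encoding of the CID’s multihash, sharded by the “next-to-last/2” function); it only handles CIDv0 (`Qm…`) strings, and the CID below is just a sample:

```python
import base64

# base58btc alphabet used by CIDv0 ("Qm...") strings
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode(s: str) -> bytes:
    """Decode a base58btc string to raw bytes (no leading-zero handling,
    which CIDv0 multihashes never need)."""
    n = 0
    for ch in s:
        n = n * 58 + B58_ALPHABET.index(ch)
    return n.to_bytes((n.bit_length() + 7) // 8, "big")

def flatfs_path(cidv0: str) -> str:
    """Return the relative flatfs path for a CIDv0 block, assuming the
    default 'next-to-last/2' sharding."""
    mh = b58decode(cidv0)                             # raw multihash bytes
    key = base64.b32encode(mh).decode().rstrip("=")   # uppercase, unpadded
    shard = key[-3:-1]                                # 2 chars before the last
    return f"blocks/{shard}/{key}.data"

# Sample CIDv0; substitute your dead block's CID.
print(flatfs_path("QmYCvbfNbCwFR45HiNP45rwJgvatpiW38D961L5qAhUM5Y"))
```

The `GT/…GTA.data` path in the error above matches this scheme: the shard directory is the two characters just before the last character of the filename.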
I was thinking that block rm could be modified to include an arg to just force remove the block from the db.
I tried to unpin the block, but as my first post shows, it errors out on another dead block higher up the Merkle tree.
I’ve tried following the advice to manually delete those blocks from the filesystem, so all that remains are the invalid entries in the datastore, which I have no tools to remove. Any attempt to replace the bad blocks then fails, because the daemon refuses to write new copies while the datastore claims they still exist.
A low-level CLI utility to manually edit the datastore would be nice, even if it isn’t packaged with ipfs itself: a sort of extra ‘I know what I’m doing. No really. This crap is already fubar’ed and it can’t get worse, so don’t even bother warning me!’ tool.
Or perhaps the blockstore manager could write a field marking a block as invalid and needing replacement, regardless of its pin status, whenever it encounters read errors?
Maybe blocks so marked could always be forcibly (re)fetched via p2p rather than read from the local blockstore, so you could at least continue operations such as unpin and gc?
Or manually via a command? ipfs block mark invalid?
Hello, this looks like a hardware failure. ipfs block rm is the way to forcefully remove a block, but if the disk is broken you cannot read from or write to it, and IPFS can do little about that.
What you call the “db” is literally just the files in the blocks/ folder. This is not solvable by anything IPFS can do, because the disk itself is broken.
I am not sure what “db” you want to manually edit: other than the pinset, which holds the pin roots, there is no such db. IPFS cannot “replace blocks” if the disk refuses to read or write. If you manually remove the file and restart the ipfs daemon, then any operation on that block (e.g. ipfs block stat) will re-fetch it (with --offline it will complain that the block is not found). And for all I know, IPFS may fail to write it again because the disk is borked.
On a slightly different but related tack: I’ve now managed to get a bad block in my S3-API-backed blockstore.
I’ve tried to find exactly what the object key is for the underlying dead object so I can replace it.
Unlike with a flat-file blockstore on local disk, I can’t figure out which debug options to enable to monitor the calls to the S3 API.
I’ve enabled debug logging for pin, blockstore and blockservice, but none of them shows anything more than the v0 CID of the dead block, which I already know.
Is there any way to work out the object name in S3 from the CID?
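As a starting point, the datastore key for a block is the base32 (uppercase, unpadded) encoding of the CID’s multihash. Assuming the go-ds-s3 plugin with the blockstore mounted at `/blocks` in your Datastore Spec, the object should sit under your configured `rootDirectory` with that key as its name; that layout is an assumption worth verifying against your plugin version and config. A sketch for CIDv0 (`Qm…`) strings, with a sample CID and a hypothetical `myroot` prefix:

```python
import base64

# base58btc alphabet used by CIDv0 ("Qm...") strings
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def cidv0_to_ds_key(cidv0: str) -> str:
    """Derive the datastore key (base32 uppercase, unpadded multihash)
    from a CIDv0 string."""
    n = 0
    for ch in cidv0:                  # base58btc decode
        n = n * 58 + B58_ALPHABET.index(ch)
    mh = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return base64.b32encode(mh).decode().rstrip("=")

# Sample CIDv0; substitute your dead block's CID.
key = cidv0_to_ds_key("QmYCvbfNbCwFR45HiNP45rwJgvatpiW38D961L5qAhUM5Y")
# Candidate object key; "myroot" stands in for your configured rootDirectory.
print(f"myroot/{key}")
```

You could then confirm the guess with a prefix listing against the bucket, e.g. `aws s3 ls s3://your-bucket/myroot/CIQ…` (bucket and prefix here are placeholders).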