Yesterday I encountered a key that I am unable to publish to. Any attempt to publish to it produces the following error:
$ ipfs name publish --key=thebrokenkey /ipfs/QmQ579LpiPt84mL5CzcQN51eymu4xAAxpSP9m89RUHVHNG
Error: can't replace a newer value with an older value
I dug into the code a bit and found the place where this error string is produced here.
But not being a Go programmer and not having a Go IDE, I was unable to figure out what produces the i != 0. It seems to be some kind of validator that compares the old value to the new value. But shouldn't I be able to publish arbitrary values to IPNS, as long as they are proper CIDs?
Any idea what caused this issue? It forced us to replace the key with a new one, which was quite painful since it was used in multiple places. And I would really like to understand this.
IPFS version was 0.4.17, on Linux amd64.
P.S. I don’t think this is related in any way, but this is a private swarm.
This is a private swarm, and there is only one node on which we publish via IPNS. So this is not possible.
Is there any way to get this “unstuck” without having to generate a new keypair?
A feature being a bit slower is preferable to it not working at all. Currently we publish to an IPNS keypair and occasionally get into a situation where we cannot publish to it anymore, no matter what we do. The only thing that works is generating a new keypair, which is extremely disruptive. We are about to give up on the PKI part of IPNS and just use DNS… but that is a problem because DNS TXT record lookup does not work on Android.
Well, it is understood that this can only ever be best-effort because of partitions. E.g. you have sequence number 10 and somewhere in another partition there is sequence number 20. There is no way for you to find out, but once the partition heals, the 20 will win, I guess…
If you’re only publishing from one node, this should not happen (with or without the network request). In that case, this is a bug.
Does this happen every time? The record in the DHT should expire 24 hours after publishing, after which this should start working again. If it works and then stops working on the same node, something is very wrong (it shouldn’t fail in the first place, but this will help track down where the problem is).
In all cases where we had this, we were publishing from one node. Basically it is a node used from Travis CI to publish new content in a private swarm. There are plenty of other devices in the private swarm, but none of them are set up to publish; they don’t even have keys.
This happens rarely, but once it does, it happens on every publish attempt for that keypair. Unfortunately, in all situations where we hit this, we had to resort to switching to a new keypair and updating the IPNS links in various places, so we don’t know whether it would have started working again after 24 hours.