This supports multiple clients, but data loss can occur when many clients update the same IPNS endpoint at the same time.
Let me explain my idea in more detail. A proposed Tag object has two fields: "Name" is the name of the tag, and "MultiHash" is the multihash of the IPFS object the tag points to. Let's define H1 = multihash(Tag.Name) and H2 = multihash(Tag).
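A minimal sketch of the Tag object and the two derived hashes, assuming plain SHA-256 and JSON serialization as stand-ins for the real multihash encoding and IPFS object format:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// Tag is the proposed object: a human-readable Name and the
// multihash of the IPFS object the tag points to.
type Tag struct {
	Name      string // tag name, e.g. "photos-2016"
	MultiHash string // multihash of the target IPFS object
}

// hashTagName computes H1 = multihash(Tag.Name); plain SHA-256
// stands in for a real multihash encoding here.
func hashTagName(t Tag) [32]byte {
	return sha256.Sum256([]byte(t.Name))
}

// hashTag computes H2 = multihash(Tag), the usual content-addressed
// key for the Tag object itself; JSON stands in for the real IPFS
// object serialization.
func hashTag(t Tag) [32]byte {
	b, _ := json.Marshal(t)
	return sha256.Sum256(b)
}

func main() {
	t := Tag{Name: "photos-2016", MultiHash: "QmTargetObjectHash"}
	fmt.Printf("H1 = %x\nH2 = %x\n", hashTagName(t), hashTag(t))
}
```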
H2 is the traditional IPFS content-addressed key. It is used as usual to find the Tag object and then the content object the tag points to.
H1 is used for tag queries, which are different from normal content queries. A tag query could return a list (H2, H2', H2'', ...), and it's up to the application to decide how to deal with the data.
To support tag queries, the routing key for a tag is (H1, H2), which has one more dimension than the usual content key. Content propagation is driven by H2 and the flooding pattern is the same as usual. We might have (H1, H2) and (H1, H2') cached on the same node, and that's OK.
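Continuing the sketch above, here is one illustrative way a node could cache records under the composite (H1, H2) key; the map shape and names are mine, not part of any existing IPFS code:

```go
// tagIndex sketches how a node could cache tag records under the
// composite (H1, H2) key: one H1 (hash of the tag name) maps to any
// number of H2 values (hashes of Tag objects), so records for the
// same name coexist without conflict.
type tagIndex map[[32]byte]map[[32]byte]struct{}

// put stores an (H1, H2) pair; storing the same pair twice is a no-op.
func (idx tagIndex) put(h1, h2 [32]byte) {
	if idx[h1] == nil {
		idx[h1] = make(map[[32]byte]struct{})
	}
	idx[h1][h2] = struct{}{}
}

// lookup answers a tag query: every H2 known for the name hash H1.
func (idx tagIndex) lookup(h1 [32]byte) [][32]byte {
	var out [][32]byte
	for h2 := range idx[h1] {
		out = append(out, h2)
	}
	return out
}
```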
The IPFS tag query operates on a different plane to support IPFS content annotation. The above design should serve our goal and make IPFS content searchable/addressable by tag name. Note that when the same tag Name points to different IPFS objects, there will be multiple distinct IPFS Tag objects. A tag query for Name can therefore return multiple results.
For the implementation, we need to augment the IPFS routing API, e.g. add new APIs such as PutTag() and GetTag(). Tag objects are small and can be stored directly in the DHT.
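A hedged sketch of what that routing extension could look like, continuing the sketch above (add "context" to its imports); TagRouting, PutTag(), and GetTag() are hypothetical names for this proposal and do not exist in go-ipfs today:

```go
// TagRouting is a hypothetical extension of the IPFS routing layer;
// the names below follow this proposal, not any existing API.
// Tag is the struct sketched earlier.
type TagRouting interface {
	// PutTag stores the small Tag object in the DHT under the
	// composite (H1, H2) key derived from it.
	PutTag(ctx context.Context, tag Tag) error

	// GetTag performs a tag query: given H1 = multihash(Name), it
	// returns every Tag object whose Name hashes to H1, i.e. the
	// list (H2, H2', H2'', ...) described above.
	GetTag(ctx context.Context, h1 [32]byte) ([]Tag, error)
}
```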
I'm afraid this kind of feature should be supported by applications. We don't need to worry about it here.
It's far from a key-value store. I believe an IPFS directory (folder) is no different from a blob object. Any change within the subtree creates a new IPFS object, which requires publishing the directory again over IPNS. With shared keys, it supports multiple clients, but it's not safe and could lose data.