Is it possible to have several IPFS nodes with the same keys across them?
What we want is to store our files on several nodes, but we would like to access them via IPNS names, and as I understand it, we need the same key on all nodes to have the same IPNS name on them. Is this possible at all?
Multiple nodes with the same peer ID is a big no-no and doesn't work.
However, you can have alternative keys to use for IPNS (see ipfs key --help). If you duplicate one of those across nodes, it likely won't do much: one peer's counter will be higher than the other's, and only that peer's publishes will go through. (That's a guess, I haven't actually tested it, so maybe I'm wrong.)
However, you don't need to have IPNS running on all nodes for them to share the files.
You can have a single node publishing IPNS and the other ones just storing the same CIDs behind the IPNS record. (There is still a single point of failure for the IPNS publishing, but all nodes will serve the files.)
How can I copy files already uploaded from the first node to other nodes? Is there some kind of sync or something like that?
Run this on the other nodes:
ipfs pin add Qmfoo
Replace Qmfoo with your CID.
That's kind of a pain because you need to do it on every node; personally I use cluster.ipfs.io for that: you add or pin the file once, and it gets broadcast to all the nodes.
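If you go the cluster route, here is a minimal sketch of that same pin, assuming ipfs-cluster-ctl is installed and talking to your cluster (Qmfoo is still a placeholder CID):

# pin once; the cluster service broadcasts it to every peer
ipfs-cluster-ctl pin add Qmfoo
# check how the pin is replicated across the cluster peers
ipfs-cluster-ctl status Qmfoo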
I was searching for that too, and you can link IPNS names to IPFS content: redirecting a .crypto / .wallet address (from Unstoppable Domains) to an IPFS link, or a ".eth" (Ethereum domain) to an IPFS link (or even, as shown below, a simple DNSLink from your "web 2.0" domain via DNS). Then your domain.crypto or domain.eth (or domain.com) will link to all nodes.
In other words, what I was looking for, and what you are looking for if you arrived here (hi!), could also be this, or something similar.
Hardened, site-specific DNSLink gateway.
Disable fetching of remote data (NoFetch: true) and resolving DNSLink at unknown hostnames (NoDNSLink: true). Then, enable DNSLink gateway only for the specific hostname (for which data is already present on the node), without exposing any content-addressing Paths:
$ ipfs config --json Gateway.NoFetch true
$ ipfs config --json Gateway.NoDNSLink true
$ ipfs config --json Gateway.PublicGateways '{
"en.wikipedia-on-ipfs.org": {
"NoDNSLink": false,
"Paths": []
}
}'
Gateway.NoDNSLink is set to false by default, which means Kubo will resolve DNSLink TXT records from name servers by default.
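For reference, a DNSLink is just a TXT record on the _dnslink subdomain of your domain; here is a quick way to check one, assuming dig is available (example.com and the IPNS name are placeholders):

# look up the DNSLink TXT record for a domain
dig +short TXT _dnslink.example.com
# expected output looks like: "dnslink=/ipns/<your-k51-ipns-name>"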
Does it also resolve .eth / .crypto domain names (etc.) by default now? Looks like it does. Neat stuff! (Edit: otherwise, this fixes it.)
EDIT: Well, I should have mentioned up front that my website is only .html files and images. So: Warning: this ONLY works for a static website where you update all the IPFS instances at the same time. On anything non-static, as explained below by @ylempereur, these instructions will just… create a real mess (i.e., a random number of people will be accessing a random snapshot of your website taken at a random point in recent time, always).
Guess what. One year later, I was finally able to try everything properly, and I can now tell you: YES, it's possible to have the exact same, immutable IPNS key on multiple servers, and it works. The standard Kubo instance only has the (auto-generated) "self" key, but you can create other keys:
ipfs key gen super
ipfs key export super
(The key will then be saved as super.key in the current folder.)
Then, you take this super.key file and put it on the other server(s), possibly somewhere in the .ipfs folder to avoid weird "permission denied" errors. And on all these other server(s) you do:
ipfs key import super super.key
ipfs key list -l
ipfs key list -l will give you the long CID of the “super” key. Then, on all your servers, you will be able to:
ipfs name publish --key=(long-superkey-IPNS-CID) (your-latest-files-IPFS-cid)
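For what it's worth, here is a sketch of how one update cycle could look on each server (the ./public path is a placeholder for your site build; per ipfs name publish --help, --key also accepts the key name, so the long CID is not strictly required):

# add the latest build of the static site and capture its root CID
CID=$(ipfs add -Qr ./public)
# point the shared "super" IPNS name at it (same commands on every server)
ipfs name publish --key=super /ipfs/"$CID"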
Your IPFS CIDs can change as much as they want; they will now always be linked by an immutable IPNS CID, just like a domain name. Yes. The same IPNS address. Forever. Associated with multiple servers. No need to install an IPFS Cluster. No need to buy a .eth or any other domain. And if you do keep a domain, no need to update your DNSLink anymore, because it will always point to the same IPNS address!
I find it so odd that there's basically no info AT ALL on this, when the main thing about IPFS is redundancy, i.e. the ability to host things on multiple servers. I spent two years thinking an IPNS key had to mean "one IP address of one machine", and that you therefore had to get a domain name and update the IPFS links all the time… until I found this message from 2017.
This really needs to be added everywhere in the help files and how-tos. Otherwise it seems like it's impossible to even host a simple, static website on multiple servers linked to one immutable address unless you spend a lot of time and money on IPFS Cluster and IPLD (?) and whatever else… But maybe this is just how the "big data guys" working with IPFS for a living do it too? I just did not know it!
P.S.: Some guy at IPFS sure likes to have fun. I was searching through the base32/base36 encodings to see which ones work and which don't, and at the end of the list was… base256emoji.
https://ipfs.io/ipfs/🚀🪐⭐💻😅😍🌼✌👌😉👋💔🤘🤐✈✊💿👈💣👎😕🤬😰😱😆😎🖤😶🙊🐶💎😒💜😴💕😀🦋
This is an actual website. This is https://en.wikipedia-on-ipfs.org/ … encoded in “base256emoji”.
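If you want to try that yourself, something like this should do it, assuming your Kubo build's multibase table includes base256emoji (the CID is a placeholder):

# list the multibase encodings your node knows about
ipfs cid bases
# re-encode any CIDv1 into emoji
ipfs cid format -v 1 -b base256emoji <your-cidv1>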
P.S. 2: A parenthesis on Unstoppable Domains.
I don't know if ".eth" domains work, but Unstoppable Domains will not let you put something starting with "k51…" (any IPNS key) in the "IPFS" info field. You can:
ipfs cid format -v 1 -b base32 (k51…)
That will change your key to base32 (starting with "bafzaa…"). The website will accept it… then mess everything up, because it will redirect to ipfs.io/ipfs/bafzaa…(etc.) and give an internal server error, when it's supposed to be ipfs.io/ipns/bafzaa…(etc.).
So, add that to the list of things Unstoppable Domains screwed up, I guess.
Anyway… enjoy!
Eek, don’t do that, it’s a really bad idea and it will not work properly.
@Jorropo tried to explain why in his first post to this thread, but I guess you either missed it or didn’t understand it. I’ll try and clarify the two reasons why this won’t work:
Each node keeps a counter along with each IPNS key, which it uses to number each IPNS record when you publish. The latest record it has itself published is then reprovided every four hours. Here is what that looks like:
Let's say you only have one node and you publish a new CID on a new key. Let's call the new CID A. The record will be published with a count of 1. Let's call this record 1A. From that point on, the node will reprovide record 1A every 4 hours. Let's say you then publish a newer CID, B. This will create IPNS record 2B, which will then be reprovided every 4 hours. If you again publish an even newer CID, C, the newest record 3C will now be reprovided every 4 hours. Let's say you did this fairly quickly, so all three records still exist in the DHT. If a client looks up your key, all three records will be returned! The client will choose the one with the highest counter, 3C, and use that.
If you duplicate the key to another node and publish a newer CID, D, it will create record 1D and start reproviding it every four hours. From this point on, the first node is reproviding 3C and the second node is reproviding 1D. If a client looks up your key, both records will be returned, and it will choose 3C as the latest (highest count).
If you publish a new CID to the second node, you end up with 2E, and a client still chooses 3C. Publish again, and you have 3F. Now, a client will choose randomly between 3C and 3F, for each lookup.
Add a third node, and now you have a real mess.
The point is, you still only have one node reproviding your latest record, and clients may choose a different record coming from another node just because it has a higher count. In short, it won’t do what you want.
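If you want to see those counters for yourself, something like this should show the sequence number of whichever record a lookup returns, assuming a reasonably recent Kubo (the k51… name is a placeholder for your key's IPNS name):

# fetch the IPNS record currently returned by the routing system and inspect it;
# the "Sequence" field is the counter described above
ipfs routing get /ipns/<your-k51-ipns-name> | ipfs name inspect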
What you are looking for is IPNS record re-pinning, which is a fairly recent thing that some providers have started to support. You still have only one node that publishes new records, with only one counter, but other nodes pick up that latest record when it appears, re-pin it, and help reprovide it from that point on. So now you have multiple nodes providing that latest record, making it more available.
See "feat(ipns): Add a parameter in name.publish to change the sequence number" by gsergey418 (ipfs/kubo pull request #10851 on GitHub), which is related and should in theory allow re-publishing safely from different places, assuming those places coordinate on the sequence number.
I should have pointed out that my website is a static website (and, above all, I will add a note at the start of my last message: do not do that when your website is not just simple .html files etc., for obvious reasons). I fully understand that on anything non-static, my way of doing it will be a real mess.
Now… for my static website? That's exactly what I want. I want my website to always be available, even if one of the multiple IPFS instances crashes (and that is all). I obviously publish updates with my IPNS "super" key on all the IPFS instances at the same time.
If you publish a new CID to the second node, you end up with 2E, and a client still chooses 3C. Publish again, and you have 3F. Now, a client will choose randomly between 3C and 3F, for each lookup.
Oh well, as a bonus, it also acts as a load balancer. Neat!
Edit: Yeah, I also messed up and said my website was non-static somewhere in my last message; I'm sorry.