Can a public IPFS server be made into a private bootstrap node with ipfs-cluster?

I have 2 applications I’m investigating for use under IPFS.

One is to protect content on various public sites from censorship, the other a private network.

I’m hoping ipfs-cluster could be used for both. Let me explain.

I have an IPFS server I set up “normally” (using the default bootstrap list) with lots of files I’ve saved from various public sites. Now I want to make those files available to a private ipfs-cluster network.

I realize that any other public IPFS nodes that have linked to content on my public IPFS server will be cut off from that content if it goes private. I really don’t care, because one of my buddies has already pulled that content and pinned it on his IPFS node, so mine can go away and be private.

The idea is to convert my public IPFS node to private by changing it to a bootstrap node, where other IPFS nodes can join the private network if they are given the private network swarm key.

My thinking is that .ipfs/datastore and .ipfs/blocks hold the actual content, but I don’t know whether the hashes for the content I added would be invalidated, or whether the content would still be served to clients in the private network.

I’m thinking I could rename the .ipfs folder of my public node, create a new empty one, and run ipfs init to generate a new peer identity (but not start the daemon). I would then use that new identity to replace the peer ID in the old .ipfs folder, rename it back to .ipfs, and continue with the process described here for bootstrap creation. Would that work? Would the content replicate to new client IPFS nodes if they were given the private swarm key?
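To make it concrete, the reconfiguration I have in mind on each node joining the private network is roughly this (just a sketch with placeholders, not my actual addresses):

# On every node that should join the private network:
# drop the default public bootstrap list, then add only my bootstrap node.
ipfs bootstrap rm --all
ipfs bootstrap add /ip4/<bootstrap-ip>/tcp/4001/ipfs/<bootstrap-peer-id>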


Hi, I don’t see how what you are describing involves ipfs-cluster.

By adding a swarm.key to your current ipfs node you will turn it into a private network node. There is no need to regenerate the identity (but you can) or to move the .ipfs repository files. The hashes will not be invalidated in any way.
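In case it is useful, the swarm.key is just a three-line text file in the root of the repo, and something along these lines creates one (this assumes your repo lives at ~/.ipfs; copy the same file to every peer that should be in the private network):

# Sketch: write a 256-bit pre-shared key in the format go-ipfs expects.
{
  echo '/key/swarm/psk/1.0.0/'
  echo '/base16/'
  LC_ALL=C tr -dc 'a-f0-9' < /dev/urandom | head -c 64
  echo
} > ~/.ipfs/swarm.key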

What IPFS cluster can help you with is adding content to several nodes at once, by using the cluster /add endpoints with a cluster made of both public and private nodes. Note that I am talking about “adding”, not “pinning”: cluster cannot make private-network peers pin content from the public network, since they are split from that network, but as I say, it can add content to them directly.
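Roughly, that looks like this from the command line (assuming ipfs-cluster-service is already running on each peer, and “myfile.txt” is just a placeholder):

ipfs-cluster-ctl peers ls        # check that all cluster peers are visible
ipfs-cluster-ctl add myfile.txt  # add the file through the cluster so its blocks reach the cluster peers directly
ipfs-cluster-ctl pin ls          # the new CID should appear in the pinset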

Thank you Hector for that info.

You have helped clear up some unknowns for me, such as the fact that converting a previously public node into a private one by adding a swarm key doesn’t invalidate the content it obtained while it was a public node.

The last thing I did late last night was determine that my bootstrap node was not coming up in private network mode; at least I didn’t see the “Swarm is limited to private network of peers with the swarm key” message. I used the Go swarm key generator instead of the manual bash method described in that Medium article, and put that key in $HOME/.ipfs/swarm.key.
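For reference, this is more or less what I ran to generate it (from memory, so the exact repo path and binary location may be slightly off; it assumes the default GOPATH under ~/go):

# Build and run the Go swarm key generator, then drop the key into the repo.
go get github.com/Kubuxu/go-ipfs-swarm-key-gen/ipfs-swarm-key-gen
~/go/bin/ipfs-swarm-key-gen > $HOME/.ipfs/swarm.key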

My first thought about that today is that it’s a problem with the systemd unit I start the daemon with. I do have the env var LIBP2P_FORCE_PNET=1 and the correct swarm key, as per the Medium link in my OP. I will start the daemon manually and see if the result is correct, to eliminate systemd as the issue.

I’m using 2 Raspberry Pi 3s to test, and both are fresh installs of IPFS (not trying the conversion from a public node just yet). Both can connect to each other through ssh with private key auth; however, configured as client & bootstrap (though the config seems incorrect), I get no response when I try to fetch a simple file I added to the bootstrap node, from either node, using http://127.0.0.1:8080/ipfs/<hash>

Just to double-check, you are using the same swarm.key for both peers, right?

Are both peers in your private network connected (check with ipfs swarm peers)? If not, can you connect them manually (ipfs swarm connect <multiaddress>), or does it error?
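Something like this, substituting the other node’s address and peer ID (the values below are only placeholders):

ipfs swarm peers                                              # should list the other private peer
ipfs swarm connect /ip4/192.168.1.50/tcp/4001/ipfs/<peer-id>  # connect by hand if the list is empty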

I’ve been caught up in admin hell today handling emails and such, haven’t gotten back to the tech fun yet…

My approach was to get the 2 Pis working on a private net just like in the Medium article, and once I had that going I would try to copy & switch my public IPFS node to a private network.

Thanks for the suggestions. I’m like 99% sure the swarm keys are identical. Besides, I’ve got to start by figuring out why the bootstrap node doesn’t see/use the swarm key. That is very puzzling.

Do you happen to know what the correct length of the swarm key is? I could also try the bash method to generate the key, but a major bug where the Go key gen program produces the wrong swarm key seems extremely unlikely.

If the key is wrong and go-ipfs tried to use it, it would complain. Check that the systemd service is actually using the ipfs home that you think it is using (maybe you have things in your user’s home but it uses the ipfs user’s home instead, or something like that).
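A few quick checks that might help (adjust the unit name and paths to your setup):

systemctl show ipfs --property=Environment     # the Environment= values systemd actually applies
sudo -u ipfs ls -l /home/ipfs/.ipfs/swarm.key  # is the key where the service's IPFS_PATH points?
journalctl -u ipfs -n 50                       # look for the private network notice in the service logs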

Well, it’s not complaining, so I’ll assume the swarm key is correct.
Just getting back into the tech fun now. I will retrace my steps on the bootstrap installation.

The issue I found was the systemd unit I launch the node with. When I launch on the cmd line, I now see the appropriate msg for the private network.

I don’t quite understand why this unit doesn’t work. My guess is that the LIBP2P_FORCE_PNET var needs to be exported so that sub-processes see that env var as well, not just the main ipfs process the unit spawns directly.

[Unit]
Description=IPFS daemon
After=network.target

[Service]
User=ipfs
LimitNOFILE=65536
Environment="IPFS_FD_MAX=4096"
Environment="LIBP2P_FORCE_PNET=1"
Environment="IPFS_PATH=/home/ipfs/.ipfs"
ExecStop=/usr/bin/pkill ipfs
ExecStart=/home/ipfs/go/bin/ipfs daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target

I created a simple bash script for systemd to run that in turn runs the daemon after setting env:

#!/bin/bash
export IPFS_PATH=/home/ipfs/.ipfs
export LIBP2P_FORCE_PNET=1
export IPFS_FD_MAX=4096
/home/ipfs/go/bin/ipfs daemon

I can run that script from the root cmd line and it works. It also works on the client node. The weird thing is that on the client node the daemon is started upon reboot, but on the bootstrap node it produces an error and won’t start. Same unit def, same script.

It’s not repeatable though. I also tried using a long invocation line on the unit’s ExecStart:

ExecStart=/bin/bash -c 'export… ipfs daemon'

Both techniques work; I’m not sure why it didn’t on the first try.
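For completeness, the full form of that one-liner is something like this (a sketch with my paths; the exec keeps systemd tracking the daemon itself rather than the shell):

ExecStart=/bin/bash -c 'export IPFS_PATH=/home/ipfs/.ipfs LIBP2P_FORCE_PNET=1 IPFS_FD_MAX=4096; exec /home/ipfs/go/bin/ipfs daemon'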

When I look at the daemon’s status with systemctl status ipfs on the bootstrap node, I do not see the private net msg as I do when I invoke the script or launch manually. Systemd weirdness?

Also, after starting the daemon manually I get a hang when I try to access a file I added through:

http://127.0.0.1:5001/webui

but it works via:

http://127.0.0.1:8080/ipfs/<hash>

but only on the bootstrap node. The exact same request on the client node hangs.

Appreciate your replies :)
Have a nice weekend!

To follow up on the missing private net msg under systemctl status ipfs: to be sure I was seeing all of the info the ipfs daemon was writing, I changed the start script to redirect stderr & stdout to a file. The contents of that file showed that ipfs was started in private network mode.
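The redirect was just a change to the last line of the start script, roughly like this (the log path is arbitrary):

#!/bin/bash
export IPFS_PATH=/home/ipfs/.ipfs
export LIBP2P_FORCE_PNET=1
export IPFS_FD_MAX=4096
# Capture both stdout and stderr so nothing the daemon prints is lost.
/home/ipfs/go/bin/ipfs daemon >> /home/ipfs/ipfs.log 2>&1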

Since the client node started with LIBP2P_FORCE_PNET=1 set, it apparently picked up the swarm key; according to the Medium article, that’s the point of that env var: the daemon refuses to start unless the private network is in force.

I expected to be able to read the file I added on the bootstrap node with a query of localhost:8080 for the file’s hash (127.0.0.1:8080/ipfs/<hash>), but it just hangs forever on the client node (I waited 5 minutes). It works fine from the bootstrap node where the file was added. Neither node would respond properly on the webui port (it hangs forever from a browser web request), but I could connect to port 5001 with telnet.

Also thought I could get the webui to come up but no. After starting the daemon manually I get a hang. If I leave off the /webui I get a 404 response, so the server is listening. Also confirmed with netstat ipfs is listening on ports 0.0.0.0:4001, 127.0.0.1:5001 and 127.0.0.18080 on ipv4 but 2 ports for ipfs on ipv6: :::4001 (tcp6) and :::5353 (udp6). There may be a conflict with the udp 5353 port with avahi-daemon which appeared to also use udp :::5353. That may be an anomaly with netstat’s default handling of ipv6 tho.