I was able to reproduce this during this afternoon’s deployment. Please see this (unlisted) pastebin document: https://pastebin.com/eMSFXhR0
(My previous post also has a pastebin link, but it may have been missed since it was rendered in a way that's easy to dismiss as an advert.)
My random sampling suggests that the stuck nodes are indeed connected to other nodes that already hold all the data the stuck nodes want.
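For reference, the check was essentially this (a simplified sketch, not the exact pastebin script; `lipfs` is the wrapper described below):

#!/bin/bash
# For a CID the node is stuck on, check whether any currently connected
# peer is also a known provider of that CID.
cid="$1"
providers=$(lipfs routing findprovs "$cid" | tr -d '\r') # tr strips the CRs added by `docker exec -t`
for peer in $(lipfs swarm peers | tr -d '\r'); do
    peer_id=${peer##*/} # `swarm peers` prints multiaddrs ending in /p2p/<peer-id>
    if echo "$providers" | grep -q "$peer_id"; then
        echo "Connected to provider $peer_id for $cid"
    fi
done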
I reran the script that generated the above, but this time appended an `ipfs pin add` command, and that unstuck the nodes: https://pastebin.com/zV7N0Ngx
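Concretely, the appended step was just a pin per stuck CID, along these lines (`$cid` is a loop variable, shown only for illustration):

lipfs pin add "$cid" # pinning forces a full fetch, which unstuck the node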
Oh, and I should explain the `lipfs` in my script - it's a simple wrapper script that makes it easy to work with a Docker-based IPFS node. The L is for "local", as in private:
#!/bin/bash
# Launch IPFS if necessary
if [[ $(docker container inspect ipfs --format '{{.State.Running}}' 2>/dev/null) != "true" ]]; then
    # Not running - has it ever?
    if [[ $(docker container inspect ipfs --format '{{.State.Status}}' 2>/dev/null) != "" ]]; then
        echo "IPFS container is no longer running. Check 'docker logs ipfs' to diagnose or delete with 'docker rm ipfs' and try again." >&2
        exit 1
    fi
    sudo mkdir -p /external-ssd/ipfs
    sudo chown $(id -u):$(id -g) /external-ssd/ipfs
    # Note: Assumes `ipfs:latest` is present locally (because it's part of dbfOS)
    echo "Launching IPFS ..." >&2
    docker run \
        --detach \
        --user $(id -u):$(id -g) \
        --name ipfs \
        --network host \
        --volume /etc/ipfs/swarm.key:/swarm.key:ro \
        --volume /:/host \
        --volume /external-ssd/ipfs/:/data/ipfs/ \
        ipfs >/dev/null
    echo "Waiting for IPFS to become ready ..." >&2
    until curl --silent http://127.0.0.1:7070/ &>/dev/null; do
        if [[ "$(docker container inspect --format '{{.State.Running}}' ipfs)" != "true" ]]; then
            echo "IPFS does not appear to have become ready! Try 'docker logs ipfs' to diagnose." >&2
            exit 1
        fi
        sleep 1 # Don't hammer the gateway while the daemon starts
    done
    echo "IPFS service started" >&2
fi
exec docker exec -it -w "/host/$PWD" ipfs ipfs "${@}"
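With that in place, `lipfs` can be used like a regular ipfs CLI on the host, for example:

lipfs id                      # runs `ipfs id` inside the container
lipfs add ./some-file         # relative paths work because / is mounted at /host
lipfs pin ls --type=recursive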
The entrypoint of this custom Docker container is as follows; it just checks for the private swarm key and sets the bootstrap nodes:
#!/bin/sh
# Configure an existing IPFS repo
if [ ! -r /swarm.key ]; then
    echo "ERROR: /swarm.key not present or not readable! Did you remember to mount it from /etc/ipfs/swarm.key?" >&2
    exit 1
fi
cp -v /swarm.key /data/ipfs/swarm.key
export GOMAXPROCS=$(nproc)
# `ipfs init` fails if the repo already exists, so only initialise on first start
if [ ! -f "$IPFS_PATH/config" ]; then
    ipfs init
fi
ipfs config profile apply local-discovery
ipfs bootstrap rm all # Do not use any public nodes in this swarm
ipfs bootstrap add /ip4/10.58.203.80/tcp/4001/p2p/12D3KooWMq2yauGRfCU5AUKsiajvtrhEt2TnFfiYbfpgTX9hDk2A # nc-b9-8-2.iaas.eu02.arm.com, a bare metal host
ipfs bootstrap add /ip4/10.7.76.84/tcp/4001/p2p/12D3KooWGiXNzrq22WThzCKqv3xwv5f46xQL9Pn7fTp6wavKfbjV # aarch64.noir.arm.com, a bare metal host
ipfs config Routing.Type dht # Only use this swarm for finding peers and data, not public services
ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/7070 # Avoid the default :8080, which is used by other services on dbfOS
ipfs config --json Gateway.PublicGateways '{"localhost":{"Paths":["/ipfs", "/ipns"], "UseSubdomains":false}}' # Prevent the gateway service (:7070) from redirecting users to bafyhashstuff.localhost:7070/
ipfs config Datastore.StorageMax 100GB
# Defaults for `ipfs add`
ipfs config Import.HashFunction blake3 # Faster than SHA256 on RPi4B
ipfs config Import.UnixFSChunker buzhash # Split on certain hash values, similar in function and intent to `gzip --rsyncable`
exec ipfs daemon --enable-gc # exec so the daemon replaces the shell and receives signals directly
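Once a node is up, it's easy to sanity-check that the entrypoint's configuration took effect:

lipfs config Import.HashFunction # should print "blake3"
lipfs bootstrap list             # should list only the two bare metal hosts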
The custom image's Dockerfile is also very simple, and most importantly sets `LIBP2P_FORCE_PNET` in the environment, to ensure our private/local swarm does not connect to the public internet:
# Note: ipfs/kubo:release doesn't seem to work on Raspbian (32-bit), it always has this error on launch:
# /sbin/tini: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
# So we will make our own
FROM alpine:3.20 AS downloader
ARG VERSION=v0.32.1
# Build arg to get OS + architecture for multi-arch building (defined by buildx)
ARG TARGETOS
ARG TARGETARCH
ADD https://github.com/ipfs/kubo/releases/download/${VERSION}/kubo_${VERSION}_${TARGETOS}-${TARGETARCH}.tar.gz /tmp/ipfs.tar.gz
RUN cd /tmp && tar xvf ipfs.tar.gz
FROM alpine:3.20
COPY --from=downloader /tmp/kubo/ipfs /usr/local/bin/ipfs
# Limit traffic to private swarms, i.e. those for which a shared secret
# (`swarm.key`) is defined. If the shared secret is not available, `ipfs daemon`
# will fail to launch with the following error:
#
# Error: constructing the node (see log for full detail): privnet: private network was not configured but is enforced by the environment
ENV LIBP2P_FORCE_PNET=1
# Set our data directory
ENV IPFS_PATH=/data/ipfs
VOLUME /data/ipfs
# Install our configuration script, which runs every time the container
# starts: it sets up the repo on first run, applies our configuration, and
# then launches `ipfs daemon`
COPY entrypoint.sh /entrypoint.sh
RUN chmod 0755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
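I build the image with buildx so that TARGETOS/TARGETARCH are populated (the tag just has to match what the wrapper expects, i.e. `ipfs`):

# Build for the current host architecture and load it into the local daemon
docker buildx build --tag ipfs --load .
# Cross-building for the Pis and pushing to a registry would look like:
# docker buildx build --platform linux/arm/v7,linux/arm64 --tag registry.example.com/ipfs --push .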