I’m trying to set up an IPFS Cluster using Docker Compose, and I want to be able to query the REST API as described in the documentation (REST API - Pinset orchestration for IPFS).
Here is the docker-compose.yml that I’m using:
version: '3.4'

# This is an example docker-compose file to quickly test an IPFS Cluster
# with multiple peers on a contained environment.
#
# It runs 3 cluster peers (cluster0, cluster1...) attached to kubo daemons
# (ipfs0, ipfs1...) using the CRDT consensus component. Cluster peers
# autodiscover themselves using mDNS on the docker internal network.
#
# To interact with the cluster use "ipfs-cluster-ctl" (the cluster0 API port is
# exposed to the localhost. You can also "docker exec -ti cluster0 sh" and run
# it from the container. "ipfs-cluster-ctl peers ls" should show all 3 peers a few
# seconds after start.
#
# For persistence, a "compose" folder is created and used to store configurations
# and states. This can be used to edit configurations in subsequent runs. It looks
# as follows:
#
# compose/
# |-- cluster0
# |-- cluster1
# |-- ...
# |-- ipfs0
# |-- ipfs1
# |-- ...
#
# During the first start, default configurations are created for all peers.

services:

##################################################################################
## Cluster PEER 0 ################################################################
##################################################################################

  ipfs0:
    container_name: ipfs0
    image: ipfs/kubo:release
#    ports:
#      - "4001:4001" # ipfs swarm - expose if needed/wanted
#      - "5001:5001" # ipfs api - expose if needed/wanted
#      - "8080:8080" # ipfs gateway - expose if needed/wanted
    volumes:
      - ./compose/ipfs0:/data/ipfs

  cluster0:
    container_name: cluster0
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs0
    environment:
      CLUSTER_PEERNAME: cluster0
      CLUSTER_SECRET: ${CLUSTER_SECRET} # From shell variable if set
      CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs0/tcp/5001
      CLUSTER_CRDT_TRUSTEDPEERS: '*' # Trust all peers in Cluster
      CLUSTER_RESTAPI_HTTPLISTENMULTIADDRESS: /ip4/0.0.0.0/tcp/9094 # Expose API
      CLUSTER_MONITORPINGINTERVAL: 2s # Speed up peer discovery
    ports:
      # Open API port (allows ipfs-cluster-ctl usage on host)
      - "127.0.0.1:9094:9094"
      # The cluster swarm port would need to be exposed if this container
      # was to connect to cluster peers on other hosts.
      # But this is just a testing cluster.
      # - "9095:9095" # Cluster IPFS Proxy endpoint
      # - "9096:9096" # Cluster swarm endpoint
    volumes:
      - ./compose/cluster0:/data/ipfs-cluster

##################################################################################
## Cluster PEER 1 ################################################################
##################################################################################

  # See Cluster PEER 0 for comments (all removed here and below)
  ipfs1:
    container_name: ipfs1
    image: ipfs/kubo:release
    volumes:
      - ./compose/ipfs1:/data/ipfs

  cluster1:
    container_name: cluster1
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs1
    environment:
      CLUSTER_PEERNAME: cluster1
      CLUSTER_SECRET: ${CLUSTER_SECRET}
      CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs1/tcp/5001
      CLUSTER_CRDT_TRUSTEDPEERS: '*'
      CLUSTER_MONITORPINGINTERVAL: 2s # Speed up peer discovery
    volumes:
      - ./compose/cluster1:/data/ipfs-cluster

##################################################################################
## Cluster PEER 2 ################################################################
##################################################################################

  # See Cluster PEER 0 for comments (all removed here and below)
  ipfs2:
    container_name: ipfs2
    image: ipfs/kubo:release
    volumes:
      - ./compose/ipfs2:/data/ipfs

  cluster2:
    container_name: cluster2
    image: ipfs/ipfs-cluster:latest
    depends_on:
      - ipfs2
    environment:
      CLUSTER_PEERNAME: cluster2
      CLUSTER_SECRET: ${CLUSTER_SECRET}
      CLUSTER_IPFSHTTP_NODEMULTIADDRESS: /dns4/ipfs2/tcp/5001
      CLUSTER_CRDT_TRUSTEDPEERS: '*'
      CLUSTER_MONITORPINGINTERVAL: 2s # Speed up peer discovery
    volumes:
      - ./compose/cluster2:/data/ipfs-cluster

# For adding more peers, copy PEER 2 and rename things to ipfs3, cluster3.
# Keep bootstrapping to cluster0.
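One note on the ${CLUSTER_SECRET} reference above: the compose file substitutes it from my shell environment. For reference, this is how I generate and export it before bringing the stack up (assuming openssl is available; it just needs to be a 32-byte hex string):

```shell
# Generate a 32-byte (64 hex character) cluster secret and export it
# so that docker compose can substitute ${CLUSTER_SECRET}.
export CLUSTER_SECRET=$(openssl rand -hex 32)
echo "$CLUSTER_SECRET"
```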
I run docker compose up and all of the containers spin up successfully: docker ps shows ipfs0, ipfs1, ipfs2, cluster0, cluster1, and cluster2 all running/healthy.
The problem I’m seeing is that when I visit http://localhost:9094 in Firefox, a redirection error is displayed:
The page isn’t redirecting properly
Firefox has detected that the server is redirecting the request for this address in a way that will never complete.
This problem can sometimes be caused by disabling or refusing to accept cookies.
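For what it’s worth, the same behavior shows up with curl, which takes cookies out of the equation entirely; only the URL and the hop cap below are specific to my setup:

```shell
# Inspect the first response only (no redirect following):
# status line plus headers, body discarded.
curl -sS -o /dev/null -D - http://localhost:9094/ || true

# Follow redirects with a cap. Exit code 47 means "maximum redirects
# exceeded", i.e. a genuine redirect loop rather than a cookie problem.
code=0
curl -sS -o /dev/null -L --max-redirs 10 http://localhost:9094/ || code=$?
echo "curl exit code: $code"
```

If the loop is real, the second command should report exit code 47 (CURLE_TOO_MANY_REDIRECTS) rather than anything cookie-related.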
Additionally, I see errors in the cluster0 docker logs: a flood of 307 redirects.
docker logs cluster0
Changing user to ipfs
/usr/local/bin/entrypoint.sh: line 13: su-exec: not found
ipfs-cluster-service version 1.0.6+gitd4484ec3f1fb559675555debf1bb9ac3220ad1af
This container only runs ipfs-cluster-service. ipfs needs to be run separately!
Initializing default configuration...
2023-11-25T15:25:30.644Z INFO config config/config.go:482 Saving configuration
configuration written to /data/ipfs-cluster/service.json.
2023-11-25T15:25:30.646Z INFO config config/identity.go:73 Saving identity
new identity written to /data/ipfs-cluster/identity.json
new empty peerstore written to /data/ipfs-cluster/peerstore.
2023-11-25T15:25:31.657Z INFO service ipfs-cluster-service/daemon.go:50 Initializing. For verbose output run with "-l debug". Please wait...
2023-11-25T15:25:31.700Z INFO service ipfs-cluster-service/daemon.go:276 Datastore backend: pebble
2023-11-25T15:25:31.715Z INFO cluster ipfs-cluster/cluster.go:137 IPFS Cluster v1.0.6+gitd4484ec3f1fb559675555debf1bb9ac3220ad1af listening on:
/ip4/127.0.0.1/tcp/9096/p2p/12D3KooWS9GFRMZ8FLveBFxNt19j9Urug2tDiyBnRUfPRGaPATGb
/ip4/192.168.144.7/tcp/9096/p2p/12D3KooWS9GFRMZ8FLveBFxNt19j9Urug2tDiyBnRUfPRGaPATGb
2023-11-25T15:25:31.717Z INFO restapi common/api.go:454 RESTAPI (HTTP): /ip4/0.0.0.0/tcp/9094
2023-11-25T15:25:31.717Z INFO pinsvcapi common/api.go:454 PINSVCAPI (HTTP): /ip4/127.0.0.1/tcp/9097
2023-11-25T15:25:31.717Z INFO ipfsproxy ipfsproxy/ipfsproxy.go:331 IPFS Proxy: /ip4/127.0.0.1/tcp/9095 -> /ip4/127.0.0.1/tcp/5001
2023-11-25T15:25:31.720Z INFO crdt go-ds-crdt@v0.5.1/set.go:122 Tombstones have bloomed: 0 tombs. Took: 2.678826ms
2023-11-25T15:25:31.721Z INFO crdt go-ds-crdt@v0.5.1/crdt.go:296 crdt Datastore created. Number of heads: 0. Current max-height: 0. Dirty: false
2023-11-25T15:25:31.721Z INFO crdt go-ds-crdt@v0.5.1/crdt.go:505 store is marked clean. No need to repair
2023-11-25T15:25:31.722Z INFO crdt crdt/consensus.go:320 'trust all' mode enabled. Any peer in the cluster can modify the pinset.
2023-11-25T15:25:31.723Z INFO cluster ipfs-cluster/cluster.go:735 Cluster Peers (without including ourselves):
2023-11-25T15:25:31.723Z INFO cluster ipfs-cluster/cluster.go:737 - No other peers
2023-11-25T15:25:31.723Z INFO cluster ipfs-cluster/cluster.go:747 Waiting for IPFS to be ready...
2023-11-25T15:25:31.724Z INFO cluster ipfs-cluster/cluster.go:756 IPFS is ready. Peer ID: 12D3KooWL2xU2Ufki3YbgLEV7pDCuTGCctTnADtyrGeBsnLDjurj
2023-11-25T15:25:31.724Z INFO cluster ipfs-cluster/cluster.go:764 ** IPFS Cluster is READY **
2023-11-25T15:26:12.312Z ERROR restapi common/api.go:680 sending error response: 404: not found
2023-11-25T15:26:12.312Z INFO restapilog common/api.go:111 192.168.144.1 - - [25/Nov/2023:15:26:12 +0000] "GET /api/v0/health HTTP/1.1" 404 35
2023-11-25T15:26:15.002Z INFO restapilog common/api.go:111 192.168.144.1 - - [25/Nov/2023:15:26:15 +0000] "GET / HTTP/1.1" 307 37
2023-11-25T15:26:15.017Z INFO restapilog common/api.go:111 192.168.144.1 - - [25/Nov/2023:15:26:15 +0000] "GET / HTTP/1.1" 307 37
2023-11-25T15:26:15.021Z INFO restapilog common/api.go:111 192.168.144.1 - - [25/Nov/2023:15:26:15 +0000] "GET / HTTP/1.1" 307 37
2023-11-25T15:26:15.025Z INFO restapilog common/api.go:111 192.168.144.1 - - [25/Nov/2023:15:26:15 +0000] "GET / HTTP/1.1" 307 37
[… the same 307 "GET / HTTP/1.1" line repeats another 17 times …]
I’m pretty stumped on this one. I’m expecting an HTTP 200 response, or maybe a 404 since I don’t expect anything to be served at the root (/) path; I don’t know why I’m seeing 307 redirects.
When I try http://localhost:9094/api/v0/health, I get a JSON response indicating an HTTP 404 error:
{"code":404,"message":"not found"}
Again, this is not expected. At this endpoint, I’m expecting to get an HTTP 204 response.
I’m hoping to get some help on this. My goal is a cluster I can use as a private pinning service, but I’m blocked by the lack of a functioning REST API. I’ve tried this setup on my local machine and on a Vultr VPS, and I see the same behavior on both.