Private cluster with selective replication

I am looking at setting up a private IPFS network across regions (US-UK) with a persistent store. Can I validate my understanding of what I have read?

  1. Create our own custom pinning service as described in [1]

  2. Monitor the storage space of the underlying persistent store with ipfs-cluster metrics, such as the disk informer's freespace/reposize metrics (disk.metric_type), exposed via Prometheus [2], and add new IPFS nodes by joining them to the ipfs-cluster [3]

  3. Create our IPFS nodes with a swarm.key for our private network (a sketch follows this list)

  4. Create one global cluster, with IPFS nodes from both the US and the UK, and use --allocations [4].
    Question: how do I configure a tag:region for the ipfs-cluster's allocate_by [5], so that we can control which pins are replicated to each region? I don't see any tag configuration in the IPFS node's config [6].
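
For step 3, this is roughly what I have in mind (a sketch assuming the standard libp2p pre-shared-key format and the default repo path; paths here are placeholders):

  # Generate a pre-shared key for the private network (requires openssl)
  echo "/key/swarm/psk/1.0.0/" > swarm.key
  echo "/base16/" >> swarm.key
  openssl rand -hex 32 >> swarm.key

  # Copy swarm.key into every node's IPFS repo before starting the daemon
  cp swarm.key ~/.ipfs/swarm.key   # assumes the default IPFS_PATH
  export LIBP2P_FORCE_PNET=1       # makes the daemon refuse to start without the key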

Thanks in advance for all the guidance!

References:
1: Work with pinning services | IPFS Docs
2: How to expose all Prometheus metrics?
3: Joining a collaborative cluster - Pinset orchestration for IPFS
4: Is there any provision to choose ipfs-cluster peer for content replication? - #2 by hector
5: Adding and pinning - Pinset orchestration for IPFS
6: kubo/config.md at master · ipfs/kubo · GitHub

If you configure cluster’s service.json like:

  "informer": {
    "disk": {
      "metric_ttl": "5m",
      "metric_type": "freespace"
    },
    "tags": {
      "metric_ttl": "5m",
      "tags": {"dc": "<SET_TO_NODE'S_DATACENTER>"}
    },
    "pinqueue": {
      "metric_ttl": "5m",
      "weight_bucket_size": 100000
    }
  },
  "allocator": {
    "balanced": {
      "allocate_by": ["tag:dc", "pinqueue", "freespace"]
    }
  },

Then pins will be distributed in a balanced fashion among different DCs.

The allocation process is explained here: Adding and pinning - Pinset orchestration for IPFS
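
For example, a quick sketch under the config above (the CID is a placeholder; --replication sets both the min and max factor):

  # With "tag:dc" first in allocate_by, the balanced allocator spreads the two
  # copies across datacenters, then balances by pin queue and free space within each
  ipfs-cluster-ctl pin add --replication 2 <cid>

  # Check which peers received the allocation
  ipfs-cluster-ctl status <cid>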

This is what I have

Can I clarify a few points?

  1. Do I still need port 4001 for the ipfs swarm to connect with other peers? In my docker compose I didn't expose port 4001, yet the pins were replicated, so I assumed ipfs-cluster handles the block transfer between the ipfs nodes?
  2. From How to upload the file in IPFS-cluster when I know only CID - #2 by hector, can I check whether I should always use ipfs-cluster-ctl add to pin the object, and whether ipfs uses a local pin, as my diagram shows?
 % curl -v -XPOST http://fs-node02:9095/api/v0/add -F file=@testfile.txt
> POST /api/v0/add HTTP/1.1
> User-Agent: curl/7.79.1
> Accept: */*
> Content-Length: 210
> Content-Type: multipart/form-data; boundary=------------------------81ac6f2122b9c4e5
> 
* We are completely uploaded and fine
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Access-Control-Expose-Headers: X-Stream-Output, X-Chunked-Output, X-Content-Length
< Cache-Control: no-cache
< Connection: close
< Content-Type: application/json
< Server: ipfs-cluster/ipfsproxy/1.0.2+gitc4d78d52f8a37ad28fdf12c3c6af8009d05c4a6e
< Trailer: X-Stream-Error
< Vary: Origin
< Vary: Access-Control-Request-Method
< Vary: Access-Control-Request-Headers
< X-Chunked-Output: 1
< Date: Sat, 20 Aug 2022 14:35:08 GMT
< Transfer-Encoding: chunked
< 
{"Name":"testfile.txt","Hash":"QmYYG9kKXdbbQWyfQwJsT1r2BDiFbGNfAXCRng8CMY19Gi","Size":"28"}
* Closing connection 0
/ # ipfs pin ls QmYYG9kKXdbbQWyfQwJsT1r2BDiFbGNfAXCRng8CMY19Gi
QmYYG9kKXdbbQWyfQwJsT1r2BDiFbGNfAXCRng8CMY19Gi recursive
  3. How do I access the cluster health metrics via API?
/ # ipfs-cluster-ctl pin ls
QmYYG9kKXdbbQWyfQwJsT1r2BDiFbGNfAXCRng8CMY19Gi |  | PIN | Repl. Factor: -1 | Allocations: [everywhere] | Recursive | Metadata: no | Exp: ∞ | Added: 2022-08-20 14:35:08
/ # ipfs-cluster-ctl health metrics freespace
12D3KooWHsRnSKFyYeB211wRxAGFGqBGr4WJvjgvJSGkYnKsVfHh | freespace: 10 GB | Expires in: 20 seconds from now
12D3KooWKT5gp8c6o3uu1ZLSn64QsGhFE5g67xYPcCdSoykoJwgb | freespace: 10 GB | Expires in: 17 seconds from now
  4. Is there any webui for my users to upload via 9005, or does it require some work with the js-cluster-client?
  5. If I need to change the ipfs chunker, and assuming ipfs-cluster's :9095 is a gateway to ipfs's :5001, should I create our own implementation of ipfs for a different chunking mechanism?

#3 - found the answer

In general, you should make sure your IPFS peers are reachable. In your case, since it is a private network, I guess it is just fine if they are reachable from the other peers in the docker network and not from the outside world.
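
A quick way to check this from inside the docker network (a sketch; the container name is a guess based on your hostnames):

  # Confirm the node sees the other private-network peers over its swarm port
  docker exec fs-node01 ipfs swarm peers

  # And list the addresses it is actually listening on
  docker exec fs-node01 ipfs swarm addrs local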

I don't fully understand the question. ipfs-cluster-ctl add uses the cluster_node:9094/add endpoint (the REST API endpoint). The :9095/api/v0/add one is the ipfsproxy endpoint. They both work similarly, but I'd recommend using the REST API endpoint rather than the proxy one unless you require full compatibility with ipfs add (technically you could do something like ipfs --api /ip4/clusterip/tcp/9095 add too, using ipfs instead of ipfs-cluster-ctl).
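
Concretely, a sketch of the options (<cluster-ip> is a placeholder; ports assume the defaults):

  # 1) Cluster REST API, which is what ipfs-cluster-ctl add wraps
  ipfs-cluster-ctl --host /ip4/<cluster-ip>/tcp/9094 add testfile.txt

  # 2) The ipfsproxy endpoint, which mimics the ipfs add RPC
  curl -X POST "http://<cluster-ip>:9095/api/v0/add" -F file=@testfile.txt

  # 3) Point the ipfs CLI itself at the proxy instead of a local daemon
  ipfs --api /ip4/<cluster-ip>/tcp/9095 add testfile.txt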

It would require some work, unless you point the regular IPFS webui to the :9095 port instead of ipfs's own API port, and use that.

Request query options are the same as for ipfs: /add?chunker=<your_chunker>..., at least those that can be interpreted by the cluster peer (most of the ones listed in Kubo RPC API | IPFS Docs).
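
For example, a sketch against your proxy (hostname and file are from your earlier curl; the chunker value is just an illustration):

  # Add through the cluster proxy with a custom chunker; the option is
  # interpreted by the cluster peer when it chunks and adds the content
  curl -X POST "http://fs-node02:9095/api/v0/add?chunker=size-1048576" -F file=@testfile.txt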

Thanks @hector as always!

I have decided to implement a simple webui that only allows pin add and pin download so that I don’t have to expose other advanced features to the internal users.

For pin download, I noticed ipfsproxy.go does not hijackSubrouter for /api/v0/get, so I will be calling ipfs on :5001 directly. May I ask about the design decision, though: is the idea to keep the cluster within the replication function and leave the data plane to ipfs directly?

Hmm, what would you have /api/v0/get do if it were hijacked? Requests to it on the proxy endpoint are simply passed through to the underlying node, so it is similar to talking to it directly.

The design decision is to hijack only operations that can have a cluster meaning (e.g. adding a pin). I guess the cluster could hijack /get and proxy it to a location where the content is stored or something, but that seemed a bit out of scope for the thin layer that the proxy is.
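
So for downloads, a minimal sketch is to talk to the IPFS node directly (ports are the Kubo defaults; the hostname and CID are from your earlier example):

  # Fetch the content from the node's RPC API (what the proxy would just pass through anyway)
  curl -X POST "http://fs-node02:5001/api/v0/cat?arg=QmYYG9kKXdbbQWyfQwJsT1r2BDiFbGNfAXCRng8CMY19Gi" -o testfile.txt

  # Or via the node's HTTP gateway, if enabled
  curl "http://fs-node02:8080/ipfs/QmYYG9kKXdbbQWyfQwJsT1r2BDiFbGNfAXCRng8CMY19Gi" -o testfile.txt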