Simplest way to pool blocks and datastore among disks?

Sure, zpool or GlusterFS could do the job, but I'm trying to avoid the overhead of additional I/O layers.

It looks like this should be possible by adding entries to the ipfs config Datastore.Spec.mounts, but I'm having trouble with the syntax. Should I copy the child block and change the path, or copy the whole first mount entry and change the mountpoint?

[
  {
    "child": {
      "path": "blocks",
      "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
      "sync": true,
      "type": "flatfs"
    },
    "mountpoint": "/blocks",
    "prefix": "flatfs.datastore",
    "type": "measure"
  },
  {
    "child": {
      "compression": "none",
      "path": "datastore",
      "type": "levelds"
    },
    "mountpoint": "/",
    "prefix": "leveldb.datastore",
    "type": "measure"
  }
]
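To sketch the second option (assumptions: the extra disk is mounted at /mnt/disk2, and the mountpoint "/blocks2" and prefix "flatfs2.datastore" are hypothetical names, not anything IPFS requires), copying the whole first mount entry and changing its mountpoint, path, and prefix would add something like this to the mounts array:

```json
{
  "child": {
    "path": "/mnt/disk2/blocks",
    "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
    "sync": true,
    "type": "flatfs"
  },
  "mountpoint": "/blocks2",
  "prefix": "flatfs2.datastore",
  "type": "measure"
}
```

Two caveats: each mountpoint is a distinct key prefix, so a second mount stores keys under /blocks2 rather than pooling the existing /blocks namespace across disks; and the repo's datastore_spec file would also need to match the edited Datastore.Spec, since the daemon checks the two for consistency on startup.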

Ideally, I could let IPFS fill entire dedicated partitions, possibly with some sort of automated load balancing across mounts.

By the way, EDITOR=nano ipfs config edit hangs on exit even without saving changes…

I'm handling all this through LVM in my development environment. I believe there are some issues with IPFS and multiple datastores; however, I'm not 100% certain of that, which is why I'm using LVM.
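For reference, pooling two disks into a single logical volume for the IPFS repo with LVM might look like the following sketch (the device names /dev/sdX and /dev/sdY, the volume group and volume names, and the mount path are all placeholders):

```shell
# Register the dedicated disks as LVM physical volumes
pvcreate /dev/sdX /dev/sdY
# Pool them into one volume group
vgcreate ipfs_vg /dev/sdX /dev/sdY
# Allocate all pooled space to a single logical volume
lvcreate -l 100%FREE -n ipfs_lv ipfs_vg
# Format the volume and mount it where the IPFS repo lives
mkfs.ext4 /dev/ipfs_vg/ipfs_lv
mount /dev/ipfs_vg/ipfs_lv /var/lib/ipfs
```

This keeps IPFS itself unaware of the underlying disks: the repo sees one filesystem, and capacity can later be grown with vgextend/lvextend.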

Thanks for the suggestion! I'm currently evaluating bcachefs, a modern general-purpose filesystem that could also solve this nicely.

Interesting, I'll take a look at that as well. Thanks.

At the moment, there are no datastore adapters for sharding (just mounting datastore prefixes).