Falling behind DHT Reprovides on a Pi4

I’m attempting to host a dataset of about 8,000 4K images, roughly 50 GB in total. I was hoping to do this on a Raspberry Pi 4, but I’m wondering whether the hardware just can’t handle it or whether I could improve my software configuration.

When I run sudo systemctl status ipfs:

● ipfs.service - IPFS daemon
     Loaded: loaded (/etc/systemd/system/ipfs.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2025-02-14 19:21:58 EST; 1 week 2 days ago
   Main PID: 34337 (ipfs)
      Tasks: 14 (limit: 9244)
     Memory: 1.3G
        CPU: 2d 8h 48min 14.030s
     CGroup: /system.slice/ipfs.service
             └─34337 /home/joncrall/.local/bin/ipfs daemon

Feb 14 19:21:59 jojo ipfs[34337]: Daemon is ready
Feb 15 21:40:26 jojo ipfs[34337]: 2025-02-15T21:40:26.459-0500        ERROR        core:constructor        node/provider.go:96
Feb 15 21:40:26 jojo ipfs[34337]: 🔔🔔🔔 YOU ARE FALLING BEHIND DHT REPROVIDES! 🔔🔔🔔
Feb 15 21:40:26 jojo ipfs[34337]: ⚠️ Your system is struggling to keep up with DHT reprovides!
Feb 15 21:40:26 jojo ipfs[34337]: This means your content could partially or completely inaccessible on the network.
Feb 15 21:40:26 jojo ipfs[34337]: We observed that you recently provided 1001 keys at an average rate of 15.509795478s per key.
Feb 15 21:40:26 jojo ipfs[34337]: 💾 Your total CID count is ~266328 which would total at 1147h24m52.810064784s reprovide process.
Feb 15 21:40:26 jojo ipfs[34337]: ⏰ The total provide time needs to stay under your reprovide interval (22h0m0s) to prevent falling behind!
Feb 15 21:40:26 jojo ipfs[34337]: 💡 Consider enabling the Accelerated DHT to enhance your reprovide throughput. See:
Feb 15 21:40:26 jojo ipfs[34337]: https://github.com/ipfs/kubo/blob/master/docs/config.md#routingaccelerateddhtclient

My config via ipfs config show is:

{
  "API": {
    "HTTPHeaders": {}
  },
  "Addresses": {
    "API": "/ip4/127.0.0.1/tcp/5001",
    "Announce": null,
    "AppendAnnounce": null,
    "Gateway": "/ip4/127.0.0.1/tcp/8080",
    "NoAnnounce": null,
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip6/::/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/webrtc-direct",
      "/ip4/0.0.0.0/udp/4001/quic-v1",
      "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
      "/ip6/::/udp/4001/webrtc-direct",
      "/ip6/::/udp/4001/quic-v1",
      "/ip6/::/udp/4001/quic-v1/webtransport"
    ]
  },
  "AutoNAT": {
    "ServiceMode": "disabled"
  },
  "Bootstrap": [
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip4/104.131.131.82/udp/4001/quic-v1/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ"
  ],
  "DNS": {
    "Resolvers": {}
  },
  "Datastore": {
    "BloomFilterSize": 0,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "mounts": [
        {
          "child": {
            "path": "blocks",
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
            "sync": true,
            "type": "flatfs"
          },
          "mountpoint": "/blocks",
          "prefix": "flatfs.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    },
    "StorageGCWatermark": 90,
    "StorageMax": "100GB"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": true,
      "Interval": 10
    }
  },
  "Experimental": {
    "FilestoreEnabled": false,
    "GraphsyncEnabled": false,
    "Libp2pStreamMounting": false,
    "P2pHttpProxy": false,
    "StrategicProviding": false,
    "UrlstoreEnabled": false
  },
  "Gateway": {
    "APICommands": [],
    "HTTPHeaders": {},
    "NoDNSLink": false,
    "NoFetch": false,
    "PathPrefixes": [],
    "PublicGateways": null,
    "RootRedirect": "",
    "Writable": false
  },
  "Identity": {
    "PeerID": "12D3KooWBwURxUQaBe9s8G4bVMG6hNddptp5jmK1s2ri4r5QTSb4"
  },
  "Internal": {},
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128
  },
  "Migration": {
    "DownloadSources": [],
    "Keep": ""
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Peering": {
    "Peers": null
  },
  "Pinning": {
    "RemoteServices": {
      "web3.storage.erotemic": {
        "API": {
          "Endpoint": "https://api.web3.storage"
        },
        "Policies": {
          "MFS": {
            "Enable": false,
            "PinName": "",
            "RepinInterval": ""
          }
        }
      }
    }
  },
  "Plugins": {
    "Plugins": null
  },
  "Provider": {
    "Strategy": ""
  },
  "Pubsub": {
    "DisableSigning": false,
    "Router": ""
  },
  "Reprovider": {
    "Interval": "0",
    "Strategy": "all"
  },
  "Routing": {
    "AcceleratedDHTClient": false,
    "Type": "dhtclient"
  },
  "Swarm": {
    "AddrFilters": null,
    "ConnMgr": {
      "GracePeriod": "1m0s",
      "HighWater": 40,
      "LowWater": 20,
      "Type": "basic"
    },
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": false,
    "RelayClient": {},
    "RelayService": {},
    "ResourceMgr": {},
    "Transports": {
      "Multiplexers": {},
      "Network": {
        "QUIC": false
      },
      "Security": {}
    }
  }
}

EDIT: As I’m writing this, I checked AcceleratedDHTClient, and it is off… so I’m turning that on and restarting. I’ll close this if that fixes the issue, but I have a memory of AcceleratedDHTClient not working for me before (maybe RAM related?). If that’s true, then I’m wondering if anything else can be done.
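
For reference, this is what I ran (AcceleratedDHTClient is a boolean, so it needs the --json flag; the unit name matches the service shown above):

ipfs config --json Routing.AcceleratedDHTClient true
sudo systemctl restart ipfs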

The Accelerated DHT is very resource-hungry, so I’m not surprised you are having problems with it on a Pi.
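
If you do give it another try, it may be worth watching two things while it warms up: ipfs stats provide (in recent Kubo releases) should report the reprovider’s total provides and average provide duration, and the Memory line in systemctl status ipfs will tell you whether the accelerated client is what’s pushing the Pi out of RAM:

ipfs stats provide
sudo systemctl status ipfs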

The other thing to consider is changing the Reprovider.Strategy to pinned or roots. See the config docs to understand the implications of this.
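
For example (illustrative; whether pinned or roots is the right choice depends on how people fetch your data, since roots only announces the root CIDs of your pins, which is the cheapest option but means individual blocks can’t be discovered on the DHT by their own CIDs):

ipfs config Reprovider.Strategy pinned
sudo systemctl restart ipfs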

I should say that falling behind on reprovides is not uncommon, especially on small nodes with a lot of data. We’re working to improve that. See the following issue for more information: