How to check available/remaining space in IPFS?

Hi all,
Is it possible to check the available/remaining space in IPFS?
Knowing this, we can tell whether we can still add files to IPFS.


See ipfs repo stat --help.
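For reference, a minimal sketch of reading those numbers programmatically (this assumes the ipfs binary is on your PATH; the sample output below is illustrative, not from a real repo):

```python
import subprocess

def parse_repo_stat(text: str) -> dict:
    """Parse the key/value lines printed by `ipfs repo stat`."""
    stats = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, value = line.split(":", 1)  # split once; RepoPath may contain ':'
        stats[key.strip()] = value.strip()
    return stats

def repo_stat() -> dict:
    """Run `ipfs repo stat` and return its fields (needs a local ipfs install)."""
    out = subprocess.run(["ipfs", "repo", "stat"],
                         capture_output=True, text=True, check=True).stdout
    return parse_repo_stat(out)

# Illustrative sample of the fields `ipfs repo stat` prints:
sample = """NumObjects: 42
RepoSize: 123456789
StorageMax: 10000000000
RepoPath: /home/user/.ipfs
Version: fs-repo@12"""
stats = parse_repo_stat(sample)
remaining = int(stats["StorageMax"]) - int(stats["RepoSize"])
```

Comparing RepoSize against StorageMax tells you how much of the configured storage budget is left.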

Thanks for your reply.

If I have set up 3 IPFS nodes in 3 different pods, can I use this: curl -X POST “”? Which node will the disk space information come from?

The node at that address. I guess your 3 nodes have different IPs.

@hector yes… the client will access the IPFS gateway REST API. In that case, how can I ensure the repo stat covers all 3 nodes?

You need to make 3 requests and add the usage amounts.
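A sketch of that, assuming each pod exposes the HTTP RPC API on port 5001 (the pod hostnames here are placeholders; /api/v0/repo/stat is the RPC endpoint behind ipfs repo stat):

```python
import json
import urllib.request

def fetch_repo_stat(host: str) -> dict:
    """POST to one node's RPC API and return its parsed repo stat."""
    req = urllib.request.Request(f"http://{host}:5001/api/v0/repo/stat",
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def total_usage(stats: list) -> int:
    """Add up RepoSize across the per-node stat responses."""
    return sum(int(s["RepoSize"]) for s in stats)

# hosts = ["ipfs-0", "ipfs-1", "ipfs-2"]   # placeholder pod names
# print(total_usage([fetch_repo_stat(h) for h in hosts]))
```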

3 requests through 1 IPFS gateway REST API source?

Is there a Go API for the command ipfs-cluster-ctl health metrics freespace?

Yes, if you use ipfs-cluster you can query something like that.

See the ipfs-cluster-ctl tool for how to discover the actual endpoints and response formats.
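As a rough sketch, the cluster REST API (port 9094 by default) serves the same metrics that ipfs-cluster-ctl health metrics freespace prints; the exact response shape used below is an assumption, so check your own cluster's output before relying on it:

```python
import json
import urllib.request

def cluster_freespace(host: str = "localhost") -> list:
    """Fetch freespace metrics from the cluster REST API (one entry per peer)."""
    url = f"http://{host}:9094/monitor/metrics/freespace"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def total_freespace(metrics: list) -> int:
    """Sum the metric values (strings in the JSON) across peers."""
    return sum(int(m["value"]) for m in metrics)

# Illustrative shape only, mirroring ipfs-cluster-ctl's per-peer output:
sample = [{"name": "freespace", "peer": "12D3xxxx0", "value": "1000"},
          {"name": "freespace", "peer": "12D3xxxx1", "value": "2000"}]
```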

Will this return the free space from all peers? By the way, is there any monitoring configuration that lets us set a percentage of storage usage, like 60% or 80%, for when the space is used up…?

Yes, that is the total space in the cluster. The available space comes from the StorageMax setting in ipfs. It is not possible to set a dynamic %.
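ipfs itself won't alert at 60% or 80%, but you can derive those thresholds yourself from StorageMax and the current repo size; a minimal sketch (the function names and bands are mine, matching the percentages asked about above):

```python
def usage_ratio(repo_size: int, storage_max: int) -> float:
    """Fraction of the configured StorageMax currently used."""
    return repo_size / storage_max

def alert_level(repo_size: int, storage_max: int) -> str:
    """Map usage onto the 60% / 80% bands discussed above."""
    ratio = usage_ratio(repo_size, storage_max)
    if ratio >= 0.8:
        return "critical"
    if ratio >= 0.6:
        return "warning"
    return "ok"
```

You could run this periodically against the repo stat of each node and feed the result into whatever monitoring you already have.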

is there any dashboard available for this?

I tried ipfs-cluster-ctl health metrics freespace in one of my k8s pods and I get freespace 0. What is the possible reason the free space is 0 B? It should have some available space; I'm using a persistent volume.
12D3xxxx0 | freespace: 0 B | Expires in: 27 seconds from now
12D3xxxx1 | freespace: 0 B | Expires in: 24 seconds from now
12D3xxxx2 | freespace: 0 B | Expires in: 22 seconds from now

A StorageMax setting below the current datastore size is probably the cause.

Below is my setting; is there anything I missed?

"Datastore": {
  "BloomFilterSize": 1048576,
  "GCPeriod": "1h",
  "HashOnRead": false,
  "Spec": {
    "child": {
      "path": "badgerds",
      "syncWrites": false,
      "truncate": true,
      "type": "badgerds"
    },
    "prefix": "badger.datastore",
    "type": "measure"
  },
  "StorageGCWatermark": 90,
  "StorageMax": "2GB"
}
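With those values, GC kicks in at StorageGCWatermark percent of StorageMax. A small sketch of the arithmetic (the decimal-unit parsing here is an assumption; check how your ipfs version interprets sizes):

```python
UNITS = {"B": 1, "KB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}

def parse_size(s: str) -> int:
    """Parse a humanized size like '2GB' into bytes (decimal units assumed)."""
    for suffix in sorted(UNITS, key=len, reverse=True):  # try 'GB' before 'B'
        if s.upper().endswith(suffix):
            return int(float(s[: -len(suffix)]) * UNITS[suffix])
    return int(s)  # bare number: already bytes

storage_max = parse_size("2GB")       # StorageMax from the config above
gc_trigger = storage_max * 90 // 100  # StorageGCWatermark of 90 means GC starts here
```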

In ipfs, can't we set the maximum allowed storage? If it hits this max value it should not allow files to be added.

This is how I regularly adjust Datastore.StorageMax to half of the available disk space:

# df -P reports sizes in 1024-byte blocks; column 4 is the space still available
availableDiskSize=$(df -P ~/ | awk 'NR>1{sum+=$4}END{print sum}')
# half of the available space, converted from 1K blocks to bytes
diskSize="$(((availableDiskSize / 2) * 1024))"
ipfs config Datastore.StorageMax "$diskSize"
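The same idea in Python, using only the standard library (the final ipfs config call is left commented out since it needs a local ipfs install):

```python
import os
import shutil
import subprocess

# Free space (bytes) on the filesystem holding the home directory.
free_bytes = shutil.disk_usage(os.path.expanduser("~")).free
new_max = free_bytes // 2

# Uncomment to apply (a bare number is interpreted as bytes):
# subprocess.run(["ipfs", "config", "Datastore.StorageMax", str(new_max)],
#                check=True)
```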

Thanks for sharing.


  1. Yesterday, I loaded about 18 files totaling 11.4 GB in my first import of files to IPFS. The IPFS app on my Win10 desktop PC stopped importing in the middle of a file larger than 1.0 GB, and the Repo now totals 9.4 GB of hosted files.
  2. When I go to my Status page, it reports 404 and no imported files are shown.
  3. When I next tried to visit this Help page, it said I was not signed up for an account, and when I tried to create an account and log in, I did not remember my password (which was reportedly stored in my Firefox browser), and after repeated attempts, it never sent the Change Password email to my email address.
  4. Today, I signed up for an account at IPFS with the same email address and a custom avatar image, and the main page of my account does not include a link to access the files I imported or to import a file.
  5. I currently have 3 more files larger than 1.0 GB to import to my IPFS account node, and more in the future as they are created.
    I am not a coder. Any information you could provide to increase my MaxSpace, or another solution to the problem, would be appreciated.

I don’t know what you are talking about, since IPFS itself does not require account registration. You must be talking about some 3rd-party app.