Ran out of inodes in go-ipfs

We ran into an issue where go-ipfs started returning a "no disk space" error even though the system had free disk space. The system has 500 GB of disk space, of which only about 50% was being used.

On debugging further, we realized that the system had run out of inodes, which caused this error. This is likely because ipfs creates a lot of small files.
We are running go-ipfs on an AWS instance with Ubuntu and an ext4 filesystem.

Is there anything that can be tuned in go-ipfs to help handle this?
It would also be helpful if there were a way to estimate approximate inode usage based on pin traffic. We pin approximately 800 items (files + DAG blocks) every 2 minutes.
This would also help us plan backing data up into cold storage such as Filecoin.

df -h
Filesystem       Size  Used Avail Use% Mounted on
/dev/root        485G  233G  252G  49% /

df -i
Filesystem        Inodes    IUsed   IFree IUse% Mounted on
/dev/root       64832000 55171154 9660846   86% /
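For the estimation question above, here is a back-of-the-envelope sketch. It assumes the default flatfs datastore, where each block is stored as one file and therefore consumes one inode, and uses the quoted rate of ~800 new blocks every 2 minutes together with the IFree figure from the df -i output:

```shell
# Rough inode-burn estimate: one inode per block (flatfs assumption),
# ~800 new blocks every 2 minutes.
blocks_per_2min=800
slots_per_day=$(( 30 * 24 ))                        # 2-minute slots per day
blocks_per_day=$(( blocks_per_2min * slots_per_day ))
echo "blocks (inodes) per day: $blocks_per_day"     # 576000

free_inodes=9660846                                 # IFree from df -i above
echo "days until exhaustion: ~$(( free_inodes / blocks_per_day ))"   # ~16
```

At that rate the remaining ~9.66M inodes would last on the order of two weeks, which matches how quickly the problem appeared.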

You can use bigger blocks when you add files: --chunker=size-1048576 --raw-leaves. However, that only matters if your files are bigger than 256KiB. I guess you have many small files, each of which creates a block.
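To make the effect concrete, here is the block-count arithmetic for a hypothetical 10 MiB file, comparing the default 256 KiB chunk size with the suggested 1 MiB one:

```shell
# Block count = file size / chunk size (hypothetical 10 MiB file).
file_size=$(( 10 * 1024 * 1024 ))
echo "blocks at 256 KiB chunks: $(( file_size / (256 * 1024) ))"    # 40
echo "blocks at 1 MiB chunks:   $(( file_size / (1024 * 1024) ))"   # 10
# The corresponding add command would be:
#   ipfs add --chunker=size-1048576 --raw-leaves <file>
```

With flatfs, each block is one file on disk, so 4x bigger chunks means roughly 4x fewer inodes for large files; files under 256 KiB are a single block either way.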

Your best solution is to use a filesystem that can dynamically allocate free space to either metadata or data, like ZFS or BTRFS.

You could also switch to a different datastore, but badger (the main alternative for on-disk storage) doesn’t scale well.

I’m curious: what are you doing? I have never seen anyone run into issues like this.

AFAICT the default on most Linux distros is one inode per 16KiB, which is less than the default 256KiB chunk size.


Yeah, we snapshot smart-contract data and store it on decentralized storage like IPFS after processing it.
Then we run indexing on top of it to generate higher-order information.

Hence, our files are small, which is causing this issue.
We will migrate to ZFS and see if it helps.
At the same time, we will look at combining data into larger files rather than storing many small ones.
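One way to combine the data, sketched under the assumption that the snapshots are many small files in a directory (the directory and file names here are hypothetical): bundle them into a single archive before adding, so IPFS chunks one large file instead of creating a block, and a datastore inode, per tiny input.

```shell
# Hypothetical directory of small snapshot files.
mkdir -p snapshots
echo '{"block": 1}' > snapshots/a.json
echo '{"block": 2}' > snapshots/b.json

# Bundle into one archive; IPFS then sees a single large file to chunk.
tar -cf snapshots.tar snapshots/
tar -tf snapshots.tar                    # lists the bundled members
# ipfs add --raw-leaves snapshots.tar    # one add, few blocks, few inodes
```

The trade-off is losing per-file addressing: consumers must fetch and unpack the archive to reach an individual snapshot.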


I would move to ZFS or BTRFS instead of changing your app.
ZFS is probably good to have anyway, given all its useful features.

Ok, we will migrate to ZFS and update after that.

As ZFS requires a lot of resources, we are using XFS instead, and we created the filesystem with a smaller block size, which gives us room for a much larger number of inodes.

df -i
Filesystem        Inodes    IUsed     IFree IUse% Mounted on
/dev/nvme1n1   810971472 23640706 787330766    3% /data
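For reference, a sketch of how such an XFS filesystem might be created. The device name is taken from the output above, the flag values are assumptions rather than our exact invocation, and running this destroys any data on the target device:

```shell
# XFS allocates inodes dynamically, up to "maxpct" percent of the
# filesystem; a smaller block size raises the practical inode ceiling.
mkfs.xfs -b size=1024 -i maxpct=25 /dev/nvme1n1   # DESTROYS existing data
mount /dev/nvme1n1 /data
df -i /data   # inode count grows on demand, unlike ext4's fixed table
```

Because XFS inodes are allocated on demand, the 810M figure above is a ceiling rather than a preallocated table, so it shouldn't hit ext4-style exhaustion while data space remains.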
