Hi. I’m a newbie to IPFS. I’ve got an error and I don’t know how to fix it.
“ERROR providers providers/providers_manager.go:154 error adding new providers:committing batch to datastore at /:write .ipfs/datastore/001001/log: no space left on device”
I think .ipfs/datastore doesn’t have enough space for ipfs add.
So how do I clear the IPFS datastore?
Also, when I delete the folder under /.ipfs/datastore, ipfs add stops working!
Can you give me some feedback on how to solve this problem??
I really appreciate your help. Thanks!!
I found it.
Use ipfs repo gc
What you are doing is running garbage collection manually.
Run the daemon with the --enable-gc option to do this automatically.
Thanks a lot!
I use a Raspberry Pi to run IPFS for real-time, 24-hour communication.
So I have to control the system manually, not automatically.
If I run the daemon with --enable-gc, ipfs add doesn’t work because IPFS gets locked.
So I use a subprocess to run ‘ipfs pin rm’ together with ‘ipfs repo gc’ once more than 100 files have already been uploaded to the IPFS cluster.
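Roughly, my cleanup step looks like the sketch below (simplified; the threshold and variable names are just placeholders):

```python
import subprocess

PIN_THRESHOLD = 100  # placeholder: clean up once this many files are safely in the cluster

def cleanup(cids):
    """Unpin the given CIDs on the local node, then reclaim the space with a manual GC run."""
    for cid in cids:
        # Remove the local pin so GC is allowed to drop these blocks.
        subprocess.run(["ipfs", "pin", "rm", cid], check=True)
    # Reclaim disk space; only unpinned blocks are deleted.
    subprocess.run(["ipfs", "repo", "gc"], check=True)

uploaded_cids = []  # filled in by the upload loop elsewhere
if len(uploaded_cids) > PIN_THRESHOLD:
    cleanup(uploaded_cids)
    uploaded_cids.clear()
```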
Anyway, thank you for answering my question!
I couldn’t understand. What do you mean by ‘locking the IPFS’?
Pinning/unpinning a file is a separate activity in IPFS. Garbage collection does not delete pinned data chunks; they will remain safe in your datastore even after you run gc.
And if you don’t want those 100 files, why are you pinning them in the first place?
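To make that concrete, here is a tiny sketch (Python calling the ipfs CLI; sample.wav is just a stand-in for one of your files):

```python
import subprocess

def ipfs(*args):
    """Minimal wrapper around the ipfs CLI (assumes a working local repo)."""
    return subprocess.run(["ipfs", *args], capture_output=True, text=True, check=True).stdout.strip()

cid = ipfs("add", "-Q", "sample.wav")                # add pins the file recursively by default
ipfs("repo", "gc")                                   # GC removes only unpinned blocks
print(cid in ipfs("pin", "ls", "--type=recursive"))  # True: the pin survived the GC run
ipfs("pin", "rm", cid)                               # drop the pin...
ipfs("repo", "gc")                                   # ...now those blocks are eligible for deletion
```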
I built a real-time data acquisition system based on an IPFS node (Raspberry Pi) and an IPFS cluster.
Every 10 seconds I record a wav file, add it to IPFS, and pin it.
Then I transfer the hash to the IPFS cluster and the data is stored in the cluster.
So there is no need to keep the data in the IPFS node’s storage after it has been uploaded to the IPFS cluster.
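The acquisition loop is roughly like this (a simplified sketch; I’m assuming arecord for the recording step and ipfs-cluster-ctl for the cluster pin, the real script may differ):

```python
import subprocess

def record_wav(path):
    # 10-second recording on the Pi (arecord is just one way to do it).
    subprocess.run(["arecord", "-d", "10", path], check=True)

def add_and_pin(path):
    # `ipfs add` pins the file by default; -Q prints only the resulting hash.
    out = subprocess.run(["ipfs", "add", "-Q", path],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def pin_to_cluster(cid):
    # Hand the hash over to the cluster so the data is replicated there.
    subprocess.run(["ipfs-cluster-ctl", "pin", "add", cid], check=True)

while True:
    wav = "/tmp/sample.wav"   # placeholder path
    record_wav(wav)           # each recording takes 10 seconds, which paces the loop
    cid = add_and_pin(wav)
    pin_to_cluster(cid)
```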
If I use --enable-gc, then I cannot add a file to IPFS while garbage collection is running.
That is why I said ‘locking of IPFS’: the data cannot be uploaded!
But if I don’t use garbage collection, my storage fills up, so I have to run garbage collection.
Because of these two conditions, I decided to run garbage collection manually: unpin the data on the IPFS node and run garbage collection (it almost always finishes within 18 seconds).
And then the system works well!
Are you the same person, @ChanHyukLeee and @chanchan?