I’m looking for really clear instructions on how to upload more than 10GB to IPFS. It appears that pinning services like Pinata and Eternum cap uploads at the default limit of 10GB or less, while I’m mostly looking to upload folders ranging from 50-500GB.
I imagine that if I learn how to upload folders of this size without a pinning service, I can always hand the resulting CID to a pinning service afterwards.
So, how does one go about overriding the default size limit and uploading data without a pinning service? Can this be done through IPFS Desktop?
You can use `ipfs add /path/to/file` from the command line, or use the Import button under the Files tab in the Web UI (I imagine it’s the same in IPFS Desktop).
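For a whole folder rather than a single file, the CLI needs the recursive flag. A minimal sketch, assuming a Kubo (go-ipfs) node and a placeholder path:

```shell
# Add a directory tree; -r recurses into it, --progress shows bytes added.
# The last line of output is the root CID for the folder.
ipfs add -r --progress /path/to/folder
```

Once the add finishes, that root CID is what you would hand to a pinning service.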
Wouldn’t the desktop GUI also be set to the 10GB default? How would I change that? I’m not fluent in any programming language, so I would need pretty clear instructions.
I appreciate your time.
> Wouldn’t the desktop GUI also be set to the 10GB default?

It seems that configuration applies to garbage collection, not to the content you add.
> How would I change that?

In the Settings tab you can change `StorageMax` to a larger value.
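Assuming the node is Kubo (go-ipfs), the same setting can also be changed from the command line; restart the daemon afterwards so it takes effect. The 1TB value below is just an example:

```shell
# Raise the datastore size limit from the 10GB default to 1TB.
ipfs config Datastore.StorageMax "1TB"

# Print the setting back to verify the change.
ipfs config Datastore.StorageMax
```

Note that `StorageMax` is the threshold the garbage collector aims for, not a hard cap on what you can add.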
> I’m not fluent in any programming language, so I would need pretty clear instructions.

In the Files tab, click Import and choose the file or directory you want to add to IPFS. My MFS is tied to Pinata’s free plan (1 GB) for some production data, so I can’t test this for you, but if you hit any error feel free to follow up.
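If the GUI import keeps failing, a rough CLI equivalent is to add the data and then attach it to MFS yourself. This is a sketch with placeholder paths and names; the `--to-files` flag requires a recent Kubo (0.16 or later):

```shell
# Add the directory and attach it to MFS in one step (Kubo >= 0.16):
ipfs add -r --to-files=/big-folder /path/to/big-folder

# On older versions: add first, then copy the printed root CID into MFS.
ipfs add -r /path/to/big-folder
ipfs files cp /ipfs/<rootCID> /big-folder
```

Here `<rootCID>` stands for the root CID printed on the last line of `ipfs add`.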
We do have one individual using IPFS Desktop who is trying to upload an approximately 128GB file. They have cleared 300GB of space on their computer (apparently) and are still getting an error, usually around the 60GB mark.
What is the error you encountered? Could you describe what you saw or, better, share a screenshot?
Here’s another example:
The file is just over 186GB, but the upload keeps stopping even though the person has over 500GB free on their computer. We’ve tried countless times and always hit an error.
Let me try to replicate the situation on my side. If it does replicate, you may have encountered a software bug.
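One way to attempt a reproduction is with a large synthetic file; this is a sketch with a scaled-down size and placeholder file name, assuming a local Kubo node:

```shell
# Create a 1 GiB test file of random data; scale count up toward the
# failing size (e.g. count=190464 for ~186 GiB) if disk space allows.
dd if=/dev/urandom of=test-large.bin bs=1M count=1024

# Add it with a progress readout to see where (or whether) it stalls.
ipfs add --progress test-large.bin
```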
Hi, has anyone replicated this for various files that larger groups needed to access at the same time? And what were the performance times like?