Hey guys…
Recently I've started working with larger datasets (files ranging from several gigabytes up to terabytes), and I've noticed that my IPFS node's performance has degraded significantly. I'm aware that IPFS isn't necessarily designed for high-speed data transfer like some other protocols, but I'm hoping to find ways to optimize my setup and get better performance when dealing with these larger files.
Here are some of the issues I've faced:
- Even on a fast network, it seems to take much longer than expected to add files to IPFS and to retrieve them (the exact commands I'm using are shown after this list). Is there a specific configuration or hardware setup that can help speed this up?
- My node sometimes uses a lot of CPU and memory when processing large datasets. Are there any best practices for managing resource usage, or certain settings that can be adjusted to reduce the load?
- I've also noticed that my node occasionally drops connections or struggles to maintain a consistent number of peers. Are there ways to improve the reliability of these connections? Would changing the network settings help?
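For reference, this is roughly how I'm adding and fetching files at the moment (the file name and CID below are just placeholders, and everything else is left at the Kubo defaults):

```sh
# Roughly how I add a large file today; the file name is a placeholder.
time ipfs add --progress large-dataset.tar

# And how I test retrieval from another node; the CID is a placeholder.
time ipfs get QmExampleCid -o /tmp/large-dataset.tar
```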
I'm running my IPFS node on a relatively standard server (16GB RAM, 8-core CPU) with plenty of storage space. The server runs Ubuntu, and I've tried a few tweaks, like increasing the cache size and adjusting the swarm settings (examples below), but I haven't seen a significant improvement.
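Concretely, these are the kinds of tweaks I've tried so far. The exact values here are my own guesses rather than recommendations, so I'm not sure they're sensible:

```sh
# Increase the blockstore bloom-filter cache (value is a guess, not a recommendation):
ipfs config --json Datastore.BloomFilterSize 1048576

# Raise the connection-manager watermarks (again, guessed values):
ipfs config --json Swarm.ConnMgr.LowWater 200
ipfs config --json Swarm.ConnMgr.HighWater 800

# Then restart the daemon so the changes take effect:
ipfs shutdown
ipfs daemon
```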
I also checked this thread: https://discuss.ipfs.tech/t/ipfs-files-rm-not-accepting-options-objectmendix but I have not found any solution there. Could anyone guide me on this?
Any help from this respected community would be greatly appreciated. Thanks in advance!