Does it take too long to query the DHT if you keep a large number of small files in IPFS?
If I set up a private network and store a large number of small files on a limited number of nodes, each node would need to maintain a relatively large DHT, which I assume would lead to long query times. If that happens, can we solve the performance problem by adding more nodes?
For example, suppose I set up a private network with only three nodes and save 300,000 files in it. Each of the three nodes would then maintain a DHT with 100,000 entries on average. Would that lead to long query times?
Hector, sir, are you saying that hosting 300k files on a node is not the same as maintaining 300k connections, so if I run `ipfs cat` in Hawaii I can still get the string in that object?
Because 300k connections seems like a big challenge, even if IPFS is implemented in Go, which they say is so good at concurrency.
All jokes aside, my scenario is running a node on localhost. I've never tested it myself, but can you promise me first that calling http://localhost:5001/api/v0/cat?arg={CID} (with the proper request method) won't take 3 seconds or more, when the object with that CID is only 300 bytes long? I've been developing a module for our system that uses IPFS, and I'm getting the feeling that calling the IPFS HTTP API is slow even on localhost.
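For what it's worth, I plan to time a single call with something like the sketch below. It's just a rough check, assuming a local Kubo node on the default port 5001, a CID that's already pinned locally (`QmYourCIDHere` is a placeholder), and POST requests, since Kubo's RPC API stopped accepting GET back in go-ipfs 0.5:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Placeholder CID for a ~300-byte object pinned on the local node.
	const cid = "QmYourCIDHere"

	start := time.Now()
	// Kubo's RPC API only accepts POST (GET was disabled in go-ipfs 0.5).
	resp, err := http.Post("http://localhost:5001/api/v0/cat?arg="+cid, "", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("fetched %d bytes in %s\n", len(body), time.Since(start))
}
```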
Can you also promise me that I can make at least 2,000 requests/s to my local IPFS node through the IPFS HTTP API?
For a publicly hosted IPFS API I obviously have to proxy requests to IPFS through nginx or Apache, but at 2,000 requests/s those proxies shouldn't be the bottleneck.
Since I'm not going to start a new ipfs process for every request, and will talk to the local IPFS gateway as HTTP API | IPFS Docs recommends, I hope it can handle that much load. I'll check the throughput question with a rough load test, sketched below.
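Here's the load-test sketch. It's not a proper benchmark, just an order-of-magnitude check: it reuses the placeholder CID and local port 5001 from above, and the worker and request counts are numbers I made up:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	// Placeholder CID for a small object pinned on the local node.
	url := "http://localhost:5001/api/v0/cat?arg=QmYourCIDHere"

	const workers = 50  // concurrent clients (arbitrary choice)
	const total = 10000 // total requests to issue

	// Raise the idle-connection limit so keep-alive actually works
	// across 50 workers (the default is only 2 per host).
	client := &http.Client{Transport: &http.Transport{MaxIdleConnsPerHost: workers}}

	var issued int64
	var wg sync.WaitGroup
	start := time.Now()

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Each increment claims one request; stop once total is reached.
			for atomic.AddInt64(&issued, 1) <= total {
				resp, err := client.Post(url, "", nil)
				if err != nil {
					log.Println(err)
					continue
				}
				// Drain the body so the connection can be reused.
				io.Copy(io.Discard, resp.Body)
				resp.Body.Close()
			}
		}()
	}
	wg.Wait()

	elapsed := time.Since(start)
	fmt.Printf("%d requests in %s (%.0f req/s)\n",
		total, elapsed, float64(total)/elapsed.Seconds())
}
```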