I want to pin a large number of objects (say 100k). I am sending a batch of requests to the HTTP API at the same time. As far as I can tell, if one of the objects cannot be found, I never get an error; the request just keeps waiting. So I stop waiting after a minute and send another request for another file.
What is going on inside? Does IPFS itself give up at some point? Is there a limit on how many parallel pinning requests it will entertain? In other words, what is the right number of concurrent requests to send, and at what point will my script time out simply because requests are waiting in an internal IPFS queue, rather than because the objects are truly not reachable on the network?
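The pattern described above (many concurrent pin requests, each abandoned client-side after a minute) can be sketched against the Kubo RPC API. The `/api/v0/pin/add` endpoint is real; the API address, the CIDs, and the timeout/concurrency numbers here are illustrative assumptions, not recommendations:

```python
# Sketch: pin many CIDs with a client-side timeout per request and
# bounded concurrency, assuming a local Kubo daemon on the default port.
import concurrent.futures
import urllib.parse
import urllib.request

API_URL = "http://127.0.0.1:5001"  # default Kubo RPC address (assumption)

def pin_url(api_url: str, cid: str) -> str:
    """Build the pin/add RPC URL for one CID."""
    return api_url + "/api/v0/pin/add?" + urllib.parse.urlencode({"arg": cid})

def pin(cid: str, timeout: float = 60.0) -> bool:
    """POST one pin request; give up client-side after `timeout` seconds.

    Closing the HTTP request is what cancels the pin attempt on the
    daemon side -- the daemon itself never gives up on its own.
    """
    req = urllib.request.Request(pin_url(API_URL, cid), method="POST")
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            return True
    except Exception:
        return False  # timed out or daemon unreachable; queue for retry

def pin_all(cids, workers: int = 16):
    """Bound concurrency so 100k requests are not all in flight at once."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as ex:
        return dict(zip(cids, ex.map(pin, cids)))
```

The `workers` bound matters: firing all 100k requests simultaneously just means they compete for the same bandwidth and daemon resources.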
What is going on inside: IPFS is trying to locate and download all the DAG nodes referenced by the pin.
Does it give up? Not by itself. You can cancel/close the request whenever you decide you have waited long enough.
Is there a limit? No, other than your bandwidth and similar resource constraints. Pins should proceed independently of each other. Of course, the more things you download in parallel, the slower each individual download tends to be.
The right number of concurrent requests depends on your network and on the specs of the machine IPFS is running on.
Your script will only time out when you cancel the request; otherwise the daemon will keep asking the network for the blocks until they appear.
You might find IPFS Cluster useful for this task, as it lets you define pin timeouts and retries and control parallelism, so managing the 100k-pin queue should be easier. (You can run a single cluster peer on top of a single IPFS daemon for that; there is no need to set up a cluster with multiple peers.)
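If you go the Cluster route, the settings to look at are the IPFS connector's pin timeout and the pin tracker's concurrency. A hedged sketch of the relevant `service.json` fragment follows; the field names and values are from memory of the Cluster documentation, so verify them against the docs for your version:

```json
{
  "ipfs_connector": {
    "ipfshttp": {
      "pin_timeout": "2m"
    }
  },
  "pin_tracker": {
    "stateless": {
      "concurrent_pins": 10
    }
  }
}
```

With a timeout configured, a pin that cannot make progress is marked as an error and can be retried later, instead of waiting forever.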