Hi Guys,
when trying to display a large IPFS folder via the link I'm getting the error:
context deadline exceeded
Any ideas?
Or does it have to do with it needing time to load the 50k files first?
The strange thing is: I have, for example, 5 folders, of which I'm able to see 2 via the ipfs.io/ipfs/ link with the CID, while some of the other folders just time out.
After waiting about 20 minutes I get this error: 504 Gateway Time-out
I don’t think so. You should double-check that you are providing the content correctly and that it is discoverable on the network. The root directory should be automatically HAMT-sharded, I think, as long as you’re using a recent version of Kubo.
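If it helps, a quick way to check the sharding behaviour mentioned above (this is a sketch; it assumes Kubo's `Internal.UnixFSShardingSizeThreshold` config key, which, as far as I know, controls when directories get HAMT-sharded):

```shell
# Check your Kubo version; automatic HAMT sharding of large
# directories landed around v0.13.0
ipfs version

# (Assumption) directories whose serialized block would exceed this
# threshold are HAMT-sharded automatically; the key may be unset if
# you are on the default value
ipfs config Internal.UnixFSShardingSizeThreshold
```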
Hi Hector,
what do you mean by providing the content correctly?
Loading 50k files (about 480 MB total) via the GUI isn't possible,
so I do it via the CLI:
ipfs add -r c:\folder
After a few minutes everything is added. In the GUI I then add the CID via IPFS and see the CID and the 480 MB size, for example.
On the status page it says you are hosting 53 MiB of data (but it only goes up by about 2 MB an hour or so).
If I go to the Files page it says: 481 MiB files and 53 MiB all blocks.
Hi Guys,
So here's the issue: we use IPFS Desktop.
We tried adding a folder with 50,000 .png files of about 10 KB each (loading via the GUI isn't even possible, it just keeps hanging; tested on multiple systems).
We then went to the CLI (on Windows). It added the folder nicely, and you can see the progress too, but if you then take the folder's CID and use ipfs.io to check the content etc., it takes ages and then just times out with a 504 error.
If I take just 10,000 files and load that folder with the 10k files, everything works normally.
Can we conclude that IPFS is not suitable for that many small files, or is the issue with running IPFS on Windows? What could the issue be, so that we can continue with a solution, please?
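One way to narrow down whether the problem is the data itself or the public gateway is to check it locally first. A sketch (replace `<folder-CID>` with your folder's CID; assumes the default local gateway on port 8080):

```shell
# List the directory contents from your own node; if this returns
# quickly, the data and its structure are fine, and the bottleneck
# is gateway-side retrieval/providing
ipfs ls <folder-CID>

# Then try the same path through your node's own local HTTP gateway
curl "http://127.0.0.1:8080/ipfs/<folder-CID>/"
```

If both of these are fast but ipfs.io still times out, the public gateway simply can't fetch all the blocks from your node in time.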
So when I run a test on the CID via ipfs-check.on.fleek.co, I get these results:
Successfully connected to multiaddr
Found multiaddrs advertised in the DHT:
/ip4/102.219.181.102/tcp/4001
/ip4/102.219.181.102/udp/4001/quic
/ip4/127.0.0.1/tcp/4001
/ip4/127.0.0.1/udp/4001/quic
/ip4/144.202.68.68/tcp/4001/p2p/12D3KooWLH7bPqAwBpBY3qjHduhLmD9LP76dXXpeGdLXKTWoG1K8/p2p-circuit
/ip4/144.202.68.68/udp/4001/quic/p2p/12D3KooWLH7bPqAwBpBY3qjHduhLmD9LP76dXXpeGdLXKTWoG1K8/p2p-circuit
/ip4/45.32.81.162/tcp/4001/p2p/12D3KooWDbMWcXZPQ3fDHBi4pHzHh13H17RWvf9FVYfPSzxoga9V/p2p-circuit
/ip4/45.32.81.162/udp/4001/quic/p2p/12D3KooWDbMWcXZPQ3fDHBi4pHzHh13H17RWvf9FVYfPSzxoga9V/p2p-circuit
/ip6/::1/tcp/4001
/ip6/::1/udp/4001/quic
Found multihash advertised in the dht
The peer responded that it has the CID
Results are good I guess?
Hi guys,
Okay, so never mind the folder listing via a CID anymore, we don't care about that.
I now have a folder with 22 GiB of files added. When trying to look up the CID externally, it tells me this:
Could not find the multihash in the dht
What could the issue be? Or does a large set of files like that need some time to propagate?
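Announcing (providing) every block of a 22 GiB dataset to the DHT can indeed take a long time; the periodic reprovide cycle is fairly slow by default. A sketch of things to try (assumes a reasonably recent Kubo; `<root-CID>` is your folder's root CID, and the `roots` strategy is an assumption on my part about what fits your case):

```shell
# Announce the root CID to the DHT right away instead of waiting for
# the periodic reprovide run (on older Kubo releases the command was
# `ipfs dht provide` instead)
ipfs routing provide <root-CID>

# (Assumption) with this strategy Kubo only announces root CIDs, which
# is much faster than announcing every block of a large dataset
ipfs config --json Reprovider.Strategy '"roots"'
```

Note that with the `roots` strategy, other peers can still fetch the whole folder from you once they find the root, but individual file CIDs deep inside it won't be discoverable on their own via the DHT.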