I deployed IPFS for a blockchain project. We need to fetch NFTs (metadata and image) by CID, but our IPFS gateway returns 504 timeouts. I tried peering our node with the strong peers listed at Peering with content providers | IPFS Docs, but it still times out for some CIDs.
My question is: how do I implement a robust IPFS gateway? The same CIDs resolve fine on some of the public IPFS gateways.
Furthermore, I am running a single IPFS service inside EKS, not an IPFS swarm or cluster. (edited)
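For reference, peering is configured in the Kubo config under `Peering.Peers`. A minimal sketch of the shape (the peer ID and multiaddr below are placeholders, not real values; the actual entries come from the docs page linked above):

```json
{
  "Peering": {
    "Peers": [
      {
        "ID": "QmPeerIDPlaceholder",
        "Addrs": ["/dnsaddr/example-provider.example.com"]
      }
    ]
  }
}
```

After editing the config, the daemon needs a restart for the peering to take effect.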
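One pragmatic way to make the gateway feel more robust to users is to fall back to public gateways when your own node times out. A minimal sketch, assuming an illustrative gateway list and timeout (none of these values come from the thread):

```python
# Try a CID (optionally with a sub-path) against several gateways in order,
# returning the first successful response. Gateway list and timeout are
# illustrative assumptions.
import urllib.request
import urllib.error

GATEWAYS = [
    "https://ipfs.io",
    "https://dweb.link",
    "https://cloudflare-ipfs.com",
]

def gateway_urls(cid_path: str, gateways=GATEWAYS) -> list[str]:
    """Build path-style gateway URLs for a CID (optionally with a sub-path)."""
    return [f"{gw}/ipfs/{cid_path}" for gw in gateways]

def fetch_with_fallback(cid_path: str, timeout: float = 30.0) -> bytes:
    """Try each gateway in turn; return the first successful response body."""
    last_err = None
    for url in gateway_urls(cid_path):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as err:
            last_err = err  # 504 / timeout: move on to the next gateway
    raise RuntimeError(f"all gateways failed for {cid_path}") from last_err
```

You would put your own gateway first in the list so pinned content is still served locally, and only slow/unfindable CIDs fall through to the public gateways.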
@danieln thank you for replying,
I am able to get content that is pinned in our own IPFS node, but we have an NFT indexer DB that is updated with NFT metadata and data CIDs.
Below is a list of CIDs for which our IPFS gateway times out:
QmeMBccZ4XwZExsPNaGANDDPq4KHEghXAQH91cKA2167pJ
QmXaHteiJkmKpqsAS1xCEet8kZqQWuZ3GncjMR5afsrWay/1108.json
QmV8UTwuFQPP4x4pRLKcmWf59uH1dssmvBZrQi3dQMVbz8
I checked those addresses you provided, and, while I was able to retrieve all of them, they each took a long time to find. Meaning, the problem isn’t on your end, it’s just that whoever is providing those blocks isn’t doing a very good job of it. Unfortunately for you, that’s what needs to improve.
Thanks @ylempereur,
Is there any way to increase the IPFS timeout? Something I can change in ~/.ipfs/config?
Our user-to-IPFS traffic goes via AWS ALB ===> Nginx reverse proxy ====> IPFS pod.
So, to conclude, we should expect roughly 40% of requests to our IPFS to fail, right?
Usually, the timeout actually comes from the web server/proxy in front of the gateway, not the gateway itself (but I’m sure the gateway has a timeout too).
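For the ALB ===> Nginx ===> IPFS chain described above, the timeouts would typically be raised at the proxy layer. A sketch of the relevant nginx directives (the upstream name and values are illustrative assumptions, not a recommended configuration):

```nginx
# Reverse proxy in front of the IPFS gateway; values are illustrative.
location /ipfs/ {
    proxy_pass http://ipfs_gateway_upstream;
    proxy_connect_timeout 5s;
    proxy_send_timeout    60s;
    proxy_read_timeout    300s;  # how long nginx waits for the gateway to respond
}
```

Note the AWS ALB in front of nginx has its own idle timeout (60 seconds by default), which would also need raising, or the ALB will return the 504 before nginx's timeout is ever reached.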
I just use the ipfs command on the CLI, which doesn’t time out, and let it run, possibly for hours, until it locates the data. That’s how I got your blocks.
Actually, it just means you got them from my node, as they must still be in the cache, and my node does a really good job of providing. Which doesn’t solve your problem, but demonstrates that this can work really well, if it’s done right.
A quick question, please: does your machine have a public IP attached so other nodes can find your local IPFS, or is it using your router’s dynamic public IP assigned by your ISP?
My situation is a little bit complex, but I’ll give you a simplified, more typical version of it:
my ISP assigns one public IP to my router, which then uses NAT to provide connectivity to the various devices in my home. I have set up port-forwarding on the router to make TCP 4001 and UDP 4001 forward to the same ports on my Mac, which the IPFS daemon uses for communication.
the daemon will announce my public IP along with the TCP and UDP ports, which is what other nodes will use to establish a connection.
on occasion, the IP address changes, but the daemon is able to notice and announce my new address when it happens.
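For completeness: if the public IP were static, it could be pinned explicitly in the Kubo config via `Addresses.Announce` (the address below is a documentation-range placeholder, purely illustrative). With a dynamic IP, the daemon’s autodetection described above is the right behavior and this setting should be left empty:

```json
{
  "Addresses": {
    "Announce": [
      "/ip4/203.0.113.10/tcp/4001",
      "/ip4/203.0.113.10/udp/4001/quic-v1"
    ]
  }
}
```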
@ylempereur is it necessary to expose port 4001? In EKS, our pod/service doesn’t expose it. Also, the AWS ALB is internal, meaning no one can reach https://our..internal.ipfs.com/ipfs/.
We access it via VPN.
Now I have a doubt: if we don’t expose (port-forward) 4001, how does our IPFS node see all its peers (around 600), and how can pinned content be accessed via other public IPFS gateways?
But I see that the daemon has a public IP?
I think one purpose of IPFS is also to bypass restrictions like firewall rules, etc.