Pinned files on my Azure-container-hosted IPFS node are not publicly available

Hi IPFS People

I am currently struggling with my IPFS setup: the pinned content seems to be unreachable through public gateways like `https://ipfs.io/ipfs/`.

I have hosted my IPFS node in an Azure Container Instance and opened the required ports. I also configured it to be publicly available, but that seems to have failed. Below is my config:

{
    "API": {
        "HTTPHeaders": {
            "Access-Control-Allow-Headers": [
                "X-Requested-With",
                "Range",
                "User-Agent"
            ],
            "Access-Control-Allow-Methods": [
                "PUT",
                "GET",
                "POST",
                "OPTIONS"
            ],
            "Access-Control-Allow-Origin": [
                "*"
            ]
        }
    },
    "Addresses": {
        "API": "/ip4/0.0.0.0/tcp/5001",
        "Announce": [
            "/ip4/104.43.103.27/tcp/4001",
            "/ip4/104.43.103.27/tcp/443/https",
            "/ip4/104.43.103.27/tcp/4001/ws"
        ],
        "AppendAnnounce": [],
        "Gateway": "/ip4/0.0.0.0/tcp/8080",
        "NoAnnounce": [],
        "Swarm": [
            "/ip4/0.0.0.0/tcp/4001",
            "/ip6/::/tcp/4001",
            "/ip4/0.0.0.0/udp/4001/webrtc-direct",
            "/ip4/0.0.0.0/udp/4001/quic-v1",
            "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
            "/ip6/::/udp/4001/webrtc-direct",
            "/ip6/::/udp/4001/quic-v1",
            "/ip6/::/udp/4001/quic-v1/webtransport"
        ]
    },
    "AutoNAT": {
        "ServiceMode": "enabled"
    },
    "Bootstrap": [
        "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
        "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
        "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
        "/ip4/104.131.131.82/udp/4001/quic-v1/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
        "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
        "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa"
    ],
    "DNS": {
        "Resolvers": {}
    },
    "Datastore": {
        "BloomFilterSize": 0,
        "GCPeriod": "1h",
        "HashOnRead": false,
        "Spec": {
            "mounts": [
                {
                    "child": {
                        "path": "blocks",
                        "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
                        "sync": true,
                        "type": "flatfs"
                    },
                    "mountpoint": "/blocks",
                    "prefix": "flatfs.datastore",
                    "type": "measure"
                },
                {
                    "child": {
                        "compression": "none",
                        "path": "datastore",
                        "type": "levelds"
                    },
                    "mountpoint": "/",
                    "prefix": "leveldb.datastore",
                    "type": "measure"
                }
            ],
            "type": "mount"
        },
        "StorageGCWatermark": 90,
        "StorageMax": "10GB"
    },
    "Discovery": {
        "MDNS": {
            "Enabled": true
        }
    },
    "Experimental": {
        "FilestoreEnabled": false,
        "Libp2pStreamMounting": false,
        "OptimisticProvide": true,
        "OptimisticProvideJobsPoolSize": 0,
        "P2pHttpProxy": false,
        "StrategicProviding": true,
        "UrlstoreEnabled": false
    },
    "Gateway": {
        "DeserializedResponses": null,
        "DisableHTMLErrors": null,
        "ExposeRoutingAPI": null,
        "HTTPHeaders": {},
        "NoDNSLink": false,
        "NoFetch": false,
        "PublicGateways": null,
        "RootRedirect": ""
    },
    "Identity": {
        "PeerID": "12D3KooWNCPbnZeKQgt9mwykfaRuqHsHSBNzE2qRHs6r8X3iGS1Z"
    },
    "Import": {
        "CidVersion": null,
        "HashFunction": null,
        "UnixFSChunker": null,
        "UnixFSRawLeaves": null
    },
    "Internal": {},
    "Ipns": {
        "RecordLifetime": "",
        "RepublishPeriod": "",
        "ResolveCacheSize": 128
    },
    "Migration": {
        "DownloadSources": [],
        "Keep": ""
    },
    "Mounts": {
        "FuseAllowOther": false,
        "IPFS": "/ipfs",
        "IPNS": "/ipns"
    },
    "Peering": {
        "Peers": null
    },
    "Pinning": {
        "RemoteServices": {}
    },
    "Plugins": {
        "Plugins": null
    },
    "Provider": {
        "Strategy": "all"
    },
    "Pubsub": {
        "DisableSigning": false,
        "Router": ""
    },
    "Reprovider": {
        "Interval": "1h",
        "Strategy": "all"
    },
    "Routing": {
        "Methods": null,
        "Routers": null,
        "Type": "dhtserver"
    },
    "Swarm": {
        "AddrFilters": null,
        "ConnMgr": {
            "GracePeriod": "2m",
            "HighWater": 300,
            "LowWater": 100
        },
        "DialTimeoutSeconds": 60,
        "DisableBandwidthMetrics": false,
        "DisableNatPortMap": false,
        "EnableAutoRelay": true,
        "EnableHolePunching": true,
        "RelayClient": {
            "Enabled": true,
            "StaticRelays": [
                "/dns4/relay.ipfs.io/tcp/4001/p2p/QmQvM2mpqkjyXWbTHSUidUAWN26GgdMphTh9iGDdjgVXCy",
                "/dns4/relay.dev.ipfs.io/tcp/4001/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN"
            ]
        },
        "RelayService": {},
        "ResourceMgr": {},
        "Transports": {
            "Multiplexers": {},
            "Network": {
                "Relay": true
            },
            "Security": {}
        }
    },
    "Version": {}
}

I really need your help on this.

Can you share a CID that should be available? You can also try https://check.ipfs.network/ (paste just the CID, leave the address empty).

Hi Ligustah

This is the latest CID that I have pinned QmdzjV3kLuEEmZtesrEtjkBWDgAvQjA8mHoMpEpoP224pt

I already tried https://check.ipfs.network to see if there is any response for the pinned file, but there is nothing.

I also tried `ipfs routing provide` to see the providers. I got some response, though I can't paste all of it here.

{
    "Extra": "",
    "ID": "12D3KooWMRL1mXLNSoWFfn8qzdRJey9HXCrBaTEAc7SCmvkiMEcw",
    "Responses": null,
    "Type": 7
}
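For context, a related sketch: `ipfs routing findprovs` asks the DHT who is providing a CID, which is a quick way to see whether the announcement actually landed (the CID below is the one shared above):

```shell
# Ask the DHT for providers of the pinned CID; if the node's
# announcements are working, its own peer ID should show up
ipfs routing findprovs QmdzjV3kLuEEmZtesrEtjkBWDgAvQjA8mHoMpEpoP224pt
```

This needs a running daemon on the node, so the output depends on your setup.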

Hi Ligustah

Here are some other files that I have pinned

{
    "Keys": {
        "QmNnrgUMyJZCRUR3p9hjjte7Fpws1jtGS4Tsn4ecADm4Xy": {
            "Type": "recursive",
            "Name": ""
        },
        "QmQA2aSBriQ3rmk1GYSk5WJ55nqptu7AUunF9apqbH3kAB": {
            "Type": "recursive",
            "Name": ""
        },
        "QmR289r26zEA1VQQXtrZ1rnie3vrTiGXEeTtby1V9qSUDf": {
            "Type": "recursive",
            "Name": ""
        },
        "QmR7Q65RBaVD8AecBVnzjH1WFHS3Z5sABJNc8WMQvpwodn": {
            "Type": "indirect",
            "Name": ""
        },
        "QmSE73KuDQ2rYrKU5GBKrTKeDDdFppbfppZDeSnY67Syan": {
            "Type": "recursive",
            "Name": ""
        },
        "QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn": {
            "Type": "recursive",
            "Name": ""
        },
        "QmWrBVwQ71vYAfzU3t7XpguP34CfrLS5MbiUN21ELJvagc": {
            "Type": "recursive",
            "Name": ""
        },
        "Qmb58dvSripJLWxKPH6nfBiNinLq5vDraiZQxwhiZKJo5k": {
            "Type": "indirect",
            "Name": ""
        }
    }
}

The check works here for me: IPFS Check

I was able to fetch it through the gateway. Now the question is: why are you putting private data like electricity bills on IPFS?

The StrategicProviding experiment disables announcements to the Amino DHT, which explains why other peers can't find your node as a provider (you are not announcing it correctly).

Please update to the latest Kubo (>=0.31.0) and apply:

$ ipfs config profile apply announce-on # to restore announcements
$ ipfs config Reprovider.Interval 22h # DHT peers keep data for 48h, 1h is way way too low
$ # remember to restart your ipfs daemon to apply new config

PS: relay.ipfs.io does not exist; you may want to clean that up with `ipfs config --json Swarm.RelayClient "{}"` (and if the custom DNS was intentional, please use .local or .example.net domains to reduce confusion).
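A hedged sketch of verifying the changes took effect (assuming the daemon was restarted after applying them; the CID is the one shared earlier in the thread):

```shell
# Both values should reflect the fix above
ipfs config Experimental.StrategicProviding   # should print false
ipfs config Reprovider.Interval               # should print 22h

# Re-announce the pinned content immediately rather than waiting
# for the next reprovide cycle
ipfs routing provide QmdzjV3kLuEEmZtesrEtjkBWDgAvQjA8mHoMpEpoP224pt
```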

I just randomly selected a file to test my node.

It is still not working for me though :sob:

Thanks Lidel for providing some help.

So StrategicProviding should be set to false, correct? And then reapply the other commands?

Hi Lidel

My version is already up to date
[screenshot showing the installed version]

I am trying to perform the check but it's not working for me, though. Do I need to be on another network or on a VPN?

Wow, your node needs … TLC (my guess is, you’ve poked around at a bunch of things trying to debug your problem, and made things worse).

The first problem we’ll need to fix is that you didn’t configure your firewall/routing properly, which forces your node into relay mode (in spite of your announce).

I need you to open TCP/UDP 4001 in your firewall (and if you are in a NAT situation, set up port mapping for those two as well). Once done, I’ll retest that part.
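A quick reachability sketch from any machine outside Azure (the IP is the one from your Announce config; hedged, since UDP can't really be probed this way):

```shell
# TCP: succeeds only if the firewall/port mapping lets it through
nc -vz 104.43.103.27 4001

# UDP: netcat can only confirm the packet was sent, not that
# anything is actually listening on the other end
nc -vzu 104.43.103.27 4001
```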

Next, I’ll need you to (re)post the following:

  1. your config as it stands right now, we’ll have to clean it up
  2. the CLI command you issue to run the node (from the Dockerfile in your case)
  3. an estimate of how many blocks you have in your cache

Once you’ve done all this, I’ll guide you through the clean up.

One final note, don’t rely on public gateways to test your node, they hardly ever work. Use this instead: https://check.ipfs.network/

Thanks, Ylempereur, for analyzing my problem.

Yes, I did poke around a bit, to the point that I'm no longer sure what the problem is anymore :smiley:

By default, ACI (Azure Container Instances) exposes the ports you specify in the container group configuration, and there's no built-in firewall.

I only exposed TCP ports 4001, 5001, and 8080, but UDP doesn't seem to go well with Azure Container Instances.

So I am not sure how to deal with the firewall stuff

I don’t know anything about Azure, so that part will be on you, sorry. Do not expose ports 5001 and 8080 to the public, that would be … bad. But I do need you to expose TCP 4001 and UDP 4001, or your node will keep relying on relays, and that’s not a good thing for a DHT server (hole punching isn’t very reliable and is slow to establish when it works).

There are three places I would look to do this:

  1. your Dockerfile
  2. Azure port mapping capability if using NAT (likely)
  3. Azure firewall

Thanks, Ylempereur! :person_raising_hand:

I’ll check out how to implement your suggestions. You’ve given me a clearer picture of what to do next.

Much appreciated!

Hi ylempereur

It seems that opening two ports simultaneously in a container is a bit tricky, but when I check the logs I notice that it shows TCP+UDP. Does this work?

Swarm listening on 127.0.0.1:4001 (TCP+UDP)
Swarm listening on 192.168.0.67:4001 (TCP+UDP)
Swarm listening on [::1]:4001 (TCP+UDP)

I did not set up a firewall or NAT because, according to the Container Instances documentation, that is not necessary: the container can already expose ports based on its configuration.
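For what it's worth, my current understanding is that exposing one port on both protocols needs the YAML deployment path rather than the plain `az container create --ports` flags. Here is a minimal sketch of what I'm trying (image, names, and resource values are placeholders, not my exact deployment):

```yaml
# container-group.yaml (sketch, deployed with:
#   az container create --resource-group <rg> --file container-group.yaml)
apiVersion: '2021-10-01'
name: ipfs-node
properties:
  containers:
  - name: ipfs
    properties:
      image: ipfs/kubo:latest
      resources:
        requests:
          cpu: 1
          memoryInGB: 1.5
      ports:
      - port: 4001
        protocol: TCP
      - port: 4001
        protocol: UDP
  osType: Linux
  ipAddress:
    type: Public
    ports:
    - port: 4001
      protocol: TCP
    - port: 4001
      protocol: UDP
```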

Ylempereur

You mentioned the cleanup. If we could try that out, would it help fix some of the issues?

Thank you

Sorry, but wrong. This line in your post tells me that you are indeed using NAT:
Swarm listening on 192.168.0.67:4001 (TCP+UDP)

Also, I tried to reach TCP 4001 on your node (using netcat) and there’s no answer. So, port mapping isn’t set up (and possibly a firewall is in the way too).

So, you need to port map external 4001 to 192.168.0.67:4001 for both TCP and UDP.

Let me know when it’s done, and I’ll retest.

Also, if you want me to give you the cleanup, you need to post the three things I asked for (but part of it relies on the port mapping being done first; it would be different without it).
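When you retest on your end, here's a hedged self-check sketch you can run on the node itself; once the mapping is in place, the public IP should appear without a `/p2p-circuit` relay path:

```shell
# Addresses the node currently advertises to the network
ipfs id -f '<addrs>'
```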