How to set up v1 relay in the new config

Yes, of course:
"ID": "12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
"PublicKey": "CAESIJaGVFy2Ff5wVSUQccqpCj3ZOyXr192rRfVSV+r8pkC6",
"Addresses": [
"/ip4/125.253.126.88/tcp/4001/p2p/12D3KooWAAm17JYqc7fM5qsZAdnkKSqmQF7bqsCKAQ1tww5RQTCA/p2p-circuit/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
"/ip4/125.253.126.88/udp/4001/quic/p2p/12D3KooWAAm17JYqc7fM5qsZAdnkKSqmQF7bqsCKAQ1tww5RQTCA/p2p-circuit/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
"/ip4/127.0.0.1/tcp/4001/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
"/ip4/127.0.0.1/udp/4001/quic/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
"/ip4/192.168.1.233/tcp/4001/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
"/ip4/192.168.1.233/udp/4001/quic/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
"/ip4/54.169.103.175/tcp/4001/p2p/12D3KooWR9zzY5ZZnKyvwm5cKTxexPFQ4DZCypAtqZ1sHDm2GtqP/p2p-circuit/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
"/ip4/54.169.103.175/udp/4001/quic/p2p/12D3KooWR9zzY5ZZnKyvwm5cKTxexPFQ4DZCypAtqZ1sHDm2GtqP/p2p-circuit/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
"/ip6/64:ff9b::7dfd:7e58/tcp/4001/p2p/12D3KooWAAm17JYqc7fM5qsZAdnkKSqmQF7bqsCKAQ1tww5RQTCA/p2p-circuit/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
"/ip6/64:ff9b::7dfd:7e58/udp/4001/quic/p2p/12D3KooWAAm17JYqc7fM5qsZAdnkKSqmQF7bqsCKAQ1tww5RQTCA/p2p-circuit/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
"/ip6/::1/tcp/4001/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
"/ip6/::1/udp/4001/quic/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV"
],

Hope you can help us

It works for me, but that’s probably because my node is reachable, so when I connect to your “hidden” node, it doesn’t really have to establish a “hole punch”, it just calls mine back. Anyway, here is the run, and you can see at the end that it lists two connections, the relay one, and then the direct one that ipfs id caused (and you can see that the direct one is inbound, meaning your node called me). So, your problem is definitely the inability of your two hidden nodes to establish a “hole punch” connection.

>  ~ ipfs swarm connect /ip4/54.169.103.175/tcp/4001/p2p/12D3KooWR9zzY5ZZnKyvwm5cKTxexPFQ4DZCypAtqZ1sHDm2GtqP
connect 12D3KooWR9zzY5ZZnKyvwm5cKTxexPFQ4DZCypAtqZ1sHDm2GtqP success
>  ~ ipfs id 12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV
{
	"ID": "12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
	"PublicKey": "CAESIJaGVFy2Ff5wVSUQccqpCj3ZOyXr192rRfVSV+r8pkC6",
	"Addresses": [
		"/ip4/125.253.126.88/tcp/4001/p2p/12D3KooWAAm17JYqc7fM5qsZAdnkKSqmQF7bqsCKAQ1tww5RQTCA/p2p-circuit/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
		"/ip4/125.253.126.88/udp/4001/quic/p2p/12D3KooWAAm17JYqc7fM5qsZAdnkKSqmQF7bqsCKAQ1tww5RQTCA/p2p-circuit/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
		"/ip4/127.0.0.1/tcp/4001/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
		"/ip4/127.0.0.1/udp/4001/quic/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
		"/ip4/192.168.1.233/tcp/4001/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
		"/ip4/192.168.1.233/udp/4001/quic/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
		"/ip4/54.169.103.175/tcp/4001/p2p/12D3KooWR9zzY5ZZnKyvwm5cKTxexPFQ4DZCypAtqZ1sHDm2GtqP/p2p-circuit/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
		"/ip4/54.169.103.175/udp/4001/quic/p2p/12D3KooWR9zzY5ZZnKyvwm5cKTxexPFQ4DZCypAtqZ1sHDm2GtqP/p2p-circuit/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
		"/ip6/64:ff9b::7dfd:7e58/tcp/4001/p2p/12D3KooWAAm17JYqc7fM5qsZAdnkKSqmQF7bqsCKAQ1tww5RQTCA/p2p-circuit/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
		"/ip6/64:ff9b::7dfd:7e58/udp/4001/quic/p2p/12D3KooWAAm17JYqc7fM5qsZAdnkKSqmQF7bqsCKAQ1tww5RQTCA/p2p-circuit/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
		"/ip6/::1/tcp/4001/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV",
		"/ip6/::1/udp/4001/quic/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV"
	],
	"AgentVersion": "kubo/0.15.0/",
	"ProtocolVersion": "ipfs/0.1.0",
	"Protocols": [
		"/ipfs/bitswap",
		"/ipfs/bitswap/1.0.0",
		"/ipfs/bitswap/1.1.0",
		"/ipfs/bitswap/1.2.0",
		"/ipfs/id/1.0.0",
		"/ipfs/id/push/1.0.0",
		"/ipfs/lan/kad/1.0.0",
		"/ipfs/ping/1.0.0",
		"/libp2p/autonat/1.0.0",
		"/libp2p/circuit/relay/0.1.0",
		"/libp2p/circuit/relay/0.2.0/stop",
		"/libp2p/dcutr",
		"/p2p/id/delta/1.0.0",
		"/x/"
	]
}

>  ~ ipfs swarm peers --direction | grep 12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV                
/ip4/54.169.103.175/tcp/4001/p2p/12D3KooWR9zzY5ZZnKyvwm5cKTxexPFQ4DZCypAtqZ1sHDm2GtqP/p2p-circuit/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV outbound
/ip6/2405:4802:90d2:2b80:cddc:e33a:7aa1:c782/udp/4001/quic/p2p/12D3KooWKwx8DRkgFXisEyZQLfu6cPJNrFu77cxEyTY4eeNk9wQV inbound
>  ~ 

You are probably going to need port forwarding on one of your hidden nodes if hole punch can’t work for you.

Our problem is: the user/app should be able to get any file (CID) that another user/app adds/pins, no matter where they are. Do you think this problem can be solved using IPFS hole punching (relay v2), or do you have any suggestions?

I’m not clear on why hole punching doesn’t work for you as it stands; it should. The only other solution is to set up port forwarding (which is always better anyway).


Yes. Do you have any docs that can help us set up port forwarding in IPFS and the app? Basically, we want to get the file directly from the other computer via IPFS :slight_smile:

Unfortunately, port forwarding is set up on your router, so you need to refer to your router’s manual.


I think I found the problem: the swarm listens on IPv6 by default, but that computer does not have IPv6, so the get hangs. My question:

  1. Can we configure it to always use IPv4, or
  2. Force the swarm to listen on IPv4 only? (I tried "/ip4/0.0.0.0/tcp/4001" in Addresses.Swarm but it failed to listen.) Any ideas?

If you look at the run I gave you, your node called me using IPv6, so I’d say it has IPv6 support :grimacing:

Anyway, if you simply remove the ip6 lines from the swarm (and leave in the ip4 lines), it will only listen on IPv4.
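
For example, an IPv4-only Addresses.Swarm would look something like this (mirroring the defaults minus the ip6 entries; newer Kubo versions use quic-v1 instead of quic):

"Swarm": [
  "/ip4/0.0.0.0/tcp/4001",
  "/ip4/0.0.0.0/udp/4001/quic"
]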

When trying to open a connection, the caller will actually try all possible addresses in parallel and use the first one that connects.

Hi @susarlanikhilesh, @JACKYTRSON, was any progress made on this? Is there any way to configure Kubo to use a v1 relay? Please do let me know. I was also able to set up a v2 relay and discover my nodes behind the NAT. But unfortunately, when I tried the netcat example using ipfs p2p forward and ipfs p2p listen given here (kubo/docs/experimental-features.md at master · ipfs/kubo · GitHub), it did not work when the nodes were discovered through the relay.

Hi @ylempereur, I saw you mention that it won’t work with private swarms. Even though this is under a private swarm, oddly enough I’m able to ping the nodes discovered through the relay. It’s just that whatever services are hosted on them are not accessible. Does anyone know how to fix this? If not, can we revert to v1 relay and use the newest Kubo with it?

@master_chief06 Relay v1 has been deprecated for a long time now and replaced with relay v2; I don’t think it’s supported at all anymore. The ping goes through the v2 relay, which is why it works for you. However, anything that actually has to send data, like id for example, has to establish a direct connection, which is done using hole punching (a technique that solves the “NAT at both ends” problem). If that’s not enough for your case, you will have to establish port mapping on at least one end of the connection for it to be successful. Ideally, having port mapping on every node will make connecting a breeze (it’s what I do).
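
If you want to double-check the hole-punching prerequisites on your hidden nodes, the relevant Kubo settings can be set like this (key names per the Kubo config docs; recent versions should already default to these):

ipfs config --json Swarm.EnableHolePunching true
ipfs config --json Swarm.RelayClient.Enabled true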


Hi @ylempereur, thank you for the quick response :blush:. To give some context on my issue:

My scenario: I am trying to do socket communication between 2 nodes which are both behind a NAT. As I mentioned before, my nodes are able to see and ping each other, but not able to do socket communication. Based on your explanation, I understood that pings go through the relay (I have 3 bootstrap nodes hosted on AWS, and one bootstrap node on ngrok), but the data flows through a direct connection. I tried to establish a direct connection using “ipfs swarm connect” and it displays “connected”. But even after this, they are not able to communicate. So then I tried the example mentioned in kubo/docs/experimental-features.md at master · ipfs/kubo · GitHub, which shows how to connect a netcat client to a netcat server in a similar manner to my socket program (to eliminate any socket-programming errors on my side). I guess hole punching has failed for me, as they are not able to communicate. Are there any config errors in my setup?
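
For reference, the netcat test from that doc goes roughly like this (protocol name, port, and $SERVER_ID are the doc’s example values, from memory; check the page for the exact commands):

On the “server” node:
ipfs p2p listen /x/kickass/1.0 /ip4/127.0.0.1/tcp/8080
nc -l 127.0.0.1 8080

On the “client” node:
ipfs p2p forward /x/kickass/1.0 /ip4/127.0.0.1/tcp/8080 /p2p/$SERVER_ID
nc -v 127.0.0.1 8080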

Config of nodes behind NAT:

{
  "API": {
    "HTTPHeaders": {}
  },
  "Addresses": {
    "API": "/ip4/127.0.0.1/tcp/5001",
    "Announce": [],
    "AppendAnnounce": [],
    "Gateway": "/ip4/127.0.0.1/tcp/8080",
    "NoAnnounce": [],
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip6/::/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/quic-v1",
      "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
      "/ip6/::/udp/4001/quic-v1",
      "/ip6/::/udp/4001/quic-v1/webtransport"
    ]
  },
  "AutoNAT": {},
  "Bootstrap": [
    "/ip4/<bootstrap_ip1>/tcp/4001/p2p/12D3KooWJeCNUxFXmD8pqf27HC9h9rbRxcHfAqc61GCUuTvCiUPE",
    "/ip4/<bootstrap_ip2>/tcp/4001/p2p/12D3KooWR8FQ3pAtnXwJgKKJ2tkwrYbgpzZZUpo7aQcLyVPSqoxL",
    "/ip4/<bootstrap_ip3>/tcp/4001/p2p/12D3KooWCc2QDzx75SGCTuuYnWYFunQTBS8Cd3JTHRvsTWJKK74R",
    "/dns/<bootstrap_ip4>/tcp/19295/p2p/12D3KooWQUJu9N1G7vqbDkzPuKEgjo8NNWkPAoTr4hd2sLVyqMfQ"
  ],
  "DNS": {
    "Resolvers": {}
  },
  "Datastore": {
    "BloomFilterSize": 0,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "mounts": [
        {
          "child": {
            "path": "blocks",
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
            "sync": true,
            "type": "flatfs"
          },
          "mountpoint": "/blocks",
          "prefix": "flatfs.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    },
    "StorageGCWatermark": 90,
    "StorageMax": "10GB"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": true
    }
  },
  "Experimental": {
    "FilestoreEnabled": false,
    "Libp2pStreamMounting": true,
    "OptimisticProvide": false,
    "OptimisticProvideJobsPoolSize": 0,
    "P2pHttpProxy": false,
    "StrategicProviding": false,
    "UrlstoreEnabled": false
  },
  "Gateway": {
    "APICommands": [],
    "DeserializedResponses": null,
    "DisableHTMLErrors": null,
    "ExposeRoutingAPI": null,
    "HTTPHeaders": {},
    "NoDNSLink": false,
    "NoFetch": false,
    "PathPrefixes": [],
    "PublicGateways": null,
    "RootRedirect": ""
  },
  "Identity": {
    "PeerID": "12D3KooWFXoGG4a3hcXXaeDc3Qwsg1rbAZoSXjGCkhHaMHE49kht"
  },
  "Internal": {},
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128
  },
  "Migration": {
    "DownloadSources": [],
    "Keep": ""
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Peering": {
    "Peers": null
  },
  "Pinning": {
    "RemoteServices": {}
  },
  "Plugins": {
    "Plugins": null
  },
  "Provider": {
    "Strategy": ""
  },
  "Pubsub": {
    "DisableSigning": false,
    "Router": ""
  },
  "Reprovider": {},
  "Routing": {
    "AcceleratedDHTClient": false,
    "Methods": null,
    "Routers": null,
    "Type": "dht"
  },
  "Swarm": {
    "AddrFilters": null,
    "ConnMgr": {},
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": false,
    "EnableHolePunching": true,
    "RelayClient": {
      "Enable": true
    },
    "RelayService": {},
    "ResourceMgr": {},
    "Transports": {
      "Multiplexers": {},
      "Network": {},
      "Security": {}
    }
  }
}

Config of bootstrap:

{
  "API": {
    "HTTPHeaders": {}
  },
  "Addresses": {
    "API": "/ip4/127.0.0.1/tcp/5001",
    "Announce": [],
    "AppendAnnounce": [
      "/ip4/<bootstrap_ip2>/tcp/4001"
    ],
    "Gateway": "/ip4/127.0.0.1/tcp/8080",
    "NoAnnounce": [],
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip6/::/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/quic-v1",
      "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
      "/ip6/::/udp/4001/quic-v1",
      "/ip6/::/udp/4001/quic-v1/webtransport"
    ]
  },
  "AutoNAT": {},
  "Bootstrap": [
    "/ip4/<bootstrap_ip1>/tcp/4001/p2p/12D3KooWJeCNUxFXmD8pqf27HC9h9rbRxcHfAqc61GCUuTvCiUPE",
    "/ip4/<bootstrap_ip2>/tcp/4001/p2p/12D3KooWR8FQ3pAtnXwJgKKJ2tkwrYbgpzZZUpo7aQcLyVPSqoxL",
    "/ip4/<bootstrap_ip3>/tcp/4001/p2p/12D3KooWCc2QDzx75SGCTuuYnWYFunQTBS8Cd3JTHRvsTWJKK74R",
    "/dns/<bootstrap_ip4>/tcp/19295/p2p/12D3KooWQUJu9N1G7vqbDkzPuKEgjo8NNWkPAoTr4hd2sLVyqMfQ"
  ],
  "DNS": {
    "Resolvers": {}
  },
  "Datastore": {
    "BloomFilterSize": 0,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "mounts": [
        {
          "child": {
            "path": "blocks",
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
            "sync": true,
            "type": "flatfs"
          },
          "mountpoint": "/blocks",
          "prefix": "flatfs.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    },
    "StorageGCWatermark": 90,
    "StorageMax": "10GB"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": true
    }
  },
  "Experimental": {
    "FilestoreEnabled": false,
    "Libp2pStreamMounting": true,
    "OptimisticProvide": false,
    "OptimisticProvideJobsPoolSize": 0,
    "P2pHttpProxy": false,
    "StrategicProviding": false,
    "UrlstoreEnabled": false
  },
  "Gateway": {
    "APICommands": [],
    "DeserializedResponses": null,
    "DisableHTMLErrors": null,
    "ExposeRoutingAPI": null,
    "HTTPHeaders": {},
    "NoDNSLink": false,
    "NoFetch": false,
    "PathPrefixes": [],
    "PublicGateways": null,
    "RootRedirect": ""
  },
  "Identity": {
    "PeerID": "12D3KooWR8FQ3pAtnXwJgKKJ2tkwrYbgpzZZUpo7aQcLyVPSqoxL"
  },
  "Internal": {},
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128
  },
  "Migration": {
    "DownloadSources": [],
    "Keep": ""
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Peering": {
    "Peers": null
  },
  "Pinning": {
    "RemoteServices": {}
  },
  "Plugins": {
    "Plugins": null
  },
  "Provider": {
    "Strategy": ""
  },
  "Pubsub": {
    "DisableSigning": false,
    "Router": ""
  },
  "Reprovider": {},
  "Routing": {
    "AcceleratedDHTClient": false,
    "Methods": null,
    "Routers": null,
    "Type": "dhtserver"
  },
  "Swarm": {
    "AddrFilters": null,
    "ConnMgr": {},
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": false,
    "EnableHolePunching": true,
    "RelayClient": {},
    "RelayService": {
      "Enable": true
    },
    "ResourceMgr": {},
    "Transports": {
      "Multiplexers": {},
      "Network": {},
      "Security": {}
    }
  }
}

Right now, I understand all data flows through port 4001 on every node (even if netcat/any services use different ports, the custom protocol name given in the example highlighted in the link I shared above forwards this to the appropriate port using the ipfs p2p forward/ipfs p2p listen commands).

Can you please explain how port mapping solves the hole-punching issue? Also, can you please let me know how port mappings are done, or point me to a document on how to get it done?

@master_chief06 Port mapping is a feature of your Router/Firewall, it is not done on your node. It tells your router to map an external port (world visible) to an internal node and port (the local IP of your node and the port it is listening on), so that any traffic coming into the external port is automatically routed to your internal node.
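
As a concrete sketch (all addresses here are placeholders): in the router’s admin UI, forward external TCP and UDP port 4001 to your node’s LAN IP on port 4001. If the public IP is stable, you can additionally tell Kubo to announce the mapped address so peers learn it:

ipfs config --json Addresses.AppendAnnounce '["/ip4/<public_ip>/tcp/4001", "/ip4/<public_ip>/udp/4001/quic-v1"]'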

Oh yeah, I see. So I can map an external port to the internal port 4001, but it really does not solve the NAT issue with mobile/GSM networks. It also does not work in scenarios where the router is not under the user’s control…

But do you know why my hole-punching config did not work?

@master_chief06 Hole punching is a horrible kludge and only works some of the time; it is an abuse of the way NAT protocols are implemented, and NAT itself is an abuse of the way the internet was intended to work. It’s a miracle it works at all.

Unfortunately, until people stop using NAT (fat chance, unless we really start to migrate over to IPv6 (all my nodes support IPv6, btw)), there really isn’t another solution than establishing port mapping. Another “solution” is to rely on UPnP (it’s an automatic port mapping), which Kubo supports, but it is a security threat to have it enabled, and I wouldn’t do that myself.

The bottom line is, if you can’t have port mapping configured on at least one side of the connection, you have to rely on hole punching, which works some of the time.


Hi all, I’ve only skimmed through the thread but want to ask something:

Have you tried using the relay daemon (GitHub - libp2p/go-libp2p-relay-daemon: A standalone libp2p circuit relay daemon providing relay service for version v2 of the protocol.) as a v2 relay? One relay can serve multiple Kubo nodes (they should be configured to advertise relay addresses).

The relay daemon has a RelayLimit configuration option that defaults to a max of 128kB per stream, but iirc setting it to 0 should allow unlimited data transfer, akin to how relay v1 worked.
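
If you try that, the relevant bit of the daemon’s JSON config would look something like this (a sketch based on my reading of the RelayV2 structs in the daemon’s config.go; double-check the exact field names against the repo):

{
  "RelayV2": {
    "Enabled": true,
    "Resources": {
      "Limit": null
    }
  }
}

(A null Limit should remove the per-stream cap entirely in the v2 relay implementation; a 0 value may behave the same, worth testing.)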

There’s some additional setup work, like protecting connections to relays, etc. But I think it might work unless I’m missing something.
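
On the Kubo side, pinning nodes to a specific relay and protecting that connection can be done along these lines (relay IP and peer ID are placeholders):

ipfs config --json Swarm.RelayClient.StaticRelays '["/ip4/<relay_ip>/tcp/4001/p2p/<relay_peer_id>"]'
ipfs config --json Peering.Peers '[{"ID": "<relay_peer_id>", "Addrs": ["/ip4/<relay_ip>/tcp/4001"]}]'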


Thank you @ylempereur, yeah, that’s correct. So UPnP does not require you to have access to the router? It does the port forwarding from Kubo itself? That’s intriguing. I have heard of Universal Plug and Play but have never used it or learned how it works. So it has security loopholes… I guess if nothing works, I’ll go back to relay v1 with go-ipfs.

UPnP is something you have to enable on your router (that’s the dangerous part), and then enable on Kubo (it’s enabled by default). If both are enabled, Kubo will establish a port mapping automatically.
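
On the Kubo side, the switch is the NAT port-mapping option; leaving it false keeps UPnP/NAT-PMP mapping enabled (the default):

ipfs config --json Swarm.DisableNatPortMap false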


Thank you @hector :blush: for the response. I had tried to use the relay daemon before. I downloaded the v0.4.0 source from GitHub and ran the main file with the -swarmkey flag, using the default config that comes from config.go (I saw the v2 relay is enabled by default).

go run main.go -swarmkey ../../../.ipfs/swarm.key
PSK detected, private identity: b23eba110a6344e91db60f979e5007c9
I am 12D3KooWDNYyXGWk7mxG6w1fUmLKVgd7jcW428LgbfWnsfmcANLk
Public Addresses:
Registering pprof debug http handler at: http://localhost:6060/debug/pprof/
Starting RelayV2...
RelayV2 is running!

Then I added the relay daemon’s peer ID to my nodes behind the NAT as a bootstrap node, like:

ipfs bootstrap add /ip4/publicipv4/tcp/4001/p2p/12D3KooWDNYyXGWk7mxG6w1fUmLKVgd7jcW428LgbfWnsfmcANLk

I saw my nodes could discover the relay node, but they were not able to establish a p2p-circuit connection to discover each other.
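
For reference, this is the kind of circuit dial I expected to work between them (the relay’s public IP and the target peer ID are placeholders):

ipfs swarm connect /ip4/<relay_public_ip>/tcp/4001/p2p/12D3KooWDNYyXGWk7mxG6w1fUmLKVgd7jcW428LgbfWnsfmcANLk/p2p-circuit/p2p/<other_node_peer_id>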

Then I ran ipfs stats dht on the nodes and saw that they did not add the relay node to their DHT, whereas the Kubo bootstrap nodes were added.

So do I have to change the config from the default? Also, doesn’t this one use the same hole punching as a Kubo bootstrap node? If so, how is it different from running a Kubo node as the bootstrap? Also, I have port 4001 open on my relay node; do I need to open any other ports?

I will try to run it again and see if it works. If I missed any steps, please do let me know :blush:

Ah, I see, but I don’t control the router I use :sweat_smile:. Also, I wanted a solution for GSM networks, which don’t offer any control over router settings. The saddest thing is that relay v1 was so reliable and solved all these problems… but I guess v2 puts less load on the relays. I will revert to go-ipfs and use relay v1 if nothing works out.