How "ipfs daemon" is detecting node NAT public address

On the same LAN, I see the external router IP in .Addresses[] on some of my nodes, but not on others:

ipfs id | jq -r .Addresses[]

Is this UPnP-related?
Is there any command or startup option for a node to detect its NAT conditions?

thanks

UPnP, NAT-PMP, or address discovery via the identify protocol (where your node dials outside nodes and asks them: “hey, what is my IP, please?”).
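
If you want to check whether the daemon is even allowed to create those UPnP/NAT-PMP mappings, the relevant go-ipfs/kubo config key (assuming a reasonably recent version) is Swarm.DisableNatPortMap; a quick sketch:

# show whether port mapping is disabled (false means UPnP/NAT-PMP mapping is allowed)
ipfs config Swarm.DisableNatPortMap

# explicitly allow port mapping, then restart the daemon so it takes effect
ipfs config --json Swarm.DisableNatPortMap false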


So the Addresses list is updated automatically?
How could I find out this address from the command line?

I don’t see why ipfs id | jq -r ".Addresses[$x]" wouldn’t be fine.
Otherwise, maybe try upnpc from MiniUPnP.
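
If the goal is just to spot an external address among the results, a rough filter along these lines works (the grep pattern below only excludes loopback and the common RFC 1918 private IPv4 ranges, nothing more):

# print only addresses that are not obviously private/loopback IPv4
ipfs id | jq -r '.Addresses[]' \
  | grep -Ev '/ip4/(10\.|127\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)'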

On some NATed nodes the “external IP” is populated, on others it is not…

Since I want to bring these NATed nodes into the same swarm, one has to send its address to the other so that it can be used with the ipfs swarm connect command.
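
For reference, what I have in mind is roughly this (the external IP and peer ID below are placeholders):

# on the node that does show an external address: list its multiaddrs
ipfs id | jq -r '.Addresses[]'

# on the other node: dial it explicitly using one of those public multiaddrs
ipfs swarm connect /ip4/<external-ip>/tcp/4001/p2p/<peer-id>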

Since ipfs is able to discover the external IP and NATed port on some of my nodes, I wonder why it cannot on the others.

I take it this behaviour (having an external address or not) is consistent for each node in question?

If so, it would probably help to know how the network behaves when faced with UPnP and NAT-PMP requests at each of these nodes. According to the libp2p documentation, IPFS nodes can perform port forwarding and address discovery using UPnP and NAT-PMP. Additionally, there is a set of libp2p-internal protocols for external address discovery (the identify protocol, akin to STUN), for testing NAT hole-punching techniques (AutoNAT), and for forwarding through remote nodes (Circuit Relay, akin to TURN).
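
If it helps, on recent kubo/go-ipfs releases those pieces map to config options you can inspect on the node that never gets an external address (the key names below are from current kubo; older releases exposed some of this as experimental flags):

# hole punching (relies on AutoNAT and Circuit Relay under the hood)
ipfs config Swarm.EnableHolePunching

# dialling through public relays when the node detects it is unreachable
ipfs config Swarm.RelayClient.Enabled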

For starters, it would be good if you could run upnpc -l and natpmpc to check whether external address discovery works with either of these two protocols. You can also use something like upnpc -a "<local IP address>" 7779 7779 TCP and natpmpc -a 7778 7778 TCP to check whether external port forwarding is available with each of these protocols, as sketched below. If any of these tools reports an internal address as its "external address", that hints at a double-NAT configuration, which is generally very hard to break out of.
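
A minimal sketch of that check, assuming the MiniUPnP client tools are installed (the LAN address and ports below are placeholders):

# ask the gateway for its external IP and any existing mappings
upnpc -l        # UPnP: prints ExternalIPAddress if the router answers
natpmpc         # NAT-PMP: with no arguments it only queries the public address

# try to create a throwaway test mapping with each protocol
upnpc -a 192.168.1.50 7779 7779 TCP   # replace 192.168.1.50 with the node's LAN IP
natpmpc -a 7778 7778 TCP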