@master_chief06 What we are dealing with, from a high-level perspective, is a collision of intents. Let me explain:
The internet was initially designed so that any node could connect to any other node, each having a public IP address. This is what allows peer-to-peer networks to work, and is how IPv6 still works. However, the IPv4 network started to run out of IP addresses, and people were dragging their feet on switching to IPv6 (they have been for 25 years), so a stopgap solution was created.
They realized that the network had evolved into a client/server model, where the vast majority of nodes were just clients to a few (relatively speaking) server nodes. So, they could devise a new protocol that would allow many more client nodes to be created, as long as those nodes never acted as servers themselves. Enter NAT.
However, since those “client” nodes are no longer reachable, the concept of a peer-to-peer connection is no longer possible. Two client nodes cannot reach each other anymore.
Well, IPFS is a peer-to-peer network; it relies on nodes establishing direct connections with each other. So, as long as you are using server nodes, everything is fine. In fact, if at least one of the two nodes involved is a server node, everything is still fine. But, if both nodes are “client” nodes, you are screwed.
All the other things we’ve talked about are attempts at getting around that basic fact. Some work better than others. But, in the end, NAT has to die, and the only way for that to happen is to finally migrate to IPv6.
Thanks @ylempereur, I am aware of the NAT and IPv4 conflict. Well, can’t wait for IPv6 (at this point we need IPv8) to roll out more widely. Until then, I have to use relays that can pass the data through. Will try relay v1, which is my last-ditch effort if nothing else works.
Right, I remember now that there is code in the DHT to filter out relay addresses. Standard peers will not store peer-routing information for /relay/ multiaddresses. I don’t have a good solution for this, although perhaps you can configure your peers to use a custom discovery mechanism (like https://cid.contact/) that does help discovery, or you’ll need to hardcode their addresses, or you need to reliably manage to hole-punch/expose your peers (and then you don’t need relays).
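If you go the hardcoding route, a minimal go-libp2p sketch looks like this (the circuit client transport is enabled by default in recent go-libp2p; the /p2p-circuit multiaddr passed on the command line is a placeholder for whatever address your relay setup produces):

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/peer"
	ma "github.com/multiformats/go-multiaddr"
)

func main() {
	// Usage: go run dial.go /ip4/<relay-ip>/tcp/4001/p2p/<relayID>/p2p-circuit/p2p/<targetID>
	h, err := libp2p.New()
	if err != nil {
		panic(err)
	}
	defer h.Close()

	addr, err := ma.NewMultiaddr(os.Args[1])
	if err != nil {
		panic(err)
	}
	info, err := peer.AddrInfoFromP2pAddr(addr)
	if err != nil {
		panic(err)
	}
	// Dials the target through the relay; no DHT lookup involved.
	if err := h.Connect(context.Background(), *info); err != nil {
		panic(err)
	}
	fmt.Println("connected via relay to", info.ID)
}
```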
One thing is that you can reach your peers over relayed connections (for the purpose of any multistream protocols). The relay daemon is the relay that lets you configure the default data limits, so that the connection isn’t interrupted the way it would be with a normal Kubo node acting as the relay.
Another thing is when EnableHolePunching (along with RelayClient and perhaps AutoNAT) is enabled in a Kubo node, in which case the nodes may use your relays, or other open relays, for hole punching, which is a specific protocol triggered via relays. But I think this requires your peers to be aware of their external IPs at least, as otherwise you will still have no DHT routing. I’m not super confident about how all this works though.
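For reference, these Kubo settings map roughly onto go-libp2p host options. A hedged sketch of the equivalent setup in plain go-libp2p (the relay multiaddr on the command line is a placeholder for one of your own relays):

```go
package main

import (
	"fmt"
	"os"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
)

// newHolePunchingHost builds a host that reserves slots on the given relays
// and attempts hole punching (DCUtR) when dialing NATed peers.
func newHolePunchingHost(relays []peer.AddrInfo) (host.Host, error) {
	return libp2p.New(
		libp2p.EnableHolePunching(),                    // DCUtR; needs a relayed connection to signal over
		libp2p.EnableAutoRelayWithStaticRelays(relays), // reserve on your own relays instead of discovered ones
		libp2p.NATPortMap(),                            // also try UPnP/NAT-PMP port mapping
	)
}

func main() {
	// Placeholder: pass one of your relays' full multiaddrs on the command line.
	ai, err := peer.AddrInfoFromString(os.Args[1])
	if err != nil {
		panic(err)
	}
	h, err := newHolePunchingHost([]peer.AddrInfo{*ai})
	if err != nil {
		panic(err)
	}
	defer h.Close()
	fmt.Println("host ready:", h.ID())
}
```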
Everything should travel on port 4001.
I don’t fully understand whether, as of today, relay v1 would be any different from v2 without a data cap. Both should lack DHT peer routing, IIRC.
@hector, thank you so much for your answer. I have a few queries; if you don’t mind, could you please clarify them?
Is getting the client nodes added to the DHT necessary for the relay nodes to serve their purpose? I mean, if the DHT was intentionally made to ignore relay addresses, then there must be a reason for that, right?
Also, is the default configuration from config.go sufficient for relay nodes to act as relays? (I also added the swarm key.)
Also, I tried using v1 relays too, and even they are not working for me (I used one earlier for a project, which is why I thought it might work, but like you said, it makes no difference here). The v1 relays are not even able to establish a p2p-circuit, while the v2 relays were at least able to make the nodes discoverable under a p2p-circuit address. So I’m currently back to square one. I think it’s just that hole punching isn’t working for my setup. I noticed that when running the v2 relay, the public address list is empty:
```
go run main.go -swarmkey ../../../swarm.key
PSK detected, private identity: b23eba110a6344e91db60f979e5007c9
I am 12D3KooWJSnYi9G8kx2vh1f1WGr3oXinVHxDX1y2JPQtBeBCDCLa
Public Addresses:
Registering pprof debug http handler at: http://localhost:6060/debug/pprof/
Starting RelayV2...
RelayV2 is running!
```
Could this be an issue? If yes, do you know where I can hardcode the public IP? (I am using the default config that is returned by config.go.)
What stood out in your comment was that the nodes should be aware of their external IP. So, in AppendAnnounce I can give my external IP (the router’s), but then I will also need to provide port 4001 there. If that port is not open on the router, would it fail (unfortunately, I don’t control the router of the network I’m using)? Because there has to be a port mapping, as @ylempereur mentioned, or else the traffic won’t get routed to my PC, right? Is there another way to make the node aware of its external IP?
Also, I read through the process of how hole punching works in libp2p, but is there a way to debug this? Are there logs that indicate what stage my node is at in terms of establishing a hole punch?
The nodes not getting added to the DHT are the relayed ones. The relay nodes themselves will have an /ip4/xxxx/p2p/yyyy entry in the DHT and can be looked up by peer ID. Addresses like /ip4/xxxx/p2p/yyyy/p2p-circuit/p2p/zzzzz, however, are not stored, so if peer zzzzz’s only “public” endpoint is a relay address, that address cannot be looked up via the DHT.
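To illustrate the filtering (a hedged sketch, not the DHT’s actual code): detecting a relayed multiaddr comes down to checking for the /p2p-circuit component, and peer records whose only addresses match are simply not stored:

```go
package main

import (
	"fmt"

	ma "github.com/multiformats/go-multiaddr"
)

// isRelayAddr reports whether a multiaddr routes through a relay,
// i.e. contains a /p2p-circuit component.
func isRelayAddr(a ma.Multiaddr) bool {
	_, err := a.ValueForProtocol(ma.P_CIRCUIT)
	return err == nil
}

func main() {
	direct := ma.StringCast("/ip4/203.0.113.10/tcp/4001")              // stored by the DHT
	relayed := ma.StringCast("/ip4/203.0.113.10/tcp/4001/p2p-circuit") // filtered out
	fmt.Println(isRelayAddr(direct), isRelayAddr(relayed))             // false true
}
```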
The default relay-daemon is sufficient, but it has the data cap. For fully relayed, unlimited connections, at least a custom RelayLimit needs to be provided to override the default limit.
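For example, a hedged sketch using go-libp2p’s circuit v2 relay package directly, assuming a recent go-libp2p where a nil Limit disables the duration/data caps (in the relay daemon, the same knob sits under the RelayV2 resources in config.go):

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	relayv2 "github.com/libp2p/go-libp2p/p2p/protocol/circuitv2/relay"
)

func main() {
	h, err := libp2p.New(
		libp2p.ListenAddrStrings("/ip4/0.0.0.0/tcp/4001"),
	)
	if err != nil {
		panic(err)
	}
	defer h.Close()

	rc := relayv2.DefaultResources()
	rc.Limit = nil // nil RelayLimit = relayed connections are never cut off

	if _, err := relayv2.New(h, relayv2.WithResources(rc)); err != nil {
		panic(err)
	}
	fmt.Println("unlimited v2 relay running as", h.ID())
	select {} // keep serving
}
```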
Right, because Kubo itself does not support being a v1-relay client anymore.
Hole punching could work without you needing to run your own relays. I’m not even sure your Kubo peers would use your relays for that task, or just other relays they find in the network.
Right, I’m guessing this might make your relays themselves not discoverable, but I’m not sure whether there is a mechanism by which they would automatically learn their addresses later as they bootstrap. In any case, they have a cfg.Network.AnnounceAddrs section where their public addresses can be hardcoded, similar to Kubo.
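Under the hood, that setting amounts to an address factory on the host. A hedged go-libp2p sketch (203.0.113.10 is a placeholder for the relay’s real public IP):

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	ma "github.com/multiformats/go-multiaddr"
)

func main() {
	// The public address we want the relay to advertise.
	public := ma.StringCast("/ip4/203.0.113.10/tcp/4001")

	h, err := libp2p.New(
		libp2p.ListenAddrStrings("/ip4/0.0.0.0/tcp/4001"),
		// Append the hardcoded public address to whatever
		// the host would announce on its own.
		libp2p.AddrsFactory(func(addrs []ma.Multiaddr) []ma.Multiaddr {
			return append(addrs, public)
		}),
	)
	if err != nil {
		panic(err)
	}
	defer h.Close()
	fmt.Println("announcing:", h.Addrs())
}
```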
AutoNAT does this for you (see https://github.com/ipfs/kubo/blob/master/docs/config.md) and is enabled by default, so there is nothing to do manually there. You can tell your Kubo node to announce /ip4/xxxx/p2p/yyyy/p2p-circuit/p2p/zzzzz, but since this won’t be stored by the DHT, I’m not sure it gets you anywhere regarding discoverability. That said, you should be able to ipfs swarm connect from anywhere to such an address, and then ipfs get big files fully through the relayed connection, without the interruption after a few kBs.
Also, I’m not sure, but it may be that hole punching is only implemented over UDP/QUIC and not TCP at this point. So you should play a bit with the transports your Kubo peers are using (i.e. enable just QUIC and nothing else, or just TCP and nothing else) and see what happens: “does the node become aware of its external IPs” is a pretty good first question.
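In plain go-libp2p, restricting a host to a single transport looks roughly like this (a hedged sketch; in Kubo itself, the equivalent toggles live under Swarm.Transports.Network in the config):

```go
package main

import (
	"fmt"

	"github.com/libp2p/go-libp2p"
	libp2pquic "github.com/libp2p/go-libp2p/p2p/transport/quic"
)

func main() {
	// QUIC-only host: no TCP transport is registered at all.
	h, err := libp2p.New(
		libp2p.Transport(libp2pquic.NewTransport),
		libp2p.ListenAddrStrings("/ip4/0.0.0.0/udp/4001/quic-v1"),
	)
	if err != nil {
		panic(err)
	}
	defer h.Close()
	fmt.Println("listening on:", h.Addrs())
}
```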
You don’t have access to your router, and it may well actively prevent hole punching; but other than that, the hole punching protocol relies on some timing magic, and even if you do everything right, it may just not work.