IPFS, NAT and k8s

Hi everyone,

I have a k8s cluster running several pods, each with an IPFS node inside. In order to peer explicitly with a single IPFS container from outside the cluster, we came up with a combination of pod cluster IP + pod cluster port, along with some rules to prevent these pods from being moved around on restart. Using this information we have successfully peered within the cluster itself.

Which looks something like this:

$ ks describe pod ipfs-1 | grep Node:
Node: ip-192-xyz…us-west-2.compute.internal/192-xyz

$ k describe node ip-192-xyz…us-west-2.compute.internal | grep ExternalIP:
ExternalIP: 44.222.33.555

$ ks get svc | grep ipfs-1
ipfs-1 NodePort 10.100.180.231 4001:32639/TCP,5001:32058/TCP, 80m

Putting it all together,

/ip4/<external IP>/tcp/<NodePort>/ipfs/<peer ID>

/ip4/44.222.33.555/tcp/32639/ipfs/QmVyKLpva…

Peering with the above multiaddress from outside the cluster works perfectly.
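For context, the Service behind this looks roughly like the sketch below. The name, selector label, and pinned nodePort values here are illustrative assumptions, not the exact manifest from our cluster:

apiVersion: v1
kind: Service
metadata:
  name: ipfs-1
spec:
  type: NodePort
  selector:
    app: ipfs-1              # assumed pod label
  ports:
    - name: swarm            # libp2p swarm port
      port: 4001
      targetPort: 4001
      nodePort: 32639        # pinned so the externally reachable port stays stable
      protocol: TCP
    - name: api              # IPFS HTTP API
      port: 5001
      targetPort: 5001
      nodePort: 32058
      protocol: TCP

Pinning nodePort explicitly (it must fall in the default 30000-32767 range) keeps the external multiaddress valid even if the Service is recreated.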

However, this means that propagation to the public gateways is… still an issue, since the addresses announced by 'ipfs id' only contain internal values. Explicit attempts to call 'ipfs dht provide <key>' are still not propagated externally.

Ex. of ipfs id:
"/ip4/127.0.0.1/tcp/4001/ipfs/QmVyKLpva…",
"/ip4/192.168.3.7/tcp/4001/ipfs/QmVyKLpva…"

Is there a recommended approach for this specific scenario?

You could set your "Announce" address to "/ip4/44.222.33.555/tcp/32639/ipfs/QmVyKLpva…" in the Addresses section of the IPFS config file.

Thanks for the help Jim! This actually worked perfectly for what we needed to do!

Minor note is that the Announce config doesn't expect the peer ID:
"Announce": ["/ip4/44.222.33.555/tcp/32639"] is what worked
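For anyone following along, one way to apply that (assuming the same external IP and NodePort as above) is via the ipfs config command, then restarting the daemon:

$ ipfs config --json Addresses.Announce '["/ip4/44.222.33.555/tcp/32639"]'
$ ipfs shutdown    # or simply restart the pod; the daemon re-reads the config on start

After the restart, 'ipfs id' should list the external address alongside the internal ones.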

I'm trying something similar, but running into an error about "failed to negotiate security protocol".

First I tried using a GCP LoadBalancer service in order to preserve the 4001 port.
I updated the Announce address with the LoadBalancer address:

/ # ipfs swarm addrs local
/ip4/35.223.117.213/tcp/4001

Then, from another node:
$ ipfs dht findpeer Qmb4n3WteiFydckcw8bCpgmi7xXU6F8hweDddUaxwJXeau
/ip4/35.223.117.213/tcp/4001

$ ipfs swarm connect /ip4/35.223.117.213/tcp/4001/ipfs/Qmb4n3WteiFydckcw8bCpgmi7xXU6F8hweDddUaxwJXeau
Error: connect Qmb4n3WteiFydckcw8bCpgmi7xXU6F8hweDddUaxwJXeau failure: failed to dial : all dials failed

  • [/ip4/35.223.117.213/tcp/4001] failed to negotiate security protocol: read tcp4 165.22.38.112:4001->35.223.117.213:4001: read: connection reset by peer

So I switched to a NodePort and updated the Announce config:

/ # ipfs swarm addrs local
/ip4/104.197.190.12/tcp/30641

Then, from another node:
$ ipfs dht findpeer Qmb4n3WteiFydckcw8bCpgmi7xXU6F8hweDddUaxwJXeau
/ip4/104.197.190.12/tcp/30641

$ ipfs swarm connect /ip4/104.197.190.12/tcp/30641/ipfs/Qmb4n3WteiFydckcw8bCpgmi7xXU6F8hweDddUaxwJXeau
Error: connect Qmb4n3WteiFydckcw8bCpgmi7xXU6F8hweDddUaxwJXeau failure: failed to dial : all dials failed

  • [/ip4/104.197.190.12/tcp/30641] failed to negotiate security protocol: read tcp4 165.22.38.112:4001->104.197.190.12:30641: read: connection reset by peer

Any idea what's failing to negotiate and why my Kubernetes node is resetting the connection?

Thanks!
Ben

Hey Ben, did you manage to resolve the issue? I'm hitting the same problem.

No, I designed around this instead and used a non-Kubernetes cluster for public connections.

I've been meaning to retry with the new release (now that I know a bit more).

In case it's useful, I've been able to swarm connect successfully after changing the externalTrafficPolicy on the NodePort to Local.
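Concretely, something along these lines (ipfs-1 here is just a placeholder for your Service name):

$ kubectl patch svc ipfs-1 -p '{"spec":{"externalTrafficPolicy":"Local"}}'

With externalTrafficPolicy: Local, traffic hitting the NodePort is only delivered to pods on that same node and the client source IP is preserved, rather than being SNAT'd and possibly forwarded to another node, which seems to be what was interfering with the security handshake.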

Some more info on that: