Hi, I’ve been trying for a few days now to set up a private IPFS network using a swarm.key, so that all the nodes are isolated in their own private network.
I’ve set up one node on my local machine, a second node dockerized on my local machine, and a third node on a cloud service (also dockerized).
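In case it helps anyone reproducing this, generating and distributing the swarm.key is roughly the sketch below (assuming openssl is available; the key file must be byte-identical on every node and live in the repo root, i.e. ~/.ipfs or whatever IPFS_PATH points at inside the containers):

```bash
# Generate a 32-byte pre-shared key in the swarm.key format
{
  echo "/key/swarm/psk/1.0.0/"
  echo "/base16/"
  openssl rand -hex 32
} > swarm.key

# Copy the exact same file into every node's repo root
cp swarm.key ~/.ipfs/swarm.key   # or "$IPFS_PATH/swarm.key" inside the containers

# Optional: make the daemon refuse to start without a private network key
export LIBP2P_FORCE_PNET=1
```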
I’ve confirmed the ports are open and I can ping each node from inside the others. However, if I netcat into port 4001 (the one that handles the swarm connections), I get binary gibberish that I cannot paste here because the post gets flagged, so I’ll share a screenshot instead.
![image](https://us1.discourse-cdn.com/flex020/uploads/ipfs1/original/2X/c/cf2f370eec8efeaf79126199ff4047406f2cc18f.png)
On the other hand, when configuring all the nodes, I get the correct initialization using the swarm.key and they are all in private mode. I remove all bootstrap nodes and configure the connections manually from scratch. After adding node 0 as a bootstrap peer on every other node, I try to connect using ipfs swarm connect; it doesn’t matter from where I try, I just can’t. I get the following errors:
```
Error: connect failure: failed to dial: failed to dial : no good addresses
- [/ip4//tcp/4001] gater disallows connection to peer
- [/ip4//tcp/4001] gater disallows connection to peer
```
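For reference, the manual setup described above corresponds roughly to these commands (the IP and peer ID below are placeholders, not my real values):

```bash
# On every node: drop the default public bootstrap list
ipfs bootstrap rm --all

# On nodes 1..n: add node 0 as the only bootstrap peer
ipfs bootstrap add /ip4/<node0-ip>/tcp/4001/p2p/<node0-peer-id>

# Then dial it explicitly
ipfs swarm connect /ip4/<node0-ip>/tcp/4001/p2p/<node0-peer-id>
```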
Some help would be much appreciated.
Thanks!
Do you have addrFilters set in the config, and is the peer’s IP in one of them? E.g., private ranges are filtered when a node is initialized with the server profile.
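For anyone following along, the active filters can be inspected like this (the ranges shown are roughly what the server profile adds; the exact list depends on the Kubo version):

```bash
# Show the currently configured address filters
ipfs config Swarm.AddrFilters

# Typical output on a server-profile node: private ranges are filtered,
# so LAN/container peers will never be dialed, e.g.
# [
#   "/ip4/10.0.0.0/ipcidr/8",
#   "/ip4/172.16.0.0/ipcidr/12",
#   "/ip4/192.168.0.0/ipcidr/16",
#   ...
# ]
```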
I do have addrFilters set, but I did not set them manually; should I have?
And no, the IP addresses of the other containers are not in the main node’s filters or in the others’.
I have removed all addrFilters from both nodes (even though the other nodes’ IPs were not being filtered, or at least I thought so).
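If anyone needs it, clearing the filters can be done with the config command instead of editing the config file by hand (the daemon only reads the config at startup, so restart it afterwards):

```bash
# Remove all address filters so private/container IPs can be dialed
ipfs config --json Swarm.AddrFilters '[]'

# Restart the daemon for the change to take effect
ipfs shutdown
ipfs daemon
```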
Now I’m getting the following error when trying to connect to one of the nodes.
```
Error: connect failure: failed to dial: failed to dial : all dials failed
- [/ip4//tcp/4001] failed to negotiate security protocol: incoming message was too large
- [/ip4//tcp/4001] dial tcp4 0.0.0.0:4001->10.99.2.2:4001: i/o timeout
```
Also, should node 0 be initialized in server mode? When I connected the nodes in non-private mode, I just had them all as regular nodes.
I’ve also set the nodes to debug mode so I can see the logs. When trying to connect from one node, the other one spits out this log:
```
2024-05-09T05:59:48.024Z DEBUG upgrader upgrader/listener.go:126 accept upgrade error: failed to negotiate security protocol: incoming message was too large (/ip4//tcp/4001 ↔ /ip4//tcp/57156)
```
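For anyone wanting to reproduce the logging, this is roughly how the debug output can be enabled (GOLOG_LOG_LEVEL is the go-log environment variable; the subsystem name in the second command is taken from the log line above and may vary between versions):

```bash
# Enable debug logging for the whole daemon via the go-log env var
GOLOG_LOG_LEVEL="debug" ipfs daemon

# Or raise the level of a single subsystem on an already-running daemon
ipfs log level upgrader debug
```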
I finally managed to solve the issue.
I set both my local node and the external ones as DHT servers, then restarted the whole process from scratch just to be sure everything was correct. I got the nodes connected properly!
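For completeness, switching a node to DHT server mode is a one-line config change ("dhtserver" is an accepted Routing.Type value in Kubo; restart the daemon afterwards):

```bash
# Make this node act as a full DHT server instead of a client
ipfs config Routing.Type dhtserver

# Restart so the new routing mode takes effect
ipfs shutdown
ipfs daemon
```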
I think I’ve seen this error when the private network key (swarm.key) is different between nodes.
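A quick way to rule that out is to hash the key on every node and compare the digests (the path below assumes the default repo location, or IPFS_PATH inside the containers):

```bash
# Run on each node/container; the digests must be identical everywhere
sha256sum "${IPFS_PATH:-$HOME/.ipfs}/swarm.key"
```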