Running IPFS and IPFS Cluster as a single process or binary

Hi @hector ,

How easy or difficult would it be to run ipfs and ipfs-cluster as part of a single process?
We have a wrapper process that talks to ipfs and ipfs-cluster in order to add, pin, and read files from IPFS. Now we want some kind of authentication so that only our binary can talk to ipfs and ipfs-cluster, and nobody else.

We are thinking of combining our binary, ipfs, and ipfs-cluster, and calling the endpoints inside ipfs and ipfs-cluster directly.

I would call it a difficult endeavor.

Also, I am not sure it is the best way to provide the isolation you want. Would it be better to isolate the machine on which they run?

Thank you for the response.
What about this option: forking the IPFS and ipfs-cluster code and adding code to authenticate the ID sent by our binary? We are basically using the add/read/pin/status/shards calls.

What is your attack scenario?

If someone gets access to the node, they can directly interact with the IPFS and ipfs-cluster processes and continuously add, pin, and unpin files, putting load on the processes. To avoid this we are thinking of putting some authentication into the IPFS and ipfs-cluster binaries themselves.

If someone accesses your node they can do many things. Having authentication between cluster and ipfs running locally does not help much.

We are trying to build a decentralized, open storage network with the ability to create private networks, and then to build applications on top of it that support multiple encryption schemes for file writes. One scenario is that anyone can bring up their own node with storage and connect it to an already existing IPFS private network. We will incentivize them based on space-time for the content they store.

Since they control the node, we want only our binary running on their node to be able to talk to the IPFS and Cluster processes running there, so that they cannot interact with the processes directly and harm the entire network. We are already protecting the swarm key and cluster secret by encrypting the configuration files.

Could you please point out which of the many things a node owner can do might affect the network they are part of? I am aware that they can do nasty things to their own node, but can they harm the network they are part of in any way?

If the code is running on the user's machine, they will potentially have access to the private network swarm key, the configuration, or the cluster secret, even if they have to extract it from memory after decryption while the process runs.

Bundling or authenticating the local APIs is just obfuscating things at best.

The way to approach these scenarios in distributed networks is not "how to prevent node X from doing something bad", but rather "how to avoid any issues on my node [and any other node] when node X does something bad".

Thanks for the suggestion, Hector; we will keep your inputs in mind.

I am currently trying to call the ipfs-cluster Pin method in /go/src/moibit-go-ipfs-cluster/rpc_api.go:

// Pin runs Cluster.pin().
func (rpcapi *ClusterRPCAPI) Pin(ctx context.Context, in *api.Pin, out *api.Pin) error {
	// We do not call the Pin method directly since that method does not
	// allow to pin other than regular DataType pins. The adder will
	// however send Meta, Shard and ClusterDAG pins.
	fmt.Println(" ClusterRPCAPI: PIN==>")
	fmt.Println(" context value received:", ctx.Value("value"))
	pin, _, err := rpcapi.c.pin(ctx, in, []peer.ID{})
	if err != nil {
		return err
	}
	*out = *pin
	return nil
}
I see the context value printed properly on the client side just before the HTTP request is sent to the ipfs-cluster process, but it comes back nil inside the ipfs-cluster process.

While calling Pin from the client I am setting a value in the context, but at the receiving end, i.e. in the ipfs-cluster process, I am not able to get this value. Am I missing anything? Ideally I should be able to pass any value as part of the context, right?

The context as you set it on the HTTP client method does not travel with the HTTP request. Context values live only in the local process; if you need to transmit a value to the server, you have to send it explicitly, e.g. in a header or in the request body.
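A minimal sketch of this behavior, using a throwaway test server (the key and header names are made up for illustration): the value attached with `context.WithValue` on the client never reaches `r.Context()` on the server, while a value copied into a header does.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

type ctxKey string

// callWithContext starts a throwaway server, sets a context value on the
// client request, and returns what the server actually saw.
func callWithContext() string {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// r.Context() is a fresh server-side context; the value set via
		// context.WithValue on the client is not serialized into the request.
		v := r.Context().Value(ctxKey("value"))
		fmt.Fprintf(w, "ctx value: %v, header: %s", v, r.Header.Get("X-My-Value"))
	}))
	defer srv.Close()

	ctx := context.WithValue(context.Background(), ctxKey("value"), "secret")
	req, _ := http.NewRequestWithContext(ctx, http.MethodGet, srv.URL, nil)
	// To actually transmit the value, copy it into a header (or body) explicitly.
	req.Header.Set("X-My-Value", "secret")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err.Error()
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(callWithContext()) // → ctx value: <nil>, header: secret
}
```

On the client, the request context only controls cancellation and deadlines of the outgoing call; to recover the value on the cluster side you would read the header in the HTTP handler and, if needed, put it back into the handler's context there.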