I’ve been following the IPFS project for some time now. I’m also a keen Docker user, having deployed multiple clusters for several companies with generally good results. In my opinion, though, there is one thing holding the Docker project back at the moment, and IPFS could be the answer.
Docker makes my life as a server admin significantly easier. Handling multiple versions of software is much simpler, very few code changes are needed to deploy, and legacy code no longer holds back the upgrade process. Scaling is easier too, but data storage is a huge problem when scaling software across multiple nodes. The proper fix involves rewriting the software itself, which is time-consuming and expensive, so in practice the setup often remains unscalable.
I think IPFS could be the ultimate solution to this problem. If there were a volume plugin that worked with Docker Swarm to share file storage across multiple Docker instances on multiple machines, it would likely achieve widespread adoption. It would need to be simple to deploy and mount, following the ‘simple but powerfully configurable’ methodology, but it would solve a huge issue which IMO is holding Docker back massively at the moment. The caching mechanisms in IPFS would make it a no-brainer for me as a Docker user. Perhaps each node would run its own IPFS instance.
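To make the idea concrete, here’s roughly what using such a plugin could look like with Docker’s existing volume-plugin and Swarm machinery. To be clear, no such plugin exists as far as I know; the plugin name `ipfs-volume` is entirely made up here, but the `docker plugin`, `docker volume`, and `docker service` commands are the standard Docker CLI:

```shell
# HYPOTHETICAL SKETCH: 'ipfs-volume' is an imagined plugin, not a real one.

# Install the imagined IPFS volume plugin on each swarm node
docker plugin install ipfs-volume

# Create a volume backed by IPFS, shared across nodes
docker volume create -d ipfs-volume shared-data

# Run a swarm service with the shared volume mounted on every replica
docker service create --name web \
  --mount type=volume,source=shared-data,target=/data,volume-driver=ipfs-volume \
  --replicas 3 \
  nginx
```

The point is that nothing in the Docker side of this workflow would need to change; the hard part is the plugin itself, i.e. mapping a mutable POSIX-ish mount onto IPFS’s content-addressed, largely immutable storage model.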
Anyone got any experience of running IPFS in this way? Any thoughts?