I’d love to use IPFS for professional projects. What makes me worry a bit though is that it’s still in beta and there’s no “stable” version out there yet.
Can anyone comment on how stable IPFS is? Investing my time in software that can't be relied on in production would be kind of pointless. It'd be great to learn from people who have experience running IPFS in production (or who know its weaknesses).
Honestly, I find it pretty stable even though it's in beta. The main reason it still counts as beta/alpha is that a lot of functions and features are under development; for example, the "file sharing" part is slow, but it works ok 95% of the time.
But this should not stop you from using it and reporting problems and bugs! Reporting issues is another way to help the project.
Thanks for the feedback. What do you mean by "it works ok 95% of the time"? Are there cases where adding a file to IPFS silently fails?
Yes. If you try to download a file through a gateway, and the file is bigger than about 50 MB (I ran this test with some 100 MB random files), and the gateway still has to find the file on the network, the download will start but never finish (the gateway keeps the connection alive). You have to restart the download once the gateway peer has gathered all the objects/chunks/pieces of the file.
Other times it's just a peering/networking problem, but that's not due to the IPFS implementation.
Okay great, I'll keep that in mind then. Thanks for helping out!
I noticed the same thing. I did a test with a file of ca. 700 MB recently, and cURLing it via ipfs.io didn't work the first time. Sometimes this happens with smaller files too, e.g. 100–250 MB: in my case cURL just timed out, in most cases after ca. 40 to 50 MB, independent of the original file size. But usually the second time around it works. So at the moment, if you want to share a file on IPFS, it's good to cURL it at least once through a gateway, so it's fully cached somewhere other than only on your own node.
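The "just try again" workaround described above can be sketched as a small shell wrapper. This is only a sketch: the retry count, sleep interval, gateway URL, and timeout are all arbitrary choices of mine, and `<cid>` is a placeholder for the hash returned by `ipfs add`.

```shell
#!/bin/sh
# retry CMD...: run CMD until it succeeds, giving up after 5 attempts.
# A minimal sketch of the retry workaround; all limits are arbitrary.
retry() {
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge 5 ] && return 1
        echo "attempt $n failed, retrying..." >&2
        sleep 1
    done
}

# Usage (gateway URL and timeout are illustrative; <cid> is a
# placeholder, not a real hash):
#   retry curl -fSL --max-time 600 "https://ipfs.io/ipfs/<cid>" -o file
```

The `-f` flag makes cURL exit non-zero on HTTP errors, so the wrapper actually notices a failed transfer instead of saving an error page.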
As for the beta aspect, I haven't run into any real problems that I couldn't pinpoint myself, and those were mostly due to file availability on the network, i.e. missing blocks etc. Otherwise it already seems rock solid; even the ipfs files API seems stable. The rest is development: what I'm primarily looking for is auto-republication of IPNS hashes that derive from keys other than "self". The next big steps for me are file encryption, IP anonymity, etc., but that will surely take time.
That seems weird and I’m not experiencing the same thing. Maybe there is something funky with your connection to the gateway. Just did a test with a file ~400MB and got it to finish without any restarts of the download.
$ mktemp -d
/tmp/tmp.oZzes9n9hL
$ cd /tmp/tmp.oZzes9n9hL
$ head -c 300000000 /dev/urandom | base64 > testfile
$ ls -lh testfile
-rw-r--r-- 1 victor victor 387M jun 26 14:12 testfile
$ ipfs add testfile
added QmdbY3phVY3WmMZ2ESgYx9b8WWM27MANsbgygGNR3R5Wua testfile
$ time curl https://ipfs.io/ipfs/QmdbY3phVY3WmMZ2ESgYx9b8WWM27MANsbgygGNR3R5Wua > testfile-retrieved
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  386M  100  386M    0     0  2602k      0  0:02:32  0:02:32 --:--:-- 9810k
curl https://ipfs.io/ipfs/QmdbY3phVY3WmMZ2ESgYx9b8WWM27MANsbgygGNR3R5Wua > testfile-retrieved  0,54s user 2,04s system 1% cpu 2:32,25 total
$ ipfs add testfile-retrieved
added QmdbY3phVY3WmMZ2ESgYx9b8WWM27MANsbgygGNR3R5Wua testfile-retrieved
If you can find a way to reliably reproduce the "works ok 95% of the time" behaviour, we would love it if you could open an issue on ipfs/go-ipfs on GitHub so we can fix it.
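To make such a report reproducible, the session above can be packaged into a small script. The file-generation helper below is self-contained; the add/fetch steps are shown only as comments because they assume a locally running ipfs daemon, and the gateway URL is illustrative.

```shell
#!/bin/sh
# make_testfile SIZE PATH: write at least SIZE bytes of base64-encoded
# random data to PATH, mirroring the test in the session above.
make_testfile() {
    head -c "$1" /dev/urandom | base64 > "$2"
}

# Hypothetical reproduction run (assumes a local ipfs daemon; the
# gateway URL is illustrative). `ipfs add -Q` prints only the hash.
#   make_testfile 100000000 testfile
#   cid=$(ipfs add -Q testfile)
#   time curl -fSL "https://ipfs.io/ipfs/$cid" -o fetched
#   cmp testfile fetched && echo "round trip OK"
```

Timing the cURL step and noting where it stalls (e.g. "around 40–50 MB") would give the developers something concrete to chase.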