Can IPFS be used to make a video editing program?

@ricebox, the project I am working on is moving toward real solutions somewhat along the lines of the ideas you have presented here. First we will tackle distributed transcoding/rendering on a cluster-group of trusted peers (a rough sketch of that fan-out idea is at the end of this post). Once that works, we will work our way backwards through the pipeline, possibly building a proxy-based NLE with a bridge to MELT. There are, however, real limits to what can be accomplished in the browser. Some examples:

- Audio sync: I don't know if you have ever tried to frame-sync audio to specific points in a timecode using JavaScript with less than 8 ms of latency, but I can tell you firsthand that it isn't easy, and it comes nowhere near native-code speed. (The usual workaround is sketched just after this list.)
- Compositing overhead: how do you deal with having perhaps dozens (or hundreds) of files open, a few WebGL shaders as effect filters on top of them, and some animated text flying around a canvas here and there?
- Tracking: you will struggle to get anything approximating object tracking working. Even if you can get something like OpenCV running, I really doubt your ServiceWorker is going to be able to keep up.
- Keying and grading: keying might fare a bit better, but are you going to stay in Rec. 709 color space? And what about grading? Do you proxy-render to 720p YUV 4:2:0 for the editors, and somehow figure out a good way of managing RAW gamuts and LUTs?
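For the audio-sync point, the standard browser-side mitigation is lookahead scheduling on the Web Audio clock: a deliberately coarse JS timer wakes up often, but each cue is handed to the audio graph with an absolute `AudioContext` time, which is sample-accurate. A minimal TypeScript sketch, where the cue list and timings are invented for illustration:

```ts
// Lookahead scheduling: a coarse JS timer wakes up every TICK_MS, but
// each cue is started at an absolute AudioContext time, so the actual
// audio start is sample-accurate regardless of timer jitter.
const ctx = new AudioContext();
const LOOKAHEAD_S = 0.1; // schedule anything due within the next 100 ms
const TICK_MS = 25;      // how often the (jittery) JS timer fires

interface Cue {
  timecodeS: number;    // timeline position in seconds
  buffer: AudioBuffer;  // pre-decoded audio for this cue
  scheduled?: boolean;
}

const cues: Cue[] = []; // filled in elsewhere from the edit timeline

// Anchor timeline zero to the audio hardware clock.
const t0 = ctx.currentTime;

setInterval(() => {
  const now = ctx.currentTime - t0;
  for (const cue of cues) {
    if (!cue.scheduled && cue.timecodeS < now + LOOKAHEAD_S) {
      const src = ctx.createBufferSource();
      src.buffer = cue.buffer;
      src.connect(ctx.destination);
      // start() takes an absolute context time; this is the part that
      // hits the timecode exactly even if this callback fired late.
      src.start(t0 + cue.timecodeS);
      cue.scheduled = true;
    }
  }
}, TICK_MS);
```

Even this only solves playback scheduling; decoding, scrubbing, and keeping video frames locked to the same clock are where the browser really stops approaching native speed.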

I could go on and on, but I just wanted to make the point: there are a lot of REALLY hard problems involved in video editing generally, and trying to handle all of them in the browser is crazy talk. But we're going to try to do some of it anyway. You can read more about what we are doing here: Millions of pins in a transient ipfs-cluster
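As promised above, here is a purely illustrative sketch of the first step, distributed transcoding: cut a clip into frame ranges and render each range with the MLT `melt` CLI, standing in for jobs dispatched to trusted peers. The file names, peer count, and codec settings are all invented, and this runs the jobs locally; only the general shape of the `melt` invocation reflects real MLT usage.

```ts
// Illustrative fan-out: split a clip into frame ranges and transcode
// each range with the MLT `melt` CLI, as a stand-in for jobs that
// would be dispatched to trusted peers.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

interface Segment { first: number; last: number } // inclusive frames

// Cut [0, totalFrames) into roughly equal chunks, one per peer.
function splitFrames(totalFrames: number, peerCount: number): Segment[] {
  const size = Math.ceil(totalFrames / peerCount);
  const segments: Segment[] = [];
  for (let f = 0; f < totalFrames; f += size) {
    segments.push({ first: f, last: Math.min(f + size, totalFrames) - 1 });
  }
  return segments;
}

// What each peer would execute for its range, e.g.:
//   melt source.mp4 in=0 out=3124 -consumer avformat:seg-0.mp4 vcodec=libx264
async function transcodeSegment(clip: string, seg: Segment, outFile: string) {
  await run("melt", [
    clip, `in=${seg.first}`, `out=${seg.last}`,
    "-consumer", `avformat:${outFile}`, "vcodec=libx264",
  ]);
}

async function main() {
  const segments = splitFrames(25000, 8); // ~16 min at 25 fps, 8 peers
  await Promise.all(segments.map((seg, i) =>
    transcodeSegment("source.mp4", seg, `seg-${i}.mp4`)));
  // Reassembling and pinning the finished segments is left out here.
}

main();
```

In the real system those jobs would run on the trusted peers rather than locally, and the finished segments would need to be pinned and reassembled; the topic linked above covers that pinning side.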