It ain't over 'til it's over.
Once you've done the first ninety percent, you have the other ninety percent left.
Real artists ship.
You've probably heard them all - reminders that the job isn't done until it is really done, which, in the software industry, means that you have shipped or released a finished version to the end users. Over the past year I've written about motion interpolation[a], zoomable images[b], timelapses[c], timelapse exposure correction[d], how to re-sample an image sequence along the time axis[e], and other image-processing tricks.
Whenever I write about these things, however, I feel awkward, because I keep alluding to various software packages that I use. Right now, for example, I have a long blog entry about astrophotography[f] sitting in the "drafts" folder. It is fairly complete, with illustrations and what I consider exemplary writing. But I just can't publish it.
Astrophotography is, so far, the most post-processing-intensive kind of photography I've ever done. In all other cases, the post-processing has been optional, in the sense that the image, as shot, could stand on its own. The post-processing only served to emphasize or de-emphasize certain parts of the composition, in order to bring it out more fully for the viewer. When it comes to astrophotography, however, the image-as-shot is either black or too noisy. You don't see any stars besides the really bright ones, unless you carefully align and stack the images and then tweak the result. This stacking and processing can be done manually, but realistically you're not going to do it any other way than by computer. So in the article I describe the use of some programs that automate the alignment and stacking of images. But those programs aren't available anywhere except on my hard drive. I wrote them myself. Without the programs, the article makes no sense. It is not a tutorial, it is just an explanation of what I did with no way for the reader to replicate the steps. One is reminded of the recipe for elephant stew that starts with "first, find and kill an elephant".
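The align-and-stack idea itself can be sketched in a few lines. This is not the code from my programs, just a minimal illustration of the principle, assuming grayscale frames as NumPy arrays and integer-pixel translation only (real star fields need sub-pixel alignment, rotation handling and outlier rejection):

```python
import numpy as np

def align_offset(ref, img, max_shift=8):
    """Brute-force search for the integer (dy, dx) shift that best
    aligns img to ref, by minimizing the mean squared difference.
    np.roll wraps around at the edges, which is good enough for a sketch."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            err = np.mean((shifted - ref) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def stack(frames):
    """Align each frame to the first, then average the aligned frames.
    Averaging suppresses the per-frame sensor noise while the stars,
    which land on the same pixels after alignment, are preserved."""
    ref = frames[0].astype(float)
    acc = ref.copy()
    for img in frames[1:]:
        f = img.astype(float)
        dy, dx = align_offset(ref, f)
        acc += np.roll(np.roll(f, dy, axis=0), dx, axis=1)
    return acc / len(frames)
```

Averaging N aligned frames cuts the noise standard deviation by roughly a factor of the square root of N, which is what lets the faint stars climb out of the noise floor.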
Since I thought that other people might want to try out the techniques I use on their own photos and videos, I've spent some time trying to get the code into good shape for release as open source. But it's hard work. If you yourself are the only user of the code, you tend to accept user interfaces that do the bare minimum and nothing more, algorithms that only handle your own use cases, documentation that exists only in your head, and so on. If a release is actually to be useful to its recipients, they need to be presented with something that meets reasonable standards of usability.
When I released Bigshot, I could only use code that I wanted to release. This meant that Bigshot could not depend on any other software library that I had written, unless I also wanted to release that library.
I had to make sure that everything worked, was well documented, and was understandable.
I had to make sure that the released package was usable for people. Tutorials had to be written, thought given to ease of use, ease of upgrade and versioning.
All of the above applied recursively to any supporting libraries.
The result was that Bigshot took about one day to write and five days to get into release-worthy shape. And Bigshot was easy. I've now spent closer to two weeks trying to get my image and video processing framework into releasable shape, and I don't even know where to begin.
For the image and video processing framework, none of what made Bigshot easy is true. It is a Swiss Army knife of image and video processing, making use of multiple supporting libraries that would also have to be released as open source. It is currently written in Java, but who knows if that's the right choice. Even if Java itself is cross-platform, one needs to integrate the application with each target platform - for Windows you must supply an .exe launcher, for Linux a launcher script, and for Mac... well... ever since Apple deprecated its Java runtime, I have no idea what to do. Add to that installers for Windows, RPMs and other packages for Linux, and bundles for Mac.
In despair I checked out Blender[h]. Maybe I could integrate the code with its powerful video sequence editor? The answer was "yes, probably - but it's going to take some work". So I'm currently tinkering with the Blender source code. Getting what I want into it seems to require some serious tinkering, but the payoff would be great - Blender has a good UI (as of the 2.5 series) and a solid framework underpinning it. Unfortunately it is fifteen years old, and a lot of the assumptions one can make when designing a 3D modeller don't lead to optimal results when one tries to fit a video editing package into it.
I've also considered rewriting everything in Qt / C++. That would take care of some problems - most notably the dependency on Java - but it would come with its own set of dependency problems, making the whole affair more a step sideways than a step forward.
So that's where things stand now - in case anyone wonders what I use to get the photos I do, and why my tech articles omit references to many of the programs involved.