Bigshot VR

Three months ago I released Bigshot, mostly because I wanted a better way to present big images on this blog. Now it also has experimental support for VR panoramas, as users of WebGL-capable browsers can see on the Bigshot VR demo page[a]. The code is still in development, so don't be surprised if you find bugs.

Here's an example VR panorama. I assembled it from 38 photos using Hugin[b] and Photoshop. Browsers with CSS3D or WebGL will get the Bigshot viewer, others will get SaladoPlayer. Both viewers (Bigshot and SaladoPlayer) use the same source image file, a 20MB Deep Zoom Image[c] generated by Bigshot's MakeImagePyramid and deployed as a Bigshot archive via the bigshot.php image server.
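For reference, a Deep Zoom pyramid is just the source image halved repeatedly until it fits in a single tile. A minimal sketch of the level arithmetic (the function names here are mine for illustration, not MakeImagePyramid's actual API):

```javascript
// Number of pyramid levels: level 0 is 1x1, the top level is the
// full-size image, and each level doubles the previous one.
function dziLevelCount (width, height) {
    var size = Math.max(width, height);
    var levels = 1;
    while (size > 1) {
        size = Math.ceil(size / 2);
        levels++;
    }
    return levels;
}

// Pixel dimensions of a given level, counting from the 1x1 level.
function dziLevelSize (width, height, level) {
    var maxLevel = dziLevelCount(width, height) - 1;
    var scale = Math.pow(2, maxLevel - level);
    return {
        width : Math.ceil(width / scale),
        height : Math.ceil(height / scale)
    };
}
```

A 2048x1024 panorama face thus gets 12 levels, from 1x1 up to the full resolution; the viewer only ever fetches the tiles it needs at the level matching the current zoom.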

2011-01-02 22:49

tech, vr

Bigshot, VR

The motivation for this new feature comes from my experience with using Bigshot 1.0 on this blog. Bigshot was good - and cross-browser - but it wasn't very good at displaying wrap-around panoramas, and it had no way to display full, 360-degree VR panoramas. Bigshot could display a long strip, so the user could pan left and right through 360 degrees, but panning up and down was not possible. It was also limited to cylindrical or equirectangular projections in those cases.

So when I started experimenting with VR panoramas I had to resort to Flash, in the form of SaladoPlayer[d]. Now, I don't have anything against Flash per se. Properly used, it is great; for everything else there's Flashblock[e]. And SaladoPlayer is awesome. But if one looks at "web standards", Flash isn't among them, and that has some downsides.

  • First, Flash objects aren't really part of HTML - they can be embedded all right, but they aren't completely integrated (try using the CSS z-index property on a page with Flash objects, for example). You even need a special JavaScript library[f] to load them properly and reliably across browsers.

  • Second comes a reason that stems from web standards often being embarrassingly far behind the cutting edge. This is just the nature of things: 90% of cutting-edge stuff ends up failing, and we should be conservative before calling something a "standard", so standards tend to trail the cool stuff. Flash's original name was "FutureSplash", so it is obviously cool stuff. For Adobe this meant that they had to invent their own non-standard solutions to the problems they encountered. The upside is that the problems got solved; the downside is that the solutions were sometimes suboptimal[g]. The scary side is that this has turned Flash into a kind of parallel ecosystem to the rest of the web, and I don't like that.

  • Third, Flash is more heavyweight than most web standards. In particular, it requires you to compile your code. JavaScript can be compressed to save space, but you're not buying into a whole toolchain the same way you do with Flash.

There's also the disturbing fact that iPhone and iPad users are left out. While my concern for hipsters in general and Mac-fanboys[h] in particular is nonexistent, it seems to me that one should use the most basic, widespread and stable technology that gets the job done. In this case, if it can be done using web standards, it should.

Besides, it might be fun to do some 3D coding. I used to do a lot of that way back around when DirectX 1.0 was released. Unfortunately, there's very little 3D done in the kind of enterprise-y applications I do nowadays.

I started with Pre3d[i], a software renderer for JavaScript. Using some clever canvas transformations, it is capable of rendering textured 3D polygons in any browser that supports the canvas element. Unfortunately, since the perspective transform required for proper 3D texturing isn't linear[j], it wasn't able to render the VR cube with enough precision to be useful for my purposes. It was also too slow, despite the optimizations. But I had achieved one big goal: given a rendering API, I could generate and render an adaptive level-of-detail VR cube. This was the first cut of Bigshot VR.
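To see why perspective defeats the canvas element: the 2D canvas only supports affine transforms, which map the midpoint of a line to the midpoint of its image, while perspective projection (a divide by depth) does not. A tiny illustration of the mismatch, using a plain pinhole projection - nothing Pre3d-specific:

```javascript
// Pinhole projection onto the z = 1 plane: divide by depth.
function project (p) {
    return { x : p.x / p.z, y : p.y / p.z };
}

var a = { x : 0, y : 0, z : 1 };
var b = { x : 2, y : 0, z : 3 };

// The 3D midpoint of the segment a-b, projected correctly:
var mid3d = project({ x : 1, y : 0, z : 2 });       // x = 0.5

// The screen-space midpoint of the projected endpoints - which is
// what any affine (linear) transform of the texture would produce:
var midAffine = (project(a).x + project(b).x) / 2;  // x = 1/3

// 0.5 vs 1/3: no single affine canvas transform can reproduce the
// perspective-correct result, so a software renderer has to
// approximate, and the error shows up as texture swim on big quads.
```

Subdividing each cube face into smaller quads shrinks the error, but that is exactly the extra geometry that made the software path too slow here.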

After trying to optimize it some more I finally gave up and went looking for some proper in-browser, hardware-accelerated 3D. I found it in WebGL[k], a web-standards 3D API based on OpenGL ES 2.0[l]. While it is only supported in Firefox 4 (not yet released) and Chrome 9, I figured I could be a bit ahead of the curve here. So I headed off to the Learning WebGL[m] website and started going through the lessons. OK, I skipped to lesson 6[n], copied the code, refactored it and manhandled it into something that looked kinda sorta right, then lifted Bigshot up on a set of jacks, removed the Pre3d code and slid the new WebGL code in under it.

The resulting code works great for me.

The Immediate Future

My big worry right now concerns the texture handling. The VR cube is light on vertex and face data, but big on texture data - pretty much the opposite of the typical situation. It is not possible to upload the texture data to the card on every frame and maintain a decent framerate, so I've set up a multilevel LRU cache:

  1. X textures in WebGL texture memory

  2. Y cached HTMLImageElements in system memory

  3. Z cached images in the browser cache

  4. Cache miss and network fetch

The value of Z is not under my control, but I'm trying to find values for X and Y that make optimal use of the available memory.
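As a rough sketch of what such a cache chain can look like - the names, structure, and capacity values below are mine for illustration, not Bigshot's actual implementation:

```javascript
// One level of the cache chain. Each level holds at most `capacity`
// entries; on overflow the least recently used entry is evicted, and
// a miss falls through to the next level (ultimately a network fetch).
function LRUCache (capacity, fallback) {
    this.capacity = capacity;
    this.fallback = fallback;   // next cache level, or a loader function
    this.entries = {};          // key -> value
    this.order = [];            // keys, most recently used last
}

LRUCache.prototype.get = function (key) {
    if (key in this.entries) {
        // Hit: move the key to the most-recently-used position.
        this.order.splice(this.order.indexOf(key), 1);
        this.order.push(key);
        return this.entries[key];
    }
    // Miss: fetch from the next level and cache the result here.
    var value = this.fallback(key);
    this.entries[key] = value;
    this.order.push(key);
    if (this.order.length > this.capacity) {
        var evicted = this.order.shift();
        delete this.entries[evicted];
    }
    return value;
};

// The chain described above: X textures on the card, backed by Y
// images in system memory, backed by the browser cache / network.
var fetches = 0;
var network = function (key) { fetches++; return "tile:" + key; };
var imageCache = new LRUCache(64, network);          // Y, hypothetical value
var textureCache = new LRUCache(16, function (key) { // X, hypothetical value
    return imageCache.get(key);
});
```

The point of the chain is that a texture evicted from the small GPU-side level is usually still in the larger image-element level, so re-showing it costs a texture upload rather than a network round trip.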