Saving out video frames from a WebGL app

Recently, I wanted to create a small video clip of a WebGL demo of mine. The route I ended up going down was to send the frames by XHR to a small web server that writes the frames to disk. Here's a quick article detailing how you can do the same.

Here's the video I made.

Set up your web server

I used plain Node.js for my server. It's got CORS headers set, so you can send requests to it from any port or domain you want to use. Heck, you could even do distributed rendering with all the renderers sending finished frames to the server.

Here's the code for the server. It reads POST requests into a buffer and writes the buffer to a file. Simple stuff.

// Adapted from the WikiBooks OpenGL Programming video capture article.
var port = 3999;
var http = require('http');
var fs = require('fs');
http.createServer(function (req, res) {
    res.writeHead(200, {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Headers': 'Content-Type, X-Requested-With'
    });
    if (req.method === 'OPTIONS') {
        // Handle OPTIONS requests to work with jQuery and other libs that cause preflighted CORS requests.
        res.end();
        return;
    }
    var idx = req.url.split('/').pop();
    var filename = ("0000" + idx).slice(-5) + ".png";
    var img = new Buffer('');
    req.on('data', function(chunk) {
        img = Buffer.concat([img, chunk]);
    });
    req.on('end', function() {
        fs.writeFileSync(filename, img);
        console.log('Wrote ' + filename);
        res.end();
    });
}).listen(port);
console.log('Server running at http://localhost:' + port + '/');

To run the server, save it to server.js and run it with node server.js.

Send frames to the server

There are a few things you need to consider on the WebGL side. First, use a fixed-resolution canvas instead of one that resizes with the browser window. Second, make your timing values fixed as well, instead of using wall-clock time. To do this, decide the frame rate for the video (probably 30 FPS) and the duration of the video in seconds. Now you know how much to advance the time with every frame (1/FPS) and how many frames to render (FPS*duration). Third, turn your frame loop into a render & send loop.

To send frames to the server, read PNG images from the WebGL canvas using toDataURL and send them to the server using XMLHttpRequest. For the upload to succeed, you need to convert the dataURLs to binary Blobs and send those instead. It's pretty simple but can cost you an hour of banging your head against the wall (as experience attests). Worry not, I've got it all done in the snippet below, ready to use.

Here's the core of my render & send loop:

var fps = 30; // Frames per second.
var duration = 1; // Video duration in seconds.

// Set the size of your canvas to a fixed value so that all the frames are the same size.
// resize(1024, 768);

var t = 0; // Time in ms.

for (var captureFrame = 0; captureFrame < fps*duration; captureFrame++) {
 // Advance time.
 t += 1000 / fps;

 // Set up your WebGL frame and draw it.
 uniform1f(gl, program, 'time', t/1000);
 gl.drawArrays(gl.TRIANGLES, 0, 6);

 // Send a synchronous request to the server (sync to make this simpler.)
 var r = new XMLHttpRequest();
 r.open('POST', 'http://localhost:3999/' + captureFrame, false);
 var blob = dataURItoBlob(glc.toDataURL());
 r.send(blob);
}

// Utility function to convert dataURIs to Blobs.
// Thanks go to SO: http://stackoverflow.com/a/15754051
function dataURItoBlob(dataURI) {
 var mimetype = dataURI.split(",")[0].split(':')[1].split(';')[0];
 var byteString = atob(dataURI.split(',')[1]);
 var u8a = new Uint8Array(byteString.length);
 for (var i = 0; i < byteString.length; i++) {
  u8a[i] = byteString.charCodeAt(i);
 }
 return new Blob([u8a.buffer], { type: mimetype });
}


And there we go! All is dandy and rendering frames out to a server is a cinch. Now you're ready to start producing your very own WebGL-powered videos! To turn the frame sequences into video files, either use the sequence as a source in Adobe Media Encoder or use a command-line tool like ffmpeg or avconv: avconv -r 30 -i %05d.png -y output.webm.

To sum up, you now have a simple solution for capturing video from a WebGL app. In our solution, the browser renders out frames and sends them over to a server that writes the frames to disk. Finally, you use a media encoder application to turn the frame sequence into a video file. Easy as pie! Thanks for reading, hope you have a great time making your very own WebGL videos!


After posting this on Twitter, I got a bunch of links to other great libs / approaches to do WebGL / Canvas frame capture. Here's a quick recap of them: RenderFlies by @BlurSpline - it's a similar approach to the one above but also calls ffmpeg from inside the server to do video encoding. Then there's this snippet by Steven Wittens - instead of requiring a server for writing out the frames, it uses the FileSystem API and writes them to disk straight from the browser.

And finally, CCapture.js by Jaume Sánchez Elias. CCapture.js hooks up to requestAnimationFrame, Date.now and setTimeout to make fixed timing signals for a WebGL / Canvas animation then captures the frames from the canvas and gives you a nice WebM video file when you're done. And it all happens in the browser, which is awesome! No need to fiddle around with a server.

Thanks for the links and keep 'em coming!


Techniques for faster loading

Speed is nice, right? Previously fhtr.net was taking half a second to get to the start of the first WebGL frame. With a hot cache, that is. Now, that's just TERRIBLE! Imagine the horror of having to wait for half a second for a website to reload. What would you do, who could you tell, where could you go! Nothing, no one and nowhere! You'd be stuck there for 500 milliseconds, blinking once, moving your eyes a few times, blinking twice, maybe even thrice before the site managed to get its things in order and show you pretty pictures. Clearly this cannot stand.

Fear not, I am here to tell you that this situation can be amended. Just follow these 10 weird tricks and you too can make your static HTML page load in no time at all!

OK, let's get started! First off, I trimmed down the page size some. By swapping the 400 kB three.js library for 1 kB of WebGL helper functions, I brought the JS size down to 8 kB. This helped, but I still had to wait, like, 350 ms to see anything. Jeez.

My next step in getting the page to load faster was to make my small SoundCloud player widget load the SoundCloud sdk.js asynchronously and poll on a timeout to initialize the widget once the SDK had loaded. That didn't help all that much, but hey, at least now I was in control of my destiny and didn't have to wait for external servers to serve their content before executing crazy shaders. I also inlined the little logo image as a data URL in the HTML to avoid a dreadful extra HTTP request.
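A sketch of that async-load-plus-poll approach (the helper names are mine, and the widget init is hypothetical, not the real SoundCloud API):

```javascript
// Append the SDK script tag without blocking page load.
function loadScriptAsync(src) {
  var s = document.createElement('script');
  s.src = src;
  s.async = true;
  document.head.appendChild(s);
}

// Poll every 50 ms until check() returns a truthy value, then call callback with it.
function whenDefined(check, callback) {
  (function poll() {
    var value = check();
    if (value) {
      callback(value);
    } else {
      setTimeout(poll, 50);
    }
  })();
}

// Usage in the page (initWidget is a stand-in for the actual widget setup):
// loadScriptAsync('https://w.soundcloud.com/player/api.js');
// whenDefined(function() { return window.SC; }, function(SC) { initWidget(SC); });
```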

To further investigate the reason for the slow page load, I turned my eye to the devtools network pane. Lo and behold, what a travesty! I was using synchronous XHR GETs to load the two fragment shaders. For one to start loading, the other had to finish. And they were both loaded by my main script, which was in a file separate from the HTML.

I didn't want to inline the JS and the shaders into the HTML because I don't have a build script ready for that. But I could still fix a few things. I made the XHRs asynchronous so that the shaders load in parallel. Then I moved the shader loader out of the script file into a small script inlined in the HTML. Now the shaders start loading as the HTML loads, similar to the main script file.
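The parallel loading looks roughly like this (getAsync/whenAll are my helper names, not from the actual page source):

```javascript
// Fire off an async GET; the callback gets the response text.
function getAsync(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true); // true = async, so the two requests overlap.
  xhr.onload = function() { callback(xhr.responseText); };
  xhr.send();
}

// Make a collector that calls done(results) once `count` results are in.
function whenAll(count, done) {
  var results = [], remaining = count;
  return function(index, value) {
    results[index] = value;
    if (--remaining === 0) done(results);
  };
}

// Usage, with hypothetical shader filenames and a compile() step:
// var collect = whenAll(2, function(shaders) { compile(shaders[0], shaders[1]); });
// getAsync('shader1.frag', function(src) { collect(0, src); });
// getAsync('shader2.frag', function(src) { collect(1, src); });
```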

Timing the code segments a bit, I noticed that my code for generating a random 256x256 texture was taking ~16 ms. Not too long, but hey, way too long, right? So I moved that out of the main script file and inlined it into the HTML, after the shader-loading snippet. Now the texture is generated while the browser is downloading the shaders and the script file. This squeezed out a few extra milliseconds in the hot-cache scenario. Later on, I stopped being stupid and used a Uint8Array for the texture instead of a canvas, bringing the texture creation time down to 2 ms. Yay! Now it's practically free, as generating the texture takes about the same amount of time as loading in the scripts.
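The Uint8Array version is essentially this (makeNoiseTexture is my name for it; the gl calls are the standard WebGL texture upload):

```javascript
// Fill an RGBA byte buffer with random noise, no canvas round-trip needed.
function makeNoiseTexture(size) {
  var pixels = new Uint8Array(size * size * 4); // RGBA, one byte per channel.
  for (var i = 0; i < pixels.length; i++) {
    pixels[i] = (Math.random() * 256) | 0; // Random byte in 0..255.
  }
  return pixels;
}

// Upload straight to WebGL:
// gl.bindTexture(gl.TEXTURE_2D, gl.createTexture());
// gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 256, 256, 0, gl.RGBA, gl.UNSIGNED_BYTE,
//               makeNoiseTexture(256));
```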

My other major delays in getting the first frame to the screen were creating a WebGL context (15 ms or so), compiling the shaders (25 ms a pop) and setting up the initial shader uniforms (7 ms per shader). To optimize those, I made the page compile and set up only the first visible shader, and pared down the initial uniforms so as not to overlap with the ones I set before every draw. That brought down my shader setup cost from 64 ms to 26 ms. As for the WebGL context setup, I moved it into the inline script, after texture generation, so that it overlaps with the I/O. Maybe it helps by a millisecond or so. Maybe not.

As for caching, I'm using AppCache. It downloads the whole site on your first visit and keeps it cached. On subsequent visits (even if you're offline!), you get the page served from cache. Which is nice. And a bit of a hassle, as AppCache requires some extra logic to update the page after a new version has been downloaded.

Well, then. What is the result of all this effort? Let me tell you. On a hot cache and a particularly auspicious alignment of the stars in the eternal firmament of space, when I sit down in front of my iMac and hit reload, the first WebGL frame starts executing in 56 milliseconds. That's less than the time it takes you to move your eyes from the address bar to the page itself. It's still TOO SLOW because, as everyone knows, websites should load in less than a single frame (at 60 Hz).

Furthermore, alas, in a new tab, setting up the page takes ~350 ms. And with a cold cache - who can tell - 650 ms or more. Therefore, the next step in this journey is to spend a few minutes switching the page from GitHub Pages to Amazon CloudFront and hopefully claw back a few hundred ms of I/O time.

Carry on, wayward soldier.


Opus 2, GLSL ray tracing tutorial

A bit older shader on fhtr.net this time, from the time I was getting started with ray tracing. This one's path tracing spheres and triangles with motion blur. Might even work on Windows this time, but YMMV. I should get a test machine for that.

"So", I hear you ask, "how does that work?" I'm glad you asked! It's a common Shadertoy-style WebGL shader where two triangles and a trivial vertex shader are coupled with a delightfully insane fragment shader. In this case, the fragment shader takes the current time as its input and uses it to compute the positions of a few objects and the color of the background. The shader then shoots a couple of rays through the scene, bouncing them off the objects to figure out the color of the current pixel. And yes, it does all this on every single pixel.

Path tracing

The path tracing part is pretty simple. The shader shoots out a ray and tests it against the scene. On detecting a hit, the ray is reflected and tested again. If the ray doesn't hit anything, I set its color to the background color. If the ray does hit something and gets reflected into the background, I multiply the object color with the background color. If the reflected ray hits another object, I give up and leave the color black. To make this one-bounce lighting look less harsh, I mix the ray color with the fog color based on the distance traveled by the ray. For the fog color, I'm using the background color so that the objects blend in well with the bg.

Digression: The nice thing about ray-tracing-based rendering is that a lot of things that are difficult and hacky to do with rasterization suddenly become simple. Reflections, refractions, transparent meshes, shadows, focus blur, motion blur, adaptive anti-aliasing schemes: all are pretty much "shoot an extra ray, add up the light from the two rays, done!" It just gets slowwwer the more rays you trace.

And if you're doing path tracing, you're going to need a whole lot of rays. The idea behind path tracing is to shoot a ray into the scene and randomly bounce it around until it becomes opaque enough that further hits with light sources wouldn't contribute to the pixel color. By summing up enough of these random ray paths, you arrive at a decent approximation of the light arriving at the pixel through all the different paths in the scene.

For specular materials, you can get away with a relatively small amount of rays, as the variance between ray paths is very small. A mirror-like surface is going to reflect the ray along its normal, so any two rays are going to behave pretty much the same. Throw in diffuse materials, and you're in a whole new world of hurt. A fully diffuse surface can reflect a ray into any direction visible from it, so you're going to need to trace a whole lot of paths to approximate the light hitting a diffuse surface.

Motion blur

The motion blur in the shader is very simple. Grab a random number from a texture, multiply the frame exposure time with that, add this time delta to the current time and trace! Now every ray is jittered in time between the current frame and the next frame. That alone gets you motion blur, albeit in a very noisy fashion.

I'm using two time-jittered rays per pixel in the shader, first one at a random time in the first half of the exposure time, the second in the second half. Then I add them together and divide by two to get the final motion blurred color. It looks quite a bit better without totally crashing the frame rate. For high-quality motion blur, you can bump the ray count to a hundred or so and revel in your 0.4 fps.
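Sketched in JS, the two-sample jitter looks roughly like this (traceAt stands in for the actual per-ray trace at a given time; it's an assumption, not a function from the shader):

```javascript
// Trace two rays jittered within the exposure window and average them.
function motionBlurredColor(frameTime, exposure, traceAt) {
  // One random time in each half of the exposure window.
  var t1 = frameTime + Math.random() * exposure * 0.5;
  var t2 = frameTime + exposure * 0.5 + Math.random() * exposure * 0.5;
  var c1 = traceAt(t1), c2 = traceAt(t2);
  // Average the two RGB samples.
  return [(c1[0]+c2[0])/2, (c1[1]+c2[1])/2, (c1[2]+c2[2])/2];
}
```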

Background color

I made the background by adding up a bunch of gradients based on the sun position and the plane of the horizon. The gist of the technique is to take the dot product between two directions and multiply the gradient color by that. Quick explanation: the dot product of two directions (i.e. unit vectors) is the cosine of the angle between them; its value goes from 1 at zero angle to 0 at a 90-degree angle and -1 at 180 degrees. To make a nice diffuse sun, take the dot between the sun direction and the ray direction, clamp it to the 0..1 range, raise it to some power to tighten up the highlight, and multiply by the sun color. Done! In code: bgCol += pow(max(0.0, dot(sunDir, rayDir)), 256.0)*sunColor;

You can also use this technique to make gradients that go from the horizon to the zenith. Instead of using the sun direction, you use the up vector. Again, super simple: bgCol += pow(max(0.0, dot(vec3(0.0, 1.0, 0.0), rayDir)), 2.0)*skyColor;

By mixing a couple of these gradients you can get very nice results. Say, use a low-pow sun gradient for haze, a high-pow one for the sun disk, a horizon-up gradient for the skydome, a horizon-down gradient for the ground, and a reversed high-pow horizon gradient for horizon glow (like in the shader in the previous post).

Let's write a path tracer!

Here's a small walk-through of a path tracer like this: First, normalize the coordinates of the current pixel to -1..1 range and scale them by the aspect ratio of the canvas so that we get square pixels. Second, set up the current ray position and direction. Third, shoot out the ray and test for intersections. If we have a hit, multiply the ray color with the hit color, reflect the ray off the surface normal and shoot it out again.

vec2 uv = (-1.0 + 2.0*gl_FragCoord.xy / iResolution.xy) * vec2(iResolution.x/iResolution.y, 1.0);
vec3 ro = vec3(0.0, 0.0, -6.0);     // Ray origin.
vec3 rd = normalize(vec3(uv, 1.0)); // Ray direction.
vec3 transmit = vec3(1.0);          // How much light the ray lets through.
vec3 light = vec3(0.0);             // How much light hits the eye through the ray.

float epsilon = 0.001;

const int bounce_count = 2; // How many rays we trace.

for (int i=0; i<bounce_count; i++) {
  float dist = intersect(ro, rd);
  if (dist > 0.0) { // Object hit.
    transmit *= material(ro, rd); // Make the ray more opaque.
    vec3 nml = normal(ro, rd);    // Get surface normal for reflecting the ray.
    ro += rd*dist;                // Move the ray to the hit point.
    rd = reflect(rd, nml);        // Reflect the ray.
    ro += rd*epsilon;             // Move the ray off the surface to avoid hitting the same point twice.
  } else { // Background hit.
    light += transmit * background(rd); // Put the background light through the ray and add it to the light seen by the eye.
    break;                              // Don't bounce off the background.
  }
}

gl_FragColor = vec4(light, 1.0); // Set pixel color to the amount of light seen.


Ok, let's get real! Time to trace two spheres! Spheres are pretty easy to trace. A point is on the surface of a sphere if (point-center)^2 = r^2. A point is on a ray (o, d) if point = o+d*t | t > 0. By combining these two equations, we get ((o+d*t)-center)^2 = r^2 | t > 0, which we then have to solve for t. First, let's write it out and shuffle the terms a bit.

(o + d*t - c) • (o + d*t - c) = r^2

o•o + o•dt - o•c + o•dt + dt•dt - c•dt - c•o - c•dt + c•c = r^2

(d•d)t^2 + (2*o•dt - 2*c•dt) + o•o - 2o•c + c•c - r^2 = 0

(d•d)t^2 + 2*(o-c)•dt + (o-c)•(o-c) - r^2 = 0

Ok, that looks better. Now we can use the quadratic formula and go on our merry way: for Ax^2 + Bx + C = 0, x = (-B ± sqrt(B^2 - 4AC)) / 2A. From the above equation, we get

A = d•d        // This is 1 if the direction d is a unit vector.
               // To explain, first remember that a•b = length(a)*length(b)*cos(angleBetween(a,b))
               // Now if we set both vectors to be the same:
               // a•a = length(a)^2 * cos(0) = length(a)^2, as cos(0) = 1
               // And for unit vectors, that simplifies to u•u = 1^2 = 1
B = 2*(o-c)•d
C = (o-c)•(o-c) - r^2

And solve

D = B*B - 4*A*C
if (D < 0) {
  return No solution;
}
t = (-B - sqrt(D)) / (2*A);
if (t < 0) { // Closest intersection behind the ray.
  t += sqrt(D)/A; // Now t = (-B + sqrt(D)) / (2*A).
}
if (t < 0) {
  return Sphere is behind the ray;
}
return Distance to sphere is t;

In GLSL, it's five lines of math and a comparison at the end.

float rayIntersectsSphere(vec3 ray, vec3 dir, vec3 center, float radius, float closestHit) {
  vec3 rc = ray-center;
  float c = dot(rc, rc) - (radius*radius);
  float b = dot(dir, rc);
  float d = b*b - c;
  float t = -b - sqrt(abs(d));
  if (d < 0.0 || t < 0.0 || t > closestHit) {
    return closestHit; // Didn't hit, or wasn't the closest hit.
  } else {
    return t;
  }
}

Right, enough theory for now, I think. Here's a minimal ray tracer using the ideas above.

float sphere(vec3 ray, vec3 dir, vec3 center, float radius) {
 vec3 rc = ray-center;
 float c = dot(rc, rc) - (radius*radius);
 float b = dot(dir, rc);
 float d = b*b - c;
 float t = -b - sqrt(abs(d));
 float st = step(0.0, min(t,d));
 return mix(-1.0, t, st);
}

vec3 background(float t, vec3 rd) {
 vec3 light = normalize(vec3(sin(t), 0.6, cos(t)));
 float sun = max(0.0, dot(rd, light));
 float sky = max(0.0, dot(rd, vec3(0.0, 1.0, 0.0)));
 float ground = max(0.0, -dot(rd, vec3(0.0, 1.0, 0.0)));
 return
  (pow(sun, 256.0)+0.2*pow(sun, 2.0))*vec3(2.0, 1.6, 1.0) +
  pow(ground, 0.5)*vec3(0.4, 0.3, 0.2) +
  pow(sky, 1.0)*vec3(0.5, 0.6, 0.7);
}

void main(void) {
 vec2 uv = (-1.0 + 2.0*gl_FragCoord.xy / iResolution.xy) * 
  vec2(iResolution.x/iResolution.y, 1.0);
 vec3 ro = vec3(0.0, 0.0, -3.0);
 vec3 rd = normalize(vec3(uv, 1.0));
 vec3 p = vec3(0.0, 0.0, 0.0);
 float t = sphere(ro, rd, p, 1.0);
 vec3 nml = normalize(p - (ro+rd*t));
 vec3 bgCol = background(iGlobalTime, rd);
 rd = reflect(rd, nml);
 vec3 col = background(iGlobalTime, rd) * vec3(0.9, 0.8, 1.0);
 gl_FragColor = vec4( mix(bgCol, col, step(0.0, t)), 1.0 );
}
Demo of the minimal ray tracer.

I also made a version that traces three spheres with motion blur. You can check out the source on Shadertoy.

Demo of a motion blur tracer.


If you got all the way down here, well done! I hope I managed to shed some light on the mysterious negaworld of writing crazy fragment shaders. Do try and write your own, it's good fun.

Simple ray tracers are fun and easy to write and you can get 60 FPS on simple scenes, even on integrated graphics. Of course, if you want to render something more complex than a couple of spheres, you're probably in a world of pain. I should give it a try...

P.S. I put together a bunch of screenshots from my ray tracing shaders to make a project pitch deck. You can check out the PDF here. The first couple screenies have decals and clouds painted in PS, the rest are straight from real-time demos.



Cooking a bit with shaders on fhtr.net. You won't see the cube on Windows, as ANGLE doesn't like complex loops very much. The clouds are made by raymarching fractional Brownian motion (= hierarchical noise: p=currentPoint; d=0.5; f=0; octaves.times{ f += d*noise(p); p *= 2; d /= 2; }), with the noise function from iq's LUT-based noise (and a dynamically generated noise LUT texture, using the power of canvas).
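The fbm loop in that parenthetical, expanded into plain JS (a sketch: 1D for brevity, with a simple sine-hash value noise of my own; the shader itself uses iq's LUT-based 3D noise):

```javascript
// Deterministic pseudo-random value in [0, 1) for a lattice point.
function hashNoise(x) {
  var h = Math.sin(x * 127.1) * 43758.5453;
  return h - Math.floor(h);
}

// Smoothly interpolated value noise between integer lattice points.
function valueNoise(x) {
  var i = Math.floor(x), f = x - i;
  var u = f * f * (3 - 2 * f); // Smoothstep fade.
  return hashNoise(i) * (1 - u) + hashNoise(i + 1) * u;
}

// Fractional Brownian motion: sum octaves of noise, doubling the
// frequency and halving the amplitude each time.
function fbm(x, octaves) {
  var d = 0.5, f = 0;
  for (var o = 0; o < octaves; o++) {
    f += d * valueNoise(x);
    x *= 2;
    d /= 2;
  }
  return f;
}
```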

I don't know where that's going. A cursed artifact, long dead. Flying neon beams lighting up the night, the last shards of the sun seeping into the iridescent tarsands. Powercrystals booming to life.


PDF export tweaks

I made a small tweak to Message's Print to PDF feature. It now uses the CSS @page { size: 1920px 1080px; } rule on Chrome to give you nice full-HD PDF slides, and falls back to normal printing on Firefox.

What I'd really like to see is a way to save the page as a PDF from JavaScript with something like window.print('pdf'). It'd open a "Save as PDF" dialog for the current page. In a nutshell, it'd be a shortcut for pressing the "Save as PDF" button in the print dialog, with print annotations turned off and margins defaulting to zero.

I created a Chromium issue on that: DOM API for Save as PDF.


Message update - start of October

PDF Export

Let's see... what happened over the last two weeks in Message? The latest news is probably the biggest. Yes! You can now create PDFs of your presentations. Grab a PDF and email your slides to everyone out there without having to worry about which browser they're using. You'll also have a good fallback for those times when you need to use a computer at the venue for your presentation.

To create a PDF of your slides, go to the "Share" tab and click on the new "Print to PDF" button. You'll get a print dialog where you can save your slides as a PDF, or print them if you need a physical copy. To make the PDF look good on screen, choose landscape mode in the print dialog (Page Setup in Firefox). Now click on "Save as PDF" and you should receive a nice PDF of your presentation.

Markdown template

What's that? You want to use Markdown to write your slides? Here's a template that lets you do just that: Markdown template. It's lacking a few things on the CSS side, namely auto-scaling images, so you'll have to decide how to deal with those.

To use the Markdown template, paste this template ID into the "Load Template" input in the "Theme" tab and hit enter: 52488616c072d10200000006

Note: loading a new template replaces your existing theme and JS. Fear not, you can revert back to them using the version history.

Spinning theme template

Here's another template, this one's based on the Basics of Three.js slide deck. It's got fancy CSS 3D animations and tilted title slides with animated headings to double up on the awesome. The name of the template is Spinning [preview] and you can load it up using the template ID 5219ec577a35d40200000002

Friendly URLs

The stable Message URLs look quite intense, right? That long list of alphanumerics in v.fhtr.net/5219ec577a35d40200000002 is not very appetizing. What if you could refer to the presentation with v.fhtr.net/ilmari.heikkinen/spinning-theme? Turns out, now you can!

One caveat though, if you change the title of your presentation, the friendly URL changes as well. If you don't change the title, all is well (as long as you don't have several presentations with the same title). I'll get around to fixing that soon-ish.
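For illustration, a title-to-slug mapping might look like the hypothetical sketch below (the actual rules Message uses aren't documented here):

```javascript
// Turn a presentation title into a URL-friendly slug.
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // Collapse runs of non-alphanumerics into dashes.
    .replace(/^-+|-+$/g, '');    // Trim leading and trailing dashes.
}
```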

Similarly, you can visit a person's profile at message.fhtr.net/username. To find out your username, head over to the Account page, click on the "Presentations" link and look at the person=username part of the page URL.


I'm also trying to make this thing profitable and make it grow faster, thus far with no success. In that vein, I experimented with a paywall for a few days (a PayPal-powered "Subscribe" button) to see if I'd get any subscriptions. Result: nope. Though I had just ~6 uniques/day at that time.

After a few days, I turned off the paywall and went with in-app payments, putting the "Subscribe" button on the presentation list page. Now you see the "Subscribe" button only once you've signed up (and have a better idea of the app). A few days of that now, no takers yet. Not very many people on the app either.

I also experimented with AdWords to buy traffic to the site. Got a few hundred hits, no signups. Which tells me that the landing page didn't sell to the people who clicked the ad, and that the sign-up flow is very broken.

In a nutshell: the business part of the equation is still very much lacking, ditto for a salable product. So... time to start looking for freelance gigs while learning about selling a product and looking for people who could help on that side.


The latest motivational flicks

Here are two documentaries that will change your perspective on life.

The first one is Touching the Void. It recounts the experiences of two British mountain climbers trying to climb a previously unclimbed mountain in the Peruvian Andes. Perseverance is the word.

The second movie is Jiro Dreams of Sushi, a documentary about a top-rate sushi chef and his life: seventy years of trying to outdo yourself every single day, trying to improve your craft.

Ars longa, vita brevis, eh?