OpenGL Programming/Video Capture


apitrace is a nice debugging tool.

It has a save/replay feature that captures everything you send to the graphics card, can reproduce it later, and can even save the result as a stream of pictures.

We'll use this to create a video suitable for upload to Wikicommons!

Compilation (instructions for GNU/Linux)

Get the source code:

cd /usr/src/
git clone https://github.com/apitrace/apitrace.git
cd apitrace/

Install cmake and the dependencies:

apt-get install cmake libx11-dev

Compile!

cmake -H. -Bbuild
cd build
make

Note: for a 32-bit build on a 64-bit host:

CFLAGS="-m32" CXXFLAGS="-m32" cmake -H. -Bbuild

Capture

Let's test with the simple wave post-processing effect:

cd wikibooks-opengl-modern-tutorials/obj-viewer/
LD_PRELOAD=/usr/src/apitrace/build/wrappers/glxtrace.so ./obj-viewer

Run the program for a few seconds, rotate the object, zoom, etc., and then quit the program.

This will create an obj-viewer.trace binary file. If the file already exists, apitrace will create obj-viewer.1.trace, and so on.

Replay

Use the glretrace command, passing it the trace file as a parameter:

/usr/src/apitrace/build/glretrace obj-viewer.trace

Of course, this is not interactive anymore; it is just a replay.

Convert to video

Let's install ffmpeg:

apt-get install ffmpeg

Note: we tried to use ffmpeg's scaling filter to reduce the picture size (-vf scale=200:150), but sadly it caused a segfault no matter what we tried[1]. So instead we just recompiled the application with a smaller screen size.

ffmpeg's options are organized as follows:

ffmpeg <input options> -i source <output options> destination
  • We'll use an .ogg output since that's the only format accepted by Wikicommons (if you know who could get support for the free and better .webm format, please contact them!).
  • The screen used during the tutorial has a refresh rate of 75 Hz, but videos are usually 25 frames/s, so we'll reduce the rate (hence the two -r parameters, one for the input and one for the output).
  • We'll use a fixed, good quality (-qscale).
  • We'll overwrite the destination file (-y).

We get:

/usr/src/apitrace/build/glretrace -s - obj-viewer.trace \
  | ffmpeg -r 75 -f image2pipe -vcodec ppm -i pipe: -r 25 -qscale 31 -y output.ogg

We've got our video - and no additional code was needed.

Here's the result!

WebGL variant

When running a WebGL application, capturing the browser didn't work well for us: performance was poor, and it captured the whole Firefox window - not just the animation.

So we implemented an internal capture system:

Time control

One advantage of doing the capture manually is that you can slow down the flow of time as much as you want, and hence avoid any performance issues during the capture. We did this by adding a little wrapper around the time function (in our case, three.js's):

    var capture_rate = 30;   // frame rate (FPS) of the captured video
    var capture_frame = 0;   // index of the frame currently being captured
    ...
    function getElapsedTime() {
        if (capture) {
            return capture_frame/capture_rate;
        } else {
            return clock.getElapsedTime();
        }
    }
    ...
    function render() {
        ...
        var angle = getElapsedTime() * 10;  // 10° per second
        ...
        if (capture) {
            capture_frame++;
        }
    }

We decided to use a 30 FPS frame rate, which is common for videos (as of 2013).
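
The snippets above assume a global capture flag that decides whether frames are being recorded; the tutorial does not show how it is toggled. Here is a minimal sketch, assuming you want to start and stop the capture with a keyboard shortcut (the variable declaration and the 'c' key binding are hypothetical, not part of the original code):

    var capture = false;  // assumed global flag read by getElapsedTime() and render()
    document.addEventListener('keydown', function(e) {
        if (e.key == 'c') {           // hypothetical binding: 'c' starts/stops the capture
            capture = !capture;
            if (capture) {
                capture_frame = 0;    // restart frame numbering for a new capture
            }
        }
    });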

Copy the WebGL frame

Use your WebGL canvas's toDataURL method:

var base64_encoded_image = renderer.domElement.toDataURL();

This returns a base64-encoded image in the form data:image/png;base64,iVBORw0KGgoAAAA....
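
If you want the frames (and thus the video) at a specific resolution, you can also resize the renderer before starting the capture, much like we recompiled the native application with a smaller screen size above. A minimal sketch with three.js, assuming a perspective camera and an arbitrary 800x600 target size:

    // Hypothetical target resolution for the capture
    renderer.setSize(800, 600);
    camera.aspect = 800 / 600;
    camera.updateProjectionMatrix();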

Export using AJAX

JavaScript cannot save local files directly, so we'll export these images to a web server using Ajax (extending the if (capture) block from render() above):

    if (capture) {
        capture_frame++;
        var r = new XMLHttpRequest();
        r.open('POST', 'http://localhost:1337/' + capture_frame);
        // Send only the base64 payload, stripping the data URL prefix
        r.send(renderer.domElement.toDataURL().substr("data:image/png;base64,".length));
    }

Minimal web server

We can write a minimal web server that just stores the images we export, using Node.js:

var http = require('http');
var fs = require('fs');
http.createServer(function (req, res) {
    // The frame number is passed in the URL, e.g. POST /42
    var idx = req.url.split('/')[1];
    var filename = "capture/" + ("0000" + idx).slice(-5) + ".png";
    var img = '';
    req.on('data', function(chunk) { img += chunk; });
    req.on('end', function() {
        // The request body is the base64-encoded PNG; decode it before writing
        fs.writeFileSync(filename, new Buffer(img, 'base64'));
        console.log('Wrote ' + filename);
        res.end();
    });
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

You can run it with:

mkdir capture/
nodejs save_capture.js

Assemble the video

This step is similar to the glretrace one above. We'll just use a filename pattern (%05d) to grab all the PNG files:

avconv -r 30 -i capture/%05d.png -y capture/output.webm

(By the way, avconv is a fork of ffmpeg, with nearly identical command-line options.)

We're done: we've got a perfectly synchronized capture of our WebGL animation!

Going further

Contributions on how to capture a synchronized application audio stream would be welcome :)

References

  1. This bug might be related to https://ffmpeg.org/trac/ffmpeg/ticket/397
