OpenGL Programming/Supersampling

From Wikibooks, open books for an open world


When drawing a black line on a white background, a computer or GPU will, at the most basic level, simply determine which pixels to make black. This works very well for horizontal and vertical lines. However, if the line is diagonal, you will immediately see that it is not smooth, but jagged. If, instead of using only black and white, we also strategically place grey pixels, we can fool our eyes and make the line look much smoother. This is called anti-aliasing. There are various ways to anti-alias lines, other shapes, or even text.

The oldest trick in the book is to draw the line twice as big, and then scale the resulting black-and-white image down to the original size, taking the average of every group of 2x2 pixels. The resulting image will have 5 shades of grey (including black and white), and is a big improvement. Of course, you can also draw the line three, four or even more times as big (for 10, 17 or more shades of grey). This technique is called supersampling. It is very time consuming, since you have to render a much larger image, and it gives you only a limited number of shades. However, its big advantage is that it is very simple, and works with *any* kind of image, whether it is a line, text, or a 3D scene.
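To make the averaging step concrete, here is a minimal sketch of the 2x downscaling on the CPU, outside of OpenGL. The function name `downsample2x2` and the plain `std::vector` image representation are our own choices for illustration; an 8-bit grayscale image of even width and height is assumed:

```cpp
#include <cstddef>
#include <vector>

// Downscale a grayscale image by a factor of 2 in each dimension by
// averaging every 2x2 block of pixels. With only black (0) and white (255)
// input pixels, each output pixel is one of 5 shades: 0, 63, 127, 191, 255.
std::vector<unsigned char> downsample2x2(const std::vector<unsigned char>& src,
                                         std::size_t w, std::size_t h) {
    std::vector<unsigned char> dst((w / 2) * (h / 2));
    for (std::size_t y = 0; y < h / 2; ++y) {
        for (std::size_t x = 0; x < w / 2; ++x) {
            // Sum the 2x2 block in the source image and take the average.
            unsigned sum = src[(2 * y) * w + (2 * x)]
                         + src[(2 * y) * w + (2 * x + 1)]
                         + src[(2 * y + 1) * w + (2 * x)]
                         + src[(2 * y + 1) * w + (2 * x + 1)];
            dst[y * (w / 2) + x] = static_cast<unsigned char>(sum / 4);
        }
    }
    return dst;
}
```

A 2x2 input with one black and three white pixels, for example, averages to a single grey pixel of value 191.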

Supersampling using the accumulation buffer

We can implement the same technique using the accumulation buffer. The problem, however, is that the accumulation buffer is no larger than the normal color buffers; it has exactly the same size. Instead, we can render the same scene multiple times, each time shifted a tiny bit. Suppose we start with the following code that sets up the model-view-projection matrix, and then renders a frame:

glm::mat4 modelview = glm::lookAt(...);
glm::mat4 projection = glm::perspective(...);
glm::mat4 mvp = projection * modelview;
glUniformMatrix4fv(uniform_mvp, 1, GL_FALSE, glm::value_ptr(mvp));

If we want to do 2x2 anti-aliasing, we shift the scene by (0, 0), (0.5, 0), (0, 0.5) and (0.5, 0.5) pixels respectively. This is easy to do: after the model-view-projection matrix has been applied and the perspective division has been performed, we have normalized device coordinates, where (-1, -1) corresponds to the bottom left, and (1, 1) to the top right of the viewport. One pixel thus spans 2 / w units horizontally and 2 / h units vertically, where w and h are the width and height of the viewport in pixels, so to shift by (0.5, 0.5) pixels, we need to apply a translation of (1.0 / w, 1.0 / h) units. This is the result:

glm::mat4 modelview = glm::lookAt(...);
glm::mat4 projection = glm::perspective(...);

for(int i = 0; i < 4; i++) {
  glm::vec3 shift = glm::vec3((i % 2) * 1.0 / w, (i / 2) * 1.0 / h, 0);
  glm::mat4 aa = glm::translate(glm::mat4(1.0f), shift);
  glm::mat4 mvp = aa * projection * modelview;
  glUniformMatrix4fv(uniform_mvp, 1, GL_FALSE, glm::value_ptr(mvp));
  // clear the color and depth buffers and draw the scene here
  glAccum(i ? GL_ACCUM : GL_LOAD, 0.25);
}

glAccum(GL_RETURN, 1);
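The same pattern generalizes to an n x n sampling grid. As a hedged sketch (the `Offset` struct and `sampleOffsets` helper are our own names, not part of any library), sample (i, j) is shifted by (i / n, j / n) pixels, and since normalized device coordinates run from -1 to 1, one pixel spans 2 / w units horizontally and 2 / h units vertically:

```cpp
#include <vector>

// One clip-space translation per supersample (hypothetical helper type).
struct Offset { float x, y; };

// Compute the translation offsets for an n x n supersampling grid.
// w and h are the width and height of the viewport in pixels.
std::vector<Offset> sampleOffsets(int n, int w, int h) {
    std::vector<Offset> offsets;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
            // (i / n) pixels horizontally, (j / n) pixels vertically,
            // converted to NDC units via the 2 / size pixel span.
            offsets.push_back({ static_cast<float>(i) / n * 2.0f / w,
                                static_cast<float>(j) / n * 2.0f / h });
    return offsets;
}
```

Each pass would then render the scene translated by one of these offsets and accumulate it with weight 1.0 / (n * n), ending with a single glAccum(GL_RETURN, 1) as above.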


Exercises

  • Apply this technique to any of the previous tutorials.
  • Try out 3x3, 4x4, 8x8 and 16x16 pixel anti-aliasing. At what point do you think extra samples no longer improve quality?
  • Is this really the same as rendering a larger image and then scaling it down? What about a line that is thinner than one pixel?
  • Can you combine this technique efficiently with motion blur?