OpenGL Programming/Glescraft 7

A voxel world.

Introduction

We have already seen that we can greatly reduce the number of vertices to draw in a voxel world by only drawing visible cube faces, and by merging adjacent faces. We can reduce the amount of data sent to the GPU even further by using a geometry shader. The idea is to send the most compact representation of a voxel face to the geometry shader, and have it produce the six vertices (for the two triangles that make up a face) and any other data that we might need.

We could represent a whole voxel with a single vertex, but then the geometry shader would not know which sides are visible, so it would have to render all six faces, obscured or not. Drawing those obscured faces would most likely cost more GPU time than such a geometry shader would save on vertex processing.

A better way is to send two vertices per face to the vertex shader, taken from two diametrically opposed corners of that face (A and C in the diagram below). This way we can also represent merged faces. Knowing that the face is rectangular and lies in an x, y or z plane, we can reconstruct the other two corners (B and D), and from the four corners we can create two triangles in a strip (BAC and ACD). For example, if A = (1, 2, 3) and C = (4, 2, 5), the face lies in the y = 2 plane and the other two corners are (4, 2, 3) and (1, 2, 5). It is also possible to reconstruct the normal of the face this way (using the cross product of the lines AC and AB), which we can use for lighting calculations.

A voxel where the red line with vertices A and C spans the top face. The vertices B and D can be reconstructed in the geometry shader, as well as the normal (blue line) which is the cross product of the red and green lines.

Before, we had to pass the same texture coordinate in all six vertices of a face. The geometry shader has access to both input vertices at the same time, so now we only need to pass the texture coordinate in one of them; the shader can copy it to all six output vertices. That also frees up the w coordinate of the second input vertex for something else, for example intensity information.
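
To make the layout concrete, this is roughly what one (possibly merged) face now occupies in the vertex buffer, assuming the byte4 helper type (four GLbyte components) from the earlier chapters; the corner coordinates here are placeholders:

  // One face: 2 x 4 bytes instead of 6 x 4 bytes.
  vertex[i++] = byte4(ax, ay, az, type);       // corner A, w = texture index
  vertex[i++] = byte4(cx, cy, cz, intensity);  // corner C (diagonally opposite A), w = light intensity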

Enabling geometry shading

Before using a geometry shader, we should check whether geometry shaders are actually supported by the GPU, using GLEW:

  if(!GLEW_EXT_geometry_shader4) {
    fprintf(stderr, "No support for geometry shaders found\n");
    exit(1);
  }
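
Equivalently, the extension can be queried by name; a minor variation, shown only for completeness:

  if(!glewIsSupported("GL_EXT_geometry_shader4")) {
    fprintf(stderr, "No support for geometry shaders found\n");
    exit(1);
  }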

We compile and link the geometry shader the same way as the vertex and fragment shaders, except that we need to tell OpenGL what kind of input the geometry shader expects and what kind of output it generates. In our case, it expects GL_LINES as input and produces a GL_TRIANGLE_STRIP as output. This is done as follows:

  GLuint vs, fs, gs;
  if ((vs = create_shader("glescraft.v.glsl", GL_VERTEX_SHADER))   == 0) return 0;
  if ((gs = create_shader("glescraft.g.glsl", GL_GEOMETRY_SHADER_EXT)) == 0) return 0;
  if ((fs = create_shader("glescraft.f.glsl", GL_FRAGMENT_SHADER)) == 0) return 0;

  GLuint program = glCreateProgram();
  glAttachShader(program, vs);
  glAttachShader(program, fs);
  glAttachShader(program, gs);

  glProgramParameteriEXT(program, GL_GEOMETRY_INPUT_TYPE_EXT, GL_LINES);
  glProgramParameteriEXT(program, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);

  glLinkProgram(program);
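
Note that with the EXT extension, some drivers also require the maximum number of vertices the geometry shader emits per input primitive (four in our case) to be set before linking; if linking fails, try adding the following line before glLinkProgram():

  glProgramParameteriEXT(program, GL_GEOMETRY_VERTICES_OUT_EXT, 4);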

When we draw, we simply draw GL_LINES; the GPU takes care of the rest.
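
As a rough sketch (the vbo, attribute_coord, uniform_mvp and faces names are assumed from the earlier chapters; this is illustrative, not the chapter's exact render function):

  glUseProgram(program);
  glUniformMatrix4fv(uniform_mvp, 1, GL_FALSE, glm::value_ptr(mvp));

  glEnableVertexAttribArray(attribute_coord);
  glBindBuffer(GL_ARRAY_BUFFER, vbo);
  // Each vertex is one byte4: x, y, z and the texture index or intensity in w.
  glVertexAttribPointer(attribute_coord, 4, GL_BYTE, GL_FALSE, 0, 0);

  // Two vertices per face; the geometry shader expands each "line" into a quad.
  glDrawArrays(GL_LINES, 0, faces * 2);

  glDisableVertexAttribArray(attribute_coord);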

Creating vertices for the geometry shader

Before, in our update() function, we had to generate six vertices, like so (for the faces that are viewed from the negative x direction):

          // Same block as previous one? Extend it.
          if(vis && z != 0 && blk[x][y][z] == blk[x][y][z - 1]) {
            vertex[i - 5] = byte4(x, y, z + 1, side);
            vertex[i - 2] = byte4(x, y, z + 1, side);
            vertex[i - 1] = byte4(x, y + 1, z + 1, side);
            merged++;
          // Otherwise, add a new quad.
          } else {
            vertex[i++] = byte4(x, y, z, side);
            vertex[i++] = byte4(x, y, z + 1, side);
            vertex[i++] = byte4(x, y + 1, z, side);
            vertex[i++] = byte4(x, y + 1, z, side);
            vertex[i++] = byte4(x, y, z + 1, side);
            vertex[i++] = byte4(x, y + 1, z + 1, side);
          }

We can simply modify that piece of code to generate the two vertices for our geometry shader:

          // Same block as previous one? Extend it.
          if(vis && z != 0 && blk[x][y][z] == blk[x][y][z - 1]) {
            vertex[i - 2].y = y + 1;
            vertex[i - 1].z = z + 1;
            merged++;
          // Otherwise, add a new quad.
          } else {
            vertex[i++] = byte4(x, y + 1, z, side);
            vertex[i++] = byte4(x, y, z + 1, intensity);
          }

Notice how we pass the intensity information in the w coordinate of the second vertex.
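
How the intensity is assigned is entirely up to the application. As a purely hypothetical example (the face_intensity helper does not exist in the original code, and the chunk height CY and the blk array are assumed from the earlier chapters), a face could be dimmed whenever another block sits somewhere above it:

  // Hypothetical: full brightness for faces open to the sky, dimmer otherwise.
  // 127 is the maximum value of the signed byte we store the intensity in.
  int face_intensity(int x, int y, int z) {
    for(int yy = y + 1; yy < CY; yy++)
      if(blk[x][yy][z])
        return 64;  // something above this face blocks the "sun"
    return 127;     // unobstructed sunlight
  }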

Shaders

The geometry shader is as follows:

#version 120
#extension GL_EXT_geometry_shader4 : enable

varying out vec4 texcoord;
varying out vec3 normal;
varying out float intensity;
uniform mat4 mvp;

const vec3 sundir = normalize(vec3(0.5, 1, 0.25));
const float ambient = 0.5;

void main(void) {
  // Two input vertices will be the first and last vertex of the quad
  vec4 a = gl_PositionIn[0];
  vec4 d = gl_PositionIn[1];

  // Save intensity information from second input vertex
  intensity = d.w / 127.0;
  d.w = a.w;

  // Calculate the middle two vertices of the quad
  vec4 b = a;
  vec4 c = a;

  if(a.y == d.y) { // a and d share y: this is a top or bottom face
    c.z = d.z;
    b.x = d.x;
  } else { // a and d share x or z: this is a side face
    b.y = d.y;
    c.xz = d.xz;
  }

  // Calculate surface normal
  normal = normalize(cross(a.xyz - b.xyz, b.xyz - c.xyz));

  // Surface intensity depends on angle of solar light
  // This is the same for all the fragments, so we do the calculation in the geometry shader
  intensity *= ambient + (1 - ambient) * clamp(dot(normal, sundir), 0, 1);

  // Emit the vertices of the quad
  texcoord = a; gl_Position = mvp * vec4(a.xyz, 1); EmitVertex();
  texcoord = b; gl_Position = mvp * vec4(b.xyz, 1); EmitVertex();
  texcoord = c; gl_Position = mvp * vec4(c.xyz, 1); EmitVertex();
  texcoord = d; gl_Position = mvp * vec4(d.xyz, 1); EmitVertex();
  EndPrimitive();
}

The vertex shader has nothing to do but pass the input vertices on to the geometry shader unchanged; the mvp transformation now happens in the geometry shader:

#version 120

attribute vec4 coord;

void main(void) {
  gl_Position = coord;
}

The fragment shader is as follows:

#version 120

varying vec4 texcoord;
varying vec3 normal;
varying float intensity;
uniform sampler3D texture;

const vec4 fogcolor = vec4(0.6, 0.8, 1.0, 1.0);
const float fogdensity = .00003;

void main(void) {
  vec4 color;

  // Look at normal to see how to map texture coordinates
  if(normal.y != 0) {
    color = texture3D(texture, vec3(texcoord.x, texcoord.z, (texcoord.w + 0.5) / 16.0));
  } else {
    color = texture3D(texture, vec3(texcoord.x + texcoord.z, -texcoord.y, (texcoord.w + 0.5) / 16.0));
  }
  
  // Very cheap "transparency": don't draw pixels with a low alpha value
  if(color.a < 0.4)
    discard;

  // Attenuate
  color *= intensity;

  // Calculate strength of fog
  float z = gl_FragCoord.z / gl_FragCoord.w;
  float fog = clamp(exp(-fogdensity * z * z), 0.2, 1);

  // Final color is a mix of the actual color and the fog color
  gl_FragColor = mix(fogcolor, color, fog);
}

Exercises

  • Try different ways of assigning intensities to voxels.
  • The fragment shader still contains an if-statement to remap the texture coordinates. Can we move that to the geometry shader?
  • Is the order of the two input vertices important?