GLSL Programming/Unity/Projection of Bumpy Surfaces

[Figure: A dry-stone wall in England. Note how some stones stick out of the wall.]

This tutorial covers (single-step) parallax mapping.

It is based on and extends Section “Lighting of Bumpy Surfaces”.

Improving Normal Mapping

The normal mapping technique presented in Section “Lighting of Bumpy Surfaces” only changes the lighting of a flat surface to create the illusion of bumps and dents. If one looks straight onto a surface (i.e. in the direction of the surface normal vector), this works very well. However, if one looks onto a surface from some other angle (as in the image above), the bumps should also stick out of the surface while the dents should recede into the surface. Of course, this could be achieved by geometrically modeling bumps and dents; however, this would require processing many more vertices. Single-step parallax mapping, on the other hand, is a very efficient technique similar to normal mapping, which doesn't require additional triangles but can still move virtual bumps by several pixels to make them stick out of a flat surface. However, the technique is limited to bumps and dents of small heights and requires some fine-tuning for best results.

[Figure: Vectors and distances in parallax mapping: view vector V, surface normal vector N, height h of the height map, offset o to the intersection of the view ray with a surface at height h.]

Parallax Mapping Explained

Parallax mapping was proposed in 2001 by Tomomichi Kaneko et al. in their paper “Detailed shape representation with parallax mapping” (ICAT 2001). The basic idea is to offset the texture coordinates that are used for the texturing of the surface (in particular normal mapping). If this offset of texture coordinates is computed appropriately, it is possible to move parts of the texture (e.g. bumps) as if they were sticking out of the surface.

The illustration above shows the view vector V pointing toward the viewer and the surface normal vector N at the point of the surface that is rasterized in a fragment shader. Parallax mapping proceeds in three steps:

  • Lookup of the height h at the rasterized point in a height map, which is depicted by the wavy line on top of the straight line at the bottom of the illustration.
  • Computation of the intersection of the viewing ray in direction of V with a surface at height h parallel to the rendered surface. The offset o is the distance between the rasterized surface point moved by h in the direction of N and this intersection point. If these two points are projected onto the rendered surface, o is also the distance between the rasterized point and a new point on the surface (marked by a cross in the illustration). This new surface point is a better approximation of the point that is actually visible for the view ray in direction V if the surface were displaced by the height map.
  • Transformation of the offset o into texture coordinate space in order to compute an offset of texture coordinates for all following texture lookups (see the sketch after this list).
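
The following minimal GLSL sketch outlines these three steps in a fragment shader. The names heightMap, heightScale, viewDir (the view direction in the local surface coordinate system; see below), and texCoords are placeholders for illustration; the actual implementation below differs in naming and additionally clamps the offset:

           // step 1: look up the height at the rasterized point;
           // remap the stored value from [0,1] to [-0.5,0.5] times a scale
           float h = heightScale * (texture2D(heightMap, texCoords).a - 0.5);
           // steps 2 and 3: offset to the intersection of the view ray
           // with a surface at height h; with appropriately scaled axes,
           // this offset is already in texture coordinate space
           vec2 offset = h * viewDir.xy / viewDir.z;
           // apply the offset in all following texture lookups
           vec2 parallaxCoords = texCoords + offset;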

For the computation of the offset o we require the height h of the height map at the rasterized point, which is implemented in the example by a texture lookup in the A component of the texture property _ParallaxMap, which should be a gray-scale image representing heights as discussed in Section “Lighting of Bumpy Surfaces”. We also require the view direction V in the local surface coordinate system formed by the normal vector (z axis), the tangent vector (x axis), and the binormal vector (y axis), which was also introduced in Section “Lighting of Bumpy Surfaces”. To this end we compute a transformation from local surface coordinates to object space with:

  $M_{\text{surface} \to \text{object}} = \begin{bmatrix} T_x & B_x & N_x \\ T_y & B_y & N_y \\ T_z & B_z & N_z \end{bmatrix}$

where T, B and N are given in object coordinates. (In Section “Lighting of Bumpy Surfaces” we had a similar matrix but with vectors in world coordinates.)

We compute the view direction V in object space (as the difference between the rasterized position and the camera position transformed from world space to object space) and then we transform it to the local surface space with the matrix $M_{\text{object} \to \text{surface}}$, which can be computed as:

  $M_{\text{object} \to \text{surface}} = M_{\text{surface} \to \text{object}}^{-1} = M_{\text{surface} \to \text{object}}^{T}$

This is possible because T, B and N are orthogonal and normalized. (Actually, the situation is a bit more complicated because we won't normalize these vectors but use their length for another transformation; see below.) Thus, in order to transform V from object space to the local surface space, we have to multiply it with the transposed matrix $M_{\text{surface} \to \text{object}}^{T}$. In GLSL, this is achieved by multiplying the vector from the left onto the matrix $M_{\text{surface} \to \text{object}}$.
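
In code, multiplying a vector from the left onto a matrix is just a reordering of the operands. A minimal sketch (with assumed variables tangent, binormal, normal, and a vector v in object space):

           mat3 M = mat3(tangent, binormal, normal); // columns T, B, N
           vec3 vInSurfaceCoords = v * M; // same as transpose(M) * v, i.e.
              // vec3(dot(v, tangent), dot(v, binormal), dot(v, normal))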

Once we have V in the local surface coordinate system with the z axis in the direction of the normal vector N, we can compute the offsets $o_x$ (in x direction) and $o_y$ (in y direction) by using similar triangles (compare with the illustration):

  $o_x / h = V_x / V_z$   and   $o_y / h = V_y / V_z$ .

Thus:

  $o_x = h \, V_x / V_z$   and   $o_y = h \, V_y / V_z$ .
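
As a quick numeric check (with made-up values): for a height $h = 0.01$ and a non-normalized view vector $V = (1, 0, 2)$ in surface coordinates, the offsets are:

  $o_x = 0.01 \cdot 1 / 2 = 0.005$   and   $o_y = 0$ .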

Note that it is not necessary to normalize V because we use only ratios of its components, which are not affected by normalization: dividing both $V_x$ (or $V_y$) and $V_z$ by the length $|V|$ leaves each ratio unchanged.

Finally, we have to transform $o_x$ and $o_y$ into texture space. This would be quite difficult if Unity didn't help us: the tangent attribute Tangent is actually appropriately scaled and has a fourth component Tangent.w for scaling the binormal vector, such that the transformation of the view direction V scales $V_x$ and $V_y$ appropriately to give $o_x$ and $o_y$ in texture coordinate space without further computations.

Implementation

The implementation shares most of the code with Section “Lighting of Bumpy Surfaces”. In particular, the same scaling of the binormal vector with the fourth component of the Tangent attribute is used in order to take the mapping of the offsets from local surface space to texture space into account:

           vec3 binormal = cross(gl_Normal, vec3(Tangent)) * Tangent.w;

In the vertex shader, we have to add a varying for the view vector V in the local surface coordinate system (with the scaling of the axes taking the mapping to texture space into account). This varying is called viewDirInScaledSurfaceCoords. It is computed by multiplying the view vector in object coordinates (viewDirInObjectCoords) from the left onto the matrix $M_{\text{surface} \to \text{object}}$ (localSurface2ScaledObject) as explained above:

            vec3 viewDirInObjectCoords = vec3(
               modelMatrixInverse * vec4(_WorldSpaceCameraPos, 1.0) 
               - gl_Vertex);
            mat3 localSurface2ScaledObject = 
               mat3(vec3(Tangent), binormal, gl_Normal); 
               // vectors are orthogonal
            viewDirInScaledSurfaceCoords = 
               viewDirInObjectCoords * localSurface2ScaledObject; 
               // we multiply with the transpose to multiply with 
               // the "inverse" (apart from the scaling)

The rest of the vertex shader is the same as for normal mapping, see Section “Lighting of Bumpy Surfaces”.

In the fragment shader, we first query the height map for the height of the rasterized point. This height is specified by the A component of the texture _ParallaxMap. The values between 0 and 1 are transformed to the range -_Parallax/2 to +_Parallax/2 with the shader property _Parallax in order to offer some user control over the strength of the effect (and to be compatible with the fallback shader):

           float height = _Parallax 
               * (-0.5 + texture2D(_ParallaxMap, _ParallaxMap_ST.xy 
               * textureCoordinates.xy + _ParallaxMap_ST.zw).a);
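
As a quick check (with a made-up value _Parallax = 0.04), the extreme height-map values yield:

  h_max = 0.04 * (-0.5 + 1.0) = +0.02 = +_Parallax/2   and   h_min = 0.04 * (-0.5 + 0.0) = -0.02 = -_Parallax/2 .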

The offsets $o_x$ and $o_y$ are then computed as described above. However, we also clamp each offset to the user-specified interval between -_MaxTexCoordOffset and +_MaxTexCoordOffset in order to make sure that the offset stays within reasonable bounds. (If the height map consists of more or less flat plateaus of constant height with smooth transitions between these plateaus, _MaxTexCoordOffset should be smaller than the thickness of these transition regions; otherwise the sample point might fall on a different plateau with a different height, which would make the approximation of the intersection point arbitrarily bad.) The code is:

           vec2 texCoordOffsets = 
              clamp(height * viewDirInScaledSurfaceCoords.xy 
              / viewDirInScaledSurfaceCoords.z,
              -_MaxTexCoordOffset, +_MaxTexCoordOffset);

In all following texture lookups, we have to apply the offsets to the texture coordinates; i.e., we have to replace vec2(textureCoordinates) (or equivalently textureCoordinates.xy) with (textureCoordinates.xy + texCoordOffsets), e.g.:

             vec4 encodedNormal = texture2D(_BumpMap, 
               _BumpMap_ST.xy * (textureCoordinates.xy 
               + texCoordOffsets) + _BumpMap_ST.zw);

The rest of the fragment shader code is just as it was for Section “Lighting of Bumpy Surfaces”.

Complete Shader Code

As discussed in the previous section, most of this code is taken from Section “Lighting of Bumpy Surfaces”. Note that if you want to use the code on a mobile device with OpenGL ES, make sure to change the decoding of the normal map as described in that tutorial.

The part about parallax mapping is actually only a few lines. Most of the names of the shader properties were chosen according to the fallback shader; the user interface labels are much more descriptive.

Shader "GLSL parallax mapping" {
   Properties {
      _BumpMap ("Normal Map", 2D) = "bump" {}
      _ParallaxMap ("Heightmap (in A)", 2D) = "black" {}
      _Parallax ("Max Height", Float) = 0.01
      _MaxTexCoordOffset ("Max Texture Coordinate Offset", Float) = 
         0.01
      _Color ("Diffuse Material Color", Color) = (1,1,1,1) 
      _SpecColor ("Specular Material Color", Color) = (1,1,1,1) 
      _Shininess ("Shininess", Float) = 10
   }
   SubShader {
      Pass {      
         Tags { "LightMode" = "ForwardBase" } 
            // pass for ambient light and first light source
 
         GLSLPROGRAM
 
         // User-specified properties
         uniform sampler2D _BumpMap; 
         uniform vec4 _BumpMap_ST;
         uniform sampler2D _ParallaxMap; 
         uniform vec4 _ParallaxMap_ST;
         uniform float _Parallax;
         uniform float _MaxTexCoordOffset;
         uniform vec4 _Color; 
         uniform vec4 _SpecColor; 
         uniform float _Shininess;
 
         // The following built-in uniforms (except _LightColor0) 
         // are also defined in "UnityCG.glslinc", 
         // i.e. one could #include "UnityCG.glslinc" 
         uniform vec3 _WorldSpaceCameraPos; 
            // camera position in world space
         uniform mat4 _Object2World; // model matrix
         uniform mat4 _World2Object; // inverse model matrix
         uniform vec4 unity_Scale; // w = 1/uniform scale; 
            // should be multiplied to _World2Object
         uniform vec4 _WorldSpaceLightPos0; 
            // direction to or position of light source
         uniform vec4 _LightColor0; 
            // color of light source (from "Lighting.cginc")
 
         varying vec4 position; 
            // position of the vertex (and fragment) in world space 
         varying vec4 textureCoordinates; 
         varying mat3 localSurface2World; // mapping from 
            // local surface coordinates to world coordinates
         varying vec3 viewDirInScaledSurfaceCoords;
 
         #ifdef VERTEX
 
         attribute vec4 Tangent;
 
         void main()
         {                                
            mat4 modelMatrix = _Object2World;
            mat4 modelMatrixInverse = _World2Object * unity_Scale.w;
 
            localSurface2World[0] = normalize(vec3(
               modelMatrix * vec4(vec3(Tangent), 0.0)));
            localSurface2World[2] = normalize(vec3(
               vec4(gl_Normal, 0.0) * modelMatrixInverse));
            localSurface2World[1] = normalize(
               cross(localSurface2World[2], localSurface2World[0]) 
               * Tangent.w);

            vec3 binormal = 
               cross(gl_Normal, vec3(Tangent)) * Tangent.w; 
               // appropriately scaled tangent and binormal 
               // to map distances from object space to texture space
 
            vec3 viewDirInObjectCoords = vec3(modelMatrixInverse 
               * vec4(_WorldSpaceCameraPos, 1.0) - gl_Vertex);
            mat3 localSurface2ScaledObject = 
               mat3(vec3(Tangent), binormal, gl_Normal); 
               // vectors are orthogonal
            viewDirInScaledSurfaceCoords = 
               viewDirInObjectCoords * localSurface2ScaledObject; 
               // we multiply with the transpose to multiply 
               // with the "inverse" (apart from the scaling)

            position = modelMatrix * gl_Vertex;
            textureCoordinates = gl_MultiTexCoord0;
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
         }
 
         #endif
 
         #ifdef FRAGMENT
 
         void main()
         {
            // parallax mapping: compute height and 
            // find offset in texture coordinates 
            // for the intersection of the view ray 
            // with the surface at this height
            
            float height = 
               _Parallax * (-0.5 + texture2D(_ParallaxMap,  
               _ParallaxMap_ST.xy * textureCoordinates.xy 
               + _ParallaxMap_ST.zw).a);
            vec2 texCoordOffsets = 
               clamp(height * viewDirInScaledSurfaceCoords.xy 
               / viewDirInScaledSurfaceCoords.z,
               -_MaxTexCoordOffset, +_MaxTexCoordOffset);

            // normal mapping: lookup and decode normal from bump map
            
            // in principle we have to normalize the columns 
            // of "localSurface2World" again; however, the potential 
            // problems are small since we use this matrix only 
            // to compute "normalDirection", which we normalize anyways
            vec4 encodedNormal = texture2D(_BumpMap, 
               _BumpMap_ST.xy * (textureCoordinates.xy 
               + texCoordOffsets) + _BumpMap_ST.zw);
            vec3 localCoords = 
               vec3(2.0 * encodedNormal.ag - vec2(1.0), 0.0);
            localCoords.z = sqrt(1.0 - dot(localCoords, localCoords));
               // approximation without sqrt: localCoords.z = 
               // 1.0 - 0.5 * dot(localCoords, localCoords);
            vec3 normalDirection = 
               normalize(localSurface2World * localCoords);
 
            // per-pixel lighting using the Phong reflection model 
            // (with linear attenuation for point and spot lights)
 
            vec3 viewDirection = 
               normalize(_WorldSpaceCameraPos - vec3(position));
            vec3 lightDirection;
            float attenuation;
 
            if (0.0 == _WorldSpaceLightPos0.w) // directional light?
            {
               attenuation = 1.0; // no attenuation
               lightDirection = normalize(vec3(_WorldSpaceLightPos0));
            } 
            else // point or spot light
            {
               vec3 vertexToLightSource = 
                  vec3(_WorldSpaceLightPos0 - position);
               float distance = length(vertexToLightSource);
               attenuation = 1.0 / distance; // linear attenuation 
               lightDirection = normalize(vertexToLightSource);
            }
 
            vec3 ambientLighting = 
               vec3(gl_LightModel.ambient) * vec3(_Color);
 
            vec3 diffuseReflection = 
               attenuation * vec3(_LightColor0) * vec3(_Color) 
               * max(0.0, dot(normalDirection, lightDirection));
 
            vec3 specularReflection;
            if (dot(normalDirection, lightDirection) < 0.0) 
               // light source on the wrong side?
            {
               specularReflection = vec3(0.0, 0.0, 0.0); 
                  // no specular reflection
            }
            else // light source on the right side
            {
               specularReflection = attenuation * vec3(_LightColor0) 
                  * vec3(_SpecColor) * pow(max(0.0, dot(
                  reflect(-lightDirection, normalDirection), 
                  viewDirection)), _Shininess);
            }
 
            gl_FragColor = vec4(ambientLighting + diffuseReflection 
               + specularReflection, 1.0);
         }
 
         #endif
 
         ENDGLSL
      }
 
      Pass {      
         Tags { "LightMode" = "ForwardAdd" } 
            // pass for additional light sources
         Blend One One // additive blending 
 
         GLSLPROGRAM
 
         // User-specified properties
         uniform sampler2D _BumpMap; 
         uniform vec4 _BumpMap_ST;
         uniform sampler2D _ParallaxMap; 
         uniform vec4 _ParallaxMap_ST;
         uniform float _Parallax;
         uniform float _MaxTexCoordOffset;
         uniform vec4 _Color; 
         uniform vec4 _SpecColor; 
         uniform float _Shininess;
 
         // The following built-in uniforms (except _LightColor0) 
         // are also defined in "UnityCG.glslinc", 
         // i.e. one could #include "UnityCG.glslinc" 
         uniform vec3 _WorldSpaceCameraPos; 
            // camera position in world space
         uniform mat4 _Object2World; // model matrix
         uniform mat4 _World2Object; // inverse model matrix
         uniform vec4 unity_Scale; // w = 1/uniform scale; 
            // should be multiplied to _World2Object
         uniform vec4 _WorldSpaceLightPos0; 
            // direction to or position of light source
         uniform vec4 _LightColor0; 
            // color of light source (from "Lighting.cginc")
 
         varying vec4 position; 
            // position of the vertex (and fragment) in world space 
         varying vec4 textureCoordinates; 
         varying mat3 localSurface2World; // mapping
            // from local surface coordinates to world coordinates
         varying vec3 viewDirInScaledSurfaceCoords;
 
         #ifdef VERTEX
 
         attribute vec4 Tangent;
 
         void main()
         {                                
            mat4 modelMatrix = _Object2World;
            mat4 modelMatrixInverse = _World2Object * unity_Scale.w;
 
            localSurface2World[0] = normalize(vec3(
               modelMatrix * vec4(vec3(Tangent), 0.0)));
            localSurface2World[2] = normalize(vec3(
               vec4(gl_Normal, 0.0) * modelMatrixInverse));
            localSurface2World[1] = normalize(
               cross(localSurface2World[2], localSurface2World[0]) 
               * Tangent.w);

            vec3 binormal = 
               cross(gl_Normal, vec3(Tangent)) * Tangent.w; 
               // appropriately scaled tangent and binormal 
               // to map distances from object space to texture space
 
            vec3 viewDirInObjectCoords = vec3(modelMatrixInverse 
               * vec4(_WorldSpaceCameraPos, 1.0) - gl_Vertex);
            mat3 localSurface2ScaledObject = 
               mat3(vec3(Tangent), binormal, gl_Normal); 
               // vectors are orthogonal
            viewDirInScaledSurfaceCoords = 
               viewDirInObjectCoords * localSurface2ScaledObject; 
               // we multiply with the transpose to multiply 
               // with the "inverse" (apart from the scaling)

            position = modelMatrix * gl_Vertex;
            textureCoordinates = gl_MultiTexCoord0;
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
         }
 
         #endif
 
         #ifdef FRAGMENT
 
         void main()
         {
            // parallax mapping: compute height and 
            // find offset in texture coordinates 
            // for the intersection of the view ray 
            // with the surface at this height
            
            float height = 
               _Parallax * (-0.5 + texture2D(_ParallaxMap,  
               _ParallaxMap_ST.xy * textureCoordinates.xy 
               + _ParallaxMap_ST.zw).a);
            vec2 texCoordOffsets = 
               clamp(height * viewDirInScaledSurfaceCoords.xy 
               / viewDirInScaledSurfaceCoords.z,
               -_MaxTexCoordOffset, +_MaxTexCoordOffset);

            // normal mapping: lookup and decode normal from bump map
            
            // in principle we have to normalize the columns 
            // of "localSurface2World" again; however, the potential 
            // problems are small since we use this matrix only to 
            // compute "normalDirection", which we normalize anyways
            vec4 encodedNormal = texture2D(_BumpMap, 
               _BumpMap_ST.xy * (textureCoordinates.xy 
               + texCoordOffsets) + _BumpMap_ST.zw);
            vec3 localCoords = 
               vec3(2.0 * encodedNormal.ag - vec2(1.0), 0.0);
            localCoords.z = sqrt(1.0 - dot(localCoords, localCoords));
               // approximation without sqrt: localCoords.z = 
               // 1.0 - 0.5 * dot(localCoords, localCoords);
            vec3 normalDirection = 
               normalize(localSurface2World * localCoords);
 
            // per-pixel lighting using the Phong reflection model 
            // (with linear attenuation for point and spot lights)
 
            vec3 viewDirection = 
               normalize(_WorldSpaceCameraPos - vec3(position));
            vec3 lightDirection;
            float attenuation;
 
            if (0.0 == _WorldSpaceLightPos0.w) // directional light?
            {
               attenuation = 1.0; // no attenuation
               lightDirection = normalize(vec3(_WorldSpaceLightPos0));
            } 
            else // point or spot light
            {
               vec3 vertexToLightSource = 
                  vec3(_WorldSpaceLightPos0 - position);
               float distance = length(vertexToLightSource);
               attenuation = 1.0 / distance; // linear attenuation 
               lightDirection = normalize(vertexToLightSource);
            }
 
            vec3 diffuseReflection = 
               attenuation * vec3(_LightColor0) * vec3(_Color) 
               * max(0.0, dot(normalDirection, lightDirection));
 
            vec3 specularReflection;
            if (dot(normalDirection, lightDirection) < 0.0) 
               // light source on the wrong side?
            {
               specularReflection = vec3(0.0, 0.0, 0.0); 
                  // no specular reflection
            }
            else // light source on the right side
            {
               specularReflection = attenuation * vec3(_LightColor0) 
                  * vec3(_SpecColor) * pow(max(0.0, dot(
                  reflect(-lightDirection, normalDirection), 
                  viewDirection)), _Shininess);
            }
 
            gl_FragColor = 
               vec4(diffuseReflection + specularReflection, 1.0);
         }
 
         #endif
 
         ENDGLSL
      }
   } 
   // The definition of a fallback shader should be commented out 
   // during development:
   // Fallback "Parallax Specular"
}

Summary

Congratulations! If you actually understand the whole shader, you have come a long way. In fact, the shader includes lots of concepts (transformations between coordinate systems, application of the inverse of an orthogonal matrix by multiplying a vector from the left onto it, the Phong reflection model, normal mapping, parallax mapping, ...). More specifically, we have seen:

  • How parallax mapping improves upon normal mapping.
  • How parallax mapping is described mathematically.
  • How parallax mapping is implemented.

Further Reading

If you still want to know more

  • about details of the shader code, you should read Section “Lighting of Bumpy Surfaces”.
  • about parallax mapping, you could read the original publication by Tomomichi Kaneko et al.: “Detailed shape representation with parallax mapping”, ICAT 2001, pages 205–208, which is available online.



Unless stated otherwise, all example source code on this page is granted to the public domain.