Movie Making Manual/Visual Effects

Note that this page covers Visual Effects, not Special Effects. SFX are physical effects performed on set, such as pyrotechnics, rain and snow. Visual Effects (VFX) are the optical tricks used to create the image, including projection, green screen, miniatures and so on.

Visual Effects have always existed; indeed, they predate Cinema. Cinema itself, based on the illusion of movement, is an effect: every second of projection is an illusion of movement, a lie told by the director to the audience. With that preface in mind, we turn to the practical aspects of visual effects. It is impossible to ignore the revolution brought to the field by digital software; digital tools today replace many crafts that in the past rested purely on human skill.

Visual Effects are at the core of the movie making process.

If you cannot get an effect in camera, then you must create the visual effect in post-production. On major motion pictures, each effects shot can cost thousands of dollars, but for low-budget filmmakers there are cheaper alternatives.

Do you want to start doing amazing effects with your small DV camcorder? Then you have at least two options: Matte Paintings and Budget Green Screen Shooting.

Budget Green Screen Shooting

Green screen is one way of adding beautiful backgrounds to live action shots. Probably the most extreme example of this is Robert Rodriguez's The Adventures of Sharkboy and Lavagirl. But even if you are shooting with only a DV camcorder, you can do inexpensive green screen shots.

You can cut the cost of the shot substantially by doing your green screen shooting outdoors, lit by a thermonuclear device (the sun), which is free (on sunny days). This means that all you have to hire is the green screen itself: none of the finely calibrated, even lighting that is normally so essential for the computer to get a good key at the other end.

As an alternative to purchase or rental, you can manufacture a green screen. Commercial lighting supply houses sell paints specially manufactured for the purpose, but it is probably possible to get by with an ordinary house paint chosen carefully to be "green enough" for the computer to pull a key.

The most common difficulties with green-screen are:

  • getting even lighting on the screen.
  • getting the lighting on the foreground to match the lighting of the (separately shot) background.

By shooting outside with a diffusion frame hung over the shot, you will get naturally even sunlight over both the screen and the actor. This will also match the daylight conditions of the background that later replaces the green. If your intended background is not normal daylight, or if you get a lot of cloud movement, this approach may not work for you.

Make sure your foreground actor is a good distance from the screen, so you don't get even a hint of green light reflected back onto him or her, as this will create problems later. With your light source being the sun (overhead), you are less likely to get green spill than if the lights were hitting the screen from the front, as they probably would in a studio.
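To make "pulling a key" concrete, here is a minimal chroma-key sketch in Python using NumPy and Pillow. The file names and the 0.15 green-dominance threshold are assumptions for illustration only; real keyers add edge softening and spill suppression.

    import numpy as np
    from PIL import Image

    # Load the green screen frame and the replacement background
    # (hypothetical file names; both must be the same size).
    fg = np.asarray(Image.open("greenscreen_frame.png").convert("RGB"),
                    dtype=np.float32) / 255.0
    bg = np.asarray(Image.open("background_plate.png").convert("RGB"),
                    dtype=np.float32) / 255.0

    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # Treat a pixel as "screen" where green clearly dominates red and blue.
    # The 0.15 margin is an assumed threshold; tune it per shot.
    alpha = np.where((g - np.maximum(r, b)) > 0.15, 0.0, 1.0)[..., None]

    # Standard "over" composite: keep the foreground where alpha is 1.
    comp = fg * alpha + bg * (1.0 - alpha)
    Image.fromarray((comp * 255).astype(np.uint8)).save("composite.png")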

Front Projection

Front Projection (often abbreviated FP) is a technique which can achieve the same result as green screen, but "in camera"; that is to say, the composite of subject and background is complete as the combined image is acquired by the camera.

The technique uses a beamsplitter located in front of the camera in such a way as to completely fill the camera's field of view, and oriented at a precise 45 degrees to the camera's shooting axis. (This rotation from the normal may be left-right or up-down.) The other components to the system are a transparency projector (still or motion picture), and a special retro-reflective lenticular screen positioned behind the action.

A beamsplitter acts as both mirror and window, reflecting a portion of the incident light and transmitting another portion. Beamsplitters are chosen for a specific application based on the ratio of reflectance to transmission. Common types are 50R/50T (50% reflectance, 50% transmission) and 70R/30T.

The beamsplitters used in Front Projection cinematography are of the plate type: simply a piece of plate glass with a special coating on one side. The coated side faces the action and is referred to as the "front" surface. The purpose of the coating is to reduce the loss of light to absorption by the glass; absorbed light is neither reflected nor transmitted, and serves only to heat the glass.

A retroreflective screen is set behind the actors and other set pieces. This screen is not just a typical diffusive projection screen, which disperses light evenly so that a large audience composed of people sitting at many different angles to the screen sees a uniformly bright image. Instead, the retroreflective screen tends to send light right back where it came from.

The classic material for retroreflective FP screens is made by 3M and sold under the trademarked name "Scotchlite". Scotchlite is used in signmaking and conspicuity applications (night-time motor vehicle safety visibility). It is available from commercial signmaking supply houses.

Retroreflection in Scotchlite is achieved using millions of microscopic glass beads suspended in a transparent substrate bound to opaque vinyl sheeting. It is available in rolls of up to four feet in width.

While constructing the large (40 feet by 100 feet) screen of Scotchlite for the film 2001: A Space Odyssey, director Stanley Kubrick and special effects supervisor Tom Howard initially laid strips of Scotchlite side by side, but found that variations in manufacturing made the seams between adjacent strips glaringly obvious in the final product. Their solution was to tear the Scotchlite into irregular overlapping pieces, minimizing the occurrence of variations in retroreflectivity large and regular enough to be discernible to the audience. Still, as Martin Hart has observed, careful examination of the FP scenes of 2001 reveals flaws introduced by variations in retroreflectivity between adjacent random patches.

A more sophisticated solution was presented in an SMPTE paper; a review of that paper will be presented in a future version of this article.

Having discussed the nature of the physical components used in Front Projection, we turn to the preparation and arrangement of these components in a working FP system.

A still or motion picture transparency projector containing the desired background image, or "plate," is placed so that the projection axis is perpendicular to the camera's shooting axis, meeting at the place where the camera's shooting axis touches the front surface of the beamsplitter. (Thus the beamsplitter's orientation is 45 degrees to both camera and projector.)

When the projector is operating, the background plate is projected onto the front surface of the beamsplitter. A portion of the image is transmitted through the beamsplitter. In ordinary applications, the transmitted part of the image is absorbed by a black surface on the side of the beamsplitter opposite the projector, to avoid stray reflections.

The portion of the image which is not transmitted or absorbed by the beamsplitter is reflected through an angle of 90 degrees, and consequently projected over the action along the camera's shooting axis, falling onto both foreground actors and objects as well as the retroreflective screen behind them.

Retroreflective materials tend to reflect light back along the path of incidence. In FP work, the background plate image is retroreflected, back toward the beamsplitter. Part of the retroreflected background image is again lost, as it is either absorbed by the beamsplitter or reflected back into the projection lens. The remainder enters the camera where it is photographed along with the action.

The only portion of the image not accounted for in the foregoing discussion is the part of the projected background plate which falls on the actors or other foreground subjects. Foreground lighting, combined with the extreme deficit in retroreflectivity of the foreground subjects compared to the special screen, means that the part of the projected image falling on the actors is so dim as to be undetectable in camera.
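A back-of-the-envelope calculation shows why the beamsplitter ratio matters: the background image crosses the beamsplitter twice, reflected once (R) on the way to the screen and transmitted once (T) on the way to the camera, so the camera receives roughly R × T of the projected light. A minimal sketch, ignoring absorption and screen gain:

    # Fraction of projected light reaching the camera for common
    # beamsplitter ratios: reflected once (R), then transmitted once (T).
    for reflect, transmit in [(0.5, 0.5), (0.7, 0.3), (0.3, 0.7)]:
        print(f"{reflect:.0%}R/{transmit:.0%}T -> camera sees "
              f"{reflect * transmit:.0%} of the projected light")

    # A 50R/50T plate maximizes the product (25%); the enormous gain of
    # the retroreflective screen makes the image bright enough to film.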

Precise alignment of the system components is required to make sure that foreground objects perfectly cover their own shadows, cast by the projector on the screen. This rules out panning and tilting, except in the special case where the camera is mounted so that panning, tilting, or both occur around the rear nodal point of the camera lens: so-called "nodal pans" and "nodal tilts". In addition, the beamsplitter must be large enough, and the camera close enough, that the camera does not take the edge of the beamsplitter into view.

Examples of nodal pan-and-tilt camera work in the context of FP can be seen in the "Dawn of Man" sequence in the film 2001: A Space Odyssey (1968), particularly the watering hole scenes. (The front projection effects on 2001 were executed by Stanley Kubrick with assistance from Tom Howard.)

A change of focal length (a zoom) does not present the same difficulty as panning or tilting. The camera can zoom in or out as long as the edges of the beamsplitter (or of the projected image) are not in view at the widest point of the zoom. In 2001, Kubrick also used large set pieces at either end of some FP shots to hide the edges of his already gigantic retroreflective screen.

In fact, a special and inventive application of zooming was used by Zoran Perisic, who had worked as a rostrum (animation stand) cameraman on 2001, to enhance the FP process for the film Superman: The Movie (1978). Electronically controlled motorized zoom lenses are placed on both camera and projector, and synchronized with one another so that both lenses zoom together at all times. This means that the background image does not change its apparent size when the camera zooms in, as the projector simultaneously projects a reduced image; in Perisic's phrase, the camera zooms to "embrace" the smaller image. The zoom does, however, cause foreground objects to appear to rush toward or away from the camera. The combination of the "static" background and the "moving" foreground enabled the visually effective flying scenes which helped to make the film a success.
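A toy calculation illustrates the principle: if the camera zoom magnifies everything by a factor z while the projector pre-shrinks its image by 1/z, the background's apparent size stays constant while the real foreground appears to grow. A sketch with arbitrary illustrative units:

    # Synchronized-zoom sketch: the camera scales everything by z while
    # the projector pre-shrinks the background plate by 1/z.
    for z in (1.0, 1.5, 2.0):
        foreground = 1.0 * z        # real object, magnified by the zoom
        background = (1.0 / z) * z  # pre-shrunk image, then magnified
        print(f"zoom {z}x: foreground appears {foreground:.2f}, "
              f"background appears {background:.2f}")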

To enhance this effect still further, the use of FP on Superman introduced two other innovations: travelling mattes (using a motion picture projector instead of a still transparency projector, in order to project a moving background), and the mounting of the entire front projection rig (camera, projector and beamsplitter) on a large motion-controlled robotic arm with six degrees of freedom, shooting against a massive curved screen.

In rear projection process photography, the projector's shutter must be synchronized with the camera's by mechanical or electronic means in order to avoid background flicker; the same is true when a motion picture projector is used for front projection.

The motion-controlled front-projection mount was a masterpiece of engineering for 1978, and used an early microprocessor for control. Every aspect of the rig's operation and motion could be recorded to computer tape for later automatic playback, causing the rig to move and operate exactly as trained.

Matte Paintings

The oldest and probably most underrated visual effect is the matte painting. We see matte paintings all the time, but because they look so natural, we do not notice them.

Originally, matte painting was done on glass that stood directly in front of the camera. A partial set is built, only as big as the area where the actors will perform; the rest of the movie set is empty space (or something that you do not want to be seen in the movie). Except for a clear area of the glass through which the set and actors are photographed, the rest of the scene is painted on the glass. This allows you to add scenery and buildings as paintings. As long as the actors can be seen through the clear area of the glass, you cannot tell that they are not part of the painted movie set.

Today, matte paintings are done both with paint and with CG (computer-generated imagery). Rather than being filmed through glass, the actors are filmed normally and later composited into the matte painting. The distinction between matte paintings and computer-generated visual effects has therefore blurred: if the actors are filmed on a partial movie set (without any green screen, etc.), the effect is considered a matte painting, even if computer-generated elements are used to achieve it.

3D Animation for Visual Effects

When you start looking at the possibility of using 3D computer-generated effects, you need to understand the different types of 3D animation.

1. General Purpose Animation Programs
Programs such as Blender, LightWave, Maya, and 3D Studio Max are general purpose animation programs. They are very powerful, expensive (except Blender, which is free), have steep learning curves, and are used on most high-end effects movies.
2. Special Purpose Animation Programs
Programs such as Vue, Bryce, Poser, and DAZ Studio are designed for a specific purpose. Vue and Bryce are designed to create realistic natural scenery. Poser and DAZ Studio are designed to work with special computer models called Poser figures or digital puppets. Some of these programs, such as DAZ Studio, Blender and Bryce, are even free.
3. Special Software Plug-ins
Software modules such as Character Studio work inside a general purpose animation program to create a special kind of animation, similar to a special purpose animation program. LightWave, Maya, and 3D Studio Max can be greatly expanded through the use of plug-in modules.
4. Support Programs
Programs that aid in the animation but do not actually do any rendering can be extremely useful for special tasks that would be awkward in a general purpose animation program. These expand the power of LightWave, Maya, and 3D Studio Max without making those programs too cumbersome.

3D Modeling

All elements in 3D animation must be modeled. Programs such as LightWave, Maya, XSI and 3D Studio Max come with a modeling module built in. Programs such as Vue, Bryce, Poser, and DAZ Studio do not, but in the case of Poser and DAZ Studio you can buy hundreds of ready-made figures designed for these programs. Pixologic ZBrush and Autodesk Mudbox are especially good at modeling organic objects such as humans and creatures.

3D Animation

Animation is done in three parts: the modeling, the actual animation and the rendering. The actual animation can be:

1. Keyframe animation
Each movement is entered into the computer by noting the position of objects at specific frames, or points in time. The movement between these key points is then calculated by the animation program according to rules set up by the animator (straight line, curved, etc.; see the sketch after this list).
2. Programmed animation
For a flock of birds, rather than record the position and movement of each bird, a computer program calculates where all the birds go and how they move.
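As a sketch of keyframe animation, the following computes linearly interpolated in-between positions from a handful of assumed keyframes (real animation programs also offer curved and eased interpolation):

    # Keyframes: frame number -> x position (illustrative values).
    keys = {0: 0.0, 24: 10.0, 48: 4.0}

    def value_at(frame, keys):
        """Linear interpolation between the surrounding keyframes."""
        frames = sorted(keys)
        if frame <= frames[0]:
            return keys[frames[0]]
        for a, b in zip(frames, frames[1:]):
            if a <= frame <= b:
                t = (frame - a) / (b - a)
                return keys[a] + t * (keys[b] - keys[a])
        return keys[frames[-1]]

    for f in (0, 12, 24, 36, 48):
        print(f, value_at(f, keys))   # in-betweens computed by the program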

Compositing

Compositing is the process of combining various elements, such as 3D imagery, live action footage and still imagery, to create a finished shot. All visual effects that include live actors require compositing. As mentioned above, matte paintings are no longer painted onto glass; rather, the live action is composited with the matte painting using a compositing program such as Adobe After Effects or Apple's Shake.

Software Compositing Applications

  • Apple Shake (Discontinued)
  • Adobe After Effects
  • Autodesk Combustion
  • Blender
  • The Foundry Nuke
  • Eyeon Digital Fusion
  • Jahshaka
  • Sony Vegas Pro

Software & Hardware Compositing Systems

  • Autodesk Inferno
  • Autodesk Flame
  • Autodesk Flint

All compositing applications provide the same basic tools, and there is no hard and fast rule regarding which applications are for film and which are for commercials. For example, while After Effects has been used on such films as The Aviator, it is also very widely used for broadcast and title design. Some applications do, however, come with specific tools which may prove advantageous depending on the task at hand.

The primary difference between software based compositing applications and the combined software and hardware solutions is that the large Autodesk systems are significantly more expensive but provide near real-time performance at film or HD resolutions.

Compositing applications typically follow two different working paradigms.

  • Layer Based
  • Node Based

Layer based compositing applications may seem more approachable to the novice compositor, especially one who has worked with still images in Photoshop or the GIMP. While layer based applications can provide the same results as node based applications, node based applications are far easier to work with when dealing with numerous elements and multiple 3D render passes. While daunting at first, node based systems may give the compositor greater control over the shot and easier problem solving. Even a beginning artist may benefit from learning a node based package, in that she will gain a deep understanding of the exact operations taking place to create the effect.
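To illustrate the node-based paradigm, here is a toy node graph in Python, with single brightness values standing in for full images; the node functions and values are purely illustrative:

    # Each node computes a function of its upstream nodes' outputs;
    # the composite is simply the evaluation of the final node.
    class Node:
        def __init__(self, fn, *inputs):
            self.fn, self.inputs = fn, inputs
        def evaluate(self):
            return self.fn(*(n.evaluate() for n in self.inputs))

    plate  = Node(lambda: 0.8)                    # background brightness
    fg     = Node(lambda: 0.4)                    # foreground brightness
    graded = Node(lambda v: v * 0.9, fg)          # a color-correct node
    merged = Node(lambda a, b: 0.5 * a + 0.5 * b, graded, plate)  # merge

    print(merged.evaluate())                      # 0.58: the composite

Because each downstream node pulls from its upstream nodes, inserting or rewiring an operation in the middle of a complex composite is far easier than reshuffling a stack of layers.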

8-Bit Graphics

Every pixel of an image used in a composite is composed of four color channels: red, green, blue and alpha. Each channel has an 8-bit color depth, resulting in a 32-bit image. Higher color depths include 16 bits per channel and 32 bits per channel (float). These higher color depths allow for smoother display of color variation, for example in gradients.
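A quick experiment illustrates the difference, assuming a subtle gradient stored first as floats and then quantized to 8-bit levels:

    import numpy as np

    # A subtle gradient: 1000 samples spanning only 10% of the range.
    gradient = np.linspace(0.0, 0.1, 1000)
    as_8bit = np.round(gradient * 255) / 255   # snap to the 256 levels

    print("distinct float values:", len(np.unique(gradient)))  # 1000
    print("distinct 8-bit values:", len(np.unique(as_8bit)))   # ~27

The float version keeps every step, while the 8-bit version collapses the gradient into roughly 27 bands, which show up as visible banding on screen.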

Rotoscoping

Rotoscoping, or masking, is the basis of compositing. It is the process of drawing a mask around an element in a frame or sequence of frames. The resulting image is a combination of the red, green and blue color channels plus an alpha channel which defines transparency. Rotoscoping works frame by frame; you cannot ignore a single frame that is not perfectly masked.
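As a minimal rotoscoping sketch using Pillow, the following rasterizes a hand-drawn polygon into an alpha channel for a single frame; the file name and point coordinates are assumed for illustration, and in practice the artist adjusts the shape on every frame:

    from PIL import Image, ImageDraw

    frame = Image.open("frame_0001.png").convert("RGB")  # assumed file name
    mask = Image.new("L", frame.size, 0)                 # black = transparent

    # The roto shape for this frame, drawn as an opaque (white) polygon.
    ImageDraw.Draw(mask).polygon(
        [(120, 80), (300, 60), (340, 400), (100, 420)], fill=255)

    frame.putalpha(mask)                 # attach the alpha channel (RGBA)
    frame.save("frame_0001_roto.png")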

Tracking

Tracking is the process of matching a foreground element's motion to that of the background. Basically, you pick a decent (i.e. well-defined, unique, high-contrast) point on your background and tell the computer to follow that point. What you get is the motion path of your point, which you can then apply to your foreground.

Tracking is divided into two categories:
2D Tracking:
The process described above (usually referred to simply as "tracking"; almost all compositing applications support it).
3D Tracking:
3D Tracking attempts to "solve", or derive, the spatial relationships between points tracked in 2D space, approximating the distance and parallax between these points in 3D space through the use of complex photogrammetry algorithms. The end result is an approximation of the motion of the camera used to film the scene, which can be exported to a 3D or 2D application to aid in the process of matchmoving. Shots which track most effectively tend to be those with a smooth, continuous camera path and a significant amount of parallax.
Many compositing applications can handle 3D tracking data with varying degrees of proficiency.
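A minimal 2D point tracker can be sketched as template matching: cut a small patch around the chosen point in one frame, then search a window in the next frame for the position that minimizes the sum of squared differences. This sketch assumes grayscale frames supplied as NumPy arrays:

    import numpy as np

    def track(prev, curr, pos, patch=8, search=16):
        """Return the tracked point's position (row, col) in the next frame."""
        y, x = pos
        tmpl = prev[y - patch:y + patch, x - patch:x + patch]
        best_err, best_pos = np.inf, pos
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                cand = curr[yy - patch:yy + patch, xx - patch:xx + patch]
                if cand.shape != tmpl.shape:
                    continue  # candidate window fell off the frame edge
                err = np.sum((cand - tmpl) ** 2)
                if err < best_err:
                    best_err, best_pos = err, (yy, xx)
        return best_pos

Applied frame after frame, this yields the motion path described above; production trackers add sub-pixel refinement and handle rotation and scale changes.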