Advanced Simple Profile
Also known as MPEG-4 Part 2, ASP defines a widely compatible subset of the MPEG-4 standard's features that is well established in the market. The most common ASP codecs are DivX and XviD. Please see Video Codecs.
Advanced Video Coding
An analysis pass (aka AviSynth job) is a job which simply requests each frame consecutively from AviSynth. This is useful in two cases:
- for two-pass AviSynth scripts, when the first pass is simply collecting metrics (dedup, some tdecimate modes, etc.)
- benchmarking AviSynth scripts.
Anamorphic encoding is the process of creating a video where the encoded aspect ratio (AR) is different from the displayed aspect ratio. This allows for better image quality without having to upsize the image during encoding. The most common scenario for anamorphic encoding is for encoding DVDs at their full resolution.
Anamorphic encodes are usually found in MKV or MP4 files, but anamorphic encoding is technically also supported in AVI files, either by storing the AR information in the MPEG-4 bitstream (not supported by all codecs) or by storing it in the container (not supported by any AVI splitters). Suffice it to say, anamorphic encoding in AVI files is possible, but these files may be displayed wrongly by some programs.
The ratio of a video's horizontal display length to its vertical display length. For example, you can determine a square-pixel video's aspect ratio by dividing the width by the height, giving the decimal aspect ratio (4 / 3 = 1.33, 16 / 9 = 1.78). Some videos are anamorphic, so you cannot always rely on the ratio of horizontal resolution to vertical resolution to give the correct value. Common aspect ratios are 1.33 (4:3 'fullscreen'), 1.78 (16:9 widescreen), and 1.85:1 and 2.35:1 (common American cinema widescreen ratios).
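The arithmetic above can be sketched in a few lines (a hypothetical helper for illustration, not part of MeGUI):

```python
def aspect_ratio(width, height):
    """Decimal aspect ratio of a square-pixel frame."""
    return width / height

# The common ratios from the examples above
print(round(aspect_ratio(4, 3), 2))   # 1.33
print(round(aspect_ratio(16, 9), 2))  # 1.78
```

Remember that for anamorphic video this simple division of the stored resolution does not give the displayed ratio.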
See Aspect ratio signalling in AviSynth scripts for how MeGUI handles them.
There are now some hardware devices which support AVC video (such as the iPod and PSP). The AVC standard includes something called Levels, which is a specification of video complexity in terms of the processing power required to decode a given stream. This is useful for low-power applications like the iPod. In the x264 codec configuration, there is a box which allows the level to be set. Enforcing this level could mean limits on the allowed resolution. MeGUI will tell you what changes need to be made to enforce that level if you go to Tools -> Validate AVC Level. You can then make the changes. Alternatively, if you have a level set and you use the One Click Encoder, and your chosen resolution is too high for the level, it will find the highest resolution that satisfies the level and use that.
The upshot is that if you want to make a hardware-compatible encode, you should download the relevant profile by Sharktooth and use that. If a level is set, you can simply use this profile with the One Click Encoder, or you can generate your own script manually and check that it qualifies using the "Validate AVC Level" option in the Tools menu.
See Wikipedia for a list of all AVC Levels specified, which should give you a pretty good idea of what they are used for.
AviSynth is a very powerful scripted backend for video (pre-)processing used by many video tools, MeGUI included. As it is script based, it is highly customisable and fast.
AviSynth files have an extension of '.avs' and are plain text script files. MeGUI provides the AviSynth script creator for the easy generation of scripts.
For more information on AviSynth:
A backend is a tool which does most of the actual processing yet, because of a limited user interface, is generally used via a frontend (such as MeGUI). Examples:
- x264 is a backend which does h264 encoding for MeGUI
- AviSynth is the backend which does video pre-processing for MeGUI
See also Wikipedia's article on backends
A clip's compressibility is a measure of how easily a video encoder can compress it. It is purely a rule-of-thumb measure. For example, a clip of grass growing is considered more 'compressible' than an action-packed clip from The Matrix. This is because if you were to feed both videos to an encoder and tell it to encode them to the same quality, the former clip would come out much smaller.
Other rules of thumb which help to determine compressibility of a clip:
- Sharpening reduces compressibility
- Low motion is more compressible than high motion
A type of file that is created by DGIndex and is used as the input for encoding. See also DGIndex
Display Aspect Ratio. The ratio of width to height of the displayed image (i.e. after any resizing that occurs from an anamorphic encode). This contrasts with SAR which refers to the width:height ratio of an individual pixel.
A program used prior to video encoding that allows one to frameserve videos into other applications that cannot open MPEG-2 files. This is the backend that MeGUI uses when you use the D2V Creator tool.
You can read more about DGIndex on the official site.
Demuxing (short for demultiplexing) is the act of extracting a specific stream (or streams) from a file. For example, extracting the mp3 audio track from an AVI file is demuxing.
See also Muxing
A video processing interface found in the Windows OS. Used almost exclusively as the framework for decoding video for media players (apart from players that use only built-in decoders, such as VLC and mplayer). It can also be used for encoding tasks, or any task that requires throughput of video (muxing, etc.), although this usage is still rare. DirectShow is simply a framework: filters are registered into it, and these filters are assembled into 'chains' for transforming the video data passed from the start to the end.
A frontend is a program (usually a GUI) which is used to interface with a backend program. MeGUI is an example of a multi-function frontend, which gives the user a simple interface to the programs doing all the hard work.
Short for GNU General Public License. GNU is a recursive acronym that stands for GNU's Not Unix. This is a common license used by free software. You can read about this license at the official site. All programs licensed under the GNU GPL must include the license text.
A video stream where the data is transmitted in fields. Fields are made up of alternate lines of a full frame, so instead of transmitting lines 1,2,3,4...., an interlaced stream transmits 1,3,5,7..., and then 2,4,6,8.... Compare with progressive streams. More on Wikipedia.
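The field structure can be illustrated with a toy example (plain Python lists standing in for scanlines; this is a model of the concept, not real video code):

```python
# A 'frame' of six scanlines, numbered from 1 as in the text above
frame = [1, 2, 3, 4, 5, 6]

top_field = frame[0::2]     # lines 1, 3, 5 - transmitted first
bottom_field = frame[1::2]  # lines 2, 4, 6 - transmitted second

print(top_field)     # [1, 3, 5]
print(bottom_field)  # [2, 4, 6]
```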
Inverse Telecine. A method of 'un-telecining' a telecined video stream. The most common application is applying IVTC to a telecined 29.97i source. Most NTSC movies are released this way, either broadcast on TV or distributed on DVD.
Mux path finding
Mux path finding refers to MeGUI dynamically deducing which container formats are possible given a set of input files. It uses a clever algorithm which allows muxers to be easily registered and unregistered while still being able to tell what container formats are possible. Much of MeGUI uses mux path finding where possible, which means that (for the user) only the achievable file types are displayed.
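As a rough illustration of the idea (not MeGUI's actual algorithm or data structures), one can model registered muxers as a table mapping accepted stream formats to an output container, then check which containers are reachable from a given set of input streams. All the names below are hypothetical:

```python
# Hypothetical muxer registry: (accepted input formats) -> output container.
muxers = [
    ({"h264", "aac"}, "mp4"),
    ({"h264", "ac3", "srt"}, "mkv"),
    ({"xvid", "mp3"}, "avi"),
]

def possible_containers(input_streams):
    """Return every container some registered muxer can build
    from the given set of stream formats."""
    return sorted({out for accepted, out in muxers
                   if input_streams <= accepted})

print(possible_containers({"h264", "aac"}))  # ['mp4']
```

Registering or unregistering a muxer is then just adding or removing an entry from the table, and the set of displayed file types follows automatically.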
Muxing (a contraction of multiplexing) is the act of combining multiple video, audio and subtitle streams into one file. For example, combining MP3 audio and XviD video (either already in an AVI or in raw .m4v format) into a single AVI file is muxing.
See also demuxing
Checking this box before you click 'enqueue' will create an extra job that runs before the encoding. This job will encode the input script to a (lossless) HuffYUV file, and then use that file as the input for your encoding. The advantage of this is that your AviSynth script will only have to run once, meaning a two-pass encode will run faster.
- Lossless files can be large. A 2-hour DVD movie will come in at around 30 GB, a 2-hour 720p file closer to 60 GB.
- The HuffYUV file is output to the same location as the input script, not to the same location as the final file.
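As a rough sanity check on those figures, you can estimate the size yourself. The sketch below assumes 4:2:0 raw video (1.5 bytes per pixel per frame) and a HuffYUV compression ratio of roughly 3:1; both are ballpark assumptions, and real ratios vary with content:

```python
def estimated_lossless_bytes(width, height, fps, seconds, ratio=3.0):
    """Very rough HuffYUV size estimate, assuming 4:2:0 video
    and an assumed average compression ratio."""
    raw_bytes_per_frame = width * height * 1.5
    return raw_bytes_per_frame * fps * seconds / ratio

# A 2-hour NTSC film DVD (720x480 at 23.976 fps) lands near 30 GB
size = estimated_lossless_bytes(720, 480, 23.976, 2 * 3600)
print(round(size / 1e9))  # ~30
```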
Raw video is video data not stored in any container. It can be referred to as an elementary stream. Raw video must usually be muxed to a container such as AVI or MKV before it can be viewed. Different file extensions are used to signify the fact that raw streams are not compatible with each other: for MPEG-4 AVC streams, this is .264 or .h264, and for MPEG-4 ASP, this is generally .m4v
When a file that contains more than one stream (say an .avi, .mkv or similar) is played back, the first step in the playback 'chain' is to separate the different streams and send them to the appropriate decoders. This is accomplished by use of a splitter. A splitter can also refer to a program that extracts streams from a file (i.e. demuxing).
Sample Aspect Ratio. The ratio of width to height of an individual sample (or pixel) of the displayed image (i.e. after any resizing that occurs from an anamorphic encode). This contrasts with DAR which refers to the width:height ratio of the entire image.
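The relationship between the two is DAR = (storage width / storage height) x SAR. A quick check with the common anamorphic NTSC DVD geometry (720x480 stored, 32:27 SAR) recovers the 16:9 display ratio:

```python
def dar(width, height, sar):
    """Display aspect ratio from the storage resolution
    and the sample (pixel) aspect ratio."""
    return (width / height) * sar

# Anamorphic NTSC DVD: 720x480 pixels, each pixel 32:27 wide
print(round(dar(720, 480, 32 / 27), 4))  # 1.7778, i.e. 16:9
```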
A process that turns a 23.976p video stream into a 29.97i video stream by interlacing and then selectively duplicating fields. Technically there are many types of telecine process; this one is 3:2 pulldown. Read Wikipedia for more info.
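The field-duplication pattern can be sketched as follows (letters stand for film frames, letter pairs for interlaced output frames; a toy model, not real video code):

```python
film = ["A", "B", "C", "D"]   # 4 progressive film frames
field_counts = [2, 3, 2, 3]   # each frame contributes 2 or 3 fields

fields = []
for frame, n in zip(film, field_counts):
    fields.extend([frame] * n)

# Pair consecutive fields into interlaced frames: 10 fields -> 5 frames
video = [fields[i] + fields[i + 1] for i in range(0, len(fields), 2)]
print(video)  # ['AA', 'BB', 'BC', 'CD', 'DD']

# 4 film frames become 5 interlaced frames, hence the rate change
print(round(23.976 * 5 / 4, 3))  # 29.97
```

The 'BC' and 'CD' frames mix fields from two different film frames, which is why telecined material shows combing on those frames and why IVTC has to detect and undo this pattern.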
When you resize video to a resolution greater than its original resolution. Generally frowned upon, because upsizing cannot add real detail. The main way to gain any quality is to use a slower (higher-quality) resizer than is practical in real time during playback, but the gain is generally felt to be less than the cost: a larger file size for only a small overall improvement in quality.
Video for Windows (VfW)
An old interface used on the Windows OS for encoding and decoding of video. Although it is mostly deprecated and outright incompatible with new codecs such as h264 (and only partly compatible with MPEG-4 ASP), it lives on in many programs. Its successor is DirectShow, which has totally replaced it in decoding apps (media players), but has only made a few inroads on the encoding side.