MeGUI/Presentation/Parallel job execution

From Wikibooks, open books for an open world

Parallel job execution is a feature introduced to MeGUI in version 0.2.6.1001 to allow running multiple jobs at once. This page is an explanation of why it was introduced, and how to use it.

Rationale

Video encoding is a CPU-intensive task, so it benefits directly from improvements in hardware: faster CPUs boost encoding speed and allow the encoder to search more deeply for better compressibility. Dual-core CPUs theoretically offer an almost 2x speed-up over their single-core counterparts, and in theory the gains keep growing as you add more cores. To harness this extra processing power, video encoders have been parallelized to support multiple cores, which means that encoding frontends can achieve most of the gains of multi-core CPUs without needing to do anything themselves. However, there are two main problems with this:

  1. The synchronisation required within the encoder means that not all of the computer's processing power is used. Significant gains over single-threaded encoding are still observed, but they fall short of the 2x/3x/4x/etc. speed-up you would hope for.
  2. Not all encoders have support for multi-threading yet. In particular, many audio encoders are still single-threaded.

There is an alternative approach. Since video encoding is often done with more than one job at a time, running those jobs simultaneously solves both of these problems: the parallelism comes from running several jobs at once, not from within the encoder.

Supporting parallel job execution also removes the requirement that jobs run in order, which gives you much more freedom when using MeGUI: you no longer have to wait for one job to finish before running the next.

How it works

The basic unit of job execution is the job worker (or simply a worker). A single worker processes one job at a time; MeGUI, however, can run several workers at once, each processing its own job, and this is where the parallelism comes from.

Normal job execution

The standard method of job processing is that each worker requests a job from the main job queue, processes it, and then requests another. This continues until all jobs are either finished or cannot be processed (i.e. they are postponed, skipped, or aborted, or they produced errors). To ensure that jobs which depend on each other are not executed in the wrong order, each job maintains a list of the jobs it depends on, and a job will not be run until all of its dependencies have completed successfully (that is, their status is 'done'). Dependencies are tracked at a fine level: in a job chain containing audio, video, and mux jobs, the mux job depends on both the audio and video jobs, but the audio and video jobs do not depend on each other, so they can be processed at the same time.
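To make the dependency rule concrete, here is a minimal sketch of that scheduling logic. MeGUI is written in C#, so this Python fragment is purely illustrative; the names (`Job`, `Status`, `next_job`) are hypothetical and do not correspond to MeGUI's actual classes.

```python
from enum import Enum

class Status(Enum):
    WAITING = "waiting"
    DONE = "done"
    ERROR = "error"

class Job:
    def __init__(self, name, depends_on=None):
        self.name = name
        self.depends_on = depends_on or []
        self.status = Status.WAITING

    def is_runnable(self):
        # A job may run only when every job it depends on finished successfully.
        return (self.status is Status.WAITING and
                all(d.status is Status.DONE for d in self.depends_on))

def next_job(queue):
    # A worker takes the first runnable job from the main queue, if any.
    for job in queue:
        if job.is_runnable():
            return job
    return None

# A typical chain: audio and video are independent, the mux depends on both,
# so two workers could encode audio and video at the same time.
audio = Job("audio encode")
video = Job("video encode")
mux   = Job("mux", depends_on=[audio, video])
queue = [audio, video, mux]
```

Because `mux` lists both encodes as dependencies, no worker will pick it up until both have the 'done' status, while `audio` and `video` are immediately runnable in parallel.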

Owned jobs

Workers also maintain a list of reserved jobs: jobs that they 'own'. An owned job can only be executed by the worker that owns it, and a worker will run all the jobs it owns before requesting more from the main job queue. This is useful for running quick jobs without waiting for the slow jobs already queued. For example, you may have several long video encodes queued and want to set up another encode, which requires d2v indexing first. With pre-0.2.6.1001 behaviour, you would have to queue the index job and wait for the earlier jobs to finish. With owned jobs, you simply queue the job, then right-click it and select 'Run in new temporary worker'. The temporary worker owns that job, runs it immediately, and shuts down when the job is finished, without interrupting the other running jobs.

Each worker has a local queue, which lists the jobs it owns. If you want to 'disown' a job, you can 'return job to main job queue' by right-clicking on that job in the worker's local queue.
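The ownership mechanic above can be sketched as follows. Again, this is an illustrative Python fragment under assumed names (`Worker`, `own`, `disown`, `take_next`), not MeGUI's real C# implementation; jobs are represented as plain strings for brevity.

```python
class Worker:
    def __init__(self, name, main_queue):
        self.name = name
        self.main_queue = main_queue   # shared list of unowned jobs
        self.local_queue = []          # jobs this worker owns

    def own(self, job):
        # Reserve a job: remove it from the main queue so that
        # only this worker may execute it.
        self.main_queue.remove(job)
        self.local_queue.append(job)

    def disown(self, job):
        # 'Return job to main job queue': any worker may pick it up again.
        self.local_queue.remove(job)
        self.main_queue.append(job)

    def take_next(self):
        # Owned jobs always run before anything is requested
        # from the main queue.
        if self.local_queue:
            return self.local_queue.pop(0)
        if self.main_queue:
            return self.main_queue.pop(0)
        return None

# A temporary worker owning a quick index job while long encodes wait:
main = ["long video encode", "d2v index"]
temp = Worker("temporary", main)
temp.own("d2v index")
```

Here `take_next` drains the worker's local queue first, which is exactly why a temporary worker runs its owned job immediately even when long encodes sit ahead of it in the main queue; `disown` models the 'return job to main job queue' right-click action.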

Worker summary

The worker summary is a small window that gives a concise overview of all workers. Most of the worker functionality can also be reached by right-clicking in that window.