Analog and Digital Conversion/The Ideal Sampler

Ideal Sampler

Let us say that we have a sampler device that operates as follows: every T seconds, the sampler reads the value of the input signal at that instant. The sampler then holds that value on the output for T seconds, before taking the next sample. We have a generic input to this system, f(t), and our sampled output will be denoted f*(t). We can then show the following relationship between the two signals:

f*(t) = f(0)[u(t) - u(t - T)] + f(T)[u(t - T) - u(t - 2T)] + f(2T)[u(t - 2T) - u(t - 3T)] + ...

Here u(t) is the unit step function. Note that the value of f*(t) at, say, t = 1.5T is f(T): the sample taken at t = T is held until the next sample at t = 2T. The same reasoning applies to any time that falls between two sampling instants.
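As a concrete illustration, here is a minimal Python sketch of this hold behavior (the sine-wave input f and the sampling period T are arbitrary choices for the example): evaluating the held output at t = 1.5T returns the sample taken at t = T.

```python
import math

def ideal_sampler(f, T):
    """Return f*(t): the value of f at the most recent sampling instant nT, held until the next sample."""
    def f_star(t):
        n = math.floor(t / T)   # index of the most recent sampling instant
        return f(n * T)         # hold that sample's value
    return f_star

# Example: sample a 1 Hz sine wave every T = 0.2 s
f = lambda t: math.sin(2 * math.pi * t)
T = 0.2
f_star = ideal_sampler(f, T)

print(f_star(0.3))   # equals f(0.2): the sample held from t = T
print(f(0.2))        # same value
```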

Star Transform

Taking the Laplace transform of this sampled signal yields a special result called the star transform. The star transform is also occasionally called the "starred transform" in some texts.

The star transform is defined as:

F*(s) = Σ (n = 0 to ∞) f(nT) e^(-nTs) = f(0) + f(T)e^(-Ts) + f(2T)e^(-2Ts) + ...

The star transform depends on the sampling time T: the same signal produces different star transforms when it is sampled at different rates.
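To illustrate this dependence, the sketch below numerically approximates F*(s) by truncating the infinite sum after N terms (the decaying-exponential input, the value of s, and the truncation length are all assumptions made purely for the example):

```python
import math
import cmath

def star_transform(f, T, s, N=200):
    """Approximate F*(s) = sum_{n=0}^{N-1} f(nT) * exp(-n*T*s)."""
    return sum(f(n * T) * cmath.exp(-n * T * s) for n in range(N))

# Example: a decaying exponential, sampled at two different rates
f = lambda t: math.exp(-t)
s = 1.0

print(star_transform(f, 0.1, s))   # sampled every 0.1 s
print(star_transform(f, 0.5, s))   # sampled every 0.5 s -- a different result
```

Because the example signal decays and Re(s) > 0, the truncated sum is a reasonable approximation of the infinite series.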

Sampler Block Diagram

A sampler is usually drawn on a circuit diagram as a switch that opens and closes at set intervals. These intervals represent the sampling time, T.

Sampling Time
The amount of time between successive samples.

Samplers work by reading in an analog waveform and "catching" the value of that waveform at a particular point in time. This value is then fed into an ADC (analog-to-digital converter), and a digital sequence is produced.

The exact method by which the digital sequence is produced will be discussed later in the sections on quantization.
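A minimal sketch of this chain, assuming a hypothetical sine-wave input and a placeholder quantizer that simply rounds each sample to the nearest of a fixed number of codes (the actual quantization methods are the subject of the later sections):

```python
import math

def sample(f, T, num_samples):
    """'Catch' the value of the waveform at each sampling instant nT."""
    return [f(n * T) for n in range(num_samples)]

def quantize(values, levels=256, v_min=-1.0, v_max=1.0):
    """Placeholder ADC step: map each sample to the nearest of `levels` codes."""
    step = (v_max - v_min) / (levels - 1)
    return [round((v - v_min) / step) for v in values]

f = lambda t: math.sin(2 * math.pi * t)    # hypothetical analog input
samples = sample(f, T=0.05, num_samples=10)
codes = quantize(samples)                  # the resulting digital sequence
print(codes)
```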

Sampling Delays

Real samplers take a certain amount of time to read the sample and convert it into a digital representation. This delay can usually be modeled as a delay unit in series with the sampler.
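For example, if the conversion takes a fixed time d, the delayed sampler can be sketched (under the assumption of a constant delay, with an arbitrary example signal) by shifting the held output in time, so that the output at time t is f*(t - d):

```python
import math

def ideal_hold(f, T, t):
    """Ideal sample-and-hold output f*(t): the most recent sample of f."""
    return f(math.floor(t / T) * T)

def delayed_sampler(f, T, d):
    """Sampler in series with a delay of d seconds: the output is f*(t - d)."""
    def output(t):
        if t < d:
            return 0.0              # nothing has finished converting yet
        return ideal_hold(f, T, t - d)
    return output

f = lambda t: math.sin(2 * math.pi * t)
g = delayed_sampler(f, T=0.2, d=0.05)   # assumed 50 ms conversion delay
print(g(0.3))                            # the sample from t = 0.2, delivered 0.05 s late
```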

Sampling Jitter

Samplers in real life don't always take a perfect sample at exactly the intended instant, but instead sample "around" the right time. The difference between the ideal sampling instant and the actual sampling instant is known as the sampling jitter, or simply the jitter.
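The sketch below compares ideal samples f(nT) with samples taken at the jittered instants nT + ε_n (the Gaussian jitter model, its standard deviation, and the 5 Hz test signal are assumptions chosen only to illustrate the effect):

```python
import math
import random

def jittered_samples(f, T, num_samples, jitter_std=0.01):
    """Take samples near nT; each actual instant is nT plus a small random offset."""
    ideal, actual = [], []
    for n in range(num_samples):
        eps = random.gauss(0.0, jitter_std)      # the jitter for this sample
        ideal.append(f(n * T))
        actual.append(f(n * T + eps))
    return ideal, actual

f = lambda t: math.sin(2 * math.pi * 5 * t)      # hypothetical 5 Hz input
ideal, actual = jittered_samples(f, T=0.05, num_samples=8)
errors = [a - i for a, i in zip(actual, ideal)]  # error introduced by the jitter
print(errors)
```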