# Circuit Theory/All Chapters

Circuit Theory
Wikibooks: The Free Library

# Preface

This wikibook is an introductory text about electric circuits. It covers some of the basics of electric circuit theory and circuit analysis, and touches on circuit design. It will serve as a companion reference for the first year of an Electrical Engineering undergraduate curriculum. Topics covered include AC and DC circuits, passive circuit components, phasors, and RLC circuits. The focus is on students of an electrical engineering undergraduate program; hobbyists would benefit more from reading Electronics instead.

This book is not yet complete, and could still be improved. People with knowledge of the subject are encouraged to contribute.

The main editable text of this book is located at http://en.wikibooks.org/wiki/Circuit_Theory. The wikibooks version of this text is considered the most up-to-date version, and is the best place to edit this book and contribute to it.

Electric Circuits Introduction

The theory of electrical circuits can be a complex area of study. The chapters in this section will introduce the reader to the world of electric circuits, introduce some of the basic terminology, and provide the first introduction to passive circuit elements.

# Introduction

## Who is This Book For?

This book is designed for a first course in circuit analysis, which is usually accompanied by a set of labs. It is assumed that students are taking a differential equations class at the same time. Phasors are used to avoid the Laplace transform of driving functions while maintaining a complex impedance transform of the physical circuit that is identical in both. First- and second-order differential equations can be solved using phasors and calculus if the driving functions are sinusoidal. The sinusoid is then replaced by the simpler step function, and the convolution integral is used to find an analytical solution for any driving function. This leaves time for a more intuitive understanding of poles, zeros, transfer functions, and Bode plot interpretation.

For those who have already had differential equations, the Laplace transform equivalent will be presented as an alternative while the focus remains on phasors and calculus.

This book expects the reader to have a firm understanding of calculus, and will not stop to explain fundamental calculus topics.

For information on Calculus, see the wikibook: Calculus.

## What Will This Book Cover?

This book will cover linear circuits and linear circuit elements. The goal is to emphasize Kirchhoff's laws and symbolic algebra systems such as MATLAB's MuPAD or Mathematica, at the expense of node, mesh, Norton, and similar techniques. A phasor/calculus based approach starts at the very beginning and ends with the convolution integral to handle all the various types of forcing functions.

The result is a linear analysis experience that is general in nature but skips Laplace and Fourier transforms.

Kirchhoff's laws receive normal focus, but the other circuit analysis and simplification techniques receive less attention than usual.

The course ends with applications of these concepts in power analysis, filters, and control systems.

The goal is to set the groundwork for a transition to the digital version of these concepts from a firm basis in the physical world. The next course would be one focused on modeling linear systems and analyzing them digitally, in preparation for a digital signal processing (DSP) course.

# Basic Terminology

## Basic Terminology

There are a few key terms that need to be understood at the beginning of this book, before we can continue. This is only a partial list of all terms that will be used throughout this book, but these key words are important to know before we begin the main narrative of this text.

Time domain
The time domain is described by graphs of power, voltage and current that depend upon time. "Time domain" is simply another way of saying that our circuits change with time, and that the major variable used to describe the system is time. Another name is "temporal".
Frequency domain
The frequency domain is described by graphs of power, voltage and/or current that depend upon frequency, such as Bode plots. Variable frequencies in wireless communication can represent changing channels or data on a channel. Another name is the "Fourier domain". Other domains that an engineer might encounter are the "Laplace domain" (also called the "s domain" or "complex frequency domain") and the "Z domain". When combined with the time domain, the result is called a "spectral" or "waterfall" plot.
Circuit Response
Circuits generally have inputs and outputs. In fact, it is safe to say that a circuit isn't useful if it doesn't have one or the other (usually both). Circuit response is the relationship between the circuit's input and the circuit's output. The circuit response may be a measure of either current or voltage.
Non-homogeneous
Circuits are described by equations that capture the component characteristics and how they are wired together. These equations are non-homogeneous in nature. Solving them requires splitting the single problem into two: the steady-state solution (the particular solution) and the transient solution (the homogeneous solution).
The final value, when all circuit elements have a constant or periodic behaviour, is also known as the steady-state value of the circuit. The circuit response at steady state (when voltages and currents have stopped changing due to a disturbance) is also known as the "steady state response".
Transient Response
A transient response occurs when:
• a circuit is turned on or off
• a sensor responds to changes in the physical world
• static electricity is discharged
• an old car with old spark plugs (before resistors were put in spark plugs) drives by
Transient means momentary, or lasting a short period of time. A transient means that the energy in a circuit has suddenly changed, which causes the energy storage elements to react: the circuit's energy state is forced to change. When a car goes over a bump, it can fly apart, feel like a rock, or cushion the impact in a designed manner. The goal of most circuit design is to plan for transients, whether intended or not.
Transient solutions are determined by assuming the driving function(s) are zero, which creates a homogeneous equation that has a homogeneous solution technique.

## Summary

When something changes in a circuit, there is a certain transition period before a circuit "settles down", and reaches its final value. The response that a circuit has before settling into its steady-state response is known as the transient response. Using Euler's formula, complex numbers, phasors and the s-plane, a homogeneous solution technique will be developed that captures the transient response by assuming the final state has no energy. In addition, a particular solution technique will be developed that finds the final energy state. Added together, they predict the circuit response.

The related Differential equation development of homogeneous and particular solutions will be avoided.

# Variables and Standard Units

## Electric Charge (Coulombs)

Note:
An electron has a charge of −1.602 × 10⁻¹⁹ C.

Electric charge is a physical property of matter that causes it to experience a force when near other electrically charged matter. Electric charge (symbol q) is measured in SI units called "coulombs", abbreviated with the capital letter C.

We know that q = n·e, where n is the number of electrons and e = 1.602 × 10⁻¹⁹ C is the magnitude of the charge of one electron. Hence one coulomb corresponds to n = 1/e electrons. A coulomb is the total charge of 6.24150962915265 × 10¹⁸ electrons; thus a single electron has a charge of −1.602 × 10⁻¹⁹ C.
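The electron-counting arithmetic above is easy to verify numerically. Below is a minimal sketch in Python (standing in here for any of the computing tools recommended later in this book); the value of e is the SI elementary charge.

```python
# Number of electrons whose combined charge is one coulomb.
e = 1.602176634e-19  # elementary charge in coulombs (exact, by SI definition)

n = 1 / e  # electrons per coulomb
print(f"{n:.4e} electrons per coulomb")  # about 6.2415e+18

# Going the other way: the total charge of n electrons is q = n * e.
q = 6.24150962915265e18 * e
print(f"{q:.4f} C")  # approximately 1 C
```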

It is important to understand that this concept of "charge" is associated with static electricity. Charge, as a concept, has a physical boundary that is related to counting a group of electrons. "Flowing" electricity is an entirely different situation: "charge" and electrons separate. Charge moves at nearly the speed of light, while electrons drift at about 1 meter/hour. Thus in most circuit analysis, "charge" is an abstract concept, unrelated to energy or to any particular electron, and more related to the flow of information.

Electric charge is the subject of many fundamental laws, such as Coulomb's Law and Gauss' Law (static electricity) but is not used much in circuit theory.

## Voltage (Volts)

Voltage is a measure of the work required to move a charge from one point to another in an electric field. Thus the unit "volt" is defined as one joule (J) per coulomb (C).

$V = \frac{W}{q}$

W represents work and q represents an amount of charge. Charge is a static electricity concept; the definition of a volt is shared between static and "flowing" electronics.
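As a quick worked example of this definition (with made-up numbers): if 10 joules of work are needed to move 2 coulombs between two points, the voltage between them is 5 volts.

```python
# Voltage as work per unit charge: V = W / q.
work_joules = 10.0      # assumed work done moving the charge
charge_coulombs = 2.0   # assumed amount of charge moved

voltage = work_joules / charge_coulombs
print(voltage)  # 5.0 volts
```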

Voltage is sometimes called "electric potential", because voltage represents a difference in electromotive force (EMF) that can produce current in a circuit. More voltage means more potential for current. Voltage can also be called "electric pressure", although this is far less common.

Voltage is not measured in absolutes but in relative terms. The English language tradition obscures this. For example, we say "What is the distance to New York?" Clearly implied is the relative distance from where we are standing to New York. But if we ask "What is the voltage at ______?", what is the starting point?

Voltage is defined between two points. Voltage is relative to where 0 is defined. We say "The voltage from point A to B is 5 volts." It is important to understand EMF and voltage are two different things.

When the question is asked "What is the voltage at ______?", look for the ground symbol on a circuit diagram. Measure voltage from ground to _____. If the question is asked "What is the voltage from A to B?" then put the red probe on A and the black probe on B (not ground).

The absolute quantity is referred to as "EMF" or electromotive force. The difference between two EMFs is a voltage.

## Current (Amperes)

Current is a measurement of the flow of electricity. Current is measured in units called amperes (or "amps"). An ampere is "charge volume velocity" in the same way water current could be measured in "cubic feet of water per second." But current is a base SI unit, a fundamental dimension of reality like space, time and mass; a coulomb of charge is not. A coulomb is actually defined in terms of the ampere: "charge", or the coulomb, is a derived SI unit. The coulomb is a fictitious entity left over from the one-fluid/two-fluid philosophies of the 18th century.

This course is about the flowing electrical energy found in all modern electronics. Charge volume velocity (which defines current) is a useful concept, but understand that it has no basis in reality. Do not think of current as a bundle of electrons carrying energy through a wire. Special relativity and quantum mechanics concepts are necessary to understand how electrons drift at about 1 meter/hour through copper, yet electromagnetic energy moves at near the speed of light.

Charge is similar to the rest mass concept of relativity and generates the U(1) symmetry of electromagnetism.

Amperes are abbreviated with an "A" (upper-case A), and the variable most often associated with current is the letter "i" (lower-case I). In terms of coulombs, an ampere is:

$i = \frac{dq}{dt}$
For the rest of this book, the lower-case J ( j ) will be used to denote an imaginary number, and the lower-case I ( i ) will be used to denote current.

Because of the widespread use of complex numbers in Electrical Engineering, it is common for electrical engineering texts to use the letter "j" (lower-case J) as the imaginary number, instead of the "i" (lower-case I) commonly used in math texts. This wikibook will adopt the "j" as the imaginary number, to avoid confusion.
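The terminal relation $i = dq/dt$ can be checked numerically. Here is a minimal Python sketch using an assumed charge waveform $q(t) = 3t$ coulombs, for which the current should come out as 3 A:

```python
# Current as the time derivative of charge, i = dq/dt,
# approximated with a forward finite difference.
def q(t):
    # Assumed charge waveform: q(t) = 3t coulombs.
    return 3.0 * t

dt = 1e-6   # small time step for the numerical derivative
t0 = 0.5
i = (q(t0 + dt) - q(t0)) / dt
print(round(i, 6))  # 3.0 amperes
```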

## Energy and Power

Electrical theory is about energy storage and the flow of energy in circuits. Energy is chopped up arbitrarily into something that doesn't exist but can be counted called a coulomb. Energy per coulomb is voltage. The velocity of a coulomb is current. Multiplied together, the units are energy velocity or power ... and the unreal "coulomb" disappears.

### Energy

Energy is measured most commonly in joules, which are abbreviated with a "J" (upper-case J). The variable most commonly used for energy is "w" (lower-case W), which stands for work.

From a thermodynamics point of view, all energy consumed by a circuit is work: all the heat is turned into work. Practically speaking, this cannot be true. If it were true, computers would never consume any energy and never heat up.

The reason that all the energy going into a circuit and leaving a circuit is considered "work" is because from a thermodynamic point of view, electrical energy is ideal. All of it can be used. Ideally all of it can be turned into work. Most introduction to thermodynamics courses assume that electrical energy is completely organized (and has entropy of 0).

### Power

A corollary to the concept of energy being work, is that all the energy/power of a circuit (ideally) can be accounted for. The sum of all the power entering and leaving a circuit should add up to zero. No energy should be accumulated (theoretically). Of course capacitors will charge up and may hold onto their energy when the circuit is turned off. Inductors will create a magnetic field containing energy that will instantly disappear back into the source through the switch that turns the circuit off.

This course uses what is called the "passive" sign convention for power: energy put into a circuit by a power supply is negative, and energy leaving a circuit is positive.

Power (the flow of energy) computations are an important part of this course. The variable most commonly used for power is p, and the units are watts, abbreviated W.
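The sign bookkeeping described above can be sketched in a few lines of Python. The 10 V / 2 A numbers are hypothetical; the point is that the powers around a circuit sum to zero under the passive sign convention.

```python
# Power under the passive sign convention: p = v * i, with v and i
# measured against each element's + reference. Positive p means the
# element absorbs energy; negative p means it delivers energy.
v_resistor, i_resistor = 10.0, 2.0    # current enters the + terminal
p_resistor = v_resistor * i_resistor  # +20 W absorbed as heat

v_source, i_source = 10.0, -2.0       # current leaves the + terminal
p_source = v_source * i_source        # -20 W: energy supplied

print(p_resistor + p_source)  # 0.0 -- all power is accounted for
```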

# Electric Circuit Basics

## Circuits

Circuits (also known as "networks") are collections of circuit elements and wires. Wires are designated on a schematic as being straight lines. Nodes are locations on a schematic where 2 or more wires connect, and are usually marked with a dark black dot. Circuit Elements are "everything else" in a sense. Most basic circuit elements have their own symbols so as to be easily recognizable, although some will be drawn as a simple box image, with the specifications of the box written somewhere that is easy to find. We will discuss several types of basic circuit components in this book.

## Ideal Wires

For the purposes of this book, we will assume that an ideal wire has zero total resistance, no capacitance, and no inductance. A consequence of these assumptions is that these ideal wires have infinite bandwidth, are immune to interference, and are — in essence — completely uncomplicated. This is not the case in real wires, because all wires have at least some amount of associated resistance. Also, placing multiple real wires together, or bending real wires in certain patterns will produce small amounts of capacitance and inductance, which can play a role in circuit design and analysis. This book will assume that all wires are ideal.

## Ideal Junctions or Nodes

Nodes are areas where the Electromotive Force is the same.

Nodes are also called "junctions" in this book in order to make a distinction between Node analysis, Kirchhoff's current law and discussions about a physical node itself. Here a physical node is discussed.

A junction is a group of wires that share the same electromotive force (not voltage). Wires ideally have no resistance; thus all wires that touch each other, wire to wire, somewhere are part of the same node. The diagram on the right shows three big blue nodes, two smaller green nodes and two trivial (one wire touching another) nodes.

Sometimes a node is described as the place where two or more wires touch, and students circle each spot where wires intersect and call it a node. This only works on simple circuits.

One node has to be labeled ground in any circuit before voltages can be computed or the circuit simulated. Typically this is the node with the most components connected to it. It is normally drawn at the bottom of the circuit diagram.

Ground is not always needed physically. Some circuits are floated on purpose.


## Measuring instruments

Voltmeters and Ammeters are devices that are used to measure the voltage across an element, and the current flowing through a wire, respectively.

### Ideal Voltmeters

An ideal voltmeter has infinite resistance (in reality, several megaohms) and acts like an open circuit. A voltmeter is placed across the terminals of a circuit element to determine the voltage across that element. In practice the voltmeter siphons just enough energy to move a needle, cause thin strips of metal to separate, or turn on a transistor so that a number is displayed.

### Ideal Ammeters

An ideal ammeter has zero resistance and acts like a short circuit. Using an ammeter requires cutting a wire and plugging the two ends into the ammeter. In practice, an ammeter either places a tiny resistor in the wire and measures the tiny voltage across it, or measures the magnetic field strength generated by current flowing through the wire. Ammeters are not used that much because of the wire cutting or disconnecting they require.

## Active, Passive & Reactive

Elements that are capable of delivering energy, or of amplifying a signal, are called "active elements". All power supplies fit into this category.

Elements that receive energy and dissipate it are called "passive elements". Resistors model these devices.

Reactive elements store and release energy in a circuit. Ideally they neither consume nor generate energy. Capacitors and inductors fall into this category.

## Open and Short Circuits

### Open

No current flows through an open. Normally an open is created by a bad connection: dust, bad solder joints, bad crimping, and cracks in circuit board traces all create opens. Capacitors respond to DC by turning into opens after charging up. Uncharged inductors appear as opens immediately after powering up a circuit. The word "open" can refer to a problem description. It can also help develop an intuition about circuits.

Typically the circuit stops working with an open, because 99% of all circuits are driven by voltage power sources, and voltage sources respond to an open with no current. Opens are the equivalent of clogs in plumbing, which stop water from flowing.

On one side of the open, EMF will build up, just like water pressure will build up on one side of a clogged pipe. Typically a voltage will appear across the open.

### Short

A voltage source responds to a short by delivering as much current as possible. An extreme example of this can be seen in this ball bearing motor video: the motor appears as a short to the battery. Notice that he only completes the short for a brief time, because he is worried about the car battery exploding.

Maximum current flows through a short. Normally a short is created by a wire, a nail, or some loose screw touching parts of the circuit unintentionally. Most component failures start with heat build up. The heat destroys varnish, paint, or thin insulation creating a short. The short causes more current to flow which causes more heat. This cycle repeats faster and faster until there is a puff of smoke and everything breaks creating an open. Most component failures start with a short and end in an open as they burn up. Feel the air temperature above each circuit component after power on. Build a memory of what normal operating temperatures are. Cold can indicate a short that has already turned into an open.

An uncharged capacitor initially appears as a short immediately after powering on a circuit. An inductor appears as a short to DC after charging up. The short concept also helps build our intuition, provides an opportunity to talk about electrical safety and helps describe component failure modes.

A closed switch can be thought of as a short. Switches are surprisingly complicated; it is in the study of switches that the term "closed" begins to dominate "short".

# Resistors and Resistance

### Resistors

Mechanical engineers seem to model everything with a spring; electrical engineers compare everything to a resistor. Resistors are circuit elements that resist the flow of current. When they do so, a voltage appears across the resistor's two wires.

A pure resistor turns electrical energy into heat. Devices similar to resistors turn this energy into light, motion, heat, and other forms of energy.

Current in the drawing above is shown entering the + side of the resistor. Resistors don't care which leg is connected to positive or negative. The + marks where the positive or red probe of the voltmeter is to be placed in order to get a positive reading. This is called the "positive charge" flow sign convention. Some circuit theory classes (often within a physics-oriented curriculum) are taught with an "electron flow" sign convention.

In this case, current entering the + side of the resistor means that the resistor is removing energy from the circuit. This is good. The goal of most circuits is to send energy out into the world in the form of motion, light, sound, etc.

### Resistance

Resistance is measured in terms of units called "Ohms" (volts per ampere), which is commonly abbreviated with the Greek letter Ω ("Omega"). Ohms are also used to measure the quantities of impedance and reactance, as described in a later chapter. The variable most commonly used to represent resistance is "r" or "R".

Resistance is defined as:

$r = {\rho L \over A}$

where ρ is the resistivity of the material, L is the length of the resistor, and A is the cross-sectional area of the resistor.
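As a numerical illustration of $r = {\rho L \over A}$ (the wire dimensions and the handbook resistivity of copper below are illustrative assumptions):

```python
# Resistance of a uniform conductor: r = rho * L / A.
rho_copper = 1.68e-8  # resistivity of copper in ohm-metres (typical handbook value)
length = 10.0         # conductor length in metres
area = 1e-6           # cross-sectional area in square metres (1 mm^2)

r = rho_copper * length / area
print(f"{r:.3f} ohms")  # about 0.168 ohms
```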

### Conductance

Conductance is the inverse of resistance. Conductance has units of "Siemens" (S), sometimes referred to as mhos (ohms backwards, abbreviated as an upside-down Ω). The associated variable is "G":

$G = \frac{1}{r}$

Before calculators and computers, conductance helped reduce the number of hand calculations that had to be done. Now conductance and its related concepts of admittance and susceptance can be skipped, thanks to MATLAB, Octave, Wolfram Alpha and other computing tools. Learning one or more of these computing tools is now absolutely necessary in order to get through this text.
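One place conductance still pays off is resistors in parallel, where the conductances simply add. A small sketch (the resistor values are made up):

```python
# Conductance is the reciprocal of resistance: G = 1/r (siemens).
# For resistors in parallel, the conductances add.
resistances = [10.0, 20.0, 20.0]  # hypothetical parallel bank, in ohms

g_total = sum(1.0 / r for r in resistances)   # total conductance, siemens
r_parallel = round(1.0 / g_total, 9)          # back to ohms
print(r_parallel)  # 5.0 ohms
```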

### Resistor terminal relation

A simple circuit diagram relating current, voltage, and resistance

The drawing on the right is of a battery and a resistor. Current is leaving the + terminal of the battery. This means the battery is turning chemical potential energy into electromagnetic potential energy and dumping this energy into the circuit. The flow of this energy, or power, is negative.

Current is entering the positive side of the resistor even though a + has not been put on the resistor. This means electromagnetic potential energy is being converted into heat, motion, light, or sound depending upon the nature of the resistor. Power flowing out of the circuit is given a positive sign.

The relationship of the voltage across the resistor V, the current through the resistor I and the value of the resistor R is related by ohm's law:

[Resistor Terminal Relation]

$V=RI$

A resistor, capacitor and inductor all have only two wires attached to them. Sometimes it is hard to tell them apart. In the real world, all three have a bit of resistance, capacitance and inductance in them. In this unknown context, they are called two terminal devices. In more complicated devices, the wires are grouped into ports. A two terminal device that expresses Ohm's law when current and voltage are applied to it, is called a resistor.

### Resistor Safety

Resistors come in all forms. Most have a maximum power rating in watts. If you put too much power through them, they can melt, catch fire, etc. A resistor is a passive element that opposes the flow of electricity.

### Example

Suppose the voltage across a resistor's two terminals is 10 volts and the measured current through it is 2 amps. What is the resistance?

If $v=iR$ then $R = v/i = 10\,\text{V} / 2\,\text{A} = 5\,\Omega$
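The same example in Python, rearranging Ohm's law each way:

```python
# Ohm's law with the example's numbers: v = 10 V, i = 2 A.
v, i = 10.0, 2.0

R = v / i       # resistance from measured voltage and current
print(R)        # 5.0 ohms

# Cross-checks: the same relation solved for v and for i.
assert v == i * R
assert i == v / R
```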

Resistive Circuits

We've been introduced to passive circuit elements such as resistors, sources, and wires. Now, we are going to explore how complicated circuits using these components can be analyzed.

# Source Transformations

## Source Transformations

Independent current sources can be turned into independent voltage sources, and vice-versa, by methods called "source transformations." These transformations are useful for solving circuits. We will explain the two most important source transformations, the Thévenin source and the Norton source, and we will explain how to use these conceptual tools for solving circuits.

## Black Boxes

A circuit (or any system, for that matter) may be considered a black box if we don't know what is inside the system. For instance, most people treat their computers like a black box because they don't know what is inside the computer (most don't even care), all they know is what goes in to the system (keyboard and mouse input), and what comes out of the system (monitor and printer output).

Black boxes, by definition, are systems whose internals aren't known to an outside observer. The only method an outside observer has to examine a black box is to send input into the system and gauge the output.

## Thevenin's Theorem

Let's start by drawing a general circuit consisting of a source and a load, as a block diagram:

Let's say that the source is a collection of voltage sources, current sources and resistances, while the load is a collection of resistances only. Both the source and the load can be arbitrarily complex, but we can conceptually say that the source is directly equivalent to a single voltage source and resistance (figure (a) below).

[Figure: (a) the source as a voltage source in series with a resistance; (b) the same circuit with an independent current source attached]

We can determine the value of the resistance Rs and the voltage source, vs by attaching an independent source to the output of the circuit, as in figure (b) above. In this case we are using a current source, but a voltage source could also be used. By varying i and measuring v, both vs and Rs can be found using the following equation:

$v=v_s+iR_s \,$

There are two variables, so two values of i will be needed. See Example 1 for more details. We can easily see from this that if the current source is set to zero (equivalent to an open circuit), then v is equal to the voltage source, vs. This is also called the open-circuit voltage, voc.
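A minimal sketch of this two-measurement procedure in Python. The test data are invented: an open-circuit reading and one loaded reading are enough to recover both vs and Rs from $v = v_s + iR_s$.

```python
# Two (i, v) test points on the black box, fitting v = vs + i*Rs.
i1, v1 = 0.0, 12.0   # open-circuit test: with i = 0, v equals vs
i2, v2 = 2.0, 16.0   # second, loaded test point (hypothetical)

Rs = (v2 - v1) / (i2 - i1)   # slope of the v-i line
vs = v1 - i1 * Rs            # intercept (here simply v1)
print(vs, Rs)  # 12.0 V and 2.0 ohms
```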

This is an important concept, because it allows us to model what is inside an unknown (linear) circuit, just by knowing what is coming out of the circuit. This concept is known as Thévenin's Theorem after French telegraph engineer Léon Charles Thévenin, and the circuit consisting of the voltage source and resistance is called the Thévenin Equivalent Circuit.

## Norton's Theorem

Recall from above that the output voltage, v, of a Thévenin equivalent circuit can be expressed as

$v=v_s+iR_s \,$

Now, let's rearrange it for the output current, i:

$i=-\frac{v_s}{R_s}+\frac{v}{R_s}$

This is equivalent to a KCL description of the following circuit. We can call the constant term vs/Rs the source current, is.

The equivalent current source and the equivalent resistance can be found with an independent source as before (see Example 2).

When the above circuit (the Norton Equivalent Circuit, after Bell Labs engineer E.L. Norton) is disconnected from the external load, the current from the source all flows through the resistor, producing the requisite voltage across the terminals, voc. Also, if we were to short the two terminals of our circuit, the current would all flow through the wire, and none of it would flow through the resistor (current divider rule). In this way, the circuit would produce the short-circuit current isc (which is exactly the same as the source current is).

## Circuit Transforms

We have just shown that the Thévenin and Norton circuits are different representations of the same black box circuit, with the same Ohm's Law/KCL equations. This means that we cannot distinguish between a Thévenin source and a Norton source from outside the black box, and that we can directly equate the two as below:

 $\equiv$

We can draw up some rules to convert between the two:

• The values of the resistors in each circuit are conceptually identical, and can be called the equivalent resistance, Req:
$R_{s_n}=R_{s_t}=R_s=R_{eq}$
• The value of a Thévenin voltage source is the value of the Norton current source times the equivalent resistance (Ohm's law):
$v_s=i_sR_s\,$

If these rules are followed, the circuits will behave identically. Using these few rules, we can transform a Norton circuit into a Thévenin circuit, and vice versa. This method is called source transformation. See Example 3.
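The conversion rules above can be sketched in a few lines (the 12 V / 3 Ω Thévenin source is a made-up example):

```python
# Thevenin -> Norton: the resistance carries over unchanged and the
# sources are related by Ohm's law, vs = is * Rs.
vs, Rs = 12.0, 3.0       # hypothetical Thevenin source

i_norton = vs / Rs       # equivalent Norton current source, amps
R_norton = Rs            # equivalent resistance is identical
print(i_norton, R_norton)  # 4.0 A, 3.0 ohms

# Norton -> Thevenin recovers the original voltage source.
assert i_norton * R_norton == vs
```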

## Open Circuit Voltage and Short Circuit Current

The open-circuit voltage, voc of a circuit is the voltage across the terminals when the current is zero, and the short-circuit current isc is the current when the voltage across the terminals is zero:

[Figures: the open-circuit voltage; the short-circuit current]

We can also observe the following:

• The value of the Thévenin voltage source is the open-circuit voltage:
$v_s=v_{oc}\,$
• The value of the Norton current source is the short-circuit current:
$i_s=i_{sc}\,$

We can say that, generally,

$R_{eq}=\frac{v_{oc}}{i_{sc}}$
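So two terminal measurements suffice to characterize the box. With hypothetical readings:

```python
# Equivalent resistance from the two boundary measurements:
# Req = voc / isc.
v_oc = 9.0   # measured open-circuit voltage, volts (assumed)
i_sc = 1.5   # measured short-circuit current, amps (assumed)

R_eq = v_oc / i_sc
print(R_eq)  # 6.0 ohms
```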

## Why Transform Circuits?

How are Thevenin and Norton transforms useful?

• Describe a black box's characteristics in a way that can predict its reaction to any load.
• Find the current through and voltage across any device by removing the device from the circuit! This can instantly make a complex circuit much simpler to analyze.
• Simplify a circuit stepwise, provided voltage sources have a series impedance and current sources have a parallel impedance.

# Maximum Power Transfer

## Maximum Power Transfer

Often we would like to transfer the most power from a source to a load placed across the terminals as possible. How can we determine the optimum resistance of the load for this to occur?

Let us consider a source modelled by a Thévenin equivalent (a Norton equivalent will lead to the same result, as the two are directly equivalent), with a load resistance, RL. The source resistance is Rs and the open circuit voltage of the source is vs:

The current in this circuit is found using Ohm's Law:

$i=\frac{v_s}{R_s+R_L}$

The voltage across the load resistor, vL, is found using the voltage divider rule:

$v_L=v_s \,\frac{R_L}{R_s + R_L}$

We can now find the power dissipated in the load, PL as follows:

$P_L=v_Li=\frac{R_L \, v^2_s}{\left(R_s+R_L\right)^2}$

We can now rewrite this to get rid of the RL on the top:

$P_L=\frac{v^2_s}{ \left(\frac{R_s}{\sqrt{R_L}}+\sqrt{R_L}\right)^2} = \frac{v^2_s}{ R_s \left(\frac{\sqrt{R_s}}{\sqrt{R_L}}+\frac{\sqrt{R_L}}{\sqrt{R_s} }\right)^2}$

Assuming the source resistance is not changeable, then we obtain maximum power by minimising the bracketed part of the denominator in the above equation. It is an elementary mathematical result that $x+x^{-1}$ is at a minimum when x=1. In this case, it is equal to 2. Therefore, the above expression is minimum under the following condition:

$\frac{\sqrt{R_s}}{\sqrt{R_L}}=1$

This leads to the condition that:

 $R_L=R_s \,$

We will get maximum power out of the source if the load resistance is identical to the internal source resistance. This is the Maximum Power Transfer Theorem.
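The theorem is easy to confirm numerically: sweep the load resistance and watch the load power peak at $R_L=R_s$. The source values below are assumptions for illustration.

```python
# Numerical check of maximum power transfer for a Thevenin source.
vs, Rs = 10.0, 4.0  # hypothetical source: 10 V behind 4 ohms

def load_power(RL):
    # PL = RL * vs^2 / (Rs + RL)^2, from the derivation above
    return RL * vs**2 / (Rs + RL)**2

# Sweep RL over a grid from 0.1 to 20.0 ohms.
candidates = [round(0.1 * k, 1) for k in range(1, 201)]
best_RL = max(candidates, key=load_power)
print(best_RL)               # 4.0 ohms, equal to Rs
print(load_power(best_RL))   # 6.25 W, i.e. vs^2 / (4 * Rs)
```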

### Efficiency

The efficiency, η of the circuit is the proportion of all the energy dissipated in the circuit that is dissipated in the load. We can immediately see that at maximum power transfer to the load, the efficiency is 0.5, as the source resistor has half the voltage across it. We can also see that efficiency will increase as the load resistance increases, even though the power transferred will fall.

The efficiency can be calculated using the following equation:

$\eta=\frac{P_L}{P_L+P_s}$

where Ps is the power in the source resistor. This can be found using a simple modification to the equation for PL:

$P_s=\frac{v^2_s}{ R_L \left(\frac{\sqrt{R_s}}{\sqrt{R_L}}+\frac{\sqrt{R_L}}{\sqrt{R_s} }\right)^2}$

The graph below shows the power in the load (as a proportion of the maximum power, Pmax) and the efficiency for values of RL between 0 and 5 times Rs.

It is important to note that under conditions of maximum power transfer as much power is dissipated in the source as in the load. This is not a desirable condition if, for example, the source is the electricity supply system and the load is your electric heater. This would mean that the electricity supply company would be wasting half the power it generates. In this case, the generators, power lines, etc. are designed to give the lowest source resistance possible, giving high efficiency. The maximum power transfer condition is used in (usually high-frequency) communications systems where the source resistance can not be made low, the power levels are relatively low and it is paramount to get as much signal power as possible to the receiving end of the system (the load).

# Resistive Circuit Analysis Methods

## Analysis Methods

When circuits get large and complicated, it is useful to have various methods for simplifying and analyzing the circuit. There is no single perfect formula for solving a circuit. Depending on the type of circuit, different methods can be employed to solve it. Some methods might not apply, and some may require long, tedious algebra. Two of the most important methods for solving circuits are Nodal Analysis and Mesh Current Analysis. These will be explained below.

## Superposition

One of the most important principles in the field of circuit analysis is the principle of superposition. It is valid only in linear circuits.

The superposition principle states that the total effect of multiple contributing sources on a linear circuit is equal to the sum of the individual effects of the sources, taken one at a time.

What does this mean? In plain English, it means that if we have a circuit with multiple sources, we can "turn off" all but one source at a time, and then investigate the circuit with only one source active at a time. We do this with every source, in turn, and then add together the effects of each source to get the total effect. Before we put this principle to use, we must be aware of the underlying mathematics.

### Necessary Conditions

Superposition can only be applied to linear circuits; that is, all of a circuit's sources must hold a linear relationship with the circuit's responses. Using only a few algebraic rules, we can build a mathematical understanding of superposition. If f is taken to be the response, and a and b are constants, then:

$f(ax_1+bx_2)= a f(x_1) + b f(x_2) \,$

In terms of a circuit, this clearly expresses the concept of superposition: each input can be considered individually, and the results summed to obtain the output. With just a few more algebraic steps, we can see that superposition cannot be applied to non-linear circuits. In this example, the response y is equal to the square of the input x, i.e. $y=x^2$. If a and b are constants, then:

$y=(ax_1+bx_2)^2 \ne (ax_1)^2 + (bx_2)^2 = y_1+y_2\,$

Note that this is only one of an infinite number of counter-examples...

### Step by Step

Using superposition to find a given output can be broken down into four steps:

1. Isolate a source - Select a source, and set all of the remaining sources to zero. The consequences of "turning off" these sources are explained in Open and Closed Circuits. In summary, turning off a voltage source results in a short circuit, and turning off a current source results in an open circuit. (Reasoning: no current can flow through an open circuit, and there can be no voltage drop across a short circuit.)
2. Find the output from the isolated source - Once a source has been isolated, the response from the source in question can be found using any of the techniques we've learned thus far.
3. Repeat steps 1 and 2 for each source - Continue to choose a source, set the remaining sources to zero, and find the response. Repeat this procedure until every source has been accounted for.
4. Sum the Outputs - Once the output due to each source has been found, add them together to find the total response.
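The four steps above can be checked numerically. The sketch below uses a hypothetical example circuit (not one from the text): a voltage source Vs in series with R1 feeding a node, R2 from that node to ground, and a current source Is injecting into the same node. The node voltage is found directly and by superposition, and the two answers agree:

```python
# Superposition check on an assumed two-source circuit.
# All component and source values below are arbitrary assumptions.

R1, R2 = 100.0, 200.0
Vs, Is = 12.0, 0.03   # 12 V source, 30 mA source

# Direct nodal analysis: (v - Vs)/R1 + v/R2 = Is
v_direct = (Vs / R1 + Is) / (1 / R1 + 1 / R2)

# Step 1-2, source Vs alone: current source off (open) -> voltage divider
v_from_Vs = Vs * R2 / (R1 + R2)

# Step 3, source Is alone: voltage source off (short) -> Is into R1 || R2
r_parallel = R1 * R2 / (R1 + R2)
v_from_Is = Is * r_parallel

# Step 4: sum the outputs
print(v_direct, v_from_Vs + v_from_Is)  # both give 10.0 V
```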

## Impulse Response

An impulse response of a circuit can be used to determine the output of the circuit:

The output y is the convolution h * x of the input x and the impulse response:

[Convolution]

$y(t) = (h*x)(t) = \int_{-\infty}^{+\infty} h(t-s)x(s)ds$.

If the input, x(t), was an impulse ($\delta(t)$), the output y(t) would be equal to h(t).

By knowing the impulse response of a circuit, any source can be plugged-in to the circuit, and the output can be calculated by convolution.

## Convolution

The convolution operation is a relatively difficult, involved operation that combines two functions into a single resulting function. Convolution is defined in terms of a definite integral, and as such, solving convolution equations requires knowledge of integral calculus. This wikibook does not assume prior knowledge of integral calculus, and therefore will not go into more depth on this subject than a simple definition and some light explanation.

### Definition

The convolution a * b of two functions a and b is defined as:

$(a * b)(t) = \int_{-\infty}^\infty a(\tau)b(t - \tau)d\tau$
Remember:
Asterisks mean convolution, not multiplication

The asterisk operator is used to denote convolution. Many computer systems, and people who frequently write mathematics on a computer, use an asterisk to denote simple multiplication (the asterisk is the multiplication operator in many programming languages), so an important distinction must be made here: in this context, the asterisk operator means convolution.

### Properties

Convolution is commutative, in the sense that $a * b = b * a$. Convolution is also distributive over addition, i.e. $a * (b + c) = a * b + a * c$, and associative, i.e. $a * (b * c) = (a * b) * c$.
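These properties carry over to the discrete analogue of convolution, $(a*b)[n] = \sum_k a[k]\,b[n-k]$, which is easy to verify numerically. The sketch below checks commutativity and distributivity on small, arbitrarily chosen sequences:

```python
# Discrete convolution of finite sequences, used to verify the
# commutative and distributive properties stated above.

def conv(a, b):
    """Full discrete convolution: out[n] = sum_k a[k] * b[n-k]."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

a = [1.0, 2.0, 3.0]   # arbitrary test sequences
b = [0.5, -1.0]
c = [2.0, 0.0, 1.0]

assert conv(a, b) == conv(b, a)                        # commutative
lhs = conv(a, [x + y for x, y in zip(b + [0.0], c)])   # a * (b + c), b padded
rhs = [x + y for x, y in zip(conv(a, b) + [0.0], conv(a, c))]
assert lhs == rhs                                      # distributive
print("properties hold")
```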

### Systems, and convolution

Let us say that we have the following block-diagram system:

x(t) → [ h(t) ] → y(t)

Where x(t) is the input to the circuit, h(t) is the circuit's impulse response, and y(t) is the output. Here, we can find the output by convolving the impulse response with the input to the circuit. Note that the impulse response is not simply the ratio of the output over the input. In the frequency domain, however, the component of the output at frequency ω is the product of the input component at that frequency and the transfer function at that frequency. The moral of the story is this: the output of a circuit is the input convolved with the impulse response.

Capacitors and Inductors

Resistors, wires, and sources are not the only passive circuit elements. Capacitors and Inductors are also common, passive elements that can be used to store and release electrical energy in a circuit. We will use the analysis methods that we learned previously to make sense of these complicated circuit elements.

# First-Order Circuits

## First Order Circuits

First order circuits are circuits that contain only one energy storage element (capacitor or inductor), and that can, therefore, be described using only a first order differential equation. The two possible types of first-order circuits are:

1. RC (resistor and capacitor)
2. RL (resistor and inductor)

"RL circuit" and "RC circuit" are the terms we will use for circuits containing, respectively, a) resistors and inductors (RL), or b) resistors and capacitors (RC).

## RL Circuits

An RL parallel circuit

An RL Circuit has at least one resistor (R) and one inductor (L). These can be arranged in parallel, or in series. Inductors are best solved by considering the current flowing through the inductor. Therefore, we will combine the resistive element and the source into a Norton Source Circuit. The Inductor then, will be the external load to the circuit. We remember the equation for the inductor:

$v(t) = L\frac{di}{dt}$

If we apply KCL at the node shared by the Norton current source, the resistor, and the inductor, we can solve to get the following differential equation:

$i_{source}(t) = \frac{L}{R_n}\frac{di_{inductor}(t)}{dt} + i_{inductor}(t)$

We will show how to solve differential equations in a later chapter.

## RC Circuits

A parallel RC Circuit

An RC circuit is a circuit that has both a resistor (R) and a capacitor (C). Like the RL Circuit, we will combine the resistor and the source on one side of the circuit, and combine them into a thevenin source. Then if we apply KVL around the resulting loop, we get the following equation:

$v_{source} = RC\frac{dv_{capacitor}(t)}{dt} + v_{capacitor}(t)$
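This first-order equation can be sketched numerically before we solve it analytically. The snippet below applies a forward-Euler step to the RC equation above, $v_{source} = RC\,\frac{dv}{dt} + v$, for an assumed 1 V step with R = 1 kΩ and C = 1 µF (so RC = 1 ms). After five time constants the capacitor voltage is within about 1% of the source voltage:

```python
# Forward-Euler integration of v_s = RC dv/dt + v.
# The step amplitude and component values are assumptions for illustration.
import math

R, C, v_s = 1e3, 1e-6, 1.0   # 1 kOhm, 1 uF, 1 V step
tau = R * C                  # time constant = 1 ms
dt = tau / 1000.0
v, t = 0.0, 0.0              # capacitor initially discharged
while t < 5 * tau:
    v += (v_s - v) / (R * C) * dt   # rearranged: dv/dt = (v_s - v)/RC
    t += dt

print(v, 1 - math.exp(-5))   # Euler result vs the exact value after 5 tau
```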

## First Order Solution

### Series RL

The differential equation of the series RL circuit

$L \frac{dI}{dt} + I R = 0$
$\frac{dI}{dt} = - I \frac{R}{L}$
$\frac{1}{I} dI = - \frac{R}{L} dt$
$\int \frac{1}{I} dI = - \frac{R}{L} \int dt$
$\ln I = - \frac{R}{L} t + C$
$I = e^{- \frac{R}{L} t + C }$
$I = A e^{- \frac{R}{L} t }$, where $A = e^C$
| $t$ | $I(t)$ |
|---|---|
| $0$ | $A$ |
| $\frac{L}{R}$ | $36.8\%\ A$ |
| $2\frac{L}{R}$ | $13.5\%\ A$ |
| $3\frac{L}{R}$ | $5.0\%\ A$ |
| $4\frac{L}{R}$ | $1.8\%\ A$ |
| $5\frac{L}{R}$ | $0.7\%\ A$ |
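The percentages in the table are just $e^{-n}$ evaluated at whole numbers of time constants ($\tau = L/R$), which can be reproduced directly:

```python
# Reproduce the exponential-decay table: I(n*tau)/A = e^{-n}.
import math

for n in range(6):
    print(f"{n} time constant(s): {100 * math.exp(-n):.1f}% of A")
```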

### Series RC

The differential equation of the series RC circuit

$C \frac{dV}{dt} + \frac{V}{R} = 0$
$\frac{dV}{dt} = - V \frac{1}{RC}$
$\frac{1}{V} dV = - \frac{1}{RC} dt$
$\int \frac{1}{V} dV = - \frac{1}{RC} \int dt$
$\ln V = - \frac{1}{RC} t + K$
$V = e^{- \frac{1}{RC} t + K }$
$V = A e^{- \frac{1}{RC} t }$, where $A = e^K$ (here $K$ is the constant of integration, renamed to avoid confusion with the capacitance $C$)

| $t$ | $V(t)$ |
|---|---|
| $0$ | $A$ |
| $RC$ | $36.8\%\ A$ |
| $2RC$ | $13.5\%\ A$ |
| $3RC$ | $5.0\%\ A$ |
| $4RC$ | $1.8\%\ A$ |
| $5RC$ | $0.7\%\ A$ |

### Time Constant

The series RL and RC circuits each have a Time Constant:

$T = \frac{L}{R}$ (series RL)
$T = RC$ (series RC)

In general, from an engineering standpoint, we say that the system has reached steady state (the voltage or current has essentially decayed to zero) after a period of five time constants.

# RLC Circuits

## Series RLC Circuit

### Second Order Differential Equation

$L \frac{dI}{dt} + I R + \frac{1}{C} \int I dt = V$
$\frac{d^2I}{dt^2} + \frac{R}{L} \frac{dI}{dt} + \frac{I}{LC} = 0$

The characteristic equation is

$s^2 + \frac{R}{L}s + \frac{1}{LC} = 0$
$s = -\alpha \pm \sqrt{\alpha^2 - \beta^2}$

Where

$\alpha = \frac{R}{2L}$
$\beta = \frac{1}{\sqrt{LC}}$

When $\alpha^2 - \beta^2 = 0$

$\alpha^2 = \beta^2$, i.e. $R = 2 \sqrt{\frac{L}{C}}$
The characteristic equation has one repeated real root: $s = -\alpha = - \frac{R}{2L}$
The solution is $I(t) = (A + Bt) e^{-\frac{R}{2L} t}$ (critically damped)
The I - t curve decays to zero without oscillation

When $\alpha^2 - \beta^2 > 0$

$\alpha^2 > \beta^2$, i.e. $R > 2\sqrt{\frac{L}{C}}$
The characteristic equation has two distinct real roots: $s_{1,2} = -\alpha \pm \sqrt{\alpha^2 - \beta^2}$
The solution is $I(t) = A e^{s_1 t} + B e^{s_2 t}$ (overdamped)
The I - t curve decays to zero without oscillation, more slowly than in the critically damped case

When $\alpha^2 - \beta^2 < 0$

$\alpha^2 < \beta^2$, i.e. $R < 2\sqrt{\frac{L}{C}}$
The characteristic equation has two complex conjugate roots: $s = -\alpha \pm j\sqrt{\beta^2 - \alpha^2}$
The solution is $I(t) = e^{-\alpha t}\left[ A \cos\left(\sqrt{\beta^2 - \alpha^2}\, t\right) + B \sin\left(\sqrt{\beta^2 - \alpha^2}\, t\right)\right]$ (underdamped)
The I - t curve is a decaying oscillation
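The three cases can be classified directly from R, L and C by comparing $\alpha = \frac{R}{2L}$ with $\beta = \frac{1}{\sqrt{LC}}$, as in this sketch (component values are arbitrary assumptions):

```python
# Classify a series RLC response by comparing alpha = R/(2L)
# with beta = 1/sqrt(LC). Component values are assumed examples.
import math

def classify(R, L, C):
    alpha = R / (2 * L)
    beta = 1 / math.sqrt(L * C)
    if alpha > beta:
        return "overdamped"
    if alpha < beta:
        return "underdamped"
    return "critically damped"

L, C = 1e-3, 1e-6                 # 1 mH, 1 uF
R_crit = 2 * math.sqrt(L / C)     # boundary value of R, ~63.2 ohms here
print(R_crit)
print(classify(200.0, L, C))      # R > R_crit -> overdamped
print(classify(10.0, L, C))       # R < R_crit -> underdamped
```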

### Damping Factor

The damping factor is the amount by which the oscillations of a circuit gradually decrease over time. We define the damping ratio to be:

| Circuit Type | Series RLC | Parallel RLC |
|---|---|---|
| Damping Factor | $\zeta = {R \over 2L}$ | $\zeta = {1 \over 2RC}$ |
| Resonance Frequency | $\omega_o = {1 \over \sqrt{L C}}$ | $\omega_o = {1 \over \sqrt{L C}}$ |

Comparing the damping factor with the resonance frequency gives rise to three different types of circuit response: Overdamped, Underdamped, and Critically Damped.

### Bandwidth

[Bandwidth]

$\Delta \omega = 2 \zeta$

For series RLC circuit:

$\Delta \omega = 2 \zeta = { R \over L}$

For Parallel RLC circuit:

$\Delta \omega = 2 \zeta = { 1 \over RC}$

### Quality Factor

[Quality Factor]

$Q = {\omega_o \over \Delta \omega } = {\omega_o \over 2\zeta }$

For Series RLC circuit:

$Q = {\omega_o \over \Delta \omega } = {\omega_o \over 2\zeta } = {L \over R \sqrt{LC}} = {1 \over R} \sqrt{L \over C}$

For Parallel RLC circuit:

$Q = {\omega_o \over \Delta \omega } = {\omega_o \over 2\zeta } = {RC \over \sqrt{LC}} = {R} \sqrt{C \over L}$
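The bandwidth and quality-factor formulas above can be cross-checked numerically for a sample series RLC circuit (the component values are assumptions):

```python
# Resonant frequency, bandwidth and Q for an assumed series RLC circuit.
import math

R, L, C = 10.0, 1e-3, 1e-6
omega_0 = 1 / math.sqrt(L * C)   # resonant frequency (rad/s)
bandwidth = R / L                # series RLC: delta_omega = R/L
Q = omega_0 / bandwidth

print(omega_0, bandwidth, Q)
print((1 / R) * math.sqrt(L / C))  # same Q via (1/R) * sqrt(L/C)
```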

### Stability

Because inductors and capacitors act differently to different inputs, there is some potential for the circuit response to approach infinity when subjected to certain types and amplitudes of inputs. When the output of a circuit approaches infinity, the circuit is said to be unstable. Unstable circuits can actually be dangerous, as unstable elements overheat, and potentially rupture.

A circuit is considered to be stable when a "well-behaved" input produces a "well-behaved" output response. We use the term "Well-Behaved" differently for each application, but generally, we mean "Well-Behaved" to mean a finite and controllable quantity.

## Resonance

### With R = 0

When R = 0, the circuit reduces to a series LC circuit. When the circuit is in resonance, it will oscillate at the resonant frequency.

$Z_L = Z_C$
$\omega L = \frac{1}{\omega C}$
$\omega = \frac{1}{\sqrt{LC}}$
$f = \frac{1}{2\pi} \frac{1}{\sqrt{LC}}$

The circuit oscillates, and can sustain a standing wave, when R = 0

### With R ≠ 0

When R ≠ 0 and the circuit operates at resonance:

The frequency-dependent reactances of L and C cancel, i.e. $Z_L - Z_C = 0$, so the total impedance of the circuit is $Z_R + Z_L + Z_C = R + [ Z_L - Z_C ] = R + 0 = R$
The current of the circuit is $I = \frac{V}{R}$
The operating frequency is $\omega = \frac{1}{\sqrt{LC}}$

If the current is halved by doubling the value of resistance, then

$I = \frac{V}{2R}$
and the circuit responds over the range of frequencies from $\omega_1$ to $\omega_2$.

The circuit can thus select the bandwidth over which it responds, making it well suited for use as a tuned, bandwidth-selecting (band-pass) filter.

Once L or C is used to tune the circuit into resonance at the resonant frequency $f = \frac{1}{2\pi} \frac{1}{\sqrt{LC}}$, the current is at its maximum value $I = \frac{V}{R}$. If the current threshold is raised above $I = \frac{V}{2R}$, the circuit responds to a narrower bandwidth than $\omega_1$ to $\omega_2$; if it is lowered below $I = \frac{V}{2R}$, the circuit responds to a wider bandwidth.

## Conclusion

| Circuit | Series RLC | Parallel RLC |
|---|---|---|
| Impedance $Z$ | $Z = (j\omega)^2 + (j\omega)\frac{R}{L} + \frac{1}{LC}$ | $Z = \frac{1}{RLC} \cdot \frac{1}{(j\omega)^2 + j\omega\frac{1}{RC} + \frac{1}{LC}}$ |
| Roots $\lambda$ | $\lambda = - \zeta \pm \sqrt{\zeta^2 - \omega_o^2}$ | $\lambda = - \zeta \pm \sqrt{\zeta^2 - \omega_o^2}$ |
| $I(t)$ | $Ae^{\lambda_1 t} + Be^{\lambda_2 t}$ | $Ae^{\lambda_1 t} + Be^{\lambda_2 t}$ |
| Damping Factor $\zeta$ | $\zeta = {R \over 2L}$ | $\zeta = {1 \over 2RC}$ |
| Resonant Frequency $\omega_o$ | $\omega_o = {1 \over \sqrt{L C}}$ | $\omega_o = {1 \over \sqrt{L C}}$ |
| Bandwidth $\Delta \omega = 2 \zeta$ | ${ R \over L}$ | ${ 1 \over RC}$ |
| Quality Factor $Q = {\omega_o \over \Delta \omega}$ | $Q = {1 \over R} \sqrt{L \over C}$ | $Q = {R} \sqrt{C \over L}$ |

# The Second-Order Circuit Solution

## Second-Order Solution

This page is going to talk about the solutions to a second-order, RLC circuit. The second-order solution is reasonably complicated, and a complete understanding of it will require an understanding of differential equations. This book will not require you to know about differential equations, so we will describe the solutions without showing how to derive them. The derivations may be put into another chapter, eventually.

The aim of this chapter is to develop the complete response of the second-order circuit. There are a number of steps involved in determining the complete response:

1. Obtain the differential equations of the circuit
2. Determine the resonant frequency and the damping ratio
3. Obtain the characteristic equations of the circuit
4. Find the roots of the characteristic equation
5. Find the natural response
6. Find the forced response
7. Find the complete response

We will discuss all these steps one at a time.

## Finding Differential Equations

A Second-order circuit cannot possibly be solved until we obtain the second-order differential equation that describes the circuit. We will discuss here some of the techniques used for obtaining the second-order differential equation for an RLC Circuit.

Note
Parallel RLC circuits are easier to solve using ordinary differential equations in voltage (obtained by applying Kirchhoff's Current Law at a node), and series RLC circuits are easier to solve using ordinary differential equations in current (obtained by applying Kirchhoff's Voltage Law around the loop).

### The Direct Method

The most direct method for finding the differential equations of a circuit is to perform a nodal analysis, or a mesh current analysis on the circuit, and then solve the equation for the input function. The final equation should contain only derivatives, no integrals.

### The Variable Method

If we create two variables, g and h, we can use them to create a second-order differential equation. First, we set g and h to be either inductor currents, capacitor voltages, or both. Next, we write a first-order differential equation expressing the derivative of g in terms of both variables: $\frac{dg}{dt} = f(g, h)$. Then, we write another first-order differential equation that has the form:

$\frac{dh}{dt} = Kg$ or $\frac{1}{K}\frac{dh}{dt} = g$

Next, we substitute in our second equation into our first equation, and we have a second-order equation.

## Zero-Input Response

The zero-input response of a circuit is the state of the circuit when there is no forcing function (no current input, and no voltage input). We can set the differential equation as such:

${{d^2 i} \over {dt^2}} + 2 \zeta {{di} \over {dt}} + \omega_o^2 i(t) = 0$

This gives rise to the characteristic equation of the circuit, which is explained below.

## Characteristic Equation

The characteristic equation of an RLC circuit is obtained using the "Operator Method" described below, with zero input. The characteristic equation of an RLC circuit (series or parallel) will be:

$s^2i + {R \over L} si + {1 \over {LC}} i = 0$

The roots to the characteristic equation are the "solutions" that we are looking for.

### Finding the Characteristic Equation

This method of obtaining the characteristic equation requires a little trickery. First, we create an operator s such that:

$sx = \frac{dx}{dt}$

Also, we can show higher-order operators as such:

$s^2x = \frac{d^2x}{dt^2}$

Where x is the voltage (in a series circuit) or the current (in a parallel circuit) of the circuit source. We write 2 first order differential equations for the inductor currents and/or the capacitor voltages in our circuit. We convert all the differentiations to s, and all the integrations (if any) into (1/s). We can then use Cramer's rule to solve for a solution.

### Solutions

The solutions of the characteristic equation are given in terms of the resonant frequency and the damping ratio:

[Characteristic Equation Solution]

$s = - \zeta \pm \sqrt{\zeta^2 - \omega_o^2}$

If either of these two values is used for s in the assumed solution $x = Ae^{st}$, and that solution satisfies the differential equation, then it is a valid solution. We will discuss this more, below.
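The root formula above handles both real and complex cases automatically if evaluated with complex arithmetic, as in this sketch (the $\zeta$ and $\omega_o$ values are arbitrary assumptions):

```python
# Characteristic roots s = -zeta +/- sqrt(zeta^2 - omega_0^2),
# computed with complex math so underdamped cases need no special handling.
import cmath

def char_roots(zeta, omega_0):
    disc = cmath.sqrt(zeta**2 - omega_0**2)
    return (-zeta + disc, -zeta - disc)

print(char_roots(3.0, 5.0))  # zeta < omega_0: complex conjugates (-3+4j), (-3-4j)
print(char_roots(5.0, 3.0))  # zeta > omega_0: two real roots (-1+0j), (-9+0j)
```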

## Damping

The solutions to a circuit are dependent on the type of damping that the circuit exhibits, as determined by the relationship between the damping ratio and the resonant frequency. The different types of damping are Overdamping, Underdamping, and Critical Damping.

### Overdamped

RLC series Over-Damped Response

A circuit is called Overdamped when the following condition is true:

$\alpha > \omega_0$

In this case, the solutions to the characteristic equation are two distinct, negative real numbers, and the response is given by the equation:

$I(t)=A e^{\ s_1 t} + B e^{\ s_2 t}$, where
$s_1,s_2 = - \alpha \pm \sqrt{\alpha^2 - \omega_0^2}$

In a parallel circuit:

$\alpha = 1/(2RC)$
$\omega_0 = 1 / \sqrt{(LC)}$

In a series circuit:

$\alpha = R/(2L)$
$\omega_0 = 1 / \sqrt{(LC)}$

Overdamped circuits are characterized as having a very large settling time, and possibly a large steady-state error.

### Underdamped

A Circuit is called Underdamped when the damping ratio is less than the resonant frequency.

$\zeta < \omega_0$

In this case, the characteristic polynomial's solutions are complex conjugates. This results in oscillations or ringing in the circuit. The solution consists of two conjugate roots:

$\lambda_1 = -\zeta + i\omega_c$

and

$\lambda_2 = -\zeta - i\omega_c$

where

$\omega_c = \sqrt{\omega_o^2 - \zeta^2}$

The solutions are:

$i(t) = Ae^{(-\zeta + i \omega_c)t} + Be^{(-\zeta - i \omega_c)t}$

for arbitrary constants A and B. Using Euler's formula, we can simplify the solution as:

$i(t)=e^{-\zeta t} \left[ C \sin(\omega_c t) + D \cos(\omega_c t) \right]$

for arbitrary constants C and D. These solutions are characterized by exponentially decaying sinusoidal response. The higher the Quality Factor (below), the longer it takes for the oscillations to decay.
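The equivalence of the two forms can be verified numerically: if B is the complex conjugate of A (which is required for a real-valued current), the exponential form equals the sinusoidal form with $D = 2\,\mathrm{Re}(A)$ and $C = -2\,\mathrm{Im}(A)$. All numeric values below are arbitrary test values:

```python
# Check that the complex-exponential and sine/cosine forms of the
# underdamped solution agree. zeta, w_c and A are arbitrary assumptions.
import cmath, math

zeta, w_c = 2.0, 7.0
A = complex(0.5, -1.25)   # arbitrary; B must be conj(A) for a real signal
B = A.conjugate()
D = 2 * A.real
C = -2 * A.imag

for t in [0.0, 0.1, 0.5, 1.0]:
    exp_form = A * cmath.exp((-zeta + 1j * w_c) * t) + B * cmath.exp((-zeta - 1j * w_c) * t)
    trig_form = math.exp(-zeta * t) * (C * math.sin(w_c * t) + D * math.cos(w_c * t))
    assert abs(exp_form.real - trig_form) < 1e-12 and abs(exp_form.imag) < 1e-12
print("forms agree")
```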

### Critically Damped

RLC series Critically Damped

A circuit is called Critically Damped if the damping factor is equal to the resonant frequency:

$\zeta=\omega_0$

In this case, the solution to the characteristic equation is a double root. The two roots are identical ($\lambda_1=\lambda_2=\lambda$), and the solution is:

$I(t)=(A+Bt) e^{\lambda t}$

for arbitrary constants A and B. Critically damped circuits typically have low overshoot, no oscillations, and quick settling time.

## Series RLC

A series RLC circuit.

The differential equation to a simple series circuit with a constant voltage source V, and a resistor R, a capacitor C, and an inductor L is:

$L\frac{d^2q}{dt^2} + R\frac{dq}{dt} + {1 \over C}q = 0$

The characteristic equation then, is as follows:

$Ls^2 + Rs + {1 \over C} = 0$

With the two roots:

$s_1 = -{R\over 2L} + \sqrt{({R\over 2L})^2 - {1 \over LC}}$

and

$s_2 = -{R\over 2L} - \sqrt{({R\over 2L})^2 - {1 \over LC}}$

## Parallel RLC

A parallel RLC Circuit.

The differential equation to a parallel RLC circuit with a resistor R, a capacitor C, and an inductor L is as follows:

$C\frac{d^2v}{dt^2} + \frac{1}{R}\frac{dv}{dt} + {1 \over L}v = 0$

Where v is the voltage across the circuit. The characteristic equation then, is as follows:

$Cs^2 + {1 \over R}s + {1 \over L} = 0$

With the two roots:

$s_1 = -{1\over 2RC} + \sqrt{({1\over 2RC})^2 - {1 \over LC}}$

and

$s_2 = -{1\over 2RC} - \sqrt{({1\over 2RC})^2 - {1 \over LC}}$

## Circuit Response

Once we have our differential equations, and our characteristic equations, we are ready to assemble the mathematical form of our circuit response. RLC Circuits have differential equations in the form:

$a_2 \frac{d^2x}{dt^2} + a_1\frac{dx}{dt} + a_0 x = f(t)$

Where f(t) is the forcing function of the RLC circuit.

### Natural Response

The natural response of a circuit is the response of a given circuit to zero input (i.e. depending only upon the initial condition values). The natural Response to a circuit will be denoted as xn(t). The natural response of the system must satisfy the unforced differential equation of the circuit:

[Unforced function]

$a_2 \frac{d^2x}{dt^2} + a_1\frac{dx}{dt} + a_0 x = 0$

We remember this equation as being the "zero input response", that we discussed above. We now define the natural response to be an exponential function:

$x_n = A_1e^{s_1 t} + A_2e^{s_2 t}$

Where $s_1$ and $s_2$ are the roots of the characteristic equation of the circuit. The reason for choosing this specific form for xn is based in differential equations theory, and we will accept it without proof for the time being. We can solve for the constant values by using a system of two equations:

$x(0) = A_1 + A_2$
$\frac{dx(0)}{dt} = s_1A_1 + s_2A_2$

Where x is the voltage (of the elements in a parallel circuit) or the current (through the elements in a series circuit).
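The 2x2 system above is easy to solve for $A_1$ and $A_2$ once the roots and initial conditions are known. The sketch below does this for assumed values, using distinct real roots:

```python
# Solve x(0) = A1 + A2 and x'(0) = s1*A1 + s2*A2 for A1, A2.
# All numeric values are assumed demo values; requires s1 != s2.

def natural_constants(x0, dx0, s1, s2):
    A1 = (dx0 - s2 * x0) / (s1 - s2)
    A2 = x0 - A1
    return A1, A2

s1, s2 = -1.0, -9.0
A1, A2 = natural_constants(x0=2.0, dx0=0.0, s1=s1, s2=s2)
print(A1, A2)  # check: A1 + A2 = 2 and s1*A1 + s2*A2 = 0
```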

### Forced Response

The forced response of a circuit is the way the circuit responds to an input forcing function. The Forced response is denoted as xf(t).

Where the forced response must satisfy the forced differential equation:

[Forced function]

$a_2 \frac{d^2x}{dt^2} + a_1\frac{dx}{dt} + a_0 x = f(t)$

The forced response is based on the input function, so we can't give a general solution to it. However, we can provide a set of solutions for different inputs:

| Input Form | Output Form |
|---|---|
| $K$ (constant) | $A$ (constant) |
| $M \sin(\omega t)$ | $A \sin(\omega t) + B \cos (\omega t)$ |
| $M e^{-at}$ | $A e^{-at}$ |
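For the sinusoidal row of the table, the constants A and B come from substituting the assumed form back into the differential equation and matching the sine and cosine coefficients (the method of undetermined coefficients). The sketch below does this for arbitrary assumed coefficients and verifies the result by plugging it back in:

```python
# Undetermined coefficients for a2*x'' + a1*x' + a0*x = M*sin(w*t),
# trying x_f = A*sin(w*t) + B*cos(w*t). All numbers are assumptions.
import math

a2, a1, a0 = 1.0, 3.0, 2.0
M, w = 10.0, 2.0

p = a0 - a2 * w * w   # sin/cos coefficient from the x'' and x terms
q = a1 * w            # cross coefficient from the x' term
A = p * M / (p * p + q * q)
B = -q * M / (p * p + q * q)

# Residual check: substitute x_f back into the ODE at a sample time
t = 0.7
x   = A * math.sin(w * t) + B * math.cos(w * t)
dx  = w * (A * math.cos(w * t) - B * math.sin(w * t))
ddx = -w * w * x
residual = a2 * ddx + a1 * dx + a0 * x - M * math.sin(w * t)
print(abs(residual) < 1e-12)  # True: the trial solution satisfies the ODE
```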

### Complete Response

The Complete response of a circuit is the sum of the forced response, and the natural response of the system:

[Complete Response]

$x(t) = x_n(t) + x_f(t)$

Once we have derived the complete response of the circuit, we can say that we have "solved" the circuit, and are finished working.

For a second-order LC circuit (no resistor), the damping factor is zero: in the parallel case, $\alpha = \frac{1}{2RC} \to 0$ as $R \to \infty$. With $\alpha = 0$ and $\omega_o > 0$, the circuit is the undamped limiting case of the underdamped response and oscillates without decay.

# Mutual Inductance

## Magnetic Fields

Inductors store energy in the form of a magnetic field. The magnetic field of an inductor actually extends outside of the inductor, and can be affected (or can affect) another inductor close by. The image above shows a magnetic field (red lines) extending around an inductor.

## Mutual Inductance

If we accidentally or purposefully put two inductors close together, we can actually transfer voltage and current from one inductor to another. This property is called Mutual Inductance. A device which utilizes mutual inductance to alter the voltage or current output is called a transformer.

The inductor that creates the magnetic field is called the primary coil, and the inductor that picks up the magnetic field is called the secondary coil. Transformers are designed to have the greatest mutual inductance possible by winding both coils on the same core. (In calculations for inductance, we need to know which materials form the path for magnetic flux. Air core coils have low inductance; Cores of iron or other magnetic materials are better 'conductors' of magnetic flux.)

The voltage that appears in the secondary is caused by the change in the shared magnetic field, each time the current through the primary changes. Thus, transformers work on A.C. power, since the voltage and current change continuously.

## Modern Inductors

When a coil of $N_p$ turns carries a current, it sets up a magnetic flux $\Phi$ in the core. A change in $\Phi$ induces a voltage in every winding linked by that flux, so both the $N_p$-turn and $N_s$-turn coils see an induced EMF:

$\xi_p = N_p \frac{d\Phi}{dt}$
$\xi_s = N_s \frac{d\Phi}{dt}$

The ratio of $\xi_p$ to $\xi_s$ is therefore the turns ratio:

$\frac{\xi_p}{\xi_s} = \frac{N_p}{N_s}$

If the input voltage applied to the primary coil of $N_p$ turns is $V_p$, the output voltage will be:

$\frac{V_s}{V_p} = \frac{N_s}{N_p}$

$V_s = V_p \frac{N_s}{N_p}$

Thus, this device can increase, decrease, or simply pass along a voltage just by changing the turns ratio of the coils.

Therefore, the output voltage can be

• Increased (stepped up) by making the number of secondary turns Ns greater than Np
• Decreased (stepped down) by making Ns less than Np
• Passed through unchanged (isolation) by making Ns equal to Np
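The turns-ratio relation $V_s = V_p \frac{N_s}{N_p}$ is a one-line calculation. The winding counts and primary voltage below are assumed example values:

```python
# Secondary voltage from the ideal-transformer turns ratio.
# All winding counts and the 120 V primary are assumed examples.

def secondary_voltage(v_p, n_p, n_s):
    return v_p * n_s / n_p

print(secondary_voltage(120.0, 500, 100))  # step down: 24.0 V
print(secondary_voltage(120.0, 100, 500))  # step up: 600.0 V
print(secondary_voltage(120.0, 200, 200))  # 1:1 isolation: 120.0 V
```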

The following photo shows several examples of the construction of inductors and transformers. At the upper right is a toroidal core type (toroid is the mathematical term for a donut shape). This shape very efficiently contains the magnetic flux, so less power (or signal) is lost to heating up the core.

## Step Up and Step Down

The terms 'step-up' and 'step-down' are used to compare the secondary (output) voltage to the voltage supplied to the primary.

Many transformers are specially designed to operate exclusively as step-up or step-down. While an ideal transformer could simply be 'turned around', we find that many actual transformers are built to perform best at certain ranges of voltage and current.

For example, a power transformer may be used to step down household AC (about 120 Volts) to 24V for home heating controls, etc. The output current is higher than the primary current in this example, so the transformer is made with a heavier gauge of wire in its secondary windings.

In transformers that deal with very high voltages, special attention is paid to insulation. The windings that deal with thousands of volts must resist arcing and other problems we do not see at home.

Finally, some transformers in electronic equipment are designed for a task known as 'impedance matching', rather than for specific in/out voltages. This function is explained in literature covering audio and radio topics.

(This section has not yet been written)

# State-Variable Approach

New techniques are needed to solve 2nd order and higher circuits:

• Symbolic solutions are so complicated that merely comparing answers becomes an exercise in itself
• Analytical solution techniques are more fragmented
• The relationships between constants, initial conditions and circuit layout become complicated

A change in strategy is needed if circuit analysis is going to:

• Move beyond the ideal
• Consider more complicated circuits
• Understand limitations/approximations of circuit modeling software

The solution is "State Variables." After a state variable analysis, the exercise of creating symbolic solution can be simplified by eliminating terms that don't have a significant impact on the output.

### State Space

The State Space approach to circuit theory abandons the symbolic/analytical approach to circuit analysis. The state variable model describes a circuit in matrix form and then solves it numerically using tools like series expansions, Simpson's rule, and Cramer's Rule. This was the original starting point of MATLAB.

### State

"State" means "condition" or "status" of the energy storage elements of a circuit. Since resistors don't change (ideally) and don't store energy, they don't change the circuit's state. A state is a snap shot in time of the currents and voltages. The goal of "State Space" analysis is to create a notation that describes all possible states.

### State Variables

The notation used to describe all states should be as simple as possible. Instead of trying to find a complex, high order differential equation, go back to something like Kirchhoff analysis and just write terminal equations.

State variables are voltages across capacitors and currents through inductors. This means that purely resistive circuit cut sets are collapsed into single resistors that end up in series with an inductor or parallel to capacitor. Rather than using the symbols v and i to represent these unknowns, they are both called x. Kirchhoff's equations are used instead of node or loop equations. Terminal equations are substituted into the Kirchhoff's equations so that remaining resistor's currents and voltages are shared with inductors and capacitors.

### State Space Model

This State Space Model describes the inputs (step function μ(t), initial conditions X(0)), the output Y(t), and the matrices A, B, C and D. A, B, C and D combine into a transfer function as follows:

$\frac{\mathbb{Y}(s)}{\mathbb{\mu}(s)} = \boldsymbol{C}\left(s\boldsymbol{I} - \boldsymbol{A}\right)^{-1}\boldsymbol{B} + D$

A control systems class teaches how to build these block diagrams from a desired transfer function. Integrals "remember" or accumulate a history of past states; derivatives predict the future state; and both, along with the current state, can be scaled separately. "A" represents feedback, and "D" represents feedforward. There is a lot to learn.

How does the above help us predict voltages and currents in a circuit? Let's start by defining terms and doing some examples:

• A is a square matrix representing the circuit components (from Kirchhoff's equations).
• B is a column matrix (vector) representing how the source drives the circuit (from Kirchhoff's equations).
• C is a row matrix (vector) representing how the output is computed (the output could be a voltage or a current).
• D is a single number that scales the source directly; it is usually zero unless the source is connected to the output through only a resistor.

A and B describe the circuit in general. If X is a column matrix (vector) representing all unknown voltages and currents, then:

$\boldsymbol{\dot{X}} = \boldsymbol{A}\boldsymbol{X} + \boldsymbol{B}\boldsymbol{\mu}$

Solving this equation yields X, a column of functions of time. The output can then be derived from the known X's and the original step function μ, using C and D:

$y = \boldsymbol{C}\boldsymbol{X} + D\boldsymbol{\mu}$
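These two equations can be exercised numerically. Below is a minimal sketch (component values are invented for illustration) that builds A, B, C and D for a series RC circuit driven by a unit step and integrates $\dot{X} = AX + B\mu$ with forward Euler:

```python
import numpy as np

# A minimal numeric sketch of the two state equations for a series RC
# circuit driven by a unit step (component values are invented).
# State x = capacitor voltage: x' = -x/(R*C) + u/(R*C), output y = x.
R, C_f = 1e3, 1e-6                  # 1 kOhm, 1 uF -> time constant 1 ms
A = np.array([[-1.0 / (R * C_f)]])
B = np.array([[1.0 / (R * C_f)]])
C_mat = np.array([[1.0]])           # output matrix (C), distinct from C_f
D = np.array([[0.0]])

dt, t_end = 1e-6, 5e-3              # integrate for 5 time constants
x = np.array([[0.0]])               # zero initial conditions
u = 1.0                             # unit step input
for _ in range(int(round(t_end / dt))):
    x = x + dt * (A @ x + B * u)    # forward Euler on x' = Ax + Bu
y = (C_mat @ x + D * u)[0, 0]       # y = Cx + Du
print(round(y, 3))                  # ~0.993, i.e. 1 - e^-5
```

After five time constants the output is within about 1% of its final value, which is the familiar RC step response.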

### MATLAB Implementation

[Screenshot: locating the State-Space block in the Simulink library browser]

This would not be a step forward without tools such as MATLAB. These are the relevant MATLAB Control System Toolbox commands:

• step(A,B,C,D) assumes the initial conditions are zero
• initial(A,B,C,D,X(0)) is just like step, but takes the initial conditions X(0) into account

In addition, there is a Simulink block called "State Space" that can be used the same way.

Sinusoidal Sources

The circuits that we have analyzed previously have been DC circuits, where a constant voltage or current is applied. In the following chapters, we will discuss alternating current (AC), which uses sinusoidal forcing functions to stimulate a circuit.

# Sinusoidal Sources

"Steady State" means that we are not dealing with turning circuits on or off in this section. We assume that the circuit was turned on a very long time ago and has settled into a pattern; we are computing what that pattern looks like. The "complex frequency" section models turning a circuit on and off with an exponential.

## Sinusoidal Forcing Functions

Let us consider a general AC forcing function:

$v(t) = M\sin(\omega t + \phi)$

In this equation, the term M is called the "Magnitude": it acts like a scaling factor that allows the peaks of the sinusoid to be higher or lower than ±1. The term ω is known as the "Radian Frequency". The term φ is an offset parameter known as the "Phase".

Sinusoidal sources can be current sources, but most often they are voltage sources.

## Other Terms

There are a few other terms that are going to be used in many of the following sections, so we will introduce them here:

Period
The period of a sinusoidal function is the amount of time, in seconds, that the sinusoid takes to make a complete wave. The period of a sinusoid is always denoted with a capital T. This is not to be confused with a lower-case t, which is used as the independent variable for time.
Frequency
Frequency is the reciprocal of the period, and is the number of times, per second, that the sinusoid completes an entire cycle. Frequency is measured in Hertz (Hz). The relationship between frequency and the Period is as follows:
$f = \frac{1}{T}$
Where f is the variable most commonly used to express the frequency.
Radian frequency is the value of the frequency expressed in terms of Radians Per Second, instead of Hertz. Radian Frequency is denoted with the variable $\omega$. The relationship between the Frequency, and the Radian Frequency is as follows:
$\omega = 2 \pi f$
Phase
The phase is a quantity, expressed in radians, of the time shift of a sinusoid. A sinusoid phase-shifted $\phi = +2 \pi$ is moved forward by 1 whole period, and looks exactly the same. An important fact to remember is this:
$\sin (\frac{\pi}{2}-t) = \cos (t)$ or $\sin (t) = \cos (t - \frac{\pi}{2})$

Phase is often expressed with many different variables, including $\phi, \psi, \theta, \gamma$ etc... This wikibook will try to stick with the symbol $\phi$, to prevent confusion.
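The relationships above are easy to check numerically; here is a quick sanity check with values chosen for 60 Hz mains:

```python
import math

# Numeric check of the relationships above, for 60 Hz mains:
f = 60.0                        # frequency, hertz
T = 1 / f                       # period: f = 1/T
omega = 2 * math.pi * f         # radian frequency: omega = 2*pi*f
# A phase shift of +2*pi corresponds to a time shift of one full period:
t_shift = (2 * math.pi) / omega
print(round(T * 1000, 3), round(t_shift * 1000, 3))   # 16.667 16.667 (ms)
```

A full 2π phase shift is indeed one whole period, matching the statement above.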

A circuit element may have both a voltage across its terminals and a current flowing through it. If one of the two (current or voltage) is a sinusoid, then the other must also be a sinusoid: in capacitors and inductors, voltage and current are related through derivatives and integrals, and the derivative (or integral) of a sinusoid is always a sinusoid. However, the voltage and current sinusoids may differ in magnitude and phase.

If the current has a lower phase angle than the voltage, the current is said to lag the voltage. If the current has a higher phase angle than the voltage, it is said to lead the voltage. Many circuits can be classified and examined using these lag and lead ideas.

## Sinusoidal Response

Reactive components (capacitors and inductors) are going to take energy out of a circuit like a resistor and then pump some of it back into the circuit like a source. The result is initially a mess. But after a while (5 time constants), the circuit starts behaving in a pattern. The capacitors and inductors get in a rhythm that reflects the driving sources. If the source is sinusoidal, the currents and voltages will be sinusoidal. This is called the "particular" or "steady state" response. In general:

$A_{in} \cos(\omega_{in} t + \phi_{in}) \to A_{out} \cos(\omega_{out} t + \phi_{out})$

What happens initially, what happens if a capacitor starts out charged, and what happens when sources are switched in and out of a circuit all involve an energy imbalance. A voltage or current source might even absorb the initial energy stored in a capacitor. The derivative of the voltage across an inductor might instantaneously switch polarity. Lots of things are happening. We are going to save this for later; here we deal with the steady state or "particular" response first.

## Sinusoidal Conventions

For the purposes of this book we will generally use cosine functions, as opposed to sine functions. If we absolutely need to use a sine, we can remember the following trigonometric identity:

$\cos(\omega t) = \sin(\pi/2 -\omega t)$

We can express all sine functions as cosine functions, so we never have to compare apples to oranges. This is simply a convention that this wikibook chooses in order to keep things simple. We could just as easily have chosen sin( ), but further down the road it is more convenient to default to cosine functions.

## Sinusoidal Sources

There are two primary sinusoidal sources: wall outlets and oscillators. Oscillators are typically crystals that vibrate electrically and are found in devices that communicate or display video, such as TVs, computers, cell phones and radios. An electrical engineer's or technician's working area will typically include a "function generator", which can produce oscillations at many frequencies and in shapes that are not just sinusoidal.

RMS, or root mean square, is a measure of amplitude that can be compared with a DC magnitude in terms of power delivered, strength of a motor, brightness of a light, etc. The complication is that there are several ways to quote an AC amplitude:

• peak
• peak to peak
• average
• RMS

Wall outlets are called AC or alternating current. Wall outlets are sinusoidal voltage sources that range from 100 volts RMS to 240 volts RMS, at 50 Hz or 60 Hz, around the world. RMS, rather than peak (which makes more sense mathematically), is used to describe magnitude for several reasons:

• historical reasons related to the competition between Edison (DC power) and Tesla (Sinusoidal or AC power)
• effort to compare/relate AC (wall outlets) to DC (cars, batteries): 100 volts RMS delivers the same average power as 100 volts DC
• average sinusoidal is zero
• meter movements (physical needles moving on measurement devices) were designed to measure both DC and RMS AC

RMS is a type of average: $p_{\mathrm{rms}} = \sqrt {{1 \over {T_2-T_1}} {\int_{T_1}^{T_2} {[p(t)]}^2\, dt}}$
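The integral above can be checked numerically. The sketch below assumes a roughly 120 V RMS (about 170 V peak) outlet waveform; for any sinusoid, the RMS value works out to the peak divided by the square root of two:

```python
import numpy as np

# Numeric check of the RMS definition above for v(t) = M*cos(w*t):
# the RMS of a sinusoid works out to M/sqrt(2), independent of frequency.
M = 170.0                              # ~170 V peak is a 120 V RMS outlet
w = 2 * np.pi * 60                     # 60 Hz in radians per second
t = np.linspace(0, 1 / 60, 1000, endpoint=False)  # one full period, sampled
v = M * np.cos(w * t)
rms = np.sqrt(np.mean(v**2))           # discrete version of the integral
print(round(rms, 2))                   # 120.21, which is 170/sqrt(2)
```

The mean of cos² over a full period is exactly one half, which is where the familiar 0.707 factor comes from.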

Electrical power delivery is a complicated subject that will not be covered in this course. Here we are just defining terms, designing devices that use the power, and understanding clearly what comes out of wall outlets.

# Phasors

## Variables

Variables are defined the same way as before, but with a difference. Previously, variables were either "known" or "unknown." Now there is something in between.

At this point the concept of a constant function (a number) and a variable function (one that varies with time) needs to be reviewed; see this student professor dialogue. Knowns are described as functions; unknowns are computed from the knowns and are also functions.

For example:

$v(t) = M_v \cos (\omega t + \phi_v)$ voltage varying with time

Here $v(t)$ is the symbol for a function, assigned a function of the symbols $M_v, \omega, \phi_v$ and $t$.

Time is never solved for; it remains an unknown, and all power, voltage and current expressions are functions of it. Precisely because time appears everywhere, it can be temporarily eliminated from the equations: integrals and derivatives turn into algebra, and the intermediate answers can be purely numeric. At the last moment, time is put back into the voltage, current and power expressions, and the final solution is a function of time.

Most of the math in this course has these steps:

1. describe knowns and unknowns in the time domain, describe all equations
2. change knowns into phasors, eliminate derivatives and integrals in the equations
3. solve numerically or symbolically for unknowns in the phasor domain
4. transform unknowns back into the time domain
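The four steps can be sketched for a hypothetical series RL circuit (all component values invented for illustration):

```python
import cmath, math

# A sketch of the four steps for a hypothetical series RL circuit driven
# by v(t) = 10*cos(1000*t) volts, with R = 100 ohms and L = 0.1 H.
R, L, w = 100.0, 0.1, 1000.0

# Steps 1-2: the known source becomes the phasor 10 at angle 0; the
# derivative in v = R*i + L*di/dt becomes j*w, giving V = (R + j*w*L)*I.
V = cmath.rect(10.0, 0.0)
Z = R + 1j * w * L                 # 100 + j100

# Step 3: solve algebraically in the phasor domain.
I = V / Z

# Step 4: transform the unknown back to the time domain:
# i(t) = |I| * cos(w*t + phase(I))
M_i, phi_i = abs(I), cmath.phase(I)
print(round(M_i, 4), round(math.degrees(phi_i), 1))   # 0.0707 -45.0
```

So $i(t) \approx 0.0707 \cos(1000t - 45^\circ)$ amperes: no calculus was needed once the derivative became $j\omega$.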

## Passive circuit output is similar to input

If the input to a linear circuit is a sinusoid, then the output from the circuit will be a sinusoid. Specifically, if we have a voltage sinusoid as such:

$v(t) = M_v \cos (\omega t + \phi_v)$

Then the current through the linear circuit will also be a sinusoid, although its magnitude and phase may be different quantities:

$i(t) = M_i \cos (\omega t + \phi_i)$

Note that both the voltage and the current are sinusoids with the same radian frequency, but with different magnitudes and different phase angles. Passive circuit elements cannot change the frequency of a sinusoid, only its magnitude and phase. Why then do we need to write $\omega$ in every equation, when it doesn't change? For that matter, why write out the cos( ) function, if that never changes either? The answer is that we don't need to write these things every time. Instead, engineers have produced a short-hand way of writing these functions, called "phasors".

## Phasor Transform

Phasors are a type of "transform." We are transforming the circuit math so that time disappears. Imagine going to a place where time doesn't exist.

We know that every signal can be written as a sum of sine waves of various frequencies and magnitudes (look up Fourier series animations); the entire world of signals can be constructed from sine waves. Here, one sine wave is looked at, and its repeating nature ($\omega$) is stripped away. What's left is a phasor. Since the sinusoid traces out a circle over and over, and we consider just one of these circles, we can move to a world where time doesn't exist and circles are "things". Instead of the word "world", use the word "domain" or "plane", as in two dimensions.

Math in the Phasor domain is almost the same as DC circuit analysis. What is different is that inductors and capacitors have an impact that needs to be accounted for.

The transform into the phasor plane or domain, and the transform back into time, are based upon Euler's equation. It is the reason you studied imaginary numbers in past math classes.

## Euler's Equation

Euler's Formula

Euler started with these three series. Clearly there is a relationship:

$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots$
$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$
$e^{x} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \cdots$

He did the following:

$e^{ix} = 1 + i x + \frac{i^2 x^2}{2!} + \frac{i^ 3 x^3}{3!} + \frac{i^ 4 x^4}{4!} + \frac{i^ 5 x^5}{5!} + \cdots$
$e^{ix} = 1 + i x - \frac{x^2}{2!} - i\frac{x^3}{3!} + \frac{x^4}{4!} + i\frac{x^5}{5!} - \frac{x^6}{6!} - i\frac{x^7}{7!} \cdots$
$e^{ix} = (1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots) + i (x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots)$
$e^{ix} = \cos(x) + i \sin(x)$

Set x = π and:

$e^{i\pi} = -1$
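Both the general formula and this special case are easy to check numerically:

```python
import cmath, math

# Numerically checking Euler's formula e^{jx} = cos(x) + j*sin(x),
# and the special case x = pi, which gives e^{j*pi} = -1.
x = 0.7                                   # an arbitrary test angle
lhs = cmath.exp(1j * x)
rhs = complex(math.cos(x), math.sin(x))
print(abs(lhs - rhs) < 1e-12)             # True
# e^{j*pi} lands at -1, up to a ~1e-16 imaginary rounding error:
print(abs(cmath.exp(1j * math.pi) + 1) < 1e-12)   # True
```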

Euler's formula is ubiquitous in mathematics, physics, and engineering. The physicist Richard Feynman called the equation "our jewel" and "one of the most remarkable, almost astounding, formulas in all of mathematics."

A more general version of Euler's equation is:

$M e^{j(\omega t + \phi)} = M \cos (\omega t + \phi) + j M \sin (\omega t + \phi)$

This equation allows us to view sinusoids as complex exponential functions. A cyclic function representing a voltage, current or power, given in terms of radian frequency and phase angle, becomes an arrow of length $M$ (magnitude) at angle $\phi$ (phase) in the phasor domain/plane, or a point having both a real ($X$) and imaginary ($Y$) coordinate in the complex plane.

Generically, the phasor $\mathbb{C}$, (which could be voltage, current or power) can be written:

$\mathbb{C} = X + jY$ (rectangular coordinates)
$\mathbb{C} = M \angle \phi$ (polar coordinates)

We can graph the point (X, Y) on the complex plane and draw an arrow to it showing the relationship between $X,Y,\mathbb{C}$ and $\phi$.

Using this fact, we can get the angle from the origin of the complex plane to our point (X, Y) with the function:

$\theta_C = \arctan\left(\frac{Y}{X}\right)$

And using the Pythagorean theorem, we can find the magnitude of $\mathbb{C}$ -- the distance from the origin to the point (X, Y) -- as:

$M_C = |\mathbb{C}| = \sqrt{X^2 + Y^2}$
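In code, these conversions are one-liners with Python's `cmath` module. Note that `cmath.phase` uses the two-argument arctangent, so it also handles points with a negative real part, where the plain arctan formula above is off by π:

```python
import cmath

# Converting the phasor C = 3 + 4j between rectangular and polar form.
C = complex(3, 4)
M = abs(C)                  # magnitude: sqrt(3**2 + 4**2) = 5.0
phi = cmath.phase(C)        # angle in radians: arctan(4/3) ~ 0.9273
C2 = cmath.rect(M, phi)     # back to rectangular: ~3 + 4j
print(M, round(phi, 4))     # 5.0 0.9273
```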

## Phasor Symbols

Phasors don't account for the frequency information, so make sure you write down the frequency some place safe.

Suppose in the time domain:

$v(t) = M_v \cos(\omega t + \phi)$

In the phasor domain, this voltage is expressed like this:

$\mathbb{V} = M_v \angle \phi$

The radian frequency $\omega$ disappears from the known functions (though not from the derivative and integral operations) and reappears in the time-domain expressions for the unknowns.

## Phasors and Vectors

Phasors ("phase vectors") are often drawn as arrows and they add like two-dimensional vectors, but they are complex numbers, not ordinary vectors. Because complex numbers form a field, phasors can be divided; vectors can not. Voltage can be divided by current (in the phasor domain), but East can not be divided by North.

For more details see http://en.wikipedia.org/wiki/Phasor_(electronics)

In this wikibook phasors are always written with a large bold letter (as above). Vectors have two or more real axes that are independent, not related by Euler's equation. Phasors and vectors share some math in two dimensions, but the math then diverges. Vectors move into the three or more dimensions of linear algebra that help build complicated structures in the real world, such as space frames. Phasors move into more complicated transforms related to differential equation math and electronics.

The math of phasors is exactly the same as ordinary math, except with complex numbers. Vectors demand new mathematical operations, such as the dot product and the cross product:

• The dot product of two vectors finds the shadow of one vector on another.
• The cross product combines two vectors into a third vector perpendicular to both.

## Cosine Convention

In this book, all phasors correspond to a cosine function, not a sine function.

It is important to remember which trigonometric function your phasors are mapping to. Since a phasor only includes information on magnitude and phase angle, it is impossible to know whether a given phasor maps to a sin( ) function, or a cos( ) function instead. By convention, this wikibook and most electronic texts/documentation map to the cosine function.

If you end up with an answer that is sin, convert to cos by subtracting 90 degrees:

$\sin(\omega t + \phi) = \cos(\omega t + \phi - \frac{\pi}{2})$

If your simulator requires the source to be in sin form, but the starting point is cos, then convert to sin by adding 90 degrees:

$\cos(\omega t + \phi) = \sin(\omega t + \phi + \frac{\pi}{2})$
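A quick numeric spot-check of both identities at a few arbitrary sample times:

```python
import math

# Spot-checking the sin/cos conversion identities above at a few
# arbitrary sample times (w and phi are invented test values).
w, phi = 2 * math.pi * 50, 0.3
ok = all(
    math.isclose(math.sin(w * t + phi),
                 math.cos(w * t + phi - math.pi / 2), abs_tol=1e-12) and
    math.isclose(math.cos(w * t + phi),
                 math.sin(w * t + phi + math.pi / 2), abs_tol=1e-12)
    for t in (0.0, 0.001, 0.0137)
)
print(ok)   # True
```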

## Phasor Concepts

Inside the phasor domain, concepts appear and are named. Inductors and capacitors can be coupled with their derivative operator transforms and appear as imaginary resistors called "reactance." The combination of resistance and reactance is called "impedance." Impedance can be treated algebraically as a phasor although technically it is not. Power concepts such as real, reactive, apparent and power factor appear in the phasor domain. Numeric math can be done in the phasor domain. Symbols can be manipulated in the phasor domain.

## Phasor Math

Phasor math turns into complex number math, which is reviewed in the Appendix and summarized below.

Phasor A can be multiplied by phasor B:

$\mathbb{A} \times \mathbb{B} = (M_a \times M_b) \angle (\phi_a + \phi_b)$

The phase angles add because in the time domain they are exponents of two things multiplied together.

$\mathbb{A} / \mathbb{B} = (M_a / M_b) \angle (\phi_a - \phi_b)$

Again the phase angles are treated like exponents ... so they subtract.

The magnitude and angle form of phasors can not be used for addition and subtraction. For this, we need to convert the phasors into rectangular notation:

$\mathbb{C} = X + jY$

Here is how to convert from polar form (magnitude and angle) to rectangular form (real and imaginary)

$X = M \cos (\phi)$, $Y = M \sin (\phi)$

Once in rectangular form:

• Real parts add or subtract
• Imaginary parts add or subtract

$\mathbb{C} = \mathbb{A} + \mathbb{B} = (X_A + X_B) + j(Y_A + Y_B) = X_C + jY_C$

Here is how to convert from rectangular form to polar form:

$\mathbb{C} = M_c \angle \phi_c = \sqrt{X^2 + Y^2} \angle \arctan(\frac{Y}{X})$

Once in polar phasor form, conversion back into the time domain is easy:

$\operatorname{Re}(M e^{j(\omega t + \phi)}) = M \cos (\omega t + \phi)$
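All of these rules fall out of ordinary complex arithmetic. A small sketch with invented phasor values:

```python
import cmath, math

# Phasor arithmetic as complex arithmetic: multiply/divide in polar
# form, add in rectangular form, then return to the time domain.
# The phasor values here are invented for illustration.
A = cmath.rect(2.0, math.radians(30))   # 2 at 30 degrees
B = cmath.rect(4.0, math.radians(45))   # 4 at 45 degrees

prod = A * B        # magnitudes multiply, angles add: 8 at 75 degrees
quot = A / B        # magnitudes divide, angles subtract: 0.5 at -15
total = A + B       # rectangular parts add

print(round(abs(prod), 3), round(math.degrees(cmath.phase(prod)), 1))  # 8.0 75.0
print(round(abs(quot), 3), round(math.degrees(cmath.phase(quot)), 1))  # 0.5 -15.0

# Back to the time domain at w = 377 rad/s, t = 0: Re(C * e^{jwt})
w, t = 377.0, 0.0
print(round((total * cmath.exp(1j * w * t)).real, 3))
```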

## Function transformation Derivation

$g(t)$ represents either voltage, current or power.

$g(t)=G_m \cos(\omega t + \phi)$ .... starting point
$g(t)=G_m \operatorname{Re}(e^{j(\omega t + \phi)})$ .... from Euler's equation
$g(t)=G_m \operatorname{Re}(e^{j\phi}e^{j\omega t})$ .... law of exponents
$g(t)=\operatorname{Re}(G_m e^{j\phi}e^{j\omega t})$ .... $G_m$ is a real number, so it can be moved inside
$g(t)=\operatorname{Re}(\mathbb{G} e^{j\omega t})$ .... $\mathbb{G}$ is the definition of a phasor, substituting for $G_m e^{j\phi}$
$g(t) \Leftrightarrow \mathbb{G}$ where $\mathbb{G} = G_m e^{j\phi}$

What happens to the $e^{j\omega t}$ term? It hangs around until it is time to transform back into the time domain. Because it is an exponential, and all the phasor math is the algebra of exponents, the final phasor can simply be multiplied by it; the real part of the resulting expression is the time domain solution.

| time domain | | phasor domain |
|---|---|---|
| $A \cos(\omega t)$ | $\Leftrightarrow$ | $A$ |
| $A \sin(\omega t)$ | $\Leftrightarrow$ | $-Aj$ |
| $A \cos(\omega t) + B \sin(\omega t)$ | $\Leftrightarrow$ | $A - Bj$ |
| $A \cos(\omega t) - B \sin(\omega t)$ | $\Leftrightarrow$ | $A + Bj$ |
| $A \cos(\omega t + \phi)$ | $\Leftrightarrow$ | $A \cos(\phi) + A \sin(\phi) j$ |
| $A \sin(\omega t + \phi)$ | $\Leftrightarrow$ | $A \sin(\phi) - A \cos(\phi) j$ |
| $A \cos(\omega t - \phi)$ | $\Leftrightarrow$ | $A \cos(\phi) - A \sin(\phi) j$ |
| $A \sin(\omega t - \phi)$ | $\Leftrightarrow$ | $-A \sin(\phi) - A \cos(\phi) j$ |
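Any row of the table can be verified numerically; here is the third row, $A\cos(\omega t) + B\sin(\omega t) \Leftrightarrow A - Bj$:

```python
import numpy as np

# Numeric check of one table row: A*cos(wt) + B*sin(wt) maps to the
# phasor A - B*j. Re((A - B*j) * e^{jwt}) should rebuild the signal.
A, B, w = 3.0, 4.0, 100.0           # invented test values
t = np.linspace(0, 0.2, 2001)
signal = A * np.cos(w * t) + B * np.sin(w * t)
phasor = A - B * 1j
rebuilt = np.real(phasor * np.exp(1j * w * t))
print(np.allclose(signal, rebuilt))    # True
```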

In all the cases above, remember that $\phi$ is a constant, a known value in most cases. Thus the phasor is a complex number in most calculations.

There is another transform, associated with derivatives, that is discussed in "phasor calculus."

## Transforming calculus operators into phasors

When sinusoids are represented as phasors, differential equations become algebra. This result follows from the fact that the complex exponential is the eigenfunction of the derivative operation:

$\frac{d}{dt}(e^{j \omega t}) = j \omega e^{j \omega t}$

That is, only the complex amplitude is changed by the derivative operation. Taking the real part of both sides of the above equation gives the familiar result:

$\frac{d}{dt} \cos{\omega t} = - \omega \sin{\omega t}\,$

Thus, a time derivative of a sinusoid becomes, when transformed into the phasor domain, algebra:

${d \over dt}i(t)\rightarrow j\omega\mathbb{I}$ where $j$ is the square root of -1, the imaginary unit

In a similar way the time integral, when transformed into the phasor domain is:

$\int v(t)\, dt \rightarrow \frac{\mathbb{V}}{j\omega}$

There is an integration constant that will have to be dealt with when translating back into the time domain. It doesn't disappear.

The above is true of voltage, current, and power.
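The derivative rule can be checked numerically with invented values for the current sinusoid:

```python
import numpy as np

# Numeric check that d/dt in the time domain matches multiplying the
# phasor by j*w. Values for Im, phi and w are invented for the test.
Im, phi, w = 2.0, 0.5, 120.0
t = np.linspace(0, 0.1, 20001)
i_t = Im * np.cos(w * t + phi)
didt = np.gradient(i_t, t)                        # numerical d/dt of i(t)
I = Im * np.exp(1j * phi)                         # phasor of i(t)
didt_phasor = np.real(1j * w * I * np.exp(1j * w * t))
# interior points agree closely (np.gradient's endpoints are less accurate)
print(np.allclose(didt[1:-1], didt_phasor[1:-1], atol=1e-3))   # True
```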

The question is: why does this work? Where is the proof? Let's do this three times: once for a resistor, then an inductor, then a capacitor. The voltage across and current through the terminals are $V_m \cos(\omega t + \phi_V)$ and $I_m \cos(\omega t + \phi_I)$.

### Resistor Terminal Equation

$V=R I$ .. terminal relationship
$V_m \cos(\omega t + \phi_V) = R I_m \cos(\omega t + \phi_I)$ .. substituting the example functions
$V_m e^{j(\omega t + \phi_V)} = R I_m e^{j(\omega t + \phi_I)}$ .. Euler's version of the terminal relationship
$V_m e^{j\omega t} e^{j \phi_V} = R I_m e^{j\omega t} e^{j \phi_I}$ .. law of exponents
$V_m \cancel{e^{j\omega t}} e^{j \phi_V} = R I_m \cancel{e^{j\omega t}} e^{j \phi_I}$ .. divide both sides by the common factor
$V_m e^{j \phi_V} = R I_m e^{j \phi_I}$ .. phasor-domain result
$\mathbb{V} = R \mathbb{I}$ .. phasor expression

Just put the voltage and current in phasor form and substitute to migrate equation into the phasor domain.

### Inductor Terminal Equation

$V = L\frac{d}{dt}I$ ... terminal relationship
$V_m \cos(\omega t + \phi_V) = L \frac{d}{dt} (I_m \cos(\omega t + \phi_I))$ .. substitution of a generic sinusoid
$V_m \cos(\omega t + \phi_V) = -\omega L I_m \sin(\omega t + \phi_I)$ .. taking the derivative
$-\sin(\omega t + \phi_I) = \cos(\omega t + \phi_I + \frac{\pi}{2})$ .. trig identity
$V_m \cos(\omega t + \phi_V) = \omega L I_m \cos(\omega t + \phi_I + \frac{\pi}{2})$ .. substitution
$V_m \operatorname{Re}(e^{j(\omega t + \phi_V)}) = \omega L I_m \operatorname{Re}(e^{j(\omega t + \phi_I + \frac{\pi}{2})})$ .. from Euler's equation
$V_m \operatorname{Re}(e^{j\omega t} e^{j\phi_V}) = \omega L I_m \operatorname{Re}(e^{j\omega t}e^{j\phi_I}e^{j\frac{\pi}{2}})$ .. law of exponents
$e^{j\frac{\pi}{2}} = \cos(\frac{\pi}{2}) + j\sin(\frac{\pi}{2}) = j$ .. substitute into the above
$\mathbb{V} = V_m e^{j\phi_V}$ and $\mathbb{I} = I_m e^{j\phi_I}$ .. definition of the phasors, substitute into the above
$\operatorname{Re}(\mathbb{V}e^{j\omega t}) = \operatorname{Re}(j \omega L \mathbb{I}e^{j\omega t})$ .. everything real has been collected into the phasors
cancel the $e^{j\omega t}$ terms on both sides:
$\mathbb{V} = j \omega L \mathbb{I}$ .. equation transformed into the phasor domain
$\mathbb{V} = j \omega L \mathbb{I}$ .... equation transformed into phasor domain

Conclusion, put the voltage and current in phasor form, replace $\frac{d}{dt}$ with $j\omega$ to translate the equation to the phasor domain.
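The phasor relation $\mathbb{V} = j\omega L \mathbb{I}$ can be checked with invented values; the factor $j$ shows up as a 90 degree phase lead:

```python
import cmath, math

# Checking V = jwL*I for an inductor: the voltage magnitude is w*L*Im,
# and the voltage leads the current by 90 degrees. Values are invented.
L, w = 0.05, 1000.0
I = cmath.rect(2.0, math.radians(-30))   # current phasor: 2 A at -30 deg
V = 1j * w * L * I                       # inductor terminal relation
print(round(abs(V), 3), round(math.degrees(cmath.phase(V)), 1))  # 100.0 60.0
```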

### Capacitor Terminal Equation

A capacitor has basically the same form: V and I switch sides, and C is substituted for L.

$I = C\frac{d}{dt}V$ ... terminal relationship
$I_m \cos(\omega t + \phi_I) = C \frac{d}{dt} (V_m \cos(\omega t + \phi_V))$ .. substitution of a generic sinusoid
$I_m \cos(\omega t + \phi_I) = -\omega C V_m \sin(\omega t + \phi_V)$ .. taking the derivative
$-\sin(\omega t + \phi_V) = \cos(\omega t + \phi_V + \frac{\pi}{2})$ .. trig identity
$I_m \cos(\omega t + \phi_I) = \omega C V_m \cos(\omega t + \phi_V + \frac{\pi}{2})$ .. substitution
$I_m \operatorname{Re}(e^{j(\omega t + \phi_I)}) = \omega C V_m \operatorname{Re}(e^{j(\omega t + \phi_V + \frac{\pi}{2})})$ .. from Euler's equation
$I_m \operatorname{Re}(e^{j\omega t} e^{j\phi_I}) = \omega C V_m \operatorname{Re}(e^{j\omega t}e^{j\phi_V}e^{j\frac{\pi}{2}})$ .. law of exponents
$e^{j\frac{\pi}{2}} = \cos(\frac{\pi}{2}) + j\sin(\frac{\pi}{2}) = j$ .. substitute into the above
$\mathbb{V} = V_m e^{j\phi_V}$ and $\mathbb{I} = I_m e^{j\phi_I}$ .. definition of the phasors, substitute into the above
$\operatorname{Re}(\mathbb{I}e^{j\omega t}) = \operatorname{Re}(j \omega C \mathbb{V}e^{j\omega t})$ .. everything real has been collected into the phasors
cancel the $e^{j\omega t}$ terms on both sides:
$\mathbb{I} = j \omega C \mathbb{V}$ .. equation transformed into the phasor domain
$\mathbb{I} = j \omega C \mathbb{V}$ .... equation transformed into phasor domain

Conclusion, put the voltage and current in phasor form, replace $\frac{d}{dt}$ with $j\omega$ to translate the equation to the phasor domain.

In summary, all the terminal relations have $e^{j \omega t}$ terms that cancel:

$V_m e^{j\phi_V}\cancel{e^{j\omega t}} = I_m e^{j\phi_I}\cancel{e^{j\omega t}} * R$
$\mathbb{V} = \mathbb{I}R$
$V_m e^{j\phi_V}\cancel{e^{j\omega t}} = I_m e^{j\phi_I}\cancel{e^{j\omega t}} * j\omega L$
$\mathbb{V} = \mathbb{I}j\omega L$
$I_m e^{j\phi_I}\cancel{e^{j\omega t}} = V_m e^{j\phi_V}\cancel{e^{j\omega t}} * j\omega C$
$\mathbb{I} = \mathbb{V}j\omega C$

| Device | $\frac{\mathbb{V}}{\mathbb{I}}$ | $\frac{\mathbb{I}}{\mathbb{V}}$ |
|---|---|---|
| Resistor | $R$ | $\frac{1}{R}$ |
| Capacitor | $\frac{1}{j\omega C}$ | $j\omega C$ |
| Inductor | $j\omega L$ | $\frac{1}{j\omega L}$ |
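The table can be wrapped in a small helper function (the function name and interface below are invented for illustration, not a standard API):

```python
# A small helper implementing the table above: the V/I ratio (impedance)
# of each device at radian frequency w. Names here are illustrative.
def impedance(device, value, w):
    if device == "R":
        return complex(value)          # resistor: R
    if device == "L":
        return 1j * w * value          # inductor: jwL
    if device == "C":
        return 1 / (1j * w * value)    # capacitor: 1/(jwC)
    raise ValueError(device)

# Series impedances add. For a series RLC at w = 1000 rad/s
# (R = 50 ohms, L = 0.1 H, C = 10 uF, all invented values):
w = 1000.0
Z = impedance("R", 50.0, w) + impedance("L", 0.1, w) + impedance("C", 1e-5, w)
print(abs(Z - 50) < 1e-9)   # True: jwL and 1/(jwC) cancel (resonance)
```

At this particular frequency the inductive and capacitive reactances cancel, leaving a purely resistive 50 ohms.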

The $j\omega$ terms that don't cancel come from the derivative terms in the terminal relations. These derivatives are associated with the capacitors and inductors themselves, not with the sources. Although the derivative is applied to a source, the device the derivative originates from (a capacitor or inductor) keeps its $j\omega$ signature after the transform. So if we collect the driving sources into an $\frac{output}{input}$ ratio on one side of the equals sign, the other side can be considered separately as a function. These functions have a name: transfer functions. When we analyze the voltage/current ratios in terms of R, L and C, we can sweep $\omega$ through a variety of driving frequencies, or keep the frequency constant and sweep through a variety of inductor values ... we can analyze the circuit response!

Note: Transfer functions are an entire section of this course. They also come up in mechanical engineering control system classes, and there are similarities: driving over a bump is like a surge or spike; driving over a curb is like turning on a circuit. When mechanical engineers study vibrations, they also deal with sinusoidal driving functions, but applied to three dimensional objects rather than the one dimensional circuits of this course.

## Phasor Domain to Time Domain

Getting back into the time domain is just about as simple. After working through the equations in the phasor domain and finding $\mathbb{V}$ and $\mathbb{I}$, the goal is to convert them to $V$ and $I$.

The phasor solutions will have the form $\mathbb{G} = A + Bj = G_m e^{j\phi}$ you should be able now to convert between the two forms of the solution. Then:

$G = \operatorname{Re}(\mathbb{G} e^{j\omega t})= \operatorname{Re}(G_m e^{j\phi}e^{j\omega t}) = \operatorname{Re}(G_m e^{j(\omega t + \phi)}) = G_m cos(\omega t + \phi)$

If there was an integral involved in the phasor math, then an integration constant needs to be added onto the time domain solution; it is calculated from the initial conditions. If the solution doesn't involve a differential equation, the constant can be computed immediately. Otherwise the phasor solution is treated as the particular solution, and the constant is computed after the homogeneous solution's magnitude is found. See the phasor examples for more detail.

## What is not covered

There is another way of thinking about circuits where inductors and capacitors are complex resistances. The idea is:

impedance = resistance + j * reactance

Or symbolically

$Z = R + j*X$

Here the derivative is attached to the inductance and capacitance, rather than to the terminal equation as we have done. This spreads the math of solving circuit problems into smaller pieces that are more easily checked, but it makes symbolic solutions more complex and can cause numeric errors to accumulate in intermediate calculations.

The phasor concept is found everywhere. Some day you may need to study this further, if you get involved in microwave projects that involve "stubs" or antenna projects that involve a "loading coil" ... the list is huge.

The goal here is to avoid the concepts of conductance, reactance, impedance, susceptance, and admittance ... and avoid the confusion of relating these concepts while trying to compare phasor math with calculus and Laplace transforms.

## Phasor Notation

Remember, a phasor represents a single value that can be displayed in multiple ways.
$\mathbb{C} = M \angle \phi$ "Polar Notation"
$C = M e^{j(\omega t + \phi)}$ "Exponential Notation"
$\mathbb{C} = A + jB$ "Rectangular Notation"
$C = M \cos (\omega t + \phi) + j M \sin (\omega t + \phi)$ "time domain notation"

These 4 notations are all just different ways of writing the same exact thing.

## Phasor symbols

When writing on a board or on paper, use hats $\hat{V}$ to denote phasors. Expect variations in books and online:

• $\mathbb{V}$ (the large bold block-letters we use in this wikibook)
• $\bar{V}$ ("bar" notation, used by Wikipedia)
• $\vec{V}$ (bad ... save for vectors ... vector arrow notation)
• $\tilde{V}$ (some text books)
• $\hat{V}$ (some text books)

# Differential Equations

## Phasors Generate the Particular Solution

Phasors can replace calculus, they can replace Laplace transforms, and they can replace trig. But there is one thing they can not do: supply initial conditions/integration constants. When doing problems with both phasors and Laplace, or phasors and calculus, the difference in the answers is going to be an integration constant.

Differential equations are solved in this course in three steps:

• finding the particular solution ... particular to the driving function ... particular to the voltage or current source
• finding the homogeneous solution ... the solution that is the same no matter what the driving function is ... the solution that explores how an initial energy imbalance in the circuit is balanced
• determining the coefficients, the constants of integration, from the initial conditions

## Phasors Don't Generate Integration Constants

Integration constants don't appear in phasor solutions, but they will appear in the Laplace and calculus alternatives to phasor solutions. If the full differential equation is going to be solved, it is absolutely necessary to see where phasors fail to create a symbol for the unknown integration constant, which is calculated in the third step.

Phasors are the technique used to find the particular AC solution. Integration constants document the initial DC bias or energy difference in the circuit. Finding these constants requires first finding the homogeneous solution which deals with the fact that capacitors may or may not be charged when a circuit is first turned on. Phasors don't completely replace the steps of Differential Equations. Phasors just replace the first step: finding the particular solution.

## Differential Equations Review

The goal is to solve Ordinary Differential Equations (ODEs) of the first and second order with phasors, calculus, and Laplace transforms. This way the phasor solution can be compared with the content of prerequisite or corequisite math courses. The goal is to do these problems with numeric and symbolic tools such as MATLAB and MuPAD/Mathematica/WolframAlpha. If you have already had the differential equations course, this is a quick review.

The most important thing to understand is the nature of a function. Trig, calculus, Laplace transforms, and phasors all operate on functions, not algebra. If you don't understand the difference between algebra and a function, maybe this student-professor dialogue will help.

We start with equations from terminal definitions, loops and junctions. Each of the symbols in these algebraic equations is a function. We are not transforming the equations. We are transforming the functions in these equations. All sorts of operators appear in these equations including + - * / and $\frac{d}{dt}$. The first table focuses on transforming these operators. The second focuses on transforming the functions themselves.

The real power of the Laplace transform is that it eliminates the integral and differential operators. Then the functions themselves can be transformed. Then unknowns can be found with just algebra. Then the functions can be transformed back into time domain functions.

Here are some of the Properties and Theorems needed to transform the typical sinusoidal voltages, powers and currents in this class.

### Laplace Operator Transforms

Properties of the unilateral Laplace transform:

| Property | Time domain | 's' domain | Comment |
|---|---|---|---|
| Time scaling | $f(at)$ | $\frac{1}{\vert a \vert} F \left ( {s \over a} \right )$ | for figuring out how $\omega$ affects the equation |
| Time shifting | $f(t - a) u(t - a)$ | $e^{-as} F(s)$ | $u(t)$ is the unit step function; used for figuring out the phase angle $\phi$ |
| Linearity | $a f(t) + b g(t)$ | $a F(s) + b G(s)$ | can be proved using basic rules of integration |
| Differentiation | $f'(t)$ | $s F(s) - f(0)$ | $f$ is assumed differentiable, with a derivative of exponential type; obtained by integration by parts |
| Integration | $\int_0^t f(\tau)\, d\tau = (u * f)(t)$ | ${1 \over s} F(s)$ | a constant pops out at the end of this too |

### Laplace Function Transform

Here are some of the transforms needed in this course:

| Function | Time domain $f(t) = \mathcal{L}^{-1} \left\{ F(s) \right\}$ | Laplace s-domain $F(s) = \mathcal{L}\left\{ f(t) \right\}$ | Region of convergence | Reference |
|---|---|---|---|---|
| exponential decay | $e^{-\alpha t} \cdot u(t)$ | ${ 1 \over s+\alpha }$ | Re(s) > −α | frequency shift of unit step |
| exponential approach | $( 1-e^{-\alpha t}) \cdot u(t)$ | $\frac{\alpha}{s(s+\alpha)}$ | Re(s) > 0 | unit step minus exponential decay |
| sine | $\sin(\omega t) \cdot u(t)$ | ${ \omega \over s^2 + \omega^2 }$ | Re(s) > 0 | |
| cosine | $\cos(\omega t) \cdot u(t)$ | ${ s \over s^2 + \omega^2 }$ | Re(s) > 0 | |
| exponentially decaying sine wave | $e^{-\alpha t} \sin(\omega t) \cdot u(t)$ | ${ \omega \over (s+\alpha )^2 + \omega^2 }$ | Re(s) > −α | |
| exponentially decaying cosine wave | $e^{-\alpha t} \cos(\omega t) \cdot u(t)$ | ${ s+\alpha \over (s+\alpha )^2 + \omega^2 }$ | Re(s) > −α | |
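Any entry in the table can be spot-checked by evaluating the defining integral numerically. A rough sketch for the exponential-decay pair, using illustrative values of $\alpha$ and a real $s$ inside the region of convergence:

```python
import math

# Check L{ e^(-a*t) * u(t) } = 1/(s + a) by brute-force numerical
# integration of the defining integral (illustrative values).
a = 2.0
s = 3.0      # real s > -a, inside the region of convergence

dt = 1e-4
T = 20.0     # the integrand is negligible beyond this point
total = 0.0
for k in range(int(T / dt)):
    t = k * dt
    total += math.exp(-s * t) * math.exp(-a * t) * dt

print(total, 1 / (s + a))  # both close to 0.2
```

This is only a sanity check, not a proof; the table entries themselves come from evaluating the integrals analytically.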

# Phasor Circuit Analysis

## Phasor Analysis

The mathematical representations of individual circuit elements can be converted into phasor notation, and then the circuit can be solved using phasors.

In phasor notation, resistance, capacitance, and inductance can all be lumped together into a single term called "impedance". The phasor used for impedance is $\mathbb{Z}$. The inverse of Impedance is called "Admittance" and is denoted with a $\mathbb{Y}$. $\mathbb{V}$ is Voltage and $\mathbb{I}$ is current.

$\mathbb{Z} = \frac{1}{\mathbb{Y}}$

And the Ohm's law for phasors becomes:

$\mathbb{V} = \mathbb{Z} \mathbb{I} = \frac{\mathbb{I}}{\mathbb{Y}}$

It is important to note at this point that Ohm's Law still holds true even when we switch from the time domain to the phasor domain. This is made all the more amazing by the fact that the new term, impedance, is no longer a property only of resistors, but now encompasses all load elements on a circuit (capacitors and inductors too!).

Impedance is still measured in units of Ohms, and admittance (like Conductance, its DC-counterpart) is still measured in units of Siemens.

Let's take a closer look at this equation:

[Ohm's Law with Phasors]

$\mathbb{V} = \mathbb{Z} \mathbb{I}$

If we break this up into polar notation, we get the following result:

$M_V \angle \phi_V = (M_Z \times M_I) \angle (\phi_Z + \phi_I)$

This is important, because it shows that not only are the magnitude values of voltage and current related to each other, but also the phase angle of their respective waves are also related. Different circuit elements will have different effects on both the magnitude and the phase angle of the voltage given a certain current. We will explore those relationships below.
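This magnitude/angle relationship is easy to check numerically. In the sketch below (phasor values are arbitrary), the product of two phasors has the product of their magnitudes and the sum of their angles:

```python
import cmath
import math

# Arbitrary illustrative phasors
Z = cmath.rect(50.0, math.radians(30))    # impedance: 50 ohms at +30 degrees
I = cmath.rect(2.0, math.radians(-10))    # current:   2 A at -10 degrees

V = Z * I  # Ohm's law in the phasor domain

# Magnitudes multiply, phase angles add
print(abs(V), math.degrees(cmath.phase(V)))  # about 100 V at about 20 degrees
```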

## Resistors

Resistors do not affect the phase of the voltage or current, only the magnitude. Therefore, the impedance of a resistor with resistance R is:

[Resistor Impedance]

$\mathbb{Z} = R \angle 0$

Through a resistor, the phase difference between current and voltage will not change. This is important to remember when analyzing circuits.

## Capacitors

A capacitor with a capacitance of C, driven at angular frequency $\omega$, has the impedance phasor:

[Capacitor Impedance]

$\mathbb{Z} = \frac{1}{\omega C} \angle \left(-\frac{\pi}{2}\right)$

To write this in terms of degrees, we can say:

$\mathbb{Z} = \frac{1}{\omega C} \angle (-90^{\circ})$

We can accept this for now as being axiomatic. If we consider the fact that phasors can be graphed on the imaginary plane, we can easily see that the angle of $-\pi/2$ points directly downward, along the negative imaginary axis. We then come to an important conclusion: The impedance of a capacitor is imaginary, in a sense. Since the angle follows directly along the imaginary axis, there is no real part to the phasor at all. Because there is no real part to the impedance, we can see that capacitors have no resistance (because resistance is a real value, as stated above).

### Reactance

A capacitor with a capacitance of C in an AC circuit with an angular velocity $\omega$ has a reactance given by

$\mathbb{X} = \frac {1}{\omega C} \angle (-90^{\circ})$

Reactance is the impedance specific to an AC circuit with angular velocity $\omega$.

## Inductors

Inductors have a phasor value:

[Inductor Impedance]

$\mathbb{Z} = \omega L \angle \left(\frac{\pi}{2}\right)$

Where L is the inductance of the inductor and $\omega$ is the angular frequency of the source. We can also write this using degrees:

$\mathbb{Z} = \omega L \angle (90^\circ)$

Like capacitors, we can see that the phasor for an inductor shows that the value of the impedance is located directly on the imaginary axis. However, the inductor phasor points in exactly the opposite direction from the capacitor phasor. We notice here also that inductors have no resistance, because resistance is a real value, and inductors have only an imaginary value.

### Reactance

In an AC circuit with a source angular velocity of $\omega$, an inductor with inductance L has a reactance given by

$\mathbb{X} = \omega L \angle (90^\circ)$
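In rectangular form the same two impedances are $\mathbb{Z}_C = 1/(j\omega C) = -j/(\omega C)$ and $\mathbb{Z}_L = j\omega L$. A quick numeric sketch (values chosen arbitrarily) confirms the magnitudes and the $\mp 90^\circ$ angles:

```python
import cmath
import math

# Arbitrary illustrative values
w = 1000.0   # rad/s
C = 10e-6    # farads
L = 0.05     # henries

Zc = 1 / (1j * w * C)   # capacitor: magnitude 1/(w*C), angle -90 degrees
Zl = 1j * w * L         # inductor:  magnitude w*L,     angle +90 degrees

print(abs(Zc), math.degrees(cmath.phase(Zc)))  # about 100 ohms at -90 degrees
print(abs(Zl), math.degrees(cmath.phase(Zl)))  # about 50 ohms at +90 degrees
```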

## Impedances Connected in Series

If there are several impedances connected in series, the equivalent impedance is simply a sum of the impedance values:

----[ Z1 ]----[ Z2 ]--- ... ---[ Zn ]---   ==> ---[ Zseries ]---


[Impedances in Series]

$\sum_{series} \mathbb{Z}_n = \mathbb{Z}_{series}$

Notice how much easier this is than having to differentiate between the formulas for combining capacitors, resistors, and inductors in series. Notice also that resistors, capacitors, and inductors can all be mixed without caring which type of element they are. This is valuable, because we can now combine different elements into a single impedance value, as opposed to different values of inductance, capacitance, and resistance.

Keep in mind however, that phasors need to be converted to rectangular coordinates before they can be added together. If you know the formulas, you can write a small computer program, or even a small application on a programmable calculator to make the conversion for you.
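A short sketch (arbitrary values and frequency) of mixing all three element types in one series sum, in rectangular coordinates:

```python
import math

# Arbitrary illustrative series branch: R, L and C at one frequency
w = 2 * math.pi * 1000   # rad/s
R, L, C = 100.0, 10e-3, 1e-6

Z_R = complex(R, 0)      # resistor:  purely real
Z_L = 1j * w * L         # inductor:  positive imaginary
Z_C = 1 / (1j * w * C)   # capacitor: negative imaginary

# One addition in rectangular coordinates, regardless of element type
Z_series = Z_R + Z_L + Z_C
print(Z_series)
```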

## Impedances in Parallel

Impedances connected in parallel can be combined in a slightly more complicated process:

[Impedances in Parallel]

$\frac{1}{\mathbb{Z}_{parallel}} = \sum_{n=1}^{N} \frac{1}{\mathbb{Z}_n}$

Where N is the total number of impedances connected in parallel with each other. For exactly two impedances this reduces to the familiar "product over sum" rule:

$\mathbb{Z}_{parallel} = \frac{\mathbb{Z}_1 \mathbb{Z}_2}{\mathbb{Z}_1 + \mathbb{Z}_2}$

Impedances may be multiplied in the polar representation, but they must be converted to rectangular coordinates for the summation. This calculation can be a little bit time consuming, but when you consider the alternative (having to deal with each type of element separately), we can see that this is much easier.
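A sketch (arbitrary branch values) of the parallel combination, done by summing reciprocals (admittances) and inverting, which holds for any number of branches:

```python
# Arbitrary illustrative impedances in rectangular form
Z1 = complex(100, 0)   # resistive branch
Z2 = 50j               # inductive branch
Z3 = -200j             # capacitive branch

# General rule: sum the admittances (reciprocals), then invert
Z_parallel = 1 / (1 / Z1 + 1 / Z2 + 1 / Z3)
print(Z_parallel)

# For exactly two branches this reduces to "product over sum"
Z_two = (Z1 * Z2) / (Z1 + Z2)
print(Z_two)
```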

## Steps For Solving a Circuit With Phasors

There are a few general steps for solving a circuit with phasors:

1. Convert all elements to phasor notation
2. Combine impedances, if possible
3. Combine Sources, if possible
4. Use Ohm's Law and Kirchhoff's laws to solve the circuit
5. Convert back into time-domain representation

Unfortunately, phasors can only be used with sinusoidal input functions. We cannot employ phasors when examining a DC circuit, nor can we employ phasors when our input function is any non-sinusoidal periodic function. To handle these cases, we will look at more general methods in later chapters.
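The five steps above can be sketched on a simple series RC divider (all values are assumed purely for illustration), with the output taken across the capacitor:

```python
import cmath
import math

# Assumed source: v_s(t) = 10*cos(377*t) volts; illustrative R and C
w = 377.0
R = 1000.0
C = 2e-6

Vs = 10 + 0j                 # step 1: source as a phasor
Zc = 1 / (1j * w * C)        # step 1: capacitor as an impedance
Vc = Vs * Zc / (R + Zc)      # steps 2-4: voltage divider (Ohm + KVL)

# Step 5: back to the time domain, v_c(t) = mag * cos(377*t + phase)
mag = abs(Vc)
phase_deg = math.degrees(cmath.phase(Vc))
print(mag, phase_deg)
```

As expected for a capacitive output, the magnitude is attenuated and the output voltage lags the source.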

## Network Function

The network function is a phasor, $\mathbb{H}$, that is the ratio of the circuit's output to its input. This is important, because if we can solve a circuit down to find the network function, we can find the response to any sinusoidal input by simply multiplying by the network function. With time-domain analysis, we would have to solve the circuit for every new input, and this would be very time consuming indeed.

Network functions are defined in the following way:

[Network Function]

$\mathbb{H} = \frac{\mathbb{Y}}{\mathbb{X}}$

Where $\mathbb{Y}$ is the phasor representation of the circuit's output, and $\mathbb{X}$ is the representation of the circuit's input. In the time domain, to find the output, we would need to convolute the input with the impulse response. With the network function, however, it becomes a simple matter of multiplying the input phasor with the network function, to get the output phasor. Using this method, we have converted an entire circuit to become a simple function that changes magnitude and phase angle.

## Gain

Gain is the amount by which the magnitude of the sinusoid is amplified or attenuated by the circuit. Gain can be computed from the Network function as such:

[Gain]

$Gain = \left| \mathbb{H}(\omega) \right| = \frac{\left| \mathbb{Y}(\omega) \right|}{\left| \mathbb{X}(\omega) \right|}$

Where the bars around the phasors are the "magnitude" of the phasor, and not the "absolute value" as they are in other math texts. Again, gain may be a measure of the magnitude change in either current or voltage. Most frequently, however, it is used to describe voltage.

## Phase Shift

The phase shift of a function is the amount of phase change between the input signal and the output signal. This can be calculated from the network function as such:

[Phase Shift]

$\angle \mathbb{H}(\omega) = \angle \mathbb{Y}(\omega) - \angle \mathbb{X}(\omega)$

Where the $\angle$ denotes the phase of the phasor.

Again, the phase change may represent current or voltage.
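As a sketch of both quantities at once, consider a first-order RC low-pass filter (a standard example; values are illustrative), whose network function $\mathbb{H}(\omega) = 1/(1 + j\omega RC)$ gives the gain and phase shift directly:

```python
import cmath
import math

R, C = 1000.0, 1e-6   # illustrative values

def H(w):
    # Network function of a first-order RC low-pass filter
    return 1 / (1 + 1j * w * R * C)

wc = 1 / (R * C)      # corner frequency, rad/s

gain = abs(H(wc))                         # |Y| / |X|
shift = math.degrees(cmath.phase(H(wc)))  # angle(Y) - angle(X)
print(gain, shift)  # gain about 0.707 (1/sqrt(2)), shift about -45 degrees
```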

# Phasor Theorems

## Circuit Theorems

Phasors would be absolutely useless if they didn't make the analysis of a circuit easier. Luckily for us, all our old circuit analysis tools work with values in the phasor domain. Here is a quick list of tools that we have already discussed, that continue to work with phasors:

• Ohm's Law
• Kirchhoff's Laws
• Superposition
• Thevenin and Norton Sources
• Maximum Power Transfer

This page will describe how to use some of the tools we discussed for DC circuits in an AC circuit using phasors.

## Ohm's Law

Ohm's law, as we have already seen, becomes the following equation when in the phasor domain:

$\mathbb{V} = \mathbb{Z} \mathbb{I}$

Separating this out, we get:

$M_V \angle \phi_V = (M_Z \times M_I) \angle (\phi_Z + \phi_I)$

Where we can clearly see the magnitude and phase relationships between the current, the impedance, and the voltage phasors.

## Kirchhoff's Laws

Kirchhoff's laws still hold true in phasors, with no alterations.

### Kirchhoff's Current Law

Kirchhoff's current law states that the amount of current entering a particular node must equal the amount of current leaving that node. Notice that KCL never specifies what form the current must be in: any type of current works, and KCL always holds true.

[KCL With Phasors]

$\sum_n \mathbb{I}_n = 0$

### Kirchhoff's Voltage Law

KVL states: The sum of the voltages around a closed loop must always equal zero. Again, the form of the voltage forcing function is never considered: KVL holds true for any input function.

[KVL With Phasors]

$\sum_n \mathbb{V}_n = 0$

## Superposition

Superposition may be applied to a circuit if all the sources have the same frequency. When the sources have different frequencies, however, superposition is the only viable method. The important point to remember is that impedance values in a circuit depend on the frequency, and reactive elements react to different frequencies differently. Therefore, the circuit must be solved once for every source frequency, and the results added in the time domain. This can be a long process, but it is the only good method to solve these circuits.

## Thevenin and Norton Circuits

Thevenin Circuits and Norton Circuits can be manipulated in a similar manner to their DC counterparts: Using the phasor-domain implementation of Ohm's Law.

$\mathbb{V} = \mathbb{Z}\mathbb{I}$

It is important to remember that the $\mathbb{Z}$ does not change in the calculations, although the phase and the magnitude of both the current and the voltage sources might change as a result of the calculation.

## Maximum Power Transfer

The maximum power transfer theorem in phasors is slightly different from the theorem for DC circuits. To obtain maximum power transfer from a Thevenin source to a load, the internal Thevenin impedance ($\mathbb{Z}_t$) must be the complex conjugate of the load impedance ($\mathbb{Z}_l$):

[Maximum Power Transfer, with Phasors]

$\mathbb{Z}_l = R_t - jX_t$

# Complex Power


# The Laplace Transform

The Laplace Transform is a useful tool borrowed from mathematics to quickly and easily analyze systems that are represented by high-order linear differential equations. The Fourier Transform, which is closely related, can also provide us with insight about the frequency response characteristics of a system.

## Laplace Transform

The Laplace Transform is a powerful tool that is very useful in Electrical Engineering. The transform allows equations in the "time domain" to be transformed into an equivalent equation in the Complex S Domain. The laplace transform is an integral transform, although the reader does not need to have a knowledge of integral calculus because all results will be provided. This page will discuss the Laplace transform as being simply a tool for solving and manipulating ordinary differential equations.

Laplace transformations of circuit elements are similar to phasor representations, but they are not the same. Laplace transformations are more general than phasors, and can be easier to use in some instances. Also, do not confuse the term "Complex S Domain" with the complex power ideas that we have been talking about earlier. Complex power uses the variable $\mathbb{S}$, while the Laplace transform uses the variable s. The Laplace variable s has nothing to do with power.

The transform is named after the mathematician Pierre Simon Laplace (1749-1827). The transform itself did not become popular until Oliver Heaviside, a famous electrical engineer, began using a variation of it to solve electrical circuits.

## Laplace Domain

The Laplace domain, or the "Complex s Domain" is the domain into which the Laplace transform transforms a time-domain equation. s is a complex variable, composed of real and imaginary parts:

$s = \sigma + j\omega$

The Laplace domain graphs the real part (σ) as the horizontal axis, and the imaginary part (ω) as the vertical axis. The real and imaginary parts of s can be considered as independent quantities.

The similarity of this notation with the notation used in Fourier transform theory is no coincidence; for $\sigma=0$, the Laplace transform is the same as the Fourier transform if the signal is causal.

## The Transform

The mathematical definition of the Laplace transform is as follows:

[The Laplace Transform]

$F(s) = \mathcal{L} \left\{f(t)\right\} = \int_{0^-}^\infty e^{-st} f(t)\,dt$
Note:
The letter s has no special significance, and is used with the Laplace Transform as a matter of common convention.

The transform, by virtue of the definite integral, removes all t from the resulting equation, leaving instead the new variable s, a complex number that is normally written as $s=\sigma+j\omega$. In essence, this transform takes the function f(t), and "transforms it" into a function in terms of s, F(s). As a general rule the transform of a function f(t) is written as F(s). Time-domain functions are written in lower-case, and the resultant s-domain functions are written in upper-case.

There is a table of Laplace Transform pairs in
the Appendix

We will use the following notation to show the transform of a function:

$f(t) \Leftrightarrow F(s)$

We use this notation, because we can convert F(s) back into f(t) using the inverse Laplace transform.

## The Inverse Transform

The inverse laplace transform converts a function in the complex S-domain to its counterpart in the time-domain. Its mathematical definition is as follows:

[Inverse Laplace Transform]

$\mathcal{L}^{-1} \left\{F(s)\right\} = {1 \over {2\pi j}}\int_{c-j\infty}^{c+j\infty} e^{st} F(s)\,ds = f(t)$

where $c$ is a real constant such that all of the poles $s_1,s_2,...,s_n$ of $F(s)$ fall in the region $\mathfrak{R}\{s_i\} < c$. In other words, $c$ is chosen so that all of the poles of $F(s)$ are to the left of the vertical line intersecting the real axis at $s=c$.

The inverse transform is more difficult mathematically than the transform itself is. However, luckily for us, extensive tables of laplace transforms and their inverses have been computed, and are available for easy browsing.

## Transform Properties

There is a table of Laplace Transform properties in
The Appendix

The most important property of the Laplace Transform (for now) is as follows:

$\mathcal{L} \left\{ f'(t) \right\} = sF(s) - f(0)$

Likewise, we can express higher-order derivatives in a similar manner:

$\mathcal{L} \left\{f''(t)\right\} = s^2F(s) - s f(0) - f'(0)$

Or for an arbitrary derivative:

$\mathcal{L} \left\{f^{(n)}(t)\right\} = s^nF(s) - \sum_{i=0}^{n-1} s^{(n-1-i)} f^{(i)}(0)$

where the notation $f^{(n)}(t)$ means the nth derivative of the function $f$ at the point $t$, and $f^{(0)}(t)$ means $f(t)$.

In plain English, the Laplace transform converts differentiation into multiplication by powers of s. The only important thing to remember is that we must add in the initial conditions of the time domain function; for many circuits the initial conditions are zero, leaving us with nothing to add.

For integrals, we get the following:

$\mathcal{L}\left\{ \int_0^t f(\tau)\, d\tau \right\} = {1 \over s}F(s)$

## Initial Value Theorem

The Initial Value Theorem of the laplace transform states as follows:

[Initial Value Theorem]

$f(0) \Leftrightarrow \lim_{s \to \infty} sF(s)$

This is useful for finding the initial conditions of a function needed when we perform the transform of a differentiation operation (see above).

## Final Value Theorem

Similar to the Initial Value Theorem, the Final Value Theorem states that we can find the value of a function f, as t approaches infinity, in the laplace domain, as such:

[Final Value Theorem]

$\lim_{t \to \infty} f(t) \Leftrightarrow \lim_{s \to 0} sF(s)$

This is useful for finding the steady state response of a circuit. The final value theorem may only be applied to stable systems.
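A small numeric sketch using the "exponential approach" pair from the Laplace table (illustrative $\alpha$): $F(s) = \alpha/(s(s+\alpha))$ corresponds to $f(t) = 1 - e^{-\alpha t}$, whose final value is 1, and $sF(s)$ indeed approaches 1 as $s \to 0$:

```python
# Final value theorem check: f(t) = 1 - e^(-a*t), F(s) = a / (s*(s+a)),
# so s*F(s) -> 1 as s -> 0, matching f(t) -> 1 as t -> infinity.
a = 4.0   # illustrative

def sF(s):
    return s * a / (s * (s + a))

limit_estimate = sF(1e-9)   # evaluate s*F(s) very close to s = 0
print(limit_estimate)       # very close to 1.0
```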

## Transfer Function

If we have a circuit with impulse-response h(t) in the time domain, with input x(t) and output y(t), we can find the Transfer Function of the circuit, in the laplace domain, by transforming all three elements:

In this situation, H(s) is known as the "Transfer Function" of the circuit. It can be defined as both the transform of the impulse response, or the ratio of the circuit output to its input in the Laplace domain:

[Transfer Function]

$H(s) = \mathcal{L} \left\{h(t) \right\} = \frac{Y(s)}{X(s)}$

Transfer functions are powerful tools for analyzing circuits. If we know the transfer function of a circuit, we have all the information we need to understand the circuit, and we have it in a form that is easy to work with. When we have obtained the transfer function, we can say that the circuit has been "solved" completely.

## Convolution Theorem

Earlier it was mentioned that we could compute the output of a system from the input and the impulse response by using the convolution operation. As a reminder, given the following system:

• x(t) = system input
• h(t) = impulse response
• y(t) = system output

We can calculate the output using the convolution operation, as such:

$y(t) = x(t) * h(t)$

Where the asterisk denotes convolution, not multiplication. However, in the S domain, this operation becomes much easier, because of a property of the laplace transform:

[Convolution Theorem]

$\mathcal{L} \left\{ a(t) * b(t) \right\} = A(s)B(s)$

Where the asterisk operator denotes the convolution operation. This leads us to an English statement of the convolution theorem:

Convolution in the time domain becomes multiplication in the S domain, and convolution in the S domain becomes multiplication in the time domain.[2]

Now, if we have a system in the Laplace S domain:

• X(s) = Input
• H(s) = Transfer Function
• Y(s) = Output

We can compute the output Y(s) from the input X(s) and the Transfer Function H(s):

$Y(s) = X(s)H(s)$

Notice that this property is very similar to phasors, where the output can be determined by multiplying the input by the network function. The network function and the transfer function then, are very similar quantities.
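A rough numeric sketch of the theorem: convolving a unit step input with the impulse response $h(t) = e^{-t}u(t)$ should reproduce the analytic result $y(t) = 1 - e^{-t}$, the same answer the product $Y(s) = X(s)H(s)$ yields after inversion:

```python
import math

# Discrete approximation of the convolution integral
# y(t) = integral over tau of x(tau) * h(t - tau) d tau
dt = 1e-3
n = 2000                                   # evaluate out to t = 2 seconds

x = [1.0] * n                              # x(t) = u(t), unit step
h = [math.exp(-k * dt) for k in range(n)]  # h(t) = e^(-t) u(t)

t_index = n - 1
y_T = sum(x[k] * h[t_index - k] for k in range(t_index + 1)) * dt

expected = 1 - math.exp(-(t_index * dt))   # analytic y(t) = 1 - e^(-t)
print(y_T, expected)
```

The two values agree to within the discretization error of the Riemann sum.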

## Resistors

The Laplace transform can be used independently on different circuit elements, and then the circuit can be solved entirely in the S Domain (which is much easier). Let's take a look at some of the circuit elements:

Resistors are time and frequency invariant. Therefore, the transform of a resistor is the same as the resistance of the resistor:

[Transform of Resistors]

$R(s) = r$

Compare this result to the phasor impedance value for a resistance r:

$Z_r = r \angle 0$

You can see very quickly that resistance values are very similar between phasors and laplace transforms.

## Ohm's Law

If we transform Ohm's law, we get the following equation:

[Transform of Ohm's Law]

$V(s) = I(s)R$

Now, following Ohm's law, the resistance of the circuit element is the ratio of the voltage to the current. So, we will solve for the quantity $\frac{V(s)}{I(s)}$, and the result will be the resistance of our circuit element:

$R = \frac{V(s)}{I(s)}$

This ratio, the input/output ratio of our resistor is an important quantity, and we will find this quantity for all of our circuit elements. We can say that the transform of a resistor with resistance r is given by:

[Transform of Resistor]

$\mathcal{L}\{\text{resistor}\} = R = r$

## Capacitors

Let us look at the relationship between voltage, current, and capacitance, in the time domain:

$i(t) = C\frac{dv(t)}{dt}$

Solving for voltage (assuming the capacitor is initially uncharged), we get the following integral:

$v(t) = \frac{1}{C}\int_{0}^{t} i(\tau)\, d\tau$

Then, transforming this equation into the laplace domain, we get the following:

$V(s) = \frac{1}{C} \frac{1}{s} I(s)$

Again, if we solve for the ratio $\frac{V(s)}{I(s)}$, we get the following:

$\frac{V(s)}{I(s)} = \frac{1}{sC}$

Therefore, the transform for a capacitor with capacitance C is given by:

[Transform of Capacitor]

$\mathcal{L}\{\mbox{capacitor}\} = \frac{1}{sC}$

## Inductors

Let us look at our equation for inductance:

$v(t) = L \frac{di(t)}{dt}$

Putting this into the Laplace domain, we get the formula:

$V(s) = sLI(s)$

And solving for our ratio $\frac{V(s)}{I(s)}$, we get the following:

$\frac{V(s)}{I(s)} = sL$

Therefore, the transform of an inductor with inductance L is given by:

[Transform of Inductor]

$\mathcal{L}\{\text{inductor}\} = sL$

## Impedance

Since all the load elements can be combined into a single format dependent on s, we call the effect of all load elements impedance, the same as we call it in phasor representation. We denote impedance values with a capital Z (but not a phasor $\mathbb{Z}$).
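For example, a series RLC branch combines into the single s-domain impedance $Z(s) = R + sL + \frac{1}{sC}$. A quick sketch (illustrative values), evaluated at $s = j\omega$ at the branch's resonant frequency, where the inductive and capacitive terms cancel:

```python
import math

# Illustrative series RLC branch
R, L, C = 10.0, 1e-3, 1e-6

def Z(s):
    # Combined s-domain impedance of R, L and C in series
    return R + s * L + 1 / (s * C)

w0 = 1 / math.sqrt(L * C)   # resonant frequency, rad/s
Z_at_resonance = Z(1j * w0)
print(Z_at_resonance)       # reactive parts cancel, leaving approximately R
```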

## Determining electric current in circuits

RCL circuit with zero capacitance and zero initial current.

In the network shown, determine the character of the currents $I_1(t)$, $I_2(t)$, and $I_3(t)$ assuming that each current is zero when the switch is closed.

### Solution[3]

#### Current flow at a joint in circuit

Since the algebraic sum of the currents at any junction is zero, then

$I_1(t)-I_2(t)-I_3(t) = 0$.........(182)

#### Voltage balance on a circuit

Applying the voltage law to the circuit on the left we get

$I_1(t)R_1 + L_2 \frac{dI_2(t)}{dt}\ =E(t)$......... (182-1)

Applying again the voltage law to the outside circuit, given that E is constant, we get

$I_1(t)R_1+I_3(t)R_3 + L_3 \frac{dI_3(t)}{dt}\ =E(t)$......... (182-2)

#### Laplace Transforms of current and voltage equations

Transforming (182), (182-1) and (182-2), we get

$i_1(s)-i_2(s)-i_3(s)=0$.........(182-3)

$i_1(s)R_1 +sL_2i_2(s)=\frac{E}{s}\$......... (182-4)

$i_1(s)R_1 +( R_3 + sL_3 ) i_3(s)= \frac{E}{s}\$ ......... (182-5)

##### Review on implementing Laplace Transformation

The three Laplace transformed equations (182-3), (182-4), and (182-5) show the benefits of integral transformation in converting differential equations into linear algebraic equations that could be solved for the dependent variables (the three currents in this case), then inverse transformed to yield the required solution.

• In equation (182-3), we utilized the sum property of Laplace transforms.
• In equation (182-4), we utilized the transform of differential derivative as follows.

$si_2(s)-I_2(0)= \mathcal{L}\left\lbrace\frac {dI_2}{dt}\right\rbrace$.........(182-4.1)

where we have substituted the given initial condition: $I_2(0)=0$

• In equation (182-5), we also utilized the transform of differential derivative

$si_3(s)-I_3(0)= \mathcal{L}\left\lbrace\frac {dI_3}{dt}\right\rbrace$.........(182-5.2)

Again, we substituted the given initial condition: $I_3(0)=0$

Because the applied voltage is a step function, we used the Laplace transform of a step function, as follows:

$\frac {E}{s}= \mathcal{L}\left\lbrace E \right\rbrace$.........(182-5.3)

#### Solution of the linear simultaneous equations

The three linear simultaneous equations (182-3), (182-4), and (182-5) have the three unknowns $i_1(s)$, $i_2(s)$, and $i_3(s)$ and can be solved by Cramer's rule, among other simple elimination methods, as follows.

$i_1(s) = \frac{ \begin{vmatrix} 0 & -1 & -1 \\ \frac{E}{s}\ & sL_2 & 0 \\ \frac{E}{s}\ & 0 &R_3 + sL_3 \\ \end{vmatrix}}{ \Delta}= \frac{E}{s}\frac{R_3+s(L_2+L_3)}{\Delta}$ ......... (182-6)

Where, the determinant ∆ for the matrix is determined as follows

$\Delta = \begin{vmatrix} 1 & -1 & -1 \\ R_1 & sL_2 & 0 \\ R_1 & 0 &R_3 + sL_3 \\ \end{vmatrix} = \begin{vmatrix} 1 & 0 & 0 \\ R_1 & sL_2+R_1 & R_1 \\ R_1 & R_1 & R_1+R_3 + sL_3 \\ \end{vmatrix}$

$\Delta = s^2L_2L_3+s(R_1L_2+R_3L_2 +R_1L_3)+R_1R_3$......... (182-6.1)

Since we are interested in the factors of Δ, we consider the equation Δ = 0. Because all of its coefficients are positive, it cannot have any positive roots. Its discriminant is

$(R_1L_2+R_3L_2 +R_1L_3)^2-4L_2L_3R_1R_3$ ........ (182-6.1.1)

which can be written

$R^2_1L^2_2+2R_1L_2(R_3L_2+R_1L_3)+(R_3L_2-R_1L_3)^2$........ (182-6.1.2)

which is positive. Hence the equation Δ = 0 has two negative distinct roots $-\alpha_1$ and $-\alpha_2$, say.

Therefore,

$\Delta = L_2L_3(s+\alpha_1)(s+\alpha_2)$ ......... (182-6.2)

where $-\alpha_1$ and $-\alpha_2$ are the roots of the quadratic equation (182-6.1), as follows

$\alpha_1=\frac{1}{2}\left\lbrace\frac{R_1L_2+R_3L_2+R_1L_3}{L_2L_3}+\sqrt {\left(\frac{R_1L_2+R_3L_2+R_1L_3}{L_2L_3}\right)^2-4\frac{R_1R_3}{L_2L_3}}\right\rbrace$ ......... (182-6.2.1)

$\alpha_2=\frac{1}{2}\left\lbrace\frac{R_1L_2+R_3L_2+R_1L_3}{L_2L_3}-\sqrt {\left(\frac{R_1L_2+R_3L_2+R_1L_3}{L_2L_3}\right)^2-4\frac{R_1R_3}{L_2L_3}}\right\rbrace$ ......... (182-6.2.2)

Therefore, equations (182-6) and (182-6.2) give

$i_1(s)=\frac{E}{s}.\frac{R_3+s(L_2+L_3)}{L_2L_3(s+\alpha_1)(s+\alpha_2)}$

$i_1(s)=\frac{A_0}{s}+\frac{A_1}{s+\alpha_1}+\frac{A_2}{s+\alpha_2}$ .........(182-7)

The constants $A_0$, $A_1$, and $A_2$ are obtained in terms of $R_1$, $L_2$, $L_3$, and $R_3$ and are given as:

$A_0=\frac{ER_3}{L_2L_3\alpha_1\alpha_2}$ .........(182-7.1)

$A_1=E\frac{R_3\alpha_2-\alpha_1\alpha_2(L_2+L_3)}{L_2L_3\alpha_1\alpha_2(\alpha_1-\alpha_2)}$.........(182-7.2)

$A_2=E\frac{\alpha_2(L_2+L_3)-R_3}{L_2L_3\alpha_2(\alpha_1-\alpha_2)}$.........(182-7.3)

#### Inverse Laplace Transforms of current equations

The inverse Laplace transform of (182-7) is therefore,

$I_1(t)=\mathcal{L}^{-1}\left\lbrace\frac{A_0}{s}+\frac{A_1}{s+\alpha_1}+\frac{A_2}{s+\alpha_2}\right\rbrace =A_0+A_1e^{-\alpha_1t}+A_2e^{-\alpha_2t}$.........(182-8)

The remaining variables $I_2(t)$ and $I_3(t)$ and the corresponding voltages are determined by equations (182), (182-1) and (182-2)

##### Analysis of circuit dynamics

The electric current $I_1(t)$ in equation (182-8) shows a time-independent component $A_0$ plus two exponentially decaying terms, so the current approaches the asymptotic value $A_0$ as t approaches ∞. In other words, the currents in the three branches show no sinusoidal oscillation, mainly because: (1) the applied voltage is constant and (2) the circuit has no capacitance components.
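A numeric spot check of this result (all element values assumed equal to 1 purely for illustration: $R_1=R_3=1\,\Omega$, $L_2=L_3=1\,\text{H}$, $E=1\,\text{V}$) confirms that $I_1(0)=0$, as the initial conditions require, and that $I_1(t)$ settles at the steady-state value $E/R_1$:

```python
import math

# Assumed unit values, purely for illustration
R1, R3, L2, L3, E = 1.0, 1.0, 1.0, 1.0, 1.0

# Coefficients of the quadratic Delta = A*s^2 + B*s + C
A = L2 * L3
B = R1 * L2 + R3 * L2 + R1 * L3
Cq = R1 * R3
disc = math.sqrt(B * B - 4 * A * Cq)
a1 = (B + disc) / (2 * A)   # alpha_1
a2 = (B - disc) / (2 * A)   # alpha_2

# Partial-fraction coefficients of i1(s) (residues at s = 0, -a1, -a2)
A0 = E * R3 / (L2 * L3 * a1 * a2)
A1 = E * (R3 - a1 * (L2 + L3)) / (a1 * L2 * L3 * (a1 - a2))
A2 = E * (a2 * (L2 + L3) - R3) / (a2 * L2 * L3 * (a1 - a2))

# I1(0) = A0 + A1 + A2 must be 0 (all currents start at zero),
# and the steady-state current is A0 = E / R1
print(A0 + A1 + A2, A0)
```

The steady-state value $E/R_1$ makes physical sense: at DC steady state both inductors act as short circuits, so the source sees only $R_1$.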

Notes: This example could be modified in various ways[4] to involve a voltage impulse, a sinusoidal voltage source, capacitance, and various boundary and initial conditions of charges and currents.

## Generalization of the method

In the above example, the following modifications can be made:

(1) The applied voltage in the Kirchhoff equations can take many forms, such as

• $E(t)=E_o\delta(t)$
• $E(t)=E_o\sin(\omega t)$
• $E(t)=E_of(t)$

(2) Capacitance adds an integral term of the current over the duration, as

• $\frac{1}{C}\int_0^t I(\tau)d\tau$

## References

1. El-Hewie, Mohamed F. (2013). Laplace Transforms. USA: Shaymaa Publishing. pp. 217-220. ISBN 1484136349.
2. Lecture 6 Slide 22 (Page 6 in the PDF document) http://www.ee.ic.ac.uk/pcheung/teaching/ee2_signals/Lecture%206%20-%20Laplace%20Transform.pdf
3. El-Hewie, Mohamed F. (2013). Laplace Transforms. USA: Shaymaa Publishing. pp. 217-220. ISBN 1484136349.
4. El-Hewie, Mohamed F. (2013). Laplace Transforms. USA: Shaymaa Publishing. pp. 190–220. ISBN 1484136349.

## Laplace Circuit Solution

One of the most important uses of the Laplace transform is to solve linear differential equations, just like the type of equations that represent our first- and second-order circuits. This page will discuss the use of the Laplace Transform to find the complete response of a circuit.

## Steps

Here are the general steps for solving a circuit using the Laplace Transform:

1. Determine the differential equation for the circuit.
2. Use the Laplace Transform on the differential equation.
3. Solve for the unknown variable in the Laplace domain.
4. Use the inverse Laplace transform to find the time-domain solution.

Another method that we can use is:

1. Transform the individual circuit components into impedance values using the Laplace Transform.
2. Find the Transfer function that describes the circuit
3. Solve for the unknown variable in the Laplace domain.
4. Use the inverse Laplace transform to find the time-domain solution.
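The second list of steps can be sketched symbolically in Python with sympy (an assumption: the textbook itself works by hand or in MatLab). The circuit here is a hypothetical series RC driven by a step voltage source:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
R, C, Vs = sp.symbols('R C V_s', positive=True)

# Steps 1-2: transform the components into impedances (R and 1/(sC))
# and write the voltage divider for the capacitor voltage.
Vc_s = (Vs / s) * (1 / (s * C)) / (R + 1 / (s * C))   # step input transforms to Vs/s

# Step 3: simplify in the Laplace domain.
Vc_s = sp.apart(sp.simplify(Vc_s), s)

# Step 4: inverse transform back to the time domain.
vc_t = sp.inverse_laplace_transform(Vc_s, s, t)
print(sp.simplify(vc_t))   # the familiar RC charging curve Vs*(1 - e^(-t/RC))
```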

Joseph Fourier, after whom the Fourier Transform is named, was a famous mathematician who worked for Napoleon.

This course started with phasors. We learned how to transform sinusoidal forcing functions, such as voltage supplies, into phasors. To handle more complex forcing functions we switched to complex frequencies. This enabled us to handle forcing functions of the form:

$e^{st}\cos(\omega t + \phi)$

where s is:

$s=\sigma + j\omega$

And the convolution integral extends this to arbitrary forcing functions.

Along the way "s" began to transform the calculus operators back into algebra. Within the complex domain, "s" could be re-attached to the inductors and capacitors rather than forcing functions. The transfer function helped us use "s" to capture circuit physical characteristics.

This is all good for designing a circuit to operate at a single frequency ω. But what about circuits that operate at a variety of frequencies? An RC car may operate at 27 MHz, but when a control is pressed, the frequency might increase or decrease. Or the amplitude may increase or decrease. Or the phase may shift. All of these things happen in a cell phone call, or in Wi-Fi, Bluetooth, XBee, AM/FM radio, over-the-air TV, etc.

How does a single circuit respond to these changes?

## Fourier analysis

Fourier analysis says we don't have to answer all of the above questions; just one has to be answered and designed to. Since any function can be expressed as a series of sinusoids added together, sweeping the circuit through a range of ω's can predict its response to any particular combination of them.

So to start, we get rid of the exponential term and go back to phasors.

Set σ to 0:

$s = j\omega$

The variable ω is known as the "radial frequency", or just the frequency. With this we can design circuits for cell phones that all share the air, or for set-top cable TV boxes that pack multiple channels into one cable. Every vocal or pixel change during transmission or reception can be designed for within this framework. All that is required is to sweep through all the frequencies that a sinusoidal voltage or current source can produce.

Analysis stays in the frequency domain. Because everything repeats over and over again in time, there is no point in going back to the time domain from a design point of view.

In the Fourier transform, the value $\omega$ is known as the Radial Frequency, and has units of radians/second (rad/s). People might be more familiar with the variable f, which is called the "Frequency", and is measured in units called Hertz (Hz). The conversion is done as such:

$\omega = 2\pi f$

For instance, if a given AC source has a frequency of 60Hz, the resultant radial frequency is:

$\omega = 2\pi f = 2\pi(60) = 120\pi$
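The conversion is simple enough to check in a couple of lines of Python:

```python
import math

f = 60.0                 # frequency in Hz
w = 2 * math.pi * f      # radial frequency in rad/s
print(round(w, 2))       # 376.99, i.e. 120*pi rad/s
```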

## Fourier Domain

The Fourier domain then is broken up into two distinct parts: the magnitude graph, and the phase graph. The magnitude graph has jω as the horizontal axis, and the magnitude of the transform as the vertical axis. Remember, we can compute the magnitude of a complex value C as:

$C = A + jB$
$|C| = \sqrt{A^2 + B^2}$

The Phase graph has jω as the horizontal axis, and the phase value of the transform as the vertical axis. Remember, we can compute the phase of a complex value as such:

$C = A + jB$
$\angle C = \tan^{-1}\left(\frac{B}{A}\right)$

The phase and magnitude values of the Fourier transform can be considered independent values, although some abstract relationships do apply. Every Fourier transform must include a phase value and a magnitude value, or it cannot be uniquely transformed back into the time domain.
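The magnitude and phase computations can be sketched in Python (the value C = 3 + j4 is hypothetical). Note that `cmath.phase` uses atan2, which, unlike a bare arctangent of B/A, returns the correct quadrant:

```python
import cmath
import math

C = 3 + 4j                  # A + jB with hypothetical A = 3, B = 4
magnitude = abs(C)          # sqrt(A^2 + B^2)
phase = cmath.phase(C)      # atan2(B, A), in radians

print(magnitude)                       # 5.0
print(round(math.degrees(phase), 2))   # 53.13
```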

The combination of graphs of the magnitude and phase responses of a circuit, along with some special types of formatting and interpretation are called Bode Plots.

## Bode Plots

Bode plots plot the transfer function. Since the transfer function is a complex number, both the magnitude and phase are plotted (in polar coordinates). The independent variable ω is swept through a range of values that center on the major defining feature such as time constant or resonant frequency. A magnitude plot has dB of the transfer function magnitude on the vertical axis. The phase plot typically has degrees on the vertical axis.

the voltage across the capacitor and resistor parallel combination is the output
File:Example45bode1.png
Bode plot of VCR compared to VS, which looks like a passive low-pass filter with a slope of -40 dB/decade; the cutoff frequency looks to be 10^0 = 1 radian/sec. The resistor dominates at DC and the capacitor dominates at very high frequencies.
find the current through R3

## MatLab tr and bode

### Example 1

Previously the transfer function was found to be:

$H(s) = \frac{\mathbb{V}_{cr}(s)}{\mathbb{V}_s(s)} = \frac{1}{CLs^2 + \frac{Ls}{R} + 1} = \frac{1}{s^2 + 2s + 1}$

MatLab has a shorthand notation for entering this information: the coefficients are listed from highest power to lowest (numerator first, then denominator). For this example

f = tf([1],[1 2 1])


Leaving the semicolon off the end causes MatLab to display the transfer function. The next step is to plot it:

grid on
bode(f)


The result is a low-pass filter. Rather than understanding how to create these plots by hand (not trivial), the goal is to interpret the plot (which is almost the same thing). But at this point, the goal is to exercise MatLab.
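For readers without MatLab, the same plot data can be generated with Python's scipy (an assumption on my part; the textbook itself uses MatLab):

```python
import numpy as np
from scipy import signal

# Same transfer function as above: H(s) = 1/(s^2 + 2s + 1)
H = signal.TransferFunction([1], [1, 2, 1])
w, mag, phase = signal.bode(H, w=np.logspace(-2, 2, 200))

print(abs(mag[0]) < 0.1)   # True: ~0 dB at low frequency (DC gain of 1)
print(mag[-1] < -70)       # True: steep high-frequency roll-off (low pass)
```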

File:Example14Bode.png
Bode diagram of R3 current versus Vs, which looks like a notch filter at around 1k radians/sec

### Example 2

Previously the transfer function was found to be:

$\frac{\mathbb{I}_o}{\mathbb{V}_s} = \frac{1000s^3 + 5*10^9s}{s^4 + 5*10^6*s^3 + 2.000015*10^{12}*s^2 + 3.5*10^{13}*s + 5*10^{13}}$
f = tf([1000 0 5*10^9 0],[1 5*10^6 2.000015*10^12 3.5*10^13 5*10^13])
grid on
bode(f)


### Magnitude Graph

Bode magnitude plots are in dB, which simultaneously expresses power, voltage, and current ratios. The vertical dB axis is not an approximation, nor is it relative to an arbitrary reference; it is an accurate number. The horizontal, independent axis can be in radians/sec or Hz.

### Phase Graph

The Bode phase plot is a graph with the radial frequency plotted along the X-axis, and the phase shift of the circuit at that frequency plotted on the Y-axis. The phase axis can be in radians or degrees, and the frequency axis in radians per second or Hz.

## Poles and Zeros

A transfer function has 7 features that can be realized in circuits. Before looking at these features, the terms pole, zero, and origin need to be defined. Start with this definition of a transfer function:

$H(j\omega) = \frac{Z(j\omega)}{P(j\omega)}$
• Zeros are roots in the numerator.
• Poles are roots in the denominator.
• The origin is where s = jω = 0 (no real part in a Bode analysis). When the frequency is zero, the input is DC. This is the condition where, after a long time, capacitors open and inductors short.

The 7 possible features in a transfer function are:

• A constant
• Zeros at the origin (s in the numerator)
• Poles at the origin (s in the denominator)
• Real Zero (an s+a factor in the numerator)
• Real Pole (an s+a factor in the denominator)
• Complex conjugate poles
• Complex conjugate zeros

The bode and bodeplot functions are available in the MatLab Control System Toolbox. BodePlotGui does the same thing and is discussed here. BodePlotGui was developed at Swarthmore through an NSF grant. There is a summary of the Swarthmore Bode Diagram tutorial.

Circuit simulation software can plot bode diagrams also.

## Bode Equation Format

Let us say that we have a generic transfer function with poles and zeros:

$H(j\omega) = \frac{(\omega_A + j\omega)(\omega_B + j\omega)}{(\omega_C+ j\omega)(\omega_D + j\omega)}$

Each term, on top and bottom of the equation, is of the form $(\omega_N + j\omega)$. However, we can rearrange our numbers to look like the following:

$\omega_N(1 + \frac{j\omega}{\omega_N})$

Now, if we do this for every term in the equation, we get the following:

$H_{bode}(j\omega) = \frac{\omega_A \omega_B}{\omega_C \omega_D} \frac{(1 + \frac{j\omega}{\omega_A})(1 + \frac{j\omega}{\omega_B})} {(1 + \frac{j\omega}{\omega_C})(1 + \frac{j\omega}{\omega_D})}$

This is the format that we are calling "Bode Equations", although they are simply another way of writing an ordinary frequency response equation.

## DC Gain

The constant term out front:

$\frac{\omega_A \omega_B}{\omega_C \omega_D}$

is called the "DC Gain" of the function. If we set $\omega \to 0$, each $(1 + \frac{j\omega}{\omega_N})$ term goes to 1, and the value of H is simply our DC gain. DC, then, is simply an input with a frequency of zero.
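As a numerical check, consider a hypothetical transfer function with zeros at 2 and 3 rad/s and poles at 5 and 10 rad/s:

```python
# Hypothetical break frequencies: zeros at 2 and 3 rad/s, poles at 5 and 10 rad/s
def H(w):
    jw = 1j * w
    return (2 + jw) * (3 + jw) / ((5 + jw) * (10 + jw))

dc_gain = (2 * 3) / (5 * 10)    # omega_A*omega_B / (omega_C*omega_D)
print(dc_gain)                  # 0.12
print(abs(H(0)))                # 0.12 -- evaluating at omega = 0 gives the DC gain
```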

## Break Frequencies

in each term:

$(1 + \frac{j\omega}{\omega_N})$

the quantity $\omega_N$ is called the "Break Frequency". When the radial frequency of the circuit equals a break frequency, that term becomes (1 + j), which has a magnitude of $\sqrt{2}$. When the radial frequency is much higher than the break frequency, the term becomes much greater than 1. When the radial frequency is much smaller than the break frequency, the value of that term becomes approximately 1.

### Approximations

Bode diagrams are constructed by drawing straight lines (on log paper) that approximate what are really curves. Here is a more precise definition.

The term "much" is a synonym for "At least 10 times". "Much Greater" becomes "At least 10 times greater" and "Much less" becomes "At least 10 times less". We also use the symbol "<<" to mean "is much less than" and ">>" to mean "Is much greater than". Here are some examples:

• 1 << 10
• 10 << 1000
• 2 << 20 Right!
• 2 << 10 WRONG!

For a number of reasons, Electrical Engineers find it appropriate to approximate and round some values very heavily. For instance, manufacturing technology will never create electrical circuits that perfectly conform to mathematical calculations. When we combine this with the << and >> operators, we can come to some important conclusions that help us to simplify our work:

If A << B:

• A + B ≈ B
• A - B ≈ -B
• A / B ≈ 0

All other mathematical operations need to be performed, but these 3 forms can be approximated away. This point will become important for later work on Bode plots.

Using our knowledge of the Bode Equation form, the DC gain value, Decibels, and the "much greater, much less" inequalities, we can come up with a fast way to approximate a bode magnitude plot. Also, it is important to remember that these gain values are not constants, but rely instead on changing frequency values. Therefore, the gains that we find are all slopes of the bode plot. Our slope values all have units of "decibel per decade", or "db/decade", for short.

At zero radial frequency, the value of the Bode plot is simply the DC gain value in decibels. Remember, Bode magnitude plots have a decibel (logarithmic) Y-axis, so we need to convert our gain to decibels:

$Magnitude = 20\log_{10}(\mbox{DC Gain})$

## At a Break Point

We can notice that each given term changes its effect as the radial frequency goes from below the break point to above the break point. Let's show an example:

$(1 + \frac{j\omega}{5})$

Our breakpoint occurs at 5 radians per second. When our radial frequency is much less than the break point, we have the following:

$Gain = (1 + 0) = 1$
$Magnitude = 20\log_{10}(1) = 0\mbox{ dB}$

When our radial frequency is equal to our break point we have the following:

$Gain = |(1 + j)| = \sqrt{2}$
$Magnitude = 20\log_{10}(\sqrt{2}) \approx 3\mbox{ dB}$

And when our radial frequency is much higher than (10 times) our break point we get:

$Gain = |(1 + 10 j)| \approx 10$
$Magnitude = 20\log_{10}(10) = 20\mbox{ dB}$
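These break-point magnitudes can be verified numerically. A small sketch (the break frequency of 5 rad/s matches the example above):

```python
import math

def term_db(w, wn=5.0):
    # magnitude in dB of the term (1 + j*w/wn), with break frequency wn = 5 rad/s
    return 20 * math.log10(abs(1 + 1j * w / wn))

print(round(term_db(0.5), 1))    # 0.0  -- well below the break
print(round(term_db(5.0), 1))    # 3.0  -- at the break
print(round(term_db(50.0), 1))   # 20.0 -- a decade above the break
```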

However, we need to remember that some of our terms are "Poles" and some of them are "Zeros".

### Zeros

Zeros have a positive effect on the magnitude plot: above its break frequency, each zero adds +20 dB/decade to the slope.

### Poles

Poles have a negative effect on the magnitude plot: above its break frequency, each pole adds -20 dB/decade to the slope.

## Conclusions

To draw a bode plot effectively, follow these simple steps:

1. Put the frequency response equation into Bode equation form.
2. Identify the DC gain value, and mark this as a horizontal line coming in from the far left (where the radial frequency is conceptually zero).
3. At every "zero" break point, increase the slope of the line by 20 dB/decade.
4. At every "pole" break point, decrease the slope of the line by 20 dB/decade.
5. At every break point, note that the actual value is 3 dB off from the value graphed.

And then you are done!

## Impedance

Let's recap: In the transform domain, the quantities of resistance, capacitance, and inductance can all be combined into a single complex value known as "Impedance". Impedance is denoted with the letter Z, and can be a function of s or jω, depending on the transform used (Laplace or Fourier). This impedance is very similar to the phasor concept of impedance, except that we are in the complex domain (Laplace or Fourier), and not the phasor domain.

Impedance is a complex quantity, and is therefore composed of two components: the real component (resistance), and the imaginary component (reactance). Resistors, because they do not vary with time or frequency, have purely real impedances. Capacitors and inductors, however, have imaginary values of impedance. The resistance is denoted (as always) with a capital R, and the reactance is denoted with an X (this is common, although it is confusing because X is also the most common input designator). We have, therefore, the following relationship between resistance, reactance, and impedance:

[Complex Laplace Impedance]

$Z = R + jX$

The inverse of resistance is a quantity called "Conductance" (denoted G). Similarly, the inverse of reactance is called "Susceptance" (denoted B), and the inverse of impedance is called "Admittance" (denoted Y). All three are measured in Siemens. This book will not use these terms again; they are included here for completeness.

## Parallel Components

Once in the transform domain, all circuit components act like basic resistors. Components in parallel are related as follows:

$Z_1 || Z_2 = \frac{Z_1 Z_2}{Z_1 + Z_2}$

## Series Components

Series components in the transform domain also act like resistors. If we have two impedances in series with each other, we can combine them as follows:

$Z_1 \mbox{ in series with } Z_2 = Z_1 + Z_2$
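The series and parallel rules can be sketched in a few lines of Python (the component values below are hypothetical):

```python
def series(*zs):
    # impedances in series simply add
    return sum(zs)

def parallel(z1, z2):
    # two impedances in parallel: product over sum
    return z1 * z2 / (z1 + z2)

# Hypothetical values: 100-ohm resistor, 1 uF capacitor, w = 1000 rad/s
w, R, C = 1000.0, 100.0, 1e-6
Zc = 1 / (1j * w * C)          # capacitor impedance 1/(jwC) = -1000j
print(series(R, Zc))           # (100-1000j)
print(parallel(100.0, 100.0))  # 50.0 -- two equal resistors in parallel halve
```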

## Solving Circuits

(This section has not yet been written)

3-Phase Circuits

This section is about 3-Phase circuits.

Circuit Theory/3-Phase Transmission

Appendices

Circuit Functions
Phasor Arithmetic
Decibels
Transform Tables
Resources

# Circuit Functions

## Circuit Functions

This appendix page will list the various values of the variable H that have been used throughout this book. These values of H are all equivalent, but are represented in different domains. All of the H functions are a ratio of the circuit output over the circuit input.

## The "Impulse Response"

The impulse response is the time-domain relationship between the circuit input and the circuit output, denoted with the following notation:

$h(t)$

The impulse response is, strictly speaking, the output that the circuit will produce when an ideal impulse function is the input. The impulse response can be used to determine the output from the input through the convolution operation:

$y(t) = h(t) * x(t)$
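The convolution relationship can be approximated numerically. A minimal sketch, assuming a hypothetical first-order impulse response with time constant τ = 1 s and a unit-step input:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 5.0, dt)
tau = 1.0                        # hypothetical circuit time constant
h = np.exp(-t / tau) / tau       # impulse response of a first-order low-pass
x = np.ones_like(t)              # unit-step input x(t)

# Discrete approximation of the convolution y(t) = h(t) * x(t)
y = np.convolve(h, x)[:len(t)] * dt
print(round(y[-1], 2))           # ~0.99: the step response approaches the DC gain of 1
```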

## The "Network Function"

The network function is the phasor-domain representation of the impulse response. The network function is denoted as such:

$\mathbb{H}(\omega)$

The network function is related to the input and output of the circuit through the following relationships:

$\mathbb{Y}(\omega) = \mathbb{H}(\omega) \mathbb{X}(\omega)$

Similarly, the network function can be obtained by dividing the output by the input, in the phasor domain.

## The "Transfer Function"

The transfer function is the Laplace-transformed representation of the impulse response. It is denoted with the following notation:

$H(s)$

The transfer function can be obtained by one of two methods:

1. Transform the impulse response.
2. Transform the circuit, and solve.

The Transfer function is related to the input and output as follows:

$Y(s) = H(s) X(s)$

## The "Frequency Response"

The Frequency Response is the Fourier-domain representation of the impulse response. It is denoted as such:

$H(j \omega)$

The frequency response can be obtained in one of three ways:

1. Transform the impulse response
2. Transform the circuit and solve
3. Substitute $s = j \omega$ into the transfer function

The frequency response has the following relationship to the circuit input and output:

$Y(j \omega) = H(j \omega) X(j \omega)$

The frequency response is particularly useful when discussing a sinusoidal input, or when constructing a bode diagram.

# Phasor Arithmetic

## Forms

Phasors have two components, the magnitude (M) and the phase angle (φ). Phasors are related to sinusoids through our cosine convention:

$\mathbb{C} = |M| \angle \phi = |M| \cos (\omega t + \phi)$

Remember, there are 3 forms to phasors:

#### Phasor Form

• $\mathbb{C} = |M| \angle \phi$

#### Rectangular Form

• $\mathbb{C} = A + jB$

#### Exponential Form

• $\mathbb{C} = |M|e^{j\phi}$

Phasor and Exponential forms are identical and are also referred to as polar form.

## Converting between Forms

When working with phasors it is often necessary to convert between rectangular and polar form. To convert from rectangular form to polar form:

$|M| = \sqrt{A^2 + B^2}$
$\phi = \arctan \left( \frac{B}{A} \right)$

To convert from polar to rectangular form:

A is the part of the phasor along the real axis

$A = |M|\cos \left( \phi \right)$

B is the part of the phasor along the imaginary axis

$B = |M|\sin \left( \phi \right)$
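These conversions can be sketched with Python's cmath module (the phasor 3 + j4 is hypothetical):

```python
import cmath
import math

# Rectangular -> polar (hypothetical phasor 3 + j4)
A, B = 3.0, 4.0
M, phi = cmath.polar(complex(A, B))
print(M, round(math.degrees(phi), 2))        # 5.0 53.13

# Polar -> rectangular round trip
c = cmath.rect(M, phi)
print(round(c.real, 6), round(c.imag, 6))    # 3.0 4.0
```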

## Addition

To add two phasors together, we must convert them into rectangular form:

$\mathbb{C}_1 = A_1 + jB_1$
$\mathbb{C}_2 = A_2 + jB_2$
$\mathbb{C}_1 + \mathbb{C}_2 = (A_1 + A_2) + j(B_1 + B_2)$

This is a well-known property of complex arithmetic.

## Subtraction

Subtraction is similar to addition, except now we subtract

$\mathbb{C}_1 = A_1 + jB_1$
$\mathbb{C}_2 = A_2 + jB_2$
$\mathbb{C}_1 - \mathbb{C}_2 = (A_1 - A_2) + j(B_1 - B_2)$

## Multiplication

To multiply two phasors, we should first convert them to polar form to make things simpler. The product in polar form is simply the product of their magnitudes, and the phase is the sum of their phases.

$\mathbb{C}_1 = M_1 \angle \phi_1$
$\mathbb{C}_2 = M_2 \angle \phi_2$
$\mathbb{C}_1 \times \mathbb{C}_2 = M_1 \times M_2 \angle {\phi_1+\phi_2}$

Keep in mind that in polar form, phasors are exponential quantities with a magnitude (M), and an argument (φ). Multiplying two exponentials together forces us to multiply the magnitudes, and add the exponents.
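A quick numerical check of the polar multiplication rule (the magnitudes and angles below are hypothetical):

```python
import cmath
import math

# Hypothetical phasors: 2 at 30 degrees and 3 at 45 degrees
c1 = cmath.rect(2.0, math.radians(30))
c2 = cmath.rect(3.0, math.radians(45))
product = c1 * c2

print(round(abs(product), 6))                        # 6.0  (2 * 3)
print(round(math.degrees(cmath.phase(product)), 6))  # 75.0 (30 + 45)
```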

## Division

Division is similar to multiplication, except now we divide the magnitudes, and subtract the phases

$\mathbb{C}_1 = |M_1| \angle \phi_1$
$\mathbb{C}_2 = |M_2| \angle \phi_2$
${\mathbb{C}_1 \over \mathbb{C}_2} = {|M_1| \over |M_2|} \angle {\phi_1-\phi_2}$

## Inversion

An important relationship that is worth understanding is the inversion property of phasors:

$\mathbb{C} = M_C\angle 0 = -M_C \angle \pi$

Or, in degrees,

$\mathbb{C} = M_C\angle 0^\circ = -M_C \angle 180^\circ$

On the normal Cartesian plane, for instance, the negative X-axis points 180 degrees around from the positive X-axis. The same holds on the complex plane: the negative real axis faces in the exact opposite direction from the positive real axis, and is therefore 180 degrees apart.

## Complex Conjugation

Similar to the inversion property is the complex conjugation property of phasors. Complex conjugation is denoted with a superscript asterisk on the phasor being conjugated. Since phasors can be graphed on the real-imaginary plane, a 90-degree phasor is a purely imaginary number, and a -90-degree phasor is its complex conjugate:

$\mathbb{C} = M \angle 90^\circ$
$\mathbb{C}^* = M \angle -90^\circ = M \angle 270^\circ$

Essentially, this holds true for phasors with all angles: the sign of the angle is reversed to produce the complex conjugate of the phasor in polar notation. In general, for polar notation, we have:

$\mathbb{C} = M \angle \phi$
$\mathbb{C}^* = M \angle -\phi$

In rectangular form, we can express complex conjugation as:

$\mathbb{C} = A + jB$
$\mathbb{C}^* = A - jB$

Notice that the only difference in the complex conjugate of C is the sign change of the imaginary part.

# Decibels

This appendix page takes a deeper look at the unit of decibels; it describes some of the properties of decibels and demonstrates how to use them in calculations.

## Definition

Decibels are, first and foremost, a power calculation. With that in mind, we will state the definition of a decibel:

$dB = 10 \log{\frac{P_{out}}{P_{in}}}$

The letters "dB" are used as the units for the result of this calculation. dB ratios are always in terms of watts, unless otherwise noted.

## Voltage Calculation

Another formula allows a decibel calculation to be made using voltages instead of power measurements. We will derive that equation here:

First, we will use the power calculation and Ohm's law to produce a common identity:

$P = VI = \frac{V^2}{R}$

Now, if we plug that result into the definition of a decibel, we can create a complicated equation:

$dB = 10 \log{ \left[\frac{ \frac{V_{out}^2}{R} }{ \frac{V_{in}^2}{R} }\right]}$

Now, we can cancel out the resistance values (R) from the top and bottom of the fraction, and rearrange the exponent as such:

$dB = 10 \log{\left[ \left(\frac{V_{out}}{V_{in}}\right)^2 \right]}$

If we remember the properties of logarithms, we will remember that if we have an exponent inside a logarithm, we can move the exponent outside, as a coefficient. This rule gives us our desired result:

$dB = 20 \log{\left[ \frac{V_{out}}{V_{in}} \right] }$
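A quick numerical check that the voltage form agrees with the power form (assuming the same load resistance on both sides, here a hypothetical 8 Ω):

```python
import math

R = 8.0                       # hypothetical load, identical on both sides
v_in, v_out = 1.0, 10.0
p_in, p_out = v_in**2 / R, v_out**2 / R

power_db = 10 * math.log10(p_out / p_in)
voltage_db = 20 * math.log10(v_out / v_in)
print(power_db, voltage_db)   # 20.0 20.0 -- the two forms agree
```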

## Inverse Calculation

It is a simple matter of arithmetic to find the inverse of the decibel calculation, so it will not be derived here, but stated simply:

$\frac{P_{out}}{P_{in}} = 10^{dB/10}$

## Reference Units

Now, this decibel calculation has proven to be so useful that it is occasionally applied to other units of measurement, instead of just watts. Specifically, the units "dBm" are used when the power being converted is in terms of milliwatts, not watts. If we have a value of 10 dBm, we can go through the inverse calculation:

$P = 10^{10dBm/10} = 10mW$

Likewise, let's say we want to apply the decibel calculation to a completely unrelated unit: hertz. If we have 100Hz, we can apply the decibel calculation:

$dB = 10 \log{100Hz} = 20dBHz$

If no letters follow the "dB" label, the decibels are referenced to watts.

## Decibel Arithmetic

Decibels are logarithmic ratios, not linear gains. Therefore, specific care should be taken not to use decibel values in equations that call for linear gains, unless decibels are specifically called for (which they usually aren't). However, since decibels are calculated using logarithms, a few principles of logarithms can be used to make decibels usable in calculations.

### Multiplication

Let's say that we have three values, a b and c, with their respective decibel equivalents denoted by the upper-case letters A B and C. We can show that for the following equation:

a = b c


That we can change all the quantities to decibels, and convert the multiplication operations to addition:

A = B + C


### Division

Let's say that we have three values, a b and c, with their respective decibel equivalents denoted by the upper-case letters A B and C. We can show that for the following equation:

a = b / c


Then we can show, through the principles of logarithms, that we can convert all the values to decibels and the division operation to subtraction:

A = B - C
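Both rules can be checked numerically (b = 100 and c = 10 are hypothetical gains):

```python
import math

def to_db(x):
    # convert a power ratio to decibels
    return 10 * math.log10(x)

b, c = 100.0, 10.0
print(to_db(b * c), to_db(b) + to_db(c))   # 30.0 30.0 -- multiplication becomes addition
print(to_db(b / c), to_db(b) - to_db(c))   # 10.0 10.0 -- division becomes subtraction
```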