Pre-(r)amble

Odds are that by the time someone has reached this article, myself included, they have spent at least the briefest of moments (frustratedly?) questioning the practical applications of linear combination, linear independence, and linear math. In a sentence, these concepts allow us to mathematically understand and represent multidimensional coordinate systems. If you're looking for a quick explanation for a homework problem, feel free to skim the bolded topics for help in specific areas of concern. Otherwise, here's something to think about. Imagine maneuvering in three-dimensional space. An instantaneous position can be described using a three-dimensional coordinate system. When following a consistent pattern of movement, an instantaneous position can be described with a fourth dimension: time. Suppose you have just landed the snowball throw of a lifetime, hitting a target that was moving across your view plane, increasing its distance from you, and uphill. You have properly estimated the intersection of two moving objects in four dimensions. This is not always an easy task to execute. Now make this throw using a fifth dimension. Most people can't comprehend the existence of a fifth dimension, let alone understand how to maneuver in it. With linear math we can attempt to understand and represent the relationships between these dimensions.

Important Definitions

Linear Independence
A set of vectors $\{v_1, v_2, \ldots, v_n\}$ is linearly independent if the equation $c_1 v_1 + c_2 v_2 + \cdots + c_n v_n = 0$ has ONLY the zero (trivial) solution $c_1 = c_2 = \cdots = c_n = 0$.

Linear Dependence
Alternatively, if the equation has a solution in which some $c_i \neq 0$, the set of vectors is said to be linearly dependent.

Determining Linear Independence

By row reducing a coefficient matrix created from our vectors $\{v_1, \ldots, v_n\}$, we can determine the solution coefficients $c_1, \ldots, c_n$. Then, to classify a set of vectors as linearly independent or dependent, we compare against the definitions above.

Example
Determine if the following set of vectors is linearly independent:

, , ,

Setting up a Corresponding System of Equations and Finding its RREF Matrix

We need to understand that our vectors can be represented as a system of equations, each set equal to zero, to satisfy the equation from our definition of linear independence. These equations will look something like this:

Notice that I have simply taken the coefficients from the given vectors and multiplied them by four variables (the number of variables equals the number of vectors in the given set). They have been set equal to zero to allow us to test for linear independence. From here, create a coefficient matrix and perform row operations to reduce the matrix to reduced row echelon form (rref).

rref =

Finding the Solution of the RREF Matrix

Finding the solution of the rref matrix may be the most difficult step in this process. However, it becomes straightforward if you follow a few simple steps.

1) Identify the free variables in the matrix. Free variables are the variables whose columns contain no pivot; they sit to the right of the pivot variables. Pivot variables correspond to the first non-zero entry in each row, and since we have taken the rref of our matrix, all of the pivot coefficients are 1. By locating all free variables (or by eliminating all pivot variables), we find that our set has exactly one free variable.

2) Write the free variables into your solution. A free variable can be written into our solution vector as itself, but we will represent it with a new variable name (e.g. $t$) so that our solution is in parametric form. Multiple free variables are represented with multiple variable names (e.g. $s$, $t$). After this step, your solution vector contains the new parameter in the free variable's position.

3) Solve for the pivot variables. Each pivot variable will either be a constant (e.g. 0, 6) or a function of your free variables. From the rref matrix we can read off the value of each pivot variable in terms of the free variable.

Since not all of our coefficients $c_i$ are forced to zero, the given set of vectors is said to be linearly dependent. The linear dependence relation is written using the entries of our solution vector multiplied by the respective vectors from the given set. We can also conclude that any vectors with non-zero coefficients are linear combinations of each other.
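The whole procedure above can be checked by machine. Here is a minimal SymPy sketch; the vectors are hypothetical stand-ins, since the article's original example vectors are not reproduced here:

```python
import sympy as sp

# Hypothetical example vectors (stand-ins for the article's originals).
v1 = sp.Matrix([1, 2, 3])
v2 = sp.Matrix([4, 5, 6])
v3 = sp.Matrix([7, 8, 9])   # note: v3 = 2*v2 - v1, so the set is dependent

# Build the coefficient matrix with the vectors as columns, then row reduce.
A = sp.Matrix.hstack(v1, v2, v3)
rref_matrix, pivot_cols = A.rref()

# Columns without a pivot correspond to free variables; any free variable
# means a non-trivial solution exists, i.e. linear dependence.
independent = len(pivot_cols) == A.cols
print("pivot columns:", pivot_cols)          # (0, 1)
print("linearly independent:", independent)  # False
```

If `rref()` reports a pivot in every column, only the trivial solution exists and the set is independent.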

Any op-amp worth its salt has a differential amplifier at its front end, and you're nobody if you can't design one yourself. So, this article presents a general method for biasing and analyzing the performance characteristics of single-stage BJT and MOSFET differential amplifier circuits. The following images show the general schematic for both kinds of differential amplifiers, often referred to as a differential input stage when used in designing op-amps. Notice that these types of differential amplifiers use active loads to achieve wide swing and high gain.

Figure 1. BJT and MOSFET differential amplifiers with active loads

Due to design processes and the nature of the devices involved, BJT circuits are “simpler” to analyze than their FET counterparts, whose circuits require a few extra steps when calculating performance parameters. For this reason, this tutorial will begin by biasing and analyzing a BJT differential amplifier circuit, and then will move on to do the same for a FET differential amplifier. But it should be noted that the procedures to analyze these types of differential amplifiers are virtually the same.

BJT Differential Amplifier

The first thing needed is to configure the DC biasing. To accomplish this, a practical implementation of the tail current source must be developed. A very popular method is to use a current mirror. A simple current mirror is shown below:

Figure 2. BJT Current Mirror

It is easy to understand how a current mirror works. Observe the equation governing the amount of collector current in a BJT, denoted $I_C$:

Note: [This equation may look intimidating at first, but what is important to understand is that the point of designing “by hand” is to get close. One should aim simply to get a good estimation of such parameters as necessary bias current, gain, input impedance, etc. In this way, computer simulations can analyze the hand-designed circuit in much closer detail, which greatly aids in the process of designing a real-life differential amplifier. Knowing this, the equations to be used in this tutorial will be rough estimates, but are still invaluable when it comes to designing these types of circuits.]

By assuming a very large equivalent resistance, one can estimate that the collector current through any BJT can be described by:

What can be noticed here is that the only controllable variable in that equation is the base-emitter voltage $V_{BE}$. All the other terms in the equation are constants that depend on either the environment or the actual physical size of the device. This means that for any two same-sized transistors, the currents through their collectors will be the same as long as the voltage across their base-emitter junctions is the same. By tying their bases and emitters together, we can mirror the currents between them! In order to implement a successful current mirror, one transistor (the reference transistor) must have a current induced in it, which is then mirrored into the differential amplifier's current source. After adding this current mirror to our BJT differential amplifier, the resulting schematic is:

Figure 3. BJT differential amp with current mirror biasing

In order to properly bias this circuit, it is necessary to include a bias resistor. Two things are accomplished by including this resistor in our circuit. One is that we can induce the reference current, and thus the mirrored bias current. The other important thing this resistor does is drop a majority of the available voltage across itself, so that the reference transistor doesn't have the entire voltage difference between the supplies across it! To bias this circuit, the first thing one must do is determine the desired magnitude of the current source. This parameter depends on how you want the circuit to operate, and is usually a known value. In this tutorial, we will assume we want a bias current of 1 mA. In order to determine the necessary size of the bias resistor, we analyze the loop that consists of:

These kinds of circuits are typically supplied with symmetric positive and negative rails. So, this tutorial will assume:

.

For a given technology, all of the BJT transistors are designed to have the same turn-on voltage. This tutorial will assume 0.7 V for each BJT. That being the case, rearranging the above equation results in:

By introducing a resistor of this size to the above schematic, the bias current is now established at 1 mA. Due to symmetry, the currents through the two input transistors are each half of the bias current, described by:
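As a quick sanity check, the bias-loop arithmetic can be reproduced in a few lines. The rail values here are assumptions consistent with the tutorial's text (symmetric rails, one 0.7 V base-emitter drop in the loop), not the article's lost originals:

```python
# Rough hand-design numbers for the current-mirror bias loop.
VCC, VEE = 15.0, -15.0   # supply rails (V), assumed
VBE = 0.7                # BJT turn-on voltage (V)
I_bias = 1e-3            # desired tail current (A)

# KVL around the reference loop: VCC - I_bias*R_bias - VBE = VEE
R_bias = (VCC - VEE - VBE) / I_bias
I_C = I_bias / 2         # each input transistor carries half, by symmetry

print(f"R_bias = {R_bias / 1e3:.1f} kOhm")           # 29.3 kOhm
print(f"I_C per transistor = {I_C * 1e3:.2f} mA")    # 0.50 mA
```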

Now that we know the collector currents through and , characterizing the performance of this differential amplifier is a breeze. Since the parameters we are interested in (gain, CMRR, etc) are small-signal parameters, the small-signal model of this circuit is needed. To obtain this, a nice trick is to “cut the amplifier in half” (lengthwise, such that you only analyze the output side of the amplifier) to obtain:

Figure 4. Small-signal model for above differential amplifier

Note: [Even though the output signal is single-ended here, the output is still a result of the entire input signal, and not just half of it. This is because the small-signal changes in the currents flowing through are impeded from traveling down the branches controlled by current sources. Also note that the connections between and the voltage-controlled current source (VCCS) indicate that the voltage that controls the VCCS is the voltage across . This is because the resistance in the emitter of these transistors has been omitted, due to its typically small value (10 to 25 Ω). In addition to this, is assumed to be a small signal (AC) open-circuit. The frequency response has also been omitted, and the amplifier is assumed to be unilateral.]

Differential Mode Gain

It is simple to see that the small-signal output voltage is equal to the current through the parallel combination of the two output resistances multiplied by the size of that same parallel combination. We know the value of the current through this combination is equal to the input voltage multiplied by $g_m$ (the transconductance parameter):

The transconductance parameter is a ratio of output current to input voltage. It is described mathematically as:

and can be solved for thusly:

In this example, $I_C$ is 0.5 mA and $V_T$ is 25 mV. With these values, we compute:

Now that the transconductance parameter is known, the only other values needed to compute the differential mode gain are the two output resistances. One device is an npn transistor, while the other is a pnp transistor, so they will not have the same small-signal resistance, but the procedure to find these two values is nearly identical. The following equation describes the small-signal output resistance of any BJT:

The Early voltage $V_A$ is typically given, and in this tutorial:

Which would result in:

and

Now that the small-signal resistances are known, along with the transconductance parameter, the differential mode gain () may be calculated:

or, in decibels (dB):

Differential Input Impedance

The differential input impedance of a differential amplifier is the impedance "seen" by any "differential" signal. A "differential signal" is any and all signal content that is not shared by both inputs. For instance, if:

and

then the common mode signal and differential mode signals are:

and

To find the differential input impedance, begin by following the loop consisting of:

, as illustrated below:

Figure 5. Loop analyzed in order to determine Rin(DM)

We see that, in the differential signal mode, the path to ground only consists of $r_\pi$ of each input transistor. Since this is the case, the differential mode input impedance of any BJT diff-amp may be expressed as (omitting emitter resistance and assuming matched transistors):

where:

$\beta$ (current gain factor)

A typical value for $\beta$ is 100, and knowing $g_m$ allows one to compute:

So, for the BJT differential amplifier in this tutorial, the differential mode input impedance is:
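With the transconductance from earlier, the input-impedance arithmetic is two lines. The $\beta$ = 100 is the typical value quoted above; $g_m$ = 20 mA/V follows from $I_C$ = 0.5 mA:

```python
beta = 100.0        # current gain factor (typical)
g_m = 0.02          # transconductance (A/V), from I_C / V_T
r_pi = beta / g_m   # base-emitter small-signal resistance: 5 kOhm
R_in_dm = 2 * r_pi  # differential path crosses both junctions: 10 kOhm
print(f"R_in(DM) = {R_in_dm / 1e3:.0f} kOhm")
```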

Common Mode Gain

The CM gain is the "gain" that common mode signals "see," or rather, the attenuation applied to signals present on both differential inputs. A good op amp attempts to eliminate all common mode signals, but this is not possible in the real world. However, one may compute the common mode gain by "cutting the amplifier in half" and observing one of the loops in the following diagram. The path differs from that of differential signals because common mode signals make it so that the two signal sources don't "see" each other. Notice:

Similar to the output voltage of the differential mode small signal model, we can see that is the voltage across . We also know the current running through this resistance, and may equate the output voltage to:

This time, though, the input isn't distributed entirely over the resistance at the base. Instead, a fraction of the common mode input signal appears across the base-emitter junction. Referring back to the small signal model, we see that the loop composed of:

reveals that:

but the base current is negligible compared to the current supplied by the collector, so we say:

which we use to solve for :

Which we then plug back into the equation for :

From this we can solve directly for the common mode gain:

Here, the common mode gain is:

Common Mode Input Impedance

The common-mode input impedance is the impedance that common-mode input signals "see." One can analyze the common mode input impedance by, again, "cutting the differential amplifier in half" and analyzing one side of the resulting schematic, assuming a common mode signal. This can be found by observing figure 6, above.

Choosing one of these paths, we construct the corresponding small-signal model for common mode signals (assuming ), which is shown in figure 7. From this figure, deriving is simple. Notice the currents flowing in the loop that consists of:

from this loop, one may compute:

which is used to find an equation for

and since:

and

So:

which is the same as:

which can be rearranged for:

where:

Which, in this tutorial, results in:

Common Mode Rejection Ratio (CMRR)

The common mode rejection ratio (CMRR) is simply a ratio of the differential mode gain to the common mode gain, and is defined as:

Here, the CMRR is:
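Since the article's computed common-mode gain is not reproduced here, the CMRR arithmetic is sketched with an assumed placeholder $A_{cm}$; only the formula itself is from the text:

```python
import math

A_dm = 2000.0   # differential-mode gain (V/V), from the earlier section
A_cm = 0.05     # common-mode gain (V/V), assumed placeholder value

CMRR = abs(A_dm / A_cm)
CMRR_dB = 20 * math.log10(CMRR)
print(f"CMRR = {CMRR:.0f} ({CMRR_dB:.1f} dB)")
```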

Analysis of FET Differential Amplifiers

As stated before, the analysis of these performance parameters is done virtually the same way for FET diff amps as for BJT diff amps. There are, however, a few key differences. For one, all BJT transistors are typically built to be the same size on a given IC device. But for an IC device that uses FETs, this is not the case. Each FET has an adjustable length and width that affects how much current it will pass for a given voltage drop across the device. In fact, observe the equation for the drain current in a FET:

Analyzing BJTs in a circuit is simpler because all base-emitter voltages are assumed to be equal. This is not the case for MOSFETs, and one must analyze the above equation (or others) to find device voltages. But there is the threshold voltage: the minimum gate-to-source voltage that will allow for any conduction whatsoever. The threshold voltage is a result of the FET fabrication process, and is typically provided on datasheets for each FET type (NMOS and PMOS).

For a differential amplifier composed of FETs to work, it is imperative that all the FETs be in saturation mode. For a FET to be in saturation implies:

So this must be checked when analyzing these types of circuits.

Another important difference is the derivation of the transconductance parameter, . When analyzed for a BJT, it was defined as the ratio of the change in collector current to the change in the base-emitter voltage. For a FET there is a similar procedure, as the transconductance is defined as the ratio of the change in drain current to the change in gate-source voltage. Mathematically, the transconductance parameter is:

The last notable difference is the computation of a FET's small-signal resistance. The equation describing $r_o$ is:

From this little discussion, you should be able to apply the principles used to analyze the BJT differential amplifier to the analysis of a FET-based differential amplifier. But, of course, if you would like to see a FET differential amplifier explained in more detail, do not hesitate to ask a question!

Credit & Acknowledgment

This post was created in March 2011 by Kansas State University Electrical Engineering student Safa Khamis. A million thank-yous are extended to Safa for taking the time to document this important process for everyone else to learn from. Please leave questions or comments in the questions section of the website.

Introduction to the convolution

Amongst the concepts that cause the most confusion to electrical engineering students, the Convolution Integral stands as a repeat offender. As such, the point of this article is to explain what a convolution integral is, why engineers need it, and the math behind it.

In essence, the "convolution" of two functions over the same variable (e.g. $f_1(t)$ and $f_2(t)$) is an operation that produces a third function describing how the first function "modifies" the second one. Conversely, the resulting function can be seen as how the second function "modifies" the first. Sometimes the result is used to describe how much the two functions "have in common." In all honesty, the concept of the convolution of two functions is quite abstract, but the frequency with which it appears in nature grants it importance to scientists and engineers. Ultimately, the aim here is to identify its use to electrical engineers, so for now do not dwell solely on its mathematical significance.

A convolution of two functions is denoted with the operator "$*$", and is written as:

Convolution of f1(t) and f2(t)

where $\tau$ is used as a "dummy variable." To aid in understanding this equation, observe the following graphic:

Convolution of two square pulses, resulting in a triangular pulse

Before diving any further into the math, let us first discuss the relevance of this equation to the realm of electrical engineering.

Why is the convolution integral relevant?

Most electrical circuits are designed to be linear, time-invariant (LTI) systems. Being "linear" implies that the magnitude of a circuit's output signal is a scaled version of the input signal's magnitude. Further, an LTI system that is excited by two independent signal sources will output the sum of the scaled versions of each signal. This extends to an infinite number of independent signal sources, and gives rise to the concept of superposition. Put another way, if an input $x(t)$ causes an LTI system to output $y(t)$, then:

where $k$ is a multiplicative constant. In addition to this, superposition allows us to say:

Being a “time-invariant” system means it does not matter when the input signal is applied – a specific input signal will always result in the same output signal for a given LTI system. Put mathematically, time-invariance can be expressed as:

where $T$ can be viewed as a time delay when dealing with signals through time (i.e. "time-domain signals"). Though not directly, this concept also signifies that an output signal cannot contain frequency components not present in the input signal.

The vast majority of circuits are LTI systems, each with a specific impulse response. The "impulse response" of a system is the system's output when its input is fed with an impulse signal: a signal of infinitesimally short duration. A real-world "impulse signal" would be something like a lightning bolt, or any form of ESD (electrostatic discharge). Basically, any voltage or current that spikes in magnitude for a relatively short period of time may be viewed as an impulse signal. The impulse response of a circuit will always be a time-domain signal, and exists because no signal can propagate through a circuit in zero time; each individual electron involved can only move so quickly through each component. Typically, real-world electronic LTI systems exhibit an impulse response that consists of an initial spike in magnitude, followed by an everlasting and ever-decreasing exponential decay in signal magnitude. The following image describes this graphically.

Typical Unit Impulse Response

So, here's the big deal: the fact that each LTI circuit has a specific impulse response function (here referred to as $h(t)$) is very useful in predicting its behavior given a particular input signal (here referred to as $x(t)$). This is because the input signal itself may be viewed as an impulse train: a stream of continuous impulse functions, with infinitesimally short durations of time between each impulse. This fact, along with superposition, allows one to find the output of an LTI system given an arbitrary input signal by summing the LTI system's responses to each impulse function that makes up the input signal. By allowing the time between each "impulse" of the input signal to go to zero, this approach can be used to determine the output time-domain signal of an LTI system for any time-domain input signal. For example, the following graphic shows the output of an RC circuit when fed with a square pulse:

What is seen here is the integral of the product of the impulse response and the input square wave as the square wave is stepped through time. In the convolution equation above, the operation is done with respect to $\tau$, a dummy variable. In reality, we are taking an input signal, flipping it in time about the origin (not evident with a symmetric square wave), and determining what the integral is at each value of $t$, the delay through time. Since the output of any LTI system is causal (meaning it cannot exist until the signal that excites it has been applied), we must mathematically step through time to see how each impulse of the input affects the LTI system's impulse response, again achieved by stepping through $\tau$, the "time-delay" dummy variable.

A Convolution Example

To see how the convolution integral can be used to predict the output of an LTI circuit, observe the following example:

For an LTI system with an impulse response of $h(t)$, calculate the output, $y(t)$, given the input of:

The output of this system is found by solving:

We only integrate between 0 and $+\infty$ because, if we define $t = 0$ as the time that the input signal is applied, then both $x(t)$ and $h(t)$ have zero magnitude for any time $t < 0$.

From there, we calculate:

Next, we can simplify and compute the integral:

Since for all , we can write the output as:

This result describes the output function for an LTI system with an impulse response when fed the input signal .
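The same style of calculation can be done symbolically. Since the example's exact functions are not reproduced here, this sketch assumes $h(t) = e^{-t}$ for $t \ge 0$ and a unit-step input $x(t) = 1$ for $t \ge 0$:

```python
import sympy as sp

t, tau = sp.symbols("t tau", nonnegative=True)

# Assumed stand-ins for the example's functions:
#   impulse response h(t) = exp(-t), t >= 0
#   input x(t) = 1 (unit step), t >= 0
x_of_tau = sp.Integer(1)
h_shifted = sp.exp(-(t - tau))   # h(t - tau), the flipped-and-shifted response

# y(t) = integral from 0 to t of x(tau) * h(t - tau) d tau
y = sp.simplify(sp.integrate(x_of_tau * h_shifted, (tau, 0, t)))
print(y)   # 1 - exp(-t)
```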

5 Steps to perform mathematical convolution

Often, one may wish to compute the convolution of two signals that can't be described with a single function of time. For arbitrary signals, such as pulse trains or PCM signals, the convolution at any time $t$ can be computed graphically. For signals whose individual "sections" can be described mathematically, follow these steps to perform a convolution:

1.) Choose one of the two functions ($f_1$ or $f_2$), and leave it fixed in $\tau$-space.

2.) Flip the other function about the vertical axis, so that it is time-inverted.

3.) Shift the inverted signal along the $\tau$ axis by $t$ seconds. Choose $t$ to shift the signal to the first "section" of the fixed function that is described by a single equation. The inverted, shifted signal (say, $f_2(t - \tau)$) represents a "freeze frame" of the input after it has been fed to the LTI system for $t$ seconds.

4.) The integral of the product of the two functions, after shifting the inverted function by $t$ seconds, is the value of the convolution integral (i.e. the output signal) at time $t$.

5.) Repeat this procedure through all "sections" of the function fixed in $\tau$-space. By doing this, you can compute the value of the output at any time $t$!
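The five steps above are exactly what a discrete convolution does numerically. A small NumPy sketch, using an assumed 1-second square pulse into an assumed decaying-exponential impulse response:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 5.0, dt)
x = np.where(t <= 1.0, 1.0, 0.0)   # square pulse input, 1 s wide (assumed)
h = np.exp(-t)                      # impulse response e^(-t) (assumed)

# np.convolve flips h, slides it across x, and sums the products at each
# shift; multiplying by dt approximates the convolution integral.
y = np.convolve(x, h)[: t.size] * dt

print(f"peak output ~ {y.max():.3f} at t ~ {t[y.argmax()]:.2f} s")
```

The peak lands at the end of the pulse, matching the analytical value $1 - e^{-1} \approx 0.632$.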

Useful Properties

The following is a list of useful properties of the convolution integral that can help in developing an intuitive approach to solving problems:

1.) Commutative Property:

2.) Distributive Property:

3.) Associative Property:

4.) Shift Property:

if

then

5.) Convolution with an Impulse results in the original function:

where $\delta(t)$ is the unit impulse function

6.) Width Property:

The convolution of a signal of duration $T_1$ and a signal of duration $T_2$ will result in a signal of duration $T_1 + T_2$.

Convolution Table

Finally, here is a Convolution Table that can greatly reduce the difficulty in solving convolution integrals.

Matrix manipulations and properties

Finding the inverse of a matrix is much more complex than finding the inverse of a number. All non-zero real numbers have an inverse (i.e. $x \cdot x^{-1} = 1$). However, not all matrices have an inverse. There are several characteristics that allow us to visibly determine whether a matrix has an inverse, but we will only focus on one: a matrix must be square (i.e. 2×2, 3×3, etc.) to have an inverse. Performing the following manipulations will be a waste of time if a matrix is not square. It is also important to know the inverse matrix property. Just as $x \cdot x^{-1} = 1$, similarly with matrices $A A^{-1} = I_n$, where $I_n$ is the identity matrix (the diagonal from top left to bottom right contains all 1's, and everything else is 0). We take advantage of this property when solving systems of matrices.

In words, the general algorithm for determining the existence of an inverse matrix is to manipulate the matrix into reduced row echelon form (rref). If the rref matrix is an identity matrix, then the inverse matrix exists. Hang on now: earlier I mentioned that there were other, visible characteristics that allow us to determine the existence of an inverse matrix, but now I'm asking you to perform a tedious process (without a calculator) with the same goal? Wouldn't it be easier to first determine whether finding the rref of the matrix is worthwhile? You're right, except we are going to make a simple manipulation, and at the same time that we finish our rref process and determine that an inverse matrix exists, we will have found the inverse matrix! How do we do that? We will create an augmented matrix between our matrix in question, $A$, and the appropriate identity matrix $I_n$, where the size of $I_n$ is equal to the size of matrix $A$. We will perform the same rref process on the augmented matrix $[A \mid I_n]$. If the portion of our augmented matrix previously belonging to matrix $A$ reduces to an identity matrix (indicating the existence of $A^{-1}$), then the portion previously belonging to the identity matrix will equal $A^{-1}$.

Some matrix math

Now, for the math…

Suppose we are asked to find the inverse of the following matrix:

First, we must set up the augmented matrix discussed above. Notice that I have simply placed the identity matrix (of the same size as $A$) on the right of matrix $A$.

Finding the rref of an augmented matrix

Next, we will attempt to find the rref of the augmented matrix. If the portion of the augmented matrix previously belonging to $A$ yields an identity matrix, $A$ is invertible.

rref =

Ok great! The left half of our augmented matrix reduced to an identity matrix. That means two things: the matrix $A$ has an inverse, and we've already found it. If you recall from above, $A^{-1}$ is the right half of the augmented matrix (after finding its rref, of course). So we can conclude:

If our rref of the augmented matrix had yielded anything other than an identity matrix in the left half, we would conclude that $A^{-1}$ does not exist. This method allows us to determine the existence of, and the entries of, $A^{-1}$ for any size matrix.
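The augmented-matrix procedure is easy to verify with SymPy's `rref`. The 2×2 matrix here is a hypothetical example, not the article's original:

```python
import sympy as sp

A = sp.Matrix([[2, 1],
               [5, 3]])   # hypothetical invertible matrix

# Augment with the identity, then row reduce [A | I].
augmented = A.row_join(sp.eye(2))
rref_aug, _ = augmented.rref()

left = rref_aug[:, :2]    # reduces to the identity if A is invertible
A_inv = rref_aug[:, 2:]   # ...and then the right half is A^-1

print(A_inv)                    # Matrix([[3, -1], [-5, 2]])
print(A * A_inv == sp.eye(2))   # True
```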

Describing the process of solving a linear system using the inverse matrix is best done while performing an example. Suppose we have a system $Ax = b$, where $A$ is the coefficient matrix of our system, $x$ is the column vector containing our variables, and $b$ is the solution column vector. We are asked to solve for the column vector $x$ made up of variables $x_1$, $x_2$, and $x_3$.

Typically, we would divide $b$ by $A$ to solve for $x$; however, there is no method for performing division between matrices. By taking advantage of the inverse matrix property $A^{-1} A = I_n$, we can simplify the formula to solve for the column vector $x$. The commutative property does not apply in matrix multiplication, so in general $AB \neq BA$. Therefore we have to be aware of the 'order' in which we multiply:

$A^{-1} A x = A^{-1} b$ simplifies to $x = A^{-1} b$

Notice that since we multiplied by $A^{-1}$ 'first' (on the left) on the left side of the equation, we also multiply by it 'first' on the right side. Now, multiplying the inverse of matrix $A$ by the column vector $b$ will yield a column vector matching our $x_1$, $x_2$, and $x_3$. Below, I have used the equation $x = A^{-1} b$ and plugged in the values. The product between $A^{-1}$ and $b$ is shown on the far right. Note: this article assumes you know how to find the inverse of a matrix. This process is described in my article Finding the Inverse of a Matrix.

Therefore, we can read off $x_1$, $x_2$, and $x_3$ from the resulting column vector. Simple systems (i.e. this 3×3 system) are often easier to solve with algebra instead of finding the inverse of the coefficient matrix and performing matrix multiplication. This application is more practical for larger systems or while working on Matrix Theory homework.
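The $x = A^{-1} b$ recipe above, with a hypothetical 3×3 system standing in for the article's original numbers:

```python
import numpy as np

# Hypothetical system A x = b (stand-in for the article's original values).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([3.0, 5.0, 3.0])

x = np.linalg.inv(A) @ b   # x = A^-1 b, multiplying by A^-1 on the left
print(x)                   # [1. 1. 1.]
```

For large or ill-conditioned systems, `np.linalg.solve(A, b)` is preferred over forming the inverse explicitly.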
