Introduction Many problems in linear algebra (and many other branches of science) boil down to solving a system of linear equations in a number of variables. This in turn means finding common solutions to some “polynomial” equations of degree 1 (hyperplanes). In many cases, however, we are faced with a “nonlinear” system of polynomial equations in more than one variable. Geometrically, this means finding the common points of some “surfaces”. Like Gaussian elimination for linear systems, elimination theory in general is about eliminating a number of unknowns from a system of polynomial equations in one or more variables to obtain an easier equivalent system.

One way to find common solutions to polynomial equations is to solve each equation separately and then compare all the solutions. As you can guess, this is not an efficient approach, especially if the goal is only to show the existence of a solution.

To understand the
importance of elimination theory, let us start by considering the following
“simple” example.

Example Consider a system of two quadratic equations in one variable x:

$$f(x) = a_2 x^2 + a_1 x + a_0 = 0, \qquad g(x) = b_2 x^2 + b_1 x + b_0 = 0.$$
We would like to find a necessary and sufficient condition for the existence of a common solution of this system.

If f(x) and g(x) have a common solution, then they must have a common linear factor, say L(x).
Let

$$f(x) = L(x)\,q_1(x), \qquad g(x) = L(x)\,q_2(x).$$
Then both q1(x) and q2(x) must be linear, and we can write

$$q_1(x) = A_1 x + B_1, \qquad q_2(x) = -(A_2 x + B_2)$$
for some constants A1, B1, A2 and B2 (the signs chosen in q2(x) will make more sense in a moment). Now, since

$$f(x)\,q_2(x) = L(x)\,q_1(x)\,q_2(x) = g(x)\,q_1(x),$$

then

$$f(x)\,q_2(x) - g(x)\,q_1(x) = 0.$$
Explicitly,

$$(a_2 x^2 + a_1 x + a_0)(A_2 x + B_2) + (b_2 x^2 + b_1 x + b_0)(A_1 x + B_1) = 0.$$
Expanding and collecting terms in the above equation gives:

$$(a_2 A_2 + b_2 A_1)x^3 + (a_1 A_2 + a_2 B_2 + b_1 A_1 + b_2 B_1)x^2 + (a_0 A_2 + a_1 B_2 + b_0 A_1 + b_1 B_1)x + (a_0 B_2 + b_0 B_1) = 0.$$
This equation is possible only if the coefficients of x, x^2, x^3 and the constant term are all equal to 0. This gives the following homogeneous system, with the variables arranged in the order A2, B2, A1, B1:

$$\begin{pmatrix} a_2 & 0 & b_2 & 0 \\ a_1 & a_2 & b_1 & b_2 \\ a_0 & a_1 & b_0 & b_1 \\ 0 & a_0 & 0 & b_0 \end{pmatrix} \begin{pmatrix} A_2 \\ B_2 \\ A_1 \\ B_1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}$$
In order for this system to have a nontrivial solution, its coefficient matrix must be noninvertible. In other words, its determinant must be zero:

$$\begin{vmatrix} a_2 & 0 & b_2 & 0 \\ a_1 & a_2 & b_1 & b_2 \\ a_0 & a_1 & b_0 & b_1 \\ 0 & a_0 & 0 & b_0 \end{vmatrix} = 0,$$

which is equivalent to

$$\begin{vmatrix} a_2 & a_1 & a_0 & 0 \\ 0 & a_2 & a_1 & a_0 \\ b_2 & b_1 & b_0 & 0 \\ 0 & b_2 & b_1 & b_0 \end{vmatrix} = 0,$$

since the determinant of a matrix is equal to the determinant of its transpose. This determinant is called the (Sylvester) resultant of f(x) and g(x). Note that the resultant in this case is the determinant of a 4×4 matrix consisting of the coefficients of the two polynomials together with 0’s arranged in a very special way.
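As a quick sanity check of this 4×4 determinant, here is a small Python sketch (not part of the original text; the names det and resultant_quadratics are mine) that builds the matrix of coefficients for two concrete quadratics and evaluates its determinant by cofactor expansion:

```python
def det(M):
    """Determinant by cofactor expansion along the first row
    (fine for the tiny matrices used here)."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def resultant_quadratics(a2, a1, a0, b2, b1, b0):
    """Resultant of a2*x^2 + a1*x + a0 and b2*x^2 + b1*x + b0,
    as the 4x4 determinant of shifted coefficient rows."""
    return det([
        [a2, a1, a0, 0],
        [0,  a2, a1, a0],
        [b2, b1, b0, 0],
        [0,  b2, b1, b0],
    ])

# f(x) = (x - 1)(x - 2) and g(x) = (x - 1)(x - 3) share the root x = 1:
print(resultant_quadratics(1, -3, 2, 1, -4, 3))  # 0
# g(x) = x^2 + 1 has no root in common with f(x):
print(resultant_quadratics(1, -3, 2, 1, 0, 1))   # 10
```

The determinant vanishes exactly when the quadratics share a root, matching the criterion derived above.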

Here is the formal definition of the resultant:

Definition Let

$$f(x) = a_m x^m + a_{m-1} x^{m-1} + \cdots + a_1 x + a_0, \qquad g(x) = b_n x^n + b_{n-1} x^{n-1} + \cdots + b_1 x + b_0$$

be two polynomials of degree m and n respectively such that a_m ≠ 0 or b_n ≠ 0. If m ≤ n, we define the resultant of f(x) and g(x) to be the following determinant, built from n shifted rows of the coefficients of f(x) followed by m shifted rows of the coefficients of g(x):

$$\operatorname{Res}(f(x), g(x)) = \begin{vmatrix}
a_m & a_{m-1} & \cdots & a_0 & & & \\
 & a_m & a_{m-1} & \cdots & a_0 & & \\
 & & \ddots & & & \ddots & \\
 & & & a_m & a_{m-1} & \cdots & a_0 \\
b_n & b_{n-1} & \cdots & b_0 & & & \\
 & b_n & b_{n-1} & \cdots & b_0 & & \\
 & & \ddots & & & \ddots & \\
 & & & b_n & b_{n-1} & \cdots & b_0
\end{vmatrix}$$

where all blank entries are 0.
Note that Res(f(x), g(x)) is the determinant of a square matrix of size (m + n) × (m + n).
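The general (m + n) × (m + n) determinant can also be sketched in code. The following Python snippet (my own illustration, not from the text; the names are hypothetical) stacks the shifted rows of coefficients exactly as in the definition:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def sylvester_resultant(f, g):
    """Res(f, g) for coefficient lists [a_m, ..., a_1, a_0]
    (highest degree first): deg(g) shifted rows of f's coefficients
    stacked over deg(f) shifted rows of g's coefficients."""
    m, n = len(f) - 1, len(g) - 1
    rows = [[0] * i + f + [0] * (n - 1 - i) for i in range(n)]
    rows += [[0] * i + g + [0] * (m - 1 - i) for i in range(m)]
    return det(rows)

# f(x) = x^2 - 4 = (x - 2)(x + 2) and g(x) = (x - 1)(x + 1)(x - 2)
# share the root x = 2, so the 5x5 resultant vanishes:
print(sylvester_resultant([1, 0, -4], [1, -2, -1, 2]))  # 0
# f(x) = x^2 - 9 shares no root with g(x), so the resultant is nonzero:
print(sylvester_resultant([1, 0, -9], [1, -2, -1, 2]))
```

Here m = 2 ≤ n = 3, matching the convention in the definition.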

Example If

then

As a generalization of the first example, we have the following:

Theorem Let

$$f(x) = a_m x^m + \cdots + a_1 x + a_0, \qquad g(x) = b_n x^n + \cdots + b_1 x + b_0$$

be two polynomials of degree m and n respectively such that a_m ≠ 0 or b_n ≠ 0. Then the system

$$f(x) = 0, \qquad g(x) = 0$$

has a solution if and only if Res(f(x), g(x)) = 0.

Example 1 Without
solving the polynomial equations, show that the following system

has solutions.

Solution We compute the resultant of the two polynomials:

therefore, the polynomials f(x) and g(x) have a common root by the above theorem.

One can use the above theorem to determine whether a polynomial system in more than one variable has a solution. The trick is to look at the polynomials in the system as polynomials in one variable whose coefficients are polynomials in the other variables. The next example illustrates that idea.

Example 2 Solve the following system:

Solution We may look at this system as polynomials in y whose coefficients are polynomials in x:

In order to have a
common solution, one must have:

This is equivalent to

The problem is then reduced to solving a polynomial equation in one variable x. Although the solution of this equation does not look easy, one can use a numerical approach to estimate the solutions. These solutions correspond to the x-coordinates of the intersection points. Note also that the original system can be written as

so any solution to the system is an intersection point of an ellipse and a circle, and such points can be found geometrically.
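Since the system of this example is not reproduced above, here is a stand-in pair of conics of the same flavor (my own hypothetical example, not the original one): the circle x^2 + y^2 = 4 and the ellipse x^2 + 4y^2 = 4. Treating both as quadratics in y whose coefficients are polynomials in x, the 4×4 resultant determinant can be expanded with polynomial entries; this Python sketch represents a polynomial in x as its list of ascending coefficients:

```python
def padd(p, q):
    """Add two polynomials in x (ascending coefficient lists)."""
    r = [0] * max(len(p), len(q))
    for i, c in enumerate(p):
        r[i] += c
    for i, c in enumerate(q):
        r[i] += c
    return r

def pmul(p, q):
    """Multiply two polynomials in x (ascending coefficient lists)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def det(M):
    """Cofactor expansion along the first row; the entries of M are
    polynomials in x, so the result is again a polynomial in x."""
    if len(M) == 1:
        return M[0][0]
    total = [0]
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        term = pmul(M[0][j], det(minor))
        total = padd(total, [(-1) ** j * c for c in term])
    return total

# Circle  x^2 + y^2 - 4 = 0  as a quadratic in y: 1*y^2 + 0*y + (x^2 - 4)
# Ellipse x^2 + 4y^2 - 4 = 0 as a quadratic in y: 4*y^2 + 0*y + (x^2 - 4)
a2, a1, a0 = [1], [0], [-4, 0, 1]
b2, b1, b0 = [4], [0], [-4, 0, 1]
res = det([
    [a2,  a1, a0, [0]],
    [[0], a2, a1, a0],
    [b2,  b1, b0, [0]],
    [[0], b2, b1, b0],
])
print(res)  # [144, 0, -72, 0, 9], i.e. 9x^4 - 72x^2 + 144 = 9(x^2 - 4)^2
```

The resultant 9(x^2 - 4)^2 vanishes exactly at x = ±2, and indeed this circle and ellipse meet only at the points (±2, 0), in line with the remark about finding the intersections geometrically.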