I'm hoping that this isn't such a basic question that it gets completely laughed off the site, but why would I want to multiply two polynomials together?

I flipped through some algebra books and have googled around a bit, and whenever they introduce polynomial multiplication they just say 'Suppose you have two polynomials you wish to multiply', or sometimes it's just as simple as 'find the product'. I even looked for some example story problems, hoping that might let me in on the secret, but no dice.

I understand that a polynomial is basically a set of numbers (or, if you'd rather, a mapping of one set of numbers to another), or, in another way of thinking about it, that two polynomials are functions, and the product of the two functions is a new function that lets you apply a single function, provided you were planning on applying the original functions to the number and then multiplying the results together.

Elementary multiplication can be described as 'add $X$ to itself $Y$ times', where $Y$ is a nice integer number of times. When $Y$ is not a whole number, it doesn't seem to make as much sense.

I just want to comment on one subtlety: Actually, a polynomial is not a function even though it induces a function from the underlying field to itself. This is particularly apparent when the field over which you are working is finite: then there are only finitely many functions from the field to itself, but still infinitely many polynomials.
– Rasmus, Nov 22 '10 at 20:34

14 Answers

You have two questions, the explicit one about why you would want to multiply polynomials, and an implicit one in your final paragraph about what multiplication by a non-integer might mean or why we would care to multiply by a non-integer in the first place.

To address the last one first: once you have multiplication by integers, multiplication by fractions will very quickly rear its head. What does multiplying by "one and a half" mean, if multiplying by 2 means "add to itself", etc.? Well, imagine you have a chocolate bar, the kind made up of smaller squares. You can imagine breaking the bar in half, and then figuring out what three times that half will be; that will be multiplying by "three halves" (a.k.a. one and a half). You are really multiplying by an integer, after suitably modifying $X$.

In general, if you need to multiply $X$ by a fraction, $\frac{p}{q}$, imagine dividing $X$ into $q$ equal parts, and then multiplying such a $q$th part of $X$ by $p$ in the sense you have above. That is the same as "multiplying by $\frac{p}{q}$". So multiplying by a fraction is like "abbreviated addition": it means "break up into $q$ equal parts, and then add a $q$th part repeatedly $p$ times."
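
Just to make the "break into $q$ parts, then add $p$ times" recipe concrete, here is a small Python sketch (the numbers $6$ and $\frac32$ are arbitrary illustrations):

```python
from fractions import Fraction

def multiply_by_fraction(x, p, q):
    """Multiply x by p/q the 'repeated addition' way:
    break x into q equal parts, then add a q-th part p times."""
    part = Fraction(x, q)      # one q-th of x
    total = Fraction(0)
    for _ in range(p):         # add it p times
        total += part
    return total

# 6 multiplied by 3/2: half of 6 is 3, added three times gives 9.
print(multiply_by_fraction(6, 3, 2))  # → 9
```

Because exact fractions are used, the "repeated addition" answer agrees exactly with ordinary multiplication by $\frac{p}{q}$.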

So at least, multiplying by fractions makes just as much "natural sense" as multiplying by integers does.

Why bother with numbers other than fractions then? Well, in one sense you don't have to: you can try to stick to fractions and nothing more complicated than that, and you can go very far. But as the Greeks discovered a long time ago, you also run into very big walls very quickly. For instance, if you draw a square which is $1$ foot long on each side, and you try to measure how long its diagonal is (say, for construction purposes), then it turns out that the diagonal is not a number that can be expressed as a fraction; it is an irrational number. So very soon you end up having to consider numbers that are not fractions, and if they are lying around sooner or later you are going to have to multiply them to compute stuff.

So you end up having to find some way of multiplying irrationals as well, even though they no longer seem to fit with that same "natural" meaning they had back when we started with integers. One solution is that every irrational can be approximated by a suitable sequence of fractions (think about computing the decimals one at a time; every time you stop, what you have so far is a rational; for example, $\sqrt{2} = 1.4142\ldots$, and you get $1.4 = \frac{14}{10}$, $1.41=\frac{141}{100}$, $1.414=\frac{1414}{1000}$, etc.). We know what it means to multiply $X$ by each of those fractions in a sensible way, so we say that multiplying $X$ by $\sqrt{2}$ is the number you get by doing the successive multiplications, just like $\sqrt{2}$ is the number you get by doing the successive fractional approximations.
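
You can watch those successive multiplications close in on the answer; here is a short sketch with $X = 3$ chosen arbitrarily:

```python
from fractions import Fraction

X = 3
# successive decimal truncations of sqrt(2), written as fractions
approximations = [Fraction(14, 10), Fraction(141, 100),
                  Fraction(1414, 1000), Fraction(14142, 10000)]

for frac in approximations:
    print(float(X * frac))
# the products 4.2, 4.23, 4.242, 4.2426 close in on 3*sqrt(2) ≈ 4.24264...
```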

This no longer makes sense as "abbreviated addition", but it turns out that it is very, very necessary and very, very useful, in order to make sense of things and be able to compute things that we need to be able to compute (areas, productivity, interest, etc).

As for multiplying polynomials...

One answer: multiplying functions lets you construct more complicated functions out of simpler ones. Or, more to the point, it lets you express more complicated functions in terms of simpler ones. This is particularly important if you want to perform complex computations, as you may then be able to "get away" with performing much simpler computations and multiplying the results, rather than evaluating the really complicated expression directly.

For instance, say you have a single polynomial like $p(x) = x^2-7x+10$. If you realize that $p(x)$ is the result of multiplying the simpler polynomial $x-2$ by the (also simpler) polynomial $x-5$, then whenever you need to evaluate $p(x)$ at a number, say $17$, instead of having to square $17$, then multiply it by $7$, subtract that from the square you computed, and then add $10$ (three multiplications and two additions/subtractions), you can instead take $17$, subtract $2$ to get $15$; then take $17$, subtract $5$ to get $12$; and then multiply $15$ by $12$ (one multiplication and two additions/subtractions), because $x^2-7x+10 = (x-2)(x-5)$, so $(17)^2 - 7(17) + 10 = (17-2)(17-5)$. Much simpler to do.
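
To see the saving in action, here is the same computation done both ways:

```python
def p_expanded(x):
    # x^2 - 7x + 10: three multiplications, two additions/subtractions
    return x * x - 7 * x + 10

def p_factored(x):
    # (x - 2)(x - 5): one multiplication, two subtractions
    return (x - 2) * (x - 5)

print(p_expanded(17), p_factored(17))  # → 180 180
```

Both routes give the same value at every input, because the two expressions are the same polynomial.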

Another: it is usually very hard to find a value $x$ for which the result of doing some complex series of operations will be a desired quantity, $d$. For example, you want to know how much money to put in the bank so that, at the end of five months at a particular interest rate, you will have exactly the amount of money you need to buy that new wide-screen TV. This involves solving equations. Many natural equations can be written down in the form $p(x)=c$, where $p(x)$ is a polynomial expression in the unknown quantity $x$, and $c$ is the desired value. Solving such equations can be difficult in general. If you don't know the quadratic formula, then figuring out the values of $x$ for which the polynomial above, $x^2-7x+10$, is equal to zero can be pretty difficult. Or think about something like $x^4 + x^3 - 120x^2 - 121x = 121$.

On the other hand, figuring out when a product is equal to $0$ is very easy, because the only way for a product to be zero is if one of the two factors is equal to zero. So if you take the equation above and you write it as $x^4+x^3-120x^2-121x-121 =0$, then you are trying to find when a certain polynomial is equal to $0$. If you can write $q(x)=x^4+x^3-120x^2-121x-121$ as a product, $q(x) = p(x)r(x)$, then you have that $q(x)=0$ if and only if either $p(x)=0$ or $r(x)=0$. With some luck, $p$ and $r$ will be "easier" than $q$, so you can solve them. (In the above case, $q(x) = (x^2-121)(x^2+x+1)=(x-11)(x+11)(x^2+x+1)$, so the only way you can get $q(x)=0$ is if $x=11$ or $x=-11$).
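
We can double-check that factorization by multiplying the factors back out. Representing a polynomial as its list of coefficients (index $i$ holding the coefficient of $x^i$), multiplication is just the familiar "multiply every term by every term" rule:

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists
    (index i holds the coefficient of x^i)."""
    result = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            result[i + j] += ai * bj
    return result

# (x^2 - 121) * (x^2 + x + 1)
product = poly_mul([-121, 0, 1], [1, 1, 1])
print(product)  # → [-121, -121, -120, 1, 1]
```

Reading the list back off: $x^4 + x^3 - 120x^2 - 121x - 121$, exactly the $q(x)$ above, and it does vanish at $x = \pm 11$.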

In fact, this is one way to figure out the quadratic formula (did you ever wonder where it came from?). Why are the solutions to $ax^2 + bx+c = 0$ given by $x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$? You can factor out $a$ and get $a(x^2 + Bx + C) = 0$, with $B=\frac{b}{a}$ and $C=\frac{c}{a}$. For this to be zero, you need $x^2+Bx+C=0$. Now, imagine you could write it as a product,
$$x^2 + Bx+C = (x-r_1)(x-r_2).$$
What would $r_1$ and $r_2$ be? If you know how to multiply polynomials, you get that $(x-r_1)(x-r_2)=x^2 - (r_1+r_2)x + r_1r_2$, so you need $r_1r_2=C$ and $r_1+r_2 = -B$. Squaring the latter you get $(r_1+r_2)^2 = B^2$; but
$(r_1+r_2)^2 = r_1^2 +2r_1r_2 + r_2^2$. On the other hand,
$$(r_1-r_2)^2 = r_1^2 - 2r_1r_2 + r_2^2 = (r_1^2+2r_1r_2+r_2^2) - 4r_1r_2 = B^2 - 4C.$$
So $(r_1-r_2)^2 =B^2-4C$. Taking square roots, you have that $r_1-r_2 = \pm \sqrt{B^2-4C}$. And you already know that $r_1+r_2 = -B$. Adding them you get
$$2r_1 = -B\pm\sqrt{B^2-4C}\qquad\text{or}\qquad r_1 = \frac{-B\pm\sqrt{B^2-4C}}{2}$$
and taking the difference between $r_1+r_2 = -B$ and $r_1-r_2 = \pm\sqrt{B^2-4C}$ you get
$$2r_2 = -B\mp\sqrt{B^2-4C}\qquad\text{or}\qquad r_2 = \frac{-B\mp\sqrt{B^2-4C}}{2}.$$
So you get that $r_1 = \frac{-B+\sqrt{B^2-4C}}{2}$ and $r_2 = \frac{-B-\sqrt{B^2-4C}}{2}$, and plugging in $B=\frac{b}{a}$ and $C=\frac{c}{a}$ gives the usual quadratic formula. No way to find it without knowing how to multiply polynomials!
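
As a quick numerical sanity check of the formula just derived, here it is applied to the earlier example $x^2-7x+10$ (so $a=1$, $b=-7$, $c=10$):

```python
import math

def quadratic_roots(a, b, c):
    """Roots of ax^2 + bx + c = 0 via the quadratic formula
    (assumes real roots, i.e. b^2 - 4ac >= 0)."""
    disc = math.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

r1, r2 = quadratic_roots(1, -7, 10)  # x^2 - 7x + 10 = (x - 2)(x - 5)
print(r1, r2)  # → 5.0 2.0
```

And indeed $r_1 r_2 = 10 = C$ and $r_1 + r_2 = 7 = -B$, just as the derivation demanded.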

When you get to Calculus (added: I'm assuming you will "get to Calculus" because you tagged the question as (algebra-precalculus), so presumably you are taking a course labeled as 'precalculus'; but this may not be the case. If you are not going to "get to Calculus", then this paragraph will not tell you anything useful), you will find that there is a particular operation (differentiation, taking derivatives) which is very useful and very important. It tells you how fast a certain quantity is changing, and it can be used to find all sorts of useful things, like what production level will maximize profit in a factory, how big a dose of medicine and how often you should give it to a patient based on how fast they metabolize it, and many other applications. Computing derivatives from first principles with an arbitrary function is pretty labor-intensive; but recognizing a function as being "made up" (through sums, products, quotients, and compositions) of other, simpler functions makes it a very straightforward and easy job.

But in order to be able to recognize that a function is a product of two other functions, you first need to know how to multiply two functions together. Polynomials are one case.

Another situation occurs when the polynomials are measuring different things, and their product is somehow meaningful; maybe one polynomial gives you the length and the other polynomial gives you the width of a certain figure? Their product will be the area, which may be something you need to compute.

And more generally, you can think of polynomials as "abbreviations" for more complicated operations that you are doing with numbers, just like you are thinking of multiplication as "abbreviated addition". In that case, multiplying the two polynomials represents another complicated operation that you need to express in terms of the two simpler ones (addition and multiplication).

@f-Prime: The question is not whether it is long, but whether it is useful and really addresses the question. Length, while certainly a consideration, should not be the consideration.
– Arturo Magidin, Dec 10 '10 at 2:48

Of course, nowadays the best error correcting codes are turbo codes or LDPC codes, which aren't based on polynomials. But for nearly 40 years many digital communications systems were based on multiplying polynomials.

Yes, but at least in the technical report you linked to those are convolutional codes, and I was thinking about BCH codes. However, I think convolutional codes (and thus I guess turbo codes) also involve multiplication of polynomials, although it's not as central to how these codes work. So pretty much any time you send information digitally, multiplication of polynomials is likely to be involved.
– Peter Shor, Nov 23 '10 at 20:03

Why multiply polynomials? Because they are functions, and multiplication is a natural or desirable operation on functions.

Why multiply functions? Because the values of the functions are integers, or rational numbers, or real numbers, or objects of some other more complicated kind that can be multiplied (such as matrices or rotations). If multiplication is natural for these numbers or number-like objects, it is natural for functions whose values are such objects.

Why multiply integers, rational numbers, real numbers, etc? Here is where explanation becomes difficult. The natural operation is not multiplication but the more structured operation of "tensor multiplication" that retains information about the factors that are multiplied. One can represent an Area as a product of type Length x Length that remembers its two-dimensionality, instead of a one-dimensional numerical value of that product that has forgotten its origins. In the same way, for integer multiplication it is most direct to multiply 5 and 6 by drawing 5x6 as a rectangular 2-dimensional array of dots instead of the "numerical evaluation" of that array as a one-dimensional string of 30 dots. The latter is less natural in that it requires a method of enumerating the dots in the grid and there is no preferred order in which to count them. This is also reflected in the ability to naturally multiply 5 gadgets by 6 gizmos, without there being any given ordering of the gadgets or gizmos.

Polynomials, especially polynomials in several variables ($x +3x^2y + 2 z^{10} x$ and such), have more inner structure than numbers and thus can reflect -- in fact, they can be defined and derived from -- the tensor structure of multiplication. So it is not that polynomials exist and we might want to multiply them; it is that multiplication of numbers (or of finite sets) naturally involves more than just numerical information and polynomials are an enhancement of numbers that more directly embody that information so that the multiplication can be performed in a way that retains more of the inner structure.

A hint of this is that integers in base 10 are values of polynomials at $x = 10$ (and those polynomials can be thought of as a "liberation" or "upgrading" of the numbers). Multiplication of integers can sometimes be seen to replicate the patterns in the coefficients when the polynomials are multiplied for general $x$, e.g., compare powers of $102$ and $x^2 + 0x + 2$. Later there are things like generating functions and convolutions that directly exploit polynomials as carriers of information that is enriched compared to using numbers alone.

(This is glossing over some technical questions about commutativity. Also, areas should be sums of LxL products, and one should explain the role of sums as well as products. But these are details that do not affect the main point.)

In applications, multiplication represents interaction or correlation between different effects. Situations where there are several independent processes isolated from each other and contributing to some outcome lead to sums of functions of the different variables, such as $f(x) + g(y) + h(z)$, the terms in the sum accounting for the separate effects. Situations where different parts of the mechanism can interact with each other in producing the outcome, or where there is correlation between different effects (e.g., one intensifies or suppresses the other), almost always involve multiplication when expressed mathematically. For example, if you count the number of handshakes in a group of $n$ people the answer will be of order $n^2$, and if you count the number of possible 3-person interactions in this group it will be of order $n^3$. Nonlinearities reflect the ability to organize the people into pairs, triples etc, and this again reflects the fact that the natural multiplication of two finite sets $A$ and $B$ is the set of ordered pairs $(a,b)$ of elements, one from each set, and the same for triples and higher numbers of sets.

A somewhat faddish term for related ideas is (de- or re-)categorification.

While some of the other answers look good, there are two particular related uses for polynomial multiplication which I didn't see while skimming.

One is related to probabilities. We can use a polynomial to represent the possible outcomes of a random process (so long as there are finitely many), where there is a variable for each possible outcome, and the coefficient of that variable is the probability of it occurring. (We'll just go with linear polynomials for now.)

Now, what would it mean to multiply two such polynomials? Well, supposing that the random events are independent of each other (that is, the outcome of one doesn't affect the probabilities involved in the other), the product of two such polynomials tells us the likelihood of each pair of outcomes. If we flip that unbalanced coin, and have that guy throw rock/paper/scissors, multiplying the two polynomials lists the combined possibilities.

So we can read off that, for instance, the combined probability of getting a head on the coin and having the guy throw scissors is the coefficient of the head-and-scissors term, which is $\frac 2{15}$.

So that's a decent way to organise information about probabilities in a way that lets us combine multiple independent random processes. Raising a polynomial to the $n$th power in this setting gives us the probabilities when the process is repeated $n$ times, assuming again that the outcome of the previous trials doesn't affect the next one.

Raising the coin's polynomial to the third power, for instance, tells us that the probability that we get two heads and one tail is $\frac 49$, and so on. (If we didn't allow $H$ and $T$ to commute with each other, we would get the probability of each possible sequence.)
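
Here is a small sketch of this bookkeeping in Python, assuming (consistently with the $\frac 49$ figure above) that the unbalanced coin lands heads with probability $\frac 23$; the tuple labels and the helper `combine` are just illustrative choices:

```python
from fractions import Fraction

def combine(p, q):
    """'Multiply' two outcome polynomials: each is a dict mapping an
    outcome tuple to its probability. Independence means the
    probability of a combined outcome is the product of the parts."""
    result = {}
    for a, pa in p.items():
        for b, qb in q.items():
            result[a + b] = result.get(a + b, 0) + pa * qb
    return result

coin = {('H',): Fraction(2, 3), ('T',): Fraction(1, 3)}

three_flips = combine(combine(coin, coin), coin)

# outcomes here are ordered tuples, so the 'commuting' version is
# recovered by summing over orderings: probability of exactly two heads
two_heads = sum(v for k, v in three_flips.items() if k.count('H') == 2)
print(two_heads)  # → 4/9
```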

It's also worth noting that it's quite effective to use polynomials to keep track of proportions when forming mixtures of things which are themselves mixed together (say, in chemistry), by substituting the polynomial representing a mixture's contents for the variable representing that mixture. Multiplication doesn't immediately have such a useful interpretation there, though.

Another thing we can do is to use the coefficients of polynomials to count the number of things of a given type. If we have a (finite) bunch of things with integer "sizes" or "weights", and we want to record the number of things with a given weight, we can use the coefficient of $x^n$ in a polynomial to record the number of objects with weight $n$. The product of two such polynomials will then count the number of ways to form a pair of objects with total weight $n$, for each $n$.

For example, suppose we want to know the number of ways to put change for $n$ cents into a vending machine using $5$ coins, for each possible $n$. (And we'll assume that the coins have values $\{1, 5, 10, 25, 100, 200\}$.) We can use the polynomial $c(x) = x^1 + x^5 + x^{10} + x^{25} + x^{100} + x^{200}$ to represent our set of coins. Sequences of $5$ coins are then represented by $(c(x))^5$. This is a polynomial with a fairly large number of terms, reflecting the fact that there are lots of ways to choose a sequence of $5$ coins in order (specifically, $6^5 = 7776$) and, more to the point, that there are many different possible sums those coins could have.

This tells us, for instance, that there are $120$ distinct ways to put coins in a vending machine to make $\$1.41$, since the coefficient of $x^{141}$ is $120$. There is, of course, only one way to make change for $10$ dollars (which is to put in all $2$-dollar coins), so the coefficient of $x^{1000}$ is $1$. Note also that if we were to evaluate this polynomial at $x = 1$, we would get the total number of possible sequences, which is $7776$. It's also perhaps interesting to see which terms are missing from this polynomial (have a coefficient of $0$), telling us that there's no way to make that amount using $5$ coins.
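
A short Python sketch of this count, using dictionaries mapping exponents to coefficients (the representation is an arbitrary choice):

```python
def poly_mul(a, b):
    """Multiply polynomials given as {exponent: coefficient} dicts."""
    result = {}
    for i, ai in a.items():
        for j, bj in b.items():
            result[i + j] = result.get(i + j, 0) + ai * bj
    return result

# c(x) = x^1 + x^5 + x^10 + x^25 + x^100 + x^200
c = {v: 1 for v in (1, 5, 10, 25, 100, 200)}

# (c(x))^5: ordered sequences of 5 coins, tallied by total value
power = c
for _ in range(4):
    power = poly_mul(power, c)

print(power[141])           # → 120 (ways to make $1.41)
print(power[1000])          # → 1   (five $2 coins)
print(sum(power.values()))  # → 7776 = 6^5 total sequences
```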

In fact, both of these approaches generalise to power series (which are like polynomials, but which have infinite sequences of terms), and the effectiveness of the approach really becomes more apparent in that more general setting. This technique is called generating series (or the somewhat unfortunate, but perhaps more common name "generating functions").

Anyway, I think that gives some impression of some of the variety of things that one can do with polynomials. (I'll just leave that pun there for the algebraic geometers. ;)

When solving real-world problems with simple algebra, it is not uncommon to have polynomials scattered throughout the equation. Given $\frac{1}{3x+4}=2x-1$, which isn't an extraordinarily complex equation, the first thing you might want to do is multiply both sides by $3x+4$, which means you will need to be able to deal with $(2x-1)(3x+4)$.

It's possible to write down the set of matches between two strings as the result of multiplying two polynomials constructed from the strings. This turns out to be very useful: indeed, fast algorithms for string matching exploit the fast Fourier transform method used to multiply polynomials.

To see how this works, think of a binary string as representing coefficients of a polynomial. Then matches between the strings correspond to nonzero coefficients of the product of the polynomial from one string and the reverse of the polynomial from the second. Here, the "multiplication" is over the Boolean AND-OR semiring (AND as multiplication, OR as addition), rather than the usual integers or the field GF(2).
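
Here is a naive (quadratic-time, non-FFT) sketch of that construction, just to show the mechanics; the example strings are arbitrary:

```python
def bool_convolve(a, b):
    """Polynomial product over the AND-OR semiring:
    coefficient k is OR over i of (a[i] AND b[k-i])."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] |= ai & bj  # OR-AND in place of plus-times
    return out

text    = [1, 0, 1, 1, 0]  # binary string 10110
pattern = [1, 1]           # binary string 11

# multiply the text polynomial by the *reversed* pattern polynomial;
# a 1 at position k means that in the corresponding alignment some
# '1' of the pattern lines up with a '1' of the text
hits = bool_convolve(text, list(reversed(pattern)))
print(hits)  # → [1, 1, 1, 1, 1, 0]
```

The fast versions replace this double loop with FFT-based multiplication, which is where the speedup for string matching comes from.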

The first paper that did this was by Fischer and Paterson in 1974, for matching with 'don't-care' symbols. A good recent reference is this paper by Indyk which contains a number of useful references. The basic idea is folklore: express the string matching problem as a convolution and then you're done.
– Suresh Venkat, Nov 23 '10 at 16:07

When you take calculus, you will need to factor a polynomial $p$ as a product of two polynomials $a$ and $b$. If you know how polynomial multiplication works, then finding factorizations is easier. Learn how to multiply now so that you can factor easily later. :)

Yes, but why would I want to factor polynomials?
– Qiaochu Yuan, Nov 22 '10 at 21:12


@Qiaochu: I appreciate the Socratic questions (really), but I think peterdhorn is on to something here. One of the most basic and well-motivated problems that I can think of is the need to maximize or minimize something. According to differential calculus, you can do this for nice functions if you can find where the derivative is equal to zero. If the function -- equivalently, its derivative -- is a polynomial (and that's a nice simple case to understand anyway), then -- hey presto -- you want to factor polynomials.
– Pete L. Clark, Nov 22 '10 at 21:20


However my comment on joshdick's answer also applies here: someone has been asked why he's doing something in his current study of mathematics. Can we really do no better than refer to things in a course several years off that he might or might not take? Is a high level of faith a necessary condition for studying mathematics?
– Pete L. Clark, Nov 22 '10 at 21:28

@Pete: that's fair. But I still think you don't have to go all the way to calculus to justify a concept like multiplying polynomials.
– Qiaochu Yuan, Nov 22 '10 at 21:29

-1. I do not think this answer is suited to the level of the OP.
– Qiaochu Yuan, Nov 22 '10 at 21:10


By the way, there is a common assumption here that someone who is asking about polynomials now will be taking calculus later. Anyone care to estimate the conditional probability of this? I say less than 50%.
– Pete L. Clark, Nov 22 '10 at 21:22


@Pete L. Clark: At least with me, that assumption came from the tag (algebra-precalculus). Had it been tagged (algebra-elementary) or some such, I would certainly not have mentioned calculus.
– Arturo Magidin, Nov 22 '10 at 21:50

@Arturo: fair enough, I neither looked carefully nor thought deeply about the tagging. For what it's worth, I think it is absolutely appropriate for an explanation about the merits of multiplying polynomials to end with some allusions to calculus. It just shouldn't begin there...
– Pete L. Clark, Nov 23 '10 at 4:19

In terms of the math itself, it is essentially a skill you need to learn. Mathematics (along with many other sciences) becomes more abstract as you go up in rank, so to speak. You can think of multiplying and adding polynomials as the next logical step after learning how to multiply decimals.

There is a natural progression, both historic and customary, from the most concrete to the more abstract. In grade school, the first mathematical concept you learned is numbers; shortly thereafter you were introduced to how you could combine two numbers to form a third one (namely through addition and multiplication). After being introduced to the natural numbers, the concept of combining two numbers was expanded to fractions, which also introduced the concept of division (the opposite of multiplication, if you will). After getting comfortable with fractions, you were introduced to decimals and irrational numbers, and the rules for combining and manipulating those numbers.

At this point, mathematical education turns to more abstract concepts: the idea of functions and variables. You spend plenty of time (depending on how far you go in math, maybe the rest of your mathematical career) studying functions, how they behave, how to manipulate them. It turns out that polynomials are some of the simplest functions you can think of, so it is only natural that you start with them. Following the trend as before, after understanding what functions (specifically polynomials) are, a natural next step is considering how you can combine those polynomials to make new polynomials. Later on in math, you will be introduced to even more ways of combining and manipulating functions to create new functions.

This is where you are right now: you are learning to manipulate and combine an (if you think about it) incredibly abstract concept, an object that took thousands of years of human thought to finalize. Of course, in reality this is just a small stepping stone to the greener pastures ahead; assuming you go on in mathematics, you will encounter some of the most mind-bogglingly abstract concepts (some of the negatively voted answers to this question will give you just a small hint of what's out there). You will find that the very simple concepts of addition and multiplication (and their respective inverses) will go a long way in the study of mathematics, but stripped of all their cuddly original meaning, they become the most beefed-up behemoths of abstract mathematical thought.

Now, enough with the theoretical "it's the way it should be"; let's move on to practicalities. Applications of numbers are easy enough to see: addition, multiplication, subtraction, and division of numbers are everyday things. Considering how to apply polynomials to your everyday life (especially why you would ever think of combining polynomials) requires a little greater stretch of the mind (or a little more mathematics :D ), at least without appealing to calculus (which is certainly an easy cop-out).

Polynomials describe a large variety of everyday processes and events, the most common being the quadratic polynomial (this is due to a very deep physical fact, but I promised to avoid calculus). The free-fall trajectory of an object in a gravitational field is described (neglecting friction) by a quadratic polynomial. Even in simple physical problems you can see the multiplication of monomials and quadratics. For example, finding when a fired cannonball hits the ground amounts to finding a root of the polynomial; conversely, if you know where the cannonball hit the ground, you can multiply the monomials $(x-r)$, one for each root $r$, to recover the trajectory.

If we deviate a little from the math you have been exposed to, there are more abstract polynomials called Laguerre Polynomials that are present in the solution to the hydrogen atom. So if you (for some reason) wanted to calculate the shape of a 3D orbital of the hydrogen atom, you would need to multiply a lot of polynomials. Related polynomials play important roles in other problems in quantum mechanics and classical physics (Legendre Polynomials, for example).

Hopefully this clears some of it up; unfortunately, in a large sense it really is a stepping stone into the world of higher mathematics. Polynomials are important on their own, but for reasons that... also become much more obvious later on.

FWIW, for the radial wave solutions, you're multiplying the Laguerre polynomial with (decaying) exponentials and a normalizing factor, not with other polynomials. (On the other hand, for the full description, you then multiply with the spherical harmonics, but those are trigonometric polynomials...)
– Guess who it is., Nov 23 '10 at 4:26

@J.M. You are right that the actual solution involves a single polynomial. But if you are doing Laguerre polynomials by hand they are (well, can be) defined through a recursion relation and thus require you to multiply polynomials together.
– crasic, Nov 23 '10 at 9:47

Multiplication of polynomials has a very nice combinatorial interpretation.

Take a simple example: $1+x=x^0+x^1$. If you perform an experiment once, that experiment can either succeed or fail. You can fail the experiment in only one way in one performance, or you can succeed in one way. This is precisely encoded in the coefficients of the polynomial.

Look now at the product: $(1+x)\times(1+x)=1\times x^0+2\times x^1+1\times x^2$. The coefficients now encode in how many ways you can fail twice, how many ways you can succeed once and fail once and how many ways you can succeed twice.

So, I guess you get the trick now, $(1+x)^n$ will have encoded in its $k$th coefficient in how many ways you can succeed $k$ times but fail $n-k$ times.
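
This is easy to watch happen in code: repeatedly multiplying by $(1+x)$ builds up the binomial coefficients.

```python
def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists
    (index = exponent)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

p = [1]                      # the polynomial 1
for _ in range(5):
    p = poly_mul(p, [1, 1])  # multiply by (1 + x) five times

print(p)  # → [1, 5, 10, 10, 5, 1]: ways to succeed k times out of 5
```

The $k$th entry is $\binom{5}{k}$, the number of ways to succeed $k$ times and fail $5-k$ times.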

I think a better metaphor for multiplication comes from physics. If your speed $s$ is constant over some time $t$, then your distance traveled is just $s$ times $t$. Whatever fraction of the time you travel, you will expect to have traveled some distance. So even fractional multiplication makes some sense.

In the real world, it's very common to use polynomials (often linear ones) to approximate things that are changing. For instance, if you are a grocer trying to predict how many tomatoes you will sell tomorrow, it's not a bad approach to guess it will be the same as yesterday. It's also not a bad approach to guess it might be the average of the last two days. Or, you might come up with a more complicated formula involving the last 50 days. All of these attempts to guess will have an error. This is the point of studying these things: to do a better job and make smaller errors. (For reasons I can't go into, the squares and cubes of values can sometimes help make approximations more accurate.)

My point is that any quantity is a potential polynomial. You might approximate your customers per day as a polynomial of the number of customers of the last two days. You might approximate the value of sales per customer as another polynomial. Well, if you wanted to know your potential income, customers times sales, you might multiply them.

I use polynomials all the time. For instance, when I was helping a friend lose weight, I approximated their weight loss behavior using a polynomial. It was fairly accurate.

Apart from the mundane reasons, e.g. "you need it to do certain computations" there are also some more technical reasons:

The Stone-Weierstrass theorem: Let $X$ be a compact metric space and let $A$ be a $\mathbb{C}$-subalgebra of $C(X,\mathbb{C})$. If $A$ separates points in $X$ and vanishes at no point of $X$ and is self-conjugate, then $A$ is dense in $C(X,\mathbb{C})$.
The polynomial functions on $[a,b]$ with complex coefficients are the prototypical example of such a subalgebra. Of course verifying that it is an algebra and has the stated properties is an application of multiplication of polynomials.

Orthogonal polynomials. These are somewhat important in the theory of special functions. For example in [Andrews, Askey and Roy] there is an entire chapter devoted to families of orthogonal polynomials.

Not too sure about the third justification (unless as crasic says in his comment to his answer that you'll be generating the polynomials recursively); for most manipulational purposes, either of the explicit sum representation, hypergeometric representation, or Rodrigues formula of an orthogonal polynomial might be more convenient.
– Guess who it is., Nov 23 '10 at 10:33

Yes, I was reaching a bit there. And I actually do believe that mundane computation is the most important reason. Simply put, you need the practice to be able to do things accurately and at a reasonable pace. In addition, there are power series, which you'll probably never feel confident about manipulating if you're not used to working with polynomials.
– kahen, Nov 23 '10 at 10:44

In digital signal processing, a signal of finite length may be represented as a polynomial (its Z-transform). So the convolution of two finite-length signals is simply the multiplication of the corresponding Z-transform polynomials.
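
For instance, convolving two short signals gives exactly the coefficients of the product of their polynomials (the signal values here are arbitrary):

```python
def poly_mul(a, b):
    """Polynomial multiplication of coefficient lists; for signals,
    this is exactly discrete ('full') convolution."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# two short "signals"; their Z-transforms are polynomials with these
# coefficients, so convolving the signals = multiplying the polynomials
x = [1, 2, 3]
h = [4, 5]
print(poly_mul(x, h))  # → [4, 13, 22, 15]
```

This matches what a library convolution routine (e.g. `numpy.convolve` in its default mode) would return for the same inputs.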

This is actually the best answer that highlights the most important application. Convolution is the fundamental operation and in signal processing it needs to be done very efficiently. This is why we need fast algorithms to multiply two polynomials.
– ShitalShah, May 21 at 10:41