A powerful generalization of this idea is to represent functions by the differential (or in the discrete case, difference) equations they satisfy. Restricting to linear differential equations with polynomial (or rational function) coefficients gives the ring of holonomic functions, which includes many of the familiar special functions (exponentials, Bessel functions, hypergeometric functions, etc.) and even generalizations thereof.

Assuming that one can do exact computations with the initial values of the differential equations and with the coefficients in the polynomials in the differential equations (for example, assuming that all numbers are algebraic numbers), addition, multiplication, indefinite integration, testing equality, etc. of holonomic functions can be done completely algorithmically by computing new differential equations (analogous to computing new annihilating polynomials for algebraic numbers) and doing finitely many operations on initial values. This idea is the basis of many recent developments in computer algebra.
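To make the "finitely many operations on initial values" idea concrete, here is a minimal pure-Python sketch (function names are mine, and a real system would also compute a new annihilating ODE for the sum symbolically; here I only manipulate the coefficient streams that the ODEs determine):

```python
from fractions import Fraction
from math import factorial

def exp_coeffs(n):
    """Taylor coefficients of exp at 0, generated purely from the ODE
    f' = f with initial value f(0) = 1 (recurrence: (k+1)*a[k+1] = a[k])."""
    a = [Fraction(1)]
    for k in range(n - 1):
        a.append(a[k] / (k + 1))
    return a

def cos_coeffs(n):
    """Taylor coefficients of cos from f'' = -f, f(0) = 1, f'(0) = 0
    (recurrence: (k+1)*(k+2)*a[k+2] = -a[k])."""
    a = [Fraction(1), Fraction(0)]
    for k in range(n - 2):
        a.append(-a[k] / ((k + 1) * (k + 2)))
    return a[:n]

# Closure under addition: the coefficient stream of exp + cos is just the
# term-wise sum, obtained by finitely many exact rational operations.
s = [x + y for x, y in zip(exp_coeffs(10), cos_coeffs(10))]
```

Once a common annihilating ODE for the sum is known, testing equality of two holonomic functions likewise reduces to comparing finitely many of these exact initial coefficients.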

I didn't clarify what I meant by "most", so I can understand how you ended up interpreting it as "the set of irrational square roots has a different cardinality than the set of rational square roots" (which it does not, as you point out).

What I did have in mind was that, if you consider the square roots of all integers up to n, the proportion that are irrational tends to 1 as n tends to infinity.

Well, yes, you can say that a square number n has Kolmogorov complexity O(log sqrt(n)), since it can be described by its root, so in that sense, in the subset of Q+ bounded by a given KC there are more irrational square roots (I just restated your definition, I know.)

Are there not an infinite number of rational roots as well? They wouldn't be integers, but as long as the result can be expressed as a fraction, it's rational. IIRC, there should be an infinite number of both between any two integers.

Unless I am mistaken, there are about sqrt(n) squares in (0, n), so there are around n - sqrt(n) non-squares; then sqrt(n)/(n - sqrt(n)) ---> 0 as n goes to infinity, which indicates that squares are vanishingly rare among the natural numbers, hence that most square roots are irrational.
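A quick numeric check of that proportion (the function name is my own):

```python
from math import isqrt

def irrational_sqrt_fraction(n):
    """Fraction of integers in 1..n whose square root is irrational.
    sqrt(k) is rational iff k is a perfect square, and there are exactly
    isqrt(n) perfect squares in 1..n."""
    return (n - isqrt(n)) / n

# The proportion tends to 1 as n grows:
# irrational_sqrt_fraction(100)   -> 0.9
# irrational_sqrt_fraction(10**6) -> 0.999
```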

It's no wonder that you couldn't implement exponentiation: algebraic numbers aren't closed under exponentiation in the vast majority of cases. In particular, aside from the trivial cases where a = 0 or a = 1, if a and b are algebraic and b is irrational then a^b is transcendental (the Gelfond-Schneider theorem).

That was in the section where one of the numbers was always a known rational. I was talking about computing a^(n/d) with known integers n and d.

The /d part was already handled by taking roots, so really I only had integer powers in mind as the goal.

I do know how to start. Typical do-the-opposite-to-x. The problem is that, for example, squaring the roots of x^2 - 5x + 3 gives x - 5*sqrt(x) + 3. That's not a polynomial, since it has a sqrt(x) in it. I just didn't figure out all the details of rearranging and squaring sides to fix it, especially not for higher exponents.
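For what it's worth, the rearranging-and-squaring can be avoided with the classic parity trick: p(sqrt(x)) * p(-sqrt(x)) is a polynomial in x, because the product p(y) * p(-y) is even in y. A sketch (function names are mine):

```python
def polymul(p, q):
    """Multiply coefficient lists (index = degree, low order first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def square_roots_poly(p):
    """Given p with roots r_i, return a polynomial (up to an overall sign)
    whose roots are r_i^2.  p(y)*p(-y) is even in y, so keeping only the
    even-degree coefficients gives q with q(y^2) = p(y)*p(-y)."""
    p_neg = [c if i % 2 == 0 else -c for i, c in enumerate(p)]
    return polymul(p, p_neg)[::2]

# x^2 - 5x + 3, coefficients low to high: [3, -5, 1].  Its roots squared
# satisfy x^2 - 19x + 9 (sum of squares = 25 - 6 = 19, product = 3^2 = 9).
q = square_roots_poly([3, -5, 1])
```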

Indeed, it's just Horner's rule in disguise. This version is pretty much the fastest you can do for polynomials of degree up to a hundred or so. For truly enormous polynomials, there are even faster algorithms based on the fast Fourier transform (see J. von zur Gathen and J. Gerhard, "Fast algorithms for Taylor shifts and certain difference equations" link).

I really enjoy stuff like this! Earlier this year I was working on a side-project of representing all numbers as relational units. The rules I set myself were simple. "Numerals" could only be a set of unique symbols countable in one direction; a symbol (represented by an Enum value) of 2 characters will always be an operator, and more than 2 characters will always be a "non-algebraic" (i.e. sin, cos, lim, etc.). A unit symbol cannot be any of the following: i, e, π, <, >, =, -, *, !, (, ), ',', [, ], {, }, |, :, ., or ~. Also, addition was the "only" operation that I would let myself use. A unit could be represented as a symbol or a grouping of symbols (aka expression units). Any mathematical operation between expression units had to result in an expression unit meeting the contract above. Units must also be able to be represented as sets, with inclusive and/or exclusive boundaries.

So I came up with something along the lines of an object that looked like this:

So basically if I declared a new ExpressionUnit('x'), then assuming the symbols 0 and 1 are our additive and multiplicative identities respectively, and we have 0,1,2,3,4,5,6,7,8,9,a,b,c as our symbol set, I'd get
...
(1) * (c(x^1)) + (0)
...
so you can just recurse all day through that as soon as you call a getter or setter, since each unit is composed of more terms.

Or if you wanted -1/3, it would be multiplicand * (c((-3)^(-1))) + addend. Multiplication was handled by an addition loop, and subtraction was handled by adding a two's complement representation (you can do this with a bitset, btw, so no "real" length restrictions). Exponents are also loops of multiplication (so, transitively, addition). Division is a negated subtraction loop where the remainder is written as an addend composed of remainder * divisor^(-1). So basically these terms can all become polynomials, all written as monomials, using this method... I think.

EDIT: I didn't have an ending to what I was doing...And holy math symbols, formatting.
There was a bunch more code and it was lots of fun to program. Keep doin whatcha do! It's seeing stuff like this that makes me LOVE programming!

Cool stuff! Two or three weeks ago I wrote a program to find a polynomial that annihilates all sums of roots of two others: algsum.c. My algorithm is derived directly from the proof that the algebraic numbers form a field; it is based on a determinant computation (which gives you the polynomial directly).
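Without reproducing algsum.c, the same determinant idea can be sketched in pure Python, assuming the standard construction: take companion matrices of the two polynomials, form their Kronecker sum (whose eigenvalues are all pairwise sums of roots), and read off its characteristic polynomial via the Faddeev-LeVerrier recurrence:

```python
from fractions import Fraction

def companion(a):
    """Companion matrix of the monic x^n + a[n-1]*x^(n-1) + ... + a[0];
    its eigenvalues are the roots of that polynomial."""
    n = len(a)
    M = [[Fraction(0)] * n for _ in range(n)]
    for i in range(1, n):
        M[i][i - 1] = Fraction(1)
    for i in range(n):
        M[i][n - 1] = -Fraction(a[i])
    return M

def kron_sum(A, B):
    """Kronecker sum A (x) I + I (x) B; its eigenvalues are all sums
    alpha + beta of an eigenvalue of A and an eigenvalue of B."""
    n, m = len(A), len(B)
    K = [[Fraction(0)] * (n * m) for _ in range(n * m)]
    for i in range(n):
        for j in range(n):
            for k in range(m):
                K[i * m + k][j * m + k] += A[i][j]
    for i in range(n):
        for k in range(m):
            for l in range(m):
                K[i * m + k][i * m + l] += B[k][l]
    return K

def charpoly(A):
    """Coefficients [1, c1, ..., cN] of det(xI - A) = x^N + c1*x^(N-1) + ...
    + cN, computed exactly by the Faddeev-LeVerrier recurrence."""
    N = len(A)
    M = [[Fraction(i == j) for j in range(N)] for i in range(N)]
    coeffs = [Fraction(1)]
    for k in range(1, N + 1):
        AM = [[sum(A[i][t] * M[t][j] for t in range(N)) for j in range(N)]
              for i in range(N)]
        c = -sum(AM[i][i] for i in range(N)) / k
        coeffs.append(c)
        M = [[AM[i][j] + (c if i == j else Fraction(0)) for j in range(N)]
             for i in range(N)]
    return coeffs

# sqrt(2) + sqrt(3): companion matrices of x^2 - 2 and x^2 - 3 give
# x^4 - 10*x^2 + 1, the classic minimal polynomial of sqrt(2) + sqrt(3).
poly = charpoly(kron_sum(companion([-2, 0]), companion([-3, 0])))
```

Products of roots work the same way with the Kronecker product instead of the Kronecker sum.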

This looks so very similar to Fourier Transforms and the more general Z-Transform. It's clearly not the same application, but Z-transforms are used to design and evaluate digital filters.

In particular, digital filters have several physical layouts, but a basic method involves the creation of a feedback path defined by multiplications, delays, and additions. If, for instance, you knew the coefficients and the structure of the filter, you could write down a transfer function (H(z) = Y(z)/X(z)). One example is averaging every incoming sample with the previous sample (a 2-point moving average). This function is quickly found by inspection, and a linear equation can be written down, where
y[n] = 0.5*x[n] + 0.5*x[n-1].

The z-transform is used to find the poles (roots of the denominator) and zeros (roots of the numerator) of a transfer function. This example has only zeros (it's a Finite Impulse Response filter). Its z-transform is
H[z] = 0.5 + 0.5*z^(-1), or rewritten as
H[z] = 0.5*(z+1)/z, which has a zero at z = -1.
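You can see that zero numerically by evaluating H on the unit circle (a sketch; freq_response is my own helper):

```python
import cmath

def freq_response(b, omega):
    """Evaluate H(z) = sum b[k] * z^(-k) at z = e^(i*omega), i.e. the
    frequency response of an FIR filter with coefficients b."""
    z = cmath.exp(1j * omega)
    return sum(bk * z ** (-k) for k, bk in enumerate(b))

h = [0.5, 0.5]  # y[n] = 0.5*x[n] + 0.5*x[n-1]
dc_gain = abs(freq_response(h, 0.0))            # 1.0: passes DC
nyquist_gain = abs(freq_response(h, cmath.pi))  # ~0: the zero at z = -1
```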

Anyway, something like
y[n] = x[n-3] - 5*x[n] would give
H[z] = z^(-3) - 5.

You can rearrange if you prefer, and then
H[z] = (1 - 5*z^3) / z^3.
More interesting results are probably well-known, since this is a well-covered topic, but convolution in the linear equation (time) domain is equivalent to multiplication in the z-domain.
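That equivalence is easy to demo on finite sequences: convolve in the time domain, then check that the z-transforms multiply at an arbitrary sample point (helper names are mine):

```python
def convolve(x, h):
    """Direct time-domain convolution of two finite sequences."""
    y = [0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def ztransform(coeffs, z):
    """Evaluate sum coeffs[k] * z^(-k), the one-sided z-transform of a
    finite sequence, at a single point z."""
    return sum(c * z ** (-k) for k, c in enumerate(coeffs))

x, h = [1, 2, 3], [0.5, 0.5]
y = convolve(x, h)  # [0.5, 1.5, 2.5, 1.5]
# For any z: ztransform(x, z) * ztransform(h, z) == ztransform(y, z)
z = 1.7
```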

Discrete Fourier is similar, in the sense that you can evaluate the function at z = e^(i*omega) to find the frequency response. Also, you can think of the Fourier transform as a power series, like the Taylor series expansion of e^x:
1 + x/1! + x^2/2! + ... + x^n/n! + ...

So then I wonder if any FFT algorithms might be applicable to this.
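They are: convolution-is-multiplication is exactly what FFT-based polynomial multiplication exploits, which is also the idea behind the fast Taylor-shift algorithms mentioned above. A toy radix-2 sketch (not production code; real libraries use iterative, numerically careful versions):

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    invert=True computes the inverse transform without the 1/n factor."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def polymul_fft(p, q):
    """Multiply integer polynomials by pointwise-multiplying their DFTs:
    O(n log n) instead of the O(n^2) schoolbook convolution."""
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2
    fp = fft(p + [0] * (n - len(p)))
    fq = fft(q + [0] * (n - len(q)))
    prod = fft([a * b for a, b in zip(fp, fq)], invert=True)
    return [round(v.real / n) for v in prod][:len(p) + len(q) - 1]

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
result = polymul_fft([1, 2], [3, 4])
```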

Edit: Also wanted to mention that (1 - 5*z^3) / z^3 has zeros where z^3 = 1/5, which has 1 real value (z = 5^(-1/3) ≈ 0.5848) and 2 complex values, where the complex values are mirrored across the real-number line (z ≈ -0.2924 +/- 0.5064*i). The bottom half of the complex plane is a mirror image of the top half. The FFT and DCT in some sense take advantage of this fact.