This recurring thread will be for questions that might not warrant their own thread. We would like to see more conceptual questions posted in this thread, rather than "what is the answer to this problem?" posts.

I don't know if it's strictly considered non-financial, and you don't need a grad degree (I don't know if it helps or makes a difference), but I've heard good things about being an actuary. It's usually regarded as a desirable/highly rated position, with high pay comparable to engineering, low stress, and no need to "take your work home."

I recently programmed a simulation of gravity. With it, I can accurately simulate, say, the orbit of the Earth: its distance from the Sun stays approximately constant, and after 365 days it completes a full orbit.

I did this the simplest way I could: calculate the force of gravity between the Earth and the Sun at a point in time, change the momentum of both the Sun and the Earth by F*t, then move the bodies according to their new velocities, and repeat the process. Simply put, I used Euler's method. How accurate could I expect this to be, and how would I calculate its "accuracy" for different delta-t?

Edit: I just read a little more about the problem (wiki), and Euler's method is a solution method for differential equations of one variable. Because I'm also moving the Sun (a little bit) during each step, I think this is probably a diff eq of 2 variables. Having not taken diff-eq yet, what is a resource I could use to learn about it?

You are solving two differential equations of one variable each, so Euler's method is fine. Euler's method's global error scales linearly with the timestep delta-t. You could try implementing a higher-order method like the midpoint method or RK4.

Euler's method is fine as a starting point, and learning about higher order methods is certainly important, but simulating orbital mechanics accurately typically requires a symplectic integrator. Otherwise energy is not conserved and computed orbits wander in phase space.
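
If you want to see that drift, here's a minimal sketch (my own code, not the poster's; it assumes units of AU and years with GM = 4π², and keeps the Sun fixed) comparing explicit Euler with the semi-implicit "symplectic" Euler update:

```python
import numpy as np

GM = 4 * np.pi**2   # sun's gravitational parameter in AU^3 / yr^2
dt = 1e-3           # timestep in years
steps = 10_000      # ~10 years

def accel(r):
    return -GM * r / np.linalg.norm(r)**3

def energy(r, v):
    return 0.5 * np.dot(v, v) - GM / np.linalg.norm(r)

# start both integrators on a circular orbit at 1 AU
r_e, v_e = np.array([1.0, 0.0]), np.array([0.0, 2 * np.pi])  # explicit Euler
r_s, v_s = r_e.copy(), v_e.copy()                            # symplectic Euler

E0 = energy(r_e, v_e)
for _ in range(steps):
    # explicit Euler: position and velocity both updated from the old state
    r_e, v_e = r_e + v_e * dt, v_e + accel(r_e) * dt
    # semi-implicit (symplectic) Euler: update velocity first, then position
    v_s = v_s + accel(r_s) * dt
    r_s = r_s + v_s * dt

print("explicit Euler energy drift: ", energy(r_e, v_e) - E0)
print("symplectic Euler energy drift:", energy(r_s, v_s) - E0)
```

The explicit orbit slowly spirals outward while the symplectic one stays bounded, which is exactly the energy-conservation point above.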

There are many, many equivalent ways of defining a smooth algebra (smooth scheme). It depends on what you want to take as 'given'. I'm going to assume, since you mentioned algebraic geometry, that you are somewhat familiar with the very basics of scheme theory, and can translate your question about smooth algebras into one about schemes.

Probably my favorite definition of a smooth algebra relies on the knowledge of something called an 'etale' morphism. Intuitively, this is like a 'generalized open set', or a 'local isomorphism' for rings. For all intents and purposes, you should really think about it as an open subset not in the Zariski topology, but in a more general, geometric topology.

Then, what it means for a map X -> S of schemes to be smooth is that locally on X you can factor it through a projection map. More precisely, every point x of X has a neighborhood U such that there is a factorization of f: U -> S as U -> A^n_S -> S, where the first map is etale, and the second map is the canonical projection. This says that smooth maps are precisely those which locally (local in the etale topology, the 'geometric topology') look like projections. Compare this to the Rank Theorem for smooth manifolds (see 7.8 of Lee's book on smooth manifolds).

Another way of defining a smooth map of schemes is by appealing to a fiberwise definition, which allows the kind of 'naive' definition of smoothness to come out. Namely, a map f: X -> S is smooth if and only if it is locally of finite presentation (intuitively, the fibers are finite-dimensional), flat (intuitively, the fibers vary continuously), and each of the fibers X_s over s in S is a smooth k(s)-variety. Thus, we can think about a smooth morphism of schemes as a continuously varying family of smooth varieties, and so it suffices for us to explain what a smooth variety over a field should be.

So, let's suppose that X is a variety over k. Since smoothness is an inherently geometric property of X, it's actually not just properties of X that determine whether X/k is smooth. No, to bring forth the geometric aspect of X, we need to base change to \bar{k}; call the result \bar{X}. So, X being smooth is actually a property of its 'geometrification' \bar{X}. The technical condition then is that \bar{X} should be regular, but this is a somewhat unintuitive commutative algebra condition (it says that the tangent space at a point should have the same dimension as the codimension of that point in the ambient scheme).

But, because of a technical lemma (localizations of regular local rings are regular local), to check that \bar{X} is regular it suffices to check this condition at closed points. But closed points of \bar{X} are all \bar{k}-points of \bar{X}, and so checking regularity there actually takes a very familiar form. Namely, let's suppose that locally at a \bar{k}-point x, X is of the form Spec(\bar{k}[t_1,...,t_n]/(f_1,...,f_m)), and the \bar{k}-point is x=(x_1,...,x_n) in \bar{k}^n (technically the maximal ideal (t_1-x_1,...,t_n-x_n)). Then you can form the Jacobian matrix of the polynomials f_1,...,f_m at the point x. X is smooth at x if and only if this matrix has corank equal to the dimension of X. This is PRECISELY the condition one needs to check so that a subset of R^n cut out by equations f_1,...,f_m will actually be a smooth manifold.
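
To see the Jacobian criterion in action, here's a small sympy sketch (my own example, not from the comment above): the circle x^2 + y^2 - 1 passes the criterion everywhere on the curve, while the cuspidal cubic y^2 - x^3 fails it at the origin.

```python
import sympy as sp

x, y = sp.symbols('x y')

def jacobian_rank_at(f, pt):
    """Rank of the 1x2 Jacobian of the single equation f at a point."""
    J = sp.Matrix([f]).jacobian([x, y])
    return J.subs({x: pt[0], y: pt[1]}).rank()

# For one equation cutting a curve out of the plane, smoothness at a point
# means the Jacobian has rank 1 there (corank 1 = dimension of the curve).
print(jacobian_rank_at(x**2 + y**2 - 1, (1, 0)))  # 1 -> smooth
print(jacobian_rank_at(y**2 - x**3, (0, 0)))      # 0 -> singular at the cusp
print(jacobian_rank_at(y**2 - x**3, (1, 1)))      # 1 -> smooth away from it
```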

So, after geometrifying a variety (which is undeniably where smoothness should be decided), and restricting to closed points (which one can do), smoothness is nothing but the classical definition of smoothness in differential geometry.

OK, here's one. Complex numbers are an extension of the reals are an extension of the rationals are an extension of the integers are an "extension" of the set {1}. Anyway, are there sets of numbers which contain the complex numbers as a proper subset? Or is that it?

If some of this is wrong (I didn't exactly study up), feel free to clarify and correct me.

I guess the ambiguity of this question is "what do we want from an extension?" For example, the reason we extend from {1} to the integers is different from why we extend from R to C. We go from R to C because we want a solution to x^2 + 1 = 0. Then we prove that C is algebraically closed (so that all polynomials over C have roots in C), and so we can say that there are no new sets of numbers algebraic over C.

As below, we can formally attach a new symbol, say x, to the complex numbers and then canonically make it a field, C(x), of rational functions in the indeterminate x, but that may not be interesting to you. However, this extension is no longer an algebraic extension (we call these transcendental extensions instead).

We can also extend C to the complex 2-plane C^2 and up (C^n, ...), but those no longer have field structures. What you want from your "set of numbers" may determine the answer to your question.

Beyond the set of complex numbers you can also expand to the set of quaternions. Basically, you introduce two new imaginary units j and k which essentially have the same properties as i, but multiplication among i, j, and k forms a non-abelian group.
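
If you want to play with that, here's a quick sketch (my own illustration) of quaternion multiplication, showing i·j = k while j·i = -k:

```python
# quaternions as (w, x, y, z) tuples representing w + x*i + y*j + z*k
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

print(qmul(i, j))  # (0, 0, 0, 1)  = k
print(qmul(j, i))  # (0, 0, 0, -1) = -k, so multiplication is not commutative
print(qmul(i, i))  # (-1, 0, 0, 0) = -1, just like i^2 in C
```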

If you'd like, you can call "t" a number and think of rational functions in t as numbers. These form a field (just like the rationals, reals, and complex numbers). The complex numbers are contained in the rational functions as constant functions. This field is sometimes written C(t).

One can define many supersets of the complex numbers, like C^2. Historically, complex numbers were developed because there were no real solutions to many polynomial equations. We now know that each polynomial of degree n has n roots in C (counted with multiplicity). Perhaps we may encounter other equations without a solution in C. I think this is what you meant by your question: whether there is another set of numbers in this chain.

Historically, complex numbers were developed because there were no real solutions to many polynomial equations.

Sort of. They were invented as a means to find a real solution to a cubic equation. Note that people were not interested at all in the non-real solutions to these equations; what would the meaning of those be? They were even uneasy about real solutions that could only be expressed using complex numbers (see casus irreducibilis).

In a nice way, no. You can build the integers from {1} by completing addition and subtraction: just add 1+1, 1+1+1, ... and 1-1, 1-(1+1), ... until you get all the integers. You can build the rationals from the integers by completing division. What you're left with is a line that has infinitely many holes in every interval, no matter how tiny it gets. We can get the reals by completing, or filling in the holes of, this line. We can then get the complex numbers by finding a polynomial that does not have real roots and then declaring that it does have a root (for instance, take x^2 + 1, declare it has some root, i, and say that you can add and multiply this by real numbers to get a+bi; this process will actually work for any polynomial that does not have real roots). Any extension of the complex numbers will lose something, like commutativity or associativity, so we consider them as a different structure and not as numbers.

Now, there are more sets of numbers, but they are distinct and independent from the reals and complex numbers. If we go back to the rationals as a line with holes, it turns out we can view it as different shapes with holes. And, actually, for every prime number we can get a different shape out of the rationals that depends on that prime number (these shapes are not as simple or intuitive as a line though). We can then fill in the holes in these shapes to get completely different sets of numbers called p-adic numbers that have weird but useful properties that tell us things in number theory. You can kinda think of these as the "real numbers" associated to a prime number p (in actuality, we view the reals as the p-adics for the "prime at infinity", just some fun jargon). There is one big difference between the reals and the p-adics though. For the reals, there's only one set of numbers bigger than it: The Complex Numbers. But for p-adics, there are many sets of numbers above them! We can choose polynomials that do not have roots in some p-adic set and then do the same thing we did for the reals, by declaring that our polynomials actually do have roots, to get new fields of numbers. But this time, our choice of polynomial matters since we can choose two different polynomials and get two different fields!

The complex numbers have roots to any polynomial in them, so we kinda want to do the same for p-adics. We can add all roots to all polynomials to get a new collection of numbers, called the algebraic closure of our p-adic field. This extends any of the fields created by adding roots of polynomials. The complex numbers are the algebraic closure of the reals (this is the Fundamental Theorem of Algebra). We can view the complex numbers as a 2-dimensional vector space over the reals, but for p-adics, the algebraic closure is an infinite-dimensional vector space over the original p-adics. This causes some problems! There are no holes in the complex numbers; they are both geometrically and algebraically complete. However, the algebraic closure of the p-adics does have holes in it! So we can fill in these holes again to get an even larger set of p-adic numbers, except this one is both geometrically and algebraically complete!
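
To summarize the two towers side by side (my own recap of the above, in the usual notation where Q_p is the p-adics and C_p is the completed algebraic closure):

```latex
% real side: one algebraic closure step, and the result is already complete
\mathbb{Q} \subset \mathbb{R} \subset \mathbb{C} = \overline{\mathbb{R}}

% p-adic side: the algebraic closure is infinite-dimensional over Q_p and
% not complete, so one more completion step is needed to reach C_p
\mathbb{Q} \subset \mathbb{Q}_p \subset \overline{\mathbb{Q}_p} \subset \mathbb{C}_p
```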

In general, there are also extensions of the rationals that are contained in the complex numbers, called algebraic number fields. For instance, instead of filling in the holes of the rationals, we can just say that x^2 + 1 has a root and get the rational numbers with sqrt(-1) added into the mix: the Gaussian numbers. And this process, just like in the p-adic case, depends on the polynomial we choose.

What you're left with is a line that has infinitely many holes in every interval, no matter how tiny it gets.

"Holes" here is little misleading. If you draw a picture of all the rational numbers, the picture cannot be distinguished (visually) from the real numbers. You won't see any gaps or holes.

If you're looking at the rational line around where π should be, you won't find π, but you'll find numbers that get arbitrarily close. Still, you can't point to a spot where a number is missing, because that spot doesn't even exist.

The difference between the rational numbers and the real numbers is a more technical (non-visual) difference. A sequence which "looks" like it ought to converge is called a Cauchy sequence. Examples would be successive rational approximations of π or √2. The reals have the property that if a sequence looks like it ought to converge (if it's Cauchy), then it does converge.

(The rationals do not have this property, obviously, because the sequence: 3, 3.1, 3.14, 3.141, ... spelling out the digits of π does not converge in the rationals).

(And I know you already know this, functor7. I just wanted to point it out to anyone else).

Does anyone have recommendations for an upper-level undergraduate physics student (think: has taken the calc sequence, upper-level differential equations, linear algebra, and probability/stats) for an introduction to nonlinear dynamics, chaos, etc.?

Have you ever worked with/in a particularly "math-y" field of physics? If so, what did you do? What type of math were you using?

I use basic differential geometry and Lie groups/algebras in my robotics work. We use the left-invariant form of the kinematics, say R^T \dot{R} = skew(Ω) for SO(3) and rigid-body rotations, to describe the instantaneous velocities in the body-fixed frame. The texts always include a discussion of the right-invariant (spatial-frame) form of the kinematics, but I've rarely seen it used for anything other than a computational tool when integrals, optimizations, or proofs are easier that way.
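
For concreteness, here's roughly how I'd integrate that form numerically (a sketch with hypothetical constant body rates; the update R_{k+1} = R_k exp(skew(Ω) Δt) stays on SO(3) up to floating point):

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """Map a body angular-velocity vector to its skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

R = np.eye(3)                      # attitude (body to world)
omega = np.array([0.1, 0.0, 0.5])  # hypothetical body rates, rad/s
dt = 0.01

for _ in range(1000):
    # left-invariant kinematics R^T Rdot = skew(omega), i.e. Rdot = R skew(omega)
    R = R @ expm(skew(omega) * dt)

print(np.linalg.norm(R.T @ R - np.eye(3)))  # ~1e-15: still orthogonal
```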

Are there other disciplines where the right-invariant form of kinematics on a Lie group/algebra is used extensively, rather than as an afterthought?

In projective geometry we can show that two cubics always intersect in 9 points (given that we count them properly). We can also show that in general there is exactly one cubic curve that passes through 9 given points. Together, these seem to suggest that there is only one cubic curve, which I'm sure is not the case.

I think the way to resolve this lies in the way in which we count the intersections: basically, if a point x on both curves has the same tangent direction on both curves, then it counts as two instead of one.

So does this mean that any two distinct cubic curves in the projective space share a point with the same tangent direction?

Basically, the nine points of intersection between two cubics don't determine a unique cubic (obviously, because there are at least two), but instead a one-parameter family of cubics. For example, if you try to determine the cubic passing through the nine points {-1,0,1}x{-1,0,1}, you get a linear system whose solutions form a line. So through any nine points you can find a cubic, and through any nine sufficiently generic points you can find a unique cubic. But the nine points of intersection of two cubics are not sufficiently generic.
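
You can check this numerically (a sketch of my own): write a cubic as a vector of its 10 coefficients, impose vanishing at the 9 grid points, and look at the null space of the resulting 9×10 matrix.

```python
import numpy as np
from itertools import product

# the 10 monomials of degree <= 3 in x, y
monomials = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]
points = list(product([-1, 0, 1], repeat=2))  # the 9 grid points

# row p of A evaluates each monomial at point p; cubics through all 9
# points are exactly the null space of A
A = np.array([[x**i * y**j for (i, j) in monomials] for (x, y) in points],
             dtype=float)

rank = np.linalg.matrix_rank(A)
print(rank, len(monomials) - rank)  # 8 2: a 2-dim null space, i.e. a pencil
```

Two obvious members of the pencil are x^3 - x and y^3 - y, both of which vanish on the whole grid.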

You could also look at it through the Cayley-Bacharach theorem which says that any cubic which passes through eight given points also passes through a ninth point dependent only on the eight. So once you know that a cubic passes through eight of the points of intersection, the ninth is automatic, and you can still choose another point freely.

While one can always find a cubic passing through any 9 points, it is apparently not quite always the case that it is unique. We require that the 9 points be in general position, which is to say that there are not any special relationships between them that we wouldn't otherwise expect.

It turns out that the nine intersection points of two cubics are not in general position. A good example of this is the Cayley-Bacharach theorem: if we consider the nine intersection points of two cubics, any cubic curve which passes through eight of them also passes through the ninth. But this would clearly not be true for an arbitrary set of nine points.

Thus, uniqueness fails in this case and we are not forced into accepting only one cubic.

I've recently (as a HS student) been trying to understand the relation between the parallelogram spanned by two vectors and the axial vector produced by the cross product. From the wikipedia page, I understand that the two are related by the Hodge dual, which, to my understanding, maps the bivector to the axial vector. What exactly does this mean? Does it merely establish a bijection between the points in the set defined by the axial vector and those in the one defined by the bivector?

I am confused as to what this relation is, exactly. I tried to read the wikipedia page on the Hodge dual, but found it to be way above my level. Can anyone offer a simplified (as much as possible before losing key details) explanation of this relation?

While researching the above questions, I've been reading a lot about geometric algebra, and am a bit confused about other points. One, what is the geometric product in this algebra? The wiki page with the definition seemed a bit unclear (at least to someone lacking a solid base like me). What is the relation of this geometric product and the inner and outer products?

In short, can anyone clear up the relation between the bivector produced by a Λ b and the vector produced by a x b, and, in addition, can someone explain what the Λ function is?

Any other info you think I should know would be greatly appreciated as well. Thanks!

a Λ b is basically just the parallelogram formed by a and b, except it's also all other parallel parallelograms with the same area and orientation.

Basically, take all oriented parallelograms, then put them into groups: parallelograms that are parallel, oriented the same, and have equal area go in the same group. Those groups form the set of 2-blades in the exterior algebra, and a Λ b is the 2-blade whose group contains the parallelogram formed by a and b.

This 'grouping' is so that parallelograms of area 0 are treated as actually being 0.

Hodge Duality relates the 2-blades to 1-blades, which are just regular old vectors, and the relationship is precisely a Λ b -> a x b.

It also relates 3-blades, which are oriented volumes, to 0-blades, which are scalars, by just thinking of a volume as a scalar, with orientation corresponding to positive and negative.

Edit: To see why 3-blades are just oriented volumes, note that the construction is the same as with 2-blades, except you are taking parallelepipeds, and ALL parallelepipeds in 3-space are parallel, so they are grouped just by their volumes and orientations.

The 'Λ' is a rather dull function; the only thing it does is being linear:

(a+b) Λ c = (a Λ c) + (b Λ c)

(λa) Λ b = λ(a Λ b)

And being anti-symmetric:

a Λ b = - (b Λ a)

One of the consequences of this is that a Λ a = 0, since a Λ a = - (a Λ a).

Now let's say we have a 3d vector space, called U, with basis dx, dy, dz (the reason for this notation is not really important right now). Then we can use Λ to combine two elements of this space, which gives us another linear space UΛU, and we can keep doing this to get U, UΛU, UΛUΛU, ... These are sometimes also simply denoted by Λ^1 U, Λ^2 U, Λ^3 U, ...

Now it turns out we can find a basis of Λ^k U by simply calculating all different Λ-products of k different basis vectors. So for instance Λ^1 U has the basis dx, dy, dz, which gives us our original vector space, Λ^2 U has the basis dyΛdz, dzΛdx, dxΛdy, and Λ^3 U has the basis dxΛdyΛdz. And all other Λ^k U (for k > 3) are trivial.

Now we simply note that Λ^2 U and Λ^1 U have the same dimension, so we can define an isomorphism between them by mapping:

(a dyΛdz + b dzΛdx + c dxΛdy) to (a dx + b dy + c dz)

This is a rather simple case of a Hodge Dual.

Now we can calculate the Λ-product of two vectors in U and map the result back into U by using the mapping we just defined. This gives us (a2 b3 - a3 b2) dx + (a3 b1 - a1 b3) dy + (a1 b2 - a2 b1) dz, which is exactly the cross product a x b.
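
A quick numeric check of that identification (my own sketch): compute the coefficients of aΛb in the basis dyΛdz, dzΛdx, dxΛdy and compare with the cross product.

```python
import numpy as np

def wedge_coeffs(a, b):
    """Coefficients of a^b in the basis (dy^dz, dz^dx, dx^dy)."""
    return np.array([a[1]*b[2] - a[2]*b[1],
                     a[2]*b[0] - a[0]*b[2],
                     a[0]*b[1] - a[1]*b[0]])

a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])

# the map above sends (p dy^dz + q dz^dx + r dx^dy) to (p, q, r), so the
# wedge coefficients should agree with the cross product componentwise
print(wedge_coeffs(a, b))  # [-3.  6. -3.]
print(np.cross(a, b))      # [-3.  6. -3.]
```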

Pretty much all this. A neat thing that's kind of hiding in there is that you usually define the Hodge Dual by using an underlying inner product on the vector space, but here you've just picked a map that was convenient.

Thing is, this actually defines the inner product for you. The inner product defined by the chosen isomorphism is, in this case, the usual one. Of course the cross product needs to play nice with the inner product, because one of the defining properties of the cross product is that its result is perpendicular to its inputs, and perpendicular is defined via the inner product.

As a nitpick, I'm pretty sure that Λ^1 U usually refers to U, and not UΛU. So in standard notation you have made an isomorphism from Λ^2 U to Λ^1 U.

I didn't really want to get into the theory behind Hodge duals because I feared it would get too complicated. This does indeed hide why this map gives you a perpendicular vector and therefore also why I mapped dzΛdx to dy and not dxΛdz to dy. But I thought that it would be simpler to just argue that there exists an isomorphism, and then pick a 'special' isomorphism.

Also, you're right about the Λ^k U notation; for some reason I got the definition right but still used it wrong.

First of all, thank you very much for this explanation. There's a few points I need clarified though, if you have the time.

What does it mean that combining two elements of U gives us a new linear space? Is that just saying that combining two elements of U produces a new vector? I'm sure that my terminology is off here, so pardon anything that is unclear.

Why does calculating the Λ-products of the different basis vectors produce new basis for a different vector space?

How does one go about calculating a Λ-product in the first place? Is it similar to calculating a cross product? I assume that the algorithm for doing so (e.g. the determinant of the matrix of vectors) was defined so that it followed the properties you listed.

Thanks again for taking the time to explain things. I realize that it must be frustrating to explain these concepts to someone who lacks a background in the subject, but it really helps.

What I really meant was that we can use Λ to combine any two elements of U. We can then create a new space consisting of the elements aΛb, where a and b are elements of U. We can then use the rules we defined for Λ to simplify this space; that is, we actually define the following things to be equal to each other:

(a Λ b) = - (b Λ a)

(a + b) Λ c = (a Λ c) + (b Λ c)

(λa) Λ b = λ(a Λ b)

This makes the space UΛU a nice, finite-dimensional vector space. The naive construction would have given us a vector space where each aΛb has its own dimension, which is weird and quite useless.

Now since we've defined different ways to write the same element in UΛU, we can simplify an element aΛb by picking a basis and writing:

aΛb = (a1 dx + a2 dy + a3 dz)Λ(b1 dx + b2 dy + b3 dz)

We can now use the rules we've imposed on UΛU to simplify this to:

(a1 b2 - a2 b1)dxΛdy + (a3 b1 - a1 b3)dzΛdx + (a2 b3 - a3 b2)dyΛdz

In fact we can show that in general if we have a product u1Λu2Λ...Λuk, then we can keep simplifying this until we're left with a linear combination of Λ-products of k different basis vectors.

This simplification is basically all there is to calculating the Λ-product. Sometimes, as with the case of the cross product, it is possible to transport the result back to U or some other space, which will require some more calculation. But that's it, there's nothing else that we can do.

I seem to be having trouble teaching myself number theory. Has anyone tried using the George E. Andrews book to teach themselves? If so, did you have success? Maybe I'm just not cut out for teaching myself, but I have no idea...

What is "an algebra"? i.e. the Griess algebra, Jordan algebra, etc. (I only know these names because of Google, I have no idea what they are)

I thought "Algebra" was a complete (and major) branch in mathematics that doesn't have any variations. Those titles I mentioned suggest that there are more than a few algebras, and according to the terminology I'm accustomed to - this doesn't really make sense.

Algebra has a lot of meanings, whether it be a branch of mathematics, an adjective describing something (algebraic), or a particular type of structure. What they all have in common is that they are talking about things where you can add and multiply, which is basically polynomials.

An algebra is a structure where you can add things, multiply things by some base field (e.g. the rational numbers), and have a 'multiplication' that combines things in a way that respects the previous two operations.

An example of an algebra is 3d Euclidean space, with the cross product as the 'multiplication'. Note that while the cross product plays nicely with scalar multiplication and vector addition, it does not have many nice properties with itself: it's anticommutative, a x b = -(b x a), has no unit, and isn't associative, so (a x b) x c is not the same as a x (b x c).
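
A quick check of those failures (my own two-liner with arbitrary vectors):

```python
import numpy as np

a, b, c = np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([1., 1., 0.])

print(np.cross(a, b), np.cross(b, a))  # [0. 0. 1.] [ 0.  0. -1.]: anticommutative

# not associative: (a x b) x c differs from a x (b x c)
print(np.cross(np.cross(a, b), c))  # [-1.  1.  0.]
print(np.cross(a, np.cross(b, c)))  # [0. 1. 0.]
```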

Other examples: any field is trivially an algebra over itself, as is any ring extension of a field, e.g. polynomial rings. These of course do have some nice properties like units and associativity.

What you're referring to as "algebra" is, more or less, the field of math that is involved with studying sets that have one or more notions of "combining elements within the set" (to get other elements in the set back) defined on them. "An algebra" is a particular mathematical object. Namely, it's a vector space (which has a notion of addition and scalar multiplication defined on it) that also has a notion of multiplication defined on it (and this multiplication needs to satisfy a bunch of requirements). Of course I'm sweeping a lot of details under the rug here, as they aren't really important for your question (though they can/should be studied if you're interested). The main point is that "algebra" as you see/saw it is a general field—that of studying sets with notion(s) of combining elements in them, and "an algebra" is a particular kind of object. Thus, all of the different algebras you listed above are particular examples of "an algebra", each with different types of extra structure added to the initial definition of an algebra. Hope that makes sense/helps.

Not really sure what motivates that idea. That's like saying basketball players have an unhealthy obsession with dribbling. Sure it's not everything, but it's definitely a lot of it, and an absolutely vital part of it.

You know, it might help if you explained what your problem with sets is. Then we might actually understand what point you're trying to get across. But making a vague statement and then acting dismissive when people respond to it is just a waste of everyone's time.

Lambda Calculus is an entirely different thing than the differential or integral calculus you learn in school. It is a formalism used for expressing computations (as in computers).

It captures the ideas of function abstraction and application in a few succinct rules, and was originally used to study the properties of computation itself. It is now used as a basis for many programming languages (Lisp, Haskell, others). People also work with extended formal systems that are based in Lambda Calculus to try and prove things about more specific computational systems (like concurrent programs or type systems).

It's completely unrelated. "Calculus" is used in math for basically any system of "calculation", so there are lots of calculi. The lambda calculus is a specific calculus that is based on the notion of function in the abstract--you are "calculating" things by applying functions. Sort of.

Basically, there are 3 things you can do in lambda calculus:

Abstract over a variable to create a function, which is written with a lambda, so e.g., λx. x is the identity function: it takes a thing called 'x' and returns that thing,

Reduce an expression, by applying a function to its input. E.g., (λx. x)1 is the identity function applied to 1, hence, (λx.x)1 reduces to 1.

Convert an expression to another expression, by changing variables or replacing equal terms with equal terms. E.g., λx.x can be converted to λy.y.
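
To make those three operations concrete, here's a tiny sketch (my own illustration; Python's lambda is a direct descendant of this notation), using Church numerals as the worked example:

```python
# abstraction: build a function with lambda; application: call it
identity = lambda x: x
print(identity(1))  # 1, i.e. (λx. x) 1 reduces to 1

# Church numerals: the number n is "apply f n times"
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

to_int = lambda n: n(lambda k: k + 1)(0)  # decode by applying "+1" n times

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```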

Lambda Calculus is a calculus of functions. The neat thing is that you can finally build expressions that denote functions. Usually mathematicians do this in a very roundabout way. That is, they write "consider the function f defined by f(x) = E". In lambda calculus you just write (λx: E) to denote f.

To answer your question: it has nothing to do with "the Calculus", which refers to the differential and integral calculus (together called just the infinitesimal calculus).

PS Analogous to this would be that instead of writing the set {x | E}, or even just the set {a}, you would write "the set S defined by x ∈ S iff E".

"Algebra" as a field of study is concerned with understanding objects with operations on them, like multiplication or addition.

An algebra (in the sense of a mathematical object) is a vector space with a bilinear operation on it; call it multiplication. For example, we can give R^2, the 2-dimensional real plane, an algebra structure by picking a basis and multiplying the way complex numbers multiply. This makes R^2 into an R-algebra, where R is the field of real numbers.
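
Here's a sketch of that construction (my own code, treating (a, b) as a + bi):

```python
from dataclasses import dataclass

@dataclass
class R2:
    """R^2 made into an R-algebra by multiplying like complex numbers."""
    a: float
    b: float

    def __add__(self, other):
        return R2(self.a + other.a, self.b + other.b)

    def scale(self, t):
        """Scalar multiplication by a real number."""
        return R2(t * self.a, t * self.b)

    def __mul__(self, other):
        """The bilinear multiplication, copied from (a+bi)(c+di)."""
        return R2(self.a * other.a - self.b * other.b,
                  self.a * other.b + self.b * other.a)

i = R2(0, 1)
print(i * i)               # R2(a=-1, b=0): the basis vector (0,1) squares to -1
print(R2(1, 2) * R2(3, 4)) # R2(a=-5, b=10), matching (1+2i)(3+4i) = -5+10i
```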

It is most often used to refer to a collection of operators with certain (first-order) postulates. E.g. Boolean algebra, differential algebra.

However, "calculus" is more often used. I find it difficult to express what a calculus is, but it's like an algebra except richer; and with not just first-order postulates but a methodology and a number of theorems. I guess "calculus" is inbetween "theory" and "algebra".

However, in mainstream mathematics "calculus" has fallen out of favour because it's been basically hijacked for the use of the differential and integral calculus.

PS Universal algebra is (supposed to be) the field that studies algebras. Not to be confused with the field of Modern (or Abstract) Algebra, which is the field that studies mathematical "structures" (which on a technical level don't really differ from algebras except that we allow operators that map to the Boolean domain).

PS.2 Methods which are calculational or formal are often called "algebraic" for some strange reason, as if there were other kinds of mathematics (there aren't).

To get at what /u/DeathAndReturnOfBMG is saying here, you're not telling us what an irrational number is; you're telling us what it's not. The complex number 3+2i is a number that cannot be written as a ratio of two integers; by your definition of irrational numbers, that makes it irrational. But that's clearly wrong. So you might reply, "well, no, an irrational number is a real number which can't be written as a ratio of two integers". But you've defined the real numbers to be the union of the rational numbers and the irrational numbers, so substituting your definition of "real number" into your definition of "irrational number", you've essentially just said "an irrational number is an irrational number". That's no help.

Although you are very familiar with the arithmetic of real numbers, it's actually surprisingly difficult to define "real number" in a way that is not circular as above. There are two main approaches. I'll explain the Cauchy sequence way since it is IMO a little less mind-bending than the Dedekind cuts way, even though I find the Dedekind cuts way more creative and interesting and you should read about it.

I begin by assuming that there is such a thing as rational numbers, although one really ought to begin at the construction of the integers, since that's where it all begins. But let Q be the set of rational numbers and let's just say it's well-defined. Now if you are as clever as Hippasus then you'll notice that there is no pair of integers whose quotient is the square root of 2. There's a "hole" where √2 should be. Now, note that this, on its own, does not imply that irrational numbers exist. It simply implies that there is no rational number whose square is 2. To understand what I mean by this, perhaps note that there is no integer equal to its successor: 0 is not equal to 1, 1 is not equal to 2, and so on. This observation does not imply that there is some special class of integers which are equal to their successors. Just because something doesn't exist doesn't mean it should. But we have good reasons for wanting a number system with no holes at √2; namely, we can construct a right triangle with sides of length 1, and we want a number that corresponds to the length of the hypotenuse of that triangle. So we really would like to invent the real numbers.

If you're clever like Cauchy, you might decide that instead of looking at individual rational numbers, it is more interesting to look at sequences of rational numbers. In particular, you might be interested in a special class of sequences where the terms get closer and closer and closer together. An example of such a sequence might be {0.9, 0.99, 0.999, 0.9999, 0.99999, ...}. This is a sequence where every term is a rational number, and the distance between terms gets smaller and smaller and smaller as you keep going through the terms. Sequences that have the property that all the terms from some point on get arbitrarily close to one another are called Cauchy sequences. This particular Cauchy sequence of rational numbers converges to a rational number: in fact, it goes to 1. If you are interested in these things, you might ask: does every Cauchy sequence of rational numbers converge to a rational number?

If you approximate √2 the Babylonian way, starting with x_1 = 1 and repeatedly averaging, x_{n+1} = (x_n + 2/x_n)/2, this produces the sequence {1, 1.5, 1.41667, 1.41422, ...}. A little bit of analysis that I won't bother to perform here shows that this sequence is indeed Cauchy; that is, its terms bunch up arbitrarily closely. However, a little bit more analysis that I won't bother to perform here shows that if this sequence converges to a number N then N^2 = 2... but there's no such rational number. So we've found a Cauchy sequence of rational numbers that does not converge to a rational number.
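
If you'd like to watch this happen, here's a little sketch (my own, using exact rational arithmetic) that prints the terms and the shrinking gaps between them:

```python
from fractions import Fraction

x = Fraction(1)          # x1 = 1
seq = [x]
for _ in range(5):
    x = (x + 2 / x) / 2  # average x and 2/x; the fixed point satisfies x^2 = 2
    seq.append(x)

for s, t in zip(seq, seq[1:]):
    # every term is rational and the gaps shrink toward 0,
    # but the limit (sqrt 2) is not rational
    print(float(t), float(t - s))
```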

This gives us a rigorous, well-defined way to talk about the "holes" in the rational numbers: every "hole" corresponds to a Cauchy sequence that doesn't converge (actually it corresponds to infinitely many such Cauchy sequences but I'll leave that part alone for now). So finally we have a tool for constructing the real numbers in a way that is not circular à la "the real numbers are the rational real numbers and the irrational real numbers". We can define, and this might be a bit of a surprise, the real numbers in the following way: R is the set of Cauchy sequences of rational numbers (more precisely it is the set of a certain family of equivalence classes of Cauchy sequences of rational numbers but I said I would leave that part alone for now). This may come as a surprise because, by this definition, each real number isn't its own number but rather, each real number is a sequence. But this is more natural than you might realize—we already assign a Cauchy sequence to every real number without even thinking about it: its decimal expansion. The expression 3.14159... can be thought of as a sequence {3, 3.1, 3.14, 3.141, ...}, which is in fact a Cauchy sequence of rational numbers.

So, we have as our working definition: a real number is a Cauchy sequence of rational numbers. How do we add them? The answer is rather intuitive; for two real numbers A={a1,a2,a3,...} and B={b1,b2,b3,...}, we define A+B={a1+b1,a2+b2,a3+b3,...}. Finally we can answer your original question, and in fact, we must answer your original question if we wish to use this as a definition for the real numbers. If the definition of real numbers as Cauchy sequences of rational numbers is any good at all, then the sum of two real numbers had better be real. So, we must show that A+B={a1+b1,a2+b2,a3+b3,...} is a real number. This is actually very easy, but it requires even more formality (sigh). I haven't yet explained rigorously what it means for the terms of a sequence to "get closer and closer together". So, let's do that. Definition: a sequence S={s1,s2,s3,...} is Cauchy if for any (small) number ε>0, there exists an integer N such that for any m,n≥N, |s_m - s_n|<ε. Now, I don't know if this is the first time you've seen the use of the (small) number ε technique, and if so, it might seem a little confusing. It's really not so hard though. The idea is that if you pick a very very small number, say 0.000001, then I can find some number N so that all the terms after s_N are within 0.000001 of each other. And if you decide instead to use 0.0000000001, I can find some other N so that all the terms after s_N are within 0.0000000001 of each other. And for any small ε>0 that you choose, I can find some number N so that all the terms after s_N are within ε of each other. And since ε can be as small as you want, this forces the terms of the sequence to bunch up as tightly as you like.
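
As a concrete instance of that definition (my own sketch): add the truncated decimal expansions of π to the √2 approximations termwise, and you get a sequence of rationals representing the real number π + √2.

```python
from fractions import Fraction

# A = truncated decimals of pi, B = Babylonian approximations of sqrt(2)
A = [Fraction(3), Fraction(31, 10), Fraction(314, 100), Fraction(3141, 1000)]
B = [Fraction(1)]
while len(B) < len(A):
    B.append((B[-1] + 2 / B[-1]) / 2)

C = [a + b for a, b in zip(A, B)]  # the termwise sum A + B
print([float(c) for c in C])       # heads toward pi + sqrt(2) = 4.5558...
```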

So anyway, let's answer your question in this framework. Let A={a1,a2,...} and B={b1,b2,...} be real numbers—that is, they are Cauchy sequences of rational numbers. We wish to show that C=A+B={a1+b1,a2+b2,...} is a real number—that is, C is a Cauchy sequence of rational numbers. So first, observe that it is definitely a sequence of rational numbers, since each term is the sum of rational numbers. Now it just remains to show that C is Cauchy. By the definition above, that means we need to show that for any ε>0, there exists an integer N such that for all m,n≥N, |(a_m+b_m)-(a_n+b_n)|<ε. So step 1, choose any ε>0 that you want, as small as you like. Maybe 10^(-1010) or something. Real small. Now, we know that A is a Cauchy sequence, which means that there exists some number N so that for all m,n≥N, |a_m-a_n|<ε/2. We also know that B is Cauchy, which means that there exists some N' such that for all m,n≥N', |b_m-b_n|<ε/2. Note that N and N' may not be equal, and in fact they probably aren't. Now, let M be some integer that is bigger than both N and N'. Then we know that for all m,n≥M, both |a_m-a_n|<ε/2 and |b_m-b_n|<ε/2 hold, since m,n≥M≥N and m,n≥M≥N'. Then, for m,n≥M, we have |(a_m+b_m)-(a_n+b_n)| = |(a_m-a_n)+(b_m-b_n)| ≤ |a_m-a_n|+|b_m-b_n| < ε/2+ε/2 = ε. And that completes the proof.

If you have any questions about any of that I am more than happy to help.

Right, but this begs the question. You said that the real numbers are the set of all rational and irrational numbers and then that the irrational numbers are the reals minus the rationals. At some point you need to say what a real number is. Then we can say how to add them and determine that the sum of real numbers is real.

I'm using the same definition that's been taught to every elementary student in the last 100 years. It's a really simple definition.

You have the natural (counting) numbers (1,2,3,4,...)

The whole numbers (0,1,2,3,4....)

The integers (...,-2,-1,0,1,2,...)

The rational numbers (.1, 1/3, 5/8, etc.), which can be written as a ratio of integers.

The irrational numbers (sqrt 2, e, pi, etc.), which fill in the other gaps in the number line.

And finally the real numbers which are a combination of the rational and irrational numbers. That's the definition I'm using. A real number is any number that can be put onto a number line.

This is the standard working definition that I've used my whole life. I've never heard a different definition. This is the definition my teachers have used and I'm sorry if that's not a thorough enough definition for you, but it's all I got.

Now all I'm asking for is a formal proof that the sum of any two reals is a real number. I can't explain it any more than that. I'm sorry.

The formality of your proof will be limited by the informality of your definition of real numbers. If you define real numbers as "infinite decimal expansions" then the usual addition algorithm tells you that the sum of two decimals is a decimal, so the sum of two reals is a real. (This has issues with things like .49999.... = .500000... but it's close.) If you define real numbers as points on a line, you can visualize adding: to see x + y, draw a line from 0 to x, then slide it so that it starts at y. The new endpoint will be x + y. Again, the sum of two reals must be a real because you can't slide off the line.

Let me put it a different way. Given your definition, what else could the sum of real numbers be?

How do I tell if an REU has too advanced of a topic for me? For example: I'm taking Modern Algebra right now, is that enough of an introduction to Group Theory to look at certain projects on Group Theory?

I think your first question is a good question, but you're asking the wrong people. We don't know you, we don't know your Modern Algebra class, we don't know what sort of project or REU you're talking about. There is a ton of possible variation in all of those factors.

The best thing you could do is ask people who do know some of these things. Your professor will have some idea of how much group theory you know. An organizer of the REU will know a lot about the project and will also be familiar with the sorts of students who have succeeded in the program.

Ooh simple question:
Let's say that I didn't know that ∑_{k=1}^{n} 2k = n(n+1), but only had n(n+1); is there any way that I could recover the summand 2n by using some sort of inverse summation function? Just like how differentiation is to integration.

Maybe I'm too late here, but I'd like to know why "multiplicative functions" only apply to coprimes. More exactly, why is it "multiplicative" and "totally multiplicative" rather than "coprime multiplicative" and "multiplicative" - why is the emphasis on coprimes?

I don't know much maths, and was learning about Euler's totient function, and was confused until I realised that "multiplicative" is restricted to coprimes (so phi(4) != phi(2) * phi(2)). Are there many other functions that are multiplicative? Are functions that are "totally multiplicative" less common or less interesting?

PS: also, is there an efficient way to calculate Euler's phi for products that are not coprime, like phi(4) (but larger, obviously)?

Is there a branch of mathematics that reasons about the structure of mathematics itself? An area that would reason about questions like "why do some mathematical structures turn out to be more complex than others?" Maybe even with a touch of stochastics and information theory.