The room you are currently sitting in is probably around 20°C, or 68°F (within reasonable error, since different people like their rooms warmer or colder or have no control over the temperature of the room they're reading this entry in). But what does it mean to be at a certain temperature? Well, we often define temperature as an average of the movement of an ensemble of constituent particles – usually atoms or molecules. For instance, the temperature of a gas in a room is related to the gas's root-mean-square (rms) molecular speed:

v_rms = √(3kT/m)

Where T is the absolute temperature (e.g. Kelvin scale), m is the mass of the particles making up the gas, and k is Boltzmann's constant. But this is a specific case. In general, we need a more encompassing definition. In thermodynamics, there is a quantity known as entropy, which basically quantifies the disorder of a system. It is related to the number of ways to arrange the elements of a system without changing the energy.
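As a quick sanity check, here is that relation in code (a minimal sketch, assuming v_rms = √(3kT/m); the nitrogen mass and constants are standard textbook values):

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

def v_rms(T, m):
    """Root-mean-square speed of a gas particle of mass m (kg) at absolute temperature T (K)."""
    return math.sqrt(3 * k_B * T / m)

m_N2 = 28 * 1.66054e-27     # mass of a nitrogen (N2) molecule, kg
speed = v_rms(293.0, m_N2)  # ~511 m/s at 20 °C -- faster than the speed of sound!
```

The air molecules around you really are whizzing by at about half a kilometer per second.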

For instance, there are a lot of ways of having a messy room. You can have clothes on the floor, you can track mud into it, you can leave dishes and food everywhere. But there are very few ways to have an immaculately clean room, where everything is tidy and put in its proper place. Thus, the messy room has a larger entropy, while the clean room has very low entropy. It is this quantity that helps to define temperature generally. Denoting entropy as S, we have that

T = (∂E/∂S)_V = (∂H/∂S)_P

Or, in words, temperature is defined as the change in energy divided by the change in entropy of something when its volume remains fixed, which is equivalent to the change in enthalpy (heat) divided by the change in entropy at constant pressure. Thus, if you increase the energy of an object and find that it becomes more disordered, the temperature is positive. This is what we are used to. When you heat up air, it becomes more disorderly because the particles making it up are moving faster and more randomly, so it makes sense that the temperature must be positive. If you cool air, the particles making it up slow down and it tends to become more orderly, so the temperature is still positive, but decreasing. What happens when you can't pull any more energy out of the air? That means the temperature has gone to zero and the motion has stopped. Since the motion has stopped, the gas must be in a very ordered state, and the entropy isn't changing. This state, in which all motion has ceased, is what we call absolute zero.

It is impossible to reach absolute zero temperature, but at first it isn't intuitive why. The main reason comes from quantum mechanics: if all atomic motion of an object stopped, its momentum would be known exactly, and this violates the Uncertainty Principle. But there is also another reason. In thermodynamics, there is a quantity related to temperature that is defined as

β = 1/(kT)

Since k is just a constant, β can be thought of as inverse temperature. This sends absolute zero to β being infinity! Now, this makes much more sense as to why achieving absolute zero is impossible – it means we have to make a quantity go to infinity! It turns out that β is the more fundamental quantity to deal with in thermodynamics because of this role (and others).

Now, you're probably thinking, "Akano, that's all well and good, but are you saying that this means you can get to infinite temperature?" In actuality, you can, but you need a special system to be able to do it. To get temperature to infinity, you need β to go to zero. How do we do that? Well, once you cross zero, you end up with a negative quantity, so if we could somehow get a negative temperature, then we would have to cross β equals zero. But how do we get a negative temperature, and what would that be like? Well, we would need entropy to decrease when energy is added to our system.

It turns out that an ensemble of magnets in an external magnetic field would do the trick. See, when a compass is placed in a magnetic field, it wants to align with the field (call that direction north). But if I put some energy into the system (i.e., I push the needle), I can get the needle of the compass to point in the opposite direction (south). When less than half of the compasses are pointing opposite the external field, each time I flip a compass needle I'm increasing entropy (since the perfect order of all the compasses pointing north has been tampered with). But once more than half of those compasses are pointing south, I am decreasing the disorder of the system when I flip another needle south! This means that the temperature must be negative! In practice, the compasses are actually molecules with a magnetic dipole moment or electrons with a certain spin (which act like tiny magnets), but the same principles apply. So β equals zero when exactly half of the compasses point north and the other half point south; since β equals zero corresponds to infinite T, it is at this infinity that the sign of T swaps.

It's interesting to note that negative temperatures are actually hotter than any positive temperature, since you have to add energy to get to negative temperature. One could define a quantity as –β, so that plotting it on a line would be a more intuitive way to see that the smaller the quantity, the colder the object is, while preserving the infinities of absolute zero and "absolute hot."
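This entropy argument can be made concrete with a little counting. Here's a minimal sketch (assuming a toy model of N identical two-state magnets, where each flip to "south" adds one unit of energy): the entropy is k times the log of the number of arrangements, and β is proportional to the slope of entropy versus energy.

```python
import math

def entropy(N, n):
    """Entropy (in units of Boltzmann's constant) of N two-state magnets
    with n of them flipped 'south': S/k = ln(number of arrangements)."""
    return math.log(math.comb(N, n))

N = 100
# Finite-difference slope dS/dE, proportional to beta (each flip adds one energy unit)
beta = [entropy(N, n + 1) - entropy(N, n) for n in range(N)]

cold = beta[10]  # fewer than half flipped: adding energy raises entropy, so T > 0
hot = beta[90]   # more than half flipped: adding energy lowers entropy, so T < 0
```

The slope passes through zero right at the half-flipped point, which is exactly the β = 0 (infinite temperature) crossing described above.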

Today I want to talk about mass. Sometimes you'll hear it defined loosely as "the amount of stuff in an object." There are, however, two separate definitions of mass in classical physics. The first definition comes from Newton's second law.

F = ma

This mass is known as the inertial mass. The larger an object's inertial mass, the more it resists being accelerated by a given force. The second definition of mass also comes from Newton, but it is instead determined by his law of gravitation.

F = Gm₁m₂/r²

The mass here determines how much two massive objects attract one another; this is known as the gravitational mass. But here's the interesting thing about these two masses: there is no law of physics that says these masses are one and the same. Such a notion is known in physics as the equivalence principle. The weak equivalence principle was discovered by Galileo; he noticed that objects with different masses fall at the same rate. Einstein came up with the strong equivalence principle, which discusses how uniform acceleration and a gravitational field are indistinguishable when you look at a small enough portion of spacetime. The only reason we believe these two masses are equivalent is because experiments show that they are equal to within the precision of the instruments with which we measure them, and there are ongoing experiments trying to narrow down that precision to determine if there is any difference between the two.

You've probably heard of the Uncertainty Principle before. In words, it says "you cannot simultaneously measure the position and the momentum of a particle to arbitrary precision." In equation form, it looks like this:

Δx Δp ≥ ħ/2

What this says is that the product of the uncertainty in a measurement of a particle's position and the uncertainty in a measurement of its momentum has to be greater than or equal to a constant: half the reduced Planck constant, ħ = h/2π. This has nothing to do with the tools with which we measure particles; this is a fundamental statement about the way our universe behaves. Fortunately, this uncertainty product is very small, since ħ is around 1.05457 × 10⁻³⁴ J·s. The real question to ask is, "Why do particles have this uncertainty associated with them in the first place? Where does it come from?" Interestingly, it comes from wave theory.
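To get a feel for how small ħ is (and when it stops being small), here's a rough sketch; the nanometer-scale confinement is just an illustrative choice:

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837e-31     # electron mass, kg

dx = 1e-9                 # confine an electron to about one nanometer
dp_min = hbar / (2 * dx)  # smallest momentum uncertainty allowed by dx*dp >= hbar/2
dv_min = dp_min / m_e     # corresponding velocity uncertainty, m/s
```

For everyday objects this uncertainty is absurdly tiny, but for an electron squeezed into an atom-scale box it amounts to tens of kilometers per second.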

Take the two waves above. The one on top is very localized, meaning its position is well-defined. But what is its wavelength? For photons, wavelength determines momentum, so here we see a localized wave doesn't really have a well-defined wavelength, thus an ill-defined momentum. In fact, the momentum of this pulse is smeared over a continuous spectrum (much like how the "color" of white light is smeared over the colors of the rainbow). The second wave has a pretty well-defined wavelength, but where is it? It's not really localized, so you could say it lies smeared over a set of points, but it isn't really in one place. This is the heart of the uncertainty principle. Because waves exhibit this phenomenon – and quantum particles behave like waves – quantum particles also have an uncertainty principle associated with them.
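This trade-off is easy to see numerically. A sketch using a discrete Fourier transform (the pulse widths below are arbitrary choices): the same carrier wave, cut short versus left long, yields a broad versus narrow spread of frequencies.

```python
import numpy as np

t = np.linspace(-50, 50, 4096)
carrier = np.cos(2 * np.pi * t)  # same underlying wavelength in both cases

def spectral_width(signal):
    """RMS spread of the positive-frequency power spectrum."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=t[1] - t[0])
    mean = np.sum(freqs * power) / np.sum(power)
    return np.sqrt(np.sum((freqs - mean) ** 2 * power) / np.sum(power))

localized = carrier * np.exp(-t**2 / (2 * 1.0**2))    # short, well-localized pulse
spread_out = carrier * np.exp(-t**2 / (2 * 15.0**2))  # long, nearly pure wave
```

The tightly localized pulse comes out with a far broader spectrum than the long one, exactly the wave trade-off described above.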

However, this is arguably not the most bizarre thing about the uncertainty principle. There is another facet of the uncertainty principle that says that the shorter the lifetime of a particle (how long the particle exists before it decays), the less you can know about its energy. Since mass and energy are equivalent via Einstein's E = mc², this means that particles that "live" for very short times don't have a well-defined mass. It also means that, if you pulse a laser over a short enough time, the light that comes out will not have a well-defined energy, which means that it will have a spread of colors (our eyes can't see this spread, of course, but it matters a great deal when you want to use very precise wavelengths of light in your experiment and short pulses at the same time). In my lab, we use this so-called "energy–time" uncertainty to determine whether certain configurations of the hydrogen molecule, H₂, are long-lived or short-lived; the longer-lived states have narrower spectral lines, and the shorter-lived states have wider spectral lines.
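The linewidth effect can be estimated directly. A sketch assuming the order-of-magnitude relation ΔE ≈ ħ/τ (the picosecond lifetime is an arbitrary example, not a measured H₂ value):

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s
eV = 1.602176634e-19    # joules per electronvolt

tau = 1e-12                 # a state that lives for one picosecond
dE = hbar / tau             # energy spread implied by the finite lifetime
width_meV = dE / eV * 1000  # linewidth in milli-electronvolts
```

A picosecond lifetime smears the energy by roughly two-thirds of a milli-electronvolt; halve the lifetime and the line gets twice as wide.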

So while we can't simultaneously measure the position and momentum of a particle to arbitrary precision, we can definitely still use the uncertainty principle to glean information about the world of the very, very small.

I like triangles. I like numbers. So what could possibly be better than having BOTH AT THE SAME TIME?! The answer is nothing! 8D

The triangular numbers are the numbers of objects one can use to form an equilateral triangle.

Anyone up for billiards? Or bowling? (Image: Wikimedia Commons)

Pretty straightforward, right? To get the number, we just add up the total number of things, which is equal to adding up the number of objects in each row. For a triangle with n rows, this is equivalent to

Tₙ = 1 + 2 + 3 + ⋯ + n

This means that the triangular numbers are just sums from 1 to some number n. This gives us a good definition, but is rather impractical for a quick calculation. How do we get a nice, shorthand formula? Well, let's first add sequential triangular numbers together. If we add the first two triangular numbers together, we get 1 + 3 = 4. The next two triangular numbers are 3 + 6 = 9. The next pair is 6 + 10 = 16. Do you see the pattern? These sums are all square numbers. We can see this visually using our triangles of objects.

(Image: Wikimedia Commons)

You can do this for any two sequential triangular numbers. This gives us the formula

Tₙ₋₁ + Tₙ = n²

We also know that two sequential triangular numbers differ by a new row, or n. Using this information, we get that

Tₙ − Tₙ₋₁ = n
2Tₙ = (Tₙ₋₁ + Tₙ) + (Tₙ − Tₙ₋₁) = n² + n
Tₙ = n(n + 1)/2 = C(n + 1, 2)

Now we finally have an equation to quickly calculate any triangular number. The far right of the final line is known as a binomial coefficient, read "n plus one choose two." It is defined as the number of ways to pick two objects out of a group of n + 1 objects.
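These identities are easy to verify by brute force (a minimal sketch; `triangular` is just the closed-form formula above):

```python
from math import comb

def triangular(n):
    """n-th triangular number, via the closed form n(n+1)/2."""
    return n * (n + 1) // 2

# The closed form matches the definition (sum of 1..n) and the binomial coefficient
for n in range(1, 100):
    assert triangular(n) == sum(range(1, n + 1)) == comb(n + 1, 2)

# Any two consecutive triangular numbers sum to a perfect square
for n in range(1, 100):
    assert triangular(n - 1) + triangular(n) == n * n
```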

For example, what is the 100th triangular number? Well, we just plug in n = 100.

T₁₀₀ = (100)(101)/2 = 10100/2 = 5050

We just summed up all the numbers from 1 to 100 without breaking a sweat. You may be thinking, "Well, that's cool and all, but are there any applications of this?" Well, yes, there are. The triangular numbers give us a way of figuring out how many elements are in each row of the periodic table. Each row is determined by what is called the principal quantum number, n, which can be any integer from 1 to infinity. The energy level corresponding to n has n angular momentum values, ℓ = 0, 1, …, n − 1, which the electron can possess, and each of these ℓ values has 2ℓ + 1 orbitals for an electron to inhabit, with two electrons allowed per orbital. Summing up all the places an electron can be for a given n involves summing up all these possible orbitals, which takes on the form of a triangular number.
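Counting the orbitals explicitly (a sketch of the tally just described, with ℓ running from 0 to n − 1):

```python
def orbitals(n):
    """Total orbitals for principal quantum number n:
    angular momentum l runs over 0..n-1, each contributing 2l + 1 orbitals."""
    return sum(2 * l + 1 for l in range(n))

def triangular(k):
    """k-th triangular number."""
    return k * (k + 1) // 2

counts = [orbitals(n) for n in range(1, 6)]  # perfect squares: 1, 4, 9, 16, 25

# The triangular numbers hide inside the sum: n^2 = 2*T(n-1) + n
checks = all(orbitals(n) == 2 * triangular(n - 1) + n for n in range(1, 20))
```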

The end result of this calculation is that there are n² orbitals for a given n, and two electrons can occupy each orbital; this leads to the nth row of the periodic table having 2⌈(n+1)/2⌉² elements, where ⌈x⌉ is the ceiling function.

The triangular numbers also crop up in quantum mechanics in the quantization of angular momentum for a spherically symmetric potential (a potential that is determined only by the distance between two objects). The total angular momentum for such a particle is given by

L = ħ√(ℓ(ℓ + 1)) = ħ√(2Tℓ)

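The ceiling-function row formula can be checked against the real periodic table (a quick sketch; the row lengths 2, 8, 8, 18, 18, 32, 32 are the known values):

```python
import math

def row_length(n):
    """Number of elements in the n-th row of the periodic table: 2*ceil((n+1)/2)^2."""
    return 2 * math.ceil((n + 1) / 2) ** 2

rows = [row_length(n) for n in range(1, 8)]  # [2, 8, 8, 18, 18, 32, 32]
```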
What I find fascinating is that this connection is almost never mentioned in physics courses on quantum mechanics, and I find that kind of sad. The mathematical significance of the triangular numbers in quantum mechanics is, at the very least, cute, and I wish it would just be mentioned in passing for those of us who enjoy these little hidden mathematical gems.

There are more cool properties of triangular numbers, which I encourage you to read about, and other so-called "figurate numbers," like hexagonal numbers, tetrahedral numbers, pyramidal numbers, and so on, which have really cool properties as well.

Today I want to talk about something awesome: Special Relativity. It's a theory that was developed by this guy you may have heard of, Albert Einstein, and it's the theory from which arguably the most famous equation in physics, E = mc², comes. I'm not going to talk about E = mc² today (in fact, I've already talked about it, but it's not the whole story!); instead, I want to talk about two other cool consequences of Special Relativity (SR): time dilation and length contraction.

First and foremost, the main fact from which the rest of SR falls out is that the speed of light is the same for all observers moving with constant velocity, regardless of what those velocities may be. Running at 5 m/s? You see light traveling at the same speed as someone moving at 99% of the speed of light.

Wait, how can that be? This idea originally came from Maxwell's equations, which govern electromagnetism. When you solve these equations, you can put them into a form that results in a wave equation, and the speed of those waves is equal to that of light. This finding brought on the realization that light is an electromagnetic wave! But here's the interesting thing: Maxwell's equations do not assume any particular frame of reference, so the waves they govern have the same speed in all reference frames. Thus, it makes sense from an electromagnetic point of view that the speed of light shouldn't depend on how fast someone is traveling!

Now, we're still in a bit of a pickle; if all observers see light traveling at the same speed, how do things other than light move? Think about it. If you're driving down the highway at 60 mph and the car next to you is driving 65 mph, they appear to be moving 5 mph faster than you, don't they? So why doesn't this work with light? If I'm traveling 5 mph, shouldn't I see light moving 5 mph slower than normal? No; the problem here isn't that the speed of light is the same for all observers, but the fact that we think relative velocities add up normally. In fact, this relative velocity addition is simply a very good approximation for objects that are much, much slower than light, but it is not complete.
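The correct combination rule is the standard relativistic velocity-addition formula; a sketch with speeds measured in units of c:

```python
def add_velocities(u, v, c=1.0):
    """Relativistic velocity addition; for u, v << c this is approximately u + v."""
    return (u + v) / (1 + u * v / c**2)

# Highway speeds (tiny fractions of c): indistinguishable from plain addition
slow = add_velocities(1e-7, 1e-7)  # differs from 2e-7 by about 1 part in 10^14

# Near light speed: the combined speed never reaches c
fast = add_velocities(0.9, 0.9)    # ~0.9945c, not 1.8c
```

This is why everyday intuition works so well: the correction term uv/c² is hopelessly tiny until the speeds involved are a sizable fraction of c.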

Δt′ = γΔt
L′ = L/γ, where γ = 1/√(1 − β²) and β = v/c

The first equation determines time dilation, and the second determines length contraction, when shifting from a frame moving at speed v to a frame moving at speed v′ (β and γ are both physical parameters that depend on the velocity of the frame in question and the speed of light, c). From these equations, we can see that the faster someone is moving in frame S (moving at speed v), the slower their clock ticks away the seconds in frame S′ (moving at speed v′) and the more squished they look (in the direction they're traveling). These ideas are the basis for the famous "barn and pole" paradox. Suppose someone is holding a pole of length L and is running into a barn, which from door-to-door has a length slightly longer than L. If the person runs fast enough, an outside observer will see that the person running with the pole will completely disappear into the barn before emerging from the other side. But from the runner's frame of reference, the barn is what is moving really fast, and so the barn appears shorter than it did to the outside observer. This means that, in the runner's frame, a part of the pole is always outside of the barn, and thus he is always exposed.
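Putting numbers to these effects (a sketch; the 99%-of-c speed and the 10 m pole are illustrative choices, not values from the paradox as usually stated):

```python
def gamma(beta):
    """Lorentz factor for a frame moving at speed beta = v/c."""
    return 1.0 / (1.0 - beta**2) ** 0.5

g = gamma(0.99)  # ~7.09: a clock moving at 99% of c ticks about 7x slower
pole = 10.0 / g  # a 10 m pole contracts to roughly 1.4 m in the stationary frame
```

At half the speed of light γ is only about 1.15, which is why these effects are invisible in daily life.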

What if the observer outside the barn had the exit door closed and the entrance door open and rigs it such that when the runner is completely inside the barn, the entrance door closes and the exit door opens? Well, in the outside observer's frame, this is what happens; the entrance door closing and the exit door opening are simultaneous events. But in the runner's frame, there is no way for him to fit inside the barn, so does the door close on the pole? No, because the physics of what happens has to be the same in both frames; either the door shuts on the pole or it doesn't. So, in the runner's frame, the entrance door closing and the exit door opening are not simultaneous events! In fact, the exit door opens before the entrance door closes in the runner's frame. This is the relativity of simultaneity: simultaneous events in one reference frame need not be simultaneous in other frames!

Special relativity is a very rich topic that I hope to delve into more in the future, but for now I'll leave you with this awesome bit of cool physics.

As the fates would have it, the day after my birthday I hopped on a plane and went to a physics conference. I'm now sitting here in my lovely hotel room waiting for today's poster session at which I am presenting a poster on my research thus far. It involves stuff from my first published paper and some current "in the works" calculations that I'm doing to help our analysis along.

The talks up to this point have completely left me in the dust, so I'm hoping there will be discussion during the poster session that's more to my level of understanding on the various topics I've been exposed to.

My very first Equation of the Day was about the wave equation, a differential equation that governs wave behavior. It doesn't matter whether you have linear waves (sine and cosine functions), cylindrical waves, or spherical waves, the wave equation governs them. Today I will focus on the second, the so-called cylindrical harmonics, or Bessel functions.

For cylindrical symmetry, the Laplacian (the operator represented by the top-heavy triangle squared) takes the following form:

∇²ψ = (1/ρ) ∂/∂ρ(ρ ∂ψ/∂ρ) + (1/ρ²) ∂²ψ/∂ϕ² + ∂²ψ/∂z²

This is where a neat trick is used. We make an assumption that the amplitude of the wave, denoted here by ψ, can be represented as a product of three separate functions which each only depend on one coordinate. To be more explicit,

ψ(ρ, ϕ, z) = R(ρ)Φ(ϕ)Z(z)

This technique is known as "separation of variables." We claim that the function, ψ, can be separated into a product of functions each with their own unique variable. The results of this mathematical magic are astounding, since it greatly simplifies the problem at hand. When you go through the rigamarole of plugging this separated function back in, you get three simpler equations, each with its own variable.

d²Φ/dϕ² = −n²Φ
d²Z/dz² = k²Z
(1/ρ) d/dρ(ρ dR/dρ) + (k² − n²/ρ²)R = 0

Notice that the partial derivatives have become total derivatives, since these functions only depend on one variable. These are well-known differential equations in the mathematical world; the Φ function is a linear combination of sin(nϕ) and cos(nϕ) (this azimuthal angle, ϕ, goes from 0 to 2π and cycles, so this isn't terribly surprising) with n being an integer, and the Z function is a linear combination of cosh(kz) and sinh(kz), which are the hyperbolic functions. These equations are not what I want to focus on; what we've really been working so hard to get is the radial equation:

ρ² d²R/dρ² + ρ dR/dρ + (k²ρ² − n²)R = 0

This is Bessel's differential equation. The solutions to this equation are transcendental functions (meaning they cannot be built from finitely many polynomials, roots, and ratios; the sine and cosine functions are also transcendental). We write them as

R(ρ) = A Jₙ(kρ) + B Yₙ(kρ)

The Jₙ are finite at the origin (J₀ is 1 at the origin; all other Jₙ are 0), and the Yₙ are singular (undefined) at the origin. They look something like this:

The Jₙ are much more common to work with because they don't have infinities going on, but the Yₙ are used when the origin is inaccessible (like a drum head that has a hole cut in the middle). These harmonic functions are used to model, among other things, the vibrations of circular membranes like drum heads.

Note that, while they kinda look sinusoidal, they don't have a set period; the places where they cross the x-axis are spaced at unequal intervals and are irrational numbers, so they must be computed numerically. This results in some weird harmonic series for instruments like xylophones, drums, timpani, and so on. I got into them because I'm a trumpet player, and the resonances of the surface of the bell of a trumpet are related to the Bessel functions.
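You can see the unequal spacing of the zeros directly (a sketch using SciPy's `jn_zeros`; the gaps creep toward π but never settle into a fixed period):

```python
import numpy as np
from scipy.special import jn_zeros

zeros = jn_zeros(0, 5)  # first five zeros of J0: ~2.405, 5.520, 8.654, 11.792, 14.931
gaps = np.diff(zeros)   # successive gaps: ~3.115, 3.134, 3.138, 3.139 -- not constant
```

Compare with sin(x), whose zeros sit at exact multiples of π; the drifting gaps are why a drum's overtones aren't whole-number multiples of its fundamental.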

There are some cool videos (this one has a strobe effect during it) showing them in action. There are also some cool Mathematica Demonstrations related to them as well. There are also orthogonality relationships with them, but I'll save that for another day.

I made these two images in Mathematica and tidied them up in Photoshop.

They're graphs in the complex plane. The color indicates the phase, or argument, of the complex number, and for this function, curves of equal phase are hyperbolas. To animate it, all I did was let the phase vary linearly in time.