Spacetime is a new and unique substance in physics. It has four dimensions: three of space and one of time. Yet it is much more than a simple stitching together of the two. This article will examine, in basic terms, its nature and its implications for cosmology.

Space is, on the whole, intuitive. Its three dimensions can all be given in kilometres, and they are interchangeable. If a map tells you to go 2 km North and 2 km East, you will know that it doesn’t matter in which direction you go first. And for a shortcut you might instead go North-East.

Now add a time dimension: let’s say that your train leaves in 20 minutes. Is there any way of adding this dimension to the diagram above? Not really. Our intuition doesn’t allow time and space to be represented together, let alone reconciled. One metre and one minute are not just two different things but two different kinds of things. In General Relativity, however, distance and time are, in a sense, interchangeable. This is a drastic step, and not simply an addendum, for each dimension now represents something different than before; x is no longer Newton’s x, since it relates to time in a way that the length of a football pitch intuitively does not. Yet Einstein’s idea has been vindicated, and precisely by its most absurd predictions. Gravitational lensing – the deflection of light in a gravitational field despite its having no mass – was confirmed during the solar eclipse of 1919, with the first lensed quasar found in 1979, and as recently as 2016 the detection of gravitational waves was announced, emitted by orbiting masses pulling on the fabric of spacetime like ripples on a pond. Where our intuition falls short, Einstein’s relativistic spacetime triumphs.

Having established what it is not, what actually is spacetime? In basic terms, spacetime is the substance of which the Universe is made and through which everything in it moves. It is no longer a stage on which events happen but instead a key player helping to shape the scene. Spacetime tells matter how to move, and matter tells spacetime how to curve. This relationship is formalised in Einstein’s Field Equations, where the left-hand side describes curvature and the right-hand side matter:

Gμν + Λgμν = (8πG/c⁴) Tμν

This may appear to be only one equation, but it is in fact 10. The subscripts μ and ν (pronounced ‘mu’ and ‘nu’) each take the values of x, y, z and t; since the equations are symmetric in μ and ν, 10 of the 16 components are independent. One term so far left undefined is Λ (capital lambda). This is the cosmological constant, which Einstein introduced in 1917 and later called his ‘biggest blunder’. His equations, left alone, described a cosmos dominated by gravity and doomed to collapse back on itself, with nothing to provide resistance, let alone expansion; Λ was inserted to hold the model Universe static. The cosmological constant was costless and perhaps face-saving, hence his later regrets, but he turned out in fact to be correct. From 1912 the astronomer Vesto Slipher had been measuring the redshifts of spiral nebulae, an early clue that the Universe might be expanding, and a positive Λ is exactly what can drive such an expansion. The corresponding dark energy density has now been measured at ~6 × 10⁻²⁷ kg m⁻³, and physically Λ represents the dark energy opposing gravity and causing the expansion of the Universe to accelerate.

But what exactly is expanding? Cosmology is changed utterly by the introduction of spacetime. The Big Bang is normally visualised as an inflating balloon, but this is a gross misrepresentation. It is spacetime – not space – that is expanding: that new and unfamiliar substance. Whereas a balloon expands in x, y and z, spacetime expands in x, y, z and t, and as we’ve seen those first three dimensions aren’t the same as before. A balloon cannot expand faster than the speed of light, yet spacetime can. The reason we are limited to an observable Universe is that the fabric on which the rest of it lies carries it away from us faster than c. Indeed, spacetime is a very different kind of creature from the space with which we are familiar, and we should be careful not to oversimplify it. ‘What is it expanding into?’ is a very good question for the high school teacher’s 3-dimensional balloon, but is – it turns out – nonsensical when applied to a 4-dimensional Universe. For the time being the great cosmological questions remain unanswered, but it is worth noting that the questions themselves have changed.

This article will derive the area of a circle and volume of a sphere using integration. It builds on a previous article introducing calculus, and aims to function as a primer on polar and spherical coordinate systems.

A circle is a 2D object with extent in both x and y. Its area is, of course, A = πr². It is very telling that this well-known equation relies not on x or y but on the radius, r. This is purely for simplicity: the radius is the number that intuitively defines a circle, since it is the same all the way around, while x and y change drastically. To see how r relates to x and y, consider a circle of radius r centred on the origin. Any point along the circumference, like the blue point below, can be treated as making a right-angled triangle with the axes, as shown below.

The sides are given by the trigonometric relations x = r cos θ and y = r sin θ, from r and the angle θ swept from the x-axis. In fact, any point can be specified using the coordinates r and θ; for example (2, π/6) refers to a point 2 units from the origin at an angle π/6 (or 30º), much like the one pictured above. θ ranges from 0 to 2π (or 360º), since it would simply repeat after one revolution anyway, while r can be any number greater than 0. This is the polar coordinate system.

Finding the area of a circle in the x-y coordinate system requires integrating with respect to x and then with respect to y. Polar coordinates do the same thing but with r and θ. Double integrals may look intimidating at first sight, but they are simply performed one at a time, from the inside out:

A = ∬ dx dy

To solve in x and y is long and tedious, which is why area is commonly given in terms of r. A change to polar coordinates simplifies things, but at a cost: a conversion factor is needed, called the Jacobian. Fortunately, the change to polar requires a Jacobian of simply r, derived here. A more complex derivation follows later, so a glance at this one is advised, if only to avoid the sensation that it’s being pulled from thin air. The area of a circle can thus be described as:

A = ∫₀^2π ∫₀^r r′ dr′ dθ

where θ ranges from 0 to 2π, the angle swept by a circle, and the dummy variable r′ runs from 0 to the radius r. The integral of dθ = θ, and θ then assumes the values of its limits, top minus bottom, so θ = 2π – 0, where the zero can be ignored:

A = 2π ∫₀^r r′ dr′

Then integrate r′, which gives r²/2:

A = 2π × r²/2 = πr²

Finding the volume of a sphere requires a switch to the spherical coordinate system. Whereas polar coordinates specify a position on a circle, in spherical coordinates points are specified by their position on a sphere, using r, θ and φ. r is exactly as before: it points out from the origin and gives how far away the point is, or the radius of the sphere.
But now two angles are needed, θ and φ, to specify a position. These are equivalent to latitude and longitude on the Earth: the first tells you how far South of the North Pole you are, which of course isn’t enough to know your position, while the second tells you how far East you are of the Greenwich meridian. Latitude, θ, goes from 0 to π (180º, i.e. North Pole to South Pole) while longitude, φ, goes from 0 to 2π (360º, i.e. Greenwich to Greenwich). These are shown below.

A triple integral is needed to derive the volume of a sphere, and has the form:

V = ∭ dx dy dz

In spherical coordinates these become dr dθ dφ and the Jacobian is r² sin θ, derived here. This gives the volume of a sphere as:

V = ∫₀^2π ∫₀^π ∫₀^r r′² sin θ dr′ dθ dφ

Integration is much as before, and can in fact be performed in any order: the r′ integral gives r³/3, the θ integral gives [–cos θ]₀^π = 2, and the φ integral gives 2π. Thus the volume of a sphere is given as expected:

V = (r³/3) × 2 × 2π = (4/3)πr³
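As a sanity check, the polar and spherical integrals above can be approximated numerically. This is a minimal sketch in plain Python using the midpoint rule; the function names are illustrative, not from the article:

```python
import math

def circle_area(radius, n=400):
    """Midpoint-rule estimate of A = ∫₀^2π ∫₀^r r' dr' dθ."""
    dr = radius / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr               # midpoint of each radial strip
        total += r * dr * 2 * math.pi    # θ integrand is constant: factor of 2π
    return total

def sphere_volume(radius, n=400):
    """Midpoint-rule estimate of V = ∫₀^2π ∫₀^π ∫₀^r r'² sinθ dr' dθ dφ."""
    dr = radius / n
    dth = math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        for j in range(n):
            th = (j + 0.5) * dth
            # Jacobian r'² sinθ; the φ integral contributes a factor of 2π
            total += r**2 * math.sin(th) * dr * dth * 2 * math.pi
    return total

print(circle_area(1.0))    # ≈ π ≈ 3.14159
print(sphere_volume(1.0))  # ≈ 4π/3 ≈ 4.18879
```

Note how the Jacobian appears directly as the weight on each little patch of the domain: without the factors r and r² sin θ, the sums would converge to the wrong answers.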

Physics has its occasional eureka moments. James Clerk Maxwell had one such moment in 1862, when he discovered that the equations he had found describing electric and magnetic fields combined to describe light. The speed of light c, this article will demonstrate, can easily be found from Maxwell’s equations. As this derivation is a staple of undergraduate exams, each step will be described qualitatively, but a brief glance at the vector operators guide in Resources is recommended.

One of the most profound 19th-century discoveries, due to Ørsted and explored deeply by Faraday, was that electric currents passing through wires induce magnetic field loops, as shown below.

By 1862 Maxwell had shown that the rate of change of the magnetic field H was directly proportional to the curl (or rotationality) of the electric field E. This essentially means that electricity and magnetism are one and the same, and a changing magnetic field around a wire will equally induce a current through it, as described by:

∇ × E = −μ₀ ∂H/∂t

where μ₀ is the magnetic permeability constant, defined later. Taking the curl of both sides gives:

∇ × (∇ × E) = −μ₀ ∂(∇ × H)/∂t

The curl of H (∇ × H) is defined by Ampère’s law as:

∇ × H = σE + ε₀ ∂E/∂t

where σ is the conductivity of the medium and ε₀ is the electric permittivity constant. Inserting this expression above gives:

∇ × (∇ × E) = −μ₀σ ∂E/∂t − μ₀ε₀ ∂²E/∂t²

Using the identity ∇ × (∇ × E) = ∇(∇ · E) − ∇²E, the Laplacian can be extracted. However, imposing the vacuum condition means that σ = 0 and the divergence of E (∇ · E) = 0. Thus:

∇²E = μ₀ε₀ ∂²E/∂t²

What has been derived is a wave equation, describing an electromagnetic wave, where μ₀ = 1.26 × 10⁻⁶ N A⁻² and ε₀ = 8.85 × 10⁻¹² F m⁻¹. Notice the similarity to the general equation for a wave:

∇²E = (1/v²) ∂²E/∂t²

In this equation v is the speed of the wave. The speed of an electromagnetic wave can therefore be found:

v = 1/√(μ₀ε₀)

This gives v ≈ 3 × 10⁸ m s⁻¹, the speed of light. Thus, light can be treated as an electromagnetic wave. The century and a half since Maxwell first worked out this derivation has corroborated and built upon it.
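The final numerical step is easy to verify. A minimal sketch in Python, plugging the quoted vacuum constants into v = 1/√(μ₀ε₀):

```python
import math

MU_0 = 1.25663706e-6       # magnetic permeability of free space, N A⁻²
EPSILON_0 = 8.8541878e-12  # electric permittivity of free space, F m⁻¹

# From the wave equation, the speed of an electromagnetic wave is 1/sqrt(mu0 * eps0)
v = 1 / math.sqrt(MU_0 * EPSILON_0)
print(f"v = {v:.4e} m/s")  # ≈ 2.998e8 m/s, the speed of light
```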

The discovery of the wave nature of particles led Erwin Schrödinger to formulate his equation of the wavefunction in 1926, part of his quest to find a more reliable description of electrons in the atom. This article will lay out the mathematical basis of the wavefunction and its implications for quantum mechanics.

A bead on a string that is fixed at both ends is an example of an infinite potential well. It may move along one direction (call it x) between two limits (call these 0 and L). While a bead has no natural preference for one point on the string over any other, the following derivation will show that electrons do. In a one-dimensional well, their positions fixed between two limits, electrons ‘prefer’ to be in the middle and never occur at either edge. Let’s begin.

The general wavefunction Ψ (pronounced ‘psi’) of an electron in a one-dimensional well is defined as:

Ψ(x) = A sin(kx)

A is the maximum height of the wave, k the wavenumber, which relates to its wavelength, and x ranges from 0 to L, as with the bead on a string.

At this point it’s worth clarifying what the wavefunction is. Although it appears to be fundamental, its physical meaning is unknown, and is the subject of various interpretations which are beyond the remit of this article. Its square, however, is simple and measurable. For a well with one electron, the integral of Ψ² yields the probability of finding the electron in a particular region of the well.

Probability is a familiar concept. In the case of a bead on the string, the probability of finding the bead off the string is zero, and thus Ψ² = 0 at these points. Waves, unlike particles, are continuous in space, meaning they have no sudden breaks, and so if Ψ² = 0 just above L and just below 0 then the same must be true at these points. So:

Ψ(0) = Ψ(L) = 0

In case it isn’t apparent, this has shown why particles, whose positional probabilities relate to wavefunctions, can never appear at the edges of infinite wells. This has the consequence of allowing k to be defined in simpler terms. If:

sin(kL) = 0

then, since sin(x) only equals zero at points where x = nπ:

k = nπ/L

Thus k can be replaced to give:

Ψ(x) = A sin(nπx/L)

At this point, it’s worth reflecting on n. n has integer values 1, 2, 3, 4… and its introduction is a consequence of treating the electron probability function as a wave and restricting both ends to zero. Since n relates to energy, which is shown later, this is the reason electrons are restricted to precise energy levels in atoms.

Finding A requires only the deduction that the probability of finding the particle somewhere in the entire well must be 1, so:

∫₀^L Ψ² dx = 1

The above can be rewritten as:

A = √(2/L)

This follows a prolonged derivation, available here. Thus the full wavefunction is:

Ψ(x) = √(2/L) sin(nπx/L)

For n = 1 and L = 1, the probability P of being in a region from x₁ to x₂ is given as:

P = ∫ from x₁ to x₂ of 2 sin²(πx) dx

Below this wavefunction is graphed and regional probabilities are given. ~40% of the time the electron will be found in the middle fifth of the well, while only ~10% of the time will it appear in the two fifths at the edges. Rather profoundly, the bead on a string is no different. It too obeys Schrödinger’s equation and has its own wavefunction, just like the one above. The difference has to do with the associated energies involved, which, as mentioned, relate to n. Cranking up n yields the classical picture, where there is no positional preference, as with beads on strings. In the n = 5 graph there is equal probability of finding the electron in any fifth, while in the n = 20 graph the classical picture is re-emerging, where the electron can be found virtually anywhere. As n is increased into the thousands and millions, the graphs are filled with yellow; the particle can be found anywhere.
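These regional probabilities can be reproduced directly from the antiderivative of Ψ². A minimal sketch in plain Python; the helper name `prob` is illustrative:

```python
import math

def prob(x1, x2, n=1, L=1.0):
    """P = ∫ (2/L) sin²(nπx/L) dx from x1 to x2.
    The antiderivative is x/L − sin(2nπx/L)/(2nπ)."""
    F = lambda x: x / L - math.sin(2 * n * math.pi * x / L) / (2 * n * math.pi)
    return F(x2) - F(x1)

print(prob(0.4, 0.6))                   # middle fifth, n = 1: ≈ 0.39
print(prob(0.0, 0.2) + prob(0.8, 1.0))  # two edge fifths, n = 1: ≈ 0.10
print(prob(0.0, 1.0))                   # whole well: 1, as normalisation demands
```

Raising `n` and comparing fifths shows the flattening towards the classical, no-preference picture described above.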

The equation relating energy, mass and n is derived here, and a basic infinite well simulator, designed to produce the above graphs, can be found in Resources.

In the 20th century, as galaxies further afield were resolved, astronomers found that the rate of cosmic expansion has increased over time; the Big Bang is, in fact, speeding up. This article will briefly cover the evidence that led Adam Riess et al. to this conclusion in their Nobel Prize-winning 1998 paper, as well as its profound implications for the future of the universe.

The first method of finding the distance to another galaxy was provided by Henrietta Leavitt in 1912 when she, having tracked thousands of stars in the Magellanic Clouds, found a relation between the period and luminosity of Cepheid variable stars. These stars are distinguishable by the significant ionised-helium content of their atmospheres: at high temperatures the relatively transparent He⁺ ionises to the opaque He²⁺, making the star dimmer and trapping radiation. The resulting expansion causes the star to cool, the helium reverts to He⁺, the star contracts, and the process repeats, so the star oscillates. By measuring the period and relative brightness of Cepheid variables their distances, and those of the galaxies to which they belong, can be found. Another way of finding the distance to another galaxy is to measure the brightness of Type Ia supernovae. These are the result of white dwarfs accreting matter from nearby stars, a process that the Pauli exclusion principle predicts must reach a critical instability at ~1.44 solar masses. Type Ia supernovae therefore share the same luminosity, making them a reliable standard candle whose distance relates to their relative brightness. Generally, these and other methods are used in combination in order to improve precision and minimise distance uncertainties.

The velocity of a galaxy can be found by measuring the redshift of its spectrum. Spectral lines are compared with their known laboratory wavelengths, and redshift z is defined as:

z = Δλ/λ₀

where λ₀ is the known wavelength of a line and Δλ is by how much it is shifted in a measured spectrum. For velocities well below c, the velocity v relates to z by:

v = cz

The galaxies Hubble observed had, on average, a positive redshift, and the further away one was, the redder its spectrum, and so the faster it was receding. More profoundly, Hubble showed that this would be the case at every point in space, meaning that the redshift was not purely a Doppler effect but is caused by the expansion of space between the galaxies while the light is in transit. Hubble defined a rate of constant expansion H₀, presently measured as 67.8 km per second per megaparsec, which relates to velocity and distance d by:

v = H₀d

Since c and H₀ are taken to be constants, the graph of this equation is simply a straight line. Observations agree with this on the relatively small scales Hubble observed in 1929, but as galaxies are mapped at ranges up to 12,000 megaparsecs, ~90% of the radius of the known universe, Hubble’s straight line bends, as shown below, indicating a much lower expansion rate in the past.

This is precisely the opposite of what is expected in a matter-dominated universe. In such a universe the force of gravity is sufficient to eventually slow down any expansion and cause a contraction. To overcome this force requires energy, and thus dark energy is introduced, with the word ‘dark’ conveying no more information than its being unknown. So the question of whether the universe will expand forever or eventually stop and collapse in on itself is no more than a question of the ratio of matter to dark energy. This cosmic density term, aptly given the letter Ω, is conveniently defined such that if Ω > 1 gravity prevails and the universe will collapse in on itself, while if Ω < 1 dark energy prevails and the universe expands forever.
Current measurements put a value on Ω at almost exactly 1, predicting that the universe will eventually slow down but never quite stop. Further research is underway to more precisely measure Ω and reduce what are considerable uncertainties.
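The redshift-to-distance chain described above (z from shifted lines, v = cz, d = v/H₀) can be sketched in a few lines of Python. The spectral values in the example are hypothetical, and the small-z approximation is assumed throughout:

```python
C = 2.998e5   # speed of light, km/s
H0 = 67.8     # Hubble constant, km/s per megaparsec

def distance_mpc(wavelength_obs, wavelength_rest):
    """Distance from redshift via v = cz and v = H0*d (valid for small z only)."""
    z = (wavelength_obs - wavelength_rest) / wavelength_rest
    v = C * z          # recession velocity, km/s
    return v / H0      # distance, megaparsecs

# Hypothetical example: the H-alpha line (656.3 nm at rest) observed at 670.0 nm
print(distance_mpc(670.0, 656.3))  # ≈ 92 Mpc
```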

For now the acceleration continues. It is predicted that, over billions of years, galaxies now visible will vanish from the night sky as they come to recede from the Milky Way faster than the speed of light. Eventually all distant galaxies will have vanished, leaving astronomers stranded in a much smaller observable universe, resembling the one in which they thought they lived at the start of the 20th century.

Calculus revolutionised maths and physics in the 17th century, allowing Isaac Newton to form his equations of motion. This article will lay out the physical meaning of its two primary components, differentiation and integration, and show their power by deriving, from first principles, Newton’s equations of motion.

Differentiation finds the slope of a curve – in other words, by how much y changes when x is increased by one unit. For any function:

y = f(x)

the slope (or derivative) is given by:

f′(x) = dy/dx = lim (h→0) [f(x + h) − f(x)] / h

For f(x) = x, as plotted below, the gradient f′(x) = 1, meaning that y increases the same as x, which makes sense. Differentiation therefore gives the rate of change. The rate of change of position is velocity, and the rate of change of velocity is acceleration.
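The limit definition translates directly into a numerical estimate. A minimal sketch, using a small but finite h and a central difference (a symmetric variant of the limit above, chosen for accuracy):

```python
def derivative(f, x, h=1e-6):
    """Central-difference estimate of the slope f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

print(derivative(lambda x: x, 2.0))     # slope of y = x is 1 everywhere
print(derivative(lambda x: x**2, 3.0))  # slope of y = x² at x = 3 is ≈ 6
```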

Integration, on the other hand, finds the curve from the slope. It is the opposite of differentiation. For any function f′(x), the curve (or integral) is given by:

f(x) = ∫ f′(x) dx + c

So for f′(x) = 1, the integral is f(x) = x + c. The c is an uncertainty constant. This arises because knowing the slope isn’t enough to know where the curve starts and ends, just as knowing the velocity of a car gives no information about its position. Yet the integral of velocity is position, and the integral of acceleration is velocity. This last one is where we begin.
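Numerically, integration is just the accumulation of slope. A minimal sketch using the trapezoidal rule; note that only the difference f(b) − f(a) is recoverable, which is exactly why the uncertainty constant c appears:

```python
def integrate(fprime, a, b, n=1000):
    """Trapezoidal estimate of the integral of fprime over [a, b], i.e. f(b) − f(a)."""
    h = (b - a) / n
    total = 0.5 * (fprime(a) + fprime(b))  # endpoints get half weight
    for i in range(1, n):
        total += fprime(a + i * h)
    return total * h

# Integrating the constant slope f'(x) = 1 from 0 to 5 gives f(5) − f(0) = 5;
# the absolute starting value — the constant c — cannot be recovered from the slope.
print(integrate(lambda x: 1.0, 0.0, 5.0))  # ≈ 5.0
```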

Define acceleration as a. From first principles, the velocity v will be the integral of a with respect to t. Performing this integral returns:

v = u + at

In other words, the a gains a t term (as expected) and there is an uncertainty constant, u. Now what is u? Well, if t = 0, we can see that v = u, and so u is the initial velocity, before time commenced. This is one of Newton’s equations of motion, but he has another.

If we integrate again, we will arrive at position:

x = x₀ + ut + ½at²

where, again, x₀ is the uncertainty constant and is equal to x at t = 0; thus it is the initial position. By redefining x − x₀ (i.e. the distance between the final and starting positions) as the displacement s, the second of Newton’s equations of motion is derived:

s = ut + ½at²
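Both equations can be cross-checked by integrating the motion numerically. A minimal sketch, stepping a constant acceleration forward with the Euler method (all numerical values illustrative):

```python
def velocity(u, a, t):
    """First equation of motion: v = u + at."""
    return u + a * t

def displacement(u, a, t):
    """Second equation of motion: s = ut + ½at²."""
    return u * t + 0.5 * a * t**2

# Step-by-step (Euler) integration of the same motion
u, a, T, n = 2.0, 9.8, 3.0, 100_000
dt = T / n
v, s = u, 0.0
for _ in range(n):
    s += v * dt   # position is the integral of velocity
    v += a * dt   # velocity is the integral of acceleration

print(velocity(u, a, T), v)       # both ≈ 31.4
print(displacement(u, a, T), s)   # both ≈ 50.1
```

The closed forms and the step-by-step sums agree to within the step size, which is the whole point: the equations of motion are integrals made explicit.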

To approach the topic of dark matter is to approach the frontier of modern physics. This article presents briefly the evidence for this hypothetical substance, after a short overview on why it is widely considered to exist.¹

The first person to suggest the presence of dark matter was the Dutch astronomer Jacobus Kapteyn in 1922.² By this time telescopes had grown powerful enough to resolve galaxies, providing a testing ground for general relativity. Yet, while Einstein’s theory acquired overwhelming evidence closer to home, galaxies fell far short of its predictions. To account for this, Vera Rubin and Kent Ford published in 1980 measurements supporting a new substance with the required mass, one that did not emit light and had to be some six times more abundant in the universe than visible matter.³ The evidence leading to such a conclusion is summarised as follows:

First, the rotation of galaxies indicates a spread of mass throughout rather than a central concentration. Kepler’s law predicts that the average velocity of stars orbiting the centre of a galaxy is given by:

v = √(GM/r)

where r is the distance from the centre. Instead of dropping, v is observed through optical spectroscopy to be constant.⁴ By equating the gravitational and centripetal forces the implications of this for galactic mass are revealed:

GMm/r² = mv²/r  →  M(r) = v²r/G  →  dM/dr = v²/G

where G is the Newtonian constant and M the mass of the galaxy enclosed within radius r, whose derivative with respect to distance is dM/dr. In words, constant velocity requires a constant dM/dr: the enclosed mass must keep growing linearly with radius, meaning large amounts of matter must exist on the very edges of galaxies, far more than shows in optical imaging.
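The enclosed-mass relation M(r) = v²r/G is easy to explore numerically. A minimal sketch, with a flat rotation speed of 220 km/s chosen as an illustrative, Milky Way-like value:

```python
G = 6.674e-11   # gravitational constant, m³ kg⁻¹ s⁻²
KPC = 3.086e19  # metres per kiloparsec

def enclosed_mass(v_flat, r):
    """Mass required within radius r to sustain a flat rotation curve: M = v²r/G."""
    return v_flat**2 * r / G

# Illustrative flat rotation speed of 220 km/s
for r_kpc in (10, 20, 40):
    M = enclosed_mass(220e3, r_kpc * KPC)
    print(f"r = {r_kpc:2d} kpc: enclosed mass ≈ {M:.2e} kg")
# The enclosed mass doubles every time r doubles: dM/dr is constant.
```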

Second, and on an even grander scale, galaxy clusters appear to have many times more kinetic energy than their visible mass allows. For a galaxy of mass m moving at speed v in a cluster of mass M, a distance r from the centre, its kinetic and potential energies will be:

K = ½mv²,  P = −GMm/r

But for a stable system, the centripetal and gravitational forces must be equal, mv²/r = GMm/r², returning a relation between K and P:

|P| = 2K

So the magnitude of the potential should be twice the kinetic energy, yet for observed clusters the kinetic energy appears several times greater than the potential implies, pointing to an unseen source of extra gravitational energy.⁵
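The relation |P| = 2K also gives a quick mass estimate for a cluster, M = v²r/G. A minimal sketch with illustrative numbers (galaxies moving at ~1000 km/s roughly 1 Mpc from the centre):

```python
G = 6.674e-11      # gravitational constant, m³ kg⁻¹ s⁻²
M_SUN = 1.989e30   # solar mass, kg
MPC = 3.086e22     # metres per megaparsec

def virial_mass(v, r):
    """Cluster mass implied by the stability condition |P| = 2K, i.e. M = v²r/G."""
    return v**2 * r / G

# Illustrative cluster: galaxy speed ~1000 km/s at ~1 Mpc from the centre
M = virial_mass(1.0e6, 1.0 * MPC)
print(f"M ≈ {M / M_SUN:.1e} solar masses")  # of order 10¹⁴, far exceeding the visible mass
```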

Third, gravitational lensing. Einstein showed that gravitational fields can deflect light; for a foreground mass M the light of a background galaxy is bent through an angle that depends on M and on the distances involved, found by measuring the galaxies’ redshifts. The results again imply several times more mass in a galaxy cluster than is visible, as would be explained by dark matter.⁶
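Although the article concerns galaxy-scale lensing, the underlying point-mass deflection law, α = 4GM/(c²b) for a ray passing at impact parameter b, can be checked against the classic solar case. A minimal sketch:

```python
G = 6.674e-11   # gravitational constant, m³ kg⁻¹ s⁻²
C = 2.998e8     # speed of light, m/s

def deflection_angle(M, b):
    """Relativistic light deflection α = 4GM/(c²b), in radians,
    for a ray passing a point mass M at impact parameter b."""
    return 4 * G * M / (C**2 * b)

# Classic check: light grazing the Sun (M ≈ 1.989e30 kg, R ≈ 6.957e8 m)
alpha = deflection_angle(1.989e30, 6.957e8)
print(alpha * 206265)  # radians → arcseconds: ≈ 1.75, the 1919 eclipse result
```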