What if the universe had been contracting instead of expanding ever since the Big Bang? Meaning, how would we see the universe if the Hubble constant were not 72.5 (km/s)/Mpc, but -72.5 (km/s)/Mpc?

Would very far galaxies be approaching us faster than light? What would be the proper distance to the edge of the observable universe (the distance which in our real universe is 46 billion ly)? Would the universe be getting crowded enough for galaxy mergers to be happening significantly more often?

Speaking of far away galaxies, I imagine they would be blueshifted. Which gets me to another question. Why are the galaxies in the deep field images so red? Even if the whole visible spectrum was redshifted far into infrared, shouldn't ultraviolet or shorter wavelengths shift into the visible spectrum, making the galaxies visible?

Source of the post Meaning, how would we see the universe if the Hubble constant was not 72.5 (km/s)/Mpc, but -72.5 (km/s)/Mpc?

One consequence we should think about is how we determine the age of the universe. If the universe started out as a singularity, and has expanded at a constant rate to the present time, then it is simple to find the age by taking the inverse of the Hubble constant:

Distance = (velocity)x(time), so time = distance/velocity. But the expansion rate (Hubble constant) has dimensions of velocity/distance, so 1/(Hubble's Constant) has dimensions of time and equals the age. The way to think about this is to imagine a stretchy rubber cord, with a bunch of evenly-spaced dots along it. If you stretch the cord at a constant rate, measure the speed at which two dots move away from each other, and divide by their current separation, then this is the "Hubble constant" for that cord, and the inverse of that tells you how long ago all the dots were on top of each other -- assuming that the expansion actually extrapolates back that far.

If H = 70km/s/Mpc, then 1/(70km/s/Mpc) works out to be 4.4×10¹⁷ seconds, or about 14 billion years. (The real expansion rate actually varies with time, depending on the density of matter, radiation, and dark energy, so this turns the calculation of the age into an integral, but it is still the same idea, and in fact this is exactly how we determine the age of the real universe).
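That back-of-the-envelope number is easy to verify (a quick sketch with rounded constants):

```python
# Hubble time: invert H0 after converting from km/s/Mpc to 1/s.
# 1 Mpc ~ 3.0857e19 km, 1 year ~ 3.156e7 s.

H0 = 70 / 3.0857e19              # 70 km/s/Mpc expressed in 1/s
hubble_time_s = 1 / H0           # seconds
hubble_time_yr = hubble_time_s / 3.156e7

print(f"{hubble_time_s:.2e} s = {hubble_time_yr / 1e9:.1f} billion years")
# ~4.4e17 s, ~14 billion years
```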

If the universe is instead contracting, then this logic doesn't work. What was the initial size of the universe? Has it been contracting forever? Instead of using the inverse of Hubble's constant, we would need to estimate by other methods, like the universe must be at least as old as the oldest objects we can find and measure ages of. We could also use light-travel-time arguments with the most distant objects we can observe, since the universe must be at least as old as the time it takes for light from those sources to reach us.

We could make things simple by considering a contracting universe that is the same age as the real universe but contracting at ~70km/s/Mpc. What will this look like?

Instead of distant objects appearing redshifted, they will look blueshifted (just as you imagined). Photons will be "squeezed" by the contraction of space, shortening their wavelength and increasing their energy.

The distant (older) universe will be seen to have a lower density than the nearby (younger) universe. The density increases with time. This also means galactic collisions will become more frequent, until inevitably there is a "Big Crunch" (unless something halts the contraction).

Very distant galaxies may approach us faster than light. This does not mean that they move through space faster than light (they may be moving very slowly relative to the space in their neighborhood), but rather that the space between them and us is contracting fast enough so that Hubble's Law will yield |v|>c (negative v in this case), just as there are objects in the real expanding universe that have recession velocities greater than c.

Aside: One important, counter-intuitive, and very frequently misunderstood fact that I'd like to mention here: not only are there distant galaxies in the real universe that recede from us faster than light -- we can also see them. There are even galaxies that we observe which are not only receding faster than light now, but have always had recession velocities faster than light. It is difficult to visualize how this is possible, but it can be made sense of by thinking about how the photon moves through the expanding space at the same time that the expansion rate changes. (Before dark energy took over, the expansion rate was decreasing, so the distance at which the recession speed exceeds c was increasing.) So some photons crossed over that boundary and were then able to reach us, even though their sources were never inside that region. There is a wonderful paper, "Expanding Confusion", that explains this and many other common misconceptions about superluminal expansion in great detail (highly technical but worth looking at).

An'shur wrote:

Source of the post Which gets me to another question. Why are the galaxies in the deep field images so red? Even if the whole visible spectrum was redshifted far into infrared, shouldn't ultraviolet or shorter wavelengths shift into the visible spectrum, making the galaxies visible?

You are right that ultraviolet and shorter wavelengths can be shifted into the visible range by the expansion (in fact this sometimes caused confusion when looking at highly redshifted spectra -- we might have a difficult time identifying spectral lines in the optical, because they are actually ultraviolet lines!) However, this doesn't necessarily keep the redshifted spectrum as bright because there is usually less light emitted overall in those wavelengths. The majority of light from galaxies is emitted by the stars, and the majority of that is in infrared to visible (depending on the typical ages of the stars -- spirals tend to be bluer than ellipticals for example). So there is typically less UV and higher energy light being shifted into the visible to replace what was shifted out of it.

The other problem is that the redshift reduces the total brightness (over all wavelengths) that we receive. Think of it like this: not only is each photon being stretched out, but the photons are also being pulled apart from one another. So we receive fewer of them per unit time, regardless of what their wavelength was. The same thing will happen to the image of something falling into a black hole -- it not only turns redder, but also fainter, and eventually vanishes in all wavelengths, even though we as outside observers say it never crossed the horizon.
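To put rough numbers on this (a toy sketch; real brightness calculations also fold in the band shifting described above), the two effects each contribute one factor of (1+z) of dimming beyond ordinary inverse-square falloff:

```python
# Two (1+z) dimming factors beyond inverse-square falloff:
# each photon loses energy as (1+z), and photons also arrive
# (1+z) times less often.
def extra_dimming(z):
    return 1.0 / (1 + z) ** 2

for z in (1, 6, 11):
    print(f"z = {z}: bolometric flux reduced by extra factor {extra_dimming(z):.4f}")
```

So a z = 11 galaxy is already a factor of 144 fainter than inverse-square alone would suggest, before even asking which wavelengths its light has been shifted into.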

Why do all electrons have the same charge and the same mass? Because, they are all the same electron! ~Wheeler

The world lines traced out across spacetime by every electron are actually all part of one single line, like a huge tangled knot, traced out by the one electron. Any given moment in time is represented by a slice across spacetime, and would meet the knotted line a great many times. Each such meeting point represents a real electron to us at that moment.

At those points, half the lines will be directed forward in time and half will have looped around and be directed backwards.

These backwards sections appear as the antiparticle to the electron, the positron. While there are many, many more electrons than positrons, the missing positrons might be hidden within protons.

The eventual creation and annihilation of pairs that may occur now and then is no creation or annihilation, but only a change of direction of moving particles, from past to future, or from future to past.

Could the Big Bang have arisen from a single atom?

Is the universe made of a hyperfinite number of fractal hierarchies described by isomorphic von Neumann algebra within a Hilbert space?

Well, no, because there were no atoms during the first nanoseconds of the Big Bang. Those formed roughly 380,000 years later, mostly as hydrogen, some helium, and a tiny amount of lithium, during the aptly named Matter Era. Almost all of the original atoms from that era have since been destroyed, because they existed in environments that were not conducive to their survival.

Oh well, looks like we're not getting our hyperloop trains from Musk anymore. His new habit caught on.

Source of the post [For a shrinking universe] What would be the proper distance to the edge of the observable universe (the distance which in our real universe is 46 billion ly)?

I have been thinking about how to answer this question specifically. The very short answer is "it depends!" (on what the contents of the hypothetical collapsing universe are, which in turn determine how it collapses, and what sort of initial state it began with). The Hubble constant alone will not determine where the edge of the observable universe is.

Probably the simplest scenario we could consider is to just flip the sign on the Hubble constant and keep everything else (the density of matter, radiation, and dark energy) the same. But this would imply a universe that did not begin with a Big Bang, but rather one which has always been shrinking -- faster at earlier times -- and then shrinking faster again in the future as it approaches an ultimate end in a Big Crunch.

We can imagine a few different kinds of universes that are collapsing. We can ask what they look like to someone inside, and how far away the edge of their observable universe is (from how far has light had time to reach them). We will generally find different answers for these different scenarios, and I think they are all quite interesting.

So, in another post coming in the hopefully near future, I will share results from some computer simulations I have done, and explain in more detail how it all works.

Watsisname, I was thinking about a universe which starts with a Big Bang, inflates, and then instead of slowly expanding as our universe does, it slowly contracts. But I'll await what the other scenarios would look like.

Source of the post I will share results from some computer simulations I have done, and explain in more detail how it all works.

An'shur, this is code for "I'm going to look that one up online later".

I'm sorry, Watsisname, I couldn't resist.

Anyway, based on my meager knowledge of this, I would hedge my bet that in such a Big Crunch scenario, we might see a reverse of the universe's development at some point? Like it would go through a Matter Era, reverse reionization, then a Dark Age, a succession of new nucleosynthesis, and so on, until it collapses back into the Planck Epoch? I'm probably wrong, but that seems to be the general Big Crunch gist. Of course, it's unlikely that the Big Crunch will happen anyway, based on the current consensus.

Actually it is code for "I have been working on it", as in writing my own Python code which solves the Friedmann equations for a universe containing any abundance of matter, radiation, and dark energy that I want. I have already showed some of that progress on the Discord server, but I can show some more here as well, which now includes tracing the path of photons as the universe evolves.

Simulating a closed or collapsing universe is easy, simply by tweaking the cosmological parameters like the matter density. In fact here's one which is closed (total density greater than critical density), but with an (unnatural) component of energy which behaves like radiation (photons), in that its energy density decreases with the size of the universe to the 4th power, but which is gravitationally repulsive, so that it turns the Big Crunch into a bounce. I also added a negative (attractive) dark energy so that the turnaround from expansion to collapse happens more quickly.

The parameter 'w' is especially fun to play with, as it defines how the energy density of dark energy depends on the size of the universe (w=-1 means it does not dilute, unlike matter and radiation, while w<-1 means it gets even stronger as the universe expands and can eventually result in a Big Rip).

More questions arise with every answer, but out of all scenarios what is the soonest and latest the universe would end? And how could the Big Rip happen without the heat death of the universe happening first?

Also, the other day I was politely reminded that during the first few hundred thousand years of the universe there were no atoms, but as far as I recall the most distant objects we see (quasars) are populated with supermassive black holes containing the mass of billions of stars. Is the light we see from these objects showing them as younger than they truly are now, meaning we are seeing objects that couldn't possibly exist? There simply wasn't enough matter to create these supermassive black holes, much less stars of normal mass, at such a young age in the universe.

And lastly, during the first Planck second of the universe, microscopic primordial black holes were all the rage in fashion. Could these have become the protons that make up the matter we see today? Or the black holes in quasars, or are those completely unrelated things?

Source of the post as in writing my own Python code which solves the Friedmann equations for a universe containing any abundance of matter, radiation

What machine did you use to implement this code? I recall that PCs are generally not the weapon of choice for lots of operations like these.

Anyway, very cool presentation.

Gnargenox wrote:

Source of the post Is the light we see from these objects showing them as younger than they truly are now, meaning we are seeing objects that couldn't possibly exist? There simply wasn't enough matter to create these supermassive black holes, much less Stars of normal Mass at such a young age in the universe.

Source of the post What machine did you use to implement this code? I recall that PCs are generally not the weapon of choice for lots of operations like these.

My home desktop computer, which is not extraordinary in any way. This works because what I am interested in is how the scale factor of the universe changes with time, which depends only on the average densities of the different forms of matter and energy in the universe, and not how they are distributed (because on large enough scales they are uniformly distributed). So all I need to do is numerically iterate through the differential equations governing the expansion rate and how it changes, which is not terribly many computations per time step, and for which a home computer is perfectly well suited.

If I were instead interested in simulating the individual motions of a bunch of interacting particles in the universe over time (like the Millennium simulation, showing the evolution of galaxies and the cosmic web), then that would require a supercomputer, because modelling the interactions of a system of N particles requires close to N² computations per time step. For N in the millions or billions, that rapidly becomes impractical for a home computer.
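The scaling is easy to see just by counting the pairs (a toy illustration, not how real N-body codes are actually structured):

```python
# Direct-summation N-body: every particle interacts with every other,
# so there are N(N-1)/2 force pairs to evaluate per time step.
def pair_count(n):
    return n * (n - 1) // 2

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"N = {n:.0e}: {pair_count(n):.2e} force pairs per step")
```

A thousand particles is trivial; a billion particles means roughly 5×10¹⁷ pair evaluations per step, which is why such simulations run on supercomputers (and use clever approximations like tree codes to beat the N² scaling).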

Source of the post More questions arise with every answer, but out of all scenarios what is the soonest and latest the universe would end? And how could the Big Rip happen without the heat death of the universe happening first?

Very fun questions that I will be delving deeply into soon. So stay tuned.

Gnargenox wrote:

Source of the post There simply wasn't enough matter to create these supermassive black holes, much less Stars of normal Mass at such a young age in the universe.

There was plenty of matter available! The density of matter then was even greater than it is today. The hard question is how that matter got collapsed into supermassive black holes so quickly. We often think of black holes being formed in the deaths of massive stars, but it is also possible for them to form directly by collapse of a sufficiently massive cloud of gas. Or maybe the stars that formed the seed black holes just formed more quickly than we thought. This is still an open question for research.

I think the "dark star" hypothesis for their formation is not very likely though, simply because of how dark matter behaves dynamically (it is very difficult for it to lose the necessary energy to collapse into something that small, because it does not radiate). Some dark matter will inevitably be drawn into the black holes, of course, but it is not very much. Dark matter is more important for understanding how the cosmic web structure formed so quickly.

Gnargenox wrote:

Source of the post Could these have become the protons that make up the matter we see today? Or the BH in quasar or are those completely unrelated things?

Black holes cannot turn into protons, and a black hole with the mass and charge of a proton would be nonphysical (too much electric charge per mass), in a similar way as a black hole with too much angular momentum per mass (spinning too fast). Protons instead form from quarks meeting up shortly (less than millionths of a second) after the Big Bang.
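A rough back-of-the-envelope check (with rounded constants) makes the "too much charge per mass" point concrete: a charged (Reissner-Nordström) black hole requires, in geometrized units, a charge no greater than its mass, and a proton fails this bound by a huge margin:

```python
# Compare a proton's charge (converted to mass units via
# q / sqrt(4*pi*eps0*G)) with its mass. A black hole needs this
# ratio <= 1; for a proton it is enormous.
import math

e    = 1.602e-19   # elementary charge, C
m_p  = 1.673e-27   # proton mass, kg
G    = 6.674e-11   # gravitational constant
eps0 = 8.854e-12   # vacuum permittivity

q_geom = e / math.sqrt(4 * math.pi * eps0 * G)  # charge in kg-equivalent
ratio = q_geom / m_p
print(f"charge-to-mass ratio in geometrized units: {ratio:.2e}")  # ~1e18
```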

It is possible that the period shortly after the Big Bang did also produce a lot of microscopic black holes (and this could also explain dark matter), but they have little to do (directly) with the supermassive black holes like those we see in quasars.

Source of the post I think the "dark star" hypothesis for their formation is not very likely though, simply because of how dark matter behaves

Black holes aside, what do you think of this 'Dark Matter Star' theory? At first it seemed quite like another wild gap-filler for dark matter measurements, but the more I read about them, the more they made some sense to me, at least from an early, pre-nucleosynthesis point of view. I do get it though -- they are just a theory for now, what with their reaction-dependence on hypothetical neutralino particles.

Source of the post Black-holes aside, what do you think of this 'Dark Matter Star' theory?

I think the physics behind it is good conceptually. If there is enough dark matter annihilation in a virialized cloud of gas and dark matter to produce a lot of heat, then it can certainly influence the equilibrium of that cloud. The hard questions for it lie in the details -- are enough dark matter particles really captured into such a bound system, and do they really annihilate enough to have this effect?

I think I would find it more convincing if there was a convergence of evidence to favor this model. That could come from it being a single and natural way to explain some oddities in current observations. Of course it's great and practically necessary to successfully predict future observations, but if that is all it does then I don't think it is as compelling.

Relativity earned its place as an established theory after many beautiful predictive successes (time dilation, lensing of light, gravitational waves, etc). But if it had only made those predictions, and did not also naturally explain previous mysteries like the Michelson-Morley experiment, or the precession of Mercury, then it would have been harder to take seriously before these other observations could be made. The best models are those that predict things both old and new.

As promised, I am now ready to share some results of cosmological simulations I have run on the computer, which model the evolution of universes filled with different amounts of matter, radiation, and dark energy. Creating such simulations is something I have wanted to do for a long time, and An'shur's question earlier provided the motivation to finally do it.

There is quite a bit to cover, so I'll break this up into spoilerized sections. First, I'll review the equations and techniques for building a cosmological model. Then I will present the standard Lambda-CDM model, which is currently the best model for observations. We will see what the evolution of our universe looks like over time, and also encounter some surprises!

Then, to explore An'shur's question, I will flip things around and examine a model universe which started with a Big Bang, but contains much more matter -- enough to halt and reverse the expansion and eventually lead to a Big Crunch. I will set the matter density so that after 13.8 billion years, this model universe is collapsing at negative 68km/s/Mpc -- the opposite of the expansion rate we observe in the real universe today.

Building a Cosmological Model

► Show Spoiler

On very large scales, the matter and energy in the universe is essentially uniformly distributed. This is the cosmological principle, which is justified by observations on scales greater than ~100Mpc. This is extremely convenient for our ability to model the universe, because it greatly simplifies the mathematics. If we apply General Relativity to a universe with a uniform distribution of matter and energy, we obtain the Friedmann equations, which describe how the expansion rate of the universe changes over time.

These equations may seem complicated and mysterious. How can we be sure that they are accurate? Well, for one thing they are exact solutions to GR, and for another, they are very well tested against observations. Also, I think it is actually not that difficult to understand them, at least in a Newtonian context, by drawing an analogy to orbits. If we launch something from a planet with too slow of a speed, then it will slow down and come back. This is analogous to a "closed" universe, with too much gravitation such that its expansion slows down and then it collapses on itself in a finite amount of time. An "open" universe, with too little gravitating matter, will continue to expand forever. This is analogous to launching something from a planet on a hyperbolic orbit -- fast enough that it slows down, but never stops, and never comes back. Finally there is a perfectly balanced "flat" universe, which has exactly the right amount of gravitation so that the expansion rate slows to zero, but after infinite time. This is like a parabolic orbit, where the object was launched at exactly the escape speed.

(Aside: "Flat" in the cosmological sense does not mean "like a pancake", but rather that the spatial geometry is 3D-Euclidean, the type of geometry we are all intimately familiar with. It means two straight and initially parallel lines remain parallel, and the sum of angles in any triangle is 180 degrees. A closed universe on the other hand will have those straight, parallel lines converge on each other, and the sum of angles is greater than 180°. The 2D analogy is the surface of a sphere, where lines of longitude are straight, and parallel at the equator, yet converge at the poles. This is also called "positive curvature". An open universe will instead spread apart parallel lines, and the sum of angles in a triangle is less than 180 degrees. That is "negative curvature", and a good 2D analogy is the region around a saddle point.)

The Friedmann equations are the general relativistic version of this same logic. In fact, if we take the Newtonian law of gravity and apply it to an expanding, uniform sphere of particles that interact only by gravity, then we will come up with equations that look almost identical to the properly general relativistic Friedmann equations (save for a few numerical factors and constants, like the speed of light). Someday I might show the Newtonian derivation, which is quite a fun bit of physics, but for now let's take the equations as given. How do we use them to simulate a universe?

We'll start with the First Friedmann equation, written in a slightly different and more suggestive form:
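Written out explicitly (my reconstruction from the description that follows, using the symbols defined below):

```latex
\left(\frac{\dot a}{a}\right)^{2}
  = H_0^{2}\left(
      \frac{\Omega_m}{a^{3}}
    + \frac{\Omega_r}{a^{4}}
    + \frac{\Omega_\Lambda}{a^{3(1+w)}}
    + \frac{\Omega_k}{a^{2}}
    \right)
```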

Here "a" represents the size or "scale factor" of the universe. It is set equal to 1 at the present time by convention. If a grows to 2, then that means the distances between galaxies have doubled. The moment of the Big Bang corresponds to a=0.

"a" with a dot over it (which I'll write in text as a') represents the rate at which the scale factor is changing with time (like an expansion velocity). In calculus terms, the dot stands for a time derivative. Dividing a' by a means "expansion velocity per distance", which is the definition of the Hubble constant. The Hubble constant today in the real universe is roughly 68km/s/Mpc. This means a galaxy 1Mpc away is receding at 68km/s due to the expansion of space in between here and there. A galaxy 2Mpc away recedes twice as fast (136km/s), because there is twice as much space in between. For a wide range of cosmological distances, this linear relationship between distance and recession velocity holds, and this is known as Hubble's Law.
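As a quick illustration of Hubble's Law (a toy calculation assuming H0 = 68km/s/Mpc and rounded conversion factors):

```python
# Hubble's law: recession velocity v = H0 * d, and the distance at
# which the law gives v = c.
H0 = 68.0                # km/s per Mpc
c = 299792.458           # speed of light, km/s

for d_Mpc in (1, 2, 100):
    print(f"{d_Mpc} Mpc -> {H0 * d_Mpc:.0f} km/s")

hubble_dist_Mpc = c / H0                           # distance where v = c
hubble_dist_Gly = hubble_dist_Mpc * 3.262e6 / 1e9  # 1 Mpc ~ 3.262e6 ly
print(f"v = c at ~{hubble_dist_Mpc:.0f} Mpc (~{hubble_dist_Gly:.1f} billion ly)")
```

So at the present expansion rate, anything beyond roughly 4400 Mpc (about 14 billion light years of proper distance) is receding faster than light.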

On the right hand side of the equation is H0, which is the value of the Hubble constant at the present time, and then a bunch of funny symbols Ω (omega). Each omega represents a density (for example Ωm is the density of matter), but as a fraction of the "critical density". The critical density is the exact density which would make the universe flat and slow down to zero expansion rate after infinite time. As an example, if the matter density was exactly the critical density of the universe, then Ωm would equal 1.

So, this first Friedmann equation says that the size of the universe changes at a rate which is governed by the densities of matter (Ωm), radiation (Ωr), and dark energy (ΩΛ). And there is an Ωk, which is 1 minus the sum of the others. That is, if the sum of the matter, radiation, and dark energy densities is exactly the critical density, then Ωk=0. The meaning of k is the spatial curvature. Ωk=0 is a flat universe, while Ωk>0 is open (negatively curved), and Ωk<0 is closed (positively curved).

One last thing to notice is how each Ω term is scaled by some power of a. This has a very deep physical meaning. Ωm is divided by a³, because the density of matter decreases with the volume of the universe, or the size of the universe cubed. Ωr gets divided by a⁴ because radiation is not only diluted just like matter, but each photon is also redshifted by expansion, which decreases its energy and tacks on another factor of a. So the energy density of radiation drops off as 1/a⁴. This is a weird way in which cosmological expansion appears to violate energy conservation -- the energy of radiation is not conserved in an expanding universe!

ΩΛ is an interesting one. It is divided by the scale factor with a weird exponent, containing another parameter, "w". "w" is a knob which describes how the density of dark energy changes with expansion (in more precise terms, it describes its equation of state). The simplest model, and the one that best agrees with observations, is w=-1, which means dark energy does not dilute at all. The interpretation is that dark energy is a property of space itself, rather than some substance like particles of matter. The more the universe expands, the more space there is, and thus the more dark energy there is. This is yet another example of how cosmic expansion seems to violate energy conservation. (It doesn't really -- energy conservation just does not apply in the usual way to an expanding space-time!)
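These scaling laws are easy to play with numerically (a toy sketch; the "kind" names here are my own labels):

```python
# How each component's density changes when the universe grows from
# scale factor 1 to scale factor a:
#   matter      ~ 1/a^3        (diluted by volume)
#   radiation   ~ 1/a^4        (diluted by volume AND redshifted)
#   dark energy ~ 1/a^(3(1+w)) (constant for w = -1)
def density_factor(a, kind, w=-1.0):
    exponent = {"matter": 3.0, "radiation": 4.0,
                "dark_energy": 3.0 * (1.0 + w)}[kind]
    return a ** -exponent

for kind in ("matter", "radiation", "dark_energy"):
    print(f"a doubled: {kind} density x {density_factor(2.0, kind)}")
```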

Now let's turn this equation into something we can use by implementing it in a computer code which evolves model universes. To do so, we will convert it into an equation for acceleration. This is a good calculus exercise, so I'll briefly explain the steps and then show the final result.

First, multiply both sides of the equation by a² to get a'² by itself on the left side. Then, take the time derivative of both sides. Using the chain rule, this means the time derivative of a'² is 2a'a''. Finally, isolate a'' (the acceleration) by dividing both sides by 2a'. The final result is:
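Reconstructed from the steps above (note that the curvature term becomes a constant after multiplying through by a², so it drops out when we differentiate):

```latex
\ddot a = -\frac{H_0^{2}}{2}\left(
      \frac{\Omega_m}{a^{2}}
    + \frac{2\,\Omega_r}{a^{3}}
    + (1+3w)\,\frac{\Omega_\Lambda}{a^{3w+2}}
    \right)
```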

This is a form of the 2nd Friedmann equation, also called the acceleration equation. As you might guess it describes how the universe's expansion speeds up or slows down. Notice that matter and radiation both act to slow it down (their coefficients are negative), while dark energy acts to speed it up. This isn't immediately obvious since there is a minus sign in front of the dark energy term too, but remember that w=-1.

At last, this acceleration equation can be implemented directly on the computer. We begin by setting the values of the cosmological parameters at the present time (all the Ω's and Hubble constant), calculate the acceleration a'', and then iterate again for the expansion velocity a' and the change in scale factor a:
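In symbols, one simple choice of update scheme (a sketch of a semi-implicit Euler step with small time step Δt, where f is the right-hand side of the acceleration equation) looks like:

```latex
\ddot a_n = f(a_n), \qquad
\dot a_{n+1} = \dot a_n + \ddot a_n\,\Delta t, \qquad
a_{n+1} = a_n + \dot a_{n+1}\,\Delta t
```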

Those are the fundamental equations programmed into the model, and we will explore their consequences below.
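For concreteness, here is a minimal Python sketch of such an iteration (not the actual simulation code from this thread; constants are rounded). It steps the acceleration equation backwards in time from the present, with Lambda-CDM-like parameters, and recovers the familiar age of about 13.8 billion years:

```python
# Evolve the scale factor a(t) backwards from the present (a = 1) toward
# the Big Bang (a -> 0) by iterating the acceleration equation with a
# semi-implicit Euler step. Time is measured in Gyr.

Om, Or, OL, w = 0.31, 9.2e-5, 0.69, -1.0  # density parameters (flat: Ok ~ 0)
H0 = 67.5 * 1.0227e-3                     # 67.5 km/s/Mpc converted to 1/Gyr

def accel(a):
    """a'' from the second Friedmann (acceleration) equation."""
    return -0.5 * H0**2 * (Om / a**2 + 2.0 * Or / a**3
                           + (1.0 + 3.0 * w) * OL * a**(-(2.0 + 3.0 * w)))

a, adot = 1.0, H0        # today: a = 1 and a'/a = H0
t, dt = 0.0, -1e-5       # negative step: integrate into the past
while a > 1e-3:          # stop just short of the singularity
    adot += accel(a) * dt
    a += adot * dt
    t += dt

print(f"Age of this model universe: {-t:.2f} billion years")  # ~13.8
```

Running it forward instead (positive dt) traces out the future expansion, and flipping signs or tweaking the Ω's gives the collapsing universes discussed above.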

(Aside: The Friedmann equations may also be (and often are) expressed and solved in integral form. Here I have instead opted to work with them in differential form, solving for the evolution of the model universe by successive iteration through small steps of time from some starting condition -- in my case, starting at the present time with a value of H0, and iterating both backwards and forwards.)

Simulating the Lambda-CDM Universe

► Show Spoiler

Let's look at the expansion history of the real universe. Observations indicate the values of the cosmological parameters are:

Ωm ~ 0.31

Ωr ~ 9.2×10⁻⁵ (the energy density of radiation is quite negligible in the universe today)

ΩΛ ~ 0.69

H0 ~ 67.5km/s/Mpc

w = -1

Here is the expansion history of this Lambda-CDM model universe:

The universe began expanding very quickly after the Big Bang, but slowed down due to the gravitational pull of all the matter and radiation present. Radiation was then diluted away quite quickly (due to the 1/a⁴ dependence), leading to a matter-dominated universe which still slowed down, but not as quickly. But over time the matter was diluted away too, relative to the third, very mysterious component we call dark energy, or the cosmological constant. This energy density stays constant as the universe expands, so while matter and radiation get diluted, it grows stronger by comparison, and eventually dominates. We can see this by plotting their densities (in equivalent proton masses per cubic meter) over time:

At this plot scale we cannot see the radiation-dominated era, which spanned only the first few tens of thousands of years. But we can see both radiation and matter density dropping over time, radiation faster than matter, and we can see dark energy as the steady horizontal line, which began to dominate over matter when the universe was around 10 billion years old.

Next let's look at a space-time diagram of the universe:

There is a fair bit of information to digest in this figure!

I have time running along the horizontal axis in billions of years, and the "proper distance" running upward in billions of light years. Proper distance is what you would measure if you could freeze the expansion everywhere and then stretch out a ruler to those locations. I set the scale so that 1 year of time horizontally is the same as 1 light year of proper distance vertically. The path of a light ray with this scale would be a 45° angle, if space was not expanding. But it is expanding, which leads to some weird consequences.

The present time is the vertical grey bar at 13.8 billion years. Extending back from our location at the present is a yellow curve, which is the path taken by a photon just now reaching Earth, having been emitted at the Big Bang. Notice how it initially moves away from us after the Big Bang. For about 4 billion years this photon, which is always moving towards us, was pulled farther away by the expansion. The path it took defines our "past light cone" -- the slice of spacetime that we observe.

The thick dashed black curve shows the path of whatever source emitted that photon. In truth there is no such source, as the early universe was opaque to light until about 380,000 years of age, when electrons could finally join up with protons to form atoms in the "recombination era". But we can imagine a hypothetical source of some light-speed signal emitted from the Big Bang, just reaching us now -- perhaps gravitational waves. Anyway, it is interesting to ask: how far away is that source from us now? The source of the photon just now reaching us from the Big Bang is located at a proper distance of over 40 billion light years! Much more than the speed of light times the age of the universe, because expansion has again pulled it further away. This distance is what defines our "particle horizon" -- the most distant thing that we can in principle receive a signal from today.

I show a number of thinner black dotted lines also radiating outward from the Big Bang. These are the paths of galaxies. I choose the specific ones that intersect the light cone at regular 2 billion year intervals. This means each intersection is a galaxy that we observe when the universe was 2 billion years old, 4 billion years old, etc, up to 12 billion.

Finally, green dashed curves represent distances for which the expansion pulls things away at 1, 2, and 3 times the speed of light (from bottom-most to top-most curve, with the bottom 1c curve the thickest). It might seem like the notion of things receding faster than the speed of light is contrary to relativity, but it isn't. Special relativity says nothing can move through space faster than light, but here we are dealing with an expansion of space itself, which is a general relativistic effect, and there is no limit to how fast something can be pulled away from us by that expansion.
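Those green curves at the present time come straight from Hubble's law, v = H·d. With an assumed H0 = 68 km/s/Mpc, the distances where the expansion carries things away at 1, 2, and 3 times light speed today are simply:

```python
H0 = 68.0                # km/s/Mpc, assumed present-day expansion rate
c = 299792.458           # km/s
MPC_TO_GLY = 3.2616e-3   # 1 Mpc in billions of light years

# Hubble's law v = H0 * d, inverted: d = v / H0
for n in (1, 2, 3):
    d = n * (c / H0) * MPC_TO_GLY
    print(f"recession velocity {n}c at a proper distance of {d:.1f} Gly")
```

So the "v = c" distance (sometimes called the Hubble sphere) currently sits at about 14 billion light years.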

Let's zoom in on the region closer to our past light cone:

Check out the galaxy represented by the second dotted curve from the top, intersecting the light cone right at its peak. This is a galaxy we observe from when the universe was 4 billion years old, at a proper distance of about 6 billion light years. Notice that it crosses the light cone and the green "v=c" curve at the same point. At 4 billion years, the distance at which the recession velocity equals the speed of light overtook it, and for the next several billion years it hovered with a recession velocity a little slower than light. Then, about a billion years ago, the dark-energy-driven accelerated expansion carried that galaxy back into the region receding from us faster than light, and it will continue receding faster than light forevermore.

It gets even more extreme than that. The galaxy represented by the first curve, which was at about 5 billion LY distance when the universe was 2 billion years old, has always had a recession velocity faster than light. Yet we are receiving photons from this galaxy today! I think this is one of the most counter-intuitive features of the expanding universe.

Referring back to the first space-time diagram, the most distant things we could see (in principle -- the particle horizon) now have recession velocities of about 3 times the speed of light.

There is still more we can explore with this standard Lambda-CDM model, particularly the long-term evolution and the question of its eventual fate (heat death, big rip, etc.?). But I will save that discussion for another time.

A Collapsing Universe:


Now we will explore your question, An'shur. What if we lived in a universe which began with a Big Bang, but then began contracting, and is now collapsing at H = -68 km/s/Mpc? (I know you asked for -72.5, but it does not make much of a difference.)

Let's set all the cosmological parameters to be the same as in the Lambda-CDM model, except we'll make the matter density 9.9 times the critical density, which reverses the expansion rate to -67.5 km/s/Mpc at 13.8 billion years. Here's our plot of the universe's scale factor over time:

The densities over time:

And finally, the space-time diagram:

What can we tell from this?

-The proper distance to the particle horizon will be about 18 billion light years, at least in this specific case. It is less than in the real universe, because the expansion slowed down and reversed.

-We will observe nearby galaxies as slightly blueshifted, while galaxies that we observe from when the universe was less than about 5 billion years old will still look redshifted, because the contraction has not yet "cancelled out" the same amount of expansion.

-Objects that we observe from when the universe was less than about a billion years old will now be approaching us faster than light!

-Over time, more and more galaxies will move into the region that is approaching us faster than light.
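The redshift-versus-blueshift point above follows from the fact that cosmological redshift depends only on the ratio of scale factors between emission and observation: 1 + z = a(observed) / a(emitted). A tiny sketch with made-up scale factors for an expand-then-contract history:

```python
# Cosmological redshift: 1 + z = a(observed) / a(emitted).
def redshift(a_emit, a_obs):
    return a_obs / a_emit - 1.0

a_now = 0.9   # hypothetical: the universe has contracted back from a maximum near 1.0

# Light emitted long ago, when the universe was much smaller: still redshifted.
print(redshift(0.2, a_now) > 0)    # True
# Light emitted recently, when the universe was slightly larger: blueshifted.
print(redshift(0.95, a_now) < 0)   # True
```

Light from the early universe passed through more net expansion than contraction, so it still arrives redshifted; only relatively recent light arrives blueshifted.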

And, of course, we will be doomed to a sudden, horrifying fate in a Big Crunch. (Actually, I think that might be rather more fun than a boring old heat death.) Speaking of which,

Gnargenox wrote:

Source of the post More questions arise with every answer, but out of all scenarios what is the soonest and latest the universe would end? And how could the Big Rip happen without the heat death of the universe happening first?

I have not forgotten you, Gnargenox, but I am out of time for tonight and this post is already excessive. So let's look at the Big Rip scenario next time, by playing with the parameter "w".