Early in 2013, I enrolled at the International Space University (ISU: http://www.isunet.edu) and wanted to use the opportunity to work on celestia.Sci as the Individual Project (a mini-thesis) of my Masters degree there. Fridger agreed to be an academic co-advisor for this project, which would focus on astronomy, astrophysics, space, or cosmology. The project would demonstrate that celestia.Sci could serve as a framework for Masters and Bachelors thesis work. It was ultimately successful: you can read my ISU report here and the paper presented at the 65th International Astronautical Congress (IAC) 2014 in Toronto here.

Fridger and I agreed on the topic of gravitational lensing, for which a quick prototype I had built using OpenGL fragment shaders looked promising. Given Fridger's vast academic experience as a professional (astro-)particle physicist and advisor to Masters and many PhD students, ISU had no objection to making him an official co-advisor (the other advisor had to be an ISU faculty member). This was a perfect match, since he is also the lead of the celestia.Sci project! In November 2013 I introduced him as a potential advisor to the faculty at ISU, who gave the idea their full support. The other advisor would be Dr Hugh Hill, professor of space sciences at ISU.

Next, I was required to submit a document to ISU outlining my plan for the project. The aim was to create a general framework for gravitational lensing that is accurate for a wide range of astronomical objects while maintaining smooth frame rates. This is the plan I submitted.

Gravitational lensing is a phenomenon in which light rays are bent by gravitational sources (i.e., large masses), as predicted by General Relativity.

NASA, ESA, and A. Feild (STScI)

Gravitational lensing can be classed into several types based on the amount of distortion seen in the image:

1. Strong: Multiple images or large arcs are produced
2. Weak: Arclets and some shearing are seen
3. Microlensing: Brightness varies over time due to relative movement of multiple bodies (e.g., an orbiting exoplanet)

Why does gravity bend light? Gravity curves the fabric of spacetime, and light rays follow that curvature. In technical terms, light rays follow null geodesics, extremal causal curves of zero spacetime interval. The end result is that gravitational lensing can look a lot like the optical lensing that happens in ordinary magnifying glasses, telescope lenses, etc. (a key difference is that optical lenses can show chromatic aberration, where light of different wavelengths is bent by differing amounts, while this does not happen in gravitational lensing).

The main kinds of optical phenomena by which we recognize cosmic gravitational lenses include: Multiple Images, Einstein Rings, Magnification, and Shear.

Why is gravitational lensing so important? For one, lensing can act as a natural telescope, focusing and magnifying light. This allows us to detect very distant or small cosmic objects, such as galaxies or exoplanets, that would otherwise be invisible to our telescopes. Another very important reason is the detection of dark matter. Normally dark matter cannot be seen, but its mass exerts gravity that, in turn, bends light. See the Bullet Cluster for a striking example (Clowe, D., et al., 2006. A Direct Empirical Proof of the Existence of Dark Matter).

In general, light rays deflected by gravity follow genuinely curved paths, described by solutions to second-order ordinary differential equations (ODEs) that are expensive to solve. See for example the black dashed and solid curves in the figure below from this article: http://iopscience.iop.org/0264-9381/30/9/095014/article

Previously, ray tracing has been used to compute the result of lensing, but ray tracing is computationally very expensive. Luckily we don't have to resort to it: by making the following tradeoffs, each with little visible impact, we can generate a convincing yet accurate result without much of a performance hit:

Weak gravitational field: This excludes black holes, but most important lensing objects can still be modeled, such as galaxy clusters and exoplanets.

Masses are slowly moving: Again, this excludes only the most extreme cosmic phenomena and so is a worthwhile tradeoff.

Thin lens: Almost all lenses have a "thin" mass distribution compared to the distances to the source and observer. This important approximation reduces the problem to a cylindrically symmetric one and lets us use geometric optics, where all rays are straight lines, saving a lot of computational effort.

Weiskopf et al. showed that applying all of these tradeoffs reduces the problem to one of image warping, i.e., computing the deflections of 1-D rays over a 2-D domain. Universe Sandbox takes a similar approach. But neither is able to simulate lensing at arbitrary viewpoints and times. What we are aiming for is a general framework for simulating lensing anywhere, from the scale of exoplanets all the way up to galaxy clusters.

The following figure illustrates the coordinate system we use in our lensing implementation. "Source" refers to a distant background object in its actual position in space (e.g., a quasar or galaxy), "image" is a shifted/split/sheared mirage of the source due to lensing, and O is the observer. The lensing mass is assumed to lie in a plane (the "lens plane") and is composed of one or more point masses. The perpendicular distance of the light ray from each mass is termed the impact parameter b, and the lensing deflection angle is determined by the mass and the impact parameter.

Attachment: lensgeomv.png

We have mentioned that lenses are modeled as point masses or collections of point masses; since the gravitational field outside a spherically symmetric body is identical to that of a point mass this is a broadly applicable approximation.
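In the weak-field limit, the deflection angle of a ray passing a point mass M at impact parameter b is alpha = 4GM/(c^2 b). As a quick self-check, this reproduces the classic ~1.75 arcsecond deflection for a ray grazing the Sun (plain Python, constants approximate):

```python
# Weak-field deflection angle of a light ray passing a point mass:
#   alpha = 4 * G * M / (c^2 * b)
# where b is the impact parameter.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
ARCSEC_PER_RAD = 206264.8

def deflection_angle(mass_kg, impact_param_m):
    """Deflection angle in radians for a point-mass lens."""
    return 4.0 * G * mass_kg / (C**2 * impact_param_m)

M_SUN = 1.989e30       # kg
R_SUN = 6.957e8        # m (a ray grazing the solar limb)

alpha = deflection_angle(M_SUN, R_SUN)
print(alpha * ARCSEC_PER_RAD)  # ~1.75 arcsec
```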

Multiple images: The lens equation admits image-position solutions on either side of the lens (positive or negative angles), implying that multiple images are possible,

Amplification: If the multiple images are too small and close together to be seen separately (e.g., in the case of exoplanet microlensing), the images combine and cause the lens to appear to brighten over time (the amplification can sometimes be a factor of several hundred),

Einstein rings: If the source is directly in line with the observer and lens (source angle β = 0), a ring of light (commonly called an Einstein ring) is observed.
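The first and third of these effects follow directly from the standard point-mass lens equation beta = theta - theta_e^2 / theta, which is quadratic in the image position theta. A minimal sketch (plain Python; units are arbitrary angular units):

```python
import math

def image_positions(beta, theta_e):
    """Solve the point-mass lens equation  beta = theta - theta_e**2 / theta
    for the two image positions theta (same angular units as the inputs)."""
    d = math.sqrt(beta**2 + 4.0 * theta_e**2)
    return (beta + d) / 2.0, (beta - d) / 2.0

# Two images on opposite sides of the lens:
tp, tm = image_positions(beta=0.5, theta_e=1.0)

# Source exactly behind the lens (beta = 0): the images merge into a
# ring of radius theta_e -- the Einstein ring.
rp, rm = image_positions(beta=0.0, theta_e=1.0)
```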

We will now discuss the concrete implementation of lensing in celestia.Sci.

Take this scene of the bright elliptical galaxy NGC 6166 rendered in celestia.Sci. The inset is a magnified view showing the individual pixels that make up the Milky Way galaxy as seen from NGC 6166, at the huge distance of 157 Mpc. The result of simulating lensing is shown for comparison.

Attachment: ngc6166.png

Attachment: ngc6166-lens.png

What we should notice here is that each pixel in essence represents a light ray originating from within the simulation, regardless of whether the light source is a star or a galaxy. Thus a GPU fragment shader running on all pixels processes every source of light in the scene democratically. This is equivalent to computing the lensing deflection angle on a grid with the dimensions of the rendered image. The fragment processors available on most modern graphics cards can execute such a shader rapidly, and the end result has pixel-level accuracy.
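As an illustration of the idea (not the actual celestia.Sci shader, which runs in GLSL on the GPU), here is a CPU sketch in NumPy of a per-pixel point-mass warp; the function name and the nearest-neighbour sampling are simplifications:

```python
import numpy as np

def lens_warp(texture, lens_px, theta_e_px):
    """CPU analogue of a lensing fragment shader: for every output pixel,
    shift the texture lookup by the point-mass deflection.
    texture    -- 2-D array (the pre-rendered scene)
    lens_px    -- (row, col) of the lens centre in pixels
    theta_e_px -- Einstein radius expressed in pixels
    """
    h, w = texture.shape
    rows, cols = np.indices((h, w)).astype(float)
    dy, dx = rows - lens_px[0], cols - lens_px[1]
    r2 = np.maximum(dy**2 + dx**2, 1e-9)      # avoid division by zero
    # Point-mass lens equation: the source sits at theta - theta_e^2/theta,
    # so sample the texture closer to the lens axis.
    scale = theta_e_px**2 / r2
    src_r = np.clip(np.rint(rows - dy * scale), 0, h - 1).astype(int)
    src_c = np.clip(np.rint(cols - dx * scale), 0, w - 1).astype(int)
    return texture[src_r, src_c]
```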

We use a two-pass approach where we first render stars and DSOs to a square texture in memory using a framebuffer object (FBO). Then we draw the texture as a quad covering the entire window (cropped by the viewing limits). We apply a lensing fragment shader during this second step.

Attachment: lens123.png

A challenge in this strategy is to correctly transform coordinates between texture space, where the lensing effect is calculated in the fragment shader, and world space. Distances in the lens equation must be computed in world units (km), and the angular deflection must be converted to a displacement in texture units [0, 1]. The intercept theorem from optics can help here:

Attachment: lensplot.jpg

(intercept theorem suggestion courtesy of Fridger)
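To illustrate the scale of the conversion, here is a hedged sketch (the function name is hypothetical; it assumes the square texture spans the camera field of view and that the small-angle approximation holds):

```python
import math

def angle_to_texture_offset(deflection_rad, fov_rad):
    """Convert an angular deflection to a displacement in texture units
    [0, 1], assuming the square texture linearly spans the field of view
    (a small-angle simplification)."""
    return deflection_rad / fov_rad

# A 1.75-arcsecond solar-grade deflection inside a 45-degree field of
# view moves a texture lookup by only ~1e-5 of the texture width:
off = angle_to_texture_offset(1.75 / 206265.0, math.radians(45.0))
```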

One issue with computing the displacement is that the distance Dds from the lens to the background source is not known inside the fragment shader; in fact, at this stage we no longer know the coordinates of any of the stars and DSOs that were rendered to the texture. But we don't really need exact distances: Dds is already very large for most astronomical sources, so distance variations between sources hardly matter. Instead, following an approximation similar to Weiskopf et al. 2005, we set Dds to a constant large value ("infinity").

We require a final transform from texture space to window space. Since texture space is the square (0, 0) to (1, 1) while the window is generally not square, we must render to horizontally or vertically distorted coordinates depending on the window's aspect ratio, then "undistort" when rendering the full-screen quad in window space.
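A minimal sketch of such a distort/undistort pair (function names and axis choice are hypothetical; the actual transform in celestia.Sci may differ in detail):

```python
def window_to_square(u, v, aspect):
    """Map window-space texture coords to the square render texture.
    aspect = width / height; for a wide window the horizontal axis is
    compressed so the square texture covers the full view."""
    return (u / aspect, v) if aspect > 1.0 else (u, v * aspect)

def square_to_window(u, v, aspect):
    """Inverse of window_to_square: 'undistort' when drawing the
    full-screen quad in window space."""
    return (u * aspect, v) if aspect > 1.0 else (u, v / aspect)
```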

Up to now we've discussed mainly coordinate transforms, but we've neglected an important part of what makes gravitational lensing work: mass! The problem is that masses are not defined in celestia.Sci solar system and star definition (.ssc and .stc) files for most objects other than exoplanets. Only magnitudes (brightnesses) are guaranteed to be known for stars and DSOs in celestia.Sci. Fortunately, astronomers have long known that mass is closely related to how luminous an object is.

Attachment: mass-to-light_stars.png

Plot based on data from Torres, G., et al., 2009. Accurate masses and radii of normal stars: modern results and applications. The Astronomy and Astrophysics Review, 18(1-2), pp. 67–126.

This plot demonstrates that luminosity L (in solar units) is related to mass M (also in solar units) via power laws: L is M raised to a power n, with the exponent (n = 4, 3.76, ...) depending on the mass range.
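A sketch of how such a broken power law could be used to estimate stellar masses from catalogued luminosities, which is what the lensing code needs (the break point and exponents here are illustrative assumptions based on the plot):

```python
def luminosity_from_mass(m_solar):
    """Broken power law L = M**n (solar units). The exponents follow the
    plot above; the break mass is an illustrative assumption."""
    n = 4.0 if m_solar < 1.0 else 3.76
    return m_solar ** n

def mass_from_luminosity(l_solar):
    """Invert the relation to estimate a star's mass from its catalogued
    luminosity (the consistent inverse of luminosity_from_mass)."""
    n = 4.0 if l_solar < 1.0 else 3.76
    return l_solar ** (1.0 / n)
```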

On the larger scale of galaxies and galaxy clusters, the situation is different: mass becomes linearly related to luminosity, giving rise to mass-to-light ratios (M/L). M/L depends on galaxy type: spiral M/L = 100, elliptical (E/S0) M/L = 200, and irregular M/L = 1 (values from Bahcall and Kulier 2014, and Carroll and Ostlie 2007; note that there is some non-linearity for elliptical types depending on radius and velocity dispersion, but for simplicity we do not use the full rigorous model here).
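In code, the galaxy case reduces to a simple lookup; a sketch using the values quoted above (function and table names hypothetical):

```python
# Mass-to-light ratios (solar units), keyed by galaxy type,
# using the values quoted above:
MASS_TO_LIGHT = {
    "spiral": 100.0,
    "elliptical": 200.0,   # E/S0
    "irregular": 1.0,
}

def galaxy_mass(luminosity_solar, galaxy_type):
    """Estimate a galaxy's lensing mass (solar masses) from its
    luminosity (solar luminosities) via the linear M/L relation."""
    return MASS_TO_LIGHT[galaxy_type] * luminosity_solar
```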

Finally, let’s discuss how to represent amplification due to lensing. We’ve seen that amplification due to microlensing is an important technique for detecting exoplanets (other techniques include radial velocity, transits, etc.). But how can this be, given that planets have so much less mass than stars and galaxy clusters? Actually, even stars by themselves amplify light by focusing it, though the effect is very small. This GNU Octave plot illustrates the amplification factor around our Sun:

Attachment: solar_magnification_plot.png

Things get interesting, however, when a planet orbits close to the star. While the planet exerts an even smaller gravitational influence than its star, the two influences combined can produce extreme, discontinuous jumps in brightness called caustics. Caustics can also be seen as bright wavy lines in a pool of water, as illustrated on the excellent Optics Picture of the Day website: http://www.atoptics.co.uk/fz535.htm.

The following Octave light-curve plot shows the amplification observed when a single exoplanet passes near the Einstein radius of its star. The Einstein radius is defined as θE = sqrt(2 Rs Dds / (Dd Ds)), where Rs = 2GM/c² is the Schwarzschild radius of the lens, a measure of how strongly the lens curves spacetime (and also the radius of the event horizon of a black hole). The Einstein radius can be thought of as the characteristic size of the lens, and is also the apparent radius of an Einstein ring around it.

Attachment: lightcurve.png
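The two quantities involved, the Einstein radius θE = sqrt(2 Rs Dds / (Dd Ds)) and the standard point-lens amplification A(u) = (u² + 2) / (u·sqrt(u² + 4)) for a source at separation u in Einstein-radius units, can be sketched as:

```python
import math

def einstein_radius(mass_kg, d_d, d_s, d_ds):
    """Angular Einstein radius (radians) for a point lens, using the
    Schwarzschild radius R_s = 2GM/c^2; all distances in metres."""
    G, c = 6.674e-11, 2.998e8
    r_s = 2.0 * G * mass_kg / c**2
    return math.sqrt(2.0 * r_s * d_ds / (d_d * d_s))

def amplification(u):
    """Total point-lens amplification for a source at angular separation
    u from the lens, in units of the Einstein radius."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))
```

Note that A(u) diverges as u approaches 0; this is the zero-area point-source singularity discussed further below.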

Here is an actual light curve observed for an extrasolar system 4.1 kpc from Earth called OGLE-2012-BLG-0026.

I've implemented a real-time plotting feature in celestia.Sci using the Qwt framework. Here it is in action:

Attachment: celsci-OGLE-2012_microlens.png

First, to briefly describe how this was done: I had to create an add-on for the extrasolar system, since stock celestia.Sci doesn't include it. Then I chose View > Plot, selected the star OGLE-2012-BLG-0026L, lined it up with a background star which I arbitrarily named OGLE-2012-BLG-0026L-SRC, and hit Refresh in the plot panel. The microlensing code then varies the impact parameter across the lens to produce the time-varying light curve.
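The underlying single-lens behaviour (without the planetary perturbation that produces the extra peaks) is the standard Paczynski curve; a minimal sketch of the varying-impact-parameter idea (function and parameter names are illustrative):

```python
import math

def light_curve(u0, t_e, times, t0=0.0):
    """Single-lens (Paczynski) light curve. As the source moves past the
    lens, the impact parameter in Einstein-radius units varies as
        u(t) = sqrt(u0^2 + ((t - t0) / t_e)^2)
    and the amplification is A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)).
    u0  -- minimum impact parameter (Einstein-radius units)
    t_e -- Einstein-radius crossing time
    """
    curve = []
    for t in times:
        u = math.sqrt(u0**2 + ((t - t0) / t_e)**2)
        curve.append((u * u + 2.0) / (u * math.sqrt(u * u + 4.0)))
    return curve
```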

The light curve of OGLE-2012-BLG-0026 plotted by celestia.Sci shows several differences from the observed one, such as higher peaks. This can be explained by the fact that we approximate lenses and source objects as points of zero area, which results in singularities in the calculated amplification factors; real microlenses have non-zero area, which smooths out the singularities. Qualitatively, however, we can identify similarities between the simulated and real light curves, such as the presence of multiple peaks. Future improvements could increase realism by taking into account the disc sizes of stars and planets.
