The three methods that Celestia uses to draw stars now--points, fuzzy points, and scaled discs--all have some problems:

1. In 'fuzzy points' mode, stars appear too blurry; point stars mode gives crisp-looking stars, but there's no anti-aliasing.
2. None of the modes except scaled discs draws stars over a wide range of brightnesses in such a way that stars of different brightness are obviously distinguishable; the familiar asterisms tend to get 'washed out', lost among a mass of mid-brightness stars.
3. Qualitatively, the stars just don't 'pop' the way they do in a good astrophoto or when you look at the sky on a dark night; this is probably the result of 1 & 2.
4. There's no documentation on how Celestia translates apparent stellar brightness into pixels.

I've developed a new technique for drawing stars that addresses all of the problems above. Like almost everything in interactive graphics these days, it involves shaders.

The point spread function for stars is approximated as a Gaussian. This is integrated over the area of a pixel. Rather than treating the pixel as a square, the pixel is given a Gaussian 'response'. This could be thought of as leakage between elements of the detector, either the pixels of a CCD or the photoreceptor cells in the eye. The choice to not use square pixels is also driven by ease of implementation: the convolution of two Gaussians is another Gaussian.
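Because the convolution of two Gaussians is again a Gaussian, combining the PSF with the pixel response is just a matter of adding variances. A one-line sketch (the variable names here are mine, not Celestia's):

```glsl
// Convolving two Gaussians yields a Gaussian whose variance is the sum
// of the two. psfVariance and pixelVariance are hypothetical names for
// the variances of the star PSF and the Gaussian pixel response.
float s2 = psfVariance + pixelVariance;  // the s^2 used in the shader below
```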

The PSF/pixel function is summed with a second Gaussian introduced to simulate the effect of light scattering in the optical system. The aim of this is not to replicate all the flaws in a real eye or camera, but to give the appearance of brilliance for brighter stars. This effect is responsible for the halos that we see in astrophotos such as this one:

Where:

- r is the distance in pixels from the pixel center to the star center
- s^2 is the variance of the convolution of the PSF and pixel functions
- Gs^2 is the variance of the glare Gaussian
- G is the brightness of the glare Gaussian
- b is the brightness of the star

Gs is set so that the glare function is much broader than the PSF; the G factor is necessary to keep the glare dim relative to the star. The brightness of the star is calculated from the magnitude of the star, the limiting magnitude, and the 'saturation magnitude'. The saturation magnitude is the point at which a star is bright enough that it is clipped to the maximum displayable brightness at the center.
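As a sketch of how b might be derived from the magnitudes (my own variable names; the actual code may differ), a star at the saturation magnitude maps to 1.0 and each magnitude fainter divides the brightness by a factor of 2.512:

```glsl
// appMag: apparent magnitude of the star
// satMag: the 'saturation magnitude'
// A star at satMag gets b = 1.0, the maximum displayable brightness at
// the pixel center; brighter stars clip, fainter ones fall off by a
// factor of 2.512 per magnitude.
float b = pow(2.512, satMag - appMag);
```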

Here is the implementation in GLSL... It is a cleaned-up version of the actual code, so no guarantees that it will run as-is. First, the vertex shader:

// Calculate the minimum size of a point large enough to contain both the
// glare function and PSF; these functions are both infinite in extent, but
// we can clip them when they fall off to an indiscernible level
float r2 = -log(thresholdBrightness / brightness) * 2.0 * s2;
float rG2 = -log(thresholdBrightness / (glareBrightness * brightness)) * 2.0 * Gs2;
gl_PointSize = 2.0 * sqrt(max(r2, rG2));
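The corresponding fragment shader isn't shown above. A minimal sketch of how it might evaluate the two Gaussians, assuming `offset` holds the vector in pixels from the star center and `starColor` is the star's color in linear space (these names are my own):

```glsl
// Sum the PSF Gaussian and the broader, dimmer glare Gaussian, then
// scale the star color by the result. All arithmetic is in linear
// color space; the linear-to-sRGB mapping happens last.
float r2    = dot(offset, offset);
float psf   = exp(-r2 / (2.0 * s2));
float glare = glareBrightness * exp(-r2 / (2.0 * Gs2));
vec3 color  = starColor * brightness * (psf + glare);
gl_FragColor = vec4(linearToSRGB(color), 1.0);
```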

There's a very important step in the pixel shader that hasn't been discussed yet: the linear to sRGB mapping. Most monitors use the sRGB color space, which has a nonlinear mapping from pixel value to the amount of light emitted at the pixel. If we want stars to look as natural as possible, we can't neglect this (we shouldn't ignore it when drawing planets, etc. either, but that's another discussion...) The function that maps from linear to sRGB colors has two parts: a linear part on the low end, and a nonlinear part on the high end. The following GLSL function handles the mapping:
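(The original listing appears to have been lost here; the standard sRGB encoding it described looks like this, which should be close to the original:)

```glsl
vec3 linearToSRGB(vec3 c)
{
    // Linear segment below the 0.0031308 threshold, 2.4 gamma segment above
    vec3 lo = c * 12.92;
    vec3 hi = 1.055 * pow(c, vec3(1.0 / 2.4)) - 0.055;
    return mix(lo, hi, step(vec3(0.0031308), c));
}
```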

Newer OpenGL drivers (with appropriate hardware) offer the extension EXT_framebuffer_sRGB, which automatically converts linear pixel shader output values to sRGB. It also fixes alpha blending: sRGB pixels are read back from the frame buffer, linearized, blended, then converted back to sRGB and written back. This should further improve the appearance of overlapping stars--the Alpha Centauri system looks noticeably unrealistic when the A and B stars are blended in nonlinear space.

Programs like Photoshop, The GIMP, etc. allow you to specify a working colorspace. Shouldn't Celestia do the same? I.e., shouldn't the mappings you're talking about be done according to whatever ICC profiles are available on the machine, rather than hardcoded into Celestia? Celestia should allow an ICC profile to be specified/loaded, which would then be used to do the appropriate mappings.

By hardcoding to a specific colorspace (sRGB), Celestia's output is confined to that (smaller) colorspace and to current/old hardware. I think it's a mistake to hard-code to a specific colorspace. If you insist on hardcoding, IMO it should at least be AdobeRGB rather than sRGB. People with AdobeRGB monitors (of which more and more are becoming available) will not be able to get the best out of their display when using Celestia, and may in fact get some color shifts in reds and greens.

Regards,
CC

PS. Notwithstanding my comments regarding colorspace, I'm still seeing a nice improvement between those two pics on my current (sRGB) monitor.

"Is a planetary surface the right place for an expanding technological civilization?"-- Gerard K. O'Neill (1969)

Chuft-Captain wrote:Programs like Photoshop, The GIMP, etc allow you to specify a working colorspace. Shouldn't Celestia do the same? ie. Shouldn't the mappings you're talking about be done according to whatever ICC profiles are available on the machine, rather than hardcoded into Celestia? ie. Celestia should allow an ICC profile to be specified/loaded, which will then be used to do the appropriate mappings.

By hardcoding to a specific colorspace (sRGB) this confines Celestia's output to that (smaller) colorspace and the current/old hardware. I think it's a mistake to hard-code to a specific colorspace. If you insist on hardcoding, IMO, it should at least be AdobeRGB rather than sRGB. People with AdobeRGB monitors (of which more and more are becoming available) will not be able to get the best out of their display when using Celestia, and may in fact get some color shifts in reds and greens.

I have only implemented the gamma correction part of the conversion from a linear color space to sRGB. Adobe RGB uses almost the same gamma correction. It differs from sRGB in that it has a 2.2 gamma over the whole range rather than sRGB's piecewise combination of a linear part and a 2.4 gamma part; I suspect that the two would be difficult to distinguish visually, and that my linearToSRGB function would work just fine if I replaced it with a function that simply computed color^(1/2.2).
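The pure power-law variant mentioned above would just be:

```glsl
// Approximate the sRGB encoding with a plain 2.2 gamma (as Adobe RGB
// uses over its whole range); likely indistinguishable visually from
// the piecewise sRGB curve.
vec3 linearToGamma22(vec3 c)
{
    return pow(c, vec3(1.0 / 2.2));
}
```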

The linear part of the transformation is the major difference between AdobeRGB and sRGB, and this isn't treated in the star shader, or in fact anywhere at all in Celestia. This is fine as long as both the inputs--textures and light colors--and display device are sRGB. If you have textures created with an AdobeRGB profile, then they won't look right in Celestia unless you've got an AdobeRGB monitor--texture color spaces have to match the monitor profile, since no mapping is done.


I agree that any differences between AdobeRGB and sRGB are likely to be negligible in the star shader, but I'm also thinking about planetary textures as well, so my comments below really only apply when thinking about that aspect.

I can imagine a future where most people will have AdobeRGB devices (even internet browsers are starting to be color aware) and at that point planetary textures may well use AdobeRGB which has a wider gamut than sRGB, is closer to what the human eye sees, and for example does a much better job with dark greens (prominent in Earth textures).

I guess what I'm suggesting is that Celestia should become fully color-managed, which I imagine would negate the need for any hard-coding. The shaders, I guess, would query the appropriate loaded ICC profiles to determine the required transformations for whatever devices they have. I'm not sure, but this may in fact be easier to implement than the hard-coding work which you're doing... using ICC profiles would probably ensure that the mapping of gamma corrections that you're coding for happens automatically.

It's then up to the user to make sure that their machine has the appropriate ICC profiles installed for their devices (and referenced in celestia.cfg?) to ensure color accuracy. As AdobeRGB displays a wider gamut than sRGB, IMO hardcoding to sRGB is setting the bar too low because sRGB cannot encode certain parts of the spectrum that AdobeRGB can. This prevents people running Celestia on AdobeRGB monitors from taking advantage of the wider color gamut available to them, if and when the source textures encode AdobeRGB.

The hardest part of a fully color managed Celestia would be ensuring that input images created with different color-spaces are classified and loaded with the correct profile, but I'm sure all this could be handled in celestia.cfg.

CC

PS. I know this is a wider issue than this thread requires, but just my 2CW.

"Is a planetary surface the right place for an expanding technological civilization?"-- Gerard K. O'Neill (1969)

Besides the marked rendering improvement, I really like your style of documentation above. It is most instructive and inspiring alike.

For my part, I also consider a more universal approach to brightness and to the color space as important issues for the near future. Besides stars, there are e.g. the large brightness differences of DSOs and their subtle colors that should eventually be accounted for in a more quantitative & hardware-independent manner. So far much of this was done on a rather intuitive basis, as I know very well from my own code relating to DSOs.

Today I have little spare time. I'll be back tomorrow asking a few more questions in this direction.

Chuft-Captain wrote:I agree that any differences between AdobeRGB and sRGB are likely to be negligible in the star shader, but I'm also thinking about planetary textures as well, so my comments below really only apply when thinking about that aspect.

I can imagine a future where most people will have AdobeRGB devices (even internet browsers are starting to be color aware) and at that point planetary textures may well use AdobeRGB which has a wider gamut than sRGB, is closer to what the human eye sees, and for example does a much better job with dark greens (prominent in Earth textures).

Nothing in Celestia now or in my star shader prohibits using Adobe RGB planetary textures. If you've got a set of Adobe RGB planetary textures and an Adobe RGB display device, then you'll see the larger gamut. If you've got a standard sRGB monitor, then I expect any textures created using Adobe RGB will look color-shifted and desaturated.

Different sets of star colors are necessary for different RGB color spaces. What could be done is to store them in a device-independent form such as CIE xy chromaticities, then convert these to the RGB color space of the output device. The default star colors are enough of a guess that color space differences don't matter too much; but the blackbody colors are explicitly sRGB.

I guess what I'm suggesting is that Celestia should become fully color-managed which I imagine would negate the need for any hard-coding. The shaders I guess would query the appropriate loaded ICC profiles to determine the required transformations for whatever devices they have. I'm not sure, but this may in fact be easier to implement than the hard-coding work which you're doing ... using ICC profiles would probably ensure that the mapping of gamma corrections that you're coding for, happens automatically.

It's then up to the user to make sure that their machine has the appropriate ICC profiles installed for their devices (and referenced in celestia.cfg?) to ensure color accuracy. As AdobeRGB displays a wider gamut than sRGB, IMO hardcoding to sRGB is setting the bar too low because sRGB cannot encode certain parts of the spectrum that AdobeRGB can. This prevents people running Celestia on AdobeRGB monitors from taking advantage of the wider color gamut available to them, if and when the source textures encode AdobeRGB.

The hardest part of a fully color managed Celestia would be ensuring that input images created with different color-spaces are classified and loaded with the correct profile, but I'm sure all this could be handled in celestia.cfg.

There absolutely is a need for hard-coded color transformations in Celestia, especially gamma correction. There's no function that can be called from a shader to automatically do the right mapping for any color. Remember: both Direct3D and OpenGL have explicit support for sRGB gamma correction. To make stars look good, it's critical to account for the display's gamma. There are four cases:

1. Old fixed-function hardware with no shaders: you're screwed.
2. Shaders but no support for the ARB_framebuffer_sRGB extension: gamma correction is done in the shader for stars, but it doesn't help with blending, which is still done (incorrectly) in nonlinear sRGB space.
3. ARB_framebuffer_sRGB supported: gamma correction is handled automatically by OpenGL when the extension is enabled. Overlapping stars look OK because blending is done in a linear color space.
4. High-precision (> 8 bits per channel) framebuffer: the framebuffer is linear, so there's no need for sRGB conversion. The gamma function in sRGB is only there to get the most dynamic range out of pixels with limited precision. With extra bits, a linear color space is fine (and floating point colors have a non-linear distribution of precision anyway). The tone mapping stage that maps high dynamic range colors to displayable ones subsumes any gamma correction.

Furthermore, we generally want a logarithmic mapping (i.e. linear with magnitude) of DSO brightness (including stars) to pixel value. One therefore has to ask next, how this expression for the pixel_brightness should be normalized after integration over the sky plane (x,y)? I would expect that the result should be b, the magnitude of the star (possibly including bolometric corrections).

Let's integrate your expression. In Cartesian pixel coordinates the sky-plane area element is dA = dx dy. Since the circular area in polar coordinates is A = Pi * r^2, the differential is simply dA = 2 * Pi * r dr by implicit differentiation. Using this, I get for the sky-plane integral of your above expression in polar coordinates

int ( 2 * Pi * r * pixel_brightness(r), r=0..infinity) = b*s + G*Gs,

while I would have expected b as the result. Here is my proposal of how to correctly normalize your Gaussians:

t00fri wrote:Furthermore, we generally want a logarithmic mapping (i.e. linear with magnitude) of DSO brightness (including stars) to pixel value. One therefore has to ask next, how this expression for the pixel_brightness should be normalized after integration over the sky plane (x,y)? I would expect that the result should be b, the magnitude of the star (possibly including bolometric corrections).

There may be some confusion, because I wasn't clear enough about what the brightness b of the star was. It's not the apparent magnitude: it's the brightness on a linear scale. You can see the pow(2.512, magnitude) expression in the code, but I should have made this explicit in my variable definitions. Anyhow, the mapping to pixel values isn't linear with magnitude, though the sRGB gamma correction does compensate somewhat. I'd always thought that a logarithmic mapping would be best, but visual results suggest that a linear mapping of apparent star brightness (not magnitude) to pixel brightness looks best (not pixel value, because the display's gamma means that pixel brightness varies nonlinearly with pixel value). That this should look most natural isn't a surprise, I suppose, since it's how we actually see stars. The problem is that monitors have a limited dynamic range. The sRGB gamma correction is critical, because it allows us to exploit the full dynamic range of the monitor; beyond the maximum display brightness, the spread of the Gaussian over multiple pixels provides the visual cue required to perceive the stars as extremely bright. At least this is my theory, and I'd be happy to hear supporting or contrary opinions.

Once you give up the 'canonical' logarithmic map between pixel value and DSO luminosity, I wonder how one could hope to render naturally the many orders of magnitude of DSO luminosity that exist in the Universe? The available range of the pixel value V = sqrt(r^2 + g^2 + b^2) in a standard 3x8bit color space seems very small indeed (apart from device-dependent gamma corrections).

In order to avoid misunderstandings, how do you define pixel brightness precisely? I suppose the usual definition of the pixel value is what I wrote.

Given the visual magnitude values from the scientific catalogs, what is really needed in the near future is a more universal approach to the corresponding relative pixel brightness for all DSOs in Celestia, not just stars. Suppose we consider a screen containing a mix of stars, globulars, galaxies, nebulae, ... Their overall absolute brightness on screen can be set as usual. But the relative brightness of DSOs should be defined clearly from physics considerations (in terms of catalog data) and not be up to personal gusto.

Of course, for general DSOs, we need to take into account another observable quantity besides app.magnitudes, namely their (central) surface brightness [mag/arcsec^2] along with proven luminosity profiles like deVaucouleurs or King. Surface brightness is crucial for the appearance of extended DSOs.

Finally, in your above brightness formula in terms of two Gaussians, I would like to know how the parameters b,s,G,Gs can be extracted in principle from scientific catalogs or reference imaging in a device-independent manner. The correct normalization of the integral over the sky plane will be an important consideration in this context.

t00fri wrote:once you give up the 'canonical' logarithmic map between pixel value and DSO luminosity, I wonder how one should have a chance to render naturally the many orders of magnitude of DSO luminosity that exist in the Universe? The available range of the pixel value V = sqrt(r^2 + g^2 + b^2) in a standard 3x8bit color space seems very small indeed (apart from device-dependent Gamma corrections).

In order to avoid misunderstandings, how do you define pixel brightness precisely? I suppose the usual definition of the pixel value is what I wrote.

Aha... 'Value' isn't the clearest term, since in some contexts it can also mean lightness. I was using value to mean the numeric value stored in the color channel of a pixel. By brightness, I mean the physical luminous intensity emitted by a pixel on a monitor.

As for reproducing a wide range of apparent magnitudes, I do not have a complete solution. Certainly, one thing that we need to do is to be more careful with the display gamma. Ignoring it gives inaccurate results and doesn't fully exploit the available dynamic range. The star rendering technique naturally handles overexposed stars by letting them bleed into neighboring pixels, so that brighter stars are larger. The glare Gaussian provides an additional cue. These same techniques don't work as well for DSOs. Then again, what is the range of surface brightnesses (per arcsec^2) for DSOs? It's not really so large, is it? The main challenge will not be making DSOs look OK relative to each other, but making them bright enough to show up at all without having stars vastly overexposed. Then again, overbright stars are what we're accustomed to seeing in all those nice Hubble images:

Given the visual magnitude values from the scientific catalogs, what is really needed in the near future is a more universal approach to the corresponding relative pixel brightness for all DSOs in Celestia, not just stars. Suppose we consider a screen containing a mix of stars, globulars, galaxies, nebulae, ... Their overall absolute brightness on screen can be set as usual. But the relative brightness of DSOs should be defined clearly from physics considerations (in terms of catalog data) and not be up to personal gusto.

Of course, for general DSOs, we need to take into account another observable quantity besides app.magnitudes, namely their (central) surface brightness [mag/arcsec^2] along with proven luminosity profiles like deVaucouleurs or King. Surface brightness is crucial for the appearance of extended DSOs.

Finally, in your above brightness formula in terms of two Gaussians, I would like to know how the parameters b,s,G,Gs can be extracted in principle from scientific catalogs or reference imaging in a device-independent manner. The correct normalization of the integral over the sky plane will be an important consideration in this context.

s, G, and Gs are completely dependent on the observing instrument. I think it's best to pick values that give visually pleasing results rather than try to mimic the performance of a particular piece of hardware. b is the brightness of the star (linear) derived from the magnitude. In my implementation, I work with brightnesses that are relative to a reference value determined by the setting of the faintest visible magnitude.

The following two images demonstrate the importance of using OpenGL's EXT_framebuffer_sRGB extension when drawing stars. The top image shows a region of the sky in Centaurus rendered without the EXT_framebuffer_sRGB extension. Conversion from linear color space to sRGB is handled by the shader. In the bottom image, the extension takes care of the linear to sRGB conversion and performs alpha blending in linear color space.

stars-srgb-off-crop.png

stars-srgb-crop.png

The second image is more realistic, because in the top image, non-linear pixel values are blended as if they were linear. Two things to note:

1. The brightest star in the image is Alpha Centauri, a double star system with components close enough that they appear to overlap exactly in this field of view. In the top image, the star is much too bright because the two components are blended in nonlinear space.
2. In the top image, the glare halos around the stars cause nearby stars to become unrealistically bright. This effect isn't present when blending is done in linear space.

The EXT_framebuffer_sRGB extension has two effects. First, the output of the pixel shader is automatically converted from linear color space to sRGB (effectively, it is raised to the power 1/2.2). Second, when blending is enabled, the current framebuffer color is read, converted from sRGB to linear, blended with the pixel shader output, then converted back to sRGB and written to the framebuffer. This proper implementation of alpha blending eliminates some familiar and distracting artifacts.
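Conceptually, with the extension enabled the blend stage behaves like this (a GLSL-style sketch of fixed-function behavior, not real shader code; sRGBToLinear/linearToSRGB are the standard conversions, and src is the fragment shader output):

```glsl
// What EXT_framebuffer_sRGB does at blend time, conceptually:
vec3 dst     = sRGBToLinear(framebufferColor);         // read back, linearize
vec3 blended = src.rgb * src.a + dst * (1.0 - src.a);  // blend in linear space
framebufferColor = linearToSRGB(blended);              // re-encode and write
```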

chris wrote:I haven't done anything with the appearance of nearby stars, though obviously that needs quite a bit of work too.

--Chris

Chris,

another related question to ask is how your stars behave, once the sensitivity is increased from visual to a 200 mm photographic lens, say. In Celestia the automag might be a relevant testing ground for this. Anyway, in this case, the distance to the stars remains large throughout.

As a possible reference, here is a nice photo of alpha Cen and beta Cen, taken with a 200 mm lens by Dr. Noël Cramer, Observatoire de Genève.

We see that the two bright stars really blow up in size with the glare regime being very conspicuous. An uncountable number of background stars is visible with the star size remaining tiny.

Where would the hardware-specific profile enter in your approach? Once I increase the amount of collected light by some device other than the eye, I need to know its spectral sensitivity! A CCD, for example is way more sensitive in the blue regime than normal film.

t00fri wrote:another related question to ask is how your stars behave, once the sensitivity is increased from visual to a 200 mm photographic lens, say. In Celestia the automag might be a relevant testing ground for this. Anyway, in this case, the distance to the stars remains large throughout.

We see that the two bright stars really blow up in size with the glare regime being very conspicuous. An uncountable number of background stars is visible with the star size remaining tiny.

In fact, with glare described as a Gaussian, extremely bright stars did not look like the ones in your image. With a Gaussian, the edges of the star get sharper as the brightness increases. This matches what we see for the core PSF, but the glare behaves differently.

The authors describe the PSF for the eye as the sum of three terms: a central Gaussian, a theta^-2 term, and a theta^-3 term (where theta is the angle from the center of the light source.) Here's a sample image of the Southern Cross where the Gaussian glare is replaced by G * 1/(1 + k*r^3), where G is the glare brightness, k is a constant that sets the glare falloff rate, and r is the distance in pixels. I deliberately did not use the angular distance, as it's more desirable to resolve stars at high zoom factors than to try and realistically reproduce all the quirks of a real imaging system (and I realize that's what I did with the diffraction spikes, but they're aesthetically pleasing).
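In GLSL, swapping the glare Gaussian for the inverse-cubic falloff might look like this (my own sketch with hypothetical names, not the committed code; r2 is the squared pixel distance from the star center):

```glsl
// Inverse-cubic glare: G / (1 + k * r^3), with r in pixels.
// Unlike a Gaussian, the edges don't sharpen as brightness increases.
float r     = sqrt(r2);
float glare = glareBrightness / (1.0 + glareFalloff * r * r2);
```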

crux.png

Where would the hardware-specific profile enter in your approach? Once I increase the amount of collected light by some device other than the eye, I need to know its spectral sensitivity! A CCD, for example is way more sensitive in the blue regime than normal film.

You could multiply the final pixel colors by RGB factors right before the linearToSRGB step.

Here's just such a sequence... Crux again, with a limiting magnitude changing in steps of one magnitude:

mag8.png

mag9.png

mag10.png

mag11.png

mag12.png

There are several free parameters:

- The limiting magnitude
- The 'saturation magnitude'--the magnitude at which a star centered exactly on a pixel will reach the maximum pixel value
- sigma^2 for the PSF
- The glare brightness; arbitrarily set now, but this should be some fraction of the total star light that gets scattered in the optical system
- The glare falloff

In the images above, the limiting magnitude and saturation magnitude both change, but the difference between the two remains constant. Increasing the distance between limiting and saturation magnitudes will allow a greater range of brightnesses to be shown without blowing out the bright stars. The tradeoff is that visual contrast between bright and faint stars is reduced.

Star colors here are derived from Johnson B-V color indices. The color index is converted to an effective surface temperature. From the black body temperature, I use a piecewise cubic approximation to get the CIE xy color coordinates. These are converted to CIE XYZ, transformed to linear sRGB, and then used in the pixel shader. It is important to avoid using gamma-corrected sRGB colors in the pixel shader, as the multiplication of pixel brightness and color must be done in linear color space. My earlier images suffered from this error and thus appeared too unsaturated.

For comparison with the images in the previous post, here's Crux again with a limiting magnitude of 12 and a saturation magnitude of 4. The bright stars of the Southern Cross are now not nearly so overexposed. Changing the difference between saturation magnitude and limiting magnitude amounts to gamma correction, while changing the limiting magnitude amounts to adjusting the exposure.