CAP-Team wrote: I really like the new way of drawing stars. Is this going to be an extra option for CTRL+S or will it replace the older methods?

I'd like it to replace the older methods on capable hardware. After using this new star rendering technique, none of the old methods seem adequate.

--Chris

Chris,

What is "capable hardware" in this case? Can you give a minimum graphics chip generation? And how would it impact the FPS on that minimum hardware?

Any NVIDIA card from the GeForce FX on will work. ATI Radeon cards will work as long as they are from the X1xxx series or later (anything 2005 or later should be OK.) The new stars should actually be faster, since more work gets pushed onto the GPU. In fact, with some modifications to the octree processing, the new star code could be quite a bit faster. The amount of CPU work could be reduced to a bare minimum; I recall that when I last profiled the star code, a significant amount of time was spent computing apparent magnitudes for stars. This step could be eliminated altogether if the star octree traversal and shader were written to work with absolute luminosities instead of magnitudes.
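To illustrate the magnitudes-versus-luminosities point, here is a minimal sketch (hypothetical helper functions, not Celestia's actual code): the per-star log10() needed to produce an apparent magnitude can be avoided by comparing fluxes directly, since the visibility test m <= limitingMag is equivalent to flux >= flux(limitingMag).

```cpp
#include <cmath>

// Sketch only: hypothetical helpers showing why working in luminosity/flux
// space avoids a per-star logarithm. Not Celestia's actual code.

// Apparent magnitude from absolute magnitude and distance in parsecs
// (requires a log10 per star, per frame).
double apparentMagnitude(double absMag, double distPc)
{
    return absMag + 5.0 * std::log10(distPc) - 5.0;
}

// Relative flux corresponding to a magnitude: flux ~ 10^(-0.4 * m).
// The limit flux can be computed once per frame, after which each star's
// visibility test is a plain comparison with no logarithm involved.
double fluxFromMagnitude(double mag)
{
    return std::pow(10.0, -0.4 * mag);
}
```

Since flux decreases monotonically with magnitude, testing a star's flux against a precomputed limit flux gives the same visibility result as computing its apparent magnitude and comparing to the limiting magnitude.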

One catch with the new star rendering: it looks perfect only as long as you don't travel between stars.

If you select a star and then go to it, it brightens twice: first you approach the glare, and when that fills the screen, you approach the star itself. But I guess it's hard to find a decent way to transition from a big glare to the star itself.

I have been playing with Chris's code (big thanks for sharing!). The new star rendering is very nice and can be gorgeous with some tweaking. First, I got rid of the 1 Ly solar system size limit so that the transition between shader and mesh rendering is smooth. Then I played with the variables and found that brightnessBias is the most controllable. The screenshots below - /!\ big pics! - were made with brightnessBias set to 0.5, 0.7 and 0.8

EDIT Aug 12 --> brightnessBias back to (0.0f)

The first shot simulates a bare-eyes sight from someone's backyard:

The second shot simulates the same scene as seen through hypothetical binoculars:

Now, let's try with a digital camera on some sort of equatorial mount with an exposure time of a few minutes:

Next, this small telescope that we just found at the nearby pawn shop will allow us to do some zooming in:

How about fitting a CCD sensor to our telescope?

OK, enough experimentation for the time being and let's go for more eye candy with our pocket Hubble toy:

As a reference to t00fri's post re the Alpha Cen picture... Some not so bad shots... Celestia's sensor is doing pretty well and does not leak light. Done with brightnessBias(0.65f). EDIT Aug 12 --> brightnessBias back to (0.0f). What is weird is that there are more visible differences between fuzzy points and point rendering on my (big) display than on the smallish cropped screen captures.

Fuzzy points:

Points:

Scaled discs:

Scaled discs with a tad more glare would be very close. Will check that. BTW, we need more stars in Celestia!

Boux wrote: I have been playing with Chris's code (big thanks for sharing!). The new star rendering is very nice and can be gorgeous with some tweaking. First, I got rid of the 1 Ly solar system size limit so that the transition between shader and mesh rendering is smooth. Then I played with the variables and found that brightnessBias is the most controllable. The screenshots below - /!\ big pics! - were made with brightnessBias set to 0.5, 0.7 and 0.8

The new star rendering is only enabled when point stars mode is selected; the brightnessBias setting only affects the old star rendering. The adjustable parameters for the new star rendering are the limiting magnitude (adjusted with the square bracket keys) and the various settings in the renderShaderStars method. The three variables that control how star brightness is mapped to pixel values are the limiting magnitude, the saturation magnitude, and the just-visible pixel value. Here are the values from the code:

Roughly speaking, stars with an apparent magnitude equal to the limiting magnitude will have a pixel value equal to the just-visible value, and stars at the saturation magnitude will have a pixel value of 1.0. Gamma correction and the fact that the stars are Gaussians rather than points complicate things slightly, but this should give an idea of the physical meaning of the parameters.
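Read literally, that mapping can be sketched as a linear interpolation in magnitude space (a hypothetical simplification; the actual renderShaderStars code, with gamma correction and Gaussian point spread, is more involved):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch of the magnitude-to-pixel mapping described above.
// limitingMag, saturationMag, and justVisible mirror the parameters named in
// the post; the exact interpolation in Celestia may differ.
double starPixelValue(double appMag,
                      double limitingMag,    // faintest visible star
                      double saturationMag,  // star that reaches full brightness
                      double justVisible)    // pixel value at the limit
{
    // Interpolate linearly in magnitude and clamp to [0, 1]:
    // appMag == limitingMag   -> justVisible
    // appMag == saturationMag -> 1.0
    double t = (limitingMag - appMag) / (limitingMag - saturationMag);
    double v = justVisible + (1.0 - justVisible) * t;
    return std::clamp(v, 0.0, 1.0);
}
```

For example, with a limiting magnitude of 10, a saturation magnitude of 6, and a just-visible value of 0.05, a magnitude-10 star maps to 0.05 and a magnitude-6 star maps to 1.0.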

One thing missing from your screenshots is a background of faint stars. With brightnessBias set to a very high value, every star appears bright. For realistic images, there should be a rich field of faint stars in the image.

Chris, I must be dumb or very tired, or both. I messed up two directories with screenshot sets from various tests. I have uploaded all the images for my two previous posts again. Look at the new series from bare eyes to zooms into the galactic plane. Limiting magnitude at work. What was I thinking???

In the context of our Qt discussion, I compiled SVN 5043 with your new star patch, which worked fine. However, on my laptop (whose graphics card does not support the EXT_framebuffer_sRGB extension, as you know), I observe a rather dramatic decrease in performance (star style = Point):

What is the analogous situation with a card supporting EXT_framebuffer_sRGB? Anyway, it seems some further speed tuning might be necessary.

No doubt further speed tuning would be helpful. Here are the results I get on my MacBook with an NVIDIA GeForce GT 330M. I'm benchmarking full screen (1680x1050) with 4x AA. The field of view is 47 degrees and the observer is positioned in interstellar space at a point 9 light years from the Sun:

The new star code is faster than the old code when the limiting magnitude is 10, but significantly slower when the limiting magnitude is raised to 12. Meanwhile, the frame rate doesn't change at all in fuzzy points mode, and the performance drop with scaled discs is more modest. One variable we can eliminate immediately is the number of stars: the HIPPARCOS catalog doesn't have a significant number of stars fainter than magnitude 10, so increasing the limiting magnitude to 12 won't result in more stars being drawn.

In fuzzy points mode the size of the stars on screen is fixed; with new stars and scaled discs, the size increases with brightness. While the width of the Gaussian increases only very slowly, the glare halo can get very large with the new stars. Quite simply, the new star code draws a lot more pixels at limiting magnitude 12, consuming both additional pixel shader cycles and graphics memory bandwidth. When the limiting magnitude is at 10, the stars are drawn with few pixels, and the CPU efficiency of the new star code pushes its performance above that of the old code.

The new star code moves more of the rendering work from the CPU to the GPU. Part of this change is inevitable with the new algorithm, and part is deliberate. GPUs now have tens or hundreds of processors, so it makes sense to offload calculations there (especially when the results are bound for the screen anyway!) But if the GPU was already a bottleneck, moving more computation to the GPU will only make the problem worse.

I'm almost certain that the main reason you're seeing a performance drop with the new star code is that more pixels are drawn... You could be limited by either pixel shader throughput or graphics memory bandwidth. There are a couple of tests you can try:

* Modify the assignment of gl_PointSize in the star vertex shader so that all stars are 3 pixels. Bright stars won't look right, but it will give some indication of the performance impact of drawing larger stars.
* Increase the range between the limiting and saturation magnitudes by one or two magnitudes. This generally decreases the size of the star glare.
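The reasoning behind both tests is the same: fragment work scales roughly with the square of a star's on-screen point size. A back-of-envelope model (hypothetical numbers, just to show the scale of the effect):

```cpp
// Rough fill-cost model: a point sprite of size N covers about N*N pixels,
// so fragment shading and memory bandwidth grow quadratically with the
// glare radius.
double pixelsPerStar(double pointSizePx)
{
    return pointSizePx * pointSizePx;
}
```

By this model, a 30-pixel glare sprite costs about 100 times the fragment work of a 3-pixel star, which is why clamping gl_PointSize or shrinking the glare via a wider limiting-to-saturation range should both show up clearly in the frame rate.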

Boux wrote: No significant loss or gain of performance. I did the test several times and the results are pretty steady. Looks like the EXT_framebuffer_sRGB extension is doing its job.

When the EXT_framebuffer_sRGB extension is available, there are two benefits:
1. Better quality, since the GPU can do gamma-correct blending (i.e. blending is performed in linear space)
2. Better speed, since the pixel shader doesn't have to do a linear color space to sRGB conversion
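For reference, the conversion the shader must perform when the extension is missing is the standard sRGB transfer function, roughly as below (a sketch of the standard formula; Celestia's linToSRGB may use an approximation of it):

```cpp
#include <cmath>

// Standard linear-to-sRGB encoding (IEC 61966-2-1). Without
// EXT_framebuffer_sRGB, something like this runs per fragment, per channel;
// the branch and the pow() call are the extra cost discussed above.
float linToSRGB(float c)
{
    if (c <= 0.0031308f)
        return 12.92f * c;            // linear segment near black
    return 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}
```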

Although my system does support EXT_framebuffer_sRGB, I investigated the performance effect of not having the extension by forcing the linear to sRGB conversion in the shader. These results are with the same settings I reported in my previous message:

So... no observable effect at limiting magnitude 10. It probably *does* have a minor effect that is masked by some other performance bottleneck. At limiting magnitude 12, there's a definite slowdown. At this point, there are a lot of stars with very large, overlapping glare halos, so the extra pixel shader instructions in the linToSRGB function have a very noticeable effect.

I found something interesting running the MSI Afterburner monitoring tool. The test was done at the location Chris was using (9 light years from Earth, star rendering only, 45 FOV). Here is the graph: You can see that GPU load starts dropping at limiting magnitude 3 and reaches a minimum at limiting magnitude 10. At limiting magnitude 12, GPU load starts increasing again, up to a maximum at limiting magnitude 15.09. Something is happening here; it could be CPU load reaching a peak and the GPUs starting to wait for data. GPU temperature variations are consistent with the load. This is a CrossFire setup; the actual drop on a single GPU would likely be more in the 50-60% range.