Future C++ Books
https://wforl.wordpress.com/2011/10/31/future-c-books/
Mon, 31 Oct 2011 11:23:14 +0000

I was looking to see if there were any new books out based on C++11. I didn't find any, as it's still early (I guess we'll have to give it a year or two), but I did find these coming out next year. Thought I'd share.

I'm personally looking forward to the new editions of "The C++ Standard Library" and "The C++ Programming Language". Going by this, though, I wonder just how much of their code will actually compile in Visual Studio when they do come out (assuming they use C++11).

Photon Tracing
https://wforl.wordpress.com/2011/10/31/photon-tracing/
Mon, 31 Oct 2011 10:16:45 +0000

Today I added photon tracing. It's only caustic photons at the moment, but speed and visual quality are much better now, and I've still yet to thread the first-pass photon generation stage. As mental ray does, I only trace photons at geometry in the scene that reflects or refracts, to save on wasted caustic photons. This is done by querying for all geometry with a specular-type material, retrieving their AABBs, and then shooting photons from the lights at these boxes. A lot of geometry really doesn't get bounded very well by an AABB, such as the teapot below, so I also added the option to keep shooting photons until the target count is matched, as opposed to just storing the ones that do hit. This takes a little longer, but it means less fiddling with the multipliers to get decent results.

The images below all use point lights.
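The "keep shooting until the target count is matched" idea can be sketched roughly as below. This is a minimal sketch with hypothetical types (`Vec3`, `AABB`, a caller-supplied hit predicate standing in for the real intersection test), not the renderer's actual code:

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// Pick a uniformly random point inside the box to aim a photon at.
inline Vec3 samplePointInAABB(const AABB& box, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    return { box.min.x + u(rng) * (box.max.x - box.min.x),
             box.min.y + u(rng) * (box.max.y - box.min.y),
             box.min.z + u(rng) * (box.max.z - box.min.z) };
}

// Keep emitting photons at the AABBs of specular geometry until `target`
// of them actually hit something, rather than fixing the emission count
// and storing only the hits. `hitsGeometry` is a stand-in predicate.
template <typename HitFn>
std::vector<Vec3> shootCausticPhotons(const std::vector<AABB>& specularBoxes,
                                      std::size_t target, HitFn hitsGeometry,
                                      std::size_t maxAttempts = 1u << 20) {
    std::mt19937 rng(42);  // fixed seed so the sketch is reproducible
    std::vector<Vec3> stored;
    std::size_t attempts = 0;
    while (stored.size() < target && attempts < maxAttempts) {
        for (const AABB& box : specularBoxes) {
            Vec3 aim = samplePointInAABB(box, rng);
            ++attempts;
            if (hitsGeometry(aim))          // only store photons that really hit
                stored.push_back(aim);
            if (stored.size() == target) break;
        }
    }
    return stored;
}
```

The loop trades emission time for a guaranteed photon count, which is why it needs less multiplier fiddling.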

Selection Rendering
https://wforl.wordpress.com/2011/10/31/selection-rendering/
Mon, 31 Oct 2011 10:15:34 +0000

Having to re-render the whole frame to see changes got annoying a long time ago, so I finally added the ability to render only the pixels inside a marquee selection.
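The core of marquee rendering is just restricting the pixel loop to the selected rectangle and writing back into the existing framebuffer. A minimal sketch with illustrative names (`Marquee`, a caller-supplied shading callback), not the renderer's actual API:

```cpp
#include <cassert>
#include <vector>

struct Marquee { int x0, y0, x1, y1; };  // inclusive pixel bounds

// Re-trace only the pixels inside the marquee; everything else in the
// framebuffer keeps its previous value.
template <typename ShadeFn>
void renderSelection(std::vector<float>& framebuffer, int width, int height,
                     const Marquee& sel, ShadeFn shadePixel) {
    // Clamp the selection to the image so a drag off-screen is safe.
    int x0 = sel.x0 < 0 ? 0 : sel.x0;
    int y0 = sel.y0 < 0 ? 0 : sel.y0;
    int x1 = sel.x1 >= width  ? width  - 1 : sel.x1;
    int y1 = sel.y1 >= height ? height - 1 : sel.y1;
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            framebuffer[y * width + x] = shadePixel(x, y);
}
```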

Final Gather 4
https://wforl.wordpress.com/2011/10/31/final-gather-4/
Mon, 31 Oct 2011 10:14:48 +0000

Added sampling of contours within the scene to get better irradiance data near edges. Also added AO to the final result to sharpen some of the edges that tend to bleed out when using FG.
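One common way to combine the two terms, which I assume is roughly what is happening here, is to modulate the interpolated FG indirect term by the occlusion factor; near creases AO tends towards zero and pulls the smoothly interpolated irradiance back down. The function name is illustrative:

```cpp
#include <cassert>

// Modulate the final-gather indirect term by an ambient occlusion factor.
// ao is in [0,1]: 1 = fully open, 0 = fully occluded.
inline float shadeWithAO(float direct, float fgIndirect, float ao) {
    return direct + fgIndirect * ao;
}
```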

Final Gather 3
https://wforl.wordpress.com/2011/10/31/final-gather-3/
Mon, 31 Oct 2011 10:14:11 +0000

Update to FG. I decided to use the noisy AO results of computing the FG points after all; it's not that bad when I also include the view/ray intersection points. The image below shows this update. The scene is from a Digital Tutors tutorial DVD.

Final Gather 2
https://wforl.wordpress.com/2011/10/31/final-gather-2/
Mon, 31 Oct 2011 10:13:32 +0000

Update to FG. The sample points are now based not only on view-ray intersections, but also on points laid out on the geometry (back-face culled). This produces nice results, as you get more points in complex areas, and the distribution density increases as the rays form more grazing angles with the geometry. There are still some artifacts, though: because the points are uniformly placed, patterns can be seen in the interpolation. Hopefully some randomness will sort this out.

I also tried using ambient occlusion to create a buffer which can be used to distribute points. It worked great, but is just very slow to compute. If I reduce the samples, I get a noisier buffer; if I reduce the resolution of the buffer, other artifacts show up. I'm still going to play around with it some more and try to reach a good compromise.
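Using an occlusion buffer to place points usually comes down to treating each pixel's occlusion value as an unnormalised probability: build a CDF over the buffer and draw point locations from it, so heavily occluded (complex) areas receive more samples. A minimal sketch of that idea, with illustrative names and returning flat pixel indices:

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

// Draw `count` pixel indices with probability proportional to each pixel's
// occlusion value, via a running-sum CDF and binary search.
std::vector<std::size_t> distributePoints(const std::vector<float>& occlusion,
                                          std::size_t count) {
    std::vector<float> cdf(occlusion.size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < occlusion.size(); ++i)
        cdf[i] = (sum += occlusion[i]);

    std::mt19937 rng(7);  // fixed seed for reproducibility
    std::uniform_real_distribution<float> u(0.0f, sum);
    std::vector<std::size_t> picks;
    picks.reserve(count);
    for (std::size_t i = 0; i < count; ++i) {
        float r = u(rng);
        // Binary search: first pixel whose cumulative weight reaches r.
        std::size_t lo = 0, hi = cdf.size() - 1;
        while (lo < hi) {
            std::size_t mid = (lo + hi) / 2;
            if (cdf[mid] < r) lo = mid + 1; else hi = mid;
        }
        picks.push_back(lo);
    }
    return picks;
}
```

The buffer itself is what is expensive here; the sampling step is cheap, which is why the cost trade-off in the post is all about the AO pass.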

Final Gather
https://wforl.wordpress.com/2011/10/31/final-gather/
Mon, 31 Oct 2011 10:12:40 +0000

Some early renders using my Final Gather implementation. The first shot is of just the indirect illumination with no interpolation. The second shot uses interpolation of 3 points, and the third shot 10 points. The final shot includes the direct illumination too. You can see the FG points I'm sampling in the top viewport (the green points), which are currently sampled uniformly across the viewing plane (using a ratio of 1/5 to pixels in the shots below). Unfortunately, even with many FG points and a lot of interpolation, the results still contain noticeable low-frequency noise. Hopefully this will be reduced by importance sampling the FG points as opposed to uniformly distributing them.
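The "interpolation of 3 points" / "10 points" step is, in essence, a weighted blend of the N nearest FG points at each shading point. A minimal sketch using inverse-distance weighting and a linear scan (a real implementation would use a spatial structure such as a kd-tree; all names here are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct FGPoint { float x, y, z; float irradiance; };

// Blend the irradiance of the n nearest FG points, weighted by inverse
// distance so closer points dominate.
float interpolateIrradiance(const std::vector<FGPoint>& points,
                            float px, float py, float pz, std::size_t n) {
    auto d2 = [&](const FGPoint& p) {
        float dx = p.x - px, dy = p.y - py, dz = p.z - pz;
        return dx * dx + dy * dy + dz * dz;
    };
    // Sort a copy by squared distance (fine for a sketch, slow for real use).
    std::vector<FGPoint> sorted(points);
    std::sort(sorted.begin(), sorted.end(),
              [&](const FGPoint& a, const FGPoint& b) { return d2(a) < d2(b); });

    std::size_t k = std::min(n, sorted.size());
    float weightSum = 0.0f, result = 0.0f;
    for (std::size_t i = 0; i < k; ++i) {
        float w = 1.0f / (std::sqrt(d2(sorted[i])) + 1e-6f);  // inverse distance
        weightSum += w;
        result += w * sorted[i].irradiance;
    }
    return weightSum > 0.0f ? result / weightSum : 0.0f;
}
```

With only a few points blended, gaps between FG samples show up as the low-frequency blotches described above; raising n smooths them at the cost of blurring genuine irradiance detail.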