I think that instead of shooting a ray out of each pixel, you could launch a trapezoid out of each scanline and cut down on a crapload of intersection math. Wherever a trapezoid collides, it would be split into three trapezoids: one leading to the face it collides with, and two continuing on either side.

Creating a pick list based on a scan line, then restricting the ray-trace (first collision only, no reflections) to checking that pick list would probably speed things up a bit, as you wouldn't keep collision testing objects that weren't anywhere near the scan line.

It works for ray-casting. I check each vertical scan line for movable objects that might intersect and make a list. Then when rendering the scan line, I only need to check a subset of objects for collisions.

(Actually I first check that the object is in front, using a dot product, then check that it is within draw range. If it passes these tests then it gets added to my first drawList (same as for normal facet-based 3D stuff). Then for each scan line, I filter the first drawList, based on ray casting collisions, to create a second drawList of objects I need to overlay on the vertical scan line. I have a third inner loop, where I draw the terrain at various depths and then overdraw the objects between the terrain billboards. It is the existence of this last inner loop which makes it worthwhile for me; otherwise it wouldn't save any time. Used in two of my 4k games.)
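The two-stage filtering described above could look something like this. This is only a sketch: the `Sprite`, `screenMinX`/`screenMaxX` names are hypothetical, and it simplifies the per-scan-line test to a projected horizontal extent rather than an actual ray-casting collision.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sprite with a precomputed projected horizontal extent
// (in screen columns). Assumes the in-front and draw-range tests have
// already been passed when it was added to the first drawList.
class Sprite {
    final int screenMinX, screenMaxX;
    Sprite(int minX, int maxX) { this.screenMinX = minX; this.screenMaxX = maxX; }
}

class ScanlineFilter {
    // For one vertical scan line (column x), build the second drawList:
    // only sprites whose projected extent covers that column survive.
    static List<Sprite> filterForColumn(List<Sprite> drawList, int x) {
        List<Sprite> out = new ArrayList<>();
        for (Sprite s : drawList) {
            if (s.screenMinX <= x && x <= s.screenMaxX) {
                out.add(s);
            }
        }
        return out;
    }
}
```

The inner rendering loop then only checks the filtered list when overdrawing objects between the terrain billboards, which is where the saving comes from.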

I wonder what would happen if, for true pixel-by-pixel ray-tracing, for each visible object I did a collision check by vertical scan line and a separate collision check by horizontal scan line, resulting in each object having a screen draw box. I suspect that the time taken for each pixel-ray to check the intersection of the two lists would be longer than the time saved. Maybe using a hash map... Not sure. Interesting idea, though I suspect the raytracing fraternity have already dug up any worthwhile optimisations.

It should help a bit. There is a technique called a "vista buffer" which should be more efficient though, if you have already implemented a bounding box tree:

Quote

Additionally POV-Ray uses systems known as vista buffers and light buffers to further speed things up. These systems only work when bounding is on and when there are a sufficient number of objects to meet the bounding threshold. The vista buffer is created by projecting the bounding box hierarchy onto the screen and determining the rectangular areas that are covered by each of the elements in the hierarchy. Only those objects whose rectangles enclose a given pixel are tested by the primary viewing ray.
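A minimal sketch of the per-pixel test the quote describes, assuming the 3D bounding boxes have already been projected to screen-space rectangles. `ScreenRect` and `VistaBuffer` are made-up names, not POV-Ray's actual data structures.

```java
import java.util.ArrayList;
import java.util.List;

// Screen-space rectangle covered by one projected bounding box.
class ScreenRect {
    final int x0, y0, x1, y1; // inclusive pixel bounds
    ScreenRect(int x0, int y0, int x1, int y1) {
        this.x0 = x0; this.y0 = y0; this.x1 = x1; this.y1 = y1;
    }
    boolean contains(int px, int py) {
        return x0 <= px && px <= x1 && y0 <= py && py <= y1;
    }
}

class VistaBuffer {
    // One rectangle per object, indexed the same as the scene's object list.
    final List<ScreenRect> rects = new ArrayList<>();

    // Indices of objects whose rectangle encloses this pixel; only these
    // need a real intersection test against the primary viewing ray.
    List<Integer> candidates(int px, int py) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < rects.size(); i++) {
            if (rects.get(i).contains(px, py)) out.add(i);
        }
        return out;
    }
}
```

A real implementation would keep the hierarchy instead of a flat list, so whole subtrees are rejected at once when a parent rectangle misses the pixel.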

You still need to calculate refracted and reflected rays per pixel though (at least if you have curved surfaces), as well as the texture and lighting. The intersection checks are surprisingly fast in many cases, compared to texture and light.

Afterwards each shape can be decomposed into rays for rasterization. The idea is that you can cut down on intersection math by doing intersection tests on a slice of the frustum corresponding to each scanline.

I had originally come up with an idea similar to beam tracing, but I didn't know it was called beam tracing, so googling "volume tracing" didn't get any results. It looks like this should work.

If you have polygon-faceted surfaces, raytracing isn't the best rendering method IMO. Raytracing can only excel if you have mathematically defined bodies (sphere, cylinder, plane, box, prism) - e.g. instead of approximating a sphere with a 500-triangle mesh, you have a real sphere (and only one intersection check for the sphere).
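To illustrate the point: the standard analytic ray-sphere test is one quadratic, versus hundreds of triangle tests for a mesh. A sketch (not from the post above), using the usual formulation with a normalized ray direction:

```java
// Analytic ray-sphere intersection: substitute the ray o + t*d into the
// sphere equation |p - c|^2 = r^2 and solve the resulting quadratic in t.
// Returns the nearest positive hit distance, or -1 if the ray misses.
class RaySphere {
    static double intersect(double ox, double oy, double oz,     // ray origin
                            double dx, double dy, double dz,     // normalized direction
                            double cx, double cy, double cz,     // sphere center
                            double r) {                          // sphere radius
        double lx = ox - cx, ly = oy - cy, lz = oz - cz;
        double b = 2 * (dx * lx + dy * ly + dz * lz);
        double c = lx * lx + ly * ly + lz * lz - r * r;
        double disc = b * b - 4 * c;      // a = 1 because d is normalized
        if (disc < 0) return -1;          // miss
        double sq = Math.sqrt(disc);
        double t = (-b - sq) / 2;         // nearer root
        if (t > 0) return t;
        t = (-b + sq) / 2;                // origin may be inside the sphere
        return t > 0 ? t : -1;
    }
}
```

A 500-triangle approximation would need up to 500 ray-triangle tests (or a per-mesh acceleration structure) to answer the same question, and would still only give a faceted silhouette.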
