
I don't know if that's entirely true, though. Carmack talks of slowly integrating raytracing technology into videogames, and this research into raytracing in games could prove useful later in videogame development. As I understand it, most advancements in videogame visuals today are optimizations of old research, so I wouldn't rain on their parade entirely.

You can only do so much raytracing with the original games. Quake, for example, doesn't include the map information needed to create visually detailed worlds - beyond that, you're guessing at how the world should look.

This is Intel, not Id. It's a tech demo to show off what Intel's technology is capable of. Ray tracing scenes in real time was absolutely unthinkable just a few years back (and honestly I'm quite impressed with what they've achieved here, since ray tracing is about the most expensive (though also most realistic) way to render a scene in 3D).

It's rendered in the cloud. If they had actually gotten more bang for the buck - i.e., made this run on conventional hardware - then I'd be interested. They're just doing something that has been done before, albeit maybe not in real time (but you never know, given these new OpenCL apps), running it on high-end servers, and piping it to a small laptop. I'm not sure how much of an achievement this is; we've all heard of gaming in the cloud before.

Exactly: using a bunch of servers to run a game on a laptop is neither impressive nor new. Plus, the game looks nothing like Wolfenstein, which by the way used to run fine on my 386SX - no raytracing there, of course. Where are the narrow grey or blue stone-walled corridors? And what is all that furniture doing in Castle Wolfenstein?

My 486 ray-traced perfectly. I don't understand why we're using processing power to show glass reflections in ray-traced sniper scopes when all the old monitors showed the reflections of people approaching from behind already!

According to some calculations I have done, based on Intel's roadmap and extrapolating the graphs (yes, you may shoot me), a year ago it looked like about 4-5 years until the full package (ambient occlusion and all that) would run on a single Intel chip.

But looking at more recent data, I'd say 3 years before a very expensive desktop computer can render it. Keep in mind: just rendering, excluding fluid animations and all that!
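
As a back-of-envelope check of that kind of estimate (the 18-month doubling time is my assumption, not the parent's, and the "4 servers" figure comes from the demo setup discussed in this thread):

```python
import math

# Back-of-envelope: the demo reportedly needs ~4 high-end servers today.
# If single-chip throughput doubles every ~18 months (an assumption),
# one chip catches up with 4 of today's servers in:
servers_needed_now = 4
doubling_months = 18
months = doubling_months * math.log2(servers_needed_now)
print(f"~{months / 12:.1f} years")  # prints "~3.0 years"
```

Two doublings close a 4x gap, which lands right around the 3-year guess above.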

This is the true advantage of raytracing. A rasterizer would have to deal with each and every triangle in that chandelier.

Rasterizers scale as O(triangles) while raytracers scale as O(pixels * log triangles). I don't remember if it was Microsoft Research or a group out of Intel, but 5 or so years ago they did some scalability testing and concluded that about 1 million polygons was the sweet spot where raytracing and rasterization were about equal in efficiency, using the per-iteration constants they derived.
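
A toy sketch of those two cost models (the constants are invented for illustration; only the O(triangles) vs. O(pixels * log triangles) shapes come from the comment):

```python
import math

def raster_cost(triangles):
    """Toy model: rasterization work grows linearly with triangle count."""
    return triangles

def raytrace_cost(triangles, pixels, c_ray=0.05):
    """Toy model: ray tracing work grows with pixels * log2(triangles)."""
    return c_ray * pixels * math.log2(triangles)

# With these made-up constants and a 1024x768 frame, double the triangle
# count until rasterization becomes the more expensive of the two.
pixels = 1024 * 768
tris = 2
while raster_cost(tris) < raytrace_cost(tris, pixels):
    tris *= 2
print(f"crossover near {tris:,} triangles")
```

With these particular constants the crossover lands near one million triangles; the point is only that a linear curve must eventually overtake a logarithmic one, wherever the real constants put the crossing.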

A rasterizer would have to deal with each and every triangle in that chandelier.

Or it could just do LOD with a geometry shader on the GPU.

When the polygon counts do get high enough, there will be no looking back.

The problem is that you don't just want high polygon counts, but high polygon counts for dynamic objects. And ray tracing has its strength in static objects; as soon as stuff moves and deforms, ray tracing runs into quite a few issues - not necessarily unsolvable issues, but that demo was rather lacking in that respect, as the particle system looked like complete garbage compared to today's games.

LOD doesn't solve the scaling problem. Let's say you have a system set up for 25% LOD versions of the geometry, but then you double the number of polygons in the geometry - well, you have also doubled the polygons in the LOD versions.

A LOD only needs enough polygons to look good on screen; if you add more detail that isn't visible from a distance, the LOD's polygon count doesn't need to increase at all. In practice this basically means that you only need to render polygons larger than a pixel, as anything sub-pixel is lost anyway, and if you look at the tech specs of a modern GPU you'll see that it can already render more polygons than you have pixels on the screen.
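
That "one polygon per covered pixel" idea can be sketched as a budget function (the function name, the 90-degree FOV and the minimum-hull clamp are all illustrative assumptions, not anything from the thread):

```python
import math

def lod_polygon_budget(object_size_m, distance_m, fov_deg=90.0,
                       screen_width_px=1920):
    """Rough polygon budget for one LOD level: about one polygon per
    covered pixel. Detail finer than a pixel is lost on screen, so adding
    sub-pixel detail to the source mesh does not raise this budget."""
    half_fov = math.radians(fov_deg / 2)
    viewplane_width = 2 * distance_m * math.tan(half_fov)  # world units
    projected_px = screen_width_px * object_size_m / viewplane_width
    return max(12, int(projected_px ** 2))  # clamp to a minimal hull

print(lod_polygon_budget(2.0, 5.0))    # close up: large budget
print(lod_polygon_budget(2.0, 100.0))  # far away: tiny budget
```

The budget depends only on projected screen coverage, which is the parent's point: doubling the source mesh's polygon count changes none of these numbers.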

I think the biggest one you missed is the much higher cost of anti-aliasing. For rasterizers, anti-aliasing techniques pretty much fall right out of the pipeline.

There are post-processing algorithms for faking anti-aliasing already in use.

> When the polygon counts do get high enough, there will be no looking back.

Academic fairy land. Polycounts will never get high enough. There is a physical limit on how many polygons an animator/modeller/rigger can actually handle in a sane way. The higher the number of polygons in a character, the more time an artist has to spend painting the skin weights and mapping the UV coords. The relationship between the number of polygons and the asset-creation time is exponential: double the poly count, and expect the asset-creation time to balloon.

1. It's extremely common in FPS games for the player model to be excluded from the player perspective. It really complicates things and usually doesn't look good without a lot of extra work.

2. That's not the car's shadow. The building shadow is the shadow you are seeing. You can't see the car's shadow because the car is mostly (if not entirely) shadowed by the building behind it. The viewing angles were not suited for showing a shadow cast by any directly illuminated portion of the car.

You are right, the player model is often excluded, but that isn't really necessary. id Software in particular has been known to show the player model's shadows and reflections (including mirrors) since Doom 3. And if you really want a game with not only a visible player model but actually pretty good player animation and physics, you should try Dark Messiah of Might and Magic.

This sounds like a John Lasseter talk I saw ages ago. Those guys are scientists, not 3D artists; they can't see why it's wrong. The job is done when the maths work. I've no idea why they don't hire someone in - most of these problems were identified and fixed in the pre-rendered market years ago. Maybe extra lights kill the frame rate too much.

The worst example of 3D I've seen so far would be the "shadows on a mirror" trick - nice.

I know they just started, but still... what is the point of this? There are no upsides to this rendering. It's slower (you need 4 servers) and it looks worse (they had no antialiasing, ugly smoke, no complex lighting). You can do some things like reflections, refractions and portals a bit easier than with other methods, but most of the time you don't need 100% correct reflections/refractions (simplified models work quite nicely), and security cameras were implemented in Duke Nukem 3D on i486 machines without problems.

As someone who has dabbled with raytracing before, I would have to agree. It's an interesting tech demo of something that's possible, but not really of practical use.
For instance, they showed the chandelier with a million polys - that's all well and good, but it's on the ceiling! If the game was actually being played, the player would never get close enough to see those clever refractions. (And even if they did, the demo shows the frame rate would drop to around 17-20 FPS).

Not to mention, who is actually gonna use this thing? Bandwidth ain't cheap for most folks, and uncapped connections are becoming a thing of the past. Finally, you have the fact that a good 90%+ of games are either made for the consoles first or designed to be "multiplatform", which means the $36 HD4650 I bought over a year ago plays just about every game out there at my LCD's native 1600x900, thanks to the consoles being so behind the curve.

It should scale well for multiple clients, particularly where surfaces are not perfect optical reflectors. If every surface scatters, then each client need only trace the last leg of every ray.

Not quite. The complexity of rasterisation is (very) roughly O(number of polygons * number of lights). The complexity of ray tracing is O(number of rays). The number of primary rays is the number of pixels (sometimes multiplied by 4 or 9). The number of secondary rays depends on the number of lights (you fire a ray into the scene and then a secondary ray from what it hits to each light). This means that increasing the complexity of the scene does not affect the ray tracing time very much, but increasing the resolution does. On the plus side, ray tracing gives you shadows and reflections for free. It also degrades more gracefully - you can get a lower quality scene quickly (just from one primary ray per pixel) and then add the details from secondary rays and extra rays if the user doesn't move. In contrast, rasterisation tends to just lower the frame rate.
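
The parent's ray accounting can be put numerically (assuming, as a simplification, that every primary ray hits geometry and spawns exactly one shadow ray per light, with no reflection or refraction rays):

```python
def rays_per_frame(width, height, lights, samples_per_pixel=1):
    """Primary rays plus one shadow ray per light for every primary hit
    (assumes every primary ray hits geometry; no reflection/refraction)."""
    primary = width * height * samples_per_pixel
    shadow = primary * lights
    return primary + shadow

# 720p with 4x supersampling and 3 lights. Note that the triangle count
# never appears, while resolution and sample count multiply straight through.
print(rays_per_frame(1280, 720, lights=3, samples_per_pixel=4))  # 14745600
```

Nearly 15 million rays per frame before any secondary bounces, and quadrupling the resolution quadruples it - which is exactly the "resolution hurts, scene complexity doesn't" trade-off described above.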

There is no point now. But in 10 years (maybe sooner) CPU speeds will have increased to the point that you don't need a high-performance cluster. It would be nice if, at that point, you could run a game in full detail without an advanced GPU.

If you have to start research about raytracing when the hardware is cheap enough you are too late.

And as for quality: the fun of a game has little to do with graphics quality. But it has to advance, or else we would still be looking at Pong-like graphics. People buy 1080p TVs at sizes where it is almost impossible to see the difference from 720p, but they still want the best quality.

PS: when they speak of Wolfenstein, I still think of the 1992 predecessor to Doom that was playable on a 286.

I'm seeing a fairly generic fantasy world and a bunch of nice rendering techniques that by now have pretty much all appeared in released games, on consoles even. Am I missing something?

Keep in mind that these demo videos were released in 2005 by a three person team working out of an apartment. It was met with pretty much universal acclaim [webcitation.org] back then and still holds up extremely well against any engine today.

Yes, they had some impressive tech and good artists, but about every experienced game developer in the field (including me) realized that super-ambitious projects started by a handful of indies in a basement rarely make it to the shelves nowadays.

If you want to create a hit as an indie startup, you make something like Braid or Limbo.

but about every experienced game developer in the field (including me) realized that super-ambitious projects started by a handful of indies in a basement rarely make it to the shelves nowadays.

There are some exceptions. Valve has a history of buying up these groups and hiring the original people. Day of Defeat, Portal, Team Fortress all started that way, and they have done the same with other small groups as well. One more reason I'm a fan of Valve, they buy talent and put them to work, giving them the opportunity to expand their original dreams.

Absolutely ridiculous triangle counts and shaders that most of us wouldn't even dream of. Of course, they could have made better use of all that, so that people would actually get enthusiastic about it.

The chandelier part displays 40 fps in the top right, but you can clearly see on screen that it's more like 15. Not to mention the unimpressive difference between the RT and the normal renderer. I was expecting something more true to life.

The very idea of using the cloud to render a FPS is preposterous and will never work in practice, for obvious latency reasons.

How else will you start training for the moment when that computing capacity is on every PC?

You use the cloud, ignore the lag, and build an engine ready for the generation of computers that will come in five or ten years. You'll lose a lot of your research investment, but anyone who starts studying RT at that point will be years behind you.

I don't know if latency is any sort of a problem. You're talking of a LAN connection. This technology is not meant to render stuff somewhere out there on the intertubes. It needs to be in the same building, or on the same campus.

Funny you mention that...I've actually played twitch FPS games on OnLive, a cloud gaming service, and they were playable. If the cloud gaming servers were organized such that nearly all subscribers could reach a server bank within 100 cable miles the latency from the cloud would be negligible.

The surveillance station. At a wall in the game you see twelve screens that each show a different location in the level. This can be used by the player to gain a tactical advantage. Have you ever seen something similar in a current game? Again - probably not.

Someone doesn't play many games. Many 3D engines, for well over 10 years, have had some means of rendering to a texture and throwing it up on a poly in the game world. I'm going to say that hardware-accelerated means of doing this have been common for years.

You had to 'use' the monitor to view it. I think Unreal (or at least Unreal Tournament) was the first engine that managed to render a 3D scene back to a texture and display it in-game. And that's more than 10 years old.

They don't like the whole GPU market because the more powerful a GPU you have, often the less powerful a CPU you need. This is particularly true now that GPUs are out and out stream processors. Intel sees this as a threat, and AMD has made it a more explicit threat with their fusion idea (combined CPU/GPU chips).

Well, as a result of this, Intel has done various things, some useful (like making extremely fast processors) and some not. This is one of the "not" things. They have been trying to get people interested in doing graphics on the CPU.

Actually my first reaction was the "Lost Coast" demo that Valve put out with Half-Life 2 a few years ago. It had dynamic reflections and refractions in animated water surfaces and stained glass windows with refractions.

And it didn't need four high-powered graphics servers to keep it above 10 FPS, either.
=Smidge=

Everybody agrees that ray tracing is just awesome, and I at least think it's the future of 3D computer graphics. But there is only one big 3D hardware vendor left; AMD is more a CPU vendor that tries to get into the 3D market because Intel is too big in the CPU market, and Intel only has small on-board graphics chips. Will we see ray tracing from Nvidia anytime soon?

I sure hope that Intel or AMD will try to take over the 3D computer graphics market with their CPU know-how (ray tracing mostly uses the CPU).

Actually, no, the raytracing shown in that demo isn't awesome; it is rather primitive and ugly. You can render shiny spheres and static high-polygon objects with it, but basically nothing else.

The stuff you need to make graphics look good is global illumination, and that demo had none of it. Today's games, on the other hand, are starting to get there: you can already find realtime ambient occlusion in some games, you can get soft shadows, and there have even been tech demos showing realtime photon mapping.

You missed my point. Don't you think the stuff that today's games do is pretty math-intensive? But the math gets done so fast because we have specialized hardware for it.

So I ask: when will we have the hardware to do ray tracing like we now have the hardware to do triangles? When will ray tracing become mainstream? Because I think the future is in ray tracing (maybe I'm wrong).

If the "Future of Graphics Rendering" was a job being advertised and potential candidates were asked to submit their Resume, then Intel's would be very thin.

The job is asking for 5 years experience, with a tertiary qualification, preferably post grad.

In Graphics, Intel has completed High School and done 2 years admin temping.

And yes, I am still bitter about the Intel i740 Graphics Card [wikipedia.org]. Intel are just great at snowjobs, even suckering John Carmack in a very ancient .plan [floatingorigin.com] update:
"Good throughput,

The reality turned out to be what this story will be - smoke and mirrors.

The i740 was OK once you stuck enough video memory on the card: what crippled it was Intel's crazy desire to pull textures over the AGP bus when other cards had large amounts of 128-bit VRAM. I presume the intention was to increase AGP takeup, but the reality was that it made AGP look bad when compared even to older 3dfx cards on PCI.

It's interesting to see what a game looks like with raytracing, but I don't see any practical use for this tech until they can make it happen in a normal GPU.

The problem with ray tracing is that if you have a 1280x720 display, then you're going to have to fire off at least 921,600 rays, which must be intersected with objects and which in turn split into more rays as they reflect/refract around the scene. In a complex scene you may end up firing millions of rays. And I say "at least" because at 1 ray per pixel you get no anti-aliasing.

Just thinking about the bandwidths is interesting. Start with 150E6 rays per second. Assume that traversing the binary space subdivision data structures takes, say, 256 bytes, along with another 256 bytes worth of data for the polygon. That requires ~77 gigabytes/s of memory bandwidth, sustained - in practice, the bandwidth of the 6 fastest DDR3 sticks. And your algorithms had better keep the CPU's pipelines full and do proper prefetching, or else cache misses will have you for a day's worth of meals.
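
The arithmetic behind that figure, using the comment's own assumed per-ray byte counts:

```python
# Bandwidth estimate from the comment's assumptions:
# 150 million rays/s, each touching ~256 bytes of BSP traversal data
# plus ~256 bytes of polygon data.
rays_per_second = 150e6
bytes_per_ray = 256 + 256  # traversal + polygon data (assumed)
bandwidth = rays_per_second * bytes_per_ray
print(f"{bandwidth / 1e9:.1f} GB/s sustained")  # prints "76.8 GB/s sustained"
```

So the ~77 GB/s figure checks out, and that's sustained throughput under the optimistic assumption of perfectly streamed accesses.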

"The surveillance station. At a wall in the game you see twelve screens that each show a different location of the level. This can be used by the player to get a tactical gaming advantage. Have you ever seen something similiar in a current game? Again - probably not"

Yes: in Duke Nukem 3D... over 15 years ago. And again in about 40 other FPS games that followed, including the Unreal series and more than a few Quake maps, especially in capture-and-control maps.

"There is nothing more amusing to watch then some young kid discover something old and think it is new" - That quote in action.

Red Faction had security cameras in 2001. Multiple screens on-screen, but I don't remember if you could change them. Half-Life 2 (or one of the episodes) had security cameras, too, that you could change, but I don't think there were more than one at a time. (I don't think it's an engine limitation.)

Multi-million-dollar graphics render farms, and we still can't draw convincing fire or trees, or animate a human walking smoothly (even with motion capture you often "see the join" between one action and another).

Motion capture is a crutch. What you really need for fluid motion of humans and other animate models is motion control akin to what they have in robots, say in Big Dog [youtube.com]. What motion capture does is basically leave the dynamics and control to a wetware system. It's a hack at best.

The game engine needs a kinetics+kinematics simulator, and a controller like what you'd have in a real-life robot. If you push this idea forward, it enables you to do very realistic tricks. Say you get an extra-strength pill - all it takes is changing a parameter in the controller.

The surveillance station. At a wall in the game you see twelve screens that each show a different location in the level. This can be used by the player to gain a tactical advantage. Have you ever seen something similar in a current game? Again - probably not.

Uhm... Counter-Strike had this in one of its levels like 10 years ago.

Sounds like the programmers are way too used to the dominant rendering model. One of the advantages of ray tracing is that you don't have to build everything out of triangles; you can have real continuous curves. For example, a ray-traced sphere can be an actual sphere. A lot of objects that require thousands of triangles with current GPUs can be produced using a much smaller number of objects via constructive solid geometry in a ray-tracing context. It's analogous to the difference between raster and vector graphics.
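
The "actual sphere" point is just the classic analytic ray/sphere test - a small sketch (illustrative code, not anything from the demo):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Analytic ray/sphere intersection: solve |o + t*d - c|^2 = r^2
    for the nearest t >= 0. The sphere is mathematically exact - no
    tessellation into triangles anywhere."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                        # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)   # nearer root first
    if t < 0:
        t = (-b + math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

# A ray along +z hits a unit sphere centered 5 units away at t = 4.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

One quadratic replaces the thousands of triangle tests a tessellated sphere would need, and the silhouette stays perfectly round at any zoom - the raster-vs-vector analogy in code.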

It isn't, but good-looking brittle fracture shouldn't be too hard to model in other ways (simple finite-element arrays with a big mesh) if you don't care about true realism. Simple elastic and plastic deformation on a macro scale (e.g. bending a bar) shouldn't be that hard either, because you'd be dealing with much bigger elements (cubes, cylinders, etc.) than the polygons you have to worry about with lighting.

BSPs (in the Doom sense) had to be aligned to polygon planes and required splitting geometry to enable nice tricks such as front-to-back rendering (for transparency), not to mention collision detection. It's rare to see BSPs used in such restrictive ways anymore (in games at least); in fact they have largely fallen out of favour. Portals, kd-trees, octrees, AABB trees, OBB trees, ABT trees and quadtrees are more common approaches these days (effectively most of them are axis-aligned).

Moore's Law has become an expectation, and thus a design method from a marketing point of view. This is particularly visible in hard disks: they release a hard disk that has been designed to scale up but only contains a single platter, then a little over a year and a half later the same hard disk is released with a second platter. The expectation allows them to get ahead, while the previous iteration is slowly allowed to reach its full potential. Then they work on the next thing while the current one matures.

That's when you stream. I don't know offhand how binary space partitioning meshes with caching. Unless someone shows me research where they benchmarked it and showed it not to be an issue, I'll assume that 100-130 GB/s may not be enough. Those figures are for streaming reads/writes; random access will be slower, perhaps 2x slower, perhaps more. I expect very little cache coherency, so pretty much every 2-3 accesses to a cache line will end in an eviction. The raytracing needs, say, 50-100 GB/s of random-access bandwidth.