
An anonymous reader writes "As Slashdot readers know, Intel's research project on ray tracing for games has recently been shown at 1080p, using eight Knights Ferry cards with Intel's 'Many Integrated Core' architecture. Now a white paper goes into more detail, documenting near-linear scaling for the cloud setup with 8 cards and giving details on the implementation of 'smart anti-aliasing.' It will be interesting to see how many instances of Intel's next MIC iteration — dubbed Knights Corner, with 50+ cores — will be required for the same workload."

So you're implying that Intel, a company notorious for building shoddy GPUs, used Wolfenstein for their demonstration not because Wolfenstein is the latest and greatest game that requires good hardware, but because it shows their cards in a good light?

Wrong. Nearly the entire raster pipeline would be ignored for ray tracing, and you don't really need a lot of shading units for the rest (don't need to multitexture in the background and whatnot). The main use for the GPUs in ray tracing would be collision detection, which could be written into shaders as long as the entire scene was loaded into each GPU's memory, so Wolfenstein is actually a good choice - a large scene would have problems because of memory constraints. Ray tracing works very well with lots of parallel CPUs, but usually is memory constrained (dependent on memory access more than anything else) in that scenario, so splitting it off onto multiple GPUs is a way to remove that constraint, but basically it still works like a lot of parallel CPUs accessing the same scene in memory.

Not to mention that those 8 cards probably don't have as much raw pixel-pushing power as a pair of SLI or CF cards from the other guys; really not that impressive. Frankly, Intel should have tried to buy out Nvidia years ago. The thing that gimps their chips more than anything is the crappy IGPs, but instead they shoot themselves in the face by killing the Nvidia chipset business, thus making Atom completely worthless. With ION, Atom was actually a nice little unit, but with Intel IGPs it is horrible.

That is one place you really have to give AMD credit: they saw the direction the market was going with low-power mobile devices, bought ATI, which makes excellent chips, and ended up with Brazos, a dual core with HD graphics that draws less than 18 W under full load. I mean, sure, 8 cards might be nice for a render farm, but how many people are gonna be willing to pay that electric and cooling bill with prices constantly going up?

While I appreciate the effort, Intel's answer to graphics has never been very good, and if they need to throw 8 fricking cards at it, I don't even want to know how many watts this sucker is pulling. The future is obviously mobile, and while Nvidia has Tegra and AMD has Brazos, what's Intel got that can compete with that level of graphics in that small a wattage envelope?

Spot on! ION was the best thing that ever happened to the Atom platform. Really, it was the only thing that made it into a usable HTPC or ultra-low-power desktop. They really need to stop shitting down NVidia's throat because they are precisely the kind of aggressive, performance-driven company that would fit alongside Intel's model.

Try the new Brazos, friend, especially the E350 and E450 units; they are what Atom+ION USED to be. I have built several low-power office units and HTPCs out of them and was impressed enough by the performance that I sold my full-size laptop for a 12-inch EEE with an E350. I get nearly 6 hours watching HD videos, it never gets hot, video is smooth, hell, I even play L4D and GTA:VC (I could play the newer ones but don't care for them) on it and it doesn't skip a beat. Also because AMD isn't in bed with Microsoft or tryi

Note that these are raytracing cards, not rendering. Raytracing is a very different technique which can do cool effects like refraction through glass (shown in the chandeliers and scopes), jaw-dropping water, and realistic lighting effects that rendering cards simply cannot do.

It's also much more demanding on hardware. One of the big drawbacks is it requires a lot of scattered reads out of memory making caching much less effective. You need tons of bandwidth to low latency memory to make it happen. We're still a very long ways out from having this possible in reasonably-priced consumer GPUs.

Rag on Intel for their integrated graphics if you want (though I consider them a good non-gaming graphics chip with very good open source support), but these cards are not related to those in any way. These are full-featured x86/x64 processors with 32 cores per die. In other words, they created a 256-core system capable of software-raytracing the whole thing at high resolutions.

That is quite an accomplishment, and rest assured, it is top-tier performance in the raytracing world. This isn't meant to be a practical gaming system; this is pretty clearly being done by Intel to show off the benefits of their many-core processors, and it is an impressive show.

The problem with these demos is that they use ray tracing like we did in 1980 (i.e. Whitted style). All computations are highly coherent and efficient. As soon as you want more natural rendering, with diffuse illumination etc., parallelization doesn't scale proportionally anymore. Rays become heavily incoherent, memory access scatters, and you get cache misses and so on. So the real feat would have been if they showed a 7.7x speedup with diffuse global illumination.
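
For illustration, here is a rough Python sketch (mine, not anything from the white paper) of why those rays go incoherent: primary rays through neighbouring pixels are nearly parallel, while cosine-weighted hemisphere samples for a diffuse bounce point anywhere, so secondary rays scatter both in direction and in the memory they touch.

```python
import math
import random

def primary_ray_dir(px, py, width, height, fov_deg=60.0):
    """Camera-space direction of the ray through pixel (px, py); adjacent pixels
    produce nearly identical directions, which is why primary rays are coherent."""
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) / 2.0)
    x = (2.0 * (px + 0.5) / width - 1.0) * aspect * scale
    y = (1.0 - 2.0 * (py + 0.5) / height) * scale
    norm = math.sqrt(x * x + y * y + 1.0)
    return (x / norm, y / norm, -1.0 / norm)

def cosine_hemisphere_sample():
    """Random diffuse bounce direction in the hemisphere around +z (a real tracer
    rotates this into the surface frame); successive samples are uncorrelated."""
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(max(0.0, 1.0 - u1)))

if __name__ == "__main__":
    print(primary_ray_dir(100, 100, 1920, 1080))  # these two are almost the same ray
    print(primary_ray_dir(101, 100, 1920, 1080))
    for _ in range(3):                            # these three go all over the hemisphere
        print(cosine_hemisphere_sample())
```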

And this is illustrated beautifully by the pretty shoddy visuals that Intel are showing off. These graphics would have been jaw-dropping 15 years or so ago, but frankly today they look amateur and desperately outdated. Reflection and refraction are nice gimmicks, but it'll be a rare game that actually makes use of them to improve the gameplay. For all the other titles out there these effects are usually faked to a high enough quality that it doesn't make much of a difference to the gamer.

Judging from my limited experience with 3ds Max and real diffuse material pipelines, I would suggest that state-of-the-art RT algos won't come into the real-time scene for, ohh, at least two decades. That is for real implementations. AFAIK you can still 'hack' reflection and refraction behaviors to kind of simulate true diffuse refractions.

I can remember blowing render jobs' times up a thousandfold through misuse of diffuse reflections.

I believe you're mistaken. Raytracing IS the technique where you're tracing light much the way it happens in the real world. The techniques usually used in GPUs are quite backward. It hasn't really been all that downhill, though; they've gotten pretty good at faking a lot of the effects, but when it comes to things like shadows, local lighting, radiosity, and refraction, Raytracing is where it's at.

Most raytraced images are 100% in focus, which is very different from what we expect from a traditional image, so the image appears fake. It is the same effect that a lot of people have upon seeing high-definition movies on a good TV: the enhanced frame rate and better contrast make the image different from what was expected, so the viewer ends up with a negative reaction to what are meant to be positive enhancements.

I raytraced a simple scene once where I solved the rendering equation analytically (to see if it was practical), and the result was badly out of focus because I had neglected to include a lens (I did not expect it to be that realistic). Better raytracing does in fact produce non-perfectly-focused images, even with approximate solutions (although it is possible that said improvements to raytracing have different technical names than "raytracing").

Nah, it's raytracing; you just scatter the rays you shoot for each pixel taking into account the lens's circle of confusion (and shoot more rays overall), with biases for things like the number of leaves on the camera's iris for extra realism.
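
For the curious, here is a small Python sketch of that idea under a thin-lens model; the names (aperture_radius, focal_distance) and the sampling details are illustrative, not anything from the posts above.

```python
import math
import random

def thin_lens_ray(pinhole_dir, aperture_radius, focal_distance):
    """pinhole_dir: unit camera-space direction the pinhole camera would have used.
    Returns (origin, direction) for one depth-of-field sample."""
    # the point this ray must pass through, taken at focal_distance along the pinhole
    # ray (a plane-of-focus variant would divide by the z component instead)
    focus_point = tuple(focal_distance * c for c in pinhole_dir)
    # uniform sample on the lens disc; a real iris with N blades would sample a polygon,
    # which is where the "number of leaves" bias comes in
    r = aperture_radius * math.sqrt(random.random())
    theta = 2.0 * math.pi * random.random()
    origin = (r * math.cos(theta), r * math.sin(theta), 0.0)
    d = tuple(f - o for f, o in zip(focus_point, origin))
    length = math.sqrt(sum(c * c for c in d))
    return origin, tuple(c / length for c in d)

# Averaging many such samples per pixel keeps the focal plane sharp and blurs
# everything else, reflections and refractions included.
print(thin_lens_ray((0.0, 0.0, -1.0), aperture_radius=0.05, focal_distance=3.0))
```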

Most of the time a 2D DoF effect using a rendered zbuffer is just fine, but raytracing will give you proper defocusing of reflections and refractions, as well as showing objects that would be completely obscured in the in-focus render.

In raytracing, there is no requirement to trace rays from the camera or the light source; either direction is valid. Actually, a variant called bidirectional path tracing traces rays from both the camera and the light source. This is also the approach of photon mapping. Raytracing in itself is simply the process of modeling the physics of light transport.
There are, however, some limitations. For instance, it's not really suited to modelling diffraction, but that's hardly limiting.

If you use the simplistic style of raytracing then yes, but there are many additions which make it possible to do extremely realistic scenes.

The fundamental problem is that ray tracing is only half of the puzzle. Typically you trace from each pixel on a "screen" into the 3d scene and look at where that ray intersects with an object. You then calculate the color of the object at that point and this becomes the color of that pixel on the screen. (In a real scenario you typically calculate multiple rays per pi
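
To make that loop concrete, here is a bare-bones Python sketch: one ray per pixel, one sphere as the entire "scene", and a flat shade on hit. Purely illustrative; a real tracer adds proper shading, shadow rays, and multiple samples per pixel.

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None (direction is unit length)."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

width, height = 80, 40
sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
for y in range(height):
    row = ""
    for x in range(width):
        # map the pixel to a point on an image plane at z = -1 in front of the camera
        u = (2.0 * (x + 0.5) / width - 1.0) * (width / height) * 0.5
        v = (1.0 - 2.0 * (y + 0.5) / height) * 0.5
        norm = math.sqrt(u * u + v * v + 1.0)
        direction = (u / norm, v / norm, -1.0 / norm)
        t = hit_sphere((0.0, 0.0, 0.0), direction, sphere_center, sphere_radius)
        row += "#" if t is not None else "."   # "compute the color" reduced to hit/miss
    print(row)
```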

Photon mapping is a raytracing technique. Photons are traced from the light source, precisely the same way that rays are traced from the camera. Where photon mapping differs is that the contribution of the photons is computed by local density estimation. This reduces the noise in the output, at the expense of introducing bias (i.e. blurriness).
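
A deliberately naive Python sketch of that density-estimation step (real photon maps use a kd-tree and spectral power, not a linear scan): gather the k photons nearest the shading point and divide their summed power by the area of the disc they cover. Larger k smooths the noise away but blurs detail, which is exactly the bias mentioned above.

```python
import math

def radiance_estimate(photons, point, k=50):
    """photons: list of (position, power) pairs; point: 3-tuple shading position."""
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, point))
    nearest = sorted(photons, key=lambda ph: dist2(ph[0]))[:k]
    if not nearest:
        return 0.0
    r2 = dist2(nearest[-1][0])                  # squared radius of the gathering disc
    total_power = sum(power for _, power in nearest)
    return total_power / (math.pi * r2) if r2 > 0.0 else 0.0

photons = [((0.1 * i, 0.0, 0.0), 0.01) for i in range(200)]  # a made-up photon list
print(radiance_estimate(photons, (0.0, 0.0, 0.0), k=20))
```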

I think the term you're looking for regarding "normal" graphics cards is "rasterising". Both rasterisation and ray-tracing are examples of rendering, which is the general term for turning data into an image. (Typically 3D data onto a 2D screen.)

Intel have been trying this technique of putting x86 cores on a board for quite some time now. But they still seem to be struggling to figure out a good use for them. One thing traditional GPUs have going for them is that they are rather dumb and limited in their cap

It's also much more demanding on hardware. One of the big drawbacks is it requires a lot of scattered reads out of memory making caching much less effective. You need tons of bandwidth to low latency memory to make it happen. We're still a very long ways out from having this possible in reasonably-priced consumer GPUs.

Yes, that is exactly what Intel MIC cards are awesome for. They are generic x86 cores with 4-way SMT and a buttload of memory bandwidth. I worked with Knights Ferry prototypes and studied the scalability of the worst case of algorithms for scattered memory access: graph algorithms. (The paper will be published soon but the preprint is available at http://bmi.osu.edu/hpc/papers/Saule12-MTAAP.pdf [osu.edu].) Basically, we achieve close to optimal scalability on most of our tests.

These MIC cards are designed to scale in the good cases (compact memory and SIMDizable operations such as dense matrix-vector multiplication, or image processing), but they scale almost as well in the bad cases (lots of indirection, accessing cache lines in pathological patterns, such as sparse matrix-vector multiplication or graph algorithms).
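
As a concrete picture of that "bad case", here is a plain-Python CSR sparse matrix-vector multiply (an illustration, not code from the paper): the indirect x[col_indices[j]] read is what scatters memory accesses instead of streaming them.

```python
def csr_spmv(values, col_indices, row_ptr, x):
    """y = A @ x with A stored in compressed sparse row form."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for j in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[j] * x[col_indices[j]]   # indirect, scattered read of x
        y[i] = acc
    return y

# Example: the 2x2 matrix [[2, 0], [1, 3]] times the vector [1, 1]
print(csr_spmv([2.0, 1.0, 3.0], [0, 0, 1], [0, 1, 3], [1.0, 1.0]))  # -> [2.0, 4.0]
```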

I am excited to get hold of the commercial cards (we worked on prototypes) to make a CPU/GPU/MIC comparison.

Intel would have destroyed NVIDIA. They give up on graphics every few years, and they only make enough for the low-end market. They similarly would have killed any progress at NVIDIA after a few years. It only would have caught them up for a while.

Intel doesn't get graphics. It's so bad that I recommended an AMD A4 yesterday over an Atom build because of the GPU.

Raytracing is an example of an embarrassingly parallel vector math problem. It's not the only such example nor the only use these cards are being put to. They're being used in thermodynamic, aerodynamic and hydrodynamic modelling of systems for computer design, for mineral exploration, for climate modelling. It would not surprise me if NASA has a cluster with them for certain space physics applications. No doubt for financial modelling too.

It honestly looks like an old game to me. Yes, there are some impressive features, but I really have to look for them in the images, something that is not going to happen at 60 Hz (and if it's not running at real speed, who cares; that's a movie, which can take its sweet-ass time rendering frame by frame).

I'm sorry Michael but you have a poor understanding of rendering technology. They are showing off technology to potential partners and customers. They are not proposing an engine that utilizes the technology which will go to the consumer. It won't look pretty until it is developed as an end product.

What is important is that these rays are being cast in a relatively efficient manner allowing for realtime feedback. An engineer doesn't care specifically what those rays may be used for but just that they

There's another aspect of raytracing that many don't even get: the military aspects, such as being able to efficiently calculate the origin of a shot (backtrace). Think about things like the final action in The Last Starfighter and you suddenly realize that Intel isn't working on this for gamers but for the military. There's lots more money to be made when you consider all of the CIC systems that would benefit from the ability to backtrace incoming fire and take it out with the appropriate weapon (StarS

Rent the Pixar Shorts DVD and watch what they did before Toy Story 1. Red's Dream, Tin Toy and Andre'B are all pretty crappy looking short films that demonstrate what a couple of guys in a lab could do at the time - when viewing them, you're supposed to imagine what could be done by a larger studio with funding, not bash them because a couple of guys in a lab given 3-6 months aren't producing something competitive with hundreds of people given millions of dollars and a couple of years.

The thing is, Intel is not just some small outfit, and they are the ones who want to push this change from rasterization to ray tracing. Rasterization works great, looks good, and is what runs well on all the GPUs out there today. That makes AMD and nVidia happy; they make billions doing it. Intel is unhappy; they want you spending less, or rather nothing, on those products and more on Intel products. So they are on about ray tracing, something that GPUs aren't as good at.

Yes, but... this is apparently not a big part of their greater plans at the moment. Not everything that has the Intel name on it is given billion dollar backing.

I think that the realtime raytracing thing is coming; not this year, probably not with 22nm processes, but by the time 6nm processes and 3D packaging are here, there are going to be way more than 8 cards' worth of transistors on a single chip.

You have to compare that to what will be available from nVidia and AMD, though. There really isn't a "right" rendering technology; people are not all in with ray tracing even in the high-end world. 3ds Max uses a scanline renderer by default, and there are plugins for it, like the Indigo Renderer, which basically uses various Monte Carlo methods to get really realistic images.

In terms of realtime rendering it will be whatever can give the best perceived quality on the least amount of hardware. Maybe that'll end up

The evolution I have seen, for better or worse, over the last 30 years is from impossible to barely possible to practical to so easy you can do it with stupid-simple algorithms, and most people do, because the hardware is cheaper than writing clever software.

Clever software will always have a great economy of scale, but when people have the equivalent of a 1990s supercomputer in their cell phone running for 7 days on a battery that weighs 20 grams, clever software won't matter as much as it used to.

The problem is that ray tracing doesn't do the trick. As I pointed out in another post, ray tracing sucks at indirect lighting. Since you're tracing back from the display to the sources, it only does direct lighting well (the shadow-ray sketch after the list below illustrates that step). Thing is, most of the lighting we see in the real world is indirect. So you've got three choices:

1) Deal with poor lighting. Suboptimal, particularly since rasterization isn't so problematic with this. You can handle indirect lighting a number of ways and have it work fairly well.
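
For illustration (not from the parent posts), here is a minimal Python sketch of the one part backward ray tracing does cheaply, direct lighting via a shadow ray: from the hit point, shoot a ray at the light and shade only if nothing blocks it. The hard shadow it produces is exactly the trade-off above; nothing fills it in without indirect bounces.

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))

def occluded(point, light_pos, blockers):
    """blockers: list of (center, radius) spheres; True if any sits between point and light."""
    d = sub(light_pos, point)
    dist = math.sqrt(dot(d, d))
    direction = tuple(x / dist for x in d)
    for center, radius in blockers:
        oc = sub(point, center)
        b = 2.0 * dot(oc, direction)
        c = dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc >= 0:
            t = (-b - math.sqrt(disc)) / 2.0
            if 1e-4 < t < dist:
                return True
    return False

def direct_light(point, normal, light_pos, light_intensity, blockers):
    if occluded(point, light_pos, blockers):
        return 0.0                      # hard shadow: no indirect bounce fills it in
    d = sub(light_pos, point)
    l = tuple(x / math.sqrt(dot(d, d)) for x in d)
    return light_intensity * max(0.0, dot(normal, l))

# A point facing up, lit from above, without and then with a blocking sphere.
print(direct_light((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0, blockers=[]))
print(direct_light((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0, blockers=[((0, 0, 2), 0.5)]))
```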

I'm fine with the idea of "do it simple" if the hardware can handle it. However we are a long, long way from that in graphics.

My bigger point is just nothing Intel has produced has convinced me that ray tracing is a better way to go. Never mind that they are still talking about hardware that doesn't exist.

Ray tracing may not (or, eventually with photon mapping, may) be the way to go. If by long, long way you mean 8 years, then, yes, I'd agree.

At my age, 8 years goes pretty quick, and even when I was younger, I only replaced my computers at most every 4 years; I've had a couple of systems for 8 or more years.

And, if they weren't talking about hardware that didn't exist, I'd be pretty bored - the existing stuff is pretty well understood, and yes, on the existing stuff, realtime ray tracing is pretty sucky com

You're only saying this because you don't have a clue how 3D engines work. A raytraced game could potentially look "real", unlike current games, which continue to just look like more and more sophisticated animated cartoons. Google raytraced images sometime; a lot of the time it's hard to tell that they are CG and not real.

I know exactly how this works, and I have written a couple of crappy little ray-tracers in the past. Ray-tracing is one of those things like the space program... yeah, it could do a lot, but it doesn't, because in reality it's not very practical and not very useful. Displaying pixels on a grid, you're always going to have a margin of display error, and who cares if you can see it's a 100% perfect circle as long as the computer knows and correctly calculates it.

Raytracing falls down bigtime in the lighting department. It can't handle indirect lighting well and you get this situation of everything looking too perfect and shiny. Reflective metal spheres it is great at. Human flesh, not so much.

Now there are solutions, of course. You do photon mapping or radiosity and you can get some good illumination that can handle diffuse lighting, caustics, and that kind of shit. However, ray tracing by itself? Not so much. Problem is, none of that other shit is free. You don't ju

It IS the be-all, end-all of computer graphics. Indirect lighting is only a problem due to the limitations of CPU speed. Specifically, when you set up a render you set it up with the number of "bounces" a ray will make. When you're doing live video, those bounces are set to about 3... it's hard to get ambient lighting with that. Is a raytracing engine the solution to computer graphics right now? Probably not. In 100 years when computers are likely smarter than we are and have us hosted in matrix-like virtual

This is all still proof of concept. Just the fact that you can raytrace an image like this is impressive. Once realtime raytracing is a reality, then more advanced shading systems will be developed which do more interesting things with those cast rays. An example shown in the article is physically based refraction for glass and water. A more challenging application would be subsurface scattering (light penetrating a surface and bouncing around before it bounces back out, e.g. wax), or light dispersion " c

The real advantage to ray tracing is how it scales only logarithmically with scene geometry.

Games (and so forth) are using more and more on-screen polygons, and that cost scales linearly with rasterization but logarithmically with ray tracing. Ray tracing will inevitably be as efficient for the same quality as rasterization if things continue as they are, and from then on rasterization will never be able to keep up (just like bubble sort can't keep up with any O(N log N) sort for sufficiently large N).
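
The logarithmic behaviour comes from acceleration structures rather than from ray tracing per se. Here is a small, illustrative Python sketch of a bounding volume hierarchy: a ray that misses a node's box skips that whole subtree, so per-ray work grows roughly with tree depth (~log N) rather than with object count (N).

```python
def ray_box_hit(origin, inv_dir, box):
    """Slab test against an axis-aligned box ((minx,miny,minz), (maxx,maxy,maxz));
    inv_dir is 1/direction per component (keep components non-zero for this sketch)."""
    tmin, tmax = 0.0, float("inf")
    for a in range(3):
        t1 = (box[0][a] - origin[a]) * inv_dir[a]
        t2 = (box[1][a] - origin[a]) * inv_dir[a]
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmin <= tmax

def build_bvh(boxes, indices=None):
    """Median-split BVH; leaves are (box, object_index), inner nodes (box, left, right)."""
    if indices is None:
        indices = list(range(len(boxes)))
    lo = tuple(min(boxes[i][0][a] for i in indices) for a in range(3))
    hi = tuple(max(boxes[i][1][a] for i in indices) for a in range(3))
    if len(indices) == 1:
        return ((lo, hi), indices[0])
    axis = max(range(3), key=lambda a: hi[a] - lo[a])      # split along the widest axis
    indices = sorted(indices, key=lambda i: boxes[i][0][axis])
    mid = len(indices) // 2
    return ((lo, hi), build_bvh(boxes, indices[:mid]), build_bvh(boxes, indices[mid:]))

def traverse(node, origin, inv_dir, hits):
    if not ray_box_hit(origin, inv_dir, node[0]):
        return                          # whole subtree culled: the source of the log-N cost
    if len(node) == 2:                  # leaf
        hits.append(node[1])
    else:
        traverse(node[1], origin, inv_dir, hits)
        traverse(node[2], origin, inv_dir, hits)

# Four unit boxes in a row; a ray going (almost) straight up through the first box
# never tests the subtree holding the last two.
boxes = [((float(i), 0.0, 0.0), (i + 1.0, 1.0, 1.0)) for i in range(4)]
hits = []
traverse(build_bvh(boxes), (0.5, 0.5, -1.0), (1e9, 1e9, 1.0), hits)
print(hits)  # -> [0]
```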

I've still got a 32MB Voodoo 3 3k (given to me as surplus to requirements; also, the guy couldn't get the driver to work on NT, which I managed to get going on Slackware 8)... still works, too. I'm using it as a head for one of my thin clients. Another client has an NVidia Riva TNT2 Model 64 32MB dual-head AGP (my first AGP card). The third has an ATI Rage Pro 8MB (upgraded from 4MB). This was the first PCI graphics card I ever bought.

Ridiculously old cards, but they still work as advertised - which is plenty

No, the Voodoo 3 was both 2D/3D. It was the first of their cards that wasn't 3D-only (well, in the main line; I think there was a variant of the 1 or 2 that was less powerful but also did 2D, though I'm not sure about that).

Intel doesn't make games. Intel makes hardware. You can use that hardware to play great games, or you can use the same hardware to play bad games. GPUs cannot help with the story, the replayability or the installation, but they can help with the graphics.

Without a lot of work being done, PhysX and the like are just a high-latency, wholly inefficient transaction. Since games must support people who cannot accelerate the physics, they also cannot do lots of it, culminating in PhysX just being that high-latency transaction. It becomes just a bullet point on the box, rather than something advantageous.

Maybe in a few more years, when games give up on supporting the current mid-range GPUs....

In my experience, the gamers who care about such beautiful graphics are happy to spend a few grand on hardware. They are not happy with jitter due to the internet connection, or waiting in line for a server.

I'm impressed by the raytracing speeds and all, but is it surprising that it has near-linear scaling? Raytracing is very well suited to parallel processing, and scaling is nearly linear on CPUs if the software is well optimized and you're on a good network.
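
For what it's worth, the near-linear scaling follows from pixels being independent: the frame splits into tiles that workers render with essentially no communication until the end. A toy Python sketch (render_tile and the constants are made up for illustration):

```python
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 640, 480, 64

def render_tile(tile):
    """Render one TILE x TILE block; the fake shade stands in for real per-pixel tracing."""
    x0, y0 = tile
    pixels = []
    for y in range(y0, min(y0 + TILE, HEIGHT)):
        for x in range(x0, min(x0 + TILE, WIDTH)):
            pixels.append((x, y, (x ^ y) & 0xFF))
    return pixels

if __name__ == "__main__":
    tiles = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]
    with Pool() as pool:                          # one Python process per core
        framebuffer = [px for block in pool.map(render_tile, tiles) for px in block]
    print(len(framebuffer), "pixels rendered")    # 640 * 480 = 307200
```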

Hmmm... optimised software. Read: custom code for massively parallel clusters. Oh, yeah. :) Good network. Read: 2-ary 4-tree with twin redundant fibre switching. Or, for home users with a bit of spare cash rather than a university department with EOY budget to blow, several lengths of Cat5, some PCI Gigabit Ethernet cards, and redundant Gigabit switchgear (which is what I did with a pair of D-Link 24-port Gigabit switches and a boatload of surplus Cat5 patch cables; oh yeah, that's one fast network).