Posted
by
ScuttleMonkey
on Friday November 13, 2009 @04:47PM
from the more-rain-from-the-cloud dept.

WesternActor writes "ExtremeTech has an interview with a couple of the folks behind Nvidia's new RealityServer platform, which purports to make photorealistic 3D images available to anyone on any computing platform, even things like smartphones. The idea is that all the rendering happens 'in the cloud,' which allows for a much wider distribution of high-quality images. RealityServer isn't released until November 30, but it looks like it could be interesting. The article has photos and a video that show it in action."

Aren't photo-realistic images pretty big? If I want 30 frames per second, how am I ever going to push 30 photorealistic frames through the internet? I can hardly get 5 Mb/s from my ISP.

Still, you can get fairly decent 720p video quality on YouTube nowadays, even with connections that aren't so fast (mine is limited to 8 Mbps download). On a cellphone you probably can't realistically get very fast speeds just yet (500 kbps?), but the screen is also much smaller. As connections get faster, approaches like this become more feasible.

Another way to see this is that Nvidia just wants to expand its market share. They are likely hoping that with something like this, they could sell expensive servers.

DVD quality is 720x480. That's a much lower resolution than modern consoles and PCs put out, and yet a decent movie looks far more photorealistic than anything they can render. I think a decent game at 640x480, with good animation and anti-aliasing, looks better and more realistic than a crisp HD image without them. By focusing so much on resolution, we've been putting the emphasis on the wrong thing.

Let's see it perform with something like this:
http://tinyurl.com/ycm5uy5 [tinyurl.com]
It's a YouTube 3D refinery flythrough; architectural stuff is tame compared to this. It's not as finely detailed as the interior of a microprocessor, but the actual plant processes much more than electrons and has to accommodate humans walking through and managing it. It looks complicated to the uninitiated, but it's not, really.
Nifty, eh? Complex enough?

which purports to make photorealistic 3D images available to anyone on any computing platform, even things like smartphones. The idea is that all the rendering happens 'in the cloud,' which allows for a much wider distribution of high-quality images. RealityServer isn't released until November 30, but it looks like it could be interesting. The article has photos

Notice there is no emphasis on video or animation. This is for 3D images only. Or were you seriously hoping to play realistic 3D games on your phone?

Maybe not the phone; I can't imagine why anyone would really need high-quality photorealistic renderings on a phone. Once the image is rendered you can just put it on your phone and show people, if that's what you're going for. But there isn't exactly an engineering or architecture app for the iPhone, as far as I'm aware (don't hold me to that).

However, in my experience, the only time where rendering is preferable over a picture is for entertainment purposes. Though someone above mentioned this wou

how am I ever going to push 30 Photorealistic Frames through the internet - I can hardly get 5 Mb/s from my ISP.

I'm far from being a computer programmer/expert.

But say you have a display at, for argument's sake, 1280x1024 pixels at 32 bits per pixel. That's 41.9 million bits per frame; call it 42 Mbit. You want to do that at 30 frames per second? You're up to 1.26 Gb/s. Now please raise your hands, who has a 2 Gb/s internet connection? OK, there will be some compression.
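The arithmetic above is easy to check; a quick sketch using the same assumed figures (uncompressed frames, no encoding):

```python
# Back-of-the-envelope bandwidth for uncompressed frames.
# Figures are the ones assumed above: 1280x1024, 32 bpp, 30 fps.
width, height, bpp, fps = 1280, 1024, 32, 30

bits_per_frame = width * height * bpp    # 41,943,040 bits, ~42 Mbit
bits_per_second = bits_per_frame * fps   # uncompressed stream rate

print(f"{bits_per_frame / 1e6:.1f} Mbit/frame")  # 41.9 Mbit/frame
print(f"{bits_per_second / 1e9:.2f} Gbit/s")     # 1.26 Gbit/s
```

Video codecs routinely shave two orders of magnitude off that, which is why streaming 720p works at all; the raw number just shows why compression isn't optional.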

You can supersample pixels to avoid jagged lines: for every pixel in the framebuffer, your raytracer might generate a grid of 8x8 or 16x16 rays, each with its own unique direction. This leads to smoother, blended edges on objects. It takes considerably more time, but helps improve the appearance of low-resolution images, especially on mobile phone screens, which may only be 640x480 or 320x200 (early VGA resolutions).
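A toy sketch of the idea: average an n x n grid of sub-pixel samples per framebuffer pixel. The `shade()` function here is a made-up stand-in for whatever a raytracer would return per ray, not anything from RealityServer:

```python
def shade(x, y):
    # Hypothetical scene: white above the diagonal y = x, black below,
    # so the diagonal edge is exactly where aliasing would show up.
    return 1.0 if y > x else 0.0

def supersample(px, py, n=8):
    """Average an n x n grid of sub-samples inside pixel (px, py)."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            # Each sample gets its own unique sub-pixel position.
            sx = px + (i + 0.5) / n
            sy = py + (j + 0.5) / n
            total += shade(sx, sy)
    return total / (n * n)  # blended coverage in [0, 1]

# A pixel straddling the edge comes out grey instead of hard black/white:
print(supersample(2, 2))  # 0.4375: partial coverage, a smoothed edge pixel
```

Pixels fully above or below the edge still come out pure 1.0 or 0.0; only edge pixels get blended, which is exactly the smoothing effect described.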

That's a rendering trick that has zero impact whatsoever on the final size of a rendered frame. I highly doubt they're sending raytracing data to smartphones.

A pixel is a pixel is a pixel, no matter how you go about generating it, and a screen can only display so many of them. The smart phone or whatever isn't generating these images, it doesn't give a crap about the raytracing methods behind them. It just downloads them and puts them on the screen.

The screen resolution of an iPhone is 480x320. I would guess there are applications that allow larger images to be viewed through scroll and zoom functionality. What I meant is that the server is going to do the raytracing, and all it has to do is send an image back to the iPhone.

Forget the data transfers, they'll increase; it's the latency that's the problem. Games using this technology will be almost useless, especially action games. Currently you get practically 0 ms latency when you interact with a game, which is what makes it seem fast. If it's a multiplayer game then the only latency you get is from other people, and if they appear to go left 50 ms later than when they pressed the button to go left, it doesn't make a difference for you, since you don't know when they pressed the button.
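A rough frame-to-photon budget makes the point concrete. All of these numbers are illustrative assumptions, not measurements of any actual service:

```python
# Latency budget for cloud-rendered gameplay (assumed figures).
uplink_input_ms   = 40  # your input travels to the server (half a ping)
render_ms         = 20  # server renders and encodes the frame
downlink_frame_ms = 40  # the frame travels back (other half of the ping)
decode_display_ms = 15  # client decodes the frame and scans it out

total_ms = (uplink_input_ms + render_ms
            + downlink_frame_ms + decode_display_ms)
print(total_ms)  # 115 ms before you see the result of your own input
```

Local rendering hides everything but the network hop for other players; cloud rendering puts the full round trip between your thumb and your own character.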

Not all games. Many genres would work great, such as RTSes, or RPGs like WoW or Baldur's Gate, or any other game where the interface could run locally on the portable's hardware while the server handles the rendering.

I imagine even a local 3D copy, hidden from the user, which handles all of the 3D mechanics of detecting unit selection etc. Since it's not being shaded and only needs collision meshes, it would run fine on a cell phone. Then let the server render the well-shaded and lit version.

Good point, I didn't think about it that way. More specifically, the server could for example render expensive global illumination and then send the textures to the client, which can use a simple GPU to apply them to local meshes.
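A minimal sketch of that split: the server bakes the expensive lighting into a texture (a lightmap), and the client just multiplies it over the base texture per texel, which any GPU can do cheaply (faked here on the CPU with made-up 2x2 "textures"):

```python
# Server side (expensive): global illumination baked into a lightmap.
# Client side (cheap): modulate the base texture by the lightmap.
base_texture = [[0.8, 0.6],
                [0.6, 0.8]]   # surface albedo per texel (illustrative)

lightmap = [[1.0, 0.5],
            [0.25, 1.0]]      # server-rendered lighting per texel

lit = [[base_texture[i][j] * lightmap[i][j] for j in range(2)]
       for i in range(2)]

print(lit)  # [[0.8, 0.3], [0.15, 0.8]]
```

The client never sees the raytracer; it just gets a texture it can slap onto the meshes it already has, so the heavy math stays on the server.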

Because it would be cloud-based, they could merely send the finished rasterized frames to cellphones (very little bandwidth), or preprocessed data to desktops/notebooks/things with a GPU, which could then assemble it. The whole problem with doing something like this is usually that you need to download much more data than you actually need in order to view only one small subset of that information. Now it can send you only the data you need, or all of the data in progressive chunks that you can start to

RealityServer isn't for real-time gaming; it's for artists and for game and CAD designers to see their scenes rendered in near real time. It makes a lot of sense to render on a remote server: most of the time the artist's computer is just a user interface for modelling, using very little CPU, and only on the few rendering occasions will you need the vast amount of CPU power that the remote render farm can provide. Nvidia and Mental Images have picked a great application for the cloud here. Even cleverer for Nvidia

By moving ray tracing and many other high power graphics algorithms off the client and into the cloud, lightweight-computing platforms like netbooks and smartphones can display photorealistic images in real time.

Why not just say:

By moving ray tracing and many other high power graphics algorithms off the client and onto nvidia's servers, lightweight-computing platforms like netbooks and smartphones can display photorealistic images in real time.

I guess it's just not as cool...

I wonder if this would work for cooking?

By moving cutting, peeling, baking, frying and many other food preparation techniques off the dining room table and into the food cloud (kitchen), lightweight-eating platforms like TV trays and paper plates can be used to eat off of in real time.

For me, I just hate the marketing cocksuckers who come up with these terms. Some asshole saw too many Visio diagrams with a big cloud in the middle representing the intervening networks and decided that there are computers out there that will magically do what they want. Every time I hear the term 'cloud' I think 'botnet'. Because essentially, that's the only thing extant that resembles what they are proposing.

I too was skeptical. But last night there was a presentation on cloud computing at Monadlug, and re-rendering video for a streaming service to insert new advertisements was given as an example. This is something that is being done NOW: a few dollars pays for 20 minutes of time on someone's "cloud" that would otherwise require the video service to buy a whole roomful of expensive multiprocessor computers.

Amazon and Rackspace and others are already offering cloud services. I don't like it - I think everyone should

I think everyone should own all the processing power they need - but cloud computing is here, it's real, and it performs a valuable economic function.

Old news. It used to be called "server time". There are bits and pieces related to "server time" billing left in most Unix or Unix-like systems (which could probably be brought back to life if need be). No need to bring any meteorology into it.

"Sorry, your cloud computing operations have been cancelled because of an unexpected storm which washed away our reserve of zeroes"

Buzzwords can be fun. Next time you're scheduled for a sales presentation make up a bunch of cards with different sets of mixed buzzwords and give each attendee a card and a highlighter. The first person to get five buzzwords marked off should yell BINGO! and win a small prize for paying attention. It's called buzzword bingo. It works equally well whether you warn the presenters or not, since they can't help themselves. Some salespeople can't get past the first slide without "BINGO" ringing out.

Shhhhh! You'll ruin the scam (of convincing uninformed people that an old idea is a new idea by renaming it).

Thin client -> fat client -> thin client -> fat client. *yawn*

Every time, this happens: things move away from the client for "performance" and "flexibility" and "scalability" reasons, then everyone realises it's a pain because of the lack of control or reliability, and by that point the client hardware has moved on to the point where it can do the job better anyway, so everyone moves back to it.

We were forced to stop using the term "fat client" here at Big Bank; our end-users got offended when they heard the term. Apparently they thought we were talking about the /users/ and not the systems... Instead, we must call it "thick client", which is odd, since if they interpret it the same way it's just as insulting from another direction.


You forgot how we used to refer to IDE devices as either a "master" or a "slave"... this wasn't back in the 50s either.

Oh, and it's not real-time at all. It will *at least* have the lag of one ping round trip. Then add some milliseconds of rendering time and input/output on the device. On a mobile phone that can mean 1.5 seconds(!) of delay. Or even more.

It only counts as real-time when it no longer sounds weird to press a key in a music game and hear the sound. That's below 10 ms for me, but something around 50 ms TOTAL for the average Joe.

Oh, and don't even think about winning a game against someone with real real-time rendering.

The point is that the BlackBerry doesn't do any processing; it just streams the end result. Which is certainly doable, considering the Zune HD can play back 720p HD footage and it's not much bigger than a BlackBerry.

I'm not talking about real-time processing (which cloud rendering can help with).

The new Zune HD is one of a few select devices that actually supports a decent resolution. It pisses me off because I can't use a Zune in Linux, and I won't buy a Zune, but it does have perhaps the nicest screen of any portable device on the market right now.

Most smart phones have low-resolution screens. You can't produce a photo-realistic image on a low-resolution screen, regardless of pushing rendering out to the cloud.

...it does have perhaps the nicest screen of any portable device on the market right now. Most smart phones have low-resolution screens.

Really?! I'm looking at the specs now; by what I see, the Zune HD 32 has a 480x272 pixel screen.

There are quite a few smart phones out there with better than that. The Droid has 854x480; the HTC Hero has 320x480; the Nokia N900 has 800x480. Even the iPhone, which doesn't have stellar resolution, is 480x320.
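Raw pixel counts make the comparison easier than WxH pairs; a quick tally of the screens mentioned above:

```python
# Screens mentioned in this thread (width, height), circa late 2009.
screens = {
    "Zune HD":   (480, 272),
    "Droid":     (854, 480),
    "HTC Hero":  (480, 320),
    "Nokia N900": (800, 480),
    "iPhone":    (480, 320),
}

# Sort by total pixels, largest first.
for name, (w, h) in sorted(screens.items(),
                           key=lambda kv: -(kv[1][0] * kv[1][1])):
    print(f"{name:10s} {w}x{h} = {w * h:,} pixels")
```

By that measure the Zune HD (130,560 pixels) actually comes in last, behind even the iPhone and Hero (153,600 each), with the Droid on top at 409,920.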

I believe the term you were looking for is Stereo Images.
Anyways, this is just nVidia's attempt to come up with a market for its soon-to-be-irrelevant GPU business.

note: I actually LIKE nVidia video cards, but the writing is on the wall. AMD is going to put out a veritable monster with CPU + GPU on a single chip, and Intel is going to do something similar with Larrabee (more general-purpose, though).

nVidia can't compete without its own line of x64 chips, and they are just too far away from that ca

While the comments here are mostly negative, I can say this is a big leap ahead for rendering technology, mainly because the rendering is occurring at the hardware level, on the Nvidia processors of a video card, instead of on the CPU via software rendering. They are calling this iray, and it's developed by Mental Images, not Nvidia. While video cards are currently great at rendering games in real time, they require a tremendous amount of shader programming and only do this sort of rendering within the c

Ray tracing has been done on video hardware for quite a while. It still takes a pile of shader programming. These things are programmed using CUDA, which is really just another layer over top of a shader. The 200 parallel processors in a Tesla are really just a modified version of the X number of shaders on your video card. Yeah, the Tesla boxes are cool, but they're not a revolutionary change - people have been talking about GPU clusters for a long time.

Wanna know what playing games on a system like this would be like? Go to your favorite video streaming site and change the player settings (if you can) to 0 caching. The end result is, approximately, what you'd get here. The internet is a very unstable place. The only reason online games work is that programmers have gotten really good at developing latency-hiding tricks, which all stop working when the video rendering is done by the server. And don't think this will just affect FPS games. Just because it doesn't make or break a game like WoW doesn't mean you'd want the stuttering gameplay you'd have to put up with. As far as I can see, the only kind of game this would be useful for is photo-realistic checkers.

What about Dragon's Lair or Space Ace? Or how about all the "games" out there which are mostly noninteractive cut scenes?

Hmmm...I see the big game studios may be moving back to those "choose your own adventure [wikipedia.org]" video titles of the late 1990s...except in 3D!!!! Mwahahaha! (**cue cheesy villain-is-victorious music**)