Posted
by
Soulskill
on Wednesday October 02, 2013 @03:08PM
from the expanding-options dept.

Phopojijo writes "OpenGL and DirectX have been the dominant real-time graphics APIs for decades. Both are catalogs of functions which convert geometry into images using predetermined mathematical algorithms (scanline rendering, triangles, etc.). Software rendering engines calculate colour values directly from the fundamental math. Reliance on OpenGL and DirectX could diminish when GPUs are utilized as general 'large batches of math' solvers which software rendering engines offload to. Developers would then be able to choose the algorithms that best suit their project, even natively in web browsers with the upcoming WebCL."

The advantage of early APIs like Glide was that they were much lower level than OpenGL/D3D, allowing for very efficient hardware-assisted software rendering.

As a simple for-instance: in the transition from those lower-level APIs and software rendering to the higher-level APIs and fully-hardware rendering, things like landscapes suddenly used polygons and their datasets ballooned enormously. There was nearly an entire decade of fixed-function-

Oh, I understand the reasons for wanting a software rendering engine. But the OP was looking for a "function" that is already available (AFAIK), since most of the fixed-function behavior of early 3D APIs/GPUs (like Glide) is now programmable and can be simulated with the more generic pipelines in modern GPUs. In fact, there are attempts at writing full-blown ray tracers by misusing just GLSL.

But, more to the point, I'm not even sure GPUs are necessary for much game-related drawing. I recent

I think people are glorifying glide in this thread a biiiiiit toooo muuuch.

Glide drew triangles. What the Voodoo did in hardware was draw horizontal lines, Z-buffered, textured and shaded. What this enabled was that people with software engines could just point their trifiller at that and away they went. But if the engine modelled something fancier, like the ground as a surface (like the game Outcast), then there was no fucking way to use a 3dfx Voodoo to accelerate that.
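For a concrete sense of what such a trifiller does, here is a minimal flat-shaded scanline rasterizer in Python. This is my own illustrative sketch, not 3dfx/Glide code: it sorts the vertices by y and emits one horizontal span per scanline, which is exactly the per-span work (plus Z-buffering, texturing and shading) that the Voodoo took over in hardware.

```python
def fill_triangle(verts, put_pixel):
    """Rasterize a 2D triangle by drawing horizontal spans.

    verts: three (x, y) tuples; put_pixel(x, y) plots one pixel.
    """
    # Sort vertices top to bottom by y (stable for ties).
    (x0, y0), (x1, y1), (x2, y2) = sorted(verts, key=lambda v: v[1])

    def edge_x(ya, xa, yb, xb, y):
        # x of the edge (xa, ya)-(xb, yb) at scanline y (linear interpolation).
        if yb == ya:
            return xa
        return xa + (xb - xa) * (y - ya) / (yb - ya)

    for y in range(int(y0), int(y2) + 1):
        # The long edge always spans y0..y2; the short side switches at y1.
        xl = edge_x(y0, x0, y2, x2, y)
        if y < y1:
            xr = edge_x(y0, x0, y1, x1, y)
        else:
            xr = edge_x(y1, x1, y2, x2, y)
        if xl > xr:
            xl, xr = xr, xl
        for x in range(int(round(xl)), int(round(xr)) + 1):
            put_pixel(x, y)
```

A real trifiller would also interpolate depth, colour and texture coordinates along each span; this sketch only covers the span-walking skeleton.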

But that was the suck; 24-bit is better. I think you would need to use 16-bit throughout to get alpha to mess up as badly as on the Voodoo (which is why you can't just slap a shader on the framebuffer to turn it into 16-bit: it wouldn't look the same as if the whole scene had been rendered with 16-bit + dither from the start).

Developers would then be able to choose the algorithms that best suit their project, even natively in web browsers with the upcoming WebCL."

If web browsers were people, that statement would have caused a mass suicide among them. Guys, stop trying to turn the browser into a platform. It introduces so many layers of complexity and security issues that it's a miracle anyone has any trust or faith in the internet at all. It's getting to the point where the only way to safely browse the net is to shove the entire browser into a virtual machine... and even that only protects your own computer, to say nothing of your online activities, credentials, life, etc.

We need to be making browsers simpler, not more complex. Feature bloat is turning these things into a leper colony inside your computer... a cesspool of malware and vulnerability. Don't add to it by coming up with some new way for developers to directly access the hardware of your computer because you're too fucking lazy to write an app to do whatever it is, and want to cram it into the browser instead. You're just encouraging them.

Seriously, we need a 12 step program for these "web 2.0" people.

Browser-based apps are not done because developers are "too fucking lazy to write an app to do whatever," but because it is harder and costs more money to do it for multiple platforms, including mobile ones. Web apps just work, without the need to install anything (the app itself, a JRE, whatever) and without needing any kind of user privileges.

I suggest some modding:
1. Round up politicians and remaining lawyers
2. Add them to the "web 2.0" crowd
3. Line them up
4. Shoot 'em
5. Cover with lime
6. Bulldoze 'em into a ditch
7. Repeat (you can never be too careful here)

Because there are millions of web developers and only a couple hundred thousand 'native app' developers for iOS and Android, who charge a lot more money.

Really, development speed and knowledge of native platforms are important factors. If you only need to know one platform and can reuse code, that translates to less time, less required platform knowledge, and thus lower cost.

Actually, I look at web browsers as an art platform. It is programmed by a set of open standards which gives any person or organization the tools to build support for the content which is relevant to society. A video game, designed in web standards, could be preserved for centuries by whoever deems it culturally relevant.

For once, we have a gaming platform (besides Linux and BSD) which allows genuine, timeless art. If the W3C, or an industry body like them, creates an equivalent pseudo-native app platform... then great. For now, the web is the best we have.

Then that makes two open specs that must be implemented: POSIX and X11. But you have a point that right now the only notable environments that focus on implementing these specs are desktop Linux and the free *BSDs. OS X, while based in part on FreeBSD, is not a free *BSD and no longer includes XQuartz as a standard feature [apple.com].

It is programmed by a set of open standards which gives any person or organization the tools to build support for the content which is relevant to society.

I think you need to lay off the koolaid, man.

JPEG and [wikipedia.org] GIF [wikipedia.org] both have licensing issues; they are not free. The intended replacement for these, PNG, hasn't seen widespread adoption, can't do animations, but has no licensing issues. In fact, if you take a walk down a list of all the multimedia technologies commonly used on the web (MP3, MPEG4, h.264, AAC, surround sound...) you will find yourself in a veritable desert when it comes to truly free standards. The standards may be 'open', that is, published... but a

JPEG and [wikipedia.org] GIF [wikipedia.org] both have licensing issues; they are not free.

Are you kidding me? The patents for GIF expired long ago [freesoftwaremagazine.com]. As for JPEG, that's as much a "living standard" as HTML5 is. It's worth researching further, but I'd think the older parts of JPEG aren't too problematic [pcworld.com].

The intended replacement for these, PNG, hasn't seen widespread adoption, can't do animations, but has no licensing issues.

PNG has never been and never will be an intended replacement for JPEG, as PNG is lossless and JPEG is (mostly) lossy. And in what way hasn't PNG seen widespread adoption? It is the dominant lossless image format and is used absolutely everywhere. PNG can do animations too (though it's not supported

Yeah, they really missed the boat not freezing computing in the 70s. These kids and wanting features and interesting applications and useful computing are a bunch of assholes. They should be forced to do things MY preferred way because my opinion is the only one that matters.

I'm not sure that's the best solution either. With the current desktop offerings, all applications run with the full permissions of the user. Things are a little bit better on the mobile side. At least with Android I can see which permissions an app has, and by default apps are very limited in what they can do. With Windows/Linux, any application I run can go and delete my entire home folder, or send it all out to some site on the web, or wreak all kinds of havoc. Currently, running in a web browser is the only pseudo-sandbox that exists for desktop systems. I'd much rather run a web app from some random company than install some application on my computer.

On Linux, isn't that what AppArmor or SELinux is supposed to help negate?

Or if your users happen to have chosen a platform that gives the operating system publisher veto power over apps, and the operating system publisher has chosen to exercise this power over your app. For example, see Bob's Game or any story about rejection from Apple's App Store.

If you make a distinction [wikipedia.org], you need to explain the difference [logicallyfallacious.com]. Where does a website end and an application begin? Slashdot and other web boards are essentially web-based workalikes of an NNTP user agent, and that's certainly an application. I had assumed that the line was that one reads a "website" but posts using an "application". If elsewhere, where do you draw the line?

It's getting much closer. Most asm.js demos show C++ compiled into Javascript running at only half the performance of native C++ (and getting faster). That's the difference between 30fps and 60fps if all the code were Javascript. WebCL, on the other hand, runs at almost exactly OpenCL speeds... so for GPU-accelerated apps (depending on whether Javascript or WebCL is your primary bottleneck) you could get almost native performance.

SmallPtGPU, from the testing I did a while ago, seems to be almost the same speed whether run in WebC

Half native speed is faster than most scripting languages. Look it up: a lot of scripting languages are 100 times slower than native.

Javascript is the fastest widely used scripting language, after (or similar to) Lua. And Lua was optimized to be an embedded language from the start; nobody really expected Javascript would go this far, so it wasn't designed for that.

Handwave it all you want, but there's a huge difference between 30fps and 60fps, and "almost there" isn't good enough when you're still chasing frame rates that were common in native C++ applications in the late '90s and the customer wants your build yesterday.

Remember VB6? (Oh, the horror!) VB6 apps were slower and bigger than the equivalent written in VC++.

Do you know why it was so popular?

Because the "horrible" performance was good enough for most applications. A good developer using VB6 cut his development time significantly (hours vs days, days vs. weeks). A beginner could actually get something to work in a reasonable amount of time. That's powerful.

The web as a platform has its own set of advantages that, for many applications,

The reason why they are doing this is the big push by major industries for more DRM, although current DRM is ineffective against more technically inclined people. They want to eventually be able to encrypt and split up programs and data, tying them to the server. Just like how Diablo 3 took part of the program hostage across the internet and you had to constantly 'get permission' to continue playing the game.

If you think big companies are not looking at what the game industry and others are doing locking down apps, then you haven't been paying attention.

For the sake of all our sanity, could you please learn the distinctions between full stops (which you may know as periods... lol), commas and semi-colons before you next post? It will make your posts look less like Spot the Dog, and make people reading them hurt less.

I respectfully disagree. Whether anybody likes it or not, when JavaScript support was added to browsers (a really long time ago) the browser became a platform. Great web based apps are harder to do than native apps. Well designed web based apps work on all platforms and do not require client side installation or support. The cost of distribution and maintenance of web based apps is dramatically lower and that reduces cost. Centralized code management makes change management much more effective and that can

Then what's a better platform for developers who want to reach users of Windows, OS X, desktop Linux, Android, iOS, Windows RT, Windows Phone, and the game consoles? Making a program work on more than one platform requires severe modifications, sometimes including translation of every line of code into a different programming language. Windows Phone 7 and Xbox Live Indie Games, for example, couldn't run anything but verifiably type-safe .NET CF CIL. And all except the first four require permission from the

I thought the point of GPUs was not only to offload the rendering of 3D graphics but also the algorithms. Game developers don't want to have to program primary rendering algorithms with every game they create. Do they? Am I missing something?

Some want to use the same algorithms OpenGL and DirectX does... and those APIs are still for them.

Some do not. A good example is Epic Games, who in 2008 predicted that "100% of the rendering code" for Unreal Engine 4 would be programmed directly for the GPU. The next year they found the cost prohibitive, so they stuck with DirectX and OpenGL at least for a while longer. Especially for big production houses, if there is a bug or a quirk in the rendering code, it would be nice to be able to fix the problem direct

Since OpenGL 3.0, the traditional OpenGL rendering pipeline has (supposedly) been implemented using shaders under the hood. Also, check out the OpenGL Mathematics (GLM) library for the matrix operations in user space that used to live at the driver level.
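To make that concrete, here is a sketch (my own function name and code, not actual GLM source) of the gluPerspective-style projection matrix that libraries like GLM now build on the CPU, written in row-major form:

```python
import math

def perspective(fovy_deg, aspect, znear, zfar):
    """Build a right-handed OpenGL-style perspective matrix (row-major)."""
    # Cotangent of half the vertical field of view.
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        # Map z in [-znear, -zfar] to clip-space depth [-1, 1].
        [0.0, 0.0, (zfar + znear) / (znear - zfar),
                   (2.0 * zfar * znear) / (znear - zfar)],
        # The -1 here puts -z_eye into w, giving the perspective divide.
        [0.0, 0.0, -1.0, 0.0],
    ]
```

Nothing here needs the driver: multiplying this into a model-view matrix and uploading it as a shader uniform is exactly what the fixed-function pipeline used to do internally.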

I thought the point of GPUs was not only to offload the rendering of 3D graphics but also the algorithms. Game developers don't want to have to program primary rendering algorithms with every game they create. Do they? Am I missing something?

Yes, you are missing something. The point of GPUs is to efficiently calculate the pixel values to show on the screen. Specific algorithms can be implemented in hardware or software, and GPU hardware has been moving toward exposing more generic functionality for years, which WebCL can make available to Javascript code. It's the game engine, or the libraries used by the game engine, that worries about the low-level details of how to talk to the GPU, whether that happens via OpenGL, WebGL, Direct3D, WebCL or something else.

Remember when DOS game engines used whatever math (voxels, fake shit, raytracing against a 2D map...) the coder could get to run fast enough to create the graphics? To not be constrained by triangles?

The "article" is about "hey, wouldn't it be cool to do that again, just with GPUs?". The summary makes it sound as if the dude had done something cool with that idea.

The video in the article shows a shaded triangle. Fail, waste of time. And it sort of ignores that programmable shaders exist for just this purp

First of all, software rendering vs. hardware rendering isn't the same as scanline rendering vs. "rendering from the underlying math", which I assume is a bad attempt at a layman's description of raytracing. You can have a scanline triangle renderer in software, and you can have a raytracer in hardware. It is true that most GPUs are built for scanline rendering and not raytracing, but plenty of raytracers have been written that run on GPUs.
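The core primitive of any such raytracer, CPU or GPU, is the ray/triangle intersection test. Here is an illustrative Python sketch of the standard Möller-Trumbore algorithm; a GPU raytracer would express the same math as a shader or kernel rather than a Python function:

```python
def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    """Return the distance t along the ray to the hit point, or None."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)   # triangle edge vectors
    p = cross(dirn, e2)
    det = dot(e1, p)
    if abs(det) < eps:                  # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv                 # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(dirn, q) * inv              # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv                # distance along the ray
    return t if t > eps else None
```

The point of the sketch is that the math is just dot and cross products: it maps onto a GPU's generic compute units as naturally as onto a scanline pipeline.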

Upcoming GPUs support protected memory, C++, and preemptive multitasking. A GPU is just a type of CPU. You will actually be able to pass a pointer from the CPU to the GPU and not have to translate it; the GPU will work with it natively.

Actually, the demo doesn't raytrace. In this demo "scene" (one triangle), it uses barycentric coordinates to determine whether a pixel is inside or outside the triangle. If it is inside, it shades it with one of two functions. These two functions derive red, green, and blue from how far the pixel is from a vertex compared to the distance between that vertex and the midpoint of the opposite edge (the animated function also has a time component). If the pixel is outside the triangle, it is skipped.
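A minimal Python sketch of that inside/outside test, with colour functions modelled on the description above (the demo's exact functions, names and constants are my assumptions, not its actual code):

```python
def barycentric(p, a, b, c):
    """Barycentric weights of point p with respect to triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return u, v, 1.0 - u - v

def shade(p, a, b, c):
    """Return an (r, g, b) tuple in [0, 1], or None outside the triangle."""
    u, v, w = barycentric(p, a, b, c)
    if u < 0 or v < 0 or w < 0:
        return None                      # pixel skipped
    def dist(m, n):
        return ((m[0] - n[0]) ** 2 + (m[1] - n[1]) ** 2) ** 0.5
    def mid(m, n):
        return ((m[0] + n[0]) / 2, (m[1] + n[1]) / 2)
    # One channel per vertex: distance to the vertex, normalized by the
    # distance from that vertex to the midpoint of the opposite edge.
    return tuple(min(1.0, dist(p, vtx) / dist(vtx, mid(*opp)))
                 for vtx, opp in ((a, (b, c)), (b, (a, c)), (c, (a, b))))
```

Running `shade` over every pixel of a framebuffer reproduces the demo's structure: a barycentric containment test followed by a per-pixel colour function, with no raytracing involved.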

Many 3D engines are carefully tuned to the limited bandwidth to the GPU card, which provides just enough bandwidth per frame to transfer the necessary geometry/textures/etc. for that frame. The results, of course, stay on the GPU and are just output to the frame buffer. Now, in addition to that existing overhead, the engine writer would have to transfer the results/frame buffer back to the CPU to process and generate an image, which is then passed back to the GPU to be displayed? Or am I missing something?

While I'm sure it would allow customized algorithms, they would have to be rather unique not to be handled by the current state of geometry/vertex/fragment shaders. Are they thinking of some kind of non-triangular geometry?

Maybe there is a way to send the result of the maths directly to the frame buffer while it's on the GPU?

Only if you want it to! You can share resources between OpenCL and OpenGL without passing through the CPU.

Now, of course, you may wish to (for example: copy to APU memory, run physics, copy to GPU memory, render)... but the programmer needs to explicitly queue a memory-move command to do so. If the programmer doesn't move the content, it stays wherever it is.

While I'm sure it would allow customized algorithms, they would have to be rather unique not to be handled by the current state of geometry/vertex/fragment shaders. Are they thinking of some kind of non-triangular geometry?

The FA mentions voxel rendering for Minecraft-type applications. Although volume rendering can be achieved with traditional hardware-accelerated surface primitives, there are many algorithms that are more naturally described and implemented using data structures that don't translate so easily to hardware-accelerated primitives.

Constructive solid geometry, vector based graphics, and ray tracing are also not such a nice fit to OpenGL and DirectX APIs. You don't always want to have to tessellate geometry tha

Reliance on OpenGL and DirectX could diminish when GPUs are utilized as general 'large batches of math' solvers which software rendering engines offload to.

GPUs have never not been general 'large batches of math' solvers. It's just that games have never required a very large amount of math to be done apart from rendering 3D graphics. Hell, they still don't. Most of the time this kind of stuff is actively avoided, or faked when it cannot be avoided. Why? Because there are better things you can do with the development budget than try to bring a science project to life.

"Good" code doesn't pay as well as "fast" code. (and since it's a little ambiguous, the "fast"