Not really, because that is strictly restricted to the blobs. You might find a few corner-case users, but certainly not widespread deployment. Why would anyone build software depending on OpenCL if they can't trust that their customers will be able to use it?

In other words, it MUST be supported by both AMD and Intel **open source** drivers, AND have a solid CPU fallback, before software developers can trust in its availability.

It does not have to be supported by both AMD and Intel drivers.

We programmers can make the program check at startup whether OpenCL support is available and, if it isn't, use a different algorithm or set of functions.
We don't have to choose between CPU threads and OpenCL at build time. By including both code paths in the program, it can decide which one to use when it starts up.

Blender does support GPU rendering with its Cycles render engine. There is work on OpenCL in GEGL (the library that new versions of GIMP use for graphics operations), but I am not sure what state it is in. Darktable also has OpenCL support.

Gegl supports OpenCL. r600g and clover are very close to being able to handle the Gegl image operations. A few months ago, with some small patches to Gegl to work around missing libclc standard-library implementations, I had some of the Gegl operations working with clover and r600g.

Thomas Stellard, I am involved in what may be the most ambitious and biggest OpenCL kernel in existence, the 3D ray tracer Blender Cycles, and I have had very bad experience with AMD's proprietary OpenCL implementation. Right now, the AMD compiler simply eats enormous amounts of RAM (32+ GB) and still chokes on correct code. We have contacts with the AMD driver team, and the last word was that they are aware of it and have given up, suggesting we cut the kernel size. My current work will add even more code to it to get more features, making the kernel twice as large or more. Is there any chance you can get Clover to the stage where it will "eat" such huge kernels without issues, as NVIDIA's stack does? Unfortunately, Cycles uses 2D images and other built-ins, so I cannot try to run it on Clover at this stage, even with the code cut down to a minimum. My expectation is that GCN has full general-purpose stack/call support, like a CPU, and can avoid the old "flattening" approach, which really cannot unroll a complex code graph into the internal VLIW form. Is that correct? In any case, for now I must say that Clover is the only real hope for running Blender Cycles on AMD hardware.

An open-source CPU driver for OpenCL would be a good thing to have first.
I don't see why the author seems more enthusiastic about GPU drivers than about an LLVM-based CPU driver.

Most people with x86-based CPUs would be able to run it.
It could run on Linux.
It could be developed in conjunction with the test suite, and then the tests could be run against the other drivers.
App developers could more easily get their hands on an OpenCL driver this way.

I think right now it makes more sense for developers to invest time in OpenMP if they want to target a wide audience. Dual cores are now standard on laptops and desktops, and quad- and octo-core machines are not rare. Often, adding just a few pragma statements to the loops that do most of the processing can give big speed-ups. OpenCL (and CUDA) only work on a smaller subset of algorithms and are more effort to program and debug (though if you have a problem that suits them, you can get huge speed-ups).

I'd also suspect that FOSS software developers are more likely to be using open-source drivers than typical Linux users. They are more likely to have chosen hardware with drivers in mind, more likely to be using open source for philosophical reasons, and they might like to be able to debug kernel crashes.

QFT.

Not sure how representative I am, but I'm not going to develop on OpenCL until I can run it myself, which means using a stable, full-featured open-source stack. I will not install either Intel's or AMD's blob to run on the CPU, and I hear all the open CPU implementations are rather incomplete.

Just like I have OpenMP, pthreads, and other such tech at my fingertips now.

Hmm, I wish FFmpeg would use OpenCL. But I guess it will, given a bit more time.
Speaking of OpenCL on AMD, what about R700 cards? I know they use some really strange form of OpenCL, but they should be able to handle OpenCL 1.0 regardless.

They don't have a dedicated compute pipeline, so compute has to be implemented as a special mode on top of the 3D pipeline.

One of the most obvious uses for AMD cards with the proprietary blob is cryptography acceleration; they perform really well in this area. Most notably, AMD cards are great for hash brute-forcing (password auditing, etc.) with John the Ripper. An especially popular use of AMD cards is Bitcoin mining, which is all about brute-forcing SHA-256. AMD performs so well that some people are installing huge clusters with hundreds of top-end AMD GPUs. However, AMD management seems to be real nuts. They fail to understand both that they can perform MUCH better than NVIDIA in some areas and that their proprietary driver is utter crap and causes numerous headaches under Linux. Should I mention that most of those clusters run Linux, since it's better suited for batch-mode operation? In fact, AMD has managed to ignore the problems of customers who buy far more high-end cards than any crazy gamer ever would. No gamer would run 5 x 6970s in one motherboard; people who do mining or something similar would, probably installing a dozen and a half such boxes, creating their own "supercomputers" (which can show impressive performance, by the way).

I guess if AMD management were not such complete nuts, they could give NVIDIA the boot in the high-performance computing market, especially given that NVIDIA fails to get the idea that a closed driver in an open OS is a source of headaches, no matter what.

Considering how few people have bought into that false economy, let alone the nut cases who have fallen for it hook, line, and sinker and are spending several grand of actual money on kit and power for fake money.

The same people who do this are the end-of-the-world nuts who stockpile food, guns, gold, and silver in their basement turned fallout shelter.