GPU programming for easy & fast image processing

If you ever need to manipulate images really fast, or just want to make some pretty fractals, [Reuben] has just what you need. He developed a neat command line tool to send code to a graphics card and generate images using pixel shaders. As opposed to making these images with a CPU, a GPU processes every pixel in parallel, making image processing much faster.

All the GPU coding is done by writing a bit of code in GLSL. [Reuben]’s command line utility takes that code, sends it to the graphics card, and returns the image calculated by the GPU. It’s very simple to make pretty Mandelbrot set images and sine wave interference patterns this way, but [Reuben]’s project can do much more than that. By sending an image to the GPU and performing a few operations, [Reuben] can do very fast edge detection and other algorithmic processing on pre-existing images.

So far, [Reuben] has tested his software with a few NVIDIA graphics cards under Windows and Linux, although it should work with any graphics card with pixel shaders.

Although [Reuben] is sending code to his GPU, it’s not quite on the level of the NVIDIA CUDA parallel computing platform; [Reuben] is only working with images. Cleverly written software could get around that, though. Still, even if [Reuben]’s project is only used for image processing, it’s still much faster than any CPU-bound method.

No, it’s more like you’re not a rendering engineer, and you’re just being taken in by buzzwords that are rather trivial to implement in practice.

You can implement a simple Sobel-style edge filter in around 8 lines of GLSL: perform four additional texture fetches for the texels neighboring the central texel, subtract their values from the central texel’s, and use a ternary comparison to set the output pixel to white when the difference exceeds a given threshold. The “framework” to render a full-screen quad and bind an image to a sampler takes about 30 minutes to write with some cursory Googling, and the relevant GLSL fragment shader about 5 minutes.
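For the curious, the shader described above can be sketched roughly like this. This is a minimal, hedged example, not [Reuben]’s actual code: the uniform names (`u_texture`, `u_texelSize`, `u_threshold`) and the `v_texCoord` varying passed in from a trivial full-screen-quad vertex shader are assumptions, and the threshold value is arbitrary.

```glsl
#version 120
// Hypothetical edge-detect fragment shader, not from the project itself.
uniform sampler2D u_texture;    // the input image, bound by the host program
uniform vec2 u_texelSize;       // 1.0 / texture dimensions, supplied by the host
uniform float u_threshold;      // e.g. 0.2

varying vec2 v_texCoord;        // interpolated from the full-screen quad

void main() {
    // Central texel plus its four neighbors (left, right, down, up).
    float c = texture2D(u_texture, v_texCoord).r;
    float l = texture2D(u_texture, v_texCoord - vec2(u_texelSize.x, 0.0)).r;
    float r = texture2D(u_texture, v_texCoord + vec2(u_texelSize.x, 0.0)).r;
    float d = texture2D(u_texture, v_texCoord - vec2(0.0, u_texelSize.y)).r;
    float u = texture2D(u_texture, v_texCoord + vec2(0.0, u_texelSize.y)).r;

    // Sum of absolute differences from the center: large where intensity
    // changes sharply, i.e. at an edge.
    float diff = abs(c - l) + abs(c - r) + abs(c - d) + abs(c - u);

    // Ternary comparison: white where the difference exceeds the threshold,
    // black elsewhere.
    gl_FragColor = (diff > u_threshold) ? vec4(1.0)
                                        : vec4(0.0, 0.0, 0.0, 1.0);
}
```

The GPU runs this once per output pixel in parallel, which is exactly why the approach in the article is so much faster than looping over pixels on a CPU.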

Because I have not done this in this fashion before, ergo neither has anyone else, ergo it opens doors for others.

Your pointing out how quickly this can be done raises an interesting point: even non-image data can be processed as an image. Heck, I may even take your advice and render a waveform to an image via the graphics card; with some built-in filters and shader manipulation you can do many things.

I think I will take your advice about the filters and use a separate pitch-detection algorithm to do beat detection.