It might not seem like much, but Ian's algorithm has all the hardware-friendly properties you want, some of which my old GLSL simplex noise demo was missing. In summary, it's fast, it's a simple include (no dependencies on texture data or uniform arrays), it runs in GLSL 1.20 and up (OpenGL 2.1, WebGL), and it scales well to massively parallel execution because there are no memory access bottlenecks.
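To give an idea of what "simple include" means in practice, a minimal GLSL 1.20 fragment shader could look like this (a sketch, assuming the self-contained noise source with its snoise() function is pasted in where indicated):

    #version 120
    // The self-contained noise code (which defines snoise()) is
    // pasted in here; it needs no textures and no uniform arrays.
    float snoise(vec3 v);

    void main() {
      float n = snoise(gl_TexCoord[0].xyz);  // output roughly in [-1, 1]
      gl_FragColor = vec4(vec3(0.5 + 0.5 * n), 1.0);
    }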

Concerning this, I would like to get in touch with some people in the Khronos GLSL workgroup. I was last involved in this around 2003-2004, and my contact list is badly outdated. Are any of the good people in the GLSL WG reading this? My email address is "stegu@itn.liu.se", if you want to keep this private. Just please respond, as I think this is great news.

Re: GLSL noise fail? Not necessarily!

With the default window size, the sphere covers about 70K pixels, so multiply the frame rate by 70,000 to get the number of noise samples per second. On my ATI Radeon HD 4850, I get 5700 FPS, which works out to 5700 × 70,000 ≈ 400 Msamples/s. Whee!

Windows- and Linux-compatible source code. (Untested on Linux, but it should compile and run without changes.) A Windows binary (.exe) is supplied for your convenience. The demo uses only OpenGL 2.1 and GLSL 1.20, so it should compile under MacOS X 10.5 as well, if you either run it from the command line or create an application bundle and change the file paths for the shader files to point to the right place, e.g. "../../../GLSL-ashimanoise.frag" instead of "GLSL-ashimanoise.frag". You also need the GLFW library to compile the demo yourself (see www.glfw.org).

Re: GLSL noise fail? Not necessarily!

I tried the Windows binary on Linux through Wine and it ran great. The only "problem" is that the compiler issues a warning for the fragment shader: WARNING: 0:252: warning(#288) Divide by zero error during constant folding.
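For what it's worth, warnings of that class usually mean the compiler folded some constant expression down to a division by zero. A hypothetical reproduction in GLSL (not the actual code at line 252) would be:

    // Hypothetical example only, not the actual shader code:
    // the divisor is a compile-time constant zero, so the
    // constant folder trips warning #288.
    const float d = 0.0;
    float oops = 1.0 / d;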

Re: GLSL noise fail? Not necessarily!

Because of my long-standing interest in noise, I have had some insight into the painful and drawn-out process of implementing noise() in GLSL. I would venture a guess that the problems have not been primarily due to licensing or patent issues, but due to the lack of a good enough candidate, and a resulting fear of premature standardization.

A noise() function that gets implemented as part of GLSL needs to be very hardware friendly. Previous attempts have been lacking in at least some respects. Ian's code removes two memory accesses in a very clever way, by introducing a permutation polynomial and creating an elegant mapping from an integer to a 2D, 3D or 4D gradient. This is the first time I have seen a clear candidate for a standard noise() function that both runs well as stand-alone shader code *and* scales well to a massively parallel hardware implementation.
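To make the trick concrete, the hash can be written in a couple of lines of GLSL (constants as in Ian's published source; consider this a sketch rather than the canonical code):

    // A permutation polynomial replaces the classic permutation
    // table: p(x) = (34x + 1)x mod 289 permutes the integers
    // 0..288, and a whole vector can be hashed at once.
    vec3 mod289(vec3 x) {
      return x - floor(x * (1.0 / 289.0)) * 289.0;
    }
    vec3 permute(vec3 x) {
      return mod289(((x * 34.0) + 1.0) * x);
    }

The hashed value is then mapped arithmetically onto a gradient, so neither the permutation table nor the gradient table costs a memory access.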

Also, a standard noise() implementation will need to remain reasonably stable over time. You can't expect people to create real-time shaders using one version of noise() only to have them look different when a slightly better but different version shows up in the next generation of hardware.

In short, a standard noise() needs to be hardware *and* software friendly, to enable an efficient implementation in silicon but also to allow a shader fallback with good performance. A standard also needs to be good enough to keep around for a long time. This code delivers on both counts, I think.