FBO particles

I can’t tell how many particle engines I’ve written over the past 15 years but I’d say a lot. one reason is that they’re easy to implement and quickly give good looking / complex results.

in august 2014, I started a year-long project that never shipped (which I playfully codenamed “the silent failure”). the first thing they asked for was a particle engine to simulate a shitload of particles.

the way to go in this case is a GPGPU approach, a.k.a. FBO particles. it is a fairly well documented technique; there were working examples of FBO particles running in THREE.js, especially this one by Joshua Koo & Ricardo Cabello.

in a nutshell, 2 passes are required:

simulation: uses a Data Texture as an input, updates the particles’ positions and writes them back to a RenderTarget

render: uses the RenderTarget to distribute the particles in space and renders the particles to screen

the first pass requires a bi-unit square, an orthographic camera and the ability to render to a texture. the second pass is your regular particle rendering routine with a twist on how the particle’s position is retrieved.
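the article shows the render shader further down but not the simulation pass itself; as a rough sketch (the uniform/varying names `positions` and `vUv` are assumptions, not the article’s actual code), a minimal simulation fragment shader just copies the positions through:

```glsl
//minimal simulation fragment shader sketch: reads the previous positions
//and writes them back unchanged; any motion logic would go here
uniform sampler2D positions; //DataTexture or previous RenderTarget
varying vec2 vUv;            //interpolated uv of the bi-unit square

void main() {
    vec3 pos = texture2D( positions, vUv ).xyz;
    //...update pos here (forces, noise, morphing...)
    gl_FragColor = vec4( pos, 1.0 );
}
```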

//2 use the result of the swap as the new position for the particles' renderer
exports.particles.material.uniforms.positions.value = rtt;

};

return exports;

}({});

I left the comments in so it should be easy to understand. step by step, it unrolls as follows:

we need to determine if the hardware is capable of running the shaders. for the simulation pass, we’ll need float textures; if the hardware doesn’t support them, throw an error.

for the render pass, we’ll have to access textures in the vertex shader, which isn’t always supported by the hardware; if unsupported, bail out and throw an error.

create a scene and a bi-unit orthographic camera (bi-unit = left: -1, right: 1, top: 1, bottom: -1). near and far are not relevant as there is no depth, so to speak, in the simulation.

create the RenderTarget that will allow the data transfer between the simulation and the render shaders. as this is not a “regular” texture, it’s important to set the filtering to NearestFilter (crispy pixels). also, the format can be either RGB (to store the XYZ coordinates) or RGBA if you need to store an extra value for each particle.
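as a sketch of that setup (a configuration fragment, not the article’s actual code; it assumes a global THREE and option names from THREE.js versions of roughly that era):

```javascript
//render target used to pass positions from the simulation to the render shader
//NearestFilter: no interpolation between texels; FloatType: real coordinates
var size = 256; //256 * 256 = 65536 particles
var rtt = new THREE.WebGLRenderTarget( size, size, {
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    format: THREE.RGBAFormat, //RGB for xyz only, RGBA for an extra value
    type: THREE.FloatType
} );
```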

straightforward: we create a bi-unit square geometry & mesh, associate the simulation shader with it, and render it with the orthographic camera.

we create the render mesh; this time we need as many vertices as the pixel count of the float texture: width * height. to make things easy, we normalize the vertices’ coordinates. then we initialize a Points object (a.k.a. Particles, a.k.a. PointCloud depending on which version of THREE.js you use).
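the normalization step can be sketched in plain JavaScript (a hypothetical helper, not the article’s actual code): each vertex stores the normalized uv it will later use to look up its position in the float texture:

```javascript
//builds a buffer of normalized ( u, v, 0 ) coordinates, one per texel,
//so each vertex of the Points mesh knows where to sample the position texture
function createLookupVertices( width, height ) {
    var vertices = new Float32Array( width * height * 3 );
    for ( var i = 0; i < width * height; i++ ) {
        vertices[ i * 3 ]     = ( i % width ) / width;            //u in [0,1)
        vertices[ i * 3 + 1 ] = Math.floor( i / width ) / height; //v in [0,1)
        vertices[ i * 3 + 2 ] = 0;                                //z is unused
    }
    return vertices;
}
```

with a 256 * 256 texture this yields the 65536 entries mentioned below.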

initialization is over, now the update loop does 2 things:
1 render the simulation into the renderTarget
2 pass the result to the renderMaterial (assigned to the particles object)

that’s all well and good; now a basic instantiation would look like this (I’ll skip the scene setup, you can find it here):

//the mesh is a normalized square so the uvs = the xy positions of the vertices
vec3 pos = texture2D( positions, position.xy ).xyz;

//pos now contains a 3D position in space, we can use it as a regular vertex

//regular projection of our position
gl_Position = projectionMatrix * modelViewMatrix * vec4( pos, 1.0 );

//sets the point size
gl_PointSize = pointSize;

}

//fragment shader
void main()
{
    gl_FragColor = vec4( vec3( 1. ), .25 );
}

ok, this was a long explanation, time to do something with it. the above will probably look somewhat like this (click for live version):

which is a bit dry I’ll admit, but at least it works :)

the benefit of this system is its ability to support lots of particles, I mean lots of them, while preserving a rather light memory footprint. the above uses a 256^2 texture, or 65536 particles; 512^2 = 262144, 1024^2 = 1048576, etc. and as many vertices, which is often more than what is needed to display… well, anything (imagine a mesh with 1+ million vertices).

on the other hand, particles often cause overdraw, which can slow down the render a lot if you render many particles at the same location for instance.

it’s trivial to create random or geometric position buffers, it’s straightforward to display an image of course (vertex position = normalized pixel position + elevation), it’s easy to create buffers describing 3D objects as we don’t need the connectivity information (the faces), and as we shall see, it’s also easy to animate this massive amount of particles.

the getContext() method creates a 2D context to access the image’s pixel values, and the loooooong line computes the greyscale value for each pixel.
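the greyscale computation itself boils down to a weighted sum of the channels; a minimal sketch (assuming the standard Rec. 601 luma weights — the article’s “loooooong line” may use different ones):

```javascript
//converts one RGBA pixel (0-255 channels) to a greyscale value in [0,1]
//using the Rec. 601 luma weights (an assumption, other weights work too)
function toGreyscale( r, g, b ) {
    return ( 0.299 * r + 0.587 * g + 0.114 * b ) / 255;
}
```

the resulting value can then be used directly as the particle’s elevation.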

which should give this (can’t view online because of CORS restrictions, it’s on the repo though):
thanks to @makc for pointing out in the comments that images need a crossOrigin; in this case, img.crossOrigin = "anonymous"; solved the problem, so enjoy the live demo :)

it uses this 256 * 256 greyscale image:

loading a mesh is even easier:

function parseMesh( g ){
    var vertices = g.vertices;
    var total = vertices.length;
    var size = parseInt( Math.sqrt( total * 3 ) + .5 );
    var data = new Float32Array( size * size * 3 );
    for( var i = 0; i < total; i++ ){
        data[ i * 3 ] = vertices[ i ].x;
        data[ i * 3 + 1 ] = vertices[ i ].y;
        data[ i * 3 + 2 ] = vertices[ i ].z;
    }
    return data;
}

the method takes the geometry of the loaded mesh; the trick here is to determine the size of the texture from the amount of vertices. the texture side is simply the square root of the vertex count, rounded up.
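the sizing rule can be sketched as a tiny helper (hypothetical, the parseMesh above derives it slightly differently): find the smallest square texture with at least one texel per vertex:

```javascript
//smallest square texture side such that side * side >= vertexCount,
//so every vertex gets its own texel in the position texture
function textureSizeFor( vertexCount ) {
    return Math.ceil( Math.sqrt( vertexCount ) );
}
```

for the 47516-vertex model below this gives a 218 * 218 texture.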

click the picture for a live version

in the render shader, I compute the depth and index the size of the particles on it, which gives the illusion of faces, but those are only particles (47516 particles, fewer than in the first example).

what about animation?

say we want to morph a cube into a sphere, first we need a sphere:

//returns a Float32Array buffer of spherical 3D points
function getPoint( v, size )
{
    v.x = Math.random() * 2 - 1;
    v.y = Math.random() * 2 - 1;
    v.z = Math.random() * 2 - 1;
    if( v.length() > 1 ) return getPoint( v, size );
    return v.normalize().multiplyScalar( size );
}

function getSphere( count, size ){
    var len = count * 3;
    var data = new Float32Array( len );
    var p = new THREE.Vector3();
    for( var i = 0; i < len; i += 3 )
    {
        getPoint( p, size );
        data[ i ] = p.x;
        data[ i + 1 ] = p.y;
        data[ i + 2 ] = p.z;
    }
    return data;
}

note that it uses the “discard” approach; a point is generated randomly in the range [-1,1] and if its length is > 1, it’s discarded. it is quite inefficient but prevents points from being biased toward the corners of a normalized cube. there’s also an exact way to prevent this problem that goes:

Math.cbrt = Math.cbrt || function( x ){
    var y = Math.pow( Math.abs( x ), 1 / 3 );
    return x < 0 ? -y : y;
};

function getPoint( v, size )
{
    var phi = Math.random() * 2 * Math.PI;
    var costheta = Math.random() * 2 - 1;
    var u = Math.random();
    var theta = Math.acos( costheta );
    //cbrt( u ) distributes the points uniformly inside the ball;
    //use r = size instead for points on the surface only
    var r = size * Math.cbrt( u );
    v.x = r * Math.sin( theta ) * Math.cos( phi );
    v.y = r * Math.sin( theta ) * Math.sin( phi );
    v.z = r * Math.cos( theta );
    return v;
}

it needs cube roots and involves more computation, so all in all the discard method is not that bad and easier to understand (for people like me at least).

now back to morphing. the way to go is to create 2 DataTextures, one for the cube, one for the sphere, and pass them to the simulation shader. the simulation shader will perform the animation between the 2 and render the result to the RenderTarget. then the RenderTarget will be used to draw the particles.

that wasn’t too scary, right? we have our 2 DataTextures (the cube and the sphere) passed to the simulation material along with a timer value, which is a float in the range [0,1]. we sample the coordinates of the first model and store them in origin, the coordinates of the second model and store them in destination, then use mix( origin, destination, timer ) to blend between the two.
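the blending performed by the simulation shader can be mirrored on the CPU for clarity (a sketch, not the actual shader): GLSL’s mix( a, b, t ) is just a per-component linear interpolation:

```javascript
//CPU-side equivalent of the shader's mix( origin, destination, timer ):
//linearly interpolates two position buffers for a given timer in [0,1]
function mixBuffers( origin, destination, timer ) {
    var out = new Float32Array( origin.length );
    for ( var i = 0; i < origin.length; i++ ) {
        out[ i ] = origin[ i ] + ( destination[ i ] - origin[ i ] ) * timer;
    }
    return out;
}
```

at timer = 0 this returns the cube, at 1 the sphere, and anything in between is a blend of the two.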

here’s a preview of the timer value being set at 0, 0.5 and 1:

of course we can create more sophisticated animations, the idea is the same, anything that should alter the particles’ positions will happen in the simulation shader.

for instance this uses a curl noise to move particles around:

the size is set like this:

gl_PointSize = size = ( step( 1. - ( 1. / 512. ), position.x ) ) * pointSize;

it reads: if the current “uv.x” is lower than 0.998046875, it’s a small particle, otherwise it’s big. if you’re interested in the simulation shader, it’s here; I don’t think it’s an example of good practices though.
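GLSL’s step( edge, x ) returns 0.0 when x < edge and 1.0 otherwise; a quick JavaScript equivalent shows the selection above (with 1. - 1./512. = 0.998046875):

```javascript
//JavaScript equivalent of GLSL's step( edge, x )
function step( edge, x ) {
    return x < edge ? 0.0 : 1.0;
}
var edge = 1 - 1 / 512; //0.998046875, exact in binary floating point
```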

to wrap it up, this technique makes it easy to control insane amounts of particles, it is well supported (compared to when I started using it at least) and – when using smaller texture sizes – it performs relatively well on most platforms; GPUs usually optimize nearby texture sampling, which is what the FBO approach relies on.

I believe this is where WebGL falls short; the hardware specifics… I wouldn’t recommend using this on mobile devices anyway; gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS may be 0 (texture access in the vertex shader forbidden) or, worse, float textures may not be supported.
for your problem I don’t know; it seems to come from the RTT, maybe having a look at how to get a render-to-texture to work might help.

ok, I fixed the problem :)
The THREE.DataTexture needs to be defined with THREE.RGBAFormat (not THREE.RGBFormat) so that it works on Intel mobile chips. I found out because mrdoob’s FBO examples worked where these did not. So I compared the code and tada.

The phrase “framebuffer incomplete” in the error message indicates that you are unable to write floating point values into the framebuffer. The OES_texture_float extension, which the code checks, only indicates that you can create floating point textures and that shaders can read them, not that shaders can write to them.

Hello Nico !
Glad to see you finally have some fun with shaders ;)
You should consider creating the triangles directly inside the vertex shader, based on 3 constants located in the shader (defined once and for all) representing the 3 vertices of a triangle.
Then you only need the position of your particle (the center of your “shader-triangle”) in your VBO, and you can put much more data in it! :)

I posted an example showing how I usually do it on the processing forum.
The GLSL code should work in WebGL without any modification.

hey, thanks for passing by :)
this uses GL.POINTS instead of GL.TRIANGLES. if we use triangles, we’ll need to triple the vertex count (like in your example).
it’s a trade-off; GL.POINTS will perform faster with a small point size (less data to process, less overdraw) but will slow down terribly when the point size gets bigger.
if the idea is to work with bigger surfaces, then your approach is the way to go :)

the idea would be to draw a set of lines then draw the particles on top of it. looks easy enough but it requires a piece of information you don’t have in the particle system: connectivity; indeed, to draw lines, you need to know which points to connect. moreover, the picture you linked looks like a “node garden” (or a force directed graph): it’s an emergent structure computed from the relative distance of nodes; if they’re “close enough”, draw a line, otherwise don’t. to compute this, you need to know where the nodes are before knowing if 2 nodes should be linked. this operation cannot be performed easily on the GPU (it would require a specific data structure and extra GPGPU steps) and, given the amount of particles, computing it on the CPU would require a lot of resources (and time).

this being said, you can use a grid or any mesh rendered with lines and use the same FBO technique to compute the vertices’ positions. I never did anything like this though, just a wild guess :)

Hi! This is one of the best articles on shaders I’ve read so far. Thanks for sharing, it’s immensely helpful and clear!!

I am trying to build an animation to morph meshes of any number of vertices. I have some extra vertices on one mesh that I need to hide. So, in the DataTexture I am using THREE.RGBAFormat instead of THREE.RGBFormat. Vertices are defined by a THREE.Vector4.
Now all vertices that are not needed have their xyz values set to random and their alpha value set to 0 (hidden).

Everything works fine, but I cannot get that alpha value in the shaders.
If I add ‘transparent: true’ in the simulation shader, I get a strange behaviour: all the particles with alpha zero get scaled to position 0! And their alpha is still 1.

Really cool project. I have had this bookmarked for a few months now because I was trying to create something similar, but because I am new to THREE and WebGL I was not able to follow your instructions.

In the meantime I have found this example http://www.pshkvsky.com/gif2code/animation-13/ which I was able to follow. But it doesn’t work as smoothly as yours does. I guess the ‘raycasting’ method is too inefficient for defining points on a mesh.

I am looking for a better way to form shapes (a loaded 3D object) out of particles in THREE.js. Do you have some time to explain that in more detail?

hi,
sorry I didn’t see this comment earlier :)
your question boils down to knowing where to place particles on a mesh.
as far as the astral website is concerned, it’s very nicely done! :) they probably used a depth sensor (Kinect, Leap Motion, 3D camera…) to scan the faces and assign the size of the dots depending on how close the particle is to the ‘camera’. it’s possible to use photogrammetry too but it would be somewhat overkill and less efficient in this case :)

if you have a mesh, raycasting – though very slow as you mentioned – could be the way to go; shoot X random rays at the mesh, store the intersections, rotate the mesh a bit, repeat. this is basically what the Kinect (or any LIDAR device) does. then you obtain a series of points on the surface of the mesh.

if you have a mesh and only need points on the surface, you can use the geometry’s ‘faces’ and randomly distribute points on each face. this is fairly trivial. the trick here is to use a ratio based on the area of the triangles to set the amount of random particles to create for each face.
hope this will help you go on, thanks for passing by :)

Your particle effect is really cool !!!!
But when I try to view it on my phone, three.js throws a warning that it does not support “EXT_frag_depth”, and the particle effect does not appear.
But I did not see “gl_FragDepthEXT” in your glsl code.
I am very confused about this.

ps: The performance of particles on phones is very important and I really want to use your technique on the phone.

Hi! Admiring this article! Good explanation!!
I tried to follow your steps and have one question.
Am I right that what is done in the article could actually be done without the FBO approach?
We could just compute that same curl noise in the vertex shader in one pass, based on some geometry positions attribute (your sphere) and a time uniform?
I mean, of course, the FBO can be extended by adding a velocities texture etc. But does this exact animation in the article benefit from it?
Again, thank you for this article, it has been my first step into this type of animations!