Neural Vector Fields

Requires ES6 & WebGL 2.0 ★ Runs Best In Firefox (and Chrome) ★ Does Not Run On Mobile


This is a particle shader in which the particles exert a force on a field, which is rendered to a texture. It’s something of a hello world for molecular dynamics. In this simulation the field does not change the movement of the particles; that requires gradient descent, which I will focus on next. The motion of the particles is similar to the Brownian Motion shader I just wrote. Here it’s like a polar version of Brownian Motion, where the generated random numbers alter each particle’s forward motion and its rotation. In this way the particles have a rotational orientation, which is important later on, in particular for the implementation of gradient descent that I will use. That kind of gradient descent is quite a bit more difficult than the usual kind.
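The polar update described above can be sketched in plain JavaScript. The parameter names `speed` and `turnScale` are my own illustration, not taken from the actual shader:

```javascript
// Polar Brownian motion: random numbers perturb each particle's heading
// and forward step. `speed` and `turnScale` are illustrative names,
// not the shader's actual uniforms.
function updateParticle(p, speed, turnScale) {
  p.theta += (Math.random() * 2 - 1) * turnScale;   // random rotation
  const step = speed * (0.5 + Math.random() * 0.5); // random forward motion
  p.x = (p.x + Math.cos(p.theta) * step + 1) % 1;   // wrap to [0, 1)
  p.y = (p.y + Math.sin(p.theta) * step + 1) % 1;
  return p;
}
```

The heading persists between frames, which is what gives each particle the rotational orientation mentioned above.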

The long-term goal of this series on graphics simulations is a social physics simulation, in which the particles exert a non-directional repulsive force and a directional attractive force. These forces are calculated on two shared fields and balanced to produce equilibrium, so that the particles form molecule-like “conversations.” Then “programs,” which are essentially probabilistic state machines, can be written “within” the simulation. These allow the exchange of information, and the differing behavior of the various particle types, to be observed. Using Delaunay Triangulation and Quad Trees, the ways in which particles can exchange information can be greatly expanded.


Avoiding Use of Blending

The correct way to render the field textures is with blending, which I tried to use in the Brownian Motion shader. Configuring blending with transparent particles is a bit tricky in 2D, since in 3D the particles can be sorted and drawn back to front. I ran into problems doing this and decided to implement it a different way, which is much less performant, but still creative.
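It is worth noting that for a purely accumulated scalar field, the sorting problem disappears if the blend is purely additive (in WebGL, `gl.blendFunc(gl.ONE, gl.ONE)`), because addition is order-independent. A small CPU-side sketch of that property, with made-up contribution values chosen so they sum exactly:

```javascript
// Additive blending is commutative: summing particle contributions into a
// field pixel gives the same value in any draw order, so no back-to-front
// sorting is needed. The contribution values are arbitrary illustration.
function blendAdditive(contributions) {
  return contributions.reduce((sum, c) => sum + c, 0);
}

const frontToBack = blendAdditive([0.25, 0.5, 0.125]);
const backToFront = blendAdditive([0.125, 0.5, 0.25]);
// both orders produce the same field value
```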

Rendering ParticleId and Joining Textures

To avoid blending, I employed a bit of relational algebra and took a page from SQL. I render the particles first, writing each particleId to just one pixel. This is rendered to a texture after the particle positions have been updated. Then, in the fsFields shader, I join the particles and particleIds textures.

There is a drawback: occasionally two particles can overlap, leading to an inaccurately generated field. However, for the purpose of calculating a shared field used to update the particle positions, this is fine: with properly tuned parameters, equilibrium is reached, the particles repel, and no two particles should ever occupy the same pixel. If they do, the particles are still updated as though they were in slightly different positions, so no two particles should ever get “stuck” together in the same pixel.
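A CPU-side sketch of the join, under my own assumptions: empty pixels in the particleIds texture hold a sentinel, the scan window is ballSize × ballSize, and the falloff is a simple linear one (none of these details are taken from the actual shader):

```javascript
// Join pass, in the spirit of fsFields: for each field pixel, scan a
// ballSize x ballSize window of the particleIds texture; where an id is
// found, look that row up in the particles array (the relational join) and
// accumulate its contribution. EMPTY, the falloff, and `strength` are
// illustrative choices, not the shader's.
const EMPTY = -1;

function fieldAt(px, py, particleIds, particles, width, ballSize) {
  const r = Math.floor(ballSize / 2);
  let field = 0;
  // for simplicity, assumes (px, py) is at least r pixels from the edge
  for (let dy = -r; dy <= r; dy++) {
    for (let dx = -r; dx <= r; dx++) {
      const id = particleIds[(py + dy) * width + (px + dx)];
      if (id === EMPTY) continue;
      const p = particles[id]; // join: pixel's id -> particle attributes
      const dist = Math.hypot(dx, dy);
      field += (p.strength || 1) * Math.max(0, 1 - dist / r);
    }
  }
  return field;
}
```

The lookup through `particles[id]` is the SQL-style part: the particleIds texture acts as a foreign key into the particles table.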

Improving the Speed

The problem with this implementation is that it’s slow, because of the for loop in the fsFields shader. I expected large performance hits here. To avoid them, I need to properly configure blending so as to minimize the number of pixels rasterized; that will be much faster and much simpler in the end. With a ballSize of 21, fsFields runs a loop of 445 iterations for every pixel drawn.
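To put a rough number on the cost (the 445 iterations are from above; the 512×512 field resolution is my own assumption, not stated in the post):

```javascript
// Rough per-frame cost of the fsFields loop: every field pixel performs
// ~445 texture reads. The 512x512 resolution is an assumed figure.
const loopIterations = 445;   // with ballSize = 21, per the text
const pixels = 512 * 512;     // assumed field texture resolution
const fetchesPerFrame = loopIterations * pixels; // over 100 million reads
```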

Checking Rasterization of ParticleId

Getting the particleId joining solution working required forcing the rasterization of only one pixel in the vsParticleId & fsParticleId shaders. This is the motivation behind the following lines:

// snap the particle to the center of its pixel, so the point
// rasterizes to exactly one fragment
particle.x = (trunc(particle.x * u_resolution.x) + 0.5) / u_resolution.x;
particle.y = (trunc(particle.y * u_resolution.y) + 0.5) / u_resolution.y;
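A quick JavaScript check of what that snap does, with `Math.trunc` standing in for GLSL’s `trunc`: every position inside a pixel maps to that pixel’s center.

```javascript
// Snap a normalized coordinate to the center of the pixel it falls in,
// mirroring the GLSL above; Math.trunc stands in for GLSL's trunc.
function snapToPixelCenter(coord, resolution) {
  return (Math.trunc(coord * resolution) + 0.5) / resolution;
}

// Any two positions inside the same pixel snap to the same center,
// so the point rasterizes exactly one fragment:
const a = snapToPixelCenter(0.5001, 100); // inside pixel 50
const b = snapToPixelCenter(0.5099, 100); // also inside pixel 50
```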

Ensuring that each particle results in only one pixel draw is absolutely essential for drawing an accurate field. I wanted to minimize debugging math errors at runtime, so to check the output of the fsParticleId shader, I debugged it with the following lines of code:

// after using readPixels() to copy pixels from the particleId framebuffer
var maxIntFloat = 2147483647; // max 32-bit signed int; marks unwritten pixels
var renderedPixels = texContainer.reduce((a, v, i) => { if (v != 0 && v != maxIntFloat) { a.push([i, v]); } return a; }, []);
var particleIdCounts = renderedPixels.reduce((a, v) => { a[v[1]] = (a[v[1]] || 0) + 1; return a; }, {});

With this code, I was able to check that every count in the hash particleIdCounts was exactly one, and that there were slightly fewer than 1024 entries for the 1024 particles. There are slightly fewer entries than particles because two particles can occasionally be rendered to the same pixel, with one overwriting the other. That is fine, but I needed to ensure that no single particle resulted in the rasterization of more than one pixel. This is a creative approach to the problem, but it ended up being less performant than necessary.
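The invariant those counts verify can be written as a small predicate (`counts` stands in for particleIdCounts; 1024 is the particle count mentioned above):

```javascript
// Verify the rasterization invariant: every particle that survived the pass
// occupies exactly one pixel, and overlaps only ever lose particles, never
// duplicate them. `counts` stands in for particleIdCounts.
function checkOnePixelPerParticle(counts, particleCount) {
  const values = Object.values(counts);
  const everyCountIsOne = values.every((c) => c === 1);
  const noExtraEntries = values.length <= particleCount;
  return everyCountIsOne && noExtraEntries;
}
```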

Fragment Shader: fsParticle
Vertex Shader: vsParticleId
Fragment Shader: fsParticleId
Fragment Shader: fsFields
Fragment Shader: fsRenderFields