# Volumetric Light

For a while I have been thinking about how to make some kind of volumetric light effect. There are a lot of resources on things like this, but I have avoided doing any research on this on purpose. Figuring out how something works on my own is an interesting challenge, and it sometimes yields idiosyncratic results and/or interesting mistakes.

The following is a description of the algorithm I worked out recently. Since I don't really know how this is achieved in mainstream implementations, this isn't a description of how to properly and efficiently create volumetric light. But it should serve as an introduction to a possible approach.

My implementation uses a raymarcher on signed distance functions (SDFs), but you should be able to adapt this to a ray tracing approach as well, provided you have, or can create, the prerequisite functions listed below.

We assume that everything happens in world space. That means you might have to figure out how to get the world coordinate of each pixel, as well as points on the geometry surface. In addition, this approach was made using an isometric projection. It should work for other projections as well, but you might have to make some adaptations.

To get started, we assume that we have these functions:

- `march(i, j)`: returns the world-space coordinate of the point on the surface geometry seen by pixel `[i, j]`;
- `clear_path(a, b)`: checks whether there is any obstructing geometry between `a` and `b`;
- `pix_position(i, j)`: returns the world coordinate of pixel `[i, j]`;
- `length(a)`: returns the length of vector `a`;
- `line_sample(a, b)`: returns a random point on the line `[a, b]`;
- `pow(a, b)`: returns `a` to the power of `b`.
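Two of these helpers are simple enough to sketch directly. The following Python version assumes vectors are plain 3-component tuples; the function names match the list above, but everything else is my own stand-in:

```python
import math
import random

def length(a):
    # Euclidean length of a 3-vector
    return math.sqrt(a[0] ** 2 + a[1] ** 2 + a[2] ** 2)

def line_sample(a, b):
    # A uniformly random point on the segment [a, b]:
    # linear interpolation with a random parameter t in [0, 1)
    t = random.random()
    return tuple(a[i] + t * (b[i] - a[i]) for i in range(3))
```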

The final requirement is a function, `light_sample()`, that can generate random coordinates inside the light source. A good alternative is a function that generates random points inside a cube. You can probably get interesting results with different light distributions as well.
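A cube-shaped `light_sample` is only a few lines. The sketch below assumes an axis-aligned cube described by a center and a side length (both parameters are mine, chosen for illustration):

```python
import random

def light_sample(center=(0.0, 5.0, 0.0), side=1.0):
    # Uniform random point inside an axis-aligned cube
    # centered at `center` with side length `side`
    h = side / 2.0
    return tuple(c + random.uniform(-h, h) for c in center)
```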

The following algorithm is repeated for every pixel in the rendering. It assumes that there is a single light source. Adapting to multiple sources only requires minor changes.

The pseudo code for the algorithm is as follows:

```
p = pix_position(i, j)
hit = march(i, j)
d = distance(hit, p)

# start with a black pixel value
pix_value = 0

# n is the number of samples per pixel, e.g., 10
repeat n:
    sample = line_sample(p, hit)
    light = light_sample()

    # check if the point inside the light source is visible
    # from the point on the line from the pixel
    if clear_path(sample, light):
        pix_value += d / pow(1 + distance(sample, light), 2)

# the result should be scaled so that it lies between 0 and 1,
# adjust light_strength accordingly
pix_value *= light_strength / n
```

`pix_value` is the amount of light at the pixel in question, as a value between 0 and 1. You might have to scale things differently, depending on your geometry and scene layout. But this should be a pretty good start.
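To make the estimator concrete, here is a direct Python transcription of the pseudo code. Everything scene-specific is stubbed out: `clear_path` always succeeds (an empty scene) and the pixel and hit points are fixed, so this demonstrates only the sampling loop, not a full raymarcher.

```python
import math
import random

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def line_sample(a, b):
    t = random.random()
    return tuple(x + t * (y - x) for x, y in zip(a, b))

def light_sample():
    # Stand-in light: random point in a unit cube centered at (0, 5, 0)
    return tuple(c + random.uniform(-0.5, 0.5) for c in (0.0, 5.0, 0.0))

def clear_path(a, b):
    # Stub: empty scene, nothing ever occludes the light
    return True

def pixel_light(p, hit, n=10, light_strength=1.0):
    d = distance(hit, p)
    pix_value = 0.0
    for _ in range(n):
        sample = line_sample(p, hit)
        light = light_sample()
        if clear_path(sample, light):
            # Inverse-square falloff; the +1 keeps the term
            # finite when a sample lands near the light
            pix_value += d / (1 + distance(sample, light)) ** 2
    return pix_value * light_strength / n

value = pixel_light((0.0, 0.0, 5.0), (0.0, 0.0, 0.0))
```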

This pseudo code assumes that every pixel hits a point on the geometry. This might not be the case. How to solve this depends on your approach, but one alternative is to place a very large plane behind the scene, either parallel to the floor or to the camera plane. Adjusting the distance and orientation of this plane will affect the intensity of the light.

To reduce the amount of grain in the result, you can increase the value of `n`. You might also want to introduce some kind of anti-aliasing. A very naive approach is to repeat the process multiple times per pixel, while shifting the starting point slightly inside the pixel square, then take the average of these values.
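That naive anti-aliasing can be sketched as follows. Here `render_from` is a hypothetical callable standing in for the whole per-pixel algorithm, evaluated at a jittered sub-pixel position:

```python
import random

def antialias(i, j, render_from, samples=4):
    # Average several evaluations, each shifted to a random
    # point inside the pixel square [i, i+1) x [j, j+1)
    total = 0.0
    for _ in range(samples):
        total += render_from(i + random.random(), j + random.random())
    return total / samples

# Example with a dummy renderer that just returns the x coordinate
value = antialias(2, 3, lambda x, y: x, samples=8)
```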

As a final challenge, it should be possible to combine this approach with the depth of field effect described in this earlier post.