Simple Physically-Based Film Grain Simulation: An Experiment




What is this about?
In this article you will find an exploration of simulating film grain, complete with examples, an algorithm breakdown, and source code.
Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature...
― Brian Eno, A Year With Swollen Appendices
Film photography is far from a perfect high-fidelity representation of reality.
Film was a staple of 20th-century photography and videography; most of the images humanity has seen are a result of film.
Its legacy lives on as a holy grail of sorts, a warm stylized medium that brings us away from reality and into the world of cinema and photography.
Making a digital image look like a film image is a time-honoured tradition, so much so that even the contrast curve in ACES is designed to be filmic.
But while a lot of work has been done to make the colours and values film-like, how much attention has simulating the structure of film been given?
Film grain is the very mechanism with which film gives us its image, and is well deserving of some careful crafting.

Bask in its glorious grainy glory.
The Buzzword of "Film Simulation"
Fujifilm cameras have a feature called "film simulation", which gets you some nostalgic-looking colours based on their classic film stock (and Classic Chrome is actually secretly Kodachrome).
But even though this is called "film simulation", is it really?
I suspect that the camera mostly just applies a LUT, and that the grain option mostly just multiplies by a predefined grain texture.1

This is not to belittle the amazing work Fujifilm has done on creating this wide array of LUTs that do a good job of getting those film colours in! They even likely have some of that LUT generation code on-device, seeing as each film simulation has tweakable settings that you can even save out and share as "recipes". Fujifilm is the only digital camera brand that actually gives you several methods to get interesting colours -- not realistic colours -- out of your digital camera. There is a strong following, a community sharing their film recipes, and the liberation of having the equivalent of developed film snaps without slaving away at Lightroom for each photo. The colours are unique, interesting, and just like film: "Good Enough".
Which brings me to the point, is this simulation, or is it really emulation?
The application of a LUT and a multiplied grain texture mimics the look of film, making it more of an emulation.
I think there is plenty of room to also research actual film simulation, where we take the physical properties of film and simulate the interaction of light with the film model.
To understand how we can approach simulating film, let's take a look at how film works physically.
How Film Works
The most important thing about film is the grain.
Grain is not a visual artifact; it is the actual physical mechanism of image formation.
Grain, not pixels


In a digital camera we have an even grid of tiny light sensors.
These correspond to the digital pixels in the image file2, and their regularity has an effect on the image.
When the ISO is very high and the scene is very dark, we do see some digital "grain" noise, but it feels different from film grain: noisier, particularly in the dark areas of the image.
For film, we have a very large scattering of light-sensitive silver halide crystals.
This is the grain: it is something physical, not just a visual phenomenon.
When light hits the film, more or fewer grain crystals are activated depending on the strength of the light.
Film Grain is Binary

The interesting thing is that grains don't have a grayscale value.
If you were to look extremely closely at film, you'd notice that it is actually entirely a binary image.
The grains that were activated by the light get fixed in place during development, and the grains that were not activated go translucent.
Blurring
How do we go from a "binary" grain image to grayscale?
Simply put, the grains are extremely tiny.
At any normal viewing distance the mass of binary grains blurs together, and we perceive the average as a continuous brightness value.
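To make that concrete, here is a toy C program (mine, not from the article's code) that averages a patch of binary grains into a single gray value:

```c
#include <stdio.h>

/* Toy illustration (not from the article's code): 64 binary grains,
 * each either activated (1) or translucent (0). Averaging the patch
 * yields the continuous brightness value we perceive. */
int main(void)
{
    int activated = 0;
    for (int i = 0; i < 64; ++i) {
        /* i * 37 % 64 permutes 0..63, so exactly 40 grains activate */
        activated += (i * 37 % 64) < 40;
    }
    /* 40 of 64 binary grains read as a mid gray of 0.625 */
    printf("perceived gray: %.3f\n", activated / 64.0f);
    return 0;
}
```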
An Interesting Paper
What sparked this experiment is a very interesting and well-written paper called A Stochastic Film Grain Model for Resolution-Independent Rendering (Alasdair Newson, Julie Delon, Bruno Galerne).
The key ideas of this paper are:
- Using the physical properties of film to model the simulation
- Overall flow is: Sample the image, place binary grain at a sub-pixel level, filter the binary image to get a final output
- Input and output resolution are independent, can be used for upscaling
- Grain is placed to ensure input/output gray level remains the same
Simplified Experiment: Examples
Here are some examples of a very simplified version of what is presented in this paper in action.
See below for a full breakdown.
The noise texture tiling is noticeable in the very even colour of the sky.


How it Works
I took the general idea from the original paper (reconstructing the image with binary grain, then blurring), and simplified it to use a texture as the input for the noise instead of computing it on-the-fly.
With this simplification, it is easy to implement and fast enough to be more or less realtime. It is implemented in a compute shader, but could just as easily be a fragment shader.
A large departure from the paper is that this method does not ensure that the gray level matches the input.
In fact I used an off-the-shelf noise texture that I did not control for at all.
This means that the contrast curve is largely modified and not separately controllable.
For a real solution, ensuring the contrast stays the same is important for controllability. On the other hand, from an exploratory perspective it is interesting to see how it reacts with an arbitrary noise texture.
Blue Noise Texture

A blue noise texture was used, from the [free pack in Christoph Peters' wonderful blog post](http://momentsingraphics.de/BlueNoise.html).
In this case, it was LDR_RGBA_0.png, an 8-bit noise texture with 4 channels at 512x512 resolution.
It may be interesting to experiment with HDR noise textures, to better preserve the dynamic range of 16-bit RAW images.
Unlike most texture-based approaches, the noise is not multiplied, but is in fact stepped by the brightness value.
This is the key point of this approach: it causes the image to be reconstructed as a binary grain image, rather than merely modified to be grainier.
Applying the texture just once simply results in a dithered-looking image, which is not yet the final result.
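As a minimal sketch of that single-application case (my own C fragment, not the article's HLSL; the function and parameter names are mine), assuming a square, tiling noise texture:

```c
/* Threshold each pixel of a grayscale image once against a tiling noise
 * texture. The output is binary (0 or 1) per pixel, so the image comes
 * out looking dithered rather than grainy. */
void threshold_once(const float *image, const float *noise, float *out,
                    int width, int height, int noise_size)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float value = image[y * width + x];
            float n = noise[(y % noise_size) * noise_size + (x % noise_size)];
            /* equivalent of HLSL step(n, value): the grain activates
             * once the brightness reaches the noise threshold */
            out[y * width + x] = (value >= n) ? 1.0f : 0.0f;
        }
    }
}
```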
Making the grain much smaller and employing filtering is the next step:
Per-Pixel Tiling & Blurring


Per-pixel tiling of the noise texture, applied as binary grain. Subdivided 8x8.
Each pixel is subdivided into a grid (ranging from 2x2 to 8x8), sampling a chunk of the noise texture.
The noise value is stepped by the pixel value, essentially "dissolving" away the noise and leaving a binary image.
The sub-pixel grain is then averaged back into a pixel value.
The noise texture is of course much larger than a single pixel's sub-grid, so neighbouring pixels sample different chunks of it, giving us a grain pattern that applies across the whole image.
For colour textures, this happens per-channel.
The same noise texture is used, but offset slightly, to prevent colour photos from looking overly random.
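Putting those steps together, here is a CPU-side sketch of the kernel (a rough C approximation of my own; the actual implementation is an HLSL compute shader, and the names and the fixed 8x8 subdivision here are mine):

```c
#define SUBDIV 8 /* the article experiments with 2x2 up to 8x8 */

/* Compute the grainy output value for one pixel: subdivide it into a
 * SUBDIV x SUBDIV grid, threshold each sub-pixel cell's noise sample by
 * the pixel's brightness, then average the binary grains back down. */
float grain_pixel(float value, const float *noise, int noise_size,
                  int px, int py)
{
    float sum = 0.0f;
    for (int sy = 0; sy < SUBDIV; ++sy) {
        for (int sx = 0; sx < SUBDIV; ++sx) {
            /* map this sub-pixel cell onto the tiling noise texture */
            int nx = (px * SUBDIV + sx) % noise_size;
            int ny = (py * SUBDIV + sy) % noise_size;
            float n = noise[ny * noise_size + nx];
            /* step(n, value): binary grain, activated or not */
            sum += (value >= n) ? 1.0f : 0.0f;
        }
    }
    /* averaging the binary sub-pixel grains recovers a gray value */
    return sum / (float)(SUBDIV * SUBDIV);
}
```

For colour, this would be called once per channel, with a small per-channel offset added to the noise coordinates as described above.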
Layers

A single layer: gradients have less detail and contrast is higher.
Multiple layers: more detail is preserved.


The grain crystals in a layer of film are actually arranged three-dimensionally.
The grains are stacked on top of each other.
To simulate this, multiple noise textures are layered on top of each other.
Each subsequent noise layer is stepped against a progressively dimmer version of the brightness value, simulating the upper layers blocking some of the light.
The final formula then becomes: grain += step(noise_layer[i], value * weight[i])
For convenience and performance, the noise texture sampled has four noises in the respective RGBA channels.
In some film stocks the multiple layers of grain may be of varying light sensitivity, which usually means different grain sizes (finer for slower layers, coarser for faster ones).
This is an interesting point to try to simulate as well, by supplying differently-sized grain textures per layer.
Layering blue noise (which is evenly spaced) gives us some of the clumping that film grain has, without resorting to purely random white noise.
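Expanding the formula above into a sketch (my own C fragment; the per-layer weights here are placeholder values, as the article does not specify them), with the four channels of the noise texture acting as the four layers:

```c
/* Layered grain for one sub-pixel cell: each of the four noise channels
 * acts as one layer of grain, and deeper layers are stepped against a
 * progressively dimmed brightness, as if upper layers blocked some light. */
float grain_layered(float value, const float noise_rgba[4])
{
    /* placeholder attenuation per layer (not the article's values) */
    static const float weight[4] = { 1.0f, 0.75f, 0.5f, 0.25f };
    float grain = 0.0f;
    for (int i = 0; i < 4; ++i) {
        /* grain += step(noise_layer[i], value * weight[i]) */
        grain += (value * weight[i] >= noise_rgba[i]) ? 1.0f : 0.0f;
    }
    /* normalize back to 0..1 (my choice of normalization) */
    return grain / 4.0f;
}
```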
Base Colour


Developed film is completely transparent in areas that are too underexposed to have affected any grain crystals.3
That being said, colour negative film stocks usually have a dye applied giving us a base colour.
This base colour and how it shows up (or doesn't) on the paper/scan is somewhat part of the interesting look that colour film gives us.
Though this experiment was about specifically grain and not colour modifications, I snuck in a subtle (and not particularly accurate) base colour emulation.
It's just changing the black point to a slightly tinted dark blue (the opposite of the orange tint of a negative).
In reality the base colour is supposed to be accounted for when printing or scanning, but it's hard to do so perfectly, and there are enough janky scans nowadays that a strong base colour gives us more of what is expected from a film look.
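As a sketch of that black-point tweak (my own C fragment with made-up constants; the article only says "a slightly tinted dark blue"):

```c
typedef struct { float r, g, b; } RGB;

/* Lift the black point toward a slightly tinted dark blue, the inverse
 * of a negative's orange mask. The tint values are hypothetical. */
RGB apply_base_colour(RGB c)
{
    const RGB base = { 0.02f, 0.03f, 0.06f }; /* made-up dark blue */
    RGB out;
    /* remap each channel from [0, 1] to [base, 1] */
    out.r = base.r + c.r * (1.0f - base.r);
    out.g = base.g + c.g * (1.0f - base.g);
    out.b = base.b + c.b * (1.0f - base.b);
    return out;
}
```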
Sample Source Code
I was originally testing out this technique inside of a RAW photo editor project that I was working on,
but I have extracted all of the relevant bits into a minimal C program for your viewing pleasure.
The program does all the calculation on the GPU via the lovely SDL3 GPU, with the shader written in HLSL and compiled to SPIR-V for the Vulkan backend.
Binaries are available, and the pre-compiled shader code is also in the repo in case you want to compile but don't have DXC on-hand.
(There are no Mac binaries because I don't have a modern Mac and don't know much about cross-compiling HLSL to MSL)
See kanzwataru/filmgrain-simplified on GitHub.
Conclusion & Next Steps
A simple technique, born out of a slightly more complex paper, with lots of room to expand.
Different noise textures, different layering, various biasing, HDR noise: there are lots of ways to spice it up.
Maybe even using SDF textures for more distinct grain shapes, which kind of brings us closer to the original paper.
Overall, this technique is more about reconstructing the image with film grain rather than overlaying film grain. This distinction is important, because in this case it is behaving more as a simulation.
The model is that the pixel data is the light passing into the camera, and the simulated grain is the film reacting to it.
I hope this writeup is helpful to some, and feel free to check out the source code!
Appendix: Why not Machine Learning?
Taking an image and applying a "style" to it has been a staple of machine learning demos for many years, and it has only picked up steam since.
But for serious production usage, predictability and fine-grained (wink) control is most important.
Machine learning is a wide field, and there may be ample opportunities for applying its techniques in ways that are not dependent on a large corpus of copyrighted training data.
I would certainly like to see more work being done on methods that aren't style transfer, which is essentially leveraging a black box that has ingested thousands of pre-existing images and having it magic the image to be similar to the training data in the chosen aspect.
The main takeaway is that it is worth exploring simulation of the film model with fine-grained control, rather than the imitation (or emulation) of existing outputs of film.
Just as physical film produced images by way of physical phenomena, "digital film" could produce a new generation of images with a simulation of the mechanism,
but without being limited to mimicking the results of pre-existing film imagery.
The techniques used in implementing this can be anything, including ML.
References
- A Stochastic Film Grain Model for Resolution-Independent Rendering (Alasdair Newson, Julie Delon, Bruno Galerne)
- Film Grain, Resolution, and Fundamental Film Particles (Tim Vitale)
- Understanding Film Grain and Digital Noise in Photography (Alexander Kladoff)
- How Film Works (Neil Oseman)
1. From this grain writeup, it does seem that there isn't anything particularly fancy going on. There is a grain texture... and it is "applied" (probably multiplied). More info here.
2. Technically, the sensors in a digital camera do not correspond one-to-one with pixels. In a pure RAW scanout of the sensor, each "pixel" only has one colour channel, and the organization differs from camera to camera. It is difficult to map these to RGB pixels, because often there are twice as many green pixels as there are pixels of the other channels. Reconstructing RGB pixels from this arrangement is called demosaicing; you can read more about it in the paper Efficient, High-Quality Bayer Demosaic Filtering on GPUs.
3. In fact film has many transparent spots, wherever there isn't a grain. How many activated grains are in one spot is called the density of the film.