Here’s a rudimentary example. Let’s say we store video using a digital file format that represents the brightness of a pixel across a range of values from 1 to 100: pitch black is 1 and full white is 100. Now consider an image sensor that can discern 200 steps of light level, from 1 to 200. How do we squeeze those 200 steps into the file format’s range of 1 to 100?
One approach would be to distribute the steps evenly: two sensor steps into each value in the file format. For example, a value of 1 or 2 from the image sensor would be stored as 1 in the file format, 3 or 4 would be stored as 2, and so on, until 199 and 200 were stored as 100. This kind of interpretation is linear because it’s the same across the entire range of brightness levels. While this system would work, it doesn’t take into account our vision system’s ability to notice more subtle changes in the bright areas of the image.
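To make the arithmetic concrete, here’s a toy sketch of that linear mapping in Python. This is purely illustrative; no real camera works in these numbers.

```python
def linear_store(sensor):
    """Map a sensor value (1-200) to a stored value (1-100),
    two sensor steps per stored value."""
    return (sensor + 1) // 2  # 1,2 -> 1; 3,4 -> 2; ... 199,200 -> 100
```

Every pair of sensor values collapses into one stored value, no matter where it falls in the brightness range.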
But what if we were to divide that range of 1-100 into three groups (1-20, 21-40 and 41-100) and use a non-linear system? In the 1-20 section, we could represent the first 80 values from the image sensor: four sensor values for each stored value. For example, a value stored as 1 (pitch black) comes from an image sensor value of 1, 2, 3 or 4. In the middle section (21-40), each stored value covers three sensor values, so a stored value of 30 comes from 108, 109 or 110.
That means we’re throwing away differences. We are, but not on the bright end of the scale. The last group ends up with a one-to-one relationship: a stored value of 90 comes from an image sensor value of 190; likewise, 91 comes from 191 and so on.
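The whole three-group scheme can be sketched the same way. Again, this is a made-up mapping to illustrate the idea, not the math any real log format uses: the shadows pack four sensor steps per stored value, the midtones three, and the highlights keep a one-to-one relationship.

```python
def log_store(sensor):
    """Map a sensor value (1-200) to a stored value (1-100)
    using the toy three-group non-linear scheme."""
    if sensor <= 80:                        # shadows: 4 sensor steps per stored value
        return (sensor + 3) // 4            # 1..4 -> 1, ..., 77..80 -> 20
    elif sensor <= 140:                     # midtones: 3 sensor steps per stored value
        return 20 + (sensor - 78) // 3      # 81..83 -> 21, ..., 138..140 -> 40
    else:                                   # highlights: one-to-one
        return sensor - 100                 # 141 -> 41, ..., 200 -> 100
```

Notice the trade: stored values 1-40 (shadows and midtones) cover 140 of the sensor’s 200 steps, leaving the remaining 60 stored values free to record every distinct highlight step.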
Now, this isn’t a literal example of how the interpretation happens in cameras recording log. In real life, there’s much more nuance. And by “nuance” I mean math. Thankfully you don’t need to know the math to deal with log in the edit.
And that’s what I’ll talk about next time.