
Digital Camera Sensors and ISO

When taking a photograph, the camera sensor is exposed to light for the short period while the shutter is open. The sensor consists of a large array of light-sensitive picture elements, or pixels. A typical sensor might have some 6000 pixels across by 4000 pixels down, giving a total of 24 million elements to record the entire photographic image.

Quantum mechanics tells us that light can be described as a flow of particles, photons. As the photons fall onto the sensitive surface of a pixel, they initiate the release of electrons, which build up an electrical charge within the pixel. Each pixel can only retain a certain electron charge and it is common to speak of the elements as picture “wells”, filling up with electrons as photons arrive on the surface. If a picture well becomes full, it can’t hold any additional charge and just remains full.

[Image: Camera sensor]

Once the shutter closes, the camera electronics read the pixels to build up the digital image. At each pixel, the electrical charge is amplified and the signal is passed to an analogue-to-digital converter, which converts the signal into a very precise digital value, typically of about 14 bits. Pixels in darker parts of the image will have received relatively few photons, so they carry a weak charge and produce a correspondingly low digital value. Conversely, pixels in a bright part of the image will receive more photons and generate a high digital value. Any pixel well that has become full will simply generate the maximum digital value, and there is no way to tell how many more photons entered the pixel. (These will be “blown” highlights in the final image.) The electronics thus build up the digital image of light and dark areas in great detail and high precision.
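
As a rough sketch of this read-out step (illustrative only: the 60,000-electron full well matches the example figure used later in this article, and the simple linear mapping is an assumption, not the specification of any particular camera), the conversion to a 14-bit value might look like this in Python:

    # Illustrative sketch: convert a pixel's electron count to a 14-bit digital value.
    FULL_WELL = 60000          # assumed full-well capacity, in electrons
    MAX_DN = 2**14 - 1         # largest 14-bit digital number (16383)

    def electrons_to_dn(electrons):
        """Scale the electron count so that a full well maps to the maximum value."""
        dn = round(electrons / FULL_WELL * MAX_DN)
        return min(dn, MAX_DN)  # a full (or overfull) well just reads maximum: a "blown" highlight

    print(electrons_to_dn(30000))   # a mid-bright pixel reads roughly half scale (8192)
    print(electrons_to_dn(80000))   # an overfull well reads 16383; the extra photons are lost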

ISO

Back in the days of film, any roll of film had a specific sensitivity to light (“film speed”) as defined by the International Organization for Standardization (ISO). Film speed typically ranged from ISO 25 for a standard slow film to about 400 or more for a reasonably fast film. The ISO number for the film was selected on a dial on the camera and was used by the exposure meter to indicate the required exposure. There was no way to change film speed / ISO in the middle of a roll of film, as the sensitivity depended on the chemical composition of the emulsion on the film. To change ISO, we had to physically change the film.

Now, in a modern digital camera, we can select from a wide range of ISO settings, even changing the setting from picture to picture. We tend to treat ISO as just one of the three selectable exposure settings: shutter speed, f/number and ISO. However, there is a fundamental difference between film ISO and the ISO on a digital camera. In film days the ISO was a true measure of the emulsion's chemical sensitivity to light. However, in a digital camera, when we change the ISO, we do not change the sensitivity of the sensor. A digital sensor always works at the same sensitivity. As photons fall onto the sensor, electrons are released according to the quantum efficiency of the sensor material. The resulting electrical signal is amplified and converted to a number as described above. The baseline sensitivity of this process tends to be equivalent to a film sensitivity of about 100 to 200. From there, a higher ISO can be simulated by increasing the amplification prior to digitising the result.

For example, suppose the baseline ISO of a camera is 100 and we take a photograph in the evening but find that even with the widest aperture (smallest f/number), the predicted shutter speed for a normal exposure is too slow - let’s say 1/15 sec. So we select ISO 200 to increase the shutter speed to 1/30 sec. Now, only half as many photons will fall onto each pixel as are needed for a normal exposure. The “wells” on the bright parts of the image, which would have been nearly full at 1/15 sec, are only half full at 1/30 sec. The camera then amplifies the signal according to the ISO (200) to get back to a “bright” reading. This is all well and good up to a point. But if we carry on and increase the ISO again, the exposure time will get shorter and the sensor has to work with even fewer photons. By ISO 1600 each pixel well in the bright areas of the image is using just 1/16th of its design capacity. The darker areas are, of course, having to work with even less light. At each stage the amplifier has to amplify the signal by an increasing amount to simulate a normal exposure.
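
The arithmetic of this example can be laid out in a few lines of Python (a sketch only; the base ISO of 100 is taken from the example above):

    # Each doubling of ISO halves the exposure, so the pixel wells fill less
    # and the signal must be amplified more before digitisation.
    BASE_ISO = 100
    for iso in [100, 200, 400, 800, 1600]:
        fill_fraction = BASE_ISO / iso     # fraction of the well used in the bright areas
        amplification = iso / BASE_ISO     # gain needed to simulate a normal exposure
        print(f"ISO {iso:4d}: wells {fill_fraction:.0%} full, signal amplified x{amplification:.0f}")

At ISO 1600 the loop prints wells 6% full and a 16x amplification, the 1/16th of design capacity described above.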

Why is this a problem? First, we have the inefficiency of the measurement process and, second, we have the issue of noise. The inefficiency arises because it is easier to get an accurate measurement of the strong signal when a pixel is working at its design point than of the weak signal when working with restricted light. Any errors in measuring the weak signal will be increased by the amplification process.

Noise

The noise in a digital image from a camera arises from two sources: the camera and the light itself. Remembering that the sensor is counting the number of electrons (generated by the arrival of photons), it is easy enough to see that random electron activity in the sensor, the amplifiers and associated electronics will produce a level of interference with the image signal. This internally generated noise is known as "read noise" and varies between camera and sensor designs. Improvements are steadily being made as technology advances.

The amount of read noise is not proportional to the amount of light falling onto the sensor and, for the sake of argument, we will say that the read noise is generated independently of the light signal. What is important to us as photographers is the extent to which the true light signal dominates the measurements that are output to our image file. This domination of the signal over the noise is described as the Signal to Noise Ratio (SNR), which is the signal strength divided by the noise strength.

SNR = Signal / Noise

As long as the SNR is high, the noise should not be significant in our final image, on screen or in print. However, in the darkest areas of a scene where the pixels receive relatively few photons, the signal may be only slightly stronger than the noise. The digitisation process can't distinguish between what is true signal and what is noise and so the information recorded on the picture file will be corrupted. In the worst case the signal strength may drop down as low as the noise, at which point no valid measure of colour and light level can be made at all.

The second source of noise, and somewhat more interesting, is the optical “shot noise” - that is the noise within the light entering the lens. Our own eyes and brain perceive the colour and brightness of an object as unchanging when viewed under steady light. However, light is not a uniform flow of photons but a random cascade. It is easy to picture this if we think of rain; whereas on a large scale the fall of water might be considered “steady”, if we look more closely, the arrival of each water droplet is random. The same applies to photons falling onto a pixel. If we photograph a uniform surface, there will be random variations in the number of photons falling on each pixel. So each pixel will measure the brightness of the subject slightly differently and if the differences are great enough, we will see this variation in the final image as shot noise.

Now we need to turn to mathematics. If we photograph a uniform surface and a large number of photons fall onto each pixel, the random variation in photon counts from pixel to pixel will follow a statistical “Poisson distribution”. The “shot noise” will then be the square root of the mean count of photons at the pixels. For more detail, see https://en.wikipedia.org/wiki/Shot_noise

Noise = √S

Where S is the count of photons at a pixel.
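
This is easy to check numerically. The following Python sketch (the mean photon count and number of pixels are arbitrary choices for the simulation) draws Poisson-distributed photon counts for a uniform patch and compares the measured variation with √S:

    import numpy as np

    # Simulate a uniform grey patch: every pixel expects the same mean photon count,
    # but the actual arrivals are random, so the counts vary from pixel to pixel.
    mean_photons = 10000
    counts = np.random.poisson(mean_photons, size=1_000_000)

    print("measured noise (standard deviation):", counts.std())           # close to 100
    print("square root of the mean count      :", np.sqrt(mean_photons))  # exactly 100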

From the discussion of read noise, we know that we are more interested in how well the true signal stands out above the noise. The Signal to Noise Ratio (SNR) is the signal divided by the noise:

SNR = S / √S = √S

Bringing this back down to Earth, we can say that the greater the number of photons a pixel collects, the more the signal will rise above the random shot noise and the cleaner our photo will look. To give some rough idea of the magnitude of the numbers here, a digital SLR might have a maximum electron count at base ISO in the tens of thousands. So, for example, the pixels in a bright area of an image at base ISO might generate 60,000 electrons, giving a healthy SNR of over 240.

Back to ISO

We can now relate this back to ISO. We saw above that if we take a photo at dusk and increase our ISO to 200, 400, 800, 1600 and so on, what we are actually doing is underexposing the sensor by a factor of 2 each time we double the ISO value. At each step we halve the number of photons entering the lens and falling onto each pixel and therefore halve the signal to be measured. As the signal reduces, the Signal to Noise Ratio goes down with it - and if we continue to increase ISO and thereby reduce the signal, eventually the noise will become significant and degrade the final image.

This applies to the bright areas of our image - but consider the dark areas where the signal may already be 10 stops or more lower. In our example with an SNR of 240 in the bright areas, we would have an SNR of just 8 in the dark shadows at base ISO. Even modest increases in the ISO setting will bring out significant noise in the shadows.
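
Using the same illustrative numbers (60,000 electrons in the bright areas at base ISO and shadows 10 stops darker), a short calculation shows how quickly the shadow SNR falls as the ISO is raised:

    import math

    bright_signal = 60000                   # electrons in a bright area at base ISO
    shadow_signal = bright_signal / 2**10   # a shadow 10 stops darker: about 59 electrons

    for stops_up in range(4):               # base ISO, then 1, 2 and 3 stops higher
        s = shadow_signal / 2**stops_up
        print(f"+{stops_up} stops of ISO: shadow signal {s:5.1f} e-, shot-noise SNR {math.sqrt(s):4.1f}")

At base ISO the shadow SNR is about 8, as quoted above, and every two stops of extra ISO halve it.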

Advantage of Large Pixels

We have seen how the Signal to Noise Ratio for shot noise depends on the number of photons arriving at a pixel during the exposure.

The brightness (more correctly, illuminance) of an image falling on an area of the sensor is the number of photons arriving per unit area (per square micron, say). So for a given scene and exposure, the number of photons falling onto a pixel is proportional to the area of the pixel. Also, the capacity of a pixel to hold electrons (and thereby to count photons) is proportional to its surface area.

Let us compare a full-frame sensor with an APS-C sensor, both with the same number of pixels. The full frame camera has a larger sensor: 36 mm wide vs 22.4 mm wide. So each pixel in the full-frame sensor is 1.6 times bigger on each axis. The surface area of a full-frame pixel is therefore 2.56 times the area of an APS-C pixel. If we photograph the same scene at the same exposure, hence same illuminance on the sensor, the full frame pixels will receive 2.56 times as many photons as their APS-C counterpart. In doing so, the full-frame sensor will have benefited by an increase in signal to noise ratio of √2.56 (i.e. 1.6) and will produce the cleaner image.
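
The same comparison in numbers (a minimal sketch, assuming the 6000-pixel-wide sensor of the earlier example and the sensor widths quoted above):

    import math

    pixels_across = 6000                        # assumed, as in the example sensor earlier
    ff_pitch = 36.0 / pixels_across * 1000      # full-frame pixel pitch: 6.0 microns
    apsc_pitch = 22.4 / pixels_across * 1000    # APS-C pixel pitch: about 3.7 microns

    area_ratio = (ff_pitch / apsc_pitch) ** 2   # about 2.56 times the photons per pixel
    snr_gain = math.sqrt(area_ratio)            # about 1.6: the shot-noise SNR advantage

    print(f"pixel pitch {ff_pitch:.1f} vs {apsc_pitch:.1f} microns, "
          f"area ratio {area_ratio:.2f}, SNR gain {snr_gain:.2f}")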

Even if we compare cameras with the same size of sensor, we see that manufacturers have a trade-off to make when choosing size of pixel. On the one hand, reducing pixel size allows more pixels to be designed into the sensor, increasing spatial resolution. On the other hand, increasing pixel size offers better performance at the pixel level and a potentially cleaner image.

For example, a top-of-the-range Canon 1D X Mark II has 20 million pixels of size 6.6µm, compared to the much less expensive Canon 5D Mark IV with 30 million pixels of 5.4µm. Similarly, the Sony A7S has just 12 million pixels of 8.4µm, giving much better low-light performance than its stablemate, the A7 III, with twice as many pixels but of only 6µm. Choosing larger pixels means less resolution but potentially better image quality. For marketing purposes, manufacturers tend to advertise the number of megapixels, but we can see that this is only half the story.

Equivalence of sensor formats

Imagine taking a head-and-shoulders portrait in a studio using a full-frame DSLR and a 50mm lens. In the section on depth of field, we saw that the depth of field (DOF) is given by:

DOF ≈ 2 N c s² / f²

where N is f/number, c is acceptable circle of confusion, f is focal length and s is distance to subject.

What happens if we now switch to an APS-C camera and wish to frame the photo in the same way, i.e. using the same field of view and the same distance from the subject? The APS-C camera has a smaller sensor - 22.4mm wide vs 36mm, giving a crop-factor of 1.6. So we need to use a 31mm lens.

At this stage, let’s consider the effect of the diameter of the lens aperture (d). We know that the f/number is the focal length divided by the aperture diameter, i.e. N = f / d. (But see footnote below.) So we can replace N in the formula for DOF as follows:

DOF ≈ 2 (f / d) c s² / f²

Which simplifies to:

DOF ≈ (2 s² / d) × (c / f)

Now, as we switch from the full-frame to the APS-C camera, the circle of confusion and the focal length required to maintain the same field of view both reduce by the crop factor (1.6). So the term ( c / f ) is unchanged. As we haven’t moved the camera, the distance (s) is unchanged, and we see that the depth of field depends only on the diameter of the lens aperture (d). The bigger the aperture diameter, the smaller the depth of field.

So, when switching from one camera to another, if we maintain the same field of view and the same aperture diameter, we will also maintain the same depth of field. The two cases are optically “equivalent”.
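
The equivalence can be confirmed with the simplified DOF formula above. In this sketch the subject distance, aperture diameter and full-frame circle of confusion (0.030 mm is a commonly quoted figure) are illustrative values only:

    # Same aperture diameter, same field of view: the DOF comes out the same
    # on full frame and on APS-C.
    def dof(d, c, f, s):
        """Approximate depth of field in mm: DOF = 2 * c * s^2 / (d * f)."""
        return 2 * c * s**2 / (d * f)

    s = 2000.0      # subject distance: 2 metres
    d = 18.0        # aperture diameter: 18 mm (roughly a 50mm lens at f/2.8)
    crop = 1.6
    c_ff = 0.030    # full-frame circle of confusion in mm

    print("Full frame, 50mm :", dof(d, c_ff, 50.0, s), "mm")
    print("APS-C, 31mm      :", dof(d, c_ff / crop, 50.0 / crop, s), "mm")

Both lines print the same depth of field, because scaling c and f down by the same crop factor leaves the (c / f) term unchanged.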

The relationship between aperture diameter and depth of field is quite interesting. When considering what format of camera to buy or to use in a particular scenario, it is helpful to understand the role of aperture diameter. In effect, if we want a shallow depth of field to separate subject from background, we need a physically big lens, regardless of camera format. We need to think in terms of diameter rather than f/number.

As many people switch from full-frame DSLRs to mirrorless APS-C or Micro 4/3rds format, they generally benefit from an increase in depth of field. However, if we want to retain a shallow depth of field for artistic reasons, then we need a physically big lens, which partially defeats the point of using a small format (small size, less weight).

Referring this back to f/number, the following table shows three lenses that would have the same aperture diameter and are "equivalent" in angle of view and depth of field characteristics.

Micro 4/3     APS-C     Full frame
25 mm         31 mm     50 mm
f/1.4         f/1.8     f/2.8

Notice that in this table I have started with a Micro 4/3rds "standard" 25mm f/1.4 lens and shown the APS-C and full-frame equivalents. However, if we started with a "standard" full-frame 50mm f/1.4, we could not find equivalent lenses of the same diameter at the smaller sensor sizes; there aren't any Micro 4/3rds 25mm f/0.7 lenses.
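
The table can be generated directly from the aperture-diameter idea. This sketch starts from the Micro 4/3rds 25mm f/1.4 lens, as in the table, and assumes crop factors of 2.0 (Micro 4/3) and 1.6 (APS-C) relative to full frame:

    # Keep the angle of view and the aperture diameter constant across formats.
    start_focal, start_fnum, start_crop = 25.0, 1.4, 2.0   # the Micro 4/3 25mm f/1.4
    diameter = start_focal / start_fnum                     # about 17.9 mm

    for name, crop in [("Micro 4/3", 2.0), ("APS-C", 1.6), ("Full frame", 1.0)]:
        focal = start_focal * start_crop / crop             # same angle of view
        fnum = focal / diameter                             # same aperture diameter
        print(f"{name:10s}: {focal:5.1f} mm   f/{fnum:.1f}")

Rounding to the nearest standard values reproduces the 25mm f/1.4, 31mm f/1.8 and 50mm f/2.8 of the table.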

Footnote - definition of f/number. Strictly, the f/number is the focal length divided by the lens entrance pupil diameter, N = f / d, where the entrance pupil is the image of the mechanical aperture as seen through the front of the lens. However, for simplicity we can loosely refer to lens aperture instead of lens entrance pupil diameter.