When we take a photograph, the camera's sensor is exposed to light for the brief period while the shutter is open. The sensor consists of a large array of light-sensitive picture elements, or pixels. A typical sensor might have some 6000 pixels across by 4000 down, giving a total of 24 million elements to record the photographic image.
In order to detect colour, the array is covered with a mosaic of red, green and blue filters called a Bayer array (Fuji use a proprietary pattern). So, when the shutter opens, each light-sensitive element receives either the red, green or blue component of the image formed on the sensor. Note that there are twice as many green pixels as red or blue, because the human eye is most sensitive to subtle changes in green.
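The arrangement can be sketched in code. A minimal model of the standard Bayer layout, assuming the common RGGB tiling (the patch size here is purely illustrative):

```python
def bayer_filter(row, col):
    """Return which colour the filter passes at a given pixel (RGGB layout).

    The Bayer pattern repeats a 2x2 tile: one red, one blue, two green.
    """
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    else:
        return "G" if col % 2 == 0 else "B"

# Count the filter colours over a small 4x6 patch of the sensor.
counts = {"R": 0, "G": 0, "B": 0}
for r in range(4):
    for c in range(6):
        counts[bayer_filter(r, c)] += 1

print(counts)  # {'R': 6, 'G': 12, 'B': 6} -- twice as many green pixels
```

However large the patch, green always accounts for half the pixels, with red and blue a quarter each.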
Quantum mechanics tells us that light can be described as a flow of particles, photons. As the photons fall onto the sensitive surface of a pixel, they initiate the release of electrons, which build up an electrical charge within the pixel. Each pixel can only retain a certain electron charge and it is common to speak of the elements as picture “wells”, filling up with electrons as photons arrive on the surface. If a picture well becomes full, it can’t hold any additional charge and just remains full.
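The well-filling behaviour can be modelled in a few lines. This is a toy sketch, assuming an invented full-well capacity of 50,000 electrons and an invented quantum efficiency of 0.5; real figures vary by sensor:

```python
FULL_WELL = 50_000  # illustrative full-well capacity, in electrons

def collect_charge(photons, quantum_efficiency=0.5):
    """Electrons accumulated in one pixel well, clipped at capacity.

    Each arriving photon releases an electron with some probability
    (the quantum efficiency); a full well simply stays full.
    """
    electrons = int(photons * quantum_efficiency)
    return min(electrons, FULL_WELL)

print(collect_charge(40_000))   # 20000 -- well partly full
print(collect_charge(200_000))  # 50000 -- well full; further light is lost
```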
Once the shutter closes, the camera electronics read the pixels to build up the digital image. At each pixel, the electrical charge is amplified and the signal is passed to an analogue-to-digital converter, which converts the signal into a very precise digital value, typically of about 14 bits. Pixels in darker parts of the image will have received relatively few photons, will have a weak charge and a correspondingly low digital value. Conversely, pixels in a bright part of the image will receive more photons and generate a high digital value. Any pixel well that has become full will simply generate the maximum digital value and there is no way to tell how many more photons entered the pixel. (These will be “blown” highlights in the final image.) The electronics thus build up the digital image of light and dark areas in great detail and high precision.
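The read-out step can be sketched as a simple mapping from charge to a 14-bit number. The gain and well-capacity figures here are illustrative, not taken from any particular camera:

```python
MAX_DIGITAL = 2**14 - 1  # 16383, the largest 14-bit value
FULL_WELL = 50_000       # illustrative full-well capacity, in electrons

def digitise(electrons):
    """Map an electron count to a 14-bit digital value."""
    value = round(electrons / FULL_WELL * MAX_DIGITAL)
    return min(value, MAX_DIGITAL)  # a full well clips at the maximum

print(digitise(25_000))  # mid-tone pixel: roughly half of 16383
print(digitise(50_000))  # full well: 16383, a "blown" highlight
```

Any well at or above capacity produces the same maximum value, which is why detail in blown highlights cannot be recovered.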
Back in the days of film, any roll of film had a specific sensitivity to light ("film speed") as defined by the International Organization for Standardization (ISO). Film speed typically ranged from ISO 25 for a standard slow film to about 400 or more for a reasonably fast film. The ISO number for the film was selected on a dial on the camera and was used by the exposure meter to determine the required exposure. There was no way to change the film speed / ISO in the middle of a roll, as the sensitivity depended on the chemical composition of the emulsion. To change ISO, we had to physically change the film.
Now, in a modern digital camera, we can select from a wide range of ISO settings, even changing the setting from picture to picture. We tend to treat ISO as just one of the three selectable exposure settings: shutter speed, f/number and ISO. However, there is a fundamental difference between film ISO and the ISO on a digital camera. In film days the ISO was a true measure of the emulsion's chemical sensitivity to light. However, in a digital camera, when we change the ISO, we do not change the sensitivity of the sensor. A digital sensor always works at the same sensitivity. As photons fall onto the sensor, electrons are released according to the quantum efficiency of the sensor material. The resulting electrical signal is amplified and converted to a number as described above. The baseline sensitivity of this process tends to be equivalent to a film sensitivity of about 100 to 200. From there, a higher ISO can be simulated by increasing the amplification (see figure above) prior to digitising the result.
For example, suppose the baseline ISO of a camera is 100 and we take a photograph in the evening, but find that even with the aperture wide open (the smallest f/number), the predicted shutter speed for a normal exposure is too slow - let’s say 1/15 sec. So we select ISO 200, which raises the shutter speed to 1/30 sec. Now only half as many photons will fall onto each pixel as are needed for a normal exposure. The “wells” on the bright parts of the image, which would have been nearly full at 1/15 sec, are only half full at 1/30 sec. The camera then amplifies the signal according to the ISO (200) to get back to a “bright” reading. This is all well and good up to a point. But if we carry on and increase the ISO again, the exposure time gets shorter and the sensor has to work with even fewer photons. By ISO 1600 each pixel well in the bright areas of the image is using just 1/16th of its design capacity. The darker areas are, of course, having to work with even less light. At each stage the amplifier has to amplify the signal by an increasing amount to simulate a normal exposure.
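The arithmetic of this example is simple enough to tabulate: each doubling of ISO halves the exposure, leaves the wells half as full, and requires a matching doubling of the gain. A sketch, assuming a baseline ISO of 100 as in the example:

```python
BASE_ISO = 100  # baseline ISO, as in the example above

def well_fraction_and_gain(iso):
    """Fraction of well capacity used in the brightest areas, and the
    amplifier gain needed to compensate, relative to the baseline ISO."""
    gain = iso / BASE_ISO
    return 1.0 / gain, gain

for iso in (100, 200, 400, 800, 1600):
    fraction, gain = well_fraction_and_gain(iso)
    print(f"ISO {iso:4d}: wells {fraction:.4f} full, gain x{gain:g}")
```

The last line of output confirms the figure in the text: at ISO 1600 the brightest wells reach only 1/16th (0.0625) of their capacity, and the signal must be amplified sixteen-fold.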
Why is this a problem? First, there is the inefficiency of the measurement process; second, there is the issue of noise. The inefficiency arises because it is easier to get an accurate measurement of the strong signal when a pixel is working at its design point than of the weak signal when it is working with restricted light. Any errors in measuring the weak signal are magnified by the amplification that follows.
Secondly, there is always a certain amount of electronic noise in the sensor and the associated circuits. As the ISO increases and the image signal reduces at each pixel, the separation between the signal and the underlying noise diminishes. The amplifier boosts both the signal and whatever noise has been introduced upstream of the amplifier. The result is an increasingly noisy image. See the following article for a more detailed discussion of ISO and noise.
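A rough sketch of the effect, assuming an invented fixed read noise of 10 electrons and ignoring photon shot noise for simplicity. The signal halves with each one-stop ISO increase while the noise floor stays put, so the signal-to-noise ratio steadily worsens; the amplifier scales both together and cannot restore it:

```python
READ_NOISE = 10.0       # illustrative electronic noise floor, in electrons
BASE_SIGNAL = 20_000.0  # illustrative electron count at base ISO

def snr_at(stops):
    """Signal-to-noise ratio after the given number of one-stop ISO increases."""
    signal = BASE_SIGNAL / 2**stops
    return signal / READ_NOISE

for stops in range(5):  # base ISO, then four one-stop increases
    signal = BASE_SIGNAL / 2**stops
    print(f"+{stops} stops: signal {signal:7.0f} e-, SNR {snr_at(stops):6.1f}")
```

Four stops up from base ISO, the ratio has fallen sixteen-fold, which is why high-ISO images look increasingly noisy.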
For a discussion of the "equivalence" of sensor formats, see equivalence.