About bit depth

While using a program like Adobe Photoshop you may have noticed an image being referred to as having 8 bits / channel or 16 bits / channel. You would be correct in presuming that there is a benefit to the format with the higher number, but it is still important to understand not only *why*, but *when*, you should make such a change.

First let's be clear about what "bits / channel" (or "bits per channel") means. It has to do with how a computer stores information in binary. The smallest piece of information in a computer is either a 0 or a 1; this single on / off option is a "bit".

Now if we were to use 8 of them then how many different combinations could we possibly end up with?

2 * 2 * 2 * 2 * 2 * 2 * 2 * 2 = 256

As you'll recall from the section concerning colors in digital format, 256 is the default range of values for any given red, green, or blue channel, and 256 values across 3 channels creates 16,777,216 possibilities. This is how computer engineers ended up deciding upon 8 bits per channel: it just so happens that 8 bits, as opposed to 6 or 10, provides enough values per channel that gradients look smooth to the human eye without wasting storage on differences we can't perceive.

(If you do the math, 6 bits per channel would only produce 262,144 possible colors. Those two bits make quite a difference!)
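The arithmetic above is easy to check for yourself. A quick sanity check in Python (nothing Photoshop-specific, just the powers of two):

```python
# Number of distinct values a single channel can hold at a given bit depth.
def channel_values(bits):
    return 2 ** bits

# Colors available when red, green, and blue each use that many bits.
def total_colors(bits):
    return channel_values(bits) ** 3

print(channel_values(8))   # 256 values per channel
print(total_colors(8))     # 16,777,216 colors
print(total_colors(6))     # 262,144 colors
```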

So if 16 MILLION colors is enough to display most images then why do we have higher bit formats? Because while 16 million is more than enough to *display* an image, it is not always enough to allow high quality *editing* of an image.

Even if your source image starts in an 8-bit format (the format most consumer-grade cameras will save as) it can be beneficial to convert it to a 16-bit format while editing so that programs like Photoshop can have a greater "memory" for the values on canvas.

You can try this on any given photo in a program with a "levels" adjustment option to see for yourself. Just follow these steps (an example can be seen below).

  1. In your chosen paint program open a photo twice (or duplicate a photo onto a new canvas) so you can compare the two.
  2. Leave one image at a bit depth of 8 bits / channel and change the other to 16 bits / channel (in Photoshop this can be changed via "Image > Mode > X Bits/Channel").
  3. Now do the following to *both* canvases.

  4. Apply a Levels adjustment (in Photoshop, "Image > Adjustments > Levels"). Change the black output level to 124 and the white output level to 132.
  5. Click OK. The image should appear as a hazy grey.
  6. Apply another level adjustment, this time changing the *input* levels. Again use 124 for the black level and 132 for the white.
  7. Click Ok. The second adjustment should "reverse" the change and the image should more closely resemble the original.

By using the same figures for the output and input each time we apply the Levels adjustment, we are compressing the dynamic range from the *possible* pure black and white down to a narrow band of grey, and then stretching it back out to the original range again.
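The compress-then-stretch round trip above can be simulated numerically. The sketch below is a simplified model of a Levels adjustment (not Photoshop's exact math): at 8 bits every intermediate value is rounded to one of 256 steps, so squeezing a gradient into the 124–132 band and stretching it back leaves only a handful of distinct tones, while 16-bit rounding loses almost nothing.

```python
import numpy as np

def levels_output(img, lo, hi, maxval):
    # Compress the full range into [lo, hi] (the "output levels" sliders).
    return img / maxval * (hi - lo) + lo

def levels_input(img, lo, hi, maxval):
    # Stretch [lo, hi] back out to the full range (the "input levels" sliders).
    return np.clip((img - lo) / (hi - lo) * maxval, 0, maxval)

# A smooth 0..255 gradient, standing in for a softly shaded photo region.
gradient = np.linspace(0, 255, 256)

# 8-bit: every intermediate result is rounded to one of 256 whole numbers.
step1 = np.round(levels_output(gradient, 124, 132, 255))
eight_bit = np.round(levels_input(step1, 124, 132, 255))

# 16-bit: ~65,536 steps per channel, so the rounding barely loses anything.
scale = 65535 / 255
g16 = gradient * scale
step1 = np.round(levels_output(g16, 124 * scale, 132 * scale, 65535))
sixteen_bit = np.round(levels_input(step1, 124 * scale, 132 * scale, 65535)) / scale

print(len(np.unique(eight_bit)))    # only a few distinct tones survive -> banding
print(len(np.unique(sixteen_bit)))  # nearly all 256 tones survive
```

The handful of surviving tones in the 8-bit version is exactly the banding described below.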

Once done you should see a distinct difference between the 8 and 16 bit versions. It should be similar to the examples shown here.

Our initial image.
The initial 8-bit image after applying the raised black / lowered white output fields of a Levels adjustment. You can see how narrow the range is just by how little variation is left in the image (you have to look closely, but colors are still there).
After applying the same values to the equivalent *input* properties of the Levels adjustment we get something similar to the original image, but which shows obvious banding (the clear divisions between what used to be smooth colors).
But if we transform the image format to 16 bits before applying the adjustments then the final product is almost indistinguishable from the original.

As you can see, the 8-bit version has been degraded. This is because Photoshop had to compress the image into a far smaller range of values for the 8-bit image than for the 16-bit one when we applied the Levels adjustment. Even when we couldn't see the difference with our own eyes, the computer dedicated much more memory to remembering the color values in the 16-bit version.

Typically you will never apply an adjustment that severe to an actual working image, and the Levels adjustment is just one possible example, but it illustrates the usefulness of processing images at a higher bit depth. Generally you'll want to use 16-bit images when you might use any of the following...

  • Destructive adjustments (levels, curves, or channel mixer options).
  • Color correction.
  • Gamma / exposure correction.

It's possible to go even further than 16 bits. Some programs allow you to edit images at up to 32 bits per channel. These are known as HDR images.

HDRI Photography

Now if you really want to use *extremely* high quality images with absurd bit depths then it is also possible to use 32 bits per channel. This option results in so many color possibilities that individual values are usually stored as floating point numbers, which make use of decimal points, as opposed to the integer values used at lower bit depths. Using decimals allows the computer to track values "between" whole numbers down to the hundredths, even thousandths, and allows for an incredible amount of tonal accuracy.

An "HDRI" (High Dynamic Range Image) is any image that stores values as floating point numbers instead of integers of a lower range.
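The integer-versus-float distinction is easy to see in a couple of lines. This sketch uses numpy array types as a stand-in for how an editor might hold pixel values in memory (illustrative only, not any specific file format):

```python
import numpy as np

# Integer storage snaps every value to a whole number...
tones_int = np.array([12.25, 12.5, 12.75]).astype(np.uint16)
print(tones_int)        # [12 12 12] -- the three tones collapse into one

# ...while 32-bit float storage keeps the in-between values.
tones_float = np.array([12.25, 12.5, 12.75], dtype=np.float32)
print(tones_float)      # [12.25 12.5  12.75] -- each tone stays distinct
```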

"HDR" images are typically the format of choice among image editing professionals with specific needs, for example:

  • Photographers who have a need to change exposure after the photo is taken.
  • Artists working on 3D animations who may need to actually change the exposure of an image during an animation.

But why was this extra dense image data format developed to begin with? Why not just use 16 bits, when that appears to retain enough information during even the most severe color manipulation? Because there's a problem that is related to, but separate from, manipulation artifacts, and that problem is this:

While you can tell a computer to use more memory to RETAIN information while EDITING an image you can not CREATE information in an EXISTING image.

In photography "exposure" refers to how much light is let into the camera when a photo is taken. More light means brighter images, less light means darker images.
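The "you cannot create information" point can be shown numerically. In this deliberately simplified model (not camera-accurate), two highlight tones that differ in the real scene both clip to the sensor's maximum in an overexposed shot, and no later adjustment can tell them apart again:

```python
import numpy as np

# Two bright highlight tones that differ in the real scene.
scene = np.array([1.2, 1.5])    # linear light; 1.0 = the sensor's clipping point

# An overexposed capture clips both to the same maximum value.
captured = np.clip(scene, 0.0, 1.0)
print(captured)                 # [1. 1.] -- the difference is gone

# Darkening afterwards cannot separate them again.
darkened = captured * 0.5
print(darkened)                 # [0.5 0.5] -- still identical
```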

To acquire that extra image data in a photograph you need one of two things.

  1. A super nice camera. You don't have this.
  2. Multiple exposures that can be merged together.

The second approach is what we'll be explaining in a bit, but first let's look at a very basic example to help visualize the problem we're talking about.

Here we have an image where the camera used a high exposure (open aperture). This makes the cloth of the couch light and clear but makes the sunlight on the floor too bright to see details within it.
Here we have an image where the camera used a low exposure (closed aperture). This makes the cloth of the couch so dark you can barely make out the details within it, but allows you to now see the detail even in the brightest areas of the floor.
Here we have the product of a merged photo set. We now have detail in both the light and dark areas of the image. Remember that to the human eye there is no difference between 8 or 32 bit imagery. What you see here is the ability of a 32 bit composite to be down-sampled to 8 bit while retaining detail across the entire image.
You've probably seen images, especially landscapes, with what looks like an excessive sharpening filter added to them. This is the result of a high "Detail" setting in the HDR toning options. Overuse of the "Detail" setting during merging can result in a surreal atmosphere, but typically just makes the image processing obvious.

This is the basic idea behind HDR images. It's difficult to fully illustrate the strength of using HDR images through the 8-bit images on this page, but there's certainly a joy in simply adjusting the gamma of an image and seeing parts that were previously not visible pop into existence.

Merging images

The basics of how multiple exposure merging works

Before combining your source images into a single HDRI image you should check a few things about them by opening them individually.

  • Make sure they all have the same focus.
  • Make sure the white balance is consistent across all of them.
  • Make sure the "steps" in brightness are even. Two images of similar brightness will add no new brightness values but may introduce blur.

To combine the individual source images into an HDRI in Photoshop, use "File > Automate > Merge to HDR Pro" and select your exposure set.
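Under the hood, the core idea of exposure merging can be sketched in a few lines. This is a deliberately naive weighted average, assuming the shots are already aligned; real merge tools use calibrated camera response curves, but the principle is the same: trust each exposure most where its pixels are well exposed.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Naive HDR merge: average the exposures in linear light, weighting
    mid-tone pixels (well exposed) more than pixels near pure black or
    pure white (under- or overexposed)."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        # Triangle weight: peaks at mid-grey, falls to 0 at black/white.
        w = 1.0 - np.abs(img - 0.5) * 2.0
        acc += w * (img / t)        # divide by exposure time -> scene radiance
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-8)

# Two fake one-pixel "photos" of the same scene point:
dark  = np.array([0.25])   # short exposure (t = 1)
light = np.array([0.50])   # twice the exposure (t = 2) of the same radiance
print(merge_exposures([dark, light], [1.0, 2.0]))   # -> [0.25] scene radiance
```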

Panorama images


Short version:
Go to "File > Automate > Photomerge".