What is Color Bit Depth?
Bit depth is a piece of technical jargon that has come to us in the age of digital cinema. It is part of that cloud of tech specs that sound so very important on paper and that get hyped up in every press release - “Now with 10-bit internal recording!” All this marketing has really distorted and clouded our understanding of the word. Which is a shame, because at its heart, bit depth does not have to be terribly complicated or scary.
So to all you filmmakers, videographers, and curious outsiders scratching their heads at it all, know that you are not alone. And know that at the end of this article, you will be fully armed for any conversation that starts with “yeah, but does it shoot 10-bit?”
Bit Depth, Technically Speaking
Bit depth, as a technical measurement, describes the number of color values per channel - Red, Green, and Blue. When a camera shoots 8-bit, it is recording 2^8, or 256, unique values per channel: 256 shades of green, 256 shades of blue, and 256 shades of red, all mixed together to form an image. Combined across the three channels, that works out to roughly 16.7 million possible colors.
It’s important to note that each pixel is sampling all three colors. The sensor is still receiving the full visible spectrum, but the processor has to compress that glut of information into a usable image format that still preserves detail - not an easy task. A 10-bit image comes out to 1,024 unique values per channel, and 12-bit brings us all the way to 4,096. You gain a lot more subtlety and nuance when working in 10- or 12-bit, but the amount of data to encode climbs steeply with every added bit. There’s a reason that smaller, more consumer-oriented cameras only shoot 8-bit.
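If you want to sanity-check these numbers yourself, the arithmetic is simple. Here is a quick sketch in Python (the function names are just for illustration):

```python
# Unique values per channel is 2 raised to the bit depth;
# total representable colors is that figure cubed (R x G x B).
def values_per_channel(bit_depth):
    return 2 ** bit_depth

def total_colors(bit_depth):
    return values_per_channel(bit_depth) ** 3

for bits in (8, 10, 12):
    print(f"{bits}-bit: {values_per_channel(bits):,} values/channel, "
          f"{total_colors(bits):,} total colors")
# 8-bit: 256 values/channel, 16,777,216 total colors
# 10-bit: 1,024 values/channel, 1,073,741,824 total colors
# 12-bit: 4,096 values/channel, 68,719,476,736 total colors
```

Notice how each extra pair of bits multiplies the per-channel count by four - that is where the "exponential" growth in data comes from.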
The analogy isn’t perfectly accurate, but picture a collection of crayons. 8-bit color would be like a healthy 64-pack: you have more than enough options on the face of things, but every so often you find a need that you just can’t fill. Maybe there is no right shade of blue, or you find that there are only two purples for some reason. 10-bit color is like having the 120-pack. At this point, you’re pretty well set for all of your day-to-day needs. Rare is the situation where you will need more colors than you have available. 12-, 14-, or even 16-bit color is like having that crayon-melting machine that lets you create a nearly infinite number of color mixes.
When is 8-Bit Best Used?
8-bit video is the lowest bit depth that can reasonably be expected to look “realistic,” without obvious banding or strangely crushed colors. It will not survive a heavy color grade, leaves less room for correction, and does not capture the richness of the full color spectrum as well as 10- or 12-bit. That makes it a less-than-ideal candidate for shooting log footage, and it performs best when the final look can be captured in-camera. This is about as stripped-down as a video signal can be while still being viable.
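To see why a strong grade punishes 8-bit footage, here is a toy Python model (entirely illustrative - real grading pipelines are far more sophisticated). It quantizes a dark gradient at a given bit depth, boosts the shadows four-fold, and counts how many distinct display values survive. Fewer survivors means coarser steps, which show up on screen as banding:

```python
# Toy model: quantize a shadow gradient, apply a 4x "grade," and count
# how many distinct 8-bit display values remain afterward.
def surviving_levels(bit_depth, stretch=4.0):
    levels = 2 ** bit_depth
    # A dark gradient occupying the bottom quarter of the signal range.
    shadow_codes = range(0, levels // 4)
    # Grading boost: scale up and re-quantize to 8-bit display values.
    graded = {min(255, int(code / levels * stretch * 255)) for code in shadow_codes}
    return len(graded)

print(surviving_levels(8))   # 64 distinct display values, in steps of ~4: banding
print(surviving_levels(10))  # 255 distinct display values: smooth gradient
```

The 8-bit source simply doesn't have enough shadow codes to fill in the stretched range, so the gradient turns into visible stair-steps - exactly the failure mode that makes 8-bit log footage risky.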
While we think of it as a low-quality format today, it is worth remembering that early digitally-captured feature films were 8-bit. George Lucas famously pushed digital cinema into the mainstream with his Star Wars prequel trilogy, shot on HDCAM with an 8-bit 1080p signal. Many large-budget shows and films like Saturday Night Live, House, and Captain America have used 8-bit footage from the Canon 5D series for years and found its strengths more convincing than its shortcomings. As long as you have reasonable expectations about grading your footage in post-production, 8-bit video can be a totally viable option.
8-Bit Cameras Include:
• MiniDV/HDV cameras
• Canon 5D series
8-Bit Codecs Include:
• AVCHD
• H.264 (consumer 8-bit profiles)
When is 10-Bit Best Used?
In the past few years, 10-bit video has been working its way into cheaper and cheaper cameras, most recently the Panasonic Lumix GH5. It is undeniably a ‘stronger’ image that represents colors more precisely and is much more tolerant of change in post. It is the standard bit depth in entry-level professional cameras such as the Sony FS7, the Canon C300 Mark II, and the Panasonic EVA-1. That is not to say that high-end cinema cameras like ARRIs or REDs don’t support 10-bit recording, only that they are capable of so much more.
10-bit recording is broadcast standard in many ecosystems, and is a mandatory minimum for many commissioning platforms such as Netflix. This is both to meet the demands of HDR displays and to future-proof the footage being captured.
10-Bit Cameras Include:
• Sony FS7
• Canon C300 Mark II
• Panasonic EVA-1
• Panasonic Lumix GH5
10-Bit Codecs Include:
• ProRes 422
• ProRes 422 HQ
When is 12-Bit or Higher Used?
A camera that can shoot 12-, 14-, or 16-bit video is a rare treat indeed. These bit depths place a premium on post-production flexibility and rich, accurate color. They deliver a noticeably higher-quality moving image that will be much more malleable in your post-production workflow. To maximize their advantages, these high bit depths are often captured in RAW formats that largely preserve the information reported by the sensor with little to no compression or loss. The massive storage requirements of these formats have kept them mostly in the hands of large studio productions, but cameras like the Canon C200 with its Cinema RAW Light codec are beginning to bring even 12-bit RAW into a price range that is accessible to independent owner-operators and filmmakers.
Shooting continuously, or even fairly consistently, in these large formats requires an amount of infrastructure, storage space, and processing power that naturally puts it out of reach of the ‘average’ user. These sorts of workflows are largely reserved for studio films and, more recently, very large YouTube creators.
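A bit of back-of-the-envelope arithmetic shows where the storage burden comes from. This sketch (illustrative, assuming uncompressed RAW with one sample per photosite; real RAW codecs compress well below this) estimates the raw data rate:

```python
def uncompressed_rate_mb_s(width, height, bits_per_sample, samples_per_pixel, fps):
    """Rough uncompressed data rate in megabytes per second."""
    bits_per_frame = width * height * bits_per_sample * samples_per_pixel
    return bits_per_frame * fps / 8 / 1_000_000

# 4K (4096 x 2160), 12-bit, one sample per photosite (Bayer sensor), 24 fps
rate = uncompressed_rate_mb_s(4096, 2160, 12, 1, 24)
print(f"{rate:.0f} MB/s")  # roughly 318 MB/s, which is over 1 TB per hour
```

At that rate a single hour of footage fills more than a terabyte of media before you even get to backups - which is exactly why these workflows demand serious infrastructure.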
It is worth noting that even these extremely high-quality formats are trickling their way down to consumer levels. Blackmagic made major waves by bringing 12-bit RAW to a ridiculously affordable price range with their Pocket Cinema Camera, and now Canon has brought a more compressed 12-bit RAW option to the table with the C200. It may not be too terribly long before 12-bit is the new broadcast standard and 10-bit is seen as merely passable.
High Bit Depth Cameras Include:
• ARRI cinema cameras
• RED cinema cameras
• Blackmagic Pocket Cinema Camera
• Canon C200
High Bit Depth Codecs Include:
• REDCODE RAW
• ARRIRAW
• CinemaDNG
The bit depth that you record in doesn’t define your production value. A well-lit, well-shot scene will look gorgeous no matter the recording format. But we live in a world where grading is becoming more and more accessible and popular - so your camera had better be able to keep up. 8-bit cameras will get the job done, but anything shooting in 10-bit will hold up a lot better once you take it into the post-production suite. And if you have the good fortune, not to mention the storage capacity, to shoot at a higher bit depth, then your horizons open up so much more.