A Filmmaker's Guide to Sensor Sizes and Formats
As a cinematographer and owner of a rental house, I often run into questions about sensor sizes and formats. Questions like: "Will this lens work with my camera?" Or, "Will this lens cover 8K?" Or, "Will this lens cover full frame?" Or, "What will this full frame lens look like on my camera's Super 35 sensor?" My hope is that this article will help answer questions like these, and help you understand some of the concepts that surround sensor size and lens choice.
MORE FORMATS THAN EVER
With the release of more video cameras with larger sensors like the ARRI Alexa LF, Panavision DXL2, RED MONSTRO, and the Sony Venice, we have more choices than ever when it comes to formats and lens options. However, it's important to know the differences as well as what results should be expected before selecting your project’s sensor size and lenses.
There are many formats to choose from. Digital or film? If film, which format? If digital, what camera and sensor size? There is no single "best" format that is right for every project or situation. When selecting your format or sensor size for a project you need to consider not just the style, but also your budget, your shoot schedule, crew size, camera size and weight, whether you need to shoot on primes vs. zooms, spherical vs. anamorphic, etc.
Motion picture film formats are fairly straightforward: 8mm, 16mm, 35mm, 65mm, 70mm. These names are based on the physical width of the film used to capture the images. Today, with digital cameras, we have more sensor sizes than ever: 1/2”, 2/3", Micro Four Thirds, Super 35, DX, APS-C, Full Frame, VistaVision, etc. There are lenses for every format, and the lens market is bigger and more confusing than ever. Also, not all lenses work with all formats, and not every format is the right choice for every project.
Sensor size is the physical size (area, not number of photosites or pixels) of a camera’s image sensor, usually measured in mm width x height. Sensors like the ones found in a Canon 5D, Sony a7S II or the ARRI Alexa LF, as well as traditional 35mm still photography film, all have areas that measure roughly 36x24mm. For the remainder of this article this format will be referred to as “Full Frame.” In the chart below the red rectangles represent the physical size of many common sensors and formats as well as the number of pixels represented by “HD” or “Ks” (the “HD” or “K” number of course does not apply to film).
For film, resolution describes how much detail can be resolved, usually measured in line pairs per millimeter. For digital sensors, we use the term Number of Recorded Photosites or Number of Effective Photosites. This is the number of individual photosites (often referred to as pixels) on a given sensor that contribute to the final image, usually expressed as horizontal x vertical. For instance, 1920 x 1080 is the standard pixel count for HD cameras. These days we abbreviate these numbers using "Ks" (2K, 4K, 6K, 8K, etc.), each K standing for roughly 1,000 photosites on the horizontal axis. For instance, 4K represents roughly 4,000 photosites along the horizontal axis of a 4K image sensor. That being said, a common “4K” pixel count is actually 3,840 x 2,160 photosites (UHD). In still photography, a digital camera's photosite count is often measured in “megapixels.” One megapixel = 1 million “pixels.” The sensor in the Canon 5D Mark IV is 30.4 megapixels, or roughly 30,400,000 photosites, delivering images of 6,720 x 4,480 effective photosites, which could also be called 6.7K.
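The arithmetic behind these labels is simple. Here is a minimal sketch in Python, using the 5D Mark IV's published 6,720 x 4,480 effective photosite count:

```python
# Relating photosite counts, "K" labels, and megapixels.
h, v = 6720, 4480               # effective photosites, Canon 5D Mark IV (published spec)
total_photosites = h * v        # 30,105,600
megapixels = total_photosites / 1_000_000
k_label = h / 1000              # each "K" is roughly 1,000 horizontal photosites

print(f"{megapixels:.1f} MP, roughly {k_label:.1f}K")  # 30.1 MP, roughly 6.7K
```

(The marketed 30.4 MP figure counts total photosites, which is slightly higher than the effective count that ends up in your image.)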
FIELD OF VIEW / ANGLE OF VIEW
Field of view, or more accurately, "horizontal angle of view," is a measurement in degrees along the horizontal axis of just how much of the world you will see when looking through a lens from a given position. For example, an ultra-wide-angle lens like a 16mm fisheye (one designed for the Full Frame format) will have a 180° horizontal field of view on a camera with a Full Frame sensor. If you placed your camera and 16mm fisheye lens in a square room, standing in the doorway in the center of one wall, your camera would see the wall across from you as well as the walls to your left and to your right. A super telephoto lens like a 500mm will have a very narrow field of view of only about 5° on a Full Frame camera, and you would only see a small piece of the wall directly across from you.
Field of view is determined by the focal length of the lens you are using as well as the sensor size of the camera you pair with it. A particular focal length will give different fields of view when paired with different sized sensors. For instance, a 50mm lens on a Full Frame camera will give you a horizontal field of view of about 40°, but on the smaller sensor of an APS-C camera, the same 50mm lens will give you roughly a 26° field of view, showing you a narrower slice of the world. The difference in field of view is extreme if you are using a camera with an even smaller Super 16 size sensor, like the one in the original Blackmagic Pocket Cinema Camera or a Digital Bolex D16, versus the larger Full Frame sensor of a camera like the Sony a7S II. In the images below, you can see that using a 50mm lens on a Super 16 sized sensor gives you an extreme close-up, but with a Full Frame sensor, it's a medium close-up, even though the distance from camera to subject stays exactly the same. This effect of one focal length producing different fields of view when paired with different sensor sizes is sometimes referred to as "crop factor.”
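For ordinary rectilinear (non-fisheye) lenses, horizontal angle of view follows directly from focal length and sensor width. A quick sketch, using approximate sensor widths of 36mm for Full Frame and 23.5mm for APS-C (Nikon DX):

```python
import math

def horizontal_aov(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal angle of view in degrees for a rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(horizontal_aov(50, 36.0))   # 50mm on Full Frame: ~39.6 degrees
print(horizontal_aov(50, 23.5))   # 50mm on APS-C: ~26.5 degrees
```

Fisheye lenses use a different projection, so this formula doesn't apply to them.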
Crop factor is the ratio of one camera’s sensor size to that of another camera’s sensor of a different size. Crop factor is most commonly used to compare the sensors of Full Frame cameras to smaller formats like APS-H, APS-C, Nikon’s DX format, etc. Crop factor is often used when shooting on these smaller formats to help people find the focal length that gives the same field of view that can be achieved with a particular lens on a camera with a Full Frame sensor.
For example, photographers and videographers accustomed to the field of view they see when looking through a 50mm lens mounted on a Full Frame Nikon D810 ("FX" format in Nikon speak) might wonder what focal length will give them the same field of view on a Nikon D500 which has a smaller "DX" format sensor (which is similar in size to APS-C and Super 35 sized sensors). There is a formula to figure it out. To find the focal length to give you the same field of view when using the smaller sensor of Nikon DX cameras, you use a crop factor of 1.5. So to find the right lens, you divide 50mm by 1.5, which gives you about 33.3mm. Since you probably don't have a 33.3mm lens, using a 35mm prime lens on a DX format Nikon camera like the D500 will give you roughly the same field of view as a 50mm lens on your Full Frame Nikon D810.
Another example: if you want to go the other direction, and see what focal length you need on your FX format Nikon D810 to give you the same field of view you get when you use your Nikon 24mm f1.4 on your DX format Nikon D500, you now multiply 24mm x 1.5, which gives you 36mm. Since you probably don’t have a 36mm lens, to get roughly the same field of view on your Nikon D810, use a 35mm lens. "Crop factor" is really just another way to help understand the changes in field of view caused by using cameras with different sensor sizes.
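The arithmetic from these two examples can be sketched in a few lines of Python. Crop factor here is computed from sensor diagonals; the 23.5 x 15.7mm DX dimensions are approximate:

```python
FULL_FRAME_DIAGONAL = (36**2 + 24**2) ** 0.5   # ~43.3mm

def crop_factor(sensor_w_mm: float, sensor_h_mm: float) -> float:
    """Crop factor relative to Full Frame, based on sensor diagonals."""
    return FULL_FRAME_DIAGONAL / (sensor_w_mm**2 + sensor_h_mm**2) ** 0.5

dx = crop_factor(23.5, 15.7)   # Nikon DX: ~1.53 (commonly rounded to 1.5)
print(50 / dx)   # FF 50mm -> DX equivalent: ~32.7mm (reach for a 35mm prime)
print(24 * dx)   # DX 24mm -> FF equivalent: ~36.7mm (reach for a 35mm prime)
```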
A lens' image circle refers to the light (the image) projected out of the rear side of a lens, for our purposes onto a camera’s sensor or onto film. A lens projects a circular image, not rectangular. We simply crop the circular image into rectangular shapes with various aspect ratios. The diameter of the image circle is measured in mm. It is very helpful to know a lens’ image circle size, because if you know the image circle size, you know how large of a sensor a lens can "cover." For instance, in order for a lens to illuminate (or cover) the entire sensor of a Full Frame camera like a Canon 5D or the Sony Venice, it would need an image circle with a diameter of at least 43mm. A lens' image circle size isn't information that is always easy to find, especially with older lenses, so research or testing is recommended to verify if a lens can completely illuminate certain sensors.
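Whether a lens "covers" a sensor comes down to one comparison: the lens' image-circle diameter versus the sensor's diagonal. A sketch of that check, using a ~31mm image circle as a stand-in for a typical Super 35 lens (an assumption; real lenses vary, so always test):

```python
def required_image_circle(sensor_w_mm: float, sensor_h_mm: float) -> float:
    """Minimum image-circle diameter needed to cover a sensor: its diagonal."""
    return (sensor_w_mm**2 + sensor_h_mm**2) ** 0.5

def covers(lens_circle_mm: float, sensor_w_mm: float, sensor_h_mm: float) -> bool:
    return lens_circle_mm >= required_image_circle(sensor_w_mm, sensor_h_mm)

print(required_image_circle(36, 24))   # Full Frame needs ~43.3mm
print(covers(31.0, 36, 24))            # a ~31mm Super 35 image circle: False
print(covers(31.0, 24.9, 14.0))        # a 16:9 Super 35 crop (~28.6mm diagonal): True
```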
In the images below it's easy to observe this lens' complete image circle. This lens was originally designed to cover Super 16mm film. When used on a camera with a Full Frame sensor we are able to see the lens' entire image circle. The red rectangle represents a 16x9 crop in the Super 16mm format, which will end up being the final image represented by the photo on the right.
As mentioned earlier, a lens' image circle determines which sensors it can illuminate or "cover." With all the different formats and lenses to choose from, it can get overwhelming. A lens will cover the sensor size and film format it was designed for, as well as any formats smaller than that. Most lenses can't cover formats larger than the one they were engineered for. If a lens was designed for Super 35 and APS-C (which are close in size), it will successfully cover the sensors of Super 35 and APS-C as well as smaller sensors like Micro Four Thirds and Super 16. However, it more than likely will not cover larger sensors like the ones in Full Frame cameras, let alone even larger sensors like the one in the ARRI Alexa 65. There are some exceptions. While wide angle lenses often will not cover formats larger than the ones they were designed for, prime lenses 50mm and longer will often cover sensors larger than their intended format. This is a result of the optical designs of these focal lengths. For example, a 50mm Cooke Speed Panchro made in the 1950s to cover Academy 35mm (22mm x 16mm) will actually cover larger formats including Full Frame (36mm x 24mm). Another interesting exception: the Tokina 11-20mm f2.8 zoom, which was designed to cover sensors like APS-C and Super 35, will actually cover Full Frame sensors from about 14mm to 20mm! This is exciting because it is not common for wide angle lenses to cover beyond their intended format.
To add to the confusion, it has become popular to measure sensor size in "Ks." And so it has become common for people to say a lens “covers” 4K, 6K, or 8K, which is flawed. Since Ks refer to the number of photosites on a sensor, not the physical size of an image sensor, using Ks in this way is problematic. There is no constant relationship between photosite count and sensor size, because the physical size of the photosites varies from sensor to sensor. For example, the 8K sensor in a RED Helium is much smaller than the 8K sensor in a RED Monstro, despite both having the same number of photosites, and both being cameras designed and built by RED. The individual photosites in the Helium are physically smaller than the individual photosites in the Monstro.
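You can see this by computing photosite pitch: sensor width divided by horizontal photosite count. The sensor dimensions below are RED's published figures, quoted approximately:

```python
def photosite_pitch_um(sensor_width_mm: float, horizontal_photosites: int) -> float:
    """Approximate photosite pitch in microns (ignores gaps between photosites)."""
    return sensor_width_mm / horizontal_photosites * 1000

# Both are 8K-wide (8192-photosite) sensors, but very different physical sizes:
print(photosite_pitch_um(29.90, 8192))   # RED Helium: ~3.65 microns
print(photosite_pitch_um(40.96, 8192))   # RED Monstro: ~5.0 microns
```

Same "K," very different sensor sizes, which is why "covers 8K" tells you nothing about coverage.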
"A 50...is a 50...is a 50..."
All lenses of the same focal length, regardless of what format they were designed for, "see" the world in the same way in terms of magnification; what varies is the size of their image circles, depending on the format they were designed to cover. By that I mean a 50mm lens designed to cover a camera that shoots 16mm film "sees" the world in the same way as a 50mm lens designed to cover a Full Frame camera in terms of magnification. The difference between the two is how much of the world they are able to see and project onto a camera's sensor. In the images below, the same scene was shot 3 times with the same camera, which has a Full Frame sensor. The subject stayed in the same place. The camera never moved (always 4' 6" from her eyes to the sensor) and the aperture stayed at T4 for all three lenses.
As you can see, the magnification of the scene is the same. Meaning: the size of the woman's face is the same, the size of the doorway and the objects in the background are the same. And because all three lenses were set to T4, the depth of field, and size of the out of focus background’s blur circles (often referred to as bokeh) are the same with all three 50mm lenses. The difference between the 50mm lens designed for Super 16, the 50mm lens designed for Super 35 and the 50mm lens designed for Full Frame, is the size of each lens' image circle and therefore how much of the world you are able to see.
(NOTE: of course there are also differences between lenses of the same focal length in regards to color, contrast, resolving power, distortion, character of bokeh, etc. One of the reasons I chose to use monochromatic images for these examples, is to remove as many of those differences as possible.)
Here is another way to illustrate this. In the examples below, you can see how three 25mm lenses (sorry, one is a 24mm) designed for different formats, shot on a camera with a Super 16 size sensor, at the same T stop will produce images that have nearly the same field of view, the same magnification, same compression, the same depth of field, and the same amount of background blur. Therefore a 25...is a 25...is a 25. These three lenses were designed to cover different sensor sizes, but because they were all designed to at least cover a Super 16 size sensor, they give us virtually identical shots in the Super 16 format. The field of view of each shot is nearly identical, and since all three set-ups were shot at T2.8, depth of field and background blur is also nearly identical between the three.
CROPPING IN ON YOUR SENSOR
A director of a project might say, “I really want to shoot this project Full Frame.” The DP of the project owns a Sony a7S II, which has a Full Frame sensor. Perfect. Then the producer of the project says, “I was able to get a great deal on ARRI/Zeiss Ultra Primes!” That’s great news for the budget, but if the DP doesn’t know the right information about those lenses, or doesn’t do research or doesn’t test them out before the shoot, he or she will find out the hard way that those lenses were not designed to cover Full Frame. Luckily, that camera has a very useful "center crop" feature. So that production could still use the Ultra Primes. They would just shoot using a smaller part of the sensor which is roughly the size of the Super 35mm format, and they would be recording 1080P HD instead of 4K.
Now more and more digital cameras allow you to use a smaller area of the sensor. “Cropped sensor” formats are available on cameras like the ARRI Amira, Canon C300 MKII, Sony F5, F55, FS7, Panasonic Varicam LT, pretty much any RED camera, and many more. There are plenty of benefits to using a smaller area of the camera's sensor. For example, with RED cameras, using a smaller area of the sensor means you have access to higher maximum frame rates. Another advantage: if you are shooting a documentary and most of it will be shot handheld, you may want a lightweight zoom lens that covers a wide range of focal lengths. And since you may have to depend on available light, you might want a lens with a fast T stop. In that case a zoom lens designed for the Super 16mm format might be perfect, since they are often lightweight and small, and tend to have fast T stops.
Also, using a smaller area of the image sensor means smaller file sizes, which means you may need fewer cards for your camera, and fewer hard drives to store all the data.
ARE LARGER-FORMAT LENSES THE ONLY LENSES YOU'LL EVER NEED?
Lenses designed for larger formats have advantages. Lenses like the Leitz Thalia primes are rehoused still photography lenses, designed to cover a very large 60mm x 60mm format. Therefore these lenses will cover huge sensors like the one in the ARRI Alexa 65, as well as any sensor or format smaller than that, which is pretty much every motion picture format that exists! One could come to the conclusion that the only lenses you may ever need are a set of lenses like these, because they will work on any current camera format, right? It’s not that simple.
For instance the Leitz Thalia 24mm has a maximum aperture of T3.6, which compared to prime lenses designed for Super-35 or Full Frame, which can be as fast as T1.5, T1.4, or even T1.3, is a somewhat slow lens requiring more light. What if you have a project that will be shot on an Ursa Mini Pro, shot mostly at night that relies upon practical lighting or available light? Since the Ursa Mini Pro performs better at lower ISOs, you decide that you need lenses that can open up to T2 at least, but T1.4 would be even better. If that’s the case, larger format lenses like the Thalia primes might not be the best option, and it might be better to get something like Zeiss Super Speeds, which have a maximum T stop of T1.3. Both the Thalia 24mm and the Zeiss Super Speed 25mm will give you almost identical fields of view, but the Zeiss Super Speed requires less light, and has the ability to really blur your background relative to your subject (often referred to as shallow depth of field) should you choose to open up to wider apertures.
It’s also important to note, the widest lens in the Thalia set is 24mm. When shooting on Super 35 or smaller formats, it’s very common to shoot focal lengths wider than 24mm to achieve a wider field of view. So if you only had that set of Thalias, you couldn’t shoot any ultra-wide shots on cameras with smaller sensors. Leitz didn’t need to design any focal lengths wider than 24mm for the Thalia lens line because on the formats they were designed for, a 24mm lens has an extremely wide field of view.
A 24mm T3.6, shot at T3.6 on an Alexa 65 can provide a wide field of view that still has shallow depth of field. When shooting on Super 35, to get a shot with the same field of view and depth of field, you’d need a lens that was roughly 13.5mm and T1.8. So depending on the look you’re going for and the size of the sensor you are using, the Thalia 24mm T3.6 may leave you looking for wider focal lengths with faster T stops. So as amazing as the Leitz Thalia lenses are on an Alexa 65, they might not be the best choice for projects using smaller sensors.
THE ULTIMATE LENS SET...
Then why doesn't someone just design large format lenses that cover the huge Alexa 65 sensor, have a maximum aperture of T1.3, and come in focal lengths much wider than 24mm? Then you could buy one set of primes that you could use on every camera and on every job! Physics is one of the big reasons why it’s difficult to make lenses like that. For example, a 14mm lens that could cover the Alexa 65's sensor at T1.3 would be very large, and very heavy. It would also be very difficult to actually design and build. The individual glass elements would be enormous.
Another challenge in designing these “ultimate lenses” would be cost. The cost to design and manufacture such lenses would make the resulting lenses so expensive, no buyer outside of NASA could afford to own them. There is also practicality to think about. If this hypothetical lens set was built, it would be difficult in certain situations to shoot at T1.3, especially with medium and longer focal lengths. If you were using a theoretical 85mm T1.3 (designed to cover the Alexa 65 sensor) on the Alexa 65 at T1.3, the depth of field would be so shallow, you would only be able to keep a thin sliver of an object in focus, and the background would be extremely blurred compared to the subject. This effect could be great for specific situations, but often such extremely shallow depth of field could be distracting, and pulling focus would be very difficult for the 1st AC. The DP might end up stopping the lens down to T2.8 or T4 anyway.
DOES SIZE MATTER?
As exaggerated as the above example is, it’s a good demonstration of why we benefit from having different sensor sizes and film formats. For ROGUE ONE: A STAR WARS STORY, shooting on Alexa 65 with big, heavy, vintage Panavision Ultra 70 anamorphic prime lenses was a great recipe for achieving beautiful, large-format images perfect for huge movie theater screens. That production also had the budget and crew to support using gear that is bigger, heavier, slower to set-up, and more expensive. However for a run-and-gun documentary, shooting on a smaller format like the S16 HD crop on an Alexa Mini or a 2K Center Crop on a Sony F55, shooting 3.5K on a RED Helium, or using a Blackmagic Pocket Cinema Camera, can result in a smaller, lighter camera, smaller lenses, typically faster lenses that need less light, probably cheaper rentals, smaller or fewer hard drives, and possibly even a smaller crew, needing less set-up time.
Even smaller than Super 16, the 2/3” sensors in the cameras used for most live sports, make it practical to build a lens like the Fujinon XA101x8.9BESM/PF that can go as wide as 8.9mm and as long as 900mm (1800mm with its built-in doubler engaged!) and has a maximum aperture of f1.7…wow. When you are covering live sports you need that range of focal lengths and that f stop.
A lens that could cover that massive 101x zoom range at f1.7 on a much larger sensor, like the ones in a Canon C300 or Sony FS7, would be the size of a Civil War era cannon and would probably weigh as much. The Fujinon mentioned above costs $233,490, so what would its Super 35 bigger brother cost to build, half a million dollars? One that covered an even larger format like Alexa 65 would probably be the size of a minivan and cost 5 million dollars! Again, there are many practical reasons for having different formats and sensor sizes.
The physical, logistical, and financial reasons for having all these different sensor sizes might be more obvious than the aesthetic differences that go along with them. With the lens choices currently available to us, the biggest aesthetic differences between images captured with various sensor sizes are the magnification of film grain (or of a digital sensor's noise), and the ability to blur backgrounds relative to the subject of an image.
One of the simplest differences to understand is the magnification of the recorded image. When we finally show our recorded images to an audience, they are magnified from their original size (the size of the sensor or of the film used to capture them). Whether the images are being watched on a cell phone or an IMAX theater screen, the original recorded images will be magnified. With film, when projecting onto, let’s say, a 60-foot-wide theater screen, a 16mm print has to be magnified more than a 35mm print to illuminate the entire 60-foot screen. If both of these hypothetical films were shot on the same film stock, they would both have the same size film grain. But because the 16mm print must be enlarged more than the 35mm print to fill the same screen, its grain is enlarged more as well, so the film grain in the 16mm film would appear larger and more noticeable than the film grain in the 35mm film.
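A rough sketch of that magnification math, using approximate projection-aperture widths of ~21mm for 35mm film and ~9.6mm for 16mm film (assumptions; exact apertures vary by format and aspect ratio):

```python
SCREEN_WIDTH_MM = 60 * 12 * 25.4   # a 60-foot-wide screen is ~18,288mm

def magnification(frame_width_mm: float) -> float:
    """How many times the recorded frame is enlarged to fill the screen."""
    return SCREEN_WIDTH_MM / frame_width_mm

m35 = magnification(21.0)   # 35mm print: ~870x
m16 = magnification(9.6)    # 16mm print: ~1,900x
print(m16 / m35)            # grain in the 16mm print appears ~2.2x larger
```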
It’s the same in digital. Imagine two theoretical projects, one shot on an ARRI Alexa Mini at 3.2K, and one shot on an ARRI Alexa 65 in 6K; both cameras' sensors have the same size photosites. If you kept the cameras' settings the same (aspect ratio, ISO, shutter, codec, etc.) and projected the two films onto the same size screen, you would see the same aesthetic differences in regards to grain (or "noise" in the case of digital) as with film. Because the Alexa 65 sensor is so much larger than the Alexa Mini sensor, the Alexa 65’s images would need to be magnified less than the Alexa Mini’s images, and therefore the appearance of grain or “noise” would be reduced on the Alexa 65. In the images below you can really see the presence of noise in the Super 16mm format example. It's more difficult to see the differences between the Super 35mm image and the Full Frame image because of the limitations of presenting these small, compressed images on your computer screen.
Another big difference between smaller formats and larger formats can be the depth of field you are able to achieve, and how much you are able to blur the background relative to the subject. I say “can be” because sensor size alone does not dictate depth of field or how much background blur you are able to achieve. Depth of field is the distance between the nearest and farthest objects that are in "acceptably sharp focus" in an image. Depth of field, and how much background blur you are able to achieve, are both affected by a lens’ f stop, its focal length, the distance from the camera to the subject, and the size of the recorded format. The wider the f stop, the less depth of field, and the more you can blur the background. A 50mm lens at f1.4 has far less depth of field than a 50mm lens at f4. The longer the focal length of the lens, the less depth of field it will have compared to a wider lens, when shot at the same f stop, at the same distance from the sensor to the subject, and captured on the same size sensor. If the distance from the camera's sensor to the subject is the same in both shots, an image captured with an 85mm lens shot at f2.8 will have less depth of field than an image captured with a 25mm lens shot at f2.8. Also, the background will appear more blurred relative to the subject in the shot captured with the 85mm lens compared to the shot captured with the 25mm lens.
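Those relationships can be sketched with the standard hyperfocal-distance approximation of depth of field. This is a simplified thin-lens model, and the 0.03mm circle of confusion is a common Full Frame assumption:

```python
def depth_of_field_mm(focal_mm, f_stop, subject_dist_mm, coc_mm=0.03):
    """Approximate total depth of field using the hyperfocal-distance model."""
    hyperfocal = focal_mm**2 / (f_stop * coc_mm) + focal_mm
    near = hyperfocal * subject_dist_mm / (hyperfocal + subject_dist_mm - focal_mm)
    if subject_dist_mm >= hyperfocal:
        return float("inf")   # everything out to infinity is acceptably sharp
    far = hyperfocal * subject_dist_mm / (hyperfocal - (subject_dist_mm - focal_mm))
    return far - near

# Subject 3m away, both lenses at f2.8, Full Frame:
print(depth_of_field_mm(85, 2.8, 3000))   # ~200mm: a thin slice in focus
print(depth_of_field_mm(25, 2.8, 3000))   # ~2,800mm: far deeper focus
```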
If you compare the 3 images above, the camera was placed in the exact same position for all 3 shots. The woman was placed in the exact same spot in the room. All 3 were shot at the same T stop, with the same field of view, and focus was set to her eyes. The 3 images were captured with cameras that have different sensor sizes, therefore 3 different focal lengths were required to capture the same field of view. You can really see how much more background blur you can achieve when using longer focal lengths on larger sensors compared to using wider focal lengths on smaller sensors.
To explain this a little further: between Full Frame and Super 35, using focal lengths that give you the same field of view, the difference in the f stop needed to produce the same amount of background blur relative to the subject is about 1 full stop. For instance, if you capture an image with an ARRI Alexa Mini using a 50mm lens at f1.4, to get an image with the same field of view and the same depth of field on an ARRI Alexa LF, you’d need to use a 70mm lens shot at f2. If you want to go the other direction, and capture a third image with the same field of view and the same depth of field, but this time using the ARRI Alexa Mini’s S16 HD format (Super 16), you would need a 26mm lens at f0.7. As you can see, it is theoretically possible to achieve extremely shallow depth of field in smaller formats, but it might be difficult or impossible to find lenses with such huge maximum apertures. There aren’t too many f0.7 lenses out there. So one advantage of shooting on larger formats like Full Frame is the ability to achieve shallow depth of field, on images with a wide field of view, with the lens options that are available to filmmakers.
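This "format equivalence" is just multiplication by the crop factor between the two formats: scale the focal length to match the field of view, and scale the f stop by the same factor to match the depth of field. A sketch, using crop factors of ~1.4 (Super 35 to Full Frame) and ~0.52 (Super 35 to Super 16) as assumptions:

```python
def equivalent_setup(focal_mm: float, f_stop: float, crop_factor: float):
    """Scale a lens setup to another format, preserving field of view and DOF."""
    return focal_mm * crop_factor, f_stop * crop_factor

# Starting point: Super 35 (Alexa Mini), 50mm at f1.4
print(equivalent_setup(50, 1.4, 1.4))    # -> Full Frame: ~(70mm, f2)
print(equivalent_setup(50, 1.4, 0.52))   # -> Super 16: ~(26mm, f0.7)
```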
MORE LENS OPTIONS THAN EVER
Another opportunity these new formats and larger sensor sizes have brought is a wave of new lens options. Lens makers are building new lenses designed to cover these larger formats. In addition to the Leitz Thalia primes mentioned above, there are many lenses being built to cover Full Frame sensors, like the ARRI Signature Primes, Cooke S7/i primes, Zeiss Supremes, Tokina Vista Primes, and Sigma FF Cine primes, to name just a few. Since the 36mm x 24mm (Full Frame) 35mm film format dates back to the early 20th century, there are countless still photography lenses just begging to be rehoused. Companies like True Lens Services, Zero Optik, GL Optics, and Whitepoint Optics are busy rehousing full frame still lenses like Leica R, Canon FD, and Nikon AI-S primes. These lenses were designed for this format, and they bring with them unique characteristics that many filmmakers find quite appealing.
Between all the new lenses being produced to keep up with growing sensor sizes and all the vintage lenses that are being repurposed for motion picture use, there are more lens options available to filmmakers than ever before.
There are more choices than ever for acquisition format. New lenses seem to be released monthly. With all these options at our fingertips, it means we have so many creative possibilities. However, more than ever we need to make sure to consider all the variables before deciding which format is best for a given project. What’s the style of the project? Do you need a physically small camera and lenses? Will you be in a low-light situation? Do you need primes, zooms, or both? Do you have limited set-up time? What’s your budget? You also need to know which lenses are compatible with which cameras and formats. The last thing you want is to choose the wrong equipment, which leaves you fighting with your gear instead of making amazing images. There are more tools than ever available to filmmakers, and it’s up to you to figure out which ones are the best for your project.