Technical Details

The Realities of Resolution

2012-06-14 by jrista

Resolution, and with it sharpness, are concepts often talked about in photography. They are frequently misunderstood, and the various forms of “resolution” are often conflated in ways that lead to greater confusion. This gives rise to anecdotes that ripple through the photographic community about what resolution is, how it affects image sharpness and overall quality, and when or why it might matter to your photographic work.

Factual vs. Anecdotal

From a resolution standpoint, a lot of the anecdotes on the net are simply things people have heard and passed on. Rules of thumb like “stop the lens down two stops for best sharpness” are based on rough empirical studies of a select subset of lenses, yet they lead many photographers to think the rule works for every lens. In reality, exactly where you achieve “maximum resolution”, and therefore your best sharpness, depends on a variety of factors, and is highly dependent on the lens and camera used.

The term “resolution” itself is also often conflated in meaning. There are different forms of resolution, and in photography the one that ultimately affects sharpness is spatial resolution. Another form is image resolution. This has to do with pixel counts, and is usually referred to in terms of megapixels. Image resolution is only meaningful relative to itself, and is not an adequate measure of a camera’s ability to “resolve” detail. To get into a factually valid discussion of “ideal aperture”, we need to delve into spatial resolution, and how the resolutions of the various system components (i.e. lens and sensor) combine to create the actual resolution of the system as a whole. This article is not for the faint of heart; however, if you wish to know exactly what “resolution” is, how it is derived, and what constitutes an “ideal aperture”, read on.

Factors Affecting Resolution

There are two primary components in any camera system: the lens and the sensor. Both have intrinsic properties that affect resolution and, consequently, sharpness. Sensor spatial resolution is a relatively simple concept, and easy to grasp. Lenses are more complex, as their spatial resolution changes with aperture…at first the achievable resolution climbs as you stop down the aperture, then it declines again. The actual resolution of the whole camera “system” is effectively derived from the root sum square of the blurs of each component involved (i.e. lens + TC + sensor), as we’ll see later. I’ll explain spatial resolution for both lenses and sensors, but first a little terminology:

  • F-Ratio: The focal ratio, often referred to as the f-number or f-stop. Usually expressed as f/#, such as f/4, where the aperture diameter is the focal length divided by 4.
  • Diffraction: The tendency of light to bend around an obstacle or edge…namely the edge of the aperture diaphragm as light passes through.
  • Microcontrast: The contrast level between individual pixels of an output image from a camera…the camera’s ability to reproduce fine, closely spaced detail.
  • Image Resolution: The ability of an image to represent a certain pixel volume of detail within its area. Often expressed in terms of line pairs per picture height (lp/ph).
  • Spatial Resolution: The ability of an imaging system to distinguish small details within space (usually two-dimensional space). Often expressed in terms of line pairs per millimeter (lp/mm).
  • Angular Resolution: Used here as a synonym of Spatial Resolution.
  • Resolving Power: The ability of an imaging device to separate fine elements of detail located at a small angular distance from each other.

In the context of this article, when I simply say “resolution”, I am referring to spatial resolution. If I am referring to image resolution, I’ll explicitly call out as such, to avoid confusion.

Sensor Resolution

Sensor resolution is a simple concept. The resolution of a sensor is “fixed” in that it has an intrinsic width and height, and within that physical area is a fixed number of pixels, arranged a specific number of pixels wide and high. Film resolution is actually a bit more complex than sensor resolution, as it is based on the fineness of silver-halide grains and the uniformity of those grains…the spatial resolution of any given 35mm frame, or any given medium or large format film sheet, will not always be exact; it will fall within a small range.

Sensor spatial resolution can be calculated quite simply, to within a couple percent of what you’ll actually encounter in reality. To calculate the exact spatial resolution of a sensor, you would need to know the specific characteristics of the various elements of a sensor assembly, which would include the specific layout of pixels on the sensor (which may not necessarily be a square grid), the existence and nature of pixel microlenses, the presence and strength of an optical low-pass filter (OLPF, or AA filter), and the presence and effect on visible light of an infrared cutoff filter. These details are usually not published for commercial camera equipment, and the effect of them is usually rather small.

To calculate the spatial resolution of a sensor, expressed in line-pairs per millimeter (lp/mm), you can employ the following formula:

spatRes = dimPixelCount l / dimPhysicalLength mm / 2
The variables of this formula are:

  • spatRes: Sensor Spatial Resolution (in lp/mm)
  • dimPixelCount: Number of pixels along the given dimension (in l, or lines)
  • dimPhysicalLength: The physical length along the given dimension (in mm, or millimeters)

(Note: In the formula above, the final division by two converts our units from l/mm to lp/mm.)
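If it helps to see this as code, here is a minimal Python sketch of the formula above (the function name and example are my own, purely illustrative):

def sensor_spatial_res(pixel_count, physical_length_mm):
    """Sensor spatial resolution in lp/mm along one dimension.

    pixel_count / physical_length_mm gives lines per mm (l/mm);
    dividing by 2 converts lines to line pairs (lp/mm).
    """
    return pixel_count / physical_length_mm / 2

# Example: Canon 5D Mark III, vertical axis (3840 pixels over 24 mm)
print(sensor_spatial_res(3840, 24))  # -> 80.0 lp/mm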

Let’s use a few actual DSLR image sensors to demonstrate spatial resolution. I would also like to show how spatial resolution differs from image resolution, and how two cameras that offer identical image resolution output can have different actual resolutions. Let’s use the Canon 1D X (18.1mp), Canon 5D Mark III (22.3mp), Canon 7D (18mp), Nikon D800 (36.3mp) and Nikon D3200 (24.2mp) sensors.

Camera/Sensor   Megapixels   Image Dim.   Physical Dim.
Canon 1D X      18.1         5184×3456    36mm × 24mm
Canon 5D III    22.3         5760×3840    36mm × 24mm
Canon 7D        18.0         5184×3456    22.3mm × 14.9mm
Nikon D800      36.3         7360×4912    35.9mm × 24mm
Nikon D3200     24.2         6016×4000    23.2mm × 15.4mm

If we employ the above formula for each camera, along the vertical (height) axis, we arrive at the following resolutions for each:

Camera/Sensor   Equation                          Spatial Res.    Image Res.
Canon 1D X      spatRes = 3456 l / 24 mm / 2      72 lp/mm        1728 lp/ph
Canon 5D III    spatRes = 3840 l / 24 mm / 2      80 lp/mm        1920 lp/ph
Canon 7D        spatRes = 3456 l / 14.9 mm / 2    115.97 lp/mm    1728 lp/ph
Nikon D800      spatRes = 4912 l / 24 mm / 2      102.33 lp/mm    2456 lp/ph
Nikon D3200     spatRes = 4000 l / 15.4 mm / 2    129.87 lp/mm    2000 lp/ph

A couple things to note here. First, the 1D X and 7D sensors produce images with exactly the same dimensions. The “image resolution” is the same…18 megapixels worth of “content”, or 1728 lp/ph. However, the 7D has a significant edge in terms of spatial resolution, by a factor of 1.6, or a 61% difference. The D800 is well known as one of the highest “resolution” DSLR cameras on the market with its 36.3mp sensor, however that is “image resolution”. The D800 sports an image resolution of 2456 lp/ph. In terms of spatial resolution, the amazing D800 is outpaced by both the 7D and the D3200, and the latter offers a 12% increase in spatial resolution over even the 7D. It should be noted that these are “ideal” figures. Bayer sensors with a CFA, and the presence of an OLPF, will usually reduce these numbers by a couple percent.

Resolution is a measure of a camera’s ability to discern fine detail. When it comes to modern DSLR cameras, cropped-frame cameras actually offer considerably greater capability in this area than full-frame sensors. If your goal is to capture a lot of fine detail, make sure you get the right kind of camera with the right kind of sensor. This does come with some caveats, however, that should at least be noted. Per-pixel dynamic range often suffers as pixel area shrinks. The ability of a pixel to gather light is related to its area (depth is not a factor), and to increase resolution one must reduce area, which lowers the maximum saturation level for any given pixel. A lower maximum saturation will usually increase the impact of electronic noise when the sensor signal is read and converted into a digital image. Fine detail with a high signal-to-noise ratio will certainly reproduce better on a higher resolution sensor; however, fine detail in the shadows, and potentially in the upper reaches of the highlights, might be lost to noise as resolution is increased. Modern sensor designs are continually improving to combat electronic noise, and we are beginning to see a new wave of sensors (primarily from Sony) with very high signal-to-noise ratios and lower levels of electronic noise, which bodes well for continued increases in resolution.

Lens Resolution

When it comes to lens resolution, things are not quite as simple as with sensors. With lenses, spatial resolution is all there is: the image projected by a lens has no pixels, so we don’t need to differentiate it from image resolution. However, all lenses exhibit two things that degrade resolution below the maximum potential, one at each end of the aperture range. On the wide end, where the aperture is larger than the “ideal” setting for maximum spatial resolution, optical aberrations degrade resolution, sometimes significantly. On the narrow end, where the aperture is smaller than the “ideal” setting, diffraction degrades resolution.

A couple of facts about physics first. To start, diffraction is always present in a lens; it only varies by degree. When you use a wider aperture setting, optical aberrations will usually overpower diffraction and be the primary factor degrading image quality…degrading resolution, contrast, etc. Optical aberrations are also always present in a lens, however their effect diminishes greatly as the aperture is stopped down to narrower and narrower settings. Finally, the scale of diffraction is directly related to the aperture chosen…diffraction increases as the aperture is reduced in size. The following chart, which assumes a hypothetical lens that exhibits “ideal” or “perfect” resolution at f/4, demonstrates this (the higher the lp/mm, the smaller the aberration/diffraction artifact, and the higher the resolution):

(Chart: lens spatial resolution vs. aperture for a hypothetical lens with “perfect” resolution at f/4…aberration-limited to the left of the peak, diffraction-limited to the right.)

The peak at f/4 is what we call the diffraction-limited aperture of the lens. This is the point at which optical aberrations begin to affect IQ less than diffraction does. I have plotted a rather “nice” aberration-limited resolution curve in this chart; in reality the curve is usually less ideal than that. Depending on which kinds of optical aberrations are dominant at any given aperture, the effect on IQ can vary, and the effect of optical aberrations also increases from the center to the edge of a lens, complicating matters further. Every lens design is different, and a clean exponential curve like this is unlikely in reality. To be completely accurate, a curve for each type of aberration present in a lens should be plotted, with the resolution of the lens fit to the lowest common denominator amongst them all. Most lenses exhibit resolution characteristics similar to the plot above in a general sense: initially limited by optical aberrations and later limited by diffraction, with a peak somewhere in the f/3.5 to f/6.3 band. Specialized lenses, or top-shelf glass like the Canon EF 600mm f/4 L II or the Zeiss Biogon M 25mm f/2.8¥, offer near-ideal (or near-perfect) resolution at their maximum apertures, and are therefore at their peak spatial resolution wide open. Many professional-grade lenses manufactured within the last decade, particularly within the last 5-6 years, offer previously unparalleled resolution (excepting specialty lenses like the Biogon), and approach perfection at or near their widest apertures. Consumer-grade lenses are generally unlikely to achieve perfection, however in recent years they too have shown improved resolution in that band of peak resolution between f/3.5 and f/6.3.

Assuming most lenses reach the aberration-to-diffraction crossover point around f/4, the maximum average resolution for most lenses that the average or even professional photographer might use would be around 170 lp/mm. Compared with sensor resolutions, that is quite high: more than double the average full-frame sensor resolution (excepting the D800), and anywhere from 30-70% higher than your average APS-C (cropped-frame) sensor resolution.

From a simplistic standpoint, the aperture range wherein the effects of optical aberrations and diffraction converge is the resolution peak of your system, and you’ll experience the best results at this setting. This is often about 1-2 stops down from maximum aperture, assuming your maximum aperture is around f/2-f/3.5. However, the story isn’t quite this simple in reality. The “ideal” aperture for achieving the best resolution is a largely fixed attribute of a lens; however, this is often different from the point at which stopping down produces “ideal” sharpness, even if the resolution at that point is lower than at the ideal aperture for maximum resolution. Additionally, depending on the sensor used, you may be able to continue stopping down without visible loss of resolution, although the clarity of the resolved image may suffer.

System Resolution and the Range of Acceptability

Resolution is a funky thing, and for any given lens, how resolution is perceived can change from camera to camera and from person to person. Assuming a perfect lens, one which is only diffraction-limited at every aperture, resolution tends to be largely sensor-bound until the point at which diffraction restricts lens resolution below the sensor’s intrinsic spatial resolution. Until that point, using the widest diffraction-limited aperture possible will always produce better results (ignoring, for the moment, the effects of a shrinking depth of field…for discussion’s sake). If your sensor is diffraction-limited at f/8, then your resolution will be limited to at most 86 lp/mm. If you change your aperture to f/4, the focused subject should improve in quality, despite the fact that the lens is now outresolving the sensor by a factor of 2.

This is because the spatial resolution of your camera “system” as a whole, the combination of lens + sensor, is derived from the root sum square (RSS) of the blurs of each separate component. Blur can be caused by diffraction, or by the sensor itself, and if you want to get really complex, by every individual optical or sensing element within the light path…that would include every lens element, the OLPF/AA filter, the IR cutoff filter, even the microlenses over each pixel. Practically speaking, for computing the maximum resolution a camera is capable of, we only need to factor in the lens and the sensor. The rest can effectively be ignored, and factored in at the end by knocking off a percent or two from the final results of the following calculations. From this point on, things get pretty complex, so if you haven’t absorbed the information above, you may want to hold off on the next part until you have.

Sensor blur is effectively determined by the size of the photodiode in each pixel. With the advancements in current sensors, particularly single- or double-layered microlens structures over each pixel, the total area of each pixel can be considered the “blur circle size” for a sensor. Lens blur is defined by the size of the airy disc (the pattern created when diffraction in a lens spreads a single point light source). Computing the blur circle size for a sensor is pretty straightforward. Similar to the spatial resolution formula, sensor blur can be calculated as follows:

sBlur = dimPhysicalLength / dimPixelCount * conversionFactor
Where:

  • sBlur: Sensor Blur
  • dimPhysicalLength: The physical length along the given dimension (in mm, or millimeters)
  • dimPixelCount: Number of pixels along the given dimension (in l, or lines)
  • conversionFactor: A unit conversion factor. Generally speaking for these purposes, it’ll be 1000 µm/mm to convert millimeters to microns.

If we take two cameras from above, say the Canon 1D X and the Nikon D3200, we’ll find that the blur circles (also the “pixel pitch” of the sensor) of both are:

Camera/Sensor   Equation                                 Blur Circle / Pixel Pitch
Canon 1D X      sBlur = 24 mm / 3456 l * 1000 µm/mm      6.94 µm/pixel
Nikon D3200     sBlur = 15.4 mm / 4000 l * 1000 µm/mm    3.85 µm/pixel
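Here is the same calculation as a small Python sketch (my own helper, assuming the simple length-over-pixels model above):

def pixel_pitch_um(physical_length_mm, pixel_count):
    """Sensor blur circle (pixel pitch) in microns along one dimension."""
    return physical_length_mm / pixel_count * 1000  # 1000 µm per mm

print(round(pixel_pitch_um(24, 3456), 2))    # Canon 1D X  -> 6.94
print(round(pixel_pitch_um(15.4, 4000), 2))  # Nikon D3200 -> 3.85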

Calculating the size of an airy disc is a bit more complex, as it ultimately depends on the wavelength of the light involved. Since cameras capture visible light, which ranges in wavelength from around 380nm (0.38µm) to over 750nm (0.75µm), trying to be highly accurate is a complex exercise. For now, since most sensors are doubly sensitive to “green” light, I’ll assume we’re working with yellowish-green light, which falls around 565nm, smack-dab in the middle of the visible spectrum. The formula to calculate the size of an airy disc is as follows:

D = 1.22 * λ * fNumber
Where:

  • D: Diameter of Airy Disc
  • λ: Wavelength of light
  • fNumber: The aperture setting as a single number (i.e. f/4 -> 4)

Be careful to use consistent units, or inject appropriate conversion factors where necessary. The wavelength of yellowish-green light, 565nm, is 0.565µm. Since sensor pixel pitches (sensor blur circle sizes) are usually measured in microns, it’s best to keep all values in the airy disc equation in microns as well. We can rearrange this formula to determine what aperture makes the airy disc size match the sensor’s pixel pitch. This aperture is the point at which the sensor is diffraction-limited, called the DLA, or Diffraction-Limited Aperture:

DLA = sBlur / (1.22 * λ)
For the same two cameras above (with sBlur being the pixel pitch calculated earlier), the DLA would be:

Camera/Sensor   Equation                                     DLA
Canon 1D X      DLA = 6.94 µm / (1.22 * 0.565 µm) = 10.068   f/10
Nikon D3200     DLA = 3.85 µm / (1.22 * 0.565 µm) = 5.585    f/5.6
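In code, the airy disc and DLA calculations might look like this sketch (function names are mine; it assumes the simplified D = 1.22 * λ * fNumber relation above and 565nm light):

def airy_disc_um(f_number, wavelength_um=0.565):
    """Airy disc size in microns, per the simplified D = 1.22 * lambda * N."""
    return 1.22 * wavelength_um * f_number

def dla(pitch_um, wavelength_um=0.565):
    """Diffraction-limited aperture: the f-number at which the airy disc
    matches the sensor's pixel pitch."""
    return pitch_um / (1.22 * wavelength_um)

print(round(dla(6.94), 3))  # Canon 1D X  -> 10.068, i.e. ~f/10
print(round(dla(3.85), 3))  # Nikon D3200 -> 5.585, i.e. ~f/5.6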


These numbers give us somewhat of a starting point: they assume the blurs for lens and sensor are equal. To calculate the total output resolution of the lens+sensor combination, you need to take the root sum square (RSS) of the blurs of each element in the system:

sysBlur = sqrt(blur1^2 + blur2^2 ... + blurN^2)
Where:

  • sysBlur: The size of the blur circle for the whole combined system.
  • blur1: The size of the blur circle for the first element of the system (i.e. lens)
  • blur2: The size of the blur circle for the second element of the system (i.e. sensor)
  • blurN: The size of the blur circle for the Nth element of the system (i.e. a Teleconverter)

The result of the above formula is the size of the smallest dot that can be resolved clearly by a camera system. One could also refer to it as the thickness of the smallest line that could be resolved. That leads into the next equation, converting the minimum resolvable detail into a common form: Spatial Resolution. To normalize results so we can compare in common terms, we can convert a blur circle in microns into a spatial resolution in lp/mm with the following formula:

spatRes lp/mm = 1000µm/mm / (sysBlur µm/l * 2)
Where:

  • spatRes: System Spatial Resolution (in lp/mm)
  • sysBlur: The result of the previous formula (in µm/l)

We can reduce the above two formulas into a single formula to directly compute the spatial resolution of a lens and camera like so:

spatRes = 1000 µm/mm / (sqrt(lensBlur^2 + sensorBlur^2) µm/l * 2)
If we calculate the total spatial resolution for the two cameras above with a diffraction-limited lens at the DLA (assume all units are in microns):

Camera/Sensor   Equation                                                                  System Resolution
Canon 1D X      1000 µm/mm / (sqrt(6.94^2 + 6.94^2) µm/l * 2) = 1000 µm/mm / 19.62 µm/lp    50.97 lp/mm
Nikon D3200     1000 µm/mm / (sqrt(3.85^2 + 3.85^2) µm/l * 2) = 1000 µm/mm / 10.88 µm/lp    91.91 lp/mm

There are key things to note here. Both cameras, with diffraction-limited lenses at their sensors’ DLA, offer LESS final system resolution than either the lens or the sensor is capable of alone. This is not a factor of diffraction. It is due to the fact that every additional element in the light path from the lens to the sensor, as well as the interface of the image circle with the sensor itself, adds its own “blur” to the image. In the case of a simple system, just lens and sensor, it’s due to the fact that the spatial frequencies in the lens’s image are generally discontinuous with the spatial layout of the sensor. In other words, if you projected an image of the camera’s sensor at 1:1 magnification onto the sensor itself, using an aperture that matches lens resolution to sensor resolution, it would be extremely difficult to line up the projected image so that it perfectly matched the pixel layout of the sensor. It is entirely possible that you could achieve that outcome, but in the average case the images will be offset by some degree, diminishing the accuracy of the results. In this case, given that the lens and sensor spatial resolutions are equivalent, we’re losing about 30% resolution to the effects of blur.
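To tie the last two formulas together, here is a short Python sketch (my own helper; blur values are in microns):

from math import sqrt

def system_res_lpmm(*blurs_um):
    """System spatial resolution in lp/mm from the RSS of component blurs."""
    sys_blur = sqrt(sum(b * b for b in blurs_um))  # combined blur circle, µm
    return 1000 / (sys_blur * 2)                   # µm per lp -> lp/mm

print(round(system_res_lpmm(6.94, 6.94), 2))  # 1D X at its DLA  -> 50.94 (50.97 above reflects rounding)
print(round(system_res_lpmm(3.85, 3.85), 2))  # D3200 at its DLA -> 91.83 (91.91 above reflects rounding)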

You can reduce this effect by increasing the resolution of one element of the system. As the sensor is generally a fixed component of a camera, the logical option is to widen the aperture of the lens…so long as the lens remains diffraction-limited. If your lens is already aberration-limited at the sensor’s DLA, the only option is a better lens, one capable of higher spatial resolution at the DLA or wider apertures. Let’s assume we have a lens that is diffraction-limited at f/4 on both of our test cameras. First, we need to compute the size of the airy disc:

D = 1.22 * 0.565µm * 4
D = 1.22 * 2.26µm
D = 2.76µm
Plugging this new airy disc size into our previous equations:

Camera/Sensor   Equation                                                                  System Resolution
Canon 1D X      1000 µm/mm / (sqrt(2.76^2 + 6.94^2) µm/l * 2) = 1000 µm/mm / 14.94 µm/lp    66.93 lp/mm
Nikon D3200     1000 µm/mm / (sqrt(2.76^2 + 3.85^2) µm/l * 2) = 1000 µm/mm / 9.48 µm/lp     105.49 lp/mm


As you can see, both cameras realize an improvement in final spatial resolution. The 1D X realizes the greater gain (about 31%), as its aperture was widened by about 2 2/3 stops, whereas the D3200 realized a lesser gain (about 15%), as its aperture was only widened by a single stop. Let’s take it a stop further, with a diffraction-limited f/2.8 lens:

D = 1.22 * 0.565µm * 2.8
D = 1.22 * 1.582µm
D = 1.93µm
Plugging this new airy disc size into our equations for our test cameras:

Camera/Sensor   Equation                                                                  System Resolution
Canon 1D X      1000 µm/mm / (sqrt(1.93^2 + 6.94^2) µm/l * 2) = 1000 µm/mm / 14.40 µm/lp    69.44 lp/mm
Nikon D3200     1000 µm/mm / (sqrt(1.93^2 + 3.85^2) µm/l * 2) = 1000 µm/mm / 8.60 µm/lp     116.27 lp/mm

Now the tides have turned. The D3200 realizes the greater gain (about 10%), despite an identical change in aperture, while the 1D X has only gained a small amount of total output resolution (almost 4%). Total system resolution, or output resolution, cannot supersede the resolution of the least-resolving element, and here the 1D X sensor is becoming the limiting factor. You could keep realizing gains, however finding a “perfect” (diffraction-limited) lens at apertures as wide as f/2.8, let alone wider, is a difficult task. The D3200 has the more capable sensor when it comes to spatial resolution, and is resolving a whopping 67% finer detail than the 1D X. Assuming you could find a perfect f/2.8 lens, the D3200 with its small but pixel-dense sensor could resolve 2/3 more detail than the 1D X with its larger but less-dense sensor. Assuming you could find a perfect f/2.8 or wider lens…

Backing up a bit to the perfect f/4 lens, the gap between the 1D X and the D3200 shrinks. The D3200 is still resolving about 57% more detail, however the f/4-to-f/2.8 difference for the 1D X itself is only about 3%. This indicates that the 1D X has reached its peak sharpness, and for what it is resolving, despite being less detailed than the D3200, the results are likely to be clearer and better defined at a 1:1 crop level. The flatter the resolution curve for a given sensor becomes at wider and wider apertures (assuming the lens is diffraction-limited at all of them), the clearer the results will usually be: acutance and contrast improve as you approach the spatial resolution limits of a camera system. The D3200 has a fair way to go before it reaches that point of diminishing returns. This is a significant part of the reason why you’ll often hear photographers complaining about perceived “softness” or “lack of microcontrast” on increasingly pixel-dense APS-C cameras: it is harder to reach the maximum system resolution. A plot of the output resolution curves for the 1D X and D3200 demonstrates the asymptotic relationship system resolution has with the lowest common denominator of the system:

(Chart: system output resolution vs. aperture for the 1D X and D3200 with a diffraction-limited lens; each curve flattens toward its sensor’s intrinsic limit.)
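For the curious, here is a rough Python sketch of how such curves could be generated, reusing the helpers from the earlier sketches (hypothetical names; it assumes a lens that remains diffraction-limited at every aperture and 565nm light):

from math import sqrt

def airy_disc_um(f_number, wavelength_um=0.565):
    return 1.22 * wavelength_um * f_number

def system_res_lpmm(lens_blur_um, sensor_blur_um):
    return 1000 / (2 * sqrt(lens_blur_um**2 + sensor_blur_um**2))

# Sweep the aperture from narrow to wide and print system resolution
for f in (22, 16, 11, 8, 5.6, 4, 2.8, 2, 1.4):
    lens_blur = airy_disc_um(f)
    print(f"f/{f}: 1D X {system_res_lpmm(lens_blur, 6.94):6.2f} lp/mm, "
          f"D3200 {system_res_lpmm(lens_blur, 3.85):6.2f} lp/mm")

Each curve climbs toward, but never exceeds, its sensor’s intrinsic limit (72 lp/mm for the 1D X, 129.87 lp/mm for the D3200).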

When it comes to real-world lenses, ones that are not diffraction-limited at all apertures and eventually become aberration-limited as you continue to open up the aperture, you’ll quickly experience diminishing returns that again eat away at your output resolution. From the first chart above, at f/2.8 the hypothetical lens blur grows to about 5.26µm. This would result in output resolutions of 57.42 lp/mm for the 1D X and 76.7 lp/mm for the D3200. Depending on the exact character of a lens’s optical aberrations, resolution could continue to drop rapidly from there, or it may plateau at some point, around 30-40 lp/mm. A whole host of optical aberrations can affect a lens, and the degree to which they affect IQ increases from the center to the edge of the lens, so trying to nail down resolution precisely once you pass the “ideal” aperture is usually an exercise in futility.

A basic empirical test with a given lens will usually produce enough data points to roughly plot your system resolution without needing to know every intrinsic factor that may impact it. In this respect, many labs and third-party reviewers perform these tests for most lenses on the market, and you can often simply look up such numbers if you so desire. Keep in mind that empirical testing results are usually gathered with a variety of arbitrary camera bodies, and methods of measurement differ from one reviewer to the next. Many empirical tests are based on theory from the film era, where the concept of sharpness was based more on circles of confusion and contact print size. Such factors are becoming increasingly irrelevant in the modern digital world, and the resolutions possible with a digital camera often far outpace the resolutions “classically” used when discussing acceptable resolution in film. You might encounter 10 lp/mm, 30 lp/mm, and 40 lp/mm quite often, however these “standard” testing points are often inadequate to truly describe the resolving power of today’s sensors and lenses.


¥ The Zeiss Biogon M 25mm f/2.8 was (is?) a specialized lens designed to offer ideal resolution right into the corners, capable of 400 lp/mm at MTF 9% (very, very low contrast). In the more normal metrics used for lens performance with digital cameras, that would be roughly 190 lp/mm at MTF 50% (medium-to-high contrast). The lens was purportedly intended for testing high-resolution film, and a 400 lp/mm “gigabit” film has also purportedly existed at one point in time. A 200 lp/mm MTF 50% film or sensor resolution would be quite amazing, and only achievable at f/2.8 or wider on a technically “perfect” lens (no optical aberrations, leaving diffraction as the sole factor limiting resolution).