
Optical Magnification in the Age of Digital Microscopy

Figure 1: Displaying the same image on two different monitors immediately reveals the difference in optical magnification.

Introduction
January 14, 2019 – Study of the minuscule is not possible without the aid of an instrument, because the human retina does not possess sufficient angular resolution to discern microscopic detail at the focal lengths achievable by its natural lens. Thus the history of microscopic study is intimately linked to the history of microscopy itself. This history brings with it conventions that conveniently describe the behavior of optical systems as viewed by the human eye, but which fail to describe digital microscopy data in a way that can be presented consistently or processed automatically without further context. As microscopy becomes increasingly digital, the conventions used to describe digital microscopy data must adapt as well.

Defining Optical Magnification
One convention applied throughout design, documentation, and marketing of lenses is that of “optical magnification” as expressed by a dimensionless coefficient: e.g. 1x, 2x, 10x, 100x. Intuitively, this coefficient describes a ratio between the apparent size of an object as viewed with the naked eye, and the apparent size of the object as viewed through the lens. More precisely with respect to a compound optical microscope, it represents the ratio between the tangent of the angle subtended (covered) by an object as measured from the focal point of the eyepiece and the tangent of the angle subtended by the object as measured from the focal point of the objective.
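
Written as a formula (our notation, not the article's; θ_eyepiece and θ_objective denote the two subtended angles measured from the respective focal points):

```latex
M = \frac{\tan\theta_{\mathrm{eyepiece}}}{\tan\theta_{\mathrm{objective}}}
```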

Optical magnification provides an excellent metric for comparing different lenses. That a 100x lens makes things appear 10 times bigger than a 10x lens is easy to understand. It also has simple, multiplicative arithmetic for combining multiple lenses: a 2x lens and a 10x lens in series will produce 20x, as one might intuit.
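
In symbols, and using the example above, magnifications in series simply multiply:

```latex
M_{\mathrm{total}} = M_1 \times M_2 = 2 \times 10 = 20
```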

Optical Magnification in Practice
The optical laboratory microscope is built for the human eye, to project the magnified image of a very small object through the eye’s natural lens and onto the retina. Thus the human eye must be considered part of the optical system. While a microscope can be adapted to photographic purposes, this requires yet more lenses for perfect results, as anyone who’s attempted to jury-rig a camera adapter can attest.

Thus the set of optical magnifications provided by the microscope’s various objective lenses in combination with the eyepiece lens has been chosen for a particular application to be useful to a human viewer. A microbiology lab might have a microscope with a 10x eyepiece and a range of objective lenses between 4x and 80x, but these particular values ultimately derive from the optics of the human eye. A lab staffed by octopuses or eagles would have very different lenses on their microscopes.

Because the optics in a lab microscope are selected with regard to the human eye, it feels quite natural to discuss or conceptualize the scale of objects by referring to the optics required to view those objects. A tumor boundary might be visible at 40x, but the individual cells not until 400x, which is useful information. When it’s clear in context what instrument is being used, it’s also quite natural to shorten this to 4x or 40x, in reference to the label on the objective lens itself.

Digital Microscopy
Digital slides are not imaged by the human eye, but by some variety of digital image sensor. As digital devices, these sensors produce a series of discrete pixel values by sampling light intensity in a regular grid. These pixels represent the atomic limit of the resulting digital image’s resolution: no visual detail is present “smaller” than a pixel. Thus any visual detail we want to preserve in the resulting image must be projected onto the sensor so that its image covers at least one, but preferably multiple, pixels.

Due to the limits of technology, we cannot image the entire slide in a single shot at sufficient resolution to show details within cells. In fact, when it comes to lab-grade sensors capable of calibrated, repeatable results, we don’t even come close. Instead, we precisely move the sensor through a regular grid, imaging the subject piecewise and joining the pieces back together when showing them to the user.
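
As a rough sketch of that piecewise acquisition, assuming a hypothetical capture_tile() function standing in for real stage and sensor control:

```python
# Sketch of tiled acquisition. capture_tile() and the dimensions below are
# illustrative assumptions, not a real scanner API.
TILE_W, TILE_H = 2048, 2048          # sensor size in pixels (assumed)
SLIDE_W, SLIDE_H = 40_000, 40_000    # full scan size in pixels

def scan_slide(capture_tile):
    """Image the slide as a regular grid of tiles, keeping each tile's offset
    so the pieces can be joined back together for display."""
    tiles = []
    for y in range(0, SLIDE_H, TILE_H):
        for x in range(0, SLIDE_W, TILE_W):
            tiles.append(((x, y), capture_tile(x, y)))  # move to (x, y), image one tile
    return tiles
```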

The pixels produced by the sensor’s sampling are typically presented to a user via a computer monitor. This again is a digital device, conceptually composed of a regular grid of light-emitting or -transmitting elements which can be manipulated to generate a variety of colors by modulating the intensity of separate red, green, and blue sub-elements. We also refer to these light-emitting elements as pixels, but we differentiate between monitor or display pixels (those shown to the user) and sensor or image pixels (those sampled from the object and stored in an image file).

A computer monitor can exactly reproduce a digital microscope image only when each pixel of the monitor is used to display the color value of a single image pixel. Thus, when viewing a 40,000×40,000 pixel slide image on a standard high-definition monitor, only a selected 1920×1080 pixel sub-area of that slide can be exactly reproduced at any given time.
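
A quick back-of-the-envelope calculation makes the mismatch concrete:

```python
slide_px = 40_000 * 40_000        # image pixels in the scan
screen_px = 1920 * 1080           # pixels on a standard HD monitor
print(slide_px / screen_px)       # ~771: it would take hundreds of screenfuls
                                  # to show every image pixel exactly once
```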

Computers approximate zooming in and out of an image by sampling the original image to create a new image, which is displayed to the user. For instance, to zoom in by a factor of 2, the computer duplicates a single image pixel into a 2×2 square block of monitor pixels. This increases the apparent size of the image, but does not reveal any new information, as the imaging sensor does not provide any detail smaller than the pixel level. To zoom out of the image by a factor of two, the computer averages the color values of a 2×2 block of adjacent pixels, and displays that average color value with a single monitor pixel. This decreases the apparent size of the image, but in the process discards visual detail in the same fashion as a low-pass filter. When rescaling the image by non-integer factors, the computer uses more sophisticated filtering and sampling methods to suppress digital artifacts that would otherwise appear.
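
A minimal sketch of the two integer-factor cases described above, in plain Python (images here are just lists of rows of grayscale values; real software uses optimized libraries and better filters):

```python
def zoom_in_2x(img):
    """Nearest-neighbour zoom in: each image pixel becomes a 2x2 block of monitor pixels."""
    out = []
    for row in img:
        doubled = [v for v in row for _ in range(2)]   # duplicate each pixel horizontally
        out.append(doubled)
        out.append(list(doubled))                      # duplicate the row vertically
    return out

def zoom_out_2x(img):
    """Box-filter zoom out: each 2x2 block of image pixels is averaged into one monitor pixel."""
    out = []
    for y in range(0, len(img) - 1, 2):
        row = []
        for x in range(0, len(img[y]) - 1, 2):
            total = img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]
            row.append(total / 4)                      # averaging discards high-frequency detail
        out.append(row)
    return out
```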

For all non-unity scales, it is the newly-created, resampled image that is displayed to the user, not the original values captured by the sensor. Furthermore, the resampled image does not contain the visual information required to recreate the original. In the case of zooming in, the information is lost to the decreased field of view; and in the case of zooming out, the information is lost throughout the image as attenuation of high-frequency detail. Thus attempting to zoom back in on a zoomed out version of an image yields a blurry enlargement without the detail of the original.
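
A round trip through the two sketch functions above illustrates the loss: once detail has been averaged away, zooming back in cannot recover it.

```python
original = [[10, 200], [200, 10]]   # a tiny 2x2 "checkerboard" of fine detail
small = zoom_out_2x(original)       # [[105.0]] -- the detail is averaged away
restored = zoom_in_2x(small)        # [[105.0, 105.0], [105.0, 105.0]]
print(restored == original)         # False: a blurry enlargement, not the original
```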

Optical Magnification and Digital Images
With a convention of optical magnification established, it is tempting to apply it to the zoom level of our digital image. We might conclude that if the original image were scanned using a 20x objective lens, and we’ve then digitally zoomed it by ½, we are looking at an image magnified 10x. But as we’ve mixed the optical and digital domains, we should check that our assumption holds.

We are missing a critical detail in calculating the optical magnification: the physical size of a pixel. For a computer, a monitor is a monitor regardless of physical size. A high-definition cellphone display and a stadium Jumbotron appear logically identical to image display software. But to compute the optical magnification of a displayed image, it is necessary to know how wide each pixel is. We can then compute the ratio between a distance as it appears on the display and the same distance as measured on the physical slide.1
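
As a sketch of that computation, assuming (per the footnote) that the monitor’s physical pixel pitch can actually be obtained:

```python
def displayed_magnification(scan_um_per_px, monitor_px_pitch_um, digital_scale=1.0):
    """Ratio of a distance as shown on the monitor to the same distance on the slide.

    scan_um_per_px:      micrometers of slide covered by one image pixel (scanner metadata)
    monitor_px_pitch_um: physical width of one monitor pixel, in micrometers
    digital_scale:       software zoom factor (1.0 = one image pixel per monitor pixel)
    """
    # One image pixel covers scan_um_per_px of slide and, at this zoom, occupies
    # digital_scale * monitor_px_pitch_um of physical screen.
    return (monitor_px_pitch_um * digital_scale) / scan_um_per_px
```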

In the case of both the cellphone display and the Jumbotron, the scanned digital image contains exactly the same information. The Jumbotron, despite physically enlarging the image of the subject by many orders of magnitude, cannot ever display any detail smaller than the pixel values sampled by the original image sensor. Our intuition would lead us to believe that increasing the magnification would reveal new details, but instead we only get bigger pixels.

Even the case of scaling the image on a single display misleads us. Swapping from a 20x to a 10x objective lens on a microscope does not alter the subject, only the optics between the subject and the viewer. But zooming out of a digital image constructs a new image by resampling the pixels of the original image, reducing the size by discarding visual information. Likewise, swapping from a 20x objective lens to a 40x objective lens reveals new details not previously visible by the microscope’s operator. But zooming in on a digital image only creates a new image by replicating the originally sampled pixels to occupy more on-screen pixels, enlarging the image while resolving no new detail.

Optical magnification has meaning only where there are optics. In digital microscopy, optical manipulation of the image occurs only once, at the beginning of the process during initial image acquisition. Past that, describing image operations in terms of optical magnification becomes misleading.

Use of optical magnification to describe image operations in digital microscopy comes from the legacy of optical microscopes, where the context, and therefore the meaning, of the convention is clear. But aside from communicating procedural or technical matters, its primary utility is in conveying a sense of size. An experienced microscope operator will quickly learn to scan at low power, looking for structures “about that big at 4x”, before jumping to 20x to resolve cellular detail. They’ll also learn that “about this big at 20x” is roughly the size of a red blood cell. Work with a microscope for any time at all and this becomes second nature.

But as described, applying that “second nature” experience directly to digital imagery misleads the viewer as to the contents of the screen in front of them. If an image is originally scanned at 20x magnification, no manipulation of that image can produce the image as it would appear if scanned at 10x or 40x. Display software can only interpolate or extrapolate new approximations using the existing pixels. The half-scaled version of a 20x scan is precisely that: a digitally-produced variation of another image, a new image constructed from an original. And since the same exact digital image may be viewed on any number of different monitors, unlike a physical slide which is only viewable through an optically-equivalent instrument, a viewer using an ultra-high-definition 4k screen might find they cannot physically see their target structures if the image is scaled down too much. And a person viewing the same image on an old laptop might need to zoom out quite far to even fit one of the structures on screen. Thus a comfortable viewing magnification doesn’t correspond directly to the size of the structure as with an optical microscope, but rather to the combination of structure size, imaging resolution, and display resolution.2

Figure 2: Relative size of tissue sample displayed at native scan resolution on HD and 4k displays of the same physical size. Pixel dimensions of the display are noted.

A Robust Alternative: μm/px
Remember that the precise definition of optical magnification is the dimensionless ratio between the apparent angular sizes of an object as viewed magnified and un-magnified. This allows comparison between different lenses, and shortcut conventions for specific contexts, but doesn’t directly provide any spatial metric. While experience will bring increasingly good estimates, peering through a microscope and knowing that you’re seeing something at 400x does not immediately permit calculation of object size. Some additional information is necessary, such as an in-field ruler or the measured distance subtended by the field of view at the target magnification.

Digital microscopy introduces that information because device manufacturers know the physical size of each discrete pixel in their sensor. Using the conventional optical magnification formula, slide scanners calculate the distance on the slide subtended by each sensor pixel at the original optical magnification and include this as metadata in the image file itself. In other words, image display software automatically knows how much distance on the physical slide is represented by each sampled pixel. Furthermore, because the display software deterministically resamples the original image, it can calculate the physical slide distance subtended by any monitor pixel when displaying the original image at any digital scale. This can replace the dimensionless, context-dependent optical magnification with a dimensioned metric of real-world distance.
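
A minimal sketch of that bookkeeping; the 0.5 μm/px starting value is a typical figure for a 20x scan and is assumed here, not taken from any particular scanner:

```python
SCAN_MPP = 0.5   # μm of slide per image pixel, read from slide metadata (assumed value)

def effective_mpp(digital_scale):
    """μm of slide represented by one monitor pixel at a given software zoom.

    digital_scale > 1 means zooming in (each image pixel spread across several
    monitor pixels), so each monitor pixel covers less slide distance."""
    return SCAN_MPP / digital_scale

print(effective_mpp(1.0))   # 0.5 μm/px at native resolution
print(effective_mpp(0.5))   # 1.0 μm/px when zoomed out by half
```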

Given the scale of microscopic features studied in pathology, the most convenient metric available is micrometers-per-pixel or “μm/px”. This metric does not depend on any physical or logical properties of the monitor used to display the image, only on the controlled initial conditions of image acquisition. It does not describe an approximation of a convention, but meaningful physical data about the pixels displayed on screen. It requires no context to understand, no expectation that each viewer of a particular image knows the details of how it was scanned.

Furthermore, images originally scanned at different resolutions or with different methodology can be unambiguously compared. This is critical as digital pathology moves beyond bright-field optical microscopy. Techniques such as photon tunneling and structured illumination use optical lenses in unintuitive ways not derived from the human eye. And near-field imaging techniques can omit optics entirely. A metric that starts with an assumption of optical magnification via compound optical lens cannot accurately describe images produced via another microscopy technique. Having an unambiguous, physically-defined spatial metric future-proofs software against advances in digital imaging technology, permitting transitional or hybrid methodology and algorithm re-use across imaging techniques.

Within image display and editing software, a per-pixel spatial metric is not only consistent, but necessary for many of the operations expected of digital pathology software. In deficient slide formats where this information is not available, it must be inferred from other metadata or entered by the user before standard image operations are possible. Without some distance-to-pixel metric, an image analysis algorithm cannot measure sizes or distributions, and image display software can provide neither reliable scales and measurements nor registered annotations.
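
For example, a measurement tool needs nothing more than the per-pixel metric and pixel coordinates (a sketch; the function names are ours):

```python
import math

def measure_um(p0, p1, mpp):
    """Physical distance in μm between two points given in image-pixel coordinates."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    return math.hypot(dx, dy) * mpp

def area_um2(pixel_count, mpp):
    """Physical area in μm² of a region covering pixel_count image pixels."""
    return pixel_count * mpp * mpp

print(measure_um((0, 0), (300, 400), 0.5))   # 250.0 μm for a 500-pixel span at 0.5 μm/px
```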

Even in cases where image display software appears to operate in optical magnification, this is only a UI convention mimicking a familiar experience; the software operates in dimensioned spatial metrics “under the hood” wherever spatial meaning is required. This dualism serves adequately when software is used in traditional workflows, with a manual operator viewing a slide and adding metadata just as they would with an optical microscope and a notebook. But as digital pathology methods become more sophisticated and automated, it becomes increasingly important that human operators not be misled as to the software’s internal interpretation of the image.

It might seem that a good solution to the incorrect magnification reported by some legacy pathology software would be to simply query pixel dimensions from the operating system and show the true optical magnification in the UI. But even assuming that we can determine the true physical pixel density of a display3, the results are not familiar to an experienced microscope user. Using physical calipers and an on-screen scale line, the author’s monitor was measured to have an optical magnification of 481x while displaying, at 1:1 resolution, a slide image originally scanned with a 20x objective lens. At this scale, the cytoplasm was clearly distinguishable from cell membranes by a viewer seated normally. A colleague’s ultra-high-definition monitor was found to have an optical magnification of only 180x when displaying the same image at 1:1 resolution, with cytoplasm distinct but visible only when leaning close to the monitor’s surface.
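
Hypothetical numbers in the same ballpark as these measurements show how the arithmetic works out; the 0.5 μm/px figure and both pixel pitches below are our assumptions, not values reported by the author:

```python
SCAN_MPP = 0.5               # μm of slide per image pixel, typical for a 20x scan (assumed)
standard_pitch_um = 240.0    # hypothetical desktop monitor, ~0.24 mm pixel pitch
uhd_pitch_um = 90.0          # hypothetical high-density UHD panel, ~0.09 mm pixel pitch

# On-screen magnification at 1:1 = physical width of one monitor pixel
# divided by the slide distance represented by one image pixel.
print(standard_pitch_um / SCAN_MPP)   # 480.0 -- the neighbourhood of the 481x measurement
print(uhd_pitch_um / SCAN_MPP)        # 180.0 -- the scale of the 180x measurement
```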

Even divided by 10, it is clear that these magnification numbers don’t match those conventionally found in the field of microbiology. While one might guess field of view and subject size from a microscope’s magnification, this is because of the relatively limited set of optics that can physically be constructed. In the digital case where people sharing the same image might have wildly different display devices, these numbers provide very little predictive power and no communication value. It is as if every lab used totally different microscopes with random objective and eyepiece powers.

Meanwhile, the valuable information is not “what magnification did you use?” but “how big is that thing?”. This has always been a relevant problem when publishing microscopic imagery. Since the optical magnification of a printed image depends on the actual size at which the image is printed (which may change during prepublication), it is standard practice to ignore the real magnification, report the magnification used to take the photo, and include a scale or ruler bar to show the size of the subject. But unlike a physical microscope, where including a calibrated ruler in every field would be quite inconvenient and worthwhile only when publishing results, software can automatically include scale bars and precise measurement tools throughout the entire workflow with no additional effort. And because the scales and tools are derived from the μm/px spatial metric, and drawn using the same techniques used to render the image itself, they can be kept visible at all times and track precisely with the displayed scale.
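
A scale bar is the same arithmetic run in reverse: choose a convenient physical length and convert it to on-screen pixels (a sketch, reusing the effective μm/px idea above):

```python
def scale_bar_length_px(bar_um, scan_mpp, digital_scale):
    """How many monitor pixels a scale bar of bar_um micrometers should span."""
    effective_mpp = scan_mpp / digital_scale   # μm of slide per monitor pixel at this zoom
    return bar_um / effective_mpp

print(scale_bar_length_px(100, 0.5, 1.0))    # a 100 μm bar spans 200 px at native resolution
print(scale_bar_length_px(100, 0.5, 0.25))   # and 50 px when zoomed out to quarter scale
```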

If the goal of preserving the convention of optical magnification in digital microscopy is to maintain a familiar basis of comparison with optical microscopes, it has already failed. To claim that, regardless of display dimensions or pixel density, an image displayed at native resolution is magnified “20x” simply because that was the objective lens used for the scan, or that a version of that image digitally scaled by 0.5 is displayed at “10x”, is an outright falsehood disprovable by anyone with a ruler. On the other hand, optical magnification computed to be physically accurate has no alignment to a physical microscope, and no stability from display to display. So it would seem that to make optical magnification physically correct means to make it unfamiliar; and to make it familiar means to make it false. Thus there is no value in bending the convention of optical magnification to a digital world, and perhaps some harm.

Any scientist must have a mental model of the tools they use for their research, lest they mistake an artifact of their instrument for a phenomenon of their subject. The complexity and continual evolution of software makes this especially challenging, so it becomes tempting to hide that complexity behind a facade of familiarity. But acting on that temptation does the user of digital microscopy tools a great disservice, obfuscating the complexity without actually reducing it.

What does materially reduce complexity in software is internal consistency, the Principle of Least Surprise. Optical magnification does not have a context-free, self-consistent definition within the realm of digital pathology, and its inclusion serves only to obfuscate. In contrast, uniformly communicating image resolution and spatial scale in terms of μm/px aligns the user’s experience and mental model with the self-consistent digital representation.

Aubrey Jones, Senior Software Architect
Flagship Biosciences, Inc.

1 In practice, this is not easily done without some intermediate metric, but it can be solved with a calibration scale included in every scan, so we will take it as given.

2 This leaves aside the even more complicated issue of high DPI support built into modern operating systems, which is designed to rescale UI for ease of use but can make it hard for software to determine the “real” pixel dimensions of a monitor. Disabling high DPI support results in user interface elements that appear too small for easy reading or interaction.

3 By design, many software execution environments, such as those found within a web browser, provide no facilities for determining the physical density of the display. Instead they report a “logical” pixel density, and any rendered image may undergo one or more digital scaling operations before presentation on the monitor. This is how, for instance, a browser can allow “zooming in” on a web page. In these cases, it is simply not possible for software to accurately calculate optical magnification.

