Film scanning: Preserving the past for posterity

Deck: Although the parade may have gone by for Hollywood’s Golden era, it is only just beginning for those involved in film preservation.

When Reed Bovee, now Chief Technical Officer at Reflex Technologies (Burbank, CA, USA), began his career as a documentary filmmaker in Alaska, film restoration and preservation could not have been further from his mind. However, when the lack of good-quality archival footage of historic events became apparent, Bovee and his colleagues decided to build a film scanner to meet the demand.

First demonstrated at the annual convention of the National Association of Broadcasters (NAB; Washington, DC, USA) in 2012, the company’s prototype film scanner was immediately recognized as a valuable film preservation tool, bringing Reflex Technologies its first customers.

“In the early days of motion pictures,” says Bovee, “a nitrate film base was used as the transparent substrate for the photosensitive emulsion used to expose photographic images. Degradation of this film results in an amber, brown or yellowish discoloration, blistering or bubbling of the surface, and a film that may be stuck together or decomposed into a brittle residue.”

Although nitrate-based film is extremely flammable, it was used for most motion picture stock until the late 1940s, when it began to be replaced by film made of less-flammable cellulose triacetate (also known as safety film). However, although this triacetate-based film is less flammable, it is still subject to decomposition.

“When cellulose acetate film base decomposes,” says Bovee, “it produces a vinegar-like odor, which indicates that acetic acid is leaching from the film base and that breakdown has begun.” This can cause cloudy images, make one layer of the film adhere to the next (changing the film’s surface characteristics, a process known as ferrotyping) and loosen the emulsion from the base. The film can also shrink, resulting in individual frames becoming cupped or curved. “Eventually, at the later stages of decomposition, the film becomes increasingly brittle, and transforms into powder,” says Bovee.

Today, polyester-based film has replaced cellulose acetate as the preferred medium for motion picture prints since it is stronger, more resistant to tearing and less brittle. Since such polyester-based film is far more durable and resistant to degradation than cellulose acetate-based film, distribution prints can survive for long periods since the film base does not decay or emanate odors.

Film formats

One of the most important methods of preserving old footage is image digitization. To perform this task, film scanners must not harm the original film or add any new artifacts. Furthermore, such systems must be capable of digitizing numerous types of film stock, ranging from 8mm, Super 8mm and 16mm to 35mm. All of these film types may contain different image formats. For example, the image area of 16mm film is 10.26 x 7.49 mm while that of 35mm film is 21 x 18 mm.

Figure 1: Numerous film formats have been produced during the last century, including 16mm, Standard and Super 8mm and 9.5mm. As can be seen, each of these varies in the type and size of its perforations and perforation pitch, and in whether magnetic or optical soundtracks have been incorporated.

Each of these film types varies in the type and size of perforation pitch it uses and in whether it uses single or double perforations, with which a camera (in the case of a film negative) or projector (in the case of a film positive) advances the film. As well as having a variety of image formats and perforation sizes, many incorporate either magnetic or optical soundtracks (Figure 1).

“To digitize such film formats,” says Bovee, “conventional digitizers use sprocket-based mechanisms to advance the film over a scanning mechanism. Unfortunately, this necessitates the use of multiple sprocket formats to accommodate every film type. Worse, should a film be shrunken by as little as 2%, or if the film is creased down its length, sprocket-based mechanisms cannot be used at all. Furthermore, such conventional scanners often limit how much of the film area can be scanned, preventing the outer edges of the film (which may contain essential information) from being digitized.” By conveying the film through the scanner without sprockets, sprocketless transport systems allow even very badly preserved film to be handled without damage.

However, it is not just film transportation that plays an important role in accurate film digitization. As each frame is presented to the scanning mechanism, it must lie as flat as possible to ensure that the entire image area is sharply focused, and that there is no reduction of the captured image’s brightness or saturation at the periphery compared to the image center (a phenomenon known as vignetting).

As well as capturing high-fidelity images, today’s film scanners must incorporate pick-ups (to capture sound from magnetic soundtracks) and a means of interpreting the optical soundtrack so that it can be converted to digital sound files. Finally, such scanners must be capable of creating digital image and sound files in a number of different formats, such as Microsoft’s uncompressed Audio Video Interleave (AVI) format, Apple Inc.’s QuickTime files, or individual bitmapped (BMP), Tagged Image File Format (TIFF) or Digital Picture Exchange (DPX) files for digital restoration or printing back to film stock.

Scanning stock

Figure 2: At each scanning station, two monitors are used to display captured images and the image processing software used to perform image restoration. (inset) To perform image digitization, film is first loaded onto a feed reel on the film scanner, threaded through an optical reading head and onto a take-up spool.

To control film transport, film illumination, digitization, color correction and image storage, the Reflex scanner employs both a programmable logic controller (PLC) and a host-based PC. At each scanning station, two monitors are used to display captured images and the image processing software used to perform image restoration (Figure 2).

To move the film through the scanner, the system uses both a supply and a take-up capstan servo drive and a supply reel and take-up reel servo drive. Using dual capstans on each side of the scanning mechanism provides smooth film movement through the scanner and allows the film to travel at a constant speed. While the rotating capstans control film movement through the scanning mechanism, the supply and take-up reel servo-drive systems assist in the movement and stopping of the film when required.

To accomplish this, the capstan is connected to a tachometer that feeds capstan velocity to the reel servo-drive systems. Simultaneously, supply and take-up film tension sensors ensure that the film is kept under consistent tension during winding, rewinding and scanning. These functions are controlled using the PLC and operated using a graphical user interface (GUI) on the scanner’s control panel.
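The interplay between the capstan tachometer and the reel servo drives can be sketched as a simple control update. This is a hypothetical illustration only, not Reflex's actual PLC logic; the function name, gains and units are all assumptions.

```python
# Hedged sketch: one iteration of a reel servo update combining a
# feed-forward term from the capstan tachometer with proportional
# feedback from the tension sensor. All names and gains are
# illustrative, not Reflex's actual control code.

def reel_speed_command(capstan_velocity_mm_s: float,
                       reel_radius_mm: float,
                       tension_error_n: float,
                       k_p: float = 0.5) -> float:
    """Return a reel angular velocity command (rad/s).

    The feed-forward term keeps the reel surface speed matched to the
    capstan; the proportional term trims speed to hold film tension.
    """
    feed_forward = capstan_velocity_mm_s / reel_radius_mm  # v = w * r
    correction = k_p * tension_error_n                     # tension trim
    return feed_forward + correction

# 456 mm/s film speed, 50 mm effective reel radius, film slightly slack:
print(round(reel_speed_command(456.0, 50.0, 0.2), 3))  # rad/s
```

The feed-forward term does most of the work; the tension loop only corrects for the reel radius changing as film winds on or off.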

To illuminate the film, it is backlit using a white light LED source. With a built-in cooling system, the LED array produces approximately 23,800 lumens of light with a color temperature of 5000K when strobed with an external TTL pulse. “This is important,” says Bovee, “since the film is moving at a continuous rate and, by using a 25μs strobe, each frame can be captured without blur.”
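A back-of-the-envelope calculation shows why a 25μs strobe is short enough. The 7μm pixel pitch is from the article; the frame pitch and transport speed below are assumptions for illustration (standard 4-perf 35mm film running at real-time 24 fps).

```python
# Why a 25 us strobe freezes the moving film: estimate the motion blur.
# Assumed (not from the article): 4-perf 35mm film, 19.05 mm frame
# pitch, 24 fps transport. The 7 um pixel pitch is from the article.

frame_pitch_mm = 19.05   # 4-perf 35mm frame advance (assumed)
frames_per_s = 24.0      # assumed transport speed
strobe_s = 25e-6         # strobe duration from the article
pixel_um = 7.0           # sensor pixel pitch from the article

film_speed_mm_s = frame_pitch_mm * frames_per_s
blur_um = film_speed_mm_s * strobe_s * 1000.0  # mm -> um
print(f"film speed: {film_speed_mm_s:.1f} mm/s")
print(f"blur during strobe: {blur_um:.1f} um "
      f"(~{blur_um / pixel_um:.1f} pixels)")
```

Under these assumptions the film travels only about 11μm during the strobe, on the order of a single pixel, so the exposure is effectively frozen.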

Just as important is the need to illuminate the film such that the surface has an isotropic luminance when digitized. To do so, two condensing lenses are used to render the divergent light from the LED panel into a converging beam to illuminate the base of the film. To obtain a near-Lambertian reflectance from the film, holographic diffusion material is placed in front of the pair of condensing lenses. “If such a material were not employed,” says Bovee, “then vignetting would occur at the edges of the film during scanning.”

Taking pictures

Figure 3: To move the film through the scanner, the system uses both a supply and a take-up capstan servo drive and a supply reel and take-up reel servo drive. To control film transport, film illumination, digitization, color correction and image storage, the Reflex scanner employs both a programmable logic controller (PLC) and a host-based PC.

In the design of the Reflex film scanner, a PC interfaced to the system’s PLC is used to perform film illumination, digitization, color correction and image storage (Figure 3). As the film moves through the scanner, it is digitized using a full-frame, global shutter CCD imager. The camera is fitted with a macro lens and positioned in a housing approximately 4 in. above the film transport mechanism (Figure 2, inset). After images are digitized, they are transferred over the camera’s interface to a GigE network interface card housed in the host PC.

To trigger both the camera and the strobe as each frame moves into the camera’s field of view (FOV), an optical system consisting of a combined laser emitter and receiver is used to create a red laser beam that then passes through an anamorphic lens.

“The purpose of the anamorphic lens,” says Bovee, “is to spread the laser beam in one axis to shape the beam into a short line, similar to the straight edge of the film’s perforations. By properly calibrating the system, the laser sensor uses the reflected light to detect whether the film is correctly positioned in the scanner.” Once a frame is detected, the sensor’s output is used to trigger both the white LED strobe and the CCD camera.

Recreating the past

In restoring historic archival footage and badly preserved nitrate- or acetate-based film, it is necessary to produce high-fidelity images. While post-production software packages such as DaVinci Resolve 14 from Blackmagic Design (Fremont, CA, USA) can be used to alter the image quality of digitized negative or positive film, they often lack the tools required to improve the color quality or dynamic range of such images.

Recognizing this, Reflex Technologies turned to NorPix (Montreal, Quebec, Canada) for assistance when developing the image processing software required for its film scanner. To do so, Bovee and his colleagues worked with Philippe Candelier, Vice President of Engineering at NorPix and the company’s StreamPix 7 software. This collaboration resulted in the development of a user-friendly menu-driven graphical user interface (GUI) that scanner operators use to increase the dynamic range of captured images, to perform color space transformation, color correction, exposure level adjustment and gamma correction and output restored image sequences in a number of different file formats.

“While the CCD camera used in the system has a 14-bit analog to digital converter (ADC), the dynamic range of images captured by the camera is approximately 12-bits,” says Bovee. “To increase this dynamic range, a number of different methods can be used. Perhaps the most popular of these is exposing the scene – or in this case the film – at two different exposure settings. This results in parts of the image that appear bright being captured using a high-speed exposure and those that appear dark captured with a lower-speed exposure. Combining the results of these two exposures results in an image with a greater range of luminance levels (and thus higher dynamic range) than can be achieved by simply using a single exposure,” he says.

In the Reflex film scanner, this is accomplished by starting and stopping the film intermittently and using the photoelectric laser sensor to decelerate and park the film such that images can be exposed at both 25µs and 60µs while the film is stationary (Figure 4). This can be accomplished at approximately 2-3 fps.

Figure 4: To increase the dynamic range of each frame, the scene is exposed at two different exposure settings – in this case at both 25µs (left) and 60µs (center). This results in parts of the image that appear bright being captured using a high-speed exposure and those that appear dark captured with a lower-speed exposure. Combining the results of these two exposures results in an image with a higher dynamic range than can be achieved by simply using a single exposure (right).

Combining the two exposures, however, first requires that images from the CCD camera are interpolated, since the sensor used in the camera employs a Bayer color filter array. To accomplish this, Bovee used a bilinear interpolation technique supplied as part of the NorPix StreamPix 7 software. This results in two images, taken at different exposure times, each comprising 4864 × 3232 7µm-square pixels with an interpolated RGB value at every pixel.
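The bilinear technique itself is standard: each missing color at a pixel is estimated as the average of that color's nearest neighbors. The sketch below (numpy only, RGGB layout assumed; NorPix's actual implementation is not public) illustrates the idea.

```python
import numpy as np

# Hedged sketch of bilinear Bayer interpolation, assuming an RGGB
# layout. Each missing color value is the mean of that color's
# nearest sampled neighbors, computed here with 3x3 kernels.

def conv3(img, k):
    """3x3 convolution with edge padding, numpy only."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def demosaic_bilinear(raw):
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    r = conv3(raw * r_mask, k_rb)
    g = conv3(raw * g_mask, k_g)
    b = conv3(raw * b_mask, k_rb)
    return np.stack([r, g, b], axis=-1)

# A uniform gray raw frame should demosaic to uniform gray RGB:
raw = np.full((6, 6), 100.0)
rgb = demosaic_bilinear(raw)
print(rgb[2, 2])  # -> [100. 100. 100.]
```

Bilinear interpolation is fast but can soften edges; the fidelity demands of film scanning are one reason the full 4864 × 3232 sensor resolution matters.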

According to Mihai Ghita, Software Developer with NorPix, a number of different methods can be used to create an HDR image. These include algorithms that first merge the exposure sequences into a single image and then use tone mapping to map one set of colors to another to approximate the appearance of an HDR image (see “High Dynamic Range (HDR)”).

Rather than use this method, however, Ghita incorporated the Merge Mertens fusion function of the Open Source Computer Vision Library (OpenCV) into the NorPix StreamPix 7 software, since this algorithm requires neither the exposure times of the two images nor any tone-mapping step. This results in HDR images that can be viewed and stored in both full-frame and full-aperture modes.
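The core idea of Mertens-style exposure fusion can be shown in a few lines. This is a deliberately reduced, numpy-only sketch, not OpenCV's MergeMertens: it keeps only the "well-exposedness" weight (OpenCV also weights contrast and saturation) and works on grayscale frames with values in [0, 1].

```python
import numpy as np

# Simplified Mertens-style exposure fusion: each pixel is a weighted
# average across exposures, with weights favoring well-exposed
# (near mid-gray) pixels and penalizing crushed or clipped ones.

def fuse_exposures(frames, sigma=0.2):
    frames = np.stack(frames)                      # (n, h, w)
    w = np.exp(-((frames - 0.5) ** 2) / (2 * sigma ** 2))
    w /= w.sum(axis=0, keepdims=True) + 1e-12      # normalize per pixel
    return (w * frames).sum(axis=0)

# Short exposure keeps highlights; long exposure opens up shadows.
short = np.array([[0.45, 0.95], [0.05, 0.50]])
long_ = np.array([[0.90, 1.00], [0.40, 0.95]])
fused = fuse_exposures([short, long_])
print(fused.round(3))
```

At each pixel the fused value leans toward whichever exposure rendered that pixel closest to mid-gray, so shadow detail comes from the long exposure and highlight detail from the short one, with no tone-mapping pass required.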

Color correction

With both film positives and negatives, colors within image sequences can shift over time and thus require color correction. To accomplish this task, Reflex Technologies has incorporated a number of tools within the StreamPix 7 software to provide real-time feedback to the scanner operator. These include displays of different color spaces that can be adjusted during the restoration process, reducing the need for single-frame color correction. Four tools within the StreamPix 7 software accomplish this task: an RGB Parade tool, a Waveform Monitor, a Vector Scope and a Histogram Analysis tool.

To enhance the color of digitized images using the RGB Parade tool, the red (R), green (G) and blue (B) values of the 12-bit interpolated pixels can each be manipulated to control the color density across the entire frame (Figure 5). One of the most important functions performed by adjusting these RGB levels is color balancing. This is used to render specific colors correctly by first digitizing an image of a Macbeth color chart with known colors and illuminant, then scaling the RGB components to correct the color density across the image.
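The scaling step described above can be sketched as follows. This is an illustrative white-patch balance under assumed 12-bit values, not Reflex's exact procedure: per-channel gains are derived from the measured RGB of a known neutral chart patch and then applied to the whole frame.

```python
import numpy as np

# Hedged sketch of color balancing against a known neutral patch on a
# digitized Macbeth chart. Values are 12-bit (0-4095), as in the text;
# the specific numbers below are made up for illustration.

def balance_gains(measured_patch_rgb, reference_rgb):
    """One gain per channel mapping the measured patch to its reference."""
    measured = np.asarray(measured_patch_rgb, dtype=float)
    reference = np.asarray(reference_rgb, dtype=float)
    return reference / measured

def apply_gains(frame, gains, max_val=4095):
    """Scale every pixel by the per-channel gains, clipping to range."""
    return np.clip(frame * gains, 0, max_val)

# A faded print reads the neutral patch as bluish; correct it:
gains = balance_gains([3200, 3400, 3800], [3600, 3600, 3600])
frame = np.array([[[3200, 3400, 3800]]], dtype=float)  # 1x1 test frame
print(apply_gains(frame, gains))  # the patch maps back to neutral
```

Because the gains are global, one chart measurement corrects every frame shot under the same illuminant, which is what makes batch restoration of long sequences practical.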

Figure 5: To enhance the color of digitized images using the RGB Parade tool, the red (R), green (G) and blue (B) values of the 4864 × 3232 interpolated pixels can each be adjusted (left) to provide control over their color density across the entire frame (right).

In some situations, it may be necessary to alter the brightness of an image without altering its color, or vice-versa. In such cases, NorPix’s Waveform Monitor uses a simple matrix multiplication to transform RGB images to the YUV (also known as YCbCr) color space, where Y represents the image’s luminance and U and V its two chrominance components. By operating on YUV values, it is then easier for the operator to compensate for, say, the light intensity across an image without altering its color values.
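The matrix multiplication in question is a single 3x3 transform. The sketch below assumes the BT.601 coefficients, which are the standard choice for this conversion; StreamPix may use a different matrix.

```python
import numpy as np

# RGB -> YUV as one 3x3 matrix multiplication. BT.601 coefficients are
# assumed here for illustration.

RGB_TO_YUV = np.array([
    [ 0.299,     0.587,     0.114   ],   # Y  (luminance)
    [-0.14713,  -0.28886,   0.436   ],   # U  (blue-difference chroma)
    [ 0.615,    -0.51499,  -0.10001 ],   # V  (red-difference chroma)
])

def rgb_to_yuv(rgb):
    """rgb: (..., 3) array with values in [0, 1]."""
    return np.asarray(rgb) @ RGB_TO_YUV.T

# A neutral gray has (near) zero chrominance, so brightness edits that
# touch only Y cannot introduce a color cast:
y, u, v = rgb_to_yuv([0.5, 0.5, 0.5])
print(y, u, v)
```

This is exactly why the Waveform Monitor works: lifting or lowering Y changes exposure across the frame while U and V, and hence the colors, stay put.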

Images can also be transformed from the RGB color space to hue (H), saturation (S) and intensity (I) values and displayed using the Vector Scope tool. This allows the hue (color), saturation (amount of color) and intensity (lightness) values of the image to be adjusted independently. This is important when adjusting the white balance of the image since, for example, an image taken under incandescent lighting will have an orange hue and, by increasing the level of blue in the image, a more neutral image can be produced. Indeed, such HSI models are important in many image processing applications because they represent color as it is sensed by the eye.

To correct the lightness, darkness and contrast (or tonality) of images, NorPix’s Histogram Tool can be used. This illustrates how red, green and blue pixels are distributed, and is computed by plotting the number of pixels at each color intensity level, revealing detail in shadows, mid-tones and highlights. Since an image with a full tonal range will have pixels in shadows, mid-tones and highlights, adjusting the RGB histogram allows the user to determine the appropriate tonal corrections that need to be made.
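Computing such a histogram is straightforward. The sketch below works on a single 8-bit channel and buckets the range into shadows, mid-tones and highlights; the bucket thresholds are illustrative, not NorPix's.

```python
import numpy as np

# Per-channel histogram: count pixels at each intensity level, then
# summarize the tonal range. Thresholds below (64, 192) are arbitrary
# illustrative cut-offs for an 8-bit channel.

def channel_histogram(channel, levels=256):
    counts, _ = np.histogram(channel, bins=levels, range=(0, levels))
    return counts

def tonal_summary(channel):
    shadows = np.mean(channel < 64)        # fraction of dark pixels
    highlights = np.mean(channel >= 192)   # fraction of bright pixels
    midtones = 1.0 - shadows - highlights
    return shadows, midtones, highlights

rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=(64, 64))  # synthetic full-range image
counts = channel_histogram(gray)
print(counts.sum())          # -> 4096 (every pixel lands in one bin)
print(tonal_summary(gray))   # roughly a quarter, half, quarter
```

A degraded print shows up immediately in such a plot: faded film piles its pixels into the mid-tones, and the operator can see at a glance whether a correction restores the full tonal range.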

“While the number of image processing tools may at first appear overwhelming,” says Bovee, “an operator can use known visual cues within images (such as the color of an American flag or a blue sky) to accelerate image adjustment. Then, such settings can be saved for each type of image sequence processed so that an operator can later load known adjustments that pertain to particular effects caused by film degradation and make fine adjustments as required.”

Adding audio

To capture magnetic sound, the scanner features a magnetic pick-up head aligned with the edge track of the film. Since no ferrous metals are used anywhere in the film path, no degaussing (erasing) of the magnetic soundtrack can occur. Indeed, only non-ferrous metals such as aluminum, brass and stainless steel are used for any parts which are in proximity to the film.

To capture sound from an optical soundtrack, Reflex Technologies has also developed a proprietary reader that captures an image of the optical soundtrack simultaneously with an adjacent image frame. Software then stitches each frame of the optical sound image together, interprets this and converts the sound to 24-bit, 96 kHz broadcast wave files. According to Bovee, this method works equally well with both variable area or variable density optical soundtracks—either monaural or stereo.
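For a variable-area track, the decode principle is that the width of the clear region in each scanline of the track image encodes the instantaneous audio amplitude. Reflex's reader is proprietary, so the sketch below is only an illustration of that principle on a synthetic track image.

```python
import numpy as np

# Hedged sketch of decoding a variable-area optical soundtrack: in each
# scanline, the width of the clear (bright) region is the audio
# amplitude. This illustrates the principle only; Reflex's actual
# reader and its stitching pipeline are proprietary.

def decode_variable_area(track, threshold=128):
    """track: (n_lines, track_width) grayscale image, one line per sample."""
    clear_width = (track > threshold).sum(axis=1).astype(float)
    samples = clear_width - clear_width.mean()        # remove DC offset
    peak = np.abs(samples).max()
    return samples / peak if peak > 0 else samples    # normalize to [-1, 1]

# Synthetic track: the clear-region width swings sinusoidally.
n, width = 96, 40
widths = (20 + 10 * np.sin(np.linspace(0, 4 * np.pi, n))).astype(int)
track = np.zeros((n, width), dtype=np.uint8)
for i, w in enumerate(widths):
    track[i, :w] = 255
audio = decode_variable_area(track)
print(audio.min().round(2), audio.max().round(2))  # recovered sine wave
```

A variable-density track would be read differently, from the average transmission of each scanline rather than the clear width, but the frame-by-frame imaging and stitching described above are the same.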

Data output

After a particular image sequence and/or soundtrack has been scanned and processed, it must be stored to disk and shipped to the customer. This can be accomplished in a number of different ways. If, for example, the customer simply wishes to view the images on a PC, then the image and/or sound sequence can be stored as an Audio Video Interleave (AVI) file, a multimedia container format created by Microsoft (Redmond, WA, USA).

“Even though a film may have been scanned at 5 or 10fps, for example,” says Bovee, “it can be saved in this format using NorPix’s StreamPix 7 software to play at 24fps.” Similarly, the data can be stored in QuickTime format, the extensible multimedia framework developed by Apple (Cupertino, CA, USA). In either case, the StreamPix software stores these image files along with the scanner settings used to produce the final image sequence.

In many cases, image sequences are stored at 8-bits/pixel since the customer may only want to view them on a computer. In other cases, post-production houses and film laboratories may require the 12-bit camera images to be re-mapped in a 16-bit color space and stored in a Digital Picture Exchange (DPX) file format. This ANSI/SMPTE standard is used to represent the film sequence with all the detail from the film scanner. To do so, each frame of the image sequence is individually tagged and stored which may result in tens or even hundreds of thousands of individual frames.
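Remapping 12-bit camera data into a 16-bit space is a simple scaling. The function below shows a common approach (assumed here; the article does not specify Reflex's exact mapping) in which the 12-bit black and white points land on the ends of the 16-bit range.

```python
# Hedged sketch of 12-bit to 16-bit code-value remapping for DPX
# output. A common approach (assumed, not confirmed as Reflex's exact
# method) scales so black and white fill the full 16-bit range.

def remap_12_to_16(value_12bit: int) -> int:
    """Map a 12-bit code value (0-4095) onto the 16-bit range (0-65535)."""
    return round(value_12bit * 65535 / 4095)

print(remap_12_to_16(0))      # -> 0
print(remap_12_to_16(4095))   # -> 65535
print(remap_12_to_16(2048))   # mid-gray, about half of 65535
```

Scaling by 65535/4095 rather than simply bit-shifting left by four ensures that a full-scale 12-bit white maps exactly to full-scale 16-bit white.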

“Today’s advancements in film scanning technology,” says Bovee, “mean that negatives and positive prints once impossible to scan using conventional sprocket-based scanners can now be preserved for posterity.” With the film creations of the past now becoming increasingly available in digital format thanks to companies such as Reflex Technologies, the people who created them will continue to remain relevant to archivists, film historians and the general public.


Companies and associations mentioned

Apple Inc.
Cupertino, CA, USA

Blackmagic Design
Fremont, CA, USA

Microsoft
Redmond, WA, USA

National Association of Broadcasters (NAB)
Washington, DC, USA

NorPix
Montreal, Quebec, Canada

Reflex Technologies
Burbank, CA, USA