Reports of Zone System's Death are Greatly Exaggerated by John G Arkenberg

A Technical Analysis of the Claims in Johnny Patience’s The Zone System is Dead

“Reports of my death are greatly exaggerated.” - Mark Twain

 1.0 - Introduction

Whether I shoot film or record a digital file for projection, I use the Zone System. In Death of the Zone System II, I explained how I learned ZS and the benefits it conferred on my photographic process. In pre-production on Deadwax, a ShudderTV original, I used Zone System to create a set of in-camera Look-Up-Tables for the digital camera in order to confidently set lights with a light meter. Speaking in photometric values with my gaffer, Katie Walker, allowed us to work quickly on set without being tied to the monitor, and to easily manifest our aesthetic in the post-production pipeline. Stumbling upon Johnny Patience’s The Zone System is Dead, I was surprised to learn that the very methods I use for controlling exposure and tonal representation are fatally flawed, irrelevant in the 21st century, and thereby dead. Should I throw away my notebooks, calibration tablets, charts, densitometer, and light meter? The flattering and enthusiastic comments section following The Zone System is Dead would lead one to believe Patience’s discovery is a resounding success. I e-mailed a list of technical questions, terminology clarifications, and a request for his photographic tests, and never received a response. I submitted the same list of questions in the comments section of the blog; unsurprisingly, they were never posted. As of this date, I still have not received any answer.

I am genuinely interested in the sequence of steps and tests that support his conclusion because subjecting my own research to counterarguments or conflicting experiments is the forge from which to shape new ideas. However, the lack of response implies that his article exists in an echo chamber. I decided to move forward with posting this piece since the ideas in The Zone System is Dead are common misconceptions I find on the internet. Patience’s ideas are persuasive to those who have never, or only briefly, researched Zone System. To address this imbalance I have written two pieces sketching the basic process of how ZS works on a theoretical and technical level. On a practical level I’ve also written a section explaining how to calibrate photographic materials using ZS.

By presenting the technical principles and working practices of ZS along with the science of sensitometry, I hope other photographers can reach their own conclusions about whether this is a tool worth their time and energy to learn. I gave up trying to control the length of this piece because it contains a number of elementary sensitometry concepts that would benefit both beginners and experts. Towards the end of this piece I will explain how I can use ZS and sensitometry to interpret what exactly Patience’s photo process and metering method accomplishes (and what it does not) in order to provide a fair sense of perspective.

2.0 – Finding the Logical Argument

Overall, the majority of The Zone System is Dead is filled with far more anecdotes, opinions, and emotionally stirring rhetoric than clearly explained concepts and scientifically backed claims. While my argument is with the latter, it is imperative to identify these rhetorical techniques in order to preclude them from future discussions.

Patience relies on emotionally charged language to make two arguments. First, he paints those who use and understand Zone System as “traditionalists” and worshippers of a “Holy Grail of B&W” that “repeat golden rules without questioning.” This is a biased characterization of ZS users as blindly chasing an unattainable end. There is no “tradition” in using what is simply a tool for tone control in a photographic system. Also, there is no ultimate summit of perfection, but rather a process of refinement that allows for more creative control. I imagine that the enthusiasm by some photographers for ZS has led the author to take an emotional stance.

Secondly, he dismisses ZS as too complex to understand with a cursory “you can forget about it now (in case you didn’t ever completely understand or like it anyway).” I agree that the methods and practices have a steep learning curve (all puns intended). However, the principles are easily learned by anyone who applies their mind and energy to the task. Often the most valuable lessons in any field are not the easiest. I can hear echoes of Euclid’s “there is no royal road to geometry.” Similarly, there is no royal road through Zone System.

There is also a great deal of anecdotal evidence offered up in this article that lacks substance. There is an appeal to authority in commenting that Paul Caponigro and Gary Briechle agree with his conclusion. This is poor evidence without direct quotes or the context of the conversation. I have to admit that if someone ran up to me on the street and asked “is it okay to overexpose my negative film by a stop or two?” I would respond in the affirmative. If they asked “is overexposing my negative film a stop or two the best method to control tonality in my photographic system” I would respond in the negative. We just don’t know enough about these conversations to warrant it as proof. Also, ZS is not used by all photographers. So identifying two photographers who don’t use ZS just means you found two photographers who don’t use ZS.

Anecdotal evidence extends to the author’s claims he “…researched the topic in depth, shot hundreds of rolls of B&W film, experimented with all kinds of exposure settings, chemicals and development formulas.” However, the article curiously lacks any specific evidence from all this research and testing. This is important because Patience’s criticism of ZS and the technical claims about his metering method are ultimately researchable and scientific.

With emotion and anecdotes set aside, does this article have conceptual and technical merit? I contend Patience’s explanations of Adams and Archer’s concepts and method are flawed. The author mischaracterizes the concepts and practices of ZS to buttress his argument, fails to provide sufficient evidence to back up his claims, makes erroneous technical claims about film technology and sensitometry, displays a lack of knowledge of the history of photographic materials, and overstates the importance of his metering method. No laws of physics are being broken here, contrary to his humble claim in the opening paragraph. Unfortunately for the reader, his technical obfuscation provides little educational content to help others improve their photography.

2.1 The Argument

I find it useful to represent any argument in logical form in order to more clearly understand the claims and their connections. I understand his argument as follows:

Premise 1 – A photographer can achieve excellent results with negative film by overexposing it many stops above the manufacturer’s ISO.

Premise 2 – The results of Premise 1 are in direct contradiction to the concept and methods of using Zone System.

Conclusion – Therefore, Zone System is fundamentally flawed, i.e. dead.

I will begin by looking at Premise 2 because we need to carefully define Zone System versus how Mr. Patience explains it in his article. Once we clarify the concepts and methodology of ZS we can return to the first premise in order to understand his exposure method.

3.0 Zone System versus Patience’s Portrayal of Zone System

The claims about Zone System and how it works are scattershot throughout the article and require consolidation. For clarity I have isolated Patience’s claims in italics within indented paragraphs. His ZS claims fall into four essential categories:

  • Claims about Zone System’s relevancy to modern photographic materials. I will call this the “out-of-date” argument.

  • Claims about the methodology of ZS in relation to metering.

  • Skepticism that tonality is controllable from scene to print.

  • The claim that ZS does not allow for subjectivity.

3.1 Zone System is “Out-of-Date”

“[Zone System] … is based on late 19th century sensitometry studies and it provides photographers with a systematic method of precisely defining the relationship between the way you visualize a photographic subject and the final results.”

The definition the author provides is fairly accurate and a nearly verbatim copy of this Wikipedia entry. What exactly is the problem, then? There is a sense that Patience is leaning into the idea that Zone System is “based” in an archaic 19th century science. I think the author of the Wikipedia entry is partially to blame because the choice of wording seems to directly correlate the beginnings of sensitometry and Zone System. There is a distance of decades between the pioneering work of Hurter and Driffield and the sensitometry techniques and tools available when Adams and Archer created the Zone System. Zone System came into existence in the late 1930s and was refined throughout the 1940s and 50s. By then there were great improvements in standards of measurement, definitions, and the accuracy of the tools. Adams writes in his autobiography “For technical confirmation, I asked Dr. C.E. Kenneth Mees, director of the Kodak laboratories, and Dr. Walter “Nobby” Clark, his associate, to check the accuracy of the Zone System and its codification of the principles of applied sensitometry. Their favorable comments supported and encouraged me.” (Adams, 1976 pg. 275) Work on sensitometry and ZS did not end there; both were researched and updated throughout the 20th century.

Here are two better definitions of the science of sensitometry that frame its methods and aims more clearly. The science of sensitometry is “the scientific method of evaluating the technical performance of photographic materials and processes in the recording of images.” (Eggleston, 1984 pg. 1) Studying sensitometry “…provides the necessary understanding of the technical characteristics of photographic films and papers. It deals with all aspects of the original subject to the finished image.” (Stroebel, et al., 1990 pg. 86) Notice that these two definitions establish an important hierarchy - that photographic materials and processes are studied through the science of sensitometry.

“Everything evolved since 1930, and that includes photographic film, chemical emulsions and photographic paper.”

“…new film emulsions and papers [sic] stocks changed the technical base the Zone System was once founded on.”

These statements identify two issues that should be examined more closely; one involving the nature of science, and the other involving Zone System’s relationship to sensitometry.

The sciences are founded on establishing general principles from research on particular instances. As a science, sensitometry studies the wide range of possible photographic materials from the past to the present. Patience is making a bold claim that a change in emulsion formulas changed the technical base (aka sensitometry) that ZS was founded upon. In order for this statement to be true, Patience needs to furnish proof of an exact change in a photographic material resulting in a fundamental change to the science. If a film manufacturer changing materials and chemicals undermined Zone System, then the whole method would have fallen apart immediately.

Improvements to emulsions, chemistry, and materials (whether in dynamic range, grain, and sensitivity) are all analyzed and understood through the science of sensitometry. Patience’s statements are reversing this relationship. This is similar to claiming that ‘the fact car engines are now cast from aluminum alloy and no longer iron alloy disproves the principles of the combustion engine.’ The author is inverting the relationship between general principles and specific instances in an effort to sow doubt in the reader’s mind.

The second issue concerns the relationship between sensitometry, Zone System, and photographic materials. When an artist makes a photographic image they are engaging with the fundamental science of sensitometry. This science encompasses general principles of tone reproduction and is modified slowly over time through research and engineering. Zone System is a practical method for photographers to apply sensitometric concepts to their specific materials. Simply, Zone System is an interface between the general and the particular.

Diagram 3.1 - Sensitometry is the foundation for understanding the behavior of photographic materials - especially in regards to tone reproduction. This science is largely stable and changes little over time. Zone System operates as an intermediary, allowing us to apply sensitometric principles to the photographic medium using our visual system as the conduit. Finally, above these are the ever changing photographic materials and tools of the art and craft.

Patience is obscuring this hierarchy by confusing methods and materials. As a method Zone System is the application of general sensitometry concepts to the particular case of each photographic process. As a method Zone System gives a photographer the ability to take into account the immense number of variables unique to each photographer’s process to facilitate their vision. The photographic materials themselves are the subject of study. Changing photographic technology is a foregone conclusion and all it takes (whether one uses ZS or not) are a few tests to learn how new materials perform in order to adjust one’s process. What is lost in the obsession over materials is that ZS helps photographers apply sensitometry without the need for expensive (and now hard to obtain) step wedges and densitometers. The revolutionary feature of ZS is that it works on a visual level using the acute perception of the photographer.

“…Adams’ findings seem to make sense if you only consider the traditional darkroom process and have never worked with a scanner or multigrade paper.”

Instead of providing evidence of how changes to materials impact the fundamental methods of ZS, Patience offers up this opinion. He fails to explain how multigrade paper is not a “traditional darkroom process” or how its existence disproves Adams and Archer’s methods. Graded photographic papers existed at the time of Zone System’s creation. Adams discusses graded papers in The Negative (1981, pg. 47) and The Print (1983, pgs. 47-48). The New Zone System Manual covers paper grades as a part of the system. (1976, pg. 71) The methods of testing graded papers for ZS are covered extensively in Davis’ seminal Beyond the Zone System. (1988, pgs. 115-118)

Explanations of how multigrade papers are integrated within ZS are easily located by consulting the index of any of the books cited in this article. ZS practitioners calibrate for multigrade paper using a middle grade in the range possible (typically Grade 2, but I know others who use Grade 3). Starting in the middle means the photographer has options for changing the contrast of their image. Most importantly, there is a technical advantage to calibrating to the middle contrast grades because dodging and burning adjustments on higher grades produce rapid changes in tonality, making it difficult to manually execute tonal changes. On the other side, calibrating to a low contrast grade simply does not provide any practical benefit to the photograph (1981, pg. 47). The weight of textual and practical evidence shows that multigrade paper is integrated into ZS and that there is a logic to its use. The author’s claim that this doesn’t ‘make sense’ is an empty declaration.

The flexibility of ZS methods extends to digital and even hybrid digital/film systems. Users can calibrate digital systems by performing under/over exposure tests and establishing a basic workflow changing only one variable at a time. This is tricky with digital because of the sheer number of options and settings made available in scanning and image processing software. Nonetheless, there are books currently in print detailing how to calibrate ZS for digital that contain information on scanning. While I personally don’t use a scanner, I have a theoretical understanding of how I could incorporate the scanner into my system. This would involve scanning a calibrated step wedge (or even an over- and underexposed graycard) in order to find the minimum level of density that the scanner is able to detect above the base+fog level. This would determine the exposure index of my film, and from there I could experiment with optimal developing times. I don’t have the space to go into specifics, but a sketch of the idea follows, and perhaps a clear article could be written about this in the future. (Note to self.)
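To make the idea concrete, here is a minimal sketch in Python of how that step wedge test could be evaluated. Every number and name here is hypothetical (the wedge densities, the scan values, the noise threshold); the point is only the logic of finding the first density the scanner separates from base+fog.

# Hypothetical sketch: find the minimum density a scanner can separate from base+fog.
# step_densities: the diffuse densities of a calibrated step wedge (e.g., a Stouffer wedge).
# scan_values: mean pixel values of each step from a linear 16-bit scan (denser = darker).
step_densities = [0.05, 0.20, 0.35, 0.50, 0.65, 0.80, 0.95, 1.10]
scan_values = [64010, 63995, 63970, 61800, 55400, 47100, 38600, 30900]

NOISE_THRESHOLD = 100  # counts; the smallest difference we trust above scanner noise (assumed)

base_value = scan_values[0]  # treat the thinnest step as base+fog
for density, value in zip(step_densities, scan_values):
    if base_value - value > NOISE_THRESHOLD:
        print(f"First density the scanner separates from base+fog: {density:.2f}")
        break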

The fact that Hurter and Driffield developed sensitometry and Adams and Archer created Zone System with analog materials is mere historical fact. Sensitometry and ZS would have lived a short life if they were rigidly tied to a specific set of photographic materials. Instead, the concepts developed by these pioneers are more general and encompass the photographic process in its many material expressions. The flexibility of Zone System as a method to understand and control a wide range of materials is pointed out by Minor White, Richard Zakia, and Peter Lorenz in the Author’s Preface to The New Zone System Manual:

“Photographic chemistry is changing. Equipment is in the throes of automation. Weston exposure meters are phasing out. As foolproofing advances, Contrast-Control diminishes. But the principles of sensitometry upon which the Zone System stands, remain firm. And visualization always has the creative power to accommodate whatever changes are ahead.” (1976, pg. 2)

The “out-of-date” argument is mere handwaving to distract from clearly explaining the relationship between the materials and the science.

3.2 Metering

“The most common problem in film photography is underexposure. Not because metering is more difficult than with a digital camera, but because all light meters are using medium gray as their point of reference.”

“Metering for neutral gray often makes shadow areas fall into the wrong range, or if you like, zone.”

These statements cast a lot of suspicion on light meters. My best guess is that he is speaking specifically about the reflective meters internal to cameras. Cameras with internal meters typically average the light across the frame (this averaging can take many different forms) which can produce undesirable results. For example, if your frame is mostly a bright sky then the meter will underexpose. This can be seen with cellphone cameras in auto-exposure mode. However, an averaging meter failing to produce the exposure the photographer desires is neither a ZS problem nor the meter’s fault.

First, to describe all light meters as using “medium gray as their point of reference” is inaccurate. Light meters, whether incident or reflective, quantify the amount of light collected by the photocell as a photometric quantity. Incident meters quantify the illuminance falling on the dome of the meter, whereas reflective meters quantify the luminance averaged across an angle of view. Light meters use a photometric quantity as their point of reference, not middle gray.

The photometric quantities are then converted into an exposure time and f/stop based on the ISO. At this moment we do need to draw an important distinction between incident and reflective meters. Incident meters produce exposure information so that objects under the same incident light will maintain their tone. An incident reading in front of a white, black, or middle gray object all produce the same f/stop and exposure time provided all the objects are under the same intensity of light. Obviously, these meters are not taking middle gray as their reference. However, reflective meters do require exposure calculations to turn luminance into exposure values of a particular tone. The decision by the ISO committee was to make incident and reflective meters agree. This decision is of immense practical value because it equates an incident reading in front of a graycard to the reflective meter reading from the same graycard.

Diagram 3.2 - A visual explanation of how illuminance (in footcandles) and luminance (in candelas per feet squared) are transformed into camera settings. The ISO made the thoughtful recommendation to equate incident readings to reflective readings from a graycard. I chose these numbers from the 100:100:2.8 rule in cinematography - that 100 footcandles, at 100 ISO is correctly exposed at f/2.8. (Assuming 24fps and a 180 degree shutter!)
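As a back-of-the-envelope check, the standard exposure equations reproduce both the graycard agreement and the 100:100:2.8 rule. Here is a sketch in Python; the calibration constants K ≈ 12.5 and C ≈ 330 are typical published values for reflected and incident meters, not universal ones, so treat the output as approximate.

import math

K = 12.5   # reflected-light calibration constant (luminance in cd/m^2, typical value)
C = 330.0  # incident-light calibration constant (illuminance in lux, hemispherical dome)

# Reflected meters solve N^2 / t = L * S / K; incident meters solve N^2 / t = E * S / C.
# For a card of reflectance R under illuminance E, L = R * E / pi. Setting the two
# equations equal gives the reflectance at which both meter types agree:
print(f"implied reflectance: {math.pi * K / C:.2f}")  # ~0.12, a middle-gray tone

# The 100:100:2.8 rule: 100 footcandles, ISO 100, 24 fps with a 180-degree shutter (1/48 s).
E = 100 * 10.764          # footcandles -> lux
S, t = 100, 1 / 48
N = math.sqrt(E * S * t / C)
print(f"computed aperture: f/{N:.2f}")  # ~f/2.6, within roughly a quarter stop of f/2.8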

The author’s complaint is that underexposure is a problem tied to the fact that reflective meters are calibrated to make the tone of an object middle gray. This raises the question - to what tone would he prefer reflective meters render objects? White? Black? A particular dark or light shade of gray? If that were the case we would need to remember that the incident meter is accomplishing one exposure goal, while the reflective meter is accomplishing quite another. This “controversy” is addressed very adeptly in Basic Photographic Materials and Processes.

“It may strike the readers as strange that, whereas film speeds for conventional black-and-white films are based on a point on the toe of the characteristic curve where the density is 0.1 above base plus fog density, which corresponds to the darkest area in a scene where detail is desired, exposure-meter manufacturers calibrate the meters to produce the best results when reflected-light readings are taken from midtone areas. Meter users who are not happy with this arrangement can make an exposure adjustment (or, in effect, recalibrate the meter) so that the correct exposure is obtained when the reflected-light reading is taken from the darkest area of the scene, the lightest area, or any area between these two extremes. This type of control is part of the keytone method and the Zone System.” (1990, pg. 56)

What the authors of this text are proposing is not radical - that a human with a brain should interpret the data of the meter to their photographic needs. Moreover, they observe that there are systems in place to assist the photographer in this task, namely keytone and Zone System. Patience’s criticism of ZS is a tacit admission that he is ignoring the very tool that would help solve his reputed “meter problems.”

3.3 – No Correlation Between Zones in Subject and Zones in Print

“The biggest misconception resulting from the zone system is the suggested correlation between tonal values in a scene and tonal values in your print or scan. There simply is no such correlation.”

The gravest claim, one that strikes at the heart of Zone System, is that there is no “correlation between tonal values in a scene and tonal values in your print or scan.” This is supported by an anecdote about how exposure to a dark and moody scene produces too thin a negative for a proper scan or print. The paucity of technical details in this example provides no evidence to support his criticism. When confronted with exposure issues in classroom settings we go over the student’s metering technique, camera settings, meter readings, and darkroom technique to assess what resulted in a thin negative. A vague story about underexposure does not present a case against ZS. In fact, to seasoned photographers this example suggests that the author made a mistake in metering and/or exposure - which is an easy suspicion considering his claims about metering.

There is another possibility for underexposure problems - the strong possibility that Patience never tested his film/developer combination to find its proper exposure index. One of the earliest steps required in learning ZS is to experimentally verify the exposure index (or effective film speed) of your film/developer combination. This guarantees that the lowest shadow detail in a scene meets a specific threshold of density on the negative. (Notice that this fact is mentioned in the previous citation from Basic Photographic Materials and Processes.) If you are interested, you can find a description of the process in the Appendix of Adams’ The Negative and in the Calibration chapter of the New Zone System Manual. I also wrote a sketch of Zone System Calibration here, and the outline of the test appears below.
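In outline, the speed test reduces to a threshold search. Here is a minimal sketch in Python, assuming a densitometer and a series of low-zone test exposures (Zone I in Adams’ procedure); all the densities below are hypothetical readings:

# Hypothetical sketch of a Zone System film-speed (exposure index) test.
# Each entry maps the EI used for a Zone I test exposure to the measured negative density.
BASE_PLUS_FOG = 0.28   # measured from an unexposed, developed frame (assumed)
TARGET_NET = 0.10      # conventional threshold for the first usable shadow density

zone_i_densities = {400: 0.31, 320: 0.33, 250: 0.36, 200: 0.39, 160: 0.43}

# Prefer the fastest EI whose Zone I density clears the threshold.
for ei in sorted(zone_i_densities, reverse=True):
    if zone_i_densities[ei] - BASE_PLUS_FOG >= TARGET_NET:
        print(f"Working exposure index: {ei}")  # here, the 400 film tests at EI 200
        break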

Calibration brings the Zones in the subject and print into alignment. The New Zone System Manual explains the virtues of rigorous calibration and testing: “Consequently calibration and material testing is an on-going part of a systematized photographer’s career; because it keeps all variables under control, connects photographer to medium, makes visualizing possible and effective.” (1976, pg. 3) I can point to the materials in my ZS Calibration article in order to demonstrate that the Zones do in fact correlate. Look at how closely the graycard in my scanned print matches a graycard I placed on the scanner as reference. (Section 5.4) Patience never once mentions in his article attempting ZS calibration, nor does he produce examples of how calibration failed to correlate zones. This lack of information is reminiscent of the Sherlock Holmes story Silver Blaze, where the critical clue is that the dog didn’t bark. Patience’s statement that he has “shot hundreds of rolls of B&W film” while never producing a single technical counterexample should give the reader pause in believing the author’s claims.

3.4 Expose for Shadows and Develop for Highlights

“The mantra “expose for the shadows, develop for the highlights” reflects exactly that, and suggests to overexpose and underdevelop.”

The author’s claim that the ZS “mantra” suggests you overexpose and underdevelop is incomplete and incorrect. Compare this to the explanation in Basic Photographic Materials and Processes, which points out that “…it can be seen in practice that the darker tones (shadows) should govern the camera exposure determination, while the lighter tones (highlights) should govern the idea of development. This idea is consistent with a common saying among photographers: ‘Expose for shadows and develop for highlights.’” (1990, pg. 103)

Saying that this phrase only suggests overexposure is in line with his argument that underexposure is the greatest problem a photographer faces. However, this is a one-sided portrait because ZS practitioners choose between overexposure/underdevelopment and underexposure/overdevelopment. Both techniques change how the film records the subject luminance range to match the limits of a print paper. “Expose for shadows and develop for highlights” cleanly encapsulates how the photographer uses exposure to set the minimum shadow detail point and uses development as a way to set contrast in a way that suits the intended aesthetic. In a technical sense, ZS users are determining exposure indices and development times for different contrasts in scenes so that their own ideas are realized appropriately and efficiently. Explaining only one half of Adams’ famous adage does a grave disservice to how cleverly this phrase encapsulates sensitometric principles.

“That in itself [the phrase “expose for shadows and develop for highlights”] doesn’t really make sense according to the Zone System, because it artificially changes the density and the tone curve of the negative.”

Patience is actually stating an opinion by claiming that exposing for shadows “doesn’t make sense” and is “artificial.” Changing the density and curve of the negative is precisely what the photographer must do in order to achieve an intended result. There is nothing “artificial” about it because the altered development time is the film photographer’s equivalent of using the ‘curve’ tool in Photoshop. For example, if the scene is too high in contrast for our given paper/film combination we pull the film (overexpose and underdevelop) to extend the exposure range of the film to match the paper, and if the scene is too low in contrast we push the film (underexpose and overdevelop) to compress the exposure range of the film to match the paper. The changing of our film rating/development time is just a tool available to us to achieve a desired tone rendering; the arithmetic behind it is sketched below. If Patience is using the ‘curve’ tool in Photoshop, he is a hypocrite, since this too is “artificially” manipulating the tonality of his image by his own logic.
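In the spirit of Davis’ Beyond the Zone System, the push/pull decision reduces to matching an average gradient to the scene. A minimal sketch, assuming a paper that accepts a negative density range of about 1.05 (a common figure for Grade 2 under a diffusion enlarger; your own calibration will differ):

import math

NEG_DENSITY_RANGE = 1.05  # negative range a Grade 2 paper accepts (assumed; calibrate your own)

def average_gradient(scene_stops: float) -> float:
    """Development contrast needed to fit a scene of a given luminance range onto the paper."""
    log_h_range = scene_stops * math.log10(2)  # convert stops to a log-exposure range
    return NEG_DENSITY_RANGE / log_h_range

for stops in (6, 7, 8, 9):
    print(f"{stops}-stop scene -> average gradient {average_gradient(stops):.2f}")

# A 9-stop scene calls for a lower gradient (pull / reduced development);
# a 6-stop scene calls for a higher gradient (push / extended development).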

Patience’s claim that overexposing and underdeveloping does not work well with a scanner may well be accurate. However, he is not following the advice of Adams’ adage, which would mean calibrating his photochemical materials to the scanner. A ZS user would perform a calibration to the scanner in order to establish a proper exposure index and development time that suits the sensitometry of that specific device in particular settings. The author’s accusation that ZS users “repeat golden rules without questioning” is ironic in light of his poor explanation and misapplication of this famous adage.

3.5 Ultra-Deterministic Zone System

“It’s also not correct that darkroom prints are straight, unmanipulated results where the metered tonal value of the scene translates from the negative directly onto the paper. The photographer decides with each print how he would like to final result to look like and can adjust the brightness and the contrast of a print frame by frame…”

The use of ultra-deterministic language is best exemplified by Patience’s claim that Zone System aims for an “unmanipulated” print where “tonal value translates from the negative directly onto the paper.” I am under the impression he is arguing that Zone System aims for a perfect print - one that does not require dodging, burning, or a change in contrast. This is Patience’s own dogmatic interpretation and is not a part of Zone System as explained in Adams’ books or by subsequent experts. I challenge him to produce citations that support his claim.

I don’t think he’ll find his evidence because Zone System by definition and practice gets you to an “optimum” negative - a negative that is close to one’s previsualization and requires some, but not drastic, tonal adjustments. (1981, pg. 47) Optimizing your process makes printing easier just as a properly exposed digital file is easier to correct in a computer. As I pointed out from my own experience, a print that was closer to my own previsualization wasted less time and fewer materials, and afforded me the time to focus on details. Adams himself produced the wonderful analogy that the negative is the composition and the print is the performance. A negative exposed and developed to ZS methods is like showing up to conduct an orchestra knowing the score and having an idea of how the piece should be realized.

3.6 It’s All Subjective!

One last criticism that should be dealt with is the notion that “it’s all subjective anyway” expressed in the “Darkroom prints are subjective” section. Invoking subjectivity is a lazy argument to let everything descend into a stew of relativity. What he fails to mention is that subjectivity is given a clear role integrated with the objective truths of sensitometry. This is addressed on page 1, Chapter 1 of The Negative.

“The concept of visualization set forth in this series represents a creative and subjective approach to photography. Visualization is a conscious process of projecting the final photographic image in the mind before taking the first steps in actually photographing the subject. Not only do we relate to the subject itself, but we become aware of its potential as an expressive image. I am convinced that the best photographers of all aesthetic persuasions "see" their final photograph in some way before it is completed, whether by conscious visualization or through some comparable intuitive experience.” (1981, pg. 1)

It is perfectly acceptable that Adams changed his printing of Moonrise Over Hernandez over time because his visualization also changed. The tools of the photographic medium are merely the vehicle for artistic expression. Similar to the author’s mistaken inversion that places sensitometry at the mercy of photochemical tools, he is placing the artistic vision at the mercy of Zone System.

If ZS contains subjectivity then why worry about metering, exposing, and developing so precisely? Why not just create your artistic expression in the computer or in the darkroom? The truth is that you can’t fix everything in post, whether capturing on digital or film. Even where correction is possible there would be a noticeable impact on image quality. You, as the artist, should be in charge of maintaining your vision throughout the process. Patience is promoting a careless attitude toward the early steps in a process because ‘you will change it in post anyway because it’s all subjective.’ This is a defeatist stance that places photographers at the mercy of their own tools. What ZS books stress in the opening chapters is that the photographer should strive to understand their own subjective intentions and the photographic process to achieve their vision. Those who are interested in maintaining intimate control of their process from soup to nuts tend to use ZS or some modification of it. In contrast, I would argue that it is precisely because photography is subjective that we experiment, test, and calibrate our materials - to achieve our subjective vision.

3.7 – Conclusion: What Patience calls Zone System is not the Zone System

For someone who claims to have read The Negative and The Print and admires Ansel Adams’ technical expertise, Patience presents a distorted picture of the method and science. First, he has ignored how ZS works on a technical level and how it incorporates the artist’s subjective vision. Second, his skepticism that ZS fails to correlate tonality between subject and print requires hard technical proof which he fails to provide. The counterexamples he does provide center around improper explanations of light meters and how they work. Finally, he makes a number of bold assertions about photographic materials that are false or explained in Zone System books. Simply, the author is describing a sham Zone System to refute.

4.0 Sensitometry Claims

There are a number of claims I’ve separated because they deal less with ZS and more with the science of sensitometry.

 4.1 – ISO is a “Minimum Value”

“The ISO rating (“box speed”) states the minimum value at which you will be able obtain a properly exposed negative.”

Nowhere in the ISO document for determining film speed for black and white negative film (ISO 6:1993) does it claim the speed rating is a “minimum value.” The ISO number is calculated from a curve developed to a specified slope, using the exposure point, designated Hm, where density reaches 0.1 above base + fog. This point is commonly referred to as the minimum density point because it represents the lowest amount of usable density on the negative. This coincides with the point where a photographer would typically want to place the darkest shadow detail. (The minimum density point is around Zone II in ZS parlance.) The point Hm is used for calculating ISO, but that doesn’t make it the ‘minimum value’ as he claims.

Diagram 4.1 - ISO 6:1993 - Determining film speed for black and white negative film. Film speed is calculated from point Hm. The location of point m on the curve corresponds to the lowest amount of shadow detail from a subject. This point coincides with Zone II in the Zone System.
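For the curious, the arithmetic of the standard is short: ISO 6:1993 defines the arithmetic speed as S = 0.8/Hm, with Hm in lux-seconds. A sketch with a made-up measurement:

# ISO 6:1993 arithmetic speed for B&W negative film: S = 0.8 / Hm, where Hm
# (in lux-seconds) is the exposure producing a density of 0.1 above base+fog
# under the development contrast the standard specifies.
Hm = 0.002  # hypothetical measured speed-point exposure, in lux-seconds
S = 0.8 / Hm
print(f"ISO speed: {S:.0f}")  # -> 400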

The logic behind ISO standard 6:1993 is supported by many practical and aesthetic reasons. First, this is the fastest ISO that still provides 3 to 4 stops of shadow detail. This provides the photographer with an adequate amount of shadow detail while still allowing them flexibility to stop down or shoot with a high shutter speed. In actuality, the ISO on the box is the maximum value, not the minimum. Patience has the language backwards, which should give one pause before considering him an authority on this subject.

The ISO speed is not dogmatic, since the photographer will change the exposure index for their film based on a number of factors including choice of developer and subjective intentions. For example, Beyond the Zone System discusses artistic reasons for selecting a different speed point depending on the amount of shadow detail desired. (1988, pg. 39) Second, psychophysical tests made decades ago found that people preferred exposing a scene closer to the toe of the film curve because grain is less prominent. (Eggleston, 1984 pgs. 30-31. Todd & Zakia, 1969, pgs. 73-77.)

4.2 Overexposure = More Information

“The more exposure you give your negative, the more information it will hold.”

The truth is not as simple as Patience is portraying. Overexposing (or lowering your exposure index) is necessary if your negatives are thin. For most beginning photographers this is due to issues with metering and/or a lack of calibration within their materials. The evidence (or lack of evidence) provided by the author in the previous sections explains his continual insistence that film should be overexposed. However, his simplistic correlation does not hold up against the ZS and sensitometry texts.

If you choose to expose your subject for the darkest shadow located at point Hm, the ‘speed point,’ then you are allowing for over-exposure latitude. In ZS language you are placing Zone II, where details in shadows disappear, at the minimum density on the curve. (Diagram 4.2, below, illustrates this fact.) However, you could also place the lightest highlight near the point where the film “tops out” at its maximum density. This is locating Zone VIII, where highlight details disappear, near the point of maximum density on the film curve. A photographer can also choose to place their subject’s information in the middle of the curve and balance the exposure latitude on either end. All of these possible scenarios are explained and understood through both sensitometry and ZS.

Diagram 4.2 - This illustration is from Basic Photographic Materials and Processes, 1st Edition. Graph A shows the conventional method of placing subject tonality near the lower part of the curve. This provides excellent tonal rendering with little grain. Graph B is another possibility, but would make grain more pronounced. Graph C shows a third possibility splitting the difference. Notice that film has upper and lower limits.

Notice in these illustrations that there is a lower limit and an upper limit to the light a film emulsion can record. Patience claims that “the limit is never the negative,” which contradicts the fact that all photographic materials have limits. A film/developer combination can only capture a determined range of information. The measure from the lowest recorded signal to the highest is the film/developer combination’s dynamic range, and this will not change without altering the materials or development time in the process. If the author was struggling with shadow detail in what is, presumably, an uncalibrated process, he would need to lower his effective film speed just to meet the threshold of placing the lowest shadow detail at the minimum density point. He is not “getting more information” but instead rescuing lost shadow detail and bringing his photographic process into alignment.
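A toy model makes the point about limits concrete. The curve below is a hypothetical S-shape standing in for a real characteristic curve (none of the constants describe an actual film); sliding a fixed 7-stop subject up the exposure axis shows the recorded density range collapsing as the highlights ride onto the shoulder:

import math

def density(log_h: float) -> float:
    """Hypothetical S-shaped characteristic curve (not a real film)."""
    base_fog, d_max = 0.25, 2.0
    return base_fog + d_max / (1 + math.exp(-2.2 * (log_h + 1.0)))

SUBJECT_RANGE = 7 * math.log10(2)  # a 7-stop subject, in log-exposure units

for placement in (-2.5, -1.5, -0.5, 0.5):  # log H at which the deepest shadow falls
    shadow, highlight = density(placement), density(placement + SUBJECT_RANGE)
    print(f"shadow at logH {placement:+.1f}: negative density range {highlight - shadow:.2f}")

# The range shrinks as exposure climbs: past a point, "more exposure" stops
# yielding more separation because the shoulder compresses the highlights.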

However, if you keep raising the exposure, you run the risk of crushing highlight detail in the shoulder of the film curve. Therefore, the skillful photographer is interested in making sure they are placing their subject’s tones at an optimum place on the film curve for printing or scanning. Patience is following the very advice espoused in sensitometry and ZS books, but turning it around as a criticism. For example, the back cover of the book Photographic Sensitometry: The Study of Tone Reproduction claims:

Diagram 4.3 - The marketing department certainly got involved with writing the back cover information for the book Photographic Sensitometry. These are true statements, but a more nuanced view is depicted inside the text.

If negative films can handle overexposure, what are the drawbacks? As already mentioned, there is a possibility of reducing highlight detail. Another significant change to image quality is increasing graininess, because fixed pattern noise (aka grain) increases with exposure. These are two significant impacts to quality that Photographic Sensitometry and many other books warn about. You are welcome to choose this process for aesthetic reasons if so desired. The impact to highlights and grain is demonstrated by Patience in a series of over and underexposed frames from a roll of Tri-X at this link. Notice that underexposure crushes shadow detail and creates milky blacks. From +6 overexposed and higher the grain is pronounced and highlight details are compressed.

Patience’s claim that ‘more exposure means more information’ is wishful thinking compared to explanations by sensitometry experts. Despite its sensational text on the back cover, Photographic Sensitometry more soberly admits that “a plot of print quality (as estimated by a panel of viewers) versus exposure index level indicates that there is an optimum. [The accompanying figures] show a reduction in print quality with both under and overexposure, but the reduction is more serious with under exposure.” (1976, pg. 169)

Patience’s most egregious statement comes at the end of his article when he claims that “A lot of things I am sharing in this article shouldn’t work according to the books. But they do, and they do so beautifully.” With the word beautifully he links to the under and overexposed images of Tri-X discussed in the previous paragraph. The problem is that his methods and claims are covered with greater accuracy and depth in books about Zone System and sensitometry.

4.3 The Larger the Film the Greater the Dynamic Range

“Large format sheet film, for example, has way more exposure latitude than medium format film. Medium format film has way more latitude than 35mm film, which again is a completely different story than a modern digital sensor. The size of the negative has a tremendous influence on the tonal range and the final result, be it a scanned image, a traditionally printed image or both. Dividing every image into 10 identical zones is a questionable approach, because the tonal response of a large format negative and a way smaller 35mm negative to the same exposure, the same amount of light, are completely different.”

Patience’s lack of knowledge extends to the design of photographic materials. In the paragraph above he equates larger film sizes to a greater dynamic range and therefore the number of zones that should be considered by the photographer. (For an understanding about the number of zones see my first Technical Sketch section 3.3.) However, looking at the technical publications for any available film stock reveals no difference in emulsion formula, but a change in thickness of the acetate base. For example, Ilford’s technical publication for FP4+ begins by describing the film and the different backings for each film size. The document only displays a single characteristic curve on page 5 because the emulsion, and therefore the tonal response, is the same for all format sizes. The technical publication for Kodak’s TMAX 100 is similar - there is no change in characteristic curves for size of format, but instead for different developing chemicals.

Without going too in-depth, there are some reasons why Patience may believe that format size changes the tonal rendering. These are not inherent to the film format itself, but involve changes to the process surrounding the change in format size.

  • Different tank sizes causing changes to the volume of developer to area of film ratio.

  • Film development in a tank is different than tray developing sheet film.

  • Contact printing sheet film produces different print tones than enlarging film.

  • A psychological ‘feeling’ that there is increased tonality due to the increased detail of large format films.

There is also a strong possibility that he is responding to a difference in tonal rendering between Tri-X roll film (TX) and Tri-X sheet film (TXP). Kodak uses a different emulsion formula for 35mm and 120 film than for large format film. However, notice that the curves for 35mm and 120 are identical when overlaid on each other.

Diagram 4.4 - On the left the curves for Tri-X 35mm and 120 are overlaid. Notice that they are identical because both use the same emulsion formula. On the right is the curve for Tri-X sheet film, which is a different emulsion formula.

The research required to confirm the emulsion formula for a range of format sizes is very simple. One doesn’t even need to buy, expose, develop, or analyze any of the materials; simply download and read the PDF documentation from the manufacturer. This is further evidence of a genuine lack of research by the author. More importantly, the differences he is seeing - variations in the processes used between different formats - could be analyzed through the direct application of sensitometry or Zone System.

4.4 Out-of-Date Redux

“But I think it might be time to update the books and accept that what Adams suggested was solely made for the traditional darkroom printing process – born out of the problem that he had to find a way to compress 15 stops of dynamic range from a well exposed large format negative onto a sheet of paper that can only accommodate a total range of approximately 8 stops.”

The very reason that photographers care so much about the exposure and development of their film is precisely due to the fact that the final display has a limited range. This is not a problem that needs updating, but a part of the photographic process. I have explained this with references in previous posts in my Death of the Zone System series. For brevity I will state the major facts with the links to the posts which contain references to studies on the subject.

First, the dynamic range of the human visual system is tremendous if you are allowed the time to adapt to total darkness (which takes about 30 minutes) or to bright light. You experience this time lag walking from the sunny outdoors into a dark building and vice versa. Vision scientists worked with sensitometry experts to establish the dynamic range of the visual system when fixated on a photograph. The range (containing object detail) turns out to be around 8 stops when viewing a properly lit photographic print, or a computer monitor in interior settings. My writing with the data and citations is in part 4 of the Death of the Zone System.

If the visual dynamic range is only 8 stops this explains why so many display mediums possess the same range. Patience is correct that photochemical paper has an 8 stop range. (More accurately somewhere in the 7 to 8 stop range.) Many slide films contain an 8 stop range when projected. A properly calibrated computer monitor has 8 to 9 stops. (You can meter your own monitor to establish this fact.) This is why the number of Zones in the Zone System work so well across different mediums - it is based on appearance to our eye.
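The stop figures here convert directly to density ranges, since one stop is a factor of two in luminance, or log10(2) ≈ 0.3 in density. A quick check in Python, using typical reflection density ranges (which vary by paper and surface, so these are assumptions, not measurements):

import math

def density_to_stops(density_range: float) -> float:
    """Convert a density range to stops: one stop = log10(2) ~ 0.3 density."""
    return density_range / math.log10(2)

# Assumed reflection density ranges for a glossy B&W print, spanning typical figures
for d_range in (2.1, 2.4):
    print(f"density range {d_range:.1f} -> ~{density_to_stops(d_range):.1f} stops")
# -> ~7.0 and ~8.0 stops, matching the "7 to 8 stop" figure for paper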

Negative films and digital sensors are designed to capture a large dynamic range in order to provide the photographer flexibility in the darkroom and/or computer. I cover how the range of tonality is captured and compressed in my How Zone System Works Part I and Part II. If you have not already read these I encourage you to do so because they explain the science and methods of these systems.

Patience is not leveling any criticism at Zone System or revealing cracks in the science of sensitometry. His statement that it was Adams who needed to find a way to compress 15 stops into 8 is historical fiction. The move from a larger DR in the capture medium to the compressed DR of the display is true for analog and digital and will continue that way until our visual system evolves.

4.5 Conclusion: Sensitometry Misunderstood

Johnny Patience’s explanations of sensitometry and its relation to Zone System are inaccurate and/or simplistic. He incorrectly explains how ISO is calculated and fails to understand that it is a manufacturer’s recommendation. His claims regarding photographic materials are easily debunked by research online. By turning facts about the process of tone reproduction into criticisms and failures he reveals he does not understand the content he is criticizing. The most revealing statement is the boastful “A lot of things I am sharing in this article shouldn’t work according to the books” when, in fact, his claims are covered in greater detail and accuracy in many books. One of his greatest discoveries - that negative film is forgiving when overexposed - is even used as a marketing blurb to encourage photographers to read a book on sensitometry.

Giving Patience the benefit of the doubt, perhaps he has read many wildly inaccurate books on sensitometry and Zone System. I don’t know where he found these books and who wrote them, but I hope he informs me of their titles and includes quotes. Another possibility, weighing the evidence Patience provides against the material cited, is that he read these books and just simply did not understand the concepts. There is also the possibility he never applied himself to any research or methodical study and is simply calling our bluff.

To be fair, when I began learning photography and darkroom printing I floundered a fair amount. There is a familiar ring when he writes that he “…researched the topic in depth, shot hundreds of rolls of B&W film, experimented with all kinds of exposure settings, chemicals and development formulas.” At first I believed trying numerous films, papers, and chemicals would improve my understanding of the medium. The results were all over the map because I was just using manufacturer recommendations. In hindsight I recognize that I was never properly handling any of these materials and was making shallow judgments. (I criticized TMAX 100’s tonal rendering in front of John Sexton! Totally embarrassing in retrospect because my choice of exposure index and development time was the guilty culprit.) Fortunately, I encountered some excellent mentors who forced me to choose one set of materials and rigorously test them through the Zone System. I was also only allowed to change one link in my photographic chain at a time, and only after understanding the consequences. The claim of testing hundreds of rolls of film and all kinds of chemicals sounds impressive. However, in my own experience, authorities on film photography usually speak to only a narrow range of materials they use. On the other hand, they speak very clearly about the processes used to understand any material.

5.0 Patience’s Metering Method

Returning to the logical form of Patience’s argument, we can answer whether there is any evidence that overexposing film is a violation of Zone System or the science of sensitometry. The answer is firmly negative (another pun intended) because of the sheer weight of published evidence that refutes Premise 2. Rather, the decision to overexpose film is accommodated by ZS and sensitometry and covered in numerous publications. There is experiential evidence as well if you choose to study and try Zone System methods. Don’t take Patience’s or my word for this - try it yourself.

Patience speaks highly of his metering method throughout The Zone System is Dead. Using the science of sensitometry and ZS I would like to explain his metering method in detail in order to put its purpose in proper perspective.

5.1 – Metering Method

“My metering method
I meter all color negative film the same. I use a very simple analog incident light meter (Sekonic L-398 A), nothing fancy or expensive. I rate my film half box speed. If I shoot Porta 400, that means I set the meter to ISO 200. Then I meter for the shadows, which means I bring my meter into the part of the scene that has the least light. If I don’t have a shadow anywhere close, I shade the bulb of the meter with my hand. I hold the meter in a standard 90 degree angle to the ground, which means nothing else than parallel to the subject, with the bulb facing the direction of the camera. That’s it.”

Patience’s metering method sounds simple because it is. He is rating his film a stop slow and metering in the shadows, which together move the subject luminance range up by anywhere from two to four stops.
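The arithmetic is worth spelling out. A minimal sketch, with hypothetical scene values: half box speed contributes one stop, and an incident reading taken in the shadows contributes however many stops the shadow illuminance sits below the key light.

import math

def stops_over(box_speed: int, rated_speed: int, key_fc: float, shadow_fc: float) -> float:
    """Total overexposure vs. a key-light incident reading at box speed."""
    speed_stops = math.log2(box_speed / rated_speed)  # e.g., 400 rated at 200 = +1 stop
    shadow_stops = math.log2(key_fc / shadow_fc)      # how far the shadows sit below the key
    return speed_stops + shadow_stops

# Hypothetical sunny scene: key light at 1000 footcandles, open shade at 250 (2 stops down)
print(f"{stops_over(400, 200, 1000, 250):.1f} stops over")  # -> 3.0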

5.2 – Through the Lens of ZS and Sensitometry

Establishing a metering method that favors overexposure with negative films is easy to justify. First, the published ISO is established through ideal testing, but varies based on an individual’s process including their choice of printing materials. In helping others learn Zone System, I find it altogether too frequent that a student learns their choice of film developer results in a loss of film speed. On the other hand, I have friends working with high acutance developers such as Rodinal who find their film gains speed. Johnny Patience uses Tri-X in XTOL developer and provides little information about his methods of handling the film. Searching for XTOL, we can see in the comments that he uses XTOL in a stock dilution, but he does not provide any developing time. However, he does admit that:

I don’t develop any of my film at home because of the volume I shoot. My lab develops and scans my work and then sends back my negatives. (23rd of October 2016)

Unfortunately, with no sensitometry graph from Kodak for Tri-X in XTOL, and with a lab handling the development, we have little information to go on. What we can surmise is that the film is losing speed, since Kodak’s recommendations in its technical data for XTOL begin at an ISO of 400 and nothing lower. The Massive Development Chart does contain information for Tri-X in XTOL with a range of exposure indices and dilutions, but we would need to know what the lab is doing. (I considered sending a roll of film I exposed to his lab to make a sensitometric analysis of the negative and the scan. I acknowledge it is critical to my argument, but have little time at the moment to spare on this.)

Second, establishing a correct exposure index for a set of materials requires assiduous testing and observation. In my post on Zone System calibration I discovered that Agfapan APX 400 lost nearly two stops of speed in Pyro PMK. Patience provides no evidence of any attempt to calibrate his materials through Zone System. However, I believe that Patience has “observed his way” into a more calibrated photographic process by overexposing. To someone acquainted with ZS and sensitometry, his complaints make it plain that he found his negatives thin and that shadow detail was a problem. He correctly followed the path of altering his exposure index and changing his exposure methods to improve image quality. At the same time, however, he denigrates the concepts that support his very decisions and would help him understand his materials and methods.

Patience’s recommendation to overexpose is only a rule of thumb. While it can be analyzed from a sensitometric or Zone System standpoint, it is not a system or method, but really just an observation and an easy way to get out of a jam. Zone System and sensitometry are systems of tone reproduction. As systems they incorporate a range of concepts in order to assist the photographer in understanding, calibrating, and executing the methods to create images. Patience’s metering method is a useful observation that generally works.

5.3 - Scanning

I am also under the assumption that the scanner has poor sensitivity to subtle changes in density in the toe of the film curve. Therefore, rating his film slower helps move the exposure onto the straight-line portion, which provides more density separation for the scanner to see and therefore more data to work with in Photoshop. Once again, he is expressing a legitimate adaptation of his process to better fit the limitations of the scanner. It would be interesting to see a scan of a Stouffer step wedge at different scanner settings (along the lines of the sketch in section 3.1) to better understand the properties of the scanner.

6.0 – Conclusion

Consider Johnny Patience’s argument in light of the evidence from authorities on sensitometry and Zone System, as well as examples of the methods in practice.

Premise 1 – A photographer can achieve excellent results with negative film by overexposing it many stops above the manufacturer’s ISO.

This claim may be frequently true, but it depends on many contingent circumstances. Patience is portraying this statement as a sweeping truth without context or nuance. A more careful statement would begin with ‘it is possible to achieve excellent results…,’ which is more accurate and weakens the dogmatism of his stance.

Premise 2 – The results of Premise 1 are in direct contradiction to the concept and methods of using Zone System.

Sensitometry and Zone System as concepts and methods encompass not just the claim of the first premise, but a complete system of tone reproduction. The author’s insistence that these tools are the problem and not a solution goes beyond mere misunderstanding into the active dissemination of misinformation.

Keep in mind that Patience is making an argument concerning scientific processes. (These are processes in the service of an artistic vision, but ultimately scientific.) The burden of proof he must meet includes citations from texts along with technical evidence from following the disputed method. From the perspective of people conversant with sensitometry and ZS (such as myself and my colleagues) it appears that Patience chose to show up to a battlefield without a single weapon. Claiming to have "read all the books" and that the process "does not make sense" is not evidence.

The faulty or inaccurate explanations of sensitometric principles, ZS methods, and the behavior of photographic materials reveal this piece is not so much a requiem for Zone System as a monument to Patience's mistaken ideas. One could go so far as to say that he not only forgot to bring a weapon to the battlefield, but never showed up to begin with. He is on a Quixotic quest to attack some shadow Zone System he believes is the enemy, all the while overselling his metering method as a simple solution to a complex problem.

When we decide to put information online, we take on a great responsibility to the larger photographic community. Too often I find people struggling with Zone System (it's a long learning curve!), and the poor information online only further confuses the process or summarily relegates it to the trash heap. We have to be careful when we make claims, and I'm more than willing to produce more evidence in the form of my own tests, sensitometric data, and research to back up mine. (I figure e-mails I receive with questions or arguments can lead to future posts.)

At the end of the day, this is where I find Johnny Patience's article most troubling, beyond the technical inaccuracies and logical fallacies. I really love his concluding paragraphs, where he expresses the importance of making the photographic process serve the creative vision and the need to test claims for ourselves. However, Patience's assessment of Zone System fails to follow the same generosity of spirit, and so becomes a lost opportunity. Had he solicited advice from users and experts to weigh in on each problem he encountered with ZS, his blog could have been profoundly educational. Instead, he chose to denigrate ZS and sow confusion with personal mythologies.

6.1 Where Does Freedom Come From?

As a final thought I want to present a different approach to the art of photography than the one Johnny Patience espouses. He invokes a need for freedom, specifically a freedom from technicalities.

“With practice you will be able to guess your exposure, which removes all technicalities and for me, is the ultimate freedom. Nothing needs to stand between your vision and your final result.”

“…it’s all about freedom. Sometimes even freedom of thought.”

Personally, I find that Patience's "freedom from all technicalities" and "freedom of thought" sound more like avoidance. Photography is one of the most technologically complicated mediums. It not only encompasses a broad range of the physical sciences, but involves a large number of processes to capture, process, and display an image. On a technical level, the photographer simply cannot choose whether to engage with sensitometry, optics, chemistry, and so on; these disciplines are along for the ride whether you like it or not. Conceptually, the artist must make an honest effort to engage with and understand the methods and ideas of others. "Freedom of thought" is not genuinely embodied by a flawed understanding and characterization of Ansel Adams' method or views. Instead, Patience's The Zone System is Dead reads as if he is a prisoner of his own beliefs.

To be clear, I'm not suggesting that learning photography must begin with hard physics. Many start with the camera on automatic settings and focus on optics and composition. What I am claiming is that anyone wishing to deepen their skill and craft will ultimately seek out further topics which, alas, are technical in this art. You are welcome to choose how deep you want to go. What I do not recommend is stopping short and claiming everything deeper is incorrect because you chose not to engage with it.

What I contend is that learning the science and technology is the way to find freedom. Freedom is found through the technicalities. Freedom is found through understanding another person’s ideas before forming one’s own. Freedom involves engagement, not dismissal.

Cited Books:

Adams, Ansel. The Negative. Boston: Little, Brown and Company, 1981.

Adams, Ansel. The Print. Boston: Little, Brown and Company, 1983.

Adams, Ansel, and Mary Street Alinder. Ansel Adams: An Autobiography. Boston: Little, Brown and Company, 1985.

Davis, Phil. Beyond the Zone System, 2nd Edition. Boston: Focal Press, 1988.

Eggleston, Jack. Sensitometry for Photographers. London: Focal Press, 1984.

James, T.H., ed. The Theory of the Photographic Process, 4th Edition. New York: Macmillan Publishing Co., Inc., 1977.

Stroebel, Leslie, et al. Basic Photographic Materials and Processes, 1st Edition. Boston: Focal Press, 1990.

Todd, Hollis N., and Richard D. Zakia. Photographic Sensitometry: The Study of Tone Reproduction. Dobbs Ferry: Morgan & Morgan, Inc., 1969.

White, Minor, et al. The New Zone System Manual. Dobbs Ferry: Morgan & Morgan, Inc., 1976.

How Zone System Works: Zone System Calibration by John G Arkenberg

(N.B. - I wrote these instructions for Zone System Calibration as a sequel to the previous two articles about the conceptual and technical underpinnings of ZS, though anyone is welcome to read this on its own. It is also a key piece setting up the last article in my series, which will directly address the mistakes and misinformation in Johnny Patience's Zone System is Dead blog.)

(N.B.B. - The calibration example in this article is from well over 15 years ago. I saved many of the materials, but some of my notes are lost. The prints are in good condition if one considers tonality only, but they are very scratched and dusty. I did my best to clean them up minimally in Photoshop.)

1.0 First Steps

1.1 Give Yourself Time

This calibration procedure cannot be rushed. It is best performed over a weekend with comfortable time limits. Try to limit any extraneous distractions so you can give each step your undivided attention. Don't rush!

1.2 Select One Set of Photographic Materials to Test

I am writing this under the assumption that you have access to a film camera, a graycard, and a darkroom. (I am also assuming that you understand how to develop film and print in a darkroom.) I should also admonish you to perform these steps using only one fixed process. By that I mean you need to pick one film, one developer, one paper, one paper developer, etc. If you change anything in this system you will need to perform this series of steps all over again to recalibrate it.

If you are just starting down the road of calibrating your materials for Zone System I would avoid testing multiple films. I recommend focusing on one set of materials to begin with in order to become comfortable with the methodology. Once you are comfortable with the procedure you can expect to juggle multiple variables during a period of testing.

1.3 Set Up a ‘Scene’

You will also need to set up a scene so that you have a standard visual reference to observe and photograph. The first scene I used to calibrate materials can be seen in the prints starting with Image 5.2. (I have different opinions now, as can be seen in Image 1.0.)

Light your scene with artificial light in order to avoid the changes in intensity or quality of daylight conditions. Since you should expect to recreate this scene (with a few changes) for future calibration tests, artificial light also provides greater repeatability. The scene should include at least an 18% graycard, an object that spot meters three stops over middle gray, and an object that spot meters three stops under middle gray. (These are Zone VIII and Zone II values respectively and are extremely important to the calibration procedure.) Additionally, I highly recommend the following: an X-Rite ColorChecker, slightly crumpled aluminum foil, slightly crumpled black wrap, and objects that spot meter in whole stop increments above and below middle gray. I have a friend who keeps all the objects in a box so he can easily access them and set them up.

Image 1.0 - The scene above is composed of teaware containing areas that spot meter at whole-stop differences. It also includes some standard charts: a graycard, the Kodak Q-14, and the X-Rite ColorChecker. Below is the key indicating which areas of the teaware fall in each Zone. I particularly like the porcelain dish in Zone IX because it contains a relief that will not render correctly unless your system is optimized. These days I use this scene only for testing digital cameras.

1.4 Set Up Your Darkroom

Make sure your darkroom is set up and your darkroom processes are standardized. This entails making sure there is no variation in the control of your tools. There should be no fluctuation in the intensity of your enlarger bulb when the refrigerator turns on. The faucet should supply water at a consistent temperature. The thermometer you own has been checked and calibrated. Your safelight does not fog your paper. Your agitation technique is consistent, etc.

Remember that you are approaching these tests as a scientist would approach an experiment. Ruthless consistency is the modus operandi, and I've seen many a good darkroom printer produce prints in an almost ritualistic manner.

2.0 Effective Film Speed (EFS)

After selecting a single set of photographic materials and ensuring the darkroom is properly set up we can begin the first test. However, I would be remiss to gloss over some important technical background.

2.1 Defining ISO, EI, and EFS

Film manufacturers provide an ISO or EI rating for a film, and there is an important distinction between these two terms. An ISO rating is produced following the procedures outlined in document ISO 6:1993. (This document covers black and white negative film.) However, the manufacturer can also decide to publish a "best practices" recommendation so long as it is labeled EI, or Exposure Index. As an example, Fuji F series motion picture films (now discontinued) displayed an EI rating. However, I discovered that by applying the ISO calculation procedures to the sensitometry charts the film was actually one stop slower than the published EI! You may wonder why they are allowed to do this. Well, it is up to you as an informed consumer to recognize the difference and perform your own exposure/development calibration tests.

Many amateurs entering the world of black and white photography discover that their initial rolls of film are underexposed. While this can be due to metering mistakes, I also find it common that their choice of film developer results in a loss of film speed. Remember that when a film manufacturer publishes film speed and developing times it is for a specific film and developer combination under ideal conditions. The decision to use a different developer can radically change the EI of your film. This is why Zone System calibration is such an important first step.

When I first learned Zone System calibration methods I was taught to use the term EFS, or Effective Film Speed, for the speed rating established by my test. EFS is synonymous with EI and you are welcome to use either term interchangeably.

2.2 Shooting the EFS Test

Load one roll of film into your camera and compose so that the graycard fills the entire frame. For this test each exposed frame should be an even patch of tonality. Set the focus of the lens to infinity to blur the texture of the graycard and to alleviate vignetting and loss of exposure from close focus.

Before exposing any film make sure to check that your lighting across the graycard is even! I recommend doing this with a spot meter since that ensures the tonality of your graycard is tied to middle gray.

Begin by exposing a few frames with the lens cap on so that there are some frames with no exposure. This is important for establishing your Standard Printing Time in the next step.

Next, expose the graycard properly at your metered exposure. Now, begin to underexpose in 1/2 or 1/3 stop increments until you have reached six stops underexposed. You should do this primarily through shutter speed. Use the aperture only for 1/2 or 1/3 stop changes in exposure. For example, let’s say that my graycard meters at 1/30th of a second at f/2.8. Then I would underexpose by the following:

-1/3 stop is 1/30th at f/2.8 1/3

-2/3 stop is 1/30th at f/2.8 2/3

-1 stop is 1/60th at f/2.8

In this way the aperture stays within the same one-stop range, and it prevents the depth of field from becoming so great as to bring the graycard into focus. Finally, make sure to avoid using ND filters, since these may not be precise enough.
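If it helps to see the sequence spelled out, here is a minimal sketch in Python that generates the test exposures. The base exposure and the 1/3-stop increment simply mirror the worked example above; this is an illustration, not a prescription.

```python
from fractions import Fraction

BASE_SHUTTER = Fraction(1, 30)   # metered shutter speed in seconds (from the example)
BASE_APERTURE = "f/2.8"          # metered aperture (from the example)

def efs_sequence(max_stops_under=6, increment=Fraction(1, 3)):
    """Yield (stops_under, shutter, aperture) for the EFS test roll.

    Whole stops come from halving the shutter time; the fractional
    part of a stop comes from closing down the aperture slightly.
    """
    steps = int(max_stops_under / increment)
    for i in range(1, steps + 1):
        stops_under = i * increment
        whole = stops_under.numerator // stops_under.denominator
        frac = stops_under - whole
        # Round to the nearest marked shutter speed on a real camera.
        shutter = BASE_SHUTTER / 2 ** whole
        aperture = BASE_APERTURE if frac == 0 else f"{BASE_APERTURE} {frac}"
        yield stops_under, shutter, aperture

for stops, shutter, aperture in efs_sequence():
    print(f"-{stops} stop: {shutter} sec at {aperture}")
```

The first lines of output reproduce the example above: "-1/3 stop: 1/30 sec at f/2.8 1/3", "-2/3 stop: 1/30 sec at f/2.8 2/3", "-1 stop: 1/60 sec at f/2.8", and so on down to six stops under.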

Having completed exposing the roll of film you can then develop it at the recommended developing time.

2.3 EFS Test Example

Here is a photograph of my negatives from an EFS test with Agfapan APX 400 in Pyro PMK developer. Since I had only 12 frames on the roll of 120 film, I chose to change the exposure in 1/2 stop increments. I had also gleaned from someone else's experience that the film would lose about 1 stop of speed in PMK, so I shot this roll with my light meter set to 200 EI, figuring I would be closer to the appropriate EFS. Unfortunately, I cannot remember which resource provided the 16-minute developing time.

Image 2.1 - Many apologies for just a cellphone photo, but you can still understand the example well enough! The graycards photographed on this roll started at two stops underexposed and proceeded down to four stops underexposed in half-stop increments. Notice my deliberate exposure with the lens cap on, and this frame noted on the sleeve. The yellow stain of PMK Pyro developer is very apparent.

The notations on the negative sleeve may seem mysterious but I will explain my logic in subsequent steps.

3.0 The Standard Printing Time (SPT)

The printing time for our negatives must be fixed for the rest of the test. Since we are establishing a system of tonal control, any variation in printing time would be catastrophic.

3.1 Establishing the SPT

Begin by setting the enlarger head at the height at which you will make all the test prints. Image 5.2 shows that I set the enlarger so that the frame of my scene fit within an 8x10 sheet of paper. You are welcome to select any height that best fits how you want to work. However, write down this height and make sure the head is securely locked off. Get your chemicals mixed and ready in their respective trays.

Load the unexposed frame from your EFS test into the enlarger or lay it on the paper if you are contact printing. Make sure to set your enlarging lens to your standard f/stop for printing. (I used f/11.) Set the enlarger’s timer control to 30 seconds and get a black piece of cardstock ready to use as a dodging tool.

With the lights off in the darkroom load a sheet of paper or a strip of paper into the easel. Cover the paper with the black cardstock so that only one inch is exposed. Now, switch on the timer and move the cardstock about one inch every three seconds until all 30 seconds have elapsed.

Develop your exposed paper at the recommended developing time. Make sure to wash the print and let it dry before making any assessment.

3.2 Interpreting the SPT Test

Once the paper strip is dry take it into a situation with strong, but not direct illumination. Some photographers have an area of wall in their darkroom lit like a gallery in order to judge their prints. I’ve made many judgements standing outdoors in the shade of a porch. Do not make judgments holding the print in direct sunlight, or under lights that are too dim.

The SPT test strip will likely be mostly black before emerging into steps of dark tones, and finally steps of gray tones. The SPT is the shortest exposure time that produces the first true black for the paper. Knowing that the lightest tone on the paper is a 3 second exposure, just count down until you reach the exposure time that produces a black indistinguishable from the rest.
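To make the counting concrete, here is a minimal sketch in Python of the same judgment. The band times follow the three-second procedure above, while the print densities and the just-noticeable-difference threshold are hypothetical stand-ins for what your eye actually does on the dried strip.

```python
# Band exposure times from the 3-second / one-inch procedure above.
band_times = list(range(3, 33, 3))          # 3, 6, ... 30 seconds

# Hypothetical reflection densities for each band, rising toward the
# paper's maximum black. (In practice you judge this by eye.)
densities = [1.10, 1.45, 1.70, 1.88, 1.98, 2.04, 2.06, 2.06, 2.06, 2.06]

D_MAX = max(densities)
JND = 0.02   # assumed just-noticeable difference in reflection density

# The SPT is the shortest exposure indistinguishable from maximum black.
spt = next(t for t, d in zip(band_times, densities) if D_MAX - d <= JND)
print(f"Standard Printing Time: {spt} seconds")
```

With these made-up numbers the sketch lands on 18 seconds, matching the example in the next section.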

3.3 SPT Example

In my tests printing APX 400 to Oriental Seagull GFII paper in Ansco 130 developer I found an SPT of 18 seconds.

Image 3.1 - A scan of the SPT test strip is a pale imitation of the real thing. I hope you can see on your monitor the slight difference between 12 and 18 seconds. I could not distinguish changes in tonality from 18 seconds onward. This strip established my Standard Printing Time as 18 seconds for APX 400 printed to Oriental Seagull GFII in Ansco 130 developer.

3.4 Technical Rationale for Establishing the SPT

Developed film is not perfectly transparent. The plastic base supporting the emulsion has a slight density, and this density is different for roll films, such as 35mm, than for sheet films such as 4x5. Also, the chemical activity of development creates a slight "fog" on the unexposed film. The photographer wants this Base+Fog density, when printed, to produce a deep black.

Be aware that different film and developer combinations, as well as different speeds of film, have different Base+Fog levels. Therefore, you need to find the SPT for each film and developer! For example, my SPT for APX 100 is 12 seconds compared to the 18 seconds for APX 400 in the same developer. This is because slower films generally have less Base+Fog.
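For the technically curious, here is a short sketch in Python of why Base+Fog drives the SPT: exposure through a density D is attenuated by a factor of 10 to the power D, so the printing time must grow by the same factor. The density figure below is back-calculated from my two example times, not a measurement.

```python
import math

# SPTs from the examples above, same paper and developer.
spt_apx100, spt_apx400 = 12, 18   # seconds

# Exposure through a density D is cut by 10**D, so the ratio of SPTs
# implies the difference in Base+Fog density between the two films.
delta_density = math.log10(spt_apx400 / spt_apx100)

print(f"Implied extra Base+Fog on APX 400: {delta_density:.2f} density units")
print(f"That is about {delta_density / math.log10(2):.1f} stop more "
      f"printing exposure needed to reach maximum black.")
```

The 18:12 ratio works out to roughly 0.18 density units, a bit over half a stop of extra printing exposure.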

4.0 Printing and Interpreting the EFS Test

With a Standard Printing Time established one can now print the grayscale patches from the EFS test negative.

4.1 Printing the EFS Negative

With the enlarger still at the same settings, and fresh chemicals in the trays (if necessary), print each frame of the EFS negative at your SPT. Make sure to keep track of the negative numbers or the underexposure values for each print. I write the data on the back of the photo paper with a soft leaded pencil. Develop each print in your chosen materials for this calibration procedure.

There are two good tips to follow during this procedure. First, you can trim down your paper to a smaller size in order to save some money. I cut out pieces about 2x4 inches when I performed this method. Second, you can load the negative in the carrier so that two graycard frames are visible at once. This saves time and money, and also gives you a black unexposed strip between the two graycards which helps in making judgments of tonality.

4.2 EFS Interpretation Example

What we are looking to ascertain is the level of underexposure that provides a just noticeable tonal difference from black. Ideally, this difference should be at the point where the film is four stops underexposed. Remember, four stops underexposed from Zone V (Middle Gray) is Zone I, which by definition is where tonality emerges from black but where there is still no texture.

For my tests I started with the graycards exposed two stops under. Here are the EFS prints:

Image 4.1 - I printed across two frames because it is a more efficient use of materials and provides a black reference in the unexposed section between them. The underexposed graycards in the print on the left are light enough to be considered Zone II. However, the print on the right shows underexposed graycards just above black, at Zone 0.

The graycards on the left are so light as to be in Zone II. However, the graycards on the right are just barely perceptible in comparison to the unexposed region between the two frames. These look like my preferred Zone I values, but we should look at the next two prints to make sure.

Image 4.2 - The underexposed graycards in these prints are indistinguishable from black.

The graycards from four stops under and below are indistinguishable from black. So I need to look back at the print with the -3 and -3 1/2 graycards in order to determine my film speed.

Remember that I exposed this film at an EI of 200. If 200 EI were the appropriate speed, the graycard at -4 stops would sit in Zone I and print with a slight tonal difference. It is not visible, so my EFS (or EI) is not 200. Since the frames at -3 and above did print with a tonal difference, we can reason that my film is slower than 200.

To make these judgments easier I wrote next to each exposed graycard the EI it would correspond to if it printed with tonality. You can see these written in red sharpie on my negative sleeve in Image 2.1. Here is a table:

-2 stops = 50 EI

-2 1/2 stops = 75 EI

-3 stops = 100 EI

-3 1/2 stops = 150 EI

-4 Stops = 200 EI

-4 1/2 stops = 300 EI, etc.

I felt the optimal Zone I fell between -3 and -3 1/2 stops underexposed, so I chose an EFS of 125 for APX 400. Development in PMK therefore lost 1 2/3 stops of speed!
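For readers who want the arithmetic, here is a minimal sketch in Python of the reasoning behind the table above. Since Zone I sits four stops under Zone V, a graycard that first shows tone at N stops under a metered EI of 200 implies an EFS of 200 multiplied by 2 to the power (N - 4).

```python
METERED_EI = 200     # the EI set on the light meter for this roll
ZONE_I_OFFSET = 4    # stops between Zone V (middle gray) and Zone I

def efs_for(stops_under):
    """EFS implied if the frame 'stops_under' is the first visible tone."""
    return METERED_EI * 2 ** (stops_under - ZONE_I_OFFSET)

for stops in (2, 2.5, 3, 3.5, 4, 4.5):
    print(f"-{stops} stops -> {efs_for(stops):.0f} EI")
# (My table above rounds the fractional-stop results to the nearest
# standard speed values: 71 -> 75, 141 -> 150, 283 -> 300.)
```

Running this reproduces the table, and it makes the final judgment transparent: a first visible tone between -3 and -3 1/2 stops implies an EFS between 100 and roughly 141, so 125 is the sensible choice.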

4.3 Technical Rationale of Establishing the EFS

The famous Zone System adage is "expose for shadows and develop for highlights." By establishing the Effective Film Speed for your film/developer combination you are "exposing for shadows." You are determining the EFS so that the shadow detail of a scene is not lost in the Base+Fog, but emerges just slightly lighter than black.

I cannot stress enough the importance of these first steps in calibrating. Simply ensuring that your subject luminance range is contained within the straight line portion of the film curve gives you a negative capable of successful printing.

5.0 Establishing Development Time

Armed with the Effective Film Speed and Standard Printing Time we can now determine the optimal Development Time. I recommend doing this on a new day in order to start with a fresh mind.

You will need to set up your 'Scene' as discussed in section 1.3, and make sure your darkroom is in the same set-up as when you performed the SPT and EFS procedures.

5.1 Getting the ‘Scene’ and Darkroom Ready

Place your objects and light them carefully. Remember that any standardized chart in frame, such as a graycard or ColorChecker, needs even lighting. Also use your spot meter to take notes of the different luminance levels of the objects. You need to ensure that your final print renders the Zones as close to optimal as possible. These are the important tonal regions I consider:

Zone V - Middle Gray - A perfectly exposed and developed negative should make the graycard in your print the same exact tone as the graycard in the scene.

Zone II - This Zone is the first emergence of detail in the shadows. Make sure to locate an object that meters three stops under your graycard. Select an object with texture!

Zone VIII - This Zone contains highlights with details before drifting off to white. Make sure to find an object that meters three stops above your graycard and contains texture.

With your scene established and lit now go set up the darkroom. Check that the enlarger is still at the same height above the easel. Make sure to do this on a day without interruptions since you will need to jump back and forth between the scene and darkroom.

5.2 Exposing the First Roll of Film

Expose a roll of film to the scene at the EFS you determined from the previous steps. I also chose to over and underexpose it in 1/2 stop increments, but this proved to be unnecessary. You can see in the negatives below I also exposed a graycard at Zone I (4 stops underexposed), Zone V (proper exposure), and in whole stop increments above Zone V.

Next, develop the film at the same developing time used for the EFS test. In my case this meant I exposed APX 400 at 125 EI and developed it in PMK for 16 minutes.

Image 5.1 - The reason for over and underexposing the scene in half stop increments was to ensure a printable negative if for some reason my EFS was off. This way I could select a different exposure level to print and recalibrate my EFS. I printed the overexposed graycards in a manner similar to the EFS test. However, this time I wanted to see if Zone IX was a slightly lower tone than the white of the paper.

From this point onward it’s a little easier to explain the procedure through the lens of my own experience.

Once the negatives were dry I mixed my paper development chemicals. Taking the normally exposed negative I printed it to Oriental Seagull GFII at the SPT of 18 seconds. I developed this in Ansco 130 with my usual development time and agitation.

Image 5.2 - The first print of my 'scene' at EI 125 and a Development Time of 16 minutes. Notice that I scanned this print with a graycard covering the top. I think this helps the reader make their own judgment with a reference on the same screen.

Once the print dried I compared the tonality in the print to the canonical Zones. The black square (bottom right) is visibly separated from its surrounding black border, so I know my shadow detail is coming up in about the right place. However, comparing the print to a real graycard I found my printed graycard too bright. (To help in this comparison I scanned my print with a graycard as a reference.) Also, the crinkled aluminum foil metered between 2 and 3 stops above middle gray, but in the print it is very bright and lacks highlight detail.

This developing time of 16 minutes produced a print with too much contrast, so to compensate I need to develop for less time. (Conversely, if your first roll produces a print with lower contrast than desired, then you need to increase your developing time.)

I made the decision to reduce the developing time by roughly 10% (16:00 × 0.9 = 14:24) and rounded the value to 14:30. Then I returned to my scene to expose and develop another roll of film.
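The logic of the next few sections can be summarized as a simple feedback loop. Here is a minimal sketch in Python; the judge function is a stand-in for the human comparison of the print's graycard and highlights against the scene, and its numbers are purely illustrative.

```python
def calibrate_dev_time(minutes, judge, max_rounds=5):
    """Adjust development time by ~10% per round until the print matches."""
    for _ in range(max_rounds):
        verdict = judge(minutes)
        if verdict == "too contrasty":
            minutes *= 0.9        # shorter development lowers contrast
        elif verdict == "too flat":
            minutes *= 1.1        # longer development raises contrast
        else:                     # "good": graycard and highlights match
            break
    return minutes

# A hypothetical judge roughly mirroring my APX 400 experience.
def judge(minutes):
    return "too contrasty" if minutes > 13 else "good"

print(f"Settled on ~{calibrate_dev_time(16.0, judge):.1f} minutes")
```

Starting at 16 minutes, this toy loop steps through roughly 14.4 and then about 13 minutes, close to the 14:30 and 12:45 rolls described below.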

5.3 Second Roll of Film at a New Developing Time

After setting my camera back onto its tripod I checked my lighting and exposure, then exposed a second roll of film at 125 EI. Taking this to the darkroom, I developed it for 14:30 and struck a new print.

Image 5.3 - The scene from my second roll of film developed at 14:30 minutes. Notice the slightly lower contrast and the retention of highlight details in the aluminum foil.

This development time still maintained shadow detail, but now the graycard in the print appeared even closer to the physical graycard. Looking at the aluminum foil you can see how its delicate highlight detail is now maintained.

Even though this felt like the proper time I decided to reduce my development time by another 10% just to double check. I don’t recall my logic at this point but I selected a development time of 12:45 minutes.

5.4 Third Roll of Film at a New Developing Time

Repeating the same procedure as before I exposed another roll at 125 EI, developed it at the new time, and struck a third print.

Image 5.4 - The third roll of film developed for 12:45 minutes. Judging from the actual scene I felt this to be the best representation of tonality.


I thought this new print would overshoot, but instead the graycard was now nearly an identical match. I also noticed that the further reduction in contrast helped open up dark detail (look in the black lacquer tray) and retain nuanced highlights in the aluminum foil.

Image 5.5 - Here is a line-up of the critical area of the scene at the three development times.

Beyond just the tonality notice how bringing the exposure and development time into alignment also makes the image appear more dimensional!           

5.5 Technical Rationale of Establishing Development Time

Establishing the proper EFS or EI fulfilled the first half of the Zone System adage. Now, by adjusting the development time, we change the contrast of the image so that the highlights are placed with their detail appropriately rendered. This completes the second half of the sensitometric truth of "expose for shadows and develop for highlights."

6.0 Finishing Up

While this procedure takes many hours over one or two days, the effort is invaluable. Even just knowing the properly calibrated EI and Development Time meant more of my images were properly exposed and therefore easier to print.

The calibration procedure is not over because the next step involves exposing film in the world and confirming that the test scenario works in reality. I recall that my chosen values produced immediately satisfying results. Nonetheless, I’ve spoken to others who ended up nudging the EI by 1/3 of a stop, or changing the development time by a minute. This is acceptable in order to align your vision with the tonal rendering in the print.

On a final note I should point out that this calibration procedure establishes ‘normal’ development for a ‘normal’ subject luminance range of around 7 stops. If your scene is greater or smaller than this range you would need to adjust your development time and EI accordingly. These situations call for the technique of Push or Pull development. Calibrating a film/developer combination for changes in development time is more involved and I’ve already written a long enough post.

Nonetheless, my hope is that giving preliminary instructions and an example of Zone System Calibration inspires others to at least try. The process may well be tedious, time consuming, and perhaps frustrating. Yet, it will improve your eye, metering and exposure, and darkroom technique. Most importantly, Zone System Calibration provides an intimate understanding of your materials and the image-making process.

How Zone System Works: A Conceptual and Technical Sketch part II by John G Arkenberg

(This article is part II in a series. Reading part I is essential as it establishes the scientific basis of tone reproduction. My intention in this series is to disseminate accurate information about sensitometry and Zone System for the photographic community.)

4.0 Image Processing

In the previous article I explained the general technical basis of sensitometry and Zone System and its benefits to the photographer. Simply put, in a well-controlled photographic system the artist can satisfactorily correlate the appearance of tonality between a scene and the final image. What remains is to address the middle step of Processing, which brings the scene and final image into alignment.

The previous article addressed the beginning and end of this chain. This article focuses on Processing, the middle block.

I will begin by explaining the analog process, for three reasons. First, sensitometry started with analog, so this medium is the most studied and documented. Second, the analog process has very rigid controls which give us a straighter arrow to follow. Finally, there is a level of transparency to the process (all puns intended) compared to digital steps in a computer, which hide some of the signal processing.

4.1 Thinking Backwards

For all photographic materials the most important limits on tonality are those of the print, projector, or monitor. This statement is so crucial to the correct application of sensitometry that it is worth reiterating: the medium of the final display dictates how to expose and process the negative. This means that the tonal rendering of the photographic paper determines how to expose and develop the film. The flexibility of changing a digital file in a computer often cloaks the fact that the electronic display, whether monitor or projector, likewise determines how we should expose and process the digital image. Even though we learn the photographic process beginning with the camera and ending with the print, the analysis of tone reproduction is viewed backwards.

Diagram 4.0 - We learn photography as a process from scene to print. However, the science of sensitometry looks backwards through each link in the chain in order to understand tonal rendering at each subsequent step in the process.

In order to really appreciate the quantitative and qualitative changes in tonality in the processing phase of an image it is necessary to look at the sensitometric graphs that accompany this step.

4.2 The Sensitometry Graph and Transfer of Tonality

There is a power in graphs - they easily reveal relationships between each step in a process. Let me take a moment to give a brief explanation of sensitometry graphs for analog and digital materials before exploring how these relate to the print paper and computer monitor.

The x-axis of the graph is the incoming light from a scene. For a sensitometric analysis the scene consists of a range of discrete steps of tonality (such as a grayscale chart like the Kodak Q-14) or a series of images of an underexposed and overexposed graycard. The scale of the x-axis is typically shown in steps of log exposure (log H, where H = illuminance × time in lux-seconds), and each step of 0.3 is one stop of change in the illumination. More recently it is common to show the x-axis in camera stops. While camera stops are more familiar to a practicing photographer, there must be an indication of the proper exposure point for the scene. In the graphs for Kodak's Vision3 motion picture films, correct exposure at the manufacturer's EI is indicated by a 0.

Against the change in intensity of light from the scene one can measure and plot the amount of signal captured by the medium. Film records light as silver density (or dye, in the case of color film), so the y-axis on a film graph is density. All film graphs use a logarithmic scale where each step of 0.3 is one stop of change in density. (Neutral Density filters use the same scale!) Digital files store the signal as a string of bits, and the length of this string is the bit depth. For digital sensitometry graphs the y-axis can be either code values or a percentage of full scale. The slope of the straight line portion of the graph is the contrast: the steeper the slope of the curve, the more contrasty the image, and vice versa.
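Since the 0.3-per-stop relationship appears throughout these graphs, here is a one-minute sketch of the conversion in Python. It follows directly from the fact that log10 of 2 is about 0.301; nothing else is assumed.

```python
import math

def stops_to_log(stops):
    """Convert a change in stops to a change in log exposure or density."""
    return stops * math.log10(2)

def log_to_stops(log_delta):
    """Convert a change in log exposure or density back to stops."""
    return log_delta / math.log10(2)

print(f"1 stop  = {stops_to_log(1):.3f} log units")   # ~0.301
print(f"0.3 log = {log_to_stops(0.3):.2f} stops")     # ~1.00
# A 0.3 ND filter therefore cuts the light by one stop.
```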

Diagram 4.1 - At top is a generic sensitometry diagram labeled with key terms. Both analog and digital share the same x-axis scale. However, the y-axis is different since film records the image as density, and digital records in terms of code values.

Obviously, we don’t display the camera negative, nor can we see the stored digital file. So the captured information is made visible by moving the recorded signal to a display - photographic paper for analog, or an electronic monitor in the case of digital. We can analyze this transfer of tonal information by mapping the curve from the capture medium to a curve for the display medium. In diagram 4.2 I show this transfer from the negative to the steep curve of the paper.

By analyzing the entire photographic chain through sensitometry graphs, matching the limits of our materials becomes imperative. If we have two curves to work with, the curve of the negative and that of the print, why do ZS photographers work so hard on exposing and developing the negative? You may have heard their adage "expose for shadows and develop for highlights." The reason is that the curve of print paper is very steep, and any small change in printing exposure and developing time has a large impact on final tonality. It is simply more practical to work with one grade of paper, calibrate the negative to that specific grade, and then work within the parameters of dodging and burning only. Of course, we do have the option to switch grades of paper if we need an image to have more or less contrast. But put simply, the flexibility of the negative in exposure and processing is far greater and easier to control.

Diagram 4.2 - Follow this graphic from Input in the lower left clockwise through to the Output. One can understand for film / print materials how the light from the scene is recorded as density on the developed negative. Then, light is transmitted through the negative to expose and develop a print. Provided a system is well calibrated by Zone System techniques the photographer can have excellent control over how the tones from the scene are rendered in the final print.

So how do we control tonality? It involves spending time testing materials with calibration charts, standardizing one's post-process, and then spending time light metering and taking notes in the field. In the next post I will spell out a step-by-step guide to how I calibrated my film/paper combination. For now I want to demonstrate the generic process of control within a calibrated film and digital system.

5.0 Analog Image Control

In the analog image chain we rely on altering contrast and tonality by:

Film Development Time – Determines the contrast of the captured information from the scene.

Paper Grade – Typically ZS practitioners calibrate for paper grades 2 or 3. This allows for the use of grade 1 if a lower contrast is required, and grades 4 and 5 if the image needs more contrast. However, we expose and develop our negatives so as not to need these extreme ranges of paper grades.

Dodging/Burning – Localized changes in tone by blocking the light from the enlarger (dodging) to make that area lighter or exposing it for longer to make it darker (burning).

While one could rely on grades of paper and dodging/burning alone, it is far from ideal. I have printed difficult negatives where the range of physical motions I had to make, and the tight timing in which they had to be performed, became a narrow tightrope leading to many rejected prints and problems with reproducibility. It is better to have an optimal negative so that tones fall close to the desired rendering. When this occurs, all further tone adjustments through dodging and burning are much easier.

So how does one control the curve of the negative? By development time: a longer development time (push processing) raises the contrast and a shorter development time (pull processing) lowers the contrast.
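For intuition only, here is a sketch in Python of a common empirical model in which gamma (the slope, i.e. contrast) rises toward a limiting value as development time increases. The constants are illustrative, not data for any real film/developer pair.

```python
import math

GAMMA_MAX = 0.85   # assumed limiting gamma for this hypothetical film
K = 0.12           # assumed rate constant, per minute

def gamma(dev_minutes):
    """Contrast grows with development time, approaching GAMMA_MAX."""
    return GAMMA_MAX * (1 - math.exp(-K * dev_minutes))

for t in (8, 12, 16):
    print(f"{t:2d} min -> gamma {gamma(t):.2f}")
# Longer development -> steeper curve -> more contrast (push);
# shorter development -> flatter curve -> less contrast (pull).
```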

Diagram 5.1 - The sensitometry curves from Tri-X in T-max developer. Notice that as the development time increases the curves become increasingly steeper. From Kodak Technical Publication F-4017, February 2016.

There is an important ramification to changing the developing time: the recorded density from the scene is moved to a less ideal part of the dynamic range of the negative. If one does not decrease exposure in response to a longer developing time, the highlights can become too "bulletproof" to print and grain increases across the image. To compensate we change the EI of the film to a higher number, as if the film were more sensitive to light. For example, in my tests with Agfapan APX400 in PMK I discovered that 12 minutes of developing time gave me an EI of 100, while increasing the developing time to 16 minutes required an EI of 125 to keep Zone II in the lower part of the curve.

Conversely, the image loses shadow detail if there is no exposure compensation for a shorter developing time. Therefore we lower the EI, as if the film were less sensitive to light. Looking at my Agfapan APX400 notes, the change from a 12 minute developing time to an 8 minute developing time resulted in an EI of 80. Some of you may be thinking that such a small change in EI across developing times is not a big deal. However, this is just the case for my materials; these relationships can be dramatically different depending on the developer and film combination one chooses.

Moving further backward through our analysis we can also determine the range of light in a scene that is held by our film/development/paper combination. I found that within my system (grade 2 paper in Sprint Quicksilver developer, APX400 at an EI of 100 developed in PMK for 12 minutes) I obtained a 7 stop range of light in my scene. This allowed me to go out in the field and take spot meter readings reliably, knowing that any object more than 3 1/2 stops over or 3 1/2 stops under proper exposure fell outside the limits of tonality rendered with detail. You will also notice that this 12 minute development time is a good normal development time, placing an object 3 1/2 stops under middle gray in Zone II and an object 3 1/2 stops over middle gray in Zone VIII.

Two quick points: 1) Remember, the negative records more than this narrow window of 7 stops. What we are accomplishing through this sensitometric analysis is optimizing our system so that the light in our scene is rendered as the intended Zone in the print through manipulation of development time and EI alone. This gives us an easier image to finesse into our desired intention. 2) This 7-stop range covers tones with detail. The scene will have objects beyond these tones that render as pure black and white.
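As a small illustration of reading a scene this way, here is a sketch in Python that maps hypothetical spot meter readings to Zones under the assumptions above: Zone V is middle gray, each stop is one Zone, and detail holds within 3 1/2 stops of middle gray. The object names and readings are made up for the example.

```python
TEXTURE_RANGE = 3.5   # stops over/under middle gray that still hold detail

def zone_of(stops_from_gray):
    """Zone V is middle gray; each stop shifts the tone by one Zone."""
    return 5 + stops_from_gray

# Hypothetical spot meter readings, in stops relative to middle gray.
readings = {"graycard": 0, "white fur": +3, "black fur": -3, "foil glint": +4}

for name, stops in readings.items():
    zone = zone_of(stops)
    detail = "with detail" if abs(stops) <= TEXTURE_RANGE else "beyond detail"
    print(f"{name}: Zone {zone:g} ({detail})")
```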

Armed with my 7-stop range I can spot meter objects in my scene to see whether they fall within the ideal recorded density of my negative. So what do I do if my subject has a greater range than this "normal" developing time can handle? Well, let's look at our cat picture again.

If spotmeter readings of our subject have these values…

Diagram 5.2 - These are possible spot metered values for a high contrast object. Notice that the white and black fur that we want rendered with detail are four stops over and four stops underexposed.

…I would pull process in order to lower the contrast on my negative. This development allows the critical highlight and shadow details to fit within the limits of the paper.

Diagram 5.3 - This is the transfer quadrant for a scene that is too contrasty. The cat’s white fur which we want in Zone VIII is in Zone IX and the black fur which should be in Zone II is in Zone I. Pull Processing (sometimes called Contraction Development) of the film places all the tones back into their proper location.

What if my scene is too low in contrast? Let's say I get these spot meter readings…

Diagram 5.4 - In this situation the subject’s white fur is only metering two stops above middle gray and the black fur is only two stops under. The cat looks low in contrast and we want to place the white fur back into Zone VIII and the black fur into Zone II.

…I would push my film to increase its contrast in order to fit the paper.

Diagram 5.5 - This is the transfer quadrant for a scene that is low contrast. The cat’s white fur which we want in Zone VIII is in Zone VII and the black fur which should be in Zone II is in Zone III. Push Processing (sometimes called Expansion Development) the film places all the tones back into their proper location.

Keep in mind, these examples are merely to illustrate how to make an image of the cat with the fur in the appropriate zone for ‘how it appears to the eye.’ However, recognizing how development time controls tonality opens a world of aesthetic controls to the artist and their intended vision.

The previous examples are simplified by relating the same light meter readings to film development time. However, the decision to change development time requires the photographer to adjust the Exposure Index of the film. By lining up our three development curves (normal, pull, and push) on one graph, the necessity of altering the exposure becomes apparent.

Diagram 5.6 - This is idealized to illustrate basic concepts. For a real world diagram just look at 5.1 above with the different development curves for Tri-X.

Notice that as the development time changes, the point where Zone II begins is displaced horizontally. Since the x-axis is the scale of exposure, the distance these points move in stops can be used to calculate the change in Exposure Index. This point is also called the speed point in sensitometry, since it is used to calculate the speed rating of the film. In our example the speed point moves one stop on the log exposure scale, so the Exposure Index changes by the same amount. If my film is 200 EI with normal development and a spot meter reads the graycard as f/4, then that is the aperture I will set on my lens. However, if I wanted to push the film I would need to set my aperture to f/5.6 to compensate for the longer developing time; if I wanted to pull the film I would set my lens to f/2.8 to compensate for the shorter developing time. I could also adjust my shutter speed instead of the aperture if I did not want my depth of field to change and my subject remained static.
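The compensation arithmetic is simple enough to sketch in a few lines of Python. The one-stop speed-point shift here is the idealized example from the text; real films need measured values.

```python
BASE_EI = 200   # the film's EI with normal development (from the example)

def compensated_ei(base_ei, speed_point_shift_stops):
    """New EI after the speed point moves by the given number of stops.

    Pushing (longer development) shifts the speed point toward less
    exposure, so the EI rises; pulling lowers it.
    """
    return base_ei * 2 ** speed_point_shift_stops

print(f"Push one stop: EI {compensated_ei(BASE_EI, +1):.0f}")   # 400 -> f/5.6
print(f"Pull one stop: EI {compensated_ei(BASE_EI, -1):.0f}")   # 100 -> f/2.8
```

This matches the aperture example above: pushing the 200 EI film one stop means metering as if it were EI 400 (f/5.6 instead of f/4), and pulling means metering as if it were EI 100 (f/2.8).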

If you have ever heard the famous adage "expose for shadows, develop for highlights," you can begin to unpack the remarkable truth this statement encapsulates. Your choice of EI places the shadows at a point on the curve that keeps their detail intact, and then the choice of development time sets the slope of the curve to position the highlights. The calibration tests I explain in the next post will give a working example from my own tests. Even in these sensitometry graphs, though, we can see the interrelated dance of EI and development time.

I should make two more important points for clarity:

First, I cannot stress enough that a calibrated combination of film, film development time, paper, paper developer, temperature, and agitation technique is a total system. Once one starts changing any part of it, different tonal results occur. The analog photo chain requires a high degree of discipline.

Second, the control of the curve of the negative is made by the photographer at the moment of exposure. The act of determining exposure and developing time in the field dictates the darkroom process. This can be changed, but only by so much. This is why ZS emphasizes the skills of memory, calibration, rigor, and practice.

6.0 Digital

An understanding of analog tone control translates easily to the digital image-making process. The greatest difference lies in the fact that the "developing" of a digital image now happens in a computer, where a myriad of controls await. In fact, I think adjusting the image in photo editing software is so easy that many don't realize they are undertaking the same steps as with film: setting the brightness and contrast of the image curve to fit the limits of the display, now a computer monitor.

Your monitor is set to an EOTF, or electro-optical transfer function. This is a conversion function that translates electronic code values into an intensity of light; the EOTF is the equivalent of the sensitometry curve of photographic paper. Computers commonly use a gamma of 2.2, as graphed below. I chose to graph the y-axis on two different scales: an exponential scale and a logarithmic scale. On the logarithmic scale the 2.2 gamma reveals an S-curve shape. This is more analogous to film/paper curves because each step on the y-axis is a change of one stop, a doubling of the output of light from the monitor.
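To make the EOTF concrete, here is a minimal sketch in Python of a pure gamma-2.2 display curve. The 100-nit peak is an assumed calibration target for illustration, and real display standards add refinements to the simple power function.

```python
PEAK_LUMINANCE = 100.0   # nits (cd/m^2), assumed monitor calibration
GAMMA = 2.2

def eotf(code_value, bit_depth=8):
    """Convert an integer code value to displayed luminance in nits."""
    normalized = code_value / (2 ** bit_depth - 1)
    return PEAK_LUMINANCE * normalized ** GAMMA

for cv in (0, 64, 128, 192, 255):
    print(f"code {cv:3d} -> {eotf(cv):6.2f} nits")
# Code 128 lands around 22 nits rather than 50: the gamma curve
# allocates more code values to the dark end, echoing the film/paper
# S-curve when replotted on a logarithmic (stops) scale.
```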

Diagram 6.1 - On the left is a gamma of 2.2, the conventional gamma for sRGB and Rec709. The x-axis shows 8-bit code values and the y-axis the luminance of a properly calibrated monitor. On the right is the same data, but with a logarithmic y-axis where each step is a whole stop change in luminance. Notice the emergence of the S-shaped curve, its steepness, and that the sRGB and Rec709 standards currently display only around 8 stops of dynamic range.

Notice that the monitor also has a steep curve similar to that of a photographic paper. For our cat example to display correctly we need to adjust the curve of the captured image to make it fit within the limits of the monitor. To accomplish this our ‘developing’ is replaced by the use of the curve tool or wheels. (Now you understand why this tool is called a ‘curve’ tool - it derives its principles from sensitometry!) However, the laborious process of calibrating film is replaced by simple computer adjustments while you view the changes to the image in real time.

Diagram 6.2 - Notice that this is very similar to the analog tone transfer diagram but with a few alterations. The image is captured in a file format that is then corrected in a computer. For this example I selected a RAW capture that has its metadata and a curve applied in the computer.

Diagram 6.3 - Here is a screenshot of adjustments to the same image in photo editing software. While the curve tool is inspired by the sensitometric characteristic curve, it doesn't actually map the tonal rendering of the image. Instead, it provides a straight-line tool for further adjustment of whatever data is there. Here I applied a "medium contrast" curve and then proceeded to nudge the midtones higher. If this image were analog I would have developed the negative for the same desired contrast and then dodged up the midtone areas of the print. So in the computer you are using the curve or wheel tools as both your "developing" and your "dodging & burning."

The possibilities of image editing in a computer are vast and displayed immediately for our assessment. I think this is where the chain of tone reproduction is so seamless one could be using a "light" form of ZS without knowing it. After all, you are mapping the tones of a scene into a desired place using your visual memory and visual system as the comparison. What many don't realize is that the tone reproduction processes for film and digital share the same principles. No matter the format, you begin by viewing a scene with the limited range of your eye, capture it with a photographic system of large dynamic range, and then compress it back down to the limits of your eye in the final display.

7.0 Tying it All Together

Armed with a knowledge of how we see tonality, and how our final display depicts tonality, we are empowered to transform the curves of our capture medium to bring our vision and the final image into alignment. More importantly, we can use this information in order to interpret light meter readings of a scene. This ensures that objects are reproduced as intended in printing and are far away from the limits of the dynamic range.

There are perhaps two lingering questions. The first: why is the S-shaped curve so predominant?

Sensitometric studies on viewers have shown this to be the most pleasing mapping of tonality. On a purely technical level a straight line with a slope of 1 would be ideal, but we are making these images for people's enjoyment. The slightly crushed toe of the curve suppresses flare in the photographic system, while the rounded shoulder gives a gentle, pleasing rendering to the highlights. This is an important fact because it shows that sensitometry does not exist in a technical/numerical vacuum, but studies how we perceive the world.

The second involves how film and digital differ in the preferred placement of subject luminance. This matters because both media have a dynamic range much greater than the average scene. Early psychovisual studies for film established that people prefer the image quality when it is exposed lower on the curve. This is why a properly calibrated negative and developing time puts Zone II just above the Base+Fog level. This is also called the speed point, the point from which ISO calculations are performed. The drawback to exposing a scene higher up on the curve is compressed highlights and increased noise in the form of graininess.

Sensitometric standards for digital are still being developed by the ISO, but the ideal placement of subject luminance is already known, and it mirrors the analog case. The photosites in a CMOS sensor clip information in the highlights; in other words, overexposure has a hard limit. Many manufacturers therefore determine an EI that moves the information down the curve to salvage highlight detail. There is a bit of a balancing act here, because underrating the sensor too far would bury the exposure in the noise floor. This may seem a non-issue given de-noise algorithms, but overzealous de-noising will eliminate fine, low-contrast detail from the scene. Despite these differences in approach, the main psychovisual underpinning of both remains the same: the limits of the eye.

8.0 Conclusion

With analog and digital photo processes possessing such flexibility, why care about ZS? Why not just shoot and print to taste? You certainly can, and as I've said many times ZS is a tool to take or leave. However, learning sensitometry and ZS provides an understanding of the real essence of the photographic process. The knowledge is so fundamental that I not only found I could apply it to digital technology as it first emerged, but could also see through the strange ad-speak and unrealistic claims manufacturers make.

Now, how to apply this knowledge? First, test and calibrate your materials so that a scene of average contrast (approximately 7-8 stops) allows for close placement of Zones with a single simple adjustment (either developing time or a curve adjustment in Photoshop).

Next, work out how to handle scenes of high and low contrast, so that you can transform subject luminance ranges whether they are 5 stops or 6 or 9 or 10. This informs your metering technique and allows you to work with confidence that a difficult subject, such as a black and white cat, can be displayed as intended.

After calibration begins the task of practice, note-taking, practice, self-assessment, practice, troubleshooting mistakes, and more practice. The learning curve is quite long (all puns intended), but very rewarding for the photographer who applies themselves to the task.

As one gets a handle on conventional image qualities, one can start exploring unique aesthetics and fully understand the exact steps needed to achieve them again in the future. You will begin to understand how ZS frees the subjective vision, much as Ansel Adams pointed out in The Negative.

Cited Books:

Adams, Ansel. The Negative. Boston: Little, Brown and Company, 1981.

Adams, Ansel. The Print. Boston: Little, Brown and Company, 1983.

Davis, Phil. Beyond the Zone System, 2nd Edition. Boston: Focal Press, 1988.

Eggleston, Jack. Sensitometry for Photographers. London: Focal Press, 1984.

Todd, Hollis N. and Richard D. Zakia. Photographic Sensitometry: The Study of Tone Reproduction. Dobbs Ferry: Morgan & Morgan, Inc., 1969.

How Zone System Works: A Conceptual and Technical Sketch Part I by John G Arkenberg

(I started getting e-mails about Zone System due to search engines associating my Death of the Zone System series with another article about how the Zone System is Dead. My title is ironic, since I use Zone System and believe it a useful tool for any photographer. My belief is that quality information about ZS on the internet is lacking, which is leading to the dissemination of misconceptions that discourage and dissuade others from learning this technique; hence, the Zone System is dying a death due to misinterpretation and misinformation. Before I address the argument made in Johnny Patience's The Zone System is Dead I am producing two pieces: the first explaining how Zone System works on a conceptual and technical basis, and the second detailing the steps of how I calibrated my film/paper combination. Once these are posted I will follow up with an analysis of Patience's arguments in an effort to dispel a number of mythologies and further explain how his metering method falls short as a system of tone reproduction.)

1.0 INTRODUCTION

The Zone System helps solve a complex problem in a simple way. That is: how to capture light from a scene and, through competent control of each step in the photographic process, produce an image that aligns closely with what you saw in the scene or in your mind's eye. I believe the canon of Adams' The Negative, Zakia, Lorenz, and White's New Zone System Manual, and Phil Davis' Beyond the Zone System does an excellent job explaining ZS on both a creative and technical level. And yet, in this age, people looking for easy answers are disappointed by the Zone System's long learning curve. So why would I say it's a simple answer to a complex problem? Because the technical adjustment of tonality using the scientific methods of sensitometry is very complex and requires one to own some rather obscure equipment. Zone System is not only an ingenious way to bring this science into the hands of the entire photographic community, but it also provides a visual vocabulary and a method of marrying the artist's subjective vision to the technical photographic process. I may own several light meters and use a densitometer to calibrate my materials, but I still rely on the Zone System to understand how I perceive tonality and to manifest the vision from my mind's eye onto paper or an electronic screen. Simply, Zone System is the easiest method in our possession, and I think it helpful to explore the science and concepts of how it works in order to encourage others to consider learning it as well.

1.1 Language Clarifications

I am writing this for use with both analog and digital photo processes so we should clarify some terminology I use. I prefer the general term sensor for both film and silicon semiconductor imaging. This is to disconnect any medium-centric thinking and ground us in the shared processes between light-sensitive materials used for photographic imaging. I will be clear if any aspect is particular to digital or analog photography.

Processing or image processing is the general term I use for film developing or image adjustments in computer software. Once again, there are many commonalities to both mediums in the general process of tone reproduction. Also, I try to keep clear the individual steps of the process by separating out what you saw in the real world as the scene, the captured image as the negative or digital file, and only the final image as the print on paper, the projected motion picture print, or electronic display in the form of a monitor or electronic projector.

Finally, I make an important distinction between ISO and EI. ISO is the measure of the sensitivity of a sensor as dictated by the ISO organization in their standardized tests: ISO 6:1993 for black and white film, ISO 5800:1987 for color negative film, and ISO 12232:2006 for digital. Because these tests are really laboratory protocols for manufacturers, a photographer may realize they need to rate their sensor differently based on the circumstances of a scene or for a desired aesthetic. When a photographer or camera manufacturer recommends a different speed rating, this is properly referred to as an Exposure Index, or EI. (“Native ISO,” a term that is entirely made up by digital camera manufacturers, is analogous to EI.)

2.0 TONE REPRODUCTION SIMPLIFIED

Sensitometry and Zone System provide technical grounding and visual control over the process of tone reproduction. The process reduced to its simplest steps is as follows:

Diagram 2.0 - Although illustrated for analog photography these three steps are equally applicable to digital with Processing done in a computer and the Final Image seen on a computer monitor.

I will address the middle section of Processing in Part II of this article. For now imagine it as a machine with dials we can adjust for different image qualities, but for the moment they are set to “default.” In this instance the entire photographic system becomes a “black box” where we measure and control the light input into the photographic system and observe the appearance of the image at the end. We then ask ourselves a series of questions about the quality of the final image, comparing it to the scene we remember or the scene we previsualized. Based on our assessment we then adjust the process to dial up more or less contrast, raise or lower the brightness, etc., until we get a final image with our desired aesthetic. To give an example, let’s say one took a photograph of a black and white cat. Black and white fur is notoriously hard to render well on film and digital.

Diagram 2.1 - My father, who first taught me film photography and darkroom printing, loves black and white cats as an example of a difficult photographic object to render tonally. I doubt our photographer friend above would really take this photo as an expression of his artistic creativity, especially since it includes a graycard and a Zone System visualization strip. Nonetheless, these technical items needed to be in the photo for important data later in the article.

If your hope was for an outstanding artistic depiction, with both black and white hair rendered with detail and nuanced tones, to emerge from “default processing,” it’s highly unlikely your photographic system would produce this result. (By “default processing” I mean you rate the film at the ISO listed on the box, use developing times from the manufacturer, or, in the digital realm, allow Photoshop to apply image processing to the RAW file with default curves, color temperature, etc.) As I will explain a little later, there is a small probability that circumstances and materials will align to produce a stunning final image. But in reality what will most likely happen is something along these lines:

Diagram 2.2 - I make a point with my students that taking a photograph merely means pushing the shutter button, throwing on a pre-built Instagram filter, or leaving “auto” modes on. On the other hand, making an image means previsualizing an image, engaging your visual memory, and critically assessing the image quality.

Comparing your scene (through memory or even light meter notes) to the image, one can tweak the process chain until a desired result is achieved. So how does one learn to manage their photographic process well enough to control the tonality from scene to image?

2.1 The Many Roads to Learning Tone Control

Admittedly, there are many roads to learning to control tonality. One can do this by trial and error, by careful experimentation and testing, or by learning Zone System and sensitometry. In my experience trial and error (what I started with) is frustrating, inefficient, and expensive. Being more scientific by following steps methodically and taking notes is more efficient, and the results can be quite good with attention and diligence. I find people decades into their photography careers who follow this path of rigorous attention and have a solid technical understanding of the steps needed to achieve the aesthetic they want. I call these people “unwitting ZS users” because they are engaged deeply enough in the process to control the result, but may not use ZS or sensitometry terminology. Finally, those who labor to understand ZS and sensitometry bring a comprehensive and embodied knowledge to the photographic process and are very adept at interpreting situations and problem solving. I’m not saying these are discrete levels of engagement, but that they admit of a spectrum. However, I cannot emphasize enough that full engagement has highly desirable benefits that cannot be denied.

If Zone System is so simple why do so many amateurs, and occasionally professionals, fail to understand it? Well, there is still a barrier to entry – a set of prerequisite lessons and experiments to perform to really embody the knowledge. A few I can think of are –

1. The ability to previsualize an image. This is important because image making requires that we can see in our mind’s eye a meaningful idea to express through the craft of photography.

2. Development of a visual memory so as to not only remember the scene photographed, but also compare tonality from the scene to the canonical Zones.

3. An understanding of the light meter as a tool and how to interpret its data.

4. Technical proficiency and consistent execution of image processing in a darkroom or computer.

5. Time spent testing one’s photographic process to calibrate materials for different subject brightness ranges.

6. Meticulous note taking and self-assessment to learn from mistakes and build up a mental database.

7. A good teacher!

My hope in enumerating these prerequisites is not to dishearten those interested, but to separate the components into digestible ideas that can be worked on individually in service of the greater goal.

3.0 HOW DOES THIS WORK?

Even though ZS is founded on the well-established science of tone reproduction, a great deal of doubt has recently been cast on the process. Even if we treat the middle step – Processing – as mere technical execution, this still leaves some unanswered questions. How can it be possible to move tonality in a scene to a piece of photo paper when the quantity of light is orders of magnitude different? Why does the number of Zones not match the range of tonality in nature, or the range my camera can capture?

3.1 Absolute vs Relative Tone Reproduction

First, keeping the appearance of tones/Zones intact is achieved by maintaining the relative differences of values to each other. Sensitometry studies two kinds of tone reproduction – Absolute and Relative. Absolute tone reproduction entails that the photometric quantities of light emitted in a scene are exactly the same as those in the final display.

Diagram 3.1 - You have probably experienced how straining on the eyes it is to look at bright computer screens in indoor settings as well as how blinding snow is in sunlight. Imagine a monitor so powerful as to output the intensity of the sun into your house!

Obviously, this is extremely difficult from a practical standpoint, and it seems foolhardy to engineer visual displays that emit the same quantities of light as the outdoors. Thankfully, there is a more effective route: lower the overall amount of light emitted or reflected by the display, but maintain the relative difference between all the tones.

Diagram 3.2 - Despite the limited range of photographic materials, they still contain enough range to satisfy our visual system.

Despite the overall transformation, blacks appear black, whites appear white, and middle gray is still middle gray.

Diagram 3.3 - When taking the photo of the cat I took luminance readings (candelas per square meter) with my light meter. I rounded the numbers slightly, but nonetheless one can see how the white fur is 8 times greater than the graycard (a difference of 3 stops), and the black fur is 8 times darker than the graycard. After adjusting the image in Photoshop by eye I metered the luminance values emitted from my computer screen. Once again, you can see that the white fur and black fur are 8 times, or 3 stops, above and below the graycard respectively.
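
To make relative tone reproduction concrete, here is a minimal sketch in Python. The absolute luminance values are hypothetical; only the 8x relationships from the caption above come from my measurements:

import math

# Hypothetical scene luminances in cd/m^2: black fur, graycard, white fur
scene = {"black fur": 25.0, "graycard": 200.0, "white fur": 1600.0}

# The display emits far less light overall; scale every value by the same factor
display = {name: value / 50.0 for name, value in scene.items()}

for name in scene:
    scene_stops = math.log2(scene[name] / scene["graycard"])
    display_stops = math.log2(display[name] / display["graycard"])
    print(name, round(scene_stops, 1), round(display_stops, 1))

# Both columns read -3.0, 0.0, +3.0: the relative differences survive the scaling.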

3.2 Zones and the Human Visual System

If the Zones are being moved to such radically different photometric quantities how is the photographer able to maintain their relative differences? For this Adams and Archer’s visual Zone definitions are a powerful tool of comparison throughout the process. True, we adapt to different brightness levels, but we are extremely good at detecting differences in side-by-side comparisons. (Salvaggio and Shagam 2020, 283) This is why every ZS book recommends carrying a printed comparison card, such as Stouffer’s Zone Card, into the field to help relate objects to Zones. I cannot emphasize enough how important it is to practice looking at objects in your scene and developing the visual memory and tonal discrimination to determine what Zone is closest. Perhaps this quote sums it up best:

Every object, well contemplated, creates an organ of perception within us. - Goethe

Then, by comparing our image at each step of the photo process to the Zones, we can successfully learn how to manipulate the “dials” of contrast, brightness, etc. in the Processing step. By exercising our visual system’s ability to relate scene zones to image zones, the photographer can align their photographic process with their creative vision.

3.3 How Many Zones?

The next big point of confusion arises from questions surrounding the number of Zones, frequently in the form of the accusation that we need more Zones because digital cameras have a large dynamic range. The confusion comes from the erroneous attempt to equate Zones to the Dynamic Range of a camera or film. (Dynamic Range (DR) is the difference between the smallest and greatest recordable signal of a medium. The DR of both negative film and most professional digital cameras is on the order of 13 to 16 stops.)

Equating the DR of film to Zones was never Adams and Archer’s intention. Instead, the Zones are entirely described by their appearance to your eye (Adams 1981, 49). So the Zones are based on our visual system – the DR of our eye/brain system, which in most circumstances is only about a 200:1 or 7 2/3 stop range. (There are some variations based on adaptation to an average light level, but suffice it to say we are about 7-8 stops at the light levels used in print and projection.) When you look at a scene you lose details in the shadows and highlights beyond this 7-8 stop range. You may “feel” the range is greater since you can look into a dark corner of a room and start to make out detail, or look at a bright light and see detail in the bulb. However, in these moments we are unaware of how we are adapting and losing detail on the other end of the scale. (If you still resist this fact I recommend setting up two graycards separated by a solid but thin divider. Light them separately so they differ in intensity by 12 stops or more. Look at both graycards in the same field of view and get back to me on the results.) So Zones 0 through X spanning a 10 stop range are plenty for a pure black, a pure white, and a range of grays rich in detail in between.
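
The conversion between a contrast ratio and stops used throughout these numbers is just a base-2 logarithm. A quick sketch in Python:

import math

def ratio_to_stops(ratio):
    # Convert a contrast ratio (e.g. 200 for 200:1) to stops
    return math.log2(ratio)

print(round(ratio_to_stops(200), 2))   # ~7.64, the eye's static range
print(ratio_to_stops(1024))            # 10.0, the full Zone 0 to X span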

Diagram 3.4 - From Ansel Adams’s The Negative. Notice the tendency to ignore Zones 0 and X since they are beyond the tonal values of interest to us. The Dynamic Range covers 9 stops and the range of objects with texture (or detail, as I like to say) is 7 stops. (Adams 1981, 52)

The research on this was established many decades ago, and I have written about it before in this post, which includes citations of pertinent sources. Some of you may be wondering about HDR monitors, since many tout claims of 13 or more stops. That may be so, but there is little evidence such a large dynamic range is needed. In fact, a graph of tonality from a recent talk on HDR projection is revealing: the straight line portion of the projector curve covers 8 stops; the rest of the 13 stop range is a soft roll-off of shadow and highlight detail. I hope to address this more fully in a future post.

Regardless, the limit of our eye helps explain why the seemingly narrow range of print material works – it fits our eye! The same is true of theatrical projection and computer monitors.

(A quick note about the limited DR of reversal films – since these are direct positives, the camera original must display a visually acceptable contrast range after development. This is why they already have a narrow DR in the field and, in projection, are very contrasty compared to a negative.)

3.4 Is there such a thing as a “Normal” scene?

By logical extension, if our eyes have a static dynamic range of 7-8 stops, then we must have evolved these limits because they provided us with a suitable amount of information about the world in order to survive. Are most scenes in nature about 7 to 8 stops? Rather amusingly, I stumbled upon early sensitometric research where a study showed nearly exactly that. Below is a graph from a study of the range of photographed scenes, and one can see that the majority of outdoor photos were of scenes with a contrast ratio of 160:1 - that’s 7 1/3 stops!

Diagram 3.5 - From a study by Jones and Condit investigating the luminance ratio of 126 photographs taken outdoors (James 1977, 549).

Photographic materials optimally fitting this 7 to 8 stop light range to the final image are legion. In my own class we took the curve of Kodak’s 2383 motion picture release print stock and used it to analyze the optimal contrast ratio of a scene to record onto the negative. Each student was given a different negative stock, and all the values clustered around 7 to 7 1/3 stops of light as the ideal contrast ratio of a scene. I have done this with computer monitors and projectors as well and found the range of their light output to be about 7 to 8 stops.

My reason for reiterating this range of our eye, and how it is integrated into the photographic system, is to provide a more substantive definition of “normal” developing time or “normal” contrast. I’m typically wary of the word “normal” applied in the sciences and arts, but this tells us something important about our materials. A typical scene in nature is about 7 to 8 stops, and the ISO and developing time published by a film manufacturer are designed to hold that same range of light in order to best fit the final image display.

But scenes are rarely ever “normal” in their light range, you say? Exactly, which is why sensitometry and the Zone System exist. These are tools to help us interpret the range of light in a scene and adjust our exposure and processing accordingly for the image medium we are finishing to.

3.5 Tying it Together

Putting these facts together one can begin to appreciate tone reproduction as it has been already studied for the photographic arts. We first perceive a scene with our eye which has a limited static DR. We capture this onto a sensor that often has a much greater DR, but process the final image in a way that once again is tailored to the limits of our eye. The greater DR of the capture sensor (film negative and RAW files) allows us flexibility in post, but how much flexibility is debatable since large scale changes can have significant impacts on quality.

Diagram 3.6 - The scene we observe with our eyes is only about 9 to 10 stops, and 7 to 8 of those stops carry detail of the objects. A camera sensor may have as much as a 16 stop Dynamic Range. We can choose where to expose our scene within the limits of the sensor, but obviously if we stray beyond these limits then information is lost. Also, choosing to place our scene close to either end of the sensor’s DR has impacts on image quality, whether in terms of increased noise or lost tonal detail.

You may have noticed when you began taking pictures that occasionally you obtained great image quality with your “default” settings. This was due to the chance alignment of the range of light in the scene you photographed and the range of tonality successfully funneled through your system to the final display. You noticed that it was great when it happened, but not all the time. So how can we make this alignment occur every time? Well, that comes from controlling the middle section – image processing. I will address processing for film and digital extensively in Part II of this article.

References:

Adams, Ansel. The Negative. Boston: Little, Brown and Company, 1981.

James, T.H., ed. The Theory of the Photographic Process, 4th edition. New York: Macmillan Publishing Co., Inc., 1977.

Salvaggio, Nanette L. and Josh Shagam. Basic Photographic Materials and Processes, 4th edition. New York: Routledge, 2020.

Edits and Corrections:

24th of March, 2020 - Following a conversation with Dwight Primiano I added the writing and image about previsualization to section 2.1.

Exposure Concepts for Cinematography by John G Arkenberg

(Each semester I give incoming students to the Science of Cinematography course a series of questions involving exposure concepts ranging from easy to difficult. I ironically call it Cinematographical Calisthenics, and it not only provides insight into how well-versed a student is with their medium, but also provides an opportunity for students to practice the mental manipulation of all the technical aspects that realize a properly exposed negative. What follows is a guide I wrote in order to refresh their memory on these concepts, and I have decided to post it publicly to receive any feedback. If you would like to try and complete the assignment you can also e-mail me for a copy.)

1.0 Introduction

Both Camera I & II introduce and explain the concepts covered in this supplement. Nonetheless, there is inestimable value in revisiting past lessons to reinforce what you know, and clarify terms and concepts that may be muddled. Even if you feel confident in your knowledge please take the time to read this since I use different terms than what you may have encountered in previous courses.

2.0 The Appearance of Brightness

The appearance of brightness is determined by a complex chain beginning with incoming photons absorbed by the photoreceptors in the retina. This photochemical reaction is then passed through a web of neural circuits and finally to the visual cortex. Since our visual system responds in a nearly logarithmic manner to changes in light, we perceive slight changes of brightness in low light far more easily than under bright circumstances.

A simple log curve. Consider the x-axis as quantity of light (footcandles, for instance) and the y-axis as the response by the human visual system. You can see how in low light situations there is a dramatic response to light, but as the light intensity increases our system clamps down on the signal so as not to be overwhelmed by information.

If you are looking for visual confirmation of this fact, look at a log recorded image such as Arri’s LogC or Sony’s S-Log. The reason the image appears so low in contrast is the compounded effect of the logarithmic encoding of brightness and the logarithmic sensitivity of our vision.

Due to the particular sensitivity of our vision to brightness and color, the visual arts rely on exponential changes in their materials. For instance, a step chart (such as a Kodak Q-14 or an X-Rite ColorChecker) appears to have a linear change in brightness from black to white, but the quantity of pigment is increasing at an exponential rate as the steps get darker. This is also the case for photography, where each “stop” is either double or half the intensity, sensitivity, or length of time, depending on the scale being used. This is a fundamental fact to understand about the relationship of our vision to the visual arts: the combination of our vision’s logarithmic sensitivity with materials displaying exponential changes appears as linear, or equal, steps of change in brightness.

In order to achieve proper exposure the photographer learns to manipulate a wide range of factors that must all follow exponential scales in order to appear correct to us. By exponential I mean that each change involves a factor of two - doubling or halving quantities.

3.0 Exposure

Exposure is the relationship between the sensitivity of the sensor (whether film or silicon), the intensity of light that is incident upon it, and the length of time the sensor is exposed to light. In this document I have separated the different components of exposure into three major categories. First, we will consider light in nature as it falls onto a scene or is reflected by objects. Next, we will look at the factors surrounding the camera that alter the intensity of the light, such as the f/stop of the lens, the sensitivity of the film, and filters. Finally, we will consider the length of time of the exposure. The basic mathematical relationship in a simplified form is expressed as:

Exposure = Intensity x Time

While the above formula is not useful for calculating exposure in a practical situation, it demonstrates the important relationship of reciprocity. Basically, the exposure is the same so long as the intensity of light incident upon the sensor and the time of exposure are changed in reciprocal proportion to each other. For instance, if you lose one stop of light from your unit you can double the length of the exposure time so that the same exposure is achieved. This flexible relationship of give and take between lens aperture, exposure time, and ISO allows the photographer to deal with a wide range of situations as well as create a specific aesthetic.
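
Reciprocity is easy to verify numerically. A minimal sketch in Python (the units are arbitrary; only the trade-off matters):

import math

def exposure(intensity, time):
    # Exposure is the product of intensity and time (H = E x t)
    return intensity * time

base = exposure(100, 1/48)    # e.g. 100 fc for 1/48th of a second
traded = exposure(50, 1/24)   # one stop less light, double the time

print(math.isclose(base, traded))   # True: the exposure is unchanged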

3.1 Exposure Scales

Photographic scales are divided into stops. “Stop” is a term used fluidly in photographic technology. It can refer simply to a point of reference, i.e. “set the lens at this stop” as in setting the aperture to a particular f/stop, or to a difference in quantity by a factor of two, i.e. one stop more light is twice as bright, and one stop less is half as bright. The origin of the word in its photographic use is obscure, but one can think of it much as stations along a route serve as important geographic markers.

Stops mark out points on an exponential scale where each step is either double the previous or half the following quantity. Stops are useful because they simplify quantities into equal steps rather than exponential numbers. For instance, if you have a light outputting 400 footcandles it is easier to say we need to lower the output of the unit by six stops instead of “decrease the output by a factor of 64!”

A change in stops from a single reference point:

              1 stop   2 stops   3 stops   4 stops   5 stops   6 stops
Increasing:   2x       4x        8x        16x       32x       64x
Decreasing:   1/2      1/4       1/8       1/16      1/32      1/64


Changes in stops can also be expressed as a percentage:

              1 stop   2 stops   3 stops   4 stops   5 stops   6 stops
Increasing:   200%     400%      800%      1600%     3200%     6400%
Decreasing:   50%      25%       12%       6%        3%        1.5%


While all the photographic exposure scales are readily known in whole stop increments, they can also be broken down into ½ or even 1/3 stop divisions for greater accuracy. Where possible I have included these finer markings. While light meters read with an accuracy of 1/10th of a stop, and scientific measures of photographic materials are even more accurate, I’ve found 1/3 stop accuracy to be a practically achievable tolerance.
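
For those who like to see the arithmetic spelled out, here is a minimal sketch in Python of the stop relationships above (function names are mine):

import math

def stops_to_factor(stops):
    # A change of n stops is a factor of 2**n in quantity
    return 2 ** stops

def factor_to_stops(factor):
    # The inverse: how many stops separate two quantities
    return math.log2(factor)

print(stops_to_factor(6))          # 64 -> six stops up is 64x the light
print(stops_to_factor(-6))         # 0.015625 -> about 1.5% of the light
print(factor_to_stops(400 / 100))  # 2.0 -> 400 fc is two stops over 100 fc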

4.0 Light in Nature

The first consideration for the photographer involves the amount of light from nature or their lighting units. Within a large range of the visual arts there is only consideration of how a painting or graphic appears, and therefore of the amount of light reflected, or transmitted, to the observer. However, the cinematographer must not only contend with light reflected from objects within their scene, but also control the lighting, or the light incident upon the scene. This difference between incident and reflected light is usually addressed in regards to light metering, but it is crucial to working effectively with control of lighting, exposure, and tonal reproduction.

4.1 Quantifying Incident Light - Footcandles

Incident light is categorized as the photometric quantity of Illuminance – the amount of light falling onto a surface. When using an incident meter you are measuring the quantity of light falling onto the white dome of the meter. The quantity is given in footcandles, which for simplicity’s sake one can imagine as the amount of light from one candle falling onto a surface one foot away from the flame. (The real definition is much more complex, but this image is a helpful starting point.) The higher the number of footcandles, the greater the intensity of light falling onto your subject or meter. In fact, your light meter probably reads footcandles, but you rarely use this feature since it is simpler to input the ISO and frame rate of your camera in order to get a reading of the correct f/stop.

The quantity of footcandles (abbreviated fc) must be increased, or decreased, in an exponential fashion to appear as a linear change in brightness, and each double or half of a quantity is a stop. For example, if you have a light reading 100 fc and you want to lower it by one stop, you put a double scrim in front of the unit and the source now ideally reads 50 fc with your meter. Conversely, increasing your exposure by two stops requires a unit with four times the output. So if 100 fc is not enough you need a unit that outputs 400 fc. Be aware that you can also have fractions of a footcandle in low light situations. A light that outputs ½ fc is twice as bright as a light outputting ¼ fc. This may seem unusual, but keep in mind that we could view this situation from as fine-grained a perspective as counting photons, where eight photons is one stop brighter than four photons.

The practical importance of understanding footcandles is that lighting manufacturers give information about their units in footcandles, or the SI equivalent lux, because these are pure quantities of intensity independent of sensor sensitivity, exposure time, and aperture.

4.2 100:100:2.8 Rule

A useful “rule” to remember is the 100:100:2.8 rule. Honestly, it’s not a rule but a mnemonic that states the correspondence between three quantities - that 100 footcandles with 100 ISO film is correctly exposed with the lens at f/2.8. (This “rule” assumes you are shooting 24fps with a 180 degree shutter.) Even though the photographic press writes the rule with ratio signs, it is not a ratio - decreasing one quantity does not entail that another increases. Rather, as one factor of this correspondence is changed, one must alter the others in a manner that keeps the exposure the same.

For example, a common mistake students make is to simply lower a number in one category if another is raised. If you change your ISO from 100 to 200 and need to change your f/stop to compensate for the one stop increase in sensitivity, you need to set your lens at f/4. I commonly get the answer f/2 because the student is not thinking about the exposure, but simply looking at the numbers.
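
One way to avoid the number-matching trap is to compute the compensation instead of guessing it. A minimal sketch in Python (the function name is mine, and it assumes the light level and exposure time stay fixed):

import math

def compensated_fstop(base_fstop, sensitivity_change_stops):
    # Each added stop of sensitivity requires closing the aperture one stop,
    # and one stop on the f/number scale is a factor of the square root of 2
    return base_fstop * math.sqrt(2) ** sensitivity_change_stops

# ISO 100 -> 200 is +1 stop of sensitivity, so from f/2.8:
print(round(compensated_fstop(2.8, 1), 1))   # 4.0, not f/2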

4.3 Quantifying Reflected Light - Percentage

The amount of light reflected from objects depends on the particular object’s response to the energy of light striking it. This is a very complex topic when considering the color of light and the color of the object, but for the time being we will treat this subject only with neutral tones, which reflect the visual spectrum equally. White and very light objects reflect most of the light incident on the surface and may have a reflectance of 90%. You may notice when you buy copying paper that this is printed on the package. An object that is very dark absorbs most of the light, so something like black velvet may reflect only a small percentage.

You may remember that a graycard reflects 18% of the light incident upon it and this tone is “middle gray” or “Zone V.” Many people wonder why the graycard is not 50% if it is middle gray. That’s because our sensitivity to light is logarithmic so objects reflecting 18% appear in brightness as middle gray. (The actual midpoint is 19.9% reflected light, but that’s another discussion.)

Every double or half of the percentage of reflection is a full stop. So if you are lighting an object that has 50% reflectivity and you swap it for an object with 25% reflectivity, the new object is a stop darker. Similarly, if you swap someone’s dark jacket of 6% reflectivity for a new coat with 24% reflectivity, it would appear two stops brighter.
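
The stop difference between two reflectances is again a base-2 logarithm. A quick sketch in Python using the jacket example above:

import math

# A 6% jacket swapped for a 24% coat: how many stops brighter?
print(math.log2(24 / 6))   # 2.0 stops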

4.4 Inverse Square Law

The intensity of a light source changes depending on the distance to the subject - obviously, the farther a light source is from your subject, the less light falls onto your subject. Lighting manufacturers publish photometric tables that correlate a given footcandle output with a specific distance, or they may offer a useful calculator like Arri’s Photometric Calculator. However, if you have only one known distance and intensity, you may need to calculate the footcandles for the distance at which you know the light will be positioned.

Light intensity decreases at the rate of the distance squared. The simplified formula is as follows, where Iv is the initial intensity and Ev is the final intensity. The units for distance are arbitrary, but here in the US you will find the distance is almost always in feet.

Ev = Iv / distance^2

To illustrate how the formula works, take a light source with an initial value of 200 fc at 1 foot. At each foot it falls to a fraction of its initial intensity by the distance squared. So at four feet the light falls to 1/16th of its initial value and would read 12.5 fc on our incident meter.

Since it is highly unlikely you will work with any lighting unit at a distance of 1 or 2 feet, here is a formula that takes larger distances into account:

Ev  = Iv  / (Current distance from source to subject/Initial distance)^2

This version of the formula is better because, if you are given an initial light intensity at a distance of 100 feet, you need to work in 100 foot increments. Obviously, if the light is moved to 101 feet it would not fall by two stops, and it would be a mistake to divide the footcandles of the unit by 101^2. Moving the light to 200 feet from your subject, however, would result in a two stop loss of light. So for the correct distance it’s best to divide the current distance from light source to subject by the initial distance reported by the manufacturer.
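
Here is the distance-ratio version of the formula as a minimal sketch in Python (footcandles and feet assumed, function name mine):

def falloff(initial_fc, initial_distance, new_distance):
    # Intensity falls with the square of the ratio of the distances
    return initial_fc / (new_distance / initial_distance) ** 2

print(falloff(200, 1, 4))      # 12.5 fc, the 1/16th value from the example above
print(falloff(100, 100, 200))  # 25.0 fc: doubling the distance costs two stops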

5.0 Camera Controls for Intensity of Light

The intensity of light from your scene is next altered by the technology of the lens and camera.

5.1 F/Numbers

F/Numbers, f/stops, or Focal Stops, are the ratio between the focal length of a lens and the diameter of the aperture. For example, a 50mm lens with an aperture diameter of 25mm has an f/stop of f/2.

Focal Length/Diameter of aperture = f/stop

50mm/25mm = f/2

The beauty of this system is that it allows lenses of different focal lengths to pass the same amount of light when set to the same f/stop. For instance, a 25mm, 50mm, and 100mm lens all allow the same amount of light to pass through when set at f/2. In order for this to occur the aperture must be set to a different diameter for each lens: 12.5mm for the 25mm lens, 25mm for the 50mm, and 50mm for the 100mm. The only confusing aspect of the f/number scale is that students initially find it counterintuitive that lower f/numbers allow more light through than higher f/numbers. This is a common mistake in answers that will go away with practice.
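
A small sketch of the arithmetic in Python (function names are mine):

def f_number(focal_length_mm, aperture_diameter_mm):
    # f/number is the ratio of focal length to aperture diameter
    return focal_length_mm / aperture_diameter_mm

def aperture_for(focal_length_mm, f_stop):
    # Diameter the aperture must open to for a given f/stop
    return focal_length_mm / f_stop

print(f_number(50, 25))   # 2.0
for fl in (25, 50, 100):
    print(fl, "mm lens at f/2 needs a", aperture_for(fl, 2), "mm aperture")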

F/numbers are a mathematical construction and do not take into account the radically different designs of lenses, which use as few as six elements of glass or as many as sixteen. The more elements of glass, the more light is lost due to reflections at each air-to-glass surface. Light is also lost within the mechanics of the lens. T-stops, or Transmission stops, account for the difference by measuring the percentage of light lost and compensating with a slightly different sized aperture. For instance, to make a more complex lens design like a 25mm Distagon achieve a true f/2, the aperture may need to be set to 13mm or 14mm in diameter rather than the calculated 12.5mm. Once this transmission-accurate diameter is found, the ring is marked T2 in order to distinguish it from f/stops. All motion picture lenses, except some older makes, are marked in T-stops so that entire lens sets within a manufacturer, and even between manufacturers, match very closely.

Should I be saying f/stops or T-stops? In cinematography people use both terms flexibly and without any loss of understanding, but there is a distinction in that T-stops refer only to the markings on a lens. Your light meter will read in f/stops, and when discussing the general concept of exposure one should also speak in f/stops. When discussing the stop set on a lens one should speak in terms of T-stops. The distinction is academic and not necessary to follow in practice, but is preserved in these exercises for clarity.

Should I be saying “f/2.8 and a half,” “2.8-4 split,” or “f/3.5” on set? There is no incorrect method of expressing an f/stop as long as it is clear. I personally like using fractions since memorizing the ½ and 1/3 stop numbers is impractical and most crew don’t know them. On the other hand, I have been on sets where these numbers were used and one was expected to know them. Pick a method and be flexible, but don’t give people a hard time if they don’t communicate the same way you do.

“Shooting Stop” – A shooting stop is the f/stop chosen for photographing a scene. In older Zone System parlance this was referred to as the Key Stop. The selection of a shooting stop helps the cinematographer and gaffer set all lights for a given range of exposure around the shooting stop. You will also find that cinematographers typically pick a single f/stop to shoot a scene, or even an entire movie, because this keeps the characteristic contrast and resolution of the lenses the same throughout.

Numbers at top are whole stops, middle numbers in blue ink are half stops, and lower numbers in green are one-third stops

5.2 ISO & EI

The sensitivity of a sensor to light is given as a “rating” where the higher the number the more sensitive to light the sensor is and vice versa. The ISO of a sensor is determined by a specific scientific test determined by the ISO organization. These numbers provide a useful means to identify whether a sensor is very sensitive to light (fast) or not very sensitive to light (slow) and to relate these quantities to each other.

EI, or Exposure Index, uses the same numbers as the ISO scale, but the key difference is that EI is not determined by authorized test procedures. EI is chosen by a manufacturer as a “best practice” rating of the sensor, or by a cinematographer changing the rating of a sensor for a particular aesthetic result. For example, a manufacturer may test a film and identify its speed as 400 ISO, but if I like the look of the film rated at 200 and pulled 1 stop in development, I should properly say that I am using the film at an EI of 200.

Numbers at top are whole stops and lower numbers in green ink are one-third stops

5.3 ND Filters

Another important means of controlling the intensity of light is Neutral Density filters in front of or behind the camera lens. This is a filter containing an achromatic pigment that cuts the intensity of light depending on the density of the pigment. These are commonly provided in whole stop increments, although ½ and 1/3 stop NDs can be made and do exist for scientific purposes.

ND filters are labelled with an unusual scale of numbers which are the logarithm of the amount of light lost by the filter. (Check out the logarithmic curve I plotted near the beginning of this article.)

If an ND cuts the light by a factor of 2, then log(2) = .3. As you may remember an ND.3 loses 1 stop of light or cuts light by a factor of 2.

If an ND cuts light by a factor of 4 then log(4) = .6.

The logarithmic scale provides simpler numbers since each .3 step is a 1 stop change in the intensity of light. For instance, it’s much easier to refer to a filter that cuts light by a factor of 128x by its logarithmic number, 2.1.
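
Both conversions as a minimal sketch in Python (function names mine):

import math

def nd_from_stops(stops):
    # The ND number is the base-10 log of the light-loss factor; 1 stop ~ 0.3
    return math.log10(2 ** stops)

def stops_from_nd(nd):
    return math.log2(10 ** nd)

print(round(nd_from_stops(1), 1))    # 0.3
print(round(nd_from_stops(7), 1))    # 2.1 -> the 128x filter above
print(round(stops_from_nd(0.9), 1))  # 3.0 stops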

6.0 Exposure Time

Exposure time is the length of time in seconds that a sensor is exposed to light. In analog cameras this is the length of time the shutter is open, allowing light from the lens to strike the film. For digital cameras, which don’t always have a mechanical shutter blocking light, this is the length of time the light sensitive photosite is allowed to gather photons before being reset for the next exposure. In the motion picture industry the exposure time is governed by the frame rate – 24 frames per second in the U.S. This does not mean that each frame is exposed for 1/24th of a second, because some time needs to be allocated toward moving an unexposed frame of film into the path of the lens. Cameras adopted a semicircular or “half-moon” rotating shutter that covers the film half the time, allowing the next frame to be transported into place behind the lens, and is open the other half for exposure. The shutter opening is expressed in degrees of an angle, and the standard 180 degree angle cuts the exposure time in half.

180/360 = 1/2                         ½ x 1/24 = 1/48th of a second

Due to this engineering of film cameras, 1/48th of a second is the adopted standard across the motion picture industry.

6.1 Frame Rate  - Changing Shutter Speed

Running the camera at a different frame rate for fast or slow motion changes the exposure time. Each doubling or halving of the frame rate is a whole stop change in exposure.


With a 180 degree shutter each frame rate gives the corresponding exposure time:

Frame rate:      3fps    6fps    12fps    24fps    48fps    96fps    192fps
Exposure time:   1/6     1/12    1/24     1/48     1/96     1/192    1/384

6.2 Shutter Angle

Many film cameras allow adjustment of the opening in the shutter in order to change the exposure time without altering the frame rate. A shorter exposure time reduces motion blur in a moving subject and vice versa. Once again, each double or half of the angle is a one stop change in exposure.

Each of these shutter angles cuts exposure down by the angle over 360, with the corresponding exposure times at 24fps:

Shutter angle:    180/360 = 1/2   90/360 = 1/4   45/360 = 1/8   22.5/360 = 1/16   11.25/360 = 1/32
Exposure time:    1/48            1/96           1/192          1/384             1/768

Understanding the relationship between exposure time and shutter angle helps one calculate shutter angles for special situations such as the following:

Filming at 24fps in Europe where the current is 50Hz requires an exposure time of 1/50 of a second to avoid lights flickering. The camera can remain at 24fps if a proper shutter angle is found that gives 1/50 of a second.

x/360 x 1/24 = 1/50  

Solving for x gives you 172.8

Another common scenario is filming with computer monitors which flicker at 60Hz. This requires a shutter angle that gives an exposure time of 1/60.

x/360 x 1/24 = 1/60  

Solving for x gives you a shutter angle of 144 degrees.
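
The same algebra generalizes to any flicker frequency. A minimal sketch in Python (function name mine):

def flicker_free_angle(frame_rate, flicker_hz):
    # Solve x/360 * 1/frame_rate = 1/flicker_hz for the shutter angle x
    return 360 * frame_rate / flicker_hz

print(flicker_free_angle(24, 50))   # 172.8 for 50Hz mains in Europe
print(flicker_free_angle(24, 60))   # 144.0 for 60Hz monitors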

While digital cinema cameras often offer a shutter angle adjustment in the menu, this is just a legacy convention that actually changes the exposure time. In actuality the camera collects and resets each photosite over a specific time. All the math is the same, except that digital cameras can allow “shutter angles” greater than 180 degrees.

7.0 Discussing Exposure within a Scene

Once a student is accustomed to metering and setting exposure on the camera, the next task is to connect these concepts to lighting within a scene and to match them to the limits of one’s medium. The first useful concept in setting lights is the contrast ratio, especially since it is rare that all the lights on set will be at the shooting stop.

7.1 Contrast Ratio

Contrast Ratio is the difference in intensity between two lights, or two metered areas in a scene. This is never given in logarithmic numbers but expressed as a ratio.

Ex. If one light outputs 100fc and a second is 200fc this is a ratio of 1:2

Ex. If one light outputs 5fc and another 20fc, this is a difference of 4 times, so the ratio is 1:4

Here is a chart to compare the contrast ratio to the number of stops difference:

1:2 – 1 stop

1:4 – 2 stops

1:8 – 3 stops

1:16 – 4 stops, etc.

7.2 Subject Brightness Range or SBR (Sometimes called Subject Luminance Range)

Another major concern of the cinematographer is whether the range of light in a scene can be successfully recorded by a medium - in other words, whether the lowest desired shadow detail is not lost in the noise floor and the highlights are not “blown out.” Many refer to the range of exposure a medium can successfully reproduce as latitude, but this is incorrect; that term is used very differently in sensitometry. The correct term is SBR, or subject brightness range – the range, usually given in stops, of light intensity from a scene that the medium can successfully record and reproduce without extensive post-production processes.

For example – if a film stock has 7 stops of SBR, this means it can successfully record and reproduce exposures from 3 ½ stops under neutral gray to 3 ½ stops over. So long as the brightest and darkest objects you wish the sensor to record with detail are within that range, your post correction will be much easier.

8.0 Tying it All Together

While this post may be a slow journey through a thicket of technical topics, my intention is to show how these concepts unite in the successful realization of an image. Moreover, I hope they more clearly shed light on the interrelation between all the technical controls the cinematographer has for exposure. Back in the analog era we were required to be extremely conversant with these concepts and be able to juggle them all in our heads (or, for me, count them on my fingers) to ensure we achieved the correct exposure. Too often I hear the argument that today we can look at a monitor and no longer need to be tied to these technical scales. That may be somewhat true, but I don’t have an Arri Alexa and a calibrated monitor on scouts, nor does my pre-rig team working at the next location on a job. At the end of the day we must be so conversant with the technical aspects of our work that our mind is free to use our technical control for artistic ends. So - it sucks to learn it - but get used to it now.

8.1 Where Does it Go From Here

The successful recording of an image is not the end of the photographic process, but the beginning of a journey toward proper realization. To paraphrase Ansel Adams - if the negative is the score, the print is the performance. The further steps of processing the film or digital image and realizing it in print or projection constitute the work of tonal control, which is covered by the science of sensitometry. This is too great a topic to cover here, but one should be aware that what I have written is just a small part of the photographic journey from subject to image.

Tea Developed Black & White Film (Part I) by John G Arkenberg

Life in the Weeds

In researching material for Science of Cinematography I have reached a peculiar point where most of my time is spent in the weeds. If you spend time outdoors you have surely had the experience where the idyllic calm of the woods is broken by an uninvited perturbed rustling that sets the heart racing and the imagination freewheeling. Timid investigation reveals not a mountain lion but a foraging squirrel who, disturbed in his labors, is just as surprised to see you. I’m that squirrel, although I’m more like a meadow vole in my unassuming poise, out foraging the technical literature of photography for god knows what, and sometimes also making a goddamn racket.

The advantage of being out in the weeds is that eventually one returns to the manicured lawns of civilization with knowledge that proves useful. It’s not unlike desiring to know the best places for grasses and forbs and having a vole at hand to give a satisfactorily delicious answer. A current example is a project my partner, Erika Houle, has begun to photograph members of the Global Tea Hut community in New York with black and white film that is developed in tea.

Looking Back

My past experience with tea as a developer had measured success, leaving only one critical obstacle hanging in the balance. In the earlier years of teaching my course I used to begin the Photochemistry class by taking photos of my students. I used Efke 25 film and a 2K tungsten unit within 3 feet of their faces to get exposure. (I used 25 EI film largely because this slow emulsion had a quick development time.) At the point in the class where I discussed the chemical process I would produce a bottle of Teas Tea green tea to use as the developer, a bottle of lemonade to stop development, and a can of tomato juice as a fixer. Much of this was a show because, as I was forced to reveal, the tea in the bottle was actually green tea I had boiled at home to extract every ounce of chemistry, to which I added Sodium Carbonate as an “accelerator” to get development times down to a reasonable length. Also, I never fixed in tomato juice because it would give the negative a strong red stain. However, stopping development in lemonade was both successful and delicious.

The first tea developed negatives shown on a light table. Notice the unusual orange-brown color of the base + fog.

Achieving a dense negative proved not to be the problem; printing the negative did. Much to my chagrin, no matter how I changed the printing time, the image of the students never appeared - only a black and empty frame. I felt as if I had captured a photo of Dracula, only to discover his visage was as unprintable as his reflection in a mirror is uncatchable.

Step print to determine the standard printing time for the tea negatives. You can see the slightest image of a face in the lighter portions of the print.

As I thought through my process I realized something was amiss between the color sensitivity of my materials and the fact my negatives possessed a deep orange stain from the developer. My light source in the enlarger was tungsten and my negatives were orange so this meant plenty of light, albeit orange in color, was reaching the paper. Then, I realized I had never looked at the spectral sensitivity diagrams of Oriental Seagull GF paper.

Oriental Seagull Graded Paper spectral sensitivity graph. You can see how it is only sensitive to blues and some blue-greens.

The orange base + fog meant the paper was blind to the image on the film.

My next solution was to try and contact print the negatives in the Sun. My hope was that the more even spectrum of daylight would mean a greater amount of blue light could penetrate the negative. I ran outside and flashed the film and paper quickly. However, this only produced the barest ghost image of a face.

A strip of the negatives flashed in direct sunlight. Once again you can see just the ghost image of the faces. This shows how much blue light was absorbed by the negatives and therefore never reached the print.

The eventual solution was to hand the negatives to a friend at a photo lab who could print them through a color process. Color paper is sensitive to a much fuller spectrum, and the orange stain posed less of a problem because it is not too dissimilar to the orange integral color mask of color negative film. Eventually, I had my images and the additional bonus of a great science lesson for my students.

Finally, an image. Tea developed negative printed to color paper through a commercial machine. 

Moving Forward

So time in the weeds produced valuable information and demonstrated future success is possible. I had an inkling the color cast originated from the Sodium Carbonate, since its addition to the brown colored tea instantly turned it orange during mixing. This provided hope of success for Erika’s project as well as a starting point for our own tests. If we could eliminate the orange stain problem, then we could move on to the more prosaic task of matching development time to paper contrast and make beautiful prints. At the time I couldn’t work out a good tea developer because my life dictated moving on to another patch of weeds. To be precise, I had to delve into a patch of digital weeds as I worked to keep up with the analog to digital transition.

I have encountered those who use “in the weeds” in a negative sense (no puns intended), as if there is little use for far-flung journeys in remote places. Rather, these are educational opportunities for deep study as well as experiment, the mother of invention. Even though developing film in tea was a sideshow attraction to the much greater topic of cinematography, I could never have guessed this errant journey would be relevant in the future. So I derive some satisfaction from the work itself, and from being able, eight years later, to dredge up my story from memory in order to assist someone in their artistic process.

Death of the Zone System (Part VI) by John G Arkenberg

THE VIEW FROM HERE

When starting the Death of the Zone System series I really didn't expect the topic to become so sprawling. With great naiveté I assumed each post would cut through misconceptions like so many weeds until the Zone System was clearly revealed. Then, each post not only left me bemoaning my lack of preparation and failure to provide a clear overview of the subject, but also left me holding fragments of questions. Writing each post has been a difficult ascent to a height where I can observe the vast landscape of the topic. However, at this point I feel the need to pause, look back at the route taken, and peer out into the distance in order to obtain some perspective.

While the instructions and claims about Zone System online are arrayed on a spectrum from comprehensive to confused, common mistakes are found throughout. I can distill these errors into two categories: overly simplistic and/or incomplete. The simplicity becomes immediately apparent by noting whether the article or blog begins with the same subject as the first chapter of Adams’s The Negative: a discussion of visualization. The loss of this central concept pulls the rug out from under the photographer, because the entire system is held together by the artist’s ability to see the final image in their mind’s eye. Lacking visualization when making an image is akin to constructing a piece of furniture without having a design in mind. Shortcutting this central concept and jumping straight to Zone definitions and exposure is the same as telling someone the first step in building a chair is to go buy wood. If visualization is not addressed at the beginning (or, for that matter, at all) then one must be suspicious, because what follows is mere technical regurgitation.

Secondly, since ZS ties together the entire process from subject to final image, the explanation must include control of each step in the photographic chain. In fact, most articles stop after exposing the film or electronic sensor. Fewer go further to address how to work with the image in a computer to obtain the desired tonal values. I have found only one book and one website that actually guide the photographer all the way through to the print or an end product. Here is a quick survey, but the bottom line is that the bulk of information available online reduces Zone System to a mere exposure tool.

DPAnswers.com - Good technical info but lacks discussion of visualization and ends with the image in the computer.

Outdoor Photographer, Ken Rockwell, Photography.Tutsplus - Merely discussions of zone definitions and their relation to exposure.

Alan Ross - Since Ross worked with Adams I'm not surprised that he included visualization, but I am surprised his treatment also ends with exposure.

Martin Bailey - Fails to discuss visualization but at least gives a really solid technical rundown on calibrating a digital system for Zone System. However, it doesn't really go beyond the image on the computer screen.

Beyond the Digital Zone System - Not surprisingly, by far the best explanation and application to digital is by someone who read Beyond the Zone System. It covers the entire imaging chain and begins with a whole section on visualization.

Is there any reason for these publications falling short in their explanations other than a lack of research or understanding? In fairness, I must admit that ZS is more easily applied to analog, not because it was created in the analog era, but because the process has a very clear workflow with rigidly defined procedures. In an effort to illustrate this I created the following flow chart.

N.B. - This analog workflow chart is a first draft. I'm not entirely happy with it and will be showing it to colleagues for feedback. Nonetheless, I think it does show the simplicity of the system, and also what I believe to be its biggest failing, which is the drop in MTF from step to step.

Compare this to the digital workflow chart which, while certainly possessing more flexibility with the use of computers to adjust images, is obviously more complex.

Digital workflow is so convoluted and sprawling I really struggled to create this sketch. Any feedback is welcome. What I do find important to observe is how each output from the camera requires very different technical settings.

We should be careful about falling into the "digital is simpler" trap just because one can change the color or contrast with the click of a mouse. In fact, there are many mismatches in this diagram between the gamma and color gamut of your monitor and the output, and transitioning to a different format results in a loss of quality. For example, it is easy for me to post my photos on-line, but why does Squarespace crush my midtones? Why does their resizing algorithm for the web also affect my tonality? I could try to carefully calibrate a LUT for my photos that will be posted on Squarespace, but at some point I have to give up because most people viewing this will not have the same calibration as my monitor. All the technical differences between each link in the digital workflow are also embarrassingly present in the motion picture industry, where many filmmakers judge exposure on set with monitors set to a gamma of 2.4, then color correct the image on a computer monitor set to a gamma of 2.2, and finally project the image at a gamma of 2.6 in theatrical projection. Try to wrap your head around handling this explosion of possibilities when figuring out the sensitometry for digital! I am not claiming that the various standards amongst these outputs will never be reconciled, but at this particular time this is truly a mess.
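
To make the scale of this gamma mismatch concrete, here is a minimal sketch of how the same pixel value lands at a different luminance under each of the three gammas mentioned above. The mid-gray code value is a hypothetical example, not data from any of these systems:

```python
# A minimal sketch of the display gamma mismatch: the same code value
# decoded as a simple power law at gamma 2.2, 2.4, and 2.6.

def relative_luminance(code_value: int, gamma: float, bit_depth: int = 8) -> float:
    """Decode an integer code value to relative luminance (0-1) on a
    simple power-law display of the given gamma."""
    normalized = code_value / (2 ** bit_depth - 1)
    return normalized ** gamma

cv = 128  # a hypothetical mid-gray value in an 8-bit image
for gamma in (2.2, 2.4, 2.6):
    print(f"gamma {gamma}: relative luminance = {relative_luminance(cv, gamma):.3f}")
# gamma 2.2 -> 0.219, gamma 2.4 -> 0.191, gamma 2.6 -> 0.167: the same
# mid-gray lands roughly 0.4 stop darker in the theater than on the
# grading monitor, which is the mismatch described above.
```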

Along with the tangled web of digital standards is a lack of sensitometry information for digital processes. The principles of this science are well documented for analog; one can find books as well as a free explanation on Kodak's website in their Basic Photographic Sensitometry Workbook. The practices are so powerful that one can take any manufacturer's supplied graph and easily determine critical information: the Subject Luminance Range for a film/print combination, the exposure index, and the relationship between Zones to help with light metering a scene. Perhaps a favorite example of mine is how I used Fuji's own graphs to determine that Eterna 500T really had an ISO one stop slower than the EI they published. The evidence is shown below:

From Fuji Technical Publication KB-501E. I have put in marks for the minimum and maximum useful densities for printing to Kodak Vision Color Print 2383. Take a look at where these coordinates intersect the x-axis, which Fuji usefully marked out in camera stops. According to this diagram the film has more overexposure room than underexposure. Actually, any sensitometry expert would realize that Fuji is claiming a higher EI for their film than its actual ISO. (Yes, they can do this because only ISO testing and quantification is regulated whereas EI is just a published recommendation.)

The over and underexposure range is perfectly balanced if one overexposes the film by 2/3 to 1 stop. Originally I found this out through extensive testing of this film in one of my classes in 2010 when we all agreed that visually the film looked best overexposed a stop and printed to look normal. Only a year later I looked at this chart and realized our special discovery was actually just correctly rating the film at 250 ISO.

I can find many tests for digital cameras regarding their dynamic range, in either chart or graph form, but these are largely for marketing and rarely contain actual sensitometric or photometric data to help the user on set. For example, look at the chart Arri created in 2010 with the release of the Alexa demonstrating how changing EI shifts the dynamic range; it provides important data for light metering in these situations. For three years Red published nothing comparable, only the mere dynamic range quantity. Arri should be applauded for being rather frank in revealing this data.

(Left) Arri's chart to demonstrate how exposure latitude changes with EI adjustments. (Right) Red's first diagram of something similar three years later. Notice how much more intuitive to read and informative the Arri chart is in comparison to the Red chart which lacks any helpful information for use in the field.

The reluctance or flat-out lack of information provided by digital camera manufacturers really ends up hurting consumers in terms of their technical intelligence about the medium. This in turn keeps photographers from better understanding how to apply a practical methodology for exposure and tone control, such as Zone System provides, to their chosen materials. However, I suppose creating this confusion does keep their customers ignorant in a way that helps them sell cameras.

So, regardless of my despair at the state, or perceived state, of Zone System education, why should I make it such an issue? Mostly because students find it useful and I don't wish for it to die a death of neglect and misinformation. When an inundation of mediocre explanations washes away better resources, it saturates the information landscape and sows the seeds of confusion and ultimately disappointment for the amateur photographer trying to put it into practice. Do I have any proof of these lost ZS practitioners? Sadly, nothing statistical, but I do have the characterizations of ZS that students bring to me when they begin my class, as well as their questions regarding aspects they find confusing. I can report that despite their initial lack of understanding they have a strong intuitive sense that it is worth learning and will be a useful tool on set.

I hope these writings help heal the wounds from ignorant and disparaging comments such as this:

An example from photo.net of advice from someone who has already made up their mind and is unwilling to change it. His characterization of ZS and sensitometry makes me suspicious that he understands either. To claim sensitometry is important but ZS is not is like saying that all we need is Bernoulli's Principle, but not the joystick to fly the plane. The analogy to unsharp masking is irrelevant because it is the rhetorical trick of the strawman argument. The sign-off to his post is more about his ego than an awareness that he is submitting advice to a community. I take this post as a warning that I must recognize that my own ideas will have to change with evidence.

This form of non-advice (a negation of advice) illustrates that the dwindling use and understanding of ZS is not through any fault of its own. Rather, it is a victim of its own dissemination amongst online communities and publications that are merely producing content at the expense of substance. Just go to any forum and one can read Zone System's epitaph.

Despite my rather negative tone, I am actually optimistic about the future. Since I can survey the Zone System landscape from a vantage built on years of study and practice, I would be a fool not to notice even higher peaks that are within reach even though they lurk in the clouds. So, where can The Death of the Zone System series go in the future? First, I think much more research about the human eye needs to be incorporated. Since I found that characteristic curves for the human eye exist in The Theory of the Photographic Process, I want to use this data to further understand how tonal values appear to us at the end of the imaging chain. Unfortunately, this raises a number of questions for my industry: what is the adaptation level of the eye in a theatrical or household setting and how much does it change? Is there a point in a movie theatre where the screen fills such a large angle of view that we move from dark surround conditions to bright surround conditions? These are great questions to set out on a journey to answer.

Secondly, digital sensitometry seems non-existent. I have located information about Photon Transfer Curves but have not ascertained their relevance to ZS. I have found an article about digital ZS (and the title sounds like something I would write) but I need to go through it carefully because I find some of the claims suspect. Nonetheless, establishing digital sensitometry working methods would make the relevance of ZS to digital clearer, as well as spur the creation of better tools in image adjustment software, improved formats, and perhaps better algorithms. I get the sinking feeling that the photographic industry relies on a great number of ad hoc solutions based on earlier video standards and that decisions are based on nominal research. There is a willful ignorance of lessons that can be learned from the past and how they would apply in the future. We need to ask more questions and perform better research. We need to get above the trees in order to appreciate the entire landscape.

Death of the Zone System (Part V) by John G Arkenberg

When is a Zone no longer a Zone?

When I began writing Death of the Zone System I never expected this minute detail to grow into a full-blown post, but I realized it was worth addressing after reading a specific statement on Martin Bailey's blog. In the past few posts I have drawn from his article on the Zone System repeatedly, simply because it provides a good source of common misconceptions. Don't take my repeated exploration of this post as a condemnation of what is really a helpful and clear resource for photographers.

The comment that inspired this article is this:

"Although I’ve seen heated arguments as to whether or not a Zone is equal to one stop of EV or Exposure Value, Adams himself clearly states that this is how he intended the zones to be used.." 

"Now, in practice, we’ll find that as the dynamic range that our cameras can record increases, strict use of The Zone System requires that we will have to move away from thinking of each zone as one stop of exposure, or, simply use more zones, keeping the zone to EV stop relationship."

The reason it gave me such pause is that while I find many circumstances in which each zone is a one stop change in exposure, I also find plenty of circumstances where this is not the case. Bailey recommends that, now with the greater dynamic range of digital, we need to expand the amount of exposure each zone contains or add more zones. I have addressed the fallacy in this reasoning in parts III and IV, but have yet to really demonstrate how flexible the Zone System becomes once we take a more nuanced view of how the Zone scale is expanded or contracted to adjust for different subjects under a range of contrast ratios.

So, is each Zone equal to a one stop change in exposure? At first I couldn't believe that Adams would so rigidly define each Zone as a 1 stop change in exposure, but looking back to The Negative I found that this is precisely the case (1981, 49).

"...we define a one-stop exposure change as a change of one zone on the exposure scale, and the resulting gray in the print is considered one value higher or lower on the print scale."

However, even though I learned a great deal from the three books of the New Ansel Adams Photography Series, and they are treated as critical reading for any photographer, I must admit that I have found a far more comprehensive, albeit dense, explanation of ZS in Phil Davis' Beyond the Zone System. After guiding the reader through sensitometric testing and plotting of films and papers, Davis gets down to brass tacks about precisely this question (1988, 139):

"Although the '1 stop equals one zone' concept is easy enough to understand and apply when we're dealing with a normal subject, it raises an obvious question when the subject range is greater or less than 7 stops. Does a 9-stop subject have a 9-zone range, and a 5-stop subject have a 5-zone range? Obviously not, because when these subjects are photographed and printed properly, both prints will exhibit the full range of print tones that we recognize as including the seven standard zone grays. It's clear, then, that every subject has seven zones, regardless of its luminance range in stops, because if this were not the case, it wouldn't be reproduced properly in a full-scale print. Therefore, as we've concluded before, zones and stops are not always equal."

This statement correlates much more with my experience using Zone System. Even though Adams originated ZS, I give some priority to Davis since he provides a much more thorough explanation using sensitometry, the chassis underpinning ZS.

There are some important points to note before we get started. First, in order to clearly demonstrate how this flexible relationship between Zones and Exposure Values depends on the nature of the scene and the desired tonality of the final image, I will have to use a number of graphs. I also need to restrict myself to one rigidly defined negative-to-print process, both because each film/print combination has unique results and to keep this post short. Secondly, I need to introduce a term unfamiliar to many: the Subject Luminance Range. The Subject Luminance Range is the exposure range of the subject and therefore could contain a wide range of values; a high contrast scene may have 10 stops of luminance range from the darkest object to the brightest object. The general assumption is that when a photographer supplies a number such as the Luminance Range they are identifying the darkest and brightest objects they want rendered with detail in the final image and how great the range of exposure is in stops between these two objects. ZS practitioners commonly want these objects in Zone II (the darkest Zone that still has texture and space) and Zone VIII (the lightest Zone that still has texture and space). The total number of zones from II to VIII is 7, which is where Phil Davis gets the 7 stop quantity in the definition above.

To start somewhere with the utmost basics take a look at this simplified photographic imaging chain for film:

Since everything in this chain must be tailored to the limits of the human eye, all exposure range limits must be worked out from the end of the chain to the beginning. The image below illustrates how finding a capture medium's luminance range requires one to work backwards through the photographic chain; the limits of the display dictate the limits of the capture medium regardless of its dynamic range, which in turn provides a subject luminance range the photographer must work within.

In cases where the photographer cannot adjust the exposure range of their subject, they must alter their photographic process to record a greater or lesser subject brightness range while still working within the limits of the display medium.

Since I work in motion pictures my chosen example materials will be the negative stock Vision3 500T 5219 and Vision Color Print Film 2383. First, I will perform a conventional analysis of 2383 to determine the exposure range of the print film and look at how Zones are displayed under projection. In order for this post to be clear I moved the data from Kodak's Tech Publications into my own graphing program. This allows me to make each unit on the x and y-axes the same size, mark the axes in 0.3 increments (a whole stop change in exposure or density), and also provide a useful grid.
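
As a quick aside on that 0.3 convention, here is a tiny sketch of the arithmetic: one stop is a doubling of exposure, and log10(2) is approximately 0.301, which is why a 0.3 increment on a log exposure or density axis corresponds to one camera stop.

```python
import math

# One stop doubles the exposure, so on a log10 axis a stop spans
# log10(2) ~ 0.301, which the 0.3 grid increments approximate.
LOG10_2 = math.log10(2)

def log_range_to_stops(delta_log: float) -> float:
    """Convert a log10 exposure (or density) range to camera stops."""
    return delta_log / LOG10_2

print(f"{log_range_to_stops(0.3):.2f}")  # ~1.00 stop per 0.3 increment
print(f"{log_range_to_stops(1.2):.2f}")  # ~3.99 stops, the print range found below
```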

Kodak Vision Color Print Film 2383 - A very steeply curved stock with a very high density in order to provide an image that extends outside the dynamic range of the adapted eye in the movie theater such that there is a rich black, brilliant white and rich midtone range.

Starting with the print stock I determine the range of useful densities from the toe to the shoulder using methods I learned in Beyond the Zone System. The minimum useful density, which typically defines the limit of Zone VIII, is located at the Base + Fog (where the toe bottoms out) plus 0.2 log density. This locates the minimum density at coordinate 0.24, 0.24. The maximum useful density, which typically defines the limit of Zone II, is found by taking 90% of the maximum density of the shoulder. This produces coordinate 1.33, 3.67. The difference between these two points on the Exposure Scale axis is a log exposure of 1.2, or 4 stops.

Same chart but showing how the exposure range of the stock is very narrow - log 1.2 or 4 stops - when looking at the x-axis. However, this is spread out into quite a wide range of rich tones under projection.

The total amount of density the print covers is 11.4 stops, which is well beyond the dynamic range of our eye under dark surround conditions. This is exactly why the projected image contains a full range of tonality. While the image above tries to relate the print to our eye, doing this properly would require the characteristic curve of our eye to demonstrate how the image would appear to have linear steps of tonality from black to white.
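
For anyone who wants to replicate this limit-finding numerically, here is a minimal sketch of the procedure; the (log exposure, density) sample points are hypothetical stand-ins, not values from the 2383 publication.

```python
import numpy as np

# Sketch of the Beyond the Zone System limit-finding: minimum useful
# density = Base+Fog + 0.2, maximum useful density = 90% of D-max.
# These sample points are hypothetical, not Kodak's published data.
log_e = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6])
density = np.array([0.05, 0.10, 0.35, 0.90, 1.70, 2.50, 3.20, 3.70, 3.90])

d_min_useful = density[0] + 0.2     # toe limit (Zone VIII side)
d_max_useful = 0.9 * density.max()  # shoulder limit (Zone II side)

# Density rises monotonically with log exposure, so the curve can be
# inverted by interpolation to find the exposures at those densities.
log_e_min = np.interp(d_min_useful, density, log_e)
log_e_max = np.interp(d_max_useful, density, log_e)

log_range = log_e_max - log_e_min
print(f"useful exposure range: {log_range:.2f} logE "
      f"({log_range / 0.301:.1f} stops)")
```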

Moving on, now that we know the narrow exposure range of the print film, a log exposure of 1.2, we can use this to determine the exposure range of the negative.

The magenta-forming layer of Kodak Vision3 5219 with normal development. I have moved the data from the Technical Publication to a graph with the same size axes as the print stock curve used above.

Using the limits of 2383 we can see where these intersect the sensitometric curve of 5219 under normal development (I'm using only the magenta-forming layer of the film). These limits can then provide useful information about the luminance range of a scene, and therefore ZS info. Notice that this film holds a subject luminance range of exactly 7 2/3 stops (the 200:1 ratio from my previous posts!).

As with the print, I identify the minimum useful density by going 0.2 log density above the B+F. (N.B. On the negative this determines the start of Zone II on the curve.) From this value I derive the coordinate -2.68, 0.78. We can't perform the same procedure as with the print stock for locating the maximum useful density because negative film has an incredibly long shoulder that maxes out at a density far beyond anything the print can contain. That's fine, because we can use the acceptable log exposure range of 2383 to define the highest useful density: 0.78 plus 1.2 equals 1.98, which gives the coordinate -0.5, 1.98.
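
Here is the same idea as a sketch: the print's 1.2 log exposure range is laid onto the negative's density axis, and the negative's curve is inverted to find the corresponding exposure range. Again the sample points are hypothetical stand-ins for the 5219 curve, so the result will not match the 7 2/3 stops measured above.

```python
import numpy as np

# The negative's density modulates the print's exposure, so the print's
# acceptable 1.2 logE range is measured on the negative's *density* axis.
# Hypothetical stand-in points for the 5219 magenta-forming layer:
log_e = np.array([-3.2, -2.8, -2.4, -2.0, -1.6, -1.2, -0.8, -0.4])
density = np.array([0.55, 0.68, 0.90, 1.15, 1.40, 1.62, 1.85, 2.05])

d_low = density[0] + 0.2   # 0.2 over B+F: where Zone II starts
d_high = d_low + 1.2       # plus the print's acceptable range

e_low = np.interp(d_low, density, log_e)
e_high = np.interp(d_high, density, log_e)

slr_stops = (e_high - e_low) / 0.301  # usable Subject Luminance Range
print(f"usable subject luminance range: {slr_stops:.1f} stops")
```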

Despite the two graphs being analyzed separately, the relationship between the negative and print can be easily understood by turning the print stock graph 90 degrees clockwise. Doing this helps reveal how the step of contact printing the negative onto the print stock transforms all the tonalities and how they are displayed to the human eye.

From Lorenz, Zakia and White's famous New Zone System Manual (1976, 11). Notice the delightful statement at the bottom that ties together the photographic system and the mind of the photographer.

This cartoon illustration really helps spell out how tonality is transferred from negative to print and how the limits of the print define the ideal Subject Luminance Range for the scene. So let's look at a real world example using 5219 with Kodak's recommended normal development.

Notice how this illustrates my claims in previous posts: the DR of our vision is about 8 stops, and the photographic chain is established to capture a Subject Luminance Range of about 8 stops and then display the image in such a manner that all the Zones with detail are within the DR of our visual system. Everyone always begins Zone System discussions in these lighting circumstances because they are about the average luminance range of light on a sunny day. When the scene is of average contrast, and film development is "normal" and matched correctly to the exposure limits of the print, then a Zone does equal close to a 1 stop change in exposure.

We can see how true this is by laying a Zone System scale on the slope of the negative's curve and seeing how much we have to expand or contract it in order to fit Zones II through VIII at the minimum and maximum useful densities.

By putting this ruler above the slope of the negative and then stretching the ruler so that Zones II and VIII are at the limits of the Subject Luminance Range, I can see just how much the tonality is adjusted to fit the exposure range of the print. In this case, Kodak's recommended normal development is just a touch lower than normal contrast, so each Zone has been stretched slightly greater than 1 stop. (Notice where each Zone falls in relation to the vertical grid lines.) This difference is slight enough as to be negligible in practical working conditions, so basically the 1 zone equals 1 stop relation holds.

However, the lighting ratio of a scene is realistically not always going to match up with the useful Subject Luminance Range of the system. This requires cinematographers and photographers to use lighting to adjust for this fact, but changing the developing time of the negative in order to capture a greater or lesser range of exposure is also accepted practice. (For the time being I want to set aside discussions of how changing development times impacts film grain.)

Let's say, for instance, that the luminance range the photographer wishes to hold is greater than 8 stops because there is a delicate texture in a cloud. In terms of Zone definitions, they are moving the delicate highlight from Zone IX to Zone VIII so that it is rendered on the print. Pulling the film's development lowers the contrast of the film and allows the densities from that bright highlight to fit within the Exposure Range of the print.

Kodak 5219 pulled one stop. Notice how the exposure range the negative transfers to the print is 9 stops. This data was extracted from a sensitometry test performed on 5219 during the Fall 2015 semester of my Science of Cinematography class.

Since transferring these recorded densities to the print will cause the Zones to expand, we can lay our ruler along the slope of the curve and see how the Zones have expanded as the contrast has softened.

Notice how each Zone is now stretched to about 1 1/3 stops. This is useful information to know when working in the field because now one can take light meter readings and know where objects will fall tonally in the final print.

In comparison, it is worth analyzing the opposite case, where a photographer needs to increase contrast on a foggy day. Below is 5219 pushed 2 stops in development. You can see that the Subject Luminance Range that fits the print is now lower contrast: 6 1/2 stops of range.

Laying the Zone System ruler on it, we can see how the contrast of the image will be increased and how each Zone has shrunk to about 2/3 of a stop. This is not terribly dramatic, but it revealed to me that this particular stock does not alter its tonality characteristics as dramatically when pushed as when pulled.

Once again, despite the contrast of the scene, the photographer changes the negative such that it fits the same Exposure Scale of the print stock, giving a full black-to-white print. Looking at how the Zones relate to exposure, we now find each Zone is less than a stop.

Through this sensitometric analysis of Zone System we can arrive at the conclusion that 1 stop equaling 1 Zone is not a dogmatic truth; rather, it changes based on the contrast ratio of the scene and how the photographer wishes to treat the tonality in their system. The relationship also changes based on films, developers, and the print medium. For digital technology the equivalents would be changes to the camera settings, the curves used in Photoshop, and the electronic display. Zones may be one stop in Ansel Adams' definition, but this is certainly not the case in practical work. This led Phil Davis to provide this critical new definition of Zones in the glossary of his book (1988, 213).

Zone: An ambiguous term. In this book, any one of the several divisions of the gray print scale that represent separate, consecutive luminance of 1 stop in the normal subject. In subjects of other than the normal 7-stop range, each print zone represents one seventh of the total SBR, whatever it is.
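
Davis' definition reduces to simple arithmetic worth spelling out: seven textural zones always span the print, so each zone's width in stops is the subject range divided by seven. A quick sketch, using the subject ranges from the examples above:

```python
# Davis' rule of thumb: the print always carries seven textural zones
# (II through VIII), so each zone spans SBR / 7 stops.
for sbr in (7.0, 9.0, 6.5):
    print(f"{sbr}-stop subject: each zone spans {sbr / 7:.2f} stops")
# 7.0 -> 1.00 stop per zone (the "normal" case above)
# 9.0 -> 1.29 stops per zone (~1 1/3, matching the pulled 5219 ruler)
# 6.5 -> 0.93 stop per zone (a real pushed curve can deviate further,
#        as my 2/3 stop measurement above shows)
```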

So to return to the question I've asked in previous posts: is this a mistake on Adams' and Archer's part in creating and explaining the Zone System? Well, partially, but then again the purpose of ZS is to provide photographers a system that grounds their medium in the visual perception of brightness and gives them control over their tonal rendering without getting too bogged down in math or graphs. In some sense I should be laughing at myself for making this system precise by trotting out a bunch of math and graphs. But this is the beauty of ZS: it provides useful tools that empower the photographer and are flexible to their unique vision without the weighty complexity of sensitometry. However, I can defend myself by pointing out that in order to clarify technicalities I must rely on the graphs and mathematics.

So why do so many on blogs and forums make such a strong claim if they lack evidence? One would imagine anyone using ZS would discover through their photographs taken on low or high contrast days that they are expanding or contracting Zones to compensate for the change in lighting ratio. Nonetheless, I think there are a few reasons for this. First, the people who deeply understand ZS practice their photography but don't feel the need to shoot their mouths off online. Second, many of the books and blogs I find discussing a digital ZS typically fail to describe it in its entirety. For instance, most forget that the first step is visualization and skip straight to Zone definitions, and many leave out the intricacies of understanding what changes were made in the computer and how they relate to the subject as encountered at the moment of exposure. This is all well and good as a shorthand approach, but no one should make bold sensitometric claims from a partial explanation. Lastly, I think the fluid nature of Zone appearances and their relation to exposure is more easily apparent to those who learned with analog. Without an LCD screen and a histogram we lived and died by our light meter. Extensive ZS tests provide critical information to take out into the field to assist in metering. For instance, my information for 5219 in my notebooks looks like this:

The EI values were also calculated during 5219 stock tests during the Fall 2015 semester of Science of Cinematography.

Here is a nice visual representation of the expansion and contraction of Zones that helps with visualization.

Digital capture in RAW does not require one to make a decision in the field about how the image must be processed; this is done later in a computer. By shifting that decision to later and making it deceptively simple, the photographer's understanding of what range of exposures was recorded, and how those exposures were affected by the curve applied in the computer, is obfuscated. I'm not saying this is bad, but it does create the conditions for a disengagement of the mind. All progress involves a sacrifice, and while it may be easier to change the image in post, there is the problem of disconnecting from a deep understanding of the conditions one encounters in the act of taking a photo.

My purpose in addressing this one claim should not be viewed as an excoriation of those who make this mistake, or of those who espouse or use a shorthand version of ZS. My hope is that by taking this one single sentence and exploring it in such detail, photographers will be inspired not to take everything they read online as gospel (even my claims) and to really pay close attention to their process. The wealth of information produced and available to us on the internet is immense and provides easy answers, but this should never be a substitute for performing one's own research.

Death of the Zone System (Part IV) by John G Arkenberg

N.B. Death of the Zone System is a series about common misinterpretations of Ansel Adams' and Fred Archer's famous Zone System as photographers apply it to digital technology. This particular post is a continuation of Part III, which should be read in preparation for the ideas of this piece.

Previously I addressed the oversimplification of Zone System that correlates the Zones directly to the dynamic range of the camera, as if this were the only part of the photographic process worth considering. Mostly I spoke to the myth that film has a smaller dynamic range than digital sensors, and why ZS therefore needs no alteration to the number of Zones and their definitions. In this post I will continue to explain where the number of Zones arises and upon what photographic or visual art materials it is based.

An important part of teaching ZS is emphasizing its whole-system nature, from the subject as perceived by the photographer all the way to the tonality desired in the final print, projection, or electronically displayed image. Since Adams was creating the system for the reflective print, he built it around the visual limits of photographic paper. (These visual limits are not exclusive to photographic paper but are the same for any visual art involving pigment on or in a surface made visible by light reflecting off that surface.) Whether he knew it or not, by defining the limits around a reflective print he was actually giving us information about the ultimate end of the photographic chain: the human eye and the brain processing the information.

I personally was unaware of the limits of our visual system, largely because most textbooks on photography and cinematography skimp on research about it and often only speak to the dynamic range of the eye through its entire adaptation from night vision (scotopic) to daytime vision (photopic). Since our entire visual range throughout adaptation is enormous, I was completely surprised when I read this statement in C.L. Hardin's Color for Philosophers (1988, 24-25).

It was earlier remarked that the eye is capable of operating over a huge dynamic range of light intensities, on the order of ten trillion to one. But for any single scene with which the eye can deal, the intensity ratio of the brightest to the dimmest object rarely exceeds 200 to 1, and this is about the limit of differences that can be signaled by a visual neuron (Haber and Hershenson 1980, 50-51; Evan 1974, 195; Barlow and Mollen 1982, 102). 

After some quick calculation to translate this ratio into stops (log2 of 200 is about 7.6) I was shocked to discover that our visual system has a DR of about 7 2/3 stops when stabilized at a specific light level. My initial reaction was incredulity and a knee-jerk rationalization that our eyes must be adapting through a wide range when viewing an image. And yet, I quickly came to realize this is far from the case, because we do not view prints and projected images under changing lighting, so they provide a nearly stable quantity of light to our eye.
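
For anyone who wants to repeat the quick calculation, converting a contrast ratio to stops is just a base-2 logarithm:

```python
import math

# Each stop doubles the light, so a contrast ratio converts to stops
# with a base-2 logarithm (and back with a power of 2).
print(f"{math.log2(200):.2f}")  # 200:1 -> ~7.64 stops, i.e. about 7 2/3
print(2 ** 8)                   # 8 stops -> a 256:1 ratio
```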

I have continually put this 8 stop quantity to trial in the research for my class, only to continually find a great deal of scientific and practical support. (At some point in the future I hope to write a further discussion of the limits of our visual system with some more in-depth research, since there is some nuance here worth exploring.) Adams probably was not aware of what little research existed at the time, but he ultimately based his definition of Zones on the appearance of the final image. I encourage those who go on to read or re-read The Negative from these posts to pay close attention to where he separates discussions of the appearance of Zones from the materials capturing or displaying the image.

The Zone System allows us to relate various luminances of a subject with the gray values from black to white that we visualize to represent each one in the final image (1981, 47).

In this definition that starts Chapter 4, Adams is discussing Zones as previsualized or observed luminances and how they appear on the print, not in a manner that addresses camera technology. This reveals how much we should look to our visual system as what first defines the limits of our medium, before considering the display technology, then the capture medium, and finally the subject brightness range. It has always been the case in sensitometry that analysis of a photographic chain is performed backwards from display to capture medium. If you ever find a photographer making claims about Zones, Zone definitions, or how many stops are in a Zone, you must be wary if their discussion revolves purely around the capture medium, because this is a mere part of a greater chain. Neither film nor digital sensors (recording in RAW or Log mode) render tonality in any way resembling how it should appear to the eye, so why try to tie them directly to Zones? This short circuit approach to ZS is what causes conceptual trouble and leads to the pitfalls I described in part III.

So, leveraging this knowledge that the eye at a given level of adaptation has a dynamic range of about 8 stops, one can really understand how Zone System works: tuning our chosen photographic process to our visual system. Take a look at this diagram on page 52 of The Negative.

Note the 8 stop dynamic range from Zone I to Zone IX. An important point I should make is that, upon inattentive reading, the surrounding text can make it appear that the dynamic range or textural range depicted is for the negative. What he is actually discussing is exposures that give the appearance of this dynamic range or textural range.

Notice that the dynamic range depicted is again 8 stops. I tried to locate in the text surrounding this chart exactly what this dynamic range refers to. Camera? Print? It's not entirely clear, because he is addressing the visual appearance of Zones in the context of the entire process. Whether or not he explicitly understood it, he created the Zones based on the limits of human vision, and this is reflected in scientific research performed much later. This is nicely illustrated in a diagram I found in the 4th edition of The Theory of the Photographic Process (James 1977, 545).

The scene curve on the right shows the eye adapted to daylight with an average log luminance of 10,000 candelas per square meter, and how the visual system has a dynamic range of 7 1/2 stops at this light level. (Log luminance of 3.5 - 1.25 = 2.25; 2.25/0.3 = 7.5 stops.) The curve of the print on the left is extremely similar in order to emulate these conditions. Discussions about the curve of the transparency will be addressed in a future post, since the characteristics of our eyes in dark surround circumstances can change based on levels of adaptation. I must admit, there is some nuance here, and I would like to further explore the level of adaptation of the eye for a monitor versus theatrical projection to further refine my claims.

This chart displays the curve of the human eye on the right versus the curves of a print and projected transparency. Notice how the limits of the human eye are quite narrow - 7 1/2 stops, and how the print matches this so that it appears the same as the original scene.

Another of my favorite diagrams from my research is shown below; it depicts the sensitometry curve of the eye not just at the photoreceptor stage, but at each stage of neural processing subsequent to the cones and rods.

From Optical Radiation Measurements Volume 5: Visual Measurements (Bartleson and Grum 1984, 252). For those unaccustomed to Log Luminance scales, the farthest right curves (photopic adaptation and bright surround) for each transfer function have the following ranges in stops: Cones - 10 stops, Bipolar Cells - 6 1/2 stops, Ganglion Cells - 6 1/2 stops. I am currently looking for the original test data to make sure there is no information in the shoulder of the curve, as per my recommendation in the previous post.

By now I hope it is clear why the Zone System ties together our eyes and the photographic process, and why the definitions of 11 Zones will remain the same no matter the technology and its dynamic range. (This may sound heretical, but it will probably remain the case even with HDR monitors and projectors.) Simply put, the majority of image displays, whether reflective or transmissive, look correct to our visual system so long as they produce 8 stops of tonality framed by an extra stop or stop and a half on both sides to give us the visual sensation of a rich black and a pure white.

My hope in dispelling the mistaken idea that Zones directly correspond to sensor DR goes beyond technical clarification. The problem also involves the dissemination of mistaken information around the internet and how these ideas are spread with a veneer of authority. The lack of research and unquestioned assumptions of online "experts" is irresponsible. In my previous post I used a quote from Martin Bailey's blog, which is an excellent source of information on photography and which I recommend to amateur photographers. My reason for using his blog is simply because he handed me the best instance of this mistake. He too equates Zones to sensor DR, only to find the same numerical mismatch of 11 Zones to the 12 stops of DR of his Canon DSLR. However, just after this he claims:

"... so it’s a bit wider than Adams’ definition, but in practice I’ve found that even now, thinking of each zone as a stop of exposure works fine."

This last comment is an offhanded acknowledgement that he has found ZS to work just fine despite it having a smaller DR than his camera. That's because ZS works when you base your workflow around your visual system and not an entirely arbitrary relation of numbers found in books and discussions. Martin Bailey gets good results because he has a strong visual memory of the Zones, a well-calibrated monitor on which to look at his photographs, a particular curve he applies to the RAW image from his camera, and he takes photographs assuming the limits of ZS, which are based on the human eye. This is why analysis from the final display backwards through the capture medium to the subject is so important; if one blindly makes adjustments through each step of the photographic chain then the photographer will not fully understand their process. Ideally, they should know the real photometric values of the light reflected or emitted from the display and how this relates to the light intensity through each step of the process, all the way back to the light reflected or emitted from the subject, in order to have a comprehensive understanding.

Now, one very important clarification I should make about the 8 stop dynamic range of human vision: this figure is generally true but can change somewhat based on viewing circumstances, such as whether our eye is observing a bright image in a dark room or in a brightly lit location (in scientific parlance, dark surround and bright surround). I would like to address this in a piece further down the road, as I want to obtain the full data in order to extrapolate some important lessons.

Finally, my hope after parts III and IV is that anyone pursuing the photographic arts begins to put themselves back into the picture, all puns intended. That is, to learn and pay attention to how they perceive the world and how display technologies work. The stress on camera dynamic range is really more of a marketing tool, a number that grows over time to impress gear heads, but it is not necessarily important to ZS. (I will demonstrate this in Part V of this series.) Be careful to avoid simplistic numerical associations and first learn how to see.

Death of the Zone System (Part III) by John G Arkenberg

[Header image: 25th of June 2017 Header.jpg]

In my last post (many, many months ago) I explained my own education in the Zone System and how it empowered my visualization skills and technical control to achieve a desired image quality. Being a Zone System practitioner of many years in the analog medium makes translating the same concepts and practices to digital workflows fairly straightforward. Yet for those beginning to learn ZS, confusion abounds, since the photographers and bloggers writing about its application to digital photography typically lack a theoretical and practical understanding of sensitometry, the chassis upon which ZS is built. Sensitometry, the study of light-sensitive materials, provides not just information about how a medium responds to light, but also predictive power over how light intensity captured from a scene will be depicted as brightness through the entire imaging chain, from light in the real world to the human visual system. Zone System is an easier way to understand sensitometry, but I think it gets short shrift from photographers who think it outdated or fail to grasp its concepts. In the next two posts I would like to address a recurring question that leads to confusion: why are there 11 Zones and where did this number come from?

The Zone System scale. If you want a quick rundown of ZS and the visual definitions of these Zones I recommend looking at Norman Koren's site here. Some photographers dispute having 0 and X since they seem largely irrelevant, but in Part IV I explain the importance of leaving them on any ZS scale.

While arguing about something as simple as the total number of Zones may seem petty, I find amateur photographers can be confused about why there are 11, despite the fact that there are fundamental reasons why this number was chosen and why it works for a wide range of materials. The relationship between Zones and photographic materials seems simple: if each Zone represents a stop, then many photographers conclude their film or digital sensor should be able to capture 11 stops of light. I have neither the time nor the space to inventory all of the instances I have encountered, but I have already written about one in particular, where Joe Marine on the No Film School site claimed that a false color exposure tool in the Red camera is a Zone System with 16 Zones to match the 16 stop dynamic range of the Red sensor. Perhaps a much more polished presentation of this same analogy is on Martin Bailey's blog and podcast, where he claims:

"I actually have a measured range of 11.9 stops on my Canon EOS 5Ds R, and DxO Mark have it at 12.4, so we're at around 12 stops of dynamic range in digital terms. This is the full range from full black to pure white, and I consider almost that entire range to be useful, so it's a bit wider than Adam's definition..."

Here Martin Bailey is also equating the entire range of Zones (each a one stop change in exposure) to the dynamic range of his camera. Now, Martin Bailey does not say this explicitly, but many other sources move forward with the idea that Adams' 11 Zones were determined by analog technology, which has been surpassed by digital, and that Zone System is therefore either obsolete or needs modification. Sometimes the claims for digital's superiority lead people to connect unrelated concepts between the two mediums, which further compounds the confusion, such as this claim:

I think the zone system is still important to understand in the digital world, although we now have 256 zones rather than just 10.

By this logic black and white film has either a bit depth of 3.3 (since log2 of 10 is about 3.3), or an infinite number of Zones since it is, after all, analog.
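
A quick sketch shows why the "256 zones" idea conflates code values with zones: in a linear 8-bit encoding the code values are not spread evenly across stops at all, so they cannot stand in for equal perceptual Zones.

```python
import math

# Count how many linear 8-bit code values fall into each stop below
# white: half of all values sit in the single brightest stop.
counts = {}
for v in range(1, 256):  # skip 0, which has no defined stop
    stops_below_white = math.floor(math.log2(255 / v))
    counts[stops_below_white] = counts.get(stops_below_white, 0) + 1

for stop, n in sorted(counts.items()):
    print(f"{stop} stop(s) below white: {n} code values")
# 0 -> 128 values, 1 -> 64, 2 -> 32 ... the deepest stop gets a single
# value. Code values are quantization levels, not perceptual Zones.
```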

Regardless, the argument I frequently encounter runs something like this:

a. Zone System was designed for film, which has a limited dynamic range, and so could only at most record about 11 stops of exposure.

b. The modern digital sensor can record 12, 13, 14+ stops of exposure.

Therefore: Zone System is obsolete, or it should be modified to contain more Zones.

So where is the fallacy? In this post I want to discuss the dynamic range of film in order to disprove the first premise of the argument. In a subsequent post I will explain what part of the photographic process determined that there be 11 Zones, and how knowledge of a few simple facts provides information that is revelatory for anyone in the visual arts.

A common misinterpretation of the history of Zone System by photographers and bloggers, whether intentional or not, is that the number of Zones originated from the analog medium's dynamic range, in particular that of negative film. If this were the case we should observe an 11 stop dynamic range in Kodak's sensitometry charts for any given negative stock. Just to set down definitions, keep in mind that dynamic range is the range from the smallest to the greatest signal recordable by a medium. For digital cameras this is the amount of light recorded between the noise floor and full well capacity. In analog this is the amount of light from just above Base+Fog density on the negative up to where the curve tops out into the shoulder.
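
Under the definition just given, a digital sensor's dynamic range in stops is simply a base-2 logarithm of the ratio of those two signals. A minimal sketch with hypothetical electron counts, not any real camera's specifications:

```python
import math

# Dynamic range per the definition above: the ratio of the largest
# recordable signal (full well) to the noise floor, in stops.
# These electron counts are hypothetical, not a real camera's specs.
full_well_electrons = 90_000
noise_floor_electrons = 15

dr_stops = math.log2(full_well_electrons / noise_floor_electrons)
print(f"dynamic range: {dr_stops:.1f} stops")  # ~12.6 stops
```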

I intentionally selected Tri-X 400 in D-76 developer since I know this to be a film and developer combination used by Ansel Adams. Below are the current sensitometry charts; I used the 6 min curve because there seems to be widespread agreement this is "normal" development.

From Kodak Tech Pub F-4017. Marks in pencil are inserted by me using methods I learned in Beyond the Zone System by Phil Davis. I should point out that this method of finding a film's DR is incorrect because the shoulder of the curve has not been provided in its entirety by Kodak.

By this analysis I obtained a DR of 10 1/3 stops, which fits the conclusion typically thrust upon film that it only has a 10-11 stop DR and that this is therefore what Zone System is based on. However, this DR analysis, which I've seen performed again and again by those who don't really work extensively with sensitometry, is flawed. Notice how the curve of Tri-X abruptly stops at a log exposure of 0.34 and a density of 1.7. This is not because the film can no longer record further exposure as density, but because Kodak stopped graphing the curve. Film curves can be charted over quite a long range, far beyond what is needed for print or projection stocks; Kodak is basically supplying you only with the info needed for basic photographic purposes.

I cannot find a fully mapped curve (though I have made them for my own chosen negatives, as you can see in DotZS Part II), but I did locate a depiction of a full film curve in Langford's Advanced Photography, in a section on how extremely high levels of exposure produce the phenomenon of solarization. For your interest, I calculated the DR of this unidentified film to be approximately 13 stops.

From M.J. Langford's Advanced Photography 1974, page 169

So, the only way I can calculate the DR of Tri-X is to use a French curve to best fit the shoulder and extend the line past where it cuts off. I extrapolated the following DR:

Using a French curve to extend the shoulder of a curve is a practice recommended in Phil Davis' excellent book Beyond the Zone System. The best method of finding the curve of the shoulder would be to perform one's own sensitometry test and create a new chart with a densitometer.
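
A numerical analogue of the French-curve extension, for those who prefer software to drafting tools, is to fit a smooth S-shaped model to the published points and let the model carry the shoulder past where the graph stops. The sample points below are hypothetical stand-ins for values read off a published chart, and the logistic model is my own choice, not Davis' method.

```python
import numpy as np
from scipy.optimize import curve_fit

def s_curve(log_e, d_min, d_max, k, x0):
    """Logistic model of a D-logE characteristic curve."""
    return d_min + (d_max - d_min) / (1.0 + np.exp(-k * (log_e - x0)))

# Hypothetical (logE, density) points standing in for a published curve
# that cuts off before the shoulder flattens out.
log_e = np.array([-3.0, -2.6, -2.2, -1.8, -1.4, -1.0, -0.6, -0.2, 0.2])
density = np.array([0.22, 0.28, 0.45, 0.72, 1.02, 1.30, 1.52, 1.66, 1.74])

params, _ = curve_fit(s_curve, log_e, density, p0=[0.2, 1.8, 2.0, -1.5])
d_min, d_max, k, x0 = params
print(f"extrapolated maximum density: {d_max:.2f}")
# The fitted d_max estimates where the unplotted shoulder tops out,
# playing the same role as the hand-drawn French curve extension.
```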

Before crying foul that I manipulated data, I would also recommend looking at Kodak Vision3 5219 motion picture stock. For this sensitometry curve Kodak updated their thinking on how to graph these curves by extending the x-axis for a greater range of exposure, as well as marking the x-axis in camera stops. Once again you can see a rather large DR is covered by this film, and one could argue that the shoulder is still cut short.

From Kodak Tech Pub H-1-5219t.

If film possesses the same dynamic range as many modern digital sensors, then it is incorrect to claim that Adams' system of Zones is based on the DR of the capture medium. Both film and digital possess a range of recordable exposures beyond the 11 Zones of the Zone System.

Did Ansel Adams make a mistake? Have we been using Zone System incorrectly this entire time? No; the reason for 11 Zones has a very logical origin, one that is surprising to many photographers who know little about the nature of human vision and how the final print, projected transparency, or electronic display must depict tonality in order for the image to appear "correct" to our eyes.

More to follow in Death of the Zone System part IV.

Death of the Zone System (Part II) by John G Arkenberg

(“Death of The Zone System” is a series about the relevancy and shifting attitudes towards Ansel Adams and Fred Archer’s Zone System.)

Part II - A Personal Journey through the Zone System

Before I delve into the claims made today about the Zone System, I think it helps to take a step back and appreciate what knowledge ZS practitioners have about their materials and how they obtain it. I initially began writing this post explaining the historical development (all puns intended) of sensitometry and Zone System but quickly realized this would probably induce somnolence. I hope my own story of becoming a competent photographer is not taken as solipsism but rather illustrates how the acquisition of in-depth knowledge can improve one's photography.

PHASE 1: Ignorance with Moments of Insight

When I first started learning black and white photography my struggles involved getting a properly exposed negative. From there I would take my battle into the darkroom to create the image I thought I had captured in the first place. I remember at this time each press of the shutter was a moment of uncertainty, and the darkroom was a void where I fought to reverse the mistakes of the past. I have kept a handful of prints from this time. Below is the first photograph I thought was "great." I took it with a Nikon F3T and a 50mm lens on Tri-X inside St. Patrick's Cathedral. It was difficult to hand-hold the camera for the 1/30th of a second exposure, but I photographed it enough times to get a sharp negative.

Scan from original print. I took this photo some time back in the late 1990s.

I developed the film in Rodinal and, if I remember correctly (the negative is lost), it was fairly low contrast and probably a bit underexposed. Still, I had all the tools on hand to remedy this in the darkroom. I shortened my printing time and used a multigrade paper at either grade 4 or 5 in order to get a deep black. Still, the image was lacking substance because all the tonal values were low (except the window). So I discovered I could dodge and burn separate areas to draw out distinctions between the different areas of vaulting. I remember a particular problem was the foreground pillar, which was a dingy gray and ominous; I solved this by cutting a dodging tool exactly to the shape of the pillar in order to bring it up to a more luminous value. Sadly, I've lost my dodging and burning map, but I recall it looked something like this:

The more I look at this mental recreation of my dodging and burning map the more I think this looks too simple. It was very hard to print and required a large number of dodges and burns to extract the subtle gradations that are in the final print.

I created a print that at the time I was very proud of, but making several identical prints resulted in a large number of mistakes and rejects. I knew this was no way to work, but I didn’t understand how to improve. Part of the problem was that occasionally, even as an amateur, I would get a beautifully exposed negative that was easy to print. In retrospect it is obvious that in the sheer volume of exposures you make as an amateur you will inevitably take a good photo, one captured under the right lighting circumstances and developed correctly, and be left writing off the other images as uncontrollable mistakes. And yet, I didn’t want to write off so many of my images to technical mistakes, and I didn’t want every image to be a struggle to print. Fortunately, I was friends with a number of extremely accomplished photographers who helped guide me through the process of understanding my tools. What it required was some testing.

PHASE 2: Discipline through Testing

Even though I had read Ansel Adams’s The Negative and The New Zone System Manual, nothing was more helpful than when my friend Dwight Primiano sat down with me and spelled out the entire test method.

Step one was to pick a set of materials (film, developer, paper, darkroom chemistry) and stick with only these materials until I genuinely felt the urge to change something. A common neophyte mistake is to constantly change films, developers, and papers without first learning how any one of them behaves. I settled on a system and then had to perform the instructed tests. This took only a weekend and required shooting multiple rolls of film. I had to determine Standard Printing Times for each film, the Effective Film Speed of each film, and then photograph a representative scene with five rolls and develop each for a different time in an effort to establish a normal development time. I don’t want to make this post any longer by explaining the test in detail, but I still have all my work from that weekend.

Practical Zone System tests to establish standard printing times (print strips are in the folder at the top) and effective film speeds of APX 100 & 400.

Everything I learned that weekend is posted below. Most important was the discovery that my problems with thin negatives stemmed from the fact that my film lost speed in the older PMK pyro developer. My 100 speed needed to be exposed at an index of 12, and my 400 speed became a 125. I also established normal development times so that my tonal scale carried a rich palette from white to black. What I learned was as follows:

APX 100 - EI: 12 - Developing Time: 11 minutes - Printing Time: 15 seconds

APX 400 - EI: 125 - Developing Time: 16 minutes - Printing Time: 24 seconds

Even though this test did not give me data about how to alter my exposure and development time for scenes with low or high contrast, it gave me a useful set of practices to photograph in scenes of normal contrast. I immediately discovered that the number of images that were easy to print increased dramatically. My time in the darkroom wrestling with an image dropped and I was proud of the quality I was achieving.

Photographed on a Mamiya C33 with a 135mm lens at f/8. The exposure was 15 minutes with APX 100 in PMK pyro. Printed on Grade II FB Oriental Seagull. I never realized the similarity between this and the previous photo until I started writing this entry. This negative is much easier to print. The only difficulty is in burning down the wall on the left that is receding behind the column.

By reading about Zone System I also learned the first and most important step of any visual art process, Previsualization: a well-realized image is first seen in the mind’s eye of the artist. Once the artist’s vision is in harmony with their knowledge of their tools they are no longer in servitude to the medium, but begin to express their ideas more purely through aesthetics. I felt closer to the precipice of a photographic truth.

PHASE 3: Peeling Away Layers of the Onion

When a photographic system is calibrated it’s exciting to make prints that are nearly perfect at the standard printing time alone. Certainly this makes life easier and wastes fewer materials, but it also gives you the time and energy to use dodging and burning as a means to finesse very fine details.

However, at this point I realized I was still not fully in control (and not really practicing Zone System) because I did not have information for expanding or contracting development for different lighting scenarios. While I appreciated the practical testing methods I had learned, I wanted a more comprehensive and scientific method of understanding my tools, one backed by the hard numbers and charts that would explain my experiences.

By far the best book I found was Phil Davis’ Beyond the Zone System, a book whose title is incredibly accurate and misleading at the same time. Many photographers are upset that it does not contain an updated ZS that is easier to understand and use. However, this is not what the author intended by the word “beyond”; rather, the book covers the sensitometric science beyond the Zone System.

After digesting the rather difficult information I did eventually find it easier to test my materials by contact printing a step tablet onto five pieces of sheet film, developing these sheets for different times, reading them on a densitometer, and plotting the results. (I know this does not sound easy, but it is: you use less film, spend less time in the darkroom, and the major task becomes creating graphs.)

Densitometer with enlarged step tablets on Seagull GF-II, and contact printed step tablets on Efke 25. This was my system before the discontinuation of both of these products.

Once all this data is graphed one can immediately relate the exposure range of the paper to the density range of the negative, and extract a range of useful charts showing how to rate the film speed for different contrast ratios and what developing time is optimal.
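
For anyone curious what this graphing step amounts to, here is a minimal sketch in Python. It assumes a standard 21-step tablet in 0.15 logH increments; the density values are hypothetical placeholders standing in for densitometer readings, not my actual data.

```python
import matplotlib.pyplot as plt

# A minimal sketch of plotting characteristic curves from densitometer
# readings of a contact-printed step tablet. Assumes a standard 21-step
# tablet in 0.15 logH increments; the density values below are
# hypothetical placeholders, not real readings.
log_exposure = [i * 0.15 for i in range(21)]

curves = {
    "5 min":  [0.10 + 0.45 * x for x in log_exposure],  # flat, pulled
    "8 min":  [0.10 + 0.60 * x for x in log_exposure],  # near normal
    "12 min": [0.10 + 0.80 * x for x in log_exposure],  # contrasty, pushed
}

for label, density in curves.items():
    plt.plot(log_exposure, density, label=label)

plt.xlabel("Relative log exposure (logH)")
plt.ylabel("Density above base + fog")
plt.legend(title="Development time")
plt.show()
```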

My sensitometry charts for my paper and film. You can see my propensity for doing things by hand.

A page from my notebook I would carry in the field. You can see that depending on the contrast ratio of the scene I would use different developers. You can also see how inherently contrasty Efke 25 is (I could never really pull it in development) versus Efke 100 which was hard to push develop but easy to pull.

This method is abstracted from working practices, but I quickly found myself able to walk outside with a simple set of notes, expose a negative, and get extremely good results. I started to make prints like this:

8x10 contact print on Seagull GF-II. The print is beautiful at the standard printing time and my only adjustment is dodging the foreground rock up by a few seconds. Shot with the Wisner Technical Field through a 12" Bausch & Lomb Protar (uncoated!) onto Efke 25. Exposure time was probably 2 seconds at f/32.

While many people get a good chuckle out of the process of using a densitometer to make all these charts, I have never looked back on these efforts with any regret. First, my understanding of how my materials behave under different circumstances, and how to control them, became comprehensive. I was also able to leverage this knowledge to help answer further questions about the effects of selenium toning on image quality, and to investigate the effects of bromide drag when developing a print. Second, armed with my notebook I never had to second-guess myself on a technical level. My experience of making a photograph became exactly that: making. I could concentrate on the aesthetic decisions of my work.

Not Mastery but Competency

My own learning reversed the order of the history of photographic science: I learned Zone System first and then sensitometry. While sensitometry is the root of Zone System, I understand why the majority of photographers will not begin by wading through this intense science: sensitometry requires a device many photographers don’t want to purchase, is very analytic, and does nothing to relate the process to visual perception. This is why the Zone System remains such a powerful tool: it gives you practical testing methods to understand your materials, is possible to understand without an intimate knowledge of sensitometry, and helps you relate the process to your sensory perception. The work by Adams and Archer is comprehensive, coherent, and also applicable to digital. So why does it appear that fewer and fewer photographers understand it, or care to use it?

My next posts will examine the current books about "Digital Zone System" and the on-line comments that contain a great number of misconceptions and ultimately fail to help amateur photographers. Ultimately, I hope to take a close look at Zone System to demonstrate its relevancy despite advances in camera technology, automation, and image manipulation software.

Death of the Zone System by John G Arkenberg

Part I – “Film has curves too!”

One of my most beloved cameras is this Wisner 8x10 Technical Field.

As one can imagine, taking photos on the streets of Manhattan with this piece of functional furniture attracts a fair amount of attention. In particular, people most often approach me to talk about digital cameras. I will politely converse with them despite the fact my inner voice is wondering “what about this camera makes you think I want to talk about digital cameras?!”

One conversation that stands out to me was with a gentleman who felt it necessary to explain to me how much easier digital photography was because he could adjust the image tonality using the curves tool in Photoshop. I remember at one point he said something along the lines of, “That’s the easy thing about digital, it has curves. Film doesn’t have curves.” Despite the vacuous assumption on his part, I give this stranger a lot of credit for patiently listening to me explain how sensitometry curves are derived and that you can chart them for any photographic process, digital or analog. It never occurred to him that this applied science was over one hundred years old. Then again, this type of information gets lost in the mists of time. The work in the late 19th century by Ferdinand Hurter and Vero C. Driffield to establish the sensitometry of the photographic medium meant that for decades the characteristic curves of a film were called HD or H&D Curves in their honor. Post-war, this nomenclature fell out of favor for the more generic Sensitometric or Characteristic curve label.

From the Royal Photographic Society's "A Memorial Volume Containing an Account of The Photographic Researches of Ferdinand Hurter and Vero C. Driffield." Printed in 1920, scanned from microfiche, and corrected using the curves tool in Photoshop.

Granted, adjustments to a digital image through the curves tool in Photoshop are simple to perform and their effects seen immediately. However, what I had to clarify was that I knew my curves from rigorous sensitometric testing of my print paper and film. It’s not that my process lacked curves, but that I already knew what they would look like and therefore how my image would appear. What remained for me was to make the correct exposure and perform the correct development. I posed the question to him, “What do you think is better: having limited control of the image in the capture and having to alter it later in post, or having complete control and understanding of the process from beginning to end?” There is no singularly true answer to this question but it did get him to leave me alone so I could concentrate on my work.

My fascination with encounters like the one above is not because I want to demean an amateur photographer’s lack of knowledge about the medium. I collect them because they paint a picture of the misunderstandings about photography brought about by a confluence of deceptive advertising by manufacturers, books on photography that lack sufficient technical editing, poor or incomplete advice on forums, and blog writers who are sleepwalking through proper research on the topics they’ve decided to expound upon. A good friend of mine calls this world “digital fan-fiction” because it propagates the myth of technological solutionism at the expense of the photographer’s own understanding of the medium. We are our own worst enemies.

I’m not suggesting that Zone System is officially dead. The title of this blog series is a personal joke about how people are quick to proclaim something “dead” in the swift current of technological change. (Check out The Tragic Death of Practically Everything.) I’m justifying the title because this is the language and tone of our lives today: it screams for attention. Now that I have yours, over the next few weeks I would like to provide a brief history of photographic sensitometry, how it is used with analog materials, and the benefits it confers upon the photographer. I think we should look at the modern claims about the relevancy of Zone System to digital, and whether the Digital Zone System explanations that exist are accurate to the science. Finally, I hope to make a compelling case that updating sensitometry and Zone System information for digital production and post-production is sorely needed at this point.

Gio Scope and the Zone System by John G Arkenberg

I hope this post is not entirely irrelevant since Gio Scope was released about a year ago. Despite arriving late to the party, with everyone already departed, I want to provide a more coherent criticism of Gio Scope than has been offered online. After all, the Red forum addresses all questions in the same manner in which buckshot addresses a target: demolishing it beyond all recognition. In comparison I will handle it so exhaustively as to produce somnolence in the reader.

I was unaware of the existence of this tool until I had dinner the other evening with a former student and now colleague. We are trying to work on applying the science of sensitometry to digital cameras in order to update the technical foundations behind Zone System. He showed me the post on No Film School which claims that Gio Scope “works just like Zone System.” Being a Zone System practitioner with my analog photography and using its principles on set as a cinematographer I wanted to pursue this claim to both better understand the tool and clarify the practice of Zone System for myself.

The Tool

Gio Scope is a false color view overlaid on the RAW sensor data. I am assuming this tool works on the RAW data post linearization but pre white balance and debayering. Since Gio Scope views the RAW image, the advertised 16 stops of dynamic range of the sensor are broken into 16 steps, each corresponding to a one-stop change in light intensity in the scene. These are numbered from 1 to 16, with the noise floor falling below 1 and full well capacity above 16. (For the sake of avoiding confusion I’m going to refer to the Gio Scope divisions as steps and the Zone System’s divisions as Zones.) Since the tool correlates exposure to each numbered step, Red proposes it as a substitute for a light meter.
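
Since Red does not publish the internals, the following is only a guess at the arithmetic: a minimal sketch assuming the step number simply counts stops of scene light above the noise floor. The function and its normalization are my assumptions, not Red's implementation.

```python
import math

STEPS = 16  # the advertised 16 stops of dynamic range, one step per stop

def raw_to_step(value, noise_floor, full_well):
    """Map a linearized sensor value onto the 1-16 step scale.

    Values at or below the noise floor fall below step 1;
    values at or above full well fall above step 16.
    """
    if value <= noise_floor:
        return 0            # below step 1: lost in the noise floor
    if value >= full_well:
        return STEPS + 1    # above step 16: clipped at full well
    # count stops above the noise floor, scaled onto the 16-step range
    stops_above_floor = math.log2(value / noise_floor)
    total_stops = math.log2(full_well / noise_floor)
    return 1 + int(stops_above_floor * STEPS / total_stops)
```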

When the cinematographer selects a numbered step on the scale at the top of the screen, the corresponding area of tonality is filled in with a unique color in the image. I particularly like the feature of selecting the specific tonality shown in false color because this greatly simplifies the information viewed and can be tailored to an individual’s lighting process. Moreover, the selectivity feature also allows the user to quickly analyze the contrast ratio between any two selected steps.

So does Gio Scope work just like Zone System?

In honesty, I cannot find anywhere on Red's website or in the video introducing the tool a claim that Gio Scope replaces or has anything to do with Zone System. The claim seems to originate with the No Film School article describing the tool as “working just like Zone System.” Of course, this statement has been picked up by Red users and discussed on their forum. The first post not only astutely inquires as to the relevance between Red's tool and the Zone System, but also links to Norman Koren’s website, which I personally find a venerable source of answers. Despite Norman’s lucid explanation I can understand how confusion is created by the differences between the Gio Scope 16-step scale and the Zone definitions.

Opposing Scales

One of the first points of confusion is attempting to correlate the 16-step scale in Gio Scope to the 11 Zones in Zone System. The difficulty primarily lies in the numerous ways in which the two could be related. For instance, if we relate each step to each Zone in a 1:1 manner then we have two seemingly arbitrary scales of numbers. Where do they connect in a comprehensible manner? Red reports that step 11 is neutral gray, so using this as Zone V we could produce the following chart:
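
The chart itself was an image, but its arithmetic follows directly from the assumptions just stated: a 1:1 step-to-Zone mapping anchored at Red's reported neutral gray of step 11 as Zone V. A minimal sketch:

```python
# A sketch of the naive 1:1 step-to-Zone mapping discussed above,
# anchored at Red's reported neutral gray (step 11) as Zone V.
def step_to_zone_naive(step):
    return step - 11 + 5   # step 11 -> Zone V

for step in range(1, 17):
    print(f"step {step:2d} -> Zone {step_to_zone_naive(step)}")
# steps 1-5 land on Zones -5 through -1, which have no meaning on the
# Zone System's 0-X scale; that is exactly the problem raised below.
```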

However, this only raises more questions. What use are the Zones below 0, and are they superfluous? Why is the neutral gray point on the Red sensor not 8.5? What ramifications does this have for image quality, assuming I have even more Zones below 0 and I consider them as part of my exposure?

Tying the Zones to the steps in this one-stop correlation is problematic because Zones can be more or less than a stop of exposure: they are defined perceptually. At this moment I recommend looking back at Norman's website and reading the Zone definitions as supplied by Ansel Adams. Under this logic I should actually look at the RAW image and assign the tonality I see to each corresponding Zone, in which case it would look something like this:

Relating the Zones to the RAW image is a conceptual dead end because the desired end of photography is to relate tonality as seen in the world to that of your final image. If one decides to light a scene such that the RAW view appears to have Zones from 0 to X then the actual scene would look very unusual and very high contrast! Attempting to correlate the RAW image to each Zone is as foolish and unnecessary as the analog photographer looking at his negative and trying to do the same.

Linking the Zones to these 16 steps fails because the Zones and their definitions are conceptual tools. As Norman Koren points out on his webpage, the Zone definitions embrace each step in the photographic process, whether the scene, the print, or the different gamma settings of computer monitors. In the highly technical but unparalleled Beyond the Zone System, Phil Davis further frees up the definition of a Zone by calling it “An ambiguous term” in the Glossary. While Davis does continue to clarify his definition, what I wish to emphasize is that these Zones are tied to a conceptual division of the range of black to white, with neutral gray as the midpoint of the scale. Since we like decimal scales there is something not only appealing but intellectually useful about having a ten-step scale balanced around Zone V. This is entirely different from the steps in Gio Scope, which are tied to camera data and therefore cannot serve the same purpose.

Zone System is more than just the sum of its Zones

Defining Zones perceptually allows the photographer to understand the changes in tonality through different mediums and through each step in the imaging chain. In order to make this clear it helps to look at the definitions of Zone System by multiple experts.

Ansel Adams’s own definition from The Negative: “The Zone System allows us to relate various luminances of a subject with the gray values from black to white that we visualize to represent each one in the final image. This is the basis for the visualization procedure, whether the representation is literal or a departure from reality as projected in our ‘mind’s eye.’ After the creative visualization of the image, photography is a continuous chain of control involving adjustment of camera position and other image considerations, evaluation of the luminances of the subject and placement of these luminances on the exposure scale of the negative, appropriate development of the negative, and the making of the print.”

Similar definitions emphasize the interconnectedness of each step of the imaging chain and that the ultimate end is the realization of the photographer’s artistic vision. Notice that the following definition does not even need to address camera technology.

From White, Zakia, Lorenz’s seminal The New Zone System Manual: “Previsualization is the beginning, control of the process the middle, and postvisualization brings the process full circle.”

The use of Zone System is often best understood through a test shoot and there is an extremely helpful video here that was originally made to demonstrate Gio Scope.

The image in this video has nearly a full range of tones, from black up to nearly pure white. The subject’s face is lit with a 4:1 contrast ratio and the RedGamma4 curve is applied. Watching the overexposure of our subject’s skin you can see that the image is completely blown out at step 16, which indicates it is 5 stops overexposed. This is Zone X! As the camera stops down you can see that the highlight side of his face just begins to possess detail when it reaches step 14, which would be analogous to Zone VIII.

Moving forward you can do the same analysis for the shadow detail by observing the fill side of the subject’s face fall into underexposure. Notice that when the fill side is at step 8 the exposure carries just enough visible detail before being plunged into black at step 7. This means step 8 correlates to Zone II. My reason for picking out Zones II and VIII is that they are critical: they contain the deepest shadow and the lightest highlight that still hold texture and detail in the subject. Therefore, if I want any information about my subject present in the final image I need to light, expose, and control my post process so that it fits within this range of Zones.

What is particularly good about this video is that it provides useful information to the Zone System user: RedGamma4 is a curve with a correlation of one stop of exposure to each Zone. Any exposure in my scene more than 3 1/2 stops above or below normal will not be reproduced in the final image. To see this in an analytical setting I exposed a gray card at subsequent steps of over- and underexposure and charted the results:

Data produced by applying a simple Zone System test to the Red Epic. Side note: despite setting exposure through a light meter the image came back 2/3 stop underexposed and I lifted the curve appropriately. I have found this is pretty consistent with most digital cameras due to the way manufacturers map their curves, but also that when the scene is lit with a tungsten light there is exposure loss from the reduced spectrum of the source.

With this graph you can see where the useable tonality ends: above Zone VIII and below Zone II.

If I wanted to translate Zones into Gio Scope steps I could say that RedGamma4 uses steps 6 to 16 to create a black-to-white image, and that the useable range of tonal detail runs from steps 8 to 14.

The fact that RG4 correlates so nicely to the Zone System definitions, with each Zone a whole-stop change in exposure, is because the camera image with this gamma holds a 7-stop SBR, or Subject Brightness Range. SBR is a Zone System acronym for the range in subject brightness between the limits of tonality with visual detail, in other words from Zone II to Zone VIII. When this range fits the limits of the photographic medium and its display (camera + monitor or camera + print) then the final image will depict a full range of tonality from black to white. As you can see in this video, this is exactly the case when the exposure is correct, which is at about the 0:20 point when the key is in step 11 and the fill in step 9. On our curve the SBR would be represented as follows:

Since my exposures are in the middle of these Zones I have a 1/2 stop range in exposure above and below them. I should say the limits of my scene are 3 1/2 stops above and below normal exposure and this gives us the full 7 stop SBR.
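
As a worked version of the arithmetic above (with illustrative numbers of my own, not measurements from the video): the SBR in stops is simply the base-2 log of the ratio between the highlight and shadow readings.

```python
import math

# A minimal sketch of computing a Subject Brightness Range (SBR) in stops
# from two spot-meter luminance readings. The values below are
# illustrative, not measurements from the video discussed above.
def sbr_stops(shadow_luminance, highlight_luminance):
    """Stops between the darkest and lightest textured areas (Zone II to VIII)."""
    return math.log2(highlight_luminance / shadow_luminance)

# e.g. a highlight reading 128x brighter than the shadow reading:
print(sbr_stops(1.0, 128.0))  # -> 7.0, a "normal" 7-stop SBR
```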

At this moment it is worthwhile to revisit Phil Davis’ definition of a Zone in its entirety:

“An ambiguous term. In this book, any one of the several divisions of the print gray scale that represent separate, consecutive luminance ranges of 1 stop in the normal subject. In subjects other than the normal 7-stop range, each print zone represents one seventh of the total SBR, whatever it is. Print zones stand for specific average values of gray that, when memorized, assist the photographer in visualization.”

Quite a few comments on the Red Forum, and many on the internet, claim that each Zone equals one stop. This is true only in the special circumstance where the SBR is 7 stops and the photographic process places this SBR in Zones II to VIII in the final image. This is about the range of an average scene, and I would like to reserve further discussion of this point for a future post because there are important ramifications here for understanding the limits of our visual system as well.

Nonetheless, if the SBR is less than 7 stops then we are observing a low contrast scene, and if more than 7 stops, a high contrast scene. (Of course, there is a level of subjectivity in what is high or low contrast, but this is a rough mathematical definition.) In these instances the photographer changes their lighting so the scene better matches the limits of their medium, or perhaps changes the development time of the negative or the shape of the curve in Photoshop, to produce a final image with full black-to-white tonality. Each Zone in the scene is then expanded or contracted to produce that full black-to-white image. The fact that Zones are not precisely tied to each stop of exposure is why the System works so well: the scale of tonality needs to be flexible because it is being transformed by the medium at each step of the imaging chain. This is in direct contrast to the rigid Gio Scope scale. If a step in Gio Scope can be white, neutral, or black depending on the LUT applied then it becomes irrelevant data.
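
Davis's definition quoted above reduces to one line of arithmetic. A minimal sketch of his SBR/7 rule (my own illustration, not code from his book):

```python
# A sketch of Phil Davis's rule quoted above: each print zone represents
# one seventh of the total SBR, whatever that SBR is.
def stops_per_zone(sbr_stops):
    return sbr_stops / 7.0

print(stops_per_zone(7))   # 1.0   -> the special case where a Zone = one stop
print(stops_per_zone(10))  # ~1.43 -> high-contrast scene, Zones are "wider"
print(stops_per_zone(5))   # ~0.71 -> low-contrast scene, Zones are "narrower"
```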

Even though I have given correlating numbers between Gio Scope steps and Zones, the entire Gio Scope scale is unnecessary. Zone System encompasses the entire process whereas Gio Scope is confined to analyzing one step: the exposure as seen by the camera. In which case Gio Scope as a tool has only two functions: showing where the exposure falls in relation to the sensor in order to ensure it is not beyond the limits of the camera’s dynamic range, and serving as a light meter if none is available.

If not a Zone System how about a Zone System Tool?

Investigating Gio Scope's usefulness to a Zone System practitioner is also best handled through an example. Consider creating an image of a black and white cat where you want to determine the exposure of its white and black fur. To find this you must press the individual steps on Gio Scope until the white and black fur are shown in false color. Compare this to aiming and pressing the button twice on a light meter. Obviously, the light meter is a much quicker method by which to acquire exposure information. I could additionally argue that looking at a Waveform would be quicker as well. (If Gio Scope were to function with the speed of a light meter it would need to be programmed so that the user could press an area on the image and all corresponding tones would be shown in false color with the step in the scale above highlighted. That would be an impressive feature!) True, Gio Scope is helpful in determining that if the white fur is in step 15 and the black fur is in step 5 then our subject has a contrast range of 10 stops. However, you could also just know your f/stop scale and that the difference between the metered dark fur at f/1 and the white fur at f/32 is also ten stops. If the latter seems more difficult then this is probably because you are not engaging with the f/stop scale as much as you should.
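
The f-stop arithmetic in the cat example can be made explicit. A minimal sketch (the f/1 and f/32 readings are the hypothetical ones from the example above):

```python
import math

# A sketch of the f-stop arithmetic in the cat example: the number of
# stops between two metered f-numbers at the same shutter speed and ISO.
def stops_between(n1, n2):
    # exposure scales with the square of the f-number,
    # so each stop multiplies the f-number by sqrt(2)
    return 2 * math.log2(n2 / n1)

print(stops_between(1.0, 32.0))  # -> 10.0 stops, matching steps 5 to 15
```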

The second problem worth investigating is that most false color modes highlight not a single exposure but a range, so each step most likely highlights a range of one stop of exposure. This could lead to a false impression of a scene's contrast range. If the cat’s fur is at the cusp of steps 4 to 5, just barely within 5, and at the cusp of steps 15 to 16, just within 15, then it would be better to interpret this ratio as 11 stops. I think this is especially critical in the interpretation of contrasty scenes that push the limits of your medium. I understand that RAW allows for correction of exposure errors, but this becomes increasingly difficult at the limits of a sensor. Moreover, whether the tonality is rescued from over- and underexposure successfully depends on so many circumstances, and on the desired quality of image, that I would rather not pretend a 1-stop difference in exposure does not matter.

So I contend that using a light meter is far simpler. If you know your shooting stop then it is really just a matter of taking readings of objects and seeing if they fall into the Zone of your choice. The No Film School article suggests that in order to do this you also need a pencil and paper, but this is perhaps only the case when first practicing Zone System or if you are rigorous and take notes on set. This notion that Gio Scope liberates you from having to use a pencil and paper is the kind of comment commonly made by those who do not practice Zone System but like to criticize it from their lack of experience. Some photographers write down exposure info and its relation to Zones and some don’t. That’s an individual’s working method and has nothing to do with Zone System itself.

What Function Does Gio Scope have?

Zone System is a method by which a photographer can envision an image and, knowing the limits of their medium, reproduce their pre-visualized tones in the final image. This is a world of difference from Gio Scope, which is just a new variation of a false color view using the RAW sensor data. Zone System encompasses the photographic process whereas Gio Scope is an exposure tool. Calling Gio Scope a Zone System is enthusiastic fan-fiction that misrepresents both. Certainly, it could be a tool used by someone practicing Zone System in their work, but only in the limited capacity of a means to analyze exposure on the sensor. Does Gio Scope replace a light meter? Not entirely, as a light meter could be seen as a quicker and more accurate tool.

I am not saying that Gio Scope is not useful, but it is not novel. Perhaps this explains the confusion expressed by so many users when their experience of this tool does not correlate with what they are being told about the tool. One user on the forum glibly offers that this is what happens in the face of innovation. I hesitate to go as far in this case because I don’t want to mistake change with innovation. Sometimes change is innovation but more often change is just change. We need to always be wary of change masquerading as innovation.

Where is the math in cinematography... by John G Arkenberg

An understanding of mathematics seems necessary to scientific research or engineering, but not to the cinematographer as artist. Perhaps this is why math is a topic of such incredible loathing in cinematography classes. I sympathize, after all, because at the end of high school I suffered through Calculus with a kind and brilliant but completely inept teacher. My final school experience of math is inextricably linked to two years of bumbling explanations and my frustration in trying to understand them. With many years of emotional distance I see that my dislike of the Calculus had nothing to do with the actual subject, but with my poor experience. As a result I’ve decided to re-kindle my understanding of mathematics because it has become increasingly necessary in my research.

One semester I had a student who struggled with any homework that involved math. This is not new or unusual, which is why I try to spell out the steps clearly in examples attached to the homework. Despite my efforts the student still approached me for help with the flustered response “I never took Calculus!” Each time we encountered math he would echo this statement as a knee-jerk response. I always wanted to reply in exasperation “This is trigonometry!” or more often “This is arithmetic!” His personal block toward applying his mind to the math was so great I felt that if I jumped out of a dark alley and shouted “What’s 2+2?!?” he would scream “I never took Calculus!” Many semesters after he graduated I was pleased to hear the following story from another student with whom he collaborated on a project. It turns out they needed to solve a particularly difficult color temperature problem involving the choice of color correction gel. The student who never learned Calculus remembered that he had learned the concept of MIREDs, and together the two of them worked through the problem. The student who relayed the story said that his friend’s fear of math had been replaced by exhilaration in recalling that he knew how to solve the problem; he just had to locate the formula and put in the numbers. Apparently, the gel with the MIRED they calculated worked perfectly in their film.

I attribute this change in the student’s attitude toward math not to any brilliant teaching insight on my part. I simply helped him when asked and displayed no emotion toward his lack of understanding or fear of the subject, nor any emotion toward the subject itself. I remember just keeping our interactions simple and calm. I found nearly the exact opposite recently while reading David Stump’s Digital Cinematography, where he trips over himself to warn the reader whenever there is math.

From pages 15 and 40 of Digital Cinematography by David Stump, ASC

This attitude does no one any favors, since treating anything with numbers or formulas as a subject of terror only reinforces that attitude in students and colleagues. A heightened language will invoke a heightened response. I personally encounter this when I realize I can solve a decision on set with an equation or some mathematics. Perhaps I come across as too overjoyed, because the response of other crew is sometimes a dismissive "It's ultimately art so why bother with the numbers?" or, more aggravating, a condescending "Oh look, the nerds are at work!"

Despite the fact that I'm not terribly good at math, and am less certain what a nerd is and why I am one, there must be some benefit to understanding and applying math at work. To better explore this problem I believe two central questions must be considered:

1 – Where is the math in cinematography?

2 – What benefits would be conferred upon a cinematographer who understands the math in his work?

In an effort to keep this post short I will address the first question with a few examples which I have literally sketched out. I will expand each example in a future post, in which the answer to the second question will emerge.

Exposure

The relationship between light intensity, exposure time, sensitivity of the sensor, and the iris of the lens is held in tight reciprocal bonds. (Except for reciprocity failure, but in this case I’m speaking about the most common shooting conditions.) All of these coordinate scales rise and fall by a factor of two, which is a very simple mathematical relationship. There is some complexity in the fact that each of these scales is in different units, thus requiring a student to be conversant with the meaning of the numbers. Nonetheless, the decisions made to achieve correct exposure are entirely mathematically based.
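
As a minimal sketch of that factor-of-two reciprocity (my own example, not from the sketch below), the standard exposure value EV = log2(N²/t) shows how opening the iris one stop repays halving the shutter time:

```python
import math

# A sketch of the reciprocal, factor-of-two exposure scales described
# above, using the standard exposure value: EV = log2(N^2 / t).
def exposure_value(f_number, shutter_seconds):
    return math.log2(f_number**2 / shutter_seconds)

# f/4 at 1/50 s ...
ev_a = exposure_value(4.0, 1/50)
# ... and f/2.8 at 1/100 s: one stop more iris repays one stop less time
ev_b = exposure_value(2.8, 1/100)
print(round(ev_a, 2), round(ev_b, 2))  # nearly identical EVs
```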

The graphic method of a cinematographer practicing his scales.

Other cinematography professors and I have consistently observed that the purely mental understanding and application of these scales is more common in those with an analog background. This is most likely because analog photography requires attention and memory, or else you are punished when the print is projected. Now that exposure can be achieved visually on a calibrated monitor, many younger students have trouble keeping track of their technical settings and fluidly relating them in their mind.

Optics

Geometry and trigonometry are the prevalent mathematics in the science of optics. I have used trigonometry a number of times in my work to relate angle of view to focal length. One class, a group of students approached me with a video they found online showing footage of a rock climber scaling a particularly vertical cliff in the desert. The technical information claimed that the footage was shot one mile away, and the focal length listed sounded too small to frame the climber so close from such a long distance. Unfortunately, I never kept track of the video, but we can analyze a similar situation with this beautiful video of Dean Potter tightrope walking against the Moon, photographed by Mikey Schaefer. What focal length lens would be required to film this event from one mile away? My work is as follows:

This frame could be accomplished with a piece of optics from the astronomy industry or with a 600mm or 800mm prime lens and 2x or 1.4x lens extenders. Regardless, the use of trigonometry goes beyond being able to prove to my students the accuracy of a claim found online; it gives one the tools to make the correct technical decision in order to successfully achieve an artistic vision.
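
Since my worked sketch exists only as an image, here is the same similar-triangles arithmetic in code form; the sensor height and subject size are my illustrative assumptions, not data from Schaefer's shoot:

```python
# A sketch of the similar-triangles arithmetic behind the example above:
# the height of scene covered at a given distance is
#   scene_height = sensor_height * distance / focal_length
# All figures here are illustrative assumptions.

SENSOR_HEIGHT_MM = 18.7  # roughly a Super-35 vertical aperture

def scene_height_m(distance_m, focal_length_mm):
    return SENSOR_HEIGHT_MM * (distance_m * 1000) / focal_length_mm / 1000

# From one mile away with an 800mm prime and a 2x extender (1600mm):
print(scene_height_m(1609, 1600))  # ~18.8 m of scene height
# A ~1.8 m person therefore fills only about a tenth of the frame height,
# which is why astronomy-grade optics enter the conversation.
```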

Another, perhaps more common use is the geometry needed to calculate angular field of view, because of its relation to format size and therefore to the choice of focal length. Currently there is constant shifting of format size, unlike the analog era when the choice of format was largely confined to 16mm or 35mm. Now digital cameras have APS-sized sensors, S-35mm-sized sensors, and full frame 35mm sensors, and camera manufacturers such as Red are constantly increasing the area of the format as they increase resolution. Moving between these cameras means becoming conversant with the problem of picking focal lengths to achieve an equivalent field of view, especially when different cameras are working on the same set. Surprisingly this must still be explained not only to students, but often to professionals working in the field.

A quick sketch. The angles are not exactly accurate, but close enough to prove the point.

The illustration above relates field of view to focal length for three different film formats through simple geometry. I particularly like how the location of the lens relative to the image plane, according to its focal length, clearly demonstrates the role of this number in its effect on field of view. The diagram also reveals the simple factor-of-two relationship between the three variables. Unfortunately, there is nothing quite so simple about the non-standard sensor choices camera manufacturers are currently making. Nonetheless, I think an angular field of view calculator using these illustrations as a basis would be far easier to understand and interpret than a fixed diagram with numbers popping up to give an answer.
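
As a sketch of the equivalence problem just described (the format widths are common approximations and the helper function is my own illustration), matching the angular field of view between formats reduces to a ratio of format widths:

```python
# A sketch of the field-of-view equivalence discussed above: to match the
# horizontal angle of view between two formats, scale the focal length by
# the ratio of the format widths. Widths below are common approximations.
FORMAT_WIDTH_MM = {
    "S16": 12.52,
    "S35": 24.89,
    "FF35": 36.0,
}

def equivalent_focal(focal_mm, from_format, to_format):
    return focal_mm * FORMAT_WIDTH_MM[to_format] / FORMAT_WIDTH_MM[from_format]

# A 50mm on Super-35 frames roughly like a 72mm on full frame 35:
print(round(equivalent_focal(50, "S35", "FF35"), 1))
```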

Photometry

The measurement of light intensity and its relation to exposure is governed by the math of photometry. I regret not being taught this subject in school because its power of prediction on set is critical. Young filmmakers routinely guess what wattage of unit could produce a given f/stop, and much of this guessing is erased by knowing even a few simple mathematical relationships, such as the 100:100:2.8 rule of thumb.
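
A minimal sketch of how far that rule of thumb stretches (my own extrapolation, assuming 24 fps with a 180-degree shutter, and treating the rule as exact even though it is only an approximation):

```python
import math

# A sketch of the 100:100:2.8 rule of thumb mentioned above: 100
# footcandles at EI 100 (24 fps, 180-degree shutter) exposes at roughly
# f/2.8. Everything else follows from the factor-of-two scales: exposure
# is proportional to footcandles and EI, and to the square of the f-number.
def predicted_stop(footcandles, ei=100):
    return 2.8 * math.sqrt((footcandles / 100) * (ei / 100))

print(round(predicted_stop(100), 1))    # 2.8 -> the rule itself
print(round(predicted_stop(400), 1))    # 5.6 -> 4x the light, two stops down
print(round(predicted_stop(100, 800)))  # ~8  -> faster EI, smaller iris
```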

To share a very extreme example, here is a sketch of a proposed lighting unit given in my Science of Cinematography final: it is wrapped around a column as an architectural feature on set and contains both blacklight and regular fluorescent tubes in one unit. The questions surrounding this sketch involve picking gels for the regular fluorescents so they appear to match the blacklight bulbs. Finally, the students must calculate the exposure given the photometrics and the area of the unit (total and apparent). The fact that one can not only design complex rigs using lights from other industries but actually predict their behavior on set is incredibly liberating. Who would want to show up to set only to discover that their prized construction fails to illuminate the lead actor?

Depth-of-Field

The math behind calculating Depth of Field is perhaps best left to apps or an old mechanical calculator. I teach the most commonly used DoF equation in class, but I discuss its use with great caution because I feel DoF and hyperfocal distance equations have failed me in my work as a focus puller. This is easy to understand: researching the topic, I found that behind the formulas lie a set of human assumptions. First, the conventional DoF math requires the cinematographer to pick the Circle of Least Confusion, a human decision that sets the tolerances and best practices for the calculation. The formula is dumb; it needs a human to supply the intelligence before it can provide a correct answer.

From Photographic Optics by Arthur Cox; Expanded Edition, 1971. These equations await your intelligence.

Beyond the fact that one cannot blindly follow this formula, many people I talk to are surprised that there exist three different ways to calculate Depth of Field. Above are the conventional formulas for determining the near and far limits of what is "acceptably" in focus. These you will commonly find in textbooks, but few discuss that the Hyperfocal Distance, which must be entered into these focus-limit formulas, can itself be calculated in two different ways. One method calculates the Hyperfocal Distance considering the image enlargement and viewing angle of the subject as a print or projection, whereas the other considers only the size of the Circle of Confusion on the sensor. I wish you luck finding a chart or finder that reveals which method of determining the Hyperfocal Distance was used!
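
Since the Cox equations are reproduced above only as an image, here is the conventional arithmetic in code form; this is the common textbook formulation, not Cox's exact notation, and the example numbers are my own:

```python
# A sketch of the conventional depth-of-field arithmetic discussed above.
# Units: millimeters throughout. The circle of confusion c is exactly the
# human decision the text warns about.
def hyperfocal(f, n, c):
    """Hyperfocal distance from focal length f, f-number n, CoC c."""
    return f * f / (n * c) + f

def dof_limits(f, n, c, s):
    """Near and far limits of acceptable focus for subject distance s."""
    h = hyperfocal(f, n, c)
    near = h * s / (h + (s - f))
    far = h * s / (h - (s - f)) if s < h else float("inf")
    return near, far

# 50mm at f/2.8 on S35 (c ~ 0.025mm), subject at 3 m:
near, far = dof_limits(50, 2.8, 0.025, 3000)
print(round(near), round(far))  # ~2772 and ~3270 mm (about 2.8 m to 3.3 m)
```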

From The Ins and Outs of Focus by Harold Merklinger

To make matters more interesting, the engineer Harold Merklinger published a largely unknown book in 1990 called The Ins and Outs of Focus in which he derived a third and even more rigorous method than the traditional ones. In his case the calculations are based on what object in front of the lens the photographer desires to resolve. The approach is so novel as to turn everything one conventionally learns on its head. The math is simpler, but the precision incredibly ruthless. Reading Merklinger's work is a revelation in how he elucidates the problems inherent in the other formulas, as well as in his novel rules of thumb that work much better than Hyperfocal Distance or the 1/3 Rule. So even research into a topic that seems as concrete as DoF equations can reveal that dogmatic adherence to a formula one does not truly understand can lead to a less than desirable result.

How Does One Engage the Math?

These are just a few examples to demonstrate that math and geometry are inextricable from the work of cinematography. Some may be quick to point out that a cinematographer does not need to remember the formulas nor carry a calculator. That may be so; I know very successful cinematographers who create amazing images and could never give you any equations or numbers. However, one cannot deny the need to understand the math, because if you are using the tools of the craft you are engaging with the math. A DP who decides to expand their Depth of Field by pulling the ND.6 filter, compensating for the four times greater amount of light by stopping down two stops on the iris, thereby narrowing the rays of incoming light from objects in front of and behind the point of focus so that these pencil rays fall within an acceptable tolerance of the circle of least confusion, is engaging with the math even without ever calculating the results.
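
The ND arithmetic buried in that sentence is worth one small worked example (my own illustration): an ND filter's density is a base-10 logarithm, and every 0.3 of density is one stop:

```python
import math

# A sketch of the ND arithmetic in the sentence above: a filter's density
# is a base-10 log of its light loss, and each 0.3 of density is one stop.
def nd_to_stops(density):
    return math.log2(10 ** density)  # = density / log10(2), ~density / 0.301

print(round(nd_to_stops(0.6), 1))  # -> 2.0 stops: pulling an ND.6 admits
                                   #    4x the light, repaid by stopping
                                   #    down two stops on the iris
```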

What I have realized in my discussions is that there are as many different relationships between a person and math as there are people. There are those who understand the concepts arising from the formulas in a very practical and often intuitive way. There are those who wish to plunge deep into the details. All of these paths, and any in between, are legitimate. The only fallacy lies in claiming that knowing the math is unnecessary, because then one is denying a level of understanding of one's craft. This would be like a woodworker claiming they don't need to become better at hammering a nail, all while hammering one in badly. My hope is that those working as cinematographers who dislike math put their prejudice aside and delve into these topics, because they can reveal real treasures of understanding. I also hope that the next time you see me on set excitedly solving a problem using my phone as a calculator you understand I'm just doing my job.

Looking Back... by John G Arkenberg

Looking back on four years of non-blogging.

Reading these first two (and only) posts on this website is not unlike reading the work of a total stranger. I recognize the ideas and a familiar language and tone, but feel they were produced by an alien mind. But before I launch another possibly pitiful attempt to write with consistent impunity there should be a moment to reflect on what has passed and what potential lies before. For from this vantage point in time, surveying thoughts as distant as the seam of the horizon, I feel I can better begin to lay the course of my pen.

My work and research have been a lonely endeavor for many years now. This is not paradoxical in light of the fact that I teach an undergraduate class and work on set with large crews. Rather, I have found more generally in my life that the ideas that interest me are shared by a few people I know, and only rarely by the many I meet.

Frankly, I live with a lot of disappointment when my hours of research, testing, and data-collection end with someone decrying “I don’t think this really matters to artists.” (Students are the exception in this case, but that is because we encounter each other in an environment designed for learning and discovery.) This comment especially rankles because I am an artist whose consumption of technical matters stems from a direct desire to understand and control my medium to produce an aesthetic result. If the type of brush and the material and shape of the bristles matter to the painter, then why shouldn't the intricacies of digital signal compression matter to the cinematographer? Perhaps the overbearing amount of information that must be digested by the cinematographer these days has produced a technological ennui, or a denial that serves as a psychological defense mechanism against the flood of technocracy. To this attitude I can only reply that research and tests always matter. There is always some light to be shed into the dark corner of the camera obscura, even if it only weakly illuminates the subject of study. We should never let our minds become dark rooms.

Another frequent comment I receive is “I don’t understand,” which is easily remedied through education. I only have the opportunity to teach a handful of students each semester about information that is diffused among too many, and frequently hard to find, sources. This fact forces me to admit that I live in rarified air and should try to reach a wider public. The education that exists today for aspiring cinematographers is piecemeal, necessitating that students learn largely through practice and experience in the industry. I am not criticizing learning through praxis, but wish for a harmony of systems that also includes solid scientific theory. Sadly, the cinematography texts that exist today are too frequently haphazardly organized, poorly researched, and riddled with mistakes. I hope these writings can help shore up a rickety scaffold of knowledge.

Also, I have encountered increasing dismissiveness toward certain facts that have become cornerstones of my teaching. Having worked hard to first locate these facts, and then to continually subject them to the crucible of testing, only for a colleague to deem them “not relevant,” takes the wind out of my sails. I believe this attitude stems from too narrowly defining what cinematography is and how photographic tools are used to create art. As I resume my studies of physics, logic and philosophy I hope to illustrate how even these ancillary and abstract topics are relevant to the cinematographer (or at least to me). All knowledge is a form of tool, and once we understand our tools then we can understand the relation of our craft to the greater world. The entire span of the imaging chain must be considered, from the photons streaming through the universe to the phenomenology of our visual system. All subjects that relate to the visual art of cinematography, whether physical, psychological, physiological, mechanical, chemical, electronic, or otherwise, are relevant. By standing atop the mast we can begin to observe the curvature of the Earth.

My original intentions for this blog from four years back still stand, but must be augmented by the following: to write frequently about the topics and questions I grapple with in my research, and to discuss at length the tests performed in Science of Cinematography, because they succeed and fail in interesting ways and the lessons they produce could be of greater use. To meet this need I am creating two new groups concerned solely with research and testing methodology.

One thing that has not changed about this blog is that these writings are fundamentally not so much a declaration as a forge in which to shape my ideas. I suppose the purpose is ultimately self-serving since I hope to observe the changing nature of my thoughts. This gazing-stone intention is why I have not included buttons to link to social media or to allow comments. (I assume if you have a comment you can contact me in person and I will address you personally. If the comment sparks interesting ideas then they should be transformed into a piece of writing and not left moldering at the bottom of a post.) The self-reflexive nature of my efforts is not in conflict with posting publicly because I expect certain types of people to find these writings and respond. This blog has very little purpose as an advertisement for myself, a way to sell a product, or a means to convince others of a doctrine. Rather, this is a quiet space for ideas, a safe harbor in a digital morass. My hope is that in time the right travelers can find shelter and engage in the commerce of ideas without the heightened tone that too commonly defines discourse on the internet.

Finding a Structure... by John G Arkenberg

In an effort to provide some sense of structure to my writings I will file the content into three categories and one “meta-category.”

Writings about the nature of this blog will be filed into the meta-category, the Überblog. This section already contains my introduction and this piece on structure, but will be expanded as my intentions change.

The three main categories are as follows:

1. Science

Writings filed under ‘Science’ deal with the application of the scientific method to photography and cinematography. Both are technologically driven visual arts that leave the artist with the dilemma of this additional layer of complexity. True, the artist could take a passive approach and trust manufacturer and media suggestions, but this seems to be rarely the case. Most people I find are seeking a further understanding of their materials and perform tests on all different aspects of the medium.

At the most abstract level these writings will discuss the nature of testing photographic materials from a philosophical and scientific standpoint. This approach helps define many boundaries that are hopelessly confused when some important questions are not asked first. For example: Am I testing a quality or a quantity? Can I test both, and at what point can the two become confused? Can a test exploring a particular aspect be used to make general claims about the medium? While these questions about the nature of a test may seem pedantic or trivial, they provide insight into a test's purpose, methodology, and results.

With an understanding of fundamental questions one can better approach the information disseminated by manufacturers, professionals, and amateurs in publication and on-line. Moreover, there are even further questions that should be explored: What is the value of another person’s information and how does it inform the artist’s work? Who performed the test and what is their agenda? How much transparency exists in the procedure and presentation of the test? Are they confusing any of the topics addressed above and can we untangle them to make any sense?

In order to temper the theoretical and occasionally abstract issues of the topics above this section will also include a practical aspect. From the questions asked we can design many different tests to explore facets of the medium ranging from the technical to aesthetic. I think it is important to share these ideas, questions and procedural outlines so that anyone can use them and find within them their faults and values.

Ultimately, my intention is for the Science section to help the photographer or cinematographer ask of a test the correct questions, and later draw the correct conclusions.

2. Zeitgeist

Currently we live in a time of rapid technological innovation and turnover. The impact of this change on the visual arts is immediate and pervasive.

I feel that the accelerated pace of change has made decision making far more confusing for the artist. The changes in media have led to an increased volume of marketing and an acceleration of social trends. Add to this the opinions found on-line and there is a veritable cacophony. There is so much noise surrounding the technology used in the photographic medium that the quiet connection between technology and aesthetics is effectively drowned out.

The Zeitgeist section is an attempt to lower the volume. Writings filed here offer a quiet place to step back and ask questions about the current climate: How does technology affect the art? How is technology affecting how we work? How is the industry being transformed? What are we being asked to believe about technological change, and what is actually true?

The Zeitgeist section covers by far the broadest topic, but I hope it is the most insightful, because it positions the art of photography and cinematography within a larger context. By scanning a broader horizon we can begin to challenge our own assumptions.

3. Our Eyes Give it Shape

With each class I teach and each new topic I research, I confront a fundamental and profound fact that is best expressed by Wittgenstein in his work Culture and Value:

“…daß das Okular auch des riesigsten Fernrohrs nicht größer sein darf, als unser Auge.”

“…that even the largest telescope has an eyepiece no bigger than the human eye.”

In a similar vein, Peter Hammill has a song titled “Our Eyes Give it Shape,” and it is this phrase that inspired the writings in this section, which relate the physiological and psychological factors of vision to photography.

Photography and cinematography classes typically begin with studying the visual art itself, whether through examples or through the tools of the trade. This is especially difficult in the photographic arts because they use optics in such a literal way that the student is tricked into thinking the camera sees as their eye sees.

I have come to believe firmly that one must learn how we see before one can learn how the photographic medium sees. Knowledge of the similarities and dissimilarities between the camera lens and the human visual system is liberating. Only once the artist casts aside the assumption that the lens and the eye are one and the same can they understand how to create the images they see in their own mind’s eye.

At the same time, the medium of photography is defined by the limits of our vision, just like Wittgenstein’s telescope. Our eyes are powerfully sensitive to detail, tone, and color, but they can also be easily fooled. The photographic medium is limited by the strengths of our visual system and also exploits its weaknesses. These instances are not mere medical curiosities; they can have a profound effect on the practical and aesthetic decisions of a photographer.

Finally

If I were to distill the writings on this site down to one thing, I would say this is an exploration of relationships:

The Science section explores the internal relationships among the materials used in photography and cinematography. The Zeitgeist section looks at the relationship between society and photographic materials, and how this affects the artist’s work and thought. Finally, Our Eyes Give it Shape looks at the relationship between photographic materials and ourselves.

(I never thought that when I began writing a blog it would end up being about relationships.)

I will let the rest of the world generate the mass of content, but I want to give it form. I want to look at the relationships between parts, because this is the only way to form an understanding of the whole.

Welcome... by John G Arkenberg

For those who know me, there is some surprise in my writing a blog. In fact, some will consider this a retreat from my generally Luddite principles. So first allow me the time and space to justify my digital presence and illustrate the principles it will serve.

I receive increasing pressure from friends, colleagues, and students to post and write about the material explored in my “Science of Cinematography” class at NYU, because I am given the resources and freedom to test film stocks, cameras, and lenses. I have collected a wealth of data that has clarified a great deal for me and for others. I regret to inform everyone that, despite this pressure, I will not be posting the results here. My reasoning is as follows:

First, the tests I have performed are designed to be seen in a medium of much higher quality than the internet can offer. Rather than post compressed or degraded samples, I would prefer to explain the test procedures so that readers can gather their own material.

Second, blogs that contain ‘tests’ or ‘test results’ of photographic materials are legion. I have not seen a single one that discusses the problems and process of conducting a good test. Furthermore, asking questions about the nature of the materials used in the visual arts raises a number of interesting philosophical issues that are never fully explored.

Third, communication for me has always been a two-way affair. I have participated in forums, but found the terms of engagement too socially complicated. Also, the voices of many create a cacophony. By posting these writings in a small niche of cyberspace, I hope people will discover them and feel free to contact me personally to share or to criticize. I enjoy one-on-one communication with people of like, or even unlike, minds.

Finally, my class is confined to a finite term in which I am frequently unable to pursue ancillary topics. This site gives me the space to travel down some side roads that would otherwise never be explored. My hope is that these side journeys will enrich the landscape for students and outsiders alike.

I have been, and always will be, interested in the exchange of ideas. These writings serve as the first step in what I hope will be a singular exploration of the many facets of the photographic medium. I will probably pose far more questions than I answer, but this comes from a desire to avoid espousing dogma.

For those who still express surprise at my penning a blog: as you can see in the image above, I still work very much by hand.