Something has to be in Focus

Day Lily
[Copyright Andy Richards 2013
All Rights Reserved]
LATELY, I see a lot of flower images posted on social media. Flowers are pretty, and a naturally attractive subject. But there is a recurring flaw in many of these images that detracts from their otherwise attractive qualities: focus.

Yellow Rose
Randomly selected from social media posting

EVERYBODY HAS a camera these days, mostly in the form of their cell phone. But surprisingly, I see a lot of these out-of-focus images from photographers – often in groups dedicated specifically to photography subjects or gear. Photos that, with recurring frequency, have no part of them in sharp focus. For illustration, I searched the internet for some of the ubiquitously posted flower images. Consider the above image of a yellow rose. The green and yellow pastel colors, and the background bokeh, are pleasing enough. But the image just doesn’t really “make it.” There is no part of the plant – flower, petals, or greenery – that is in sharp focus. And I think that “kills” the image. In most instances, what we are trying to do in a photograph is to draw the viewer’s attention to at least some feature of the image. A common way of doing that is making certain that what we want the viewer’s attention on is in focus. This is particularly true in this type of image. Close focus like this can be difficult, and I think yellow is one of the most difficult colors to get looking good and sharp. But something here needs to be in tack-sharp focus in order for this to be a “good image.” The shooter needs to decide what part of the image is most important and focus on that component; probably (but not necessarily – that will be up to the photographer) the petal edges in the front-center of the flower.

Sharpness is one of the fundamental photographic skills

SIMILARLY, THE Peonies photo below lacks any area of sharp focus, from foreground to background. I find my eye wandering all around the frame, looking for a center of interest. Some portion of the image needs to be in focus. I would probably work to get one of the two or three large foreground flowers sharp. The below image, in my view, suffers from the same issue. Overall, a pretty scene. But nothing in the image is tack sharp, with the same result as above. The eye is looking for a place to land.

Peonies
From Social Media

THERE IS certainly something inviting about a frame-filling, bokeh-enhanced, colorful flower in its element. But the one fundamental that, in my view, makes or breaks these images – the difference between a nice image and just a so-so picture posted on the internet – is focus. Sharpness is one of the handful of fundamental photographic skills. It is perhaps one of the first things we learn. It can be affected by inaccurate mechanical focus of the lens, by movement of the camera during capture, and by movement of the subject during capture. Without something in the image being tack sharp, the result is just a fuzzy picture, albeit maybe a colorful one. I appreciate that sometimes, in the interest of art, a photographer might intentionally let the entire image go out of focus. In my view, that seldom works. Imagery that has some out-of-focus elements can be very effective – as long as there is some part of the image in focus. Which part? That depends on what the photographer is trying to portray. In the majority of my own such images, I try to have one area in sharp focus – particularly a component like the pollen-saturated stamens in the opening image. In the image below, I illustrate my thinking that it is o.k. to have out-of-focus parts of an image – even most of the image. Indeed, this is a technique often used to draw a viewer’s attention to the portion of the image we want their eye to go to (in this case, the water-droplet-saturated foreground flower). But there must be some part of the image that is in sharp focus.

Stella D’Oro Day Lilies
[Copyright Andy Richards 2013
All Rights Reserved]
CLOSELY RELATED is the concept of depth of field. Photographic images are two-dimensional. Like all two-dimensional art, there are graphic principles that will make us see them in three dimensions. One of those is the perception of elements from front to back. In oversimplified terms, depth of field is that portion of the image – front to back – that is in sharp focus. Sometimes we would like everything in the image to be in sharp focus. That can probably best be achieved with a short focal length lens and a small aperture. But more often than not, the closeup flower images I am referring to here are enhanced by some degree of purposeful out-of-focus elements. But not everything. At least some part of the image has to be in focus for the image to draw us in and be believable. The out-of-focus portions of an image background (and sometimes foreground) are sometimes referred to as bokeh. In the Purple Coneflower image, each flower, from foreground to background, becomes progressively less sharp. That is a function of depth of field. Understanding this will go a long way toward a nice, deep image. If I had focused on the middle flower, for example, I might have had an out-of-focus foreground, which would have seemed out of balance to me.
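For readers who like numbers, depth of field can be approximated with the standard thin-lens formulas. Here is a minimal Python sketch (the function, the circle-of-confusion value, and the example settings are my own illustrative assumptions, not anything from a particular camera or lens):

```python
def depth_of_field(f_mm: float, f_number: float, dist_m: float, coc_mm: float = 0.03):
    """Approximate near/far limits of acceptable sharpness.

    f_mm: focal length in mm; f_number: aperture; dist_m: subject distance
    in meters; coc_mm: circle of confusion (0.03 mm is a common full-frame value).
    """
    u = dist_m * 1000.0                          # subject distance in mm
    h = f_mm ** 2 / (f_number * coc_mm) + f_mm   # hyperfocal distance in mm
    near = u * (h - f_mm) / (h + u - 2 * f_mm)
    far = u * (h - f_mm) / (h - u) if u < h else float("inf")
    return near / 1000.0, far / 1000.0           # back to meters

# A close-up at 100 mm, f/2.8, half a meter away: only a few millimeters are sharp
print(depth_of_field(100, 2.8, 0.5))
# A short lens stopped down at the same distance: roughly 20 cm of sharpness
print(depth_of_field(28, 11, 0.5))
```

At close-up distances the zone of acceptable sharpness collapses to millimeters, which is part of why these flower images are so unforgiving of focus placement.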

Purple Coneflowers
[Copyright Andy Richards
All Rights Reserved]
WHAT PART of the image needs to be in focus? That is the artist’s question. But some part of it certainly must. The tulip image below just begs – in my view – for some depth. But it isn’t successful in my opinion, largely because there is no part of the image that is in really sharp focus (or at least no part that is intentional and of interest). I would love to see the center tulip in the foreground in crisp, sharp focus, with a gradation of progressive out-of-focus tulips behind it. It would give the image the depth and drama it lacks here. While certainly not my own best effort, the Shasta Daisy image further down illustrates what I am trying to convey. There is enough sharpness in the image to catch the eye, but I think the out of focus parts of the image make it much more interesting. And again, it progresses from sharp to less in focus from the front to the back of the image.

Tulips
Social Media Photo

I DON’T know what cameras and lenses were used on the illustrative images here (other than my own, of course). I wouldn’t be the least bit surprised if most (if not all) of them were made with smartphones. The yellow rose image, with its nice bokeh – maybe not. But bokeh, or at least out-of-focus areas in an image, can be created with artificial intelligence, both in post-processing and now in-camera, with the impressive “computational photography” technology built into phones. Hard to know, and not really relevant to my point here.

But in spite of the moniker “smart phone,” the camera (in a phone or stand-alone) is not smart

THERE ARE some takeaways here for photographers. The first is an understanding of how focusing works. Most modern cameras and all phones today incorporate auto-focus. But in spite of the moniker “smart” phone, the camera (in a phone or stand-alone) is not smart. It is technologically amazing. But it doesn’t think. There are sensors in any camera that detect points in an image and focus the camera on them.

Multiple AF Sensors depicted on camera screen or viewfinder

MODERN CAMERA focus area sensors have become very complex compared to the old-school manual focus aids, and then the auto-focus sensors, in our old analog cameras. The image above shows multiple sensors, linked to an autofocus algorithm that “smartly” chooses the areas to focus on. For still photos like those shown here, I never use this feature, preferring to use only one sensor, and to have it movable by me to the area in the image I want to be in focus. Smartphones are not as easy, because the default settings do not show these sensors at all and the camera defaults to its programmed focus strategy (my Samsung Galaxy S21 has a “pro mode” which lets me see the multiple sensors in multiple mode, but I still cannot see – or move – the single AF sensor point). It is important to know what is happening “under the hood.” Left to its own devices, the camera will do one of two things: (1) focus on whatever point the sensor sees, or (2) find nothing to focus on and get confused, rendering an out-of-focus image. Smartphones are pretty darned good at finding focus, but they do miss sometimes. But obviously the key point here is to understand what the sensor is “seeing” to focus on. The photographer must be aware that the camera will try to focus on something – but it may not be what the photographer thinks it is focusing on.


Shasta Daisies
[Copyright Andy Richards 2013
All Rights Reserved]
THE PHOTO below is another failed attempt at depth of field/bokeh. It is a “close, but no cigar” try. The technique of an out-of-focus background (bokeh) is obviously at work here. But in order for the photo to be successful, the foreground image (or at least a substantial part of it) – the subject of the photo – has to be sharp. It is not, which is too bad, as it has the makings of a really nice image. A close look at the middle ground of the photo reveals very attractive color. To me, the image looks like it could even have been processed in a computer to create the background. That background may be a bit bright, but some work in post-processing could tone that down nicely. And then you have the makings of a good photograph, in my opinion. But the Achilles heel of this one is the failure of the photographer to get the subject in sharp focus. It can’t be fixed (in spite of Google’s misleading commercials about how the new Google Pixel “fixes” blurry photographs – nope, it doesn’t 🙂 ).

Flower Photo
Social Media

A SECOND takeaway here is that focus is not the only thing that contributes to an unsharp image. Even if your lens is set up to properly focus a subject sharply, there is another thing that can cause blur in an image: movement. It can be either movement of the subject (animation, wind) or movement of the camera. Humans cannot possibly hold perfectly still. Today’s technology, once again, has come light years in counteracting this with image stabilization (IS) technology. None is better than the IS in smartphones. But they all have their limits. If light conditions are low enough, it may be necessary to use wide-open apertures and very slow (long) shutter speeds in order to allow enough light to fall on the sensor to capture a visible image. Under these conditions, camera movement will create blurry photos. At the same time, wind may be moving things in the photo that the photographer expects to be in sharp focus. And of course, animation (people and animals) can create blur. So, again, the shooter must understand these issues. It looks possible to me that the above image was made in lower light conditions and that wind or camera movement could have caused the out-of-focus conditions.

I think it is worth mentioning here that the process of digital rendering of an image can also contribute to a lack of sharpness. I see that with my scanned (from film) images – particularly when the scanner/scanner software isn’t the very highest quality. For cameras, but probably not applicable to smartphones, there is also another factor often at work. Many cameras (particularly in the early days of digital cameras) have filters on them (to prevent a type of color “pollution”), called anti-aliasing filters. The technology of color digitization of images involves an array of red, green and blue sensor-filters. The combination of these filters creates the “pixels” we read about in our images. Anti-aliasing filters attempt to remove some unwanted fringes, tints or bleed-overs in images. In the process, they create some softness in the image. This can usually be remedied in post-processing by some careful “sharpening.” Some of the soft images we see may be a victim of this, rather than an unsharp original. As time has gone on, this issue has been addressed by much better digital capture technology. If you are shooting jpg images, the phone or the camera has already applied sharpening to account for this. But if you are shooting raw images, the anti-aliasing piece may be of concern/interest to you.
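As a rough illustration of the handholding problem, here is the old “reciprocal rule” of thumb expressed as a tiny Python sketch (the function name and the numbers are illustrative assumptions – a guideline, not a standard from any manufacturer):

```python
def min_handheld_shutter(focal_mm: float, crop_factor: float = 1.0,
                         stabilization_stops: float = 0.0) -> float:
    """Rule-of-thumb slowest handheld shutter speed, in seconds.

    The classic guideline is 1 / (effective focal length); each stop of
    image stabilization roughly doubles the usable exposure time.
    """
    base = 1.0 / (focal_mm * crop_factor)
    return base * (2 ** stabilization_stops)

print(min_handheld_shutter(50))          # ~1/50 second, unstabilized, full frame
print(min_handheld_shutter(50, 1.5, 3))  # ~1/9 second with 3 stops of IS on APS-C
```

If the light forces a shutter speed much slower than that guideline suggests, blur from camera movement becomes very likely – exactly the condition I suspect in some of these images.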

The photographer must be aware that the camera will try to focus on something – but it may not be what the photographer thinks it is focusing on

MY MOM always told me that if you have a criticism, you should have a helpful suggestion to go along with it. 🙂 So here goes: Most importantly, understand how your camera (smartphone or stand-alone) achieves focus. For smartphone users, there may be some settings “under the hood” that you can use to see the focus bracket. There are also “apps” that you can download that will give you more user controls. These apps – in some cases – will allow you to manually control aperture (which affects the depth-of-field/bokeh issues discussed). You can also use the phone’s AI (the iPhone 12 and above have particularly good ones) to create a bokeh effect. But you must remember to have some portion of the photo (usually in the foreground of the image) already in sharp focus. For stand-alone camera users, spend some time in the manual (yeah, even if you are a guy 🙂 ). Modern cameras use a number of different technologies to achieve focus, and you have some control over that. For the images being discussed here, using auto-focus (and I mostly do), I like to set the single AF/single shot mode on the camera. You can also often set the size of the focus area. You can experiment with that – but it is important that you understand what it is doing (or trying to do). Think about which part or parts of an image you want to be in sharp focus, and then concentrate on getting the focusing sensor on those parts. Most often, background bokeh will be achieved by either wide apertures or long lenses – or a combination of both. Be aware that in both of these cases, the depth of field will be much narrower and what you see in the viewfinder might not be the same as the end result. Hope some of this stuff helps somebody out there.

[NOTE: I am always a little sensitive about writing “critical” blogs like this. I will be the first to admit that my images – by either my own critique, your critique, or just points of view – are imperfect. I think we should all be striving to make all of our images better. I also am very cognizant, particularly as I look at the completed version of this blog post, of what WordPress, Facebook, and other sites do to our images when posted – these are things that are beyond our control. It is very possible that any of the images used here may appear softer than they did on the creator’s individual screen. I have certainly noticed changes in image quality to my own images after posting. The “illustrative images” here are purposely unattributed. The idea is to illustrate a “teaching point” and not to be critical of any image or photographer. As always, I appreciate those of you who follow and who read here. Thank you]

Posting Your Pictures to Social Media – some “tips”

AS THE holidays approach, folks will be posting more photos on social media than usual. Maybe some new phones (and even new cameras), holiday gatherings, family gatherings, and holiday decorations. I know I have posted these “tips” before. But I think they are worth repeating. I continue to see a few things that make otherwise pretty good images well . . .  not so good, online. I see this more on Facebook than anywhere else (but maybe that’s because I spend more time there than any other site :-)). The same issues are prevalent on Instagram, though. Do you want your images to stand out? There are really just a few “rules” to follow to get you there.

I have purposely tilted the horizon on this image for illustration purposes
[Copyright Andy Richards 2022
All Rights Reserved]

Watch Those Horizons

THIS IS still the most common issue I see. And, surprisingly, I see it more often than I should from some pretty good photographers. In my mind there is nothing more obvious (and deleterious) to an image (especially one with a well-defined horizon – like water) than a crooked horizon. Look at my illustration. Our eyes will fool us at first, due to the many interesting (and perhaps beautiful) elements of the picture. But eventually, your subconscious will tell you something is wrong here. My mind immediately wonders how all that water doesn’t drain out of the scene. 🙂 I think it is a combination of things. In the case of experienced shooters, it probably results from handheld shooting. For most of you, who are probably using your smartphone, of course everything is going to be handheld. All the more reason to be aware of this concern. The best approach is to try to get it level when shooting, and there are some handy aids for that. Virtually every camera today has grid lines that can be turned on, superimposed on the frame, to give you a point of reference for horizontal and vertical orientation (more below). Most dedicated cameras and many smartphones also have a “level” app that can be turned on. Use them. They are your friend. And finally, even if your result has an unlevel horizon, you can almost always fix that. Almost every phone camera now has an “app” where you can very easily level the horizon on a photo you have taken (many of them are “one-click” automatic). If not, “there’s an app for that.” Don’t believe your own eyes. Most of us have a bias, and while we think we are shooting level, we are not. I have proven that to myself empirically time and again. It is the easiest thing to fix and perhaps the most dramatic improvement you can make.

There is nothing more deleterious to an image than a crooked horizon

Most cameras and some smart phones today have a feature that you can turn on that incorporates a “level” tool. If you have it, turn it on!

In-camera Level tool

Focus

I STILL see a lot of images that are not in sharp focus. With your cell phone there is usually only one “culprit” here. You are not “telling” the phone where to focus. Cameras that have autofocus (most of them – and all cell phones) have “sensors” that tell the camera what part of the image to focus on. Most of the time, the sensors (usually multiple points) are set at a general, fairly wide setting, with the camera software “programmed” to get the image in focus. And in most cases that gets it right. But if you are very close to your subject, or there is a very distant background that has some detail, getting sharp focus front to back is more of a challenge. This is mainly because of something known as depth of field (DOF). For serious photographers here, if you aren’t familiar with DOF, you need to do some research and study, and understand it. For smartphone users, it is probably less critical. This is – in part – because most smartphones today incorporate two or more lenses, and the software figures DOF out. Most of the time they get the right part of the image in focus. But only if you point the focusing mechanism at the correct target. Usually, the sensors detect contrast or detail, and that is how they determine sharp focus. If there is an area in the image with sharp detail or contrast that is not your subject, it is important not to point the focusing sensor at that area, or it will “fool” the camera into focusing on the wrong thing. This often results in a blurry image. Get to know how that focusing bracket on your phone works (they are probably adjustable and movable). It will improve your results.

But be careful. Look at the illustration below. Newer cameras can incorporate multiple focus points, as below. They contain relatively sophisticated programmed-in algorithms. They work pretty well in many situations but can also be “fooled.” If possible, I think it is best to set just one focus point and know where it is. On my cameras, I set focus to a single, movable (by me) AF sensor point. I feel that I then always know where the camera is “aiming” for focus. The illustration below shows a multiple-point AF setting, with the rectangle activated and placed at the point of focus (as noted, on my own cameras – at least for still photos – I have all the other points “turned off” so you do not see them, and I am only using the single rectangle focus point). [I may change these settings if I am trying to shoot a moving target].

Sensor Placement
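To make the “sensors detect contrast or detail” idea concrete, here is a toy Python sketch of a contrast-detect focus score – a variance-of-Laplacian measure. Real cameras use far more sophisticated phase- and contrast-detect systems, so treat this purely as an illustration of the principle that detail scores high and a blank area scores low:

```python
import numpy as np

def focus_score(gray: np.ndarray) -> float:
    """Score sharpness by local contrast: variance of a Laplacian response.

    gray: 2-D array of pixel values. A higher score means more edge detail,
    which is (roughly) what a contrast-detect AF system hunts for.
    """
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(0)
detailed = rng.random((64, 64))   # stand-in for a textured, in-focus patch
flat = np.full((64, 64), 0.5)     # stand-in for a blank wall or a defocused area
print(focus_score(detailed) > focus_score(flat))  # True: detail wins
```

This is why pointing the focus bracket at a blank or low-contrast area confuses the camera: there is simply nothing there for the algorithm to score.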

THERE ARE a couple of other reasons for blurry shots. They both involve movement. In a still photograph, we are trying to capture an image outline in sharp detail. The shutter opens for a “moment in time” and then closes. It attempts to “freeze” the image during that time. Certain things influence the length of time the shutter remains open. Light is required for exposure, and the less available light, the longer that shutter must remain open. I am oversimplifying this a bit, but it probably works well for smartphone users to think this way. If you are using a DSLR-type camera, you should familiarize yourself with something known as “the exposure triangle.” Movement occurs in one of two ways. If you are handholding your shot, and the time is fairly long, your body’s natural shake and movement will move the camera, resulting in blurriness. Today virtually all phone cameras have built-in “image stabilization” (IS), which is essentially gyroscopic technology designed to counter our body tremors and movement. They work amazingly well but are not failsafe. The second way is when the subject moves. Image stabilization will not help with that. Subject movement happens most often when there is wind, or when the subject is animate (animals and humans).
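For those curious about the arithmetic behind the exposure triangle, here is a small Python sketch. The EV formula is the standard one; the helper function and the example settings are my own illustration:

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float = 100) -> float:
    """EV referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100).

    Combinations with the same EV admit the same total light; you can trade
    a wider aperture for a faster shutter (or a higher ISO) and keep the
    exposure constant.
    """
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# f/8 at 1/60 s and f/4 at 1/250 s are (nearly) the same exposure...
print(exposure_value(8, 1 / 60), exposure_value(4, 1 / 250))
# ...but the second combination's shutter is about four times faster,
# greatly reducing blur from camera or subject movement.
```

The practical point: in low light, the triangle forces a slower shutter, a wider aperture, or a higher ISO, and each trade has a visible consequence – blur risk, shallower depth of field, or noise.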

Exposure/Lighting

THIS IS one of the harder ones. Most smartphones do a very good job in even lighting conditions. Where all cameras show their limitations is with difficult lighting. The difference between dark areas and bright areas in a picture is often referred to as contrast. “High” contrast images often have very dark and very light areas within the frame. The human eye is very good at adjusting our vision to that contrast. The camera sensor, not so much. In spite of leaps and bounds forward since the early days, digital sensors have limited ability to properly expose over a large range of contrast. In some pictures, you will have to make a choice about what you want to be well exposed (the bright areas or the dark areas). Usually, that is going to be the darker areas. Most often I see the problem in “people” portraits, where you can barely see faces. Often this happens on a very sunny day, or late in the afternoon when the sun is still bright, but very directional, raising the contrast and creating deep shadow areas. What is happening? Cameras have another sensor (a meter) in them that “reads” the light and tells the camera how to expose an image. Meters are calibrated to expose whatever they see at a setting known as “neutral gray.” The meter tries to properly expose the area it reads. But it has difficulty distinguishing between those very bright areas and the deep shadows. Newer meter technology has been “programmed” to kind of “recognize” a scene and adjust the overall exposure. It works pretty well, especially in even lighting conditions. But high contrast still confuses it. So, our result often shows the inability of the sensor to capture both light and dark areas beyond a certain point. In the photo below, the strong direction of the sun and the visor on the subject’s cap are creating deep shadows. The meter tries to do its best to expose the overall photo. You can see that most of the image is decently exposed, but perhaps the really important part – his face – is largely in shadow. If he weren’t wearing sunglasses, you would probably not be able to see his eyes.

Deep Shadows Photo
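Here is a deliberately oversimplified model, in Python, of why a meter calibrated to neutral gray gets fooled. The function and the reflectance numbers are illustrative assumptions only – real matrix meters are far smarter, but the underlying pull toward mid-gray is the same:

```python
import math

def meter_shift_stops(mean_luminance: float, mid_gray: float = 0.18) -> float:
    """Stops of exposure change a simple averaging meter would apply.

    A meter calibrated to "neutral gray" drags the scene average toward
    mid_gray (about 18% reflectance); negative values mean it darkens.
    """
    return math.log2(mid_gray / mean_luminance)

# A bright, contrasty scene (sky, sand) averaging well above mid-gray:
print(meter_shift_stops(0.45))  # about -1.3 stops: shadowed faces go darker still
# A snow scene metered naively gets rendered gray unless you add exposure:
print(meter_shift_stops(0.80))  # about -2.2 stops of underexposure
```

That negative shift on a bright scene is exactly what buries a shaded face: the meter protects the bright background and lets the shadows fall.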

HOW DO we address this? As we talked about with focus above, it is important to know what part of the image you want correctly exposed, and how the camera is going to accomplish this. As with focus, there is usually an indicator of where the meter is pointing (often the same bracket or point as the focus sensor), and of how much of the overall scene it is measuring. On higher-end cameras, we have the ability to fine-tune how wide an area the sensor/meter reads. That capability is usually there, with some digging, on your smartphone, too (and if not, there is probably a downloadable app that will do it). If you meter the area the faces are in carefully, you will get them exposed and visible. But if the contrast is too great, this may be at the expense of incorrect (usually over) exposure of the rest of the image. In the image above, we may need to purposely overexpose the entire image to get the eyes properly exposed. It is possible (even probable) that the sunny, bright background will be completely blown out, though. So sometimes you have to make a choice. When you have control of the situation, a better approach is to try to move your subjects into what we refer to as “open shade.” Not dark shadows, but out of the bright sunlight. This evens out the lighting conditions and gives you a better chance for good exposure. And, when all else fails, if you turn on your flash (I know that is counter-intuitive on a sunny day) and pop some flash into the shaded area, you may get a better result. What that is doing is actually trying to even out the contrast, by lighting the shadowed areas. Since flash covers a very limited distance, it can be very effective. In the photo above, a controlled pop of flash (referred to as “fill” flash) would probably help this image. But the easiest, and perhaps best, approach would be to move him to an area with less high-contrast conditions, such as the “open shade” we discussed. You usually have control over things like that. Based on our natural intuition, it often comes as a surprise that bright sunlight is not always best for photography. It is also good to be aware that certain environmental elements contribute to lighting issues. Water and snow, particularly, can create high contrast and reflect bright sunlight. Exposure meters often have difficulty with snow and water scenes, and it may be important to adjust accordingly.

Guidelines are not meant to be slavishly adhered to. They can, and sometimes should be broken

Composition

YOU MAY think I am getting too deep here, and that a discussion of composition is for artists and serious photographers. O.k., sure. But again, why not use just a few “tips” to make your personal snaps stand out? The main principle I follow is to ask: “what is my subject and what am I trying to convey?” It really doesn’t have to be too philosophical or analytical. But once you identify the subject, it is much easier to think about the best way to portray it. There are some guidelines that may help with this. They may make the process more mechanical, but that is not actually a bad thing, as it can get you where you want to go without having to think too hard about all the other esoteric considerations. The first guideline is the most important of all: Guidelines are not meant to be slavishly adhered to. They can, and sometimes should be broken! 🙂

If camera lenses are round, why are all our photographs square? – Steven Wright

WE HAVE already covered another aspect of one of these tips above: the level horizon. But while we are talking about horizons, let’s address a point that can make a huge difference in your images. And that is to consciously think about where you place the horizon. There are some “rules” that classic artists have developed over the years that are applicable here. Probably the most common “rule of thumb” is to use the so-called “rule of thirds” when placing a horizon (and subject) within the rectangular frame we deal with in photographs (I think it was comedian Steven Wright who asked: “if camera lenses are round, why are all our photographs square?” 🙂 ). In most cases, putting the horizon in the middle of the frame results in a less dynamic, often boring image. What I also see is that sometimes we get so excited about our beautiful subject that we don’t see other things in the photo. If 50% or more of your image is plain sky (even plain blue), or plain water, you are most likely not accomplishing the dynamic image you think you are. Your eyes and brain see the image and fool you into thinking that is what you are capturing. The camera is mechanical and doesn’t have that “brain” function. Think about bringing your subject more “front and center” and excluding (or at least limiting) plain backgrounds and foregrounds, which will usually have the effect of moving the horizon up or down. That is usually a good thing. One major exception to this rule is the “mirror-type” reflection image. Often that image looks most dynamic when the break between subject and reflection is dead center (but even then, not always). Almost all cameras (and many smartphones) today have a feature you can turn on which superimposes a “rule of thirds” grid in your frame. This not only helps with composition but can also help with level horizons. I encourage you to turn this feature on!

Using Grid Lines and “Rule” of Thirds
[Copyright Andy Richards 2022
All Rights Reserved]
PLACEMENT OF your subject within the frame is closely related here. It involves not only the horizontal placement, but the vertical placement of an image. I refer to this as balance. If you are making a close up “portrait” image, you will probably just center the person or persons and it will yield the result you want. But if the person or people are doing something, you can often make the image more interesting by adding context. Are they looking at something? If so, it is usually preferable for them to be looking into the photo instead of out of it. Is there something that gives it context (a new car, a boat, mountains, etc.)? Then you may want to place parts of the image at the intersection points of that “rule of thirds” grid. I recently read a comment by a pro who suggests that this is really the wrong way to look at this issue. He argues that the placement of horizons and subjects within the frame is not nearly as important as what you are trying to convey with your image. I won’t disagree with him. But I might say it a little differently, as I did above. Guidelines are just that: Guidelines. They are not meant to be slavishly adhered to. But knowing them and thinking about them can certainly help to make your photos better in many instances. I hope maybe some of these things will help someone make some better photos during this holiday season. Above all things, have fun!

Interlude: How we Render and See Color (2 things that work better than Saturation)

[I have had this one queued up for a while. I think that on the eve of fall foliage photography season, it is timely. I will return to the Portugal Series on Saturday]

Peacham, Vermont 2005
As captured (no saturation adjustment)
[Copyright Andy Richards 2005]
THERE ARE many really good books out there on digital color management. Perhaps the all-time standard was written some years back by the late, great Bruce Fraser, along with Chris Murphy and Fred Bunting: “Real World Color Management.” There have been major advances in the technology we use – including capture, scanning, editing and projection – since it was written. And it is not a “For Dummies” style book. But for anyone really interested in digital color issues, it is probably worth the slog through it. All this to say that I am not an expert in color management or theory. My armchair perspective should be given its appropriate weight. Yet some things I have seen recently lead me to delve into this subject from a (hopefully) less technical and more observational perspective. I know I am personally guilty of any number of digital processing issues, so my comments here are geared not toward showing my great expertise. Rather, I want to pass on some knowledge that has been generously shared with me over the years.

. . . overuse of the saturation adjustment in processing images is probably the biggest issue I see in photographs posted on-line

IN SPITE of some objective measurement standards, we all see color differently. As technically amazing as the human brain and senses are, they are, in the end, biological. And as such, there is significant variance in each of our biological “equipment.” Likewise, in spite of the objective standards and technical manufacturing abilities available today, different equipment (possibly even the identical model of the same brand) may render color differently. And finally, viewing conditions will almost never be exactly comparable. Each of these factors will dictate how we “see” images and color.

AND, PHOTOGRAPHY is in significant part, Art. To me, that means there is no “right” way to portray an image, except the way the photographer wants to portray it. I do, however, think that sometimes the poster of an online image thinks they are portraying it a certain way, but aren’t aware of things they are doing that might actually be detracting from that vision.

THAT BEING said, there are some things that can be nearly universal in this area. The most important of them is perhaps calibration. Despite the variables suggested above, calibrating your equipment – especially your monitor(s), but if you print, also your printer – is a really important step. I use an older technology to calibrate my monitors (since writing this, I have updated my monitor calibration tools and software to the datacolor Spyder X) and probably don’t do so often enough. Good calibration tools will adjust the brightness, contrast and color of your monitor to a known standard and should take into consideration ambient viewing conditions. It is an added expense, but if you are going to be serious about post-processing images, it is a necessary expense, in my opinion.

Monitor Calibration Tool

MY RECENT observations include two significant areas that I think would improve the images I see on-line (again, I am not using my own images as any “standard,” nor am I trying to establish any superiority – I am just sharing what occurs to me as I view them).

I am not saying that adjusting the saturation in an image is always wrong. But what I am saying is that you cannot generally make a nature image “more” colorful than it already is

◊      SATURATION

The first (and I have written about this in the past, but it continues to be a commonly seen occurrence) is the overuse of the saturation adjustment in processing images. Color is a complex animal. At first blush, it might appear that just moving the saturation slider up renders brighter, and “better” color. One of the places I most often see this is in nature images – particularly fall foliage images. During the short, but intense foliage season, I follow (and even moderate a couple of) foliage boards and Facebook Pages. Overuse of the saturation adjustment is rampant on those pages. An image is nice, but the “reds” just aren’t red enough. Push the saturation slider, and at first glance, it pumps up the reds, oranges, and even yellows, right? Well, yeah, but unfortunately, it is not that simple. Digital color is made up of several components. The two that are perhaps most applicable here are saturation and hue. Hue is the component that defines the “trueness” of the color. The hue adjustment moves colors around the color wheel, making them more or less pure colors (or hues). The problem with the hue adjustment is that it is very sensitive, and it is easy to go too far and actually change the color(s). And adjusting it generally affects the entire image and all the colors within. Most of us don’t even go there, and if we do try it, never touch it again.

Saturation, on the other hand, is easier to use and gives at least the illusion that you are improving the color(s). But maybe not. Note the second image in the composite below (this is a very popular image in the fall, and I often see it posted – frequently oversaturated). Alone, it seems to look good. Nice bright, saturated colors. But compared against the “as shot” color version, you can start to see the problems. Perhaps the best indicator is to look at something in the image that should be a neutral color – like the low, gray shed in the foreground. Note how in the second image, it is unnaturally yellow and the roof has a red cast. In the third image, it has a plainly reddish cast to it. The second area my eyes go to is the bright red foliage immediately behind the steeple. It is pretty brilliant even in the “as shot” frame. But in all of the others, it approaches unreality – almost cartoonish. Next, look at the green roof. In the two “saturated” images, it looks unnaturally bright green. In the “red added” image, it has a marked reddish cast (as does the sky, the white church, and everything else in that image). And, in the first saturated image, the corn stubble looks totally unrealistic – almost neon. If I could live with one, it would be the saturation-adjusted version from NIK Viveza 2. I still think it is overdone, but it does seem more subtle. It probably falls within the range of “artistic” adjustment by the shooter. But the other two are just not very good.

Peacham, Vermont 2005
Left to right: (1) as shot; (2) pushed saturation slider in Photoshop; (3) color adjustments in Photoshop to increase reds; and (4) increased saturation in NIK Viveza 2
[Copyright Andy Richards 2005]

Like the hue adjustment, though, the saturation adjustment – by itself – is a global adjustment. In other words, moving the slider affects all the colors within the entire image (so, if you make leaves “redder” by pushing the saturation, you are also making neutral colors, like roads and tree trunks, take on an unnatural, oversaturated color – often giving them a red/magenta cast that is not natural or pleasant looking). But more important to understand is that it does not really make color “more colorful” (e.g., it does not make reds “redder”). What saturation does is change the intensity of the color. But as it does so, it also almost always degrades the image. It also usually introduces an unwanted and often unpleasant color cast. If you closely observe the action of the saturation adjustment, you can see this. Take an image with some good detail in the reds (a fall foliage image with maple leaves, for example). Magnify the image on screen enough so that you can see the edges of a leaf or leaves, and/or the veiny details of the leaf itself. Then increase saturation slowly and you will see the details become mushy. This loss of detail is often one of the telltales I see in oversaturated images.
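You can see the cast mechanism in miniature with Python’s standard colorsys module. The pixel values below are made up, but they mimic a typical “neutral” gray that carries a slight warm bias – exactly the kind of area a global saturation push turns into an obvious color cast:

```python
import colorsys

# A "neutral" gray that, like most real-world neutrals, is not perfectly neutral:
# the red channel leads by a hair (made-up values for illustration)
r, g, b = 0.52, 0.50, 0.49

h, l, s = colorsys.rgb_to_hls(r, g, b)
print(f"original saturation: {s:.3f}")   # tiny, but not zero

# Push saturation hard, the way a global slider does
s_boosted = min(s * 4.0, 1.0)
r2, g2, b2 = colorsys.hls_to_rgb(h, l, s_boosted)
print(f"before: ({r:.3f}, {g:.3f}, {b:.3f})")
print(f"after:  ({r2:.3f}, {g2:.3f}, {b2:.3f})")  # red pulls away from blue: a warm cast
```

The slider did not make anything “redder” in the sense of a truer red; it amplified whatever slight imbalance was already in the pixel – which is exactly what happens to gray sheds, asphalt, and tree trunks.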

Peacham, Vermont 2005
This image was (purposely) oversaturated in post processing
[Copyright Andy Richards 2005]

The overall color cast created by overuse of the saturation adjustment is visible. Sometimes it is painfully obvious, and sometimes it is more subtle – but it is still there. Look at the other areas in the image (often areas you do not want to change), and note that they have taken on a reddish, or even magenta, cast. Our eyes are very good at adjusting to these subtleties, and “tell us” that the asphalt, or the grey wooden barn, or the tree trunks are supposed to be grey, and so we accept that. But on a more critical view, they often have been changed and aren’t truly depicted as grey (or any of the many other neutral tones they should show). Increasing saturation globally is a bit like taking a color filter and overlaying it on the foliage image. It affects every part of the image. Of course, there are better ways to make adjustments than globally, and a lot of the overall color cast problem can be fixed with good selection techniques and targeted adjustments. And a targeted approach is almost always the best way to attack these issues. But we are still not really making reds “more red” by saturating them.

I am not saying that adjusting the saturation in an image is always wrong. But what I am saying is that you cannot generally make a nature image “more” colorful than it already is, and have it be effective. I see so many of these images that are, or border on, being cartoonish, have a color “pall” about them, or show a significant to complete lack of detail. At the beginning of this post, I made the observation that color is seen differently by each of us. I will take that a step further and say that color is largely a matter of perception. One of the ways we “perceive” color is by its contrast to other colors around it. I shoot 99% of my images in the raw format. Raw captures don’t usually look very good straight out of the camera (for reasons more technical than I can adequately explain here), so they universally require some post-processing adjustments, at least in a raw converter software. I use Photoshop (primarily because I always have and am familiar with it). Lightroom, perhaps the most popular photographer’s software, has essentially the same “engine” as Photoshop. There are others out there that are equally good. Whatever you prefer to use, one of them will be necessary to process your raw images (and you should shoot raw in most cases). Sometimes, though, we just want more “reds” in a foliage image than were actually there. So, in post-processing, we “goose” the saturation and/or the reds. But when compared with reality, it really isn’t working.

In the Michigan U.P. foliage scene below, the reds are impressive. But are they real? Nope. Look particularly at that tree in the background toward the left middle. It looks like a colored spotlight is shining on it. And the high leaves in the immediate foreground are just plain unrealistic. Likewise, the “neon” glow of the corn stubble just doesn’t hold up under reality (my reality, anyway). Compare the more “realistic” image further below. No, it is not as “bright red” as the image above. It wasn’t as bright red in real time either. That is what we get sometimes. But I think it is a great example of why trying to make a foliage image “more colorful” than it really is just doesn’t work (as I am writing this, I note that when an image is uploaded to the WordPress site, something gets done to the colors – so none are what I saw on my calibrated monitors when I uploaded them. But hopefully, the illustration serves).

Oversaturated Image – Michigan U.P.
[Copyright Andy Richards 2018]

A better way – In the image here, I would make adjustments, but none of them would involve the saturation adjustment tools. And for the most part those adjustments would be selective or targeted (i.e., to selected parts of the image). In a digital image, the way we present (and see) details is often by the contrast provided along the edges of each individual pixel. When we have contrasting colors, there will be contrast along the edges of adjoining pixels of different color. Consequently, one of the most effective ways to “enhance” the color of a nature image can be by adjusting the contrast in the image (note that this can also increase apparent sharpness in an image). Again, the plain old “contrast” adjustment is global, and you have to observe what a contrast adjustment is doing not just to parts of your image, but everywhere (indeed, a truism for virtually any adjustment). I often find just that one adjustment will bring my otherwise blah raw image into the color condition I am looking for (and saw in the field).

Again, targeted is better. Because of this, I often do fine-tuning adjustments in Photoshop, after the raw conversion. The raw converters have vastly improved since the beginning, though, and there is a brush feature that allows a degree of targeted adjustment in the converter (but not with the precision that can be accomplished in Photoshop, in my view). It works very similarly to the NIK software (which I still use regularly), with an AI-based selection, using a round brush. I find it a rather blunt tool, and use it sparingly, finding that I can do more fine-tuned work in Photoshop. But even before you get to the brush tool, the converters have been working to give you more selective adjustment tools. For example, the dehaze, vibrance, clarity and texture sliders in the ACR/Lightroom raw converters apply what are essentially contrast adjustments to only particular ranges within the image. Unfortunately, the nomenclature is sometimes inconsistent and confusing. All four of these tools are really contrast adjustment tools. But they do the adjusting in a much more targeted manner than the global adjustments.

Again, I would apply these adjustments sparingly and carefully – if at all. It is very easy to overdo them. When it was first added, for example, I was a fan of the clarity adjustment, and often added clarity. It adds contrast to only the “mid-tones” of an image. But what I have observed from using it is that it also tends to desaturate color and give images a kind of grungy look. While that may have its place, I don’t care for it in a nature image. I have found that I like its effect, however, on night scenes. On the other hand, I have become more of a fan of the newer texture adjustment, which is slightly more subtle, and doesn’t seem to desaturate color. These are all tools, and they should be used carefully, sparingly, and balanced with each other. But they are powerful tools that will increase the appearance of bright, saturated color without the negative effects of the global saturation adjustment. Note, however, that these newer tools are Adobe algorithms, and may increase saturation as compensation for other adjustments. It is important to critically watch what an image adjustment is doing to all parts of the image.
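To see why a contrast move reads as “more color,” here is a minimal Python sketch of a plain global contrast adjustment – a simple linear scale around the midpoint, which is my own toy version of the idea, not Adobe’s actual algorithm:

```python
import numpy as np

def apply_contrast(values: np.ndarray, amount: float) -> np.ndarray:
    """Simple global contrast: scale values away from the midpoint (0.5).

    values: float array in [0, 1]; amount > 1 increases contrast.
    """
    return np.clip((values - 0.5) * amount + 0.5, 0.0, 1.0)

# Two adjoining "pixels" of different tone; the edge between them is
# defined by the difference in their values
edge = np.array([0.45, 0.60])
print(apply_contrast(edge, 1.4))  # the gap widens from 0.15 to 0.21
```

Widening the gap between adjoining values is exactly the edge contrast our eyes read as richer, better-separated color – and it is also why the targeted versions of these tools (applied only where you want them) are so effective.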

Michigan U.P.
[Copyright Andy Richards 2018
All Rights Reserved]

A second adjustment that will have a great effect on color and color separation is the brightness/lightness, or luminance, adjustment. Color theorists reading to this point have probably also concluded that I missed the boat when describing the aspects of color. I said hue and saturation. But there are actually three: hue, saturation, and luminance (for our purposes, think brightness). In most software, there are “brightness” (or similar) adjustments. Again – same drumbeat – they are global, and as such, any application will affect the entire image. This is usually not the preferred result. But for our purposes here, again take the foliage image we have been discussing and make up and down brightness adjustments, and observe what it does to your perception of trueness and richness of the colors. You should (generally) see the color get “better” as you decrease brightness (to a point, obviously). Of course, that is also going to decrease apparent detail, and may have an overall effect that is not pleasing. But you can see from this exercise that targeted adjustments of brightness can have a significant effect. As I mentioned above, I use the NIK/Google plug-in tools. There are those who have been doing this a lot longer (and better) than I, who use very technical selection and mask techniques to accomplish fine-tuned adjustments. NIK does that for us, and it does a pretty good job. I continue to use it because I find it very easy to use and generally very good at what I am trying to accomplish, which is targeted adjustments. Selecting particular areas in the image and adjusting contrast and brightness is the most effective way I know to adjust color and make it look “good.” I only very rarely add saturation, and I often decrease saturation in certain parts of an image. If I do add saturation, it will be targeted, and I watch carefully to see the effect on image details.

There is a last, non-saturation-based item that can affect the color in your images. As I said earlier, our perception of pixel edges in digital images is primarily governed by contrast at those edges. As such, the term “sharpening” is really kind of a misnomer. An image (aside from some technical issues with capture and capture sharpening not within the scope of this discussion) is really as sharp – or not – as it is going to get upon capture. What digital “sharpening” is really doing is adjusting the contrast at the edges where colors or tones differ. Again, global sharpening rarely yields the best results. I use a plug-in (there are many of them out there today – often built into your processing software) and most are good – better than most of us can do on our own. But one thing I notice is that sharpening an image often results in a slightly lighter or brighter overall image. That may be o.k., but it is something that needs to be watched. Adjustments are inter-related.
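The classic implementation of this edge-contrast adjustment is the unsharp mask: blur a copy of the image, then add back the difference between the original and the blur, which boosts contrast exactly where edges are. Here is a minimal sketch using the Pillow library; the file names are placeholders, and the parameter values are starting points to experiment with, not a recipe:

```python
from PIL import Image, ImageFilter

img = Image.open("foliage.jpg")  # placeholder file name

sharpened = img.filter(ImageFilter.UnsharpMask(
    radius=2,      # how wide an "edge" is considered (pixels of blur)
    percent=120,   # strength of the edge-contrast boost
    threshold=3,   # skip near-identical pixels, leaving smooth areas and noise alone
))
sharpened.save("foliage_sharpened.jpg")
```

Because the operation adds contrast at edges, it can also nudge the overall brightness and apparent color of the frame – one more reason to watch the whole image while adjusting.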

SO, LOOK again at those saturation-adjusted images. Are you seeing mushiness? Do you see the magenta-colored asphalt or macadam or dirt/gravel? A purple hue on what should be a nice red (perhaps weathered) barn, or even on the gray/brown weathered wood of unpainted barns and fences? Do you see the almost neon green grass and fields, or neon yellow corn stubble? Those are clear signs of a heavy hand on the saturation slider.

I am not advocating a rigid, “water must always be white” approach. But I am advocating that we observe the water in our imagery

◊      THE COLOR OF WATER

Unlike oversaturation, this color issue is one that the limitations of the capture media often create. Getting exposure and white balance correct for the majority of the image often renders water and water-based elements with a color cast. This one must be detected and corrected in post-processing. It is perhaps the most “missed” element in color rendering and correction in images posted digitally. Several years back, a talented pro and mentor, the late James Moore, was working with me on some of my images, and it was one of the first “gems” he imparted. It has stuck with me. Water is naturally white (maybe a little off-white, with maybe just a touch of grey). But once again, our eye/brain response is so smart that our eyes accept that the water should be – and therefore is – white in an image, even when it isn’t. Compare the original and corrected versions of the images below. This applies to flat water, moving water (e.g., waterfalls) and fog (including clouds). That doesn’t necessarily mean that all depictions of water will be white. Water is also a reflective medium, and we often see reflected color in water. Indeed, it can be what makes an image “magical.” But other times, we want the water, as an element of the image, to appear natural. I see this most often in two scenarios: (1) waterfalls (drops and pools) and surrounding pockets, and (2) fog.

Burton Hill Road Farm
Barton, Vermont
As Shot
[Copyright Andy Richards 2010]

The Burton Hill Road Farm is perhaps my single favorite Vermont image. I had an image similar to the one above on my website for several years, before one day noticing that I had not carefully considered the fog. Not that it is unpleasing (if that’s a word 🙂 ) as is, but the digital sensor rendered the fog a grey/blue color with a magenta cast. It really wasn’t that color to my eye on that morning. The image below is now corrected for a closer rendition of what the fog looked like. More natural. The top image is out of the camera, with the exception of some slight contrast adjustment and sharpening. In the image below, I desaturated the fog/cloud bank in the valley and made slight adjustments to brightness and contrast. I personally find this is done most easily and effectively using NIK Viveza 2. There are, of course, other ways to approach this, including NIK’s Color Efex, and Photoshop selection and masking. And though I am not personally familiar with them, I am certain some of the other popular third-party software and plugins have the same facility. No other changes were made to either image.

Burton Hill Road Farm
Barton, Vermont
Color corrected fog
[Copyright Andy Richards 2010]

I am not advocating a rigid, “water must always be white” approach. But I am advocating that we observe the water in our imagery (before, during, and after capture). What did our eyes tell us the water looked like in the field? Now take a critical look at the image in post-processing. Is the water being depicted as we saw it (or as we plan to present it)? I say “critical,” because, as noted above, our eyes can fool us. Our brain knows what the color should look like. But our eyes see it, and our brain corrects it. So we have to apply some experience and understanding of what the water should look like on screen or on printed media. A third party, with “fresh” eyes, who has not been to the scene, will likely see the “off” color we may have missed. Usually this is blue.

Water is not blue (although there are certainly times when it does – and should – look blue). But many other times it is white, or near white. That is particularly true in most waterfall and moving water photos. We see photos, paintings and the like, and water is portrayed as blue more often than not. Why? Water is a very reflective medium. The sky has a blue hue on clear days, and water reflects that blue color. So, we tend to associate water with blue. In the right context, that is perfectly fine. And, in other cases, it is often the reflected color that is what we want in the image, and anything else might look unnatural. But sometimes, blue also looks unnatural. It is not always easy to see, and you have to be actually looking for it, in my opinion. The water in the Sable Falls images is a good example. It was – once again – rendered a blue/blue-grey by the digital sensor. There is so much going on in the image that your color senses may be on overload. Most of the other colors appear natural, but look closely at the water again, especially in the darker areas. Compare it with the image following. See it?

Sable Falls
Grand Marais, Michigan
AS SHOT
[Copyright Andy Richards 2012]

It is common to see a waterfall image posted in which the water is not a natural color. Again, our brain often will “correct” what our eyes see, but that doesn’t change things. The actual color rendered is still often an unnatural blue. Where you will really notice it is if you make a print of the image. Fortunately, it is relatively easy to “fix” this, once we recognize the issue. But it is going to take careful, targeted adjustments. While we want to change the color of the water, we generally don’t want to affect the surrounding details, because the primary adjustment here involves desaturation. If the surrounding rock has rich hues, we don’t want to lose them at the expense of correcting the water. So, it will involve some careful selection and masking. Again, I have pretty good success with letting NIK Viveza 2 do the masking for this. But there are many ways to get to it, and your mileage will vary depending upon your own software, knowledge, and perceptions.
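For the technically curious, here is a crude Python/NumPy sketch of the idea: build a mask of bright, blue-leaning pixels and partially desaturate only those. The file names and threshold values are made-up placeholders – a tool like Viveza or a hand-painted Photoshop mask does this far more gracefully:

```python
import numpy as np
from PIL import Image

# "waterfall.jpg" is a placeholder file name
img = np.asarray(Image.open("waterfall.jpg").convert("RGB")).astype(np.float32) / 255.0

r, g, b = img[..., 0], img[..., 1], img[..., 2]
luma = 0.299 * r + 0.587 * g + 0.114 * b   # simple perceived-brightness estimate

# Crude stand-in for a hand-made selection: bright pixels where blue leads red
mask = (luma > 0.7) & (b > r + 0.03)

# Pull only the selected pixels partway toward their own gray value
amount = 0.6
corrected = np.where(mask[..., None],
                     img * (1 - amount) + luma[..., None] * amount,
                     img)

Image.fromarray((corrected * 255).astype(np.uint8)).save("waterfall_corrected.jpg")
```

The point of the sketch is the shape of the workflow – select first, then desaturate only the selection – so the rich rock and foliage tones around the water are left untouched.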

Sable Falls
Grand Marais, Michigan
(Water has been color-corrected to remove blue cast)
[Copyright Andy Richards 2012]

One thing to be careful of is not to correct to white, if the water is not supposed to be white. In the Michigan U.P., there can be a lot of tannins in the water, and this is often true of waterfall cascades up there. We may want to be on the lookout for the blue cast, but not at the expense of the “root beer” tonality the tannins can produce. What we are really saying here is that you have to be observant of the actual conditions on the ground, and then the rendering when you have it on screen later.

Tahquamenon Falls, Michigan
[Copyright Andy Richards 2004]

Color issues can be hard to see – especially in our own images

THE KEY, I think, is to look critically at our images during processing, and think about what we are really seeing and what looks natural and what doesn’t. Color issues can be hard to see – especially in our own images. I have found that after posting or uploading an image I thought was good, returning to it later is eye-opening and I often think, “that color is off. How did I miss that?”

[Disclaimer: I calibrate my monitors with a Datacolor SpyderX calibration tool and software. I have two matched (identical) monitors on my desktop that I use for 99.99% of my processing. As hard as I try, however, to get color right, there are outside factors beyond my control. I have noted that on both the WordPress site, and on Facebook (where I post the bulk of my work), their software’s algorithm does funky things to my color-corrected images, including compressing some of the color separation and darkening the shadows, often to a point where I wonder why I spent so much time calibrating. 😦 . So, some of the images here may not do as good a job illustrating my points as I intended. The Burton Hill fog photo is a good example. The final image looks much nicer on my monitor before I upload it to the WordPress site. Hopefully, though, you can see enough differences to illustrate my point.]

Why I Post-Process My Images – Part 2 – Image Sharpness

Malahide Castle
Dublin, Ireland
Copyright Andy Richards 2019

In the first part of this series, I considered general image appearance and color. While I am not assigning any particular order of importance, I did that first because I think it is the most “in-your-face” aspect of photography. Color, brightness, contrast, and appearance in general are what first draw the eye. But after some visual inspection, I think the eye becomes more discerning, and almost immediately sees – and critiques – image sharpness. I also think it is important to understand that the first two topics – color and appearance, and sharpness – are very much interrelated. As we noted in Part 1, in all of photography, we are dealing with “appearance,” and with the viewer’s very individual capacity to interpret what they see. But things like saturation, brightness, contrast and exposure can certainly influence the appearance (and importance) of sharpness, both visually and mechanically (overuse of some of the “appearance” algorithms can degrade images and reduce apparent sharpness). I am not saying that every image must be in razor sharp focus. Rather, I am saying that awareness of this aspect of photography is important.

Image Sharpness:

Do we really need to sharpen our images? Years ago, on the old AOL photography board, I got into a “discussion” with another photographer. He didn’t believe any sharpening was necessary because his newest name-brand DSLR produced razor sharp images straight out of the camera (they didn’t). Another photographer commented on an image we were “critiquing,” and said: “looks sharp to me.” While unintentional on his part, he made a point I want to underscore here. Much of photography is appearance. And this is perhaps nowhere more true than in the concept of image sharpness.

Sharpness is a significant consideration that is often very misunderstood. Perhaps the most important concept here is that every technical definition I have ever read or heard speaks of “apparent” sharpness. Apparent sharpness is influenced by viewing distance and the size of the image being viewed. But every lens has a focus point where it is at its sharpest, and good focusing technique remains critically important for the photographer. For years there were really only about 3-4 factors that influenced apparent sharpness: the quality of the glass, the characteristics of the film, the size of reproduction (usually print), and the ability of the shooter. Glass continues to have a large influence. Indeed, as digital sensors get larger (in terms of pixels), the difference in lens quality is exacerbated. There is a very real difference in the sharpness of focus of images rendered by a good quality lens and a poor quality lens. Modern manufacturing technology has perhaps narrowed the divide between traditionally lower quality lenses (think, generally, cheaper third-party manufacturers) and the high quality lenses generally made by the major camera companies. And even among the major makers, there were significant quality differences – usually between their less expensive offerings and their “pro-quality” expensive glass. Today some of the third-party lenses are virtually indistinguishable in glass quality (often the differences come down to build quality and re-sale value).

Technique, of course, will always be a critical factor in all parts of photography. Understanding depth of field, sharp focus – how (and when, and when not) to attain it – and the limits of sharp focus in your particular image continues to be important. And yes, even if you use auto-focus (AF), which has gotten very, very good over the years, you still need to understand what it is doing and how to use it. In fact, AF can fool even an experienced shooter who is not paying attention. Some years back, I blogged about some differences between my full-frame and APS-framed Nikon DSLRs. An astute pro friend and mentor of mine pointed out that an image I was using to illustrate flowing water was out of focus. The corresponding one wasn’t. I let the AF fool me and didn’t pay attention to the details.

nearly every image made from a digital camera could benefit from some sharpening

Most of us no longer deal with film characteristics. But digital imagery has added a new factor to replace the film consideration. Sharpness in film was mostly determined by its grain structure. Digital sharpness is mostly a matter of pixel size, number and configuration. A digital photograph is comprised of many millions of tiny pixels, represented by computing’s most basic “1’s” and “0’s,” stacked around each other. Imagine a Lego creation. Each one of those little Lego blocks represents a pixel, for our purposes. The further away you step from the object, the less obvious the individual Legos, and the more the object looks like its intended form. Also, smaller blocks, more densely packed, will take on the form sooner than larger ones as you pull away. Our “Lego” pixels, however, are tiny. You generally cannot distinguish individual “blocks” without substantial magnification. Take any image and increase its on-screen size, and you will eventually begin to see the individual pixels, and how they stack around each other to make the image. Note that they are rectangular. The border between each pixel creates a line, and generally, depending on the width of that line and the difference in contrast or color between the pixels, you can get a kind of “jaggy” look from the square corners (sometimes referred to as “stair-stepping”). Known as “aliasing,” this effect can look anywhere from cartoonish to downright unpleasant, and is the common result of raw “sampled” images from a camera sensor. Most currently sold camera sensors have an anti-aliasing filter in front of the sensor, which has the effect of slightly blurring those “aliased” pixel borders. Of course, it is a lot more complicated than my elementary explanation, but hopefully it is enough to understand why sharpening is “a thing.”
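
You can see the “Lego blocks” for yourself with a few lines of Python – a hypothetical illustration using the Pillow library and a placeholder filename. Blowing a small patch up with nearest-neighbor resampling shows the hard pixel edges and stair-stepping; a smoothing filter mimics, very roughly, what the anti-aliasing blur does:

```python
# Crop a small patch and enlarge it two ways: nearest-neighbor (hard,
# blocky pixel edges -- aliasing made visible) vs. bilinear (edges blurred
# together, a rough stand-in for anti-aliasing). "photo.jpg" is a placeholder.
from PIL import Image

img = Image.open("photo.jpg")
patch = img.crop((1000, 1000, 1040, 1040))  # a 40 x 40 pixel patch

blocky = patch.resize((400, 400), Image.NEAREST)   # individual pixels visible
smooth = patch.resize((400, 400), Image.BILINEAR)  # borders softened

blocky.save("pixels_blocky.png")
smooth.save("pixels_smoothed.png")
```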

Adding proverbial “insult to injury,” when converting an image to digital “1’s” and “0’s,” an image processor engages in interpolation from the fixed grid of pixels on the sensor, in order to complete the entirety of the image. Interpolation turns nature’s continuous tones into discrete pixels, and the process itself often results in some softness, regardless of the quality of the sensor and lens. And finally, the process of re-converting the pixels to media (especially print media) can result in significant softening of an image. This is why sharpening – at differing stages of processing, and for different end purposes – is so important.
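
To make that interpolation softness concrete, here is a toy round-trip in Python – a simplified sketch, not what any real raw converter does. It samples a full-color image through a simulated Bayer sensor mosaic, then rebuilds each channel by averaging neighboring samples; compare the result with the original and the detail is visibly blurred. Filenames are placeholders, and it assumes NumPy, SciPy, and Pillow are installed:

```python
# Toy demosaic round-trip: sample an RGB image through an RGGB Bayer
# pattern, then fill in the missing values by bilinear (neighbor-averaged)
# interpolation. The rebuilt image is softer than the original.
import numpy as np
from scipy.ndimage import convolve
from PIL import Image

rgb = np.asarray(Image.open("photo.jpg").convert("RGB")).astype(np.float32)
h, w, _ = rgb.shape

# Bayer masks: which sensor sites "see" each color (RGGB layout)
masks = np.zeros((h, w, 3), dtype=bool)
masks[0::2, 0::2, 0] = True                          # red sites
masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True   # green sites
masks[1::2, 1::2, 2] = True                          # blue sites

mosaic = np.where(masks, rgb, 0.0)

# Normalized-convolution fill: average whatever samples exist nearby
kernel = np.array([[0.25, 0.5, 0.25],
                   [0.5,  1.0, 0.5 ],
                   [0.25, 0.5, 0.25]])
rebuilt = np.empty_like(rgb)
for c in range(3):
    num = convolve(mosaic[..., c], kernel, mode="mirror")
    den = convolve(masks[..., c].astype(np.float32), kernel, mode="mirror")
    rebuilt[..., c] = num / np.maximum(den, 1e-6)

Image.fromarray(np.clip(rebuilt, 0, 255).astype(np.uint8)).save("demosaiced.jpg")
```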

Given all this, it stands to reason that nearly every image made from a digital camera could benefit from some sharpening. I am hedging here on purpose. Recall my earlier reference to “apparent” sharpness. The anti-aliasing effect will vary, based on color, lighting conditions, the presence (or not) of contrast in the image, etc. I almost always run some sharpening on an image at some point during my post-processing. If you “capture” your images only as JPEG (or, in a few cameras that have the capability, TIFF), the software in the camera will sharpen the image, and in more sophisticated DSLR cameras you have some ability to “adjust” that sharpening. In most cases, I recommend you capture raw images. They will not be sharpened until post-processing, because it takes raw conversion software to do that. There are a couple of “high-end” cameras out there (Nikon, Sony and Canon have them for sure – I would guess other major manufacturers do too) that do not have the anti-aliasing filter. That seems to be an important item, especially for landscape shooters, though it has its own set of problems. But the beauty is that in post-processing, you can generally fix all these issues.

All this stuff seems like a lot of work … but the good news is that all this complicated stuff has already been done for you!

Every photo software program has some manner of sharpening capability. Some are better than others. Many (Photoshop’s sharpening tools come to mind) are complex, difficult to learn, and require a lot of trial and error. Most authors and power users recommend sharpening in steps. Generally, they suggest a “pre-sharpen” designed to counteract the effect of the sensor’s anti-aliasing filter; case-by-case “targeted sharpening” on specific parts of an image (euphemistically called “creative” sharpening); and finally, a sharpening step designed for the type of output (generally print or monitor) and size of image being rendered. The orange lily below is an example of an unsharpened raw conversion on top, a “pre-sharpened” image in the middle, and an image fully sharpened for monitor on the bottom. All 3 were rendered from raw using Photoshop’s Adobe Camera Raw (ACR) engine. I set the sharpening settings to no sharpening in the ACR conversion (more on that later). Both stages of sharpening were pushed to their “100%” adjustment. It may be difficult to see the difference between the first two images. Part of this will be variances between monitors, and part of it that “perception” thing I mentioned at the beginning. I am basically trying to illustrate the process here. The final image, for example, might be considered by some to be oversharpened. Oversharpening can yield a “crunchy,” artificial look. Sharpening (like most aspects of photography) will often be a matter of taste. Of course, it is important that you get the desired sharpness when making the image with the camera. No software (yet) can “fix” or sharpen an image that is blurry because the shooter didn’t get it right in the camera. Here we are mainly talking about working with the limitations of digital presentation.

Raw; Presharpen and Final Sharpen Examples
Copyright Andy Richards 2019
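
For the curious, the three-pass idea can be sketched in a few lines of Python using Pillow’s built-in UnsharpMask filter – to be clear, this is not NIK or Photokit, and every filename and setting below is an illustrative placeholder, not a recommendation:

```python
# Three-pass sharpening sketch: (1) a gentle small-radius "pre-sharpen,"
# (2) a "creative" pass that in practice would be brushed onto selected
# areas via a mask, and (3) an output pass sized to the final image.
from PIL import Image, ImageFilter

img = Image.open("lily_raw_conversion.tif").convert("RGB")  # placeholder file

# 1. Capture/pre-sharpen: small radius, low strength, to offset AA softness
pre = img.filter(ImageFilter.UnsharpMask(radius=1, percent=60, threshold=2))

# 2. Creative sharpening would go here, applied through a mask to specific
#    parts of the image (e.g., the stamens); omitted in this sketch.

# 3. Output sharpen: resize for the web, then sharpen for that final size
web = pre.resize((1200, 800), Image.LANCZOS)
final = web.filter(ImageFilter.UnsharpMask(radius=1.2, percent=120, threshold=3))
final.save("lily_web.jpg", quality=90)
```

Note that the output pass comes after resizing – sharpening tuned to one size rarely survives a resize intact, which is exactly why the experts treat output sharpening as its own step.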

If this stuff really excites you (you may want to seek counseling 🙂 ), there are some good explanations out there on the internet, and some really good print references. I recommend “The Digital Negative,” by Jeff Schewe. If you are a real masochist, you might want to pick up and read “Real World Image Sharpening,” by the late Bruce Fraser and Jeff Schewe. In Photoshop, there was a “sharpen” algorithm that was, frankly, a pretty blunt instrument, and there was “Unsharp Mask” (a term that actually comes from the old “wet” darkroom). Years back, USM was the best tool, and it was – for me at least – a mind-boggling math problem, involving 3 settings known as radius, threshold and amount. I know, right? 🙂 This all seems like a lot of work. But it is worth the effort in many cases.
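
That “mind-boggling math problem” is actually compact enough to write out. Here is a sketch of unsharp masking from first principles – a simplified take on the classic algorithm, with placeholder settings: blur a copy, take the difference, and add “amount” of that difference back, skipping pixels where the difference falls below “threshold”:

```python
# Unsharp mask from scratch: radius controls the blur, amount controls how
# much of the detail difference is added back, and threshold skips
# low-contrast pixels (so noise isn't sharpened). Pillow's built-in
# ImageFilter.UnsharpMask is the production version of the same idea.
import numpy as np
from PIL import Image, ImageFilter

def unsharp_mask(img, radius=2.0, amount=1.0, threshold=3):
    orig = np.asarray(img).astype(np.float32)
    blurred = np.asarray(
        img.filter(ImageFilter.GaussianBlur(radius))).astype(np.float32)
    diff = orig - blurred
    # Add back the detail only where it exceeds the threshold
    sharpened = np.where(np.abs(diff) >= threshold, orig + amount * diff, orig)
    return Image.fromarray(np.clip(sharpened, 0, 255).astype(np.uint8))

result = unsharp_mask(Image.open("photo.jpg"), radius=2, amount=1.5)  # placeholder
result.save("sharpened.jpg")
```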

And there is good news. All this complicated stuff has already been done for you. Starting a number of versions back, a group comprised of pro users and some of the actual Photoshop team set up a collaboration known as Pixel Genius (Martin Evening, Seth Resnick, Andrew Rodney, Mike Skurski, Jeff Schewe, and Bruce Fraser). They created a Photoshop “plug-in” known as Photokit Sharpener. I used Photokit for a few years, as it incorporated the 3-step sharpening mentioned above and was much easier to use (if less versatile) than Photoshop’s native “unsharp mask” algorithm. They might have been the first ones to do it on a commercial scale; I don’t know. But over the years, a goodly number of competitors have emerged, and now there is a plethora of programs which are either Photoshop “plug-ins,” stand-alone, or incorporated into their own suite (OnOne, DxO and Topaz are examples). Recently, Pixel Genius announced that it would close, and their final gift to us was a free, downloadable, most current version of their product (I haven’t tried it yet, but just downloaded it and will for sure play with it).

By “consistent,” I do not mean use the same settings every time, by the way

It really doesn’t matter which one you use, in my opinion. Many, if not most, of them are good. They all work differently. The point is, you should probably obtain one of them and use it – preferably one you are comfortable with and that offers reasonable ease of use. Just search for “image-sharpening software” and you will find lots of information. When I made the move to NIK software (formerly NIK, then Google, and now owned by DxO), I eventually started to use its Raw Presharpener and Sharpener Pro modules. I did so mostly because I was using the other NIK modules and I liked the ease and familiarity I gained from its interface. The 3 examples above were made using NIK.

Another thing worth noting is that most raw conversion software today (at least the integrated programs like Photoshop, OnOne, etc.) generally has sharpness settings in the conversion process, which may very adequately handle the first stage of sharpening. Because I routinely use the NIK pre-sharpener, I have my raw converter sliders set to zero for sharpening. I think it is important, once again, to have a good idea what is going on under the hood. Having sharpening set up in the raw converter and then doing an additional “pre-sharpening” would not only be redundant, but could produce unintended results, defeating the purpose of sharpening the image. I am not certain which of these approaches is better (though my intuition tells me it may be the plug-in programs), but I think the important thing is to think about it, have it as part of your routine, and be consistent about it. By “consistent,” I do not mean use the same settings every time, by the way. To me, this is one of the risks of the raw conversion settings. They are often hidden from your view in the interface. I just downloaded the free version of Photokit and will try to do some comparisons for my own interest soon. One of those things to play around with during the winter (oh yeah, I forgot. I live in Florida now. We don’t have “winter” 🙂 ).

Overuse of any of these global adjustments will often defeat the purpose of all of this

There are other post-processing issues that will have an effect on apparent sharpness. One of the most important is the contrast adjustment most software offers. For years, “contrast” was basically the only adjustment available. It operates globally on the image and basically works on the blacks and whites. In order to use it effectively, one often had to do some complex masking of an image to achieve the intended result only on the parts of the image that needed it. Currently, there is another pretty sophisticated global adjustment. It has different names in different software, but Photoshop calls it “clarity.” This adjustment attempts (rather successfully, I think) to adjust contrast just in the mid-tones, which are generally not visually affected by the more global “contrast” adjustment (or at least you cannot see the change because the highs and lows are so dramatic). Even more recently, Photoshop and others have added another adjustment (called “dehaze” in Photoshop), which has the same effect of targeting just certain tones in an image. While not truly “sharpening,” these adjustments often make the image “appear” sharper to the eye.
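
For the technically curious, the mid-tone trick behind “clarity” can be roughed out in code. This is only a sketch of the general idea – not what Photoshop actually does under the hood, and the filename and numbers are placeholders: boost local contrast, but weight the boost by how close each pixel is to middle gray, so the deep shadows and bright highlights are largely spared:

```python
# "Clarity"-style mid-tone contrast sketch: local contrast is the difference
# from a heavily blurred copy (like USM with a huge radius); the effect is
# weighted toward mid-tones, tapering off in shadows and highlights.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("photo.jpg").convert("RGB")   # placeholder filename
px = np.asarray(img).astype(np.float32) / 255.0

blurred = np.asarray(
    img.filter(ImageFilter.GaussianBlur(25))).astype(np.float32) / 255.0
local = px - blurred                            # local contrast detail

# Weight: 1.0 at 50% gray, tapering to 0 at pure black or pure white
luma = px.mean(axis=2, keepdims=True)
weight = 1.0 - np.abs(luma - 0.5) * 2.0

clarity = 0.6                                   # strength, illustrative only
out = np.clip(px + clarity * weight * local, 0.0, 1.0)
Image.fromarray((out * 255).astype(np.uint8)).save("clarity.jpg")
```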

A final word of caution. Overuse of any of these global adjustments will often defeat the purpose of all of this. As we mentioned above, it is possible to “oversharpen” an image for its purpose, and it can result in an ugly or unrealistic look. Creatively, you might want to play with this. Note that I said “for its purpose.” When making inkjet prints on my Epson home printer, I often had to sharpen an image to the point that it looked overdone on-screen, but turned out a very sharp and pleasing print. Conversely, there are adjustments that do not really have anything to do with sharpening – directly (recall the comment that everything is interrelated). One of the best examples is the much overused “saturation” slider tool. I recently ranted about what I see as abusive overuse of “saturation” (especially – for some reason – with fall foliage images – sometimes by shooters who really should “know better”). To see the effect of this tool, open an image in your favorite image processing software and magnify it enough so you can at least begin to see the pixels. Then move the slider to its upper limit and back down again slowly. Observe what happens to the definition around the saturated colors. These days I only very rarely use the “saturation” tool on an image – partly for this very reason.
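
If you would rather run that experiment in code than with a slider, a crude version looks like this (a hypothetical sketch; the 2.5x boost is deliberately extreme, and the filename is a placeholder). Zoom into the saved result along high-contrast color edges and watch the definition fall apart:

```python
# Crude global saturation boost in HSV space, exaggerated on purpose to
# show how edge definition degrades around heavily saturated colors.
from PIL import Image

img = Image.open("foliage.jpg").convert("HSV")
h, s, v = img.split()
s = s.point(lambda x: min(255, int(x * 2.5)))   # push saturation way up
oversat = Image.merge("HSV", (h, s, v)).convert("RGB")
oversat.save("oversaturated.jpg")
```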

 

Next up:  In the final installment, I will cover compositional considerations of post-processing . . .