FROM THE FIELD TO THE DESKTOP: How “Process” affects Processing

The Common Road
Waitsfield, Vermont
Copyright Andy Richards 2008
All Rights Reserved

ESPECIALLY THIS time of year, I see lots of new landscape and travel images. The first part of the fall season is one of the most popular times for landscape photography. It is also a popular time for travel. For these purposes, “the fall season” really starts before the first day of fall. In the west and northwest, it can be from late August on. In other parts of the country, it is probably best measured from late September on. Fall travel begets lots of nice photographic imagery.

good process will enhance processing

AS I view some of these pictures, the “critic” in me invariably comes out. I think it is probably because I tend to also be critical of my own work, and as I see things that can be improved, I also start to see them in other people’s pictures. Before I go further, I want to be clear about something. The word “critical” has two very different meanings (at least in the Oxford online dictionary I used). One meaning is negative. That’s not the one I refer to when speaking of art and photography. The other meaning – the one I intend – is defined as: “expressing or involving an analysis of the merits and faults of a work of literature, music, or art.” I think that is positive, although I think even “faults” is a pretty strong negative word. I often think what we view as a “fault” is really just a matter of opinion. After all, it is art.

THIS WHOLE contemplation got me thinking about how the constructive criticism process (both external and self-administered) has affected my photography. I am thinking about the things I do to either anticipate or correct things, both in the field, and later in the “digital darkroom.” In other words: Process. We hear and read the phrase “post-processing” pretty often these days. On the off chance that some reader(s) here are not familiar with that term, it refers to the process of uploading an image and “working” on it with some kind of processing software. Mostly these days, these images will be generated by a digital camera (either stand-alone or built into a “smart phone”). I am old enough to remember a time when we were feeding our celluloid copies into a scanner and creating digital copies. We refer to it as “post” processing because we do it after we have made the image in our camera.

Quintessential Vermont Farm Scene – Barton, Vermont
Copyright Andy Richards 2010
All Rights Reserved

IN MY view, “processing” is different from “process.” A good process will enhance processing. And this process starts in the field and continues into the “digital darkroom.” Let’s talk about some of the things I am referring to:

Your most important piece of equipment is between your ears. It all starts with mental process. I am not sure when I came to this realization. We will always have a mental process, but it may not be conscious, and it may not always be as good as it could be. I had been traveling to Vermont, Northern Michigan, and West Virginia to photograph fall foliage for a number of years when the idea of mental preparation and process first came to my conscious attention. It was during some back and forth with a pro photographer on one of the foliage BB’s that I frequented. Back-channel, I asked him for some of his thoughts on good preparation for these kinds of trips. A little surprisingly to me, he never once spoke of equipment, location planning, lenses, etc. Instead he focused solely on the process of mental engagement. It starts before you get into the field, as you think about what you want to accomplish or portray when you do get there. Then it continues into the field, focusing on visualizing the end result, and looking at abstract things like shapes, form, perspective and, most importantly, light (ironic, in that I use the name: “LightCentric”). These things are all going to drive the more tangible “processes” like where you go, when you arrive, and what you use for equipment. Because, I guess, we can, we tend to spend a lot of time talking about the latter items. But it is a key to the process that we recognize they are all tangential to our mental approach.

This is a perfect example of an unnatural grey color cast in the fog, created by the grey cloud cover above. In spite of the bank of grey clouds, there was blue sky behind us, with warm early morning sunlight lighting the subject. It seems a little “cool” here, but it was easy to “warm up” in post – overall, this is an image that benefits greatly from color-correction in post

Burton Hill Road Farm
Barton, Vermont
Copyright Andy Richards 2010
All Rights Reserved

Color Presentation is perhaps the thing that most often catches my eye. In the spirit of the mental process discussion, I think it’s important here to keep in mind that we are talking about art, and interpretation. My view of realism in a landscape image can legitimately differ from yours. One of the things that will affect the process of realistic rendering of color in a digital camera is white balance. The white balance issue is one of the major advantages of (reasons for?) capturing your imagery in raw digital format. White balance can be fine-tuned after the fact in a good raw processing application. We want to strive to have our final images show the colors we saw in the field. But that is the overall look. Perhaps the thing I am most sensitive to is color cast. What I mean by color cast is elements within the image that are not being shown in their natural color. There is some “grey area” here (see what I did there?) 🙂 Sometimes the light falling on the subject element will create color casts. Sometimes we want that result and it is part of our image. But much of the time it doesn’t belong in the image. To my eye, it can be a really nice overall image, but there is just something that strikes me as “off.” More often than not as I view it more critically, it is a color cast that seems unnatural to me.

Copyright Andy Richards 2023
All Rights Reserved

We have all seen the cartoonish, grossly oversaturated images that are frequently posted on-line. They should be easy to spot for the experienced photographer. I posted a blog dedicated to this subject a year or so back. Except as an ignited gas, neon is not a color that occurs in nature. When I see red barns and buildings that look like they have been plugged in, or shades of red that are just not believable, I immediately know somebody oversaturated their image. This is always done, BTW, in the post-processing stage. You may not realize it, because the camera (or most often “smartphone”) is actually doing some post processing before it even shows you the image. When you do see this stuff, look carefully. A common by-product of oversaturation is a muddy look and loss of detail. There are 2 controls (usually in the form of the ubiquitous “slider”) in almost every processing software application that I would label “for experienced users only,” lock down, or hide (I wouldn’t eliminate them, because there are some good uses – even needs – for them). They are generally named or labeled “Saturation” and “Vibrance.” They are variations on the same concept. Most importantly, they are not what you think they are for!
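For the curious, the conceptual difference between the two sliders can be sketched in a few lines of Python. This is a simplified toy model of my own – not any vendor’s actual algorithm – but it captures the idea: Saturation pushes every pixel uniformly, while a Vibrance-style curve fades its boost out as a pixel approaches full saturation.

```python
import colorsys

def adjust_saturation(rgb, amount):
    """A 'Saturation'-style move: scale every pixel's saturation uniformly.
    rgb is an (r, g, b) tuple of floats 0-1; amount of 1.0 leaves the
    pixel unchanged, 1.5 is a 50% boost."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb(h, l, min(s * amount, 1.0))

def adjust_vibrance(rgb, amount):
    """A rough 'Vibrance'-style move: boost muted colors much more than
    colors that are already saturated (the boost fades out as s -> 1)."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    boost = 1.0 + (amount - 1.0) * (1.0 - s)
    return colorsys.hls_to_rgb(h, l, min(s * boost, 1.0))

muted = (0.55, 0.45, 0.45)   # a dull brick red (saturation 0.1)
vivid = (0.90, 0.10, 0.10)   # an already "plugged in" red (saturation 0.8)
```

Run a 1.5x adjustment through both functions and the vivid pixel clips to full, neon-barn saturation under the Saturation curve, while the Vibrance-style curve barely moves it – which is why, of the two, Vibrance is the (slightly) safer slider to hand a beginner.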


In the Field: The white balance, as noted above, is important here. If you shoot raw images, you don’t really need to worry about it in the field. If you shoot another format, then you need to be cognizant of white balance. Most modern cameras and smart phones do a pretty good job with their “automatic” white balance settings. But in times and places where the light may be very colorful or even unnatural, you have to think about white balance. When a scene contains a particularly strong or large element (particularly if it is a reflective surface of some nature), though, you have to watch for color casts. As you make them, try to internalize which shots are likely to render a color cast. Experience has taught me that there are certain scenes and elements that are almost certain to pick up and introduce color casts. One of the biggest ones is “whitewater.” I find that I have to color correct about 95% of any waterfalls, drops and pools, or other white water that I capture. Just the nature of the beast. They are often in shadow or partial shadow, and shadows often render blue (especially in digital capture). In spite of what our mind tells us, this kind of water is not blue. Another area I often see picking up color casts is asphalt. Most often, I see it rendering a blue color cast, or a magenta cast. Another element that can pick up casts is clouds in the sky. While there is not always anything you can directly do about it in the field, awareness – knowing you will likely be doing selective correction – will be important when you get back home and start the post-processing of images.

On the Desktop: My terminology here is highly influenced by my personal bias. I do all of my serious post-processing on a desktop computer, in my home, in a room that is a neutral color, using color balanced monitor(s). I recommend that you get some kind of color management measuring device for your monitor(s). I use a reasonably inexpensive SpyderX by Datacolor. If you don’t calibrate to a standard, it will be hard to identify when and whether your images are “off,” or whether the viewer’s monitor isn’t color managed. I carry a laptop when I am out on travel. While I have the same post-processing setup on the laptop, I only use it to see on screen how things went during the day, and maybe to do a “quick and dirty” post so I can put an image up on social media to show folks what conditions may be like. But those are never my final images, and I start from scratch again when I get into my desktop environment. However you do it, this is the time when you have to really look critically at your images. The human eye/mind combination is an amazing thing. It sees an image and mentally corrects things like color casts. This is especially true when its memory banks hold the original image (which it should since you were there). But this can make us “color blind” to the offending cast. Someone else looking critically though, will notice it. Not everyone will, but the critical observer will. This is where the “in-the-field” preparation comes in. There are several ways to deal with these color casts. My “go-to” is a third-party software: NIK (originally NIK Software, acquired by Google, and then sold off later to the current purveyor, DxO). My version is very old, but still works. DxO is an excellent company and I have no reason to think it is anything other than just as good or better. It is a bundle of different applications and, at a reasonable price point, I think a really good way to go. But it is certainly not necessary, and I have no connection to them.
The basic approach here is to make a selection of the area(s) you want to color-correct. NIK just does that in its own rather sophisticated way, BTW. There are some other third-party applications that will also do this. If you are going to go it on your own, having selection skills will be important here. Knowing when and how to feather (and sometimes blend) the selection will go a long way toward making your correction look good – and even realistic itself. And by all means, do any of this work on a separate layer! This allows you another fine-tuning step, as you can dial the opacity of the layer in or out and use other blending modes. Once the selection is made, the primary approach here is going to be to reduce saturation (remember when I said there were good reasons for the saturation slider? This is one of them) in that area. For white water, I find reducing saturation by as much as 85 – 95 percent often yields the best result. Sometimes this will require a couple other “moves,” including brightness (sometimes up, sometimes down) ever so slightly, and contrast (again, usually only very small amounts). Changes to clouds will need a slight increase in brightness, in my experience. Asphalt is a bit of a different animal. You really have to watch the context with these adjustments. Are there shadows present that are part of what you are trying to depict? If so, decreasing saturation may affect the shadows and you may need to adjust brightness. If the image has an overall warm tone that you are striving for, you may need to bump up the “warmth” in the selection (one of the reasons I like NIK so much is that it has sliders dedicated to all those adjustments). I think when you make some of these moves, and look at the before/after, you will be surprised (and hopefully pleased) at the result.
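For those who like to see the mechanics, the core of a selective, feathered desaturation can be sketched in plain Python. This is a toy model of my own – real editors do this on layers, with far more sophistication – but it shows how a fractional mask value along a feathered edge tapers the correction:

```python
import colorsys

def desaturate_region(pixels, mask, amount=0.9):
    """Reduce saturation inside a feathered selection.
    pixels: 2-D list of (r, g, b) tuples, values 0-1.
    mask:   2-D list of floats 0-1 (1 = fully selected; fractional
            values along the edge act like feathering).
    amount: fraction of saturation to remove where mask == 1
            (0.85-0.95 works well for whitewater, per the text)."""
    out = []
    for row, mrow in zip(pixels, mask):
        new_row = []
        for (r, g, b), m in zip(row, mrow):
            h, l, s = colorsys.rgb_to_hls(r, g, b)
            s *= 1.0 - amount * m   # full cut inside, tapering at edges
            new_row.append(colorsys.hls_to_rgb(h, l, s))
        out.append(new_row)
    return out
```

Feed it a blue-cast “whitewater” pixel with a mask value of 1.0 and the cast all but disappears, while pixels outside the selection come back untouched – exactly the behavior you want from a feathered, layer-based correction.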

Composition Example – Placement in Rule of Thirds
Copyright Andy Richards 2015
All Rights Reserved

Composition is another area that should be front and center in your “process.” A couple thoughts come to mind here. There are “rules of composition.” “Rules” are made to be broken. Sometimes. I think we all should have a good handle on the “rules of composition.” It stands to reason that if we are familiar with the rules, then we know and understand when we break them. During the first many years of my own photography, I didn’t give composition much conscious thought. Somewhere along the way, somebody pointed out the importance of level horizons (still today the single worst mistake I see in image after image on-line). Then I became aware of the rule of thirds, and even began installing etched screens in my old SLR camera bodies. Of course, with digital technology came the ability to turn these things on and off. I pretty much always have some kind of a grid up on my view screen. But I had to wonder, at some point, what all this meant artistically, and about the same time I had my conversation with the pro on fall imagery, I bought a very rudimentary book on art, drawing, and composition. That taught me the basics behind the rule of thirds, the “golden mean,” s-curves, and things like balance. I highly recommend any photographer who is serious about her art find a similar instructional piece. All of this helps with the in-the-field process sometimes referred to by photographers as “seeing.” In my example above, I illustrate the classic use of the Rule of Thirds. When you look at the following example, although it may be subtle, you can see what I refer to as “bullseye” composition. In the (better in my view) first image, I have placed the middle of the main subject right at the lower-left Rule of Thirds intersection. I think it gives more balance to the image. It leaves “space” for them to look/walk into. In this case, the secondary couple also has “space” to walk into the image without walking out of the frame.
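The “power point” placement is easy to reason about numerically. Here is a small, hypothetical helper of my own (not from any software) that finds the four Rule of Thirds intersections for a frame and reports which one a subject sits nearest – a quick way to check whether a composition reads as “thirds” or “bullseye”:

```python
def thirds_points(width, height):
    """The four 'power point' intersections of the Rule of Thirds grid,
    as (x, y) pixel coordinates (origin at top-left)."""
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(x, y) for y in ys for x in xs]

def nearest_thirds_point(width, height, subject_xy):
    """Return the power point closest to the subject's position."""
    sx, sy = subject_xy
    return min(thirds_points(width, height),
               key=lambda p: (p[0] - sx) ** 2 + (p[1] - sy) ** 2)
```

A subject sitting dead center is equidistant from all four points – the “bullseye” – while a subject near one of the returned coordinates is on a thirds intersection.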

In the Field: I made reference above to a couple of the tools that will assist in the field. First and foremost, some kind of leveling device. Not the little bubble level built into many tripod heads. I am sure they are of some help getting the tripod level. But I don’t pay any attention to them (my current gear doesn’t even have one). Just because the tripod is level, doesn’t mean your composition will automatically be level. And there are times when you simply cannot set the tripod up level. The primary thing you want to be level is your image horizon line. As an aside, it is not always easy to identify what the horizon line in an image is. Usually, it is. But not always. Remember that thing I said above about the eye-brain mechanism? It holds forth here too. My good friend Al Utzig underscored that for me years back. We were in the field shooting together and he had one of those little yellow bubble levels that attach to the camera body hot-shoe. I didn’t. When he mentioned the utility of it, I opined that between my “good vision” and the lines on my screen grid, I didn’t need the level. He challenged that, so I did a bit of empirical testing. Guess who was right? 🙂 The level doesn’t lie. I bought one and have never traveled or shot without it since. These days most digital cameras (and better smartphones) have a built-in digital level. They will also get the job done. I always have mine showing when I shoot handheld shots. On the tripod, I prefer to use the mechanical bubble on my hotshoe. It doesn’t really matter. Just use one!

“Bullseye” placement of the subject often makes an image less interesting and even less dynamic. I left space in the original image (it wasn’t “bullseye,” but it wasn’t the placement I thought looked best when processing either)
Copyright Andy Richards 2015
All Rights Reserved

The other area that should really be a consideration is the possibility of cropping the final image. These days, with plenty of pixels available, cropping is much more realistically possible than it once was. When I post-process (more below) an image today, I don’t always slavishly adhere to the native image dimensions, or any “standard” (like 8 x 10, for example). I sometimes like to crop based just on what “looks good” to me. When in the field, I always try to shoot at least some frames where I have left myself room to crop the image later. Digital storage media is cheap. We have long been taught to get it right in the camera, and I still make that my primary goal. But then I make another frame (or two) that gives me some “space” around the “ideal” composition. Another thing that I always try to look at in the field is orientation. I think our natural tendencies are often driven by our “gear.” Cameras are almost always designed to hold them in a manner that yields a “landscape” oriented image. Phones tend to be the opposite. So most of us shoot the way our device guides us. That’s too bad. Years back, I made at least looking at an image in both landscape and portrait orientation part of my in-the-field process. Even to the experienced eye, we often come at an image with a preconceived idea of which orientation it “fits.” Of course, having more pixels helps. I get better quality results from crops from my 46mp Sony “full-frame” than I do from my 20mp Olympus m4/3 sensor. All things to take into consideration while shooting.

On the Desktop: Didn’t believe me about the level? There is good news. Most post-processing software has some pretty good tools for fixing unlevel images. That should be one of the first compositional things you look at. Even on-screen, your eyes may still deceive you. If your software has the capability of superimposing gridlines, or even better, “dragging” a guide-line around the image, do that. In Photoshop, I like to pull guides from the top and sides when looking at level and perspective. Sometimes it is a very small amount off-level. But believe me, a third party who is sensitive to the issue will see it immediately. One thing to keep in mind is that (in spite of the advent of AI) when you do a post-processing “level,” you are cropping and you are discarding at least some pixels. Better to try to get that one right in the field.
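How many pixels does a post-processing “level” actually cost you? The standard inscribed-rectangle geometry gives a good estimate. Here is a sketch of that math (well-known geometry, adapted by me; it assumes the crop stays axis-aligned and maximizes area rather than preserving the original aspect ratio):

```python
import math

def max_crop_after_level(w, h, angle_deg):
    """Largest axis-aligned rectangle that fits inside a w x h frame
    after it has been rotated by angle_deg to level the horizon.
    Returns (crop_w, crop_h); everything outside is discarded."""
    a = abs(math.radians(angle_deg)) % math.pi
    if a > math.pi / 2:
        a = math.pi - a
    sin_a, cos_a = math.sin(a), math.cos(a)
    side_long, side_short = max(w, h), min(w, h)
    if side_short <= 2.0 * sin_a * cos_a * side_long or abs(sin_a - cos_a) < 1e-10:
        # large-angle case: the crop is limited by the short side
        x = 0.5 * side_short
        cw, ch = (x / sin_a, x / cos_a) if w >= h else (x / cos_a, x / sin_a)
    else:
        cos_2a = cos_a * cos_a - sin_a * sin_a
        cw = (w * cos_a - h * sin_a) / cos_2a
        ch = (h * cos_a - w * sin_a) / cos_2a
    return cw, ch
```

On a 6000 x 4000 frame, leveling a mere 2-degree tilt costs roughly 7% of your pixels; at 0 degrees nothing is lost. Which is the numerical argument for using that bubble level in the field.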

Cropping is where I have really changed my own approach. I have concluded that the vast majority of images being displayed today are on digital screens somewhere. As such, we don’t really need to think about “standard” print sizes, mat sizes, frame sizes, etc. We can really render images in the size and shape (though I must admit, I still stick to the rectangle for the most part) we want. My view today is that unless I have some limiting purpose for the image, I will render it in a size that looks “good” to me. Lately, many of my images have taken on a more “pano” look to them (landscape orientation with longer “long” sides). It just seems like they often balance the horizon and other elements better. But sometimes, I conclude after I am looking at the image on screen that I should have shot it in the opposite (landscape or portrait) orientation. If I wasn’t true to my process in the field, I may not have made a frame in that orientation. Very occasionally, I will actually crop to that orientation. Not the best practice. But sometimes possible. And just when I thought I had a firm new approach, along comes artificial intelligence (AI). In fairness, it didn’t just come along. But in its most recent iteration, it really has relevance to the subject. A number of years back, Photoshop introduced its “content-aware” technology. I have been using that for years (but at first it was to “clone” out unwanted elements, and/or move things around in the photo). Game-changing at the time, but just the beginning, it seems. Not long afterward, they added a feature to the crop tool that brought it around to this subject. They added “content-aware” cropping. By checking a box, it would do its best (which was pretty darn good) to retain the cropped area when you did a level adjustment, for example, by using its content-aware feature. I used it a lot and really liked it. It is still there, but accessing it is a bit less intuitive. All of these tools are useful for creating the final composition you want.
I left room to crop in the illustrative image of Florence, Italy above. On the desktop, I could see that it would benefit from moving my subject slightly left in terms of composition, and the space in the image allowed me to do that. I try to think about things like that as I am shooting.

Bogie Hill Farm
Barnet, Vermont
Not a panoramic, but cropped to a narrower aspect ratio than as shot in the camera
Copyright Andy Richards 2021
All Rights Reserved

Perspective is closely related to the composition and cropping discussion above. Whenever you get closer to your subject and try to include the entire subject in the image, you are going to run into perspective issues. This is primarily a wide-angle lens issue. In order to fit the entire subject into the frame, wide angle lenses often introduce serious curvature. More often than we would like, this results in elements (most often architectural) that are tipped or distorted. There is really very little you can do about this in the field, but there are some things to consider.

In The Field: The most important thing about this subject is awareness. Watch your composition carefully. Getting the camera and lens higher often helps, so if you can get into an elevated position, you will handle the distortion better than from the street/ground level. Also, better quality “rectilinear” wide angle lenses work hard to ameliorate this distortion. From a planning perspective, it is important to understand the tools you have available to you in post processing software, so you can plan your framing accordingly. I make considerable use of the perspective tools in Photoshop (and it is an area where I don’t think any other software can compare to Photoshop’s capability – one of the reasons I continue to use it in lieu of trying one or more of the other applications out there). Knowing I am going to correct perspective later when I am shooting the image is a critical piece. Much of the perspective correction done in any of the applications will result in important parts of the image moving to, and outside of, the frame. Understanding that, you can leave space in those important areas for later processing.

On The Desktop: These days every software has some kind of perspective correction. Unfortunately, in my opinion, most of them are just too rudimentary for the times. The only exception I know of is Photoshop (if there are others that have the capabilities Photoshop has, I would love to hear about them). I am not saying you shouldn’t make use of even the “rudimentary” tools though. In many cases, you should if that is the only tool(s) you have. The correction tools I am talking about are usually set up in the form of sliders. In Adobe Camera Raw (ACR) you will find these under the general adjustments section called “Geometry.” In Lightroom, it is designated as “Transform” in the Develop Module. Both programs use an essentially identical “engine” for these adjustments, so they work pretty similarly. I like the interface in Lightroom slightly better, as I think it is easier to see what is actually happening. Other programs may use slightly different names, but they all appear to work the same way. Since I primarily use ACR, I will describe its sliders, which are labeled: Vertical, Horizontal, Rotate, Aspect, Scale, and Offsets X and Y. As I mentioned above, a lot of wide-angle shots would benefit from some perspective correction. The Rotate adjustment is probably the most useful of all of these tools. Not strictly a perspective tool, it gives you the ability to fix those tilted horizons. For perspective correction, the Vertical adjustment just may be the most useful, as it can be used to correct the “tilted back” look that tall objects (think buildings) sometimes get when photographed from the ground. The Horizontal adjustment can help those wide, landscape-oriented shots that occasionally seem to be turned away from center. You will see that when you start to work with these adjustments, they will often “move” parts of the image outside of the frame and/or your desired composition.
The Offset adjustments let you move the (entire) image either vertically or horizontally. Scale makes the (entire) image larger or smaller. The purpose of these last adjustments is not very intuitive. For the most part, I don’t find them particularly useful, but presumably they are there to make corresponding adjustments to the vertical and horizontal moves – which are more strictly “perspective.” Be aware that the Horizontal and Vertical adjustments distort the image! While this is also true of the more sophisticated perspective tools in Photoshop, in my view, the distortions are less pronounced there. As you can see, these tools have some limitations. But do try working with them and do as much as you can with images that are perspective distorted.
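To see why corrected content “moves outside the frame,” here is a toy model of my own (emphatically not Adobe’s actual math) of what a Vertical-style correction does to pixel coordinates – it stretches rows progressively more toward the top of the frame so converging verticals become parallel:

```python
def vertical_keystone(x, y, w, h, amount):
    """Toy 'Vertical' perspective move. (x, y) in pixels, origin at
    top-left; w, h are frame dimensions. amount > 0 corrects the
    'tilted back' look of a building shot from the ground.
    Returns the corrected (x, y)."""
    scale = 1.0 + amount * (1.0 - y / h)   # 1 + amount at top, 1.0 at bottom
    cx = w / 2.0
    return cx + (x - cx) * scale, y

# Corners of a 6000 x 4000 frame after a 20% correction: the top corners
# land outside the frame, while the bottom corners stay put.
```

With a 20% correction, the top-left corner of a 6000-pixel-wide frame moves to a negative x coordinate – off the canvas – which is exactly why leaving extra space around important elements in the field pays off later.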

Tower Bridge – As Shot
London, England
Copyright Andy Richards 2019
All Rights Reserved

My reservations above are a primary reason why I still use Photoshop as my workhorse processing tool. Under the Edit drop-down in Photoshop’s top menu, there is a grouping that deals with perspective corrections in a much more sophisticated (and therefore flexible) manner. The primary tools there are in the “Transform” flyout, and include Scale, Rotate, Distort, Perspective, and Warp. For me, the “go-to” tool here is the Transform – Perspective tool. Keep in mind that you have to make these corrections on a separate layer (I make a duplicate of the background layer). I don’t do much with scale or rotate, preferring to deal with that in the cropping and then image size areas of the program. I do sometimes find the Warp command useful for fine-tuning verticals and horizontals. It takes some “playing” around, but after a while you get the “hang” of it. And it is well worth it in my opinion. One of my best illustrations of what can be done with some work and patience is the image here of London’s Tower Bridge. The original image shows the challenges of getting both towers and the full span into a photograph. Taken from a perspective almost directly underneath one end of the bridge, getting a shot without distorted perspective with a standard camera/lens is virtually impossible. The perspective corrected version of the same photograph below shows that with some work, you can achieve a pretty amazing result. There are a couple other much more sophisticated tools (Puppet Warp and Perspective Warp) for those who might want to venture into complex perspective corrections. I have played around a bit, but they are really beyond my needs (and at this point, capability). The Tower Bridge shot was made very hastily and without much thought to all of the above. If I were to shoot it again, I would have backed away a bit and left more room on the sides.
I would have liked to get the second tower on the right (visible in the original image) in the shot, and leaving more room would have allowed that.

Tower Bridge – Perspective Corrected in Photoshop
Copyright Andy Richards 2019
All Rights Reserved

Retouching is the final area I almost always need to address. The process runs from the removal of unwanted dust spots, to color-correction of skin tones, to removal of blemishes, bright spots in the image, etc. For years, Photoshop’s clone tool, and then later the healing brush and spot healing brush tools, were the mainstay. Now, the additional AI features make it even easier and more effective. I have personally been using the new “Remove” tool to my satisfaction a lot of the time. These applications have become increasingly more prevalent in most of the available processing software today and are even being built into smartphone software. While we do have to be cognizant of the “reality” line, this stuff is pretty fun for the photographer.

In The Field: As mentioned above, some of the smartphone software has built-in “retouching” capability. At this time, I am only aware that the newest iPhone iterations have tools to “remove” objects in photos. The primary industry leaders seem to be Apple’s iPhone (by far the most copies in the hands of the public), Samsung, and Google’s Pixel. The Pixel seems to be trying to distinguish itself as the most advanced for photography and may also have the remove feature. There may be some retouching capability in some cameras, too. But it is important to remember that all of those devices (phones included) do partial post-processing in-device, and it is all going to be rendered as JPEG images. As I have harped on many times here, we really want to do our in-device capture as raw images and use our own post-processing applications – where we have control of the process. So to me, retouching will never be something done in the field. But we can anticipate some retouching we may need to do. Again, this is where the AI capabilities in the applications have made a huge impact. If we know, for example, that we can remove an element in an image, we can shoot accordingly, and think about how that element is positioned and separated from other parts of the image. We can also look at parts of an image that we know can be worked on later (like how dark or light a part of the image is), or how well in-focus it is, and think about how we are going to handle that in post. For years, one of the most difficult things I encountered in my photography was the difficulty in capturing high contrast parts of an image. We used split neutral density filters, for example, to “tame” an overly bright sky with very dark areas in the shadows. I never had much luck with them personally, and always struggled – especially with transitions.
With digital capture, we know now that we can capture multiple images (on a tripod of course) with different exposures and/or focus points, and then very effectively blend them in post. So now, when I shoot these kinds of images, I always capture multiple exposures (correctly exposing the different parts of the image I want to be well-exposed in the end image) and anticipating the blending process in my post processing.
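The idea behind the blend can be sketched for a single pixel. This is a stripped-down, scalar version of Mertens-style exposure fusion (my own simplification – real tools align the frames and work on pyramids of whole images): each exposure gets a weight based on how “well exposed” it is, peaking at middle grey, and the fused value is the weighted average.

```python
import math

def blend_exposures(exposures):
    """Blend several exposures of the same (aligned!) pixel by weighting
    each value by how 'well exposed' it is -- a scalar sketch of
    Mertens-style exposure fusion, not any product's actual algorithm.
    exposures: list of luminance values 0-1 for the same pixel."""
    def weight(v, sigma=0.2):
        # Gaussian centered on middle grey: near-black and near-white
        # values contribute very little to the blend.
        return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))
    total = sum(weight(v) for v in exposures)
    return sum(weight(v) * v for v in exposures) / total

# A nearly blown-out sky pixel (0.98) and the same pixel from a darker
# frame (0.55): the fused value leans heavily on the well-exposed frame.
```

This is why bracketing works: the blend automatically favors whichever frame exposed each region well, which is exactly what the split neutral density filter was trying (and, for me, often failing) to do in one frame.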

Golden Gate Bridge
San Francisco, California
Copyright Andy Richards 2011
All Rights Reserved

On The Desktop: Once I am back in my “digital darkroom,” there is now a pretty powerful “arsenal” of retouching tools. I use Photoshop, so that is what I really know. I suspect there are only a very few things that Photoshop offers that aren’t also available in Lightroom (but I think they are significant enough that I don’t use Lightroom for editing my images). Other applications will certainly have a lot of these tools, though they may name them something else and they may work slightly differently. But they basically combine a basic copy/paste/clone function with some pretty advanced AI. I have numerous examples of Photoshop’s retouching capabilities, including removal of unwanted objects. The image of the Golden Gate Bridge above is one of my older images, made right around the time Adobe first introduced its “content aware” technology in the form of “Content Aware Replace” and “Content Aware Move.” We were shooting the bridge from the Marin Headlands above, with downtown San Francisco in the background. We were fortunate – for our purposes – to have a pretty clear day (though I would love to go back and shoot it in the dense fog someday). There are several vantage points, and I was in the process of moving from one to another as the container ship came into my field of consciousness. I scrambled as quickly as I could to get set up, but by the time I did get ready to make the shot, the ship had moved further under the bridge than I thought was ideal composition. No worries. When I got home, using the Content Aware Move tool, I just moved it back to where I wanted it to be. I don’t want to be too cavalier here. It isn’t quite as easy (in spite of the hype to the contrary) as circling it and dragging it. You do have to be aware of the surrounding pixels. Sometimes it takes more than one move. Sometimes, it will require some additional cloning for cleanup.
You also have to watch things like shadows thrown by the subject, and in this case, the water and wake (behind and off the bow). But it can be done. Pretty cool. Sometimes I will get the comment that what I am doing is “cheating.” If you say so. 🙂

I LEAVE for Vermont in just a few days for a short, but hopefully intense, photo trip, mostly in Vermont’s vaunted Northeast Kingdom. Following that, I will make my first venture into the Great Smoky Mountains National Park, in Tennessee. Having this blog fresh in my mind will, I hope, instill in me the correct process for my shooting and following up a few weeks later, with my processing of images. I hope it will stimulate you, too, to think about your “process.” Good fall shooting to everyone, wherever you might be!

[I have just 3 more blog posts to come from our Iceland, England, and Ireland Cruise, including a reprise of our Beatles Tour a couple years back. They are queued up and I will post them in the coming weeks. But in just 4 days, I will make yet another one of my periodic Fall Foliage trips to Vermont. It is that time of the year again. I know I am not alone in traveling, finding, and shooting fall images across the country – and perhaps even the world. This seems like a very timely topic. I will probably not post for a couple weeks now, as I will be out in the field. But stay tuned for the finale of my 2023 European adventures. As always, thanks for reading!]