For the most part, as a photographer for some 45 years I have made images with a “photorealistic” look. I freely and unabashedly admit that I “enhance” my shots using post-processing software. Lately, as I have gotten older (or perhaps bored?), I have begun doing more alternative presentation with my photography. That is certainly a reason to use digital software on my images. But it is not the reason. I am a firm believer that no matter how “natural” one thinks an image should look, nearly all digital images need to be “worked” in post-processing, including the “photorealistic” ones.
I know photographers, at all skill levels, who are very content with shooting images and presenting them as-shot. Many shoot in JPEG mode in the camera and never really give a lot of thought to post-processing their images. This was even more common before programs like Adobe Lightroom, OnOne, Topaz, DxO, and others began offering “user-friendly” software interfaces, with “preset” post-processing. Today there are hundreds of “presets” available, many of them free, and many more at a relatively low cost, all designed to create a “look” (or occasionally, to correct problems). I will take a whimsical look at some of these presets in a blog post coming soon. There may come a day (indeed some may say it has already arrived) when these presets will essentially eliminate the need for manual post-processing entirely. Given this, there is really no reason today not to do some post-work, even if it is entirely confined to these presets.
there is really no reason today not to do some post-work, even if it is entirely confined to presets
Some of us, largely because it is what we learned and are comfortable with, will no doubt remain “old school.” I am always looking at new software and have embraced some of it. But mostly, I still post-process in Adobe Photoshop. Adobe has a lot of competition these days, but when I jumped in, Photoshop was really the only option. I learned it and have stayed with it. If I were starting new, I probably wouldn’t, as the learning curve is steep and there are programs out there that may be more photographer-specific.
“Arty” effects and alternative presentation is certainly a reason to use digital software on my images. But it is not the reason
While you might be content with your photos right out of the camera, I hope as your skills and interests develop, you will consider the significant advantages of raw capture, and the ability to make your presentation better even with just some simple post-processing. As I wrote this, it got long (well, longer than usual 🙂 ) and I realized that perhaps it would be more palatable as a 3-part series. I hope you will read on, as the additional “parts” contain some good information. For starters, though, I will begin with general image appearance and – perhaps most importantly – color.
Digital photography is kind of like cooking (an analogy I am sure I “borrowed” from someone else years back, and continue to use, because I think it is apt). Many foods must be cooked to enjoy them at their best, and this means combining ingredients. It also means knowing how to cook things. A lot of the raw materials used in cooking just aren’t very appealing in their raw state. In fact, some of them are just downright unpalatable.
Raw is not an acronym
Perhaps appropriately, digitally captured images in their native format are referred to as “raw.” I have pointed out previously that “raw” means just that: raw, as in “uncooked.” Raw (unlike most other formats, such as JPEG, TIFF, etc.) is not an acronym. It is possible to cook anything from “scratch,” using only raw ingredients. It is also possible to purchase food in a pre-cooked, or partially prepared, stage. I am not suggesting that this cannot be done very well. Stouffer’s makes a pretty good lasagna, pre-done with only baking required. 🙂 But most will appreciate a meal fully cooked from scratch by a skilled chef. In most cases, I suspect, there is a combination of ingredients, some of them acquired in various stages of pre-preparation (perhaps analogous to the “presets” mentioned earlier).
Most of the “new” photographers I have the occasion to counsel shoot the same general subjects that I do. I do very little sports photography, and no photo-journalism. I strongly (and perhaps pedantically) recommend to these new photographers that they shoot in raw format (assuming their gear supports it). There are certainly those who are happy to accept “fast food” cooking (the camera’s built-in conversion to JPEG or TIFF), and who know enough about the cook, and how to direct the cook, to obtain a result they are satisfied with. And there are certainly some cases where that is just expedient. Sports shooters and photojournalists, more often than not, just don’t have time for a “gourmet” meal. But I believe most of us would benefit from getting our ingredients “raw,” and then taking some time to appreciate the aromas of fine cooking. In order to obtain a palatable final image from a raw file, it is going to have to be “cooked.”
Here, then, are some of the most important reasons I do my own “cooking:”
Two different photographers can stand side by side and see the scene very differently
One of the primary results largely affected by the “cooking” process is color. There are many things that affect color, including the medium of capture (whether digital or film), lighting conditions, and the perception of the viewer. It is common for certain colors to be rendered by the default raw converter as the “average viewer” perceives that color. The complication, of course, is how we define “average viewer.” Two different photographers can stand side by side in front of a scene and see it very differently – including the perception of color. And, while there are certainly colorimetric “standards” for measurement of color, those same two photographers will still perceive those colors differently.
More importantly, it is common that the color displayed is not what the photographer wants to portray. Staying with the cooking analogy, some people like a little more spice, others something sweeter. And that is the benefit of post-processing a raw image file.
As I have learned more and more about digital post-processing, I have discovered some primary areas where this happens. Things that are “supposed” to be white are a great example of this phenomenon. In 2010, I made what may be my favorite Vermont fall foliage landscape image ever. Conditions were right. There was a “thread” on the “Scenes of Vermont” foliage board where I posted the image, which remains today. I probably should go back and replace it with my “corrected” image. A couple years later, while working on my LightCentricPhotograph website, it jumped out at me. Clouds, fog, and frothy water, in most cases, are perceived as white. The fog in this image had a very distinct purple/blue/gray hue. While I think the color is “possible,” it is not the image I saw that clear, cold fall morning. I think the color-corrected image is much more realistic and much more pleasing. In another example, I had the perhaps misfortune of coming to Babcock State Park on a particularly rainy, cloudy weekend. Gray clouds always introduce a color cast, and you can see the gray cast in the original image. What is harder to see is the gray-blue cast in the water. The second image incorporates a warming filter (in Photoshop). As you can see, while it improves the appearance of the rest of the scene a bit, the water still has a cast. Water, to our eye, should be “white.” The third image is corrected using Nik Viveza 2, by reducing the saturation of color in the water, with a bit of added contrast and brightness. I think the water looks more like it is “supposed” to look.
I put quotation marks around “supposed” on purpose. In art photography, there is no “correct” color. But there are certainly colors that are generally perceived by most of us within a range under the particular conditions. The Vermont image was made in the early morning sun, on what turned out to be a perfectly cloudless, sunny day. The original image from the camera might lead the viewer to think the opposite – that this was a cloudy, dark day. Color correction changes this perception entirely, in my opinion.
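For the technically curious, the idea behind “removing a cast” can be sketched in a few lines of code. What follows is a generic “gray world” white balance, a common automatic technique; it is not the actual math used by Photoshop or Nik Viveza, and the synthetic “foggy” patch is purely illustrative:

```python
import numpy as np

def gray_world_balance(img):
    """Neutralize an overall color cast by scaling each channel so its
    mean matches the overall mean (the classic 'gray world' assumption)."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)   # mean R, G, B
    target = channel_means.mean()                     # overall brightness
    gains = target / channel_means                    # per-channel correction
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# A synthetic "foggy" patch with a blue cast: R=180, G=190, B=220
foggy = np.full((4, 4, 3), (180, 190, 220), dtype=np.uint8)
balanced = gray_world_balance(foggy)
```

After balancing, the blue-heavy patch comes out neutral (all three channels roughly equal), which is essentially what “making the fog white” amounts to. Real tools let you steer this by hand rather than trusting the gray-world assumption, which fails on scenes that are legitimately dominated by one color.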
It is often the case that the color displayed is not what the photographer wants to portray
Exposure and Appearance:
Because I record images as raw files, there are usually some “darkroom” type adjustments that need to be made. Understand that post-processing software is not designed to “fix” your mistakes. Getting things like sharp focus, exposure, and composition right in the camera is still required. But we have always found ways to improve aspects of our photographs, beginning in the darkroom. Today, digital processing software perhaps gives us more ability to be creative and improve images than ever before. Beginning in my raw conversion program, I first look at the image “globally,” checking for things like color, exposure, and contrast. If any adjustment needs to be made, I do that first. But remember Newton’s Law here (or at least a modified version of it). When you move a slider, it will affect the adjustments other sliders make.
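To see why sliders interact, here is a toy numeric sketch (my own simplification, not what any real raw converter actually does): the same contrast move lands differently once an exposure change has shifted the tones it operates on.

```python
import numpy as np

def exposure(img, stops):
    """Toy exposure slider: multiply by 2**stops (values in 0..1)."""
    return np.clip(img * (2.0 ** stops), 0.0, 1.0)

def contrast(img, amount):
    """Toy contrast slider: expand values away from middle gray (0.5)."""
    return np.clip(0.5 + (img - 0.5) * (1.0 + amount), 0.0, 1.0)

tone = np.array([0.25, 0.5, 0.75])          # shadow, midtone, highlight

# The identical +0.4 contrast move produces different results depending
# on whether exposure has already moved the tones:
a = contrast(tone, 0.4)                     # contrast alone
b = contrast(exposure(tone, 0.5), 0.4)      # +1/2 stop first, then contrast
```

In this sketch, `a` and `b` differ at every tone, even though the contrast “slider” was set identically both times. That is the modified Newton’s Law in action, and why it pays to settle global adjustments in a sensible order before fine-tuning.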
One of the things I love about my current setup is the ability to make more targeted adjustments. Most software today has powerful masking tools, both incorporated and with the ability to more or less manually mask. This means adjusting can be done with an adjustment (like Photoshop’s “clarity” slider) that is discriminate in its action, adjusting, for example, just certain tones in an image. It also means a more specifically targeted area can be masked individually and adjusted. Some of this has been available in software for some years now. Other tools are newly developed all the time.
Above, I noted that sometimes a photographer wants an image to look a certain way. The beauty of non-reportage or scientific photography is there is no one correct presentation of an image. Note the substantial difference in tone and color in the lily image above, from the sharpening example images earlier. Post-processing gives you the ability to creatively work with images – and to experiment – especially with color.
For potentially bored readers, I have some good news. I just returned from another European trip in October, which means some new images again, rather than my historical stroll down memory lane. I am post-processing images right now. But first, another reminiscent post:
Kodachrome, was simply the standard by which any other choice was measured. And until the 1980’s they generally just didn’t measure up.
My “evolution” series got me thinking a bit about the medium. Those who have been shooting only during the past 20 years may be vaguely aware of an old cellulose material called film. When I jumped in, film was all we had, and the “pickins” were slim. If you wanted to shoot color slides (the medium of choice, it seems, for serious “nature” photographers), you mainly had Kodak. There were competitors, but in the early years, Kodak dominated the film world, for a number of reasons. Most shops and retail stores stocked Kodak products.
In the Beginning, God created the Heavens and the Earth ….. and Kodachrome
Perhaps the most important reason was that Kodachrome, was simply the standard by which any other choice was measured. And until the 1980’s the others generally just didn’t measure up.
Prior to about 1936, color photography was not prevalent outside a limited group of professionals. Color wasn’t really that great (though it was relative, I suppose). Early results (including some early color slide films) are reminiscent of the early “colorized” movies we saw. We all knew it was black and white with some color added. In my mind, this was true until Kodachrome became the standard.
In 1936, a couple of musician-turned-scientists were hired by Eastman Kodak to complete their experimental process. As I did my research, I was interested to learn that there actually was another “Kodachrome,” which was a 2-color process, developed by a Kodak engineer in 1913. In 1936, Kodak introduced the 3-layer process which became the vaunted Kodachrome. Called a non-substantive film (an odd name in my view – but addressing the lack of dye or colorant “substances” in the film emulsion itself), the Kodachrome process was complex. It was essentially a B&W film, in which color dyes were added to the 3 different layers during the development process.
This meant that a specialized processing setup was necessary, and until 1954, Kodak successfully maintained a monopoly on this process (known as K-14), by selling Kodachrome only with pre-paid processing by Kodak as part of the deal. In 1954, the United States challenged this practice as an anti-trust violation, and an agreement was entered into, among other things, ending this practice (and of course, allowing competitors to acquire the accoutrements to develop Kodachrome).
Originally, Kodachrome was released at ISO 10. A 20-exposure cassette cost $3.50. That, for those interested, would have been about $65.00 in 2019 dollars!
In 1936, there was no ISO (or ASA, as it was originally known). During the World War, the military wanted a single standard to increase their efficiency. Prior to this time, there were several standards (and thus, several different marketed light meters, all handheld in those days). The American Standards Association created a “standard” for light sensitivity measurement, which then became known as the film’s “ASA,” or ASA rating. In 1987, film manufacturers worldwide shifted to an international standard published by the International Organization for Standardization (which is numerically identical to the old ASA standard). So we now refer to light sensitivity measure – on all media – as “ISO.”
There were non-believers …
That same year (1936), German film and camera manufacturer AGFA introduced Agfacolor. While very similar to Kodachrome, including its three-layer emulsion, AGFA engineers embedded the color dyes into the film emulsion, making the development process less complex. I shot maybe one or two rolls of Agfacolor. Didn’t care for it. It looked, as I noted above, like lightly colorized black and white. It did seem to make a big hit in the motion picture industry, however, and was widely used in film-making for a number of years.
The Fujifilm company was established in Japan in 1934. I was not able to find much about early film offered by Fuji. Of course, they catapulted into top status with the release of Velvia, years later.
Kodachrome II was introduced in 1961, with an ASA/ISO rating of 25. In 1962, they released a 64 ASA version (the two later simply became known as K-25 and K-64). In 2007, K-25 production was discontinued. K-64 production followed suit in 2009. One source noted that in 2009, sales of Kodachrome made up 1% of Kodak sales revenue. Kodak had essentially ceased processing Kodachrome themselves by 2006, and by 2010, the only Kodak-certified facility remaining was Dwayne’s Photo, in Parsons, Kansas. Later that year, even they ceased Kodachrome processing. Which led me to wonder: what if I had any rolls of undeveloped Kodachrome? Some “Google” research will reveal that there are processors out there who claim to be able to process it. But I checked my freezer. No film of any kind in there. Phew! 🙂
Nothing lasts forever …
In addition to its complexity and considerable expense, there were other Kodachrome drawbacks. Transparencies were designed, of course, to be projected with a relatively strong light (anybody else remember those “travelogue” slide shows that were prevalent in the 1960s and 1970s?). The medium was, consequently, relatively high contrast with lots of shadows. This made it particularly touchy to produce photographic prints from. And, we have learned in later years, the process of scanning and converting Kodachrome to digital images is often challenged by color casts which need to be addressed in the scanning process.
Stemming partly from photographers’ (particularly consumers’) demand for cheaper and more convenient products, and partly from the 1954 antitrust decree directing Kodak to endeavor to release a newer, more consumer-friendly film then in development, Ektachrome was introduced in 1955, with a new “E-process” in which the dyes were embedded in the film emulsion. Originally ASA 32, a 160 version was introduced in 1959, and 64 and 100 ASA versions in 1977. I wasn’t even shooting yet! Two years after I got started, in 1979, Ektachrome 400 was released.
Ektachrome had some advantages. It was cheaper than Kodachrome and cheaper and easier to process. You could have it processed locally. If you wanted to make the investment in color processing equipment (essentially, some relatively affordable tanks and chemicals), you could process it yourself.
I shot very little of it. This was probably partly due to the prejudice I acquired early on, from my shooting inspiration. But there was also no doubt that Kodachrome was still the professional-preferred medium for most. Ektachrome also had a known blue color cast, and I found it cool, and a bit less saturated than my personal taste. So, throughout the 1970s and 1980s, I shot Kodachrome.
When I came back to serious shooting in the early 1990’s, the industry had changed. Fuji introduced its Velvia 50 in 1990. Its characteristic was a very colorful, saturated, and contrasty profile. It took aim at Kodachrome and punched it in the face. It quickly became the slide film of choice for nature photographers – especially for landscape and flowers. And it used the E-6 process (by this time virtually every emulsion used the E-6 process, except for Kodachrome).
Again, while there were others, there really weren’t 🙂 . Fuji and Kodak went head to head. Fuji released Velvia. Kodak parried with Lumiere 100 (a neutral-balanced Ektachrome) and Lumiere 100X (a warm-saturated Ektachrome). Fuji added Velvia 100. Lumiere was short-lived and said by some to have some inconsistency in color from roll to roll. Kodak replaced it with E100S (saturated), E100SW (warm saturated) and E100VS (very saturated – Kodak’s answer to Velvia).
Fuji, in 2003, in response to criticism that Velvia was just too colorful (perhaps unrealistic to some), introduced Provia, in 100, 200, 400, and 1600 ISO versions, and Provia F (ultra-fine grain).
These were all so-called “professional” films. They tended to be more expensive. Whether they were that much better is probably a personal judgment. They probably had better quality control. I remember going to my photo shop in my community to buy these films (they weren’t generally available in the big box and drugstores), and they were generally stored in refrigerated conditions.
To cater to consumers, both companies released (almost simultaneously) “consumer” versions of the above film varieties. Kodak’s Ektachrome became Elite Chrome, Elite Chrome II, and Elite Chrome Extra Color.
Fuji’s consumer version of Provia was Sensia, Sensia II, and Sensia III, in various ISO ratings. I am not aware that they ever marketed a consumer version of Velvia.
Interesting stuff for some of us, and by the mid-2000’s, essentially irrelevant to most of us. 🙂
Before I did the research for this piece, I spent a few hours going through my archives to find examples of some of my images made with all of the above media. The problem is that it is truly impossible to make comparisons here. This is partly because, in order to do this on a blog, it became necessary at some point to convert all the media to one single medium: digital. So this may not have been a very useful exercise – but it was fun doing it. Presented as digital media, I can see some nuances, but not any huge differences (of course, post-processing software has “recipes” to “recreate” film “looks” in digital post-processing these days. I have done very little of that, except for B&W, and cannot really say how accurate they are). I did very little post-processing of the “film” images; just a bit of sharpening, mainly. I would be interested if you can see any difference.
For me, digital processing made everything possible; digital capture made it much more convenient
And there shall come a Rapture …..
Digital shifted the focus (see what I did there?) from all of the considerations of film, down to one thing: the digital sensor. And it is all about the quality, sharpness, and resolving capability of those tiny little electronic chips. We moved from cost and technical ability to manufacture affordable digital sensors, to “size” matters, and then back again these days to the fact that maybe it doesn’t even matter so much any more.
Nikon D100 (2002)
6 megapixel – “APS”
The Purple Coneflower is one of my first flower images made with direct digital capture. As noted above, it is difficult to make useful comparisons with film. First, doesn’t scanning a film image convert it to a “digital” image? Then, once we get into the post-processing world, everything we once knew kind of goes out the window. We can post-process a scanned film image in much the same ways we can post-process a digital capture. It may be possible to capture “cleaner” images directly, but we still have to deal with “digital” grain (noise). Color rendering becomes pretty much what you want it to be. One interpretation of the coneflower, for example, is that it has a “color cast.” This is purposeful on my part, because I like the warm, saturated color in many of my more colorful “nature” images. Of course, you can also inadvertently capture color casts, but if you shoot in the raw format, you can almost always correct or adjust them, as can be seen from the white daylily image made with the Nikon D200.
I had to laugh as I reviewed digital images in my Lightroom Catalog. I apparently have an affinity for lilies, probably because they are an easy, plentiful, and colorful subject (emphasis perhaps on “easy” 🙂 ). In any event, I have almost 400 lily images. The closest second is around 25.
We moved from cost and technical ability to manufacture affordable digital sensors, to “size” matters, and then back again these days to the fact that maybe it doesn’t even matter so much any more
When I shifted to digital, I was satisfied with what was available, but not completely happy that the sensors were still small. Size, with sensors, has at least three dimensions: the actual physical sensor size, the pixel depth (the number of pixels on that space), and the actual physical size of each individual pixel. Obviously, they are interrelated. And in the beginning, this was a pretty big deal. Larger sensors and larger pixels could handle capture with less noise, at higher ISO levels, and with more detail. So, almost from the beginning, there were “pixel wars” between the purveyors of digital cameras. But also in the beginning, the successful manufacture, and hence availability, of larger sensors was prohibitively expensive. Of course, the sensor size itself also affected the optics in a big way. The 35mm SLR camera had become the sort of “standard” by which most of this stuff was measured. But the affordable sensors at first were the so-called “APS” sized sensors, which are significantly smaller than the 35mm film rectangle we were used to, but as you can see by the gold rectangle below, much larger than what we first had with Point & Shoot cameras.
APS sensors meant that the lenses made for the 35mm perspective did not work the same way. There were pros and cons (covered ad nauseam by others elsewhere). Because of the perceived combined “advantage” of higher-quality capture and regaining the use, especially, of their wide angle lenses, many 35mm users almost immediately began to call for a so-called “full frame” sensor. I have always found this kind of illogical. What would a medium format, 4 x 5 inch, or full 8 x 10″ view camera user call a sensor made to their size? :-). But “full frame” caught on. I eventually jumped on that train, believing “full frame” capture was necessary for me to achieve the image quality I desired. I want to emphasize that there is certainly nothing negative about owning the larger sensor. There is little doubt that you can coax more out of it than the smaller sensors. But for me, it may have been the purchase of a dump truck when a small pickup (or even a wheelbarrow) was sufficient. There are a couple factors – all empirical for me – that bring me to this conclusion.
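For the curious, the “crop factor” arithmetic behind why 35mm lenses behave differently on an APS sensor is simple: compare sensor diagonals. A quick sketch (the sensor dimensions are my approximations; exact “APS-C” dimensions vary a little by manufacturer):

```python
import math

# Approximate sensor dimensions in mm.
FULL_FRAME = (36.0, 24.0)   # "full frame" = the 35mm film rectangle
APS_C      = (23.6, 15.7)   # a typical APS-C sensor

def crop_factor(sensor, reference=FULL_FRAME):
    """Ratio of the reference diagonal to the sensor diagonal."""
    return math.hypot(*reference) / math.hypot(*sensor)

factor = crop_factor(APS_C)     # roughly 1.5
equivalent = 24 * factor        # a 24mm lens frames like a ~36mm lens
```

That factor of roughly 1.5 is why a 24mm wide angle suddenly framed like a modest 36mm-ish lens on early APS digital bodies, and why 35mm shooters who missed their wide angles clamored for “full frame.”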
First, I had some personal experience. While I am not sure this is any longer true, at the time, to me the “holy grail” was the photographic print, on traditional photographic paper. I have owned a couple Epson printers that were capable of making inkjet prints that rivaled anything I had ever received from any lab. And, I was able to do my own “darkroom” adjustments. For economic reasons, the largest I generally made were 13 x 19″ prints, and that became my de facto standard for measuring quality. And while I enjoyed and appreciated my “full frame” Nikons, my “testing” didn’t prove out the benefit (for me) of the larger sensor (except, perhaps, in its integration with some really fine pro zoom lenses designed for 35mm).
The real eye opener came some years later, when I made side-by-side images with my Sony “full frame” and my Sony RX100, and printed them. I could not see much difference. To be fair, much of what has gone on has been in the post-processing realm (both in terms of technology and my abilities). There have also been technology gains which have made the smaller sensor just that much better.
The red lily image illustrates this, I think. It prints beautifully as a 13 x 19. I believe it could easily print much larger with no noticeable degradation. It illustrates my earlier premise that sensors capable of high resolving power and clean, low-light capture (regardless of size, and often regardless of the number of megapixels) have really become the “media” of today.
Our recent Celebrity Cruise was entitled “The British Isles.” So why did I lead with the Eiffel Tower? The cruise “title” is mostly accurate. One would generally think of England, Ireland, Scotland, Wales, and perhaps a couple smaller islands as the British Isles. Our cruise included ports of call in Le Havre, France; Bruges, Belgium; and Amsterdam, Netherlands. But who is complaining? 🙂 . As I often do, I made several hundred images over a 2 1/2 week period. In coming weeks, I will give a more detailed accounting of each of the many new places we visited. Today, I wanted to give just an overview of what a huge territory, and vast subjects, we covered.
I have mentioned a few times here, that my wife and I like to cruise. When we can find like-minded companions, that just makes it all the more fun. There were 4 of us this time, and I am pretty sure I can vouch that we all enjoyed our time in Europe. When we go to a new destination, we like to arrive in the departing port city a few days ahead, to explore, enjoy, and get to know the city. Though my wife and I had been to Dublin before, we found many new things to see and do during our 4 days there.
When we were in Ireland back in 2014, we made a very brief trip into Northern Ireland, to see the Church where King Brian Boru was buried. This time we had a full (very full) day from our port of call in Belfast. Our driver and guide, Mark, was as good as we have ever had, and he had some surprises in store for us. As an “outdoor” photographer, I love a pretty scenic image. Northern Ireland did not disappoint. Indeed, as I have been processing images, it is “sneaking up on me,” that Northern Ireland may have been my favorite stop of this trip. I would definitely return and explore further, if given the opportunity.
The following day, we arrived in Liverpool, England, across the Irish Sea. We were scheduled for a Beatles Tour (what else would one do in Liverpool? – well, stay tuned; it turns out: a lot). For my Michigan friends, my quick research led me to (wrongfully) conclude that Liverpool would be like Flint (maybe we need to organize a Grand Funk Railroad tour in Flint?) :-). Look for my upcoming post on Liverpool. It was eye-opening for me.
Next, we were back across the Irish Sea, and in the south of Ireland, at the tiny, but beautiful little port of Cobh. Cobh possibly rivals Northern Ireland in my view, for photographic potential. I made some nice images there, though at least one of them was one of those (perhaps hackneyed) “must do” shots that has already been done thousands of times. Known locally as “The Deck of Cards,” maybe I was able to make a unique “take” on the famous row of houses with the cathedral in the background. I will let you be the judge: again, in the weeks ahead. We overnighted in Cobh, and spent a day there, and a day touring Blarney Castle (site of the famed, “Blarney Stone”), and Cork City.
By then, we had spent most of 8 days on our feet. Blessedly, the following day was an “at sea” day. It allowed for some much needed “R&R.” After our day of rest, we arrived in the British port of Dover. For reasons I will expound on when I get to Dover and London, a few weeks out, I might have planned this stop a little differently. But we took the train to London and had a day-long “Black Taxi” tour of London.
Our next port of call was Le Havre, France. We again overnighted there (this was unprecedented for my wife and me: two full overnight stops). We took advantage of an early arrival and a late departure 2 days later, and again rode the train to Paris, where we stayed overnight. Paris is a huge city, and we spent 2 very full days there. That barely scratches the surface, but we saw a lot during our time there and I thought it was not only very worthwhile, but one of the highlights of the cruise. I will note in upcoming blogs that both London and Paris really need multiple-day visits to do them justice. Unless a cruise ends or originates there, they probably don’t really lend themselves to cruising.
Again, not really the “British Isles,” we ended our cruise with stops in Bruges, and Amsterdam. Known for its beer and chocolate, I sampled a little of both in Bruges. It is an impressive, historical, and very small city, which was well worth the visit. In Amsterdam, we rode the canals, did the obligatory walk through the “red light” and “cannabis” districts, and generally saw some impressive sites. Amsterdam is, again, a massive city. We only got a little taste of the more touristic (as they say in Europe) parts of the city.
In the end, we were exhausted, but the trip served up many new places, and added to our list of places to explore in more detail in the years to come. The only “gear” I carried was the Sony small camera (RX100iv) and my small tripod (which did not see any use). On cruises, it is rare to be on location in early morning, late afternoon, or at night. The only possible “night” shot might have been the Eiffel Tower, but the timing and place were just wrong. If I were to make a longer, land-based trip, I might rethink the gear. I love the lightness and portability of the small camera. But I find myself missing the versatility of the DSLR on some occasions. The coming weeks will cover each of the above – with images – in more detail.
My recent series on my “evolution” got me thinking a little more about “gear.” I have done this before, in perhaps a different way.
As photographers, we spend a fair amount of time on art and a fair amount of time on gear. But the gear talk tends to be heavily slanted toward cameras and sensors. And there certainly has been no shortage of new developments in camera and sensor technology to discuss. But there are some other “odds and ends” which I think are pretty essential to our craft. There is a lot of shooting that I do these days with just a camera in hand – and in the end – that is the essence of photography: making images with a camera. But whenever possible, I use aids to assist in the craft. One item I have been pedantic about over the years has been tripods. I try to shoot from one when circumstances allow – or demand.
I have two tripods I regularly use these days. The primary tripod is a medium-large, carbon fiber tripod. I use a Sirui 3023, but there are many affordable and similar models available. I also use a very small (also Sirui – 3021) tripod, which works not only wonderfully well for travel and for my very small Sony RX100-4 camera, but also for the larger gear in a pinch (especially during travel). The beauty of the small tripod is that it folds to around 12 inches and is very light, while still being reasonably stiff. It can require some bracing technique, though, particularly with heavier equipment mounted. There are other, more affordable small setups (notably the MeFoto aluminum model, which is slightly larger, heavier, and pound for pound not as rigid a base as my carbon fiber setup). Here are some of the items I consider essential aids to shooting from a tripod.
As I noted recently, the tripod head de rigueur, for many years now, is the ball head. Again, there are various models of ball head available. These heads were popularized by action shooters, as it is much easier to move the camera around to stay with a moving target. The biggest problem with ballheads is that it is difficult to make them firm enough to avoid “creep” when setting up an image. Generally, it is a function of the size of the ball and the construction of the clamping mechanism. The larger the ball, generally, the more rigid and stable the operation. Particularly if you are shooting large body/lens combinations, if you don’t use a fairly large ball, you will experience “creep,” which makes it difficult to compose a still scene in which you are being picky about composition. I learned I had to almost anticipate that it would creep downward a bit and tighten the ball head above where I wanted it to end up. My former Induro tripod had a very large ball and worked better. But none of them – in my view – work as precisely and surely as the old 3-way heads.
The biggest problem with ballheads is that it is difficult to make them firm enough to avoid “creep” when setting up an image
Even better is a “geared” 3-way head, which essentially allows micro-adjustment in each direction. The problem has been that, until recently, the available solutions were limited. Bogen/Manfrotto has made a nice, lightweight geared head – the Manfrotto 405. There are two major problems with it for me. First, it costs about $560. That is a budget issue. Second, and the real deal-breaker, is that it has the clunky and insecure (in my opinion) Manfrotto quick-release system cast onto the head. For the more budget-conscious, they also offer the “Jr” model (the 410) at $299 – but still with the QR system. I couldn’t make that work for me.
To solve the QR issue, I could have purchased the very nice, well-built and sturdy Arca-Swiss geared head, with the dovetail QR system I prefer. But now we are in a range from about $850 to $1,500! I still couldn’t justify that cost. I have been perplexed for some time that other manufacturers haven’t produced a geared head with a dovetail QR system. Finally, in 2018, Benro introduced a 3-way geared head at a comparatively reasonable $200. After some research, I bought one. While I haven’t thoroughly field-tested it yet, I have it on my large Sirui legs and have used it enough to know it is what I had been waiting for. To each his own, but for landscape and architectural shooting, this is really a winner for me. I kept the Sirui ball head in the bag in case I wanted it for some action shooting – but that not really being my shooting style, I doubt it will see much use. For portability, I will keep the ball head on the small legs and live with it for travel.
The L-Bracket is an ingenious contraption that has become – for me – an indispensable item. It replaces the dovetail plate on all my cameras. The design is such that you can remove the camera from the tripod, rotate it 90 degrees, and re-attach it, keeping the image plane on the same axis. It makes shifting from landscape to portrait mode for the same scene very easy, with no adjustment necessary to the horizontal axis. I do that a lot, and wondered, after finally obtaining one, why I hadn’t gotten one much sooner.
Like any equipment item, there are certainly disadvantages to L-Brackets. They add some bulk and weight to the outfit. These days, however, the extruded and cast aluminum and magnesium manufacturing and engineering processes have allowed for some very small and light pieces. I have a bracket on my Sony 7r that is barely noticeable.
Another major drawback to these brackets is that they tend to be expensive. They were at first produced as specialty items by companies like Kirk and Really Right Stuff. Both are small, U.S.-based companies, and their costs – as well as the R&D for these items – make them an expensive choice. Also, the majority of brackets are camera-specific, meaning each new body design begets another round of re-tooling and design. Of course, this also means you have to add the cost of acquiring one each time you change camera bodies. And, for effective use, you really need one attached to each camera you carry. There are some universal brackets on the market, but in my experience (I have tried them) they are ineffective because they allow the camera to turn. An effective bracket must be physically married to the body so that it seems integral and doesn’t allow even a fraction of movement.
All said and done, an L-bracket is an item I wouldn’t be without
Competition (and perhaps the expiration of – or bypassing by unique design of – patents) means that today there are dozens of these items available from different sellers (a quick Amazon search produced at least 20 pages of them). While you have to do your research, many – if not most – of them are well made, lightweight, and much more reasonably priced. I have used the Sunway brand (I am sure just an import name for a bracket line that is manufactured for many other sellers – most likely in China) and found them to be very good, effective, and reasonably priced, for the most part. All said and done, this is an item I wouldn’t be without.
I began photography in 1976-77. For the next approximately 30 years, I shot landscape images and, more often than not as time went on, did so from a tripod. And my eye for level was pretty good. At least that’s what I thought. :-). Here is the reality: it wasn’t, and nobody else’s is either! Our eye-brain interaction compensates for so much. It tells us what a level horizon in our view is. But the camera is mechanical. And because our eyes are so good, we trick ourselves into thinking our camera is level. Just look at the millions of “sunset” (very few “sunrise” 🙂 ) images over water on Facebook and Instagram that have tilted horizons. See, The world is not Flat.
My good friend and gifted photographer, Al Utzig, in critique of some of my images a few years back, remarked that a few of my horizons were not level. I was inclined to disagree, but there are some mechanical tests that are brutally honest. I opened a couple of them in Photoshop and pulled a horizontal “guide” down to the horizon. Damn! Al was right (and not for the first time 🙂 ).
Before electronics became common in cameras, it was possible to change out the screen in the viewfinder of many higher-end cameras. I put an “architectural” screen, with horizontal and vertical lines etched onto it, in one of my cameras (I cannot remember which one, but probably the N90s). When electronics did become common, “turning on” such gridlines became an option – one I highly recommend for a number of reasons. But even with these aids, I frequently missed the boat with level horizons, more commonly with handheld images, but sometimes even with a tripod-mounted camera. The tilt was usually minuscule, but again, our eyes are good enough to sense something being not quite right about the image. The experienced eye recognizes it quickly as a tilted horizon. Even with guidelines, it is possible for our own eyes to fool us.
Shortly after our conversation, Al and I were shooting together, and he showed me an inexpensive little item that he religiously uses. He suggested I set up an image with my eyes and then we put it on my camera. It was a hotshoe-mounted bubble level. We did, and my eyes were wrong. The level is never wrong (assuming it has been manufactured to tight quality control). These plastic gizmos should seem familiar to anyone who has ever used a carpenter’s level. They range in cost from about $4.00 to $15.00. My experience is that you may want to skip the $4.00 models, as they are often not manufactured to quality specs and may not – themselves – be level. Your mileage may vary. When I have purchased them, I have put them on a tripod on a level surface and done comparative testing with a carpenter’s level.
The level is never wrong
The two models pictured here are the most common. I have both. I prefer the cube myself, for a couple of reasons. It is slightly bigger and harder to lose (they are easy to lose, and having an extra is a good idea). I also like the way it reads and looks. All wildly subjective and aesthetic considerations. But here is a key point. These are known as 3-axis levels. Do not rely on the spirit level that is often part of modern tripod legs. While it might be helpful for setting up, it is essentially useless for the purposes here. There is also a spirit level available that mounts in the camera hotshoe. Again, in my opinion, do not waste the money.
While a compass is not an item one would think of as a photographic accessory, it is very useful, and I always carry one in the field. As a one-time mentor said to me, “the compass doesn’t lie” (though it can “fib” if you aren’t careful about how you use it: because compasses are driven by the earth’s magnetic field, you must keep them away from magnets and ferrous material). I use a software program called The Photographer’s Ephemeris to do my pre-scouting and research. This mapping program overlays sunrise, sunset, moonrise, and moonset angles on the scene. When you arrive, it becomes pretty important to know at least the cardinal points (North, East, South, and West), which the compass reading will give you. Compasses are small, cheap, and an essential “carry” item.
There is a school of thought that after having spent $$$$ on expensive glass, you need to protect the front element by keeping a filter on it. The most commonly used filter for this purpose is a UV or “Skylight” filter. Historically, these filters were designed to reduce the amount of ultraviolet light, possibly rendering better colors. Over the years, lens technology has incorporated multi-coatings which accomplish this purpose integrally.
There is also a school that asks why you would spend hundreds or thousands of dollars on a highly engineered, quality optic for your camera, and then place another piece of – usually much cheaper and lower quality – glass in front of it. I am in that school. My gestalt view is that I never put anything in front of my expensive glass without a specific purpose. There are, of course, some very good reasons. When I am around salt water and salt spray, I tend to have something on to protect my lens. If I am trying to achieve a particular effect, I use a filter. In practice, there are only two reasons I use a filter, and only two filters I use: a polarizing filter and a neutral density filter.
I learned the virtue of a polarizing filter early on, probably because I had one and was able to play around with it. Polarizers mainly reduce “glare” – scattered, reflected light that washes out color and detail. Short-wavelength blue light scatters most readily, and it is this scattered, omni-directional light that produces glare and reduces the clarity of the image being captured. The polarizing filter passes only light waves oscillating in a plane aligned with the filter, blocking much of the scattered light.
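The rotation effect can be sketched with Malus’s law, a standard optics result (the function name below is my own, purely for illustration): for already-polarized light, the fraction transmitted through a linear polarizer falls off with the square of the cosine of the angle between the light’s plane and the filter’s axis. A rough Python sketch:

```python
import math

def transmitted_fraction(angle_deg: float) -> float:
    """Malus's law: fraction of polarized light passing a linear polarizer
    oriented at angle_deg to the light's plane of oscillation."""
    return math.cos(math.radians(angle_deg)) ** 2

print(transmitted_fraction(0))   # aligned: 1.0, glare passes fully
print(transmitted_fraction(90))  # crossed: ~0, glare is blocked
```

This is why rotating the filter while watching the viewfinder visibly dials the glare up and down.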
There is also a school that asks why you would spend hundreds or thousands of dollars on a highly engineered, quality optic for your camera, and then place another piece of – usually much cheaper and lower quality – glass in front of it. I am in that school
There are some drawbacks to this filter, of course. The polarizer must be manipulated mechanically, by physically rotating it. This may be difficult in a handheld situation, and virtually impossible for “action” subjects. By its nature, it also reduces the light entering the lens, costing you 1 to 2 effective stops. In tricky lighting conditions, this can be a handicap. However, with modern sensor technology, it is much easier to work around this by bumping the ISO up with little penalty than it was back in my Kodachrome 25 film days.
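The arithmetic of “bumping the ISO” is simple doubling: each stop of light lost is recovered by doubling the ISO (the function name here is my own, for illustration). A minimal sketch:

```python
def compensated_iso(base_iso: int, stops_lost: float) -> int:
    """ISO needed to keep the same shutter speed and aperture after
    losing the given number of stops (each stop halves the light)."""
    return round(base_iso * 2 ** stops_lost)

# A 2-stop polarizer at base ISO 100 calls for ISO 400:
print(compensated_iso(100, 2))  # 400
```

On a modern sensor, the jump from 100 to 400 is nearly invisible; on ISO 25 slide film, the equivalent trade-off meant slower shutter speeds instead.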
Another drawback is that they tend to be expensive. This is partly because of the technology needed to do them right. First, a polarizer requires dual rings in order to allow the rotation. This can create a vignetting problem with wider-angle lenses, so I have always purchased the thin ones, which are more expensive to manufacture. They also can create unnatural color casts, and the truly “neutral” polarizers just tend to be more costly.
To add to the issues, when autofocus became more or less “the norm,” standard linear polarizers interfered with the AF mechanism on most DSLR cameras, so circular polarizers, which were much more expensive, had to be purchased. The AF technology in mirrorless cameras is different and isn’t bothered by linear polarizers, and it was nice to be able to reduce my out-of-pocket cost once I shifted to my Sony mirrorless system. At the same time, some of this may be becoming moot. For many years, I shot with a polarizer on every lens probably 80% of the time. However, with the software available today for post-processing, I use them significantly less than I did.
Neutral density filters are something I very seldom use, but almost always have with me. I generally use them in situations where I have too much light and am trying for a longer exposure – most notably water, and waterfalls. I use the square ones designed for a holder, and generally hand-hold them. There was a day when we used split neutral density filters to try to bring high-contrast scenes under control. The best example was a landscape in early light, where there was a bright sky but shadows in the foreground. There was a line (it could be hard, or a gradient) halfway up the filter, with one half being ND and the other half clear. Because nature is rarely in perfectly straight lines, I found I had very little success with this technique, and once I learned how to blend images in Photoshop, I stopped trying altogether.
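The ND math is the mirror image of the ISO trade: each stop of density doubles the required exposure time. A small sketch with hypothetical numbers (the function name is mine, for illustration):

```python
def nd_shutter(base_shutter_s: float, nd_stops: int) -> float:
    """Shutter time needed after adding an ND filter:
    each stop of density doubles the exposure time."""
    return base_shutter_s * 2 ** nd_stops

# A 1/60s waterfall exposure behind a 6-stop ND stretches to roughly
# one second -- long enough to blur the moving water.
print(nd_shutter(1 / 60, 6))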
Popular wisdom has always had it that if you are going to go to the trouble of mounting your camera on a stock-still tripod, you should trip the shutter remotely. The theory is to avoid camera shake caused by our touching it. In fact, there are many variables at play. There are times (e.g., a very windy day) when we must touch the tripod to brace it. Other times, I will admit to laziness, and have often triggered my camera with my hand resting on the tripod. Except in touchy situations such as, perhaps, a macro shot, I am not sure that – once everything is at rest – it truly makes that much difference. Though it is generally good advice, many shooters live life happily never having owned a remote release.
There are various methods, from a mechanical cable release, to wireless, to using the self-timer on the camera. I don’t care for this last method, mainly because of the degree of control you give up over the precise moment you trip the shutter. Most pre-electronic cameras had a mechanical, plunger-wire type cable which screwed into the release on the camera, or around the housing of the release. Even my vintage Asahiflex had one. Electronics in cameras, and particularly AF, changed all that. On most modern cameras, one or more electronic functions are actuated by a partial press of the release. This meant we had to go to electronic cable releases. Another cost, and something to screw into the camera body somewhere. I had the MC-30 release for my Nikons, and they are still $65.00 at B&H. Of course, they were proprietary, though fortunately, the MC-30 worked on all my cameras from the N6006 through the F800.
As great as their cameras are, Sony lags behind Nikon and Canon on accessories (especially flash). They just don’t have an elegant remote cable solution. At some point, it became possible to trigger cameras wirelessly. Unfortunately, the built-in systems are just not very good. The wireless signal is not particularly strong and is very directional, meaning that you normally have to hold the wireless trigger close to either the front or back of the camera. In short, I have found the built-in wireless triggers frustrating, and wired cables interfere with my L-bracket. Plus, I like the idea of wandering away from the setup at times while shooting.
Having an affinity for gadgets, I recently purchased a Pixel Pro wireless remote apparatus that I am really enjoying. Part of what drove me to it was my flash setup. I needed to be able to trigger the camera and a flash setup with multiple flashes, and wanted to do it at night. The existing remotes weren’t up to the task. This unit only triggers the camera, but that sets the whole chain-reaction moving. This unit works even when I am a substantial distance away from the camera.
There is a downside, of course. It takes more batteries, more setup, and a learning curve to make sure everything will be working as expected in the shooting situation. These units work on channels, and it is pretty important to make sure the sender and receiver are “married” to the same channel. And it’s just more moving parts to cart around. I will probably continue to play with it. But it may be more trouble than it’s worth. I will need to keep the focus on making good images and not on having cool equipment.
My Asahiflex outfit had a large, bowl-shaped reflector which attached to a bakelite housing (holding 2 D cell batteries), which in turn attached to the side of the camera. When attached and turned on, triggering the shutter would also ignite a flash bulb. Some of us are old enough to have grown up with incandescent light bulbs and have experienced the pop when we turn on the light, and it burns out. Flash bulbs were light bulbs designed to do just that and to emit a certain color and intensity of light when they did. They were, unfortunately, one-time use, and relatively costly. They were also very hot to handle immediately after use. For those reasons, I seldom used flash.
In the ensuing years, electronic flash units became widely available and were much easier (and perhaps safer) to use. They are generally referred to as “speedlights.” They expended only batteries. They were, in most cases, at least partially adjustable. And as camera manufacturers became more advanced, the electronics in the cameras controlled the flash, with properly dedicated units. None was better than Nikon’s, and it was pretty easy to set things more or less automatic and let the Nikon system do the work. Nikon’s TTL (“through-the-lens”) metering worked by reading the light at the film surface.
I have always thought and said that digital changed everything for the better. Well, not really everything. Digital really messed up electronic flash for a period. The camera could no longer read the light off the film plane and, at least in the beginning, was unable to read it correctly off the sensor. Eventually the problem was resolved and, true to its reputation, Nikon probably did it best first. Today, they have their i-TTL system. The drawback is that it only really works well with certain newer cameras and flash units. And they are relatively expensive. The SB600 (guide number of 98/30) was $350. The SB600 is now discontinued, but it can be found used for around $100, which is a bargain. The upgraded model is the SB700, at about $330. I am not sure it is worth buying against a used SB600 at a third of the cost. It has a slightly higher guide number, slightly faster recycle, and built-in remote control (for what it’s worth, I have a third-party setup for this that works pretty well on virtually any flash – more below). The SB600 worked perfectly well for my purposes, and I found it basically as good as what I had experienced with my film-based system. The SB800 was larger and over $500 new. It was eventually upgraded (replaced) by the SB900, which was in turn upgraded to the SB910. All of these are now discontinued but can be found used at some pretty good bargains. The current big flash is the SB5000, at $600. Too rich for my blood. Again, these are tools, and if you make a living with flash, the $600 may well be worth it.
Nobody has really made a third-party flash that works as well as Nikon’s own system for Nikon. Once again, my switch to Sony threw me out of the “good flash” loop. I have been a rare user of flash over the years. I have always carried a speedlight in the field when on a dedicated photography excursion, mostly because I will occasionally find a scene I wish to photograph where there are deep shadows in daylight conditions. I use the flash to “fill” those shadows without blowing out the brighter parts of the image. Occasionally, I will use it for portraits in very sunny conditions, but I don’t do much of that. Part of the reason is that without a really good automatic system, proper use and understanding of flash involves math. “I was told there would be no math.” The very thought of math makes me quake with fear. I am one of those persons who cannot add double-digit numbers without a calculator. :-).
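The “math” of manual flash mostly comes down to the classic guide number relationship: guide number = aperture × distance, at a stated ISO (usually 100) and in matching units. A minimal sketch, using the feet-based guide number mentioned above for the SB600 class of flash:

```python
def flash_aperture(guide_number: float, distance: float) -> float:
    """Manual full-power flash exposure: f-number = guide number / distance.
    Units must match (both feet or both meters), at the guide number's ISO."""
    return guide_number / distance

# A GN-98 (feet, ISO 100) flash firing at a subject about 12 feet away
# calls for roughly f/8:
print(round(flash_aperture(98, 12.25), 1))  # 8.0
```

Dedicated TTL systems do this calculation (and much more, like accounting for zoom head position and bounce) automatically, which is exactly what the math-averse among us are paying for.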
Sony offers 6 dedicated flash units. One of them is small enough that I don’t consider it useful. One, the F60m, is still on Sony’s website, but it appears to be discontinued, as I only find it used on sites like B&H. With a guide number of 60, it would work for me, but it is a bit on the low-power side for its $300 price tag. It also is reputed to have overheating issues, which shut it down, apparently even during moderate use. The other 4 units range from $300 – $600. Sony also seems to have had some disorientation when it comes to its electronic shoe system. This means there are compatibility issues, and a given flash may only fit a given range of cameras – at least without an adaptor (another item, another cost). And none of them, frankly, is close to as good, or as turn-key, as the Nikon system.
My current solution has been to buy a book, try to learn the basic math (ugh), and go with third-party units. I bought a Godox unit (made in China, found and purchased on Amazon) compatible with my Sony A7. It has enough “automatic” features to work for my needs. I also wanted to experiment with multiple lighting and some night shooting, along with the ability to trigger the flash independently. So I later purchased a second, manual Voking-branded light and a Godox TTL wireless flash trigger. Total cost to me for all of this was under $200. The downside is that it is more gear to lug, and more learning curve, setup, and practice prior to going into the field.
Odds & Ends
It seems suitable to end with a connection to the blog title. There are a few items I keep in my bag, which (depending on the nature of the trip), I try to always consider:
Extra batteries and a couple chargers (you never know when one might fail)
Extra memory cards
Flashlight (and batteries) – (I also carry a headlamp with a red option, as red supposedly messes less with our night vision)
Proper clothing – especially rain gear (and covers for the camera and lens)
Towels and garbage bags (an assortment of large and small towels and large and kitchen size bags will be of great assistance in wet conditions)
Tools (small [jeweler’s] and medium Phillips and flat-blade screwdrivers, allen keys/wrenches, small adjustable wrench, electrical and duct tape)