Don’t Touch That Dial!

The colors in 2006 were a bit “lack-luster”
Copyright Andy Richards 2006

Lately I have frequently seen posted photos that show significant overuse of the “saturation adjustment.” I have certainly been guilty of this myself. We shoot a scene that seems magically colorful to us, but when we first see it on our computer screen, it just doesn’t “get there.” If you shoot raw, that is pretty normal. Most images need post-processing. So our first thought is often to reach for that slider adjustment in our post-processing software (and every software has it): the saturation slider. And 99% of the time that is a mistake! I see a disconcerting trend of overuse of the saturation slider (even among some of my “pro” acquaintances). My friend, Al Utzig, will be chuckling as he reads this. His most frequent critique of my imagery early on was that it seemed over-saturated to him. Maybe. I have certainly made much less use of the saturation tools in my post-processing as I – and my software – have become more “seasoned.”

Recently, I see a disconcerting trend of overuse of the saturation slider

Like any discussion, we should be sure we are talking about the same thing. A general definition of “saturation” says that “saturation is the state or process that occurs when no more of something can be absorbed, combined with, or added.” When speaking in terms of color and photography, “saturation” generally refers not to the actual color or its accuracy (“hue”), but rather to the intensity of the color. This is important. If you are trying to correct a color, more (or less) saturation won’t really do that (although it might help to improve the color’s appearance in some cases).

What I have learned over the years ….. is that what we are really seeking often has nothing to do with saturation

The problem with the saturation slider (and every software will have its own internal algorithms for this) is that it generally does just what the definition says. It adds to (or subtracts from) the intensity of color. Unfortunately, the result is often counterproductive. The saturation slider is indiscriminate (it saturates all color), and too often, a boost in saturation results in a color cast over the entire image. For example, I have seen one Facebook poster recently who is boosting the saturation in fall foliage images and getting a reddish color cast over the entire image. And in most cases, it also results in the detail in the photo deteriorating. Try it. Find one of your images and magnify it enough on screen to see the detail, and then move the slider back and forth. The more aggressive you are with it, the more you will see the details go mushy. In many currently posted images it is obvious that the slider has been overused, as the picture looks unreal on our screens. The colors are intense and, often, just not believable.
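
To make the “indiscriminate” point concrete, here is a minimal sketch of what a global saturation boost amounts to. It is my own illustration, not any editor’s actual algorithm; it assumes Pillow and NumPy are available, and the file name and the 1.4 factor are just placeholders.

```python
from PIL import Image
import numpy as np

def boost_saturation(path, factor=1.4):
    hsv = Image.open(path).convert("HSV")
    h, s, v = hsv.split()
    # The slider is indiscriminate: every pixel's saturation is scaled by the
    # same factor, muted or not, and anything that overflows is simply clipped,
    # which is where detail starts to go "mushy."
    s_arr = np.asarray(s, dtype=np.float32) * factor
    s = Image.fromarray(np.clip(s_arr, 0, 255).astype(np.uint8))
    return Image.merge("HSV", (h, s, v)).convert("RGB")

# boosted = boost_saturation("fall_foliage.jpg", factor=1.4)
# boosted.save("fall_foliage_oversaturated.jpg")
```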

It is a very real temptation to “goose” the colors to make them as brilliant as we wished they were. Using the saturation slider here boosts the foliage, but at the expense of a red “pall” over the remainder of the image
Copyright Andy Richards 2006

Of course, there may sometimes be reasons to boost saturation. You might actually be striving for unreality. Another reason is that the saturation on a projected computer screen is always more intense than when a print is made. I often push things a bit before sending them to the printer, to get as close as I can to the color I liked on screen. But even then, you must be very careful not to introduce a color cast affecting the entire image. There are generally better ways to “boost” the appearance of the image.

Here is a slightly more “selective” move, using only the red channel saturation slider. It is slightly better, but still creates a red color cast which can really be seen on the silos and the white house
Copyright Andy Richards 2006

The thing about photographic imagery is that it can often be very difficult to duplicate the “reality” that your eyes saw. For one thing, I believe that each of us sees color differently. But it is also the case that the attributes we see in a “good” photograph are often based more on appearance than reality. Things like contrast, brightness, saturation and even sharpness all influence the appearance of color. But when it comes to color, something that I have learned over the years using post-processing software is that what we are seeking often has nothing to do with saturation. I think it is important to add here that I shoot and save all my images as raw files. Without getting into the shoulds or should-nots, I think nature images (the most common culprit of saturation slider overuse) will always benefit from starting with a raw file. The first step in making the captured image look the way you want it will always be done in the raw converter, and that is very powerful stuff.

Color was slightly better in 2010
Copyright Andy Richards 2010

Even if the image does call for increased saturation, a number of pixel gurus have referred to the saturation slider as a “blunt instrument.” I almost never touch it, except occasionally to de-saturate an image, or parts of it. There are other, better ways to achieve the color we saw in our mind’s eye. Often, getting color “right” is a matter of contrast adjustment rather than saturation. Contrast can be adjusted throughout the image or, perhaps a better approach, locally. I do use the contrast slider in ACR (but not usually later in Photoshop). There are numerous ways to adjust contrast locally, including the tried and true “curves” tool, used with layers (of course it can be used globally on the image – that will be a judgment you will make).

It is still important to resist the temptation to “overdo” with the saturation slider, creating a red color cast on parts of the photo (barn roof), as well as deteriorating already soft detail in the distant foliage
Copyright Andy Richards 2010

I have also found that a brightness change to an image can sometimes make the colors appear to “pop” more. Again, this can be accomplished globally, with an adjustment layer, or with a plugin like NIK Viveza 2.

The Saturation Slider is indiscriminate

There is also a relatively new tool: the vibrance tool (Photoshop has it; other software may call it something different). This tool is slightly “smarter” than the saturation slider, as it targets more muted colors to add saturation to them, while leaving already saturated colors alone. But again, remember that the tool is acting more or less “globally” on the image, and that may be letting the computer make your decisions for you. I often add just a small amount of vibrance in ACR (5-15%), combined with small moves of the contrast slider.
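
Vibrance implementations are proprietary, but the general idea can be sketched like this – again a simplified HSV illustration with Pillow and NumPy, not Photoshop’s actual math, with the file name and amount as placeholders: weight the boost by how muted each pixel already is.

```python
from PIL import Image
import numpy as np

def add_vibrance(path, amount=0.10):
    hsv = Image.open(path).convert("HSV")
    h, s, v = hsv.split()
    s_arr = np.asarray(s, dtype=np.float32) / 255.0
    # A pixel at 20% saturation receives most of 'amount'; a pixel already at
    # 90% saturation is left nearly alone. That is the "smarter" behavior
    # described above, in contrast to the plain saturation slider.
    s_arr = s_arr * (1.0 + amount * (1.0 - s_arr))
    s = Image.fromarray(np.clip(s_arr * 255.0, 0, 255).astype(np.uint8))
    return Image.merge("HSV", (h, s, v)).convert("RGB")

# subtle = add_vibrance("autumn_barn.jpg", amount=0.10)  # roughly a 10% move
```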

The Burton Hill Farm is a favorite image of mine. There is so much going on in this image. I used Viveza 2 to selectively DE-saturate the clouds to take a bluish color cast out of them
Copyright Andy Richards 2010

There are some very sophisticated techniques which require digging under the hood a bit, like working with “luminosity masks” – essentially specialized layers and layer masks (Tony Kuyper has a set of pre-programmed layers if you are interested). Some years back, I learned a technique espoused by Dan Margulis (his book is a wonderful textbook, but not for the faint of heart. It is technical and it is expensive). It involves moving your image into the LAB colorspace and making some opposite curves adjustments. It really is more of a selective contrast adjustment, but it works wonders to bring out the “snap” in a color photograph. These techniques require some effort. You will have a learning curve, and you will generally spend some time on each image in more complex post-processing. Lots of folks would rather go out and shoot, with very little post-processing and plenty of ease of use, than become a software expert.
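
For the curious, here is the general flavor of that LAB move as a rough sketch – a simple linear steepening of the a and b channels, not Margulis’ actual curve recipe. It assumes scikit-image and NumPy are installed, and the file name and slope are placeholders.

```python
import numpy as np
from skimage import io, color

def lab_color_snap(path, slope=1.3):
    rgb = io.imread(path) / 255.0
    lab = color.rgb2lab(rgb)
    # Steepen the a and b channels around their neutral point (0). Colors on
    # opposite sides of neutral are pushed further apart, adding "snap" as a
    # kind of color contrast, while L (lightness) is left untouched.
    lab[..., 1] = np.clip(lab[..., 1] * slope, -128, 127)
    lab[..., 2] = np.clip(lab[..., 2] * slope, -128, 127)
    out = np.clip(color.lab2rgb(lab), 0, 1)
    return (out * 255).astype(np.uint8)

# io.imsave("foliage_lab_snap.jpg", lab_color_snap("foliage.jpg", slope=1.3))
```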

There is still a temptation to “goose” the reds with the red slider. But the result is not productive, with oversaturated, mushy reds in the distance and again, a color cast overall
Copyright Andy Richards 2010

This sentiment probably stimulated the many products available today as plugins to existing software. One of the first of these was the NIK plugin suite. Originally its own company, it was at one time purchased by Google, and for a period was actually free to download and use. Google eventually appeared to abandon it, and it was ultimately sold to the DxO folks, who now offer the package for $69.00. I use it on almost every image I post-process, and for me that seems like a reasonable price. They say it is new and improved, but I cannot see any difference, and am still using my originally purchased package. So far, it integrates with Photoshop CC.

The Nik product that is most relevant to this discussion is called Viveza 2. It is all about local adjustments the easy way. They have found a way to locally adjust images using circles on screen and a slider. It is not going to be as precise as using luminosity masks, but for me, for all but the most problematic of images, it works very well. Caution: Viveza does have a saturation slider! Again, I rarely touch it. The sliders I find most useful are the brightness slider, the contrast slider, and sometimes the shadows slider. I have found that in an image that does not need additional color correction (correction being what you need when hue, not saturation, is off), these three sliders – applied locally to areas of the photograph – do everything necessary to render a colorful, vibrant, and realistic result.

This is an example of an image where I “pushed” the sharpening and color contrast in the LAB colorspace. It looks “crispy” on screen, but the natural smoothing process of inkjet printing made this work for a sharp print
Copyright Andy Richards 1997

Indeed, after I owned and learned this software for a while, I found myself going back to old images and completely re-working them. And I discovered that I was, indeed, guilty of SSO (saturation slider overuse). 🙂 . I like the re-worked images much better. If you look at some of your images critically, you might just agree with me.


Make Your Social Media Photos Better – Part IV (“Bullseye” Photography)

Barn in Wheat Field
CENTERED
Copyright Andy Richards 2018

Whew, we just covered some pretty “techy” areas in Parts II and III. If you have read on, I appreciate your patience, and there is good news. This one is much easier!  I want to talk about composition here. I left this for last because for the genre we are discussing (for lack of a better description, “smartphone snapshots”), it is perhaps the least important. Focus and exposure are going to take you a long way. Then we can think about composition (and the whole “level horizon” thing is really about composition – it is just such a common error that it got front and center coverage).

Hitting the bullseye is a good thing for darts, bullets, and arrows… photographs; not so much

Another very common thing I see is what I call the “bullseye” approach. Hitting the bullseye is a good thing for darts, bullets, and arrows. In most photographs, not so much. Our internalized image of a camera lens is round. On most cameras (including smartphones) there is an “aiming” point directly in the center of the viewing screen. If you don’t have movable metering and focus points, you are going to have to use that center aiming spot to achieve your photograph. Until (if) smartphone manufacturers begin to put more sophisticated camera operating features in their native apps, you will do well to find one of the free or nominal-cost third-party camera apps, as they will give you the much desired ability to use the camera as it should be used.

On most cameras (including smartphones) there is an “aiming” point directly in the center of the viewing screen

Given these built-in parameters, it is no wonder that many of the photographs we see posted have their subject dead center in the middle of the picture. If we are making a portrait of the subject (generally a front-on shot of a person or pet), that is often desired. For most other (contextual) photographs, it usually results in boring composition.

Barn in Wheat Field
“COMPOSED”
Copyright Andy Richards 2011

At the same time, the vast majority of scenic images have a “horizon” somewhere in the image. The most common fault here is placing that horizon line across the middle of the frame. The barn in the yellow wheat field is a good example. Our immediate thought is that the barn is our subject and we almost involuntarily center it on our aiming point – in the middle of the frame. As you can see from the second image, moving it to approximately the lower 1/3 or even the upper 1/3 makes a more interesting, dynamic (and even believable) photo. Whether you place it higher or lower will probably depend on whether you want to highlight the foreground or the background of the image. Moving the subject out of the dead center of the image will usually make a more dynamic and pleasing picture.

Farm Scene
CENTERED
Copyright Andy Richards 2018

How do we accomplish that? I have made reference to several of the tools discussed in Parts I – III. The most important of these tools will be the grid lines. I always have grid lines turned on for all of my cameras. The most common grid pattern is based on the artists’ “rule of thirds.” Perhaps the majority of images can be made using this grid. With a rule of thirds grid, the intersection of the lines places points in the upper left, upper right, lower right and lower left of the viewing screen. The horizontal grid lines, in addition to helping with level horizons, will help you determine where the horizon should be placed. The vertical lines help to get the image out of the “bullseye.” It is often my preference to place my subject on or near one of the 4 intersections described above. It depends on the subject and often which way it is looking or moving (a rule of thumb is that you would prefer an animate object looking or moving into the picture). I “contrived” the farm scene image in Photoshop, to create the centered composition. In practice, I only rarely compose a shot like that, so I didn’t really have a good example. In the image I actually shot, you can see that the barn is roughly centered around the top right intersection, and the cattle are at or just below the bottom left (there are some other problems with this image. I would like the cows moving into the image. But in the words of Mick Jagger, “you can’t always get what you want” 🙂 ).
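
If it helps to see where those four intersections actually fall, here is a tiny sketch – just arithmetic, nothing camera-specific, with the example frame size as a placeholder – that computes the rule-of-thirds lines and their crossing points:

```python
def thirds_grid(width, height):
    """Rule-of-thirds line positions and their four intersection points."""
    vlines = [width / 3, 2 * width / 3]
    hlines = [height / 3, 2 * height / 3]
    points = [(x, y) for y in hlines for x in vlines]
    return vlines, hlines, points

# Example: a 6000 x 4000 pixel frame.
# Vertical lines at x = 2000 and 4000; horizontal lines at y = 1333 and 2667;
# the four intersections are the upper-left, upper-right, lower-left and
# lower-right points where a subject tends to sit well.
print(thirds_grid(6000, 4000))
```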

Rule of Thirds Grid

The other tools we discussed will now also become very useful. If you move the focusing point out of the center of the image to the point where you want to place your subject, you will now ensure that the subject is not only dynamically placed, but there is a much greater chance that the important parts of it will also be in sharp focus.

Farm Scene
“COMPOSED”
Copyright Andy Richards 2010

For exposure, the point in the picture you want to meter may not be in the center. Most often, I find it is at the same point as my subject, but not always. Again, these are smartphone snapshots, so we don’t want to have too much to think about. I like to “pre-plan” a shot when I can. With my smartphone, I most often just move the focus/exposure point to the spot in the image where I want it and then shoot.

Lobster Boat
CENTERED
Copyright Andy Richards 2018

Sometimes composing a scene involves taking a step back and looking at context. When action is happening quickly, you don’t always have time to do that. But if you do have time, it pays to look with your eyes and think about what you want to include (and what you do not want to include) in an image. This is especially true of a scenic shot. The lobster boat shot in Bernard, Maine, is a good example. The boat was the first thing that caught my attention. So the shot is about the boat, right? Centering in on the boat certainly makes this point. But would a “composed” shot be better? Stepping back, either by moving your body, or changing the zoom factor, makes a completely different image. I’ll argue that it is a better, more interesting image and that the boat is STILL the main subject in the photograph. And my original “composition” also gets rid of that empty expanse of blue water in the foreground of the image.

Lobster Boat
“COMPOSED”
Copyright Andy Richards 2009

Composition is personal and subjective. I stood next to my best friend, Rich, to shoot this image. I don’t remember, but I would bet his composition was different (and probably better) than mine. What I have talked about here are not “rules.” They are guidelines. And rules and guidelines are “made to be broken.” There are certainly cases (for example, closeups and portraits) where centering the image is desired. But some thoughtful application of these guidelines will make your social media images more interesting.

I hope you have enjoyed this short series. A couple of things. First, this stuff seems complicated and time-consuming. But it really isn’t. If you take some time to find and download a camera app, and set things up before you go out with the phone, you will find that paying attention to things like horizon, focus and exposure will quickly become habits and you will do them without really thinking about them. Composition may take a bit more time, but I truly encourage you to try it. Everything I have said in all four installments is a “rule” or guideline. They are not laws. They are only rules. And the age-old saying that “rules are made to be broken” certainly applies to photography. It is your picture and you should feel free to break the rules when it suits your vision and taste.

Hopefully, if you slogged through all 4 of these blogs, there were one or two tidbits of useful information here. As always, thanks for reading.

Make Your Social Media Photos Better – Part III (Focus on what you are doing)

Here the subject of the photo is the stream of water and it needs to be in focus. But there are lots of other “moving parts” in the image for the focusing sensor to pay attention to, so you have to make sure you are focusing on the water

Too many images posted on Social Media are soft, or often even downright blurry. While softness is sometimes desirable, it should only be there intentionally, and I am sure the majority of the soft images I see are not intended. One of the primary fundamental things we learn as new photographers is that an image must be in sharp focus.

Sharp focus is one of the harder things to get right with a smartphone camera

Like the exposure issues in Part II, this is one of the harder things to get right with a smartphone camera. Modern “dedicated” cameras – which are, after all, specifically designed to make pictures and do not have to do all the other work a smartphone does – have lots of controls. Most of them are user-configurable. The “camera” (really just software) that comes standard with most smartphones has very limited user-configuration capability (though I am told the newest iPhone and Galaxy S9 have begun to add some of these things). So, again, this is going to be easier with a third-party camera app, if possible.

You can see the water here is soft. The camera’s focusing sensor was “fooled” (really it was probably the user that was fooled) by focusing on the rock in front of the stream of water and let the water go blurry

As I noted in Part II, smartphone cameras are “all-automatic.” This means the software is attempting to do all the things you need to do to capture a digital image, including determining proper exposure and focus for you. The focusing system is similar to the light-metering system, in that they both use a measuring device. In the case of focus, the measurement is the distance from the sensor to the subject. That information is used to tell the camera lens where to focus. Knowing that “subjects” often move, most camera manufacturers’ software, by default, continually measures and refocuses. Some do better than others. It is possible that the moment you click the shutter does not exactly coincide with when the camera actually makes the capture, which may mean that the subject has moved in the meantime and the focus point has changed. This is not likely to happen often, as the camera apps just keep getting better at this. Most of the newest cameras also offer an algorithm called “face recognition.” It is designed to pick out a face or faces in the image area and purposely focus on them. Of course, the problem with this is when you have multiple faces in different places in the image (especially from back to front). Like any technological device, it is important for the user to understand what it is measuring, so you can control that aspect.

Like any technological device, it is important for the user to understand what it is measuring, so you can control that aspect

But there is another, perhaps less obvious concern, and this is most likely to be the culprit. Note that I have used the word “subject” several times above. To understand what is happening with the focusing system, we have to first know what we mean by “subject.” For our purposes here, I will define the “subject” as “that part of the image that you want to be in focus.” I know: the whole thing, dummy, right? 🙂 . Digital images, as well as hard-copy prints, are essentially two-dimensional. The subject matter of our photographs is almost always 3-dimensional, often with substantial depth. The camera lens is not physically capable of rendering the entire depth of the scene in sharp focus. So we have to choose which part(s) we want to really be in focus.

Again, this is accomplished with the measuring tool in the camera. If we do not know where in the frame it is pointing, we will have to resort to our best judgment on the very small screen of our smartphone, often in poor light. The app I am using has the ability to turn on a small rectangular bracket superimposed on the screen, and to move it around. Moving it around is a nice feature for composition. We will discuss that in Part IV. But the most important thing is to know that you must place the bracket on the part of the photo you want to be sharp (remember that you may also have this bracket set for measuring exposure, as we discussed in Part II). When people are in the photo, that should be a face (and if possible, even someone’s eye).

One other thing to understand. Because the lens cannot make everything in the image sharp, things will become progressively more blurry the farther they are from the focus point. The “back” (background) of a scene is more susceptible to blur than the front in most cases where smartphone lenses are involved. So if you are taking that sunset shot, you want to try to set your focus point on the horizon, rather than on a close object in the front (foreground) of the image (for the photographers out there, I appreciate that I am over-simplifying this, but again, this is mainly addressing smartphone snapshots).

Try experimenting with this and you will hopefully begin to see that you do have some control over the sharpness of the image. I think this will make your images better (at least – sharper 🙂 ).

Make Your Social Media Photos Better – Part II (“Through the Glass Darkly”)

In Part I, I talked about crooked horizons; by far the most prominent fault of images posted on Social Media. Hopefully, we helped to fix that problem in Part I and you have gone back and straightened all your tilted images, restoring the water to the earth’s oceans and lakes. 🙂 . And, hopefully, as you will see, although it is the most common problem, the tilting horizon is also the most easily remedied. Parts II and III cover probably the next most common faults, which are also the most difficult to get right. We will talk about exposure in this installment. I often see posts – usually of people – that are so dark that you can barely see the subject. The answer seems obvious: not enough light. Sometimes that is true, but that is not always the reason.

Although it is the most common problem, the tilting horizon is also the most easily remedied

Nikon F100
Tokina 300 mm
Kodak E100SW
Exposure Data not recorded
Birds, Vol. 2

Getting exposure right is more difficult; but it is possible with an understanding of how your phone decides to expose an image

The human eye is (so far), by far, the most incredible and technologically advanced lens available. Coupled with our brain, it is able to register and “capture” an amazing range of light from very dark to very bright. In contrast (pun intended), even the most advanced camera sensor (the little chip in your smartphone that records photographic images) can only record a fraction of this range of light that our eye sees. This limitation is sometimes referred to as “latitude.” Because of this limited latitude, all cameras have a very difficult time recording images that have both very bright areas and dark shadowed areas (the difference is sometimes referred to as “contrast”). The typical example is a shot of someone in a sunny environment, where parts of the photo are very dark, even on a bright sunny day.  Our “fix” is going to be surprising to many of you.  The bird in the photo above is a good example. It was shot on a very sunny day, confusing the camera meter and underexposing the dark bird. Note that you cannot see the eye of the bird that is in the shadow.  What causes this underexposure?

Severely underexposed image with bright background

Smartphone cameras are mostly “automatic.” They are programmed to make choices that advanced photographers with dedicated cameras know how to make for themselves. The programming is pretty good, but it can be, and often is, fooled by tricky light conditions. Understanding why and how this happens will help you make better images, even on an all-automatic smartphone. Cameras have a measuring device (meter) that measures the light and “suggests” to the camera the proper exposure for that light. This works well when the light is even. But in contrasty lighting, the meter can be confused. Sometimes it will “average” the different light sources (dedicated digital cameras have metering algorithms that do this very well). But all too often, I see images where the meter chose one light intensity over the other, to the detriment of the image.

Knowing where this meter is pointing will be very helpful in fixing this problem. My dedicated cameras all have user-moveable brackets showing where the meter is pointing in the image. Most native smartphone cameras do not. A little “quick and dirty” Google research did not turn up anything useful about finding that point in the native phone camera, so you are probably going to have to resort to a little trial and error here. Watch the screen as you move the camera around for changes in lightness and darkness. While it begins to sound like a repetitive advertisement, I am again going to suggest you look for a new camera app if your native app doesn’t already allow you to adjust the metering point. Most apps mimic the dedicated digital cameras, and show a little rectangle on-screen when you are ready to shoot. That rectangle tells us where the light is being metered. It is best if this is movable about the image.

If we are pointing it at (or even near) a very bright part of the image, it will tell the camera that it needs to lower the exposure. The problem is, the exposure gets lowered for the entire picture, leaving shadow areas too dark. The image of the sailboat above is contrived, to illustrate the problem (I couldn’t find an illustrative photo in my archives, so I exaggerated the darkening that occurs when exposure goes awry). The area where the sailor is pointing is under a canopy and in shade. The water and sky are bright, overcast, mid-day conditions. I often see images nearly this bad. It is this way in most instances because the camera’s meter is pointing at the bright sky and telling the software to expose for it. It thinks that if it exposes the shadow properly, the sky will be blown out to a bright, featureless white.
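
To make the “where is the meter pointing” idea concrete, here is a very rough sketch of what spot metering boils down to. It is my own illustration – real camera meters are far more sophisticated – and it assumes NumPy, an 8-bit grayscale image, and roughly 118 as a stand-in for middle gray.

```python
import numpy as np

def suggested_exposure_shift(gray_image, box, middle_gray=118.0):
    """Average the brightness inside the metering bracket and report roughly
    how many stops exposure must move to bring that area to middle gray.
    Positive means brighten the whole frame, negative means darken it."""
    x0, y0, x1, y1 = box
    region = np.asarray(gray_image, dtype=np.float32)[y0:y1, x0:x1]
    return float(np.log2(middle_gray / max(region.mean(), 1.0)))

# Metering on the bright sky returns a negative number (the whole frame gets
# darkened, and the shaded sailor goes nearly black); metering on the sailor
# returns a positive number (he is exposed properly, and the sky washes out).
```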

There is a surprise “fix” for sunny day exposures; Turn On Your Flash!

Now, here is the fix. If we want to get good (not totally blown out) exposure in the brightest parts of the image, we generally are going to meter near that brighter area. Without some help, the dark areas will be too dark. In this case, the subject is really the sailor and it is him we want to have properly exposed. We need to choose the proverbial “lesser of evils” and let the sky go more toward white. I know, it seems odd that this can happen on a sunny day. The second image is better, metered more toward the subject (perhaps on the darker colored water – not the whitecaps). But it is still in shadow.

There is a surprise fix for sunny day exposures: turn on your flash! Again, many, if not most, smartphone cameras allow for some choices, and if there is a “fill-in flash” option, choose that one. The flash is not strong enough to affect the sky and water in the background, but it does light the person in the foreground (again, I simulated what a fill-in flash exposure would do with this image). The camera will, pretty intelligently, light the dark areas without overly affecting the bright areas. Obviously, it should go without saying that you can also use this flash feature when there is not enough light overall. Again, having an app that allows specific placement of the metering area will be useful here, both to get the differences in lighting covered and, hopefully, to tell you when you simply do not have enough light for a good exposure.

Fill in Light

There are limitations to flash (on every camera, not just smartphones). Have you ever noticed spectators in the “nosebleed” section of a concert venue or sports stadium, popping flash images? Their flash is doing nothing for them except draining their phone battery. Flash is a wonderful addition, but it doesn’t reach very far. It is only going to light up subjects that are very close to you.

Make Your Social Media Photos Better – Part I (The Earth is not Flat)

“Better.” An obviously subjective term. Most of my blogs over the years have been directed at photographers or at least, people with an interest in photography. In this series, I am addressing some of the things I see on Social Media. Admittedly, I am an “old” guy (another subjective term). So my Social Media exposure is relatively limited (like Facebook – remember when that was for “young” people?) 🙂 .  The earth is not flat. But we could excuse the casual observer who didn’t know better. When you look off into the distance, you see a horizon, and for all our eyes tell us, you could fall off the edge of that horizon. Except that we know it is not true. We learned it in grade school (or perhaps before). But there is another thing: that horizon we see off in the distance – it is always level.

smartphone users are primarily who I am directing these suggestions to

Today, there are almost 2 billion photos uploaded daily to Facebook, Instagram, Flickr, Snapchat, WhatsApp, and the like. 2 billion. Per day. I don’t know about you, but I have a hard time getting my head around that. I probably see 20 to 40 a day. That makes my sample only about 0.000002%! But I think it’s enough to support my observations. Modern “smart” cellphones have focused (yes, pun intended) on quality image-making software and hardware. The newest generation of iPhone and Android (particularly sector leader, Samsung) smartphones have remarkable lenses and pretty capable software for digital image capture. They are perhaps as much responsible for this explosion of posted images as any other one factor. I would guess that about 1.99 billion of the 2 billion daily posted images were made with smartphones. So smartphone users are primarily who I am directing these suggestions to. The intent here is not to be judgmental, and if my comments offend anyone, I apologize in advance. No offense meant. Rather, I am hoping to suggest some things everyone can do to make those images that all your online friends remark are “beautiful” truly beautiful, both aesthetically and technically. Since you are posting the image, we will assume the aesthetics are to your liking. What I am talking about here are the technical qualities that make an image better. So this series of blogs will address some things I see.

Horizons

Tilted Horizon

This is the single most common fault I see. The vast majority of images I see online, which otherwise are very nice images in many cases, suffer from this malady. As I look at them, I often wonder aloud, “How is there any water left in any ocean or lake on our planet?” Because based on the photographic representations I see, it should all have drained by now. 🙂 There is no special natural talent that skilled and experienced photographers have to get this right. In fact, I don’t believe there is any one of us who sees things levelly through the lens without aid of some kind. I had been shooting seriously for many years when my friend, Al Utzig, mentioned some tilted horizons on a couple of my images. Unlike some of the images I see online, they were very subtle, and I didn’t see it. Al suggested that I use a level device on my camera and I have done so ever since. These cost a few dollars and are an invaluable aid, but do some homework when ordering online, as I found some of them were notoriously not true to level (I check mine against a carpenter’s level). I was surprised at the difference between the “leveled” image, and what I thought was level with just my eyes. And, my very limited empirical data tells me that most of us lean to the right. But the mechanical level simply doesn’t lie. 🙂

“How is there any water left in any ocean or lake on our planet?”

So how do we fix the problem? Of course, you cannot use the mechanical bubble level on your smartphone. And it is really only useful when shooting from a tripod – which most of you don’t/won’t do. The fix is really pretty easy, and there are three ways I can suggest that any user can easily adopt. The first two are in-camera and probably the best solution for snapping and posting photos.

Hotshoe Bubble Level

1) Virtually every digital camera today has settings that superimpose grid lines on the viewer (and on your phone, if the setting isn’t there, “there is an app for that”). This is a very useful feature, and in my view, it should always be activated on your camera. Here is a very good explanation of how to turn this feature on, and use it, on both Android and iPhones. The grid lines are really intended as an aid to composition of the photo (more later), but they can be useful where there is a clear horizon line in the image, like the ubiquitous sunset over water image. If your phone is older and doesn’t have grid lines, there are any number of camera apps available for download that will do this, as well as the other things we will be mentioning.

Viewing Screen Grid Lines

2) Many cameras also have a built-in electronic “level.” I have it always on for all of my digital cameras (I still use the bubble level when shooting from a tripod). As I have moved to the “small camera,” “street shooting,” hand-held mode in recent years, this has become an invaluable aid to me. It works – I think – better than the grid lines (the grid lines are always on also, as they have another important use, which I will cover in an upcoming blog). Using both gives you the ability to cross-check how things are working. Use is simple. Most phones do not have this feature (the very newest iPhone does) and you will have to download an app. There are different variations of this tool, but they all work pretty much the same: there is a fixed horizontal line and a rotating line (usually green or red), or two rotating lines that turn (usually) green when level is accomplished. Just ensure that the horizontal lines match up. The tool is superimposed on the viewing screen and doesn’t interfere with composition or shooting.

One of many different variations of a built-in “electronic” level

The last way is a bit more work, and I suspect that few will go to this step. 3) Most of the time, horizon issues can be fixed with digital photo-processing software. Of course this adds some steps and an element of complication to the seemingly straightforward process of taking the picture and then “sharing” it somewhere. But if you want the image to look good, it is worth doing if you didn’t get it right during the shot. Older photo software required some loss of parts of the edges of the image (“cropping”) when fixing tilted horizons. These days most software is so “smart” that it fixes those edges pretty well (Photoshop calls this “content-aware” cropping). My screen capture didn’t pick it up, but when you move the cursor to one of the bracketed corners, a curved arrow appears showing the direction of rotation and you just “grab” the corner and rotate it until the horizon is straight. Photoshop also allows placement of “guides” (the blue line near the horizon), both horizontally and vertically, to help in determining level. The iPhone and Android app stores have apps that can be downloaded for either a nominal fee or free which do post-processing (like Snapseed), and many of them have the capability to rotate an image after it has been shot and stored on the phone.
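
For anyone curious what that fix amounts to under the hood, here is a minimal sketch using Pillow. It is my own illustration, not how Photoshop or Snapseed actually implement it; the file name, angle and trim margin are placeholders, and the trim simply hides the empty corners rather than filling them the way content-aware cropping does.

```python
from PIL import Image

def straighten_horizon(path, degrees, trim=0.05):
    """Rotate the image by the measured tilt, then trim a small border to hide
    the empty corners the rotation leaves behind. How much trim you need
    depends on how big the tilt was."""
    img = Image.open(path)
    # Positive degrees rotate counter-clockwise; the canvas size stays the same.
    rotated = img.rotate(degrees, resample=Image.BICUBIC)
    w, h = rotated.size
    dx, dy = int(w * trim), int(h * trim)
    return rotated.crop((dx, dy, w - dx, h - dy))

# straightened = straighten_horizon("tilted_sunset.jpg", degrees=-2.5)
# straightened.save("level_sunset.jpg")
```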

Correcting a Tilted Horizon in Post-Processing Software

So now you have no excuses for posting crooked images 🙂 . I’ll be watching for more level horizons on Facebook.

NOTE: As I worked on this series of blog posts, I fiddled with my own smartphone (Samsung Galaxy S7), and learned that the native camera app is pretty lacking. This has got me looking for an adequate substitute app. What I am finding is that it does not appear that many of these app developers are photographers (or perhaps more accurately, they don’t shoot the way I do). So many of them have lots of “bells and whistles,” but are lacking in one or more important features. Most notable is the level. I can find plenty of stand-alone level apps, but they do not seem to integrate well with the native camera app. Looking for a full-featured app, I am currently trying out an app called “Snap Camera.” I will try to remember to report my findings. One thing to note is that this is not a free app ($1.99 on the Google Play Store, which I do not think is unreasonable for the benefit – assuming it works as well as I hope).

Some Tips for Casual Shooters

The ubiquitous black gondola (shown here with the also common blue cover) is a favorite subject of photographers
Copyright 2013 Andy Richards

September and October, and particularly during the “fall foliage” season, are prime vacation times. Many love the cool crisp fall air, the relatively less crowded venues, and are often attracted to destinations known for their fall foliage. Elsewhere, fall creates beautiful light and beckons vacationers for many other reasons.

This year looks, from all indicators, to be a year that will yield spectacular foliage as our mixed hardwood forests in the Northern parts of this country turn before dropping their leaves for the winter. With manifold lakes, mountains, rivers and ponds for backdrops and reflections, many travelers will be making memories with their smart phones, tablets, and small travel (“point and shoot”) cameras.

I see hundreds of images on the internet these days, most taken with smartphones or tablets. There are just a few “tips” I would offer to perhaps make these images “better” memories.

Make sure the Horizon is Level

This may be the most common issue I see with many of the hundreds of photos posted online. There are so many of these tilted photos that – if I didn’t know better from scientific proof – they would lead me to believe that the world really is flat – and tilted! And not only tilted, but in the majority of cases, tilted to the right! I know from many of my left-leaning friends that there are a fair number of left-leaning shooters out there, but they still lean right when behind the viewfinder. We don’t see the earth tilted right or left with our eyes, but the apparatus we are using to capture the scene – or something in the scene – fools us, and the end result is a tilted horizon. I have been shooting with an aid (either a bubble level or a built-in level in the viewfinder/screen) for some years now, since my friend, Al Utzig, recommended it. I still am amazed when I shoot an image without any aid, thinking the horizon is level, and then view it, to see it is not. The level doesn’t lie. Our eyes (and minds) do. It especially shows up in images where there is a well-defined horizon (like those sunset images on your favorite lake, where if you look carefully, you would wonder why the water hasn’t drained out of the lake). 🙂 Don’t let the other objects in the image fool you. They might just be tilted. But the horizon never is.

Think about the Sun

We have all seen images – usually of people – where you can hardly see them because they are so dark in the image.

More often than not, these images are made on a perfectly clear, bright, sunny day. But just as often, something creates a shadowed area in which the subjects are shrouded. This circumstance is created by the angle and brightness of the sun. It is why portrait photographers love outdoor conditions that are “bright overcast” rather than bright, clear and sunny. The overcast creates a flat, even lighting, where the bright, clear conditions often create harsh brightness and deep shadows.

The human eye sees these scenes exactly as they are – clear and well-exposed. So why can’t we get it right with the camera? After all, it is a “smart” phone, right? While technology has moved light years in just a few calendar years, we still do not have any optical technology that holds a candle to the human eye, nor any computer (yes, digital cameras are computers) that matches wits with the human brain. That combination (eye and brain) corrects any lighting problem in a nanosecond, without us realizing it. And it sees a huge range of contrast, from very bright to very dark, in great detail. Unfortunately, our phones (cameras) are not that good. They “see” a very narrow range from bright to dark, and the computers in them then try to decide which of the tones are the most important to the image. But they just aren’t that “smart.” Unless we give them some direction, they will see a dominant tone in an image and try their best to expose for it (more often than not, the brighter tone). And in the process of exposing for the brighter tones, they will render darker tones, well – dark.

We could just turn the folks around and have the sun at the shooter’s back. A couple of problems with that: First, the “scene” we are trying to capture doesn’t lend itself to that. Second, when we do turn them around, they are looking into the bright sunlight and therefore squinting. And last, the direct sun on them often creates a harsh effect. So how do we fix the problem without turning them around?

Old San Juan Artisan’s Market
Copyright 2013 Andy Richards

There are a couple of “fixes.” The first, and often the best, is to turn on your flash. Wait a minute. Flash? In broad daylight? In bright sunlight? Yep. The flash will “fill in” those areas the computer is telling the camera to render as dark, evening out your image. A second fix is to move in as close as your subject will allow (“telling” the exposure meter built into the camera to measure the light closer to the subject).

Use the “Rule of Thirds”

The rule of thirds is an almost hackneyed rule used by artists and photographers to try to make images more dynamic. Too many otherwise nice images are composed using what I call the “bullseye” effect. The camera has a focus aid (often a square or round bracket in the viewfinder or on the screen), and the default position of that aid is dead center in the frame. So we tend to follow its lead and put our subject dead center in the frame. If the subject is a closeup without any surrounding context, that probably works. But again, the premise here is that you will be out seeing the sights and want to either capture the sight itself or capture friends and family with a feature of the sight as a prop.

The rule of thirds says that a more dynamic composition divides the image into thirds, both horizontally and vertically, and places important parts of the image at one of the points where the dividing lines intersect. Many cameras now come with a grid that can be activated with lines at the “rule of thirds” points. A useful compositional aid, if you have it. If not, try to imagine your viewing screen or viewfinder divided into thirds in both directions.

Stone House; Manassas Battlefield NP; Manassas, VA

For much of the “travel” imagery I see, this is particularly important for the vertical placement of the horizon in an image. With one primary exception, placing the horizon in the middle of your viewfinder is a recipe for a boring image. The primary exception is when you are creating a mirror-image reflection. Even then, be careful not to overuse that.

You have to think about Image Sharpness

In “the olden days” (as I used to say when I was a kid), travelers shot with Kodak Baby Brownie cameras, or Instamatic cameras. They were “fixed” focus, which meant they used optical formulas which pretty much guaranteed focus in most situations. This meant very short focal length lenses, with relatively small openings (apertures). Only “serious” shooters back then had cameras that were capable of being focused by the user.

Problems with image sharpness are caused by a number of factors. Movement (either your subject or yourself while holding the camera) is one. This is exacerbated as the focal length of the lens increases. Another is the actual optical focusing of the lens. “Fixed” lenses are not common on today’s cameras. Instead, most have some focusing capability. While desirable, this also causes issues from user misunderstanding.

The reason most lenses today are focusable is because of the advent of “autofocus” technology (AF). And, thankfully for those of us with old and poor eyes, AF just keeps getting better and better. But it is still an optical-mechanical technology. Which means it needs some guidance. On smartphones and tablets, there is generally very little user adjustment. But the guidance is still happening. The software is telling the lens to focus at a particular spot, and there is often a green confirmation. You must know which part of your screen is telling the lens where to focus. Most cameras have a bracket (see above) that will show you the part of the image you are focusing on. It might not be your subject! But the lens will focus where it’s told, rendering what appears to be an out-of-focus image.

The second cause is not understanding the mechanics of your “camera.” If you are shooting with a camera with a very long zoom range and trying to shoot under relatively low lighting conditions, you will likely get blurry results. This is a function of the optics. If your subject is moving, you will likely have the same result. With cell phones, the first problem is the more likely culprit, as the lenses in these cameras tend to be relatively short.

The above “tips” are just that. They are not “set in cement” rules. And even if they were, all artists know that “rules” are made to be broken. The tips should make general images “better” and should be kept in mind as rules of thumb while shooting. As the old saying goes, “your mileage may vary.” Don’t be afraid to experiment and to break the rules. But breaking the rules works better when you know them and understand the consequences of breaking them.

Copyright Andy Richards

I will be “out for a couple weeks” on travel. Hope to be able to bring some new images back and perhaps something to discuss here. In the meantime, safe travels, have fun, and be safe!

Size Matters!

Certainly a hackneyed old phrase, but in matters of digital photography, it is a fact. The size I am referring to is that of the digital sensor – not megapixels, but physical size. There are a fair number of parallels to the film-based systems we used before the so-called “digital revolution.” But with digital sensors, there is a significant difference: technology. Very much like the computer industry, by the time the latest and greatest sensor hits the market, it is already “old technology” (there were major technological advances in film, too, over time, but they happened in a matter of years, rather than months). This is a very technical subject that many others out on the net explain in technical and engineering detail that I cannot begin to match. This is a layman’s perspective.

The consumer camera began to drive new technology advances

The first digital cameras were adaptations of 35mm Single Lens Reflex (SLR) camera bodies that were used to build digital imaging tools (today referred to as a Digital SLR, or DSLR). They had very small imaging sensors (significantly smaller than the rectangular cross-section of a single 35mm film frame), and were capable of producing only around 1.2 megapixel (MP) images. They cost $20,000 to $25,000; not within the budget of most photo-enthusiasts.

Digital image-making brought a new phenomenon to the camera manufacturing industry. Suddenly, the consumer camera (we often refer to them as “point and shoot” or “P&S”), began to drive new technology advances which often first appeared in the consumer P&S cameras, only to be put into the higher-end “pro” cameras later.

Sensor Sizes Compared

How does this all relate to sensor size? The P&S cameras have a much smaller image sensor in them than the DSLR cameras and the newer Medium Format digital cameras that are now available on the market. I currently routinely carry and use 3 different cameras: a Canon G12 P&S, a Nikon D7000 “DX” sensor camera, and a Nikon D700 “FX” sensor camera. As the illustration shows, there is a pretty remarkable difference in sensor sizes (if you would like to do your own comparison, I used this really cool tool to make this illustration), especially when we can upload and view all three of these at relatively the same viewing size on our computer monitors and – within reason – make similar-sized print images from all three. But there is a notable difference in the quality of these images.

Why did the original cameras not have a sensor identical to the 35mm film cross-section that those bodies were designed for? The answer is simple: technology and cost. The technology that continues to “knock our socks off” today was the most limiting factor back in the late 1980s. The cost to manufacture even a 1.2 megapixel, physically small sensor was prohibitive. A 35mm-size sensor cost as much as 20 times as much to manufacture as the smaller sensors. And while over time manufacturers rapidly designed and manufactured sensors holding many more photo-sites (hence, more megapixel capture capability), larger physical sensors remained expensive to manufacture. This explains why the cost of the so-called “full-frame” DSLR is still substantially higher than that of a higher-megapixel DSLR with the smaller sensor.

Why weren’t the original sensors identical to the 35mm frame size?

Megapixel Wars. For the first 10 to 12 years after the introduction of the consumer-affordable DSLRs, there was a huge emphasis – and indeed a “race” – for more and more megapixels. Megapixels translated in many people’s minds into higher quality. There is some truth to this, but it is only part of the story. When I talk about “size” here, I really mean the physical size of the sensor, more than the number of megapixels. As Thom Hogan recently noted in his D800 review, megapixels measure area, so the gain in image dimensions is much smaller than the raw numbers suggest. In other words, the 36 megapixel sensor in the D800 does not create images 3 times larger than the D700’s 12 megapixel sensor (in fact, Hogan estimates that the increase from the D700 to the D800’s image sizes is about 70%). The arguably more important part of the story is the quality of those megapixels.
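
The arithmetic is easy to check. Here is the back-of-the-envelope version, using the D700 and D800 numbers from the paragraph above:

```python
import math

# Megapixels count area, but "how much bigger the picture looks" is linear.
# Tripling the pixel count (12 MP -> 36 MP) only grows each linear dimension
# by the square root of 3 -- roughly the ~70% increase Hogan describes.
d700_mp, d800_mp = 12, 36
linear_gain = math.sqrt(d800_mp / d700_mp)
print(f"{linear_gain:.2f}x per side, about {100 * (linear_gain - 1):.0f}% larger")
# -> 1.73x per side, about 73% larger
```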

My first DSLR was the Nikon D100; a 6 megapixel camera. My “upgrade” to the D200 was 10 megapixels. My current “pro” model D700 is a 12 megapixel camera; while my “backup” D7000 is a 16 megapixel camera. Logically, it would seem that the progression from D100 to D7000 kept getting me to the best sensor. But that is not the case.  The older, 12 Megapixel D700 sensor still yields a much higher “quality” image than the 16 Megapixel D7000 sensor.

You might think that the progression from D100 to D7000 kept getting me to the best sensor. But that is not the case

At the time I bought the D100, there were point and shoot cameras available with higher than 6 megapixel counts. But I could still produce a cleaner, better quality image with the D100 that could be printed larger (I have 13 x 19 prints of images made on the D100 that are indistinguishable at that size from prints made from D700 12 megapixel images). Nikon’s newest “entry-model” consumer DSLR is the D3200, which is a whopping 24 megapixels (only the pro D800 beats it with 36 megapixels – the largest-megapixel DSLR available at this writing). One would think it should make “better” images than the only-12-megapixel D700. But it cannot even come close!

The primary reason for this is size. You can readily see the difference in the image sensor sizes of my 3 current cameras. And the number of photo-sites that are packed onto the sensor and the size of the photo-sites make a huge difference in the quality of the image produced. This is most obvious in the low frequency (shadows and low light) side of the digital photographic equation.

Cruise Dock; Port Everglades, FL
Copyright 2012 Andy Richards

Noise. When a sensor captures light, it is converting light to electrical signals, and certain “stray” signals can produce a grain-like pattern or effect in an image that is referred to as noise. This noise is largely created by low frequency signals (often the product of low-light conditions), but also by heat and other anomalies in the electronic processor. A small sensor, with many photo-sites packed onto it, can create more degrading noise than a larger sensor. It can generate and accumulate more heat because it has less area to dissipate the heat energy. At the same time, the larger photo-sites are capable of capturing more and better detail, yielding a better quality digital image. These higher quality “raw” images, in turn, yield much better files to work with in the post-processing stages. The image here, taken at the Princess Cruise Terminal in Ft. Lauderdale, Florida, in the early morning hours, demonstrates this. The noise is simply unmanageable. The same shot, taken with the D700, would have been salvageable.

G12                                         D7000                                         D700

Angle of View. Another controversial area over the years has been the concept often referred to as “crop factor,” or “magnification factor.” When manufacturers began making digital interchangeable-lens camera bodies, they simply adapted the current 35mm SLR body. While one might wonder why they didn’t just design and build a new “digital” body, the most obvious answer is probably again based on economics. It was probably much less costly to adapt the 35mm SLR body. And, it meant not having to create a completely new lens mount and require all of us to buy a whole new series of lenses. They had a ready-made consumer, just waiting to purchase the DSLR and use their existing lenses. The composite above is shots from the same tripod position, taken at 140 mm on all three cameras, at their maximum aperture. The G12 is obviously with its built-in lens. The Nikons are with a 70-200mm f2.8 zoom.

The “crop factor” / “magnification” debate doesn’t really matter

The first sensors were significantly smaller in cross-section than the 35mm film frame. The SLR lenses were designed for the larger 35mm cross-section. So, the sensors only used some of the inner part of the lens’s image circle, effectively “cropping” the outer part. The effect of this was to create a narrower angle of view. The appearance is a “telephoto effect.” The practical effect was that you lost your wider angle and “gained” a longer view on all of your lenses, by a factor of 1.3 or 1.5. Because these sensors were similar in size to the film frame of the (largely failed) APS-format Nikon Pronea camera, they came to be known as “APS” size sensors (Nikon has since denominated their “APS”-sized sensor as a “DX” sensor).
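
A quick worked example of that “factor of 1.3 or 1.5” – just the standard field-of-view arithmetic, with Nikon’s 1.5x DX factor assumed:

```python
def fx_equivalent_focal_length(focal_mm, crop_factor=1.5):
    """Field-of-view equivalent on a 35mm ("FX") frame for a lens mounted in
    front of a smaller APS/"DX" sensor. The lens itself does not change; the
    smaller sensor simply records a narrower slice of its image circle."""
    return focal_mm * crop_factor

for f in (18, 50, 200):
    print(f"{f}mm lens frames like a {fx_equivalent_focal_length(f):.0f}mm lens on FX")
# 18mm -> 27mm, 50mm -> 75mm, 200mm -> 300mm
```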

There is no such thing as a “full frame” camera

There has been a considerable amount of debate and even “flame wars” on the internet over this concept. It really doesn’t matter. The debate is, for practical shooting, silly. The reality is that if you have a wide angle lens on an APS sensor, it just won’t view or capture as wide. Conversely, if you have a longer lens, you get a narrower angle of view, effectively increasing the “telephoto” effect of the lens. Depending on your intended use, this can be a good thing or a bad thing (I actually made my recent backup decision to buy an APS-size sensor partly on the premise that it would give me slightly more length for certain wildlife shooting).

Likewise, the controversy over whether a camera is or is not “full frame” is non-productive. But I’ll weigh in anyway :-). There is simply no such thing as a “full-frame” camera. I know that might draw some debate, but it is just a reference point. To a lifelong 35mm shooter, “full frame” means 35mm (24mm x 36mm). But to a Medium Format or View Camera user, that’s hardly “full.” Indeed, a View Camera 8 x 12 sheet of film makes a so-called “full frame” 35mm look like what it really is: a postage stamp! Again, it’s a reference point. To my way of thinking, the larger the better.

But there are practical considerations. When I bought my 6 megapixel APS-frame D100, I carried a couple of 2G flash cards around. With my 12 megapixel “full frame” (Nikon refers to their 35mm-size frame as “FX”) D700, I use 8G cards. With the new D800 36 megapixel FX sensor, I don’t even know how large the cards would have to be. And with each of these increases in file size, an equal increase in hard drive storage and computer processing power (memory) is needed. At some point, it may become overkill. When Nikon announced their flagship D4, at “only” 16 megapixels (same as the D7000 consumer body), many of us thought it might signal the end of the megapixel wars. But they have since released the D800 FX 36 megapixel body. Interestingly, the D800 is the “entry-level pro” camera and the D4 is the flagship pro body at a significantly higher price point (suggesting that a lot of working pros don’t consider the megapixel issue significant anymore at 16 megapixels). I cannot personally see a need for more than double the capacity of my D700, which creates some splendid digital images in low light conditions.

It is difficult or impossible to get those pleasing out of focus effects on a point and shoot camera

Canon G12; 140mm; f4.5

Depth of Field. Interestingly, sensor size also affects depth of field. This is analogous to the film reference above (as was the comment about “full frame,” Medium and Large Format). It relates to the field of view and image-size geometry. I am not capable of explaining the science here and will leave it to more capable persons. But as a general rule, the smaller the sensor, the greater the depth of field for a given lens focal length. This explains why it is difficult or impossible to get those pleasing out-of-focus effects on a point and shoot camera. As you can see, the G12 image, which is shot at its longest focal length and its widest aperture, still captures the background in reasonably sharp focus. On the D700, everything is blurred except the main subject (of course a big part of that is the wider aperture).

Nikon D700; 140mm; f2.8

In the end, it is still first and foremost, about quality! When the consumer affordable DSLRs first came on the market, there was an immediate hue and cry for a “full frame” (i.e., 35mm size) sensor. The main reason, I believe, was because those of us used to using 35mm didn’t like the change required when thinking about and using our existing lens arsenals. I bought an 18mm lens and it solved my concern. Over time, the camera lens manufacturers have answered the call by designing lenses specifically for the APS sensor size. This has created some of its own issues, as we now have 35mm size sensors available. A “DX” lens on my “FX” D700 will have a big black vignette circle in the viewfinder, but will simply crop the image to the DX format. It just means we have to plan and think about what and how we are shooting.

My primary criterion for purchasing a camera will always be its sensor quality

When the D700 came out, I immediately decided to buy it. The sensor size really had absolutely nothing to do with it. I had “moved on” to the smaller sensor / lens combinations and had a bag of lenses that worked well for me. Buying the D700 made me have to re-think that and it was, frankly, in the short run, a nuisance to do so. Eventually, I sorted that all out. But my primary criterion for purchasing the D700 – and it will probably always be my primary consideration in purchasing any camera – was sensor image quality. And there is just no question in my mind that the camera I have in the bag that yields the highest quality image is the one with the biggest sensor.

For lots more than you ever wanted to know about sensors, sensor size, and electronics, see Wikipedia and this DPreview page.