I’ve had a draft of the post below “in the can,” so to speak, for months now. I first started it in October 2017, picked it up again in November, and then put it aside, so it seemed, for good. But I continued to poke at it subconsciously. More to the point, I’ve continued to pursue its subject, daily, since then. So I’ll put this up now, with the understanding that I think I’ve gone a ways further with the experience described below, and I’ll resume it later, with a Part 1 (and perhaps subsequent ones).
Image: Mom, January 1972: an “arty” photo taken with the first camera I owned. Looking back through old pictures, I discovered this through-the-crook-of-the-elbow viewpoint seemed to be an early favorite.
It took me a loooong time to get into taking photos with a smartphone. But I’ve recently tried to suppress my annoyance with the limitations by regarding annoyance as simple disguised resistance — resistance, that is, to learning a new skill.

The impetus for this newfound patience: a challenge posed by a Facebook friend. “7 days, 7 pictures of your life,” enjoined the meme. “No words, no explanations — just black and white.” (Since it was a meme, after all, it further instructed us to pass the challenge on to someone different with each image posted.)
Despite my reluctance, I’d been considering some kind of phone-photography project for a couple years — even better, a black-and-white project. The challenge spoke directly to this prospect; my resistance had become, as they say, futile — irrelevant.
(As an aside, I admit I was also hungry for some sort of distraction at the time. The Pooch had passed on to greener pastures just a few weeks before, and I felt quite desperate for distraction from that grief.)
Now, smartphones do not by nature take black-and-white photos. To get around this barrier for a project like this, either (a) you can edit the photo later, applying a digital filter which desaturates the colors to simple grayscale; or (b) your camera app may include a black-and-white filter right upfront, so you can satisfy yourself with the grayscale effect(s) before snapping the shutter.
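(For the technically curious, option (a) boils down to something very simple under the hood. Here is a minimal sketch in Python, using the Pillow imaging library, of what a desaturate-to-grayscale edit amounts to; the filenames are made up for the example.)

```python
# A minimal sketch of option (a): converting an existing color photo to
# grayscale after the fact. Filenames here are purely hypothetical.
from PIL import Image

photo = Image.open("IMG_1234.jpg")    # the original full-color JPG
grayscale = photo.convert("L")        # "L" mode = 8-bit grayscale
grayscale.save("IMG_1234_bw.jpg")     # saved as a new, desaturated copy
```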
I’d experimented with option (a) long before and found it generally one of those “This still isn’t a camera” disappointments, mostly because the filters didn’t behave as expected… which, of course, I would discover way too late for a reshoot. (Using this kind of second-guess “correction” also encourages over-correction — using multiple effects not just to desaturate the photo, but also to tinker with the brightness and contrast, adjust the sharpness, add old-timey sepia coloration, and so on. And if you’re shooting JPGs — the default format available in most smartphones — every little adjustment and resave actually degrades the image quality… exactly the wrong “special” effect.)
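(If you doubt the resave penalty, it’s easy to demonstrate. The little experiment below is my own illustration, not anything from a particular app; the filenames and the quality setting are arbitrary choices.)

```python
# A rough demonstration of why repeated edit-and-resave cycles hurt a JPG:
# every save re-runs the lossy compression, so artifacts accumulate.
# Filenames and the quality value are arbitrary choices for the experiment.
from PIL import Image

img = Image.open("snapshot.jpg")
for _ in range(10):
    img.save("snapshot_resaved.jpg", quality=75)   # lossy re-encode
    img = Image.open("snapshot_resaved.jpg")       # reload the degraded copy

# Compare snapshot.jpg and snapshot_resaved.jpg side by side: after ten
# round trips the compression artifacts are noticeably worse.
```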
This time around, for option (b), I searched for a popular, well-tested and well-reviewed Android app to show me the filter effect before I “clicked” the “shutter.” After trying several of these, I settled on one called BlackCam Pro. (The screen capture at right shows you the main BlackCam user interface.) I make no claim for this app on technical photographic or image-processing grounds; for the time being, it simply takes photos which appeal to me aesthetically (for lack of a better, less self-conscious word). BlackCam also encourages me to impose two further limitations on myself:
- The black-and-white images, both as shown on-screen and after the shutter is operated, are square, a limit I have not dealt with in all the 40+ years I’ve been “seriously” taking pictures. I like the way this format seems to accentuate the classic b&w look of each photo, as if coming from a mid-20th-century Kodak Brownie or Polaroid camera.
- Like many of these apps, BlackCam first takes a regular, full-color, rectangular photo, to which it then applies the b&w filter while cutting it down to square size (I sketch this crop-and-convert step just below), matching what you see on the screen. But BlackCam also allows you to not save the original color image. If you choose this option, as I have, you give up even the possibility of almost any post-processing. (You also save a lot of space in the camera’s memory — a real issue for me, since my camera can’t use removable memory cards as offline storage.)
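Here, roughly, is what I imagine that crop-and-convert step looks like in code. This is guesswork on my part, in Python with the Pillow library, not BlackCam’s actual implementation, and the filenames are invented:

```python
# A guess at the basic crop-to-square-then-desaturate step an app like
# BlackCam might perform. Not the app's real code; filenames are invented.
from PIL import Image

frame = Image.open("full_color_frame.jpg")   # the rectangular color capture
w, h = frame.size
side = min(w, h)                             # largest square that fits
left, top = (w - side) // 2, (h - side) // 2

square_bw = frame.crop((left, top, left + side, top + side)).convert("L")
square_bw.save("square_bw.jpg")              # only this result gets kept
```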
Together, these two restrictions have forced me to take more care in composing shots and selecting subjects than I’ve had to since starting to use a smartphone.
Let’s say I want to photograph a tall tree standing by itself in the middle of a clearing. Denied a rectangular frame and the option of cropping the image afterward (in the darkroom for a real photograph, or with software after the fact), shooting this so it doesn’t look weird and empty is well-nigh impossible. Instead, I must figure out how to correct for this right from the start. Can I move a little left or right, so that other objects are at least visible to either side of the tree? Maybe I should move way in, filling the frame with tree bark instead of empty space?

Doing without any post-shot processing has been a far bigger challenge: knowing that a shot could be hugely (and easily) improved if I just, say, trimmed away that distracting tree trunk on the left, or corrected a slight tilt in the composition, or bumped up the brightness and contrast — it’s been a temptation, I’ll tell you.
I’ve succumbed to that temptation a couple of times. This hasn’t (I console myself) come from a failure of will or good intentions, but from acknowledging certain realities of everyday smartphone photography.
Reality #1: the ergonomics of smartphones don’t require physical stability. They’re meant primarily as mobile phones, of course, and Internet-access devices. Their light weight is a feature, not a problem. You carry them in your hand, sometimes for hours at a clip. You walk-and-talk. They don’t demand steadiness of hand or balance in order to be effective for their main functions. (A smartphone you couldn’t use while running through an airport or riding in a bus or taxi wouldn’t be very smart.) The point being, it’s sometimes very hard — even impossible — to take a “still” photo without blurring the image. (You can use tripods and other stabilizing devices, as well as remote-control shutters — by sacrificing mobility and convenience.)
Reality #2: you can’t always see what you’ll get… because a smartphone lacks a viewfinder — a tiny little porthole against which you can place your eye, shutting out everything (reflections, bright lights, visual distractions to the right and left) but the scene the photograph will actually record.
(On the other hand, true, a viewfinder can seem an obstacle in its own right, exactly because to use one effectively you must, by definition, ignore everything outside the field of view. You miss things about to happen, for instance. You miss pedestrians and bicyclists about to ram you from the side. You simply look artificial to passersby, because, well, you’re engaged 100% in an artificial enterprise: freezing a split-second slice of time.)
But using a smartphone, which shows the image on its screen, requires that you distance your eye from the view. You’re generally holding the “camera” a foot or two away, which adds a lot of variables to your attempts to frame the shot effectively. If the sun is directly behind you, for example, you often can’t see the screen at all: the reflection from the screen is too bright. You’re reduced to a lot of guesswork in environments like this.
Finally, Reality #3: I’m familiar with an ill-kept secret of regular film-based photography — to wit, even the most careful, experienced photographer sometimes continues to “tinker” after the shutter is clicked. If you do your own darkroom work this is easy, especially if you don’t mind wasting consumables on failed experiments. And if you send film out to a professional photo lab for development and printing, you can ask them to do, well, a lot. “Push” the processing, say, to correct for likely under-exposure at the time the photos were taken. Crop prints to certain dimensions. Dodge or burn certain areas of the print to highlight or hide particular elements. Print on special archival-quality paper, or on paper and/or via processes offering other special properties: sepia toning, silver-bromide or tintype technologies, and so on.
(Even in photography as “real” as motion pictures, it’s not a stretch to think of film editing — cutting, splicing, resequencing — as after-the-fact “corrections” of the medium’s technology… or of its cost, both human and material.)
So anyhow, once I got past the initial, “pure” seven-day experience of the Facebook challenge, I concede: I have tinkered — very, very slightly — with the images as first taken. I have so far cropped one square image to horizontal, eliminating a distracting foreground between me and the subject, and I have corrected slight off-kilter verticality in a few shots, maybe two or three.
The photo at left is the cropped one. It also illustrates something else I’m slowly acclimating myself not to feel ashamed of: using filters at the time of the shot to adjust how the result will look. This one used a high-contrast filter, which also added a halo around the tree — an effect I really liked when I saw it on the screen, although I hadn’t thought in advance to use it. (In truth, I didn’t even know such a filter was available, and I haven’t used it again.)
I have found, in the two or three months that I’ve been using BlackCam Pro, that I favor higher-contrast filters over lower-contrast ones. I have learned, further, that the user can customize the order in which the filters display in the scrolling list, or even hide some filters altogether — with probably a couple dozen filters available, it can be a nuisance to keep hunting through the list for the ones you prefer.
In the BlackCam screen capture above, you can see the default ordering for the first few filters: None (i.e., color — with not even a B&W filter applied), Classic, Vintage, Vintage Deep, and Noir. Of these, predictably, the highest contrast is the Noir filter, and I gravitated quickly to it as a preference. Then I scrolled to the right a bit more and found Hard Boiled — boy, that one is really dark-and-light! (Witness the shot at the right, taken after dark one evening, sans flash, while the barbecue grill was heating up. See the little spark arcing out of the top of the charcoal chimney?)
Also included with BlackCam Pro are several color filters. This may seem kind of dumb — there’s no red, green, blue, orange, or yellow “color” in a black-and-white shot, after all.
But that logic denies the importance of the color of light coming from the subject. Yes, the result will be a standard, 256-shades-of-gray image. But the subject itself is emitting or reflecting a whole host of colors… and you can use a color filter (either in “real” analog or in digital photography) to affect the black, white, and gray of the resulting image.
Consider the images shown here:
White bowl with apples: unfiltered color image, plus two black-and-white images shot with color filters
As you can see from the leftmost of the three, this bowl contains six apples, two red and four green. Before taking the middle shot, I “turned on” BlackCam Pro’s red filter; for the rightmost one, the green filter. See the difference?
- The red filter lets mostly red light through to the image, so the red areas come out “brighter” than those of the “most opposite” colors, green and blue.
- The green filter, correspondingly, lightens the greens and darkens the reds and oranges.
(The green apples in both black-and-white images are affected less than the red ones, because they’re only light green; by contrast, the reds are “stronger reds.”)
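(For the digitally minded, you can convince yourself of this with a few lines of code. The sketch below is mine, in Python with the Pillow library, not anything BlackCam actually does, and the filename is made up; pulling out a single color channel before collapsing the image to gray roughly mimics shooting through a filter of that color.)

```python
# My own rough simulation of color filters in black-and-white conversion,
# not BlackCam's algorithm. The filename is hypothetical.
from PIL import Image

img = Image.open("apples.jpg").convert("RGB")
r, g, b = img.split()    # each channel is itself an 8-bit grayscale image

# The red channel alone approximates a red filter: red apples render light,
# while greens and blues render darker.
r.save("apples_red_filter_bw.jpg")

# The green channel approximates the green filter, with the reverse effect.
g.save("apples_green_filter_bw.jpg")
```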
One application in which a red filter becomes especially handy while shooting black-and-white: photos in which the sky plays a part. Two more examples for now — these two photos taken seconds apart:
It’s not really that hard to tell the unfiltered one from the red-filtered one, is it?