Computational Photography

The latest Apple iPhone can adjust the depth of field look – Phil Schiller.

Toronto. When photographic processes were announced in January 1839, they were slow, monochromatic, and demanding of both the photographer and the equipment. Over the years the processes were “simplified” and taken over by industry (as Kodak famously said, “you press the button, we do the rest”), sped up, moved to full colour, and became ubiquitous. In the first half of the last century, the amateur fraternity expanded as cameras, films, and processing became off-the-shelf items. Professionals focussed on industrial, portrait, news, and marketing disciplines where proper lighting and/or immediacy demands had to be met.

When digital arrived, it was either expensive, crude, high-end technology for news photographers fighting tight deadlines with little need for high quality or resolution, or cheaper and slower technology for the well-heeled amateur. By this century, prices had fallen and quality risen to the point where a thousand dollars or less bought anyone decent resolution and speed compared to film.

Professionals still spent thousands of dollars on high-end cameras and lenses that approached film quality and far surpassed film speeds. Inkjet printers quickly improved to produce in seconds what film prints could offer only in hours or days. And to add insult to injury, the digital prints were often far more stable over time, not fading to oblivion like many traditional film prints. The 1990s saw the introduction of computer programs like Photoshop that could correct exposure problems on the computer. Later, programs like Lightroom were offered to allow keywording as well as adjustment of each image.

Then along came smartphones with built-in cameras so tiny that their wide-angle-equivalent lenses had a massive depth of field from inches to infinity, like the human eye. Smartphone makers developed sophisticated on-board computers that corrected for white balance, contrast, sharpness, etc., while other smartphone camera technology provided image stabilization. In later years, the tiny computer chips allowed post-processing of lighting effects.
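
One simple example of such a correction is gray-world white balance: assume the average colour of the scene ought to be neutral grey, and scale each channel to push it there. The sketch below is the classic textbook heuristic, not any phone maker's actual algorithm, and the function name is ours.

```python
import numpy as np

def gray_world_balance(image):
    """Gray-world white balance on an 8-bit RGB image (H x W x 3).
    Scales each colour channel so its average matches the overall
    average, nudging the scene's mean colour toward neutral grey."""
    means = image.reshape(-1, 3).mean(axis=0)   # average R, G, B over the frame
    gains = means.mean() / means                # per-channel correction factors
    return np.clip(image * gains, 0, 255).astype(image.dtype)
```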

With film cameras, especially 35mm or bigger, the larger the lens opening, the shallower the depth of field. Cameras like the tiny Minox use such short focal length lenses (the Minox lens is 15mm) that even at f/3.5 the depth of field is immense. The cameras in smartphones have focal lengths approaching 3mm. With the latest release of Apple’s iPhones, the built-in cameras have two lenses – wide angle (equivalent to 35mm) and telephoto.
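
The hyperfocal distance makes the arithmetic concrete: H = f²/(N·c) + f, where f is the focal length, N the f-number, and c the acceptable circle of confusion. Because H grows with the square of the focal length, a 3mm phone lens brings its hyperfocal point dramatically closer than even the Minox's 15mm. A minimal sketch follows; the circle-of-confusion values and the phone's f/2.2 aperture are illustrative assumptions, not official specifications.

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm):
    """Hyperfocal distance H = f^2 / (N * c) + f, in millimetres.
    Focused at H, everything from H/2 to infinity is acceptably sharp."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# Minox: 15 mm lens at f/3.5, circle of confusion assumed ~0.008 mm.
# Phone: ~3 mm lens at an assumed f/2.2, circle of confusion assumed ~0.004 mm.
for name, f, n, c in [("Minox", 15, 3.5, 0.008), ("Phone", 3, 2.2, 0.004)]:
    h = hyperfocal_mm(f, n, c)
    print(f"{name}: hyperfocal {h / 1000:.2f} m, sharp from {h / 2000:.2f} m to infinity")
```

With these assumed values the Minox is sharp from roughly 4 m to infinity, while the phone is sharp from about half a metre out, which is why a phone never needs to be told where to focus for most snapshots.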

Apple’s newest trick is a means to calculate the depth of objects not in sharp focus (as shown by a tiny square outline) and, further, to adjust the degree of depth of field based on a synthetic aperture, even though every image uses the full aperture of the tiny lens. It is now possible to imitate portrait lighting and traditional shallow depth of field, with its soft, blurry bokeh in the out-of-focus areas. Worse yet, few people leave their smartphones at home, meaning far more people than ever have the capability, and do take photographs, automatically corrected to eliminate user error. Sadly, the result is that the need for professionals keeps lessening, as witnessed by the growing amount of pro and studio gear in our fairs and auctions…
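
The sketch below conveys the general idea of such synthetic depth of field, though it is not Apple's actual pipeline: given a per-pixel depth map (derived, for instance, from the two-lens stereo pair), blur each pixel in proportion to its distance from a chosen focal plane, with a strength standing in for the synthetic aperture. The function name and parameters are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_dof(image, depth, focus_depth, strength=4.0, n_bands=8):
    """Fake a wide aperture on an all-in-focus image (H x W x 3 float)
    using a per-pixel depth map (H x W): blur grows with distance from
    the focal plane, applied in discrete bands for efficiency."""
    blur = strength * np.abs(depth - focus_depth)   # desired blur per pixel
    out = np.empty_like(image, dtype=float)
    edges = np.linspace(0.0, blur.max() + 1e-6, n_bands + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (blur >= lo) & (blur < hi)
        sigma = (lo + hi) / 2.0
        # Blur the whole frame at this band's strength (spatial axes only),
        # then keep just the pixels whose depth falls within the band.
        layer = gaussian_filter(image.astype(float), sigma=(sigma, sigma, 0))
        out[mask] = layer[mask]
    return out
```

A smaller synthetic f-number simply maps to a larger strength, so the "aperture" can be dialled after the shot, exactly the trick the phone performs even though the real lens never stops down.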

