Innovation and computational zoom

Effect of computational zoom (via software) on an image (UCSB)

Toronto. My thanks to my good friend Russ Forfar for bringing PetaPixel’s report on computational zoom to my attention. Software once let us correct only colour balance, exposure, and contrast in an image; this work goes further.

A zoom lens lets us fill the frame with only the meaningful elements of a scene. Note that prime lenses do the same thing with a bit of walking back and forth… By using software to combine image elements taken at various focal lengths, we can create a new view of a scene, making final images that are impossible to capture with today’s cameras.

Photographers know that variations in focal length from extreme wide angle to telephoto affect the look of people and backgrounds. Extreme close-ups with wide angle lenses make noses look too big and faces “stretched” front to back. Telephotos flatten a face, making the nose appear too small and the features pushed together.

On the other hand, a wide angle lens shows more of the background, while a telephoto crops and enlarges it. A so-called normal lens renders facial features much as they appear to the unaided eye. A medium telephoto lets us capture a half- or three-quarter-body shot with normal-looking features.
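How much of a scene each of these lenses takes in can be put in numbers using the standard rectilinear angle-of-view formula. A small sketch (the sensor width and focal lengths here are illustrative choices, assuming a full-frame 36 mm wide sensor):

```python
import math

def angle_of_view_deg(focal_length_mm, sensor_width_mm=36.0):
    # Horizontal angle of view for a rectilinear lens:
    # AOV = 2 * atan(sensor_width / (2 * focal_length))
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Wide angle vs normal vs telephoto on a full-frame camera:
for f in (24, 50, 200):
    print(f"{f} mm lens -> {angle_of_view_deg(f):.1f} degrees")
```

A 24 mm wide angle covers roughly 74 degrees horizontally, a 50 mm “normal” about 40 degrees, and a 200 mm telephoto only about 10 degrees, which is why the telephoto crops and magnifies the background so strongly.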

Now scientists at UCSB (University of California, Santa Barbara) have used software to combine a mix of wide angle and telephoto images of the same scene so that each region can be drawn from a different shot: the foreground, for example, can come from a medium zoom to appear “normal” to the eye, while the background can come from a wide angle shot to expand the view, or from a telephoto shot to magnify and crop it.
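The UCSB technique involves multi-perspective rendering well beyond a simple cut-and-paste, but the core idea of selecting scene regions from images shot at different focal lengths can be sketched as a masked composite of two pre-registered frames. This is only an illustration, not the researchers’ actual method, and the arrays stand in for real photographs:

```python
import numpy as np

def composite_zoom(foreground_img, background_img, mask):
    # Blend two aligned images of the same scene taken at different
    # focal lengths. Where mask == 1 the foreground (e.g. medium-zoom)
    # pixels are kept; where mask == 0 the background (e.g. wide-angle)
    # pixels show through.
    mask = mask[..., np.newaxis]          # broadcast over colour channels
    return mask * foreground_img + (1.0 - mask) * background_img

# Toy example: two 4x4 RGB "images" (synthetic data, not real photos).
medium = np.full((4, 4, 3), 0.8)          # stand-in for a medium-zoom shot
wide = np.full((4, 4, 3), 0.2)            # stand-in for a wide-angle shot
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                      # the "subject" region of the frame

result = composite_zoom(medium, wide, mask)
```

In a real pipeline the two frames would first have to be aligned (registered) so that the subject occupies the same pixels in both, and the mask edge would be feathered to hide the seam.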
