Toronto. Photographic lenses, as you likely know, use multiple glass elements to eliminate distortion and flatten the field. Early cameras were large and bulky, with terribly slow sensitive media. As a general rule, the long-focal-length, slow lens designs required very few elements for correction.
As focal lengths shortened, lenses were designed to be faster, and coverage at the focal plane grew larger, lens designs became more complex. Adding elements was limited by inter-element reflections. After WW2, lens coatings allowed more elements to be used; the coatings significantly reduced or eliminated the internal reflections.
This led to zoom lenses with their many, many elements. Photographers debated lens designs and makers as they sought the ‘best’ lenses for their cameras. By the time digital sensors arrived and film began to fade from sight, photographers had largely stopped worrying about lens design. Today, with every smartphone equipped with a camera, we no longer bother debating the lens (or camera) maker either.
Computational photography meant computers could not only focus our lenses but also apply corrections (especially for geometric distortion).
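For the curious, here is a minimal sketch of the kind of radial remapping that geometric distortion correction performs in software. The Brown-Conrady radial polynomial is a standard model, but the coefficients, function name, and optical centre below are hypothetical, chosen only to illustrate the idea; this is not any camera maker's actual pipeline, which would build an inverse remap over the whole image and resample it.

```python
# Illustrative sketch only: apply a simplified Brown-Conrady radial
# polynomial to a few pixel coordinates. Coefficients are made up.
import numpy as np

def radial_remap(xy, k1, k2, center):
    """Scale points radially about the optical centre.

    xy     : (N, 2) array of pixel coordinates
    k1, k2 : radial coefficients (hypothetical values here)
    center : (cx, cy) optical centre in pixels
    """
    d = xy - center                              # offset from optical centre
    r2 = np.sum(d * d, axis=1, keepdims=True)    # squared radius per point
    scale = 1.0 + k1 * r2 + k2 * r2**2           # radial scaling factor
    return center + d * scale

if __name__ == "__main__":
    pts = np.array([[100.0, 150.0], [800.0, 600.0]])
    out = radial_remap(pts, k1=-2e-7, k2=5e-14,
                       center=np.array([960.0, 540.0]))
    print(out)  # points pulled inward, roughly countering barrel distortion
```

With a negative k1, points far from the centre are pulled inward, which is roughly what counteracts the barrel distortion of a tiny wide-angle phone lens.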
In the June 2023 issue of the engineering journal IEEE Spectrum, Charles Choi gives an update titled “Hybrid meta-optics takes high-grade photos without bulky, conventional optics”. The strategy promises even thinner lens designs when coupled with computers. Have a read!
My thanks to good friend George Dunbar for sharing this interesting article about how future lenses may be shrunk even further than today’s tiny marvels, with even better resolution (and perhaps faster speeds)!
NB. Apologies to Rick Moranis and the 1989 movie, “Honey, I Shrunk the Kids”.