Toronto. Brian Coe’s book, “Colour Photography” gives a wonderful overview of the efforts to capture the colours of nature through photography, not by painting the monochrome print. In 1802, the Young-Helmholtz theory of colour vision (first proposed by Young, then extended by Helmholtz around 1850) suggested the human eye had three receptors, each tuned to a different band of the visible spectrum. Signals from all three bands were combined by the brain to create the full range of colours we see.
James Clerk Maxwell proved this (more or less) in his famous three-colour projection experiment of 1861; he died in 1879, aged just 48. After many frustrating years of experimentation, it was realized that the best way to capture colour was a tri-pack of panchromatic black and white emulsions interspersed with filters, together with couplers that reacted with the oxidized developer during processing to form colour dyes. An additive system gave rise to colour transparencies (also called reversal film or slides), while a subtractive system was used for colour negative film and colour paper. Early on, these schemes were extremely slow. When the minicam era began, the need for better colour materials accelerated.
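The additive/subtractive distinction can be made concrete with a toy sketch (the numeric model here is my own simplification, not anything from Coe's book): additive mixing sums coloured lights, as in Maxwell's three-projector demonstration, while subtractive mixing passes white light through dye layers that each absorb part of it, as in colour negative film and paper.

```python
# Toy model of additive vs. subtractive colour mixing.
# Colours are (R, G, B) intensities in the range 0-255.

def additive(*lights):
    """Additive mixing: superimposed lights sum channel by channel,
    clipped at full brightness (Maxwell's projection experiment)."""
    return tuple(min(255, sum(channel)) for channel in zip(*lights))

def subtractive(white, *dyes):
    """Subtractive mixing: each dye layer transmits only a fraction
    of the light reaching it, so layers multiply (film and paper)."""
    result = list(white)
    for dye in dyes:
        result = [r * d // 255 for r, d in zip(result, dye)]
    return tuple(result)

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)
CYAN, MAGENTA, YELLOW = (0, 255, 255), (255, 0, 255), (255, 255, 0)

# Red, green and blue lights together give white:
print(additive(RED, GREEN, BLUE))                            # (255, 255, 255)
# Cyan, magenta and yellow dyes together block everything (black):
print(subtractive((255, 255, 255), CYAN, MAGENTA, YELLOW))   # (0, 0, 0)
```

This is why transparencies built on an additive scheme start from darkness and add light, while negative film and paper start from white and subtract it.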
Today, almost all colour screens and sensors still use a version of this three-colour filter approach, whether for television, computer, digital camera or smartphone. The difference is the amazing speed and brightness of these modern products.