The photographic camera and now the digital camera have long followed the way the human eye is constructed. The technology has become sophisticated and cameras can send inputs to software that discriminates shapes and generates meaning, like the brain does from the images created by the eye.

The combination of high-resolution cameras and computers can resolve enough detail to recognise a human face, and their speed of action is good enough to support the software that manages a driverless car in busy traffic. In this last application, however, which demands fail-proof performance, the digital camera has been found to have limitations.

Missael Garcia, Tyler Davis, Steven Blair, Nan Cui, and Viktor Gruev, from the University of Illinois at Urbana-Champaign and Washington University in St Louis, write in the journal, Optica, how eyes in the animal world, adapted for different conditions, show the way digital cameras could use properties of light waves that the human eye cannot.

All that a camera and computer can make out, when an image is scanned, is a sequence of bright and dark spots, or spots of different colours. The translation of the sequence into a meaningful image is brought about by a process of learning, where a large number of sequences are associated with specific images, or not, so that the system, in time, is able to identify a new sequence as belonging to one category or another.
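The learning process described above can be sketched in a few lines of Python. This is only an illustration of the idea, not the software in the paper: scanned "sequences of bright and dark spots" are stored with labels, and a new sequence is assigned to the category of the closest stored one. The patterns and labels here are invented.

```python
# Toy illustration (not from the paper): sequences of bright (1) and dark (0)
# spots are associated with labels during "training", and a new sequence is
# classified by its nearest known pattern (fewest differing spots).

def hamming(a, b):
    """Number of positions where two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def classify(pattern, training):
    """Return the label of the stored pattern closest to `pattern`."""
    return min(training, key=lambda known: hamming(pattern, known[0]))[1]

# Hypothetical training data: 6-spot scan lines labelled by what produced them.
training = [
    ((1, 1, 0, 0, 1, 1), "vehicle"),
    ((0, 1, 1, 1, 1, 0), "pedestrian"),
]

# A new, slightly different scan is matched to the closest learnt category.
print(classify((1, 1, 0, 1, 1, 1), training))  # prints "vehicle"
```

Real systems learn from millions of examples rather than two, but the principle, of associating sequences with categories and matching new ones against them, is the same.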

Thus, the imaging system can be trained to make out a vehicle, a car or bus or bicycle, for instance, or a pedestrian. Advanced software can then make out one image that is in front of another, and then, movement of objects. In this way, computers have been developed to control a driverless car, by turning the car to the left or right, to speed up and slow down, in the face of a turn in the road or other traffic, obstruction or pedestrians.

With improved optics and electronics, current systems are able to function in ordinary conditions. They run into trouble, however, when there are sudden changes, like when the car moves out of darkness into light, or if an object or obstruction is the same colour as the background, or when it is hazy. It is for these conditions that the team writing in the journal, Optica, proposes an alternative system, which uses properties of light other than those that simply form geometric images.

The way light affects the eye, or the sensors of the camera, is that it is a wave that carries energy. At the level of cells of the eyes, light behaves like a particle and transfers a packet or lump-sum of energy to the cells. This is the action of light that humans are able to sense and it serves to make out shapes and colours.

But, apart from a straight line path and a frequency or colour, the light wave has another dimension, of the plane of vibration of its electromagnetic composition. We are familiar with waves, or ripples, on water, in which the movement of water is up and down, while the wave moves forward horizontally. Light waves, however, are not restricted to the up-down direction and the vibration can be in any plane.

Thus, if a beam of light is moving from left to right parallel to a sheet of paper, the electric vibration is either in and out of the plane of the paper, or up and down within the plane of the paper, or in any other plane, so long as the plane is perpendicular to the direction of the beam. Sunlight, which arises from thermal emission of very hot gases in the sun, consists of waves in all possible planes of vibration.

On reflection, however, there is a selection of the plane of vibration and this is called polarisation of the light. The scattered light from the blue sky and particularly diffused light at sunrise or at dusk is markedly polarised. Different surfaces also impose different modes of polarisation. Light from an object that is before a background of the same colour would hence be distinguishable because of polarisation, even if not by the colour or intensity of light.
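How polarisation separates an object from a background of the same brightness can be put in numbers using Malus's law, a textbook result (not a method from the paper): a polarising filter passes the fraction cos²θ of the intensity, where θ is the angle between the light's plane of vibration and the filter. The intensities and angles below are invented for illustration.

```python
import math

# Malus's law: a polarising filter transmits I = I0 * cos^2(theta), where
# theta is the angle between the light's plane of vibration and the filter.

def transmitted(i0, theta_deg):
    """Intensity passed by a polariser, by Malus's law."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

# Two surfaces reflect the same intensity (indistinguishable by brightness
# alone) but polarise the light in different planes, 0 and 60 degrees.
object_i = transmitted(1.0, 0)        # filter aligned with object's plane
background_i = transmitted(1.0, 60)   # same filter, background's plane

print(round(object_i, 2), round(background_i, 2))  # prints "1.0 0.25"
```

Seen through the filter, the object stands out at four times the brightness of the background, even though both reflect equal intensity to an ordinary sensor.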

As humans have evolved to rely on position and colour for navigation and hunting, and benefit from maximum sensitivity, the cells in the human eye respond equally to light waves of all planes of polarisation. This, however, is not true of some animals, birds or insects, which need to navigate without fixed markers and also to be sensitive to detect prey or food that is not always distinctly visible. Being able to detect the plane of polarisation helps animals know the position of the Sun even when it is hidden behind clouds or to locate specific reflecting surfaces while foraging.

The team writing in Optica considered that tapping this property of light may help the electronic camera overcome its limitations, which arose from its being modelled on the human eye, which relies only on the geometric form of optical images. The mantis shrimp, a crustacean found in the sea, is known to have perhaps the most complex visual apparatus in the animal kingdom.

Each compound eye consists of tens of thousands of clusters of light-sensitive cells, and the eyes can move independently. The eyes have 16 types of detectors of light, unlike the three types, for the primary colours, that humans have. And they detect not only 16 different shades of colour but also six kinds of polarisation of light, the Optica paper says.

Another property of the mantis shrimp’s eyes is that they can distinguish a vast range of intensity of light. This range is possible because the sensitivity is not distributed uniformly, from the dimmest to the brightest, but is well separated for dim light and less so for bright light. This is the reason, the paper says, that the mantis shrimp is a deadly predator in dimly lit waters and has inspired several artificial colour and polarisation imaging systems. In order to match the polarisation-resolving quality of photo-detectors with the sensitivity that they have to different colours, the team mimicked the architecture of the eye of the mantis shrimp.
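The non-uniform sensitivity described above, fine steps in dim light, coarser steps in bright light, behaves roughly like a logarithmic response. The sketch below is my own illustration of that idea, not the circuit in the paper, and the noise floor value is invented.

```python
import math

# Rough illustration (not the paper's circuit): a logarithmic response
# spreads out dim intensities and compresses bright ones, so small changes
# near darkness remain distinguishable while very bright light does not
# saturate the scale.

def log_response(intensity, floor=1e-4):
    """Map a light intensity (0..1] to a response with fine low-light steps.
    `floor` is an assumed noise floor, included so zero input is defined."""
    return math.log10(intensity + floor) - math.log10(floor)

# The same small absolute step in intensity produces a large response
# change when the scene is dim...
dim_step = log_response(0.002) - log_response(0.001)
# ...and a much smaller response change when the scene is bright.
bright_step = log_response(0.902) - log_response(0.901)

print(dim_step > 10 * bright_step)  # prints "True"
```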

The shrimp's eye is structured into three parts, the paper says, two peripheral hemispheres and a midband section. The hemispheres have alternate stacks of microscopic projections and respond to light of opposite polarisation. The artificial imager consists of over a hundred and ten thousand pixels, and each has a filter sensitive to one of four kinds of polarised light. And the electronics is arranged to react sensitively to variations in low intensity and more broadly when the intensity rises. This gives the system a wide intensity range, like the eye of the mantis shrimp.
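Readings from four differently oriented polarisation filters, commonly set at 0, 45, 90 and 135 degrees in such imagers, can be combined into the standard Stokes parameters of textbook optics, from which the degree of polarisation follows. The formulas below are standard; whether the paper's electronics computes exactly this is an assumption, and the numbers are invented.

```python
import math

# Standard linear Stokes parameters from four polariser-filtered readings
# taken at 0, 45, 90 and 135 degrees (a common arrangement; illustrative).

def stokes(i0, i45, i90, i135):
    s0 = (i0 + i45 + i90 + i135) / 2.0  # total intensity
    s1 = i0 - i90                        # horizontal vs vertical preference
    s2 = i45 - i135                      # diagonal preference
    return s0, s1, s2

def degree_of_linear_polarisation(i0, i45, i90, i135):
    """0 for unpolarised light, 1 for fully linearly polarised light."""
    s0, s1, s2 = stokes(i0, i45, i90, i135)
    return math.hypot(s1, s2) / s0

# Fully polarised light aligned with the 0-degree filter:
print(degree_of_linear_polarisation(1.0, 0.5, 0.0, 0.5))  # prints "1.0"
```

A pixel cluster that reports a high degree of polarisation is seeing a reflecting surface or scattered skylight, information an ordinary intensity-only camera throws away.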

The result is a photo-sensor system that works at thirty frames a second, with a huge range of intensity discrimination and exceptional sensitivity, the paper says. The compact size and the low cost of the arrangement make it suitable for use in automobile automation or unmanned, remote sensing equipment, the paper says.

The writer can be contacted at [email protected]