Given that machine vision uses cameras, which work in a not-dissimilar way to the eye, we’ll take a closer look at how the human eye works first.
The human eye is the product of around five hundred million years of evolution. Superbly “designed” to take advantage of the available light, its purpose is to project an in-focus image onto a surface which collects the image and sends it to the brain for interpretation.
The light source
The ability to see begins with a light source, such as the Sun or a lamp. Light from the source bounces off the object and enters the eye. Humans can only see “visible” light; that is, light with a wavelength of between 400 and 780nm. However, the electromagnetic spectrum is enormous, and visible light is only a tiny fraction of the radiation that lands on our planet every day.
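As a rough illustration (a sketch added here, not part of the article’s own figures), those wavelength limits can be converted into frequencies using the standard relation ν = c/λ:

```python
# Convert the visible-light wavelength limits (400-780 nm) to frequencies
# using nu = c / lambda, where c is the speed of light in a vacuum.

C = 299_792_458  # speed of light, m/s

def wavelength_to_frequency_hz(wavelength_nm: float) -> float:
    """Return the frequency (Hz) of light with the given wavelength in nm."""
    return C / (wavelength_nm * 1e-9)

# The violet end (~400 nm) is roughly 7.5e14 Hz; the red end (~780 nm) is
# roughly 3.8e14 Hz -- a narrow band within the full electromagnetic spectrum.
print(f"400 nm -> {wavelength_to_frequency_hz(400):.2e} Hz")
print(f"780 nm -> {wavelength_to_frequency_hz(780):.2e} Hz")
```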
Other species (and specialist cameras) are able to use different types of radiation to form images, such as ultraviolet and infrared radiation. And, of course, most of us are familiar with x-rays…
Why can we only see visible light?
Proteins in the cornea and lens absorb ultraviolet rays, and the water contained within the eye blocks transmission of infrared radiation, so these two types of light never reach the back of the human eye.
As an aside, this is where our machine vision cameras could be described as superior: we can build systems that utilise IR and UV radiation to see details that are invisible to the naked eye.
The first thing light hits as it enters the eye is the cornea, the transparent front part of the eye covering the iris, pupil and anterior chamber (which is filled with a fluid called the aqueous humour).
Together with the aqueous humour (watery and transparent, its main function is to maintain the intraocular pressure) and the lens, the cornea refracts light. It accounts for around two-thirds of the eye’s total optical power but, although it provides most of the eye’s focusing, its focus is fixed.
The iris is a thin, circular structure responsible for controlling the eye’s aperture. Muscles attached to the iris expand and contract the aperture at the centre of the iris – the pupil – controlling the amount of light that reaches the retina.
With cameras, the size of the aperture helps to determine the exposure of the image. A narrow aperture admits less light, and will produce a darker image; while a larger aperture admits more light for a lighter image.
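To make that relationship concrete, here is a small sketch borrowing the photographic f-number convention (an assumption for illustration; the article itself doesn’t use f-numbers). The light admitted scales with the aperture’s area, so relative exposure goes as 1/N² for f-number N:

```python
import math

def relative_light(f_number: float) -> float:
    """Relative light admitted, proportional to aperture area (1 / N^2)."""
    return 1.0 / f_number ** 2

def stops_between(n1: float, n2: float) -> float:
    """Exposure difference, in photographic stops, between two f-numbers."""
    return 2 * math.log2(n2 / n1)

# Opening up from f/4 to f/2.8 roughly doubles the light: about one stop.
print(relative_light(2.8) / relative_light(4))   # ~2.04x the light
print(stops_between(2.8, 4))                     # ~1.03 stops
```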
Next, light hits the lens, which fine-tunes the eye’s focus. Our lenses need to be able to flex in order to adjust the focus so we get a clear image of things that are both near and far away. <link to Father Ted sketch>
Human lenses are crystalline, biconvex structures that, together with the cornea, help to refract light and focus it on the retina. In a camera, the lens is usually physically moved closer to or further from the sensor, though some specialist “liquid” lenses use a droplet of oil whose shape is manipulated in much the same way as a human eye lens.
The cornea and lens surfaces are almost spherically curved, giving an optical focal length of about 22.6mm, which is almost exactly the distance from the lens to the photoreceptors in the retina. This means that distant objects are in focus when the eyes are relaxed.
On the back surface of the eye, the light that has passed through the cornea, lens and pupil hits the photosensitive rods and cones of the retina. These are connected by the optic nerve to the brain.
The image is projected onto the retina at the back of the eye – but in vertebrate eyes, the retina is inverted. The photoreceptor cells point inwards, away from the incoming light, so light has to pass through the retina’s other layers before it is detected, and the point where the optic nerve fibres exit the retina has no photoreceptors at all, producing a blind spot. The image that forms on the retina is also upside down and reversed left to right; the brain turns it the right way round. So you could say that we don’t see with our eyes at all, but with our brains.
Interestingly, octopus eyes are almost identical to human eyes – except for the fact that their retinas are NOT inverted, so their brains have less processing to do, and they don’t suffer from a blind spot. Much like cameras.
Images are no good if there’s nothing to interpret them: the human brain learns what images mean in the first years of life. Similarly, without something doing the processing in your vision system, you just capture meaningless images.
Smart cameras or standard cameras plugged into computers enable all the processing to take place: this may include measurements, comparisons, data collection, and much more.
Are cameras superior to eyes?
In vision system terms: yes! Our eyes grow continuously, and deteriorate with age. The lens becomes increasingly rigid, losing its ability to change shape and focal length – which is why, when people get to around the age of 45, they begin to need reading glasses (as if creaky joints and a dodgy memory weren’t bad enough…). Although cameras can be damaged, they can provide many more hours of continuous use than our eyes.
Similarly, cameras don’t get tired, and they can be very accurately calibrated. Humans are, after all, fallible – and they quickly get bored of staring at the same thing for any length of time!
Like the eye, cameras require a careful mix of equipment and conditions to produce a high-quality image. We’ll be taking a closer look at lenses, filters and lighting soon…