CHAPTER 11 Digital Microscope Cameras 65
There are several methods for enabling a camera to pick up color. Because the
sensors cannot separate colors by their own working principle (they pick up all
photons within the wavelength range where silicon works as a detector, from
about 400 nm up to 1,000 nm), some additional technical means are needed.
One straightforward method is to use colored illumination: shine red, green, and
blue light onto your sample in sequence, take three shots, and then combine the
images into an RGB image, for example, on your computer. A similar method places
a three-color filter wheel in front of the sensor, uses white-light illumination,
and runs the same sequence. This can be done automatically in a modern
microscope where controllable filter wheels and software acquisition control
are available. Another method is to use a camera with three built-in
sensors that look into the scene through a prism. Each sensor sees
the same image through a different RGB color filter at the back of the specially
formed prism. This yields a color image in one shot, as all three channels are
acquired simultaneously. However, it might be more difficult to match such a
camera to the microscope, due to the optical side effects of these beamsplitter
prisms.
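The sequential-illumination approach above can be sketched in a few lines: take one monochrome exposure per illumination color, then stack them into the channels of an RGB image. This is a minimal NumPy illustration; the uniform 4x4 frames are placeholders standing in for real camera data.

```python
import numpy as np

def combine_rgb(red_shot, green_shot, blue_shot):
    """Stack three monochrome exposures (taken under red, green, and
    blue illumination) into a single RGB image."""
    # Each input is a 2-D array of pixel intensities from one exposure.
    return np.stack([red_shot, green_shot, blue_shot], axis=-1)

# Hypothetical 4x4 monochrome frames standing in for real acquisitions.
h, w = 4, 4
red   = np.full((h, w), 200, dtype=np.uint8)
green = np.full((h, w), 120, dtype=np.uint8)
blue  = np.full((h, w),  30, dtype=np.uint8)

rgb = combine_rgb(red, green, blue)
print(rgb.shape)   # (4, 4, 3)
print(rgb[0, 0])   # [200 120  30]
```

The same stacking step applies whether the three frames come from colored illumination or from a filter wheel in front of the sensor.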
The last and most common method is a color pattern filter on top of the pixels.
This means that a specific R, G, or B color filter sits on each single pixel, so a
single pixel can only detect one color. These color filters are arranged in certain
regular alternating patterns on the sensor. The best known one is the so-called
Bayer pattern, named after the Kodak scientist who invented it many years
ago. The pattern implies a certain, small loss of spatial resolution, as, for example,
a green pixel only detects spatial information in the green channel and misses
the blue and red information at that same spot. However, with the help of special
computer interpolation algorithms, the huge number of small pixels, and
careful adaptation to the microscope, these effects can be minimized and are negligible
most of the time. But in certain cases, when you go to the limits of your instrument,
it might be worth giving it a little more thought.
And one more thing: color is only defined within the spectral range in which the
human visual system works; it is determined purely by our physiological capabilities,
covering roughly the range from 400 nm (deep blue) up to 720 nm (deep red).
However, silicon-based sensors can see and convert photons in a broader
wavelength spectrum, up to 1,000 nm, as mentioned previously. So every color
camera in the world needs an infrared-blocking filter to cut off the invisible light
from 720 nm up to 1,000 nm, since no color can be assigned to this signal. Indeed,
the color reproduction of a color camera would be very poor if you operated
such a system without the IR filter in place.
All of these color acquisition methods have one effect in common: they reduce
the camera's sensitivity compared to a monochrome camera. They attenuate the
signal intensity depending on the wavelength and cut off the near-infrared spectrum.