Sensors used in Image Capturing

Wimarshika Thamali
4 min read · Oct 28, 2020


Image sensors are used to detect and convey the information needed to form an image. An image sensor converts the variable attenuation of light waves (light passing through or reflecting off objects) or other electromagnetic radiation into electrical signals, small bursts of current that carry information. The resulting signals can be viewed, analyzed, or stored. Image sensors are solid-state devices and serve as one of the most important components inside a machine vision camera. Image sensors can be classified according to several criteria, as follows:

  1. Structure type — CCD or CMOS
  2. Chroma type — Color or Monochromatic
  3. Shutter type — Global shutter or Rolling shutter

Other than these criteria, image sensors can also be classified according to resolution, frame rate, pixel size, and sensor format.

How does a typical image sensor work inside a camera?

In a camera system, the image sensor receives incident light (photons) through a lens or other optics. If the sensor is a CCD (Charge-Coupled Device), it transfers this information as a voltage; if the sensor is a CMOS (Complementary Metal Oxide Semiconductor), it transforms the information into a digital signal. CMOS sensors convert photons into electrons, then into a voltage, and then into a digital value using an on-chip Analog-to-Digital Converter (ADC). Different camera manufacturers use different layouts and components, but the main purpose of the layout is always to convert light into a digital signal that can then be analyzed to trigger some future action. Consumer-level cameras have additional components for image storage (a memory card), viewing (an embedded LCD), and control knobs and switches that machine vision cameras do not.
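The photon-to-digital pipeline described above can be sketched in a few lines of Python. All numbers here (full-well capacity, quantum efficiency, conversion gain, ADC bit depth) are illustrative assumptions, not values from any particular sensor:

```python
# Hypothetical sensor parameters, chosen for illustration only.
FULL_WELL_E = 10000        # electrons a pixel can hold before saturating
QUANTUM_EFFICIENCY = 0.6   # fraction of incident photons converted to electrons
VOLTS_PER_ELECTRON = 5e-6  # charge-to-voltage conversion gain
ADC_BITS = 12              # resolution of the on-chip ADC
V_REF = FULL_WELL_E * VOLTS_PER_ELECTRON  # ADC reference voltage

def sense_pixel(photons: int) -> int:
    """Convert incident photons to a digital number (DN), CMOS-style."""
    # Photodiode: photons -> electrons, clipped at the full-well capacity
    electrons = min(int(photons * QUANTUM_EFFICIENCY), FULL_WELL_E)
    # Charge-to-voltage conversion at the pixel site
    voltage = electrons * VOLTS_PER_ELECTRON
    # On-chip ADC quantizes the voltage into a digital value
    return int((voltage / V_REF) * (2**ADC_BITS - 1))

print(sense_pixel(0))      # dark pixel  -> 0
print(sense_pixel(20000))  # overexposed -> 4095 (full scale at 12 bits)
```

Note how the clip at `FULL_WELL_E` models saturation: beyond that exposure, brighter light no longer changes the output value.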

Sensor classification according to Structure Type

CCD sensors (Charge-Coupled Device)

CCD sensors were invented over 30 years ago. A CCD is a 2D array of photosensitive pixel sites; the number of charge sites gives the resolution of the sensor. Each site generates a charge proportional to the incident light intensity. Each column of sites is emptied into a vertical transport register (VTR).

The VTRs are emptied pixel by pixel into a horizontal transport register (HTR).

The HTR empties the information row by row into a signal unit.
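The VTR/HTR readout sequence above can be sketched as a small simulation. This is a simplified model, assuming an ideal sensor with lossless charge transfer and the HTR at the bottom edge:

```python
import numpy as np

def ccd_readout(charge: np.ndarray) -> list:
    """Read out a 2D charge array the way a CCD does: every column shifts
    one step toward the horizontal transport register (HTR), and the HTR
    then empties its row pixel by pixel into the output."""
    rows, _ = charge.shape
    vtr = charge.copy()
    out = []
    for _ in range(rows):
        htr = vtr[-1].copy()            # bottom row enters the HTR
        vtr = np.roll(vtr, 1, axis=0)   # all columns shift down one site
        vtr[0] = 0                      # the top row is now empty
        out.extend(htr.tolist())        # HTR empties pixel by pixel
    return out

frame = np.array([[1, 2],
                  [3, 4]])
print(ccd_readout(frame))  # -> [3, 4, 1, 2]: bottom row first, pixel by pixel
```

The sequential shifting is why a CCD has a single output amplifier stage and uniform response, but also why its readout is slower than the parallel, per-column readout of a CMOS sensor.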

Advantages of CCD

  2. Relatively simple
  3. More sensitive than photographic film
  4. Cheaper to replace upon failure
  5. Modular design allows easy upgrades
  6. Simple detector costs

Disadvantages of CCD

  1. Demagnification is a major issue
  2. Varies with application
  3. Very expensive
  4. Potentially lower DQE (detective quantum efficiency)
  5. Irregularity of the charge sites’ material
  6. Blooming effect
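Blooming, the last disadvantage listed, occurs when a charge site overflows its full-well capacity and the excess spills into neighboring sites along the column, producing the familiar vertical streaks around bright lights. A minimal one-column sketch, with a made-up capacity value:

```python
import numpy as np

FULL_WELL = 100  # hypothetical full-well capacity, arbitrary charge units

def bloom_column(charge: np.ndarray) -> np.ndarray:
    """Simulate blooming along one CCD column: charge beyond a site's
    full-well capacity spills into the next site down the column."""
    out = charge.astype(float).copy()
    for i in range(len(out) - 1):
        excess = max(out[i] - FULL_WELL, 0.0)
        out[i] -= excess
        out[i + 1] += excess          # overflow spills into the neighbor
    out[-1] = min(out[-1], FULL_WELL) # excess at the edge is simply lost
    return out

col = np.array([0, 250, 0, 0, 0])
print(bloom_column(col))  # -> [0. 100. 100. 50. 0.]: one hot pixel streaks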

CMOS (Complementary Metal Oxide Semiconductor)

In a CMOS sensor, the charge from the photosensitive pixel is converted to a voltage at the pixel site, and the signal is multiplexed by row and column to multiple on-chip analog-to-digital converters (ADCs). Inherent to its design, CMOS is a digital device. Each site is essentially a photodiode and three transistors, performing the functions of resetting or activating the pixel, amplification and charge conversion, and selection or multiplexing. This leads to the high speed of CMOS sensors, but also to lower sensitivity as well as high fixed-pattern noise due to fabrication inconsistencies in the multiple charge-to-voltage conversion circuits.
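The fixed-pattern noise mentioned above can be illustrated with a toy model: each pixel gets its own slightly wrong gain (standing in for mismatch in the per-pixel conversion circuits), and because that error is fixed rather than random per frame, it repeats identically in every capture and can be calibrated out. The gain spread here is an arbitrary illustrative value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-pixel gain error from fabrication mismatch in the
# charge-to-voltage conversion circuits; 3% spread is illustrative.
H, W = 4, 6
gain = 1.0 + rng.normal(0.0, 0.03, size=(H, W))  # fixed for the sensor's life

def capture(scene: np.ndarray) -> np.ndarray:
    """Each pixel applies its own (slightly wrong) conversion gain."""
    return scene * gain

flat = np.full((H, W), 1000.0)          # perfectly uniform illumination
frame1, frame2 = capture(flat), capture(flat)

# "Fixed" pattern: the same noise appears in every frame...
assert np.allclose(frame1, frame2)
# ...so a flat-field calibration can divide it back out.
corrected = frame1 / gain
assert np.allclose(corrected, flat)
```

This is also why temporal averaging does not reduce fixed-pattern noise: unlike shot noise, it does not change from frame to frame.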

The multiplexing configuration of a CMOS sensor is often coupled with an electronic rolling shutter, although with additional transistors at the pixel site, a global shutter can be accomplished, wherein all pixels are exposed simultaneously and then read out sequentially.
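The difference between the two shutter types comes down to exposure timing. In this sketch (all timing numbers are illustrative), each rolling-shutter row starts its exposure one row-time later than the previous one, which is what produces skew on fast-moving subjects; under a global shutter, every row shares one window:

```python
ROWS = 5
EXPOSURE = 10.0  # ms each row integrates light (illustrative)
ROW_TIME = 1.0   # ms offset between consecutive rows' readout (illustrative)

def rolling_windows(rows: int = ROWS) -> list:
    """(start, end) exposure window per row: each row starts a bit later."""
    return [(r * ROW_TIME, r * ROW_TIME + EXPOSURE) for r in range(rows)]

def global_windows(rows: int = ROWS) -> list:
    """(start, end) exposure window per row: all rows expose together."""
    return [(0.0, EXPOSURE)] * rows

print(rolling_windows())  # each row's window shifts by 1 ms -> motion skew
print(global_windows())   # identical windows -> no skew
```

For the rolling case, the last row here finishes 4 ms after the first, so anything that moved during those 4 ms appears sheared across the frame.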

Advantages of CMOS

  1. Lower power consumption compared to CCD sensors
  2. Ability to handle higher light levels without the blooming effect
  3. Cheaper than CCD
  4. Enables intelligent cameras with on-board processing
  5. Small feature size, at around 0.1 micrometer

Disadvantages of CMOS

  1. Analog circuit design: leading-edge processes are not characterized and tuned for analog circuit design.
  2. Photodetectors: the photodetector structures are not characterized in any of the processes; it is the designer’s responsibility to ensure that the photodetectors function as desired.
  3. Second-order effects: in the scaling process, some second-order device characteristics, such as subthreshold operation, are usually ignored or given less attention, and their cancellation is more desired than their improvement.
  4. Mismatch: mismatch in CMOS devices is relatively high, which especially hinders the reliability of analog processing in vision chips.
