CHAPTER-1
INTRODUCTION
The market for image sensors has shown enormous growth in camera sales and development over the last few years. Imaging sensors are mainly classified into two types: complementary metal oxide semiconductor (CMOS) image sensors and charge-coupled device (CCD) sensors. Active pixel sensors (APS) are emerging as replacements for the existing and widely used CCD sensors. Nowadays, APS are extensively used in webcams, robotics, X-ray imaging, computer-based video, smart toys, still and video digital cameras, mobile phone cameras, automobiles, cinematography, spectrography, radiography, photogrammetry, and many scientific applications. These applications are driving researchers to pursue low power consumption, reduced size, higher resolution, greater sensitivity, marginal noise and, more importantly, fast operation.
This work explores state-of-the-art research on APS by reviewing the concepts behind existing designs as well as new designs. Imaging sensors of many varieties are widely used in commercial and scientific applications. CMOS active pixel sensor (APS) imagers are fabricated in standard CMOS processes, which makes it possible to integrate the timing and control electronics, sensor array, signal processing electronics, analog-to-digital converter (ADC) and full digital interface on one chip. This helps to achieve a cost-effective, highly integrated and highly compact imaging system, i.e. a camera-on-a-chip, by utilizing the same design techniques that have been developed over the years for low-power CMOS digital and analog circuits. A CMOS active pixel imaging sensor was designed by the Jet Propulsion Laboratory and manufactured on a standard commercial CMOS production line for space applications. A qualification methodology and reliability analysis approach for imaging sensors has also been developed to include both overall chip reliability and pixel reliability, which is directly related to imaging quality and provides sensor reliability information and performance control. It should be noted that the environmental, mechanical and packaging evaluation procedures and tests are also part of the qualification plan and practice, but are not addressed herein. In addition, the impact of radiation on the imagers - including gamma rays, protons and heavy ions - has also been presented.
1.1 CMOS ACTIVE PIXEL SENSOR
APS Technology: The image sensor is a photodiode-type CMOS active pixel sensor imaging system on a chip, designed by the Jet Propulsion Laboratory and manufactured on a standard commercial CMOS production line. The imager is a 512 by 512 photodiode pixel array, which allows random access to any window in the array, from 1 pixel by 1 pixel all the way to 512 pixels by 512 pixels, in any rectangular shape. The minimum interface consists of five wires: Vdd, Ground, Serial Data Input, Serial Data Output and Clock. The imager size is approximately 10 mm by 15.5 mm.
1.2 DIGITAL
CAMERAS
Cameras are broadly categorized into two types, digital cameras and traditional cameras, based on how film is exposed to light. With improvements in imaging and fabrication technology, digital cameras are becoming more popular than traditional cameras. Digital cameras have many advantages over traditional cameras. They offer high picture quality with a digital-format display and do not have the running cost of film cameras. To get a digital version of a picture captured by a traditional camera, the printed pictures or slides from the camera must be scanned. Photographs taken with digital cameras can be printed selectively using the displays provided on the cameras or by previewing the snaps on a computer. These facilities are not available in traditional cameras. Advancements in VLSI and other associated technologies have brought digital cameras into use for applications like mobile digital photography, computer-based video, and digital video cameras. Moreover, the cost of digital photography is much lower than that of film photography. Photographs taken on digital cameras can also be stored on memory devices like CDs, DVDs and jump drives, and such high-quality images can be transmitted over the internet to any part of the world in no time and at minimum cost.
1.3 PHOTO SENSOR
A photo sensor is a transducer which converts light energy to electrical energy; that is, a photo sensor converts the photons incident on it into electron flow (current). A photo sensor is made of a semiconductor material, generally silicon, that has a property called photoconductivity. The generation of electrons in an electric field depends on the intensity of the light incident on the device. The photodiode, the bipolar phototransistor, and the photo FET (photosensitive field-effect transistor) are the three most commonly used photo sensors. These devices work in the same way as a regular diode, bipolar transistor and field-effect transistor, respectively. The difference between a photo sensor and an ordinary device (MOS, BJT) is that photo sensors have light as an input: these devices have transparent windows that allow light energy to fall on them. The photo sensor used in this project is the photodiode.
1.4 IMAGE SENSOR
A device which converts a visual scene into electrical signals is called an image sensor. The main application of image sensors is the digital camera. An image sensor consists of an array of pixels implemented in either CCD technology or CMOS technology. Before the advent of CMOS image sensors, CCD cameras were dominant; CCDs were mainly used in astronomical telescopes, scanners and video camcorders. After CMOS sensors came into existence, CCDs declined in importance, due to the low cost and the ability to integrate different functions in CMOS sensors. CMOS image sensors have eventually become the image sensor of choice in a large segment of the market. Both CCD and CMOS image sensors capture light on a grid of small pixels on their surfaces; they are distinguished by how the signal is processed and how they are manufactured. There are several major types of color image sensors, differing in their means of color separation:
The Bayer sensor: The most common and low-cost sensor uses the Bayer filter, which passes red, green, or blue light to selected pixels, forming a fixed-pattern grid sensitive to red, green, and blue. The values from these color filters are interpolated using a demosaicing algorithm.
The Foveon X3 sensor: An array of layered sensors is used, where every pixel contains three stacked phototransducers, each sensitive to one of the colors.
The 3CCD sensor: Three discrete image sensors are used, and the color separation is done by a dichroic prism. This sensor is considered the best in terms of quality, and is more expensive than single-CCD sensors.
The sensor used in this project does not have a color filter array on the pixels. Only black-and-white images are captured; in general, the active pixel sensor designed in this project can capture only monochromatic images. In this technical report, we outline the design, simulation, and testing of a CMOS active pixel image sensor. Predicted sales of digital cameras were expected to reach about 83 million units in the year 2008. Continuous advancements in image sensor technology have significantly brought down the manufacturing costs of digital cameras. The basic system-level block diagram of a typical digital camera is shown in Fig. 1.1. The basic components shown in the block diagram of the digital camera system are the shutter, optical lens, analog-to-digital converter (ADC) and digital signal processing (DSP) tool kit; the remaining components are user interfaces built into the digital camera, used for USB and LCD display output. The lens brings light from the scene into focus inside the camera so that it can expose an image. The shutter resets all components for the next incoming image and also controls the intensity of light over a period of time. The RGB pattern (red, green and blue) provides a photographic image; it is used when the light is exposed to form colors for an image displayed on the monitor. The image sensor is another key component of digital cameras, and is the topic discussed in this thesis. The ADC converts the analog signal into a digital signal in order to provide the digital image output. The DSP block monitors image quality, performs various image processing applications, and displays the result on the liquid crystal display (LCD) monitor. CMOS image sensors can easily be integrated on a single chip. This reduces the cost, power consumption and size of the camera, making it suitable for portable electronic applications. CMOS mixed-signal circuit technology allows manufacturers to integrate all components on one chip, so that functions such as ADC timing and exposure control are implemented on a single piece of silicon.
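As a minimal illustration of the ADC step in this pipeline, the following sketch maps an analog pixel voltage to a digital code. The 8-bit depth and 3.3 V reference are assumptions for illustration, not the camera's actual specification:

```python
def adc_quantize(v, vref=3.3, bits=8):
    """Map an analog voltage in [0, vref] to a digital code in [0, 2^bits - 1].

    Illustrative only: vref and bit depth are assumed values.
    """
    code = int(v / vref * (2 ** bits - 1) + 0.5)  # round to nearest code
    return max(0, min(2 ** bits - 1, code))       # clamp out-of-range inputs
```

For example, a mid-scale input of 1.65 V quantizes to a code near the middle of the 0-255 range.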
CHAPTER-2
THEME: ACTIVE PIXEL SENSOR
2.1 CCD IMAGE SENSOR
CCD image sensors were invented in 1969 at Bell Laboratories; at the time, the CCD was conceived primarily as a memory device, and imaging later became its major field of application. Upon exposure of the sensor, the charge on the first row of pixels is transferred to the readout register. The readout register signals are fed to an amplifier and then to an analog-to-digital converter. Once a row has been read, the charges on the readout register row are deleted, the next row is transferred into the first row, and this procedure continues until the last row has been read out.

Fig 2.1 CCD Imager Block Diagram.
2.2 CMOS IMAGE
SENSOR
Unlike CCDs, which often require a specialized process, CMOS image sensors are manufactured in the same standard high-volume factories (fabs) used for mainstream logic chips. The basic difference between CCDs and CMOS is that CMOS image sensors contain circuits which allow the photo value to be stored and read out whenever needed. CMOS is the highest-yielding chip-making process in the world; the latest CMOS processors, such as the Pentium IV, contain almost 55 million active elements. As a result of these economies of scale, the cost of fabricating a CMOS wafer is one-third the cost of fabricating a similar wafer using a specialized CCD process. Costs are lowered even further because CMOS image sensors integrate the pixel along with circuitry, unlike CCDs, which require processing circuits on a separate chip. Early versions of CMOS image sensors had serious noise problems and were used mainly in low-cost cameras. However, later versions of CMOS sensors have relatively low noise and quality equal to CCDs.

Fig 2.2 CMOS Imager Block Diagram.

There are two basic
kinds of CMOS image sensors—passive and active.
2.3 PASSIVE PIXEL
SENSOR
Passive-pixel sensors (PPS) were the first image-sensor devices, used in the 1960s. In a passive-pixel CMOS sensor, a photo sensor converts photons into an electrical charge, which is then boosted on chip by an amplifier. Since the charge is carried through several stages, a significant amount of noise is added to the photo signal in this process. To cancel this noise, additional processing steps are required, sometimes on chip and sometimes off chip.
2.4 ACTIVE PIXEL SENSOR
Active-pixel sensors (APSs) reduce the noise associated with passive-pixel sensors. Each pixel has an extra circuit, an amplifier, which helps cancel the noise associated with that pixel; it is from this amplifier that the active-pixel sensor gets its name. The performance of this technology is similar to that of charge-coupled devices (CCDs), and it also allows for larger image arrays and higher resolution. The image sensor used in this project to capture an image is an active pixel sensor.
2.5 IMAGE SENSOR RESOLUTION
Image resolution is a measure of how sharp images are. Most professional digital cameras have a total of 12 million pixels (3000 x 4000). The human eye has about 120 million pixels and 35mm film about 20 million pixels; these values are difficult for a CMOS imager to match, but the technology is getting closer to those numbers. In the computer world, the term "resolution" describes the screen display, that is, the number of pixels on a screen; for example, a screen may have 1024 pixels horizontally and 768 pixels vertically. The resolution of the active pixel sensor in this project is 32 x 32. To photographers and the optical community, however, resolution is the ability of a device to resolve lines, such as those found on a test chart.
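The 12-megapixel figure above follows directly from the pixel grid:

```python
# Resolution expressed as a pixel count: a 3000 x 4000 grid.
width_px, height_px = 4000, 3000
total_pixels = width_px * height_px   # 12,000,000 pixels
megapixels = total_pixels / 1e6       # 12.0 megapixels
```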
2.6 IMAGE
SENSOR ASPECT RATIO
The ratio of image width to image height defines the aspect ratio. This ratio is represented in the form W:H, where W is the width and H is the height. Most image sensors fall between the square ratio (1:1) and the 35mm film ratio (1.5:1). The aspect ratio of a sensor determines the shape and proportions of the image taken. Images can be resized to a different aspect ratio by cropping, implemented for example as a short program (e.g., in MATLAB); cropping may sometimes lose data or clarity. The aspect ratio used in this project is 1:1 (32 rows and 32 columns).
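The text mentions cropping code written in MATLAB; a minimal equivalent sketch in Python is shown below. The function name and the center-crop policy are illustrative choices, not the project's actual code:

```python
def center_crop(img, rows, cols):
    """Center-crop a 2-D list-of-lists image to rows x cols.

    Pixels outside the cropped window are discarded, which is how
    cropping can lose data.
    """
    h, w = len(img), len(img[0])
    r0 = (h - rows) // 2   # top edge of the crop window
    c0 = (w - cols) // 2   # left edge of the crop window
    return [row[c0:c0 + cols] for row in img[r0:r0 + rows]]
```

For instance, cropping a 4x4 image to 2x2 keeps only the four central pixels, changing the image size while preserving the 1:1 aspect ratio.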
2.7 FRAME RATE
Frame rate is the rate at which entire images are captured: how fast an image is acquired by the sensor and then read out. The frame rate is the number of images taken in one second, i.e. the inverse of the frame period. The term is mostly used for video cameras, computer graphics and motion capture systems. The frame rate is most often expressed in frames per second (fps) or simply hertz (Hz). This project has a frame rate of 976.5 Hz.
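The quoted 976.5 Hz is consistent with a frame period of 1024 μs at the sensor's 2 MHz sample clock. A small arithmetic check follows; the two-samples-per-pixel accounting is an assumption, not stated explicitly in the text:

```python
# Frame rate check for a 32x32 array clocked at 2 MHz (0.5 us per sample).
f_clk = 2e6                        # sample clock, Hz
samples_per_frame = 2 * 32 * 32    # assumed: one photo + one reset sample per pixel
frame_period = samples_per_frame / f_clk   # 1024 us
frame_rate = 1.0 / frame_period            # ~976.56 Hz
```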
2.8 COLOR FIDELITY
The ability of a sensor to replicate the colors of a real scene is called color fidelity. In the imaging world, it is essential to maintain the flexibility to allow color to be graded for the desired image quality. Color digital imaging is complicated by the fact that electronic image sensors are inherently monochromatic. Silicon distinguishes red photons from blue photons through color filters placed over the individual pixels; these filters pass only specific wavelengths of light. Post-processing after readout is then used to reconstruct the color intensity incident on each pixel. Different approaches have different impacts on sensitivity, resolving power, and the design of the overall system. No filters are used in this project, so the output image is simply an intensity map that differentiates light levels at each pixel.
2.9 CMOS
IMAGE SENSOR NOISE
CMOS image sensors have poorer image quality than CCD sensors due to high fixed-pattern noise (FPN), high dark current and poor sensitivity. Identifying the noise sources and canceling the noise will improve the image quality achievable in CMOS technology. Noise sources are present in every part of the sensor, from the photodiode through the column and programmable gain amplifiers (PGA) to the analog-to-digital converters (ADC).
2.9.1 FIXED PATTERN
NOISE
FPN refers to spatial noise and is due to device mismatches in the pixels, variations in the column amplifiers, and mismatches between multiple PGAs and ADCs. Dark-current FPN, due to mismatches in the pixel photodiode leakage currents, tends to dominate, especially with long exposure times. Low-leakage photodiodes reduce this FPN component. Dark-frame subtraction is another option for reducing the dark-current FPN component, but it increases the readout time of the sensor. The most common FPN in image sensors is associated with rows and columns, due to mismatches in the multiple signal paths and uncorrelated row operations in the image sensor. Most of this error appears as offset (dc) noise, which can be canceled using a technique called correlated double sampling. Gain mismatches, on the other hand, are more difficult to remove, since they require more sample time for gain correction.
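Correlated double sampling can be sketched as a per-pixel subtraction of a reset frame from a signal frame. This is a minimal numerical illustration of why per-pixel offsets cancel, not the chip's actual analog implementation:

```python
def cds(signal_frame, reset_frame):
    """Correlated double sampling: subtract each pixel's reset level from
    its signal level. Any offset common to both samples (dc FPN) cancels."""
    return [[s - r for s, r in zip(srow, rrow)]
            for srow, rrow in zip(signal_frame, reset_frame)]
```

If every pixel carries its own offset in both samples, the difference frame contains only the photo signal.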
2.9.2 TEMPORAL NOISE
Temporal noise is the time-dependent fluctuation of the signal level, unlike FPN, which is fixed. Temporal noise arises in the pixel, the column amplifiers, the programmable gain amplifiers and the ADCs. There is also circuit-induced temporal noise due to substrate coupling or poor power-supply rejection.
2.9.3 PIXEL NOISE
Noise sources in the pixel are photon shot noise, reset (kT/C) noise, dark-current shot noise and MOS device noise.
Pixel photon shot noise: Photon absorption is a random process following Poisson statistics. This means that the standard deviation (noise) of the photon count limits the signal-to-noise ratio (SNR) associated with detecting a mean of N photons. The SNR is equal to the square root of the number of photons absorbed:

SNR = N / sqrt(N) = sqrt(N)

Photon shot noise limits the SNR when the detected signals are large. The system noise floor determines the lower limit of the dynamic range of the sensor; the ratio between the largest and smallest detectable signals is called the dynamic range. The signal integrated on a pixel is measured relative to its reset level. The thermal noise associated with this reset level is called the reset, or kT/C, noise. The correlated double-sampling technique is used to eliminate the majority of this noise.
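A quick numerical illustration of the SNR = sqrt(N) relation for photon shot noise:

```python
import math

def shot_noise_snr_db(n_photons):
    """SNR of a Poisson-limited measurement, in dB.

    Poisson statistics: mean N, standard deviation sqrt(N),
    so SNR = N / sqrt(N) = sqrt(N).
    """
    return 20 * math.log10(math.sqrt(n_photons))
```

For example, detecting a mean of 10,000 photons gives an SNR of sqrt(10000) = 100, i.e. 40 dB, no matter how good the rest of the signal chain is.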
2.9.4 COLUMN AMPLIFIER NOISE
The column circuit stores both the pixel reset and photo sample values, and amplifies the difference signal. The major noise sources associated with this circuit are kT/C thermal noise and flicker noise.
2.10 CMOS
IMAGE SENSOR ARCHITECTURE
Passive CMOS sensor pixels (one transistor per pixel) had a good fill factor but suffered from very poor signal-to-noise performance. Active CMOS sensors were introduced to reduce the noise of passive sensors. Most CMOS designs today use active pixel sensors, which have an amplifier in each pixel, a source follower typically constructed with three transistors. A pixel with three transistors is known as a 3T pixel. Other CMOS pixel designs include more transistors (4T and 5T) to reduce noise and/or to achieve simultaneous shuttering. The simpler structures have better fill factor and higher density, while the more complex structures have more functionality. Functionality versus density is one major tradeoff.
2.10.1 3T ARCHITECTURE
Fig. 2.3 shows the 3T architecture APS. The three transistors are the reset transistor, the source follower and the row-select transistor. When reset is high, the pixel is in reset mode. When reset is low, the pixel integrates at a rate dependent on the light falling on it. The source-follower transistor transfers the voltage from the photodiode to the column line through the row-select transistor.

Fig.2.3 3T architecture
2.10.2 4T/5T ARCHITECTURE
Fig 2.4 shows the 4T/5T architecture. The architecture is similar to the 3T architecture except that it has one transfer gate and a MOScap. The transfer gate is used to program the integration time in order to obtain a good-quality image. The MOScap is used to prevent the loss of data from the pixel and to reduce kT/C noise. The use of a transfer gate avoids the rolling shutter commonly used in the 3T architecture. The whole array acts as an analog memory, storing each pixel's value in its cell.

Fig.2.4 4T/5T
architecture
2.11 DESIGN AND SIMULATIONS
2.11.1 ACTIVE PIXEL SENSOR
The active pixel sensor uses the principle of taking two samples from the same pixel and then subtracting them in order to reduce fixed-pattern noise and obtain a better-quality image. This principle is known as correlated double sampling. The circuitry which controls the reading out of a pixel's voltage is mostly digital. Fig 2.5 shows the block diagram of the active pixel sensor design. The active pixel sensor design in this project consists of a 32x32 array of pixels. The light incident on this array is stored on a MOSFET capacitor (MOScap) in the pixel and then read out using the decoder and multiplexer. A 12-bit counter is used to generate the signals for the multiplexer and the decoder. The decoder selects one row at a time. When a row is selected, the sampled value in each pixel of that row is fed to its column circuit.

Fig
2.5 Block Diagram of Active Pixel Sensor
There is one column circuit for each column in the pixel array. The value stored in the column is then read out using the 32x1 multiplexer. In order to vary the integration time, a Variable Integration (VI) logic block is used. The VI logic block generates three different reset pulses with three different integration times: 256μs, 512μs, and 1024μs. These integration times are selected using a 4:1 multiplexer. The sensor works at a frequency of 2 MHz, allowing 0.5μs between two successive samples. The voltage generated by the light incident on the pixels is read out first, followed by the reset value. There are three modes of operation: integration mode, sample mode, and reset mode. Integration mode is when integration of the photocurrent takes place; the sample and reset modes are when the sample and reset values are read out. The two MSBs of the counter (Q10 and Q11) determine the mode. The rows and columns are selected only during the sample and reset modes, so that only relevant data is read. To accomplish this, the decoder and multiplexer select signals are passed through two-input AND gates whose other input is the Q11 bit of the counter. This allows the sensor to select rows and columns only in sample and reset mode.
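The AND gating of the select lines can be sketched as follows. This assumes Q11 is high during the sample and reset modes, which is what the gating described above implies; the function name is illustrative:

```python
def gated_select(decoder_out, q11):
    """AND each decoder/multiplexer select line with counter bit Q11.

    With Q11 low (integration mode, per the assumed mapping) no row or
    column is addressed; with Q11 high the decoder outputs pass through.
    """
    return [d & q11 for d in decoder_out]
```

For example, a decoder output selecting row 0 is suppressed entirely while Q11 = 0 and passed unchanged while Q11 = 1.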
2.11.2 PHOTO PIXEL
A photo pixel consists of a photodiode, a few control transistors, a MOS amplifier, and a MOScap to capture the light intensity. The photo pixel used in this project is the 4T/5T architecture: it consists of four transistors, a MOScap and a photodiode. The basic premise behind the photodiode front end is to indirectly measure the relatively small photocurrent by converting it into a large voltage swing. The circuit which performs this current-to-voltage conversion must also occupy a small amount of area on the chip. The photodiode senses the light falling on it and a photocurrent flows through it. The photocurrent is converted into a voltage using a load. The photodiode is modeled as a parasitic capacitance in parallel with a current source. The photodiode with a load and its equivalent circuit model are shown in Fig 2.6.

Fig 2.6 Simulation Model
of Photo Diode.
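A minimal numerical sketch of this equivalent-circuit model, a constant current source discharging the photodiode capacitance, uses the 50 pA photocurrent and 20.5 fF capacitance quoted for simulation later in this chapter; the 2.5 V starting level is an assumed reset value (roughly Vdd - Vthn), not a figure from the text:

```python
def photodiode_voltage(t, v0=2.5, i_ph=50e-12, c_pd=20.5e-15):
    """Photodiode voltage during integration.

    The constant photocurrent i_ph linearly discharges the photodiode
    capacitance c_pd from the starting level v0: V(t) = v0 - i_ph * t / c_pd.
    """
    return v0 - i_ph * t / c_pd
```

With these values the diode discharges at about 2.4 V/ms, so a 512 μs integration time produces a swing of roughly 1.25 V, a large, easily measured voltage from a 50 pA current.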
Fig 2.7 shows the 4T/5T pixel architecture used in this project. The photodiode is pulled up towards Vdd through an NMOS transistor load, Mrst, whose gate is connected to the reset signal. The sampled value is actually stored on the MOScap, whose gate is connected to Vdd. The TX signal controls the value to be stored using an NMOS pass gate, Mtx. The voltage on the MOScap is buffered through a source-follower amplifier, Msf. The pixel output value is transferred to the column line using a row signal through Msel. When reset is high, the value at node Vpix is Vdd - Vthn, since there is a threshold drop at the reset transistor. The signal TX stays high for the desired integration time (256μs, 512μs or 1024μs) and goes low only in sample mode. The value measured on the column line at the output of the source follower is Vdd - 2Vthn, since there is a second

Fig 2.7 Schematic of a Photo
Pixel.
threshold drop at the source follower. When a row
is selected, each pixel voltage can be
read at the associated column.
The pixel operates with Vdd = 3.3V and Vss
= 0V. The size of the MOScap is 4.05
μm /1.95 μm in the AMI 0.5μm process. The reset, Tx,
and row transistors are 3 μm /0.6
μm each. The source follower has a size of 4.8 μm
/1.5 μm. For simulations, the photodiode
model consists of a current of 50pA and capacitor
value of 20.5fF. Fig 2.8 shows the simulation waveforms

Fig 2.8 Simulation
results of Photo Pixel
of the pixel, including resetting after 512 μs of integration time. Vout is the sampled value: it integrates while reset is low and holds when TX is low. The sudden glitch in the sampled value when reset goes low is due to charge injection from the NMOS pass transistor as it turns off. The sampled value resets again when the reset and TX signals are both high.
2.11.3 COLUMN CIRCUIT
The column circuit consists of a current mirror, a PMOS capacitor, and a PMOS-input voltage follower. The current mirror draws a current of 10μA to dynamically discharge the value on the capacitor in order to store the next row's value. The output from the pixel is fed to the column line. An external resistor of 230kΩ generates a current of 11μA. Another external resistor, of 90kΩ, generates a current of 33μA to bias the voltage follower. The circuit works with Vdd = 3.3V and Vss = 0V. Fig 2.9 shows the schematic of the amplifier used in the voltage follower. The amplifier has an open-loop gain of 35.1dB, or 57 V/V, and a bandwidth of 1 MHz.

Fig
2.9 Schematic of voltage follower.
Fig 2.10 shows the open-loop AC response of the differential amplifier. The open-loop gain is 35.1 dB, which corresponds to 57 V/V. The cutoff frequency of the amplifier is 1MHz, so the gain-bandwidth (GBW) product is 57MHz. The voltage follower configuration of the differential amplifier has unity gain. Since GBW is constant for a given amplifier, the cutoff frequency of the voltage follower is 57MHz. The samples of the active pixel sensor come out at a 2MHz clock frequency. The voltage follower cutoff frequency is far beyond 2MHz; hence, it is suitable for this design.

Fig
2.10 AC Response of the Operational Amplifier.
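The gain-bandwidth arithmetic above can be checked directly:

```python
# GBW check: open-loop gain of 35.1 dB with a 1 MHz open-loop cutoff.
gain_db = 35.1
gain = 10 ** (gain_db / 20)      # ~57 V/V
f_cutoff = 1e6                   # open-loop cutoff frequency, Hz
gbw = gain * f_cutoff            # gain-bandwidth product, ~57 MHz

# In the unity-gain voltage follower, the closed-loop cutoff equals GBW,
# far beyond the 2 MHz sample rate of the sensor.
follower_cutoff = gbw / 1.0
```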
A transient simulation of the voltage follower configuration was also performed. In the voltage follower configuration, the negative input of the differential amplifier is connected to the output, and the signal is applied to the positive input. Vin is the input and Vout is the output of the voltage follower, with Vdd = 3V and Vss = 0V. Vin is a xxkHz sinusoid with an amplitude of yyV. From the waveforms, we see that the output follows the input; hence, the voltage follower configuration of the differential amplifier is working well.
2.12 APS
CIRCUIT OPERATION
We consider the basic three-transistor APS pixel and column APS circuit depicted in (a). The capacitor Cpd represents the equivalent photodiode capacitance. The bias transistor and the capacitor CT are shared among pixels in the same column. In standard APS operation, e.g., [1], the bit line is not reset after reading each row. Instead, it is charged through the follower transistor or discharged through the column bias transistor, depending on the bit-line voltage at the end of the previous row readout. Since the bit-line voltage changes linearly with time during discharging, compared to exponentially during charging, the discharging time limits the row readout time and can be very slow. One can speed up the discharging operation by using a higher bias current, but this leads to higher power consumption. To reduce the readout time to the charging time, the bit line may instead be reset to a low voltage after each row readout by pulsing the gate of the current source. For a given readout-time requirement, this method achieves lower power consumption than the standard operation.
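The linear-discharge timing argument can be sketched as a one-line model; the capacitance, voltage swing and bias values used in the example are hypothetical, not taken from the design:

```python
def linear_discharge_time(c_bit, delta_v, i_bias):
    """Time for a constant bias current to discharge the bit-line
    capacitance by delta_v: t = C * dV / I.

    Doubling the bias current halves the discharge (readout) time, but
    also doubles the static power drawn by the column -- the power vs.
    speed tradeoff described above.
    """
    return c_bit * delta_v / i_bias
```

For example, with a hypothetical 1 pF bit line, a 1 V swing and a 1 μA bias, the discharge takes 1 μs; raising the bias to 2 μA cuts that to 0.5 μs at twice the static power.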
2.13 MIXED MODE
ANALYSIS
Device simulations discussed in the previous section, however, are not sufficient to estimate the actual performance of a sensor chip, where an array of photodiodes is embedded within a larger, distributed electronic network. Modeling the whole circuit at the physical level would have been practically prohibitive, so less demanding approaches have been followed. Depending on the degree of accuracy sought, either circuit simulation or mixed-mode simulation has been used. In the former case, as detailed later on, an equivalent-circuit model for the photodiode was needed; the latter approach, instead, consists of self-consistently coupled device and circuit simulation: critical devices (the photodiode and its neighboring junctions) are still described at the physical level, whereas peripheral circuits are taken into account by compact modeling. The basic read-out scheme for an APS sensor includes a reset transistor and a read-out transistor at each pixel; addressing decoders and the bus have to be considered as well. The line load has been modeled by means of a current sink, assuming a worst-case condition.
With such a more complete picture of the array architecture, we can extend the indications coming from the analysis of the photodiode alone: in fact, the charge collected by the photodiode junction is shared with parasitic capacitances coming from neighboring devices (the reset and source-follower transistors), so that the actual voltage drop at the photodiode cathode strictly depends on the overall pixel architecture. On the other hand, increasing the area of the sensitive window (which would reduce charge sharing and improve the fill factor as well) does not necessarily imply a better response: the amount of charge generated in the sensitive layer is almost independent of the window footprint (depending only on the layer thickness), whereas the junction capacitance increases with it. Thus, for a given amount of charge, a larger capacitance yields a smaller voltage drop. A comparison between the responses of the technologies, as a function of the sensitive area, shows that voltage drops of tens of millivolts are achievable, of the same order as the response of more traditional pixel read-out systems. We then compared similar technologies of the (A) type (i.e., with no epi-layer), characterized by different channel feature sizes: namely, 0.25 µm and 0.18 µm technologies (supplied by the same foundry) were examined. The project specifications, in terms of spatial resolution, could be met by either, so we still compared the two options in terms of performance. Pixels were designed, accounting for layer structures and design rules; the simulations predict a wider voltage swing for the 0.18 µm node, mostly due to the lower parasitic capacitance associated with the scaled technology. Once the technology features had been assessed, we could proceed with the optimization of the dimensions of the sensitive element, balancing sensitive volume and parasitic capacitance in order to maximize the output voltage swing. Eventually, a pixel size was selected: due to the absence of the low-doped epi-layer, charge sharing among adjacent pixels is limited, and small pixel pitches are more easily achieved. Based on such results, a very compact pixel layout has been achieved. It is worth observing that, as mentioned above, the actual resolution target did not require the adoption of such aggressive scaling; the choice of an advanced technology was instead driven by performance evaluation. With respect to more mature technological nodes, moreover, such a choice better guarantees long-term stability and maintenance.
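The window-area trade-off discussed in this section (fixed generated charge, area-proportional junction capacitance) can be sketched numerically; the charge, capacitance density, and parasitic values below are illustrative assumptions only:

```python
Q_GEN = 1.6e-16      # C, collected charge (~1000 electrons, assumed)
C_PER_UM2 = 1e-16    # F/um^2, junction capacitance density (assumed)
C_PARASITIC = 2e-15  # F, reset/follower parasitics (assumed)

def voltage_drop(area_um2):
    """Cathode voltage drop: fixed charge over an area-dependent capacitance."""
    return Q_GEN / (C_PER_UM2 * area_um2 + C_PARASITIC)

for area in (25, 100, 400):
    print(f"{area:4d} um^2 window -> {voltage_drop(area)*1e3:5.1f} mV drop")
```

Enlarging the window from 25 to 400 um^2 cuts the voltage drop by nearly an order of magnitude, consistent with the tens-of-millivolts responses quoted above.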
2.14 SYSTEM DESIGN
Design of the system chip requires a more computationally efficient approach: to this purpose, data coming from the device and mixed-mode simulations illustrated so far were exploited to devise and carefully tune a compact model of the sensing element. Basically, a junction diode (properly characterized with respect to the actual technology) was supplemented by a current generator describing radiation-induced current pulses, as predicted by device simulation. Layout-extracted parasitics have been taken into account as well, resulting in a quite realistic, yet computationally reasonable, photodiode model. A comparison between the pixel responses predicted at the physical (i.e., device-simulation) level and at the circuit level exhibits a satisfactory agreement. Once the photodiode model was validated, simulation of the complete circuit became feasible: several arrays of pixels, organized in the customary matrix topology, were designed and arranged on a test chip. Digital circuitry needed for row and column addressing was added, as well as buffers driving the analog output.
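A minimal Python version of the compact sensing-element model described above: the diode is reduced to its junction capacitance plus layout parasitics, and the radiation-induced signal is an exponentially decaying current pulse. Every parameter value here is an assumption for illustration, not a figure from the actual design:

```python
import math

C_PD = 5e-15    # F, photodiode junction capacitance (assumed)
C_PAR = 2e-15   # F, layout-extracted parasitics (assumed)
I_PEAK = 1e-9   # A, peak radiation-induced current (assumed)
TAU = 10e-9     # s, pulse decay time constant (assumed)
V_RESET = 1.8   # V, reset voltage (assumed)

def cathode_voltage(t):
    """Voltage after a pulse i(t) = I_PEAK*exp(-t/TAU) starting at t = 0."""
    q = I_PEAK * TAU * (1.0 - math.exp(-t / TAU))  # integrated charge
    return V_RESET - q / (C_PD + C_PAR)

for t_ns in (0, 10, 50, 200):
    print(f"t = {t_ns:3d} ns: V = {cathode_voltage(t_ns * 1e-9):.6f} V")
```

Replacing the device-level photodiode with this capacitance-plus-current-generator pair is what makes full-chip simulation tractable while staying faithful to the device-simulation results.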
2.15 APS USING TFTs
Fig. 2.11 A two-transistor active/passive pixel sensor
For applications such as large-area digital X-ray imaging, thin-film transistors (TFTs) can also be used in an APS architecture. However, because of the larger size and lower transconductance gain of TFTs compared to CMOS transistors, it is necessary to have fewer on-pixel TFTs to maintain image resolution and quality at an acceptable level. A two-transistor APS/PPS architecture has been shown to be promising for APS using amorphous silicon TFTs. In the two-transistor APS architecture of Fig. 2.11, TAMP is used as a switched amplifier integrating the functions of both Msf and Msel in the three-transistor APS. This results in a reduced transistor count per pixel, as well as an increased pixel transconductance gain. Here, Cpix is the pixel storage capacitance; it is also used to capacitively couple the "Read" addressing pulse to the gate of TAMP for ON-OFF switching. Such pixel readout circuits work best with low-capacitance photoconductor detectors such as amorphous selenium. The main reason for the ever-growing interest in CMOS
MAPS lies in the possibility of integrating analog and digital processing
electronics together with the sensor array in the same substrate, taking
advantage of the large scale of integration and low power dissipation available
through commercial, low-cost CMOS processes. In the last few years, many
efforts were made to extend the application field of CMOS MAPS to
high-granularity particle detection in high energy physics experiments. The
interest of the particle physics community for monolithic active pixel sensors
stems from them being a possible solution to the material budget issue put
forward by the experiments to be run at the future colliders.
This research activity is concerned with the feasibility study of a new implementation of CMOS monolithic active pixel sensors (MAPS) for applications to charged particle tracking. As compared to standard three MOSFET MAPS, where the charge signal is read out by a source follower, the proposed front-end scheme relies upon a charge sensitive amplifier (CSA), embedded in the elementary pixel cell, to perform charge-to-voltage conversion. The area required for the integration of the front-end electronics is mostly provided by the collecting electrode, which consists of a deep n-type diffusion (deep n-well, DNW), available as a shielding frame for n-channel devices in deep submicron, triple well CMOS technologies.
2.16 READOUT RATE
Depending on the application, the required readout rate and resolution can vary significantly. On one side of the spectrum, there are applications demanding high resolution (14-16 bits) and a low readout rate (less than one frame per second, fps); on the other, there are applications demanding low resolution (8 bits maximum) and a high readout rate (in excess of 10 fps) [31]. The type of application affects the type of readout scheme and, in particular, the type of analogue-to-digital conversion scheme to be used. To respond to this wide range of requests from scientific users, we are developing a number of analogue-to-digital converter (ADC) solutions, ranging from in-pixel conversion to single-chip solutions, through column-parallel solutions. For each solution, a different ADC architecture has to be chosen. Table 1 below summarizes the different types of architectures we have developed so far. The
single-ramp ADC is a very compact solution and can be integrated in a column [20] or even in a pixel. Its conversion time scales with 2^Nbit, and it can therefore rapidly become impracticable for high-resolution applications. The pipeline ADC is our favourite solution for high-resolution applications [32]. It is, however, quite power hungry and occupies a large area, making it impossible to integrate a large number of them on a single chip. This is possible with the successive-approximation ADC: it is relatively easy to achieve a moderate number of bits (10-12), and the architecture is inherently low power and compact. The two figures below summarize these
considerations in the case of 8-bit and 16-bit resolution respectively. At low resolution, the pixel-parallel solution is the favourite one for frame rate. However, the pixel-parallel solution requires a complicated structure in the pixel, and this reduces the fill factor. As shown in figure 4, at high resolution the pixel-parallel solution is better than a more conventional successive-approximation ADC only for very high pixel counts, since the frame rate is uniquely determined by the long time needed to make a conversion. This is without taking into account any loss of image quality due to the reduced fill factor.
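The speed arguments above follow from how conversion time scales with resolution: a single-ramp converter needs on the order of 2^N clock cycles per sample, while a successive-approximation (SAR) converter needs about N. A sketch, assuming a hypothetical 50 MHz ADC clock:

```python
F_CLK = 50e6  # Hz, ADC clock (assumed for illustration)

def ramp_conversion_s(nbits):
    return (2 ** nbits) / F_CLK  # single-ramp: ~2^N cycles per sample

def sar_conversion_s(nbits):
    return nbits / F_CLK         # successive approximation: ~N cycles

for n in (8, 12, 16):
    print(f"{n:2d} bits: ramp {ramp_conversion_s(n)*1e6:8.2f} us, "
          f"SAR {sar_conversion_s(n)*1e9:5.0f} ns")
```

Going from 8 to 16 bits multiplies the ramp conversion time by 256 but the SAR time only by two, which is why the single-ramp option becomes impracticable at high resolution while SAR stays compact and low power.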
2.17 ACTIVE PIXEL SENSOR DEVELOPMENT PROGRESS
To accommodate the needs of an autonomous mission, and to be consistent with the increasing demand for lower-mass and lower-power instrumentation, APS developments are underway that are directly applicable to autonomous spacecraft applications. Currently, there are several areas of instrument development that are driving APS designs. Common to each is the desire to reduce overall sensor complexity, part count, and power demand. CCDs require a sizable set of control electronics and a range of support power-supply voltages. The net result is a relatively large component count, over and above the image sensor itself, requiring several different supply voltages and populating a circuit board with components.
A much more desirable solution, offered by the APS, is to place the majority of the array control circuitry on the image sensor itself. This reduces the number of components needed in the camera, while reducing overall power demands through a reduced chip count and reduced chip-to-chip interface drive requirements. Further, placing low-noise sampling circuits on-chip also reduces the susceptibility to external noise sources often encountered in camera circuit boards having many components, as in a typical APS tracker. APS technology provides a path towards a reduced-mass and reduced-power star camera in a minimized volume and profile; electronics are simplified and part counts are reduced. There are several challenges in the development of APS technology. These include reduction of read noise, improving quantum efficiency and fill factor, demonstration of large-format devices, increased radiation hardness, and on-chip ADC development. Read noise (single read) has been reduced to below 15 e- rms. In test devices, read noise as low as 5 e- rms has been demonstrated. One way to combat read noise is to reduce the capacitance of the readout floating-diffusion node. This increases conversion gain (uV/e-) but reduces "well capacity". Fortunately, 15 e- rms noise is adequate for most scientific applications.
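The floating-diffusion trade-off just described follows directly from Q = CV. In this sketch, the capacitance values and the 1 V usable swing are illustrative assumptions:

```python
Q_E = 1.602e-19  # C, electron charge
V_SWING = 1.0    # V, usable swing at the sense node (assumed)

def conversion_gain_uv_per_e(c_fd):
    return Q_E / c_fd * 1e6      # uV per electron

def well_capacity_e(c_fd):
    return c_fd * V_SWING / Q_E  # electrons

for c_fd in (10e-15, 5e-15, 2e-15):
    print(f"C_fd = {c_fd*1e15:4.1f} fF: "
          f"{conversion_gain_uv_per_e(c_fd):5.1f} uV/e-, "
          f"well = {well_capacity_e(c_fd):7.0f} e-")
```

Halving the node capacitance doubles the conversion gain and halves the well capacity, so the design point is a compromise between read noise and dynamic range.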
The APS has a fundamental disadvantage in design fill factor compared to full-frame CCDs. The fill factor of the APS pixel is comparable to that of interline-transfer CCDs, or about 25%. Increasing the fill factor is important to improving MTF and centroiding accuracy. However, it has been discovered that the "dead" area of the APS pixel containing the readout circuitry is in fact reasonably responsive, and makes a significant contribution to the overall QE of the sensor as well as improving centroiding accuracy. Micro-optics, such as the microlenses presently used in CCDs, can be used to improve the effective fill factor of the APS, though space-qualified microlenses may be more difficult to develop.
Back-side-illuminated APS arrays have been suggested, but the development of this technology is expected to be costly. Improved radiation hardness through the use of radiation-hard CMOS foundries is an attractive development approach; however, use of these foundries is also relatively expensive, and progress here is expected to be slow. More clever designs that improve radiation hardness in conventional CMOS should also be explored. On-chip ADC has been under investigation at JPL for one to two years.
Recent progress has been encouraging, and on-chip ADC for 8-bit and 10-bit applications is imminent. Higher resolution, such as that required for many scientific applications, is more difficult to achieve on-chip with minimal chip area and power, and will lag behind lower-resolution approaches. Larger-format APS detectors with small pixel size are desired for many applications.
The use of more advanced CMOS processes (e.g. 0.5 micron technology) enables the realization of pixel pitches of 10 microns and less, making megapixel APS formats practical. More advanced CMOS processes are less mature and more expensive to utilize than somewhat older processes, and for now, large-format array development is budget-limited. In a few years (e.g. 2-3), 0.5 micron CMOS technology will be mature and more readily accessible, and the realization of large-format arrays with small pixel sizes will be readily demonstrated.
APS Test Camera
Our initial entry into the design and test of an APS-based sensor utilized an APS array having a pixel format of 128 x 128. A camera system was designed to verify the performance of the APS as a star sensor for spacecraft guidance applications. The APS was mounted on a two-stage thermoelectric cooler and housed in a vacuum cavity. Operating the APS at reduced temperatures allowed us to evaluate the spectral sensitivity of the detector for a wide range of target stars. Additionally, the point-spread functions of stellar images were to be evaluated in an effort to determine how well centroiding algorithms would work with the APS sensor. Figure 3 shows a star field imaged by the 128 x 128 APS array: a first-light image with a CC128 APS array on the night sky. The camera is described in the text. The APS was operated at a clock rate of 125 kHz, and each pixel was digitized to 12 bits using a 1.0 MHz conversion clock. Since the APS divides the input clock by four to derive the pixel rate, each pixel was 32 usec in duration. Of that time, 13 usec was used by the A/D for each conversion, leaving 19 usec to transfer each digitized pixel to the host data-acquisition computer. A 12-bit parallel data transfer was used to the host computer. Analog processing of the APS output signal is quite straightforward. Since the analog pixel output of the APS is a fairly high-level signal, about 1.3 volts, a relatively small amount of analog gain is required. Referring to Figure 4, we employed a balanced differential unity-gain amplifier to provide a difference signal from the two APS outputs, SIGNOUT and RSTOUT. The following stage provided additional gain and allowed any DC offsets from the APS to be nulled out.
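The per-pixel timing budget quoted above is self-consistent, as a quick check shows (all figures here are taken directly from the text):

```python
F_IN = 125e3             # Hz, APS input clock
PIXEL_PERIOD = 4 / F_IN  # s; the APS divides the input clock by four
T_ADC = 13e-6            # s, A/D conversion time per pixel

t_transfer = PIXEL_PERIOD - T_ADC
print(f"pixel period: {PIXEL_PERIOD*1e6:.0f} us, "
      f"transfer window: {t_transfer*1e6:.0f} us")
```

The 125 kHz clock divided by four gives a 32 us pixel period; subtracting the 13 us conversion leaves the 19 us transfer window stated in the text.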
A fast sample-and-hold circuit operating at unity gain then sampled the analog video, using the timing signal PIXEL as a sample strobe, and presented the held value to the A/D converter. One-shot multivibrators were used to optimally time the sample-and-hold amplifier to the analog video output. The host computer provided the means of defining the pixel window in terms of Column Start and Column Width, Row Start and Row Width. The light integration time was also programmable, with a range of 1 usec to over half an hour in 1 usec increments. Figure 5 shows the point-spread function as sampled by the 128 x 128 APS array.
Our next upgrade consisted of retrofitting the camera with a newer APS device having a 256 x 256 pixel format. Operation of this device is essentially the same as the earlier 128 x 128 device in terms of host computer commands and readout. The support electronics needed to effectively operate an APS may be divided into two groups, video processing and digital control; the required features of each circuit group are discussed below.
Video Processing — The video signal from each active pixel is available as a high-level differential APS chip output. The processing electronics consist of a differential amplifier, which eliminates common-mode signals, and a following stage allowing video offset adjustment (nulling). A sample-and-hold amplifier is synchronized to sample the video signal during each pixel and present the held sample to a 12-bit A/D converter, which digitizes each pixel sample. The digitized pixel signals are then utilized by a processor to extract guidance or scene intelligence information. The synchronizing signals necessary to perform the A/D conversion are created within the APS chip and further simplify the task of digitizing the video data stream. Our APS chip provides pixel, row, and frame sync to allow synchronization to the video data.
Digital Control — The internal configuration of the APS chip is determined by the data loaded into the eight internal registers of the device. Each register is accessible by means of a 3-bit address and an 8-bit data load. The parameters loaded consist of the column starting location and width, the row starting location and width, and the optical integration time interval. Thus the array of pixels to be read out is defined by the loaded register data, as is the effective exposure time of the array. A loading strobe, LOAD, is active during the time that the register data is input. In the event that no register data is loaded, default values allow the entire APS array to be read out with a nominal 256-count integration interval.
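The register interface described above can be modeled in a few lines. The register names, their ordering, and the default window encoding are hypothetical (the text does not specify the address map); only the counts (eight registers, 3-bit address, 8-bit data) and the 256-count default integration interval come from the text:

```python
class APSRegisters:
    """Eight 8-bit registers addressed by a 3-bit address, as described."""
    # Hypothetical address map; the real assignment is not given in the text.
    NAMES = ("col_start", "col_width", "row_start", "row_width",
             "int_lo", "int_hi", "spare6", "spare7")

    def __init__(self):
        # Assumed defaults: full 256 x 256 window, 256-count integration.
        self.regs = [0, 255, 0, 255, 0, 1, 0, 0]

    def load(self, address, data):
        """Emulate a LOAD strobe writing one register."""
        if not (0 <= address < 8 and 0 <= data < 256):
            raise ValueError("3-bit address and 8-bit data required")
        self.regs[address] = data

    def window(self):
        # (col_start, col_width, row_start, row_width)
        return tuple(self.regs[:4])

r = APSRegisters()
r.load(0, 64)  # column start
r.load(1, 32)  # column width
print(r.window())
```

Loading only two registers leaves the remaining window parameters at their defaults, mirroring the chip's behavior when no register data is supplied.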
In order to establish the suitability of APS imagers for space or medical X-ray applications, a number of irradiation tests have been carried out.
CHAPTER-3
CONCLUSION
CMOS sensors found their first applications in the detection of visible light and became widespread in consumer applications. We are now developing this technology to meet the stringent requirements of scientific applications. CMOS sensors can be efficiently used to detect a broad spectrum of electromagnetic radiation as well as charged particles. The dominant source of noise, the reset noise, can be reduced, and at low illumination levels noise in the range of 10 e- rms can be obtained without any correlated double sampling. Different analogue-to-digital converter architectures allow a trade-off between the speed and resolution required. We anticipate that the use of CMOS sensors for scientific applications will expand in the next few years.
REFERENCES
1. www.wikipidia.com
2. www.future20hottechnologies.com
3. www.wikipedia.com
4. www.scribd.com