How Does a CCD Sensor Work?

CCDs are silicon-based sensors composed of a silicon substrate and a deposited epitaxial layer. An integrated circuit is etched onto the silicon surface to create an array of pixels, which collect incoming photons and convert them to photoelectrons.

These electrons are transferred down the sensor until they are read out and digitized to display an image in the imaging software. There are several CCD sensor formats that aim to streamline the process of photoelectron transfer: full-frame, frame-transfer, and interline-transfer. Full-frame sensors utilize the entire sensor area for photon collection, but have a much slower readout because all electrons must be cleared before a new image can be acquired.

Frame transfer takes advantage of a parallel register twice the height of the imaging area, rapidly shifting any detected charge onto a storage array without compromising the size of the light-sensitive area.

However, these sensors are usually more expensive and susceptible to smearing artifacts. Interline transfer uses alternating parallel strips, in which a portion of each pixel is masked from light, allowing fast transfer without charge smear. This, however, reduces the light-sensitive area, making the light-collecting region of each pixel smaller. Deep-depletion sensors, by contrast, have a thicker depletion region, so they are no longer transparent to near-infrared (NIR) wavelengths and can generate charge from each NIR photon.

This allows for nanosecond gating, optimal for ultra-low exposure times.

Figure 1: Schematic depicting charge transfer on a CCD. (A) Different numbers of photoelectrons accumulate in pixels across the sensor when it is exposed to light.

Each row of electrons is shifted down a row using a positive voltage. (B) The electrons are shifted by spreading the positive voltage over neighboring pixels in the same column, transferring them to a new pixel. This continues all the way down the sensor until they reach the readout register. (C) The electrons on the bottom row are transferred into the readout register.

(D) Once in the readout register, the electrons are shifted horizontally, column by column, by a positive voltage until they reach the output node, where they are amplified and digitized. This process repeats until the whole sensor is clear of electrons; the sensor can then be exposed to light again to acquire a new image.

Figure 2: Schematic showing three common ways that photoelectrons can be transferred from the CCD. (A) Full frame, in which the entire frame is light sensitive, and any accumulated charge must be transferred vertically down the sensor into the readout register.
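The row-by-row transfer described in this caption can be sketched as a toy Python model (a deliberately simplified scheme for illustration only; real sensors clock charge between gates with overlapping voltages):

```python
import numpy as np

def read_out(sensor):
    """Return pixel values in readout order for a 2D array of electron counts.

    Toy full-frame readout: the bottom row is moved into a serial (readout)
    register, every remaining row shifts down one, and the serial register
    is then shifted pixel by pixel into the output node.
    """
    sensor = sensor.copy()
    rows, cols = sensor.shape
    output = []
    for _ in range(rows):
        serial_register = sensor[-1, :].copy()     # bottom row -> serial register
        sensor[1:, :] = sensor[:-1, :].copy()      # every row shifts down one
        sensor[0, :] = 0                           # top row is now empty
        for col in range(cols):                    # shift serially to output node
            output.append(int(serial_register[col]))
    return output

frame = np.array([[3, 1],
                  [5, 2]])
print(read_out(frame))  # bottom row reaches the output node first: [5, 2, 3, 1]
```

The bucket-brigade character of the device is visible here: charge only ever moves to an adjacent position, never jumps.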

(B) Frame transfer, in which half of the sensor is masked (light insensitive), allowing rapid charge shifting. (C) Interline transfer, in which alternating strips of light-sensitive and light-insensitive pixels allow rapid charge transfer without the risk of charge smear.

Figure 4: Deep-depletion CCDs are made of thicker silicon and are therefore able to detect NIR wavelengths, which travel deeper into the silicon, unlike typical CCDs, which generate the majority of their signal from visible light.

In the three-phase transfer scheme, each pixel is divided among three gate electrodes driven by separate clock signals, and collected charge initially resides in the phase-1 well. The transition to phase 2 is carried out by applying a positive potential to the phase-2 gates, spreading the collected charge between the phase-1 and phase-2 wells; when the phase-1 potential is returned to ground, the entire charge packet is forced into phase 2.

A similar sequence of timed voltage transitions, under control of the parallel shift register clock, is employed to shift the charge from phase 2 to phase 3, and the process continues until an entire single-pixel shift has been completed. One three-phase clock cycle applied to the entire parallel register results in a single-row shift of the entire array. An important factor in three-phase transfer is that a potential barrier is always maintained between adjacent pixel charge packets, which allows the one-to-one spatial correspondence between sensor and display pixels to be maintained throughout the image capture sequence.

Figure 6 illustrates the sequence of operations just described for charge transfer in a three-phase CCD, as well as the clocking sequence of drive pulses supplied by the parallel shift register clock to accomplish the transfer. In this schematic visualization of the pixel, charge is depicted being transferred from left to right by clocking signals that simultaneously decrease the voltage on the positively biased electrode defining a potential well and increase it on the electrode to the right (Figures 6(a) and 6(b)).

In the last of the three steps (Figure 6(c)), charge has been completely transferred from one gate electrode to the next. Note that the rising and falling phases of the clock drive pulses are timed to overlap slightly (not illustrated) in order to transfer charge more efficiently and to minimize the possibility of charge loss during the shift.
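As a rough illustration of the three-phase scheme, the following sketch models each pixel as three gates and moves every charge packet one gate per clock sub-step; timing overlap and the shapes of the potential wells are ignored, and all charge values are hypothetical:

```python
def clock_one_pixel(charge):
    """charge: per-gate electron counts, gates ordered [p1, p2, p3, p1, p2, p3, ...].

    One full three-phase clock cycle: in each of three sub-steps the next
    phase goes high, the current phase returns to ground, and every packet
    hops one gate to the right. Three hops equal exactly one whole pixel.
    Charge leaving the last gate would enter the (not modeled) serial register.
    """
    for _ in range(3):                 # phase 1 -> 2, 2 -> 3, 3 -> next pixel's 1
        charge = [0] + charge[:-1]     # each packet follows the high gate
    return charge

# Three pixels; packets of 7 e- and 4 e- in the phase-1 wells of pixels 0 and 1.
gates = [7, 0, 0, 4, 0, 0, 0, 0, 0]
print(clock_one_pixel(gates))          # [0, 0, 0, 7, 0, 0, 4, 0, 0]
```

Note how the two packets remain separated by a full gate barrier throughout the cycle, reflecting the one-to-one pixel correspondence the text describes.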

With each complete parallel transfer, charge packets from an entire pixel row are moved into the serial register, where they can be sequentially shifted toward the output amplifier, as illustrated in the bucket brigade analogy (Figure 5(c)). This horizontal serial transfer utilizes the same three-phase charge-coupling mechanism as the vertical row shift, with timing control provided in this case by signals from the serial shift register clock. After all pixels are transferred from the serial register for readout, the parallel register clock provides the timing signals for shifting the next row of trapped photoelectrons into the serial register.

Each charge packet in the serial register is delivered to the CCD's output node, where it is detected and read by an output amplifier (sometimes referred to as the on-chip preamplifier) that converts the charge into a proportional voltage. The voltage output of the amplifier represents the signal magnitude produced by successive photodiodes, as read out in sequence from left to right in each row and from the top row to the bottom over the entire two-dimensional array.

The CCD output at this stage is, therefore, an analog voltage signal equivalent to a raster scan of accumulated charge over the imaging surface of the device. After the output amplifier fulfills its function of magnifying a charge packet and converting it to a proportional voltage, the signal is transmitted to an analog-to-digital converter (ADC), which converts the voltage value into the binary code necessary for interpretation by the computer.

Each pixel is assigned a digital value corresponding to signal amplitude, in steps sized according to the resolution, or bit depth, of the ADC. For example, an ADC capable of 12-bit resolution assigns each pixel a value ranging from 0 to 4095, representing 4096 possible image gray levels (2 to the 12th power equals 4096 digitizer steps). Each gray-level step is termed an analog-to-digital unit (ADU). The technological sophistication of current CCD imaging systems is remarkable considering the large number of operations required to capture a digital image, and the accuracy and speed with which the process is accomplished.
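The digitization step can be illustrated with a short sketch; the 12-bit depth matches the example above, while the gain of 8 electrons per ADU is an assumed illustrative value, not a spec of any particular camera:

```python
BIT_DEPTH = 12          # 12-bit ADC, as in the example above
GAIN_E_PER_ADU = 8      # assumed gain: 8 electrons per gray-level step

levels = 2 ** BIT_DEPTH
print(levels)           # 4096 possible gray levels, values 0..4095

def digitize(electrons):
    """Convert an electron count to an ADU value, clipped to the ADC range."""
    return min(electrons // GAIN_E_PER_ADU, levels - 1)

print(digitize(1000))   # 125 ADU
print(digitize(50000))  # clips at 4095: the ADC range is exceeded
```

The clipping branch shows why gain and bit depth must be matched to the sensor's full well capacity, a point the text returns to later.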

The sequence of events required to capture a single image with a full-frame CCD camera system involves a large number of operations; in spite of this, more than one million pixels can be transferred across the chip, assigned a gray-scale value at the full bit depth of the ADC, stored in computer memory, and displayed in less than one second.

A typical total time requirement for readout and image display is well under one second. Charge transfer efficiency can also be extremely high for cooled CCD cameras, with minimal loss of charge even over the thousands of transfers required for pixels in regions of the array farthest from the output amplifier.
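Charge transfer efficiency compounds over the thousands of shifts mentioned above; a quick sketch, using assumed (not measured) values:

```python
# Charge transfer efficiency (CTE) sketch: the fraction of a charge packet
# surviving n transfers is CTE**n. Both numbers below are illustrative.
cte = 0.99999                # assumed per-transfer efficiency
transfers = 2048             # e.g. a pixel ~2048 shifts from the output amplifier

surviving = cte ** transfers
print(round(surviving, 4))   # ~0.9797 -> about 2% of the packet lost in transit
```

This is why even a "five nines" transfer efficiency matters on large arrays: the exponent is the number of transfers, which grows with sensor size.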

Three basic variations of CCD architecture are in common use for imaging systems: full frame, frame transfer, and interline transfer (see Figure 7). The full-frame CCD, referred to in the previous description of the readout procedure, has the advantage that nearly 100 percent of its surface is photosensitive, with virtually no dead space between pixels.

The imaging surface must be protected from incident light during readout of the CCD, and for this reason, an electromechanical shutter is usually employed for controlling exposures. Charge accumulated with the shutter open is subsequently transferred and read out after the shutter is closed, and because the two steps cannot occur simultaneously, image frame rates are limited by the mechanical shutter speed, the charge-transfer rate, and readout steps.

Although full-frame devices have the largest photosensitive area of the CCD types, they are most useful with specimens having high intra-scene dynamic range, and in applications that do not require time resolution of less than approximately one second. When operated in a subarray mode in which a reduced portion of the full pixel array is read out in order to accelerate readout, the fastest frame rates possible are on the order of 10 frames per second, limited by the mechanical shutter.

Frame-transfer CCDs can operate at faster frame rates than full-frame devices because exposure and readout can occur simultaneously with various degrees of overlap in timing. They are similar to full-frame devices in structure of the parallel register, but one-half of the rectangular pixel array is covered by an opaque mask, and is used as a storage buffer for photoelectrons gathered by the unmasked light-sensitive portion. Following image exposure, charge accumulated in the photosensitive pixels is rapidly shifted to pixels on the storage side of the chip, typically within approximately 1 millisecond.

Because the storage pixels are protected from light exposure by an aluminum or similar opaque coating, stored charge in that portion of the sensor can be systematically read out at a slower, more efficient rate while the next image is simultaneously being exposed on the photosensitive side of the chip. A camera shutter is not necessary because the time required for charge transfer from the image area to the storage area of the chip is only a fraction of the time needed for a typical exposure.

Because cameras utilizing frame-transfer CCDs can be operated continuously at high frame rates without mechanical shuttering, they are suitable for investigating rapid kinetic processes by methods such as dye ratio imaging, in which high spatial resolution and dynamic range are important. A disadvantage of this sensor type is that only one-half of the surface area of the CCD is used for imaging, and consequently, a much larger chip is required than for a full-frame device with an equivalent-size imaging array, adding to the cost and imposing constraints on the physical camera design.

In the interline-transfer CCD design, columns of active imaging pixels and masked storage-transfer pixels alternate over the entire parallel register array. Because a charge-transfer channel is located immediately adjacent to each photosensitive pixel column, stored charge must only be shifted one column into a transfer channel. This single transfer step can be performed in less than 1 millisecond, after which the storage array is read out by a series of parallel shifts into the serial register while the image array is being exposed for the next image.

The interline-transfer architecture allows very short integration periods through electronic control of exposure intervals, and in place of a mechanical shutter, the array can be rendered effectively light-insensitive by discarding accumulated charge rather than shifting it to the transfer channels. Although interline-transfer sensors allow video-rate readout and high-quality images of brightly illuminated subjects, basic forms of earlier devices suffered from reduced dynamic range, resolution, and sensitivity, due to the fact that approximately 75 percent of the CCD surface is occupied by the storage-transfer channels.

Although earlier interline-transfer CCDs, such as those used in video camcorders, offered high readout speed and rapid frame rates without the necessity of shutters, they did not provide adequate performance for low-light high-resolution applications in microscopy. In addition to the reduction in light-sensitivity attributable to the alternating columns of imaging and storage-transfer regions, rapid readout rates led to higher camera read noise and reduced dynamic range in earlier interline-transfer imagers.

Improvements in sensor design and camera electronics have completely changed the situation to the extent that current interline devices provide superior performance for digital microscopy cameras, including those used for low-light applications such as recording small concentrations of fluorescent molecules.

Adherent microlenses, aligned on the CCD surface to cover pairs of image and storage pixels, collect light that would normally be lost on the masked pixels and focus it onto the light-sensitive pixels (see Figure 8). By combining small pixel size with microlens technology, interline sensors are capable of delivering spatial resolution and light-collection efficiency comparable to full-frame and frame-transfer CCDs.

The effective photosensitive area of interline sensors utilizing on-chip microlenses is increased to a much larger fraction of the total surface area. An additional benefit of incorporating microlenses in the CCD structure is that the spectral sensitivity of the sensor can be extended into the blue and ultraviolet wavelength regions, providing enhanced utility for shorter-wavelength applications, such as popular fluorescence techniques employing green fluorescent protein (GFP) and dyes excited by ultraviolet light.

In order to increase quantum efficiency across the visible spectrum, recent high-performance chips incorporate gate structures composed of materials such as indium tin oxide, which have much higher transparency in the blue-green spectral region.

Such nonabsorbing gate structures result in quantum efficiency values approaching 80 percent for green light. The past limitation of reduced dynamic range for interline-transfer CCDs has largely been overcome by improved electronic technology that has lowered camera read noise by approximately one-half.

Because the active pixel area of interline CCDs is approximately one-third that of comparable full-frame devices, the full well capacity (a function of pixel area) is similarly reduced. Previously, this factor, combined with relatively high camera read noise, resulted in insufficient signal dynamic range to support more than 8- or 10-bit digitization. High-performance interline cameras now operate with read noise values as low as 4 to 6 electrons, resulting in dynamic range performance equivalent to that of 12-bit cameras employing full-frame CCDs.

Additional improvements in chip design factors, such as clocking schemes, and in camera electronics have enabled increased readout rates. Interline-transfer CCDs now enable megapixel images to be acquired at high bit depth and megahertz pixel rates, approximately 4 times the rate of full-frame cameras with comparable array sizes.

Other technological improvements, including modifications of the semiconductor composition, are incorporated in some interline-transfer CCDs to improve quantum efficiency in the near-infrared portion of the spectrum. Several camera operation parameters that modify the readout stage of image acquisition have an impact on image quality. The readout rate of most scientific-grade CCD cameras is adjustable over a wide range.

The maximum achievable rate is a function of the processing speed of the ADC and other camera electronics, which reflect the time required to digitize a single pixel. Applications aimed at tracking rapid kinetic processes require fast readout and frame rates in order to achieve adequate temporal resolution, and in certain situations, a video rate of 30 frames per second or higher is necessary.

Unfortunately, of the various noise components that are always present in an electronic image, read noise is a major source, and high readout rates increase the noise level. Whenever the highest temporal resolution is not required, better images of specimens that produce low pixel intensity values can be obtained at slower readout rates, which minimize noise and maintain adequate signal-to-noise ratio.

When dynamic processes require rapid imaging frame rates, the normal CCD readout sequence can be modified to reduce the number of charge packets processed, enabling acquisition rates of hundreds of frames per second in some cases. The image acquisition software of most CCD camera systems used in optical microscopy allows the user to define a smaller subset, or subarray, of the entire pixel array to be designated for image capture and display. By selecting a reduced portion of the image field for processing, unselected pixels are discarded without being digitized by the ADC, and readout speed is correspondingly increased.

Depending upon the camera control software employed, a subarray may be chosen from pre-defined array sizes, or designated interactively as a region of interest using the computer mouse and the monitor display. The subarray readout technique is commonly utilized for acquiring sequences of time-lapse images, in order to produce smaller and more manageable image files.
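Subarray readout can be mimicked in software; the frame and region-of-interest dimensions below are assumed purely for illustration:

```python
import numpy as np

# A 512 x 512 region of interest inside an assumed 2048 x 2048 frame.
# Readout time scales roughly with the number of pixels digitized, so
# the subarray is read out about 16x faster than the full frame.
full = np.zeros((2048, 2048), dtype=np.uint16)
roi = full[768:1280, 768:1280]          # central 512 x 512 subarray

speedup = full.size / roi.size
print(roi.shape, speedup)               # (512, 512) 16.0
```

In a real camera the discarded rows are still clocked through the registers (just not digitized), so the actual speedup is somewhat less than this pixel-count ratio.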

Accumulated charge packets from adjacent pixels in the CCD array can be combined during readout to form a reduced number of superpixels. This process is referred to as pixel binning, and is performed in the parallel register by clocking two or more row shifts into the serial register prior to executing the serial shift and readout sequence. The binning process is usually repeated in the serial register by clocking multiple shifts into the readout node before the charge is read by the output amplifier.

Any combination of parallel and serial shifts can be employed, but typically a symmetrical matrix of pixels is combined to form each superpixel (see Figure 9).

As an example, 3 x 3 binning is accomplished by initially performing 3 parallel row shifts into the serial register (prior to any serial transfer), at which point each pixel in the serial register contains the combined charge from 3 pixels that were neighbors in adjacent parallel rows. Subsequently, 3 serial-shift steps are performed into the output node before the charge is measured.

The resulting charge packet is processed as a single pixel but contains the combined photoelectron content of 9 physical pixels (a 3 x 3 superpixel). Although binning reduces spatial resolution, the procedure often allows image acquisition under circumstances that make imaging impossible with normal CCD readout. It allows higher frame rates for image sequences if the acquisition rate is limited by the camera read cycle, and provides improved signal-to-noise ratio for equivalent exposure times.
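The 3 x 3 binning just described can be emulated in software by summing each 3 x 3 block of a frame (a sketch only; on-chip binning happens in the registers, before read noise is added, which is why it improves signal-to-noise ratio):

```python
import numpy as np

def bin_pixels(frame, n):
    """Sum each n x n block of a 2D electron-count array into one superpixel."""
    rows, cols = frame.shape
    assert rows % n == 0 and cols % n == 0, "array must divide evenly into blocks"
    return frame.reshape(rows // n, n, cols // n, n).sum(axis=(1, 3))

frame = np.arange(36).reshape(6, 6)     # illustrative 6 x 6 frame
binned = bin_pixels(frame, 3)           # -> 2 x 2 array of superpixels
print(binned)

# Each superpixel holds the combined charge of 9 physical pixels,
# and total charge is conserved:
assert binned.sum() == frame.sum()
```

Software binning after readout, by contrast, sums read noise from every contributing pixel, so it does not match the SNR benefit of binning in the registers.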

Additional advantages include shorter exposure times to produce the same image brightness (highly important for live-cell imaging) and smaller image file sizes, which reduces computer storage demands and speeds image processing. A third camera acquisition factor that can affect image quality, because it modifies the CCD readout process, is the electronic gain of the camera system. The gain adjustment of a digital CCD camera system defines the number of accumulated photoelectrons that determine each gray-level step distinguished by the readout electronics, and is typically applied at the analog-to-digital conversion step.

Note that this differs from gain adjustments applied to photomultiplier tubes or vidicon tubes, in which the varying signal is amplified by a fixed multiplication factor. Although electronic gain adjustment does provide a method to expand a limited signal amplitude to a desired large number of gray levels, if it is used excessively, the small number of electrons that distinguish adjacent gray levels can lead to digitization errors. High gain settings can result in noise due to the inaccurate digitization, which appears as graininess in the final image.

If a reduction in exposure time is desired, an increase in electronic gain will allow a fixed large number of gray-scale steps to be maintained in spite of the reduced signal level, provided that the applied gain does not produce excessive image deterioration. As an example of the effect of gain on a constant signal level, a gain setting that assigns 8 electrons per ADU (gray level) means that the number of gray levels a pixel's signal spans equals its electron count divided by 8.
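A small sketch of how the gain setting (electrons per ADU) trades gray-level granularity against ADC range; the signal level is an assumed example, not taken from any particular camera:

```python
def gray_levels(signal_electrons, gain_e_per_adu):
    """Number of gray levels a signal spans at a given gain (electrons/ADU)."""
    return signal_electrons // gain_e_per_adu

signal = 8000                       # illustrative pixel signal, in electrons
for gain in (8, 4, 1):
    print(gain, gray_levels(signal, gain))
# 8 e-/ADU -> 1000 gray levels; 1 e-/ADU -> 8000 levels, which would exceed
# the 4096 steps of a 12-bit ADC and risk clipping.
```

Higher gain (fewer electrons per ADU) spreads a small signal over more levels, but when only a few electrons separate adjacent levels, digitization error becomes visible as the graininess described above.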

Digital image quality can be assessed in terms of four quantifiable criteria that are determined in part by the CCD design, but which also reflect the implementation of the previously described camera operation variables that directly affect the imaging performance of the CCD detector. In microscope imaging, it is common that not all important image quality criteria can be simultaneously optimized in a single image or image sequence.

Obtaining the best images within the constraints imposed by a particular specimen or experiment typically requires a compromise among the criteria listed, which often exert contradictory demands.

For example, capturing time-lapse sequences of live fluorescently-labeled specimens may require reducing the total exposure time to minimize photobleaching and phototoxicity.

Several methods can be utilized to accomplish this, although each involves a degradation of some aspect of imaging performance. If the specimen is exposed less frequently, temporal resolution is reduced; applying pixel binning to allow shorter exposures reduces spatial resolution; and increasing electronic gain compromises dynamic range and signal-to-noise ratio.

Different situations often require completely different imaging rationales for optimum results. In contrast to the previous example, in order to maximize dynamic range in a single image of a specimen that requires a short exposure time, the application of binning or a gain increase may accomplish the goal without significant negative effects on the image. Performing efficient digital imaging requires the microscopist to be completely familiar with the crucial image quality criteria, and the practical aspects of balancing camera acquisition parameters to maximize the most significant factors in a particular situation.

A small number of CCD performance factors and camera operational parameters dominate the major aspects of digital image quality in microscopy, and their effects overlap to a great extent. Factors that are most significant in the context of practical CCD camera use, and discussed further in the following sections, include detector noise sources and signal-to-noise ratio, frame rate and temporal resolution, pixel size and spatial resolution, spectral range and quantum efficiency, and dynamic range.

Camera sensitivity, in terms of the minimum detectable signal, is determined by both the photon statistical shot noise and electronic noise arising in the CCD.

A conservative estimate is that a signal can only be discriminated from accompanying noise if it exceeds the noise by a factor of approximately 2.7, that is, a signal-to-noise ratio (SNR) of about 2.7. The minimum signal that can theoretically yield a given SNR value is determined by random variations of the photon flux, an inherent noise source associated with the signal even with an ideal noiseless detector. This photon statistical noise is equal to the square root of the number of signal photons, and since it cannot be eliminated, it determines the maximum achievable SNR for a noise-free detector.

If an SNR value of 2.7 is required, photon statistical noise alone sets the minimum detectable signal at roughly 8 photons, since the ideal-detector SNR equals the square root of the photon count and the square root of 8 is about 2.8. In practice, other noise components, which are not associated with the specimen photon signal, are contributed by the CCD and camera system electronics and add to the inherent photon statistical noise.
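The shot-noise-limited minimum signal follows directly from SNR = sqrt(N); a sketch using an assumed detection criterion of SNR ≈ 2.7:

```python
import math

def ideal_snr(photons):
    """SNR of an ideal (noiseless) detector: N / sqrt(N) = sqrt(N)."""
    return math.sqrt(photons)

target = 2.7                            # assumed detection criterion
min_photons = math.ceil(target ** 2)    # smallest N with sqrt(N) >= 2.7
print(min_photons, round(ideal_snr(min_photons), 2))  # 8 photons, SNR ~2.83
```

Any real detector only adds noise on top of this, so the practical minimum signal is always higher than the shot-noise limit.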

Once accumulated in collection wells, charge arising from noise sources cannot be distinguished from photon-derived signal. Most of the system noise results from readout amplifier noise and thermal electron generation in the silicon of the detector chip.

The thermal noise is attributable to kinetic vibrations of silicon atoms in the CCD substrate that liberate electrons or holes even when the device is in total darkness; these subsequently accumulate in the potential wells. For this reason, the noise is referred to as dark noise, and it represents the uncertainty in the magnitude of dark charge accumulation during a specified time interval. The rate of dark charge generation, termed dark current, is unrelated to photon-induced signal but is highly temperature dependent.

In similarity to photon noise, dark noise follows a statistical square-root relationship to dark current, and therefore it cannot simply be subtracted from the signal. Cooling the CCD reduces dark charge accumulation by an order of magnitude for every 20-degree-Celsius decrease in temperature, and high-performance cameras are usually cooled during use.

Cooling even to 0 degrees Celsius is highly advantageous, and at several tens of degrees below zero, dark noise is reduced to a negligible value for nearly any microscopy application. Provided that the CCD is cooled, the remaining major electronic noise component is read noise, primarily originating in the on-chip preamplifier during the process of converting charge carriers into a voltage signal. Although the read noise is added uniformly to every pixel of the detector, its magnitude cannot be precisely determined, but only approximated by an average value, in units of electrons root-mean-square (rms) per pixel.
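The benefit of cooling can be sketched with a common rule of thumb (assumed here, not a measured specification for any particular device): dark current falls by roughly an order of magnitude for every 20 degrees Celsius of cooling:

```python
def dark_current_scale(delta_t_cooling):
    """Relative dark current after cooling by delta_t_cooling degrees Celsius,
    assuming one order of magnitude of reduction per 20 degrees."""
    return 10 ** (-delta_t_cooling / 20)

for dt in (0, 20, 40, 60):
    print(dt, dark_current_scale(dt))
# 60 degrees of cooling -> dark current reduced to 1/1000 of its warm value
```

Because dark noise scales as the square root of accumulated dark charge, a 1000-fold reduction in dark current cuts dark noise by a factor of about 30 for the same exposure.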

Some types of readout amplifier noise are frequency dependent, and in general, read noise increases with the speed of measurement of the charge in each pixel. The increase in noise at high readout and frame rates is partially a result of the greater amplifier bandwidth required at higher pixel clock rates. Cooling the CCD reduces the readout amplifier noise to some extent, although not to an insignificant level.
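The noise sources discussed above combine in quadrature in the standard CCD camera SNR model; a sketch with purely illustrative electron counts:

```python
import math

def snr(signal_e, dark_e, read_e):
    """Camera SNR: shot noise (sqrt of signal), dark noise (sqrt of dark
    charge), and read noise added in quadrature."""
    noise = math.sqrt(signal_e + dark_e + read_e ** 2)
    return signal_e / noise

# Assumed example: 1000 signal electrons, 10 dark electrons, 6 e- rms read noise.
print(round(snr(1000, 10, 6), 1))  # ~30.9
```

The structure of the formula shows why slow readout helps dim specimens: lowering read_e is the only lever that does not also require more light or more cooling.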

CCDs generate photoelectrons at different rates depending on the wavelength of light. The fraction of incident photons converted into signal electrons is called the quantum efficiency (QE). As shown below, a normal front-illuminated device creates signal only after the light has passed through the gate structures, resulting in attenuation of the incoming radiation. A back-thinned, or back-illuminated, CCD has the excess silicon on the back of the device etched away, allowing unimpeded photoelectron generation.
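The effect of QE on collected signal is simple to sketch; the QE values below are illustrative, chosen only to reflect the front- versus back-illuminated comparison in the text:

```python
def detected_electrons(photons, qe):
    """Photoelectrons generated for a given photon count and quantum efficiency."""
    return int(photons * qe)

# Assumed QE values for illustration (real values vary with wavelength):
for name, qe in (("front-illuminated", 0.45), ("back-illuminated", 0.90)):
    print(name, detected_electrons(10000, qe))
```

Since shot noise depends on the detected (not incident) photon count, doubling QE improves the shot-noise-limited SNR by a factor of about the square root of two.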

The back-thinning process varies from company to company, resulting in performance variation between manufacturers. CCD sensors were pioneered for scientific measurement applications in the early 1970s and became the sensor of choice for nearly all imaging applications, including machine vision and consumer electronics. Today, both CCD and CMOS sensors have their place in scientific measurement applications. In general, CMOS image sensors are the first choice when the application calls for high frame rates, and especially low noise at high frame rates.

In such applications, the optical integration time is so short that dark current and any luminescence from sensor transistors is insignificant.

Reading out each column in parallel means the practical frame rate is 2 to 3 orders of magnitude higher than that of a typical CCD. CMOS sensors designed for scientific measurement applications are currently limited in availability, but significant effort is being made to expand them into the roles where CCDs have traditionally been the favored sensor.

CCDs excel in applications where the readout time is less important and readout follows a long integration time. A sufficiently cooled CCD has practically no dark current and no luminescence to mask the signal of interest. The output transistor has well-behaved, low-noise characteristics and a dynamic range equal to or exceeding that of data converters. Because of their long development history, CCDs can be optimized for best sensitivity in different wavelength ranges, from near-IR to X-rays, by employing different silicon thicknesses, backside illumination (still rare in CMOS sensors), optimized backside treatment, and anti-reflection (AR) coatings.

Large area arrays with large pixels are routinely available. In short, if there is sufficient time for the long readout of the CCD, there is no better sensor for very low light applications.
