Multiple frame grabber

Information

  • Patent Number
    8,681,274
  • Date Filed
    Monday, October 24, 2011
  • Date Issued
    Tuesday, March 25, 2014
  • US Classifications
    Field of Search
    • US
    • 348/218.1
    • 348/153
    • 348/159
    • 348/572
    • 348/714
    • 348/187
    • 348/35
    • 348/36
    • 348/716
    • 348/718
    • 348/383
    • 348/231.99
  • International Classifications
    • H04N17/00
    • H04N5/225
    • H04N9/64
    • Disclaimer
      This patent is subject to a terminal disclaimer.
      Term Extension
      155 days
Abstract
A system is presented that applies M×N×K computational units to calculate image parameters for N picture images captured simultaneously by N digital camera devices, where there are N groups of frame grabber units, each group containing M frame grabbers, each of which contains K computational units. The image data operated on by a computational unit is separate and independent from the image data operated on by the other computational units. This results in a performance speedup of M×N×K compared to one computational unit making the same computations. A master frame grabber unit controls the illumination of the N digital camera devices and synchronizes the illumination with the clocks of the N digital camera devices.
Description
BACKGROUND OF THE INVENTION

1. Field of Invention


The present invention is related to the testing of light sensitive devices and in particular to performing tests on digital camera and light sensor devices at a wafer level or a packaged module level.


2. Description of Related Art


The digital camera is becoming a ubiquitous device. Not only are digital cameras replacing traditional film cameras, they are also being built into many other small electronic devices, such as PDAs (personal digital assistants) and cellular phones. With the explosion of cellular phones, and the ability to take a picture and send it to another individual using a second cellular phone, comes the need to produce inexpensive digital camera modules and to test these modules efficiently in large quantities. Testing is further complicated by the many different module configurations that are emerging from the many different application requirements, including fixed focus, manual focus and automatic focus, as well as physical size. The testing time for a digital camera module, which can have megapixel capability, has traditionally been relatively long (approximately sixty seconds for a module with 0.3 megapixels) to ensure the integrity and picture quality of the camera. This necessitates a testing capability that ensures the integrity and specification of the digital camera module while being quick and able to handle a large quantity of modules.


Patent application Ser. No. 10/417,317, filed Apr. 16, 2003, relates to miniature cameras and their manufacturing methods, used as built-in modules in hand-held consumer electronics devices such as mobile phones and PDAs. A second patent application, Ser. No. 10/434,743, filed May 8, 2003, describes a test system for digital camera modules used as built-in modules for consumer electronics, which performs electrical tests, adjustment of focus, and sealing of the lens barrel with glue.


In addition there are a number of other prior art patents that are directed to the testing of digital cameras: US 20040032496 A1 (Ebenstein et al.) is directed to a method of camera calibration and quality testing; EP 1389878 A1 (Bednarz et al.) is directed to a method of camera calibration and testing camera quality; US 20040027456 A1 (Pierce) is directed to the use of calibration targets; EP 1382194 A1 (Baer) is directed to dark current subtraction; JP 2003259126 (Keisuke) is directed to removing noise from an image; US 20030146976 A1 (Liu) is directed to a digital camera system enabling remote monitoring; JP 2003219436 (Fuminori) is directed to adjustment of a pixel shift camera; US 20030142374 (Silverstein) is directed to calibrating the output of an image output device; JP 2003179949 (Hidetoshi) is directed to a luminance level inspection apparatus; JP 2003157425 (Vehvilainen) is directed to improving image quality produced in a mobile imaging phone; JP 2003101823 (Kenichi) is directed to specifying a picture data area; EP 1286553 A2 (Baer) is directed to a method and apparatus for improving image quality; US 20030030648 (Baer) is directed to a method and apparatus for improving image quality in digital cameras; U.S. Pat. No. 6,512,587 (Marcus et al.) is directed to a measurement method and apparatus for an imager assembly; US 20030002749 (Vehvilainen) is directed to a method and apparatus for improving image quality; US 20020191973 A1 (Hofer et al.) is directed to a method and apparatus for focus error reduction; WO 2002102060 A1 (Baer) is directed to a method and apparatus for reducing smear in digital images using a frame transfer sensor; JP 2002290994 (Hidetoshi) is directed to a method and apparatus to detect foreign matter on the surface of a lens; JP 200232918 (Yanshinao) is directed to an image inspection device and method for a camera module; JP 2002077955 (Keisuke) is directed to a method and apparatus for evaluating camera characteristics; JP 2001292461 (Keisuke) is directed to a system and method for evaluating a camera; U.S. Pat. No. 6,219,443 B1 (Lawrence) is directed to a method and apparatus for inspecting a display using a low resolution camera; U.S. Pat. No. 6,201,600 B1 (Sites et al.) is directed to a method and apparatus for the inspection of optically transmissive objects having a lens; U.S. Pat. No. 5,649,258 (Bergstresser et al.) is directed to an apparatus for the testing of a camera; EP 0679932 B1 (Kobayashi et al.) is directed to testing an electronically controlled camera; U.S. Pat. No. 5,179,437 (Kawada et al.) is directed to an apparatus for color correction of image signals of a color television camera; JP 03099376 (Hiroshi) is directed to the quality of a display screen; U.S. Pat. No. 4,612,666 (King) is directed to a pattern recognition apparatus; and U.S. Pat. No. 4,298,944 (Stoub et al.) is directed to a method and apparatus for distortion correction for scintillation cameras.


SUMMARY OF THE INVENTION

It is an objective of the present invention to use parallel processing to compute image parameters on one or more digital camera devices under test.


It is also an objective of the present invention to compute image parameters of a plurality of digital camera devices simultaneously.


It is further an objective of the present invention to use a plurality of groups of frame grabber units containing a plurality of computational units to capture images and calculate image parameters.


It is still further an objective of the present invention to distribute an image to computational units within a frame grabber unit by pixel or group of pixels.


It is also further an objective of the present invention to control and illuminate a plurality of digital camera devices simultaneously to perform the same sequence of tests from which image data is created to be analyzed by the plurality of computational units in the plurality of frame grabber units.


It is also still further an objective of the present invention to illuminate a plurality of digital camera devices with the same light source controlled by one frame grabber unit of the plurality of frame grabbers, whereby a first set of pictures is captured by the plurality of digital camera devices and the light source is controlled to wait until a second set of pictures is ready to be taken.


It is still further an objective of the present invention to illuminate a plurality of light sensors in chip form on a wafer, or in a packaged form, with a light system similar to that described herein, controlled by a frame grabber unit or a light control unit, and wherein the light source produces a light pulse having a defined color, shape, intensity, duration and repetition.


In the present invention a computational unit comprises a memory controlled by a field programmable gate array (FPGA), and a CPU that reads image data from a digital camera device under test (DUT) out of the memory and returns calculated results to the memory. The memory is coupled to a host computer, to which the computed results are coupled for combining into a final result that allows a decision pertaining to the quality of the DUT that produced the image data.
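As an illustration of this division of labor (FPGA loads the memory, CPU computes, host combines), the following minimal sketch models it in software. All names (ComputationalUnit, load, compute, host_combine) and the mean-level calculation are hypothetical illustrations, not the patent's interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class ComputationalUnit:
    """One computational unit: a memory holding its share of the image
    plus a CPU that writes calculated results back into the same memory."""
    memory: list = field(default_factory=list)   # image data from the DUT
    results: dict = field(default_factory=dict)  # calculated results

    def load(self, pixels):
        # FPGA role: select and store this unit's share of the image stream.
        self.memory.extend(pixels)

    def compute(self):
        # CPU role: calculate an image parameter and return it to memory.
        self.results["mean_level"] = sum(self.memory) / len(self.memory)

def host_combine(units):
    """Host computer role: read each unit's results and combine them
    into a final result for a quality decision on the DUT."""
    return sum(u.results["mean_level"] for u in units) / len(units)
```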


The frame grabber controls the light source that is used to illuminate the DUT, synchronizing the light source with the DUT and coupling the resulting image data to a frame grabber unit. The frame grabber unit comprises a plurality of computational units. Each computational unit operates on data that is independent of the data in the other computational units, allowing parallel processing of the image data with no serial overhead other than the initial loading of each memory in each computational unit. Computation can begin as soon as the first pixel is loaded into a memory, allowing additional loading of the memory to be performed while computation on a part of an image is being done.


A plurality of frame grabber units are coupled to a plurality of digital camera devices (N-DUT), with one frame grabber, considered a master, controlling the source of light that illuminates the plurality of digital camera devices. The images captured by the N-DUT are coupled to the plurality of frame grabber units for computation, loading the memories of the plurality of frame grabber units. So as not to impede the test process, the light source has fast turn-on, turn-off and repetition through changes of color and light-pulse shape. The images captured by the N-DUT are distributed across the plurality of frame grabber units in such a way that all frame grabber units are operating on pixels of the same image while additional images are created and delivered to the memories of the frame grabber units.


The DUT is a light sensitive semiconductor device located on a semiconductor wafer or a carrier of unmounted devices, or the DUT can be a module that has been assembled with a light sensitive device. When a plurality of N-DUT are tested on a wafer or a physically similar holding device, a single light source can be used to illuminate all devices at the same time. When there is a plurality of part numbers located on separate carrying devices, a separate light source may be required, where each separate light source is controlled by a separate “master” frame grabber. In the case of modules, the test stations may be far enough apart to require separate light sources, even though the modules are of the same part number.


When distributing a picture image by individual pixel, the P pixels are coupled to M frame grabber units, each of which contains K computational units. Computation on each pixel begins as soon as the pixel is loaded into the memory of a computational unit. A first pixel P=1 is coupled to a first computational unit K=1 of a first frame grabber unit M=1, and computation is started. A P=2 pixel is coupled to a first computational unit K=1 of a second frame grabber unit M=2, and computation is started. A P=M pixel is then coupled to a first computational unit K=1 in an M frame grabber unit, and computation is started. Then a P=M+1 pixel is coupled to the K=2 computational unit of the M=1 frame grabber unit, and computation is started in the K=2 computational unit. The coupling process continues until M×K pixels have been coupled to all K computational units in the M frame grabber units. Then the P=M×K+1 pixel is coupled to the memory of the K=1 computational unit of the M=1 frame grabber unit. The coupling of pixels continues until the last pixel of a picture has been coupled to one of the M×K computational units. When the calculations on the P=1 pixel are completed, the processor in the K=1 computational unit commences calculations on the P=M×K+1 pixel. The overlap of computation and the loading of pixels into computational units allows additional time to make calculations. It should be noted that groups of pixels forming a line, column or cluster can be coupled to the M frame grabber units in a fashion similar to that of coupling a pixel; however, the start of computation will not be fully overlapped with the coupling process until all the M frame grabber units have been loaded with an initial group of pixels.
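This ordering, across frame grabbers first and then across computational units, reduces to a simple modular index mapping. A minimal sketch with 0-based indices (the paragraph above counts from 1); the function name is illustrative:

```python
def unit_for_pixel_m_major(p, M, K):
    """Map a 0-based pixel index p to (frame grabber m, computational unit k)
    when pixels are dealt across the M frame grabbers first: pixels 0..M-1 go
    to unit k=0 of grabbers m=0..M-1, pixels M..2M-1 to unit k=1, and so on,
    wrapping around after every M*K pixels."""
    slot = p % (M * K)          # position within one round of M*K pixels
    return slot % M, slot // M  # (frame grabber index, computational unit index)

# With M=3 grabbers of K=2 units: pixel 0 -> (0,0), pixel 3 -> (0,1),
# and pixel 6 (the 1-based P=M*K+1) wraps back to unit (0,0).
assert unit_for_pixel_m_major(3, M=3, K=2) == (0, 1)
assert unit_for_pixel_m_major(6, M=3, K=2) == (0, 0)
```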


Alternatively, a P=1 pixel is coupled to a K=1 computational unit of the M=1 frame grabber unit, the P=2 pixel is coupled to a K=2 computational unit of the M=1 frame grabber unit, and so on until a P=K pixel is coupled to the K computational unit of the M=1 frame grabber unit. Each time a pixel is loaded into the memory of a computational unit, calculation of the parameters of the pixel is started. The P=K+1 pixel is coupled to the K=1 computational unit of the M=2 frame grabber unit. The P=2K pixel is coupled to the K computational unit of the M=2 frame grabber unit, and the coupling of pixels to the frame grabber units continues until the P=M×K pixel is coupled to the K computational unit of the M frame grabber unit. The P=M×K+1 pixel is then loaded into the memory of the K=1 computational unit of the M=1 frame grabber unit. If the calculations being performed on the P=1 pixel are completed, the processor of the K=1 computational unit commences computation on the P=M×K+1 pixel. The process of coupling pixels to the computational units continues until the last pixel of a picture image is coupled to a computational unit of the M×K computational units. Calculations in each of the M×K computational units on the next pixel commence when the calculations on the pixel previously loaded in that computational unit are completed.
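The alternative ordering fills all K units of one frame grabber before moving to the next. Under the same 0-based conventions as the sketch above:

```python
def unit_for_pixel_k_major(p, M, K):
    """Map a 0-based pixel index p to (frame grabber m, computational unit k)
    when all K units of one frame grabber are filled before moving on to the
    next frame grabber, again wrapping after every M*K pixels."""
    slot = p % (M * K)
    return slot // K, slot % K  # (frame grabber index, computational unit index)

# With M=3 grabbers of K=2 units: pixels 0 and 1 fill both units of grabber 0,
# pixel 2 starts grabber 1, and pixel 6 (1-based P=M*K+1) wraps back to (0,0).
assert unit_for_pixel_k_major(1, M=3, K=2) == (0, 1)
assert unit_for_pixel_k_major(2, M=3, K=2) == (1, 0)
assert unit_for_pixel_k_major(6, M=3, K=2) == (0, 0)
```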


Additional frame grabber units and computational units, along with additional memory, can be added to allow for an increase in the number of pixels in a picture image, i.e. megapixels and multiple megapixels, so that the calculation time for picture images of higher pixel count is kept within an acceptable time. Also, a single light source comprising a set of mirrors can be used to provide illumination to devices that are remote from each other and would otherwise require separate light sources. This reduces the variability that can occur between a plurality of light sources and the need to calibrate a plurality of light sources to produce the same illumination simultaneously.


Light sensors that are not camera devices, in either wafer or package form, can be illuminated with a light source similar to the one described herein, where a frame grabber, or a similar control unit, controls the light source for intensity, color, shape, duration and repetition, with or without the use of mirrors to spread the light from a single light source over physical distances, to produce an illumination that enables the testing of the light sensors.





BRIEF DESCRIPTION OF THE DRAWINGS

This invention will be described with reference to the accompanying drawings, wherein:



FIG. 1A is a block diagram of the present invention showing a computational unit coupled to a device under test,



FIG. 1B is a block diagram of the present invention showing a plurality of computational units within a frame grabber unit coupled to a device under test,



FIG. 2 is a block diagram of the present invention of a plurality of frame grabber units coupled to a single device under test,



FIG. 3A is a block diagram of the present invention with two devices under test coupled to separate light sources and frame grabber units,



FIG. 3B is a block diagram of the present invention with two devices under test coupled to the same light source and separate frame grabber units,



FIG. 4A is a block diagram of the present invention of two devices under test, each coupled to a plurality of frame grabber units and separate light sources,



FIG. 4B is a block diagram of the present invention of a plurality of devices under test, each coupled to a plurality of frame grabber units and one light source,



FIG. 5 is a flow diagram of a method of the present invention for coupling a plurality of images to a plurality of computational units in a plurality of frame grabber units,



FIG. 6 is a flow diagram of a method of the present invention for coupling pixels of an image to a plurality of frame grabber units,



FIG. 7 is a flow diagram of a method of the present invention for coupling images from a plurality of devices under test to the computational units of a plurality of frame grabbers, and



FIG. 8 is a flow diagram of a method of the present invention for coupling a plurality of digital camera devices to a same light source and a plurality of computational units to calculate image parameters.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

In FIG. 1A is shown a computational unit 10 of the present invention. A memory 11 containing a field programmable gate array (FPGA) is used to store image data 13 resulting from a light sensitive device such as a digital camera device under test (DUT) 14, where the FPGA controls the operation of the memory. The DUT can be a light sensitive semiconductor chip or a module containing the chip. A pin electronics unit (PE) 16 interfaces with the DUT 14 to provide signals and power to the DUT and receives analog and digital signals back from the DUT. An analog to digital converter (ADC) forms part of the PE 16 to convert analog image data from the DUT into a digital format that is stored in the memory 11. The memory is coupled to a CPU 12 that performs calculations on the image data stored in memory 11 to determine the capability of the DUT to sense an image of light. Results from the calculations are stored back into the memory 11, and the results are accessed by a host computer 15 to determine a final composite test result. A select signal 17 is used to select the memory, and a “sync out” 18 is used to synchronize the memory with the incoming stream of image data.


In FIG. 1B is shown a frame grabber unit (FGU) 20 that contains a plurality of computational units 10 ranging from computational unit (1) to computational unit (K). A typical value for K is K=4, but K=8 or more can be used depending upon the size of the computational task, such as images with megapixels. Each of the computational units 10 is coupled to the host computer 15, which gathers the results of the computations from each memory 11 in the computational units 10 to form a composite result to determine the capability of the DUT to capture images of light coupled to the DUT.


Continuing to refer to FIG. 1B, image data is streamed into the frame grabber unit 20, and each FPGA contained within computational unit (1) through computational unit (K) selects a portion of that data and stores it into the memory 11 within its computational unit. The image data can be selected by individual pixel or by a split of the picture image into groups of pixels. The image from the DUT can be scanned out by row or column of image pixels in a left to right, top to bottom or right to left, bottom to top direction. When the image data is split, individual groupings of pixels within an image split are stored into the memories of the computational units. The image data is split into a number of portions that are distributed to the computational units 10 within the frame grabber unit 20. The portions are by pixel or groups of pixels. The groups of pixels can be rows of pixels, columns of pixels, or clusters of pixels, which form a checkerboard-like pattern on the image. Distributing the image by pixel allows computation to proceed while additional pixels are coupled to the computational units.
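A minimal sketch of these three splitting schemes (rows, columns, checkerboard-like clusters), assuming a flat pixel buffer and the hypothetical helper split_image; in the patent this selection is performed in hardware by the FPGAs:

```python
def split_image(pixels, width, k, mode="rows"):
    """Divide a flat pixel buffer into k portions for k computational units.
    'rows' deals out whole rows, 'columns' whole columns, and 'checkerboard'
    interleaves pixels so the portions tile the image like a checkerboard."""
    height = len(pixels) // width
    portions = [[] for _ in range(k)]
    for y in range(height):
        for x in range(width):
            if mode == "rows":
                target = y % k          # row y goes to unit y mod k
            elif mode == "columns":
                target = x % k          # column x goes to unit x mod k
            else:
                target = (x + y) % k    # checkerboard-like clusters
            portions[target].append(pixels[y * width + x])
    return portions

# Example: a 4x4 image split two ways for K=2 computational units.
img = list(range(16))
by_rows = split_image(img, width=4, k=2, mode="rows")
board = split_image(img, width=4, k=2, mode="checkerboard")
```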


Continuing to refer to FIG. 1B, the CPU 12 within each computational unit (1) through computational unit (K) then performs calculations on the data in its memory 11, simultaneously and in parallel with all the other CPUs 12. Many lengthy computations are performed, comprising for example performing a Fast Fourier Transform (FFT); determining the modulation transfer function (MTF) of the image versus the focus target; detecting bad pixels, dark current, noise including fixed pattern noise (FPN), and cross talk between colors of light; testing the ADC in the PE 16; resolution measurements of the optics in the test station; peak detection of the brightness of a pixel or portion of an image; averaging an image over several pixels; and detecting vignetting and distortion of the optics system. All of these computations that are necessary to determine a good digital camera device (DUT) require a very large number of calculations. As the pixel density increases in digital camera devices, more computational power is required to contain the test time of the DUT.
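As a concrete taste of the listed calculations, the sketch below computes two of the simpler ones, a crude fixed-pattern-noise indicator and a peak brightness, on one unit's portion of the data. These are simplified stand-ins, not the patent's algorithms; real FPN and MTF measurements involve multiple frames and calibrated targets:

```python
import statistics

def fpn_estimate(dark_pixels):
    """Crude fixed-pattern-noise indicator for one unit's share of a dark
    frame: the spread of pixel values around the portion mean. Production
    FPN tests average many frames and separate row and column components."""
    mean = statistics.fmean(dark_pixels)
    return statistics.pstdev(dark_pixels, mu=mean)

def peak_brightness(pixels):
    """Peak detection over one unit's portion of the image."""
    return max(pixels)
```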


In FIG. 2 is shown a block diagram of a plurality of frame grabber units 20, FGU(1) to FGU(M), coupled to a DUT 14. Frame grabber unit FGU(1) acts as a master and controls a light source 25 to illuminate 26 the DUT 14. The light source 25 is controlled for color, intensity, light pulse shape and repetitiveness, and is controlled to produce narrow light pulses in the range of microseconds extending to milliseconds, depending upon the test being performed. The light source can be run continuously, repeating the light pulse and/or any programmed repetition of a sequence of light pulses. The master frame grabber FGU(1) synchronizes the light source with the clock of the DUT 14 so that an image of the light can be captured by the DUT. Each frame grabber unit 20, FGU(1) to FGU(M), contains a plurality of computational units 10 as shown in FIG. 1B. The picture image coupled from the DUT 14 to the frame grabber units FGU(1) to FGU(M) is divided into portions and distributed to the K computational units contained within each frame grabber unit 20.
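The controllable light parameters named here (color, intensity, pulse shape, duration, repetition) and the clock synchronization suggest a small configuration record; a hedged sketch with illustrative field names and units, not the patent's actual control interface:

```python
from dataclasses import dataclass

@dataclass
class LightPulse:
    """Light-source settings under the master frame grabber's control, per
    the text: color, intensity, pulse shape, duration, and repetition."""
    color: str           # e.g. "red", "green", "blue", "infrared"
    intensity: float     # relative intensity, 0.0 to 1.0
    shape: str           # e.g. "square" or "ramp"
    duration_us: float   # microseconds up to milliseconds, per the text
    period_us: float     # repetition period; 0 means a single pulse

def schedule_pulse(pulse, dut_clock_edge_us):
    """Master-FGU role (sketch): anchor the pulse to the DUT's clock so the
    exposure window sees the illumination; returns (on, off) times in us."""
    start = dut_clock_edge_us
    return start, start + pulse.duration_us
```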


In FIG. 3A are shown two DUT 14, DUT(1) and DUT(2). The two DUT 14 are each connected to separate frame grabber units 20, DUT(1) to frame grabber unit FGU(1) and DUT(2) to frame grabber unit FGU(2). The DUT 14 are controlled and provided signals by a stimulus and control unit 30. Each frame grabber unit 20 controls a separate light source 25: FGU(1) controls light source LS(1) to illuminate 26 DUT(1), and FGU(2) controls light source LS(2) to illuminate 26 DUT(2). The light sources 25 are controlled by each frame grabber unit 20 to be synchronous with the clock of each respective DUT 14. This arrangement is necessary when DUT(1) and DUT(2) cannot be positioned under the same light source, as might be the case if the DUTs are physical modules located in two physically separate test apparatuses, or if the two DUT are of different part numbers requiring a difference in the illumination. The frame grabber units 20, FGU(1) and FGU(2), contain a plurality of computational units 10 as shown in FIG. 1B that operate simultaneously in parallel on portions of a picture image to calculate the image parameters.


In FIG. 3B there are two DUT 14, DUT(1) and DUT(2), controlled by the stimulus and control unit 30. DUT(1) is coupled to a frame grabber unit FGU(1) 20, which acts as a master for control of the light source 25 illuminating 26 both DUT(1) and DUT(2), and FGU(2) is coupled to DUT(2). The master frame grabber FGU(1) synchronizes the clocks of DUT(1) and DUT(2) with the occurrence of the illumination 26 from the light source 25 so that an image of the light can be captured by both DUT(1) and DUT(2). The light source 25 produces fast light pulses, ranging from microseconds to milliseconds, of a color of light. The light source has a range of colors that must be switched between quickly so as not to impede the testing of the DUT. The range of colors comprises red, blue, green, infrared and any additional colors required in the testing of the DUT 14. Each frame grabber unit, FGU(1) and FGU(2), contains a plurality of computational units 10 as shown in FIG. 1B.



FIG. 4A is a block diagram showing a plurality of frame grabber units 20 contained within two groups of frame grabber units 21, where each group 21 is connected to a DUT 14, DUT(1) and DUT(2). Each frame grabber unit, FGU(1,1) to FGU(1,M) and FGU(2,1) to FGU(2,M), contains a plurality of computational units 10 as shown in FIG. 1B. The frame grabber unit FGU(1,1) acts as a master to control the light source 25 LS(1) to illuminate 26 DUT(1), and frame grabber unit FGU(2,1) acts as a master to control the light source 25 LS(2) to illuminate 26 DUT(2). The two DUT 14, DUT(1) and DUT(2), can be the same part number, or different part numbers requiring separate light sources, or two devices or modules physically spaced such that a single light source cannot be used. The stimulus and control unit 30 provides control signals to DUT(1) and DUT(2), which, when illuminated by the light sources LS(1) and LS(2), produce a picture image that is coupled to the groups of frame grabber units 21. The frame grabber unit FGU(1,1) synchronizes the occurrence of the illumination 26 with the clock of DUT(1), and frame grabber unit FGU(2,1) synchronizes the occurrence of the illumination 26 with the clock of DUT(2). The picture data from DUT(1) is coupled to frame grabber units FGU(1,1) to FGU(1,M) and is distributed to the K computational units 10 (FIG. 1B) contained within each frame grabber unit in a fashion that divides the required calculations among the K computational units. In like manner, the picture data from DUT(2) is coupled to frame grabber units FGU(2,1) to FGU(2,M) and is distributed to the K computational units contained within each frame grabber unit. Each computational unit 10 (FIGS. 1A and 1B) operates in parallel, separately and independently of the other computational units, to calculate the image parameters of the picture images from DUT(1) and DUT(2), producing a maximum parallel computational performance that is proportional to the number of computational units 10 that are being used.



FIG. 4B shows a configuration where a plurality of light sensitive devices under test 14, DUT(1), DUT(2) to DUT(N), are coupled to a plurality of groups of frame grabber units 21, G=1, G=2 to G=N. Within each group of frame grabber units 21 there are M frame grabber units 20: FGU(1,1) to FGU(1,M) in group G=1, FGU(2,1) to FGU(2,M) in group G=2, and FGU(N,1) to FGU(N,M) in group G=N. Each frame grabber unit 20, FGU(1,1) to FGU(1,M), FGU(2,1) to FGU(2,M) and FGU(N,1) to FGU(N,M), contains a plurality (1 to K) of computational units 10 as shown in FIG. 1B, which provides the computational power necessary to analyze a plurality of picture images to determine the capability of each DUT 14.


Continuing to refer to FIG. 4B, the image data is divided amongst the computational units 10 in each of the frame grabber units 20. For example, a portion P=1 of an image from DUT(1) is coupled to computational unit K=1 in frame grabber unit FGU(1,1), portion P=K is coupled to computational unit K in FGU(1,1), portion P=K+1 is coupled to computational unit K=1 in FGU(1,2), and portion P=M×K is coupled to computational unit K in FGU(1,M). The results of the computations are stored back into the memory 11 by each CPU 12 and read by a host computer 15 (FIG. 1A) to determine the composite result of each test performed on each DUT 14.


Continuing to refer to FIG. 4B, the frame grabber unit FGU(1,1) acts as a master to control the light source 25 that illuminates 26 each of the devices under test, DUT(1), DUT(2) to DUT(N). The devices under test 14 are digital camera devices of a single part number in chip form located on a semiconductor wafer, for example, or can be in module form, where each DUT 14 is physically spaced such that one light source can be used to illuminate all DUT simultaneously. The control of the light source 25 comprises selecting a color of light; selecting a pulse shape, intensity and duration; and synchronizing the occurrence of the resulting illumination 26 with the clock of the plurality of DUT 14 so that each DUT 14 can capture an image of the light. It is within the scope of this invention that there is a light source independently controlled for each DUT 14, DUT(1), DUT(2) to DUT(N), by a frame grabber unit 20 in each group, G=1, G=2 to G=N. For example, a separate light source for DUT(1) could be controlled by FGU(1,1), a light source for DUT(2) controlled by FGU(2,1), and a light source for DUT(N) controlled by FGU(N,1). Independent light sources might be required when distances are such that one light source cannot illuminate all of the devices 14. It should also be noted that fewer than N light sources can be used; for example, a light source 25 controlled by FGU(1,1) for DUT(1) through DUT(N=4) and a second light source 25 controlled by FGU(N=5,1) for DUT(N=5) to DUT(N).


In FIG. 5 is shown a method of computing the parameters of an image from a digital camera device (DUT). The image is divided into M portions 49. A first portion is coupled to an M=1 frame grabber unit 50, where M is the number of frame grabber units. The first portion is divided into K sub-portions, and each of the K sub-portions is coupled to one of the K computational units within the M=1 frame grabber unit 51, where K is the number of computational units within a frame grabber unit. A second portion is coupled to an M=2 frame grabber unit 52, and the second portion is divided into K sub-portions, each of which is distributed to one of the K computational units contained within the M=2 frame grabber unit 53. An M portion is coupled to an M frame grabber unit 54. The M portion is divided into K sub-portions, and each sub-portion is distributed to one of the K computational units contained within the M frame grabber unit 55. All K computational units within the M frame grabber units operate independently and in parallel to simultaneously compute the image parameters for the M×K sub-portions of the M portions 56. This results in a computational speedup of M×K as compared to a single computational unit operating on the picture image. The results of the computations are stored back into the respective memories of the computational units and coupled to the host computer 57 for aggregating the computed results.
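A sketch of this divide-then-subdivide step and the resulting M×K speedup arithmetic, assuming a flat pixel buffer and the hypothetical helper divide_for_parallel:

```python
def divide_for_parallel(pixels, M, K):
    """FIG. 5 as a sketch: cut the image into M portions, then each portion
    into K sub-portions, yielding M*K independent work items."""
    def chunks(seq, n):
        step = -(-len(seq) // n)  # ceiling division
        return [seq[i:i + step] for i in range(0, len(seq), step)]
    return [chunks(portion, K) for portion in chunks(pixels, M)]

# With one computational unit taking T seconds per image, M*K units take
# roughly T/(M*K): e.g. M=4 frame grabbers of K=4 units turn a 16 s
# computation into about 1 s. No pixel is lost in the division:
work = divide_for_parallel(list(range(1_000_000)), M=4, K=4)
assert sum(len(sub) for portion in work for sub in portion) == 1_000_000
```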


In FIG. 6 is shown a method in which image data is loaded into a frame grabber unit such that each computational unit within the frame grabber unit computes on the pixels distributed to it. For example, a first pixel P(1) of an image is loaded into a first computational unit 60, where P(1)=1, a second pixel is loaded into a second computational unit 61, where P(2)=2, and a K pixel P(K)=K is loaded into a K computational unit 62. Computation of the parameters of loaded pixels commences 63 while additional pixels of an image are loaded into the computational units 64, 66, 60, 61, and 62. The next pixel P(1)=K+1 is coupled to the first computational unit 60, pixel P(2)=K+2 is coupled to the second computational unit, and pixel P(K)=2K is loaded into the K computational unit. The loading of pixels continues until pixel P(K)=Pmax is loaded into a computational unit 65. The results of the computations are stored in the respective computational unit memory 67. For example, the computed results for the first pixel are stored in the memory of the first computational unit, and the computed results for the second pixel are stored in the memory of the second computational unit. When the computational process finishes, the results stored in the memories of the computational units are coupled to the host computer 68, where the results are combined and analyzed to provide a final composite test result.


In FIG. 7 is shown a method for analyzing a plurality of N images created by a plurality of digital camera devices (N-DUT), DUT(1) through DUT(N), that are being tested simultaneously, where N is an integer representing a particular DUT. Referring to FIGS. 1B and 4B along with FIG. 7, the N-DUT are illuminated with light from a light source, and each N-DUT is controlled to capture an image of the light, totaling N images, one image for each DUT 70. A first image from DUT(1) is coupled to a first group (G=1) of frame grabber units 71. Each frame grabber unit FGU(1,1) through FGU(1,M) within group (G=1) contains K computational units. The first image is divided into portions P amongst each of the K computational units within each of the frame grabber units FGU(1,1) through FGU(1,M) 72. For example, a first portion (P=1) of the first image is coupled to the first computational unit (K=1) in frame grabber unit FGU(1,1), the second portion (P=2) of the first image is coupled to the second computational unit (K=2) of FGU(1,1), the P=K+1 image portion is coupled to the first computational unit K=1 of the frame grabber unit FGU(1,2), and so on until the last portion P=M×K of the first image is coupled to the last computational unit K of FGU(1,M), where the total number of image portions P equals the number of M frame grabber units times the K computational units in each frame grabber unit, P=M×K.


Continuing to refer to FIG. 7, a second image from DUT(2) is coupled to a second group (G=2) of frame grabber units 73. The second image is divided amongst each of the K computational units of the second group (G=2) of M frame grabber units 74 in a similar fashion as was done for the first image. For example, a first image portion (P=1) of the second image is coupled to the first computational unit (K=1) in frame grabber unit FGU(2,1), the second portion (P=2) of the second image is coupled to the second computational unit (K=2) of FGU(2,1), the (P=K+1) image portion is coupled to the first computational unit (K=1) of the frame grabber unit FGU(2,2), and so on until the last portion (P=M×K) of the second image is coupled to the last computational unit (K) of FGU(2,M).


The process of coupling the images from the N digital camera devices continues until the image of the last DUT(N) is coupled to the frame grabber group (G=N) 75. The last image from the last DUT(N) is divided amongst each of the K computational units of the last group (G=N) of M frame grabber units 76 in a similar fashion as was done for the image from DUT(1). For example, a first image portion (P=1) of the last image is coupled to the first computational unit (K=1) in frame grabber unit FGU(N,1), the second portion (P=2) of the last image is coupled to the second computational unit (K=2) of FGU(N,1), the (P=K+1) image portion is coupled to the first computational unit (K=1) of the frame grabber unit FGU(N,2), and so on until the last portion (P=M×K) of the last image is coupled to the last computational unit K of FGU(N,M). The results of the computations are coupled to a host computer 78 for the combining of the results to determine the quality of the digital camera devices.


Continuing to refer to FIG. 7, there are P=M×K×N image portions that are calculated in parallel by M×K×N computational units. The portions of the image data are independent and separate from each other, allowing each CPU 12 (FIG. 1A) to perform at a maximum computational rate. As the data is being loaded into each computational unit, computation commences on the loaded portion of the data while the remainder of the data is loaded, to maximize the overlap of data loading and computation and improve total test time. The data can be loaded pixel by pixel. For example, the first pixel (p=1) of an image is coupled to the first computational unit (K=1) of the frame grabber unit FGU(1,1), the second pixel (p=2) to the second computational unit (K=2) of FGU(1,1), the (p=K) pixel to the last computational unit K of FGU(1,1), the (p=K+1) pixel to the first computational unit (K=1) of FGU(1,2), and the (p=K×M) pixel to the computational unit K of FGU(1,M). Computation is started when the first pixel data is loaded, while additional image data is loaded by coupling the (p=K×M+1) pixel into the first computational unit (K=1) of FGU(1,1), the (p=K×M+2) pixel into computational unit (K=2) of FGU(1,1), the (p=K×(M+1)) pixel into computational unit K of FGU(1,1), the (p=K×(M+1)+1) pixel into computational unit (K=1) of FGU(1,2), and so on until the (p=2K×M) pixel is coupled to the computational unit K of FGU(1,M), returning to the first computational unit (K=1) in FGU(1,1) for the next pixel. This procedure of loading a pixel and immediately beginning computation provides a maximum overlap between loading image data and computing image parameters. An alternative to loading pixel by pixel is to load the computational units by row or column, by groups of rows or columns, or by other groupings of pixels such as groups of pixels clustered in a checkerboard fashion.
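The load-and-compute overlap described here can be mimicked with one worker per computational unit consuming a pixel stream; a simplified sketch using Python threads (the real system uses FPGA-fed memories and CPUs, and the running mean is a placeholder for the actual parameter math):

```python
import threading, queue

def computational_unit(pixel_q, results):
    """Consume pixels as they arrive and compute immediately, so that
    loading and computation overlap."""
    total = count = 0
    for pixel in iter(pixel_q.get, None):  # None marks the end of the image
        total += pixel                     # stand-in for real parameter math
        count += 1
    results.append(total / count)

def stream_image(pixels, num_units):
    """Deal pixels round-robin to units that are already computing."""
    qs = [queue.Queue() for _ in range(num_units)]
    results = []
    workers = [threading.Thread(target=computational_unit, args=(q, results))
               for q in qs]
    for w in workers:
        w.start()
    for p, pixel in enumerate(pixels):     # loading overlaps computation
        qs[p % num_units].put(pixel)
    for q in qs:
        q.put(None)                        # signal end of image
    for w in workers:
        w.join()
    return results                         # one partial result per unit
```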


In FIG. 8 is shown a method of performing image tests on a plurality of digital camera devices (N-DUT) simultaneously. A computational unit is formed containing a memory and a computer 90. A frame grabber unit is formed containing K computational units 91. A group of frame grabber units is formed containing M frame grabber units 92. Electrical contact is made to the N-DUT 93, allowing N groups of frame grabber units to be coupled to the N-DUT 94. The N-DUT are illuminated by a single light source 95 controlled by a master frame grabber unit 96. The master frame grabber unit can be any of the M frame grabber units. The light source is controlled for light parameters comprising color, intensity, pulse shape, duration, and repetitiveness. The N-DUT are controlled to produce N picture images 97, and the N picture images are coupled to N groups of frame grabber units 98. Each of the N images is coupled to one of the N groups of frame grabber units and is distributed to the K computational units contained within the M frame grabber units 99. Image parameters are calculated using the K computational units, operating simultaneously, that are located within each of the N groups of frame grabber units 100. The results of the computation are stored into the computational unit memories and coupled to a host computer for combining into final results to determine the quality of the N-DUT 101.
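The FIG. 8 sequence composes naturally from the pieces sketched earlier; a hypothetical top-level driver, reusing the illustrative divide_for_parallel and fpn_estimate helpers defined above (the max-based verdict is a stand-in for the host's real pass/fail criteria):

```python
def test_n_duts(dut_images, M, K):
    """FIG. 8 flow as a sketch: one captured image per DUT is divided among
    a group of M frame grabbers with K units each, and the host reduces the
    per-unit results to one verdict per DUT."""
    verdicts = []
    for image in dut_images:                      # steps 97-98: N images
        work = divide_for_parallel(image, M, K)   # step 99: distribute
        unit_results = [fpn_estimate(sub)         # step 100: parallel calc
                        for portion in work for sub in portion]
        verdicts.append(max(unit_results))        # step 101: host combines
    return verdicts
```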


While the invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made without departing from the spirit and scope of the invention.

Claims
  • 1. A system comprising: a first processing device configured to receive a first portion of an image from an image capture device under test, wherein the first portion of the image comprises a first group of pixel columns and/or pixel rows generated by the image capture device under test in response to illumination; and a second processing device configured to receive a second portion of the same image from the image capture device under test, wherein the second portion of the image comprises a second separate group of pixel columns and/or pixel rows generated by the image capture device under test in response to the illumination, and wherein the first processing device and the second processing device are configured to process the first portion of the image and the second portion of the image, respectively, in parallel.
  • 2. The system of claim 1, wherein the first processing device is configured to divide the first portion into sub-portions, and wherein the first processing device includes multiple computational units configured to process corresponding sub-portions in parallel.
  • 3. The system of claim 2, wherein the second processing device is configured to divide the second portion into sub-portions, and wherein the second processing device includes multiple computational units configured to process corresponding sub-portions in parallel.
  • 4. The system of claim 1, wherein the first processing device and the second processing device are located in a frame grabber device.
  • 5. The system of claim 1, wherein the first processing device and the second processing device are located in different frame grabber devices.
  • 6. The system of claim 1, wherein the first processing device and the second processing device are configured to output the processed first and second portions of the image to a host computer.
  • 7. A method comprising: receiving an image generated by a device under test; dividing, by a frame grabber device, a portion of the image into physically separate sub-portions of rows and/or columns; processing the sub-portions of rows and/or columns for the image in parallel with corresponding computational units in the frame grabber device; and outputting, by the frame grabber device, the processed sub-portions of the image to a host computer.
  • 8. The method of claim 7 wherein the device under test comprises an image capture device and further comprising controlling the image capture device to capture the image.
  • 9. The method of claim 8, wherein the controlling of the image capture device further comprises prompting illumination of the image capture device by a light source.
  • 10. The method of claim 9, wherein the controlling of the image capture device further comprises synchronizing a capture of the image by the image capture device with the illumination of the image capture device by the light source.
  • 11. The method of claim 7 further comprising dividing, by the frame grabber device, the image into multiple portions including the portion of the image divided in the sub-portions.
  • 12. A method comprising: dividing, by a frame grabber device, a portion of an image into sub-portions; processing the sub-portions of the image in parallel with corresponding computational units in the frame grabber device; and outputting, by the frame grabber device, the processed sub-portions of the image to a host computer, wherein the sub-portions include groups of pixels of the image that at least correspond to rows of pixels of the image, columns of pixels of the image, or clusters of pixels in a checkerboard configuration in the image.
  • 13. A device comprising: a first frame grabber device configured to: receive a stream of pixels for an image generated by a device under test; divide a first portion of the stream of pixels for the image by rows and/or columns into sub-portions; and multiple processing devices operating within the first frame grabber configured to separately process the sub-portions of the image in parallel, respectively.
  • 14. The device of claim 13 further comprising a second frame grabber device configured to divide a second portion of an image into sub-portions, wherein the frame grabber device includes multiple processing devices configured to process the sub-portions of the image in parallel.
  • 15. The device of claim 14, wherein the first frame grabber device is configured to divide the image into the first portion and the second portion.
  • 16. The device of claim 13, wherein the first frame grabber device is configured to output the processed sub-portions to a host computer.
  • 17. The device of claim 13, wherein the sub-portions include groups of pixels of the image.
  • 18. The device of claim 17, wherein the groups of pixels correspond to rows of pixels of the image.
  • 19. The device of claim 17, wherein the groups of pixels correspond to columns of pixels of the image.
  • 20. A device comprising: a first frame grabber device configured to divide a first portion of an image into sub-portions, wherein the frame grabber device includes multiple processing devices configured to process the sub-portions of the image in parallel, wherein the sub-portions include groups of pixels of the image and the groups of pixels correspond to clusters of pixels in a checkerboard configuration in the image.
Priority Claims (1)
Number Date Country Kind
04392036 Oct 2004 EP regional
RELATED PATENT APPLICATION

This is a continuation of U.S. patent application Ser. No. 12/456,925, filed on Jun. 24, 2009, now issued as U.S. Pat. No. 8,068,182, which is a divisional application of U.S. patent application Ser. No. 10/980,481, filed on Nov. 3, 2004, now issued as U.S. Pat. No. 7,567,273, all of which are herein incorporated by reference in their entirety. This application is related to U.S. patent application Ser. No. 10/930,351, filed on Aug. 31, 2004, now issued as U.S. Pat. No. 7,486,309. This application is related to U.S. patent application Ser. No. 10/929,651, filed on Aug. 30, 2004, now issued as U.S. Pat. No. 7,505,064. This application is related to U.S. patent application Ser. No. 10/930,353, filed on Aug. 31, 2004, now published as US Patent Publication No. 2006/0038916. This application is related to U.S. patent application Ser. No. 10/929,652, filed on Aug. 30, 2004, now issued as U.S. Pat. No. 7,155,119. This application is related to U.S. patent application Ser. No. 10/929,300, filed on Aug. 30, 2004, now issued as U.S. Pat. No. 7,248,347. This application is related to U.S. patent application Ser. No. 10/929,653, filed on Aug. 30, 2004, now issued as U.S. Pat. No. 7,403,229.

US Referenced Citations (152)
Number Name Date Kind
3255304 Dischert et al. Jun 1966 A
3447874 Back Jun 1969 A
3858063 Saleme Dec 1974 A
4167022 Dischert et al. Sep 1979 A
4298944 Stoub et al. Nov 1981 A
4326219 Griesshaber et al. Apr 1982 A
4384769 Brei et al. May 1983 A
4507683 Griesshaber et al. Mar 1985 A
4612666 King Sep 1986 A
4620790 Hufnagel Nov 1986 A
4760447 Koka et al. Jul 1988 A
4991007 Corley Feb 1991 A
5033015 Zwirn Jul 1991 A
5047260 Durand Sep 1991 A
5142648 Fitts et al. Aug 1992 A
5159455 Cox et al. Oct 1992 A
5179437 Kawada et al. Jan 1993 A
5237408 Blum et al. Aug 1993 A
5257092 Noguchi et al. Oct 1993 A
5263120 Bickel Nov 1993 A
5267038 Fister Nov 1993 A
5270961 Bajat et al. Dec 1993 A
5304809 Wickersheim Apr 1994 A
5327226 Tanabe Jul 1994 A
5389774 Gelman et al. Feb 1995 A
5467128 Yates et al. Nov 1995 A
5489771 Beach et al. Feb 1996 A
5495329 Anderson et al. Feb 1996 A
5649258 Bergstresser et al. Jul 1997 A
5673083 Izumi et al. Sep 1997 A
5699440 Carmeli Dec 1997 A
5745126 Jain et al. Apr 1998 A
5748230 Orlando et al. May 1998 A
5753931 Borchers et al. May 1998 A
5760829 Sussmeier Jun 1998 A
5781351 Murakami et al. Jul 1998 A
5825495 Huber Oct 1998 A
5867250 Baron Feb 1999 A
5880586 Dukart et al. Mar 1999 A
5906682 Bouras et al. May 1999 A
5953055 Huang et al. Sep 1999 A
5960081 Vynne et al. Sep 1999 A
5998867 Jensen et al. Dec 1999 A
5999251 Teramoto et al. Dec 1999 A
6005704 Chmielewski et al. Dec 1999 A
6041336 Steinlechner Mar 2000 A
6075601 Marcus et al. Jun 2000 A
6111243 Lara Aug 2000 A
6118886 Baumgart et al. Sep 2000 A
6153985 Grossman Nov 2000 A
6177986 Stockton Jan 2001 B1
6184675 Bannai Feb 2001 B1
6195159 MacDonald et al. Feb 2001 B1
6198838 Roehrig et al. Mar 2001 B1
6201600 Sites et al. Mar 2001 B1
6205244 Bawolek et al. Mar 2001 B1
6219443 Lawrence Apr 2001 B1
6232766 Saouli et al. May 2001 B1
6313652 Maeng Nov 2001 B1
6366312 Crittenden Apr 2002 B1
6389687 Glenn May 2002 B1
6424843 Reitmaa et al. Jul 2002 B1
6427078 Wilska et al. Jul 2002 B1
6473062 Debiez et al. Oct 2002 B1
6476417 Honda et al. Nov 2002 B2
6480346 Funakoshi Nov 2002 B2
6512587 Marcus et al. Jan 2003 B1
6518656 Nakayama et al. Feb 2003 B1
6678394 Nichani Jan 2004 B1
6686588 Webster et al. Feb 2004 B1
6701002 Karube Mar 2004 B1
6714241 Baer Mar 2004 B2
6741277 Rau May 2004 B1
6759814 Vogel et al. Jul 2004 B2
6788390 Tanaka Sep 2004 B2
6798450 Baer Sep 2004 B2
6807232 Nicholson et al. Oct 2004 B2
6809330 Krohn et al. Oct 2004 B2
6822657 Baer Nov 2004 B2
6940998 Garoutte Sep 2005 B2
6969843 Beach Nov 2005 B1
7015951 Yoshigahara et al. Mar 2006 B1
7030926 Miyake et al. Apr 2006 B2
7068302 Ebenstein et al. Jun 2006 B2
7071966 Lu et al. Jul 2006 B2
7103208 Wengender et al. Sep 2006 B2
7126630 Lee et al. Oct 2006 B1
7136157 Gomm et al. Nov 2006 B2
7151560 Matherson et al. Dec 2006 B2
7155119 Knoedgen et al. Dec 2006 B2
7158170 Gustavsson et al. Jan 2007 B2
7188036 Strittmatter Mar 2007 B2
7200497 Wang et al. Apr 2007 B2
7215374 Why et al. May 2007 B2
7248284 Pierce Jul 2007 B2
7248347 Knoedgen et al. Jul 2007 B2
7265781 Noguchi et al. Sep 2007 B2
7403229 Knoedgen et al. Jul 2008 B2
7405764 Gustavsson et al. Jul 2008 B2
7426316 Vehvilainen Sep 2008 B2
7486309 Knoedgen et al. Feb 2009 B2
7505064 Knoedgen et al. Mar 2009 B2
7812858 Knoedgen et al. Oct 2010 B2
7948519 Knoedgen et al. May 2011 B2
7965316 Knoedgen et al. Jun 2011 B2
8068182 Knoedgen et al. Nov 2011 B2
8274599 Gustavsson et al. Sep 2012 B2
20010050721 Miyake Dec 2001 A1
20020006687 Lam Jan 2002 A1
20020044215 Takagi et al. Apr 2002 A1
20020076088 Tsai Jun 2002 A1
20020080266 Shinamiya et al. Jun 2002 A1
20020093585 Lemstrom et al. Jul 2002 A1
20020113878 Iwai et al. Aug 2002 A1
20020126285 Gilmore et al. Sep 2002 A1
20020167605 Akimoto et al. Nov 2002 A1
20020186311 Sevat et al. Dec 2002 A1
20020191973 Hofer et al. Dec 2002 A1
20030002749 Vehvilainen Jan 2003 A1
20030007084 Nakjoh Jan 2003 A1
20030030648 Baer Feb 2003 A1
20030044042 King et al. Mar 2003 A1
20030048375 Seim Mar 2003 A1
20030067595 Alderson et al. Apr 2003 A1
20030112360 Liao et al. Jun 2003 A1
20030142374 Silverstein Jul 2003 A1
20030146976 Liu Aug 2003 A1
20030161163 Hussey et al. Aug 2003 A1
20030193604 Robins et al. Oct 2003 A1
20030214727 Min et al. Nov 2003 A1
20030218688 Shaw et al. Nov 2003 A1
20030221042 Watkins et al. Nov 2003 A1
20030223008 Kim et al. Dec 2003 A1
20040012688 Tinnerino et al. Jan 2004 A1
20040013420 Hara Jan 2004 A1
20040027456 Pierce Feb 2004 A1
20040032496 Ebenstein et al. Feb 2004 A1
20040058487 Eslamy et al. Mar 2004 A1
20040113047 Tu Jun 2004 A1
20040126005 Duvdevani et al. Jul 2004 A1
20040135907 Krohn et al. Jul 2004 A1
20040189812 Gustavsson et al. Sep 2004 A1
20050093990 Aoyama May 2005 A1
20050162517 Fujihara et al. Jul 2005 A1
20050219365 DiCarlo et al. Oct 2005 A1
20050231596 Marchese Oct 2005 A1
20050285973 Singh Dec 2005 A1
20060038916 Knoedgen et al. Feb 2006 A1
20060077259 Knoedgen et al. Apr 2006 A1
20070091334 Yamaguchi et al. Apr 2007 A1
20080049127 Maeda et al. Feb 2008 A1
20090167869 Knoedgen et al. Jul 2009 A1
Foreign Referenced Citations (39)
Number Date Country
3417016 May 1984 DE
19502276 Jan 1995 DE
393848 Oct 1990 EP
599175 Jun 1994 EP
656731 Jul 1995 EP
679932 Nov 1995 EP
819927 Jan 1998 EP
1071944 Mar 2001 EP
1180718 Feb 2002 EP
1286553 Feb 2003 EP
348996 Oct 2003 EP
1382194 Jan 2004 EP
1389878 Feb 2004 EP
785439 Apr 2004 EP
463342 Sep 2004 EP
2426813 Dec 2006 GB
63056927 Mar 1988 JP
2197870 Aug 1990 JP
3099376 Apr 1991 JP
2002128088 Aug 2001 JP
2001292461 Oct 2001 JP
2002077955 Mar 2002 JP
200232918 Aug 2002 JP
2003270292 Sep 2002 JP
2002290994 Oct 2002 JP
2002345001 Nov 2002 JP
2003051973 Feb 2003 JP
2003101823 Apr 2003 JP
2003157425 May 2003 JP
2003179949 May 2003 JP
2003035746 Jul 2003 JP
2003219436 Jul 2003 JP
3902487 Sep 2003 JP
2003259126 Sep 2003 JP
2004226462 Aug 2004 JP
20030052216 Jun 2003 KR
WO0191193 Nov 2001 WO
WO02102060 Dec 2002 WO
WO03096761 Nov 2003 WO
Non-Patent Literature Citations (7)
Entry
Donglai Xu et al.; “An FPGA-Based Low-Cost Frame Grabber for Image Processing Application”; Electronics, Circuits, and Systems, 2000, 7th IEEE International Conference; Dec. 17-20, 2000; pp. 333-336.
Fujitsu Automation Ltd XP-200319159 Camera Module Testing System CMT-102-V4; Oct. 2003.
Burr-Brown Corp.; “XTR110: Precision Voltage to Current Converter/Transmitter”; 1993; XP-002319985; retrieved from internet url:http://www.pci-card.com/XTR110.pdf7.
M.L. Chithambo et al.; “A Pulsed Light-Emitting-Diode System for Stimulation of Luminescence”; XP-002319984; Meas. Sci. Technology 11 (2000) 418-424; IOP Pub. Ltd.
B.T. Turko; “Low Smear CCD Camera for High Frame Rates”; IEEE Transactions on Nuclear Science, vol. 36, no. 1, Feb. 1989; pp. 165-169.
Merklinger, Harold; “The In's and Out's of Focus”; Aug. 1, 1990; Canada; XP002316955 ISBN:0-9695025-0-8; Internet Edition; 92 pages.
Stolowitz Ford Cowger LLP; Related Case Listing; Oct. 21, 2011; 2 pages.
Related Publications (1)
Number Date Country
20120038779 A1 Feb 2012 US
Divisions (1)
Number Date Country
Parent 10980481 Nov 2004 US
Child 12456925 US
Continuations (1)
Number Date Country
Parent 12456925 Jun 2009 US
Child 13280234 US