This application relates to the acquisition of image data by digital cameras and other electronic image acquisition devices, and, more specifically, to detecting the presence of a defined type of object within the image.
Electronic cameras image scenes onto a two-dimensional sensor such as a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS) device or other type of light sensor. These devices include a large number of photo-detectors (typically two, three, four or more million) arranged across a small two-dimensional surface that individually generate a signal proportional to the intensity of light or other optical radiation (including infrared and ultra-violet regions of the spectrum adjacent the visible light wavelengths) striking the element. These elements, forming pixels of an image, are typically scanned in a raster pattern to generate a serial stream of data representative of the intensity of radiation striking one sensor element after another as they are scanned. Color data are most commonly obtained by using photo-detectors that are individually sensitive to distinct color components (such as red, green and blue), alternately distributed across the sensor.
A popular form of such an electronic camera is a small hand-held digital camera that records data of a large number of picture frames either as still photograph “snapshots” or as sequences of frames forming a moving picture. A significant amount of image processing is typically performed on the data of each frame within the camera before storing on a removable non-volatile memory such as a magnetic tape cartridge, a flash memory card, a recordable optical disk or a hard magnetic disk drive. The processed data are typically displayed as a reduced resolution image on a liquid crystal display (LCD) device on the outside of the camera. The processed data are also typically compressed before storage in the non-volatile memory in order to reduce the amount of storage capacity that is taken by the data for each picture frame.
The data acquired by the image sensor are typically processed to compensate for imperfections of the camera and to generally improve the quality of the image obtainable from the data. The correction for any defective pixel photodetector elements of the sensor is one processing function. Another is white balance correction wherein the relative magnitudes of different pixels of the primary colors are set to represent white. This processing also includes de-mosaicing the individual pixel data to superimpose data from spatially separate monochromatic pixel detectors of the sensor to render superimposed multi-colored pixels in the image data. This de-mosaicing then makes it desirable to process the data to enhance and smooth edges of the image. Compensation of the image data for noise and variations of the camera optical system across the image and for variations among the sensor photodetectors is also typically performed within the camera. Other processing typically includes one or more of gamma correction, contrast stretching, chrominance filtering and the like.
Electronic cameras also nearly always include an automatic exposure control capability that sets the exposure time, size of its aperture opening and analog electronic gain of the sensor to result in the luminance of the image or succession of images being at a certain level based upon calibrations for the sensor being used and user preferences. These exposure parameters are calculated in advance of the picture being taken, and then used to control the camera during acquisition of the image data. For a scene with a particular level of illumination, a decrease in the exposure time is made up by increasing the size of the aperture or the gain of the sensor, or both, in order to obtain the data within a certain luminance range. An increased aperture results in an image with a reduced depth of field and increased optical blur, and increasing the gain causes the noise within the image to increase. Conversely, when the scene is brightly lighted, the aperture and/or gain are reduced and compensated for by increasing the exposure time, the resulting image having a greater depth of field and/or reduced noise. In addition to analog gain being adjusted, or in place of it, the digital gain of an image is often adjusted after the data have been captured.
Other processing that may also be performed by electronic cameras includes a detection of the likelihood that a certain type of object is present within the image. An example object is a human face. When there is a likelihood that the object is present in the image, its location is also determined. This allows the camera to act differently upon that portion of the image during acquisition and/or processing of the acquired data.
Primarily because of the large amount of data processing performed by a typical digital image capturing device, it is highly desirable that any processing to detect the presence of a certain object or objects in the image be done efficiently, using a minimum amount of hardware resources and performing the processing in a short amount of time.
In a method of detecting a likelihood that an object of a particular type is present within an image being captured, the image frame is divided into windows which preferably overlap each other. The image data within the individual windows are preferably evaluated independently of the data of other windows. Those window data are evaluated with respect to data stored in the camera of multiple feature sets representative of the object, one feature set at a time, to generate individual scores for the windows as to the likelihood that at least a portion of the object is present in the window. Typically, the first feature set is relatively simple and subsequent feature sets become more complicated with respect to characteristics of the object.
All of the windows of a given image are usually evaluated with respect to the first feature set but only those windows having the highest scores as a result of this first round of evaluation, such as those over a preset level, are then evaluated with respect to the second feature set. Any subsequent evaluation with respect to a third or more feature sets also process only data of windows having the highest score from the immediately preceding round of evaluation. By rejecting windows right away that cannot contain the object, the amount of data processing is significantly reduced.
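The staged evaluation and early rejection described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: each "feature" is assumed to be just a scoring function, and each stage is assumed to carry one empirically chosen threshold (all names here are hypothetical).

```python
def cascade_detect(window, feature_sets, thresholds):
    """Evaluate one image window against successive feature sets.

    Hypothetical sketch: `feature_sets` is a list of stages, each a
    list of scoring functions; `thresholds` holds one cutoff per
    stage. A window is rejected as soon as its accumulated score for
    a stage falls below that stage's threshold, so later (more
    complicated) stages run only on surviving windows.
    """
    for features, threshold in zip(feature_sets, thresholds):
        score = sum(f(window) for f in features)   # steps 77-78
        if score < threshold:                      # step 80
            return False    # reject: object very unlikely here
    return True             # survived every stage: likely target window

# Toy example: "features" that just probe pixel sums of a tiny window.
window = [5, 7, 9]
stage1 = [lambda w: sum(w)]                 # simple, general feature
stage2 = [lambda w: w[0], lambda w: w[2]]   # more specific features
print(cascade_detect(window, [stage1, stage2], [10, 12]))  # True
```

Because most windows of a typical frame fail the cheap first stage, the expensive later stages execute only a few times, which is the source of the processing savings the text describes.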
As part of the individual window evaluations, a score results from the evaluation of the image data with respect to the feature set data. Rather than simply increasing the score by one of two amounts by using only pass/fail criteria, non-linear interpolation between these two amounts is preferably utilized for evaluations that do not clearly result in one or the other of the two amounts. This improves the accuracy of the evaluations.
Also as part of the individual window evaluations, relative rotation between the window image and that of the stored feature set is preferably performed. This enables detection of the object over a range of rotations with respect to the image frame. Rather than rotating the image data with respect to the fixed feature set data, this rotation may be performed the other way around. That is, the feature set may be rotated by changing a parameter, such as a constant, of the stored feature set data. This feature set rotation is preferably performed at least in a plane of the x and y-axes, about the z-axis extending out of the surface of the image.
Rotation of the image about an axis passing through the object image may effectively be accomplished by providing the data of each feature set for a number of different rotational positions of the object. The image data for an individual window are then correlated with the stored feature set data for each of the number of rotational positions. Typically, feature set data are stored for several distinct rotational positions of the object about at least the y-axis.
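Evaluating a window against feature data stored for several rotational positions can be sketched as below. The dot-product correlation and the toy three-angle template store are assumptions for illustration only; the camera's actual stored feature data would be calibrated per object type.

```python
def best_rotation_score(window_vec, templates):
    """Correlate a window (flattened to a vector) against stored
    feature templates, one per rotational position of the object,
    and return the best-matching rotation and its score.

    Hypothetical sketch: a plain dot product stands in for the
    correlation F(v, I) used elsewhere in the text.
    """
    scores = {angle: sum(a * b for a, b in zip(tmpl, window_vec))
              for angle, tmpl in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Toy templates for three out-of-plane rotational positions.
templates = {-30: [0.0, 1.0, 0.0],
               0: [1.0, 0.0, 0.0],
              30: [0.0, 0.0, 1.0]}
angle, score = best_rotation_score([0.9, 0.1, 0.2], templates)
print(angle, score)  # 0 0.9
```

The window data are correlated once per stored rotational position, so the cost grows linearly with the number of rotations covered, which is why the text stores only several distinct positions per axis.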
As part of detecting the likelihood that the designated type of object is part of the image, its location within the image is determined since the evaluation has been performed on individual windows whose positions within the image are known. The camera may then use this information to advantage in one or more ways during acquisition of the image, during image processing after acquisition, or both. It may automatically focus on the object, overriding other focusing criteria normally used by the camera. The camera may also adjust the exposure of the image to take characteristics of the object into account. Color correction of the object may also be provided. A popular application of the object detection techniques herein is when the human face is the object, which is the example used, but it will be recognized that these techniques are not limited to faces but rather have application to a wide variety of different types of objects.
Additional objects, features and advantages of the various aspects of the present invention are included in the following detailed description of exemplary embodiments thereof, which description should be taken in conjunction with the accompanying drawings. All patents, patent applications, articles, other publications and things referenced herein are hereby incorporated herein by this reference in their entirety for all purposes. In the event of any conflict in the definition or use of terms herein with those of an incorporated document or thing, the definition and use herein shall prevail.
In
The optical system 13 can be a single lens, as shown, but can alternatively be a set of lenses. An image 29 of a scene 31 is formed in visible optical radiation through an aperture 32 and a shutter 33 onto a two-dimensional surface of an image sensor 35. A motive element 34 moves one or more elements of the optical system 13 to focus the image 29 on the sensor 35. An electrical output 37 of the sensor carries an analog signal resulting from scanning individual photo-detectors of the surface of the sensor 35 onto which the image 29 is projected. The sensor 35 typically contains a large number of individual photo-detectors arranged in a two-dimensional array of rows and columns to detect individual pixels of the image 29. Signals proportional to the intensity of light striking the individual photo-detectors are obtained in the output 37 in time sequence, typically by scanning them in a raster pattern, where the rows of photo-detectors are scanned one at a time from left to right, beginning at the top row, to generate a frame of image data from which the image 29 may be reconstructed. The analog signal 37 is applied to an analog-to-digital converter circuit chip 39 that generates digital data in circuits 41 of the image 29. Typically, the signal in circuits 41 is a sequence of individual blocks of digital data representing the intensity of light striking the individual photo-detectors of the sensor 35.
The photo-detectors of the sensor 35 typically detect the intensity of the image pixel striking them in one of two or more individual color components. Early sensors detected only two separate colors of the image. Detection of three primary colors, such as red, green and blue (RGB) components, is now common. Currently, image sensors that detect more than three color components are becoming available.
Processing of the image data in circuits 41 and control of the camera operation are provided, in this embodiment, by a single integrated circuit chip 43 (which may also include the analog-to-digital converter instead of using the separate circuit chip 39). These functions may be implemented by several integrated circuit chips connected together but a single chip is certainly preferred. In addition to being connected with the circuits 17, 21, 25 and 41, the circuit chip 43 is connected to control and status lines 45. The lines 45 are, in turn, connected with the aperture 32, shutter 33, focus actuator 34, sensor 35, analog-to-digital converter 39 and other components of the camera to provide a synchronous operation of them. Signals in the lines 45 from the processor 43 drive the focus actuator 34 and set the size of the opening of the aperture 32, as well as operate the shutter 33. The gain of the analog signal path is also set by the processor 43 through the lines 45. This gain typically takes place in the analog-to-digital converter which, in the case of a CCD sensor, is part of the sensor, or in the case of a CMOS sensor, may be part of a separate analog-to-digital converter as shown in
A separate volatile random-access memory circuit chip 47 is also connected to the processor chip 43 through lines 48 for temporary data storage. Also, a separate non-volatile memory chip 49 is connected to the processor chip 43 through lines 50 for storage of the processor program, calibration data and the like. The memory 49 may be flash memory, which is re-programmable, or a memory that is programmable only once, such as a masked programmable read-only-memory (PROM) or an electrically programmable read-only-memory (EPROM). A usual clock circuit 51 is provided within the camera for providing clock signals to the circuit chips therein and other components. Rather than a separate component, the clock circuit for the system may alternatively be included on the processor chip 43.
A general block diagram of the processor chip 43 is given in
Circuits 63 of
Overall Object Detection Processing
Referring to
The image is preferably divided into individual windows in order to be able to separately process the data of each window. This is illustrated in
A database stored within a non-volatile memory of the camera contains data of two or more sets of image features that are used in respective two or more processing stages to classify the individual windows as likely or not to contain at least a portion of the face or other object. In a step 73, data of a first of these feature sets is loaded into the processor memory. Each feature set includes data of two or more individual features of the face or other object being detected. This first set contains the most general features, in order to do a first pass of classifying the image with relatively simple processing. One or more other feature sets are later used to more specifically determine the likelihood that the object exists in the individual windows, and typically requires more processing and time to complete.
The brightness of the image within the current window is normalized, as indicated in a step 75, without use of data from any of the other windows. The image of that window is then scaled as part of determining the degree to which this image portion matches the particular feature with which it is being compared, as indicated in step 76. Specific exemplary techniques of scaling are described below. In scaling, the size of the image is altered to place it on the same scale as the features with which the image is later compared. Alternatively, the feature set data could be changed in scale to match that of the image.
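The per-window brightness normalization of step 75 can be sketched as follows. The text does not prescribe a formula, so the zero-mean, unit-variance normalization below is one common assumption; what matters is that it uses only the window's own pixels, not data from any other window.

```python
def normalize_window(pixels):
    """Normalize the brightness of a single window using only its
    own pixel values (no data from other windows).

    Hypothetical sketch: shift to zero mean and scale to unit
    variance, a common choice for making the subsequent feature
    correlation insensitive to overall window brightness.
    """
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5 or 1.0   # guard against a perfectly flat window
    return [(p - mean) / std for p in pixels]

w = normalize_window([10, 20, 30, 40])
print([round(v, 3) for v in w])  # [-1.342, -0.447, 0.447, 1.342]
```

Because each window is normalized independently, the same stored feature data can be correlated against bright and dark regions of the frame without per-region recalibration.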
In a step 77, the scaled and normalized data of the current window are then evaluated with respect to the loaded data of the individual features of the first feature set. The result is a numeric score with a value that represents a level of correlation between the portion of the image bounded within the current window and the individual features of the first set. The score from the first feature evaluation is stored, in a step 78, and the scores from evaluations of the other features of the given feature set are then added to it. High scores result from a determination that there is a high likelihood that the object is present within the current window, and low scores from a determination of a low likelihood of the object's presence. Additional details of this classifying step are given below.
The steps 77 and 78 are typically carried out many times to completely evaluate an image window, once for each of multiple features in each of multiple feature sets. In order to reduce the amount of processing, however, the later comparisons of the image with the individual features may be limited to areas of the image determined during evaluation of earlier features to possibly contain the object. Conversely, areas of the image determined early in the processing to not contain the object may be excluded from further evaluation with respect to additional feature sets.
After the current window of the image has been evaluated in steps 76 and 77 with respect to a specific feature, it is determined in a step 79 whether there are any more features of the current feature set that are yet to be evaluated. If so, the processing returns to the classifying step 77 for comparison of the image with the new feature in the same manner as described above. If not, in a step 80, after the image has been evaluated with respect to all the features of one feature set, the scores accumulated in the step 78 are compared with a threshold established specifically for the feature set just completed. This threshold is typically empirically determined and stored as part of the feature set data. If the score is less than this threshold, it is determined in the step 80 to reject the window, in which case processing of image data within that window ceases and moves through a step 84 to process data of another window yet unprocessed. But if the score is equal to or greater than the threshold, the processing proceeds from the step 80 to a step 82.
After completion of processing for one feature set of a window that is not rejected by the step 80, the next step 82 determines whether there are any further feature sets with which data of the current image window have not yet been processed and it is determined to be desirable to do such further processing. If so, the processing increments to the next feature set, in a step 83, and then begins by loading data of that feature set in the step 73. The processing described above with respect to the steps 75-80 is then repeated for this other feature set, except that the normalization step 75 and the image scaling step 76 are typically not repeated. If the scaling 76 is performed by scaling data of the image, it usually needs to be done only once for each window. The image scale initially determined for a given window may then be used during classification of the image portion in that window with respect to subsequent features.
Once it is determined by the step 82 that the current image window has been classified with respect to all of the feature sets, or some desired set of less than all the feature sets, then it is determined in a step 84 whether all the desired windows of the image have been processed. If not, another window not yet processed is pointed to at a step 85, and the processing returns to the step 72 where the data of the image within that window are processed in the same manner as described above. Once it is determined in the step 84 that all the desired windows have been processed and classified, the results are reported in a step 86. Those windows of the current image frame that have been identified as target windows (that is, those not rejected by the step 80 and therefore likely to contain an image of the object) are reported. The existence and location within the image frame of the face or other object of interest has then been determined.
Scaling
As part of one specific technique for carrying out the scaling step 76, the image may be divided into individual windows in order to be able to separately process the data of each window. This is illustrated in
As part of evaluating whether an object is within a given window, the portion of the image within the window is demagnified in steps to make it smaller. At each step, the data of the image are classified (step 77) by use of data of the feature currently being evaluated. Alternatively, the data of the feature may be magnified in steps and compared with data of the image within the window at each step. The result in either case is to determine whether the window contains the object, and if so, optionally where within the window the object is positioned. Usually, each scaled image is processed independently of the others, and the decisions about the presence or absence of objects at each scale are then combined to make a final decision. It is determined in the step 80 whether the accumulated score for a particular feature set exceeds the predetermined threshold or not. This is the result of the processing of
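The stepwise demagnification and per-scale classification can be sketched as below. The box-average downscaler, the fixed set of scale factors and the OR-combination of per-scale decisions are all assumptions for illustration; the text says only that the per-scale decisions are combined into a final decision.

```python
def downscale(img, factor):
    """Crude box downscale of a square grayscale image (a list of
    rows) by an integer factor; stands in for the camera's scaler."""
    n = len(img) // factor
    return [[sum(img[r * factor + i][c * factor + j]
                 for i in range(factor) for j in range(factor))
             / factor ** 2
             for c in range(n)]
            for r in range(n)]

def detect_multiscale(img, classify, factors=(1, 2, 4)):
    """Classify the window at several demagnification steps and
    combine the independent per-scale decisions (here: accepting at
    any one scale accepts the window, one plausible combination rule).
    `classify` is whatever per-scale classifier the cascade provides.
    """
    return any(classify(downscale(img, f)) for f in factors)

# Toy 4x4 checkerboard window and a classifier keyed on mean brightness.
img = [[float((r + c) % 2) for c in range(4)] for r in range(4)]
mean = lambda m: sum(map(sum, m)) / (len(m) * len(m[0]))
print(detect_multiscale(img, lambda m: abs(mean(m) - 0.5) < 1e-9))
```

Scaling the image down (rather than the feature data up) means the scaling cost is paid once per window, after which every feature of every set can reuse the same scaled data, matching the remark in the text that the scaling step usually needs to be done only once per window.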
If the cumulative score is less than this threshold for a first or subsequent feature set, a decision can be made that the object is not within this window. In a preferred embodiment, this window is then eliminated from any further object detection processing. This results in pruning windows from further processing with respect to any remaining feature sets, and thus reduces the amount of processing that is necessary to detect the presence of the object within the image frame. A first stage of the processing has then been completed.
However, if the cumulative score is equal to or higher than the threshold, the processing continues in a second stage by repeating steps 73-80 on the image data within the current window for a second feature set, except that, as described above, the steps 75 and 76 may be omitted after completing processing of the first feature set for any specific window. The threshold may again be exceeded, in which case a third stage of processing is performed with a third feature set, if used, or the window may be rejected, in which case processing of image data of the current window terminates. If not earlier rejected, the current window data are evaluated with respect to a finite number of feature sets, which can be as many as ten or twenty or more, after which the processing for the current window ends. The same processing is then performed for each other desired window, in sequence, until all such windows have been evaluated.
A specific technique that may be used for processing the data of the individual image windows as part of the step 76 (
As part of this technique, there may be a number of specific image reduction sizes defined, fourteen for example. When performing the processing of
It will be noted that the techniques described with respect to
In the processing described with respect to
Image Orientation
As part of executing the image classifier (step 77 of
The object type and its orientation are first detected. After detecting the type and z-axis (“yaw”) orientation, a single, combined specific classifier, responsive to the detected type and z-axis orientation, is selected from a database of classifiers. This classifier is then used to decide whether the window contains the specified object or not. Note that in a preferred embodiment of the invention, the z-axis orientation is accounted for by rotating a parameterized feature set used by the specific type classifier chosen, not by rotating the images input to this type classifier, nor by using a plurality of z-axis oriented classifiers of the specific object type.
With reference to
With reference to
In the example of
A system operating according to the specific example illustrated in
Cumulative Score Calculations
A major part of the steps 77 and 78 of
To explain this mathematically, the cumulative score of a given window may be represented as follows:

G(I)=G1(I)+G2(I)+ . . . +GN(I)  (Equation 1)

where I is the current window and N is the number of features in the current feature set. Others have maintained a cumulative score by defining Gi(I) by the following linear but discontinuous function:

Gi(I)=αi, when F(νi, I)<θi
Gi(I)=βi, when F(νi, I)≧θi  (Equation 2)

where αi, βi and θi are constants determined during a calibration procedure, νi is a projection vector of the stored feature set against which the current image window is being evaluated, and F(νi, I) is a dot product of this projection vector onto the current window expressed as a vector.
The use of Equation 2 is illustrated in
In the improvement being described herein, two thresholds θ0 and θ1 are used instead of a single threshold. This is illustrated in
This equation results in a linear interpolation being performed when F(νi, I) is between the two thresholds θ0 and θ1. In that region, the component of the cumulative score G(I) is calculated to be somewhere between the values α and β, by the following:
α+(F(νi, I)−θ0)(β−α)/(θ1−θ0) (Equation 4)
The use of two evaluation thresholds in this manner makes the resulting score component G(I) more representative of the correlation between the current window and the current feature, at least when F(νi, I) is between the two thresholds θ0 and θ1. The hardware can include parameters for selecting the G(I) used for each νi feature, with some examples of G(I) functions given in Equations 5, 5.2, 5.4 and 6.
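The two-threshold score component of Equation 4 can be written directly in code. The interpolation formula is taken from the text; clamping to α below θ0 and to β above θ1 follows the description of those regions, and the concrete parameter values in the example are illustrative only.

```python
def score_component(x, alpha, beta, theta0, theta1):
    """Per-feature score component G(I) with two thresholds:
    alpha when x is at or below theta0, beta when x is at or above
    theta1, and the linear interpolation of Equation 4 in between,
    where x = F(vi, I), the feature/window dot product."""
    if x <= theta0:
        return alpha
    if x >= theta1:
        return beta
    # Equation 4: alpha + (x - theta0)(beta - alpha)/(theta1 - theta0)
    return alpha + (x - theta0) * (beta - alpha) / (theta1 - theta0)

print(score_component(0.5, 0.0, 1.0, 0.0, 1.0))  # 0.5, midway between
```

Compared with the single-threshold step function of Equation 2, borderline correlations contribute a graded amount to the cumulative score instead of flipping abruptly between the two extremes, which is the accuracy improvement the text claims.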
But an even more representative result is obtained by a non-linear interpolation between the two thresholds, with one preferred function being illustrated in
where: x=F(νi, I)
Another embodiment is given by:
G(I)=a0x²+b0x  (Equation 5.2)
where: x=F(νi, I)
Here, G(I) describes a special parabola which always passes through the origin of the axes. Yet another embodiment of the G(I) function that can be supported is defined in the following:
where: x=F(νi, I)
Although two parabolic functions are used in
where: x=F(νi, I)
It should be noted that the threshold levels θ0 and θ1, as well as some or all of the other constants in the equations given above, are typically unique to a particular feature set with which the image window is being compared. That is, there are typically a different set of some or all of these constants used for each feature set.
The above-described technique calculates a score indicating whether one object feature set exists in an individual window and then compares that score with a threshold to determine whether data of the window should be further processed. This is done for the individual windows across the image frame with respect to one feature set and then any remaining windows (those having a score in excess of the threshold) are further processed with data of the next in order feature set, and so on until the image has been processed in many stages with respect to all the feature sets.
An alternative is to rank the scores of the individual windows for the same feature set and select for further processing those windows having the higher scores. For example, the scores of the various windows may be ranked in order between the highest and lowest scores. Those windows having the higher scores are selected for further classification, while those having the lower scores are rejected at this point as highly unlikely to contain the object. Rather than comparing the individual window scores with an absolute predetermined threshold score, the windows may be classified into one of two groups based on their relative ranking within the list of scores. For example, the windows having the top one-third of the scores may be selected for further processing while the other two-thirds of the windows are rejected and no longer considered. This prunes the list of windows at each stage of the processing and therefore reduces the total amount of processing required. This procedure is then repeated at each stage until all of the stages for the given image frame have been completed, at which time the windows of the image containing the face or other object are identified.
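The rank-based alternative to an absolute threshold can be sketched as follows. The top-one-third keep-fraction comes from the example in the text; the data structure and tie-handling are assumptions.

```python
def prune_by_rank(window_scores, keep_fraction=1 / 3):
    """Keep the windows whose stage scores rank in the top fraction
    of the list, rejecting the rest, instead of comparing each score
    with an absolute predetermined threshold.

    `window_scores` maps window ids to cumulative scores for the
    feature set just completed (a hypothetical representation)."""
    ranked = sorted(window_scores, key=window_scores.get, reverse=True)
    keep = max(1, round(len(ranked) * keep_fraction))
    return set(ranked[:keep])

scores = {'w0': 9.1, 'w1': 2.3, 'w2': 7.7, 'w3': 0.4,
          'w4': 5.0, 'w5': 8.8}
print(sorted(prune_by_rank(scores)))  # ['w0', 'w5']
```

Relative ranking guarantees a fixed pruning ratio per stage regardless of how the scores are distributed, whereas an absolute threshold can pass almost all windows (or almost none) on an unusual frame.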
An Implementation
Rather than making the calculations of
The image window is then oriented and its type classified at 103 of
The windows of a given image that have been evaluated with respect to one feature set are then pruned at 109 to select only some of them for evaluation with respect to the next feature set. In the processing described with respect to
If the image data acquisition device includes a motion detector 111, the existence or absence of motion of the device or of objects within the image may be utilized by the pruning function 109. Motion is typically detected in digital cameras between preview images in order to then compensate for it, or as part of a compression algorithm for the resulting image data, or possibly for other purposes. If the user is shaking the camera while an image is being captured, motion of the entire image is detected from one preview image to the next. But motion may also be detected in individual portions or windows of an image, which then detects motion of one or more objects within the scene. The pruning 109 may use such motion information to detect changes between two successive preview images, and thereby eliminate calculations associated with areas of the image that have not changed. If an object was detected or not detected in an area of the image that has not moved between two successive preview images, for example, then the data for that area need not be processed in the second image to look for an object. The result will be the same in such areas of both images. Therefore, only data of those windows of each preview image that have moved or otherwise changed, when compared to the same windows of the immediately preceding preview image, are processed to detect whether an object exists.
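Pruning unchanged windows between successive preview frames can be sketched as below. The summed-absolute-difference change measure and the dict-of-windows representation are assumptions for illustration; any per-window motion signal from the camera's existing motion detection could serve instead.

```python
def changed_windows(prev_frame, curr_frame, threshold=0.0):
    """Select only the windows whose pixels changed between two
    successive preview frames; unchanged windows keep the detection
    result from the earlier frame and need no re-processing.

    Frames are hypothetical dicts mapping window ids to pixel
    tuples; change is measured as summed absolute pixel difference.
    """
    return {wid for wid, pixels in curr_frame.items()
            if sum(abs(a - b) for a, b in zip(prev_frame[wid], pixels))
            > threshold}

prev = {'w0': (10, 10), 'w1': (50, 60)}
curr = {'w0': (10, 10), 'w1': (55, 58)}
print(changed_windows(prev, curr))  # {'w1'}
```

Only the windows returned here would enter the cascade for the new preview image; the rest reuse their previous accept/reject decision, since the result would be the same in unmoved areas of both images.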
Although the various aspects of the present invention have been described with respect to exemplary embodiments thereof, it will be understood that the present invention is entitled to protection within the full scope of the appended claims.
This is a continuation of U.S. patent application Ser. No. 12/023,877, filed on Jan. 31, 2008, which claims the benefit of U.S. Provisional Patent Application No. 61/016,205, filed on Dec. 21, 2007, both of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
6356649 | Harkless et al. | Mar 2002 | B2 |
6639998 | Lee et al. | Oct 2003 | B1 |
6924832 | Shiffer et al. | Aug 2005 | B1 |
6990239 | Nelson | Jan 2006 | B1 |
6999625 | Nelson | Feb 2006 | B1 |
7016532 | Boncyk et al. | Mar 2006 | B2 |
7020337 | Viola et al. | Mar 2006 | B2 |
7050607 | Li et al. | May 2006 | B2 |
7099510 | Jones et al. | Aug 2006 | B2 |
7197186 | Jones et al. | Mar 2007 | B2 |
7221809 | Geng | May 2007 | B2 |
7835549 | Kitamura et al. | Nov 2010 | B2 |
20020102024 | Jones et al. | Aug 2002 | A1 |
20040013304 | Viola et al. | Jan 2004 | A1 |
20040258313 | Jones et al. | Dec 2004 | A1 |
20050100195 | Li | May 2005 | A1 |
20060215905 | Kitamura et al. | Sep 2006 | A1 |
20070014434 | Wei et al. | Jan 2007 | A1 |
20070154095 | Cao et al. | Jul 2007 | A1 |
20070226255 | Anderson | Sep 2007 | A1 |
20070237387 | Avidan et al. | Oct 2007 | A1 |
20070282682 | Dietz et al. | Dec 2007 | A1 |
20080177994 | Mayer | Jul 2008 | A1 |
20090222329 | Ramer et al. | Sep 2009 | A1 |
20110150334 | Du et al. | Jun 2011 | A1 |
20110258049 | Ramer et al. | Oct 2011 | A1 |
Entry |
---|
Co-Pending U.S. Appl. No. 12/023,877, filed Jan. 31, 2008. |
Notice of Allowance Mailed Mar. 28, 2011 in Co-Pending U.S. Appl. No. 12/023,877, filed Jan. 31, 2008. |
Number | Date | Country | |
---|---|---|---|
20110205387 A1 | Aug 2011 | US |
Number | Date | Country | |
---|---|---|---|
61016205 | Dec 2007 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12023877 | Jan 2008 | US |
Child | 13099304 | US |