1. Field
The present invention relates in general to systems and techniques used to isolate a “segmented image” of a moving person or object, from an “ambient image” of the area surrounding and including the person or object in motion. In particular, the present invention relates to a method and apparatus for isolating a segmented image of a vehicle occupant from the ambient image of the area surrounding and including the occupant, so that an appropriate airbag deployment decision can be made.
2. Description of Related Art
There are many situations in which it may be desirable to isolate a segmented image of a “target” person or object from an ambient image that includes both the “target” person or object and its surroundings. Airbag deployment systems are one prominent example of such a situation. Airbag deployment systems can make various deployment decisions that relate to the characteristics of an occupant that can be obtained from the segmented image of the occupant. The type of occupant, the proximity of the occupant to the airbag, the velocity and acceleration of the occupant, the mass of the occupant, the amount of energy an airbag needs to absorb as a result of an impact between the airbag and the occupant, and other occupant characteristics are all factors that can be incorporated into airbag deployment decision-making.
There are significant obstacles in the existing art with respect to image segmentation techniques. Prior art image segmentation techniques tend to be inadequate in high-speed target environments, such as when attempting to identify a segmented image of an occupant in a vehicle that is braking or crashing. Prior art image segmentation techniques neither account for nor use the motion of an occupant to assist in the identification of the boundary between the occupant and the area surrounding the occupant. Instead of using the motion of the occupant to assist with image segmentation, prior art systems typically apply techniques best suited for low-motion or even static environments, “fighting” the motion of the occupant instead of utilizing characteristics relating to the motion to assist in the segmentation and identification process.
Related to the difficulties imposed by occupant motion is the challenge of timeliness. A standard video camera typically captures about 40 frames of images each second. Many airbag deployment embodiments incorporate sensors that capture sensor readings at an even faster rate than a standard video camera. Airbag deployment systems require reliable real-time information for deployment decisions. The rapid capture of images or other sensor data does not assist the airbag deployment system if the segmented image of the occupant cannot be identified before the next frame or sensor measurement is captured. An airbag deployment system can only be as fast as its slowest requisite process step. However, an image segmentation technique that uses the motion of the vehicle occupant in the segmentation process can perform its task more rapidly than a technique that fails to utilize motion as a distinguishing factor between an occupant and the area surrounding the occupant.
Prior art systems typically fail to incorporate contextual “intelligence” about a particular situation into the segmentation process, and thus such systems do not focus on any particular area of the ambient image. A segmentation process specifically designed for airbag deployment processing can incorporate contextual “intelligence” that cannot be applied by a general purpose image segmentation process. For example, it is desirable for a system to focus on an area of interest within the ambient image using recent past segmented image information, including past predictions that incorporate subsequent anticipated motion. Given the rapid capture of sensor measurements, there is a limit to the potential movement of the occupant between sensor measurements. Such a limit is context specific, and is closely related to factors such as the time period between sensor measurements.
Prior art segmentation techniques also fail to incorporate useful assumptions about occupant movement in a vehicle. It is desirable for a segmentation process for use in a vehicle to take into consideration the observation that vehicle occupants tend to rotate about their hips, with minimal motion in the seat region. Such “intelligence” can allow a system to focus on the most important areas of the ambient image, saving valuable processing time.
Further aggravating processing time demands in existing segmentation systems is the failure of those systems to incorporate past data into present determinations. It is desirable to track and predict occupant characteristics using techniques such as “Kalman” filters. It is also desirable to model the segmented image by a simple geometric shape, such as an ellipse. The use of a reusable and modifiable shape model can be a useful way to incorporate past data into present determinations, providing a simple structure that can be manipulated and projected forward, thereby reducing the complexity of the computational processing.
An additional difficulty not addressed by prior art segmentation and identification systems relates to changes in illumination that may obscure image changes due to occupant motion. When computing the segmented image of an occupant, it is desirable to include and implement a processing technique that can model the illumination field and remove it from consideration.
Systems and methods that overcome many of the described limitations of the prior art have been disclosed in the related applications that are cross-referenced above. For example, the co-pending application “MOTION-BASED IMAGE SEGMENTOR FOR OCCUPANT TRACKING,” application Ser. No. 10/269,237, filed on Oct. 11, 2002, teaches a system and method using motion to define a template that can be matched to the segmented image, and which, in one embodiment, uses ellipses to model and represent a vehicle occupant. These ellipses may be processed by tracking subsystems to project the most likely location of the occupant based on a previous determination of position and motion. The ellipses, as projected by the tracking subsystems, may also be used to define a “region of interest” image representing a subset area of the ambient image, which may be used for subsequent processing to reduce processing requirements.
An advantageous method that may be applied to the problem of segmenting images in the presence of motion employs the technique of optical flow computation. The inventive methods according to the related U.S. Patent applications cross-referenced above employ alternative segmentation methods that do not include optical flow computations. Further, in order to apply optical flow computations for detecting occupants in a vehicle, it is necessary to remove obscuring effects caused by variations in illumination fields when computing the segmented images. Therefore, a need exists for image segmentation systems and methods using optical flow techniques that discriminate true object motion from effects due to illumination fields. The present invention provides such an image segmentation system and method.
An image segmentation system and method are disclosed that generate a segmented image of a vehicle occupant or other target of interest based upon an ambient image, which includes the target and the environment that surrounds the target. The inventive method and apparatus further determines a bounding ellipse that is fitted to the segmented target image. The bounding ellipse may be used to project a future position of the target.
In one embodiment, an optical flow technique is used to compute both velocity fields and illumination fields within the ambient image. Including the explicit computation of the illumination fields dramatically improves motion estimation for the target image, thereby improving segmentation of the target image.
Like reference numbers and designations in the various drawings indicate like elements.
Throughout this description, embodiments and variations are described for the purpose of illustrating uses and implementations of the inventive concept. The illustrative description should be understood as presenting examples of the inventive concept, rather than as limiting the scope of the concept as disclosed herein.
An ambient image 108 is output by the camera 106, and provided as input to a computer or computing device 110. In one embodiment of the inventive teachings, the ambient image 108 may comprise one frame of a sequence of video images output by the camera 106. The ambient image 108 is processed by the computer 110 according to the inventive teachings described in more detail hereinbelow. In one embodiment, after processing the ambient image 108, the computer 110 may provide information to an airbag controller 112 to control or modify activation of an airbag deployment system 114.
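The overall data flow of the system 100 may be summarized, purely for purposes of illustration, by the following Python sketch of a per-frame processing loop. The names camera, airbag_controller and segment_occupant are hypothetical placeholders assumed for this sketch and are not part of the disclosure.

```python
def process_video(camera, airbag_controller, segment_occupant):
    """Illustrative per-frame loop: camera -> segmentation -> airbag controller.

    camera.grab_frame(), airbag_controller.update() and segment_occupant()
    are assumed interfaces used only to sketch the data flow of system 100.
    """
    previous_frame = None
    while True:
        ambient_image = camera.grab_frame()  # ambient image 108
        if previous_frame is not None:
            # Isolate the occupant and fit a bounding ellipse (method 500 below).
            ellipse_parameters = segment_occupant(previous_frame, ambient_image)
            # Provide occupant information to the airbag controller 112.
            airbag_controller.update(ellipse_parameters)
        previous_frame = ambient_image
```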
Teachings relating to airbag control systems, such as used in the system 100, are disclosed in more detail in the co-pending commonly assigned patent application “MOTION-BASED IMAGE SEGMENTOR FOR OCCUPANT TRACKING,” application Ser. No. 10/269,237, filed on Oct. 11, 2002, incorporated by reference herein, as though set forth in full, for its teachings regarding techniques for identifying a segmented image of a vehicle occupant within an ambient image. Novel methods for processing the ambient image 108 are disclosed herein, in accordance with the present inventive teachings.
An embodiment of an image processing system 200 for processing the ambient image 108 is described below.
In one embodiment, the tracking and predicting subsystem 210 provides information to the airbag controller 112.
In one embodiment, a selected part or subset of the ambient image 108, referred to herein as a region of interest image, may be processed in place of the entire ambient image in order to reduce processing requirements.
At the STEP 501, an embodiment of the inventive method may invoke a region of interest module to determine a region of interest image. In one embodiment, the region of interest determination may be based on projected ellipse parameters received from the tracking and predicting subsystem 210.
In other embodiments, or when processing some ambient images within an embodiment, the region of interest determination of the STEP 501 may be omitted. For example, at certain times, the projected ellipse parameters may not be available because prior images have not been received or computed, or for other reasons. If the STEP 501 is omitted, or is not executed, and a region of interest therefore is not determined, the subsequent steps of the exemplary method 500 may be performed on a larger ambient image, such as may be received from the camera 106.
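By way of illustration only, the region of interest determination of the STEP 501 might be implemented along the following lines, where the projected ellipse is described by a center (cx, cy), semi-axes a and b, and orientation theta; these parameter names, and the margin factor, are assumptions made for this sketch rather than elements of the disclosure.

```python
import numpy as np

def region_of_interest(ambient_image, cx, cy, a, b, theta, margin=1.2):
    """Crop an axis-aligned region of interest around a projected ellipse.

    (cx, cy) is the projected ellipse center, a and b the semi-axes in pixels,
    theta the orientation in radians; margin pads the box to allow for
    occupant motion between frames. Parameter names are illustrative only.
    """
    # Half-extents of the axis-aligned bounding box of the rotated ellipse.
    half_w = margin * np.sqrt((a * np.cos(theta)) ** 2 + (b * np.sin(theta)) ** 2)
    half_h = margin * np.sqrt((a * np.sin(theta)) ** 2 + (b * np.cos(theta)) ** 2)
    rows, cols = ambient_image.shape[:2]
    x0 = max(int(cx - half_w), 0)
    x1 = min(int(cx + half_w), cols)
    y0 = max(int(cy - half_h), 0)
    y1 = min(int(cy + half_h), rows)
    return ambient_image[y0:y1, x0:x1]
```

Limiting subsequent processing to such a region of interest reflects the observation, noted above, that the occupant can move only a limited amount between successive sensor measurements.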
In one embodiment, at STEP 502 of the inventive method 500, an image smoothing process is performed on the ambient image using an image smoothing module. For example, the smoothing process may comprise a 2-dimensional Gaussian filtering operation. Other smoothing processes and techniques may be implemented. The 2-dimensional Gaussian filtering operation and other smoothing operations are well known to persons skilled in the arts of image processing and mathematics, and therefore are not described in further detail herein. The image smoothing process is performed in order to reduce the detrimental effects of noise in the ambient image. The image smoothing process step 502 may be omitted in alternative embodiments, as for example, if noise reduction is not required. The method next proceeds to a STEP 504.
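A minimal sketch of the smoothing operation of the STEP 502, assuming the 2-dimensional Gaussian filtering mentioned above and using the SciPy library (the sigma value shown is illustrative only):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_image(image, sigma=1.0):
    """2-D Gaussian smoothing of the (region of interest) ambient image.

    sigma is an illustrative default; the disclosure does not specify a value.
    """
    return gaussian_filter(image.astype(np.float64), sigma=sigma)
```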
At STEP 504, directional gradient and time difference images are computed for the ambient image. In one embodiment, the directional gradients are computed according to the following equations:
Ix=Image(i, j)−Image(i−N, j)=I(i, j)−I(i−N, j); (1)
Iy=Image(i, j)−Image(i,j−N)=I(i, j)−I(i,j−N); (2)
It=Image2(i, j)−Image1(i, j); (3)
wherein Image(i, j) comprises the current ambient image brightness (or equivalently, luminosity, or signal amplitude) distribution as a function of the coordinates (i, j); Image1(i, j) comprises the image brightness distribution for the ambient image immediately prior to the current ambient image; Image2(i, j) comprises the brightness distribution for the current ambient image (represented without a subscript, as I(i, j), in the equations (1) and (2) above); Ix comprises the directional gradient in the x-direction; Iy comprises the directional gradient in the y-direction; It comprises the time difference distribution, i.e., the difference between the current ambient image and the prior ambient image; and N comprises a positive integer equal to or greater than 1, representing the x or y displacement in the ambient image used to calculate the x or y directional gradient, respectively. The directional gradient computation finds areas of the image that are regions of rapidly changing image amplitude. These regions tend to comprise edges between two different objects, such as, for example, the occupant and the background. The time difference computation locates regions where significant changes occur between successive ambient images. The method next proceeds to a STEP 506.
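The following sketch illustrates one way the gradient and time difference computations of equations (1) through (3) might be implemented, assuming the index i runs along the first (x) array axis and j along the second (y) axis; this axis convention is an assumption of the sketch rather than a statement of the disclosure.

```python
import numpy as np

def gradient_and_time_difference(image_prev, image_curr, N=1):
    """Directional gradients and time difference per equations (1)-(3).

    image_prev and image_curr are 2-D arrays of brightness values I(i, j);
    N is the displacement used for the finite differences.
    """
    I = image_curr.astype(np.float64)
    I_prev = image_prev.astype(np.float64)
    Ix = np.zeros_like(I)
    Iy = np.zeros_like(I)
    # Ix(i, j) = I(i, j) - I(i - N, j); Iy(i, j) = I(i, j) - I(i, j - N).
    Ix[N:, :] = I[N:, :] - I[:-N, :]
    Iy[:, N:] = I[:, N:] - I[:, :-N]
    # It(i, j) = Image2(i, j) - Image1(i, j): difference between frames.
    It = I - I_prev
    return Ix, Iy, It
```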
At the STEP 506, an optical flow computation is performed in order to determine optical flow velocity fields (also referred to herein as “optical flow fields” or “velocity fields”) and illumination fields. The standard gradient optical flow methods assume image constancy, and are based on the following equation:

∂f/∂t + (∇f·v) = 0; (4)
wherein f(x,y,t) comprises the luminosity or brightness distribution over a sequence of images, and wherein v comprises the velocity vector at each point in the image.
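For comparison purposes only, a standard constant-illumination gradient method (for example, a Lucas-Kanade style local least-squares estimate) might be sketched as follows; this is not the method of the present teachings, which is extended below to include illumination terms. The window size and variable names are assumptions of the sketch.

```python
import numpy as np

def standard_flow_at_pixel(Ix, Iy, It, i, j, half=2):
    """Constant-illumination flow estimate at pixel (i, j).

    Solves the local least-squares system Ix*dx + Iy*dy = -It over a
    (2*half + 1) square window; assumes (i, j) is not at the image border.
    """
    win = np.s_[i - half:i + half + 1, j - half:j + half + 1]
    ix, iy, it = Ix[win].ravel(), Iy[win].ravel(), It[win].ravel()
    A = np.stack([ix, iy], axis=1)
    (dx, dy), *_ = np.linalg.lstsq(A, -it, rcond=None)
    return dx, dy
```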
These standard gradient optical flow methods are unable to accommodate scenarios where the illumination fields are not constant. Therefore, the present teachings employ an extended gradient (also equivalently referred to herein as “illumination-enhanced”) optical flow technique based on the following equation:

∂f/∂t + (∇f·v) + f·div(v) = ø; (5)
wherein ø represents the rate of creation of brightness at each pixel (i.e., the illumination change). If a rigid body object is assumed, wherein the motion lies in the imaging plane, then the term div(v) is zero. This assumption is adopted for the exemplary computations described herein. The extended gradient method is described in more detail in the following reference, S. Negahdaripour, “Revised definition of optical flow: Integration of radiometric and geometric cues for dynamic scene analysis”, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20 no. 9, pp. 961-979, September 1998. This reference is referred to herein as the “Negahdaripour” reference, and it is hereby fully incorporated by reference herein, as though set forth in full, for its teachings on optical flow techniques and computation methods.
The term ø provides the constraints on the illumination variations in the image. There are two types of illumination variation that must be considered: (i) variations in illumination caused by changes in reflectance or diffuse shadowing (modeled as a multiplicative factor m), and (ii) variations in illumination caused by illumination highlighting (modeled as an additive factor c). In accordance with the above-incorporated Negahdaripour reference, the term ø can be expressed using the following equation:

ø = f·(∂m/∂t) + ∂c/∂t; (6)

wherein the term f·(∂m/∂t) corresponds to the change in reflectance, and wherein the term ∂c/∂t corresponds to the illumination highlighting.
Also, in accordance with the Negahdaripour reference, optical flow velocity fields (or equivalently, the optical flow field image) and illumination fields (or equivalently, the illumination field image) may be computed by solving the following least squares problem: find the values of δx, δy, δm and δc that minimize

Σ over (i, j) in W of [Ix(i, j)·δx + Iy(i, j)·δy + It(i, j) − I(i, j)·δm − δc]²; (7)
wherein the terms δx and δy comprise the velocity estimates for the pixel (x,y), the expression δm=m−1 comprises the variation or difference value for the multiplicative illumination field, the term δc comprises the variation value for the additive illumination field, W comprises a local window of N by N pixels (where N is a positive integer greater than 3) centered around each pixel in the ambient image I, and I, Ix, Iy and It are as defined hereinabove with reference to Equations 1-3 (inclusive). The velocity variables δx and δy may also represent the U (horizontal) and the V (vertical) components, respectively, of the optical flow velocity field v.
Those skilled in the mathematics art will recognize that equation (7) above may be solved for the velocity variables δx and δy, and the illumination variables δm and δc, by numerical computation methods based on the well known least squares technique, as described in detail in the Negahdaripour reference.
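A minimal sketch of the per-pixel least-squares solution of equation (7) is given below; it assumes the window W is a square of side 2*half + 1 centered on the pixel, and that the constraint at each window pixel is Ix·δx + Iy·δy + It − I·δm − δc = 0. The function and variable names are illustrative only.

```python
import numpy as np

def extended_flow_at_pixel(I, Ix, Iy, It, i, j, half=3):
    """Illumination-enhanced optical flow estimate at pixel (i, j).

    Jointly estimates the velocity components (dx, dy), the multiplicative
    illumination variation dm = m - 1 and the additive variation dc by
    least squares over an N-by-N window W centered on (i, j), using the
    constraint Ix*dx + Iy*dy + It - I*dm - dc = 0 at each window pixel.
    """
    win = np.s_[i - half:i + half + 1, j - half:j + half + 1]
    ix, iy, it, ii = Ix[win].ravel(), Iy[win].ravel(), It[win].ravel(), I[win].ravel()
    # One row per window pixel; columns correspond to (dx, dy, dm, dc).
    A = np.stack([ix, iy, -ii, -np.ones_like(ii)], axis=1)
    params, *_ = np.linalg.lstsq(A, -it, rcond=None)
    dx, dy, dm, dc = params
    return dx, dy, dm, dc
```

Solving this system at every pixel yields the optical flow field image (from dx and dy) and the illumination field images (from dm and dc); in practice the per-pixel solves can be vectorized, or restricted to the region of interest, to reduce computation.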
The method next proceeds to a STEP 508, at which a binary image representing the segmented target is produced.
At the STEP 510, one embodiment of the inventive method may invoke an ellipse fitting module in order to compute the bounding ellipse parameters corresponding to the binary image output by the computation performed by the STEP 508. In other embodiments, shapes other than ellipses may be used to model the segmented image.
The bounding ellipse shape parameters may be determined by computing the central moments of a segmented, N×M binary image I(i, j), such as is represented by the binary image 802.
The lower order moments m00, μx and μy above are computed according to the following equations (11), (12) and (13):

m00 = Σi Σj I(i, j); (11)

μx = (1/m00)·Σi Σj i·I(i, j); (12)

μy = (1/m00)·Σi Σj j·I(i, j); (13)
Based on the equations (8) through (13), inclusive, the bounding ellipse parameters are defined by the equations (14) through (18), inclusive.
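The specific equations (8) through (18) are not reproduced here; the following sketch applies the standard central-moment ellipse construction to a binary segmented image, which may differ in detail from the disclosed equations and is provided for illustration only.

```python
import numpy as np

def fit_bounding_ellipse(binary_image):
    """Fit an ellipse to a binary segmented image using image moments.

    Returns the centroid (mu_x, mu_y), semi-axis lengths (a, b) and the
    orientation theta in radians. This follows the standard central-moment
    ellipse construction and only approximates equations (8) through (18).
    """
    I = binary_image.astype(np.float64)
    i, j = np.indices(I.shape)
    m00 = I.sum()                       # zeroth moment (pixel count)
    mu_x = (i * I).sum() / m00          # centroid along the i (x) axis
    mu_y = (j * I).sum() / m00          # centroid along the j (y) axis
    # Normalized second-order central moments.
    mu20 = ((i - mu_x) ** 2 * I).sum() / m00
    mu02 = ((j - mu_y) ** 2 * I).sum() / m00
    mu11 = ((i - mu_x) * (j - mu_y) * I).sum() / m00
    # Eigenvalues of the moment matrix give the squared semi-axes (up to scale).
    common = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    a = np.sqrt(2 * (mu20 + mu02 + common))
    b = np.sqrt(2 * (mu20 + mu02 - common))
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (mu_x, mu_y), (a, b), theta
```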
Those of ordinary skill in the communications and computer arts will also recognize that a computer readable medium which tangibly embodies the method steps of any of the embodiments herein may be used in accordance with the present teachings. For example, the method steps of the exemplary method 500 described above may be embodied as instructions stored on such a computer readable medium and executed by the computer 110.
A number of embodiments of the present inventive concept have been described. Nevertheless, it will be understood that various modifications may be made without departing from the scope of the inventive teachings. For example, the methods of the present inventive concept can be executed in software or hardware, or in a combination of software and hardware. As another example, it should be understood that the functions described as being part of one module may in general be performed equivalently in another module. As yet another example, steps or acts shown or described in a particular sequence may generally be performed in a different order, except in those embodiments in which a claim specifies a particular order for the steps.
Accordingly, it is to be understood that the inventive concept is not to be limited by the specific illustrated embodiments, but only by the scope of the appended claims. The description may provide examples of similar features as are recited in the claims, but it should not be assumed that such similar features are identical to those in the claims unless such identity is essential to comprehend the scope of the claim. In some instances the intended distinction between claim features and description features is underscored by using slightly different terminology.
This application is a Continuation-in-Part (CIP) of, and claims the benefit under 35 U.S.C. § 120 of, the following U.S. applications: “MOTION-BASED IMAGE SEGMENTOR FOR OCCUPANT TRACKING,” application Ser. No. 10/269,237, filed Oct. 11, 2002, pending; “MOTION BASED IMAGE SEGMENTOR FOR OCCUPANT TRACKING USING A HAUSDORF DISTANCE HEURISTIC,” application Ser. No. 10/269,357, filed Oct. 11, 2002, pending; “IMAGE SEGMENTATION SYSTEM AND METHOD,” application Ser. No. 10/023,787, filed Dec. 17, 2001, pending; and “IMAGE PROCESSING SYSTEM FOR DYNAMIC SUPPRESSION OF AIRBAGS USING MULTIPLE MODEL LIKELIHOODS TO INFER THREE DIMENSIONAL INFORMATION,” application Ser. No. 09/901,805, filed Jul. 10, 2001, pending. The application Ser. Nos. 10/269,237 and 10/269,357 are themselves Continuation-in-Part applications of the following U.S. patent applications: “IMAGE SEGMENTATION SYSTEM AND METHOD,” application Ser. No. 10/023,787, filed on Dec. 17, 2001, pending; “IMAGE PROCESSING SYSTEM FOR DYNAMIC SUPPRESSION OF AIRBAGS USING MULTIPLE MODEL LIKELIHOODS TO INFER THREE DIMENSIONAL INFORMATION,” application Ser. No. 09/901,805, filed on Jul. 10, 2001, pending; “A RULES-BASED OCCUPANT CLASSIFICATION SYSTEM FOR AIRBAG DEPLOYMENT,” application Ser. No. 09/870,151, filed on May 30, 2001, which issued as U.S. Pat. No. 6,459,974 on Oct. 1, 2002; “IMAGE PROCESSING SYSTEM FOR ESTIMATING THE ENERGY TRANSFER OF AN OCCUPANT INTO AN AIRBAG,” application Ser. No. 10/006,564, filed on Nov. 5, 2001, which issued as U.S. Pat. No. 6,577,936 on Jun. 10, 2003; and “IMAGE PROCESSING SYSTEM FOR DETECTING WHEN AN AIRBAG SHOULD BE DEPLOYED,” application Ser. No. 10/052,152, filed on Jan. 17, 2002, which issued as U.S. Pat. No. 6,662,093 on Dec. 9, 2003. U.S. application Ser. No. 10/023,787, cited above, is a CIP of the following applications: “A RULES-BASED OCCUPANT CLASSIFICATION SYSTEM FOR AIRBAG DEPLOYMENT,” application Ser. No. 09/870,151, filed May 30, 2001, which issued on Oct. 1, 2002 as U.S. Pat. No. 6,459,974; “IMAGE PROCESSING SYSTEM FOR DYNAMIC SUPPRESSION OF AIRBAGS USING MULTIPLE MODEL LIKELIHOODS TO INFER THREE DIMENSIONAL INFORMATION,” application Ser. No. 09/901,805, filed Jul. 10, 2001, pending; and “IMAGE PROCESSING SYSTEM FOR ESTIMATING THE ENERGY TRANSFER OF AN OCCUPANT INTO AN AIRBAG,” application Ser. No. 10/006,564, filed Nov. 5, 2001, which issued on Jun. 10, 2003 as U.S. Pat. No. 6,577,936. U.S. Pat. No. 6,577,936, cited above, is itself a CIP of “IMAGE PROCESSING SYSTEM FOR DYNAMIC SUPPRESSION OF AIRBAGS USING MULTIPLE MODEL LIKELIHOODS TO INFER THREE DIMENSIONAL INFORMATION,” application Ser. No. 09/901,805, filed on Jul. 10, 2001, pending. U.S. Pat. No. 6,662,093, cited above, is itself a CIP of the following U.S. patent applications: “A RULES-BASED OCCUPANT CLASSIFICATION SYSTEM FOR AIRBAG DEPLOYMENT,” application Ser. No. 09/870,151, filed on May 30, 2001, which issued as U.S. Pat. No. 6,459,974 on Oct. 1, 2002; “IMAGE PROCESSING SYSTEM FOR ESTIMATING THE ENERGY TRANSFER OF AN OCCUPANT INTO AN AIRBAG,” application Ser. No. 10/006,564, filed on Nov. 5, 2001, which issued as U.S. Pat. No. 6,577,936 on Jun. 10, 2003; “IMAGE PROCESSING SYSTEM FOR DYNAMIC SUPPRESSION OF AIRBAGS USING MULTIPLE MODEL LIKELIHOODS TO INFER THREE DIMENSIONAL INFORMATION,” application Ser. No. 09/901,805, filed on Jul. 10, 2001, pending; and “IMAGE SEGMENTATION SYSTEM AND METHOD,” application Ser. No. 10/023,787, filed on Dec. 17, 2001, pending.
All of the above-cited pending patent applications and issued patents are commonly owned by the assignee hereof, and are all fully incorporated by reference herein, as though set forth in full, for their teachings on identifying segmented images of a vehicle occupant within an ambient image.
Parent Application | Parent Filing Date | Child Application | Child Filing Date | Country
---|---|---|---|---
10/269,237 | Oct. 2002 | 10/944,482 | Sep. 2004 | US
10/269,357 | Oct. 2002 | 10/944,482 | Sep. 2004 | US
10/023,787 | Dec. 2001 | 10/944,482 | Sep. 2004 | US
09/901,805 | Jul. 2001 | 10/944,482 | Sep. 2004 | US
09/870,151 | May 2001 | 10/944,482 | Sep. 2004 | US
10/006,564 | Nov. 2001 | 10/944,482 | Sep. 2004 | US
10/052,152 | Jan. 2002 | 10/944,482 | Sep. 2004 | US
10/023,787 | Dec. 2001 | 10/269,357 | Oct. 2002 | US
09/901,805 | Jul. 2001 | 10/269,357 | Oct. 2002 | US
09/870,151 | May 2001 | 10/269,357 | Oct. 2002 | US
10/006,564 | Nov. 2001 | 10/269,357 | Oct. 2002 | US
10/052,152 | Jan. 2002 | 10/269,357 | Oct. 2002 | US
09/870,151 | May 2001 | 10/023,787 | Dec. 2001 | US
09/901,805 | Jul. 2001 | 10/023,787 | Dec. 2001 | US
10/006,564 | Nov. 2001 | 10/023,787 | Dec. 2001 | US
09/901,805 | Jul. 2001 | 10/006,564 | Nov. 2001 | US
09/870,151 | May 2001 | 10/052,152 | Jan. 2002 | US
10/006,564 | Nov. 2001 | 10/052,152 | Jan. 2002 | US
09/901,805 | Jul. 2001 | 10/052,152 | Jan. 2002 | US
10/023,787 | Dec. 2001 | 10/052,152 | Jan. 2002 | US