Automated safety systems are employed in a growing number of vehicles. An exemplary embodiment set forth below is employed in the context of a passenger vehicle having an airbag deployment system. The skilled person will understand, however, that the principles set forth herein may apply to other types of vehicles using a variety of safety systems. Such types of vehicles include, inter alia, aircraft, spacecraft, watercraft, and tractors.
Moreover, although the exemplary embodiment employs an airbag in the exemplary safety system, the skilled person will recognize that the method and apparatus described herein may apply to widely varying safety systems inherent in the respective vehicles to which they are applied. In particular, a method or apparatus as described herein may be employed whenever it is desired to obtain advantages of automated safety systems requiring accurate classification of vehicle occupancy.
Accurate occupancy classification enhances the ability of automated safety systems to select appropriate safety equipment and determine appropriate use parameters for the selected equipment under the then-current conditions. In the exemplary embodiment described throughout, the automated safety system comprises an airbag deployment system. In this embodiment, if the occupancy classification is “empty,” the airbag would typically not be selected for deployment. However, if the occupancy classification is “occupied” (e.g., in the case of occupancy by an “adult,” “infant,” or “child”), the airbag may be selected for deployment under emergency conditions (e.g., a vehicle crash) or when otherwise desired upon further differential analysis according to knowledge of those skilled in the art.
Other embodiments include application of methods and apparatus in conjunction with various types of safety mechanisms triggered by an automated safety system. For example, a vehicle door may be selected to lock or unlock automatically under a specified emergency condition, such as, for example, in the event of a vehicle crash. As another example, the automated safety system may detect when a vehicle is underwater and deploy appropriate safety equipment, such as, for example, opening vehicle windows and/or deploying floatation devices. Other non-limiting examples of automated safety equipment include Global Positioning System (GPS) devices and other types of broadcasting mechanisms, traction systems that aid when encountering difficult terrains, and systems for re-directing shockwaves caused by vehicle collisions.
The present methods and apparatus obtain information about an environment and subsequently process the information to provide a highly accurate classification regarding occupancy. In the exemplary embodiment described in more detail below, occupancy of a position within a vehicle (e.g., a vehicle seat) is analyzed and classified using image-based sensing equipment.
According to one exemplary embodiment, occupancy of a vehicle seat is analyzed and classified for automated safety system applications, such as airbag deployment systems. Four classes of occupancy are often used in conjunction with airbag deployment systems. Those four classes are: (i) “infant,” (ii) “child,” (iii) “adult,” and (iv) “empty” seat. Accurate occupant classification has proven difficult in the past due to many factors including: vehicle seat variations; changing positions of occupants within seats; occupant characteristics such as height and weight; and the presence of extraneous items such as blankets, handbags, shopping bags, notebooks, documents, and the like. The present methods and apparatus improve the accuracy of occupant classification, particularly as it relates to differentiation between when a seat is “empty” or “occupied.”
According to one aspect of an exemplary embodiment, an image of a vehicle seat is analyzed to determine whether the seat is “empty” or “occupied.” Although the term “empty” is often associated with the absence of any object whatsoever in the vehicle seat, the term “empty” is used herein to indicate that no animate occupant (e.g., human or animal) is present in the vehicle seat. The presence of relatively small, inanimate objects, such as handbags, shopping bags, notebooks, documents, and the like, that are often placed on a vehicle seat when it is not occupied by a passenger, does not generally prevent a seat from being classified as “empty.” While the presence of relatively large, inanimate objects may trigger classification of a vehicle seat as “occupied,” the present method and apparatus distinguish between occupancy by the more common relatively small, inanimate objects and occupancy by an animate form. If the presence of a larger inanimate object results in classification of the seat as “occupied,” the object may be analyzed in more detail according to further embodiments of the invention (e.g., using methods for differentiating between occupancy by an “infant,” “child,” or “adult” as known to those of ordinary skill in the art). For example, such methods and apparatus include those described in U.S. Pat. Nos. 6,662,093; 6,856,694; and 6,944,527, all of which are hereby incorporated by reference for their teachings on methods and apparatus for differentiating between occupancy classifications.
As shown in
Incoming images 14 (in the exemplary embodiment, video images) are transmitted from the camera 10 to any suitable computer-based processing equipment, such as a computer system 16. As described in more detail below, the computer system 16 determines occupancy classification of the vehicle seat 12 and transmits the occupancy classification to an electronic control unit 18 (in this embodiment, an airbag controller) in the event of an emergency or when otherwise desired. Subsequently, in the exemplary embodiment, an airbag deployment system 20 responds to the airbag controller 18, and either deploys or suppresses deployment of an airbag based upon occupant classification of the vehicle seat 12 and other factors as desired. A variety of airbag controllers and airbag deployment systems are known to those skilled in the art and can be used in accordance with the present invention.
The computer system 16 processes images of the vehicle seat 12 obtained from the camera 10. According to one embodiment, processing of the images is implemented using wavelet transforms (e.g., Gabor filters) as described in more detail below. Any suitable computer system can be used to implement the present methods and apparatus according to operating principles known to those skilled in the art. In an exemplary embodiment, the computer system 16 includes a digital signal processor (DSP). The DSP is capable of performing image processing functions in real-time. The DSP receives pixels from the camera 10 via its Link Port. The DSP is responsible for system diagnostics and for maintaining communications with other subsystems in the vehicle via a vehicle bus. The DSP is also responsible for providing an airbag deployment suppression signal to the airbag controller 18.
According to this exemplary embodiment, the computer system 16 processes an image obtained from the camera 10 using several steps. A flow diagram 200 of the image processing steps according to this exemplary embodiment is illustrated in
Note that the flow diagram 200 of
The Input Image 202 is first segmented according to the classification process steps 206. In the flow diagram of
Segmentation alone has not proven sufficient for providing accurate and reliable occupancy classifications. One reason for this shortcoming is that small occupants (e.g., infants and children) typically fit within the boundaries of the vehicle seat and often do not appear any different than an empty seat when viewed in relation to the perimeter of the vehicle seat. Another reason for this shortcoming is that, even when the occupant of a vehicle seat is an adult, it can be difficult to accurately classify the occupant by analyzing the shape of the vehicle seat in a segmented image. The shape of an average adult male is typically used as a template for designing the shape of the vehicle seat; thus, the perimeter of a vehicle seat may have a shape approximating that of many adult occupants. Therefore, a further step according to the present methods and apparatus relies on textural analysis of the features within a segmented image. As shown in
According to this aspect of the invention, texture of a segmented image is analyzed using one or more wavelet transforms. This analysis is particularly useful for differentiating between an “empty” occupant classification and other “occupied” classifications, such as those where an animate form (e.g., person) is positioned within the area being analyzed. In particular, note that an empty seat typically has very little texture variance throughout, except in areas where there is, for example, stitching or another type of variation in the exterior covering (e.g., the leather or fabric of the seat). As described in more detail below, analysis of texture variance was found to be a useful tool for distinguishing between an “empty” seat and a seat that is “occupied” by some animate form of occupant (e.g., a human occupant).
The number, size, and location of key regions for feature extraction are selected based on a predetermined number of processing windows. For example, at least three or four distinct key regions may be used in conjunction with methods and apparatus exemplified herein. Each key region facilitates localized analysis of the texture of the segmented image. The key regions can overlap, partially or fully, with one or more adjacent key regions in one embodiment of the present methods and apparatus. However, the key regions need not overlap to any extent in other embodiments.
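The selection of key regions described above can be sketched as follows. A minimal sketch, assuming a 2×2 grid of overlapping rectangular windows; the grid layout, the overlap fraction, the image dimensions, and the function name are illustrative assumptions, since the text requires only a predetermined number of regions that may, but need not, overlap:

```python
import numpy as np

def define_key_regions(image_shape, overlap=0.25):
    """Return four rectangular key regions, each as (top, left, bottom, right)
    in pixel coordinates, laid out over a segmented seat image.

    The 2x2 grid with fractional overlap is an illustrative layout only;
    overlapping of adjacent regions is optional per the description.
    """
    h, w = image_shape
    # Nominal half-size of each window, enlarged so that neighbors overlap.
    rh = int(h / 2 * (1 + overlap))
    rw = int(w / 2 * (1 + overlap))
    regions = []
    for top in (0, h - rh):
        for left in (0, w - rw):
            regions.append((top, left, top + rh, left + rw))
    return regions

regions = define_key_regions((480, 640))
print(regions)  # four overlapping windows covering the image
```

Each returned window can then be cut from the segmented image for localized texture analysis.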
Four key regions are illustrated in the exemplary segmented image 600 shown in
After key regions are identified, representative texture of each of the key regions is assessed using a wavelet transform. An exemplary wavelet transform comprises a bank of multi-dimensional Gabor or similar texture filters or matrices. While Gabor filters were found to provide superior performance, a number of other texture filters and matrices are known and can be adapted for use according to the present invention. For example, two-dimensional Fourier transforms (although lacking in their comparative ability to analyze orientation in addition to frequency), co-occurrence matrices, and Haar wavelet transforms (which are based on step functions of varying sizes as compared to Gaussian functions) are a few examples of other tools useful for texture analysis. Any suitable texture analysis methods and apparatus, including combinations thereof, can be used. For example, it is to be understood that a combination of image filters relying on wavelet transforms can be used according to further embodiments. It is also to be understood that more than one wavelet transform can be applied to a particular key region or portion thereof. Such might be the case for desired redundancy or other purposes.
As with other wavelet transforms, the exemplary Gabor filter advantageously combines directional selectivity (i.e., detects an edge having a specific direction), positional selectivity (i.e., detects an edge having a specific position), and spatial frequency selectivity (i.e., detects an edge whose pixel values change at a specific spatial frequency) within one image filter. The term “spatial frequency,” as used herein, refers to the level of change in pixel values (e.g., luminance) of an image with respect to their positions. Texture of an image is defined according to spatial variations of grayscale values across the image. Thus, by assessing the spatial variation of an image across a key region using a Gabor filter or equivalent wavelet transform, texture of the image within the key region can be defined. According to one embodiment, texture is defined in accordance with the well known Brodatz texture database. Reference is made to P. Brodatz, Textures: A Photographic Album for Artists and Designers, (1966) Dover, N.Y.
Each Gabor filter within a multi-dimensional Gabor filter bank is a product of a Gaussian kernel and a complex plane wave as is well known to those skilled in the art. As used, each Gabor filter within the bank varies in scale (based on a fixed ratio between the sine wavelength and Gaussian standard deviation) and orientation (based on the sine wave).
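The scale and orientation parameterization described above can be sketched as follows. A minimal sketch, assuming particular counts of scales and orientations, a base wavelength, a scale step, and a sigma-to-wavelength ratio, none of which are specified by the description:

```python
import numpy as np

def gabor_bank_params(num_scales=3, num_orientations=4, wavelength0=4.0,
                      scale_step=2.0, ratio=0.56):
    """Enumerate (wavelength, sigma, orientation) triples for a Gabor bank.

    Per the description, each filter in the bank varies in scale (with a
    fixed ratio between the sine wavelength and the Gaussian standard
    deviation) and in the orientation of the sine wave.  The specific
    values used here are illustrative assumptions.
    """
    params = []
    for s in range(num_scales):
        wavelength = wavelength0 * scale_step ** s
        sigma = ratio * wavelength  # fixed sigma-to-wavelength ratio
        for o in range(num_orientations):
            theta = o * np.pi / num_orientations  # evenly spaced orientations
            params.append((wavelength, sigma, theta))
    return params

print(len(gabor_bank_params()))  # 12 filters: 3 scales x 4 orientations
```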
According to the present methods and apparatus, Gabor filter coefficients (which are complex in that they include both a real part and an imaginary part) are computed for each of the Gabor filters within a bank of Gabor filters for each position under analysis. The coefficient of each Gabor filter that corresponds to the feature vector element is a measure of the likelihood that the associated key region is dominated by texture associated with that given directional orientation and frequency of repetition. A multi-dimensional Gabor filter bank is represented according to the following Equation I:
Gabor(x; k0, C) = exp(i x · k0) G(x; C)    Equation I
As used in Equation I, the term “k0” is the wave number associated with the exponential function of which it is a part; and, the term “k0” dictates the frequency of the sinusoid associated with the exponential in Equation I. As used in Equation I, “x” represents the vector associated with that specific Gabor filter within the bank. The term “G(x; C)” represents the two-dimensional Gaussian kernel with covariance “C.” That Gaussian kernel is represented according to the following Equation II:
G(x; C) = (2π)^(-d/2) |C|^(-1/2) exp(-(1/2) x^T C^(-1) x)    Equation II
As applied to an exemplary embodiment of the disclosed methods and apparatus, in Equation II, “d” is assigned a value of two based on two-dimensional spatial filtering according to the invention, and “T” refers to the transpose of the vector “x.” For the two-dimensional column vector “x,” which has one column and two rows, the transpose “x^T” has two columns and one row. The remaining terms are as defined herein.
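Equations I and II can be sketched numerically as follows. A minimal sketch, assuming a small sampling grid, a horizontal wave number, and an isotropic covariance, all of which are illustrative values rather than parameters taken from the description:

```python
import numpy as np

def gaussian_kernel(x, C):
    """Equation II: d-dimensional Gaussian with covariance C (here d = 2)."""
    d = x.shape[-1]
    Cinv = np.linalg.inv(C)
    norm = (2 * np.pi) ** (-d / 2) * np.linalg.det(C) ** (-0.5)
    # Quadratic form x^T C^{-1} x evaluated at every grid position.
    quad = np.einsum('...i,ij,...j->...', x, Cinv, x)
    return norm * np.exp(-0.5 * quad)

def gabor(x, k0, C):
    """Equation I: complex plane wave exp(i x.k0) times the Gaussian kernel."""
    return np.exp(1j * x @ k0) * gaussian_kernel(x, C)

# Sample the filter taps on a small grid (illustrative parameters).
yy, xx = np.mgrid[-7:8, -7:8]
grid = np.stack([xx, yy], axis=-1).astype(float)
k0 = np.array([2 * np.pi / 8.0, 0.0])  # horizontal wave, wavelength 8 px
C = np.diag([9.0, 9.0])                # isotropic Gaussian, sigma = 3 px
kernel = gabor(grid, k0, C)
print(kernel.shape)  # (15, 15) complex filter tap values
```

The resulting complex taps carry both a real part and an imaginary part, consistent with the complex filter coefficients discussed above.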
In one embodiment, a bank of two-dimensional Gabor filters is used to spatially filter the image within each key region. As a general principle, spatial filtering using Gabor filters is understood by those of skill in the art, although Gabor filters have not previously been applied as in the present methods and apparatus. In spatially filtering an image, a feature vector is created from the bank of Gabor filters. Analysis of the bank of Gabor filters, and the resultant feature vector, provides a description of the texture (e.g., as represented by amplitude and periodicity) of the image in the key region being analyzed, based on an estimate of the phase responses of the image within that key region.
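The spatial filtering and feature-vector construction can be sketched as follows. A minimal sketch: the toy filter parameters, the FFT-based convolution, and the choice of mean response magnitude as the per-filter statistic are all illustrative assumptions, since the description states only that a feature vector is created from the bank:

```python
import numpy as np

def toy_gabor(theta, wavelength=8.0, sigma=3.0, size=15):
    """A small complex Gabor tap array at the given orientation (illustrative)."""
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    rot = xx * np.cos(theta) + yy * np.sin(theta)
    wave = np.exp(1j * 2 * np.pi * rot / wavelength)
    gauss = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return wave * gauss

def filter_response(image, kernel):
    """Convolve image with a complex kernel via zero-padded FFT."""
    H, W = image.shape
    kh, kw = kernel.shape
    fh, fw = H + kh - 1, W + kw - 1
    F = np.fft.fft2(image, (fh, fw)) * np.fft.fft2(kernel, (fh, fw))
    full = np.fft.ifft2(F)
    top, left = kh // 2, kw // 2
    return full[top:top + H, left:left + W]

def feature_vector(region, bank):
    """One element per filter: mean response magnitude over the key region."""
    return np.array([np.abs(filter_response(region, k)).mean() for k in bank])

# Horizontal-frequency stripes respond to the 0-degree filter, not the 90-degree one.
bank = [toy_gabor(t) for t in (0.0, np.pi / 2)]
stripes = np.sin(2 * np.pi * np.arange(64) / 8.0)[None, :] * np.ones((64, 1))
fv = feature_vector(stripes, bank)
print(fv)
```

The dominant element of the feature vector thus indicates which orientation and frequency of texture dominates the key region, in line with the coefficient interpretation given above.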
After the filter outputs are organized into a feature vector, pattern recognition is performed to determine classification of the analyzed position. This pattern recognition step corresponds to the “Occupant Classifier” step 212 of
According to an exemplary embodiment, pattern recognition is facilitated using histograms. According to this embodiment, histograms are generated for each of the elements of the particular feature vector as known to those skilled in the image processing arts. Histograms generated according to this step serve as statistical tools for determining the most common texture in each key region under analysis.
When analyzing one or more key regions for classification of a vehicle seat as “empty” or “occupied,” histograms associated with key regions within an “empty” vehicle seat will generally be distinguished by relative spikiness as compared to those of key regions within an otherwise “occupied” vehicle seat. The spikes in the histogram generally correspond to the angles and spacing of the textural pattern within an “empty” vehicle seat. This differentiation arises because an “occupied” seat presents many edges defined by intersections of differently oriented planes, such as planes corresponding to folds in the clothing worn by the occupant, or curved lines where portions of the occupant's body join together (e.g., where the arms meet one's body), as compared to the distinct edges typically associated only with stitching on a vehicle seat. Thus, more variation in spatial orientation throughout a key region is indicative of the presence of an object or occupant on an otherwise generally smooth surface (e.g., a portion of a vehicle seat that typically has variations only where stitching is present). In the case of an “occupied” seat, the histogram will appear broader and more uniformly distributed as compared to the narrow and focused histograms associated with an “empty” seat.
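The spiky-versus-broad histogram distinction can be sketched with a simple peakedness statistic. A minimal sketch, assuming normalized entropy as the measure, a particular bin count, and synthetic response samples; none of these specifics come from the description:

```python
import numpy as np

def spikiness(samples, bins=16):
    """Peakedness of a response histogram: 1 minus normalized entropy.

    Near 1 -> responses concentrate in few bins (spiky, "empty"-like);
    near 0 -> responses spread across bins (broad, "occupied"-like).
    The statistic itself is an illustrative choice.
    """
    hist, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log(p)).sum()
    return 1.0 - entropy / np.log(bins)

rng = np.random.default_rng(0)
regular = np.full(1000, 0.5) + 0.01 * rng.standard_normal(1000)  # uniform texture
varied = rng.uniform(0.0, 1.0, 1000)                             # many edges
print(spikiness(regular), spikiness(varied))
```

A high spikiness score across the key regions would thus point toward the “empty” classification, and a low score toward “occupied.”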
When determining overall classification of a position within the vehicle, such as when classifying a vehicle seat as being “empty” or “occupied,” results of pattern recognition from one or more key regions are used. Any suitable method can be used for overall classification based on the data obtained from the use of wavelet transforms in each key region according to the present methods and apparatus. For example, results of pattern recognition for multiple regions can be used in a voting process to arrive at an overall classification for the seat: “empty” or “occupied.” According to an exemplary voting process, each key region is assigned a relative weight as compared to the other key regions. As an example, the vehicle seat bottom can be assigned relatively less weight than the vehicle seat back, due to the likelihood that any inanimate object occupying the seat (e.g., a purse, documents, and the like) will be located on the bottom of the seat, if at all.
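The weighted voting process can be sketched as follows. A minimal sketch, assuming particular weights, a decision threshold, and function and parameter names that are illustrative rather than drawn from the description:

```python
def classify_seat(region_votes, region_weights, threshold=0.5):
    """Weighted vote over per-region decisions.

    region_votes: 1 if that key region appears "occupied", else 0.
    region_weights: relative trust per region (e.g., seat back weighted
    above seat bottom, since small inanimate objects usually rest on
    the bottom of the seat).  Threshold of 0.5 is an assumption.
    """
    total = sum(region_weights)
    score = sum(v * w for v, w in zip(region_votes, region_weights)) / total
    return "occupied" if score >= threshold else "empty"

# Seat-back regions weighted twice the seat-bottom regions.
weights = [2.0, 2.0, 1.0, 1.0]
print(classify_seat([1, 1, 0, 0], weights))  # -> occupied (back regions agree)
print(classify_seat([0, 0, 1, 0], weights))  # -> empty (lone bottom-region vote)
```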
The methods and apparatus described in the exemplary embodiments herein accumulate information about a position within a vehicle and process that information to assign an occupancy classification to the position. The methods and apparatus function to provide a highly accurate classification of the vehicle occupancy (including an identification that the position is “empty” when there is no animate form in that position) and, therefore, the methods and apparatus are advantageous as compared to previous occupancy classification systems.
As used herein, the term “image-based sensing equipment” includes all types of optical image capturing devices. The captured images may comprise still or video images. Image-based sensing equipment includes, without limitation, one or more of a grayscale camera, a monochrome video camera, a monochrome digital complementary metal oxide semiconductor (CMOS) stereo camera with a wide field-of-view lens, or any other type of optical image capturing device.
According to one exemplary embodiment, image-based sensing equipment is used to obtain image information about the environment within a vehicle and its occupancy. The image information is analyzed and classified in accordance with the present teachings. Analysis and classification according to the exemplary embodiment generally occurs using any suitable computer-based processing equipment, such as that employing software or firmware executed by a digital processor.
Those skilled in the art will appreciate that the disclosed method and apparatus may be practiced or implemented in any convenient computer system configuration, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, networked personal computers, minicomputers, mainframe computers, and the like. The disclosed methods and apparatus may also be practiced or implemented in distributed computing environments where tasks are performed by remote processing devices linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Various modifications and alterations of the disclosed methods and apparatus will become apparent to those skilled in the image processing arts without departing from the spirit and scope of the present teachings, which is defined by the accompanying claims. The appended claims are to be construed accordingly. It should also be noted that steps recited in any method claims below do not necessarily need to be performed in the order that they are recited. Those of ordinary skill in the image processing arts will recognize variations in performing the steps from the order in which they are recited.