Claims
- 1. A decision enhancement system configured to influence a deployment determination of a safety restraint application for a vehicle by communicating an at-risk-zone determination to the safety restraint application, said decision enhancement system comprising:
a sensor subsystem, said sensor subsystem providing for a sensor to capture a plurality of sensor readings, said plurality of sensor readings including a first sensor reading and a second sensor reading; a tracking subsystem, wherein said tracking subsystem provides for selectively identifying a tracked condition from a plurality of pre-defined conditions, said plurality of pre-defined conditions including a crash condition, wherein said tracked condition is selectively identified using said first sensor reading; and a detection subsystem, wherein said detection subsystem is invoked only after said tracking subsystem selectively identifies said crash condition as said tracked condition, wherein said detection subsystem generates said at-risk-zone determination, from said second sensor reading, and wherein said second sensor reading is not captured earlier than said first sensor reading.
- 2. The system of claim 1, wherein said second sensor reading is not said first sensor reading, and wherein said second sensor reading is captured after said first sensor reading.
- 3. The system of claim 1, wherein the sensor includes a high-speed mode and a low-speed mode, wherein said sensor is configured to operate in said low-speed mode before said tracking subsystem selectively identifies said crash condition as said tracked condition, and wherein said sensor is configured to operate in said high-speed mode after said tracking subsystem selectively identifies said crash condition as said tracked condition.
- 4. The system of claim 3, wherein a subset of said plurality of sensor readings is a plurality of filtered sensor readings, said plurality of filtered sensor readings including said second sensor reading, and wherein said sensor readings captured by said sensor in said high-speed mode are said plurality of filtered sensor readings.
- 5. The system of claim 4, wherein at least two of said plurality of sensor readings are used to selectively identify said tracked condition, and wherein at least two of said plurality of filtered sensor readings are used to generate said at-risk-zone determination.
- 6. The system of claim 3, wherein said sensor in said high-speed mode captures between about 25 and 40 sensor readings per second, and wherein said sensor in said low-speed mode captures fewer sensor readings per second than said sensor in said high-speed mode.
- 7. The system of claim 6, wherein said sensor in said low-speed mode captures fewer than about 11 sensor readings per second.
- 8. The system of claim 3, further comprising a pre-defined detection window, wherein said plurality of filtered sensor readings are filtered in accordance with said pre-defined detection window.
- 9. The system of claim 1, further comprising a zone intrusion and a window-of-interest, wherein said detection subsystem generates said at-risk-zone determination by identifying said zone intrusion with said second sensor reading, wherein said at-risk-zone determination is generated from the portion of said second sensor reading that is within said window-of-interest.
- 10. The system of claim 9, wherein said window-of-interest is divided into a plurality of patches.
- 11. The system of claim 10, further comprising a reference image and a correlation heuristic, wherein said correlation heuristic is performed on said plurality of patches and said reference image to generate a correlation metric.
- 12. The system of claim 11, wherein said at-risk-zone determination is generated with said correlation metric.
- 13. The system of claim 1, said detection subsystem further including a detection window module, a correlation module, and a window-of-interest;
wherein said detection window module generates said window-of-interest with said second sensor reading, wherein said correlation module generates a correlation metric from said window-of-interest and a reference image, and wherein said detection subsystem generates said at-risk-zone determination using said correlation metric.
- 14. The system of claim 13, said detection subsystem further including a test threshold, wherein said detection subsystem generates said at-risk-zone determination by comparing said correlation metric with said test threshold.
- 15. The system of claim 1, wherein said sensor is a standard video camera, wherein said tracking subsystem selectively identifies said tracked condition by invoking a multiple-model probability-weighted heuristic, and wherein said safety restraint is an airbag.
- 16. The system of claim 1, further comprising a future prediction, wherein said at-risk-zone determination is said future prediction.
- 17. The system of claim 1, said plurality of pre-defined conditions including a stationary condition and a human motion condition.
- 18. The system of claim 1, further comprising a pre-crash braking condition, wherein said crash condition is said pre-crash braking condition.
- 19. The system of claim 1, said sensor subsystem including an infrared illuminator for providing light in low illumination environments.
- 20. The system of claim 1, further comprising a power supply for providing a duration of keep-alive power.
- 21. The system of claim 1, wherein said detection subsystem provides for:
filtering at least one sensor reading with a pre-defined window-of-interest; setting said sensor to a higher speed; dividing at least one of the filtered sensor readings into a plurality of patches; defining a correlation metric between said plurality of patches and a reference image; and comparing the correlation metric with a test threshold to generate said at-risk-zone determination.
- 22. A safety restraint system for a vehicle, comprising:
a sensor, a plurality of sequential sensor images, a spatial area, a computer, a plurality of occupant attributes, a current condition, a plurality of pre-defined conditions, a deployment condition, a non-deployment condition, a detection heuristic, an at-risk-zone flag value, a safety restraint deployment mechanism; wherein said sensor is configured to capture said plurality of sequential sensor images of said spatial area; wherein said computer provides for:
tracking said plurality of occupant attributes from said plurality of sequential sensor images; selectively identifying said current condition from said plurality of pre-defined conditions, wherein said plurality of pre-defined conditions includes said deployment condition and said non-deployment condition; invoking said detection heuristic after selectively identifying said deployment condition as said current condition; using said detection heuristic to set said at-risk-zone flag value; communicating said at-risk-zone flag value to said safety restraint deployment mechanism; wherein said safety restraint deployment mechanism selectively precludes the deployment of said safety restraint when said current condition is said deployment condition, and when said at-risk-zone flag value is yes.
- 23. The system of claim 22, further comprising a video camera, a maximum shutter speed of about 50 images per second, a vehicle, a multiple-model probability-weighted tracking heuristic, and an airbag;
said sensor being said video camera with said maximum shutter speed of about 50 images per second, said vehicle being an automobile, said multiple-model probability-weighted tracking heuristic being invoked to selectively identify said current condition, said safety restraint being said airbag, wherein said safety restraint system has only one said sensor.
- 24. A method for installing a decision enhancement application into a vehicle that includes a deployment mechanism for a safety restraint device, the method comprising:
defining an at-risk-zone corresponding to a location of the deployment mechanism within the vehicle; configuring a sensor to transmit sensor images to a computer; instructing the computer to filter out all areas within the image that are not part of a window-of-interest corresponding to the defined at-risk-zone after receiving a preliminary determination that a deployment of the safety restraint device is necessary; programming the computer to set an at-risk-zone flag corresponding to a detection of an occupant within the at-risk-zone, wherein the at-risk-zone flag is set using at least one window-of-interest image filtered by the computer; and placing the sensor and computer within the vehicle.
- 25. The method of claim 24, further comprising adapting the sensor to function in a low-speed mode to generate the preliminary determination and in a high-speed mode to set the at-risk-zone flag.
- 26. The method of claim 24, wherein only one sensor is used to generate the preliminary determination and set the at-risk-zone flag.
- 27. The method of claim 24, further comprising loading an occupant tracking heuristic into said computer to generate said preliminary determination.
- 28. The method of claim 24, wherein setting an at-risk-zone flag includes:
dividing at least one of the window-of-interest images into a plurality of patches; invoking a correlation heuristic to generate a correlation metric relating to the patches and a template image; and comparing the correlation metric to a predetermined test threshold to set the at-risk-zone flag.
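The detection heuristic recited in claims 10-14, 21, and 28 amounts to a short image-processing pipeline: restrict a high-speed sensor reading to the window-of-interest, divide that region into patches, correlate the patches against a reference image, and compare the resulting correlation metric with a test threshold to set the at-risk-zone determination or flag. The Python sketch below is a minimal illustration of that pipeline, not the claimed implementation: the frame-rate constants, patch size, threshold value, the use of normalized cross-correlation, and the assumption that the reference image depicts an unoccupied at-risk-zone (so that a low correlation signals an intrusion) are all assumptions rather than details taken from the claims.

```python
import numpy as np

# Illustrative constants only; the claims recite about 25-40 readings per second in
# high-speed mode and fewer than about 11 in low-speed mode, but fix no other values.
LOW_SPEED_FPS = 10
HIGH_SPEED_FPS = 30
TEST_THRESHOLD = 0.85   # hypothetical test threshold for the correlation metric
PATCH_SIZE = 16         # hypothetical side length of each square patch


def filter_window_of_interest(sensor_reading, window):
    """Keep only the portion of the sensor reading inside the window-of-interest.

    `window` is (top, left, height, width) in pixel coordinates (an assumed layout).
    """
    top, left, height, width = window
    return sensor_reading[top:top + height, left:left + width]


def divide_into_patches(image, patch_size=PATCH_SIZE):
    """Divide a 2-D (grayscale) image into non-overlapping square patches."""
    rows, cols = image.shape
    return [image[r:r + patch_size, c:c + patch_size]
            for r in range(0, rows - patch_size + 1, patch_size)
            for c in range(0, cols - patch_size + 1, patch_size)]


def correlation_metric(patches, reference_image, patch_size=PATCH_SIZE):
    """Return the mean normalized cross-correlation between each patch and the
    corresponding patch of the reference image (assumed to match the
    window-of-interest in size)."""
    scores = []
    for patch, ref in zip(patches, divide_into_patches(reference_image, patch_size)):
        p = patch - patch.mean()
        q = ref - ref.mean()
        denom = np.sqrt((p * p).sum() * (q * q).sum())
        scores.append((p * q).sum() / denom if denom > 0 else 1.0)
    return float(np.mean(scores))


def at_risk_zone_flag(sensor_reading, window, reference_image, threshold=TEST_THRESHOLD):
    """Set the at-risk-zone flag from one high-speed sensor reading.

    Assumes the reference image shows an unoccupied at-risk-zone, so a correlation
    below the test threshold is treated as a zone intrusion."""
    woi = filter_window_of_interest(sensor_reading, window)
    metric = correlation_metric(divide_into_patches(woi), reference_image)
    return metric < threshold
```

In a deployment-time loop, a function like `at_risk_zone_flag` would be evaluated on each high-speed reading captured after the tracking subsystem identifies a crash condition, and the resulting flag value would be communicated to the safety restraint deployment mechanism.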
RELATED APPLICATIONS
[0001] This continuation-in-part application claims priority from the following patent applications, which are hereby incorporated by reference in their entirety: “IMAGE PROCESSING SYSTEM FOR DYNAMIC SUPPRESSION OF AIRBAGS USING MULTIPLE MODEL LIKELIHOODS TO INFER THREE DIMENSIONAL INFORMATION,” Ser. No. 09/901,805, filed on Jul. 10, 2001; “IMAGE PROCESSING SYSTEM FOR ESTIMATING THE ENERGY TRANSFER OF AN OCCUPANT INTO AN AIRBAG,” Ser. No. 10/006,564, filed on Nov. 5, 2001; “IMAGE SEGMENTATION SYSTEM AND METHOD,” Ser. No. 10/023,787, filed on Dec. 17, 2001; “IMAGE PROCESSING SYSTEM FOR DETERMINING WHEN AN AIRBAG SHOULD BE DEPLOYED,” Ser. No. 10/052,152, filed on Jan. 17, 2002; “MOTION-BASED IMAGE SEGMENTOR FOR OCCUPANT TRACKING,” Ser. No. 10/269,237, filed on Oct. 11, 2002; “OCCUPANT LABELING FOR AIRBAG-RELATED APPLICATIONS,” Ser. No. 10/269,308, filed on Oct. 11, 2002; “MOTION-BASED IMAGE SEGMENTOR FOR OCCUPANT TRACKING USING A HAUSDORF-DISTANCE HEURISTIC,” Ser. No. 10/269,357, filed on Oct. 11, 2002; “SYSTEM OR METHOD FOR SELECTING CLASSIFIER ATTRIBUTE TYPES,” Ser. No. 10/375,946, filed on Feb. 28, 2003; “SYSTEM AND METHOD FOR CONFIGURING AN IMAGING TOOL,” Ser. No. 10/457,625, filed on Jun. 9, 2003; “SYSTEM OR METHOD FOR SEGMENTING IMAGES,” Ser. No. 10/619,035, filed on Jul. 14, 2003; “SYSTEM OR METHOD FOR CLASSIFYING IMAGES,” Ser. No. 10/625,208, filed on Jul. 23, 2003; and “SYSTEM OR METHOD FOR IDENTIFYING A REGION-OF-INTEREST IN AN IMAGE,” Ser. No. 10/663,521, filed on Sep. 16, 2003.
Continuation in Parts (12)
| Parent Number | Parent Date | Country | Child Number | Child Date | Country |
|---|---|---|---|---|---|
| 09901805 | Jul 2001 | US | 10703957 | Nov 2003 | US |
| 10006564 | Nov 2001 | US | 10703957 | Nov 2003 | US |
| 10023787 | Dec 2001 | US | 10703957 | Nov 2003 | US |
| 10052152 | Jan 2002 | US | 10703957 | Nov 2003 | US |
| 10269237 | Oct 2002 | US | 10703957 | Nov 2003 | US |
| 10269308 | Oct 2002 | US | 10703957 | Nov 2003 | US |
| 10269357 | Oct 2002 | US | 10703957 | Nov 2003 | US |
| 10375946 | Feb 2003 | US | 10703957 | Nov 2003 | US |
| 10457625 | Jun 2003 | US | 10703957 | Nov 2003 | US |
| 10619035 | Jul 2003 | US | 10703957 | Nov 2003 | US |
| 10625208 | Jul 2003 | US | 10703957 | Nov 2003 | US |
| 10663521 | Sep 2003 | US | 10703957 | Nov 2003 | US |