The disclosure relates to the field of monitoring systems, and, more particularly, to occupant monitoring systems in motor vehicles.
Currently, there are some driver monitoring systems that monitor the driver via steering wheel sensors, seat sensors, thermal sensors, etc., for purposes of authentication.
The present invention may encompass the unique use cases described below for a vehicle occupant monitoring system (VOMS) that helps occupants with ergonomics, attention management and wellness management. In-vehicle cameras may detect the position of the occupant's eyes and the direction in which the occupant is looking in many of these embodiments.
Active Head Up Display (HUD)—The position of the occupant's eyebox is determined based on positions of the occupant's eyes. The position or orientation of the HUD mirror may be automatically adjusted depending upon the position of the occupant's eyebox in order to position the virtual image within the occupant's view.
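As a non-limiting illustration of the eyebox-driven adjustment described above, the following sketch maps a detected eye height to a HUD mirror tilt. The nominal eyebox height, the gain constant, and the function name are illustrative assumptions and not part of the disclosure.

```python
# Illustrative sketch: map the detected eye position to a HUD mirror tilt
# so that the virtual image is recentered in the occupant's eyebox.

NOMINAL_EYE_HEIGHT_MM = 1200.0   # assumed calibration: nominal eye height
TILT_GAIN_DEG_PER_MM = 0.02      # assumed gain: degrees of tilt per mm of offset

def hud_mirror_tilt(eye_height_mm: float, base_tilt_deg: float = 0.0) -> float:
    """Return the HUD mirror tilt for the measured eye height."""
    offset = eye_height_mm - NOMINAL_EYE_HEIGHT_MM
    return base_tilt_deg + TILT_GAIN_DEG_PER_MM * offset
```

In practice the camera's eye-position estimate would be filtered over several frames before being fed to the mirror actuator, so that the virtual image does not jitter.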
Side-View Mirrors—The angles of the side-view mirrors may be automatically adjusted so that the mirrors are positioned for good visibility based on the positions of the occupant's eyes.
Intelligent Headlamps Enhancement—The angles of the vehicle headlamps may be automatically adjusted based on the direction in which the occupant is looking in order to shift additional light in the direction in which the occupant is looking. At low speeds, side lamps may be automatically turned on and the angles of the side lamps may be automatically adjusted in order to illuminate posted street addresses. Laterally directed cameras and character recognition software may be used to detect the presence of posted street addresses.
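The gaze-following headlamp behavior described above can be sketched as a simple proportional swivel clamped to the actuator's range; the gain and the swivel limit below are assumed values chosen only for illustration.

```python
def headlamp_swivel_deg(gaze_azimuth_deg: float,
                        max_swivel_deg: float = 15.0,
                        gain: float = 0.5) -> float:
    """Swivel the headlamps a fraction of the way toward the occupant's
    gaze azimuth, clamped to the mechanical limits of the actuator."""
    target = gain * gaze_azimuth_deg
    return max(-max_swivel_deg, min(max_swivel_deg, target))
```

A production system would additionally gate this on vehicle speed and rate-limit the swivel, as described for the low-speed side-lamp case above.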
Nighttime Display Dimming—At night, or when it is dark, as determined by an ambient light sensor, the center stack display automatically dims in response to determining that the occupant is not looking at the center stack display.
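The dimming logic above reduces to a small conditional; the lux threshold and brightness levels below are illustrative assumptions, not values stated in the disclosure.

```python
def center_stack_brightness(ambient_lux: float,
                            gaze_on_display: bool,
                            day_level: float = 1.0,
                            night_dim_level: float = 0.2,
                            night_threshold_lux: float = 10.0) -> float:
    """Dim the center stack display at night when the occupant is not
    looking at it; otherwise keep full brightness."""
    if ambient_lux < night_threshold_lux and not gaze_on_display:
        return night_dim_level
    return day_level
```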
Occupant Detection—The identity of the occupant may be determined via facial recognition software or via recognition of eye characteristics. Depending upon the occupant's gender and age, appropriate changes may be automatically made to the vehicle display's font size, user interface (UI) mode, screen brightness, etc.
Attention Cueing—The direction in which the occupant is looking may be determined via cameras. If a condition is detected that calls for the occupant's attention, the occupant's attention can be drawn to the condition by use of directional audio chimes. For example, the occupant's attention can be drawn in the direction of a blind spot. Attracting the occupant's attention in a certain direction may tie in with an enhanced audio landscape. For example, the occupant's head/eye position may cause the blind spot monitoring (BSM) warning to be triggered differently than it would be triggered in the absence of head/eye position data.
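One way to realize the directional chime described above is to select the loudspeaker whose bearing is closest to the condition requiring attention. The speaker names and bearings below are hypothetical; the disclosure does not specify a particular speaker layout.

```python
def chime_speaker(condition_bearing_deg: float, speakers):
    """Given a list of (name, bearing_deg) loudspeakers, pick the one
    closest in bearing to the condition that needs the occupant's attention."""
    def angular_distance(speaker):
        # wrap the difference into [-180, 180) before taking its magnitude
        return abs(((speaker[1] - condition_bearing_deg + 180.0) % 360.0) - 180.0)
    return min(speakers, key=angular_distance)
```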
Distracted Departure—If it is detected that the occupant's eyes are not directed in the forward direction as the traffic light turns green, the inventive system triggers an audio cue to notify the occupant that the light has turned green and that it is time to accelerate.
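The distracted-departure trigger above combines three signals; a minimal sketch, assuming the traffic-light state, gaze direction, and stopped state are already available from other subsystems, follows.

```python
def departure_cue_needed(light_state: str,
                         gaze_forward: bool,
                         vehicle_stopped: bool) -> bool:
    """True when the light has turned green, the vehicle is still stopped,
    and the occupant's eyes are not directed forward."""
    return light_state == "green" and vehicle_stopped and not gaze_forward
```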
Occupant Habit Awareness—Occupant habits may be observed and recorded, and statistics may be calculated and presented to the occupant to make him aware of his driving habits. Such statistics may include the length of time that the occupant's eyes are off the road, the number or frequency of the occupant's glances away from the road that are greater than two seconds, the length of time that the occupant looks down (e.g., at his phone), or the number or frequency of instances of the occupant looking down. The occupant may receive alerts about these instances, or there may be auditory stimuli to bring his attention back to the road and the driving task.
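The habit statistics enumerated above (total eyes-off-road time and the count of glances longer than two seconds) can be computed from a log of off-road glance durations, as in this sketch; the function and key names are illustrative.

```python
def glance_statistics(off_road_glances_s, long_glance_threshold_s: float = 2.0):
    """Summarize off-road glance durations (in seconds) into the habit
    statistics presented to the occupant."""
    total = sum(off_road_glances_s)
    long_count = sum(1 for g in off_road_glances_s if g > long_glance_threshold_s)
    return {"total_eyes_off_road_s": total, "long_glance_count": long_count}
```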
Pedestrian Red Carpet—A red hazard “carpet” of light may be projected in front of the vehicle in a direction in which the occupant is not looking. Thus, pedestrians are shown, and have notice of, where the occupant is not looking across an intersection or crosswalk. A pedestrian may then choose not to walk into the red hazard area. The vehicle may need to have a threshold combination of speed and closeness to the intersection or crosswalk before this feature is triggered.
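The threshold combination of speed and closeness mentioned above might be expressed as a simple gating condition; the particular thresholds below are assumed for illustration only.

```python
def red_carpet_trigger(speed_mps: float,
                       distance_to_crossing_m: float,
                       gaze_covers_crossing: bool,
                       max_speed_mps: float = 8.0,
                       max_distance_m: float = 30.0) -> bool:
    """Project the red hazard carpet only when the vehicle is close enough
    and slow enough, and the occupant is not looking at the crossing."""
    approaching = (speed_mps <= max_speed_mps
                   and distance_to_crossing_m <= max_distance_m)
    return approaching and not gaze_covers_crossing
```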
Drowsiness Detection—The drowsiness of the occupant may be determined based upon, for example, how far his eye lids are open, or the rate at which he blinks his eyes. If it is determined that the occupant is drowsy, then a call may be placed to the occupant's telephone, a window may be opened, or the occupant may be presented with a task in order to regain the occupant's attentiveness.
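The eyelid-aperture and blink-rate criteria above resemble the well-known PERCLOS measure (percentage of time the eyes are mostly closed); the sketch below assumes a per-frame eye-openness signal in [0, 1], and all thresholds are illustrative.

```python
def perclos(eye_openness_samples, closed_threshold: float = 0.2) -> float:
    """Fraction of samples in which the eyelids are mostly closed."""
    closed = sum(1 for s in eye_openness_samples if s < closed_threshold)
    return closed / len(eye_openness_samples)

def is_drowsy(eye_openness_samples,
              blink_rate_hz: float,
              perclos_limit: float = 0.15,
              blink_limit_hz: float = 0.6) -> bool:
    """Flag drowsiness when either the PERCLOS fraction or the blink rate
    exceeds its assumed limit."""
    return (perclos(eye_openness_samples) > perclos_limit
            or blink_rate_hz > blink_limit_hz)
```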
Emotion Recognition—The occupant's emotions may be determined based on facial recognition. For example, the number of smiles by the occupant within a period of time, or the percentage of time the occupant spends smiling may be determined.
Health Monitoring—The occupant's state of health may be determined through video processing of the driver's face and body. For example, the heartbeat of a human can be measured with video processing of the green channel of an RGB image sensor. In other examples, the occupant monitoring system may detect choking, coughing, sneezing and sweating through similar video processing methodologies. Using combinations of these information streams, a sophisticated software algorithm may make predictions and suggest treatment for conditions such as colds, flu, intoxication, body toxins, hyperventilation, hypoventilation, apnea, choking and more. Using machine learning and video processing, an occupant's weight trends and organ failure (e.g., kidney failure) may be determined through changes in skin color, blotchiness, droopiness and darkness. Mental health may also be monitored using the same methodologies as described above.
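The green-channel heartbeat measurement mentioned above is commonly done by tracking the mean green intensity of a face region over time and finding the dominant frequency in the human pulse band. The following is a minimal sketch of that idea, assuming the per-frame green means have already been extracted from the camera images.

```python
import numpy as np

def estimate_heart_rate_bpm(green_means, fps: float) -> float:
    """Estimate pulse from the mean green-channel intensity of a face
    region sampled once per video frame."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # restrict to a plausible human pulse band (0.7-3.0 Hz, i.e. 42-180 bpm)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0
```

A real remote-photoplethysmography pipeline would also detrend illumination changes and compensate for head motion before the spectral step; this sketch omits those stages.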
In one embodiment, the invention comprises a motor vehicle including a loudspeaker or array of loudspeakers disposed within a passenger compartment of the vehicle approximately between an occupant's seat of the vehicle and a space of interest disposed outside of the vehicle. A camera is positioned to capture images of a face of an occupant of the vehicle. A processing device receives the images captured by the camera, and determines from the images that the occupant is not looking at the space of interest disposed outside of the passenger compartment. In response to the determining that the occupant is not looking at the space of interest disposed outside of the passenger compartment, the processing device causes a sound to be emitted from the loudspeaker or array of loudspeakers.
In another embodiment, the invention comprises a motor vehicle including a display screen disposed within a passenger compartment of the vehicle. A camera is positioned to capture images of a face of an occupant of the vehicle. A processing device receives the images captured by the camera, and determines based upon the images that the occupant is not looking at a visual warning that is presented outside of the display screen. In response to the determining that the occupant is not looking at the visual warning, the visual warning is presented on the display screen.
In yet another embodiment, the invention comprises a motor vehicle including an occupant-actuatable control device which individually adjusts a position or orientation of each of a plurality of mirrors in response to the control device being actuated. A camera is positioned to capture images of a face of the occupant. A processing device receives the images captured by the camera, and determines from the images that the occupant is looking at one of the mirrors. In response to the control device being actuated, and in response to the determining that the occupant is looking at one of the mirrors, the position or orientation of the one mirror at which the occupant is looking is adjusted.
In a further embodiment, the invention comprises a motor vehicle including a plurality of mirrors each having an individually adjustable position or orientation. A head up display presents a virtual image to an occupant of the vehicle. The virtual image has an adjustable position. A camera is positioned to capture images of a face of the occupant. A processing device receives the images captured by the camera, and determines from the images a position of eyes of the occupant. The position or orientation of the mirrors and the position of the virtual image are adjusted based on the determined position of the eyes of the occupant.
An advantage of the occupant monitoring system of the present invention is that it may enable the vehicle to “know the occupant” and enhance the occupant's experience by providing the occupant with personalization/customization/settings such as mirror orientations. Also, the occupant monitoring system can determine what the occupant is paying attention to, and can provide a safer driving experience with enhanced attention management and wellness management.
A better understanding of the present invention will be had upon reference to the following description in conjunction with the accompanying drawings.
The foregoing description may refer to “motor vehicle”, “automobile”, “automotive”, or similar expressions. It is to be understood that these terms are not intended to limit the invention to any particular type of transportation vehicle. Rather, the invention may be applied to any type of transportation vehicle whether traveling by air, water, or ground, such as airplanes, boats, etc.
The foregoing detailed description is given primarily for clearness of understanding, and no unnecessary limitations are to be understood therefrom, for modifications may be made by those skilled in the art upon reading this disclosure without departing from the spirit of the invention.
This application claims the benefit of U.S. Provisional Application No. 62/409,043, filed on Oct. 17, 2016, the disclosure of which is hereby incorporated by reference in its entirety for all purposes.
Number | Date | Country
---|---|---
62409043 | Oct 2016 | US