SCENE AWARENESS SYSTEM FOR A VEHICLE

Abstract
A method of alerting a user of a vehicle as to a scene, external to the vehicle, includes capturing data pertaining to the scene, external to the vehicle. The captured data, pertaining to the scene external to the vehicle, is transmitted to a processor. A determination is made, in the processor, that a characteristic requiring the user's visual attention is in the scene. Data pertaining to a gaze of the user is captured. The captured data, pertaining to the gaze of the user, is transmitted to the processor. A determination is made, in the processor, that the user is gazing toward a user device. A signal is transmitted to the user device such that the user device alerts the user as to the characteristic in the scene requiring the user's visual attention.
Description
TECHNICAL FIELD

The present disclosure is related to an adaptive external road scene transfer to improve driver awareness.


BACKGROUND

Vehicles, such as cars, typically include displays or indicators to provide information to the vehicle user. Such displays or indicators may, for example, provide information regarding mileage, fuel consumption, and vehicle speed. The vehicle user usually has to shift his eye gaze away from the road scene and onto an in-vehicle display in order to visually process the information presented by these displays or indicators.


SUMMARY

One possible aspect of the disclosure provides a method of alerting a user of a vehicle as to a scene, external to the vehicle, which includes capturing data pertaining to the scene, external to the vehicle. The captured data, pertaining to the scene external to the vehicle, is transmitted to a processor. A determination is made, in the processor, that a characteristic requiring the user's visual attention is in the scene. Data pertaining to a gaze of the user is captured. The captured data, pertaining to the gaze of the user, is transmitted to the processor. A determination is made, in the processor, that the user is gazing toward a user device. A signal is transmitted to the user device such that the user device alerts the user as to the characteristic in the scene requiring the user's visual attention.


Another aspect of the disclosure provides a scene awareness system for a vehicle that includes at least one exterior camera, at least one interior camera, and a vehicle controller. The exterior camera is configured to capture data pertaining to a scene, external to the vehicle. The interior camera is configured to capture data pertaining to an orientation of a gaze of a user of the vehicle. The vehicle controller is in communication with each of the at least one exterior camera and the at least one interior camera. The vehicle controller is configured to determine whether at least one characteristic in the scene, exterior to the vehicle, requires the user's visual attention. The vehicle controller is also configured to determine whether the user is gazing at a user device. The vehicle controller is configured to transmit a signal to the user device such that the user device alerts the user as to the characteristic in the scene requiring the user's visual attention when the controller determines there is at least one characteristic in the scene requiring the user's attention, simultaneous with the controller determining the user is gazing at the user device.


In yet another aspect of the disclosure, a vehicle includes a body and a scene awareness system. The scene awareness system includes at least one exterior camera, at least one interior camera, and a vehicle controller. The cameras are operatively attached to the body. The exterior camera is configured to capture data pertaining to a scene, external to the vehicle. The interior camera is configured to capture data pertaining to an orientation of a gaze of a user of the vehicle. The vehicle controller is in communication with each of the cameras. The vehicle controller is operable for receiving data pertaining to the scene, external to the vehicle, from the at least one exterior camera. The vehicle controller includes a processor, and a characteristic in the scene requiring the user's visual attention is determined in the processor. Data, pertaining to a gaze of the user, is received from the interior camera. A determination is made that the user is gazing toward a user device. A signal is transmitted to the user device, from the controller, that alerts the user as to the characteristic in the scene requiring the user's visual attention.


The above features and advantages and other features and advantages of the present teachings are readily apparent from the following detailed description of the best modes for carrying out the present teachings when taken in connection with the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic illustrative side view of a vehicle.



FIG. 2 is a schematic diagrammatic view of a scene awareness system of the vehicle of FIG. 1.



FIG. 3A is a schematic fragmentary view of an interior of the vehicle having a scene awareness system, illustrating the user of the vehicle gazing at a scene, external to the vehicle.



FIG. 3B is a schematic fragmentary view of the interior of the vehicle having the scene awareness system, illustrating the user of the vehicle gazing at a user device.



FIG. 4 is a schematic flow chart diagram of a method of alerting the user of the vehicle as to a scene, external to the vehicle, requiring the user's attention.





DETAILED DESCRIPTION

Those having ordinary skill in the art will recognize that terms such as “above,” “below,” “upward,” “downward,” “top,” “bottom,” etc., are used descriptively for the figures, and do not represent limitations on the scope of the invention, as defined by the appended claims. Furthermore, the invention may be described herein in terms of functional and/or logical block components and/or various processing steps. It should be realized that such block components may be comprised of any number of hardware, software, and/or firmware components configured to perform the specified functions.


Referring now to the drawings, wherein like numerals indicate corresponding parts throughout the several views, FIG. 1 schematically illustrates a vehicle 10 including a body 12. The vehicle 10 may be a land vehicle, such as a car, or any other type of vehicle such as an airplane, farm equipment, construction equipment, a boat, etc. The vehicle 10 may be configured to be operated by a user 18, operated autonomously, and/or operated semi-autonomously. Regardless of the specific kind of vehicle, the vehicle 10 includes a windshield 14 coupled to the body 12. The windshield 14 is wholly or partly made of a substantially transparent material. Referring now to FIGS. 3A and 3B, the vehicle 10 includes an interior 16. Accordingly, a vehicle user 18 can be in position within the interior 16 and see through the windshield 14, external to the vehicle 10. Referring again to FIG. 1, the body 12 extends between a forward end 21 and a rearward end 23. The windshield 14 faces the forward end 21 of the vehicle 10, opposite the rearward end 23.


With reference to FIG. 2, a scene awareness system 25 is shown having one or more exterior cameras 26, one or more interior cameras 28, and a vehicle controller (C) 20. The vehicle controller 20, which is in communication with the interior and exterior camera(s) 28, 26, is programmed to use the exterior camera(s) 26 to automatically locate and/or identify one or more characteristics in a scene 32, exterior to the vehicle 10, and to use the interior camera(s) 28 to simultaneously locate and identify where the user's 18 eyes are gazing within the vehicle 10, via execution of instructions embodying a method 100, an example of which is described below with reference to FIG. 4. The characteristics may include, but are not limited to, one or more objects 30, colors, brightness, darkness, temperatures, temperature gradients, graphical patterns, motion, and other like characteristics.


Referring to FIG. 2, the exterior camera(s) 26 is configured to monitor a scene 32, external to the vehicle 10. The exterior cameras 26 may be operatively attached to the vehicle 10 to view the scenery 32 in front of, beside, above, and/or behind the vehicle 10. Alternatively, at least one exterior camera 26 may be resident within a user device 42, located within the interior 16 of the vehicle 10, and directed through the windshield 14 or some other window of the vehicle 10. In such an embodiment, the user device 42 would be positioned within the vehicle 10 such that the exterior camera 26 is directed to view the scenery 32 around the vehicle 10. The scenery 32 may be of a landscape, a billboard, a landmark, and the like. It should be appreciated that portions of the scenery 32 may be static, in motion, and/or a combination of static and in motion. The exterior camera(s) 26 may include a camera, a sensor, and the like. The exterior camera(s) 26 is configured to capture images, static or in motion, external to the vehicle 10, in real-time. The exterior camera(s) 26 is in operative communication with a processor 22 configured to determine whether the scenery 32 captured in real-time is sufficient to require the user's 18 visual attention, i.e., whether the scenery 32 is visually salient. The processor 22 may be resident within the vehicle controller 20. The exterior camera(s) 26 may be configured to use machine vision (MV) to recognize characteristics, such as objects 30, in the scenery 32, in real-time. The captured images may be transmitted as data (arrow 38) from the exterior camera(s) 26 to the processor 22, within the vehicle controller 20, to make an imaging-based determination of whether the scenery 32 is visually salient. Machine vision may make an imaging-based determination based on edge detection of objects 30, color analysis to identify objects 30, pattern recognition, feature detection, motion analysis, and the like. The processor 22 may be pre-programmed with criteria. By way of a non-limiting example, scenery 32 sufficient to require the user's 18 visual attention may include an upcoming curve in the road, a stop sign, a landmark, a destination, and the like. As previously mentioned, such scenery 32 may include objects 30 which are static and/or objects 30 which are in motion.
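The disclosure does not prescribe a particular saliency algorithm, so, purely for illustration, the following is a minimal Python sketch of the kind of imaging-based determination the processor 22 might make, assuming OpenCV and NumPy are available; the edge-density heuristic and its threshold are assumptions, not part of the disclosed method.

```python
# Illustrative sketch only: an imaging-based saliency check of the kind the
# processor 22 might perform on the data (arrow 38). Assumes OpenCV and NumPy;
# the edge-density heuristic and threshold are assumptions for illustration.
import cv2
import numpy as np

EDGE_DENSITY_THRESHOLD = 0.08  # hypothetical tuning parameter

def scene_is_visually_salient(frame_bgr: np.ndarray) -> bool:
    """Return True if the captured frame appears to require the user's attention."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=100, threshold2=200)  # edge detection
    edge_density = np.count_nonzero(edges) / edges.size
    # A frame dense with edges (signage, intersections, curves) is treated as
    # salient; a real system would combine color, pattern, and motion cues.
    return edge_density > EDGE_DENSITY_THRESHOLD
```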


The interior camera(s) 28 is also in operative communication with the processor 22. The interior camera(s) 28 may be used to capture visual information with respect to the user 18 of the vehicle 10. More specifically, the interior camera(s) 28 may capture information about the user's 18 head, including the eyes, in real-time. The interior camera(s) 28 may be positioned within the vehicle 10, e.g., attached to a rearview mirror, and the like. Alternatively, the interior camera(s) 28 may be positioned within the vehicle 10, where the interior camera 28 is resident within the user device 42. In such an example, the user device 42 may be configured with both the interior camera 28 and the exterior camera 26. The captured visual information may be transmitted as data (arrow 40) to the processor 22 within the vehicle controller 20. The interior camera(s) 28 is configured to determine where the eyes of the user 18 are looking and/or to determine an orientation of the user's 18 head, relative to the windshield 14 and/or the forward end 21 of the vehicle 10. The interior camera(s) 28 and/or the user device 42 may be configured to determine whether the user 18 is gazing out the windshield 14, gazing at the user device 42, or gazing in any other direction, using MV for facial detection. The interior camera(s) 28 may be a camera, an infrared (IR) sensor, and the like.
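Again for illustration only, a gaze determination of this kind might reduce to classifying head-pose angles into zones. The sketch below assumes an upstream MV face tracker supplies yaw and pitch; the zone boundaries are hypothetical values, not from the disclosure.

```python
# Illustrative sketch: classifying the user's 18 gaze from head-pose angles
# supplied by an upstream MV face tracker. The gaze zones and angle limits
# below are assumptions for illustration, not values from the disclosure.
from enum import Enum, auto

class GazeTarget(Enum):
    WINDSHIELD = auto()
    USER_DEVICE = auto()
    OTHER = auto()

def classify_gaze(yaw_deg: float, pitch_deg: float) -> GazeTarget:
    """Map head yaw/pitch (degrees; 0, 0 = straight ahead) to a gaze zone."""
    if abs(yaw_deg) < 15.0 and abs(pitch_deg) < 10.0:
        return GazeTarget.WINDSHIELD    # gazing out the windshield 14
    if -45.0 < yaw_deg < -20.0 and -40.0 < pitch_deg < -15.0:
        return GazeTarget.USER_DEVICE   # gazing down toward the device 42
    return GazeTarget.OTHER
```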


The vehicle controller 20 is also configured to be in operative communication with the user device 42. The user device 42 includes a device controller 44 and a display screen 46. The user device 42 is configured to display information, including text and graphics, on the display screen 46. The user device 42 may be a portable personal device, such as a cell phone, a tablet, a computer, and the like. Alternatively, the user device 42 may be integrated into the vehicle 10, e.g., as an integrated infotainment device.


As described in more detail below, the user device 42 is configured to be in operative communication with the vehicle controller 20. As such, the vehicle controller 20 may selectively transmit a signal (arrow 48) to the device controller 44 to clear the display screen 46 and/or to replace the displayed image of the display screen 46 with the captured scenery 32.
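The disclosure leaves the form of the signal (arrow 48) open; as one hedged sketch, it could be modeled as a small message type handled by the device controller 44. The message format below is an assumption for illustration only.

```python
# Illustrative sketch: one possible shape for the signal (arrow 48) sent from
# the vehicle controller 20 to the device controller 44. The message format
# is an assumption; the disclosure does not specify a wire format.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class DisplayAction(Enum):
    CLEAR_SCREEN = auto()     # blank the display screen 46
    SHOW_SCENE = auto()       # replace the displayed image with the scenery 32
    RESTORE_NOMINAL = auto()  # return to the originally displayed content

@dataclass
class DeviceSignal:
    action: DisplayAction
    scene_frame: Optional[bytes] = None  # encoded image of the scenery, if any
```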


Referring again to FIG. 2, the vehicle 10 may be configured to be in selective communication with an off-board system 49. More specifically, the vehicle controller 20 may be configured to receive data relating to traffic conditions, traffic signals, weather conditions, and the like, from the off-board system 49. An example of an off-board system 49 may include a service provider, which may be configured as a server located off-board the vehicle 10, e.g., at a location remote from the vehicle 10. The off-board system 49 may be a vehicle-integrated service provider, such as the OnStar® service system, which may be selectively linked to the vehicle controller 20 and/or in communication with the user device 42. By way of another non-limiting example, the service provider may be configured to provide data to the vehicle controller 20 via Wi-Fi, a mobile telecommunications network, satellite, Bluetooth, and the like. In yet another non-limiting example, as illustrated in FIG. 2, the off-board system 49 may include environmental sensors 52 disposed in conjunction with landmarks, such as a tree, and the like. The environmental sensor 52 may be configured to transmit a signal (arrow 55) to the vehicle controller 20 to alert the vehicle controller 20 as to the proximity of such a landmark to the vehicle 10.


The vehicle controller 20 is programmed with, or has access to, the algorithm or method 100, the execution of which improves the awareness of the user 18 of the vehicle 10 as to the scene 32, external to the vehicle 10. The algorithm 100 is explained in detail below and shown in FIG. 4.


The vehicle controller 20 of FIG. 1 may be embodied as one or more computer devices having a processor (P) 22 and tangible, non-transitory memory (M) 24 on which is recorded instructions for executing the method 100. The memory 24 may include magnetic or optical memory, electrically-erasable programmable read-only memory (EEPROM), and the like. Additional transitory memory may be included as needed, e.g., random access memory (RAM), memory for internal signal buffers, etc. Other hardware of the vehicle controller 20 may include a high-speed clock, analog-to-digital (A/D) circuitry, digital-to-analog (D/A) circuitry, and any required input/output (I/O) circuitry and devices, as well as signal conditioning and buffer electronics. Individual control algorithms resident in the vehicle controller 20 or readily accessible by the vehicle controller 20 may be stored in memory 24 and/or other suitable memory, and automatically executed via the processor 22 to provide the required control functionality.


Still referring to FIG. 2, when the vehicle 10 is being operated, the exterior camera(s) 26 are operable for imaging the scene 32, exterior to the vehicle 10, in real-time, and the interior camera(s) 28 are operable for imaging the user's 18 head, including the eyes, inside of the vehicle 10, in real-time. Other sensors may also be used to make, or to supplement, the interior camera's 28 determination of whether the user 18 is gazing at the user device 42. More specifically, such sensors may sense a physical interaction between the user device 42 and the user 18 of the vehicle 10. The collected data (arrow 38) of the exterior camera(s) 26 and the collected data (arrow 40) of the interior camera(s) 28 are transmitted to the vehicle controller 20 for processing according to the method 100. The exterior camera(s) 26 and/or the interior camera(s) 28 may be three-dimensional (3D) point cloud cameras. As is known in the art, a 3D point cloud is a set of data points in a 3D coordinate system, such as the X, Y, Z Cartesian coordinate system. Such cameras are able to capture any number of data points describing the surface contour of a target object 30, and to output the collected data (arrow 38) as a depth data file with synchronized color data. The vehicle controller 20 may be preprogrammed with predetermined target data and/or may receive predetermined target information from the off-board system 49 (arrow 50) and/or the environmental sensors 52 (arrow 55), which define the known size, shape, color, movements, and/or other descriptive parameters of the particular objects 30 to be located by the exterior camera(s) 26 and/or the interior camera(s) 28.
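As a sketch of how predetermined target data might be checked against 3D point cloud output, the following compares an object's bounding-box extents to preprogrammed target dimensions; NumPy, the bounding-box comparison, and the tolerance value are assumptions for illustration.

```python
# Illustrative sketch: matching a captured 3D point cloud against
# predetermined target data (the known size of an object 30). Assumes NumPy;
# the bounding-box comparison and tolerance are illustrative assumptions.
import numpy as np

def matches_target(points_xyz: np.ndarray,
                   target_dims_m: np.ndarray,
                   tolerance: float = 0.2) -> bool:
    """points_xyz: (N, 3) array of X, Y, Z samples; target_dims_m: known
    (dx, dy, dz) extents of the target object 30, in meters."""
    extents = points_xyz.max(axis=0) - points_xyz.min(axis=0)
    # Accept the object if its bounding-box extents fall within the given
    # fractional tolerance of the preprogrammed target dimensions.
    return bool(np.all(np.abs(extents - target_dims_m) <= tolerance * target_dims_m))
```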


The vehicle controller 20, shown in FIG. 2, may be made aware of the dimensions, color, and/or other identifying features of the objects 30 it is attempting to identify. While one exterior camera 26 and one interior camera 28 are shown in FIG. 2 for illustrative simplicity, the present method 100 may work with more than one exterior camera 26 and/or more than one interior camera 28.


Referring to FIG. 4, an example embodiment of the method 100 begins with step 102, where the vehicle controller 20 receives data (arrow 56) pertaining to operation of the vehicle 10. The data may be received from a plurality of sensors 54. By way of a non-limiting example, some of the sensors may be configured to determine revolutions per minute (RPM) of a transmission and/or engine of the vehicle 10, i.e., via a transmission sensor 54A and/or an engine sensor 54B; determine, via a brake pedal sensor 54C, whether a brake pedal is being actuated; determine, via a steering wheel sensor 54D, whether the user's 18 hand is touching a steering wheel; determine a wheel speed of the vehicle 10, i.e., via a wheel speed sensor 54E; determine whether a clutch switch has been actuated, i.e., via a clutch sensor 54F; determine, via an accelerometer 54G, a longitudinal acceleration of the vehicle 10; and the like.


Additionally, determining whether the vehicle 10 is being operated may include receiving geographic data into the vehicle controller 20 regarding the vehicle's 10 position, e.g., via a global positioning system (GPS), and the like. This geographic data may, in turn, be used as a prompt to determine upcoming traffic conditions in order to suggest a driving route and/or suggest a traffic lane to be used. This geographic data may also be used to present a preview of an upcoming scene, e.g., a hidden intersection, a hidden driveway, a curve, and other geographic features. Next, the method proceeds to step 104.


At step 104, the vehicle controller 20 determines, based on the data, received by the vehicle controller 20, whether the vehicle 10 is being operated by the user 18. If the vehicle controller 20 determines the vehicle 10 is not being operated, the method returns to step 102. However, if the vehicle controller 20 determines the vehicle 10 is being operated, the method proceeds to step 106.
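For illustration, steps 102 and 104 could be reduced to a simple predicate over the sensor readings 54A-54G described above; the specific fields and thresholds below are assumptions, not values from the disclosure.

```python
# Illustrative sketch of the step 104 determination, combining a few of the
# sensor readings described above (54A-54G). The specific fields and
# thresholds are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class VehicleSensorData:
    engine_rpm: float        # engine sensor 54B
    wheel_speed_kph: float   # wheel speed sensor 54E
    brake_actuated: bool     # brake pedal sensor 54C
    hands_on_wheel: bool     # steering wheel sensor 54D

def vehicle_is_operated(data: VehicleSensorData) -> bool:
    """Step 104: decide whether the user 18 is actively operating the vehicle 10."""
    engine_running = data.engine_rpm > 500.0
    vehicle_active = data.wheel_speed_kph > 0.0 or data.brake_actuated
    return engine_running and vehicle_active and data.hands_on_wheel
```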


At step 106, the exterior camera(s) 26 captures data pertaining to the scene 32, external to the vehicle 10. The method then proceeds to step 108.


At step 108, the captured data, pertaining to the scene 32 exterior to the vehicle 10, is transmitted to the processor 22, in real-time. The method then proceeds to step 110.


At step 110, the processor 22 determines whether the scene 32 is visually salient. More specifically, the processor 22 determines whether there are important objects 30 or characteristics within the captured data of the scene 32. If a determination is made that an object 30 considered to be important is detected in the scene 32, the method proceeds to step 112. However, if a determination is made that no visually salient objects 30 or characteristics are detected in the scene 32, the method returns to step 102.


At step 112, with the user 18 seated within the interior 16 of the vehicle 10, the interior camera(s) 28 captures, in real-time, data pertaining to a gaze 29 of the user 18, including, but not limited to, an orientation of the head and/or eyes of the user 18. The method then proceeds to step 114.


At step 114, the captured data is transmitted to the processor 22 in the vehicle controller 20, in real-time. The method then proceeds to step 116.


At step 116, the processor 22 determines where the eyes of the user 18 are gazing, based on the captured data received by the vehicle controller 20 and processed by the processor 22. If the processor 22 determines at step 116 that the user 18 is not gazing at the user device 42, the method proceeds to step 118.


At step 118, the vehicle controller 20 may transmit a signal to the user device 42 such that the user device 42 displays content on the display screen 46, not pertaining to the scenery 32, i.e., the display screen 46 returns to a nominal condition or to an originally displayed condition. It should be appreciated that, in one embodiment, such a signal may only be transmitted to the user device 42 after certain criteria are satisfied, e.g., the passage of a pre-defined period of time where the user 18 is not gazing at the user device 42, etc. The method then returns to step 102.


However, if the processor 22 determines at step 116 that the user 18 is gazing at the user device 42, the method proceeds to step 120.


At step 120, the vehicle controller 20 may, in turn, transmit a signal to the user device 42. Such a signal may instruct the device controller 44 of the user device 42 to clear the display screen 46. Additionally, such a signal may instruct the device controller 44 to display imagery corresponding to the visually salient scenery 32. By changing the display screen 46 of the user device 42, the user 18 is alerted to scenery 32, external to the vehicle 10, requiring the user's 18 attention.
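Tying the steps together, one pass of the method 100 (steps 106 through 120) could be sketched as below, composing the earlier illustrative helpers; the exterior_cam, interior_cam, and transmit_to_device interfaces are hypothetical placeholders for the hardware of FIG. 2, not APIs from the disclosure.

```python
# Illustrative sketch of one pass through steps 106-120 of the method 100,
# composing the helper sketches above. The camera and transmit interfaces
# are hypothetical placeholders for the hardware of FIG. 2.
def run_method_100_once(exterior_cam, interior_cam, transmit_to_device) -> None:
    frame = exterior_cam.capture()                       # steps 106, 108
    if not scene_is_visually_salient(frame):             # step 110
        return                                           # nothing salient; loop to step 102
    yaw, pitch = interior_cam.capture_head_pose()        # steps 112, 114
    if classify_gaze(yaw, pitch) is GazeTarget.USER_DEVICE:          # step 116
        transmit_to_device(DeviceSignal(DisplayAction.SHOW_SCENE))   # step 120
    else:
        transmit_to_device(DeviceSignal(DisplayAction.RESTORE_NOMINAL))  # step 118
```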


It should be appreciated that the signal transmitted by the vehicle controller 20 is not limited to clearing the display screen 46 and/or displaying scenery 32 external to the vehicle 10. By way of a non-limiting example, the signal may also be transmitted to the user device 42 to instruct the user device 42 to output an audible sound and/or to vibrate. In another non-limiting example, the signal may be transmitted to change one of the driving displays, e.g., the instrument cluster, driver information center (DIC), navigation screen, heads-up display, inside rearview mirror, outside rearview mirror, and the like, when a determination is made that the user 18 may be gazing at such a driving display, instead of at the scenery 32 external to the vehicle 10.


While the best modes for carrying out the many aspects of the present teachings have been described in detail, those familiar with the art to which these teachings relate will recognize various alternative aspects for practicing the present teachings that are within the scope of the appended claims.

Claims
  • 1. A method of alerting a user of a vehicle as to a scene, external to the vehicle, the method comprising: capturing data pertaining to the scene, external to the vehicle; transmitting the captured data pertaining to the scene, external to the vehicle, to a processor; determining, in the processor, a characteristic of the scene requiring the user's visual attention; capturing data pertaining to a gaze of the user; transmitting the captured data pertaining to the gaze of the user to the processor; determining, in the processor, the user is gazing toward a user device; and transmitting a signal to the user device such that the user device alerts the user as to the characteristic of the scene requiring the user's visual attention.
  • 2. A method, as set forth in claim 1, wherein transmitting is further defined as transmitting a signal to the user device such that the user device alerts the user as to the characteristic of the scene requiring the user's visual attention by clearing a display screen of the user device.
  • 3. A method, as set forth in claim 2, wherein transmitting is further defined as transmitting a signal to the user device such that the user device alerts the user as to the characteristic of the scene requiring the user's visual attention by displaying a visual depiction of the characteristic of the scene requiring the user's visual attention on the display screen of the user device.
  • 4. A method, as set forth in claim 1, further comprising: receiving data pertaining to operation of the vehicle; and determining the vehicle is being operated by the user.
  • 5. A method, as set forth in claim 4, wherein capturing data pertaining to a gaze of the user is further defined as capturing data pertaining to a gaze of the user when a determination is made that the vehicle is being operated by the user.
  • 6. A method, as set forth in claim 4, wherein receiving data is further defined as receiving geographic data regarding a vehicle position.
  • 7. A method, as set forth in claim 1, wherein capturing data pertaining to the scene, external to the vehicle, is further defined as capturing data pertaining to the scene, external to the vehicle, with at least one external camera.
  • 8. A method, as set forth in claim 1, wherein capturing data pertaining to a gaze of the user is further defined as capturing data pertaining to a gaze of the user when a determination is made that a characteristic of the scene requiring the user's attention is detected in the scene.
  • 9. A method, as set forth in claim 1, wherein capturing data pertaining to a gaze of the user is further defined as capturing data pertaining to a gaze of the user with at least one internal camera.
  • 10. A method, as set forth in claim 9, wherein capturing data pertaining to a gaze of the user is further defined as capturing data pertaining to an orientation of at least one of the head and the eyes of the user.
  • 11. A method, as set forth in claim 1, wherein capturing data pertaining to a gaze of the user is further defined as capturing data pertaining to a gaze of the user after determining, in the processor, a characteristic of the scene requiring the user's visual attention.
  • 12. A scene awareness system for a vehicle, the scene awareness system comprising: at least one exterior camera configured to capture data pertaining to a scene, external to the vehicle; at least one interior camera configured to capture data pertaining to an orientation of a gaze of a user of the vehicle; a vehicle controller in communication with each of the at least one exterior camera and the at least one interior camera; wherein the vehicle controller is configured to determine whether at least one characteristic in the scene, exterior to the vehicle, requires the user's visual attention; wherein the vehicle controller is configured to determine whether the user is gazing at a user device; and wherein the vehicle controller is configured to transmit a signal to the user device such that the user device alerts the user as to the characteristic in the scene requiring the user's visual attention when the controller determines there is at least one characteristic in the scene requiring the user's attention simultaneous with the controller determining the user is gazing at the user device.
  • 13. A scene awareness system, as set forth in claim 12, further comprising at least one sensor configured for transmitting data to the vehicle controller; wherein the data pertains to operation of the vehicle.
  • 14. A vehicle comprising: a body; a scene awareness system including: at least one exterior camera operatively attached to the body; wherein the at least one exterior camera is configured to capture data pertaining to a scene, external to the vehicle; at least one interior camera operatively attached to the body; wherein the at least one interior camera is configured to capture data pertaining to an orientation of a gaze of a user of the vehicle; a vehicle controller in communication with each of the at least one exterior camera and the at least one interior camera, the vehicle controller operable for: receiving data pertaining to the scene, external to the vehicle, from the at least one exterior camera; determining, in a processor of the vehicle controller, a characteristic in the scene requiring the user's visual attention; receiving data pertaining to a gaze of the user from the at least one interior camera; determining the user is gazing toward a user device; and transmitting a signal to the user device such that the user device alerts the user as to the characteristic in the scene requiring the user's visual attention.
  • 15. A vehicle, as set forth in claim 14, wherein the body extends between a forward end and a rearward end; and wherein the at least one exterior camera is operatively attached to the body, proximate the forward end.
  • 16. A vehicle, as set forth in claim 15, wherein the body defines an interior configured for receiving the user of the vehicle therein; wherein the at least one interior camera is operatively disposed in the interior of the body.
  • 17. A vehicle, as set forth in claim 16, further comprising at least one sensor configured to be in operative communication with the vehicle controller; wherein the vehicle controller is further operable for receiving data, pertaining to operation of the vehicle, from the at least one sensor.
  • 18. A vehicle, as set forth in claim 14, wherein the vehicle controller is further operable for: receiving data pertaining to operation of the vehicle; and determining the vehicle is being operated by the user.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of International Patent Application No. PCT/US2014/035398, filed on Apr. 25, 2014, which claims the benefit of U.S. Provisional Application No. 61/816,089, filed Apr. 25, 2013, each of which is hereby incorporated by reference in its entirety.

Provisional Applications (1)
Number Date Country
61816089 Apr 2013 US
Continuations (1)
Number Date Country
Parent PCT/US2014/035398 Apr 2014 US
Child 14920420 US