Automotive vehicles such as cars and trucks have structural features that may obscure the driver's view of the exterior physical environment of the vehicle, leading to blind spots. It is with respect to this general technical environment that aspects of the present disclosure are directed.
The present application describes a method comprising: determining a viewpoint of a driver of a vehicle; obtaining one or more images of a portion of a physical environment outside of the vehicle that is obscured, from the viewpoint of the driver, by a portion of the vehicle; applying parallax compensation to the one or more images based on the viewpoint of the driver to generate a compensated image; and displaying the compensated image using a display component inside the vehicle.
In some examples, and in combination with any of the above aspects and examples, the method further includes: detecting that a line of sight of the driver intersects the portion of the vehicle, where the compensated image is displayed in response to detecting that the line of sight of the driver intersects the portion of the vehicle.
In some examples, and in combination with any of the above aspects and examples, detecting that the line of sight of the driver of the vehicle intersects the portion of the vehicle includes: receiving, from an interior camera of the vehicle, information indicating a head position of the driver; estimating the line of sight of the driver based on the head position of the driver; and determining that the line of sight of the driver intersects the portion of the vehicle based on a three-dimensional representation of the vehicle.
In some examples, and in combination with any of the above aspects and examples, detecting that the line of sight of the driver of the vehicle intersects the portion of the vehicle includes: receiving, from an eye-tracking system of the vehicle, information indicating an eye position of the driver; estimating the line of sight of the driver based on the eye position of the driver; and determining that the line of sight of the driver intersects the portion of the vehicle based on a three-dimensional representation of the vehicle.
In some examples, and in combination with any of the above aspects and examples, displaying the compensated image using the display component includes projecting the compensated image onto an interior surface of the portion of the vehicle.
In some examples, and in combination with any of the above aspects and examples, displaying the compensated image using the display component includes displaying the compensated image using a display screen located on the portion of the vehicle.
In some examples, and in combination with any of the above aspects and examples, displaying the compensated image using the display component includes displaying the compensated image on a head-up display of the vehicle.
In some examples, and in combination with any of the above aspects and examples, determining the viewpoint of the driver includes: receiving, from an interior camera of the vehicle, information indicating a head position of the driver; and determining the viewpoint of the driver based on the head position of the driver.
In some examples, and in combination with any of the above aspects and examples, the method further includes: adjusting a visual characteristic of the compensated image based on a time of day.
In some examples, and in combination with any of the above aspects and examples, obtaining the one or more images includes receiving one or more live video images from one or more external cameras of the vehicle.
In some examples, and in combination with any of the above aspects and examples, obtaining the one or more images includes receiving two or more images from two or more external cameras of the vehicle, where the compensated image includes a merging of the two or more images.
In another aspect, the present technology includes a method that includes: detecting that a line of sight of a driver of a vehicle intersects an opaque portion of the vehicle; in response to detecting that the line of sight of the driver intersects the opaque portion of the vehicle: obtaining one or more images of a portion of a physical environment outside of the vehicle that is obscured, from a viewpoint of the driver, by the opaque portion of the vehicle, applying parallax compensation to the one or more images based on the viewpoint of the driver to generate a compensated image, and displaying the compensated image on a surface of the opaque portion of the vehicle.
In some examples, and in combination with any of the above aspects and examples, displaying the compensated image on the surface of the opaque portion of the vehicle includes projecting the compensated image onto the surface of the opaque portion of the vehicle.
In some examples, and in combination with any of the above aspects and examples, detecting that a line of sight of the driver of the vehicle intersects the opaque portion of the vehicle includes receiving information from a camera or an eye-tracking system.
In some examples, and in combination with any of the above aspects and examples, the method further includes determining the viewpoint of the driver based on information received from a camera or from an eye-tracking system.
In another aspect, the present technology includes a system that includes at least one processor; and memory, storing instructions that, when executed by the at least one processor, cause the system to perform a method, the method comprising: determining a viewpoint of a driver of a vehicle; obtaining one or more images of a portion of a physical environment outside of the vehicle that is obscured, from the viewpoint of the driver, by a portion of the vehicle; applying parallax compensation to the one or more images based on the viewpoint of the driver to generate a compensated image; and displaying the compensated image using a display component inside the vehicle.
In some examples, and in combination with any of the above aspects and examples, the method further includes: detecting that a line of sight of the driver intersects the portion of the vehicle, where the system displays the compensated image in response to detecting that the line of sight of the driver intersects the portion of the vehicle.
In some examples, and in combination with any of the above aspects and examples, detecting that the line of sight of the driver of the vehicle intersects the portion of the vehicle includes: receiving, from an interior camera of the vehicle, information indicating a head position of the driver; estimating the line of sight of the driver based on the head position of the driver; and determining that the line of sight of the driver intersects the portion of the vehicle based on a three-dimensional representation of the vehicle.
In some examples, and in combination with any of the above aspects and examples, detecting that the line of sight of the driver of the vehicle intersects the portion of the vehicle includes: receiving, from an eye-tracking system of the vehicle, information indicating an eye position of the driver; estimating the line of sight of the driver based on the eye position of the driver; and determining that the line of sight of the driver intersects the portion of the vehicle based on a three-dimensional representation of the vehicle.
In some examples, and in combination with any of the above aspects and examples, determining the viewpoint of the driver includes: receiving, from an interior camera of the vehicle, information indicating a head position of the driver; and determining the viewpoint of the driver based on the head position of the driver.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Non-limiting and non-exhaustive examples are described with reference to the following Figures.
Automotive vehicles such as cars and trucks have structural features that may obscure the driver's view of the exterior physical environment of the vehicle, resulting in blind spots. For example, the supporting pillars of a vehicle may obscure the driver's view of areas alongside and/or behind the vehicle, making lane changes more difficult. As another example, the hood of the vehicle may obscure the driver's view of the area directly in front of the vehicle, such that the driver may not be able to see a hazard that is in front of the vehicle and low to the ground.
As described herein, systems and methods for displaying an image of a portion of an exterior physical environment of a vehicle inside the vehicle can be used to mitigate the effect of blind spots. In some examples, the system detects when a line of sight of a driver of the vehicle intersects an opaque portion of the vehicle, such as when the driver looks in the direction of an opaque pillar of the vehicle in preparation for making a lane change. When the system detects that the line of sight of the driver intersects the opaque portion of the vehicle, the system displays, on a surface of the portion of the vehicle, an image (e.g., a still image or a live video image) of a portion of the physical environment outside of the vehicle that is obscured by the opaque portion of the vehicle, thereby simulating an unobstructed view of the exterior physical environment. In some examples, the system displays the image on a head-up display of the vehicle.
In some examples, the vehicle includes one or more external-facing cameras to capture images of the portion of the exterior physical environment that is obscured by the opaque portion of the vehicle. In some examples, the system merges images from multiple external-facing cameras to generate the image that is displayed. In some examples, the system includes an eye-tracking system to detect the line of sight (e.g., the gaze direction) of the driver based on the driver's eye movements. In some examples, the system determines whether the line of sight intersects a particular portion of the vehicle using a three-dimensional representation of the vehicle in a coordinate system associated with the vehicle. In some examples, the three-dimensional representation of the vehicle includes an indication of pre-determined portions of the vehicle that may obscure a view of the driver, such as pillars of the vehicle or a hood of the vehicle, and the system determines whether a portion of the vehicle obscures the exterior physical environment from the viewpoint of the driver by determining whether an estimated line of sight of the driver (e.g., represented as a vector in the vehicle's coordinate system) intersects a pre-determined portion of the vehicle.
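One concrete way to perform this check, sketched below under stated assumptions, is to represent each pre-determined portion of the vehicle as an axis-aligned bounding box in the vehicle coordinate system and to test the estimated line of sight, expressed as a ray, against each box. The class and function names (`VehiclePortion`, `gaze_intersects_portion`) and the bounding-box representation are illustrative assumptions rather than part of this disclosure.

```python
from dataclasses import dataclass
from typing import Optional, Sequence
import numpy as np

@dataclass
class VehiclePortion:
    """A pre-determined, potentially view-obscuring portion of the vehicle."""
    name: str            # e.g., "right_front_pillar" or "hood"
    box_min: np.ndarray  # (3,) minimum corner, vehicle coordinates (meters)
    box_max: np.ndarray  # (3,) maximum corner, vehicle coordinates (meters)

def gaze_intersects_portion(viewpoint: np.ndarray,
                            line_of_sight: np.ndarray,
                            portions: Sequence[VehiclePortion]) -> Optional[VehiclePortion]:
    """Return the first portion whose bounding box the driver's line of sight hits, else None."""
    d = line_of_sight / np.linalg.norm(line_of_sight)
    d = np.where(np.abs(d) < 1e-9, 1e-9, d)          # avoid division by zero in the slab test
    for portion in portions:
        # Standard ray/axis-aligned-box "slab" test.
        t1 = (portion.box_min - viewpoint) / d
        t2 = (portion.box_max - viewpoint) / d
        t_near = np.max(np.minimum(t1, t2))
        t_far = np.min(np.maximum(t1, t2))
        if t_far >= max(t_near, 0.0):                 # box is hit in front of the driver
            return portion
    return None
```

A fuller implementation could instead intersect the ray with the triangles of the three-dimensional representation of the vehicle, which would also yield the exact surface point at which the line of sight meets the pillar or hood.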
In some examples, the system includes an internal-facing camera to detect a head position of the driver, which can be used to estimate a viewpoint of the driver and/or to estimate a line of sight of the driver. In some examples, the system applies parallax compensation to the image (prior to or while displaying the image) based on a detected viewpoint of the driver to adjust the image so that it appears to be a view of the exterior physical environment from the current viewpoint of the driver rather than from the viewpoint of the camera that captured the image. In some examples, the system updates the parallax compensation of the image based on detected changes in the viewpoint of the driver (e.g., if the driver changes their head location or orientation).
In some examples, the system displays an image of a portion of the physical environment outside of the vehicle that is obscured by an opaque portion of the vehicle from the viewpoint of the driver whether or not the driver's line of sight intersects the opaque portion of the vehicle. That is, the system may display the image continuously rather than based on the line of sight of the driver.
In the following detailed description, references are made to the accompanying drawings that form a part hereof, and in which are shown, by way of illustration, specific embodiments or examples. These aspects may be combined, other aspects may be utilized, and structural changes may be made without departing from the present disclosure. Examples may be practiced as methods, systems, or devices. Accordingly, examples may take the form of a hardware implementation, an entirely software implementation, or an implementation combining software and hardware aspects. In addition, all systems described with respect to the figures can comprise one or more machines or devices that are operatively connected to cooperate in order to provide the described system functionality. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the present disclosure is defined by the appended claims and their equivalents.
In some examples, the viewpoint 118 of the driver 110 corresponds to a location and/or orientation, in three-dimensional space, of the driver's head within the vehicle. In some examples, a driver 110 can change their line of sight without changing their viewpoint; for example, a driver can rotate their head to look in a different direction without changing the location of their head, or can move their eyes to look in a different direction while keeping their head in the same location and orientation.
In the example of
In some examples, before displaying the image, the system applies parallax compensation to the image to adjust the image based on the viewpoint 118 of the driver 110. The system then displays the compensated image. Parallax compensation may be used to compensate for differences between the viewpoint of the driver and the viewpoint of the camera(s) that capture the image(s) such that the compensated image appears, to the driver, as it would appear if the driver 110 were directly viewing the portion of the physical environment that is included in the image (e.g., if the pillar 108 did not obscure the view of the physical environment from the viewpoint 118 of the driver). For example, parallax compensation may be used to compensate for the difference in the distance and/or angle from the driver's eyes to objects in the physical environment relative to the distance and/or angle from the camera's lens to such objects.
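The disclosure leaves the specific compensation technique open. One plausible approximation, sketched below, is to treat the obscured scene as lying roughly on a single plane at an estimated distance and to reproject the camera image to a virtual camera placed at the driver's viewpoint using the plane-induced homography. The function name, its parameters, and the use of NumPy/OpenCV are assumptions made for illustration; a production system might instead warp per pixel using depth from a stereo pair or range sensor.

```python
import numpy as np
import cv2  # OpenCV, assumed available

def parallax_compensate(image: np.ndarray,
                        K_cam: np.ndarray,        # 3x3 intrinsics of the exterior camera
                        K_virtual: np.ndarray,    # 3x3 intrinsics of a virtual camera at the driver viewpoint
                        R: np.ndarray,            # 3x3 rotation: camera frame -> driver-viewpoint frame
                        t: np.ndarray,            # (3,) translation: camera frame -> driver-viewpoint frame
                        plane_normal: np.ndarray, # (3,) unit normal of the assumed scene plane, camera frame
                        plane_dist: float,        # distance (m) from the camera to that plane
                        out_size: tuple) -> np.ndarray:
    """Warp the exterior-camera image so it approximates the view from the driver's viewpoint.

    Approximation: the obscured scene is treated as lying on a single plane
    (n^T X = plane_dist in camera coordinates), so the warp is the plane-induced
    homography H = K_virtual (R + t n^T / d) K_cam^{-1}.
    """
    t = t.reshape(3, 1)
    n = plane_normal.reshape(1, 3)
    H = K_virtual @ (R + (t @ n) / plane_dist) @ np.linalg.inv(K_cam)
    return cv2.warpPerspective(image, H, out_size)  # out_size is (width, height)
```

Updating the compensation as the driver moves (e.g., shifts their head) then amounts to recomputing R, t, and K_virtual each frame and re-warping the most recent camera frame.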
In some examples, the system adjusts a visual characteristic (e.g., a brightness and/or a color tone) of the image based on a time of day at which the image is displayed and/or based on ambient lighting. For example, the system may display images at a reduced brightness and/or using different color tones when the image is displayed at nighttime and/or when the ambient lighting is detected to be relatively dark relative to when the image is displayed during the day and/or when the ambient lighting is detected to be relatively bright.
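As a minimal sketch of this adjustment, assuming an RGB image and illustrative thresholds and gains that are not specified in the disclosure:

```python
import numpy as np

def adjust_for_lighting(image: np.ndarray, hour_of_day: int, ambient_lux: float) -> np.ndarray:
    """Dim and warm the displayed image at night or in dark ambient conditions.

    The threshold values and gains below are illustrative placeholders only.
    """
    is_night = hour_of_day >= 19 or hour_of_day < 6
    is_dark = ambient_lux < 50.0
    gain = 0.4 if (is_night or is_dark) else 1.0                                  # reduce brightness at night
    tone = np.array([1.0, 0.95, 0.85]) if (is_night or is_dark) else np.ones(3)   # slightly warmer RGB tone
    out = image.astype(np.float32) * gain * tone
    return np.clip(out, 0, 255).astype(np.uint8)
```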
The example of
In some examples, the system detects that a line of sight of the driver intersects the hood or windshield of the vehicle, and in response, displays, on a head-up display of the vehicle, an image of a portion of the physical environment that is obscured by the hood of the vehicle (e.g., to enable the driver to see objects that are directly in front of the vehicle and not visible over the hood of the vehicle), such as depicted by the head-up display 122 in
As previously mentioned, in some examples, the system displays the image of the portion of the exterior physical environment of the vehicle that is obscured by an opaque portion of the vehicle regardless of whether the driver's line of sight intersects the opaque portion of the vehicle, such as depicted in
Additional details regarding a system that may be used to implement aspects of the above-described features are described with reference to
In some examples, computing resource 202 includes one or more first computing devices (e.g., computing device 500 described with reference to
Eye-tracking system 204 may include a camera and/or other sensing device that enables the eye-tracking system 204 to monitor eye movements of a driver of a vehicle. Computing resource 202 may receive, from eye-tracking system 204, signals representing movements of the driver's eyes and may use such signals to determine or estimate a viewpoint and/or a line of sight of the driver. For example, computing resource 202 may estimate a line of sight of the driver by determining a direction of a normal vector extending from the detected pupil location(s) of the driver's eye(s). As another example, computing resource 202 may estimate a viewpoint of the driver based on a position (e.g., location and/or orientation) of the driver's eyes.
Exterior camera(s) 206 may be attached to the exterior of the vehicle and configured to capture still or live (video) images of the physical environment outside of the vehicle. Computing resource 202 may receive images from the exterior camera(s) 206 and process such images to generate an image for display inside the vehicle, such as described with reference to
Interior camera(s) 208 may be attached to the interior of the vehicle and configured to capture live or still images of the interior of the vehicle. Computing resource 202 may receive images from the interior camera(s) 208 and use such images to determine or estimate a viewpoint and/or a line of sight of the driver based on a location and/or orientation of the driver's head within the vehicle. For example, computing resource 202 may estimate a line of sight of the driver by determining a direction of a normal vector extending from a face of the driver (or a portion thereof), and/or may estimate a viewpoint of the driver based on the location of a fixed point on the driver's head (e.g., the midpoint between the driver's eyes) within the vehicle.
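The following sketch illustrates one way such estimates could be formed from three-dimensional facial landmarks (for example, landmarks produced by a face-tracking model applied to the interior camera images). The particular landmarks, the use of the face-plane normal as a proxy for the line of sight, and the assumption that the vehicle's +x axis points forward are illustrative choices, not requirements of the disclosure.

```python
from typing import Tuple
import numpy as np

def estimate_viewpoint_and_gaze(left_eye: np.ndarray,
                                right_eye: np.ndarray,
                                chin: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
    """Estimate the driver's viewpoint and an approximate line-of-sight direction.

    Inputs are 3D landmark positions in vehicle coordinates (meters). Returns
    (viewpoint, gaze_direction), where the viewpoint is the midpoint between the
    eyes and the gaze direction is the normal of the plane spanned by the landmarks.
    """
    viewpoint = (left_eye + right_eye) / 2.0
    across = right_eye - left_eye        # vector across the face
    down = chin - viewpoint              # vector from between the eyes toward the chin
    normal = np.cross(across, down)      # perpendicular to the face plane
    normal = normal / np.linalg.norm(normal)
    if normal[0] < 0:                    # orient away from the face; +x assumed to be vehicle-forward
        normal = -normal
    return viewpoint, normal
```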
Display component 210 may be configured to receive images from computing resource 202 and display the images on a portion of the interior of the vehicle, such as on an opaque portion of the interior of the vehicle (e.g., a pillar), and/or on a head-up display. Display component 210 may include a projector for projecting the image onto a surface of the interior of the vehicle and/or a display screen for displaying the image. For example, one or more interior surfaces of the vehicle may be wrapped or covered in a flexible display screen, such as a flexible organic light-emitting diode (OLED) screen or another curvable display technology.
At operation 302, a viewpoint of a driver of a vehicle is determined (e.g., viewpoint 118 of driver 110). In some examples, the viewpoint of the driver corresponds to a point (e.g., a location) in three-dimensional space inside the vehicle from which the driver may view the interior of the vehicle and/or exterior physical environment of the vehicle, such as a point corresponding to a location of the driver's head, eyes, ears, or other physical feature. In some examples, the viewpoint of the driver is assumed to be a static location relative to the interior of the vehicle (e.g., a location and/or orientation at which a driver's head is likely to be positioned), and determining the viewpoint of the driver includes obtaining the viewpoint of the driver from a storage element of a computing device. In some examples, the viewpoint may be estimated based on a sensed position of the driver's seat (which can be a rough indication of the driver's height). In some examples, the viewpoint of the driver is not assumed to be static and is instead dynamically determined by a computing device based on signals received from various components located inside the vehicle, such as an object/facial recognition system, an eye-tracking system, and/or an interior camera. For example, a computing device may determine a current viewpoint of the driver based on a current head position of the driver, eye position of the driver, line of sight of the driver, or other physical information about the driver obtained via in-vehicle cameras or sensors.

At operation 304, one or more images are obtained of a portion of a physical environment outside of the vehicle that is obscured, from the viewpoint of the driver, by a portion of the vehicle (such as right front pillar 108 or a hood of the vehicle). In some examples, the one or more images are live (video) or still images that are obtained (e.g., received) from one or more cameras mounted on the exterior of the vehicle. In some examples, the portion of the vehicle that obscures the portion of the physical environment is opaque.
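Referring back to operation 302, the following is a minimal sketch contrasting the static and dynamic approaches described above; the default location, coordinate convention, and seat-offset handling are hypothetical placeholders rather than values from the disclosure.

```python
from typing import Optional
import numpy as np

# Assumed default head location in vehicle coordinates (meters); illustrative only.
DEFAULT_VIEWPOINT = np.array([0.55, -0.40, 1.20])

def determine_viewpoint(tracked_head: Optional[np.ndarray],
                        seat_fore_aft_offset: float = 0.0,
                        seat_height_offset: float = 0.0) -> np.ndarray:
    """Return the driver viewpoint in vehicle coordinates.

    Prefers a dynamically tracked head position (e.g., from an interior camera or
    eye-tracking system); otherwise falls back to a stored static default,
    optionally nudged by the sensed seat position.
    """
    if tracked_head is not None:                # dynamic estimate available
        return tracked_head
    viewpoint = DEFAULT_VIEWPOINT.copy()        # static assumption about head placement
    viewpoint[0] += seat_fore_aft_offset        # seat slid forward/backward
    viewpoint[2] += seat_height_offset          # seat raised/lowered (rough height proxy)
    return viewpoint
```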
At operation 306, parallax compensation is applied to the one or more images based on the viewpoint of the driver to generate a compensated image. In examples where multiple images are obtained, the images may be merged and parallax compensation may be applied to the images before, during, and/or after the merging of the images to generate a single compensated image. In some examples, the parallax compensation is applied to the one or more images by performing a parallax compensation algorithm on the one or more images.
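As one illustration of the merging step, assuming each camera image has already been warped toward the driver's viewpoint (for instance, with the homography sketch given earlier), overlapping regions could be blended as follows. Treating black pixels as "no data" and averaging overlaps are simplifications chosen for brevity, not techniques specified in the disclosure.

```python
from typing import List
import numpy as np

def merge_compensated_images(warped: List[np.ndarray]) -> np.ndarray:
    """Blend per-camera images that were already warped to the driver's viewpoint.

    Pixels that are black in a given warped image are treated as missing data; where
    images overlap, valid pixels are averaged. A production system would more likely
    use seam finding or multi-band blending.
    """
    stack = np.stack([w.astype(np.float32) for w in warped])             # (N, H, W, 3)
    valid = (stack.sum(axis=-1, keepdims=True) > 0).astype(np.float32)   # (N, H, W, 1)
    counts = np.maximum(valid.sum(axis=0), 1.0)                          # avoid divide-by-zero
    merged = (stack * valid).sum(axis=0) / counts
    return np.clip(merged, 0, 255).astype(np.uint8)
```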
At operation 308, the compensated image is displayed using a display component inside the vehicle. For example, the compensated image (which may be a live video that is parallax compensated based on the viewpoint of the driver) is displayed using a projector, head-up display, display screen, or other form of display component. In some examples, the compensated image is displayed on an interior surface of the portion of the vehicle. For example, the compensated image may be projected onto on an interior surface of a pillar of the vehicle.
At operation 402, it is detected that a line of sight of a driver of a vehicle intersects an opaque portion of the vehicle. In some examples, a computing device detects that a line of sight of a driver intersects the opaque portion of the vehicle based on information received from an eye-tracking system and/or from one or more interior cameras. In some examples, a computing device estimates a line of sight of a driver by identifying, based on information received from an eye-tracking system and/or one or more interior cameras, a plane associated with the driver's face and/or eyes, determining a normal vector projecting from the plane, and determining that the normal vector intersects the opaque portion of the vehicle based on, for example, a three-dimensional representation of the vehicle. In some examples, the opaque portion of the vehicle is a pre-defined portion of the vehicle (e.g., a pillar, a windshield, a hood).
In response to detecting that the line of sight of the driver intersects the opaque portion of the vehicle, operations 406-410 are performed.
At operation 406, one or more images are obtained of a portion of a physical environment outside of the vehicle that is obscured, from the viewpoint of the driver, by the opaque portion of the vehicle. In some examples, the one or more images are live (video) or still images that are obtained (e.g., received) from one or more cameras mounted on the exterior of the vehicle.
At operation 408, parallax compensation is applied to the one or more images based on the viewpoint of the driver to generate a compensated image, such as described with reference to operation 306 of method 300.
At operation 410, the compensated image is displayed on a surface of the opaque portion of the vehicle (e.g., using a display component), such as described with reference to operation 308 of method 300.
In some examples, after displaying the compensated image, in response to detecting that the line of sight of the driver does not intersect the portion of the vehicle (e.g., the driver is no longer looking at the portion of the vehicle), the compensated image ceases to be displayed.
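A small amount of state can make this show/hide behavior robust to brief glances away. The sketch below adds a short hold-off before hiding, which is an assumption introduced here for illustration and is not described in the disclosure.

```python
import time
from typing import Optional

class DisplayGate:
    """Decides whether the compensated image should currently be shown."""

    def __init__(self, hide_delay_s: float = 0.5):
        self.hide_delay_s = hide_delay_s       # hold-off before hiding (illustrative value)
        self._last_hit = float("-inf")

    def should_display(self, gaze_hits_portion: bool, now: Optional[float] = None) -> bool:
        """`gaze_hits_portion` is the result of the line-of-sight intersection test."""
        now = time.monotonic() if now is None else now
        if gaze_hits_portion:
            self._last_hit = now
            return True
        return (now - self._last_hit) <= self.hide_delay_s
```

Calling `should_display(...)` each frame with the result of the intersection test (for example, `gaze_intersects_portion(...) is not None` from the earlier sketch) yields the show/hide decision that gates operations 406-410.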
The operating system 505, for example, may be suitable for controlling the operation of the computing device 500. Furthermore, aspects of the invention may be practiced in conjunction with a graphics library, other operating systems, or any other application program and are not limited to any particular application or system. This basic configuration is illustrated in
As stated above, a number of program modules and data files may be stored in the system memory 504. While executing on the processing unit 502, the program modules 506 may perform processes including, but not limited to, one or more of the operations of the methods illustrated in
Furthermore, examples of the invention may be practiced in an electrical circuit comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors. For example, examples of the invention may be practiced via a system-on-a-chip (SOC) where each or many of the components illustrated in
The computing device 500 may also have one or more input device(s) 512 such as a keyboard, a mouse, a pen, a sound input device, a touch input device, etc. The output device(s) 514 such as a display, speakers, a printer, etc. may also be included. The aforementioned devices are examples and others may be used. The computing device 500 may include one or more communication connections 516 allowing communications with other computing devices 518, which may include cloud-based servers or remote computational resources. Examples of suitable communication connections 516 include, but are not limited to, RF transmitter, receiver, and/or transceiver circuitry; universal serial bus (USB), parallel, and/or serial ports.
The term computer readable media as used herein may include computer storage media. Computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, or program modules. The system memory 504, the removable storage device 509, and the non-removable storage device 510 are all computer storage media examples (i.e., memory storage.) Computer storage media may include RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other article of manufacture which can be used to store information and which can be accessed by the computing device 500. Any such computer storage media may be part of the computing device 500 and/or coupled with computing device 500. Computer storage media may be non-transitory and tangible and does not include a carrier wave or other propagated data signal.
Communication media may be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” may describe a signal that has one or more characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media.
Aspects of the present invention, for example, are described above with reference to block diagrams and/or operational illustrations of methods, systems, and computer program products according to aspects of the invention. The functions/acts noted in the blocks may occur out of the order shown in any flowchart. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Further, as used herein and in the claims, the phrase “at least one of element A, element B, or element C” is intended to convey any of: element A, element B, element C, elements A and B, elements A and C, elements B and C, and elements A, B, and C.
The description and illustration of one or more aspects provided in this application are not intended to limit or restrict the scope of the disclosure as claimed in any way. The aspects, examples, and details provided in this application are considered sufficient to convey possession and enable others to make and use the best mode of the claimed disclosure. The claimed disclosure should not be construed as being limited to any aspect, example, or detail provided in this application. Regardless of whether shown and described in combination or separately, the various features (both structural and methodological) are intended to be selectively rearranged, included, or omitted to produce an embodiment with a particular set of features. Having been provided with the description and illustration of the present application, one skilled in the art may envision variations, modifications, and alternate aspects falling within the spirit of the broader aspects of the general inventive concept embodied in this application that do not depart from the broader scope of the claimed disclosure.
This application claims the benefit of U.S. Provisional Application No. 63/609,130, filed Dec. 12, 2023, entitled “In-Vehicle Display of Exterior Images to Mitigate Blind Spots,” which is incorporated herein by reference in its entirety.