The present disclosure relates to augmented reality for occupants in a vehicle.
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
During travel, occupants of vehicles often desire to view or adjust their appearance. As the look and feel of modern vehicles progresses, the surface area of transparent or semi-transparent structural components may be increased to improve the occupant experience, eliminating the structural components (e.g., visors with vanity mirrors) previously used to address occupant vanity.
This section provides a general summary of the disclosure and is not a comprehensive disclosure of its full scope or all of its features.
For example, a vehicle may remove support structures (e.g., body panels, cross bars, pillars) in order to provide a more transparent passenger compartment. For example, the windshield and roof may be formed from a single pane or without opaque support structures, which can remove the support structures necessary for visors, vanity mirrors, and other vehicle components. Cantilever supports or other mechanisms may provide access to visors, vanity mirrors, and other vehicle components, but may further detract from the occupant experience by obstructing views through the pane (e.g., windshield, windows). A display may be used to provide occupants with an indication of their current appearance or to provide other information or entertainment content without obstructing views with opaque components.
In one or more forms, the present disclosure includes a method for depicting a visual representation on one or more panes of a vehicle. The one or more panes include a first location. The method includes determining the visual representation based on an occupant. The method also includes depicting the visual representation at the first location. The determination of the visual representation may include capturing radiation reflected from the occupant. The determination of the visual representation may include applying a transform to a digital representation based on the radiation. The transform may adjust a perspective distortion of the digital representation. The radiation may be within a visible light spectrum or within an infrared spectrum.
The depiction of the visual representation may be based on an input. The input may be based on a gesture of the occupant. The gesture may be a facial expression of the occupant, and the facial expression may be a movement of an eye of the occupant. The method may include determining a state of operation associated with the vehicle. The input may be based on the state of operation. The method may include determining a weather condition associated with the vehicle. The input may be based on the weather condition. The input may be based on ambient light associated with the vehicle. The method may include adjusting the depiction of the visual representation from the first location to a second location. The adjustment to the second location may be based on an orientation of an eye of the occupant. The first location may have a vertical height greater than the second location with respect to the occupant. The determination of the visual representation may include generating radiation based on the input. The generation of the radiation may be based on the depiction of the visual representation.
In one or more forms, the present disclosure includes a method for conducting a conference call in a vehicle. The vehicle may include one or more panes. The method may include determining a digital representation based on an occupant of the vehicle. The method may include establishing the conference call based on the digital representation. The method may include depicting a visual representation of a participant of the conference call. The depiction of the visual representation may be within a region of the one or more panes.
In one or more forms, the present disclosure includes a system. The system may include a sensor. The system may include a display. The system may include one or more panes. The system may include one or more processors. The system may include one or more non-transitory memories. The non-transitory memory may include instructions operable, upon execution by the one or more processors, to determine a visual representation based on an occupant and based on the sensor. The non-transitory memory may include instructions operable, upon execution by the one or more processors, to depict a portion of the visual representation within a region of the one or more panes. The depiction of the portion of the visual representation may be based on the display. The sensor may be a camera. The display may include a projector configured to emit light comprising the portion of the visual representation. The display may include an array of light emitting diodes configured to emit light comprising the portion of the visual representation.
Further areas of applicability will become apparent from the description provided herein. It should be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
In order that the disclosure may be well understood, there will now be described various forms thereof, given by way of example, reference being made to the accompanying drawings, in which:
The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.
The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features.
Referring to
For example, the pane 102 may be configured to permit augmented reality for occupants across the entire pane 102. The pane 102 may include technologies for providing augmented reality in the form of a heads-up display. A heads-up display may provide information, indications, representations, graphics, and other depictions without requiring a gaze associated with the occupants to leave the pane 102. Some example technologies for providing the display 104 are described herein, and they are a non-exhaustive list of the technologies contemplated for providing augmented reality to occupants through a heads-up display. The display 104 may cause a visual output. The visual output may comprise one or more user interface elements 130, 132 or a visual representation 120 discussed herein. The user interface elements 130, 132 may be used to interface with the vehicle or other systems. For example, the user interface elements 130, 132 may be depicted as a knob, switch, button, or another control used to perform an operation (e.g., start a movie, adjust volume, change air conditioning, lock doors). The visual output may comprise content (e.g., videos, images, graphics) or any other emission of light, whether within the broader electromagnetic spectrum or perceivable to the human eye.
The display 104 includes at least one region (e.g., regions 106, 108, 110, 112, 114, 116, 118) for depicting information (e.g., one or more portions 122, 124 of a visual representation 120) on the pane 102 such that light through the pane 102 is transmitted to an eye of the occupant. The transmission of light may be augmented, providing an augmented reality for the occupant. The visual representation may be based on an occupant of a vehicle, a participant to a conference call, or a combination thereof. The regions 106, 108, 110, 112, 114, 116, 118 may be defined by locations that are associated with a particular display technology. For example, regions near the dashboard (e.g., regions 112, 114, 116) may be provided by a heads-up display based on a projector or otherwise, and regions near the top of the pane 102 or on a roof portion of the pane 102 (e.g., regions 108, 110, 118) may be provided by a technology based on an organic light emitting diode (OLED) array, liquid crystal display, transparent display, microLED, neoQLED, or otherwise. The outputs from the display technologies may be integrated together such that the display 104 fills the entire pane or portions thereof. Regions 106, 108, 110, 112, 114, 116, 118 are shown as various shapes and sizes and integrated together in a patchwork such that the display provides a desired area of coverage. The regions may have adjacent borders such that the depiction of a visual representation (e.g., visual representation 120) is seamless or such that the occupant cannot perceive that the depiction is provided by different display technologies. Region 106 is situated to provide blind spot monitoring and may be similarly situated on either the driver or passenger side of the vehicle.
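By way of illustration only, the following sketch shows one way the patchwork of regions could be keyed to display technologies; the region names, coordinates, units, and technology labels are hypothetical assumptions chosen for the example, not values from the disclosure.

```python
from dataclasses import dataclass
from enum import Enum, auto


class DisplayTech(Enum):
    """Display technologies contemplated for the regions (labels illustrative)."""
    HUD_PROJECTOR = auto()   # projector-based heads-up display near the dashboard
    OLED_ARRAY = auto()      # transparent OLED/microLED array near the roof line


@dataclass
class Region:
    """A rectangular region of the pane, in assumed pane coordinates."""
    name: str
    x: int          # origin (e.g., lower-left corner of the region)
    y: int
    width: int
    height: int
    tech: DisplayTech


# Patchwork of regions integrated so the display covers the desired area.
REGIONS = [
    Region("region_112", x=0, y=0, width=640, height=240, tech=DisplayTech.HUD_PROJECTOR),
    Region("region_108", x=0, y=720, width=640, height=240, tech=DisplayTech.OLED_ARRAY),
]


def region_at(x: int, y: int) -> Region | None:
    """Return the region containing pane coordinate (x, y), if any."""
    for r in REGIONS:
        if r.x <= x < r.x + r.width and r.y <= y < r.y + r.height:
            return r
    return None
```

A lookup such as region_at would let the depiction logic route a given pane coordinate to the display technology that covers it.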
The system 100 includes a sensor 126 (e.g., a visible light camera, infrared detector) for generating the visual representation 120. For example, the sensor 126 may capture visible light (e.g., electromagnetic radiation 128) generated by the display 104 and reflected from an occupant. The sensor 126 may convert the electromagnetic radiation 128 from energy to digital values, which may be indicative of a representation of the occupant (e.g., visual representation 120). The visual representation 120 is shown depicted at a first location 140. The first location 140 may have a vertical height greater than the second location 142 with respect to the occupant 320 (occupant 320 is shown in
Referring to
Referring to
Adjacent display technologies may be subject to overlap or blurring caused by bleeding or reflections between neighboring regions. Wedge film may be used to reduce the overlap or blurring between edges of adjacent regions. Further, dimming of boundary areas (e.g., reducing luminance) where regions adjoin may be used to reduce overlap, blurring, bleeding, unintended reflections, or other imperfections caused by adjacent technologies.
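As a minimal sketch of the boundary dimming described above, the following function linearly reduces luminance in a strip along one edge of a region's output frame; the strip width and the 50% floor are assumptions for illustration.

```python
import numpy as np


def feather_boundary(frame: np.ndarray, boundary_px: int = 16) -> np.ndarray:
    """Dim a strip along the right edge of a region's frame (H x W x 3) so that
    two adjacent regions, each feathered toward their shared border, blend
    rather than double-expose; the neighboring region would mirror this."""
    w = frame.shape[1]
    gain = np.ones(w, dtype=np.float32)
    # Linear ramp from full luminance down to 50% across the boundary strip.
    gain[w - boundary_px:] = np.linspace(1.0, 0.5, boundary_px, dtype=np.float32)
    return (frame.astype(np.float32) * gain[None, :, None]).astype(frame.dtype)
```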
Referring to
The sensor 126 may convert the electromagnetic radiation 128 into a digital form and communicate with the controller 300 over a communications bus. The communications bus may be a controller area network (CAN). The controller 300 may include one or more processors 306 and non-transitory memory 302 with instructions 304 disposed thereon. The instructions 304 may be configured to, upon execution by the one or more processors 306, perform one or more of the steps described herein (e.g., determining, depicting, transforming). For example, the instructions 304 may cause the one or more processors 306 to output a visual representation 120 from the projector 202 or light emitting diodes 220 for depiction on the display 104.
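A minimal sketch of a control loop consistent with the description above, assuming the sensor 126 behaves like a camera device; transform_perspective and depict are simplified stand-ins for the transform and depiction steps, and an on-screen window substitutes for the pane 102.

```python
import cv2          # assumed available; sensor 126 is modeled as a camera device
import numpy as np


def transform_perspective(frame: np.ndarray) -> np.ndarray:
    """Placeholder for the perspective transform sketched later; the identity
    keeps this example self-contained."""
    return frame


def depict(frame: np.ndarray) -> None:
    """Placeholder for the depiction step (projector 202 or LED array 220)."""
    cv2.imshow("pane 102 (stand-in)", frame)


def run_controller(camera_index: int = 0) -> None:
    """Capture radiation reflected from the occupant, convert it to a digital
    representation, and depict the resulting visual representation."""
    sensor = cv2.VideoCapture(camera_index)
    try:
        while sensor.isOpened():
            ok, digital_representation = sensor.read()   # radiation -> digital values
            if not ok:
                break
            depict(transform_perspective(digital_representation))
            if cv2.waitKey(1) & 0xFF == 27:              # Esc exits the sketch
                break
    finally:
        sensor.release()
        cv2.destroyAllWindows()
```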
Referring to
Referring to
Referring to
Referring to
Referring to
The curated data 802, 804 may include a second corpus of images that comprises depictions of the movement of a second body part. For example, the movement may be based on an eye, hand, or another body part indicative of a desired action. The neural network 800 may include an input layer 806 for receiving the images. The input layer may receive an image or stream of images from the curated training data 802, 804 during training, or from sensor 126, 136, 406 during use in a vehicle, to recognize gestures, operations, or selections. The input layer 806 may be concatenated in layer 808 and fed, alone or with other data, to the feature recognition layers 810. The feature recognition layers 810 may be used to recognize features within the images or digital representations and thereby identify one or more gestures. The gesture may be indicative of an operation 812 (e.g., turning of a knob, pressing of an augmented reality button). The operation may turn up the volume, take a picture, start a call, or otherwise provide an interface for the occupant of the vehicle to interact with the vehicle based on the display 104. The gesture may be further indicative of a selection of one or more of the user interface elements across the display 104 or pane 102. For example, pane 102 may be augmented to provide a display 104 on the entire windshield, window, or otherwise, and the combination of eye and hand gestures may be used to control the vehicle with user interface elements 130, 132. With user interface elements 130, 132 across the entire pane 102, the gaze may be used to determine the intended selection 814 between user interface elements 130, 132, and the hand motion may be used to indicate the desired operation 812. The operation 812 and the selection 814 may be executed 816 by the one or more processors to obtain the desired effect. During training, an error between the annotations 818 of the ground truth and the recognized operation 812 and selection 814 may be used to further improve the recognition by the neural network 800 until an acceptable error is obtained.
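One plausible realization of the neural network 800, sketched in PyTorch: shared feature recognition layers feed two classification heads, one for the operation 812 and one for the selection 814, trained against annotations 818. All layer sizes, class counts, and the stand-in training batch are illustrative assumptions, not details from the disclosure.

```python
import torch
import torch.nn as nn


class GestureNet(nn.Module):
    """Shared feature recognition layers (810) with two heads: operation 812
    and selection 814."""

    def __init__(self, num_operations: int = 4, num_selections: int = 2):
        super().__init__()
        self.features = nn.Sequential(            # feature recognition layers 810
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.operation_head = nn.Linear(32, num_operations)   # operation 812
        self.selection_head = nn.Linear(32, num_selections)   # selection 814

    def forward(self, images: torch.Tensor):
        feats = self.features(images)             # images via input layer 806
        return self.operation_head(feats), self.selection_head(feats)


# One training step against annotations 818 of the ground truth.
model = GestureNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)        # stand-in batch of curated images 802, 804
op_labels = torch.randint(0, 4, (8,))     # annotated operations
sel_labels = torch.randint(0, 2, (8,))    # annotated selections

op_logits, sel_logits = model(images)
loss = loss_fn(op_logits, op_labels) + loss_fn(sel_logits, sel_labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```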
Referring to
Step 902 may include additional steps for adjusting the appearance of the visual representation 120. For example, the sensor (e.g., sensor 126, 406) may capture electromagnetic radiation (e.g., radiation 128) indicative of the occupant 320. The sensor may convert the electromagnetic radiation to the digital representation 604. The digital representation 604 may be skewed based on the sensor orientation relative to the occupant 320 and the display orientation. As such, the digital representation 604 and associated pixels 602 may be transformed from a first perspective to a second perspective to form pixels 606. The transform may warp the pixels 602 to the pixels 606 to form the visual representation 120 such that the pixels 606 of visual representation 120 are displayed to appear with a different perspective than the perspective from which the pixels 602 were captured. In such a way, the sensor (e.g., sensor 126, 406) may be located to capture electromagnetic radiation from a different perspective than the electromagnetic radiation emitted from display 104.
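The perspective correction may be expressed as a projective (homography) warp from pixels 602 to pixels 606. The following sketch uses OpenCV; the corner points of the occupant in the captured frame are assumed to come from a detector and are illustrative here.

```python
import cv2
import numpy as np


def correct_perspective(digital_representation: np.ndarray,
                        src_quad: np.ndarray,
                        out_size: tuple[int, int]) -> np.ndarray:
    """Warp the skewed capture (pixels 602) to an upright view (pixels 606).
    src_quad holds four corners of the occupant as seen by the off-axis
    sensor; the destination is an upright rectangle of out_size."""
    w, h = out_size
    dst_quad = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    matrix = cv2.getPerspectiveTransform(np.float32(src_quad), dst_quad)
    return cv2.warpPerspective(digital_representation, matrix, (w, h))


# Example: a capture skewed toward the sensor, rectified for the display.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
corners = [[120, 80], [520, 120], [500, 400], [100, 380]]  # illustrative corners
visual_representation = correct_perspective(frame, np.array(corners), (320, 240))
```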
In step 904, the visual representation 120 may be depicted. For example, the display 104 may be configured to present the visual representation 120 in one or more regions of the one or more panes 102. For example, the visual representation 120 may be depicted using more than one display technology. The depiction may be based on one or more inputs, and the depiction may include various parameters or settings. For example, the parameters may define how the visual representation 120 is depicted (e.g., location, size, luminance, filters) or when the visual representation 120 is depicted (e.g., based on a state of operation of the vehicle 700). The input may be used to determine the parameters.
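By way of example, the depiction parameters could be collected into a structure such as the following; the field names, defaults, and the parked/driving policy are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class DepictionParameters:
    """Illustrative parameters controlling how and when the visual
    representation 120 is depicted; values are assumptions."""
    x: int = 0                 # location on the pane
    y: int = 0
    width: int = 320
    height: int = 240
    luminance: float = 1.0     # 0.0-1.0 relative brightness
    enabled: bool = True       # e.g., suppressed in some states of operation


def parameters_for_state(state_of_operation: str) -> DepictionParameters:
    """Example policy: full luminance when parked, dimmed while driving."""
    if state_of_operation == "parked":
        return DepictionParameters(luminance=1.0)
    return DepictionParameters(luminance=0.4)
```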
The input may be a switch actuation (e.g., button press), received from another device, determined based on a state of the vehicle or surroundings of the vehicle, or otherwise. The input may be information available to the vehicle 700 for influencing operation of the depiction of the visual representation 120. The input may be a gesture of the occupant 320. The gesture may be determined by the controller 300 or sensor (e.g., sensor 126, 406). Artificial intelligence may be used to determine the gesture. For example, a convolutional neural network may be used to determine the presence of a particular gesture. The convolutional neural network may be trained on images or video of gestures. The gesture may be a physical movement of the occupant 320. For example, the gesture may be a facial expression. Facial expressions may include eye movements or a combination of hand or eye movements. For example, the occupant 320 may touch their face or look up as if looking into a mirror, which may trigger the depiction of the visual representation 120 and allow the occupant 320 to examine their face, eyes, hair, other features, or features associated with their person (e.g., clothing). The gaze (e.g., gazes 502, 504, 506) may be monitored to determine the gaze direction and determine whether the occupant 320 is looking forward or upward for use as an input.
The depiction may be adjusted from a first location to a second location in step 906. For example, the display 104 may define a coordinate system (e.g., cartesian) with respect to the one or more panes 102. The visual representation 120 may be adjusted from a first location to a second location. The locations may be associated with the regions 106, 108, 110, 112, 114, 116, 118 or defined as a subset of the regions. The locations may be defined by an origin (e.g., lower left-hand corner, upper right-hand corner) of the visual representation 120. An example first location 140 is shown along with an example second location 142 in
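A minimal sketch of the location adjustment in step 906, assuming a gaze pitch angle (in degrees) is available from the gaze monitoring described above; the coordinates and threshold are illustrative assumptions.

```python
# Locations expressed as (x, y) origins on the pane; values are illustrative.
FIRST_LOCATION = (200, 600)    # first location 140: higher on the pane
SECOND_LOCATION = (200, 200)   # second location 142: lower, nearer the dashboard


def choose_location(gaze_pitch_deg: float, threshold_deg: float = 10.0):
    """Pick where to depict the visual representation 120 based on the
    orientation of the occupant's eye: an upward gaze selects the higher
    first location, a forward gaze the lower second location."""
    return FIRST_LOCATION if gaze_pitch_deg > threshold_deg else SECOND_LOCATION
```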
The radiation (e.g., radiation 128) received by the sensor (e.g., sensor 126, 406) may be generated based on the depiction of the visual representation. For example, the depiction may emit visible light that may reflect off the occupant 320 and be further received or captured by the sensor (e.g., sensor 126, 406).
In
In
In step 1102, the visual output may be caused. The visual output may be based on a display 104, pane 102, or combination thereof. In step 1104, a gesture may be recognized. For example, a neural network 800 may be used to recognize one or more gestures, and the gesture may be used to determine an operation 812 or a selection 814. The selection 814 may be indicative of one or more user interface elements 130, 132. For example, user interface element 130 may be a depiction of a knob, and the gesture may be used to select the knob and perform the operation 812 associated with the knob (e.g., adjust volume). In step 1106, the operation 812 may be executed. For example, the volume may be adjusted based on the gesture. Any operation that impacts vehicle experience or operation is contemplated.
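One simple way to realize steps 1104 and 1106 is a dispatch table from the recognized (selection, operation) pair to an action; the element names, operation names, and stand-in actions below are hypothetical.

```python
def adjust_volume(delta: int) -> None:
    print(f"volume {'+' if delta > 0 else ''}{delta}")   # stand-in for the real action


def take_picture() -> None:
    print("picture taken")                               # stand-in for the real action


# Selection 814 resolves to a user interface element; operation 812 to its action.
HANDLERS = {
    ("user_interface_element_130", "turn_clockwise"): lambda: adjust_volume(+1),
    ("user_interface_element_130", "turn_counterclockwise"): lambda: adjust_volume(-1),
    ("user_interface_element_132", "press"): take_picture,
}


def execute(selection: str, operation: str) -> None:
    """Step 1106: execute the operation recognized in step 1104."""
    handler = HANDLERS.get((selection, operation))
    if handler is not None:
        handler()
```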
In
Unless otherwise expressly indicated herein, all numerical values indicating mechanical/thermal properties, compositional percentages, dimensions and/or tolerances, or other characteristics are to be understood as modified by the word “about” or “approximately” in describing the scope of the present disclosure. This modification is desired for various reasons including industrial practice, material, manufacturing, and assembly tolerances, and testing capability.
As used herein, the phrase at least one of A, B, and C should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR, and should not be construed to mean “at least one of A, at least one of B, and at least one of C.”
In this application, the term “controller” and/or “module” may refer to, be part of, or include: an Application Specific Integrated Circuit (ASIC); a digital, analog, or mixed analog/digital discrete circuit; a digital, analog, or mixed analog/digital integrated circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor circuit (shared, dedicated, or group) that executes code; a memory circuit (shared, dedicated, or group) that stores code executed by the processor circuit; other suitable hardware components (e.g., an op amp circuit integrator) that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip.
The term memory is a subset of the term computer-readable medium. The term computer-readable medium, as used herein, does not encompass transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); the term computer-readable medium may therefore be considered tangible and non-transitory. Non-limiting examples of a non-transitory, tangible computer-readable medium are nonvolatile memory circuits (such as a flash memory circuit, an erasable programmable read-only memory circuit, or a mask read-only memory circuit), volatile memory circuits (such as a static random access memory circuit or a dynamic random access memory circuit), magnetic storage media (such as an analog or digital magnetic tape or a hard disk drive), and optical storage media (such as a CD, a DVD, or a Blu-ray Disc).
The apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general-purpose computer to execute one or more particular functions embodied in computer programs. The functional blocks, flowchart components, and other elements described above serve as software specifications, which can be translated into the computer programs by the routine work of a skilled technician or programmer.
The description of the disclosure is merely exemplary in nature and, thus, variations that do not depart from the substance of the disclosure are intended to be within the scope of the disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure.