This disclosure generally relates to vehicles, and more particularly relates to systems and methods to compute a vehicle dynamic pose for augmented reality tracking.
Despite significant developmental efforts in vehicle automation, many vehicles still require a certain level of human participation to carry out operations that are not fully autonomous. For example, a parking operation performed by a vehicle that is partially autonomous may necessitate certain actions to be carried out by an individual who is standing on a curb or roadside and remotely controlling the movements of the vehicle via a nomadic device.
Before a nomadic device can remotely control a vehicle, the nomadic device must be initialized or authenticated and coupled to the vehicle. The authentication enables operational features of the nomadic device, and its corresponding operating system, to send commands that are executed by the vehicle. Government regulations dictate that such authentication include input from a nearby user to authorize remote control of the vehicle and enable user control to be handed off from vehicle input to nomadic device input. Before such authentication can occur, the nomadic device must perform a handshake operation with the vehicle, informing the vehicle that the nomadic device is nearby and prepared for user engagement.
It is desirable to provide solutions that address the ease of authentication for nomadic devices to remotely control a vehicle.
A detailed description is set forth below with reference to the accompanying drawings. The use of the same reference numerals may indicate similar or identical items. Various embodiments may utilize elements and/or components other than those illustrated in the drawings, and some elements and/or components may not be present in various embodiments. Elements and/or components in the figures are not necessarily drawn to scale. Throughout this disclosure, depending on the context, singular and plural terminology may be used interchangeably.
Overview
In terms of a general overview, this disclosure is generally directed to systems and methods for authorization between a vehicle and a nomadic device that enable user control of the vehicle by matching a pose. An example method includes receiving proximity characteristics at a nomadic device, the proximity characteristics including mapped obstruction data identifying possible locations of one or more obstructions in an area surrounding a vehicle. The method further includes calculating at least one unobstructed pose orientation for establishing a secure connection between the nomadic device and the vehicle based on the proximity characteristics. The method further includes displaying the unobstructed pose orientation on the nomadic device within an array of augmented reality pose orientations.
The disclosure will be described more fully hereinafter with reference to the accompanying drawings, in which example embodiments of the disclosure are shown. This disclosure may, however, be embodied in many different forms and should not be construed as limited to the example embodiments set forth herein. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made to various embodiments without departing from the spirit and scope of the present disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described example embodiments but should be defined only in accordance with the following claims and their equivalents. The description below has been presented for the purposes of illustration and is not intended to be exhaustive or to be limited to the precise form disclosed. It should be understood that alternative implementations may be used in any combination desired to form additional hybrid implementations of the present disclosure. For example, any of the functionality described with respect to a particular device or component may be performed by another device or component. Furthermore, while specific device characteristics have been described, embodiments of the disclosure may relate to numerous other device characteristics. Further, although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the disclosure is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the embodiments.
It should also be understood that the word “example” as used herein is intended to be non-exclusionary and non-limiting in nature. Furthermore, certain words and phrases that are used herein should be interpreted as referring to various objects and actions that are generally understood in various forms and equivalencies by persons of ordinary skill in the art. For example, the word “application” or the phrase “software application” as used herein with respect to a nomadic device such as a smartphone, refers to code (software code, typically) that is installed in the nomadic device. The code may be launched and operated via a human machine interface (HMI) such as a touchscreen. The word “action” may be used interchangeably with words such as “operation” and “maneuver” in the disclosure. The word “maneuvering” may be used interchangeably with the word “controlling” in some instances. The word “vehicle” as used in this disclosure can pertain to any one of various types of vehicles such as cars, vans, sports utility vehicles, trucks, electric vehicles, gasoline vehicles, hybrid vehicles, and autonomous vehicles. Phrases such as “automated vehicle,” “autonomous vehicle,” and “partially-autonomous vehicle” as used in this disclosure generally refer to a vehicle that can perform at least some operations without a driver being seated in the vehicle.
The Society of Automotive Engineers (SAE) defines six levels of driving automation ranging from Level 0 (fully manual) to Level 5 (fully autonomous). These levels have been adopted by the U.S. Department of Transportation. Level 0 (L0) vehicles are manually controlled vehicles having no driving-related automation. Level 1 (L1) vehicles incorporate some features, such as cruise control, but a human driver retains control of most driving and maneuvering operations. Level 2 (L2) vehicles are partially automated, with certain driving operations such as steering, braking, and lane control being controlled by a vehicle computer. The driver retains some level of control of the vehicle and may override certain operations executed by the vehicle computer. Level 3 (L3) vehicles provide conditional driving automation, adding the ability to sense the driving environment and handle certain driving situations. Level 4 (L4) vehicles can operate in a self-driving mode and include features where the vehicle computer takes control during certain types of equipment failures; the level of human intervention is very low. Level 5 (L5) vehicles are fully autonomous vehicles that operate without human participation.
The vehicle computer 110 may perform various functions such as controlling engine operations (fuel injection, speed control, emissions control, braking, etc.), managing climate controls (air conditioning, heating etc.), activating airbags, and issuing warnings (check engine light, bulb failure, low tire pressure, vehicle in a blind spot, etc.).
The vehicle computer 110, in one or more embodiments, may be used to support features such as passive keyless operations, remotely-controlled vehicle maneuvering operations, and remote vehicle monitoring operations. Vehicle computer 110, in one or more embodiments, may execute certain operations associated with remotely-controlled vehicle maneuvering and/or remote vehicle monitoring in accordance with the disclosure.
The wireless communication system may include a set of wireless communication nodes and/or sensors 150a, 150b, 150c, 150d and 150e mounted upon vehicle 130 in a manner that allows the vehicle computer 110 to communicate with devices such as the nomadic device 120. Examples of wireless communication nodes 150a, 150b, 150c and 150e may include sensors and/or emitters capable of detecting objects and distances, such as ultrasonic sensors, radar, LiDAR, cameras, and the like. In one or more embodiments, wireless communication nodes 150a, 150b, 150c, 150d and 150e may further include one or more of Bluetooth® or Bluetooth® low energy (BLE) sensors. Further, in one or more embodiments, wireless communication node data may be enhanced or substituted with cloud-based network data communicated to vehicle 130. In an alternative implementation, a single wireless communication node and/or sensor 150e may be mounted upon the roof of the vehicle 130. The wireless communication system may use one or more of various wireless technologies such as Bluetooth®, Bluetooth® low energy (BLE), Ultra-Wideband (UWB), Wi-Fi, ZigBee®, Li-Fi (light-based communication), audible communication, ultrasonic communication, near-field communication (NFC), and the like, for carrying out wireless communications with devices such as the nomadic device 120.
The vehicle computer 110, and the nomadic device 120 may connect via the communications network 140. The communications network 140 may include any one network, or a combination of networks, such as a local area network (LAN), a wide area network (WAN), a telephone network, a cellular network, a cable network, a wireless network, and/or private/public networks such as the Internet. For example, the communications network 140 may support communication technologies such as TCP/IP, Bluetooth®, cellular, near-field communication (NFC), Wi-Fi, Wi-Fi direct, Li-Fi, acoustic or ultrasonic audio communication, Ultra-Wideband (UWB), machine-to-machine communication, and/or man-to-machine communication.
In one or more embodiments, communications network 140 includes a cellular or Wi-Fi communication link enabling the nomadic device 120 to communicate with network 140, which may include a cloud-based network or source for transferring data in accordance with this disclosure.
A software application may be provided in the nomadic device 120, which allows a user to use the nomadic device 120 for performing remote-control operations such as, for example, locking and unlocking of the doors of the vehicle 130, and for monitoring some actions performed autonomously by the vehicle 130. One example of an action performed autonomously or semi-autonomously by the vehicle 130 is a self-parking operation. The nomadic device 120 may communicate with the vehicle computer 110 via one or more of the set of wireless communication nodes 150a, 150b, 150c, 150d and 150e to perform a self-parking operation. For example, nomadic device 120 may transmit remote-control commands to control some maneuvers performed by the vehicle 130 during the self-parking operation (referred to in the industry as a Remote Park Assist (RePA) operation) as well as other operations such as a trailer-hitch assist operation (referred to in the industry as a Remote Trailer Hitch Assist (ReTHA) operation) and a trailer maneuvering assist operation (referred to in the industry as a Remote Trailer Maneuver Assist (ReTMA) operation).
In an example self-parking procedure, a user who may be driving the vehicle 130 gets out of the vehicle 130 and uses the nomadic device 120, which may be an iPhone®, a handheld device, or a wearable, such as a headset or other hands-free device, to remotely initiate an autonomous parking procedure of the vehicle 130. During the autonomous parking procedure, the vehicle 130 moves autonomously to park itself at a parking spot located near the user. In another case, the vehicle 130 can be an L2 level vehicle that performs a parking maneuver without human assistance. The user remains in contact with the nomadic device 120 and monitors the movement of the vehicle 130 during the parking maneuver, providing oversight similar to that provided when manually driving the vehicle. In one or more embodiments, the user interacts with vehicle 130 using augmented reality tracking. Prior to enabling augmented reality tracking of a vehicle, embodiments provide for secure authentication of a nomadic device to vehicle 130 for augmented reality tracking of vehicle 130.
In the example procedure provided above, the nomadic device 120 includes software and hardware that enables an authorization operation to take place prior to enabling the nomadic device 120 to remotely control vehicle 130 or perform augmented reality tracking. One or more embodiments include matching a vehicle pose shown on nomadic device 120 to a live view shown on the nomadic device 120.
The software application 260 may be downloaded into the nomadic device 120 from an app store. In one or more embodiments, the software application 260 is an augmented reality software application that enables remote control of a vehicle. One example of a remote-control software application is FordPass™. The remote-control software application may be used to carry out various remote-control operations such as, for example, a vehicle self-parking operation, a trailer-hitch assist operation, and/or a trailer maneuvering assist operation.
In one or more embodiments, software application 260 operates to carry out various actions for authorizing the nomadic device 120 to a vehicle.
The transceiver 250 can include a wireless transmitter and/or a wireless receiver that is used to communicate with a transceiver in the vehicle 130. The communications may be carried out by using any of various wireless formats such as, for example, Bluetooth®, Ultra-Wideband (UWB), Wi-Fi, ZigBee®, Li-Fi (light-based communication), audible communication, and ultrasonic communication. The transceiver 250 may be communicatively coupled to various components in the vehicle 130, such as, for example, a system for in-vehicle communications (displaying messages, providing warnings, etc.), and in some embodiments may also be coupled to the wireless communication nodes 150a, 150b, 150c, 150d and 150e for detecting objects or obstructions outside vehicle 130. In one or more embodiments, the wireless communication nodes detect obstructions and map the obstructions in a 360-degree mapping, such as, for example, a computer-aided design (CAD) file for transmission to transceiver 250. In one or more embodiments, the CAD file may instead, or in addition, include a series of data packets containing a data structure representing a 360-degree map. In addition, the CAD file may also be formed as a stream of data packets to be spooled into a CAD data file. As one of ordinary skill in the art will appreciate with the benefit of the present disclosure, a file sent wirelessly may be a compressed data file or another data format appropriate for the different wireless protocols used for transmission, as necessary for system requirements. As used herein, the term “file” includes, without limitation, a series of data packets capable of forming a mapping of obstructions.
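For illustration only, the 360-degree obstruction mapping could be carried as a stream of small data packets that the nomadic device reassembles into a single map. The packet fields and function names below are hypothetical and are not drawn from the disclosure; this is a minimal Python sketch of one possible serialization, not the claimed data format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ObstructionPacket:
    """Hypothetical packet carrying one angular slice of a 360-degree obstruction map."""
    sequence: int        # position of this slice in the spooled stream
    bearing_deg: float   # slice-center bearing, measured from the vehicle heading
    width_deg: float     # angular width covered by the slice
    obstructed: bool     # True if a sensor detected an obstruction in this slice
    range_m: float       # distance to the nearest detected obstruction, if any

def assemble_obstruction_map(packets: List[ObstructionPacket]) -> List[ObstructionPacket]:
    """Reassemble a received packet stream into an ordered 360-degree mapping."""
    return sorted(packets, key=lambda p: p.sequence)
```

An equivalent mapping could just as well be spooled into a CAD data file or a compressed container, as the disclosure notes; the packet form above is only one possible representation.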
The computer 110 in the vehicle 130 may be configured to operate in cooperation with the software application 260 in nomadic device 120 to execute various operations associated with authenticating the nomadic device 120 to vehicle 130 by transmitting, from the vehicle 130, information about obstructions surrounding vehicle 130 in accordance with one or more embodiments.
The memory 220, which is one example of a non-transitory computer-readable medium, may be used to store the operating system (OS) 280, database 270, and various modules such as the software application 260. One or more modules in the form of computer-executable instructions may be executed by the processor 210 for performing various operations in accordance with the disclosure. More particularly, the software application 260 may be executed by the processor 210 for authenticating the nomadic device 120 to the vehicle 130 to enable augmented reality tracking for carrying out various operations upon the vehicle 130 (self-parking operation, trailer-hitch assist operation, trailer maneuvering assist operation, etc.). The software application 260, in one or more embodiments, may be executed for performing augmented reality procedures and visually illustrating poses on the nomadic device 120 in accordance with the disclosure.
Referring to
Referring to
Block 520 provides for calculating at least one unobstructed pose orientation for establishing a secure connection between the nomadic device and the vehicle based on the proximity characteristics, the unobstructed pose orientation being determined based at least in part on the mapped obstruction data. For example, nomadic device 120 may receive mapped obstruction data from vehicle 130 to enable nomadic device 120 to perform calculations, such as running an algorithm to generate an array, such as array 400 shown in
For example, in one or more embodiments, the array 400 is an array surrounding the vehicle based on the mapped obstruction data, and the calculating at least one unobstructed pose orientation may include identifying one or more unobstructed positions, such as 420, 422, within the array 400 surrounding the vehicle using the mapped obstruction data. In one or more embodiments, the identifying one or more unobstructed positions within the array surrounding the vehicle using the mapped obstruction data includes sequentially correlating the defined array 400 surrounding the vehicle with the identified unobstructed positions within the array, and selecting a most-likely-to-be-unobstructed static pose orientation from the one or more unobstructed positions within the defined array. For example, of poses 420 and 422, the nomadic device may determine that one of the poses is closer to the user of the nomadic device and select the closer pose.
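Purely as an illustration of this selection step, the Python sketch below builds an array of candidate pose positions around the vehicle, drops candidates whose bearings fall near mapped obstructions, and picks the remaining candidate closest to the user. The candidate spacing, the clearance margin, and the function names are assumptions made for the example, not the claimed algorithm.

```python
import math
from typing import List, Optional, Tuple

Pose = Tuple[float, float, float]  # (x_m, y_m, bearing_deg relative to the vehicle)

def _angular_difference(a: float, b: float) -> float:
    """Smallest absolute difference between two bearings, accounting for wrap-around."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def candidate_poses(vehicle_xy: Tuple[float, float],
                    radius_m: float = 3.0,
                    count: int = 16) -> List[Pose]:
    """Build an array of evenly spaced candidate pose positions around the vehicle."""
    vx, vy = vehicle_xy
    poses = []
    for i in range(count):
        bearing = 360.0 * i / count
        rad = math.radians(bearing)
        poses.append((vx + radius_m * math.cos(rad),
                      vy + radius_m * math.sin(rad),
                      bearing))
    return poses

def select_unobstructed_pose(poses: List[Pose],
                             obstructed_bearings: List[float],
                             user_xy: Tuple[float, float],
                             margin_deg: float = 15.0) -> Optional[Pose]:
    """Correlate the candidate array with mapped obstruction bearings and return
    the unobstructed candidate closest to the user, or None if all are blocked."""
    clear = [p for p in poses
             if all(_angular_difference(p[2], b) > margin_deg for b in obstructed_bearings)]
    if not clear:
        return None
    ux, uy = user_xy
    return min(clear, key=lambda p: math.hypot(p[0] - ux, p[1] - uy))
```

For instance, with obstructions mapped near bearings of 90° and 180°, the selection falls back to a clear candidate on the user's side of the vehicle, mirroring the choice between poses 420 and 422 described above.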
Block 530 provides for displaying the unobstructed pose orientation on the nomadic device within an array of augmented reality pose orientations on the nomadic device. For example, after generating array 400, nomadic device 120 may display the array to identify areas around vehicle 130 that are more accessible to a user of nomadic device 120, enabling a vehicle pose authentication to occur.
In one or more embodiments, the displaying the unobstructed pose orientation of block 530 includes displaying the unobstructed pose orientation as a visual augmented reality instruction to a user of the nomadic device 120, the instruction providing directions to the user. For example, in one embodiment, displaying the unobstructed pose orientation may include displaying the unobstructed pose orientation as an animated overlay in an augmented reality view on the nomadic device 120, the overlay including a surface model of the vehicle 130 and an icon that indicates where a user of the nomadic device 120 should stand within the unobstructed pose orientation. In some examples, the nomadic device may be a headset with binocular vision and a camera that presents an augmented reality overlay over what is seen by a user of the nomadic device.
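As one hedged illustration of how such a "stand here" icon might be anchored in the live view, the snippet below applies a standard pinhole projection to map a point in camera coordinates to a pixel location. The intrinsics (fx, fy, cx, cy), the camera-frame convention, and the function name are assumptions for the example; in practice this placement would typically come from the device's AR framework rather than hand-rolled math.

```python
from typing import Optional, Tuple

def project_to_screen(point_cam: Tuple[float, float, float],
                      fx: float, fy: float,
                      cx: float, cy: float) -> Optional[Tuple[float, float]]:
    """Project a point in camera coordinates (x right, y down, z forward) to pixel
    coordinates with a pinhole model; returns None if the point lies behind the
    camera and the overlay icon should not be drawn."""
    x, y, z = point_cam
    if z <= 0.0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)
```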
In one or more embodiments, the displaying the unobstructed pose orientation of block 530 includes establishing a level of engagement as a part of a vehicle self-parking operation. For example, a level of engagement could be a Level 2 automation level.
Referring now to
Block 620 provides for calculating at least one unobstructed pose orientation for establishing a secure connection between the nomadic device and the vehicle based on the mapped obstruction data. For example, nomadic device 120 may perform an algorithm after receiving mapped obstruction data from vehicle 130 that determines at least one unobstructed pose orientation.
Block 630 provides for displaying the unobstructed pose orientation on the nomadic device within an array of augmented reality pose orientations on the nomadic device. For example, a display on nomadic device 120 can include augmented reality pose orientations in the form of an array surrounding vehicle 130.
Block 640 provides for displaying an interactive overlay as a direction to a user of the nomadic device, the interactive overlay coaching the user to move toward at least one unobstructed pose orientation. For example, in one or more embodiments, nomadic device 120 includes a display that includes an array and an interactive overlay that provides directions or instructions to a user, such as arrows or a map directing the user toward a most accessible pose location for authenticating the nomadic device 120 to vehicle 130.
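To make the coaching step concrete, the sketch below computes the remaining distance and the relative bearing from the user's current position and heading to the selected pose location, which an overlay could render as an arrow and a distance label. The coordinate conventions and the function name are assumptions for illustration only, not part of the disclosed method.

```python
import math
from typing import Tuple

def coaching_direction(user_xy: Tuple[float, float],
                       pose_xy: Tuple[float, float],
                       user_heading_deg: float) -> Tuple[float, float]:
    """Return (distance_m, relative_bearing_deg) from the user to the target pose.
    The relative bearing is measured from the user's current heading and falls in
    [-180, 180), so its sign tells the overlay which way the arrow should turn
    under the assumed counter-clockwise-positive convention."""
    dx = pose_xy[0] - user_xy[0]
    dy = pose_xy[1] - user_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))
    relative = (bearing - user_heading_deg + 180.0) % 360.0 - 180.0
    return distance, relative
```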
In the above disclosure, reference has been made to the accompanying drawings, which form a part hereof, which illustrate specific implementations in which the present disclosure may be practiced. It is understood that other implementations may be utilized, and structural changes may be made without departing from the scope of the present disclosure. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” “example implementation,” etc., indicate that the embodiment or implementation described may include a particular feature, structure, or characteristic, but every embodiment or implementation may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment or implementation. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment or implementation, one skilled in the art will recognize such feature, structure, or characteristic in connection with other embodiments or implementations whether or not explicitly described. For example, various features, aspects, and actions described above with respect to an autonomous parking maneuver are applicable to various other autonomous maneuvers and must be interpreted accordingly.
Implementations of the systems, apparatuses, devices, and methods disclosed herein may comprise or utilize one or more devices that include hardware, such as, for example, one or more processors and system memory, as discussed herein. An implementation of the devices, systems, and methods disclosed herein may communicate over a computer network. A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or any combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links, which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of non-transitory computer-readable media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause the processor to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
A memory device such as the memory 320, the memory 420, and the memory 520, can include any one memory element or a combination of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)) and non-volatile memory elements (e.g., ROM, hard drive, tape, CDROM, etc.). Moreover, the memory device may incorporate electronic, magnetic, optical, and/or other types of storage media. In the context of this document, a “non-transitory computer-readable medium” can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: a portable computer diskette (magnetic), a random-access memory (RAM) (electronic), a read-only memory (ROM) (electronic), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory) (electronic), and a portable compact disc read-only memory (CD ROM) (optical). Note that the computer-readable medium could even be paper or another suitable medium upon which the program is printed, since the program can be electronically captured, for instance, via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
Those skilled in the art will appreciate that the present disclosure may be practiced in network computing environments with many types of computer system configurations, including in-dash vehicle computers, personal computers, desktop computers, laptop computers, message processors, nomadic devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, various storage devices, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by any combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both the local and remote memory storage devices.
Further, where appropriate, the functions described herein can be performed in one or more of hardware, software, firmware, digital components, or analog components. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein. Certain terms are used throughout the description and claims to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name but not in function.
At least some embodiments of the present disclosure have been directed to computer program products comprising such logic (e.g., in the form of software) stored on any computer-usable medium. Such software, when executed in one or more data processing devices, causes a device to operate as described herein.
While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the present disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described example embodiments but should be defined only in accordance with the following claims and their equivalents. The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the present disclosure to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Further, it should be noted that any or all of the aforementioned alternate implementations may be used in any combination desired to form additional hybrid implementations of the present disclosure. For example, any of the functionality described with respect to a particular device or component may be performed by another device or component. Further, while specific device characteristics have been described, embodiments of the disclosure may relate to numerous other device characteristics. Further, although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the disclosure is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the embodiments. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments could include, while other embodiments may not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments.