One aspect of the embodiments relates to a navigation technology that guides a route to a destination.
Some navigation apparatuses that guide a route to a destination for drivers of vehicles such as automobiles and motorcycles use a head-up display that displays a guidance image, such as a route guidance arrow, in a display area through which a route, such as a road in the external world of the vehicle, can be viewed (see-through displayed). Since the guidance image is superimposed on the route in the external world, the driver can check the route while viewing the external world.
The apparatus disclosed in Japanese Patent Laid-Open No. 2018-173399, in a case where an arrow indicating a turn, such as a left turn at a position ahead of a road sign in the external world viewable through the display area, would be superimposed on the road sign, does not display the portion of the arrow that overlaps the sign, thereby facilitating understanding that the turning position is located ahead of the sign.
However, a driver may not be able to view a post-turn route or destination due to buildings or other obstructions. In this case, the driver has difficulty in checking whether to actually turn as indicated by the arrow displayed in the display area, and consequently may not be able to turn smoothly or may pass the turning position.
A display processing apparatus according to one aspect of the embodiment includes a memory storing instructions, and a processor that executes the instructions to display, in a display area superimposed on an external world, a guidance image configured to guide a user along a route to a destination area so that the guidance image is superimposed on the external world, acquire three-dimensional (3D) information on an object existing between the user and the destination area after the user specifies the destination area in a captured image of a location including the destination area, and generate, using the 3D information, the guidance image including a shielded area image that displays the route and the destination area that are shielded from the user by the object. A movable apparatus, a wearable apparatus, or a terminal device having the above display processing apparatus also constitutes another aspect of the embodiment. A display processing method corresponding to the above display processing apparatus also constitutes another aspect of the embodiment.
Further features of the disclosure will become apparent from the following description of embodiments with reference to the attached drawings.
In the following, the term “unit” may refer to a software context, a hardware context, or a combination of software and hardware contexts. In the software context, the term “unit” refers to a functionality, an application, a software module, a function, a routine, a set of instructions, or a program that can be executed by a programmable processor such as a microprocessor, a central processing unit (CPU), or a specially designed programmable device or controller. A memory contains instructions or programs that, when executed by the CPU, cause the CPU to perform operations corresponding to units or functions. In the hardware context, the term “unit” refers to a hardware element, a circuit, an assembly, a physical structure, a system, a module, or a subsystem. Depending on the specific embodiment, the term “unit” may include mechanical, optical, or electrical components, or any combination thereof. The term “unit” may include active (e.g., transistors) or passive (e.g., capacitors) components. The term “unit” may include semiconductor devices having a substrate and other layers of materials having various concentrations of conductivity. It may include a CPU or a programmable processor that can execute a program stored in a memory to perform specified functions. The term “unit” may include logic elements (e.g., AND, OR) implemented by transistor circuits or any other switching circuits. In the combination of software and hardware contexts, the term “unit” or “circuit” refers to any combination of the software and hardware contexts as described above. In addition, the term “element,” “assembly,” “component,” or “device” may also refer to “circuit” with or without integration with packaging materials.
Referring now to the accompanying drawings, a description will be given of embodiments.
The GPS sensor 33 acquires self-location information on an automobile 20 (that is, a driver as a user) and inputs it into the display processing apparatus 30.
The display device 34 is a head-up display that projects and displays a guidance image, which will be described below, onto a display area set on the windshield of the automobile 20, through which the external world of the vehicle can be viewed, so that the display area is superimposed on the external world. The driver can view the displayed guidance image while viewing the outside of the vehicle (external world) through the windshield. Thereby, the driver can drive the automobile 20 while viewing the guidance image superimposed on the route, such as a road, in the external world, and can thus be provided with route guidance. The display device 34 may instead be one that projects and displays the guidance image onto a display area set on a transparent member placed between the windshield and the driver so that the display area is superimposed on the external world.
The on-board camera 35 moves together with the automobile 20 (that is, the driver) and images the external world. The imaging angle of view of the on-board camera 35 is set in accordance with the driver's field of view (FOV) relative to the external world. The external world image obtained by the on-board camera 35 is used to set the position, size, and the like of the display area for the guidance image on the windshield displayed by the display device 34.
The navigation system 10 is used by a driver who downloads captured images of various locations viewable on websites on the Internet to a terminal device 50 such as a smartphone or a tablet computer. The captured images include images captured by an external camera (such as a surveillance camera or a camera installed on an aircraft or a satellite). From among locations included in the captured image downloaded to the terminal device 50 (such as an entire facility represented by a single address), the driver can specify, as a region of interest (ROI), a destination area (such as a parking lot) toward which the automobile 20 is heading. Information such as the position (coordinates) and shape of the specified destination area is input from the terminal device 50 to the display processing apparatus 30 through communication using Bluetooth (registered trademark) or the like.
The display processing apparatus 30 can acquire, via communication, three-dimensional map data of the above various locations stored in the external server 60. The three-dimensional map data may be, for example, data published as a three-dimensional city model by Project PLATEAU (trademark) in Japan.
The display processing apparatus 30 includes a computer (such as an Electronic Control Unit (ECU)) including at least one processor, a memory, and the like, and has a three-dimensional information acquiring unit 31 as an information acquiring unit and an image generator 32 as an image processing unit. The three-dimensional information acquiring unit 31 specifies an object, such as a building or a signboard, existing between the automobile 20 and the destination area on the three-dimensional map data, based on the user's position information obtained from the GPS sensor 33 and the position information on the destination area. The three-dimensional information acquiring unit 31 then acquires three-dimensional information on the specified object from the three-dimensional map data.
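The following is a minimal Python sketch of this object-specifying step, assuming that the three-dimensional map data has already been parsed into per-object footprint bounding boxes; the MapObject record and its field names are hypothetical illustrations, not part of the embodiments.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MapObject:
    """Hypothetical record parsed from 3D city-model data (e.g., one building)."""
    object_id: str
    min_xy: Tuple[float, float]   # footprint bounding box, map coordinates [m]
    max_xy: Tuple[float, float]
    height: float                 # structure height [m]

def _segment_hits_box(p, q, lo, hi) -> bool:
    """2D slab test: does the segment p->q intersect the axis-aligned box [lo, hi]?"""
    t_min, t_max = 0.0, 1.0
    for axis in range(2):
        d = q[axis] - p[axis]
        if abs(d) < 1e-9:
            if not (lo[axis] <= p[axis] <= hi[axis]):
                return False
            continue
        t0 = (lo[axis] - p[axis]) / d
        t1 = (hi[axis] - p[axis]) / d
        t0, t1 = min(t0, t1), max(t0, t1)
        t_min, t_max = max(t_min, t0), min(t_max, t1)
        if t_min > t_max:
            return False
    return True

def objects_between(vehicle_xy, destination_xy, map_objects: List[MapObject]) -> List[MapObject]:
    """Return map objects whose footprints lie on the line from the vehicle to the destination."""
    return [obj for obj in map_objects
            if _segment_hits_box(vehicle_xy, destination_xy, obj.min_xy, obj.max_xy)]
```

In practice, the query could also be restricted to a corridor wider than this single line of sight; the sketch only shows the basic geometric selection.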
The image generator 32 generates a guidance image. The guidance image includes a visible area image that is displayed so as to be superimposed on the route viewable by the driver in the external world, and a shielded area image that is displayed so as to be superimposed on an object (shield or obstruction) to indicate a route or destination area that is shielded by the object and is not visible to the driver. The image generator 32 generates the shielded area image based on the three-dimensional information on the shield acquired by the three-dimensional information acquiring unit 31.
The image generator 32 detects the route and shields in the external world image acquired by the on-board camera 35 by template matching, AI processing using a machine learning model, or the like. The display position and display size of the visible area image and the shielded area image in the display area (on the windshield) displayed by the display device 34 are set in accordance with the positions and sizes of the detected route and shields.
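As a non-limiting illustration, the mapping from a position detected in the camera image to a display position could be sketched as follows in Python, under the simplifying assumption that the display area covers the same field of view as the camera image; a real system would instead use a calibrated camera-to-windshield projection, and all names here are hypothetical.

```python
from typing import Tuple

BBox = Tuple[float, float, float, float]  # (x, y, width, height)

def camera_bbox_to_display(bbox: BBox,
                           camera_size: Tuple[int, int],
                           display_origin: Tuple[float, float],
                           display_size: Tuple[float, float]) -> BBox:
    """Map a bounding box detected in the camera image (pixels) to display-area
    coordinates, assuming the display area spans the same field of view as the
    camera image (a calibrated projection would replace this in practice)."""
    cam_w, cam_h = camera_size
    disp_w, disp_h = display_size
    x, y, w, h = bbox
    sx, sy = disp_w / cam_w, disp_h / cam_h
    return (display_origin[0] + x * sx,
            display_origin[1] + y * sy,
            w * sx,
            h * sy)
```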
As illustrated in
In a case where the object 85 is determined to be a shield, a shielded area image illustrating a map 75 of the destination area 80 and a map of the nearby route is displayed as the guidance image so that the shielded area image is superimposed on the map 71 of the object 85 in the display area corresponding to the FOV surface 70. Thereby, the driver can visually recognize the destination area 80 and the nearby route.
As illustrated by the broken line in
On the other hand,
By starting to display the arrow image 131 and the frame image 132 before the user turns (turns left) near the destination area 80, the driver can be smoothly guided to the destination area 80 without hesitating over whether or not to turn. In particular, the arrow image 131 and the frame image 132 may be displayed before the user turns multiple times near the destination area 80.
The colors of the wall 104 and its surroundings may be detected from the external world image from the on-board camera 35, and the arrow image 131 and the frame image 132 may be displayed in colors different from those of the wall 104 and its surroundings. For example, the arrow image 131 and the frame image 132 may be displayed in a color complementary to the average color of the wall 104 and its surroundings.
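A simple Python sketch of this color selection is shown below: it averages sampled surrounding pixels and returns the complementary RGB color. The sample values are illustrative only.

```python
def complementary_to_average(pixels):
    """Return the RGB color complementary to the average color of the given
    pixels (e.g., pixels sampled around the wall 104 in the external world image)."""
    n = len(pixels)
    avg = tuple(sum(p[c] for p in pixels) / n for c in range(3))
    return tuple(int(round(255 - v)) for v in avg)

# Example: mostly light-gray wall pixels yield a dark complementary color.
print(complementary_to_average([(200, 198, 205), (190, 192, 199), (210, 208, 212)]))
```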
In
In
Similarly to
Similarly to
A flowchart in
A program (application) illustrated in
In step S1 of
Next, in step S3, the image generator 32 identifies a field of view (FOV) that the driver is estimated to view, based on the imaging angle of view of the on-board camera 35 and the external world image, and sets, according to the field of view, the position and size of the display area in which the display device 34 can display the guidance image.
Next, in step S4, the image generator 32 determines whether or not the destination area enters the field of view, based on the user's vehicle position information from the GPS sensor 33 and the field-of-view information specified in step S3. Here, “the destination area enters the field of view” means that the destination area is included in the field of view of the user's vehicle on the two-dimensional map, regardless of whether the destination area is actually visible to the driver. In a case where the destination area enters the field of view, the flow proceeds to step S5, and in a case where the destination area does not enter the field of view, the flow proceeds to step S8.
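The determination of step S4 could, for example, be sketched as the following Python function, which checks only the two-dimensional geometry (bearing and distance) and deliberately ignores visibility; the maximum range and parameter names are assumptions for illustration.

```python
import math

def destination_in_fov(vehicle_xy, heading_deg, destination_xy,
                       fov_deg, max_range_m=500.0) -> bool:
    """Step S4 sketch: True if the destination lies within the horizontal field of
    view of the user's vehicle on the two-dimensional map, regardless of whether
    it is actually visible (it may still be shielded by an object).
    heading_deg uses the same angular convention as atan2(dy, dx) in degrees."""
    dx = destination_xy[0] - vehicle_xy[0]
    dy = destination_xy[1] - vehicle_xy[1]
    distance = math.hypot(dx, dy)
    if distance > max_range_m:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest signed angle between the vehicle heading and the bearing to the destination.
    diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0
```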
Next, in step S5, the three-dimensional information acquiring unit 31 acquires, from the above three-dimensional map data, three-dimensional information on an object existing between the user's vehicle position, which can be specified from the user's vehicle position information from the GPS sensor 33, and the destination area position, which can be specified from the position information on the destination area.
Next, in step S6, the image generator 32 determines whether an object existing between the user's vehicle and the destination area is a shield that makes the destination area invisible to the driver, based on the user's vehicle position information, the position and shape information on the destination area, and the three-dimensional information acquired in step S5. More specifically, for example, whether or not the object is a shield is determined by checking the space ID of a voxel of the object that can be acquired from the three-dimensional information. In a case where the object is a shield, the flow proceeds to step S7; in a case where the object is not a shield, the flow proceeds to step S8.
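One possible form of this check is sketched below in Python: the line of sight from the driver's eye point to the destination is sampled, and each sample is mapped to a voxel index that stands in for the space ID obtainable from the three-dimensional information. The voxel size, step length, and data structure are assumptions made for the sketch.

```python
import math
from typing import Set, Tuple

Vec3 = Tuple[float, float, float]

def is_destination_shielded(eye: Vec3, destination: Vec3,
                            occupied_voxel_ids: Set[Tuple[int, int, int]],
                            voxel_size: float = 1.0,
                            step: float = 0.5) -> bool:
    """Step S6 sketch: sample the line of sight from the driver's eye point to the
    destination area and report whether any sample falls into a voxel that the
    three-dimensional information marks as occupied by an object (i.e., a shield).
    `occupied_voxel_ids` stands in for the space IDs obtainable from the 3D data."""
    length = math.dist(eye, destination)
    samples = max(1, int(length / step))
    for i in range(1, samples):           # skip the endpoints themselves
        t = i / samples
        point = tuple(e + (d - e) * t for e, d in zip(eye, destination))
        voxel = tuple(int(math.floor(c / voxel_size)) for c in point)
        if voxel in occupied_voxel_ids:
            return True
    return False
```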
In step S7, the image generator 32 generates a guidance image that includes a visible area image to be displayed on the route that is illustrated in the external world image from the on-board camera 35 (visible to the driver) and a shielded area image that displays a route and a destination area that are not illustrated in the external world image (invisible to the driver). Then, the flow proceeds to step S9. At this time, the destination area may be displayed in the visible area image, or only one of the route and the destination area may be displayed in the shielded area image. As described with reference to
On the other hand, in step S8, the image generator 32 generates a guidance image including only the visible area image to be displayed on the route illustrated in the external world image from the on-board camera 35. Then, the flow proceeds to step S9. Again, the destination area may be displayed using the visible area image.
In step S9, the image generator 32 displays the guidance image generated in step S7 or S8 in the display area through the display device 34.
Next, in step S10, the image generator 32 determines whether or not to end the display of the guidance image. More specifically, the image generator 32 determines whether or not the user's vehicle has approached the destination area based on the user's vehicle position information from the GPS sensor 33. In a case where the display of the guidance image is to end because the user's vehicle has approached the destination area, this flow ends; otherwise, the flow returns to step S3.
This embodiment can display the destination area and route that are not visible to the driver due to a shield and thus can provide route guidance that is easier for the driver to understand.
In this embodiment, three-dimensional information on an object is acquired from three-dimensional map data, and it is determined whether the object is a shield. On the other hand, in a case where the user's vehicle has a function of detecting a three-dimensional object using Light Detection and Ranging (LIDAR) or the like, a three-dimensional structure of an object that exists between the user's vehicle and the destination area may be acquired as the three-dimensional information using that function. Then, whether the object shields the destination area may be determined by projecting (projection-transforming) this three-dimensional structure onto the display area.
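The following Python sketch illustrates such a projection-based determination under a simple pinhole-camera assumption: both the detected object points and the destination area are projected onto the display area, and the object is treated as a shield when it overlaps the destination on screen and lies closer to the vehicle. The focal length, margin, and point formats are hypothetical.

```python
from typing import Iterable, Tuple

Vec3 = Tuple[float, float, float]

def project_to_display(point: Vec3, focal_px: float,
                       center: Tuple[float, float]) -> Tuple[float, float]:
    """Pinhole projection of a point in the camera/display frame
    (x right, y down, z forward) onto display-area coordinates."""
    x, y, z = point
    return (center[0] + focal_px * x / z, center[1] + focal_px * y / z)

def object_shields_destination(object_points: Iterable[Vec3],
                               destination_points: Iterable[Vec3],
                               focal_px: float,
                               center: Tuple[float, float],
                               margin_px: float = 5.0) -> bool:
    """Sketch of the modification: project the LIDAR-detected object and the
    destination area onto the display area and report whether the object both
    overlaps the destination on screen and lies in front of it (smaller z)."""
    obj = [(project_to_display(p, focal_px, center), p[2]) for p in object_points if p[2] > 0]
    dst = [(project_to_display(p, focal_px, center), p[2]) for p in destination_points if p[2] > 0]
    if not obj or not dst:
        return False
    dst_depth = min(z for _, z in dst)
    dst_uv = [uv for uv, _ in dst]
    u_min, u_max = min(u for u, _ in dst_uv), max(u for u, _ in dst_uv)
    v_min, v_max = min(v for _, v in dst_uv), max(v for _, v in dst_uv)
    for (u, v), z in obj:
        overlaps = (u_min - margin_px <= u <= u_max + margin_px and
                    v_min - margin_px <= v <= v_max + margin_px)
        if overlaps and z < dst_depth:
            return True
    return False
```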
In the image captured by the external camera, at least one relay point between a departure point and the final destination area (such as a parking lot near an expressway exit for taking a break or checking the destination) may be specified in the same manner as the destination area. In this case, a shielded area image can be displayed near the relay point and near the final destination area.
A shielded area image may be displayed according to a constraint condition on entry into the destination area. For example, in a case where the destination area is a parking lot, the shielded area image may be displayed so as to provide route guidance (parking guidance) based on restrictions on forward or backward parking, restrictions on the size of vehicles that can be parked, and restrictions due to the sizes and situations of vehicles parked in adjacent parking spaces.
A description will now be given of a second embodiment. A flowchart in
In a case where the display processing apparatus 30 determines that the destination area enters the field of view in step S4, the flow proceeds to step S15. In step S15, the three-dimensional information acquiring unit 31 acquires three-dimensional information on each of a plurality of (n) objects that exist between the user's vehicle position and the destination area position, which can be specified from the user's vehicle position information from the GPS sensor 33 and the position information on the destination area.
Next, in step S16, the image generator 32 acquires information on the shape of the portion of the destination area that is shielded (hidden) by the k-th (k = 1 to n) object as viewed from the user's vehicle side. The information on the shape of the shielded portion can be acquired using the user's vehicle position information, the position and shape information on the destination area, and the three-dimensional information acquired in step S15.
Next, in step S17, the image generator 32 determines, based on the information acquired in step S16, whether or not there is a portion of the destination area hidden by the k-th object (that is, whether the object is a shield). In a case where there is such a portion, the flow proceeds to step S18; otherwise, the flow proceeds to step S19.
In step S18, the image generator 32 generates a guidance image that includes a visible area image to be displayed on the route in the external world image from the on-board camera 35 and a shielded area image to be displayed and superimposed on the k-th object (shield). Then, the flow proceeds to step S9.
On the other hand, in step S19, the image generator 32 determines whether or not k is n. In a case where k is n, the flow proceeds to step S8; in a case where k is not n, the flow proceeds to step S20 to increment k by 1. Then, the flow returns to step S16.
In step S8, the image generator 32 generates a guidance image including only the visible area image to be displayed on the route illustrated in the external world image from the on-board camera 35, similarly to step S8 of the first embodiment. Then, the flow proceeds to step S9.
In step S9, the image generator 32 displays the guidance image generated in step S18 or S8 in the display area through the display device 34, similarly to step S9 of the first embodiment. In step S10, similarly to step S10 of the first embodiment, it is determined whether or not to end the display of the guidance image. In a case where the display is not to end, the flow returns to step S3; otherwise, the flow ends.
Thus, in a case where there are a plurality of objects between the user's vehicle and the destination area, this embodiment determines whether each object is a shield in order from the object closest to the user's vehicle. Then, this embodiment displays the shielded area image so that it is superimposed on the object that is first determined to be a shield among the plurality of objects, and displays only the visible area image in a case where none of the objects is a shield.
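A compact Python sketch of this nearest-first examination is given below; hidden_shape_of stands in for the shape computation of step S16, and the object records are assumed to carry a footprint position (for example, the hypothetical MapObject records from the earlier sketch).

```python
import math

def first_shielding_object(vehicle_xy, destination, objects, hidden_shape_of):
    """Second-embodiment sketch (steps S15-S20): examine the objects between the
    user's vehicle and the destination in order of increasing distance from the
    vehicle and return the first one that hides part of the destination area, or
    None if no object is a shield. `hidden_shape_of(obj, destination)` returns an
    empty value when nothing is hidden (step S16)."""
    ordered = sorted(objects,
                     key=lambda obj: math.dist(vehicle_xy, obj.min_xy))
    for obj in ordered:                        # k = 1 ... n, nearest first
        if hidden_shape_of(obj, destination):  # step S17: part of the area is hidden
            return obj                         # -> generate shielded area image (S18)
    return None                                # no shield -> visible area image only (S8)
```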
Even in a case where the driver cannot view the destination area or route because one of the plurality of objects between the user's vehicle and the destination area acts as a shield, this embodiment can display them. Thereby, this embodiment can provide route guidance that can be easily understood by the driver.
In each of the above embodiments, the display processing apparatus is installed in an automobile, but it may also be installed in various movable apparatuses other than automobiles (such as a ship or an aircraft).
The display processing apparatus may be installed in a wearable apparatus that a user wears in front of his eyes, such as a glasses-type device using the augmented reality (AR) technology. As illustrated in
Each of the above embodiments has discussed a guidance image displayed and superimposed on the external world in a display area that is superimposed on the actual external world. However, the superimposition on the external world is not limited to superimposition on the actual external world but also includes superimposition on an external world image of the vehicle generated by imaging. That is, as illustrated in
Embodiment(s) of the disclosure can also be realized by a computer of a system or apparatus that reads out and executes computer-executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer-executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer-executable instructions. The computer-executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read-only memory (ROM), a storage of distributed computing systems, an optical disc (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the disclosure has been described with reference to embodiments, it is to be understood that the disclosure is not limited to the disclosed embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
Each embodiment can provide route guidance that can be easily understood by a user even if the user cannot view the route or destination area due to a shield.
This application claims the benefit of Japanese Patent Application No. 2022-185311, filed on Nov. 18, 2022, which is hereby incorporated by reference herein in its entirety.