The disclosure of Japanese Patent Application No. 2009-157473, which was filed on Jul. 2, 2009, is incorporated herein by reference.
1. Field of the Invention
The present invention relates to an image processing apparatus. More particularly, the present invention relates to an image processing apparatus which displays on a screen, together with navigation information, an image representing an object scene captured by cameras arranged in a moving body.
2. Description of the Related Art
According to one example of this type of apparatus, the scenery in an advancing direction of an automobile is captured by a camera attached to the nose of the automobile. An image combiner combines a navigation information element with an actually photographed image captured by the camera, and displays the combined image on a display device. This enables a driver to grasp more intuitively a current position or an advancing path of the automobile.
However, the actually photographed image combined with the navigation information element merely represents the scenery in the advancing direction of the automobile. Thus, the above-described apparatus is limited in its steering assisting performance.
An image processing apparatus according to the present invention comprises: a plurality of cameras which are arranged at respectively different positions of a moving body moving on a reference surface and which output object scene images representing a surrounding area of the moving body; a first creator which creates a bird's-eye view image relative to the reference surface, based on the object scene images outputted from the plurality of cameras; a first displayer which displays the bird's-eye view image created by the first creator, on a monitor screen; a detector which detects a location of the moving body, in parallel with a creating process of the first creator; a second creator which creates navigation information based on a detection result of the detector and map information; and a second displayer which displays on the monitor screen the navigation information created by the second creator, in association with a displaying process of the first displayer.
The above described features and advantages of the present invention will become more apparent from the following detailed description of the embodiment when taken in conjunction with the accompanying drawings.
With reference to
The bird's-eye view image is created based on output from the plurality of cameras 1, 1, . . . arranged at the respectively different positions of the moving body, and reproduces the surrounding area of the moving body. The navigation information created based on the location of the moving body and the map information is displayed on the monitor screen 7 together with such a bird's-eye view image. This enables confirmation of both the safety of the surrounding area of the moving body and the navigation information on the same screen, thereby improving steering assisting performance.
A steering assisting apparatus 10 of this embodiment shown in
With reference to
As shown in
Returning to
As can be seen from
Subsequently, the CPU 12p defines cut-out lines CT_0 to CT_3 corresponding to a reproduction block BLK shown in
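The embodiment leaves the composition of the cut-out regions abstract. As a minimal illustrative sketch (the image sizes, camera indices, and region bounds below are assumptions, not taken from the embodiment), pasting each camera's bird's-eye image into its own cut-out region of a common reproduction block could look like this:

```python
# Hypothetical sketch: compose one bird's-eye view from four per-camera
# bird's-eye images by copying only the pixels inside each camera's
# cut-out region.  All sizes and bounds are illustrative.

def compose_birds_eye(camera_images, regions, size):
    """camera_images: dict cam_id -> 2D list of pixel values
    regions: dict cam_id -> (row0, row1, col0, col1), half-open bounds
    size: (rows, cols) of the composite reproduction block"""
    rows, cols = size
    composite = [[0] * cols for _ in range(rows)]
    for cam_id, (r0, r1, c0, c1) in regions.items():
        img = camera_images[cam_id]
        for r in range(r0, r1):
            for c in range(c0, c1):
                # each camera contributes only its own cut-out region
                composite[r][c] = img[r][c]
    return composite
```

In practice the regions would be chosen so that each camera supplies the part of the surrounding area it sees best, with the cut-out lines as the boundaries.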
In parallel with a process for creating such a drive assisting image ARV, the CPU 12p detects a current position or location of the vehicle 100 based on output of a GPS device 20, and further determines whether a display mode at a current time point is a parallel display mode or a multiple display mode. It is noted that the display mode can be switched between the parallel display mode and the multiple display mode in response to a mode switching operation on an operation panel 28.
If the display mode at a current time point is the parallel display mode, then the CPU 12p creates a wide-area map image MP1 representing the current position of the vehicle 100 and its surrounding area, based on map data saved in a database 22. The created wide-area map image MP1 is developed on a right side of a display area 14m formed on the memory 14, as shown in
A display device 24 set onto a driver's seat of the vehicle 100 repeatedly reads out the wide-area map image MP1 and the drive assisting image ARV developed in the display area 14m, and displays the read-out wide-area map image MP1 and drive assisting image ARV, on the same screen, as shown in
On the other hand, if the display mode at a current time point is the multiple display mode, then the CPU 12p creates a narrow-area map image MP2 representing the current position of the vehicle 100 and its surrounding area, based on the map data saved in the database 22. The created narrow-area map image MP2 is developed on the whole of the display area 14m, as shown in
Subsequently, the CPU 12p adjusts the magnification of the drive assisting image ARV so as to be adapted to the multiple display mode, detects the orientation of the vehicle 100 at a current time point based on the output of the GPS device 20, and detects road surface paint appearing in the drive assisting image ARV by pattern recognition. An overlay position of the drive assisting image ARV is determined based on the orientation of the vehicle 100 and the road surface paint, and the drive assisting image ARV having the adjusted magnification is overlaid onto the determined overlay position, as shown in
More particularly, the magnification of the drive assisting image ARV is adjusted so that a width of the road surface on the drive assisting image ARV matches a width of the road surface on the narrow-area map image. Moreover, the overlay position of the drive assisting image ARV is adjusted so that the road surface paint on the drive assisting image ARV aligns with the road surface paint on the narrow-area map image. It is noted that the orientation of the vehicle 100 is referred to in order to avoid a situation where a vehicle image G1 is overlaid onto the road surface of an opposite lane on the narrow-area map image.
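Reduced to one dimension, the width-matching and paint-alignment rules above amount to computing a scale and a translation. The following is a hypothetical sketch (the function and parameter names are assumptions, not from the embodiment):

```python
# Hypothetical sketch: fit the drive-assist image onto the narrow-area map.
# The scale equalizes the two road widths; the offset then moves the scaled
# road-surface-paint position onto the map's road-surface-paint position.

def fit_overlay(image_road_width, map_road_width,
                paint_pos_image, paint_pos_map):
    """Return (scale, offset) so that
    paint_pos_image * scale + offset == paint_pos_map."""
    scale = map_road_width / image_road_width
    offset = paint_pos_map - paint_pos_image * scale
    return scale, offset
```

The vehicle orientation check described above would be applied afterwards, rejecting any fitted position that lands the vehicle image on the opposite lane.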
The display device 24 repeatedly reads out the narrow-area map image MP2 and the drive assisting image ARV which are developed in the display area 14m, and displays the read-out narrow-area map image MP2 and drive assisting image ARV, on the screen.
If an operation for setting a target site is performed on the operation panel 28 shown in
If the display mode at a current time point is the parallel display mode, then the CPU 12p creates route information RT1 indicating the route to the target site in a wide area, and overlays the created route information RT1 onto the wide-area map image MP1 developed in the display area 14m. On the other hand, if the display mode at a current time point is the multiple display mode, then the CPU 12p creates route information RT2 indicating the route to the target site in a narrow area, and overlays the created route information RT2 onto the drive assisting image ARV developed in the display area 14m. The route information RT1 is overlaid as shown in
It is noted that in this embodiment, the wide-area map image MP1, the narrow-area map image MP2, the route information RT1, and the route information RT2 are collectively called “navigation information”.
Furthermore, the CPU 12p refers to the drive assisting image ARV in order to repeatedly search for an obstacle in the surrounding area of the vehicle 100. If an obstacle OBJ is discovered, then the CPU 12p overlays warning information ARM onto the drive assisting image ARV developed in the display area 14m. The warning information ARM is overlaid corresponding to a position of the obstacle OBJ, as shown in
The CPU 12p executes a plurality of tasks including a route control task shown in
With reference to
When YES is determined in the step S3, the process advances to a step S7 so as to detect the current position based on the output of the GPS device 20. In a step S9, based on the detected current position and the map data saved in the database 22, the route to the target site is set. Upon completion of the process in the step S9, the flag FLG is set to “1” in a step S11, and thereafter, the process returns to the step S3.
When YES is determined in the step S5, the process advances to a step S13 so as to cancel the setting of the route to the target site. Upon completion of the process in the step S13, the flag FLG is set to “0” in a step S15, and thereafter, the process returns to the step S3.
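The flag FLG in the route control task behaves as a two-state machine: setting a target site computes a route and raises FLG to "1", and cancelling clears the route and resets FLG to "0". As an illustrative sketch (the class and method names are assumptions, and the route search of the step S9 is replaced by a stand-in):

```python
# Hypothetical sketch of the route-control task's state (steps S1-S15).

class RouteControlTask:
    def __init__(self):
        self.flg = 0       # FLG: 1 while a route to a target site is set
        self.route = None

    def set_target(self, current_position, target_site):
        # stand-in for the route search of the step S9, which would use
        # the map data in the database to find an actual path
        self.route = (current_position, target_site)
        self.flg = 1       # step S11

    def cancel(self):
        self.route = None  # step S13
        self.flg = 0       # step S15
```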
With reference to
In the step S27, the wide-area map image MP1 representing the current position of the vehicle 100 and its surrounding area is created based on the map data saved in the database 22. In a step S29, the created wide-area map image MP1 is developed on the right side of the display area 14m. In a step S31, the magnification of the drive assisting image ARV created in the step S21 is adjusted so as to be adapted to the parallel display mode. In a step S33, the drive assisting image ARV having the adjusted magnification is developed on the left side of the display area 14m. Upon completion of the process in the step S33, the process advances to a step S49.
In the step S35, the narrow-area map image MP2 representing the current position of the vehicle 100 and its surrounding area is created based on the map data saved in the database 22. In a step S37, the created narrow-area map image MP2 is developed on the whole of the display area 14m. In a step S39, the magnification of the drive assisting image ARV created in the step S21 is adjusted so as to be adapted to the multiple display mode.
In a step S41, the orientation of the vehicle 100 at a current time point is detected based on the output of the GPS device 20, and in a step S43, the road surface paint appearing in the drive assisting image ARV is detected by the pattern recognition. In a step S45, based on the orientation of the vehicle 100 detected in the step S41 and the road surface paint detected in the step S43, the overlay position of the drive assisting image ARV is determined. In a step S47, the drive assisting image ARV having the magnification adjusted in the step S39 is overlaid onto the position determined in the step S45. Upon completion of the process in the step S47, the process advances to the step S49.
In the step S49, it is determined whether or not the flag FLG indicates “1”. When a determined result is NO, the process directly advances to the step S61, and when the determined result is YES, the process advances to the step S61 after undergoing steps S51 to S59.
In the step S51, it is determined whether the display mode at a current time point is the parallel display mode or the multiple display mode. If the display mode at a current time point is the parallel display mode, then the process advances to the step S53 in order to create the route information RT1 indicating the route to the target site in a wide area. In the step S55, the created route information RT1 is overlaid onto the wide-area map image MP1 developed in the step S29. If the display mode at a current time point is the multiple display mode, then the process advances to the step S57 in order to create the route information RT2 indicating the route to the target site in a narrow area. In the step S59, the created route information RT2 is overlaid onto the drive assisting image ARV developed in the step S47.
In the step S61, it is determined whether or not there is an obstacle OBJ in the surrounding area of the vehicle 100. When a determined result is NO, the process directly returns to the step S21, whereas when the determined result is YES, the process returns to the step S21 after overlaying the warning information ARM onto the drive assisting image ARV in a step S63. The warning information ARM is overlaid onto the drive assisting image ARV, corresponding to the position of the obstacle OBJ.
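One pass of the steps S21 to S63 can be summarized, purely as an illustration, by a function returning the layers drawn into the display area (the layer labels below are invented for the sketch, not taken from the embodiment):

```python
# Hypothetical sketch of one iteration of the display control task.

def display_iteration(mode, route_set, obstacle):
    """mode: "parallel" or "multiple"; route_set mirrors FLG == 1;
    obstacle mirrors the determination of the step S61."""
    if mode == "parallel":
        layers = ["map_MP1_right", "ARV_left"]          # steps S27-S33
        if route_set:
            layers.append("route_RT1_on_MP1")           # steps S53-S55
    else:  # multiple display mode
        layers = ["map_MP2_full", "ARV_overlaid"]       # steps S35-S47
        if route_set:
            layers.append("route_RT2_on_ARV")           # steps S57-S59
    if obstacle:
        layers.append("warning_ARM")                    # step S63
    return layers
```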
As can be seen from the above-described explanation, the cameras CM_0 to CM_3 are arranged at the respectively different positions of the vehicle 100 moving on the road surface, and output the object scene images P_0 to P_3 representing the surrounding area of the vehicle 100. The CPU 12p creates the drive assisting image ARV based on the outputted object scene images P_0 to P_3 (S21), and displays the created drive assisting image ARV on the screen of the display device 24 (S31 to S33, and S39 to S47). Moreover, the CPU 12p detects the location of the vehicle 100, in parallel with the process for creating the drive assisting image ARV (S23), creates the navigation information (the map image and the route information) based on the detected location and the map data in the database 22 (S27, S35, S53, S57), and displays the created navigation information on the screen of the display device 24 (S29, S37, S55, S59).
The drive assisting image ARV is created based on the output from the cameras CM_0 to CM_3 arranged at the respectively different positions of the vehicle 100, and reproduces the surrounding area of the vehicle 100. The navigation information created based on the location of the vehicle 100 and the map data is displayed on the display device 24 together with such a drive assisting image ARV. This enables confirmation of both the safety of the surrounding area of the vehicle 100 and the navigation information on the same screen, thereby improving steering assisting performance.
It is noted that in this embodiment, the route information RT1 is overlaid onto the wide-area map image MP1, the drive assisting image ARV is overlaid onto the narrow-area map image MP2, the route information RT2 is overlaid onto the drive assisting image ARV, and the warning information ARM is overlaid onto the drive assisting image ARV. Herein, a transmissivity of the overlaid image is not limited to 0%, and may be appropriately adjusted within a range of 1 to 99%.
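Overlaying at a transmissivity between 1 and 99% is ordinary alpha blending. A per-pixel sketch, assuming scalar (grayscale) pixel values purely for illustration:

```python
# Hypothetical sketch: blend one overlaid pixel onto one underlying pixel.
# transmissivity is the overlaid image's see-through fraction in percent
# (0 = fully opaque overlay, 99 = nearly transparent overlay).

def blend(over, under, transmissivity):
    alpha = 1.0 - transmissivity / 100.0  # opacity of the overlaid image
    return alpha * over + (1.0 - alpha) * under
```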
Moreover, in this embodiment, the vehicle traveling on the road surface is assumed as the moving body. However, the present invention is also applicable to a ship sailing on a sea surface.
Moreover, in this embodiment, the parallel display mode and the multiple display mode alternately selected by the mode switching operation are prepared, and the wide-area map image MP1 and the drive assisting image ARV are displayed in parallel in the parallel display mode while the narrow-area map image MP2 and the drive assisting image ARV are multiple-displayed in the multiple display mode.
However, the following may be optionally arranged: when the vehicle 100 remains away from an intersection at which to turn left or right, the wide-area map image MP1 is displayed on the whole of the monitor screen and the wide-area route information RT1 is overlaid on the wide-area map image MP1 (see
In this case, instead of the process according to the flowcharts shown in
With reference to
When the determined result in the step S75 is YES, the process advances to a step S79 so as to detect the current position of the vehicle 100 based on the output of the GPS device 20. In a step S81, the route information RT1 indicating the route to the target site in a wide area is created. In a step S83, the created route information RT1 is overlaid onto the wide-area map image MP1 developed in the step S77. In a step S85, a distance to a next intersection at which to turn left or right is calculated based on the current position of the vehicle 100, the wide-area map image MP1, and the route information RT1. In a step S87, it is determined whether or not the calculated distance is equal to or less than a threshold value TH (for example, 5 m). When a determined result is NO, the process returns to the step S71 after the process in the step S77; on the other hand, when the determined result is YES, the process advances to a step S89.
In the step S89, the wide-area map image MP1 created in the step S73 is developed on the right side of the display area 14m. In a step S91, the magnification of the drive assisting image ARV created in the step S71 is adjusted. In a step S93, the drive assisting image ARV having the adjusted magnification is developed on the left side of the display area 14m. In a step S95, the route information RT2 indicating the route to the target site in a narrow area is created. In a step S97, the created route information RT2 is overlaid onto the drive assisting image ARV developed in the step S93. Upon completion of the overlay process, the process returns to the step S71.
Thus, the drive assisting image ARV is displayed in parallel on the monitor screen at a timing at which the distance from the vehicle 100 to the intersection at which to turn left or right falls below the threshold value TH (that is, at a timing at which the vehicle 100 is about to enter the intersection). The driver is capable of visually confirming the surrounding area of the vehicle 100 through the monitor screen under a circumstance where confirming the safety of the surrounding area of the vehicle 100 is important, for example, at a time of turning right or left at the intersection. Thus, the drive assisting performance is improved.
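This modified flow amounts to a simple distance-threshold test. As an illustrative sketch (the function name is an assumption; the 5 m default mirrors the example value of the threshold TH above):

```python
# Hypothetical sketch: select the screen layout from the distance to the
# next intersection at which to turn left or right (steps S85-S97).

def choose_layout(distance_to_turn, threshold=5.0):
    """Far from the turn: map alone with wide-area route RT1.
    At or below the threshold: drive-assist image shown in parallel,
    with narrow-area route RT2 overlaid on it."""
    if distance_to_turn <= threshold:
        return "parallel"  # MP1 on the right, ARV on the left, RT2 on ARV
    return "map_only"      # MP1 on the whole screen, RT1 overlaid
```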
Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.
Number | Date | Country | Kind
---|---|---|---
2009-157473 | Jul 2009 | JP | national