The present invention relates to a navigation device, a navigation method, and a vehicle, and more particularly to a navigation device, a navigation method, and a vehicle which present an image taken by an imaging device such as an in-vehicle camera to a driver in order to assist travel of the vehicle.
Conventionally, there is widely known a navigation device which is installed in a vehicle, shows a traveling direction to a driver through a display, and performs route navigation. In such a navigation device, map information is stored in advance in an HDD or a DVD (Digital Versatile Disc). The map information includes CG (Computer Graphics) data concerning road information, junctions such as intersection points, and the like. When the navigation device detects that the vehicle is approaching a junction, it superimposes and draws an arrow indicating the navigation route (the traveling direction) on the CG data concerning the junction, thereby notifying the driver of the course to take.

The CG data concerning the junction is extremely high-definition data and resembles the actual view. However, it differs from the view seen by the driver in various respects: for example, a vehicle ahead of the driver's own vehicle or a newly built facility is not drawn. Thus, an extra load is imposed on the driver in recognizing which position in the actual view the course drawn in the CG data corresponds to.

With respect to such a problem, there is disclosed a technique in which a camera is installed at the front of a vehicle to take an image of the anterior view at a constant imaging magnification, and when it is detected that the vehicle has approached within a predetermined distance of a branch intersection point, a navigation arrow whose size depends on the distance is superimposed and displayed on the image as the vehicle continues to approach the intersection point (e.g. Patent Document 1). In this way, the view seen by the driver corresponds to the image indicating the navigation route, thereby reducing the driver's recognition load.
Patent Document 1: Japanese Laid-Open Patent Publication No. 2000-155895
However, since the imaging magnification and the imaging direction are fixed in the technique disclosed in Patent Document 1, it is hard to show an appropriate image to the driver both at a point distant from the intersection point by a predetermined distance (referred to as a distant point) and at a point near the intersection point (referred to as a near point). In other words, if the imaging magnification is increased so that a detailed view of the vicinity of the intersection point can be shown at the distant point, only an image of a small area is taken with that high magnification at the near point, and an appropriate image cannot be shown to the driver. Conversely, if the imaging magnification is decreased so that an appropriate image of the vicinity of the intersection point can be shown at the near point, it is hard for the driver to confirm the traveling direction at the intersection point with that low magnification at the distant point.
The information required by the driver when turning right or left is also considered to differ depending on the distance to the intersection point. At the distant point, the driver needs clue information for turning right or left (such as "turn left at the bank on the corner of the intersection"), so a taken image of the center of the intersection point should be shown. At the near point, however, in addition to the clue information, the driver has to confirm the situation after making the right or left turn (for example, the presence of an obstacle such as a pedestrian on the road after the left turn), so a taken image of the intersection point in the direction of the turn needs to be shown as well as the taken image of the center of the intersection point.
Thus, an object of the present invention is to provide a navigation device which, at a junction such as an intersection point, offers the driver an appropriate image of the vicinity of the junction as required for turning right or left, and the like.
To achieve the above object, the present invention has the following aspects.
A first aspect is a navigation device which is installed in a vehicle for displaying on a screen an image which is taken by imaging means for taking an image of an area ahead of the vehicle, the navigation device comprising: route search means for searching for a route leading to a destination which is set by a user; junction information obtaining means for obtaining position information of a junction on the searched route; position information detection means for obtaining a current position of the vehicle; distance calculation means for calculating a distance from the current position of the vehicle to a position of the junction; and control means for changing a display magnification for an image to be displayed on the screen according to the calculated distance. It is noted that the term “image” includes a moving image and a still image. Also, changing the display magnification for the image to be displayed on the screen includes not only changing the display magnification by enlarging the image taken by the imaging means, or the like, but also changing it by adjusting an imaging magnification of the imaging means.
In a second aspect according to the first aspect, the control means changes the display magnification by enlarging at least a region of an image taken by the imaging means to a predetermined display size according to the calculated distance.
In a third aspect according to the second aspect, the control means increases a size of the region to be enlarged as the calculated distance is shortened, and changes the display magnification by enlarging the region to the predetermined display size.
In a fourth aspect according to the second aspect, the control means further moves a position of the region to be enlarged which is set with respect to the image taken by the imaging means in a direction corresponding to a branch direction of the vehicle at the junction, and changes the display magnification by enlarging the region to the predetermined display size.
In a fifth aspect according to the fourth aspect, the control means changes a displacement amount of the position of the region to be enlarged which is set with respect to the image taken by the imaging means according to the calculated distance.
In a sixth aspect according to the second aspect, the navigation device further comprises imaging regulation setting means for setting an imaging regulation of the imaging means, which includes at least a regulation for timing of a start and a termination of imaging for each junction and a regulation for a size of the region to be enlarged at a time of the start of imaging for each junction; and road width information obtaining means for obtaining a width of a road at the junction, and the control means determines a size of the region to be enlarged at the time of the start of imaging based on a road width at each junction and the imaging regulation for each junction.
In a seventh aspect according to the first aspect, the control means changes the display magnification for the image to be displayed on the screen by changing an imaging magnification of the imaging means according to the calculated distance.
In an eighth aspect according to the seventh aspect, the control means changes the display magnification by decreasing the imaging magnification of the imaging means as the calculated distance is shortened.
In a ninth aspect according to the seventh aspect, the control means further changes an imaging direction of the imaging means to a branch direction of the vehicle at the junction, and changes the imaging magnification.
In a tenth aspect according to the ninth aspect, the control means changes an angle, based on which the imaging direction of the imaging means is changed, according to the calculated distance.
In an eleventh aspect according to the seventh aspect, the navigation device further comprises imaging regulation setting means for setting an imaging regulation of the imaging means, which includes at least a regulation for timing of a start and a termination of imaging for each junction and a regulation for an imaging magnification of the imaging means at a time of the start of imaging for each junction; and road width information obtaining means for obtaining a width of a road at the junction, and the control means sets the imaging magnification at the time of the start of imaging based on a road width at each junction and the imaging regulation for each junction.
In a twelfth aspect according to the seventh aspect, the navigation device further comprises recognition means for detecting a person by performing image recognition with respect to the image taken by the imaging means, and the control means changes the imaging magnification after changing an imaging direction of the imaging means according to a position of the detected person.
In a thirteenth aspect according to the first aspect, the navigation device further comprises virtual viewpoint conversion means for performing viewpoint conversion from an image for which the display magnification is changed into an image which is viewed from a virtual viewpoint.
In a fourteenth aspect according to the thirteenth aspect, the virtual viewpoint conversion means relatively increases a height of the virtual viewpoint as a distance to the junction is shortened.
In a fifteenth aspect according to the first aspect, the navigation device further comprises image edit means for superimposing another image on an image for which the display magnification is changed by the control means.
A sixteenth aspect is a navigation method comprising a taken image obtaining step to obtain an image which is taken by imaging means provided to a vehicle for taking an image of an area ahead of the vehicle; an information obtaining step to obtain a route leading to a destination which is set by a user, position information of a junction on the route, and a current position of the vehicle; a distance calculation step to calculate a distance from the current position of the vehicle to a position of the junction; and a control step to change a display magnification for an image to be displayed on a screen based on the calculated distance.
In a seventeenth aspect according to the sixteenth aspect, at the control step, the display magnification is changed by enlarging at least a region of the image obtained at the taken image obtaining step to a predetermined display size according to the calculated distance.
In an eighteenth aspect according to the sixteenth aspect, at the control step, the display magnification of the image to be displayed on the screen is changed by changing an imaging magnification of the imaging means according to the calculated distance.
A nineteenth aspect is a vehicle comprising a vehicle body to which imaging means for taking an image of an anterior view in a traveling direction is provided; and a navigation device for displaying on a screen an image taken by the imaging means, the navigation device comprising route search means for searching for a route leading to a destination which is set by a user; junction information obtaining means for obtaining position information of a junction on the searched route; position information detection means for obtaining a current position of the vehicle; distance calculation means for calculating a distance from the current position of the vehicle to a position of the junction; and control means for changing a display magnification for an image to be displayed on the screen according to the calculated distance.
In a twentieth aspect according to the nineteenth aspect, the imaging means is provided in a compartment of the vehicle.
In a twenty-first aspect according to the nineteenth aspect, the imaging means is provided outside a compartment of the vehicle.
According to the above first aspect, an image of the junction can be displayed at a size (a display magnification) which is easy for the user to see, according to the distance between the junction and the vehicle. Thus, an image of the vicinity of the junction, which is required for the user, is displayed in a form which provides easy understanding, thereby enabling the user to drive safely.
According to the second aspect, since a part of the taken image is enlarged and displayed, an image which facilitates understanding of a state of the junction can be offered to the user.
According to the third aspect, during the period when the vehicle approaches the junction, an image of the junction covering substantially the same area can be offered. Thus, an image of the junction which constantly provides easy understanding can be offered.
According to the fourth and fifth aspects, when the vehicle comes close to the junction, an image in the branch direction can be shown to the user in advance to draw the user's attention. Thus, the user can drive while paying attention to the road state after the right or left turn, and the like.
According to the sixth aspect, an image with an appropriate size according to the road width at the junction can be offered. Thus, an image which facilitates user's understanding of a state of the junction can be offered.
According to the seventh aspect, since an image of the junction is zoomed and taken, a clear image of the junction can be offered. This makes it easier for the user to understand a state of the junction.
According to the eighth to eleventh aspects, the same advantageous effects as those of the third to sixth aspects can be obtained.
According to the twelfth aspect, image recognition is performed to detect a person such as a pedestrian, and the imaging direction of the imaging means can be changed so that the person is captured. Thus, the user can be notified of the presence of the pedestrian, with the result that the user can drive more safely.
According to the thirteenth and fourteenth aspects, since an image of the vicinity of the junction from a high view point can be offered, information about the vicinity of the junction can be offered in a form which provides easier understanding.
According to the fifteenth aspect, editing such as superimposing an arrow image on the image after the change of the display magnification can be performed, and navigation can be presented to the user in a clearer form.
According to the navigation method of the sixteenth to eighteenth aspects, the same advantageous effects as those of the above first, second, and seventh aspects can be obtained.
According to the nineteenth aspect, the same advantageous effects as those of the above first aspect can be obtained.
According to the twentieth aspect, since the imaging means is provided in the compartment, the imaging means can be prevented from getting dirty and being stolen.
According to the twenty-first aspect, since the imaging means is provided outside the compartment, an image of the view outside the vehicle can be taken over a wide area without being obstructed by an obstacle, in comparison to the case where the imaging means is provided in the compartment. As a result, more information can be collected and offered to the user.
101 input section
102 route search section
103 position information detection section
104 distance calculation section
105 map DB
106 imaging regulation storage section
107 imaging regulation setting section
108 control section
109 imaging section
110 display section
The following will describe embodiments with reference to the figures. It is noted that the present invention is not limited by the embodiments. A navigation device (hereinafter referred to as a navi device) 100 according to a first embodiment includes an input section 101, a route search section 102, a position information detection section 103, a distance calculation section 104, a map DB 105, an imaging regulation storage section 106, an imaging regulation setting section 107, a control section 108, an imaging section 109, and a display section 110. Each section will be described below.
The input section 101 is means for inputting information concerning a destination to the navi device, and includes a remote control, a touch panel, a microphone for audio input, and the like.
The route search section 102 refers to the information concerning the destination which is inputted by the input section 101, vehicle position information which is detected by the position information detection section 103, and the map DB 105, and searches for a route leading to the destination.
The position information detection section 103 obtains information concerning the vehicle position which is measured by a positioning sensor as typified by a GPS (Global Positioning System) which is mounted to the vehicle.
The distance calculation section 104 refers to the vehicle position information detected by the position information detection section 103, and calculates the distance between the vehicle and the junction which the vehicle will reach first from its current position, among the junctions (points where the vehicle is to turn right or left, and the like) on the route searched for by the route search section 102. An illustrative sketch of this calculation is given after the description of the individual sections.
The map DB 105 is means for storing map information required for navigating and searching for a route. For example, the map DB 105 is provided by an HDD or a DVD.
The imaging regulation storage section 106 stores data concerning a regulation (hereinafter, referred to as an imaging regulation) in taking an image of a junction with the imaging section 109. The data stored in the imaging regulation storage section 106 will be described in detail later.
The imaging regulation setting section 107 extracts junctions on the route which is searched for by the route search section 102. Then, with respect to each junction, the imaging regulation setting section 107 refers to information concerning the width of a road at the junction which is stored in the map DB 105, and sets the imaging regulation such as timing of a start and a termination of imaging, and the like. Also, the imaging regulation setting section 107 outputs the set imaging regulation to the control section 108.
The control section 108 controls the imaging section 109 based on the imaging regulation. Also, the control section 108 outputs to the display section 110 an image which is taken by the imaging section 109.
The imaging section 109 takes an image of an area ahead of the vehicle. The imaging section 109 is achieved, for example, by a CCD (Charge Coupled Device) camera, a CMOS (Complementary Metal Oxide Semiconductor) camera, or the like. The camera may be placed either inside or outside the vehicle; it is preferably placed at a location adjacent to the rear-view mirror if it is inside the vehicle, and at a location high above the road surface, such as the roof of the vehicle body, if it is outside the vehicle. It is noted that in the present embodiment, for convenience of explanation, the position and the facing direction of the imaging section 109 are set in advance so that, assuming there is an intersection point as a junction 300 m ahead, the intersection point is located at the center of the camera image.
The display section 110 displays the image which is taken by the imaging section 109. The display section 110 is achieved by a liquid crystal display, a head-up display, a projection device which projects an image on a windshield, or the like.
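For illustration only, the calculation performed by the distance calculation section 104 can be sketched as follows in Python. The representation of the route as an ordered list of junction coordinates, the use of a straight-line (great-circle) distance instead of the distance along the road, and all function names are assumptions of this sketch, not part of the embodiment.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def distance_to_next_junction(vehicle_pos, junctions_ahead):
    """Distance from the vehicle to the first junction it will reach.

    `junctions_ahead` is assumed to hold the (lat, lon) of the junctions on the
    searched route, ordered from the current position toward the destination,
    with junctions already passed removed.
    """
    if not junctions_ahead:
        return None
    return haversine_m(*vehicle_pos, *junctions_ahead[0])
```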
The following will describe an outline of navigation performed by the navi device according to the present embodiment. First, the driver gets into the vehicle and inputs information on a destination to the navi device, whereupon the navi device searches for a route to the destination. After the route is found, the navi device extracts junctions existing on the route, such as intersection points at which to turn right or left, exits of expressways, and the like, and sets the above-described imaging regulation for each junction. In the present embodiment, the imaging regulation is set so that navigation starts 300 m before the junction and terminates at the junction. Then, the driver starts driving. When the vehicle comes within 300 m of the intersection point serving as a junction (hereinafter referred to as a branch intersection point), the imaging section 109 (hereinafter referred to as the camera) starts taking an image of the area ahead of the vehicle. Along with this, the display section 110 displays the live-action image taken by the camera. At this time, the display section 110 displays an image region of the vicinity of the branch intersection point in the image taken by the camera, enlarged by digital zoom and with an arrow indicating the traveling direction at the intersection point superimposed on it. Then, as the vehicle approaches the branch intersection point, the region subject to the digital zoom is enlarged and the zoom magnification is decreased. In other words, the display section 110 always displays an image of the vicinity of the branch intersection point covering substantially the same area.
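The relation outlined above, namely that the cut-out region grows while the digital zoom magnification shrinks so that the displayed area stays the same, reduces to the magnification being the ratio of the fixed display width to the width of the cut-out region. A minimal sketch, with purely illustrative numbers:

```python
DISPLAY_WIDTH_PX = 640  # fixed width of the navi screen (illustrative value)

def digital_zoom_magnification(crop_width_px):
    """Magnification needed to enlarge the cut-out region to the display width."""
    return DISPLAY_WIDTH_PX / crop_width_px

# As the vehicle approaches the branch intersection point, the cut-out region
# is widened, so the magnification falls toward 1 while the area shown around
# the intersection point stays substantially the same.
for crop_width_px in (64, 160, 320, 640):  # e.g. far from the junction ... at the junction
    print(crop_width_px, digital_zoom_magnification(crop_width_px))
```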
The following will describe various tables which are required for executing navigation processing of the present embodiment. In the present embodiment, a navigation timing master 50, an initial display region table 60, and a change rate master 80 are used. The navigation timing master 50 and the change rate master 80 are created in advance, and stored in the imaging regulation storage section 106. On the other hand, the initial display region table 60 is produced by the imaging regulation setting section 107, and stored in a memory which is not shown. Then, the control section 108 refers to the initial display region table 60 which is stored in the memory, and controls the imaging section 109.
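For illustration only, the three tables can be sketched as simple data structures. The field names follow the reference numerals used in this description (start distance 51, termination distance 52, initial display region coordinate 62, initial magnification 63, each-distance magnification 82); the concrete values are placeholders and not the actual master data.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class NavigationTimingMaster:            # navigation timing master 50
    start_distance_m: float = 300.0      # start distance 51
    termination_distance_m: float = 0.0  # termination distance 52

@dataclass
class InitialDisplayRegionEntry:         # one row of the initial display region table 60
    region_px: Tuple[int, int, int, int] # initial display region coordinate 62: (x0, y0, x1, y1)
    initial_magnification: float         # initial magnification 63

@dataclass
class ChangeRateMaster:                  # change rate master 80
    # each-distance magnification 82: display magnification per distance,
    # keyed here by distance in meters (placeholder values; a magnification of
    # one at 30 m matches the value mentioned in the text).
    magnification_by_distance_m: Dict[float, float] = field(
        default_factory=lambda: {300.0: 10.0, 150.0: 5.0, 30.0: 1.0})
```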
It is noted that the change rate is not limited to the above numeric value (a magnification of one at a distance of 30 m), and may be determined as appropriate in view of various conditions such as the performance and the mounting location of the camera. Also, the unit change amount is not limited to a constant value; it may vary with the distance.
The following will describe a detailed operation of the navigation processing executed by the navi device 100 with reference to the figures. First, when the driver inputs information on a destination through the input section 101, the route search section 102 searches for a route leading to the destination.
Next, the imaging regulation setting section 107 obtains information concerning junctions on the searched route from the map DB 105, refers to the imaging regulation storage section 106, and executes imaging regulation setting processing for setting an imaging regulation for each junction (step S103).
In the imaging regulation setting processing, the imaging regulation setting section 107 first obtains from the map DB 105 the width of the road at a junction on the route (step S201). Next, the imaging regulation setting section 107 refers to the navigation timing master 50 in the imaging regulation storage section 106, and obtains the start distance 51 and the termination distance 52 (step S202). Here, for all the junctions, the start distance is 300 m and the termination distance is 0 m. In other words, when the vehicle reaches a point 300 m before the junction, a navigation screen (hereinafter referred to as a navi screen) as described below is displayed on the display section 110, and when the vehicle reaches the junction, the display of the navi screen is terminated.
Next, the imaging regulation setting section 107 sets an initial imaging region for each junction (step S203). In other words, the above initial display region table 60 is generated. Describing the processing of the step S203 more specifically, the imaging regulation setting section 107 adds a predetermined margin to the road width of each junction obtained at the step S201, and determines the horizontal width of the initial display region. Next, the imaging regulation setting section 107 determines a vertical width according to the horizontal width to determine the initial display region coordinate 62. Next, the imaging regulation setting section 107 sets, based on the horizontal width and the above start distance 51, the initial magnification 63, which is the magnification of the digital zoom used in displaying the navi screen. It is noted that the display target region is designated in the camera image using coordinates in pixel units.
Next, the imaging regulation setting section 107 sets the change rate (step S204). More specifically, the imaging regulation setting section 107 reads the each-distance magnification 82 from the change rate master 80 according to the initial magnification 63, and stores it in the memory in association with the target junction.
Next, it is determined whether or not the imaging regulations have been set for all the junctions on the route (step S205). When the imaging regulations have not been set for all the junctions (NO at the step S205), the imaging regulation setting section 107 returns to the step S201 to repeat the processing. On the other hand, when the imaging regulations have been set for all the junctions (YES at the step S205), the imaging regulation setting section 107 terminates the imaging regulation setting processing.
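For illustration only, the setting processing for one junction (steps S201 to S203 above) can be sketched as follows. The camera resolution, display resolution, pixel scale at the start distance, and margin are assumed calibration values of this sketch; the actual regulation depends on the camera parameters of the imaging section 109.

```python
def set_imaging_regulation(road_width_m, camera_size_px=(1280, 960),
                           display_size_px=(640, 480),
                           px_per_meter_at_start=2.0, margin_m=4.0):
    """Sketch of steps S201-S203 for a single junction (illustrative values only)."""
    cam_w, cam_h = camera_size_px
    disp_w, disp_h = display_size_px

    # S203: the horizontal width of the initial display region is the road
    # width plus a margin, expressed in pixels as it appears at the start distance.
    horizontal_px = int((road_width_m + margin_m) * px_per_meter_at_start)
    vertical_px = int(horizontal_px * disp_h / disp_w)  # keep the display aspect ratio

    # The region is centered on the junction, which in this embodiment is
    # assumed to lie at the center of the camera image.
    cx, cy = cam_w // 2, cam_h // 2
    region_px = (cx - horizontal_px // 2, cy - vertical_px // 2,
                 cx + horizontal_px // 2, cy + vertical_px // 2)  # region coordinate 62

    # Initial magnification 63: the digital zoom needed to enlarge the region
    # to the display width at the start of navigation.
    initial_magnification = disp_w / horizontal_px
    return region_px, initial_magnification
```

Step S204 would then select the each-distance magnification 82 from the change rate master 80 according to the initial magnification computed here.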
Returning to the navigation processing, after the imaging regulation setting processing, the position information detection section 103 obtains the current position of the vehicle, and the distance calculation section 104 calculates the distance from the current position to the next junction on the route (step S104).
Subsequently, the control section 108 determines whether or not the distance calculated by the distance calculation section 104 is equal to or shorter than the start distance 51 set by the imaging regulation setting section 107 (that is, whether the vehicle has come within 300 m of the junction) (step S105). When, as the result of the determination, the calculated distance is not equal to or shorter than the start distance 51 (NO at the step S105), the control section 108 returns to the step S104 to repeat the processing. On the other hand, when the calculated distance is equal to or shorter than the start distance 51 (YES at the step S105), the control section 108 controls the imaging section 109 based on the imaging regulation set by the imaging regulation setting section 107 (step S106). More specifically, the control section 108 takes an image with the camera and cuts out the above display target region. Then, the control section 108 generates an image in which the display target region is digitally zoomed based on the each-distance magnification 82 according to the calculated distance.
Next, the control section 108 generates a navi image in which an arrow image 22 indicating the traveling direction at the junction is superimposed on the digitally zoomed image (step S107).
Next, the control section 108 outputs to the display section 110 the navi image generated at the step S107 (step S108). In other words, the enlarged image of the vicinity of the branch intersection point with the superimposed arrow image 22 is displayed on the display section 110.
Next, the control section 108 determines whether or not the distance between the junction and the vehicle has reached the termination distance 52 (step S109). As a result, when the distance has not reached the termination distance 52 (NO at the step S109), the control section 108 returns to the step S104 to repeat the processing. On the other hand, when the distance has reached the termination distance 52 (YES at the step S109), the control with respect to the imaging section 109 and the image output to the display section 110 are terminated, and the processing proceeds to the next step S110.
At the step S110, the control section 108 determines whether or not the vehicle has reached the destination (step S110). As a result, when the vehicle has not reached the destination (NO at the step S110), the processing of the step S104 and the subsequent processing are repeated with respect to the remaining junctions until the vehicle reaches the destination. On the other hand, when the vehicle has reached the destination (YES at the step S110), the navigation processing is terminated.
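For illustration only, the flow from the step S104 to the step S109 for a single junction can be sketched as the loop below. The callables passed in stand in for the position information detection section 103 and the distance calculation section 104 (`get_distance_m`), the imaging section 109 (`capture_frame`), the image editing (`crop_and_zoom`, `superimpose_arrow`), and the display section 110 (`show_on_display`); their signatures are assumptions of this sketch.

```python
def navigate_one_junction(get_distance_m, capture_frame, region_for_distance,
                          crop_and_zoom, superimpose_arrow, show_on_display,
                          start_distance_m=300.0, termination_distance_m=0.0):
    """Sketch of steps S104-S109 of the navigation processing (first embodiment)."""
    # S104/S105: keep measuring the distance until the vehicle comes within
    # the start distance 51 of the junction.
    while get_distance_m() > start_distance_m:
        pass

    # S106-S109: repeat until the termination distance 52 (the junction) is reached.
    while get_distance_m() > termination_distance_m:
        distance_m = get_distance_m()
        frame = capture_frame()                    # S106: take an image ahead of the vehicle
        region = region_for_distance(distance_m)   # display target region; grows as distance shrinks
        zoomed = crop_and_zoom(frame, region)      # cut out the region and digitally zoom it
        navi_image = superimpose_arrow(zoomed)     # S107: superimpose the arrow image 22
        show_on_display(navi_image)                # S108: output the navi image
    # Whether the destination has been reached (step S110) is decided by the
    # caller, which repeats this processing for the remaining junctions.
```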
As described above, in the first embodiment, in navigating a route using an image of a junction which is taken by the camera installed in the vehicle, an image of an actual view required for the driver is cut out from a camera image according to a distance to the junction, enlarged and shown. Thus, an image of the vicinity of the junction, which is required for the driver, is displayed in a form which provides easy understanding, thereby enabling the driver to drive safely.
It is noted that although the center of the display target region coincides with the center of the camera image in the above embodiment, the present invention is not limited thereto, and the center of the display target region may be shifted, for example, toward the traveling direction (the branch direction) according to the distance to the junction.
Further, the display target region may be changed, for example, according to an object such as a pedestrian, in addition to the traveling direction. In this case, for example, the control section 108 is made to have an image recognition function. Then, the taken image is analyzed, and when a pedestrian is detected in the traveling direction, the display target region may be appropriately shifted so as to include the pedestrian. Thus, it becomes easier for the driver to notice the pedestrian, and the like, thereby allowing the driver to drive more safely.
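For illustration only, shifting the display target region toward the branch direction (or toward a detected pedestrian) can be sketched as a bounded horizontal translation of the cut-out rectangle; the maximum shift and the linear schedule below are assumptions of this sketch.

```python
def shift_display_region(region_px, camera_width_px, distance_m, branch_sign,
                         start_distance_m=300.0, max_shift_px=200):
    """Shift the display target region toward the branch direction.

    `branch_sign` is assumed to be +1 for a right turn and -1 for a left turn.
    The shift grows as the vehicle approaches the junction and is clamped so
    that the region stays inside the camera image. The same translation could
    be applied toward the position of a pedestrian detected by image recognition.
    """
    x0, y0, x1, y1 = region_px
    closeness = max(0.0, min(1.0, 1.0 - distance_m / start_distance_m))
    shift_px = int(branch_sign * max_shift_px * closeness)
    shift_px = max(-x0, min(shift_px, camera_width_px - x1))  # clamp to image bounds
    return (x0 + shift_px, y0, x1 + shift_px, y1)
```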
Also, viewpoint conversion may be performed on the image taken as the vehicle approaches the junction. The viewpoint conversion will be described below.
Here, the algorithm of the viewpoint conversion will be described. The algorithm for converting an image taken from the camera viewpoint into an image viewed from the virtual viewpoint 2 is geometrically and uniquely determined by the camera parameters of the imaging section 109 at the camera viewpoint and the camera parameters at the virtual viewpoint 2. The method is as follows. A first step is to determine a correspondence relation between the coordinate system of the ground obtained from the camera parameters at the virtual viewpoint and the coordinate system of the virtual image sensor surface at the virtual viewpoint. Thus, it is calculated which position on the coordinate system of the ground each pixel of the coordinate system of the virtual image sensor surface corresponds to. A second step is to determine a correspondence relation between the coordinate system of the ground at the virtual viewpoint 2 and the coordinate system of the ground obtained from the camera parameters of the imaging section 109. Thus, it is calculated which position on the coordinate system of the ground of the imaging section 109 each coordinate of the coordinate system of the ground at the virtual viewpoint 2 corresponds to. A third step is to determine a correspondence relation between the coordinate system of the ground of the imaging section 109 and the coordinate system of the image sensor surface of the imaging section 109. Thus, it is calculated which position on the coordinate system of the image sensor surface of the imaging section 109 each coordinate of the coordinate system of the ground of the imaging section 109 corresponds to. By performing such processing, the coordinate system of the image sensor surface of the imaging section 109 and the coordinate system of the virtual image sensor surface at the virtual viewpoint 2 are related to each other, and the relation is stored as a conversion table in the imaging regulation storage section 106. The above processing is possible for any virtual viewpoint whose camera parameters are known.
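For illustration only, when the ground is modeled as a plane, the three steps above amount to composing two ground-plane homographies, one for the imaging section 109 and one for the virtual viewpoint, into a single conversion table. The pinhole-camera model, the NumPy representation, and all parameter names below are assumptions of this sketch; pixels mapping to or above the horizon would need special handling that is omitted here.

```python
import numpy as np

def ground_homography(K, R, t):
    """Homography mapping ground-plane points (X, Y) with Z = 0 to image pixels,
    given intrinsics K and extrinsics R, t of a pinhole camera."""
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def build_conversion_table(K_cam, R_cam, t_cam, K_virt, R_virt, t_virt, width, height):
    """For each pixel of the virtual image sensor surface, compute the
    corresponding position on the image sensor surface of the imaging
    section 109 via the ground plane."""
    H_cam = ground_homography(K_cam, R_cam, t_cam)      # ground -> camera image
    H_virt = ground_homography(K_virt, R_virt, t_virt)  # ground -> virtual image
    H = H_cam @ np.linalg.inv(H_virt)                   # virtual image -> camera image
    table = np.zeros((height, width, 2), dtype=np.float32)
    for v in range(height):
        for u in range(width):
            p = H @ np.array([u, v, 1.0])
            table[v, u] = p[:2] / p[2]                  # camera-image coordinates
    return table
```

The resulting table plays the role of the conversion table stored in the imaging regulation storage section 106: the converted image is produced by sampling the camera image at the stored coordinates.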
As described above, the conversion of the viewpoint makes it possible to offer more useful information to the driver. This will be described in detail. When the vehicle approaches the junction, detailed information (“turn left after the convenience store”, “turn left before the traffic light”, and the like) is required for specifying the branch direction, and the like. Thus, information of a region near the junction, such as the region β, becomes important, and by raising the virtual viewpoint as the vehicle approaches, such a region can be offered in a form which provides easier understanding.
The following will describe a second embodiment of the present invention with reference to the figures.
The following will describe data used in the second embodiment. The data used in the present embodiment is basically the same as that in the first embodiment, but differs in that the each-distance magnification 82 of the change rate master 80 described above indicates an imaging magnification (an optical zoom magnification) of the imaging section 109 rather than a digital zoom magnification.
The following will describe navigation processing according to the second embodiment of the present invention with reference to the figures.
The navigation processing of the second embodiment is basically the same as that of the first embodiment. The difference is that, instead of cutting out and digitally zooming a display target region, the control section 108 controls the imaging section 109 so as to change its imaging magnification (optical zoom) based on the each-distance magnification 82 according to the distance calculated for the junction (step S306).
As described above, by changing the imaging magnification according to the distance, a camera image in which the vicinity of the branch intersection point is enlarged (that is, an image covering substantially the same area) can be taken. Thus, the navi screen can be generated merely by superimposing the arrow image 22 on the taken image, and the zoomed-in image is outputted as it is. As a result, a clearer image can be offered than in the case where an image is digitally zoomed and outputted.
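For illustration only, the control of the imaging magnification in the second embodiment can be sketched as a schedule that decreases the optical zoom as the distance shrinks; the maximum magnification and the linear interpolation below are placeholders for the each-distance magnification 82 of the change rate master 80.

```python
def optical_zoom_for_distance(distance_m, start_distance_m=300.0,
                              near_distance_m=30.0, max_magnification=10.0):
    """Imaging (optical zoom) magnification as a function of distance to the junction.

    A linear schedule is assumed here, falling to 1x by 30 m (the value
    mentioned for the change rate); the actual values would be read from the
    change rate master 80.
    """
    if distance_m >= start_distance_m:
        return max_magnification
    if distance_m <= near_distance_m:
        return 1.0
    ratio = (distance_m - near_distance_m) / (start_distance_m - near_distance_m)
    return 1.0 + ratio * (max_magnification - 1.0)
```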
It is noted that at the step S306, the control section 108 may change the facing direction (the imaging direction) of the camera toward the branch direction of the vehicle according to the distance to the junction, for the same reason as that for shifting the display target region according to the traveling direction of the vehicle in the first embodiment.
Further, the optical zoom and the digital zoom described in the first embodiment may be combined. For example, a predetermined display target region may be cut out from an image taken with an optical zoom of five times, digitally zoomed by a factor of two, and displayed. Thus, while the cost of installing a high-power optical zoom mechanism is suppressed, a clearer image can be offered than in the case of using only digital zoom.
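For illustration only, the effective display magnification when the two kinds of zoom are combined is simply their product, as the small check below shows; the pixel widths are illustrative.

```python
def combined_magnification(optical_zoom, crop_width_px, display_width_px):
    """Effective display magnification for combined optical and digital zoom."""
    digital_zoom = display_width_px / crop_width_px
    return optical_zoom * digital_zoom

# 5x optical zoom, then a region half the display width is cut out and
# enlarged (2x digital zoom), giving a 10x effective magnification.
assert combined_magnification(5.0, 320, 640) == 10.0
```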
Also, the distance calculation section 104, the imaging regulation setting section 107, and the control section 108 described above may be realized either as dedicated hardware or as software executed by a processor.
Also, each embodiment described above may be offered in the form of a program which is executed by a computer. In this case, a navigation program stored in the imaging regulation storage section 106 may be read, and the control section 108 may execute the processing described above in accordance with the program.
A navigation device, a navigation method, and a vehicle according to the present invention can change an image displayed for navigation according to the distance between a junction and the vehicle, and are useful for a car navigation device installed in a vehicle, an image display device such as a display, an in-vehicle information terminal, a camera unit, a control unit for camera control, and the like.
Priority: Japanese Patent Application No. 2005-283647, filed September 2005 (JP, national).
International filing: PCT/JP2006/318189, filed September 13, 2006 (WO); 371(c) date: March 6, 2008.