The disclosure of Japanese Patent Application No. 2009-105358, which was filed on Apr. 23, 2009, is incorporated herein by reference.
1. Field of the Invention
The present invention relates to a maneuver assisting apparatus. More particularly, the present invention relates to a maneuver assisting apparatus which assists maneuvering a moving object by reproducing a bird's-eye view image representing a surrounding area of the moving object.
2. Description of the Related Art
According to this type of apparatus, a shooting image of a surrounding area of a vehicle is acquired from a camera mounted on the vehicle. On a screen of a display device, a first display area and a second display area are arranged. The first display area is assigned to a center of the screen, and the second display area is assigned to a surrounding area of the screen. A shooting image in a first range of the surrounding area of the vehicle is displayed in the first display area, and a shooting image in a second range outside of the first range is displayed in the second display area in a compressed state.
However, the manners of displaying the shooting images are fixed in both the first display area and the second display area. Thus, the maneuver assisting performance of the above-described apparatus is limited.
A maneuver assisting apparatus according to the present invention comprises: a plurality of cameras which are arranged on a moving object that moves on a reference surface and which capture the reference surface from diagonally above; a creator which repeatedly creates a bird's-eye view image relative to the reference surface based on an object scene image repeatedly outputted from each of the plurality of cameras; a reproducer which reproduces the bird's-eye view image created by the creator; a determiner which determines whether or not there is a three-dimensional object in a side portion in a direction orthogonal to a moving direction of the moving object based on the bird's-eye view image created by the creator; and an adjuster which adjusts, based on a determination result of the determiner, a ratio of a partial image equivalent to the side portion noticed by the determiner to the bird's-eye view image reproduced by the reproducer.
Preferably, the determiner includes: a detector which repeatedly detects a motion vector amount of the partial image equivalent to the side portion out of the bird's-eye view image; an updater which updates a variable in a manner different depending on a magnitude relationship between the motion vector amount detected by the detector and a threshold value; and a finalizer which finalizes the determination result at a time point at which the variable updated by the updater satisfies a predetermined condition.
More preferably, the determiner further includes a threshold value adjustor which adjusts a magnitude of the threshold value with reference to a moving speed of the moving object.
Preferably, the adjuster includes a changer which changes a size of the partial image and a controller which starts the changer when the determination result is positive and stops the changer when the determination result is negative.
In a certain aspect, the changer decreases a size in a direction orthogonal to the moving direction of the moving object.
In another aspect, the reproducer displays, on a screen, a bird's-eye view image belonging to a designated area out of the bird's-eye view image created by the creator, and the adjuster further includes a definer which defines the designated area so as to have a size corresponding to a size of the partial image and an adjustor which adjusts a factor of the bird's-eye view image belonging to the designated area so that a difference in size between the designated area and the screen is compensated.
The above described features and advantages of the present invention will become more apparent from the following detailed description of the embodiment when taken in conjunction with the accompanying drawings.
With reference to
The ratio of the partial image equivalent to the side portion in the direction orthogonal to the moving direction of the moving object is adjusted in a manner that differs depending on whether or not this partial image is equivalent to a three-dimensional object image. Thus, the reproducibility of the bird's-eye view image is adaptively controlled, and as a result, a maneuver assisting performance is improved.
A maneuver assisting apparatus 10 of this embodiment shown in
With reference to
The camera C_1 has a viewing field VW_1 capturing a forward portion of the vehicle 100, the camera C_2 has a viewing field VW_2 capturing a right direction of the vehicle 100, the camera C_3 has a viewing field VW_3 capturing a backward portion of the vehicle 100, and the camera C_4 has a viewing field VW_4 capturing a left direction of the vehicle 100. Furthermore, the viewing fields VW_1 and VW_2 have a common viewing field VW_12, the viewing fields VW_2 and VW_3 have a common viewing field VW_23, the viewing fields VW_3 and VW_4 have a common viewing field VW_34, and the viewing fields VW_4 and VW_1 have a common viewing field VW_41.
Returning to
The bird's-eye view image BEV_1 is equivalent to an image captured by a virtual camera looking perpendicularly down on the viewing field VW_1, and the bird's-eye view image BEV_2 is equivalent to an image captured by a virtual camera looking perpendicularly down on the viewing field VW_2. Moreover, the bird's-eye view image BEV_3 is equivalent to an image captured by a virtual camera looking perpendicularly down on the viewing field VW_3, and the bird's-eye view image BEV_4 is equivalent to an image captured by a virtual camera looking perpendicularly down on the viewing field VW_4.
According to
Subsequently, the CPU 12p deletes a part of the image outside of a borderline BL from each of the bird's-eye view images BEV_1 to BEV_4, and combines together the other part (that is left after the deletion) of the bird's-eye view images BEV_1 to BEV_4 (see
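The combining step described above can be sketched as follows. This is a minimal sketch assuming each bird's-eye view image has already been warped onto the coordinates of the common surround canvas, and that a precomputed boolean mask encodes the part of each image inside its borderline BL; the mask layout and function name are illustrative helpers, not taken from the embodiment.

```python
import numpy as np

def combine_surround(bevs, masks, canvas_shape):
    """Paste registered bird's-eye view images into one complete-surround
    image. Each boolean mask keeps only the part of the corresponding
    image that lies inside its borderline (hypothetical helper; the
    embodiment's exact borderline geometry is not reproduced here)."""
    canvas = np.zeros(canvas_shape, dtype=np.uint8)
    for bev, mask in zip(bevs, masks):
        # copy only the pixels inside this camera's borderline
        canvas[mask] = bev[mask]
    return canvas
```

Pixels covered by no mask stay at the fill value, mirroring the deletion of image parts outside the borderline.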
In
The CPU 12p defines a cut-out area CT on the complete-surround bird's-eye view image secured in the work area W2, and calculates a zoom factor by which a difference between a screen size of the display device 16 installed in a cockpit and a size of the cut-out area CT is compensated. Thereafter, the CPU 12p creates a display command in which the defined cut-out area CT and the calculated zoom factor are written, and issues the created display command to the display device 16.
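The cut-out/zoom computation can be sketched as follows. This sketch assumes, as the embodiment later arranges, that the cut-out area CT already shares the screen's aspect ratio, so a single scalar zoom factor compensates the size difference; the function name and return layout are illustrative.

```python
def define_display_command(cut_w, cut_h, screen_w, screen_h):
    """Return the zoom factor that maps the cut-out area CT onto the
    monitor screen. Assumes CT already has the screen's aspect ratio,
    so one scalar factor compensates the size difference."""
    assert abs(cut_w / cut_h - screen_w / screen_h) < 1e-6
    zoom = screen_w / cut_w  # equal to screen_h / cut_h by the assumption
    return {"cut_out": (cut_w, cut_h), "zoom": zoom}
```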
The display device 16 refers to the contents written in the display command so as to read out, from the work area W2, the portion of the complete-surround bird's-eye view image belonging to the cut-out area CT, and performs a zoom process on the read-out complete-surround bird's-eye view image. As a result, a drive assisting image shown in
Subsequently, a manner of creating the bird's-eye view images BEV_1 to BEV_4 is described. It is noted that the bird's-eye view images BEV_1 to BEV_4 are all created in the same manner, and thus, the manner of creating the bird's-eye view image BEV_3 is described as a representative.
With reference to
In the camera coordinate system X-Y-Z, an optical center of the camera C_3 is used as an origin O, and in this state, the Z axis is defined in an optical axis direction, the X axis is defined in a direction orthogonal to the Z axis and parallel to the road surface, and the Y axis is defined in a direction orthogonal to the Z axis and X axis. In the coordinate system Xp-Yp of the imaging surface S, a center of the imaging surface S is used as the origin, and in this state, the Xp axis is defined in a lateral direction of the imaging surface S and the Yp axis is defined in a vertical direction of the imaging surface S.
In the world coordinate system Xw-Yw-Zw, an intersecting point between: a perpendicular straight line passing through the origin O of the camera coordinate system X-Y-Z; and the road surface is used as an origin Ow, and in this state, a Yw axis is defined in a direction vertical to the road surface, an Xw axis is defined in a direction parallel to the X axis of the camera coordinate system X-Y-Z, and a Zw axis is defined in a direction orthogonal to the Xw axis and Yw axis. Also, a distance from the Xw axis to the X axis is “h”, and an obtuse angle formed by the Zw axis and the Z axis is equivalent to the above described angle θ.
When coordinates in the camera coordinate system X-Y-Z are written as (x, y, z), “x”, “y”, and “z” indicate an X-axis component, a Y-axis component, and a Z-axis component in the camera coordinate system X-Y-Z, respectively. When coordinates in the coordinate system Xp-Yp of the imaging surface S are written as (xp, yp), “xp” and “yp” indicate an Xp-axis component and a Yp-axis component in the coordinate system Xp-Yp of the imaging surface S, respectively. When coordinates in the world coordinate system Xw-Yw-Zw are written as (xw, yw, zw), “xw”, “yw”, and “zw” indicate an Xw-axis component, a Yw-axis component, and a Zw-axis component in the world coordinate system Xw-Yw-Zw, respectively.
A transformation equation between the coordinates (x, y, z) of the camera coordinate system X-Y-Z and the coordinates (xw, yw, zw) of the world coordinate system Xw-Yw-Zw is represented by Equation 1 below:
Herein, if a focal length of the camera C_3 is “f”, then a transformation equation between the coordinates (xp, yp) of the coordinate system Xp-Yp of the imaging surface S and the coordinates (x, y, z) of the camera coordinate system X-Y-Z is represented by Equation 2 below:
Furthermore, based on Equation 1 and Equation 2, Equation 3 is obtained. Equation 3 shows a transformation equation between the coordinates (xp, yp) of the coordinate system Xp-Yp of the imaging surface S and the coordinates (xw, zw) of the two-dimensional road-surface coordinate system Xw-Zw.
Furthermore, a bird's-eye-view coordinate system X3-Y3, which is a coordinate system of the bird's-eye view image BEV_3 shown in
A projection from the two-dimensional coordinate system Xw-Zw that represents the road surface onto the bird's-eye-view coordinate system X3-Y3 is equivalent to a so-called parallel projection. When a height of a virtual camera, i.e., a virtual view point, is assumed as “H”, a transformation equation between the coordinates (xw, zw) of the two-dimensional coordinate system Xw-Zw and the coordinates (x3, y3) of the bird's-eye-view coordinate system X3-Y3 is represented by Equation 4 below. The height H of the virtual camera is previously determined.
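The equation images themselves are not reproduced in this text. Under the stated conventions (a virtual camera at height H looking perpendicularly down), a commonly used form of such a scaling, with a virtual-camera focal length f that is an assumed parameter rather than one taken from the original, is:

```latex
\begin{pmatrix} x_3 \\ y_3 \end{pmatrix}
  = \frac{f}{H} \begin{pmatrix} x_w \\ z_w \end{pmatrix}
```

That is, the ground coordinates are mapped by a uniform scale inversely proportional to the virtual view-point height H.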
Furthermore, based on Equation 4, Equation 5 is obtained, and based on Equation 5 and Equation 3, Equation 6 is obtained. Moreover, based on Equation 6, Equation 7 is obtained. Equation 7 is equivalent to a transformation equation for transforming the coordinates (xp, yp) of the coordinate system Xp-Yp of the imaging surface S into the coordinates (x3, y3) of the bird's-eye-view coordinate system X3-Y3.
The coordinates (xp, yp) of the coordinate system Xp-Yp of the imaging surface S represent coordinates of the object scene image P_3 captured by the camera C_3. Therefore, the object scene image P_3 from the camera C_3 is transformed into the bird's-eye view image BEV_3 by using Equation 7. In reality, the object scene image P_3 firstly is subjected to an image process such as a lens distortion correction, and is then transformed into the bird's-eye view image BEV_3 by using Equation 7.
Subsequently, an operation for defining the cut-out area CT and an operation for reproducing the complete-surround bird's-eye view image belonging to the defined cut-out area CT are described.
Firstly, the cut-out area CT is initialized so as to be a rectangle in which the overlapped areas OL_12 to OL_41 shown in
With reference to
Motion vector amounts MV_1 to MV_6 are detected with reference to partial images IM_1 to IM_6 belonging to the blocks BLK_1 to BLK_6. Due to a characteristic of the bird's-eye transformation, magnitudes of the detected motion vector amounts MV_1 to MV_6 differ depending on whether there is a three-dimensional object (such as an architectural structure) in the blocks BLK_1 to BLK_6.
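The embodiment does not fix a particular motion-estimation method. One minimal way to obtain a scalar motion vector amount for a block is a brute-force sum-of-absolute-differences (SAD) search between co-located blocks of consecutive bird's-eye view frames; the function name and search radius below are illustrative assumptions.

```python
import numpy as np

def motion_vector_amount(prev_block, cur_block, search=3):
    """Estimate the motion magnitude between two co-located blocks of
    consecutive frames by a brute-force SAD search (a sketch; the
    embodiment does not specify the matching method)."""
    best, best_sad = (0, 0), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # shift the previous block and measure the mismatch
            shifted = np.roll(np.roll(prev_block, dy, axis=0), dx, axis=1)
            sad = np.abs(shifted.astype(int) - cur_block.astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best = sad, (dy, dx)
    # magnitude of the best-matching displacement
    return float(np.hypot(*best))
```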
As shown in
The variable L_K is incremented (up to an upper limit of a constant Lmax) when a motion vector amount MV_K (K: 1 to 6; the same applies below) exceeds the threshold value THmv, and is decremented (down to a lower limit of “0”) when the motion vector amount MV_K is equal to or less than the threshold value THmv. A flag FLG_K is set to “1” when the variable L_K exceeds the constant Lmax, and set to “0” when the variable L_K falls below “0”.
Therefore, if a state shown in
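The counter-and-flag update described above can be sketched directly; the values of THmv and Lmax below are assumed placeholders, not values from the embodiment.

```python
LMAX = 30     # upper limit Lmax (assumed value)
TH_MV = 5.0   # threshold THmv (assumed value)

def update_flag(mv_k, l_k, flg_k, th_mv=TH_MV, l_max=LMAX):
    """One iteration of the hysteresis in the embodiment: L_K rises
    while MV_K exceeds THmv and falls otherwise; FLG_K switches to 1
    only after L_K passes Lmax and back to 0 only after L_K falls
    below 0, so a single noisy frame cannot flip the decision."""
    if mv_k > th_mv:
        l_k += 1
        if l_k > l_max:            # steps S29-S33: set flag, clamp
            flg_k, l_k = 1, l_max
    else:
        l_k -= 1
        if l_k < 0:                # steps S37-S41: clear flag, clamp
            flg_k, l_k = 0, 0
    return l_k, flg_k
```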
When the flag FLG_K is set to “1”, the partial image IM_K is reduced. More specifically, a lateral-direction size of the partial image IM_K is decreased to ½. The complete-surround bird's-eye view image is changed in shape as a result of the reduction of the partial image IM_K. The cut-out area CT is re-defined with reference to a horizontal size of the complete-surround bird's-eye view image thus changed in shape. The re-defined cut-out area CT has a horizontal size equivalent to the horizontal size of the complete-surround bird's-eye view image and an aspect ratio equivalent to an aspect ratio of a monitor screen, and a central position of the cut-out area CT matches a central position of the complete-surround bird's-eye view image.
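The reduction and re-definition steps can be sketched as follows; simple column dropping stands in for the unspecified resampling method, and the names are illustrative.

```python
import numpy as np

def reduce_lateral(partial):
    """Halve the lateral (column) size of a partial image IM_K by
    dropping every other column (a stand-in for the unspecified
    resampling method)."""
    return partial[:, ::2]

def redefine_cut_out(surround_w, surround_h, screen_aspect):
    """Re-define the cut-out area CT: full horizontal size of the
    changed surround image, the monitor's aspect ratio, centered on
    the surround image. Returns (x, y, width, height)."""
    ct_w = surround_w
    ct_h = round(ct_w / screen_aspect)
    cx, cy = surround_w // 2, surround_h // 2
    return (cx - ct_w // 2, cy - ct_h // 2, ct_w, ct_h)
```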
Therefore, a complete-surround bird's-eye view image shown in an upper left of
When the cut-out area CT is re-defined, a zoom factor of the complete-surround bird's-eye view image is calculated. The zoom factor is equivalent to a factor by which a difference between the size of the re-defined cut-out area CT and a size of the monitor screen is compensated. In a display command issued toward the display device 16, the re-defined cut-out area CT and the calculated zoom factor are written.
The display device 16 displays the complete-surround bird's-eye view image on the monitor screen according to such a display command. That is, the display device 16 cuts out the complete-surround bird's-eye view image belonging to the cut-out area CT, as shown in a lower left of
Specifically, the CPU 12p executes a process according to a flowchart shown in
With reference to
A complete-surround bird's-eye view image creating process in the step S7 follows a sub routine shown in
The image-shape changing process shown in the step S9 in
In a step S23, the motion vector amount of the partial image IM_K is detected as MV_K, and in a step S25, it is determined whether or not the detected motion vector amount MV_K exceeds the threshold value THmv.
When a determination result is YES, the variable L_K is incremented in a step S27. In a step S29, it is determined whether or not the incremented variable L_K exceeds the constant Lmax. When the variable L_K is equal to or less than the constant Lmax, the process directly advances to a step S43. When the variable L_K exceeds the constant Lmax, the flag FLG_K is set to “1” in a step S31, and in a step S33, the variable L_K is set to the constant Lmax. Then, the process advances to the step S43.
When the determination result in the step S25 is NO, the variable L_K is decremented in a step S35, and it is determined in a step S37 whether or not the decremented variable L_K falls below “0”. When the variable L_K is equal to or more than “0”, the process directly advances to the step S43. When the variable L_K falls below “0”, the flag FLG_K is set to “0” in a step S39, and in a step S41, the variable L_K is set to “0”. Then, the process advances to the step S43.
In the step S43, it is determined whether or not the variable K reaches “6”. When a determination result is NO, the variable K is incremented in a step S45, and then, the process returns to the step S23. When the determination result is YES, the process advances to a step S47. The variable K is set to “1” in the step S47, and in a subsequent step S49, it is determined whether or not the flag FLG_K indicates “1”.
When the determination result is NO, the process directly advances to a step S53, and when the determination result is YES, the partial image IM_K is reduced in a step S51, and then, the process advances to the step S53. Specifically, the process in the step S51 is equivalent to a process for decreasing the lateral-direction size of the partial image IM_K to ½. In the step S53, it is determined whether or not the variable K reaches “6”. When a determination result is NO, the variable K is incremented in a step S55, and then, the process returns to the step S49. When the determination result is YES, the process advances to a step S57.
In the step S57, a horizontal size of the complete-surround bird's-eye view image changed in shape as a result of the process in the step S51 is detected, and the cut-out area CT is re-defined so as to be adapted to the detected horizontal size. In a step S59, the zoom factor of the complete-surround bird's-eye view image is calculated with reference to the size of the re-defined cut-out area CT.
The re-defined cut-out area CT has the horizontal size equivalent to the horizontal size of the complete-surround bird's-eye view image and the aspect ratio equivalent to the aspect ratio of the monitor screen, and the central position of the re-defined cut-out area CT matches the central position of the complete-surround bird's-eye view image. The calculated zoom factor is equivalent to a factor by which a difference between the size of the re-defined cut-out area CT and the size of the monitor screen is compensated.
In a step S61, the display command in which the re-defined cut-out area CT and the calculated zoom factor are written is created, and the created display command is issued toward the display device 16. Upon completion of the process in the step S61, the process returns to the upper-level routine.
As can be seen from the above description, the cameras C_1 to C_4 are arranged in the vehicle 100 that moves on the road surface, and capture the road surface from diagonally above. The CPU 12p repeatedly creates the complete-surround bird's-eye view image relative to the road surface, based on the object scene images P_1 to P_4 repeatedly outputted from the cameras C_1 to C_4 (S5, S7). The created complete-surround bird's-eye view image is reproduced on the monitor screen of the display device 16.
The CPU 12p determines whether or not there is a three-dimensional object such as an architectural structure in the side portion in the direction orthogonal to the moving direction of the vehicle 100, based on the complete-surround bird's-eye view image created as described above (S21 to S45). Thereafter, the CPU 12p adjusts the ratio of the partial image equivalent to the side portion noticed in the determining process, to the complete-surround bird's-eye view image reproduced on the monitor screen, based on the determination result (S47 to S59).
The ratio of the partial image equivalent to the side portion in the direction orthogonal to the moving direction of the vehicle 100 is adjusted so as to differ depending on whether or not this partial image is equivalent to a three-dimensional object image. Thus, the reproducibility of the bird's-eye view image is adaptively controlled, and as a result, the maneuver assisting performance is improved.
It is noted that in this embodiment, upon combining the bird's-eye view images BEV_1 to BEV_4, one portion of the image outside of the borderline BL is deleted (see
Furthermore, in this embodiment, the lateral-direction size of the three-dimensional object image is compressed to ½. However, the three-dimensional object image may optionally be non-displayed, as shown in
Notes relating to the above-described embodiment will be shown below. It is possible to arbitrarily combine these notes with the above-described embodiment unless any contradiction occurs.
The coordinate transformation for producing a bird's-eye view image from a photographed image, which is described in the embodiment, is generally called a perspective projection transformation. Instead of using this perspective projection transformation, the bird's-eye view image may also optionally be produced from the photographed image through a well-known planar projection transformation. When the planar projection transformation is used, a homography matrix (coordinate transformation matrix) for transforming a coordinate value of each pixel on the photographed image into a coordinate value of each pixel on the bird's-eye view image is evaluated in advance at a stage of a camera calibrating process. A method of evaluating the homography matrix is well known. Then, during image transformation, the photographed image may be transformed into the bird's-eye view image based on the homography matrix. In either way, the photographed image is transformed into the bird's-eye view image by projecting the photographed image onto the bird's-eye view image.
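A planar (homography-based) transformation of the kind described can be sketched as follows. The sketch assumes the inverse homography, mapping bird's-eye coordinates back to photographed-image coordinates, has already been obtained at calibration time; nearest-neighbour sampling and the names used are illustrative.

```python
import numpy as np

def warp_by_homography(src, H_inv, out_shape):
    """Produce a bird's-eye view image by inverse-mapping each output
    pixel through a precomputed homography. H_inv maps bird's-eye
    coordinates back to photographed-image coordinates. Nearest-
    neighbour sampling is used for brevity."""
    out_h, out_w = out_shape
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    ones = np.ones_like(xs)
    # homogeneous output coordinates, one column per pixel
    pts = np.stack([xs, ys, ones]).reshape(3, -1).astype(float)
    mapped = H_inv @ pts
    u = (mapped[0] / mapped[2]).round().astype(int)
    v = (mapped[1] / mapped[2]).round().astype(int)
    # keep only pixels whose source location falls inside the image
    valid = (u >= 0) & (u < src.shape[1]) & (v >= 0) & (v < src.shape[0])
    out = np.zeros(out_shape, dtype=src.dtype)
    out.reshape(-1)[valid] = src[v[valid], u[valid]]
    return out
```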
Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.
Number | Date | Country | Kind |
---|---|---|---
2009-105358 | Apr 2009 | JP | national |