The disclosure of Japanese Patent Application No. 2008-318860, which was filed on Dec. 15, 2008, is incorporated herein by reference.
1. Field of the Invention
The present invention relates to an obstacle sensing apparatus. In particular, the present invention relates to an obstacle sensing apparatus, arranged in a moving object such as an automobile, which senses a surrounding obstacle.
2. Description of the Related Art
According to one example of this type of apparatus, an image representing an object scene around a vehicle is repeatedly outputted from an imaging device mounted on the vehicle. An image processing unit transforms each of two images outputted from the imaging device into a bird's-eye view image, aligns the positions of the two transformed bird's-eye view images, and detects a difference between the two position-aligned bird's-eye view images. In the detected difference, a component equivalent to an obstacle having a height appears. Thereby, it becomes possible to sense the obstacle from the object scene.
However, in the above-described device, an error in the process for transforming into a bird's-eye view image and an error in the process for aligning positions may deteriorate the accuracy of sensing the obstacle.
An obstacle sensing apparatus according to the present invention, comprises: a fetcher which fetches an object scene image repeatedly outputted from an imager which captures an object scene in a direction which obliquely intersects a reference surface; a transformer which transforms the object scene image fetched by the fetcher into a bird's-eye view image; a detector which detects a difference between screens of the bird's-eye view image transformed by the transformer; a first specifier which specifies one portion of difference along a first axis extending in parallel to the reference surface from a reference point corresponding to a center of an imaging surface, out of the difference detected by the detector; a second specifier which specifies one portion of difference along a second axis extending in parallel to the reference surface in a manner to intersect the first axis, out of the difference detected by the detector; and a generator which generates a notification when the difference specified by the first specifier and the difference specified by the second specifier satisfy a predetermined condition.
Preferably, further comprised is a first definer which defines the first axis corresponding to each of one or at least two angles in a rotation direction of a reference axis extending from the reference point in a manner to be perpendicular to the reference surface, wherein the first specifier executes a difference specifying process in association with a defining process of the first definer.
More preferably, further comprised is a creator which creates a histogram representing a distributed state in the rotation direction of the difference detected by the detector, wherein the first definer executes the defining process with reference to the histogram created by the creator.
More preferably, further comprised is a second definer which defines the second axis in each of one or at least two positions corresponding to the difference specified by the first specifier, wherein the second specifier executes a difference specifying process in association with a defining process of the second definer.
Preferably, the difference specified by the second specifier is equivalent to a difference continuously appearing along the second axis.
Preferably, the predetermined condition is equivalent to a condition under which a size of the difference specified by the first specifier exceeds a first threshold value and a size of the difference specified by the second specifier falls below a second threshold value.
The above described features and advantages of the present invention will become more apparent from the following detailed description of the embodiment when taken in conjunction with the accompanying drawings.
A maneuver assisting apparatus (obstacle sensing apparatus) 10 of this embodiment shown in
With reference to
The camera C_3 is installed at a substantially center in a width direction of a rear portion and on an upper side in a height direction of the automobile 100, and oriented rearward, obliquely downward of the automobile 100. The camera C_4 is installed at a substantially center in a width direction on a left side and on an upper side in a height direction of the automobile 100, and oriented leftward, obliquely downward of the automobile 100.
A state where the automobile 100 and its surrounding grounds are aerially viewed is shown in
Returning to
The bird's-eye view image BEV_1 is equivalent to an image captured by a virtual camera looking perpendicularly down on the viewing field VW_1, and the bird's-eye view image BEV_2 is equivalent to an image captured by a virtual camera looking perpendicularly down on the viewing field VW_2. Moreover, the bird's-eye view image BEV_3 is equivalent to an image captured by a virtual camera looking perpendicularly down on the viewing field VW_3, and the bird's-eye view image BEV_4 is equivalent to an image captured by a virtual camera looking perpendicularly down on the viewing field VW_4.
According to
Subsequently, in order to join the bird's-eye view images BEV_1 to BEV_4 to one another, the CPU 12p rotates and/or moves the bird's-eye view images BEV_2 to BEV_4 by using the bird's-eye view image BEV_1 as a reference. The coordinates of the bird's-eye view images BEV_2 to BEV_4 are transformed on the work areas F2 to F4 so as to depict a whole-circumference bird's-eye view image shown in
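As a rough illustration of this compositing step, the sketch below (Python with OpenCV; the pose parameters and function names are illustrative assumptions, not taken from the original) rotates and translates each bird's-eye view image onto a common canvas:

```python
import cv2
import numpy as np

def compose_whole_circumference(bevs, poses, canvas_size):
    """Rotate/translate each bird's-eye view image into a common
    canvas, using BEV_1's frame as the reference (sketch only).

    bevs   : list of single-channel bird's-eye view images
    poses  : list of (angle_deg, tx, ty) placing each image on the canvas
    """
    canvas = np.zeros(canvas_size, dtype=np.uint8)
    for img, (angle, tx, ty) in zip(bevs, poses):
        h, w = img.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        m[:, 2] += (tx, ty)  # append the translation to the rotation
        warped = cv2.warpAffine(img, m, (canvas_size[1], canvas_size[0]))
        canvas = np.maximum(canvas, warped)  # crude blend in the overlaps
    return canvas
```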
In
Moreover, a unique area OR_1 is equivalent to an area reproducing the portion of the viewing field VW_1 excluding the common viewing fields VW_41 and VW_12, and a unique area OR_2 is equivalent to an area reproducing the portion of the viewing field VW_2 excluding the common viewing fields VW_12 and VW_23. Furthermore, a unique area OR_3 is equivalent to an area reproducing the portion of the viewing field VW_3 excluding the common viewing fields VW_23 and VW_34, and a unique area OR_4 is equivalent to an area reproducing the portion of the viewing field VW_4 excluding the common viewing fields VW_34 and VW_41.
A display device 14 installed at the driver's seat of the automobile 100 defines a block BK1 in which the overlapped areas OL_12 to OL_41 are located at four corners, and reads out one portion of the bird's-eye view image belonging to the defined block BK1 from each of the work areas F1 to F4. Moreover, the display device 14 joins the read-out bird's-eye view images to one another, and pastes a graphic image G1 resembling an upper portion of the automobile 100 at a center of the thus-obtained whole-circumference bird's-eye view image. As a result, a maneuver assisting image shown in
Subsequently, a manner of creating the bird's-eye view images BEV_1 to BEV_4 is described. It is noted that all the bird's-eye view images BEV_1 to BEV_4 are created according to the same manner, and therefore, on behalf of all the bird's-eye view images BEV_1 to BEV_4, the manner of creating the bird's-eye view image BEV_3 is described.
With reference to
In the camera coordinate system (X, Y, Z), an optical center of the camera C_3 is used as an origin O, and in this state, the Z axis is defined in an optical axis direction, the X axis is defined in a direction orthogonal to the Z axis and parallel to the ground, and the Y axis is defined in a direction orthogonal to the Z axis and X axis. In the coordinate system (Xp, Yp) of the imaging surface S, a center of the imaging surface S is used as the origin, and in this state, the Xp axis is defined in a lateral direction of the imaging surface S and the Yp axis is defined in a vertical direction of the imaging surface S.
In the world coordinate system (Xw, Yw, Zw), an intersecting point between a perpendicular line passing through the origin O of the camera coordinate system (X, Y, Z) and the ground is used as an origin Ow, and in this state, the Yw axis is defined in a direction vertical to the ground, the Xw axis is defined in a direction parallel to the X axis of the camera coordinate system (X, Y, Z), and the Zw axis is defined in a direction orthogonal to the Xw axis and Yw axis. Also, a distance from the Xw axis to the X axis is “h”, and an obtuse angle formed by the Zw axis and the Z axis is equivalent to the above described angle θ.
When coordinates in the camera coordinate system (X, Y, Z) are written as (x, y, z), “x”, “y”, and “z” indicate an X-axis component, a Y-axis component, and a Z-axis component, respectively, in the camera coordinate system (X, Y, Z). When coordinates in the coordinate system (Xp, Yp) of the imaging surface S are written as (xp, yp), “xp” and “yp” indicate an Xp-axis component and a Yp-axis component, respectively, in the coordinate system (Xp, Yp) of the imaging surface S. When coordinates in the world coordinate system (Xw, Yw, Zw) are written as (xw, yw, zw), “xw”, “yw”, and “zw” indicate an Xw-axis component, a Yw-axis component, and a Zw-axis component, respectively, in the world coordinate system (Xw, Yw, Zw).
A transformation equation between the coordinates (x, y, z) of the camera coordinate system (X, Y, Z) and the coordinates (xw, yw, zw) of the world coordinate system (Xw, Yw, Zw) is represented by Equation 1 below:
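The equation of the original filing is not reproduced in this text. The following is a standard formulation consistent with the definitions above, in which the camera sits at height h above the origin Ow and θ is the obtuse angle between the Zw axis and the Z axis; the sign conventions are assumptions and may differ from the original drawings:

$$
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 \\
0 & -\sin\theta & \cos\theta \\
0 & \cos\theta & \sin\theta
\end{bmatrix}
\begin{bmatrix} x_w \\ y_w - h \\ z_w \end{bmatrix}
\qquad\text{(Equation 1, sketch)}
$$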
Herein, if a focal length of the camera C_3 is assumed as “f”, a transformation equation between the coordinates (xp, yp) of the coordinate system (Xp, Yp) of the imaging surface S and the coordinates (x, y, z) of the camera coordinate system (X, Y, Z) is represented by Equation 2 below:
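Again, the original equation is not reproduced; a standard pinhole-projection form with focal length f is:

$$
x_p = f\,\frac{x}{z}, \qquad y_p = f\,\frac{y}{z}
\qquad\text{(Equation 2, sketch)}
$$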
Furthermore, based on Equation 1 and Equation 2, Equation 3 is obtained. Equation 3 shows a transformation equation between the coordinates (xp, yp) of the coordinate system (Xp, Yp) of the imaging surface S and the coordinates (xw, zw) of the two-dimensional ground coordinate system (Xw, Zw).
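A form obtained by setting yw = 0 (a point on the ground) in the Equation 1 sketch and substituting into the Equation 2 sketch; since θ is obtuse, cos θ < 0, so the denominator is positive for ground points in front of the camera:

$$
x_p = \frac{f\,x_w}{z_w\sin\theta - h\cos\theta}, \qquad
y_p = \frac{f\,(h\sin\theta + z_w\cos\theta)}{z_w\sin\theta - h\cos\theta}
\qquad\text{(Equation 3, sketch)}
$$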
Furthermore, the bird's-eye-view coordinate system (X3, Y3), which is a coordinate system of the bird's-eye view image BEV_3 shown in
A projection from the two-dimensional coordinate system (Xw, Zw) that represents the ground onto the bird's-eye-view coordinate system (X3, Y3) is equivalent to a so-called parallel projection. When a height of a virtual camera, i.e., a virtual viewpoint, is assumed as "H", a transformation equation between the coordinates (xw, zw) of the two-dimensional coordinate system (Xw, Zw) and the coordinates (x3, y3) of the bird's-eye-view coordinate system (X3, Y3) is represented by Equation 4 below. The height H of the virtual camera is previously determined.
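A sketch of this parallel projection, assuming the bird's-eye view image scales the ground coordinates by the factor f/H:

$$
\begin{bmatrix} x_3 \\ y_3 \end{bmatrix}
= \frac{f}{H}
\begin{bmatrix} x_w \\ z_w \end{bmatrix}
\qquad\text{(Equation 4, sketch)}
$$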
Furthermore, based on Equation 4, Equation 5 is obtained, and based on Equation 5 and Equation 3, Equation 6 is obtained. Moreover, based on Equation 6, Equation 7 is obtained. Equation 7 is equivalent to a transformation equation for transforming the coordinates (xp, yp) of the coordinate system (Xp, Yp) of the imaging surface S into the coordinates (x3, y3) of the bird's-eye-view coordinate system (X3, Y3).
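Sketches of these forms, derived from the Equation 3 and Equation 4 sketches above rather than reproduced from the original:

$$
\begin{bmatrix} x_w \\ z_w \end{bmatrix}
= \frac{H}{f}
\begin{bmatrix} x_3 \\ y_3 \end{bmatrix}
\qquad\text{(Equation 5, sketch)}
$$

$$
x_p = \frac{f H x_3}{H y_3 \sin\theta - f h \cos\theta}, \qquad
y_p = \frac{f\,(f h \sin\theta + H y_3 \cos\theta)}{H y_3 \sin\theta - f h \cos\theta}
\qquad\text{(Equation 6, sketch)}
$$

$$
x_3 = \frac{f h\, x_p}{H\,(y_p \sin\theta - f \cos\theta)}, \qquad
y_3 = \frac{f h\,(f \sin\theta + y_p \cos\theta)}{H\,(y_p \sin\theta - f \cos\theta)}
\qquad\text{(Equation 7, sketch)}
$$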
The coordinates (xp, yp) of the coordinate system (Xp, Yp) of the imaging surface S represent coordinates of the object scene image P_3 captured by the camera C_3. Therefore, the object scene image P_3 from the camera C_3 is transformed into the bird's-eye view image BEV_3 by using Equation 7. In reality, the object scene image P_3 firstly undergoes an image process such as a lens distortion correction, and is then transformed into the bird's-eye view image BEV_3 using Equation 7.
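A minimal sketch of this remapping, built on the Equation 6 sketch above (bird's-eye coordinates mapped back to imaging-surface coordinates so that each output pixel can be sampled from the source image); f, h, H, θ and the centering conventions are assumptions that would come from camera calibration:

```python
import cv2
import numpy as np

def to_birds_eye(img, f, h, H, theta, out_size):
    """Warp a camera image into a bird's-eye view by inverse mapping:
    for every bird's-eye pixel (x3, y3), compute the source pixel
    (xp, yp) with the Equation 6 sketch and sample it (sketch only)."""
    oh, ow = out_size
    y3, x3 = np.indices((oh, ow), dtype=np.float32)
    x3 -= ow / 2.0                      # centre the BEV x coordinate
    s, c = np.sin(theta), np.cos(theta)
    denom = H * y3 * s - f * h * c      # positive: theta is obtuse, c < 0
    xp = f * H * x3 / denom
    yp = f * (f * h * s + H * y3 * c) / denom
    # shift back to pixel indices, with the image centre as the origin
    ih, iw = img.shape[:2]
    map_x = xp + iw / 2.0
    map_y = yp + ih / 2.0
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```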
With reference to
In this embodiment, an obstacle that moves relative to the automobile 100 is defined as a "dynamic obstacle". Therefore, an obstacle moving around a stationary automobile 100, a stationary obstacle around a moving automobile 100, an obstacle moving at a speed different from the moving speed of the automobile 100, or an obstacle moving in a direction different from the moving direction of the automobile 100 is regarded as a "dynamic obstacle". In contrast, a stationary obstacle around a stationary automobile 100, or an obstacle moving in the same direction and at the same speed as the automobile 100, is regarded as a "static obstacle".
In a situation shown in
In the description below, of the whole-circumference bird's-eye view image shown in
Moreover, with reference to
Moreover, a point that is present on the whole-circumference bird's-eye view image and that is equivalent to a center of the imaging surface of the camera C_3 is defined as a “reference point RP_3”, and an axis extending from the reference point RP_3 orthogonally to the ground is defined as a “reference axis RAX_3”. Likewise, a point that is present on the whole-circumference bird's-eye view image and that is equivalent to a center of the imaging surface of the camera C_4 is defined as a “reference point RP_4”, and an axis extending from the reference point RP_4 orthogonally to the ground is defined as a “reference axis RAX_4”.
In the image processing circuit 12, in response to the vertical synchronization signal Vsync, a variable L is set to each of “1” to “4”, and corresponding to each of the numerical values, the process described below is executed.
Firstly, a difference image DEF_L representing a difference between frames of a reproduced image REP_L is created by a difference calculating process. When the automobile 100 is moving, a position-aligning process, which aligns the reproduced image REP_L of a preceding frame with the reproduced image REP_L of a current frame in consideration of the movement of the automobile 100, is executed before the difference calculating process. As a result, for the reproduced image REP_2 shown in
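A minimal sketch of the difference-calculating process, assuming the motion compensation is supplied as a 2x3 affine matrix (e.g., from vehicle odometry or image registration, which is outside this sketch); the function name is illustrative:

```python
import cv2

def difference_image(prev_bev, cur_bev, motion=None):
    """Per-pixel difference between consecutive bird's-eye frames of
    REP_L; high-luminance pixels mark inter-frame change.

    motion: optional 2x3 affine matrix compensating the vehicle's
    movement between the frames.
    """
    if motion is not None:
        h, w = cur_bev.shape[:2]
        prev_bev = cv2.warpAffine(prev_bev, motion, (w, h))
    return cv2.absdiff(prev_bev, cur_bev)
```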
The obstacle 200 is three-dimensional, and thus, when the image of the dynamic, three-dimensional obstacle 200 captured from an oblique direction is transformed into the bird's-eye view image, irrespective of the position alignment between the frames, the bird's-eye view image of the obstacle 200 in a current frame differs, in principle, from the bird's-eye view image of the obstacle 200 in a preceding frame. Therefore, in the difference image DEF_2, a high luminance component representing the obstacle 200 clearly appears.
In contrast, the pattern 300 depicted on the ground is planar, and thus, when the positions between the frames are aligned, the bird's-eye view image of the pattern 300 in a current frame matches, in principle, the bird's-eye view image of the pattern 300 in a preceding frame. However, in reality, resulting from an error in the process for transforming into a bird's-eye view image and an error in the position alignment between frames, a high luminance component representing a profile of the pattern 300 appears in the difference image DEF_2.
When the difference image DEF_L is created, a histogram representing a luminance distribution of the difference image DEF_L in a rotation direction of a reference axis RAX_L is created. For the difference image DEF_2 shown in
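A minimal sketch of this histogram, assuming DEF_L is a single-channel difference image and the reference point RP_L is given in pixel coordinates (the bin width and names are illustrative):

```python
import numpy as np

def angular_histogram(def_img, ref_point, bins=360):
    """Sum difference luminance into angle bins around the reference
    point RP_L, giving the distribution in the rotation direction of
    the reference axis RAX_L."""
    ry, rx = ref_point
    ys, xs = np.nonzero(def_img)
    angles = np.degrees(np.arctan2(ys - ry, xs - rx)) % 360.0
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 360.0),
                           weights=def_img[ys, xs].astype(np.float64))
    return hist
```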
Subsequently, one or at least two angle ranges (angle range: an angle range in a rotation direction of the reference axis RAX_L), each of which continuously has a significant difference amount, are specified from the histogram. The specified angle ranges are designated as analysis ranges in which whether or not the dynamic obstacle exists is analyzed. According to
A size of the designated analysis range is compared with a reference value REF. When the size of the analysis range falls below the reference value REF, one connecting line axis extending from the reference point RP_L in parallel to the ground is defined at an angle equivalent to a center of the analysis range. In contrast, when the size of the analysis range is equal to or more than the reference value REF, a plurality of connecting line axes extending from the reference point RP_L in parallel to the ground are defined at a uniform angular interval over the whole region of the analysis range.
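A minimal sketch of this axis-defining rule (the REF and pitch values are illustrative, not taken from the original):

```python
import numpy as np

def define_axes(range_start, range_end, ref=10.0, pitch=2.0):
    """Angles (degrees) of the connecting line axes for one analysis
    range: a single axis at the centre when the range is narrow,
    otherwise axes at a uniform angular pitch across the range."""
    size = range_end - range_start
    if size < ref:
        return [0.5 * (range_start + range_end)]
    return list(np.arange(range_start, range_end + 1e-9, pitch))
```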
As a result, for the analysis range AR1 shown in
Subsequently, one or at least two connecting-line-axis graphs, which respectively correspond to the one or at least two defined connecting line axes, are created. The created connecting-line-axis graphs represent a luminance change of a difference image along the connecting line axis to be noticed. Therefore, for the connecting line axis CL1 shown in
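A minimal sketch of such a graph, sampling the difference image along a ray from RP_L (the names and clipping behavior are illustrative):

```python
import numpy as np

def line_axis_graph(def_img, ref_point, angle_deg, length):
    """Sample the difference image along the ray (connecting line
    axis) from RP_L at the given angle: the connecting-line-axis
    graph of luminance versus distance."""
    ry, rx = ref_point
    t = np.arange(length, dtype=np.float64)
    xs = np.clip((rx + t * np.cos(np.radians(angle_deg))).astype(int),
                 0, def_img.shape[1] - 1)
    ys = np.clip((ry + t * np.sin(np.radians(angle_deg))).astype(int),
                 0, def_img.shape[0] - 1)
    return def_img[ys, xs]
```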
Moreover, one or at least two positions (position: position on the connecting line axis) having a significant difference amount are detected based on the connecting-line-axis graph created according to the above-described manner. In each of the one or at least two detected positions, a connecting-line vertical axis, which is an axis orthogonal to the connecting line axis, is defined. The defined connecting-line vertical axis has a length equivalent to the continuous significant difference amount.
Therefore, as shown in
The connecting-line-vertical-axis graph is created for each connecting line axis by noticing the one or at least two connecting-line vertical axes thus defined. The created connecting-line-vertical-axis graph represents an average of one or at least two luminance changes, which respectively lie along the one or at least two connecting-line vertical axes defined on the connecting line axis to be noticed.
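A minimal sketch of this averaging, assuming the detected positions and the half-length of each vertical axis are given (all names are illustrative):

```python
import numpy as np

def vertical_axis_graph(def_img, points, angle_deg, half_len):
    """Average the luminance profiles taken perpendicular to the
    connecting line axis at each detected position, giving the
    connecting-line-vertical-axis graph."""
    perp = np.radians(angle_deg + 90.0)
    t = np.arange(-half_len, half_len + 1, dtype=np.float64)
    profiles = []
    for (py, px) in points:
        xs = np.clip((px + t * np.cos(perp)).astype(int),
                     0, def_img.shape[1] - 1)
        ys = np.clip((py + t * np.sin(perp)).astype(int),
                     0, def_img.shape[0] - 1)
        profiles.append(def_img[ys, xs])
    return np.mean(profiles, axis=0)
```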
Thereby, a connecting-line-vertical-axis graph shown in
Thus, upon completion of the connecting-line-axis graph and the connecting-line-vertical-axis graph, which correspond to each of the angles θ1 to θ5, whether or not a luminance characteristic indicated by the connecting-line-axis graph and the connecting-line-vertical-axis graph satisfies a predetermined condition is determined corresponding to each of the angles θ1 to θ5. Herein, the predetermined condition is equivalent to a condition under which a magnitude of a range in which a luminance level continuously rises in the connecting-line-axis graph exceeds a threshold value TH1 and a magnitude of a range in which a luminance level continuously rises in the connecting-line-vertical-axis graph falls below a threshold value TH2.
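A minimal sketch of this test; the run-length measure, the luminance level, and the TH1/TH2 values are illustrative assumptions:

```python
def longest_high_run(profile, level):
    """Length of the longest run of samples above `level`."""
    best = run = 0
    for v in profile:
        run = run + 1 if v > level else 0
        best = max(best, run)
    return best

def is_dynamic_obstacle(axis_graph, vertical_graph, th1, th2, level=30):
    """Predetermined condition: the high-luminance range is long
    along the connecting line axis (a 'fallen' silhouette) but short
    across it."""
    return (longest_high_run(axis_graph, level) > th1 and
            longest_high_run(vertical_graph, level) < th2)
```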
As described above, the image of the three-dimensional obstacle 200 is reproduced as if it had fallen along the connecting line L linking the camera C_2 and the bottom of the obstacle 200. Moreover, when the image (captured from an oblique direction) of the dynamic, three-dimensional obstacle 200 is transformed into the bird's-eye view image, the transformed bird's-eye view image differs, in principle, between the frames. Thereby, the high luminance component representing the obstacle 200 clearly appears in the difference image DEF_2.
Therefore, a luminance level of the difference image corresponding to the obstacle 200 rises over a wide range in the connecting-line-axis graph, while rising over only a narrow range in the connecting-line-vertical-axis graph.
In contrast, the bird's-eye view image corresponding to the planar pattern 300 depicted on the ground matches, in principle, between the frames. Thus, with respect to the pattern 300, only a profile of the pattern 300 appears in the difference image DEF_2, resulting from the error in the process for transforming into a bird's-eye view image or the error in the position alignment between frames. Therefore, a luminance level of the difference image corresponding to the pattern 300 rises only over narrow ranges in both the connecting-line-axis graph and the connecting-line-vertical-axis graph.
Graphs that satisfy the predetermined condition are the connecting-line-axis graph shown in
An area in which the obstacle 200 exists (area: an area on the reproduced image REP_2) is detected based on the specified connecting-line-axis graph and connecting-line-vertical-axis graph. In the detected area, a rectangular character CT1 is displayed as shown in
The CPU 12p specifically executes a plurality of tasks in parallel, including an image creating task shown in
With reference to
With reference to
In a step S19, one or at least two angle ranges (angle range: an angle range in a rotation direction of the reference axis RAX_L), each of which continuously has a significant difference amount, are specified with reference to the histogram created in the step S17, and each of the one or at least two specified angle ranges is designated as an analysis range. In a step S21, in order to notice a first analysis range out of the one or at least two designated analysis ranges, a variable M is set to "1".
In a step S23, it is determined whether or not the magnitude of an M-th analysis range falls below the reference value REF. When YES is determined, the process advances to a step S25, and on the other hand, when NO is determined, the process advances to a step S27. In the step S25, one connecting line axis extending from the reference point RP_L in parallel to the ground is defined at a center of the M-th analysis range. In the step S27, a plurality of connecting line axes extending from the reference point RP_L in parallel to the ground are defined at a uniform angular interval over the whole region of the M-th analysis range.
In a step S29, it is determined whether or not the variable M reaches a total number (=Mmax) of analysis ranges specified in the step S19. When NO is determined in this step, the variable M is incremented in a step S31, and thereafter, the process is returned to the step S23. As a result, in each of one or at least two analysis ranges specified in the step S19, one or at least two connecting line axes are defined.
When the variable M reaches the total number Mmax, the process advances from the step S29 to a step S33 so as to set the variable N to “1”. In a step S35, out of one or at least two connecting line axes defined according to the above-described manner, an N-th connecting line axis is noticed to create an N-th connecting-line-axis graph. The created N-th connecting-line-axis graph represents the luminance change of the difference image along the N-th connecting line axis.
In a step S37, one or at least two positions having a significant difference amount are detected from the N-th connecting-line-axis graph, and the connecting-line vertical axis, which is orthogonal to the connecting line axis, is defined in each of the detected one or at least two positions. In a step S39, one or at least two defined connecting-line vertical axes are noticed to create the connecting-line-vertical-axis graph. The created connecting-line-vertical-axis graph represents an average of luminance changes (luminance change: a luminance change of the difference image) along each of one or at least two defined connecting-line vertical axes.
In a step S41, it is determined whether or not the variable N reaches the total number (=Nmax) of connecting line axes defined in the step S25 or S27. When NO is determined in this step, the variable N is incremented in a step S43, and thereafter, the process is returned to the step S35. As a result, the connecting-line-axis graph and the connecting-line-vertical-axis graph, which correspond to each of the connecting line axes equivalent to the total number Nmax, are obtained.
When YES is determined in the step S41, the variable N is set again to "1" in a step S45. In a step S47, it is determined whether or not the luminance changes in the N-th connecting-line-axis graph and connecting-line-vertical-axis graph satisfy the predetermined condition. When NO is determined, the process directly advances to a step S53, whereas when YES is determined, the process advances to the step S53 via steps S49 to S51.
In the step S49, based on the N-th connecting-line-axis graph and connecting-line-vertical-axis graph, an area in which the dynamic obstacle exists is specified on the reproduced image REP_L. In the step S51, in order to multiplex the rectangular character on the reproduced image REP_L corresponding to the area specified in the step S49, a corresponding instruction is applied to the display device 14.
In a step S53, it is determined whether or not the variable N reaches “Nmax”, and when NO is determined, the variable N is incremented in a step S55, and then, the process returns to the step S47. When YES is determined in the step S53, it is determined whether or not the variable L reaches “4” in a step S57. When NO is determined, the variable L is incremented in a step S59, and then, the process returns to the step S15. When YES is determined, the process directly returns to the step S11.
As can be seen from the above description, the CPU 12p fetches the object scene images P_1 to P_4 repeatedly outputted from the cameras C_1 to C_4 capturing the object scene in a direction which obliquely intersects the ground (reference surface) (S3). The fetched object scene images P_1 to P_4 are transformed by the CPU 12p into the bird's-eye view images BEV_1 to BEV_4, respectively (S5). The difference between the screens of the transformed bird's-eye view images BEV_1 to BEV_4 is also detected by the CPU 12p (S15). The CPU 12p specifies one portion of difference along the connecting line axis extending in parallel to the ground from each of the reference points RP_1 to RP_4 corresponding to the center of the imaging surfaces of the cameras C_1 to C_4, out of the difference between the screens of each of the bird's-eye view images BEV_1 to BEV_4 (S35). Moreover, the CPU 12p specifies one portion of difference along the connecting-line vertical axis extending in parallel to the ground in a manner to intersect the connecting line axis, out of the difference between the screens of each of the bird's-eye view images BEV_1 to BEV_4 (S39). When the difference thus specified satisfies the predetermined condition, the CPU 12p multiplexes the rectangular character on the maneuver assisting image corresponding to the position of the obstacle area in order to notify the existence of the obstacle (S47 to S51).
The difference to be noticed in this embodiment is equivalent to the difference between the screens of each of the bird's-eye view images BEV_1 to BEV_4 corresponding to the object scene image captured in a direction which obliquely intersects the ground. Therefore, when the dynamic obstacle exists in a position corresponding to the connecting line axis, a difference equivalent to a height of the dynamic obstacle is specified along the connecting line axis, and a difference equivalent to a width of the dynamic obstacle is specified along the connecting-line vertical axis. On the other hand, when the pattern depicted on the ground or the static obstacle exists in a position corresponding to the connecting line axis, a difference equivalent to the error in the process for transforming into the bird's-eye view images BEV_1 to BEV_4 is specified along the connecting line axis and the connecting-line vertical axis. When it is determined whether or not the difference thus specified satisfies the predetermined condition, it becomes possible to improve the performance for sensing a dynamic obstacle.
Notes relating to the above-described embodiment will be shown below. It is possible to arbitrarily combine these notes with the above-described embodiment unless any contradiction occurs.
The coordinate transformation for producing a bird's-eye view image from a photographed image, which is described in the embodiment, is generally called a perspective projection transformation. Instead of using this perspective projection transformation, the bird's-eye view image may also be produced from the photographed image through a well-known planar projection transformation. When the planar projection transformation is used, a homography matrix (coordinate transformation matrix) for transforming a coordinate value of each pixel on the photographed image into a coordinate value of each pixel on the bird's-eye view image is evaluated in advance at a stage of a camera calibrating process. A method of evaluating the homography matrix is well known. Then, during image transformation, the photographed image may be transformed into the bird's-eye view image based on the homography matrix. In either way, the photographed image is transformed into the bird's-eye view image by projecting the photographed image onto the bird's-eye view image.
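A minimal sketch of this alternative, assuming four or more ground-point correspondences obtained at calibration time; the OpenCV calls are standard, while the data and names are illustrative:

```python
import cv2
import numpy as np

def birds_eye_by_homography(img, src_pts, dst_pts, out_size):
    """Planar projection variant: a homography estimated once at
    calibration time, from ground points seen in the camera image
    (src_pts, >= 4 points) and their bird's-eye positions (dst_pts),
    replaces the perspective-projection equations at run time."""
    H, _ = cv2.findHomography(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(img, H, out_size)
```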
Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.