Exemplary embodiments according to the present invention will be explained in detail below with reference to accompanying drawings.
The example herein employs a plurality of conventional cameras to detect the markers in lieu of an all-direction camera. For vehicles equipped with a plurality of cameras at the front, sides, rear, etc., for obtaining a peripheral view relevant to the vehicle, these existing cameras may be utilized.
To distinguish a camera CMi mounted on a moving object M from the virtual camera VCij, the former may be called a "real camera CMi". Here, "marker" means a subject photographed by the camera CMi, for example, a traffic signal, a road traffic sign, a structural body such as a utility pole, or an item that is attached to such a body and is recognizable from the surrounding area.
In the following descriptions and equations, the operation principle is explained using a pair of markers Pi and Pj photographed by cameras CMi and CMj. Since the principle is identical, the explanation is omitted for a pair of markers Pj and Pk photographed by cameras CMj and CMk.
As shown in
A virtual camera VCjk is virtually located at a position from which two markers Pj and Pk, which are actually photographed by different cameras CMj and CMk mounted on the moving object M, can both be photographed. That is, the virtual camera VCjk is virtually installed at the intersecting point of a straight line that passes through the camera CMj and the marker Pj and a straight line that passes through the camera CMk and the marker Pk.
In this way, the virtual camera VCij is located at a position where the inter-marker angle θij can be calculated. Assuming a moving object origin Om to be the optical center, the position Vij of the virtual camera VCij in the moving object coordinate system can be calculated from the directions of the cameras CMi and CMj and the markers Pi and Pj. Similarly, the virtual camera VCjk is set up at a position where the inter-marker angle θjk can be calculated, and assuming the moving object origin Om to be the optical center, the position Vjk of the virtual camera VCjk in the moving object coordinate system can be calculated from the directions of the cameras CMj and CMk and the markers Pj and Pk.
In the described related art, a circle that passes through the markers Pi and Pj and the moving object origin Om was called the marker circle Eij. In this embodiment, however, a circle with a center position Oij and a radius rij that passes through the markers Pi and Pj and the installation position Vij of the virtual camera VCij is called the marker circle Eij. The marker circle Eij is identified by its center position Oij and radius rij (shown in
From the relative position relationship between the installation position Vij of virtual camera VCij and the moving object origin position, a moving object origin trajectory T that becomes a position candidate where the moving object origin position Cij exists is identified.
condition 1: the distance d between the moving object origin position Cij and the virtual camera VCij is constant.
condition 2: the direction of the virtual camera VCij as viewed from the moving object coordinate system (moving object origin Om) is constant (the angle formed by the two lines directed to the marker Pj and to the moving object origin position Cij as viewed from the virtual camera VCij is constant).
condition 3: the virtual camera VCij exists on the marker circle Eij (that is, the angle formed by the two lines directed to the markers Pi and Pj starting from the virtual camera position VCij is constant).
A camera coordinate system 302 is a coordinate system that expresses a horizontal plane, assuming a camera origin Oci to be the optical center of each camera CMi, and is defined by a Yci-axis corresponding to the optical axis of the camera CMi and an Xci-axis perpendicular to the Yci-axis. The camera coordinate system 302 exists for each camera CMi. A global coordinate system 303 is a coordinate system that expresses a horizontal plane and is defined by a Yw-axis corresponding to the north/south direction and an Xw-axis corresponding to the east/west direction, assuming an origin Ow to be a predetermined reference point. A Z-axis (not shown in
Where, ΘH is a horizontal image angle of camera CMi and Wp is a width of an image. When conversion from the camera coordinate system 302 to the moving object coordinate system 301 is executed by horizontal movement of (tx, ty) and rotational movement by an angle β, the camera origin (optical center) Oci of the camera CMi and a straight line Li (marker line) passing through the marker Pi are expressed by equation 3.
Similarly, for the other camera and marker, its camera origin (optical center) and straight line (marker line) passing through the marker can also be calculated by equation 3.
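As a sketch of this step, the computation of the horizontal image angle and the marker line can be illustrated as follows. The exact forms of equations 2 and 3 are not reproduced above, so the linear pixel-to-angle relation and the sign conventions below are assumptions, not the published equations.

```python
import math

def horizontal_image_angle(xp, theta_h, wp):
    """Approximate horizontal image angle of a marker at pixel column xp.

    Assumes a linear relation between the pixel offset from the image
    center and the horizontal image angle ThetaH over the image width Wp
    (the patent's equation 2 may instead use a tangent relation).
    """
    return theta_h * (xp - wp / 2.0) / wp

def marker_line(alpha, beta, tx, ty):
    """Return (origin, direction) of a marker line Li in the moving
    object coordinate system 301 (cf. equation 3).

    The camera optical center (tx, ty) and the rotation angle beta map
    the camera coordinate system 302 into the moving object coordinate
    system 301; the marker is seen at horizontal angle alpha from the
    optical axis (Yci-axis). The sign convention (angle measured toward
    +Xci, forward axis +Ym) is an assumption.
    """
    phi = alpha + beta                           # bearing in the moving object frame
    direction = (math.sin(phi), math.cos(phi))   # unit vector, optical axis along +Ym
    return (tx, ty), direction
```

A marker seen at the image center (xp = Wp/2) then has a horizontal image angle of zero, as expected for a point on the optical axis.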
In the case that Pi and Pj are photographed by different cameras CMi and CMj, the two marker lines intersect at a point Qij. This intersecting point Qij is called the "virtual optical center Qij" and is expressed by equations 4.1 to 4.5.
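Since equations 4.1 to 4.5 are not reproduced here, the computation of the virtual optical center can be sketched as an ordinary two-dimensional line intersection; the parametric formulation below is an illustrative assumption.

```python
def line_intersection(p1, d1, p2, d2, eps=1e-9):
    """Intersection of the lines p1 + s*d1 and p2 + t*d2 in the plane.

    Each line is a marker line given by an origin point and a direction
    vector; the intersection plays the role of the virtual optical
    center Qij. Returns None when the lines are (nearly) parallel, in
    which case no virtual optical center can be formed.
    """
    # Solve p1 + s*d1 = p2 + t*d2 using the 2x2 determinant.
    det = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(det) < eps:
        return None
    s = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / det
    return (p1[0] + s * d1[0], p1[1] + s * d1[1])
```

For example, a line through the origin along the x-axis and a vertical line through (1, −1) intersect at (1, 0).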
The inter-marker angle θij* of two markers Pi and Pj as viewed from the virtual optical center Qij is expressed by equation 5.
θ*ij=αi−βi−αj+βj (5)
An angle φim formed by one marker Pi as viewed from the virtual optical center Qij and the moving object origin Om is expressed by equations 6.1 to 6.3, and an angle φjm formed by the other marker Pj as viewed from the virtual optical center Qij and the moving object origin Om is expressed by equations 6.2, 6.4, and 6.5.
where ki is a unit vector indicating the direction from the virtual optical center Qij to the marker Pi, and kj is a unit vector indicating the direction from the virtual optical center Qij to the marker Pj.
As shown in
Where, Pi*=(pix*, piy*), Pj*=(pjx*, pjy*) are coordinates of marker positions in the global coordinate system 303.
Since the distance |Qij| between the moving object origin Om and the virtual optical center Qij*, and the angle φim formed by the moving object origin Om and the marker Pi, are common to both the moving object coordinate system 301 and the global coordinate system 303, the position Cij* of the moving object M satisfies equation 8.
Here, the position Cij* of the moving object M is merely known to exist on the circle expressed by equations 7.1 and 7.2, and therefore cannot be determined by equation 8 alone. If the virtual optical center Qij* is expressed using a variable μij in a polar coordinate system, it can be rewritten as equation 9.
Q*ij(μij) = O*ij + r*ij(cos μij, sin μij)   (9)
Here, the position Cij* can be calculated for an arbitrary pair of markers Pi and Pj. For example, if the position Cjk* is calculated for another pair of markers Pj and Pk by equation 10, the position Cjk* must coincide with the position Cij*. Therefore, by calculating the variables μij and μjk that make the value of F(μij, μjk) in equation 11 equal to 0, and by substituting the variables μij and μjk into equation 10, it is possible to determine the moving object position Cij* in the global coordinate system 303.
F(μij,μjk)=|C*ij(μij)−C*jk(μjk)| (11)
It is also possible to determine the azimuthal angle ω(μij) with which the direction of the Ym-axis of the moving object coordinate system 301 indicates the direction of moving object M by equations 12.1 and 12.2.
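The marker circle of equations 7.1 and 7.2 and the polar parametrization of equation 9 can be sketched as follows. The published equations are not reproduced above, so the inscribed-angle construction below (a circle on which the two markers subtend the constant angle θ*ij) is an assumed reconstruction, not the patent's exact formulation.

```python
import math

def marker_circle(pi, pj, theta):
    """Center O*ij and radius r*ij of a circle on which the two markers
    pi and pj subtend the inscribed angle theta (0 < theta < pi).

    By the inscribed angle theorem the chord |PiPj| equals 2r sin(theta),
    and the center lies on the chord's perpendicular bisector at offset
    r cos(theta). Only one of the two mirror-symmetric centers is
    returned; the sign of the offset selects the side.
    """
    cx, cy = (pi[0] + pj[0]) / 2.0, (pi[1] + pj[1]) / 2.0  # chord midpoint
    dx, dy = pj[0] - pi[0], pj[1] - pi[1]
    chord = math.hypot(dx, dy)
    r = chord / (2.0 * math.sin(theta))      # inscribed angle theorem
    h = r * math.cos(theta)                  # midpoint-to-center offset
    nx, ny = -dy / chord, dx / chord         # unit normal to the chord
    return (cx + h * nx, cy + h * ny), r

def virtual_center_on_circle(o, r, mu):
    """Polar parametrization Q*ij(mu) = O*ij + r*ij (cos mu, sin mu) (eq. 9)."""
    return (o[0] + r * math.cos(mu), o[1] + r * math.sin(mu))
```

For two markers 2 units apart subtending a right angle, the circle is the one with the chord as diameter: radius 1, center at the chord midpoint.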
The variables μij and μjk can be calculated by using an analytical method or a numerical analysis method. When a numerical analysis method is used, the result depends on the initial value; a good initial value can be obtained by using the previous position of the moving object in the case of continuous measurement, or by using a value obtained from another moving object locating system. In the latter case, the accuracy of the other moving object locating system may be relatively low; for example, a conventional GPS or an electronic compass may be utilized.
Here, a calculation method for the initial value of variable μij from a measured GPS value is explained.
At this time, although the coordinate value of Rij* is not known, it is known that its position is on the circle Eij expressed by the center position Oij* and radius rij* obtained from equations 7.1 and 7.2. Assuming that the angle formed by the line directed from Rij* to the marker Pi* and the line directed from Rij* to the latitude/longitude coordinate Gij* can be approximated by the angle φim (refer to equations 6.1 to 6.3), the virtual optical center coordinate Rij* exists on the circle E expressed by equations 13.1 and 13.2.
Thus, the virtual optical center coordinate Rij* can be obtained by calculating the intersecting point of the circle E and the circle Eij. The virtual optical center coordinate Rij* can be expressed by equation 14.
R*ij = O*ij + r*ij(cos λij, sin λij)   (14)
In equation 14, the value of the variable λij that best satisfies the equation can be determined and used as an initial value of the variable μij.
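The intersection of the circle E and the circle Eij, and the angle λij of equation 14, can be sketched as a standard circle-circle intersection. Equations 13.1 and 13.2 are not reproduced above, so the general two-circle formulation below is an illustrative stand-in for them.

```python
import math

def circle_intersections(o1, r1, o2, r2, eps=1e-9):
    """Intersection points of two circles in the plane.

    Used here to locate the virtual optical center coordinate Rij*,
    which lies on both the circle E and the marker circle Eij.
    Returns a list of zero or two points (two coincident points when
    the circles are tangent).
    """
    dx, dy = o2[0] - o1[0], o2[1] - o1[1]
    d = math.hypot(dx, dy)
    if d < eps or d > r1 + r2 or d < abs(r1 - r2):
        return []                  # coincident, separate, or contained circles
    a = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d)   # distance along the center line
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))      # half-chord height
    mx, my = o1[0] + a * dx / d, o1[1] + a * dy / d
    return [(mx - h * dy / d, my + h * dx / d),
            (mx + h * dy / d, my - h * dx / d)]

def initial_lambda(o_ij, point):
    """Angle lambda_ij of a point on the marker circle Eij, so that
    R*ij = O*ij + r*ij (cos lambda_ij, sin lambda_ij) (equation 14)."""
    return math.atan2(point[1] - o_ij[1], point[0] - o_ij[0])
```

Two unit circles centered at (0, 0) and (2, 0) are tangent at (1, 0), for which λij = 0 relative to the first center.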
Next, the numerical analysis method for variable μij and μjk is described concretely.
The values (μij, μjk) that make F(μij, μjk) equal to 0 can be obtained by using a numerical analysis method. Assuming that an arbitrary position (λij, λjk) is an initial value of (μij, μjk), the position where F becomes 0 (or a minimum value) when moving the position (λij, λjk) in the direction in which the value decreases in the solution space of F(μij, μjk) is the desired value of (μij, μjk).
However, since the curved surface of F(μij, μjk) does not necessarily decrease monotonically and the solution does not necessarily converge to a correct one, the selection of the initial value is important. Hence, convergence to an incorrect solution is avoided either by using the (μij, μjk) value of a previous moving object locating or by using the initial values of the variables λij and λjk obtained from a rough position calculation result by the GPS.
The CPU 901 governs operation control of the entire moving object locating system. The ROM 902 stores a program such as a boot up program. The RAM 903 is used as a work area of the CPU 901. The HDD 904 controls reading/writing of data in the HD 905 under the control of the CPU 901. The HD 905 stores data written under the control of the HDD 904.
The optical disk drive 906 controls reading/writing of the optical disk 907 under the control of the CPU 901. The optical disk 907 stores data written under the control of the optical disk drive 906 and allows the moving object locating system to read the stored data in the optical disk 907.
The display 908 displays data such as documents, images, and function information, as well as a cursor, icons, a toolbox, etc. For the display 908, a cathode ray tube (CRT), a thin film transistor (TFT) liquid crystal display, a plasma display, etc., for example, may be adopted.
The I/F 909 is connected to a network such as the internet through a communication line, and is also connected to external equipment via the network. Furthermore, the I/F 909 acts as an interface between the network and the internal moving object locating system and controls input/output of data from/to external equipment. For the I/F 909, a modem, a local area network (LAN) adapter, etc. for example, may be adopted.
The input key 910 is a set of buttons for inputting characters, numerical values, or various commands and serves as a data entry device. It may be a touch-panel. The microphone 911 receives an audio signal from an external source and converts it to a digital signal. The speaker 912 outputs sound according to the audio signal converted from the digital signal.
The GPS 913 includes an antenna for receiving an electromagnetic wave from a GPS satellite, a tuner for demodulating the received signal, and a logic circuit for calculating the current position according to the demodulated information. The current position of the moving object M can be determined by receiving an electromagnetic wave from the GPS satellite and calculating the geometric position relative to the GPS satellite. The electronic compass 914 is an integrated circuit (IC) that includes an earth magnetism sensor and calculates the direction of the moving object; it detects the earth's magnetism with the earth magnetism sensor and calculates the azimuthal angle of the moving object. The plurality of cameras CM is an aggregate of the cameras CM1 to CMn mounted on the moving object M.
The marker type is information to identify the type of the marker Pi. For example, “0” is defined as a reference point, “1” is defined as a traffic signal, “2” is defined as a road traffic sign, “3” is defined as a utility pole, etc. The marker latitude means the latitude of the marker Pi, and the marker longitude means the longitude of the marker Pi.
Longitudinal distance means the distance in the longitudinal direction of Pi from the reference point, and latitudinal distance means the distance in the latitudinal direction of Pi from the reference point. Specifically, the marker DB 1000 realizes its function by recording media such as the ROM 902, the RAM 903, the HD 905, and the optical disk 907, for example, shown in
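The record layout of the marker DB 1000 can be sketched as follows. The field names and the container type are illustrative assumptions; the embodiment only specifies which quantities are stored (marker type, latitude, longitude, and the two distances from the reference point).

```python
from dataclasses import dataclass

# Marker type codes as defined in the embodiment.
TYPE_NAMES = {0: "reference point", 1: "traffic signal",
              2: "road traffic sign", 3: "utility pole"}

@dataclass
class MarkerRecord:
    """One row of the marker DB 1000 (field names are assumptions)."""
    marker_type: int          # 0: reference point, 1: traffic signal,
                              # 2: road traffic sign, 3: utility pole, ...
    latitude: float           # latitude of the marker Pi
    longitude: float          # longitude of the marker Pi
    longitudinal_dist: float  # distance from the reference point, longitudinal
    latitudinal_dist: float   # distance from the reference point, latitudinal
```

A traffic-signal entry, for instance, would carry marker_type 1 together with its coordinates and reference-point offsets.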
The cameras CMi, CMj, and CMk are set so as to photograph the horizontal road environment from the moving object M. The installation position coordinates (tix, tiy), (tjx, tjy), and (tkx, tky) of the cameras CMi, CMj, and CMk in the moving object coordinate system 301 and the rotation angles βi, βj, and βk that express the difference of their coordinate axes from those of the moving object coordinate system 301 have already been determined. These installation position coordinates and rotation angles are read into the detecting unit 1102 described later.
The image processing unit 1101 extracts images of the markers Pi, Pj and Pk from images photographed outside the moving object M by the plurality of cameras CM. Extraction and recognition of the markers Pi, Pj and Pk in the photographed images may be executed by using an appropriate conventional technology. In general, when the markers Pi, Pj and Pk are traffic signals, they can be extracted by identifying the color (such as red, yellow, or green) and shape (such as a circle) of the light that a traffic signal lens emits. Although one image processing unit 1101 is provided here for each camera, a structure in which a single image processing unit 1101 processes the whole group of cameras CM is also possible.
The detecting unit 1102 detects the positions and directions of the markers Pi, Pj and Pk contained in each image photographed of the outside environment of the moving object M. More concretely, the detecting unit calculates, by using equation 2, the horizontal image angles αi, αj and αk of the markers Pi, Pj and Pk whose images were extracted by the image processing unit 1101. Then, referring to the position information of the markers Pi, Pj and Pk stored in the marker DB 1000 in the global coordinate system 303 (for example, latitude, longitude, or an orthogonal coordinate value relative to the reference point), the detecting unit extracts the marker data corresponding to the observed markers Pi, Pj and Pk from the marker DB 1000.
Since it is difficult to identify the markers Pi, Pj and Pk when the position and direction of the moving object M are unknown, the detecting unit 1102 identifies the positions of the image-processed markers in the global coordinate system 303 by using the output of the GPS 913 as the position of the moving object M and the measured value of the electronic compass 914 as its direction, estimating the latitude/longitude and marker type of the extracted markers Pi, Pj and Pk, and extracting the nearest marker data from the marker DB 1000.
In the case that position measurement is executed continuously, it is also possible to use the position data of the vehicle itself measured by the moving object locating system 1100 at a previous time. When an assumed position and direction of the vehicle itself are utilized, the detecting unit 1102 extracts the markers that would exist at the horizontal direction angles αi, αj and αk of the cameras CMi, CMj and CMk if the moving object M were placed at that position and direction. It then selects, from the extracted candidates, the markers Pi, Pj and Pk with the nearest distance and the smallest angle error as the corresponding markers (coordinate values Pi*, Pj*, and Pk* in the global coordinate system 303).
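The nearest-distance, smallest-angle-error selection can be sketched as follows. The pose format, the candidate tuple layout, and the angle tolerance are assumptions introduced for illustration; the embodiment does not specify them.

```python
import math

def match_marker(observed_bearing, pose, candidates,
                 max_angle_err=math.radians(5)):
    """Pick the DB marker that best matches an observed bearing.

    pose is an assumed (x, y, omega) estimate from the GPS 913 and the
    electronic compass 914; candidates is a list of (marker_id, x, y)
    in the global coordinate system 303. The marker whose predicted
    bearing is closest to the observed one (within max_angle_err) wins,
    with ties broken by distance, loosely following the embodiment's
    nearest-distance / smallest-angle-error rule.
    """
    x0, y0, omega = pose
    best = None
    for mid, mx, my in candidates:
        bearing = math.atan2(mx - x0, my - y0) - omega   # relative to the Ym-axis
        # Wrap the angle difference into (-pi, pi] before taking its magnitude.
        err = abs(math.atan2(math.sin(bearing - observed_bearing),
                             math.cos(bearing - observed_bearing)))
        dist = math.hypot(mx - x0, my - y0)
        if err <= max_angle_err and (best is None or (err, dist) < best[:2]):
            best = (err, dist, mid)
    return None if best is None else best[2]
```

When no candidate falls within the angle tolerance, the function returns None, signalling that the observed marker has no plausible DB counterpart under the assumed pose.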
Marker data Di = {αi, βi, (tix, tiy), Pi*}, Dj = {αj, βj, (tjx, tjy), Pj*}, and Dk = {αk, βk, (tkx, tky), Pk*} of the extracted markers Pi*, Pj* and Pk* are output to the identifying unit 1103.
The identifying unit 1103 identifies the moving object origin trajectory Cij* relating to an unknown origin position of the moving object M according to the markers Pi* and Pj* (marker data Di, marker data Dj) detected by the detecting unit 1102. It also identifies the moving object origin trajectory Cjk* relating to an unknown origin position of the moving object according to the markers Pj* and Pk* (marker data Dj, marker data Dk). A detailed functional structure of the identifying unit 1103 is described later (refer to
The locating unit 1104 locates the position of an unknown moving object origin to a coordinate of an intersecting point of trajectories of moving object origin Cij* and Cjk* identified by the identifying unit 1103 by using equations 10 and 11. It also locates the azimuthal angle ω(μij) of the moving object M by using equations 12.1 and 12.2. That is, the locating unit 1104 outputs the coordinate of the intersecting point of trajectories of the moving object origin Cij* and Cjk* and the azimuthal angle ω(μij) of the moving object M as the output data 1110.
Next, a functional structure of the identifying unit 1103 is described in detail.
Here, the case in which the identifying unit reads the two marker data Di and Dj is explained. For the case in which the identifying unit reads the two marker data Dj and Dk, the description is skipped because the same explanation applies by simply replacing the suffix "ij" with "jk".
First, the virtual optical center calculation unit 1131ij acquires two marker data Di and Dj and calculates the virtual optical center Qij by using equation 4.1 to 4.2. Using equation 5, the inter-marker angle calculation unit 1132ij calculates the inter-marker angle θij* of the two markers Pi and Pj as viewed from the virtual optical center Qij that was calculated by the virtual optical center calculation unit 1131ij.
Using equations 7.1 and 7.2, the marker circle calculation unit 1133ij calculates the center position Oij* and radius rij* of the marker circle Eij for identifying, in the global coordinate system 303, the position of the virtual optical center Qij estimated from the inter-marker angle θij* calculated by the inter-marker angle calculation unit 1132ij.
Using equations 8 and 9, the function decision unit 1134ij decides the origin trajectory function Cij*(μij) of the moving object M in the global coordinate system 303 (refer to equation 10). In this way, the function decision unit 1134 decides the origin trajectory functions Cij*(μij) and Cjk*(μjk) for each marker pair (Pi, Pj) and (Pj, Pk).
In this case, the locating unit 1104 calculates the (μij, μjk) that makes the evaluation function F(μij, μjk) "0" or minimizes it by using the origin trajectory function Cij*(μij) calculated from one marker pair (Pi, Pj) and the origin trajectory function Cjk*(μjk) calculated from the other marker pair (Pj, Pk). Then, by substituting the calculated (μij, μjk) into equations 9 and 10, the position of the moving object origin can be obtained, and by substituting the calculated (μij, μjk) into equations 12.1 and 12.2, the azimuthal angle as the direction of the moving object M can be obtained.
In order to calculate the (μij, μjk) that minimizes the evaluation function F(μij, μjk), an exploratory numerical analysis method such as the steepest descent method can also be used. In this case, because whether the optimum solution is obtained depends on the selected initial value, the number of exploratory repetitions changes greatly depending on the initial value. Therefore, the values derived from the position and azimuthal angle obtained from the GPS 913 and the electronic compass 914 may be used as the initial value of the exploration. When position measurement is executed continuously, the exploration can be started using the value derived from the position and azimuthal angle obtained during the previous measurement, or the previous value of (μij, μjk) itself.
Then, the image processing unit 1101 extracts marker images from images photographed by the camera CMi (step S1302). The detecting unit 1102 calculates the horizontal image angle αi (step S1303) and detects the marker data Di, Dj and Dk of the markers Pi*, Pj* and Pk* in the global coordinate system 303 (step S1304).
Next, the identifying unit 1103 first calculates the marker line Li of marker Pi* and the marker line Lj of the marker Pj* by using equation 3 (step S1305), and the virtual optical center calculation unit 1131 calculates the virtual optical center Qij that is an intersecting point of the marker line Li and Lj by using equations 4.1 to 4.2 (step S1306).
Then, the inter-marker angle calculation unit 1132ij calculates the inter-marker angle θij* of two markers Pi and Pj as viewed from the virtual optical center Qij calculated by the virtual optical center calculation unit 1131 by using equation 5 (step S1307).
By using equations 7.1 and 7.2, the marker circle calculation unit 1133ij calculates the center position Oij* and the radius rij* of the marker circle Eij to identify, in the global coordinate system 303, the position of the virtual optical center Qij estimated from the inter-marker angle θij* calculated by the inter-marker angle calculation unit 1132ij (step S1308).
The function decision unit 1134ij calculates the angle φim between one marker Pi and the moving object origin Om as viewed from the virtual optical center Qij by using equations 6.1 to 6.3, and also calculates the angle φjm between the other marker Pj and the moving object origin Om as viewed from the virtual optical center Qij by using equations 6.2, 6.4, and 6.5 (step S1309).
The processes from step S1305 to step S1309 are executed similarly for the marker data Dj and Dk of the markers Pj* and Pk*, using the inter-marker angle calculation unit 1132jk, the marker circle calculation unit 1133jk, and the function decision unit 1134jk.
After this, the function decision unit 1134 executes the variable decision process (step S1310). That is, it decides the origin trajectory functions Cij*(μij), Cjk*(μjk), . . . for each marker pair (Pi, Pj), (Pj, Pk), . . . by using equations 8 and 9. Then, by substituting the origin trajectory functions Cij*(μij) and Cjk*(μjk) into equation 11, the function decision unit 1134 decides the variables (μij, μjk) that make the evaluation function F(μij, μjk) "0" or minimize it.
The locating unit 1104 calculates the moving object origin position Cij* in the global coordinate system 303 by substituting (μij, μjk) into equations 9 and 10 (step S1311). The locating unit 1104 also calculates the azimuthal angle ω(μij) as the direction of the moving object M by substituting (μij, μjk) into equations 12.1 and 12.2 (step S1312).
After this, whether to continue moving object locating is judged (step S1313). When moving object locating is continued (step S1313: Yes), the process returns to step S1302. In this case, calculation of a new position and azimuthal angle of the moving object origin is executed by using the moving object origin position Cij* and the azimuthal angle ω(μij) calculated in steps S1311 and S1312. On the other hand, when moving object locating is not continued (step S1313: No), the moving object locating procedure is completed.
First, it is assumed that an initial value of the variable (μij, μjk) corresponds to a focused point (λij, λjk) and its step width is Δλ (step S1401). Then, the focused point (λij,λjk) is substituted into the evaluation function F and, at the same time, values neighboring the focused point (λij,λjk) (in this case, neighboring four points) are substituted into the evaluation function F (step S1402).
As shown in
On the other hand, in the case that all of the values of the evaluation function F corresponding to the neighboring four points are greater than the value of the evaluation function F at the focused point (λij, λjk) (step S1403: Yes), whether the step width Δλ is less than a threshold value λt (for example, λt = π/1000) is judged (step S1405).
When the step width Δλ is equal to or greater than the threshold value λt (step S1405: No), the process halves the step width Δλ (step S1406) and returns to step S1402. On the other hand, when the step width Δλ is smaller than the threshold value λt (step S1405: Yes), the process decides to take the finally obtained focused point (λij, λjk) as the value of the variables (μij, μjk) (step S1407).
In this way, the process is completed on the judgment that a sufficiently accurate solution has been obtained. Although this variable decision process is a numerical analysis in the angle space of μ, the same result is obtained if minimization of the evaluation function F is instead executed in the same orthogonal coordinate space as the global coordinate system 303.
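The variable decision process of steps S1401 to S1407 can be sketched as a neighborhood-descent search: evaluate F at the focused point and its four neighbors, move to a smaller neighbor when one exists, and otherwise halve the step width until it drops below the threshold λt. The default step width below is an assumption; the embodiment specifies only the threshold example λt = π/1000.

```python
import math

def decide_variables(F, lam_init, step=0.1, threshold=math.pi / 1000):
    """Neighborhood-descent search for (mu_ij, mu_jk), following steps
    S1401-S1407 of the variable decision process.
    """
    lx, ly = lam_init                      # focused point (lambda_ij, lambda_jk)
    while step >= threshold:               # S1405: compare step width with lambda_t
        here = F(lx, ly)
        neighbours = [(lx + step, ly), (lx - step, ly),
                      (lx, ly + step), (lx, ly - step)]
        best = min(neighbours, key=lambda p: F(*p))
        if F(*best) < here:
            lx, ly = best                  # S1404: move to the smaller neighbour
        else:
            step /= 2.0                    # S1406: halve the step width
    return lx, ly                          # S1407: adopt the focused point
```

Applied to a smooth bowl-shaped F, the search walks to the minimum one axis at a time and then refines with ever smaller steps, mirroring the flow of steps S1402 to S1407.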
The moving object locating method described in this embodiment can be realized by executing a prepared program on a computer such as a personal computer or a workstation. The program is recorded on a computer-readable recording medium such as a hard disk, a flexible disk, a CD-ROM, an MO, or a DVD, and is executed by being loaded from the recording medium into the computer. The program may also be a transmission medium that can be delivered to users via a network such as the Internet.
According to the embodiments described above, it is possible to increase the camera installation flexibility on a moving object as well as to increase accuracy in locating the moving object.
Although the invention has been described with respect to a specific embodiment for a complete and clear disclosure, the appended claims are not to be thus limited but are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art which fairly fall within the basic teaching herein set forth.
Number | Date | Country | Kind |
---|---|---|---|
2006-230471 | Aug 2006 | JP | national |