The present disclosure relates to displacement measurement.
PTL 1 discloses a method of correcting a displacement measurement by subtracting an error component caused by a movement of a camera device. In addition, PTL 2 discloses a method of calculating a displacement or the like of a subject by simultaneously imaging the subject and a steady point other than the subject. Note that NPL 1 and NPL 2 disclose bundle adjustment for reconstructing a three-dimensional shape from images.
Each of PTL 1 and PTL 2 discloses a method of suppressing the influence of camera movement in displacement measurement using photographed images. However, the methods described in PTL 1 and PTL 2 have a problem in that it is difficult to measure a displacement of a measurement object with high precision unless the resolution of the camera is sufficiently high.
An illustrative object of the present disclosure is to provide technology for measuring a displacement of a measurement object with high precision.
In one mode, there is provided a displacement measurement device including: an acquiring means for acquiring an image including a first position and an image including a second position, which are captured by an imaging means in a first time period, and an image including the first position and an image including the second position, which are captured by the imaging means in a second time period; and a calculation means for calculating a difference between the second position in the first time period and the second position in the second time period, the difference being based on the first position, using a plurality of the acquired images and correlation information calculated based on configuration information of the imaging means.
In another mode, there is provided a displacement measurement system including: an imaging means for capturing an image including a first position and an image including a second position in a first time period and in a second time period; and a calculation means for calculating a difference between the second position in the first time period and the second position in the second time period, the difference being based on the first position, using a plurality of the acquired images and correlation information calculated based on configuration information of the imaging means.
In still another mode, there is provided a displacement measurement method including: acquiring an image including a first position and an image including a second position, which are captured by an imaging means in a first time period, and an image including the first position and an image including the second position, which are captured by the imaging means in a second time period; and calculating a difference between the second position in the first time period and the second position in the second time period, the difference being based on the first position, using a plurality of the acquired images and correlation information calculated based on configuration information of the imaging means.
In still another mode, there is provided a program for causing a computer to execute: a process of acquiring an image including a first position and an image including a second position, which are captured by an imaging means in a first time period, and an image including the first position and an image including the second position, which are captured by the imaging means in a second time period; and a process of calculating a difference between the second position in the first time period and the second position in the second time period, the difference being based on the first position, using a plurality of the acquired images and correlation information calculated based on configuration information of the imaging means.
According to the present disclosure, a displacement of a measurement object is measured with high precision.
The measurement object in this context is, for instance, a structure such as a building or a bridge. In some modes, the measurement object is an object whose displacement must be measured with high precision relative to the size of the object. However, the measurement object is not limited to any specific object, as long as displacement measurement by the methods described below is possible. In addition, although displacement of the measurement object occurs due to factors such as time (degradation or the like), temperature (thermal expansion or the like) and a load (presence/absence of a load, or the like), the factors are not limited to specific ones.
The acquiring unit 110 acquires images captured by an imaging unit. The imaging unit in this context is configured to include, for example, one or a plurality of cameras each including an imaging element for converting light to image information of each pixel, such as a Charge-coupled device (CCD) image sensor or a Complementary metal-oxide-semiconductor (CMOS) image sensor. The imaging unit may be mounted on a moving body such as a vehicle or aircraft.
The acquiring unit 110 acquires an image, for example, by accepting an input of image data expressed in a predetermined format. The image acquired by the acquiring unit 110 may be a visible image, but may also be an image including information of a wavelength in an invisible region, such as near-infrared light. In addition, the image acquired by the acquiring unit 110 may be either a monochromatic image or a color image, and the number of pixels and the number of gray levels (color depth) are not particularly limited. The acquisition by the acquiring unit 110 may be acquisition via a communication line, or may be readout from a recording medium included in the device itself or in some other device.
The acquiring unit 110 acquires an image including a first position, and an image including a second position. The first position is a position which serves as a base in displacement measurement. On the other hand, the second position is a position different from the first position, and is the position that is the object of displacement measurement. For example, the first position is a position at which displacement substantially does not occur, or is negligibly small compared to the displacement of the second position. The second position is, for example, a position at which displacement tends to occur or can easily be measured. Hereinafter, the image including the first position is also referred to as “first image”, and the image including the second position is also referred to as “second image”.
The first position and second position may be any positions that can be distinguished from other positions. The first position and second position may be positions provided with signs, such as markers, which make visual identification easier. Alternatively, for example, when it is difficult to place a sign on a measurement object, the first position and second position may be visually identified by a slight difference in roughness or color on the surface of the object.
The second position is a part of the measurement object. On the other hand, the first position may or may not be a part of the measurement object. For example, the first position may be a part of another object whose displacement is smaller than that of the measurement object (or in which no displacement occurs). In other words, the first position and second position need not be included in a single object.
The acquiring unit 110 acquires a first image and a second image, which are captured in a first time period, and a first image and a second image, which are captured in a second time period. In other words, the acquiring unit 110 acquires a plurality of first images and a plurality of second images, which are captured at different timings. The second time period is, for example, after the first time period, and is a period in which a displacement occurs (or may possibly occur) in the measurement object. A plurality of first images and a plurality of second images may be captured in each of the first time period and second time period.
The first image and second image are, for example, a pair of images captured at a predetermined timing. The acquiring unit 110 can also acquire the first image and second image by acquiring a movie from the imaging unit and extracting still images at specific time points of the acquired movie. For example, the acquiring unit 110 may extract still images captured at an identical time instant from a plurality of movies captured by a plurality of cameras, and may use the extracted still images as the first image and second image. Alternatively, the acquiring unit 110 may use, as the first image and second image, still images captured by a plurality of cameras which are controlled to perform imaging at the same timing.
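For illustration, the movie-based acquisition described above can be sketched as follows in Python. This is a minimal sketch assuming OpenCV; the movie file names and the chosen time instant are hypothetical, and the acquiring unit 110 is not limited to this implementation.

```python
# Minimal sketch (assuming OpenCV; file names and timestamp are hypothetical)
# of extracting still images at an identical time instant from movies
# captured by two cameras, for use as the first image and second image.
import cv2

def frame_at(video_path, t_seconds):
    """Return the frame of the movie closest to the given time instant."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, t_seconds * 1000.0)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError(f"no frame at t={t_seconds}s in {video_path}")
    return frame

first_image = frame_at("camera1.mp4", 12.0)   # image including the first position
second_image = frame_at("camera2.mp4", 12.0)  # image including the second position
```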
The calculation unit 120 calculates a displacement of the measurement object. The calculation unit 120 calculates a displacement of the second position, the displacement being based on the first position, by using the images acquired by the acquiring unit 110, and correlation information calculated based on configuration information of the imaging unit. The displacement in this context means a difference between the second positions when the second position in the first time period and the second position in the second time period are compared.
The configuration information in this context is, for example, information indicating the difference between the imaging directions of the first image and the second image, and the imaging magnifications of the first image and the second image. For example, when the first image and second image are captured by different cameras, the configuration information may represent the imaging conditions of the plural cameras. The imaging conditions in this context include, for example, the relative positions or angles of the plural cameras. The configuration information may be stored in advance in the displacement measurement device 100, or may be acquired via the acquiring unit 110 together with the first image and second image.
The correlation information is described by, for example, a homogeneous transformation matrix, Euler angles, a quaternion, or the like. The correlation information makes it possible to describe the coordinates of the first image and the coordinates of the second image using a common coordinate system. The correlation information may be calculated by the displacement measurement device 100, or may be calculated in advance by a device different from the displacement measurement device 100.
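As an illustration of correlation information in the form of a homogeneous transformation matrix, the following is a minimal numpy sketch; the rotation and translation values are hypothetical. It shows how a 4x4 matrix lets the coordinates of two cameras be described in a common coordinate system.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transformation matrix from a 3x3 rotation R
    and a translation vector t."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Hypothetical relative pose of the second camera seen from the first camera.
R12 = np.eye(3)                      # relative rotation (identity here)
t12 = np.array([0.5, 0.0, 0.0])      # relative translation
M12 = homogeneous(R12, t12)

# A point expressed in the second camera's coordinates, rewritten in the
# first camera's coordinates (homogeneous coordinates):
p_cam2 = np.array([0.1, 0.2, 3.0, 1.0])
p_cam1 = M12 @ p_cam2
```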
The correlation information correlates the first image and second image. To be more specific, the correlation information correlates the first image and second image which are captured in the same time period. It can also be said that the correlation by the correlation information is correlating the first position included in the first image and the second position included in the second image. By using the correlation information, the calculation unit 120 can correlate a plurality of images whose imaging ranges do not overlap.
A first camera captures a first image 211 in the first time period, and captures a first image 221 in the second time period. The first images 211 and 221 are common in that the first images 211 and 221 include the first position 201, but their imaging ranges may not necessarily agree. In addition, a second camera captures a second image 212 in the first time period, and captures a second image 222 in the second time period.
The relative positional relationship between the first camera and second camera is unchanged between the first time period and second time period. Then, assuming that the measurement object 200 has not deformed, the position where the measurement object 200 appears in the second image 222 in the second time period can be uniquely specified from the first image 221 in the second time period and the configuration information, as indicated by a two-dot-and-dash line in the drawing.
However, if the measurement object 200 has actually deformed, the position where the measurement object 200 appears in the second image 222 differs from the position indicated by the two-dot-and-dash line. The calculation unit 120 can calculate a displacement D based on the difference between the actual position of the second position 202 in the second image 222 and the position expected from the first image 221 and the correlation information.
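The core of this calculation can be illustrated by the following minimal sketch; the pixel coordinates are hypothetical, and the step of predicting the expected position from the first image 221 and the correlation information is assumed to have been done already.

```python
import numpy as np

# Position of the second position 202 expected in the second image 222 if the
# measurement object had not deformed (assumed precomputed from the first
# image 221 and the correlation information; values are hypothetical).
expected_px = np.array([412.0, 305.0])
# Position actually observed in the second image 222.
actual_px = np.array([415.5, 301.0])

displacement_vec = actual_px - expected_px   # direction of the displacement
D = np.linalg.norm(displacement_vec)         # magnitude of the displacement D
```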
In step S310, the acquiring unit 110 acquires a first image and a second image which are captured in the first time period, and a first image and a second image which are captured in the second time period. Note that the acquiring unit 110 may simultaneously acquire, or may acquire at different timings, the images captured in the first time period and the images captured in the second time period. In step S320, the calculation unit 120 calculates a displacement of the measurement object by using the first images and second images acquired in step S310 and the correlation information.
As described above, the displacement measurement device 100 is configured to calculate a displacement of the measurement object by using the correlation information calculated based on the configuration information of the imaging unit. This configuration enables displacement measurement by local imaging of the measurement object, without imaging the entirety of the measurement object. Thus, according to the displacement measurement device 100, compared to the case in which the entirety of the measurement object is imaged, the resolution per unit area of the image including the measurement object can be enhanced, and therefore the displacement of the measurement object can be measured with high precision. In other words, it can be said that the displacement measurement device 100 can measure the displacement of the measurement object with high precision, even without enhancing the resolution of image sensors.
Second Example Embodiment
The UAV 410, while flying, images a measurement object. The UAV 410 may be remote-controlled by the displacement measurement device 420 or other remote-control equipment, or may be configured to image a specific position of the measurement object by image recognition. Further, the UAV 410 may be configured to continue imaging a specific position of the measurement object for a predetermined time period while hovering. The UAV 410 includes an imaging unit 411 and a communication unit 412.
The imaging unit 411 further includes cameras 411a, 411b and 411c. The imaging unit 411 generates image data representing images captured by the cameras 411a, 411b and 411c, and supplies the generated image data to the communication unit 412. The imaging unit 411 may execute a well-known image process for facilitating the arithmetic process in the displacement measurement device 420. In addition, the imaging unit 411 may capture a movie instead of still images.
The communication unit 412 transmits the image data, which are supplied from the imaging unit 411, to the displacement measurement device 420. The communication unit 412 executes a process, such as encoding, on the image data supplied from the imaging unit 411, and transmits the image data to the displacement measurement device 420 according to a predetermined communication method.
The displacement measurement device 420 includes a communication unit 421 and a calculation unit 422. The communication unit 421 receives image data from the UAV 410. The communication unit 421 supplies the image data, which were transmitted via the communication unit 412, to the calculation unit 422. The calculation unit 422 further includes a first calculation unit 422a which calculates a homogeneous transformation matrix, and a second calculation unit 422b which calculates a displacement of the measurement object. The homogeneous transformation matrix corresponds to an example of the correlation information in the first example embodiment. The second calculation unit 422b calculates a displacement of the measurement object by using the image data supplied from the communication unit 421 and the homogeneous transformation matrix calculated by the first calculation unit 422a.
In the present example embodiment, the communication unit 421 corresponds to an example of the acquiring unit 110 in the displacement measurement device 100 of the first example embodiment. In addition, the calculation unit 422 corresponds to an example of the calculation unit 120 in the displacement measurement device 100 of the first example embodiment.
Note that the displacement measurement device 420 may include a configuration for recording the displacement calculated by the calculation unit 422. For example, the displacement measurement device 420 may include a recording device which records in a recording medium the displacement calculated by the calculation unit 422 together with the date/time (imaging date/time). Alternatively, the displacement measurement device 420 may include a display device which displays information corresponding to the displacement calculated by the calculation unit 422.
The displacement measurement system 400, under the above configuration, images the measurement object and calculates the displacement of the measurement object. Specifically, the displacement measurement system 400 can calculate the displacement of the measurement object by operating as described below. Hereinafter, as an example, it is assumed that the measurement object is a bridge.
In some modes, the positions P1, P2 and P3 are positions where base points, such as feature points, can easily be extracted. To be more specific, the positions P1, P2 and P3 are positions including specific patterns or objects, which can easily be distinguished from other areas. For example, each of the positions P1, P2 and P3 may include a character or sign which is drawn, or may include a boundary between a certain member and another member.
Hereinafter, each of the positions P1 and P3 is also referred to as “base position”. The base position corresponds to one example of the first position in the first example embodiment. In addition, hereinafter, the position P2 is also referred to as “measurement position”. The measurement position corresponds to an example of the second position in the first example embodiment. Each of the base position and measurement position is an area with a certain size, and may include a plurality of feature points that are to be described later.
When displacement measurement is performed, the displacement measurement device 420 acquires in advance intrinsic parameters of the cameras 411a, 411b and 411c. The intrinsic parameters are, for example, an optical axis center and a focal distance. The intrinsic parameters may be provided from a camera maker, or may be obtained in advance by calibration for calculating the intrinsic parameters. In addition, the displacement measurement device 420 acquires in advance parameters indicative of the relative positions and angles of the cameras 411a, 411b and 411c. The parameters correspond to one example of the configuration information in the first example embodiment.
Besides, in the present example embodiment, the calculation unit 422 calculates in advance a homogeneous transformation matrix, before executing displacement measurement. Note that the calculation unit 422 may calculate the homogeneous transformation matrix while executing displacement measurement (e.g. between step S810 and step S820 to be described later). The calculation unit 422 can calculate the homogeneous transformation matrix, for example, by executing the calibration to be illustrated below.
The pattern 710 is imaged by the camera 411a. The pattern 720 is imaged by the camera 411b. The pattern 730 is imaged by the camera 411c. The patterns 710, 720 and 730 are provided at positions where the cameras 411a, 411b and 411c, once positioned, can image them simultaneously.
The calculation unit 422 sets any one of the cameras 411a, 411b and 411c as a base, and calculates homogeneous transformation matrices indicating the relationship between the base camera and the other cameras. Here, the camera 411a is set as the base. In this case, the calculation unit 422 calculates a homogeneous transformation matrix of four rows and four columns (hereinafter referred to as “M12”) which indicates the relationship between the camera 411a and camera 411b three-dimensionally, and a homogeneous transformation matrix of four rows and four columns (hereinafter referred to as “M13”) which indicates the relationship between the camera 411a and camera 411c three-dimensionally. The calculation unit 422 can calculate the homogeneous transformation matrices M12 and M13 by using any of the generally well-known methods for calculating homogeneous transformation matrices.
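One generally known way to obtain such matrices is sketched below in Python, assuming OpenCV and assuming that the corner coordinates of the patterns 710, 720 and 730 are known in a single shared world coordinate system; the variable names (obj_pts_710, img_pts_a, K_a, dist_a and so on) are hypothetical inputs. This is a sketch of one possible calibration, not the method the disclosure prescribes.

```python
import cv2
import numpy as np

def camera_from_world(obj_pts_world, img_pts, K, dist):
    """Estimate the 4x4 transform from world coordinates to camera
    coordinates, using pattern corners with known 3-D world positions."""
    ok, rvec, tvec = cv2.solvePnP(obj_pts_world, img_pts, K, dist)
    assert ok
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tvec.ravel()
    return T

# World-to-camera transforms of the three cameras from their patterns.
T_a = camera_from_world(obj_pts_710, img_pts_a, K_a, dist_a)  # camera 411a
T_b = camera_from_world(obj_pts_720, img_pts_b, K_b, dist_b)  # camera 411b
T_c = camera_from_world(obj_pts_730, img_pts_c, K_c, dist_c)  # camera 411c

# Homogeneous transformation matrices relating the base camera 411a
# to the other cameras.
M12 = T_a @ np.linalg.inv(T_b)   # camera 411b coordinates -> camera 411a
M13 = T_a @ np.linalg.inv(T_c)   # camera 411c coordinates -> camera 411a
```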
For example, the UAV 410 ascends or descends while flying, keeping the positions P1, P2 and P3 within the imaging ranges, and thereby images the positions P1, P2 and P3 at a plurality of imaging positions. In other words, it can be said that the UAV 410 images the positions P1, P2 and P3 from a plurality of viewpoints. For example, the UAV 410 images the positions P1, P2 and P3 such that parallax arises between the captured images due to the ascent or descent.
In step S810, the communication unit 421 acquires a number of image data corresponding to the product of the number of cameras and the number of imaging operations. For example, in the case of the present example embodiment, the number of cameras included in the UAV 410 is “3”. Accordingly, if the number of imaging operations is “M”, the total number of first images and second images acquired in step S810 is “3M”.
In step S820, the calculation unit 422 extracts feature points from each of the first images and second images represented by the plural image data received in step S810. To be more specific, the calculation unit 422 extracts feature points from the position P1, P2 or P3 included in the respective images. Feature amounts usable in step S820 are, for example, feature amounts representing local features of images, such as Features from Accelerated Segment Test (FAST) feature amounts or Scale-Invariant Feature Transform (SIFT) feature amounts.
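For reference, extracting such feature points is straightforward with common libraries. The following minimal sketch assumes OpenCV and a hypothetical image file; FAST yields corner keypoints only, while SIFT also yields descriptors usable for matching.

```python
import cv2

img = cv2.imread("first_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# FAST: corner detection representing local features (keypoints only).
fast = cv2.FastFeatureDetector_create()
fast_keypoints = fast.detect(img, None)

# SIFT: scale-invariant keypoints with descriptors for matching
# between images and between time periods.
sift = cv2.SIFT_create()
sift_keypoints, sift_descriptors = sift.detectAndCompute(img, None)
```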
In step S830, the calculation unit 422 correlates the first image including the base position and the second image including the measurement position, by using the feature points extracted in step S820. In the present example embodiment, the calculation unit 422 can correlate the first image and second image three-dimensionally, by reconstructing the three-dimensional shape of the feature points using an algorithm that extends bundle adjustment.
The bundle adjustment is a method of reconstructing (estimating) the three-dimensional shape of a captured scene, based on the base points included in a plurality of images captured by photographing the same position. The bundle adjustment is one of elemental techniques of Structure from Motion (SfM). In the bundle adjustment, captured video is modeled by perspective projection of the following equation (1).
In equation (1), x and y are the position of a certain point in an image, i.e. two-dimensional coordinates. In addition, X, Y and Z indicate three-dimensional coordinates of this point in the space. Symbols s and f0 are freely chosen proportionality factors, which are not zero. P is a matrix of three rows and four columns, which is called “projection matrix”.
The projection matrix P is expressed by the following equation (2), when the focal distance is f, the optical axis center is (u0, v0), the central position of the lens in a world coordinate system is t=(xt, yt, zt), the rotation matrix representing directions is R, and the identity matrix is I. In equation (2), K is an intrinsic matrix (also called “intrinsic parameter matrix”) relating to the camera. (I −t) is a matrix in which the identity matrix I and −t are arranged in the column direction, and is a matrix of three rows and four columns in equation (2).
When an element of an i-th row and a j-th column of the projection matrix P is expressed as pij, x and y are expressed by the following equation (3).
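The equations themselves are not reproduced in this text. The following LaTeX is a reconstruction of equations (1) to (3) from the definitions above, following the standard perspective-projection formulation of the bundle adjustment literature; the exact form of the intrinsic matrix K is an assumption.

\[
s\begin{pmatrix} x/f_{0} \\ y/f_{0} \\ 1 \end{pmatrix}
= P\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}
\tag{1}
\]
\[
P = KR\,(I\;\; -t), \qquad
K = \begin{pmatrix} f & 0 & u_{0} \\ 0 & f & v_{0} \\ 0 & 0 & f_{0} \end{pmatrix}
\tag{2}
\]
\[
x = f_{0}\,\frac{p_{11}X + p_{12}Y + p_{13}Z + p_{14}}{p_{31}X + p_{32}Y + p_{33}Z + p_{34}},
\qquad
y = f_{0}\,\frac{p_{21}X + p_{22}Y + p_{23}Z + p_{24}}{p_{31}X + p_{32}Y + p_{33}Z + p_{34}}
\tag{3}
\]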
Here, suppose that N points (Xα, Yα, Zα) in the scene were photographed M times from mutually different positions, and that these were observed at a position (xακ, yακ) in the κ-th image (κ=1, 2, . . . , M, α=1, 2, . . . , N). When the projection matrix for the κ-th image is Pκ, the total sum of squares of the differences between the positions at which all the points should be projected and the observed positions is expressed by the following equation (4). E expressed by equation (4) is called “reprojection error”.
Here, Iακ is a visualization index. The visualization index Iακ is “1” when the point (Xα, Yα, Zα) appears in the κ-th image, and is “0” when the point (Xα, Yα, Zα) does not appear in the κ-th image. In addition, an error on the image, measured as a distance with the proportionality factor f0 being “1”, is expressed by the following equation (5). Here, Pκij represents an element of the i-th row and j-th column of the projection matrix Pκ.
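Likewise, equations (4) and (5) can be reconstructed as follows (a hedged reconstruction consistent with the definitions above, with p, q and r denoting the numerators and the denominator of equation (3) evaluated with Pκ):

\[
E = \sum_{\alpha=1}^{N}\sum_{\kappa=1}^{M} I_{\alpha\kappa}
\left[\left(\frac{p_{\alpha\kappa}}{r_{\alpha\kappa}} - \frac{x_{\alpha\kappa}}{f_{0}}\right)^{2}
+ \left(\frac{q_{\alpha\kappa}}{r_{\alpha\kappa}} - \frac{y_{\alpha\kappa}}{f_{0}}\right)^{2}\right]
\tag{4}
\]
\[
\begin{aligned}
p_{\alpha\kappa} &= P_{\kappa 11}X_{\alpha} + P_{\kappa 12}Y_{\alpha} + P_{\kappa 13}Z_{\alpha} + P_{\kappa 14},\\
q_{\alpha\kappa} &= P_{\kappa 21}X_{\alpha} + P_{\kappa 22}Y_{\alpha} + P_{\kappa 23}Z_{\alpha} + P_{\kappa 24},\\
r_{\alpha\kappa} &= P_{\kappa 31}X_{\alpha} + P_{\kappa 32}Y_{\alpha} + P_{\kappa 33}Z_{\alpha} + P_{\kappa 34}
\end{aligned}
\tag{5}
\]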
In general SfM, the three-dimensional shape of the scene is reconstructed by estimating the points (Xα, Yα, Zα) and the projection matrices Pκ that minimize the reprojection error of equation (4) for one camera. By contrast, in the present example embodiment, a plurality of cameras are used. To accommodate the use of plural cameras, the SfM of the present example embodiment extends the general SfM by using homogeneous transformation matrices M1γ (where γ is an index identifying the camera).
Specifically, the projection matrix Pγ(γ=1, . . . , L) of the present example embodiment is expressed by the following equation (6). In other words, the projection matrices Pγ (varying from camera to camera) are mutually associated by the homogeneous transformation matrix M1γ. It is assumed, however, that M11 is an identity matrix.
[Math. 6]
Pγ = KR(I −t)M1γ (6)
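As a minimal numpy sketch of equation (6) (the variables K, R and t and the matrices M12, M13 are assumed to be available from the calibration; this is an illustration, not the claimed implementation):

```python
import numpy as np

def projection_matrix(K, R, t, M_1g):
    """Equation (6): per-camera projection matrix tied to the base camera
    by a 4x4 homogeneous transformation matrix M1γ (M11 is the identity)."""
    I_t = np.hstack([np.eye(3), -t.reshape(3, 1)])  # (I -t), 3x4
    return K @ R @ I_t @ M_1g                       # 3x4 projection matrix

P1 = projection_matrix(K, R, t, np.eye(4))  # base camera 411a (M11 = I)
P2 = projection_matrix(K, R, t, M12)        # camera 411b
P3 = projection_matrix(K, R, t, M13)        # camera 411c
```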
In addition, the reprojection error E of the present example embodiment is expressed by the following equation (7). In equation (7), pαγκ, qαγκ, and rαγκ are expressed by the following equation (8). Here, Pγκij represents an element of the i-th row and j-th column of the projection matrix Pγκ. The projection matrix Pγκ is calculated for each image and each camera. In addition, Iαγκ is a visualization index similar to Iακ.
In the present example embodiment, the calculation unit 422 calculates the point (Xα, Yα, Zα) and projection matrix Pγκ, which minimize the reprojection error of equation (7) for the observed (xαγκ, yαγκ). Equation (7) makes it possible to evaluate images captured by plural cameras by one reprojection error equation. Note that the calculation unit 422 can calculate the point (Xα, Yα, Zα) and projection matrix Pγκ, by applying the well-known method described in NPL 2.
The process of step S830 is as described above. Once the feature points included in the first images and second images have been correlated in this manner, the calculation unit 422 calculates a displacement of the measurement position in step S840. At this time, the calculation unit 422 searches for the correspondence relation between the feature points extracted in the first time period (the state with no load) and those extracted in the second time period (the state with a load) by a well-known robust estimation method such as Random Sample Consensus (RANSAC).
The feature points extracted in step S820 may include not only correct correspondence (inlier) but also erroneous correspondence (outlier). A feature point that is judged as an outlier is excluded from the feature points constituting the captured scene in step S840. Hereinafter, the feature point judged as an inlier, i.e. judged as having a correct correspondence relation, is also referred to as “corresponding point”.
The calculation unit 422 executes alignment between the feature points, which were extracted in the first time period and from which the three-dimensional shape was estimated, and the feature points, which were extracted in the second time period and from which the three-dimensional shape was estimated. A well-known method such as the Iterative Closest Point (ICP) algorithm is applicable to the alignment between these point groups. The calculation unit 422 iteratively searches for the combination of feature points that minimizes the error after the alignment.
In the alignment, the calculation unit 422 assumes that there is no displacement between the corresponding points extracted from the first images in the first time period and second time period. In other words, the calculation unit 422 proceeds on the presupposition that an error between the corresponding points extracted from the first images in the two time periods is sufficiently (i.e. negligibly) smaller than an error between the corresponding points extracted from the second images in these time periods. Under this assumption, a displacement of the measurement position can be expressed as an error (i.e. residual) remaining after the alignment.
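The underlying idea can be sketched as follows with numpy: rigidly align the two reconstructed point groups using only the base-position corresponding points (here with the Kabsch least-squares method rather than full ICP, for brevity), then read the residual at the measurement position as the displacement. The point arrays are hypothetical 3xN inputs.

```python
import numpy as np

def rigid_align(A, B):
    """Least-squares rotation R and translation t with R @ A + t ≈ B
    (Kabsch algorithm); A and B are 3xN arrays of corresponding points."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# base_t1, base_t2: base-position corresponding points reconstructed in the
# first and second time periods; meas_t1, meas_t2: measurement-position points.
R, t = rigid_align(base_t1, base_t2)          # base points assumed unmoved
residual = meas_t2 - (R @ meas_t1 + t)        # error remaining after alignment
displacement = np.linalg.norm(residual, axis=0)  # displacement per point
```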
As described above, the displacement measurement system 400 is configured to evaluate images captured by plural cameras (411a, 411b, 411c) by one reprojection error equation. This configuration avoids the restriction of general SfM that only one camera is used when the three-dimensional shape is reconstructed from images. If a plurality of cameras can be used in the displacement measurement, it is possible to locally image only the position to be measured on the measurement object. Thus, according to the displacement measurement system 400, even when the measurement object is a large object such as the bridge 600, the resolution per unit area of the image including the measurement object can be enhanced, and therefore the displacement of the measurement object can be measured with high precision.
The UAV 910 includes a motion control unit 913 in addition to an imaging unit 911 and a communication unit 912. Cameras 911a, 911b and 911c are different from the cameras 411a, 411b and 411c of the second example embodiment, in that the cameras 911a, 911b and 911c are configured such that their positional relationship is variable.
The motion control unit 913 controls the movement of the cameras 911a, 911b and 911c. The motion control unit 913 can control the positions or imaging directions of the cameras 911a, 911b and 911c. By the control of the motion control unit 913, the cameras 911a, 911b and 911c change their relative positions or angles. The motion control unit 913 controls the movement of the cameras 911a, 911b and 911c, for example, by driving servo motors or linear-motion-type actuators.
The control by the motion control unit 913 may be remote control, i.e. control by the displacement measurement device 420 or other remote-control equipment. Alternatively, the control by the motion control unit 913 may be control based on images captured by the cameras 911a, 911b and 911c. For example, the motion control unit 913 may control the positions or imaging directions of the cameras 911a, 911b and 911c so as to continue photographing a specific position of the measurement object.
The motion control unit 913 supplies configuration information to the communication unit 912. The configuration information of the present example embodiment includes information indicative of imaging conditions (relative positions, angles, etc.) of the cameras 911a, 911b and 911c. For example, the configuration information includes information indicative of displacements or rotational angles of the cameras 911a, 911b and 911c from a position that is a base position. The motion control unit 913 does not need to always supply the configuration information to the communication unit 912, and may supply the configuration information to the communication unit 912 only when a change occurs in the configuration information.
Note that the cameras 911a, 911b and 911c may have optical zoom functions. Specifically, the cameras 911a, 911b and 911c may have mechanisms for capturing images by optically enlarging (or reducing) the images. In this case, the imaging magnification by each of the cameras 911a, 911b and 911c can be set independently (i.e. regardless of the imaging magnifications of other cameras). In this case, the configuration information may include information indicative of the imaging magnifications of the cameras 911a, 911b and 911c.
The communication unit 912 transmits to a displacement measurement device 920 the configuration information supplied from the motion control unit 913, in addition to the image data. The configuration information, for example, may be embedded in the image data as metadata of the image data. The configuration information may be information indicative of a difference from an immediately previous state of each of the cameras 911a, 911b and 911c.
The displacement measurement device 920 includes a communication unit 921 and a calculation unit 922. The communication unit 921 differs from the communication unit 421 of the second example embodiment in that the communication unit 921 receives the configuration information as well as the image data. The calculation unit 922 differs from the calculation unit 422 of the second example embodiment in that the calculation unit 922 calculates (i.e. changes) the homogeneous transformation matrix, based on the configuration information received from the communication unit 921.
The calculation unit 922 may execute the calibration as illustrated in the second example embodiment.
Note that the displacement measurement process executed by the calculation unit 922 is similar to the displacement measurement process of the second example embodiment.
As described above, the displacement measurement system 900 is configured to control the movement of the cameras 911a, 911b and 911c and to calculate a displacement of the measurement object by using the homogeneous transformation matrices corresponding to the movement of the cameras 911a, 911b and 911c. Advantageous effects similar to those of the displacement measurement system 400 of the second example embodiment can also be obtained by the displacement measurement system 900. In addition, the displacement measurement system 900 can calculate the displacement of the measurement object even without a configuration in which the cameras 911a, 911b and 911c are immovably fixed.
[Modifications]
The present disclosure is not limited to the above-described first to third example embodiments. For example, the present disclosure includes the modifications described below. In addition, the present disclosure may include modes in which the matters described in the present specification are properly combined or replaced as needed. For example, the matters described by using a specific example embodiment can be applied to the other example embodiments within the range in which no contradiction occurs. Moreover, the present disclosure may include, in addition to these example embodiments, other example embodiments incorporating modifications or applications that would be understood by a person skilled in the art.
(Modification 1)
The UAV 410 may include a configuration for measuring an angular velocity or acceleration. For example, the UAV 410 may be configured to include a so-called Inertial Measurement Unit (IMU). The angular velocity and acceleration measured by the IMU make it possible to calculate the variations of the angle and position of the UAV 410 by integration.
The calculation of equation (7) is an optimization calculation with a relatively large number of unknowns, in which the points (Xα, Yα, Zα) in the scene, as well as the central position t of the lens and the rotation matrix R included in the projection matrix Pγκ, are calculated. However, if the IMU is used, the central position t of the lens and the rotation matrix R can be calculated from the change of the angle and position of the UAV 410, and can therefore be treated as known values.
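A simplified sketch of that integration is shown below; real systems additionally need gravity compensation, bias estimation and drift correction, all of which are omitted here, and the small-angle treatment of rotation is a deliberate simplification.

```python
import numpy as np

def integrate_imu(omegas, accels, dt):
    """Accumulate angle and position changes from gyro readings omegas
    [rad/s] and accelerations accels [m/s^2] sampled every dt seconds."""
    angle = np.zeros(3)      # accumulated rotation (small-angle approximation)
    velocity = np.zeros(3)
    position = np.zeros(3)
    for omega, a in zip(omegas, accels):
        angle += omega * dt          # integrate angular velocity once
        velocity += a * dt           # integrate acceleration once
        position += velocity * dt    # integrate velocity once more
    return angle, position
```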
(Modification 2)
In general, the three-dimensional position of the scene calculated by SfM, i.e. the point (Xα, Yα, Zα), has scale indefiniteness in relation to the three-dimensional position in the real world. The indefiniteness in this context refers to the property that the proportionality factor s in equation (1) cannot be uniquely specified from the reprojection error E of equation (4) or equation (7). Because of this indefiniteness, the displacement calculated by the calculation unit 422 cannot be described in a unit representing an absolute length, such as meters, and is expressed as a relative ratio to a position serving as a base.
The calculation unit 422 may calculate the ratio between the amount of movement of the camera calculated by the SfM and the actual amount of movement, in order to express the displacement in a unit representing an absolute length. The actual amount of movement of the camera can be measured, for example, by using an inertial sensor or an atmospheric pressure sensor.
The calculation unit 422 records the position of a specific camera (here, the camera 411a, for instance) among the plural cameras. Hereinafter, the recorded position of the camera 411a at the time of capturing the κ-th image is defined as “t′κ”. In addition, the central position tκ of the lens, which is calculated by minimizing the reprojection error E of equation (7), corresponds to the position of the camera 411a at the time of capturing the κ-th image. Accordingly, the following equation (9) is established between the position t′κ, the central position tκ and the proportionality factor s.
[Math. 9]
t′κ = stκ (9)
The calculation unit 422 can calculate a displacement in, for example, units of meters by calculating the proportionality factor s from equation (9). Specifically, the calculation unit 422 can describe the displacement in a unit representing an absolute length, by multiplying the displacement calculated by minimizing the reprojection error E of equation (7) by the proportionality factor s.
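A minimal sketch of this rescaling, assuming the sensor-measured camera travel t_prime and the SfM-estimated travel t_sfm are available as vectors (hypothetical names):

```python
import numpy as np

# Equation (9): t'κ = s * tκ, so the proportionality factor follows from the
# ratio of the measured travel to the SfM-estimated travel.
s = np.linalg.norm(t_prime) / np.linalg.norm(t_sfm)

# Rescale the SfM displacement into a unit of absolute length (e.g. meters).
displacement_m = s * displacement_sfm
```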
(Modification 3)
The displacement measurement device 420 may set a point (hereinafter referred to as “steady point”), which is a base point, from among a plurality of corresponding points. The steady point is selected from among the corresponding points included in the first image, i.e. the image including the base position. The displacement measurement device 420 may calculate the position of the camera, which is based on the steady point, and may measure the position of a corresponding point (hereinafter “movable point”) which is not the steady point, based on the calculated position of the camera. The movable point is selected from among the corresponding points included in the second image, i.e. the image including the measurement position.
The displacement measurement device 420 may calculate a displacement of the measurement object, based on the difference between the position of the movable point which is estimated from the position of the steady point, and the position of the movable point which was actually measured. Thereby, the displacement measurement device 420 can calculate the displacement of the movable point in real time.
Specifically, the process executed in the displacement measurement device 420 is as follows. To begin with, the calculation unit 422 acquires the positions (Xα, Yα, Zα) of the corresponding points in advance, based on the images captured in the first time period, and specifies their positional relationship (where α=1, 2, . . . , N). Next, the calculation unit 422 sets steady points (Xf, Yf, Zf) and movable points (Xm, Ym, Zm) from among these corresponding points. Here, f and m are sets of nonoverlapping values among the values which α can take. Specifically, the sum of the number of steady points and the number of movable points is equal to the total number of corresponding points.
When the displacement of the movable point is calculated in real time, the calculation unit 422 minimizes reprojection errors EF and EM of the steady point and movable point by using the following equation (10) and equation (11), and estimates the respective central positions t of the lenses and the rotation matrices R. At this time, since the positions (Xα, Yα, Zα) of the corresponding points are already known, these values are treated as fixed values.
If the reprojection errors of equation (10) and equation (11) are minimized, the central position and rotation matrix based on the steady point and the central position and rotation matrix based on the movable point can be calculated. Hereinafter, the central position and rotation matrix based on the steady point are expressed as “tF” and “RF”, respectively. In addition, the central position and rotation matrix based on the movable point are expressed as “tM” and “RM”, respectively.
These central positions and rotation matrices can be expressed as indicated by the following equation (12), where the differences due to the displacement of the movable point are Δt and ΔR. These matrices are expressed as homogeneous transformation matrices by adding a fourth row to the matrices of three rows and four columns (see equation (2)) representing the central positions of the lenses and the rotations, so that inverse matrices can be calculated.
If equation (12) is multiplied from the left by the inverse matrix of the homogeneous transformation matrix based on the central position and rotation matrix of the movable point, a homogeneous transformation matrix representing the difference in the central position and rotation matrix can be calculated, as indicated by the following equation (13).
In equation (13), Δt and ΔR represent the displacement of the movable point, which is based on the steady point. Specifically, by acquiring in advance the positions (Xα, Yα, Zα) of the corresponding points and operating as described above, the calculation unit 422 can calculate the displacement of the movable point without executing the imaging multiple times in the second time period. Therefore, the calculation unit 422 can calculate the displacement of the measurement object by imaging the measurement object only once, without changing the imaging position.
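A minimal numpy sketch of equations (12) and (13), assuming the poses (R_F, t_F) estimated from the steady points and (R_M, t_M) estimated from the movable points are available (hypothetical names):

```python
import numpy as np

def homogeneous(R, t):
    """4x4 homogeneous transformation matrix from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

T_F = homogeneous(R_F, t_F)   # pose based on the steady points
T_M = homogeneous(R_M, t_M)   # pose based on the movable points

# Equation (13): left-multiplying by the inverse of the movable-point
# transform isolates the difference Δt, ΔR due to the displacement.
Delta = np.linalg.inv(T_M) @ T_F
Delta_R = Delta[:3, :3]       # rotational component of the displacement
Delta_t = Delta[:3, 3]        # translational component of the displacement
```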
(Modification 4)
The base point according to the present disclosure is not limited to the feature point based on the feature amount. For example, as the method of reconstructing the three-dimensional shape, there is also known a method using not the feature amount but the luminance or color of pixels. The displacement measurement method according to the present disclosure is also applicable to such methods. The displacement measurement method according to the present disclosure is also applicable to, for example, position estimation using Parallel Tracking and Mapping for Small AR Workspaces (PTAM), Large-Scale Direct Monocular Simultaneous Localization and Mapping (LSD-SLAM), or the like. The base point according to the present disclosure may be a specific pixel used in such methods.
(Modification 5)
In the displacement measurement device 100A, the first calculation unit 1020 calculates the correlation information. Concrete methods of calculating the correlation information may be similar to those in the first to third example embodiments. The second calculation unit 1030 calculates a displacement of the second position (measurement position) by using the first images and second images acquired by the acquiring unit 1010 and the correlation information calculated by the first calculation unit 1020. Concrete methods of calculating the displacement may be similar to those in the first to third example embodiments.
In the displacement measurement device 100B, the storage unit 1040 stores the correlation information. The storage unit 1040 stores the correlation information which was calculated by some other device in advance (i.e. before imaging the measurement object). Accordingly, the displacement measurement device 100B does not need to calculate the correlation information.
In the displacement measurement device 100C, the storage unit 1060 stores all images necessary for displacement measurement, and the correlation information. The storage unit 1060 stores the correlation information which was calculated by some other device in advance. In addition, the storage unit 1060 stores images which were captured in advance. Like the displacement measurement device 100B, the displacement measurement device 100C does not need to calculate the correlation information.
(Modification 6)
The displacement measurement system according to the present disclosure is not limited to the configuration of the second example embodiment or third example embodiment. For example, the displacement measurement system according to the present disclosure need not necessarily include a UAV.
The imaging unit 1110 captures a first image including a first position and a second image including a second position in a first time period and in a second time period. The calculation unit 1120 calculates a displacement of the second position, the displacement being based on the first position, using the first image and second image and the correlation information calculated based on the configuration information of the imaging unit 1110.
The writer unit 1211 stores in a detachable recording medium the image data supplied from the imaging unit 411. The recording medium is, for example, a so-called Universal Serial Bus (USB) memory or a memory card. The reader unit 1221 reads the image data from the recording medium in which the image data was stored by the writer unit 1211. Each of the writer unit 1211 and reader unit 1221 is, for example, a reader/writer of memory cards.
The UAV 1210 and displacement measurement device 1220 do not need to transmit/receive image data. When imaging by the UAV 1210 is finished, the user takes out the recording medium, in which image data are stored, from the UAV 1210, and attaches the recording medium to the displacement measurement device 1220. The displacement measurement device 1220 reads the image data from the recording medium which was attached by the user, and calculates the displacement.
(Modification 7)
A concrete hardware configuration of the displacement measurement device according to the present disclosure includes many variations and is not limited to a specific configuration. For example, the displacement measurement device according to the present disclosure may be realized by using software, or may be configured to share various processes by using a plurality of pieces of hardware.
The CPU 1301 executes a program 1308 by using the RAM 1303. The program 1308 may be stored in the ROM 1302. Alternatively, the program 1308 may be stored in a recording medium 1309 such as a memory card and may be read by the drive device 1305, or the program 1308 may be transmitted from an external device via a network 1310. The communication interface 1306 exchanges data with the external device via the network 1310. The input/output interface 1307 exchanges data with peripheral equipment (an input device, a display device, etc.). The communication interface 1306 and input/output interface 1307 can function as constituent elements for acquiring or outputting data.
Note that the constituent elements of the displacement measurement device according to the present disclosure may be composed of single circuitry (a processor or the like), or may be composed of a combination of a plurality of circuitries. The circuitry in this context may be general-purpose circuitry or purpose-specific circuitry. For example, a part of the displacement measurement device according to the present disclosure may be realized by a purpose-specific processor, and the other part may be realized by a general-purpose processor.
The configuration described as a single device in the above example embodiments may be provided in a plurality of devices in a distributed fashion. For example, the displacement measurement device 100, 420 or 920 may be realized by cooperation of a plurality of computer devices, by using cloud computing technology or the like.
The present application claims priority based on Japanese Patent Application No. 2016-184451, filed Sep. 21, 2016; the entire contents of which are incorporated herein by reference.