The embodiments discussed herein are related to an image processing apparatus, an image processing method and a medium storing an image processing program for processing image data acquired by picking up images of an object.
In structures, such as tunnels, changes in states including appearance of cracks or peeling may occur in concrete wall surfaces due to aged deterioration. Locations at which changes in states have occurred are inspected in order to ensure safety of the structures.
Visual inspection by a human inspector from a close position is high in cost and low in efficiency. It is therefore considered to pick up images of a structure by a camera carried on a vehicle travelling along the structure, in order to inspect the structure in a shorter time and without obstructing traffic. For example, images of a tunnel wall surface are continuously picked up by the camera on the vehicle traveling along the wall surface of the tunnel to acquire a plurality of still images (each still image corresponds to a single frame). In this method, the vehicle carrying the camera travels between a point of time at which an image frame is picked up and a point of time at which the next image frame is picked up; therefore, the positions of objects in a developed image, in which the plurality of picked up image frames are disposed in a rectangular frame, are not accurate. Further, the distance between the camera and an object area varies in a case in which the structure, such as the wall surface of a tunnel, curves, and in a case in which the vehicle carrying the camera is not able to travel along the structure; in those cases, the size of the object area differs among the image frames in the developed image.
The developed image of the tunnel is used to check the locations at which changes in states have occurred in the tunnel wall surface. If adjoining frames are joined in a misaligned manner or if the object areas differ in size among frames, there is a possibility that locations at which changes in states have occurred to be detected are not displayed on the developed image, or that a single location at which a change in state has occurred is displayed at two or more locations on the developed image.
Japanese Laid-open Patent Publication No. 2004-012152 is an example of the related art.
According to an aspect of the invention, an image processing apparatus includes a camera which acquires an image of an area of an object while moving with a moving vehicle, a moving amount acquisition unit which acquires a moving amount of the camera from a predetermined position on a moving path of the moving vehicle to an image acquiring position at which the camera acquires the image of the area, a distance acquisition unit which acquires a distance between the area of the object and the camera when the camera acquires the image of the area, a first processing unit which performs correction in which the image acquired by the camera is displaced in a moving direction of the moving vehicle in accordance with the moving amount, a second processing unit which performs correction in which a size of the image acquired by the camera is changed in accordance with the distance acquired by the distance acquisition unit using a size of a predetermined image acquired by the camera and a distance corresponding to the predetermined image, and a third processing unit which arranges a plurality of images corrected by the first processing unit and the second processing unit to generate an inspection image.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed.
The camera 11 picks up images of an object repeatedly while moving and acquires image data. The camera 11 may be selected arbitrarily and may be, for example, a linear sensor camera with visual sensors arranged in a one-dimensional direction, or an area sensor camera with visual sensors arranged in two-dimensional directions. The data acquired by image picking-up of the linear sensor camera is one-dimensional image data, and the data acquired by image picking-up of the area sensor camera is two-dimensional image data. An infrared camera, which is capable of easily detecting deterioration of an object structure, such as cracks and peeling, is preferably used.
The camera 11 may be moved in an arbitrarily selected manner. The camera 11 is carried and moved on a moving device, such as a car. The camera 11 may pick up the image of the object by scanning the object in a direction which crosses the direction in which the moving device is moving. The direction which crosses the direction in which the moving device is moving is, for example, perpendicular to the moving direction. For example, the object may be scanned by picking up images with the camera 11 being rotated such that a straight line between a sensor of the camera 11 and the object is rotated about a straight line extending in the moving direction. For example, the camera 11 scans the object from the top to the bottom, and then repeats the scanning from the top to the bottom. A device for scanning the object is provided to the camera 11; the device adjusts the orientation and position of the camera 11. A scanning camera in which an operation mechanism for scanning an object is incorporated may also be used. Hereinafter, a scanning linear sensor camera is used in the present embodiment. The scanning linear sensor camera picks up an image of the object while being rotated such that a straight line between the sensor and the object is rotated about a straight line extending in the moving direction of the moving device.
The moved amount acquisition unit 12 is a device which acquires a moved amount of the camera 11 from a predetermined position to an image pick-up position. An exemplary moved amount acquisition unit 12 is a device which measures a moved amount of the camera 11 in the moving direction in a period since the camera 11 picks up an image until the camera 11 picks up another image. The moved amount is usually acquired in synchronization with picking up of the image by the camera 11. The moved amount acquisition unit 12 is not particularly limited: any moved amount sensor which measures the moved amount of the camera 11 in the moving direction of the moving device may be used. When the camera 11 is mounted on a vehicle, for example, a vehicle speed sensor provided in the vehicle may be used as the moved amount sensor. The vehicle speed sensor measures the moved amount of the vehicle from a predetermined position to an image pick-up position (e.g., the moved amount of the vehicle moved between a position at which an image is picked up and a position at which another image is picked up) in accordance with pulse signals generated by a vehicle speed pulse generator in proportion to the rotational speed of a vehicle shaft. A distance sensor capable of measuring the distance between the object area and the camera 11 during pick-up of an image may be used as the distance acquisition unit 13: in that case, the moved amount acquisition unit 12 may be a device which calculates the moved amount of the camera on the basis of each distance measured by the distance sensor at a plurality of image pick-up events, and of an amount of change of a feature point of the image data acquired in the plurality of image pick-up events. The amount of change in the feature point of the image data is acquired on, for example, a pixel basis. 
For example, the amount of change on the pixel basis is converted into an actual amount of change (e.g., in meters) by multiplying the amount of change of the feature point by the actual dimension of a single image pick-up element. An average value of the plurality of distance values acquired in the plurality of image pick-up events is calculated. The moved amount of the camera may be calculated by the following formula:
moved amount of camera = average value of distance × actual dimension of pixel / focal length.
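The calculation described above may be sketched as follows. This is an illustrative sketch only; the function and parameter names (and the unit conventions) are assumptions for illustration and are not part of the embodiment. The feature-point shift in pixels is first converted into an actual shift on the sensor plane, and then scaled by the ratio of the average object distance to the focal length (pinhole model).

```python
def camera_moved_amount(distances_m, feature_shift_px, pixel_pitch_m, focal_length_m):
    """Estimate the camera's moved amount (meters) between two image
    pick-up events from the observed shift of a feature point."""
    # Average of the distances measured at the image pick-up events.
    avg_distance = sum(distances_m) / len(distances_m)
    # Convert the pixel-based shift into an actual shift on the sensor plane.
    sensor_shift_m = feature_shift_px * pixel_pitch_m
    # moved amount = average distance x actual shift on sensor / focal length
    return avg_distance * sensor_shift_m / focal_length_m
```

For example, with a 10 m average distance, a 100-pixel shift, a 5 µm pixel pitch and a 10 mm focal length, the estimated moved amount is 0.5 m.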
The distance acquisition unit 13 is a device which acquires the distance between an object area of the structure and the camera 11 when the camera 11 picks up an image of the object area. The distance is usually acquired in synchronization with picking up of the image by the camera 11. The distance acquisition unit 13 is not particularly limited: for example, a distance sensor, such as a range sensor, which measures the distance to an object by applying a laser beam, an ultrasonic wave and so on against the object and measuring the time until the reflected light or wave returns from the object may be used. A vehicle speed sensor capable of measuring the moved amount from a predetermined position to the image pick-up position, such as a vehicle speed pulse generator, may be used as the moved amount acquisition unit 12: in that case, the distance acquisition unit 13 may be a device which calculates the distance from the moved amount measured by the moved amount sensor at a plurality of image pick-up events, and from the distance from the center of each piece of image data acquired in the plurality of image pick-up events to a feature point of each piece of image data. At each image pick-up position, the angle between a straight line connecting the camera 11 and the position of the object corresponding to the feature point, and a straight line in the moving direction of the camera 11 moved by the moving device, may be calculated by multiplying the distance (on a pixel basis) from the center of each piece of image data acquired in the plurality of image pick-up events to the feature point by the viewing angle of a pixel. The distance from the camera 11 to the object may then be calculated on the basis of the moved amount of the camera 11 and the angle at each image pick-up position (triangulation).
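The triangulation mentioned above may be sketched as follows, under the assumption (for illustration only; the function name and geometry conventions are not taken from the embodiment) that the two angles are measured between the line of sight to the feature point and the moving direction, and that the object's perpendicular distance is sought.

```python
import math

def distance_by_triangulation(moved_amount, angle1, angle2):
    """Perpendicular distance D from the moving path to the object, given
    the baseline (moved amount between two pick-up positions) and the two
    angles (radians) between the line of sight and the moving direction.
    Each angle would be obtained by multiplying the feature point's pixel
    offset from the image center by the per-pixel viewing angle."""
    # The longitudinal offset to the object at each position is D / tan(angle),
    # and the two offsets differ by exactly the moved amount:
    #   D / tan(angle1) - D / tan(angle2) = moved_amount
    return moved_amount / (1.0 / math.tan(angle1) - 1.0 / math.tan(angle2))
```

For instance, an object 10 m away seen at 45 degrees from the first position and at atan(2) from a position 5 m further along is recovered at 10 m.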
The normalization processing unit 14 includes a movement processing unit 25 (i.e., a first processing unit) and an expansion and contraction processing unit 24 (i.e., a second processing unit or a fifth processing unit). The movement processing unit 25 performs correction such that the frames of a plurality of pieces of image data picked up by the camera 11 are displaced in the moving direction of the moving device in accordance with the moved amount of the camera 11 from a predetermined position to an image pick-up position. The expansion and contraction processing unit 24 performs correction such that the frame size of image data picked up by the camera 11 is expanded and contracted in accordance with the distance acquired by the distance acquisition unit 13, with reference to the frame size of predetermined image data and the predetermined distance corresponding to that image data. The normalization process is performed on a certain coordinate axis regarding a plurality of image frames acquired, for example, by a single scanning event of the object in the scanning direction. Details of the normalization processing unit 14 will be described below.
The combination processing unit 15 (i.e., a third processing unit or a sixth processing unit) plots the plurality of pieces of image data corrected by the movement processing unit 25 and the expansion and contraction processing unit 24 on a two-dimensional coordinate system, and generates a two-dimensional image. The two-dimensional image data may be generated by calculating the positions of the image frames adjoining in the moving direction on the basis of the moved amount of the camera 11 acquired by the moved amount acquisition unit 12 during the pick-up of a plurality of images. Although a plurality of image frames may be disposed on a two-dimensional coordinate system depending only on the acquired moved amount, it is preferred to correct the plurality of image frames in the moving direction of the camera as needed, from the viewpoint of reducing misalignment of the objects plotted on the acquired two-dimensional image. The correction in the moving direction of the camera may be performed by: correcting such that the sum of absolute differences of the pixel values in an area in which two adjoining image frames overlap each other becomes the smallest; or correcting using a matching method by normalized correlation of the pixel values in an area in which two adjoining image frames overlap. An exemplary combination process will be described later with reference to
The image processing apparatus of the present embodiment may be provided with an image storing device 16 in which an image (i.e., a developed image) plotted on a two-dimensional coordinate system is stored.
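The sum-of-absolute-differences matching used for the moving-direction correction may be sketched as follows. This is an illustrative sketch only; the function name, the search range convention, and the assumption that the overlap is searched along the horizontal (moving) axis of two equally sized frames are not taken from the embodiment.

```python
import numpy as np

def best_overlap_offset(frame_a, frame_b, max_offset):
    """Find the moving-direction offset of frame_b relative to frame_a that
    minimizes the mean absolute difference (SAD) over the overlapping area."""
    h, w = frame_a.shape
    best_off, best_sad = 0, float("inf")
    for off in range(1, max_offset + 1):
        a = frame_a[:, off:].astype(np.float64)       # right part of frame_a
        b = frame_b[:, :w - off].astype(np.float64)   # left part of frame_b
        sad = np.abs(a - b).mean()                    # mean absolute difference
        if sad < best_sad:
            best_off, best_sad = off, sad
    return best_off
```

Normalized cross-correlation could be substituted for the SAD criterion in the same loop; SAD is shown here because it is the first criterion named in the description.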
The normalization process includes a moving-direction expansion and contraction process S101 and a moving-direction movement process S102, which are moving-direction processes of the image frame, and a scanning-direction expansion and contraction process S103, which is a scanning-direction process. The expansion and contraction processing unit 24 performs the moving-direction expansion and contraction process S101 and the scanning-direction expansion and contraction process S103. The movement processing unit 25 performs the moving-direction movement process S102.
Output image data 27, for which the moving-direction expansion and contraction process S101, the moving-direction movement process S102 and the scanning-direction expansion and contraction process S103 have been performed, is combined in the combination processing unit 15, and thereby two-dimensional image data is generated.
The illustrated components are functional and conceptual examples and thus do not necessarily correspond physically to actual components. That is, specific forms of distribution and integration of each device are not limited to those illustrated; each device may be partially or entirely distributed or integrated, functionally or physically, in arbitrary units.
Moving-Direction Expansion and Contraction Process
The moving-direction expansion and contraction process will be described with reference to
where x represents the X coordinate of the input image and x1 represents the X coordinate after the moving-direction expansion and contraction process is performed.
Moving-Direction Movement Process
The moving-direction movement process will be described with reference to
Correction is made in the moving direction by moving each of the other image frames y by x0(y). Accordingly, the X coordinate x′ after the moving-direction movement process may be expressed by linear transformation of the following formula (2).
Scanning-Direction Expansion and Contraction Process
An expansion and contraction process in the scanning (vertical) direction will be described with reference to
r(y)=2D(y)tan(θv/2) (3)
The vertical visual field rv when the images of the virtual wall surface 32 are picked up after the normalization process for the distance is completed may be calculated using the following formula (4). After the normalization process for the distance is completed, the distance from the camera 11 to the pick-up center is D0.
rv=2D0 tan(θv/2) (4)
An enlargement and reduction ratio s(y) of each image frame y may be calculated from the similarity ratio using the following formula (5).
That is, each image frame y is expanded and contracted at an expansion and contraction ratio D0/D(y) in the scanning-direction expansion and contraction process. The relationship between the position y of the image frame in the scanning direction and the position y′ in the scanning direction after the normalization may be expressed in the following formula (6) in a cumulative format.
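The cumulative relationship between y and y′ may be sketched as follows: each source frame is one pixel high before normalization, is scaled by D0/D(y), and the normalized position is the running sum of the scaled heights. The function name and the list-based representation are illustrative assumptions, not part of the embodiment.

```python
def normalized_scan_positions(distances, d0):
    """Cumulative scanning-direction positions y' after normalization.
    distances[y] is D(y) for frame y; d0 is the reference distance D0.
    Each frame of unit height is scaled by the ratio D0 / D(y)."""
    positions = [0.0]
    for d in distances:
        positions.append(positions[-1] + d0 / d)  # accumulate scaled heights
    return positions
```

For example, with D0 = 10 and D(y) = 5, 10, 10, the frame boundaries map to 0, 2, 3, 4: the frame picked up at half the reference distance is enlarged to twice its height.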
The normalization process of the present embodiment is performed by the above-described moving-direction expansion and contraction process, the moving-direction movement process and the scanning-direction expansion and contraction process.
The processes described above may be performed in a substantially arbitrary order; however, it is desirable that the moving-direction expansion and contraction process and the moving-direction movement process precede the vertical (scanning)-direction expansion and contraction process. The moving-direction expansion and contraction process and the moving-direction movement process may be processed efficiently when the height of each image frame corresponds to a single pixel (unit: pixel). However, since the height of each image frame becomes D0/D(y) after the vertical-direction expansion and contraction process is performed, the data for which the moving-direction expansion and contraction process and the moving-direction movement process are to be performed is usually no longer in pixel units; therefore, the moving-direction expansion and contraction process and the moving-direction movement process become inefficient.
The moving-direction expansion and contraction process preferably precedes the moving-direction movement process. Performing the moving-direction movement process before the moving-direction expansion and contraction process means that the above-described formula (2) regarding X coordinate x′ after the moving-direction movement process is transformed as expressed by the following formula (7).
In the formula (7), the addition of (D(y)/D0)x0(y) to x in the parentheses is a movement correction in the moving direction. This addition is inefficient because it means correcting the acquired moved amount x0(y) in accordance with the acquired distance D(y).
Therefore, the normalization process is preferably performed in the order of the moving-direction expansion and contraction process, the moving-direction movement process and the scanning-direction expansion and contraction process.
The above-described normalization process specifies the pixel into which each pixel in the acquired image is converted by the normalization. In actual conversion of an image, however, the quality of the transformation result becomes higher when inverse transformation is performed. In the inverse transformation, information about the correspondence between each pixel in the normalized image and the corresponding pixel in the acquired image is acquired.
The inverse transformation in the X-axis direction is a linear transformation and thus is acquired analytically by the following formula (8).
The inverse transformation in the y-axis direction is acquired by numerical computation, since the relationship between the position y of the image frame in the scanning direction and the position y′ in the scanning direction after the normalization is in a cumulative format, as illustrated in the formula (6).
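One way to perform this numerical inverse is to tabulate the cumulative forward positions of the frame boundaries and, for each normalized coordinate y′, locate the enclosing source frame by binary search and interpolate within it. This sketch is illustrative only; the function name and the linear interpolation within a frame are assumptions, not part of the embodiment.

```python
import bisect

def inverse_scan_position(y_norm, positions):
    """Numerically invert the cumulative scanning-direction mapping.
    positions[y] is the normalized coordinate of the top of source frame y
    (ascending, as produced by the forward transformation of formula (6))."""
    # Binary search for the source frame that contains y_norm.
    y = bisect.bisect_right(positions, y_norm) - 1
    y = max(0, min(y, len(positions) - 2))  # clamp to a valid frame index
    # Linear interpolation inside the frame gives the fractional source position.
    frac = (y_norm - positions[y]) / (positions[y + 1] - positions[y])
    return y + frac
```

With boundary positions 0, 2, 3, 4, the normalized coordinate 1.0 maps back to the source position 0.5, and 2.5 maps back to 1.5.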
The image processing apparatus of the present embodiment forms an image by performing, after the normalization process of each image, the combination process of the output images 27 for which the normalization process has been performed in the combination processing unit 15, and then outputs the formed image.
According to the image processing apparatus of the present embodiment, an image in which defects and the positions of patterns on the wall surface may be recognized correctly may be generated by picking up images of the object by scanning in the direction which crosses the moving direction while travelling along the object, and by performing the normalization process and the combination process on the plurality of acquired still images.
An area sensor camera may be used as the camera 11 as stated above. In that case, since the distance between the camera 11 and the object area of the structure is usually considered to be the same value in each of the acquired image frames, there is a possibility that the precision of the normalization result becomes lower as the area of the object of which images are picked up in each image frame becomes larger. However, the area sensor camera is preferred in that it may pick up images of the structure in a short time.
If the texture feature amount in the evaluation area for the evaluation of the overlapping state is insufficient, the position of the search result may be inaccurate. The amount of texture in the evaluation area is evaluated in advance, and if the evaluated texture amount is smaller than a predetermined amount of texture, a default value may be used without performing the image search process. The texture feature amount herein is, for example, the distribution of brightness values or the distribution of brightness differential values.
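The texture-amount check may be sketched as follows, using the variance of brightness and the variance of the horizontal brightness differential as the two example feature amounts. The function name and the threshold values are illustrative assumptions, not part of the embodiment.

```python
import numpy as np

def has_sufficient_texture(region, min_brightness_var=25.0, min_grad_var=25.0):
    """Decide whether an evaluation area has enough texture for the image
    search, based on the distributions (variances) of the brightness values
    and of the horizontal brightness differential values."""
    region = np.asarray(region, dtype=np.float64)
    brightness_var = float(np.var(region))
    grad_var = float(np.var(np.diff(region, axis=1)))  # horizontal differential
    return brightness_var >= min_brightness_var and grad_var >= min_grad_var
```

A flat (uniform) area fails the check, so the search would be skipped and the default overlap value used, while a well-textured area passes.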
The image processing apparatus of the second embodiment is mounted on a moving device, such as a vehicle, and picks up images of one side of the wall surface of the tunnel while travelling in the tunnel. The image processing apparatus of the second embodiment includes cameras 11, 11a, a distance acquisition unit 13, a moved amount acquisition unit 12, a developed image generation unit 20, a center boundary detection unit (i.e., a detection unit) 23 and an inbound and outbound developed image generation unit 28 (i.e., a fourth processing unit). The cameras 11 and 11a are the same as those provided in the image processing apparatus of the modification of the first embodiment illustrated in
The center boundary detection unit 23 detects data about the centering boundary from the generated outbound developed image and inbound developed image.
The horizontal pixel position which includes the peak not smaller than a predetermined threshold t is detected and is stored as the center boundary position. The center boundary position is recorded with an opening position of the tunnel being a reference position.
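The peak detection described above may be sketched as follows, assuming (for illustration only) that a one-dimensional response profile, such as an edge strength summed along the vertical direction of the developed image, is scanned for local peaks not smaller than the threshold t.

```python
def detect_center_boundary_positions(profile, t):
    """Horizontal pixel positions whose response is a local peak not smaller
    than threshold t. Positions are indices from the start of the profile,
    i.e., measured from the reference position (the tunnel opening)."""
    peaks = []
    for i in range(1, len(profile) - 1):
        # Local maximum at i, and the peak height reaches the threshold t.
        if profile[i] >= t and profile[i] >= profile[i - 1] and profile[i] > profile[i + 1]:
            peaks.append(i)
    return peaks
```

For a profile with peaks of heights 5 and 7 and a threshold of 4, both peak positions are returned; lower peaks are ignored.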
Next, a correlation process of the center boundary positions of the outbound developed image and the inbound developed image is performed.
The inbound and outbound developed image generation unit 28 generates an inbound and outbound developed image using the correlated data about the center boundary positions of the outbound developed image and the inbound developed image. An embodiment of the inbound and outbound developed image generating process will be described hereinafter. Although a case in which the inbound developed image is joined with reference to the outbound developed image will be described, the outbound developed image may instead be joined with reference to the inbound developed image.
First Inbound and Outbound Developed Image Generating Process
[Step 1]
An image correction process of a partially developed image of the inbound centering boundary section [bi, bi+1] corresponding to the outbound centering boundary section [ai, ai+1] is performed. In particular, an expansion process to r times is performed in the moving direction as follows:
r=(ai+1−ai)/(bi+1−bi)
The expansion and contraction process to r times may be performed in the moving direction and in the scanning direction.
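The correction of [Step 1] may be sketched as a coordinate mapping: a moving-direction coordinate inside the inbound section [bi, bi+1] is stretched by r onto the corresponding outbound section [ai, ai+1]. The function name and the representation of the boundary positions as lists are illustrative assumptions, not part of the embodiment.

```python
def align_inbound_section(x, a, b, i):
    """Map a moving-direction coordinate x inside the inbound centering
    boundary section [b[i], b[i+1]] onto the corresponding outbound
    section [a[i], a[i+1]] by the expansion ratio r."""
    r = (a[i + 1] - a[i]) / (b[i + 1] - b[i])  # r = (ai+1 - ai) / (bi+1 - bi)
    return a[i] + r * (x - b[i])
```

For example, if the outbound section spans 0 to 100 and the inbound section spans 0 to 50, then r = 2 and the inbound midpoint 25 maps to 50; the section endpoints map onto each other exactly.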
[Step 2]
Next, the combination process of the outbound developed image 51 and the expanded and contracted inbound developed image 55 is performed. That is, an image search process is performed, and the combination process is performed in accordance with the searched overlapping position. The combination process may be performed on a partially-developed-image basis. Since the combination process has been described with reference to
Second Inbound and Outbound Developed Image Generating Process
[Step 1]
The rearrangement process is performed for the partially developed image of the inbound centering boundary section [bi, bi+1] corresponding to the outbound centering boundary section [ai, ai+1]. In particular, the position of each of the image frames 56 which constitute the inbound developed image 55 is shifted in the moving direction by the following amount d.
d={(ai+1−ai)−(bi+1−bi)}/Ni
where Ni is the number of junctions of the frames in the moving direction which exist in the inbound centering boundary section [bi, bi+1]. For example, in the partially developed image of the centering boundary section [bi, bi+1] illustrated in
The rearrangement process need not be performed for all the frame images which constitute the inbound partially developed image, but may be performed only for the following image frames: i.e., image frames stored in the inbound developed image generation process for which the image search process has not been performed due to an insufficient texture amount. In that case, the position of the image frame is shifted in the moving direction by the following amount d.
d={(ai+1−ai)−(bi+1−bi)}/Mi
where Mi is the number of frames for which the image search process has not been performed in the outbound or inbound developed image generation process, among the combined frames in the moving direction which exist in the inbound centering boundary section [bi, bi+1].
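The rearrangement of [Step 1] may be sketched as follows, under the illustrative assumption (not part of the embodiment) that the frame junction positions of the inbound section are held in a list and that each successive junction accumulates one shift of d, so that the section length comes to match the outbound section.

```python
def rearranged_frame_positions(frame_positions, a, b, i):
    """Shift the frame junction positions inside the inbound centering
    boundary section [b[i], b[i+1]] in the moving direction so that the
    section length matches the outbound section [a[i], a[i+1]].
    frame_positions holds the junction positions including both section
    ends, so the number of junctions Ni is len(frame_positions) - 1."""
    n = len(frame_positions) - 1
    # d = {(ai+1 - ai) - (bi+1 - bi)} / Ni
    d = ((a[i + 1] - a[i]) - (b[i + 1] - b[i])) / n
    # The k-th junction accumulates k shifts of d.
    return [p + k * d for k, p in enumerate(frame_positions)]
```

For an inbound section of length 30 with three junctions and an outbound section of length 36, d = 2 and the junctions 0, 10, 20, 30 are rearranged to 0, 12, 24, 36.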
[Step 2]
Next, the combination process of the outbound developed image 51 and the rearranged inbound developed image 55 is performed. That is, the image search process is performed and, in accordance with the searched overlapping positions, the combination process is performed in the same manner as in the first inbound and outbound developed image generating process.
Note that, in [Step 2] of the above-described first and second inbound and outbound developed image generating processes, the image search process and the image combination process may be performed on an image-frame basis, for the image frames which constitute the partially developed image, such that the inbound developed image may be reconstructed.
According to the developed image generation device of the second embodiment, an inbound and outbound developed image of high quality may be generated by combining pieces of image data of the objects with reduced misalignment or variation over the entire inner wall of the tunnel. For example, an inbound and outbound developed image of high quality may be generated even if the vehicle speed or the distance from the camera to the wall surface varies between the outbound and inbound travels.
The image processing apparatus of the first embodiment and the second embodiment may be implemented using, for example, a general computer.
The embodiment is not limited to that described above. Two or more embodiments may be combined without sacrificing consistency. The above-described embodiments are illustrative only; any embodiments having substantially the same configuration and similar operations and effects as those of the technical idea described in the claims are included in the technical scope of the above-described embodiments.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
This is a continuation of International Application No. PCT/JP2009/004701 filed on Sep. 17, 2009, the entire contents of which are incorporated herein by reference.
Parent application: PCT/JP2009/004701, filed Sep. 2009 (US). Child application: U.S. application Ser. No. 13/422,711.