The present invention relates to an information processing apparatus, an information processing method, and a non-transitory computer-readable storage medium.
In inspection of concrete structures such as bridges and tunnels, image inspection using images obtained by capturing wall surfaces of the structures is performed. In this image inspection, damage including a crack on a wall surface is detected using an image recognition technology such as machine learning. Since a detection result can include detection omission, in which damage goes undetected, an inspection worker performs confirmation work on the detection result, and this confirmation work takes time and effort. Japanese Patent No. 7156527 discloses a technology of detecting damage for each detection point from a road surface image and discriminating an image position having a high possibility of erroneous detection. Using the technology of Japanese Patent No. 7156527, an inspection worker can efficiently confirm and correct an erroneous detection result included in a detection result. Japanese Patent No. 5645730 discloses a technology of detecting a closure crack.
However, in the known technologies, a detection target such as damage is detected in isolation, so it has been difficult to reduce detection omission.
According to one aspect of the present disclosure, there is provided an information processing apparatus comprising: one or more processors; and one or more memories including instructions that, when executed by the one or more processors, cause the information processing apparatus to: acquire an image obtained by capturing a structure in which a detection target exists, execute first detection data acquisition processing of acquiring first detection data that is data related to a plurality of detection targets detected from the image, acquire combination information related to a combination of a first detection target included in the plurality of detection targets and a second detection target associated with the first detection target, and analyze the first detection target of the first detection data based on the second detection target combined with the first detection target indicated by the combination information, and generate an analysis result.
According to another aspect of the present disclosure, there is provided an information processing method comprising: acquiring an image obtained by capturing a structure in which a detection target exists; acquiring first detection data that is data related to a plurality of detection targets detected from the image; acquiring combination information related to a combination of a first detection target included in the plurality of detection targets and a second detection target associated with the first detection target; and analyzing the first detection target of the first detection data based on the second detection target combined with the first detection target indicated by the combination information, and generating an analysis result.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing a computer program that, when read and executed by a computer, causes the computer to function as an image acquisition unit that acquires an image obtained by capturing a structure in which a detection target exists, a first detection data acquisition unit that acquires first detection data that is data related to a plurality of detection targets detected from the image, a combination information acquisition unit that acquires combination information related to a combination of a first detection target included in the plurality of detection targets and a second detection target associated with the first detection target, and an analysis unit that analyzes the first detection target of the first detection data based on the second detection target combined with the first detection target indicated by the combination information, and generates an analysis result.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note that the following embodiments are not intended to limit the scope of the claimed invention. Multiple features are described in the embodiments, but limitation is not made to an invention that requires all such features, and multiple such features may be combined as appropriate. Furthermore, in the attached drawings, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
Hereinafter, embodiments in which an information processing apparatus of the present invention is applied to a computer apparatus used for inspection of an infrastructure structure such as a concrete structure will be described. In the first embodiment, an example will be described in which a computer apparatus operates as an information processing apparatus and, when displaying an image obtained by capturing an inspection target together with a detection result in which a detection target is detected from the image, can display the periphery of an image position where there is a high possibility that the detection target exists.
In the present embodiment, the “inspection target” is a structure made of concrete or the like that is a target to be inspected, such as a limited-access road, a bridge, a tunnel, and a dam. The information processing apparatus performs detection processing of detecting presence or absence and a state of a detection target such as a crack by using an image obtained by a user capturing a structure of an inspection target. For example, in a case of a concrete structure, the “detection target” is a crack, chalking, creep, or flaking of concrete. The “detection target” includes, in addition, efflorescence, rebar exposure, rust, water leakage, water dripping, corrosion, damage (defect), cold joint, deposits, and honeycomb.
A hardware configuration of the information processing apparatus of the first embodiment will be described with reference to the drawings.
The information processing apparatus 100 includes a control unit 101, a nonvolatile memory 102, a work memory 103, a storage device 104, an input device 105, an output device 106, a network interface 107, and a system bus 108.
The control unit 101 includes an arithmetic processor such as a central processing unit (CPU) or a micro processing unit (MPU) that integrally controls the entire information processing apparatus 100. The control unit 101 may include an arithmetic processor such as a graphics processing unit (GPU) or a quantum processing unit (QPU) in addition to or in place of the CPU and the like.
The nonvolatile memory 102 is a read only memory (ROM) that stores a program to be executed by the processor of the control unit 101 and parameters necessary for executing the program. Here, the program is a program for executing processing of each embodiment described later. Specifically, the nonvolatile memory 102 stores an operating system (OS) that is basic software executed by the control unit 101, and an application that implements an application function in cooperation with this OS.
In the present embodiment, the nonvolatile memory 102 stores an application for the information processing apparatus 100 to implement control processing. The control processing of the information processing apparatus 100 of the present embodiment is implemented by reading software provided by an application. It is assumed that the application includes software for using basic functions of the OS installed in the information processing apparatus 100. The OS of the information processing apparatus 100 may include software for implementing the control processing in the present embodiment.
The work memory 103 is a random access memory (RAM) that temporarily stores programs and data supplied from an external device or the like. The work memory 103 holds data obtained by executing the control processing described later.
The storage device 104 includes a memory card including a semiconductor memory, a magnetic disk, a solid state drive (SSD), or a hard disk. The storage device 104 may also include a disk drive that reads/writes data from/to an optical disk such as a DVD or a Blu-ray Disc. The storage device 104 may be an internal device built in the information processing apparatus 100, or may be an external device detachably connected to the information processing apparatus 100.
The input device 105 is an operation member such as a mouse, a keyboard, or a touchscreen that receives a user operation. The input device 105 outputs an operation instruction received from a user to the control unit 101.
The output device 106 is an example of a display unit, and is a display apparatus such as a liquid crystal display (LCD) or organic electroluminescence (EL) display or monitor, and displays data retained by the information processing apparatus 100 and data supplied from an external device.
The network interface 107 is connected to a network such as the Internet and a local area network (LAN) so as to be able to communicate thereover.
The system bus 108 connects the control unit 101, the nonvolatile memory 102, the work memory 103, the storage device 104, the input device 105, the output device 106, and the network interface 107 of the information processing apparatus 100 so as to be able to exchange data. The system bus 108 includes an address bus, a data bus, and a control bus.
Next, functional blocks of the information processing apparatus 100 of the first embodiment will be described with reference to the drawings. The information processing apparatus 100 includes, as functional blocks, a storage unit 121, a management unit 122, an image acquisition unit 123, a first detection data acquisition unit 124, a combination information acquisition unit 125, a second detection data acquisition unit 126, an analysis unit 127, a noted position determination unit 128, and a display control unit 129.
The management unit 122 performs management, such as registration, deletion, acquisition, and update, of processing-target data stored in the storage unit 121, including image data of an image of a structure or the like and detection data.
The image acquisition unit 123 acquires image data of the processing target from the storage unit 121 via the management unit 122.
The first detection data acquisition unit 124 acquires first detection data regarding a first detection target. The first detection target includes a crack, chalking, creep, flaking, rebar exposure, efflorescence, and water leakage of concrete.
The combination information acquisition unit 125 performs processing of acquiring combination information of the detection target stored in the database 130.
The second detection data acquisition unit 126 acquires, from the storage unit 121 via the management unit 122, second detection data regarding a second detection target having a high relatedness with the first detection target based on the combination information. The second detection target is any of the detection targets that can serve as the first detection target, and is, for example, chalking, rebar exposure, flaking, or cracking.
The analysis unit 127 analyzes the first detection target of the first detection data based on the second detection target combined with the first detection target indicated by the combination information. Specifically, the analysis unit 127 sets an analysis range to the first detection data based on the second detection data related to the second detection target, and analyzes the first detection target combined with the second detection target within the analysis range of the first detection data and generates an analysis result.
The noted position determination unit 128 determines a noted position on an image based on an analysis result.
The display control unit 129 creates and outputs, to the output device 106, display data that is data of a display screen using a selected image, the first detection data, the noted position, and the like, and causes the output device 106 to display the display screen.
Details of the processing and the functions of the image acquisition unit 123, the first detection data acquisition unit 124, the combination information acquisition unit 125, the second detection data acquisition unit 126, the analysis unit 127, the noted position determination unit 128, and the display control unit 129 will be described later.
Combination information used for control processing of the information processing apparatus 100 of the present embodiment will be described with reference to the drawings.
For example, in close visual inspection of a site, an inspection worker sometimes performs work of marking chalking in the vicinity of a crack occurring on a wall surface of the inspection target. Based on this finding, the combination of the crack and the chalking can be one piece of the combination information stored in the database 130. When the wall surface of the structure of the inspection target deteriorates, the surface may be peeled off to cause flaking. When this deterioration progresses, the rebar inside the inspection target may be exposed around the flaking. Based on this finding, the combination of the flaking and the rebar exposure can also be one piece of the combination information stored in the database 130.
In the combination information in the present embodiment, there is a case where it is desired to limit a combination to only some detection targets: for example, a case where, instead of all cracks, only cracks having a certain length or more are set as combination information with chalking. Such a case can be handled by providing a remarks column as in the combination information 211 of the drawings.
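As a minimal illustration (a hypothetical sketch, not part of the disclosed apparatus), the combination information of the database 130 can be modeled as records that pair a first detection target with a second detection target and carry an optional remarks condition; all field names and the length threshold below are assumptions.

```python
# Hypothetical model of combination information records in the database 130.
# Field names and the 500 mm threshold are illustrative assumptions.
COMBINATION_INFO = [
    {"first": "crack", "second": "chalking",
     "remarks": {"min_length_mm": 500}},          # limit to long cracks only
    {"first": "flaking", "second": "rebar_exposure",
     "remarks": None},                            # no limiting condition
]

def satisfies_remarks(info, detection):
    """Return True if a detected first target meets the remarks condition."""
    cond = info["remarks"]
    if cond is None:
        return True
    return detection.get("length_mm", 0.0) >= cond["min_length_mm"]
```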
Next, an image and detection data used for control processing of the information processing apparatus 100 of the present embodiment will be described with reference to the drawings.
However, since it is difficult to express all detection targets on the drawing sheet, the detection targets displayed are limited to some of them. The detection data is information recording a result of detecting a crack or the like occurring on a concrete wall surface from an image, either by AI using machine learning, deep learning, or the like, or by human input. Since the detection data may contain AI detection errors and human input errors, detection omission can occur.
An object of the present embodiment is to find this detection omission. Description of the present embodiment assumes that detection data is managed in association with the drawings.
The position on the drawing of each piece of detection data in the detection data 321 is defined by the pixel coordinates constituting that piece of detection data.
In the control processing of the present embodiment, the information processing apparatus 100 can also use an image and detection data managed in association with each other. By associating the image with the detection data, the information processing apparatus 100 can obtain the positional relationship between the detection data and the image without using the drawings.
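For concreteness, detection data positioned by pixel coordinates might be represented as follows. This is a sketch under assumed field names, not the disclosed data format; later sketches in this description reuse this hypothetical class.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """One piece of detection data located by pixel coordinates on the
    drawing (or on an image associated with the drawing)."""
    target: str                       # e.g. "crack" or "chalking"
    points: List[Tuple[int, int]]     # pixel coordinates forming the shape

    def bbox(self) -> Tuple[int, int, int, int]:
        """Axis-aligned bounding box (x0, y0, x1, y1) of the detection."""
        xs = [x for x, _ in self.points]
        ys = [y for _, y in self.points]
        return min(xs), min(ys), max(xs), max(ys)
```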
The control processing of the information processing apparatus 100 in the present embodiment will be described with reference to the drawings.
The image acquisition unit 123 performs processing of acquiring an image of the processing target. In the present embodiment, an image stored in the storage unit 121 is acquired via the management unit 122.
The first detection data acquisition unit 124 acquires the first detection data of the first detection target corresponding to the image acquired in S401.
The combination information acquisition unit 125 acquires combination information indicating a combination of the first detection target and the second detection target having a high relatedness from the database 130.
In the processing of S403, a plurality of pieces of combination information may be acquired. For example, the combination of a crack and chalking and the combination of flaking and rebar exposure may both be acquired.
The second detection data acquisition unit 126 performs processing of acquiring the second detection data of the second detection target set based on the combination information. In the present embodiment, the second detection data acquisition unit 126 sets the second detection target based on the combination information acquired in S403, and acquires the second detection data of the second detection target. This second detection data acquisition processing will be described with reference to the drawings.
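As a sketch of the idea of this acquisition processing (assuming the hypothetical Detection representation above and a simple list of all detections of the processing target), the second detection target is read from one piece of combination information and its detection data is gathered:

```python
def acquire_second_detection_data(all_detections, combo):
    """Set the second detection target from one piece of combination
    information and gather its detection data (second detection data)."""
    return [d for d in all_detections if d.target == combo["second"]]
```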
In S405, the analysis unit 127 sets an analysis range and performs processing of analyzing the first detection data in the analysis range. The analysis processing in the present embodiment will be described with reference to the drawings.
When performing the analysis processing, the analysis unit 127 first sets the analysis range. As a setting method of the analysis range, the analysis unit 127 may use the second detection data. Specifically, the analysis unit 127 selects one piece of chalking data 912 from the second detection data 911. Then, the analysis unit 127 sets a range 913, indicated by a dotted line, surrounding the chalking data 912. The size of the range 913 may be changed to any size as long as the range includes the coordinate information of the chalking data 912. For example, the analysis unit 127 can set the range surrounding the chalking data 912 as an initial range and expand the range by any amount in the X-axis and Y-axis directions in the drawing. Subsequently, the analysis unit 127 superimposes the range 913 on the first detection data 901. The analysis unit 127 sets a range 903 obtained by this superimposition as an analysis range. In this manner, the analysis unit 127 can set the analysis range using the second detection data.
Next, the analysis unit 127 performs analysis processing on the first detection data included in the analysis range. Specifically, the analysis unit 127 selects crack data 902 included in the range 903 of the first detection data 901. Then, as analysis processing for the crack data 902, the analysis unit 127 calculates the number of pieces of selected crack data, the total extension (total length) of the selected crack data, and the like.
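The range setting and analysis described above might look as follows in outline. The margin value and the membership criterion (a piece counts as inside the range if any of its points falls within it) are assumptions, and the sketch reuses the hypothetical Detection class introduced earlier.

```python
import math

def set_analysis_range(second_det, margin=50):
    """Set an analysis range: the bounding box of one piece of second
    detection data (e.g. chalking data 912) expanded by an arbitrary
    margin in the X-axis and Y-axis directions (range 913)."""
    x0, y0, x1, y1 = second_det.bbox()
    return (x0 - margin, y0 - margin, x1 + margin, y1 + margin)

def analyze_first_data(first_dets, rng, reference=None):
    """Analyze the first detection data within the analysis range: count
    the pieces inside and sum their polyline lengths (total extension).
    `reference` optionally carries the second detection data (e.g. the
    chalking) from which the range was derived."""
    x0, y0, x1, y1 = rng
    inside = [d for d in first_dets
              if any(x0 <= x <= x1 and y0 <= y <= y1 for x, y in d.points)]
    total = sum(math.dist(a, b)
                for d in inside for a, b in zip(d.points, d.points[1:]))
    return {"count": len(inside), "total_extension": total,
            "range": rng, "reference": reference}
```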
The analysis unit 127 may use another method as a setting method of the analysis range using the second detection data.
The analysis unit 127 may use still another method as a setting method of the analysis range. For example, the analysis unit 127 may set an entire image of the processing target as one analysis range. The analysis unit 127 may divide the image of the processing target into rectangular ranges of a certain size, and set individual divided ranges as the analysis range. When setting the analysis range without using the second detection data, the analysis unit 127 may also perform, on the second detection data, processing similar to the analysis processing of the first detection data in the analysis range. Due to this, the noted position determination unit 128 can determine a noted position also using the analysis result of the second detection data in the processing of S406 described later.
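A sketch of the grid-based alternative follows; the tile size is an arbitrary assumption.

```python
def grid_analysis_ranges(width, height, tile=512):
    """Divide the whole processing-target image into rectangular analysis
    ranges of a certain size; the 512-pixel tile is illustrative."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (x, y, min(x + tile, width), min(y + tile, height))
```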
As another analysis method, the analysis unit 127 can use a different analysis method for each piece of combination information.
The noted position determination unit 128 performs processing of determining a noted position on an image based on an analysis result by the analysis unit 127. The noted position is, for example, a position within an analysis range, among the plurality of analysis ranges, in which the second detection target is detected but the first detection target associated with the second detection target is not detected. In other words, the noted position is a position within the analysis range having a high possibility of detection omission to which the user or the like should pay attention. Noted position determination processing will be described with reference to the drawings.
D < Dc (Formula 1)
In Formula 1, a parameter Dc is any constant, and a parameter D is the detection data quantity in the analysis result. Here, description of the present embodiment assumes that the parameter Dc is “1”. Note that the parameter Dc may be the number of pieces of second detection data within the target analysis range, that is, the number of second detection targets detected within the analysis range. Due to this, it is possible to cope with a case where a plurality of pieces of chalking are detected within the analysis range but one of the plurality of corresponding cracks is not detected. The noted position determination unit 128 selects one analysis result from the analysis result 1101, substitutes the detection data quantity of the selected analysis result into Formula 1, and determines whether or not Formula 1 is satisfied.
Next, the noted position determination unit 128 determines the noted position based on the analysis result 1111 after narrowing down. Methods of determining the noted position include a method of calculating barycentric coordinates of reference data D011, which is a reference of the analysis range, in the analysis result 1111 and setting the barycentric coordinates as the noted position. The reference data is a part of the second detection data and has coordinate information on the drawing. Therefore, the noted position determination unit 128 can calculate the barycentric coordinates on the drawing based on the reference data. As another method of determining the noted position, the noted position determination unit 128 may perform a method of calculating the center coordinates of the analysis range and setting the center coordinates as the noted position. In this manner, the noted position determination unit 128 can determine the noted position on the image based on the analysis result.
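Putting Formula 1 and the barycenter method together, the determination might be sketched as follows. The sketch assumes each analysis result carries the reference (second detection) data of its range, as in the earlier analyze_first_data sketch, and Dc defaults to 1 as in the description above.

```python
def determine_noted_positions(analysis_results, dc=1):
    """Keep analysis results whose detection data quantity D satisfies
    Formula 1 (D < Dc), then use the barycenter of the reference data
    (second detection data) of each remaining range as the noted position."""
    noted = []
    for result in analysis_results:
        if result["count"] < dc:                  # Formula 1: D < Dc
            pts = result["reference"].points      # reference data of the range
            cx = sum(x for x, _ in pts) / len(pts)
            cy = sum(y for _, y in pts) / len(pts)
            noted.append((cx, cy))
    return noted
```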
Although an example of using the detection data quantity has been described as a method of narrowing down the analysis result, the noted position determination unit 128 may use another method. For example, the noted position determination unit 128 may perform narrowing down based on a determination result as to whether or not the detection area in the analysis result is less than a reference value. In this case, using Formula 1 as it is, the noted position determination unit 128 may set the reference value of the detection area as the parameter Dc, and substitute the value of the detection area of the analysis result into the parameter D. In this manner, the noted position determination unit 128 can narrow down the analysis result by using any information regarding the analysis result.
The processing of narrowing down the analysis result of the noted position determination unit 128 can be omitted. In the present embodiment, the noted positions corresponding to all the analysis results can be obtained by omitting the narrowing down processing. Therefore, in display processing of S407 described later, it is possible to confirm, without fail, not only the image position where the possibility of detection omission is high but also the image position where the possibility that the detection target exists is high.
In S407, the display control unit 129 performs processing of creating and outputting, to the output device 106, display data to be displayed as an image based on a partial image in the vicinity of the noted position. The output device 106 displays the display image based on the display data.
A display screen 1201 of the drawings is an example of a display screen displayed by the display data created in the processing of S407, and includes a noted position list 1202 in which the determined noted positions are listed.
In the present embodiment, the display control unit 129 displays, on the output device 106, an enlarged view 1205 of the vicinity of the noted position selected by the user in the noted position list 1202. The enlarged view 1205 is a view in which crack detection data and chalking detection data are superimposed on the partial image in the vicinity of a noted position 1204 having been selected. In this enlarged view 1205, there is chalking data 1208 corresponding to chalking 1207 on the image, but there is no crack data corresponding to a crack 1206 on the image. In this manner, the noted position indicates an image position where there is a high possibility of detection omission regarding the first detection target. Therefore, by utilizing the information of the noted position, the user can efficiently perform confirmation work of the detection result.
A range 1209 indicated by a dotted line is the analysis range regarding the analysis result used in the noted position determination processing. In this manner, by visualizing the analysis range, the display control unit 129 makes it easy for the user to understand the image range to be confirmed. By selecting a display target in display switching 1203, the user can individually switch between displaying and hiding each piece of detection data and each partial image included in the enlarged view 1205. The user can move or deform a thick frame 1216 of an overall view 1215 by various operations of the input device 105. Due to this, the display control unit 129 can freely switch the display range and the display position of the display image displayed as the enlarged view 1205.
In the display processing of the display control unit 129, the combination information acquired in the processing of S403 can be displayed on the screen. A display screen 1211 of the drawings is an example of a screen on which this combination information is displayed.
The noted position list 1214 on the display screen 1211 indicates a state in which the entries are sorted in ascending order of flaking detection area. In this manner, the display control unit 129 may change the arrangement order of the noted position list 1214. In a case where the number of noted positions is large, it takes time and effort for the user to confirm all the noted positions. Therefore, the display control unit 129 may be provided with a narrowing function that narrows down and displays, for example, only the top five entries having the smallest quantities of flaking detection data in the noted position list, so that the user can efficiently confirm the noted positions.
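The sorting and narrowing of the noted position list might be realized as simply as the following; the entry field name is an assumption.

```python
def narrow_noted_list(entries, top_n=5):
    """Sort noted-position entries in ascending order of flaking detection
    area and keep only the top N (N = 5 in the example above)."""
    return sorted(entries, key=lambda e: e["flaking_area"])[:top_n]
```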
In the processing of S407, the display control unit 129 may display the entire image including the noted position. After performing the noted position determination processing using the present embodiment on a large number of images, the display control unit 129 can display the entire image including the noted position in the processing of S407. In this manner, the display control unit 129 can pick up an image having a high possibility of including detection omission of the first detection target, whereby the user can efficiently narrow down an image to be confirmed.
In a case where there are a large number of detection results in the vicinity of the noted position, it is difficult for the user to browse an image in the vicinity of the noted position. In such a case, in the processing of S407, the display control unit 129 may display only the detection data regarding the analysis result used in the noted position determination processing and hide the other detection data.
The display image 1301 indicates a state in which flaking detection data and crack detection data are superimposed on the image of the processing target. The image of the processing target of the display image 1301 includes, in addition to flaking 1302 used in the noted position determination processing, a plurality of cracks including a crack 1303 having a maximum width of 1 mm or more, and a plurality of pieces of crack data including crack data 1304. The display image 1311 is different from the display image 1301 in displaying, among the plurality of pieces of crack data, only the crack data 1304 regarding the noted position determination processing. In the display image 1311, as compared with the display image 1301, the user can easily visually recognize the flaking 1302, the crack 1303, and the crack data 1304 on the image. In this manner, the display control unit 129 may display only detection data regarding the noted position determination processing and hide other detection data.
In the first embodiment, the analysis unit 127 can also use detection data corrected by the user. Specifically, the first detection data acquisition unit 124 acquires a correction history of the first detection data from the storage unit 121 together with the first detection data of the first detection target. The first detection data acquisition unit 124 corrects the first detection data based on the correction history. In the subsequent analysis processing, the analysis unit 127 performs analysis processing using the corrected first detection data. The other processing is similar to that of the first embodiment. Since a noted position obtained in this manner indicates a position having a high possibility that detection omission remains even after the user corrects the first detection data, it is desirable to confirm the noted position. In this manner, by the first detection data acquisition unit 124 using the correction history of the detection data, the information processing apparatus 100 can determine an image position having a high possibility of detection omission.
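A sketch of applying a correction history before analysis follows; the operation names ("add", "delete", "modify") and record fields are hypothetical, not the disclosed format.

```python
def apply_correction_history(first_dets, history):
    """Apply a user's correction history (oldest first) to the first
    detection data, given as dicts keyed by an assumed "id" field."""
    data = {d["id"]: d for d in first_dets}
    for op in history:
        if op["op"] == "add":
            data[op["detection"]["id"]] = op["detection"]
        elif op["op"] == "delete":
            data.pop(op["target_id"], None)
        elif op["op"] == "modify":
            data[op["target_id"]].update(op["changes"])
    return list(data.values())
```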
In the first embodiment, the information processing apparatus 100 can also utilize a user's operation history including a browsing history indicating browsing of the detection result. Specifically, the first detection data acquisition unit 124 acquires the detection data of the first detection target and a user browsing history of the processing-target image. In the subsequent noted position determination processing, the noted position determination unit 128 may determine the noted position on the image based on the operation history including the browsing history. For example, the noted position determination unit 128 determines the noted position from an image range not browsed by the user. In this manner, by using the user's operation history, the noted position determination unit 128 can determine, as the noted position, a position having a high possibility that the user has overlooked a detection omission in the detection result confirmation work. Note that the timing of using the user's browsing history need not be the noted position determination processing. For example, in the second detection data acquisition processing, the second detection data acquisition unit 126 may acquire the second detection data from a range not browsed by the user. In the analysis processing, the analysis unit 127 may exclude the range browsed by the user from the analysis range. In this manner, the information processing apparatus 100 can use the user's browsing history in various ways.
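For example, excluding already-browsed image ranges when determining noted positions might look like the following sketch; representing browsed ranges as (x0, y0, x1, y1) rectangles is an assumption.

```python
def exclude_browsed(noted_positions, browsed_ranges):
    """Keep only noted positions falling outside every image range the
    user has already browsed."""
    def was_browsed(pos):
        x, y = pos
        return any(x0 <= x <= x1 and y0 <= y <= y1
                   for x0, y0, x1, y1 in browsed_ranges)
    return [p for p in noted_positions if not was_browsed(p)]
```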
According to the first embodiment described above, the information processing apparatus 100 analyzes the first detection data, which indicates the result of detecting the first detection target from an image, based on combination information that associates the first detection target, such as a crack, with a second detection target, such as chalking, that has a high possibility of existing around the first detection target. Due to this, the information processing apparatus 100 can suppress detection omission of the first detection target as compared with a case where the first detection target is detected and analyzed alone.
In the information processing apparatus 100, the display control unit 129 generates and outputs, to the output device 106, display data of a display image based on the analysis result of the first detection data, and displays the display image on the output device 106. Due to this, the information processing apparatus 100 can provide the user with a display image by an analysis result with less detection omission.
In the information processing apparatus 100, the second detection data acquisition unit 126 acquires the second detection data that is a detection result of the second detection target associated with the first detection target. Due to this, the information processing apparatus 100 can analyze the first detection data based on the second detection data, and thus, even in a case where the first detection target is not detected, it is possible to provide the user with useful information by detecting the second detection target.
In the information processing apparatus 100, the analysis unit 127 sets an analysis range in an image based on the second detection data, analyzes the first detection data in the analysis range, and generates an analysis result. Due to this, the information processing apparatus 100 can efficiently suppress detection omission of the first detection target while reducing the range to be analyzed and reducing the processing burden.
In the information processing apparatus 100, the noted position determination unit 128 determines the noted position within the analysis range. Due to this, the information processing apparatus 100 can reduce the range to which the user should pay attention, and thus the confirmation burden by the user can be reduced.
In the information processing apparatus 100, the display control unit 129 determines the noted position in the analysis range on the image in accordance with the combination information, and creates the display data of the display screen on which a partial image in the vicinity of the noted position and the first detection data of the detection target are superimposed. Due to this, the information processing apparatus 100 can generate a display screen by the partial image including the noted position having a high possibility of detection omission to which the user should pay attention. By displaying, on the output device, the display screen created by the display control unit 129, the information processing apparatus 100 can improve the efficiency of the confirmation work of the detection data by the user.
In the information processing apparatus 100, the display control unit 129 displays a display screen including combination information associated with the coordinates of the noted position. Due to this, the information processing apparatus 100 can provide the user with necessary information in detail.
In the information processing apparatus 100, the display control unit 129 includes, in the display screen, an enlarged view including the analysis range including the detection data. Due to this, the information processing apparatus 100 can provide the user with detection data to be noted among the detection data analyzed within the analysis range.
In the information processing apparatus 100, the first detection target is a crack, and the second detection target is chalking. Since the information processing apparatus 100 analyzes the first detection data based on the combination information in which the crack and the chalking are associated with each other, even in a case where the crack is not detected, if the chalking is detected, it is possible to determine the noted position in the analysis range of the first detection data, and therefore, it is possible to reduce detection omission of the crack.
In the first embodiment, an example of automatically determining the noted position using the combination information of the detection target has been described. In a case where there are a large number of pieces of combination information of detection targets, a large amount of calculation cost is required. Therefore, the information processing apparatus 100 of the second embodiment displays the combination information of a detection target on the screen in addition to an image of the processing target and the detection data, and receives selection of the combination information by the user. Then, the information processing apparatus 100 performs the noted position determination processing based on the combination information selected by the user. In this manner, the information processing apparatus 100 of the second embodiment can suppress the calculation cost by causing the user to select the combination information to be used for the noted position determination processing.
Since the hardware configuration of the information processing apparatus 100 according to the second embodiment is similar to the configuration of the first embodiment illustrated in the drawings, description thereof is omitted. The functional configuration of the second embodiment differs from that of the first embodiment in further including a reception unit 1401 that receives a user's selection of combination information.
In the case of the second embodiment, after acquiring the combination information in S403, the information processing apparatus 100 proceeds to S1501. In the processing of S1501, the display control unit 129 creates display data including the combination information acquired in S403, and displays, on the output device 106, a display image by the display data. Thereafter, in S1502, the reception unit 1401 receives selection of the combination information from the user.
Thereafter, in S404, the second detection data acquisition unit 126 acquires the second detection data based on the selected combination information. Thereafter, through the analysis processing in S405 and the noted position determination processing in S406, the display control unit 129 creates and outputs, to the output device 106, display data in S407. In S1503, when determining that the browsing has ended, the information processing apparatus 100 ends the processing, and otherwise, executes the processing of S1502 again. An outline of the processing of S1501 and S1502 will be described below.
The display control unit 129 creates and outputs, to the output device 106, display data including the combination information acquired in S403. A screen 1601 of the drawings is an example of a display image in which the combination information is displayed together with the image of the processing target.
The reception unit 1401 performs the processing of receiving selection of the combination information by the user. For example, when the user presses an analysis execution button 1606 on the screen 1601 of the drawings, the reception unit 1401 receives the combination information that the user has selected on the screen.
When the user selects a plurality of pieces of combination information, the reception unit 1401 receives all the plurality of pieces of combination information that are selected. In that case, similarly to the first embodiment, the processing of S404, S405, and S406 may be executed for each piece of combination information.
By using the method described above, the information processing apparatus 100 performs the noted position determination processing using the combination information selected by the user and received by the reception unit 1401. Due to this, the information processing apparatus 100 can generate the display image by the analysis result based on the second detection target in line with the user's desire while suppressing unnecessary calculation cost.
In the first embodiment, an example has been described in which a position having a high possibility that detection omission of the detection target such as a crack exists is obtained as a noted position. Such a noted position indicates that a detection target that is difficult for the existing AI model to detect exists in its vicinity. Therefore, the image in the vicinity of the noted position is effective data for updating the AI model. In the third embodiment, a method of collecting images in the vicinity of noted positions as learning data will be described.
Since the hardware configuration of the information processing apparatus 100 according to the third embodiment is similar to the configuration of the first embodiment illustrated in the drawings, description thereof is omitted. The functional configuration of the third embodiment differs from that of the first embodiment in further including a collection unit 1701 that collects learning data.
In S1801, the collection unit 1701 performs processing of setting a collection range in an image and collecting, as learning data, at least one of an image in the collection range and the first detection data. Description of the learning data collection processing of the present embodiment assumes that a crack is the first detection target and chalking is the second detection target.
The image range to be collected as the learning data may be set, for example, in the same manner as the analysis range regarding the analysis result used in the noted position determination processing in S406. By collecting the partial image in the collection range corresponding to the analysis range, the collection unit 1701 can efficiently collect an image having a high possibility that detection omission exists. The collection unit 1701 may freely change the size of the collection range set based on the analysis range. For example, the collection unit 1701 can change the collection range by setting the analysis range as an initial collection range and expanding it by any amount in the X-axis and Y-axis directions on the drawing. The collection unit 1701 can also reduce the collection range so as to still include at least the analysis range. As another method of setting the collection range, the collection unit 1701 can set an analysis range including the noted position as the collection range.
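A sketch of setting a collection range from an analysis range and cropping the partial image follows; it assumes the image is a NumPy-style height-by-width array, and the expansion amount is arbitrary.

```python
import numpy as np

def collect_partial_image(image: np.ndarray, analysis_range, expand=100):
    """Expand the analysis range by an arbitrary amount in the X and Y
    directions to form the collection range, then crop the partial image,
    clamping the range to the image boundaries."""
    x0, y0, x1, y1 = analysis_range
    h, w = image.shape[:2]
    cx0, cy0 = max(0, x0 - expand), max(0, y0 - expand)
    cx1, cy1 = min(w, x1 + expand), min(h, y1 + expand)
    return image[cy0:cy1, cx0:cx1], (cx0, cy0, cx1, cy1)
```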
As a method of setting the collection range to be collected as the learning data, the collection unit 1701 can also perform determination based on the user's operation instruction. For example, in the processing of S407, the display control unit 129 displays display data including a partial image in the vicinity of the noted position. The user inputs a range that the user desires to collect as learning data while changing the display position and the display range by operating the input device on the display data. The collection unit 1701 extracts and collects, as learning data, a partial image corresponding to the input range. In this manner, the collection unit 1701 may determine the range of the image to be collected as the learning data based on the user's operation instruction.
As the image to be collected as the learning data, the collection unit 1701 may collect the entire image including the noted position. By collecting images including noted positions in units of whole images, the collection unit 1701 can collect images having a high possibility that detection omission occurs. Note that the collection unit 1701 may also collect detection data in the learning data collection processing. The collected detection data can be used as an initial value of training data. In this manner, the collection unit 1701 may collect the detection data together with the image.
The analysis unit 127 may set the analysis range based on the correction history of the first detection data by the user and analyze the first detection data in the corrected analysis range. For example, the analysis unit 127 may set, as a new analysis range, the analysis range corrected by the user among the plurality of analysis ranges that are set, and analyze again the first detection data in the analysis range. In this case, the collection unit 1701 may collect, as the learning data, the collection range set based on the correction history of the first detection data by the user in the learning data collection processing.
The user confirms the detection data of the first detection target while visually confirming the partial image in the vicinity of the noted position on the screen created by the processing of S407. When detection omission is found in the confirmation work, the user additionally writes or corrects the detection data.
In the information processing apparatus 100, the collection unit 1701 sets a collection range in the vicinity of the noted position and collects a partial image that is an image of the collection range. Due to this, the information processing apparatus 100 can use, as the learning data, the partial image to be noted, and therefore the information processing apparatus 100 can efficiently collect the learning data.
In the information processing apparatus 100, the collection unit 1701 collects, as a partial image, an image in the collection range set based on the analysis range set based on the correction history. Due to this, the information processing apparatus 100 can efficiently collect the learning data because it collects, as a collection range, a corrected range, that is, a range having a high possibility that detection omission has occurred, and collects, as learning data, a partial image in the collection range.
In the information processing apparatus 100, the collection unit 1701 collects, as learning data, the first detection data corrected together with the partial image. Due to this, the information processing apparatus 100 can efficiently perform learning based on the corrected first detection data.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2023-178384, filed Oct. 16, 2023, which is hereby incorporated by reference herein in its entirety.