The present invention relates to an information processing device, an information processing method, a computer-readable medium, and an information processing system.
Japanese Patent No. 4551990 describes a method and a device for creating a panoramic image that is elongated in the direction of movement and wider than the viewing angle of each line camera, by repeatedly capturing images with each line camera while a moving object is moving.
According to an aspect of the present invention, an information processing device includes a generation means configured to connect captured images obtained by capturing, by an image capturing device installed in a moving object, a target region including a target object and a part other than the target object while dividing the target region into a plurality of image capturing regions along a moving direction of the moving object, to generate a display screen displaying a composite image including a boundary between the target object and the part other than the target object in the moving direction of the moving object.
The accompanying drawings are intended to depict exemplary embodiments of the present invention and should not be interpreted to limit the scope thereof. Identical or similar reference numerals designate identical or similar components throughout the various drawings.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present invention.
As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In describing preferred embodiments illustrated in the drawings, specific terminology may be employed for the sake of clarity.
However, the disclosure of this patent specification is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents that have the same function, operate in a similar manner, and achieve a similar result.
An embodiment of the present invention will be described in detail below with reference to the drawings.
An object of an embodiment is to enable confirmation of the position of a target part in an image captured by an image capturing device installed in a moving object.
Hereinafter, embodiments for carrying out the invention will be described with reference to the drawings. In the description of the drawings, the same elements are denoted by the same reference numerals, and redundant description is omitted.
First, an outline of a condition inspection system will be described with reference to
The condition inspection system 1 includes the moving object system 60, an evaluation system 4, a communication terminal 1100 of a national or local government, and a communication terminal 1200 of an entrusted business operator. The moving object system 60 is an example of an image capturing system, and includes a data acquisition device 9 and a moving object 6 such as a vehicle on which the data acquisition device 9 is mounted. The vehicle may be a vehicle running on a road or a vehicle running on a railway track. The data acquisition device 9 includes an image capturing device 7, which is an example of a measurement device for measuring a structure, a distance sensor 8a, and a global navigation satellite system (GNSS) sensor 8b. GNSS is a general term for satellite positioning systems such as a global positioning system (GPS) or a quasi-zenith satellite system (QZSS).
The image capturing device 7 is a line camera equipped with a line sensor in which photoelectric conversion elements are arranged in one row or a plurality of rows. The image capturing device 7 captures an image of a position along a predetermined image capturing range on an image capturing surface along the traveling direction of the moving object 6. Note that the image capturing device is not limited to the line camera, and may be a camera equipped with an area sensor in which photoelectric conversion elements are arranged in a planar shape. Alternatively, the image capturing device may include a plurality of cameras.
The distance sensor 8a is a time of flight (ToF) sensor, and measures a distance to a subject captured by the image capturing device 7. The GNSS sensor 8b is a positioning means that receives signals transmitted from a plurality of GNSS satellites, calculates the distance to each satellite based on the time at which each signal is received, and thereby measures a position on the earth. The positioning means may be a device dedicated to positioning, or may be an application dedicated to positioning installed on a personal computer (PC), a smartphone, or the like. The distance sensor 8a and the GNSS sensor 8b are examples of sensor devices. The distance sensor 8a is an example of a three-dimensional sensor.
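The distance calculation from signal travel time can be sketched as follows. This is an illustrative example only, not the patented implementation, and the function name is hypothetical.

```python
# Illustrative sketch: a GNSS receiver estimates the distance to each
# satellite from the travel time of the satellite's signal.
C = 299_792_458.0  # speed of light in m/s

def pseudorange(transmit_time_s: float, receive_time_s: float) -> float:
    """Distance to a satellite computed from the signal's time of flight."""
    return (receive_time_s - transmit_time_s) * C

# A signal that took ~67 ms to arrive corresponds to roughly 20,000 km,
# a typical GNSS orbital distance.
d = pseudorange(0.0, 0.067)
```

With distances to several satellites, the receiver's position on the earth is then obtained by trilateration.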
The ToF sensor used as the distance sensor 8a irradiates an object with laser light from a light source and measures the scattered or reflected light, thereby measuring the distance from the light source to the object.
In the present embodiment, the distance sensor 8a is a light detection and ranging (LiDAR) sensor. LiDAR measures the time of flight of light using a pulse; alternatively, the ToF sensor may measure the distance using a phase difference detection method. In the phase difference detection method, a measurement range is irradiated with laser light amplitude-modulated at a fundamental frequency, the light reflected therefrom is received, and the phase difference between the irradiated light and the reflected light is measured to obtain the time of flight, from which the distance is calculated using the speed of light. Alternatively, the distance sensor 8a may include a stereo camera.
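The phase difference detection method described above can be sketched as follows. This is a minimal illustrative example, assuming, as is conventional for ToF sensing, that the recovered time covers the round trip to the object and back (hence the division by two); the function name is hypothetical.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_phase(phase_diff_rad: float, mod_freq_hz: float) -> float:
    # Time of flight recovered from the phase shift of the amplitude-modulated
    # light; the division by 2 accounts for the round trip to the object.
    tof = phase_diff_rad / (2 * math.pi * mod_freq_hz)
    return C * tof / 2

# With a 10 MHz modulation frequency, a phase shift of pi radians
# corresponds to roughly 7.5 m.
d = distance_from_phase(math.pi, 10e6)
```

Note that the measurable range is bounded by the modulation wavelength: phase shifts beyond one full cycle are ambiguous.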
The moving object system 60 can use the three-dimensional sensor to obtain three-dimensional information that is difficult to obtain from a two-dimensional image, for example, the height, the inclination angle, or the bulging of a slope.
The moving object system 60 may further include an angle sensor 8c. The angle sensor 8c is a gyro sensor or the like for detecting an angle (attitude) or an angular velocity (or an acceleration) of the image capturing direction of the image capturing device 7.
The evaluation system 4 is configured with an evaluation device 3 and a data management device 5. The evaluation device 3 and the data management device 5 of the evaluation system 4 are configured to communicate with the moving object system 60, the communication terminal 1100, and the communication terminal 1200 via a communication network 100. The communication network 100 is configured with the Internet, a mobile communication network, a local area network (LAN), or the like. The communication network 100 may include not only wired communication but also wireless communication such as 3rd generation (3G), 4th generation (4G), 5th generation (5G), wireless fidelity (Wi-Fi) (registered trademark), worldwide interoperability for microwave access (WiMAX), or long term evolution (LTE). In addition, the evaluation device 3 and the data management device 5 may have a communication function using a short-range communication technology such as near field communication (NFC) (registered trademark).
The data management device 5 is an example of an information processing device, and is a computer such as a PC that manages various data acquired by the data acquisition device 9. The data management device 5 receives various acquired data from the data acquisition device 9 and passes the received various acquired data to the evaluation device 3 that analyzes data. Note that a method of passing various acquired data from the data management device 5 to the evaluation device 3 may be manual transfer using a universal serial bus (USB) memory or the like.
The evaluation device 3 is a computer such as a PC that evaluates the condition of a slope based on the various acquired data received from the data management device 5. In the evaluation device 3, an application program dedicated to evaluating the slope condition is installed. The evaluation device 3 detects the type or structure of the slope based on captured image data and sensor data to extract shape data, and performs a detailed analysis by detecting the presence or absence of deformation and the degree of the deformation. The evaluation device 3 also uses the captured image data and the sensor data, the evaluation target data, and a result of the detailed analysis to generate a report to be submitted to a road administrator of a national or local government, or to an entrusted business operator. Data on the report generated by the evaluation device 3 is submitted to a national or local government via an entrusted business operator in the form of electronic data or printed on paper. The report generated by the evaluation device 3 is referred to as an investigation record table, an inspection table, an investigation ledger, a record, or the like. The evaluation device 3 is not limited to a PC, and may be a smartphone, a tablet terminal, or the like. In addition, the evaluation system 4 may have a configuration in which the evaluation device 3 and the data management device 5 are configured as one device or terminal.
The communication terminal 1200 is provided to an entrusted business operator, and the communication terminal 1100 is provided to a national or local government. The evaluation device 3, the communication terminal 1100, and the communication terminal 1200 are examples of communication terminals capable of communicating with the data management device 5, and they are configured to browse various data managed by the data management device 5.
Alternatively, in a case where the position of the slope is unknown, the moving object system 60 causes the moving object 6 to travel several kilometers to several tens of kilometers on the road, and the image capturing device 7 captures images of a predetermined range including the slope and a region other than the slope. The region other than the slope includes earthwork structures other than the slope, such as a rockfall protection net and a rockfall protection fence, as well as a road, a side road, a natural slope, a traffic light, a sign, a store, the sea (when running along a coastline), and a car.
Here, as illustrated in
In recent years, deterioration of earthwork structures built several decades ago has become significant, and maintenance of social infrastructure has become a major issue. It is therefore important to find the deterioration of earthwork structures early, conduct inspections, and keep the earthwork structures in good condition in order to prolong their service life. Conventional inspections of natural slopes and earthwork structures have been carried out through visual inspection by an expert and involve investigating rockfall, collapses, landslides, or debris flows on the slopes to create repair plans.
However, visual inspections by an expert have problems with efficiency: for example, a large number of earthwork structures throughout Japan cannot be inspected in a certain period of time, and embankments and the like at high locations and along rivers cannot be inspected. Further, visual inspection cannot quantitatively grasp the degree of deformation, such as cracks or peeling, occurring on the surface layer of earthwork structures.
To address this, the condition inspection system 1 according to the embodiment acquires data on a captured image of an earthwork structure slope using the image capturing device 7, and acquires sensor data including three-dimensional information using a three-dimensional sensor such as the distance sensor 8a. Then, the evaluation system 4 combines the captured image data and the sensor data thus acquired to evaluate the slope condition, thereby detecting shape data indicating the three-dimensional shape of the slope and detecting deformation such as cracks and peeling. This enables the condition inspection system 1 to efficiently perform evaluation that is difficult to carry out through human visual inspection.
Next, a hardware configuration of each device of the condition inspection system 1 will be described with reference to
The controller 900 includes an image capturing device interface (I/F) 901, a sensor device I/F 902, a bus line 910, a central processing unit (CPU) 911, a read only memory (ROM) 912, a random access memory (RAM) 913, a hard disk (HD) 914, a hard disk drive (HDD) controller 915, a network I/F 916, a digital versatile disk rewritable (DVD-RW) drive 918, a media I/F 922, an external device connection I/F 923, and a timer 924.
Among them, the image capturing device I/F 901 is an interface for transmitting and receiving various data or information to and from the image capturing device 7. The sensor device I/F 902 is an interface for transmitting and receiving various data or information to and from the sensor device 8. The bus line 910 is an address bus, a data bus, or the like for electrically connecting each element such as the CPU 911 illustrated in
The CPU 911 controls the operation of the entire data acquisition device 9. The ROM 912 stores a program used for driving the CPU 911 such as an IPL. The RAM 913 is used as a work area of the CPU 911. The HD 914 stores various data such as programs. The HDD controller 915 controls reading or writing of various data from or to the HD 914 under the control of the CPU 911. The network I/F 916 is an interface for data communication using the communication network 100.
The DVD-RW drive 918 controls reading or writing of various data from or to the DVD-RW 917 as an example of a detachable recording medium. Note that the medium is not limited to the DVD-RW and may be a DVD-R, a Blu-ray (registered trademark) disc, or the like.
The media I/F 922 controls reading or writing (storing) of data from or to a recording medium 921 such as a flash memory. The external device connection I/F 923 is an interface for connecting an external device such as an external PC 930 having a display, a receiving unit, and a display control unit. The timer 924 is a measurement device having a time measurement function. The timer 924 may be a computer-based software timer. The timer 924 is preferably synchronized with the time of the GNSS sensor 8b. This makes it easy to synchronize the time and correlate the positions in the sensor data and the captured image data.
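Because the timer 924 and the GNSS sensor 8b share a synchronized time base, sensor records and captured image data can be correlated by matching their timestamps. The following is a minimal sketch under that assumption; the function name and data are illustrative.

```python
from bisect import bisect_left

def nearest_record(timestamps: list, t: float) -> int:
    """Index of the record whose timestamp is closest to t.

    Assumes `timestamps` is sorted in ascending order, which holds when
    the timer and the GNSS sensor share a synchronized clock.
    """
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbor is closer in time.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

# Match an image line captured at t = 2.6 s to GNSS fixes taken at 1 Hz.
gnss_times = [0.0, 1.0, 2.0, 3.0, 4.0]
idx = nearest_record(gnss_times, 2.6)
```

This kind of nearest-timestamp matching is what makes it easy to correlate positions in the sensor data with the captured image data.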
Among them, the CPU 301 controls the operation of the entire evaluation device 3. The ROM 302 stores a program used for driving the CPU 301 such as an IPL. The RAM 303 is used as a work area of the CPU 301. The HD 304 stores various data such as programs. The HDD controller 305 controls reading or writing of various data from or to the HD 304 under the control of the CPU 301. The display 306 displays various types of information such as a cursor, a menu, a window, a character, or an image. The display 306 is an example of a display unit. The external device connection I/F 308 is an interface for connecting various external devices. The external device in this case is, for example, a USB memory, a printer, or the like. The network I/F 309 is an interface for data communication using the communication network 100. The bus line 310 is an address bus, a data bus, or the like for electrically connecting each element such as the CPU 301 illustrated in
The keyboard 311 is a type of an input means including a plurality of keys with which to input characters, numerical values, various instructions, and so on. The pointing device 312 is a type of an input means with which to select and execute various instructions, select a processing target, move a cursor, and so on. The DVD-RW drive 314 controls reading or writing of various data from or to the DVD-RW 313 as an example of a detachable recording medium. Note that the medium is not limited to the DVD-RW and may be a DVD-R, a Blu-ray disc, or the like. The media I/F 316 controls reading or writing (storing) of data from or to a recording medium 315 such as a flash memory.
Each of the above programs may be recorded in a computer-readable recording medium as a file in an installable or executable format and distributed. Examples of the recording medium include a compact disc recordable (CD-R), a digital versatile disk (DVD), a Blu-ray disc, an SD card, and a USB memory. In addition, the recording medium can be provided domestically or abroad as a program product. For example, the evaluation system 4 according to the embodiment implements the evaluation method according to the present invention by executing the program according to the present invention.
Next, a functional configuration of the condition inspection system according to the embodiment will be described with reference to
First, a functional configuration of the data acquisition device 9 will be described with reference to
The communication unit 91 is mainly implemented by processing of the CPU 911 on the network I/F 916, and communicates various data or information with other devices via the communication network 100. For example, the communication unit 91 transmits, to the data management device 5, the acquired data acquired by the captured image data acquisition unit 95 and the sensor data acquisition unit 96. The calculation unit 92 is implemented by processing of the CPU 911 and performs various calculations.
The image capturing device control unit 93 is mainly implemented by processing of the CPU 911 on the image capturing device I/F 901, and controls image capturing processing by the image capturing device 7. The sensor device control unit 94 is mainly implemented by processing of the CPU 911 on the sensor device I/F 902, and controls data acquisition processing for the sensor device 8. The image capturing device control unit 93 is an example of an angle changing unit.
The captured image data acquisition unit 95 is mainly implemented by the processing of the CPU 911 on the image capturing device I/F 901, and acquires captured image data related to an image captured by the image capturing device 7. The sensor data acquisition unit 96 is mainly implemented by processing of the CPU 911 on the sensor device I/F 902, and acquires sensor data which is a detection result by the sensor device 8. The sensor data acquisition unit 96 is an example of a distance information acquisition unit and a position information acquisition unit. The time data acquisition unit 97 is mainly implemented by processing of the CPU 911 on the timer 924, and acquires time data indicating the time when the data is acquired by the captured image data acquisition unit 95 or the sensor data acquisition unit 96.
The request receiving unit 98 is mainly implemented by processing of the CPU 911 on the external device connection I/F 923, and receives a predetermined request from the external PC 930 or the like.
The storage/read unit 99 is mainly implemented by processing of the CPU 911, and stores various data (or information) in the storage unit 9000 and reads various data (or information) from the storage unit 9000.
Next, a functional configuration of the evaluation device 3 will be described with reference to
The communication unit 31 is mainly implemented by processing of the CPU 301 on the network I/F 309, and communicates various data or information with other devices via the communication network 100. The communication unit 31 transmits and receives various data related to the evaluation of the slope condition to and from the data management device 5, for example.
The receiving unit 32 is mainly implemented by processing of the CPU 301 on the keyboard 311 or the pointing device 312, and receives various selections or inputs from a user. The receiving unit 32 receives various selections or inputs on an evaluation screen 400 to be described later. The display control unit 33 is mainly implemented by processing of the CPU 301, and causes the display 306 to display various images. The display control unit 33 causes the display 306 to display the evaluation screen 400 to be described later. The determining unit 34 is implemented by processing of the CPU 301 and performs various determinations. The receiving unit 32 is an example of an operation receiving means.
The evaluation target data generation unit 35 is implemented by processing of the CPU 301 and generates data on an evaluation target. The detection unit 36 is mainly implemented by processing of the CPU 301, and performs processing of detecting the slope condition using the evaluation target data generated by the evaluation target data generation unit 35. The map data management unit 37 is mainly implemented by processing of the CPU 301, and manages map information acquired from an external server or the like. The map information includes position information on any position on the map.
The report generation unit 38 is mainly implemented by processing of the CPU 301, and generates an evaluation report to be submitted to the road administrator based on an evaluation result.
The storage/read unit 39 is mainly implemented by processing of the CPU 301, and stores various data (or information) in the storage unit 3000 and reads various data (or information) from the storage unit 3000. A setting unit 40 is mainly implemented by processing of the CPU 301 and performs various settings.
Next, a functional configuration of the data management device 5 will be described with reference to
The communication unit 51 is mainly implemented by processing of the CPU 501 on the network I/F 509, and communicates various data or information with other devices via the communication network 100. The communication unit 51 receives, for example, captured image data and sensor data transmitted from the data acquisition device 9. The communication unit 51 transmits and receives various data related to the evaluation of the slope condition and so on to and from the evaluation device 3 and so on, for example. The communication unit 51 is an example of an instruction receiving means. The determining unit 52 is an example of a position generation means, is implemented by processing of the CPU 501, and performs various determinations.
The data management unit 53 is mainly implemented by processing of the CPU 501, and manages various data related to the evaluation of the slope condition. For example, the data management unit 53 registers, in an acquired data management DB 5001, the captured image data and the sensor data transmitted from the data acquisition device 9. The data management unit 53 also registers, for example, data processed or generated by the evaluation device 3 in a processed data management DB 5003. A generation unit 54 is mainly implemented by processing of the CPU 501, and generates various types of image data related to the slope. A setting unit 55 is mainly implemented by processing of the CPU 501 and performs various settings.
The storage/read unit 59 is mainly implemented by processing of the CPU 501, and stores various data (or information) in the storage unit 5000 and reads various data (or information) from the storage unit 5000.
Next, a functional configuration of the communication terminal 1100 will be described with reference to
The communication unit 1101 is mainly implemented by processing of the CPU on the network I/F, and communicates various data or information with other devices via the communication network 100.
The receiving unit 1102 is mainly implemented by processing of the CPU on the keyboard or the pointing device, and receives various selections or inputs from the user. The display control unit 1103 is mainly implemented by processing of the CPU, and causes the display to display various images. The determining unit 1104 is implemented by processing of the CPU and performs various determinations. The receiving unit 1102 is an example of the operation receiving means.
The storage/read unit 1105 is mainly implemented by processing of the CPU, and stores various data (or information) in the storage unit 1106 and reads various data (or information) from the storage unit 1106.
Next, a functional configuration of the communication terminal 1200 will be described with reference to
The communication unit 1201 is mainly implemented by processing of the CPU on the network I/F, and communicates various data or information with other devices via the communication network 100.
The receiving unit 1202 is mainly implemented by processing of the CPU on the keyboard or the pointing device, and receives various selections or inputs from the user. The display control unit 1203 is mainly implemented by processing of the CPU, and causes the display to display various images. The determining unit 1204 is implemented by processing of the CPU and performs various determinations.
The storage/read unit 1205 is mainly implemented by processing of the CPU, and stores various data (or information) in the storage unit 1206 and reads various data (or information) from the storage unit 1206.
Among these, the type name is a name indicating a condition type for identifying a slope, a physical quantity around the slope, and a condition of site information. Here, the condition type includes a type of a slope itself which is a structure such as a retaining wall, a slope frame, a sprayed mortar, a wire mesh, a fence, a drain hole, a pipe, and a drainage channel of a berm, and a type indicating a physical quantity around the slope such as gush, moss, plants, rockfall, earth and sand, and sun exposure. The condition type also includes a type, e.g., a pole, an electric pole, a sign, or a signboard, as site information for supporting data acquisition by the moving object system 60. Further, the condition type may include, as additional information on the structure, information on markers such as chalking indicating the presence of deformation, installed at the time of past inspection or construction, and man-made objects such as a measurement device or a trace of a countermeasure. The training image is an example of training data, and is a training image used in machine learning for determining a slope, a physical quantity around the slope, and a condition type of site information from the captured image data. Here, the training data is not limited to a luminance image, an RGB image, or the like, which is generally called an image. The training data may be any data including information for determination of a condition type, and may be in the form of depth information, text, audio, or the like. In the remarks column, information serving as a detection criterion for detecting a condition type is shown.
Acquired Data Management Table
Among them, the captured image data and the sensor data are data files of acquired data transmitted from the data acquisition device 9. The acquisition time indicates the time when the captured image data and the sensor data have been acquired by the data acquisition device 9. Data acquired in one inspection process is stored in the same folder. The captured image data and three-dimensional sensor data included in the sensor data are stored in correlation with coordinates as described later. The captured image data and the three-dimensional sensor data included in the sensor data are stored in correlation with positioning data included in the sensor data. As a result, when any position in the map information managed by the map data management unit 37 of the evaluation device 3 is selected, the captured image data and the three-dimensional sensor data on the selected position can be read from the acquired data management DB 5001.
Among them, the evaluation target data is a data file used for detection and evaluation of the slope condition by the evaluation device 3. The evaluation data is a data file indicating an evaluation result by the evaluation device 3. The positioning data is data indicating position information measured by the GNSS sensor 8b. In addition, the comment is bibliographic information input by an evaluator for the evaluation target data or the evaluation data. As a result, when any position in the map information managed by the map data management unit 37 of the evaluation device 3 is selected, the evaluation data on the selected position can be read from the processed data management DB 5003.
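Reading the evaluation data for a position selected on the map amounts to a nearest-position lookup over the stored positioning data. The following is a minimal sketch; the record layout is hypothetical, and a simple Euclidean approximation over latitude and longitude stands in for a proper geodesic distance.

```python
import math

# Hypothetical record layout mirroring the processed data management DB:
# each entry pairs positioning data (lat, lon) with an evaluation data file.
records = [
    {"lat": 35.6586, "lon": 139.7454, "file": "slope_A.eval"},
    {"lat": 35.6628, "lon": 139.7320, "file": "slope_B.eval"},
]

def find_by_position(lat: float, lon: float) -> dict:
    """Return the record nearest the position selected on the map.

    Uses a planar approximation, which is adequate over short distances.
    """
    return min(records, key=lambda r: math.hypot(r["lat"] - lat, r["lon"] - lon))
```

For example, selecting a map position near the first slope returns its evaluation data file.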
The moving object system 60 captures an image of a slope provided on a road using the image capturing device 7 provided in the data acquisition device 9 while causing the moving object 6 to travel. An X-axis direction illustrated in
As the moving object 6 travels, the data acquisition device 9 acquires a captured image 1 and a ranging image 1, and a captured image 2 and a ranging image 2 in time series as illustrated in
As described above, the moving object system 60 acquires the captured image data obtained by capturing an image of the slope and the sensor data obtained in response to the image capturing by the image capturing device 7 while causing the vehicle as the moving object 6 to travel, and uploads the acquired data to the data management device 5. Note that the data acquisition device 9 may separately acquire the ranging image and the captured image during different runs, but it is preferable to acquire the ranging image and the captured image during the same run with respect to the same slope shape, in consideration of a change in slope shape due to collapse or the like.
Then, the luminance information of each pixel 7A1 of the captured image data 7A is stored in the storage unit 5000 as the captured image data illustrated in
Then, the distance information of each pixel 8A1 of the ranging image data 8A is stored in the storage unit 5000 as three-dimensional data included in the sensor data illustrated in
Here, since the captured image data 7A illustrated in
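The pixel-level correlation between the captured image data 7A and the ranging image data 8A can be sketched as follows, assuming both have been stored on the same coordinate grid; the function and data here are illustrative, not the patented implementation.

```python
# Sketch of correlating the captured image 7A with the ranging image 8A
# pixel by pixel, assuming both share the same coordinate grid.
def fuse(luminance, distance):
    """Pair each pixel's luminance with the distance measured at the
    same coordinates, yielding (row, col, luminance, distance) tuples."""
    for y, (lum_row, dist_row) in enumerate(zip(luminance, distance)):
        for x, (lum, dist) in enumerate(zip(lum_row, dist_row)):
            yield (y, x, lum, dist)

# One image row with two pixels: luminance values and measured distances (m).
fused = list(fuse([[120, 130]], [[5.2, 5.4]]))
```

Each fused tuple then carries both the appearance and the three-dimensional information for one position on the slope.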
Here, in a case where the position of the slope 80 in the X-axis direction is unknown, the image capturing device 7 captures an image of the target region 70, including the slope 80 that is an inspection and evaluation target object and a region other than the inspection and evaluation target object, with the target region 70 divided into the plurality of image capturing regions d11, d12 ⋅ ⋅ ⋅ , and the image capturing regions in which the slope 80 appears are identified from among the plurality of image capturing regions, as described later.
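Identifying the extent of the slope along the moving direction can be sketched as follows, assuming a separate detection step (not shown here) has already flagged which capturing regions contain the slope 80; the function name is hypothetical.

```python
# Illustrative sketch: given per-region flags from a detection step,
# the slope's extent along the moving direction is the span of
# capturing regions (d11, d12, ...) in which the slope appears.
def slope_extent(contains_slope):
    """Return (first, last) indices of regions showing the slope, or None."""
    hits = [i for i, flag in enumerate(contains_slope) if flag]
    if not hits:
        return None
    return hits[0], hits[-1]

# Regions 2 through 4 captured the slope; the rest captured other parts.
extent = slope_extent([False, False, True, True, True, False])
```

The first and last flagged regions correspond to the two ends of the slope in the moving direction of the moving object 6.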
As illustrated in
Similarly to the case illustrated in
In this case, the image capturing device 7 includes a plurality of image capturing devices, and the target regions 702A and 702B are imaged by an image capturing device different from the image capturing device that captures images of the target regions 701A and 701B.
In addition, the target region 701B is imaged by the same image capturing device that images the target region 701A under different image capturing conditions, and the target region 702B is also imaged by the same image capturing device that images the target region 702A under different image capturing conditions.
As illustrated in
As a result, as described with reference to
The image capturing device 7 includes a plurality of image capturing devices 71, 72, and 73, and the image capturing devices 71, 72, and 73 capture a target region 701 on the slope 80, a target region 702 above the target region 701, and a target region 703 above the target region 702, respectively.
Here, first and second target regions indicate any two of the target region 701, the target region 702, and the target region 703, and first and second image capturing devices indicate image capturing devices corresponding to the first and second target regions among the plurality of image capturing devices 71, 72, and 73.
Next, data acquisition processing using the moving object system 60 will be described with reference to
Specifically, the image capturing device control unit 93 starts image capturing processing for a predetermined region by requesting the image capturing device 7 to capture an image.
The position of the slope is not necessarily known. That is, the moving object system 60 causes the image capturing device 7 to capture images of a predetermined region including the slope and a region other than the slope while causing the moving object 6 to travel, and the image capturing device control unit 93 starts the image capturing processing in the region other than the slope, performs the image capturing processing for the slope, and then ends the image capturing processing in the region other than the slope. This enables image capturing of the entire region from one end to the other end of the slope in the moving direction of the moving object 6.
In addition, the sensor device control unit 94 starts detection processing by the distance sensor 8a and the GNSS sensor 8b in synchronization with the image capturing processing by the image capturing device 7. Then, the captured image data acquisition unit 95 acquires the captured image data acquired by the image capturing device 7, and the sensor data acquisition unit 96 acquires the sensor data acquired by the distance sensor 8a and the GNSS sensor 8b. The time data acquisition unit 97 acquires time data indicating the time when various data has been acquired by the captured image data acquisition unit 95 and the sensor data acquisition unit 96.
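The correlation of captured image data with sensor data via the acquired time data may, purely for illustration, be sketched as a nearest-timestamp match; the function name `nearest_fix` and the flat `(timestamp, position)` record layout are assumptions for this sketch, not part of the embodiment.

```python
from bisect import bisect_left

def nearest_fix(gnss_fixes, t):
    """Return the GNSS fix whose timestamp is closest to time t.

    gnss_fixes: list of (timestamp, (lat, lon)) tuples, sorted by timestamp.
    """
    times = [ts for ts, _ in gnss_fixes]
    i = bisect_left(times, t)
    # Only the fixes immediately before and after t can be the closest.
    candidates = gnss_fixes[max(0, i - 1):i + 1]
    return min(candidates, key=lambda fix: abs(fix[0] - t))

# Illustrative data: frames captured between GNSS fixes recorded at 1 Hz.
fixes = [(0.0, (35.0, 139.0)), (1.0, (35.0001, 139.0001)), (2.0, (35.0002, 139.0002))]
print(nearest_fix(fixes, 1.4))  # the fix at t = 1.0 is closest
```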
Next, when the inspection worker performs a predetermined input operation or the like on the external PC 330 or the like, the request receiving unit 98 receives a request to upload the various acquired data (Step S13). The communication unit 91 then uploads (transmits) the captured image data, the sensor data, and the time data, i.e., the data acquired in Step S12, to the data management device 5 (Step S14). As a result, the communication unit 51 of the data management device 5 receives the acquired data transmitted from the data acquisition device 9. Then, the data management unit 53 of the data management device 5 registers the acquired data received in Step S14 in the acquired data management DB 5001 (see
The data management unit 53 stores the captured image data and the sensor data in one folder in association with the time data indicating the acquisition time of each set of data contained in the acquired data.
Hereinafter, the sequence between the evaluation device 3 and the data management device 5 will be described; the same applies to the sequences between the data acquisition device 9, the communication terminals 1100 and 1200, and the data management device 5.
When a user of the evaluation device 3 designates a folder, the receiving unit 32 of the evaluation device 3 receives selection of generation target data (Step S31). Alternatively, when the user of the evaluation device 3 selects any position in the map information managed by the map data management unit 37 of the evaluation device 3, the receiving unit 32 of the evaluation device 3 may receive the selection of the position information in the map information.
Next, the communication unit 31 transmits a request to generate evaluation target data related to the generation target data selected in Step S31 to the data management device 5, and the communication unit 51 of the data management device 5 receives the request transmitted from the evaluation device 3 (Step S32). The request includes the folder name selected in Step S31. Alternatively, the request may include position information in the map information.
Next, the storage/read unit 59 of the data management device 5 searches the acquired data management DB 5001 using, as a search key, the folder name included in the generation request received in Step S32 to read the acquired data associated with the folder name included in the generation request. Alternatively, the storage/read unit 59 searches the acquired data management DB 5001 using, as a search key, the position information included in the request received in Step S32 to read the acquired data associated with the position information included in the request. The acquired data includes the captured image data, the sensor data, and the time data.
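The lookup by folder name, with position information as an alternative search key, can be sketched as follows with an in-memory dictionary standing in for the acquired data management DB 5001; the record layout and function name are illustrative assumptions.

```python
def read_acquired_data(db, folder_name=None, position=None):
    """Return the acquired data for a folder name, or fall back to a position search."""
    if folder_name is not None:
        return db.get(folder_name)
    if position is not None:
        for record in db.values():
            if record["position"] == position:
                return record
    return None

# Hypothetical DB contents: one folder holding captured image and sensor data.
db = {"route_a": {"position": (35.0, 139.0), "images": ["p1", "p2"]}}
print(read_acquired_data(db, folder_name="route_a")["images"])  # ['p1', 'p2']
```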
The generation unit 54 of the data management device 5 generates evaluation target data based on the acquired data read out by the storage/read unit 59 (Step S33). Specifically, the generation unit 54 performs inclination correction on the captured image data according to the attitude of the image capturing device 7 (moving object 6) at the time of image capturing, which is obtained from the acquired sensor data of the distance sensor 8a. In addition, the generation unit 54 correlates the positioning data, which is the acquired sensor data of the GNSS sensor 8b, with the captured image data based on the acquired time data. Further, the generation unit 54 performs processing of combining a plurality of sets of captured image data into one set of image data.
Specifically, as described with reference to
Further, the generation unit 54 generates a composite image in which the captured images of the plurality of target regions 701A, 702A, 701B, and 702B are connected, thereby obtaining the captured image of the entire target region 70.
Here, as described above, the target region 70 includes the slope 80 and a region other than the slope 80.
As described above, the generation unit 54 has an inclination correction function for image data, a function of linking image data with position information, and a function of combining image data. The generation unit 54 uses the acquired data to perform image correction on the acquired captured image data so that the processing by the detection unit 36 and the report generation unit 38, described later, can be easily performed.
Next, the generation unit 54 generates an input/output screen including the composite image (Step S34). The input/output screen is an example of a display screen displaying a composite image in which the images captured by dividing the target region 70 into the plurality of image capturing regions dn along the moving direction of the moving object 6 are connected together, and Step S34 is an example of a generation step.
Here, the generation unit 54 generates a composite image having a resolution lower than that of the composite image generated in Step S33, and generates an input/output screen including the composite image having the lower resolution.
That is, the generation unit 54 generates the input/output screen so as to display a composite image 2500 with a resolution lower than that of each of the images, stored in the acquired data management DB 5001, captured with the target region divided into the plurality of image capturing regions dn. This improves the processing speed at the time of generating the input/output screen including the composite image.
In addition, the generation unit 54 generates an input/output screen including a plurality of composite images corresponding to the target regions 701, 702, and 703 imaged by the image capturing devices 71, 72, and 73 described with reference to
That is, the target region 70 includes a first target region and a second target region which are different ranges in a direction intersecting the moving direction of a moving object 66, and the generation unit 54 generates an input/output screen 2000 including at least one of a first composite image and a second composite image. The first composite image is obtained by connecting the first captured images pn obtained by capturing images with the first target region divided into the plurality of first image capturing regions dn along the moving direction of the moving object 66. The second composite image is obtained by connecting the second captured images pn obtained by capturing images with the second target region divided into the plurality of second image capturing regions dn along the moving direction of the moving object 66.
The communication unit 51 transmits input/output screen information related to the input/output screen generated in Step S34 to the evaluation device 3, and the communication unit 31 of the evaluation device 3 receives the input/output screen information transmitted from the data management device 5 (Step S35).
Here, as described above, since the input/output screen includes the composite image generated with a resolution lower than those of the plurality of captured images stored in the acquired data management DB 5001, the communication load when transmitting the input/output screen including the composite image is reduced.
Next, the display control unit 33 of the evaluation device 3 causes the display 306 to display the input/output screen received in Step S35, and the receiving unit 32 of the evaluation device 3 receives predetermined input operation by the user on the displayed input/output screen (Step S36). The input operation includes determination operation for determining to specify a partial region in the composite image.
Here, as described above, since the input/output screen includes the composite image generated with a resolution lower than those of the plurality of captured images stored in the acquired data management DB 5001, the processing speed when displaying the input/output screen including the composite image is improved.
The communication unit 31 transmits input information related to the input operation received by the receiving unit 32 to the data management device 5, and the communication unit 51 of the data management device 5 receives the input information transmitted from the evaluation device 3 (Step S37). The input information includes specified region information for specifying a partial region in the composite image, a comment, and identification information for identifying a specific slope among a plurality of slopes.
Next, the setting unit 55 updates the evaluation target data generated in Step S33 based on the input information received in Step S37, and stores the resultant in the processed data management DB 5003 (see
Specifically, the setting unit 55 sets a partial image corresponding to a partial region, position information, and a specified point group in a three-dimensional point group corresponding to the plurality of image capturing regions dn, based on specified region information specifying a partial region in the composite image, thereby updating the evaluation target data and storing, in one folder, the evaluation target data, the positioning data, and the comment included in the generated data in association with each other.
Here, as described above, the composite image included in the input/output screen is an image generated with a resolution lower than those of the plurality of captured images stored in the acquired data management DB 5001. However, since the partial image stored in Step S38 is an image having the same high resolution as those of the plurality of captured images stored in the acquired data management DB 5001, processing by the detection unit 36 and the report generation unit 38 described later can be executed with high accuracy.
Next, the communication unit 51 transmits partial image information indicating the partial image contained in the generated data updated in Step S38 to the evaluation device 3, and the communication unit 31 of the evaluation device 3 receives the partial image information transmitted from the data management device 5 (Step S39). Then, the display control unit 33 of the evaluation device 3 causes the display 306 to display the partial image received in Step S39.
In the above, the functions of the data management device 5 in
In Step S34 of
Here, each of the plurality of divided image groups 250A and 250B indicates a range displayed on one input/output screen, and is switched and displayed on the display 306 or the like, for example. The generation unit 54 preferably performs an image analysis on each of the plurality of divided image groups 250A and 250B, and identifies a location where a boundary between the slope 80 and the part other than the slope 80 in the moving direction of the moving object 6 is likely to exist.
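One way such an image analysis could identify a likely slope boundary is to detect jumps in a per-column statistic, such as mean brightness, along the moving direction; the choice of statistic and the threshold value are assumptions made only for this sketch.

```python
def likely_boundaries(column_means, threshold):
    """Indices where the per-column mean brightness jumps, suggesting a slope boundary."""
    return [i for i in range(1, len(column_means))
            if abs(column_means[i] - column_means[i - 1]) > threshold]

# Hypothetical means: dark road surface, bright slope, dark road surface again.
means = [40, 41, 42, 90, 91, 92, 40, 39]
print(likely_boundaries(means, 20))  # [3, 6] - boundaries on both sides of the slope
```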
Further, in Step S34 of
That is, the generation unit 54 generates the input/output screen such that the length of the composite image 2500 in the moving direction of the moving object 66 varies in accordance with the moving distance of the moving object 66.
The display control unit 33 of the evaluation device 3 displays the input/output screen 2000 including a specifying receiving screen 2010 for receiving specifying operation for specifying a partial region in the composite image 2500 and a determination receiving screen 2020 for receiving determination operation for determining to specify the partial region in the composite image 2500.
The display control unit 33 displays the composite image 2500 on the specifying receiving screen 2010 and displays a pointer 2300 operated with the pointing device 312 on the composite image 2500.
As described in Step S34 of
In
The display control unit 33 displays a start position designation button 2402, an end position designation button 2404, a reduce button 2406, and an enlarge button 2408 in the determination receiving screen 2020.
The start position designation button 2402 and the end position designation button 2404 are buttons used to instruct the display of a start position bar 250S and an end position bar 250G, respectively, on the composite image 2500.
The start position bar 250S and the end position bar 250G can be moved to any positions on the composite image 2500 by operating the pointer 2300.
A specified position determination button 2400 is a button used to determine the positions of the start position bar 250S and the end position bar 250G on the composite image 2500.
The reduce button 2406 and the enlarge button 2408 are buttons used to instruct the display of the composite image 2500 to be reduced or enlarged. A screen switching button 2409 is a button used to switch between the display of the plurality of divided image groups 250A and 250B illustrated in
In
When the user operates the end position designation button 2404, the receiving unit 32 receives the operation, and the display control unit 33 displays the end position bar 250G at any position on the composite image 2500.
When the user operates the pointer 2300 to move the start position bar 250S and the end position bar 250G to the boundary positions on both sides of the slope 80 on the composite image 2500, the receiving unit 32 receives the movement as the specifying operation for specifying a partial region in the composite image 2500. Here, the position information indicating the positions of the start position bar 250S and the end position bar 250G in the composite image 2500 is an example of specified region information for specifying a partial region in the composite image 2500.
When the user operates the specified position determination button 2400, the receiving unit 32 receives the operation as determination operation for determining to specify a partial region in the composite image 2500.
In
Here, if the composite image 2500 displays only a region between the start position bar 250S1 and the end position bar 250G1, the user cannot confirm the boundaries on both sides of the slope 80 in the moving direction of the moving object 66, which makes it impossible to accurately confirm the position and range of the slope 80.
In the present embodiment, the generation unit 54 generates the input/output screen 2000 including the composite image 2500 such that the composite image 2500 includes boundaries on both sides of the slope 80 in the moving direction of the moving object 6. This enables the user to confirm the composite image 2500 including the boundaries on both sides of the slope 80 displayed on the input/output screen 2000, and to accurately confirm the position and range of the slope 80. In addition, the display control unit 33 gives, among the plurality of divided image groups 250A and 250B described with reference to
Further, the generation unit 54 generates the input/output screen 2000 including the composite image 2500 such that the composite image 2500 includes boundaries on both sides of the plurality of slopes 80 at different positions in the moving direction of the moving object 66. This enables the user to confirm the composite image 2500 including the boundaries on both sides of each of the plurality of slopes 80 displayed on the input/output screen 2000, and to accurately confirm the positions and ranges of the plurality of slopes 80.
The composite image 2500 illustrated in
In the composite image 2500 illustrated in
When the start position bar 250S and the end position bar 250G are moved on the composite image 2500 by operating the pointer 2300, the receiving unit 32 of the evaluation device 3 receives the movement as specifying operation for specifying a partial region in the composite image 2500 (Step S151), and when the specified position determination button 2400 is operated, the receiving unit 32 receives the operation as determination operation for determining to specify a partial region in the composite image 2500 (Step S152).
Next, the determining unit 34 of the evaluation device 3 detects the X coordinates of the start position bar 250S and the end position bar 250G on which the specifying operation has been performed in the composite image 2500 as specified region information (Step S153).
Next, the communication unit 31 of the evaluation device 3 transmits input information related to the input operation received by the receiving unit 32 to the data management device 5 (Step S154). The input information includes specified region information indicating the specified region in the X coordinate based on the specifying operation with the pointer 2300.
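Converting the X coordinates received as specified region information into the indices of the captured images that cover them may be sketched as below; the assumption of a fixed per-frame width in composite-image pixels is made only for this illustration.

```python
def frames_in_range(frame_width, x_start, x_end):
    """Indices of captured frames (each frame_width pixels wide) overlapping [x_start, x_end]."""
    first = x_start // frame_width
    last = x_end // frame_width
    return list(range(first, last + 1))

# Hypothetical case: each captured image spans 100 pixels of the composite.
print(frames_in_range(100, 250, 430))  # frames 2, 3 and 4 cover the specified region
```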
The communication unit 51 of the data management device 5 receives the input information transmitted from the evaluation device 3, and the setting unit 55 sets, as a partial image, a plurality of captured images between the X coordinates on both sides of the specified region in the composite image 2500 generated in Step S33 of
The setting unit 55 sets, as other partial images, a plurality of captured images of other image capturing regions having the X coordinates corresponding to the partial image set in Step S155, from among other composite images captured by other image capturing devices. The generation unit 54 performs geometric, color, brightness, and color-shift correction on the other partial images so that the slope 80 can be easily evaluated in a subsequent process. The storage/read unit 59 stores the other partial images and the coordinates thereof in the storage unit 5000 (Step S156).
Here, the partial image set in Step S155 is, as an example, a partial image in the target region 702 imaged by the image capturing device 72 described with reference to
That is, in Step S155, the setting unit 55 sets a first partial image corresponding to a partial region in the first composite image based on first determination operation for determining to specify a partial region in the first composite image and, in Step S156, the setting unit 55 sets a second partial image corresponding to a partial region in the second composite image.
The setting unit 55 sets an integrated partial image obtained by connecting the partial image set in Step S155 and the other partial image set in Step S156, and the generation unit 54 performs connection processing on the integrated partial image so that the slope 80 can be easily evaluated in a subsequent process. The storage/read unit 59 stores the integrated partial image and the coordinates thereof in the storage unit 5000 (Step S157).
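The connection of the partial image and the other partial images into an integrated partial image may be sketched as a vertical stack of equal-width images, with lists of pixel rows again standing in for real image data; this layout is an illustrative assumption.

```python
def stack_vertically(partials):
    """Stack same-width partial images top to bottom into one integrated image."""
    width = len(partials[0][0])
    assert all(len(row) == width for part in partials for row in part), \
        "partial images must share the same width"
    return [row for part in partials for row in part]

# Hypothetical 1-row partials for the upper, middle, and lower target regions.
upper, middle, lower = [[3, 3]], [[2, 2]], [[1, 1]]
print(stack_vertically([upper, middle, lower]))  # [[3, 3], [2, 2], [1, 1]]
```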
The setting unit 55 sets, from among the three-dimensional point group data illustrated in
The setting unit 55 sets position information with acquisition time corresponding to the integrated partial image set in Step S157 among the positioning data correlated with the captured image data in Step S33, and the storage/read unit 59 stores the position information with acquisition time corresponding to the integrated partial image in the storage unit 5000 (Step S159).
The communication unit 51 transmits integrated partial image information indicating the integrated partial image set in Step S157 to the evaluation device 3 (Step S161).
Then, as described in Step S39 of
The middle partial image 255M is a partial image in the target region 702 imaged by the image capturing device 72 described with reference to
The lower partial image 255L and the upper partial image 255U are partial images in the target regions 701 and 703 imaged by the image capturing devices 71 and 73, respectively, described with reference to
The upper partial image 255U, the middle partial image 255M, and the lower partial image 255L are subjected to geometric, color, brightness, and color shift correction by the generation unit 54 so that the slope 80 can be easily evaluated in a subsequent process, as described in Steps S155 and S156 of
As described in Step S157 in
First, when the user of the evaluation device 3 designates a folder, the receiving unit 32 of the evaluation device 3 receives selection of generation target data. Alternatively, when the user of the evaluation device 3 selects any position in the map information managed by the map data management unit 37 of the evaluation device 3, the receiving unit 32 of the evaluation device 3 may receive the selection of the position information in the map information.
The communication unit 31 of the evaluation device 3 transmits a request to generate evaluation target data to the data management device 5 (Step S41). The generation request includes the name of a folder in which data to be generated is stored. Alternatively, the request may include position information in the map information. As a result, the communication unit 51 of the data management device 5 receives the generation request transmitted from the evaluation device 3.
Next, the storage/read unit 59 of the data management device 5 searches the acquired data management DB 5001 using, as a search key, the folder name included in the generation request received in Step S41 to read the acquired data associated with the folder name included in the generation request (Step S42). Alternatively, the storage/read unit 59 searches the acquired data management DB 5001 using, as a search key, the position information included in the request received in Step S41 to read the acquired data associated with the position information included in the request.
Then, the communication unit 51 transmits the acquired data read in Step S42 to the evaluation device 3 (Step S43). The acquired data includes the captured image data, the sensor data, and the time data, whereby the communication unit 31 of the evaluation device 3 receives the acquired data transmitted from the data management device 5.
Next, the evaluation target data generation unit 35 of the evaluation device 3 generates evaluation target data using the acquired data received in Step S43 (Step S44). Specifically, the evaluation target data generation unit 35 performs inclination correction on the captured image data according to the attitude of the image capturing device 7 (moving object 6) at the time of image capturing, which is obtained from the received sensor data of the distance sensor 8a. In addition, the evaluation target data generation unit 35 correlates the positioning data, which is the received sensor data of the GNSS sensor 8b, with the captured image data based on the received time data. Further, the evaluation target data generation unit 35 performs processing of combining a plurality of sets of captured image data into one set of image data.
Specifically, as described with reference to
Further, the evaluation target data generation unit 35 generates a composite image in which the captured images of the plurality of target regions 701A, 702A, 701B, and 702B are connected, thereby obtaining the captured image of the entire target region 70.
Here, as described above, in a case where the position of the slope 80 is unknown, the target region 70 includes the slope 80 and a region other than the slope 80.
As described above, the evaluation target data generation unit 35 has an inclination correction function for image data, a function of linking image data with position information, and a function of combining image data. The evaluation target data generation unit 35 performs image correction on the received captured image data by using the acquired data received from the data management device 5 so that the processing by the detection unit 36 and the report generation unit 38, described later, can be easily performed.
Next, the evaluation target data generation unit 35 generates an input/output screen including the composite image. The input/output screen is an example of a display screen displaying a composite image in which the images captured by dividing the target region 70 into the plurality of image capturing regions dn along the moving direction of the moving object 6 are connected together, and Step S44 is an example of a generation step.
Next, the display control unit 33 causes the display 306 to display the generated input/output screen, and the receiving unit 32 of the evaluation device 3 receives predetermined input operation by the user on the displayed input/output screen. The input operation includes determination operation for determining to specify a partial region in the composite image.
Next, the setting unit 40 updates the generated evaluation target data based on input information related to the input operation. The setting unit 40 is an example of the setting means.
Specifically, the setting unit 40 sets a partial image corresponding to a partial region, position information, and a specified point group in a three-dimensional point group corresponding to the plurality of image capturing regions dn based on specified region information specifying a partial region in the composite image, thereby updating the evaluation target data.
Next, the communication unit 31 of the evaluation device 3 transmits the generated data, which is generated and updated in Step S44, to the data management device 5 (Step S45), and the generated data includes the evaluation target data, the positioning data, and the comment that are generated by the evaluation target data generation unit 35 and updated by the setting unit 40. As a result, the communication unit 51 of the data management device 5 receives the generated data transmitted from the evaluation device 3. Then, the data management unit 53 of the data management device 5 stores the generated data received in Step S45 in the processed data management DB 5003 (see
In this manner, the evaluation system 4 performs image processing based on various data (captured image data, sensor data, and time data) acquired from the data acquisition device 9 to generate and update the evaluation target data used for evaluation of the slope condition.
First, the display control unit 33 of the evaluation device 3 causes the display 306 to display the evaluation screen 400 with which to perform the evaluation processing of the slope condition (Step S51).
Next, the receiving unit 32 of the evaluation device 3 receives selection of evaluation target data (Step S52).
Next, the communication unit 31 transmits a request to read the evaluation target data selected in Step S52 to the data management device 5 (Step S53). The read request includes the folder name selected in Step S52. As a result, the communication unit 51 of the data management device 5 receives the read request transmitted from the evaluation device 3.
Next, the storage/read unit 59 of the data management device 5 searches the processed data management DB 5003 (see
Then, the display control unit 33 of the evaluation device 3 causes the display 306 to display the processed data received in Step S54 (Step S56).
Next, the evaluation device 3 performs processing of detecting a slope condition using the evaluation target data (Step S57). Details of the processing of detecting a slope condition will be described later.
The receiving unit 32 receives a request to upload an evaluation result (Step S58). Then, the communication unit 31 uploads (transmits) the evaluation result to the data management device 5 (Step S59). As a result, the communication unit 51 of the data management device 5 receives the evaluation data transmitted from the evaluation device 3. Then, the data management unit 53 of the data management device 5 registers the evaluation data received in Step S59 in the processed data management DB 5003 (see
The receiving unit 32 also receives a request to generate an evaluation report (Step S61). Then, the report generation unit 38 generates an evaluation report based on the detection result of the slope condition by the detection unit 36 (Step S62). The report generation unit 38 generates an evaluation report by arranging evaluation data indicating the above-described evaluation result based on inspection guidelines issued by a national government or the like or a format according to a request from a road administrator.
Here, the processing of detecting the slope condition will be described in detail with reference to
First, the receiving unit 32 receives a shape detection request (Step S71). Next, the detection unit 36 performs shape detection processing using the evaluation target data (Step S72). Here, the shape data indicating the shape of the slope is represented by three-dimensional information such as the extension, height, and inclination angle of the slope, position information, and the like. The extension of the slope is the length of the slope in plan view (the length in the depth direction of the transverse section in which the inclination of the slope can be seen). The shape data also includes information indicating the type of the slope, namely, whether the slope is a natural slope or an earthwork structure. In a case where the slope is an earthwork structure, the shape data also includes information on the type of the earthwork structure. The type of the earthwork structure is, for example, a retaining wall, a slope frame, sprayed mortar, the presence or absence of an anchor, an embankment, or the like.
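The height and inclination-angle portion of the shape data can be illustrated with two three-dimensional points taken from the distance sensor data, one at the base and one at the top of the slope; the axis convention used here (y as horizontal depth, z as height) is an assumption for this sketch.

```python
import math

def slope_shape(base_point, top_point):
    """Height and inclination angle of a slope from two 3-D points (x, y, z).

    y is the horizontal depth, z the height; the angle is measured from horizontal.
    """
    dy = top_point[1] - base_point[1]
    dz = top_point[2] - base_point[2]
    height = dz
    angle_deg = math.degrees(math.atan2(dz, dy))
    return height, angle_deg

# Hypothetical points: the slope rises 10 m over a 10 m horizontal run.
h, a = slope_shape((0, 0, 0), (0, 10, 10))
print(h, round(a, 1))  # 10 45.0
```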
Specifically, the detection unit 36 detects the extension, the height, and the inclination angle of the slope based on the image data and the three-dimensional data included in the evaluation target data. In addition, the detection unit 36 detects the type of the slope indicated in the image that is the evaluation target data using the condition type management DB 3001 (see
Next, the display control unit 33 causes the display 306 to display the shape data, which is the detection result in Step S72 (Step S73). In Steps S71 to S73 described above, “structure information detection” processing may be performed instead of the “shape detection” processing.
In this case, the receiving unit 32 receives a request to detect structure information (Step S71). Next, the detection unit 36 performs the structure information detection processing using the evaluation target data (Step S72). Then, the display control unit 33 causes the display 306 to display the structure information detection information, which is the detection result in Step S72 (Step S73).
Here, the structure information includes additional information on the structure in addition to the shape data described above. Specifically, the detection unit 36 detects, based on the image data and the three-dimensional data included in the evaluation target data, the type of the slope indicated in the image that is the evaluation target data and the type of the additional information on the slope using the condition type management DB 3001 (see
Next, if the receiving unit 32 receives a damage detection request for requesting damage detection in the slope condition (YES in Step S74), then the processing proceeds to Step S75. On the other hand, if the receiving unit 32 does not receive a damage detection request (NO in Step S74), then the processing proceeds to Step S77. The detection unit 36 performs processing of detecting damage to the slope condition on the evaluation target data (Step S75).
Here, in the damage detection processing in the slope condition, the presence or absence of deformation on the slope or the degree of the deformation is detected as damage data indicating the degree of damage to the slope. The degree of deformation indicates a degree of deterioration of the deformation, such as the width of a crack, the size of separation, or the size of a bulge. The detection unit 36 detects the presence or absence of deformation on the slope or the degree of the deformation based on the image data and the sensor data included in the evaluation target data. The detection unit 36 also detects whether the degree of the deformation exceeds a predetermined value by using a predetermined detection formula for the degree of deterioration of deformation or the like. In this case, the detection unit 36 determines, for example, whether the width of the crack is equal to or larger than a certain value, whether the size of the separation is equal to or larger than a certain value, or whether the bulge is large.
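The threshold comparison described above can be sketched as follows; the threshold values and dictionary keys are hypothetical, since the actual criteria would follow inspection guidelines:

```python
# Hypothetical thresholds (millimeters); real values depend on inspection guidelines.
THRESHOLDS = {"crack_width_mm": 3.0, "separation_mm": 50.0, "bulge_mm": 30.0}

def evaluate_deformation(measurements):
    """measurements: dict mapping a deformation kind to its measured size.

    Returns the kinds whose degree meets or exceeds the predetermined value,
    corresponding to the determination that the deformation is severe.
    """
    return [kind for kind, value in measurements.items()
            if value >= THRESHOLDS.get(kind, float("inf"))]
```

A crack of 3.5 mm would be reported while a 10 mm bulge would not, under these illustrative thresholds.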
Then, in Step S38 illustrated in
Next, the display control unit 33 causes the display 306 to display a display screen indicating the damage detection result in Step S75 (Step S76).
The display control unit 33 also causes the display 306 to display a cross-sectional image. The cross-sectional image shows a cross-sectional view of the slope to be evaluated, which is drawn based on the shape data detected by the detection unit 36. Since the shape data is detected using the sensor data from the distance sensor 8a (three-dimensional sensor), the shape data can be represented in detail including three-dimensional information such as the inclination or height of the slope, which cannot be calculated only from a two-dimensional image.
Next, if the receiving unit 32 receives a map information acquisition request (YES in Step S77), then the processing proceeds to Step S78. On the other hand, if the receiving unit 32 does not receive a map information acquisition request (NO in Step S77), then the processing ends. The detection unit 36 generates map information indicating the position of the slope condition to be evaluated (Step S78). Specifically, the detection unit 36 generates map information in which an image indicating the position of the slope is added, at the position (north latitude and east longitude) indicated by the positioning data acquired in Step S55, to map data available from a predetermined service or application provided by an external web server or the like. The map data provided from the external web server or the like is managed by the map data management unit 37.
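Map data from an external web service is commonly addressed by Web Mercator tile indices. As an illustrative Python sketch (not part of the embodiment), the position (north latitude, east longitude) from the positioning data can be converted into such tile indices as follows:

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Convert a WGS84 latitude/longitude to Web Mercator tile indices,
    the addressing scheme used by common web map services."""
    n = 2 ** zoom                                  # tiles per axis at this zoom
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return xtile, ytile
```

An image indicating the slope position can then be overlaid on the tile containing these indices.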
Next, the display control unit 33 causes the display 306 to display map information 490 generated in Step S78 (Step S79).
If the receiving unit 32 receives a sign detection request for requesting to detect a sign of damage to the slope condition (YES in Step S80), then the processing proceeds to Step S81. On the other hand, if the receiving unit 32 does not receive a sign detection request (NO in Step S80), then the processing ends. The detection unit 36 performs processing of detecting a sign of the slope condition on the evaluation target data (Step S81).
In the condition inspection system 1, conventionally, when deformation of a slope is found, the condition and the position of the slope are identified. However, no known approach measures information indicating a sign of slope deformation at a given position before the deformation actually occurs on the slope. Here, in the damage sign detection processing of the slope condition, a sign of slope deformation is detected based on measurement data on the slope, including peripheral data indicating a physical quantity around the slope, as sign data indicating a sign of damage to the slope.
The measurement data includes the captured image data obtained by capturing an image of the slope by the image capturing device 7 or the sensor data obtained by measuring the slope by a three-dimensional sensor such as the distance sensor 8a.
The peripheral data includes measurement data on an object other than the slope, and the object other than the slope includes at least one of seepage, earth and sand, rocks, and plants.
In a case where the measurement data on the slope includes peripheral data indicating seepage occurring on the slope surface, accumulated water may be exerting pressure from the back side of the slope, and thus it is detected that there is a sign of deformation of the slope. The detection is not limited to the presence or absence of seepage; a sign of deformation of the slope is detected depending on the amount, type, and location of the seepage.
In a case where the measurement data on the slope includes peripheral data indicating plants and moss growing on the slope surface, seepage may be occurring and accumulated water may be exerting pressure from the back side of the slope, and thus it is detected that there is a sign of deformation of the slope. The detection is not limited to the presence or absence of plants and moss; a sign of deformation of the slope is detected depending on the amount, type, and location of the plants and moss.
In a case where the measurement data on the slope includes peripheral data indicating rockfall or earth and sand around the slope, an abnormality may have occurred on the back side or upper side of the slope, and thus it is detected that there is a sign of deformation of the slope. The detection is not limited to the presence or absence of rockfall or earth and sand; a sign of deformation of the slope is detected depending on the amount, type, and location of the rockfall or earth and sand.
In a case where the measurement data on the slope includes peripheral data indicating blockage in a drain hole, a pipe, a drainage channel of a berm, or the like, drainage from the back side to the front side of the slope may be hindered and accumulated water may be exerting pressure from the back side of the slope, and thus it is detected that there is a sign of deformation of the slope. The detection is not limited to the presence or absence of blockage; a sign of deformation of the slope is detected depending on the amount, type, and location of the foreign matter causing the blockage.
In a case where a drain hole, a pipe, or a drainage channel of a berm is itself damaged, the damage is detected as deformation of the slope. Blockage in a drain hole, a pipe, a drainage channel of a berm, or the like, by contrast, is not detected as deformation of the slope but as a sign of deformation of the slope.
A sign of deformation of the slope may also be detected based on a combination of a plurality of sets of measurement data on objects other than the slope. Specifically, even if the peripheral data indicates that seepage is present only in a small part of the slope, in a case where the entire slope is covered with moss, it is estimated that the seepage actually spreads over the entire slope surface, and it is detected that there is a sign of deformation of the slope.
In addition, the peripheral data includes measurement data of a physical quantity other than an object, and the measurement data of a physical quantity other than an object includes measurement data on light.
In a case where the measurement data on the slope includes peripheral data indicating good sun exposure, it is detected that there is a sign of deformation of the slope in combination with the measurement data on an object other than the slope described above. Specifically, in a case where moss grows on a slope that is easily dried due to good sun exposure, there is a possibility that seepage occurs and accumulated water is exerting pressure from the back side of the slope, and thus, it is detected that there is a sign of deformation of the slope.
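The rules described above for seepage, rockfall, blockage, moss, and sun exposure can be summarized as a rule-based Python sketch; all keys, thresholds, and messages below are illustrative assumptions rather than part of the embodiment:

```python
def detect_deformation_signs(peripheral):
    """peripheral: dict of observations extracted from measurement data.

    Keys (seepage_area_ratio, rockfall_count, drain_blocked, moss_coverage,
    good_sun_exposure) and thresholds are hypothetical. Returns a list of
    sign comments for display.
    """
    signs = []
    if peripheral.get("seepage_area_ratio", 0.0) > 0.05:
        signs.append("seepage on the surface: water may press from the back side")
    if peripheral.get("rockfall_count", 0) > 0:
        signs.append("rockfall or earth and sand: possible abnormality above or behind")
    if peripheral.get("drain_blocked", False):
        signs.append("blocked drainage: water may accumulate behind the slope")
    # Combination rule: wide moss coverage suggests seepage spreads over the
    # whole face even if only a small wet area is observed directly.
    if (peripheral.get("moss_coverage", 0.0) > 0.8
            and peripheral.get("seepage_area_ratio", 0.0) > 0.0):
        signs.append("moss over the entire face: seepage likely spreads over the surface")
    # A face that dries easily under good sun exposure should not be mossy
    # unless water keeps seeping out.
    if peripheral.get("good_sun_exposure") and peripheral.get("moss_coverage", 0.0) > 0.5:
        signs.append("moss despite good sun exposure: hidden seepage suspected")
    return signs
```

Such a rule set would typically be replaced or supplemented by the trained detection described for the condition type management DB, but it illustrates how peripheral data combine into sign data.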
In the damage sign detection processing of the slope condition, a comment on a sign of slope deformation is generated based on measurement data on the slope including peripheral data indicating a physical quantity around the slope as sign data indicating a sign of damage to the slope. Then, in Step S38 illustrated in
Specifically, based on the captured image data which is an example of the acquired peripheral data, the training images of the condition type management table illustrated in
Next, the display control unit 33 causes the display 306 to display a display screen indicating the sign detection result in Step S81 (Step S82).
The display control unit 33 also causes the display 306 to display a cross-sectional image. As described above, the evaluation system 4 detects, as the evaluation of the slope condition, the shape of the slope including the three-dimensional information, the degree of damage to the slope, the sign of deformation of the slope, and the position of the slope to be evaluated.
Hereinafter, the sequence between the evaluation device 3 and the data management device 5 will be described; the same applies to the sequences between the data management device 5 and each of the data acquisition device 9, the communication terminal 1100, and the communication terminal 1200.
When a user of the evaluation device 3 designates a folder, the receiving unit 32 of the evaluation device 3 receives selection of target data (Step S91). Alternatively, when the user of the evaluation device 3 selects any position in the map information managed by the map data management unit 37 of the evaluation device 3, the receiving unit 32 of the evaluation device 3 may receive the selection of the position information in the map information.
Next, the communication unit 31 transmits a request for an input/output screen related to the target data selected in Step S91 to the data management device 5, and the communication unit 51 of the data management device 5 receives the request transmitted from the evaluation device 3 (Step S92). The request includes the folder name selected in Step S91. Alternatively, the request may include position information in the map information.
Next, the storage/read unit 59 of the data management device 5 searches the processed data management DB 5003 (see
The generation unit 54 of the data management device 5 generates an input/output screen including the image data based on the image data read out by the storage/read unit 59 (Step S93). The input/output screen is a screen with which to receive instruction operation for instructing generation of an image indicating a specified position in a luminance image indicating the slope.
The communication unit 51 transmits input/output screen information related to the input/output screen generated in Step S93 to the evaluation device 3, and the communication unit 31 of the evaluation device 3 receives the input/output screen information transmitted from the data management device 5 (Step S94). Step S94 is an example of a determination receiving screen transmission step.
Then, the display control unit 33 of the evaluation device 3 causes the display 306 to display the input/output screen received in Step S94 (Step S95). The receiving unit 32 of the evaluation device 3 receives predetermined input operation by the user on the displayed input/output screen. The input operation includes instruction operation for instructing generation of an image indicating a specified position in a luminance image indicating the slope. Step S95 is an example of a receiving step.
The communication unit 31 transmits input information related to the input operation received by the receiving unit 32 to the data management device 5, and the communication unit 51 of the data management device 5 receives the input information transmitted from the evaluation device 3 (Step S96). The input information includes instruction information for instructing generation of an image indicating a specified position in a luminance image indicating the slope.
The generation unit 54 of the data management device 5 generates a display image using the image data read by the storage/read unit 59 in Step S93 based on the received input information (Step S97). The display image includes a surface display image including a surface image indicating the surface of the slope and a surface position image indicating a specified position in the surface image, and a cross-section display image including a cross-sectional image indicating the cross-section of the slope and a cross-sectional position image indicating a specified position in the cross-sectional image. Step S97 is an example of an image generation step.
The communication unit 51 of the data management device 5 transmits the display image generated in Step S97 to the evaluation device 3, and the communication unit 31 of the evaluation device 3 receives the display image transmitted from the data management device 5 (Step S98). Step S98 is an example of a display image transmission step.
The display control unit 33 of the evaluation device 3 causes the display 306 to display the display image received in Step S98 (Step S99). Step S99 is an example of a display step.
Although
In such a case, Steps S92, S94, S96, and S98 related to data transmission and reception are omitted, and the evaluation device 3 can perform display processing similar to that in
Generation of Surface Display Image Based on Operation for Designating Specified Position
The display control unit 33 of the evaluation device 3 displays an input/output screen 2000 including a specific receiving screen 2010 for receiving designation operation for designating a specified position in a luminance image indicating a slope and a determination receiving screen 2020 for receiving determination operation for determining to generate an image indicating the specified position in the slope.
The display control unit 33 displays a surface image 2100 indicating the surface of the slope on the specific receiving screen 2010 and displays a pointer 2300 operated with the pointing device 312 on the surface image 2100.
The surface image 2100 is a luminance image read from the captured image data illustrated in
The display control unit 33 displays a determination receiving screen 2020 including a specified position determination button 2400, a deformation confirmation button 2410, a deformation sign confirmation button 2420, a front view analysis button 2430, a front view comparison button 2440, a cross-sectional view analysis button 2450, and a cross-sectional view comparison button 2460. The deformation confirmation button 2410, the deformation sign confirmation button 2420, the front view analysis button 2430, the front view comparison button 2440, the cross-sectional view analysis button 2450, and the cross-sectional view comparison button 2460 are buttons used to instruct generation of an image indicating a specified position on the slope, with the position of a part satisfying a predetermined condition in the surface image 2100 or a cross-sectional image 2200 set as the specified position.
The specified position determination button 2400 is a button used to determine a specified position on the slope designated on the specific receiving screen 2010 and to give an instruction to generate an image indicating the specified position on the slope. The specified position determination button 2400 may determine not only the specified position designated on the specific receiving screen 2010 but also the specified position specified by the determining unit 52 or the like and displayed on the specific receiving screen 2010.
The deformation confirmation button 2410 is a button used to instruct generation of an image indicating a specified position on the slope with a position indicating deformation on the slope as the specified position. The deformation sign confirmation button 2420 is a button used to instruct generation of an image indicating a specified position on the slope with a position indicating a sign of deformation on the slope as the specified position.
The front view analysis button 2430 is a button used to instruct generation of an image indicating a specified position on the slope with a part obtained by analyzing the surface image 2100 as the specified position. The front view comparison button 2440 is a button used to instruct generation of an image indicating the specified position on the slope with a part obtained by comparing the surface image 2100 with another image as the specified position.
The cross-sectional view analysis button 2450 is a button used to instruct generation of an image indicating a specified position on the slope with a part obtained by analyzing a cross-sectional image described below as the specified position. The cross-sectional view comparison button 2460 is a button used to instruct generation of an image indicating the specified position on the slope with a part obtained by comparing the cross-sectional image with another image as the specified position.
When a predetermined position on the surface image 2100 is pointed to by the pointer 2300, the receiving unit 32 of the evaluation device 3 receives the pointing operation (Step S101), and when the specified position determination button 2400 is operated, the receiving unit 32 receives the operation (Step S102).
Next, the determining unit 34 of the evaluation device 3 detects the XY coordinates of the pointed position in the surface image 2100 as the specified position (Step S103). The specified position may indicate a point in the XY coordinates or may indicate a region therein.
Next, the communication unit 31 of the evaluation device 3 transmits input information related to the input operation received by the receiving unit 32 to the data management device 5 (Step S104). The input information includes designation information for designating a specified position in the XY coordinates based on pointing operation by the pointer 2300, and instruction information for instructing generation of an image indicating the specified position on the slope based on operation of the specified position determination button 2400.
The communication unit 51 of the data management device 5 receives the input information transmitted from the evaluation device 3, and the generation unit 54 generates a surface position image overlapping the XY coordinates of the specified position by superimposing the surface position image on the surface image using the image data illustrated in
Subsequently, the generation unit 54 generates a cross-sectional image corresponding to the X coordinate of the specified position using the image data illustrated in
Note that, although the generation unit 54 generates, in Step S106, the cross-sectional image of the cross section including the Z-axis direction and the vertical direction illustrated in
The generation unit 54 generates a cross-sectional position image overlapping the Y coordinate of the specified position by superimposing the cross-sectional position image on the ridge line of the cross-sectional image, to thereby generate a cross-section display image (Step S107).
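Steps S105 to S107 can be sketched in Python as follows, operating on data rather than rendering actual images; the array layout (surface[y][x] holding a depth sample) and the function name are assumptions:

```python
def make_display_images(surface, spec_x, spec_y):
    """Sketch of Steps S105-S107 on data rather than pixels.

    surface: 2-D list surface[y][x] of depth samples of the slope face.
    Returns the marker to superimpose on the surface image (Step S105),
    the cross-section profile at the X coordinate of the specified
    position (Step S106), and the marker to place on the ridge line of
    the cross-sectional image at the Y coordinate (Step S107).
    """
    surface_marker = {"x": spec_x, "y": spec_y}     # surface position image
    profile = [row[spec_x] for row in surface]      # cross-section at X
    section_marker = {"y": spec_y, "depth": surface[spec_y][spec_x]}
    return surface_marker, profile, section_marker
```

A renderer would then draw the two markers over the surface image and the cross-sectional image, respectively, so the same specified position is visible in both views.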
The communication unit 51 transmits, to the evaluation device 3, the surface display image generated in Step S105 and the cross-section display image generated in Step S107 (Step S108).
Then, as illustrated in Steps S98 and S99 of
The display content of the determination receiving screen 2020 is the same as that in
The display control unit 33 of the evaluation device 3 displays, on the specific receiving screen 2010, a surface display image 2150 including the surface image 2100 indicating the surface of the slope and a surface position image 2110 indicating a specified position in the surface image 2100, and a cross-section display image 2250 including the cross-sectional image 2200 indicating the cross-section of the slope and a cross-sectional position image 2210 indicating a specified position in the cross-sectional image 2200.
The display control unit 33 displays the cross-sectional image 2200 in correlation with the Y-axis direction and the Z-axis direction illustrated in
The user can appropriately evaluate and confirm the condition of the specified position by comparing the surface position image 2110 and the cross-sectional position image 2210.
In the modification illustrated in
The determining unit 534, the evaluation target data generation unit 535, the detection unit 536, the map data management unit 537, the report generation unit 538, and the setting unit 540 illustrated in
The storage unit 5000 of the data management device 5 includes a condition type management DB 5005 instead of the condition type management DB 3001 included in the storage unit 3000 of the evaluation device 3 as illustrated in
The condition type management DB 5005 illustrated in
The detection unit 536 uses the condition type management DB 3001 (see
The generation unit 54 generates a detection data display screen including the detection data detected in Step S201 (Step S202). The generation unit 54 can generate a detection data display screen including a plurality of sets of detection data.
The detection unit 536 estimates a boundary between the slope 80 and a part other than the slope 80, that is, a start position and an end position of the slope 80 in the moving direction of the moving object 6 based on the detection data detected in Step S201 (Step S203). The detection unit 536 can estimate a plurality of combinations of the start position and the end position. Here, in the embodiment illustrated in
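The estimation of start and end positions in Step S203 can be illustrated as finding contiguous runs in per-frame detection results along the moving direction; the binary-label representation below is an assumption for illustration:

```python
def estimate_boundaries(labels):
    """labels: per-frame detection results along the moving direction,
    e.g. 1 where a slope is detected and 0 elsewhere.

    Returns a list of (start_index, end_index) pairs, one per contiguous
    slope section, so that a plurality of combinations of start and end
    positions can be estimated.
    """
    boundaries, start = [], None
    for i, lab in enumerate(labels):
        if lab and start is None:
            start = i                          # boundary at one end of the slope
        elif not lab and start is not None:
            boundaries.append((start, i - 1))  # boundary at the other end
            start = None
    if start is not None:                      # slope continues to the last frame
        boundaries.append((start, len(labels) - 1))
    return boundaries
```

Each returned pair corresponds to one start position bar and one end position bar to superimpose on the composite image.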
The generation unit 54 generates an input/output screen in which the start position bar and the end position bar are superimposed on the composite image based on the start position and the end position estimated in Step S203, similarly to the input/output screen 2000 illustrated in
The generation unit 54 generates, based on the start position and the end position estimated in Step S203, a map screen in which an image indicating the start position and an image indicating the end position are superimposed on the map data, similarly to the map information generated in Step S78 of
The communication unit 51 transmits, to the evaluation device 3, detection data display screen information indicating the detection data display screen generated in Step S202, input/output screen information indicating the input/output screen generated in Step S204, and map screen information indicating the map screen generated in Step S205 (Step S206).
The communication unit 51 can transmit these pieces of information also to the data acquisition device 9, the communication terminal 1100, and the communication terminal 1200.
The communication unit 31 receives the detection data display screen information, the input/output screen information, and the map screen information transmitted from the data management device 5 (Step S211).
The display control unit 33 causes the display 306 to display the detection data display screen indicated in the detection data display screen information received in Step S211 (Step S212).
When the receiving unit 32 receives selection operation for selecting one or a plurality of sets of detection data included in the detection data display screen (Step S213), the display control unit 33 causes the display 306 to display the input/output screen indicated in the input/output screen information received in Step S211 so as to include the detection data selected in Step S213 (Step S214).
Specifically, as illustrated in
Further, the input/output screen displayed in Step S214 is similar to the input/output screen 2000 illustrated in
Here, the start position bar 250S is an example of a first marker indicating an estimated position of a boundary at one end of the slope 80 with a part other than the slope 80, and the end position bar 250G is an example of a second marker indicating an estimated position of a boundary at the other end of the slope 80 with a part other than the slope 80.
Next, the display control unit 33 causes the display 306 to display the map screen indicated in the map screen information received in Step S211 so as to include the detection data selected in Step S213 (Step S215).
The display control unit 33 of the evaluation device 3 causes the display 306 to display the detection data display screen 3000 including text information 3100A to 3100D indicating the plurality of sets of detection data detected in Step S201 of
When the pointer 2300 points to a predetermined position on any one of the text information 3100A to 3100D and the image information 3200A to 3200D, the receiving unit 32 of the evaluation device 3 receives operation for selecting the pointed detection data as illustrated in Step S213 of
The display control unit 33 may switch and display the detection data display screen 3000 on the input/output screen 2000 illustrated in
The display control unit 33 causes the display 306 to display the map screen 490 including an image capturing path 492 including an image capturing start position 492a and an image capturing end position 492b, a start position 491a of the slope 80 in the moving direction of the moving object 6, and an end position 491b of the slope 80 in the moving direction of the moving object 6. The start position 491a is an example of one end of the slope 80, and the end position 491b is an example of the other end of the slope 80.
The image capturing path 492 corresponds to the image capturing position of the composite image described in
In addition, the start position 491a and the end position 491b correspond to the start position and the end position estimated in Step S203 of
The display control unit 33 may switch and display the map screen 490 on the input/output screen 2000 illustrated in
Next, a modification to the moving object system 60 will be described with reference to
In the embodiment described above, the height of the image capturing device 7 from the ground is low, which makes it difficult to perform image capturing of a berm above a retaining wall, a berm above a slope frame, or a berm above a sprayed mortar as illustrated in
The drone as the moving object 6 is equipped with not only the image capturing device 7 but also the data acquisition device 9 including a sensor device such as the distance sensor 8a, the GNSS sensor 8b, or the angle sensor 8c. This makes it possible to evaluate the condition of a high location or an embankment that cannot be evaluated using a vehicle as the moving object 6. In particular, embankments and high locations are places that are difficult for humans to reach for close visual inspection, so image capturing with the drone as in the second modification is desired. In addition, slopes of embankments and high locations are often covered with a lot of vegetation such as trees and grasses. Therefore, the data acquisition device 9 preferably includes the image capturing device 7 capable of capturing a wide-angle image.
As described in Step S123 of
For example, the slope may not be flat but undulating (for example, an earthwork structure in which mortar is sprayed onto a cliff), may be covered with vegetation, or may be covered with a wire mesh. Therefore, a moving object system 60 (60a, 60b, 60c) according to the third modification includes, as the sensor device 8, a spectrum camera, an infrared camera, or an extended depth of field (EDoF) camera capable of acquiring wavelength information in order to distinguish the shape of the slope from an object such as a plant or a wire mesh.
In addition, the moving object system 60 according to the third modification preferably has a configuration in which not only a tool for distinguishing the shape of the slope but also a lighting device is mounted on the data acquisition device 9 so that an image of the slope can be captured under various conditions such as weather and sun exposure. The lighting device in this case is preferably a line lighting device that irradiates an area corresponding to the image capturing range by the image capturing device 7, or a time-sharing lighting device synchronized with the image capturing device 7 and the sensor device 8.
Further, in order to process data acquired by the moving object system 60 according to the third modification, the evaluation target data generation unit 35 of the evaluation device 3 preferably has an image processing function such as a camera shake correction function, a focal depth correction function (blur correction function), a distortion correction function, or a contrast enhancement function so as not to overlook even small deformation. In addition, the evaluation target data generation unit 35 preferably has a function of deleting noise that conceals deformation on an earthwork structure such as grass, moss, or a wire mesh, and a function of distinguishing between a shadow of grass or the like and deformation such as a crack. As described above, by using the moving object system 60 according to the third modification, the condition inspection system 1 can accurately evaluate the slope condition even at a part having a complicated structure or a part where grass, moss, a wire mesh, or the like is present.
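Of the correction functions mentioned, contrast enhancement can be illustrated by a simple percentile-based stretch; a real pipeline would operate on image arrays rather than the flat intensity list assumed here:

```python
def enhance_contrast(pixels, lo_pct=2, hi_pct=98):
    """Percentile-based contrast stretch so that small deformation such
    as a thin crack is not overlooked.

    pixels: flat list of 0-255 intensities (an illustrative stand-in for
    image data). Intensities below the low percentile map to 0, above
    the high percentile to 255, and the range between is stretched.
    """
    ranked = sorted(pixels)
    lo = ranked[len(ranked) * lo_pct // 100]
    hi = ranked[min(len(ranked) * hi_pct // 100, len(ranked) - 1)]
    span = max(hi - lo, 1)
    return [min(max((p - lo) * 255 // span, 0), 255) for p in pixels]
```

Clipping a few percent at each end makes the stretch robust to outlier pixels such as specular highlights on a wire mesh.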
The data management device 5 according to an embodiment of the present invention includes the generation unit 54 that generates the input/output screen 2000 displaying the composite image 2500 including the boundary between the slope 80 and a part other than the slope 80 in the moving direction of the moving object 6, the composite image 2500 being acquired by connecting the captured images pn, captured by the image capturing device 7 installed in the moving object 6, of the target region 70 including the slope 80 and a part other than the slope 80 with the target region 70 divided into the plurality of image capturing regions dn along the moving direction of the moving object 6.
Here, the data management device 5 is an example of an information processing device, the slope 80 is an example of a target object, the input/output screen 2000 is an example of a display screen, and the generation unit 54 is an example of a generation means.
This makes it possible to confirm the composite image 2500 including the boundary between the slope 80 and a part other than the slope 80 displayed on the input/output screen 2000, and to confirm a position of an unknown slope 80.
In the first aspect, the generation unit 54 generates the input/output screen 2000 such that the composite image 2500 includes boundaries between the plurality of slopes 80 at different positions and parts other than the plurality of slopes 80 in the moving direction of the moving object 6.
This enables confirming the positions of the plurality of slopes 80.
In the first aspect or the first aspect-2, the generation unit 54 generates the input/output screen 2000 such that the length of the composite image 2500 in the moving direction of the moving object 6 corresponding to the moving distance of the moving object 6 differs according to the resolution of the display 306 or the like on which the input/output screen 2000 is displayed.
This improves the visibility of the composite image 2500 including the boundaries on both sides of the slope 80 displayed on the input/output screen 2000, and enables confirming the position of the slope 80 easily.
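One possible way to vary the on-screen length with display resolution is to cap the scale (pixels per meter of travel) so that the whole composite image, with both boundaries, remains visible; the margin and maximum-scale constants below are assumptions:

```python
def composite_display_length(moving_distance_m, display_width_px,
                             margin_px=40, max_px_per_m=200):
    """Return the on-screen length (pixels) of a composite image that
    covers moving_distance_m of travel, scaled to the display width.

    The scale is capped at max_px_per_m so short slopes are not blown
    up excessively; otherwise the image fills the usable width.
    """
    usable = max(display_width_px - margin_px, 1)
    px_per_m = min(usable / moving_distance_m, max_px_per_m)
    return int(moving_distance_m * px_per_m)
```

On a wider display the same moving distance maps to more pixels, which is the behavior the aspect describes.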
In the first aspect, the generation unit 54 generates the input/output screen 2000 displaying the start position bar 250S and the end position bar 250G, which are examples of markers indicating the estimated position of the boundary, superimposed on the composite image 2500.
This enables the user to easily recognize the estimated position of the boundary using the start position bar 250S and the end position bar 250G.
In the second aspect, the generation unit 54 generates the input/output screen 2000 displaying, on one screen or in one line, the first marker indicating the estimated position of the boundary at one end of the slope 80 and the second marker indicating the estimated position of the boundary at the other end of the slope 80. The start position bar 250S is an example of the first marker, and the end position bar 250G is an example of the second marker.
This enables the user to easily recognize, on one screen or in one line, the boundary at one end of the slope 80 and the boundary at the other end thereof.
In any one of the first to third aspects, the generation unit 54 generates the detection data display screen 3000 displaying the text information 3100A to 3100D indicating the estimated types of the slope 80. The text information 3100A to 3100D is an example of the type information, and the detection data display screen 3000 is an example of the type display screen. This enables the user to confirm the estimated type of the slope 80.
In the fourth aspect, the display control unit 33 of the evaluation device 3 causes the display 306 to display the captured image of the slope 80 corresponding to the selected text information 3100A to 3100D in the composite image 2500, based on a selection operation for selecting the text information 3100A to 3100D, or the image information 3200A to 3200D corresponding to the text information 3100A to 3100D, displayed on the display 306.
This enables the user to confirm a captured image capturing an image of the slope 80 corresponding to the estimated type of the slope 80 in the composite image 2500.
In any one of the first to fifth aspects, the data management device 5 includes the setting unit 55 that sets the partial image 255 corresponding to a partial region based on the determination operation on the specified position determination button 2400 for determining the specification of a partial region in the composite image 2500. The setting unit 55 is an example of a setting means.
This enables specifying a partial region corresponding to the slope 80 and enables setting the partial image 255 corresponding to the slope 80.
In any one of the first to sixth aspects, the generation unit 54 generates the input/output screen 2000 so as to display, side by side, each of the plurality of divided images 250A1 to 250Am obtained by dividing the composite image 2500.
As a result, even when the length of the composite image 2500 or the slope 80 in the moving direction of the moving object 66 is long, the position of the slope 80 can be easily confirmed on one input/output screen 2000.
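For illustration only, and not as part of the claimed configuration, the division of a long composite image into divided images arranged side by side can be sketched as follows. The function name and the abstraction of pixel rows as list elements are hypothetical.

```python
def divide_composite(composite_rows, rows_per_strip):
    """Split a long composite image (a list of pixel rows along the
    moving direction) into divided images of at most rows_per_strip
    rows each, for side-by-side display on one screen."""
    return [composite_rows[i:i + rows_per_strip]
            for i in range(0, len(composite_rows), rows_per_strip)]

# Example: a 10-row composite (rows abstracted as integers) split
# into strips of 4 rows; the last strip holds the remaining 2 rows.
strips = divide_composite(list(range(10)), 4)
# strips -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```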
In any one of the first to seventh aspects, the generation unit 54 generates the input/output screen 2000 so as to display the composite image 2500 with a resolution lower than that of each captured image pn, captured with the target region divided into the plurality of image capturing regions dn and stored in the acquired data management DB 5001.
As a result, the processing speed at the time of generating or displaying the input/output screen 2000 is increased, and the communication load at the time of transmitting and receiving the input/output screen information indicating the input/output screen 2000 is reduced.
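Purely as a non-limiting sketch, displaying the composite image at a lower resolution than the stored captured images can be illustrated by simple nearest-neighbor decimation. The function name and pixel representation are hypothetical; an actual implementation might use any resampling method.

```python
def downsample(image_rows, factor):
    """Return a lower-resolution copy of an image for display by
    keeping every `factor`-th pixel in both directions
    (nearest-neighbor decimation)."""
    return [row[::factor] for row in image_rows[::factor]]

# Example: a 4x4 grayscale image reduced to 2x2 for display,
# cutting the data volume to transmit and render by a factor of 4.
full = [[x + 10 * y for x in range(4)] for y in range(4)]
preview = downsample(full, 2)
# preview -> [[0, 2], [20, 22]]
```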
In any one of the first to eighth aspects, the target region 70 includes a first target region and a second target region which are different ranges in a direction intersecting the moving direction of the moving object 66, and the generation unit 54 generates the input/output screen 2000 so as to display a first composite image corresponding to the first target region and a second composite image corresponding to the second target region.
As a result, the position of the slope 80 can be confirmed by checking the first composite image and the second composite image that show different ranges in the direction intersecting the moving direction of the moving object 66.
In the ninth aspect, the data management device 5 includes the setting unit 55 that sets a first partial image corresponding to a partial region in the first composite image based on a first determination operation for determining the specification of a partial region in the first composite image, and sets a second partial image corresponding to a partial region in the second composite image.
As a result, based on the first determination operation for setting the first partial image corresponding to a partial region in the first composite image 2500, the second partial image corresponding to a partial region in the second composite image 2500 can be set.
In the tenth aspect, the setting unit 55 sets an integrated partial image in which the first partial image and the second partial image are connected.
In the sixth aspect, the setting unit 55 sets position information corresponding to a partial region based on the determination operation.
In the sixth aspect or the twelfth aspect, the setting unit 55 sets a specified point group corresponding to a partial region for the three-dimensional point group corresponding to the plurality of image capturing regions dn based on the determination operation.
An information processing method according to an embodiment of the present invention includes executing a generation step of generating the input/output screen 2000 displaying the composite image 2500 including the boundary between the slope 80 and a part other than the slope 80 in the moving direction of the moving object 66, by dividing the target region 70 including the slope 80 and a part other than the slope 80 into the plurality of image capturing regions dn along the moving direction of the moving object 66 and connecting the captured images pn captured by the image capturing device 7 installed in the moving object 66.
An information processing method according to an embodiment of the present invention includes an image capturing step of capturing an image of the target region 70 including the slope 80 and a part other than the slope 80 with the target region 70 divided into the plurality of image capturing regions dn along the moving direction of the moving object 66 by the image capturing device 7 installed in the moving object 66, and a generation step of generating the input/output screen 2000 displaying the composite image 2500 including the boundary between the slope 80 and a part other than the slope 80 in the moving direction of the moving object 66 by connecting the captured images pn obtained by the image capturing with the target region 70 divided into the plurality of image capturing regions dn.
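As a non-limiting illustration outside the specification, the connecting of the captured images pn in the generation step above can be sketched as concatenating the images in capture order along the moving direction. The function name and the representation of each captured image as a list of pixel rows are hypothetical.

```python
def connect_captured_images(captured_images):
    """Connect captured images p1..pn along the moving direction of
    the moving object to form one composite image.  Each captured
    image is a list of pixel rows; rows are appended in capture order."""
    composite = []
    for image in captured_images:
        composite.extend(image)  # append this image's rows to the composite
    return composite

# Example: three 2-row "captured images" of adjacent image capturing
# regions become one 6-row composite spanning all three regions.
p1 = [[1, 1], [1, 1]]
p2 = [[2, 2], [2, 2]]
p3 = [[3, 3], [3, 3]]
composite = connect_captured_images([p1, p2, p3])
```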
A program according to an embodiment of the present invention causes a computer to execute the information processing method according to the fourteenth aspect or fifteenth aspect.
The condition inspection system 1 according to an embodiment of the present invention includes the moving object system 60 having the moving object 66 and the image capturing device 7 installed in the moving object 66 and the data management device 5 for processing an image captured by the moving object system 60, in which the moving object system 60 captures an image, using the image capturing device 7, of the target region 70 including the slope 80 and a part other than the slope 80 with the target region 70 divided into the plurality of image capturing regions dn along the moving direction of the moving object 66, and the data management device 5 includes the generation unit 54 that generates the input/output screen 2000 displaying the composite image 2500 including the boundary between the slope 80 and a part other than the slope 80 in the moving direction of the moving object 66 by connecting the captured images pn obtained by the image capturing with the target region 70 divided into the plurality of image capturing regions dn.
Here, the condition inspection system 1 is an example of the information processing system, and the moving object system 60 is an example of the image capturing system.
In the seventeenth aspect, a terminal device such as the evaluation device 3 or the communication terminal 1100 or 1200 capable of communicating with the data management device 5 is further included. The data management device 5 further includes the communication unit 51 that transmits input/output screen information indicating the input/output screen 2000 to the terminal device, and the evaluation device 3 or the communication terminal 1100 or 1200 includes the communication unit 31, 1101, or 1201 that receives the input/output screen information transmitted from the data management device 5, and the display control unit 33, 1103, or 1203 that displays the input/output screen 2000 on the display 306.
Each function of the embodiments described above can be implemented by one or a plurality of processing circuits. Here, the “processing circuit” in the present embodiment includes a processor programmed to execute each function using software, such as a processor implemented by an electronic circuit, and devices such as an application specific integrated circuit (ASIC), a digital signal processor (DSP), a field programmable gate array (FPGA), a system on a chip (SOC), a graphics processing unit (GPU), and a conventional circuit module designed to execute the functions described above.
In addition, the various tables of the embodiments described above may be generated by the learning effect of machine learning; alternatively, the tables do not have to be used when the data of the items associated with each other is classified by machine learning. Here, machine learning is a technology that allows a computer to acquire human-like learning capabilities, and refers to a technology in which a computer autonomously generates algorithms necessary for determination, such as data identification, from learning data imported in advance, and applies the algorithms to new data to make predictions. The learning method for machine learning may be any one of supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, and deep learning, or may be a combination of these learning methods, and any learning method for machine learning may be used.
In addition, the various tables of the embodiments described above may be generated using an image processing method. Examples of the image processing method include edge detection, line detection, and binarization processing. Similarly, when audio data is handled, an audio conversion method such as the Fourier transform may be used.
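As a minimal, non-limiting sketch of the binarization processing mentioned above (the function name, pixel representation, and threshold value are hypothetical), each grayscale pixel is mapped to 0 or 1 by comparison with a threshold:

```python
def binarize(gray_rows, threshold):
    """Binarization processing: map each grayscale pixel to 1 if it
    exceeds the threshold, otherwise to 0."""
    return [[1 if px > threshold else 0 for px in row] for row in gray_rows]

# Example: a 2x2 grayscale image binarized with a mid-range threshold.
binary = binarize([[10, 200], [128, 50]], 127)
# binary -> [[0, 1], [1, 0]]
```

Edge detection and line detection would likewise operate on such pixel arrays, e.g. by gradient filtering; any standard image processing method may be substituted.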
Although the evaluation system, the condition inspection system, the evaluation method, and the program according to one embodiment of the present invention have been described above, the present invention is not limited to the above-described embodiment and can be modified within the scope conceivable by those skilled in the art, such as by addition, modification, or deletion of other embodiments; any such aspect is included in the scope of the present invention as long as the operation and effect of the present invention are achieved.
According to an embodiment, it is possible to confirm a position of a target part in an image captured by an image capturing device installed in a moving object.
The above-described embodiments are illustrative and do not limit the present invention. Thus, numerous additional modifications and variations are possible in light of the above teachings. For example, at least one element of different illustrative and exemplary embodiments herein may be combined with each other or substituted for each other within the scope of this disclosure and appended claims. Further, features of components of the embodiments, such as the number, the position, and the shape, are not limited to those of the embodiments and may be set as appropriate. It is therefore to be understood that within the scope of the appended claims, the disclosure of the present invention may be practiced otherwise than as specifically described herein.
The method steps, processes, or operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance or clearly identified through the context. It is also to be understood that additional or alternative steps may be employed.
Further, any of the above-described apparatus, devices or units can be implemented as a hardware apparatus, such as a special-purpose circuit or device, or as a hardware/software combination, such as a processor executing a software program.
Further, as described above, any one of the above-described and other methods of the present invention may be embodied in the form of a computer program stored in any kind of storage medium. Examples of storage media include, but are not limited to, flexible disks, hard disks, optical discs, magneto-optical discs, magnetic tapes, nonvolatile memory, semiconductor memory, read-only memory (ROM), etc.
Alternatively, any one of the above-described and other methods of the present invention may be implemented by an application specific integrated circuit (ASIC), a digital signal processor (DSP) or a field programmable gate array (FPGA), prepared by interconnecting an appropriate network of conventional component circuits or by a combination thereof with one or more conventional general purpose microprocessors or signal processors programmed accordingly.
Each of the functions of the described embodiments may be implemented by one or more processing circuits or circuitry. Processing circuitry includes a programmed processor, as a processor includes circuitry. A processing circuit also includes devices such as an application specific integrated circuit (ASIC), digital signal processor (DSP), field programmable gate array (FPGA) and conventional circuit components arranged to perform the recited functions.
| Number | Date | Country | Kind |
|---|---|---|---|
| 2022-152354 | Sep 2022 | JP | national |
| 2023-124018 | Jul 2023 | JP | national |
The present application is a continuation of International Application No. PCT/JP2023/032361, filed on Sep. 5, 2023, which claims the benefit of priority of the prior Japanese Patent Application No. 2022-152354, filed on Sep. 26, 2022, and the prior Japanese Patent Application No. 2023-124018, filed on Jul. 31, 2023. The contents of each of the above applications are incorporated herein by reference in their entirety.
| | Number | Date | Country |
|---|---|---|---|
| Parent | PCT/JP2023/032361 | Sep 2023 | WO |
| Child | 19075281 | | US |