The present invention relates to a structure inspection assistance apparatus, a structure inspection assistance method, and a program.
Social infrastructure includes structures such as bridges and tunnels. These structures may become damaged, and due to the progressive nature of such damage, regular inspection is desirable.
An inspector who has inspected a structure is required to prepare, as a form indicating the result of the inspection, an inspection report in a predetermined format on the basis of inspection guidelines established by a person such as an administrator of the structure. By reviewing a damage diagram prepared in a predetermined format, even an expert different from the inspector who actually carried out the inspection can grasp the progress of damage to the structure and formulate a maintenance plan for the structure.
Similarly, the condition of structures such as condominiums and office buildings is regularly inspected, and repairs and overhauls are carried out on the basis of the inspection results. As part of the inspection, an inspection report is prepared. With regard to the preparation of an inspection report, JP2019-082933A discloses a system that can shorten the time required to prepare an inspection report.
In an inspection report for a structure, the basis and rationale for the determination of a countermeasure category with respect to damage, soundness, and the like are written as comments in a findings column. However, it is necessary to prepare the comments while referring to various data, and the work is complicated. Moreover, in some cases, human factors such as the proficiency level of the preparer may result in variations in the comments.
The present invention has been created in light of such circumstances, and provides a structure inspection assistance apparatus, a structure inspection assistance method, and a program that enable more efficient commenting work and an equalization of findings and other comments.
A structure inspection assistance apparatus according to a first aspect includes a processor configured to perform a selection process of accepting a selection of target structure-related information including at least one of a captured image or damage information pertaining to a target structure, a preparation process of preparing a comment on damage to the target structure on a basis of the selected target structure-related information, and a display process of displaying the comment on a display.
In a structure inspection assistance apparatus according to a second aspect, the preparation process prepares at least one comment on damage to the target structure on a basis of at least one of the captured image or the damage information.
In a structure inspection assistance apparatus according to a third aspect, the preparation process uses machine learning to prepare the comment on damage to the target structure.
A structure inspection assistance apparatus according to a fourth aspect includes a database storing past target structure-related information, including at least one of a captured image or damage information pertaining to a structure, in association with a comment on damage to the structure, wherein the preparation process prepares, as a comment on damage to the target structure, a comment on damage to the structure regarding similar damage that is similar to damage to the target structure, on a basis of the target structure-related information and structure-related information stored in the database.
In a structure inspection assistance apparatus according to a fifth aspect, the selection process accepts a selection of the target structure-related information through a selection of a three-dimensional model of a structure associated with the target structure-related information.
In a structure inspection assistance apparatus according to a sixth aspect, the selection process accepts an automatic selection, from the target structure, of target structure-related information about which the comment is prepared.
In a structure inspection assistance apparatus according to a seventh aspect, the selection process accepts the automatic selection of the target structure-related information on a basis of at least one of the captured image or the damage information.
In a structure inspection assistance apparatus according to an eighth aspect, the processor executes an editing process of accepting an edit to the comment and modifying the comment.
In a structure inspection assistance apparatus according to a ninth aspect, the editing process accepts, as the edit to the comment, a comment candidate selected from a plurality of comment candidates corresponding to the comment.
In a structure inspection assistance apparatus according to a 10th aspect, the processor executes a related information extraction process of extracting related information that is related to damage to the target structure, and the display process displays the related information on the display.
A structure inspection assistance method according to an 11th aspect is performed by a processor and includes a selecting step of accepting a selection of target structure-related information including at least one of a captured image or damage information pertaining to a target structure, a preparing step of preparing a comment on damage to the target structure on a basis of the selected target structure-related information, and a displaying step of displaying the comment on a display.
A structure inspection assistance program according to a 12th aspect causes a computer to achieve a selection function of accepting a selection of target structure-related information including at least one of a captured image or damage information pertaining to a target structure, a preparation function of preparing a comment on damage to the target structure on a basis of the selected target structure-related information, and a display function of displaying the comment on a display.
According to a structure inspection assistance apparatus, a structure inspection assistance method, and a program of the present invention, more efficient commenting work and an equalization of findings and other comments are possible.
Hereinafter, preferred embodiments of a structure inspection assistance apparatus, a structure inspection assistance method, and a program according to the present invention will be described in accordance with the attached drawings. Herein, a “structure” is a construction work, including civil engineering structures such as bridges, tunnels, and dams, for example, and also encompassing other architectural works such as office buildings, houses, and the walls, columns, and beams of buildings.
A computer or a workstation may be used as the structure inspection assistance apparatus 10 illustrated in
The input/output interface 12 can input various data (information) into the structure inspection assistance apparatus 10. For example, data to be stored in the storage unit 16 is inputted through the input/output interface 12.
The CPU (processor) 20 centrally controls each unit by reading out various programs, including a structure inspection assistance program according to an embodiment, stored in the storage unit 16, the ROM 24, or the like, loading the programs into the RAM 22, and performing computations. The CPU 20 also performs various processes of the structure inspection assistance apparatus 10 by reading out a program stored in the storage unit 16 or the ROM 24 and performing computations with the use of the RAM 22.
The CPU 20 has a selection processing unit 51, a preparation processing unit 53, a display processing unit 55, and the like. The specific processing function of each unit will be described later. Since the selection processing unit 51, the preparation processing unit 53, and the display processing unit 55 are part of the CPU 20, the CPU 20 may also be described as executing the process of each unit.
Returning to
The storage unit 16 mainly stores target structure-related information 101, a three-dimensional model 103, and inspection report data 105.
The target structure-related information 101 includes at least one of a captured image or damage information pertaining to at least a target structure. The captured image is an image of a structure. The damage information includes at least one of a location of damage to the target structure, the type of damage, or the extent of damage. The target structure-related information 101 may also include an image (damage image) indicating damage detected from a captured image of the structure. The damage information may be acquired automatically by image analysis or the like, or may be acquired manually by a user.
The target structure-related information 101 may include multiple types of data, such as a panoramic composite image and a two-dimensional plan, for example. The panoramic composite image is an image corresponding to a specific member, obtained by compositing a group of multiple captured images. The damage information (damage image) may also be a panoramic composite.
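Although the manner of generating the panoramic composite image is not specified here, the following minimal sketch illustrates one common approach, compositing a group of captured images of a member with OpenCV's high-level stitching API; the file names are hypothetical:

```python
import cv2

# Hypothetical captured images of a single member (e.g., one girder).
paths = ["girder_01.jpg", "girder_02.jpg", "girder_03.jpg"]
images = [cv2.imread(p) for p in paths]

# The Stitcher estimates homographies between overlapping captured images
# and blends them into a single panoramic composite image.
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:  # status 0 indicates success
    cv2.imwrite("member_panorama.jpg", panorama)
else:
    print(f"stitching failed (status {status})")
```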
The three-dimensional model 103 is the data of a three-dimensional model of the structure created on the basis of multiple captured images, for example. The three-dimensional model 103 includes data pertaining to the areas and names of members forming the structure. Each member area and member name may be specified on the three-dimensional model 103.
Member areas and member names may be specified with respect to the three-dimensional model 103 automatically from information related to the shape, dimensions, and the like of the members. Member areas and member names may also be specified with respect to the three-dimensional model 103 on the basis of user operations.
The target structure-related information 101 and the three-dimensional model 103 may be associated with one another. For example, the target structure-related information 101 is stored in the storage unit 16 in association with locations and members on the three-dimensional model 103. Through the specification of locations on the three-dimensional model 103, the target structure-related information 101 may also be displayed on the three-dimensional model 103. Also, through the specification of the target structure-related information 101, the three-dimensional model 103 may be displayed together with the target structure-related information 101.
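One possible way to represent this association, shown purely as an illustrative sketch (the field names and member IDs are assumptions, not the apparatus's actual data layout), is a mapping from member identifiers on the three-dimensional model to their related information:

```python
from dataclasses import dataclass, field

@dataclass
class DamageInfo:
    damage_type: str  # e.g., "cracking"
    extent_rank: str  # e.g., "a" (minor) to "e" (severe)
    location: str     # position on the member

@dataclass
class MemberRecord:
    member_name: str                                 # e.g., "main girder G1"
    image_paths: list[str] = field(default_factory=list)
    damage: list[DamageInfo] = field(default_factory=list)

# Mapping from member IDs on the three-dimensional model 103 to the
# target structure-related information 101 associated with each member.
model_index: dict[str, MemberRecord] = {
    "G1": MemberRecord(
        member_name="main girder G1",
        image_paths=["member_panorama.jpg"],
        damage=[DamageInfo("cracking", "c", "mid-span")],
    ),
}

# Designating a location or member on the model then amounts to a lookup.
selected = model_index["G1"]
print(selected.member_name, selected.damage[0].damage_type)
```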
The inspection report data 105 is a template (a document file in a designated format) for a two-dimensional inspection report. The template may also be in a format defined by the Japanese Ministry of Land, Infrastructure, Transport and Tourism or a local government.
The operating unit 18 illustrated in
The display device 30 is a liquid crystal display (LCD) panel or other display device, for example, and can be made to display three-dimensional model data, the target structure-related information 101, the inspection report data 105, and comments.
The selection processing unit 51 accepts a selection of the target structure-related information 101 (selecting step: step S1). As described above, the target structure-related information 101 includes at least one of a captured image or damage information.
The target structure-related information 101 is acquired from the storage unit 16 where the information is stored. If the target structure-related information 101 is not stored in the storage unit 16, the target structure-related information 101 may be acquired through the input/output interface 12 from another storage unit over a network.
Next, a preferred selection process in the selecting step (step S1) will be described.
In a first selection process, the selection processing unit 51 may accept the selection of the target structure-related information 101 through the selection of the three-dimensional model 103 of the structure associated with the target structure-related information 101.
First, the three-dimensional model will be described.
In the example in
The target structure-related information 101 is associated with locations and members on the three-dimensional model 103.
The method of creating the three-dimensional model 103 is not limited, and various methods exist; for example, the structure-from-motion (SfM) method may be used to create the three-dimensional model 103. SfM is a method of reconstructing a three-dimensional shape from multi-view images, in which feature points are calculated by an algorithm such as the scale-invariant feature transform (SIFT), for example, and used as guides for calculating the three-dimensional positions of a point cloud by the principle of triangulation. Specifically, straight lines are drawn from the cameras to a feature point, and the intersection of the two lines passing through the corresponding feature point is the reconstructed three-dimensional point. The three-dimensional positions of the point cloud are obtained by performing this work on each detected feature point. The three-dimensional model 103 may also be created using the captured images 107 (captured image group 107C) illustrated in
Note that although SfM does not calculate scale, correspondence with actual scale can be obtained by capturing images with a scaler of known dimensions placed on the photographic subject.
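As a hedged illustration of the triangulation step described above, the following sketch reconstructs one three-dimensional point from a pair of matched feature points with OpenCV; the projection matrices and point coordinates are hypothetical, and a real SfM pipeline would estimate them from the images:

```python
import numpy as np
import cv2

# Hypothetical 3x4 projection matrices for two cameras (as estimated
# by SfM); the second camera is translated along the x axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# One matched feature point (e.g., a SIFT correspondence) per image,
# given as 2xN arrays of normalized image coordinates.
pts1 = np.array([[0.10], [0.05]])
pts2 = np.array([[0.08], [0.05]])

# Intersect the two viewing rays (triangulation); the result is in
# homogeneous coordinates, so divide by the fourth component.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).ravel()
print(X)  # the reconstructed three-dimensional point
```

Performing this for every matched feature point yields the point cloud from which the three-dimensional model is built.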
Next, the first selection process will be described on the basis of
By having the user designate a specific position in the three-dimensional model 103A through the operating unit 18, the selection processing unit 51 can accept the selection of the captured image 107 or the damage information 109, that is, the target structure-related information 101. The selection processing unit 51 may also accept the selection of both the captured image 107 and the damage information 109 which are the target structure-related information 101. A three-dimensional model 103 with mapped damage is displayed, and from the three-dimensional model 103, the user can select damage to comment on.
Next, a second selection process will be described. In the second selection process, the selection processing unit 51 may accept an automatic selection, from a target structure, of target structure-related information 101 about which comments are prepared.
In one aspect of the second selection process, the user selects target members (in the case of a bridge) or spans (in the case of a tunnel) through the operating unit 18, and the selection processing unit 51 accepts an automatic selection of target structure-related information 101 for each member.
As illustrated in
Other than the method of designating a specific member on the three-dimensional model 103, the user may also select a member from a member list displayed on the display device. The selection processing unit 51 can accept an automatic selection of at least one of the captured image 107 or the damage information 109, that is, the target structure-related information 101, from among the members designated from the member list.
In other words, the selection processing unit 51 automatically selects and accepts the selection of a predetermined quantity of target structure-related information 101 corresponding to target damage for each damage type from among the target members and spans. However, if the entire structure is free of damage, target structure-related information 101 corresponding to damage is not selected.
In another aspect of the second selection process, the selection processing unit 51 automatically selects and accepts the selection of a predetermined quantity of target structure-related information 101 corresponding to target damage from the entirety of a target structure. However, if the entire structure is free of damage, target structure-related information 101 corresponding to damage is not selected.
Next, preferred criteria when automatically selecting the target structure-related information 101 will be described.
As a first selection criterion, the selection processing unit 51 may select the most extensively progressed damage from among the target structure-related information 101. For example, the selection processing unit 51 may select the most extensive damage on the basis of the extent-of-damage ranks (a, b, c, d, and e) in the damage information 109, and accept the selection of the corresponding target structure-related information 101.
As a second selection criterion, the selection processing unit 51 may select the damage with the fastest rate of progression from among the target structure-related information 101. For example, the selection processing unit 51 may select the damage with the fastest rate of progression from among the results of the change over time in the damage information 109, and accept the selection of the corresponding target structure-related information 101. The rate of progression may be obtained from the change in damage size per year: for example, the yearly change in length (such as the change in crack length per year), the yearly change in width (such as the change in crack width per year), the yearly change in area (the change in damage area per year), or the yearly change in depth (such as the change per year in the depth of wall thinning due to corrosion of steel members). A sketch of how such a rate might be computed and combined with the first selection criterion appears after the fifth selection criterion below.
As a third selection criterion, the selection processing unit 51 may select the damage of largest size from among the target structure-related information 101. For example, the selection processing unit 51 may select, from the results of the captured image 107 or the damage information 109, the longest or widest damage, such as cracking or crazing, or select the damage of broadest area, such as delamination, water leakage, free lime, or corrosion. The selection processing unit 51 accepts the selection of the target structure-related information 101 pertaining to the selected damage.
As a fourth selection criterion, the selection processing unit 51 may select, from among the target structure-related information 101, damage that corresponds to a user-designated cause of damage, such as fatigue, salt damage, neutralization, alkali-aggregate reaction, frost damage, poor construction, or excessive external forces in the case of a concrete member, or fatigue, salt damage, water leakage, material deterioration, coating deterioration, poor construction, or excessive external forces in the case of a steel member. For example, the selection processing unit 51 accepts, from the results of the damage information 109, the selection of the target structure-related information 101 pertaining to the selected damage.
As a fifth selection criterion, the selection processing unit 51 may select damage that exists in an area of focus. For example, the “Guidelines for Regular Inspection of Bridges” (March 2019) by the Japanese Ministry of Land, Infrastructure, Transport and Tourism gives examples of areas of focus that need close attention in regular inspections of concrete bridges. The given areas of focus are (1) end supports, (2) intermediate supports, (3) center span, (4) ¼ span, (5) concrete cold joints, (6) segment joints, (7) anchorage, and (8) notches. The selection processing unit 51 may accept an automatic selection of the target structure-related information 101 pertaining to damage in each area of focus.
For example, the “Guidelines for Regular Inspection of Road Tunnels” (March 2019) by the Japanese Ministry of Land, Infrastructure, Transport and Tourism gives examples of areas of focus where similar deformations occur due to road tunnel construction methods and the like. The given areas of focus are (1) joints and concrete cold joints of the lining, (2) near the top edge of the lining, and (3) near the middle of the lining span. The selection processing unit 51 may accept an automatic selection of the target structure-related information 101 pertaining to damage in each area of focus.
Note that if the damage cannot be limited to a prescribed number by one of the first to fifth selection criteria alone, a combination of the first to fifth selection criteria may be applied. For example, the selection processing unit 51 may accept an automatic selection of the target structure-related information 101 pertaining to damage that meets the first and second selection criteria. As another example, the selection processing unit 51 may accept an automatic selection of the target structure-related information 101 pertaining to damage that meets the first and third selection criteria. Although the above describes cases of combining two selection criteria, three or more selection criteria may also be combined.
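The implementation of these criteria is not prescribed, but the following minimal sketch illustrates how the first and second selection criteria might be combined: the rate of progression is computed as a yearly change between two inspections, and damage is then ranked by extent rank with that rate as a tiebreaker, keeping a predetermined quantity. All record fields and values are hypothetical.

```python
from datetime import date

# Hypothetical damage records from the damage information 109; crack
# widths in mm, measured at two inspection dates.
damages = [
    {"id": 1, "type": "cracking", "rank": "d",
     "width": (0.3, 0.5), "dates": (date(2019, 10, 1), date(2021, 10, 1))},
    {"id": 2, "type": "cracking", "rank": "c",
     "width": (0.2, 0.8), "dates": (date(2019, 10, 1), date(2021, 10, 1))},
    {"id": 3, "type": "corrosion", "rank": "d",
     "width": (1.0, 1.1), "dates": (date(2019, 10, 1), date(2021, 10, 1))},
]

RANK_ORDER = {"a": 0, "b": 1, "c": 2, "d": 3, "e": 4}  # e: most extensive

def yearly_rate(record):
    """Second criterion: change in damage size per year (mm/year)."""
    (v0, v1), (d0, d1) = record["width"], record["dates"]
    return (v1 - v0) / ((d1 - d0).days / 365.25)

def select(records, limit=2):
    """First criterion (extent rank), with the second as a tiebreaker."""
    return sorted(records,
                  key=lambda r: (RANK_ORDER[r["rank"]], yearly_rate(r)),
                  reverse=True)[:limit]

for r in select(damages):
    print(r["id"], r["type"], r["rank"], f"{yearly_rate(r):.2f} mm/year")
```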
As described above, the CPU 20 functions as the selection processing unit 51.
As illustrated in
Next, a preferred comment preparation process in the preparing step (step S2) will be described.
In a first comment preparation process, the preparation processing unit 53 may use artificial intelligence (AI) to prepare at least one comment on damage to the target structure on the basis of the selected target structure-related information 101.
For the AI, a trained model developed using a convolutional neural network (CNN) can be used, for example.
In
The trained models 53A, 53B, and 53C each contain an input layer, an intermediate layer, and an output layer, the layers being structured such that a plurality of “nodes” are connected by “edges”.
The selected target structure-related information 101 is inputted into the input layer of a CNN. The target structure-related information 101 is at least one of the captured image 107 or the damage information 109 (for example, at least one of damage type, extent, progressiveness of damage, or cause of damage).
The intermediate layer has multiple sets, each set containing a convolutional layer and a pooling layer, and is the portion where features are extracted from the captured image 107 or the damage information 109 inputted from the input layer. The convolutional layer applies filter processing (performs convolutional operations using a filter) to nearby nodes in the previous layer and acquires a “feature map”. The pooling layer reduces the feature map outputted from the convolutional layer to generate a new feature map. The convolutional layer is responsible for feature extraction from the captured image 107, such as edge extraction, or for feature extraction from the damage information 109, such as natural language processing.
The output layer of the CNN is the portion that outputs a feature map indicating the features extracted by the intermediate layer. The output layers of the trained models 53A, 53B, and 53C in this example output inference results as damage detection results 53D, 53E, and 53F. The damage detection results 53D, 53E, and 53F include at least one comment on damage derived from each of the trained models 53A, 53B, and 53C.
For example, the trained model 53A is a model that has been trained by machine learning to detect water leakage, surface free lime, and rust fluid damage, and outputs, as the damage detection result 53D, damage areas for each of water leakage, surface free lime, and rust fluid together with the damage type and comments for each damage area. The trained model 53B is a model that has been trained by machine learning to detect delamination and exposed rebar damage, and outputs, as the damage detection result 53E, damage areas for each of delamination and exposed rebar together with the damage type and comments for each damage area. The trained model 53C is a model that has been trained by machine learning to detect cracking and linear free lime damage, and outputs, as the damage detection result 53F, damage areas for each of cracking and linear free lime damage together with the damage type and comments for each damage area.
As described above, the outputted damage detection results 53D, 53E, and 53F are prepared as comments on damage to the target structure. Note that the trained models 53A, 53B, and 53C of the preparation processing unit 53 are not limited to the above embodiment. For example, one configuration may have an individual trained model for each damage type, in which each trained model outputs, as the damage detection results, damage areas and comments corresponding to that damage type; in this case, the number of trained models provided is equal to the number of damage types to be inspected. Another configuration may have a single trained model which handles all damage types and which outputs, as the damage detection results, damage areas together with the damage type and comments for each damage area. In either case, appropriate (that is, accurate and error-free) comments can be generated on the basis of the target structure-related information 101. Although the case described here outputs damage areas together with the damage type and comments for each damage area as the damage detection results, comments alone may also be outputted.
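The trained models 53A to 53C themselves are not disclosed, but a minimal PyTorch sketch of the structure described above (an input layer, convolutional and pooling intermediate layers, and an output layer whose classes index canned comments) might look as follows; the class labels, comment strings, and input size are assumptions:

```python
import torch
import torch.nn as nn

class DamageCommentNet(nn.Module):
    """Minimal CNN: convolution/pooling feature extraction followed by a
    classification head whose classes index prepared comments."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # feature map
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling layer
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, num_classes),                  # output layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# Hypothetical mapping from predicted damage class to a comment.
COMMENTS = {
    0: "Water leakage observed; a follow-up inspection will be performed.",
    1: "Delamination with exposed rebar; repairs will be carried out.",
    2: "Cracking with linear free lime; a detailed inspection is advised.",
}

model = DamageCommentNet()
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # a captured image patch
    print(COMMENTS[int(logits.argmax(dim=1))])
```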
A second comment preparation process will be described on the basis of
As illustrated in
The preparation processing unit 53 further includes a similar damage extraction unit 53G.
The target structure-related information 101 selected by the selecting step (step S1) is outputted to the similar damage extraction unit 53G. The similar damage extraction unit 53G extracts, on the basis of the target structure-related information 101, structure-related information 140 and comments 142 that are similar to the target structure-related information 101 from among the past structure-related information 140 and contemporaneously prepared comments 142 on damage to the structures which are stored in the database 60.
The similar damage extraction unit 53G may make a similarity determination on the basis of damage information regarding the type of damage, the location of damage, and the extent of damage (such as an average or maximum value of length, width, area, density, or depth).
The similar damage extraction unit 53G may also make a similarity determination on the basis of the location of damage and the change in extent over time in addition to the damage information. In this case, at least one of: images captured at multiple points in time of the same area of the structure, damage information at multiple points in time detected from the images captured at multiple points in time, or information indicating change in the damage information over time is preferably stored in the database 60.
The similar damage extraction process by the similar damage extraction unit 53G can detect change in the damage information over time on the basis of the target structure-related information 101 at multiple points in time, and use information indicating the change over time as one piece of information when extracting similar damage that is similar to damage to the target structure from the database 60.
Furthermore, the similar damage extraction unit 53G may make a similarity determination with consideration for information other than damage information, for example, at least one of structure information, environment information, history information, or inspection information about a structure.
The similar damage extraction unit 53G extracts, as a similar damage detection result 53H, damage areas of similar damage together with the damage type and comments for each damage area.
Note that, in addition to the target structure-related information 101, the other information below may also be outputted to the similar damage extraction unit 53G. On the basis of one or more of the other information below, the similar damage extraction unit 53G may extract, as the similar damage detection result 53H, structure-related information 140 and comments 142 that are similar to the target structure-related information 101.
The other information includes at least one of structure information, environment information, history information, or inspection information about a structure indicated below.
The other information may also include the diagnostic purpose of the target structure. The diagnostic purpose may be to determine the extent of damage, determine a countermeasure category, determine soundness, estimate the cause of damage, determine whether repair is necessary, select a repair method, or the like.
In
Note that in
The similar damage extraction unit 53G calculates, in the feature space illustrated in the drawing, the distance between a first feature vector obtained from the target structure-related information 101 and second feature vectors obtained from the structure-related information 140 in the database 60, and extracts the damage at a short distance as similar damage.
The distance may be the distance when the multiple parameters of the first and second feature vectors are not weighted (Euclidean distance) or the distance with weighted parameters (Mahalanobis distance). What weights are assigned to which parameters may be determined by statistical methods such as principal component analysis.
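As a rough illustration (not the apparatus's actual implementation) of the distance calculation described above, the sketch below computes an unweighted Euclidean distance and a diagonally weighted distance between a first feature vector and candidate second feature vectors; a full Mahalanobis distance would instead use the inverse covariance matrix of the parameters. The parameters and values are hypothetical.

```python
import numpy as np

# First feature vector: damage to the target structure.
# Hypothetical parameters: [crack width, crack density, member age].
target = np.array([0.5, 1.2, 30.0])

# Second feature vectors: past damage stored in the database 60.
past = np.array([
    [0.4, 1.0, 28.0],
    [2.0, 0.2, 5.0],
    [0.6, 1.3, 33.0],
])

# Euclidean distance: the parameters are not weighted.
d_euclid = np.linalg.norm(past - target, axis=1)

# Weighted distance: weights chosen, e.g., by principal component
# analysis; a true Mahalanobis distance uses the inverse covariance.
w = np.array([10.0, 5.0, 0.1])
d_weighted = np.sqrt(((past - target) ** 2 * w).sum(axis=1))

print("euclidean:", d_euclid.round(3))
print("weighted :", d_weighted.round(3))
# The past damage at the smallest distance is extracted as similar damage.
print("most similar (weighted):", int(d_weighted.argmin()))
```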
In addition to a determination like the above, additional search criteria may be designated as points or ranges in the feature space. For example, if a bridge with a completion date on or after Jan. 1, 1990, and a basic structure such as a girder bridge is designated, damage to structures within the designated range can be extracted as similar damage.
In addition to the above, damage information, structure information, environment information, history information, inspection information, and the like included in the other information can be set as axes of the feature space to extract similar damage.
Note that a method different from determining the distance in a feature space may also be used as the method of extracting similar damage. For example, similar damage may also be extracted using artificial intelligence (AI) for determining similarity from images, AI for determining similarity with a combination of multiple types of information from among images, the damage information, and the other information, or the like.
By preparing and referencing comments from similar damage in the past, the know-how of experienced engineers can be used as a reference to train young engineers.
As illustrated in
The user is able to use the comments prepared by the structure inspection assistance apparatus 10, and thus can perform the work of preparing comments more efficiently. The findings and other comments by the structure inspection assistance apparatus 10 can be equalized without being influenced by human factors. Note that the structure inspection assistance program achieves a selection function corresponding to the selecting step, a preparation function corresponding to the preparing step, and a display function corresponding to the displaying step with the CPU 20 or the like.
Next, a preferred embodiment of the selecting step (step S1), preparing step (step S2), and displaying step (step S3) will be described.
As illustrated in
As illustrated in
As illustrated in
To simplify the editing process, the preparation processing unit 53 may prepare a comment template according to the selected damage, and the display processing unit 55 may display the comment template on the display device 30. The user may re-select appropriate comments from the comment template.
In one example, the comments contain the following information: “Estimated to be (damage type) due to (cause of damage). (Damage type) is occurring in (location/range), and (comment related to progressiveness) (comment related to response).” In
On the structure inspection assistance apparatus 10, as illustrated in
For example, candidates in the pull-down menu for (cause of damage) may include “fatigue”, “salt damage”, “neutralization”, “alkali-aggregate reaction”, “frost damage”, “poor construction”, “excessive external forces”, and the like in the case of a concrete member, or “fatigue”, “salt damage”, “water leakage”, “material deterioration”, “coating deterioration”, “poor construction”, “excessive external forces”, and the like in the case of a steel member.
Candidates in the pull-down menu for (damage type) may include “cracking”, “slab cracking”, “water leakage”, “free lime”, “delamination”, “exposed rebar”, “crazing”, and “corrosion”, for example.
Candidates in the pull-down menu for (location/range) may include “main girder”, “crossbar”, “pier”, “abutment”, “bearing”, and the like, and may also include “throughout”, “at the edges”, and the like.
Candidates in the pull-down menu for (comment related to progressiveness) may include “progressing quickly.”, “progressing slowly.”, “not a concern regarding progression.”, and the like.
Candidates in the pull-down menu for (comment related to response) may include “A detailed inspection will be performed.”, “A follow-up inspection will be performed.”, “Repairs will be carried out.”, “Countermeasures will be implemented.”, and the like.
The candidates in the pull-down menus are selected appropriately depending on the damage. The user may also edit the comments of the candidates in the pull-down menus.
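A minimal sketch of how the template and pull-down candidates described above might be combined into a comment string is shown below; the candidate lists are abridged from the examples above, and the code is illustrative rather than the apparatus's actual implementation:

```python
TEMPLATE = ("Estimated to be {damage_type} due to {cause}. "
            "{damage_type} is occurring in {location}, and {progress} "
            "{response}")

# Abridged candidate lists corresponding to the pull-down menus.
CAUSES = ["fatigue", "salt damage", "neutralization", "frost damage"]
DAMAGE_TYPES = ["cracking", "water leakage", "free lime", "corrosion"]
LOCATIONS = ["main girder", "crossbar", "pier", "abutment"]
PROGRESS = ["progressing quickly.", "progressing slowly.",
            "not a concern regarding progression."]
RESPONSES = ["A detailed inspection will be performed.",
             "A follow-up inspection will be performed.",
             "Repairs will be carried out."]

# The user's pull-down selections fill the template slots.
comment = TEMPLATE.format(damage_type=DAMAGE_TYPES[0], cause=CAUSES[1],
                          location=LOCATIONS[0], progress=PROGRESS[0],
                          response=RESPONSES[2])
print(comment)
```

Editing a comment then amounts to re-selecting an entry in one of the candidate lists.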
Comments on similar damage extracted in the similar damage extraction process may also be displayed on the display device 30 as a findings template. Comments on similar damage may also be edited using pull-down menus in a manner similar to the editing process illustrated in
The CPU 20 may execute a related information extraction process of extracting related information that is related to damage to the target structure, and the display processing unit 55 of the CPU 20 may display the related information on the display device 30.
Inspection data regarding the selected damage, past inspection data regarding the selected damage, data about other closely related damage (such as damage located close to the selected damage or damage on the other side), and inspection data regarding similar damage (which may be of another structure) may be displayed on the display device 30 as the related information that is related to damage to the target structure. By displaying related information that is related to damage to the target structure, the user may edit the comments.
In the above embodiment, the hardware structure of a processing unit that executes various processing is any of various types of processors like the following. The various types of processors include: a central processing unit (CPU), which is a general-purpose processor that executes software (a program or programs) to function as any of various types of processing units; a programmable logic device (PLD) whose circuit configuration is modifiable after fabrication, such as a field-programmable gate array (FPGA); and a dedicated electric circuit, which is a processor having a circuit configuration designed for the specific purpose of executing a specific process, such as an application-specific integrated circuit (ASIC).
A single processing unit may be configured as any one of these various types of processors, or may be configured as two or more processors of the same or different types (such as multiple FPGAs, or a combination of a CPU and an FPGA, for example). Moreover, multiple processing units can be configured as a single processor. A first example of configuring a plurality of processing units as a single processor is a mode in which a single processor is configured as a combination of software and one or more CPUs, as typified by a computer such as a client or a server, such that the processor functions as the plurality of processing units. A second example of the above is a mode utilizing a processor in which the functions of an entire system, including the plurality of processing units, are achieved on a single integrated circuit (IC) chip, as typified by a system on a chip (SoC). In this way, various types of processing units are configured as a hardware structure by using one or more of the various types of processors indicated above.
More specifically, the hardware structure of these various types of processors is circuitry combining circuit elements such as semiconductor devices.
Each configuration and function described above is achievable, as appropriate, by any hardware, software, or a combination of hardware and software. For example, the present invention is also applicable to a program causing a computer to execute the processing steps (processing procedure) described above, a computer-readable recording medium (non-transitory recording medium) storing such a program, or a computer in which such a program is installable.
Although an example of the present invention is described above, the present invention is not limited to the foregoing embodiments, and obviously a variety of modifications are possible within a scope that does not depart from the gist of the present invention.
The present application is a Continuation of PCT International Application No. PCT/JP2021/037302 filed on Oct. 8, 2021 claiming priority under 35 U.S.C. § 119(a) to Japanese Patent Application No. 2020-192303 filed on Nov. 19, 2020. Each of the above applications is hereby expressly incorporated by reference, in its entirety, into the present application.
Parent application: PCT/JP2021/037302, filed Oct. 2021 (US); child application: Ser. No. 18308763 (US).