The present invention relates to a technique of synthesizing captured images.
Conventionally, when inspecting concrete wall surfaces of a bridge, a dam, a tunnel, and the like, an inspection engineer approaches the concrete wall surface and visually checks variations such as cracks. However, such inspection work called close visual inspection has a problem that the operation cost is high. PTL 1 discloses a technique in which a concrete wall in a tunnel is captured using a camera, and cracks are detected based on the obtained captured image.
PTL 1: Japanese Patent Laid-Open No. 2002-310920
However, in the related art described above, when an obstacle exists between an inspection object such as a concrete wall surface and a camera that performs capturing, there is a problem that an image of a portion shielded by the obstacle cannot be obtained. Therefore, there is a possibility that a variation such as a crack on the concrete wall surface of the portion may be overlooked.
The present invention has been made in consideration of such problems, and provides a technique of generating an image which enables more suitable detection of variations in a situation in which an obstacle exists between an inspection object and a camera.
In order to solve the above-described problems, an image processing apparatus according to the present invention includes an arrangement described below. That is, the image processing apparatus comprises:
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
The accompanying drawings are included in and constitute a part of the specification, illustrate embodiments of the present invention and, together with the description, serve to explain the principles of the present invention.
Hereinafter, an example of embodiments of the present invention will be explained in detail with reference to the accompanying drawings. Note that the following embodiments are merely examples, and not intended to limit the scope of the present invention.
As the first embodiment of an image processing apparatus according to the present invention, an image processing apparatus that synthesizes two images obtained by capturing a floor slab of a bridge from two different viewpoint positions will be described below as an example.
A camera captures a floor slab region surrounded by a lattice formed by the steel bridge girder. In particular, for descriptive simplicity, it is assumed below that capturing is performed using the camera from a viewpoint position directly below the floor slab region surrounded by each lattice. In addition, it is assumed that an image of an adjacent floor slab region is captured together with the center floor slab region in each captured image. Note that it is assumed here that image capturing is performed from different viewpoint positions by moving the camera.
Ranges 130a and 130b in
As is understood from
That is, the floor slab region shielded by the diagonal member differs between the image 200a and the image 200b due to the parallax, so that the crack 210 appears differently in these images. Therefore, in the first embodiment, an image of the floor slab region 100a with a small shielded region is generated by synthesizing the image 200a and the image 200b. This makes it possible to obtain an image which enables more suitable detection of variations such as the crack 210.
In the following description, a mode will be described in which each functional unit of the image processing apparatus 300 shown in
A CPU 320 comprehensively controls the image processing apparatus 300. The CPU 320 implements each functional unit shown in
The HDD 326 stores, for example, an application program used in the image processing apparatus 300 or various types of control programs. The HDD 326 also stores images captured by the camera as described above (for example, the captured image indicated by the ranges 130a and 130b) and a design drawing to be described later. Further, the HDD 326 also stores various types of information related to the application program or various types of control programs. A RAM 321 is also used to temporarily store the various types of information. Each of a keyboard 325 and a mouse 324 is a functional unit that accepts an instruction input from a user. A display 323 is a functional unit that visually provides various types of information to the user.
An image data storage module 301 is formed by, for example, the HDD 326, an SSD (Solid State Drive), or a combination thereof. As described above, it stores images captured by the camera from a plurality of viewpoint positions. In addition, it stores a design drawing, which is an orthographic projection view as viewed from below the bridge, created based on the design drawing or the like of the target bridge.
Note that in the following description, captured images indicated by the ranges 130a and 130b are assumed as captured images, but each captured image may be a stitch-synthesis image of a plurality of captured images (one-shot captured images). That is, in order to perform precise crack detection, it is necessary to capture the floor slab with a high resolution (for example, one pixel of the captured image corresponds to 0.5 mm on the floor slab). At this time, when capturing is performed using a camera having, for example, 24 million pixels (lateral 6000 pixels×longitudinal 4000 pixels), one shot image covers a range of only lateral 3 m×longitudinal 2 m, so the above-described captured-image condition (=including the image of an adjacent floor slab region) cannot be satisfied. Therefore, it is preferable to synthesize a plurality of one-shot images by stitch synthesis or the like to generate an image that satisfies the above-described captured-image condition.
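The coverage figure above follows from simple arithmetic. As a minimal illustrative check (the resolution and sensor dimensions are the values assumed in the text, not fixed requirements):

```python
# Back-of-the-envelope check of the coverage figure cited above.
MM_PER_PIXEL = 0.5                      # assumed ground resolution on the floor slab
SENSOR_W_PX, SENSOR_H_PX = 6000, 4000   # 24-megapixel camera

coverage_w_m = SENSOR_W_PX * MM_PER_PIXEL / 1000.0  # convert mm to m
coverage_h_m = SENSOR_H_PX * MM_PER_PIXEL / 1000.0

print(coverage_w_m, coverage_h_m)  # -> 3.0 2.0
```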
An image data management module 302 manages image data stored in the HDD 326 described above. For example, it manages the viewpoint position at which each of the plurality of captured images is captured. Also, it manages association information between images derived by an alignment module 303 to be described later.
The alignment module 303 is a functional unit that determines association of position coordinates between two (or more) images. For example, two images are displayed on the display 323 as a graphical user interface (GUI), and a designation of coordinates to be associated with each other in the two images is accepted from the user via the keyboard 325 and/or the mouse 324. Then, a coordinate conversion parameter for associating the coordinates with each other between the images is derived. Here, it is assumed that a known homography matrix is derived as the coordinate conversion parameter, but another coordinate conversion parameter may be calculated. Note that it is assumed here that the floor slab region can be approximated by a two-dimensional plane. It is also assumed that the portion of the steel girder surrounding the floor slab region, the portion in contact with the floor slab, exists on substantially the same plane as the floor slab region.
In general, it is necessary to designate four pairs of coordinates to derive a homography matrix. However, more pairs of coordinates may be designated to derive a homography matrix. In that case, for example, processing of calculating the sum of errors each of which is obtained as a result of coordinate conversion of coordinate values of each pair of coordinates and optimizing the parameter so as to minimize the sum is performed. In practice, the accuracy tends to improve as the number of coordinate pairs increases.
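The error sum that such an optimization would minimize can be sketched as follows. This is a minimal illustration, not the module's actual implementation: `apply_homography` maps a point through a 3×3 homography expressed as nested lists, and `total_error` accumulates the reprojection error over the designated coordinate pairs.

```python
import math

def apply_homography(H, pt):
    """Map point (x, y) through a 3x3 homography given as nested lists."""
    x, y = pt
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

def total_error(H, pairs):
    """Sum of Euclidean distances between converted and designated coordinates.

    pairs: list of ((x, y) in one image, (x, y) in the other image);
    an optimizer would adjust H so as to minimize this sum."""
    total = 0.0
    for src, dst in pairs:
        mx, my = apply_homography(H, src)
        total += math.hypot(mx - dst[0], my - dst[1])
    return total

IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(apply_homography(IDENTITY, (3.0, 4.0)))             # -> (3.0, 4.0)
print(total_error(IDENTITY, [((0.0, 0.0), (3.0, 4.0))]))  # -> 5.0
```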
A replacement region designation module 304 is a functional unit that accepts, in a base image (referred to as a main image hereinafter) of two images to be synthesized, a designation of a region in which the floor slab is shielded and thus does not appear. That is, a region to be replaced with the other image (referred to as a sub image hereinafter) of the two images to be synthesized is designated. For example, the main image is displayed as a GUI on the display 323, and a designation of the image region of the diagonal member included in the main image is accepted as a region of an arbitrary shape such as a polygon, a circle, or an ellipse from the user via the keyboard 325 and/or the mouse 324. Note that the replacement region may be designated not only as one region but also as a combination of two or more regions. In this case, a logical sum (OR) or a logical product (AND) for the combination is further designated.
Note that in processing of detecting a crack or the like, it is desirable that the image is captured from a position directly facing the floor slab. Therefore, in the following description, the image 200a (range 130a) obtained by capturing the floor slab region 100a from the front is used as the main image, and the image 200b (range 130b) obtained by capturing the floor slab region 100a diagonally from the left is used as the sub image. However, as image processing, any image may be used as the main image as long as the floor slab region to be processed appears in the image.
On the other hand, a GUI 802 is a GUI that superimposes and displays the main image and the sub image. Here, a state is shown in which the sub image is superimposed on the main image with a transparency of “80%”. Note that the transparency of the sub image can be designated. When there are a plurality of sub images, the sub image to be displayed may be switchable. Further, the plurality of sub images having undergone translucent processing may be superimposed and displayed on the main image.
By configuring such GUIs, it is possible to intuitively know the region in the sub image corresponding to the replacement region designated in the main image. Note that each of the GUI 801 and the GUI 802 is provided with buttons or the like for receiving an operation from the user. For example, a button for selecting the main image, a pull-down list for selecting the sub image, a button for starting designation of a replacement region on the main image, and a button for clipping a region on the sub image corresponding to the replacement region are provided.
An image clipping module 305 is a functional unit that clips the image of the region in the sub image corresponding to the replacement region designated by the replacement region designation module 304. For example, the coordinates of the replacement region designated by the replacement region designation module 304 are converted into coordinates in the sub image using the coordinate conversion parameter derived by the alignment module 303, and the image of the region corresponding to the coordinates obtained by the conversion is clipped from the sub image.
The image synthesis module 306 is a functional unit that overwrites the replacement region designated by the replacement region designation module 304 with the clipped image clipped by the image clipping module 305 to generate a synthesized image. For example, it generates an image obtained by superimposing and displaying the clipped image clipped from the sub image on the main image.
Note that misalignment can occur between the main image and the clipped image due to various factors. Therefore, the clipped image may be displayed as an editable object image in a GUI, and editing (deformation such as scaling) of the clipped image may be accepted from the user via the keyboard 325 and/or the mouse 324.
The variation detection module 307 is a functional unit that performs variation detection processing on the synthesized image generated by the image synthesis module 306. Here, variation detection processing is processing of detecting a variation (a crack or the like) in an image and recording the detected variation. For example, it displays the synthesized image on the GUI, accepts the position of the crack from the user by a trace operation using a mouse or the like, and records position data representing the trace location or the traced locus as crack data (variation data). Alternatively, a crack is automatically detected by image analysis processing using an algorithm such as machine learning, and the detected location of the crack is recorded as crack data. Note that, in order to facilitate processing, the crack data is preferably recorded as vector data indicating the locus of the crack.
In this embodiment, the crack data can be used as independent graphic data. For example, it is possible to provide a user interface capable of switching on/off the superimposed display of crack data on the synthesized image. In this manner, by checking the synthesized image while switching on/off the superimposed display, the user can more easily check correctness of the variation detection processing and perform supplementary work. In addition, the crack detection data is stored and, after a predetermined period has passed, superimposed and displayed on a synthesized image obtained by capturing the same floor slab of the same bridge. This makes it easy for the user to visually check whether the crack has extended. At this time, it may be configured that superimposition of crack detection data can be switched on/off. In particular, according to this embodiment, a synthesized image in which a region hidden by a shielding object has been complemented can be used as a target of variation detection processing, so that the variation detection processing can be performed more easily. That is, a complicated work is unnecessary, such as performing variation detection processing on each of a plurality of images captured from different capturing positions and distinguishing overlapping portions and non-overlapping portions from the results obtained from the plurality of images.
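Recording a crack as vector data makes measurements such as length straightforward. A minimal sketch under the assumption that a crack locus is stored as a polyline of (x, y) points (the coordinate values here are hypothetical):

```python
import math

def polyline_length(points):
    """Length of a crack locus recorded as vector data: a list of (x, y) points."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# A hypothetical traced crack locus, in floor-slab coordinates (mm).
crack = [(0, 0), (30, 40), (60, 40)]
print(polyline_length(crack))  # -> 80.0
```

Storing the locus this way (rather than as raster pixels) is what makes the on/off superimposed display and the later extension comparison described above easy to implement.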
In step S501, the alignment module 303 acquires a main image and one or more sub images to be synthesized from the image data storage module 301. For example, a captured image obtained by capturing the floor slab region 100a to be processed from the front (immediately below) is read out from the image data storage module 301 as a main image. Then, the image data management module 302 is inquired of a captured image obtained by capturing the floor slab region laterally adjacent to the floor slab region 100a from the front, and the corresponding captured image is read out from the image data storage module 301 as a sub image. Here, a captured image of the left adjacent floor slab region 100b captured from the front is read out as a sub image.
In step S502, the alignment module 303 reads out the design drawing from the image data storage module 301. As has been described above, the design drawing is an orthographic projection view as viewed from below the bridge, and is, for example, an image as shown in
In step S503, the alignment module 303 accepts an association relationship of the coordinates between the main image and the design drawing from the user, and derives a homography matrix as a coordinate conversion parameter between the main image and the design drawing. In addition, an association relationship between the coordinates of the sub image and the design drawing is accepted from the user, and a homography matrix as a coordinate conversion parameter between the sub image and the design drawing is derived. Thus, it becomes possible to convert the main image and the sub image (the central projection images captured by the camera) into an orthographic projection image (orthographic conversion). In addition, as a result, the coordinate relationship is associated between the main image and the sub image.
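Because both images are related to the design drawing by homographies, a main-image coordinate can be carried into the sub image by composing the two matrices. A minimal pure-Python sketch under that assumption (`H_main` and `H_sub` are hypothetical matrices mapping each image into design-drawing coordinates; the main-to-sub conversion is then inv(H_sub) · H_main):

```python
def mat_mul(A, B):
    """Product of two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_inv3(A):
    """Inverse of a 3x3 matrix via the adjugate formula."""
    (a, b, c), (d, e, f), (g, h, i) = A
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[adj[r][k] / det for k in range(3)] for r in range(3)]

# Hypothetical homographies (simple translations, for illustration only):
H_main = [[1, 0, 1], [0, 1, 0], [0, 0, 1]]  # main image -> design drawing
H_sub = [[1, 0, 0], [0, 1, 2], [0, 0, 1]]   # sub image -> design drawing

# main image -> sub image: into the design drawing, then back out of H_sub.
H_main_to_sub = mat_mul(mat_inv3(H_sub), H_main)
print(H_main_to_sub)  # -> [[1.0, 0.0, 1.0], [0.0, 1.0, -2.0], [0.0, 0.0, 1.0]]
```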
Here, the user associates four vertices, serving as feature points, of the four corners of the floor slab region 100a in the image 200a with four vertices, serving as feature points, of the four corners of the floor slab region 100a on the design drawing. In addition, four vertices of the floor slab region 100a in the image 200b are associated with the four vertices of the four corners of the floor slab region 100a on the design drawing. However, it is expected that the two left vertices in the image 200b are shielded by the steel bridge girder. Therefore, it is preferable to designate two vertices with this in mind. For example, instead of the four corners of the floor slab region 100a, corners of the bridge girder that is assumed to exist on substantially the same plane as the floor slab region may be designated. Also, for example, the positions of the hidden two vertices behind the bridge girder on the image may be specified by drawing an auxiliary line based on the remaining two viewable vertices, the corners of the bridge girder, and the like, and the specified points may be designated. A feature point other than the vertex may be designated, if any. Note that the feature point desirably exists on substantially the same plane as the floor slab.
In step S504, the replacement region designation module 304 accepts a designation of a replacement region in the main image from the user. As has been described above, a replacement region is a region in which the floor slab is shielded so it does not appear in the main image. Here, the region of the diagonal member in the image 200a is designated. Note that a region larger than the region of the diagonal member may be designated, assuming that image editing may be required after synthesis is performed. Instead of directly designating the replacement region, a designation of an image feature (texture or the like) of the replacement region may be accepted. In this case, the replacement region designation module 304 selects a region in the main image similar to the designated image feature and designates the region as the replacement region.
In step S505, the image clipping module 305 clips the image of the region in the sub image corresponding to the replacement region (given region) designated in step S504. Here, the coordinates of the replacement region in the main image are converted into coordinates in the sub image using the coordinate conversion parameters derived by the alignment module 303, and the image of the clipped region corresponding to the coordinates obtained by the conversion is clipped from the sub image. Note that a region larger than the clipped region derived from the replacement region may be clipped, assuming that image editing may be required after synthesis is performed. However, if it is determined based on the image feature or the like that a portion other than the floor slab appears in the extended region, this region may be excluded from the clipping target.
In step S506, the image synthesis module 306 overwrites (replaces) the replacement region in the main image designated in step S504 with the clipped image clipped in step S505 to generate a synthesized image. Note that when the clipped image is synthesized with the main image, the clipped image is converted into a coordinate position in the main image based on the homography matrix derived in step S503, and then synthesis is performed. The generated synthesized image is held in the image data storage module 301.
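The overwrite in step S506 amounts to a mask-guided pixel replacement. A minimal sketch with small 2D lists standing in for images (the actual module operates on full-resolution captured images, with the clipped image already converted into main-image coordinates):

```python
def synthesize(main_img, clipped_img, mask):
    """Overwrite the replacement region of the main image with the clipped image.

    main_img, clipped_img: 2D lists of pixel values of equal size (the clipped
    image is assumed to be already converted into main-image coordinates);
    mask: 2D list of booleans, True inside the replacement region."""
    h, w = len(main_img), len(main_img[0])
    return [[clipped_img[y][x] if mask[y][x] else main_img[y][x]
             for x in range(w)] for y in range(h)]

main_img = [[10, 10], [10, 10]]
clipped = [[99, 99], [99, 99]]
mask = [[False, True], [False, False]]  # hypothetical one-pixel replacement region
print(synthesize(main_img, clipped, mask))  # -> [[10, 99], [10, 10]]
```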
As has been described above, according to the first embodiment, images captured from two different viewpoint positions are synthesized. With this processing, it becomes possible to generate an image suitable for variation detection processing even in a situation in which an obstacle (diagonal member) exists between the inspection object (floor slab) and the camera.
Note that in the above description, a mode in which one sub image is used for one main image has been described, but two or more sub images may be used. For example, in addition to the captured image (image 200b) obtained by capturing the left adjacent floor slab region from the front, a captured image obtained by capturing the right adjacent floor slab region from the front may also be used as the sub image.
In modification 1, a mode in which a plurality of images captured from a plurality of viewpoint positions are directly associated with each other will be described. Note that the apparatus arrangement is substantially the same as that in the first embodiment, so that a description thereof will be omitted.
In step S701, the alignment module 303 reads out a main image and one or more sub images to be synthesized from the image data storage module 301. In step S702, the alignment module 303 accepts an association relationship of the coordinates between the main image and the sub image from the user, and derives a homography matrix as a coordinate conversion parameter between the images.
Here, the user associates four vertices of the four corners of the floor slab region 100a in the image 200a with four vertices of the four corners of the floor slab region 100a in the image 200b. However, it is expected that the two left vertices in the image 200b are shielded by the steel bridge girder. Therefore, it is preferable to designate two vertices with this in mind. A feature point other than the vertex may be designated, if any. Note that the feature point desirably exists on substantially the same plane as the floor slab.
Steps S703 to S705 are similar to steps S504 to S506 in the first embodiment, so that a description thereof will be omitted.
As has been described above, according to modification 1, it becomes possible to synthesize captured images captured from two different viewpoint positions without using the design drawing. With this processing, it becomes possible to generate an image suitable for variation detection processing even in a situation in which an obstacle (diagonal member) exists between the inspection object (floor slab) and the camera.
In the first embodiment, when a positional shift occurs between the main image and the image clipped from the sub image, the user can perform editing such as scaling or moving using the GUI as shown in
In step S1401, the variation detection module 307 performs crack detection processing on a main image and a clipped image.
In step S1405, the clipped image correction module 1301 accepts the association relationship of the coordinates between the clipped image and the main image from the user, and creates a pair from the two input coordinates. This is processing of replenishing pairs so that geometric conversion to be described later can be executed based on a total of four or more pairs. For example, a pair of coordinates determined by the alignment module 303 and associated with each other to align the main image and the sub image (step S503 or S702) may be used. Alternatively, a pair of coordinates associated by the user newly selecting an arbitrary position in each of the main image and the clipped image may be used. Note that the coordinates designated on the clipped image are desirably coordinates on a region where the user wants to suppress the influence of conversion in the geometric conversion of the clipped image performed in step S1406 to be described later. Also, even when four or more pairs have been generated in step S1403, a user input of a pair may be accepted for the purpose of designating a region where the user wants to suppress the influence of conversion. Note that it is determined in step S1404 whether the number of pairs is “four or more”, assuming geometric conversion using a homography matrix. However, when another geometric conversion is used, the number may be any number necessary for that geometric conversion, and need not be “four or more”.
In step S1406, the clipped image correction module 1301 derives a homography matrix using the pairs of the coordinates on the main image and the coordinates on the clipped image. Then, geometric conversion based on the derived homography matrix is applied to the clipped image.
Note that in the boundary portion between a main image and a clipped image, not only a positional shift of a variation or texture but also a shift of color (color shift) may occur. Two methods of correcting a color shift in the boundary portion will be described below. One is a method using histogram matching that matches the distribution of the color histogram of the clipped image with the distribution of the color histogram of the main image. The other is a method of blending the clipped image with the main image by setting the transparency continuously changed from the center to the outside in the vicinity of the boundary portion of the clipped image.
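The second method, blending with a transparency that changes continuously near the boundary, can be sketched per pixel as follows. This is an illustrative simplification; the ramp width and grayscale pixel values are assumed parameters, not values specified in the text:

```python
def blend_boundary(main_px, clipped_px, dist_to_edge, ramp=10.0):
    """Blend one clipped-image pixel over the main image near the region boundary.

    dist_to_edge: distance in pixels from the boundary of the clipped region,
    measured inward; alpha rises from 0 at the boundary to 1 at `ramp` pixels
    inside, so the color transition is continuous rather than abrupt."""
    alpha = max(0.0, min(1.0, dist_to_edge / ramp))
    return alpha * clipped_px + (1.0 - alpha) * main_px

print(blend_boundary(100, 200, 0))   # -> 100.0 (pure main image at the boundary)
print(blend_boundary(100, 200, 5))   # -> 150.0 (halfway through the ramp)
print(blend_boundary(100, 200, 10))  # -> 200.0 (pure clipped image inside)
```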
If a positional shift of a variation or texture or a color shift occurs in the boundary portion between the main image and the clipped image, false detection such as miscounting of the number of cracks or regarding the boundary line as a crack may occur in variation detection processing. According to modification 2, occurrence of such false detection can be reduced. Note that in modification 2, a method of correcting a positional shift and a method of correcting a color shift have been described, but image processing is not limited thereto. Similarly, based on any or all of the hue, the saturation, and the brightness in one of a main image and a sub image, any of the hue, the saturation, and the brightness of a partial region in the other image can be adjusted and reflected on a synthesized image. This can improve the accuracy of variation detection processing that is executed subsequently.
According to the present invention, a technique of generating an image which enables more suitable detection of variations can be provided. Other features and advantages of the present invention will become apparent from the description with reference to the accompanying drawings.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
Number | Date | Country | Kind |
---|---|---|---|
2017-248005 | Dec 2017 | JP | national |
2018-192136 | Oct 2018 | JP | national |
This application is a continuation of U.S. patent application Ser. No. 16/897,601, filed Jun. 10, 2020, which is a continuation of International Patent Application No. PCT/JP2018/041090, filed Nov. 6, 2018, which claims the benefit of Japanese Patent Application No. 2017-248005, filed Dec. 25, 2017, and Japanese Patent Application No. 2018-192136, filed Oct. 10, 2018, each of which is hereby incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
6424752 | Katayama | Jul 2002 | B1 |
6621921 | Matsugu | Sep 2003 | B1 |
6970166 | Kuroki et al. | Nov 2005 | B2 |
7092109 | Satoh et al. | Aug 2006 | B2 |
7110021 | Nobori | Sep 2006 | B2 |
7411588 | Kuroki et al. | Aug 2008 | B2 |
7446768 | Satoh et al. | Nov 2008 | B2 |
7928977 | Tanimura | Apr 2011 | B2 |
7990385 | Kake | Aug 2011 | B2 |
8576242 | Kitago | Nov 2013 | B2 |
8660337 | Takahashi | Feb 2014 | B2 |
8830231 | Ito | Sep 2014 | B2 |
9098907 | Takahashi | Aug 2015 | B2 |
9182487 | Troy | Nov 2015 | B2 |
9215375 | Shintani | Dec 2015 | B2 |
9270976 | Houvener | Feb 2016 | B2 |
9374522 | Anabuki | Jun 2016 | B2 |
9804144 | Schlaudraff | Oct 2017 | B2 |
9811910 | Baldwin | Nov 2017 | B1 |
9934575 | Takahashi | Apr 2018 | B2 |
10078899 | Oh | Sep 2018 | B2 |
10375279 | Ilic | Aug 2019 | B2 |
10776910 | Karube | Sep 2020 | B2 |
11170477 | Yonaha | Nov 2021 | B2 |
11416978 | Iwabuchi | Aug 2022 | B2 |
20060140473 | Brooksby | Jun 2006 | A1 |
20070081231 | Shirota | Apr 2007 | A1 |
20070293755 | Shirahata | Dec 2007 | A1 |
20070297665 | Segev | Dec 2007 | A1 |
20080247667 | Jin | Oct 2008 | A1 |
20090015675 | Yang | Jan 2009 | A1 |
20090092334 | Shulman | Apr 2009 | A1 |
20100020201 | Chao | Jan 2010 | A1 |
20100053214 | Goto | Mar 2010 | A1 |
20100171828 | Ishii | Jul 2010 | A1 |
20100259541 | Kitago | Oct 2010 | A1 |
20110063297 | Toyama | Mar 2011 | A1 |
20110129144 | Takahashi | Jun 2011 | A1 |
20120262569 | Cudak et al. | Oct 2012 | A1 |
20130278727 | Tamir | Oct 2013 | A1 |
20140071132 | Noshi | Mar 2014 | A1 |
20140104376 | Chen | Apr 2014 | A1 |
20140126807 | Takahashi | May 2014 | A1 |
20140294263 | Hermosillo Valadez | Oct 2014 | A1 |
20150125030 | Suzuki | May 2015 | A1 |
20150154803 | Meier | Jun 2015 | A1 |
20150294475 | Takahashi | Oct 2015 | A1 |
20160117820 | Oh | Apr 2016 | A1 |
20160188992 | Hiraga | Jun 2016 | A1 |
20170018113 | Lattanzi | Jan 2017 | A1 |
20170094163 | Ohba | Mar 2017 | A1 |
20170193693 | Robert | Jul 2017 | A1 |
20180095450 | Lappas | Apr 2018 | A1 |
20180235577 | Buerger | Aug 2018 | A1 |
20180247135 | Oami | Aug 2018 | A1 |
20180300868 | Kikuchi | Oct 2018 | A1 |
20180300874 | Karube | Oct 2018 | A1 |
20190026955 | Ogata | Jan 2019 | A1 |
20190075250 | Asai | Mar 2019 | A1 |
20190139244 | Pras | May 2019 | A1 |
20190180485 | Kim | Jun 2019 | A1 |
20190273902 | Varekamp | Sep 2019 | A1 |
20190340737 | Kawaguchi | Nov 2019 | A1 |
20190355148 | Horita | Nov 2019 | A1 |
20200302595 | Iwabuchi | Sep 2020 | A1 |
20200400453 | Lee | Dec 2020 | A1 |
20230056006 | Gum | Feb 2023 | A1 |
Number | Date | Country |
---|---|---|
101859444 | Oct 2010 | CN |
102082954 | Jun 2011 | CN |
2911041 | Aug 2015 | EP |
H06-124339 | May 1994 | JP |
H11-189970 | Jul 1999 | JP |
2001-349887 | Dec 2001 | JP |
2002-310920 | Oct 2002 | JP |
2003-051021 | Feb 2003 | JP |
2011-023814 | Feb 2011 | JP |
2011-032641 | Feb 2011 | JP |
5230013 | Jul 2013 | JP |
2016-079615 | May 2016 | JP |
2019-114238 | Jul 2019 | JP |
7058585 | Apr 2022 | JP |
2017075247 | May 2017 | WO |
2017126367 | Jul 2017 | WO |
Entry |
---|
Feb. 25, 2023 Chinese Official Action in Chinese Patent Appln. No. 201880084133.8. |
Mar. 24, 2023 Japanese Official Action in Japanese Patent Appln. No. 2022-062547. |
Jan. 22, 2019 International Search Report in International Patent Application No. PCT/JP2018/041090. |
Sep. 27, 2021 European Search Report in European Patent Application No. 18893774.2. |
Thomas Ortner, et al., “Visual analytics and rendering for tunnel crack analysis”, The Visual Computer, vol. 32, No. 6, XP035982460, ISSN: 0178-2789, DOI: 10.1007/s00371-016-1257-5, May 11, 2016, pp. 859-869. |
Number | Date | Country | |
---|---|---|---|
20220327687 A1 | Oct 2022 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16897601 | Jun 2020 | US |
Child | 17849978 | US | |
Parent | PCT/JP2018/041090 | Nov 2018 | US |
Child | 16897601 | US |