MACHINE VISION-BASED TECHNIQUES FOR NON-CONTACT STRUCTURAL HEALTH MONITORING

Information

  • Patent Application
  • Publication Number
    20240249405
  • Date Filed
    April 02, 2024
  • Date Published
    July 25, 2024
Abstract
In an example embodiment, a structural health monitoring software application provides non-contact structural health monitoring using a video of a structure captured by a video camera. The application selects an area of interest and divides the area of interest into a grid of cells. One or more machine vision algorithms are selected from a set of multiple machine vision algorithms provided by the application, wherein the set of multiple machine vision algorithms includes at least one phase-based algorithm and at least one template matching algorithm. The application applies the one or more machine vision algorithms to the video of the structure to determine a displacement of each cell, detects defects or damage based on differences in the displacement of cells, and displays an indicator of the detected defects or damage in a user interface.
Description
BACKGROUND
Technical Field

The present disclosure relates generally to structural health monitoring (SHM) and more specifically to techniques for non-contact SHM.


Background Information

SHM of structures (e.g., buildings, bridges, dams, etc.) is increasingly important as our infrastructure ages since it allows engineers to detect defects and/or damage prior to catastrophic failures. Traditionally, SHM was largely performed manually. Human inspectors would carefully examine the structure, noting and measuring any defects and/or damage. However, such manual SHM is highly labor intensive, incomplete and subjective, and potentially dangerous. It may take significant time for human inspectors to cover every part of a structure. Sometimes, parts of a structure cannot easily or safely be reached. Further, different human inspectors may evaluate similar defects and/or damage differently, based on their own subjective criteria and biases.


More recently, manual SHM by human inspectors has sometimes been supplanted by sensor-based SHM. Typically, a structure is instrumented with a network of sensors that monitor dynamic structural responses. The type of sensors may vary depending on the type of responses of interest. However, most sensors are generally contact sensors such that they are physically placed on the structure. For example, to measure displacement, linear variable differential transformers (LVDTs) or global positioning system (GPS) transceivers are often placed on the structure at points of interest.


While an improvement over manual SHM, contact sensor-based SHM has a number of limitations. Installing contact sensors may be labor intensive, costly, and sometimes dangerous. Contact sensors may need to be installed at locations on the structure that cannot easily or safely be reached. Further, electrical wiring may need to be run to power the sensors at those locations. In addition, contact sensors may require other references to measure certain responses, or may be unable to measure some types of responses. For example, LVDTs may require a known stationary reference point in order to measure displacement, and GPS transceivers may be unable to measure subtle displacements due to their limited resolution.


There have been some attempts to implement computer-vision based non-contact SHM to address the shortcomings of manual SHM and contact sensor-based SHM. However, such attempts have suffered from their own shortcomings, which has prevented them from achieving widespread commercial acceptance. One common shortcoming is that effectiveness varies greatly depending on conditions (e.g., lighting conditions, background conditions, texture conditions, alignment conditions, etc.). Existing computer-vision based non-contact SHM typically employs only a single machine vision algorithm to determine responses. However, no single machine vision algorithm is known that is effective at measuring responses under all conditions. Accordingly, while the computer-vision based non-contact SHM may be effective sometimes, it may be lacking at other times, when the conditions change.


Accordingly, there is an unmet need for improved machine vision based techniques for non-contact SHM. It would be desirable to address this need, while also ensuring computational efficiency such that the techniques may be practically implemented using available processing and memory resources.


SUMMARY

In various example embodiments, machine vision-based techniques for non-contact SHM are provided that may integrate both phase-based algorithms and template matching algorithms to enable selection of one or more machine vision algorithms that are effective at measuring responses (e.g., displacement, strain, acceleration, velocity, etc.) under present conditions. Results of a single algorithm or a combination of results from multiple algorithms may be returned. In such techniques, improved template matching algorithms may be employed that provide, for example, sub-pixel accuracy. Responses may be adjusted to cancel out camera vibration and video noise. Defects or damage may be determined by tracking changes in displacement within an area of interest.


In one example embodiment for non-contact SHM, a structural health monitoring software application obtains a video of a structure captured by a video camera. One or more machine vision algorithms are selected from a set of multiple machine vision algorithms provided by the structural health monitoring application, wherein the set of multiple machine vision algorithms includes at least one phase-based algorithm and at least one template matching algorithm. The selected one or more machine vision algorithms are applied to the video of the structure to produce responses. The structural health monitoring software application displays an indicator based on responses in a user interface.


In another example embodiment for non-contact SHM, a structural health monitoring software application obtains a video of a structure captured by a video camera. One or more templates are defined in an initial frame of the video. A search is performed for each template in subsequent frames of the video by sliding the template, comparing the template to the image patch at each possible location, and generating a similarity map based on the comparing. A best-matching image patch for each template is determined using the respective similarity map and responses are calculated based on location of the template in the initial frame and the location of the best-matching image patch. The structural health monitoring software application displays an indicator based on responses in a user interface.


In yet another example embodiment, a structural health monitoring software application obtains a video of a structure captured by a video camera. An area of interest is selected and that area is divided into a grid of cells. One or more machine vision algorithms are applied to the video of the structure to determine a displacement of each cell. Defects or damage are detected based on differences in the displacement of cells. The structural health monitoring software application displays an indicator of the detected defects or damage in a user interface.


It should be understood that a variety of additional features and alternative embodiments may be implemented other than those example embodiments discussed in this Summary. This Summary is intended simply as a brief introduction to the reader for the further description which follows and does not indicate or imply that the example embodiments mentioned herein cover all aspects of the disclosure, or are necessary or essential aspects of the disclosure.





BRIEF DESCRIPTION OF THE DRAWINGS

The description below refers to the accompanying drawings of example embodiments, of which:



FIG. 1 is a block diagram of an example computing device that may be used with the present techniques;



FIG. 2 is a high level flow diagram of an example sequence of steps that may be executed by a structural health monitoring software application;



FIG. 3 is a screen shot of an example user interface that may be presented by the structural health monitoring software application to solicit user input;



FIG. 4 is a flow diagram of an example sequence of steps for camera vibration cancellation based upon motion of a selected reference point;



FIG. 5 is a flow diagram of an example sequence of steps that may be employed to detect defects and/or damage;



FIG. 6A is an example image of a grid of cells showing a substantially uniform color distribution consistent with no defects or damage;



FIG. 6B is an example image of a grid of cells showing a color boundary consistent with a crack;



FIG. 7A is a screen shot of an example user interface that may be presented to display an indicator based on responses; and



FIG. 7B is a screen shot of another example user interface that may be presented to display an indicator based on responses.





DETAILED DESCRIPTION


FIG. 1 is a block diagram of an example computing device 100 that may be used with the present techniques. The computing device 100 includes at least one processor 110 coupled to a host bus 120. The processor 110 may be any of a variety of commercially available processors. A volatile memory 130, such as a Random Access Memory (RAM), is also coupled to the host bus via a memory controller 125. When in operation, the memory 130 stores processor-executable instructions and data that are provided to the processor 110. An input/output (I/O) bus 150 is accessible to the host bus 120 via a bus controller 145. A variety of additional components are coupled to the I/O bus 150. For example, a video display subsystem 155 is coupled to the I/O bus 150. The video display subsystem 155 may include a display screen 170 and hardware to drive the display screen. At least one input device 160, such as a keyboard, a touch sensor, a touchpad, a mouse, etc., is also coupled to the I/O bus 150. A persistent storage device 165, such as a hard disk drive, a solid-state drive, or another type of persistent data store, is further attached, and may persistently store the processor-executable instructions and data, which are loaded into the volatile memory 130 when needed. Still further, a network interface 180 is coupled to the I/O bus 150. The network interface 180 enables communication over a computer network, such as the Internet, between the computing device 100 and other devices, using any of a number of well-known networking protocols. Such communication may enable accessing (e.g., downloading) video of a structure (e.g., building, bridge, dam, etc.) captured by video cameras (e.g., one or more substantially-stationary digital video cameras). Such communication may also enable collaborative, distributed, or remote computing with functionality spread across multiple electronic devices.


Working together, the components of the computing device 100 (and other devices in the case of collaborative, distributed, or remote computing) may execute instructions for a structural health monitoring software application 140 that applies machine vision-based techniques to video of the structure captured by the video cameras to determine responses (e.g., displacement, strain, acceleration, velocity, etc. of points of interest). The structural health monitoring software application 140 may include a number of modules, such as a phase-based machine vision module 142, a template matching machine vision module 144, a camera vibration cancellation module 146, a denoising module 147, a defect/damage detection algorithm 148 and a user interface module 149, among others. As discussed in more detail below, such modules may be integrated to produce an application that is effective at measuring responses under a variety of conditions (e.g., lighting conditions, background conditions, texture conditions, alignment conditions, etc.).



FIG. 2 is a high level flow diagram of an example sequence of steps 200 that may be executed by the structural health monitoring software application 140. At step 210, the structural health monitoring software application 140 obtains a video of the structure, for example, loads a video that was captured by a video camera in response to user input processed by the user interface module 149. At step 220, the structural health monitoring software application selects one or more machine vision algorithms from among a set of multiple machine vision algorithms, for example, manually based on user input processed by the user interface module 149, or automatically based on characteristics determined by an analysis of the video. FIG. 3 is a screen shot 300 of an example user interface that may be presented by the user interface module 149 of the structural health monitoring software application to solicit user input used in steps 210 and 220 of FIG. 2.


The set of multiple machine vision algorithms may be divided into phase-based algorithms and template matching algorithms. If a phase-based algorithm is selected, execution proceeds to step 230. In general, when using a phase-based algorithm, responses at predetermined points (referred to herein as “interested points”) are extracted by tracing change of local phases. When lighting conditions are substantially constant, the intensity of an interested point will be constant within a small neighborhood, even if the point keeps moving. At step 230 the phase-based machine vision module 142 selects interested points in an initial frame of the video, as well as the size of an averaging area (e.g., as a width and height in pixels) for each interested point. The selection may be in response to manual input processed by the user interface module 149, or made automatically. Various criteria may guide selection. For example, points may be selected based on their characteristics indicating high texture contrast with their surroundings.


At step 232, the phase-based machine vision module 142 converts each frame of the video to a gray scale frame. At step 234, the phase-based machine vision module 142 calculates local phases of the interested points of each frame. At step 236, the phase-based machine vision module 142 tracks phase changes for the interested points within respective neighborhoods. Velocity of the interested points may be initially calculated and other responses (e.g., displacement, acceleration, etc.) derived from the velocity.


The operations of steps 234 and 236 may be implemented in a variety of different manners. In one implementation, the operations may be based on the following example formulas.


In one dimension, the intensity of an interested point forms a contour of constant intensity c with respect to time, given as:








$$I(x, t) = c,$$




where x represents the position of the interested point and I(x, t) denotes the intensity of the interested point at time t. The derivative of intensity with respect to time is given as:











$$\frac{\partial I}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial I}{\partial t} = 0.$$





Therefore, the velocity of the point movement in the x direction can be obtained as:









$$\frac{\partial x}{\partial t} = -\left(\frac{\partial I}{\partial x}\right)^{-1}\frac{\partial I}{\partial t}.$$






Spatial motion corresponds to the phase change in the frequency domain. Suppose the whole image moves Δx from time t to time (t+Δt) in the video, then the intensity at time (t+Δt) becomes:







$$I(x, t + \Delta t) = I(x - \Delta x, t).$$





Where φ(t) represents the coefficient phase of the Fourier transform of I(x, t) at some frequency, the shift property of the Fourier transform is given as:









$$\phi(t + \Delta t) = \phi(t) - \Delta x.$$







The phase change of the Fourier coefficients matches the global motion of the whole image. Therefore, to identify the spatial or local motion of an interested point, the change of local phase is considered. Similar to the Fourier transform, the local phase is the coefficient phase of image intensity band-passed with a bank of filters with different orientations.


Local phase may be represented as φθ(x, y, t) at a spatial location (x, y) and time t at an orientation θ, where θ is the angle between the orientation of the filter and the positive x-axis. The local phase of a fixed point generally forms a constant contour through time. As with intensity, this phase contour may be given as:










$$\phi_\theta(x, y, t) = c,$$




and differentiating with respect to time yields:













$$\frac{\partial \phi_\theta}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial \phi_\theta}{\partial y}\frac{\partial y}{\partial t} + \frac{\partial \phi_\theta}{\partial t} = 0.$$





When θ=0, the filter has orientation along the x-axis and provides substantially just horizontal phase information (i.e., $\partial \phi_0 / \partial y \approx 0$).




Therefore, the velocity, ux, in units of pixels along the x-axis may be obtained as:







$$u_x = \frac{\partial x}{\partial t} = -\left(\frac{\partial \phi_0}{\partial x}\right)^{-1}\frac{\partial \phi_0}{\partial t}.$$







Similarly, the velocity, uy, in units of pixels along the y-axis may be obtained as:








$$u_y = \frac{\partial y}{\partial t} = -\left(\frac{\partial \phi_{\pi/2}}{\partial y}\right)^{-1}\frac{\partial \phi_{\pi/2}}{\partial t},$$




provided that











$$\frac{\partial \phi_{\pi/2}}{\partial x} \approx 0.$$





With velocity of the interested points calculated, other responses may be readily calculated. For example, displacement, dx and dy, may be obtained as the integral of the velocities given as:







$$d_x = \int_0^t u_x \, dt \quad \text{and} \quad d_y = \int_0^t u_y \, dt.$$







Similarly, acceleration, ax and ay, may be obtained as the derivatives of the velocities given as:







$$a_x = \frac{\partial u_x}{\partial t} \quad \text{and} \quad a_y = \frac{\partial u_y}{\partial t}.$$





Then, at step 238 the phase-based machine vision module 142 computes the average of the responses within the averaging area (e.g., specified as a width and height in pixels) for each interested point. This averaged response is then provided as the output of the phase-based machine vision module 142 as the response for a monitoring point.
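One way to realize steps 232-238 is sketched below; this is only an illustration (the Gabor filter parameters, the helper names, and the use of NumPy/SciPy are assumptions, not the application's own implementation). Each grayscale frame is band-passed with an oriented complex filter, and the change of local phase at an interested point is traced to obtain velocity, from which displacement and acceleration follow:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=15, wavelength=8.0, theta=0.0, sigma=4.0):
    """Complex Gabor kernel; theta=0 is oriented along the x-axis."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rotated = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.exp(1j * 2.0 * np.pi * rotated / wavelength)

def local_phase_response(gray, theta=0.0):
    """Band-pass a grayscale frame; the angle of the complex response is the
    local phase phi_theta(x, y, t)."""
    return fftconvolve(gray.astype(float), gabor_kernel(theta=theta), mode="same")

def phase_velocity_x(frame_t0, frame_t1, point, dt=1.0):
    """u_x = -(d(phi_0)/dx)^(-1) * d(phi_0)/dt at one interested point."""
    r0 = local_phase_response(frame_t0, theta=0.0)
    r1 = local_phase_response(frame_t1, theta=0.0)
    row, col = point
    # Phase differences are taken via complex products to avoid wrap-around.
    dphi_dt = np.angle(r1[row, col] * np.conj(r0[row, col])) / dt
    dphi_dx = np.angle(r0[row, col + 1] * np.conj(r0[row, col - 1])) / 2.0
    if abs(dphi_dx) < 1e-9:          # guard against a near-zero spatial phase gradient
        return 0.0
    return -dphi_dt / dphi_dx

def derived_responses(velocities, dt):
    """Displacement as the running integral of velocity; acceleration as its derivative."""
    velocities = np.asarray(velocities, dtype=float)
    return np.cumsum(velocities) * dt, np.gradient(velocities, dt)
```

Averaging such velocities over the averaging area around each interested point, as in step 238, would then yield the monitoring point response.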


Returning to step 220, if a template-matching algorithm is selected, execution proceeds to step 240. In general, when using a template-matching algorithm, responses are determined by defining one or more templates in an initial frame of the video and determining best-matching image patches for each template in subsequent frames. Changes in the location between the initial frame and the subsequent frames indicate displacement, and other responses can be determined from displacement. An indicator metric may be used to measure how similar a template is to an image patch. The indicator metric may vary based on the specific type of template-matching algorithm employed.


At step 240, the template matching machine vision module 144 selects parameters and a region of interest (ROI) for the template-matching algorithm. The parameters may include a type of template-matching algorithm (e.g., Squared Difference (SQDIFF), Cross Correlation (CCORR), Correlation Coefficient (CCOEFF), Rectangle Histogram of Oriented Gradients (RHOG), Projection and Quantization of the Histogram of Oriented Gradients (PQ-HOG), etc.), number of templates to use (e.g., a single template, multiple templates, or a grid of templates), template sizes, and template locations, among other possible parameters. The parameters may be selected, for example, manually based on user input processed by the user interface module 149, or automatically based on characteristics determined by an analysis of the video. The ROI may be an area in which a best-matching image patch for each template may be found in subsequent frames. The ROI may be selected, for example, manually based on user input processed by the user interface module 149, or automatically.


At step 242, the template matching machine vision module 144 converts each frame of the video to a gray scale frame. At step 244, the template matching machine vision module 144 defines one or more templates in an initial frame of the video (e.g., a single template, multiple templates, or a grid of templates) based on the parameters. Each template is selected as an area (e.g., a rectangular area) that includes identifiable features. The size of the area is defined by the template size and preferably is neither so small that it does not include enough identifiable features to match with image patches of subsequent frames, nor so large that it includes pixels that may not move in substantially the same way or that it unduly increases computational cost (in terms of processor and memory resources). In some implementations, to ensure the area has enough identifiable features, artificial markers or signs may be positioned on the structure to function as monitoring targets. However, it should be understood that template-matching may operate without artificial markers or signs being employed.


At step 246, the template matching machine vision module 144 searches for each template within the ROI in subsequent frames by sliding the template within the ROI and comparing it to the image patch at each possible location. The result of the comparison is an indicator metric for each possible location that quantitatively measures how similar the template is to the respective image patch. For each template, the indicator metrics may be organized as a two-dimensional matrix (referred to herein as a "similarity map") having a number of rows and columns. It should be noted that by limiting the searching to be within the ROI (as opposed to the entire subsequent frame), computational efficiency may be increased (in terms of processor and memory resources). To achieve this, the ROI may be selected to not be so large as to necessitate substantial computation effort to search within, while also not so small that it does not cover potential changes of location of the template.


At step 248, the template matching machine vision module 144 applies a sub-pixel method to improve resolution. Typically, with template matching algorithms, the accuracy of results is at most one pixel. While this may be adequate in some use cases (e.g., when the video is of high resolution and the camera is close to the structure), it may not always be sufficient. To address this issue, a sub-pixel method, such as bicubic interpolation, may be used. According to such a method, a convolution kernel may be applied in both the x and y directions. The number of segments to divide each pixel into can be specified, with more segments producing a more accurate result at the cost of greater processing and memory resource consumption.
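As a hedged illustration of such a sub-pixel refinement (the helper name, the window size, and the use of OpenCV's bicubic resize are assumptions), the neighborhood of the integer peak in a similarity map can be upsampled into a chosen number of segments per pixel and the peak re-located at fractional coordinates:

```python
import cv2

def subpixel_peak(similarity_map, peak_xy, segments=10, half_window=2):
    """Refine an integer peak location by bicubic upsampling of its neighborhood;
    'segments' is the number of sub-pixel segments per pixel."""
    px, py = peak_xy
    y0, x0 = max(py - half_window, 0), max(px - half_window, 0)
    patch = similarity_map[y0:py + half_window + 1, x0:px + half_window + 1]
    upsampled = cv2.resize(patch, None, fx=segments, fy=segments,
                           interpolation=cv2.INTER_CUBIC)   # bicubic interpolation
    _, _, _, max_loc = cv2.minMaxLoc(upsampled)
    # Map the upsampled peak back to fractional pixel coordinates.
    return x0 + max_loc[0] / segments, y0 + max_loc[1] / segments
```

For SQDIFF-style metrics, where the best match is a minimum rather than a maximum, the minimum location from minMaxLoc would be used instead.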


At step 249, the template matching machine vision module 144 determines a best-matching image patch for each template based on the respective similarity map, and calculates responses based on the location of the template in the initial frame and the location of the best-matching image patch. The best-matching image patch may be the one with the best indicator metric (e.g., highest or lowest, depending on the metric). Displacement may be calculated as the difference between the location of the template in the initial frame and the location of the best-matching image patch. Other responses (e.g., strain, acceleration, velocity, etc.) may be calculated based on the displacement. For example, strain may be calculated as a ratio of deformation over the distance between a pair of monitoring points. Similarly, velocity may be calculated as change in displacement over a unit of time, and acceleration may be calculated as change in velocity over a change in time.
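For illustration only (not the application's own code), the search, similarity map, best match, and displacement for a single template can be sketched with OpenCV's matchTemplate; the rectangle parameters and function name below are assumptions:

```python
import cv2

def match_displacement(initial_gray, later_gray, tpl_rect, roi_rect):
    """Return (dx, dy) in pixels between a template defined in the initial frame
    and its best match inside an ROI of a later frame (normalized CCOEFF metric)."""
    tx, ty, tw, th = tpl_rect                    # template rectangle in the initial frame
    rx, ry, rw, rh = roi_rect                    # search region in the later frame
    template = initial_gray[ty:ty + th, tx:tx + tw]
    roi = later_gray[ry:ry + rh, rx:rx + rw]
    result = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)  # similarity map
    _, _, _, max_loc = cv2.minMaxLoc(result)     # best match = greatest R(x, y)
    best_x, best_y = max_loc[0] + rx, max_loc[1] + ry
    return best_x - tx, best_y - ty
```

Velocity and acceleration would then follow from differences of such displacements over the frame interval, as described above.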


The operations of steps 244-249 may be implemented in a variety of different manners. In one implementation, the operations may be based on the following example formulas.


The top left corner of the template in the initial frame may be represented as (x0, y0). The top left corner of the best-matching image patch in the ith subsequent frame may be represented as (xi, yi). Horizontal and vertical displacements between the template in the initial frame and the best-matching image patch in the ith subsequent frame may be given as:






$$D_x^i = x_i - x_0, \qquad D_y^i = y_i - y_0.$$




To determine these displacements, the subsequent frame may be represented as a matrix I(x, y), where (x, y) is a pixel position and I(x, y) is the pixel intensity at that location, and the template may be represented as T(x′, y′), where (x′, y′) is a pixel position within the template. The indicator metric at a specific location (x, y) within the ROI is denoted as R(x, y), and the collection of R(x, y) values forms the similarity map.


The indicator metric R(x, y) may be calculated differently depending on the type of template-matching algorithm selected. Correlation-like template matching algorithms, such as SQDIFF, CCORR and CCOEFF algorithms may calculate R(x, y) in somewhat similar manners.


For example, if a SQDIFF algorithm is selected, the indicator metric R(x, y) may be given as:







$$R(x, y) = \sum_{x'}\sum_{y'} \bigl(T(x', y') - I(x + x', y + y')\bigr)^2.$$








Further, if SQDIFF is normalized, the indicator metric R(x, y) may be given as:







$$R(x, y) = \frac{\displaystyle\sum_{x'}\sum_{y'} \bigl(T(x', y') - I(x + x', y + y')\bigr)^2}{\sqrt{\displaystyle\sum_{x'}\sum_{y'} T(x', y')^2 \cdot \sum_{x'}\sum_{y'} I(x + x', y + y')^2}}.$$





In such algorithms, a best match may be indicated by the lowest value of R(x, y).
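For concreteness, a direct NumPy transcription of the normalized SQDIFF metric above might look like the following sketch (the function name and array conventions are assumptions):

```python
import numpy as np

def sqdiff_normed(template, image, x, y):
    """Normalized SQDIFF indicator metric R(x, y) for a template compared against
    the image patch anchored at (x, y); the best match minimizes this value."""
    h, w = template.shape
    patch = image[y:y + h, x:x + w].astype(float)
    t = template.astype(float)
    numerator = np.sum((t - patch) ** 2)
    denominator = np.sqrt(np.sum(t ** 2) * np.sum(patch ** 2))
    return numerator / denominator
```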


Further, if a CCORR algorithm is selected, the indicator metric R(x, y) may be given as:







$$R(x, y) = \sum_{x'}\sum_{y'} T(x', y') \cdot I(x + x', y + y').$$







Further, if CCORR is normalized, the indicator metric R(x, y) may be given as:







$$R(x, y) = \frac{\displaystyle\sum_{x'}\sum_{y'} T(x', y') \cdot I(x + x', y + y')}{\sqrt{\displaystyle\sum_{x'}\sum_{y'} T(x', y')^2 \cdot \sum_{x'}\sum_{y'} I(x + x', y + y')^2}}.$$





In such algorithms, a best match may be indicated by the greatest value of R(x, y).


Further, if a CCOEFF algorithm is selected, the indicator metric R(x, y) may be given as:







$$R(x, y) = \sum_{x'}\sum_{y'} T'(x', y') \cdot I'(x + x', y + y'),$$

where

$$T'(x', y') = T(x', y') - \frac{1}{w \cdot h}\sum_{x''}\sum_{y''} T(x'', y''),$$

and

$$I'(x + x', y + y') = I(x + x', y + y') - \frac{1}{w \cdot h}\sum_{x''}\sum_{y''} I(x + x'', y + y''),$$




where w and h are the width and height of the template respectively. Further, if CCOEFF is normalized, the indicator metric R(x, y) may be given as:







$$R(x, y) = \frac{\displaystyle\sum_{x'}\sum_{y'} T'(x', y') \cdot I'(x + x', y + y')}{\sqrt{\displaystyle\sum_{x'}\sum_{y'} T'(x', y')^2 \cdot \sum_{x'}\sum_{y'} I'(x + x', y + y')^2}}.$$





In such algorithms, a best match may be indicated by the greatest value of R(x, y).


Feature-based template matching algorithms, such as RHOG and PQ-HOG algorithms, may calculate R(x, y) in different manners. For example, RHOG and PQ-HOG algorithms may use L1 norm distance and a robust Hamming-distance-based similarity measure, respectively, to compare RHOG and PQ-HOG codes.


In a RHOG algorithm, the first step may be to compute the gradient. When the intensity of a pixel position (x, y) is represented as I(x, y) in a frame, its horizontal and vertical derivatives may be given as:








$$f_x(x, y) = \frac{\partial I}{\partial x} \approx 0.5\,\bigl(I(x + 1, y) - I(x - 1, y)\bigr)$$

$$f_y(x, y) = \frac{\partial I}{\partial y} \approx 0.5\,\bigl(I(x, y + 1) - I(x, y - 1)\bigr).$$






Then the gradient orientation angle θ and gradient magnitude Gm can be calculated as:







$$\theta(x, y) = \arctan\!\left(\frac{f_y}{f_x}\right)$$

$$G_m(x, y) = \sqrt{f_x^2 + f_y^2}.$$





A positive integer, Nbins, may be selected as a number of orientation bins. Assume, for example, Nbins=4. Then the orientation bin Gb may be given as:










$$G_b(x, y) = \left\lfloor \frac{4\,\theta(x, y)}{\pi} \right\rfloor + 3; \qquad G_b = 1, 2, 3, 4.$$





Another positive integer, Ng, may represent a number of neighborhood pixels. For example, if Ng=1, the gradients of nine ((2·Ng+1)²) neighborhood pixels are taken into account for calculating a RHOG code.


The RHOG code in each orientation bin may be calculated as the summation of gradient magnitudes of the neighborhood pixels that are quantized into the bin, such that it can be given as:








$$\mathrm{RHOG}(x, y, k) = \sum_{G_b(x', y') = k} G_m(x', y'),$$

where $k = 1, 2, \ldots, N_{bins}$; $x - N_g \le x' \le x + N_g$; and $y - N_g \le y' \le y + N_g$.








A customized function may be used to estimate the similarity between the template and the image patch based on RHOG codes. For example, a RHOG code may be treated as the intensity in each pixel location and a normalized CCOEFF algorithm may be applied as discussed above.


Alternatively, a sum of L1 norm distance may be computed over all orientation bins. The RHOG code of the template and image patch at a location (x, y) at the kth orientation bin may be denoted as TRHOG(x, y, k) and IRHOG(x, y, k), respectively. The indicator metric R(x, y) for the similarity map may then be given as:







$$R(x, y) = \sum_{k}\sum_{x'}\sum_{y'} \bigl|\, T_{\mathrm{RHOG}}(x', y', k) - I_{\mathrm{RHOG}}(x + x', y + y', k) \,\bigr|.$$







The smaller the L1 norm distance, the greater the similarity between the template and the image patch. Therefore, the location of the best-matching image patch may be found by minimizing L1 norm distance.
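A compact sketch of the RHOG code computation and the L1 indicator metric above is shown below, assuming NumPy, Nbins=4, and wrap-around image borders for brevity (the function names are illustrative, not the patented implementation):

```python
import numpy as np

def rhog_codes(gray, n_g=1):
    """Per-pixel RHOG codes for N_bins = 4: sum the gradient magnitudes of the
    (2*n_g+1)^2 neighborhood pixels that quantize into each orientation bin."""
    gray = gray.astype(float)
    fx = 0.5 * (np.roll(gray, -1, axis=1) - np.roll(gray, 1, axis=1))
    fy = 0.5 * (np.roll(gray, -1, axis=0) - np.roll(gray, 1, axis=0))
    theta = np.arctan(fy / (fx + 1e-12))            # orientation angle in (-pi/2, pi/2)
    g_m = np.hypot(fx, fy)                          # gradient magnitude
    g_b = np.clip(np.floor(4.0 * theta / np.pi).astype(int) + 3, 1, 4)
    codes = np.zeros(gray.shape + (4,))
    for dy in range(-n_g, n_g + 1):                 # accumulate over the neighborhood
        for dx in range(-n_g, n_g + 1):
            nb_bin = np.roll(np.roll(g_b, dy, axis=0), dx, axis=1)
            nb_mag = np.roll(np.roll(g_m, dy, axis=0), dx, axis=1)
            for k in range(1, 5):
                codes[..., k - 1] += np.where(nb_bin == k, nb_mag, 0.0)
    return codes

def rhog_l1_metric(template_codes, image_codes, x, y):
    """L1-norm indicator metric R(x, y); the best-matching location minimizes it."""
    th, tw, _ = template_codes.shape
    patch = image_codes[y:y + th, x:x + tw, :]
    return np.abs(template_codes - patch).sum()
```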


Looking to a PQ-HOG implementation, in a PQ-HOG algorithm the first step is to compute a HOG vector. It may use, for example, rectangular HOG (RHOG, as discussed above) or column HOG (CHOG), in which case it may be an Nbins-dimensional vector. Then, the HOG vector may be projected and quantized. In essence, this is a random projection that projects high dimensional data into a low dimension while preserving pairwise distance. In one implementation, a very sparse projection approach may be used to project the HOG vector. A projection matrix with the dimension of Nbits×Nbins may be filled with entries of {−1, 0, 1} with probabilities






$$\left\{\frac{0.5}{\beta},\; 1 - \frac{1}{\beta},\; \frac{0.5}{\beta}\right\}$$




for a constant β. The constant β may be chosen, for example, as







$$\beta = \frac{N_{bins}}{2},$$




which yields the most projection instances corresponding to the difference between two vector components, and







$$N_{bits} = \frac{N_{bins}\,(N_{bins} - 1)}{2}.$$





After multiplying the HOG vector with the projection matrix, the projected HOG(x, y) is an Nbits-dimensional vector. Afterwards, each entry in this projected vector may be quantized to a one-bit digit, since one-bit quantization approximates the angle between the original vectors. As such, si(x, y) may denote the ith (1≤i≤Nbits) entry in the projected and quantized vector, and ri may denote the ith row of the projection matrix, given as:








$$s_i(x, y) = \begin{cases} 1, & \text{if } r_i^{T}\,\mathrm{HOG}(x, y) > 0 \\ 0, & \text{otherwise} \end{cases};$$






and s(x, y) may be an Nbits-dimensional binary code, a portion of which (e.g., 8 or 24 bits) may be selected as a PQ-HOG code. NPQ_bits bits may be selected and the resultant binary vector may be PQ_HOG(x, y). Since PQ_HOG(x, y) is binary, Hamming distance (also referred to as "H-distance") may be used to measure the similarity between PQ-HOG codes. Considering a template and a candidate image patch cropped from the top corner (x, y) of a frame, the Hamming distance between a template pixel at (x′, y′) and image pixel (x+x′, y+y′) is given as:







$$d(x, y, x', y') = \mathrm{Bitcount}\bigl( T_{\mathrm{PQ\_HOG}}(x', y', k) \oplus I_{\mathrm{PQ\_HOG}}(x + x', y + y', k) \bigr).$$






This distance d(x, y, x′, y′) may be modified to replace the pixel error measure with a robust error measure ρ(d(x, y, x′, y′), t), where t is a parameter controlling the shape of ρ. Bounded M-estimators may be chosen as robust functions. For example, a nondecreasing bounded function may be given as:







$$\rho(d, t) = \begin{cases} g(d), & \text{if } d < t \\ g(t), & \text{otherwise} \end{cases}.$$






An estimator may be calculated as, for example, a simple truncation with g(d)=d, a trimmed mean M-estimator with g(d)=d², or a Tukey estimate. The per-pixel contribution to the match measure may then be given as:








$$D_t(x, y, x', y') = \begin{cases} g(t) - f\bigl(g(d(x, y, x', y'))\bigr), & \text{if } d(x, y, x', y') < t \\ 0, & \text{otherwise} \end{cases}.$$






The indicator metric for the similarity map may be generated through an additive match measure, given as:








$$R_t(x, y) = \sum_{x'}\sum_{y'} D_t(x, y, x', y').$$






The greatest Rt(x, y) indicates the location of the best-matching image patch. Such a technique may be fast and robust (thereby conserving processor and memory resources) since only similar pixels are compared with the template, which can be interpreted as voting at the pixel level. If a pixel's error is smaller than the threshold, it contributes to the matching measure. Otherwise, it may be discarded or replaced by another function that is independent of the pixel error.
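To make the PQ-HOG pipeline concrete, here is a minimal sketch (not the patented implementation) of the very sparse projection, one-bit quantization, and robust additive match measure; it assumes a simple truncation estimator with g(d)=d, f as the identity, and NumPy arrays of per-pixel codes:

```python
import numpy as np

rng = np.random.default_rng(0)

def projection_matrix(n_bins=4, beta=None):
    """Very sparse projection matrix with entries in {-1, 0, +1} drawn with
    probabilities {0.5/beta, 1 - 1/beta, 0.5/beta}; beta defaults to n_bins/2."""
    beta = beta if beta is not None else n_bins / 2.0
    n_bits = n_bins * (n_bins - 1) // 2
    return rng.choice([-1, 0, 1], size=(n_bits, n_bins),
                      p=[0.5 / beta, 1.0 - 1.0 / beta, 0.5 / beta])

def pq_hog_code(hog_vector, proj):
    """Project a HOG vector and quantize each entry to one bit."""
    return (proj @ np.asarray(hog_vector, dtype=float) > 0).astype(np.uint8)

def robust_match(template_codes, patch_codes, t=3):
    """Additive match measure R_t: a pixel whose Hamming distance d is below the
    threshold t votes with g(t) - g(d) (here g(d) = d); other pixels contribute
    nothing. The best-matching image patch maximizes R_t."""
    d = np.count_nonzero(template_codes != patch_codes, axis=-1)  # per-pixel Hamming distance
    return np.sum(np.where(d < t, t - d, 0))
```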


Returning to FIG. 2, the responses for monitoring points returned by the phase-based machine vision module 142 and/or the template matching machine vision module 144 may be influenced by camera vibration (e.g., caused by wind, traffic, etc.). Camera vibration may induce additional motion in the video which may reduce accuracy of the returned responses. At step 250, the camera vibration cancellation module 146 attempts to cancel the effects of camera vibration by subtracting the response attributable to camera vibration from each monitoring point response. While this may be performed in various ways, in one implementation, camera vibration cancellation is based upon motion of a selected reference point.



FIG. 4 is a flow diagram of an example sequence of steps 400 for camera vibration cancellation based upon motion of a selected reference point. At step 410, the camera vibration cancellation module 146 selects a reference point in each frame of the video. The reference point may be selected on a stationary object separate from the structure. At step 420, the camera vibration cancellation module 146 calculates the response of the reference point. The response of the reference point is treated as the response due to vibration of the video camera. At step 430, the camera vibration cancellation module 146 subtracts the response of the reference point from the response for each monitoring point to produce a camera vibration cancelled response.


The operations of steps 420 and 430 may be implemented in a variety of different manners. In one implementation where the response being determined is velocity, the operations may be based on the following example formulas.


If r is the reference point and urx is the velocity in the x direction for the reference point r, the decomposed velocity in the x direction for the monitoring point p is:







$$u_{px}^{d} = u_{px} - u_{rx}.$$






Similarly, the decomposed velocity in the y direction for the monitoring point p is:







$$u_{py}^{d} = u_{py} - u_{ry}.$$
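As a trivial worked example of the subtraction above (the velocity values are made up purely for illustration):

```python
import numpy as np

u_px = np.array([0.12, 0.15, 0.11])    # monitoring-point velocities per frame (pixels), assumed
u_rx = np.array([0.02, 0.05, 0.01])    # reference-point velocities per frame (pixels), assumed
u_px_decomposed = u_px - u_rx          # camera-vibration-cancelled response
```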






The response for each monitoring point returned by the phase-based machine vision module 142 and/or the template matching machine vision module 144 may also be influenced by video noise. Video noise may reduce accuracy of the returned responses.


At step 260, the denoising module 147 attempts to denoise each monitoring point response to address the effects of video noise. Video denoising may be performed using any of a variety of spatial domain or transform domain denoising algorithms. In one implementation where the response being determined is velocity, the operations may be based on the following example formulas.


For a monitoring point p, the velocity in each direction may be calculated as the average of nx and/or ny pixels in the neighborhood of pixel p, given as:










$$\overline{u_p^d(x, y)} = \frac{\displaystyle\sum_{i=1}^{n_x} u_p^d(x, y) + \sum_{j=1}^{n_y} u_p^d(x, y)}{n_x + n_y},$$




where u_p^d(x, y) denotes the extracted velocities in the x and y directions, and n_x and n_y are the numbers of neighborhood pixels used for processing the video.
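One plausible reading of this neighborhood average, offered only as an illustrative sketch (the field layout and window size are assumptions), is:

```python
import numpy as np

def neighborhood_average(u_field, row, col, n=2):
    """Average a decomposed velocity field over n pixels on either side of the
    monitoring point along the x and y directions."""
    horizontal = u_field[row, col - n:col + n + 1]
    vertical = u_field[row - n:row + n + 1, col]
    return (horizontal.sum() + vertical.sum()) / (horizontal.size + vertical.size)
```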


At step 270, the structural health monitoring software application 140 saves the responses. Up until step 270, the vibration-cancelled and denoised responses may be in units of pixels. As part of the saving operation, the structural health monitoring software application 140 may convert the responses to a standard unit using a scaling factor, which may be obtained based on a ratio of a real-world distance to the number of pixels between two points in the video.
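For example (the numbers below are purely illustrative), a scaling factor and unit conversion might look like:

```python
known_distance_mm = 500.0     # assumed real-world distance between two reference marks
pixel_distance = 250.0        # pixel distance between the same marks in the video
scale = known_distance_mm / pixel_distance      # millimeters per pixel
displacement_pixels = 3.2
displacement_mm = displacement_pixels * scale   # response converted to a standard unit
```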


At step 280, the responses may be further processed to produce an indicator. Depending on the implementation, the further processing and indicator may take any of a variety of different forms. In one implementation, the further processing may simply involve organizing the responses and the indicator for graphical display. For example, in an implementation where the responses are displacements, they may be organized over time to produce a graph of displacement versus time. In other implementations, the further processing may involve determining other quantities based on the responses, and the indicator may be these other quantities or a graphical display related to these other quantities. For example, the further processing may involve applying a defects/damage detection algorithm and the indicator may be a graphical display of detected defects and/or damage.



FIG. 5 is a flow diagram of an example sequence of steps 500 that may be employed to detect defects and/or damage. Before the steps are executed, the structural health monitoring software application 140 may have already selected an area of interest having a limited size in the video (e.g., in response to user input processed by the user interface module 149) and divided the area of interest into a grid of cells. If a template matching algorithm is being used, each cell may be treated as a template such that template matching is performed on the resulting grid of templates.
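A minimal sketch of dividing a selected area of interest into a grid of cell rectangles, each of which can serve as a template, might look like the following (the tuple layout is an assumption):

```python
def grid_cells(area_rect, rows, cols):
    """Split an area of interest (x, y, width, height) into rows x cols cells."""
    ax, ay, aw, ah = area_rect
    cw, ch = aw // cols, ah // rows
    return [(ax + c * cw, ay + r * ch, cw, ch)
            for r in range(rows) for c in range(cols)]
```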


At step 510, the defect/damage detection algorithm 148 determines a displacement of each cell. For a structure without defects or damage, all the cells within an area of interest of limited size are expected to move in substantially the same direction, if at all, under an external load at a given time. However, if there is a defect or damage (e.g., a crack) on or under the surface of the structure, the cells on each side of the defect or damage often will move in markedly different directions.


At step 520, the displacements are visually displayed by the user interface module 149, for example, by color-coding/painting each cell with a color that indicates a direction of the displacement. For a structure without defects or damage, the cells within an area of interest of limited size are expected to have a substantially uniform color distribution. However, if there is a defect or damage (e.g., a crack) on or under the surface of the structure, the cells on each side of the defect are expected to have substantially different color distributions, such that a color boundary is presented. FIG. 6A is an example image 600 of a grid of cells showing a substantially uniform color distribution consistent with no defects or damage. FIG. 6B is an example image 610 of a grid of cells showing a color boundary consistent with a crack.


At step 530, the defect/damage detection algorithm 148 detects defects or damage based on differences in the displacement of cells. Such operation may be performed automatically, for example in response to a comparison calculation of the determined displacement of each cell, or manually, for example, in response to a user observing the color-coding/painting of each cell in a display and visually determining the location of defects based on differing color distributions.
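One simple automated comparison consistent with step 530 is sketched below under stated assumptions (per-cell displacement vectors stored in a (rows, cols, 2) array; the 45-degree threshold is arbitrary):

```python
import numpy as np

def detect_cell_discontinuities(cell_disp, angle_threshold_deg=45.0):
    """Flag neighboring cells whose displacement directions differ markedly,
    consistent with a defect or damage boundary between them."""
    angles = np.arctan2(cell_disp[..., 1], cell_disp[..., 0])
    def wrapped_diff(a, b):
        return np.abs(np.degrees(np.angle(np.exp(1j * (a - b)))))
    horizontal = wrapped_diff(angles[:, 1:], angles[:, :-1]) > angle_threshold_deg
    vertical = wrapped_diff(angles[1:, :], angles[:-1, :]) > angle_threshold_deg
    return horizontal, vertical
```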


Returning to FIG. 2, at step 290, the user interface module 149 may display an indicator based on responses or detected defects/damage. For example, text, tables, graphs, images and the like may be displayed on a display screen. FIGS. 7A-7B are screen shots 700 and 710 of example user interfaces that may be presented by the user interface module 149 to display an indicator based on responses. In these examples, graphs 720 of displacement are based on phase-based algorithms and graph 730 of displacement is based on template matching algorithms.


It should be understood that various adaptations and modifications may be readily made to what is described above, to suit various implementations and environments. While it is discussed above that many aspects of the techniques may be implemented by specific software modules executing on hardware, it should be understood that some or all of the techniques may also be implemented by different software on different hardware. In addition to general-purpose computing devices, the hardware may include specially configured logic circuits and/or other types of hardware components. Above all, it should be understood that the above descriptions are meant to be taken only by way of example.

Claims
  • 1. A method for non-contact structural health monitoring, comprising: obtaining, by a structural health monitoring software application executing on one or more computing devices, a video of a structure captured by a video camera;selecting an area of interest in frames of the video;dividing the area of interest into a grid of cells;selecting one or more machine vision algorithms from a set of multiple machine vision algorithms provided by the structural health monitoring application, wherein the set of multiple machine vision algorithms includes at least one phase-based algorithm and at least one template matching algorithm;applying the one or more machine vision algorithms to the frames of the video of the structure to determine a displacement of each cell;detecting defects or damage based on differences in the displacement of cells; anddisplaying, by the structural health monitoring software application, an indicator of the detected defects or damage in a user interface on a screen of the one or more computing devices.
  • 2. The method of claim 1, wherein the displacement of each cell has a direction and the method further comprises: displaying, by the structural health monitoring software application, an indicator of the direction of the displacement of each cell in the user interface on the screen.
  • 3. The method of claim 1, wherein the displacement of each cell is determined for a period of time encompassed by the video and the method further comprises: displaying, by the structural health monitoring software application, an indicator of the displacement of one or more cells versus time in the user interface on the screen.
  • 4. The method of claim 1, wherein the applying applies at least one template matching algorithm, and the at least one template matching algorithm treats each cell of the grid of cells as a template.
  • 5. The method of claim 4, wherein the applying comprises: defining templates in an initial frame of the video;searching for each template from the initial frame in subsequent frames of the video by sliding the template, comparing the template to an image patch at each possible location, and generating a similarity map based on the comparing; anddetermining a best-matching image patch for each template using the similarity map and calculating displacement based on location of the template in the initial frame and the location of the best-matching image patch in the subsequent frame.
  • 6. The method of claim 5, wherein the applying further comprises: selecting a region of interest (ROI) for each template, andwherein the searching for each template is confined to be within the ROI of the respective template.
  • 7. The method of claim 1, wherein the applying applies at least one phase-based algorithm, and the at least one phase-based algorithm operates to: select interested points in an initial frame of the video;select an averaging area for each interested point;calculate local phases of each interested point over subsequent frames of the video; andtrack phase change for the interested points within respective neighborhoods.
  • 8. The method of claim 1, wherein the selecting selects both at least one phase-based algorithm and at least one template matching algorithm to have at least two selected machine vision algorithms, and the applying applies the at least two selected machine vision algorithms to determine the displacement of each cell.
  • 9. The method of claim 8, wherein the displaying the indicator of the detected defects or damage comprises: displaying a first indicator based on the at least one phase-based algorithm and displaying a second indicator based on the at least one template matching algorithm to permit comparison of results of the at least one phase-based algorithm and the at least one template matching algorithm.
  • 10. A computing device, comprising: a processor; anda memory coupled to the processor and configured to store instructions of a structural health monitoring software application that when executed on the processor are operable to:obtain a video of a structure captured by a video camera;select an area of interest in frames of the video;divide the area of interest into a grid of cells;apply at least one phase-based algorithm to the frames of the video of the structure to determine a displacement of each cell;apply at least one template matching algorithm to the frames of the video of the structure to determine the displacement of each cell;display a first indicator of the displacement of each cell based on the at least one phase-based algorithm; anddisplay a second indicator of the displacement of each cell based on the at least one template matching algorithm.
  • 11. The computing device of claim 10, wherein the displacement of each cell is determined for a period of time and the first indicator and the second indicator comprise indicators of displacement versus time.
  • 12. The computing device of claim 10, wherein the phase-based algorithm is configured to select an averaging area for each interested point, calculate local phases of each interested point over subsequent frames of the video, and track phase change for the interested points within respective neighborhoods.
  • 13. The computing device of claim 10, wherein the template matching algorithm is configured to define templates in an initial frame of the video, search for each template from the initial frame in subsequent frames of the video by sliding the template, compare the template to an image patch at each possible location and generate a similarity map based on the comparison, and determine a best-matching image patch for each template using the similarity map.
  • 14. The computing device of claim 10, wherein the instructions of the structural health monitoring software application are operable to: detect defects or damage based on differences in the displacement of cells; anddisplay an indicator of the detected defects or damage.
  • 15. A non-transitory computer readable medium having instructions stored thereon that when executed on one or more processors are operable to: obtain a video of a structure captured by a video camera;select an area of interest in frames of the video;divide the area of interest into a grid of cells;apply two or more machine vision algorithms to the frames of the video of the structure to determine a displacement of each cell, wherein the two or more machine vision algorithms include: a phase-based algorithm that selects an averaging area for each interested point, calculates local phases of each interested point over subsequent frames of the video, and tracks phase change for the interested points within respective neighborhoods, anda template matching algorithm that defines templates in an initial frame of the video, searches for each template from the initial frame in subsequent frames of the video by sliding the template, compares the template to an image patch at each possible location, generates a similarity map based on the comparison, and determines a best-matching image patch for each template using the similarity map;detect defects or damage based on differences in the displacement of cells; anddisplay an indicator of the detected defects or damage in a user interface.
  • 16. The non-transitory computer readable media of claim 15, wherein the displacement of each cell has a direction and the instructions when executed are further configured to: display an indicator of the direction of the displacement of each cell in the user interface on the screen.
  • 17. The non-transitory computer readable media of claim 15, wherein the displacement of each cell is determined for a period of time encompassed by the video and the instructions when executed are further configured to: display an indicator of the displacement of one or more cells versus time in the user interface on the screen.
  • 18. The non-transitory computer readable media of claim 15, wherein the template matching algorithm treats each cell of the grid of cells as a template.
  • 19. The non-transitory computer readable media of claim 15, wherein the instructions when executed are further configured to: select a region of interest (ROI) for each template, wherein the search for each template is confined to be within the ROI of the respective template.
  • 20. The non-transitory computer readable media of claim 15, wherein the instructions when executed are further configured to: display a first indicator based on the at least one phase-based algorithm and display a second indicator based on the at least one template matching algorithm to permit comparison of results of the at least one phase-based algorithm and the at least one template matching algorithm.
RELATED APPLICATIONS

The present application is a continuation of U.S. patent application Ser. No. 17/196,467 filed by Zheng Yi Wu et al. on Mar. 9, 2021 for "Machine Vision-Based Techniques for Non-Contact Structural Health Monitoring," the contents of which are incorporated by reference herein in their entirety.

Continuations (1)
Number Date Country
Parent 17196467 Mar 2021 US
Child 18624667 US