The present invention relates generally to an apparatus and a method for a fusion-based digital image correlation framework for strain measurement.
Strain measurement of materials subjected to loading or mechanical damage is an essential task in various industrial applications. Aside from the widely used pointwise strain gauge technique, digital image correlation (DIC), a non-contact, non-interferometric optical technique, has attracted considerable attention for its capability of providing the full-field strain distribution of a surface with a simple experimental setup. DIC is performed by comparing digital gray intensity images of the surface before and after deformation, taking the derivative of the pixel displacement as the measure of strain at each pixel.
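For intuition, the following is a minimal sketch of this principle: a small subset of the reference image is located in the deformed image by maximizing zero-normalized cross-correlation, and the derivative of the resulting displacement field serves as the strain estimate. The function names, the integer-pixel search, and the omitted boundary handling are illustrative simplifications, not the method claimed herein.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12)

def track_subset(ref, deformed, cx, cy, half=10, search=5):
    """Integer-pixel displacement of the subset centered at (cx, cy)."""
    tpl = ref[cy - half:cy + half + 1, cx - half:cx + half + 1]
    best, best_uv = -np.inf, (0, 0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            win = deformed[cy + dv - half:cy + dv + half + 1,
                           cx + du - half:cx + du + half + 1]
            score = zncc(tpl, win)
            if score > best:
                best, best_uv = score, (du, dv)
    return best_uv  # displacement (u, v) in pixels

# Strain follows as the spatial derivative of the displacement field
# tracked on a grid, e.g., exx = np.gradient(u_field, axis=1).
```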
In various applications, it is of great interest to perform full-field two-dimensional (2D) DIC analysis on the curved surfaces of large 3D objects. DIC imposes strict requirements on the images taken before and after deformation for accurate pixel displacement, such as image resolution, image registration, and compensation of camera lens distortion, since the displacements under strain are generally very subtle for most industrial materials. These requirements lead to two daunting limitations for existing 2D DIC analysis. First, the DIC method is usually limited to 2D planar object surfaces rather than 3D curved surfaces. Second, the DIC method is usually restricted to small surfaces due to the very high pixel resolution required of images for DIC analysis. Many efforts have been made on 3D DIC methods based on binocular stereo vision or a multi-camera system surrounding the object, involving precise calibration and image stitching, which are difficult to operate in various scenarios.
This work instead stitches images captured by a single ordinary moving camera rather than a well-calibrated multi-camera system.
In our proposed framework, we incorporate image fusion and camera pose estimation to automatically stitch a large number of images of the curved surface under test. This work extends the range of applications based on image fusion and stitching to strain measurement in mechanical engineering.
The proposed framework decouples the image fusion problem into a sequence of well-known PnP problems, which have been widely explored using both non-iterative and iterative methods, some with extra outlier rejection or incorporating observation uncertainty information. The proposed image fusion method, combining the bundle adjustment principle and an iterative PnP method, outperforms existing PnP methods and achieves applicable fusion accuracy.
The present disclosure addresses the problem of enabling two-dimensional digital image correlation (DIC) for strain measurement on large three-dimensional objects with curved surfaces. It is challenging to acquire the full-field qualified images of the surface required by DIC due to blur, distortion, and the narrow visual field of the surface that a single image can cover. To overcome this issue, we propose an end-to-end DIC framework incorporating the image fusion principle to achieve full-field strain measurement over the curved surface. With a sequence of blurry images as inputs, we first recover sharp images using blind deconvolution, then project the recovered sharp images onto the curved surface using camera poses estimated by our proposed perspective-n-point (PnP) method, called RRWLM. The images on the curved surface are stitched and then unfolded for strain analysis using DIC. Numerical experiments are conducted to validate our framework using RRWLM with comparisons to existing methods.
Some embodiments of the present invention propose an end-to-end fusion-based DIC framework that enables strain measurement along the large curved surface of a 3D object using a single camera. We first move a camera over the large 3D surface to acquire a sequence of 2D blurry images of the surface texture. From these blurry observations, we then recover the corresponding sharp images using blind deconvolution and project their pixels to the 3D surface using camera poses estimated by our proposed robust perspective-n-point (PnP) method for image fusion. The stitched 3D surface images before and after deformation are unfolded into two fused 2D images, respectively, converting the 3D strain measurement into a 2D one for further DIC analysis. Since the displacements are subtle (typically sub-pixel) as mentioned before, their derivatives and the corresponding strains are extremely sensitive to the fused image quality. Thus, the most daunting challenge in the pipeline is the stringent accuracy requirement (at least sub-pixel level) on the image fusion method for accurate strain measurement.
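The dataflow of this pipeline can be summarized in the following high-level sketch. Every stage function here is a hypothetical placeholder standing in for the corresponding module described below (blind deconvolution, RRWLM pose estimation, stitching, unfolding, DIC); only the sequencing of the stages is meaningful.

```python
import numpy as np

def deblur(images):          return images                     # blind deconvolution module
def estimate_poses(images):  return [np.eye(4)] * len(images)  # PnP (RRWLM) module
def stitch(images, poses):   return images[0]                  # project + fuse on the 3D surface
def unfold(surface_image):   return surface_image              # cylinder -> plane
def dic(ref_2d, def_2d):     return np.zeros_like(ref_2d)      # displacement map

def strain_pipeline(seq_before, seq_after):
    sharp_b, sharp_f = deblur(seq_before), deblur(seq_after)
    U_b = unfold(stitch(sharp_b, estimate_poses(sharp_b)))
    U_f = unfold(stitch(sharp_f, estimate_poses(sharp_f)))
    return dic(U_b, U_f)  # differentiating this map yields the strain field
```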
Further, according to some embodiments of the present invention, an image processing device for measuring strain of an object is provided. The image processing device includes an interface configured to acquire first sequential images and second sequential images, wherein two adjacent images of the first sequential images include first overlap portions, wherein two adjacent images of the second sequential images include second overlap portions, wherein the first sequential images correspond to a first three-dimensional (3D) surface on the object at a first state and the second sequential images correspond to a second 3D surface on the object at a second state (the first state may be referred to as an initial condition, and the second state may be a state after a period of operation); a memory to store computer-executable programs including an image deblurring method, a pose refinement method, a fusion-based correlation method, a strain-measurement method, and an image correction method; and a processor configured to execute the computer-executable programs, wherein the processor performs steps of: deblurring the first sequential and second sequential images to obtain sharp focal plane images based on a blind kernel deconvolution method; stitching the sharpened first sequential images and the sharpened second sequential images into a first sharp 3D image and a second sharp 3D image, respectively, based on camera pose estimations obtained by solving a perspective-n-point (PnP) problem using a refined robust weighted Levenberg-Marquardt (RRWLM) algorithm; forming a first two-dimensional (2D) image and a second 2D image by unfolding, respectively, the first sharp 3D image and the second sharp 3D image; and generating a displacement (strain) map from the first 2D and second 2D images by performing a two-dimensional digital image correlation (DIC) method.
Some embodiments of the present invention provide an end-to-end DIC framework incorporating image fusion into the strain measurement pipeline. It extends the range of DIC-based strain measurement applications to the large curved surfaces of 3D objects.
Further, an embodiment of the present invention provides an image processing method for measuring strain of an object. The image processing method may include acquiring first sequential images and second sequential images, wherein two adjacent images of the first sequential images include first overlap portions, wherein two adjacent images of the second sequential images include second overlap portions, wherein the first sequential images correspond to a first three-dimensional (3D) surface on the object at a first state and the second sequential images correspond to a second 3D surface on the object at a second state; deblurring the first sequential and second sequential images to obtain sharp focal plane images based on a blind deconvolution method; stitching the sharpened first sequential images and the sharpened second sequential images into a first sharp 3D image and a second sharp 3D image, respectively, based on camera pose estimations obtained by solving a perspective-n-point (PnP) problem using a refined robust weighted Levenberg-Marquardt (RRWLM) algorithm; forming a first two-dimensional (2D) image and a second 2D image by unfolding, respectively, the first sharp 3D image and the second sharp 3D image; and generating a displacement (strain) map from the first 2D and second 2D images by performing a two-dimensional digital image correlation (DIC) method.
Yet further, some embodiments of the present invention provide a non-transitory computer readable medium that comprises program instructions that cause a computer to perform a method. In this case, the method may include steps of acquiring first sequential images and second sequential images, wherein two adjacent images of the first sequential images include first overlap portions, wherein two adjacent images of the second sequential images include second overlap portions, wherein the first sequential images correspond to a first three-dimensional (3D) surface on the object at a first state and the second sequential images correspond to a second 3D surface on the object at a second state; deblurring the first sequential and second sequential images to obtain sharp focal plane images based on a blind deconvolution method; stitching the sharpened first sequential images and the sharpened second sequential images into a first sharp 3D image and a second sharp 3D image, respectively, based on camera pose estimations obtained by solving a perspective-n-point (PnP) problem using a refined robust weighted Levenberg-Marquardt (RRWLM) algorithm; forming a first two-dimensional (2D) image and a second 2D image by unfolding, respectively, the first sharp 3D image and the second sharp 3D image; and generating a displacement (strain) map from the first 2D and second 2D images by performing a two-dimensional digital image correlation (DIC) method.
Another embodiment of the present invention proposes a two-stage method based on a PnP method and the bundle adjustment principle for image fusion. Our method outperforms state-of-the-art methods and achieves image fusion accuracy applicable to strain measurement by DIC analysis.
The accompanying drawings, which are included to provide a further understanding of the invention, illustrate embodiments of the invention and together with the description serve to explain the principle of the invention.
Various embodiments of the present invention are described hereafter with reference to the figures. It should be noted that the figures are not drawn to scale; elements of similar structures or functions are represented by like reference numerals throughout the figures. It should also be noted that the figures are only intended to facilitate the description of specific embodiments of the invention. They are not intended as an exhaustive description of the invention or as a limitation on the scope of the invention. In addition, an aspect described in conjunction with a particular embodiment of the invention is not necessarily limited to that embodiment and can be practiced in any other embodiments of the invention.
We consider the strain measurement of a cylinder surface, which is of interest in many applications. For image acquisition, a moving camera captures a sequence of images $\{Y_i\}_{i=1}^{p}$ of the cylindrical surface texture $U_b$ before deformation and $\{Y'_i\}_{i=1}^{q}$ of $U_f$ after deformation, as illustrated in the figures.
Since out-of-focus blur is a common image degradation phenomenon, we consider a six-degree-of-freedom (6-DOF) pinhole camera model with a camera lens point spread function (PSF, i.e., blur kernel) $K \in \mathbb{R}^{(2r_g+1) \times (2r_g+1)}$ modeled as a truncated Gaussian,

$$K(s, t) = C_1 \exp\!\left( -\frac{s^2 + t^2}{2\sigma^2} \right), \quad |s|, |t| \le r_g, \qquad (1)$$

where $r_g$ is the radius, $C_1$ is the normalization term to ensure the energy of the PSF sums to one, and $\sigma$ controls the width of the kernel.
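A minimal sketch of this truncated Gaussian PSF, assuming the standard form reconstructed in (1), with $C_1$ chosen so that the kernel sums to one:

```python
import numpy as np

def gaussian_psf(r_g, sigma):
    """Truncated Gaussian blur kernel of size (2*r_g+1) x (2*r_g+1)."""
    s = np.arange(-r_g, r_g + 1)
    ss, tt = np.meshgrid(s, s, indexing="ij")
    K = np.exp(-(ss**2 + tt**2) / (2.0 * sigma**2))
    return K / K.sum()  # normalization term C1 makes the energy sum to one

K = gaussian_psf(r_g=7, sigma=2.0)  # 15 x 15 kernel
```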
Then the captured images $\{Y_i\}_{i=1}^{p}$ can be modeled as

$$Y_i = K \circledast X_i, \quad i = 1, 2, \ldots, p, \qquad (2)$$

where $\circledast$ denotes the convolution operation, $X_i \in \mathbb{R}^{m \times n}$ is the sharp camera focal plane image, and $p$ is the total number of images. Each pixel $x = [x, y]^T$ in $X_i$ is projected from a point $u = [x_u, y_u, z_u]^T$ on the 3D surface according to the pinhole model

$$v \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = P_g (R u + T), \qquad (3)$$

where $R \in \mathbb{R}^{3 \times 3}$ and $T \in \mathbb{R}^{3}$ are the rotation matrix and the translation vector, respectively, depending on the camera pose of $X_i$; $v$ is a pixel-dependent scalar projecting the point to the focal plane; and $P_g$ is the perspective matrix of the camera.
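A sketch of the projection (3): a surface point $u$ is mapped through the camera pose $(R, T)$ and the perspective matrix $P_g$ to a focal-plane pixel. The intrinsic values in $P_g$ below are assumptions for illustration, not calibration results.

```python
import numpy as np

def project(u, R, T, P_g):
    """Project 3D point u onto the focal plane; returns pixel [x, y]."""
    q = P_g @ (R @ u + T)  # homogeneous coordinates, q = v * [x, y, 1]^T
    return q[:2] / q[2]    # divide out the pixel-dependent scalar v

P_g = np.array([[800.0,   0.0, 300.0],   # fx,  0, cx  (assumed intrinsics)
                [  0.0, 800.0, 250.0],   #  0, fy, cy
                [  0.0,   0.0,   1.0]])
R, T = np.eye(3), np.array([0.0, 0.0, 5.0])
x = project(np.array([0.1, -0.2, 1.0]), R, T, P_g)
```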
Note that each image $Y_i$ ($Y'_i$) in the sequence covers a narrow field of the cylinder surface $U_b$ ($U_f$). Our goal is to recover whole unfolded images of the curved surface from $\{Y_i\}_{i=1}^{p}$ and $\{Y'_i\}_{i=1}^{q}$ such that the strain on the cylindrical surface can be analyzed using 2D DIC. In the following, we introduce our proposed framework, including image deblurring, image fusion, and DIC, as illustrated in the figures.
The goal of this module is to recover the sharp focal plane images $\{X_i\}_{i=1}^{p}$ and the unknown blur kernel $K$ simultaneously from the blurry observations $\{Y_i\}_{i=1}^{p}$ in (2). To this end, we formulate the blind deconvolution problem as

$$\min_{K, \{X_i\}} \sum_{i=1}^{p} \Big( \|Y_i - K \circledast X_i\|_F^2 + \beta \sum_{j} \|D_j X_i\|_2 \Big) + I_g(K), \qquad (4)$$

where $\|\cdot\|_F$ represents the Frobenius norm of a matrix, $I_g(\cdot)$ is the indicator function ensuring $K$ is a truncated Gaussian kernel, $D_j$ represents the derivative of $X_i$ at pixel $j$ in both the x and y directions, and $\beta$ is a weight depending on the noise level of the image $Y_i$. The first term is a data fidelity term; the second is the widely used total variation (TV) regularization term, which preserves the sharpness of the image. Problem (4) is solved by alternating minimization with respect to $K$ and $\{X_i\}_{i=1}^{p}$. In particular, we update $\{X_i\}_{i=1}^{p}$ using circular convolution under a periodic boundary assumption on $\{X_i\}_{i=1}^{p}$, enabling fast computation by FFT.
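A compact sketch of the FFT-based image update exploiting the circular convolution and periodic boundary assumption just described. As an assumption for brevity, the TV term is replaced here by a quadratic (Tikhonov) gradient penalty, which keeps the update a single frequency-domain solve; the actual TV subproblem would be handled iteratively.

```python
import numpy as np

def x_update_fft(Y, K, beta):
    """argmin_X ||Y - K*X||_F^2 + beta*||grad X||^2, periodic boundaries."""
    m, n = Y.shape
    r = K.shape[0] // 2
    K_pad = np.zeros((m, n))
    K_pad[:K.shape[0], :K.shape[1]] = K
    K_pad = np.roll(K_pad, (-r, -r), axis=(0, 1))  # center kernel at (0, 0)
    Kf = np.fft.fft2(K_pad)
    # Frequency responses of circular forward differences in x and y.
    dx = np.zeros((m, n)); dx[0, 0], dx[0, -1] = 1.0, -1.0
    dy = np.zeros((m, n)); dy[0, 0], dy[-1, 0] = 1.0, -1.0
    Df = np.abs(np.fft.fft2(dx))**2 + np.abs(np.fft.fft2(dy))**2
    Xf = np.conj(Kf) * np.fft.fft2(Y) / (np.abs(Kf)**2 + beta * Df + 1e-12)
    return np.real(np.fft.ifft2(Xf))
```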
To obtain a good initialization $K_0$ of the blur kernel, we use a Wiener filter and minimize the normalized sparsity measure over the possible region of $\sigma$:

$$K_0 = K(\hat{\sigma}), \quad \hat{\sigma} = \arg\min_{\sigma} \frac{\|\nabla X^W_{\sigma}\|_1}{\|\nabla X^W_{\sigma}\|_2}, \qquad (5)$$

where $X^W_{\sigma}$ denotes the Wiener-filter deconvolution result obtained with the candidate Gaussian kernel $K(\sigma)$.
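A sketch of this initialization under the reconstruction in (5): for each candidate width $\sigma$, deconvolve with a Wiener filter and score the result by the normalized sparsity measure $\|\nabla X\|_1 / \|\nabla X\|_2$, keeping the minimizer. The Wiener noise-to-signal constant `nsr` and the kernel radius are assumed values.

```python
import numpy as np

def make_psf_freq(sigma, shape, r_g=7):
    """FFT of a centered truncated Gaussian PSF, zero-padded to `shape`."""
    s = np.arange(-r_g, r_g + 1)
    ss, tt = np.meshgrid(s, s, indexing="ij")
    K = np.exp(-(ss**2 + tt**2) / (2.0 * sigma**2)); K /= K.sum()
    pad = np.zeros(shape)
    pad[:2 * r_g + 1, :2 * r_g + 1] = K
    return np.fft.fft2(np.roll(pad, (-r_g, -r_g), axis=(0, 1)))

def wiener_deconv(Y, Kf, nsr=1e-2):
    Xf = np.conj(Kf) / (np.abs(Kf)**2 + nsr) * np.fft.fft2(Y)
    return np.real(np.fft.ifft2(Xf))

def normalized_sparsity(X):
    g = np.concatenate([np.diff(X, axis=1).ravel(), np.diff(X, axis=0).ravel()])
    return np.abs(g).sum() / (np.linalg.norm(g) + 1e-12)

def init_sigma(Y, sigmas):
    scores = [normalized_sparsity(wiener_deconv(Y, make_psf_freq(s, Y.shape)))
              for s in sigmas]
    return sigmas[int(np.argmin(scores))]
```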
In this module, we reconstruct the super-resolution texture over the curved surface of the 3D object using the deblurred sequence of images $\{\hat{X}_i\}_{i=1}^{p}$.
Without loss of generality, we consider the problem of estimating the camera pose of the target deblurred image $\hat{X}_i$ by registering it with an overlapping reference image $\hat{X}_j$ for which the camera pose is known.
First, we acquire the well-known SIFT feature point sets $\Omega_i^{SIFT} = \{x_i\}$ in the target image $\hat{X}_i$ and $\Omega_j^{SIFT} = \{x_j\}$ in the reference $\hat{X}_j$. Then we seek a set of matched feature points $\mathcal{M}(j,i) = \{(x_j^m, x_i^m) \mid x_j^m \in \Omega_j^{SIFT}, x_i^m \in \Omega_i^{SIFT}, m = 1, 2, \ldots\}$ satisfying

$$\|a(x_j^m) - a(x_i^m)\|_2 \le C_2 \min_{x \in \Omega_j^{SIFT} \setminus x_j^m} \|a(x) - a(x_i^m)\|_2, \qquad (6)$$

where $a(x)$ denotes the SIFT feature vector at the pixel $x$, $\Omega_j^{SIFT} \setminus x_j^m$ is the set $\Omega_j^{SIFT}$ excluding $x_j^m$, and $0 < C_2 < 1$ is a constant chosen to remove feature outliers, typically $C_2 = 0.7$.
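A sketch of this matching step using OpenCV's SIFT and the ratio test with $C_2 = 0.7$, assuming the condition reconstructed in (6). `cv2.SIFT_create` requires opencv-python >= 4.4 (or the contrib package in older versions); missing-descriptor edge cases are omitted.

```python
import cv2

def match_features(img_i, img_j, c2=0.7):
    """Return matched (x_j^m, x_i^m) pixel locations between two images."""
    sift = cv2.SIFT_create()
    kp_i, des_i = sift.detectAndCompute(img_i, None)
    kp_j, des_j = sift.detectAndCompute(img_j, None)
    # For each target feature x_i^m, find its two nearest neighbors among
    # the reference features and keep it only if it passes the ratio test.
    knn = cv2.BFMatcher().knnMatch(des_i, des_j, k=2)
    pairs = []
    for cand in knn:
        if len(cand) == 2 and cand[0].distance < c2 * cand[1].distance:
            pairs.append((kp_j[cand[0].trainIdx].pt, kp_i[cand[0].queryIdx].pt))
    return pairs
```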
We project each feature point $x_j^m$ in $\mathcal{M}(j,i)$ to the 3D surface and obtain the corresponding set $\{u_j^m = (x_u^{jm}, y_u^{jm}, z_u^{jm})\}$ using (3) with the pose of $\hat{X}_j$ and the object geometry. The camera pose estimation problem then becomes the widely known PnP problem of estimating the camera pose from the point set $\mathcal{P}(j,i) = \{(u_j^m, x_i^m)\}$.
The PnP problem can usually be formulated as a nonlinear least squares problem. Considering that $r_3 = r_1 \times r_2$ holds in $R = [r_1, r_2, r_3]^T$, we use $h = [r_1^T, r_2^T, T^T]^T$ to denote the unknown parameters of the camera pose. The camera pose $h_i$ associated with $\hat{X}_i$ is then obtained by solving

$$h_i = \arg\min_h \sum_{m=1}^{|\mathcal{P}(j,i)|} w_m \, \| x_i^m - \hat{x}_i(u_j^m, h) \|_2^2, \quad \text{s.t.} \ R R^T = I, \qquad (7)$$

where $\hat{x}_i(u_j^m, h)$ is the projection of the 3D point $u_j^m$ onto the camera focal plane of $\hat{X}_i$ with respect to the camera pose $h$ using (3), $R$ is determined by $h$ as above, and the weight

$$w_m = \| x_i^m - \hat{x}_i(u_j^m, h) \|_2^{-\alpha}$$

represents the inverse of the measurement error for the $m$-th feature pair, for $m = 1, \ldots, |\mathcal{P}(j,i)|$, with typically $\alpha = 0.5$.
To solve this problem, we utilize the widely used Levenberg–Marquardt (LM) algorithm in conjunction with a projection operator $\Pi(\cdot)$ that keeps the rotation matrix $R$ orthonormal. Given the present estimate $h^{(t)}$, one update step $h^{(t+1)} = h^{(t)} + \Delta h$ for (7) by LM can be seen as an interpolation between the greedy (gradient) descent and Gauss–Newton updates, with

$$\Delta h = (H + \lambda \, \mathrm{diag}(H))^{-1} b, \qquad (8)$$

where $H = J^T W J$ is the approximate Hessian matrix and $b = J^T W e$, with $J$ the Jacobian of the residuals in (7), $W$ the diagonal matrix of the weights $w_m$, and $e$ the stacked residual vector; $\lambda$ is a parameter varying with iterations to determine the interpolation level accordingly.
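A sketch of one robust weighted LM update per (8), using a numerical Jacobian for brevity. `residuals(h)` is a hypothetical callable stacking the reprojection errors $x_i^m - \hat{x}_i(u_j^m, h)$; the weights follow the inverse-error rule with $\alpha = 0.5$ described above, and the orthonormalizing projection $\Pi(\cdot)$ would be applied to the result.

```python
import numpy as np

def lm_step(h, residuals, lam, alpha=0.5, eps=1e-6):
    """One Levenberg-Marquardt step for the weighted PnP objective (7)."""
    e = residuals(h)                                  # shape (2M,), stacked
    per_pair = np.linalg.norm(e.reshape(-1, 2), axis=1)
    w = 1.0 / (per_pair**alpha + 1e-12)               # robust inverse-error weights
    W = np.repeat(w, 2)                               # one weight per coordinate
    J = np.empty((e.size, h.size))
    for k in range(h.size):                           # forward-difference Jacobian
        hp = h.copy(); hp[k] += eps
        J[:, k] = (residuals(hp) - e) / eps
    H = J.T @ (W[:, None] * J)                        # H = J^T W J
    b = -J.T @ (W * e)                                # b = -J^T W e
    dh = np.linalg.solve(H + lam * np.diag(np.diag(H)), b)
    return h + dh                                     # follow with projection
```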
The projection operator $\Pi(h)$ is defined to orthonormalize $r_1$ and $r_2$. We revise the method that approximately apportions half of the error $\epsilon = r_1^T r_2$ to $r'_1$ and $r'_2$ as

$$r'_1 = r_1 - \frac{\epsilon}{2} r_2, \qquad r'_2 = r_2 - \frac{\epsilon}{2} r_1,$$

with the output orthonormalized $r_1$, $r_2$ being $r'_1 / \|r'_1\|_2$ and $r'_2 / \|r'_2\|_2$.
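A minimal sketch of this projection operator, assuming the half-error apportionment reconstructed above:

```python
import numpy as np

def orthonormalize(r1, r2):
    """Apportion half of the inner-product error to each vector, then normalize."""
    err = r1 @ r2
    r1p = r1 - 0.5 * err * r2
    r2p = r2 - 0.5 * err * r1
    return r1p / np.linalg.norm(r1p), r2p / np.linalg.norm(r2p)
```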
For each image $\hat{X}_i$ in the sequence $\{\hat{X}_i\}_{i=2}^{p}$, using the previous image $\hat{X}_{i-1}$ as the reference image, we estimate its camera pose $h_i$ by iteratively updating the pose with the matching feature set $\mathcal{P}(i-1, i)$, each iteration applying (8) followed by the projection operation and an evaluation step:

$$h^{(t+1)} = \Pi\big( h^{(t)} + \Delta h \big). \qquad (9)$$
Motivated by the bundle adjustment principle, we propose to further refine the camera pose estimations to take advantage of more useful matching feature pairs. For the $i$-th image $\hat{X}_i$, we search for feature pairs in all the previous images and form the index set $\mathcal{I}_i = \{ l \mid l < i, \ \hat{X}_l \cap \hat{X}_i \neq \emptyset \}$ of images overlapping with $\hat{X}_i$. Using the same condition (6) for feature point matching between the target image $\hat{X}_i$ and each image with an index in $\mathcal{I}_i$, we obtain the union of matching feature sets $\bigcup_{j \in \mathcal{I}_i} \mathcal{P}(j, i)$, over which (7) is solved again to refine the pose estimate $h_i$.
From the previous modules, we obtain the reference image $\hat{U}_b$ and the deformed image $\hat{U}_f$, each covering a large visual field of the 3D surface, from the two input sequences $\{Y_i\}_{i=1}^{p}$ and $\{Y'_i\}_{i=1}^{q}$ of narrow visual fields, respectively. The basic principle of DIC is tracking chosen points between the two images recorded before and after deformation to obtain the displacement. Sub-pixel displacement can be computed by tracking pixels on a sparse grid defined on the reference image, thanks to feature tracking methods. Under the assumption that the displacement is small, as in most engineering applications, our DIC module computes the strain from the displacement at different smoothing levels.
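A sketch of the final strain computation: with displacement fields tracked on a grid of the unfolded reference image, the small-deformation strain components are the spatial derivatives of the displacements. The grid spacing `dx` is an assumed parameter.

```python
import numpy as np

def strain_from_displacement(u, v, dx=1.0):
    """u, v: 2D arrays of x- and y-displacements on a regular grid."""
    du_dy, du_dx = np.gradient(u, dx)   # axis 0 = y, axis 1 = x
    dv_dy, dv_dx = np.gradient(v, dx)
    exx = du_dx                         # normal strain in x
    eyy = dv_dy                         # normal strain in y
    exy = 0.5 * (du_dy + dv_dx)         # shear strain
    return exx, eyy, exy
```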
For the 3D surface under test, two sequences of images are captured by a moving camera, before and after deformation, respectively, as illustrated in the figures.
For super-resolution reconstruction of the surface texture, the camera moves in a snake scan pattern, taking 5 images as it moves along the axial direction, then stepping in the tangential direction for the next 5 images along the axial direction, and so on. We collect a total of p = 160 images of size m × n = 500 × 600 for each sequence. Both sequences cover the same area, about 60 degrees of the cylinder surface, with slightly different camera starting positions before and after deformation; the procedure can be directly extended to the full 360° surface.
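A small sketch of this snake-scan acquisition pattern: 5 stops along the axial direction per pass, reversing the axial order each pass, stepping tangentially between passes. The spacing values are illustrative assumptions chosen so that 32 passes cover about 60 degrees, matching the 160-image sequence.

```python
import numpy as np

def snake_scan(n_axial=5, n_tangential=32, d_axial=1.0, d_theta_deg=1.875):
    """Return (theta_deg, z) camera stops covering ~60 deg of the cylinder."""
    stops = []
    for t in range(n_tangential):
        zs = range(n_axial) if t % 2 == 0 else reversed(range(n_axial))
        for z in zs:  # reverse every other pass -> snake pattern
            stops.append((t * d_theta_deg, z * d_axial))
    return np.array(stops)

print(snake_scan().shape)  # (160, 2): one stop per captured image
```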
To examine our proposed framework and the essential PnP method for image fusion, we consider five baseline methods: a classical iterative method, LHM, and four state-of-the-art non-iterative methods, EPnP+GN, OPnP+LM, ASPnP, and REPPnP, the last of which rejects outliers. For comparison, we denote the non-refined estimation process using (9) as robust weighted LM (RWLM) and the refined robust weighted LM as RRWLM in Alg. 1, as shown in the figures.
First, using only the first 10 images of each sequence, i.e., $\{\hat{X}_i\}_{i=1}^{10}$ and $\{\hat{X}'_i\}_{i=1}^{10}$ for the reference and deformed textures, we show the average camera pose estimation errors and the average PSNR of the stitched surface texture images $\hat{U}_b$ and $\hat{U}_f$, with comparison to the three best baseline methods, in the figures.
For illustration, the image fusion results for the reference texture $\hat{U}_b$ via the proposed RRWLM are shown in the figures.
Accordingly, some embodiments of the present invention provide an end-to-end fusion-based DIC framework for 2D strain measurement along the large curved surfaces of 3D objects. To address the challenge of a single image's narrow visual field of the surface, we incorporate the image fusion principle and decouple the image fusion problem into a sequence of perspective-n-point (PnP) problems. The proposed PnP method, in conjunction with bundle adjustment, accurately recovers the 3D surface texture stitched from a large number of images and achieves strain measurement accuracy applicable to the DIC method. Numerical experiments demonstrate that it outperforms existing methods.
The above-described embodiments of the present invention can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. Such processors may be implemented as integrated circuits, with one or more processors in an integrated circuit component. Though, a processor may be implemented using circuitry in any suitable format.
Also, the embodiments of the invention may be embodied as a method, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
Use of ordinal terms such as “first” and “second” in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another, or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the invention.
Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
The strain measurement system 100 may include a network interface controller (interface) 110 configured to receive images from a camera/sensor 141 and to output images to a display 142. The camera/sensor 141 is configured to take overlapping images of the surface of interest 140.
Further, the strain measurement system 100 may include a memory/CPU unit 120 that stores computer-executable programs in a storage 200. The computer-executable programs/algorithms may include an image deblurring unit 220, an image stitching unit 230, a digital image correlation (DIC) unit 240, and an image displacement map unit 250. The computer-executable programs are configured to connect with the memory/CPU unit 120, which accesses the storage 200 to load them.
Further, the memory/CPU unit 120 is configured to receive images (data) from the camera/sensor 151 or an image data server 152 via a network 150 and to perform the displacement measurement discussed above.
Further, the strain measurement system 100 may include at least one camera arranged to capture images of the surface of interest 140, and the at least one camera may transmit the captured images to a display device 142 via the interface.
Number | Date | Country
---|---|---
63091491 | Oct 2020 | US