The present disclosure relates to 3D imaging. More specifically, the present disclosure is concerned with a method and a system for real-time high-speed three-dimensional surface imaging.
Three-dimensional (3D) surface imaging is used in a range of fields, including machine vision [1], remote sensing [2], biomedical engineering [3], and entertainment [4]. Demands for higher spatial resolution, a larger field of view (FOV), and a higher imaging speed have generated a number of 3D surface imaging methods [5-7].
Among them, structured-light profilometry (SLP) has become the mainstream technique [8]. In a typical setup, a projector delivers structured patterns onto an object. The structured patterns, deformed by the 3D surface of the object, are captured by a camera. The 3D information of the object is then extracted by analyzing the deformation of the patterns with respect to reference phase profiles. Among existing choices of structured patterns in SLP [9], grayscale sinusoidal fringes are most widely used because they provide 3D data with both high spatial resolution and high depth accuracy [10] and are well suited to high-speed 3D measurements [11]. Digital micromirror devices (DMDs) are commonly used to generate sinusoidal fringes. Each micromirror on a DMD can be independently tilted to either +12° or −12° from its surface normal to generate binary patterns at up to tens of kilohertz (kHz). By controlling the overall reflectance via temporal dithering, the DMD can produce grayscale sinusoidal patterns [12]. DMD-based SLP is flexible in system development and accurate in 3D measurements [13-16].
Development of DMD-based SLP has allowed high-speed 3D surface imaging in real time, defined as image acquisition, processing, and display during the occurrence of dynamic events [17]. Nonetheless, most existing systems still have limitations in fringe pattern generation, image acquisition, and image processing. Micromirror dithering clamps the speed of generating 8-bit grayscale images to around 100 Hz [18], which excludes 3D profilometry of a range of moving objects, such as beating hearts and vibrating membranes [19, 20]. This limitation can be mitigated by binary defocusing [21-23], which generates a pseudo-sinusoidal pattern at a plane unconjugated to the DMD by slightly defocusing the projector. However, this method compromises the depth sensing range [24] and is less flexible when used with binary patterns of different periods. Meanwhile, the uneven surface of the DMD [25] may distort the defocused sinusoidal patterns at the unconjugated plane, which may decrease measurement accuracy, especially under coherent illumination.
Image acquisition devices and image processing modules in existing SLP systems also have limitations. Most high-speed cameras deployed in SLP, despite their ultra-high imaging speeds, are not equipped with an interface fast enough to transfer data in real time. This bottleneck also hampers on-line processing software developed on graphics processing units (GPUs) [26]. As a result, these systems cannot continuously stream data, further limiting their application scope to highly synchronous events.
There is still a need in the art for a method and a system for real-time high-speed three-dimensional surface imaging.
More specifically, in accordance with the present invention, there is provided a method for 3D surface imaging of an object, comprising generating a sinusoidal pattern from an input beam; projecting the sinusoidal pattern onto the object; acquiring deformed structured images from the object; and reconstructing and displaying the 3D surface of the object in real time.
There is further provided a system for real-time high-speed 3D surface imaging of an object, comprising a device encoding an input beam with a structured pattern; a filter configured to spatially filter the encoded beam; a projector lens projecting the structured pattern onto the object; a high-speed camera acquiring the structured pattern deformed by the 3D surface of the object; a graphics processing unit; and a CoaXPress interface transferring data acquired by the camera to the graphics processing unit; the graphics processing unit reconstructing and displaying the 3D surface of the object in real time.
Other objects, advantages and features of the present invention will become more apparent upon reading of the following non-restrictive description of specific embodiments thereof, given by way of example only with reference to the accompanying drawings.
In the appended drawings:
The present invention is illustrated in further details by the following non-limiting examples.
In a system as illustrated in
The illumination source is a continuous-wave laser 12 with an output power of at least 200 mW and a wavelength selected according to the camera's response, in a range between 420 and 700 nm. The beam expander 20 has a magnification of more than 8×, a maximum input beam diameter of more than 1.2 mm, and a wavelength range selected to cover the illumination wavelength. The mirrors M1 and M2, with a diameter of at least 2″, have a wavelength range selected to cover the illumination wavelength.
The encoding device 18 is a spatial light modulator such as a digital micromirror device (DMD), with a resolution of more than 1 megapixel and a full-frame refresh rate of at least 5 kHz.
The two double-convex lenses L3 and L4, of a diameter of at least 2″ and a focal length between 100 and 200 mm, have a wavelength range selected to cover the illumination wavelength. Optical low-pass filtering is achieved by a pinhole 22 with a diameter larger than 125 μm.
The projector lens 24 has a focal length between 18 and 55 mm and a maximum aperture of f/3.5-5.6. The high-speed camera 14 has a frame rate of at least 5,000 frames/second and more than 250 k pixels on the sensor.
A beam profiler 16 at the image plane of the 4f system is used to test the projected pattern P1. From experimental data, curve-fitting tools are used to compare the projected binary pattern P1 and a target profile P2 defined as a sinusoidal intensity constrained by a Gaussian envelope (Gaussian function×sine function). The fitting results are as shown in
After testing the projected pattern P1, the system and method were applied to a 3D reconstruction experiment with static target objects. The camera resolution was set to be 1280×860 pixels. The four-step phase shift method was used to extract the phase value (λ=80 pixels). 3D point cloud reconstruction of static toy animals in
The system and method are thus shown to allow grayscale sinusoidal pattern formation from a single static programmed binary pattern by a high-precision laser beam shaping system based on a DMD. An error diffusion algorithm may be applied in the binary pattern generation, as sketched below. The method may be used to project sinusoidal patterns on the image plane without mirror dithering.
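As an illustration only, a minimal halftoning sketch follows, using the classic Floyd-Steinberg error diffusion kernel; the algorithm referenced in the present disclosure [27] is a gradient-based adaptive variant, so this fixed-kernel version merely conveys the binarization principle.

```python
import numpy as np

def floyd_steinberg_binarize(gray):
    """Binarize a grayscale image in [0, 1] by diffusing quantization error."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = int(new)
            err = old - new
            # Classic Floyd-Steinberg weights (7, 3, 5, 1)/16
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out

# Example: a fringe with a 60-pixel period, as used in the experiment below.
xx = np.arange(1024)
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * xx / 60)
pattern = floyd_steinberg_binarize(np.tile(fringe, (768, 1)))
```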
Band-limited projection is used in structured illumination. It enables sinusoidal pattern generation using a single binary pattern loaded onto the DMD. In addition, the sinusoidal pattern is formed at a fixed image plane, regardless of the spatial period of the sinusoidal patterns. In this regard, the present method solves the aforementioned problems of the binary defocusing method. In addition, by leveraging the high refresh rate of the DMD, the present method can project sinusoidal patterns at a speed of several thousand frames per second, which enables kilohertz-level 3D profilometry.
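A minimal numerical sketch of this band-limited projection follows, modeling the pinhole as a circular low-pass filter on the coherent field; the normalized cutoff is an illustrative parameter, not the exact system bandwidth.

```python
import numpy as np

def band_limited_projection(binary_pattern, cutoff):
    """Coherent low-pass filtering: keep spatial frequencies inside a circular
    'pinhole' of normalized radius `cutoff`, then return the image intensity."""
    field = np.fft.fftshift(np.fft.fft2(binary_pattern))
    h, w = binary_pattern.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]  # cycles/pixel, vertical
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]  # cycles/pixel, horizontal
    field[fx**2 + fy**2 > cutoff**2] = 0              # circular pinhole mask
    return np.abs(np.fft.ifft2(np.fft.ifftshift(field)))**2
```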
The present system and method of
In a system according to another embodiment of an aspect of the present disclosure illustrated in
The projector lens 24 projects these fringes onto the 3D object 26 with a field of view (FOV) of 100 mm×100 mm. The deformed structured images scattered from the object 26 are captured by a high-speed CMOS camera 14 at 5 kHz with 512×512 pixels in each frame. The captured images are transferred to a host computer 30 via a CoaXPress interface 32 connected to a frame grabber (such as a Cyton-CXP, Bitflow, for example). GPU-based image processing software is used for 3D image reconstruction in real time.
The CoaXPress interface 32, with a data transfer speed of at least 25 Gbps for example, is used for high-speed data acquisition. The GPU, with a memory speed of at least 8 Gbps, a memory bandwidth of at least 192 GB/s, and at least 1152 NVIDIA CUDA® cores for example, allows real-time data processing.
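A minimal sketch of the resulting acquisition-to-display loop is given below; the frame-grabber calls (`grabber.is_streaming()`, `grabber.next_frame()`) and the `process_on_gpu` and `display` callbacks are hypothetical placeholders, since the disclosure does not name a specific software API.

```python
import collections
import numpy as np

SEQUENCE = 5  # four phase-shifted fringe images + one vertical stripe pattern

def stream_and_reconstruct(grabber, process_on_gpu, display):
    """Continuously stream fringe images and reconstruct depth in real time."""
    buffer = collections.deque(maxlen=SEQUENCE)
    while grabber.is_streaming():            # hypothetical grabber API
        frame = grabber.next_frame()         # DMA'd into RAM by the grabber
        buffer.append(np.asarray(frame))
        if len(buffer) == SEQUENCE:
            # Wrapped phase -> unwrapping -> (x, y, z), all on the GPU
            depth_map = process_on_gpu(list(buffer))
            display(depth_map)               # e.g., 60 Hz full-frame display
            buffer.clear()                   # start the next 5-image sequence
```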
In a method for real-time, high-speed 3D surface imaging in a CoaXPress-interfaced band-limited illumination profilometry (CI-BLIP) according to an embodiment of an aspect of the present disclosure, band-limited illumination is thus used to generate and project high-quality sinusoidal fringe patterns at 5 kHz. The set of target images contains four grayscale sinusoidal fringe patterns (648-μm period) with phase shifts from 0 to 3π/2 with a step of π/2 as well as one vertical stripe pattern. Each individual grayscale fringe pattern is first converted to its corresponding binary DMD pattern using an adaptive error diffusion algorithm [27] as described in Supplementary section Note 1 hereinbelow and
Moreover, combining a CoaXPress-interfaced high-speed camera with GPU-based software provides a synergy between the high-speed camera interface and the GPU-based processing that allows real-time high-speed 3D image reconstruction. The CXP cable 32, with a bandwidth of 25 Gbps, enables continuous streaming of fringe images to the host computer 30 during image acquisition at 5 kHz. These images are stored into RAM via direct memory access operations independent of the CPU. After the vertical stripe pattern is identified, the calibration frame and the four fringe patterns are processed on the GPU, beginning with the parallelized extraction of wrapped phase and quality map information (see Supplementary section Notes 2-5 and
To quantify the depth resolution of the CoaXPress-interfaced band-limited illumination profilometry of the present disclosure, two stacked planar surfaces offset by about 5° were imaged (
Various static 3D objects were imaged to assess the feasibility of CoaXPress-interfaced band-limited illumination profilometry (CI-BLIP). First, three interlocking brick toys, with studs of respective heights of 23 mm, 15 mm, and 3 mm, were measured. Reconstructed results and selected centerlines are plotted in
To assess real-time high-speed 3D surface profilometry, CoaXPress-interfaced band-limited illumination profilometry (CI-BLIP) was used to image an electric fan rotating at 300 revolutions per minute. The fan has seven blades, each having a tilt angle of about 30°.
The method and system were further used to image flapping dynamics of a flag with a maple leaf pattern (80 mm×50 mm in size) mounted on a pole and exposed to a strong wind produced by an air blower.
A CoaXPress-interfaced band-limited illumination profilometry (CI-BLIP) method according to an embodiment of an aspect of the present disclosure comprises generating a binary pattern, in which a grayscale sinusoidal pattern is processed by an adaptive error diffusion algorithm into a corresponding binary pattern that can be displayed on the DMD at its maximum refresh rate (step 110). Band-limited illumination of the object then makes use of a 4f imaging system with a pinhole on the Fourier plane to filter high-spatial-frequency noise, as well as a projector lens to project the pattern displayed on the DMD; each binary pattern displayed on the DMD is thus projected as a high-quality grayscale sinusoidal image (step 120). A calibration step may comprise imaging a checkerboard at multiple positions and computing the spatial relationship with a calibration toolbox in MATLAB, so as to determine the extrinsic and intrinsic parameters of the pinhole models of the camera and projector (step 130). In a data acquisition step (step 140), a CoaXPress interface between the camera and the host computer continuously streams data at 25 Gbps, so that images carrying the structural information of the object are acquired and transferred to the GPU. In a reconstruction step (step 150), the absolute phase value is calculated by GPU parallel processing and converted into spatial information using the calibration data. The 3D information (x, y, z) of the object's surface is extracted from each full sequence (five images) of structured pattern illumination. The solved depth values are linearly mapped to an 8-bit range suitable for real-time display, the 3D dynamics of objects being displayed at 60 Hz (full frame), with single-point depth tracking at 1 kHz.
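For the display step (step 150), the linear 8-bit mapping may be sketched as follows; the display range [z_min, z_max] is an assumed input.

```python
import numpy as np

def depth_to_uint8(z, z_min, z_max):
    """Linearly map solved depth values to an 8-bit range for real-time display."""
    z8 = 255.0 * (z - z_min) / (z_max - z_min)
    return np.clip(z8, 0, 255).astype(np.uint8)
```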
There is thus provided a CoaXPress-interfaced band-limited illumination profilometry (CI-BLIP) method that integrates band-limited illumination, CoaXPress-interfaced high-speed image acquisition, and GPU-based image reconstruction. The present high-speed structured-light profilometry (SLP) system and method allow real-time 3D position tracking at 1 kHz, which may be further improved by using a high-power laser, a faster camera interface [32], and advanced phase unwrapping algorithms [33].
The present method and system may be used in a range of applications, including in-situ industrial inspection, dynamic biometric analysis, and vibration measurement for acoustics.
They may be combined with studies of fluid-structure interactions. The mechanisms that govern the flag-wind interaction have generated scientific interest in diverse studies [37], including animal ethology [38], paper engineering [39], and hydroelectric power generation [40]. Due to the complex geometries and freely moving boundaries involved [31], studying the gravity-induced effects in flag-fluid interaction remains a challenge. Thus far, although many simulation studies have investigated the relation between gravity and the stability of flags under different airflow conditions [41-43], experimental visualization of real-time flag movement is still a challenge. A limited number of experiments were conducted using low flow speeds [44, 45] and rigid flags, which restricted the analysis of fluid dynamics to 2D.
In contrast, the present CoaXPress-interfaced band-limited illumination profilometry (CI-BLIP) system and method allow 3D analysis of a travelling wave, the gravity-induced phase mismatch, and the asymmetric fluctuation of a non-rigid flapping flag at 1 kHz. The ability of the present CI-BLIP system and method to continuously stream acquired data may also contribute to imaging unpredictable or non-repeatable flow dynamics.
Design of binary patterns: The gradient-based adaptive error diffusion (GAED) algorithm [27], a halftoning technique, was used to design the binary patterns that were loaded onto the digital micromirror device (DMD). In the experiment, four grayscale sinusoidal patterns, with a period of 60 DMD pixels and with phase shifts of 0, π/2, π, and 3π/2, were each converted to a corresponding binary pattern (Supplementary
Determination of the bandwidth of the system: The bandwidth of the system is controlled by a pinhole acting as a low-pass filter. The diameter of the pinhole, D, is calculated by

D=λf3/Pf, (S1)
where Pf=648 μm denotes the fringe period (which equals 60 DMD pixels), λ=671 nm is the laser wavelength, and f3=120 mm is the focal length of L3. Thus, the required pinhole diameter is D=124.26 μm. In the experiment, a 150-μm pinhole was used to conduct the optical low-pass filtering. The corresponding normalized system bandwidth is 0.01. It is worth noting that the actual pinhole size is slightly larger than the calculated value so that all spatial frequency content of the sinusoidal fringe pattern is guaranteed to pass through it.
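A worked check of this calculation, assuming the relation D = λf3/Pf reconstructed above from the stated values:

```python
# Pinhole diameter check, under the assumption D = lambda * f3 / Pf.
wavelength = 671e-9   # m, laser wavelength
f3 = 120e-3           # m, focal length of L3
Pf = 648e-6           # m, fringe period (60 DMD pixels)
D = wavelength * f3 / Pf
print(D * 1e6)        # -> 124.26 um; a 150-um pinhole is used in practice
```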
The geometrical configuration of the method, following that of conventional structured-light profilometry, is shown in
Mathematically, the relationship between a 3D point P=[X Y Z]T and its locations on the 2D camera sensor (x, y) and DMD (xP, yP) is described as

s[x y 1]T=A[R T][P 1]T, sP[xP yP 1]T=AP[RP TP][P 1]T, (S2)
where [x y 1]T and [xP yP 1]T are the homogeneous coordinates of the camera pixel and the DMD pixel respectively, {R, RP} are 3×3 rotation matrices, {T, TP} are 3×1 translation vectors, [P 1]T=[X Y Z 1]T are the homogeneous coordinates of P, and {s, sP} are arbitrary scaling factors. The matrices [R, T] and [RP, TP] describe the “extrinsic” properties of the camera and projector by defining their positions and orientations relative to the world coordinate system. On the other hand, the 3×3 matrices {A, AP} describe the “intrinsic” image forming properties of the pinhole camera/projector, and each takes the form:

A=[fx a x0; 0 fy y0; 0 0 1], (S3)
where {fx, fy} describe separate horizontal and vertical focal lengths, (x0, y0) identify the coordinates of the principal point of the image plane, and a accounts for (possible) pixel skewness. Together, {A, R, T} form a set of calibration parameters that must be estimated separately for both the camera and projector. With knowledge of the calibration parameters, and once (x, y) and xP are determined, Equation (S2) can be rearranged into a single matrix equation of the form [x y xP]T=M P, from which the 3D point P is recovered via matrix inversion.
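A minimal sketch of this recovery step follows, where C = A[R T] and CP = AP[RP TP] are the 3×4 camera and projector matrices assembled from the calibration parameters; the calibration inputs are assumed to be available.

```python
import numpy as np

def triangulate(C, CP, x, y, xP):
    """Recover the 3D point P from a camera pixel (x, y) and its projector
    column xP: each projection equation fixes one linear equation in P."""
    rows = np.stack([
        C[0] - x * C[2],     # from x  = (C_1 . [P;1]) / (C_3 . [P;1])
        C[1] - y * C[2],     # from y  = (C_2 . [P;1]) / (C_3 . [P;1])
        CP[0] - xP * CP[2],  # from xP = (CP_1 . [P;1]) / (CP_3 . [P;1])
    ])
    M, b = rows[:, :3], rows[:, 3]
    return np.linalg.solve(M, -b)  # P = -M^{-1} b, i.e., matrix inversion
```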
In phase shifting profilometry, 3D measurement begins with the capture of N fringe images of the 3D scene under illumination from N fringe patterns. Because of equal fringe shifting, a pixel at camera coordinate (x, y) will report a sinusoidally varying intensity across the acquired images. This intensity response can be considered as
Ik(x,y)=I′(x,y)+I″(x,y)cos [φ(x,y)−2πk/N], (S4)
Here Ik(x, y) represents the camera recorded intensity from the pixel at coordinates (x, y) in the kth image. I′(x, y) and I″(x, y) represent the average pixel intensity and intensity modulation depth due to fringe projection, φ(x, y) represents a phase offset, and k=0, 1, . . . , N−1. In particular, φ(x, y), encoded by the corresponding location of the pixel within the field of view of the projector, enables 3D reconstruction. Across all image pixels, the three parameters φ(x, y), I′(x, y), and I″(x, y) can be found as

φ(x,y)=arctan[Σk Ik(x,y) sin(2πk/N)/Σk Ik(x,y) cos(2πk/N)], (S5)

I′(x,y)=(1/N)Σk Ik(x,y), (S6)

I″(x,y)=(2/N)√{[Σk Ik(x,y) sin(2πk/N)]²+[Σk Ik(x,y) cos(2πk/N)]²}. (S7)
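A vectorized sketch of Equations (S5)-(S7), assuming the N camera frames are available as NumPy arrays:

```python
import numpy as np

def phase_shift_decode(images):
    """images: list of N frames captured under fringes shifted by 2*pi*k/N."""
    I = np.stack([im.astype(np.float64) for im in images])  # shape (N, H, W)
    N = I.shape[0]
    k = np.arange(N).reshape(-1, 1, 1)
    S = np.sum(I * np.sin(2 * np.pi * k / N), axis=0)
    C = np.sum(I * np.cos(2 * np.pi * k / N), axis=0)
    phi = np.arctan2(S, C)                  # wrapped phase, Eq. (S5)
    I_avg = I.mean(axis=0)                  # I'(x, y), Eq. (S6)
    I_mod = 2.0 / N * np.sqrt(S**2 + C**2)  # I''(x, y), Eq. (S7); quality map
    return phi, I_avg, I_mod
```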
Since values of φ(x,y) are fundamentally determined only up to multiples of 2π, Equation (S5) provides wrapped values in the interval (−π, π]. Importantly, since Equations (S5)-(S7) solve for three parameters, it is generally required that N≥3. In the present work, N=4 (see
To perform 3D reconstruction on objects, it is necessary to unwrap and possibly offset the values of φ(x,y) to obtain an absolute phase that can be converted to the coordinates of a pixel within the field of view of the projector. The weighted discrete cosine transform based phase unwrapping procedure [29, 28] was used. For this phase unwrapping procedure, the modulation values I″(x,y) were used to produce quality maps to guide unwrapping. To obtain absolute phase, a single vertical fringe was used as an additional pattern. Thresholding the values in this additional image allowed for a subset of the unwrapped phase map to be associated with the center of the DMD projector. Absolute phase was then obtained relative to the center of the DMD by averaging unwrapped phase values over the subset to obtain a phase offset value φ0 that was subtracted from the unwrapped phase. Horizontal projector coordinates xP (x, y) were then obtained as a linear function of the absolute phase by:

xP(x,y)=(PP/2π)φ̃(x,y)+Nx/2, (S8)
where φ̃(x, y) is the unwrapped phase, PP is the fringe period (in pixels), and Nx is the width of the DMD (in pixels).
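A minimal sketch of the offset subtraction and the mapping of Equation (S8); the mask of pixels associated with the DMD center (obtained by thresholding the vertical stripe image) is an assumed input:

```python
import numpy as np

def phase_to_projector_x(phi_unwrapped, center_mask, PP, Nx):
    """Map unwrapped phase to horizontal projector coordinates xP(x, y)."""
    phi0 = phi_unwrapped[center_mask].mean()  # offset from DMD-center pixels
    phi_abs = phi_unwrapped - phi0            # absolute phase
    return (PP / (2 * np.pi)) * phi_abs + Nx / 2.0
```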
Displayed in
Camera calibration: Camera calibration was conducted to extract the intrinsic parameters of the camera model. A 9×11 black/white checkerboard pattern was used, on which each square was 8 mm×8 mm in size (
Projector calibration: In projector calibration, the key concept is to enable the projector to “capture” images like a camera, thus making its calibration the same as that of a camera. The method [50] involved capturing additional images of the checkerboard pattern under illumination by both horizontally and vertically shifted fringe patterns (
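An equivalent sketch of the camera-side calibration using OpenCV in place of the MATLAB toolbox (an assumed substitution); a 9×11-square checkerboard is taken to have 8×10 inner corners:

```python
import cv2
import numpy as np

def calibrate_camera(checkerboard_images, pattern=(8, 10), square_mm=8.0):
    """Estimate intrinsics A and per-view extrinsics [R, T] from grayscale
    checkerboard views captured at multiple poses."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = square_mm * np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)
    obj_pts, img_pts = [], []
    for image in checkerboard_images:
        found, corners = cv2.findChessboardCorners(image, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, checkerboard_images[0].shape[::-1], None, None)
    return A, dist, rvecs, tvecs
```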
where A=42, T=400, t0=100, B=241. As illustrated in
where g=9.80665 m/s². Ttheory was calculated to be 401.4 ms, which closely agrees with the experimental result.
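As a worked check, assuming the standard simple-pendulum relation Ttheory = 2π√(l/g) (the exact relation and the pendulum length are elided in this excerpt), the stated 401.4 ms corresponds to an effective length of about 40 mm:

```python
# Worked check under an assumption: if Ttheory follows T = 2*pi*sqrt(l/g),
# the stated 401.4 ms implies an effective length l of about 40 mm.
import math

g = 9.80665                  # m/s^2, as given
T_theory = 0.4014            # s
l = g * (T_theory / (2 * math.pi)) ** 2
print(l)                     # ~0.0400 m
```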
The scope of the claims should not be limited by the embodiments set forth in the examples but should be given the broadest interpretation consistent with the description as a whole.
This application is a National Entry Application of PCT application no. CA2020/050149 filed on Feb. 6, 2020 and published in English under PCT Article 21(2), which itself claims benefit of U.S. provisional application Ser. No. 62/801,707, filed on Feb. 6, 2019. All documents above are incorporated herein in their entirety by reference.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CA2020/050149 | 2/6/2020 | WO |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2020/160666 | 8/13/2020 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
4294544 | Altschuler | Oct 1981 | A |
20050219552 | Ackerman | Oct 2005 | A1 |
20140064603 | Zhang | Mar 2014 | A1 |
20150077535 | Izatt | Mar 2015 | A1 |
20180227571 | Page | Aug 2018 | A1 |
20180262684 | Lowry | Sep 2018 | A1 |
20190261843 | Guan | Aug 2019 | A1 |
20200278250 | Scarcelli | Sep 2020 | A1 |
20200342205 | Park | Oct 2020 | A1 |
Entry |
---|
PCTCA2020050149-ib311-000001-EN, Notification concerning availability of the publication of the international application, Aug. 13, 2020. |
Jean-Yves Bouguet, Camera Calibration Toolbox for Matlab, Last updated Oct. 14, 2015. |
M. Akutsu, Y. Oikawa, and Y. Yamasaki, “Extract voice information using high-speed camera,” in Proceedings of Meetings on Acoustics ICA2013, (ASA, 2013), 055019. |
S. Alben and M. J. Shelley, “Flapping states of a flag in an inviscid fluid: bistability and the transition to chaos,” Physical review letters 100, 074301 (2008). |
M. Argentina and L. Mahadevan, “Fluid-flow-induced flutter of a flag,” Proceedings of the National Academy of Sciences 102, 1829-1834 (2005). |
S. Banerjee, B. S. Connell, and D. K. Yue, “Three-dimensional effects on flag flapping dynamics,” Journal of Fluid Mechanics 783, 103-136 (2015). |
J.-A. Beraldin, F. Blais, P. Boulanger, L. Cournoyer, J. Domey, S. El-Hakim, G. Godin, M. Rioux, and J. Taylor, “Real world modelling through high resolution digital 3D imaging of objects and structures,” ISPRS Journal of Photogrammetry and Remote Sensing 55, 230-250 (2000). |
A. Chatterjee, P. Singh, V. Bhatia, and S. Prakash, “Ear biometrics recognition using laser biospeckled fringe projection profilometry,” Optics & Laser Technology 112, 368-378 (2019). |
K. L. Chung, S. C. Pei, Y. L. Pan, W. L. Hsu, Y. H. Huang, W. N. Yang, and C. H. Chen, “A gradient-based adaptive error diffusion method with edge enhancement,” Expert Systems with Applications 38, 1591-1601 (2011). |
B. S. Connell and D. K. Yue, “Flapping dynamics of a flag in a uniform stream,” Journal of fluid mechanics 581, 33-67 (2007). |
P. W. W. Fuller, “An introduction to high speed photography and photonics,” The Imaging Science Journal, vol. 57, 2009. |
J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photon. 3, 128-160 (2011). |
D. C. Ghiglia and L. A. Romero, “Robust two-dimensional weighted and unweighted phase unwrapping that uses fast transforms and iterative methods,” JOSA A 11, 107-117 (1994). |
S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?,” Optics and lasers in engineering 48, 133-140 (2010). |
M. Gosta and M. Grgic, “Accomplishments and challenges of computer stereo vision,” in Proceedings ELMAR-2010, (IEEE, 2010), 57-64. |
T. D. Han and T. S. Abdelrahman, “hiCUDA: High-level GPGPU programming,” IEEE Transactions on Parallel and Distributed systems 22, 78-90 (2011). |
M. Hansard, S. Lee, O. Choi, and R. P. Horaud, Time-of-flight cameras: principles, methods and applications (Springer Science & Business Media, 2012). |
L. J. Hornbeck, “Digital light processing for high-brightness high-resolution applications,” in Projection Displays III, (International Society for Optics and Photonics, 1997), 27-41. |
W.-X. Huang and H. J. Sung, “Three-dimensional simulation of a flapping flag in a uniform flow,” Journal of Fluid Mechanics 653, 301-336 (2010). |
J.-S. Hyun and S. Zhang, “Enhanced two-frequency phase-shifting method,” Applied optics 55, 4395-4401 (2016). |
J.-S. Hyun, B. Li, and S. Zhang, “High-speed high-accuracy three-dimensional shape measurement using digital binary defocusing method versus sinusoidal method,” Optical Engineering 56, 074102 (2017). |
Lazaros, et al., Review of Stereo Vision Algorithms: From Software to Hardware, Int. J. Optomechatronics, 2(4), 435-462, 2008. |
S. Lei and S. Zhang, “Flexible 3-D shape measurement using projector defocusing,” Optics letters 34, 3080-3082 (2009). |
B. Li, Y. Wang, J. Dai, W. Lohry, and S. Zhang, “Some recent advances on superfast 3D shape measurement with digital binary defocusing techniques,” Optics and Lasers in Engineering 54, 236-246 (2014). |
B. Li and S. Zhang, “Superfast high-resolution absolute 3D recovery of a stabilized flapping flight process,” Optics express 25, 27270-27282 (2017). |
B. Li and S. Zhang, “Novel method for measuring a dense 3d strain map of robotic flapping wings,” Measurement Science and Technology 29, 045402 (2018). |
J. Liang, R. N. Kohn Jr, M. F. Becker, and D. J. Heinzen, “High-precision laser beam shaping using a binary-amplitude spatial light modulator,” Appl. Opt. 49, 1323-1330 (2010). |
Liang, et al., Grayscale laser image formation using a programmable binary mask, Opt. Eng., 51(10), 108201, 2012. |
A. Manela and M. Howe, “On the stability and sound of an unforced flag,” Journal of Sound and Vibration 321, 994-1006 (2009). |
S. Michelin, S. G. L. Smith, and B. J. Glover, “Vortex shedding model of a flapping flag,” Journal of Fluid Mechanics 617, 1-10 (2008). |
NVIDIA cuFFT, https://developer.nvidia.com/cufft, 2023. |
Salvi, et al., A state of the art in structured light patterns for surface profilometry, Pattern Recognition, 43(8), 2666-2680, 2010. |
S. Seer, N. Brändle, and C. Ratti, “Kinects and human kinetics: A new approach for studying pedestrian behavior,” Transportation Research Part C 48, 212-228 (2014). |
M. Shelley, N. Vandenberghe, and J. Zhang, “Heavy flags undergo spontaneous oscillations in flowing water,” Physical review letters 94, 094302 (2005). |
M. J. Shelley and J. Zhang, “Flapping and bending bodies interacting with fluid flows,” Annual Review of Fluid Mechanics 43, 449-465 (2010). |
Z. Song, Handbook of 3D Machine Vision: Optical Metrology and Imaging, Chapter 9 (2013). |
J. Stegmaier, F. Amat, W. C. Lemon, K. McDole, Y. Wan, G. Teodoro, R. Mikut, and P. J. Keller, “Real-time three-dimensional cell segmentation in large-scale microscopy data of developing embryos,” Developmental cell 36, 225-240 (2016). |
X. Su and Q. Zhang, “Dynamic 3-D shape measurement method: a review,” Optics and Lasers in Engineering 48, 191-204 (2010). |
G. W. Taylor, J. R. Burns, S. Kammann, W. B. Powers, and T. R. Welsh, “The energy harvesting eel: a small subsurface ocean/river power generator,” IEEE journal of oceanic engineering 26, 539-547 (2001). |
S. Van der Jeught and J. J. Dirckx, “Real-time structured light profilometry: a review,” Optics and Lasers in Engineering 87, 18-31 (2016). |
M. Vázquez-Arellano, H. Griepentrog, D. Reiser, and D. Paraforos, “3-D imaging systems for agricultural applications-a review,” Sensors 16, 618 (2016). |
D. Waide, “The Future of Machine Vision Standards,” Quality, 19VS (2017). |
Y. Wang and S. Zhang, “Three-dimensional shape measurement with binary dithered patterns,” Applied optics 51, 6631-6636 (2012). |
Y. Wang, J. I. Laughner, I. R. Efimov, and S. Zhang, “3D absolute shape measurement of live rabbit hearts with a superfast two-frequency phase-shifting technique,” Optics express 21, 5822-5832 (2013). |
Y. Watanabe, K. Isogai, S. Suzuki, and M. Sugihara, “A theoretical study of paper flutter,” Journal of Fluids and Structures 16, 543-560 (2002). |
J. Zhang, S. Childress, A. Libchaber, and M. Shelley, “Flexible filaments in a flowing soap film as a model for one-dimensional flags in a two-dimensional wind,” Nature 408, 835 (2000). |
S. Zhang, Novel method for structured light system calibration, Optical Engineering 45(8), 083601, Aug. 2006. |
S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Optics and lasers in engineering 48, 149-158 (2010). |
S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Optics and Lasers in Engineering 107, 28-37 (2018). |
Zhang, High-speed 3D shape measurement with structured light methods: A review, Optics and Lasers in Engineering, 106, 119-131, 2018. |
K. Zhong, Z. Li, X. Zhou, Y. Li, Y. Shi, and C. Wang, “Enhanced phase measurement profilometry for industrial 3D inspection automation,” The International Journal of Advanced Manufacturing Technology 76, 1563-1574 (2015). |
C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Optics and Lasers in Engineering 109, 23-59 (2018). |
C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second,” Optics and Lasers in Engineering 102, 70-91 (2018). |
Number | Date | Country | |
---|---|---|---|
20220357151 A1 | Nov 2022 | US |
Number | Date | Country | |
---|---|---|---|
62902445 | Sep 2019 | US | |
62801707 | Feb 2019 | US |