The invention relates to methods and apparatus for tracking motion of objects in three-dimensional space. The methods and apparatus may be used to characterize three-dimensional flow fields by tracking movement of objects in the flow fields.
Three-dimensional (3D) flow fields can be captured through a variety of multi-camera techniques including 3D particle tracking velocimetry (3D-PTV) (Nishino et al. 1989; Maas et al. 1993), tomographic particle image velocimetry (tomo-PIV) (Elsinga et al. 2006), and, most recently, Shake-The-Box (Schanz et al. 2016). While such approaches have been optimized significantly in terms of accuracy and computational cost since their introduction (see for instance Scarano 2012), they traditionally suffer from two major drawbacks that limit their transfer to industrial applications. First, particularly in air, the low scattering efficiency of traditional tracers (diameter D = O(1 μm)) and the limited pulse energy of the light source limit the measurement volume of typical studies to V < 100 cm³ (Scarano et al. 2015). The limited size of the measurement volume is often accounted for by using scaled models of the geometry under consideration, which, in air, results in lower Reynolds numbers (Re). When large-volume measurements are attempted, they are either realized by stitching multiple small measurement volumes together (see e.g. Sellappan et al. 2018) or by repeating stereo-PIV measurements on many planes (see e.g. Suryadi et al. 2010). Michaux et al. (2018) automated the process of capturing multiple stereo-PIV planes using three robotic arms to adjust the laser and the two cameras. To perform volumetric measurements at higher Re, measurements can be conducted in water (see e.g. Rosi and Rival 2018). However, due to the presence of the water-air interface, the camera calibration poses a significant challenge. Furthermore, experiments in water often come at high cost in terms of both the facility and the model.
The second major issue preventing broader application of 3D measurements, particularly at large scales, is their complexity. Expensive multi-camera systems and challenging calibration procedures are required. Such complex set-ups are rarely possible in non-laboratory conditions, and are often too expensive and time-consuming for the result-oriented applications in industry. For instance, in industrial applications as well as in large-scale wind tunnel testing, integral and point-measurement techniques remain the predominant tools for flow characterization. While well-established methods such as balances, pressure probes (single and multi-hole), and hot-wires provide a robust and affordable way to measure aerodynamic loads or the local flow, these methods do not capture the coherent structures in the flow, which are often key to developing cause-effect relationships for such problems.
In air, the problem of limited light scattering for conventional tracers (D = O(1 μm), Raffel et al. 2018) has been tackled by testing larger tracer objects such as fog-filled soap bubbles (D = O(10 mm), Rosi et al. 2014), snowfall (D = O(1 mm), Toloui et al. 2014), and helium-filled soap bubbles (HFSB) (D = O(100 μm), Scarano et al. 2015). Larger tracer particles (D > 100 μm) allow for measurement domains at the scale of cubic meters (see e.g. HFSB in a 0.6 m³ measurement volume, Huhn et al. 2017; HFSB in an approximately 1.0 m³ measurement volume, Bosbach et al. 2019). In addition, the enhanced light-scattering behaviour allows the use of less hazardous illumination sources such as searchlights (Toloui et al. 2014), LEDs (Buchmann et al. 2012; Huhn et al. 2017; Bosbach et al. 2019), and even natural light (Rosi et al. 2014). However, while large tracer particles enable the study of large-scale flows, for example the atmospheric surface layer, the growing tracer size introduces new challenges with regard to flow-tracking fidelity (Scarano et al. 2015; Raffel et al. 2018) and thus limits the system resolution.
To reduce the system complexity of 3D measurements, single-camera approaches, as well as compact multi-camera systems, have been explored. Single-camera approaches based on defocusing were first suggested by Willert and Gharib (1992), who used three holes in the aperture to produce multiple images of the same particles on the camera sensor. Kao and Verkman (1994) introduced astigmatism to the optics of a microscope by means of a cylindrical lens to track the 3D motion of a single particle. Cierpka and Kähler (2012) successfully adapted both principles (defocusing and astigmatism) to enable 3D measurements in micro-PTV applications. Another promising single-camera approach, demonstrated by Fahringer et al. (2015) and Zhao et al. (2019), is light-field PIV (LF-PIV), where the third dimension is reconstructed using the information gathered from a single plenoptic camera. However, the method is limited by depth resolution and elongation effects of the reconstructed particles. Moreover, Kurada et al. (1995) proposed a prism that enabled recording of three perspectives with a single camera chip. The prism was utilized by Gao et al. (2012) to perform volumetric measurements. As the former methods decrease the resolution of each view, Schneiders et al. (2018) introduced a coaxial volumetric velocimeter (CVV). The CVV integrates four cameras and the laser illumination into a single module. Jux et al. (2018) combined the CVV with a robotic arm to automatically measure multiple subsets of a large measurement volume, providing time-averaged data in large volumes around complex geometries.
One aspect of the invention relates to a method for tracking movement of an object in three-dimensional (3D) space, comprising: using a single sensor to obtain images of the object moving through the 3D space; using a processor to determine a change in position of the object in the 3D space based on a change in size of the object in the images; and using the change in position to construct a trajectory of the object; wherein the trajectory represents movement of the object through the 3D space.
In one embodiment, determining a change in size of the object in the images comprises determining a first size of the object in a first image at a first instance in time; determining a second size of the object in a second image at a second instance in time; using a difference in the first and second sizes of the object as the change in size of the object.
In one embodiment, determining a change in size of the object in the images comprises using an object detection algorithm.
In one embodiment, determining a change in size of the object in the images comprises detecting features in a first image of the object at a first instance in time; detecting the features in a second image of the object at a second instance in time; using the features in the first and second images to determine the change in size of the object.
In one embodiment, the features are glare points and the change in size of the object is determined by extracting temporal evolution of spacing of the glare points.
In one embodiment, the change in position of the object is determined two or more times.
In various embodiments, changes in positions of two or more objects in the images may be determined. The objects may be of substantially uniform size and shape.
In various embodiments, the single sensor may be adapted to capture images based on a modality selected from light (visible, infra-red (IR)), ultrasound (US), X-ray, radio frequency (RF), and magnetic resonance (MR).
In one embodiment, the single sensor comprises a camera.
In one embodiment, the object is naturally-occurring in the 3D space.
In one embodiment, the object is manufactured.
In one embodiment, the object is released into the 3D space.
In one embodiment, the trajectory may be used to characterize a flow field in the 3D space, and the method may include outputting a 3D representation of the flow field.
Another aspect of the invention relates to apparatus for tracking movement of an object in three-dimensional (3D) space, comprising: a single sensor that captures images of the object moving through the 3D space; and a processor that determines a change in position of the object in the 3D space based on a change in size of the object in the images, and uses the change in position to construct and output a trajectory of the object; wherein the trajectory represents movement of the object through the 3D space.
In one embodiment, the processor determines the change in size of the object in the images by: determining a first size of the object in a first image at a first instance in time; determining a second size of the object in a second image at a second instance in time; using a difference in the first and second sizes of the object as the change in size of the object.
In one embodiment, the processor determines the change in size of the object in the images using an object detection algorithm.
In one embodiment, the processor determines the change in size of the object in the images by detecting features in a first image of the object at a first instance in time; detecting the features in a second image of the object at a second instance in time; using the features in the first and second images to determine the change in size of the object.
In one embodiment, the features are glare points and the change in size of the object is determined by extracting temporal evolution of spacing of the glare points.
In one embodiment, the processor determines the change in position of the object two or more times.
In one embodiment, the processor determines changes in positions of two or more objects in the images. The objects may be of substantially uniform size and shape.
In various embodiments, the single sensor is adapted to capture images based on a modality selected from light (visible, infra-red (IR)), ultrasound (US), X-ray, radio frequency (RF), and magnetic resonance (MR).
In one embodiment, the single sensor comprises a camera.
In one embodiment, the object is naturally-occurring in the 3D space.
In one embodiment, the object is manufactured.
In one embodiment, the object is released into the 3D space.
In one embodiment, the processor uses the trajectory to characterize a flow field in the 3D space and outputs a 3D representation of the flow field.
Another aspect of the invention relates to an apparatus and associated methods for characterizing a flow field of a 3D space, comprising: a single sensor that captures images of one or more object moving through the 3D space; and a processor that processes the images to determine a change in position of the one or more object in the 3D space based on a change in size of the one or more object in the images, uses the change in position to construct a trajectory of the one or more object through the 3D space, and outputs the trajectory of the one or more object in the 3D space and/or a 3D representation of the flow field of the 3D space.
Another aspect of the invention relates to non-transitory computer-readable storage media containing stored instructions executable by a processor, wherein the stored instructions direct the processor to execute processing steps on image data of one or more object moving through 3D space, including determining position and trajectory of the one or more object in the 3D space, using the position and trajectory of the one or more object to characterize a flow field of the 3D space, and optionally outputting a 3D representation of a flow field of the 3D space, as described herein.
For a greater understanding of the invention, and to show more clearly how it may be carried into effect, embodiments will be described, by way of example, with reference to the accompanying drawings. Among other things, the drawings include plots evaluated for a constant focus distance of o_f = 5 m, and for f# = 11 and o_f = 5 m for a camera with a 10 μm pixel size.
Embodiments described herein provide methods and apparatus for tracking motion of one or more objects over a small or large volume (i.e., a 3D space) that enable affordable and efficient measurements using a single sensor. Compared to prior methods, embodiments significantly reduce experimental effort. Tracking motion of objects as described herein provides time-resolved measurements that enable characterization of flow fields in very large volumes, e.g., full-scale measurements in the atmospheric boundary layer, as well as in confined spaces, such as airflow in indoor spaces (e.g., offices, classrooms, laboratories, homes, etc.). Embodiments provide methods and apparatus for tracking motion of objects in 3D spaces and characterizing flow fields in real time. In the context of the recent pandemic, such indoor applications could help to reduce infection risk by designing appropriate air circulation, ensuring frequent air exchange, and avoiding direct airflow from individual to individual. In addition, embodiments may be adapted to track the motion of objects in volumes comprising various fluids (i.e., liquids, gases), or volumes in a vacuum (e.g., in outer space).
Embodiments use a single sensor 3D measurement approach to track one or more objects in a 3D space. The sensor captures images of one or more objects moving through the 3D space. The size of an object in an image captured by the sensor depends on its distance from the sensor as it travels through the 3D space. When the size of an object is known (i.e., the actual size, or the size with respect to a reference point), then by determining the change in size of the object in images captured at different times, the trajectory of the object may be constructed. As described herein, various techniques may be used to determine the size of an object in images captured by the sensor. For example, embodiments may be based on detecting glare points on objects in the images, while other embodiments may use object detection algorithms.
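By way of non-limiting illustration, the following short sketch shows the underlying size-to-distance principle under a simple pinhole-camera assumption; the function name, focal length, pixel pitch, and numerical values are illustrative assumptions only and do not correspond to any particular embodiment.

```python
import numpy as np

def depth_from_apparent_size(apparent_size_px, true_size_m, focal_length_mm, pixel_pitch_um):
    """Pinhole-camera estimate of object distance (m) from its apparent size in the image.

    apparent_size_px : measured object size in the image (pixels)
    true_size_m      : known physical size of the object (m)
    focal_length_mm  : camera lens focal length (mm)
    pixel_pitch_um   : physical size of one pixel on the sensor (um)
    """
    apparent_size_m = apparent_size_px * pixel_pitch_um * 1e-6   # size on the sensor (m)
    focal_length_m = focal_length_mm * 1e-3
    # Thin-lens / pinhole relation: apparent_size / focal_length = true_size / distance
    return true_size_m * focal_length_m / apparent_size_m

# Illustrative example: a 15 mm object imaged at three time instances.
sizes_px = np.array([22.0, 20.5, 19.2])                  # apparent diameters in pixels
depths_m = depth_from_apparent_size(sizes_px, 0.015, 60.0, 10.0)
print(depths_m)   # increasing distance indicates the object is moving away from the sensor
```

A sequence of such distance estimates, combined with the in-plane image positions, yields the trajectory of the object through the 3D space.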
Embodiments may be implemented using a sensor technology that can capture images of an object moving in a 3D space, from which information (i.e., one or more features of the object) can be extracted to determine size of the object. Examples of such technology include, but are not limited to, those based on a modality selected from light (visible, infra-red (IR)), ultrasound (US), X-ray, radio frequency (RF), and magnetic resonance (MR).
Some embodiments may use objects of known size. For example, in some applications such as controlled experiments, studies in confined or enclosed 3D spaces, etc., in which objects are released into a 3D space, the objects are of known size. Also, the objects may be of substantially uniform shape. Examples of such objects include, but are not limited to, bubbles, balloons, particles prepared from selected materials, etc.
In other embodiments, the objects may not be of known or uniform size. For example, in some applications such as large enclosed spaces and outdoors, naturally occurring objects, such as snowflakes, ashes, or other particulate matter (e.g., resulting from natural events), seeds, animals such as birds or insects, etc., may be tracked. Alternatively, the object(s) may be manufactured (i.e., “man-made”), e.g., drones, aircraft, bubbles, balloons, particles prepared from selected materials, etc., and released into the space, or the objects may be debris or particulate matter (e.g., resulting from catastrophic events), etc. The size of such objects may be estimated based on experience or known parameters (e.g., size of a known species of bird or insect, or type of drone or aircraft). In the absence of known parameters various techniques may be employed to estimate size of objects, for example, a second sensor may be used, or the object size may be estimated when the object is at a known position, or suitable illumination can provide an object size estimate, etc. An embodiment for object tracking based on estimating size of objects is described in detail in Example 1.
Embodiments suitable for use in very large measurement volumes (e.g., large enclosed spaces, outdoors) may include mobile apparatus for releasing objects of known size and substantially uniform shape and tracking their movements. For example, in one embodiment a drone is equipped with a bubble generator, a sensor (e.g., a camera), a global positioning system (GPS) sensor, and acceleration sensors or an inertial measurement unit (IMU). The bubble generator releases bubbles, and the position and velocity of the drone/sensor and the bubbles are tracked over time as the bubbles move away from the drone. Images of the bubbles acquired by the camera are processed according to methods described herein to characterize the flow field in real time in a very large measurement volume. Such an embodiment may be deployed in a wide variety of applications to measure the flow field in its vicinity, wherein quantities such as mean flow velocity and turbulence ratio may be derived and evaluated in real time. Applications include, for example, evaluation of sites for wind turbine installations and optimization of wind turbine placement, where local weather conditions, complex terrain, etc., render studies based on weather models, historic weather data, and conventional flow measurement techniques of limited value. In contrast, a mobile embodiment as described herein allows identification of suitable locations for wind turbine plants and placement of wind turbines where a significant performance increase may be expected. Other applications may include measurements in research and development (e.g., design and optimization of industrial wind tunnels), on-road measurements for aerodynamic vehicle optimization, efficient disaster response when airborne contaminants are involved, and flow assessment in urban areas to predict aerodynamic and snow loads for planned buildings.
Embodiments may be based on tracking objects by tracking identifiable features in the images of the objects captured by the sensor. An object may have a characteristic related to surface properties, material properties, etc. that results in one or more identifiable features in the images. In some embodiments, an identifiable feature may be present in the images even if the object itself is not rendered in the images. An example of such a feature is glare points (or glints) produced by incident light on a reflective surface of the object. For example, when light is directed to a substantially spherical object with a reflective surface, a sensor such as a camera will capture resulting glare points on the reflective surface. The glare points in an image of the object may be used to determine the size of the object, and a temporal sequence of images may be used to determine a change in size of the object in the images relative to the sensor, and hence to construct the trajectory of the object.
A non-limiting example of a reflective object that may be tracked is a bubble. Bubbles, such as those produced from soap, are good candidates for use in embodiments because they are inexpensive and can easily be produced and dispersed in large quantities, they are very light and thus able to follow flow (e.g., of air) closely, and they can be relatively environmentally friendly. Bubbles may be, e.g., centimeter-sized, which is a good compromise between the ability to detect glare points, the strength/longevity of the bubbles, and their ability to follow fluid (e.g., air) flow, although other sizes may be used. However, as bubbles become larger they have more inertia, which reduces their ability to follow air flow. Furthermore, larger bubbles deform more easily, rendering a glare-point approach as described herein less accurate. A camera may be used as a sensor to capture images of bubbles, which may be illuminated (e.g., using white light) to create glare points on the bubbles. The depth position (i.e., distance from the camera) of the soap bubbles may be determined from the glare-point spacing in the images.
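By way of non-limiting illustration, one possible way to locate the two brightest glare points within an image region containing a single bubble and to measure their spacing is sketched below using standard NumPy/SciPy routines; the threshold, the function name, and the synthetic test image are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

def glare_point_spacing(image, intensity_threshold):
    """Estimate the glare-point spacing (in pixels) from a grayscale crop of one bubble.

    The two brightest blobs above `intensity_threshold` are taken to be the external
    and internal reflections; the distance between their centroids is returned.
    """
    bright = image > intensity_threshold                    # mask of bright pixels
    labels, n = ndimage.label(bright)                       # connected bright regions
    if n < 2:
        return None                                         # fewer than two glare points found
    peaks = ndimage.maximum(image, labels, np.arange(1, n + 1))
    brightest = np.argsort(peaks)[::-1][:2] + 1             # labels of the two brightest regions
    (y0, x0), (y1, x1) = ndimage.center_of_mass(image, labels, brightest)
    return float(np.hypot(x1 - x0, y1 - y0))                # glare-point spacing d_G in pixels

# Synthetic check: two blurred bright spots 12 px apart on a dark background.
img = np.zeros((64, 64))
img[30, 20] = img[30, 32] = 255.0
img = ndimage.gaussian_filter(img, sigma=1.0)
print(glare_point_spacing(img, intensity_threshold=0.5 * img.max()))   # ~12
```

In practice the threshold would be tuned to the illumination and camera settings, and the crop would come from a prior detection of the bubble in the full image.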
Embodiments may include one or more processor, e.g., a computer, having non-transitory computer-readable storage media containing stored instructions executable by the one or more processor, wherein the stored instructions direct the processor to carry out processing steps on image data of one or more object moving through 3D space, including determining position and trajectory of one or more object in 3D space, using the position and trajectory of the one or more object to characterize a flow field of the 3D space, and optionally outputting a 3D representation of a flow field of the 3D space, as described herein. For example, the processor may carry out one or more processing steps such as those described in the Examples below.
Embodiments based on use of bubbles as tracked objects are described in detail in Examples 1, 2, and 3. The Examples are included to show how embodiments of the invention may be implemented, and are not intended to limit the scope of the invention in any way.
An embodiment was implemented to demonstrate 3D object tracking based on object size estimation. The implementation is shown diagrammatically in the accompanying drawings.
A test flow was examined in a 3 m × 4 m × 3 m room equipped with two portable fans to generate a low-speed air circulation. The arrangement of the test set-up, including the bubble generator and the camera, is shown in the accompanying drawings.
Once a bubble is generated, the bubble size (D_B) can be determined from Equation (4), since the distance between the camera and the bubble generator, o(t = t0), is known. Knowing D_B, Equation (4) allows reconstruction of the bubble position in three dimensions for all time instances until the bubble leaves the field of view or bursts. While low object densities are required to avoid ambiguity in the reconstruction, very long tracks were extracted (~80 time instances). The long tracks allow analysis of the material transport in the room, both instantaneously and statistically.
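By way of non-limiting illustration, the size-estimation and reconstruction steps of this example may be sketched as follows, assuming the Equation (4) relation o ≈ f·D_B·sin(θ/2)/d_G, an observation angle of 90°, and illustrative camera parameters; all function names and numerical values are assumptions.

```python
import numpy as np

F_MM = 60.0          # lens focal length (assumed, mm)
THETA_DEG = 90.0     # observation angle between illumination and viewing directions
PX_UM = 10.0         # pixel pitch (assumed, um)

def spacing_px_to_m(d_g_px):
    return d_g_px * PX_UM * 1e-6

def bubble_diameter_from_known_distance(d_g0_px, o0_m):
    """Estimate D_B from the glare-point spacing observed at a known object distance o0
    (e.g., the camera-to-bubble-generator distance at t = t0)."""
    f_m = F_MM * 1e-3
    return o0_m * spacing_px_to_m(d_g0_px) / (f_m * np.sin(np.radians(THETA_DEG) / 2))

def object_distance(d_g_px, d_b_m):
    """Equation (4)-style depth estimate: o ~ f * D_B * sin(theta/2) / d_G."""
    f_m = F_MM * 1e-3
    return f_m * d_b_m * np.sin(np.radians(THETA_DEG) / 2) / spacing_px_to_m(d_g_px)

# A bubble first seen at the generator, 2.5 m from the camera, then tracked over time.
d_g_series_px = np.array([25.4, 24.0, 22.7, 21.6])   # glare-point spacing per frame (illustrative)
D_B = bubble_diameter_from_known_distance(d_g_series_px[0], 2.5)
o_series = object_distance(d_g_series_px, D_B)
print(D_B, o_series)   # o_series[0] reproduces 2.5 m; later frames show the bubble receding
```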
In this example, the accuracy of the method was evaluated by recording the same bubble from two different perspectives (i.e., by using an additional camera 216 to provide a second view).
This example describes use of glare points of bubbles in 3D object tracking.
Consider a spherical air-filled soap bubble of diameter 10 mm < D_B < 25 mm, much larger than the soap film thickness h = 0.3 μm << D_B. When the bubble is illuminated by a parallel light beam, a camera at an observation angle θ with respect to the illumination direction captures several reflections (glare points) on the bubble surface. For θ ≈ 90°, the two glare points of highest intensity are the result of external and internal reflections, respectively. The spacing of these two glare points, D_G, is related to the bubble diameter by

D_G = D_B·sin(θ/2).   (1)

Hence, the glare-point spacing is directly proportional to the bubble diameter. If the light source and the camera are far away from the measurement volume, a constant θ can be assumed throughout the whole measurement volume. For θ = 90°, this leads to D_G = (√2/2)·D_B.
Assuming that the bubbles remain spherical with constant diameter D_B, and that the variation of θ is negligible along the bubble's path, the image glare-point spacing (d_G) is related to D_B by the optical magnification factor M:

d_G = M·D_B·sin(θ/2), with M = i/o,   (2)

where i is the image distance and o is the object distance (Raffel et al. 2018). For o >> i, the lens equation
f⁻¹ = o⁻¹ + i⁻¹   (3)
leads to f ≈ i, where f is the camera lens focal length. Equation (2) then simplifies to

o ≈ f·D_B·sin(θ/2)/d_G.   (4)
With Equation (4), and for known bubble diameters DB, the motion of a bubble in 3D space can then be extracted by a single camera.
The extraction of the out-of-plane position for each bubble requires knowledge about the bubble size (DB). The error estimate for DB propagates linearly into the estimate of o (Equation (4)), and therefore also into derived quantities such as the velocity or the material acceleration. The optimal solution would be a bubble generator (currently in development) that produces equally-sized bubbles of known size. However, alternate approaches are possible. For instance, if the illuminated region and the flow direction are known, DB can be estimated as soon as the bubble first appears in the image. Alternatively, DB can be estimated by a secondary view through the principle of photogrammetry. These details are outlined below.
Embodiments may exhibit a limited resolution in the out-of-plane direction. In particular, the out-of-plane component is resolved by the difference between d_G(o_min) and d_G(o_max), where o_max is the maximum and o_min the minimum object distance, respectively. For a given measurement volume depth o_max − o_min, the difference d_G(o_min) − d_G(o_max) has to be maximized. To allow for bright images, a small f-number (f#) is preferred. As a small f# results in a limited depth of field (DOF), the limits of the measurement volume (o_max and o_min) are set to the limits of acceptable image sharpness (Greenleaf 1950):

o_min = o_f·f²/(f² + f#·c·(o_f − f)),   (5)

o_max = o_f·f²/(f² − f#·c·(o_f − f)),   (6)

where o_f is the focus distance, and c is the circle of confusion that describes the acceptable blurriness of the image. The combination of Equations (5) and (6) with Equation (4) leads to the simple expression

d_G(o_min) − d_G(o_max) = 2·f#·c·D_B·sin(θ/2)·(o_f − f)/(o_f·f) ≈ 2·f#·c·D_B·sin(θ/2)/f,   (7)

where in the last step o_f >> f is assumed. Equation (7) implies that shorter focal lengths will allow for better depth resolution. However, a small f results in wide opening angles and thereby leads to a measurement volume that is shaped more like a truncated pyramid than a cuboid. Furthermore, for small f the measurement volume is located close to the camera, in turn possibly modifying the flow. Therefore, f may be selected as a compromise between good out-of-plane resolution and sufficient distance between the camera and the measurement volume.
For example, in one embodiment f = 60 mm (e.g., AF Micro-Nikkor 60 mm f/2.8D) may be selected as a compromise between good out-of-plane resolution and sufficient distance between the camera and the measurement volume o_min. For c = 23 μm, which corresponds to 2.3 px for a camera used in Example 3 (below), a Photron™ mini-WX 100, an f-number of f# = 11 provides sufficiently bright images. For o_f = 5 m, a DOF of o_max − o_min = L = 4.0 m can be achieved.
A 3D object tracking embodiment using soap bubbles as tracked objects was implemented using a 30% scale tractor-trailer model at a 9° yaw angle in a wind tunnel at the National Research Council (NRC) in Ottawa, Canada.
Camera A 524 was operated at an f-number of f# = 11. The measurement volume 518 started at the back of the trailer and extended ~4 m in the x-direction.
To generate tracked objects, two commercial bubble generators 520 (Antari B200), each with a production rate of ~40 bubbles/s, were used. The diameters of the air-filled soap bubbles varied in the range 10 mm < D_B < 25 mm.
Both bubble generators were placed ~20 m upstream of the measurement volume, in the settling chamber of the wind tunnel, at a height y_b = 4 m.
Once the soap bubbles entered the measurement volume 518, they were illuminated by an array of four pulsed high-power LEDs 522 (LED-Flashlight 300, LaVision GmbH) placed in a series configuration.
To estimate the size of each bubble, two additional cameras (B and C) (Photron SA4, AF Nikkor 50 mm f/1.4D, f# = 11) were used to record the bubbles from a second perspective.
A total of 19 runs (5400 images) were collected to assess the wake-flow characteristics at free-stream velocities of U∞=8 m/s (3 runs) and U∞=30 m/s (16 runs), respectively. Images were recorded at frequencies of Fcam=150 Hz (U∞=8 m/s) and Fcam=500 Hz (U∞=30 m/s). The cameras were mounted onto the wind-tunnel floor with tripods and vibration-damping camera mounts, and the tripods were then fixed with multiple lashing straps.
At high wind speeds, the shell of the wind tunnel vibrates at frequencies within the range of 9-40 Hz. While the tractor-trailer model 514 was mounted on the non-vibrating turntable 516, the cameras experienced significant vibrations. To correct for the vibrations during image processing, non-vibrating reference points were placed in the measurement volume. For camera (A) 524, two yellow stickers were attached to the left edge at the back of the trailer. For cameras B and C stickers were attached to the opposite wind tunnel walls. As the first step of processing 610, the raw images received by the processor were stabilized 620 (translation and rotation) through cross-correlation of the sticker positions throughout the time series. Thereafter, glare points were tracked 630 using standard two-dimensional PTV (DaVis 8.4.0, LaVision GmbH). A representation of a two-dimensional vector map is shown at 630.
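By way of non-limiting illustration, the translational part of such a stabilization step may be sketched as follows using sub-pixel phase correlation on a window around a fixed reference marker; the window location and the library choice (scikit-image/SciPy) are assumptions, rotation correction is omitted, and the commercial software actually used in this example is not reproduced here.

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def stabilize_frames(frames, ref_window):
    """Register every frame to the first one using the image content inside `ref_window`.

    frames     : sequence of 2D grayscale images
    ref_window : (slice_y, slice_x) covering a non-vibrating reference marker (e.g., a sticker)
    """
    sy, sx = ref_window
    reference = frames[0][sy, sx]
    stabilized = [frames[0]]
    for frame in frames[1:]:
        # Sub-pixel translation between the reference patch and the current frame's patch.
        offset, _, _ = phase_cross_correlation(reference, frame[sy, sx], upsample_factor=10)
        stabilized.append(nd_shift(frame, offset, order=1, mode='nearest'))
    return np.array(stabilized)

# Illustrative use: a frame jittered by a known shift is registered back to the original.
rng = np.random.default_rng(0)
base = rng.random((128, 128))
frames = [base, nd_shift(base, (1.5, -2.0), order=1, mode='nearest')]
out = stabilize_frames(frames, (slice(40, 90), slice(40, 90)))
print(np.abs(out[1] - base).mean())   # small residual after stabilization
```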
Subsequently, at 640 individual glare point tracks were determined from temporal sequences of 2D images originating from individual bubbles, and then the 2D tracks of the same bubble were paired. The pairing was based on a series of conditions. First, the paired tracks have to be reasonably close and their velocities have to be similar. Second, the position of the light source determines the relative orientation of the individual glare points of the same bubble. After pairing, at 650 the temporal evolution of the glare-point spacing (dG(t)) was extracted. To reduce measurement noise, dG(t) was smoothed by applying a third-order polynomial fit.
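By way of non-limiting illustration, the pairing and smoothing steps may be sketched as follows; this sketch pairs tracks using only proximity and velocity similarity (the light-source orientation condition described above is omitted), and all thresholds and synthetic data are illustrative assumptions.

```python
import numpy as np

def pair_glare_tracks(tracks, max_separation_px=40.0, max_velocity_diff_px=2.0):
    """Pair 2D glare-point tracks that plausibly belong to the same bubble.

    tracks : list of (T, 2) arrays of pixel positions sampled on the same frames.
    Tracks are paired when their mean separation and mean velocity difference are small;
    the thresholds would be tuned to the optical set-up.
    """
    pairs = []
    for i in range(len(tracks)):
        for j in range(i + 1, len(tracks)):
            sep = np.linalg.norm(tracks[i] - tracks[j], axis=1).mean()
            vel_diff = np.linalg.norm(np.diff(tracks[i], axis=0) - np.diff(tracks[j], axis=0),
                                      axis=1).mean()
            if sep < max_separation_px and vel_diff < max_velocity_diff_px:
                pairs.append((i, j))
    return pairs

def smoothed_spacing(track_a, track_b, order=3):
    """Glare-point spacing d_G(t) of a paired track, smoothed with a third-order polynomial fit."""
    d_g = np.linalg.norm(track_a - track_b, axis=1)
    t = np.arange(len(d_g))
    return np.polyval(np.polyfit(t, d_g, order), t)

# Two synthetic glare-point tracks of one bubble whose spacing slowly grows (bubble approaching).
t = np.arange(30)[:, None]
track_a = np.hstack([100 + 0.8 * t, 200 + 0.1 * t])
track_b = np.hstack([112 + 0.85 * t, 200 + 0.1 * t])
print(pair_glare_tracks([track_a, track_b]))
print(smoothed_spacing(track_a, track_b)[:5])
```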
Because an optimal bubble generator producing bubbles of uniform and known size was not available, in this example an additional processing step 660 was implemented to estimate the size D_B of each bubble once it appeared in the FOV, using cameras B and C. The flow was recorded from a second perspective and D_B was determined via photogrammetry. With known D_B, the 3D position of each bubble can be estimated at all times from a single perspective. In particular, bubble tracks of camera A were matched with the second perspective (cameras B and C) via triangulation. Once D_B was known, the second perspective was disregarded and the complete 3D track was reconstructed from a single view. With an optimal bubble generator, equally-sized bubbles can be generated and step 660 can be omitted. With known D_B, the object distance (o) of each bubble was estimated for camera A from the glare-point spacing (d_G) using Equation (2).
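By way of non-limiting illustration, the photogrammetric size-estimation step may be sketched as follows using OpenCV's triangulation routine; the projection matrices, intrinsics, and the diameter-inversion helper are illustrative assumptions consistent with Equations (1)-(4), not the actual calibration of this example.

```python
import numpy as np
import cv2

def triangulate_bubble(P_a, P_b, uv_a, uv_b):
    """Triangulate one bubble position from two calibrated views.

    P_a, P_b   : 3x4 projection matrices (intrinsics @ [R|t]) of the two views
    uv_a, uv_b : (2,) pixel coordinates of the bubble centre in each view
    Returns the 3D point in the common world frame.
    """
    X_h = cv2.triangulatePoints(P_a, P_b,
                                np.asarray(uv_a, float).reshape(2, 1),
                                np.asarray(uv_b, float).reshape(2, 1))
    return (X_h[:3] / X_h[3]).ravel()

def bubble_diameter(o_m, d_g_px, f_mm=60.0, px_um=10.0, theta_deg=90.0):
    """Invert the glare-point relation once the object distance o is known from triangulation."""
    d_g_m = d_g_px * px_um * 1e-6
    return o_m * d_g_m / (f_mm * 1e-3 * np.sin(np.radians(theta_deg) / 2))

# Synthetic check: two views of a point at (0.2, 0.1, 3.0) m.
K = np.array([[6000.0, 0, 1024], [0, 6000.0, 768], [0, 0, 1]])     # assumed intrinsics
P_a = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                 # camera A at the origin
P_b = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])]) # camera B offset by 0.5 m
X = np.array([0.2, 0.1, 3.0, 1.0])
uv_a = (P_a @ X)[:2] / (P_a @ X)[2]
uv_b = (P_b @ X)[:2] / (P_b @ X)[2]
print(triangulate_bubble(P_a, P_b, uv_a, uv_b))    # recovers ~(0.2, 0.1, 3.0)
```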
All cameras were calibrated by the method suggested by Zhang (2000), providing both the internal camera matrix I and the external matrix E = (R|T), consisting of the rotation matrix R and the translation vector T. The calibration maps the coordinates (x, y, z) from the real-world coordinate system to the frame of reference of the camera chip (X, Y):

m·(X, Y, 1)ᵀ = I·E·(x, y, z, 1)ᵀ,   (8)

where m is initially unknown. For the sake of clarity, a second real-world coordinate system was introduced with coordinates (x_c, y_c, z_c) that shares the origin and orientation of the coordinate system of camera A. Since z_c = o is then known, Equation (8) provides (x_c, y_c, z_c), which by translation (T) and rotation (R) leads to (x, y, z); thereby the three-dimensional tracks were determined at 670.
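By way of non-limiting illustration, the single-view reconstruction of step 670 may be sketched as follows: with the intrinsic matrix I, the extrinsics (R|T), and the object distance o known from the glare-point spacing, the pixel coordinates are back-projected into the camera-aligned frame and then transformed into the world frame. The matrix values below are illustrative assumptions.

```python
import numpy as np

def reconstruct_world_point(I, R, T, X_px, Y_px, o_m):
    """Single-view 3D reconstruction with known depth.

    I          : 3x3 intrinsic camera matrix
    R, T       : rotation matrix and translation vector of the extrinsic matrix E = (R | T)
    X_px, Y_px : pixel coordinates of the bubble (glare-point midpoint)
    o_m        : object distance along the camera axis (z_c), known from the glare-point spacing
    """
    # Back-project the pixel into the camera-aligned frame (x_c, y_c, z_c), with z_c = o.
    ray = np.linalg.solve(I, np.array([X_px, Y_px, 1.0]))   # viewing ray with unit z-component
    p_cam = ray / ray[2] * o_m
    # The calibration maps world -> camera as p_cam = R @ p_world + T, so invert that mapping.
    return np.linalg.solve(R, p_cam - T)

# Illustrative numbers: camera aligned with the world frame, offset by 5 m along z.
I = np.array([[6000.0, 0, 1024], [0, 6000.0, 768], [0, 0, 1]])
R = np.eye(3)
T = np.array([0.0, 0.0, -5.0])
print(reconstruct_world_point(I, R, T, 1200.0, 800.0, 4.0))
```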
While the extracted Lagrangian data allows for direct determination of material accelerations, material transport, and the identification of coherent structures, the low object density in this proof-of-principle study (only two bubble generators used) does not allow one to extract spatial gradients in the time-resolved data set. In the following, the data is mapped on an Eulerian grid and averaged over time. A uniform 80×30×30 grid with a resolution of 0.05 m was defined and for each grid point the data were averaged. The mapping of the data to an equidistant Eulerian grid allows visualization of the mean velocity field, streamlines, as well as an estimate of the vorticity distribution.
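By way of non-limiting illustration, the mapping of scattered Lagrangian samples onto a uniform Eulerian grid may be sketched as follows; the grid extents and the synthetic samples are illustrative assumptions.

```python
import numpy as np

def map_to_eulerian_grid(positions, velocities, grid_origin, cell_size, grid_shape):
    """Average scattered Lagrangian velocity samples onto a uniform Eulerian grid.

    positions  : (N, 3) sample positions (m)
    velocities : (N, 3) sample velocities (m/s)
    Returns an array of shape grid_shape + (3,) of mean velocities; empty cells are NaN.
    """
    idx = np.floor((positions - grid_origin) / cell_size).astype(int)
    valid = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    idx, vel = idx[valid], velocities[valid]

    sums = np.zeros(grid_shape + (3,))
    counts = np.zeros(grid_shape)
    for (i, j, k), v in zip(idx, vel):
        sums[i, j, k] += v
        counts[i, j, k] += 1
    with np.errstate(invalid='ignore', divide='ignore'):
        return sums / counts[..., None]

# Illustrative use: random samples mapped onto an 80 x 30 x 30 grid with 0.05 m cells.
rng = np.random.default_rng(1)
pos = rng.uniform([0, 0, 0], [4.0, 1.5, 1.5], size=(5000, 3))
vel = np.tile([8.0, 0.0, 0.0], (5000, 1)) + 0.5 * rng.standard_normal((5000, 3))
mean_field = map_to_eulerian_grid(pos, vel, np.zeros(3), 0.05, (80, 30, 30))
print(np.nanmean(mean_field[..., 0]))   # close to the 8 m/s free-stream value
```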
Despite the challenging environment of the large-scale wind tunnel (vibrations, issues with accurate alignment due to long distances, etc.) along with non-ideal seeding (low seeding density, narrow seeding area) the embodiment of this example was used successfully to extract time-resolved Lagrangian data as well as 3D time-averaged velocity fields for the model tractor-trailer.
The contents of all cited publications are incorporated herein by reference in their entirety.
While the invention has been described with respect to illustrative embodiments thereof, it will be understood that various changes may be made to the embodiments without departing from the scope of the invention. Accordingly, the described embodiments are to be considered merely exemplary and the invention is not to be limited thereby.
This application claims the benefit of the filing date of Application No. 63/154,843, filed on Mar. 1, 2021, the contents of which are incorporated herein by reference in their entirety.
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/CA2022/050287 | Mar. 1, 2022 | WO | |

| Number | Date | Country |
|---|---|---|
| 63/154,843 | Mar. 1, 2021 | US |