A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
The present invention relates to computational engineering technology, and more particularly to technology for recognizing features within a computationally designed structure.
Computational engineering analysis processes typically apply an algorithm to a digital representation of a product. For example, in computational fluid dynamics, a mesh is generated that represents the shape of the product and forms a basis for the flow features around the product. As is known in the art, a mesh is a discrete representation of a subsection of a continuous geometric space.
Mesh areas of a design are not generally uniform in size but are smaller and of higher concentration in those areas of a volume where there are features of interest having higher complexity, such as but not limited to flow features in an aircraft fuselage design.
A problem with conventional computational design technology arises from the fact that specification and identification of meshes in a design remains largely a human endeavor. In some circumstances, a person may need to labor over a design to identify all of the places where features having particular characteristics exist. And even in some technology that achieves some level of automation, a great deal of time and effort is required for a person to manually complete user-driven menus to characterize the features that are to be located within the structure.
Another problem with conventional computational design technology is the relatively slow speed of execution. The inventors of the subject matter described herein have determined, through investigation and inventive skill, that a contributing factor in this regard relates to the conventional way that a structural design is encoded (e.g., tessellation data, vector representations supplied as output from computer aided design programs, etc.).
In view of the foregoing, there is a need for technology that addresses the above and/or related problems.
It should be emphasized that the terms “comprises” and “comprising”, when used in this specification, are taken to specify the presence of stated features, integers, steps or components; but the use of these terms does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
Moreover, reference letters may be provided in some instances (e.g., in the claims and summary) to facilitate identification of various steps and/or elements. However, the use of reference letters is not intended to impute or suggest that the so-referenced steps and/or elements are to be performed or operated in any particular order.
In accordance with one aspect of the present invention, the foregoing and other objects are achieved in technology (e.g., methods, apparatuses, nontransitory computer readable storage media, program means) that identifies a structural feature of a structure. In an aspect of some but not necessarily all embodiments consistent with the invention, identifying the structural feature involves obtaining a global point cloud representation of the structure and obtaining a target point cloud that represents a target structural feature, wherein the target point cloud is a subset of the global point cloud representation of the structure. The global structural information and the target point cloud are supplied to a feature clustering process that produces therefrom a clustered representation of the structure, wherein:
The clustered representation of the structure and the target point cloud are supplied to a feature matching process that compares the target point cloud with clusters of datapoints within the clustered representation of the structure and produces therefrom one or more matching point clouds, wherein each of the one or more matching point clouds is a subset of the global point cloud representation of the structure.
In another aspect of some but not necessarily all embodiments consistent with the invention, the global structural information is the global point cloud representation of the structure.
In yet another aspect of some but not necessarily all embodiments consistent with the invention, identifying the structural feature involves producing a set of one or more extracted structural features based on the global point cloud representation of the structure, wherein each of the one or more extracted structural features is a pose invariant characterization of a local geometry around a point in the global point cloud representation of the structure, and wherein the global structural information is the set of one or more extracted structural features. In another aspect of some but not necessarily all such embodiments, producing the set of one or more extracted structural features based on the global point cloud representation of the structure comprises determining a Point Feature Histogram (PFH) based on the global point cloud representation of the structure. In one possible alternative, producing the set of one or more extracted structural features based on the global point cloud representation of the structure comprises determining a Fast Point Feature Histogram (FPFH) based on the global point cloud representation of the structure.
In another aspect of some but not necessarily all embodiments consistent with the invention, identifying the structural feature involves downsampling the global point cloud representation of the structure to produce a downsampled global point cloud representation of the structure, wherein producing the set of one or more extracted structural features based on the global point cloud representation of the structure comprises producing the set of one or more extracted structural features from the downsampled global point cloud representation of the structure.
In yet another aspect of some but not necessarily all embodiments consistent with the invention, producing the one or more matching point clouds comprises identifying a set of datapoints within one of the clusters of datapoints as one of the one or more matching point clouds when a comparison between the set of datapoints within said one of the clusters of datapoints and the target structural feature produces a predefined comparison result.
In yet another aspect of some but not necessarily all embodiments consistent with the invention, the predefined comparison result is a predefined root mean square error between the set of datapoints within said one of the clusters of datapoints and the target structural feature.
In still another aspect of some but not necessarily all embodiments consistent with the invention, the feature clustering process is a RANdom SAmple Consensus (RANSAC) process.
In another aspect of some but not necessarily all embodiments consistent with the invention, the feature matching process comprises determining an Iterative Closest Point (ICP) value.
In yet another aspect of some but not necessarily all embodiments consistent with the invention, obtaining the global point cloud representation of the structure comprises one or more of:
In some but not necessarily all alternative embodiments consistent with the invention, obtaining the global point cloud representation of the structure comprises obtaining the point cloud data by converting a global non-point cloud representation of the structure into the global point cloud representation of the structure.
In some but not necessarily all alternative embodiments consistent with the invention, obtaining the global point cloud representation of the structure comprises receiving the global point cloud representation from a neural network that was trained using a dataset representing the structure.
In still another aspect of some but not necessarily all embodiments consistent with the invention, identifying the structural feature involves obtaining a rule that describes a volume associated with the target structural feature, and obtaining contextual information about a locale of the target structural feature. For each of the one or more matching point clouds, a corresponding volume is produced, wherein the corresponding volume has a mesh spacing, a size, and a pose, and wherein further the mesh spacing is produced in accordance with the rule; and the size and pose are each produced based on the contextual information about the locale of the target structural feature and on contextual information about the locale of said each of the one or more matching point clouds.
In another aspect of some but not necessarily all embodiments consistent with the invention, identifying the structural feature involves obtaining one or more additional target point clouds that represent the target structural feature, wherein the one or more additional target point clouds are differently sized from one another and from the target point cloud. For each of the one or more additional target point clouds, the global structural information and said each of the one or more additional target point clouds are supplied to the feature clustering process that produces therefrom one or more additional clustered representations of the structure. For each of the one or more additional target point clouds, the one or more additional clustered representations of the structure and said each of the one or more additional target point clouds are supplied to the feature matching process that produces therefrom one or more additional sets of one or more matching point clouds.
In another aspect of some but not necessarily all embodiments consistent with the invention, a computer program product is configured for carrying out any one or more of the aspects herein described.
In yet another aspect of some but not necessarily all embodiments consistent with the invention, a nontransitory computer readable storage medium comprises program instructions that, when executed by one or more processors, carries out any one or more of the aspects herein described.
In another aspect of some but not necessarily all embodiments consistent with the invention, a system is provided that comprises one or more processors configured to carry out any one or more of the aspects herein described.
In yet another aspect of some but not necessarily all embodiments consistent with the invention, a structural feature recognizer for use in computational engineering is provided, wherein the structural feature recognizer is configured to carry out any one or more of the aspects herein described.
In still another aspect of some but not necessarily all embodiments consistent with the invention, a computational engineering system is provided that comprises a structural feature recognizer for use in computational engineering, wherein the structural feature recognizer is configured to carry out any one or more of the aspects herein described.
The objects and advantages of the invention will be understood by reading the following detailed description in conjunction with the drawings in which:
The various features of the invention will now be described with reference to the figures, in which like parts are identified with the same reference characters.
The various aspects of the invention will now be described in greater detail in connection with a number of exemplary embodiments. To facilitate an understanding of the invention, many aspects of the invention are described in terms of sequences of actions to be performed by elements of a computer system or other hardware capable of executing programmed instructions. It will be recognized that in each of the embodiments, the various actions could be performed by specialized circuits (e.g., analog and/or discrete logic gates interconnected to perform a specialized function), by one or more processors programmed with a suitable set of instructions, or by a combination of both. The term “circuitry configured to” perform one or more described actions is used herein to refer to any such embodiment (i.e., one or more specialized circuits alone, one or more programmed processors, or any combination of these). Moreover, the invention can additionally be considered to be embodied entirely within any form of non-transitory computer readable carrier, such as solid-state memory, magnetic disk, or optical disk containing an appropriate set of computer instructions that would cause a processor to carry out the techniques described herein. Thus, the various aspects of the invention may be embodied in many different forms, and all such forms are contemplated to be within the scope of the invention. For each of the various aspects of the invention, any such form of embodiments as described above may be referred to herein as “logic configured to” perform a described action, or alternatively as “logic that” performs a described action.
One aspect of inventive embodiments is the use of a point cloud representation of a structural design. Unlike the conventional use of, for example, named elements and geometric types, the use of point clouds is more flexible and facilitates the adaptation of recent advances in machine vision technology for a new purpose, namely, solving feature recognition in a structural design.
Another aspect of inventive embodiments is feature recognition/identification by means of recognition of geometric similarity and registration. Unlike conventional technology, the embodiments consistent with the invention can detect similarity that is not constrained by dimension or type, because the underlying data structure is simply an arbitrary set of points in 3D space (i.e., a point cloud).
Yet another aspect of some but not necessarily all inventive embodiments is the modification of mesh generation rules based on locale-related contextual information and the application of the modified rules to automatically identified structural features.
These and other aspects are now described in further detail in the following discussion.
Registration algorithms are known in the art and are widely used in image processing applications. The goal of registration is typically to transform a collection of separate data sets so that they are all relative to a same coordinate system. This is useful, for example, when it is desired to overlay separately obtained images (e.g., any combination of photographs, sensor data, computer generated images, etc.) to produce a single image. One of many possible exemplary uses of registration can be found in augmented reality technology, in which computer generated graphic images are overlaid on one or more real world images. It is registration that allows the user to experience the computer generated image as existing at the intended location in the real world.
The inventors have recognized, through investigation and inventive skill, that registration algorithms can be used outside the context of image processing, and more particularly for the purpose of locating structural features that match a target feature of interest within a global structure. However, the fast global registration of sub-scale features within a much larger shape is a difficult problem, especially when point cloud representations are used. Machine vision systems for automation tend to assume a similar number of points and scales in the source and target point clouds, and registration is preceded by a segmentation step (splitting the target into multiple separate parts, one of which will match the source; for example, a moving car in a LIDAR-based machine vision system).
The inventors of the subject matter described herein have recognized that the functionality of registration can be applied to automatically locate features in a structure. However, unlike applications such as machine vision, feature matching presents the unique challenge of matching part of a contiguous target set to a source set of points where there may be no straightforward way to segment the data set a-priori.
The above mentioned challenges are resolved in the various inventive embodiments, such as that shown in
In an aspect of embodiments consistent with the invention, the process of recognizing features in an engineering model or structure and applying relevant rules to the onward computational process is uniquely based on points rather than any higher-order definition (such as CAD or imagery). This is because point clouds are a generic data representation that allows information about volumes (rather than surfaces) and measured (rather than computational) data values to be input to a single process without the need for specific data/file formats and without the need for representation types that are not suited to the data being represented—for example surface tufting on a 737 flight test. Therefore, one step is obtaining a point cloud representation of the structure (step 101).
Some, but not necessarily all, embodiments may further include the in-situ conversion of any other data representation to point clouds before global registration against the additional rule library is performed. This is unlike the geometry-based approaches, where rules are associated with named and tagged geometry (surfaces) before any discretization. This makes the current process ideally suited to the processing of computational engineering data sources based on product shape, regardless of representation. Accordingly, the step of obtaining the point cloud representation of the structure may need to include additional conversion steps. This is illustrated in
As shown in
Valuable information relevant to computational engineering can be obtained from physical experiments using real-world structures relating to computationally designed structures. In such cases, the data from physical experiments may be from any one or more of: electronic sensors, direct measurements that are transcribed, photography with interpretation, and other forms of data capture. Examples of electronic sensors include, but are not limited to, electro-optic, pressure gauge, thermal, infra-red, accelerometer, acoustic (microphone), and strain gauge sensors.
In each case, data points with values of interest are generated. Conversion to point cloud format is straightforward as the data points and values require no additional structure.
Except for the case in which data originates as a point cloud 201, the remaining formats are first converted into point cloud representation (step 213) before all data is collected as a single point cloud dataset 215.
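By way of non-limiting illustration, the following Python sketch (the function names are illustrative only and are not part of any embodiment) shows one way in which tessellation vertices and sensor records might be converted into, and collected as, a single point cloud dataset:

```python
import numpy as np

def mesh_vertices_to_points(vertices):
    """Treat mesh/tessellation vertices directly as point cloud points."""
    return np.asarray(vertices, dtype=float).reshape(-1, 3)

def sensor_records_to_points(records):
    """Each record is (x, y, z, value); keep the coordinates as points
    and carry the measured value alongside."""
    arr = np.asarray(records, dtype=float)
    return arr[:, :3], arr[:, 3]

def collect_point_cloud(*point_sets):
    """Merge the separately converted point sets into one global dataset."""
    return np.vstack(point_sets)
```

Because the data points and values require no additional structure, the conversion amounts to little more than reshaping each source into an N-by-3 array of coordinates.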
Referring back to
An optional next step is feature point extraction (step 105), in which the point cloud dataset is processed to produce data representing one or more extracted features that characterize the local geometry around one or more points in the point cloud dataset, where the extracted features are preferably pose invariant with good discriminative power. Algorithms for performing suitable feature point extraction, such as Point Feature Histogram (PFH) and Fast Point Feature Histogram (FPFH) are known in the art, as exemplified by Rusu et al., “Fast Point Feature Histograms (FPFH) for 3D Registration”, 2009 IEEE International Conference on Robotics and Automation, 2009, pp. 3212-3217, doi: 10.1109/ROBOT.2009.5152473. Accordingly, a thorough description of such algorithms is beyond the scope of this disclosure.
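By way of a simplified, non-limiting illustration (this is not the PFH or FPFH algorithm itself, but a didactic stand-in), the following Python sketch computes a pose-invariant characterization of the local geometry around a point: a normalized histogram of distances from the point to its nearest neighbours, which is unchanged by rigid rotation and translation of the cloud:

```python
import numpy as np

def local_distance_histogram(points, index, k=8, bins=4, r_max=1.0):
    """Simplified pose-invariant descriptor: a normalized histogram of
    distances from points[index] to its k nearest neighbours. Rigid
    motions preserve distances, so the descriptor is invariant to
    translation and rotation of the whole cloud."""
    p = points[index]
    d = np.linalg.norm(points - p, axis=1)
    neighbours = np.sort(d)[1:k + 1]          # skip the zero self-distance
    hist, _ = np.histogram(neighbours, bins=bins, range=(0.0, r_max))
    return hist / max(hist.sum(), 1)          # normalize for comparison
```

Descriptors of this general kind (PFH and FPFH use angular relations between estimated surface normals rather than raw distances) provide the discriminative power needed to compare local geometry across differently posed instances of a feature.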
Next, in step 107, a target point cloud is obtained. The target point cloud is a subset of the global point cloud representation and represents a structural feature of interest (e.g., to a designer of the structure). The target point cloud may be supplied by a user of the inventive technology, or by an Artificial Intelligence (AI) that has been trained to automatically identify types of features that are of interest.
As one goal of the design process is to generate suitable meshes for each structural feature of interest, wherein the meshes are not represented in the original global point cloud (e.g., in the examples presented herein, meshes that define vortexes generated by vortex generators), another step is obtaining a rule that describes the mesh that is to be generated for the target point cloud (step 109), taking into account contextual information related to the locale of the target point cloud. It is noted that a mesh for a target point cloud may be at the location of the target point cloud but is not required to be. To take one, non-limiting example, if the target point cloud represents a vortex generator, the required area of mesh refinement will be downstream of the vortex generator itself. As with the data representing the structure, rules may be initially represented in any data source.
Rules for computational engineering processes can be linked to specific features in the point cloud representation of products and associated data sources, but the rules themselves need not be in point cloud representation. An example is mesh generation for aerodynamics where certain aircraft body parts require special treatment, normally input by an expert or via feedback from a downstream process. After matching point clouds corresponding to additional instances of target structural features of interest are identified at different locations on the aircraft body (or more generally, on whatever type of structure is being designed), the computational engineering rules are applied at the feature locations, with a transformation in space according to the registration transformation, in order to generate appropriate meshes associated with the identified features.
As mentioned earlier, registration is a process in which separate datasets, having different coordinate systems, can be aligned to a common coordinate system, such as in image processing applications. The inventors have recognized that the finding of a point of alignment can also be useful in a computational engineering environment for indicating when a subset of points in a point cloud match those of a known feature, and for that reason, next steps include performance of a global registration that seeks to find a match between each target point cloud and the extracted features (or more generally, of the global point cloud in embodiments that do not incorporate feature extraction).
To perform this quickly, it is advantageous to first perform a global registration of the target point cloud with the global point cloud (step 111), and then to refine the initial feature alignment by performing a local registration of the target point cloud with the globally registered point cloud (using a different algorithm) (step 113). The output of the local registration is a set of one or more matching point clouds, corresponding to features of interest on the structure that match the target point cloud. Accordingly, in the exemplary embodiment, the first registration procedure can be, for example, a RANdom SAmple Consensus (RANSAC) process or similar. Global registration processes are known generally, outside the context of the herein-described inventive embodiments (e.g., for alignment of images of 3-dimensional (3D) shapes for the purpose of creating a single 3D image from separate images). See, for example, Zhou et al., “Fast Global Registration”, 9906. 10.1007/978-3-319-46475-6_47 (2016).
The local registration process can be, for example, an Iterative Closest Point (ICP) process or similar. The use of ICP for use outside the context of the herein-described inventive embodiments (e.g., for alignment of images of 3D shapes for the purpose of creating a single 3D image from separate images) is generally known in the art. See, for example, Gelfand et al., “Robust Global Registration”, SGP05: Eurographics Symposium on Geometry Processing, The Eurographics Association pp. 197-206 (2005).
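By way of non-limiting illustration of the local registration stage, the following Python sketch implements a minimal point-to-point ICP loop, using an SVD-based (Kabsch) solve for the best rigid alignment at each iteration. It is a didactic sketch of the technique, not a production registration algorithm; in particular, the brute-force nearest-neighbour search is chosen for clarity:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (the Kabsch algorithm), i.e. dst ~= src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=20):
    """Minimal point-to-point ICP: repeatedly pair each source point with
    its nearest target point, then solve for the best rigid alignment."""
    src = source.copy()
    for _ in range(iters):
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]      # nearest-neighbour pairing
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    return src
```

As the discussion below explains, ICP of this kind converges reliably only when the initial alignment is already close, which is why it is preceded by the global registration step.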
A benefit of the strategy in which global registration and local registration are performed in sequence is that the global registration of the target point cloud with the global point cloud conditions the point cloud data such that points are clustered around potential features of interest in the structure. This clustering has the effect of creating a subset of points for the local registration to consider, thereby enabling the local registration process to work satisfactorily. Otherwise, local registration by itself would be ineffective.
To ensure accuracy in the exemplary embodiment, a measure of registration fitness is produced for each of the matching point clouds identified by the registration processes. This can be done by, for example, producing a fitness estimate representing a distance between the target point cloud and each automatically identified matching point cloud (step 115). The measure of fitness can be, for example, a Root Mean Square Error (RMSE) value.
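By way of non-limiting illustration, the RMSE fitness estimate of step 115 can be computed as follows, assuming the candidate matching point cloud and the registered target point cloud have been brought into point-to-point correspondence:

```python
import numpy as np

def registration_rmse(matched, target):
    """Root Mean Square Error between corresponding points of a candidate
    matching point cloud and the registered target point cloud.
    Lower values indicate a better fit."""
    return float(np.sqrt(np.mean(np.sum((matched - target) ** 2, axis=1))))
```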
The measure of fitness of the matching point cloud is compared with a threshold value (representing a minimum acceptable level of fitness), and if the fitness of an identified matching point cloud satisfies the comparison (“Yes” path out of decision block 117), the identified matching point cloud and contextual information describing its associated locale are stored or otherwise made available to a designer (step 119).
Having identified matching point clouds of the target point cloud within the global point cloud, corresponding meshes for those matching point clouds are generated. Doing so requires that the rules for mesh generation take into consideration the context of those matching point clouds in relation to the context of the locale in which the target point cloud is situated (e.g., the pose and surrounding elements of each given point cloud), since the newly generated mesh should fit within the context of the matching point cloud's locale. Rules for generating computational engineering meshes at the locations of the identified matching point clouds are created by adapting/transforming the rules associated with the target point cloud (step 121) based on the relation between the locale of the target point cloud and that of the matching point cloud. The transformed rules are then applied to generate respective meshes for each matching point cloud (step 121).
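By way of non-limiting illustration, given the rigid transformation (R, t) produced by the registration, the set of rule points that defines the mesh spacing can be transposed to a matched feature's locale as follows (the function name is illustrative only):

```python
import numpy as np

def transpose_rule_points(rule_points, R, t):
    """Carry the mesh-spacing rule points from the target feature's locale
    to a matched feature's locale by applying the registration
    transformation: p -> R p + t (points stored as rows)."""
    return np.asarray(rule_points, dtype=float) @ np.asarray(R).T + np.asarray(t)
```

The spacing values associated with each rule point are unaffected by the rigid transformation; only the locations at which they apply are moved.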
An exemplary embodiment involving the locale-dependent rule transformation of step 121 is illustrated in
Referring back to decision block 117 in
Further aspects of some but not necessarily all inventive embodiments involve multiscale registration, which is advantageous in use cases in which the details of the mesh generation rules to be applied for an identified feature are not the same at different locales. For example, the same vortex generator shape may require a longer downstream volume of mesh refinement when located on the leading edge of a wing than when located near the windscreen of an aircraft. Considering this aspect in further detail, it will be appreciated that the use of deep learning algorithms for multiscale registration (matching point clouds and locales) requires the automatic generation of recognizable patterns of points at multiple length scales determined from the global model. For example, if one looks at the surface of a sphere at a very small distance from the surface, the shape appears to be flat; if the entire sphere is viewed, then one sees a set of points equidistant from a central point; and if one moves very far away, then one sees just a single point. For an aircraft, if one considers a vortex generator near the nose of the aircraft, one sees a vortex generator on a flat surface; or a vortex generator on a cylinder; or a vortex generator just in front of a flat surface (the window of the aircraft); or a vortex generator near the front of a long cylindrical body, all depending on the distance of the observation point relative to the structure (or equivalently, depending on the size of the point cloud under consideration). For the same vortex generator at the rear of the aircraft near the tail, one sees a vortex generator on a flat surface; or a vortex generator on a cylinder; or a vortex generator near a large vertical lifting surface (the tail) and a large horizontal lifting surface (the horizontal stabilizer); or, critically, a vortex generator near the rear of a large cylinder, where the fluid dynamic boundary layer is considerably thicker than at the front of the aircraft.
Again, what one sees is dependent on the size of the point cloud under consideration.
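By way of non-limiting illustration, one simple way to represent the global point cloud at multiple length scales is voxel-grid downsampling, keeping one representative point (here, the centroid) per occupied voxel at each scale. The following sketch is illustrative only; the function names are not part of any embodiment:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(int)
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in buckets.values()])

def multiscale_representation(points, scales):
    """Return the global cloud downsampled at each of N length scales;
    each scale can then act as a modifier to the base rule library."""
    return {s: voxel_downsample(points, s) for s in scales}
```

At a fine scale the cloud retains local detail (the vortex generator on a locally flat surface); at a coarse scale only the gross shape survives (the long cylindrical body), mirroring the observation-distance analogy above.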
In view of the above, in another aspect of some but not necessarily all inventive embodiments, it is often advantageous in the computational engineering design process to automatically represent the global point cloud at (N) multiple useful length scales, with associated library entries as multiscale segmentation, where each scale acts as a modifier to the base rule library. To illustrate this aspect, reference is made to
In other respects, the blocks depicted in
Further aspects of embodiments consistent with the invention are now described in the context of a non-limiting example.
Referring now to
The rules exist as a set of points which are used to define the mesh spacing. Once a target set of point cloud matches is found for the original vortex generator, the mesh generation rules are transposed to the new location(s).
In other aspects of some but not necessarily all embodiments consistent with the invention, the rules specified for each structural feature (e.g., in this example, a vortex generator) are used to determine the level of mesh refinement (spacing) nearby. Where the initial rules have been specified a-priori, the downstream use of the resulting mesh may confirm or contradict the need for such a spacing. For example, the spacing may be overly cautious (refined). In such cases, the gradients in the resulting flow solution would have been captured by a less refined (i.e., coarser) mesh. The associated rule can then be automatically modified so as to maintain solution accuracy with a better optimized spacing.
The best rules for any given identified feature may depend upon the feature context, and in some but not necessarily all embodiments, the rules are modified based on context. Contextual information includes:
In yet another aspect of some but not necessarily all inventive embodiments, data sources external to the computational fluid dynamics (CFD) process can also be used to modify the rules. For example, measurements of the flow on a real aircraft (pressure taps, tufting, oil scar photography) can establish the need for modified spacing rules when the measured data is made available in point cloud format.
Further aspects of some but not necessarily all inventive embodiments will be appreciated from the following program code (written in the Python programming language), which shows exactly how to identify the ten independent antisymmetric vortex generator pairs at the front of the aircraft's global point cloud 500 of
Further aspects of embodiments consistent with the invention will now be described with reference to
The memory device(s) 1105 store program means 1109 (e.g., a set of processor instructions) configured to cause the one or more processors 1103 to control other device elements so as to carry out any of the aspects described herein. The memory device(s) 1105 may also store data (not shown) representing various constant and variable parameters as may be needed by the one or more processors 1103 and/or as may be generated when carrying out its functions such as those specified by the program means 1109.
The following description facilitates a greater understanding of aspects of inventive embodiments already described above, and also describes further inventive embodiments.
As already stated earlier in this description, a benefit of the strategy in which global registration and local registration are performed in sequence is that the global registration of the target point cloud with the global point cloud conditions the point cloud data such that points are clustered around potential features of interest in the structure. This clustering has the effect of creating a subset of points for the local registration to consider, thereby enabling the local registration process to work satisfactorily.
Embodiments such as those described with reference to
A point cloud can be considered to be a projected discrete sampling of the level set of a (potentially lower or higher dimensional) function. A neural network encodes a function that is possibly not continuous, but a projected discrete sampling can nevertheless be extracted from it. As is shown in the following analysis, a discrete sampling of a neural network (at zero, specifically, in the case of a neural network that encodes a signed distance field) is a point cloud representation and can therefore be used as data inputs in the above-described embodiments.
As used herein, the term “projected” means that there exists a function that takes the output of another function and transforms it into the space that the point cloud is represented in. For example, a first function might return a polar coordinate representation of a point but the operations may require a rectilinear representation, so a transformational function is applied to the output of the first function.
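The polar-to-rectilinear example can be sketched minimally as follows; the names `polar_point` and `to_rectilinear` are illustrative assumptions of the sketch:

```python
import math

def polar_point():
    """A hypothetical first function that returns a point in polar
    coordinates (r, theta)."""
    return (2.0, math.pi / 2)

def to_rectilinear(polar):
    """The transformational (projection) function: maps the polar output
    into the rectilinear space the point cloud is represented in."""
    r, theta = polar
    return (r * math.cos(theta), r * math.sin(theta))
```

Here `to_rectilinear(polar_point())` yields (approximately) the rectilinear point (0.0, 2.0).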
A signed distance field is a continuous function that represents the distance from a smooth manifold embedded in a higher dimensional space. If one assumes a smooth manifold that does not have any nulls in the manifold space, together with a Riemannian metric that allows intrinsic and extrinsic normals to be defined, then the manifold has an intrinsic normal direction: the positive values of the signed distance field measure distance in the direction pointed at by that normal, and the negative values measure distance in the reverse direction. Given a signed distance function (equivalently called a "signed distance field"), the function can be discretely sampled within its null set; that is, where it evaluates to 0 (or to 0 within some specified tolerance), which means sampling along the surface being represented. Therefore, a point cloud can be encoded as a discrete sampling of a continuous function that represents a signed distance field φ.
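For illustration, such a discrete sampling of the null set of φ can be sketched as follows, using a sphere as the smooth manifold; the function names, bracketing interval, and tolerance are illustrative assumptions. Each random ray from the origin is bisected until the signed distance falls within tolerance of zero:

```python
import numpy as np

def sphere_sdf(p, radius=1.0):
    """Signed distance field of a sphere: negative inside, positive outside."""
    return float(np.linalg.norm(p)) - radius

def sample_null_set(sdf, n=100, tol=1e-6, seed=0):
    """Discretely sample the null set of `sdf` (its zero level set) by
    bisecting along random rays from the origin until |sdf| < tol."""
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < n:
        direction = rng.standard_normal(3)
        direction /= np.linalg.norm(direction)
        lo, hi = 0.0, 4.0                      # bracket along the ray
        if sdf(lo * direction) * sdf(hi * direction) > 0:
            continue                           # ray never crosses the surface
        while hi - lo > 1e-12:
            mid = 0.5 * (lo + hi)
            if sdf(lo * direction) * sdf(mid * direction) <= 0:
                hi = mid
            else:
                lo = mid
        q = 0.5 * (lo + hi) * direction
        if abs(sdf(q)) < tol:
            pts.append(q)
    return np.array(pts)
```

Every returned point lies on the represented surface (the unit sphere) to within the tolerance, so the result is exactly the kind of point-cloud encoding described above.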
This discrete sampling can therefore define the input domain of an injective identity function ϕ: P->Rn with ϕ(x)=x.
If there also exists a projection function π: H->F, then there is an equivalence between f and g, achieved by the formula:
One can choose π to be the identity function if F is a subspace of H. This can be ensured because g can be chosen to be ϕ−1(f(x)); that is, by applying the equivalent operations to the discrete sampling of the signed distance field. Thus, in practical embodiments it is advantageous simply to use a discrete sampling of the neural network to evaluate the feature functions, since this is an equivalent operation.
Neural networks are not necessarily continuous functions. However, since only the null space of the neural network's encoding is considered, there is no requirement that the encoded function be continuous, only that the null space coincide with the manifold embedded in the higher dimensional space. To illustrate this point, consider an image that approximates a surface with blocky looking pixels. In this instance, the edge of the shape is equivalently defined by the high-resolution and low-resolution versions, and the same holds in neural network space. It can be seen, then, that a neural network trained on a structure will produce the same output as a point cloud produced by a more traditional sampling of that structure. For this reason, for purposes of inventive embodiments described herein, the trained neural network is a point cloud of that structure, no different from any other encoding of that point cloud.
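The blocky-pixel analogy can be sketched as follows; this is an illustrative example, not part of the disclosed embodiments, and the names and grid size are assumptions. A piecewise-constant ("pixelated") encoding of a circle's signed distance field is not continuous, yet its null set, sampled to a tolerance as described above, traces the same edge as the continuous field:

```python
import numpy as np

def circle_sdf(p, radius=1.0):
    """Continuous signed distance field of a circle in the plane."""
    return float(np.hypot(p[0], p[1])) - radius

def blocky_sdf(p, radius=1.0, cell=0.05):
    """A discontinuous, 'pixelated' encoding of the same field: queries
    are snapped to a coarse grid before evaluation, so the function is
    piecewise constant, like a low-resolution image of the field."""
    snapped = np.round(np.asarray(p, dtype=float) / cell) * cell
    return circle_sdf(snapped, radius)

def null_samples(sdf, tol=0.05, n=400, seed=1):
    """Keep random plane points where |sdf| <= tol, i.e., sample the
    null set 'to some specified tolerance'."""
    rng = np.random.default_rng(seed)
    cand = rng.uniform(-1.5, 1.5, size=(n, 2))
    return np.array([p for p in cand if abs(sdf(p)) <= tol])
```

Sampling either encoding yields a point cloud lying on the unit circle, the continuous one to within the sampling tolerance and the blocky one to within the tolerance plus one grid cell, which is the sense in which both versions define the same edge.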
The invention has been described with reference to particular embodiments. However, it will be readily apparent to those skilled in the art that, having learned and then applying aspects of the technology from this disclosure, it is possible to embody the invention in specific forms other than those of the embodiments described above. Thus, the described embodiments are merely illustrative and should not be considered restrictive in any way. The scope of the invention is defined by the appended claims, rather than only by the preceding description, and all variations and equivalents which fall within the scope of the claims are intended to be embraced therein.
This application is a continuation-in-part of co-pending U.S. patent application Ser. No. 17/984,210, filed Nov. 9, 2022, which claims the benefit of U.S. Provisional Application No. 63/399,351, filed Aug. 19, 2022 (now expired), both of which applications are hereby incorporated herein by reference in their entireties.
Number | Date | Country
---|---|---
63399351 | Aug 2022 | US
 | Number | Date | Country
---|---|---|---
Parent | 17984210 | Nov 2022 | US
Child | 18799335 | | US