The present disclosure relates generally to aircraft inspection and, in particular, to the inspection of the skin of manufactured or assembled parts of aircraft using three-dimensional modeling.
In aircraft manufacturing, inspection of the skin of manufactured or assembled parts is performed to find defects and anomalies. Existing visual inspection of the skin of the aircraft relies primarily on human visual acuity and can therefore be subjective. In addition, visual inspection can be swayed by human interpretation. The ability to promptly identify and address aircraft skin defects and anomalies can minimize potential delays due to rework. In addition, there is a need for a consistent quality inspection process that can be implemented not only at a manufacturer's facility but also at other sites, where there may be less expertise on potential issues and inspection criteria than at the manufacturer's site.
In view of the above, a system for automated surface anomaly detection is provided, including at least one processor, communicatively coupled to non-volatile memory storing a 3D (three-dimensional) control image depicting at least a portion of an exterior of a control object and instructions that, when executed by the processor, cause the processor to: retrieve from the non-volatile memory the 3D control image; capture, by a plurality of cameras, a 3D target image depicting at least the portion of the exterior of a target object; receive the 3D target image captured by the plurality of cameras, the 3D target image depicting at least the portion of the exterior of the target object; generate 2D (two-dimensional) target planar images of the target object based on the 3D target image, using a first plurality of virtual cameras; generate 2D control planar images of the control object based on the 3D control image, the 2D control planar images corresponding to the 2D target planar images of the target object, using a second plurality of virtual cameras; detect at least one difference between the 2D target planar images and the 2D control planar images; generate an output image, wherein the output image comprises a depiction of the target object with the at least one difference indicated; and cause the output image to be displayed.
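For purposes of illustration only, the following is a minimal sketch of how such a pipeline could be organized in software. It is not the claimed implementation; the function name, the use of NumPy arrays for the planar images, the render callable, and the fixed detection threshold are assumptions introduced here for clarity.

```python
# Hypothetical, simplified sketch of the disclosed pipeline (not the claimed implementation).
import numpy as np

def detect_surface_anomalies(control_image_3d, target_image_3d, render, threshold=0.1):
    """Compare corresponding 2D planar renderings of a control object and a target object.

    control_image_3d / target_image_3d: 3D images (e.g., meshes or point clouds).
    render: callable returning a list of 2D planar images (float arrays in [0, 1]),
            one per virtual camera, in a fixed camera order so the views correspond.
    """
    control_views = render(control_image_3d)   # 2D control planar images
    target_views = render(target_image_3d)     # 2D target planar images
    results = []
    for ctrl, tgt in zip(control_views, target_views):
        diff = np.abs(tgt - ctrl)              # per-pixel difference between the pair
        mask = diff > threshold                # pixels flagged as differences
        results.append((tgt, mask))            # depiction of the target + indicated difference
    return results
```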
The features, functions, and advantages that have been discussed can be achieved independently in various embodiments or can be combined in yet other embodiments, further details of which can be seen with reference to the following description and drawings.
In view of the above issues, as shown in the figures, a system for automated surface anomaly detection is provided that includes at least one processor 14 communicatively coupled to non-volatile memory 16 storing a 3D control image 30 depicting at least a portion of an exterior of a control object, and a plurality of cameras 26 configured to capture a 3D target image 34 depicting at least the portion of the exterior of a target object.
In this example, a first camera 26a and a second camera 26b are used to capture the 3D target image 34. However, it will be appreciated that the quantity of cameras 26 is not particularly limited, and more than two cameras 26 can be used to capture the 3D target image 34. The cameras 26 can include at least one of a depth camera, a 3D scanner, a visual sensor, an RGB camera, a thermal camera, a LiDAR camera, or a combination thereof. The non-volatile memory 16 can further store a UV map 50, onto which the 3D control image 30 and the 3D target image 34 can be projected. The 3D target image 34 and the 3D control image 30 can be 3D point clouds generated from at least one of visual, thermal imaging, LiDAR, radar, or humidity sensors. The 3D control image 30 serves as a reference image for purposes of automated image comparison to the 3D target image 34.
The processor 14 receives the 3D target image 34 captured by the plurality of cameras, the 3D target image 34 depicting at least the portion of the exterior of the target object. Using a first plurality of virtual cameras 36, the processor 14 generates 2D target planar images 38 of the target object based on the 3D target image 34. Using a second plurality of virtual cameras 40, the processor 14 generates 2D control planar images 42 of the control object based on the 3D control image 30, the 2D control planar images 42 corresponding to the 2D target planar images 38 of the target object, so that the poses and locations in the 2D target planar images 38 correspond to those in the 2D control planar images 42. The first plurality of virtual cameras 36 and the second plurality of virtual cameras 40 can take the 2D target planar images 38 and the 2D control planar images 42 as perpendicularly as possible to the surfaces of the 3D target image 34 and the 3D control image 30, respectively. Images can be taken along the entire surface and the entire curvature of the 3D target image 34 so that as many anomalies as possible can be detected. Surface anomalies can include dents, holes, scratches, tears, chips, cracks, peels, burns, delaminations, melted metal, and missing sealant, for example.
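As an illustrative sketch only (the variable names, the look-at convention, and the distance value are assumptions, not part of the disclosure), a virtual camera can be positioned perpendicular to a triangular polygon of a 3D image as follows:

```python
import numpy as np

def virtual_camera_for_triangle(v0, v1, v2, distance=0.5):
    """Place a virtual camera facing one triangular polygon along its surface normal.

    v0, v1, v2: 3D vertices of the polygon (NumPy arrays of shape (3,)).
    distance: predetermined offset of the camera from the surface.
    Returns the camera position and its viewing direction (toward the surface).
    """
    normal = np.cross(v1 - v0, v2 - v0)
    normal = normal / np.linalg.norm(normal)   # unit surface normal
    centroid = (v0 + v1 + v2) / 3.0            # point on the surface the camera faces
    position = centroid + distance * normal    # camera offset along the normal
    view_direction = -normal                   # look perpendicular to the surface
    return position, view_direction
```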
The processor 14 can be configured so that an image acquisition module 52 of the processor 14 receives, as input, the 3D control image 30, the UV map 50, and the 3D target image 34, generates the 2D target planar images 38 and the 2D control planar images 42, and outputs the 2D target planar images 38 and the 2D control planar images 42.
The processor 14 generates 2D image pairs 44 pairing the 2D control planar images 42 to the 2D target planar images 38. To compare the 2D target planar images 38 to the 2D control planar images 42, the 2D image pairs 44 taken of the corresponding parts of the target object and the control object are compared side-by-side. It will be appreciated that comparing the control object and the target object side-by-side and in the same graphical representation eliminates format conflicts and mismatches between 3D graphical systems and models. For each 2D image pair 44, the 2D control planar image 42 and the 2D target planar image 38 are taken in the same pose and at the same location. The processor 14 detects at least one difference 46 or anomaly between the 2D target planar images 38 and the 2D control planar images 42, identifying the coordinates of the difference 46 in both 3D space and UV space in the UV map 50. The processor 14 then generates an output image 48 comprising a depiction of the target object with the at least one difference 46 indicated or annotated, and causes the output image 48 to be displayed on a display 28. The 2D image pair 44 and detected differences 46, including the coordinates of the identified differences 46 in both 3D space and UV space in the UV map 50, can be stored in the non-volatile memory 16 for later use in various applications, including the training of deep learning models for automatically classifying types of anomalies. A large body of images showing a target defect can be used to train a deep learning model to automatically classify the target defect. For example, thousands of image pairs showing an inward dent can be used to train a deep learning model to automatically classify the inward dent.
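As a purely illustrative sketch of how such an image pair and its detected differences could be stored for later training (the field names and types are assumptions, not part of the disclosure):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ImagePairRecord:
    """Hypothetical record for one 2D image pair and its detected differences."""
    control_image: np.ndarray                        # 2D control planar image
    target_image: np.ndarray                         # 2D target planar image, same pose/location
    uv_coords: list = field(default_factory=list)    # (u, v) coordinates of each difference
    xyz_coords: list = field(default_factory=list)   # (x, y, z) coordinates of each difference
    label: str = ""                                  # anomaly type, e.g., "inward dent", for training
```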
The processor 14 can be configured so that an image comparison module 54 of the processor 14 receives, as input, the 2D target planar images 38 and the 2D control planar images 42 as 2D image pairs 44, compares the 2D image pairs 44 taken of the parts for both the target object and the control object side-by-side, detects at least one difference 46 between the 2D target planar images 38 and the 2D control planar images 42, generates an output image 48 comprising a depiction of the target object with the at least one difference 46 indicated or annotated, and outputs the output image 48. The image comparison module 54 can accept image comparison segmentation parameters 56 for controlling how the difference 46 is indicated or annotated on the 2D image pairs 44. In this example, the difference 46 is indicated or annotated by a circle.
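Solely to illustrate one way such a side-by-side comparison and circular annotation could be realized (here with OpenCV, which the disclosure does not require; the diff_threshold and min_area values stand in for image comparison segmentation parameters and are assumptions):

```python
import cv2
import numpy as np

def compare_and_annotate(control_img, target_img, diff_threshold=30, min_area=25):
    """Detect differences between an aligned 2D image pair and circle them on the target image.

    control_img, target_img: uint8 grayscale images taken in the same pose and location.
    diff_threshold, min_area: example segmentation parameters controlling what is annotated.
    """
    diff = cv2.absdiff(target_img, control_img)                       # per-pixel difference
    _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    output = cv2.cvtColor(target_img, cv2.COLOR_GRAY2BGR)             # depiction of the target object
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue                                                  # ignore small, noisy regions
        (x, y), radius = cv2.minEnclosingCircle(contour)
        cv2.circle(output, (int(x), int(y)), int(radius) + 5, (0, 0, 255), 2)  # annotate with a circle
    return output
```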
Referring to the flowchart of the accompanying figures, a method for automated surface anomaly detection is described below.
At step 202, a 3D target image depicting at least a portion of an exterior of a target object is captured by a plurality of cameras, which can include a 3D scanner or at least one depth camera. At step 204, a 3D control image, which can comprise polygons, is retrieved from non-volatile memory. At step 206, the 3D target image captured by the plurality of cameras is received, the 3D target image depicting at least the portion of the exterior of the target object. The 3D target image can comprise polygons. The target object and the control object can be aircraft or aircraft components.
At step 208, 2D target planar images of the target object are generated based on the 3D target image, using a first plurality of virtual cameras. Step 208 can include step 208a, at which the 3D target image is projected in UV space. Step 208a can include step 208b, in which, for every point inside the UV space, a corresponding point in 3D space is identified using Barycentric coordinates. Step 208 can further include step 208c, in which the first plurality of virtual cameras are arranged in a grid in the UV space, instantiated at positions in 3D space facing each of the polygons in a normal direction at a predetermined distance from each of the polygons. Step 208 can further include step 208d, in which the 3D target image is manipulated so that point of view angles, lighting and associated shadows, and/or virtual camera distances from the target object in the 2D target planar images match those from the control object in the 2D control planar images. Accordingly, lighting condition biases are removed.
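As an illustrative sketch of the Barycentric mapping described in step 208b (the variable names and array shapes are assumptions), a point inside a triangle in UV space can be mapped to the corresponding point in 3D space as follows:

```python
import numpy as np

def uv_to_3d(p_uv, tri_uv, tri_xyz):
    """Map a point in UV space to 3D space using Barycentric coordinates.

    p_uv:    (2,) point inside the triangle, in UV space.
    tri_uv:  (3, 2) UV coordinates of the triangle's three vertices.
    tri_xyz: (3, 3) 3D coordinates of the same three vertices.
    """
    a, b, c = tri_uv
    v0, v1, v2 = b - a, c - a, p_uv - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom   # Barycentric weight of vertex b
    w2 = (d00 * d21 - d01 * d20) / denom   # Barycentric weight of vertex c
    w0 = 1.0 - w1 - w2                     # Barycentric weight of vertex a
    return w0 * tri_xyz[0] + w1 * tri_xyz[1] + w2 * tri_xyz[2]
```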
At step 210, based on the 3D control image, 2D control planar images of the control object are generated corresponding to the 2D target planar images of the target object, using a second plurality of virtual cameras. Step 210 can include step 210a, in which the 3D control image is projected in UV space. Step 210a can include step 210b, in which, for every point inside the UV space, a corresponding point in 3D space is identified using Barycentric coordinates. Step 210 can further include step 210c, in which the second plurality of virtual cameras are arranged in a grid in the UV space, instantiated at positions in 3D space facing each of the polygons in a normal direction at a predetermined distance from each of the polygons. Step 210 can further include step 210d, in which the 3D control image is manipulated so that point of view angles, lighting and associated shadows, and/or virtual camera distances from the control object in the 2D control planar images match those from the target object in the 2D target planar images.
At step 212, at least one difference is detected between the 2D target planar images and the 2D control planar images. At step 214, an output image is generated comprising a depiction of the target object with the at least one difference indicated or annotated. At step 216, the output image is caused to be displayed.
The systems and processes described herein have the potential benefit of replacing subjective visual inspections based on human visual acuity and swayed by human interpretation with objective, computer-aided inspections based on unbiased evaluation of image data. This enables the inspection of the surfaces of manufactured parts in a simulated virtual and automated environment, in which lighting conditions can be controlled to be uniform. In other words, uniform light intensity and light angle can be achieved with light condition biases eliminated. Thus, thorough coverage of the surfaces of any manufactured part or component, including micro-scale and oversized parts and components, can be achieved without inadvertently missing any areas due to poor lighting conditions. Accordingly, expensive image capture apparatuses with rails or other support structures, multitudes of movable and static high definition cameras, and sensors to detect reflections, shadows, and other transient visual phenomena can be dispensed with to reduce costs and increase efficiencies in aircraft inspection. Indeed, unlike physical cameras, there is no physical limitation on the types and numbers of virtual cameras that can be applied to capture 2D planar images. Furthermore, expensive localization and mapping processing is eliminated by identifying the coordinates of identified anomalies in both 3D space and UV space in the UV map. Having both the control object and the target object side-by-side and in the same graphical representation eliminates format conflicts and mismatches between different 3D graphical systems and models. Since modifications to the control object have the exact same effect on the target object, statistical quality control can be achieved, especially for the design of experiments where control variables are needed to identify significant factors affecting the quality and behavior of external surfaces.
The non-volatile storage device 406 stores various instructions, also referred to as software, that are executed by the logic processor 402. Logic processor 402 includes one or more physical devices configured to execute the instructions. For example, the logic processor 402 can be configured to execute instructions that are part of one or more applications, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions can be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
The logic processor 402 can include one or more physical processors (hardware) configured to execute software instructions. Additionally or alternatively, the logic processor 402 can include one or more hardware logic circuits or firmware devices configured to execute hardware-implemented logic or firmware instructions. Processors of the logic processor 402 can be single-core or multi-core, and the instructions executed thereon can be configured for sequential, parallel, and/or distributed processing. Individual components of the logic processor 402 optionally can be distributed among two or more separate devices, which can be remotely located and/or configured for coordinated processing. Aspects of the logic processor 402 can be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration. In such a case, it will be understood that these virtualized aspects can be run on different physical logic processors of various different machines.
Non-volatile storage device 406 includes one or more physical devices configured to hold instructions executable by the logic processor 402 to implement the methods and processes described herein. When such methods and processes are implemented, the state of non-volatile storage device 406 can be transformed—e.g., to hold different data.
Non-volatile storage device 406 can include physical devices that are removable and/or built-in. Non-volatile storage device 406 can include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., ROM, EPROM, EEPROM, FLASH memory, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), or other mass storage device technology. Non-volatile storage device 406 can include nonvolatile, dynamic, static, read/write, read-only, sequential-access, location-addressable, file-addressable, and/or content-addressable devices. It will be appreciated that non-volatile storage device 406 is configured to hold instructions even when power is cut to the non-volatile storage device 406.
Volatile memory 404 can include physical devices that include random access memory. Volatile memory 404 is typically utilized by logic processor 402 to temporarily store information during processing of software instructions. It will be appreciated that volatile memory 404 typically does not continue to store instructions when power is cut to the volatile memory 404.
Aspects of logic processor 402, volatile memory 404, and non-volatile storage device 406 can be integrated together into one or more hardware-logic components. Such hardware-logic components can include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
The terms “module,” “program,” and “engine” can be used to describe an aspect of the computing system 10 typically implemented in software by a processor to perform a particular function using portions of volatile memory, which function involves transformative processing that specially configures the processor to perform the function. Thus, a module, program, or engine can be instantiated via logic processor 402 executing instructions held by non-volatile storage device 406, using portions of volatile memory 404. It will be understood that different modules, programs, and/or engines can be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine can be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms “module,” “program,” and “engine” can encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
Display subsystem 408 typically includes one or more displays, which can be physically integrated with or remote from a device that houses the logic processor 402. Graphical output of the logic processor executing the instructions described above, such as a graphical user interface, is configured to be displayed on display subsystem 408.
Input subsystem 410 typically includes one or more of a keyboard, pointing device (e.g., mouse, trackpad, finger operated pointer), touchscreen, microphone, and camera. Other input devices can also be provided.
Communication subsystem 412 is configured to communicatively couple various computing devices described herein with each other, and with other devices. Communication subsystem 412 can include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem can be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network by devices such as a 3G, 4G, 5G, or 6G radio, WIFI card, ethernet network interface card, BLUETOOTH® radio, etc. In some embodiments, the communication subsystem can allow computing system 10 to send and/or receive messages to and/or from other devices via a network such as the Internet. It will be appreciated that one or more of the computer networks via which communication subsystem 412 is configured to communicate can include security measures such as user identification and authentication, access control, malware detection, enforced encryption, content filtering, etc., and can be coupled to a wide area network (WAN) such as the Internet.
The subject disclosure includes all novel and non-obvious combinations and subcombinations of the various features and techniques disclosed herein. The various features and techniques disclosed herein are not necessarily required of all examples of the subject disclosure. Furthermore, the various features and techniques disclosed herein can define patentable subject matter apart from the disclosed examples and can find utility in other implementations not expressly disclosed herein.
To the extent that terms “includes,” “including,” “has,” “contains,” and variants thereof are used herein, such terms are intended to be inclusive in a manner similar to the term “comprises” as an open transition word without precluding any additional or other elements.
It will be appreciated that “and/or” as used herein refers to the logical disjunction operation, and thus A and/or B has the following truth table.

A | B | A and/or B
---|---|---
True | True | True
True | False | True
False | True | True
False | False | False
Further, the disclosure comprises configurations according to the following clauses.
Clause 1. A system for automated surface anomaly detection, the system comprising: at least one processor, communicatively coupled to non-volatile memory storing a 3D control image depicting at least a portion of an exterior of a control object and instructions that, when executed by the processor, cause the processor to: retrieve from the non-volatile memory the 3D control image; capture, by a plurality of cameras, a 3D target image depicting at least the portion of the exterior of a target object; receive the 3D target image captured by the plurality of cameras, the 3D target image depicting at least the portion of the exterior of the target object; generate 2D target planar images of the target object based on the 3D target image, using a first plurality of virtual cameras; generate 2D control planar images of the control object based on the 3D control image, the 2D control planar images corresponding to the 2D target planar images of the target object, using a second plurality of virtual cameras; detect at least one difference between the 2D target planar images and the 2D control planar images; generate an output image, wherein the output image comprises a depiction of the target object with the at least one difference indicated; and cause the output image to be displayed.
Clause 2. The system of clause 1, wherein, to generate the 2D target planar images of the target object and the 2D control planar images of the control object, the processor manipulates the 3D target image and/or the 3D control image so that point of view angles, lighting and associated shadows, and/or virtual camera distances from the target object of the 2D target planar images match point of view angles, lighting and associated shadows, and/or virtual camera distances from the control object of the 2D control planar images.
Clause 3. The system of clause 1 or 2, wherein the processor projects the 3D control image and the 3D target image in UV space.
Clause 4. The system of clause 3, wherein the processor arranges the first plurality of virtual cameras and the second plurality of virtual cameras in a grid in the UV space.
Clause 5. The system of clause 3 or 4, wherein, for every point inside the UV space, the processor identifies a corresponding point in 3D space using Barycentric coordinates.
Clause 6. The system of any of clauses 1 to 5, wherein the 3D control image and the 3D target image comprise polygons.
Clause 7. The system of clause 6, wherein the virtual cameras are instantiated at positions in 3D space facing each of the polygons in a normal direction at a predetermined distance from each of the polygons.
Clause 8. The system of any of clauses 1 to 7, wherein the target object and the control object are aircraft or aircraft components.
Clause 9. The system of any of clauses 1 to 8, wherein the at least one difference, the 2D target planar images, and the 2D control planar images are stored in the non-volatile memory and subsequently used to train a deep learning model to classify the difference.
Clause 10. A method for automated surface anomaly detection, the method comprising: retrieving from non-volatile memory a 3D control image; capturing, by a plurality of cameras, a 3D target image depicting at least a portion of an exterior of a target object; receiving the 3D target image captured by the plurality of cameras, the 3D target image depicting at least the portion of the exterior of the target object; generating 2D target planar images of the target object based on the 3D target image, using a first plurality of virtual cameras; generating 2D control planar images of a control object based on the 3D control image, the 2D control planar images corresponding to the 2D target planar images of the target object, using a second plurality of virtual cameras; detecting at least one difference between the 2D target planar images and the 2D control planar images; generating an output image, wherein the output image comprises a depiction of the target object with the at least one difference indicated; and causing the output image to be displayed.
Clause 11. The method of clause 10, wherein, to generate the 2D target planar images of the target object and the 2D control planar images of the control object, the 3D target image and/or the 3D control image is manipulated so that point of view angles, lighting and associated shadows, and/or virtual camera distances from the target object of the 2D target planar images match point of view angles, lighting and associated shadows, and/or virtual camera distances from the control object of the 2D control planar images.
Clause 12. The method of clause 10 or 11, wherein the 3D control image and the 3D target image are projected in UV space.
Clause 13. The method of any of clauses 10 to 12, wherein the first plurality of virtual cameras and the second plurality of virtual cameras are arranged in a grid in the UV space.
Clause 14. The method of any of clauses 10 to 13, wherein, for every point inside the UV space, a corresponding point in 3D space is identified using Barycentric coordinates.
Clause 15. The method of any of clauses 10 to 14, wherein the 3D control image and the 3D target image comprise polygons.
Clause 16. The method of clause 15, wherein the virtual cameras are instantiated at positions in 3D space facing each of the polygons in a normal direction at a predetermined distance from each of the polygons.
Clause 17. The method of any of clauses 10 to 16, wherein the target object and the control object are aircraft or aircraft components.
Clause 18. The method of any of clauses 10 to 17, wherein the at least one difference, the 2D target planar images, and the 2D control planar images are stored in the non-volatile memory and subsequently used to train a deep learning model to classify the difference.
Clause 19. A system for automated surface anomaly detection, the system comprising: non-volatile memory storing instructions and a 3D control image depicting at least a portion of an exterior of a control aircraft; a plurality of cameras disposed to capture a 3D target image of a portion of an exterior of a target aircraft corresponding to the portion of the exterior of the control aircraft; and at least one electronic processor, communicatively coupled to the non-volatile memory and the plurality of cameras, that executes the instructions to cause the processor to: retrieve from the non-volatile memory the 3D control image; capture, by the plurality of cameras, the 3D target image depicting at least the portion of the exterior of the target aircraft; receive the 3D target image captured by the plurality of cameras, the 3D target image depicting at least the portion of the exterior of the target aircraft; project the 3D control image and the 3D target image in UV space; identify a corresponding point in 3D space using Barycentric coordinates for every point inside the UV space; generate 2D target planar images of the target aircraft based on the 3D target image, using a first plurality of virtual cameras arranged in a grid in the UV space; generate 2D control planar images of the control aircraft based on the 3D control image, the 2D control planar images corresponding to the 2D target planar images of the target aircraft, using a second plurality of virtual cameras arranged in the grid in the UV space, the first plurality of virtual cameras and the second plurality of virtual cameras instantiated at identical positions in the 3D space; detect at least one anomaly between the 2D target planar images and the 2D control planar images; generate an output image, wherein the output image comprises a depiction of the target aircraft with the at least one anomaly annotated; and cause the output image to be displayed.
Clause 20. The system of clause 19, wherein the 3D target image and the 3D control image are 3D point clouds of at least one of thermal imaging, LiDAR, radar, or humidity sensors.
This application claims priority to U.S. Provisional Patent Application Ser. No. 63/159,387, filed Mar. 10, 2021, the entirety of which is hereby incorporated herein by reference for all purposes.