The subject matter disclosed herein relates to systems and methods for deep-learning based autonomous identification of heterogeneous phantom regions.
Volumetric medical imaging technologies use a variety of techniques to gather three-dimensional information about the body. For example, a computed tomography (CT) imaging system measures the attenuation of X-ray beams passed through a patient from numerous angles. Based upon these measurements, a computer is able to reconstruct images of the portions of a patient's body responsible for the radiation attenuation. As will be appreciated by those skilled in the art, these images are based upon separate examination of a series of angularly displaced measurements. A CT system produces data that represent the distribution of linear attenuation coefficients of the scanned object. The data are then reconstructed to produce an image that is typically displayed on a screen, and may be printed or reproduced on film.
Phantoms are commonly utilized in the CT system during the calibration process and for quality assurance (QA) testing. For a homogeneous phantom (e.g., constructed using a single material whose attenuation coefficient does not change within the phantom cross section), it is easy to identify a region within the phantom. A heterogeneous phantom (e.g., constructed using multiple materials whose attenuation coefficients change within a phantom image depending on the material) is widely used in QA of single energy and dual energy CT scanners for determining and confirming various image quality metrics. Currently, manual intervention is required to identify the regions of interest (ROIs) for different materials as an intermediate step of the QA testing. In addition, a heterogeneous phantom can be advantageous for calibration purposes in multi-energy photon counting CT scanners. Thus, correct and autonomous identification of heterogeneous phantom ROIs is needed to execute such tasks.
Certain embodiments commensurate in scope with the originally claimed subject matter are summarized below. These embodiments are not intended to limit the scope of the claimed subject matter, but rather these embodiments are intended only to provide a brief summary of possible embodiments. Indeed, the invention may encompass a variety of forms that may be similar to or different from the embodiments set forth below.
In one embodiment, a computer-implemented method is provided. The method includes obtaining, at a processor, a tomographic image of a phantom, wherein the phantom includes heterogeneous regions having a plurality of materials with varying attenuation coefficients. The method also includes automatically segmenting, via the processor, the heterogeneous regions from the tomographic image to generate a segmented image. The method further includes automatically identifying, via the processor, a plurality of regions of interest having varying attenuation coefficients within the tomographic image based on the segmented image. The method still further includes automatically labeling, via the processor, each region of interest of the plurality of regions of interest as representing a particular material of the plurality of materials. The method even further includes outputting, via the processor, a labeled image of the tomographic image.
In one embodiment, a computer-implemented method is provided. The method includes beginning, via a processor, a calibration of a tomographic imaging system. The method also includes obtaining, via the processor, a tomographic image of a phantom utilizing the tomographic imaging system, wherein the phantom includes heterogeneous regions having a plurality of materials with varying attenuation coefficients. The method also includes automatically segmenting, via the processor, the heterogeneous regions from the tomographic image to generate a segmented image. The method further includes automatically identifying, via the processor, a plurality of regions of interest having varying attenuation coefficients within the tomographic image based on the segmented image. The method still further includes automatically labeling, via the processor, each region of interest of the plurality of regions of interest as representing a particular material of the plurality of materials. The method further includes calculating, via the processor, a respective calibration vector for one or more regions of interest of the plurality of regions of interest based on the labeling of each region of interest.
In one embodiment, a computer-implemented method is provided. The method includes beginning, via a processor, quality assurance testing of a tomographic imaging system. The method includes obtaining, via the processor, a tomographic image of a phantom utilizing the tomographic imaging system, wherein the phantom includes heterogeneous regions having a plurality of materials with varying attenuation coefficients. The method also includes automatically segmenting, via the processor, the heterogeneous regions from the tomographic image to generate a segmented image. The method further includes automatically identifying, via the processor, a plurality of regions of interest having varying attenuation coefficients within the tomographic image based on the segmented image. The method still further includes automatically labeling, via the processor, each region of interest of the plurality of regions of interest as representing a particular material of the plurality of materials. The method even further includes calculating, via the processor, image quality metrics based on the labeling of each region of interest.
These and other features, aspects, and advantages of the present subject matter will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present subject matter, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Furthermore, any numerical examples in the following discussion are intended to be non-limiting, and thus additional numerical values, ranges, and percentages are within the scope of the disclosed embodiments.
Some generalized information is provided below both to give general context for aspects of the present disclosure and to facilitate understanding and explanation of certain of the technical concepts described herein.
Deep-learning (DL) approaches discussed herein may be based on artificial neural networks, and may therefore encompass one or more of deep neural networks, fully connected networks, convolutional neural networks (CNNs), perceptrons, encoder-decoders, recurrent networks, wavelet filter banks, u-nets, generative adversarial networks (GANs), or other neural network architectures. The neural networks may include shortcuts, activations, batch-normalization layers, and/or other features. These techniques are referred to herein as deep-learning techniques, though this terminology may also be used specifically in reference to the use of deep neural networks, which are neural networks having a plurality of layers.
As discussed herein, deep-learning techniques (which may also be known as deep machine learning, hierarchical learning, or deep structured learning) are a branch of machine learning techniques that employ mathematical representations of data and artificial neural networks for learning and processing such representations. By way of example, deep-learning approaches may be characterized by their use of one or more algorithms to extract or model high-level abstractions of a type of data of interest. This may be accomplished using one or more processing layers, with each layer typically corresponding to a different level of abstraction and, therefore, potentially employing or utilizing different aspects of the initial data or of the outputs of a preceding layer (i.e., a hierarchy or cascade of layers) as the target of the processes or algorithms of a given layer. In an image processing or reconstruction context, this may be characterized as different layers corresponding to different feature levels or resolutions in the data. In general, the processing from one representation space to the next-level representation space can be considered as one ‘stage’ of the process. Each stage of the process can be performed by separate neural networks or by different parts of one larger neural network.
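Purely as an illustrative aside (not a network described in this disclosure), a small layered Keras model makes the notion of stages concrete: each convolution block operates on the representation produced by the block before it. The layer counts and sizes below are arbitrary assumptions.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    # Low-level features (edges, boundaries) extracted from a single-channel CT slice.
    layers.Conv2D(16, 3, padding="same", activation="relu", input_shape=(512, 512, 1)),
    layers.MaxPooling2D(),
    # Mid-level features built from the preceding layer's representation.
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    # Higher-level abstractions; each block can be viewed as one "stage".
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.Conv2D(1, 1, activation="sigmoid"),  # per-pixel output map
])
model.summary()
```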
The present disclosure provides systems and methods for deep-learning based autonomous identification of heterogeneous phantom regions (e.g., via a region of interest location algorithm). The disclosed embodiments include obtaining or acquiring (e.g., via a CT scanner) a tomographic image of a phantom. The phantom includes heterogeneous regions having a plurality of materials (e.g., within rods) with varying or different attenuation coefficients. The disclosed embodiments also include automatically segmenting (e.g., via a trained deep neural network) the heterogeneous regions from the tomographic image to generate a segmented image (e.g., a segmentation mask). The disclosed embodiments further include automatically identifying a plurality of regions of interest having varying attenuation coefficients within the tomographic image based on the segmented image. In particular, the disclosed embodiments include automatically determining an angle of rotation of the phantom relative to a properly positioned phantom (via comparison of the segmentation mask to a reference image (e.g., a reference or template segmentation mask serving as a ground truth) for a properly positioned phantom). The disclosed embodiments yet further include automatically labeling each region of interest of the plurality of regions of interest as representing a particular material of the plurality of materials. The disclosed embodiments still further include outputting a labeled or annotated image of the tomographic image with the labeled regions of interest. In certain embodiments, the output also includes a rotational angle of the phantom within the tomographic image and/or a chart having at least a label and the particular material for each region of interest. The disclosed embodiments enable the use of a multi-structure phantom in the calibration and QA process without modifying any phantom hardware (e.g., adding a tracer/marker to identify regions). In addition, the deep neural network can be trained with any phantom under any scanning protocol. The ability to automatically identify the regions of interest within the heterogeneous phantom eliminates the need for manual intervention while ensuring the efficacy of the region identification.
With the foregoing discussion in mind.
In certain implementations, the source 12 may be positioned proximate to a collimator 22 used to define the size and shape of the one or more X-ray beams 20 that pass into a region in which a subject 24 (e.g., a patient) or object of interest is positioned. The subject 24 attenuates at least a portion of the X-rays. Resulting attenuated X-rays 26 impact a detector array 28 formed by a plurality of detector elements. Each detector element produces an electrical signal that represents the intensity of the X-ray beam incident at the position of the detector element when the beam strikes the detector 28. Electrical signals are acquired and processed to generate one or more scan datasets or reconstructed images.
A system controller 30 commands operation of the imaging system 10 to execute examination and/or calibration protocols and to process the acquired data. With respect to the X-ray source 12, the system controller 30 furnishes power, focal spot location, control signals and so forth, for the X-ray examination sequences. The detector 28 is coupled to the system controller 30, which commands acquisition of the signals generated by the detector 28. In addition, the system controller 30, via a motor controller 36, may control operation of a linear positioning subsystem 32 and/or a rotational subsystem 34 used to move components of the imaging system 10 and/or the subject 24. The system controller 30 may include signal processing circuitry and associated memory circuitry. In such embodiments, the memory circuitry may store programs, routines, and/or encoded algorithms executed by the system controller 30 to operate the imaging system 10, including the X-ray source 12, and to process the data acquired by the detector 28 in accordance with the steps and processes discussed herein. In one embodiment, the system controller 30 may be implemented as all or part of a processor-based system such as a general purpose or application-specific computer system.
The source 12 may be controlled by an X-ray controller 38 contained within the system controller 30. The X-ray controller 38 may be configured to provide power and timing signals to the source 12. The system controller 30 may include a data acquisition system (DAS) 40. The DAS 40 receives data collected by readout electronics of the detector 28, such as sampled analog signals from the detector 28. The DAS 40 may then convert the data to digital signals for subsequent processing by a processor-based system, such as a computer 42. In other embodiments, the detector 28 may convert the sampled analog signals to digital signals prior to transmission to the DAS 40. The computer 42 may include processing circuitry 44 (e.g., image processing circuitry). The computer 42 may include or communicate with one or more non-transitory memory devices 46 that can store data processed by the computer 42, data to be processed by the computer 42, or instructions to be executed by a processor (e.g., processing circuitry 44) of the computer 42. For example, the processing circuitry 44 of the computer 42 may execute one or more sets of instructions stored on the memory 46, which may be a memory of the computer 42, a memory of the processor, firmware, or a similar instantiation. In accordance with present embodiments, the memory 46 stores sets of instructions that, when executed by the processor, perform image processing methods as discussed herein. The memory 46 also stores one or more algorithms and/or neural networks 47 that may be utilized in autonomous identification of heterogeneous phantom regions as described in greater detail below.
The computer 42 may also be adapted to control features enabled by the system controller 30 (i.e., scanning operations and data acquisition), such as in response to commands and scanning parameters provided by an operator via an operator workstation 48. The system 10 may also include a display 50 coupled to the operator workstation 48 that allows the operator to view relevant system data, to observe reconstructed images, to control imaging, and so forth. Additionally, the system 10 may include a printer 52 coupled to the operator workstation 48 and configured to print images. The display 50 and the printer 52 may also be connected to the computer 42 directly or via the operator workstation 48. Further, the operator workstation 48 may include or be coupled to a picture archiving and communications system (PACS) 54. PACS 54 may be coupled to a remote system 56, radiology department information system (RIS), hospital information system (HIS) or to an internal or external network, so that others at different locations can gain access to the image data.
Further, the computer 42 and operator workstation 48 may be coupled to other output devices, which may include standard or special purpose computer monitors and associated processing circuitry. One or more operator workstations 48 may be further linked in the system for outputting system parameters, requesting examinations, viewing images, and so forth. In general, displays, printers, workstations, and similar devices supplied within the system may be local to the data acquisition components, or may be remote from these components, such as elsewhere within an institution or hospital, or in an entirely different location, linked to the image acquisition system via one or more configurable networks, such as the Internet, virtual private networks, and so forth.
While the preceding discussion has treated the various exemplary components of the imaging system 10 separately, these various components may be provided within a common platform or in interconnected platforms. For example, the computer 42, memory 46, and operator workstation 48 may be provided collectively as a general or special purpose computer or workstation configured to operate in accordance with the aspects of the present disclosure. In such embodiments, the general or special purpose computer may be provided as a separate component with respect to the data acquisition components of the system 10 or may be provided in a common platform with such components. Likewise, the system controller 30 may be provided as part of such a computer or workstation or as part of a separate system dedicated to image acquisition.
With the preceding discussion of an overall imaging system 10 in mind, and turning to
As described herein, techniques (e.g., a region of interest location algorithm) may be utilized for automatically identifying the regions (e.g., rods) of varying or different attenuation coefficients within the heterogeneous phantom. In particular, the techniques utilize a holistic approach combining system hardware and a deep learning-based model to identify these regions within the heterogeneous phantom.
The method 66 includes obtaining or acquiring a tomographic image of a heterogeneous phantom (block 68). The tomographic image may be obtained utilizing a predefined protocol (e.g., kV, mA, bowtie, rotation speed, aperture, etc.).
The method 66 also includes automatically segmenting heterogeneous regions (e.g., having a plurality of materials with varying or different attenuation coefficients) from the tomographic image to generate a segmented image (e.g., a segmentation mask) (block 70). A trained deep neural network may be utilized for performing the segmentation of the heterogeneous regions. In certain embodiments, alternative techniques (e.g., rule-based if-then statements) may be utilized in performing the segmentation of the heterogeneous regions.
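By way of illustration only, the segmentation step may resemble the following sketch, which assumes a trained Keras segmentation network saved as "phantom_unet.h5" (a hypothetical file name) and a single two-dimensional slice supplied as a NumPy array; the normalization and 0.5 threshold are likewise illustrative assumptions rather than values required by the disclosure.

```python
import numpy as np
from tensorflow.keras.models import load_model

def segment_phantom(ct_slice, model_path="phantom_unet.h5"):
    """Return a binary segmentation mask of the heterogeneous regions."""
    model = load_model(model_path)
    # Normalize and reshape to the (batch, height, width, channels)
    # layout expected by a typical Keras CNN.
    x = ct_slice.astype("float32")
    x = (x - x.min()) / (x.max() - x.min() + 1e-6)
    x = x[np.newaxis, ..., np.newaxis]
    prob = model.predict(x)[0, ..., 0]        # per-pixel probability map
    return (prob > 0.5).astype(np.uint8)      # binary segmentation mask
```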
The method 66 further includes automatically determining an angle of rotation (e.g., angular rotation) of the heterogeneous phantom relative to a properly positioned heterogeneous phantom (or the heterogeneous phantom at a predefined position) (block 72). Determining the angle of rotation includes comparing the segmentation image derived from the tomographic image to a reference image of a properly positioned heterogeneous phantom. The reference image may be a segmentation image or segmentation mask derived from a tomographic image of the properly positioned heterogeneous phantom. A similarity index may be utilized to determine the angle of rotation based on the comparison between the segmentation image and the reference image. There is no limit on the angle of rotation of the phantom that can be determined.
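One possible realization of this comparison, assuming the Dice coefficient as the similarity index and a one-degree search step (both illustrative assumptions, since the disclosure leaves the specific index and search strategy open), is sketched below.

```python
import numpy as np
from scipy.ndimage import rotate

def dice(a, b):
    """Dice similarity between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-6)

def estimate_rotation(mask, reference_mask, step_deg=1.0):
    """Return the angle (degrees) that best aligns the reference mask to the measured mask."""
    best_angle, best_score = 0.0, -1.0
    for angle in np.arange(0.0, 360.0, step_deg):
        rotated = rotate(reference_mask, angle, reshape=False, order=0)
        score = dice(mask > 0, rotated > 0)
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle
```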
The method 66 still further includes automatically identifying a plurality of ROIs having varying attenuation coefficients within the tomographic image based on the segmented image (i.e., based on the determined angle of rotation) (block 74). The plurality of ROIs correspond to the heterogeneous regions within the heterogeneous phantom. Identifying the ROIs includes identifying multi-connectivity (e.g., 8-connectivity) pixel regions within the tomographic image. Identifying the ROIs also includes identifying pixel regions greater than a predefined number (e.g., 100) of connected pixels.
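A minimal sketch of this connected-component analysis, using the 8-connectivity and 100-pixel threshold mentioned above (implemented here with SciPy, one possible choice among several), might look as follows.

```python
import numpy as np
from scipy.ndimage import label

def find_rois(segmentation_mask, min_pixels=100):
    """Return a labeled ROI map and the list of ROI labels that are large enough."""
    eight_connected = np.ones((3, 3), dtype=int)   # 8-connectivity structuring element
    labeled, num_regions = label(segmentation_mask > 0, structure=eight_connected)
    rois = [region_id for region_id in range(1, num_regions + 1)
            if (labeled == region_id).sum() >= min_pixels]
    return labeled, rois
```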
The method 66 yet further includes automatically labeling each ROI of the plurality of ROIs as representing a particular material of the materials with the varying or different attenuation coefficients (block 76). In certain embodiments, labeling the ROIs includes generating a label matrix for the plurality of ROIs and comparing the label matrix to a static lookup table to label/identify each ROI as representing a particular material.
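The exact format of the label matrix and static lookup table is implementation specific; the sketch below assumes, purely for illustration, a lookup table of expected mean CT numbers per material and assigns each ROI to the closest entry.

```python
import numpy as np

MATERIAL_LOOKUP_HU = {      # hypothetical expected mean CT numbers (HU)
    "water": 0.0,
    "bone-equivalent": 700.0,
    "iodine insert": 300.0,
    "air": -1000.0,
}

def label_rois(ct_slice, labeled, rois, lookup=MATERIAL_LOOKUP_HU):
    """Assign each ROI the material whose expected CT number is closest to the ROI mean."""
    assignments = {}
    for region_id in rois:
        mean_hu = ct_slice[labeled == region_id].mean()
        material = min(lookup, key=lambda m: abs(lookup[m] - mean_hu))
        assignments[region_id] = material
    return assignments
```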
Returning to
The method 116 includes generating training data for the deep neural network (e.g., convolutional neural network) (block 118). As depicted in
From these image-label pairs, further training data (different from the original training data) may be generated while the deep neural network (e.g., convolutional neural network) is being trained. For example, in-place data augmentation or on-the-fly data augmentation (e.g., utilizing the ImageDataGenerator class of the Keras deep learning library) may be utilized to generate the additional training data. As depicted in
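As one hedged example of such on-the-fly augmentation, paired Keras ImageDataGenerator instances sharing a random seed can keep each augmented image aligned with its label mask; the specific augmentation ranges below are illustrative assumptions.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

aug_args = dict(rotation_range=15,        # small random rotations
                width_shift_range=0.05,   # small translations
                height_shift_range=0.05,
                horizontal_flip=True)

image_datagen = ImageDataGenerator(**aug_args)
mask_datagen = ImageDataGenerator(**aug_args)

# images and masks are 4-D NumPy arrays: (num_samples, height, width, 1)
def augmented_pairs(images, masks, batch_size=8, seed=42):
    image_flow = image_datagen.flow(images, batch_size=batch_size, seed=seed)
    mask_flow = mask_datagen.flow(masks, batch_size=batch_size, seed=seed)
    return zip(image_flow, mask_flow)   # yields aligned (image_batch, mask_batch) pairs
```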
Returning to
As noted above, the ROI location algorithm (e.g., method 66 in
The method 152 includes beginning calibration of a CT scanner system (CT imaging system in
As noted above, the ROI location algorithm (e.g., method 66 in
The method 172 includes beginning QA testing of a CT scanner system (CT imaging system in
The method 172 includes identifying a center region of the heterogeneous phantom (block 184). The method 172 also includes determining whether the heterogeneous phantom is grossly misaligned based on the identification of the center region (block 186). If the heterogeneous phantom is grossly misaligned, the method 172 includes determining whether the misalignment can be corrected for automatically (block 188). If the misalignment cannot be corrected for automatically, the method 172 returns to block 176. If the misalignment can be corrected for automatically, the method 172 includes automatically adjusting the table position in the axial plane (block 190) and/or automatically adjusting the table location in the z-direction (block 192).
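The thresholds that separate a grossly misaligned phantom from one that can be corrected automatically are not specified here; the sketch below assumes millimeter offsets of the detected center region from the image center, with illustrative threshold values rather than limits recited in this disclosure.

```python
import numpy as np
from scipy.ndimage import center_of_mass

def check_alignment(center_mask, pixel_spacing_mm, gross_mm=20.0, correctable_mm=50.0):
    """Return (is_gross, is_correctable, offset_mm) for the phantom center region."""
    cy, cx = center_of_mass(center_mask)
    iso_y, iso_x = (np.array(center_mask.shape) - 1) / 2.0   # image center as isocenter proxy
    offset_mm = np.hypot((cy - iso_y) * pixel_spacing_mm, (cx - iso_x) * pixel_spacing_mm)
    is_gross = offset_mm > gross_mm
    is_correctable = offset_mm <= correctable_mm
    return is_gross, is_correctable, offset_mm
```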
If the heterogeneous phantom is not grossly misaligned, the method 172 includes passing the identified center region as input to an image analysis step for autonomous ROI placement (e.g., utilizing the ROI location algorithm as described above) (block 194). The ROI location algorithm may correct for any minor misalignments (i.e., provide fine tuning). The method 172 also includes calculating image quality (IQ) metrics (block 196). The method 172 further includes ending the QA testing process (block 198).
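The particular IQ metrics are left open; the following sketch assumes, for illustration only, per-ROI mean CT number, noise (standard deviation), and contrast-to-noise ratio (CNR) relative to a water ROI.

```python
import numpy as np

def iq_metrics(ct_slice, labeled, assignments, reference_material="water"):
    """Return {roi_id: {"material", "mean_hu", "noise_hu", "cnr"}} for each labeled ROI."""
    stats = {roi: (ct_slice[labeled == roi].mean(), ct_slice[labeled == roi].std())
             for roi in assignments}
    # Use the water ROI (if present) as the reference for CNR; fall back to 0 HU / unit noise.
    ref = next((roi for roi, m in assignments.items() if m == reference_material), None)
    ref_mean, ref_noise = stats[ref] if ref is not None else (0.0, 1.0)
    metrics = {}
    for roi, material in assignments.items():
        mean_hu, noise_hu = stats[roi]
        cnr = abs(mean_hu - ref_mean) / (noise_hu + 1e-6)
        metrics[roi] = {"material": material, "mean_hu": mean_hu,
                        "noise_hu": noise_hu, "cnr": cnr}
    return metrics
```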
Technical effects of the disclosed embodiments include providing for deep-learning based autonomous identification of heterogeneous phantom regions (e.g., via a region of interest location algorithm). The disclosed embodiments enable the use of a multi-structure phantom in the calibration and QA process without modifying any phantom hardware (e.g., adding a tracer/marker to identify regions). In addition, the deep neural network can be trained with any phantom under any scanning protocol. The ability to automatically identify the regions of interest within the heterogeneous phantom eliminates the need for manual intervention while ensuring the efficacy of the region identification.
The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).
This written description uses examples to disclose the present subject matter, including the best mode, and also to enable any person skilled in the art to practice the subject matter, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the subject matter is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.