There are several commonly applied methods for treating maladies affecting tissues and organs, including the liver, brain, heart, lungs, and kidneys. Often, one or more imaging modalities, such as magnetic resonance imaging (MRI), ultrasound imaging, and computed tomography (CT), are employed by clinicians to identify areas of interest within a patient and, ultimately, targets for treatment.
An endoscopic approach has proven useful in navigating to areas of interest within a patient, particularly for areas within luminal networks of the body such as the lungs. To enable the endoscopic, and more particularly the bronchoscopic, approach in the lungs, endobronchial navigation systems have been developed that use previously acquired MRI data or CT image data to generate a three dimensional rendering or volume of the particular body part, such as the lungs. The previously acquired images, from an MRI scan or CT scan of the patient, are thus utilized to generate a three dimensional or volumetric rendering of the patient.
The resulting volume generated from the MRI scan or CT scan is then utilized to create a navigation plan to facilitate the advancement of a navigation catheter (or other suitable device) through a bronchoscope and a branch of the bronchus of a patient to an area of interest. Electromagnetic tracking may be utilized in conjunction with the CT data to facilitate guidance of the navigation catheter through the branch of the bronchus to the area of interest. In certain instances, the navigation catheter may be positioned within one of the airways of the branched luminal networks adjacent to, or within, the area of interest to provide access for one or more medical instruments.
Thus, in order to generate a navigation plan, or even to generate a three dimensional (3D) volume or volumetric rendering of the patient's anatomy, such as the lung, a clinician is required to utilize an MRI system or CT system to acquire the necessary image data for construction of the 3D volume. An MRI system or CT-based imaging system is extremely costly and in many cases is not available in the same location where a navigation plan is generated or where a navigation procedure is carried out.
A fluoroscopic imaging device is commonly located in the operating room during navigation procedures. The standard fluoroscopic imaging device may be used by a clinician to visualize and confirm the placement of a tool after it has been navigated to a desired location. However, although standard fluoroscopic images display highly dense objects, such as metal tools and bones, as well as large soft-tissue objects such as the heart, fluoroscopic images have difficulty resolving small soft-tissue objects of interest such as lesions. Further, the fluoroscope image is only a two dimensional projection. To be able to see small soft-tissue objects in three dimensional space, an X-ray volumetric reconstruction is needed.
X-ray volumetric reconstruction may be achieved by back projecting fluoroscopic images from multiple angles. However, while performing a surgical procedure, metal treatment and monitoring devices, such as bronchoscopes, catheters, electrocardiograph (ECG) components, patient sensor triplets (PSTs), and metal spheres on an angle measurement jig, are often used. These metal treatment and monitoring devices will therefore generally be present in the captured fluoroscopic images.
Metal devices produce strong artifacts in images and thus severely reduce image quality. These artifacts are usually due to noise, beam hardening, scattering, and partial volume, and their magnitude is often several hundred Hounsfield units (HUs). Metal artifacts in fluoroscopic or CT images appear as streaks and broad bright or dark bands, severely degrading image quality and drastically reducing the diagnostic value of images. Additionally, the metal objects may obstruct a clinician's view of a treatment target.
Algorithms developed to reduce metal artifacts can be classified into projection-interpolation-based methods, iterative reconstruction methods, or their combination. Projection interpolation methods treat parts of the projections affected by metal (the so-called metal shadow) as unreliable. These metal shadow data are complemented by interpolation between neighboring reliable data. Iterative reconstruction methods model the main causes for metal artifacts, such as noise and beam hardening. Although the image quality obtained from these methods is often better than that of projection-interpolation-based methods, the main drawback is their extremely high computational complexity. In particular, iterative methods have trouble dealing with data for a metal so dense that it stops almost all beams passing through it.
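The projection-interpolation idea described above can be sketched in a few lines. The following is a minimal illustration, assuming a parallel-beam sinogram stored as a 2D NumPy array (one row per projection angle) and a precomputed binary metal-shadow mask; all names are illustrative, not part of the disclosure:

```python
import numpy as np

def interpolate_metal_shadow(sinogram, metal_mask):
    """Replace metal-shadow detector bins in each projection row by linear
    interpolation between the nearest reliable neighboring bins."""
    corrected = sinogram.astype(float).copy()
    cols = np.arange(sinogram.shape[1])
    for row, bad in zip(corrected, metal_mask.astype(bool)):
        if bad.any() and not bad.all():
            # treat masked bins as unknown; fill from the reliable data
            row[bad] = np.interp(cols[bad], cols[~bad], row[~bad])
    return corrected
```

This captures only the complementing-by-interpolation step; real methods also smooth across rows and reinsert the corrected projections before reconstruction.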
Some algorithms combining projection completion and iterative reconstruction have been proposed. These methods create a model image using prior classification information and then forward-project the model image to fill the gaps of the metal shadow. The model image classifies the pixels into several kinds of tissue and diminishes the density contrast of soft tissues. The region close to the metal is often not well corrected, and some residual shadow artifacts remain.
In recent years, many algorithms have been developed for 3D reconstruction and motion estimation, which can roughly be divided into three categories: methods using bundle adjustment (BA), methods based on factorization, and hierarchical methods. In the first group, multi-view structure from motion starts by estimating the geometry of two views; that structure is then used to estimate the pose of the adjacent camera. The quality of the reconstruction strongly depends on the initial structure of the first camera pair. Another disadvantage of this group is the drift problem; it also has a high computational cost and suffers from accumulated errors as the number of images increases. In the second group, missing data and sensitivity to outliers are the significant drawbacks, which have been well studied by several authors. In the third group, the input images must be arranged in a hierarchical tree and processed from the root to the top.
Provided in accordance with the disclosure is a method for generating a three dimensional (3D) volume including a treatment target. The method includes receiving a plurality of two dimensional (2D) input images of a patient, recognizing a metal artifact in each of the plurality of 2D input images, removing the metal artifacts from the plurality of 2D input images based on the recognizing of the metal artifact and replacing the metal artifact with alternative pixel data to generate a plurality of filtered 2D images, and generating the 3D volume from the plurality of filtered 2D images. The plurality of 2D input images includes the treatment target.
In another aspect of the disclosure, the plurality of 2D input images is received from a fluoroscopic imager.
In yet another aspect of the disclosure, the method further includes imaging the patient to generate the plurality of 2D input images.
In another aspect of the disclosure, known metal artifacts include one or more of a bronchoscope, a biopsy tool, an ablation device, a probe, an endoscope, a catheter, a stapler, an implant, or components of an angle measurement jig.
In another aspect of the disclosure, the method further includes generating a plurality of 2D masks that correspond to the plurality of 2D input images. Removing the metal artifacts from the plurality of 2D input images may include applying the plurality of 2D masks to the corresponding plurality of 2D input images.
In yet another aspect of the disclosure, the method further includes performing in-painting to generate the alternative image data.
In a further aspect of the disclosure, the method further includes computing a plurality of inverse masks corresponding to the plurality of masks, generating a plurality of filtered 2D input images by multiplying each of the plurality of 2D input images by the corresponding inverse mask using a Gaussian filter, generating a plurality of filtered masks by filtering each of the plurality of inverse masks using a Gaussian filter, generating a plurality of blurred images by dividing each of the plurality of filtered 2D input images by a corresponding filtered mask of the plurality of filtered masks, and generating the alternative image data based on the plurality of inverse masks, the plurality of blurred frames, and the plurality of input images. The plurality of inverse masks may have a same size as the plurality of masks and values equal to an inverse of values of each of the plurality of masks. If a pixel in a filtered mask is equal to 0, a zero is assigned to a corresponding pixel location in a corresponding blurred frame of the plurality of blurred images.
In yet another aspect of the disclosure, the 3D volume is generated by back projecting filtered 2D images.
In another aspect of the disclosure, a system for generating a three dimensional (3D) volume including a treatment target is provided. The system includes a processor and a memory storing an application, which, when executed by the processor, causes the processor to recognize a metal artifact in a plurality of two dimensional (2D) input images, the 2D input images being of a patient including a target, replace pixels in the plurality of 2D input images corresponding to the recognized metal artifact with alternative pixel data to generate a plurality of filtered 2D images, and generate the 3D volume based on the plurality of filtered 2D images.
In another aspect, the alternative pixel data is generated by performing in-painting. In an aspect, the in-painting includes computing a plurality of inverse masks corresponding to the plurality of masks, the plurality of inverse masks having a same size as the plurality of masks and values equal to an inverse of values of each of the plurality of masks, generating a plurality of filtered 2D input images by multiplying each of the plurality of 2D input images by the corresponding inverse mask using a Gaussian filter, generating a plurality of filtered masks by filtering each of the plurality of inverse masks using a Gaussian filter, generating a plurality of blurred images by dividing each of the plurality of filtered 2D input images by a corresponding filtered mask of the plurality of filtered masks, where if a pixel in a filtered mask is equal to 0, a zero is assigned to a corresponding pixel location in a corresponding blurred frame of the plurality of blurred images, and generating the alternative pixel data based on the plurality of inverse masks, the plurality of blurred frames, and the plurality of input images.
In yet another aspect, the processor is further caused to generate a plurality of 2D masks that correspond to the plurality of 2D input images. The processor may be further configured to remove the metal artifact from the plurality of 2D input images by applying the plurality of 2D masks to the corresponding plurality of 2D input images. The 3D volume may be generated by back projecting the filtered 2D images.
In yet another aspect, a method for generating a three dimensional (3D) volume including a treatment target is provided. The method includes recognizing a metal artifact in a plurality of two dimensional (2D) input images, the 2D input images being of a patient including a target, performing in-painting to generate alternative pixel data, replacing pixels in the plurality of 2D input images corresponding to the recognized metal artifact with alternative pixel data to generate a plurality of filtered 2D images, generating the 3D volume based on the plurality of filtered 2D images, and displaying the generated 3D volume on a display.
In an aspect, the method further includes generating a plurality of 2D masks that correspond to the plurality of 2D input images.
In another aspect, removing the metal artifact from the plurality of 2D input images includes applying the plurality of 2D masks to the corresponding plurality of 2D input images.
In an aspect, performing in-painting may include computing a plurality of inverse masks corresponding to the plurality of masks, the plurality of inverse masks having a same size as the plurality of masks and values equal to an inverse of values of each of the plurality of masks, generating a plurality of filtered 2D input images by multiplying each of the plurality of 2D input images by the corresponding inverse mask using a Gaussian filter, generating a plurality of filtered masks by filtering each of the plurality of inverse masks using a Gaussian filter, generating a plurality of blurred images by dividing each of the plurality of filtered 2D input images by a corresponding filtered mask of the plurality of filtered masks, wherein if a pixel in a filtered mask is equal to 0, a zero is assigned to a corresponding pixel location in a corresponding blurred frame of the plurality of blurred images, and generating the alternative pixel data based on the plurality of inverse masks, the plurality of blurred frames, and the plurality of input images.
In an aspect, generating the 3D volume based on the plurality of filtered 2D images includes back projecting filtered 2D images. The plurality of 2D images may be received from a fluoroscopic imager.
Further, to the extent consistent, any of the aspects described herein may be used in conjunction with any or all of the other aspects described herein.
Objects and features of the presently disclosed system and method will become apparent to those of ordinary skill in the art when descriptions of various embodiments thereof are read with reference to the accompanying drawings, of which:
The disclosure is directed to a method for generating a 3D volume from a series of 2D images by removing metal artifacts from the original 2D images and back projecting the modified 2D images.
Specifically, the disclosure relates to generating 3D images of tissue treatment targets within a patient that may aid a clinician in performing medical treatment on the tissue treatment target. Example medical treatments that may be performed using the 3D image for guidance include biopsy, ablation, and lavage treatment. During these procedures, a treatment device, often metallic in nature, is guided through a luminal network or inserted percutaneously into the patient. The treatment device is then moved to a position proximate the treatment target. Once the treatment device is proximate the treatment target, a 3D volume may be developed either to update a previous model of the patient to establish a more precise or more recent location of the treatment target or, alternatively, to develop a 3D volume for the first time that may be displayed to aid a clinician in guiding a medical treatment device to a treatment target. The 3D volume must be highly accurate in order to ensure the device is placed at the treatment target so that the treatment target receives the required treatment and no other tissue is damaged. The present method for generating a 3D image of a target removes interference from metal artifacts, such as those from a metal medical treatment device, and creates clear, accurate 3D images.
As fluoroscopic C-arm 10 revolves around patient P, x-ray emitter 103 emits x-rays toward patient P and x-ray receiver 105. Some of the x-rays collide with and are absorbed or deflected by patient P at various depths. The remaining x-rays, those that are not absorbed or deflected by patient P, pass through patient P and are received by x-ray receiver 105. X-ray receiver 105 generates pixel data according to the quantity of x-rays received at certain locations along x-ray receiver 105 and transmits the pixel data to workstation 40 in order to generate projection images I1, I2, I3. Images I1, I2, I3 as shown in
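The pixel data described above follows the Beer–Lambert law: received intensity falls exponentially with the attenuating material each ray crosses. A minimal sketch, assuming a single attenuation coefficient `mu` and source intensity `i0` (both hypothetical constants) and a per-ray thickness map in arbitrary units:

```python
import numpy as np

def detector_intensity(thickness_map, mu=0.2, i0=1000.0):
    """X-ray intensity at the receiver decays exponentially with the
    thickness of attenuating material crossed by each ray."""
    return i0 * np.exp(-mu * thickness_map)
```

Dense metal corresponds to a large effective `mu * thickness`, which is why metal regions appear as near-zero transmission (high image intensity after inversion) in the projections.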
Each image I1, I2, I3 includes a view of target T at an orientation given by orientations O1, O2, O3. When projection images I1, I2, I3 are combined through back projection, a process described in more detail with reference to
At step 305, workstation 40 performs a morphological closing operation using a structural element on the input images I1, I2, I3 (e.g., input image IE1 generated at step 303). The morphological closing operation includes, first, dilating the pixels of input image IE1 with an intensity value above a threshold using the structural element. The structural element is a 2D array, smaller in size than input images I1, I2, I3, describing a design. The structural element may be chosen by a clinician or user, or workstation 40 may use an existing structural element. In dilating the pixels with an intensity value above a threshold, an array of pixels surrounding each high intensity pixel and equal to the size of the structural element is made to resemble the design of the structural element. That is, the value of each high intensity pixel is expanded to nearby pixels according to the design of the structural element. As a result, boundaries of high intensity pixel areas are expanded such that nearby high intensity pixel areas may connect to form a larger, joined area of high intensity pixels. Next, as the second aspect of the morphological closing operation, erosion is performed using the same or a different structural element. Erosion causes pixels at boundaries of groupings of high intensity pixels to be removed, reducing the size of the areas of high intensity pixels.
The morphological closing operation is designed to accentuate thin metal objects O, such as a catheter, and metal spheres S so that they may be more easily identified. As a result of the morphological closing operation, a closed frame image is developed.
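The closing operation at step 305 can be sketched in pure NumPy for the binary case with a 3×3 structural element (a simplification of the grayscale operation described above; names are illustrative):

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: a pixel becomes 1 if any pixel under the
    structural element is 1 (background-padded at the border)."""
    p = np.pad(img, 1, mode='constant')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + 3, j:j + 3][se.astype(bool)].max()
    return out

def erode(img, se):
    """Binary erosion: a pixel stays 1 only if every pixel under the
    structural element is 1 (edge-padded at the border)."""
    p = np.pad(img, 1, mode='edge')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + 3, j:j + 3][se.astype(bool)].min()
    return out

def morphological_close(img, se):
    """Closing = dilation followed by erosion; joins nearby bright areas."""
    return erode(dilate(img, se), se)
```

With a full 3×3 structural element, two bright pixels separated by a one-pixel gap become a single connected bright run after closing, which is exactly how thin catheters and jig spheres are accentuated.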
Turning now to step 307, workstation 40 develops a difference map between each input image IE1 and the corresponding closed image developed for each input image at step 305. The difference map may be generated through a variety of algorithms. On a most basic level, the pixel values of the closed image may be subtracted from the pixel values of input image IE1. Alternatively, the difference map may be given by subtracting the corresponding 2D input image from the closed image, dividing the difference by the 2D input image, and multiplying the quotient by 100%. Additional algorithms known in the art may also be used to generate a difference map.
At step 309, a threshold is applied to the difference map developed at step 307. Workstation 40 removes all pixels with a pixel intensity value below a threshold value. Once the pixels are removed, only those pixels representing a portion of a metal object MO should remain.
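Steps 307 and 309 together can be sketched as follows, using the percent-difference form of the map described above followed by zeroing of sub-threshold pixels (the threshold value here is illustrative):

```python
import numpy as np

def thresholded_difference_map(input_img, closed_img, threshold=10.0):
    """Percent difference between the closed image and its input image;
    pixels below the threshold are removed (set to zero)."""
    diff = (closed_img - input_img) / np.maximum(input_img, 1e-9) * 100.0
    diff[diff < threshold] = 0.0
    return diff
```

Pixels that survive the threshold are those the closing operation brightened substantially, i.e. the thin metal structures the operation was designed to accentuate.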
Next, at step 311, workstation 40 generates masks to be applied to the input images by normalizing the remaining pixels in the difference map, those representing metal objects, to pixel values between a minimum and a maximum erosion percentage. Normalizing pixel values changes the range of pixel intensity values and creates a more detailed grayscale image. Normalizing the remaining pixels also emphasizes the shape and boundaries of the groupings of pixels. The minimum and maximum erosion percentage values are established according to standard deviations of the values of the remaining pixels.
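The normalization at step 311 might look like the following sketch, where `lo` and `hi` stand in for the minimum and maximum erosion percentages (in practice these would be derived from the standard deviations described above):

```python
import numpy as np

def normalize_to_erosion_range(diff_map, lo=0.2, hi=1.0):
    """Rescale the remaining (nonzero) difference-map pixels linearly
    into the [lo, hi] erosion range to form a grayscale mask."""
    mask = np.zeros_like(diff_map, dtype=float)
    nz = diff_map > 0
    if nz.any():
        v = diff_map[nz]
        span = v.max() - v.min()
        scaled = (v - v.min()) / span if span > 0 else np.ones_like(v)
        mask[nz] = lo + (hi - lo) * scaled
    return mask
```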
At step 313, workstation 40 or a clinician determines whether there are metal jig spheres S. If metal jig spheres S are present, the process continues to steps 315 and 317 wherein the mask is updated for erosion of the jig's metal spheres. At step 315, workstation 40 projects a 3D model of the jig, generated by workstation 40 or retrieved by workstation 40 from a network or memory, onto an imaging plane of the input image according to the orientation of the input image which is determined using the metal jig sphere. The projection establishes a location and size of the metal jig sphere. Once the locations and sizes of the metal jig spheres are determined, at step 317, metal jig spheres are added as pixel groups to the mask. Each of the pixels in the pixel groups representing the metal jig spheres are assigned a pixel value of 1 in order to provide 100% erosion of the metal jig sphere when the mask is applied to the input images.
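Projecting the 3D jig model onto the imaging plane (step 315) amounts to a camera projection. A pinhole-model sketch, with an assumed pose (rotation `R`, translation `t`) and focal length — all hypothetical parameters standing in for the orientation determined from the jig:

```python
import numpy as np

def project_sphere_centers(centers_3d, R, t, focal):
    """Project 3D jig-sphere centers onto the 2D imaging plane with a
    pinhole model: transform into the camera frame, then divide by
    depth and scale by the focal length."""
    cam = centers_3d @ R.T + t
    return focal * cam[:, :2] / cam[:, 2:3]
```

Each projected center, together with the sphere radius scaled by the same depth, fixes the pixel group added to the mask at step 317.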
After step 317 is performed, or if at step 313 it is determined that no measurement jig sphere is present, the process proceeds to step 319. At step 319, in-painting is performed using the mask to replace pixels in the input images that belong to metal objects recognized in the mask. As part of the in-painting, first, an inverse mask is generated for the mask. The inverse mask is created by inverting each of the pixel values in the mask. Essentially, each pixel in the inverse mask is equal to 1−(value of the pixel in the original mask). For instance, if a pixel in the mask has a value of “1”, the pixel at the same position in the inverse mask is assigned a value of “0”. From there, a plurality of filtered inverse masks is created by applying a Gaussian filter, of size 3×3 for example. Next, a filtered 2D input image is generated by multiplying each of the plurality of 2D input images by the corresponding inverse mask and filtering the product using a Gaussian filter, of size 3×3 for example. Then, a blurred image is generated by dividing the filtered 2D input image by the filtered mask. If a pixel in a filtered mask is equal to 0, a zero is assigned to the corresponding pixel location in the blurred frame of the plurality of blurred images. Finally, the 2D input images are updated by multiplying the 2D input image by an array equal to 1−(the mask) and adding the product to the product of the blurred frame multiplied by the mask. The mask may be updated by assigning a 1 to every pixel where the blurred frame is equal to 0 and assigning a 0 to every pixel where the blurred frame does not equal 0.
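The in-painting at step 319 is, in effect, a normalized convolution: blur the masked image and the inverse mask with the same Gaussian, divide, and blend the result back according to the mask. A single-frame sketch under those assumptions, with the illustrative 3×3 kernel:

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Normalized 2D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def convolve2d(img, k):
    """Naive zero-padded 2D convolution (correlation with a symmetric kernel)."""
    pad = k.shape[0] // 2
    p = np.pad(img, pad, mode='constant')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (p[i:i + k.shape[0], j:j + k.shape[1]] * k).sum()
    return out

def inpaint_frame(image, mask, sigma=1.0):
    """Sketch of step 319: inverse mask -> Gaussian-filtered masked image
    and filtered mask -> blurred frame -> blend via the mask."""
    inv = 1.0 - mask                         # inverse mask
    k = gaussian_kernel(3, sigma)
    filt_img = convolve2d(image * inv, k)    # filtered 2D input image
    filt_mask = convolve2d(inv, k)           # filtered mask
    safe = np.where(filt_mask == 0.0, 1.0, filt_mask)
    blurred = np.where(filt_mask == 0.0, 0.0, filt_img / safe)
    return image * inv + blurred * mask      # updated input image
```

Dividing the blurred image by the blurred mask renormalizes each pixel by the weight of its reliable (non-metal) neighbors, so masked pixels take on a locally weighted average of the surrounding tissue values.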
At step 321, workstation 40 reconstructs a 3D volume using multiple in-painted input images. The 3D volume is reconstructed using back projection, which is shown in more detail in
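Back projection itself, reduced to the 2D parallel-beam case for brevity (the 3D cone-beam reconstruction follows the same smearing-and-averaging principle; the nearest-neighbor sampling here is a simplification):

```python
import numpy as np

def back_project(projections, angles, size):
    """Unfiltered back projection: smear each 1D projection across the
    grid along its acquisition angle and average the contributions."""
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    for proj, theta in zip(projections, angles):
        # detector bin seen by each grid point at this angle
        t = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
        t = np.clip(np.round(t).astype(int), 0, size - 1)
        recon += proj[t]
    return recon / len(angles)
```

Points present in every projection (such as a treatment target) reinforce at their true location, while metal residue left in only some frames smears out — which is why the in-painted frames from step 319 reconstruct more cleanly.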
After performing back projection, the process may continue to either step 323 or step 325. If there is no prior image data depicting the 3D volume within patient P, the process proceeds to step 323. At step 323, the back projected 3D volume is presented to the clinician as a view on user interface 816. If there is prior image data depicting the 3D volume within patient P, the process proceeds to step 325, where the prior image data is updated before proceeding to step 323, where the updated image data describing the 3D volume is displayed.
At step 705, pixels representing portions of objects that may be metal objects are recognized in input image IE. Pixels representing portions of objects may be determined for example according to pixel intensity. Because metal objects tend to significantly deflect or absorb x-rays, these pixels will be shown in images I1, I2, I3 as having a high pixel intensity. Accordingly, in order to recognize pixels that potentially represent a portion of metal object MO, a pixel intensity threshold is set, above which pixels are deemed to potentially represent a portion of metal object MO. These pixels may be referred to as high intensity pixels. As an example, pixels with an intensity value greater than or equal to 3000 Hounsfield Units (HU) may be considered prospective pixels representing a metal object.
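The threshold test at step 705 can be sketched as follows, using the 3000 HU figure from the text; keeping the values of qualifying pixels (rather than a pure binary flag) anticipates the segmentation performed at the next step:

```python
import numpy as np

def segment_prospective_metal(image_hu, threshold=3000.0):
    """Keep high intensity pixels (at or above the HU threshold) and
    zero out all others to form the segmented image."""
    return np.where(image_hu >= threshold, image_hu, 0.0)
```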
At step 707, input image IE is segmented based on the determination of pixels that may potentially represent portions of metal objects. As noted above, these may be pixels with intensities greater than a certain threshold. To segment input image IE, the pixels that are determined, at step 705, as potentially representing portions of metal objects are isolated and are used to generate segmented image Is (See
At step 709, the pixels in segmented image Is are dilated. The sizes of the pixels are increased to emphasize the proximity of pixels and any patterns between adjacent pixels. The dilation of these pixels creates dilated segmented image ID. Then, at step 711, workstation 40 compares groups of adjacent pixels to known metal objects. Potential known metal objects include, but are not limited to, bronchoscopes, biopsy tools, ablation devices, probes, endoscopes, catheters, staplers, and implants. Shapes, sizes, and common orientations of the known metal objects may be compared to the groups of adjacent pixels. These shapes, sizes, and common orientations may be saved in a memory in workstation 40. Alternatively, workstation 40 may receive information regarding the shape, size, and common orientation of known metal objects from a network. Additionally, a clinician may review the image and determine that a group of pixels resembles a metal device to be removed from input image IE.
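The comparison against known metal objects can take many forms; one crude stand-in is an intersection-over-union test between a pixel group and a stored binary template (same-shape arrays, already aligned — a simplification of the shape/size/orientation comparison described above):

```python
import numpy as np

def matches_known_object(pixel_group, template, tol=0.9):
    """Return True when the overlap ratio (intersection over union)
    between a group of pixels and a known-object template is high."""
    inter = np.logical_and(pixel_group, template).sum()
    union = np.logical_or(pixel_group, template).sum()
    return bool(union > 0 and inter / union >= tol)
```

A real system would search over the stored orientations and scales before scoring the overlap.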
At step 713, workstation 40 or a clinician makes a determination of whether a metal object is recognized. In a case where the process described in
If, at step 713, workstation 40 or a clinician determines that one or more metal objects are recognized, then the process proceeds to step 717. At step 717, all pixels that do not represent a recognized metal object are removed from dilated segmented image ID to generate an image with pixels representing only identified metal objects IO.
At step 719, the peripheral pixels belonging to the identified metal object in image IO are dilated to create metal object mask M, which is designed to remove metal objects from input image IE. In dilating the pixels, workstation 40 expands each of the remaining pixels to ensure that the pixels cover the full area of the metal object in input image IE. Segmented pixels after dilation whose HU value is below 1000 are excluded. By setting these metal pixels to one and all other pixels to zero, a binary metal image is produced. The corresponding projections of the metal in the original sinogram are identified via forward projection of this metal image. At step 721, metal object mask M is applied to input image IE to remove all pixels from input image IE that correspond to the location of the dilated pixels of mask M.
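Identifying the metal trace in the sinogram via forward projection can be sketched for axis-aligned parallel-beam angles only (the real system projects along the actual acquisition geometry):

```python
import numpy as np

def metal_trace(binary_metal, angles_deg=(0, 90)):
    """Forward project the binary metal image at each angle; detector
    bins whose ray crosses metal form the metal shadow."""
    rows = []
    for a in angles_deg:
        if a % 180 == 0:
            rows.append(binary_metal.sum(axis=0))   # vertical rays
        else:
            rows.append(binary_metal.sum(axis=1))   # horizontal rays
    return np.array(rows) > 0
```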
At step 723, the pixels in input image IE that were removed according to mask M are replaced using in-painting. In-painting involves reconstructing missing pixel data by interpolating pixel data of the rest of the image to areas where the pixel data is removed or missing. The pixel data may be interpolated using a structural approach, a geometric approach, or a combination thereof. With structural in-painting techniques, workstation 40 interpolates the removed pixels by continuing repetitive pattern in input image IE. With geometric in-painting techniques, workstation 40 interpolates the removed pixels by creating consistency of the geometric structure in input image IE. For instance, contour lines that arrive at boundary of missing pixels are prolonged into the missing pixel area.
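A minimal geometric in-painting sketch is iterative diffusion: repeatedly replace masked pixels with the mean of their four neighbors so values flow in from the hole's boundary. This is a stand-in for the structural/geometric techniques described above, not the disclosed method, and it ignores wrap-around at the image border for brevity:

```python
import numpy as np

def diffusion_inpaint(image, mask, iters=200):
    """Fill masked (missing) pixels by repeatedly averaging their
    4-neighbors until the hole interior converges toward the surround."""
    img = image.astype(float).copy()
    hole = mask.astype(bool)
    for _ in range(iters):
        neighbors = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                     np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img[hole] = neighbors[hole]
    return img
```

Diffusion produces smooth fills, which matches the geometric goal of prolonging structure into the missing area but, like all in-painting, cannot recover detail that was never imaged.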
At step 725, workstation 40 reconstructs a 3D volume using multiple in-painted input images IE. The 3D volume is reconstructed using back projection, which is shown in more detail in
After performing back projection, the process may continue to either step 727 or step 729. If there is no prior image data depicting the 3D volume within patient P, the process proceeds to step 727. At step 727, the back projected 3D volume is presented to the clinician as a view on user interface 816. If there is prior image data depicting the 3D volume within patient P, the process proceeds to step 729, where the prior image data is updated before proceeding to step 727, where the updated image data describing the 3D volume is displayed.
With reference to
Memory 1202 may store application 42 and/or image data 1214. Application 42 may, when executed by processor 1204, cause display 1206 to present user interface 1216. Network interface 1208 may be configured to connect to a network such as a local area network (LAN) consisting of a wired network and/or a wireless network, a wide area network (WAN), a wireless mobile network, a Bluetooth network, and/or the internet. Input device 1210 may be any device by means of which a clinician may interact with workstation 40, such as, for example, a mouse, keyboard, foot pedal, touch screen, and/or voice interface. Output module 1212 may include any connectivity port or bus, such as, for example, parallel ports, serial ports, universal serial buses (USB), or any other similar connectivity port known to those skilled in the art.
While several embodiments of the disclosure have been shown in the drawings, it is not intended that the disclosure be limited thereto, as it is intended that the disclosure be as broad in scope as the art will allow and that the specification be read likewise. Therefore, the above description should not be construed as limiting, but merely as exemplifications of particular embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the claims appended hereto.
Specifically, while embodiments of the disclosure have been described with respect to a fluoroscopic scanner, it is not intended that the disclosure be limited thereto. The current disclosure contemplates use of the systems and methods described herein to plan a path to a target that avoids obstructions that may be present during the performance of various surgical procedures. Those skilled in the art would envision numerous other obstructions.
Detailed embodiments of such devices, systems incorporating such devices, and methods using the same are described above. However, these detailed embodiments are merely examples of the disclosure, which may be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting but merely as a basis for the claims and as a representative basis for allowing one skilled in the art to variously employ the disclosure in virtually any appropriately detailed structure.
Although embodiments have been described in detail with reference to the accompanying drawings for the purpose of illustration and description, it is to be understood that the inventive processes and apparatus are not to be construed as limited thereby. It will be apparent to those of ordinary skill in the art that various modifications to the foregoing embodiments may be made without departing from the scope of the disclosure as set forth in the following claims.
This application is a continuation of U.S. patent application Ser. No. 16/259,612 filed Jan. 28, 2019, now U.S. Pat. No. 10,930,064, which claims the benefit of the filing date of provisional U.S. Patent Application No. 62/628,028, filed Feb. 8, 2018, the entire contents of which are incorporated herein by reference.