Generating synthesized projection images for 3D breast tomosynthesis or multi-mode x-ray breast imaging

Information

  • Patent Grant
  • Patent Number
    11,090,017
  • Date Filed
    Wednesday, September 11, 2019
  • Date Issued
    Tuesday, August 17, 2021
Abstract
Methods and systems for medical imaging including synthesizing virtual projections from acquired real projections and generating reconstruction models and images based on the synthesized virtual projections and acquired real projections. For example, first x-ray imaging data is generated from a detected first x-ray emission at a first angular location and second x-ray imaging data is generated from a detected second x-ray emission at a second angular location. Based on at least the first x-ray imaging data and the second x-ray imaging data, third x-ray imaging data for a third angular location relative to the breast may be synthesized. An image of the breast may be displayed or generated from the third x-ray imaging data.
Description
BACKGROUND

Medical imaging has become a widely used tool for identifying and diagnosing abnormalities, such as cancers or other conditions, within the human body. Medical imaging processes such as mammography and tomography are particularly useful tools for imaging breasts to screen for, or diagnose, cancer or other lesions within the breasts. Tomosynthesis systems are mammography systems that allow high resolution breast imaging based on limited angle tomosynthesis. Tomosynthesis, generally, produces a plurality of x-ray images, each of discrete layers or slices of the breast, through the entire thickness thereof. In contrast to typical two-dimensional (2D) mammography systems, a tomosynthesis system acquires a series of x-ray projection images, each projection image obtained at a different angular displacement as the x-ray source moves along a path, such as a circular arc, over the breast. In contrast to conventional computed tomography (CT), tomosynthesis is typically based on projection images obtained at limited angular displacements of the x-ray source around the breast. Tomosynthesis reduces or eliminates the problems caused by tissue overlap and structure noise present in 2D mammography imaging. Acquiring each projection image, however, increases the total amount of time required to complete the imaging process.


It is with respect to these and other general considerations that the aspects disclosed herein have been made. Also, although relatively specific problems may be discussed, it should be understood that the examples should not be limited to solving the specific problems identified in the background or elsewhere in this disclosure.


SUMMARY

Examples of the present disclosure describe systems and methods for medical imaging through the use of synthesized virtual projections generated from real projections. In an aspect, the technology relates to a system for generating images of a breast. The system includes an x-ray source, an x-ray detector, at least one processor operatively connected to the x-ray detector, and memory operatively connected to the at least one processor, the memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations. The operations include emitting, from the x-ray source, a first x-ray emission at a first angular location relative to the x-ray detector; detecting, by the x-ray detector, the first x-ray emission after passing through the breast; generating first x-ray imaging data from the detected first x-ray emission; emitting, from the x-ray source, a second x-ray emission at a second angular location relative to the breast; detecting, by the x-ray detector, the second x-ray emission after passing through the breast; generating second x-ray imaging data from the detected second x-ray emission; synthesizing, based on at least the first x-ray imaging data and the second x-ray imaging data, third x-ray imaging data for a third angular location relative to the breast, wherein the third angular location is different from the first angular location and the second angular location, thereby eliminating the need for an x-ray emission at the third angular location; and generating and displaying an image of the breast from the third x-ray imaging data.
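The synthesizing operation above is not tied to a single algorithm. As an illustrative sketch only (the weighted-average and spectrum-averaging rules below are assumptions for demonstration, not the claimed method), two neighboring real projections might be fused into a virtual projection in either the spatial or the frequency domain:

```python
import numpy as np

def synthesize_virtual_projection(proj_a, proj_b, alpha=0.5):
    """Spatial-domain fusion: a weighted average of the two
    neighboring real projections (alpha weights proj_a)."""
    return alpha * proj_a + (1.0 - alpha) * proj_b

def synthesize_virtual_projection_freq(proj_a, proj_b):
    """Frequency-domain fusion: average the 2D spectra of the two
    projections, then transform back to the spatial domain."""
    spectrum = 0.5 * (np.fft.fft2(proj_a) + np.fft.fft2(proj_b))
    return np.fft.ifft2(spectrum).real

# Two toy 4x4 "projections" standing in for detector images
a = np.full((4, 4), 2.0)
b = np.full((4, 4), 4.0)
print(synthesize_virtual_projection(a, b)[0, 0])  # 3.0
```

For uniform toy images both fusions reduce to a simple midpoint; real projections would of course differ pixel by pixel, and the choice of weights would depend on the angular spacing.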


In an example, the first x-ray imaging data is a first real projection for the first angular location, the second x-ray imaging data is a second real projection for the second angular location, and the third x-ray imaging data is a virtual projection for the third angular location. In another example, synthesizing the third x-ray imaging data includes fusing the first x-ray imaging data and the second x-ray imaging data in at least one of a spatial domain or a frequency domain. In yet another example, synthesizing the third x-ray imaging data further includes generating reconstruction data from the first x-ray imaging data and the second x-ray imaging data, and synthesizing the third x-ray imaging data is further based on the generated reconstruction data. In still another example, synthesizing the third x-ray imaging data further includes providing the first x-ray imaging data and the second x-ray imaging data into a trained deep-learning neural network and executing the trained deep-learning neural network based on the first x-ray imaging data and the second x-ray imaging data to generate the third x-ray imaging data. In still yet another example, the operations further comprise training a deep-learning neural network to generate the trained deep-learning neural network. Training the deep-learning neural network includes obtaining a set of real prior x-ray imaging data used for imaging a breast at multiple angular locations; dividing the set of real prior x-ray imaging data into a plurality of datasets comprising a training real data set for a first plurality of the angular locations and a training virtual data set for a second plurality of the angular locations, the second plurality of angular locations being different from the first plurality of angular locations; providing the training real data set as inputs into the deep-learning neural network; and providing the training virtual data set as a ground truth for the deep-learning neural network. 
In another example, the operations are performed as part of digital breast tomosynthesis or multi-modality imaging.
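The training procedure described above divides a set of real prior projections into an input set and a ground-truth set by angular location. A minimal sketch, assuming an even/odd split over illustrative angle values (both assumptions, not specified by the disclosure):

```python
import numpy as np

def split_training_data(projections, angles):
    """Split prior real projections by angular location: even-indexed
    angles become network inputs; odd-indexed angles become the
    ground-truth targets the network learns to synthesize."""
    inputs, targets = {}, {}
    for i, angle in enumerate(angles):
        if i % 2 == 0:
            inputs[angle] = projections[i]
        else:
            targets[angle] = projections[i]
    return inputs, targets

angles = [-9, -6, -3, 0, 3, 6, 9]            # degrees (illustrative)
projs = [np.zeros((2, 2)) for _ in angles]   # placeholder images
x, y = split_training_data(projs, angles)
print(sorted(x))  # [-9, -3, 3, 9]  -> training real data set
print(sorted(y))  # [-6, 0, 6]      -> training virtual data set
```

The network would then be trained so that its output for the input angles approximates the held-out projections at the interleaved angles.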


In another aspect, the technology relates to a computer-implemented method, executed by at least one processor, for generating images of a breast. The method includes receiving first real projection data for an x-ray emission from a first angular location relative to the breast; receiving second real projection data for an x-ray emission emitted from a second angular location relative to the breast; receiving third real projection data for an x-ray emission from a third angular location relative to the breast; and executing a synthesization process. The synthesization process is executed to generate, based on the first real projection data and the second real projection data, first virtual projection data for an x-ray emission from a fourth angular location relative to the breast, wherein the fourth angular location is different from the first angular location and the third angular location; and generate, based on the second real projection data and the third real projection data, second virtual projection data for an x-ray emission from a fifth angular location relative to the breast, wherein the fifth angular location is different from the second angular location and the fourth angular location.
The method further includes determining that at least one of the first virtual projection data or the second virtual projection data has a quality outside of a predetermined tolerance; based on the determination that the at least one of the first virtual projection or the second virtual projection has a quality outside of a predetermined tolerance, modifying the synthesization process to create a modified synthesization process; executing the modified synthesization process to generate a modified first virtual projection and a modified second virtual projection; generating a reconstruction model from the first real projection data, the second real projection data, the third real projection data, the modified first virtual projection data, and the modified second virtual projection data; and displaying at least one of a slice of the breast from the generated reconstruction model, the first real projection data, the second real projection data, the third real projection data, the first virtual projection data, or the second virtual projection data.


In an example, determining that at least one of the first virtual projection data or the second virtual projection data has a quality outside of a predetermined tolerance further includes: identifying a landmark in one of the first real projection data or the second real projection data; identifying the landmark in the first virtual projection data; comparing the location of the landmark in the first virtual projection data to the location of the landmark in at least one of the first real projection data or the second real projection data; and based on the comparison, determining whether the location of the landmark in the first virtual projection data is within the predetermined tolerance. In another example, the synthesization process includes: providing the first real projection data, the second real projection data, and the third real projection data into a trained deep-learning neural network; and executing the trained deep-learning neural network based on the first real projection data, the second real projection data, and the third real projection data to generate the first virtual projection data and the second virtual projection data. In yet another example, modifying the synthesization process includes modifying coefficients of the trained deep-learning neural network. In still another example, the method further includes determining that the slice has a quality outside a reconstruction quality tolerance; and based on the determination that the slice has a quality outside a reconstruction quality tolerance, further modifying the modified synthesization process to create a further modified synthesization process.
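The landmark comparison in the first example above can be reduced to a displacement check between the landmark's position in a real projection and its position in the virtual projection. A minimal sketch, where the pixel tolerance is an assumed value, not one given by the disclosure:

```python
def landmark_within_tolerance(real_xy, virtual_xy, tol_px=2.0):
    """Return True if the landmark's position in the virtual
    projection is within tol_px pixels of its position in a
    neighboring real projection (Euclidean distance)."""
    dx = virtual_xy[0] - real_xy[0]
    dy = virtual_xy[1] - real_xy[1]
    return (dx * dx + dy * dy) ** 0.5 <= tol_px

# Hypothetical landmark coordinates (column, row) in pixels
print(landmark_within_tolerance((120, 85), (121, 86)))  # True
print(landmark_within_tolerance((120, 85), (130, 95)))  # False
```

A failing check would trigger the modification of the synthesization process described in the method.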
In still yet another example, the method further includes: executing the further modified synthesization process to generate a further modified first virtual projection data and a further modified second virtual projection data; generating a modified reconstruction model from the first real projection data, the second real projection data, the third real projection data, the further modified first virtual projection data, and the further modified second virtual projection data; and displaying at least one of a slice of the breast from the modified reconstruction model, the further modified first virtual projection, or the further modified second virtual projection. In another example, the method is performed as part of digital breast tomosynthesis or multi-modality imaging.


In another aspect, the technology relates to another computer-implemented method, executed by at least one processor, for generating images of a breast. The method includes receiving first real projection data for an x-ray emission from a first angular location relative to the breast; receiving second real projection data for an x-ray emission emitted from a second angular location relative to the breast; providing the first real projection data and the second real projection data into a trained deep-learning neural network; executing the trained deep-learning neural network based on the first real projection data, and the second real projection data to generate first virtual projection data for a third angular location relative to the breast; generating a reconstruction model from the first real projection data, the second real projection data, and the first virtual projection data; and displaying at least one of a slice of the breast from the generated reconstruction model, the first real projection data, the second real projection data, or the first virtual projection data.


In an example, the method further includes determining that the slice has a quality outside a reconstruction quality tolerance; and based on the determination that the slice has a quality outside a reconstruction quality tolerance, modifying the trained deep-learning neural network to create a modified deep-learning neural network. In another example, the method further includes: executing the modified deep-learning neural network based on the first real projection data and the second real projection data to generate a modified first virtual projection; generating a modified reconstruction model from the first real projection data, the second real projection data, and the modified first virtual projection data; and displaying at least one of a slice of the breast from the modified reconstruction model or the modified first virtual projection. In yet another example, the determination that the slice has a quality outside a reconstruction quality tolerance is based on at least one of image artifacts or image quality measurements. In still another example, the difference between the first angular location and the second angular location is less than or equal to three degrees. In still yet another example, the method is performed as part of digital breast tomosynthesis or multi-modality imaging.


This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Additional aspects, features, and/or advantages of examples will be set forth in part in the description which follows and, in part, will be apparent from the description, or may be learned by practice of the disclosure.





BRIEF DESCRIPTION OF THE DRAWINGS

Non-limiting and non-exhaustive examples are described with reference to the following figures.



FIG. 1 depicts a perspective view of a portion of an upright breast x-ray imaging system.



FIG. 2 is a side elevation of the system of FIG. 1.



FIG. 3 is a front elevation illustrating a patient shield for a system similar to that seen in FIGS. 1 and 2.



FIG. 4 is a side elevation that is the same as FIG. 2 but illustrates a patient shield.



FIGS. 5 and 6 are similar to FIGS. 1 and 2, respectively, but illustrate the system as used in a tomosynthesis mode or a mammography mode and show a gantry that is spaced further from a support column than in FIGS. 2 and 4.



FIG. 7A depicts an example of a plurality of angular locations relative to a compressed breast for real projections.



FIG. 7B depicts an example of a plurality of angular locations relative to a compressed breast for real projections and virtual projections.



FIG. 7C depicts an example of a plurality of angular locations relative to a compressed breast for real projections and virtual projections.



FIG. 8A depicts an example system for synthesizing virtual projections.



FIG. 8B depicts another example system for synthesizing virtual projections.



FIG. 8C depicts another example system for synthesizing virtual projections.



FIG. 9A depicts a method for generating images of a breast.



FIG. 9B depicts a method for synthesizing x-ray imaging data.



FIG. 9C depicts a method for training a deep-neural network for use in medical imaging.



FIG. 10 depicts a method for imaging a breast.



FIG. 11 illustrates one example of a suitable operating environment in which one or more of the present examples can be implemented.



FIG. 12 is an embodiment of a network in which the various systems and methods disclosed herein may operate.





DETAILED DESCRIPTION

As discussed above, a tomosynthesis system acquires a series of x-ray projection images, each projection image obtained at a different angular displacement as the x-ray source moves along a path, such as a circular arc, over the breast. More specifically, the technology typically involves taking two-dimensional (2D) real projection images of the immobilized breast at each of a number of angles of the x-ray beam relative to the breast. The resulting x-ray measurements are computer-processed to reconstruct images of breast slices that typically are in planes transverse to the x-ray beam axis, such as parallel to the image plane of a mammogram of the same breast, but can be at any other orientation and can represent breast slices of selected thicknesses. Acquiring each real projection image introduces additional radiation to the patient and increases the total amount of time required to complete the imaging process. The use of fewer real projection images, however, leads to worse image quality for the reconstructed images.


The present technology contemplates systems and methods that allow fewer real projection images to be acquired, while still preserving suitable image quality of reconstructed images of breast slices. The present technology allows for virtual projection images to be generated from real projection images. The virtual projection images may then be used, along with the real projection images, to generate the reconstruction model for the breast. Through the use of the virtual projection images, radiation exposure at some of the angular locations where radiation exposure was traditionally necessary can be eliminated—thus reducing the total radiation dose received by the patient and reducing the time required to complete the tomosynthesis procedure. Reducing the amount of time the patient is imaged also improves image quality by reducing the amount of movement or motion of the patient during the imaging procedure.


In some examples, the total imaging time and dosage may remain the same as prior imaging procedures, such as tomosynthesis imaging procedures. In such examples, the virtual projection images may be generated to expand the angular range that is imaged or to provide additional information for the real projection images. Thus, imaging artifacts in the reconstruction images, such as overlay structures or other imaging artifacts that are inherent in limited-angle imaging modalities, may be reduced or removed due to the additional virtual projections.


The virtual projection images may be generated from machine-learning techniques, such as deep-learning neural networks. The virtual projection images may also be generated by fusing multiple real projection images. In addition, generating the virtual projection images may be based on reconstruction data generated from the real projection images. The generation of the virtual projections and reconstruction data may also be an iterative process. For example, image quality of a reconstructed breast slice may be assessed, and if the quality is poor, modified virtual projections can be generated to improve the image quality of the image of the breast slice until a desired performance criterion is achieved. As used herein, a real projection refers to a projection obtained by emitting radiation through the breast for a respective angular location. In contrast, a virtual projection refers to a projection obtained without emitting radiation through the breast.
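The iterative generate-assess-modify cycle described above can be sketched generically. In the toy example below, the one-number "projections", the quality metric, and the modification step are all illustrative assumptions; in practice the assessment would operate on reconstructed slices and the modification on network coefficients:

```python
def iterative_synthesis(real_projs, synthesize, assess, modify,
                        target=0.8, max_iters=5):
    """Generate virtual projections, score their quality, and modify
    the synthesization process until quality meets the target or the
    iteration budget is exhausted."""
    process = synthesize
    virtuals = process(real_projs)
    for _ in range(max_iters):
        if assess(real_projs, virtuals) >= target:
            break
        process = modify(process)
        virtuals = process(real_projs)
    return virtuals

# Toy stand-ins: quality is closeness of the virtual value to the
# midpoint of its two real neighbors; the "modified" process removes
# a deliberate bias in the initial process.
synth = lambda reals: [(reals[0] + reals[1]) / 2 + 1.0]   # biased start
assess = lambda reals, virts: 1.0 - abs(virts[0] - (reals[0] + reals[1]) / 2)
modify = lambda p: (lambda reals: [(reals[0] + reals[1]) / 2])
result = iterative_synthesis([10.0, 14.0], synth, assess, modify)
print(result)  # [12.0]
```

The loop structure mirrors the claimed flow: synthesize, check against a tolerance, modify the process, and re-synthesize.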


In describing examples and embodiments illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the disclosure of this patent specification is not intended to be limited to the specific terminology so selected and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner.



FIGS. 1 and 2 illustrate portions of a non-limiting example of a multi-mode breast x-ray imaging system operable in a CT mode but also configured to selectively operate in a tomosynthesis mode including a wide angle tomosynthesis mode and a narrow angle tomosynthesis mode, and in a mammography mode. For clarity of illustration, a patient shield for use in the CT mode is omitted from FIGS. 1 and 2 but examples are illustrated in FIGS. 3 and 4. A support column 100 is secured to a floor and houses a motorized mechanism for raising and lowering a horizontally extending axle 102, which protrudes through an opening 100a in column 100, and for rotating axle 102 about its central axis. Axle 102 in turn supports a coaxial axle 102a that can rotate with or independently of axle 102. Axle 102 supports a breast immobilization unit comprising an upper plate 104a and a lower plate 104b such that each plate can move up and down along the long dimension of support 100 together with axles 102 and 102a, at least one of the plates can move toward the other, and unit 104 can rotate about the common central axis of axles 102 and 102a. In addition, axle 102 supports a gantry 106 for two types of motorized movement: rotation about the central axis of axle 102, and motion relative to axle 102 along the length of gantry 106. Gantry 106 carries at one end an x-ray source such as a shrouded x-ray tube generally indicated at 108, and at the other end a receptor housing 110 enclosing an imaging x-ray detector or receptor 112.


When operating in a CT mode, the system of FIGS. 1 and 2 immobilizes a patient's breast between plates 104a and 104b. To this end, unit 104 is raised or lowered together with axle 102 to the height of the breast while the patient is upright, e.g., standing or sitting. The patient leans toward unit 104 from the left side of the system as seen in FIG. 2, and a health professional, typically an x-ray technician, adjusts the breast between plates 104a and 104b while pulling tissue to the right in FIG. 2 and moving at least one of plates 104a and 104b toward the other to immobilize the breast and keep it in place, preferably with as much as practicable of the breast tissue being inside unit 104. In the course of taking x-ray measurements representing real projection x-ray images, from which to reconstruct images of respective breast slices, gantry 106 rotates about the central axis of axle 102 while the breast remains immobilized in unit 104. Imaging receptor 112 inside housing 110 remains fixed relative to x-ray tube 108 during the rotation of gantry 106. A pyramid shaped beam of x-rays from tube 108 traverses the breast immobilized in unit 104 and impinges on imaging receptor 112, which in response generates a respective two-dimensional array of pixel values related to the amount of x-ray energy received for each increment of rotation at respective pixel positions in an imaging plane of the receptor. These arrays of pixel values for real projection images are delivered to and processed by a computer system to reconstruct slice images of the breast. Gantry 106 may be configured for motorized movement toward column 100, to facilitate the x-ray technician's access to the patient's breast for positioning the breast in unit 104, and away from column 100 to ensure that x-ray tube 108 and imaging receptor 112 inside housing 110 can image the appropriate breast tissue. 
Alternatively, gantry 106 can maintain a fixed distance from column 100, to the left of the position seen in FIG. 2, so that the imaging x-ray beam can pass through as much as practical of the breast immobilized in unit 104, in which case there would be no need for a mechanism to vary that distance.


A unique challenge arises because of the upright position of the patient and the rotation of x-ray tube 108 and receptor housing 110 through a large angle in the CT mode of operation. As is known, CT scanning typically involves a rotation of the source and receptor through an angle of 180° plus the angle subtended by the imaging x-ray beam, and preferably a rotation through a greater angle, e.g., 360°. However, if the rotation includes the 0° position of x-ray source 108 as seen in FIGS. 1 and 2, the patient's head may be too close to x-ray source 108. Collision of rotating assemblies with the patient, and concern with such collision, can be avoided by the use of a shield separating the patient from assemblies rotating even the full 360°, as discussed below in this patent specification, although depending on the design of the shield and the rotating assemblies in particular embodiments this may require the patient to arch her body such that both her head and legs are away from the system, to the left as seen in FIG. 2. An alternative, also discussed below, is to exclude from the rotation a sector or segment around the position of x-ray source 108 seen in FIGS. 1 and 2. As a non-limiting example, if the position of x-ray tube 108 seen in FIGS. 1 and 2 is designated the 0° position, then the rotation for CT imaging excludes positions of x-ray source 108 in the 90° sector or segment between 45° and 315°, or in the 120° sector or segment between 60° and 300°, or in some other sector or segment that is sufficient to clear the patient's head position while taking x-ray CT data over a sufficient angle of rotation for the reconstruction of high quality slice images. While the rotation of x-ray tube 108 and receptor housing 110 still has to clear the lower part of the patient's body, it is generally easier for a patient to keep the lower part of her body away from the rotating components, to the left as seen in FIG. 2 (and preferably behind a shield), than to arch back her head and shoulders.


An example of such a shield is illustrated in FIGS. 3 and 4. FIG. 4 is a side elevation that is otherwise the same as FIG. 2 but additionally illustrates a patient shield 114 having a central opening 114c. Shield 114 may be completely circular in front elevation, as illustrated in FIG. 3 by the circle that includes the broken-line arc. In that case, gantry 106 can rotate through a complete circle in the CT mode. As an alternative, shield 114 can leave open a sector or segment 114a, illustrated in FIG. 3 as the area below the broken-line arc and between the solid lines of shield 114. In that case, gantry 106 can rotate in the CT mode only through an angle that is less than 360°, but the patient can have space for her head and perhaps a shoulder and an arm in the V-shaped cutout 114b of shield 114, for a more comfortable body posture. Specifically, as illustrated in FIG. 3, gantry 106 can rotate only within the portion of shield 114 that is outside V-shaped cutout 114b. One of the possible positions of gantry 106 and tube 108 and receptor housing 110 is shown in solid lines. Another possible position is shown in broken lines, and designated as gantry 106′, carrying x-ray source 108′ and receptor housing 110′. FIG. 4 illustrates a possible shape of shield 114 in side elevation.


Use of the system in a tomosynthesis mode is illustrated in FIGS. 5 and 6, which are otherwise the same as FIGS. 1 and 2 respectively, except that gantry 106 is in a different position relative to breast immobilization unit 104 and axle 102 and column 100, and no shield 114 is shown. In particular, x-ray source 108 is further from unit 104 and column 100, and receptor housing 110 is closer to unit 104. In the tomosynthesis mode, the patient's breast also is immobilized between plates 104a and 104b, which remain in place during imaging. In one example, x-ray tube 108 and receptor housing 110 may undergo a rotation about the immobilized breast that is similar to that in the CT mode operation but is through a smaller angle. A respective two-dimensional projection image Tp is taken for each increment of rotation while x-ray tube 108 and imaging receptor 112 inside housing 110 rotate as a unit, fixed with respect to each other, as in the CT mode or as illustrated in principle in commonly assigned U.S. Pat. No. 7,123,684, the disclosure of which is hereby incorporated by reference herein in its entirety. Alternatively, the motions of x-ray tube 108 and receptor 112 relative to the immobilized breast can be as in said system offered under the trade name Selenia® Dimensions® of the common assignee, certain aspects of which are described in commonly owned U.S. Pat. No. 7,616,801, the disclosure of which is hereby incorporated by reference herein in its entirety. In this alternative case, x-ray tube 108 rotates about the central axis of axle 102, but receptor housing 110 remains in place while imaging receptor 112 rotates or pivots inside housing 110 about an axis that typically passes through the image plane of the receptor, is parallel to the central axis of axle 102, and bisects imaging receptor 112.
The rotation or pivoting of receptor 112 typically is through a smaller angle than the rotation angle of x-ray tube 108, calculated so that a normal to the imaging plane of receptor 112 can continue pointing at or close to the focal spot in x-ray tube 108 from which the imaging x-ray beam is emitted, and so that the beam continues to illuminate all or most of the imaging surface of receptor 112.


In one example of tomosynthesis mode operation, x-ray tube 108 rotates through an arc of about ±15° while imaging receptor 112 rotates or pivots through about ±5° about the horizontal axis that bisects its imaging surface. During this motion, plural projection images RP are taken, such as 20 or 21 images, at regular increments of rotation angle. The central angle of the ±15° arc of x-ray source 108 rotation can be the 0° angle, i.e., the position of the x-ray source 108 seen in FIGS. 5 and 6, or some other angle, e.g., the angle for the x-ray source position typical for MLO imaging in conventional mammography. In the tomosynthesis mode, the breast may be immobilized in unit 104 but, alternatively, lower plate 104b may be removed so that the breast is supported between the upper surface of receptor housing 110 and upper plate 104a, in a manner analogous to the way the breast is immobilized in said system offered under the trade name Selenia®. In the tomosynthesis mode, a greater degree of breast compression can be used under operator control than in the CT mode. The same concave plates 104a and 104b can be used, or generally flat plates can be substituted, or a single compression paddle can be used while the breast is supported by the upper surface of receptor housing 110, as used in said system offered under the Selenia® trade name.


When operating in a tomosynthesis mode, the system of FIGS. 5 and 6 provides multiple choices of that mode, selectable by an operator, for example a narrow angle mode and a wide angle mode. In the narrow angle tomosynthesis mode, x-ray source 108 rotates around unit 104 and the patient's breast immobilized therein through an angle such as ±15°, while in the wide angle tomosynthesis mode x-ray tube 108 rotates through an angle such as in the range of about ±15° to ±60°. The wide angle mode may involve taking the same number of projection images RP as the narrow angle mode, or a greater number. As a non-limiting example, if the narrow angle mode involves taking a total of 20 or 21 tomosynthesis projection images RP as x-ray source 108 moves through its arc around the breast, the wide angle mode may involve taking the same number of images RP or a greater number, such as 40 or 60 or some other number, typically at regular angular increments. The examples of angles of rotation of x-ray source 108 are not limiting. The important point is to provide multiple modes of tomosynthesis operation, where one mode involves x-ray source rotation through a greater angle around the breast than another tomosynthesis mode. Additional details regarding the structure and operation of the imaging system of FIGS. 1-6 are provided in U.S. Pat. No. 8,787,522, the disclosure of which is hereby incorporated by reference herein in its entirety. The methods and systems described herein may be implemented in digital breast tomosynthesis (DBT) procedures as well as multi-modality imaging (MMI) procedures. MMI procedures generally refer to the use of a combination of different imaging modes or techniques, such as DBT acquisitions with varying dosage levels and/or angular coverage, computerized tomography (CT) of a compressed breast, and/or a combination of the two.



FIG. 7A depicts an example of a plurality of angular locations (L1-L7) relative to a compressed breast 702 for real projections. At each of the plurality of angular locations (L1-L7), a real projection image may be acquired. For example, at angular location L1, an x-ray source 704 emits an x-ray emission 706 that passes through the compressed breast and is then detected by an x-ray receptor or detector 708, which allows for processing of the detected x-ray emission to form a real projection image for angular location L1. Subsequently, the x-ray source 704 moves to angular location L2, where an x-ray emission 706 is emitted that passes through the compressed breast and is then detected by the x-ray detector 708, which allows for processing of the detected x-ray emission to form a real projection image for angular location L2. This process continues for each of the angular locations L1-L7. While only seven angular locations are depicted in the figure, such a depiction is for illustrative purposes. In implementation, more or fewer angular locations may be used. In addition, the positions of the angular locations may also differ in different examples. In some examples, the difference between each angular location may be less than or equal to three degrees. In other examples, the difference between each angular location may be less than or equal to one degree.



FIG. 7B depicts an example of a plurality of angular locations (L1-L7) relative to a compressed breast 702 for real projections and virtual projections. In FIG. 7B, the angular locations for real projections are indicated by solid lines and the angular locations for virtual projections are indicated by dashed lines. More specifically, in the example depicted, real projections are acquired at angular locations L1, L3, L5, and L7. Virtual projections are then generated for angular locations L2, L4, and L6. The virtual projections may be synthesized, or otherwise generated, based on the acquired real projections. For example, the virtual projection for angular location L2 may be generated based on the real projections acquired for angular locations L1 and L3. The virtual projection for angular location L2 may be based on other acquired angular locations as well.


In other examples, angular locations for real projections and virtual projections need not alternate as shown in FIG. 7B. As an example, FIG. 7C depicts an example of a plurality of angular locations (L1-L7) relative to a compressed breast 702 for real projections and virtual projections. In the example depicted in FIG. 7C, real projections are acquired at angular locations L1, L2, L4, and L5. Virtual projections are generated for angular locations L3, L6, and L7. Each of the virtual projections may be generated based on any combination of the acquired real projections. For instance, the virtual projection for angular location L7 may be generated based on the acquired real projections for angular locations L1, L2, L4, and L5. While a few different examples of angular locations for real and virtual projections have been provided herein, the technology is not limited to such combinations. Other combinations of angular locations for real and virtual projections are also contemplated.



FIG. 8A depicts an example system 800 for synthesizing virtual projections (VP). The system 800 includes an acquisition system 802 and a virtual projection synthesizer 804. The acquisition system may include one or more of the imaging system(s) discussed above in FIGS. 1-6, which are capable of acquiring real projections (RP). The real projections (RP) are provided to the virtual projection synthesizer 804. The virtual projection synthesizer 804 synthesizes virtual projections (VP) based on the real projections (RP) received from the acquisition system 802. The virtual projection synthesizer 804 may be part of a computing system attached to the acquisition system 802, such as a workstation, or another computing system operatively connected to the acquisition system 802 such that the virtual projection synthesizer 804 may receive the real projections (RP). The virtual projection synthesizer 804 may fuse at least a portion of the real projections (RP) to synthesize the virtual projections (VP). The virtual projections (VP) and the real projections (RP) may then be used together to generate a reconstruction model.


Fusing the real projections (RP) to synthesize or generate virtual projections (VP) may be performed by a variety of image analysis and combination techniques, including superposition, interpolation, and extrapolation techniques, among other potential techniques. Interpolation or extrapolation techniques may be performed based on the angular locations of the real projections (RP) as compared to the corresponding angular location of the virtual projection (VP). For instance, where the angular location of the virtual projection (VP) is between the angular locations of the real projections (RP) used to generate the virtual projection (VP), interpolation techniques may be used. Where the angular location of the virtual projection (VP) is outside the angular locations of the real projections (RP) used to generate the virtual projection (VP), extrapolation techniques may be used. The techniques for fusing the real projections (RP) to generate virtual projections (VP) may also be performed in the spatial, transform, or frequency domains. For example, image fusion techniques in the spatial domain generally operate based on the pixel values in the real projections (RP). Image fusion techniques within the transform or frequency domains generally operate based on mathematical transforms, such as a Fourier or Laplace transform, of the pixel data from the real projections (RP). For instance, in the frequency domain, the image fusion techniques may be based on a rate of change of pixel values within the spatial domain.
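As an illustration of the interpolation and extrapolation techniques described above, the following sketch applies a pixel-wise linear angular weighting to two real projections. The function name, the linear weighting scheme, and the example angles are assumptions for illustration only, not the specific fusion scheme used by the virtual projection synthesizer 804:

```python
import numpy as np

def fuse_projections(rp_a, rp_b, angle_a, angle_b, angle_v):
    """Synthesize a virtual projection at angle_v from two real
    projections rp_a and rp_b by pixel-wise angular weighting.

    Target locations between angle_a and angle_b yield an
    interpolation; locations outside that range yield a (linear)
    extrapolation, mirroring the distinction drawn above.
    """
    # Weight for rp_b grows as angle_v approaches angle_b.
    w = (angle_v - angle_a) / (angle_b - angle_a)
    return (1.0 - w) * rp_a + w * rp_b

# Two 4x4 "real projections" acquired at -3 and +3 degrees.
rp1 = np.full((4, 4), 10.0)
rp2 = np.full((4, 4), 20.0)

vp_mid = fuse_projections(rp1, rp2, -3.0, 3.0, 0.0)   # interpolation
vp_wide = fuse_projections(rp1, rp2, -3.0, 3.0, 6.0)  # extrapolation
print(vp_mid[0, 0], vp_wide[0, 0])  # 15.0 25.0
```

The same weighting generalizes to more than two real projections by normalizing the weights over all contributing angular locations.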



FIG. 8B depicts another example system 810 for synthesizing virtual projections. Similar to the system 800, the system 810 includes the acquisition system 802 and the virtual projection synthesizer 804. The system 810 also includes a reconstruction engine 806. The reconstruction engine 806 generates a reconstruction model and reconstruction images (TR), such as images of slices of the breast as discussed above. The reconstruction engine 806 receives the real projections (RP) from the acquisition system 802 and receives the virtual projections (VP) from the virtual projection synthesizer 804. The reconstruction engine 806 then generates the reconstruction model and reconstruction images (TR) based on the received real projections (RP) and the received virtual projections (VP).
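A minimal sketch of how a reconstruction engine can treat real and virtual projections interchangeably is given below. A shift-and-add (unfiltered backprojection) scheme over a single slice is assumed purely for illustration; the shift values and feature geometry are hypothetical and stand in for the actual reconstruction performed by the reconstruction engine 806:

```python
import numpy as np

def shift_and_add(projections, shifts):
    # Minimal stand-in for a reconstruction engine: each projection
    # (real and virtual alike) is shifted by the in-plane displacement
    # its acquisition angle implies for the slice height, then averaged.
    acc = np.zeros_like(projections[0], dtype=float)
    for proj, s in zip(projections, shifts):
        acc += np.roll(proj, s, axis=1)
    return acc / len(projections)

# Five projections of a slice containing one bright column; the
# feature appears displaced by a per-angle shift in each projection.
projs, shifts = [], []
for s in (-2, -1, 0, 1, 2):
    p = np.zeros((4, 8))
    p[:, 3 + s] = 1.0        # feature displaced by the viewing angle
    projs.append(p)
    shifts.append(-s)        # reconstruction undoes that displacement

tr = shift_and_add(projs, shifts)
print(tr[0, 3])  # 1.0: the feature reinforces at its true position
```

Because the averaging step does not distinguish acquired from synthesized inputs, adding virtual projections densifies the angular sampling that the reconstruction sees.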


In the system 810, the virtual projection synthesizer 804 may also use reconstruction images (TR) to generate the virtual projections (VP). In one example, the virtual projection synthesizer 804 receives reconstruction images (TR) from the reconstruction engine 806. In such an example, the reconstruction images (TR) may be based on the real projections (RP) received by the reconstruction engine 806 from the acquisition system 802. In other examples, the process of generating the reconstruction images (TR) and/or the virtual projections (VP) may be an iterative process. For instance, the reconstruction engine 806 may receive the real projections (RP) from the acquisition system 802 and the virtual projections (VP) generated from the real projections (RP) by the virtual projection synthesizer 804. The reconstruction engine 806 then generates the reconstruction images (TR) from the real projections (RP) and the virtual projections (VP). Those reconstruction images (TR) may be provided back to the virtual projection synthesizer 804 to update the virtual projections (VP) based on the reconstruction images (TR). The updated or modified virtual projections (VP) may then be provided back to the reconstruction engine 806 to generate an updated or modified reconstruction model and updated reconstruction images (TR). This iterative updating or modification process may continue until a performance criterion for the virtual projections, a performance criterion for the reconstruction images, or both, is achieved.
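The iterative loop described above can be sketched as follows. The `reconstruct` and `synthesize` functions here are simple numeric stand-ins (averages), assumed only for illustration; they are not the actual reconstruction engine 806 or virtual projection synthesizer 804, and the convergence threshold is an arbitrary example of a performance criterion:

```python
import numpy as np

def reconstruct(projections):
    # Stand-in for the reconstruction engine: here just an average image.
    return np.mean(projections, axis=0)

def synthesize(real_projections, reconstruction=None):
    # Stand-in for the virtual projection synthesizer: the average of
    # the real projections, nudged toward the current reconstruction
    # when one is available.
    vp = np.mean(real_projections, axis=0)
    if reconstruction is not None:
        vp = 0.5 * (vp + reconstruction)
    return vp

rp = [np.full((2, 2), 10.0), np.full((2, 2), 14.0)]
vp = synthesize(rp)                       # initial VP from RP only
for _ in range(10):                       # iterative refinement loop
    tr = reconstruct(rp + [vp])           # TR from RP and current VP
    new_vp = synthesize(rp, tr)           # updated VP from RP and TR
    if np.max(np.abs(new_vp - vp)) < 1e-6:  # performance criterion met
        break
    vp = new_vp
```

The structure of the loop, rather than the particular averaging functions, is the point: reconstruction output feeds the synthesizer, and the synthesizer output feeds the next reconstruction.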



FIG. 8C depicts another example system 820 for synthesizing virtual projections (VP). The system 820 is similar to the system 800 depicted in FIG. 8A, with the exception that the virtual projection synthesizer 804 includes at least one deep-learning neural network 808. The deep-learning neural network 808 receives at least a portion of the real projections (RP) from the acquisition system 802. The deep-learning neural network 808 processes the received real projections (RP) to synthesize or generate the virtual projections (VP).


Prior to receiving the real projections (RP), the deep-learning neural network 808 has been trained to generate the virtual projections (VP). For instance, the deep-learning neural network 808 may be trained with a known set of real projection data. The real projection data may be separated into a set of training real projection data and training virtual projection data. As an example, real projection data may be received for angular locations L1, L2, and L3. The real projection data for angular location L2 may be segregated into a data set of training virtual projection data. The real projection data for angular location L2 is effectively the desired, or ideal, virtual projection data for angular location L2. As such, the deep-learning neural network 808 can be trained to produce virtual projection data based on other real projection data. The real projection data for angular locations L1 and L3 is used as input during training, and the known virtual projection data for the angular location L2 is used as a ground truth during training. Training the deep-learning neural network 808 may be performed using multiple different techniques. As one example, the coefficients of the deep-learning neural network 808 may be adjusted to minimize a pre-defined cost function that evaluates the difference between the known virtual projection data and the output of the deep-learning neural network 808 during training. Multiple sets of real projection data may be used to train the deep-learning neural network 808 until a desired performance of the deep-learning neural network 808 is achieved.
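The training procedure described above, in which coefficients are adjusted to minimize a cost function comparing the network output against the held-out real projection for angular location L2, can be sketched with a toy model. A single linear layer trained by gradient descent on synthetic data stands in for the deep-learning neural network 808; the data, learning rate, and iteration count are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: each sample is the flattened pixel data of real
# projections at angular locations L1 and L3 (input) and L2 (ground truth).
n_pix = 16
l1 = rng.uniform(0, 1, (200, n_pix))
l3 = rng.uniform(0, 1, (200, n_pix))
l2 = 0.5 * (l1 + l3)                  # synthetic "middle" projection

x = np.hstack([l1, l3])               # network input: L1 and L3 data
w = rng.normal(0, 0.01, (2 * n_pix, n_pix))  # coefficients to learn

lr = 0.05
for _ in range(2000):
    pred = x @ w                        # predicted virtual projection
    grad = x.T @ (pred - l2) / len(x)   # gradient of the 0.5*MSE cost
    w -= lr * grad                      # adjust coefficients to
                                        # minimize the cost function

mse = float(np.mean((x @ w - l2) ** 2))
print(f"training MSE: {mse:.6f}")
```

A real implementation would use a multi-layer network and mini-batch optimization, but the cost-minimization loop has the same shape.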


In other examples, the deep-learning neural network 808 may be used to generate a reconstruction model or reconstruction images (TR) without an intermediate operation of generating virtual projections (VP). In such an example, the deep-learning neural network 808 may be trained with a set of real projection data and a corresponding reconstruction model or reconstruction images. The reconstruction images can be used as the ground truth during training and the real projection data may be used as the input to the deep-learning neural network 808 during training. Training of the deep-learning neural network 808 may then be similar to the training discussed above.


While a deep-learning neural network 808 has been used in the example system 820 depicted in FIG. 8C, other machine learning techniques or neural networks, such as recurrent or convolutional neural networks, may be used in place of, or in combination with, the deep-learning neural network 808. For example, hidden Markov models or support vector machines may also be used. In general, the machine learning techniques are supervised learning techniques and are trained based on a known set of data, as discussed herein. Those machine learning techniques may be reinforced as additional imaging data is acquired for different patients.



FIG. 9A depicts a method 900 for generating images of a breast. At operation 902, a first x-ray emission is emitted from an x-ray source at a first angular location. For example, an x-ray source may emit x-rays from the angular location L1. At operation 904, the emitted x-ray emission is detected after passing through the breast from the first angular location. At operation 906, first x-ray imaging data is generated from the first x-ray emission detected at operation 904. The first x-ray imaging data may be a real projection for the first angular location. Operations 902-906 then effectively repeat for a second angular location. For instance, at operation 908, a second x-ray emission is emitted from the second angular location. As an example, the x-ray source may emit x-rays from the angular location L3. At operation 910, the emitted x-ray emission is detected after passing through the breast from the second angular location. At operation 912, second x-ray imaging data is generated from the second x-ray emission detected at operation 910. The second x-ray imaging data may be a real projection for the second angular location.


At operation 914, third x-ray imaging data for a third angular location is synthesized based on at least the first x-ray imaging data and the second x-ray imaging data. The third angular location is different from the first and second angular locations. In the example where the first x-ray imaging data is a real projection for angular location L1 and the second x-ray imaging data is a real projection for angular location L3, the third x-ray imaging data may be a virtual projection for the angular location L2. As such, the third x-ray imaging data for the third angular location is generated without emitting x-ray radiation at the third angular location, eliminating the need for an x-ray emission at that location. By eliminating the need for that x-ray emission, the overall radiation dose delivered to the patient is reduced and the time required to complete the imaging procedure is shortened.


In another example of method 900, the first x-ray imaging data is a real projection for angular location L5 and the second x-ray imaging data is a real projection for angular location L6. In that example, the third x-ray imaging data generated at operation 914 may be a virtual projection for an angular location wider than angular locations L5 and L6, such as angular location L7. As such, by generating the third x-ray imaging data for the third angular location to expand the overall angular range from that of angular location L6 to that of angular location L7, the overall image quality may be improved due to the additional information provided by the wider angular coverage. For instance, image artifacts such as overlapping structures may be reduced or removed.



FIG. 9B depicts a method 920 for synthesizing x-ray imaging data. The method 920 may be used in operation 914 of method 900. At operation 922, first x-ray imaging data for a first angular location is received, and at operation 924, second x-ray imaging data for a second angular location is received. The first x-ray imaging data may be a real projection for a first angular location and the second x-ray imaging data may be a real projection for a second angular location. Third x-ray imaging data for a third angular location, in the form of a virtual projection for example, may then be synthesized at operation 926 and/or operation 928. Synthesizing a virtual projection may include simulating a real projection or other image without the requirement of an x-ray emission. At operation 926, the first x-ray imaging data and second x-ray imaging data are fused to create the third x-ray imaging data. Fusing the first x-ray imaging data and second x-ray imaging data may be done by a variety of image analysis and combination techniques, including superposition, interpolation, and extrapolation techniques, among other potential techniques. Interpolation or extrapolation techniques may be performed based on the angular locations of the first x-ray imaging data and second x-ray imaging data as compared to the corresponding angular location of the third x-ray imaging data. For instance, where the angular location of the third x-ray imaging data is between the angular locations of the first x-ray imaging data and second x-ray imaging data used to generate the third x-ray imaging data, interpolation techniques may be used. Where the angular location of the third x-ray imaging data is outside the angular locations of the first x-ray imaging data and second x-ray imaging data used to generate the third x-ray imaging data, extrapolation techniques may be used.
The techniques for fusing the first x-ray imaging data and second x-ray imaging data to generate the third x-ray imaging data may also be performed in the spatial, transform, or frequency domains. For example, image fusion techniques in the spatial domain generally operate based on the pixel values in the first x-ray imaging data and second x-ray imaging data. Image fusion techniques within the transform or frequency domains generally operate based on mathematical transforms, such as a Fourier or Laplace transform, of the pixel data from the first x-ray imaging data and second x-ray imaging data. For instance, in the frequency domain, the image fusion techniques may be based on a rate of change of pixel values within the spatial domain.


At operation 928, the first x-ray imaging data and second x-ray imaging data are provided as inputs into a trained deep-learning neural network, and the trained deep-learning neural network is executed based on the first x-ray imaging data and second x-ray imaging data to generate the third x-ray imaging data. The trained deep-learning neural network may have been trained based on a set of real projection data, as discussed above and discussed below in further detail with respect to FIG. 9C.


At operation 930, a reconstruction model and/or reconstruction images are generated based on the first x-ray imaging data, the second x-ray imaging data, and the third x-ray imaging data. For example, the first x-ray imaging data and the second x-ray imaging data may be provided to a reconstruction engine. The third x-ray imaging data generated at operation 926 and/or operation 928 may also be provided to the reconstruction engine. The reconstruction engine then generates the reconstruction model and/or reconstruction images based on the first x-ray imaging data, the second x-ray imaging data, and the third x-ray imaging data. In some examples, reconstruction data from the reconstruction model can be provided to a virtual projection synthesizer to be used in operation 926 and/or operation 928 to generate the third x-ray imaging data. In such examples, the reconstruction data is generated at operation 930 based on the first x-ray imaging data and the second x-ray imaging data prior to the generation of the third x-ray imaging data at operation 926 and/or operation 928. In other examples, the process may be iterative, and operation 926 and/or operation 928 may repeat upon receiving the reconstruction data to generate modified third x-ray imaging data. The modified third x-ray imaging data may then be used to generate a modified or updated reconstruction model and/or reconstruction images. At operation 932, one or more reconstruction slices of the breast are displayed based on the reconstruction model and/or reconstruction images generated at operation 930. In other examples, the one or more reconstruction slices of the breast may be displayed concurrently or sequentially with one or more of the acquired real projection images and/or one or more of the generated virtual projection images.
In some examples of operation 932, one or more reconstruction slices of the breast, one or more of the acquired real projection images, and/or one or more of the generated virtual projection images may be displayed, either concurrently or sequentially. While the x-ray imaging data in the methods discussed herein is referred to as first, second, third, etc., such designations are merely for clarity and do not necessarily denote any particular order or sequence. In addition, it should be appreciated that additional real x-ray imaging data may be used and more virtual imaging data may also be generated than what is discussed by example in the methods described herein.



FIG. 9C depicts a method 950 for training a deep-learning neural network for use in medical imaging. The method 950 may be used to train the trained deep-learning neural network used in operation 928 of method 920 described above with reference to FIG. 9B. At operation 952, a set of real prior x-ray imaging data used for imaging a breast at multiple angular locations is obtained. The set of real prior x-ray imaging data may be prior tomosynthesis data acquired for a plurality of different breasts. For instance, the real prior x-ray imaging data may be a plurality of real projections taken at different angular locations. At operation 954, the real prior x-ray imaging data is divided into a training real data set for a first plurality of the angular locations and a training virtual data set for a second plurality of the angular locations. The second plurality of angular locations is different from the first plurality of angular locations. As an example, real projection data may be received for angular locations L1, L2, L3, L4, and L5. The real projection data for angular locations L2 and L4 may be divided into a data set of training virtual data. The real projection data for those angular locations is effectively the desired, or ideal, virtual projection data. As such, the deep-learning neural network can be trained to produce virtual projection data based on prior real projection data. The real projection data for angular locations L1, L3, and L5 is left as the training real data set. At operation 956, the training real data set is provided as input to the deep-learning neural network. At operation 958, the training virtual data set is provided to the deep-learning neural network as a ground truth for the training real data set. At operation 960, one or more parameters of the deep-learning neural network are modified based on the training real data set and the training virtual data set.
As an example, the coefficients of the deep-learning neural network may be adjusted to minimize a pre-defined cost function that evaluates the difference between the training virtual data set and the output of the deep-learning neural network during training.
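Operation 954, dividing the prior real projection data by angular location into a training real data set and a training virtual data set, can be sketched as follows. The dictionary layout and placeholder projection values are assumptions for illustration:

```python
# Prior real projections keyed by angular location, as in operation 952.
angular_locations = ["L1", "L2", "L3", "L4", "L5"]
projections = {loc: f"projection_{loc}" for loc in angular_locations}

# Hold out L2 and L4 as the desired (ground truth) virtual projections.
virtual_locations = {"L2", "L4"}
training_real = {loc: p for loc, p in projections.items()
                 if loc not in virtual_locations}       # network input
training_virtual = {loc: p for loc, p in projections.items()
                    if loc in virtual_locations}        # ground truth

print(sorted(training_real))     # ['L1', 'L3', 'L5']
print(sorted(training_virtual))  # ['L2', 'L4']
```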


At operation 962, the deep-learning neural network is tested. The deep-learning neural network may be tested with other sets of real prior x-ray imaging data to determine the performance and accuracy of the deep-learning neural network. At operation 964, based on the testing performed in operation 962, a determination is made as to whether the performance of the deep-learning neural network is acceptable. The determination may be made based on differences between the output of the deep-learning neural network and the known test data. If the performance is acceptable or within a predetermined tolerance, the trained deep-learning neural network is stored for later use with live real x-ray imaging data. If the performance is not acceptable or outside a predetermined tolerance, the method 950 flows back to operation 952, where an additional set of real prior x-ray imaging data is obtained and used to continue training the deep-learning neural network. The method 950 continues and repeats until the deep-learning neural network generates acceptable results that are within the predetermined tolerance.



FIG. 10 depicts a method 1000 for imaging a breast. At operation 1002, real projection data is received for a plurality of angular locations. For example, the real projection data may include first real projection data for an x-ray emission from a first angular location relative to the breast, second real projection data for an x-ray emission emitted from a second angular location relative to the breast, and third real projection data for an x-ray emission from a third angular location relative to the breast. At operation 1004, a synthesization process is executed to generate virtual projection data. The synthesization process may include fusing image data and/or executing a trained deep-learning neural network, as discussed above. Continuing with the example above, the synthesization process may generate, based on the first real projection data and the second real projection data, first virtual projection data for an x-ray emission from a fourth angular location relative to the breast, where the fourth angular location is located between the first angular location and the third angular location. The synthesization process may also generate, based on the second real projection data and the third real projection data, second virtual projection data for an x-ray emission from a fifth angular location relative to the breast, where the fifth angular location is located between the second angular location and the fourth angular location.


At operation 1006, a determination is made as to whether the generated virtual projection data is acceptable. For instance, a determination may be made as to whether the image quality of the virtual projection data is within a predetermined tolerance. Continuing with the example above, a determination may be made as to whether at least one of the first virtual projection data or the second virtual projection data has a quality outside of a predetermined tolerance. In one example, determining whether the generated virtual projection data is acceptable is based on the identification of landmarks in the real projection data and the generated virtual projection data. In continuing with the example above, a landmark may be identified in the first real projection data and/or the second real projection data. The landmark may then be identified in the first virtual projection data. The location of the landmark in the first virtual projection data is then compared to the location of the landmark in the first real projection data and/or the second real projection data. Based on the comparison, a determination is made as to whether the location of the landmark in the first virtual projection data is within the predetermined tolerance.
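The landmark comparison described above can be sketched as a simple displacement check. The function name, the pixel coordinates, and the 2-pixel tolerance are hypothetical values assumed for illustration:

```python
import numpy as np

def landmark_within_tolerance(real_xy, virtual_xy, tol_px=2.0):
    """Compare the location of a landmark identified in a real
    projection against its location in the synthesized virtual
    projection; return True when the displacement is within the
    predetermined tolerance."""
    displacement = np.linalg.norm(np.asarray(virtual_xy, dtype=float)
                                  - np.asarray(real_xy, dtype=float))
    return bool(displacement <= tol_px)

# Landmark (e.g., a calcification) found at pixel (120, 85) in the
# real projection and at (121, 86) in the virtual projection.
ok = landmark_within_tolerance((120, 85), (121, 86))
print(ok)  # True: displacement of about 1.41 px is inside the tolerance
```

A virtual projection whose landmarks fall outside the tolerance would be flagged as unacceptable, triggering the modification of the synthesization process at operation 1008.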


If the virtual projections are determined to be acceptable or within the predetermined tolerance at operation 1006, the method 1000 flows to operation 1012 where a reconstruction model and/or reconstruction images are generated from the real projection data received at operation 1002 and the virtual projection data generated at operation 1004. If, however, the virtual projection data is determined to not be acceptable or outside the predetermined tolerance, the method 1000 flows to operation 1008 where the synthesization process is modified to create a modified synthesization process. Modifying the synthesization process may include altering the image combination techniques, such as modifying weighting or other parameters, used to combine the real projection data. In examples where the synthesization process includes executing a deep-learning neural network, modifying the synthesization process may include modifying the deep-learning neural network to create a modified deep-learning neural network. Modifying the deep-learning neural network may include adjusting the coefficients of the deep-learning neural network such that the modified deep-learning neural network produces virtual projection data that will fall within the predetermined tolerance.


The modified synthesization process is then executed in operation 1010 to generate modified virtual projection data. In continuing with the example above, the modified synthesization process may be executed to generate a modified first virtual projection and a modified second virtual projection. At operation 1012, a modified reconstruction model and/or modified reconstruction images are generated from the real projection data received at operation 1002 and the modified virtual projection data generated at operation 1010. In continuing with the example above, generating the modified reconstruction model and/or modified reconstruction images may be based on the first real projection data, the second real projection data, the third real projection data, the modified first virtual projection data, and the modified second virtual projection data.


At operation 1014, a determination is made as to whether the reconstruction model and/or reconstruction images generated at operation 1012 are acceptable or within a reconstruction quality tolerance. For example, it may be determined that a particular slice has a quality outside a reconstruction quality tolerance. The determination that the slice has a quality outside a reconstruction quality tolerance may be based on image artifacts within the slice and/or other image quality measurements, such as the sharpness of objects in the slice, contrast-to-noise ratios, spatial resolutions, z-axis resolution, or an artifact spread function (e.g., artifact spreading among the slices along the z-direction). If the reconstruction model and/or reconstruction images are determined to be acceptable at operation 1014, the method 1000 flows to operation 1016 where a slice from the reconstruction model and/or reconstruction images is displayed. In other examples, the one or more reconstruction slices of the breast may be displayed concurrently or sequentially with one or more of the acquired real projection images and/or one or more of the generated virtual projection images. In some examples of operation 1016, one or more reconstruction slices of the breast, one or more of the acquired real projection images, and/or one or more of the generated virtual projection images may be displayed, either concurrently or sequentially. If the reconstruction model and/or reconstruction images are determined to not be acceptable at operation 1014, the method 1000 flows back to operation 1008 where the synthesization process may be further modified to create a further modified synthesization process. The further modified synthesization process is then executed at operation 1010 to generate further modified virtual projection data. That further modified virtual projection data may then be used to create a further modified reconstruction model and/or reconstruction images.
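As one example of the image quality measurements mentioned above, a contrast-to-noise ratio for a reconstruction slice can be sketched as follows. The synthetic slice, the region masks, and the acceptance threshold of 3.0 are assumptions for illustration, not values prescribed by the system:

```python
import numpy as np

def contrast_to_noise_ratio(slice_img, object_mask, background_mask):
    """Contrast-to-noise ratio between an object region and the
    background of a reconstruction slice: mean-signal difference
    divided by the background noise level."""
    obj = slice_img[object_mask]
    bkg = slice_img[background_mask]
    return float(abs(obj.mean() - bkg.mean()) / bkg.std())

rng = np.random.default_rng(1)
img = rng.normal(100.0, 5.0, (64, 64))     # noisy background slice
img[20:30, 20:30] += 40.0                  # bright object region

obj_mask = np.zeros((64, 64), dtype=bool)
obj_mask[20:30, 20:30] = True
bkg_mask = ~obj_mask

cnr = contrast_to_noise_ratio(img, obj_mask, bkg_mask)
acceptable = cnr >= 3.0    # hypothetical reconstruction quality tolerance
print(f"CNR = {cnr:.1f}, acceptable = {acceptable}")
```

A slice failing such a check at operation 1014 would route the method back to operation 1008 for further modification of the synthesization process.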



FIG. 11 illustrates one example of a suitable operating environment 1100 in which one or more of the present embodiments can be implemented. This operating environment may be incorporated directly into the imaging systems disclosed herein, or may be incorporated into a computer system discrete from, but used to control, the imaging systems described herein. This is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality. Other computing systems, environments, and/or configurations that can be suitable for use include, but are not limited to, imaging systems, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics such as smart phones, network PCs, minicomputers, mainframe computers, tablets, distributed computing environments that include any of the above systems or devices, and the like.


In its most basic configuration, operating environment 1100 typically includes at least one processing unit 1102 and memory 1104. Depending on the exact configuration and type of computing device, memory 1104 (storing, among other things, instructions to perform the image acquisition and processing methods disclosed herein) can be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. This most basic configuration is illustrated in FIG. 11 by dashed line 1106. Further, environment 1100 can also include storage devices (removable, 1108, and/or non-removable, 1110) including, but not limited to, magnetic or optical disks or tape. Similarly, environment 1100 can also have input device(s) 1114 such as touch screens, keyboard, mouse, pen, voice input, etc., and/or output device(s) 1116 such as a display, speakers, printer, etc. Also included in the environment can be one or more communication connections 1112, such as LAN, WAN, point to point, Bluetooth, RF, etc.


Operating environment 1100 typically includes at least some form of computer readable media. Computer readable media can be any available media that can be accessed by processing unit 1102 or other devices comprising the operating environment. By way of example, and not limitation, computer readable media can comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, solid state storage, or any other tangible medium which can be used to store the desired information. Communication media embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media. A computer-readable device is a hardware device incorporating computer storage media.


The operating environment 1100 can be a single computer operating in a networked environment using logical connections to one or more remote computers. The remote computer can be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above as well as others not so mentioned. The logical connections can include any method supported by available communications media. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.


In some embodiments, the components described herein comprise such modules or instructions executable by computer system 1100 that can be stored on computer storage media and other tangible media and transmitted in communication media. Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Combinations of any of the above should also be included within the scope of computer readable media. In some embodiments, computer system 1100 is part of a network that stores data in remote storage media for use by the computer system 1100.



FIG. 12 is an embodiment of a network 1200 in which the various systems and methods disclosed herein may operate. In embodiments, a client device, such as client device 1202, may communicate with one or more servers, such as servers 1204 and 1206, via a network 1208. In embodiments, a client device may be a standalone device, or may be a portable or fixed workstation operatively connected to the acquisition system. The client device may also include or incorporate a laptop, a personal computer, a smart phone, a PDA, a netbook, or any other type of computing device, such as the computing device in FIG. 11. In embodiments, servers 1204 and 1206 may also be any type of computing device, such as the computing device illustrated in FIG. 11. Network 1208 may be any type of network capable of facilitating communications between the client device and one or more servers 1204 and 1206. For example, the surface image data and the internal image data may be acquired locally via the imaging systems and communicated to another computing device(s) for further processing, such as an image acquisition workstation or a cloud-based service. Examples of such networks include, but are not limited to, LANs, WANs, cellular networks, and/or the Internet.


In embodiments, the various systems and methods disclosed herein may be performed by one or more server devices. For example, in one embodiment, a single server, such as server 1204 may be employed to perform the systems and methods disclosed herein, such as the methods for imaging discussed herein. Client device 1202 may interact with server 1204 via network 1208. In further embodiments, the client device 1202 may also perform functionality disclosed herein, such as scanning and image processing, which can then be provided to servers 1204 and/or 1206.


In alternate embodiments, the methods and systems disclosed herein may be performed using a distributed computing network, or a cloud network. In such embodiments, the methods and systems disclosed herein may be performed by two or more servers, such as servers 1204 and 1206. Although a particular network embodiment is disclosed herein, one of skill in the art will appreciate that the systems and methods disclosed herein may be performed using other types of networks and/or network configurations.


In light of the foregoing, it should be appreciated that the present technology is able to reduce the overall radiation dose to the patient during an imaging process by acquiring real projection data at fewer angular locations than what was previously used in tomosynthesis procedures. Virtual projections may then be used in place of additional real projections. The combination of the real and virtual projections thus can provide a reconstruction substantially equivalent to that of a former full-dose projection acquisition process. In addition, the present technology can improve the image quality of the reconstructed images, without increasing radiation dosage, by generating virtual projections at angular locations that provide additional information to reduce image artifacts that would otherwise appear in tomosynthesis. Further, the total time required to complete the imaging process is reduced by the present technology. Reducing the time the patient is imaged also reduces the impact of patient movement on the image quality of the reconstructed data.


The embodiments described herein may be employed using software, hardware, or a combination of software and hardware to implement and perform the systems and methods disclosed herein. Although specific devices have been recited throughout the disclosure as performing specific functions, one of skill in the art will appreciate that these devices are provided for illustrative purposes, and other devices may be employed to perform the functionality disclosed herein without departing from the scope of the disclosure.


This disclosure describes some embodiments of the present technology with reference to the accompanying drawings, in which only some of the possible embodiments are shown. Other aspects may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure is thorough and complete and fully conveys the scope of the possible embodiments to those skilled in the art. Further, as used herein and in the claims, the phrase “at least one of element A, element B, or element C” is intended to convey any of: element A, element B, element C, elements A and B, elements A and C, elements B and C, and elements A, B, and C.


Although specific embodiments are described herein, the scope of the technology is not limited to those specific embodiments. One skilled in the art will recognize other embodiments or improvements that are within the scope and spirit of the present technology. Therefore, the specific structure, acts, or media are disclosed only as illustrative embodiments. The scope of the technology is defined by the following claims and any equivalents thereof.

Claims
  • 1. A system for generating images of a breast, the system comprising: an x-ray source; an x-ray detector; at least one processor operatively connected to the x-ray detector; and memory operatively connected to the at least one processor, the memory storing instructions that, when executed by the at least one processor, cause the system to perform a set of operations comprising: emitting, from the x-ray source, a first x-ray emission at a first angular location relative to the x-ray detector; detecting, by the x-ray detector, the first x-ray emission after passing through the breast; generating first x-ray imaging data from the detected first x-ray emission; emitting, from the x-ray source, a second x-ray emission at a second angular location relative to the breast; detecting, by the x-ray detector, the second x-ray emission after passing through the breast; generating second x-ray imaging data from the detected second x-ray emission; synthesizing, based on at least the first x-ray imaging data and the second x-ray imaging data, third x-ray imaging data for a third angular location relative to the breast, wherein the third angular location is different from the first angular location and the second angular location, thereby eliminating the need for an x-ray emission at the third angular location, wherein synthesizing the third x-ray imaging data comprises: providing the first x-ray imaging data and the second x-ray imaging data into a trained deep-learning neural network; and executing the trained deep-learning neural network based on the first x-ray imaging data and the second x-ray imaging data to generate the third x-ray imaging data; and generating and displaying an image of the breast from the third x-ray imaging data.
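The synthesis step of claim 1 can be illustrated with a minimal sketch. The patent does not disclose a specific network architecture, so a simple angle-weighted blend of the two real projections stands in here for the trained deep-learning neural network; the function name, the list-of-lists projection format, and the angles are all hypothetical.

```python
# Illustrative sketch only: a linear angle-weighted blend stands in for the
# trained deep-learning network recited in claim 1 (the patent does not
# specify an architecture). All names and values here are hypothetical.

def synthesize_virtual_projection(proj_a, proj_b, angle_a, angle_b, angle_v):
    """Estimate a projection at angle_v from real projections at angle_a and angle_b."""
    if not (min(angle_a, angle_b) <= angle_v <= max(angle_a, angle_b)):
        raise ValueError("virtual angle must lie between the real angles")
    # Interpolation weight: 0 at angle_a, 1 at angle_b.
    w = (angle_v - angle_a) / (angle_b - angle_a)
    return [[(1 - w) * a + w * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(proj_a, proj_b)]

# Two tiny 2x2 "projections" acquired at -1.5 and +1.5 degrees.
p1 = [[0.0, 2.0], [4.0, 6.0]]
p2 = [[2.0, 4.0], [6.0, 8.0]]
virtual = synthesize_virtual_projection(p1, p2, -1.5, 1.5, 0.0)  # midpoint blend
```

In the claimed system, the blend would be replaced by inference through the trained network, but the input/output shape of the operation — two real projections in, one virtual projection out — is the same.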
  • 2. The system of claim 1, wherein the first x-ray imaging data is a first real projection for the first angular location, the second x-ray imaging data is a second real projection for the second angular location, and the third x-ray imaging data is a virtual projection for the third angular location.
  • 3. The system of claim 1, wherein synthesizing the third x-ray imaging data includes fusing the first x-ray imaging data and the second x-ray imaging data in at least one of a spatial domain or a frequency domain.
  • 4. The system of claim 1, wherein synthesizing the third x-ray imaging data further comprises: generating reconstruction data from the first x-ray imaging data and the second x-ray imaging data; and wherein synthesizing the third x-ray imaging data is further based on the generated reconstruction data.
  • 5. The system of claim 1, wherein the operations further comprise training a deep-learning neural network to generate the trained deep-learning neural network, wherein training the deep-learning neural network comprises: obtaining a set of real prior x-ray imaging data used for imaging a breast at multiple angular locations; dividing the set of real prior x-ray imaging data into a plurality of datasets comprising a training real data set for a first plurality of the angular locations and a training virtual data set for a second plurality of the angular locations, the second plurality of angular locations being different from the first plurality of angular locations; providing the training real data set as inputs into the deep-learning neural network; and providing the training virtual data set as a ground truth for the deep-learning neural network.
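The training-data split described in claim 5 can be sketched as follows. Claim 5 does not mandate a particular partitioning scheme, so alternating angular locations is used here as one plausible choice; the function name and the dict-keyed-by-angle representation are illustrative assumptions.

```python
# Hypothetical sketch of the data split in claim 5: real prior projections
# acquired at many angular locations are divided into network inputs (one
# subset of angles) and held-out ground-truth targets (the remaining angles).
# Alternating angles is an assumed scheme, not one recited in the claim.

def split_training_data(projections_by_angle):
    """projections_by_angle: dict mapping angular location -> projection data."""
    angles = sorted(projections_by_angle)
    inputs = {a: projections_by_angle[a] for a in angles[::2]}    # training real data set
    targets = {a: projections_by_angle[a] for a in angles[1::2]}  # training virtual data set (ground truth)
    return inputs, targets

# Five prior projections at -6, -3, 0, 3, and 6 degrees (placeholder data).
prior = {-6: "p0", -3: "p1", 0: "p2", 3: "p3", 6: "p4"}
train_inputs, ground_truth = split_training_data(prior)
```

The network is then trained to predict the held-out projections at the intermediate angles from the input projections, which is what lets it later synthesize virtual projections at angles that were never exposed.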
  • 6. The system of claim 1, wherein the operations are performed as part of digital breast tomosynthesis or multi-modality imaging.
  • 7. A computer-implemented method, executed by at least one processor, for generating images of a breast, the method comprising: receiving first real projection data for an x-ray emission from a first angular location relative to the breast; receiving second real projection data for an x-ray emission emitted from a second angular location relative to the breast; receiving third real projection data for an x-ray emission from a third angular location relative to the breast; executing a synthesization process to: generate, based on the first real projection data and the second real projection data, first virtual projection data for an x-ray emission from a fourth angular location relative to the breast, wherein the fourth angular location is different from the first angular location and the third angular location; and generate, based on the second real projection data and the third real projection data, second virtual projection data for an x-ray emission from a fifth angular location relative to the breast, wherein the fifth angular location is different from the second angular location and the fourth angular location; determining that at least one of the first virtual projection data or the second virtual projection data has a quality outside of a predetermined tolerance; based on the determination that the at least one of the first virtual projection or the second virtual projection has a quality outside of a predetermined tolerance, modifying the synthesization process to create a modified synthesization process; executing the modified synthesization process to generate a modified first virtual projection and a modified second virtual projection; generating a reconstruction model from the first real projection data, the second real projection data, the third real projection data, the modified first virtual projection data, and the modified second virtual projection data; and displaying at least one of a slice of the breast from the generated reconstruction model, the first real projection data, the second real projection data, the third real projection data, the first virtual projection data, or the second virtual projection data.
  • 8. The method of claim 7, wherein determining that at least one of the first virtual projection data or the second virtual projection data has a quality outside of a predetermined tolerance further comprises: identifying a landmark in one of the first real projection data or the second real projection data; identifying the landmark in the first virtual projection data; comparing the location of the landmark in the first virtual projection data to the location of the landmark in at least one of the first real projection data or the second real projection data; and based on the comparison, determining whether the location of the landmark in the first virtual projection data is within the predetermined tolerance.
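The landmark-based quality check of claim 8 reduces to comparing the same landmark's position in a real and a virtual projection against a tolerance. In this minimal sketch the function name, the (x, y) pixel coordinates, and the 2-pixel tolerance are assumed for illustration; the claim specifies neither the landmark type nor the tolerance value.

```python
# Minimal sketch of claim 8's quality determination: a landmark (e.g., a
# calcification) is located in a real projection and in the virtual
# projection, and the virtual projection is flagged if the positions differ
# by more than a tolerance. The 2-pixel tolerance is an assumption.

def landmark_within_tolerance(real_xy, virtual_xy, tolerance_px=2.0):
    """Compare landmark locations (x, y) between a real and a virtual projection."""
    dx = virtual_xy[0] - real_xy[0]
    dy = virtual_xy[1] - real_xy[1]
    distance = (dx * dx + dy * dy) ** 0.5  # Euclidean displacement in pixels
    return distance <= tolerance_px

ok = landmark_within_tolerance((120.0, 88.0), (121.0, 89.0))   # about 1.41 px apart
bad = landmark_within_tolerance((120.0, 88.0), (125.0, 88.0))  # 5 px apart
```

A failed check would then trigger the claim 7 step of modifying the synthesization process and regenerating the virtual projections.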
  • 9. The method of claim 7, wherein the synthesization process comprises: providing the first real projection data, the second real projection data, and the third real projection data into a trained deep-learning neural network; and executing the trained deep-learning neural network based on the first real projection data, the second real projection data, and the third real projection data to generate the first virtual projection data and the second virtual projection data.
  • 10. The method of claim 9, wherein modifying the synthesization process comprises modifying coefficients of the trained deep-learning neural network.
  • 11. The method of claim 7, further comprising: determining that the slice has a quality outside a reconstruction quality tolerance; and based on the determination that the slice has a quality outside a reconstruction quality tolerance, further modifying the modified synthesization process to create a further modified synthesization process.
  • 12. The method of claim 11, further comprising: executing the further modified synthesization process to generate further modified first virtual projection data and further modified second virtual projection data; generating a modified reconstruction model from the first real projection data, the second real projection data, the third real projection data, the further modified first virtual projection data, and the further modified second virtual projection data; and displaying at least one of a slice of the breast from the modified reconstruction model, the further modified first virtual projection, or the further modified second virtual projection.
  • 13. The method of claim 7, wherein the method is performed as part of digital breast tomosynthesis or multi-modality imaging.
  • 14. A computer-implemented method, executed by at least one processor, for generating images of a breast, the method comprising: receiving first real projection data for an x-ray emission from a first angular location relative to the breast; receiving second real projection data for an x-ray emission emitted from a second angular location relative to the breast; providing the first real projection data and the second real projection data into a trained deep-learning neural network; executing the trained deep-learning neural network based on the first real projection data and the second real projection data to generate first virtual projection data for a third angular location relative to the breast; generating a reconstruction model from the first real projection data, the second real projection data, and the first virtual projection data; and displaying at least one of a slice of the breast from the generated reconstruction model, the first real projection data, the second real projection data, or the first virtual projection data.
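The end of claim 14 — generating a reconstruction model from both real and virtual projections — can be sketched at its simplest. A naive unweighted average of aligned projections stands in here for a real tomosynthesis reconstruction algorithm (e.g., filtered backprojection), which the claim leaves unspecified; the function name and data layout are illustrative assumptions.

```python
# Hedged sketch of combining real and virtual projections into reconstruction
# data, as in claim 14. A plain per-pixel average substitutes for an actual
# reconstruction algorithm; everything here is a simplified stand-in.

def reconstruct(projections):
    """Average a list of equally sized 2D projections into a single slice."""
    n = len(projections)
    rows, cols = len(projections[0]), len(projections[0][0])
    return [[sum(p[r][c] for p in projections) / n for c in range(cols)]
            for r in range(rows)]

real_1 = [[1.0, 2.0]]   # real projection, first angular location
real_2 = [[3.0, 4.0]]   # real projection, second angular location
virtual = [[2.0, 3.0]]  # synthesized projection for the intermediate angle
slice_image = reconstruct([real_1, real_2, virtual])
```

The point of the sketch is that the virtual projection enters the reconstruction on equal footing with the real ones, which is what lets the claimed method reduce the number of actual exposures.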
  • 15. The method of claim 14, further comprising: determining that the slice has a quality outside a reconstruction quality tolerance; and based on the determination that the slice has a quality outside a reconstruction quality tolerance, modifying the trained deep-learning neural network to create a modified deep-learning neural network.
  • 16. The method of claim 15, further comprising: executing the modified deep-learning neural network based on the first real projection data and the second real projection data to generate a modified first virtual projection; generating a modified reconstruction model from the first real projection data, the second real projection data, and the modified first virtual projection data; and displaying at least one of a slice of the breast from the modified reconstruction model or the modified first virtual projection.
  • 17. The method of claim 15, wherein the determination that the slice has a quality outside a reconstruction quality tolerance is based on at least one of image artifacts or image quality measurements.
  • 18. The method of claim 14, wherein the difference between the first angular location and the second angular location is less than or equal to three degrees.
  • 19. The method of claim 14, wherein the method is performed as part of digital breast tomosynthesis or multi-modality imaging.
RELATED CASES

This application claims the benefit of U.S. Provisional Application No. 62/730,818 filed on Sep. 13, 2018, which is hereby incorporated by reference in its entirety.

US Referenced Citations (380)
Number Name Date Kind
3365575 Strax Jan 1968 A
3502878 Stewart Mar 1970 A
3863073 Wagner Jan 1975 A
3971950 Evans et al. Jul 1976 A
4160906 Daniels et al. Jul 1979 A
4310766 Finkenzeller et al. Jan 1982 A
4380086 Vagi Apr 1983 A
4496557 Malen et al. Jan 1985 A
4513433 Weiss et al. Apr 1985 A
4542521 Hahn et al. Sep 1985 A
4559641 Caugant et al. Dec 1985 A
4662379 Macovski May 1987 A
4706269 Reina et al. Nov 1987 A
4721856 Saotome et al. Jan 1988 A
4744099 Huettenrauch et al. May 1988 A
4752948 MacMahon Jun 1988 A
4760589 Siczek Jul 1988 A
4763343 Yanaki Aug 1988 A
4773086 Fujita et al. Sep 1988 A
4773087 Plewes Sep 1988 A
4819258 Kleinman et al. Apr 1989 A
4821727 Levene et al. Apr 1989 A
4969174 Scheid et al. Nov 1990 A
4989227 Tirelli et al. Jan 1991 A
5018176 Romeas et al. May 1991 A
RE33634 Yanaki Jul 1991 E
5029193 Saffer Jul 1991 A
5051904 Griffith Sep 1991 A
5078142 Siczek et al. Jan 1992 A
5163075 Lubinsky et al. Nov 1992 A
5164976 Scheid et al. Nov 1992 A
5199056 Darrah Mar 1993 A
5212637 Saxena May 1993 A
5240011 Assa Aug 1993 A
5256370 Slattery et al. Oct 1993 A
5274690 Burke Dec 1993 A
5289520 Pellegrino et al. Feb 1994 A
5291539 Thumann et al. Mar 1994 A
5313510 Ebersberger May 1994 A
5359637 Webber Oct 1994 A
5365562 Toker Nov 1994 A
5415169 Siczek et al. May 1995 A
5426685 Pellegrino et al. Jun 1995 A
5452367 Bick et al. Sep 1995 A
5506877 Niklason et al. Apr 1996 A
5526394 Siczek et al. Jun 1996 A
5528658 Hell Jun 1996 A
5539797 Heidsieck et al. Jul 1996 A
5553111 Moore et al. Sep 1996 A
5592562 Rooks Jan 1997 A
5594769 Pellegrino et al. Jan 1997 A
5596200 Sharma et al. Jan 1997 A
5598454 Franetzki et al. Jan 1997 A
5609152 Pellegrino et al. Mar 1997 A
5627869 Andrew et al. May 1997 A
5657362 Giger et al. Aug 1997 A
5668844 Webber Sep 1997 A
5668889 Hara Sep 1997 A
5706327 Adamkowski et al. Jan 1998 A
5719952 Rooks Feb 1998 A
5735264 Siczek et al. Apr 1998 A
5769086 Ritchart et al. Jun 1998 A
5803912 Siczek et al. Sep 1998 A
5818898 Tsukamoto et al. Oct 1998 A
5828722 Ploetz et al. Oct 1998 A
5841829 Dolazza Nov 1998 A
5844965 Galkin Dec 1998 A
5864146 Karellas Jan 1999 A
5872828 Niklason et al. Feb 1999 A
5878104 Ploetz Mar 1999 A
5896437 Ploetz Apr 1999 A
5941832 Tumey et al. Aug 1999 A
5970118 Sokolov Oct 1999 A
5986662 Argiro et al. Nov 1999 A
5999836 Nelson et al. Dec 1999 A
6005907 Ploetz Dec 1999 A
6022325 Siczek et al. Feb 2000 A
6075879 Roehrig et al. Jun 2000 A
6091841 Rogers et al. Jul 2000 A
6137527 Abdel-Malek et al. Oct 2000 A
6141398 He et al. Oct 2000 A
6149301 Kautzer et al. Nov 2000 A
6167115 Inoue Dec 2000 A
6175117 Komardin et al. Jan 2001 B1
6196715 Nambu et al. Mar 2001 B1
6207958 Giakos Mar 2001 B1
6216540 Nelson et al. Apr 2001 B1
6219059 Argiro Apr 2001 B1
6233473 Shepherd et al. May 2001 B1
6243441 Zur Jun 2001 B1
6244507 Garland Jun 2001 B1
6256369 Lai Jul 2001 B1
6256370 Yavuz Jul 2001 B1
6272207 Tang Aug 2001 B1
6289235 Webber et al. Sep 2001 B1
6292530 Yavus et al. Sep 2001 B1
6327336 Gingold et al. Dec 2001 B1
6341156 Baetz et al. Jan 2002 B1
6345194 Nelson et al. Feb 2002 B1
6375352 Hewes et al. Apr 2002 B1
6411836 Patel et al. Jun 2002 B1
6415015 Nicolas et al. Jul 2002 B2
6418189 Schafer Jul 2002 B1
6442288 Haerer et al. Aug 2002 B1
6459925 Nields et al. Oct 2002 B1
6480565 Ning Nov 2002 B1
6490476 Townsend et al. Dec 2002 B1
6501819 Unger et al. Dec 2002 B2
6542575 Schubert Apr 2003 B1
6553096 Zhou et al. Apr 2003 B1
6556655 Chichereau et al. Apr 2003 B1
6574304 Hsieh et al. Jun 2003 B1
6574629 Cooke, Jr. et al. Jun 2003 B1
6597762 Ferrant et al. Jul 2003 B1
6611575 Alyassin et al. Aug 2003 B1
6620111 Stephens et al. Sep 2003 B2
6626849 Huitema et al. Sep 2003 B2
6633674 Gemperline et al. Oct 2003 B1
6638235 Miller et al. Oct 2003 B2
6647092 Eberhard et al. Nov 2003 B2
6744848 Stanton et al. Jun 2004 B2
6748044 Sabol et al. Jun 2004 B2
6751285 Eberhard et al. Jun 2004 B2
6758824 Miller et al. Jul 2004 B1
6813334 Koppe et al. Nov 2004 B2
6882700 Wang et al. Apr 2005 B2
6885724 Li et al. Apr 2005 B2
6895076 Halsmer May 2005 B2
6909790 Tumey et al. Jun 2005 B2
6909792 Carrott et al. Jun 2005 B1
6912319 Barnes et al. Jun 2005 B1
6940943 Claus et al. Sep 2005 B2
6950493 Besson Sep 2005 B2
6957099 Arnone et al. Oct 2005 B1
6970531 Eberhard et al. Nov 2005 B2
6978040 Berestov Dec 2005 B2
6987831 Ning Jan 2006 B2
6999554 Mertelmeier Feb 2006 B2
7001071 Deuringer Feb 2006 B2
7016461 Rotondo Mar 2006 B2
7110490 Eberhard et al. Sep 2006 B2
7110502 Tsuji Sep 2006 B2
7116749 Besson Oct 2006 B2
7123684 Jing et al. Oct 2006 B2
7127091 Op De Beek et al. Oct 2006 B2
7142633 Eberhard et al. Nov 2006 B2
7190758 Hagiwara Mar 2007 B2
7206462 Betke Apr 2007 B1
7244063 Eberhard Jul 2007 B2
7245694 Jing et al. Jul 2007 B2
7286645 Freudenberger Oct 2007 B2
7302031 Hjarn et al. Nov 2007 B2
7315607 Ramsauer Jan 2008 B2
7319735 Defreitas et al. Jan 2008 B2
7319736 Rotondo Jan 2008 B2
7323692 Rowlands et al. Jan 2008 B2
7331264 Ozawa Feb 2008 B2
7430272 Jing et al. Sep 2008 B2
7443949 Defreitas et al. Oct 2008 B2
7577282 Gkanatsios et al. Aug 2009 B2
7583786 Jing et al. Sep 2009 B2
7609806 Defreitas et al. Oct 2009 B2
7616731 Pack Nov 2009 B2
7616801 Gkanatsios et al. Nov 2009 B2
7630531 Chui Dec 2009 B2
7630533 Ruth et al. Dec 2009 B2
7688940 Defreitas et al. Mar 2010 B2
7697660 Ning Apr 2010 B2
7702142 Ren et al. Apr 2010 B2
7760853 Jing et al. Jul 2010 B2
7760924 Ruth et al. Jul 2010 B2
7792245 Hitzke et al. Sep 2010 B2
7831296 Defreitas et al. Nov 2010 B2
7839979 Hauttmann Nov 2010 B2
7869563 Defreitas et al. Jan 2011 B2
7881428 Jing et al. Feb 2011 B2
7885384 Mannar Feb 2011 B2
7894646 Shirahata et al. Feb 2011 B2
7916915 Gkanatsios et al. Mar 2011 B2
7949091 Jing et al. May 2011 B2
7986765 Defreitas et al. Jul 2011 B2
7991106 Ren et al. Aug 2011 B2
8031834 Ludwig Oct 2011 B2
8131049 Ruth et al. Mar 2012 B2
8155421 Ren et al. Apr 2012 B2
8170320 Smith et al. May 2012 B2
8175219 Defreitas et al. May 2012 B2
8285020 Gkanatsios et al. Oct 2012 B2
8416915 Jing et al. Apr 2013 B2
8452379 DeFreitas et al. May 2013 B2
8457282 Baorui et al. Jun 2013 B2
8515005 Ren et al. Aug 2013 B2
8559595 Defreitas et al. Oct 2013 B2
8565372 Stein et al. Oct 2013 B2
8565374 DeFreitas et al. Oct 2013 B2
8565860 Kimchy Oct 2013 B2
8571289 Ruth et al. Oct 2013 B2
8712127 Ren et al. Apr 2014 B2
8767911 Ren et al. Jul 2014 B2
8787522 Smith et al. Jul 2014 B2
8831171 Jing et al. Sep 2014 B2
8853635 O'Connor Oct 2014 B2
8873716 Ren et al. Oct 2014 B2
9042612 Gkanatsios et al. May 2015 B2
9066706 Defreitas et al. Jun 2015 B2
9226721 Ren et al. Jan 2016 B2
9460508 Gkanatsios et al. Oct 2016 B2
9498175 Stein et al. Nov 2016 B2
9502148 Ren Nov 2016 B2
9549709 DeFreitas et al. Jan 2017 B2
9851888 Gkanatsios et al. Dec 2017 B2
9895115 Ren Feb 2018 B2
10108329 Gkanatsios et al. Oct 2018 B2
10194875 DeFreitas et al. Feb 2019 B2
10296199 Gkanatsios May 2019 B2
10413255 Stein Sep 2019 B2
10719223 Gkanatsios Jul 2020 B2
20010038681 Stanton et al. Nov 2001 A1
20020012450 Tsujii Jan 2002 A1
20020048343 Launay et al. Apr 2002 A1
20020050986 Inoue et al. May 2002 A1
20020070970 Wood et al. Jun 2002 A1
20020075997 Unger et al. Jun 2002 A1
20020090055 Zur et al. Jul 2002 A1
20020094062 Dolazza Jul 2002 A1
20020122533 Marie et al. Sep 2002 A1
20020126798 Harris Sep 2002 A1
20030007598 Wang et al. Jan 2003 A1
20030010923 Zur Jan 2003 A1
20030018272 Treado et al. Jan 2003 A1
20030026386 Tang et al. Feb 2003 A1
20030058989 Rotondo Mar 2003 A1
20030072409 Kaufhold et al. Apr 2003 A1
20030072417 Kaufhold et al. Apr 2003 A1
20030073895 Nields et al. Apr 2003 A1
20030095624 Eberhard et al. May 2003 A1
20030097055 Yanof et al. May 2003 A1
20030149364 Kapur Aug 2003 A1
20030169847 Karellas et al. Sep 2003 A1
20030194050 Eberhard Oct 2003 A1
20030194051 Wang et al. Oct 2003 A1
20030194121 Eberhard et al. Oct 2003 A1
20030210254 Doan et al. Nov 2003 A1
20030212327 Wang et al. Nov 2003 A1
20030215120 Uppaluri et al. Nov 2003 A1
20040008809 Webber Jan 2004 A1
20040066882 Eberhard et al. Apr 2004 A1
20040066884 Hermann Claus et al. Apr 2004 A1
20040066904 Eberhard et al. Apr 2004 A1
20040070582 Smith et al. Apr 2004 A1
20040094167 Brady et al. May 2004 A1
20040101095 Jing et al. May 2004 A1
20040109529 Eberhard et al. Jun 2004 A1
20040146221 Siegel et al. Jul 2004 A1
20040171986 Tremaglio, Jr. et al. Sep 2004 A1
20040190682 Deuringer Sep 2004 A1
20040213378 Zhou et al. Oct 2004 A1
20040247081 Halsmer Dec 2004 A1
20040264627 Besson Dec 2004 A1
20040267157 Miller et al. Dec 2004 A1
20050025278 Hagiwara Feb 2005 A1
20050049521 Miller et al. Mar 2005 A1
20050063509 DeFreitas et al. Mar 2005 A1
20050078797 Danielsson et al. Apr 2005 A1
20050089205 Kapur Apr 2005 A1
20050105679 Wu et al. May 2005 A1
20050113681 DeFreitas et al. May 2005 A1
20050113715 Schwindt et al. May 2005 A1
20050117694 Francke Jun 2005 A1
20050129172 Mertelmeier Jun 2005 A1
20050133706 Eberhard Jun 2005 A1
20050135555 Claus et al. Jun 2005 A1
20050135664 Kaufhold et al. Jun 2005 A1
20050226375 Eberhard et al. Oct 2005 A1
20050248347 Damadian Nov 2005 A1
20060030784 Miller et al. Feb 2006 A1
20060034426 Freudenberger Feb 2006 A1
20060074288 Kelly et al. Apr 2006 A1
20060098855 Gkanatsios et al. May 2006 A1
20060109951 Popescu May 2006 A1
20060126780 Rotondo Jun 2006 A1
20060129062 Nicoson et al. Jun 2006 A1
20060155209 Miller et al. Jul 2006 A1
20060210016 Francke Sep 2006 A1
20060262898 Partain Nov 2006 A1
20060269041 Mertelmeier Nov 2006 A1
20060291618 Eberhard et al. Dec 2006 A1
20070030949 Jing et al. Feb 2007 A1
20070036265 Jing et al. Feb 2007 A1
20070076844 Defreitas et al. Apr 2007 A1
20070078335 Horn Apr 2007 A1
20070140419 Souchay Jun 2007 A1
20070223651 Wagenaar et al. Sep 2007 A1
20070225600 Weibrecht et al. Sep 2007 A1
20070242800 Jing et al. Oct 2007 A1
20080019581 Gkanatsios et al. Jan 2008 A1
20080045833 Defreitas et al. Feb 2008 A1
20080056436 Pack Mar 2008 A1
20080101537 Sendai May 2008 A1
20080112534 Defreitas et al. May 2008 A1
20080118023 Besson May 2008 A1
20080130979 Ren et al. Jun 2008 A1
20080212861 Durgan et al. Sep 2008 A1
20080285712 Kopans Nov 2008 A1
20080317196 Imai Dec 2008 A1
20090003519 Defreitas et al. Jan 2009 A1
20090010384 Jing et al. Jan 2009 A1
20090080594 Brooks et al. Mar 2009 A1
20090080602 Brooks et al. Mar 2009 A1
20090135997 Defreitas et al. May 2009 A1
20090141859 Gkanatsios et al. Jun 2009 A1
20090213987 Stein et al. Aug 2009 A1
20090237924 Ladewig Sep 2009 A1
20090238424 Arakita et al. Sep 2009 A1
20090268865 Ren et al. Oct 2009 A1
20090296882 Gkanatsios et al. Dec 2009 A1
20090304147 Jing et al. Dec 2009 A1
20100020937 Hautmann Jan 2010 A1
20100020938 Koch Jan 2010 A1
20100034450 Mertelmeier Feb 2010 A1
20100054400 Ren Mar 2010 A1
20100086188 Ruth et al. Apr 2010 A1
20100091940 Ludwig et al. Apr 2010 A1
20100150306 Defreitas et al. Jun 2010 A1
20100189227 Mannar Jul 2010 A1
20100195882 Ren Aug 2010 A1
20100226475 Smith Sep 2010 A1
20100290585 Eliasson Nov 2010 A1
20100303202 Ren Dec 2010 A1
20100313196 De Atley et al. Dec 2010 A1
20110026667 Poorter Feb 2011 A1
20110069809 Defreitas et al. Mar 2011 A1
20110178389 Kumar et al. Jul 2011 A1
20110188624 Ren Aug 2011 A1
20110234630 Batman et al. Sep 2011 A1
20110268246 Dafni Nov 2011 A1
20120033868 Ren Feb 2012 A1
20120051502 Ohta et al. Mar 2012 A1
20120236987 Ruimi Sep 2012 A1
20120238870 Smith et al. Sep 2012 A1
20130028374 Gkanatsios et al. Jan 2013 A1
20130211261 Wang Aug 2013 A1
20130272494 DeFreitas et al. Oct 2013 A1
20140044230 Stein et al. Feb 2014 A1
20140044231 Defreitas et al. Feb 2014 A1
20140086471 Ruth et al. Mar 2014 A1
20140098935 Defreitas et al. Apr 2014 A1
20140232752 Ren et al. Aug 2014 A1
20140314198 Ren et al. Oct 2014 A1
20140321607 Smith Oct 2014 A1
20140376690 Jing et al. Dec 2014 A1
20150049859 DeFreitas et al. Feb 2015 A1
20150160848 Gkanatsios et al. Jun 2015 A1
20150310611 Gkanatsios et al. Oct 2015 A1
20160106383 Ren et al. Apr 2016 A1
20160189376 Bernard Jun 2016 A1
20160209995 Jeon Jul 2016 A1
20160220207 Jouhikainen Aug 2016 A1
20160256125 Smith Sep 2016 A1
20160270742 Stein et al. Sep 2016 A9
20160302746 Erhard Oct 2016 A1
20160331339 Guo Nov 2016 A1
20170024113 Gkanatsios et al. Jan 2017 A1
20170032546 Westerhoff Feb 2017 A1
20170071562 Suzuki Mar 2017 A1
20170128028 DeFreitas et al. May 2017 A1
20170135650 Stein et al. May 2017 A1
20170316588 Homann Nov 2017 A1
20170319167 Goto Nov 2017 A1
20180130201 Bernard May 2018 A1
20180177476 Jing et al. Jun 2018 A1
20180188937 Gkanatsios et al. Jul 2018 A1
20180289347 DeFreitas et al. Oct 2018 A1
20180344276 DeFreitas et al. Dec 2018 A1
20190059830 Williams Feb 2019 A1
20190095087 Gkanatsios et al. Mar 2019 A1
20190200942 DeFreitas Jul 2019 A1
20190336794 Li Nov 2019 A1
20190388051 Morita Dec 2019 A1
20200029927 Wilson Jan 2020 A1
Foreign Referenced Citations (46)
Number Date Country
102222594 Oct 2011 CN
102004051401 May 2006 DE
102004051820 May 2006 DE
102010027871 Oct 2011 DE
0775467 May 1997 EP
0982001 Mar 2000 EP
1028451 Aug 2000 EP
1428473 Jun 2004 EP
1759637 Mar 2007 EP
1569556 Apr 2012 EP
2732764 May 2014 EP
2602743 Nov 2014 EP
2819145 Dec 2014 EP
3143935 Mar 2017 EP
53151381 Nov 1978 JP
2001-346786 Dec 2001 JP
2002219124 Aug 2002 JP
2006-231054 Sep 2006 JP
2007-50264 Mar 2007 JP
2007-521911 Aug 2007 JP
2007229269 Sep 2007 JP
2008-67933 Mar 2008 JP
2008086471 Apr 2008 JP
2009500048 Jan 2009 JP
2012-509714 Apr 2012 JP
2012-511988 May 2012 JP
2015-530706 Oct 2015 JP
WO 9005485 May 1990 WO
WO 9803115 Jan 1998 WO
WO 9816903 Apr 1998 WO
WO 0051484 Sep 2000 WO
WO 03020114 Mar 2003 WO
WO 03037046 May 2003 WO
WO 2003057564 Jul 2003 WO
WO 2004043535 May 2004 WO
WO 2005051197 Jun 2005 WO
WO 2005110230 Nov 2005 WO
WO 2005112767 Dec 2005 WO
WO 2006055830 May 2006 WO
WO 2006058160 Jun 2006 WO
WO 2007129244 Nov 2007 WO
WO 2008072144 Jun 2008 WO
WO 2009122328 Oct 2009 WO
WO 2009136349 Nov 2009 WO
WO 2010070554 Jun 2010 WO
WO 2013184213 Dec 2013 WO
Non-Patent Literature Citations (27)
Entry
Kachelriess, Marc et al., "Flying Focal Spot (FFS) in Cone-Beam CT", 2004 IEEE Nuclear Science Symposium Conference Record, Oct. 16-22, 2004, Rome, Italy, vol. 6, pp. 3759-3763.
Niklason et al., “Digital breast tomosynthesis: potentially a new method for breast cancer screening”, In Digital Mammography, 1998, 6 pages.
Thurfjell, “Mammography screening: one versus two views and independent double reading”, Acta Radiologica 35, No. 4, 1994, pp. 345-350.
"Essentials for life: Senographe Essential Full-Field Digital Mammography system", GE Healthcare Brochure, MM-0132-05.06-EN-US, 2006, 12 pgs.
"Filtered Back Projection," (Nygren) published May 8, 2007; URL: http://web.archive.org/web/19991010131715/http://www.owlnet.rice.edu/~elec539/Projects97/cult/node2.html, 2 pgs.
“Lorad Selenia” Document B-BI-SEO US/Intl (May 2006) copyright Hologic 2006, 12 pgs.
ACRIN website, located at https://www.acrin.org/PATIENTS/ABOUTIMAGINGEXAMSANDAGENTS/ABOUTMAMMOGRAPHYANDTOMOSYNTHESIS.aspx, “About Mammography and Tomosynthesis”, obtained online on Dec. 8, 2015, 5 pgs.
American College of Radiology website, located at http://www.acr.org/FAQs/DBT-FAQ, “Digital Breast Tomosynthesis FAQ for Insurers”, obtained online on Dec. 8, 2015, 2 pages.
Aslund, Magnus, “Digital Mammography with a Photon Counting Detector in a Scanned Multislit Geometry”, Doctoral Thesis, Dept of Physics, Royal Institute of Technology, Stockholm, Sweden, Apr. 2007, 51 pages.
Chan, Heang-Ping et al., “ROC study of the effect of stereoscopic imaging on assessment of breast lesions”, Medical Physics, vol. 32, No. 4, Apr. 2005, 7 pgs.
Cole, Elodia, et al., “The Effects of Gray Scale Image Processing on Digital Mammography Interpretation Performance”, Academic Radiology, vol. 12, No. 5, pp. 585-595, May 2005.
Digital Clinical Reports, Tomosynthesis, GE Brochure 98-5493, Nov. 1998, 8 pgs.
Dobbins, James T., “Digital x-ray tomosynthesis: current state of the art and clinical potential,” Physics in Medicine and Biology, Taylor and Francis Ltd, London GB, vol. 48, No. 19, Oct. 7, 2003, 42 pages.
Federica Pediconi et al., "Color-coded automated signal intensity-curve for detection and characterization of breast lesions: Preliminary evaluation of a new software for MR-based breast imaging", International Congress Series 1281 (2005) 1081-1086.
Grant, David G., “Tomosynthesis: a three-dimensional imaging technique”, IEEE Trans. Biomed. Engineering, vol. BME-19, #1, Jan. 1972, pp. 20-28.
Japanese Office Action mailed in Application 2016-087710, dated Mar. 1, 2017, 5 pages.
Japanese Office Action mailed in Application 2017-001579, dated Mar. 29, 2017, 1 page. (No English Translation.)
Kita et al., "Correspondence between different view breast X-rays using simulation of breast deformation", Proceedings 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Santa Barbara, CA, Jun. 23-25, 1998, pp. 700-707.
Mammographic Accreditation Phantom, http://www.cirsinc.com/pdfs/015cp.pdf. (2006), 2 pgs.
Niklason, Loren T. et al., “Digital Tomosynthesis in Breast Imaging”, Radiology, Nov. 1997, vol. 205, No. 2, pp. 399-406.
Pisano, Etta D., “Digital Mammography”, Radiology, vol. 234, No. 2, Feb. 2005, pp. 353-362.
Senographe 700 & 800T (GE); 2-page download on Jun. 22, 2006 from www.gehealthcare.com/inen/rad/whe/products/mswh800t.html.; Figures 1-7 on 4 sheets re lateral shift compression paddle, 2 pgs.
Smith, A., “Fundamentals of Breast Tomosynthesis”, White Paper, Hologic Inc., WP-00007, Jun. 2008, 8 pgs.
Smith, Andrew, PhD, “Full Field Breast Tomosynthesis”, Hologic White Paper, Oct. 2004, 6 pgs.
Wheeler F. W., et al. “Micro-Calcification Detection in Digital Tomosynthesis Mammography”, Proceedings of SPIE, Conf-Physics of Semiconductor Devices, Dec. 11, 2001 to Dec. 15, 2001, Delhi, SPIE, US, vol. 6144, Feb. 13, 2006, 12 pgs.
Wu, Tao, et al., "Tomographic Mammography Using a Limited Number of Low-Dose Cone-Beam Projection Images", Medical Physics, AIP, Melville, NY, vol. 30, No. 3, Mar. 1, 2003, pp. 365-380.
Japanese Notice of Rejection in Application 2018-554775, dated Feb. 22, 2021, 10 pages.
Related Publications (1)
Number Date Country
20200085393 A1 Mar 2020 US
Provisional Applications (1)
Number Date Country
62730818 Sep 2018 US