Systems and methods for correlating objects of interest

Information

  • Patent Grant
  • Patent Number
    12,236,597
  • Date Filed
    Wednesday, May 22, 2024
  • Date Issued
    Tuesday, February 25, 2025
Abstract
A method of correlating regions in an image pair including a cranial-caudal image and a medial-lateral-oblique image. Data from a similarity matching model is received by an ensemble model, the data including at least a matched pair of regions and a first confidence level indicator associated with the matched pair of regions. Data from a geo-matching model is received by the ensemble model, the data from the geo-matching model including at least the matched pair of regions and a second confidence level indicator. A joint probability of correlation is determined by the ensemble model based on evaluation of each of the first and second confidence level indicators by the ensemble model, wherein the joint probability of correlation provides a probability that the region in each image correlates to the corresponding region in the other image. The joint probability of correlation is provided to an output device.
Description
BACKGROUND

Medical imaging provides a non-invasive method to visualize the internal structure of a patient. Visualization methods can be used to screen for and diagnose cancer and other maladies in a patient. For example, early screening can detect lesions within a breast that might be cancerous so that treatment can take place at an early stage in the disease.


Mammography is a form of medical imaging that utilizes x-ray radiation to visualize breast tissue. These techniques are often used to screen patients for potentially cancerous lesions and other abnormalities. Traditional mammograms involve acquiring two-dimensional (2D) images of the breast from various angles. A craniocaudal (CC) image is one of the standard image types captured with x-ray radiation. The CC image is a visualization of the breast from above (e.g., a view from the top of the breast). Another standard image type is the mediolateral-oblique (MLO) image, which images the breast from the side at an angle. Generally, MLO images are captured at an angle of forty to sixty degrees. The review and consideration of information in the CC images and in the MLO images together can increase the diagnostic power of breast imaging. Tomosynthesis is another method of taking mammograms, where a plurality of images are acquired. Each image is acquired at a respective thickness, and the images are taken at a multitude of angles.


It is against this background that the present disclosure is made.


SUMMARY

In one aspect, the present disclosure relates to a method of correlating regions of interest (ROIs) in an image pair including a cranial-caudal (CC) image and a medial-lateral-oblique (MLO) image, the method including: receiving, by an ensemble matching machine learning (ML) model, data from a similarity matching ML model, the data from the similarity matching ML model including at least a matched pair of ROIs and a first confidence level indicator associated with the matched pair of ROIs; receiving, by the ensemble matching ML model, data from a geo-matching (GM) model, the data from the GM model including at least the matched pair of ROIs and a second confidence level indicator; determining, by the ensemble matching ML model, a joint probability of correlation based on evaluation of each of the first and second confidence level indicators by the ensemble matching ML model, wherein the joint probability of correlation provides a probability that the ROI in the CC image correlates to the corresponding ROI in the MLO image and vice versa; and providing the joint probability of correlation to an output device. In an example, the method further includes receiving data associated with training CC-MLO image pairs; and training the ensemble matching ML model with the data associated with the training CC-MLO image pairs.


In another example of the above aspect, the method further includes determining, by the ensemble matching ML model, a third confidence level indicator based on evaluation of each of the first and second confidence level indicators and the joint probability of correlation, wherein the third confidence level indicator is a likelihood of reliability associated with the joint probability of correlation. In another example, the joint probability is a probability that the similarity correlation and the GM correlation properly correlated the CC-ROI and the MLO-ROI. In still another example, providing the joint probability of correlation to an output device comprises providing a numerical value associated with the joint probability of correlation.


In another example, each image of the matched lesion pair is a whole breast image. In yet another example, each image of the matched lesion pair contains only the ROI for each of the CC image and the MLO image. In still another example, providing the joint probability of correlation to an output device comprises providing a numerical display. In another example of the above aspect, providing the joint probability of correlation to an output device includes: receiving a selection of the CC-ROI; and presenting, in response to receiving the selection of the CC-ROI, the MLO-ROI. In a further example, the method further includes determining, in response to receiving the selection of the CC-ROI, that the joint probability of correlation exceeds a predetermined threshold; and presenting, in response to determining that the joint probability of correlation exceeds the predetermined threshold, the MLO-ROI.


In another example of the above aspect, the matched pair of ROIs includes a similarity correlation between a CC-ROI in a CC image and an MLO-ROI in an MLO image, wherein the first confidence level indicator is a probability associated with the correlation between the CC-ROI and the MLO-ROI. In another example, the second confidence level indicator is a probability associated with a GM correlation between the CC-ROI and the MLO-ROI. In still another example, the method further includes displaying, on the output display, a pair of symbols, wherein each symbol of the pair of symbols marks a ROI of the matched pair of ROIs. In another example, the data from the GM model comprises location data for each ROI of the matched pair of ROIs. In a further example, the second confidence level indicator indicates a probability that a first location of a first ROI of the matched pair of ROIs and a second location of a second ROI of the matched pair of ROIs are a same location. In a still further example, each of the CC image and the MLO image depicts a breast and the GM model logically divides the breast into quadrants.


In another example of the above aspect, the data from the similarity model comprises characteristics data for each ROI of the matched pair of ROIs. In another example, the first confidence level indicator indicates a degree of similarity between a first set of characteristics associated with a first ROI of the matched pair of ROIs and a second set of characteristics associated with a second ROI of the matched pair of ROIs. In a further example, the characteristics data includes one or more of a size, a shape, one or more margins, a location, a density, one or more colors, an orientation, a texture, a pattern, and a depth.


In another aspect, the present disclosure relates to a system for ensemble matching a cranial-caudal (CC) and a medial-lateral-oblique (MLO) image including: at least one processor in communication with at least one memory; an ensemble matching module that executes on the at least one processor and during operation is configured to: receive, from a similarity matching model, a matched CC-MLO image pair and a similarity confidence level indicator associated with the matched CC-MLO image pair; receive, from a geometric matching (GM) model, the matched CC-MLO image pair and a GM confidence level indicator associated with the matched CC-MLO image pair; apply an ensemble matching model to determine an ensemble confidence level based on an ensemble machine learning (ML) algorithm trained on a plurality of matched CC-MLO pairs; associate the ensemble confidence level with the matched CC-MLO image pair; and output the ensemble confidence level with the matched CC-MLO image pair. In an example of this aspect, the system further includes an image acquisition module.


In another example of the above aspect, the matched CC-MLO image pair comprises a first region of interest (ROI) identified in the CC image and a second ROI identified in the MLO image, wherein either of the similarity ML model or the GM model assigns a correlation to the first ROI and the second ROI. In a further example, the ensemble matching module further receives, from the GM model, location data associated with the correlation between the first ROI and the second ROI. In another further example, the ensemble matching module further receives, from the similarity ML model, shape data associated with the correlation between the first ROI and the second ROI. In a still further example, the ensemble matching module further receives, from the similarity ML model, margin data associated with the correlation between the first ROI and the second ROI. In a yet further example, the similarity matching model determines the matched CC-MLO image pair by: identifying a CC region of interest (ROI) in a CC image received from an image acquisition module; searching an MLO image received from the image acquisition module for an MLO-ROI, wherein the MLO image includes a plurality of regions and the similarity matching model reviews each region of the plurality of regions; determining at least one similarity characteristic of each of the CC-ROI and the MLO-ROI; and correlating the CC-ROI and the MLO-ROI based on the at least one similarity characteristic.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates an example system for detecting a region of interest within a breast;



FIG. 2 illustrates an example x-ray imaging system;



FIG. 3 illustrates a perspective view of the x-ray imaging system of FIG. 2;



FIG. 4 illustrates the x-ray imaging system of FIG. 2 in a breast positioning state for left mediolateral oblique (LMLO) imaging orientation;



FIG. 5 illustrates a technique for geo-matching using a distance between the nipple and a lesion in a CC image;



FIG. 6 illustrates a quadrant technique that can be used by the geo-matching ML model;



FIGS. 7A-7C illustrate CC-MLO image pairs showing soft tissue lesions;



FIGS. 8A-8C illustrate CC-MLO image pairs showing calcification clusters;



FIG. 9 illustrates a process flow for processing a CC-MLO image pair;



FIG. 10 illustrates a flowchart of a method of correlating a first lesion in a CC image to a second lesion in an MLO image;



FIG. 11 illustrates a flowchart of another method of correlating a first lesion in a CC image to a second lesion in an MLO image;



FIG. 12 illustrates a flowchart of an ensemble method of correlating a first lesion in a CC image to a second lesion in an MLO image;



FIG. 13 illustrates a flowchart of a method of training an ensemble matching machine learning (ML) model; and



FIG. 14 illustrates a block diagram of example physical components of a computing system usable to implement one or more aspects of the present disclosure.





DETAILED DESCRIPTION

The present disclosure is directed to systems and methods for locating lesions within breast tissue using an imaging device. In particular, a computing system utilizes machine learning (ML) models or artificial intelligence (AI) models to navigate to a first lesion in an image of a first image type, to navigate to a second lesion in an image of a second image type, and to correlate the first lesion and the second lesion. In correlating the first lesion and the second lesion, the computing system determines that the first lesion and the second lesion are the same object within the tissue of a breast. In one embodiment, the first image type is a cranial-caudal (CC) image and the second image type is a mediolateral oblique (MLO) image. As discussed herein, an image type refers to a perspective view from which an image is taken in addition to other features, e.g., a modality with which an image is taken. The term “lesion” refers to any object of interest, such as a mass, one or more calcifications, and other suspicious areas.


In examples, systems and methods embodying the present disclosure identify a first region of interest (ROI) in an image of a first image type, identify a second ROI in an image of a second image type, and correlate the first ROI to the second ROI. An ROI may contain one or more lesions. The image of the first image type may include multiple lesions. Similarly, the image of the second image type may include multiple lesions. While examples discussed in greater detail herein are primarily directed to mammograms, it will be understood by those of skill in the art that the principles of the present disclosure are applicable to other forms of breast imaging, such as tomosynthesis, ultrasound, and magnetic resonance imaging, as well as to the use of medical imaging on other structures of the body.


Generally, a computing system operates to correlate a first lesion in an image of a first image type and a second lesion in an image of a second image type, and to provide a confidence level indicator. A correlation is a determination by the system that the first and second lesion each represent a same object within the breast. The confidence level indicator indicates a level of confidence in the correlation between the first lesion and the second lesion. The confidence level indicator represents a likelihood that the determination that the first and second lesion represent a same object within the breast was reliably determined. The computing system uses one or more ML models to analyze the image of the first image type and the image of the second image type to determine if the first lesion in the image of the first image type correlates with the second lesion in the image of the second image type. The confidence level indicator may also be generally understood as a confidence score.


In an example implementation, a confidence level indicator is presented on a display. The confidence level indicator is provided to a radiologist to aid the radiologist in determining whether the first lesion in the first image type is correlated with the second lesion of the second image type. When the system determines that the first lesion in the first image type is correlated with the second lesion in the second image type, this indicates that the first and second lesion are different views of the same object within the breast of a patient. In some examples, one or more of a similarity matching ML model, a geo-matching ML model, and an ensemble matching ML model is used.



FIG. 1 illustrates an example identification system 100 for locating a lesion within a breast using imaging of one or more types, and for correlating a first lesion in a first image type with a second lesion of a second image type. The identification system 100 includes a computing system 102 and an x-ray imaging system 104. In some examples, the identification system 100 operates to guide a radiologist to a lesion in a breast, based on data collected during an x-ray imaging procedure when the lesion was first identified. In some examples, the identification system 100 provides a confidence level indicator to the radiologist to aid the radiologist in confirming that a first lesion visible in an image of a first image type is correlated with a second lesion identified in an image of a second image type. In embodiments, the confidence level indicator is presented on a display.


The x-ray imaging system 104 operates to take images of breast tissue using x-ray radiation. The x-ray imaging system 104 includes an x-ray imaging device 124 and an x-ray computing device 112 in communication with the x-ray imaging device 124. The x-ray imaging device 124 is described in further detail in relation to FIGS. 2-4. The x-ray computing device 112 operates to receive inputs from a radiologist (R) to operate the x-ray imaging device 124 and to view images received from the x-ray imaging device 124.


A radiologist R operates the x-ray computing device 112 to capture x-ray images of the breast of a patient (P) using the x-ray imaging device 124. The x-ray images typically include CC and MLO images for each breast (LMLO, RMLO, RCC, and LCC), as well as others. The x-ray images may be taken as part of a routine health screening or as part of a diagnostic examination.


For each image type (CC, MLO, etc.), the images may be acquired as a plurality of projections (Tp) at different angles and thicknesses. The plurality of images may be processed or reconstructed to produce a plurality of reconstructed images or slices (Tr). The plurality of Tr images may be synthesized into a single synthesized image (Ms) showing the most relevant clinical information and ROI locations. Each of the pixels in the synthesized image Ms may be mapped to a Tr image or slice. Each of the CC image and the MLO image may be one or more tomosynthesis image slices or Tr images. In embodiments, one or more of the CC image and the MLO image may be a Tp projection image.
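
As a minimal sketch of the pixel mapping described above, assuming a maximum-intensity synthesis (the disclosure does not fix a particular synthesis method), the Ms-to-Tr relationship can be held in a per-pixel index map:

```python
import numpy as np

# Hypothetical illustration: synthesize Ms from a stack of Tr slices by
# taking the brightest value per pixel, and record the winning slice index
# so that each Ms pixel can be mapped back to its source Tr slice.
rng = np.random.default_rng(0)
tr_stack = rng.random((60, 256, 256))      # 60 Tr slices of 256x256 pixels

ms_image = tr_stack.max(axis=0)            # synthesized image Ms
ms_to_tr_index = tr_stack.argmax(axis=0)   # per-pixel map: Ms pixel -> Tr slice

# Retrieve the Tr slice behind a pixel of interest (e.g., an ROI centroid).
row, col = 128, 90
source_slice = tr_stack[ms_to_tr_index[row, col]]
print(f"Ms pixel ({row}, {col}) maps to Tr slice {ms_to_tr_index[row, col]}")
```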


In embodiments, each of Ms, Tr, and Tp images may be stored in the data store and can be retrieved by a radiologist for review. The images are then presented to a radiologist R who reidentifies one or more lesions in the patient P's breast that may require additional analysis to determine if the lesions are potentially cancerous and require a biopsy.


The computing system 102 operates to process and store information received from the x-ray imaging system 104. In the example of FIG. 1, the computing system 102 includes a matching engine 106 and a data store 108. In some examples, the matching engine 106 and data store 108 are housed within the memory of the computing system 102 (e.g., memory 1408 in FIG. 14). In some examples, the computing system 102 accesses the matching engine 106 and data store 108 from a remote computing system 110 such as a cloud computing environment. Though FIG. 1 shows the computing system 102 as standing alone from other components of the identification system 100, the computing system 102 may also be incorporated into the x-ray computing device 112 or another computing device utilized in patient care. In some examples, the computing system 102 includes two or more computing devices.


A radiologist R will typically review a plurality of CC tomosynthesis images and a plurality of MLO tomosynthesis images for each breast. Particularly, the radiologist will visually review the plurality of images to locate a region of interest in one of the CC images and try to find a similar region in one of the corresponding MLO images. However, this process is imperfect and subject to human error because no known technology exists that can find related artifacts across different types of images.


The matching engine 106 is programmed to analyze images of different types to determine if a first lesion in a first image of a first image type is a same object as a second lesion in a second image of a second image type. Matching engine 106 includes one or more machine learning (ML) models 116, 118, 120. Matching engine 106 represents logic or programming which operates on a processor in computing system 102, such as processing device 1402 in FIG. 14. In embodiments, matching engine 106 further includes a correlation evaluator 122.


In the example of FIG. 1, a training data store 114 is utilized to train ML models 116, 118, 120. The training data store 114 stores multiple training cases of correlated first and second lesions in images of the first and the second image types, where the first and second lesions have been correlated by a matching engine or by a human. In embodiments, each training case that is stored in training data store 114 includes at least a pair of images (also referred to simply as “image pairs”) of the first image type and second image type. Each image in a training image pair contains one or more of an ROI, a confirmed lesion, or a potential lesion. A first lesion in a first image in the training image pair is correlated to a second lesion in the second image of the training image pair. The first image is of a first image type and the second image is of a second image type. The stored training cases include training image pairs with correctly correlated lesions and, in embodiments, also include training image pairs with falsely correlated lesions.


Computer system 102 uses the stored training cases in training data store 114 to train ML models 116, 118, and 120 to identify features that can be used to correlate a first and second lesion in images of the first and the second image types. In one embodiment, each ML model 116, 118, 120 is a machine learning classifier. Once trained, ML models 116, 118, and 120 can be used by matching engine 106 to correlate lesions in images of different types.


Various machine learning techniques can be utilized to generate a machine learning classifier to be used as a lesion classifier. In some examples, the machine learning models are supervised machine learning models. In other examples, one or more of the machine learning models are unsupervised machine learning models. In some examples, the machine learning models are based on an artificial neural network. In some examples, the neural network is a deep neural network (DNN). In some examples, the machine learning models are deep convolutional neural networks (CNNs). In some examples, an ensemble or combination of two or more networks is utilized to generate one or more of the image classifiers. In some examples, two or more machine learning models are utilized to generate features or feature sets from the training image pairs in training data store 114.
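
For illustration, a pairwise classifier of the kind described above might pass an ROI patch from each image type through a shared convolutional encoder and emit a correlation probability. This is a hedged sketch with an invented topology; the layer sizes, patch dimensions, and the PairwiseLesionMatcher name are assumptions, not details from the disclosure:

```python
import torch
import torch.nn as nn

class PairwiseLesionMatcher(nn.Module):
    """Two-branch CNN classifier for CC/MLO ROI patch pairs (illustrative)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(   # shared convolutional feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(      # fully connected classification head
            nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1),
        )

    def forward(self, cc_roi, mlo_roi):
        feats = torch.cat([self.encoder(cc_roi), self.encoder(mlo_roi)], dim=1)
        return torch.sigmoid(self.head(feats))   # probability of correlation

model = PairwiseLesionMatcher()
cc = torch.randn(4, 1, 64, 64)    # batch of CC ROI patches
mlo = torch.randn(4, 1, 64, 64)   # batch of MLO ROI patches
print(model(cc, mlo).shape)       # torch.Size([4, 1])
```

Trained on labeled true and false matched pairs, such a network would play the role of one of ML models 116, 118, 120.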


The resulting trained machine learning classifiers, ML models 116, 118, 120, are applied by a correlation evaluator 122. Correlation evaluator 122 compares image pairs (e.g., CC-MLO image pairs) including an image of a first type (e.g., a CC image) and an image of a second type (e.g., an MLO image), and applies one or more of ML models 116, 118, 120 to the CC-MLO image pair to determine whether a first lesion in the CC image of the image pair correlates to a second lesion in the MLO image of the image pair. Correlation evaluator 122 determines a first and second lesion are correlated when the first lesion identified in the CC image is determined to be the same object as the second lesion identified in the MLO image based on ML models 116, 118, 120. Correlation evaluator 122 determines and outputs a confidence level indicator indicating a level of confidence in the correlation. In embodiments, correlation evaluator 122 is absent and the confidence level indicator is determined by matching engine 106 through the application of one or more of ML models 116, 118, 120 without correlation evaluator 122.


In one example, the confidence level indicator is a numerical value. In another example, the confidence level indicator may indicate a category of confidence such as “high,” “medium,” or “low.” In alternative examples, the confidence level indicator is provided as a percentage such as “99%,” “75%,” or “44%”. In examples, the confidence level indicator indicates a likelihood that a correlation has correctly determined that the first and second lesion are a same object.
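
To make the formats above concrete, the sketch below maps a numerical confidence value onto the categorical form; the threshold values are invented for illustration and are not taken from the disclosure:

```python
def confidence_category(probability: float) -> str:
    """Map a numerical correlation probability to a coarse category.

    Threshold values are illustrative assumptions only.
    """
    if probability >= 0.90:
        return "high"
    if probability >= 0.60:
        return "medium"
    return "low"

# The percentage examples from the text, rendered as categories.
for p in (0.99, 0.75, 0.44):
    print(f"{p:.0%} -> {confidence_category(p)}")
```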


In embodiments, the confidence level indicator includes two parts: a first confidence level indicator indicating a level of confidence in the correlation and a second confidence level indicator indicating a level of confidence in the calculation of the first confidence level indicator. For example, systems embodying the present disclosure determine that a first lesion in an image of a first image type is correlated with a second lesion in an image of a second image type. The system calculates the first confidence level indicator, which is a probability that the correlation between the first and second lesion is accurate. The system also determines a second confidence level indicator, which indicates a level of confidence that the first confidence level indicator was properly calculated. In this example, the first confidence level indicator is an assessment of the correlation between the first and second lesions, while the second confidence level indicator is an assessment of the calculation producing the first confidence level indicator.


A graphical user interface (GUI) presented on a display of the computing system 102 operates to present information to a radiologist or other clinician. In some examples, the GUI displays a confidence level indicator overlaid on one or more image pairs being reviewed by the radiologist. For example, the GUI presents a CC image and an MLO image displayed side by side with the confidence level indicator overlaying the image pair and indicating a level of confidence that a first lesion identified in the CC image is the same object as a second lesion identified in the MLO image. In another example, a radiologist indicates a region of interest in the CC image and, in response, a corresponding MLO image having a region of interest with a highest confidence level of correlation to the region of interest in the CC image is brought up on the display. In embodiments, the corresponding MLO image has an indicator or is zoomed to the correlated region of interest.


Additionally or alternatively, the information displayed may include data such as a basis or a reason for the value of the confidence level indicator, or logic indicating why the matching engine 106 determined a particular value for the confidence level indicator. For example, coordinating location data from the geo-matching model 118 or shared shape characteristics data from the similarity matching model 116 may be displayed. In embodiments, the confidence level indicator may be displayed on a synthesized or Ms image. Additionally, the GUI may display a letter or other symbol indicia, indicating that the first lesion in the CC image is correlated to the second lesion in the MLO image. For example, the first lesion in the CC image and the correlated second lesion in the MLO image are each labeled with an “A.” If multiple regions of interest appear in the image pair, multiple letters or labels may be used. In embodiments, the letter or other symbol indicia is displayed in addition to the confidence level indicator.


The data store 108 operates to store information received from the x-ray imaging system 104 and matching engine 106. In some examples, the data store 108 comprises two or more separate data stores. For example, one data store could be a remote data store that stores images from one or more x-ray imaging systems, such as x-ray imaging system 104. Another data store could be housed locally within the computing system 102. In some examples, the data store 108 may be part of an electronic medical record (EMR) system.



FIG. 2 illustrates an example of the x-ray imaging system shown in FIG. 1. FIG. 3 illustrates a perspective view of the x-ray imaging system of FIG. 1. Referring concurrently to FIGS. 2 and 3, the x-ray imaging system 104 immobilizes a patient's breast 202 for x-ray imaging via a breast compression immobilizer unit 204 that includes a static breast support platform 206 and a moveable compression paddle 208. The breast support platform 206 and the compression paddle 208 each have a compression surface 210 and 212, respectively, that move towards each other to compress and immobilize the breast 202. In some systems, the compression surfaces 210, 212 are exposed so as to directly contact the breast 202. The breast support platform 206 also houses an image receptor 216 and, optionally, a tilting mechanism 218, and optionally an anti-scatter grid (not shown). The breast compression immobilizer unit 204 is in a path of an imaging beam 220 emanating from x-ray source 222, such that the imaging beam 220 impinges on the image receptor 216.


The breast compression immobilizer unit 204 is supported on a first support arm 224 and the x-ray source 222 is supported on a second support arm 226. For mammography, first and second support arms 224, 226 can rotate as a unit about an axis 228 between different imaging orientations such as CC and MLO, so that the x-ray imaging system 104 can take a mammogram projection image (x-ray image) at each orientation. In operation, the image receptor 216 remains in place relative to the breast support platform 206 while an image is taken. The breast compression immobilizer unit 204 releases the breast 202 for movement of first and second support arms 224, 226 to a different imaging orientation. For tomosynthesis, the first support arm 224 stays in place, with the breast 202 immobilized and remaining in place, while at least the second support arm 226 rotates the x-ray source 222, relative to the breast compression immobilizer unit 204 and the compressed breast 202, about the axis 228. The x-ray imaging system 104 takes plural tomosynthesis projection images of the breast 202 at respective angles of the imaging beam 220 relative to the breast 202.


Concurrently and optionally, the image receptor 216 may be tilted relative to the breast support platform 206 and in sync with the rotation of the second support arm 226. The tilting can be through the same angle as the rotation of the x-ray source 222 but may also be through a different angle selected such that the imaging beam 220 remains substantially in the same position on the image receptor 216 for each of the plural images. The tilting can be about an axis 230, which can but need not be in the image plane of the image receptor 216. The tilting mechanism 218 that is coupled to the image receptor 216 can drive the image receptor 216 in a tilting motion.


When the x-ray imaging system 104 is operated, the image receptor 216 produces imaging information in response to illumination by the imaging beam 220 and supplies it to an image processor 232 for processing and generating breast x-ray images. A system control and workstation unit 238 including software controls the operation of the system and interacts with the operator to receive commands and deliver information including processed x-ray images.



FIG. 4 illustrates the x-ray imaging system shown in FIG. 1 in a breast positioning state for left MLO (LMLO) imaging orientation. A tube head 258 of the x-ray imaging system 104 is set in an orientation so as to be generally parallel to a gantry 256 of the x-ray imaging system 104, or otherwise not normal to the flat portion of a support arm 260 against which the breast is placed. In this position, the technologist may more easily position the breast without having to duck or crouch below the tube head 258.


The x-ray imaging system 104 includes a floor mount or base 254 for supporting the x-ray imaging system 104 on a floor. The gantry 256 extends upwards from the floor mount 254 and rotatably supports both the tube head 258 and a support arm 260. The tube head 258 and support arm 260 are configured to rotate discretely from each other and may also be raised and lowered along a face 262 of the gantry so as to accommodate patients of different heights. An x-ray source, described elsewhere herein and not shown here, is disposed within the tube head 258. The support arm 260 includes a support platform 264 that includes therein an x-ray receptor and other components (not shown). A compression arm 266 extends from the support arm 260 and is configured to raise and lower linearly (relative to the support arm 260) a compression paddle 268 for compression of a patient breast during imaging procedures. Together, the tube head 258 and support arm 260 may be referred to as a C-arm.


A number of interfaces and display screens are disposed on the x-ray imaging system 104. These include a foot display screen 270, a gantry interface 272, a support arm interface 274, and a compression arm interface 276. In general, the various interfaces 272, 274, and 276 may include one or more tactile buttons, knobs, switches, as well as one or more display screens, including capacitive touch screens with graphic user interfaces (GUIs) so as to enable user interaction with and control of the x-ray imaging system 104. In examples, the interfaces 272, 274, 276 may include control functionality that may also be available on a system control and workstation, such as the x-ray computing device 112 of FIG. 1. Any individual interface 272, 274, 276 may include functionality available on other interfaces 272, 274, 276, either continually or selectively, based at least in part on predetermined settings, user preferences, or operational requirements. In general, and as described below, the foot display screen 270 is primarily a display screen, though a capacitive touch screen might be utilized if required or desired.


In examples, the gantry interface 272 may enable functionality such as: selection of the imaging orientation, display of patient information, adjustment of the support arm elevation or support arm angles (tilt or rotation), safety features, etc. In examples, the support arm interface 274 may enable functionality such as adjustment of the support arm elevation or support arm angles (tilt or rotation), adjustment of the compression arm elevation, safety features, etc. In examples, the compression arm interface 276 may enable functionality such as adjustment of the compression arm elevation, safety features, etc. Further, one or more displays associated with the compression arm interface 276 may display more detailed information such as compression arm force applied, imaging orientation selected, patient information, support arm elevation or angle settings, etc. The foot display screen 270 may also display information such as displayed by the display(s) of the compression arm interface 276, or additional or different information, as required or desired for a particular application.


As described earlier, a CC-MLO image pair, taken, for example, by x-ray imaging system 104, is analyzed to determine a correlation between a first lesion in the CC image and a second lesion in the MLO image. The CC-MLO image pair is analyzed by a matching engine using one or more ML models. The machine learning models include one or more of a similarity matching ML model, a geo-matching ML model, and an ensemble ML matching model (e.g., ML models 116, 118, 120 in FIG. 1).


When applying the similarity matching ML model, the matching engine compares the first and second lesions to determine a confidence level indicator that represents how similar in appearance or location the first and second lesions are to each other. The similarity matching ML model provides criteria to determine the similarity of two or more lesions based on factors or features such as shape, margin, and proximity or relationship to anatomical landmarks or other notable features in the image. The similarity matching ML model may be a neural network and, in embodiments, is a feature-based network. The neural network may be a fully connected network. In examples, the similarity matching ML model is a deep learning neural network, such as a deep convolutional network.
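
One way to picture the comparison is as a distance between per-lesion feature vectors. The sketch below uses cosine similarity over hypothetical descriptor values as a stand-in for a learned similarity model; the feature names and numbers are assumptions, not values from the disclosure:

```python
import numpy as np

def similarity_confidence(cc_features: np.ndarray, mlo_features: np.ndarray) -> float:
    """Cosine similarity between two lesion feature vectors, rescaled to [0, 1]."""
    cos = np.dot(cc_features, mlo_features) / (
        np.linalg.norm(cc_features) * np.linalg.norm(mlo_features)
    )
    return float((cos + 1.0) / 2.0)

# Hypothetical descriptors (e.g., size, shape, margin, texture scores) for a
# first lesion in the CC image and a second lesion in the MLO image.
cc_vec = np.array([0.82, 0.40, 0.11, 0.67])
mlo_vec = np.array([0.79, 0.44, 0.09, 0.70])
print(f"similarity confidence: {similarity_confidence(cc_vec, mlo_vec):.2f}")
```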


When applying the geo-matching ML model, the matching engine maps the first and second lesions in each of a first image (e.g., a CC image) and a second image (e.g., an MLO image) based on one or more criteria to determine if the first lesion is a same object as the second lesion. In one embodiment, the criteria include a distance between a lesion and one or more identifiable physical or anatomical landmarks. The anatomical landmarks include, but are not limited to, the nipple, the chest wall, and the pectoral muscle. In embodiments, the criteria include a probability that a lesion is located in a particular quadrant of the image or the volume being imaged (e.g., a breast). The geo-matching ML model may be a neural network or other ML model. In embodiments, the geo-matching model is a rule-based AI.


When applying the ensemble matching model, the matching engine compares correlation data received from other models in the system, such as similarity matching model 116 and geo-matching model 118, and determines and outputs a joint probability of correlation. In examples, the ensemble matching model may be a feature-based neural network. The ensemble matching model may generate features related not only to the lesions in the different image types, but also features related to data received from other matching models in the system. For example, data or confidence level indicators produced by other ML models may be a feature in the ensemble matching model.



FIG. 5 illustrates a technique for geo-matching using a distance between the nipple and a first or second lesion in an image pair. In one embodiment, a first lesion 500 in a CC image of the right breast (RCC) is identified. The geo-matching ML model is applied to find the location of a second lesion 502 in an MLO image of the right breast (RMLO). The geo-matching ML model is used to calculate the distance (d) from the first lesion 500 to the nipple 504. An arc is superimposed on the RMLO image having a radius that is equal to the distance (d). Once the second lesion 502 is identified in the RMLO image, the geo-matching ML model is applied to determine a confidence level indicator (e.g., a probability) that the second lesion 502 in the RMLO image is the same object as the first lesion 500 in the RCC image. In other embodiments, different or additional anatomical landmarks may be used in geo-matching the target lesions.
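
The distance-and-arc technique of FIG. 5 lends itself to a simple numeric sketch. In the code below, the coordinates, tolerance, and linear falloff are all invented for illustration; the disclosure describes the arc construction but not a specific scoring function:

```python
import math

def nipple_distance(lesion_xy, nipple_xy):
    """Euclidean distance from a lesion to the nipple in image coordinates."""
    return math.dist(lesion_xy, nipple_xy)

def arc_match_confidence(d_cc: float, d_mlo: float, tolerance: float = 10.0) -> float:
    """Score how closely an MLO candidate lies on the arc of radius d_cc.

    Linear falloff with an assumed tolerance (same units as the distances).
    """
    return max(0.0, 1.0 - abs(d_cc - d_mlo) / tolerance)

# Hypothetical coordinates: first lesion 500 in the RCC image and candidate
# second lesion 502 in the RMLO image, with the nipple at the origin.
d_cc = nipple_distance((41.0, 28.0), (0.0, 0.0))
d_mlo = nipple_distance((36.0, 33.0), (0.0, 0.0))
print(f"d_cc={d_cc:.1f}, d_mlo={d_mlo:.1f}, "
      f"confidence={arc_match_confidence(d_cc, d_mlo):.2f}")
```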



FIG. 6 illustrates a quadrant technique that can be used when applying the geo-matching ML model. The quadrant technique is used for quadrant-based matching of lesions. A breast 600 is viewed from the front of the breast. A horizontal line 602 and a vertical line 604 are superimposed on the breast 600 and intersect at the nipple 606. The intersecting horizontal and vertical lines 602, 604 logically divide the breast 600 into four quadrants (quadrants 0, 1, 2, and 3). In the RCC image, the vertical line 604 corresponds to the posterior nipple line (PNL). The PNL is drawn posteriorly and perpendicularly from the nipple towards the pectoral muscle (or the posterior image edge) in the CC images. In the RCC image, the quadrants 0 and 3 are combined into one half of the breast 612 and the quadrants 1 and 2 are combined into the other half of the breast 614.


As shown in the graphic representation 608 of the breast, a target lesion 610 extends into the quadrants 0 and 3. If the vertical line 604 represents an x axis and the horizontal line 602 represents a y axis of a cartesian coordinate system, the lesion 610 has a z max 616 in quadrant 0 and a z min 618 in quadrant 3. A geo-matching ML model is applied to estimate which quadrant a first lesion in a CC image is located in, to estimate which quadrant a second lesion in an MLO image is located in, and to compare the two estimated quadrants to determine if the estimated quadrants match. The geo-matching module can be used to determine a confidence level indicator (e.g., a probability) that the first and second lesions in the CC and MLO images are the same lesion. In some embodiments, a probability map for the quadrants 0, 1, 2, 3 is determined and used with the geo-matching ML model to correlate a first lesion in a CC image with a second lesion in an MLO image.
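
A bare-bones version of the quadrant comparison is sketched below. The quadrant labeling convention and the all-or-nothing agreement score are assumptions for illustration; in the disclosure, the quadrant estimate may instead be a probability map:

```python
def quadrant(lesion_xy, nipple_xy=(0.0, 0.0)) -> int:
    """Assign a lesion to quadrant 0-3 relative to lines crossing at the nipple.

    The numbering convention here is invented for this sketch.
    """
    dx = lesion_xy[0] - nipple_xy[0]   # along the vertical line 604 (PNL)
    dy = lesion_xy[1] - nipple_xy[1]   # along the horizontal line 602
    if dy >= 0:
        return 0 if dx >= 0 else 1
    return 3 if dx >= 0 else 2

def quadrant_match_confidence(cc_lesion, mlo_lesion) -> float:
    """Crude confidence: 1.0 when the estimated quadrants agree, else 0.0."""
    return 1.0 if quadrant(cc_lesion) == quadrant(mlo_lesion) else 0.0

print(quadrant((12.0, 5.0)))                                  # -> 0
print(quadrant_match_confidence((12.0, 5.0), (10.0, 7.0)))    # -> 1.0
```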


In some instances, due at least in part to factors such as the density of the breast tissue, the positioning of the breast, the compression of the breast, or any motion of the breast tissue during the compression, the similarity matching technique or the geo-matching technique may be more suitable for correlating first and second lesions in the first and second image types.



FIGS. 7A-7C illustrate CC-MLO image pairs showing soft tissue lesions. In the CC-MLO image pairs 700, 702, 704, the CC image is the top image, and the MLO image is the bottom image. In the illustrated image pairs, either or both of the geo-matching and the similarity matching techniques may be effective at correlating the first soft tissue lesion in the CC image with the second soft tissue lesion in the MLO image in each CC-MLO image pair 700, 702, 704. However, in an example, due to factors such as those listed earlier (e.g., motion of the breast tissue during compression), the locations of some soft tissue lesions may shift between a first image type and a second image type, which can make it more challenging for the geo-matching technique to correlate the first and second soft tissue lesions across the first and second image types.


For example, due to the motion of the breast tissue, a first soft tissue lesion in a CC image may be estimated to be in one quadrant while a second soft tissue lesion in an MLO image can be estimated to be in a different quadrant, despite the first and second lesion in fact representing a same object in the breast. Since the soft tissue lesions in the respective CC-MLO image pairs remain similar in one or more of the shape, the margins, the orientation, the density, the size, or the depth of the soft tissue lesion within the breast, despite factors such as movement during compression, the similarity matching technique can be more effective than the geo-matching technique in correlating the first and second soft tissue lesions in the CC-MLO image pairs in this example scenario.


In contrast, a geo-matching technique can be more suitable for correlating first and second lesions like those seen in the example CC-MLO image pairs 800, 802, 804 shown in FIGS. 8A-8C, where each CC-MLO image pair 800, 802, 804 depicts a cluster of calcifications. Each cluster of calcifications includes multiple calcifications that can vary in shape, margins, orientation, density, or size within the breast and across images of different types. Since the CC image and the MLO image are captured at different angles, it can be challenging to correlate the cluster of calcifications in the CC image with the cluster of calcifications in the MLO image using the similarity matching technique. As shown in FIGS. 8A-8C, the clusters can look very different when projected into different planes due to the motion among the calcium elements that form the cluster. It may be challenging for the similarity matching model to identify that the cluster in the MLO image is the same object as the cluster in the CC image. Thus, the geo-matching technique can be more effective in correlating the first and second lesion in the CC-MLO image pair with each other in this example scenario where each lesion is a cluster of calcifications.


Based at least on the different strengths and effectiveness between the similarity matching technique and the geo-matching technique, embodiments disclosed herein use an ensemble matching technique to correlate first and second lesions in CC and MLO image pairs (e.g., the ensemble matching ML model 120 in FIG. 1). The ensemble matching technique operates on all types of lesions and improves the effectiveness of the correlation process. In some instances, embodiments that employ the ensemble matching technique can improve the correlation process by more than ninety percent.



FIG. 9 illustrates a process flow 906 for processing a CC-MLO image pair. In one embodiment, a CC-MLO image pair 900, 902 is received by matching engine 106 and geo-matching ML model 118 is applied to the CC-MLO image pair 900, 902. Geo-matching ML model 118 previously received geo-matching model training 908. Geo-matching model training 908 comprises training data for training the geo-matching ML model 118 and may be stored in a training data store, such as training data store 114 of FIG. 1. In embodiments, geo-matching model training 908 is a set of CC-MLO image pairs with correlated first and second lesions. Geo-matching model training 908 may include both true matched lesions, which demonstrate accurate correlations, and false matched lesions, which demonstrate inaccurate correlations. In embodiments, such as embodiments where geo-matching ML model 118 is instead a rules-based AI, geo-matching model training 908 is a set of geo-matching rules.


Using geo-matching ML model 118, matching engine 106 determines whether one or more lesions are present in CC-MLO image pair 900, 902 and outputs results data 912, including a confidence level indicator, that a first lesion present in CC image 900 is correlated to a second lesion present in MLO image 902. Geo-matching ML model 118 is used to determine the results data 912 based on features or rules learned from geo-matching model training 908.


The similarity matching ML model 116 is applied to CC-MLO image pair 900, 902 by matching engine 106. Similarity matching ML model 116 previously received similarity matching model training 910. Similarity matching model training 910 comprises training data for training the similarity matching ML model 116 and may be stored in a training data store, such as training data store 114 of FIG. 1. In embodiments, similarity matching model training 910 is a set of CC-MLO image pairs with correlated lesions. Similarity matching model training 910 may include both true matched lesions, which demonstrate accurate correlations, and false matched lesions, which demonstrate inaccurate correlations. In embodiments, similarity matching model training 910 is a same data set as geo-matching model training 908. In embodiments, similarity matching model training 910 is a different data set than geo-matching model training 908.


Similarity matching ML model 116 is used to determine whether one or more lesions are present in CC image 900 and MLO image 902 and outputs results data 914, including a confidence level indicator, that a first lesion in CC image 900 is correlated to a second lesion in MLO image 902. Similarity matching ML model 116 is used to determine the results data 914 based on similarity matching model training 910. In examples, matching engine 106 executes similarity matching ML model 116 on similarity matching model training 910, for example to generate a feature set, and generates the results data 914 by applying the feature set to the CC-MLO image pair 900, 902.


Matching engine 106 applies ensemble ML model 120 to results data 912 from the geo-matching ML model 118 (e.g., a geo-matching confidence level indicator) and results data 914 from the similarity matching ML model 116 (e.g., a similarity confidence level indicator). In embodiments, ensemble matching ML model 120 is also employed to analyze CC-MLO image pair 900, 902 directly.


Ensemble matching ML model 120 may be applied to CC-MLO image pair 900, 902 as included in the output data from one or both of geo-matching ML model 118 and similarity matching ML model 116. For example, results data 912 from geo-matching ML model 118 may include the CC-MLO image pair annotated to show quadrants or other location data used to calculate the geo-matching confidence level indicator. Results data 914 from similarity matching ML model 116 may include the CC-MLO image pair annotated to show shape or margin data for the first and second lesions. Ensemble matching ML model 120 may also be applied to CC-MLO image pair 900, 902 independently of either geo-matching ML model 118 or similarity matching ML model 116.


Results data 912, 914 generally includes a geo-matching confidence level from geo-matching ML model 118 and a similarity confidence level from similarity matching ML model 116. In embodiments, results data 912, 914 comprise additional data from one or both of geo-matching ML model 118 and similarity matching ML model 116. For example, geo-matching results data 912 from geo-matching ML model 118 may also include a distance between a lesion and an anatomical landmark for each of CC image 900 and MLO image 902, or may include quadrant estimations for each of a first lesion in CC image 900 and a second lesion in MLO image 902. Similarity results data 914 from similarity matching ML model 116 may include similarity data, such as shape, texture, orientation, or margin.


Matching engine 106 uses ensemble ML model 120 to evaluate the CC and MLO images 900, 902 with the results data 912, 914 from the geo-matching and similarity matching ML models 118, 116 and produces a joint probability of correlation 904 that represents the probability that the first and second lesions in the CC and MLO image pair are the same object within the breast. Joint probability of correlation 904 may be a single value representing a probability that the correlation of a first and second lesion between the CC image 900 and the MLO image 902 is a reliable correlation. In examples, joint probability of correlation 904 includes at least two elements: a correlation probability 918 that represents the likelihood that the correlation is reliable, and a correlation confidence level 920 which represents a likelihood that the probability calculation itself is reliable.
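
A compact way to realize this joint output is a stacked ensemble: a meta-model trained on the confidence levels emitted by the base models. The sketch below is one hedged interpretation; the logistic-regression meta-model, the fabricated training scores, and the agreement-based confidence stand-in are assumptions rather than details from the disclosure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fabricated training data: [similarity confidence, geo confidence] for
# known pairs, labeled 1 for correctly correlated and 0 for falsely matched.
base_scores = np.array([
    [0.95, 0.90], [0.88, 0.40], [0.35, 0.92], [0.20, 0.15],
    [0.91, 0.85], [0.30, 0.25], [0.85, 0.80], [0.45, 0.35],
])
labels = np.array([1, 1, 1, 0, 1, 0, 1, 0])

meta_model = LogisticRegression().fit(base_scores, labels)

# Joint correlation probability (cf. correlation probability 918) for a new
# CC-MLO matched pair.
new_pair = np.array([[0.82, 0.47]])
correlation_probability = meta_model.predict_proba(new_pair)[0, 1]

# Stand-in for the correlation confidence level 920: strong disagreement
# between the base models lowers confidence in the probability calculation.
correlation_confidence = 1.0 - abs(new_pair[0, 0] - new_pair[0, 1])

print(f"correlation probability: {correlation_probability:.2f}")
print(f"correlation confidence:  {correlation_confidence:.2f}")
```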


Ensemble ML model 120, prior to receiving results data 912, 914, is trained using ensemble model training 916. Ensemble model training 916 comprises training data for training the ensemble matching ML model 120 and may be stored in a training data store, such as training data store 114 of FIG. 1. In embodiments, ensemble model training 916 is a set of CC-MLO image pairs with correlated first and second lesions. Ensemble model training 916 may include both true matched lesions and false matched lesions. In embodiments, ensemble model training 916 is a same data set as geo-matching model training 908 and similarity matching model training 910. In embodiments, ensemble model training 916 is a different data set from one or both of geo-matching model training 908 and similarity matching model training 910.


Regardless of whether ensemble model training 916 is a same or different data set as compared to training data received by other models in the system, ensemble ML model 120 generates a unique feature set. Ensemble ML model 120 develops a feature set that includes features related to images, lesions, and other ROIs, and also includes features related to results data 912, 914, such as scoring, confidence level, location data, shape data, margin data, etc.


Ensemble ML model 120 is used by matching engine 106 to determine how reliable the correlations made using geo-matching ML model 118 and similarity matching ML model 116 are, and presents this reliability determination as a joint probability of correlation 904. Joint probability of correlation 904 may be a single output, such as a numerical probability or color-coded confidence level indicator, or may have two or more parts. The example process flow 906 presents the joint probability of correlation as a correlation probability 918 and an associated correlation confidence level 920.


To consider an example case, a particular first and second lesion may be of a kind for which geo-matching is more effective, e.g., a cluster of calcifications. The ensemble matching ML model may be trained to identify when the target lesion is a calcification and to place a greater weight on data received from the geo-matching ML model in such a case. However, in this example scenario, the cluster of calcifications may be located on a boundary between two or more quadrants. The ensemble matching ML model may also be trained to reduce confidence in the geo-matching ML model in this situation where a lesion crosses over a quadrant boundary, as this prevents the lesion from being accurately estimated as being in one quadrant or the other. In this scenario, both the geo-matching ML model and similarity matching ML model may determine a high likelihood of correlation, which would result in a high correlation probability. However, because confidence in the similarity matching ML model is reduced due to the type of lesion and confidence in the geo-matching ML model is reduced due to the location of the lesion on a quadrant boundary, the high correlation probability is accompanied by a low correlation confidence level.
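
The scenario above can be mimicked with explicit down-weighting rules. The weights, penalties, and min-based confidence below are invented for this sketch; as the text notes, an ensemble ML model would learn such interactions from training data rather than apply fixed rules:

```python
def adjust_confidences(lesion_type: str, on_quadrant_boundary: bool,
                       sim_conf: float, geo_conf: float):
    """Illustrative weighting for the calcification-on-a-boundary scenario."""
    sim_weight, geo_weight = 1.0, 1.0
    if lesion_type == "calcification_cluster":
        sim_weight *= 0.5       # similarity matching is less reliable here
    if on_quadrant_boundary:
        geo_weight *= 0.5       # the quadrant estimate is ambiguous here
    probability = (sim_weight * sim_conf + geo_weight * geo_conf) / (
        sim_weight + geo_weight
    )
    confidence = min(sim_weight, geo_weight)   # distrust of either lowers it
    return probability, confidence

# Both base models report high correlation, yet confidence comes out low.
print(adjust_confidences("calcification_cluster", True, 0.92, 0.94))
```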


Ensemble ML model 120 is trained to determine the joint probability of correlation 904 based on ensemble model training 916. In examples, ensemble ML model 120 executes a ML algorithm on ensemble model training 916, for example to generate a feature set, and the joint probability of correlation 904 is determined when matching engine 106 uses ensemble ML model 120 to apply the feature set to the CC-MLO image pair 900, 902 and the results data 912, 914. In embodiments, process flow 906 may further include a correlation evaluator, such as correlation evaluator 122 of FIG. 1. The correlation evaluator, as a component of matching engine 106, may receive data from one or more of geo-matching ML model 118, similarity matching ML model 116, and ensemble ML model 120, and the correlation evaluator may generate and output the joint probability of correlation 904.



FIG. 10 illustrates a flowchart of a method 1012 of correlating a first lesion in a CC image to a second lesion in an MLO image. The method is performed by a matching engine, such as matching engine 106 in FIGS. 1 and 9, using a geo-matching ML model, such as the geo-matching ML model 118 in FIGS. 1 and 9. Initially, as shown in block 1000, an image pair that consists of a first image of a first image type and a second image of a second image type is received. In one embodiment, the first and second image types are a CC image and an MLO image. The geo-matching ML model is used to estimate a first distance between at least one anatomical landmark and a first lesion in the CC image and a second distance between the at least one anatomical landmark and a second lesion in the MLO image (block 1002). The geo-matching ML model is also used to estimate in which quadrant of the breast the first and second lesions in the CC image and in the MLO image are located (block 1004).


The data from the distance determination and the quadrant determination are combined. In embodiments, the geo-matching ML model is used to perform both a distance estimation to an anatomical landmark and a quadrant assignment, while in other embodiments, the geo-matching ML model may be used to perform only one or the other method of evaluation, or another location- or geometry-based evaluation of a lesion's placement within a breast or other imaged volume.


In embodiments, each image of an image pair is individually evaluated as a whole using the geo-matching model. For example, each of a CC image and an MLO image in an image pair may be fully evaluated using the geo-matching ML model to identify lesions and estimate the locations of the lesions before the geo-matching ML model is used to determine whether a correlation exists between a first lesion identified in the CC image and a second lesion identified in the MLO image. One or more first lesions may be identified in the CC image and one or more second lesions may be identified in the MLO image. One or more correlations may be determined to exist between the first and second lesions in the CC image and the MLO image. For example, two first lesions may be identified in the CC image, first lesion (a) and first lesion (b), and two second lesions in the MLO image, second lesion (a) and second lesion (b). A first correlation may be determined to exist between first lesion (a) and second lesion (a), and a second correlation may be determined to exist between first lesion (b) and second lesion (b).
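
When several lesions appear in each image, the pairing step can be framed as a one-to-one assignment over a score matrix. The sketch below applies an optimal assignment to hypothetical geo-matching scores; the disclosure does not mandate this particular strategy, and the numbers are invented:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows: first lesions (a), (b) in the CC image; columns: second lesions
# (a), (b) in the MLO image; entries: hypothetical correlation scores.
scores = np.array([
    [0.91, 0.22],
    [0.18, 0.87],
])
rows, cols = linear_sum_assignment(scores, maximize=True)  # maximize total score
for r, c in zip(rows, cols):
    print(f"CC lesion ({chr(97 + r)}) <-> MLO lesion ({chr(97 + c)}): "
          f"score {scores[r, c]:.2f}")
```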


The geo-matching ML model is used to compute a confidence level indicator at block 1006. As described earlier, the confidence level indicator represents a probability that the first lesion in an image of the first image type (e.g., the CC image) is a same object as the second lesion in an image of the second image type (e.g., the MLO image). In some instances, the radiologist is interested in understanding the determination of the confidence level based on the geo-matching model alone. The confidence level indicator may then be presented to a radiologist at block 1008. In one embodiment, the confidence level indicator is displayed to the radiologist on a display device. In examples, a letter or other symbol indicia may be displayed to mark each of the first lesion in the CC image and the second lesion in the MLO image. If multiple regions of interest appear in the image pair, multiple letters or labels may be used. In embodiments, the letter or other symbol indicia is displayed without the confidence level indicator.


Next, as shown in optional block 1010, additional information is presented to the radiologist. The additional information may include, but is not limited to, an explanation as to why the matching engine derived the confidence level indicator presented or the reasoning why this CC-MLO pair has a higher (or lower) confidence level indicator. For example, measurement or other distance data and identification of the anatomical landmark used are displayed. An anatomical quadrant map may be displayed with the first or second lesion's orientation in a particular quadrant of each image type overlaid on the anatomical quadrant map. Sources of uncertainty may be identified on the display, such as a lesion lying on a quadrant boundary or an inconsistency between the first lesion's relationship to a particular anatomical landmark and the second lesion's relationship to the particular anatomical landmark.
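One hedged way to surface the uncertainty sources named above, reusing the `location_signature` records from the earlier sketch; the tolerance value is a hypothetical placeholder, not a value from the disclosure.

```python
def uncertainty_notes(cc_sig: dict, mlo_sig: dict,
                      distance_tolerance: float = 5.0) -> list[str]:
    """Collect displayable sources of uncertainty for a candidate lesion pair."""
    notes = []
    if cc_sig["quadrant"] != mlo_sig["quadrant"]:
        notes.append("Quadrant assignments disagree between the CC and MLO views.")
    if abs(cc_sig["distance"] - mlo_sig["distance"]) > distance_tolerance:
        notes.append("Distances to the anatomical landmark differ between views.")
    return notes
```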



FIG. 11 illustrates a flowchart of a method 1112 of correlating a first lesion in a CC image to a second lesion in an MLO image. The method is performed by a matching engine, such as matching engine 106 in FIGS. 1 and 9, using a similarity matching ML model, such as the similarity matching ML model 116 in FIGS. 1 and 9. The illustrated method 1112 shares some similar steps with the method 1012 shown in FIG. 10.


The CC and MLO images are received by the matching engine at block 1000. One or more characteristics of a first lesion in the CC image and a second lesion in the MLO image are determined at block 1100 using the similarity matching ML model. The characteristics assigned to a lesion include, but are not limited to, size, shape, margins, location, density, color, orientation, texture, pattern, or depth within the breast.
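As a rough illustration of characteristic-based matching, the sketch below encodes a few of the listed characteristics as a normalized vector and scores similarity with cosine similarity; the characteristic keys, normalization, and scoring rule are assumptions, not the disclosed similarity matching ML model.

```python
import numpy as np

# Hypothetical stand-ins for a subset of the characteristics listed above,
# each assumed to be pre-normalized to a comparable numeric scale.
CHARACTERISTIC_KEYS = ("size", "shape", "margins", "density", "texture")

def characteristics_vector(lesion: dict) -> np.ndarray:
    """Encode a lesion's characteristics as a fixed-order feature vector."""
    return np.array([lesion.get(k, 0.0) for k in CHARACTERISTIC_KEYS], dtype=float)

def similarity_confidence(cc_lesion: dict, mlo_lesion: dict) -> float:
    """Cosine similarity between characteristic vectors, mapped into [0, 1]."""
    a = characteristics_vector(cc_lesion)
    b = characteristics_vector(mlo_lesion)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0
    return float(((a @ b) / denom) + 1.0) / 2.0
```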


In embodiments, each image of an image pair is individually evaluated as a whole. For example, each of a CC image and an MLO image in a pair is fully evaluated using the similarity matching ML model to identify lesions or potential lesions and fully characterize the identified lesions before the similarity matching ML model is used to determine whether a correlation exists between a first lesion in the CC image and a second lesion in the MLO image. Like the geo-matching ML model, the similarity matching ML model may be used to identify and correlate one or more lesions in the image pair.


The similarity matching ML model is used to compute a confidence level indicator at block 1006. As described earlier, the confidence level indicator represents a probability that the first lesion in an image of the first image type (e.g., the CC image) is a same lesion as the second lesion in an image of the second image type (e.g., the MLO image). In some instances, the radiologist is interested in understanding the determination of the confidence level based on the similarity matching ML model alone. The confidence level indicator may then be presented to a radiologist at block 1008. In one embodiment, the confidence level indicator is displayed to the radiologist on a display device. In examples, a letter or other symbol indicia pair may be displayed to mark each of the first lesion in the CC image and the second lesion in the MLO image. If multiple regions of interest appear in the image pair, multiple letters or labels may be used. In embodiments, the letter or other symbol indicia is displayed without the confidence level indicator.


Next, as shown in optional block 1010, additional information is presented to the radiologist. The additional information may include, but is not limited to, an explanation as to why the matching engine derived the confidence level indicator presented or the reasoning why a particular CC-MLO pair has a higher (or lower) confidence level indicator. For example, shape, texture, orientation, or other characteristic data is displayed. One or more overlays may be presented over each of the first and second lesions, indicating, for example, shape or margin boundaries and indicating similarities or differences between the characteristics of the first and second lesions.



FIG. 12 illustrates a flowchart of a method 1212 of correlating a first lesion in a CC image to a second lesion in an MLO image. The method 1212 is performed by a matching engine, such as matching engine 106 in FIGS. 1 and 9, using an ensemble matching ML model, such as the ensemble matching ML model 120 in FIGS. 1 and 9. In embodiments, some or all of the method 1212 may be performed by a correlation evaluator, such as correlation evaluator 122 in FIG. 1. The illustrated method 1212 shares some steps with the methods 1012, 1112 shown in FIGS. 10 and 11.


The CC and MLO images are received at block 1000. At block 1200, the confidence level indicator produced using the geo-matching ML model is received. In some cases, the distances between each of the first and second lesions and one or more anatomical landmarks for each of the CC and the MLO images, or an estimated quadrant for each of the first and second lesions in the CC and the MLO images, or other additional data are also received. At block 1202, the confidence level indicator produced using the similarity matching ML model is received. In embodiments, one or more characteristics of the first and second lesions in the CC and the MLO images, or other additional data are also received.


The ensemble matching ML model is used to analyze the CC and the MLO images, the confidence level indicators provided from each of the geo-matching ML model and the similarity matching ML model, and, in some embodiments, any additional data received from one or both of the geo-matching and similarity matching ML models. Based on the analysis using the ensemble matching ML model, the matching engine generates a joint probability of correlation at 1204.


Part of the analysis may include emphasizing the data received from one type of model over another or associating a greater or lesser weight with data received from a particular model. For example, based on the type of lesion that the first and second lesions are identified to be (e.g., soft tissue lesions or clusters of calcifications), the ensemble matching ML model may determine, based on its training or previous analyses, that the data from either the similarity matching ML model or the geo-matching ML model is more accurate in predicting the probability that the target lesions correlate to each other.


Additionally or alternatively, if the confidence level indicator produced by one model is relatively high while the confidence level indicator received from the other model is relatively low, the ensemble matching ML model may rely on the data received from the model that provided the relatively higher confidence level indicator (or vice versa). For example, for dense breasts the similarity matching ML model may result in a relatively low confidence level indicator due to obscuration of a lesion by the density of the surrounding tissue, while the geo-matching ML model is able to compensate for the low confidence level indicator from the similarity matching ML model by providing a relatively high confidence level indicator indicating that the lesions in the CC and MLO images match. Such adjustments may be indicated to the radiologist with a joint confidence level of correlation associated with the joint probability of correlation. In embodiments, such adjustments may be opaque to the radiologist but may be reflected in a relative weight assigned to one or both of the geo-matching ML and similarity matching ML models' contributions to the joint probability.
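A minimal sketch of such re-weighting, assuming two hypothetical lesion-type weight presets and a hand-picked disagreement rule; a trained ensemble matching ML model would learn this behavior rather than apply fixed rules.

```python
def joint_probability(sim_conf: float, geo_conf: float,
                      lesion_type: str = "soft_tissue") -> float:
    """Blend the two confidence level indicators into a joint probability.

    The per-lesion-type weights and the disagreement rule below are
    illustrative placeholders for behavior the ensemble model would learn.
    """
    weights = {"soft_tissue": (0.6, 0.4), "calcification": (0.4, 0.6)}
    w_sim, w_geo = weights.get(lesion_type, (0.5, 0.5))
    # When the two models strongly disagree, lean on the more confident one.
    if abs(sim_conf - geo_conf) > 0.4:
        w_sim, w_geo = (0.8, 0.2) if sim_conf > geo_conf else (0.2, 0.8)
    return w_sim * sim_conf + w_geo * geo_conf
```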


The joint probability of correlation may then be presented to a radiologist at block 1206. In one embodiment, the joint probability of correlation is displayed to the radiologist on a display device. The joint probability of correlation may also be used to determine other outputs to a display device. For example, a threshold may be associated with the joint probability of correlation such that if a user selects a lesion or ROI in a CC image and a joint probability of correlation exceeding a predetermined threshold is associated with that lesion or ROI, the system may automatically display an MLO image with the correlated lesion or ROI.
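A sketch of this threshold-gated display behavior under assumed data structures; the `Correlation` record, the threshold value, and the ROI identifiers are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

PROBABILITY_THRESHOLD = 0.8  # hypothetical display threshold

@dataclass
class Correlation:
    mlo_roi: str              # identifier of the correlated MLO ROI
    joint_probability: float  # joint probability of correlation
    label: str                # symbol marking both ROIs (e.g., "A")

def roi_to_autodisplay(cc_roi: str, correlations: dict) -> Optional[Correlation]:
    """Return the correlated MLO ROI to auto-display for a selected CC ROI,
    or None if the joint probability does not exceed the threshold."""
    match = correlations.get(cc_roi)
    if match is not None and match.joint_probability > PROBABILITY_THRESHOLD:
        return match  # caller displays the MLO image with this ROI marked
    return None
```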


In examples, a letter or other symbol indicia may be displayed to mark each of the first lesion in the CC image and the second lesion in the MLO image. If multiple regions of interest appear in the image pair, multiple letters or labels may be used. In embodiments, the letter or other symbol indicia is displayed without the confidence level indicator.


Next, as shown in optional block 1010, additional information is presented to the radiologist. The additional information may include, but is not limited to, any of the additional information discussed above in relation to methods 1012, 1112.



FIG. 13 illustrates a flowchart of a method 1312 of training an ensemble matching ML model, such as ensemble matching ML model 120 of FIGS. 1 and 9. In one embodiment, the ensemble matching ML model is trained using a supervised training process. Initially, radiologists or a matching engine review multiple CC-MLO image pairs and label or otherwise identify first and second lesions in the CC and MLO images that correlate to each other (block 1300). The labels can identify information for each first and second lesion pair, such as a type of lesion or whether or not the lesion is cancerous.


The labeled CC-MLO image pairs are input to the ensemble matching ML model at block 1302. Training data for the similarity matching ML model and for the geo-matching ML model are input to the ensemble matching ML model at block 1304. The ensemble matching ML model is trained using the human-labeled CC-MLO image pairs and the training data for the similarity and geo-matching ML models at block 1306. In embodiments, the ensemble matching ML model may generate a feature set based on the human-labeled CC-MLO image pairs and the training data for the similarity and geo-matching ML models. The trained ensemble matching ML model is then used, at block 1308, in a lesion identification and correlation system, such as the system 100 shown in FIG. 1.
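For illustration, a toy supervised training pass consistent with blocks 1302-1306, using logistic regression as a stand-in for the ensemble matching ML model (the disclosure does not name a model family). The feature rows and labels below are fabricated placeholders for the two models' confidence indicators and the radiologists' labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder feature rows: one candidate CC-MLO lesion pair per row, with
# the similarity and geo-matching confidence indicators as the two features.
X = np.array([[0.91, 0.88],
              [0.35, 0.80],
              [0.75, 0.20],
              [0.10, 0.15]])
# Placeholder radiologist labels: 1 = same lesion, 0 = not the same lesion.
y = np.array([1, 1, 0, 0])

# Logistic regression stands in here for the ensemble matching ML model.
ensemble = LogisticRegression().fit(X, y)

# The fitted model maps a new pair's two confidence indicators to a joint
# probability of correlation.
print(ensemble.predict_proba([[0.60, 0.70]])[:, 1])
```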


In some embodiments, the similarity matching and the geo-matching ML models are trained using known training techniques. Additionally, the human-labeled CC-MLO image pairs are also used to train the similarity matching ML model and the geo-matching ML model. Each model can be trained independently of the other models. Alternatively, the geo-matching, similarity matching, and ensemble matching ML models may be trained in one step or training process.


Other embodiments can arrange the blocks in the flowcharts shown in FIGS. 10-13 in an order that is different from the illustrated order.



FIG. 14 is a block diagram illustrating example physical components of a computing device. The computing device 1400 can be any computing device utilized in conjunction with the identification system 100, such as the computing system 102, the x-ray computing device 112, and the computing system 110.


In the example shown in FIG. 14, the computing device 1400 includes at least one processing device (collectively processing device 1402), at least one memory (collectively memory 1408), and at least one bus (collectively bus 1422) that couples the memory 1408 to the processing device 1402. Example processing devices 1402 include, but are not limited to, a central processing unit, a microprocessor, an application specific integrated circuit, a digital signal processor, and a graphics processor. Example memory 1408 includes a random-access memory (“RAM”) 1410 and a read-only memory (“ROM”) 1412. A basic input/output system that contains the basic routines that help to transfer information between elements within the computing device 1400, such as during startup, is stored in the ROM 1412.


The computing device 1400 further includes one or more storage devices (collectively storage device 1414). The storage device 1414 is able to store software instructions and data. For example, the storage device 1414 stores the GUI shown in FIG. 1.


The storage device 1414 is connected to the processing device 1402 through a storage controller (not shown) connected to the bus 1422. The storage device 1414 and its associated computer-readable storage media provide non-volatile, non-transitory data storage for the computing device 1400. Although the description of computer-readable storage media contained herein refers to a storage device, such as a hard disk or solid-state disk, it should be appreciated by those skilled in the art that computer-readable data storage media can include any available tangible, physical device or article of manufacture from which the processing device 1402 can read data and/or instructions. In certain examples, the computer-readable storage media includes entirely non-transitory media.


Computer-readable storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable software instructions, data structures, program modules or other data. Example types of computer-readable data storage media include, but are not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROMs, digital versatile discs (“DVDs”), other optical storage media, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computing device 1400.


According to some examples, the computing device 1400 can operate in a networked environment using logical connections to remote network devices through a network 1452, such as a wireless network, the Internet, or another type of network. The computing device 1400 may connect to the network 1452 through a network interface unit 1404 connected to the bus 1422. It should be appreciated that the network interface unit 1404 may also be utilized to connect to other types of networks and remote computing systems.


The computing device 1400 also includes an input/output controller 1406 for receiving and processing input from a number of other devices, including a touch user interface display screen, or another type of input device. Similarly, the input/output controller 1406 may provide output to a touch user interface display screen or other type of output device.


As mentioned briefly above, the storage device 1414 and the RAM 1410 of the computing device 1400 can store software instructions and data. The software instructions include an operating system 1418 suitable for controlling the operation of the computing device 1400. The storage device 1414 and/or the RAM 1410 also store software instructions that, when executed by the processing device 1402, cause the computing device 1400 to provide the functionality discussed herein.


EXAMPLES

Illustrative examples of the systems and methods described herein are provided below. An embodiment of the system or method described herein may include any one or more, and any combination of, the clauses described below.


Clause 1. A method of correlating regions of interest (ROIs) in an image pair comprising a cranial-caudal (CC) image and a medial-lateral-oblique (MLO) image, the method comprising: receiving, by an ensemble matching machine learning (ML) model, data from a similarity matching ML model, the data from the similarity matching ML model including at least a matched pair of ROIs and a first confidence level indicator associated with the matched pair of ROIs; receiving, by the ensemble matching ML model, data from a geo-matching (GM) model, the data from the GM model including at least the matched pair of ROIs and a second confidence level indicator; determining, by the ensemble matching ML model, a joint probability of correlation based on evaluation of each of the first and second confidence level by the ensemble matching ML model, wherein the joint probability of correlation provides a probability that the ROI in the CC image correlates to the corresponding ROI in the MLO image and vice versa; and providing the joint probability of correlation to an output device.


Clause 2. The method of clause 1, further comprising: receiving data associated with training CC-MLO image pairs; and training the ensemble matching ML model with the data associated with the training CC-MLO image pairs.


Clause 3. The method of clause 1 or 2, further comprising determining, by the ensemble matching ML model, a third confidence level indicator based on evaluation of each of the first and second confidence level and the joint probability of correlation, wherein the third confidence level indicator is a likelihood of reliability associated with the joint probability of correlation.


Clause 4. The method of any of clauses 1-3, wherein the joint probability is a probability that the similarity correlation and the GM correlation properly correlated the CC-ROI and the MLO-ROI.


Clause 5. The method of any of clauses 1-4, wherein providing the joint probability of correlation to an output device comprises providing a numerical value associated with the joint probability of correlation.


Clause 6. The method of any of clauses 1-5, wherein each image of the matched pair of ROIs is a whole breast image.


Clause 7. The method of any of clauses 1-6, wherein each image of the matched pair of ROIs contains only the ROI for each of the CC image and the MLO image.


Clause 8. The method of any of clauses 1-7, wherein providing the joint probability of correlation to an output device comprises providing a numerical display.


Clause 9. The method of any of clauses 1-8, wherein providing the joint probability of correlation to an output device comprises: receiving a selection of the CC-ROI; and presenting, in response to receiving the selection of the CC-ROI, the MLO-ROI.


Clause 10. The method of clause 9, further comprising: determining, in response to receiving the selection of the CC-ROI, that the joint probability of correlation exceeds a predetermined threshold; and presenting, in response to determining that the joint probability of correlation exceeds the predetermined threshold, the MLO-ROI.


Clause 11. The method of any of clauses 1-10, wherein the matched pair of ROIs includes a similarity correlation between a CC-ROI in a CC-image and an MLO-ROI in an MLO image, wherein the first confidence level indicator is a probability associated with the correlation between the CC-ROI and the MLO-ROI.


Clause 12. The method of any of clauses 1-11, wherein the second confidence level indicator is a probability associated with a GM correlation between the CC-ROI and the MLO-ROI.


Clause 13. The method of any of clauses 1-12, further comprising: displaying, on the output device, a pair of symbols, wherein each symbol of the pair of symbols marks a ROI of the matched pair of ROIs.


Clause 14. The method of any of clauses 1-13, wherein the data from the GM model comprises location data for each ROI of the matched pair of ROIs.


Clause 15. The method of clause 14, wherein the second confidence level indicator indicates a probability that a first location of a first ROI of the matched pair of ROIs and a second location of a second ROI of the matched pair of ROIs are a same location.


Clause 16. The method of clause 14 or 15, wherein each of the CC image and the MLO image depicts a breast and the GM model logically divides the breast into quadrants.


Clause 17. The method of any of clauses 1-16, wherein the data from the similarity matching model comprises characteristics data for each ROI of the matched pair of ROIs.


Clause 18. The method of clause 17, wherein the first confidence level indicator indicates a degree of similarity between a first set of characteristics associated with a first ROI of the matched pair of ROIs and a second set of characteristics associated with a second ROI of the matched pair of ROIs.


Clause 19. The method of clause 17 or 18, wherein the characteristics data includes one or more of a size, a shape, one or more margins, a location, a density, one or more colors, an orientation, a texture, a pattern, and a depth.


Clause 20. A system for ensemble matching a cranial-caudal (CC) image and a medial-lateral-oblique (MLO) image comprising: at least one processor in communication with at least one memory; an ensemble matching module that executes on the at least one processor and during operation is configured to: receive, from a similarity matching model, a matched CC-MLO image pair and a similarity confidence level indicator associated with the matched CC-MLO image pair; receive, from a geometric matching (GM) model, the matched CC-MLO image pair and a GM confidence level indicator associated with the matched CC-MLO image pair; apply an ensemble matching model to determine an ensemble confidence level based on an ensemble machine learning (ML) algorithm trained on a plurality of matched CC-MLO pairs; associate the ensemble confidence level with the matched CC-MLO image pair; and output the ensemble confidence level with the matched CC-MLO image pair.


Clause 21. The system of clause 20, further comprising an image acquisition module.


Clause 22. The system of clause 20 or 21, wherein the matched CC-MLO image pair comprises a first region of interest (ROI) identified in the CC image and a second ROI identified in the MLO image, wherein either of the similarity matching model or the GM model assigns a correlation to the first ROI and the second ROI.


Clause 23. The system of clause 22, wherein the ensemble matching module further receives, from the GM model, location data associated with the correlation between the first ROI and the second ROI.


Clause 24. The system of clause 22 or 23, wherein the ensemble matching module further receives, from the similarity matching model, shape data associated with the correlation between the first ROI and the second ROI.


Clause 25. The system of any of clauses 22-24, wherein the ensemble matching module further receives, from the similarity matching model, margin data associated with the correlation between the first ROI and the second ROI.


Clause 26. The system of any of clauses 22-25, wherein the similarity matching model determines the matched CC-MLO image pair by: identifying a CC region of interest (ROI) in a CC-image received from an image acquisition module; searching an MLO image received from the image acquisition module for an MLO-ROI, wherein the MLO image includes a plurality of regions and the similarity matching model reviews each region of the plurality of regions; determining at least one similarity characteristic of each of the CC-ROI and the MLO-ROI; and correlating the CC-ROI and the MLO-ROI based on the at least one similarity characteristic.


Clause 27. A method for ensemble matching a cranial-caudal (CC) image and a medial-lateral-oblique (MLO) image comprising: training an ensemble matching model using training CC-MLO image pairs, wherein the training CC-MLO image pairs include pairs of CC-MLO images with regions of interest (ROIs) correlated with high confidence and pairs of CC-MLO images with falsely correlated ROIs; determining, by the ensemble matching model, an ensemble confidence level for a correlation between ROIs in one or more paired CC-MLO images by: receiving a matched CC-MLO pair from a similarity matching model, the matched CC-MLO pair including a similarity confidence score; receiving the matched CC-MLO pair from a geo-matching model, the matched CC-MLO pair including a location confidence score; analyzing the matched CC-MLO pair along with the similarity confidence score and the location confidence score based on the training CC-MLO image pairs; calculating the ensemble confidence level; and generating an output presentation of the ensemble confidence level.


This disclosure describes some examples of the present technology with reference to the accompanying drawings, in which only some of the possible examples are shown. Other aspects can, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein. Rather, these examples are provided so that this disclosure is thorough and complete and fully conveys the scope of the possible examples to those skilled in the art.


Although various embodiments and examples are described herein, those of ordinary skill in the art will understand that many modifications may be made thereto within the scope of the present disclosure. Therefore, the specific structure, acts, or media are disclosed only as illustrative examples. Examples according to the technology may also combine elements or components of those that are disclosed in general but not expressly exemplified in combination, unless otherwise stated herein. Accordingly, it is not intended that the scope of the disclosure in any way be limited by the examples provided.

Claims
  • 1. A method of correlating regions of interest (ROIs) in an image pair comprising a cranial-caudal (CC) image and a medial-lateral-oblique (MLO) image, the method comprising: receiving, by an ensemble matching machine learning (ML) model trained using a first training set of matched CC-MLO image pairs, data from a first analysis of the image pair by a similarity matching ML model trained using a second training set of matched CC-MLO image pairs, the data from the similarity matching ML model including at least a matched pair of ROIs and a first confidence level indicator associated with the matched pair of ROIs; receiving, by the ensemble matching ML model in parallel with receiving the data from the similarity matching ML model, data from a second analysis of the image pair by a geo-matching (GM) model trained using a third training set of matched CC-MLO image pairs, the data from the GM model including at least the matched pair of ROIs and a second confidence level indicator; determining, by the ensemble matching ML model, a joint probability of correlation based on evaluation of each of the first and second confidence level by the ensemble matching ML model, wherein the joint probability of correlation provides a probability that the ROI in the CC image (CC-ROI) correlates to the corresponding ROI in the MLO image (MLO-ROI) and vice versa; and providing the joint probability of correlation to an output device.
  • 2. The method of claim 1, further comprising: receiving data associated with training CC-MLO image pairs; and training the ensemble matching ML model with the data associated with the training CC-MLO image pairs.
  • 3. The method of claim 1, further comprising determining, by the ensemble matching ML model, a third confidence level indicator based on evaluation of each of the first and second confidence level and the joint probability of correlation, wherein the third confidence level indicator is a likelihood of reliability associated with the joint probability of correlation.
  • 4. The method of claim 1, wherein the joint probability of correlation is a probability that the matched pair of ROIs in the data from the similarity matching ML model and the matched pair of ROIs in the data from the GM model properly correlated the CC-ROI and the MLO-ROI.
  • 5. The method of claim 1, wherein providing the joint probability of correlation to an output device comprises a numerical value associated with the joint probability of correlation.
  • 6. The method of claim 1, wherein each image of the matched pair of ROIs is a whole breast image.
  • 7. The method of claim 1, wherein each image of the matched pair of ROIs contains only the ROI for each of the CC image and the MLO image.
  • 8. The method of claim 1, wherein providing the joint probability of correlation to an output device comprises a numerical display.
  • 9. The method of claim 1, wherein providing the joint probability of correlation to an output device comprises: receiving a selection of the CC-ROI; and presenting, in response to receiving the selection of the CC-ROI, the MLO-ROI.
  • 10. The method of claim 1, wherein the matched pair of ROIs includes a similarity correlation between a CC-ROI in a CC-image and a MLO-ROI in a MLO image, wherein the first confidence level indicator is a probability associated with the similarity correlation between the CC-ROI and the MLO-ROI.
  • 11. The method of claim 1, wherein the second confidence level indicator is a probability associated with a GM correlation between the CC-ROI and the MLO-ROI.
  • 12. The method of claim 1, further comprising: displaying, on the output device, a pair of symbols, wherein each symbol of the pair of symbols marks a ROI of the matched pair of ROIs.
  • 13. The method of claim 1, wherein the data from the GM model comprises location data for each ROI of the matched pair of ROIs.
  • 14. The method of claim 1, wherein the data from the similarity matching ML model comprises characteristics data for each ROI of the matched pair of ROIs.
  • 15. The method of claim 1, wherein the second and third training set of matched CC-MLO image pairs are a same training set of CC-MLO image pairs.
  • 16. The method of claim 15, wherein each of the first, second, and third training set of matched CC-MLO image pairs are a same training set of CC-MLO image pairs.
  • 17. The method of claim 1, wherein each of the first, second, and third training set of matched CC-MLO image pairs are a unique training set of CC-MLO image pairs.
  • 18. A system for ensemble matching a cranial-caudal (CC) image and a medial-lateral-oblique (MLO) image comprising: at least one processor in communication with at least one memory; an ensemble matching module that executes on the at least one processor and during operation is configured to: receive, from a similarity matching model trained using a first training set of matched CC-MLO image pairs, a matched CC-MLO image pair and a similarity confidence level indicator associated with the matched CC-MLO image pair; receive, from a geometric matching (GM) model trained using a second training set of matched CC-MLO image pairs, the matched CC-MLO image pair and a GM confidence level indicator associated with the matched CC-MLO image pair in parallel with the matched CC-MLO image pair received from the similarity matching model and the similarity confidence level indicator; determine an ensemble confidence level based on each of the similarity confidence level indicator and the GM confidence level indicator, using an ensemble machine learning (ML) algorithm trained on a plurality of matched CC-MLO pairs; associate the ensemble confidence level with the matched CC-MLO image pair; and output the ensemble confidence level with the matched CC-MLO image pair.
  • 19. The system of claim 18, wherein the matched CC-MLO image pair comprises a first region of interest (ROI) identified in the CC image and a second ROI identified in the MLO image, wherein either of the similarity matching model or the GM model assigns a correlation to the first ROI and the second ROI.
  • 20. The system of claim 19, wherein the ensemble matching module further receives, from the GM model, location data associated with the correlation between the first ROI and the second ROI.
  • 21. The system of claim 19, wherein the ensemble matching module further receives, from the similarity matching model, shape data associated with the correlation between the first ROI and the second ROI.
  • 22. The system of claim 19, wherein the ensemble matching module further receives, from the similarity matching model, margin data associated with the correlation between the first ROI and the second ROI.
  • 23. The system of claim 19, wherein the similarity matching model determines the matched CC-MLO image pair by: identifying a CC region of interest (ROI) in a CC-image received from an image acquisition module; searching an MLO image received from the image acquisition module for a MLO-ROI, wherein the MLO image includes a plurality of regions and the similarity matching ML model reviews each region of the plurality of regions; determining at least one similarity characteristic of each of the CC-ROI and the MLO-ROI; and correlating the CC-ROI and the MLO-ROI based on the at least one similarity characteristic.
CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation of PCT International Patent Application No. PCT/US2022/080432, filed on Nov. 23, 2022, which claims the benefit of U.S. Provisional Application No. 63/283,866, filed Nov. 29, 2021, the entire disclosures of which are incorporated herein by reference in their entireties. To the extent appropriate, a claim of priority is made to each of the above disclosed applications.

US Referenced Citations (551)
Number Name Date Kind
3502878 Stewart Mar 1970 A
3863073 Wagner Jan 1975 A
3971950 Evans et al. Jul 1976 A
4160906 Daniels Jul 1979 A
4310766 Finkenzeller et al. Jan 1982 A
4496557 Malen et al. Jan 1985 A
4559557 Keyes Dec 1985 A
4559641 Caugant et al. Dec 1985 A
4706269 Reina et al. Nov 1987 A
4727565 Ericson Feb 1988 A
4744099 Huettenrauch May 1988 A
4773086 Fujita Sep 1988 A
4773087 Plewes Sep 1988 A
4819258 Kleinman et al. Apr 1989 A
4821727 Levene et al. Apr 1989 A
4907156 Doi et al. Jun 1990 A
4969174 Schied Nov 1990 A
4989227 Tirelli et al. Jan 1991 A
5018176 Romeas et al. May 1991 A
RE33634 Yanaki Jul 1991 E
5029193 Saffer Jul 1991 A
5051904 Griffith Sep 1991 A
5078142 Siczek et al. Jan 1992 A
5099846 Hardy Mar 1992 A
5129911 Siczek et al. Jul 1992 A
5133020 Giger et al. Jul 1992 A
5163075 Lubinsky Nov 1992 A
5164976 Scheid et al. Nov 1992 A
5199056 Darrah Mar 1993 A
5219351 Teubner Jun 1993 A
5240011 Assa Aug 1993 A
5279309 Taylor et al. Jan 1994 A
5280427 Magnusson Jan 1994 A
5289520 Pellegrino et al. Feb 1994 A
5343390 Doi et al. Aug 1994 A
5359637 Webbe Oct 1994 A
5365562 Toker Nov 1994 A
5386447 Siczek Jan 1995 A
5415169 Siczek et al. May 1995 A
5426685 Pellegrino et al. Jun 1995 A
5452367 Bick Sep 1995 A
5491627 Zhang et al. Feb 1996 A
5499097 Ortyn et al. Mar 1996 A
5506877 Niklason et al. Apr 1996 A
5526394 Siczek Jun 1996 A
5539797 Heidsieck et al. Jul 1996 A
5553111 Moore Sep 1996 A
5592562 Rooks Jan 1997 A
5594769 Pellegrino et al. Jan 1997 A
5596200 Sharma Jan 1997 A
5598454 Franetzki Jan 1997 A
5609152 Pellegrino et al. Mar 1997 A
5627869 Andrew et al. May 1997 A
5642433 Lee et al. Jun 1997 A
5642441 Riley et al. Jun 1997 A
5647025 Frost et al. Jul 1997 A
5657362 Giger et al. Aug 1997 A
5660185 Shmulewitz et al. Aug 1997 A
5668889 Hara Sep 1997 A
5671288 Wilhelm et al. Sep 1997 A
5709206 Teboul Jan 1998 A
5712890 Spivey Jan 1998 A
5719952 Rooks Feb 1998 A
5735264 Siczek et al. Apr 1998 A
5757880 Colomb May 1998 A
5763871 Ortyn et al. Jun 1998 A
5769086 Ritchart et al. Jun 1998 A
5773832 Sayed et al. Jun 1998 A
5803912 Siczek et al. Sep 1998 A
5818898 Tsukamoto et al. Oct 1998 A
5828722 Ploetz Oct 1998 A
5835079 Shieh Nov 1998 A
5841124 Ortyn et al. Nov 1998 A
5872828 Niklason et al. Feb 1999 A
5875258 Ortyn et al. Feb 1999 A
5878104 Ploetz Mar 1999 A
5878746 Lemelson et al. Mar 1999 A
5896437 Ploetz Apr 1999 A
5941832 Tumey Aug 1999 A
5954650 Saito Sep 1999 A
5986662 Argiro Nov 1999 A
6005907 Ploetz Dec 1999 A
6022325 Siczek et al. Feb 2000 A
6067079 Shieh May 2000 A
6075879 Roehrig et al. Jun 2000 A
6091841 Rogers Jul 2000 A
6091981 Cundari et al. Jul 2000 A
6101236 Wang et al. Aug 2000 A
6102866 Nields et al. Aug 2000 A
6137527 Abdel-Malek Oct 2000 A
6141398 He Oct 2000 A
6149301 Kautzer et al. Nov 2000 A
6175117 Komardin Jan 2001 B1
6196715 Nambu Mar 2001 B1
6215892 Douglass et al. Apr 2001 B1
6216540 Nelson Apr 2001 B1
6219059 Argiro Apr 2001 B1
6256370 Yavus Apr 2001 B1
6233473 Sheperd May 2001 B1
6243441 Zur Jun 2001 B1
6245028 Furst et al. Jun 2001 B1
6272207 Tang Aug 2001 B1
6289235 Webber et al. Sep 2001 B1
6292530 Yavus Sep 2001 B1
6293282 Lemelson Sep 2001 B1
6327336 Gingold et al. Dec 2001 B1
6327377 Rutenberg et al. Dec 2001 B1
6341156 Baetz Jan 2002 B1
6375352 Hewes Apr 2002 B1
6389104 Bani-Hashemi et al. May 2002 B1
6411836 Patel Jun 2002 B1
6415015 Nicolas Jul 2002 B2
6424332 Powell Jul 2002 B1
6442288 Haerer Aug 2002 B1
6459925 Nields et al. Oct 2002 B1
6463181 Duarte Oct 2002 B2
6468226 McIntyre, IV Oct 2002 B1
6480565 Ning Nov 2002 B1
6501819 Unger et al. Dec 2002 B2
6556655 Chichereau Apr 2003 B1
6574304 Hsieh Jun 2003 B1
6597762 Ferrant Jul 2003 B1
6611575 Alyassin et al. Aug 2003 B1
6620111 Stephens et al. Sep 2003 B2
6626849 Huitema et al. Sep 2003 B2
6633674 Barnes Oct 2003 B1
6638235 Miller et al. Oct 2003 B2
6647092 Eberhard Nov 2003 B2
6650928 Gailly Nov 2003 B1
6683934 Zhao Jan 2004 B1
6744848 Stanton Jun 2004 B2
6748044 Sabol et al. Jun 2004 B2
6751285 Eberhard Jun 2004 B2
6758824 Miller et al. Jul 2004 B1
6813334 Koppe Nov 2004 B2
6882700 Wang Apr 2005 B2
6885724 Li Apr 2005 B2
6901156 Giger et al. May 2005 B2
6912319 Barnes May 2005 B1
6940943 Claus Sep 2005 B2
6978040 Berestov Dec 2005 B2
6987331 Koeppe Jan 2006 B2
6999553 Livingston Feb 2006 B2
6999554 Mertelmeier Feb 2006 B2
7022075 Grunwald et al. Apr 2006 B2
7025725 Dione et al. Apr 2006 B2
7030861 Westerman Apr 2006 B1
7110490 Eberhard Sep 2006 B2
7110502 Tsuji Sep 2006 B2
7117098 Dunlay et al. Oct 2006 B1
7123684 Jing et al. Oct 2006 B2
7127091 OpDeBeek Oct 2006 B2
7142633 Eberhard Nov 2006 B2
7218766 Eberhard May 2007 B2
7245694 Jing et al. Jul 2007 B2
7286634 Sommer, Jr. et al. Oct 2007 B2
7289825 Fors et al. Oct 2007 B2
7298881 Giger et al. Nov 2007 B2
7315607 Ramsauer Jan 2008 B2
7319735 Defreitas et al. Jan 2008 B2
7323692 Rowlands Jan 2008 B2
7346381 Okerlund et al. Mar 2008 B2
7406150 Minyard et al. Jul 2008 B2
7430272 Jing et al. Sep 2008 B2
7443949 Defreitas et al. Oct 2008 B2
7466795 Eberhard et al. Dec 2008 B2
7556602 Wang et al. Jul 2009 B2
7577282 Gkanatsios et al. Aug 2009 B2
7606801 Faitelson et al. Oct 2009 B2
7616801 Gkanatsios et al. Nov 2009 B2
7630533 Ruth et al. Dec 2009 B2
7634050 Muller et al. Dec 2009 B2
7640051 Krishnan Dec 2009 B2
7697660 Ning Apr 2010 B2
7702142 Ren et al. Apr 2010 B2
7705830 Westerman et al. Apr 2010 B2
7760924 Ruth et al. Jul 2010 B2
7769219 Zahniser Aug 2010 B2
7787936 Kressy Aug 2010 B2
7809175 Roehrig et al. Oct 2010 B2
7828733 Zhang et al. Nov 2010 B2
7831296 DeFreitas et al. Nov 2010 B2
7869563 DeFreitas Jan 2011 B2
7974924 Holla et al. Jul 2011 B2
7991106 Ren et al. Aug 2011 B2
8044972 Hall et al. Oct 2011 B2
8051386 Rosander et al. Nov 2011 B2
8126226 Bernard et al. Feb 2012 B2
8155421 Ren et al. Apr 2012 B2
8165365 Bernard et al. Apr 2012 B2
8532745 DeFreitas et al. Sep 2013 B2
8571289 Ruth Oct 2013 B2
8594274 Hoernig et al. Nov 2013 B2
8677282 Cragun et al. Mar 2014 B2
8712127 Ren et al. Apr 2014 B2
8787522 Smith et al. Jul 2014 B2
8897535 Ruth et al. Nov 2014 B2
8983156 Periaswamy et al. Mar 2015 B2
9020579 Smith Apr 2015 B2
9075903 Marshall Jul 2015 B2
9084579 Ren et al. Jul 2015 B2
9119599 Itai Sep 2015 B2
9129362 Jerebko Sep 2015 B2
9289183 Karssemeijer Mar 2016 B2
9451924 Bernard Sep 2016 B2
9456797 Ruth et al. Oct 2016 B2
9478028 Parthasarathy Oct 2016 B2
9589374 Gao Mar 2017 B1
9592019 Sugiyama Mar 2017 B2
9805507 Chen Oct 2017 B2
9808215 Ruth et al. Nov 2017 B2
9811758 Ren et al. Nov 2017 B2
9901309 DeFreitas et al. Feb 2018 B2
10008184 Kreeger et al. Jun 2018 B2
10010302 Ruth et al. Jul 2018 B2
10074199 Robinson et al. Sep 2018 B2
10092358 DeFreitas Oct 2018 B2
10111631 Gkanatsios Oct 2018 B2
10242490 Karssemeijer Mar 2019 B2
10276265 Reicher et al. Apr 2019 B2
10282840 Moehrle et al. May 2019 B2
10335094 DeFreitas Jul 2019 B2
10357211 Smith Jul 2019 B2
10410417 Chen et al. Sep 2019 B2
10413263 Ruth et al. Sep 2019 B2
10444960 Marshall Oct 2019 B2
10456213 DeFreitas Oct 2019 B2
10573276 Kreeger et al. Feb 2020 B2
10575807 Gkanatsios Mar 2020 B2
10595954 DeFreitas Mar 2020 B2
10624598 Chen Apr 2020 B2
10977863 Chen Apr 2021 B2
10978026 Kreeger Apr 2021 B2
11419565 Gkanatsios Aug 2022 B2
11508340 Kreeger Nov 2022 B2
11589944 DeFreitas Feb 2023 B2
11663780 Chen May 2023 B2
11701199 DeFreitas Jul 2023 B2
20010038681 Stanton et al. Nov 2001 A1
20010038861 Hsu et al. Nov 2001 A1
20020012450 Tsuji Jan 2002 A1
20020050986 Inoue May 2002 A1
20020075997 Unger et al. Jun 2002 A1
20020113681 Byram Aug 2002 A1
20020122533 Marie et al. Sep 2002 A1
20020188466 Barrette et al. Dec 2002 A1
20020193676 Bodicker Dec 2002 A1
20030007598 Wang Jan 2003 A1
20030018272 Treado et al. Jan 2003 A1
20030026386 Tang Feb 2003 A1
20030048260 Matusis Mar 2003 A1
20030073895 Nields et al. Apr 2003 A1
20030095624 Eberhard et al. May 2003 A1
20030097055 Yanof May 2003 A1
20030128893 Castorina Jul 2003 A1
20030135115 Burdette et al. Jul 2003 A1
20030169847 Karellas Sep 2003 A1
20030194050 Eberhard Oct 2003 A1
20030194121 Eberhard et al. Oct 2003 A1
20030194124 Suzuki et al. Oct 2003 A1
20030195433 Turovskiy Oct 2003 A1
20030210254 Doan Nov 2003 A1
20030212327 Wang Nov 2003 A1
20030215120 Uppaluri Nov 2003 A1
20040008809 Webber Jan 2004 A1
20040008900 Jabri et al. Jan 2004 A1
20040008901 Avinash Jan 2004 A1
20040036680 Davis Feb 2004 A1
20040047518 Tiana Mar 2004 A1
20040052328 Saboi Mar 2004 A1
20040064037 Smith Apr 2004 A1
20040066884 Claus Apr 2004 A1
20040066904 Eberhard et al. Apr 2004 A1
20040070582 Smith et al. Apr 2004 A1
20040077938 Mark et al. Apr 2004 A1
20040081273 Ning Apr 2004 A1
20040094167 Brady May 2004 A1
20040101095 Jing et al. May 2004 A1
20040109028 Stern et al. Jun 2004 A1
20040109529 Eberhard et al. Jun 2004 A1
20040127789 Ogawa Jul 2004 A1
20040138569 Grunwald Jul 2004 A1
20040171933 Stoller et al. Sep 2004 A1
20040171986 Tremaglio, Jr. et al. Sep 2004 A1
20040267157 Miller et al. Dec 2004 A1
20050047636 Gines et al. Mar 2005 A1
20050049521 Miller et al. Mar 2005 A1
20050063509 Defreitas et al. Mar 2005 A1
20050078797 Danielsson et al. Apr 2005 A1
20050084060 Seppi et al. Apr 2005 A1
20050089205 Kapur Apr 2005 A1
20050105679 Wu et al. May 2005 A1
20050107689 Sasano May 2005 A1
20050111718 MacMahon May 2005 A1
20050113680 Ikeda et al. May 2005 A1
20050113681 DeFreitas et al. May 2005 A1
20050113715 Schwindt et al. May 2005 A1
20050124845 Thomadsen et al. Jun 2005 A1
20050135555 Claus Jun 2005 A1
20050135664 Kaufhold Jun 2005 A1
20050226375 Eberhard Oct 2005 A1
20060004278 Giger et al. Jan 2006 A1
20060009693 Hanover et al. Jan 2006 A1
20060018526 Avinash Jan 2006 A1
20060025680 Jeune-Iomme Feb 2006 A1
20060030784 Miller et al. Feb 2006 A1
20060074288 Kelly et al. Apr 2006 A1
20060098855 Gkanatsios et al. May 2006 A1
20060129062 Nicoson et al. Jun 2006 A1
20060132508 Sadikali Jun 2006 A1
20060147099 Marshall et al. Jul 2006 A1
20060154267 Ma et al. Jul 2006 A1
20060155209 Miller et al. Jul 2006 A1
20060197753 Hotelling Sep 2006 A1
20060210131 Wheeler Sep 2006 A1
20060228012 Masuzawa Oct 2006 A1
20060238546 Handley Oct 2006 A1
20060257009 Wang Nov 2006 A1
20060269040 Mertelmeier Nov 2006 A1
20060274928 Collins et al. Dec 2006 A1
20060291618 Eberhard et al. Dec 2006 A1
20070014468 Gines et al. Jan 2007 A1
20070019846 Bullitt et al. Jan 2007 A1
20070030949 Jing et al. Feb 2007 A1
20070036265 Jing et al. Feb 2007 A1
20070046649 Reiner Mar 2007 A1
20070047793 Wu et al. Mar 2007 A1
20070052700 Wheeler et al. Mar 2007 A1
20070076844 Defreitas et al. Apr 2007 A1
20070114424 Danielsson et al. May 2007 A1
20070118400 Morita et al. May 2007 A1
20070156451 Gering Jul 2007 A1
20070223651 Wagenaar et al. Sep 2007 A1
20070225600 Weibrecht et al. Sep 2007 A1
20070236490 Casteele Oct 2007 A1
20070242800 Jing et al. Oct 2007 A1
20070263765 Wu Nov 2007 A1
20070274585 Zhang et al. Nov 2007 A1
20080019581 Gkanatsios et al. Jan 2008 A1
20080043905 Hassanpourgol Feb 2008 A1
20080045833 DeFreitas et al. Feb 2008 A1
20080101537 Sendai May 2008 A1
20080114614 Mahesh et al. May 2008 A1
20080125643 Huisman May 2008 A1
20080130979 Ren Jun 2008 A1
20080139896 Baumgart Jun 2008 A1
20080152086 Hall Jun 2008 A1
20080165136 Christie et al. Jul 2008 A1
20080187095 Boone et al. Aug 2008 A1
20080198966 Hjarn Aug 2008 A1
20080221479 Ritchie Sep 2008 A1
20080229256 Shibaike Sep 2008 A1
20080240533 Piron et al. Oct 2008 A1
20080297482 Weiss Dec 2008 A1
20090003519 DeFreitas Jan 2009 A1
20090005668 West et al. Jan 2009 A1
20090005693 Brauner Jan 2009 A1
20090010384 Jing et al. Jan 2009 A1
20090034684 Bernard Feb 2009 A1
20090037821 O'Neal et al. Feb 2009 A1
20090063118 Dachille et al. Mar 2009 A1
20090079705 Sizelove et al. Mar 2009 A1
20090080594 Brooks et al. Mar 2009 A1
20090080602 Brooks et al. Mar 2009 A1
20090080604 Shores et al. Mar 2009 A1
20090080752 Ruth Mar 2009 A1
20090080765 Bernard et al. Mar 2009 A1
20090087067 Khorasani Apr 2009 A1
20090123052 Ruth May 2009 A1
20090129644 Daw et al. May 2009 A1
20090135997 Defreitas et al. May 2009 A1
20090138280 Morita et al. May 2009 A1
20090143674 Nields Jun 2009 A1
20090167702 Nurmi Jul 2009 A1
20090171244 Ning Jul 2009 A1
20090238424 Arakita Sep 2009 A1
20090259958 Ban Oct 2009 A1
20090268865 Ren et al. Oct 2009 A1
20090278812 Yasutake Nov 2009 A1
20090296882 Gkanatsios et al. Dec 2009 A1
20090304147 Jing et al. Dec 2009 A1
20100034348 Yu Feb 2010 A1
20100049046 Peiffer Feb 2010 A1
20100054400 Ren et al. Mar 2010 A1
20100067648 Kojima Mar 2010 A1
20100079405 Bernstein Apr 2010 A1
20100086188 Ruth et al. Apr 2010 A1
20100088346 Urness et al. Apr 2010 A1
20100098214 Star-Lack et al. Apr 2010 A1
20100105879 Katayose et al. Apr 2010 A1
20100121178 Krishnan May 2010 A1
20100131294 Venon May 2010 A1
20100131482 Linthicum et al. May 2010 A1
20100135558 Ruth et al. Jun 2010 A1
20100152570 Navab Jun 2010 A1
20100166147 Abenaim Jul 2010 A1
20100166267 Zhang Jul 2010 A1
20100171764 Feng et al. Jul 2010 A1
20100189322 Sakagawa Jul 2010 A1
20100195882 Ren et al. Aug 2010 A1
20100208037 Sendai Aug 2010 A1
20100231522 Li Sep 2010 A1
20100246884 Chen et al. Sep 2010 A1
20100246909 Blum Sep 2010 A1
20100259561 Forutanpour et al. Oct 2010 A1
20100259645 Kaplan Oct 2010 A1
20100260316 Stein et al. Oct 2010 A1
20100280375 Zhang Nov 2010 A1
20100293500 Cragun Nov 2010 A1
20110018817 Kryze Jan 2011 A1
20110019891 Puong Jan 2011 A1
20110054944 Sandberg et al. Mar 2011 A1
20110069808 Defreitas et al. Mar 2011 A1
20110069906 Park Mar 2011 A1
20110087132 DeFreitas et al. Apr 2011 A1
20110105879 Masumoto May 2011 A1
20110109650 Kreeger May 2011 A1
20110110570 Bar-Shalev May 2011 A1
20110110576 Kreeger May 2011 A1
20110123073 Gustafson May 2011 A1
20110125526 Gustafson May 2011 A1
20110134113 Ma et al. Jun 2011 A1
20110150447 Li Jun 2011 A1
20110157154 Bernard et al. Jun 2011 A1
20110163939 Tam et al. Jul 2011 A1
20110178389 Kumar et al. Jul 2011 A1
20110182402 Partain Jul 2011 A1
20110234630 Batman et al. Sep 2011 A1
20110237927 Brooks et al. Sep 2011 A1
20110242092 Kashiwagi Oct 2011 A1
20110310126 Georgiev et al. Dec 2011 A1
20120014501 Pelc Jan 2012 A1
20120014504 Jang Jan 2012 A1
20120014578 Karssemeijer Jan 2012 A1
20120069951 Toba Mar 2012 A1
20120106698 Karim May 2012 A1
20120127297 Baxi May 2012 A1
20120131488 Karlsson et al. May 2012 A1
20120133600 Marshall May 2012 A1
20120133601 Marshall May 2012 A1
20120134464 Hoernig et al. May 2012 A1
20120148151 Hamada Jun 2012 A1
20120150034 DeFreitas et al. Jun 2012 A1
20120189092 Jerebko Jul 2012 A1
20120194425 Buelow Aug 2012 A1
20120238870 Smith et al. Sep 2012 A1
20120277625 Nakayama Nov 2012 A1
20120293511 Mertelmeier Nov 2012 A1
20130016255 Bhatt Jan 2013 A1
20130022165 Jang Jan 2013 A1
20130044861 Muller Feb 2013 A1
20130059758 Haick Mar 2013 A1
20130108138 Nakayama May 2013 A1
20130121569 Yadav May 2013 A1
20130121618 Yadav May 2013 A1
20130202168 Jerebko Aug 2013 A1
20130259193 Packard Oct 2013 A1
20130272494 DeFreitas Oct 2013 A1
20140033126 Kreeger Jan 2014 A1
20140035811 Guehring Feb 2014 A1
20140064444 Oh Mar 2014 A1
20140073913 DeFreitas et al. Mar 2014 A1
20140082542 Zhang et al. Mar 2014 A1
20140200433 Choi Jul 2014 A1
20140219534 Wiemker et al. Aug 2014 A1
20140219548 Wels Aug 2014 A1
20140276061 Lee et al. Sep 2014 A1
20140327702 Kreeger et al. Nov 2014 A1
20140328517 Gluncic Nov 2014 A1
20150004558 Inglese Jan 2015 A1
20150052471 Chen Feb 2015 A1
20150061582 Smith Apr 2015 A1
20150238148 Georgescu Aug 2015 A1
20150258271 Love Sep 2015 A1
20150302146 Marshall Oct 2015 A1
20150309712 Marshall Oct 2015 A1
20150317538 Ren et al. Nov 2015 A1
20150331995 Zhao Nov 2015 A1
20160000399 Halmann et al. Jan 2016 A1
20160022364 DeFreitas et al. Jan 2016 A1
20160051215 Chen Feb 2016 A1
20160078645 Abdurahman Mar 2016 A1
20160140749 Erhard May 2016 A1
20160210774 Wiskin et al. Jul 2016 A1
20160228034 Gluncic Aug 2016 A1
20160235380 Smith Aug 2016 A1
20160350933 Schieke Dec 2016 A1
20160364526 Reicher et al. Dec 2016 A1
20160367210 Gkanatsios Dec 2016 A1
20170071562 Suzuki Mar 2017 A1
20170132792 Jerebko et al. May 2017 A1
20170202453 Sekiguchi Jul 2017 A1
20170262737 Rabinovich Sep 2017 A1
20180008220 Boone et al. Jan 2018 A1
20180008236 Venkataraman et al. Jan 2018 A1
20180047211 Chen et al. Feb 2018 A1
20180109698 Ramsay et al. Apr 2018 A1
20180132722 Eggers et al. May 2018 A1
20180137385 Ren May 2018 A1
20180144244 Masoud May 2018 A1
20180256118 DeFreitas Sep 2018 A1
20190000318 Caluser Jan 2019 A1
20190015173 DeFreitas Jan 2019 A1
20190037173 Lee et al. Jan 2019 A1
20190043456 Kreeger Feb 2019 A1
20190057778 Porter et al. Feb 2019 A1
20190287241 Hill et al. Sep 2019 A1
20190290221 Smith Sep 2019 A1
20190325573 Bernard et al. Oct 2019 A1
20200046303 DeFreitas Feb 2020 A1
20200054300 Kreeger et al. Feb 2020 A1
20200093562 DeFreitas Mar 2020 A1
20200184262 Chui Jun 2020 A1
20200205928 DeFreitas Jul 2020 A1
20200253573 Gkanatsios Aug 2020 A1
20200345320 Chen Nov 2020 A1
20200390404 DeFreitas Dec 2020 A1
20210000553 St. Pierre Jan 2021 A1
20210100518 Chui Apr 2021 A1
20210100626 St. Pierre Apr 2021 A1
20210113167 Chui Apr 2021 A1
20210118199 Chui Apr 2021 A1
20210174504 Madabhushi Jun 2021 A1
20210212665 Tsymbalenko Jul 2021 A1
20220005277 Chen Jan 2022 A1
20220013089 Kreeger Jan 2022 A1
20220036545 St. Pierre Feb 2022 A1
20220192615 Chui Jun 2022 A1
20220254023 McKinney et al. Aug 2022 A1
20220386969 Smith Dec 2022 A1
20230000467 Shi Jan 2023 A1
20230008465 Smith Jan 2023 A1
20230033601 Chui Feb 2023 A1
20230038498 Xu Feb 2023 A1
20230053489 Kreeger Feb 2023 A1
20230054121 Chui Feb 2023 A1
20230056692 Gkanatsios Feb 2023 A1
20230082494 Chui Mar 2023 A1
20230098305 St. Pierre Mar 2023 A1
20230103969 St. Pierre Apr 2023 A1
20230124481 St. Pierre Apr 2023 A1
20230125385 Solis Apr 2023 A1
20230225821 DeFreitas Jul 2023 A1
20230230679 Chen Jul 2023 A1
20230240785 DeFreitas Aug 2023 A1
20230344453 Yang Oct 2023 A1
20230394769 Chen Dec 2023 A1
20240169958 Kreeger May 2024 A1
20240315654 Chui Sep 2024 A1
20240338864 Chui Oct 2024 A1
20240341698 DeFreitas Oct 2024 A1
Foreign Referenced Citations (128)
Number Date Country
2014339982 Apr 2015 AU
1802121 Jul 2006 CN
1846622 Oct 2006 CN
101066212 Nov 2007 CN
102169530 Aug 2011 CN
202161328 Mar 2012 CN
102429678 May 2012 CN
102473300 May 2012 CN
105193447 Dec 2015 CN
106659468 May 2017 CN
107440730 Dec 2017 CN
112561908 Mar 2021 CN
102010009295 Aug 2011 DE
102011087127 May 2013 DE
775467 May 1997 EP
982001 Mar 2000 EP
1428473 Jun 2004 EP
2236085 Jun 2010 EP
2215600 Aug 2010 EP
2301432 Mar 2011 EP
2491863 Aug 2012 EP
1986548 Jan 2013 EP
2656789 Oct 2013 EP
2823464 Jan 2015 EP
2823765 Jan 2015 EP
2889743 Jul 2015 EP
3060132 Apr 2019 EP
H09-35043 Feb 1997 JP
H09-198490 Jul 1997 JP
H09-238934 Sep 1997 JP
H10-33523 Feb 1998 JP
2000-200340 Jul 2000 JP
2002-109510 Apr 2002 JP
2002-282248 Oct 2002 JP
2003-126073 May 2003 JP
2003-189179 Jul 2003 JP
2003-199737 Jul 2003 JP
2003-531516 Oct 2003 JP
2004254742 Sep 2004 JP
2005-110843 Apr 2005 JP
2005-522305 Jul 2005 JP
2005-227350 Aug 2005 JP
2005-322257 Nov 2005 JP
2006-519634 Aug 2006 JP
2006-312026 Nov 2006 JP
2007-130487 May 2007 JP
2007-216022 Aug 2007 JP
2007-325928 Dec 2007 JP
2007-330334 Dec 2007 JP
2007-536968 Dec 2007 JP
2008-068032 Mar 2008 JP
2008518684 Jun 2008 JP
2008-253401 Oct 2008 JP
2009-034503 Feb 2009 JP
2009-522005 Jun 2009 JP
2009-526618 Jul 2009 JP
2009-207545 Sep 2009 JP
2010-137004 Jun 2010 JP
2011-110175 Jun 2011 JP
2012-011255 Jan 2012 JP
2012-501750 Jan 2012 JP
2012-061196 Mar 2012 JP
2013-530768 Aug 2013 JP
2013-244211 Dec 2013 JP
2014-507250 Mar 2014 JP
2014-534042 Dec 2014 JP
2015-506794 Mar 2015 JP
2015-144632 Aug 2015 JP
2016-198197 Dec 2015 JP
2016059743 Apr 2016 JP
2017-000364 Jan 2017 JP
2017-056358 Mar 2017 JP
10-2015-0010515 Jan 2015 KR
10-2017-0062839 Jun 2017 KR
9005485 May 1990 WO
9317620 Sep 1993 WO
9406352 Mar 1994 WO
199700649 Jan 1997 WO
199816903 Apr 1998 WO
0051484 Sep 2000 WO
2003020114 Mar 2003 WO
03077202 Sep 2003 WO
2005051197 Jun 2005 WO
2005110230 Nov 2005 WO
2005112767 Dec 2005 WO
2006055830 May 2006 WO
2006058160 Jun 2006 WO
2007095330 Aug 2007 WO
08014670 Feb 2008 WO
2008047270 Apr 2008 WO
2008050823 May 2008 WO
2008054436 May 2008 WO
2009026587 Feb 2009 WO
2010028208 Mar 2010 WO
2010059920 May 2010 WO
2011008239 Jan 2011 WO
2011043838 Apr 2011 WO
2011065950 Jun 2011 WO
2011073864 Jun 2011 WO
2011091300 Jul 2011 WO
2012001572 Jan 2012 WO
2012068373 May 2012 WO
2012063653 May 2012 WO
2012112627 Aug 2012 WO
2012122399 Sep 2012 WO
2013001439 Jan 2013 WO
2013035026 Mar 2013 WO
2013087476 May 2013 WO
2013123091 Aug 2013 WO
2013136222 Sep 2013 WO
2014080215 May 2014 WO
2014149554 Sep 2014 WO
2014207080 Dec 2014 WO
2015061582 Apr 2015 WO
2015066650 May 2015 WO
2015130916 Sep 2015 WO
2016103094 Jun 2016 WO
2016184746 Nov 2016 WO
2016206942 Dec 2016 WO
2018183548 Oct 2018 WO
2018183549 Oct 2018 WO
2018183550 Oct 2018 WO
2018236565 Dec 2018 WO
2019032558 Feb 2019 WO
2019091807 May 2019 WO
2021021329 Feb 2021 WO
2021168281 Aug 2021 WO
2021195084 Sep 2021 WO
Non-Patent Literature Citations (84)
Entry
Perek, S. et al. “Siamese network for dual-view mammography mass matching.” Image Analysis for Moving Organ, Breast&Thoracic Images: 3rd Int'l Workshop, Rambo 2018, 4th Int'l Workshop, BIA 2018, and 1st Int'l Workshop, TIA 2018, Proceedings 3. Springer International Publishing, 2018. (Year: 2018).
PCT International Search Report and Written Opinion in International Application PCT/US2022/080432, mailed Mar. 22, 2023, 17 pages.
“Filtered Back Projection”, (Nygren), published May 8, 2007, URL: http://web.archive.org/web/19991010131715/http://www.owlnet.rice.edu/˜elec539/Projects97/cult/node2.html, 2 pgs.
“Supersonic to feature Aixplorer Ultimate at ECR”, AuntiMinnie.com, 3 pages (Feb. 2018).
Al Sallab et al., “Self Learning Machines Using Deep Networks”, Soft Computing and Pattern Recognition (SoCPaR), 2011 Int'l. Conference of IEEE, Oct. 14, 2011, pp. 21-26.
Berg, WA et al., “Combined screening with ultrasound and mammography vs mammography alone in women at elevated risk of breast cancer”, JAMA 299:2151-2163, 2008.
Burbank, Fred, “Stereotactic Breast Biopsy: Its History, Its Present, and Its Future”, published in 1996 at the Southeastern Surgical Congress, 24 pages.
Bushberg, Jerrold et al., “The Essential Physics of Medical Imaging”, 3rd ed., In: “The Essential Physics of Medical Imaging, Third Edition”, Dec. 28, 2011, Lippincott & Wilkins, Philadelphia, PA, USA, XP05579051, pp. 270-272.
Caroline, B.E. et al., “Computer aided detection of masses in digital breast tomosynthesis: A review”, 2012 International Conference on Emerging Trends in Science, Engineering and Technology (INCOSET), Tiruchirappalli, 2012, pp. 186-191.
Carton, AK, et al., “Dual-energy contrast-enhanced digital breast tomosynthesis—a feasibility study”, BR J Radiol. Apr. 2010;83 (988):344-50.
Chan, Heang-Ping et al., “Computer-aided detection system for breast masses on digital tomosynthesis mammograms: Preliminary Experience”, Radiology, Dec. 2005, 1075-1080.
Chan, Heang-Ping et al., “ROC Study of the effect of stereoscopic imaging on assessment of breast lesions,” Medical Physics, vol. 32, No. 4, Apr. 2005, 1001-1009.
Chen, SC, et al., “Initial clinical experience with contrast-enhanced digital breast tomosynthesis”, Acad Radio. Feb. 2007 14(2):229-38.
Cho, N. et al., “Distinguishing Benign from Malignant Masses at Breast US: Combined US Elastography and Color Doppler US-Influence on Radiologist Accuracy”, Radiology, 262(1): 80-90 (Jan. 2012).
Conner, Peter, “Breast Response to Menopausal Hormone Therapy—Aspects on Proliferation, apoptosis and Mammographic Density”, 2007 Annals of Medicine, 39;1, 28-41.
Diekmann, Felix et al., “Thick Slices from Tomosynthesis Data Sets: Phantom Study for the Evaluation of Different Algorithms”, Journal of Digital Imaging, Springer, vol. 22, No. 5, Oct. 23, 2007, pp. 519-526.
Diekmann, Felix., et al., “Digital mammography using iodine-based contrast media: initial clinical experience with dynamic contrast medium enhancement”, Invest Radiol 2005; 40:397-404.
Dromain C., et al., “Contrast enhanced spectral mammography: a multi-reader study”, RSNA 2010, 96th Scientific Assembly and Scientific Meeting.
Dromain, C., et al., “Contrast-enhanced digital mammography”, Eur J Radiol. 2009; 69:34-42.
Dromain, Clarisse et al., “Dual-energy contrast-enhanced digital mammography: initial clinical results”, European Radiology, Sep. 14, 2010, vol. 21, pp. 565-574.
Dromain, Clarisse, et al., “Evaluation of tumor angiogenesis of breast carcinoma using contrast-enhanced digital mammography”, AJR: 187, Nov. 2006, 16 pages.
Duan, Xiaoman et al., “Matching corresponding regions of interest on cranio-caudal and medio-lateral oblique view mammograms”, IEEE Access, vol. 7, Mar. 25, 2019, pp. 31586-31597, XP011715754, DOI: 10.1109/Access.2019.2902854, retrieved on Mar. 20, 2019, abstract.
E. Shaw de Paredes et al., “Interventional Breast Procedure”, published Sep./Oct. 1998 in Curr Probl Diagn Radiol, pp. 138-184.
eFilm Mobile HD by Merge Healthcare, website: http://itunes.apple.com/bw/app/efilm-mobile-hd/id405261243?mt=8, accessed on Nov. 3, 2011 (2 pages).
eFilm Solutions, eFilm Workstation (tm) 3.4, website: http://estore.merge.com/na/estore/content.aspx?productID=405, accessed on Nov. 3, 2011 (2 pages).
Elbakri, Idris A. et al., “Automatic exposure control for a slot scanning full field digital mammography system”, Med. Phys., Sep. 2005, 32(9):2763-2770, Abstract only.
Ertas, M. et al., “2D versus 3D total variation minimization in digital breast tomosynthesis”, 2015 IEEE International Conference on Imaging Systems and Techniques (IST), Macau, 2015, pp. 1-4.
Feng, Steve Si Jia, et al., “Clinical digital breast tomosynthesis system: Dosimetric Characterization”, Radiology, Apr. 2012, 263(1); pp. 35-42.
Fischer Imaging Corp, Mammotest Plus manual on minimally invasive breast biopsy system, 2002, 8 pages.
Fischer Imaging Corporation, Installation Manual, MammoTest Family of Breast Biopsy Systems, 86683G, 86684G, P-55957-IM, Issue 1, Revision 3, Jul. 2005, 98 pages.
Fischer Imaging Corporation, Operator Manual, MammoTest Family of Breast Biopsy Systems, 86683G, 86684G, P-55956-OM, Issue 1, Revision 6, Sep. 2005, 258 pages.
Freiherr, G., “Breast tomosynthesis trials show promise”, Diagnostic Imaging (San Francisco), 2005, 27(4):42-48.
Georgian-Smith, Dianne, et al., “Stereotactic Biopsy of the Breast Using an Upright Unit, a Vacuum-Suction Needle, and a Lateral Arm-Support System”, 2001, at the American Roentgen Ray Society meeting, 8 pages.
Ghiassi, M. et al., “A Dynamic Architecture for Artificial Neural Networks”, Neurocomputing, vol. 63, Aug. 20, 2004, pp. 397-413.
Giger et al., “Development of a smart workstation for use in mammography”, in Proceedings of SPIE, vol. 1445 (1991), pp. 101-103; 4 pages.
Giger et al., “An Intelligent Workstation for Computer-aided Diagnosis”, in RadioGraphics, May 1993, 13:3 pp. 647-656; 10 pages.
Glick, Stephen J., “Breast CT”, Annual Rev. Biomed. Eng., 2007, 9:501-26.
Green, C. et al., “Deformable mapping using biomechanical models to relate corresponding lesions in digital breast tomosynthesis and automated breast ultrasound images”, Medical Image Analysis, 60: 1-18 (Nov. 2019).
Hologic, “Lorad StereoLoc II” Operator's Manual 9-500-0261, Rev. 005, 2004, 78 pages.
Hologic, Inc., 510(k) Summary, prepared Nov. 28, 2010, for Affirm Breast Biopsy Guidance System Special 510(k) Premarket Notification, 5 pages.
Hologic, Inc., 510(k) Summary, prepared Aug. 14, 2012, for Affirm Breast Biopsy Guidance System Special 510(k) Premarket Notification, 5 pages.
ICRP Publication 60: 1990 Recommendations of the International Commission on Radiological Protection, 12 pages.
Ijaz, Umer Zeeshan, et al., “Mammography phantom studies using 3D electrical impedance tomography with numerical forward solver”, Frontiers in the Convergence of Bioscience and Information Technologies 2007, 379-383.
Jochelson, M., et al., “Bilateral Dual Energy contrast-enhanced digital mammography: Initial Experience”, RSNA 2010, 96th Scientific Assembly and Scientific Meeting, 1 page.
Jong, RA, et al., Contrast-enhanced digital mammography: initial clinical experience. Radiology 2003; 228:842-850.
Kao, Tzu-Jen et al., “Regional admittivity spectra with tomosynthesis images for breast cancer detection”, Proc. of the 29th Annual Int'l. Conf. of the IEEE EMBS, Aug. 23-26, 2007, 4142-4145.
Kim, Eun Sil, et al., “Significance of microvascular evaluation of ductal lesions on breast ultrasonography: Influence on diagnostic performance”, Clinical Imaging, Elsevier, NY, vol. 51, Jun. 6, 2018, pp. 252-259.
Koechli, Ossi R., “Available Stereotactic Systems for Breast Biopsy”, Renzo Brun del Re (Ed.), Minimally Invasive Breast Biopsies, Recent Results in Cancer Research 173:105-113; Springer-Verlag, 2009.
Kopans, Daniel B., “Breast Imaging”, 3rd Edition, Lippincott Williams and Wilkins, published Nov. 2, 2006, pp. 960-967.
Kopans, et al., “Will tomosynthesis replace conventional mammography?”, Plenary Session SFN08, RSNA 2005.
Lee, E. et al., “Combination of Quantitative Parameters of Shear Wave Elastography and Superb Microvascular Imaging to Evaluate Breast Masses”, Korean Journal of Radiology: Official Journal of the Korean Radiological Society, 21(9): 1045-1054 (Jan. 2020).
Lehman, CD, et al. MRI evaluation of the contralateral breast in women with recently diagnosed breast cancer. N Engl J Med 2007; 356:1295-1303.
Lewin, JM, et al., Dual-energy contrast-enhanced digital subtraction mammography: feasibility. Radiology 2003; 229:261-268.
Lilja, Mikko, “Fast and accurate voxel projection technique in free-form cone-beam geometry with application to algebraic reconstruction”, Applied Sciences in Biomedical and Communication Technologies (ISABEL '08), First International Symposium on, IEEE, Piscataway, NJ, Oct. 25, 2008.
Lindfors, KK, et al., Dedicated breast CT: initial clinical experience. Radiology 2008; 246(3): 725-733.
Love, Susan M., et al. “Anatomy of the nipple and breast ducts revisited”, Cancer, American Cancer Society, Philadelphia, PA, vol. 101, No. 9, Sep. 20, 2004, pp. 1947-1957.
Mahesh, Mahadevappa, “AAPM/RSNA Physics Tutorial for Residents—Digital Mammography: An Overview”, RadioGraphics, Nov.-Dec. 2004, vol. 24, No. 6, 1747-1760.
Metheany, Kathrine G. et al., “Characterizing anatomical variability in breast CT images”, Oct. 2008, Med. Phys. 35(10): 4685-4694.
Niklason, L., et al., Digital tomosynthesis in breast imaging. Radiology. Nov. 1997; 205(2):399-406.
Oza, Nikunj C. et al. (Eds.), Dietterich, T.G., “Ensemble methods in machine learning”, Multiple Classifier Systems, Lecture Notes in Computer Science (LNCS), Springer-Verlag Berlin/Heidelberg, Jan. 1, 2005, pp. 1-15, abstract.
Pathmanathan et al., “Predicting tumour location by simulating large deformations of the breast using a 3D finite element model and nonlinear elasticity”, Medical Image Computing and Computer-Assisted Intervention, pp. 217-224, vol. 3217 (2004).
Pediconi, “Color-coded automated signal intensity-curve for detection and characterization of breast lesions: Preliminary evaluation of new software for MR-based breast imaging,” International Congress Series 1281 (2005) 1081-1086.
Poplack, SP, et al., Digital breast tomosynthesis: initial experience in 98 women with abnormal digital screening mammography. AJR Am J Roentgenology Sep. 2007 189(3):616-23.
Prionas, ND, et al., Contrast-enhanced dedicated breast CT: initial clinical experience. Radiology. Sep. 2010 256(3):714-723.
Rafferty, E. et al., “Assessing Radiologist Performance Using Combined Full-Field Digital Mammography and Breast Tomosynthesis Versus Full-Field Digital Mammography Alone: Results” presented at 2007 Radiological Society of North America meeting, Chicago IL.
Reynolds, April, “Stereotactic Breast Biopsy: A Review”, Radiologic Technology, vol. 80, No. 5, Jun. 1, 2009, pp. 447M-464M, XP055790574.
Bakic et al., “Mammogram synthesis using a 3D simulation. I. Breast tissue model and image acquisition simulation”, Medical Physics, 29, pp. 2131-2139 (2002).
Samani, A. et al., “Biomechanical 3-D Finite Element Modeling of the Human Breast Using MRI Data”, 2001, IEEE Transactions on Medical Imaging, vol. 20, No. 4, pp. 271-279.
Samulski, Maurice et al., “Optimizing case-based detection performance in a multiview CAD system for mammography”, IEEE Transactions on Medical Imaging, vol. 30, No. 4, Apr. 1, 2011, pp. 1001-1009, XP011352387, ISSN: 0278-0062, DOI: 10.1109/TMI.2011.2105886, abstract.
Sechopoulos, et al., “Glandular radiation dose in tomosynthesis of the breast using tungsten targets”, Journal of Applied Clinical Medical Physics, vol. 8, No. 4, Fall 2008, 161-171.
Schrading, Simone et al., “Digital Breast Tomosynthesis-guided Vacuum-assisted Breast Biopsy: Initial Experiences and Comparison with Prone Stereotactic Vacuum-assisted Biopsy”, Department of Diagnostic and Interventional Radiology, Univ. of Aachen, Germany, published Nov. 12, 2014, 10 pages.
Smith, A., “Full field breast tomosynthesis”, Radiol Manage. Sep.-Oct. 2005; 27(5):25-31.
Taghibakhsh, F. et al., “High dynamic range 2-TFT amplified pixel sensor architecture for digital mammography tomosynthesis”, IET Circuits Devices Syst., 2007, 1(1), pp. 87-92.
Van Schie, Guido, et al., “Generating Synthetic Mammograms from Reconstructed Tomosynthesis Volumes”, IEEE Transactions on Medical Imaging, vol. 32, No. 12, Dec. 2013, 2322-2331.
Van Schie, Guido, et al., “Mass detection in reconstructed digital breast tomosynthesis volumes with a computer-aided detection system trained on 2D mammograms”, Med. Phys. 40(4), Apr. 2013, 41902-1-41902-11.
Varjonen, Mari, “Three-Dimensional Digital Breast Tomosynthesis in the Early Diagnosis and Detection of Breast Cancer”, IWDM 2006, LNCS 4046, 152-159.
Weidner N, et al., “Tumor angiogenesis and metastasis: correlation in invasive breast carcinoma”, New England Journal of Medicine 1991; 324:1-8.
Weidner, N, “The importance of tumor angiogenesis: the evidence continues to grow”, AM J Clin Pathol. Nov. 2004 122(5):696-703.
Wen, Junhai et al., “A study on truncated cone-beam sampling strategies for 3D mammography”, 2004, IEEE, 3200-3204.
Williams, Mark B. et al., “Optimization of exposure parameters in full field digital mammography”, Medical Physics 35, 2414 (May 20, 2008); doi: 10.1118/1.2912177, pp. 2414-2423.
Wodajo, Felasfa, MD, “Now Playing: Radiology Images from Your Hospital PACS on your iPad,” Mar. 17, 2010; web site: http://www.imedicalapps.com/2010/03/now-playing-radiology-images-from-your-hospital-pacs-on-your-ipad/, accessed on Nov. 3, 2011 (3 pages).
Yin, H.M., et al., “Image Parser: a tool for finite element generation from three-dimensional medical images”, BioMedical Engineering Online. 3:31, pp. 1-9, Oct. 1, 2004.
Zhang, Yiheng et al., “A comparative study of limited-angle cone-beam reconstruction methods for breast tomosynthesis”, Med Phys., Oct. 2006, 33(10): 3781-3795.
Zhao, Bo, et al., “Imaging performance of an amorphous selenium digital mammography detector in a breast tomosynthesis system”, May 2008, Med. Phys. 35(5): 1978-1987.
Related Publications (1)
Number Date Country
20240320827 A1 Sep 2024 US
Provisional Applications (1)
Number Date Country
63283866 Nov 2021 US
Continuations (1)
Number Date Country
Parent PCT/US2022/080432 Nov 2022 WO
Child 18671250 US