Dynamic self-learning medical image method and system

Information

  • Patent Grant
  • Patent Number
    11,403,483
  • Date Filed
    Thursday, May 31, 2018
  • Date Issued
    Tuesday, August 2, 2022
Abstract
A method and system for creating a dynamic self-learning medical image network system, wherein the method includes receiving, from a first node, initial user interaction data pertaining to one or more user interactions with one or more initially obtained medical images; training a deep learning algorithm based at least in part on the initial user interaction data received from the first node; and transmitting an instance of the trained deep learning algorithm to the first node and/or to one or more additional nodes, wherein at each respective node to which the instance of the trained deep learning algorithm is transmitted, the trained deep learning algorithm is applied to respective one or more subsequently obtained medical images in order to obtain a result.
Description
FIELD

The presently disclosed inventions relate generally to medical imaging techniques such as tomosynthesis, and more specifically to systems and methods for implementing a dynamic self-learning medical image network system. In particular, the presently disclosed inventions relate to interacting with, and observing user behavior pertaining to, one or more medical images at a plurality of nodes, in order to improve performance of the dynamic self-learning medical image network system.


BACKGROUND

Medical imaging systems, e.g., tomosynthesis systems, CT scanning systems, MRI systems, mammography systems, etc., have been used for screening and diagnosing a variety of conditions. Doctors and other medical professionals often rely on medical images to diagnose various health conditions. Accurate readings of the medical images are contingent on the quality and clarity of the image, as well as the knowledge and expertise of the medical professional reviewing the image. Specifically, in order for the medical images to be helpful to medical professionals, they must clearly and accurately portray respective body parts to which the image pertains, such that the medical professional can efficiently make a prognosis with reasonable certainty. Radiologists (or other medical professionals) typically study thousands of such medical images, and are trained to detect, through time and practice, recurring patterns in the medical images that are indicative of abnormalities (or other objects of interest) in human tissue.


However, even with extensive training, the objects of interest to the medical professional can be difficult to identify within the medical image for a number of reasons. For example, the medical image may not provide sufficient clarity, or focus, such that a potential abnormality is overlooked. Or, the abnormality may be too small, or otherwise difficult to ascertain. In another example, the abnormality may not be a well-known abnormality, or one that a newly-trained medical professional has previously encountered. In some cases, human error may result in certain abnormalities being overlooked or misdiagnosed. Furthermore, the experience and knowledge of highly experienced practitioners are not easily transferred to others. As will be appreciated, such errors and omissions may have serious, and sometimes even fatal, consequences for patients. Also, existing medical imaging analysis systems, including in particular Computer Aided Detection (CAD) systems, must be frequently programmed and modeled with new information and/or updated analysis techniques, which is time consuming and resource-intensive. Thus, a medical imaging analysis system that minimizes reviewing errors and is automatically updated to provide the latest information and analysis techniques would be highly desirable.


SUMMARY

In accordance with one aspect of the disclosed inventions, a method is provided for creating and using a dynamic self-learning medical image network system. In an exemplary embodiment, the method includes receiving, from a first node, initial user interaction data pertaining to one or more user interactions with one or more initially obtained medical images; training a deep learning algorithm based at least in part on the initial user interaction data received from the first node; and transmitting an instance of the trained deep learning algorithm to the first node and/or to one or more additional nodes, wherein at each respective node to which the instance of the trained deep learning algorithm is transmitted, the trained deep learning algorithm is applied to respective one or more subsequently obtained medical images in order to obtain a result.


By way of non-limiting examples, the initial user interaction data may include at least one annotation on at least one of the one or more initially obtained medical images, a selection of one or more pixels associated with at least one of the one or more initially obtained medical images, an actual or estimated amount of time one or more users spent viewing one or more of the initially obtained medical images, an actual or estimated portion of at least one of the one or more medical images that was focused upon by at least one user, a description of a patient condition, and/or diagnostic findings that may be (without limitation) in the form of a written report or a voice dictation report.


By way of non-limiting examples, the instance of the trained deep learning algorithm may be maintained at the first node and/or one or more additional nodes, and/or may run on a server accessed through a network.


By way of non-limiting examples, the result may include recognizing one or more objects in the medical image and/or providing a recommendation pertaining to the medical image.


In an exemplary embodiment, the method further includes receiving, from the first node and/or one or more additional nodes, subsequent user interaction data pertaining to one or more subsequently obtained medical images, wherein the subsequent user interaction data is used to modify the trained deep learning algorithm. By way of non-limiting example, the subsequent user interaction data may be used to modify the trained deep learning algorithm if it is determined that the subsequent user interaction data satisfies a predetermined threshold confidence level indicating that the trained deep learning algorithm should be modified. By way of non-limiting example, modification of the trained deep learning algorithm may include adding one or more layers to, and/or changing the internal structure of the layers in, the trained deep learning algorithm.


In accordance with another aspect of the disclosed inventions, a dynamic self-learning medical image network system is provided, the system including a plurality of nodes and a central brain server, wherein the central brain server is configured to receive initial user interaction data from one or more nodes of the plurality, wherein the initial user interaction data pertains to one or more user interactions with one or more initially obtained medical images, train a deep learning algorithm based at least in part on the initial user interaction data received from the one or more nodes, and transmit an instance of the trained deep learning algorithm to each node of the plurality, and wherein each node of the plurality is configured to apply the instance of the trained deep learning algorithm to one or more subsequently obtained medical images in order to obtain a result.


In an exemplary embodiment, each node of the plurality is configured to maintain an instance of the trained deep learning algorithm.


By way of non-limiting examples, the initial user interaction data received by the central brain server may include at least one annotation on at least one of the one or more initially obtained medical images, a selection of one or more pixels associated with at least one of the one or more initially obtained medical images, an actual or estimated amount of time one or more users spent viewing one or more of the initially obtained medical images, an actual or estimated portion of at least one of the one or more medical images that was focused upon by at least one user at one of the nodes, a description of a patient condition, and/or diagnostic findings that may be (without limitation) in the form of a written report or a voice dictation report.


By way of non-limiting examples, the result may include recognizing one or more objects in the medical image and/or providing a recommendation pertaining to the medical image.


In an exemplary embodiment, the central brain server is configured to receive subsequent user interaction data from one or more nodes of the plurality pertaining to one or more subsequently obtained medical images, and to modify the trained deep learning algorithm if the subsequent user interaction data satisfies a predetermined threshold confidence level indicating that the trained deep learning algorithm should be modified. By way of non-limiting example, the central brain server may modify the trained deep learning algorithm by adding one or more layers to the trained deep learning algorithm.


These and other aspects and embodiments of the disclosed inventions are described in more detail below, in conjunction with the accompanying figures.





BRIEF DESCRIPTION OF THE FIGURES

The drawings illustrate the design and utility of embodiments of the disclosed inventions, in which similar elements are referred to by common reference numerals. These drawings are not necessarily drawn to scale. In order to better appreciate how the above-recited and other advantages and objects are obtained, a more particular description of the embodiments will be rendered, which are illustrated in the accompanying drawings. These drawings depict only typical embodiments of the disclosed inventions and are therefore not to be considered limiting of their scope.



FIG. 1 is a block diagram illustrating the dynamic self-learning medical image network system constructed in accordance with embodiments of the disclosed inventions;



FIG. 2 is a sequence diagram illustrating the flow of information between a user and a central brain network constructed in accordance with embodiments of the disclosed inventions;



FIG. 3 illustrates one embodiment of recording user interactions in a dynamic self-learning medical image network system constructed in accordance with embodiments of the disclosed inventions;



FIGS. 4A and 4B illustrate an exemplary flow diagram depicting various steps to modify (and thereby improve) the dynamic self-learning medical image network system over time; and



FIGS. 5A to 5H illustrate an exemplary process flow in accordance with embodiments of the disclosed inventions.





DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS

All numeric values are herein assumed to be modified by the terms “about” or “approximately,” whether or not explicitly indicated, wherein the terms “about” and “approximately” generally refer to a range of numbers that one of skill in the art would consider equivalent to the recited value (i.e., having the same function or result). In some instances, the terms “about” and “approximately” may include numbers that are rounded to the nearest significant figure. The recitation of numerical ranges by endpoints includes all numbers within that range (e.g., 1 to 5 includes 1, 1.5, 2, 2.75, 3, 3.80, 4, and 5).


As used in this specification and the appended claims, the singular forms “a”, “an”, and “the” include plural referents unless the content clearly dictates otherwise. As used in this specification and the appended claims, the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise. In describing the depicted embodiments of the disclosed inventions illustrated in the accompanying figures, specific terminology is employed in this patent specification for the sake of clarity and ease of description. However, the disclosure of this specification is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner. It is to be further understood that the various elements and/or features of different illustrative embodiments may be combined with each other and/or substituted for each other wherever possible within the scope of this specification, including without limitation the accompanying figures and the appended claims.


Various embodiments of the disclosed inventions are described hereinafter with reference to the figures. It should be noted that the figures are not drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. It should also be noted that the figures are only intended to facilitate the description of the embodiments. They are not intended as an exhaustive description of the invention or as a limitation on the scope of the disclosed inventions, which is defined only by the appended claims and their equivalents. In addition, an illustrated embodiment of the disclosed inventions need not have all the aspects or advantages shown. For example, an aspect or an advantage described in conjunction with a particular embodiment of the disclosed inventions is not necessarily limited to that embodiment and can be practiced in any other embodiments even if not so illustrated.


Introduction


This patent specification and the accompanying figures describe and illustrate a dynamic self-learning medical image network system that utilizes deep learning techniques to observe user interactions with medical images at a plurality of nodes. These user interactions are compiled, analyzed and optimized using a central brain network that advantageously trains one or more deep learning algorithms/networks to continuously improve readings of medical images over time. As will be discussed throughout the specification, the dynamic self-learning medical image network system advantageously leverages expertise gained from users who are highly trained in reading medical images and encodes this information into deep learning algorithms, such that the system is constantly learning and improving analysis of medical images over time, in an effort to ultimately emulate the skill and intuition of a human expert.


Accurate reading of medical images, such as without limitation MRI scans, CT scans, tomosynthesis slices, X-rays, etc., is often contingent on the skill and technical expertise of medical personnel evaluating them. This expertise is gained through time, practice and intuition that is developed as a result of viewing a very large number of medical images. However, since there is a wide range in the respective skill, efficiency and expertise of the medical personnel charged with reading the images, the quality and accuracy of image-based screening and diagnosis can vary greatly. Further, even in the case of highly skilled medical professionals (e.g., radiologists), human error sometimes causes missed diagnoses, inaccurate readings and/or false positives. Of course, such errors, although easy to make, adversely affect the quality of healthcare delivered to patients, and can cause great stress and anxiety to everyone involved.


Although there are many medical image analysis software (CAD) programs available in the market today, they are static modeling programs that have to be programmed and trained prior to implementation, and as such can quickly become outdated. Re-programming to update the system is costly and time intensive, and still tends to lag behind research or new findings, even if routinely performed.


Advantages of the Dynamic Self-Learning Medical Image Network


One approach to globally improve the quality of medical image screening and diagnosis is to aid the reader of the medical images (also referred to herein as the system “user”) by implementing a dynamic self-learning medical imaging network system that interacts with a large number of users (e.g., expert radiologists), analyzes many types of medical data, e.g., medical images, patient information data, patient medical records, etc., and automatically learns to interpret medical images and data to identify patterns (which may be image patterns or non-image patterns) that are symptomatic of abnormalities.


A dynamic self-learning medical image network system may be defined as a system that is continuously updating and/or adding layers to one or more deep neural networks, without necessarily requiring manual re-programming. In other words, rather than relying (and waiting) on presumed (if not actual) experts that understand trends in radiology or image analysis to create a new static program or update an existing one, the dynamic self-learning system is continually learning from the actual (and truly expert) users, by analyzing the user interactions with the system and periodically adding layers to the deep neural networks based on this analysis. This approach has several advantages. First, by studying user interactions with a very large number of medical images, the accuracy of the system in detecting such patterns improves not only over time, but also contemporaneously with its users. For example, if users are beginning to identify a previously unknown mass as being an abnormality, the dynamic self-learning medical image network system obtains this information in real-time, and is able to start identifying such abnormalities on its own, without necessarily being re-programmed by a system administrator. Additionally, the system learns patterns not just from a limited training dataset, but from a very large, and indeed ever-growing dataset. For example, thousands of radiologists may mark a particular type of breast mass as a spiculated mass (e.g., a type of breast cancer lesion). Having digital data of these diagnoses allows the system to study the patterns of the images that have been marked by the users as constituting a spiculated mass. Thus, the dynamic self-learning system described and depicted herein may strategically leverage the expertise of tens (or hundreds) of thousands of users in real-time to develop a highly accurate image recognition system.


Further, the dynamic self-learning medical image network system described and depicted herein allows users to rely on the system (and other users) in reaching a diagnosis. For example, a particular doctor (or group of doctors) may have special expertise when dealing with a rare form of abnormality. Or the abnormality may have only been recently detected by a small group of users in a particular part of the world. By leveraging knowledge that may only be available in one local community, the dynamic self-learning system may assist users in other communities worldwide by automatically detecting the heretofore unknown or little-known condition. Thus, information may spread far more swiftly with such an integrated image recognition system that is connected to users having varying skills, expertise, geography and patient type.


Moreover, by tracking interactions of a large number of users, the dynamic self-learning system may learn that certain users are especially skilled or knowledgeable, e.g., based on the rate of accurate screenings, and interactions with such users may be weighted higher than those of the average user of the medical image system. Conversely, if it is determined that certain users are less skilled, the dynamic self-learning system may avoid or minimize learning from such user interactions, but may instead assist such users with information gained from users that have greater expertise. Thus, the dynamic self-learning medical image network system may actively assist users in improving readings of medical images.


The dynamic self-learning medical image network system may be trained to observe a set of user interactions pertaining to one or more medical images, and pool data received from a plurality of users in order to learn aspects pertaining to medical image and user interface interaction. For example, one learning aspect may relate to image analysis itself, and the dynamic self-learning medical image network system may observe user behavior related to medical images to detect recurring patterns in medical images that are indicative of abnormalities. Another example of a learning aspect may relate to user experience, and how to improve a set of task flows such that a user is provided an optimal amount of information to quickly and efficiently reach a prognosis. Yet another example of a learning aspect may relate to learning medical history associated with a patient and presenting that information to the medical professional to provide a more comprehensive diagnosis.


Although the present specification focuses on learning aspects related to image analysis and diagnosis, it should be appreciated and understood that the dynamic self-learning medical image network system may observe a myriad of user interactions to improve various aspects of the system. By constantly learning through user interactions with medical images in many locations, the dynamic self-learning medical image network system detects (or aids medical professionals in detecting) abnormalities, thereby increasing the efficiency and accuracy of screening and diagnoses performed using medical images. Ultimately, by learning details pertaining to highly skilled user decisions regarding medical images (also referred to herein as “medical image process flow”), the dynamic self-learning medical image network system becomes increasingly accurate and reliable over time, such that it may attempt to emulate the skill (or indeed even the subconscious intuition) of such highly skilled users.


The present specification focuses on implementing a dynamic self-learning medical image network system using deep machine learning algorithms (e.g., neural networks) to learn from users and data. It is envisioned that the system may learn independently with little need for manual programming or programmer input. In particular, in recent years, there have been major improvements in the field of machine learning using deep learning systems to recognize images and to understand natural languages. In many applications, the machine learning algorithm can be trained to learn to perform tasks at similar performance levels as a human. By building upon such machine learning algorithms, an “expert-level” self-learning medical image network system may be created that can be used to extract patterns and detect trends that may be too complex to be detected through traditional computer technologies but easily detected by an expert human user. Thus, the dynamic self-learning medical image network system may become an “expert” assistant to the medical professional reading the medical image. Other suitable means to achieve such complex learning may be similarly implemented without limitation.


System Overview (Including Interaction Between Various Nodes and the Central Brain)



FIG. 1 illustrates an overview of the dynamic self-learning medical image network system 100, which incorporates image generation, image analysis and network technology. It should be understood that while FIG. 1 illustrates a particular embodiment with certain processes taking place in a particular serial order or in parallel, the claims and various other embodiments described herein are not limited to any particular order, unless so specified. More particularly, the dynamic self-learning medical image network system 100 includes a plurality of nodes 102 (e.g., 102a, 102b, 102c . . . 102n) that interact with a central brain network 104. In one or more embodiments, each node 102 refers to a computing system that may or may not interact with a user. As shown in FIG. 1, nodes 102a and 102b interact with users, but node 102c does not. In some embodiments, a node 102 may refer to a point of contact between the dynamic self-learning medical image network system 100 and a user 110 (e.g., 110a, 110b . . . 110n). The nodes 102 may be any type of computing device including a processor and/or display system, e.g., a personal computer, specialized imaging system, smartphone, tablet, image acquisition device (e.g., an MRI, CT, or tomosynthesis system), image review workstation, virtual reality device, desktop computer, web portal, etc. In some embodiments, the respective nodes may each be some other type of machine-human user interface.


Each node 102 may be implemented on a picture archiving and communications system (PACS). For example, a respective node 102 may be a dedicated medical image viewing workstation allowing users 110 to perform a variety of specialized image-related tasks. The nodes 102 may include one or more network interfaces for communicating with other devices through a network. The nodes 102 may include other input/output devices that enable user interaction with the node, such as a display, keyboard, mouse, audio speakers, and the like. It should be appreciated that the node may function with or without the user. For example, in some embodiments, the node 102 may be an intelligent workstation that mimics or attempts to make decisions like a human.


In some embodiments, a particular node 102 may represent an algorithm or data server that may be running data mining algorithms. In other embodiments, the node 102 may be a computing device that gathers data. In some embodiments, the node 102 may be a PACS machine that gathers images. In still other embodiments, the node 102 may simply be software that is running on a hospital computer system. Thus, it should be appreciated that not all nodes 102 necessarily interact with users, and some nodes 102 may simply gather data, or learn from data itself while other nodes 102 also provide and receive user interaction.


A user 110 accessing the dynamic self-learning medical image network system 100 is typically a medical professional (e.g., general doctor, radiologist, medical technician), but it should be appreciated that the dynamic self-learning medical image network system 100 is capable of interacting with any user (e.g., non-medical professionals, patients, etc.), or with no user at all. For purposes of illustration, the remainder of this specification focuses on medical professional users, but this should not be understood as limiting the applicability of the dynamic self-learning medical image network system.


The central brain network 104 may be any type of network known in the art, such as the Internet, or any cloud computing network. In one or more embodiments, the nodes 102 may communicatively couple to the central brain network in any manner, such as by a global or local wired or wireless connection (e.g., LAN, WAN, intranet, etc.). In one or more embodiments, the central brain network 104 may be communicatively coupled to one or more servers 150 or other machines which may include one or more processing units and/or computer readable media. In one or more embodiments, the central brain network 104 may reside (and be maintained) on one or more physical computing devices, or it may reside on a virtual cloud computing network. In its simplest form, it can be a very powerful central computing device communicatively coupled to a plurality of nodes. In a more complex form, the central brain network 104 may take a distributed form and reside over a number of physical or virtual computing devices.


More particularly, the server(s) 150 may house and/or host a plurality of computing components that together process data received from the plurality of nodes 102, store data, and provide outputs that are sent back to the nodes 102. As will be discussed in further detail below, the data may pertain to medical images being viewed and interacted with at the plurality of nodes 102. This data may be processed, analyzed, stored and updated through the various computing components of the server 150, and updated data may be sent back to the nodes 102 through the central brain network 104. In one or more embodiments, the server 150 may be a single powerful server. In another embodiment, a distributed system having multiple servers performing sub-sections of the computing tasks is envisioned. The server(s) 150 may be located in one geographical location, or may be located at different locations throughout the world. In one or more embodiments, an instance of the server 150 is operable to run at the node 102, such that an instance of the dynamic self-learning medical image network system runs on the node itself. The server(s) 150 may refer to local servers or remote servers.


In one or more embodiments, the server(s) 150 include one or more database(s) 106 that store all or a portion of the data related to the dynamic self-learning medical image network system 100. The database(s) 106 may be the central data store providing long-term storage for all data, or it may be a limited-purpose data store for a specific area. The database(s) 106 make data accessible to the central brain network 104. The server(s) 150 may include computing components that are operable to retrieve data from the one or more databases and supply it to the central brain network 104 through a server-network interface. Although depicted as a single database 106 in FIG. 1, it should be appreciated that any number of local or remote databases may be part of the dynamic self-learning medical image network system 100. In one or more embodiments, the database 106 may store image acquisition data 154 that may be displayed to the users 110 at the various nodes 102. The database 106 may also store image analysis data 156, or data related to analysis of the various medical images. In one or more embodiments, training data 158 may also be used to train the dynamic self-learning medical image network system 100.


Medical images typically refer to digital representations of one or more objects (e.g., parts or portions of a patient's body, such as breasts). The digital representations may be modified or manipulated in order to identify or enhance certain features of the image. Such manipulations are virtual manipulations accomplished through the various computing components of the server(s) 150.


The analysis data may originate from the users or may be computer-generated analysis data. The database 106 may also store a set of user interaction data 152. The user interaction data 152 may be any data collected from the plurality of users 110. For example, the user interaction data 152 may include detected patterns indicative of known abnormalities, and may also contain feature values (e.g., coordinates, grayscale values, contrast values, etc.) related to abnormalities (e.g., cysts, tumors, abnormal masses, spiculated masses, calcifications, etc.). In one or more embodiments, the database(s) 106 include a constantly updated/modified learning library that improves over time based on the collected user interaction data. In one or more embodiments, the learning library may store a set of rules and/or models that may be used by the server(s) for image analysis.


In one or more embodiments, the server(s) 150 include one or more algorithms 112 that ingest a set of data pertaining to user interaction with a plurality of images, and create data models that may be used to detect patterns indicative of abnormalities in the medical images. The algorithms 112 may relate to image analysis, image display, or any other processes related to data that is present and interacted with at the nodes 102. Although the present specification focuses on image analysis algorithms, it should be appreciated that any type of algorithm 112 may be created and stored.


In one or more embodiments, the server(s) 150 include one or more deep learning algorithms or deep neural networks 114 that are trained on medical image data to learn complex image patterns and detect anatomical landmarks. A deep learning algorithm may refer to a deep neural network comprising various layers of information. Deep learning algorithms 114 contain multiple layers of learned features and/or variables between the input data and the output data. Deep learning algorithms 114 or deep neural networks may be implemented with many layers that are built on top of each other, such that complex deep learning algorithms comprise several deep layers, e.g., tens, hundreds, or even thousands of layers, that are continuously added as the system learns more information. A deep neural network may be differentiated from typical neural networks, which tend to be “shallow” neural networks comprising only a few layers, e.g., only three or four layers. Thus, deep neural networks tend to be far more complex than shallow neural networks.
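
To make the deep-versus-shallow distinction concrete, the following is a minimal, non-limiting sketch (not the patented implementation) of a deep convolutional classifier with many stacked layers alongside a shallow network, assuming PyTorch and a single-channel (grayscale) medical image input; the class name, layer sizes, and depth are illustrative assumptions only.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """One convolution / normalization / activation / pooling stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class DeepLesionClassifier(nn.Module):
    """A 'deep' network: many stacked feature layers between input and output."""
    def __init__(self, num_classes: int = 2, depth: int = 6):
        super().__init__()
        channels = [1] + [32 * 2 ** min(i, 3) for i in range(depth)]
        self.features = nn.Sequential(
            *[conv_block(channels[i], channels[i + 1]) for i in range(depth)]
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels[-1], num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# A "shallow" network, by contrast, has only a few layers:
shallow = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)
```

In a sketch like this, the depth parameter is fixed at construction; the paragraphs that follow describe how such depth could instead grow as the system accumulates user interaction data.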


The deep learning algorithms 114 may be trained to detect patterns or a localization (e.g., pixel or voxel coordinates) in a medical image. In one or more embodiments, deep learning algorithms may be trained based on a plurality of training images. For example, the deep learning algorithms 114 may be trained using the user interaction data stored in the database(s) 106. The training images may be 2D or 3D medical images acquired through any type of medical image modality (e.g., tomosynthesis, mammography, CT, MRI, ultrasound, etc.). It should be appreciated that at least a subset of the training images may be annotated with the location of the respective anatomical object or landmark. For example, the user interaction data stored in the database(s) 106 may contain annotations (collected from the plurality of users 110) in the medical images identifying a desired object (e.g., type of mass, abnormality, type of tissue, etc.).


In some embodiments, the training images may be non-annotated but may tag objects in some other fashion. The deep learning algorithms 114 adapt and modify over time so as to improve their accuracy, efficiency and/or other performance criteria with a larger and larger number of user interaction data, thereby detecting and localizing desired anatomical landmarks or objects in the medical images with greater precision. Although there may be many possible implementations of deep learning algorithms to recognize abnormalities in medical images, one possible implementation approach includes one or more deep learning algorithms that calculate a probability that a targeted anatomical object is located at a particular pixel or voxel. In another possible implementation, the deep learning algorithms may calculate a difference vector from a particular voxel to a predicted location of the target object.


Thus, deep learning algorithms 114 may be utilized in one of many possible implementations and be trained, using user interaction data, to detect target objects or abnormalities in the medical images. By collecting vast amounts of user interaction data, the deep learning algorithms may be trained to become increasingly precise over time. It should be appreciated that the deep neural networks or deep learning algorithms are updated dynamically, in contrast to static neural networks that are used to provide results based on a pre-programmed algorithm. Static neural networks do not adapt to new information, whereas the deep neural network system described and depicted herein “learns” from various user interactions, and updates, modifies and/or adds layers to the deep learning neural network(s). The term “dynamic,” in this context, refers to a deep-learning system that is continually updated or modified automatically without specific need for re-programming. In particular, the deep neural network system preferably automatically updates the respective deep neural networks by adding one or more layers and/or changing the structure of one or more existing layers, once it is understood by the system that additional complexity pertaining to a pattern is required or otherwise useful. This is an important distinction from existing deep neural network algorithms, which (once trained) merely modify the respective layer weighting parameters without otherwise changing or adding layers.


There may be many ways in which to add layers to the deep neural network and/or update the deep neural network contemporaneously while a plurality of users interacts with the system. For example, the system might pool together data from a large number of users to determine a threshold level of confidence before adding a layer of complexity to the existing deep learning algorithm. The threshold levels may be predetermined, in one or more embodiments. Or, in another embodiment, the threshold level may refer to a particular number of users corroborating a particular detail. In yet another embodiment, programmer input may be requested prior to adding another layer. Once a particular confidence level is achieved, one or more layers may be added, or the neural network may be modified to conform to the newly “learned” data.
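
A minimal sketch of this thresholded layer-addition logic follows, under the assumption that a pooled confidence value has already been computed from corroborating user interactions; the 0.98 threshold and the appended layer type are illustrative, and a grown network would still be re-trained before redeployment.

```python
import torch.nn as nn

ADD_LAYER_CONFIDENCE = 0.98  # illustrative predetermined threshold

def maybe_grow(feature_extractor: nn.Sequential, pooled_confidence: float) -> nn.Sequential:
    """Append one layer of complexity once the pooled confidence is high enough."""
    if pooled_confidence < ADD_LAYER_CONFIDENCE:
        return feature_extractor  # keep the existing deep learning algorithm
    extra = nn.Sequential(
        nn.LazyConv2d(64, kernel_size=3, padding=1),  # infers in_channels at first use
        nn.ReLU(inplace=True),
    )
    # Build a new Sequential so the original instance is left untouched.
    return nn.Sequential(*feature_extractor, extra)
```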


Example Implementation of Dynamic Self-Learning Medical Image Network System


Referring now to FIG. 2, a system diagram showing an example sequence of interaction with the various components of the dynamic self-learning medical image network system is illustrated. As discussed in detail with reference to FIG. 1, at 210, a set of image display data may be sent to the node 102a through the central brain network 104. In one or more embodiments, an instance of an initial deep learning algorithm/network may also be sent to the node to perform image analysis. The initial deep learning algorithm may be trained at the central brain network using existing data, and known images, or training data. An instance of the initial deep learning algorithm may be pushed to one or more nodes of the dynamic self-learning image network system. In one or more embodiments, the image display data may be medical images (e.g., tomosynthesis image slices of a patient's breast tissue). At 212, the user 110a (if the node interacts with a user) interacts with the node 102a, and is able to view the image display data. In one or more embodiments, the initial deep learning algorithm may be run on the medical images, and one or more results may be provided to the user 110a.
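
One non-limiting way to realize the "instance of the initial deep learning algorithm pushed to one or more nodes" step, assuming PyTorch-style serialization (file names here are hypothetical): the central brain exports the trained weights, and each node reloads them into the same architecture before applying them to newly obtained images.

```python
import torch

def export_instance(model: torch.nn.Module, path: str = "trained_instance.pt") -> None:
    """Central brain side: serialize the trained network weights."""
    torch.save(model.state_dict(), path)

def load_instance(model: torch.nn.Module, path: str = "trained_instance.pt") -> torch.nn.Module:
    """Node side: restore the pushed instance and switch to inference mode."""
    model.load_state_dict(torch.load(path, map_location="cpu"))
    model.eval()
    return model
```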


At 214, the user may interact with the medical image (e.g., annotate the medical image, zoom into particular aspects of the medical image, focus on a particular slice if there are multiple slices, etc.) in one or more ways. For example, in viewing a particular tomosynthesis slice, the user may mark a particular set of pixels of the image and annotate that part of the image as indicative of a spiculated mass lesion. This user interaction may be recorded at the node 102a. The various types of possible user interactions will be discussed further below. Or, the user 110a may concur with or reject the analysis provided by the deep learning algorithm. User interactions are collected in addition to the provided analysis such that the dynamic self-learning image network system is constantly collecting user interaction information even if a current instance of the (initially trained) deep-learning algorithm is run on one or more medical images.


At 216, the user interaction data recorded at the node 102a (e.g., annotations, marked pixels, audio and/or video recordings, etc.) may be sent to the central brain network 104. At 218, the user interaction data (input) is received at the server 150 through the central brain network 104. In one or more embodiments, the user interaction data may be used as additional training data on the deep learning algorithms 114. As discussed above, the deep learning algorithms 114 may consume the user interaction data to learn patterns or features associated with a spiculated mass as highlighted by the user. This interaction (along with other user interaction data collected from all the other nodes of the dynamic self-learning medical image network system) allows the deep learning algorithms 114 to automatically recognize spiculated masses (and other abnormalities learned by the deep learning algorithms 114) based on digital information associated with the medical image. The modified (improved) deep learning algorithm information may be stored at one or more databases at the server 150.


In practice, not all new user interaction data will be used (or be useful) to modify (with the goal of improving) an existing deep learning algorithm. The example discussed above is simplified for illustrative purposes. Rather, the newly collected user interaction data may be used to run one or more data-mining/unsupervised learning algorithms to form a new understanding of the complexity of the new data. Once this complexity reaches a certain threshold level, more layers may be added to the deep learning algorithm to create an improved/updated deep learning algorithm that is more complex and contains insights from more data. The improved/updated deep learning algorithm may be further trained on more preliminary/training data before it is pushed back to various nodes of the dynamic self-learning medical image network system.
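
A hedged sketch of this data-mining/unsupervised step, assuming scikit-learn is available: feature vectors derived from newly pooled interactions are clustered, and a well-separated new cluster is treated as a signal that the current network may need added capacity. The feature construction, cluster count, and 0.6 silhouette threshold are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def new_pattern_detected(interaction_features: np.ndarray, k: int = 3) -> bool:
    """interaction_features: (n_samples, n_features) array built from
    user-marked regions, annotations, viewing behavior, etc."""
    if len(interaction_features) < k * 10:
        return False  # not enough pooled data to judge
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(interaction_features)
    # Well-separated clusters suggest structure the current model may not yet encode.
    return silhouette_score(interaction_features, labels) > 0.6
```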


At 220, the improved/updated deep learning algorithms are communicated to the various nodes through the central brain network 104. At 222, an instance (or partial instance) of the improved/updated trained deep learning algorithms may be transmitted to any node (e.g., 102b), which may then be used to provide feedback and/or provide image analysis at the node 102b. The improved/updated trained deep learning algorithms may run on the node 102b in order to automatically recognize spiculated masses (or other abnormalities) found in other medical images residing at the node 102b. For example, this improved/updated deep learning algorithm information may be used on another medical image viewed by a user at node 102b. The node 102b (leveraging the improved deep learning algorithms) may automatically mark portions of the other medical image if the system determines that a particular area of the medical image contains a spiculated mass object. This information may be displayed to the user at node 102b, wherein the user may confirm or reject the automatically detected object. This interaction may also be recorded and sent back to the server 150 through the central brain network 104 to further improve the deep learning algorithms 114. Thus, it is envisioned that the deep learning algorithms become increasingly skilled at recognizing objects found in the medical images over time.
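
The node-side loop at 222 might look like the following non-limiting sketch, in which the pushed model instance proposes detections, the user confirms or rejects them, and the resulting interaction record is posted back toward the central brain; `central_brain_url`, the JSON payload shape, and the 0.5 detection threshold are hypothetical.

```python
import json
import urllib.request
import torch

def review_image(model: torch.nn.Module, image: torch.Tensor, user_confirms) -> dict:
    """Apply the current model instance and capture the user's confirm/reject decision."""
    model.eval()
    with torch.no_grad():
        prob_map = model(image.unsqueeze(0))[0, 0]    # assumes a (1, 1, H, W) probability output
    detections = (prob_map > 0.5).nonzero().tolist()  # candidate pixel coordinates
    return {
        "detections": detections,
        "confirmed": bool(user_confirms(detections)),  # user confirms or rejects the objects
    }

def send_feedback(feedback: dict, central_brain_url: str) -> None:
    """Transmit the recorded interaction back toward the central brain network."""
    req = urllib.request.Request(
        central_brain_url,
        data=json.dumps(feedback).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```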


As discussed above, user interaction data collected at the various nodes 102 of the dynamic self-learning medical image network system 100 is continuously used, at regular intervals, to improve the deep learning algorithms 114. Thus, the current invention(s) describe a dynamic self-learning medical image network system that is constantly learning in real-time as it is being deployed at various nodes. In contrast to static neural networks that have to be manually programmed or re-programmed periodically, the dynamic self-learning medical image network system is continuously adding more layers to (or otherwise modifying) the deep learning algorithms without necessitating reprogramming to create a new neural network. When new data is learned that adds to the complexity of the existing deep learning algorithms, new layers are automatically added to the deep learning algorithm, and pushed to the various nodes.


Referring now to FIG. 3, an overview of how various types of user interactions with medical images are utilized in the dynamic self-learning medical image network system is illustrated. As shown in FIG. 3, the user 110a at the node 102a may be presented with a series of medical images 302. For example, the series of medical images 302 may be tomosynthesis slices representative of a patient's breast tissue. The user 110a may interact with the series of images 302 in a number of ways. For example, the user 110a may zoom in to view particular tomosynthesis slices. Also, the user 110a may concentrate on just a subset of the image slices, while ignoring the rest. Additionally, the user 110a may expressly mark a portion of the digital image, and annotate it. Furthermore, the user 110a may create a video or audio recording of the user's diagnosis.


In one or more embodiments, the user 110a may immediately identify one or more of the image slices 302 to focus on. For example, the user 110a may spend most of the time focused on image slice x. The dynamic self-learning medical image network system 100 may record this interaction to determine whether there are any patterns in which image slice(s) provide the most valuable information. In one or more embodiments, the dynamic self-learning medical image network system 100 may track the actual time and/or an estimate of an amount of time spent on a particular image or images (slice or slices). This information may be coupled with other collected user interaction data to learn what parts of an image deck are most important when analyzing images.
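
For illustration only, a record of this kind of viewing behavior could be captured in a structure like the following (field names are assumptions, not the patent's schema); one such event might be logged each time a reader dwells on, zooms into, or annotates a slice.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SliceInteraction:
    study_id: str
    slice_index: int                      # which slice of the image deck
    seconds_viewed: float                 # actual or estimated dwell time
    zoom_region: Optional[tuple] = None   # (x0, y0, x1, y1) pixels zoomed into
    annotation: Optional[str] = None      # e.g. "spiculated mass"
    marked_pixels: list = field(default_factory=list)

# Example: the reader spent most time on slice 23 and marked a suspicious region.
event = SliceInteraction("study-001", 23, 41.5, zoom_region=(310, 220, 420, 330),
                         annotation="calcification", marked_pixels=[(355, 270)])
```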


In another example, the dynamic self-learning medical image network system may ask the user 110a to mark or otherwise annotate a particular image with information regarding the medical image. In one possible implementation, a set of users may be selected to train the dynamic self-learning system. These users may be asked to annotate various medical images with many types of abnormalities. The system may thereafter pool images belonging to or associated with a type of abnormality, and then identify (“learn”) patterns emerging from a relatively large image dataset.


Other types of user interactions that may be recorded include pixels of the medical image that may be highlighted or zoomed by the user. The dynamic self-learning medical image network system may record the time spent on an image or set of images. Similarly, any number of such interactions may be received and recorded.


Any or all of these user interactions 304 may be sent to the server 150 through the central brain network 104, and further stored in the learning database 106. The learning database 106 may comprise user interaction data 152 received from thousands or millions of nodes. As shown in FIG. 3, the user interaction data 152 may comprise pixels highlighted by the user, areas (e.g., defined by pixels) of an image that were focused upon (e.g., “zoomed in” on) by the user, actual or estimated time spent on image portions, feedback on images, annotations, or any other type of user interaction. Similarly, other types of user interactions (e.g., 154, 156 and 158) may be similarly stored, although omitted for simplicity in FIG. 3.
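
Purely as a sketch of how the learning database 106 could be laid out, here is a minimal relational schema using Python's built-in sqlite3; the actual database may be any local or remote store, and the table and column names are assumptions.

```python
import sqlite3

conn = sqlite3.connect("learning_db.sqlite")
conn.executescript("""
CREATE TABLE IF NOT EXISTS user_interactions (
    id             INTEGER PRIMARY KEY,
    node_id        TEXT,
    user_id        TEXT,
    image_id       TEXT,
    slice_index    INTEGER,
    seconds_viewed REAL,
    zoom_region    TEXT,     -- e.g. "310,220,420,330"
    annotation     TEXT,     -- e.g. "spiculated mass"
    marked_pixels  TEXT      -- serialized pixel list
);
CREATE TABLE IF NOT EXISTS known_patterns (
    id           INTEGER PRIMARY KEY,
    label        TEXT,       -- e.g. "calcification"
    feature_blob BLOB        -- learned pattern / model fragment
);
""")
conn.commit()
```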


Furthermore, the learning database 106 may store information related to known digital patterns indicative of objects in the image. For example, the learning database 106 may store patterns indicative of various abnormal (and normal) objects found in the breast. The database 106 may also store a set of basic training data (e.g., known abnormalities, etc.) to be used to train the deep learning algorithms. Of course, by utilizing more and more training data, the deep learning algorithms become more accurate over time. By pooling together medical image data (image or non-image based data), deep learning algorithms and other types of machine learning algorithms may be utilized to learn data patterns that can be used not only to detect and localize abnormalities, but also to understand normal variations among different individuals and populations.



FIGS. 4A and 4B depict a flow diagram 400 illustrating an exemplary process that may be performed in order to implement the dynamic self-learning medical image network system. At step 402, a training image dataset may be collected. For example, this initial image dataset may include a set of annotated medical images collected from a set of user experts. At step 404, this image dataset may be used to train a preliminary deep learning algorithm. At step 406, when a user 110a is viewing a new medical image, this preliminary deep learning algorithm may be applied. At step 408, an image analysis (indicating any detected abnormalities) may be provided at the node 102a. At step 410, user interaction data is received. For example, the user 110a may agree with the image analysis provided by the dynamic self-learning medical image network system, and indicate as much.


At step 412, the user interaction data may be sent from the node 102a to the server 150 through the central brain network 104. At step 414, the new user interaction data may be pooled with other user interaction data related to a particular part of the deep learning algorithm. For example, the user interaction may pertain to a particular type of object being displayed at various nodes to various users. For instance, a particular feature of an abnormal object may be specified through the user interaction. Each of these user interactions regarding the feature may be compiled together to determine whether users are identifying a particular feature, or else classifying it in a manner that is not presently encoded in the existing deep neural network or algorithm.


At step 416, it may be determined whether the pooled user interactions satisfy a threshold level such that a modification to the preliminary deep learning algorithm is required. For example, the deep learning algorithm may only be modified, e.g., by adding a layer to the deep neural network, changing a value, etc., if a predetermined threshold level is met. For example, to add a layer to the deep neural network, a confidence level, e.g., based on the number of users providing the particular input, the weight given to those users, etc., of 98% may need to be met. Or, in another example, to change an existing value, a confidence level of 99.9% may need to be met.
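
The step-416 determination could be sketched as the weighted pooling and two-threshold decision below; the 98% and 99.9% figures mirror the examples given in the text, while the weighting scheme and function names are assumptions.

```python
def pooled_weighted_confidence(agreements: list[bool], user_weights: list[float]) -> float:
    """Fraction of (skill-weighted) users corroborating the proposed change."""
    total = sum(user_weights)
    agree = sum(w for a, w in zip(agreements, user_weights) if a)
    return agree / total if total else 0.0

def decide_modification(agreements: list[bool], user_weights: list[float]) -> str:
    confidence = pooled_weighted_confidence(agreements, user_weights)
    if confidence >= 0.999:
        return "change_existing_value"      # highest bar: alter an existing value
    if confidence >= 0.98:
        return "add_layer"                  # grow the deep neural network
    return "keep_preliminary_algorithm"     # step 418: continue with the current model
```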


At step 418, if the predetermined threshold level is not met, the system continues to use the preliminary deep learning algorithm. If, however, it is determined at step 416 that the threshold level is met, the deep learning algorithm may be modified (e.g., a new layer may be added). At step 422, the modified deep learning algorithm may be pushed to various nodes, and the modified deep learning algorithm may be applied to various images at the nodes (step 424).


It should be appreciated that in some embodiments, when the neural network is modified, it may need to be trained at the server with training data prior to pushing the algorithm to the various nodes. Thus, the above example is provided for illustrative purposes and should not be read as limiting.


Referring now to FIGS. 5A-5H an exemplary process flow diagram illustrating various steps in implementing the dynamic self-learning medical image network system is shown. It should be appreciated that the following scenario focuses on user-based interactions at the nodes, but other embodiments may entail simply collecting data from the node without any user interaction at all. Thus, again, the following described process is provided for purposes of illustration, and should not be read as limiting.


Referring to FIG. 5A, a training user 510a may interact with one or more medical image slices at a node 502a. In one or more embodiments, the training user may be a user that is chosen to train the dynamic self-learning medical image network system. In other embodiments, the training user may be a volunteer. In yet another embodiment, there may be no distinction between training users and regular users, and all user interactions with the dynamic self-learning medical image network system may be given the same weight. As shown in FIG. 5A, the training user 510a may interact with the medical image by selecting a group of pixels (504a). For example, the selected group of pixels may represent an area of the medical image containing one or more abnormalities (or other objects of interest).


Referring now to FIG. 5B, the training user 510a may further annotate the one or more medical images (504b). In the illustrated embodiment, the user 510a may annotate the selected portion of the medical image to indicate that it pertains to a calcification object. FIG. 5C illustrates yet another user interaction 504c, where the system notes that the training user 510a zooms in on a particular area of the medical image.


As shown in FIG. 5D, the user interactions 504a-504c are transmitted to the server 150 through the central brain network 506. It should be appreciated that in some embodiments, an instance of the dynamic self-learning medical image network system may reside at the node 502a itself. In other embodiments, the dynamic self-learning medical image network system may only be accessed through the central brain network 506.


As shown in FIG. 5E, the user interactions 504a-504c may be stored in the database 508 associated with the dynamic self-learning medical image network system. This set of user interactions may be used to train the deep learning algorithms to result in a set of improved deep learning algorithms 514a. In the illustrated embodiment, the system consults a predetermined threshold for modifying the deep learning algorithm 550 in order to determine whether the user input (e.g., pooled from various users) meets or exceeds the threshold. If the threshold is satisfied, the improved deep learning algorithm 514a is created. As discussed above, the improved deep learning algorithm 514a may include additional layers, modified values, or any other changes. It should be appreciated that this modification occurs without a need for re-programming of the neural network, and may be done automatically by the dynamic self-learning medical image network system periodically whenever the threshold is met, in one or more embodiments.


Referring now to FIG. 5F, the improved deep learning algorithm may be utilized at another node 502b, being accessed by another user 510b. In the illustrated embodiment, the other user 510b may be viewing a different set of medical images. As shown in FIG. 5G, the dynamic self-learning medical image network system may utilize the improved deep learning algorithm 514a to recognize one or more objects in the medical images being shown at node 502b. For example, a spiculated mass object may be detected, and the system may ask the user 510b to confirm or reject a recognized object. This user interaction 504d (e.g., confirm/reject) may be captured by the dynamic self-learning medical image network system.


Finally, referring to FIG. 5H, user interaction 504d is used to improve the deep learning algorithms even further, thereby generating improved deep learning algorithms 514b. These improved deep learning algorithms 514b may be successfully used at other nodes to perform various analysis tasks. Thus, the dynamic self-learning medical image network system is greatly improved over time by receiving user interactions from a large number of users.


Having described exemplary embodiments of the dynamic self-learning medical image network system, it should be appreciated that the examples provided herein and depicted in the accompanying figures are only illustrative, and that other embodiments and examples also are encompassed within the scope of the appended claims. For example, while the flow diagrams provided in the accompanying figures are illustrative of exemplary steps, the overall image merge process may be achieved in a variety of manners using other data merge methods known in the art. The system block diagrams are similarly representative only, illustrating functional delineations that are not to be viewed as limiting requirements of the disclosed inventions. It will also be apparent to those skilled in the art that various changes and modifications may be made to the depicted and/or described embodiments, without departing from the scope of the disclosed inventions, which is to be defined only by the following claims and their equivalents. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense.

Claims
  • 1. A method for creating and using a dynamic self-learning medical image network system, the method comprising: receiving, from a first node that displays one or more medical images to a user, initial user interaction data pertaining to one or more user interactions with the one or more initially obtained medical images, wherein the initial user interaction data comprises an actual or estimated portion of at least one of the one or more medical images that was focused upon by at least one user; training a deep learning algorithm based at least in part on the initial user interaction data received from the node, wherein the initial user interaction data received from the node is weighted based on a skill level of the user; and transmitting an instance of the trained deep learning algorithm to the first node and/or to one or more additional nodes, wherein at each respective node to which the instance of the trained deep learning algorithm is transmitted, the trained deep learning algorithm is applied to respective one or more subsequently obtained medical images in order to obtain a result.
  • 2. The method of claim 1, wherein the initial user interaction data comprises at least one annotation on at least one of the one or more initially obtained medical images.
  • 3. The method of claim 1, wherein the initial user interaction data comprises a selection of one or more pixels associated with at least one of the one or more initially obtained medical images.
  • 4. The method of claim 1, wherein the initial user interaction data comprises an actual or estimated amount of time one or more users spent viewing one or more of the initially obtained medical images.
  • 5. The method of claim 1, wherein the initial user interaction data comprises a description of a patient condition and/or written or recorded audio diagnostic findings.
  • 6. The method of claim 1, wherein the instance of the trained deep learning algorithm is maintained at the first node and/or at one or more additional nodes.
  • 7. The method of claim 1, wherein the trained deep learning algorithm runs on a server accessed through a network.
  • 8. The method of claim 1, wherein the result comprises recognizing one or more objects in the medical image.
  • 9. The method of claim 1, wherein the result comprises providing a recommendation pertaining to the medical image.
  • 10. The method of claim 1, further comprising receiving, from the first node and/or one or more additional nodes, subsequent user interaction data pertaining to one or more subsequently obtained medical images, wherein the subsequent user interaction data is used to modify the trained deep learning algorithm.
  • 11. The method of claim 10, wherein the subsequent user interaction data is used to modify the trained deep learning algorithm if it is determined that the subsequent user interaction data satisfies a predetermined threshold confidence level indicating that the trained deep learning algorithm should be modified.
  • 12. The method of claim 10, wherein the modification of the trained deep learning algorithm comprises adding one or more layers to the trained deep learning algorithm.
  • 13. The method of claim 10, wherein the modification of the trained deep learning algorithm comprises modifying a respective structure of one or more existing layers of the trained deep learning algorithm.
  • 14. The method of claim 1, wherein the actual portion of the at least one of the one or more medical images comprises at least one of a tomosynthesis slice image or a subset of a plurality of tomosynthesis slice images.
  • 15. The method of claim 1, wherein the skill level of the user is based on a rate of accurate diagnostic findings associated with the user.
  • 16. A dynamic self-learning medical image network system, comprising: a plurality of nodes; and a central brain server configured to receive initial user interaction data from one or more nodes of the plurality, wherein the initial user interaction data pertains to one or more user interactions with one or more initially obtained medical images, wherein the initial user interaction data comprises an actual or estimated portion of at least one of the one or more medical images that was focused upon by at least one user, train a deep learning algorithm based at least in part on the initial user interaction data received from the node, wherein the initial user interaction data is weighted based on a skill level of the user, and transmit an instance of the trained deep learning algorithm to each node of the plurality, wherein each node of the plurality is configured to apply the instance of the trained deep learning algorithm to one or more subsequently obtained medical images in order to obtain a result.
  • 17. The system of claim 16, wherein the central brain server is configured to train the deep learning algorithm using the initial user interaction data to modify one or more pre-existing algorithms.
  • 18. The system of claim 16, wherein the initial user interaction data comprises at least one annotation on at least one of the one or more initially obtained medical images.
  • 19. The system of claim 16, wherein the initial user interaction data comprises a selection of one or more pixels associated with at least one of the one or more initially obtained medical images.
  • 20. The system of claim 16, wherein the initial user interaction data comprises an actual or estimated amount of time one or more users spent viewing one or more of the initially obtained medical images.
  • 21. The system of claim 16, wherein the initial user interaction data comprises a description of a patient condition and/or written or recorded audio diagnostic findings.
  • 22. The system of claim 16, wherein each node of the plurality is configured to maintain an instance of the trained deep learning algorithm.
  • 23. The system of claim 16, wherein the result comprises recognizing one or more objects in the medical image.
  • 24. The system of claim 16, wherein the result comprises providing a recommendation pertaining to the medical image.
  • 25. The system of claim 16, wherein the central brain server is configured to receive subsequent user interaction data from one or more nodes of the plurality pertaining to one or more subsequently obtained medical images, and to modify the trained deep learning algorithm if the subsequent user interaction data satisfies a predetermined threshold confidence level indicating that the trained deep learning algorithm should be modified.
  • 26. The system of claim 25, wherein the central brain server modifies the trained deep learning algorithm by adding one or more layers to, and/or modifying a structure of one or more existing layers of, the trained deep learning algorithm.
  • 27. The system of claim 16, wherein the actual portion of the at least one of the one or more medical images comprises at least one of a tomosynthesis slice image or a subset of a plurality of tomosynthesis slice images.
  • 28. The system of claim 16, wherein the skill level of the user is based on a rate of accurate diagnostic findings associated with the user.
RELATED APPLICATIONS DATA

The present application is a National Phase entry under 35 U.S.C. § 371 of International Patent Application No. PCT/US2018/035331, having an international filing date of May 31, 2018, which claims the benefit under 35 U.S.C. § 119 of U.S. Provisional Patent Application Ser. No. 62/522,241, filed Jun. 20, 2017, which is incorporated by reference in its entirety into the present application.

PCT Information
Filing Document Filing Date Country Kind
PCT/US2018/035331 5/31/2018 WO 00
Publishing Document Publishing Date Country Kind
WO2018/236565 12/27/2018 WO A
US Referenced Citations (453)
Number Name Date Kind
3502878 Stewart Mar 1970 A
3863073 Wagner Jan 1975 A
3971950 Evans et al. Jul 1976 A
4160906 Daniels Jul 1979 A
4310766 Finkenzeller et al. Jan 1982 A
4496557 Malen et al. Jan 1985 A
4559641 Caugant et al. Dec 1985 A
4706269 Reina et al. Nov 1987 A
4744099 Huettenrauch May 1988 A
4773086 Fujita Sep 1988 A
4773087 Plewes Sep 1988 A
4819258 Kleinman et al. Apr 1989 A
4821727 Levene et al. Apr 1989 A
4907156 Doi et al. Jun 1990 A
4969174 Schied Nov 1990 A
4989227 Tirelli et al. Jan 1991 A
5018176 Romeas et al. May 1991 A
RE33634 Yanaki Jul 1991 E
5029193 Saffer Jul 1991 A
5051904 Griffith Sep 1991 A
5078142 Siczek et al. Jan 1992 A
5099846 Hardy Mar 1992 A
5129911 Siczek et al. Jul 1992 A
5133020 Giger et al. Jul 1992 A
5163075 Lubinsky Nov 1992 A
5164976 Scheid et al. Nov 1992 A
5199056 Darrah Mar 1993 A
5219351 Teubner Jun 1993 A
5240011 Assa Aug 1993 A
5279309 Taylor et al. Jan 1994 A
5280427 Magnusson Jan 1994 A
5289520 Pellegrino et al. Feb 1994 A
5343390 Doi et al. Aug 1994 A
5359637 Webbe Oct 1994 A
5365562 Toker Nov 1994 A
5386447 Siczek Jan 1995 A
5415169 Siczek et al. May 1995 A
5426685 Pellegrino et al. Jun 1995 A
5452367 Bick Sep 1995 A
5491627 Zhang et al. Feb 1996 A
5499097 Ortyn et al. Mar 1996 A
5506877 Niklason et al. Apr 1996 A
5526394 Siczek Jun 1996 A
5539797 Heidsieck et al. Jul 1996 A
5553111 Moore Sep 1996 A
5592562 Rooks Jan 1997 A
5594769 Pellegrino et al. Jan 1997 A
5596200 Sharma Jan 1997 A
5598454 Franetzki Jan 1997 A
5609152 Pellegrino et al. Mar 1997 A
5627869 Andrew et al. May 1997 A
5642433 Lee et al. Jun 1997 A
5642441 Riley et al. Jun 1997 A
5647025 Frost et al. Jul 1997 A
5657362 Giger et al. Aug 1997 A
5668889 Hara Sep 1997 A
5671288 Wilhelm et al. Sep 1997 A
5712890 Spivey Jan 1998 A
5719952 Rooks Feb 1998 A
5735264 Siczek et al. Apr 1998 A
5763871 Ortyn et al. Jun 1998 A
5769086 Ritchart et al. Jun 1998 A
5773832 Sayed et al. Jun 1998 A
5803912 Siczek et al. Sep 1998 A
5818898 Tsukamoto et al. Oct 1998 A
5828722 Ploetz Oct 1998 A
5835079 Shieh Nov 1998 A
5841124 Ortyn et al. Nov 1998 A
5872828 Niklason et al. Feb 1999 A
5875258 Ortyn et al. Feb 1999 A
5878104 Ploetz Mar 1999 A
5878746 Lemelson et al. Mar 1999 A
5896437 Ploetz Apr 1999 A
5941832 Tumey Aug 1999 A
5954650 Saito Sep 1999 A
5986662 Argiro Nov 1999 A
6005907 Ploetz Dec 1999 A
6022325 Siczek et al. Feb 2000 A
6067079 Shieh May 2000 A
6075879 Roehrig et al. Jun 2000 A
6091841 Rogers Jul 2000 A
6101236 Wang et al. Aug 2000 A
6102866 Nields et al. Aug 2000 A
6137527 Abdel-Malek Oct 2000 A
6141398 He Oct 2000 A
6149301 Kautzer et al. Nov 2000 A
6175117 Komardin Jan 2001 B1
6196715 Nambu Mar 2001 B1
6215892 Douglass et al. Apr 2001 B1
6216540 Nelson Apr 2001 B1
6219059 Argiro Apr 2001 B1
6256370 Yavus Apr 2001 B1
6233473 Sheperd May 2001 B1
6243441 Zur Jun 2001 B1
6245028 Furst et al. Jun 2001 B1
6272207 Tang Aug 2001 B1
6289235 Webber et al. Sep 2001 B1
6292530 Yavus Sep 2001 B1
6293282 Lemelson Sep 2001 B1
6327336 Gingold et al. Dec 2001 B1
6327377 Rutenberg et al. Dec 2001 B1
6341156 Baetz Jan 2002 B1
6375352 Hewes Apr 2002 B1
6389104 Bani-Hashemi et al. May 2002 B1
6411836 Patel Jun 2002 B1
6415015 Nicolas Jul 2002 B2
6424332 Powell Jul 2002 B1
6442288 Haerer Aug 2002 B1
6459925 Nields et al. Oct 2002 B1
6463181 Duarte Oct 2002 B2
6468226 McIntyre, IV Oct 2002 B1
6480565 Ning Nov 2002 B1
6501819 Unger et al. Dec 2002 B2
6556655 Chichereau Apr 2003 B1
6574304 Hsieh Jun 2003 B1
6597762 Ferrant Jul 2003 B1
6611575 Alyassin et al. Aug 2003 B1
6620111 Stephens et al. Sep 2003 B2
6626849 Huitema et al. Sep 2003 B2
6633674 Barnes Oct 2003 B1
6638235 Miller et al. Oct 2003 B2
6647092 Eberhard Nov 2003 B2
6650928 Gailly Nov 2003 B1
6683934 Zhao Jan 2004 B1
6744848 Stanton Jun 2004 B2
6748044 Sabol et al. Jun 2004 B2
6751285 Eberhard Jun 2004 B2
6758824 Miller et al. Jul 2004 B1
6813334 Koppe Nov 2004 B2
6882700 Wang Apr 2005 B2
6885724 Li Apr 2005 B2
6901156 Giger et al. May 2005 B2
6912319 Barnes May 2005 B1
6940943 Claus Sep 2005 B2
6978040 Berestov Dec 2005 B2
6987331 Koeppe Jan 2006 B2
6999554 Mertelmeier Feb 2006 B2
7022075 Grunwald et al. Apr 2006 B2
7025725 Dione et al. Apr 2006 B2
7030861 Westerman Apr 2006 B1
7110490 Eberhard Sep 2006 B2
7110502 Tsuji Sep 2006 B2
7117098 Dunlay et al. Oct 2006 B1
7123684 Jing et al. Oct 2006 B2
7127091 OpDeBeek Oct 2006 B2
7142633 Eberhard Nov 2006 B2
7218766 Eberhard May 2007 B2
7245694 Jing et al. Jul 2007 B2
7289825 Fors et al. Oct 2007 B2
7298881 Giger et al. Nov 2007 B2
7315607 Ramsauer Jan 2008 B2
7319735 Defreitas et al. Jan 2008 B2
7323692 Rowlands Jan 2008 B2
7346381 Okerlund et al. Mar 2008 B2
7406150 Minyard et al. Jul 2008 B2
7430272 Jing et al. Sep 2008 B2
7443949 Defreitas et al. Oct 2008 B2
7466795 Eberhard et al. Dec 2008 B2
7577282 Gkanatsios et al. Aug 2009 B2
7606801 Faitelson et al. Oct 2009 B2
7616801 Gkanatsios et al. Nov 2009 B2
7630533 Ruth et al. Dec 2009 B2
7634050 Muller et al. Dec 2009 B2
7640051 Krishnan Dec 2009 B2
7697660 Ning Apr 2010 B2
7702142 Ren et al. Apr 2010 B2
7705830 Westerman et al. Apr 2010 B2
7760924 Ruth et al. Jul 2010 B2
7769219 Zahniser Aug 2010 B2
7787936 Kressy Aug 2010 B2
7809175 Roehrig et al. Oct 2010 B2
7828733 Zhang et al. Nov 2010 B2
7831296 DeFreitas et al. Nov 2010 B2
7869563 DeFreitas Jan 2011 B2
7974924 Holla et al. Jul 2011 B2
7991106 Ren et al. Aug 2011 B2
8044972 Hall et al. Oct 2011 B2
8051386 Rosander et al. Nov 2011 B2
8126226 Bernard et al. Feb 2012 B2
8155421 Ren et al. Apr 2012 B2
8165365 Bernard et al. Apr 2012 B2
8532745 DeFreitas et al. Sep 2013 B2
8571289 Ruth Oct 2013 B2
8594274 Hoernig et al. Nov 2013 B2
8677282 Cragun et al. Mar 2014 B2
8712127 Ren et al. Apr 2014 B2
8897535 Ruth et al. Nov 2014 B2
8983156 Periaswamy et al. Mar 2015 B2
9020579 Smith Apr 2015 B2
9075903 Marshall Jul 2015 B2
9084579 Ren et al. Jul 2015 B2
9119599 Itai Sep 2015 B2
9129362 Jerebko Sep 2015 B2
9289183 Karssemeijer Mar 2016 B2
9451924 Bernard Sep 2016 B2
9456797 Ruth et al. Oct 2016 B2
9478028 Parthasarathy Oct 2016 B2
9589374 Gao Mar 2017 B1
9592019 Sugiyama Mar 2017 B2
9805507 Chen Oct 2017 B2
9808215 Ruth et al. Nov 2017 B2
9811758 Ren et al. Nov 2017 B2
9901309 DeFreitas et al. Feb 2018 B2
10008184 Kreeger et al. Jun 2018 B2
10010302 Ruth et al. Jul 2018 B2
10092358 DeFreitas Oct 2018 B2
10111631 Gkanatsios Oct 2018 B2
10242490 Karssemeijer Mar 2019 B2
10335094 DeFreitas Jul 2019 B2
10357211 Smith Jul 2019 B2
10410417 Chen et al. Sep 2019 B2
10413263 Ruth et al. Sep 2019 B2
10444960 Marshall Oct 2019 B2
10456213 DeFreitas Oct 2019 B2
10573276 Kreeger et al. Feb 2020 B2
10575807 Gkanatsios Mar 2020 B2
10595954 DeFreitas Mar 2020 B2
10624598 Chen Apr 2020 B2
10977863 Chen Apr 2021 B2
10978026 Kreeger Apr 2021 B2
20010038681 Stanton et al. Nov 2001 A1
20010038861 Hsu et al. Nov 2001 A1
20020012450 Tsuji Jan 2002 A1
20020050986 Inoue May 2002 A1
20020075997 Unger et al. Jun 2002 A1
20020113681 Byram Aug 2002 A1
20020122533 Marie et al. Sep 2002 A1
20020188466 Barrette et al. Dec 2002 A1
20020193676 Bodicker Dec 2002 A1
20030007598 Wang Jan 2003 A1
20030018272 Treado et al. Jan 2003 A1
20030026386 Tang Feb 2003 A1
20030048260 Matusis Mar 2003 A1
20030073895 Nields et al. Apr 2003 A1
20030095624 Eberhard et al. May 2003 A1
20030097055 Yanof May 2003 A1
20030128893 Castorina Jul 2003 A1
20030135115 Burdette et al. Jul 2003 A1
20030169847 Karellas Sep 2003 A1
20030194050 Eberhard Oct 2003 A1
20030194121 Eberhard et al. Oct 2003 A1
20030210254 Doan Nov 2003 A1
20030212327 Wang Nov 2003 A1
20030215120 Uppaluri Nov 2003 A1
20040008809 Webber Jan 2004 A1
20040008900 Jabri et al. Jan 2004 A1
20040008901 Avinash Jan 2004 A1
20040036680 Davis Feb 2004 A1
20040047518 Tiana Mar 2004 A1
20040052328 Saboi Mar 2004 A1
20040066884 Claus Apr 2004 A1
20040066904 Eberhard et al. Apr 2004 A1
20040070582 Smith et al. Apr 2004 A1
20040077938 Mark et al. Apr 2004 A1
20040081273 Ning Apr 2004 A1
20040094167 Brady May 2004 A1
20040101095 Jing et al. May 2004 A1
20040109028 Stern et al. Jun 2004 A1
20040109529 Eberhard et al. Jun 2004 A1
20040127789 Ogawa Jul 2004 A1
20040138569 Grunwald Jul 2004 A1
20040171933 Stoller et al. Sep 2004 A1
20040171986 Tremaglio, Jr. et al. Sep 2004 A1
20040267157 Miller et al. Dec 2004 A1
20050047636 Gines et al. Mar 2005 A1
20050049521 Miller et al. Mar 2005 A1
20050063509 Defreitas et al. Mar 2005 A1
20050078797 Danielsson et al. Apr 2005 A1
20050084060 Seppi et al. Apr 2005 A1
20050089205 Kapur Apr 2005 A1
20050105679 Wu et al. May 2005 A1
20050107689 Sasano May 2005 A1
20050111718 MacMahon May 2005 A1
20050113681 DeFreitas et al. May 2005 A1
20050113715 Schwindt et al. May 2005 A1
20050124845 Thomadsen et al. Jun 2005 A1
20050135555 Claus Jun 2005 A1
20050135664 Kaufhold Jun 2005 A1
20050226375 Eberhard Oct 2005 A1
20060009693 Hanover et al. Jan 2006 A1
20060018526 Avinash Jan 2006 A1
20060025680 Jeune-Iomme Feb 2006 A1
20060030784 Miller et al. Feb 2006 A1
20060074288 Kelly et al. Apr 2006 A1
20060098855 Gkanatsios et al. May 2006 A1
20060129062 Nicoson et al. Jun 2006 A1
20060132508 Sadikali Jun 2006 A1
20060147099 Marshall et al. Jul 2006 A1
20060155209 Miller et al. Jul 2006 A1
20060197753 Hotelling Sep 2006 A1
20060210131 Wheeler Sep 2006 A1
20060228012 Masuzawa Oct 2006 A1
20060238546 Handley Oct 2006 A1
20060257009 Wang Nov 2006 A1
20060269040 Mertelmeier Nov 2006 A1
20060291618 Eberhard et al. Dec 2006 A1
20070019846 Bullitt et al. Jan 2007 A1
20070030949 Jing et al. Feb 2007 A1
20070036265 Jing et al. Feb 2007 A1
20070046649 Reiner Mar 2007 A1
20070052700 Wheeler et al. Mar 2007 A1
20070076844 Defreitas et al. Apr 2007 A1
20070114424 Danielsson et al. May 2007 A1
20070118400 Morita et al. May 2007 A1
20070156451 Gering Jul 2007 A1
20070223651 Wagenaar et al. Sep 2007 A1
20070225600 Weibrecht et al. Sep 2007 A1
20070236490 Casteele Oct 2007 A1
20070242800 Jing et al. Oct 2007 A1
20070263765 Wu Nov 2007 A1
20070274585 Zhang et al. Nov 2007 A1
20080019581 Gkanatsios et al. Jan 2008 A1
20080045833 DeFreitas et al. Feb 2008 A1
20080101537 Sendai May 2008 A1
20080114614 Mahesh et al. May 2008 A1
20080125643 Huisman May 2008 A1
20080130979 Ren Jun 2008 A1
20080139896 Baumgart Jun 2008 A1
20080152086 Hall Jun 2008 A1
20080165136 Christie et al. Jul 2008 A1
20080187095 Boone et al. Aug 2008 A1
20080198966 Hjarn Aug 2008 A1
20080229256 Shibaike Sep 2008 A1
20080240533 Piron et al. Oct 2008 A1
20080297482 Weiss Dec 2008 A1
20090003519 DeFreitas Jan 2009 A1
20090005668 West et al. Jan 2009 A1
20090010384 Jing et al. Jan 2009 A1
20090034684 Bernard Feb 2009 A1
20090037821 O'Neal et al. Feb 2009 A1
20090079705 Sizelove et al. Mar 2009 A1
20090080594 Brooks et al. Mar 2009 A1
20090080602 Brooks et al. Mar 2009 A1
20090080604 Shores et al. Mar 2009 A1
20090080752 Ruth Mar 2009 A1
20090080765 Bernard et al. Mar 2009 A1
20090087067 Khorasani Apr 2009 A1
20090123052 Ruth May 2009 A1
20090129644 Daw et al. May 2009 A1
20090135997 Defreitas et al. May 2009 A1
20090138280 Morita et al. May 2009 A1
20090143674 Nields Jun 2009 A1
20090167702 Nurmi Jul 2009 A1
20090171244 Ning Jul 2009 A1
20090238424 Arakita Sep 2009 A1
20090259958 Ban Oct 2009 A1
20090268865 Ren et al. Oct 2009 A1
20090278812 Yasutake Nov 2009 A1
20090296882 Gkanatsios et al. Dec 2009 A1
20090304147 Jing et al. Dec 2009 A1
20100034348 Yu Feb 2010 A1
20100049046 Peiffer Feb 2010 A1
20100054400 Ren et al. Mar 2010 A1
20100079405 Bernstein Apr 2010 A1
20100086188 Ruth et al. Apr 2010 A1
20100088346 Urness et al. Apr 2010 A1
20100098214 Star-Lack et al. Apr 2010 A1
20100105879 Katayose et al. Apr 2010 A1
20100121178 Krishnan et al. May 2010 A1
20100131294 Venon May 2010 A1
20100131482 Linthicum et al. May 2010 A1
20100135558 Ruth et al. Jun 2010 A1
20100152570 Navab Jun 2010 A1
20100166267 Zhang Jul 2010 A1
20100195882 Ren et al. Aug 2010 A1
20100208037 Sendai Aug 2010 A1
20100231522 Li Sep 2010 A1
20100259561 Forutanpour et al. Oct 2010 A1
20100259645 Kaplan Oct 2010 A1
20100260316 Stein et al. Oct 2010 A1
20100280375 Zhang Nov 2010 A1
20100293500 Cragun Nov 2010 A1
20110018817 Kryze Jan 2011 A1
20110019891 Puong Jan 2011 A1
20110054944 Sandberg et al. Mar 2011 A1
20110069808 Defreitas et al. Mar 2011 A1
20110069906 Park Mar 2011 A1
20110087132 DeFreitas et al. Apr 2011 A1
20110105879 Masumoto May 2011 A1
20110109650 Kreeger May 2011 A1
20110110576 Kreeger May 2011 A1
20110150447 Li Jun 2011 A1
20110163939 Tam et al. Jul 2011 A1
20110178389 Kumar et al. Jul 2011 A1
20110182402 Partain Jul 2011 A1
20110234630 Batman et al. Sep 2011 A1
20110237927 Brooks et al. Sep 2011 A1
20110242092 Kashiwagi Oct 2011 A1
20110310126 Georgiev et al. Dec 2011 A1
20120014504 Jang Jan 2012 A1
20120014578 Karssemeijer Jan 2012 A1
20120069951 Toba Mar 2012 A1
20120131488 Karlsson et al. May 2012 A1
20120133600 Marshall May 2012 A1
20120133601 Marshall May 2012 A1
20120134464 Hoernig et al. May 2012 A1
20120148151 Hamada Jun 2012 A1
20120189092 Jerebko Jul 2012 A1
20120194425 Buelow Aug 2012 A1
20120238870 Smith et al. Sep 2012 A1
20120293511 Mertelmeier Nov 2012 A1
20130022165 Jang Jan 2013 A1
20130044861 Muller Feb 2013 A1
20130059758 Haick Mar 2013 A1
20130108138 Nakayama May 2013 A1
20130121569 Yadav May 2013 A1
20130121618 Yadav May 2013 A1
20130202168 Jerebko Aug 2013 A1
20130259193 Packard Oct 2013 A1
20140033126 Kreeger Jan 2014 A1
20140035811 Guehring Feb 2014 A1
20140064444 Oh Mar 2014 A1
20140073913 DeFreitas et al. Mar 2014 A1
20140219534 Wiemker et al. Aug 2014 A1
20140219548 Wels Aug 2014 A1
20140327702 Kreeger et al. Nov 2014 A1
20140328517 Gluncic Nov 2014 A1
20150052471 Chen Feb 2015 A1
20150061582 Smith Apr 2015 A1
20150238148 Georgescu et al. Aug 2015 A1
20150302146 Marshall Oct 2015 A1
20150309712 Marshall Oct 2015 A1
20150317538 Ren et al. Nov 2015 A1
20150331995 Zhao Nov 2015 A1
20160000399 Halmann et al. Jan 2016 A1
20160022364 DeFreitas et al. Jan 2016 A1
20160051215 Chen Feb 2016 A1
20160078645 Abdurahman Mar 2016 A1
20160228034 Gluncic Aug 2016 A1
20160235380 Smith Aug 2016 A1
20160367210 Gkanatsios Dec 2016 A1
20170071562 Suzuki Mar 2017 A1
20170262737 Rabinovich Sep 2017 A1
20180047211 Chen et al. Feb 2018 A1
20180137385 Ren May 2018 A1
20180144244 Masoud May 2018 A1
20180256118 DeFreitas Sep 2018 A1
20190015173 DeFreitas Jan 2019 A1
20190043456 Kreeger Feb 2019 A1
20190290221 Smith Sep 2019 A1
20200046303 DeFreitas Feb 2020 A1
20200093562 DeFreitas Mar 2020 A1
20200205928 DeFreitas Jul 2020 A1
20200253573 Gkanatsios Aug 2020 A1
20200345320 Chen Nov 2020 A1
20200390404 DeFreitas Dec 2020 A1
20210000553 St. Pierre Jan 2021 A1
20210100518 Chui Apr 2021 A1
20210100626 St. Pierre Apr 2021 A1
20210113167 Chui Apr 2021 A1
20210118199 Chui Apr 2021 A1
20220005277 Chen Jan 2022 A1
20220013089 Kreeger Jan 2022 A1
Foreign Referenced Citations (98)
Number Date Country
2014339982 Apr 2015 AU
1846622 Oct 2006 CN
202161328 Mar 2012 CN
102429678 May 2012 CN
107440730 Dec 2017 CN
102010009295 Aug 2011 DE
102011087127 May 2013 DE
775467 May 1997 EP
982001 Mar 2000 EP
1428473 Jun 2004 EP
2236085 Jun 2010 EP
2215600 Aug 2010 EP
2301432 Mar 2011 EP
2491863 Aug 2012 EP
1986548 Jan 2013 EP
2656789 Oct 2013 EP
2823464 Jan 2015 EP
2823765 Jan 2015 EP
3060132 Apr 2019 EP
H09-198490 Jul 1997 JP
H09-238934 Sep 1997 JP
10-33523 Feb 1998 JP
H10-33523 Feb 1998 JP
2000-200340 Jul 2000 JP
2002-282248 Oct 2002 JP
2003-189179 Jul 2003 JP
2003-199737 Jul 2003 JP
2003-531516 Oct 2003 JP
2006-519634 Aug 2006 JP
2006-312026 Nov 2006 JP
2007-130487 May 2007 JP
2007-330334 Dec 2007 JP
2007-536968 Dec 2007 JP
2008-068032 Mar 2008 JP
2009-034503 Feb 2009 JP
2009-522005 Jun 2009 JP
2009-526618 Jul 2009 JP
2009-207545 Sep 2009 JP
2010-137004 Jun 2010 JP
2012-501750 Jan 2012 JP
2012011255 Jan 2012 JP
2012-061196 Mar 2012 JP
2013-244211 Dec 2013 JP
2014-507250 Mar 2014 JP
2014-534042 Dec 2014 JP
2015-506794 Mar 2015 JP
2016-198197 Dec 2015 JP
2016-198197 Dec 2016 JP
10-2015-001051 Jan 2015 KR
1020150010515 Jan 2015 KR
10-2017-006283 Jun 2017 KR
1020170062839 Jun 2017 KR
9005485 May 1990 WO
9317620 Sep 1993 WO
9406352 Mar 1994 WO
199700649 Jan 1997 WO
199816903 Apr 1998 WO
0051484 Sep 2000 WO
2003020114 Mar 2003 WO
2005051197 Jun 2005 WO
2005110230 Nov 2005 WO
2005112767 Dec 2005 WO
2006055830 May 2006 WO
2006058160 Jun 2006 WO
2007095330 Aug 2007 WO
08014670 Feb 2008 WO
2008047270 Apr 2008 WO
2008054436 May 2008 WO
2009026587 Feb 2009 WO
2010028208 Mar 2010 WO
2010059920 May 2010 WO
2011008239 Jan 2011 WO
2011043838 Apr 2011 WO
2011065950 Jun 2011 WO
2011073864 Jun 2011 WO
2011091300 Jul 2011 WO
2012001572 Jan 2012 WO
2012068373 May 2012 WO
2012063653 May 2012 WO
2012112627 Aug 2012 WO
2012122399 Sep 2012 WO
2013001439 Jan 2013 WO
2013035026 Mar 2013 WO
2013078476 May 2013 WO
2013123091 Aug 2013 WO
2014149554 Sep 2014 WO
2014207080 Dec 2014 WO
2015061582 Apr 2015 WO
2015066650 May 2015 WO
2015130916 Sep 2015 WO
2016103094 Jun 2016 WO
2016184746 Nov 2016 WO
2018183548 Oct 2018 WO
2018183549 Oct 2018 WO
2018183550 Oct 2018 WO
WO2018236565 Dec 2018 WO
Non-Patent Literature Citations (58)
Entry
International Search Report and Written Opinion dated Sep. 7, 2018 for PCT/US2018/035331, Applicant Hologic, Inc., 12 pages.
PCT International Preliminary Report on Patentability (Chapter I of the Patent Cooperation Treaty) for PCT/US2018/035331, Applicant: Hologic, Inc., Form PCT/IB/326 and 373, dated Jan. 2, 2020 (9 pages).
Ahmad A Al Sallab et al., “Self Learning Machines Using Deep Networks”, Soft Computing and Pattern Recognition (SoCPaR), 2011 Int'l. Conference of IEEE, Oct. 14, 2011, pp. 21-26.
Ghiassi M et al., “A Dynamic Architecture for Artificial Neural Networks”, Neurocomputing, vol. 63, Aug. 20, 2004, pp. 397-413.
European Search Report in Application 18820591.8, dated Mar. 4, 2021, 9 pages.
Caroline, B.E et al., “Computer aided detection of masses in digital breast tomosynthesis: A review”, 2012 International Conference on Emerging Trends in Science, Engineering and Technology (INCOSET), Tiruchirappalli, 2012, pp. 186-191.
Ertas, M. et al., “2D versus 3D total variation minimization in digital breast tomosynthesis”, 2015 IEEE International Conference on Imaging Systems and Techniques (IST), Macau, 2015, pp. 1-4.
Giger et al. “Development of a smart workstation for use in mammography”, in Proceedings of SPIE, vol. 1445 (1991), p. 101103; 4 pages.
Giger et al., “An Intelligent Workstation for Computer-aided Diagnosis”, in RadioGraphics, May 1993, 13:3 pp. 647-656; 10 pages.
eFilm Mobile HD by Merge Healthcare, web site: http://itunes.apple.com/bw/app/efilm-mobile-hd/id405261243?mt=8, accessed on Nov. 3, 2011 (2 pages).
eFilm Solutions, eFilm Workstation (tm) 3.4, website: http://estore.merge.com/na/estore/content.aspx?productlD=405, accessed on Nov. 3, 2011 (2 pages).
Wodajo, Felasfa, MD, “Now Playing: Radiology Images from Your Hospital PACS on your iPad,” Mar. 17, 2010; web site: http://www.imedicalapps.com/2010/03/now-playing-radiology-images-from-your-hospital-pacs-on-your-ipad/, accessed on Nov. 3, 2011 (3 pages).
Lewin,JM, et al., Dual-energy contrast-enhanced digital subtraction mammography: feasibility. Radiology 2003; 229:261-268.
Berg, WA et al., “Combined screening with ultrasound and mammography vs mammography alone in women at elevated risk of breast cancer”, JAMA 299:2151-2163, 2008.
Carton, AK, et al., “Dual-energy contrast-enhanced digital breast tomosynthesis—a feasibility study”, BR J Radiol. Apr. 2010;83 (988):344-50.
Chen, SC, et al., “Initial clinical experience with contrast-enhanced digital breast tomosynthesis”, Acad Radio. Feb. 2007 14(2):229-38.
Diekmann, F., et al., “Digital mammography using iodine-based contrast media: initial clinical experience with dynamic contrast medium enhancement”, Invest Radiol 2005; 40:397-404.
Dromain C., et al., “Contrast enhanced spectral mammography: a multi-reader study”, RSNA 2010, 96th Scientific Assembly and Scientific Meeting.
Dromain, C., et al., “Contrast-enhanced digital mammography”, Eur J Radiol. 2009; 69:34-42.
Freiherr, G., “Breast tomosynthesis trials show promise”, Diagnostic lmaging—San Francisco 2005, V27; N4:42-48.
ICRP Publication 60: 1990 Recommendations of the International Commission on Radiological Protection, 12 pages.
Jochelson, M., et al., “Bilateral Dual Energy contrast-enhanced digital mammography: Initial Experience”, RSNA 2010, 96th Scientific Assembly and Scientific Meeting, 1 page.
Jong, RA, et al., Contrast-enhanced digital mammography: initial clinical experience. Radiology 2003; 228:842-850.
Kopans, et al. Will tomosynthesis replace conventional mammography? Plenary Session SFN08: RSNA 2005.
Lehman, CD, et al. MRI evaluation of the contralateral breast in women with recently diagnosed breast cancer. N Engl J Med 2007; 356:1295-1303.
Lindfors, KK, et al., Dedicated breast CT: initial clinical experience. Radiology 2008; 246(3): 725-733.
Niklason, L., et al., Digital tomosynthesis in breast imaging. Radiology. Nov. 1997; 205(2):399-406.
Poplack, SP, et al., Digital breast tomosynthesis: initial experience in 98 women with abnormal digital screening mammography. AJR Am J Roentgenology Sep. 2007 189(3):616-23.
Prionas, ND, et al., Contrast-enhanced dedicated breast CT: initial clinical experience. Radiology. Sep. 2010 256(3):714-723.
Rafferty, E. et al., “Assessing Radiologist Performance Using Combined Full-Field Digital Mammography and Breast Tomosynthesis Versus Full-Field Digital Mammography Alone: Results”. . . presented at 2007 Radiological Society of North America meeting, Chicago IL.
Smith, A., “Full field breast tomosynthesis”, Radiol Manage. Sep.-Oct. 2005; 27(5):25-31.
Weidner N, et al., “Tumor angiogenesis and metastasis: correlation in invasive breast carcinoma”, New England Journal of Medicine 1991; 324:1-8.
Weidner, N, “The importance of tumor angiogenesis: the evidence continues to grow”, AM J Clin Pathol. Nov. 2004 122(5):696-703.
Hologic, Inc., 510(k) Summary, prepared Nov. 28, 2010, for Affirm Breast Biopsy Guidance System Special 510(k) Premarket Notification, 5 pages.
Hologic, Inc., 510(k) Summary, prepared Aug. 14, 2012, for Affirm Breast Biopsy Guidance System Special 510(k) Premarket Notification, 5 pages.
“Filtered Back Projection”, (NYGREN), published May 8, 2007, URL: http://web.archive.org/web/19991010131715/http://www.owlnet.rice.edu/˜elec539/Projects97/cult/node 2.html, 2 pgs.
Hologic, “Lorad StereoLoc II” Operator's Manual 9-500-0261, Rev. 005, 2004, 78 pgs.
Shrading, Simone et al., “Digital Breast Tomosynthesis-guided Vacuum-assisted Breast Biopsy: Initial Experiences and Comparison with Prone Stereotactic Vacuum-assisted Biopsy”, the Department of Diagnostic and Interventional Radiology, Univ. of Aachen, Germany, published Nov. 12, 2014, 10 pgs.
“Supersonic to feature Aixplorer Ultimate at ECR”, AuntiMinnie.com, 3 pages (Feb. 2018).
Bushberg, Jerrold et al., “The Essential Physics of Medical Imaging”, 3rd ed., In: “The Essential Physics of Medical Imaging, Third Edition”, Dec. 28, 2011, Lippincott & Wilkins, Philadelphia, PA, USA, XP05579051, pp. 270-272.
Dromain, Clarisse et al., “Dual-energy contrast-enhanced digital mammography: initial clinical results”, European Radiology, Sep. 14, 2010, vol. 21, pp. 565-574.
Reynolds, April, “Stereotactic Breast Biopsy: A Review”, Radiologic Technology, vol. 80, No. 5, Jun. 1, 2009, pp. 447M-464M, XP055790574.
E. Shaw de Paredes et al., “Interventional Breast Procedure”, published Sep./Oct. 1998 in Curr Probl Diagn Radiol, pp. 138-184.
Burbank, Fred, “Stereotactic Breast Biopsy: Its History, Its Present, and Its Future”, published in 1996 at the Southeastern Surgical Congress, 24 pages.
Georgian-Smith, Dianne, et al., “Stereotactic Biopsy of the Breast Using an Upright Unit, a Vacuum-Suction Needle, and a Lateral Arm-Support System”, 2001, at the American Roentgen Ray Society meeting, 8 pages.
Fischer Imaging Corp, Mammotest Plus manual on minimally invasive breast biopsy system, 2002, 8 pages.
Fischer Imaging Corporation, Installation Manual, MammoTest Family of Breast Biopsy Systems, 86683G, 86684G, pp. 55957-IM, Issue 1, Revision 3, Jul. 2005, 98 pages.
Fischer Imaging Corporation, Operator Manual, MammoTest Family of Breast Biopsy Systems, 86683G, 86684G, pp. 55956-OM, Issue 1, Revision 6, Sep. 2005, 258 pages.
Koechli, Ossi R., “Available Sterotactic Systems for Breast Biopsy”, Renzo Brun del Re (Ed.), Minimally Invasive Breast Biopsies, Recent Results in Cancer Research 173:105-113; Springer-Verlag, 2009.
Chan, Heang-Ping et al., “ROC Study of the effect of stereoscopic imaging on assessment of breast lesions,” Medical Physics, vol. 32, No. 4, Apr. 2005, 1001-1009.
Lilja, Mikko, "Fast and accurate voxel projection technique in free-form cone-beam geometry with application to algebraic reconstruction," Applied Sciences on Biomedical and Communication Technologies, 2008, Isabel '08, first international symposium on, IEEE, Piscataway, NJ, Oct. 25, 2008.
Pathmanathan et al., “Predicting tumour location by simulating large deformations of the breast using a 3D finite element model and nonlinear elasticity”, Medical Image Computing and Computer-Assisted Intervention, pp. 217-224, vol. 3217 (2004).
Pediconi, “Color-coded automated signal intensity-curve for detection and characterization of breast lesions: Preliminary evaluation of new software for MR-based breast imaging,” International Congress Series 1281 (2005) 1081-1086.
Sakic et al., “Mammogram synthesis using a 3D simulation. I. breast tissue model and image acquisition simulation” Medical Physics. 29, pp. 2131-2139 (2002).
Samani, A. et al., “Biomechanical 3-D Finite Element Modeling of the Human Breast Using MRI Data”, 2001, IEEE Transactions on Medical Imaging, vol. 20, No. 4, pp. 271-279.
Yin, H.M., et al., “Image Parser: a tool for finite element generation from three-dimensional medical images”, BioMedical Engineering Online. 3:31, pp. 1-9, Oct. 1, 2004.
Van Schie, Guido, et al., “Mass detection in reconstructed digital breast tomosynthesis volumes with a computer-aided detection system trained on 2D mammograms”, Med. Phys. 40(4), Apr. 2013, 41902-1 -41902-11.
Van Schie, Guido, et al., “Generating Synthetic Mammograms from Reconstructed Tomosynthesis Volumes”, IEEE Transactions on Medical Imaging, vol. 32, No. 12, Dec. 2013, 2322-2331.
Related Publications (1)
Number Date Country
20200184262 A1 Jun 2020 US
Provisional Applications (1)
Number Date Country
62522241 Jun 2017 US