Embodiments described herein relate to systems and methods for performing image analytics to automatically select, arrange, and process key images as part of a medical image study.
When physicians, such as radiologists and cardiologists, review medical images captured as part of a clinical imaging procedure for the purpose of creating a clinical report, they commonly select key images. “Key images,” as this term is used in the medical industry, identify “important” images in a study. Key images may be displayed in a montage, such as a single composite image, or as individual images displayed separately, such as in a virtual stack of images. The key images may include images supporting a normal finding, an abnormality, a change from previous image studies, or the like. In some embodiments, to provide a proper diagnosis, a reviewing physician compares one or more of these key images to one or more images included in another image study, sometimes referred to as a “comparison image study.” Accordingly, the reviewing physician must be able to locate relevant comparison image studies and properly compare images between multiple studies or risk providing a misdiagnosis.
Thus, embodiments described herein improve clinical efficiency and accuracy related to reading and reporting medical images using rules and, in some embodiments, artificial intelligence. In particular, embodiments described herein assist reading physicians in selecting, arranging, processing, and reporting key images from a current image study and comparison image studies using automated, rules-based actions to expedite the reading and reporting of medical images.
For example, in one embodiment, the invention provides a system for automatically determining a key image for display to a user and/or storage as part of analyzing an image study generated as part of a medical imaging procedure. The system includes a memory storing a plurality of image studies, each of the plurality of image studies including a plurality of images; a display device for displaying images; and an electronic processor interacting with the memory and the display device. The electronic processor is configured to: determine a first key image within a plurality of images included in a first image study; automatically determine, by executing one or more rules associated with one or more of the first key image, a user, a type of the first image study, a modality generating the first image study, an anatomy, a location of the modality, and patient demographics, at least one second key image included in at least one second image study included in the plurality of image studies stored in the memory; and display, via the display device, the second key image with the first key image to aid a user in study of the first image study.
Another embodiment provides a method of automatically determining a key image for display to a user and/or for storage as part of analyzing an image study generated as part of a medical imaging procedure. The method includes: determining a first key image within a plurality of images included in a first image study; automatically determining, with an electronic processor, by executing one or more rules associated with one or more of the first key image, a user, a type of the first image study, a modality generating the first image study, an anatomy, a location of the modality, and patient demographics, at least one second key image included in at least one second image study included in a plurality of image studies stored in a memory; and displaying, with the electronic processor via a display device, the second key image with the first key image within a montage template to aid a user in study of the first image study.
Another embodiment is directed to a non-transitory computer-readable medium including instructions that, when executed by an electronic processor, perform a set of operations. The operations determine a first key image within a plurality of images included in a first image study; automatically determine, by executing one or more rules associated with one or more of the first key image, a user, a type of the first image study, a modality generating the first image study, an anatomy, a location of the modality, and patient demographics, at least one second key image included in at least one second image study included in a plurality of image studies stored in a memory, the one or more rules generated using machine learning; and display, via a display device, the second key image with the first key image to aid a user in study of the first image study.
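Purely as an illustration of this flow (and not the claimed implementation), a minimal sketch follows, assuming the rules are callables that inspect the first key image and the stored studies; all names are hypothetical.

```python
# A minimal, hypothetical sketch of the flow summarized above: determine a first key
# image, apply rules against the stored image studies to determine at least one second
# key image, and display the two together. Names here are illustrative only.
def determine_and_display_key_images(first_key_image, stored_studies, rules, display):
    second_key_images = []
    for rule in rules:
        # Each rule may contribute key images, e.g., from a comparison image study.
        second_key_images.extend(rule(first_key_image, stored_studies))
    display(first_key_image, second_key_images)
    return second_key_images
```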
Other aspects of the invention will become apparent by consideration of the detailed description and accompanying drawings.
Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways.
Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. The terms “mounted,” “connected” and “coupled” are used broadly and encompass both direct and indirect mounting, connecting and coupling. Further, “connected” and “coupled” are not restricted to physical or mechanical connections or couplings, and may include electrical connections or couplings, whether direct or indirect. Also, electronic communications and notifications may be performed using any known means including direct connections, wireless connections, etc.
A plurality of hardware and software based devices, as well as a plurality of different structural components, may be utilized to implement the invention. In addition, embodiments of the invention may include hardware, software, and electronic components or modules that, for purposes of discussion, may be illustrated and described as if the majority of the components were implemented solely in hardware. However, one of ordinary skill in the art, based on a reading of this detailed description, would recognize that, in at least one embodiment, the electronic-based aspects of the invention may be implemented in software (e.g., stored on non-transitory computer-readable medium) executable by one or more processors. For example, “mobile device,” “computing device,” and “server” as described in the specification may include one or more electronic processors, one or more memory modules including non-transitory computer-readable medium, one or more input/output interfaces, and various connections (e.g., a system bus) connecting the components.
The memory 106 may include read-only memory (“ROM”), random access memory (“RAM”) (e.g., dynamic RAM (“DRAM”), synchronous DRAM (“SDRAM”), and the like), electrically erasable programmable read-only memory (“EEPROM”), flash memory, a hard disk, a secure digital (“SD”) card, other suitable memory devices, or a combination thereof. The electronic processor 104 executes computer-readable instructions (“software”) stored in the memory 106. The software may include firmware, one or more applications, program data, filters, rules, one or more program modules, and other executable instructions.
The communication interface 108 allows the server 102 to communicate with devices external to the server 102.
In some embodiments, the server 102 acts as a gateway to the one or more image repositories 112. For example, in some embodiments, the server 102 may be a picture archiving and communication system (“PACS”) server that communicates with one or more image repositories 112. However, in other embodiments, the server 102 may be separate from a PACS server and may communicate with a PACS server to access images stored in one or more image repositories.
In some embodiments, the image selection application 110 performs the functionality described herein in response to various triggering events. For example, in some embodiments, the image selection application 110 performs the functionality described herein in response to a reviewing or reading physician accessing or viewing a particular image study.
As one example, an image included in a current image study may include an index lesion, defined as a key finding that is representative of the patient's problem or shows a pertinent negative finding. Such an index lesion could be identified because of an action of the reading physician or automatically because the anatomical position matches the location of a previously marked index lesion in the same patient. Under any one of these circumstances, when the image is added to the montage (meaning marked as a key image and/or added to a specific montage of images), the electronic processor 104 is configured to automatically select another key image (e.g., the best matching comparison image that also contains the same index lesion) as described below.
In particular, regardless of whether the first key image was determined from input from a user or automatically, the electronic processor 104 is configured to automatically determine a second image based on one or more rules (at block 308). The rules may consider characteristics of the first key image, the exam type, the modality type, patient demographics, user findings, or the like. For example, the rules may specify that when the first key image is selected from a magnetic resonance (“MR”) image (“MRI”) study and the initial diagnosis (provided by the user or automatically using image analytics) is “normal,” a predetermined set of images (of particular anatomy, with particular image characteristics or positions, or the like) should be automatically included in the set of key images. The rules may use metadata for an image or image study (e.g., DICOM header data), patient data, clinical data, and the like to automatically select the second key image.
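As an illustration, one way such rules might be represented is sketched below. This is a minimal, hypothetical sketch and not the actual implementation: it assumes each image is a plain dictionary of DICOM-style metadata, and the rule conditions, the series description value, and all function names are assumptions made for the example.

```python
# Hypothetical rule representation: each rule declares the conditions under which it
# applies (modality, exam type, initial diagnosis, and so on) and a function that
# returns the additional key images to include.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class KeyImageRule:
    modality: Optional[str] = None        # e.g., "MR"
    exam_type: Optional[str] = None       # e.g., "BRAIN MRI"
    diagnosis: Optional[str] = None       # e.g., "normal"
    select: Callable = lambda study, comparisons: []

    def matches(self, context: dict) -> bool:
        # A rule applies when every condition it specifies matches the current context.
        for attr in ("modality", "exam_type", "diagnosis"):
            expected = getattr(self, attr)
            if expected is not None and context.get(attr) != expected:
                return False
        return True


def apply_rules(rules, context, current_study, comparison_studies):
    # Collect every additional key image proposed by a matching rule.
    selected = []
    for rule in rules:
        if rule.matches(context):
            selected.extend(rule.select(current_study, comparison_studies))
    return selected


# Example: for a "normal" MR study, include a predetermined set of images identified
# here by an assumed series description.
normal_mr_rule = KeyImageRule(
    modality="MR",
    diagnosis="normal",
    select=lambda study, comparisons: [
        img for img in study if img.get("SeriesDescription") == "SAG T1 MIDLINE"
    ],
)
```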
It should be understood that the second key image may be included in the same image study as the first key image or in a different image study. In particular, in some embodiments, the second key image is included in a prior comparison image study. In this situation, the second key image may include a key image or non-key image from a comparison image study. However, in other embodiments, the second key image may be an image within the comparison study identified by the electronic processor 104 (regardless of whether the image was identified as a key image in the comparison image study) as being relevant, such as by analyzing and interpreting a diagnosis or finding for the comparison image study (as recorded in a structured report for the comparison image study) or by anatomically matching to a location of a key image in the current study. It should be understood that, in some embodiments, the first key image may be included in the comparison image study and the second key image may also be included in the comparison image study, another comparison image study, or a current image study being reviewed by a user.
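A minimal sketch of the anatomical-matching idea follows, assuming each slice exposes a scalar position along the scan axis (for example, the z component of the DICOM Image Position (Patient) attribute) and that the two series share a frame of reference; the dictionary keys and UIDs are illustrative only.

```python
# One simple way to anatomically match a key image to a comparison study: pick the
# comparison slice whose position along the scan axis is closest to the key image's
# position. A real system would also verify the frame of reference and may require
# registration between the studies.
def closest_comparison_slice(key_slice_position: float, comparison_slices: list[dict]) -> dict:
    return min(
        comparison_slices,
        key=lambda s: abs(s["slice_position"] - key_slice_position),
    )


current_key = {"uid": "1.2.3", "slice_position": -42.5}
comparison = [
    {"uid": "4.5.6", "slice_position": -45.0},
    {"uid": "4.5.7", "slice_position": -42.0},
    {"uid": "4.5.8", "slice_position": -39.0},
]
match = closest_comparison_slice(current_key["slice_position"], comparison)  # uid 4.5.7
```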
The rules may be customized for individual users or groups of users, such as by user, user role, a location of the modality generating the image study, the location of the user, a modality, an exam type, a body part associated with the image study or a particular image, patient demographics, a network of the user, a clinic the user is associated with, a referring physician, and the like. Thus, for example, if a particular physician selects a key image, the electronic processor 104 may be configured to automatically select and apply a rule for the modality and finding that is specific to that user in preference to other rules for the same modality and finding.
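One possible way to prefer a user-specific rule over a more general rule for the same modality and finding is sketched below; the attribute names and the specificity scoring are assumptions for illustration.

```python
# Hypothetical specificity ordering: a rule that explicitly names the current user (or
# the user's role, site, etc.) outranks a generic rule for the same modality and finding.
from typing import Optional

MATCH_ATTRS = ("user", "role", "site", "modality", "finding")


def rule_applies(rule: dict, context: dict) -> bool:
    # A rule applies when every attribute it specifies matches the context.
    return all(rule.get(a) is None or rule.get(a) == context.get(a) for a in MATCH_ATTRS)


def rule_specificity(rule: dict) -> int:
    # More explicitly specified attributes means a more specific rule.
    return sum(1 for a in MATCH_ATTRS if rule.get(a) is not None)


def best_rule(rules: list[dict], context: dict) -> Optional[dict]:
    applicable = [r for r in rules if rule_applies(r, context)]
    return max(applicable, key=rule_specificity, default=None)
```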
In some embodiments, the electronic processor 104 is also configured to automatically generate text or labels for the key images.
In some embodiments, the electronic processor 104 is also configured to automatically generate text for a report (a structured report) associated with an image study based on the selection of key images. For example, the electronic processor 104 may be configured to automatically generate text based on what images were compared, what anatomy was reviewed, measurements in images, or the like. This text can be displayed to a user for review, editing (as needed), and approval. In some embodiments, the user may indicate (by selecting a button or other selection mechanism or issuing an audio or verbal command) when all of the key images have been selected (and annotated as needed), which may trigger the electronic processor 104 to generate text for the report.
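A minimal sketch of this kind of report-text generation is shown below; the field names and sentence phrasing are illustrative assumptions, and a real system would draw these values from the image study and its measurements before presenting the draft to the user for review and approval.

```python
# Hypothetical report-text generation: describe which images were compared, the anatomy
# reviewed, and any measurements, then hand the draft to the user for editing/approval.
def draft_report_text(key_images: list[dict]) -> str:
    lines = []
    for img in key_images:
        sentence = f"{img['anatomy']} reviewed on {img['study_date']}"
        if img.get("compared_to"):
            sentence += f", compared with prior study dated {img['compared_to']}"
        if img.get("measurement_mm") is not None:
            sentence += f"; measured {img['measurement_mm']:.1f} mm"
        lines.append(sentence + ".")
    return " ".join(lines)


draft = draft_report_text([
    {"anatomy": "Left occipital lobe lesion", "study_date": "2019-01-04",
     "compared_to": "2018-07-12", "measurement_mm": 8.4},
])
```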
It should be understood that, in some embodiments, the electronic processor 104 is configured to automatically select multiple key images for an image study (e.g., a third key image, a fourth key image, and the like). Each automatically-determined key image may be selected from the same image study, different image studies, or a combination thereof. For example, in some situations, the selected key images may be from different types of image studies or from image studies generated at different times (e.g., to show a treatment progression or change). Further, additional key images, such as images 234 and 236, are selectable by a user in a similar manner as discussed above. All of the key images selected for a particular image study are provided as an initial montage, which a user can review, edit, and approve. In particular, the user may have the option to remove or replace key images by selecting and deleting the images.
In some embodiments, the rules described above are predefined for one or multiple users. The rules may also be manually configurable or changeable by particular users. Alternatively or in addition, the rules may be initially created or modified using machine learning. Machine learning generally refers to the ability of a computer program to learn without being explicitly programmed. In some embodiments, a computer program (e.g., a learning engine) is configured to construct a model (e.g., one or more algorithms) based on example inputs. Supervised learning involves presenting a computer program with example inputs and their desired (e.g., actual) outputs. The computer program is configured to learn a general rule (e.g., a model) that maps the inputs to the outputs. The computer program may be configured to perform deep machine learning using various types of methods and mechanisms. For example, the computer program may perform deep machine learning using decision tree learning, association rule learning, artificial neural networks, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, sparse dictionary learning, and genetic algorithms. Using these approaches, a computer program may ingest, parse, and understand data and progressively refine models for data analytics.
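As a concrete illustration of the supervised-learning idea using one of the methods listed above (decision tree learning), the sketch below uses scikit-learn's DecisionTreeClassifier. The feature encoding and the toy training rows are placeholders; a real learning engine would derive its training data from previously reviewed image studies.

```python
# A sketch of supervised learning of a key-image selection model with a decision tree.
from sklearn.tree import DecisionTreeClassifier

# Each row: [modality_code, exam_type_code, contains_finding, was_annotated]
# (illustrative encodings only)
X = [
    [0, 1, 1, 1],
    [0, 1, 0, 0],
    [1, 2, 1, 1],
    [1, 2, 0, 0],
]
y = [1, 0, 1, 0]  # 1 = image was selected as a key image, 0 = not selected

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# Predict whether a new image (same feature encoding) should be proposed as a key image.
proposed = model.predict([[0, 1, 1, 0]])
```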
Accordingly, a learning engine (executed by the server 102 or a separate computing device) may be configured to receive example inputs and outputs (“training information”) that allow the learning engine to automatically determine the rules described above. In some embodiments, the training information includes information regarding what images were selected as key images for a previously-reviewed image study, what images were annotated, a diagnosis for the image study or individual images, or the like. Again, machine learning techniques as described in U.S. patent application Ser. Nos. 15/179,506 and 15/179,465 (incorporated by reference herein) may be used to automatically create or modify the rules described herein for automatically selecting key images. User interaction with selected key images may also be used as feedback to such a learning engine to further refine the rules. For example, when a particular user repeatedly adds a particular image to a montage, deletes an automatically-selected image from a montage, changes the position of an image in the montage, or a combination thereof, the learning engine may be configured to detect a pattern in such manual behavior and modify the rules (such as user-specific rules) accordingly.
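The feedback loop described here could be approximated as sketched below, where repeated manual overrides by a user are counted and, past an assumed threshold, surfaced as a candidate user-specific rule; the threshold and the action names are illustrative assumptions.

```python
# Hypothetical feedback loop: track how often a user manually adds, removes, or moves an
# automatically selected image, and promote the recurring pattern into a candidate
# user-specific rule once it has been observed enough times.
from collections import Counter
from typing import Optional

OVERRIDE_THRESHOLD = 3  # illustrative; a real learning engine would tune or learn this


class FeedbackTracker:
    """Track repeated manual overrides and surface them as candidate user-specific rules."""

    def __init__(self):
        self.overrides = Counter()

    def record(self, user: str, action: str, image_label: str) -> Optional[dict]:
        # action is e.g. "added", "removed", or "repositioned"
        key = (user, action, image_label)
        self.overrides[key] += 1
        if self.overrides[key] >= OVERRIDE_THRESHOLD:
            # Suggest a user-specific rule capturing the recurring behavior.
            return {"user": user, "action": action, "image_label": image_label}
        return None
```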
As noted above, in addition to selecting key images, a user may also position key images within a montage (e.g., at particular positions).
The electronic processor 104 is also configured to determine a key image included in the image study (at block 508). As described above, key images may be determined manually, automatically, or a combination thereof. As also described above, each key image may be positioned within the montage template and, again, this positioning may be performed manually or automatically by the electronic processor 104. Based on the position of the key image within the montage template, the electronic processor 104 is configured to automatically annotate the key image (at block 510) and display the key image with the annotation within the montage template (at block 512). For example, each montage template may include one or more pre-labeled sub-containers that specify required or recommended images.
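A minimal sketch of a montage template with pre-labeled sub-containers is shown below; the template name, labels, and data structures are hypothetical and stand in for whatever representation an actual implementation would use.

```python
# Hypothetical montage template: each sub-container carries a pre-defined label, and a
# key image placed into that position inherits the label as an annotation.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SubContainer:
    label: str                     # e.g., "Right kidney, long axis"
    required: bool = False
    image: Optional[dict] = None


@dataclass
class MontageTemplate:
    name: str
    sub_containers: List[SubContainer] = field(default_factory=list)

    def place(self, index: int, image: dict) -> dict:
        # A key image dropped into a pre-labeled sub-container inherits its label.
        container = self.sub_containers[index]
        labeled = dict(image, annotation=container.label)
        container.image = labeled
        return labeled


renal_us = MontageTemplate("Renal ultrasound", [
    SubContainer("Right kidney, long axis", required=True),
    SubContainer("Left kidney, long axis", required=True),
    SubContainer("Bladder"),
])
labeled = renal_us.place(0, {"uid": "1.2.3"})  # annotation == "Right kidney, long axis"
```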
In addition to labeling key images, one or more sub-containers within a montage template may be associated with particular automated functionality. For example, in some embodiments, the electronic processor 104 is also configured to automatically label other images in an image study based on the labels automatically added to key images positioned within a montage template (e.g., based on an image's position in a series of images with respect to a key image). Similarly, in some embodiments, when a key image is added to a particular sub-container of a montage template, the electronic processor 104 may be configured to automatically select another key image that includes a corresponding image from a comparison image study. The electronic processor 104 may also be configured to automatically analyze an image or multiple images to perform various types of analyses. For example, the electronic processor 104 may be configured to compare and describe index lesions, identify anomalies, compare findings or anatomical locations, determine progressions, take measurements, add one or more graphical annotations (“marktations”) to an image, or the like. For example, an image from a brain MRI showing an index nodular metastasis in the left occipital lobe may be added to a montage, and the electronic processor 104 may be configured to automatically compare and describe index lesions, automatically add a brain MRI image from the most recent comparison image study, and analyze and report the progression or regression of the lesion.
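For the index-lesion comparison described above, a simplified sketch is shown below; the percent-change tolerance and the report wording are illustrative assumptions.

```python
# Hypothetical lesion comparison: given measurements of the same index lesion in the
# current and prior studies, report percent change and a coarse progression label.
def describe_lesion_change(current_mm: float, prior_mm: float, tolerance_pct: float = 10.0) -> str:
    change_pct = (current_mm - prior_mm) / prior_mm * 100.0
    if change_pct > tolerance_pct:
        trend = "progressed"
    elif change_pct < -tolerance_pct:
        trend = "regressed"
    else:
        trend = "is stable"
    return (f"Index lesion measures {current_mm:.1f} mm, previously {prior_mm:.1f} mm "
            f"({change_pct:+.0f}%); the lesion {trend}.")


text = describe_lesion_change(12.0, 9.5)  # "... (+26%); the lesion progressed."
```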
The results of such analysis may be provided as text (e.g., for inclusion in a structured report), a table, or the like. For example, the electronic processor 104 may be configured to generate text based on the analysis and display the text to a user for review, editing, and approval. Similarly, the electronic processor 104 may be configured to create a table of findings and analyze the table to determine disease changes, such as by comparing images using one or more standard methodologies, such as the RECIST 1.1 rules. Such analysis may be reported to the user and, optionally, added to a structured report.
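As one example of a standard methodology, a simplified RECIST 1.1-style categorization of target-lesion response from sums of lesion diameters might look like the sketch below; it omits parts of the actual criteria (new lesions, nodal short-axis rules, non-target lesions) and is illustrative only.

```python
# Simplified RECIST 1.1-style categorization of target-lesion response from sums of
# diameters (mm). Real RECIST 1.1 assessment includes additional requirements omitted here.
def recist_target_response(baseline_sum: float, nadir_sum: float, current_sum: float) -> str:
    if current_sum == 0:
        return "CR"   # complete response: all target lesions resolved
    if current_sum >= 1.2 * nadir_sum and (current_sum - nadir_sum) >= 5.0:
        return "PD"   # progressive disease: >=20% and >=5 mm increase over the nadir
    if current_sum <= 0.7 * baseline_sum:
        return "PR"   # partial response: >=30% decrease from baseline
    return "SD"       # stable disease


assert recist_target_response(baseline_sum=50.0, nadir_sum=50.0, current_sum=30.0) == "PR"
assert recist_target_response(baseline_sum=50.0, nadir_sum=30.0, current_sum=40.0) == "PD"
```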
Particular sub-containers may also be designated as required or optional, and the electronic processor 104 may be configured to automatically prompt a user for a key image for such sub-containers and may be configured to prevent the user from submitting or saving a report or finding for an image study until all required key images have been added to the montage.
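A minimal sketch of such a submission gate is shown below, assuming each sub-container is represented as a dictionary with a label, a required flag, and an optional image; the representation and the prompting behavior are hypothetical.

```python
# Hypothetical submission gate: block saving the report until every required
# sub-container in the montage template holds a key image.
def missing_required_images(sub_containers: list[dict]) -> list[str]:
    return [
        c["label"]
        for c in sub_containers
        if c.get("required", False) and c.get("image") is None
    ]


def can_submit_report(sub_containers: list[dict]) -> bool:
    missing = missing_required_images(sub_containers)
    if missing:
        print("Prompting user for required key images:", ", ".join(missing))
        return False
    return True
```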
Different processing may be associated with different sub-containers of a montage template and may also differ depending on the key image positioned within a particular sub-container (or key images positioned in other sub-containers of the montage template). Also, in some embodiments, the GUI may provide a tool that allows the user to associate particular sub-containers with particular functionality. Also, the processing functionality may be customized for particular users or groups of users. Furthermore, in some embodiments, the processing for one or more sub-containers may be based on findings or other input from a user and, thus, may be dynamically updated based on user interaction.
Alternatively or in addition, the processing functionality associated with a particular montage template may be automatically generated or modified using artificial intelligence as described above for the rules for selecting key images. For example, a learning engine may be configured to automatically learn data patterns associated with labels or actions taken by a user to define processing for a particular sub-container. In some embodiments, a learning engine may also be configured to consider processing performed when a previous exam was read, such as a comparison image study. For example, under the appropriate circumstances, when an image is added to a montage, the electronic processor 104 may attempt to segment and measure the volume of anomalies if this was the processing performed when the comparison exam was read and reported. As an example, when a chest computed tomography (“CT”) slice is moved to the montage template, the electronic processor 104 may be configured to detect aortic abnormalities or other specific abnormalities that were assessed on the prior image study or clinical report. Also, feedback from a user regarding automatically-generated text could be provided as part of a closed feedback loop to help the system 100 learn the proper behaviors for processing key images. The labels associated with a montage template may also be used to automatically learn anatomy based on user actions. For example, labeled images may be used as training data for a learning engine.
In one embodiment, the system analyzes the exam images to understand the anatomical location, such that when a user selects an exam image as a key image, the image is automatically positioned in the proper location in the montage template. Thus, the montage template or key image template can work in two ways to increase efficiency: the template can provide 1) a means for labeling images as to anatomy or other characteristic(s), 2) a standardized format for key images that specifies an order or location that is automatically filled as key images are selected (as the system can automatically derive these characteristics), or both. The montage template therefore can enhance user consistency and efficiency in multiple ways. In other embodiments, the selection of key images by the user is supported by various automated and semi-automated arrangements. In one embodiment, a user clicks on an image. In another embodiment, a user provides an audio command to a conversational audio interface. The system may infer a selection, so that if a user says, “Normal brain”, the system might use configured or machine-learned rules to select one or more key images based on inferred actions.
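As an illustration of inferring key-image selections from a verbal command, the sketch below maps a recognized phrase to a configured rule; the command table, series descriptions, and parsing are assumptions made for the example.

```python
# Hypothetical handling of a spoken command: map the recognized phrase to a configured
# (or machine-learned) rule that selects key images for the montage.
COMMAND_RULES = {
    "normal brain": {
        "diagnosis": "normal",
        "series_to_include": ["AX FLAIR", "SAG T1 MIDLINE"],  # illustrative series names
    },
}


def key_images_for_command(command: str, study_images: list[dict]) -> list[dict]:
    rule = COMMAND_RULES.get(command.strip().lower())
    if rule is None:
        return []
    return [
        img for img in study_images
        if img.get("SeriesDescription") in rule["series_to_include"]
    ]
```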
Thus, embodiments described herein provide, among other things, methods and systems for automatically selecting, arranging, and processing key images for a medical image study. As described above, various rules may be applied by the systems and methods to quickly and effectively process image studies that may include hundreds or thousands of images while minimizing or eliminating user input or interaction. Machine learning techniques may be used to establish or modify such rules, which further improves the efficiency and effectiveness of the systems and methods. Various features and advantages of the invention are set forth in the following claims.
Number | Name | Date | Kind |
---|---|---|---|
6090044 | Bishop et al. | Jul 2000 | A |
6574304 | Hsieh et al. | Jun 2003 | B1 |
6687329 | Hsieh | Feb 2004 | B1 |
6819790 | Suzuki et al. | Nov 2004 | B2 |
6836558 | Doi et al. | Dec 2004 | B2 |
7130457 | Kaufman et al. | Oct 2006 | B2 |
7428323 | Hillman | Sep 2008 | B2 |
7529394 | Krishnan et al. | May 2009 | B2 |
7640051 | Krishnan et al. | Dec 2009 | B2 |
7672491 | Krishnan et al. | Mar 2010 | B2 |
7761345 | Martin et al. | Jul 2010 | B1 |
7788040 | Haskell et al. | Aug 2010 | B2 |
7949167 | Krishnan et al. | May 2011 | B2 |
8021045 | Foos et al. | Sep 2011 | B2 |
8107700 | Daw et al. | Jan 2012 | B2 |
8199985 | Jakobsson et al. | Jun 2012 | B2 |
8340437 | Abramoff | Dec 2012 | B2 |
8345940 | Mattiuzzi et al. | Jan 2013 | B2 |
8478698 | Mah | Jul 2013 | B1 |
8583450 | Baker et al. | Nov 2013 | B2 |
8687867 | Collins et al. | Apr 2014 | B1 |
8727989 | Baba | May 2014 | B2 |
8879813 | Solanki et al. | Nov 2014 | B1 |
9089303 | Chen et al. | Jul 2015 | B2 |
9092727 | Reicher | Jul 2015 | B1 |
9245337 | Schmidt et al. | Jan 2016 | B2 |
10127662 | Reicher | Nov 2018 | B1 |
10269114 | Reicher et al. | Apr 2019 | B2 |
10275876 | Reicher et al. | Apr 2019 | B2 |
10275877 | Reicher et al. | Apr 2019 | B2 |
10311566 | Reicher et al. | Jun 2019 | B2 |
10332251 | Reicher et al. | Jun 2019 | B2 |
20030147465 | Wu | Aug 2003 | A1 |
20040147840 | Duggirala et al. | Jul 2004 | A1 |
20050010098 | Frigstad et al. | Jan 2005 | A1 |
20050010445 | Krishnan et al. | Jan 2005 | A1 |
20050021375 | Shimizu et al. | Jan 2005 | A1 |
20050049497 | Krishnan | Mar 2005 | A1 |
20050113960 | Karau et al. | May 2005 | A1 |
20050231416 | Rowe et al. | Oct 2005 | A1 |
20050251013 | Krishnan et al. | Nov 2005 | A1 |
20050255434 | Lok et al. | Nov 2005 | A1 |
20060110018 | Chen et al. | May 2006 | A1 |
20060159325 | Zeineh et al. | Jul 2006 | A1 |
20060228015 | Brockway et al. | Oct 2006 | A1 |
20060274928 | Collins et al. | Dec 2006 | A1 |
20070036402 | Cahill et al. | Feb 2007 | A1 |
20070047786 | Aklilu et al. | Mar 2007 | A1 |
20070078679 | Rose | Apr 2007 | A1 |
20070118055 | McCombs | May 2007 | A1 |
20070118399 | Avinash et al. | May 2007 | A1 |
20070272747 | Woods et al. | Nov 2007 | A1 |
20080046286 | Halsted | Feb 2008 | A1 |
20080126982 | Sadikali | May 2008 | A1 |
20080163070 | Mahesh | Jul 2008 | A1 |
20080226147 | Hargrove et al. | Sep 2008 | A1 |
20090080731 | Krishnapuram et al. | Mar 2009 | A1 |
20090092300 | Jerebko et al. | Apr 2009 | A1 |
20090274384 | Jakobovits | Nov 2009 | A1 |
20090299977 | Rosales | Dec 2009 | A1 |
20090326989 | Crucs | Dec 2009 | A1 |
20100042422 | Summers | Feb 2010 | A1 |
20100082692 | Akinyemi | Apr 2010 | A1 |
20100121178 | Krishnan et al. | May 2010 | A1 |
20100312734 | Widrow | Dec 2010 | A1 |
20110123079 | Gustafson | May 2011 | A1 |
20110228995 | Batman | Sep 2011 | A1 |
20110301447 | Park et al. | Dec 2011 | A1 |
20120001853 | Tanaka | Jan 2012 | A1 |
20120054652 | Kawagishi et al. | Mar 2012 | A1 |
20120088981 | Liu et al. | Apr 2012 | A1 |
20120172700 | Krishnan et al. | Jul 2012 | A1 |
20120189176 | Giger et al. | Jul 2012 | A1 |
20120237109 | Rajpoot et al. | Sep 2012 | A1 |
20120250961 | Iwasaki | Oct 2012 | A1 |
20120283574 | Park et al. | Nov 2012 | A1 |
20120310399 | Metzger | Dec 2012 | A1 |
20120328178 | Remiszewski et al. | Dec 2012 | A1 |
20130090554 | Zvuloni et al. | Apr 2013 | A1 |
20130149682 | Raab | Jun 2013 | A1 |
20130204115 | Dam et al. | Aug 2013 | A1 |
20130290225 | Kamath et al. | Oct 2013 | A1 |
20130304751 | Yoshioka et al. | Nov 2013 | A1 |
20130314434 | Shetterly et al. | Nov 2013 | A1 |
20140010432 | Cohen-Solal et al. | Jan 2014 | A1 |
20140121487 | Faybishenko et al. | May 2014 | A1 |
20140155763 | Bruce | Jun 2014 | A1 |
20140161337 | Raykar et al. | Jun 2014 | A1 |
20140185888 | Kelm et al. | Jul 2014 | A1 |
20140218397 | Rutman et al. | Aug 2014 | A1 |
20140218552 | Huang | Aug 2014 | A1 |
20140219526 | Linguraru et al. | Aug 2014 | A1 |
20140244309 | Francois | Aug 2014 | A1 |
20140257854 | Becker et al. | Sep 2014 | A1 |
20140279807 | Dimitrijevic | Sep 2014 | A1 |
20140313222 | Anderson et al. | Oct 2014 | A1 |
20140314292 | Kamen et al. | Oct 2014 | A1 |
20140375671 | Giger et al. | Dec 2014 | A1 |
20150065803 | Douglas et al. | Mar 2015 | A1 |
20150072371 | Marugame | Mar 2015 | A1 |
20150091778 | Day | Apr 2015 | A1 |
20150103170 | Nelson et al. | Apr 2015 | A1 |
20150205917 | Mabotuwana et al. | Jul 2015 | A1 |
20150230876 | Roe et al. | Aug 2015 | A1 |
20150235365 | Mankovich | Aug 2015 | A1 |
20150262014 | Iwamura | Sep 2015 | A1 |
20150287192 | Sasaki | Oct 2015 | A1 |
20150302317 | Norouzi et al. | Oct 2015 | A1 |
20150320365 | Schulze et al. | Nov 2015 | A1 |
20150325018 | Ben Ayed | Nov 2015 | A1 |
20150331995 | Zhao et al. | Nov 2015 | A1 |
20150332111 | Kisilev et al. | Nov 2015 | A1 |
20160005106 | Giraldez et al. | Jan 2016 | A1 |
20160041733 | Qian | Feb 2016 | A1 |
20160275138 | Rutenberg et al. | Sep 2016 | A1 |
20160283489 | Uy | Sep 2016 | A1 |
20160292155 | Adriaensens | Oct 2016 | A1 |
20160350480 | Gerdeman | Dec 2016 | A1 |
20160350919 | Steigauf | Dec 2016 | A1 |
20160361025 | Reicher et al. | Dec 2016 | A1 |
20160361121 | Reicher et al. | Dec 2016 | A1 |
20160364526 | Reicher et al. | Dec 2016 | A1 |
20160364527 | Reicher et al. | Dec 2016 | A1 |
20160364528 | Reicher et al. | Dec 2016 | A1 |
20160364539 | Reicher et al. | Dec 2016 | A1 |
20160364630 | Reicher et al. | Dec 2016 | A1 |
20160364631 | Reicher et al. | Dec 2016 | A1 |
20160364857 | Reicher | Dec 2016 | A1 |
20160364862 | Reicher et al. | Dec 2016 | A1 |
20170039321 | Reicher | Feb 2017 | A1 |
20170091937 | Barnes et al. | Mar 2017 | A1 |
20170169192 | Sevenster | Jun 2017 | A1 |
20170262584 | Gallix | Sep 2017 | A1 |
20180144421 | Williams et al. | May 2018 | A1 |
20180260949 | Kreeger | Sep 2018 | A1 |
Entry |
---|
Filed Dec. 13, 2017, U.S. Appl. No. 15/840,689. |
Filed Dec. 13, 2017, U.S. Appl. No. 15/840,744. |
Filed Jun. 10, 2016, U.S. Appl. No. 15/179,409, US2016/0364862. |
Filed Jun. 10, 2016, U.S. Appl. No. 15/179,434, US2016/0364526. |
Filed Jun. 10, 2016, U.S. Appl. No. 15/179,452, US2016/0364527. |
Filed Jun. 10, 2016, U.S. Appl. No. 15/179,465, US2016/0364528. |
Filed Jun. 10, 2016, U.S. Appl. No. 15/179,501, US2016/0364630. |
Filed Jun. 10, 2016, U.S. Appl. No. 15/179,506, US2016/0364631. |
Filed Jun. 10, 2016, U.S. Appl. No. 15/179,674, US2016/0364539. |
Filed Jun. 10, 2016, U.S. Appl. No. 15/179,681, US2016/0361025. |
Filed Jun. 10, 2016, U.S. Appl. No. 15/179,448, US2016/0364857. |
Filed Jun. 10, 2016, U.S. Appl. No. 15/179,457, US2016/0361121. |
Chen et al., “An Automatic Diagnostic System for CT Liver Image Classification”, IEEE Transactions on Biomedical Engineering, Jun. 6, 1998, pp. 783-794, vol. 45, No. 6. |
Goldbaum et al., “Automated Diagnosis and Image Understanding with Object Extraction, Object Classification, and Inferencing in Retinal Images”, Department of Ophthalmology and Department of Engineering and Computer Science, 1996, 4 pages, University of California, La Jolla, CA, USA. |
Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,501 dated Oct. 10, 2017 (14 pages). |
Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,681 dated Oct. 16, 2017 (30 pages). |
Piccolo et al., “Dermoscopic diagnosis by a trained clinician vs. a clinician with minimal dermoscopy training vs. computer-aided diagnosis of 341 pigmented skin lesions: a comparative study”, British Journal of Dermatology, (2002), vol. 147, pp. 481-486, British Association of Dermatologists. |
Binder et al., “Application of an artificial neural network in epiluminescence microscopy pattern analysis of pigmented skin lesions: a pilot study”, British Journal of Dermatology, (1994), vol. 130, pp. 460-465. |
Carlson et al., “Pancreatic cystic neoplasms: the role and sensitivity of needle aspiration and biopsy”, Abdom Imaging, (1998), vol. 23, pp. 387-393, American Roentgen Ray Society, Washington D.C. |
Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,465 dated Oct. 13, 2017 (31 pages). |
Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,674 dated Oct. 30, 2017 (30 pages). |
Scott et al., “Telemedical Diagnosis of Retinopathy of Prematurity: Intraphysician Agreement between Ophthalmoscopic Examination and Image-Based Interpretation”, Ophthalmology, (Jul. 2008), vol. 115, No. 7. |
Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,448 dated Oct. 31, 2017 (14 pages). |
Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,457 dated Nov. 17, 2017 (15 pages). |
Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,506 dated Jan. 11, 2018 (10 pages). |
Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,465 dated Feb. 28, 2018 (32 pages). |
Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,681 dated Feb. 28, 2018 (23 pages). |
Lavrenko, V. et al.; “Automatic Image Annotation and Retrieval Using Cross-Media Relevance Models”; SIGIR'03; Jul. 28-Aug. 1, 2003. |
Wang, L. et al.; “Automatic Image Annotation and Retrieval Using Subspace Clustering Algorithm”; MMDB'04; Nov. 13, 2004. |
IPCOM000191498D; “Methods and Systems for Medical Image Analysis”; http://ip.com/IPCOM/000191498D; Jan. 6, 2010. |
Anonymously; “Method of Providing Translucent Annotations in Medical Images”; http://ip.com/IPCOM/000152706D; May 10, 2007. |
Non-Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,434 dated Mar. 12, 2018 (12 pages). |
Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,501 dated Apr. 9, 2018 (15 pages). |
Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,506 dated Jun. 8, 2018 (11 pages). |
Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,448 dated May 2, 2018 (15 pages). |
Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,457 dated Apr. 30, 2018 (15 pages). |
Non-Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,409 dated Jun. 15, 2018 (11 pages). |
Notice of Allowance from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,465 dated Jul. 25, 2018 (15 pages). |
Non-Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,674 dated Jul. 11, 2018 (14 pages). |
Non-Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,681 dated Jul. 9, 2018 (29 pages). |
Kim, N. et al., “An Engineering View on Megatrends in Radiology: Digitization to Quantitative Tools of Medicine”, Korean Journal of Radiology, Mar.-Apr. 2013, vol. 14, No. 2, pp. 139-153. |
Teng, C., “Managing DICOM Image Metadata with Desktop Operating Systems Native User Interface”, 22nd IEEE International Symposium on Computer-Based Medical Systems, 2009, (5 pages). |
Doi, K., “Computer-aided diagnosis in medical imaging: Historical review, current status and future potential”, Computerized Medical Imaging and Graphics, vol. 31, 2007, pp. 198-211. |
Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,434 dated Oct. 11, 2018 (27 pages). |
Non-Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,452 dated Oct. 19, 2018 (17 pages). |
Notice of Allowance from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,681 dated Oct. 30, 2018 (14 pages). |
Notice of Allowance from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,674 dated Oct. 18, 2018 (8 pages). |
Non-Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,409 dated Dec. 13, 2018 (47 pages). |
Notice of Allowance from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,434 dated Dec. 28, 2018 (9 pages). |
Corrected Notice of Allowability from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,681 dated Jan. 17, 2019 (7 pages). |
Notice of Allowance from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,448 dated Jan. 23, 2019 (9 pages). |
Notice of Allowance from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,457 dated Dec. 14, 2018 (8 pages). |
Notice of Allowance from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,501 dated Feb. 8, 2019 (9 pages). |
Corrected Notice of Allowability from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,674 dated Jan. 17, 2019 (7 pages). |
Supplemental Notice of Allowability from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,457 dated Feb. 1, 2019 (8 pages). |
Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,452 dated Mar. 8, 2019 (13 pages). |
Supplemental Notice of Allowability from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,448 dated Feb. 27, 2019 (4 pages). |
Non-Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/840,744 dated Mar. 5, 2019 (16 pages). |
Supplemental Notice of Allowability from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/1,43 dated Feb. 1, 2019 (5 pages). |
Notice of Allowance from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,506 dated Mar. 12, 2019 (7 pages). |
Supplemental Notice of Allowability from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,501 dated Mar. 27, 2019 (15 pages). |
Corrected Notice of Allowability from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,674 dated Mar. 25, 2019 (11 pages). |
Corrected Notice of Allowability from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,681 dated Mar. 25, 2019 (11 pages). |
Supplemental Notice of Allowability from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,448 dated Mar. 13, 2019 (7 pages). |
Supplemental Notice of Allowability from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,457 dated Apr. 1, 2019 (7 pages). |
Final Office Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,409 dated Jun. 13, 2019 (46 pages). |
Applicant-Initiated Interview Summary from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,452 dated May 28, 2019 (3 pages). |
Applicant-Initiated Interview Summary from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/840,744 dated May 23, 2019 (3 pages). |
Examiner's Answer to Appeal Brief from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,409 dated Mar. 5, 2020 (15 pages). |
Examiner's Answer to Appeal Brief from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/840,744 dated May 15, 2020 (14 pages). |
Supplemental Notice of Allowability from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,448 dated Apr. 1, 2019 (7 pages). |
Advisory Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/179,452 dated May 28, 2019 (3 pages). |
Advisory Action from the U.S. Patent and Trademark Office for U.S. Appl. No. 15/840,744 dated Oct. 11, 2019 (3 pages). |
Number | Date | Country | |
---|---|---|---|
20190180863 A1 | Jun 2019 | US |