1. Field of the Invention
The present invention relates to computerized image analysis and more particularly to interactive object and track array based image analysis methods.
2. Description of the Related Art
a. Description of Problem that Motivated the Invention
Computerized image analysis, namely, the computer extraction of regions containing objects of interest such as semiconductor wafer defects, circuit boards, bond wires, ball grid arrays, pad grid arrays, electronic packages, tissues, cellular and subcellular components, bacteria and viruses, and the analysis of the features measured from those regions, is a fundamental step in electronic industry quality control and quantitative microscopy, both of which have broad applications and markets.
Due to the complexity of image analysis tasks, it is difficult to have fully automated solutions except for some dedicated high-volume applications such as cancer screening and wafer defect inspection. Most computerized image analysis applications require interactive confirmation, editing and data analysis by users.
b. How Did Prior Art Solve the Problem?
Currently, most users perform image analysis using standard image processing software (such as Zeiss' AxioVision, Nikon's NIS-Elements, Olympus cellSens, ImageJ, Metamorph, ImagePro, Slidebook, Imaris, Volocity, etc.), custom scripts/programming, or by hand. It is difficult to apply standard image processing software functions to perform complete image analysis tasks. As a result, the majority of image analysis is performed either manually or using simple methods that have very limited applications and yield poor results. Biology image analysis products have recently been developed for high content screening applications. However, they are coupled to specific instrument platforms, cell types, and reagents, and are not flexible enough for broad applications.
The advancement of image acquisition and storage technologies enables many large-volume image analysis applications. For example, applications involving multi-spectral, 3D and time-lapse image sequences are becoming common. The interactive process in these large-volume analysis applications is challenging, as the large size of the images, objects and data makes interactive image analysis inefficient.
There is some prior art, such as the montage function of ImageJ, that allows multi-cropping of images and assembles the crops into an image montage or shows photo galleries. However, such prior art is limited to showing a lot of information in a small amount of space. It does not support efficient interactive image analysis including data confirmation, editing, measurements and subpopulation creation.
The current invention includes a computerized object region array interactive image analysis method that generates a displayed object region array to enable efficient interactive object region analysis to verify, edit and create a subpopulation of objects. It enables statistical tests to be applied to a single or a plurality of subpopulation data acquired from a single or a plurality of input images and a single or a plurality of interactive object region analyses. The method is especially efficient for time-lapse image sequences, as the displayed track or image sequence array allows a frame view for easy viewing, and the focused track or image sequence can be navigated to view the tracked object or image sequence at different time frames.
The primary objective of the invention is to provide an object region array for computerized interactive object analysis. The secondary objective of the invention is to provide a track region array for computerized interactive track analysis. The third objective of the invention is to provide an object region sequence array for computerized interactive object sequence analysis.
There are three application scenarios of the current invention: a computerized object region array interactive object analysis, a computerized track array interactive track analysis, and a computerized object sequence array interactive object sequence analysis.
The present invention is described below for the three above-mentioned application scenarios.
The input image 100 can be acquired from any digitization method such as a camera, scanner, photomultiplier, image sensor, etc. The images can be acquired with different spectra and modalities such as bright field, dark field, X-ray, IR, ultrasound, lasers, etc. in multi-dimensional (X, Y, Z, spectral, position) spaces.
In one embodiment of the invention, microscopy images are used as the input images. The microscopy images can be acquired from different microscopy modes such as total internal reflection fluorescence microscopy (TIRF), bright-field, phase contrast, differential interference contrast (DIC) microscopy, FRAP, FLIM and FRET, and may also come from 2D and 3D microscopes such as inverted, confocal and super-resolution microscopes.
As shown in
In one embodiment of the invention, a structure guided processing method as described in Lee; Shih-Jong J., "Structure-guided image processing and image feature enhancement", U.S. Pat. No. 6,463,175, Sep. 24, 2002 and Lee; Shih-Jong J., Oh; Seho, Huang; Chi-Chou, "Structure-guided automatic learning for image feature enhancement", U.S. Pat. No. 6,507,675, Jan. 14, 2003 can be used for object segmentation.
In another embodiment of the invention, the teachable segmentation method as described in Lee; Shih-Jong J., Oh; Seho “Learnable Object Segmentation”, U.S. Pat. No. 7,203,360, Apr. 10, 2007 can be used for object segmentation.
In alternative embodiments of the invention, segmentation methods such as thresholding, watershed, modeling and clustering can also be used for the object segmentation.
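As a non-limiting illustration of the thresholding alternative, the following is a minimal sketch assuming a single-channel grayscale input image; it uses Otsu thresholding and connected-component labeling from the scikit-image library, and the function name, minimum area parameter and library choice are illustrative assumptions rather than part of the described method.

```python
# Minimal sketch of threshold-based object segmentation (illustrative only).
# Assumes a 2D grayscale image; uses Otsu's threshold and connected-component
# labeling as one possible alternative embodiment of object segmentation.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def segment_objects(image, min_area=20):
    """Return a labeled mask of candidate objects above an Otsu threshold."""
    thresh = threshold_otsu(image)       # global intensity threshold
    binary = image > thresh              # foreground mask
    labeled = label(binary)              # connected-component labeling
    for region in regionprops(labeled):  # drop components too small to be objects
        if region.area < min_area:
            labeled[labeled == region.label] = 0
    return labeled
```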
The object measurements generate characterization features for each segmented object. A comprehensive set of characterization features can be extracted from a single or a plurality of segmented objects. In one embodiment of the invention, the characterization features include intensity space features, color space features and relational features.
The intensity space features are derived from the grayscale intensity in the region of a segmented object, such as the mean, standard deviation, skewness, kurtosis and other statistics. Moreover, pre-processing of the grayscale intensity can be performed before extracting the statistics. The pre-processing includes point operations such as logarithm conversion for optical density measurement, or filtering such as edge enhancement by linear or morphological gradient operators that include dark edge, bright edge and general edge enhancement, line enhancement, corner enhancement, etc.
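The sketch below illustrates one way such per-object intensity statistics could be computed; it assumes a labeled object mask such as the one produced in the segmentation sketch above, and the particular feature set and names are illustrative only.

```python
# Illustrative per-object intensity-space statistics (one possible feature set).
import numpy as np
from scipy.stats import skew, kurtosis

def intensity_features(image, labeled):
    """Compute simple intensity statistics for each labeled object region."""
    features = {}
    for obj_label in np.unique(labeled):
        if obj_label == 0:
            continue                            # skip background
        pixels = image[labeled == obj_label]    # grayscale values inside the object
        features[int(obj_label)] = {
            "mean": float(np.mean(pixels)),
            "std": float(np.std(pixels)),
            "skewness": float(skew(pixels)),
            "kurtosis": float(kurtosis(pixels)),
        }
    return features
```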
Note that when there are multiple channel images derived from different markers and/or imaging modalities, the intensity space features from each of the different channels can all be included.
In alternative embodiments of the invention, other intensity space features such as texture features derived from co-occurrence matrices or wavelet, run-lengths etc. may be used.
When the input image is a color image, color transformation may be applied to convert the color image into multi-bands of grayscale images. In one embodiment of the invention, the multiple bands include the following images: R (Red channel), G (Green channel), B (Blue channel), (R−G)/(R+G), R/(R+G+B), G/(R+G+B), B/(R+G+B), R/G, R/B, G/B, G/R, B/G, B/R, etc. In addition, RGB to HSI conversion can be performed to generate hue, saturation, and intensity bands.
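A minimal sketch of such a color transformation is given below, assuming an RGB input array; the bands computed are a subset of those listed above, and the small epsilon added to avoid division by zero is an implementation assumption not specified in the description.

```python
# Illustrative conversion of an RGB image into several grayscale bands.
import numpy as np

def color_bands(rgb, eps=1e-6):
    """Return selected normalized color bands from an RGB image of shape (H, W, 3)."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    total = r + g + b + eps
    return {
        "R": r,
        "G": g,
        "B": b,
        "(R-G)/(R+G)": (r - g) / (r + g + eps),
        "R/(R+G+B)": r / total,
        "G/(R+G+B)": g / total,
        "B/(R+G+B)": b / total,
    }
```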
The intensity space features with and without pre-processing as described in section B.1 can be generated for each band of the image.
In alternative embodiments of the invention, other feature spaces such as temporal space (for image sequences) or different focal planes (for 3D images) may be used.
Relational features characterize the spatial relations of multiple sets of objects by comprehensive collections of spatial mapping features. Some of the features have clearly understandable physical, structural, or geometrical meanings. Others are statistical characterizations, which may not have clear physical, structural or geometrical meanings when considered individually. A combination of these features can characterize subtle differences numerically using the comprehensive feature set.
In one embodiment of the invention, the relational feature extraction method as described in Lee; Shih-Jong J., Oh; Seho “Intelligent Spatial Reasoning”, U.S. Pat. No. 7,263,509, Aug. 28, 2007 can be used for relational features.
Whole image summary features for all segmented objects such as cells, colonies or grids from the image can be generated. The summary features include summary statistics such as the mean, variance, skewness, kurtosis, median, and top and bottom percentiles of the pattern features from the segmented objects of the image.
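The following sketch shows one way such whole-image summary statistics could be derived from a per-object feature table; pandas is used purely for convenience, and the 5th/95th percentile cut-offs are assumed values since the description does not fix the percentiles.

```python
# Illustrative whole-image summary of per-object features.
import pandas as pd

def summarize_features(per_object_features):
    """per_object_features: dict of {object_label: {feature_name: value}}."""
    df = pd.DataFrame.from_dict(per_object_features, orient="index")
    summary = {
        "mean": df.mean(),
        "variance": df.var(),
        "skewness": df.skew(),
        "kurtosis": df.kurt(),
        "median": df.median(),
        "p05": df.quantile(0.05),   # bottom percentile (assumed cut-off)
        "p95": df.quantile(0.95),   # top percentile (assumed cut-off)
    }
    return pd.DataFrame(summary)
```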
The object segmentation step may be assisted by an interactive update step. As shown in
The interactive update step shows the preliminarily segmented objects 500 to a user. The user can update the segmentation masks of the preliminarily segmented objects 500 by drawing new masks, deleting or inserting masks, or using a magic wand and/or other touch-up tools such as the tools provided in Adobe Photoshop software. After the interactive update, the resulting segmented objects should have segmentation masks that satisfy the user.
As shown in
The displayed objects display the detected objects on the computer display under a display mode selected from a group consisting of a whole image display, at least one object data graph, and at least one object data sheet. The whole image display highlights the detected objects with object indicators such as object centers, object bounding boxes, or other indications such as object masks with different colors for different objects. The object data graph is selected from a group consisting of a feature histogram plot, an object histogram plot, a scatter plot, a stacked scatter plot, a radial graph, a mesh graph, a surface graph, a volumetric graph, etc. The object data sheet contains sheets for object features and summary features.
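As a simple illustration of an object data graph, the sketch below draws a feature histogram and a two-feature scatter plot with matplotlib from the per-object feature table built earlier; the default feature names are placeholders, not prescribed measurements.

```python
# Illustrative object data graphs: a feature histogram and a scatter plot.
import matplotlib.pyplot as plt
import pandas as pd

def plot_object_data(per_object_features, x_feature="mean", y_feature="std"):
    """Show a histogram of one feature and a scatter plot of two features."""
    df = pd.DataFrame.from_dict(per_object_features, orient="index")
    fig, (ax_hist, ax_scatter) = plt.subplots(1, 2, figsize=(10, 4))
    ax_hist.hist(df[x_feature], bins=20)
    ax_hist.set_xlabel(x_feature)
    ax_hist.set_ylabel("object count")
    ax_scatter.scatter(df[x_feature], df[y_feature], s=15)
    ax_scatter.set_xlabel(x_feature)
    ax_scatter.set_ylabel(y_feature)
    plt.tight_layout()
    plt.show()
```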
The object array display 110 step shows the selected objects 108 in a computer display. It is shown as a displayed object region array 112. In the displayed object region array, the selected object regions are cut out and shown as an array (1D or 2D) of object regions in the computer display.
The displayed object region array 800 consists of an option 802 to indicate an object location within its object region, such as a circle in the center of the object, arrows, boxes, etc. It also consists of an option to overlay an object with its segmentation mask and an option to show a multi-channel composite color view of an object if the image includes multiple acquisition channels.
Other options include image zoom 804 and padding of the image region boundary 806.
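A minimal sketch of constructing such a displayed object region array follows: each object region is cut out around the object's bounding box with a configurable boundary padding, zoomed to a common tile size and tiled into a 2D grid. The helper name, the fixed tile size and the column count are illustrative assumptions.

```python
# Illustrative construction of an object region array (montage of cut-out regions).
import numpy as np
from skimage.measure import regionprops
from skimage.transform import resize

def object_region_array(image, labeled, tile=64, pad=4, columns=8):
    """Cut out padded object regions and tile them into a 2D array image."""
    tiles = []
    for region in regionprops(labeled):
        r0, c0, r1, c1 = region.bbox
        r0, c0 = max(r0 - pad, 0), max(c0 - pad, 0)                # pad region boundary
        r1, c1 = min(r1 + pad, image.shape[0]), min(c1 + pad, image.shape[1])
        tiles.append(resize(image[r0:r1, c0:c1], (tile, tile)))    # zoom to common size
    rows = int(np.ceil(len(tiles) / columns))
    montage = np.zeros((rows * tile, columns * tile))
    for i, t in enumerate(tiles):
        r, c = divmod(i, columns)
        montage[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile] = t
    return montage
```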
An interactive object region analysis 114 step is performed using the input image 100, the selected objects 108 and the displayed object region array 112 to generate an object analysis outcome 116.
The displayed object region array can be shown along with other display modes such as the whole image display 700, the at least one object data graph 702, and the at least one object data sheet 704. The data from different display modes 800, 700, 702, 704 are linked.
The interactive object region analysis picks a focused object from any of the display modes and the object is focused (highlighted) in other display modes.
The interactive object region analysis views the input image, the selected objects and the displayed object region array in different display modes and performs a single or a plurality of analysis steps including
a) deleting an object from the displayed object region array;
b) assigning a label to an object in the displayed object region array;
c) entering annotation to an object in the displayed object region array;
d) editing an object mask and updating object measurements;
e) performing manual measurements of an object; and
f) creating a subpopulation of objects.
The object analysis outcome 116 could be the outcomes of any of the above analysis steps or their combinations. In one embodiment of the invention, the object analysis outcome 116 is a subpopulation of objects after deleting unwanted objects and properly labeling the desired objects.
In one embodiment of the invention, the outcomes of any of the above analysis steps or their combinations, such as a subpopulation of objects with correct masks and/or desired measurements, can be further analyzed using the at least one object data graph 702 and the at least one object data sheet 704. Furthermore, the data could be tested using statistical test methods such as the T-test, ANOVA test, MANOVA test, Wilcoxon-Mann-Whitney test, Chi-square test, McNemar's test, Fisher's test, binomial test, Kolmogorov-Smirnov test, etc.
In another embodiment of the invention, a plurality of subpopulation data acquired from a plurality of input images and a plurality of interactive object region analyses can be combined and further analyzed using the object data graph or object data sheet. Furthermore, the combined data could be tested using statistical test methods such as the T-test, ANOVA test, MANOVA test, Wilcoxon-Mann-Whitney test, Chi-square test, McNemar's test, Fisher's test, binomial test, Kolmogorov-Smirnov test, etc.
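As an illustration of applying such statistical tests to subpopulation data, the sketch below compares one feature between two subpopulations using scipy.stats; the choice of Welch's t-test and the Mann-Whitney U test is only an example drawn from the tests listed above, and the function name is illustrative.

```python
# Illustrative statistical comparison of one feature between two subpopulations.
from scipy import stats

def compare_subpopulations(values_a, values_b):
    """values_a, values_b: sequences of one feature measured in two subpopulations."""
    t_stat, t_p = stats.ttest_ind(values_a, values_b, equal_var=False)  # Welch's t-test
    u_stat, u_p = stats.mannwhitneyu(values_a, values_b, alternative="two-sided")
    return {"t_test_p": float(t_p), "mann_whitney_p": float(u_p)}
```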
The input image sequence 200 can be acquired by the same methods as the input image 100 described in section I.1, except that the input image sequence 200 includes a temporal dimension. That is, it contains a time sequence of input images 100.
As shown in
In one embodiment of the invention, the object detection and tracking includes object segmentation followed by tracking of the segmented objects. This embodiment is suitable for the tracking of large objects such as cells or colonies. In this embodiment, the object segmentation can be performed using the method described in section I.2.A.
In another embodiment, the objects are detected and tracked directly without segmentation. This embodiment is suitable for the tracking of small objects such as particles. For object tracking, the method described in Lee; Shih-Jong J., Oh; Seho “Method for moving cell detection from temporal image sequence model estimation”, U.S. Pat. No. 8,045,783, Oct. 25, 2011 can be used. In alternative embodiments of the invention, other tracking methods such as autocorrelation, Kalman filter, etc. can also be used for tracking.
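A minimal sketch of one simple alternative tracker, frame-to-frame nearest-neighbor linking of detected object centroids, is given below; it is not the referenced patented tracking method, and the maximum linking distance is an assumed parameter.

```python
# Illustrative nearest-neighbor linking of object centroids across frames.
# A simple alternative tracker, not the referenced patented method.
import numpy as np

def link_tracks(centroids_per_frame, max_dist=15.0):
    """centroids_per_frame: list of (N_t, 2) arrays of object centers per frame.
    Returns a list of tracks, each a list of (frame, centroid) pairs."""
    tracks = [[(0, c)] for c in centroids_per_frame[0]]
    for t, centroids in enumerate(centroids_per_frame[1:], start=1):
        unused = list(range(len(centroids)))
        for track in tracks:
            last_frame, last_c = track[-1]
            if last_frame != t - 1 or not unused:
                continue                               # track ended or no candidates left
            dists = [np.linalg.norm(centroids[i] - last_c) for i in unused]
            j = int(np.argmin(dists))
            if dists[j] <= max_dist:
                track.append((t, centroids[unused[j]]))
                unused.pop(j)
        tracks.extend([[(t, centroids[i])] for i in unused])  # unmatched objects start new tracks
    return tracks
```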
The tracked object measurement 1004 step generates characterization features for each track and for the object of the track at each time point. That is, the characterization features include time dependent object features and time independent track features.
The time dependent object features include the characterization features described in section II.2.B for each time point. In addition, kinetic features such as the changes in intensity space, color space and relational features, as well as the change velocity and change acceleration of the trajectories and their pattern features, can be included.
The time independent track features include whole track trajectory features such as the total time, first frame, last frame, sum of incremental velocities in the x and y directions over the whole track, sum of squares of incremental velocities in the x and y directions over the whole track, length of the straight line connecting the track starting and end points, straight line length divided by total time, total incremental distance of the curved track path, total length divided by total time, sum of angle changes along the length of the track, ratio of straight line velocity to curvilinear velocity, the area of a bounding box generated from the minimum and maximum values of the x- and y-coordinates over the track lifetime, etc.
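The sketch below illustrates how a few of these time-independent track features could be computed from a track's centroid trajectory; the input format follows the tracker sketch above, and the selection and naming of features are illustrative.

```python
# Illustrative computation of a few time-independent track features.
import numpy as np

def track_features(track):
    """track: list of (frame, centroid) pairs in time order, as produced above."""
    frames = np.array([f for f, _ in track], dtype=float)
    points = np.array([c for _, c in track], dtype=float)
    steps = np.diff(points, axis=0)                      # incremental displacements
    total_time = frames[-1] - frames[0]
    straight_length = float(np.linalg.norm(points[-1] - points[0]))
    path_length = float(np.sum(np.linalg.norm(steps, axis=1)))
    mins, maxs = points.min(axis=0), points.max(axis=0)
    return {
        "total_time": float(total_time),
        "first_frame": float(frames[0]),
        "last_frame": float(frames[-1]),
        "straight_line_length": straight_length,
        "path_length": path_length,
        "straight_line_velocity": straight_length / total_time if total_time else 0.0,
        "curvilinear_velocity": path_length / total_time if total_time else 0.0,
        "bounding_box_area": float(np.prod(maxs - mins)),
    }
```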
The object detection and tracking step may be assisted by an interactive track update step. As shown in
The interactive track update 1102 step shows the preliminarily tracked objects 1100 to a user. The user can update the segmentation masks and/or the tracks of the preliminarily tracked objects 1100 by editing masks and/or deleting, connecting or inserting tracks. After the interactive track update, the resulting tracked objects should have tracks and/or segmentation masks that satisfy the user.
As shown in
The displayed tracks display the detected tracks on the computer display under a display mode selected from a group consisting of a whole image display, at least one data graph, and at least one data sheet. The whole image display highlights the detected tracks with track indicators. The indicator could include a trajectory plot of the time course of the tracked object center with a specifiable time span. The trajectory could be colored or faded as a function of time. The trajectory colors could be different for different tracks. The data graph is selected from a group consisting of a feature histogram plot, an object histogram plot, a scatter plot, a stacked scatter plot, a radial graph, a mesh graph, a surface graph, a volumetric graph, a time trace plot, a directional plot, a lineage plot, a kymograph, a mean-squared displacement time plot, etc. The data sheet is selected from a group consisting of a sheet of summary statistics, a sheet of object measurements per time point, a sheet of track measurements, and a sheet of lineage measurements.
The track array display 210 step shows the selected tracks 208 in a computer display. Similar to the object array display described in section II.4, it is shown as a displayed track array 212. In the displayed track array, the selected tracks at each time point are cut out and shown as an array (1D or 2D) of track regions in the computer display. The displayed track array displays regions of the selected tracks sorted under an order selected from a group consisting of track feature ranking, track indices, and manual arrangement.
The displayed track array displays regions of the selected tracks at a time frame selected from a group consisting of track middle frame, track first frame, track last frame, and track event frame. The displayed track array consists of an option to indicate a tracked object location within its track array region such as a circle in the center of the tracked object or arrows, boxes, etc. It also consists of an option to overlay a tracked object with its segmentation mask and an option to show multi-channel composite color view of a tracked object if the image includes multiple acquisition channels. Other options include image zoom, padding of image region boundary, etc.
As shown in
An interactive track analysis 214 step is performed using the input image sequence 200, the selected tracks 208 and the displayed track array 212 to generate a track analysis outcome 216.
The displayed track array can be shown along with other display modes such as the whole image display, the at least one data graph, and the at least one data sheet. The data from different display modes are linked.
The interactive track analysis views the input image sequence, the selected tracks and the displayed track array in different display modes and performs a single or a plurality of analysis steps including
a) deleting a track from the displayed track array
b) assigning a label to a track in the displayed track array
c) entering annotation to a track in the displayed track array
d) editing a track and updating track measurements
e) performing manual measurements of a track
f) creating a subpopulation of tracks
The track analysis outcome 216 could be the outcomes of any of the above analysis steps or their combinations. In one embodiment of the invention, the track analysis outcome 216 is a subpopulation of tracks after deleting unwanted tracks and properly editing and labeling the desired tracks.
In one embodiment of the invention, the outcomes of any of the above analysis steps or their combinations, such as a subpopulation of tracks with correct detection, masks, tracks and/or desired measurements, can be further analyzed using the at least one data graph and the at least one data sheet. Furthermore, the data could be tested using statistical test methods such as the T-test, ANOVA test, MANOVA test, Wilcoxon-Mann-Whitney test, Chi-square test, McNemar's test, Fisher's test, binomial test, Kolmogorov-Smirnov test, etc.
In another embodiment of the invention, a plurality of subpopulation data acquired from a plurality of input image sequences and a plurality of interactive track analyses can be combined and further analyzed using the data graph or data sheet. Furthermore, the combined data could be tested using statistical test methods such as the T-test, ANOVA test, MANOVA test, Wilcoxon-Mann-Whitney test, Chi-square test, McNemar's test, Fisher's test, binomial test, Kolmogorov-Smirnov test, etc.
Computerized object sequence array interactive object sequence analysis is applicable to image sequences where the objects do not move much, such as the application of measuring calcium oscillations of cells.
This is the same as described in section II.1.
This is the same as described in section I.2 except that the detection is performed on one frame of the input image sequence 200. The frame can be specified by a user input or defaults to the first frame. In addition to measuring the object measurements in I.2.B for a single or a plurality of time frames, the time dependent object features and time independent track features described in section III.2.B can also be calculated.
This is the same as described in section I.3.
This is the same as the track array display in II.4 except that it displays regions of the selected object at a time frame selected from a group consisting of the frame used for object detection, the first frame, the last frame, the middle frame, and a user specified frame.
This is basically the same as the interactive track analysis in II.5. It views the input image sequence, the selected objects and the displayed object region sequence array and performs an analysis step selected from a group consisting of:
a) deleting an object sequence from the displayed object region sequence array;
b) assigning a label to an object sequence in the displayed object region sequence array;
c) entering annotation to an object sequence in the displayed object region sequence array;
d) editing an object mask and updating object measurements;
e) performing manual measurements of an object sequence; and
f) creating a sub-population of object sequences.
The object sequence analysis outcome 316 could be the outcomes of any of the above analysis steps or their combinations. In one embodiment of the invention, the object sequence analysis outcome 316 is a subpopulation of object sequences after deleting unwanted object sequences and properly editing and labeling the desired object sequences.
In one embodiment of the invention, the outcomes of any of the above analysis steps or their combinations, such as a subpopulation of object sequences with correct detection masks and/or desired measurements, can be further analyzed using the at least one object data graph and the at least one object data sheet. Furthermore, the data could be tested using statistical test methods such as the T-test, ANOVA test, MANOVA test, Wilcoxon-Mann-Whitney test, Chi-square test, McNemar's test, Fisher's test, binomial test, Kolmogorov-Smirnov test, etc.
In another embodiment of the invention, a plurality of subpopulation data acquired from a plurality of input image sequences and a plurality of interactive object sequence analyses can be combined and further analyzed using the object data graph or object data sheet. Furthermore, the combined data could be tested using statistical test methods such as the T-test, ANOVA test, MANOVA test, Wilcoxon-Mann-Whitney test, Chi-square test, McNemar's test, Fisher's test, binomial test, Kolmogorov-Smirnov test, etc.
The invention has been described herein in considerable detail in order to comply with the Patent Statutes and to provide those skilled in the art with the information needed to apply the novel principles and to construct and use such specialized components as are required. However, it is to be understood that the inventions can be carried out by specifically different equipment and devices, and that various modifications, both as to the equipment details and operating procedures, can be accomplished without departing from the scope of the invention itself.
This work was supported by U.S. Government grant number 5R44HL106863-03, awarded by the National Heart, Lung, and Blood Institute. The U.S. Government may have certain rights in the invention.