The present invention relates to an image processing device, an image processing method, and a program for processing data of Z-stack images.
In the field of pathology, a virtual slide system is available as an alternative to an optical microscope, which is a tool for pathologic diagnosis. The virtual slide system enables a user to capture images of a specimen to be examined that is placed on a preparation, digitize the captured images, and perform pathologic diagnosis on a display. With the digitization of pathologic diagnosis using a virtual slide system, an optical microscope image of a specimen to be examined, which is handled directly in the related art, can be handled as digital data. Accordingly, increased convenience can be expected in terms of explanation to a patient using a digital image, sharing of rare cases, higher speed in telediagnosis, higher efficiency in education and practice, and so forth.
A function of obtaining Z-stack images of a virtual slide is very useful in grasping the three-dimensional structure of a target analyte (PTL 1). In this description, “Z-stack images” are data constituted by a plurality of images captured by a microscope device while the focal position is changed. Each of the images constituting the “Z-stack images” is called a “layer” or “layer image”.
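As an informal illustration of this terminology only, the following sketch shows one possible in-memory representation of Z-stack data; the names ZStack, layers, and z_positions are assumptions of the sketch and not part of any device described herein.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ZStack:
    """Z-stack data: layer images captured at different focal (Z) positions."""
    layers: list        # one 2-D numpy array per layer, index = layer number
    z_positions: list   # focal position of each layer (e.g. in micrometres)

    def layer(self, i: int) -> np.ndarray:
        # Return the i-th layer image.
        return self.layers[i]
```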
In the case of capturing the individual layer images of Z-stack images, the image capture element and the analyte (preparation) may be relatively displaced depending on the image capture position (Z position), due to an error in the mechanism that moves the image capture element or an error in the mechanism of the stage that moves the analyte to be captured. Accordingly, a structural object such as a cell in the analyte may be displaced in each of the plurality of captured layer images, which may hinder accurate grasping of the three-dimensional structure of the structural object.
Accordingly, the present invention provides an image processing device capable of reducing a displacement of a structural object in Z-stack images.
An image processing device according to an aspect of the present invention includes an image obtaining unit, a displacement obtaining unit, and a displacement correcting unit. The image obtaining unit obtains data of Z-stack images including a plurality of layer images which are obtained by capturing, with a microscope device, images of an analyte at different Z-direction positions. The displacement obtaining unit obtains information regarding a displacement in an XY plane in at least one layer image among the plurality of layer images. The displacement correcting unit corrects the displacement in the at least one layer image in accordance with the information regarding the displacement.
According to the aspect of the present invention, an image processing device capable of reducing a displacement of a structural object in Z-stack images can be provided.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
Hereafter, exemplary embodiments of the present invention will be described with reference to the drawings.
A first embodiment for realizing the present invention will be described with reference to the drawings.
A method according to this embodiment is realized with the system configuration illustrated in
In this embodiment, the image processing device 102 is formed of, for example, the hardware configuration illustrated in
A functional block diagram of the image processing device 102 according to this embodiment is illustrated in
In this embodiment, the situation illustrated in
In
If a displacement of an image capture element does not occur during image capturing, all pieces of image data recorded as the images of layer 0 (403), layer 1 (404), and layer 2 (405) correspond to the same image 501 illustrated in
In this embodiment, description will be given of a method for displaying image data after removing a position error with respect to these XY planes from the image data.
First, in step S701, various initial settings are performed.
Subsequently, in step S702, the image correction control unit 303 selects one layer image (first layer image) serving as a reference for displacement correction. This function of the image correction control unit 303 corresponds to a selecting unit of the present invention. Basically, any layer image selected from among a plurality of layer images may be used as the reference layer image.
Subsequently, in step S703, the feature region extracting unit 304 extracts, from the selected reference layer image, one or more feature regions each representing a certain feature of the image. An appropriate process such as corner extraction may be used to extract the feature regions. In this embodiment, each feature region may be a corner (feature point) extracted in accordance with a known corner extraction algorithm (corner detection algorithm). For example, the image 611 is selected as the reference layer image, and the corner extraction algorithm is applied to the image 611. Accordingly, corners 801 to 805 are extracted as feature points.
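As a non-limiting sketch of this extraction step, the following code detects corner feature points in a single layer image. It assumes OpenCV and NumPy are available and an 8-bit image; the Shi-Tomasi detector stands in for “a known corner extraction algorithm”, and the function name and parameter values are illustrative only.

```python
import cv2
import numpy as np

def extract_feature_points(layer_image: np.ndarray, max_corners: int = 50):
    """Extract corner feature points (x, y) from one layer image."""
    # Convert to grayscale if the layer is a colour image.
    gray = (cv2.cvtColor(layer_image, cv2.COLOR_BGR2GRAY)
            if layer_image.ndim == 3 else layer_image)
    # Shi-Tomasi corner detection; Harris or FAST could be used instead.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return []
    # corners has shape (N, 1, 2); flatten to a list of (x, y) tuples.
    return [tuple(pt) for pt in corners.reshape(-1, 2)]
```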
Subsequently, in step S704, the feature region extracting unit 304 determines whether or not the number of extracted feature regions is one. If the determination result in step S704 is “YES”, the process proceeds to step S706. If the determination result in step S704 is “NO”, a plurality of feature regions exist in the image, and thus the feature region extracting unit 304 selects one of them. Which feature region is selected from among the plurality of feature regions is not limited here, and a user may set an appropriate selection method in accordance with the purpose. In this embodiment, five feature points are illustrated in
Subsequently, in step S706, the image correction control unit 303 selects a certain layer image (second layer image), from which a feature region has not yet been extracted, from among the layer images other than the reference layer image.
Subsequently, in step S707, the feature region extracting unit 304 extracts one or more feature regions from the selected layer image. If a plurality of feature regions are extracted in step S703, it is likely that a plurality of feature regions are also extracted in step S707. Thus, in this case, a maximum amount of displacement that can be corrected may be defined in advance, and one or more feature regions may be extracted from a limited region determined by the position of the feature region that has already been selected and the maximum amount of displacement from that position. With this process, the feature region corresponding to the feature region that has already been extracted can be efficiently extracted. For example, when this process is applied to the images 601 and 621, the corners corresponding to the corner 803 which has already been extracted are a corner 901 in the image 601, and a corner 911 in the image 621 (
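One possible way to limit the search region as described above is sketched below: only a window defined by the preset maximum amount of displacement around the already-selected corner is examined, and the candidate closest to the reference position is taken. OpenCV, grayscale layers, and the tie-breaking rule are assumptions of this sketch.

```python
import cv2

def find_corresponding_corner(layer_image, reference_xy, max_displacement):
    """Search for the corner corresponding to an already-selected corner,
    restricted to a window of +/- max_displacement pixels around it."""
    x, y = int(reference_xy[0]), int(reference_xy[1])
    d = int(max_displacement)
    h, w = layer_image.shape[:2]
    # Clip the search window to the image bounds.
    x0, x1 = max(0, x - d), min(w, x + d + 1)
    y0, y1 = max(0, y - d), min(h, y + d + 1)
    window = layer_image[y0:y1, x0:x1]
    corners = cv2.goodFeaturesToTrack(window, maxCorners=10,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return None
    # Convert window coordinates back to image coordinates and pick the
    # candidate closest to the reference position (one possible rule).
    candidates = [(cx + x0, cy + y0) for cx, cy in corners.reshape(-1, 2)]
    return min(candidates, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
```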
Subsequently, in step S708, it is determined whether or not feature regions have been extracted from all the layer images. If the determination result in step S708 is “YES”, the process proceeds to step S709. If the determination result in step S708 is “NO”, the process returns to step S706.
Subsequently, in step S709, the image correction control unit 303 selects one layer image other than the reference layer image.
Subsequently, in step S710, the correction amount calculating unit 305 calculates an appropriate correction amount for the selected layer image. This function of the correction amount calculating unit 305 corresponds to the calculating unit of the present invention. Specifically, the correction amount calculating unit 305 may calculate an appropriate correction amount by using the difference between the position of the feature region in the reference layer image and the position of the feature region in the selected layer image.
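In the simplest case this calculation is just the coordinate difference between the matched feature regions, as in the following sketch (pixel units assumed; the sign convention is such that adding the correction to the layer cancels the displacement):

```python
def correction_amount(ref_corner_xy, layer_corner_xy):
    """Correction amount = reference position minus observed position."""
    dx = ref_corner_xy[0] - layer_corner_xy[0]
    dy = ref_corner_xy[1] - layer_corner_xy[1]
    return dx, dy
```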
Subsequently, in step S711, it is determined whether or not correction amounts in all the layer images have been calculated. If the determination result in step S711 is “YES”, the process proceeds to step S712. If the determination result in step S711 is “NO”, the process returns to step S709.
Subsequently, in step S712, the correction amount calculating unit 305 creates a correction amount table in a memory or the like and stores values obtained through calculation in the table. The correction amount table stores, as shown in Table 1, layer numbers and corresponding X-direction and Y-direction correction amounts as elements.
In this embodiment, layer 1 (404) is the reference layer, and thus the correction amounts in the X direction and the Y direction for layer 1 are zero. The unit of a correction amount may be set as appropriate; the simplest choice is to use the number of pixels as the unit.
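A minimal sketch of such a correction amount table, keyed by layer number and using pixels as the unit, might look as follows; the dictionary layout and the names are illustrative, not a prescribed format.

```python
def build_correction_table(ref_layer, ref_corner, corner_by_layer):
    """corner_by_layer: {layer_number: (x, y)} of the matched corner per layer.
    Returns {layer_number: (dx, dy)} in pixels; the reference layer gets (0, 0)."""
    table = {ref_layer: (0.0, 0.0)}
    for layer, corner in corner_by_layer.items():
        if layer == ref_layer:
            continue
        # Correction amount = reference position minus observed position.
        table[layer] = (ref_corner[0] - corner[0], ref_corner[1] - corner[1])
    return table
```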
After step S712, the processing ends.
This concludes the description of the flow of the image processing according to this embodiment.
According to this embodiment, when a plurality of depth images are actually displayed, displacements in the X and Y directions in the images can be corrected by using the correction amounts for the displayed layer images. Compared to the related art, a more accurate three-dimensional structure of a subject can be grasped.
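As one way of applying a stored correction amount to a layer image at display time, the sketch below shifts the image by whole pixels using a wrap-around roll; in a real viewer the pixels rolled in at the border would be masked or cropped, and sub-pixel corrections would need interpolation. NumPy is assumed.

```python
import numpy as np

def apply_correction(layer_image: np.ndarray, dx: float, dy: float) -> np.ndarray:
    """Shift a layer image by the stored (dx, dy) correction before display."""
    # np.roll wraps around at the borders; wrapped-in pixels should normally
    # be masked or cropped when the corrected image is shown.
    return np.roll(layer_image, (int(round(dy)), int(round(dx))), axis=(0, 1))
```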
A second embodiment for realizing the present invention will be described with reference to the drawings.
The system configuration, hardware configuration, and functional blocks of the image processing device according to this embodiment are the same as those in the first embodiment. However, the algorithm for correcting displacements in individual layer images is different.
In the first embodiment, the situation illustrated in
In the second embodiment, the situation illustrated in
In
If a displacement of an image capture element or an analyte does not occur during image capturing, pieces of image data recorded as the images of layer 0 (1004) to layer 3 (1007) are images 1101, 1111, 1121, and 1131 illustrated in
In this embodiment, description will be given of a method for displaying image data after removing a position error with respect to these XY planes from the image data.
First, in step S1301, various initial settings are performed.
Subsequently, in step S1302, a certain integer-type variable i is prepared, and zero is assigned thereto.
Subsequently, in step S1303, one or more feature regions in layer i are extracted.
Subsequently, in step S1304, one or more feature regions in layer i+1 are extracted.
Subsequently, in step S1305, a feature region common to layer i and layer i+1 is selected. Here, a “common feature region” means a pair of feature regions, one in each layer, whose amount of displacement from each other is smaller than a preset maximum amount of displacement and whose image patterns (features) are sufficiently similar.
There may be a plurality of pairs of feature regions that satisfy the above-described conditions. In that case, any one of the pairs may be selected in the actual algorithm. For example, in this embodiment, a pair of corners 1203 and 1214 (first feature region), a pair of corners 1215 and 1225 (second feature region), and a pair of corners 1224 and 1233 (third feature region) are selected as common feature regions. Between the first and second feature regions, and between the second and third feature regions, the displacement is larger than the preset maximum amount of displacement and/or the image patterns (features) are not similar to each other.
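The two conditions on a common feature region (a displacement below the preset maximum, and sufficiently similar image patterns) could be checked as in the sketch below, which compares small patches around the two corners by normalized cross-correlation. Grayscale layers, the patch size, and the similarity threshold are assumptions of the sketch.

```python
import numpy as np

def is_common_feature(img_a, pt_a, img_b, pt_b, max_displacement,
                      patch=15, min_ncc=0.9):
    """Return True if the two corners can be treated as a common feature region."""
    dx, dy = pt_b[0] - pt_a[0], pt_b[1] - pt_a[1]
    if dx * dx + dy * dy > max_displacement ** 2:
        return False  # displacement exceeds the preset maximum

    half = patch // 2

    def cut(img, pt):
        x, y = int(pt[0]), int(pt[1])
        return img[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)

    a, b = cut(img_a, pt_a), cut(img_b, pt_b)
    if a.shape != b.shape or a.size == 0:
        return False  # patch fell outside the image
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return False
    ncc = (a * b).sum() / denom  # normalized cross-correlation in [-1, 1]
    return ncc >= min_ncc
```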
Subsequently, in step S1306, with the use of an amount of displacement between the extracted feature region in layer i and the extracted feature region in layer i+1, a correction amount in each layer is calculated. This is a process of, for example, creating Table 2. When i is zero, layer 0 is regarded as a reference, an amount of displacement between the two feature regions is regarded as a correction amount for layer 1, and the correction amount is stored in the table.
Subsequently, in step S1307, the integer-type variable i is incremented by one.
Subsequently, in step S1308, it is determined whether or not the value of i is equal to N−1, where N represents the total number of layers. If the determination result in step S1308 is “YES”, the process proceeds to step S1309. If the determination result in step S1308 is “NO”, the process returns to step S1303.
Subsequently, in step S1309, a correction amount table for all the layers is created. Specifically, for example, if three tables, Table 2 to Table 4, have already been created through repetition of step S1306 (in this embodiment, N is 4), new correction amounts for layer 2 are calculated with reference to the correction amounts for layer 1 in Table 2. Likewise, new correction amounts for layer 3 are calculated with reference to the correction amounts for layer 2 in Table 3. As a result of combining all the results, Table 5 is obtained.
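Combining the per-pair tables into one table referenced to layer 0 amounts to a running sum of the pairwise correction amounts along the layer chain, as the following sketch shows (the list/tuple layout is an assumption):

```python
def accumulate_corrections(pairwise):
    """pairwise[i] = (dx, dy) correction of layer i+1 relative to layer i
    (Tables 2 to 4 in the text). Returns {layer: (dx, dy)} relative to
    layer 0 for all layers (corresponding to Table 5)."""
    cumulative = {0: (0.0, 0.0)}
    dx_total = dy_total = 0.0
    for i, (dx, dy) in enumerate(pairwise):
        dx_total += dx
        dy_total += dy
        cumulative[i + 1] = (dx_total, dy_total)
    return cumulative
```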
After step S1309, the processing ends.
This concludes the description of the flow of the image processing according to this embodiment.
According to this embodiment, when a plurality of depth images are actually displayed, displacements in the X and Y directions in the images can be corrected by using the correction amounts for the displayed layer images. Compared to the related art, a more accurate three-dimensional structure of a subject can be grasped. Also, compared to the first embodiment, displacement correction can be applied to a larger number of types of subject images.
A third embodiment for realizing the present invention will be described with reference to the drawings.
The hardware configuration and functional blocks of the image processing device according to this embodiment are the same as those in the first and second embodiments. However, the system configuration and the image processing algorithm are different.
In the first and second embodiments, the system configuration illustrated in
In this embodiment, the situation illustrated in
The situation illustrated in
First, in step S1701, various initial settings are performed.
Subsequently, in step S1702, a reference layer serving as a reference of a correction process is selected. It is desirable to select, as a reference layer, a layer with no displacement caused by inclination. However, information regarding inclination is not always obtained from the image. In that case, for example, a user may directly specify a reference layer.
Subsequently, in step S1703, a certain integer-type variable i is prepared, and the layer number of the reference layer is assigned thereto.
Subsequently, in step S1704, a group of feature regions in the reference layer is extracted. In the first and second embodiments, only one feature region is selected. In the third embodiment, a plurality of feature regions are extracted, and a sufficient number of feature regions among them are used for correction.
Subsequently, in step S1705, the integer-type variable i is decremented by one.
Subsequently, in step S1706, a group of feature regions in layer i is extracted.
Subsequently, in step S1707, image correction is performed by using the groups of feature regions extracted in steps S1704 and S1706. At this time, the groups of feature regions extracted from the two layers may be associated with each other, and correction may be performed by using a method such as warping in image processing. The specific method is illustrated in
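One possible realization of this correction step, assuming OpenCV is available, estimates a partial affine transform from the associated feature groups and warps the layer so that it lines up with the reference layer; a full homography or a non-rigid warp could be substituted where a single affine model is insufficient. The function name and the use of RANSAC are assumptions of this sketch.

```python
import cv2
import numpy as np

def correct_layer_by_features(layer_image, ref_points, layer_points):
    """Warp a layer so its feature-region group aligns with the reference group.
    ref_points / layer_points: matched (x, y) coordinates in the same order."""
    src = np.asarray(layer_points, dtype=np.float32)
    dst = np.asarray(ref_points, dtype=np.float32)
    # Estimate rotation + uniform scale + translation robustly with RANSAC.
    matrix, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if matrix is None:
        return layer_image  # not enough reliable matches; leave the layer as is
    h, w = layer_image.shape[:2]
    return cv2.warpAffine(layer_image, matrix, (w, h))
```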
Subsequently, in step S1708, it is determined whether or not i is equal to zero. If the determination result in step S1708 is “YES”, the process proceeds to step S1709. If the determination result in step S1708 is “NO”, the process returns to step S1705.
Subsequently, in step S1709, the layer number of the reference layer is assigned to variable i again.
Subsequently, in step S1710, the variable i is incremented by one.
Subsequently, in step S1711, a group of feature regions in layer i is extracted as in step S1706.
Subsequently, in step S1712, image correction is performed by using the groups of feature regions extracted in steps S1704 and S1711. A process similar to that in step S1707 may be used for the correction.
Subsequently, in step S1713, it is determined whether or not i is equal to N−1, where N represents the total number of layers. If the determination result in step S1713 is “YES”, the processing ends. If the determination result in step S1713 is “NO”, the process returns to step S1710.
This concludes the description of the flow of the image processing according to this embodiment.
According to this embodiment, when a plurality of depth images are actually displayed, displacements in the images can be corrected by using the correction amounts for the displayed layer images. Compared to the related art, a more accurate three-dimensional structure of a subject can be grasped. Also, compared to the first and second embodiments, captured images can be appropriately corrected even if the image capture device is displaced in an oblique direction.
A fourth embodiment for realizing the present invention will be described with reference to the drawings.
The method according to this embodiment is realized using the virtual slide system having the configuration illustrated in
The virtual slide system illustrated in
An operation input device 1911, such as a keyboard or mouse which receives input from a user, and the image display device 103 which displays processed images are connected to the image processing device 1912. Also, a storage device 1913 and another computer system 1914 are connected to the image processing device 1912.
In the case of capturing images of many analytes (preparations) by using a batch process, the image capture device 1910 sequentially captures images of individual analytes under the control performed by the image processing device 1912, and the image processing device 1912 performs necessary processing on pieces of image data of the individual analytes. The pieces of image data of the individual analytes obtained thereby are transmitted to the storage device 1913 serving as a large-capacity data storage or to the other computer system 1914, and are stored therein.
Image capturing by the image capture device 1910 (pre-measurement and main measurement) is realized when the image processing device 1912 provides an instruction to a controller 1908 in response to input by the user, and then the controller 1908 controls a main measurement unit 1901 and a pre-measurement unit 1902.
The main measurement unit 1901 is an image capture unit which obtains a high-resolution image used for diagnosis of an analyte on a preparation. The pre-measurement unit 1902 is an image capture unit which performs image capturing before the main measurement and obtains an image from which image capturing control information is derived, so that an accurate image can be obtained in the main measurement.
A displacement meter 1903 is connected to the controller 1908, so that the position and distance of a preparation placed on a stage in the main measurement unit 1901 or the pre-measurement unit 1902 can be measured.
Also, an aperture controller 1904, a stage controller 1905, an illumination controller 1906, and a sensor controller 1907 which control image capture conditions of the main measurement unit 1901 and the pre-measurement unit 1902 are connected to the controller 1908. These controllers 1904 to 1907 control an aperture, a stage, illumination, and operation of an image sensor in accordance with control signals transmitted from the controller 1908.
The stage includes an XY stage which moves a preparation in a direction vertical to an optical axis, and a Z stage which moves a preparation in a direction along the optical axis. The XY stage is used to capture an analyte image extending in the direction vertical to the optical axis, and the Z stage is used to capture an image in which a focal position is changed in a depth direction. Although not illustrated, the image capture device 1910 includes a rack on which a plurality of preparations can be set, and a transport mechanism for transporting a preparation from the rack to an image capture position on the stage. In the case of performing a batch process, the controller 1908 controls the transport mechanism, so as to transport preparations one by one from the rack to the stage of the pre-measurement unit 1902 and then to the stage of the main measurement unit 1901.
An auto-focus (AF) unit 1909 which realizes auto focusing by using a captured image is connected to the main measurement unit 1901 and the pre-measurement unit 1902. The AF unit 1909 is capable of finding out an in-focus position by controlling the positions of stages of the main measurement unit 1901 and the pre-measurement unit 1902 via the controller 1908.
Light emitted from a light source 2001 is uniformized through an illumination optical system 2002, so that variations in the amount of light are suppressed, and is applied to a preparation 2004 placed on a stage 2003. The preparation 2004 is made by putting an object to be observed, such as a piece of tissue or a smear cell, on a slide glass and fixing it under a cover glass together with a mounting medium, and is prepared so that the analyte (subject) can be observed.
An image forming optical system (objective lens) 2005 is an optical system which enlarges an image of an analyte and forms the image on an image capture unit 2007. Light transmitted through the preparation 2004 is focused on an image capture plane on the image capture unit 2007 through the image forming optical system 2005. An aperture 2006 exists in the image forming optical system 2005, and the depth of field can be controlled by adjusting the aperture 2006.
At the time of image capturing, the light source 2001 is turned on, and the preparation 2004 is irradiated with light. An image formed on the image capture plane through the illumination optical system 2002, the preparation 2004, and the image forming optical system 2005 is received by an image sensor of the image capture unit 2007. In the case of capturing a monochrome (gray scale) image, exposure is performed with a white light source 2001, and image capturing is performed once. In the case of capturing a color image, exposure is sequentially performed with three light sources 2001 of RGB, and image capturing is performed three times. Accordingly, a color image is obtained.
The image of an analyte formed on the image capture plane undergoes photoelectric conversion in the image capture unit 2007, also undergoes A/D conversion, and is transmitted to the image processing device 1912 in the form of an electric signal.
The correction amount data obtained by the correction amount obtaining unit 2101 is the data obtained by the displacement meter 1903 and is, in this embodiment, an absolute position of the XY stage or a relative position of the XY stage with respect to a reference position. The displacement meter 1903 is not limited as long as it is capable of obtaining data regarding the relative distance between the image sensor and the optical image of the analyte formed on the image sensor. In this embodiment, the image sensor is fixed, and thus only the position of the XY stage is measured using the displacement meter 1903.
A flow of obtaining the correction amount data is illustrated in
First, in step S2201, initialization of the image capture device 1910 is performed. The initialization includes self-diagnosis of the system, initialization of various parameters, setting of reference positions of individual stages, and checking of mutual connection among individual units.
Subsequently, in step S2202, zero is assigned to a variable i. The variable i represents a layer number. When the number of layers to be obtained is N, the variable i may be a value of any of zero to N−1.
Subsequently, in step S2203, the position of the Z stage is determined. Normally, the focal positions of the individual layers are given by a user when Z-stack images are captured. Thus, the position of the Z stage for capturing the image of layer number i may be determined in accordance with this data.
Subsequently, in step S2204, the XY stage is driven and its position is determined. The position of the XY stage is determined by using the reference position set in step S2201. For example, when the effective image capture region of the sensor is a square whose side has length L, the XY stage is driven to a position which deviates from the reference position by an integral multiple of L in both the X and Y directions.
Subsequently, in step S2205, the actual position of the XY stage is obtained. In this embodiment, the position of the XY stage is obtained by using the displacement meter 1903. Ideally, the position commanded in step S2204 matches the position obtained in step S2205; in practice, however, the two positions may not match due to limited mechanical accuracy or the like. In order to increase positional accuracy, steps S2204 and S2205 may be repeated using feedback control. In this embodiment, such feedback control is not performed, placing priority on high-speed processing. The data about the position of the XY stage is eventually transmitted to the image processing device 1912 as correction amount data.
Subsequently, in step S2206, the image capture unit 2007 captures an optical image of the analyte on the preparation 2004.
Subsequently, in step S2207, the variable i is incremented by one. This corresponds to an instruction to capture the next layer image by changing the focal position.
Subsequently, in step S2208, it is determined whether or not the value of the variable i is larger than N−1. If the determination result in step S2208 is “NO”, the process returns to step S2203, where the Z stage is driven again to determine the position thereof to capture the next layer image. If the determination result in step S2208 is “YES”, it is determined that all the layer images have been captured, and the processing ends.
With the above-described processing, image data and stage position data, that is, correction amount data, can be obtained.
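As an illustration of how this stage position data could be turned into a per-layer correction amount, the sketch below converts the gap between the commanded and measured XY stage positions into pixels. The micrometre unit and the pixel pitch parameter are assumptions; the actual conversion depends on the optics and the sensor of the device.

```python
def stage_correction_in_pixels(commanded_um, measured_um, pixel_pitch_um):
    """Convert the commanded-vs-measured XY stage gap into a pixel correction.
    commanded_um / measured_um: (x, y) stage positions in micrometres (assumed);
    pixel_pitch_um: size of one sensor pixel projected onto the stage plane."""
    dx_um = commanded_um[0] - measured_um[0]
    dy_um = commanded_um[1] - measured_um[1]
    return dx_um / pixel_pitch_um, dy_um / pixel_pitch_um
```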
When the value of the variable i is one or more, step S2204 may be omitted if priority is placed on processing speed. This is because the position of the XY stage is essentially determined when i is zero. Even when the stage is moved in the optical-axis direction to change the layer image to be captured, the resulting movement of the XY stage in the horizontal direction is expected to be small. In this case, a sufficient image correction effect can still be expected by performing the horizontal displacement correction of this embodiment.
Furthermore, in practice, the region to be captured is larger than the image capture sensor in most cases. In that case, execution of the algorithm illustrated in
This concludes the description of the flow of the image processing according to this embodiment.
According to this embodiment, when a plurality of depth images are actually displayed, displacements in the images can be corrected by using the correction amounts for the displayed layer images. Compared to the related art, a more accurate three-dimensional structure of a subject can be grasped. Also, compared to the first and second embodiments, displacement detection based on a feature amount is not used, which decreases the load and increases the speed of the displacement correction process.
Aspects of the present invention can also be realized by a computer of a system or apparatus (or device such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiments, and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiments. For this purpose, the program is provided to the computer for example via a network or from a recording medium of various types serving as the memory device (e.g., computer-readable medium).
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Applications No. 2011-286781, filed Dec. 27, 2011 and No. 2012-237792, filed Oct. 29, 2012, which are hereby incorporated by reference herein in their entirety.
Foreign application priority data: 2011-286781 (JP, December 2011, national); 2012-237792 (JP, October 2012, national).
PCT filing: PCT/JP2012/008021, filed Dec. 14, 2012 (WO); 371(c) date Jun. 26, 2014.