1. Field of the Invention
The present invention relates to a technology for dividing and imaging an object using a plurality of discretely arranged image sensors, and generating a large sized image by merging the plurality of divided images.
2. Description of the Related Art
In the field of pathology, a virtual slide apparatus is available which images a sample placed on a slide and digitizes the image so that a pathological diagnosis can be made on a display. It is used in place of the optical microscope, the conventional tool for pathological diagnosis. By digitizing an image for pathological diagnosis using a virtual slide apparatus, a conventional optical microscope image of the sample can be handled as digital data. The expected merits are: quicker remote diagnosis, explanation of a diagnosis to a patient using digital images, sharing of rare cases, and more efficient education and practical training.
In order to digitize the operation with an optical microscope using the virtual slide apparatus, the entire sample on the slide must be digitized. By digitizing the entire sample, the digital data created by the virtual slide apparatus can be observed with viewer software running on a PC or WS. Digitizing the entire sample, however, requires an enormous number of pixels, normally several hundred million to several billion. Therefore in a virtual slide apparatus, the area of a sample is divided into a plurality of areas, each of which is imaged using a two-dimensional image sensor having several hundred thousand to several million pixels, or a one-dimensional image sensor having several thousand pixels. To generate an image of the entire sample, a technology to merge (connect) the divided images, while considering distortion and shift of the images due to aberration of the lenses, is required.
As image merging technologies, the following have been proposed (see Japanese Patent Application Laid-Open No. H06-004660 and Japanese Patent Application Laid-Open No. 2010-050842). Japanese Patent Application Laid-Open No. H06-004660 discloses an image merging apparatus for generating a panoramic image, wherein aberration is corrected at least in an overlapped area of the images based on estimated aberration information, and the corrected images are merged. Japanese Patent Application Laid-Open No. 2010-050842 discloses a technology to sidestep the parallax phenomenon by dynamically changing the stitching points according to the distance between a multi-camera and an object, so as to obtain a seamless wide angle image.
In conventional image merging technology, it is common to connect two images by creating an overlapped area (seam) between adjacent images and performing image correction processing (pixel interpolation) on the pixels in the overlapped area. An advantage of this method is that the joints of the images can be made unnoticeable, but a problem is that resolution drops in the overlapped area due to the image correction. Particularly in the case of the virtual slide apparatus, it is desired to obtain an image that faithfully reproduces the original, minimizing resolution deterioration due to image correction, in order to improve diagnostic accuracy in pathological diagnosis.
In the case of Example 1 of Japanese Patent Application Laid-Open No. H06-004660, however, an area where blur is generated due to image interpolation is decreased by correcting the distortion in the overlapped area, where the same area is imaged in two images, but the corrected area is the overlapped area between the two images itself. This patent application discloses nothing about further decreasing the correction area within the overlapped area.
In the case of Example 2 of Japanese Patent Application Laid-Open No. H06-004660, an example of smoothly merging images by changing the focal length value upon rotational coordinate transformation is disclosed, but this does not decrease the correction area itself.
In the case of Example 3 in Japanese Patent Application Laid-Open No. H06-004660, a correction curve is determined based on the estimated aberration information, but the estimated aberration information is not reflected in a method of determining the correction range, since points not to be corrected are predetermined.
In Japanese Patent Application Laid-Open No. 2010-050842, the influence of image distortion due to aberration of the lens and how to determine the correction area are not disclosed. Although a seamless wide angle image can be obtained, the problem is that resolution deteriorates in the image merging area due to image interpolation.
With the foregoing in view, it is an object of the present invention to provide a configuration to divide and image an object using a plurality of image sensors which are discretely arranged, and generate a large sized image by merging the plurality of divided images, wherein deterioration of resolution due to merging is minimized.
The present invention provides an imaging apparatus including: a supporting unit which supports an object; an imaging unit which has a plurality of image sensors discretely disposed with spacing from one another; an imaging optical system which enlarges an image of the object and guides the image to the imaging unit, and of which relative position with the plurality of image sensors is fixed; a moving unit which changes the relative position between the plurality of image sensors and the object, so as to perform a plurality of times of imaging while changing imaging positions of the plurality of image sensors with respect to the image of the object; and a merging unit which connects a plurality of images obtained from respective image sensors at respective imaging positions, and generates an entire image of the object, wherein aberration of the imaging optical system in an image obtained by each image sensor is predetermined for each image sensor based on the relative position between the imaging optical system and the image sensor, the moving unit changes the relative position between the plurality of image sensors and the object so that the two images to be connected partially overlap, the merging unit smoothes seams of the two images by setting a correction area in an overlapped area where the two images to be connected overlap with each other, and performing correction processing on pixels in the correction area, and a size of the correction area is determined according to the difference in aberrations of the two images, which is determined by a combination of image sensors which have imaged the two images to be connected.
The present invention can provide a configuration to divide and image an object using a plurality of image sensors which are discretely arranged, and generate a large sized image by merging the plurality of divided images, wherein deterioration of resolution due to merging is minimized.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
As
The slide 103 is a supporting unit that supports a sample to be the target of pathological diagnosis, and has a slide glass on which the sample is placed and a cover glass with which the sample is sealed using a mounting solution.
The imaging optical system 104 enlarges (magnifies) the transmitted light from the imaging target area 110a on the slide 103, guides the light, and forms an imaging target area image 110b, which is a real image of the imaging target area 110a, on the surface of the imaging unit 105. The effective field of view 112 of the imaging optical system has a size that covers the image sensor group 111a to 111q and the imaging target area image 110b.
The imaging unit 105 is an imaging unit constituted by a plurality of two-dimensional image sensors which are discretely arrayed two-dimensionally in the X direction and the Y direction, with spacing therebetween. Seventeen two-dimensional image sensors are used in the present embodiment, and these image sensors may be mounted on a same board or on separate boards. To distinguish an individual image sensor, an alphabetic character is attached to the reference number, that is, from a to c, sequentially from the left, in the first row, d to g in the second row, h to j in the third row, k to n in the fourth row, and o to q in the fifth row, but for simplification, image sensors are denoted as “111a to 111q” in the drawings. This is the same for the other drawings.
The development/correction unit 106 performs the development processing and the correction processing of the digital data acquired by the imaging unit 105. The functions thereof include black level correction, DNR (Digital Noise Reduction), pixel defect correction, brightness correction for individual variation among image sensors and for shading, development processing, white balance processing and enhancement processing. The merging unit 107 performs processing to merge a plurality of captured images which are output from the development/correction unit 106. The joint correction by the merging unit 107 is not performed for all the pixels, but only for the areas where the merging processing is required. The merging processing will be described in detail with reference to
The compression unit 108 performs sequential compression processing for each block image which is output from the merging unit 107. The transmission unit 109 outputs the signals of the compressed block image to a PC (Personal Computer) and WS (Workstation). For the signal transmission to a PC and WS, it is preferable to use a communication standard which allows large capacity transmission, such as gigabit Ethernet (registered trademark).
In the PC or WS, each received compressed block image is sequentially stored in storage. To read a captured image of a sample, viewer software is used. The viewer software reads the compressed block images in the read area, and decompresses and displays the images on a display. With this configuration, a high resolution, large screen image of a roughly 20 mm square sample can be captured, and the acquired image can be displayed.
(Imaging Procedure of Imaging Target Area)
In FIG. 2B-(a), an area obtained by the first imaging is indicated by black solid squares. In the first imaging position, each of RGB images is obtained by switching the emission wavelength of the light source. In FIG. 2B-(b), an area obtained by the second imaging, after moving the slide by the moving mechanism, is indicated by diagonal lines (slanted to the left). In FIG. 2B-(c), an area obtained by the third imaging is indicated by reverse diagonal lines (slanted to the right). In FIG. 2B-(d), an area obtained by the fourth imaging is indicated by half tones.
After performing imaging four times with the image sensor group (the moving mechanism moves the slide three times), the entire imaging target area is imaged without any gaps.
(Flow of Imaging Processing)
In step S301, an imaging area is set. A 20 mm square area is assumed as the imaging target area, and the position of the 20 mm square area is set according to the position of the sample on the slide.
In step S302, the slide is moved to the initial position where the first imaging (N=1) is executed. In the case of
In step S303, an image is captured within the angle of view of the lens for the Nth time.
In step S304, it is determined whether imaging of the entire imaging target area is completed. If the imaging of the entire imaging target area is not completed, processing advances to S305. If the imaging of the entire imaging target area is completed, that is, if N=4 in the case of this embodiment, the processing ends.
In step S305, the moving mechanism moves the slide so that the relative position of the image sensor group and the imaging target area image becomes a position for executing imaging for the Nth time (N≧2).
In step S306, emission of a single color light source (R light source, G light source or B light source) is started, and the light is irradiated onto the imaging target area on the slide.
In step S307, the image sensor group is exposed, and single color image signals (R image signals, G image signals or B image signals) are read. Because of the rolling shutter method, the exposure of the image sensor group and the reading of signals are executed line by line. The lighting timing of the single color light source and the exposure timing of the image sensor group are controlled so as to operate synchronously. The single color light source starts emission at the timing of the start of exposure of the first line of the image sensors, and continues the emission until exposure of the last line completes. At this time, it is sufficient if only the image sensors which capture images, out of the image sensor group, operate. In the case of FIG. 2B-(a), for example, it is sufficient if only the image sensors shown in solid black operate, and the three image sensors at the top, which are outside the imaging target area image, need not operate.
In step S308, it is determined whether the exposure and the reading of signals are completed for all the lines of the image sensors. The processing returns to S307 and continues until all the lines are completed. When all the lines are completed, processing advances to S309.
In step S309, it is determined whether the imaging of all the RGB images is completed. If imaging of each of the RGB images is not completed, processing returns to S306; if completed, the processing ends.
According to these processing steps, the entire imaging target area is imaged by capturing each of the R, G and B images four times.
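As an illustration of the flow of steps S301 to S309, the following is a minimal Python sketch of the control loop. The objects stage, light_source and sensors, and their methods, are hypothetical stand-ins for the moving mechanism, the single color light source and the image sensor group; they are not part of the disclosed apparatus.

```python
NUM_POSITIONS = 4            # four imaging positions cover the imaging target area
COLORS = ("R", "G", "B")     # the emission wavelength is switched per color

def capture_target_area(stage, light_source, sensors, positions):
    """Capture R, G and B images of the entire imaging target area."""
    captured = []
    for position in positions[:NUM_POSITIONS]:
        stage.move_to(position)                       # S302 / S305: change the relative position
        images = {}
        for color in COLORS:                          # S309: repeat until R, G and B are done
            light_source.emit(color)                  # S306: single color illumination
            # S307 / S308: rolling-shutter read-out, line by line, only for the
            # image sensors that lie inside the imaging target area image
            images[color] = [s.read_all_lines()
                             for s in sensors if s.is_active(position)]
            light_source.stop()
        captured.append(images)                       # one RGB set per imaging position
    return captured
```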
(Image Merging)
The two-dimensional image sensors 401a to 401q correspond to the two-dimensional image sensor group 111a to 111q described in
The development/correction units 403a to 403q perform the development processing and correction processing on the R image signal, G image signal and B image signal. The functions thereof include black level correction, DNR (Digital Noise Reduction), pixel defect correction, brightness correction for individual variation among image sensors and for shading, development processing, white balance processing and enhancement processing.
The sensor memories 404a to 404q are frame memories for temporarily storing developed/corrected image signals.
The memory control unit 405 specifies a memory area for the image signals stored in the sensor memories 404a to 404q, and controls the transfer of the image signals to one of the compression unit 410, the horizontal direction merging unit 406 and the vertical direction merging unit 407. The operation of the memory control will be described in detail with reference to
The horizontal direction merging unit 406 performs merging processing for image blocks in the horizontal direction. The vertical direction merging unit 407 performs merging processing for image blocks in the vertical direction. The merging processing in the horizontal direction and the merging processing in the vertical direction are executed in the overlapped areas between adjacent image sensors. The overlapped area will be described later with reference to
The compression unit 410 sequentially performs compression processing on image signals transferred from the sensor memories 404a to 404q, the horizontal merging memory 408 and the vertical merging memory 409, for each transfer block. The transmission unit 411 converts the electric signals of the compressed block image into light signals, and outputs the signals to a PC and WS.
Because of the above configuration, an image of the entire imaging target area can be generated from the image discretely acquired by the two-dimensional image sensors 401a to 401q, by the merging processing.
The number of read pixels in the Y direction is roughly the same for the imaging areas adjacent in the X direction, therefore the image merging processing can be performed for each area (A to I) and the applied range can easily be expanded to the entire imaging target area. Since the imaging areas are acquired in such a way that the image sensor group sequentially fills the imaging target area image along the Y direction, the image merging processing can be implemented with simple memory control.
A partial area extracted from the entire imaging target area was used for this description, but the description of the areas where image merging is performed and of the merging direction applies to the entire imaging target area.
In (a), the R image and the G image have been captured for the first time and are stored in the color memories 402d to 402q, and the B image is captured and sequentially read. In the development/correction units 403d to 403q, the R image and the G image are read from the color memories 402d to 402q in synchronization with the B image read from the two-dimensional image sensors, and development and correction processing is sequentially performed. The images on which the development and correction processing was performed are sequentially stored in the sensor memories 404d to 404q. The images stored here cover the area (A, B, D, E).
In (b), the image of area (A), out of the area (A, B, D, E) stored in the sensor memories 404d to 404q in (a), is transferred to the compression unit 410. The merging processing is not performed for the area (A).
In (c), the R image and the G image have been captured for the second time and are stored in the color memories 402a to 402n, and the B image is captured and sequentially read. In the development/correction units 403a to 403n, the R image and the G image are read from the color memories 402a to 402n in synchronization with the B image read from the two-dimensional image sensors, and development and correction processing is sequentially performed. The images on which the development and correction processing was performed are sequentially stored in the sensor memories 404a to 404n. The images stored here cover the area (B, C, E, F).
In (d), the image of the area (C), out of the area (B, C, E, F) stored in the sensor memories 404a to 404n in (c), is transferred to the compression unit 410. The merging processing is not performed for the area (C).
In (e), the area (B, E) is read from the sensor memories 404a to 404q, and image merging processing in the horizontal direction is performed.
In (f), the image after the image merging processing in the horizontal direction is sequentially stored in the horizontal merging memory 408.
In (g), the image of the area (B) stored in the horizontal merging memory 408 is transferred to the compression unit 410.
In (h), the R image and the G image have been captured for the third time and are stored in the color memories 402d to 402q, and the B image is captured and sequentially read. In the development/correction units 403d to 403q, the R image and the G image are read from the color memories 402d to 402q in synchronization with the B image read from the two-dimensional image sensors, and the development and correction processing is sequentially performed. The images on which the development and correction processing was performed are sequentially stored in the sensor memories 404d to 404q. The images stored here cover the area (D, E, G, H).
In (i), the image of the area (G), out of the area (D, E, G, H) stored in the sensor memories 404d to 404q in (h), is transferred to the compression unit 410. The merging processing is not performed for the area (G).
In (j), the image of the area (D, E) is read from the sensor memories 404d to 404q and the horizontal merging memory 408, and the image merging processing in the vertical direction is performed.
In (k), the image after the image merging processing in the vertical direction is sequentially stored in the vertical merging memory 409.
In (l), the image of the area (D) stored in the vertical merging memory 409 is transferred to the compression unit 410.
In (m), the R image and the G image have been captured for the fourth time and are stored in the color memories 402a to 402n, and the B image is captured and sequentially read. In the development/correction units 403a to 403n, the R image and the G image are read from the color memories 402a to 402n in synchronization with the B image read from the two-dimensional image sensors, and the development and correction processing is sequentially performed. The images on which the development and correction processing was performed are sequentially stored in the sensor memories 404a to 404n. The images stored here cover the area (E, F, H, I).
In (n), the image of the area (I), out of the area (E, F, H, I) stored in the sensor memories 404a to 404n in (m), is transferred to the compression unit 410. The merging processing is not performed for the area (I).
In (o), the area (E, F) is read from the sensor memories 404a to 404n and the vertical merging memory 409, and the image merging processing in the vertical direction is performed.
In (p), the image after the image merging processing in the vertical direction is sequentially stored in the vertical merging memory 409.
In (q), the image of the area (F) stored in the vertical merging memory 409 is transferred to the compression unit 410.
In (r), the area (E, H) is read from the sensor memories 404a to 404q and the vertical merging memory 409, and image merging processing in the horizontal direction is performed.
In (s), the image after the image merging processing in the horizontal direction is sequentially stored in the horizontal merging memory 408.
In (t), the image of the area (E, H) stored in the horizontal merging memory 408 is sequentially transferred to the compression unit 410.
In this way, the merging processing can be performed sequentially with the memory control unit 405 controlling the memory transfer, and the image of the entire imaging target area can be sequentially transferred to the compression unit 410.
Here a sequence in which the areas (A), (C), (G) and (I) are compressed without merging processing was described, but a sequence in which these areas are compressed after being merged with the adjoining areas can also be implemented.
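The memory control described in (a) to (t) can be summarized, as a rough sketch only, by a routing rule: each transferred block goes either directly to compression or through the horizontal or vertical merging unit. The step objects and their attributes below are hypothetical illustrations and do not reproduce the exact transfer sequence.

```python
def run_transfer_steps(steps, sensor_memories, h_merging_unit, v_merging_unit, compressor):
    """Route each transfer block to one of the three destinations controlled
    by the memory control unit 405 (compression, horizontal merge, vertical merge)."""
    for step in steps:
        block = sensor_memories.read(step.area)        # read the block for this step
        if step.destination == "compress":             # e.g. areas (A), (C), (G), (I)
            compressor.push(block)
        elif step.destination == "h_merge":            # seam between horizontally adjacent sensors
            h_merging_unit.process(block)              # result goes to the horizontal merging memory 408
        elif step.destination == "v_merge":            # seam between vertically adjacent sensors
            v_merging_unit.process(block)              # result goes to the vertical merging memory 409
```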
In order to smoothly connect two images having different distortions, correction processing (processing to change the coordinates and pixel values of pixels) must be executed for the pixels in the overlapped area. A problem, however, is that resolution drops wherever the correction processing is executed, as mentioned above. Therefore according to the present embodiment, in order to minimize the deterioration of resolution, correction processing is not executed for all the pixels of every overlapped area, but only for a partial area of the overlapped area (this area is hereafter called the “correction area”). The size of the correction area is determined according to the difference in distortion between the two images to be connected (which is determined by the combination of image sensors which captured the images). Deterioration of resolution decreases as the correction area becomes smaller.
An example of a method for determining a correction area will be described with reference to
Three representative points A, B and C on a center line of the overlapped area, of which width is K, are considered. In the correspondence with
L(A) is a width required for smoothly connecting the first image and the second image at the representative point A. L(A) is mechanically determined using a relative difference M(A) between the shift of the representative point A from the true value in the first image, and the shift of the representative point A from the true value in the second image. The shift from the true value refers to a coordinate shift which is generated due to the influence of distortion. In the case of
It is assumed that the true value of the representative point A is (Ax, Ay), the shift value of the representative point A in the first image is (ΔAx1, ΔAy1), and the shift value of the representative point A in the second image is (ΔAx2, ΔAy2) (see
M(A)=|(Ax+ΔAx1,Ay+ΔAy1)−(Ax+ΔAx2,Ay+ΔAy2)|=|(ΔAx1−ΔAx2,ΔAy1−ΔAy2)|
Then the width L(A) required for connecting is determined by
L(A)=α×M(A)
where α is an arbitrarily determined positive number.
If the relative difference between the shift of the representative point A from the true value in the first image and the shift of the representative point A from the true value in the second image is M(A) = 4.5 (pixels) and α = 10, then L(A) = 45 (pixels), which means that 45 pixels are required for the connecting area. α is a parameter that determines the smoothness of the connection: as the value of α increases, the connection becomes smoother, but the overlapped area also increases, hence an appropriate value is determined arbitrarily.
L(B) and L(C) can be considered in the same manner. It is assumed that the true values of the representative points B and C are (Bx, By) and (Cx, Cy) respectively, and the shift values of the representative points B and C in the first image are (ΔBx1, ΔBy1) and (ΔCx1, ΔCy1), and the shift values of the representative points B and C in the second image are (ΔBx2, ΔBy2) and (ΔCx2, ΔCy2) respectively. In this case, M(B) and M(C) respectively are given by:
M(B)=|(Bx+ΔBx1,By+ΔBy1)−(Bx+ΔBx2,By+ΔBy2)|=|(ΔBx1−ΔBx2,ΔBy1−ΔBy2)|; and
M(C)=|(Cx+ΔCx1,Cy+ΔCy1)−(Cx+ΔCx2,Cy+ΔCy2)|=|(ΔCx1−ΔCx2,ΔCy1−ΔCy2)|.
Then the widths L(B) and L(C) for connecting are determined by:
L(B)=α×M(B); and
L(C)=α×M(C).
Then the maximum value out of L(A), L(B) and L(C) is determined as the width N of the correction area. For example, if the relationship of L(A), L(B) and L(C) is
L(A)>L(B)>L(C)
as shown in
By the above method, the size of each correction area is adaptively determined so that the correction area becomes smaller as the relative coordinate shift amount due to distortion becomes smaller. To be more specific, if the direction in which the two images are disposed side by side is defined as the first direction, and a direction perpendicular to the first direction as the second direction, the width of the correction area in the first direction becomes narrower as the relative coordinate shift amount due to distortion becomes smaller. The correction area is created along the second direction so as to cross the overlapped area. Here three representative points on the center line of the overlapped area were considered, but the present invention is not limited to this, and the correction area can be estimated more accurately as the number of representative points increases.
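As a concrete illustration of the above calculation, the following sketch computes the correction width N from the shifts of the representative points, assuming those shifts are known in advance from the aberration data of the two image sensors involved. The function name and data layout are chosen for illustration only.

```python
import math

def correction_width(shifts_image1, shifts_image2, alpha):
    """shifts_image1/2: lists of (dx, dy) coordinate shifts from the true value
    at the representative points (A, B, C, ...) in the first and second image.
    Returns the width N of the correction area in pixels."""
    widths = []
    for (dx1, dy1), (dx2, dy2) in zip(shifts_image1, shifts_image2):
        m = math.hypot(dx1 - dx2, dy1 - dy2)    # relative shift M at this representative point
        widths.append(alpha * m)                # L = alpha * M
    return max(widths)                          # N = max(L(A), L(B), L(C), ...)

# With the numbers used in the text, a relative shift of M = 4.5 pixels and
# alpha = 10 give a correction width of 45 pixels.
print(correction_width([(2.7, 3.6)], [(0.0, 0.0)], alpha=10))   # -> 45.0
```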
There are eight overlapped areas in the case of the image merging in the horizontal direction of the first column (C1) of the overlapped area in
Applying the same concept to each column (C1 to C6) and each row (R1 to R7), the correction area is determined for each overlapped area, and the overlapped areas in each column and in each row are determined. In other words, each column and each row has an independent overlapped area, and each overlapped area has a correction area of a different size. If the size of the overlapped area is set to the minimum value required, as described here, the image sensors can be downsized, and the capacities of the color memory and the sensor memory can be decreased, which is an advantage. However, the sizes of all the overlapped areas may also be set to the same value.
Here the shift from the true value was described as a coordinate shift generated due to the influence of distortion, but the description is applicable to the case of a pixel value shift as well, not only to the case of coordinate shift.
Based on the above concept, the correction area in each overlapped area is determined. The merits of setting the correction area in the overlapped area are as follows. First, if a position is shifted in the obtained image, the position can be corrected using the image information, by such a method as feature extraction applied to the two images. Second, the coordinate values and pixel values can be referred to in both images, hence correction accuracy can be improved and the images can be smoothly connected.
In step S1001, the number of divisions of the imaging area is set. In other words, how the imaging target area image 110b is divided by the image sensor group 111a to 111q is set. In
In step S1002, the relative coordinate shift amount is calculated. For the representative points in each column and each row, the relative difference of the shifts from the true value between the images to be connected is calculated. The shift from the true value refers to the coordinate shift which is generated due to the influence of distortion. The calculation method is as described in
In step S1003, the correction area is determined for each overlapped area. The method for determining the correction area is as described in
In step S1004, the overlapped area is determined for each row and each column. The maximum correction area is determined based on the maximum relative coordinate shift value in each row and each column, and a predetermined margin area is added to the maximum correction area in each row and each column to determine the respective overlapped area. Here the overlapped area has the same size in each row and each column, since same sized two-dimensional image sensors are used for the image sensor group 111a to 111q. The method for determining the margin area will be described later.
By the above mentioned processing steps, the correction area in each overlapped area is determined.
In step S1101, the relative coordinate shift amount is calculated for the row Rn. For the representative points on the center line of the overlapped area, the relative difference of the shifts from the true value between the connecting target images is calculated. In step S1102, the maximum relative coordinate shift amount is determined for the row Rn based on the result in S1101. In step S1103, it is determined whether the calculation of the relative coordinate shift amount for all the rows, and determination of the maximum relative coordinate shift amount for each row, are completed. Steps S1101 and S1102 are repeated until the processing is completed for all the rows. In step S1104, the relative coordinate shift amount is calculated for the column Cn. For the representative points on the center line of the overlapped area, the relative difference of the shifts from the true value between the connecting target images is calculated. In step S1105, the maximum relative coordinate shift amount is determined for the column Cn based on the result in S1104. In step S1106, it is determined whether the calculation of the relative coordinate shift amount for all the columns, and the determination of the maximum relative coordinate shift amount for each column, are completed. Steps S1104 and S1105 are repeated until the processing is completed for all the columns. By the above processing steps, the relative coordinate shift amount is calculated for the representative points on the center line of the overlapped area, and the maximum relative coordinate shift amount is determined for each row and each column.
In step S1201, the overlapped area is determined for the row Rn. Based on the determination of the correction areas in S1003, an area of which correction area is largest in each row is regarded as the overlapped area. In step S1202, it is determined whether the determination of the overlapped area is completed for all the rows. Step S1201 is repeated until the processing is completed for all the rows. In step S1203, the overlapped area is determined for the column Cn. Based on the determination of the correction areas in S1003, an area of which correction area is largest in each column is regarded as the overlapped area. In step S1204, it is determined whether the determination of the overlapped area is completed for all the columns. Step S1203 is repeated until the processing is completed for all the columns. By the above processing steps, the overlapped area is determined for each row and each column.
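The flow of steps S1001 to S1204 can be sketched as follows, assuming the relative shifts M at the representative points of every overlapped area are available in advance (they follow from the predetermined aberration of each pair of image sensors). The data layout is an assumption made for illustration.

```python
def determine_correction_and_overlap(groups, alpha, margin):
    """groups: dict mapping a row or column label (e.g. "R1", "C1") to a list of
    overlapped areas, each given as a list of relative shifts M at its
    representative points. Returns, per label, the correction width of each
    overlapped area (S1003) and the common overlapped-area width (S1004)."""
    correction_widths, overlap_widths = {}, {}
    for label, areas in groups.items():
        widths = [alpha * max(shifts) for shifts in areas]   # S1002/S1003: L = alpha * max(M)
        correction_widths[label] = widths
        overlap_widths[label] = max(widths) + margin         # S1004: largest width plus a margin
    return correction_widths, overlap_widths
```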
The processes described in
In the case of performing interpolation processing on the first image, the position of the coordinates P21 is transformed into the position of the coordinates P41. In the same way, the coordinates P22 are transformed into the coordinates P42, and the coordinates P23 are transformed into the coordinates P43. The coordinates P21, P22 and P23 need not match the barycenters of pixels, but the positions of P41, P42 and P43 match the barycenters of pixels. Here only the representative points are illustrated, but the actual processing is performed on all the pixels on the boundary line 2 in the first image. Considering interpolation processing to be performed on the second image, the position of the coordinates P31 is transformed into the position of the coordinates P11. In the same way, the coordinates P32 are transformed into the coordinates P12, and the coordinates P33 are transformed into the coordinates P13. The coordinates P31, P32 and P33 need not match the barycenters of pixels, but the positions of the coordinates P11, P12 and P13 match the barycenters of pixels. Here only representative points are illustrated, but the actual processing is performed on all the pixels on the boundary line 1 in the second image. Since a corrected image is generated for each of the first image and the second image in this way, image merging with smooth seams can be implemented using α blending, where the ratio of the first image is high near the boundary line 1, and the ratio of the second image is high near the boundary line 2.
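The α blending mentioned above can be sketched as follows for a single correction area, assuming a simple linear ramp of the blending ratio between boundary line 1 and boundary line 2 (the actual ratio used may differ), and assuming both inputs are the already coordinate-transformed (corrected) grayscale images of the correction area.

```python
import numpy as np

def alpha_blend_correction_area(corrected_first, corrected_second):
    """corrected_first / corrected_second: 2-D arrays of shape (height, width)
    covering the correction area; column 0 lies on boundary line 1."""
    height, width = corrected_first.shape
    weight_first = np.linspace(1.0, 0.0, width).reshape(1, width)  # high near boundary line 1
    weight_second = 1.0 - weight_first                             # high near boundary line 2
    return weight_first * corrected_first + weight_second * corrected_second
```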
The interpolation processing here is performed based on the coordinate information which is held in advance. As
In step S1501, coordinates P′ (m, n), which is a reference point, are specified.
In step S1502, a correction value, which is required to obtain the coordinates P (m, n) after transformation of the reference point, is obtained from an aberration correction table. The aberration correction table is a table holding the correspondence of the positions of pixels before and after coordinate transformation. Correction values for calculating the coordinate values after transformation, corresponding to the coordinates of a reference point, are stored.
In step S1503, coordinates P (m, n) after transformation of the reference pixel are obtained based on the values stored in the aberration correction table obtained in the processing in step S1502. In the case of distortion, coordinates after transformation of the reference pixel are obtained based on the shift of the pixel. If values stored in the aberration correction table, that is reference points, are values of the selected representative points (representative values), a value between these representative points is calculated by interpolation.
In step S1504, it is determined whether coordinate transformation processing is completed for all the processing target pixels, and if the processing is completed for all the pixels, this coordinate transformation processing is ended. If not completed, the processing step returns to step S1501, and the above mentioned processing is executed repeatedly. By these processing steps, the coordinate transformation processing is performed.
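The coordinate transformation of steps S1501 to S1504 can be sketched as below. It assumes the aberration correction table is exposed through a lookup function that returns the correction (shift) for a reference point, interpolating between stored representative values when necessary; this interface is an assumption for illustration.

```python
def transform_coordinates(reference_points, correction_table, lookup_correction):
    """reference_points: iterable of coordinates (m, n) of the reference pixels.
    lookup_correction(table, point): returns the (dm, dn) correction value for
    that point, interpolating between representative values if needed."""
    transformed = {}
    for (m, n) in reference_points:                           # S1501: specify P'(m, n)
        dm, dn = lookup_correction(correction_table, (m, n))  # S1502: read the correction value
        transformed[(m, n)] = (m + dm, n + dn)                # S1503: coordinates after transformation
    return transformed                                        # S1504: repeated for all target pixels
```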
Here the correspondence when the position of the coordinates P21 is transformed into the position of the coordinates P41 in
In step S1505, the coordinates Q, which are the position where interpolation is performed, are specified.
In step S1506, several to several tens of reference pixels P (m, n) around the pixel generated in the interpolation position are specified.
In step S1507, coordinates of each of the peripheral pixels P (m, n), which are reference pixels, are obtained.
In step S1508, the distance between the interpolation pixel Q and each of the reference pixels P (m, n) is determined in vector form, with the interpolation pixel as the origin.
In step S1509, a weight factor of each reference pixel is determined by substituting the distance calculated in step S1508 into the interpolation curve or line. Here it is assumed that a cubic interpolation formula, the same as the interpolation operation used for the coordinate transformation, is used, but a linear interpolation (bi-linear) algorithm may be used.
In step S1510, a product of the value of each reference pixel and the weight factor in the x and y coordinates is sequentially added, and the value of the interpolation pixel is calculated.
In step S1511, it is determined whether the pixel interpolation processing is performed for all the processing target pixels, and if the processing is completed for all the pixels, this pixel interpolation processing ends. If not completed, processing step returns to step S1505, and the above mentioned processing is executed repeatedly. By these processing steps, the pixel interpolation processing is performed.
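The pixel interpolation of steps S1505 to S1511 can be sketched as follows. A bilinear (tent) weight is used here for brevity, whereas the text assumes a cubic interpolation formula; the reference pixels and their transformed coordinates are taken from the coordinate transformation above.

```python
def interpolate_pixel(q, reference_pixels):
    """q: (x, y) position of the interpolation pixel (S1505).
    reference_pixels: list of ((x, y), value) pairs around q (S1506/S1507)."""
    def tent_weight(distance):                      # S1509: bilinear (tent) weight function
        return max(0.0, 1.0 - abs(distance))
    weighted_sum, weight_sum = 0.0, 0.0
    for (x, y), value in reference_pixels:
        wx = tent_weight(x - q[0])                  # S1508: distance from q, per axis
        wy = tent_weight(y - q[1])
        weighted_sum += value * wx * wy             # S1510: accumulate value * weight
        weight_sum += wx * wy
    return weighted_sum / weight_sum if weight_sum else 0.0
```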
Considering the case of performing the coordinate transformation processing and the pixel interpolation processing on the correction area shown in
The characteristic preconditions and configuration of the imaging apparatus of the present embodiment will now be described, and the technical effects thereof will be referred to.
The imaging apparatus of the present embodiment is targeted in particular for use as a virtual slide apparatus in the field of pathology. A characteristic of the digital images of samples obtained by the virtual slide apparatus, that is, enlarged images of tissues and cells of the human body, is that they contain few geometric patterns, such as straight lines, hence image distortion does not influence the appearance of an image very much. In order to improve the diagnostic accuracy in pathological diagnosis, on the other hand, resolution deterioration due to image processing should be minimized. Because of these preconditions, priority is given to securing resolution rather than to minimizing the influence of image distortion when connecting images, so that the area where resolution is deteriorated by image correction can be decreased.
The imaging apparatus of the present embodiment has a configuration for dividing an imaging area and imaging the divided areas using a plurality of two-dimensional image sensors which are discretely disposed within a lens diameter including the imaging area, and merging the plurality of divided images to generate a large sized image.
In the case of a multi-camera configuration in which cameras of the same size having the same aberration are regularly disposed, the lens aberrations of the two cameras in an overlapped area approximately match in the row direction and in the column direction. Therefore image merging in the overlapped area can be handled using fixed processing in the row direction and in the column direction respectively. However, in the case of using a plurality of two-dimensional image sensors which are discretely disposed within the lens diameter including the imaging area, the lens aberrations of the two two-dimensional image sensors differ depending on the overlapped area.
In the case of panoramic photography having another configuration, the overlapped area can be controlled freely. However in the case of using a plurality of two-dimensional image sensors which are discretely disposed within a lens diameter including the imaging area, the overlapped area is fixed, just like the case of the multi-camera.
In this way, the imaging apparatus of the present embodiment has a characteristic that the multi-camera and panoramic photography do not possess, that is, the overlapped area is fixed, but the lens aberrations of the two-dimensional image sensors are different depending on the overlapped area. The effect of this configuration in particular is that an area where resolution is deteriorated can be minimized by adaptively determining the correction area in each overlapped area according to the aberration information.
The effect of the present embodiment described above is based on the preconditions that an imaging area is divided and the divided area is imaged using a plurality of two-dimensional image sensors which are discretely disposed within a lens diameter including the imaging area, and the plurality of divided images are merged to generate a large sized image. In the merging processing (connecting processing) of the divided images, the correction area is adaptively determined according to the aberration information to perform correction, hence an area where resolution is deteriorated due to image correction can be decreased.
Now the second embodiment of the present invention will be described. In the first embodiment mentioned above, the correction area is determined according to the largest relative coordinate shift amount in the overlapped area. In other words, the width of the correction area is constant. In the second embodiment, on the other hand, the correction area is adaptively determined within the overlapped area according to the relative coordinate shift amount along the center line of the overlapped area. Thereby the width of the correction area changes according to the relative coordinate shift amount. Thus the only difference between the present embodiment and the first embodiment mentioned above is the approach for determining the correction area. Therefore in the description of the present embodiment, a detailed description of the portions that are the same as the first embodiment is omitted. For example, the configuration and processing sequence of imaging and image merging of the imaging apparatus shown in
The method for determining the correction area according to the present embodiment will now be described with reference to
L(A) is a width required for smoothly connecting the first image and the second image at the representative point A, and is determined by the same method as in the first embodiment. The same applies to L(B) and L(C).
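As a rough illustration of this embodiment, the following sketch keeps one width per representative point instead of taking their maximum, so the correction area narrows wherever the relative shift is small; the data layout is assumed for illustration.

```python
import math

def per_point_correction_widths(center_line_shifts, alpha):
    """center_line_shifts: list of ((dx1, dy1), (dx2, dy2)) shift pairs, one per
    representative point along the center line of the overlapped area.
    Returns one correction width L = alpha * M per representative point."""
    return [alpha * math.hypot(dx1 - dx2, dy1 - dy2)
            for (dx1, dy1), (dx2, dy2) in center_line_shifts]
```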
As
The above mentioned correction area, which adaptively changes according to the relative coordinate shift amount of the center line of the overlapped area, is determined for the first column (C1) of the overlapped area in
Here the shift from the true value was described as a coordinate shift generated due to the influence of distortion, but the description is applicable to the case of a pixel value shift as well, and not just the case of the coordinate shift.
According to the present embodiment described above, the correction area can be made smaller than in the first embodiment, therefore the area in which resolution deteriorates due to image correction can be further decreased.
Now the third embodiment of the present invention will be described. In the first embodiment and the second embodiment mentioned above, the method for determining the correction area based on the representative points on the center line of the overlapped area was described. In the third embodiment, on the other hand, the position of the correction area is adaptively determined based on the correlation of the two images in the overlapped area. The difference from the first embodiment and the second embodiment is that the calculation of the relative coordinate shift amount does not depend on the center line of the overlapped area. Thus the only difference of the present embodiment from the first and second embodiments is the method for determining the correction area. Therefore in the description of the present embodiment, a detailed description of the portions that are the same as the first embodiment is omitted. For example, the configuration and the processing sequence of imaging and image merging of the imaging apparatus shown in
The method for determining the correction area according to the present embodiment will now be described with reference to
First in the overlapped area having width K, hierarchical block matching is performed between the first image and the second image, whereby the portion where the correlation of these images is highest (portion where these images are most similar) is detected. In concrete terms, the correlation (degree of consistency) between the first image and the second image in the search block is determined at each position, while gradually shifting the position of the search block in the horizontal direction, and detecting a position where the correlation is highest. By performing this processing at a plurality of positions in the vertical direction in the overlapped area, a block group where correlation between the first image and the second image is high can be obtained.
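The following is a simplified, single-level sketch of the block matching described above (the text uses hierarchical block matching), using normalized cross-correlation as the degree of consistency; the block size and search range are illustrative assumptions. Applying it at a plurality of vertical positions yields the block group where the correlation between the two images is high.

```python
import numpy as np

def best_horizontal_match(first_img, second_img, row, block_size=16, search_width=32):
    """Return the horizontal offset within the overlapped area at which a block
    of the first image at the given row best matches the second image."""
    ref = first_img[row:row + block_size, :block_size].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    best_offset, best_score = 0, -np.inf
    for offset in range(search_width):                # gradually shift the search block horizontally
        cand = second_img[row:row + block_size, offset:offset + block_size].astype(float)
        cand = (cand - cand.mean()) / (cand.std() + 1e-9)
        score = float((ref * cand).mean())            # normalized cross-correlation (degree of consistency)
        if score > best_score:
            best_score, best_offset = score, offset
    return best_offset
```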
Now as
The correction area N, which adaptively changes according to the above mentioned relative coordinate shift amount of the correction center line, is determined for the first column (C1) of the overlapped area in
According to the present embodiment, the overlapped area which is temporarily set for searching with the search block, and the final overlapped area, which is determined based on the maximum width of the correction area, must be set separately. The connection becomes smoother as the temporarily set overlapped area becomes larger, but if it is too large, the final overlapped area may also become large, hence an appropriate numeric value is set arbitrarily.
According to the present embodiment described above, the correction area can be made even smaller than in the first and second embodiments, therefore the area where resolution deteriorates due to image correction can be further decreased.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2010-273386, filed on Dec. 8, 2010 and Japanese Patent Application No. 2011-183092, filed on Aug. 24, 2011, which are hereby incorporated by reference herein in their entirety.