The present invention relates to an image processing apparatus, an image processing method, and an image processing program for extracting boundary lines of layers by processing a tomographic image of a target object, such as a tomographic image of a subject's eye, captured using a tomography apparatus or the like.
One of the ophthalmic diagnostic apparatuses is a tomography apparatus that utilizes the optical interference of so-called optical coherence tomography (OCT) to capture tomographic pictures of the ocular fundus. Such a tomography apparatus irradiates the ocular fundus with broadband, low-coherence light as the measurement light and can capture tomographic pictures of the ocular fundus with high sensitivity through the interference between the light reflected from the ocular fundus and reference light.
Such a tomography apparatus is capable of three-dimensionally observing the state inside the retinal layers. For example, it is possible to quantitatively diagnose the stage of progression of an ophthalmic disorder, such as glaucoma, and the degree of recovery after treatment by measuring the thickness of a retinal layer, such as the nerve fiber layer, or the change in layer shape, such as irregularities, of the retinal pigment epithelium layer.
Patent Literature 1 describes a configuration for detecting the boundary of a retinal layer from a tomographic picture captured by a tomography apparatus and extracting exudates as one of lesions from the ocular fundus image.
Patent Literature 2 describes a configuration for identifying an artifact region in the tomographic image of an ocular fundus, detecting the boundary of a retinal layer in a region that is not an artifact region, detecting the boundary of a retinal layer in the artifact region by a different method on the basis of luminance values, and superimposing and displaying lines that represent the detected boundaries.
Patent Literature 3 describes a configuration for detecting layers on the basis of edges extending from a side at which the intensity of the signal light obtained from a tomographic image of a subject's eye is low to a side at which the intensity is high, and detecting a layer or layer boundary existing between the detected layers on the basis of an edge extending from the side at which the intensity of the signal light is high to the side at which the intensity is low.
Patent Literature 4 describes a configuration for preliminarily setting an existence probability model in which the existence probability of brain tissues in the three-dimensional space of an MRI image is modeled, obtaining a tissue distribution model in which both a signal intensity distribution model and the existence probability model hold, and calculating, for each voxel included in the MRI image, the degree to which the voxel belongs to white matter tissues and gray matter tissues.
Non-Patent Literature 1 describes a configuration for acquiring an edge image of a retinal tomographic picture using a Canny edge detection method, combining the edge image and a luminance gradient image with weights, and searching for a shortest route to extract a boundary line of the retinal layer. Non-Patent Literature 1 also describes a configuration for first extracting two boundary lines when extracting boundary lines of a plurality of retinal layers, and then searching for another boundary line within the narrow range interposed between the two extracted boundary lines, thereby reducing the extraction time.
[Patent Literature 1] JP2010-279438A
[Patent Literature 2] JP2012-61337A
[Patent Literature 3] JP5665768B
[Patent Literature 4] JP2011-30911A
[Non-Patent Literature 1] “Automated layer segmentation of macular OCT images using dual-scale gradient information,” Optics Express, Vol. 18, No. 20, pp. 21293-21307, 27 Sep. 2010
In the configurations of Patent Literatures 1 to 3, however, the boundary line of a layer is extracted by detecting edges in the tomographic image, and the accuracy of the boundary line extraction therefore depends on the edge detection. A problem thus exists in that edge information cannot be obtained with a sufficient degree of accuracy in a region in which the luminance value is low, making reliable boundary line extraction difficult.
In the configuration of Patent Literature 2, the position of a layer is obtained in the artifact region, in which the luminance value is low, using an evaluation function value for a model shape. However, the accuracy of the evaluation function value is insufficient and a problem exists in that the accuracy in extracting the boundary line deteriorates in a region in which the luminance value is low.
In Non-Patent Literature 1, the edge image of a retinal tomographic picture and the luminance gradient image are each weighted to extract the boundary line. However, the weighting is one-dimensional and, in particular, the edge image is not associated with the probability of existence of the boundary line to be extracted. A disadvantage is therefore that the boundary line cannot be extracted with a high degree of accuracy.
In Patent Literature 4, the existence probability model representing the existence probability of brain tissues is used to calculate the degree to which a region of brain tissue belongs to white matter tissues and gray matter tissues, but a problem exists in that the calculation takes time because the existence probability model is a three-dimensional model.
In Non-Patent Literature 1, the extraction time is reduced by extracting two boundary lines and then searching for another boundary line within the narrow range interposed between them. However, because a process of restricting the extraction region on the basis of an extraction result and then further restricting it on the basis of the next extraction result is not repeated sequentially, the extracted boundary line may cross a boundary line that has already been extracted, and an ambiguous or disappearing boundary line cannot be extracted with a high degree of accuracy. In addition, even though the extraction time is reduced by searching within a range interposed between the extracted boundary lines, the search within such a range has to be repeated many times to extract all of the plurality of boundary lines, and the time necessary for the search increases as the number of boundary lines to be extracted increases.
The present invention has been made in consideration of the above, and objects of the present invention include providing an image processing apparatus, an image processing method, and an image processing program with which a boundary line of a layer can be extracted with a high degree of accuracy from a captured image of a target object that is composed of a plurality of layers, and a plurality of layers can be efficiently extracted in a short time.
To achieve the above objects, first, the present invention provides an image processing apparatus comprising a boundary line extraction means that extracts a boundary line of a layer from an input image obtained by capturing an image of a target object composed of a plurality of layers, the boundary line extraction means being configured to: first extract boundary lines at upper and lower ends of the target object; limit a search range using the extracted boundary lines at the upper and lower ends to extract another boundary line; limit the search range using an extraction result of the other boundary line to extract still another boundary line; and then sequentially repeat similar processes to extract subsequent boundary lines (Invention 1).
The above invention (Invention 1) may further comprise a curvature correction means that corrects the input image to match a curvature of a previously extracted boundary line (Invention 2).
Second, the present invention provides an image processing apparatus comprising: a boundary line extraction means that extracts a boundary line of a layer from an input image obtained by capturing an image of a target object composed of a plurality of layers; and a search range setting means that utilizes an already extracted boundary line extracted by the boundary line extraction means to dynamically set a search range for another boundary line (Invention 3).
In the above invention (Invention 3), the search range setting means may preferably dynamically set the search range for the other boundary line in accordance with the inclination of the already extracted boundary line (Invention 4).
In the above inventions (Inventions 3 and 4), the search range setting means may set the search range for the other boundary line such that the search range is separated from the already extracted boundary line by a predetermined distance (Invention 5).
Third, the present invention provides an image processing method for extracting a boundary line of a layer from an input image obtained by capturing an image of a target object composed of a plurality of layers, the image processing method comprising: first extracting boundary lines at upper and lower ends of the target object; limiting a search range using extraction results of the extracted boundary lines at the upper and lower ends to extract another boundary line; further limiting the search range using an extraction result of the other boundary line to extract still another boundary line; and then sequentially repeating similar processes to extract subsequent boundary lines (Invention 6).
In the above invention (Invention 6), the input image may be corrected to match a curvature of a previously extracted boundary line before another boundary line is extracted (Invention 7).
Fourth, the present invention provides an image processing method for extracting a boundary line of a layer from an input image obtained by capturing an image of a target object composed of a plurality of layers, wherein an already extracted boundary line is utilized to dynamically set a search range for another boundary line (Invention 8).
In the above invention (Invention 8), it may be preferred to dynamically set the search range for the other boundary line in accordance with the inclination of the already extracted boundary line (Invention 9).
In the above inventions (Inventions 8 and 9), the search range for the other boundary line may be set such that the search range is separated from the already extracted boundary line by a predetermined distance (Invention 10).
Fifth, the present invention provides an image processing program that causes a computer to serve as the image processing apparatus according to any one of Invention 1 to 5 or causes a computer to execute the image processing method according to any one of Invention 6 to 10 (Invention 11).
In the present invention, when a plurality of boundary lines is extracted from the input image, an already extracted boundary line can be utilized to allow another boundary line to be effectively extracted.
That is, boundary lines can be extracted by sequentially repeating similar processes, such as a process of limiting the search range on the basis of its previous extraction result to extract another boundary line, a process of limiting the search range on the basis of its previous extraction result to extract still another boundary line, and a process of limiting the search range on the basis of its previous extraction result to extract yet another boundary line. Such extraction of boundary lines allows the search range to be limited for any extraction, thus increasing the speed of the extraction process, and the extraction is easy because the parameters (e.g. the existence probability and weighting coefficients) can be appropriately set again every time the range is changed.
Moreover, the curvature of a layer structure of the input image can be corrected using the previously extracted boundary line thereby to improve the accuracy in extraction of the boundary lines because the directions of edges and luminance gradient are aligned.
Furthermore, in the present invention, the search range can be set to match the inclination of an already extracted boundary line thereby to enable highly-accurate extraction of a boundary line that is a similar curve to the already extracted boundary line. In addition, the search range can be set so as to be separated from an already extracted pixel by a predetermined distance, and the search can thereby be performed within a range that does not cross the already extracted boundary line. This can avoid crossing of the extracted boundary lines.
Thus, the already extracted boundary line can be utilized to appropriately set the search range and it is thereby possible to extract an ambiguous boundary line or a boundary line that partially disappears, without crossing the already extracted boundary line.
Hereinafter, the present invention will be described in detail on the basis of one or more examples or embodiments with reference to the drawings. Description will be made herein by exemplifying tomographic images of the ocular fundus of a subject's eye as the images of a target object to be processed, but the images to be processed in the present invention are not limited to tomographic images of an ocular fundus, and the present invention can be applied to any captured images of a target object composed of a plurality of layers.
The system further includes an image processing apparatus 20. The image processing apparatus 20 has a control unit 21 that is realized by a computer composed of a CPU, a RAM, a ROM, and other necessary components. The control unit 21 executes an image processing program thereby to control the entire image processing. The image processing apparatus 20 is provided with a tomographic image forming unit 22.
The tomographic image forming unit 22 is realized by a dedicated electronic circuit that executes a known analyzing method, such as a Fourier-domain scheme, or by an image processing program that is executed by the previously-described CPU. The tomographic image forming unit 22 forms tomographic images of an ocular fundus on the basis of the OCT signals generated from the tomography apparatus 10.
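Although the disclosure does not specify the computation performed by the tomographic image forming unit 22, a minimal sketch of a Fourier-domain scheme is given below, assuming the OCT signals arrive as spectral interferograms already resampled to be linear in wavenumber; the array layout, background subtraction, and windowing are illustrative assumptions, not part of the disclosed apparatus.

```python
import numpy as np

def form_tomogram(spectra: np.ndarray) -> np.ndarray:
    """Form a B-scan from spectral interferograms (one row per A-scan).

    spectra: (n_ascans, n_samples) interference spectra, assumed linear in
    wavenumber k. Returns a (n_samples // 2, n_ascans) intensity image in dB,
    with depth (Z) along the rows.
    """
    # Subtract the average spectrum to suppress the DC / background term.
    spectra = spectra - spectra.mean(axis=0, keepdims=True)
    # Apodize to reduce FFT side lobes.
    spectra = spectra * np.hanning(spectra.shape[1])
    # The depth profile (A-scan) is the inverse FFT of the spectrum.
    ascans = np.fft.ifft(spectra, axis=1)
    half = spectra.shape[1] // 2          # keep the non-mirrored half
    intensity = np.abs(ascans[:, :half]) ** 2
    return 10.0 * np.log10(intensity.T + 1e-12)
```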
For example, as illustrated in
A plurality (t) of tomographic images BN formed by the tomographic image forming unit 22, or a three-dimensional volume image assembled from the t tomographic images BN, is stored in a storage unit 23 composed of a semiconductor memory, a hard disk drive, or another appropriate storage device. The storage unit 23 further stores the above-described image processing program and other necessary programs and data.
The image processing apparatus 20 is provided with an image processing unit 30. The image processing unit 30 comprises a boundary line candidate image creating means 31, a luminance value-differentiated image creating means 32, a luminance value information image creating means 32a, an evaluation score image creating means 33, a boundary line extracting means 34, a search range setting means 35, and a control point setting means 36.
As will be described later, the boundary line candidate image creating means 31 detects edges in an input image to create a boundary line candidate image. The luminance value-differentiated image creating means 32 differentiates the luminance value of the input image to create a luminance value-differentiated image that represents a luminance gradient. The luminance value information image creating means 32a shifts the input image in the vertical direction to create a luminance value information image that represents luminance information. The evaluation score image creating means 33 creates, on the basis of the created images and read-out images, an evaluation score image that represents an evaluation score for boundary line extraction. The boundary line extracting means 34 searches the evaluation score image for the route having the highest total evaluation score and extracts it as a boundary line. The search range setting means 35 sets a search range to match one or more already extracted boundary lines, and the control point setting means 36 sets control points at a certain pixel interval on the extracted boundary line. Each means and each image process in the image processing unit 30 is realized by a dedicated electronic circuit or by executing the image processing program.
An existence probability image storage unit 26 is provided to store, for each boundary line to be extracted, an image that represents the existence probability of the boundary line, as will be described later.
A weighting coefficient storage unit 27 is provided to store a weighting coefficient with which the luminance value-differentiated image is weighted and a weighting coefficient with which the luminance value information image is weighted, for each boundary line to be extracted.
A display unit 24 is provided which is, for example, composed of a display device such as an LCD. The display unit 24 displays tomographic images stored in the storage unit 23, images generated or processed by the image processing apparatus 20, control points set by the control point setting means 36, associated information such as information regarding the subject, and other information.
An operation unit 25 is provided which, for example, has a mouse, keyboard, operation pen, pointer, operation panel, and other appropriate components. The operation unit 25 is used for selection of an image displayed on the display unit 24 or used for an operator to give an instruction to the image processing apparatus 20 or the like.
Among the tomographic images captured using such a configuration, the tomographic image Bk acquired with the scanning line yk passing through a macula region R of the ocular fundus E illustrated in
Specifically,
In the lower part of
If such a tomographic image can be used as the basis to measure the layer thicknesses of the nerve fiber layer and other layers and the change in layer shape, such as irregularities, of the retinal pigment epithelium layer, it will be possible to quantitatively diagnose the stage of progression of an ophthalmic disorder and the degree of recovery after treatment. Depending on the image capturing environment, however, accurate measurement of the layer thickness or layer shape may be difficult due to attenuation or loss of OCT signals, which causes the boundary line of each layer to become ambiguous or discontinuous or to disappear.
In the present invention, therefore, the following method is employed to extract boundary lines of retinal layers with a high degree of accuracy. This method will be described below with reference to the flowchart of
First, as illustrated in step S1 of the flowchart, an image to be processed is selected from among the tomographic images stored in the storage unit 23 as the input image B.
After the input image B is selected, a boundary line to be extracted from the input image B is determined and an existence probability image for the boundary line is read out from the existence probability image storage unit 26 (step S2).
The existence probability image storage unit 26 stores an existence probability image for each of the boundary lines to be extracted.
Each such existence probability image is an image comprising m×n pixels that is obtained by preliminarily acquiring the tomographic image Bk with the same scanning line yk for a plurality of normal eyes, calculating the probability that a boundary line exists in each pixel (i, j) [i=1, 2, . . . , m; j=1, 2, . . . , n] of the tomographic image Bk, and storing that probability as a digital value at the pixel position corresponding to each pixel (i, j) of the tomographic image Bk.
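By way of a hedged illustration of how such an image might be prepared offline (the text specifies only what the image contains, not how it is computed), the sketch below averages boundary positions annotated on tomograms of normal eyes; the smoothing step and all names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_existence_probability_image(boundary_rows, height, width, sigma=2.0):
    """boundary_rows: (n_eyes, width) array; boundary_rows[e, j] is the Z (row)
    position of the boundary in column j annotated for normal eye e.
    Returns a height x width image whose pixel (i, j) holds the probability
    that the boundary passes through that pixel."""
    counts = np.zeros((height, width))
    for rows in boundary_rows:
        counts[rows.astype(int), np.arange(width)] += 1.0
    prob = counts / len(boundary_rows)
    prob = gaussian_filter(prob, sigma=sigma)  # spread probability to nearby pixels
    return prob / (prob.max() + 1e-12)         # normalize to [0, 1]
```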
For example, the existence probability image ILM (L1) for the boundary line L1 of the tomographic image Bk is schematically illustrated as HILM in the lower part of
The existence probability image storage unit 26 can be an external storage unit rather than being provided in the image processing apparatus 20. For example, the existence probability image storage unit 26 may be provided in a server connected via the Internet.
Description herein is directed to an example in which the boundary line L1 of the internal limiting membrane (ILM) is extracted first among the boundary lines of the tomographic image Bk. Accordingly, for the input image B, the existence probability image determined by the boundary line L1 (ILM) is read out from the existence probability image storage unit 26. The read-out existence probability image is illustrated as HILM in the upper part of
Subsequently, the boundary line candidate image creating means 31 of the image processing unit 30 is used to detect edges in the input image B and create a boundary line candidate image (step S3). For this edge detection, for example, a known Canny edge detection method is used. Edges extracted by the Canny edge detection method are thin edges. When a threshold is appropriately set, such as by setting a high threshold for a high-contrast region, the boundary line candidate image can be created which comprises a plurality of thin lines to be boundary line candidates in the input image B. This boundary line candidate image is an image of m×n pixels that has a value indicative of information on the presence or absence of an edge in the input image B as a digital value at a pixel position corresponding to each pixel (i, j). The boundary line candidate image is illustrated as EILM in
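A minimal sketch of step S3 using OpenCV's Canny detector follows; the fixed thresholds are placeholders for the region-dependent thresholds the text describes.

```python
import cv2
import numpy as np

def create_boundary_candidate_image(input_image, low=50, high=150):
    """Step S3: detect thin edges in the m x n input image B.
    Returns an m x n image with 1 where an edge (boundary line
    candidate) is present and 0 elsewhere."""
    img8 = cv2.normalize(input_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(img8, low, high)         # thin, single-pixel-wide edges
    return (edges > 0).astype(np.float32)
```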
Subsequently, the luminance value-differentiated image creating means 32 is used to differentiate the luminance value of the input image B in the Z direction and create a luminance value-differentiated image (step S4). Differentiating the luminance value is calculating the luminance gradient in the Z direction. The luminance gradient is calculated, for example, using a differential filter such as a known Sobel filter. The luminance value-differentiated image is an image of m×n pixels that has a value indicative of the luminance gradient of the input image B as a digital value at a pixel position corresponding to each pixel (i, j). The luminance value-differentiated image is illustrated as GILM in
Subsequently, a weighting coefficient WILM set for the luminance value-differentiated image is read out from the weighting coefficient storage unit 27 (step S4′). The weighting coefficient WILM is used for the boundary line ILM to be obtained.
Further, an image in which the input image B is shifted in the vertical direction by a desired number of pixels is created as a luminance value information image B′ (step S5), and a weighting coefficient QILM for ILM set for the luminance value information image B′ is read out from the weighting coefficient storage unit 27 (step S5′). The shift amount and shift direction in the vertical direction can be changed appropriately in accordance with the boundary line to be obtained; in an example for the case of ILM, the input image B is shifted upward by 5 pixels. That is, it is preferred to determine the shift direction and shift amount such that the luminance information overlaps the boundary line to be obtained.
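Steps S4 and S5 can be sketched as follows, assuming the Z direction corresponds to image rows; the 5-pixel upward shift matches the ILM example above, while the border handling is an assumption.

```python
import numpy as np
from scipy.ndimage import sobel

def create_luminance_gradient_image(input_image):
    """Step S4: differentiate the luminance in the Z (row) direction."""
    return sobel(input_image, axis=0)          # Sobel derivative along Z

def create_luminance_info_image(input_image, shift=-5):
    """Step S5: shift the input image vertically so that its luminance
    overlaps the boundary to be obtained (5 pixels upward for ILM)."""
    shifted = np.roll(input_image, shift, axis=0)
    if shift < 0:
        shifted[shift:, :] = 0                 # clear rows wrapped from the top
    elif shift > 0:
        shifted[:shift, :] = 0                 # clear rows wrapped from the bottom
    return shifted
```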
Subsequently, the evaluation score image creating means 33 is used to calculate and create an evaluation score image CILM on the basis of the following equation (step S6). The evaluation score image CILM is calculated and created on the basis of the boundary line candidate image EILM, the existence probability image HILM, the luminance value-differentiated image GILM, the weighting coefficient WILM set for the luminance value-differentiated image, the luminance value information image B′, and the weighting coefficient QILM set for the luminance value information image.
CILM = EILM × HILM + WILM × GILM + QILM × B′
Each pixel (i, j) of the evaluation score image CILM is scored as a digital value and, therefore, a route search is performed using dynamic programming, for example, to search for a route having the highest total score and extract the boundary line of ILM.
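A minimal sketch of steps S6 and S7 is given below: the evaluation score image is combined exactly as in the above equation, and a left-to-right dynamic programming pass finds the route with the highest total score; the constraint that the row may change by at most s pixels per column is an assumption for illustration.

```python
import numpy as np

def extract_boundary_line(E, H, G, Bp, w, q, s=1):
    """Step S6: score image C = E*H + w*G + q*B'. Step S7: dynamic
    programming search for the route (one row per column) whose total
    score is highest, moving at most s rows between adjacent columns."""
    C = E * H + w * G + q * Bp
    m, n = C.shape
    best = np.full((m, n), -np.inf)
    back = np.zeros((m, n), dtype=int)
    best[:, 0] = C[:, 0]
    for j in range(1, n):
        for i in range(m):
            lo, hi = max(0, i - s), min(m, i + s + 1)  # reachable rows in column j-1
            k = lo + int(np.argmax(best[lo:hi, j - 1]))
            best[i, j] = best[k, j - 1] + C[i, j]
            back[i, j] = k
    route = np.zeros(n, dtype=int)
    route[-1] = int(np.argmax(best[:, -1]))
    for j in range(n - 1, 0, -1):                      # trace back the best route
        route[j - 1] = back[route[j], j]
    return route
```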
In
As will be understood, the route search is started from the pixel line P1 in
As illustrated in the lower part of
Subsequently, a determination is made as to whether all boundary lines have been extracted (step S9). If there is a boundary line that has not been extracted, the routine returns to step S2, while if all the boundary lines have been extracted, the process is ended.
As described above, in the present embodiment, the evaluation score image is formed through obtaining the positional information of the input image using the boundary line candidate image and the existence probability image, obtaining the luminance value information of the input image using the luminance value-differentiated image, and combining the positional information and the luminance value information which is weighted with an appropriate weighting coefficient. Among the pixels of the evaluation score image, a pixel in which the boundary line to be extracted exists has a high evaluation score due to the calculation using the existence probability image. Thus, the accuracy in extraction of the boundary line can be remarkably improved because the boundary line is determined by searching for such pixels having high evaluation scores.
The weighting coefficient applied to the luminance value-differentiated image can be omitted depending on the boundary line to be extracted. When extracting a plurality of boundary lines, the luminance gradient information is weighted in accordance with the boundary lines to be extracted, thereby allowing boundary lines having different characteristics to be extracted with a high degree of accuracy.
When the boundary line extracted in step S7 is superimposed on the input image and displayed, and the extracted boundary line is misaligned with the actual boundary, the user can modify the boundary line position, as will be described later. In this case, in accordance with the modification, the existence probability image for the boundary line stored in the existence probability image storage unit 26 can also be modified for learning.
When the weighting coefficient applied to the luminance value-differentiated image in the boundary line extraction process is modified, the boundary line can be more satisfactorily extracted. In such a case, in accordance with the modification of the weighting coefficient, the weighting coefficient for the boundary line stored in the weighting coefficient storage unit 27 may also be modified for learning.
As illustrated in step S9 of
Among the boundary lines, the boundary line of the internal limiting membrane ILM (L1) at the uppermost end and the boundary line of the retinal pigment epithelium RPE (L10) at the lowermost end represent boundaries at which the luminance change is large, and are thus easy to extract. These boundary lines are therefore extracted first, and the extracted boundary lines are utilized to limit and/or set the search range to extract other boundary lines.
In
Subsequently, the luminance value differentiation is performed for each layer in the input image B to extract the luminance gradient (process in step S4 of
After such processing, the existence probability images HILM, HNFL/GCL, HGCL/IPL, . . . for respective layers, the weighting coefficients WILM, WNFL/GCL, WGCL/IPL, . . . set for the luminance value-differentiated images, and the weighting coefficients QILM, QNFL/GCL, QGCL/IPL, . . . set for the luminance value information are read out from the existence probability image storage unit 26 and the weighting coefficient storage unit 27, and the process as described in step S6 of
The uppermost ILM represents a boundary at which the luminance change is large, so the ILM is selected first in the order of extraction. The search range is set to the entire input image, and the route having the highest total score of the evaluation score EILM×HILM+WILM×GILM+QILM×B′ as a parameter is searched for and extracted as the boundary line L1 of ILM (step T3). The process of extracting the boundary line L1 for ILM corresponds to the process described with reference to
Then, the lowermost RPE is selected. In the same manner as above, the search range is set to the entire input image, and the route having the highest total score of the parameter ERPE×HRPE+WRPE×GRPE+QRPE×B′ is searched for. The route determined to have the highest total score as a result of the search is extracted as the boundary line L10 of RPE (step T4). In an alternative embodiment, extraction of the boundary line L10 of RPE may be performed first and followed by extraction of the boundary line L1 of ILM.
Subsequently, the search range is set to the range between the already extracted boundary lines ILM(L1) and RPE(L10), and the route having the highest total score of the parameter EIS/OS×HIS/OS+WIS/OS×GIS/OS+QIS/OS×B′ is searched for and extracted as the boundary line L8 of IS/OS (step T5).
Subsequently, the search range is set to the range between the already extracted boundary lines ILM(L1) and IS/OS(L8), and the route having the highest total score of the parameter EOPL/ONL×HOPL/ONL+WOPL/ONL×GOPL/ONL+QOPL/ONL×B′ is searched for and extracted as the boundary line L6 of OPL/ONL (step T6). In addition, the search range is set to the range between the already extracted boundary lines IS/OS(L8) and RPE(L10), and the route having the highest total score of the parameter EOS/RPE×HOS/RPE+WOS/RPE×GOS/RPE+QOS/RPE×B′ is searched for and extracted as the boundary line L9 of OS/RPE (step T7).
Similarly, the search range is set to the range between the already extracted boundary lines ILM(L1) and OPL/ONL(L6), and the route having the highest total score of the parameter ENFL/GCL×HNFL/GCL+WNFL/GCL×GNFL/GCL+QNFL/GCL×B′ is searched for and extracted as the boundary line L2 of NFL/GCL (step T8). In addition, the search range is set to the range between the already extracted boundary lines OPL/ONL(L6) and IS/OS(L8), and the route having the highest total score of the parameter EELM×HELM+WELM×GELM+QELM×B′ is searched for and extracted as the boundary line L7 of ELM (step T9).
Likewise, the search range is set to the range between the already extracted boundary lines NFL/GCL(L2) and OPL/ONL(L6), and the route having the highest total score of the parameter EIPL/INL×HIPL/INL+WIPL/INL×GIPL/INL+QIPL/INL×B′ is searched for and extracted as the boundary line L4 of IPL/INL (step T10). In addition, the search range is set to the range between the already extracted boundary lines NFL/GCL(L2) and IPL/INL(L4), and the route having the highest total score of the parameter EGCL/IPL×HGCL/IPL+WGCL/IPL×GGCL/IPL+QGCL/IPL×B′ is searched for and extracted as the boundary line L3 of GCL/IPL (step T11). Finally, the search range is set to the range between the already extracted boundary lines IPL/INL(L4) and OPL/ONL(L6), and the route having the highest total score of the parameter EINL/OPL×HINL/OPL+WINL/OPL×GINL/OPL+QINL/OPL×B′ is searched for and extracted as the boundary line L5 of INL/OPL (step T12). Ten boundary lines are thus extracted.
As will be apparent from the above-described processing, except for the internal limiting membrane ILM(L1) and the retinal pigment epithelium RPE(L10), which are extracted first, the boundary lines are extracted by sequentially repeating similar processes: a process of limiting the search range on the basis of the previous extraction result to extract another boundary line, a process of limiting the search range on the basis of that extraction result to extract still another boundary line, and so on.
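The extraction order of steps T3 to T12 and the way each search range is bounded by already extracted lines can be sketched as follows; `limit_search_range` masks pixels outside the band with a large negative score before the route search above is run, which is an assumed implementation detail rather than the disclosed one.

```python
import numpy as np

# Extraction order: (boundary name, upper bounding line, lower bounding line).
# None means the corresponding edge of the image (steps T3 and T4).
ORDER = [("ILM", None, None), ("RPE", None, None),
         ("IS/OS", "ILM", "RPE"), ("OPL/ONL", "ILM", "IS/OS"),
         ("OS/RPE", "IS/OS", "RPE"), ("NFL/GCL", "ILM", "OPL/ONL"),
         ("ELM", "OPL/ONL", "IS/OS"), ("IPL/INL", "NFL/GCL", "OPL/ONL"),
         ("GCL/IPL", "NFL/GCL", "IPL/INL"), ("INL/OPL", "IPL/INL", "OPL/ONL")]

def limit_search_range(C, upper, lower):
    """Exclude pixels on or outside two extracted lines (row index per
    column) by giving them a score no route can profitably pass through."""
    m, n = C.shape
    rows = np.arange(m)[:, None]
    limited = C.copy()
    if upper is not None:
        limited[rows <= upper[None, :]] = -1e9
    if lower is not None:
        limited[rows >= lower[None, :]] = -1e9
    return limited

def extract_all(score_images, route_search):
    """score_images: name -> evaluation score image C for that boundary.
    route_search: function mapping a score image to a row-per-column route."""
    extracted = {}
    for name, up, lo in ORDER:
        C = limit_search_range(score_images[name],
                               extracted.get(up), extracted.get(lo))
        extracted[name] = route_search(C)
    return extracted
```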
Extracting the boundary lines sequentially in this manner has the advantages that a high-speed extraction process can be achieved because the search range is limited for every extraction, and that the extraction is easy because the parameters (e.g. the existence probability and weighting coefficients) can be appropriately set again every time the range is changed. Moreover, as will be described later, it is possible to avoid crossing the already extracted boundary lines and to extract a boundary line that is ambiguous or disappears, because the already extracted boundary lines can be utilized to set the search range.
Furthermore, when extracting a plurality of boundary lines, curvature correction can be performed using one or more boundary lines that are previously extracted. For example, when the boundary line of IS/OS(L8) is extracted in step T5 of
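The disclosure does not state how the curvature correction is computed; a common approach, sketched below under that assumption, is to shift each column so that the previously extracted boundary line becomes flat, which aligns the directions of the edges and the luminance gradient. A boundary line extracted on the corrected image can then be mapped back by subtracting the per-column shifts.

```python
import numpy as np

def correct_curvature(image, ref_line):
    """Shift each column so the previously extracted boundary ref_line
    (row index per column) becomes a horizontal line. Returns the
    corrected image and the per-column shifts used."""
    target = int(np.round(ref_line.mean()))    # flatten to the mean height
    shifts = target - ref_line.astype(int)
    corrected = np.zeros_like(image)
    for j, s in enumerate(shifts):
        col = np.roll(image[:, j], s)
        if s > 0:
            col[:s] = 0                        # zero pixels wrapped from the bottom
        elif s < 0:
            col[s:] = 0                        # zero pixels wrapped from the top
        corrected[:, j] = col
    return corrected, shifts
```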
In the above-described process, when the user modifies a boundary line, the extraction process is performed again for boundary lines that are extracted after the modified boundary line. For example, when OPL/ONL(L6) is extracted in step T6 of
The image processing unit 30 is provided with the search range setting means 35, which can be used to dynamically set the search range for a boundary line utilizing one or more already extracted boundary lines.
Its examples are illustrated in
In contrast, when an already extracted boundary line is utilized to set the search range for a boundary line, the search range is dynamically set in accordance with the inclination of an already extracted boundary line L. As illustrated in
Thus, the search range is set to match the inclination of the already extracted boundary line thereby to allow highly-accurate extraction of a boundary line that is a similar curve to the already extracted boundary line.
Moreover, as illustrated in
In some cases, as illustrated in the upper part of
When the search range is not set to match the already extracted boundary lines, as illustrated in the left side of the figure, a pixel line having a 3-pixel length in the Z direction is set as a part of the search range so as to be centered on the pixel (i−1, j) adjacent to the left of the pixel of interest I (i, j) and, in a similar manner, pixel lines having a 3-pixel length are sequentially set at the left side as parts of the search range (in this case, s=1). The final search range thus set is as illustrated in the middle of the left side. Then, when the route search is performed such that the evaluation score is highest, the extracted boundary line will be a curve indicated by a dashed line as illustrated in the lower part and will cross the already extracted boundary line located below. This causes a situation in which a continuous boundary line cannot be extracted.
In contrast, when the search range is set to match the already extracted boundary lines, it is possible to extract a boundary line that is ambiguous or discontinuous. This is illustrated at the right side of
In the examples illustrated in
Thus, the already extracted boundary lines are utilized to appropriately set the search range and it is thereby possible to extract an ambiguous boundary line or a boundary line that partially disappears, without crossing the already extracted boundary lines.
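A hedged sketch of the dynamic setting described above follows: the candidate rows for the next column track the local inclination of an already extracted line and stay at least a predetermined distance away from it; the half-width s and distance d are illustrative parameters, not values from the disclosure.

```python
import numpy as np

def next_column_candidates(i, j, ref_line, m, s=1, d=2):
    """Rows that may be searched in column j+1, given the current pixel (i, j).

    ref_line: row index per column of an already extracted boundary line.
    The window of half-width s is centered on the row predicted by the
    reference line's local inclination, and rows within distance d of the
    reference line are excluded so the two lines cannot cross."""
    slope = int(ref_line[j + 1]) - int(ref_line[j])   # local inclination
    center = i + slope
    rows = np.arange(max(0, center - s), min(m, center + s + 1))
    return rows[np.abs(rows - int(ref_line[j + 1])) >= d]
```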
As illustrated in
For example, the user specifies one point on the boundary line using the mouse or operation pen of the operation unit 25 and also specifies a pixel interval D. The control unit 21 identifies a pixel on the specified boundary line, and the control point setting means 36 is used to set control points centered on the identified pixel in the right and left of the X direction, at pixel positions at which the D×n−th (n=1, 2, . . . ) pixel line and the boundary line cross each other. In an alternative embodiment, the control unit 21 may set the control points at given X-direction positions on the specified boundary line.
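Under the interpretation above, the control point setting means 36 might be sketched as follows: starting from the user-specified pixel, control points are placed wherever every D-th pixel column crosses the boundary line; the function name and return format are illustrative.

```python
import numpy as np

def set_control_points(boundary, x0, D):
    """boundary: row index per column of an extracted boundary line.
    x0: X position of the pixel the user specified on the line.
    D: pixel interval. Returns (x, z) control points to the left and
    right of x0 at every D-th column, including x0 itself."""
    n = len(boundary)
    xs = np.concatenate([np.arange(x0, -1, -D)[::-1],   # x0 and points to the left
                         np.arange(x0 + D, n, D)])      # points to the right
    return [(int(x), int(boundary[x])) for x in xs]
```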
The control points thus set are displayed on the display unit 24. For example, as illustrated in the lower left diagram of
In the boundary line extracted using the method illustrated in
A narrowed control point interval allows faithful representation of the extracted result, but modification of the boundary line takes time because the number of control points increases. In addition, when the control point interval is narrowed, fine irregularities may occur even on a smooth boundary line and a smooth curve cannot be obtained. Accordingly, the degree of pixel interval at which the control points are set may be set by the user in accordance with the features of the boundary line to be extracted or in accordance with the degree of modification necessary for the extracted boundary line.
For example, the upper diagram of
When the control point interval is increased, the number of control points to be modified is reduced, and the modification time can be shortened. For example, as illustrated in the lower part of
When control points are set on a boundary line at a given pixel interval as described above, one or more set control points may be removed and the remaining control points can be connected by a spline curve. Alternatively, one or more control points may be added to a space or spaces between the set control points and these control points can be connected by a spline curve to form a boundary line. Thus, the control points can be removed, added, or moved thereby to extract a smoother or faithful boundary line.
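Connecting the remaining control points with a spline, as described above, can be sketched with SciPy's cubic spline; removing, adding, or moving a point simply changes the list passed in, and the clipping to non-negative rows is an assumption.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_boundary(control_points, n_columns):
    """Connect (x, z) control points by a cubic spline and sample one row
    value per image column, yielding a smooth boundary line."""
    pts = sorted(control_points)               # spline needs increasing x
    xs = np.array([p[0] for p in pts], float)
    zs = np.array([p[1] for p in pts], float)
    spline = CubicSpline(xs, zs)
    cols = np.arange(n_columns)
    return np.clip(spline(cols), 0, None)      # row position per column
```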
As described above, the lower part of
In a case in which the route search is performed again for a plurality of boundary lines, in order to prevent the phenomenon of crossing of the boundary lines, when a control point Q (x, z) on a boundary line A moves to Q′ (x, z′), the following measures can be taken at the time of re-detection of a boundary line B. For example, as illustrated in
When the boundary line is modified, as described above, in accordance with the modification, the existence probability image for the boundary line stored in the existence probability image storage unit 26 can also be modified for learning of the existence probability image.
Number | Date | Country | Kind |
---|---|---|---|
2015-162125 | Aug 2015 | JP | national |
2015-162126 | Aug 2015 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2016/073497 | 8/9/2016 | WO | 00 |