The present invention relates to an image processing apparatus configured to process an image of a biological tissue.
Generally, lesion parts of biological tissues exhibit a color different from that of normal parts. As the performance of color endoscopes has improved, it has become possible to distinguish a lesion part whose color is only slightly different from the color of the normal parts. However, in order for an operator to accurately distinguish a lesion part from normal tissues based on only a slight color difference in the endoscopic image, the operator must be trained by a skilled instructor for a long period. Further, even for a skilled operator, it is not easy to distinguish lesion parts based on only a slight color difference, and careful operation is required.
For example, Japanese Patent Provisional Publication No. 2014-18332 (hereinafter referred to as patent document 1) suggests an endoscope apparatus in which, in order that lesion parts can be distinguished easily, it is judged, with respect to endoscopic image data photographed under illumination of white light, whether an object is a lesion part or not based on color information, and a color mapping process is executed to change colors of pixels of a portion which is judged to be the lesion part.
According to the endoscope apparatus of patent document 1, since the color mapping process is applied to all the pixels, a calculation amount necessary for the color mapping process is very large. Therefore, when a movement of an image is fast, at a time of a screening inspection or the like, the color mapping process is slowed and cannot follow the frame rate at which the endoscopic images are photographed. As a result, the color mapping is applied to another endoscopic image photographed after the endoscopic image for which the color mapping was intended, and there has occurred a problem that a positional displacement arises between the endoscopic image and the color mapping.
The present invention is made in view of the above circumstances and an object thereof is to provide an image processing apparatus capable of preventing positional displacement between marks applied to an endoscopic image to indicate lesion parts and the endoscopic image itself.
According to an embodiment of the present invention, there is provided an image processing apparatus, which has an image data obtaining means configured to obtain color moving image data including multiple pieces of image data capturing biological tissues, a scene determining means configured to determine a photographic scene based on the color moving image data, a score calculating means configured to calculate a score indicative of seriousness of lesion of the biological tissues captured in the image represented by the image data for each pixel, based on the image data, a marking means configured to apply marks indicative of a distribution of the scores on the image. The marking means is configured to execute a detailed marking process to apply the marks indicating the distribution of the scores in detail, and a simple marking process to apply the marks indicating the distribution of the scores in a manner simpler than the detailed marking process. The marking means executes one of the detailed marking process and the simple marking process in accordance with the result of determination of the photographic scene.
In the image processing apparatus described above, the scene determining means may determine a kind of image inspection photographed in the color moving image data.
In the image processing apparatuses described above, the scene determining means may determine whether the photographic scene is of a screening inspection or a thorough inspection, and the marking means may execute the simple marking process when the photographic scene is determined to be of the screening inspection, and may execute the detailed marking process when the photographic scene is determined to be of the thorough inspection.
In the image processing apparatus described above, the scene determining means may be provided with a moving image analyzing means configured to analyze movement of the image.
In the image processing apparatus described above, the moving image analyzing means may be provided with a velocity field calculating means configured to calculate a velocity field based on continuous multiple pieces of image data, and may determine a kind of the image inspection based on a calculation result of the velocity field.
In the image processing apparatus described above, the moving image analyzing means may be provided with an image velocity calculating means configured to calculate a representative value of magnitude of velocity vectors of respective pixels constituting the velocity field and obtain the representative value as an image velocity.
In the image processing apparatus described above, the moving image analyzing means may be provided with an image velocity change rate calculating means configured to calculate an image velocity change rate which represents a rate of change of the image velocity per unit time.
In the image processing apparatus described above, the moving image analyzing means may be provided with a resolution lowering means configured to lower a resolution of the image data.
In the image processing apparatus described above, the scene determining means may be provided with a brightness image data generating means configured to generate brightness image data of which pixel values are brightness values of the image data.
In the image processing apparatus described above, the scene determining means may be provided with an image simplifying means configured to simplify a brightness image represented by the brightness image data.
In the image processing apparatus described above, the image simplifying means may be provided with a resolution lowering means configured to lower a resolution of the brightness image, a blurring means configured to apply a blurring processing to the brightness image of which resolution has been lowered, and a resolution increasing means configured to increase the resolution of the brightness image to which the blurring processing has been applied to an original resolution.
In the image processing apparatus described above, the image simplifying means may be provided with a gradation reducing means configured to reduce gradation of the brightness image data.
In the image processing apparatus described above, the scene determining means may be provided with a contour line image data generating means configured to generate contour line image data representing contour lines of brightness values based on the brightness image.
In the image processing apparatus described above, the contour line image data generating means may be provided with a vector differential calculating means configured to calculate gradient of the brightness image data.
In the image processing apparatus described above, the scene determining means may be provided with a contour line density calculating means configured to calculate a density of the contour lines, and the scene determining means may determine the photographic scene based on the density of the contour lines.
In the image processing apparatus described above, the scene determining means may be provided with a brightness gradient calculating means configured to calculate brightness gradient within the image, and the scene determining means may determine the photographic scene based on the brightness gradient.
In the image processing apparatus described above, the scene determining means may be provided with a circularity calculating means configured to calculate a circularity of a low brightness area of the image, and the scene determining means may determine the photographic scene based on the circularity.
In the image processing apparatus described above, the scene determining means may have a centroid calculating means configured to calculate a center of gravity of a low brightness area of the image, and the scene determining means may determine the photographic scene based on the centroid.
According to an embodiment of the present invention, a positional displacement between marks applied to an endoscopic image to indicate lesion parts and the endoscopic image itself can be prevented.
Hereinafter, referring to the drawings, embodiments of an image processing apparatus according to the present invention will be described. Incidentally, in the following description, an electronic endoscope system will be explained as one embodiment of the present invention.
The processor 200 is provided with a system controller 202 and a timing controller 204. The system controller 202 is configured to execute programs stored in a memory 212, and integrally control the entire electronic endoscope apparatus 1. The system controller 202 is connected to an operation panel 214. The system controller 202 changes operations of the electronic endoscope apparatus 1 and parameters for respective operations in accordance with instructions, which are input through the operation panel 214 by an operator. The timing controller 204 is configured to output synchronizing signals used to adjust operation timings of various parts to respective circuits of the electronic endoscope apparatus 1.
A lamp 208 is actuated by a lamp power source igniter 206, and then irradiates illuminating light L. The lamp 208 is, for example, a high-intensity lamp such as a xenon lamp, a halogen lamp, a mercury lamp or a metal halide lamp, or an LED (light emitting diode). The illuminating light L is light having a spectrum ranging mainly from the visible light region to the invisible infrared region (or white light including at least the visible light region).
The illuminating light L irradiated by the lamp 208 is converged on an incident surface of an LCB (light carrying bundle) 102 by a converging lens 210, and enters into the LCB 102.
The illuminating light L that has entered the LCB 102 propagates inside the LCB 102, is emitted from a light emitting surface of the LCB 102 which is arranged at a distal end of the electronic scope 100, and is incident on an object through a distribution lens 104. Return light from the object, which is illuminated by the illuminating light L, is converged by an objective lens 106 to focus an optical image on a light receiving surface of a solid state imaging element 108.
The solid state imaging element 108 is a single CCD (charge coupled device) image sensor in accordance with a complementary color, checkered-pattern, color difference line sequential system. The solid state imaging element 108 picks up an optical image focused on the light receiving surface, and outputs an analog photographing signal. Specifically, the solid state imaging element 108 accumulates the optical image focused on respective pixels of the light receiving surface as electric charges corresponding to light amounts, generates yellow (Ye), cyan (Cy), green (G) and magenta (Mg) color signals, and sequentially outputs scan lines obtained by adding and mixing generated color signals of each two pixels arranged next to each other in a vertical direction. Incidentally, the solid state imaging element 108 needs not be limited to a CCD image sensor, but can be replaced with a CMOS (complementary metal oxide semiconductor) image sensor, or any other type of imaging device. Further, the solid state imaging element 108 may be one mounting a primary color system filter (e.g., a Bayer array filter).
Inside a connection part of the electronic scope 100, a driver signal processing circuit 110 is provided. The analog photographing signal including the scan lines described above is input to the driver signal processing circuit 110 from the solid state imaging element 108 at a field period. Incidentally, in the following description, a term “field” could be replaced with a term “frame.” In the embodiment, the field period and a frame period are 1/60 second and 1/30 second, respectively. The driver signal processing circuit 110 applies a predetermined processing to the analog photographing signal transmitted from the solid state imaging element 108, and outputs the same to an image processing circuit 220 of the processor 200.
The driver signal processing circuit 110 is also configured to access a memory 120 and retrieve intrinsic information which is intrinsic to the electronic scope 100. The intrinsic information of the electronic scope 100 recorded in the memory 120 includes, for example, the number of pixels, a sensitivity, an operable field rate, and a model number of the solid state imaging element 108. The driver signal processing circuit 110 transmits the intrinsic information retrieved from the memory 120 to the system controller 202.
The system controller 202 executes various operations based on the intrinsic information of the electronic scope 100 to generate control signals. The system controller 202 controls operations and timings of circuits in the processor 200, with use of the generated control signals, so that processes suitable to the electronic scope connected to the processor 200 will be executed.
The timing controller 204 generates a synchronizing signal in accordance with a timing control by the system controller 202. The driver signal processing circuit 110 controls and drives the solid state imaging element 108, in accordance with the synchronizing signal supplied from the timing controller 204, at a timing synchronized with the field rate of a video signal generated by the processor 200.
The image processing circuit 220 generates image data based on the photographing signal output by the electronic scope 100, under control of the system controller 202. The image processing circuit 220 generates screen data for monitor display using the generated image data, converts the screen data to a video signal having a predetermined video format, and outputs the same. The video signal is input to the monitor 900, and a color image of the object is displayed on a display screen of the monitor 900.
The driver signal processing circuit 110 is provided with a driving circuit 112 and an AFE (analog front end) 114. The driving circuit 112 generates a driving signal of the solid state imaging element 108 in accordance with the synchronizing signal. The AFE 114 applies noise reduction, signal amplification, gain compensation and A/D (analog to digital) conversion to the analog photographing signal, and outputs a digital photographing signal. Incidentally, all or a part of the processing executed by the AFE 114 according to the embodiment may be executed by the solid state imaging element 108 or the image processing circuit 220.
The image processing circuit 220 is provided with a basic processing part 220a, an output circuit 220b, a TE (tone enhancement) processing part 221, an effective pixel judging part 222, a color space converting part 223, a lesion determining part 224, a score calculating part 225, a marking processing part 226, an image memory 227, a display screen generating part 228, a memory 229 and a scene determining part 230. Processing executed by each part of the image processing circuit 220 will be described later.
As shown in
SCT will be described later.
Next, processes executed by the image processing circuit 220 will be described.
The basic processing S1 includes a process of converting the digital photographing signal output by the AFE 114 to an intensity signal Y and color difference signals Cb and Cr, a primary color separation process of separating primary colors R, G and B from the intensity signal Y and the color difference signals Cb and Cr, a clamp process of removing offset components, a defective pixel correction process of correcting a pixel value of a defective pixel with use of pixel values of surrounding pixels, a de-mosaic process (i.e., an interpolation process) of converting photographing data (i.e., RAW data) consisting of monochromatic pixel values to image data having full-color pixel values, a linear matrix process of correcting a spectral characteristic of the imaging element with use of a color matrix, a white balance process of compensating for the spectral property of the illuminating light, and a contour correction process of compensating for deterioration of a spatial frequency characteristic.
Incidentally, all or part of the processes executed by the basic processing part 220a in the embodiment may be executed by the driver signal processing circuit 110 or the solid state imaging element 108.
The normal observation image data N generated by the basic processing part 220a is transmitted to the TE processing part 221 and the scene determining part 230, and is further stored in the storage area Pn of the image memory 227.
Next, whether an operation mode is set to an image analysis mode (S2) is judged. The image analysis mode according to the embodiment of the invention is an operation mode in which color information is analyzed with respect to each pixel of the image data, it is judged whether each pixel is a pixel photographing a lesion part (hereinafter, referred to as a lesion pixel) based on the result of analysis of the color information and a predetermined judging criterion, and the lesion pixels are displayed in a discriminated manner. Kinds of lesions to be judged can be selected depending on inspection contents. In an example described below, pixels in a color range which is intrinsic to observation images of inflammation (e.g., reddening inflammation including swelling or easy bleeding) of inflammatory bowel disease (IBD) are displayed in a discriminated manner.
It is noted that the electronic endoscope apparatus 1 according to the embodiment is configured to operate in either of two operation modes: an image analysis mode; and a normal observation mode. The operation mode is switched by a user operation to an operation part 130 of the electronic scope 100 or the operation panel 214 of the processor 200. When the operation mode is set to the normal observation mode (S2: NO), process proceeds to S12.
When the image analysis mode is selected (S2: YES), the TE process S3, which is to be executed by the TE processing part 221, is executed subsequently. The TE process S3 is a process of increasing an effective resolution by performing gain adjustment to give a non-linear gain to each of primary color signals R, G and B of the normal observation image data N, thereby substantially expanding a dynamic range in the vicinity of a characteristic color range (in particular, a boundary portion thereof) of the lesion subject to judgment. Specifically, in the TE process S3, a process of applying the non-linear gain as shown in
Incidentally, by the TE process S3, the hue changes such that the inflammatory part becomes reddish, the ulcer part becomes whitish and the normal part becomes greenish. Therefore, when the tone-enhanced image data E, which is generated in the TE process S3, is displayed on the monitor 900, a lesion part (e.g., the inflammatory part or the ulcer part) can be visually recognized more easily than in a case where the normal observation image data N before the TE process S3 is applied is displayed. It is noted that the TE process S3 above is an example of a color enhancement process applicable to the present invention. Instead of the TE process S3, another type of color enhancement process capable of enhancing color quality, specifically the hue or the contrast of saturation (or chromaticity), may be employed.
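Incidentally, the gain curve itself is given in a figure that is not reproduced here; the following is only an illustrative Python (NumPy) sketch which assumes a sigmoid-shaped curve as a stand-in, to show how a steeper slope around a pivot level expands the dynamic range of each primary color signal.

```python
import numpy as np

def tone_enhance(channel, pivot=0.5, slope=10.0):
    """Illustrative non-linear gain for one primary color channel (0-255).

    The actual gain curve of the TE process S3 is not reproduced here; this
    sigmoid-shaped mapping is only an assumed stand-in showing how a steeper
    slope around a pivot level expands the dynamic range near a
    characteristic color range.
    """
    x = channel.astype(np.float32) / 255.0
    y = 1.0 / (1.0 + np.exp(-slope * (x - pivot)))        # S-shaped curve
    y0 = 1.0 / (1.0 + np.exp(slope * pivot))              # curve value at x = 0
    y1 = 1.0 / (1.0 + np.exp(-slope * (1.0 - pivot)))     # curve value at x = 1
    y = (y - y0) / (y1 - y0)                              # renormalize to 0..1
    return np.clip(y * 255.0, 0, 255).astype(np.uint8)

# applied independently to the R, G and B planes of the normal observation image data N
```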
After the TE process S3 has completed, the effective pixel judging part 222 applies the effective pixel judging process S4 to the tone-enhanced image data E. It is noted that the TE process S3 may be omitted, and the effective pixel judging process S4 may be applied to the normal observation image data N.
int(x, y)=0.3*R′(x, y)+0.59*G′(x, y)+0.11*B′(x,y) [Formula 1]
Incidentally, values of the corrected intensity int(x, y) as calculated are used in the following appropriate exposure judging process S42. Further, as seen from formula 1, the corrected intensity int(x, y) is not a simple average of the primary color signals R′(x, y), G′(x, y) and B′(x, y), but is obtained as a weighted average based on the relative spectral sensitivity characteristic of human beings (e.g., the operator).
Next, for each pixel(x, y), the appropriate exposure judging process S42 is executed, in which whether the exposure level is appropriate to image analysis is judged based on the corrected intensity int(x, y) of the tone-enhanced image data E calculated in process S41 and the primary color signals R′(x, y), G′(x, y) and B′(x, y). In the appropriate exposure judging process S42, the exposure is determined to be the appropriate exposure (S42: YES) when at least one of (or both of) the following two conditions (i.e., formulae 2 and 3) is satisfied. Incidentally, formula 2 defines an upper limit value of the corrected intensity int(x, y) (the entire light amount), while formula 3 defines a lower limit value of each of the primary color signals R′(x, y), G′(x, y) and B′(x, y).
int(x, y)<235 [Formula 2]
Max{R′(x, y),G′(x, y),B′(x, y)}>20 [Formula 3]
If, for the pixel(x, y), it is determined that formula 2 or formula 3 (or both formulae 2 and 3) is satisfied and the exposure is appropriate (S42: YES), the effective pixel judging part 222 rewrites the value of a flag F(x, y), which corresponds to the pixel(x, y), of the flag table FT stored in the memory 229 with value “1” (S43).
It is noted that the flag F(x, y) has a flag value of one of 0-2. Each flag value is defined below.
In the appropriate exposure judging process S42, if none of the formulae 2 and 3 is satisfied (or one of the formulae 2 and 3 is not satisfied), and the exposure is determined to be inappropriate (S42: NO), the effective pixel judging part 222 rewrites the value of the flag F(x, y) with “0” (S44).
In process S45, it is judged whether the process has been completed for all the pixels(x, y). Unless all the pixels(x, y) have been processed, the above processes S41-S45 are repeated.
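The effective pixel judging processes S41-S45 can be summarized by the following illustrative Python (NumPy) sketch; it assumes that both formula 2 and formula 3 must be satisfied for a pixel to be judged appropriately exposed (the description above also allows either one alone as an alternative).

```python
import numpy as np

def effective_pixel_flags(R, G, B):
    """Sketch of the effective pixel judging process S4 (flag values 0/1).

    R, G, B are the tone-enhanced primary color signals R'(x, y), G'(x, y),
    B'(x, y) as 8-bit arrays. Both formula 2 and formula 3 are required here,
    which is one of the alternatives described in the text.
    """
    # Formula 1: weighted average reflecting the human relative spectral sensitivity
    intensity = 0.3 * R + 0.59 * G + 0.11 * B
    upper_ok = intensity < 235                            # formula 2 (upper limit)
    lower_ok = np.maximum(np.maximum(R, G), B) > 20       # formula 3 (lower limit)
    flags = np.where(upper_ok & lower_ok, 1, 0)           # F(x, y) = 1 (valid) or 0
    return flags.astype(np.uint8)
```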
When the effective pixel judging process S4 has completed, the color space converting part 223 applies a color space converting process S5 to the tone-enhanced image data E. The color space converting process S5 is a process of converting pixel values of an RGB space defined by the RGB three primary colors to pixel values of an HSI (hue-saturation-intensity) space defined by the three elements of hue, saturation and intensity. Specifically, in the color space converting process S5, the primary color signals R′(x, y), G′(x, y) and B′(x, y) of each pixel(x, y) of the tone-enhanced image data E are converted to hue H(x, y), saturation S(x, y) and intensity I(x, y).
Further, data of under or over exposure pixels(x, y) has low accuracy and lowers reliability degree of the analysis results. Therefore, the color space converting process S5 is applied only to the pixels(x, y) of which the value of the flag F(x, y) is set to be one (1) (i.e., the pixels(x, y) judged to be appropriately exposed in the effective pixel judging process S4).
Decision image data J{H(x, y), S(x, y), I(x, y)} having hue H(x, y), saturation S(x, y) and intensity I(x, y) of each pixel(x, y), which are generated by the color space converting part 223, is transmitted to the lesion determining part 224.
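Incidentally, the exact RGB-to-HSI conversion formula is not reproduced in this section; the following Python (NumPy) sketch uses one common HSI definition (hue in degrees, saturation and intensity normalized to 0-1) merely to illustrate the color space converting process S5.

```python
import numpy as np

def rgb_to_hsi(R, G, B):
    """Convert primary color signals to hue, saturation and intensity.

    One common HSI definition is assumed here, since the conversion actually
    used by the color space converting part 223 is not spelled out in this
    section: hue in degrees (0-360), saturation and intensity in 0-1.
    """
    r = R.astype(np.float64) / 255.0
    g = G.astype(np.float64) / 255.0
    b = B.astype(np.float64) / 255.0
    eps = 1e-8
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - np.minimum(np.minimum(r, g), b) / (intensity + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b > g, 360.0 - theta, theta)           # hue measured from red
    return hue, saturation, intensity
```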
After completion of the color space converting process S5, the lesion determining part 224 executes a lesion determining process S6 using the decision image data J. The lesion determining process S6 is a process applied to each pixel(x, y) of the endoscopic image, in which a condition of the biological tissue photographed by the pixel is determined (i.e., it is judged whether the biological tissue is in the inflammatory condition) depending on in which of areas α and β the decision image data J is plotted (see
The inflammation determining process S62 will be described.
The scatter diagram shown in
In the inflammation determining process S62, it is determined whether decision image data J{H(x, y), S(x, y)} of each pixel(x, y) is to be plotted in area β shown in
130+δS1≤S(x, y) [Formula 4]
60+δH1≤H(x, y)≤100+δH2 [Formula 5]
When the decision image data J{H(x, y), S(x, y)} of a pixel(x, y) is plotted in area β (S62: YES), the value of the flag F(x, y) corresponding to the pixel(x, y) is rewritten with “2” (i.e., inflammation) (S63), and control proceeds to process S64. When the decision image data J{H(x, y), S(x, y)} of a pixel(x, y) is not plotted in area β (S62: NO), the flag F(x, y) is not rewritten, and control proceeds to process S64.
In process S64, it is judged whether all the pixels(x, y) have been processed. Until all the pixels(x, y) are processed, above processes S61-S64 are repeated.
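The inflammation determination S62/S63 can be illustrated by the following Python (NumPy) sketch; it assumes that hue H and saturation S are expressed on the same scale as the thresholds of formulae 4 and 5, and dS1, dH1 and dH2 stand for the correction values δS1, δH1 and δH2.

```python
import numpy as np

def judge_inflammation(H, S, flags, dS1=0, dH1=0, dH2=0):
    """Sketch of the inflammation determination S62/S63 using formulae 4 and 5.

    H and S are assumed to be on the same scale as the thresholds
    (saturation lower limit 130 + dS1, hue range 60 + dH1 .. 100 + dH2).
    Only pixels with flag 1 (appropriately exposed) are examined; pixels
    falling in area beta receive flag 2 (inflammation).
    """
    in_beta = (S >= 130 + dS1) & (H >= 60 + dH1) & (H <= 100 + dH2)
    return np.where((flags == 1) & in_beta, 2, flags)
```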
After the lesion determining process S6 has completed, a score calculating process S7 is executed. The score calculating process S7 is a process of calculating a score Sc(x, y) representing an evaluation value of severity degree of the lesion part based on the pixel values of the decision image data J. The score calculating process S7 is executed sequentially for all the pixels(x, y). Incidentally, an algorithm of the score calculation explained below is only an example, and the present invention can be applied to displayed screens of scores calculated in various algorithms, respectively.
Here, a principle of the score calculation according to the embodiment will be described briefly. It is known that the more a symptom of an inflammatory part progresses, the closer the color of the inflammatory part becomes to the color of blood, as the superficial normal mucous membrane falls off. Therefore, the degree of correlation between the color of the inflammatory part and the color of the blood (i.e., the correlation value CV, which will be described later) serves as a good index representing the severity degree of the inflammatory part. According to the present embodiment, the correlation value CV(x, y) representing the relative correlation between the decision image data J{H(x, y), S(x, y)} of each pixel(x, y) and the color of the blood (i.e., its hue and saturation) is calculated, and is used as the score Sc(x, y) representing the severity of the inflammatory part.
When the value of the flag F(x, y) is “2” (inflammation), namely, when the pixel(x, y) is the lesion pixel (S71: YES), process proceeds to S72. When the pixel(x, y) is not the lesion pixel (S71: NO), process proceeds to S79.
It is known that the saturation of blood, or of biological tissue including blood, depends on its intensity. Specifically, the saturation becomes lower as the intensity becomes higher. In S72, variation of saturation S(x, y) due to intensity I(x, y) of the decision image data J(x, y) is compensated for using formula 6, which was developed by the present inventors. By applying this compensation, it is possible to make the precision of the score calculation higher.
Next, using formula 7, a hue distance DHUE(x, y) is calculated (S73). The hue distance DHUE is a relative value of the hue of the decision image data J(x, y) using the hue Href of the blood sample data as reference.
DHUE(x, y)=H(x, y)−Href [Formula 7]
Next, a hue correlation value HCV(x, y) is determined (S74) based on the hue distance DHUE(x, y). The hue correlation value HCV(x, y) is a parameter having a strong correlation with the severity degree of an inflammation part.
The relationship between the hue distance DHUE and the hue correlation value HCV shown in
Next, a saturation distance DSAT(x, y) is calculated using formula 8. The saturation distance DSAT(x, y) is a relative value of saturation of the decision image data J(x, y) using saturation Sref of the blood sample data as reference.
DSAT(x, y)=Scorr(x, y)−Sref [Formula 8]
Next, a saturation correlation value SCV(x, y) is determined based on the saturation distance DSAT(x, y) (S76). The saturation correlation value SCV(x, y) is also a parameter having strong correlation with the severity degree of the inflammation part.
The relationship between the saturation distance DSAT and the saturation correlation value shown in
Next, by multiplying the hue correlation value HCV(x, y) by the saturation correlation value SCV(x, y), a correlation value CV(x, y) between the color of a lesion pixel(x, y) and the color of blood is obtained. It is noted that the correlation value CV(x, y) is a normalized value of which the minimum value is 0.0 and the maximum value is 1.0. Further, the correlation value CV(x, y) is divided into eleven steps with a pitch of 0.1 point.
Since the correlation value CV(x, y) serves as an appropriate index of severity degree of the inflammation, the value of the score Sc(x, y) in the score table ST is rewritten with the correlation value CV(x, y) (S78).
When a pixel(x, y) is not the lesion pixel (S71: NO), the above-described calculation of the correlation value CV(x, y) is not executed, and the value of the score Sc(x, y) in the score table ST is rewritten with “0” (S79). According to this configuration, scores Sc(x, y) can be given to all the pixels(x, y) with a smaller amount of calculations.
In process S80, it is judged whether the processing has been completed for all the pixels(x, y). Until processing has been completed for all the pixels(x, y), above-described processes S71-S80 are repeated.
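The score calculating process S7 can be illustrated by the following Python sketch. Formula 6 and the mapping curves from the hue/saturation distances to the correlation values HCV and SCV are given in figures not reproduced here, so the sketch substitutes assumed stand-ins (no intensity compensation, and a linear fall-off of the correlation with distance from the blood reference color); only the overall flow corresponds to processes S71-S79.

```python
def pixel_score(H, S, I, flag, Href, Sref):
    """Sketch of the score calculating process S7 for one pixel.

    The intensity compensation of formula 6 and the HCV/SCV mapping curves
    are not reproduced here, so assumed stand-ins are used: no compensation,
    and linear fall-off with assumed widths (adjust to the scales of H and S
    actually used).
    """
    if flag != 2:                              # not a lesion pixel: score 0
        return 0.0
    S_corr = S                                 # stand-in for the formula 6 compensation
    d_hue = H - Href                           # formula 7: hue distance
    d_sat = S_corr - Sref                      # formula 8: saturation distance
    hcv = max(0.0, 1.0 - abs(d_hue) / 30.0)    # assumed fall-off width for HCV
    scv = max(0.0, 1.0 - abs(d_sat) / 0.5)     # assumed fall-off width for SCV
    cv = hcv * scv                             # correlation value CV in 0.0 .. 1.0
    return round(cv, 1)                        # quantized into eleven 0.1 steps
```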
After completion of the score calculating process S7 (or in parallel with the series of processes from the TE process S3 to the score calculating process S7), a scene determining process S8 is executed by the scene determining part 230.
The scene determining process S8 according to the present embodiment will be generally described below. Generally, endoscopic inspections are roughly broken down into two steps (i.e., two kinds of inspections). A first step is a "screening inspection" to search for portions suspected to be lesion parts by observing an inspection object (e.g., inner walls of an esophagus, a stomach and a duodenum in case of an upper gastrointestinal endoscopy) throughout. A second step is a "thorough inspection" in which the suspected portions found in the screening inspection are thoroughly observed to judge whether they are lesion tissues or normal tissues, and, when they are lesion tissues, a kind and the severity degree thereof are judged.
Incidentally, Table 1 shows features of the endoscopic images photographed when the screening inspection is executed (see
In an ordinary screening inspection, a tip 101a (see
Therefore, in the endoscopic image photographed when the screening inspection is carried out, a dark inner wall of the gastrointestinal tract spaced from the tip 101a of the electronic scope 100 is shown at a central part of the image, and brightly illuminated inner wall of the gastrointestinal tract close to the tip 101a of the electronic scope 100 is shown at a peripheral part of the image as shown in
Further, since the image photographed in the screening inspection shows everything from a brightly illuminated part close to the tip 101a to a dark and distant part which almost no illuminating light reaches, the change of darkness/brightness (i.e., intensity) within the image is large.
Further, since the screening inspection is carried out while moving the tip 101a of the electronic scope 100 as described above, movement of the endoscopic image photographed during the screening inspection is fast.
The thorough inspection is carried out by aiming the tip 101a of the electronic scope 100 close to a particular part of the inner wall of the gastrointestinal tract (i.e., a part which is suspected to be a lesion part in the screening inspection). Therefore, in the endoscopic image photographed when the thorough inspection is carried out, a brightly illuminated wall of the gastrointestinal tract close to the tip 101a of the electronic scope 100 is shown at a central part (or substantially entire part), while a dark inner wall of the gastrointestinal tract distant from the tip 101a is shown in a peripheral part of the image. Accordingly, the dark part in the endoscopic image has a non-circular shape.
It is noted that the thorough inspection is carried out such that movement of the tip 101a of the electronic scope 100 is as small as possible in order to observe minute shape and texture of the object. Therefore, movement of the endoscopic image photographed during the thorough inspection is slow and gentle.
Since a dark part distant from the tip 101a of the electronic scope 100 is substantially not photographed in the image of the thorough inspection, the change of brightness/darkness within the image is gentle.
The scene determining process S8 according to the present embodiment is a process of determining a status of inspection (i.e., whether the screening inspection is being executed or the thorough inspection is being executed) based on the features (in particular, the movement of the image) described in Table 1.
In the motion picture analyzing process S81, firstly, a low-resolution data making process S811 is executed to convert the normal observation image NP to a low-resolution normal observation image NPr by reducing the resolution (i.e., the number of pixels) of the normal observation image NP to 1/n² thereof (n being an integer). This process is aimed at reducing the calculation amount required in the following steps, and according to the present embodiment, the resolution of the normal observation image NP is reduced to 1/16 thereof. Specifically, the normal observation image NP is divided into blocks each having n pixels × n pixels (e.g., four pixels by four pixels), and the n² pixels (e.g., 16 pixels) in each block are integrated into a new single pixel. In that instance, a representative value of the pixel values N(x, y) is calculated for each block (e.g., an average value, a median value or a most frequent value of the pixel values N(x, y) of the block), and the representative value is used as the pixel value of the low-resolution normal observation image NPr.
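A minimal Python (NumPy) sketch of the low-resolution data making process S811, using the average value as the representative value of each n × n block, is given below (a median or most frequent value could be used instead).

```python
import numpy as np

def reduce_resolution(img, n=4):
    """Sketch of the low-resolution data making process S811 (per channel).

    The single-channel image is divided into n x n pixel blocks and each
    block is replaced by a representative value (the mean is used here),
    reducing the pixel count to 1/n^2.
    """
    h, w = img.shape[:2]
    h, w = h - h % n, w - w % n               # crop to a multiple of the block size
    blocks = img[:h, :w].reshape(h // n, n, w // n, n)
    return blocks.mean(axis=(1, 3))           # representative value per block
```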
Next, based on the low-resolution normal observation image NPr of the latest frame and that of the previous frame, a velocity vector field {Vx(x, y), Vy(x, y)} (hereinafter, simply referred to as a velocity field (Vx, Vy)) is calculated (S812). This velocity field is an optical flow calculated in accordance with, for example, a gradient method or the Lucas-Kanade method.
Next, with use of formula 9, an image velocity PV, which is a root mean square of the velocity field (Vx, Vy), is calculated. The image velocity PV is a parameter indicative of the magnitude of the average velocity of the entire image.
where, Nv: the number of elements of the velocity field (i.e., the number of pixels of the low-resolution normal observation image NPr).
Next, an image velocity changing rate PV′ (i.e., a changing amount of the image velocity PV per unit time), which is a time differential of the image velocity PV, is calculated. Further, a smoothing process is applied to the image velocity changing rate PV′. Specifically, a representative value (e.g., an average value, a median value or a most frequent value) of the image velocity changing rates PV′ for multiple low-resolution normal observation images NPr photographed within a latest predetermined time period (e.g., one second) is calculated, and the representative value is used as the image velocity changing rate PV′. It is noted that a value obtained simply by applying the time differential to the image velocity PV includes a large amount of high-frequency noise components (e.g., movement of the image due to oscillation or the like of the electronic scope 100 which is not intended by the operator). Therefore, if the smoothing process is not applied and the time differential value of the image velocity PV is used as is in the inspection type determining process S83, the determination result becomes unstable, which causes frequent changes of the display modes.
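The velocity field calculation S812, the image velocity PV and the smoothed image velocity changing rate PV′ can be sketched as follows in Python (OpenCV/NumPy). The Farneback dense optical flow is used here as one possible gradient-method implementation, and the median is used as the representative value for the smoothing; both are assumptions rather than requirements of the embodiment.

```python
import cv2
import numpy as np

def image_velocity(prev_gray, curr_gray):
    """Sketch of S812 and the image velocity PV (formula 9, RMS over Nv pixels).

    prev_gray and curr_gray are consecutive low-resolution frames (8-bit,
    single channel). Farneback's dense optical flow is one possible
    gradient-based choice.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    vx, vy = flow[..., 0], flow[..., 1]
    return float(np.sqrt(np.mean(vx ** 2 + vy ** 2)))   # root mean square velocity

def smoothed_change_rate(pv_history, dt):
    """Time differential of PV smoothed over the latest frames (e.g., one second)."""
    pv = np.asarray(pv_history, dtype=np.float64)
    if pv.size < 2:
        return 0.0
    rates = np.diff(pv) / dt                  # raw PV' per frame interval
    return float(np.median(rates))            # representative (median) value
```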
After the moving image analyzing process S81, an intensity gradient calculating process S82 to calculate an intensity gradient (i.e., the maximum value LDmax of a density LDφ which will be described later) in the normal observation image NP is executed.
In the intensity gradient calculating process S82, an intensity index Lu0(x, y) of each pixel(x, y) of the normal observation image NP is firstly calculated with use of formula 10, and intensity image data Lu0 (i.e., image data representing the intensity image LP0), which has the intensity indexes Lu0(x, y) as elements (pixel values), is generated (S820). It is noted that, according to the present embodiment, the intensity indexes Lu0(x, y) are calculated as a simple average of the values of the primary color signals R(x, y), G(x, y) and B(x, y) of the respective pixels of the normal observation image data N. However, the intensity indexes Lu0(x, y) may be calculated by a weighted average corresponding to a spectral sensitivity characteristic of the solid state imaging element 108, or by a weighted average corresponding to the relative spectral sensitivity characteristic as treated in formula 1. Alternatively, the intensity indexes Lu0(x, y) may be calculated not as the average of the primary color signals, but as a sum of the same.
Lu0(x, y)={R(x, y)+G(x, y)+B(x, y)}/3 [Formula 10]
Next, a resolution decreasing process S821 is executed to convert the intensity image data Lu0 to intensity image data Lu1 (i.e., image data representing an intensity image LP1) having intensity indexes Lu1(x, y) as elements (i.e., pixel values) by decreasing the resolution (the number of pixels) of the intensity image data Lu0 to 1/n². The resolution decreasing process S821 reduces the amount of calculations required in the following steps, and further, simplifies the intensity image LP0.
Next, a blurring process S822 is executed. In the blurring process S822, for each pixel, a representative value (e.g., an average value, a median value or a most frequent value) of the intensity indexes Lu1(x, y) of the pixels included in a predetermined area (e.g., 3 pixels by 3 pixels) centered on the pixel is calculated, and intensity image data Lu2 (i.e., image data representing the intensity image LP2) having the representative values (i.e., intensity indexes Lu2(x, y)) as its elements (i.e., pixel values) is generated. The blurring process S822 further simplifies the intensity image LP1.
Next, a resolution increasing process S823 is executed to increase the resolution (the number of pixels) of the intensity image data Lu2 by a factor of n² (n being an integer), thereby generating intensity image data Lu3 (i.e., image data representing the intensity image LP3) whose resolution is returned to that of the original intensity image data Lu0. The resolution increasing process S823 is executed by dividing each pixel into n pixels by n pixels. By the resolution increasing process, the resolution (i.e., the number of pixels) increases, but the image itself does not change.
Next, intensity image data Lu4 (i.e., image data representing an intensity image LP4) is generated by executing a gradation decreasing process S824, which decreases the gradation of the pixel values, with respect to the intensity image data Lu3. In the gradation decreasing process S824, for example, the gradation is decreased from 256 steps to 8 steps or 16 steps.
By the resolution decreasing process S821, the blurring process S822, the resolution increasing process S823 and the gradation decreasing process S824, simplification of the intensity image LP0 is executed effectively. Incidentally, instead of executing the above processes, the intensity image LP0 can be similarly simplified by applying a Fourier transformation to the intensity image data Lu0 to eliminate high-frequency components and then applying an inverse Fourier transformation thereto. Further, substantially similar effects can be obtained by executing the blurring process S822 multiple times.
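The generation and simplification of the intensity image (S820-S824) can be sketched as follows in Python (OpenCV/NumPy); the 3 × 3 mean filter and the reduction to 8 gradation steps are examples taken from the description above.

```python
import cv2
import numpy as np

def simplified_intensity_image(rgb, n=4, levels=8):
    """Sketch of S820-S824: build and simplify the intensity image.

    Steps: intensity index Lu0 as the mean of R, G and B (formula 10),
    resolution reduction to 1/n^2, blurring over a 3x3 neighborhood,
    resolution restoration, and gradation reduction (e.g., 256 -> 8 levels).
    """
    lu0 = rgb.astype(np.float32).mean(axis=2)                       # formula 10
    h, w = lu0.shape
    lu1 = cv2.resize(lu0, (w // n, h // n), interpolation=cv2.INTER_AREA)
    lu2 = cv2.blur(lu1, (3, 3))                                     # 3x3 mean as the representative value
    lu3 = cv2.resize(lu2, (w, h), interpolation=cv2.INTER_NEAREST)  # back to the original resolution
    step = 256 // levels
    lu4 = (lu3 // step) * step                                      # gradation decreasing
    return lu4.astype(np.uint8)
```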
Next, a vector differential operating process S825 is applied with respect to the intensity image data Lu4. Specifically, the gradient of the intensity indexes Lu4(x, y) is calculated.
As shown in
Next, using the contour line image data CD, a circularity Vround of the darkest image area (i.e., a lowest gradation area RL) in the intensity image LP4 is calculated with use of formula 11 (S826).
where,
Next, using the contour line image data CD, the centroid GP of the lowest gradation area RL is calculated with use of Formula 12 (S827).
where,
Next, using the contour line image data CD, densities LD of the contour lines CL in eight directions (0, π/4, π/2, 3π/4, π, 5π/4, 3π/2, 7π/4) starting from the centroid GP are calculated (S828). The density LD of the contour lines CL is defined as the number of contour lines CL per unit length in a radial direction with respect to the centroid GP.
In the intensity image LP4 shown in
In the calculation of the density LD of the contour lines CL, firstly, intersections Q0, Qπ/4, Qπ/2, Q3π/4, Qπ, Q5π/4, Q3π/2, Q7π/4 of radial lines (broken lines) respectively extending in eight directions from the centroid GP with the outermost contour line CL3 are detected. Then, distances d0, dπ/4, dπ/2, d3π/4, dπ, d5π/4, d3π/2, d7π/4 (not shown) between the centroid GP and the respective intersections Q0, Qπ/4, Qπ/2, Q3π/4, Qπ, Q5π/4, Q3π/2, Q7π/4 are calculated with use of formula 13.
where,
Next, the maximum value LDmax of the densities LDφ calculated in the density calculation S828 is obtained. This value is the intensity gradient of the intensity image LP4.
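Since formulae 11-13 are given in figures not reproduced here, the following Python (OpenCV/NumPy) sketch only illustrates the kind of quantities computed in S826-S828; it assumes the common circularity definition 4πS/L² (S: area, L: perimeter), the arithmetic mean of pixel coordinates for the centroid GP, and the number of gradation steps crossed per unit length along each of the eight directions as an approximation of the contour line density LDφ.

```python
import cv2
import numpy as np

def dark_area_features(lu4):
    """Illustrative circularity, centroid and contour density of the lowest gradation area."""
    lowest = (lu4 == lu4.min()).astype(np.uint8)
    found = cv2.findContours(lowest, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contours = found[-2]                                   # works for OpenCV 3.x and 4.x
    region = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(region)
    perim = cv2.arcLength(region, True)
    v_round = 4.0 * np.pi * area / (perim ** 2 + 1e-8)     # assumed circularity (formula 11)
    ys, xs = np.nonzero(lowest)
    gp = (xs.mean(), ys.mean())                            # assumed centroid (formula 12)

    h, w = lu4.shape
    densities = []
    for phi in np.arange(8) * np.pi / 4.0:                 # eight radial directions
        x, y = gp
        levels = set()
        dist = 0.0
        while 0 <= x < w and 0 <= y < h:
            levels.add(int(lu4[int(y), int(x)]))           # gradation levels crossed
            x += np.cos(phi)
            y += np.sin(phi)
            dist += 1.0
        densities.append((len(levels) - 1) / dist)         # approximate contour lines per unit length
    return v_round, gp, max(densities)                     # max(densities) plays the role of LDmax
```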
In the inspection type determining process S83, firstly, an image velocity changing rate determining process S831 is executed to determine whether or not the image velocity changing rate PV′ is equal to or less than a predetermined threshold value ThPV′. When a violent movement of the image occurs such that the image velocity changing rate PV′ exceeds the threshold value ThPV′, hand shaking appears in the normal observation image NP, and the marking process (S10, S11), which will be described later, cannot be executed accurately. Further, since the movement of the image is fast, it is difficult for the operator to accurately recognize the marking information. Therefore, when the image velocity changing rate PV′ exceeds the threshold value ThPV′ (S831: NO), the process immediately exits from the inspection type determining process S83 and proceeds to the display screen generating process S12.
In the centroid determination S832, it is determined whether the centroid GP of the lowest gradation area RL is located within a predetermined central area of the contour line image CP. When the centroid GP is located within the predetermined area (S832: YES), the process proceeds to the next circularity determination S833. When the centroid GP is not within the predetermined area (S832: NO), the process proceeds to a density determination S834 without executing the circularity determination S833.
(Circularity Determination: S833)
In a circularity determination S833, it is determined whether the circularity Vround is greater than a predetermined threshold value (e.g., 0.6) or not. When the circularity Vround is greater than the threshold value of 0.6 (S833: YES), the type of inspection is determined to be the screening inspection (S837). When the circularity Vround is equal to or less than the threshold value of 0.6 (S833: NO), a density determination S834 is executed subsequently.
In the density determination S834, it is judged whether the density LD is greater than a predetermined threshold value ThLD. When the density LD is equal to or less than the threshold value ThLD (S834: NO), the inspection is determined to be the thorough inspection (S836). When the density LD is greater than the threshold value ThLD (S834: YES), an image velocity determination S835 is executed subsequently.
In the image velocity determination S835, it is determined whether the image velocity PV is greater than a predetermined threshold value ThPV. When the image velocity PV is greater than the threshold value ThPV (S835: YES), the inspection is determined to be the screening inspection (S837). When the image velocity PV is equal to or less than the threshold value ThPV (S835: NO), the inspection is determined to be the thorough inspection (S836).
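The determination flow S831-S837 can be summarized by the following Python sketch; the maximum density LDmax (i.e., the intensity gradient) is used here as the density compared with ThLD, and the result of the centroid determination S832 is passed in as a Boolean value.

```python
def determine_inspection_type(pv, pv_rate, v_round, gp_centered, ld_max,
                              th_pv, th_pv_rate, th_ld, th_round=0.6):
    """Sketch of the inspection type determining process S83 (S831-S837).

    Returns "screening", "thorough", or None when the image velocity change
    rate exceeds its threshold and marking is skipped. gp_centered is the
    result of the centroid determination S832 (True when the centroid GP
    lies within the predetermined central area).
    """
    if pv_rate > th_pv_rate:                     # S831: image too unstable to mark
        return None
    if gp_centered and v_round > th_round:       # S832 and S833
        return "screening"
    if ld_max <= th_ld:                          # S834: gentle intensity gradient
        return "thorough"
    return "screening" if pv > th_pv else "thorough"   # S835
```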
Incidentally, the image velocity PV and the image velocity changing rate PV′ are parameters regarding movement of the tip of the electronic scope 100 while an inspection is being carried out. Further, the intensity gradient, the circularity Vround and the position of the centroid of the lowest gradation area RL are parameters determined by the attitude of the tip of the electronic scope 100 with respect to the inner wall of the gastrointestinal tract which is the object. That is, the inspection type determination S83 according to the embodiment determines the type of the endoscopic inspection based on the movement and attitude of the tip of the electronic scope 100, which are estimated from the endoscopic image.
Next, based on the determination result of the type of inspection in the scene determining process S8, the type of the marking process executed by the marking processing part 226 is determined (S9). When the type of the inspection is determined to be the thorough inspection (S9: YES), a fine marking process S10 is executed. When the type of the inspection is determined to be the screening inspection (S9: NO), the simple marking process S11 is executed.
In the fine marking process S10, a color map image CMP, in which a distribution of severity degrees in the normal observation image NP is indicated by color, is generated as marking image data to be overlaid on the normal observation image NP. The color map image CMP generated in the fine marking process S10 has display colors Col(x, y) which are determined in accordance with the scores Sc(x, y) of the corresponding pixels(x, y) of the normal observation image NP.
In the fine marking process S10, firstly, the display color table DCT stored in the memory 229 is referred to, and the display colors Col(x, y) to be applied to the respective pixels are determined based on the scores Sc(x, y). Then, the color map data CM having the display colors Col(x, y) as pixel values is generated, and stored in the storage area PC of the image memory 227. An example of a color map image CMP generated by the fine marking process S10 is shown in
It is noted that the display color table DCT is a numerical value table defining a relationship between the score Sc and the display colors (i.e., color codes) of the color map image CMP. An example of the display color table DCT is shown in Table 2. Incidentally, regarding the display colors, different colors are set for each of the eleven steps of the score Sc. To the pixels(x, y) of which the score Sc(x, y) is zero (normal tissues), a value indicating colorless and transparent (i.e., a null value) is assigned. Therefore, the pixels of the normal tissues are not colored by the fine marking process S10. Further, designation of the color to be applied to each pixel(x, y) needs not be limited to designation by RGB, but may be designation by other color expressions (e.g., hue and/or saturation). Further, as shown in
The simple marking process S11 is a process similar to the fine marking process S10, except that a simplified display color table DCT (Table 3) is used. Specifically, in the simple marking process S11, to the pixels of which the score Sc is less than a predetermined value (e.g., 0.6) and the severity degree is low, a vacant value (i.e., a null value) representing a colorless and transparent color Col(x, y) is assigned, while to the pixels of which the score Sc is greater than the predetermined value and the severity degree is high, a single display color (e.g., yellow) Col(x, y) is assigned. An example of the marking image MP when the simple marking process S11 is executed is shown in
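A minimal Python (NumPy) sketch of the simple marking process S11 is given below; it assumes an RGBA overlay in which pixels with a score at or below the predetermined value remain transparent and pixels above it receive a single display color (yellow, as in the example above).

```python
import numpy as np

def simple_marking(scores, threshold=0.6, color=(255, 255, 0)):
    """Sketch of the simple marking process S11.

    Pixels whose score Sc(x, y) does not exceed the threshold stay colorless
    and transparent (alpha 0), while pixels above it receive one single
    display color (yellow is assumed here).
    """
    h, w = scores.shape
    overlay = np.zeros((h, w, 4), np.uint8)     # RGBA color map image
    mask = scores > threshold
    overlay[mask, :3] = color                   # single display color
    overlay[mask, 3] = 255                      # opaque where the severity degree is high
    return overlay
```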
By employing the simple marking process S11, the calculation amount necessary for the process can be largely reduced. Therefore, even for an image of which the moving speed is high, the image processing can follow the frame rate, and it becomes possible to apply the marks accurately at the lesion part. Further, since marks of a simple configuration (e.g., a single color) are applied at a part where the severity degree is high, the marks are visually well recognizable, and the operator can grasp the part of which the severity degree is high even in an image of which the moving speed is high.
When the fine marking process S10 or the simple marking process S11 has completed, a display screen generating process S12 is executed subsequently. The display screen generating process S12 is to generate display screen data to display a screen on the monitor 900 using various pieces of image data stored in the image memory 227, and is executed by the display screen generating part 228 of the image processing circuit 220. The display screen generating part 228 is capable of generating a plurality of kinds of display screen data in accordance with control of the system controller 202. To the display screen data as generated, processing such as gamma compensation is applied by the output circuit 220b, and the data is then converted into a video signal having a predetermined video format and output to the monitor 900 (outputting process S13).
In the display screen generating process S12, the display screen generating part 228 retrieves the normal observation image data N (or retrieves the tone-enhanced image data E from the storage area group Pe), and displays the normal observation image NP (or the tone-enhanced image EP) in the normal image display area 324. Further, the display screen generating part 228 retrieves the marking image data M from a storage area group Pm, and displays the marking image MP in the analysis image display area 325. Further, in the date/time display area 321 and the basic information display area 322, information supplied from the system controller 202 is displayed.
The operator carries out the endoscopic observation while watching the analysis mode observation screen 320. Specifically, the operator carries out the endoscopic observation while watching the normal observation image NP (or the tone-enhanced image EP) displayed in the normal image display area 324, with reference to the marking image MP displayed in the analysis image display area 325. By observing particularly carefully a portion where a marking is applied in the marking image MP, the operator can carry out an accurate medical examination without overlooking a lesion part.
After completion of the display screen generating process S12 and outputting process S13, it is judged whether the endoscopic observation is to be continued (S14). Until a user operation to instruct end of the endoscopic observation or stoppage of operating the electronic endoscope apparatus 1 is carried out (S14: NO), the processes S1-S13 are repeated.
It is noted that the above embodiments are examples where the present invention is applied to the electronic endoscope systems. However, the present invention needs not be limited to such a configuration. For example, the present invention can be applied to an image reproducing device configured to reproduce endoscopic observation images photographed by the electronic endoscope apparatus. The present invention can also be applied to an observation image other than the endoscopic images (e.g., observation images taken with ordinary video cameras, or observation images inside a human body during operations).
According to the embodiment, a configuration to determine the type of the endoscopic inspection based on the movement and attitude of the tip of the electronic scope 100, which are assumed from the endoscopic image, is employed. However, the present invention needs not be limited to such a configuration. For example, an insertion shape detecting function to detect a shape and/or position of an insertion part of the endoscope during an inspection (an example of which is disclosed in Japanese Patent Provisional Publication No. 2013-85744) may be provided to the electronic scope, and the type of the endoscopic inspection may be judged based on the movement and the attitude of the tip of the electronic scope which are detection results by the insertion shape detection function.
The foregoing is the description of the illustrative embodiments. The embodiments of the present invention are not limited to those described above, and various modifications can be made within technical philosophy of the present invention. For example, appropriate combinations of illustratively indicated embodiments in the specification are also included in embodiments of the present invention.