The present disclosure relates to an image processing apparatus, a capsule endoscope system, a method of operating an image processing apparatus, and a computer-readable storage medium.
In the field of endoscopes, a capsule endoscope that is introduced into a subject to capture an image has been developed. The capsule endoscope has an imaging function and a wireless communication function inside a capsule-shaped casing formed to have a size that enables introduction into the gastrointestinal tract of a subject. The capsule endoscope is swallowed by the subject and thereafter captures an image while moving inside the gastrointestinal tract by a peristaltic motion or the like, and sequentially generates and wirelessly transmits an image (hereinafter, also referred to as in-vivo image) of an internal portion of an organ of the subject (see, for example, JP 2012-228346 A). The wirelessly transmitted image is received by a receiving device provided outside the subject. Further, the received image is fetched to an image processing apparatus such as a workstation and subjected to predetermined image processing. As a result, the in-vivo image of the subject can be displayed as a still image or a moving image on a display device connected to the image processing apparatus.
When searching for a lesion such as a bleeding source using the capsule endoscope, if the lesion cannot be found in a single examination, the capsule endoscope may be introduced into the same subject multiple times for examination.
In some embodiments, provided is an image processing apparatus that performs image processing on an image captured by a capsule endoscope introduced into a subject. The image processing apparatus includes: an identification circuit configured to calculate a characteristic of each of a plurality of image groups captured when the capsule endoscope is introduced into the subject multiple times, each image group being a group of images that are captured each time the capsule endoscope is introduced into the subject, and identify, based on the calculated characteristic, a first region or a second region in each image group, the first region being a region that does not include an image of the subject captured by the capsule endoscope, the second region being a region that is regarded as not including the captured image of the subject; and a first specifying circuit configured to specify at least one section of the subject in the plurality of image groups, the at least one section including the first region or the second region.
In some embodiments, a capsule endoscope system includes: the image processing apparatus; and the capsule endoscope.
In some embodiments, provided is a method of operating an image processing apparatus that performs image processing on an image captured by a capsule endoscope introduced into a subject. The method includes: calculating, by an identification circuit, a characteristic of each of a plurality of image groups captured when the capsule endoscope is introduced into the subject multiple times, each image group being a group of images that are captured each time the capsule endoscope is introduced into the subject; identifying, based on the calculated characteristic, a first region or a second region in each of the plurality of image groups, the first region being a region that does not include an image of the subject captured by the capsule endoscope, the second region being a region that is regarded as not including the captured image of the subject; and specifying, by a first specifying circuit, at least one section of the subject in each of the plurality of image groups, the at least one section including the first region or the second region.
In some embodiments, provided is a non-transitory computer-readable recording medium on which an executable program is recorded. The program instructs an image processing apparatus that performs image processing on an image captured by a capsule endoscope introduced into a subject to execute: calculating, by an identification circuit, a characteristic of each of a plurality of image groups captured when the capsule endoscope is introduced into the subject multiple times, each image group being a group of images that are captured each time the capsule endoscope is introduced into the subject; identifying, based on the calculated characteristic, a first region or a second region in each of the plurality of image groups, the first region being a region that does not include an image of the subject captured by the capsule endoscope, the second region being a region that is regarded as not including the captured image of the subject; and specifying, by a first specifying circuit, at least one section of the subject in each of the plurality of image groups, the at least one section including the first region or the second region.
The above and other features, advantages and technical and industrial significance of this disclosure will be better understood by reading the following detailed description of presently preferred embodiments of the disclosure, when considered in connection with the accompanying drawings.
Hereinafter, embodiments will be described with reference to the accompanying drawings.
The capsule endoscope system includes: a capsule endoscope 2 that is introduced into a subject H such as a patient, generates an image obtained by capturing the inside of the subject H, and wirelessly transmits the generated image; a receiving device 3 that receives the image wirelessly transmitted from the capsule endoscope 2 via a receiving antenna unit 4 attached to the subject H; an image processing apparatus 5 that acquires the image from the receiving device 3, performs predetermined image processing on the acquired image, and displays the processed image; and a display device 6 that displays the image of the inside of the subject H, or the like, in response to an input from the image processing apparatus 5.
The capsule endoscope 2 includes an image sensor such as a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor. The capsule endoscope 2 is a capsule type endoscope device formed to have a size that enables introduction into an organ of the subject H. The capsule endoscope 2 is introduced into the organ of the subject H by oral insertion or the like, and sequentially captures in-vivo images at a predetermined frame rate while moving inside the organ by a peristaltic motion or the like. The captured images are then sequentially transmitted via an embedded antenna or the like.
The receiving antenna unit 4 includes a plurality of (for example, eight) receiving antennas 4a to 4h.
The receiving device 3 receives the image wirelessly transmitted from the capsule endoscope 2 via these receiving antennas 4a to 4h, performs predetermined processing on the received image, and stores the image and information regarding the image in an embedded memory. The receiving device 3 may include a display unit that displays a state of reception of the image wirelessly transmitted from the capsule endoscope 2, and an input unit such as an operation button to operate the receiving device 3. Further, the receiving device 3 includes a general-purpose processor such as a central processing unit (CPU), or a special-purpose processor such as various arithmetic operation circuits that perform specific functions, such as an application specific integrated circuit (ASIC) and a field programmable gate array (FPGA).
The image processing apparatus 5 performs image processing on each of a plurality of image groups captured by introducing the capsule endoscope 2 into the same subject H multiple times. Each image group is a group of in-vivo images of the subject H that are arranged in time series, the in-vivo images being captured by the capsule endoscope 2 introduced into the subject H until the capsule endoscope 2 is pulled out of the body of the subject H. The image processing apparatus 5 is implemented by a workstation or personal computer including a general-purpose processor such as a CPU, or a special-purpose processor such as various arithmetic operation circuits that execute a certain function, such as an ASIC and an FPGA. The image processing apparatus 5 fetches the image and the information regarding the image, the image and the information being stored in the memory of the receiving device 3, performs predetermined image processing, and displays the image on the screen. Note that
The image acquisition unit 51 acquires an image to be processed from the outside. Specifically, the image acquisition unit 51 fetches, under the control of the control unit 57, an image (an image group including a plurality of in-vivo images captured (acquired) in time series by the capsule endoscope 2) stored in the receiving device 3 set in the cradle 3a, via the cradle 3a connected to the USB port. Further, the image acquisition unit 51 also causes the storage unit 52 to store the fetched image group via the control unit 57.
The storage unit 52 is implemented by various IC memories such as a flash memory, a read only memory (ROM), and a random access memory (RAM), a hard disk that is built-in or connected by a data communication terminal, or the like. The storage unit 52 stores the image group transferred from the image acquisition unit 51 via the control unit 57. Further, the storage unit 52 stores various programs (including an image processing program) executed by the control unit 57, information required for processing performed by the control unit 57, or the like.
The input unit 53 is implemented with input devices such as a keyboard, a mouse, a touch panel, and various switches, and outputs, to the control unit 57, input signals generated in response to an external operation on these input devices.
The identification unit 54 calculates a characteristic of each of the plurality of image groups to identify, on the basis of the characteristic, a region of the subject H that is not captured by the capsule endoscope 2 in each of the plurality of image groups. Specifically, the identification unit 54 includes a first calculation unit 541 that calculates, as a characteristic, the amount of a specific region in each image of each of the plurality of image groups, and a first identification unit 542 that identifies the region of the subject H that is not captured by the capsule endoscope 2 on the basis of the amount of the specific region calculated by the first calculation unit 541. The specific region is a region including a captured image of, for example, a bubble or residue in the gastrointestinal tract, or noise caused by a poor state of communication between the capsule endoscope 2 and the receiving device 3. Further, the specific region may include a region including a captured image of bile. Further, the identification unit 54 may identify a blurred image caused by fast movement of the capsule endoscope 2. Alternatively, a configuration may be adopted in which the user can select, by setting, a specific target to be included in the specific region. The identification unit 54 includes a general-purpose processor such as a CPU or a special-purpose processor such as various arithmetic operation circuits that perform specific functions, such as an ASIC and an FPGA.
Note that the specific region can be detected by applying a known method. For example, as disclosed in JP 2007-313119 A, it is allowable to detect a bubble region by detecting a match between a bubble model to be set on the basis of a feature of a bubble image, such as an arc-shaped protruding edge due to illumination reflection, existing at a contour portion of a bubble or inside the bubble, and an edge extracted from an intraluminal image. Alternatively, as disclosed in JP 2012-143340 A, it is allowable to detect a residue candidate region, which is assumed to be a non-mucosa region, on the basis of color feature data based on each pixel value, and to discern whether or not the residue candidate region is a mucosa region on the basis of a positional relationship between the residue candidate region and the edge extracted from the intraluminal image.
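As an illustrative, non-limiting sketch of how the amount of the specific region may be calculated for a single image, the following Python code flags residue-like, bubble-highlight-like, and noise-like pixels with simple color and intensity rules and counts them. The thresholds, the color rules, and the function name are assumptions made for illustration only; they do not reproduce the detection methods of JP 2007-313119 A or JP 2012-143340 A.

```python
import numpy as np

# Hypothetical per-image specific-region estimator (illustrative only).
# `image` is an RGB frame given as a (H, W, 3) uint8 array.
def specific_region_amount(image: np.ndarray) -> int:
    r = image[..., 0].astype(np.float32)
    g = image[..., 1].astype(np.float32)
    b = image[..., 2].astype(np.float32)

    # Crude residue cue: yellowish/greenish pixels (weak red dominance).
    # A real system would use learned color feature data instead.
    residue_mask = (g > r * 0.9) & (g > b)

    # Crude bubble cue: near-saturated pixels from specular reflection
    # at bubble contours.
    highlight_mask = (r > 240) & (g > 240) & (b > 240)

    # Crude noise cue: nearly black pixels (e.g., dropped blocks caused by
    # poor communication between the capsule endoscope and the receiver).
    noise_mask = (r + g + b) < 30

    specific_mask = residue_mask | highlight_mask | noise_mask
    return int(specific_mask.sum())  # amount = number of specific pixels
```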
The first specifying unit 55 specifies a section of the subject H in which the regions identified by the identification unit 54 in the respective image groups overlap each other between the plurality of image groups. However, the first specifying unit 55 may specify at least one section of the subject H in which the region is included in one of the plurality of image groups. Specifically, the first specifying unit 55 may specify a section of the subject H in which the region identified by the identification unit 54 is included in any one of the plurality of image groups. Further, the first specifying unit 55 may specify a section of the subject H in which the proportion of the image groups whose regions identified by the identification unit 54 overlap each other is equal to or more than a predetermined value. The first specifying unit 55 includes a general-purpose processor such as a CPU or a special-purpose processor such as various arithmetic operation circuits that perform specific functions, such as an ASIC and an FPGA.
The generation unit 56 generates information regarding a position of the section specified by the first specifying unit 55. The information generated by the generation unit 56 is, for example, a distance from a reference position of the subject H to the section. However, the generation unit 56 may generate information regarding a position of the section specified by the first specifying unit 55 for at least one section. Further, the information generated by the generation unit 56 may include a distance from the reference position of the subject H to a position where the section ends, a distance from the reference position of the subject H to an intermediate position of the section, the length of the section, and the like. The generation unit 56 includes a general-purpose processor such as a CPU or a special-purpose processor such as various arithmetic operation circuits that perform specific functions, such as an ASIC and an FPGA.
The control unit 57 reads a program (including the image processing program) stored in the storage unit 52 and controls an overall operation of the image processing apparatus 5 according to the program. The control unit 57 includes a general-purpose processor such as a CPU or a special-purpose processor such as various arithmetic operation circuits that perform specific functions, such as an ASIC and an FPGA. Alternatively, the control unit 57, the identification unit 54, the first specifying unit 55, the generation unit 56, the display controller 58, and the like may be implemented by a single CPU or the like.
The display controller 58 controls display performed by the display device 6 under the control of the control unit 57. Specifically, the display controller 58 controls display performed by the display device 6 by generating and outputting a video signal. The display controller 58 causes the display device 6 to display the information generated by the generation unit 56. The display controller 58 includes a general-purpose processor such as a CPU or a special-purpose processor such as various arithmetic operation circuits that perform specific functions, such as an ASIC and an FPGA.
The display device 6 is implemented by a liquid crystal display, an organic electroluminescence (EL) display, or the like, and displays a display screen such as an in-vivo image under the control of the display controller 58.
Next, an operation of the image processing apparatus 5 will be described. Hereinafter, processing for two image groups including first and second image groups will be described. However, the number of image groups is not particularly limited as long as it is plural.
Next, the identification unit 54 performs identification processing on the first image group (Step S2).
Then, the first calculation unit 541 calculates the amount (the area, the number of pixels, or the like) of a specific region included in the i-th image (Step S12).
Next, the first identification unit 542 determines whether or not the i-th image is a specific image in which the amount of the specific region is equal to or more than a predetermined threshold value (equal to or more than a predetermined area) stored in the storage unit 52 (Step S13). The specific image is an image in which, due to a specific region such as a bubble, residue, or noise, the region that does not include a captured image of the subject H (the inner wall of the gastrointestinal tract) amounts to the predetermined threshold value or more. The threshold value may be a value input by the user.
In a case where the i-th image is the specific image (Step S13: Yes), the control unit 57 stores, in the storage unit 52, the fact that the i-th image is the specific image (Step S14).
On the other hand, in a case where the i-th image is not the specific image (Step S13: No), the processing directly proceeds to Step S15.
Next, the control unit 57 determines whether or not the variable i is equal to or more than the number N of all images (Step S15).
In a case where the variable i is smaller than N (Step S15: No), the control unit 57 increments the variable i (i=i+1) (Step S16), and returns to Step S12 to continue the processing. On the other hand, in a case where the variable i is N or more (Step S15: Yes), the identification processing ends.
By the identification processing described above, a region of the subject H that is not captured by the capsule endoscope 2 in the first image group is identified. Specifically, a region between the specific images that are consecutive in time series is the region of the subject H that is not captured by the capsule endoscope 2.
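A non-limiting sketch of the identification processing loop (Steps S11 to S16) is given below in Python, assuming the hypothetical specific_region_amount() function sketched above. Here, the region of the subject H that is not captured is represented simply as a run of consecutive specific images; the helper names are assumptions for illustration.

```python
# Illustrative sketch of the identification processing (Steps S11-S16),
# assuming the hypothetical specific_region_amount() defined earlier.
def identify_specific_images(images, threshold: int) -> list[bool]:
    """Flag each image whose specific-region amount meets the threshold (Steps S12-S14)."""
    return [specific_region_amount(img) >= threshold for img in images]

def uncaptured_runs(flags: list[bool]) -> list[tuple[int, int]]:
    """Return (start_index, end_index) runs of consecutive specific images,
    i.e., stretches of the image group regarded as not capturing the subject."""
    runs, start = [], None
    for i, is_specific in enumerate(flags):
        if is_specific and start is None:
            start = i
        elif not is_specific and start is not None:
            runs.append((start, i - 1))
            start = None
    if start is not None:
        runs.append((start, len(flags) - 1))
    return runs
```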
Returning to
Then, the first specifying unit 55 specifies an overlapping section of the subject H in which the regions identified by the identification unit 54 in the first and second image groups overlap each other between the first and second image groups (Step S4).
Next, the generation unit 56 calculates a distance from a reference position to the overlapping section (Step S5).
Further, the display controller 58 causes the display device 6 to display an image displaying the distance to the overlapping section (Step S6).
In the first image group, the region of the subject H that is not captured by the capsule endoscope 2 is a region A11. Similarly, in the second image group, the region of the subject H that is not captured by the capsule endoscope 2 is a region A12. The region A11 and the region A12 are identified by the identification unit 54. Then, the first specifying unit 55 specifies an overlapping section B1 as a section in which the region A11 and the region A12 overlap each other. Further, the display device 6 displays a distance d1 and a distance d2 as the distances from the reference positions generated by the generation unit 56 to the overlapping section.
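A non-limiting sketch of how the overlapping section and the distances from the reference position may be computed (Steps S4 and S5) is given below in Python, assuming that each identified region has already been expressed as a (start, end) interval along a common distance axis measured from the reference position; the interval values are hypothetical.

```python
# Illustrative overlap computation (Steps S4-S5): each identified region is a
# (start, end) interval on a common distance axis from the reference position.
def overlapping_sections(regions_a, regions_b):
    """Intersect two lists of (start, end) intervals, e.g. region A11 with A12."""
    overlaps = []
    for a_start, a_end in regions_a:
        for b_start, b_end in regions_b:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if start < end:
                overlaps.append((start, end))
    return overlaps

# Hypothetical example: region A11 from the first image group, A12 from the second.
a11 = [(120.0, 180.0)]   # distances in arbitrary units from the reference position
a12 = [(150.0, 210.0)]
b1 = overlapping_sections(a11, a12)
# b1 == [(150.0, 180.0)]; distances such as d1 and d2 to the overlapping
# section B1 can then be read off from the interval endpoints.
```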
From the overlapping section B1 displayed on the display device 6, the user can recognize a section of the subject H that is not captured by the capsule endoscope 2 even after the examination is performed multiple times. As a result, the user can easily specify a lesion such as a bleeding source by selectively examining the overlapping section B1 with a small intestine endoscope or the like.
In the examination using the capsule endoscope 2, in a case of a patient with obscure gastrointestinal bleeding (OGIB), in which a bleeding source is not found by the examination using the capsule endoscope 2 and anemia is not alleviated, the bleeding source is specified by repeatedly performing the examination using the capsule endoscope 2. However, in a case where the bleeding source is in a region where the capsule endoscope 2 passes through quickly or in a region where residues are likely to accumulate, the bleeding source may not be found even after performing the examination using the capsule endoscope 2 multiple times. In such a case, the image processing apparatus 5 automatically specifies the overlapping section B1 which is the section of the subject H that is not captured by the capsule endoscope 2 in the examination performed multiple times. As a result, the user can easily specify the bleeding source by examining the overlapping section B1 with a small intestine endoscope or the like.
Next, an operation of the image processing apparatus 5A will be described. The operation of the image processing apparatus 5A differs from the image processing apparatus 5 only in identification processing.
Then, the second identification unit 542A identifies whether or not the degree of similarity calculated by the second calculation unit 541A is lower than a predetermined threshold value (Step S22). Note that the threshold value may be a value stored in the storage unit 52 in advance, or may be a value input by the user. In a case where it is identified by the second identification unit 542A that the degree of similarity is lower than the predetermined threshold value (Step S22: Yes), the control unit 57 stores, in the storage unit 52, the fact that a region between the i-th image and the (i+1)-th image is a region of the subject H that is not captured by the capsule endoscope 2 (Step S23).
On the other hand, in a case where it is identified by the second identification unit 542A that the degree of similarity is equal to or higher than the predetermined threshold value (Step S22: No), the processing directly proceeds to Step S15.
Next, the processing in Steps S15 and S16 is performed in the same manner as in the first embodiment.
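A non-limiting sketch of the similarity-based identification of Steps S21 to S23 is given below in Python. The similarity measure (an inverse mean absolute difference between consecutive frames) and the function names are assumptions for illustration; any measure of the degree of similarity between at least two images could be substituted.

```python
import numpy as np

# Illustrative sketch of the similarity-based identification (Steps S21-S23).
def frame_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Inverse mean absolute difference: 1.0 for identical frames, -> 0 as they diverge."""
    diff = np.abs(img_a.astype(np.float32) - img_b.astype(np.float32)).mean()
    return 1.0 / (1.0 + diff)

def gaps_between_frames(images, threshold: float) -> list[tuple[int, int]]:
    """Return index pairs (i, i+1) whose low similarity suggests an uncaptured gap."""
    gaps = []
    for i in range(len(images) - 1):
        if frame_similarity(images[i], images[i + 1]) < threshold:  # Step S22
            gaps.append((i, i + 1))                                  # Step S23
    return gaps
```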
As in Modified Example 1-1, the identification unit 54 may identify a region of the subject H that is not captured by the capsule endoscope 2 by using an amount that is determined based on the degree of similarity between at least two images, or on a position, speed, or acceleration of the capsule endoscope.
A first specifying unit 55B of the image processing apparatus 5B specifies, in the reciprocating image group, a section of the subject H for which the regions identified by an identification unit 54 as not being captured by the capsule endoscope 2 overlap each other over the reciprocation of the capsule endoscope 2 in the subject H.
Next, an operation of the image processing apparatus 5B will be described.
Next, the first specifying unit 55B specifies a section of the subject H that is not captured by the capsule endoscope 2 in the first image group (Step S32).
Further, as illustrated in
Then, in Steps S3, S33, and S34, an overlapping section of the second image group is specified in the same manner as in Steps S2, S31, and S32. Then, the processing in Steps S4 to S6 is performed in the same manner as in the first embodiment, and the series of processing ends.
According to Modified Example 1-2, a section that is not captured by the capsule endoscope 2 even once when the capsule endoscope 2 reciprocates is specified as the overlapping section B2. Therefore, the sections that the user must examine again by using a small intestine endoscope are reduced, and the burden on the user can be reduced.
A configuration of an image processing apparatus 5 according to a second embodiment is the same as that of the first embodiment, and the second embodiment differs from the first embodiment only in processing in the image processing apparatus 5.
An identification unit 54 identifies regions A31 to A34 of the subject H that are not captured by the capsule endoscope 2, in each of the plurality of image groups.
The first specifying unit 55 identifies whether or not each section of each of the plurality of image groups includes the region identified by the identification unit 54. Then, the first specifying unit 55 specifies overlapping sections B31 in which an overlapping proportion of the regions identified by the identification unit 54 is 75% or more.
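A non-limiting sketch of the overlapping-proportion criterion is given below in Python, assuming that the regions identified in each image group are expressed as (start, end) intervals on a common distance axis and sampled at a fixed step; the sampling step and the 75% ratio used as a default are illustrative.

```python
# Illustrative sketch: a position belongs to an overlapping section when at
# least `min_ratio` of the image groups have an identified region there.
def sections_with_overlap_ratio(regions_per_group, min_ratio=0.75, step=1.0):
    """regions_per_group: list (one entry per image group) of (start, end) lists."""
    end = max(e for regions in regions_per_group for _, e in regions)
    covered = []
    d = 0.0
    while d <= end:
        hits = sum(any(s <= d <= e for s, e in regions)
                   for regions in regions_per_group)
        covered.append(hits / len(regions_per_group) >= min_ratio)
        d += step
    # Merge consecutive covered samples back into (start, end) sections.
    sections, start = [], None
    for i, flag in enumerate(covered):
        if flag and start is None:
            start = i * step
        elif not flag and start is not None:
            sections.append((start, i * step))
            start = None
    if start is not None:
        sections.append((start, len(covered) * step))
    return sections
```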
A generation unit 56 calculates a distance d21 and a distance d22 as information regarding positions of the overlapping sections B31. The position of an image including a captured image of the pylorus in the fourth image group is set as a reference position where the distance d = 0, and the distance d21 and the distance d22 are distances from the reference position to the overlapping sections B31. Further, the generation unit 56 calculates a distance C1 between the two overlapping sections B31 as information regarding the positions of the overlapping sections B31.
A generation unit 56 calculates a distance d31, a distance d32, and a distance d33 as information regarding positions of the overlapping sections B32. The position of an image including a captured image of the pylorus in a fourth image group is set as a reference position where the distance d = 0, and the distance d31, the distance d32, and the distance d33 are distances from the reference position to the overlapping sections B32. Further, the generation unit 56 calculates, as the information regarding the positions of the overlapping sections B32, a distance C2 between the first overlapping section B32 and the second overlapping section B32, and a distance C3 between the second overlapping section B32 and the third overlapping section B32.
Similarly, the generation unit 56 corrects a position of each image in a second image group so that the first captured image in the second image group and the last captured image in the second image group correspond to the predetermined distance d=0 and the distance d=D1, respectively. By this correction, a region A421 identified by the identification unit 54 as a region of the subject H that is not captured by the capsule endoscope 2 in the second image group is corrected to a region A422.
Then, a first specifying unit 55 specifies, as an overlapping section B4, a section in which the region A412 and the region A422 overlap each other.
Then, a first specifying unit 55 specifies, as an overlapping section B5, a section in which a region A51 and the region A522 overlap each other.
Then, a first specifying unit 55 specifies, as an overlapping section B6, a section in which a region A61 and the region A622 overlap each other.
Note that three or more reference positions may be set to sites such as the mouth, the cardia, the pylorus, the ileum, and the anus, or lesions such as a hemostasis site and a ridge site, and different corrections may be applied for the respective reference positions. Further, the reference position may be detected from an image, or the user may observe the image to select the reference position.
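A non-limiting sketch of such a position correction is given below in Python, using piecewise-linear interpolation between reference positions (for example, the first captured image, the pylorus image, and the last captured image); the raw positions and corrected distances in the example are hypothetical.

```python
import numpy as np

# Illustrative position correction: image positions in one examination are
# rescaled piecewise-linearly so that shared reference positions line up
# across image groups on a common distance axis.
def correct_positions(positions, refs_src, refs_dst):
    """Map raw positions through piecewise-linear interpolation.

    positions: per-image positions in this image group (e.g. image index or
               estimated travel distance).
    refs_src:  positions of the reference images in this group (increasing).
    refs_dst:  the common distances those references should map to
               (e.g. d = 0 for the pylorus, d = D1 for the last image).
    """
    return np.interp(positions, refs_src, refs_dst)

# Hypothetical example: the second image group spans raw positions 0..5000 and
# its pylorus image sits at raw position 800; after correction, an identified
# region such as A421 is re-expressed on the common axis (e.g., as A422).
corrected = correct_positions([900, 2500, 4800],
                              refs_src=[0, 800, 5000],
                              refs_dst=[-200.0, 0.0, 4200.0])
```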
A first specifying unit 55C acquires regions of the subject H that are not captured by the capsule endoscope 2, the regions being identified in each of a plurality of image groups on the basis of a characteristic of each of the plurality of image groups, and specifies a section of the subject H in which the regions of the plurality of image groups overlap each other between the plurality of image groups. In other words, the first specifying unit 55C specifies a section of the subject H in which the regions identified by the identification unit 71 in the plurality of image groups overlap each other between the plurality of image groups. However, the first specifying unit 55C may specify at least one section of the subject H in which the region is included in one of the plurality of image groups.
As in the fourth embodiment described above, the image processing apparatus 5C does not include the identification unit, the first calculation unit, and the first identification unit, and the processing device 7 connected via the Internet may perform processing that is to be performed by the identification unit. Similarly, the processing that is to be performed by the identification unit may be performed on a cloud including a plurality of processing devices (server group).
A display controller 58D acquires a specified section of the subject H in which regions of the subject H that are not captured by the capsule endoscope 2 in the plurality of image groups overlap each other between the plurality of image groups, the regions being identified in each of the plurality of image groups on the basis of a characteristic of each of the plurality of image groups, and causes the display device 6 to display information regarding a position of the section. In other words, the first specifying unit 72D specifies the section of the subject H in which the regions identified by the identification unit 71 overlap each other between the plurality of image groups, the generation unit 73D generates the information regarding the position of the section specified by the first specifying unit 72D, and the display controller 58D causes the display device 6 to display the information regarding the position of the section. However, the first specifying unit 72D may specify at least one section of the subject H in which the region is included in one of the plurality of image groups.
As in Modified Example 4-1 described above, the image processing apparatus 5D does not include the identification unit, the first specifying unit, and the generation unit, and the processing device 7D connected via the Internet may perform the processing that is to be performed by the identification unit, the first specifying unit, and the generation unit. Similarly, the processing that is to be performed by the identification unit, the first specifying unit, and the generation unit may be performed on a cloud including a plurality of processing devices (server group).
As such, only a current examination result may be displayed by the distance bar 63, and a past examination result may be displayed by the marker 64. Note that in a case where there are a plurality of past examination results, markers for the respective examinations may be displayed side by side. In addition, in a case where there are a plurality of past examination results, a marker indicating a region that is repeatedly not captured by the capsule endoscope 2 in the past examinations may be displayed. Similarly, in a case where there are a plurality of past examination results, a marker indicating a region in which a portion that is repeatedly not captured by the capsule endoscope 2 in the past examinations accounts for a predetermined proportion or more may be displayed. In addition, in a case where there are a plurality of past examination results, a marker indicating a region that is not captured by the capsule endoscope 2 even once in the past examinations may be displayed.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the disclosure in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
This application is a continuation of PCT international application Ser. No. PCT/JP2018/032918, filed on Sep. 5, 2018 which designates the United States, incorporated herein by reference, and which claims the benefit of priority from Japanese Patent Applications No. 2018-060859, filed on Mar. 27, 2018, incorporated herein by reference.