1. Field of the Invention
The present invention relates to an image display apparatus and a capsule endoscope system for displaying an in-vivo image that is obtained by a capsule endoscope introduced into a subject.
2. Description of the Related Art
Conventionally, in diagnosis of a subject using a capsule endoscope that is introduced into the subject and captures images therein, a group of in-vivo images obtained by the capsule endoscope is observed as a quasi-moving picture or as a list of still pictures, and images appearing to be abnormal are selected therefrom. This operation is called interpretation of radiograms.
The group of in-vivo images captured in one examination includes as many as approximately 60,000 images (equivalent to about 8 hours), and therefore an extremely heavy burden is imposed on the person who interprets the radiograms. For this reason, as functions for assisting the interpretation of radiograms, it has been suggested to, e.g., detect an abnormal portion such as bleeding or a tumor by image processing, and to draw the trace of the capsule endoscope by estimating, through calculation, the position in the subject at which each in-vivo image was captured (for example, see Japanese Patent Application Laid-open No. 2008-036243, Japanese Patent Application Laid-open No. 2010-082241, and Japanese Patent Application Laid-open No. 2007-283001).
An image display apparatus according to an aspect of the present invention for displaying an image based on in-vivo image data obtained, via a receiving device that wirelessly communicates with a capsule endoscope, from the capsule endoscope that captures an in-vivo image of a subject includes: an input receiving unit for receiving input of information to the image display apparatus; a storage unit for storing the in-vivo image data and information related to a position of the capsule endoscope in the subject, the information being associated with the in-vivo image data; an image processing unit for executing predetermined image processing on the in-vivo image data stored in the storage unit; a position estimating unit for executing position estimating processing for estimating a position of the capsule endoscope during image-capturing of the in-vivo image, on the basis of the information related to the position stored in the storage unit; a display unit for displaying a display screen having a selection region for selecting a position estimation level of the position estimating processing executed by the position estimating unit and/or a content of the image processing executed by the image processing unit; a processing time estimating unit for predicting a processing time required in the image processing and/or the position estimating processing, on the basis of the position estimation level and/or the content of the image processing which is selected in the selection region and for which the input receiving unit receives an input of a selection signal; and a processing setting unit for instructing the image processing unit and/or the position estimating unit to execute processing, wherein the display unit displays the selection content selected in the selection region and a processing time confirmation region having a prediction processing time display field for displaying a prediction processing time predicted by the processing time estimating unit, and when the input receiving unit receives an input of an instruction for determining the selection content displayed on the display unit and the prediction processing time, the processing setting unit instructs the image processing unit and/or the position estimating unit to execute the determined processing.
A capsule endoscope system according to another aspect of the present invention includes: a capsule endoscope that is introduced into a subject, so that the capsule endoscope captures an in-vivo image and generates in-vivo image data representing the in-vivo image of the subject; a receiving device for receiving the in-vivo image data generated by the capsule endoscope via wireless communication; and an image display apparatus for displaying an image based on the in-vivo image data obtained via the receiving device, wherein the image display apparatus includes: an input receiving unit for receiving input of information to the image display apparatus; a storage unit for storing the in-vivo image data and information related to a position of the capsule endoscope in the subject, the information being associated with the in-vivo image data; an image processing unit for executing predetermined image processing on the in-vivo image data stored in the storage unit; a position estimating unit for executing position estimating processing for estimating a position of the capsule endoscope during image-capturing of the in-vivo image, on the basis of the information related to the position stored in the storage unit; a display unit for displaying a display screen having a selection region for selecting a position estimation level of the position estimating processing executed by the position estimating unit and/or a content of the image processing executed by the image processing unit; a processing time estimating unit for predicting a processing time required in the image processing and/or the position estimating processing, on the basis of the position estimation level and/or the content of the image processing which is selected in the selection region and for which the input receiving unit receives an input of a selection signal; and a processing setting unit for instructing the image processing unit and/or the position estimating unit to execute processing, wherein the display unit displays the selection content selected in the selection region and a processing time confirmation region having a prediction processing time display field for displaying a prediction processing time predicted by the processing time estimating unit, and when the input receiving unit receives an input of an instruction for determining the selection content displayed on the display unit and the prediction processing time, the processing setting unit instructs the image processing unit and/or the position estimating unit to execute the determined processing.
The above and other features, advantages and technical and industrial significance of this invention will be better understood by reading the following detailed description of presently preferred embodiments of the invention, when considered in connection with the accompanying drawings.
Hereinafter, an image display apparatus and a capsule endoscope system according to embodiments of the present invention will be described with reference to the drawings. In the following description, a system including a capsule endoscope that is introduced into a subject and captures in-vivo images is illustrated as an example, but it is to be understood that the present invention is not limited by this embodiment.
After the capsule endoscope 2 is swallowed through the mouth of the subject 10, the capsule endoscope 2 moves through the organs of the subject 10 due to peristaltic motion of the organs and the like, and meanwhile generates in-vivo image data by performing predetermined signal processing on captured image signals obtained by successively capturing images in the subject 10 at a predetermined time interval (for example, an interval of 0.5 seconds). Every time the capsule endoscope 2 captures an in-vivo image of the subject 10, it successively and wirelessly transmits the generated in-vivo image data to the external receiving device 3. Identification information (for example, a serial number) for identifying the individual capsule endoscope is allocated to the capsule endoscope 2, and this identification information is wirelessly transmitted together with the in-vivo image data.
The receiving device 3 has an antenna unit 4 having multiple receiving antennas 41a to 41h. Each of the receiving antennas 41a to 41h is configured using, for example, a loop antenna, and is provided at a predetermined position on the external surface of the subject 10 (for example, a position corresponding to each organ in the subject 10 along the path of the capsule endoscope 2). The receiving antennas 41a to 41h are kept at these predetermined positions with respect to the subject 10 during the examination. It should be noted that the arrangement of the receiving antennas 41a to 41h may be changed to any arrangement in accordance with the purpose of examination, diagnosis, and the like. It should also be noted that the number of antennas provided in the antenna unit 4 is not necessarily limited to the eight shown as the receiving antennas 41a to 41h, and may be less than eight or more than eight.
While the capsule endoscope 2 captures images (for example, from when the capsule endoscope 2 is introduced through the mouth of the subject 10 to when the capsule endoscope 2 passes through the alimentary canal and is excreted), the receiving device 3 is carried by the subject 10, and receives, via the antenna unit 4, the in-vivo image data wirelessly transmitted from the capsule endoscope 2. The receiving device 3 stores the received in-vivo image data to memory incorporated therein. The receiving device 3 also stores, to the memory, received strength information about each of the receiving antennas 41a to 41h when the in-vivo image is received and time information representing a time at which the in-vivo image is received, in such a manner that the received strength information and the time information are associated with the in-vivo image data. It should be noted that the received strength information and the time information are used by the image display apparatus 5 as information related to the position of the capsule endoscope 2. After the capsule endoscope 2 finishes capturing the images, the receiving device 3 is detached from the subject 10, and is connected to the image display apparatus 5 so that information such as the in-vivo image data is transferred (downloaded).
The image display apparatus 5 is configured with a workstation or a personal computer having a display unit such as a CRT display or a liquid crystal display, and displays the in-vivo images based on the in-vivo image data obtained via the receiving device 3. An operation input device 5b such as a keyboard and a mouse is connected to the image display apparatus 5. Alternatively, a touch panel provided so as to overlap the display unit may be used as the operation input device 5b. By manipulating the operation input device 5b, the user (a person who interprets the radiograms) interprets the in-vivo images of the subject 10 displayed successively on the image display apparatus 5, observes (examines) living body portions (for example, the esophagus, stomach, small intestine, and large intestine) in the subject 10, and diagnoses the subject 10 on the basis of the above.
The image display apparatus 5 has, for example, a USB (Universal Serial Bus) port, and a cradle 5a is connected via this USB port. The cradle 5a is a reading device for reading the in-vivo image data from the memory of the receiving device 3. When the receiving device 3 is attached to the cradle 5a, the receiving device 3 is electrically connected to the image display apparatus 5, so that the in-vivo image data stored in the memory of the receiving device 3, the received strength information and the time information associated therewith, and related information such as the identification information of the capsule endoscope 2 are transferred to the image display apparatus 5. The image display apparatus 5 thus obtains the series of in-vivo image data of the subject 10 and the related information thereof, and further executes the processing explained later, thus displaying the in-vivo images. It should be noted that the image display apparatus 5 may be connected to an output device such as a printer, and the in-vivo images may be output to the output device.
It should be noted that the image display apparatus 5 can obtain the in-vivo image data captured by the capsule endoscope 2 by various methods other than the method explained above. For example, in the receiving device 3, memory that can be detached from and attached to the receiving device 3, such as USB memory or CompactFlash (registered trademark), may be used instead of the internal memory. In this case, after the in-vivo image data from the capsule endoscope 2 are stored to the memory, only the memory may be detached from the receiving device 3 and inserted into, for example, the USB port of the image display apparatus 5. Alternatively, the image display apparatus 5 may be provided with a communication function for communicating with an external device, and the in-vivo image data may be obtained from the receiving device 3 by wired or wireless communication.
Subsequently, each device constituting the capsule endoscope system 1 will be explained in detail.
As illustrated in
As illustrated in
The image-capturing unit 21 includes, for example, an image sensor 21a such as a CCD or CMOS sensor for generating image data of an image of the subject from an optical image formed on a light receiving surface, and also includes an optical system 21b such as an objective lens provided at the light receiving surface side of the image sensor 21a. The illumination unit 22 is configured with an LED (Light Emitting Diode) or the like for emitting light toward the subject 10 during the image capturing process. The image sensor 21a, the optical system 21b, and the illumination unit 22 are mounted on the circuit board 23.
The drive circuit of the image-capturing unit 21 operates under the control of the signal processing unit 24 explained later, generates a captured image signal representing an image in the subject at a regular interval (for example, two frames per second), and inputs the captured image signal to the signal processing unit 24. In the explanation below, it is assumed that the image-capturing unit 21 and the illumination unit 22 include their respective drive circuits.
The circuit board 23 having the image-capturing unit 21 and the illumination unit 22 mounted thereon is provided at the side of the optical dome 2a within the capsule-shaped container (2a, 2b) such that the light receiving surface of the image sensor 21a and the light emitting direction of the illumination unit 22 face the subject 10 with the optical dome 2a interposed therebetween. Therefore, the image capturing direction of the image-capturing unit 21 and the illumination direction of the illumination unit 22 are oriented toward the outside of the capsule endoscope 2 with the optical dome 2a interposed therebetween as illustrated in
The signal processing unit 24 controls each unit in the capsule endoscope 2, performs A/D conversion on the captured image signal that is output from the image-capturing unit 21 to generate digital in-vivo image data, and further performs predetermined signal processing thereon. The memory 25 temporarily stores data used in the various types of operations executed by the signal processing unit 24 and the in-vivo image data having been subjected to the signal processing by the signal processing unit 24. The transmission unit 26 and the antenna 27 transmit, to the outside, the in-vivo image data stored in the memory 25 together with the identification information of the capsule endoscope 2 in such a manner that the in-vivo image data and the identification information are multiplexed in a radio signal. The battery 28 provides electric power to each unit in the capsule endoscope 2. It is assumed that the battery 28 includes a power supply circuit for, e.g., boosting the electric power supplied from a primary battery such as a button battery or from a secondary battery.
On the other hand, the receiving device 3 includes a receiving unit 31, a signal processing unit 32, memory 33, an interface (I/F) unit 34, an operation unit 35, a display unit 36, and a battery 37. The receiving unit 31 receives, via the receiving antennas 41a to 41h, the in-vivo image data wirelessly transmitted from the capsule endoscope 2. The signal processing unit 32 controls each unit in the receiving device 3, and performs predetermined signal processing on the in-vivo image data received by the receiving unit 31. The memory 33 stores data used in the various types of operations executed by the signal processing unit 32, the in-vivo image data having been subjected to the signal processing by the signal processing unit 32, and related information thereof (the received strength information, the time information, and the like). The interface unit 34 transmits the image data stored in the memory 33 to the image display apparatus 5 via the cradle 5a. The operation unit 35 is used by the user to input various types of operation instructions and settings to the receiving device 3. The display unit 36 notifies the user of, or displays, various types of information. The battery 37 supplies electric power to each unit in the receiving device 3.
As illustrated in
The interface unit 51 functions as an input receiving unit for receiving the in-vivo image data and the related information related thereto, which are input via the cradle 5a, and receiving various types of instructions and information, which are input via the operation input device 5b.
The temporary storage unit 52 is configured with volatile memory such as DRAM or SDRAM, and temporarily stores the in-vivo image data input from the receiving device 3 via the interface unit 51. Alternatively, instead of the temporary storage unit 52, a recording medium such as an HDD (hard disk drive), an MO (magneto-optical disk), a CD-R, or a DVD-R and a drive device for driving the recording medium may be provided, and the in-vivo image data input via the interface unit 51 may be temporarily stored to the recording medium.
The image processing unit 53 performs, on the in-vivo image data stored in the temporary storage unit 52, basic (essential) image processing such as white balance processing, demosaicing, color conversion, density conversion (such as gamma conversion), smoothing (noise reduction and the like), and sharpening (edge emphasis and the like), as well as auxiliary (optional) image processing for detecting a lesion site and the like. Examples of the optional image processing include red color detecting processing for detecting a bleeding point, and detection of tumorous, vascular, and bleeding lesion sites achieved by image recognition processing.
The position estimating unit 54 executes position estimating processing on the basis of the received strength information and the time information stored in the temporary storage unit 52. More specifically, the position estimating unit 54 obtains, from the temporary storage unit 52, the received strength of each of the receiving antennas 41a to 41h associated with the in-vivo image data received at a certain time, and extracts, for each of the receiving antennas 41a to 41h, a spherical region whose center is at that antenna and whose radius is a distance according to the received strength. The weaker the received strength is, the larger this radius becomes. The position where these regions intersect is estimated as the position of the capsule endoscope 2 at that time, i.e., the position in the subject 10 represented by the in-vivo image. The position estimating unit 54 executes this kind of position estimation at a predetermined sampling density. The position thus estimated (estimated position information) is associated with the time information and stored to the storage unit 55.
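The following is a minimal sketch, in Python, of how this position estimating processing could be implemented. The inverse power-law mapping from received strength to sphere radius (strength_to_radius, with constants k and n) and the least-squares intersection of the spheres are illustrative assumptions; the embodiment only requires that a weaker received strength correspond to a larger radius and that the intersection of the spherical regions be taken as the estimated position.

```python
# A sketch of the spherical-region position estimate (assumptions noted above).
import numpy as np

def strength_to_radius(strength, k=1.0, n=2.0):
    # Assumed inverse power-law model: the weaker the received strength,
    # the larger the radius of the spherical region.
    return (k / np.maximum(strength, 1e-9)) ** (1.0 / n)

def estimate_position(antenna_positions, strengths):
    """Estimate the capsule position for one in-vivo image.

    antenna_positions: (N, 3) coordinates of receiving antennas 41a to 41h.
    strengths: (N,) received strengths associated with the image.
    """
    p = np.asarray(antenna_positions, dtype=float)
    r = strength_to_radius(np.asarray(strengths, dtype=float))
    # Subtracting the first sphere equation ||x - p_0||^2 = r_0^2 from the
    # others linearizes the intersection into a least-squares system.
    A = 2.0 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(p[1:] ** 2 - p[0] ** 2, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x  # estimated (x, y, z) position of the capsule endoscope 2

# Example with eight antennas (41a to 41h) at assumed positions:
antennas = np.random.rand(8, 3)
strengths = np.random.rand(8) + 0.1
print(estimate_position(antennas, strengths))
```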
It should be noted that various known methods other than the above can be applied as a specific method of this position estimating processing.
The storage unit 55 stores, e.g., not only parameters and various types of processing programs executed by the image display apparatus 5 but also the in-vivo image data subjected to the image processing by the image processing unit 53, the estimated position information obtained by the position estimating unit 54, and examination information generated by the examination information generating unit 58 explained later. For example, the storage unit 55 is configured with a recording medium, such as semiconductor memory (e.g., flash memory, RAM (Random Access Memory), or ROM (Read Only Memory)), an HDD (hard disk drive), an MO (magneto-optical disk), a CD-R, or a DVD-R, and a drive device for driving the recording medium.
The processing setting unit 56 sets the content of the image processing executed by the image processing unit 53 (the type and the precision of the image processing) and the position estimation level of the position estimating processing executed by the position estimating unit 54, on the basis of information that is input via the interface unit 51 when the user manipulates the operation input device 5b (input information), and controls the image processing unit 53 and the position estimating unit 54 so as to execute each processing with the content thus set (hereinafter referred to as the processing content).
More specifically, the processing setting unit 56 sets, on the basis of the above input information, at least one detecting unit or estimating unit that executes processing, from among the red color detecting unit 53b, the (tumorous) lesion detecting unit 53c, the (vascular) lesion detecting unit 53d, the (bleeding) lesion detecting unit 53e, and the position estimating unit 54. Further, with regard to the position estimating processing, the processing setting unit 56 sets, on the basis of the input information, any one of, e.g., three executable position estimation levels (whose estimation precisions are low, medium, and high, respectively).
The processing time estimating unit 57 predicts the time needed for the image processing and/or the position estimating processing (processing time) on the basis of the content set by the processing setting unit 56. It should be noted that the processing time in the first embodiment does not mean a simple summation of the times required by the image processing and the position estimating processing, but means the actual processing time from when the image processing and the position estimating processing are started in parallel to when both are completed.
More specifically, the processing time estimating unit 57 predicts the processing time on the basis of the elements (1) to (4) as follows.
(1) CPU Occupation Rate
The image display apparatus 5 executes, in parallel, not only the processing on the in-vivo image data but also various types of processing such as initialization of the memory of the receiving device 3 after the transfer, generation of the examination information, and display of existing in-vivo images. Accordingly, the CPU occupation rate changes from time to time. The processing time estimating unit 57 predicts and calculates, in accordance with the occupation rate at that time, the processing time required when the image processing and the position estimating processing are executed in parallel. It should be noted that, when multiple sets of image processing are executed, the various types of image processing are basically executed in parallel; however, under a circumstance where, for example, the CPU occupation rate is high and the number of processes that can be executed in parallel is limited, the various types of image processing may be processed serially. The processing time estimating unit 57 predicts the processing time in view of such a situation.
(2) Processing Time Per In-Vivo Image (or the Amount of Data Processing Per In-Vivo Image)
The storage unit 55 stores, as parameters, information about a time per in-vivo image (or the amount of data processing) that is required in various types of image processing including the red color detecting processing, the tumorous lesion detecting processing, the vascular lesion detecting processing, and the bleeding lesion detecting processing, and the position estimating processing. The processing time estimating unit 57 retrieves, from the storage unit 55, the information about the time required in the image processing and the position estimating processing which are set by the processing setting unit 56.
(3) The Number of In-Vivo Images
The amount of data processing in the image processing and the position estimating processing greatly changes in accordance with the amount of in-vivo image data transferred from the receiving device 3. In other words, the amount of data processing in the image processing and the position estimating processing greatly changes in accordance with the number of in-vivo images. The number of in-vivo images is determined in accordance with the image capturing rate of the capsule endoscope 2 and the examination time (a time from when the capsule endoscope 2 is introduced into the subject 10 to when the capsule endoscope 2 is excreted out of the subject 10).
(4) Sampling Density Corresponding to Precision of Each Position Estimation
The storage unit 55 stores, as a parameter, sampling density information corresponding to each position estimation level (low, medium, and high). The processing time estimating unit 57 retrieves, from the storage unit 55, the sampling density information corresponding to the position estimation level set by the processing setting unit 56. The total number of in-vivo images to be subjected to the position estimating processing is determined in accordance with the sampling density and the number of in-vivo images.
Further, the processing time estimating unit 57 may obtain the finish time of the image processing and the position estimating processing, on the basis of the calculated processing time and the current time.
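As one way to picture how elements (1) to (4) combine, the following is a minimal sketch assuming that parallel tasks are governed by the longest-running one, stretched by the CPU capacity left over at that time; the scaling model, function names, and numerical values are illustrative assumptions, not the embodiment's actual calculation.

```python
# A sketch of the processing-time prediction from elements (1) to (4).
from datetime import datetime, timedelta

def predict_processing_time(per_image_seconds, num_images,
                            sampling_density, cpu_occupation_rate):
    """per_image_seconds: dict mapping each selected processing to its
    per-image time retrieved from the storage unit 55 (element (2)).
    num_images: number of transferred in-vivo images (element (3)).
    sampling_density: fraction of images subjected to position estimation,
    according to the selected position estimation level (element (4)).
    cpu_occupation_rate: current CPU load in [0, 1) (element (1)).
    """
    times = []
    for name, t in per_image_seconds.items():
        # Position estimation only touches the sampled subset of images.
        n = num_images * sampling_density if name == "position" else num_images
        times.append(t * n)
    # Tasks run in parallel, so the prediction is governed by the longest
    # one, stretched by the CPU capacity left over for this work (assumed).
    return max(times) / max(1.0 - cpu_occupation_rate, 0.05)

seconds = predict_processing_time(
    {"red_detection": 0.01, "position": 0.05}, num_images=60000,
    sampling_density=0.5, cpu_occupation_rate=0.3)
finish = datetime.now() + timedelta(seconds=seconds)  # predicted finish time
print(f"predicted: {seconds / 60:.0f} min, finishing at {finish:%H:%M}")
```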
The examination information generating unit 58 generates information about the examination on the basis of the information provided via the operation input device 5b. More specifically, the information includes patient information (ID, name, sex, age, date of birth, and the like) for distinguishing the subject 10, who is a patient, and also includes diagnosis information for identifying the content of the diagnosis of the subject 10 (the name of the hospital, the name of the doctor (nurse) who gave the capsule, the date and time when the capsule was given, the date and time when the data were obtained, the serial number of the capsule endoscope 2, the serial number of the receiving device 3, and the like). It should be noted that the examination information may be generated in advance before the receiving device 3 transfers the in-vivo image data, or may be generated after the in-vivo image data are transferred.
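For illustration only, the examination information described above can be represented as a simple record; the field names below are assumptions derived from the items listed, not identifiers from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class ExaminationInfo:
    # Patient information for distinguishing the subject 10
    patient_id: str
    name: str
    sex: str
    age: int
    date_of_birth: str
    # Diagnosis information identifying the content of the diagnosis
    hospital_name: str
    capsule_administrator: str  # doctor (nurse) who gave the capsule
    capsule_given_at: str       # date and time when the capsule was given
    data_obtained_at: str       # date and time when the data were obtained
    capsule_serial: str         # serial number of the capsule endoscope 2
    receiver_serial: str        # serial number of the receiving device 3
```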
The display control unit 59 controls the display unit 60 so as to display, in a predetermined format, the in-vivo image having been subjected to the image processing by the image processing unit 53, the position information estimated by the position estimating unit 54, and various types of other information.
The display unit 60 is configured with a CRT display or a liquid crystal display, and under the control of the display control unit 59, the display unit 60 displays various types of information and the radiographic image interpretation screen including the in-vivo image of the subject 10.
Subsequently, operation of the image display apparatus will be explained with reference to
When the receiving device 3 is attached to the cradle 5a in step S101 (step S101: Yes), transfer of the in-vivo image data and the related information thereof stored in the memory of the receiving device 3 to the image display apparatus 5 is started (step S102). At this occasion, the transferred in-vivo image data and the like are stored to the temporary storage unit 52. When the receiving device 3 is not attached to the cradle 5a (step S101: No), the image display apparatus 5 waits until the receiving device 3 is attached.
When the transfer of the in-vivo image data and the related information thereof is finished, each unit of the image display apparatus 5 sets the processing content of the position estimating processing and the image processing in steps S103 to S107. First, in step S103, the display control unit 59 controls the display unit 60 to display a screen for allowing the user to select a desired processing content.
When this processing selection screen 100 is initially displayed, all of the icons 101 to 104 are selected, and the icon 106 among the icons 105 to 107 is selected. In other words, the processing selection screen 100 indicates that all the image processing and the position estimation at the precision “medium” are to be executed. On this screen, the user unselects the icons representing unnecessary image processing (for example, the icons 101 and 102) and selects the icon representing the desired position estimation level (for example, the icon 105).
When the type of the image processing and the position estimation level desired by the user are selected and a selection signal of the OK button 108 is further input (for example, the OK button is clicked) in the processing selection screen 100 (step S104: Yes), the processing setting unit 56 provisionally determines the processing content displayed on the processing selection screen 100, and the processing time estimating unit 57 predicts and calculates the processing time required to execute that processing content (step S105). When the selection signal of the OK button 108 is not input (step S104: No), the processing selection screen 100 is continuously displayed on the display unit 60 (step S103), and the user can redo the selection of the processing content any number of times.
Subsequently, in step S106, the display control unit 59 causes the display unit 60 to display the processing time calculated by the processing time estimating unit 57.
When a selection signal of the “confirm” button 112 is input on the processing time confirmation screen 110 (step S107: Yes), the processing setting unit 56 determines the selected processing content, and causes the image processing unit 53 and the position estimating unit 54 to execute the processing based on this content in parallel (step S108). On the other hand, when a selection signal of the “return” button 113 is input in the processing time confirmation screen 110 (step S107: No), the display control unit 59 displays the processing selection screen 100 as illustrated in
In step S108, the position estimating unit 54 executes the position estimating processing at the position estimation level having been set, in parallel with the processing of the image processing unit 53.
When the image processing and the position estimating processing as described above are finished, the display control unit 59 causes the display unit to display the radiographic image interpretation screen including the in-vivo image and the estimated position information in step S109.
In the main display region 123, the in-vivo image corresponding to the in-vivo image data processed by the basic processing unit 53a as illustrated in
As explained above, in the image display apparatus according to the first embodiment, only the processing content desired by the user is executed on the in-vivo image data, so that the waiting time until the start of the interpretation of the images can be reduced to the minimum. In addition, the time required for the processing on the in-vivo image data, or the finish time of the processing, is predicted and displayed, which enables the user to efficiently utilize the waiting time until the start of the interpretation of the images.
In the first embodiment, four types of image processing are mentioned. However, the types of image processing are not limited to these four. In other words, the number of types of image processing may be increased or decreased to any number, as long as it is possible to select whether or not to execute at least one type of image processing.
In the first embodiment, a case where both of the position estimating processing and the image processing are executed has been explained as a specific example. Alternatively, at least one of the position estimating processing and the image processing may be executed.
Modification 1-1
Subsequently, the first modification of the image display apparatus according to the first embodiment will be explained with reference to
When a predetermined operation signal is input (for example, a cursor 131 is placed on any one of the icons 101 to 104, and the mouse is right-clicked) in a processing selection screen 130 as illustrated in
When the user clicks and selects the radio button 133 of any one of the levels of precision through pointer operation on the screen using a mouse and the like, the processing time estimating unit 57 retrieves the sampling density information corresponding to the selected precision from the storage unit 55, and predicts and calculates the processing time. After the processing content is determined, the processing setting unit 56 causes the corresponding units among the red color detecting unit 53b to the (bleeding) lesion detecting unit 53e of the image processing unit 53 to execute the image processing at that sampling density.
According to this modification 1-1, the user can efficiently interpret the in-vivo images which have been subjected to the desired image processing at the desired precision.
Modification 1-2
Subsequently, the second modification of the image display apparatus according to the first embodiment will be explained with reference to
The trace calculation unit 61 executes trace calculation processing of the capsule endoscope 2 on the basis of the estimated position information obtained by the position estimating unit 54. More specifically, the trace calculation unit 61 extracts, from the multiple estimated positions of the capsule endoscope 2, two points adjacent to each other in terms of time, and connects these two points when the distance between them is equal to or less than a predetermined value. By successively connecting the estimated positions in this manner, the trace calculation unit 61 calculates the total trace and generates trace information, which is stored to the storage unit 55. It should be noted that various known methods other than the above can be applied as a specific method of this trace calculation processing.
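A minimal sketch of this trace calculation follows, assuming the estimated positions are sorted by time and using an arbitrary threshold value (max_step); both are illustrative assumptions.

```python
import numpy as np

def calculate_trace(estimated_positions, max_step=50.0):
    """estimated_positions: list of (time, (x, y, z)) sorted by time.
    Returns the trace as a list of connected segments (pairs of points)."""
    segments = []
    for (_, p0), (_, p1) in zip(estimated_positions, estimated_positions[1:]):
        # Connect two temporally adjacent estimated positions only when the
        # distance between them is equal to or less than the predetermined value.
        if np.linalg.norm(np.asarray(p1) - np.asarray(p0)) <= max_step:
            segments.append((p0, p1))
    return segments

trace = calculate_trace([(0.0, (0, 0, 0)), (0.5, (10, 5, 0)), (1.0, (200, 0, 0))])
print(trace)  # the third point is too far from the second and is not connected
```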
For example, the display control unit 59 displays a trace on a radiographic image interpretation screen 140 as illustrated in
If the image processing by the image processing unit 53 and the position estimating processing by the position estimating unit 54 are finished, the display control unit 59 may start displaying the radiographic image interpretation screen even before the trace calculation processing is finished. In this case, first, in the radiographic image interpretation screen 120 as illustrated in
According to this modification 1-2, the user can see the trace to more correctly understand the position of the in-vivo image in question.
Subsequently, an image display apparatus according to a second embodiment of the present invention will be explained. The configuration of the image display apparatus according to the second embodiment is the same as the one as illustrated in
First, in steps S101 and S102, in-vivo image data and related information related thereto are transferred from a receiving device 3 attached to a cradle 5a to an image display apparatus 5. The details of these steps are the same as those explained in the first embodiment (see
In step S201, the display control unit 59 causes the display unit 60 to display a screen for allowing the user to select whether to enter a mode for selecting a processing content that fits within a desired processing time.
When a selection signal of the selection button 201 is input in this processing selection screen 200 (step S202: Yes), for example, the processing setting unit 56 generates a processing content table 210 as illustrated in
On the other hand, when a selection signal of the selection button 201 is not input (step S202: No), the display control unit 59 repeats display of the processing selection screen 200 as illustrated in
In step S204, for example, the display control unit 59 causes the display unit 60 to display a processing time input field 203 as illustrated in
As illustrated in
When the desired processing time is not input to the processing time input field 203 (step S205: No), the processing time input field 203 continues to be displayed (step S204).
In step S207, the display control unit 59 causes the display unit 60 to display the extracted processing content.
When a selection signal of an OK button 221 is input in the processing display screen 220 (step S208: Yes), the processing setting unit 56 determines the displayed processing content. Thereafter, the image processing and the position estimating processing are executed in parallel (step S108), and the radiographic image interpretation screen is displayed on the display unit 60 (step S109). It should be noted that operation in steps S108 and S109 is the same as the operation explained in the first embodiment (see
On the other hand, when a selection signal of a NO button 222 is input in the processing display screen 220 (step S208: No), the display control unit 59 causes the display unit 60 to display the processing time input field 203 as illustrated in
As described above, according to the second embodiment, the processing content is selected according to the desired processing time, and the user can efficiently interpret the radiographic images on the basis of the in-vivo images having been subjected to necessary processing in a limited time.
It should be noted that each processing content illustrated in the processing content table 210 of
In step S205 explained above, instead of having the user input the desired processing time, a desired finish time may be input. In this case, the processing time estimating unit 57 calculates the desired processing time based on the desired finish time that has been input, and extracts a processing content in accordance with the desired processing time.
When the processing content displayed in step S206 explained above is not the processing content desired by the user, the user may be allowed to correct the processing content by selecting the icons 101 to 107 on the processing display screen 220. In this case, the processing time estimating unit 57 calculates the prediction processing time and the processing finish time again on the basis of the corrected processing content, and the display control unit 59 controls the display unit 60 to display the result of the recalculation in the prediction time display region.
Modification 2-1
Subsequently, the first modification of the image display apparatus according to the second embodiment will be explained with reference to
More specifically, when a desired processing time is input to a processing time input field 203 as illustrated in
When a selection signal of any one of the icons 231 to 233 and a selection signal of the OK button 234 are input, the processing setting unit 56 determines the processing content displayed on the selected one of the icons 231 to 233, and causes the image processing unit 53 and the position estimating unit 54 to execute the processing.
On the other hand, when a selection signal of the instruction button 235 is input, the processing setting unit 56 extracts a subsequent processing candidate from the processing content table 210. Thereafter, the display control unit 59 displays, on the processing candidate display screen 230, an icon representing the subsequent processing candidate extracted by the processing setting unit 56. The user may select a desired processing content by way of an icon displayed on the display unit 60.
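As a way to picture the extraction of candidates from a table like the processing content table 210, the following is a minimal sketch: contents whose prediction processing time fits within the desired processing time are ranked with the closest prediction first, and each request for a subsequent candidate walks down that ranking. The table rows and the ranking rule are illustrative assumptions.

```python
# (content description, prediction processing time in minutes) - illustrative
processing_content_table = [
    ("position estimation: low", 20),
    ("position estimation: medium + red color detection", 45),
    ("position estimation: high + all lesion detection", 90),
]

def rank_candidates(table, desired_minutes):
    """Return contents fitting the desired time, closest prediction first."""
    fitting = [row for row in table if row[1] <= desired_minutes]
    return sorted(fitting, key=lambda row: desired_minutes - row[1])

candidates = rank_candidates(processing_content_table, desired_minutes=60)
print(candidates[0])   # content initially displayed as a processing candidate
print(candidates[1:])  # subsequent candidates shown on request
```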
According to the modification 2-1, the user can compare multiple processing candidates, and therefore, the user can select a processing content that is more suitable for the user's request.
Modification 2-2
Subsequently, the second modification of the image display apparatus according to the second embodiment will be explained with reference to
More specifically, in step S203 as illustrated in
According to the modification 2-2, the precision of the image processing can also be selected, and therefore, the user can interpret radiographic in-vivo images having been subjected to the processing that is more suitable for the user's request.
Modification 2-3
Subsequently, a third modification of the image display apparatus according to the second embodiment will be explained with reference to
When the order of priority is set, first, the display control unit 59 causes the display unit 60 to display a priority order setting screen 250 as illustrated in
When the user selects a radio button 254 representing the order of priority desired by the user by pointer operation on the screen with a mouse and the like, and further the user selects the OK button 253, the display control unit 59 subsequently causes the display unit 60 to display a precision setting screen 260 as illustrated in
When the user selects a radio button 264 representing a desired precision of each processing, and further clicks the OK button 263, the processing setting unit 56 generates user setting information based on the content displayed on the precision setting screen 260, and stores this to a storage unit 55. It should be noted that
When a desired processing time (for example, 60 minutes) is input to the processing time input field 203 as illustrated in
On the other hand, when the prediction processing time of the processing content set by the user is more than the desired processing time, the processing time estimating unit 57 searches for a processing content that is as close as possible to the processing content of the user's setting and whose prediction processing time fits within the desired processing time.
For example, when the desired processing time is 60 minutes, as illustrated in
According to the modification 2-3, the order of priority unique to the user is set in advance, and therefore, the processing reflecting the user's preference can be performed on the in-vivo image data within the desired processing time.
Modification 2-4
Subsequently, the fourth modification of the image display apparatus according to the second embodiment will be explained with reference to
When the desired processing time is input, the processing setting unit 56 reads the table from the storage unit 55, and temporarily sets, as the processing content, the processing whose order of priority is the highest (for example, position estimating processing “medium”). The processing time estimating unit 57 calculates the prediction processing time of the temporarily set processing content. When the prediction processing time is less than the desired processing time, the processing setting unit 56 adds the processing whose order of priority is the second highest (for example, red color detecting processing “high”), thus setting a new processing content. The processing time estimating unit 57 then calculates the prediction processing time of the newly set processing content (i.e., the position estimating processing “medium” and the red color detecting processing “high”). When this prediction processing time is still less than the desired processing time, the processing setting unit 56 further adds the processing whose order of priority is the next highest (for example, (tumorous) lesion detecting processing “medium”), thus setting a new processing content. As described above, the addition of processing and the calculation of the prediction processing time are repeated until immediately before the prediction processing time exceeds the desired processing time. The processing setting unit 56 determines, as the ultimate processing content, the processing content obtained immediately before the prediction processing time exceeds the desired processing time.
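A minimal sketch of this priority-based setting follows. For simplicity the sketch sums per-item predicted minutes, whereas the embodiment recalculates the parallel prediction processing time at each addition; the priority table and its values are illustrative assumptions.

```python
# (processing item, predicted minutes), highest priority first - illustrative
priority_table = [
    ("position estimating: medium", 25),
    ("red color detecting: high", 15),
    ("(tumorous) lesion detecting: medium", 20),
    ("(vascular) lesion detecting: low", 10),
]

def select_by_priority(table, desired_minutes):
    selected, predicted = [], 0
    for item, minutes in table:
        if predicted + minutes > desired_minutes:
            break  # stop immediately before exceeding the desired time
        selected.append(item)
        predicted += minutes
    return selected, predicted

content, total = select_by_priority(priority_table, desired_minutes=60)
print(content, total)  # ultimate processing content and its prediction time
```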
According to the modification 2-4, the order of priority is set in advance for the precision of the image processing, and the processing reflecting the user's preference can be performed on the in-vivo image data within the desired processing time.
Furthermore, like the modification 1-2, another modification of the second embodiment may be made by adding a trace calculation unit 61 to the second embodiment and the modifications 2-1 to 2-4 thereof.
Subsequently, an image display apparatus according to the third embodiment of the present invention will be explained. The configuration of the image display apparatus according to the third embodiment is the same as that as illustrated in
When the user selects an icon representing desired image processing and an icon representing a desired position estimation level by pointer operation on the processing selection screen 300 using a mouse and the like, the processing time estimating unit 57 calculates the prediction processing time required to execute the selected processing. Then, the display control unit 59 causes the display unit 60 to display the calculated prediction processing time.
When a selection signal of the OK button 302 is input in the processing selection screen 300 explained above, the processing setting unit 56 determines the selected processing content. On the other hand, when a selection signal of the NO button 303 is input, all the icons 101 to 107 are unselected. In this case, the user can select the icons 101 to 107 all over again from the beginning.
As described above, in the third embodiment, the individual processing times required to perform the position estimating processing and the various kinds of image processing are displayed on the processing selection screen, and therefore the user can look up the individual processing times to select a desired processing content. Therefore, the user himself/herself can adjust the processing content performed on the in-vivo image data and the time at which the interpretation of the images can be started.
Modification 3
Subsequently, the modification of the image display apparatus according to the third embodiment will be explained with reference to
When a predetermined operation signal is input (for example, a cursor 331 is placed on any one of the icons 101 to 104, and the mouse is right-clicked) in the processing selection screen 310, the display control unit 59 causes the display unit 60 to display a precision selection window 312 for selecting the precision of the image processing (for example, three levels, i.e., low, medium, and high). For example, the precision selection window 312 includes radio buttons 313 corresponding to the three levels of precision. When the user clicks and selects the radio button 313 of any one of the levels of precision, the processing time estimating unit 57 retrieves the sampling density information corresponding to the selected precision from the storage unit 55, and calculates the individual processing time predicted for the case where that image processing is executed independently. The display control unit 59 controls the display unit 60 to display the calculated individual processing time below the selected icon. For example, the individual processing time “20 minutes” required when the processing precision of the lesion detecting processing (bleeding) is “medium” is displayed below the icon 104 of
Furthermore, like the modification 1-2, another modification of the third embodiment may be made by adding a trace calculation unit 61.
Subsequently, an image display apparatus according to the fourth embodiment of the present invention will be explained. The image display apparatus according to the fourth embodiment is based on the image display apparatus according to the third embodiment but is configured to allow selection of a portion in the subject 10 which is to be subjected to processing of in-vivo image data.
A processing selection screen 400 as illustrated in
When the user selects any one of the radio buttons 402 in the processing selection screen 400, the processing setting unit 56 sets the in-vivo image data in the range corresponding to the selected organ as the target of the various kinds of image processing. More specifically, for example, it is assumed that in-vivo image data obtained within one hour from when the capsule endoscope 2 was introduced into the subject 10 are associated with the “esophagus”, in-vivo image data obtained 30 minutes to two and a half hours after the introduction are associated with the “stomach”, in-vivo image data obtained two hours to six and a half hours after the introduction are associated with the “small intestine”, and in-vivo image data obtained six hours or more after the introduction are associated with the “large intestine”.
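A minimal sketch of this association follows, using the example elapsed-time ranges given above (the boundaries overlap as described); the dictionary layout and function are illustrative assumptions.

```python
# (start, end) of elapsed hours since introduction of the capsule endoscope 2
ORGAN_RANGES_HOURS = {
    "esophagus": (0.0, 1.0),
    "stomach": (0.5, 2.5),
    "small intestine": (2.0, 6.5),
    "large intestine": (6.0, float("inf")),
}

def images_for_organ(image_times_hours, organ):
    """Select indices of in-vivo images whose elapsed capture time falls
    within the range associated with the selected organ."""
    start, end = ORGAN_RANGES_HOURS[organ]
    return [i for i, t in enumerate(image_times_hours) if start <= t < end]

# e.g. images captured 0.2h, 1.8h, 3.0h, and 7.5h after introduction:
print(images_for_organ([0.2, 1.8, 3.0, 7.5], "small intestine"))  # -> [2]
```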
The processing time estimating unit 57 calculates the processing times of various kinds of image processing on the basis of the number of in-vivo images set as the target of processing. The display control unit 59 controls the display unit 60 so as to display the calculated processing times below the respective icons 101 to 104. It should be noted that
According to the fourth embodiment, the user can look up the time of each processing displayed on the screen to select a desired processing content performed on a desired organ and a precision thereof. Therefore, the interpretation of the in-vivo images having been subjected to the desired processing can be started within a desired time.
It should be noted that the setting of the range of the in-vivo image data corresponding to the selected organ is not limited to the above explanation, and, for example, the range may be set on the basis of a mark given during examination with the capsule endoscope 2. More specifically, while the capsule endoscope 2 moves within the subject 10, the in-vivo image is displayed on the display unit 36 of the receiving device 3 as illustrated in
Modification 4
Subsequently, a modification of the image display apparatus according to the fourth embodiment will be explained with reference to
A processing selection screen 410 as illustrated in
Like the modification 1-2, another modification of the fourth embodiment may be made by adding a trace calculation unit 61.
As described above, according to the first to fourth embodiments and modifications thereof, the content of the image processing and the position estimation level are set for the in-vivo image data taken by the capsule endoscope, on the basis of the information received by the input receiving unit, and the time required to perform the processing is predicted and displayed. Therefore, the radiographic in-vivo images having been subjected to necessary processing can be efficiently interpreted.
The embodiments explained above are merely examples for carrying out the present invention. The present invention is not limited thereto, and it is within the scope of the present invention to make various kinds of modifications according to specifications and the like. Further, it is obvious from the above description that various kinds of other embodiments can be made within the scope of the present invention.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
This application is a continuation of PCT international application Ser. No. PCT/JP2011/064260, filed on Jun. 22, 2011, which designates the United States and is incorporated herein by reference, and which claims the benefit of priority from Japanese Patent Application No. 2010-214413, filed on Sep. 24, 2010, incorporated herein by reference.
Related U.S. application data: parent application PCT/JP2011/064260, filed Jun. 22, 2011; child U.S. application Ser. No. 13/429,499.