The disclosure relates to an endoscope system including an endoscope device that is configured to be introduced into a subject to capture images in the subject.
In the related art, in medical fields, endoscope systems are used for observation inside a subject. In such an endoscope system, a flexible insertion unit having an elongated shape is inserted into a subject such as a patient, illumination light is emitted from the distal end of the insertion unit, and images of the inside of the subject are captured by receiving the reflected illumination light with an image sensor at the distal end of the insertion unit. After predetermined image processing is performed by a processing device connected to the proximal end side of the insertion unit through a cable, the image captured in this manner is displayed on a display of the endoscope system.
As the image sensor, for example, a complementary metal oxide semiconductor (CMOS) image sensor is used. The CMOS image sensor generates an image signal by a rolling shutter method in which reading is performed with the timing shifted for each horizontal line.
In the endoscope system, in some cases, a moving subject such as a vocal cord is observed using the rolling shutter method while intermittent illumination, such as illumination by pulsed illumination light, is performed. As an endoscope system using such intermittent illumination, there is disclosed a technique in which a microphone attached to a patient collects sound from the vocal cord, and pulsed illumination light (hereinafter, referred to as “pulsed light”) is emitted in synchronization with the vibrational frequency of the vocal cord detected from the collected sound (refer to, for example, JP 2009-219611 A).
In some embodiments, an endoscope system includes: a light source configured to generate and emit pulsed light; an endoscope device having an image sensor configured to capture images of an inside of a subject in accordance with the timing of generation of the pulsed light by the light source and to output an image signal; a processing device configured to control the light source and the endoscope device and to process the image signal; a sound collection unit having a first microphone and a second microphone to collect sound and having a wired connection to the processing device; a holding member for fixedly holding the first microphone and the second microphone in a certain positional relationship at a location separated from the subject; and a positional relationship acquisition unit configured to acquire values indicating positional relationships between the first microphone, the second microphone, and the subject. The processing device includes: a vibrational frequency detection unit configured to extract a vibrational frequency of first sound emitted by the subject from the sound collected by the first microphone and the second microphone, based on the values indicating the positional relationships acquired by the positional relationship acquisition unit; and a light source controller configured to control the light source to generate the pulsed light in accordance with the vibrational frequency of the first sound extracted by the vibrational frequency detection unit.
The above and other features, advantages and technical and industrial significance of this invention will be better understood by reading the following detailed description of presently preferred embodiments of the invention, when considered in connection with the accompanying drawings.
Exemplary embodiments of the present invention will be described below. In the embodiments, as an example of a system including a medical device according to the present invention, reference will be made to a medical endoscope system for capturing and displaying images of an inside of a subject such as a patient. The present invention is not limited to the embodiments. The same reference signs are used to designate the same elements throughout the drawings.
An endoscope system 1 illustrated in
The endoscope 2 includes an elongated insertion unit 21, an operating unit 22, and a universal cord 23.
A light guide 27 serving as an illumination fiber, an electrical cable 26 for transmission of an image signal and a driving signal, and the like are inserted through the insertion unit 21. The insertion unit 21 includes: an optical system 24 for condensing light at a distal end portion 21a; an image sensor 25 that is provided at the imaging position of the optical system 24 to receive the light condensed by the optical system 24, photoelectrically convert the light into an electric signal, and perform predetermined signal processing; the distal end of the light guide 27, which is configured by using glass fiber or the like to constitute a light guide path for the light emitted by the light source device 5; and an illumination lens 27a provided at the distal end of the light guide 27.
The optical system 24 is configured with one or a plurality of lenses arranged on a light receiving surface side of a light-receiving unit 25a described later and has an optical zoom function for changing the angle of view and a focus function for changing the focus.
The image sensor 25 captures images of an inside of the subject in accordance with the timing of generation of pulsed light by the light source device 5 and outputs the captured image signal to the processing device 4 through the electrical cable 26. The image sensor 25 includes the light-receiving unit 25a and a reading unit 25b.
In the light-receiving unit 25a, a plurality of pixels that receive light from the subject illuminated with the pulsed light by the light source device 5 and photoelectrically convert the received light to generate an image signal are arranged in a matrix shape on the light receiving surface. The light-receiving unit 25a generates an image signal representing the inside of the subject from the optical image formed on the light receiving surface.
The reading unit 25b performs exposure of the plurality of pixels in the light-receiving unit 25a and reading of image signals from those pixels. The light-receiving unit 25a and the reading unit 25b are configured by, for example, a CMOS image sensor and can perform exposure and reading for each horizontal line. On the basis of a driving signal transmitted from the processing device 4, the reading unit 25b generates an image signal by the rolling shutter method, in which an imaging operation of performing exposure and reading starts from the first horizontal line, and charge resetting, exposure, and reading are performed with the timing shifted for each horizontal line. The reading unit 25b outputs the image signal read out from the plurality of pixels of the light-receiving unit 25a to the processing device 4 through the electrical cable 26 and a connector 23a.
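The per-line timing of such a rolling shutter readout can be sketched as follows — a minimal illustration in which each horizontal line's exposure start is shifted by one line period; the function and parameter names are assumptions, not part of this disclosure:

```python
def line_read_times(num_lines, line_period_us, exposure_us):
    """Return (exposure_start, read_time) in microseconds for each
    horizontal line under a rolling shutter: charge resetting, exposure,
    and reading are shifted by one line period per line."""
    return [(i * line_period_us, i * line_period_us + exposure_us)
            for i in range(num_lines)]
```

Because each line is exposed over a different interval, a pulsed light source must be synchronized with this timing to avoid bright and dark bands across the frame.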
The operating unit 22 is connected to the proximal end side of the insertion unit 21 and is provided with a switch 22a that receives input of various operation signals.
The universal cord 23 extends in a direction different from the direction in which the insertion unit 21 extends from the operating unit 22 and incorporates various cables connected to the processing device 4 and the light source device 5 through the connectors 23a and 23b. The universal cord 23 incorporates at least a light guide 27 and a plurality of electrical cables 26.
The microphone 3 is connected to the processing device 4 by wire and collects sound. The distal end of a cord 31 is connected to the microphone 3, and the proximal end of the cord 31 is detachably connected to a sound input terminal 33 of the processing device 4. The sound signal collected by the microphone 3 is output to a vibrational frequency detection unit 41 described later through the cord 31 connected to the processing device 4. The microphone 3 is fixedly held at a predetermined position by a holding member 32.
The holding member 32 is, for example, a fixing member 32b that fixes the microphone 3 in the vicinity of the light of an arm light 32a (refer to
The processing device 4 includes a vibrational frequency detection unit 41, a control unit 42, an image processing unit 43, a display controller 44, an input unit 45, and a storage unit 46.
The vibrational frequency detection unit 41 detects the vibrational frequency of the sound that is collected by the microphone 3 and input to the processing device 4 through the cord 31 and the sound input terminal 33. This sound is emitted from a vocal cord of the subject H. The vibrational frequency detection unit 41 outputs the detected vibrational frequency to the control unit 42.
The control unit 42 is realized by using a CPU or the like. The control unit 42 controls the processing operation of each unit of the processing device 4 by transferring instruction information and data to each component of the processing device 4. The control unit 42 controls operations of the image sensor 25 by connecting to the image sensor 25 through the electrical cable 26 and outputting a driving signal. The control unit 42 connects to the light source device 5 through a cable and includes a light source controller 42a that controls operations of the light source device 5. The light source controller 42a controls the generation timing and the generation period of the pulsed light by a light source 53 in synchronization with the vibrational frequency of the sound detected by the vibrational frequency detection unit 41. The generation timing and the generation period of the pulsed light set by the light source controller 42a are also output to the image processing unit 43.
The image processing unit 43 performs predetermined signal processing on the image signal read out by the reading unit 25b of the image sensor 25. For example, the image processing unit 43 performs optical black subtraction processing, white balance (WB) adjustment processing, image signal synchronization processing (in the case where the image sensor has a Bayer arrangement), color matrix calculation processing, gamma correction processing, color reproduction processing, edge emphasis processing, and the like on the image signal.
The display controller 44 generates a display image signal to be displayed on the display device 6 from the image signal processed by the image processing unit 43. The display controller 44 converts the format of the display image signal to correspond to the display device 6 and outputs the result to the display device 6.
The input unit 45 is realized by using an operation device such as a mouse, a keyboard, or a touch panel and receives input of various types of instruction information of the endoscope system 1. Specifically, the input unit 45 receives various types of instruction information such as subject information (for example, ID, date of birth, name, and the like), identification information of the endoscope 2 (for example, ID and inspection correspondence item), details of inspection, and the like.
The storage unit 46 is realized by using a volatile memory or a nonvolatile memory and stores various programs for operating the processing device 4 and the light source device 5. The storage unit 46 temporarily stores the information being processed by the processing device 4. The storage unit 46 stores the image signal output from the image sensor 25 in units of frames. The storage unit 46 stores the image signal processed by the image processing unit 43. The storage unit 46 may be configured by using a memory card or the like that is mounted from the outside of the processing device 4.
The light source device 5 includes a pulse generator 51, a light source driver 52, and a light source 53.
On the basis of the value (pulse width or duty ratio) calculated by the light source controller 42a, the pulse generator 51 generates a pulse for driving the light source 53 using the vibrational frequency of the sound detected by the vibrational frequency detection unit 41, generates a PWM signal containing the pulse for controlling the light source, and outputs the signal to the light source driver 52.
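The arithmetic behind such a pulse can be sketched as follows — a minimal illustration assuming one light pulse per vocal-cord vibration cycle; the function and parameter names are assumptions, not from this disclosure:

```python
def pwm_parameters(vibration_hz, duty_ratio):
    """Compute the pulse period and on-time for a light source driven in
    sync with a detected vocal-cord vibrational frequency."""
    if vibration_hz <= 0 or not 0.0 < duty_ratio <= 1.0:
        raise ValueError("need positive frequency and duty ratio in (0, 1]")
    period_s = 1.0 / vibration_hz          # one light pulse per vibration cycle
    pulse_width_s = period_s * duty_ratio  # on-time within each cycle
    return period_s, pulse_width_s
```

For example, a 200 Hz vocal-cord vibration with a 10% duty ratio would give a 5 ms pulse period with a 0.5 ms on-time.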
The light source driver 52 supplies predetermined power to the light source 53 on the basis of the PWM signal generated by the pulse generator 51.
The light source 53 is configured by using a light source, such as a white LED, that generates pulsed white light (pulsed light) as illumination light to be supplied to the endoscope 2, and an optical system such as a condenser lens. The light (pulsed light) emitted from the light source 53 illuminates the subject from the distal end portion 21a of the insertion unit 21 through the connector 23b and the light guide 27 of the universal cord 23.
As described above, in the first embodiment, since the microphone 3 is fixedly held by the holding member 32 at a location separated from the subject H by a certain distance D or more, at which patient insulation becomes unnecessary, patient insulation dedicated to the microphone is not required for either the microphone or the processing device. Therefore, according to the first embodiment, even in a configuration where sound is collected by the microphone to generate pulsed light, it is possible to avoid a complicated configuration caused by patient insulation and insulation between circuits.
Although a single microphone 3 is provided in the first embodiment, a plurality of the microphones 3 may be provided. Moreover, although the imaging signal processing circuit 47a and the first insulation transmission unit 47b are provided in the processing device 4 in the first embodiment, these elements may be provided in the endoscope 2 (for example, in a portion of the operating unit 22 or of the connector of the universal cord 23 to be connected to the processing device 4). Alternatively, only the imaging signal processing circuit 47a may be provided in the endoscope 2.
Next, a second embodiment will be described. In the second embodiment, a plurality of microphones are provided to increase sound collection sensitivity, and by obtaining distances between a subject and the microphones, noise is canceled from sound signals collected by the microphones.
As illustrated in
As illustrated in
The operating unit 222 includes the infrared output unit 208 configured to output infrared rays toward the first microphone 3A and the second microphone 3B. The infrared output unit 208 outputs infrared rays under the control of the control unit 242 of the processing device.
As illustrated in
The distance calculation unit 247 calculates a first distance between the first microphone 3A and the subject H and a second distance between the second microphone 3B and the subject H as the positional relationships between the first microphone 3A, the second microphone 3B, and the subject H. As described above, the position of the operating unit 222 when the insertion unit 21 of the endoscope 202 is introduced into the mouth of the subject H approximates the position of the vocal cord of the subject H. Therefore, the distance calculation unit 247 calculates the distance D1 between the first microphone 3A and the operating unit 222 and the distance D2 between the second microphone 3B and the operating unit 222. The distance calculation unit 247 calculates the distance D1 on the basis of the difference between the infrared output time of the infrared output unit 208 provided in the operating unit 222 and the infrared detection time of the first infrared sensor 2071, together with the speed at which infrared rays travel in air. Similarly, the distance calculation unit 247 calculates the distance D2 from the difference between the infrared output time and the infrared detection time of the second infrared sensor 2072. The distance calculation unit 247 outputs the calculated distances D1 and D2 to the vibrational frequency detection unit 241.
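The time-of-flight arithmetic described here can be sketched as follows; the names are assumptions, not from this disclosure:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0  # infrared propagation speed in air, approximately

def time_of_flight_distance(emit_time_s, detect_time_s,
                            speed_m_per_s=SPEED_OF_LIGHT_M_PER_S):
    """Distance as (detection time - emission time) multiplied by the
    propagation speed, as described for the distance calculation unit 247."""
    dt = detect_time_s - emit_time_s
    if dt < 0:
        raise ValueError("detection cannot precede emission")
    return speed_m_per_s * dt
```

Note that over distances of around one meter the flight time is only a few nanoseconds, so a practical implementation would need very fine timing resolution or a modulation-phase measurement; the sketch shows only the arithmetic.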
Based on the positional relationships between the first microphone 3A, the second microphone 3B, and the subject H, namely, the distances D1 and D2 acquired by the distance calculation unit 247, the vibrational frequency detection unit 241 extracts the vibrational frequency of the first sound emitted by the subject H from the sound collected by the first microphone 3A and the second microphone 3B.
The intensity of sound collected by a microphone is inversely proportional to the square of the distance between the sound source and the microphone. Therefore, sound of a vibrational frequency Fn at which the ratio between the intensity I1(Fn) of the sound collected by the first microphone 3A and the intensity I2(Fn) collected by the second microphone 3B is equal to the ratio between the square of the distance D2 and the square of the distance D1 is the first sound emitted by the subject H. That is, sound of a vibrational frequency Fn that satisfies the relationship of the following formula (1) is the first sound emitted by the subject H. Sound of a vibrational frequency Fn that does not satisfy the relationship of formula (1) is noise emitted by a source other than the subject H.
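The body of formula (1) is not reproduced in this text. Under the inverse-square law for sound intensity, it can presumably be reconstructed as:

```latex
% Reconstruction (assumption): the subject's sound satisfies
\frac{I_1(F_n)}{I_2(F_n)} = \frac{D_2^{\,2}}{D_1^{\,2}} \qquad (1)
```

Equivalently, I1(Fn)·D1² = I2(Fn)·D2², since each microphone's received intensity falls off as the inverse square of its distance to the vocal cord.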
The vibrational frequency detection unit 241 obtains the intensity ratio between the sound collected by the first microphone 3A and the sound collected by the second microphone 3B for each vibrational frequency and extracts, among the obtained intensity ratios, the vibrational frequency whose intensity ratio corresponds to the ratio between the square of the distance D2 and the square of the distance D1 obtained by the distance calculation unit 247 as the vibrational frequency of the first sound emitted by the subject H. That is, the vibrational frequency detection unit 241 extracts the vibrational frequency Fn that satisfies the above-described formula (1) as the vibrational frequency of the first sound emitted by the subject H. The light source controller 42a controls the pulsed light generation processing of the light source 53 in accordance with the vibrational frequency of the first sound extracted in this manner. The vibrational frequency detection unit 241 may sum the sound of the two microphones, may sum the sound of the two microphones with the gain of the lower-intensity sound increased, or may use only the higher-intensity sound.
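The extraction step can be sketched as follows — a minimal illustration under the inverse-square relationship, where per-frequency intensities from the two microphones are compared against the distance-derived ratio; the function names and the tolerance are assumptions, not from this disclosure:

```python
def extract_subject_frequencies(i1, i2, d1, d2, tol=0.1):
    """Given per-frequency intensities i1[f], i2[f] from the two microphones
    and the distances d1, d2, keep the frequencies whose intensity ratio
    matches the inverse-square ratio D2^2 / D1^2; other frequencies are
    treated as noise from sources other than the subject."""
    target = (d2 ** 2) / (d1 ** 2)  # expected I1(Fn) / I2(Fn) for the subject
    kept = []
    for f in sorted(set(i1) & set(i2)):
        if i2[f] == 0:
            continue
        ratio = i1[f] / i2[f]
        if abs(ratio - target) / target <= tol:
            kept.append(f)
    return kept
```

A tolerance is needed in practice because measured intensities are noisy and the microphones are not perfectly matched.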
As described above, in the second embodiment, a plurality of microphones are provided to increase the sound collection sensitivity. By obtaining the distances between the subject H and the microphones, noise is canceled from the sound signals collected by the microphones, and only the vibrational frequency of the first sound emitted by the subject H is extracted. It is therefore possible to match the pulsed light generation processing to the vibrational frequency of the first sound with high accuracy.
Modified Example of Second Embodiment
Next, Modified Example of the second embodiment will be described. In Modified Example of the second embodiment, the first distance and the second distance are calculated by performing image processing.
As illustrated in
The distance calculation unit 247A calculates the distance D1 and the distance D2 by using a triangulation method or the like on the basis of the position of the marker 208A included in the image signal (for example, the image G1 illustrated in
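One simple stand-in for the "triangulation method or the like" mentioned here is a monocular pinhole-model estimate from the apparent size of the marker 208A, whose physical size is known; the function and parameter names are assumptions, not from this disclosure:

```python
def marker_distance_m(focal_length_px, marker_size_m, marker_size_px):
    """Estimate the camera-to-marker distance from the marker's apparent
    pixel size: distance = focal_length * real_size / apparent_size."""
    if marker_size_px <= 0:
        raise ValueError("marker must appear in the image")
    return focal_length_px * marker_size_m / marker_size_px
```

The farther the marker, the smaller it appears, so the apparent size alone fixes the distance once the focal length and real marker size are calibrated.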
As in Modified Example of the second embodiment, the distance between the subject and each microphone may be obtained by performing image processing.
Next, a third embodiment will be described. In the third embodiment, values that can be in correspondence with the first distance and the second distance are acquired, and noise is canceled from a sound signal collected by a microphone on the basis of the acquired values.
As illustrated in
The high-frequency sound source 348 emits second sound in a high frequency band outside the human audible band.
As described above, the intensity of sound collected by a microphone is inversely proportional to the square of the distance between the sound source and the microphone. In the third embodiment, the intensity of the second sound collected by the first microphone 3A is inversely proportional to the square of the distance D1 between the high-frequency sound output unit 308 and the first microphone 3A. Similarly, the intensity of the second sound collected by the second microphone 3B is inversely proportional to the square of the distance D2 between the high-frequency sound output unit 308 and the second microphone 3B. Therefore, the intensity of the second sound collected by the first microphone 3A is a value that can be placed in correspondence with the square of the distance D1, and the intensity of the second sound collected by the second microphone 3B is a value that can be placed in correspondence with the square of the distance D2. As described in the second embodiment, sound of the vibrational frequency Fn at which the ratio between the intensity I1(Fn) collected by the first microphone 3A and the intensity I2(Fn) collected by the second microphone 3B is equal to the ratio between the square of the distance D2 and the square of the distance D1 is the first sound emitted by the subject H.
Therefore, sound of the vibrational frequency Fn at which the ratio between the intensity I1(Fn) collected by the first microphone 3A and the intensity I2(Fn) collected by the second microphone 3B is equal to the ratio between the intensity I1(Fi) of the second sound, whose center vibrational frequency is Fi, collected by the first microphone 3A and the intensity I2(Fi) of the second sound collected by the second microphone 3B is the first sound emitted by the subject H. That is, sound of the vibrational frequency Fn that satisfies the relationship of the following formula (2) is the first sound emitted by the subject H. Sound of the vibrational frequency Fn that does not satisfy the relationship of formula (2) is noise emitted by a source other than the subject H.
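The body of formula (2) likewise does not survive in this text. Following the description, it is presumably:

```latex
% Reconstruction (assumption): compare each audible-band intensity ratio
% against the reference ratio measured on the second sound of center
% vibrational frequency F_i
\frac{I_1(F_n)}{I_2(F_n)} = \frac{I_1(F_i)}{I_2(F_i)} \qquad (2)
```

The right-hand side plays the role of D2²/D1² from formula (1), without the distances ever being computed explicitly.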
The vibrational frequency detection unit 341 includes a positional relationship acquisition unit 341a that acquires values indicating the positional relationships between the first microphone 3A, the second microphone 3B, and the subject H based on the intensity of the second sound collected by the first microphone 3A and the intensity of the second sound collected by the second microphone 3B. The positional relationship acquisition unit 341a acquires the reference intensity ratio that is the ratio between the intensity of the second sound collected by the first microphone 3A and the intensity of the second sound collected by the second microphone 3B as a value indicating the positional relationship. The vibrational frequency detection unit 341 obtains the intensity ratio between the sound collected by the first microphone 3A and the sound collected by the second microphone 3B for each vibrational frequency and extracts, among the obtained intensity ratios, the vibrational frequency in the human audible band of which the intensity ratio is substantially equal to the reference intensity ratio acquired by the positional relationship acquisition unit 341a as the vibrational frequency of the first sound emitted by the subject H. That is, the vibrational frequency detection unit 341 extracts the vibrational frequency Fn that satisfies the above-described formula (2) as the vibrational frequency of the first sound emitted by the subject H.
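This third-embodiment variant can be sketched as follows — a minimal illustration in which each audible-band frequency's intensity ratio is compared against the reference ratio measured on the high-frequency second sound; the names, tolerance, and audible-band limits are assumptions, not from this disclosure:

```python
def extract_with_reference_ratio(i1, i2, ref_ratio, tol=0.1,
                                 audible=(20.0, 20000.0)):
    """Keep the audible-band frequencies whose intensity ratio between the
    two microphones is approximately equal to the reference intensity ratio
    measured on the second sound (outside the audible band)."""
    lo, hi = audible
    kept = []
    for f in sorted(set(i1) & set(i2)):
        if not lo <= f <= hi or i2[f] == 0:
            continue
        if abs(i1[f] / i2[f] - ref_ratio) / ref_ratio <= tol:
            kept.append(f)
    return kept
```

Restricting the output to the audible band excludes the second sound itself, which by construction matches the reference ratio exactly.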
In the example of
Even when the distances D1 and D2 are not acquired directly, as in the third embodiment, noise can be canceled from the sound signals collected by the microphones by acquiring values that can be placed in correspondence with the first distance and the second distance, and only the vibrational frequency of the first sound emitted by the subject H can be extracted.
Modified Example of Third Embodiment
In Modified Example of the third embodiment, an example in which the third embodiment and Modified Example of the second embodiment are combined will be described.
Similarly to the vibrational frequency detection unit 241, the vibrational frequency detection unit 441 extracts the vibrational frequency of the first sound emitted by the subject H from the sound collected by the first microphone 3A and the sound collected by the second microphone 3B by using the distances D1 and D2 calculated by the distance calculation unit 247A. In addition, the vibrational frequency detection unit 441 extracts the vibrational frequency of the first sound emitted by the subject H by the same method as that of the vibrational frequency detection unit 341. In the case where the vibrational frequencies extracted by the two methods are equal to each other, the vibrational frequency detection unit 441 outputs that vibrational frequency to the light source controller 42a as the vibrational frequency of the first sound emitted by the subject H.
As in Modified Example of the third embodiment, it is also possible to improve the detection accuracy of the vibrational frequency of the first sound emitted by the subject H by combining different extraction methods.
In the above-described first to third embodiments, the light source device 5 is provided separately from the processing device 4. However, the light source device 5 and the processing device 4 may be integrated.
In the above-described first to third embodiments, the device connected to the processing device 4 is not limited to the endoscope having the image sensor 25 at the distal end of the insertion unit 21. For example, the device may be a camera head provided with an image sensor that is mounted on the eyepiece portion of an optical endoscope such as an optical viewing tube or a fiberscope to capture an optical image formed by the optical endoscope.
An execution program for each process executed by the elements of the processing devices 4, 204, 204A, 304, and 404 according to the present embodiments may be provided as a file in an installable or executable format recorded on a computer-readable recording medium such as a CD-ROM, a flexible disk, a CD-R, or a DVD. Alternatively, the program may be stored on a computer connected to a network such as the Internet and provided by downloading via the network. The program may also be provided or distributed via a network such as the Internet.
According to some embodiments, even in a configuration where sound is collected by a sound collection unit to generate pulsed light, since the sound collection unit is fixedly held at a location separated from a subject, patient insulation dedicated to the sound collection unit is not required for either the sound collection unit or a processing device. It is therefore possible to avoid a complicated configuration caused by the patient insulation and insulation between circuits.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
Number | Date | Country | Kind |
---|---|---|---
2015-083464 | Apr 2015 | JP | national |
This application is a continuation of PCT international application Ser. No. PCT/JP2016/059739, filed on Mar. 25, 2016, which designates the United States, incorporated herein by reference, and which claims the benefit of priority from Japanese Patent Application No. 2015-083464, filed on Apr. 15, 2015, incorporated herein by reference.
Number | Date | Country | |
---|---|---|---
Parent | PCT/JP2016/059739 | Mar 2016 | US |
Child | 15625265 | US |