The invention relates to a method and an examination device with which an organ under examination is illuminated.
When examining different organs, such as the eye, ear, nose and mouth, a digital examination device may be used, which forms an electric image that can be transferred to be displayed on a computer screen, for example. There may be a separate examination device for each organ, but the examination device may also comprise a common digital camera unit for examining different organs, and a plurality of optical components which can be attached to and detached from the camera unit and which act as objectives for the camera unit. The different optical components are in this case intended for forming images of different organs, which makes the examination effective.
However, the use of optical components attachable to and detachable from the camera unit is associated with problems. Although the image-forming optics can be arranged according to the object under examination, the illumination of the object under examination may be inadequate, since different objects under examination are illuminated by the same sources in the same way. Usually the illumination in digital systems must be implemented according to the exposure capability of the digital cell acting as a detector. As a consequence, optical radiation that is not adapted to the object and is directed at the object under examination seldom brings out the desired properties of the object properly, nor does it illuminate the region around the object sufficiently. Thus, the illumination of the object under examination is not optimised and may in some cases be quite insufficient in terms of the intensity and band of the optical radiation.
It is an object of the invention to provide an improved method and a device implementing the method. This is achieved by a method for illuminating an organ, wherein a camera unit is used for forming an electric image of the organ. The method comprises employing at least one optical component, which is connectable to the camera unit and comprises at least one optical radiation source and at least one optical radiation control structure; directing optical radiation with the at least one optical radiation source to the at least one optical radiation control structure, which is located non-axially to the optical axis of the optical component; and directing optical radiation with each optical radiation control structure at the organ under examination in a direction diverging from the optical axis of the optical component.
The invention also relates to a device for forming an image of an organ, the device comprising a camera unit for forming an electric image of the organ. The device comprises a group of optical components, the group comprising at least one optical component, each optical component being connectable to the camera unit; each optical component comprises at least one optical radiation source and at least one optical radiation control structure; the at least one optical radiation source is arranged to direct optical radiation at the at least one optical radiation control structure located non-axially to the optical axis of the optical component; and each optical radiation control structure is arranged to direct optical radiation from the optical radiation source at the organ in a direction diverging from the optical axis of the optical component.
Preferred embodiments of the invention are disclosed in the dependent claims.
The method and device of the invention provide a plurality of advantages. Radiation from an optical radiation source in an optical component is emitted to the object under examination in a direction diverging from the optical axis of the optical component in order to form a good image of the object under examination. Since each optical component is intended for examining and illuminating a specific organ, the object under examination can be illuminated as desired.
The invention is now described in closer detail in connection with the preferred embodiments and with reference to the accompanying drawings.
According to the object under examination, the examination device may be connected with one or more optical components with suitable imaging optics. The optical components may communicate with each other and with the rest of the equipment. By utilizing this communication, the optical radiation sources both in the lenses themselves and in the frame of the device may be used in a controlled manner for every object of which an image is formed, such that radiation from all or some of the available optical radiation sources can be directed at the object under examination and controlled in a desired manner according to the object of which an image is formed. In this application, optical radiation refers to a wavelength band of approximately 100 nm to 500 μm.
For the most part, the camera unit of the examination device may be similar to the solutions disclosed in Finnish Patents FI 107120, FI 200212233 and FI 20075499, wherefore the present application does not disclose features of the camera unit known per se in greater detail but expressly concentrates on the features of the disclosed solution that differ from the above documents and from the prior art.
Let us first view the examination device generally by means of the accompanying drawings.
In addition to the camera unit 100, the examination device comprises at least one optical component 110 to 114, which is connectable to the camera unit 100. Each optical component 110 to 114 is intended, either alone or together with at least one other optical component 110 to 114, for forming an image of a predetermined organ. The at least one optical component 110 to 114 comprises at least one lens or mirror, which may, together with the optics unit 102, form an image of the organ, such as the eye, on the detector 104. An optical component suitable for the object under examination may be attached to, added to or replaced in the camera unit 100. When attached to the camera unit 100, each of these optical components 110 to 114 may communicate with the camera unit 100 and/or with one another by using a data structure 116 to 120. Furthermore, it is possible that each optical component 110 to 114 communicates with devices in the surroundings. Each optical component 110 to 114, alone or together with one or more other optical components 110 to 114, may control the production, processing and storing of the image.
The data structure 116 to 120 of each optical component 110 to 114 may contain information on the optical component 110 to 114. The data structure 116 to 120 may be located in the frame of the optical component 110 to 114 or in at least one component used for forming an image, such as a lens. The optical component 110 to 114 may comprise, for instance, one or more image-forming elements, such as a lens or a mirror, and the optical component 110 to 114 may act as an additional objective of the camera unit 100.
The data structure 116 to 120 may be, for instance, an electromechanical structure, which attaches the optical component 110 to 114 mechanically to the camera unit 100 and establishes an electric connection between the camera unit 100 and the optical component 110 to 114. By connecting the data structure 116 to 120 against a counterpart 122 in the camera unit 100, the information associated with the optical component 110 to 114 may be transferred from the data structure 116 to 120 via the counterpart 122 to the controller 106 along a conductor, for instance. In this case, the data structure 116 to 120 and the counterpart 122 of the camera unit 100 may comprise one or more electric contact surfaces. The electrical connection may be specific to each optical component 110 to 114 or component type. Through the contact surfaces, the camera unit 100 may supply electricity to the data structure 116 to 120, and the response of the data structure 116 to 120 to the electric signal of the camera unit 100 contains information characteristic of each optical component 110 to 114.
There may be a different optical component 110 to 114 for forming images of different organs, in which case each optical component 110 to 114 has a different kind of connection. The connections may differ from one another in terms of, for instance, resistance, capacitance or inductance, which affects, for example, the current or voltage detected by the camera unit 100. Instead of such analogue coding, digital coding may also be used for distinguishing the optical components 110 to 114 from one another.
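To illustrate the analogue coding, the following Python sketch infers the coding resistance from a voltage-divider measurement and maps it to a component type. All component names, resistance values and the measurement arrangement are hypothetical examples, not part of the disclosed solution.

```python
# Sketch: identify an attached optical component from analogue coding.
# All component names and resistance values are hypothetical examples.

KNOWN_COMPONENTS = {
    1_000.0: "eye optics",    # 1 kohm coding resistor (assumed)
    4_700.0: "ear optics",    # 4.7 kohm (assumed)
    10_000.0: "skin optics",  # 10 kohm (assumed)
}

def coding_resistance(v_supply: float, v_measured: float, r_sense: float) -> float:
    """Infer the coding resistor value from a voltage-divider measurement.

    v_measured is taken over the known sense resistor r_sense; the coding
    resistor of the optical component is in series with it.
    """
    return r_sense * (v_supply - v_measured) / v_measured

def identify_component(v_supply: float, v_measured: float, r_sense: float,
                       tolerance: float = 0.05) -> str | None:
    r = coding_resistance(v_supply, v_measured, r_sense)
    for nominal, name in KNOWN_COMPONENTS.items():
        if abs(r - nominal) / nominal <= tolerance:
            return name
    return None  # unknown coding or no component attached

print(identify_component(v_supply=3.3, v_measured=1.65, r_sense=1_000.0))
# -> "eye optics" (measured coding resistance ~1 kohm)
```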
The data structure 116 to 120 may also be, for example, a memory circuit comprising information characteristic of each optical component 110 to 114. The data structure 116 to 120 may be, for instance, a USB memory and, as the counterpart 122, the camera unit 100 may have a connector for the USB memory. The information associated with the optical component 110 to 114 may be transferred from the memory circuit to the controller 106 of the camera unit 100, which may use this information to control the camera unit 100 and an optical radiation source 300, 304 of each optical component 110 to 114 and thus the optical radiation.
Reading of the information included in the data structure 116 to 120 does not necessarily require a galvanic contact between the camera unit 100 and the data structure 116 to 120. The information associated with the optical component 110 to 114 may in this case be read from the data structure 116 to 120 capacitively, inductively or optically, for instance. The data structure 116 to 120 may be a bar code, which is read by a bar code reader of the camera unit 100. The bar code may also be read from the formed image by means of an image processing program of the camera unit 100. The bar code may be detected at a wavelength different from the wavelength at which an image is formed of the organ. The bar code may be identified by means of infrared radiation, for example, when an image of the organ is formed with visible light. Thus, the bar code does not interfere with the forming of an image of the organ.
The data structure 116 to 120 may also be an optically detectable property of each optical component 110 to 114, such as an image aberration, which may be e.g. a spherical aberration, astigmatism, coma, curvature of the image field, distortion (pincushion and barrel distortion), chromatic aberration, and/or an aberration of higher order (terms above the third order in the series expansion of Snell's law). In addition, the data structure 116 to 120 may be a structural aberration in the lens. Structural aberrations of the lens may include, for instance, shape aberrations (bulges or pits), lines, impurities and bubbles. Each of these aberrations may affect the formed image in its own identifiable way. After the camera unit 100 has identified an aberration characteristic of a specific optical component 110 to 114, the optical component 110 to 114 may be identified, the identification data may be stored and/or the data may be utilized for controlling the optical radiation source 300, 304, directing the optical radiation at the measurement object and processing the image.
The memory circuit may also be an RFID (Radio Frequency Identification) circuit, which may also be called an RF tag. A passive RFID does not have a power source of its own but operates with energy received from the reader, in this case the camera unit 100. Energy may be supplied to the RFID via a conductor from, for example, a battery or, in a wireless solution, the energy of the identification data inquiry signal may be utilized.
The camera unit 100 may compare the image formed by a certain optical component 110 to 114 with a reference image, which may be stored in the memory of the camera unit 100. The comparison may concern, for example, aberration, contrast or brightness in different parts of the image. Thus, information on the optical properties of each optical component 110 to 114, such as the refractive indices of its lenses, may be obtained. In addition, image errors may be corrected by, for example, controlling the optical radiation sources 300, 304 and changing the optical radiation directed at the measurement object.
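A minimal sketch of such a comparison, assuming grayscale images available as NumPy arrays of equal shape; the mean-brightness criterion and the gain value are illustrative assumptions only.

```python
# Sketch: compare a formed image with a stored reference image and derive
# a correction for the optical radiation source power. Hypothetical names
# and values; assumes registered grayscale images of equal shape.
import numpy as np

def brightness_error(image: np.ndarray, reference: np.ndarray) -> float:
    """Relative brightness difference between formed image and reference."""
    return (image.mean() - reference.mean()) / reference.mean()

def adjusted_source_power(current_power: float, image: np.ndarray,
                          reference: np.ndarray, gain: float = 0.5) -> float:
    """Reduce source power if the image is too bright, increase if too dark."""
    err = brightness_error(image, reference)
    return current_power * (1.0 - gain * err)

reference = np.full((480, 640), 120.0)  # stored reference image
image = np.full((480, 640), 150.0)      # formed image, too bright
print(adjusted_source_power(1.0, image, reference))  # < 1.0 -> dim the source
```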
The data structure 116 to 120 of each optical component 110 to 114 may thus transmit information on the optical component 110 to 114 to the camera unit 100 when at least one optical component 110 to 114 is connected to the camera unit 100. By using the information associated with each connected optical component 110 to 114, the data structure 116 to 120 may thus directly or indirectly (e.g. by means of the controller 106 or the hospital's server) control the illumination of the measurement object and the formation of an image of the organ performed with the camera unit 100.
One or more optical components 110 to 114 may also comprise a detecting cell 138, to which the optical radiation may be directed either directly or by a mirror 140, for example. The mirror 140 may be semipermeable. The detecting cell 138 may also be so small that it only covers a part of the optical radiation passing through the optical component 110 to 114, whereby optical radiation also arrives at the detector 104. An optical component 110 to 114 may comprise more than one detecting cell, and the cells may have a wired connection to the controller 106 when the optical component 110 to 114 is connected to the camera unit 100. When connected to the camera unit 100, the detecting cell 138 may be activated to operate with energy from the camera unit 100 and it may be used for forming an image of the organ. The detecting cell 138 may operate at the same wavelength as the detector 104 or at a different one. A cell 138 operating at a different wavelength may be used for forming an image in infrared light, for instance, while the detector 104 may be used for forming an image in visible light. The mirror 140 may reflect infrared radiation very well and, at the same time, allow a considerable amount of visible light to pass through it. The image data of the detecting cell 138 and that of the detector 104 may be processed and/or combined and utilized alone or together. The detecting cell 138 may be a CCD or CMOS element, for instance.
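As a rough illustration of combining the image data of the detecting cell 138 and the detector 104, the sketch below blends a visible-light frame and an infrared frame with a fixed weight; the shapes, the assumption of registered images and the weight are illustrative only.

```python
# Sketch: combine image data from the detecting cell (e.g. infrared) and
# the detector (visible light) as a simple weighted overlay.
import numpy as np

def combine(visible: np.ndarray, infrared: np.ndarray,
            ir_weight: float = 0.3) -> np.ndarray:
    """Blend the two channels; assumes the images are already registered."""
    return (1.0 - ir_weight) * visible + ir_weight * infrared

visible = np.full((480, 640), 180.0)   # from the detector 104
infrared = np.full((480, 640), 60.0)   # from the detecting cell 138
print(combine(visible, infrared)[0, 0])  # -> 144.0
```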
Each optical component 110 to 114 may comprise at least one sensor 134, such as an acceleration sensor, a distance sensor, a temperature sensor and/or a physiological sensor. A distance sensor may measure the distance to the object under examination. A physiological sensor may measure blood sugar content and/or haemoglobin, for example.
When the camera unit 100 comprises a plurality of optical radiation sources, radiation can be emitted from different sources to the object under examination according to the distance between the camera unit 100 and the object under examination. The optical radiation sources may be used, for example, in such a manner that when the camera unit 100 is further than a predetermined distance away from the eye, a source of visible light illuminates the eye. On the other hand, when the camera unit is closer than the predetermined distance from the eye, an infrared source illuminates the eye. The illumination of the eye may thus be a function of distance.
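The distance-dependent choice of source may be expressed, for example, as in the following sketch; the threshold value is a hypothetical example.

```python
# Sketch: choose the light source as a function of the measured distance,
# as described above. The switch-over distance is an assumed example value.
DISTANCE_THRESHOLD_MM = 50.0

def select_source(distance_mm: float) -> str:
    """Far from the eye: visible light; close to the eye: infrared."""
    return "visible" if distance_mm > DISTANCE_THRESHOLD_MM else "infrared"

print(select_source(120.0))  # -> "visible"
print(select_source(20.0))   # -> "infrared"
```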
An acceleration sensor may be used, for instance, for implementing a function in which a first image is taken of the fundus of the eye at a first moment, when a certain optical radiation source emits light towards the measurement object, and at least one other image is taken by using a different optical radiation source at another moment, when the camera unit is in the same position with respect to the eye as when the first image was taken. Since a hand-held camera unit shakes in the hand, the position of the camera unit with respect to the eye changes all the time. By integrating the acceleration vector a of the camera unit into a velocity vector v, wherein v = ∫ a dt, and by converting the velocity vector v into a spatial position vector x, wherein x = v·t, the location of the camera unit may be determined at any moment. If the location is the same when the first and the second image are taken, the first and the second image may differ from one another in that they have been taken by using different wavelengths. Another difference may be that the shadows in the images may be cast in different directions, because the different optical radiation sources may be situated in different locations in the optical component. Different wavelengths and shadows in different directions may provide information on the measurement object.
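A numerical version of the above relations, assuming a stream of accelerometer samples: the sketch double-integrates the samples (a → v → x) and looks for moments at which the camera has returned to the position of the first exposure. The sample rate, the simulated data and the tolerance are illustrative assumptions.

```python
# Sketch: integrate acceleration samples to track the camera position and
# find candidate moments for the second exposure at the same position.
import numpy as np

def positions_from_accel(accel: np.ndarray, dt: float) -> np.ndarray:
    """Cumulative double integration of 3-axis samples: a -> v -> x."""
    velocity = np.cumsum(accel * dt, axis=0)  # v = integral of a dt
    return np.cumsum(velocity * dt, axis=0)   # x = integral of v dt

def same_position(x1: np.ndarray, x2: np.ndarray, tol_mm: float = 0.5) -> bool:
    return np.linalg.norm(x1 - x2) <= tol_mm

dt = 0.001  # 1 kHz accelerometer, assumed
accel = np.random.default_rng(0).normal(0.0, 5.0, size=(2000, 3))  # mm/s^2
x = positions_from_accel(accel, dt)
first_shot = x[500]  # position when the first image was taken
matches = [i for i in range(501, len(x)) if same_position(x[i], first_shot)]
print(f"candidate moments for the second exposure: {len(matches)}")
```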
Let us now view in more detail the optical component 110 for forming an image of the eye by means of the accompanying drawings.
The radiation source 300 may be switched on automatically when the optical component 110 is attached to the camera unit 100. In this case, the optical radiation source 126 inside the camera unit 100 may be switched on or off and the optical radiation source 124 may be switched off. The radiation source 300 may be a radiator of light in the visible region or of radiation in the infrared region, for example.
In an embodiment, the optical radiation from all optical radiation sources 300, 304 may be non-polarized or polarized in the same way. In addition or alternatively, the optical radiation from at least two optical radiation sources 300, 304 may differ from each other in terms of polarisation. Optical radiation polarized differently may be reflected from different objects in different ways and may thus contribute to separating and detecting different objects. If the polarisation change between the transmitted and the received optical radiation is determined at the reception, a desired property of the object may be determined on the basis of this change.
In an embodiment, the optical radiation sources 300, 304 may emit pulsed or continuous radiation. Pulsed radiation may be used as a flashlight, for instance. The optical radiation sources 300, 304 may also be set to operate independently in either pulsed or continuous mode.
In an embodiment, each optical radiation source 300, 304 may be set to a desired position or location during image-forming. Thus, the optical radiation may be directed at the control structure 302, 306 from a desired direction. The optical radiation sources 300, 304 may be moved by means of motors 308, 310, for instance. Likewise, each optical radiation control structure 302, 306 may be set to a desired position during image-forming. The control structures 302, 306 may also be moved in their entirety by means of motors 312, 314. The control structures 302, 306 may comprise row or matrix elements whose orientation, which affects the optical radiation, may be controlled independently.
The control structure 302, 306 may be a digital radiation processor comprising a group of mirrors in line or matrix form, for example. The position of each mirror can be controlled. The digital radiation processor may be, for example, a DLP (Digital Light Processor) 500, which is shown in the accompanying drawings.
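As a rough sketch of such a mirror matrix, the example below models mirror tilt angles in a NumPy array and diverts a block of mirrors off-axis. The class, its methods and the tilt values are hypothetical; a real DLP is driven through its own controller.

```python
# Sketch: represent a DLP-style mirror matrix and steer part of the
# radiation aside from the optical axis. All names and values are
# hypothetical illustrations, not a real DLP driver API.
import numpy as np

class MirrorMatrix:
    def __init__(self, rows: int, cols: int):
        # Tilt angle of each micromirror in degrees; 0 = on-axis.
        self.tilt = np.zeros((rows, cols))

    def steer_region(self, r0: int, r1: int, c0: int, c1: int,
                     angle_deg: float) -> None:
        """Tilt a rectangular block of mirrors to divert radiation off-axis."""
        self.tilt[r0:r1, c0:c1] = angle_deg

    def on_axis_fraction(self) -> float:
        """Fraction of mirrors still directing radiation along the axis."""
        return float(np.mean(self.tilt == 0.0))

dlp = MirrorMatrix(rows=768, cols=1024)
dlp.steer_region(0, 384, 0, 1024, angle_deg=12.0)  # divert the upper half
print(dlp.on_axis_fraction())  # -> 0.5
```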
The radiation source 300 inside the optical component 110 to 114 may be optimized to a great extent to emphasize a desired property of the object. As radiation from sources both in other optical components and in the frame of the device can be directed at the measurement object at the same time, the measurement object may be illuminated with the amount of radiation required for forming an image, while simultaneously emphasizing one or more desired properties.
In an embodiment, the optical component 110 to 114 directs a predetermined pattern at the measurement object. The predetermined pattern may be for instance a matrix, scale or grid. If a scale is directed at the measurement object, the sizes of the structures detected in the measurement object can be measured. For example, the size of a blood vessel, scar or tumour in the eye can be determined. The measurement can be performed by calculations on the basis of any predetermined pattern.
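A minimal sketch of such a calculation, assuming a projected scale of known line spacing is visible in the image; the spacing and pixel values are hypothetical.

```python
# Sketch: measure the size of a structure (e.g. a blood vessel) from the
# image by using a projected scale of known spacing. Values hypothetical.
def mm_per_pixel(scale_spacing_mm: float, scale_spacing_px: float) -> float:
    """Calibration factor from a projected scale with known line spacing."""
    return scale_spacing_mm / scale_spacing_px

def structure_size_mm(size_px: float, calibration: float) -> float:
    """Convert a measured pixel extent into millimetres."""
    return size_px * calibration

cal = mm_per_pixel(scale_spacing_mm=1.0, scale_spacing_px=80.0)
print(structure_size_mm(size_px=12.0, calibration=cal))  # -> 0.15 (mm)
```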
When a predetermined pattern is directed at the surface of the eye, it is possible to measure intraocular pressure. Intraocular pressure is usually 10 to 21 mmHg. However, the pressure will be higher if too much aqueous humour is produced in the surface cell layer of the ciliary body or if the humour drains too slowly through the trabecular meshwork in the anterior chamber angle into Schlemm's canal and further to the venous circulation. During the measurement, an air puff may be directed at the eye from a known distance and with a predetermined or pre-measured pressure. The lower the pressure in the eye, the more the air puff distorts the surface of the eye. The distortion produced by the air puff also causes the predetermined pattern reflected from the surface of the eye to change. The shape of the pattern may be detected by the detector 104, and the pattern may be processed and measured by the controller 106 or an external computer. Since the force applied to the surface of the eye may be determined on the basis of the measured or known variables, the measured change of the predetermined pattern may be used for determining the pressure that the eye must have had to produce that change.
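The pressure determination may be illustrated with a deliberately simplified applanation-style model based on the Imbert-Fick relation p = F/A, where the flattened area is obtained from the measured pattern change. This is a sketch under that assumption, not the calculation of the disclosed solution; all numbers are illustrative.

```python
# Sketch: estimate intraocular pressure from the flattened area measured
# via the distortion of the projected pattern, using the simplified
# Imbert-Fick relation p = F / A. All numbers are illustrative only.
import math

def intraocular_pressure_mmhg(force_n: float,
                              flattened_diameter_mm: float) -> float:
    """Pressure from applied force and flattened-area diameter."""
    area_m2 = math.pi * (flattened_diameter_mm / 2.0 / 1000.0) ** 2
    pressure_pa = force_n / area_m2  # Imbert-Fick: p = F / A
    return pressure_pa / 133.322     # Pa -> mmHg

# A larger flattened area for the same force implies a lower pressure.
print(round(intraocular_pressure_mmhg(0.016, 3.06), 1))  # ~16 mmHg
```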
In an embodiment, the formed image is coloured in a desired manner completely or partly. The colouring, as well as at least some of other procedures associated with the forming of an image, may be performed in the camera unit 100, in a separate computer 810, a docking station, a hospital's base station or a hospital's server. The colouring may be for example such that when an image is formed of the eye, the object is illuminated with orange light, when an image is formed of the ear, it is illuminated with red light, and when an image is formed of the skin, with blue light. In image processing, the image may also be edited into an orange, red or blue image.
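A minimal sketch of such colouring, assuming RGB image data as a NumPy array; the organ-to-colour mapping and the scaling factors are hypothetical examples.

```python
# Sketch: tint a formed image according to the organ, as described above.
# The organ-to-colour mapping and scale factors are assumed examples.
import numpy as np

ORGAN_TINT = {
    "eye": (1.0, 0.6, 0.2),   # orange
    "ear": (1.0, 0.2, 0.2),   # red
    "skin": (0.2, 0.4, 1.0),  # blue
}

def tint_image(rgb: np.ndarray, organ: str) -> np.ndarray:
    """Scale the RGB channels of the image by the organ's tint factors."""
    tint = np.array(ORGAN_TINT[organ])
    return np.clip(rgb * tint, 0, 255).astype(np.uint8)

image = np.full((480, 640, 3), 200, dtype=np.float64)
print(tint_image(image, "eye")[0, 0])  # -> [200 120  40]
```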
In an embodiment, the information associated with the optical component 110 to 114 may be used for determining information on the object of which an image is formed, such as the eye, nose, mouth, ear or skin, since each optical component 110 to 114, alone or together with one or more predetermined optical components 110 to 114, may be intended for forming an image of a predetermined organ. Thus, for example, the optical component for examining the eye allows the information "eye optics" or "image of an eye" to be automatically attached to the image. As the camera unit 100 identifies the object of which an image is formed on the basis of the information associated with one or more optical components 110 to 114, the camera unit 100 may, by image processing operations, automatically identify predetermined patterns in the object of which an image is formed and possibly mark them with colours, for instance. When the image is displayed, the marked-up sections, which may be, for instance, features of an illness, can be clearly distinguished.
Information received from one or more optical components may be used for monitoring how the diagnosis succeeds. If the patient has symptoms in the eye but the camera unit 100 was used for forming an image of the ear, it can be deduced that this was not the right course of action. It is also possible that the hospital's server has transmitted information on the patient and his/her ailment in DICOM (Digital Imaging and Communications in Medicine) format, for instance, to the camera unit 100. Thus, the camera unit 100 only forms an image of the organ about which the patient has complained, in this case the eye. If an optical component 110 to 114 other than the one suitable for forming an image of the ailing object (the eye) is attached to the camera unit 100, the camera unit 100 warns the user with a sound signal and/or a warning signal on the display of the camera unit 100.
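The warning logic may be sketched, for example, as follows; the component names and the mapping are hypothetical, and a real system would read the ailing organ from the DICOM data set.

```python
# Sketch: warn the user when the attached optical component does not match
# the organ named in the patient record. Names are hypothetical examples.
COMPONENT_TARGET = {
    "eye optics": "eye",
    "ear optics": "ear",
    "skin optics": "skin",
}

def check_component(attached: str, complaint_organ: str) -> bool:
    """Return True if imaging may proceed; otherwise emit a warning."""
    if COMPONENT_TARGET.get(attached) == complaint_organ:
        return True
    print(f"WARNING: {attached!r} attached, but the complaint concerns "
          f"the {complaint_organ}.")
    return False

check_component("ear optics", "eye")  # warning, returns False
check_component("eye optics", "eye")  # OK, returns True
```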
For example, a hospital's patient data system that collects images by using the information received from one or more optical components can produce both statistics and billing data.
In an embodiment, by using the information associated with one or more optical components 110 to 114 attached to the camera unit 100, the illumination of the environment may be controlled, and the illumination of the organ of which an image is formed may thus also be affected. For this purpose, the information on the optical component 110 to 114 is transferred, for instance, to the controller controlling the illumination of the examination room. The illumination of the examination room may be controlled in various ways: for example, the illumination may be increased or reduced, or the colour or shade of the illumination may be adjusted. When the camera unit 100 and the object of which an image is formed are close to the light source of the examination room, the light source may be dimmed, for example. Accordingly, if the camera unit 100 is far away from the light source, the light source may be adjusted to illuminate more intensely.
In an embodiment, the location of the optical component 110 to 114 fastened to the camera unit 100 may be determined by using, for example, one or more UWB (Ultra Wide Band) or WLAN (Wireless Local Area Network) transmitters. Each transmitter transmits an identification, and the location of each transmitter is known. With one transmitter, the position of the optical component 110 to 114 may be determined on the basis of the coverage of the transmitter. With the transmissions of two transmitters, the location may often be determined more precisely, although two alternative locations result. With the transmissions of three or more transmitters, the location of the optical component 110 to 114 may be determined by triangulation quite accurately, and more precisely than from the coverage of a single transmission. After the location of the optical component 110 to 114 in use has been determined, the illumination of the examination room may be controlled by transferring the information on the location of the optical component 110 to 114, for instance, to the controller controlling the illumination of the examination room. The illumination of the examination room may be controlled in the same way as in the previous example. For instance, if the optical component 110 to 114 is used in a place known to be next to the patient table, the illumination of the patient table and its surroundings may be increased (or reduced) automatically. Moreover, the information that the image was taken next to the patient table may be attached to the image taken by the camera unit 100.
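With three transmitters at known positions and measured distances, the location may be solved by trilateration, for example as in the following sketch; the coordinates and distances are hypothetical, and real UWB/WLAN ranging is considerably noisier.

```python
# Sketch: locate the optical component from three transmitters with known
# positions by trilateration from measured distances. Values hypothetical.
import numpy as np

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Solve the 2-D position from three (position, distance) pairs."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    # Subtracting the circle equations pairwise gives a linear system A x = b.
    A = 2.0 * np.array([p2 - p1, p3 - p1], dtype=float)
    b = np.array([
        d1**2 - d2**2 + p2 @ p2 - p1 @ p1,
        d1**2 - d3**2 + p3 @ p3 - p1 @ p1,
    ], dtype=float)
    return np.linalg.solve(A, b)

# Transmitters in the corners of the examination room (metres).
print(trilaterate((0, 0), (6, 0), (0, 4), d1=2.236, d2=4.123, d3=3.606))
# -> approximately [2. 1.]
```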
In an embodiment, the optical component 110 to 114 comprises an acceleration sensor, which may be used for determining the position of the camera unit 100. After the position of the camera unit 100 has been determined, the position information may be used for controlling the illumination of the patient room by transferring the information on the position of the camera unit 100, for instance, to the controller controlling the illumination of the examination room, as in the previous examples. Acceleration sensors may be used for determining the accelerations of the camera unit 100; by integrating the accelerations, the velocity of the camera unit may be determined, and by integrating the velocity, the location of the camera unit may be determined, provided that the camera unit was located in a predetermined place at the starting moment. Consequently, the location of the camera unit can be determined three-dimensionally by this measurement alone or together with the previous measurements. The lights of the examination room may thus be controlled three-dimensionally to be suitable for taking images.
Let us view a block diagram of the examination device by means of the accompanying drawings.
Although the invention is described above with reference to the example according to the accompanying drawings, it is obvious that the invention is not restricted thereto but may be varied in many ways within the scope of the attached claims.
Priority application: FI 20075738, filed October 2007 (national).
PCT filing: PCT/FI2008/050581, filed October 16, 2008 (WO); § 371(c) date March 18, 2010.