Described below are a method and a device for the stereoscopic depiction of image data, in particular a method and a device for the stereoscopic depiction of image data in minimally invasive surgery.
Endoscopic treatments and examinations in the field of medicine enable significantly gentler and less traumatic treatment in comparison to an open intervention on the patient. These treatment methods are therefore gaining increasing significance. During a minimally invasive intervention, optical and surgical instruments (endoscopes) are introduced into the body of a patient by an operator via one or more relatively small accesses on the body of the patient. The operator can then carry out an examination and treatment using the surgical instruments, while the procedure is monitored through the optical instruments. Simple endoscopes enable either a direct view through an eyepiece of the endoscope or an observation of the region to be operated on via a camera attached to the endoscope and an external monitor. Three-dimensional vision is not possible with such a simple endoscope. If the endoscope additionally has a second observation channel, which enables an observation of the object from a second direction, three-dimensional vision can be enabled by leading both channels outward to two eyepieces for the right and left eyes. Since the distance between the observation channels of a single endoscope is generally very small (typically at most 6 mm), such a stereoscopic endoscope also delivers only very restricted three-dimensional vision in the microscopic range. For a three-dimensional observation which corresponds to a human eye spacing of approximately 10 cm, it would therefore be necessary to provide a further access channel spaced correspondingly far apart. However, since a further opening on the body of the patient for an additional access channel entails further traumatization of the patient, an additional access channel is to be avoided if possible.
If a three-dimensional visualization of the treatment region by way of a single endoscope is to be enabled in minimally invasive surgery, either two observation beam paths therefore have to be led outward inside the cross section of the endoscope, or two cameras spaced apart from one another have to be arranged on the endoscope tip, as stated above. In both cases, because of the very limited cross section of the endoscope, only an extremely low three-dimensional resolution is possible, which greatly restricts the quality of the depiction.
Alternatively, it is also possible to survey the treatment region in the interior of the patient three-dimensionally by a digital system. Document DE 10 2006 017 003 A1 discloses, for example, an endoscope having optical depth data acquisition. For this purpose, modulated light is emitted into the treatment region and depth data of the treatment space are calculated based on the received light signal.
In this case, the direct three-dimensional view into the treatment region remains denied to the operator even after the available depth data in the interior of the treatment space have been ascertained. The operator has to plan and execute his or her treatment based on a model depicted on a two-dimensional display screen.
A demand therefore exists for an improved stereoscopic depiction of image data, in particular for a stereoscopic depiction of image data in minimally invasive surgery.
The method for the stereoscopic depiction of image data in minimally invasive surgery includes at least partial three-dimensional acquisition of a surface; preparation of a depth map of the at least partially three-dimensionally acquired surface; texturing of the prepared depth map; calculation of stereoscopic image data from the textured depth map; and visualization of the calculated stereoscopic image data.
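Purely as an illustration of how these steps fit together, the following minimal Python sketch runs through the same pipeline on a few synthetic surface points; all function names, the grid keying, and the placeholder texture value are assumptions made for illustration and are not part of the described method:

```python
import numpy as np

# Minimal sketch of the five method steps; all names are illustrative.

def prepare_depth_map(points: np.ndarray) -> dict:
    """Depth map as spatial points keyed to a 1 mm grid."""
    return {tuple(np.round(p).astype(int)): p for p in points}

def texture(depth_map: dict, gray: float = 0.5) -> list:
    """Attach a placeholder intensity to every spatial point."""
    return [(p, gray) for p in depth_map.values()]

def render_stereo(textured: list, eye_spacing_mm: float = 80.0):
    """Produce two point sets offset by the eye spacing."""
    shift = np.array([eye_spacing_mm / 2.0, 0.0, 0.0])
    left = [(p - shift, g) for p, g in textured]
    right = [(p + shift, g) for p, g in textured]
    return left, right

# Three surface points (mm) standing in for the acquired surface
surface = np.array([[10.0, 5.0, 80.0], [12.0, 5.0, 81.0], [11.0, 6.0, 79.5]])
left_view, right_view = render_stereo(texture(prepare_depth_map(surface)))
```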
Described below is a device for the stereoscopic depiction of image data in minimally invasive surgery having a sensor device, which is designed to at least partially three-dimensionally acquire a surface; a device for preparing a depth map, which is designed to prepare a depth map from the at least partially three-dimensionally acquired surface; a texturing device, which is designed to texture the prepared depth map; an image data generator, which is designed to calculate stereoscopic image data from the textured depth map; and a visualization device, which is designed to visualize the calculated stereoscopic image data.
The method firstly acquires, three-dimensionally and by way of a sensor, a region which is not directly accessible, and prepares a digital model in the form of a depth map from this three-dimensional acquisition. Stereoscopic image data, which are optimally adapted to the eye spacing of the user, can then be generated automatically in a simple manner for a user from this depth map.
By three-dimensional surveying of the observation region using a special sensor system, the inaccessible region, for example, in the body interior of a patient, can be acquired in this case by a sensor having a very small structural size. The data thus acquired can be conducted outward in a simple manner, without an endoscope having a particularly large cross section being required for this purpose.
Therefore, outstanding spatial acquisition of the treatment region is achieved, without an endoscope having an extraordinarily large cross section or further accesses to the operation region in the body interior of the patient being required for this purpose.
A further advantage is that such a sensor system can acquire the observation region in a very good three-dimensional resolution and with a correspondingly high number of pixels, since the sensor on the endoscope only requires a single camera. Therefore, with only little traumatization of the patient, the operation region to be monitored can be depicted in a very good image quality.
A further advantage is that a stereoscopic visualization of the region to be monitored, which is optimally adapted to the eye spacing of a user, can be generated from the three-dimensional data provided by the sensor system. Therefore, the visualization of the image data can be prepared for a user so that optimum three-dimensional acquisition is possible.
It is furthermore advantageous that the calculation of the stereoscopic image data is performed independently of the three-dimensional acquisition of the object surface by the sensor. A stereoscopic depiction of the treatment region, which deviates from the present position of the endoscope, can therefore also be provided to a user.
By way of suitable preparation of the depth map from the three-dimensionally acquired object data, a depiction of the treatment region, which comes very close to the real conditions, can therefore be provided to a user.
According to one embodiment, the calculated stereoscopic image data correspond to two viewing directions of two eyes of a user. By way of the preparation of the stereoscopic image data in accordance with the viewing directions of the eyes of the user, an optimum stereoscopic visualization of the treatment region can be enabled for the user.
In one embodiment, the depth map includes spatial points of the at least partially three-dimensionally acquired surface. Such a depth map enables very good further processing of the three-dimensionally acquired surface.
According to one embodiment, the three-dimensional acquisition of the surface is executed continuously, and the depth map is adapted based on the continuously three-dimensionally acquired surface. In this manner, it is possible to supplement and if necessary also correct the depth map continuously, so that a complete three-dimensional model of the region to be observed is successively constructed. Therefore, after some time, image information can also be provided about regions which initially could not be acquired because of shadows or similar effects.
In a further embodiment, the method includes providing further items of image information and combining the further items of image information with the acquired three-dimensional surface. By way of this combination of the three-dimensionally acquired surface with further image data, a particularly good and realistic visualization of the stereoscopic image data can be enabled.
In a special embodiment, the further items of image information are diagnostic image data, in particular data from computer tomography, magnetic resonance tomography, an x-ray picture, and/or sonography. Such diagnostic image data, which were prepared before or during the treatment and are related to the treatment region to be observed, provide particularly valuable items of information for the preparation and visualization of the treatment region. For example, these image data can be provided directly by the imaging diagnostic devices or from a storage device 21.
In a further embodiment, the image data are calculated for a predefined viewing direction in calculating the stereoscopic image data. This viewing direction can be different from the present position of the endoscope having the sensor for three-dimensional acquisition of the surface. A particularly flexible visualization of the treatment region can thus be achieved.
In a special embodiment, the method includes acquiring a user input, wherein the predefined viewing direction is adapted in accordance with the acquired user input. It is therefore possible for the user to adapt the viewing direction individually to his or her needs.
In a further embodiment of the device, the sensor device is arranged on or in an endoscope.
In a special embodiment, the endoscope furthermore includes at least one surgical instrument. It is therefore possible to execute a surgical intervention and visually monitor this intervention at the same time through a single access.
In one embodiment, the device includes a sensor device having a time-of-flight camera and/or a device for triangulation, in particular a device for active triangulation. A particularly good three-dimensional acquisition of the surface can be achieved by such sensor devices.
In a further embodiment, the sensor device includes a camera, which may be a color camera. In addition to the three-dimensional acquisition of the surface, digital image data, which are used for the visualization of the treatment region, can thus also be obtained simultaneously by the sensor device.
In a further embodiment, the image data generator calculates the image data for a predefined viewing direction.
In a special embodiment, the device includes an input device, which is designed to acquire an input of a user, wherein the image data generator calculates the stereoscopic image data for a viewing direction based on the input of the user.
In a further special embodiment, the input device acquires a movement of the user in this case, in particular a gesture performed by the user. This movement or gesture may be acquired by a camera.
These and other features and advantages will become more apparent and more readily appreciated from the following description of the exemplary embodiments with reference to the accompanying drawings of which:
Reference will now be made in detail to the preferred embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout.
The visual monitoring of this treatment is performed by the sensor device 10, a sensor which can three-dimensionally acquire the surface of the treatment space 2a and, in particular, the surface of the treated object 2c at the same time. The sensor device 10 can be, for example, a sensor which operates according to the principle of a time-of-flight camera (ToF camera). In this case, modulated light pulses are emitted from a light source, and the light which is scattered and reflected from the surface is analyzed by an appropriate sensor, for example a camera. A three-dimensional model can then be prepared based on the propagation time of the light.
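As a purely illustrative sketch of this time-of-flight principle: with amplitude-modulated light, the distance follows from the phase shift accumulated over the round trip, d = c·φ/(4π·f). The modulation frequency and phase values below are assumed examples, not parameters of the sensor device 10:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def tof_depth_from_phase(phase_shift: np.ndarray, f_mod: float) -> np.ndarray:
    """Depth from the phase shift of amplitude-modulated light:
    d = c * phi / (4 * pi * f), i.e. half the round-trip distance."""
    return C * phase_shift / (4.0 * np.pi * f_mod)

# Assumed 20 MHz modulation and a small measured phase map (radians)
phase = np.array([[0.50, 1.20], [2.00, 0.80]])
depth_m = tof_depth_from_phase(phase, f_mod=20e6)  # unambiguous up to 7.5 m
```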
Alternatively, for example, the sensor device 10 can also carry out a triangulation to ascertain the three-dimensional location of the surface in the treatment space 2a. Fundamentally, such a triangulation can be performed, for example, by passive triangulation using two separate cameras. However, since solving the correspondence problem is difficult in the case of passive triangulation with low-contrast surfaces (for example, the liver) and the 3D data density is very low, active triangulation is preferably performed. In this case, a known pattern is projected onto the surface in the treatment space 2a by the sensor device 10 and the surface is recorded at the same time by a camera. The projection of the known pattern on the surface may be performed using visible light. Additionally or alternatively, however, the operation region can also be illuminated using light outside the visible wavelength range, for example using infrared or ultraviolet light.
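The depth calculation behind such an active triangulation can be sketched under the assumption of a rectified projector–camera pair with known baseline and focal length; all numbers below are illustrative, not properties of the sensor device 10:

```python
import numpy as np

def triangulate_depth(disparity_px: np.ndarray,
                      focal_px: float,
                      baseline_mm: float) -> np.ndarray:
    """Active triangulation: the projected pattern fixes the correspondence,
    so depth follows from the rectified relation Z = f * b / d."""
    return focal_px * baseline_mm / disparity_px

# Assumed disparities (pixels) between emitted and observed pattern
disparity = np.array([[24.0, 25.5], [23.1, 26.0]])
depth_mm = triangulate_depth(disparity, focal_px=600.0, baseline_mm=8.0)
```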
By way of a comparison of the pattern recorded by the camera on the surface of the treatment space 2a with the known ideal pattern emitted from the projector, the surface of the treatment space 2a can thereupon be three-dimensionally acquired and analyzed.
In this case, the treatment space 2a and its surface can also be acquired by the camera, simultaneously with or as an alternative to the three-dimensional acquisition of the surface. In this manner, a corresponding color or black-and-white image of the treatment space 2a can be acquired. The light sources of the sensor device 10 can also be used at the same time for illuminating the treatment space 2a in order to obtain image data.
The data acquired by the sensor device 10 about the three-dimensional location of the surface in the treatment space 2a, and also the color or black-and-white image data acquired by the camera, are fed outward and are therefore available for further processing, in particular visualization.
Since the sensor device 10 only has a restricted field of vision, and some partial regions initially cannot be acquired because of shadows, for example as a result of protrusions in the treatment space 2a, the depth map will initially have larger or smaller gaps at the beginning of the three-dimensional acquisition of the surface in the treatment space 2a. By way of further continuous acquisition of the surface in the treatment space 2a by the sensor device 10, the prepared depth map is completed more and more in the course of time, in particular if the sensor device 10 moves inside the treatment space 2a. Therefore, items of information about spatial points which presently cannot be acquired by the sensor device 10, because they are located outside the field of vision or behind a shadow, for example, are also provided in this depth map after some time. In addition, by way of the continuous acquisition of the surface by the sensor device 10, a change of the surface can also be corrected in the depth map. The depth map therefore always reflects the present state of the surface in the treatment space 2a.
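This gradual completion and correction of the depth map can be pictured roughly as in the following sketch, in which each new batch of measurements fills gaps and overwrites stale points; the grid resolution and keying scheme are assumptions made only for illustration:

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]

class DepthMap:
    """Depth map as spatial points on a coarse grid; newest data wins."""

    def __init__(self, resolution_mm: float = 1.0):
        self.resolution = resolution_mm
        self.points: Dict[Tuple[int, ...], Point] = {}

    def integrate(self, measured: List[Point]) -> None:
        """Merge newly acquired 3D surface points (mm): unseen regions
        fill gaps, revisited regions are corrected to the present state."""
        for p in measured:
            key = tuple(int(c / self.resolution) for c in p)
            self.points[key] = p

# Two acquisitions: the second fills a gap and corrects a changed point
dm = DepthMap()
dm.integrate([(10.0, 5.0, 80.0), (11.0, 5.0, 80.2)])
dm.integrate([(12.0, 5.0, 80.4), (10.0, 5.0, 79.8)])
```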
The spatial points of the surface of the treatment space 2a present in the depth map are relayed to a texturing device 30. The texturing device 30 can optionally combine the items of information from the depth map in this case with the image data of an endoscopic black-and-white or color camera. The texturing device 30 generates a three-dimensional object having a coherent surface from the spatial points of the depth map. In this case, the surface can be suitably colored or shaded as needed by combining the three-dimensional spatial data of the depth map with the endoscopic camera data.
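How the texturing device 30 might color the spatial points with the endoscopic camera data can be sketched with a simple pinhole projection; the intrinsic matrix, image, and point values below are assumed placeholders:

```python
import numpy as np

def texture_points(points_cam: np.ndarray, K: np.ndarray, image: np.ndarray):
    """Assign each 3D point (camera coordinates, mm) the pixel it projects
    to under the pinhole model u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    colored = []
    for X, Y, Z in points_cam:
        u = int(round(K[0, 0] * X / Z + K[0, 2]))
        v = int(round(K[1, 1] * Y / Z + K[1, 2]))
        if 0 <= v < image.shape[0] and 0 <= u < image.shape[1]:
            colored.append(((X, Y, Z), image[v, u]))
        else:
            colored.append(((X, Y, Z), None))  # not visible: stays untextured
    return colored

# Assumed intrinsics and a synthetic 480x640 grayscale image
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
img = np.full((480, 640), 128, dtype=np.uint8)
pts = np.array([[10.0, 5.0, 80.0], [-2.0, 1.0, 60.0]])
textured = texture_points(pts, K, img)
```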
Furthermore, it is additionally possible to incorporate further diagnostic image data. For example, recordings of the region to be treated can already be prepared preoperatively. Imaging diagnostic methods are suitable for this purpose, for example computer tomography (CT), magnetic resonance tomography (MR or MRT), x-ray pictures, sonography, or similar methods. It is also conceivable to generate additional items of information during the treatment by way of suitable imaging diagnostic methods, which can then likewise be incorporated into the image generation process.
After texturing of the surface of the treatment space 2a has been carried out in the texturing device 30 from the image data of the depth map and optionally the further image data, the items of information thus processed are relayed to an image data generator 40. This image data generator 40 generates stereoscopic image data from the items of textured three-dimensional information. These stereoscopic image data include at least two images, which are slightly offset in relation to one another and which take into consideration the eye spacing of a human observer. The spacing used between the two eyes is typically approximately 80 mm in this case. The user receives a particularly good three-dimensional impression if it is presumed that the object to be observed is located approximately 25 cm in front of his or her eyes. Fundamentally, however, other parameters which enable a three-dimensional impression of the object to be observed for an observer are also possible. The image data generator 40 therefore calculates at least two image data sets from a predefined viewing direction, wherein the viewing directions of the two image data sets differ by the eye spacing of an observer. The image data thus generated are subsequently supplied to a visualization device 50. If the visualization device 50 requires still further information or data for a three-dimensional depiction, these can also be generated and provided by the image data generator 40.
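The two viewpoints offset by the eye spacing can be derived from a chosen viewing direction roughly as follows; the up vector and coordinate conventions are assumptions, and only the approximately 80 mm spacing is taken from the description above:

```python
import numpy as np

EYE_SPACING_MM = 80.0  # spacing between the two views, per the description

def stereo_view_origins(center: np.ndarray, view_dir: np.ndarray,
                        up: np.ndarray = np.array([0.0, 0.0, 1.0])):
    """Left/right camera origins: the chosen viewpoint shifted by half the
    eye spacing along a baseline perpendicular to the viewing direction."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    baseline = np.cross(view_dir, up)
    baseline = baseline / np.linalg.norm(baseline)  # assumes view_dir not parallel to up
    offset = 0.5 * EYE_SPACING_MM * baseline
    return center - offset, center + offset

# Observer assumed 250 mm in front of the scene, looking along +y
left, right = stereo_view_origins(np.array([0.0, -250.0, 0.0]),
                                  np.array([0.0, 1.0, 0.0]))
```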
In this case, all devices which are capable of providing different items of image information in each case to the two eyes of an observer are suitable as the visualization device 50. For example, the visualization device 50 can be a 3D monitor, or special spectacles, which display different image data for the two eyes of a user.
Fundamentally, all other types of 3D-capable monitors are additionally also conceivable and suitable. Thus, for example, monitors can also be used which emit light having a different polarization in each case for the left and the right eye. In this case, however, the user must wear spectacles having a suitable polarization filter. In the case of monitors which alternately output image data for the left and the right eye, a user likewise has to wear suitable shutter spectacles, which each only release the view of the monitor for the left and the right eye alternately, in synchronization with the alternately displayed images. Because of the comfort losses which accompany wearing spectacles, however, visualization devices which operate without spectacles, for example according to the autostereoscopic principle, can be advantageous.
Since the depth map and the texturing subsequent thereto, as described above, are successively completed gradually, a nearly complete model of the treatment space 2a is provided after some time, which also contains items of information about regions which are presently not visible and are shaded. It is therefore also possible for the image data generator 40 to generate image data from an observation angle which does not correspond to the present position of the sensor device 10. Therefore, for example, a depiction of the treatment space 2a can also be displayed on the visualization device 50, which deviates more or less strongly from the present position of the sensor device 10 and also the surgical instruments 11 optionally arranged on the end of the endoscope. After the depth map has been completed sufficiently, the user can specify the desired viewing direction nearly arbitrarily. In particular by way of the combination of the three-dimensional items of information from the depth map with the further image data of the endoscopic camera and additional items of diagnostic image information, a depiction, which is very close to a depiction of an opened body, can therefore be displayed to a user on the visualization device 50.
For better orientation during the surgical intervention, the user can therefore specify and change the viewing direction arbitrarily according to his or her wishes. This is helpful in particular, for example, if a specific point is to be found on an organ to be treated, or an orientation on the corresponding organ is to be assisted by way of the identification of specific blood vessels or the like.
The specification of the desired viewing direction can be performed in this case by a suitable input device 41. This input device 41 can be, for example, a keyboard, a computer mouse, a joystick, a trackball, or the like. Since the user normally has to operate the endoscope and the surgical instruments 11 contained thereon using both hands during the surgical intervention, however, in many cases, he or she will not have a hand free to operate the input device 41 to control the viewing direction to be specified. The control of the viewing direction can therefore also be performed in a contactless manner in an embodiment. For example, the control of the viewing direction can be carried out via a speech control. In addition, a control of the viewing direction by special, predefined movements is also possible. For example, the user can control the desired viewing direction by executing specific gestures. In particular, it is conceivable that the eye movements of the user are monitored and analyzed. Based on the acquired eye movements, the viewing direction is thereupon adapted for the stereoscopic depiction. Monitoring other body parts of the user to control the viewing direction is also possible, however. Such movements or gestures of the user may be monitored and analyzed by a camera. Alternatively, in the case of a speech control, the input device 41 can be a microphone. However, further possibilities for controlling the predefined viewing direction are also conceivable, for example, by movement of a foot or the like.
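A contactless adjustment of the predefined viewing direction could, for example, map an acquired gesture or eye movement to a rotation of the view, roughly as in the following sketch; the yaw/pitch parameterization and the axes are illustrative assumptions:

```python
import numpy as np

def rotate_view(view_dir: np.ndarray, yaw_rad: float, pitch_rad: float) -> np.ndarray:
    """Turn the viewing direction by a yaw (about z) and pitch (about x)
    derived, e.g., from a tracked gesture or eye movement."""
    cy, sy = np.cos(yaw_rad), np.sin(yaw_rad)
    cp, sp = np.cos(pitch_rad), np.sin(pitch_rad)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return Rx @ Rz @ view_dir

# A small gesture to the side: turn the view by 10 degrees of yaw
new_dir = rotate_view(np.array([0.0, 1.0, 0.0]), np.deg2rad(10.0), 0.0)
```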
After a depth map has been prepared using an at least partially three-dimensionally acquired surface, texturing is carried out in 130 using the spatial points present in the depth map. Optionally provided further image data from a camera of the sensor device 10 and/or further items of diagnostic image information from imaging methods such as computer tomography, magnetic resonance tomography, sonography, or x-rays can also be integrated in this texturing. In this manner, a three-dimensional colored or black-and-white image of the surface of the treatment space 2a initially results. Stereoscopic image data are then calculated in 140 from the depth map thus textured. These stereoscopic image data include at least two depictions from a predefined viewing direction, wherein the depictions differ in accordance with the eye spacing of an observer. Finally, in 150, the previously calculated stereoscopic image data are visualized on a suitable display device.
The viewing direction, on which the calculation of the stereoscopic image data in 140 is based, can be adapted arbitrarily in this case. In particular, the viewing direction for the calculation of the stereoscopic image data can be different from the viewing direction of the sensor device 10. To set the viewing direction on which the calculation of the stereoscopic image data in 140 is based, the method can include acquiring user input and adapting the viewing direction thereupon for the calculation of the stereoscopic image data in accordance with the user input. The user input for adapting the viewing direction may be performed in a contactless manner in this case. For example, the user input can be performed by analyzing a predefined user gesture.
In summary, a method for the stereoscopic depiction of image data, in particular for the three-dimensional depiction of items of image information during minimally invasive surgery, is carried out using an endoscope. In this case, the operation region of the endoscope is firstly three-dimensionally acquired by a sensor device. Stereoscopic image data are generated from the 3D data obtained by the sensors and visualized on a suitable display device.
A description has been provided with particular reference to preferred embodiments thereof and examples, but it will be understood that variations and modifications can be effected within the spirit and scope of the claims which may include the phrase “at least one of A, B and C” as an alternative expression that means one or more of A, B and C may be used, contrary to the holding in Superguide v. DIRECTV, 358 F3d 870, 69 USPQ2d 1865 (Fed. Cir. 2004).
Number | Date | Country | Kind |
---|---|---|---|
102013206911.1 | Apr 2013 | DE | national |
This application is the U.S. national stage of International Application No. PCT/EP2014/057231, filed Apr. 10, 2014, and claims the benefit thereof. The International Application claims the benefit of German Application No. 102013206911.1, filed on Apr. 17, 2013; both applications are incorporated by reference herein in their entirety.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2014/057231 | 4/10/2014 | WO | 00 |