The invention relates to laparoscopic surgical training.
It is known to provide a surgical training simulator, as described in U.S. Pat. No. 5,623,582. In this simulator a surgical instrument is supported on a universal joint and encoders monitor rotation of the instrument in 3D. However, this simulator appears to suffer from movement being confined by the characteristics of the joint, from limited simulation of the real situation in which the instrument is inserted through a patient's skin, and from the absence of any relationship between the positions of the joints and the organs of a patient's body.
PCT Patent Specification WO02/059859 describes a system which automatically retrieves a stored video sequence according to detected interactions.
The invention is therefore directed towards providing an improved surgical training simulator which simulates more closely the real situation and/or which provides more comprehensive training to a user.
According to the invention, there is provided a surgical training simulator comprising:
In one embodiment, the simulator comprises a plurality of cameras mounted for capturing perspective views of a scene within the body form apparatus.
In another embodiment, a camera comprises an adjustment handle.
In a further embodiment, the body form apparatus comprises a panel of material simulating skin, and through which an instrument may be inserted.
In one embodiment, the motion analysis engine uses a stereo triangulation technique to determine positional data.
In another embodiment, the motion analysis engine determines the instrument's axis of orientation and its linear position along that axis.
In a further embodiment, the motion analysis engine monitors an instrument marking to determine degree of rotation about the axis of orientation.
In one embodiment, the motion analysis engine initially searches in a portion of an image representing a top space within the body form apparatus, and proceeds with a template matching operation only if a pixel pattern change is located in said image top portion.
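By way of illustration only, the following minimal sketch shows one way such a two-stage search might be implemented, assuming an OpenCV pipeline; the region bounds, change threshold, match threshold and template are all hypothetical values, not taken from the patent.

```python
import cv2
import numpy as np

# Hypothetical bounds of the image portion representing the "top space"
# within the body form apparatus, and a hypothetical change threshold.
TOP_REGION = slice(0, 120)   # first 120 pixel rows
CHANGE_THRESHOLD = 25.0      # mean absolute grey-level difference

def find_instrument(frame_gray, background_gray, template_gray):
    """Search the top image region first; run template matching only if a
    pixel pattern change is found there."""
    top_now = frame_gray[TOP_REGION, :].astype(np.float32)
    top_ref = background_gray[TOP_REGION, :].astype(np.float32)
    if np.mean(np.abs(top_now - top_ref)) < CHANGE_THRESHOLD:
        return None  # nothing has entered the top space; skip the costly step
    # A change was detected: locate the instrument by template matching.
    scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_xy = cv2.minMaxLoc(scores)
    return best_xy if best_score > 0.7 else None
```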
In another embodiment, the motion analysis engine manipulates a linear pattern of pixels to compensate for camera lens warp before performing stereo triangulation.
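Again by way of illustration, a minimal sketch of warp compensation followed by stereo triangulation, assuming each camera has been calibrated beforehand; the intrinsic matrices K1 and K2, the distortion coefficients dist1 and dist2, and the 3x4 pose matrices P1 and P2 (expressed in normalised coordinates) are placeholders for the results of that calibration.

```python
import cv2
import numpy as np

def triangulate(pt1, pt2, K1, dist1, K2, dist2, P1, P2):
    """Return the 3D point seen at pixel pt1 in camera 1 and pt2 in camera 2."""
    # Compensate for lens warp: undistortPoints straightens each observation
    # and returns normalised image coordinates.
    n1 = cv2.undistortPoints(np.float32([[pt1]]), K1, dist1)
    n2 = cv2.undistortPoints(np.float32([[pt2]]), K2, dist2)
    # Stereo triangulation in the calibrated space common to both cameras.
    xh = cv2.triangulatePoints(P1, P2, n1.reshape(2, 1), n2.reshape(2, 1))
    return (xh[:3] / xh[3]).ravel()  # homogeneous -> Euclidean (x, y, z)
```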
In a further embodiment, the surgical training simulator further comprises a graphics engine for receiving the positional data and using it to generate a virtual reality simulation in a co-ordinate reference space common to that within the body form apparatus.
In one embodiment, the graphics engine renders each organ as an object having independent attributes of space, shape, lighting and texture.
In another embodiment, a scene manager of the graphics engine by default creates a static scene of all simulated organs in a static position from a camera angle of one of the actual cameras.
In a further embodiment, the graphics engine renders an instrument model, and simulates instrument movement according to the positional data.
In one embodiment, the graphics engine simulates organ surface distortion if the instrument positional data indicates that the instrument enters space of the simulated organ.
In another embodiment, the graphics engine comprises a view manager which changes simulated camera angle according to user movements.
In a further embodiment, the processor comprises a blending function for compositing real and recorded images according to overlay parameter values.
In one embodiment, the blending function blends real video images with simulated images to provide a composite video stream of real and simulated elements.
In another embodiment, the graphics engine generates simulated images representing internal surgical events such as bleeding, and the blending function composites real images with said simulated images.
In a further embodiment, the processor synchronises blending with generation of metrics for simultaneous display of metrics and blended images.
In one embodiment, the processor feeds positional data simultaneously to the graphics engine and to a processing function, and feeds the associated real video images to the blending function.
In another embodiment, the graphics engine generates graphical representations from low-bandwidth positional data, the motion analysis engine generates said low-bandwidth positional data, and the system further comprises an interface for transmitting said low-bandwidth positional data to a remote second simulator and for receiving low-bandwidth positional data from the second simulator.
In a further embodiment, the graphics engine renders a view of simulated organs with a viewing angle driven by the position and orientation of a model endoscope inserted in the body form apparatus. Both end-view and angled endoscope simulated views may be produced.
In one embodiment, the motion analysis engine monitors movement of actual objects within the body form apparatus as the objects are manipulated by an instrument.
The invention will be more clearly understood from the following description of some embodiments thereof, given by way of example only with reference to the accompanying drawings in which:—
FIGS. 6 to 10 are flow diagrams illustrating image processing operations performed by the simulator.
Referring to FIGS. 1 to 3, a surgical training simulator 1 of the invention comprises a body form apparatus 2 having a plastics torso body form 3 and a panel 4 of flexible material that simulates skin. Laparoscopic surgical instruments 5 are shown extending through small apertures in the panel 4. The body form apparatus 2 is connected to a computer 6, in turn connected to an output display monitor 7 and to an input foot pedal 8. The main purpose of the foot pedal 8 is to allow inputs equivalent to those of a mouse, without the user needing to use his or her hands.
As shown in
Two fluorescent light sources 11 are mounted outside of the used space within the body form apparatus 2. The light sources operate at 40 kHz, and so there is no discernible interference with image acquisition (at a typical frame rate of 30-60 Hz). One of the cameras 10 has an adjustment handle 20 protruding from the body form 3, although more of the cameras may have such an adjustment mechanism in other embodiments.
The cameras 10 are connected to the computer 6 to provide images of movement of the instruments 5 within the body form 3. The computer 6 uses stereo triangulation techniques with calibration of the space within the body form 3 to track location in 3D of each instrument 5. Referring to
A part 32 of the instrument has a tapered marking 33 which allows the computer 6 to monitor rotation about the axis 30 (as indicated by an arrow 34), to monitor depth of insertion, and to uniquely identify each instrument 5.
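One possible reading of such a marking, offered only as a sketch: if the visible width of the tapered band varies monotonically with rotation, the measured width maps directly to a roll angle. The calibration constants below are hypothetical per-instrument values.

```python
# Hypothetical calibration of the tapered marking 33 for one instrument.
WIDTH_AT_0 = 2.0     # apparent band width in pixels at roll = 0 degrees
WIDTH_AT_360 = 14.0  # apparent band width after a full revolution

def roll_from_marking(measured_width_px: float) -> float:
    """Map the measured width of the marking to a roll angle in degrees,
    assuming the width varies linearly with rotation about the axis."""
    frac = (measured_width_px - WIDTH_AT_0) / (WIDTH_AT_360 - WIDTH_AT_0)
    return max(0.0, min(1.0, frac)) * 360.0
```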
Referring to
The processor functions 40 may also receive training images and/or graphical templates. The outputs include displays of actual video, positional metrics and graphical simulations or combinations of these displays.
The output of the motion analysis engine 35 comprises 3D data fields linked effectively as packets with the associated video images. The packets 41 are represented in
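The following sketch suggests how such a packet might be organised; the field names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class InstrumentPacket:
    """One motion analysis record, linked to its associated video image."""
    frame_id: int        # index of the associated video image
    timestamp_ms: int    # acquisition time
    instrument_id: int   # identity decoded from the marking 33
    tip_xyz: tuple       # 3D tip position in the calibrated space
    axis_dir: tuple      # unit vector along the axis of orientation
    roll_deg: float      # rotation about the axis of orientation
```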
Referring to
Referring to
The graphics engine 60 renders each internal organ on an individual basis by executing an object with space, shape, lighting and texture attributes. The objects are static until the instrument is inserted. The engine 60 moves an organ surface if the 3D position of an instrument 5 enters the space occupied by the organ as modelled. A scene manager of the graphics engine 60 by default renders a static scene of static organs viewed from the position of one of the actual cameras 10. A view manager of the graphics engine accepts inputs indicating the desired camera angle. Thus the view of the simulated organs may be from any selected camera angle as required by the user and/or the application. The graphics engine also renders an instrument model and moves it according to the current 3D data. Thus, the simulated instrument is moved and the surfaces of the simulated organs are deformed according to the 3D data. An illusion is thus created that the interior of the body form 2 contains the simulated scene.
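As a rough sketch of this arrangement (the attribute names and the simple radial push-out used for surface distortion are assumptions, not the patent's method):

```python
import numpy as np

class Organ:
    """One simulated organ with independent space, shape, lighting and
    texture attributes."""
    def __init__(self, vertices, texture, lighting):
        self.vertices = np.asarray(vertices, dtype=np.float32)  # (N, 3) shape
        self.texture = texture    # e.g. a texture image handle
        self.lighting = lighting  # per-organ lighting parameters

    def deform(self, tip_xyz, radius=10.0):
        """Distort the surface if an instrument tip enters the organ's space,
        by pushing intersected vertices out to a contact radius."""
        tip = np.asarray(tip_xyz, dtype=np.float32)
        d = self.vertices - tip
        dist = np.linalg.norm(d, axis=1, keepdims=True)
        push = np.where(dist < radius,
                        d / np.maximum(dist, 1e-6) * (radius - dist),
                        0.0)
        self.vertices = self.vertices + push
```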
If an instrument 5 is placed within the body form 2, its position and orientation are tracked as described above. This 3D position data is used to tell the graphics engine where to render a model of the instrument within the simulation. A stream of 3D position data keeps the virtual model of the instrument in step with the movements of the real instrument 5. Within the simulation the virtual model of the instrument 5 can then interact with the elements of the simulation through actions such as grasping, cutting or suturing, thereby creating the illusion that the real instrument 5 is interacting with simulated organs within the body form.
Referring to
In parallel, the 3D data is fed to the statistical analysis function 50, in turn feeding the results processing function 51.
This mode allows a teacher to demonstrate a technique within the same physical space as that experienced by the student. The blending of the images gives the student a reference image that helps them identify the physical moves. Also, the educational goals at a given point in the lesson drive dynamic changes in the degree of blending. For example, during a demonstration phase the teacher stream is at 90% and the student stream is at 10%, whereas during guided practice the teacher stream is at 50% and the student stream is at 50%. During later stages of the training, i.e. independent practice, the teacher stream is at 0% and the student stream at 100%. The speed of the recorded teacher stream may be controlled so that it remains in step with the speed of the student. This is achieved by maintaining a correspondence between the instrument positions of the teacher and those of the student.
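Purely as an illustration, the phase-driven blending could look like the sketch below, assuming OpenCV-style frames of equal size; the phase names are hypothetical and the weights mirror the example percentages given above.

```python
import cv2

# Teacher-stream weight for each lesson phase (student weight is 1 - value).
PHASE_WEIGHTS = {
    "demonstration": 0.9,  # teacher 90% / student 10%
    "guided": 0.5,         # teacher 50% / student 50%
    "independent": 0.0,    # teacher 0% / student 100%
}

def blend(teacher_frame, student_frame, phase):
    """Composite the teacher and student streams for the current phase."""
    a = PHASE_WEIGHTS[phase]
    return cv2.addWeighted(teacher_frame, a, student_frame, 1.0 - a, 0.0)
```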
In this mode, the student's performance can be compared directly with that of the teacher. This result can be displayed visually as an output of the blending function 70 or as a numerical result produced by the results processing function 51.
The display of the synchronised image streams can be blended as described above or as image streams displayed side by side.
The running of the respective image streams can be:—
Referring to
Referring to
Thus, the system can deliver complex multimedia education over low bandwidth links. Currently, high bandwidth links are required to deliver distance education in surgery because video streams must be provided, and due to their size these streams are subject to the delays imposed by Internet congestion. By abstracting both student and teacher behaviour to the position and orientation of the tools and objects under manipulation, this configuration allows distance education in surgery over low bandwidth links. A low bandwidth audio link may also be included.
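A back-of-envelope sketch shows why this works; the record layout below is an assumption, but any comparable encoding gives the same order of magnitude.

```python
import struct

# One packed positional record per instrument per frame: 36 bytes, versus
# tens of kilobytes for a compressed video frame.
RECORD = struct.Struct("<II7f")  # frame_id, instrument_id, roll, tip xyz, axis xyz

def encode(frame_id, instrument_id, roll_deg, tip, axis):
    """Pack one record for transmission to the remote second simulator."""
    return RECORD.pack(frame_id, instrument_id, roll_deg, *tip, *axis)

# Two instruments at 30 frames per second come to about 2 kB/s, which fits
# a dial-up class link with room left for the low bandwidth audio channel.
```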
This facility allows the teacher to add comments by way of textual, graphical, audio or in-scene demonstration to a recording of the student lesson.
The teacher receives either video of the lesson along with a record of the 3D positions of the objects in the scene, or just a record of the 3D positions of the objects in the scene. This is played back to the teacher on their workstation. The teacher can play, pause, or rewind the student's lesson, and can record feedback to the student by overlaying text or audio, or by using the instruments to insert their own graphical representation into the student lesson.
The simulator 1 may be used to simulate use of an endoscope. A physical model of an endoscope (which may simply be a rod) is inserted into the body form apparatus 2 and the position of its tip is tracked in 3D by the motion analysis engine 35. This is treated as the position of a simulated endoscope camera, and its position and orientation are used to drive the optical axis of the view in the simulation. Both end-view and angled endoscope views may be generated. The graphics engine 60 renders internal views of the simulated organs from this angle and optical axis. The view presented to the user simulates the actual view which would be seen if an actual endoscope were inserted in a real body.
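As an illustrative sketch only (the look-at construction and the ANGLE_DEG constant are assumptions, not the patent's method): the tracked tip position and axis can be turned into a view matrix, with the optical axis tilted for an angled scope.

```python
import numpy as np

ANGLE_DEG = 30.0  # tilt of the optical axis; 0.0 gives the end-view scope

def endoscope_view(tip, axis, up=(0.0, 0.0, 1.0)):
    """Return a 4x4 view matrix looking along the (optionally angled) axis
    from the tracked tip of the model endoscope."""
    fwd = np.asarray(axis, dtype=np.float64)
    fwd /= np.linalg.norm(fwd)
    side = np.cross(fwd, up)
    side /= np.linalg.norm(side)
    # Tilt the optical axis within the plane perpendicular to 'side'.
    a = np.radians(ANGLE_DEG)
    fwd = np.cos(a) * fwd + np.sin(a) * np.cross(side, fwd)
    side = np.cross(fwd, up)
    side /= np.linalg.norm(side)
    upv = np.cross(side, fwd)
    m = np.eye(4)
    m[:3, :3] = np.stack([side, upv, -fwd])  # rotation rows of the view
    m[:3, 3] = -m[:3, :3] @ np.asarray(tip, dtype=np.float64)
    return m
```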
In another mode of operation, actual objects are inserted in the body form apparatus 2. The 3D position of the instrument and/or of the objects is monitored and compared with targets. For example, one exercise may involve moving spheres from one location to another within the apparatus 2. In another example, an instrument is used for suturing an actual material, and the pattern of movement of the instrument is analysed. The objects within the apparatus may incorporate sensors, such as electromagnetic or optical sensors, for monitoring their location within the apparatus 2. An example is an optical or electronic encoder monitoring the opening of a door within the apparatus 2 by an instrument, to determine the dexterity of the student.
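For the sphere-transfer exercise, a scoring sketch might compare each sphere's final tracked position with its target and accumulate path length as an economy-of-movement metric; the tolerance and the metrics chosen here are hypothetical.

```python
import numpy as np

def score_transfer(sphere_paths, targets, tol=5.0):
    """Count spheres finishing within tol (mm) of their targets and total
    the distance each travelled."""
    placed, path_len = 0, 0.0
    for path, target in zip(sphere_paths, targets):
        p = np.asarray(path, dtype=np.float64)  # (T, 3) positions over time
        path_len += float(np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1)))
        if np.linalg.norm(p[-1] - np.asarray(target)) <= tol:
            placed += 1
    return placed, path_len
```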
The invention is not limited to the embodiments described but may be varied in construction and detail.
Number | Date | Country | Kind
---|---|---|---
2002/0376 | May 2002 | IE | national

Relation | Number | Date | Country
---|---|---|---
Parent | PCT/IE03/00069 | May 2003 | US
Child | 10983740 | Nov 2004 | US