The present teaching generally relates to computers. More specifically, the present teaching relates to signal processing.
With the advancement of technologies, more and more tasks are now performed with the assistance of computers. Different industries have benefited from such technological advancement, including the medical industry, where large volumes of image data capturing anatomical information of a patient may be processed by computers to identify anatomical structures of interest (e.g., organs, bones, blood vessels, or abnormal nodules), obtain measurements for each object of interest (e.g., the dimensions of a nodule growing in an organ), and visualize relevant features (e.g., three-dimensional (3D) visualization of an abnormal nodule). Such techniques have enabled healthcare workers (e.g., doctors) to use high-tech means to treat patients in a more effective manner. Nowadays, many surgeries are performed with laparoscopic guidance, often making it unnecessary to open up the patient and minimizing damage to the body.
With the guidance of laparoscopy, a surgeon may approach a target organ and perform what is needed. Although each surgery may have a predetermined objective, there may be different subtasks to accomplish in each surgery in order to achieve that objective. Examples include maneuvering a surgical instrument to a certain location with the guidance of laparoscopic images, aiming at the anatomical structure where some subtask is to be performed, and then executing the subtask using the surgical instrument. Exemplary subtasks include using a surgical tool to separate blood vessels from an organ to be operated on, clamping the blood vessels to stop blood flow prior to resection of a part of the organ, etc. Different surgical instruments or tools may be needed for performing different subtasks.
While laparoscopic images may provide a user with live visualization of the anatomies inside a patient's body, there are limitations on how these images can be interpreted. First, laparoscopic images are usually two dimensional (2D), so certain information about the 3D anatomies may not be displayed in the 2D images. Second, as a laparoscopic camera has a limited field of view, the acquired 2D images may capture only a partial view of the targeted organ. Moreover, some essential anatomical structures, such as blood vessels, may reside inside an organ and thus are not visible in the 2D images. Due to these limitations, a user must mentally reconstruct what is not visible, which requires substantial experience to digest what is seen in laparoscopic images and to align what is observable with the preplanned 3D surgical procedure. Thus, there is a need for a solution that addresses the challenges discussed above.
The teachings disclosed herein relate to methods, systems, and programming for surgical visual guidance. More particularly, the present teaching relates to methods, systems, and programming for automated generation of a surgical tool-based visual guide.
In one example, a method implemented on a machine having at least one processor, storage, and a communication platform capable of connecting to a network is disclosed for automated generation of a surgical tool-based visual guide. Two-dimensional (2D) images capturing anatomical structures and a surgical instrument therein are provided during a surgery. The type and pose of a tool attached to the surgical instrument are detected based on the 2D images. Focused information is determined based on the type and pose of the detected tool and is used to generate a visual guide to assist a user in performing a surgical task using the tool.
In a different example, a system is disclosed for automated generation of a surgical tool-based visual guide, which includes a surgical tool detection unit, a tool-based focused information identifier, and a focused information display unit. The surgical tool detection unit is provided for detecting the type and pose of a tool attached to a surgical instrument based on 2D images that capture anatomical structures and the surgical instrument. The tool-based focused information identifier is provided for determining focused information based on the type and pose of the tool. The focused information display unit is provided for generating a visual guide based on the focused information for assisting a user in performing a surgical task using the tool.
Other concepts relate to software for implementing the present teaching. A software product, in accordance with this concept, includes at least one machine-readable non-transitory medium and information carried by the medium. The information carried by the medium may be executable program code data, parameters in association with the executable program code, and/or information related to a user, a request, content, or other additional information.
Another example is a machine-readable, non-transitory and tangible medium having information recorded thereon for automated generation of a surgical tool-based visual guide. The information, when read by the machine, causes the machine to perform various steps. Two-dimensional (2D) images capturing anatomical structures and a surgical instrument therein are provided during a surgery. The type and pose of a tool attached to the surgical instrument are detected based on the 2D images. Focused information is determined based on the type and pose of the detected tool and is used to generate a visual guide to assist a user in performing a surgical task using the tool.
Additional advantages and novel features will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The advantages of the present teachings may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities and combinations set forth in the detailed examples discussed below.
The methods, systems and/or programming described herein are further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
In the following detailed description, numerous specific details are set forth by way of examples in order to facilitate a thorough understanding of the relevant teachings. However, it should be apparent to those skilled in the art that the present teachings may be practiced without such details. In other instances, well known methods, procedures, components, and/or systems have been described at a relatively high level, without detail, in order to avoid unnecessarily obscuring aspects of the present teachings.
The present teaching discloses exemplary methods, systems, and implementations for automatically determining information of focus on the fly based on a detected surgical tool and for displaying such information of focus. A surgical instrument is inserted into a person's body to perform a predetermined surgical operation. The surgical instrument may carry on its tip end a tool to be used for carrying out some tasks. The surgical instrument's position in a 3D workspace is tracked so that its 3D position is known. 3D models for a target organ to be operated on are registered with 2D images acquired during the surgery. In addition to tracking the 3D position of the surgical instrument, the tool attached to the tip of the instrument may also be detected based on the 2D images.
Each tool may have certain designated function(s) to perform during a surgery. As such, the presence of a tool may signal what task is to be carried out, what the focus of the user (surgeon) is at the moment, and what information may be relevant in the sense that it may assist the user in doing the job with a greater degree of ease. For instance, a hook 120 may be attached to the tip of a surgical instrument 110 when the surgical instrument appears near a target organ 100, as shown in
The present teaching as disclosed herein determines, based on a currently detected surgical tool, what focused information may help a user accomplish the current task, and accordingly displays such identified focused information to the user in a manner that improves the effectiveness of accomplishing the task at hand.
2D video images are provided as input to these three components. As discussed herein, in the surgical workspace, calibration may have been performed to facilitate registration of 2D images with 3D coordinates of points in the 3D workspace. In some embodiments, such calibration may be performed via a tracking mechanism (not shown) that tracks a surgical instrument having tracking device(s) attached to the end of the instrument that remains outside the patient's body. Feature points present in the 2D images may be registered with the 3D workspace. A transformation matrix may be derived based on a set of feature points identified from the 2D images via the surgical instrument (so that the 3D coordinates of such feature points can be derived) and the corresponding feature points on the 3D model of the organ. Based on the transformation matrix, any point on the 3D organ model may be projected onto the 2D images to provide 3D visualization of the organ at the location where the organ appears in the 2D images. Similarly, any type of information, such as blood vessels within the organ, may also be projected onto the 2D images.
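Purely as an illustrative sketch (not a prescribed implementation), deriving such a transformation from paired 2D/3D feature points and projecting additional 3D model content onto the 2D images might look as follows, assuming a calibrated pinhole camera with intrinsic matrix K and using OpenCV; all function and variable names here are assumptions.

```python
# Minimal sketch: estimate the 3D-to-2D transformation from paired feature
# points and project arbitrary 3D organ-model points onto the 2D image.
# Assumes a calibrated pinhole camera (intrinsics K); names are illustrative.
import numpy as np
import cv2

def register_and_project(model_pts_3d, image_pts_2d, query_pts_3d, K,
                         dist_coeffs=None):
    """model_pts_3d: (N, 3) feature points on the 3D organ model.
    image_pts_2d: (N, 2) corresponding feature points found in the 2D image.
    query_pts_3d: (M, 3) other model points (e.g., vessel centerlines) to be
    overlaid on the image once the transformation is known."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(
        model_pts_3d.astype(np.float32),
        image_pts_2d.astype(np.float32),
        K.astype(np.float32), dist_coeffs)
    if not ok:
        raise RuntimeError("registration failed")
    projected, _ = cv2.projectPoints(
        query_pts_3d.astype(np.float32), rvec, tvec, K, dist_coeffs)
    return rvec, tvec, projected.reshape(-1, 2)   # pose + 2D overlay points
```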
In the illustrated embodiment in
A 3D organ model in storage 230 may be generated offline prior to the surgical procedure based on image data obtained from the patient. Such a 3D model may include different kinds of information. For example, it may include 3D modeling of the organ using a volumetric representation and/or a surface representation. It may also include modeling of internal anatomical structures, such as a nodule growing in the organ and blood vessel structures around and within the organ. The modeling of the organ and internal anatomical structures may also be provided with different features and measurements of, e.g., the organ itself, a nodule therein, or different vessels. In addition, a 3D model for a patient's organ may also incorporate information on a preplanned surgery trajectory with planned cut points at defined positions on the surface of the organ. Furthermore, a 3D organ model may also include modeling of some regions near the organ. For instance, blood vessels or other anatomical structures such as bones outside of the organ may also be modeled, as they are connected to the organ or form a part of a blood supply network connected to the organ, so that they may impact how the surgery is to be conducted.
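As a minimal sketch of how such a preoperative organ model might be organized in memory, the structure below groups the kinds of information listed above; the field names and types are illustrative assumptions, not a prescribed schema.

```python
# Illustrative organization of a preoperative 3D organ model; field names
# and types are assumptions standing in for the model contents in storage 230.
from __future__ import annotations
from dataclasses import dataclass, field
import numpy as np

@dataclass
class OrganModel3D:
    surface_vertices: np.ndarray                    # (V, 3) organ surface mesh vertices
    surface_faces: np.ndarray                       # (F, 3) triangle indices
    vessel_centerlines: list[np.ndarray]            # each (P, 3); vessels in/around the organ
    nodules: list[dict] = field(default_factory=list)        # e.g., {"center": ..., "diameter_mm": ...}
    resection_trajectory: np.ndarray | None = None            # (C, 3) planned cut points on the surface
    nearby_structures: dict[str, np.ndarray] = field(default_factory=dict)  # e.g., bones, external vessels
```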
Different types of information included in the 3D organ model may be projected onto the 2D video images to assist the user in performing different tasks involved in the surgery. At each moment, depending on the type of surgical tool detected, different piece(s) of information from the 3D model may correspond to the focused information. For example, when a surgical hook is detected from the 2D images, blood vessels may be deemed the focus of the task at the moment because a surgical hook is usually used to handle tasks associated with blood vessels. In this case, the information characterizing the blood vessels may be obtained from the 3D organ model in storage 230 as the focused information and may be used for special display. Although other types of information from the 3D organ model may also be retrieved and used for projection onto the 2D images (e.g., the organ's 3D shape and measurements), the part that is deemed the focused information may be displayed in a different way. One example is illustrated in an exemplary display 270 in
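One simple way to realize such tool-keyed selection of focused versus contextual information is a lookup keyed on the detected tool type, as sketched below; the mapping entries (chosen here for a liver-resection example) and the model field names are assumptions that follow the illustrative model structure above.

```python
# Illustrative lookup of focused vs. contextual model information keyed on
# the detected surgical tool type; entries are assumptions, not a fixed rule.
FOCUSED_INFO_BY_TOOL = {
    "hook":   {"focused": "vessel_centerlines",    # vessels the hook will engage
               "context": "surface_vertices"},
    "cutter": {"focused": "resection_trajectory",  # planned cut points ahead of the blades
               "context": "surface_vertices"},
}

def select_focused_information(tool_type, organ_model):
    spec = FOCUSED_INFO_BY_TOOL.get(tool_type)
    if spec is None:
        return None, None                 # unrecognized tool: no special overlay
    focused = getattr(organ_model, spec["focused"])
    context = getattr(organ_model, spec["context"])
    return focused, context
```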
Based on the spatial relation between the tool tip and the relevant anatomical part, the tool-based focused region identifier 240 then identifies, at 245, both a 2D focused region in 2D images and a corresponding 3D focused region of appropriate part(s) of the 3D organ model. Such identified 2D/3D focused regions are where the focused information resides and is to be displayed to assist a surgeon to perform an operation using the surgical tool at that moment. Based on the identified 2D/3D focused regions, the focused information display unit 260 then renders, at 255, the content of the 3D model from the 3D focused region by projecting such 3D content onto the 2D focused region in 2D images in a registered manner. In some embodiments, the 3D organ model may be retrieved when the surgery starts and during the surgery relevant parts may then be used for projecting onto the 2D images in a manner determined by the surgical tool detected. For instance, at the beginning of a surgery, the surface representation of a 3D organ in the 3D model may be used to project onto 2D images. During the surgery, the display may be adjusted dynamically. When a surgical instrument changes its pose, e.g., it is oriented towards a different part of the organ, the projection of the surface representation of the 3D model needs to be adjusted so that the part of the surface of the organ directly facing the instrument is displayed via projection.
As another example, when a different surgical tool such as a hook is detected, the blood vessel tree inside the organ may need to be displayed in a way that lets the user "see through" the organ to look at the vessels. In this case, the blood vessel representations of the 3D model may be used for rendering such focused information. In some embodiments, the blood vessels may be projected with an orientation appropriate with respect to the perspective of the tip of the surgical hook, as discussed herein. While the blood vessels are the focus and are rendered in a special way, other parts of the organ may be rendered in a deemphasized manner so that the focused information is more prominent. Thus, at each moment, depending on the situation, different parts of the 3D model may be used in different ways for different displays. The rendering may apply different treatments to focused and non-focused information. For example, when blood vessels are the focal point, other anatomical structures may be displayed to appear faded so that the blood vessels appear more visible. At the same time, the focused information may also be displayed in a highlighted manner to increase the contrast, e.g., by increasing the intensity of the pixels on the blood vessels or using a bright color to display blood vessel pixels.
That is, whenever the focus changes, the projection of different pieces of 3D model information onto the 2D images may also change accordingly. Continuing the above example, when a surgical cutter is later detected (after a surgical hook), the blood vessels previously projected in the 2D images may now need to be completely hidden, and the part of the organ in front of the detected cutter now needs to be displayed in a special and highlighted way so that the part of the organ near the cutter can be visualized clearly. Thus, a 3D organ model, once retrieved from storage 230, may need to be used whenever the situation changes, and each time a different part of the 3D model may be used for the focused display while other parts are used for the non-focused display.
In this illustrated embodiment, to recognize a surgical tool from 2D images, features may be explicitly extracted from preprocessed 2D images. With respect to different types of surgical tools, different features may be relevant. In addition, depending on a particular surgical procedure, the types of surgical tools that may be used during the surgery may be known. As such, the image feature selector 320 may access surgical tool detection models 220, which may specify different types of features keyed on different types of surgical tools. Based on that configuration, the image feature selector 320 may determine, at 315, the types of features that need to be extracted from 2D images in a particular surgical procedure. For instance, if a surgery is a liver resection, it may be known that the only surgical tools to be used are cutters and hooks. Given that, the image feature selector 320 may obtain information about the features to be extracted in order to recognize these two types of tools.
The features to be extracted are then used to inform the image feature extractor 330. Upon receiving the instruction on what image features are to be identified, the image feature extractor 330 extracts, at 325, such image features from the preprocessed images. The extracted features may then be used by the feature-based tool classifier 340 to classify, at 335, the tool type present in the image based on the extracted features. As image features of multiple types of surgical tools may need to be extracted from the same images, some features expected for a certain tool may not present solid characteristics of that tool (e.g., features extracted for a surgical cutter from images in which a surgical hook is present). In this situation, the confidence of the classification for that type of tool may be consistently low. A classification result with poor confidence, as determined at 345, may not be adopted, no classification is attained, and the process continues to process the next images at 325. If the confidence score for a classification satisfies certain conditions, as determined at 345, the classification of the tool type is accepted. In this case, the surgical tool pose estimator 350 proceeds to estimate, at 355, the pose of the surgical tool. For example, if the detection identifies that the surgical tool observed in 2D images corresponds to a hook, the 3D location and orientation of the hook are determined. As discussed herein, such pose information is important in determining the attention point of the user so as to accurately determine the focused information to be displayed in a special way.
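The confidence gate and subsequent pose estimation described above might be sketched as follows; the feature extractor, classifier, and pose estimator are placeholder callables standing in for components 330, 340, and 350, and the threshold value is an assumption.

```python
# Minimal sketch of the confidence-gated tool detection flow: accept a tool
# classification only when its confidence clears a threshold, then estimate
# the tool pose. All callables and the threshold are illustrative assumptions.
def detect_tool(frame, feature_extractor, classifier, pose_estimator,
                min_confidence=0.8):
    features = feature_extractor(frame)            # tool-type-specific image features
    tool_type, confidence = classifier(features)   # e.g., ("hook", 0.93)
    if confidence < min_confidence:
        return None                                # low confidence: skip, process next frame
    pose = pose_estimator(frame, tool_type)        # 3D tip location and orientation
    return {"type": tool_type, "confidence": confidence, "pose": pose}
```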
The output from the surgical tool detection unit 210 includes the surgical tool type as well as the pose of the tool tip and is sent to the tool-based focused region identifier 240, as shown in
As illustrated in
As another example, if a surgical hook 420 is detected in 2D images during a laparoscopic procedure, the type of focused information is specified as “Vessel branches in front of the hook's tip” (430), as shown in
In some embodiments, other information in the 3D model near the detected tool location may also be retrieved and displayed to provide, e.g., better surgical context. For instance, when a hook is detected, the type of focused information may correspond to blood vessels. Although a specific portion of a vessel tree facing the tip of the tool may be deemed the specific focused information, 3D representations of other parts of the vessel tree near the tool may also be output and displayed. In addition, the part of the organ around the vessels near the tool may also be output and displayed. The specific focused information may be displayed in a special way (e.g., with highlight, in color, or with a much higher contrast), and other related information around the specific focused information may be displayed in a way that will not interfere with or diminish the special effect of the specific focused information.
In this illustrated embodiment, the 2D/3D focused region identifier 530 may take a 2D instrument tip location as input and generate both 2D and 3D focused regions as output. To facilitate the operation, the 2D/3D focused region identifier 530 comprises a 2D anatomical feature detector 505, an operation mode determiner 515, an automatic 3D corresponding feature identifier 525, a manual 2D/3D feature selector 535, a 3D model rendering unit 545, and a focused region determiner 555.
In some situations, the input tip location may be such that there is no distinct feature present. For instance, the instrument tip may be near the surface of a liver. In addition, in some situations, the detected 2D features may not be as good as needed. In the event that no adequately good 2D feature is detected via automatic means, the present teaching enables quality 2D feature detection by resorting to human assistance. The operation mode determiner 515 may assess, at 565, whether the automatically detected 2D features possess a desirable level of distinctiveness or quality for being used to identify corresponding 3D features. If so, an automatic operation mode is applied and the automatic 3D corresponding feature identifier 525 is activated, which then accesses, at 567, the 3D models 230 and automatically detects, at 569, 3D anatomical features from the 3D models that correspond to the detected 2D anatomical features.
If the 2D anatomical features are not satisfactory (i.e., either not detected or not of good quality), as determined at 565, the operation mode determiner 515 may control the process to identify 2D features in a manual mode by activating the manual 2D/3D feature selector 535, which may control an interface to communicate with a user to facilitate the user in manually identifying, at 572, the 2D features. Such manually identified 2D anatomical features are then provided to the automatic 3D corresponding feature identifier 525 for accessing, at 567, the 3D models and then automatically detecting, at 569, from the 3D models, the 3D anatomical features corresponding to the 2D features. To ensure the quality of the identified 3D corresponding anatomical features, the automatic 3D corresponding feature identifier 525 may assess, at 574, whether the identified 3D features are satisfactory based on some criteria. If they are satisfactory, both the 2D features and the corresponding 3D features may be fused in order to identify the 2D and 3D focused regions.
In the event that the 3D anatomical features are not satisfactory, the operation mode determiner 515 may activate the 3D model rendering unit 545 to render the 3D models in order to facilitate the manual 2D/3D feature selector 535 to interact with a user to manually identify, at 575, 3D corresponding anatomical features from the rendered 3D models. With the satisfactory corresponding 3D anatomical features (either automatically identified or manually selected), the 2D features and the corresponding 3D features can now be used to align (or orient) the 3D models with what is observed in 2D images (consistent with the perspective of the camera). That is, the 2D information as observed in 2D images (represented by, e.g., the 2D anatomical features) is fused, at 577, with 3D information from the 3D models (represented by, e.g., the corresponding 3D anatomical features) so that the focused region determiner 555 may then proceed to accordingly determine, at 579, the 2D focused region and the corresponding 3D focused region based on the fused 2D/3D information. As seen in
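The overall automatic/manual fallback flow described above may be sketched at a high level as follows; the helper callables stand in for the detectors, the user-interaction interface, and the fusion step (components 505, 525, 535, 545, and 555), and the quality checks are assumptions.

```python
# High-level sketch of the automatic/manual fallback flow for identifying
# 2D/3D focused regions; all callables and quality checks are assumptions.
def identify_focused_regions(tip_2d, frame, models,
                             detect_2d, detect_3d, ask_user_2d, ask_user_3d,
                             good_enough, fuse, to_regions):
    feats_2d = detect_2d(frame, tip_2d)
    if not good_enough(feats_2d):                 # automatic 2D detection inadequate
        feats_2d = ask_user_2d(frame, tip_2d)     # manual 2D selection via an interface
    feats_3d = detect_3d(models, feats_2d)
    if not good_enough(feats_3d):                 # automatic 3D correspondence inadequate
        feats_3d = ask_user_3d(models, feats_2d)  # manual selection on rendered 3D models
    fused = fuse(feats_2d, feats_3d)              # align model with the camera perspective
    return to_regions(fused)                      # (focused_region_2d, focused_region_3d)
```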
In this illustrated embodiment, the focused information display unit 260 comprises a registration mode determiner 650, a dynamic registration unit 600, and a focused overlay renderer 660. The registration mode determiner 650 may be provided to determine the mode of registration before the 3D model information representing the 3D focused information is rendered onto the 2D images. In some situations, the registration mode may be for registering a rigid body (e.g., when the organ being operated on is mostly rigid, such as bone). In other situations, the registration mode may have to be directed to registration of a deformable object, such as a heart, which may deform over time during an operation due to, e.g., the pumping of blood or the patient's breathing. The dynamic registration unit 600 is provided for registering the 3D information to be rendered (including, e.g., both focused and non-focused information) with the 2D images in a registration mode determined by the registration mode determiner 650. The focused overlay renderer 660 is provided to render the 3D focused information and non-focused information in the 2D images based on the registration result.
In some embodiments, the dynamic registration unit 600 may further comprise a 2D focused registration feature extractor 610 for extracting 2D feature points to be used for registration, a 3D focused corresponding feature extractor 620 for identifying 3D feature points corresponding to the 2D feature points, a rigid body registration unit 630 provided for performing rigid registration if it is called for, and a deformable registration unit 640 for performing deformable registration in a deformable registration mode. The registration result from either rigid registration or deformable registration is sent to the focused overlay renderer 660 so that the focused and non-focused information from the 3D model may be rendered by appropriately projecting the 3D information onto the 2D images based on the registration result.
In some embodiments, the dynamic registration unit 600 may also be realized using a model based registration implementation wherein deep-learned models (not shown) may be obtained via training data so that feature extraction and registration steps performed explicitly by 2D focused registration feature extractor 610, 3D focused corresponding feature extractor 620, rigid body registration unit 630, and deformable registration unit 640, may be instead performed implicitly via deep-learned models that incorporate knowledge learned in training in their embeddings and parameters of other layers. Such a model-based solution may take 2D images, 3D models and identified 2D/3D focused regions of interest as input and generate a registration result as output to be provided to the focused overlay renderer 660.
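Purely as an assumption-level illustration of such a model-based alternative (not the disclosed design), a small network might regress a 6-degree-of-freedom pose from a 2D frame concatenated with a rendering of the 3D model; the architecture and pose parameterization below are assumptions.

```python
# Illustrative sketch of a learned registration model: a small network that
# regresses a 6-DoF pose (axis-angle rotation + translation) from a 2D frame
# concatenated with a rendering of the 3D model. Architecture is an assumption.
import torch
import torch.nn as nn

class PoseRegressionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),   # 3-ch frame + 3-ch rendered model
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 6)                               # 3 rotation + 3 translation parameters

    def forward(self, frame, rendered_model):
        x = torch.cat([frame, rendered_model], dim=1)               # (B, 6, H, W)
        return self.head(self.backbone(x).flatten(1))               # (B, 6) pose parameters
```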
In the meantime, the registration mode determiner 650 may determine, at 730, the registration mode to be applied based on the type of the surgical procedure at issue. As discussed herein, in some situations rigid registration may be performed if the involved anatomical structures are rigid bodies (e.g., bones), while in other situations deformable registration may be used when the anatomical parts involved do deform over time. When a rigid registration is applicable, as determined at 740, the rigid body registration unit 630 performs, at 750, the registration based on 2D feature points as well as their corresponding 3D feature points. When a deformable registration is applied, the deformable registration unit 640 performs, at 760, deformable registration based on the 2D and 3D corresponding feature points. Upon completion of registration, the focused overlay renderer 660 renders, at 770, the focused information by projecting, based on the registration result, the 3D focused information from the 3D model onto the 2D images. In presenting information around the detected tool, other information that, although not focused information, is nevertheless in the vicinity of the surgical tool may also be rendered. As discussed herein, the focused and non-focused information may be rendered in different ways so that the focused information is displayed in a special way to create a clearer visual for the user, while the non-focused information is displayed in a way that provides context without taking attention away from the focused information.
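For the rigid-body case, one standard way to fit a rotation and translation from corresponding 3D feature point sets is a Kabsch-style least-squares solution; the sketch below is illustrative only and is not asserted to be the registration procedure of unit 630.

```python
# Minimal Kabsch-style rigid registration sketch: fit R, t from corresponding
# 3D point sets (e.g., back-projected 2D feature points vs. their 3D-model
# counterparts). Illustrative, not the disclosed registration procedure.
import numpy as np

def rigid_registration(model_pts, observed_pts):
    """Both inputs are (N, 3) corresponding point sets. Returns R, t such
    that R @ model_pts[i] + t approximates observed_pts[i]."""
    cm = model_pts.mean(axis=0)
    co = observed_pts.mean(axis=0)
    H = (model_pts - cm).T @ (observed_pts - co)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = co - R @ cm
    return R, t
```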
As discussed herein, there are different ways to render the focused information. Any implementation can be adopted so long as the focused information provides a clear visual to a user in contrast with the surrounding non-focused information. In this disclosure, some exemplary ways to render the focused information are provided but they are merely for illustration instead of as limitations.
For example, when the detected surgical tool is a scissor-type cutter and the surgery is a laparoscopic liver resection, the identified focused information may be a portion of the surface of the liver with a section of the resection boundary characterized by the 3D model of the target liver. The section of the resection boundary identified as focused information may be the part of the resection trajectory facing the opening of the cutter, determined based on the orientation of the cutter. In this example, the non-focused information may include a portion of the surface of the liver as modeled, where that portion of the liver surface is similarly determined to be near the opening location or along the orientation of the cutter detected from the 2D images. With both focused and non-focused information determined, their 3D representations from the 3D model are projected onto the 2D images. For example, the portion of the surface of the liver (non-focused but relevant information that provides the visual context for the focused information) is rendered with the part of the resection boundary (focused information) on the surgical trajectory superimposed as, e.g., cutting points at appropriate positions on the surface.
In some embodiments, focused and non-focused information may be rendered in special ways to enhance the quality of visual assistance to a user during a surgery. As discussed herein, some enhanced display options may be provided to improve the quality of the visual guidance. If the focused overlay renderer 660 does not support such enhanced display options, determined at 715, its display operation ends at 725. Otherwise, the focused overlay renderer 660 may be configured, by a user in each surgery, to apply certain enhanced display options based on the needs of the user. In this illustration, for example, the enhanced display process may check, at 735, whether it is configured to either dim the non-focused information or highlight the focused information displayed. If it is configured to dim the non-focused information (or even other background information in the 2D images), the focused overlay renderer 660 dims, at 745, the displayed non-focused information (and other parts of the 2D images). If it is configured to highlight the focused information, the focused overlay renderer 660 modifies the display of the focused information at 755. The modification may be applied to make the focused information visually more pronounced. Using the example above with focused information being a section of the resection boundary, the cut points of the section projected on the 2D images may be rendered using a bright color or with a maximum intensity. In some embodiments, both dim and highlight may be applied (not shown in
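As a minimal illustrative sketch of the dim/highlight options described above (the source of the focused-pixel mask and the scaling factors are assumptions):

```python
# Illustrative dim/highlight rendering: darken pixels outside the focused
# overlay mask and brighten pixels inside it. Factors are assumptions.
import numpy as np

def apply_enhanced_display(image, focused_mask, dim_factor=0.5, gain=1.5):
    """image: (H, W, 3) uint8 frame with overlays already rendered.
    focused_mask: (H, W) boolean mask of pixels carrying focused information."""
    out = image.astype(np.float32)
    out[~focused_mask] *= dim_factor                                 # dim non-focused/background pixels
    out[focused_mask] = np.clip(out[focused_mask] * gain, 0, 255)    # highlight focused pixels
    return out.astype(np.uint8)
```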
Another exemplary enhanced display option may be to provide an enlarged view of the part of the 2D image where the focused information is rendered. If the focused overlay renderer 660 is configured to do so, determined at 765, an area on the display screen may be identified, at 775, for presenting an enlarged view of a region of interest in the 2D image where the focused information is rendered. Then an enlarged view of the content in the region of interest may be generated and displayed, at 785, in the area of the display screen for the enlarged view.
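The enlarged-view option might be sketched as follows, under similar assumptions about where the inset is placed on the display; the layout choice (top-left corner) and zoom factor are illustrative only.

```python
# Illustrative enlarged-view option: crop the region of interest around the
# rendered focused information, magnify it, and paste it into a corner of the
# displayed frame. Placement and zoom factor are assumptions.
import cv2

def add_magnified_inset(frame, roi_xywh, zoom=2.0):
    x, y, w, h = roi_xywh
    inset = cv2.resize(frame[y:y + h, x:x + w], None, fx=zoom, fy=zoom,
                       interpolation=cv2.INTER_LINEAR)
    ih, iw = inset.shape[:2]          # assumes the enlarged view fits within the frame
    frame[0:ih, 0:iw] = inset         # place the enlarged view in the top-left corner
    return frame
```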
To implement various modules, units, and their functionalities described in the present disclosure, computer hardware platforms may be used as the hardware platform(s) for one or more of the elements described herein. The hardware elements, operating systems, and programming languages of such computers are conventional in nature, and it is presumed that those skilled in the art are adequately familiar therewith to adapt those technologies to appropriate settings as described herein. A computer with user interface elements may be used to implement a personal computer (PC) or other type of workstation or terminal device, although a computer may also act as a server if appropriately programmed. It is believed that those skilled in the art are familiar with the structure, programming, and general operation of such computer equipment, and as a result the drawings should be self-explanatory.
Computer 900, for example, includes COM ports 950 connected to and from a network connected thereto to facilitate data communications. Computer 900 also includes a central processing unit (CPU) 920, in the form of one or more processors, for executing program instructions. The exemplary computer platform includes an internal communication bus 910, program storage and data storage of different forms (e.g., disk 970, read only memory (ROM) 930, or random-access memory (RAM) 940), for various data files to be processed and/or communicated by computer 900, as well as possibly program instructions to be executed by CPU 920. Computer 900 also includes an I/O component 960, supporting input/output flows between the computer and other components therein such as user interface elements 980. Computer 900 may also receive programming and data via network communications.
Hence, aspects of the methods of information analytics and management and/or other processes, as outlined above, may be embodied in programming. Program aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of executable code and/or associated data that is carried on or embodied in a type of machine readable medium. Tangible non-transitory “storage” type media include any or all of the memory or other storage for the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide storage at any time for the software programming.
All or portions of the software may at times be communicated through a network such as the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, in connection with information analytics and management. Thus, another type of media that may bear the software elements includes optical, electrical, and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links, or the like, also may be considered as media bearing the software. As used herein, unless restricted to tangible “storage” media, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution.
Hence, a machine-readable medium may take many forms, including but not limited to, a tangible storage medium, a carrier wave medium, or a physical transmission medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) or the like, which may be used to implement the system or any of its components as shown in the drawings. Volatile storage media include dynamic memory, such as a main memory of such a computer platform. Tangible transmission media include coaxial cables, copper wire, and fiber optics, including the wires that form a bus within a computer system. Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a PROM and an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code and/or data. Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a physical processor for execution.
Those skilled in the art will recognize that the present teachings are amenable to a variety of modifications and/or enhancements. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software only solution, e.g., an installation on an existing server. In addition, the techniques as disclosed herein may be implemented as a firmware, firmware/software combination, firmware/hardware combination, or a hardware/firmware/software combination.
While the foregoing has described what are considered to constitute the present teachings and/or other examples, it is understood that various modifications may be made thereto and that the subject matter disclosed herein may be implemented in various forms and examples, and that the teachings may be applied in numerous applications, only some of which have been described herein. It is intended by the following claims to claim any and all applications, modifications and variations that fall within the true scope of the present teachings.
The present application is related to U.S. patent application Ser. No. ______ (Attorney Docket No. 140551.569672) filed on ______, entitled "SYSTEM AND METHOD FOR SURGICAL TOOL BASED MODEL FUSION", the contents of which are incorporated herein by reference in their entirety.