Surgeon head-mounted display apparatuses

Abstract
An augmented reality surgical system includes a head mounted display (HMD) with a see-through display screen, a motion sensor, a camera, and computer equipment. The motion sensor outputs a head motion signal indicating measured movement of the HMD. The computer equipment computes the relative location and orientation of reference markers connected to the HMD and to the patient based on processing a video signal from the camera. The computer equipment generates a three dimensional anatomical model using patient data created by medical imaging equipment, rotates and scales at least a portion of the three dimensional anatomical model based on the relative location and orientation of the reference markers, and further rotates the at least a portion of the three dimensional anatomical model based on the head motion signal to track measured movement of the HMD. The rotated and scaled three dimensional anatomical model is displayed on the display screen.
Description
TECHNICAL FIELD

The present disclosure relates to surgical operating room equipment and procedures and, more particularly, to generating anatomical models and other information to be displayed as augmented reality in surgical operating rooms.


BACKGROUND

It has become commonplace in operating rooms (ORs) for surgeons and other personnel to refer to many different types of visual patient data during surgical procedures. For example, a surgeon may refer to digital photographs and video from magnetic resonance imaging equipment, computed tomography scanning equipment, x-ray equipment, three-dimensional ultrasound equipment, endoscopic equipment, 3D computer modeling equipment, patient monitoring equipment, medical records databases, and other equipment during a surgical procedure.


ORs therefore typically include many display devices positioned at various locations that are expected to be viewable during a procedure. Personnel may refer to display devices hung from a ceiling, mounted on a wall, supported on a cart, etc. However, it can be difficult or impossible to position the display devices for convenient viewing by all necessary personnel. Mentally translating between the orientation of information shown on display devices that are angularly offset from the orientation of a patient's body can be particularly difficult, prone to errors, or inefficiently time-consuming for personnel during a procedure. Moreover, frequent shifting of focal reference from a patient surgical site to remotely located display devices may result in fatigue and have a deleterious effect on the quality of the procedure.


Another difficulty surgeons and other personnel have is relating what is viewed on the display devices to precise locations on a patient. For example, it can be difficult for a surgeon to identify where a particular location within a displayed x-ray or other image corresponds to on the patient. Moreover, a surgeon may use a 3D anatomical model to practice before a procedure, but may not be able to effectively use the model during surgery because of the inherent difficulty of relating the model to the patient in real-time.


SUMMARY

Some embodiments of the present disclosure are directed to an augmented reality surgical system that includes a head mounted display, a motion sensor, at least one camera, and computer equipment. The head mounted display includes a see-through display screen that displays images while allowing transmission of ambient light therethrough. The motion sensor is connected to the head mounted display and configured to output a head motion signal indicating measured movement of the head mounted display. The at least one camera is configured to observe reference markers connected to the head mounted display, reference markers connected to a patient, and reference markers connected to a surgical tool located within a surgical room. The computer equipment is configured to compute the relative location and orientation of the reference markers connected to the head mounted display and the reference markers connected to the patient based on processing a video signal from the at least one camera. The computer equipment is further configured to generate a three dimensional anatomical model using patient data created by medical imaging equipment that has imaged a portion of the patient, and to rotate and scale at least a portion of the three dimensional anatomical model based on the relative location and orientation of the reference markers connected to the head mounted display and the reference markers connected to the patient, and further rotate the at least a portion of the three dimensional anatomical model based on the head motion signal to track measured movement of the head mounted display. The computer equipment is further configured to generate a video signal based on the rotated and scaled three dimensional anatomical model, and to output the video signal to the display screen of the head mounted display.
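The following is a minimal sketch, in Python with NumPy, of the registration and transform step summarized above. The function names, the data layout (4x4 homogeneous marker poses in the camera frame), and the scale handling are illustrative assumptions rather than the system's actual implementation.

import numpy as np

def patient_to_hmd(T_cam_hmd, T_cam_patient):
    # Relative pose of the patient reference marker expressed in the HMD marker
    # frame, computed from the camera-frame poses of both markers.
    return np.linalg.inv(T_cam_hmd) @ T_cam_patient

def transform_model(vertices, T_patient_to_hmd, scale=1.0):
    # Scale the Nx3 anatomical model vertices, then apply the rigid
    # patient-to-HMD transform so the model can be rendered aligned with the
    # wearer's current view of the patient.
    homogeneous = np.hstack([vertices * scale, np.ones((len(vertices), 1))])
    return (T_patient_to_hmd @ homogeneous.T).T[:, :3]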


Some other embodiments of the present disclosure are directed to an augmented reality surgical system for displaying multiple video streams to a user. The augmented reality surgical system includes a head mounted display, a motion sensor, and computer equipment. The head mounted display includes a see-through display screen that displays images while allowing transmission of ambient light therethrough. The motion sensor is connected to the head mounted display and configured to output a head motion signal indicating measured movement of the head mounted display. The computer equipment is configured to receive a plurality of video streams from one or more source devices and to control which of the video streams are output as a video signal to the display screen based on the head motion signal.


Other systems, methods, and computer program products according to embodiments of the inventive subject matter will be or become apparent to one with skill in the art upon review of the following drawings and detailed description. It is intended that all such additional systems, methods, and computer program products be included within this description, be within the scope of the present inventive subject matter, and be protected by the accompanying claims. Moreover, it is intended that all embodiments disclosed herein can be implemented separately or combined in any way and/or combination.





BRIEF DESCRIPTION OF THE DRAWINGS

Other features of embodiments will be more readily understood from the following detailed description of specific embodiments thereof when read in conjunction with the accompanying drawings, in which:



FIGS. 1-8 illustrate a head mounted display apparatus that can be worn on a user's head and operates according to some embodiments of the present disclosure;



FIG. 9 is a block diagram of electronic components of a computer system that includes a head mounted display apparatus configured according to some embodiments of the present disclosure;



FIGS. 10-12 illustrate operations and methods that may be performed by a system that includes a head mounted display apparatus to control the display of virtual display panels through a display screen of a head mounted display apparatus according to some embodiments of the present disclosure;



FIG. 13 is a block diagram of components of an augmented reality surgical system that tracks the location of surgical tools, a surgeon's head mounted display, and parts of a patient's anatomy, and generates a three dimensional (3D) model from patient data that is displayed on the head mounted display to be rendered super-imposed at a visually aligned location on the patient's body in accordance with some embodiments of the present disclosure;



FIG. 14 is another block diagram of the electronic components of the augmented reality surgical system of FIG. 13 according to some embodiments of the present disclosure;



FIG. 15 is another block diagram that illustrates further example operations of electronic system subcomponents of the augmented reality surgical system of FIG. 13 according to some embodiments of the present disclosure;



FIGS. 16 and 17 are block diagrams that illustrate further example operations of electronic system subcomponents that can be included in the augmented reality surgical system of FIG. 13 according to some other embodiments of the present disclosure;



FIG. 18 illustrates a graphical image generated on the head mounted display showing a virtual trajectory of a surgical tool into a patient's anatomy relative to an anatomical model generated from 3D computerized tomography (CT) scan data in accordance with some embodiments of the present disclosure;



FIG. 19 illustrates another graphical image generated on the head mounted display showing a cross-sectional slice along plane 19-19 in FIG. 18 rotated to provide a front view;



FIG. 20 illustrates another graphical display generated on the head mounted display showing a sequence of cross-sectional slices of the 3D CT scan anatomical model spaced apart along the virtual trajectory of the tool and illustrates points of intersection between the virtual trajectory and the slices;



FIG. 21 illustrates another graphical display generated on the head mounted display showing a sequence of cross-sectional slices of the 3D CT scan anatomical model spaced apart along the virtual trajectory of the tool and oriented in planes perpendicular to the virtual trajectory of the tool; and



FIGS. 22 and 23 each illustrate a graphical image generated on the head mounted display showing the cross-sectional slices along planes 22-22 and 23-23, respectively, in FIG. 21 rotated to provide a front view.





DETAILED DESCRIPTION

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention. It is intended that all embodiments disclosed herein can be implemented separately or combined in any way and/or combination.


Embodiments of the present disclosure are directed to an augmented reality surgical system that includes a head mounted display (HMD) apparatus that can be worn by a surgeon, physician, or other personnel during a medical procedure. The HMD can be configured to provide localized, real-time situational awareness to the wearer. The HMD includes a display screen that can be positioned within the line-of-sight and/or periphery Field Of View (FOV) of the wearer to provide visual information that can be organized and displayed as a single virtual display or as a collection of virtual displays that a wearer can navigate between to view using head movement, hand gestures, voice commands, eye control, and/or other operations disclosed herein.


A surgeon or other person can wear the HMD to see a graphical representation of what is within the patient's body but covered from view by skin, muscle, organs, skeletal structure, etc. Using the HMD can enable a surgeon to minimize the size of an incision by observing where the incision needs to be made to reveal a targeted portion of the body. Similarly, the HMD can be used when replacing a bone with a prosthesis to enable a surgeon to observe an exact positioning reference that aids with orienting and moving surgical tools and the prosthesis during the procedure. The HMD may operate to improve the efficiency, productivity, throughput, accuracy, and/or safety of the wearer's performance of a medical procedure. Moreover, the HMD can reduce mental fatigue by reducing or eliminating a need for the wearer to reference remote display devices having substantial angular offsets during a medical procedure.


Although various embodiments of systems having HMDs are described for use in the environment of a surgical operating room, they may be used in other applications. For example, the systems may be used within industrial environments such as warehousing applications, manufacturing applications, product inspection applications, and/or maintenance applications.


Example Head Mounted Display Apparatuses


FIG. 1 illustrates a HMD apparatus 100 (also “HMD 100” for brevity) configured according to some embodiments of the present disclosure. Referring to FIG. 1, the HMD 100 includes a semitransparent display screen 110 connected to a display module that processes and displays video and other images on the display screen 110 (e.g., an LCD display, a reflective screen on which the display module projects images, etc.) for viewing by a user. The display module may be within a housing 118 of the HMD 100 or may be contained within a communicatively connected computer system.


In the illustrated embodiment, the HMD 100 is mounted to a headband 120 and positioned so that the display screen 110 extends within the peripheral vision of the user. The housing 118 encloses electronic components that display information on the display screen 110 and may operate in combination with remote but communicatively connected computer equipment and/or with computer equipment integrated within the housing 118 to sense and interpret movement of the head, sense and interpret gestures made by a user's hands or other objects, and/or sense and interpret voice commands by the user. The display screen 110 can provide a monocular see-through display or a stereo set of see-through displays so that the user can view information displayed on the display while looking through the display to view other objects. The headband 120 may include a headlamp, camera, or other apparatus that can be worn on a user's head.


The user is illustrated as wearing glasses 150 that include through-the-lens (TTL) loupes 152, protruding from lenses of the glasses 150, that provide magnified viewing to the user. The display screen 110 extends downward from the housing 118 and is positionable by the user to be in the user's field-of-view or immediately adjacent to the TTL loupes 152 within the user's peripheral vision. The display screen 110 can be a see-through display device allowing the user to see video superimposed on the environment seen through the display screen 110.


The TTL loupes 152 may not be included in the HMD 100 when the display screen 110 is configured to be in the direct line-of-sight of the user. Alternatively, the display screen 110 can be positioned adjacent to the TTL loupes 152 so that the user can make a minor upward shift in eye line-of-sight from looking through the TTL loupes 152 to instead view information displayed on the display screen 110. In some embodiments the display screen 110 may be incorporated within one or both TTL loupes 152 so that the user can look through the TTL loupes 152 to view graphical images super-imposed on objects within the FOV of the TTL loupe 152. The HMD 100 may be configured to be attachable to any type of eyeglass frames, including prescription glasses, protective glasses, frames without lenses, transparent or protective shields, etc.


The display screen 110 can be moved by a user to a location providing convenient visual reference through a two-arm friction-joint linkage 112 that provides telescopic and up-and-down adjustment of location of the housing 118. A ball-and-socket joint 114 is connected between the linkage 112 and the housing 118 to provide planar adjustment for the display screen 110. A pivot joint 116 connected between the ball-and-socket joint 114 and the housing 118 allows the user to pivot the housing 118 and connected display screen 110. The display screen 110 can thereby be flipped-up outside the user's peripheral vision when not being used.


The HMD 100 may include inertial sensors or other sensors, such as a gyroscope, an accelerometer (e.g., a multi-axis accelerometer), and/or a magnetometer, that output a signal indicating a measurement of movement or static orientation of the user's head while wearing the HMD 100. For example, the motion sensor may output a head motion signal that indicates yaw (i.e., rotation of the user's head left or right), pitch (i.e., rotation of the user's head up or down), and/or roll (i.e., side-to-side tilt of the user's head). The sensors may be spaced apart on the headband 120 or enclosed within the housing 118.


The HMD 100 may include at least one camera facing away from the user that outputs video and/or other images for processing and relay to other HMDs 100 worn by other personnel assisting with the procedure, to other display devices, and/or to a video server for storage. For example, the camera may be configured to be aligned with the user's line-of-sight when the user has adjusted the display screen 110 to be comfortably viewed by the user. When more than one camera is connected to the HMD 100, video streams from the cameras can be provided to an operational function that estimates distance to an object viewed by the cameras. The operational function can include triangulation of distance to the object based on angular offset of the object viewed in the video streams and a known distance between the cameras.
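As one hedged illustration of such an operational function, the following sketch triangulates depth for two parallel, forward-facing cameras from the object's bearing angle in each video stream and the known baseline between the cameras. The function name and the parallel-axis assumption are illustrative, not a description of the actual implementation.

import math

def depth_from_angles(baseline_m, angle_left_rad, angle_right_rad):
    # Bearings are measured from each camera's optical axis, positive toward
    # the other camera; for parallel cameras the depth along the optical axes
    # follows from baseline = depth * (tan(angle_left) + tan(angle_right)).
    denom = math.tan(angle_left_rad) + math.tan(angle_right_rad)
    if denom <= 0:
        raise ValueError("Viewing rays do not converge in front of the cameras")
    return baseline_m / denom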


The at least one camera may be connected to a gesture interpretation module configured to sense gestures made by a user's hands or other objects, recognize a gesture as corresponding to one of a plurality of defined commands, and trigger operation of the command. The HMD 100 may include a microphone connected to a voice interpretation module configured to recognize a received voice command as corresponding to one of a plurality of defined voice commands, and trigger operation of the command.


The headband 120 may have a plurality of attachment points where inertial sensors, camera, microphone, etc. can be releasably attached. Some of the attachment points may have rigid supporting structures between them to maintain a defined physical alignment between the attached inertial sensors, etc.



FIG. 2 illustrates a side view of another HMD 200 with a display screen 210 and electronic components 214 (shown without a housing) which are configured according to some embodiments of the present disclosure. The display screen 210 extends downward from the electronic components 214 to be in the user's line-of-sight or immediately adjacent TTL loupes 152 within the user's peripheral vision. The electronic components 214 are connected to the headband 120 via a pivot 212 allowing the electronic components 214 and connected display screen 210 to be flipped-down to a deployed position as shown in FIG. 2 and flipped-up to a stored position when the user does not desire to view the display screen 210.



FIG. 3 illustrates another HMD 300 configured according to some embodiments of the present disclosure. The HMD 300 includes a display screen illustrated behind a protective shield 310 that extends downward from a housing 318 enclosing electronic components. The display screen and/or the protective shield 310 may include a coating that provides variable contrast to enhance viewability of displayed information while subject to a range of ambient brightness. The protective shield 310 may provide a variable focal point (diopter). The protective shield 310 can be flipped from a stored up-position to a protective down-position (as shown in FIG. 3) to cover an outside surface of the display screen that faces a patient and to protect the display screen from fluids and other materials occurring during a procedure. The display screen can be moved by a user through a two-arm friction-joint linkage 312 that provides telescopic and up-and-down adjustment of location of the housing 318 to enable a user to position the display screen at a location providing convenient visual reference. A ball-and-socket joint 316 is connected between the linkage 312 and the housing 318 to provide planar adjustment for the display screen. The linkage 312 is connected to the headband 120 through a pivot joint 314 to allow the user to flip the housing 318 and connected display screen up and down. The display screen can thereby be flipped-up outside the user's line-of-sight or the user's peripheral vision when not being used.



FIG. 4 illustrates another HMD 400 configured according to some embodiments of the present disclosure. The HMD 400 includes a display screen 410 that extends downward from a housing 418 enclosing electronic components. The display screen 410 and housing 418 are connected to a ball-and-socket joint 416 which provides planar adjustment for the display screen 410. The ball-and-socket joint 416 is connected to a pivot 414 that allows the housing 418 and connected display screen 410 to be pivoted up and down, so that the display screen 410 can be flipped-up outside the user's line-of-sight or the user's peripheral vision when not being used. The pivot 414 is connected to a sliding arm 412 that connects to the headband 120. The sliding arm 412 provides telescoping adjustment to allow user placement of the display screen 410 at a desired distance from the user's eye.



FIG. 5 illustrates a front view of the HMD 400 of FIG. 4 with the housing 418 removed to expose printed circuit boards (PCBs) 450 which operationally connect the electronic components mounted thereon. The electronic components display images on the display screen 410, and may operate in combination with integrated or remote computer equipment to sense and interpret movement of the head, sense and interpret gestures made by a user's hands, eyes, or other objects, and/or sense and interpret voice commands by the user. The PCBs 450 are tilted at a defined non-zero angle relative to vertical to reduce the profile cross-section of the housing 418. For example, the PCBs 450 can extend generally diagonally across the housing 418.



FIG. 6 illustrates another HMD 500 having a single display screen connectable to an eyeglass frame to provide monocular viewing by the user. FIG. 7 illustrates another HMD 502 including a pair of display screens that are connectable to opposite sides of an eyeglass frame to provide binocular viewing. Although the display screens in FIGS. 6 and 7 are shown as being opaque, they may instead allow a user to see through the display screen while viewing information displayed thereon.



FIG. 8 illustrates example design parameters that may be used when configuring a HMD to allow movement of the display screen to accommodate anticipated variation in user head geometries. Referring to FIG. 8, the spread between the minimum and maximum eye position inward from the forehead is about 10 mm. Consequently, the HMD may preferably be configured to allow variation in the distance between the display screen and a user's eye of up to 27 mm and, more preferably, 13 mm.


Example Computer System Incorporating a Head Mounted Display Apparatus


FIG. 9 is a block diagram of electronic components of a computer system that includes a HMD apparatus 600, computer equipment 620, and a surgical video server 650. The video server 650 can be connected via a data network 640 to a patient database 642, imaging equipment 644, and other electronic equipment 646. The HMD 600 may correspond to any of the HMDs of FIGS. 1-8. Although the computer equipment 620 is illustrated as being separate from the HMD 600, some or all of the operations disclosed herein as being performed by the computer equipment 620 may additionally or alternatively be performed by one or more processors residing within the HMD 600. Similarly, some of the operations disclosed herein as being performed by the HMD 600 may additionally or alternatively be performed by one or more processors residing within the computer equipment 620.


The video server 650 can receive, store, and route information and video streams between the patient database 642, imaging equipment 644, other electronic equipment 646, and the HMD 600. As used herein, a video stream can include any type of information that can be provided to a display device for display, including without limitation a still image (e.g., digital photo), a sequence of still images, and video having frames provided at a defined frame rate. The imaging equipment 644 may include endoscope cameras, magnetic resonance imaging equipment, computed tomography scanning equipment, three-dimensional ultrasound equipment, endoscopic equipment, and/or computer modeling equipment which can generate a multidimensional (e.g., 3D) model based on combining images from the imaging equipment. The patient database 642 can retrievably store information relating to a patient's medical history, and may store patient images from earlier procedures conducted via the imaging equipment 644. The other equipment 646 may provide information relating to real-time monitoring of a patient, including, for example, hemodynamic, respiratory, and electrophysiological signals.


The computer equipment 620 operationally interfaces the HMD 600 to the video server 650. The computer equipment 620 includes a video capture card 622 that can simultaneously receive a plurality (N) of video streams and information (e.g., textual descriptions, audio signals, etc.) from the video server 650 and/or directly from the imaging equipment 644, the patient database 642, and/or the other equipment 646. The computer equipment 620 may communicate with the video server 650, the HMD 600, and other equipment of the system via a wireless and/or wired network interface 628 using any appropriate communication medium, including but not limited to a wireless air interface (e.g., 3GPP Long Term Evolution (LTE), WLAN (IEEE 802.11), WiMax, etc.), wireline, optical fiber cable, or any combination thereof. In the example embodiment of FIG. 9 the video capture card 622 simultaneously receives up to 4 video streams via 4 HDMI interfaces. In one embodiment the HMD 600 is communicatively connected to the computer equipment 620 via an HDMI cable, a USB or RS422 cable connected to the motion sensor 604 and/or gesture sensor 602, and a USB 3.0 or firewire cable connected to the camera 610. A microphone 612 can be connected to the computer equipment 620. The video and/or sensor signaling may alternatively be communicated between the HMD 600 and the computer equipment 620 through a wireless air interface, such as the network interface 628.


The HMD 600 includes a display module 606 that processes and displays video and other images on the display screen 608 (e.g., an LCD display, a reflective screen on which the display module 606 projects images, etc.) for viewing by a user. It may be preferable for the display screen 608 to be a see-through display device allowing the user to see displayed video superimposed on what is viewed through the display screen 608. The video streams received by the video capture card 622 are processed by a graphics processing unit (GPU) 638, conditioned by a display driver 614, and provided to the display module 606 for display on the display screen 608. A symbol generator 624 may add graphical indicia and/or textual information to the video stream(s) provided to the HMD 600 based on information received from the video server 650 (e.g., via the patient database 642).


The display driver 614 may reside in the computer equipment 620 or the HMD 600. In one embodiment, the display driver 614 receives video via a HDMI interface from the GPU 638, and converts the digital video signal to an analog video signal which is output as low-voltage differential signaling (LVDS) to the display module 606. The display driver 614 may also provide power and/or other signaling to the display module 606 via a LED drive signal.


The HMD 600 can include a camera 610, or a plurality of the cameras 610, facing away from the user that outputs video and/or other images via the wireless and/or wired network interface 628, illustrated as a HDMI cable in FIG. 9, to the GPU 638 for processing and relay to the video server 650 for storage and possible further relay to other HMDs 600 worn by other personnel assisting with the procedure. For example, the camera 610 may be configured to be aligned with the user's line-of-sight when the user has adjusted the display screen 608 to be comfortably viewed by the user. A video signal from the camera 610 can be processed through the computer equipment 620 and provided to the video server 650 for recording what the user is viewing during the procedure and/or can be provided as a real-time video stream to other HMDs 600 worn by personnel assisting with the procedure so that the personnel can observe what the user is seeing. The video signal from the camera 610 may be augmented by the symbol generator 624 with one or more designation symbols. The augmented symbols may, for example, identify the user as the source of the video stream and/or be added to a video stream by the user to identify observed features, such as a patient's anatomy.


The HMD 600 may include a motion sensor 604 and/or a gesture sensor 602. The motion sensor 604 may be a gyroscope, accelerometer (e.g., a multi-axis accelerometer), and/or tilt sensor that outputs a head motion signal indicating a measurement of movement of the user's head while wearing the HMD 600. The motion sensor 604 may be powered by the computer equipment 620 and may output the head motion signal via a communication interface, such as a RS-422 serial digital interface. For example, the motion sensor 604 may output a head motion signal that indicates yaw movement (i.e., rotation of the user's head left or right) and/or indicates pitch movement (i.e., rotation of the user's head up or down).


The motion sensor 604 may be a sourceless orientation sensor. The head motion signal may be processed by the HMD 600 and/or by the computer equipment 620 to compensate for drift error introduced by the motion sensor 604. In one embodiment, one directional reference (e.g., yaw) component of the head motion signal is corrected toward zero responsive to another reference component (e.g., pitch) of the head motion signal being within a threshold offset of a defined value. For example, yaw drift error in the head motion signal can be determined based on monitoring yaw values of the motion signal while the user is looking down at a defined pitch (e.g., pitch being within a threshold range of a defined value) to align the user's eyes with an object (e.g., when a surgeon repetitively looks down to view a surgical site of a patient). In one embodiment, responsive to the pitch component of the head motion signal indicating that a surgeon is looking down for at least a threshold time that is indicative of the surgeon visually concentrating on a surgical site, the computer equipment 620 assumes that the HMD 600 is stabilized along the yaw axis and computes yaw drift error based on measured change in the yaw component over a defined time interval. The head motion signal is then compensated to remove the determined yaw drift error. In another embodiment, the computer equipment 620 measures drift in the yaw component of the head motion signal while a static image is displayed on the display screen, assuming that the surgeon's head is stabilized along the yaw axis, and then compensates the head motion signal to remove the measured drift in the yaw component.
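A minimal sketch of this drift compensation approach follows. The class structure, the -45 degree look-down pitch, the tolerance, and the dwell time are illustrative assumptions, and a deployed system would likely use a more elaborate estimator.

class YawDriftCompensator:
    def __init__(self, pitch_down_deg=-45.0, pitch_tolerance_deg=5.0, dwell_s=2.0):
        self.pitch_down_deg = pitch_down_deg
        self.pitch_tolerance_deg = pitch_tolerance_deg
        self.dwell_s = dwell_s
        self.drift_rate_deg_s = 0.0   # estimated yaw drift rate (gyro bias)
        self._dwell_start = None      # (time, yaw) recorded when look-down began

    def update(self, t_s, yaw_deg, pitch_deg):
        # When the wearer has been looking down at the surgical site for at least
        # the dwell time, assume the head is steady in yaw and attribute any
        # measured yaw change over that interval to sensor drift.
        looking_down = abs(pitch_deg - self.pitch_down_deg) <= self.pitch_tolerance_deg
        if looking_down:
            if self._dwell_start is None:
                self._dwell_start = (t_s, yaw_deg)
            else:
                t0, yaw0 = self._dwell_start
                if t_s - t0 >= self.dwell_s:
                    self.drift_rate_deg_s = (yaw_deg - yaw0) / (t_s - t0)
        else:
            self._dwell_start = None
        # Return the yaw component with the accumulated drift removed.
        return yaw_deg - self.drift_rate_deg_s * t_s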


The head motion signal may be processed by the HMD 600 and/or by the computer equipment 620 to identify an origin for one or more directional reference components from which movement is referenced. For example, an origin location from which yaw is measured may be identified based on an average (e.g., median or mode) of a yaw component of the head motion signal during times when the user is looking down at a defined pitch to align the user's eyes with an object (e.g., surgeon looking down to view a surgical site).


The directional reference (e.g., pitch or yaw) of the head motion signal, which is defined to trigger compensation for drift error and/or which is defined as a reference origin for movement measurement, may be identified based on the user maintaining a substantially constant orientation of the HMD 600 for a threshold time (e.g., dwell time). For example, when a surgeon has maintained a relatively constant head position while viewing a surgical site of a patient for a threshold time, the directional reference (e.g., pitch or yaw) of the head motion signal during that dwell time can be used as a basis for compensating for drift error and/or setting a reference origin for display of the virtual display panels illustrated in FIGS. 10-12. In one embodiment, the head motion signal may be processed by the HMD 600 and/or by the computer equipment 620 to estimate gyroscope bias(es) giving rise to yaw drift and/or pitch drift accumulating over time, based on pseudo-measurements of the yaw and/or the pitch provided by the head motion signal, which are expected to be nearly zero each time the surgeon looks down at the same surgical site and steadies the head to center the line-of-sight at the same location on the patient.


The gesture sensor 602 may include any type of sensor that can sense a gesture made by a user. In a surgical environment, use of a gesture sensor 602 to receive a gesture-based command from a surgeon or other OR personnel can be advantageous because it avoids a need for the user to touch a non-sterile surface of the HMD 600 or other device. The gesture sensor 602 may include the camera 610 which outputs video (e.g., RGB-D video) displaying movement of a user's hand, fingers, arms or other objects moved by the user along a pathway that the user knows will define a command identifiable by an operational surgical program (OSP) 632 and/or another component of the system. The camera 610 or another camera may be directed toward one of the user's eyes to identify a dwell time of the eye, blink timing, and/or movement of the eye to generate a command from the user to control what is displayed on the display screen 608.


The gesture sensor 602 may alternatively or additionally include one or more photoelectric motion and/or proximity sensors. In one embodiment, the gesture sensor 602 has a plurality of infrared emitters and a plurality of photodiodes. Adjacent pairs of an infrared emitter and a photodiode are spaced apart and arranged to form a directional array facing outward from a housing of the HMD 600 to sense presence of a user's hand adjacent to the array and/or to sense a direction of movement as the user's hand is moved across the array. A user may, for example, swipe a hand in a first direction across the array (without touching the housing) to input a first type of gesture recognized by the OSP 632 executed by the processor 626, which triggers a first type of operation by the OSP 632. The user may swipe the hand in a second direction, about opposite to the first direction, across the array to input a second type of gesture that triggers a second type of operation by the OSP 632, or swipe the hand in a third direction, about perpendicular to the first direction, across the array to input a third type of gesture that triggers a third type of operation by the OSP 632. Other directions of movement can similarly be identified as other types of gestures provided by the user to trigger other types of operations by the OSP 632.
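The following sketch illustrates one way such a directional array could be interpreted. The four-sensor layout, the timing representation, and the command names are assumptions for illustration and do not reflect a specific implementation of the OSP 632.

def swipe_direction(first_detect_times):
    # first_detect_times maps sensor position ('left', 'right', 'top', 'bottom')
    # to the time its photodiode first saw the reflected infrared pulse; the
    # sensor that fires first indicates where the swipe started.
    lr = first_detect_times["right"] - first_detect_times["left"]
    tb = first_detect_times["bottom"] - first_detect_times["top"]
    if abs(lr) >= abs(tb):
        return "swipe_right" if lr > 0 else "swipe_left"
    return "swipe_down" if tb > 0 else "swipe_up"

GESTURE_COMMANDS = {  # hypothetical mapping interpreted by the OSP
    "swipe_right": "next_video_stream",
    "swipe_left": "previous_video_stream",
    "swipe_up": "zoom_in",
    "swipe_down": "zoom_out",
}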


In another embodiment the gesture sensor 602 includes an ultrasonic echo ranging transducer that senses signal echo reflections from a user's hand and outputs a signal to the processor 626 which identifies gestures formed by movement of the hand. In another embodiment the gesture sensor 602 includes a capacitive sensor that senses presence of a user's hand through capacitive coupling between a charge plate and the user's hand. A plurality of capacitive sensors may be spaced apart to form the gesture sensor 602 and configured to sense a direction of movement of the user's hand relative to the array of charge plates (e.g., sense the order in which the plates experience increased coupling to the user's hand). Different sensed directions of movement can be interpreted by the OSP 632 and/or another component of the system as representing different commands selected by the user for operation.


The HMD 600 can include a microphone 612 configured to receive voice commands from a user. The processor 626 executing the OSP 632 and/or another component of the system can be configured to recognize a received voice command as corresponding to one of a plurality of defined voice commands, and trigger operation of a command corresponding to the recognized voice command to control information displayed on the display screen 608.


The computer equipment 620 includes a general processor 626 and memory 630. The processor 626 may include one or more data processing circuits, such as a general purpose processor and/or special purpose processor, such as a microprocessor and/or digital signal processor. The processor 626 is configured to execute computer program code in the memory 630, described below as a non-transitory computer readable medium, to perform at least some of the operations described herein. The computer program code may include the OSP 632. The OSP 632 when executed by the processor 626 causes the processor 626 to perform operations in accordance with one or more embodiments disclosed herein. The computer equipment 620 may further include a speaker, a user input interface (e.g., touch screen, keyboard, keypad, etc.), a display device, etc.


In one embodiment, the video signal from the camera 610 is displayed on the display device 608 of the same HMD 600, and the symbol generator 624 in combination with the OSP 632 processed by the processor 626 may operate to display graphical indicia (e.g., reticle) that can be positioned within a plane of the video stream by the user responsive to recognition of voice commands via the microphone 612, to track movement of the user's finger, hand, or other object recognized in the video stream responsive to the gesture sensor 602 (e.g., via the camera 610), and/or to track motion sensed by the motion sensor 604. The user may trigger the OSP 632 to capture a still image from the video stream with the incorporated graphical indicia responsive to a voice command, a gesture sensed by the gesture sensor 602, and/or a motion sensed by the motion sensor 604. In this manner, a surgeon may view video of a surgical site and steer a graphical indicia to be aligned within the video overlapping a point-of-interest in the patient's anatomy, and trigger capture of a still image including the video and graphical indicia that can be saved on the video server 650 and/or distributed to another HMD 600 for viewing by another surgeon or person.


In a further embodiment a user can view video from the camera 610 and steer a graphical indicia to be aligned with a location within a plane of the video stream, and can trigger recordation of the present positional alignment of the camera 610. The OSP 632 may then operate to maintain alignment of the graphical indicia displayed in the display screen 608 between, for example, the user's visual line-of-sight and the defined location as the user moves the HMD 600 (i.e., rotates the head up/down and/or right/left).


As the user looks away from the defined location, the OSP 632 responds to the head motion signal by correspondingly moving the graphical indicia across the display screen 608 to maintain visual alignment with the defined location, until the graphical indicia is no longer displayed because the defined location is no longer in the user's field of view. Similarly, as the user looks back toward the defined location, the OSP 632 responds to the head motion signal by making the graphical indicia reappear at a corresponding edge of the display screen 608 and then further tracks movement to maintain visual alignment with the defined location as the user continues to rotate the HMD 600 toward the defined location. In this manner, a surgeon can virtually mark a location within a surgical site using a graphical indicia and can subsequently track that marked location based on the location of the graphical indicia within the display screen 608 while the surgeon's head moves. The graphical indicia may be included in a video stream from the camera 610 that is communicated, e.g., as a real-time video stream, to another HMD 600 worn by another person and/or that is communicated to the video server 650 for recordation and/or forwarding to other devices.
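A simplified sketch of this screen-space tracking follows. The field-of-view values, the screen resolution, and the linear angle-to-pixel mapping are assumptions chosen for illustration.

def indicia_screen_position(marked_yaw, marked_pitch, head_yaw, head_pitch,
                            fov_h_deg=40.0, fov_v_deg=22.5,
                            screen_w_px=1280, screen_h_px=720):
    # Angular offset between the marked line-of-sight and the current head
    # orientation (positive dx: marked location to the right; positive dy: above).
    dx_deg = marked_yaw - head_yaw
    dy_deg = marked_pitch - head_pitch
    if abs(dx_deg) > fov_h_deg / 2 or abs(dy_deg) > fov_v_deg / 2:
        return None  # marked location outside the field of view; indicia not drawn
    x_px = screen_w_px / 2 + (dx_deg / fov_h_deg) * screen_w_px
    y_px = screen_h_px / 2 - (dy_deg / fov_v_deg) * screen_h_px
    return x_px, y_px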


The computer equipment 620 may compare patterns of objects in the video stream from the camera 610 viewing a patient's anatomy to patterns of objects in other video (e.g., images, etc.) from the video server 650 and/or the imaging equipment 644 to identify levels of correspondence (e.g., output by a pattern matching algorithm). The computer equipment 620 may display indicia in the display screen 608 responsive to identifying a threshold level of correspondence between compared objects. For example, real-time video captured by the camera 610 during surgery of a patient may be processed by the computer equipment 620 and compared to video captured by one or more other sources. The other source(s) can include real-time feeds and/or earlier stored video provided by, for example, ultrasound equipment, cameras, CT scans, etc. The other source(s) may additionally or alternatively include an anatomical database specific for the particular patient or more generally for humans. The pattern matching may be constrained to characteristics of an object or a set of objects defined by a surgeon as being relevant to a present procedure. The computer equipment 620 may display on the display screen 608 an indicia (e.g., a crosshair or color marker) aligned with the identified object within the video from the camera 610 to assist the surgeon with identifying a location within the object of interest. The video sources may include an embedded marker that indicates a location within an object that is of interest to the surgeon. The pattern matching may further identify a location within the video stream from the camera 610 that corresponds to a location of the marker in the compared video, and an indicia may be displayed on the display screen 608 aligned with the location within the video from the camera 610 to assist the surgeon with identifying the location within the object of interest.
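As a hedged example of one possible comparison step (the disclosure does not tie the comparison to a specific algorithm), the following sketch uses generic OpenCV template matching to locate a reference patch from stored imagery within a live camera frame, returning a location for the indicia only when the correspondence score exceeds a threshold.

import cv2

def find_correspondence(camera_frame_gray, reference_patch_gray, threshold=0.8):
    # Normalized cross-correlation of the reference patch against the frame.
    result = cv2.matchTemplate(camera_frame_gray, reference_patch_gray,
                               cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None  # correspondence below threshold; no indicia displayed
    h, w = reference_patch_gray.shape[:2]
    # Return the center of the best match as the location for the indicia.
    return (max_loc[0] + w // 2, max_loc[1] + h // 2)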


In some other embodiments, the user operates a handheld (e.g., wireless controller) control panel and/or a foot control panel to provide commands or other input to control operation of the computer equipment 620. A handheld control panel and/or foot control panel may be operated to select among a plurality of video streams that are provided to the HMD 600 for viewing, control magnification of the video stream provided to the HMD 600, control the location of a graphical indicia displayed within a video stream, control stop-start recordation of a video stream from the camera 610, and/or control routing of video streams and other information between the HMD 600, the video server 650, other HMDs 600, and other components of the system.


For example, a user may operate a portable computer, such as a laptop computer, tablet computer, mobile phone, or wearable computer to control the display of information on the display screen 608. For example, a tablet computer may be configured to select among a plurality of virtual display panels for display on the display screen 608. The tablet computer may select among the virtual display panels responsive to a user moving (e.g., rotating) the tablet computer, responsive to identifying a hand gesture via a camera of the tablet computer and/or via a touch sensitive display of the tablet computer, and/or responsive to recognizing a voice command via a microphone of the tablet computer. A user may, for example, rotate the tablet computer horizontally to scroll through a plurality of virtual display panels that are virtually organized along a horizontal plane. The user may similarly rotate the tablet computer vertically to scroll through a plurality of virtual display panels that are virtually organized along a vertical plane.


Virtual Displays Controlled by Head Mounted Display Apparatus

The computer equipment 620, via operation of the OSP 632, can generate virtual display panels that display video received from the video server 650 and/or other components of the system, and control which of the virtual display panels are presently displayed on the display screen 608 for viewing by a user based on user commands identified from motion sensed by the motion sensor 604, a gesture made by the user which is sensed by the gesture sensor 602 and/or the camera 610, and/or a voice command via the microphone 612. The virtual display panels can be arranged and visually presented to the user to visually appear to float within space in front of the user. The user's head may be rotated up and down and/or right and left to observe content of different ones of the virtual display panels that appear to retain a static position in space relative to the user's head. The user may alternatively or additionally make a gesture, such as by moving a hand left or right and/or moving the hand up or down, to cause the virtual display panels to correspondingly slide left or right and/or up or down.


The display screen 608 may be controlled to display only one of the virtual display panels at a time, such as by switching from one virtual display panel to another adjacent virtual display panel, or display a more continuous panning across the virtual display panels through which a user may view portions of two or more adjacent virtual display panels.


If the user finds it undesirable for the virtual display panels to move under control of the computer equipment 620 while looking around, the computer equipment 620 may enable the user to input a command (e.g., “Lock”) which causes whichever virtual display panel is presently closest to the user's line-of-sight to be displayed full screen and held statically in place without responding to head movement. The virtual display panel may remain statically locked as-displayed until the user deactivates the command via, for example, another command (e.g., “Unlock”).


The computer equipment 620 may be configured to provide an automatic-lock and/or unlock of a virtual display panel relative to the display screen 608. When an automatic mode is enabled, a virtual display panel becomes automatically locked relative to the display screen 608 when the user's line-of-sight (e.g., yaw and pitch) indicated by the head motion signal is within a first threshold amount (e.g., 2°) offset from a defined location (e.g., yaw and pitch) of the virtual display panel. The computer equipment 620 may be configured to automatically unlock the virtual display panel from the display screen 608 when the user's line-of-sight (e.g., yaw and pitch) indicated by the head motion signal becomes at least a second threshold amount (e.g., 5°), which is greater than the first threshold amount, offset from the defined location (e.g., yaw and pitch) of the virtual display panel. The threshold amounts may be defined and/or adjusted by the user, and may be stored as a user's preference in the user's account information for subsequent retrieval and use.
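A minimal sketch of the described lock/unlock hysteresis follows. The 2 degree and 5 degree values are the example thresholds from the text, while the class structure, the angular-distance metric, and the panel representation are assumptions.

import math

class AutoLock:
    # Locks a virtual display panel to the display screen when the line-of-sight
    # comes within lock_deg of the panel's location, and releases it only after
    # the line-of-sight drifts at least unlock_deg away.
    def __init__(self, lock_deg=2.0, unlock_deg=5.0):
        self.lock_deg = lock_deg
        self.unlock_deg = unlock_deg
        self.locked_panel = None

    def update(self, head_yaw_deg, head_pitch_deg, panels):
        # panels: mapping of panel id -> (yaw_deg, pitch_deg) of its location.
        if self.locked_panel is not None:
            yaw, pitch = panels[self.locked_panel]
            if math.hypot(head_yaw_deg - yaw, head_pitch_deg - pitch) >= self.unlock_deg:
                self.locked_panel = None
        if self.locked_panel is None:
            for panel_id, (yaw, pitch) in panels.items():
                if math.hypot(head_yaw_deg - yaw, head_pitch_deg - pitch) <= self.lock_deg:
                    self.locked_panel = panel_id
                    break
        return self.locked_panel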



FIGS. 10-12 illustrate operations and methods that may be performed by a system including a HMD 750 to control the display of virtual display panels through a display screen 752 of the HMD 750 according to some embodiments of the present disclosure. The various operations and methods described in the context of FIGS. 10-12 are not limited to use with the particular configuration of virtual display panels that are illustrated, but instead may be used with any number of virtual display panels and any arrangement of virtual display panels. Some of the virtual display panels may be displayed immediately adjacent to one another to appear as a single larger virtual display panel. Moreover, although various embodiments are described in the context of a particular HMD 750 configuration used in a surgical environment, the operations and methods may be used with any HMD 750 configuration for any type of operational use.


The example embodiments of FIGS. 10-12 allow a surgeon or other user to see several virtual displays of different medical information without looking away from the surgical site and focusing far away to view physical monitors that may be mounted across the OR or elsewhere adjacent to the patient. In some embodiments, three operational “modes” of the virtual displays are selectively activated based upon the pitch of the surgeon's head and the corresponding viewing line-of-sight of the user. The three modes may be separately activated as the pitch angle of the HMD 750 increases through three corresponding ranges of viewing angles, such as low (looking directly at the surgical site), medium, and high (horizontal eye-level). The viewing angle of the surgeon can be determined from the head motion signal output by a motion sensor of the HMD 750.


A full-screen operational mode is triggered when the OSP 632 determines that the surgeon is looking down at an operation site, which may be determined by when the pitch is below a first pitch threshold (e.g., about −45°). The first pitch threshold may be defined and/or adjusted by the surgeon based on a voice command, entered through a physical user interface, etc. In the full-screen operational mode, a defined one of the video streams (e.g., a primary video stream received via HDMI channel A) is displayed full screen through a display screen 752 of the HMD 750. The surgeon's corresponding preference settings may be saved in a configuration file stored in the memory 630 with an identifier for the surgeon, so that the surgeon's preferred settings can be automatically retrieved upon recognition of the surgeon (e.g., via a login process through the computer equipment 620).


Referring to FIG. 10, the computer equipment 620, as illustrated in FIG. 9, receives four video streams which it separately maps, via the OSP 632, to four different virtual display panels 710, 720, 722, and 724. Alternatively, the computer equipment 620 may combine two or more video streams to generate a combined video stream that is mapped to one of the virtual display panels 710, 720, 722, and 724. The computer equipment 620 may add graphical indicia and/or other information to one or more of the video streams, such as explained above. The computer equipment 620 can arrange the virtual display panels in two-dimensional space or in three-dimensional space. In the example of FIG. 10, three of the virtual display panels 720, 722, and 724 are arranged horizontally along an upper row designated as secondary virtual display panels, and the fourth virtual display panel 710 is arranged below the center virtual display panel 722 and designated as a primary virtual display panel. Other arrangements of the virtual display panels 710, 720, 722, and 724 may be provided, and other numbers of virtual display panels may be generated by the computer equipment 620 with corresponding mapping to any number of video streams.


In the example of FIG. 10, the surgeon's head is tilted downward below the first pitch threshold so that the surgeon's line-of-sight 702 is toward the operation site while looking through the TTL loupe. The surgeon may shift eye position upward so that the surgeon's line-of-sight 700 now looks through the display screen 752. The computer equipment 620 responds to the surgeon's head tilted below the first pitch threshold by operating in the full-screen operational mode to select a defined one of the video streams which it displays on the display screen 752 to generate the primary virtual display panel 710. Accordingly, the primary virtual display panel 710 can be positioned by the HMD 750 to appear to the surgeon, while maintaining the line-of-sight 700, to hover in space above and adjacent to the location of the operation site. None of the three other video streams mapped to the three secondary virtual display panels 720, 722, and 724 are presently displayed on the display screen 752.



FIG. 11 illustrates operations triggered when the surgeon's head tilts up above the first pitch threshold and below a second pitch threshold so that the surgeon's line-of-sight is along line 704 through the display screen 752. The primary virtual display panel 710 disappears from the display screen 752 and the three video streams (e.g., HDMI channels B, C and D) become separately viewable or at least partially collectively viewable as the three secondary virtual display panels 720, 722, and 724 floating in space. These secondary virtual display panels 720, 722, and 724 may first appear at default radial positions on a sphere centered on the surgeon's head, but the surgeon can reposition and resize them for maximum convenience.


The collection of virtual display panels may be displayed as virtual floating monitor devices that are stationary in location, so that the user can move the HMD 750 to scan across their spaced apart locations and see individual ones of the virtual display panels. The surgeon can scroll sideways across the secondary virtual display panels 720, 722, and 724 by making corresponding sideways (e.g., yaw) head movements, by making defined gestures, and/or by speaking defined voice commands. The surgeon can similarly scroll downward from the secondary virtual display panels 720, 722, and 724 to view the primary virtual display panel 710 by making a corresponding downward (e.g., pitch) head movement, by making a defined gesture, and/or by speaking a defined voice command.


The computer equipment 620 may enlarge or shrink a portion of a video stream displayed on one of the virtual display panels being viewed by a surgeon responsive to a defined gesture by the surgeon and/or a defined voice command by the surgeon.


The symbol generator 624 in combination with the OSP 632 processed by the processor 626 of the computer equipment 620 may operate to display a graphical indicia (e.g., crosshair or other reticle) that can be positioned within a presently viewed one of the virtual display panels by the user responsive to recognition of voice commands via the microphone 612, to track movement of the user's finger, hand, or other object operationally recognized by the gesture sensor 602 and/or in a video stream from the camera 610. The user may trigger the OSP 632 to capture a still image of the video stream displayed in the virtual display plane with the incorporated graphical indicia responsive to a voice command, a gesture sensed by the gesture sensor 602, and/or a motion sensed by the motion sensor 604. In this manner, a surgeon may view video within one of the virtual display panels and steer a graphical indicia to be aligned within the video overlapping a point-of-interest, and trigger capture of a still image including the video and graphical indicia that can be saved on the video server 650 and/or distributed to another HMD 600 for viewing by another surgeon or assistant.


For example, the user may trigger display of a crosshair indicia responsive to a voice command (e.g., “Crosshair On”). The graphical indicia may initially appear at the center of the display screen 608. Then, when the user has repositioned the crosshair indicia on an item to be marked, the user can speak another command (e.g., “Freeze”) to capture an image that is shared with other HMDs and/or recorded in memory.


In one embodiment, the virtual display panels are maintained level relative to an artificial horizon and facing directly towards the surgeon, and the surgeon can provide input to the computer equipment 620, e.g., via the HMD 750, to separately adjust azimuth, elevation and size parameters controlling display of each virtual display panel. In another embodiment, the surgeon can provide input to the computer equipment 620, e.g., via the HMD 750, to adjust position of the virtual display panels in 6-degrees of freedom. The radius of the sphere (virtual distance of the virtual display panels from the display screen 752 of the HMD 750) can be adjusted by the surgeon. A default focal radius of about 21 inches between the virtual display panels and the display screen 752 of the HMD 750 may be used.


The size of individual ones of the virtual display panels and/or the size of the collection of virtual display panels may be adjusted by the surgeon. By default, each of the virtual display panels may be sized to approximately fill the field-of-view of the display screen 752 of the HMD 750 (e.g., about 40° diagonal) when the surgeon is looking directly at one of the virtual display panels. In this manner, one virtual display panel can be controlled by the computer equipment 620 to fill the field-of-view of the display screen 752 of the HMD 750 when that display is centered in the field-of-view of the surgeon. By default, the positioning may be defined so that the secondary virtual display panel 722 on the upper row is directly above the operation site (azimuth 0° and elevation about −10°). Another secondary virtual display panel 720 starts just to the left of the secondary virtual display panel 722, and the other secondary virtual display panel 724 starts just to the right of the secondary virtual display panel 722. Distances between the virtual display panels may be defined or adjusted by the surgeon, and the surgeon's preferences may be stored associated with an identifier for the surgeon.
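The following sketch converts the panel layout described above into head-centered Cartesian coordinates. The 21 inch radius and the azimuth and elevation of panel 722 come from the example values in the text, while the roughly 40 degree azimuth spacing of panels 720 and 724 and the helper name are assumptions.

import math

RADIUS_IN = 21.0                      # example default focal radius from the text
DEFAULT_SECONDARY_PANELS = {          # panel id: (azimuth_deg, elevation_deg)
    "panel_720": (-40.0, -10.0),      # just to the left of panel 722 (assumed spacing)
    "panel_722": (0.0, -10.0),        # directly above the operation site
    "panel_724": (40.0, -10.0),       # just to the right of panel 722 (assumed spacing)
}

def panel_position(azimuth_deg, elevation_deg, radius_in=RADIUS_IN):
    # Convert a panel's placement on a sphere centered on the wearer's head into
    # Cartesian coordinates: x to the right, y up, z straight ahead.
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = radius_in * math.cos(el) * math.sin(az)
    y = radius_in * math.sin(el)
    z = radius_in * math.cos(el) * math.cos(az)
    return x, y, z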



FIG. 12 illustrates that when the surgeon's head tilts further upward above the second pitch threshold, so that the surgeon looks along the line-of-sight line 706 above the zone where the virtual display panels 710, 720, 722, and 724 are positioned, or anywhere else in the room except toward the locations of the virtual display panels 710, 720, 722, and 724, the display screen 752 does not display any of the four video streams (e.g., blank display), such that the surgeon may, for example, see through the display screen 752 without observing obstructing video. Thus, in FIG. 12 the virtual display panels 710, 720, 722, and 724 are illustrated as below the line-of-sight line 706 of the surgeon, and none would be seen by the surgeon.
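One way such gating could be implemented is to compare the head yaw and pitch against each panel's angular zone and blank the display when no zone contains the line of sight; the panel layout values and angular half-widths below are assumptions for illustration only.

def visible_panel(head_yaw_deg, head_pitch_deg, panels, half_width_deg=20.0,
                  half_height_deg=15.0):
    """Return the panel whose angular zone contains the line of sight, or
    None so the display can be blanked (see-through only).

    `panels` maps a panel name to its (azimuth, elevation) center in degrees.
    """
    for name, (az, el) in panels.items():
        if abs(head_yaw_deg - az) <= half_width_deg and \
           abs(head_pitch_deg - el) <= half_height_deg:
            return name
    return None

panels = {"720": (-40.0, -10.0), "722": (0.0, -10.0), "724": (40.0, -10.0),
          "710": (0.0, -45.0)}
print(visible_panel(5.0, 30.0, panels))   # looking well above the zone -> None
print(visible_panel(2.0, -12.0, panels))  # looking at the upper-center panel -> "722"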


As explained above, the operations and methods disclosed herein are not restricted to medical uses. These operations and methods are applicable to many other uses, including industrial uses such as warehousing, manufacturing, product inspection, and/or maintenance.


In one illustrative embodiment, a HMD configured for use in a warehouse environment can provide a plurality of virtual display panels containing different information for guiding a user to a desired location within the warehouse and assisting the user with identifying a product on a shelf. For example, the HMD can respond to a user looking downward by displaying a first virtual display panel which provides directions for traveling to the desired location within the warehouse. An example first virtual display panel may illustrate a virtual pathway (e.g., a line) on the ground that the user should follow to reach a desired location where a product resides. In response to the user looking upward, the HMD can responsively display a second virtual display panel that assists the user in identifying the product. An example second virtual display panel may provide descriptive information and/or display a photograph or graphical representation of the product.


Some or all of the operations and methods described herein as being performed by a HMD and/or computer equipment may be performed by a portable computer, such as a laptop computer, tablet computer, mobile phone, data terminal, or wearable computer (e.g., on a wrist, arm, leg, etc.). For example, a tablet computer may be configured to select among a plurality of virtual display panels for display on a display device of the tablet computer and/or on a communicatively connected HMD. The tablet computer may select among the virtual display panels responsive to a user moving (e.g., rotating, pitching, etc.) the tablet computer, responsive to identifying a hand gesture via a camera of the tablet computer and/or via a touch sensitive display of the tablet computer, and/or responsive to recognizing a voice command via a microphone of the tablet computer. A user may, for example, rotate the tablet computer horizontally to scroll through a plurality of virtual display panels that are virtually organized along a horizontal plane. The user may similarly rotate the tablet computer vertically to scroll through a plurality of virtual display panels that are virtually organized along a vertical plane. The portable computer (e.g., tablet computer) may be used to control display of the virtual display panels on a communicatively connected HMD. For example, when the user makes a hand gesture that is sensed by the portable computer (e.g., via a touch sensitive display, proximity sensor, and/or camera), the portable computer can communicate the command to computer equipment which then changes which of a plurality of virtual display panels are displayed on the HMD.



FIG. 13 is a block diagram of components of an augmented reality surgical system that include a position tracking system 810 (e.g., cameras spaced apart in the operating room) that tracks the location of a surgical tool 800 and/or prosthetic 802, the HMD 100, and a surgical site 804 or other target location on a patient. A computer system 820 uses patient data from imaging equipment 830 to generate a two dimensional (2D) or three dimensional (3D) model. The imaging equipment 830 may include, without limitation, x-ray equipment, endoscope cameras, magnetic resonance imaging equipment, computed tomography scanning equipment, three-dimensional ultrasound equipment, endoscopic equipment, and/or computer modeling equipment which can generate a multidimensional (e.g., 2D or 3D) model of a targeted site of a patient. The patient data can include real-time feeds and/or earlier stored data from the imaging equipment 830, and may include an anatomical database specific to the particular patient or more generally to humans.


The model can include reference markers or other references that assist with performing correlations between virtual locations in the patient model and physical locations on the patient's body. The computer system 820 uses the present locations of the HMD 100, the surgical site 804, and the surgical tool 800 and/or prosthetic 802 obtained by the position tracking system 810 and uses the reference markers or other references contained in the patient model to transform the patient model to a present perspective view of a wearer of the HMD 100. Some or all of the transformed patient model can then be displayed on the HMD 100 to provide the surgeon with a graphical overlay that is precisely oriented and scaled on the surgical site 804 or other target location on the patient.


The computer system 820 may display graphical representations of the patient model, the tool, and/or the prosthetic on the display screen 110 of the HMD 100, and animate movement of the displayed patient model, tool, and/or prosthetic to illustrate a planned procedure relative to a defined location of the surgical site 804 or other target location on the patient's body. The HMD 100 may be communicatively connected to the computer system 820 through a wireless transceiver and/or wired network interface.


The computer system 820 may compare patterns of objects in the video stream from a camera on the HMD 100 to patterns of objects and/or reference markers in the patient model to identify levels of correspondence, and may control transformation of the patient model responsive to identifying a threshold level of correspondence between the compared objects. For example, real-time video captured by the HMD camera during surgery of a patient may be processed by the computer system 820 and compared to video captured by one or more other sources, e.g., the imaging equipment 830. The pattern matching may be constrained to characteristics of an object or a set of objects defined by a surgeon as being relevant to a present procedure.
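As a hedged illustration of one possible "level of correspondence" measure (the disclosure does not specify the metric), normalized cross-correlation between equally sized image patches can be compared against a threshold; the function names and threshold value are assumptions.

import numpy as np

def correspondence(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized image patches,
    returning a value in [-1, 1]; one possible measure of the level of
    correspondence between compared patterns of objects."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0

def exceeds_threshold(patch_a, patch_b, threshold=0.8):
    # The threshold value is an illustrative assumption, not from the disclosure.
    return correspondence(patch_a, patch_b) >= threshold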


The computer system 820 can control transformation of the patient model for display on the display screen 110 based on the pattern matching. The computer system 820 may display on the display screen 110 an indicia (e.g., a crosshair or color marker) aligned with an identified object within the video from the HMD camera to assist the surgeon with identifying the corresponding location on the patient. In one embodiment, the computer system 820 displays a graphical indicia on the display screen 110 aligned with one of the anatomical objects displayed on the display screen 110 from the rotated and scaled three dimensional anatomical model responsive to identifying a threshold level of correspondence between a pattern of the one of the anatomical objects and a pattern of one of the anatomical objects in the video stream from the video camera.


The computer system 820 may similarly receive other data and video streams from a patient database and other electronic equipment, which can be selectively displayed on the HMD 100. As used herein, a video stream can include any type of information that can be provided to a display device for display, including without limitation a still image (e.g., digital photo), a sequence of still images, and video having frames provided at a defined frame rate. The computer system 820 can retrieve information relating to a patient's medical history and data obtained by real-time monitoring of a patient, including, for example, hemodynamic, respiratory, and electrophysiological signals.


Although the computer system 820 is illustrated as being separate from the HMD 100, some or all of the operations disclosed herein as being performed by the computer system 820 may additionally or alternatively be performed by one or more processors residing within the HMD 100. Similarly, some of the operations disclosed herein as being performed by the HMD 100 may additionally or alternatively be performed by one or more processors residing within the computer system 820.


Although various embodiments are disclosed in the context of a HMD, some other embodiments are directed to tracking the location of a handheld display, such as a tablet computer, and displaying the transformed patient model on the handheld display. The handheld display may include tracking markers, such as on a front surface and/or back surface of the display housing, which are tracked by the position tracking system 810. The handheld display may include a camera that provides a video signal that is combined with a video signal of the transformed patient model from the computer system 820, to display a real world view of the patient augmented with the graphical overlay from a portion of the transformed patient model.


The transformed patient model may additionally be relayed to other HMDs 100 worn by other personnel assisting with the procedure, to other display devices, and/or to a video server for storage. In this manner, other personnel may observe what the surgeon views on the display screen 110.



FIG. 14 is another block diagram of the electronic components of the augmented reality surgical system of FIG. 13 according to some embodiments of the present disclosure. Referring to FIG. 14, the position tracking system 810 can include a camera system 902 that tracks the locations of tracking markers 904 attached to a surgery table, tracking markers 906 attached to the patient adjacent to a surgical or other target site, tracking markers 908 attached to a surgery tool and/or prosthetic, and tracking markers 910 attached to the HMD 100. The camera system 902 can include a plurality of cameras that are spaced apart at defined locations within an operating room, each having a field of view that can observe objects to be tracked. In the illustrated example, the camera system 902 includes two sets of cameras spaced apart by a known distance and relative orientation. The position tracking system 810 may use active optical markers (e.g., light emitting sources) and/or passive optical markers (e.g., light reflectors), but is not limited to the use of cameras. The position tracking system 810 may additionally or alternatively use electromagnetic ranging and trackers, ultrasonic emitters/sensors for ranging and trackers, etc.


Positioning data of the HMD 100 can include navigation coordinate system data determined from the location of the tracking markers 910 attached to the HMD 100, and inertial coordinate system data from the inertial sensors. The navigation coordinate system data and the inertial coordinate system data can be compensated for initial calibration and drift correction over time by a calibration component 912 and combined by a fusion component 914 to output combined HMD position data.
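One simple way the fusion component 914 could blend the camera-derived (navigation) data with the drifting inertial data is a complementary correction toward the optical measurement; this sketch, its blend weight, and the restriction to yaw are assumptions for illustration, not the disclosed fusion method.

def fuse_yaw(optical_yaw_deg, inertial_yaw_deg, blend=0.02):
    """Nudge the integrated inertial yaw toward the drift-free optical yaw.
    The angle difference is wrapped to [-180, 180) degrees before blending."""
    error = ((optical_yaw_deg - inertial_yaw_deg + 180.0) % 360.0) - 180.0
    return inertial_yaw_deg + blend * error

# Example: the inertial yaw has drifted to 93.5 deg while the tracking
# cameras report 90.0 deg; each fusion step pulls the estimate back.
yaw = 93.5
for _ in range(10):
    yaw = fuse_yaw(90.0, yaw)
print(round(yaw, 2))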


A relative positioning component 916 identifies the relative position and angular orientation of each of the tracked markers 904-910 and the combined HMD position data. The component 916 may perform coordinate transformations of the relative coordinate systems of the surgery table, the patient, the surgery tool, and the HMD 100 to a unified (common) coordinate system. In one embodiment, the relative positioning component 916 outputs sight coordinates data, patient model coordinates data, and tool coordinates data to an image generator 918. The sight coordinates data can be generated based on the combined HMD position data transformed to the unified coordinate system. The patient model coordinates data can be generated based on the position and orientation of the tracking markers 906 attached to the patient transformed to the unified coordinate system, and the tool coordinates data can be generated based on the position and orientation of the tracking markers 908 attached to the surgery tool transformed to the unified coordinate system.
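A minimal sketch of such a coordinate transformation to a unified coordinate system is given below, assuming each tracked object's pose is reported as a 4x4 homogeneous transform in a common camera/room frame; the function names and example numbers are illustrative only.

import numpy as np

def make_pose(rotation_3x3, translation_3):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    pose = np.eye(4)
    pose[:3, :3] = rotation_3x3
    pose[:3, 3] = translation_3
    return pose

def relative_pose(pose_world_a, pose_world_b):
    """Pose of B expressed in A's coordinate frame, given both poses in the
    unified (e.g., camera) coordinate system: T_a_b = inv(T_w_a) @ T_w_b."""
    return np.linalg.inv(pose_world_a) @ pose_world_b

# Example: the cameras report the HMD and the patient markers in a common
# room frame; the patient pose in the HMD frame drives the overlay.
hmd_in_room = make_pose(np.eye(3), [0.0, 1.7, 0.0])
patient_in_room = make_pose(np.eye(3), [0.2, 1.0, -0.6])
print(relative_pose(hmd_in_room, patient_in_room)[:3, 3])  # [ 0.2 -0.7 -0.6]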


The image generator 918 transforms the patient model (e.g., from the imaging equipment 830) to the present perspective view of a wearer of the HMD 100, mapped to a corresponding object that is within the FOV of the display screen 110 and that is modeled by the patient model. The image generator 918 provides video generated based on the transformed patient model to the HMD 100 for display as a visual model that is dynamically oriented and scaled as a graphical overlay on the surgical site 804, or elsewhere on a corresponding location on the patient where the wearer of the HMD 100 is presently looking and which contains a corresponding object modeled by the patient model.


For example, the image generator 918 determines whether any portion of the patient's body that is presently within the field of view of what the surgeon sees through the display screen 110 corresponds to any portion of the transformed patient model. When a portion of the transformed patient model corresponds to a portion of the patient's body within the surgeon's field of view through the display screen 110, the image generator 918 generates video for display on the display screen 110 based on the corresponding portion of the transformed patient model, translating and scaling that portion of the transformed patient model to provide an accurately scaled graphical representation of the object that was imaged from the patient or modeled from another source such as an anatomical database.
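A hedged sketch of such a field-of-view test follows, using a simple pinhole projection of a model point expressed in the HMD frame; the intrinsic values, image size, and -z forward convention are assumptions made for illustration.

import numpy as np

def project_point(point_hmd, focal_px, cx, cy):
    """Pinhole projection of a point given in the HMD (eye) frame onto the
    display screen; returns None for points behind the viewer."""
    x, y, z = point_hmd
    if z >= 0:  # behind the viewer with a -z forward convention
        return None
    u = focal_px * (x / -z) + cx
    v = focal_px * (y / -z) + cy
    return u, v

def in_field_of_view(pixel, width=1280, height=720):
    if pixel is None:
        return False
    u, v = pixel
    return 0 <= u < width and 0 <= v < height

# A vertex of the transformed bone model, expressed in the HMD frame, is only
# rendered when its projection lands on the see-through display screen.
vertex = np.array([0.05, -0.10, -0.60])
print(in_field_of_view(project_point(vertex, 1000.0, 640.0, 360.0)))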


Thus, for example, when a surgeon's head is rotated so that a portion of a patient's body having a bone that is modeled through CT imagery data comes within the field of view of the display screen 110, the image generator 918 transforms and scales the patient model of the bone to generate a graphical representation of the bone that is displayed on the display screen 110 as a graphical overlay that matches the orientation and size of the bone from the perspective of the surgeon, as if the surgeon could view the bone through intervening layers of tissue and/or organs.


In the example illustration of block 920, a leg bone model that has been generated, e.g., based on a CT scan of the leg, is transformed and displayed on the display screen 110 to have the accurate six degree of freedom orientation and size relative to the leg bone when viewed as a graphically illustrated representation 922 of the leg bone superimposed on a skin surface of the leg. The surgeon therefore sees the skin surface of the leg through the semitransparent display screen 110 of the HMD 100 with a graphically illustrated representation of the leg bone model overlaid thereon.


Although the graphical representation 922 of the leg bone is illustrated as being displayed in a superimposed position on a skin surface of the leg, the graphical representation 922 can be displayed at other locations which may be controllable by the surgeon. The surgeon may, for example, select to have the graphical representation 922 displayed with a defined offset distance above or below the leg. Moreover, the surgeon may control the size of the displayed graphical representation 922 relative to the leg. The surgeon may, for example, temporarily magnify the displayed graphical representation 922 to view certain details and then return the displayed graphical representation 922 to be scaled and aligned with the leg.



FIG. 15 is another block diagram that illustrates further example operations of the electronic components of the augmented reality surgical system of FIG. 13 according to some alternative or additional embodiments of the present disclosure. Referring to FIG. 15, the imagery device (IMG) 1000 provides x-ray imagery data, e.g., 2D or 3D pictures, with or without reference markers for aiding with location correlation. The imagery device 1000 may correspond to the imaging equipment 830 of FIG. 13. The imagery data can be generated before and/or during an operation. For example, real-time generated imagery data from an x-ray device may be used to aid a surgeon with navigating a tool to a target site within a patient during an operation. In another example, 3D imagery data generated days or weeks before an operation can be re-used and combined with 2D or 3D x-ray imagery data generated in real-time during the operation. The imagery data can be output as a voxelised object with an accurate 3D geometry of the imaged object.


Calibration of the x-ray device may be performed using an instrumented reference cube. Using an automatic superposition process, all the transformation tables can be determined based on the calibration to provide a clean and rectified geometry of the scanned object, without undesirable deformation. The calibration tables are specific to each x-ray device.


The REF component 1002 accurately references (correlates) the voxelised object in the virtual world to the tracked objects in the real world which may be tracked using the tracking markers explained above or other approaches. The x-ray imagery data can include reference markers which can be created using small objects placed on the imaged object (e.g., patient's body) and which can be readily identified in the x-ray imagery. The reference markers may be invasively attached to the patient (e.g., implanted during an operation by a surgeon) and/or may be non-invasively attached (e.g. adhesively attached to skin).


A modeling component (MOD) 1004 transforms the x-ray imagery data to a 3D patient model (graphical model) which can include reference markers. The patient model may have several separate representations, such as: 1) bones only; 2) skin only; 3) bones and skin; 4) wireframe or color; etc. Where articulation between two bones is available, a geometrical transformation between the bones may be defined in the patient model to enable animated movement thereof. The reference markers on articulated bones or other attached objects are referenced in the same coordinate system as the body and/or skeletal model. The number of reference markers may be at least three per rigid object, e.g., per bone.
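With at least three non-collinear markers per rigid object, a least-squares rigid transform between the model-space marker positions and the tracked real-world positions can be estimated; the Kabsch/SVD sketch below is one standard way to do this and is offered as an assumption, not as the disclosed algorithm.

import numpy as np

def rigid_transform(model_pts, tracked_pts):
    """Least-squares rigid transform (rotation R, translation t) mapping at
    least three non-collinear model-space marker positions onto their tracked
    positions, via the Kabsch / SVD method."""
    model = np.asarray(model_pts, dtype=float)
    tracked = np.asarray(tracked_pts, dtype=float)
    mc, tc = model.mean(axis=0), tracked.mean(axis=0)
    h = (model - mc).T @ (tracked - tc)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = tc - r @ mc
    return r, t

# Example with three markers on a rigid bone model.
model = [[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0]]
tracked = [[1.0, 2.0, 0.5], [1.0, 2.1, 0.5], [0.9, 2.0, 0.5]]
r, t = rigid_transform(model, tracked)
print(np.round(r @ np.array(model[1]) + t, 3))  # ~[1.0, 2.1, 0.5]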


A navigation component (NAV) 1006, which may be a component of the position tracking system 810 of FIG. 14, generates the relative positions (e.g., X, Y, Z coordinate system) of: 1) the targeted body part(s); 2) the HMD 100; 3) surgical tools and/or auxiliary tools; etc. The NAV 1006 identifies the relative positions of the tracked markers, and may further determine the relative angular orientations of the tracked markers 904, 906, 908, and 910. An inertial measurement unit (IMU) 1008 can be mounted to the HMD 100 and configured to sense translational and/or rotational movement and/or static orientation of the surgeon's head, and the sensor data is converted to position data and angular orientation data (e.g., in the inertial coordinate system) by a head tracking (HTK) component 1009. The head tracking component 1009 may perform Kalman filtering on the sensor data to filter noise and/or sensor drift over time when generating the position data and angular orientation data.
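The Kalman filtering mentioned above could, in its simplest form, be a one-state filter that predicts with the integrated gyro increment and corrects with a noisier absolute measurement; the class below is a minimal sketch with made-up noise parameters, not the filter used by the head tracking component 1009.

class ScalarKalman:
    """Minimal one-state Kalman filter for smoothing a noisy, drifting
    head-orientation angle (degrees); noise parameters are illustrative."""

    def __init__(self, initial=0.0, process_var=0.01, measurement_var=4.0):
        self.x = initial          # estimated angle
        self.p = 1.0              # estimate variance
        self.q = process_var      # process (drift) noise variance per step
        self.r = measurement_var  # measurement noise variance

    def update(self, predicted_delta, measurement):
        # Predict using the integrated gyro increment, then correct with the
        # absolute measurement, e.g., from the optical trackers.
        self.x += predicted_delta
        self.p += self.q
        k = self.p / (self.p + self.r)
        self.x += k * (measurement - self.x)
        self.p *= (1.0 - k)
        return self.x

kf = ScalarKalman(initial=0.0)
print(round(kf.update(predicted_delta=1.0, measurement=0.6), 3))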


A generator component (GEN) 1010 computes in real-time the transformed patient model to be displayed (3D models, symbology, etc.) according to the relative positions of the various tracked objects in the operating room. The graphical representation provided by the transformed patient model can be monoscopic or stereoscopic, and can represent several different modes (e.g., wireframe, color, textured, etc.) that are selectable under surgeon control. The GEN 1010 may perform calibration operations for the NAV 1006, the display screen (DIS) 110, the head tracking component 1009, and the man-machine interface (e.g., calibration of gesture based control operations, voice based control operations, etc.).


As explained above, the display screen (DIS) 110 can be a see-through display device that is fed video which displays in real-time portions of the transformed patient model and desired symbology, which can be accurately superimposed on the target surgical site or other target area of the patient within the FOV of the surgeon.



FIG. 16 is a block diagram that illustrates further example operations of an electronic system subcomponent that can be included in the augmented reality surgical system of FIG. 13 according to some other embodiments of the present disclosure.


Referring to FIG. 16, the subcomponent accesses CT-SCAN data stored in a DICOM file 1100. The example file 1100 contains raw 3D CT scan data which is cleaned, filtered, and processed through the operations of blocks 1102-1118 to generate a 3D model that is ready for graphical visualization and display on the HMD 100. The geometric structure of the 3D CT scan is based on a set of parallel slices along planes through the patient's body. The slices can have a constant thickness and a shared coordinate system.
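Because the slices are parallel, of constant thickness, and share one coordinate system, a voxel index can be mapped to a patient-space position from the in-plane pixel spacing and the slice spacing; the sketch below ignores oblique slice orientations, and its values and function names are assumptions for illustration.

import numpy as np

def voxel_to_patient(i, j, k, origin, pixel_spacing, slice_thickness):
    """Map a voxel index (column i, row j, slice k) to a position in the
    shared patient coordinate system, assuming axis-aligned parallel slices
    of constant thickness; origin and spacings would come from the scan headers."""
    return np.array(origin, dtype=float) + np.array([i * pixel_spacing[0],
                                                     j * pixel_spacing[1],
                                                     k * slice_thickness])

# Equally spaced 2D slices (a list of HxW arrays) can be stacked into one
# 3D volume that shares the coordinate system of the slices.
slices = [np.zeros((4, 4)) for _ in range(3)]
volume = np.stack(slices, axis=0)
print(volume.shape, voxel_to_patient(10, 20, 5, (-250.0, -250.0, -100.0),
                                     (0.7, 0.7), 2.5))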


Although various embodiments are described in the context of processing 3D CT scan data to generate a graphical model for illustration to an operator, these embodiments are not limited thereto and may be used with any type of patient data that can be graphically modeled for illustration.


The system can adapt its processing of the 3D CT scan data based on knowing what is characterized by the data (e.g., one or more defined organs, one or more defined skeletal structures). The type of information that is extracted from the data can be selected by the operator based on what is desired for graphical visualization through the HMD 100, and/or the type of information may be automatically selected. One selection criterion is based on the X-ray intensity of each pixel per slice. Anatomical structure can be identified and characterized using windowing operations (block 1102) that map X-ray intensity to a contrast intensity image based on parameters that have been predefined for identifying skeletal structure, muscular structure, organ structure, vascular structure, etc. In the embodiment of FIG. 16, the windowing operations process types of CT scan data that include: CT-ABDOMEN, CT-CRANE, CT-LUNG, CT-PELVIS, CT-BONE, etc. Output of the windowing operations can be voxelised (block 1104), e.g., by data vectorization based on opacity of the material.
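As a hedged example of such windowing, the mapping from X-ray intensity (e.g., Hounsfield units) to a contrast image is commonly parameterized by a window center and width; the preset values below follow common radiology conventions and are assumptions, not values from the disclosure.

import numpy as np

def apply_window(hounsfield, center, width):
    """Map X-ray intensities to a 0..255 contrast image using a window
    center/width, clamping values outside the window."""
    low, high = center - width / 2.0, center + width / 2.0
    clipped = np.clip(hounsfield.astype(np.float64), low, high)
    return ((clipped - low) / (high - low) * 255.0).astype(np.uint8)

# Illustrative presets keyed by scan type (center, width), values assumed:
PRESETS = {"CT-BONE": (400, 1800), "CT-LUNG": (-600, 1500),
           "CT-ABDOMEN": (50, 400)}
slice_hu = np.array([[-1000, 0], [300, 1200]])
print(apply_window(slice_hu, *PRESETS["CT-BONE"]))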


Image processing (block 1106) can then be performed that includes contrast adjustment, Gaussian noise filtering, etc. Selections are made (block 1108) among various available types of modeled iso-surfaces, such as bone, outer skin, organs, etc., based on operator (e.g., surgeon) input and/or other selection criteria. The various types of iso-surface models may be generated using data from the voxel model 1110. A 3D model is generated (block 1112) based on the selections. Calibration information can be generated (block 1114) to facilitate the set-up of the system by performing an initial calibration of the orientation of markers located on the HMD 100 (while the operator wears the HMD 100), markers located on the patient, markers located on tools, etc., such as described above regarding FIG. 14. The CT-scan can be performed while markers are fixed to the patient, such as described above. Locations of the markers are correlated to the single coordinate system for all the derived models (voxel or mesh).
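One common way to realize the noise filtering and iso-surface steps, offered here only as an assumption about a possible implementation rather than the disclosed one, is Gaussian smoothing followed by marching cubes (e.g., via scipy and scikit-image); the iso level and voxel spacing are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import marching_cubes

def build_iso_surface(volume_hu, iso_level=300.0, spacing=(2.5, 0.7, 0.7)):
    """Generate a triangle mesh for one iso-surface (e.g., bone) from the
    voxel volume: Gaussian noise filtering followed by marching cubes."""
    smoothed = gaussian_filter(volume_hu.astype(np.float64), sigma=1.0)
    verts, faces, normals, _ = marching_cubes(smoothed, level=iso_level,
                                              spacing=spacing)
    return verts, faces, normals

# Usage sketch: a synthetic volume with a bright sphere stands in for CT data.
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
volume = np.where((zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 100,
                  1000.0, 0.0)
verts, faces, normals = build_iso_surface(volume)
print(len(verts), len(faces))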


Smoothing operations are performed (block 1116) on the resulting data, and data validity checks and operator selective controls are applied (block 1118). A 3D mesh graphical model is generated and can be displayed (block 1120) through the HMD 100 to illustrate the relative position and orientation of a surgical tool, the modeled iso-surfaces from the 3D CT scan data, and any other operator selected and/or defined information. The same coordinate system can be maintained throughout the operations of blocks 1102-1120.


As explained above, the system can generate a graphical model from the patient data that represents detectable anatomical structure that is hidden from direct observation. The system can automatically orient, scale, and display the model so that it is viewable through the HMD 100 superimposed on the relevant area of the physical anatomy of the patient. Thus, for example, when a surgeon's head is rotated so that an area of a patient's body having a bone modeled through CT imagery data becomes within the field of view of the display screen 110 of the HMD 100, the system displays a graphical model of the bone and selected types of anatomical structure (e.g., skeletal structure, muscular structure, organ structure, vascular structure, etc.). The surgeon is thereby able to peer into the patient to view a representation of the bone and/or intervening layers of tissue, organs, etc.


As will be appreciated in view of the present disclosure, previously available procedures that required a surgeon to view static CT scans or other patient data on remote monitors provided limited guidance to the surgeon for where a drill or other surgical tool should be placed on the patient's body and, more particularly, how the drill should be oriented relative to the patient's body so that the drill bit will travel through acceptable structure in the patient's body and reach a target location. Some further embodiments of the present disclosure are directed to displaying additional information through the HMD 100 that provides real-time feedback to the surgeon for where the drill or other tool should be located and oriented before continuing a procedure.


The system can compute and graphically display in real-time through the HMD 100 a projected path that the drill would make through a patient's body based on a present orientation and location of the drill. The surgeon can be allowed to select from among several available visualization modes to have particular anatomical structure and the trajectory of one or more surgical tools graphically modeled and displayed to the surgeon through the HMD 100 based on their computed relative locations, orientations and scaled relationships.



FIG. 17 is a block diagram that illustrates further example operations of an electronic system subcomponent 1200 that can be included in the augmented reality surgical system of FIG. 13 to generate the graphical representations of FIGS. 18-23 in accordance with some embodiments of the present disclosure. FIG. 18 illustrates a graphical image generated on the HMD 100 that shows a virtual trajectory 1312 of a drill bit 1310 extending from a drill or other surgical tool (e.g., scalpel, etc.) into a patient's anatomical bone model 1302 and other selected intervening structure, and which is overlaid at a patient site 1300.


Referring to FIGS. 17 and 18, a tracking system 1208 tracks the relative location and orientation of reference markers attached to the drill bit 1310, the patient site 1300, and the HMD 100, as illustrated in FIG. 14, and computes the relative distance and orientation therebetween. The tracking system 1208 may operate based on the position tracking system 810 described above in FIG. 14. A slice builder component 1204 uses the relative distance and orientation between the drill bit 1310, the patient site 1300, and/or the HMD 100 to select data from the voxel model 1110 that is used to form a slice for display through the HMD 100. Data in the slice may directly correspond to data contained in the voxel model 1110 and/or may be generated from voxel model 1110 data which was obtained from a plurality of 3D CT scans which are intersected by the virtual trajectory 1312 of the drill bit 1310. A viewer component 1206 combines the slice with the 3D mesh model 1202 and displays the combined graphical rendering on the HMD 100 so that it is precisely superimposed on the patient site 1300 from the surgeon's point-of-view. The viewer component 1206 also graphically displays the virtual trajectory 1312, which may be computed using operations based on a ray-tracing type algorithm. In the example of FIG. 18, the viewer component 1206 has also displayed on the HMD 100 a cross sectional slice 1320 that passes through the target location 1314.
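One simple geometric test that such a slice builder could use, sketched here under the assumption that each slice lies in a plane and that the virtual trajectory is a ray from the tracked tool tip along its axis, is a ray-plane intersection; the function names and example coordinates are illustrative only.

import numpy as np

def trajectory_plane_intersection(tip, direction, plane_point, plane_normal):
    """Point where the virtual trajectory (a ray from the tool tip along its
    axis) crosses a slice plane, or None if the ray is parallel to the plane
    or the crossing lies behind the tool tip."""
    direction = np.asarray(direction, dtype=float)
    plane_normal = np.asarray(plane_normal, dtype=float)
    denom = float(np.dot(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None
    t = float(np.dot(np.asarray(plane_point, dtype=float)
                     - np.asarray(tip, dtype=float), plane_normal)) / denom
    return None if t < 0 else np.asarray(tip, dtype=float) + t * direction

# Example: an axial slice plane at z = -40 mm is selected because the
# trajectory from the tracked drill tip crosses it in front of the tool.
print(trajectory_plane_intersection(tip=(0.0, 0.0, 0.0),
                                    direction=(0.0, 0.2, -1.0),
                                    plane_point=(0.0, 0.0, -40.0),
                                    plane_normal=(0.0, 0.0, 1.0)))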


The virtual trajectory 1312 can be recomputed and dynamically displayed at a sufficient update rate to provide real-time feedback to a surgeon who is repositioning and reorienting the drill bit 1310 and/or the patient site 1300 so that the virtual trajectory 1312 will intersect a target location 1314. In the example of FIG. 18, the target location 1314 corresponds to a point where the drill bit 1310 would impact the graphically displayed bone model 1302, which is oriented and scaled to match the patient's bone. A sufficient update rate for recomputing and displaying the virtual trajectory 1312 to provide acceptable real-time feedback to a surgeon may be, without limitation, at least 5 Hz.


The system subcomponent 1200 can be configured to provide defined visualization modes that a surgeon can select among using head movement, hand gestures, voice commands, etc., which are sensed by the HMD 100. In some visualization modes, the surgeon controls what type of anatomical structure is graphically displayed on the HMD 100. For example, the surgeon can select among various visualization modes that control which one or more of the following are displayed on the HMD 100: 1) bone; 2) skin; 3) muscle; 4) organ; 5) vessel; 6) virtual tool trajectory; and 7) cross sectional slice. Another visualization mode can control which cross sectional slices are displayed and/or the orientation of the slice(s) relative to the surgeon's point-of-view. Some visualization modes cause the system subcomponent 1200 to graphically render anatomical structure of the patient for display as wireframes, polygons, or smoothed surfaces, which can be selectively displayed in monochrome or false colors. A surgeon can dynamically command switching between the available visualization modes and can cause any combination of two or more of the visualization modes to be simultaneously active.


In the example of FIG. 18, the surgeon has selected a visualization mode that displays the slice located at the target location 1314. The slice may be automatically generated and displayed by the viewer 1206 responsive to the target location 1314 being computed based on where the drill bit 1310 will intersect the surface of the graphically displayed bone model 1302 or other selected anatomical structure. FIG. 19 illustrates a portion of the slice 1320 along plane 19-19 in FIG. 18 that has been rotated to provide a front view. The surgeon may select another visualization mode that controls the system subcomponent 1200, e.g., through rotation of the surgeon's head, to rotate the slice 1320 displayed on the HMD 100 from the orientation shown in FIG. 18 to the orientation shown in FIG. 19 along the illustrated x-y coordinate plane, and/or to any other orientation. The slice 1320 can be variably scaled (e.g., zoom-in or zoom-out) for display responsive to commands received from the surgeon. The surgeon can select among various views disclosed here to guide the surgeon's manipulation of the drill bit 1310 or other surgical tool.


The surgeon can select another visualization mode that causes the system subcomponent 1200 to simultaneously or sequentially display a sequence of cross-sectional slices that are spaced apart along the virtual trajectory 1312 of the drill bit 1310. An example graphical display generated on the HMD 100 according to this visualization mode is shown in FIG. 20. In the non-limiting example of FIG. 20, three slices 1322, 1324, 1326 are parallel to one another, e.g., parallel to the illustrated x-y coordinate plane, and are spaced apart and each centered along the virtual trajectory 1312. The graphical display shows the spatial relationship between the virtual trajectory 1312 and the anatomical structure illustrated in the slices 1322, 1324, 1326, and moreover shows intersection points between the virtual trajectory 1312 and each of the slices 1322, 1324, 1326. The surgeon can control the system subcomponent 1200 to rotate and/or scale all three slices 1322, 1324, 1326 or a selected one or more thereof. The slices are selected based on the computer determining that each slice contains imaging data of a location within the patient that is intersected by the virtual trajectory 1312. The surgeon can control how many slices are simultaneously displayed and can control where a slice is generated and displayed along the virtual trajectory 1312. For example, through a head movement, hand gesture, or other command, the surgeon may move a pointer along the virtual trajectory 1312 to select a location where a slice, e.g., along the x-y plane, is to be generated and displayed on the HMD 100.


The surgeon can select another visualization mode that causes the system subcomponent 1200 to simultaneously or sequentially display another sequence of cross-sectional slices that are spaced apart along the virtual trajectory 1312 of the drill bit 1310 and oriented in parallel planes that are perpendicular to the virtual trajectory 1312. An example graphical display generated on the HMD 100 according to this visualization mode is shown in FIG. 21. In the non-limiting example of FIG. 21, three slices 1330, 1332, 1334 are parallel to one another, spaced apart, and each centered along the virtual trajectory 1312. The surgeon can control the system subcomponent 1200 to rotate and/or scale all three slices 1330, 1332, 1334 or a selected one or more thereof. The surgeon can control how many slices are simultaneously displayed, can control where a slice is generated and displayed along the virtual trajectory 1312, may control how many dimensions are illustrated in the various views, and/or can select among various perspective views. The surgeon may furthermore control whether the graphical display renders a three dimensional or orthographic projection view of the graphical anatomical model and/or slices thereof.


The visualization mode that is displayed in FIG. 21 provides the surgeon with a view from the perspective of looking-down the drill bit 1310. A graphical display illustrates to the surgeon the virtual trajectory 1312 and a tool impact location 1314 where the drill bit 1310 is projected to make contact with the bone if the surgeon proceeded to drill while maintaining the present orientation of the drill bit 1310. The graphical display may also illustrate a target location 1316 that the surgeon has earlier defined as being a desired location where the drill bit 1310 should contact the bone. Simultaneous illustration of the tool impact location 1314 and the target location 1316 in this manner can be a particularly useful view mode while a surgeon is seeking to position and orient the drill bit 1310 to intersect or to avoid intersecting various anatomical structure illustrated by one or more of the slices 1330, 1332, 1334 and/or other slices selected by the surgeon, and while accomplishing the objective of guiding the drill bit 1310 to contact the target location 1316 on the bone. Through a head movement, hand gesture, or other command the surgeon may move a pointer along the virtual trajectory 1312 to select a location where a perpendicular slice is to be generated and displayed on the HMD 100.


In a further visualization mode, the surgeon can control the system subcomponent 1200 to rotate and/or scale one or more of the slices 1330, 1332, 1334. FIG. 22 illustrates the slice 1334 along plane 22-22 in FIG. 21 rotated to provide a front view, and illustrates the tool impact location 1314 where the drill bit 1310 would impact the graphically displayed bone model 1302 based on a present location and orientation of the drill bit 1310 relative to the patient site 1300, and further illustrates the target location 1316. FIG. 23 illustrates the slice 1330 along plane 23-23 in FIG. 21 rotated to provide a front view, and illustrates the tool impact location 1314 where the drill bit 1310 would impact the graphically displayed bone model 1302 based on a present location and orientation of the drill bit 1310 relative to the patient site 1300, and further illustrates the target location 1316. The surgeon may control the system subcomponent 1200, e.g., through rotation of the surgeon's head, to rotate and/or scale one or more of the slices displayed on the HMD 100. The surgeon may furthermore control the system subcomponent 1200 to selectively display any one or more of the virtual trajectory 1312, the tool impact location 1314, and the target location 1316.


In this manner, various embodiments of the present disclosure display information through the HMD 100 that provides real-time guidance and feedback to a surgeon who seeks to position and orient a surgical tool to reach a target location within a patient.


Further Definitions and Embodiments

In the above description of various embodiments of the present disclosure, aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of software and hardware that may all generally be referred to herein as a “circuit,” “module,” “component,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product comprising one or more computer readable media having computer readable program code embodied thereon.


Any combination of one or more computer readable media may be used. The computer readable media may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an appropriate optical fiber with a repeater, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.


A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.


Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, or the like; conventional procedural programming languages, such as the “C” programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Ruby and Groovy; or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider), or in a cloud computing environment, or offered as a service such as a Software as a Service (SaaS).


Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


These computer program instructions may also be stored in a computer readable medium that when executed can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions when stored in the computer readable medium produce an article of manufacture including instructions which when executed, cause a computer to implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable instruction execution apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatuses or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.


The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various aspects of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.


The terminology used herein is for the purpose of describing particular aspects only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. Like reference numbers signify like elements throughout the description of the figures.


The corresponding structures, materials, acts, and equivalents of any means or step plus function elements in the claims below are intended to include any disclosed structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.

Claims
  • 1. An augmented reality surgical system comprising: a head mounted display comprising a see-through display screen that displays images while allowing transmission of ambient light therethrough; a motion sensor connected to the head mounted display and configured to output a head motion signal indicating measured movement of the head mounted display; and at least one camera configured to observe reference markers connected to the head mounted display, reference markers connected to a patient, and reference markers connected to a surgical tool located within a surgical room;
  • 2. The augmented reality surgical system of claim 1, wherein: the computer equipment is configured to rotate and scale the at least a portion of the three dimensional anatomical model based on the relative location and orientation of the reference markers connected to the head mounted display and the reference markers connected to the patient to transform the at least a portion of the three dimensional anatomical model to a present perspective view of a user wearing the head mounted display so the user sees a graphical representation on the display screen of the at least a portion of the three dimensional anatomical model oriented and scaled to provide a displayed overlay on the portion of the patient that was imaged by the medical imaging equipment.
  • 3. The augmented reality surgical system of claim 1, further comprising at least one video camera connected to the head mounted display to be forward-facing away from the user, wherein the computer equipment compares patterns of anatomical objects in a video stream from the at least one video camera to patterns of anatomical objects in the patient data created by the medical imaging equipment, and controls generation of the three dimensional anatomical model from the patient data responsive to identifying a threshold level of correspondence between the compared patterns of anatomical objects.
  • 4. The augmented reality surgical system of claim 1, further comprising at least one video camera connected to the head mounted display to be forward-facing away from the user, wherein the computer equipment compares patterns of anatomical objects in a video stream from the at least one video camera to patterns of anatomical objects in the patient data created by the medical imaging equipment, and displays a graphical indicia on the display screen aligned with one of the anatomical objects displayed on the display screen from the rotated and scaled three dimensional anatomical model responsive to identifying a threshold level of correspondence between a pattern of the one of the anatomical objects and a pattern of one of the anatomical objects in the video stream from the at least one video camera.
  • 5. The augmented reality surgical system of claim 1, wherein: the at least one camera is further configured to observe reference markers connected to a surgical tool located within a surgical room; the computer equipment is further configured to: compute the relative location and orientation of the reference markers connected to the head mounted display, the reference markers connected to the patient, and the reference markers connected to the surgical tool based on processing a video signal from the at least one camera; and generate the video signal to include a graphical representation of the surgical tool illustrated at a position relative to the three dimensional anatomical model that is determined based on the relative location and orientation of the reference markers connected to the head mounted display, the reference markers connected to the patient, and the reference markers connected to the surgical tool.
  • 6. The augmented reality surgical system of claim 5, wherein the computer equipment is further configured to: generate a graphical representation of a virtual trajectory extending from the surgical tool into the three dimensional anatomical model based on the relative location and orientation of the reference markers connected to the head mounted display, the reference markers connected to the patient, and the reference markers connected to the surgical tool; and generate the video signal to include the graphical representation of the virtual trajectory.
  • 7. The augmented reality surgical system of claim 6, wherein: the computer equipment is further configured to select an image slice from among a plurality of image slices contained in the three dimensional anatomical model, the image slice being selected based on the computer equipment determining that the image slice is traversed by the virtual trajectory extending from the surgical tool, and to generate the video signal to include the image slice.
  • 8. The augmented reality surgical system of claim 7, wherein: the computer equipment is further configured to rotate a graphical representation of the image slice and the graphical representation of the virtual trajectory that are displayed on the display screen responsive to the head motion signal from the motion sensor and/or responsive to a command received from a user.
  • 9. The augmented reality surgical system of claim 7, wherein: the computer equipment is further configured to rotate a graphical representation of the image slice and the graphical representation of the virtual trajectory that are displayed on the display screen to provide a view of the image slice that is perpendicular to a direction of the virtual trajectory.
  • 10. The augmented reality surgical system of claim 9, wherein: the computer equipment is further configured to identify a target location within the image slice, and to display the target location relative to the graphical representation of the virtual trajectory.
  • 11. An augmented reality surgical system for displaying multiple video streams to a user, the augmented reality surgical system comprising: a head mounted display comprising a see-through display screen that displays images while allowing transmission of ambient light therethrough; a motion sensor connected to the head mounted display and configured to output a head motion signal indicating measured movement of the head mounted display; and computer equipment configured to receive a plurality of video streams from one or more source devices and to control which of the video streams are output as a video signal to the display screen based on the head motion signal, wherein the video signal is a graphical representation of a virtual trajectory of a surgical instrument that intersects a target location on a patient, the target location corresponding to a point where the surgical instrument impacts the three dimensional anatomical image which is oriented and scaled to match the patient's anatomy, and wherein the virtual trajectory is continuously updated and dynamically displayed to a surgeon who is repositioning and reorienting the surgical instrument.
  • 12. The augmented reality surgical system of claim 11, wherein: the computer equipment is communicatively connected to a surgical video server to receive the video streams, one of the video streams comprises data generated by medical imaging equipment, another one of the video streams comprises information from a patient database defining a patient's medical history, and another one of the video streams comprises data generated by medical equipment based on real-time monitoring of the patient's vitals.
  • 13. The augmented reality surgical system of claim 11, further comprising: a headband configured to grasp the user's head; an arm apparatus connected between the headband and the head mounted display, the arm apparatus configured to allow rotation of the head mounted display relative to the headband by the user to position the display screen relative to the user's line-of-sight.
  • 14. The augmented reality surgical system of claim 11, wherein: the arm apparatus comprises a pivot joint configured to allow the display screen to be flipped down to a deployed position within the user's line-of sight and flipped-up to a stored position outside the user's line-of sight.
  • 15. The augmented reality surgical system of claim 11, wherein: the computer equipment is configured to select one of the video streams from among the video streams based on the head motion signal, and to output the selected one of the video streams as a video signal to the display screen.
  • 16. The augmented reality surgical system of claim 11, wherein the motion sensor is configured to output the head motion signal containing a pitch component that provides an indication of pitch angle of the head mounted display.
  • 17. The augmented reality surgical system of claim 16, wherein the computer equipment is configured to set an origin reference point for measurement of a yaw component of the head motion signal based on processing the yaw component of the head motion signal during times when the pitch component of the head motion signal is below a defined threshold value.
  • 18. The augmented reality surgical system of claim 17, wherein the motion sensor is a sourceless orientation sensor, and the computer equipment is configured to correct yaw drift of the sourceless orientation sensor towards zero responsive to determining that the pitch component of the head motion signal is below the defined threshold value.
  • 19. The augmented reality surgical system of claim 18, wherein the sourceless orientation sensor comprises a gyroscope, and the computer equipment is configured to estimate biases of the gyroscope giving rise to the yaw drift, using yaw drift correction pseudo-measurements determined while the pitch component of the head motion signal is below the defined threshold value.
  • 20. The augmented reality surgical system of claim 11, wherein the computer equipment recognizes voice commands and/or gesture commands from the user, and defines and/or adjusts the yaw and pitch coordinates of the virtual floating panels displayed through the display screen responsive to the recognized voice commands and/or gesture commands.
RELATED APPLICATIONS

The present patent application is a continuation of U.S. patent application Ser. No. 15/013,594, filed on Feb. 2, 2016, which claims the benefit of priority from U.S. Provisional Patent Application No. 62/111,379, filed on Feb. 3, 2015, from U.S. Provisional Patent Application No. 62/181,433, filed on Jun. 18, 2015, and from U.S. Provisional Patent Application No. 62/270,895, filed on Dec. 22, 2015, the disclosure and content of all of which are incorporated by reference herein in their entireties.

US Referenced Citations (722)
Number Name Date Kind
4150293 Franke Apr 1979 A
5091926 Horton Feb 1992 A
5246010 Gazzara et al. Sep 1993 A
5354314 Hardy et al. Oct 1994 A
5397323 Taylor et al. Mar 1995 A
5598453 Baba et al. Jan 1997 A
5645077 Foxlin Jul 1997 A
5772594 Barrick Jun 1998 A
5791908 Gillio Aug 1998 A
5807284 Foxlin Sep 1998 A
5820559 Ng et al. Oct 1998 A
5825982 Wright et al. Oct 1998 A
5887121 Funda et al. Mar 1999 A
5911449 Daniele et al. Jun 1999 A
5912720 Berger Jun 1999 A
5951475 Gueziec et al. Sep 1999 A
5987960 Messner et al. Nov 1999 A
6012216 Esteves et al. Jan 2000 A
6031888 Ivan et al. Feb 2000 A
6033415 Mittelstadt et al. Mar 2000 A
6080181 Jensen et al. Jun 2000 A
6106511 Jensen Aug 2000 A
6122541 Cosman et al. Sep 2000 A
6144875 Schweikard et al. Nov 2000 A
6157853 Blume et al. Dec 2000 A
6162191 Foxlin Dec 2000 A
6167145 Foley et al. Dec 2000 A
6167292 Badano et al. Dec 2000 A
6201984 Funda et al. Mar 2001 B1
6203196 Meyer et al. Mar 2001 B1
6205411 DiGioia, III et al. Mar 2001 B1
6212419 Blume et al. Apr 2001 B1
6231565 Tovey et al. May 2001 B1
6236875 Bucholz et al. May 2001 B1
6246900 Cosman et al. Jun 2001 B1
6301495 Gueziec et al. Oct 2001 B1
6306126 Montezuma Oct 2001 B1
6312435 Wallace et al. Nov 2001 B1
6314311 Williams et al. Nov 2001 B1
6320929 Von Der Haar Nov 2001 B1
6322567 Mittelstadt et al. Nov 2001 B1
6325808 Bernard et al. Dec 2001 B1
6340363 Bolger et al. Jan 2002 B1
6361507 Foxlin Mar 2002 B1
6377011 Ben-Ur Apr 2002 B1
6379302 Kessman et al. Apr 2002 B1
6396497 Reichlen May 2002 B1
6402762 Hunter et al. Jun 2002 B2
6424885 Niemeyer et al. Jul 2002 B1
6447503 Wynne et al. Sep 2002 B1
6451027 Cooper et al. Sep 2002 B1
6477400 Barrick Nov 2002 B1
6484049 Seeley et al. Nov 2002 B1
6487267 Wolter Nov 2002 B1
6490467 Bucholz et al. Dec 2002 B1
6490475 Seeley et al. Dec 2002 B1
6499488 Hunter et al. Dec 2002 B1
6501981 Schweikard et al. Dec 2002 B1
6507751 Blume et al. Jan 2003 B2
6535756 Simon et al. Mar 2003 B1
6560354 Maurer, Jr. et al. May 2003 B1
6565554 Niemeyer May 2003 B1
6587750 Gerbi et al. Jul 2003 B2
6614453 Suri et al. Sep 2003 B1
6614871 Kobiki et al. Sep 2003 B1
6619840 Rasche et al. Sep 2003 B2
6636757 Jascob et al. Oct 2003 B1
6645196 Nixon et al. Nov 2003 B1
6666579 Jensen Dec 2003 B2
6669635 Kessman et al. Dec 2003 B2
6701173 Nowinski et al. Mar 2004 B2
6757068 Foxlin Jun 2004 B2
6782287 Grzeszczuk et al. Aug 2004 B2
6783524 Anderson et al. Aug 2004 B2
6786877 Foxlin Sep 2004 B2
6786896 Madhani et al. Sep 2004 B1
6788018 Blumenkranz Sep 2004 B1
6804581 Wang et al. Oct 2004 B2
6823207 Jensen et al. Nov 2004 B1
6827351 Graziani et al. Dec 2004 B2
6837892 Shoham Jan 2005 B2
6839612 Sanchez et al. Jan 2005 B2
6856826 Seeley et al. Feb 2005 B2
6856827 Seeley et al. Feb 2005 B2
6879880 Nowlin et al. Apr 2005 B2
6892090 Verard et al. May 2005 B2
6920347 Simon et al. Jul 2005 B2
6922632 Foxlin Jul 2005 B2
6968224 Kessman et al. Nov 2005 B2
6978166 Foley et al. Dec 2005 B2
6988009 Grimm et al. Jan 2006 B2
6991627 Madhani et al. Jan 2006 B2
6996487 Jutras et al. Feb 2006 B2
6999852 Green Feb 2006 B2
7007699 Martinelli et al. Mar 2006 B2
7016457 Senzig et al. Mar 2006 B1
7043961 Pandey et al. May 2006 B2
7062006 Pelc et al. Jun 2006 B1
7063705 Young et al. Jun 2006 B2
7072707 Galloway, Jr. et al. Jul 2006 B2
7083615 Peterson et al. Aug 2006 B2
7097640 Wang et al. Aug 2006 B2
7099428 Clinthorne et al. Aug 2006 B2
7108421 Gregerson et al. Sep 2006 B2
7130676 Barrick Oct 2006 B2
7139418 Abovitz et al. Nov 2006 B2
7139601 Bucholz et al. Nov 2006 B2
7155316 Sutherland et al. Dec 2006 B2
7164968 Treat et al. Jan 2007 B2
7167738 Schweikard et al. Jan 2007 B2
7169141 Brook et al. Jan 2007 B2
7172627 Fiere et al. Feb 2007 B2
7194120 Wicker et al. Mar 2007 B2
7197107 Arai et al. Mar 2007 B2
7231014 Levy Jun 2007 B2
7231063 Naimark et al. Jun 2007 B2
7239940 Wang et al. Jul 2007 B2
7248914 Hastings et al. Jul 2007 B2
7301648 Foxlin Nov 2007 B2
7302288 Schellenberg Nov 2007 B1
7313430 Urquhart et al. Dec 2007 B2
7318805 Schweikard et al. Jan 2008 B2
7318827 Leitner et al. Jan 2008 B2
7319897 Leitner et al. Jan 2008 B2
7324623 Heuscher et al. Jan 2008 B2
7327865 Fu et al. Feb 2008 B2
7331967 Lee et al. Feb 2008 B2
7333642 Green Feb 2008 B2
7339341 Oleynikov et al. Mar 2008 B2
7366562 Dukesherer et al. Apr 2008 B2
7379790 Toth et al. May 2008 B2
7386365 Nixon Jun 2008 B2
7422592 Morley et al. Sep 2008 B2
7435216 Kwon et al. Oct 2008 B2
7440793 Chauhan et al. Oct 2008 B2
7460637 Clinthorne et al. Dec 2008 B2
7466303 Yi et al. Dec 2008 B2
7493153 Ahmed et al. Feb 2009 B2
7505617 Fu et al. Mar 2009 B2
7533892 Schena et al. May 2009 B2
7542791 Mire et al. Jun 2009 B2
7555331 Viswanathan Jun 2009 B2
7567834 Clayton et al. Jul 2009 B2
7594912 Cooper et al. Sep 2009 B2
7606613 Simon et al. Oct 2009 B2
7607440 Coste-Maniere et al. Oct 2009 B2
7623902 Pacheco Nov 2009 B2
7630752 Viswanathan Dec 2009 B2
7630753 Simon et al. Dec 2009 B2
7643862 Schoenefeld Jan 2010 B2
7660623 Hunter et al. Feb 2010 B2
7661881 Gregerson et al. Feb 2010 B2
7680308 Dale Mar 2010 B2
7683331 Chang Mar 2010 B2
7683332 Chang Mar 2010 B2
7689320 Prisco et al. Mar 2010 B2
7691098 Wallace et al. Apr 2010 B2
7702379 Avinash et al. Apr 2010 B2
7702477 Tuemmler et al. Apr 2010 B2
7711083 Heigl et al. May 2010 B2
7711406 Kuhn et al. May 2010 B2
7720523 Omernick et al. May 2010 B2
7725253 Foxlin May 2010 B2
7726171 Langlotz et al. Jun 2010 B2
7742801 Neubauer et al. Jun 2010 B2
7751865 Jascob et al. Jul 2010 B2
7760849 Zhang Jul 2010 B2
7762825 Burbank et al. Jul 2010 B2
7763015 Cooper et al. Jul 2010 B2
7787699 Mahesh et al. Aug 2010 B2
7796728 Bergfjord Sep 2010 B2
7813838 Sommer Oct 2010 B2
7818044 Dukesherer et al. Oct 2010 B2
7819859 Prisco et al. Oct 2010 B2
7824401 Manzo et al. Nov 2010 B2
7831294 Viswanathan Nov 2010 B2
7834484 Sartor Nov 2010 B2
7835557 Kendrick et al. Nov 2010 B2
7835778 Foley et al. Nov 2010 B2
7835784 Mire et al. Nov 2010 B2
7840253 Tremblay et al. Nov 2010 B2
7840256 Lakin et al. Nov 2010 B2
7843158 Prisco Nov 2010 B2
7844320 Shahidi Nov 2010 B2
7853305 Simon et al. Dec 2010 B2
7853313 Thompson Dec 2010 B2
7865269 Prisco et al. Jan 2011 B2
D631966 Perloff et al. Feb 2011 S
7879045 Gielen et al. Feb 2011 B2
7881767 Strommer et al. Feb 2011 B2
7881770 Melkent et al. Feb 2011 B2
7886743 Cooper et al. Feb 2011 B2
RE42194 Foley et al. Mar 2011 E
RE42226 Foley et al. Mar 2011 E
7900524 Calloway et al. Mar 2011 B2
7907166 Lamprecht et al. Mar 2011 B2
7909122 Schena et al. Mar 2011 B2
7925653 Saptharishi Apr 2011 B2
7930065 Larkin et al. Apr 2011 B2
7935130 Williams May 2011 B2
7940999 Liao et al. May 2011 B2
7945012 Ye et al. May 2011 B2
7945021 Shapiro et al. May 2011 B2
7953470 Vetter et al. May 2011 B2
7954397 Choi et al. Jun 2011 B2
7971341 Dukesherer et al. Jul 2011 B2
7974674 Hauck et al. Jul 2011 B2
7974677 Mire et al. Jul 2011 B2
7974681 Wallace et al. Jul 2011 B2
7979157 Anvari Jul 2011 B2
7983733 Viswanathan Jul 2011 B2
7988215 Seibold Aug 2011 B2
7996110 Lipow et al. Aug 2011 B2
8004121 Sartor Aug 2011 B2
8004229 Nowlin et al. Aug 2011 B2
8010177 Csavoy et al. Aug 2011 B2
8019045 Kato Sep 2011 B2
8021310 Sanborn et al. Sep 2011 B2
8035685 Jensen Oct 2011 B2
8046054 Kim et al. Oct 2011 B2
8046057 Clarke Oct 2011 B2
8052688 Wolf, II Nov 2011 B2
8054184 Cline et al. Nov 2011 B2
8054752 Druke et al. Nov 2011 B2
8057397 Li et al. Nov 2011 B2
8057407 Martinelli et al. Nov 2011 B2
8062288 Cooper et al. Nov 2011 B2
8062375 Glerum et al. Nov 2011 B2
8066524 Burbank et al. Nov 2011 B2
8073335 Labonville et al. Dec 2011 B2
8079950 Stern et al. Dec 2011 B2
8086299 Adler et al. Dec 2011 B2
8092370 Roberts et al. Jan 2012 B2
8098914 Liao et al. Jan 2012 B2
8100950 St. Clair et al. Jan 2012 B2
8105320 Manzo Jan 2012 B2
8108025 Csavoy et al. Jan 2012 B2
8109877 Moctezuma de la Barrera et al. Feb 2012 B2
8112292 Simon Feb 2012 B2
8116430 Shapiro et al. Feb 2012 B1
8120301 Goldberg et al. Feb 2012 B2
8121249 Wang et al. Feb 2012 B2
8123675 Funda et al. Feb 2012 B2
8133229 Bonutti Mar 2012 B1
8142420 Schena Mar 2012 B2
8147494 Leitner et al. Apr 2012 B2
8150494 Simon et al. Apr 2012 B2
8150497 Gielen et al. Apr 2012 B2
8150498 Gielen et al. Apr 2012 B2
8165658 Waynik et al. Apr 2012 B2
8170313 Kendrick et al. May 2012 B2
8179073 Farritor et al. May 2012 B2
8182476 Julian et al. May 2012 B2
8184880 Zhao et al. May 2012 B2
8202278 Orban, III et al. Jun 2012 B2
8208708 Homan et al. Jun 2012 B2
8208988 Jensen Jun 2012 B2
8219177 Smith et al. Jul 2012 B2
8219178 Smith et al. Jul 2012 B2
8220468 Cooper et al. Jul 2012 B2
8224024 Foxlin et al. Jul 2012 B2
8224484 Swarup et al. Jul 2012 B2
8225798 Baldwin et al. Jul 2012 B2
8228368 Zhao et al. Jul 2012 B2
8231610 Jo et al. Jul 2012 B2
8263933 Hartmann et al. Jul 2012 B2
8239001 Verard et al. Aug 2012 B2
8241271 Millman et al. Aug 2012 B2
8248413 Gattani et al. Aug 2012 B2
8256319 Cooper et al. Sep 2012 B2
8271069 Jascob et al. Sep 2012 B2
8271130 Hourtash Sep 2012 B2
8281670 Larkin et al. Oct 2012 B2
8282653 Nelson et al. Oct 2012 B2
8301226 Csavoy et al. Oct 2012 B2
8311611 Csavoy et al. Nov 2012 B2
8320991 Jascob et al. Nov 2012 B2
8332012 Kienzle, III Dec 2012 B2
8333755 Cooper et al. Dec 2012 B2
8335552 Stiles Dec 2012 B2
8335557 Maschke Dec 2012 B2
8348931 Cooper et al. Jan 2013 B2
8353963 Glerum Jan 2013 B2
8358818 Miga et al. Jan 2013 B2
8359730 Burg et al. Jan 2013 B2
8374673 Adcox et al. Feb 2013 B2
8374723 Zhao et al. Feb 2013 B2
8379791 Forthmann et al. Feb 2013 B2
8386019 Camus et al. Feb 2013 B2
8392022 Ortmaier et al. Mar 2013 B2
8394099 Patwardhan Mar 2013 B2
8395342 Prisco Mar 2013 B2
8398634 Manzo et al. Mar 2013 B2
8400094 Schena Mar 2013 B2
8414957 Enzerink et al. Apr 2013 B2
8418073 Mohr et al. Apr 2013 B2
8450694 Baviera et al. May 2013 B2
8452447 Nixon May 2013 B2
RE44305 Foley et al. Jun 2013 E
8462911 Vesel et al. Jun 2013 B2
8465476 Rogers et al. Jun 2013 B2
8465771 Wan et al. Jun 2013 B2
8467851 Mire et al. Jun 2013 B2
8467852 Csavoy et al. Jun 2013 B2
8469947 Devengenzo et al. Jun 2013 B2
RE44392 Hynes Jul 2013 E
8483434 Buehner et al. Jul 2013 B2
8483800 Jensen et al. Jul 2013 B2
8486532 Enzerink et al. Jul 2013 B2
8489235 Moll et al. Jul 2013 B2
8500722 Cooper Aug 2013 B2
8500728 Newton et al. Aug 2013 B2
8504201 Moll et al. Aug 2013 B2
8506555 Ruiz Morales Aug 2013 B2
8506556 Schena Aug 2013 B2
8508173 Goldberg et al. Aug 2013 B2
8512318 Tovey et al. Aug 2013 B2
8515576 Lipow et al. Aug 2013 B2
8518120 Glerum et al. Aug 2013 B2
8521331 Itkowitz Aug 2013 B2
8526688 Groszmann et al. Sep 2013 B2
8526700 Isaacs Sep 2013 B2
8527094 Kumar et al. Sep 2013 B2
8528440 Morley et al. Sep 2013 B2
8532741 Heruth et al. Sep 2013 B2
8541970 Nowlin et al. Sep 2013 B2
8548563 Simon et al. Oct 2013 B2
8549732 Burg et al. Oct 2013 B2
8551114 Ramos de la Pena Oct 2013 B2
8551116 Julian et al. Oct 2013 B2
8556807 Scott et al. Oct 2013 B2
8556979 Glerum et al. Oct 2013 B2
8560118 Green et al. Oct 2013 B2
8561473 Blumenkranz Oct 2013 B2
8562594 Cooper et al. Oct 2013 B2
8571638 Shoham Oct 2013 B2
8571710 Coste-Maniere et al. Oct 2013 B2
8573465 Shelton, IV Nov 2013 B2
8574303 Sharkey et al. Nov 2013 B2
8585420 Burbank et al. Nov 2013 B2
8594841 Zhao et al. Nov 2013 B2
8597198 Sanborn et al. Dec 2013 B2
8600478 Verard et al. Dec 2013 B2
8603077 Cooper et al. Dec 2013 B2
8611985 Lavallee et al. Dec 2013 B2
8613230 Blumenkranz et al. Dec 2013 B2
8621939 Blumenkranz et al. Jan 2014 B2
8624537 Nowlin et al. Jan 2014 B2
8630389 Kato Jan 2014 B2
8634897 Simon et al. Jan 2014 B2
8634957 Toth et al. Jan 2014 B2
8638056 Goldberg et al. Jan 2014 B2
8638057 Goldberg et al. Jan 2014 B2
8639000 Zhao et al. Jan 2014 B2
8641726 Bonutti Feb 2014 B2
8644907 Hartmann et al. Feb 2014 B2
8657809 Schoepp Feb 2014 B2
8660635 Simon et al. Feb 2014 B2
8666544 Moll et al. Mar 2014 B2
8675939 Moctezuma de la Barrera Mar 2014 B2
8678647 Gregerson et al. Mar 2014 B2
8679125 Smith et al. Mar 2014 B2
8679183 Glerum et al. Mar 2014 B2
8682413 Lloyd Mar 2014 B2
8684253 Giordano et al. Apr 2014 B2
8685098 Glerum et al. Apr 2014 B2
8693730 Umasuthan et al. Apr 2014 B2
8694075 Groszmann et al. Apr 2014 B2
8696458 Foxlin et al. Apr 2014 B2
8700123 Okamura et al. Apr 2014 B2
8706086 Glerum Apr 2014 B2
8706185 Foley et al. Apr 2014 B2
8706301 Zhao et al. Apr 2014 B2
8717430 Simon et al. May 2014 B2
8727618 Maschke et al. May 2014 B2
8734432 Tuma et al. May 2014 B2
8738115 Amberg et al. May 2014 B2
8738181 Greer et al. May 2014 B2
8740882 Jun et al. Jun 2014 B2
8746252 McGrogan et al. Jun 2014 B2
8749189 Nowlin et al. Jun 2014 B2
8749190 Nowlin et al. Jun 2014 B2
8761930 Nixon Jun 2014 B2
8764448 Yang et al. Jul 2014 B2
8771170 Mesallum et al. Jul 2014 B2
8781186 Clements et al. Jul 2014 B2
8781630 Banks et al. Jul 2014 B2
8784385 Boyden et al. Jul 2014 B2
8786241 Nowlin et al. Jul 2014 B2
8787520 Baba Jul 2014 B2
8792704 Isaacs Jul 2014 B2
8798231 Notohara et al. Aug 2014 B2
8800838 Shelton, IV Aug 2014 B2
8808164 Hoffman et al. Aug 2014 B2
8812077 Dempsey Aug 2014 B2
8814793 Brabrand Aug 2014 B2
8816628 Nowlin et al. Aug 2014 B2
8818105 Myronenko et al. Aug 2014 B2
8820605 Shelton, IV Sep 2014 B2
8821511 Von Jako et al. Sep 2014 B2
8823308 Nowlin et al. Sep 2014 B2
8827996 Scott et al. Sep 2014 B2
8828024 Farritor et al. Sep 2014 B2
8830224 Zhao et al. Sep 2014 B2
8834489 Cooper et al. Sep 2014 B2
8834490 Bonutti Sep 2014 B2
8838270 Druke et al. Sep 2014 B2
8844789 Shelton, IV et al. Sep 2014 B2
8855822 Bartol et al. Oct 2014 B2
8858598 Seifert et al. Oct 2014 B2
8860753 Bhandarkar et al. Oct 2014 B2
8864751 Prisco et al. Oct 2014 B2
8864798 Weiman et al. Oct 2014 B2
8864833 Glerum et al. Oct 2014 B2
8867703 Shapiro et al. Oct 2014 B2
8870880 Himmelberger et al. Oct 2014 B2
8876866 Zappacosta et al. Nov 2014 B2
8880223 Raj et al. Nov 2014 B2
8882803 Iott et al. Nov 2014 B2
8883210 Truncale et al. Nov 2014 B1
8888821 Rezach et al. Nov 2014 B2
8888853 Glerum et al. Nov 2014 B2
8888854 Glerum et al. Nov 2014 B2
8894652 Seifert et al. Nov 2014 B2
8894688 Suh Nov 2014 B2
8894691 Iott et al. Nov 2014 B2
8906069 Hansell et al. Dec 2014 B2
8964934 Ein-Gal Feb 2015 B2
8992580 Bar et al. Mar 2015 B2
8996169 Lightcap et al. Mar 2015 B2
9001963 Sowards-Emmerd et al. Apr 2015 B2
9002076 Khadem et al. Apr 2015 B2
9044190 Rubner et al. Jun 2015 B2
9107683 Hourtash et al. Aug 2015 B2
9125556 Zehavi et al. Sep 2015 B2
9131986 Greer et al. Sep 2015 B2
9215968 Schostek et al. Dec 2015 B2
9308050 Kostrzewski et al. Apr 2016 B2
9380984 Li et al. Jul 2016 B2
9393039 Lechner et al. Jul 2016 B2
9398886 Gregerson et al. Jul 2016 B2
9398890 Dong et al. Jul 2016 B2
9414859 Ballard et al. Aug 2016 B2
9420975 Gutfleisch et al. Aug 2016 B2
9492235 Hourtash et al. Nov 2016 B2
9592096 Maillet et al. Mar 2017 B2
9750465 Engel et al. Sep 2017 B2
9757203 Hourtash et al. Sep 2017 B2
9795354 Menegaz et al. Oct 2017 B2
9814535 Bar et al. Nov 2017 B2
9820783 Donner et al. Nov 2017 B2
9833265 Donner et al. Nov 2017 B2
9848922 Tohmeh et al. Dec 2017 B2
9925011 Gombert et al. Mar 2018 B2
9931025 Graetzel et al. Apr 2018 B1
10013808 Jones Jul 2018 B2
10034717 Miller et al. Jul 2018 B2
20010036302 Miller Nov 2001 A1
20020035321 Bucholz et al. Mar 2002 A1
20030229282 Burdette Dec 2003 A1
20040068172 Nowinski et al. Apr 2004 A1
20040076259 Jensen et al. Apr 2004 A1
20050096502 Khalili May 2005 A1
20050143651 Verard et al. Jun 2005 A1
20050171558 Abovitz et al. Aug 2005 A1
20060100610 Wallace et al. May 2006 A1
20060167945 Trautner Jul 2006 A1
20060173329 Marquart et al. Aug 2006 A1
20060184396 Dennis et al. Aug 2006 A1
20060241416 Marquart et al. Oct 2006 A1
20060291612 Nishide et al. Dec 2006 A1
20070015987 Benlloch Baviera et al. Jan 2007 A1
20070021738 Hasser et al. Jan 2007 A1
20070038059 Sheffer et al. Feb 2007 A1
20070073133 Schoenefeld Mar 2007 A1
20070156121 Millman et al. Jul 2007 A1
20070156157 Nahum et al. Jul 2007 A1
20070167712 Keglovich et al. Jul 2007 A1
20070233238 Huynh et al. Oct 2007 A1
20080002262 Chirieleison Jan 2008 A1
20080004523 Jensen Jan 2008 A1
20080010092 Smirniotopoulos Jan 2008 A1
20080013809 Zhu et al. Jan 2008 A1
20080033283 Dellaca et al. Feb 2008 A1
20080046122 Manzo et al. Feb 2008 A1
20080082109 Moll et al. Apr 2008 A1
20080108912 Node-Langlois May 2008 A1
20080108991 Von Jako May 2008 A1
20080109012 Falco et al. May 2008 A1
20080144906 Allred et al. Jun 2008 A1
20080161680 Von Jako et al. Jul 2008 A1
20080161682 Kendrick et al. Jul 2008 A1
20080177203 von Jako Jul 2008 A1
20080186378 Shen Aug 2008 A1
20080214922 Hartmann et al. Sep 2008 A1
20080228068 Viswanathan et al. Sep 2008 A1
20080228196 Wang et al. Sep 2008 A1
20080235052 Node-Langlois et al. Sep 2008 A1
20080269596 Revie et al. Oct 2008 A1
20080287771 Anderson Nov 2008 A1
20080287781 Revie et al. Nov 2008 A1
20080300477 Lloyd et al. Dec 2008 A1
20080300478 Zuhars et al. Dec 2008 A1
20080302950 Park et al. Dec 2008 A1
20080306490 Lakin et al. Dec 2008 A1
20080319311 Hamadeh Dec 2008 A1
20090012509 Csavoy et al. Jan 2009 A1
20090030428 Omori et al. Jan 2009 A1
20090080737 Battle et al. Mar 2009 A1
20090185655 Koken et al. Jul 2009 A1
20090198121 Hoheisel Aug 2009 A1
20090216113 Meier et al. Aug 2009 A1
20090228019 Gross et al. Sep 2009 A1
20090259123 Navab et al. Oct 2009 A1
20090259230 Khadem et al. Oct 2009 A1
20090264899 Appenrodt et al. Oct 2009 A1
20090281417 Hartmann et al. Nov 2009 A1
20100022874 Wang et al. Jan 2010 A1
20100039506 Sarvestani et al. Feb 2010 A1
20100104066 Foos Apr 2010 A1
20100125286 Wang et al. May 2010 A1
20100130986 Mailloux et al. May 2010 A1
20100228117 Hartmann Sep 2010 A1
20100228265 Prisco Sep 2010 A1
20100249571 Jensen et al. Sep 2010 A1
20100254017 Martins Oct 2010 A1
20100274120 Heuscher Oct 2010 A1
20100280363 Skarda et al. Nov 2010 A1
20100331858 Simaan et al. Dec 2010 A1
20110022229 Jang et al. Jan 2011 A1
20110046483 Fuchs Feb 2011 A1
20110077504 Fischer et al. Mar 2011 A1
20110077969 Zhu Mar 2011 A1
20110098553 Robbins et al. Apr 2011 A1
20110137152 Li Jun 2011 A1
20110175903 Munro et al. Jul 2011 A1
20110213384 Jeong Sep 2011 A1
20110224684 Larkin et al. Sep 2011 A1
20110224685 Larkin et al. Sep 2011 A1
20110224686 Larkin et al. Sep 2011 A1
20110224687 Larkin et al. Sep 2011 A1
20110224688 Larkin et al. Sep 2011 A1
20110224689 Larkin et al. Sep 2011 A1
20110224825 Larkin et al. Sep 2011 A1
20110230967 O'Halloran et al. Sep 2011 A1
20110238080 Ranjit et al. Sep 2011 A1
20110276058 Choi et al. Nov 2011 A1
20110279477 Wang et al. Nov 2011 A1
20110282189 Graumann Nov 2011 A1
20110286573 Schretter et al. Nov 2011 A1
20110295062 Gratacos Solsona et al. Dec 2011 A1
20110295370 Suh et al. Dec 2011 A1
20110306986 Lee et al. Dec 2011 A1
20120029271 Meyer et al. Feb 2012 A1
20120035507 George et al. Feb 2012 A1
20120046668 Gantes Feb 2012 A1
20120051498 Koishi Mar 2012 A1
20120053597 Anvari Mar 2012 A1
20120059248 Holsing et al. Mar 2012 A1
20120071753 Hunter et al. Mar 2012 A1
20120108954 Schulhauser et al. May 2012 A1
20120136372 Amat Girbau et al. May 2012 A1
20120143084 Shoham Jun 2012 A1
20120184839 Woerlein Jul 2012 A1
20120197182 Millman et al. Aug 2012 A1
20120203576 Bucur Aug 2012 A1
20120226145 Chang et al. Sep 2012 A1
20120235909 Birkenbach et al. Sep 2012 A1
20120245596 Meenink Sep 2012 A1
20120253332 Moll Oct 2012 A1
20120253360 White et al. Oct 2012 A1
20120256092 Zingerman Oct 2012 A1
20120294498 Popovic Nov 2012 A1
20120296203 Hartmann et al. Nov 2012 A1
20120320169 Bathiche Dec 2012 A1
20130006267 Odermatt et al. Jan 2013 A1
20130016889 Myronenko et al. Jan 2013 A1
20130024024 Namiki Jan 2013 A1
20130030571 Ruiz Morales et al. Jan 2013 A1
20130035583 Park et al. Feb 2013 A1
20130060146 Yang et al. Mar 2013 A1
20130060337 Petersheim et al. Mar 2013 A1
20130066192 Sarvestani Mar 2013 A1
20130094742 Feilkas Apr 2013 A1
20130096574 Kang et al. Apr 2013 A1
20130113791 Isaacs et al. May 2013 A1
20130116706 Lee et al. May 2013 A1
20130131695 Scarfogliero et al. May 2013 A1
20130144307 Jeong et al. Jun 2013 A1
20130158542 Manzo et al. Jun 2013 A1
20130165937 Patwardhan Jun 2013 A1
20130178867 Farritor et al. Jul 2013 A1
20130178868 Roh Jul 2013 A1
20130178870 Schena Jul 2013 A1
20130204271 Brisson et al. Aug 2013 A1
20130211419 Jensen Aug 2013 A1
20130211420 Jensen Aug 2013 A1
20130218142 Tuma et al. Aug 2013 A1
20130223702 Holsing et al. Aug 2013 A1
20130225942 Holsing et al. Aug 2013 A1
20130225943 Holsing et al. Aug 2013 A1
20130231556 Holsing et al. Sep 2013 A1
20130237995 Lee et al. Sep 2013 A1
20130245375 DiMaio et al. Sep 2013 A1
20130261640 Kim et al. Oct 2013 A1
20130272488 Bailey et al. Oct 2013 A1
20130272489 Dickman et al. Oct 2013 A1
20130274761 Devengenzo et al. Oct 2013 A1
20130281821 Liu et al. Oct 2013 A1
20130296884 Taylor et al. Nov 2013 A1
20130303887 Holsing et al. Nov 2013 A1
20130307955 Deitz et al. Nov 2013 A1
20130317521 Choi et al. Nov 2013 A1
20130325033 Schena et al. Dec 2013 A1
20130325035 Hauck et al. Dec 2013 A1
20130331686 Freysinger et al. Dec 2013 A1
20130331858 Devengenzo et al. Dec 2013 A1
20130331861 Yoon Dec 2013 A1
20130342578 Isaacs Dec 2013 A1
20130345717 Markvicka et al. Dec 2013 A1
20130345757 Stad Dec 2013 A1
20140001235 Shelton, IV Jan 2014 A1
20140012131 Heruth et al. Jan 2014 A1
20140031664 Kang et al. Jan 2014 A1
20140046128 Lee et al. Feb 2014 A1
20140046132 Hoeg et al. Feb 2014 A1
20140046340 Wilson et al. Feb 2014 A1
20140049629 Siewerdsen et al. Feb 2014 A1
20140058406 Tsekos Feb 2014 A1
20140073914 Lavallee et al. Mar 2014 A1
20140080086 Chen Mar 2014 A1
20140081128 Verard et al. Mar 2014 A1
20140085203 Kobayashi Mar 2014 A1
20140088612 Bartol et al. Mar 2014 A1
20140088613 Seo Mar 2014 A1
20140092587 Delaney Apr 2014 A1
20140094694 Moctezuma de la Barrera Apr 2014 A1
20140094851 Gordon Apr 2014 A1
20140096369 Matsumoto et al. Apr 2014 A1
20140100587 Farritor et al. Apr 2014 A1
20140121676 Kostrzewski et al. May 2014 A1
20140128882 Kwak et al. May 2014 A1
20140135796 Simon et al. May 2014 A1
20140142591 Alvarez et al. May 2014 A1
20140142592 Moon et al. May 2014 A1
20140148692 Hartmann et al. May 2014 A1
20140163581 Devengenzo et al. Jun 2014 A1
20140171781 Stiles Jun 2014 A1
20140171900 Stiles Jun 2014 A1
20140171965 Loh et al. Jun 2014 A1
20140180308 von Grunberg Jun 2014 A1
20140180309 Seeber et al. Jun 2014 A1
20140187915 Yaroshenko et al. Jul 2014 A1
20140188132 Kang Jul 2014 A1
20140194699 Roh et al. Jul 2014 A1
20140130810 Azizian et al. Aug 2014 A1
20140221819 Sarment Aug 2014 A1
20140222023 Kim et al. Aug 2014 A1
20140228631 Kwak et al. Aug 2014 A1
20140234804 Huang et al. Aug 2014 A1
20140257328 Kim et al. Sep 2014 A1
20140257329 Jang et al. Sep 2014 A1
20140257330 Choi et al. Sep 2014 A1
20140275760 Lee et al. Sep 2014 A1
20140275985 Walker et al. Sep 2014 A1
20140276931 Parihar et al. Sep 2014 A1
20140276940 Seo Sep 2014 A1
20140276944 Farritor et al. Sep 2014 A1
20140288413 Hwang et al. Sep 2014 A1
20140299648 Shelton, IV et al. Oct 2014 A1
20140303434 Farritor et al. Oct 2014 A1
20140303643 Ha et al. Oct 2014 A1
20140305995 Shelton, IV et al. Oct 2014 A1
20140309659 Roh et al. Oct 2014 A1
20140316436 Bar et al. Oct 2014 A1
20140323803 Hoffman et al. Oct 2014 A1
20140324070 Min et al. Oct 2014 A1
20140330288 Date et al. Nov 2014 A1
20140364720 Darrow et al. Dec 2014 A1
20140371577 Maillet et al. Dec 2014 A1
20150039034 Frankel et al. Feb 2015 A1
20150085970 Bouhnik et al. Mar 2015 A1
20150146847 Liu May 2015 A1
20150150524 Yorkston et al. Jun 2015 A1
20150196261 Funk Jul 2015 A1
20150213633 Chang et al. Jul 2015 A1
20150272696 Fry et al. Oct 2015 A1
20150335480 Alvarez et al. Nov 2015 A1
20150342647 Frankel et al. Dec 2015 A1
20160000629 Jackson et al. Jan 2016 A1
20160005194 Schretter et al. Jan 2016 A1
20160085912 Jones et al. Mar 2016 A1
20160117446 Hussam Apr 2016 A1
20160166329 Langan et al. Jun 2016 A1
20160191887 Casas Jun 2016 A1
20160225192 Jones Aug 2016 A1
20160235480 Scholl et al. Aug 2016 A1
20160246384 Mullins Aug 2016 A1
20160249989 Devam Sep 2016 A1
20160249990 Glozman et al. Sep 2016 A1
20160302871 Gregerson et al. Oct 2016 A1
20160320322 Suzuki Nov 2016 A1
20160331335 Gregerson et al. Nov 2016 A1
20170042631 Doo Feb 2017 A1
20170084086 Pio Mar 2017 A1
20170135770 Scholl et al. May 2017 A1
20170143284 Sehnert et al. May 2017 A1
20170143426 Isaacs et al. May 2017 A1
20170150122 Cole May 2017 A1
20170156816 Ibrahim Jun 2017 A1
20170202629 Maillet et al. Jul 2017 A1
20170212723 Atarot et al. Jul 2017 A1
20170215825 Johnson et al. Aug 2017 A1
20170215826 Johnson et al. Aug 2017 A1
20170215827 Johnson et al. Aug 2017 A1
20170231710 Scholl et al. Aug 2017 A1
20170258426 Risher-Kelly et al. Sep 2017 A1
20170273748 Hourtash et al. Sep 2017 A1
20170296277 Hourtash et al. Oct 2017 A1
20170309069 Thomas et al. Oct 2017 A1
20170360493 Zucher et al. Dec 2017 A1
20180012413 Jones Jan 2018 A1
Non-Patent Literature Citations (9)
Entry
US 8,231,638 B2, 07/2012, Swarup et al. (withdrawn)
“Contextual Anatomic Mimesis Hybrid In-Situ Visualization Method for Improving Multi-Sensory Depth Perception in Medical Augmented Reality”, Bichlmeier et al., 2007 IEEE.
Bichlmeier et al., “Contextual Anatomic Mimesis Hybrid In-Situ Visualization Method for Improving Multi-Sensory Depth Perception in Medical Augmented Reality”, IEEE, 10 pages, 2007.
Nicolau et al., “Augmented reality in laparoscopic surgical oncology”, Surgical Oncology, 20, pp. 189-201, 2011.
Cutolo et al., “Video See Through AR Head-Mounted Display for Medical Procedures”, IEEE, 4 pages, 2014.
Grimson et al., "An Automatic Registration Method for Frameless Stereotaxy, Image Guided Surgery, and Enhanced Reality Visualization", IEEE, vol. 15, No. 2, pp. 129-140, Apr. 1996.
Debuck et al., “An Augmented Reality System for Patient-Specific Guidance of Cardiac Catheter Ablation Procedures”, IEEE Transactions on Medical Imaging, vol. 24, No. 11, pp. 1512-1524, Nov. 2005.
Zhang et al., "A High-accuracy Surgical Augmented Reality System Using Enhanced Integral Videography Image Overlay", IEEE, 4 pages, 2015.
Traub et al., “Advanced Display and Visualization Concepts for Image Guided Surgery”, Journal of Display Technology, vol. 4, No. 4, pp. 483-490, Dec. 2008.
Related Publications (1)
Number Date Country
20180286136 A1 Oct 2018 US
Provisional Applications (3)
Number Date Country
62270895 Dec 2015 US
62181433 Jun 2015 US
62111379 Feb 2015 US
Continuations (1)
Number Date Country
Parent 15013594 Feb 2016 US
Child 16000393 US