Remotely controlling a system using video

Information

  • Patent Grant
  • Patent Number
    11,786,325
  • Date Filed
    Wednesday, June 17, 2020
  • Date Issued
    Tuesday, October 17, 2023
Abstract
Systems and methods for remotely controlling a system using video are provided. A method in accordance with the present disclosure includes detecting a video signal of an auxiliary system at a video input, wherein the video signal includes images encoded with control information. The method also includes determining that the images included in the video signal include the control information. The method further includes extracting the control information from the images. Additionally, the method includes modifying operations of the system based on the control information.
Description
FIELD

The present disclosure generally relates to remote control systems and, more particularly, to the remote control of a system using teleoperation.


BACKGROUND

One of the more significant medical advancements in recent decades has been in the field of surgical robotics technologies. For example, robotic surgical systems now provide surgeons with teleoperative abilities to control surgical instruments with improved precision and range of motion, while also providing enhanced visualization and access to hard-to-reach areas. These systems have been able to provide patients with a minimally invasive alternative to open and laparoscopic surgeries. One such system is the da Vinci™ Surgical System, by Intuitive Surgical, Inc. of Sunnyvale, Calif. The da Vinci™ Surgical System is a robotically controlled surgical system that provides a surgeon console or control center and a patient-side cart. The patient-side cart has multiple movable arms for mechanically controlling attached surgical instruments. A drive system with a mechanical interface enables the surgeon to remotely move and position respective instruments with precision during medical procedures, while seated in the ergonomically designed console. A high-definition camera provides the surgeon with a highly magnified, high-resolution 3D image of the surgical site once the camera is placed inside the patient. Coupled with the use of controllers on the surgeon console, the surgeon can translate his or her own hand movements into smaller, precise movements of tiny surgical instruments in the patient.


Current robotic surgical systems typically lack the functionality to interoperate with recently developed portable systems, such as mobile phones, tablet computers, and the like. The availability, power, and flexibility of such systems offer surgeons the opportunity to access information that can improve their ability to assess a patient's condition during surgery and, thereby, provide improved results. However, modifying an existing system to accommodate these new devices typically involves significant modification of the existing system's hardware, firmware, and software. It would therefore be desirable to enable recently developed systems to interoperate with existing systems, without making significant modifications to the existing systems.


BRIEF SUMMARY

The present disclosure is generally directed to performing remote control of a system using another system. A system in accordance with the present disclosure performs operations including detecting a video signal of an auxiliary system at a video input, wherein the video signal includes images encoded with control information. The operations also include determining that the images included in the video signal include the control information. The operations further include extracting the control information from the images. Additionally, the operations include modifying operations of the system based on the control information.


In some implementations, an auxiliary system generates a video signal displayed by a primary system. The auxiliary system can embed control information within image data transmitted in the video signal to control modes of operation of the primary system. The control information may be embedded by the auxiliary system using steganography techniques. In some implementations, when the auxiliary system is communicating with the primary system, the primary system may adjust its display and user interface. In some implementations, the primary system may disable predetermined functions (e.g., teleoperations) when a user is using the auxiliary system to control the primary system. Some implementations may implement certain data security and integrity features, such as checksums or error-correcting codes. Some implementations may also employ cryptographic features for authentication and authorization.


In one exemplary implementation, the primary system can be a robotic surgical system and the auxiliary system can be a mobile device. A user of the mobile device may interact with the robotic surgical system and provide control information indicating a desired state of operation for the robotic surgical system via a video signal, wherein the video signal comprises first data representing an image or video captured by the mobile device and second data representing the control information. The mobile device may selectively change the first data based on the control information and transmit the video signal to the robotic surgical system. The robotic surgical system may decode the control information from the video signal and modify a mode of operation of the console based on the control information. For example, based on the control information, the surgical system may perform operations such as changing a location of a display, presenting a user interface, and selectively enabling or disabling at least one robotic arm.


It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Additional features of the disclosure will be set forth in part in the description which follows or may be learned by practice of the disclosure.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several implementations of the disclosure and together with the description, explain the principles of the disclosure.



FIG. 1 shows a block diagram illustrating an example of an environment for implementing systems and processes in accordance with aspects of the present disclosure.



FIG. 2 shows a block diagram illustrating an auxiliary system in accordance with an implementation of the disclosure.



FIG. 3 shows a block diagram illustrating a primary system in accordance with an implementation of the disclosure.



FIG. 4 shows a process flow diagram illustrating an example method of operating a system in accordance with aspects of the present disclosure.



FIG. 5 illustrates a robotic surgical system in accordance with aspects of the present disclosure.





DETAILED DESCRIPTION

The present disclosure provides systems and methods for an auxiliary system to remotely control behaviors of a primary system. In some implementations, the auxiliary system can be a mobile device, the primary system can be a robotic surgical system, and the video signal can include images transmitted from a video source of the mobile device for display by the robotic surgical system. In accordance with aspects of the present disclosure, the auxiliary system may incorporate the control information into data comprising the images transmitted to the primary system. For example, the auxiliary system can embed the control information in the image data using steganography techniques. In some implementations, the control information provided by the auxiliary system controls the primary system to modify a behavior, an operation, or an output of a user interface of the primary system. For example, the surgical system may change display modes and suspend teleoperation of surgical instruments in response to receiving the control information included in video images from the mobile device. Further, some implementations may authenticate the identity and the integrity of the auxiliary system, the video connection, and the video signal. For example, the primary system and the auxiliary system may use cryptographic techniques, such as digital signatures or one-time passwords.


Implementations disclosed herein enable auxiliary systems to interoperate with and control an existing primary system without significant modification of the primary system. For example, the existing surgical system may have hardware, firmware, and/or software that are incompatible with a newly or recently developed mobile device. Consequently, the surgical system may be unable to interoperate with the mobile device. However, in accordance with aspects of the present disclosure, the auxiliary system can transparently embed control information (e.g., application data) in image data using steganography techniques and transmit the image data in a video signal to an existing video input (e.g., Digital Video Interface (“DVI”), high-definition serial digital interface (“HD-SDI”), or high-definition multimedia interface (“HDMI”)) of the primary system. The primary system can detect the video signal, extract the control information from the image data contained therein, and modify its operation based on the extracted control information.



FIG. 1 shows a functional block diagram illustrating an example of an environment 100 for implementing systems and processes in accordance with aspects of the present disclosure. The environment can include a primary system 105, a user interface system 107, and an auxiliary system 110. In some implementations, the environment 100 can be a surgical system, wherein the primary system 105 and the user interface system 107 are a patient-side cart and a surgeon console, respectively. For example, the patient-side cart, the surgeon console, and the electronics/control can be the same or similar to those provided in the da Vinci® Xi (model no. IS4000) commercialized by Intuitive Surgical, Inc. of Sunnyvale, Calif., as previously described herein. In some such implementations, the primary system 105 can include tools to perform minimally invasive robotic surgery by interfacing with and controlling a variety of surgical instruments. For example, the primary system 105 can include one or more user-controllable arms configured to hold, position, and manipulate various surgical instruments and other tools.


The user interface system 107 can receive (directly or indirectly) video signals 140 and user interface signals 145 from the primary system 105 and the auxiliary system 110. While FIG. 1 illustrates the user interface system 107 being directly connected to the primary system 105 and indirectly connected to the auxiliary system 110, it is understood that other implementations of the environment 100 can rearrange the primary system 105, the user interface system 107, the auxiliary system 110, and the connections therebetween. For example, as indicated by the dashed lines in FIG. 1, some implementations of the environment 100 can connect auxiliary system 110 directly to the user interface system 107. Also, some implementations can combine some or all of the hardware and functionality of the user interface system 107 with the primary system 105.


In some implementations, the user interface system 107 can include one or more user-input devices 108 for operating the primary system 105. The input device 108 can include, for example, joysticks, computer mice, computer trackballs, foot pedals, touchscreen displays, or other suitable input devices. Additionally, the user interface system 107 can include one or more display devices 109 for displaying still images and video. The display device 109 can be, for example, a cathode ray tube (CRT) display, a liquid crystal display (LCD), a light-emitting diode (LED) display, a stereoscopic display, or other suitable display technology.


In some implementations, the user interface system 107 can be a surgeon console for operating a robotic surgical system 500 (see FIG. 5). Robotic surgical system 500 may include a console 504 having a three-dimensional (3D) display 506 and one or more controls 508 for controlling the instruments 502. Through the input devices, the user interface system 107 serves as a controller by which the instruments 502 (such as one or more robotic arms) mounted at the robotic surgical system 500 act as an implement to achieve desired motions of the surgical instrument(s), and accordingly perform a desired surgical procedure. It is understood, however, that the environment 100 is not limited to receiving inputs at the user interface system, and inputs may be received at any device which can be configured to realize a manipulation of the surgical instrument(s) at the primary system 105. For example, a surgical instrument at the primary system 105 may be manipulated at the primary system 105 through user interface system 107 in combination with another surgical instrument support device, or entirely through the other surgical support device.


Additionally, the user interface system 107 can receive image data from the primary system 105 and the auxiliary system 110, and can present such images using the display device 109. For example, the user interface system 107 can receive video signals 140 including image data from a primary video source 113 of the primary system 105 and image data from an auxiliary video source 115 of the auxiliary system 110. The user interface system 107 can display the received images together on the display device 109. For example, the primary video source 113 can be an endoscopic camera that outputs primary video 151 that can be displayed to a surgeon using the display device 109 of the user interface system 107. Further, the auxiliary video source 115 can output auxiliary video 155 that can be displayed together with the primary video 151 on the display device 109. In some implementations, the user interface system 107 can present the primary video 151 and the auxiliary video 155 as separate tiles in non-overlapping areas of the display device 109. It is understood that, in some implementations, the primary system 105 or the user interface system 107 can combine the primary video 151 and the auxiliary video 155. For example, the primary video 151 and the auxiliary video 155 can be indexed and/or overlaid to provide a single mixed image on the display device 109.


The auxiliary system 110 can be a computing system including a graphical user interface that displays images from an auxiliary video source 115. In some implementations, the auxiliary system 110 is a mobile personal computing device. For example, the auxiliary system 110 can be a laptop computer, a tablet computer, a smart phone, a video camera, or other such device. The auxiliary system 110 can be connected to the primary system 105 and the user interface system 107 by a video channel 120 and a data channel 125. In some implementations, the auxiliary video source 115 can be an image sensor of the auxiliary system 110. For example, the auxiliary video source 115 can be video captured by a camera of the auxiliary system 110 and presented on a display 117 of the auxiliary system 110. Additionally, in some implementations, the auxiliary video source 115 can be image data stored locally by the auxiliary system 110, or image data retrieved (e.g., accessed or streamed) from a remote repository over a communication link or network (e.g., the Internet).


The video channel 120 can include one or more communication links, which can be any combination of wired and/or wireless links using any combination of video transmission techniques and video protocols. In some implementations, the video channel 120 may be a wired video link (e.g., S-video, HDMI, DVI, DisplayPort, or other suitable video connection). In other implementations, the video channel 120 can be a wireless video link (e.g., wireless home digital interface (“WHDI”), WirelessHD™, WiGig™, AirPlay™, Miracast™, WiDi™, or another suitable wireless video connection). Further, in some implementations, the video channel 120 can be a unidirectional video connection configured solely to transmit video signals or audiovisual signals. For example, the unidirectional video connection can be configured to function solely at particular video transmission frequencies and solely with video transmission protocols.


The information channel 125 can include one or more communication links or data links. The information channel 125 can comprise any combination of wired and/or wireless links, can span any combination of one or more networks (e.g., the Internet, a wide area network, a local area network, a virtual private network, etc.), and can utilize any combination of transmission techniques and protocols. In some implementations, the information channel 125 can include, for example, a universal serial bus (“USB”), FireWire, Wi-Fi, Bluetooth, Ethernet, or other suitable data communication links. In some implementations, the auxiliary system 110 solely uses the information channel 125 to transmit non-video data signals 135, while solely using the video channel 120 for transmitting video or audiovisual signals 130. Some implementations lack the information channels 125 and 125A, and the auxiliary system 110 solely uses the video channel 120 to communicate with the primary system 105 using the auxiliary video signal 130.


In a non-limiting example of an implementation consistent with the present disclosure, the primary system 105 can be a surgical system, and the auxiliary system 110 can be a mobile device connected to the surgical system solely by the video channel 120. For example, a user can connect the mobile device to the surgical system by connecting a video transmission cable (e.g., HDMI) as the video channel 120 to a video input connector of the primary system 105 or the user interface system 107. In accordance with aspects of the present disclosure, the user of the auxiliary system 110 can provide control inputs that change the behavior or state of the primary system 105 using the auxiliary system 110. The control inputs can modify behaviors, operations, and outputs of the primary system 105. For example, where the primary system is a robotic surgical system, the control information from the mobile device may cause the surgical system to change display modes. The display modes may include an interface for controlling system modes, such as: following mode (follower surgical instruments follow movements of the primary tool grips), clutch mode (disengaging follower actuation from primary movement), camera control mode (enabling endoscope movement), energy tools mode (enabling control of surgical energy tools, e.g., electrocautery tools), camera focus mode (enabling camera focus control), arm swapping (allowing various primary and follower arm control combinations), and tiled auxiliary image swapping mode (also referred to as “tilepro”) for enabling control of various picture displays in the surgeon's display, e.g., swapping between a full-screen display and a display in which the surgeon views two or more separate images or data screens.
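

As a purely illustrative sketch (not part of the patent's disclosure), control inputs such as these mode changes could be serialized into a compact control-information payload before being embedded in the video images. The mode names, numeric codes, and “CTRL” marker below are hypothetical assumptions, and Python is used only for illustration.

```python
import struct

# Hypothetical numeric codes for the system modes described above.
MODE_CODES = {
    "following": 0x01,
    "clutch": 0x02,
    "camera_control": 0x03,
    "energy_tools": 0x04,
    "camera_focus": 0x05,
    "arm_swap": 0x06,
    "tile_swap": 0x07,  # tiled auxiliary image swapping ("tilepro")
}

MAGIC = b"CTRL"  # assumed marker identifying an embedded control payload


def build_control_message(mode: str, enable: bool) -> bytes:
    """Pack a single mode-change request into a small fixed-size payload."""
    return struct.pack(">4sBB", MAGIC, MODE_CODES[mode], 1 if enable else 0)


# Example: request that the primary system enter the tiled-display mode.
message = build_control_message("tile_swap", True)
print(message.hex())  # 4354524c0701
```

Any real implementation would define its own message layout; the point is only that a mode change can be expressed in a few bytes, small enough to hide within a single video frame.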


The auxiliary system 110 can convert the user's control inputs to control information, incorporate the control information into video images, and transmit the video images to the primary system 105 in the video signal 130 via the video channel 120. In some implementations, the auxiliary system 110 uses steganographic techniques to incorporate the control information within the video images (see the optional step at block 401 in FIG. 4). For example, in one implementation, the auxiliary system 110 may vary pixels at the edge of the image, or make subtle adjustments to color values of the image data (e.g., by modifying the least significant bits). In this way, the auxiliary system 110 can incorporate the user's control inputs into the image data while minimizing alterations to the video images.
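

The following is a minimal sketch of the least-significant-bit adjustment described above, assuming the video frame is available as an 8-bit RGB NumPy array; the 4-byte length prefix and the helper name are illustrative conventions, not part of the disclosure.

```python
import numpy as np


def embed_lsb(frame: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide `payload` in the least significant bits of an 8-bit RGB frame.

    A 4-byte big-endian length prefix is written first so that a receiver
    knows how many bits to read back (an assumed convention).
    """
    data = len(payload).to_bytes(4, "big") + payload
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    flat = frame.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for this frame")
    # Clear the least significant bit of the first len(bits) samples,
    # then write the payload bits into them.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(frame.shape)


# Example: hide a short control payload in a synthetic 720p frame.
frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
stego_frame = embed_lsb(frame, b"CTRL\x07\x01")
```

Because only the lowest bit of a small fraction of samples changes, the displayed auxiliary image remains visually indistinguishable from the original, which is what allows the control information to ride on an ordinary video connection.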


Further, the primary system 105 can receive the transmission of the auxiliary video signal 130 including the control information, interpret the video images to extract the control information, and modify the behavior, operation, or output of the surgical system based on the control information.



FIG. 2 shows a block diagram illustrating an auxiliary system 110 in accordance with an implementation of the disclosure. The auxiliary system 110 can include a controller 205, a video processor 215, an input/output (I/O) processor 220, a display 117, and a storage system 235. In some implementations, the controller 205 can include a processor 239, memory devices 241, an I/O controller 243, a network interface 245, and a bus 247. The memory devices 241 can include a read-only memory (ROM), a random-access memory (RAM) (e.g., static RAM), and an operating system (O/S). The controller 205 can be in communication with the video processor 215, the I/O processor 220, and the storage system 235. The bus 247 can provide a communication link between the components in the controller 205.


In general, the processor 239 executes computer program instructions, which can be stored in the memory devices 241 and/or the storage system 235. In accordance with aspects of the present disclosure, the program instructions can include a control translation module 255 that, when executed by the processor 239, performs one or more of the functions described herein. It is understood that the control translation module 255 can be implemented as one or more sets of program code stored in the memory 241 as separate or combined modules.


While executing the computer program instructions, the processor 239 can read and/or write data to/from the memory 241, the storage system 235, and the control translation module 255. The program code executes the processes of the present disclosure, for example, by modifying video image data to incorporate control information. In some implementations of the present disclosure, the control translation module 255 is computer program code stored in, for example, the memory device 241 or the storage system 235 that, when executed by the processor 239, causes the controller 205 to perform steganographic encoding of video images. For example, the control information can be application data embedded in a video image based on ancillary data, metadata substitution, least-significant-bit substitution or adaptive substitution, or frequency-space manipulation of data in a carrier, i.e., the original image or video signal generated by the auxiliary device. The control translation module 255 can use existing steganographic tools, such as Xiao Steganography, Image Steganography, Steghide, Crypture, Rsteg, SSuite Picsel, OpenStego, and SteganPeg, that encode and decode information into and from images. In some implementations, the control translation module 255 employs steganographic techniques to adjust pixel values (color, brightness, etc.), such as the least significant bits, or metadata within a portion or across all of its display to encode application data 225, such as control information messages, user interface inputs, etc., within the image or video signal.


The video processor 215 can include an input connection 259 and an output connection 261. In some implementations, the input connection 259 is solely a video input configured only to receive video signals or audiovisual signals. Likewise, in some implementations, the output connection 261 is solely a video output configured only to transmit video signals or audiovisual signals. The video processor 215 can process image data received from the controller 205 to drive the display 117 and to generate the auxiliary video signal 130. In accordance with aspects of the present disclosure, the image data from the controller 205 can include control information incorporated by the control translation module 255.


The I/O processor 220 can include an input/output (I/O) connection 263. In some implementations, the I/O connection 263 is solely a data connection configured only to carry data signals. The I/O processor 220 can process data received from the controller 205 and convert it to a data signal for transmission via the I/O connection 263. The I/O processor 220 can also process data signals received from the I/O connection 263, convert them to data, and provide the data to the controller 205. Further, the I/O processor 220 can exchange information with the display 117. For example, the I/O processor 220 can process user inputs generated by a touch-screen display.



FIG. 3 shows a block diagram illustrating a primary system 105 in accordance with some implementations of the disclosure. The primary system 105 can include a controller 305, a video processor 315, an input/output (I/O) processor 325, a storage system 335, a processor 339, memory devices 341, an I/O controller 343, a network interface 345, and a bus 247, all of which can be the same or similar to that previously described herein.


In accordance with aspects of the present disclosure, the program instructions can include a video translation module 355 and a data integrity module 357 that, when executed by the processor 339, perform one or more of the functions described herein. In some implementations, the video translation module 355 can be the same or similar to the control translation module 255 described above regarding FIG. 2. It is understood that video translation module 355 and the data integrity module 357 can be implemented as one or more sets of program code stored in memory 341 as separate or combined modules.


As will be described further below, in some implementations, the video translation module 355 detects a video signal, such as the auxiliary video signal 130 received by the primary system 105, and determines whether it contains video data including control information. If so, the video translation module 355 can modify the operation of the primary system 105 (including the user interface system 107). The primary system 105 may react to control information by, for example, adjusting its own behavior, mode, or outputs. In some implementations, the control information changes modes of the primary system 105 to enable/disable teleoperation of the primary system 105 (see the optional step at block 413B in FIG. 4).
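

As a hedged sketch of the receiving side, the detection step performed by a module like the video translation module 355 could look for the marker and length prefix assumed in the embedding sketch above; the function below is illustrative only and mirrors those assumed conventions.

```python
from typing import Optional

import numpy as np

MAGIC = b"CTRL"  # assumed marker; see the embedding sketch above


def extract_lsb(frame: np.ndarray) -> Optional[bytes]:
    """Recover a payload hidden in the frame's least significant bits.

    Returns None when the frame does not carry the assumed marker, which is
    how a receiver could decide that a video signal contains no control
    information.
    """
    flat = frame.reshape(-1)
    # The first 32 least significant bits encode the payload length
    # (the assumed convention from the embedding sketch).
    length = int.from_bytes(np.packbits(flat[:32] & 1).tobytes(), "big")
    total_bits = 32 + length * 8
    if length <= 0 or total_bits > flat.size:
        return None
    payload = np.packbits(flat[32:total_bits] & 1).tobytes()
    return payload if payload.startswith(MAGIC) else None


# Example: a frame with no embedded payload almost certainly yields None.
plain_frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
print(extract_lsb(plain_frame))
```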


The data integrity module 357 can ensure the integrity of information received in the auxiliary video signal 130 to prevent unexpected system behavior of the primary system 105. In some implementations, the data integrity module 357 performs various data integrity checks, such as checksums, error-correcting codes, or other suitable verification techniques, for example, based on data embedded in the control information of the auxiliary video signal 130. In one implementation, the data integrity module 357 may perform cryptographic functions to verify the identity, authenticity, and authority of the source of the auxiliary video signal (e.g., the auxiliary system 110) before the control information is permitted to trigger behavior changes by the primary system 105. For example, in one implementation, the primary system 105 may employ a varying access code such as those used in two-factor authentication (e.g., a hash function that generates a constantly varying one-time password from a shared secret).
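

A minimal sketch of the kind of checks such a data integrity module might perform, combining a CRC-32 checksum with a time-varying HMAC code derived from a shared secret (similar in spirit to two-factor one-time passwords); the field layout, window size, and function names are assumptions made only for illustration.

```python
import hashlib
import hmac
import struct
import time
import zlib


def append_integrity_fields(message: bytes, shared_secret: bytes) -> bytes:
    """Append a CRC-32 and a time-varying 8-byte HMAC code (assumed layout)."""
    step = int(time.time()) // 30  # 30-second window, as in TOTP-style schemes
    code = hmac.new(shared_secret, message + struct.pack(">Q", step),
                    hashlib.sha256).digest()[:8]
    return message + struct.pack(">I", zlib.crc32(message)) + code


def verify_integrity_fields(blob: bytes, shared_secret: bytes) -> bool:
    """Check the CRC and accept the HMAC code for the current or previous window."""
    message, crc, code = blob[:-12], blob[-12:-8], blob[-8:]
    if struct.unpack(">I", crc)[0] != zlib.crc32(message):
        return False
    now = int(time.time()) // 30
    for step in (now, now - 1):  # tolerate small clock skew between systems
        expected = hmac.new(shared_secret, message + struct.pack(">Q", step),
                            hashlib.sha256).digest()[:8]
        if hmac.compare_digest(expected, code):
            return True
    return False


# Example round trip with a hypothetical shared secret.
secret = b"example-shared-secret"
blob = append_integrity_fields(b"CTRL\x07\x01", secret)
print(verify_integrity_fields(blob, secret))  # True
```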


The flow block diagram in FIG. 4 illustrates an example of the functionality and operation of possible implementations of systems, methods, and computer program products according to various implementations consistent with the present disclosure. Each block in the flow diagram of FIG. 4 can represent a module, segment, or portion of program instructions, which includes one or more computer executable instructions for implementing the illustrated functions and operations. In some alternative implementations, the functions and/or operations illustrated in a particular block of the flow diagram can occur out of the order shown in FIG. 4. For example, two blocks shown in succession can be executed substantially concurrently, or the blocks can sometimes be executed in the reverse order, depending upon the functionality involved. Further, in some implementations, the flow diagram can include fewer blocks or additional blocks. It will also be noted that each block of the flow diagram and combinations of blocks in the diagram can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.



FIG. 4 shows a flow block diagram illustrating an example of a process 400 for controlling modes, behaviors, and states of a system (e.g., primary system 105) using control information incorporated into images transmitted in a video signal from an auxiliary system (e.g., auxiliary system 110). Turning to block 405 in FIG. 4, the system detects video (e.g., auxiliary video signal 130 or 130A) received from the auxiliary system. In some implementations, the system may detect a physical connection of a cable to a video connection (e.g. video input 359). In other implementations, the system can detect a wireless signal received by the video connection. Additionally, in some implementations, the system may respond to detection of the video at block 405 by providing an indication at the primary system or at a user interface (e.g., user interface system 107). For example, the system may display the received video (e.g., auxiliary video 155) on a display device (e.g., display device 109). The system may also provide an alert of the received video using, for example, an audiovisual indication or a pop-up message on the display.


At block 407, the system (e.g., executing the video translation module 355) determines whether the video signal received at block 405 includes images containing control information. If the video images do not include image data having control information (e.g., block 407 is “No”), the method 400 ends. On the other hand, if the system determines that the video signal includes image data having control information (e.g., block 407 is “Yes”), then at block 408 the system extracts the control information from the video images received in the video signal. For example, as described above, the system can use steganographic tools to detect and extract control information from the video signal received at block 405. At block 409, the system can validate the information received from the auxiliary system at block 405. In some implementations, the system validates the control information extracted at block 408. In some implementations, the system can validate information provided in the video signal received at block 405 prior to the extraction at block 408. The validation at block 409 can verify that the information received from the auxiliary system is authentic and that it represents actual control information. As previously described herein, validation of the information may utilize cryptographic techniques, such as digital signatures or one-time passwords. Additionally, the validation can use techniques such as cyclic redundancy checks (“CRC”) or checksums.


At block 410, the system can modify one or more predetermined functions based on the control information extracted at block 408. In some implementations, after validating the control information at block 409, the system can limit the functionality of the primary system or the user interface system to prevent erroneous or malicious control information from causing the primary system to perform undesirable operations. For example, modifying the predetermined functions can include suspending teleoperation of the primary system by the user interface system while receiving valid control information from the auxiliary system. Additionally, in some implementations, modifying the predetermined functions can include disabling a data input/output interface (e.g., I/O 263 and I/O processor 220) while receiving valid control information from the auxiliary system (see the optional step at block 412).


At block 411 and the optional step at block 413A, the system can modify its behaviors, operations, and outputs based on the control information extracted at block 408. For example, as shown in the optional step at block 414, the system may combine video generated at the system (e.g., primary video 151 from the primary video source 113) with video received from the auxiliary system (e.g., auxiliary video 155 of the auxiliary video source 115) to display a tiled image on a user interface (e.g., display device 109 of the user interface system 107). In a particular example, the system may generate a tiled display mode (e.g., a “TilePro Mode” of the da Vinci™ Surgical System) including an arrangement of two auxiliary video inputs (picture archiving and communication system (“PACS”), ultrasound, room camera, etc.) along with the operative image in the main display portion. Additionally, in some implementations, the control information received in the auxiliary video signal may control a size and scaling of the display. For example, when two auxiliary inputs are present, the 3D image provided by the stereoscopic display on the auxiliary system 110 may be scaled on the screen.
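

As an illustrative sketch of the tiled-display combination described above (not the actual TilePro layout), two 8-bit RGB frames could be composited side by side on a single canvas; the sizing and placement choices below are assumptions.

```python
import numpy as np


def tile_frames(primary: np.ndarray, auxiliary: np.ndarray) -> np.ndarray:
    """Place the auxiliary image beside the primary image on a shared canvas.

    Both inputs are 8-bit RGB arrays; the auxiliary tile is vertically
    centered against the primary frame's height (a simple assumed layout).
    """
    h, w = primary.shape[:2]
    aux_h, aux_w = auxiliary.shape[:2]
    canvas = np.zeros((h, w + aux_w, 3), dtype=np.uint8)
    canvas[:, :w] = primary
    top = max((h - aux_h) // 2, 0)
    rows = min(aux_h, h)
    canvas[top : top + rows, w:] = auxiliary[:rows]
    return canvas


# Example: tile a 720p operative image with a smaller auxiliary image.
primary = np.zeros((720, 1280, 3), dtype=np.uint8)
auxiliary = np.full((480, 640, 3), 128, dtype=np.uint8)
print(tile_frames(primary, auxiliary).shape)  # (720, 1920, 3)
```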


At block 415, the system determines whether the video signal received at block 405 includes additional control information. If the video images do not include additional control information (e.g., block 415 is “No”), the method 400 ends. On the other hand, if the system determines that the video signal includes image data having additional control information (e.g., block 415 is “Yes”), then the process 400 iteratively returns to block 409 and continues as described above.


The present disclosure is not to be limited in terms of the particular embodiments described in this application, which are intended as illustrations of various aspects. Many modifications and variations can be made without departing from its spirit and scope, as will be apparent to those skilled in the art. Functionally equivalent methods and apparatuses within the scope of the disclosure, in addition to those enumerated herein, will be apparent to those skilled in the art from the foregoing descriptions. Such modifications and variations are intended to fall within the scope of the appended claims. The present disclosure is to be limited only by the terms of the appended claims, along with the full scope of equivalents to which such claims are entitled. It is also to be understood that the terminology used herein is for the purpose of describing examples of implementations and is not intended to be limiting.


With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.


It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations).


Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).


It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.” In addition, where features or aspects of the disclosure are described in terms of Markush groups, those skilled in the art will recognize that the disclosure is also thereby described in terms of any individual member or subgroup of members of the Markush group.

Claims
  • 1. A robotic surgical system comprising: a display device; an endoscopic camera configured to generate a primary image on the display device; at least one robotic arm configured to perform surgical maneuvers on a patient; a console having a three-dimensional (3D) display and one or more controls for controlling the at least one robotic arm; a video input; a processor; and a computer-readable data storage device storing program instructions that, when executed by the processor, control the robotic surgical system to: detect a video signal of an auxiliary system at the video input, the video signal including one or more auxiliary images encoded with control information for the robotic surgical system; determine that the one or more auxiliary images included in the video signal include the control information; extract the control information from the one or more auxiliary images; selectively disable the at least one robotic arm based on the control information; and combine the one or more auxiliary images with the primary image on the display device.
  • 2. The robotic surgical system of claim 1, wherein the control information controls a behavior of the robotic surgical system.
  • 3. The robotic surgical system of claim 1, wherein the control information modifies a user interface of the robotic surgical system.
  • 4. The robotic surgical system of claim 1, wherein the video input comprises a unidirectional video input configured to solely receive video signals or audiovisual signals.
  • 5. The robotic surgical system of claim 1, wherein the control information is solely transmitted within the one or more auxiliary images.
  • 6. The robotic surgical system of claim 1, wherein the control information is embedded in the one or more auxiliary images using steganography.
  • 7. The robotic surgical system of claim 1, wherein the program instructions further control the robotic surgical system to modify a predetermined function of the robotic surgical system in response to detecting the video signal.
  • 8. The robotic surgical system of claim 7, wherein modifying the predetermined function comprises disabling teleoperation of the robotic surgical system.
  • 9. A method of remotely controlling a robotic surgical system including at least one robotic arm configured to perform surgical maneuvers on a patient and a console having a three-dimensional (3D) display and one or more controls for controlling the at least one robotic arm, the method comprising: generating a primary image from an endoscopic camera onto a display device; detecting, by a processor, a video signal of an auxiliary system at a video input, the video signal including one or more auxiliary images encoded with control information; determining, by the processor, that one or more auxiliary images included in the video signal include the control information; extracting, by the processor, the control information from the one or more auxiliary images; selectively disabling, by the processor, the at least one robotic arm based on the control information; and combining the one or more auxiliary images with the primary image on the display device.
  • 10. The method of claim 9, wherein the control information controls a behavior of the robotic surgical system.
  • 11. The method of claim 9, wherein the control information modifies a user interface of the robotic surgical system.
  • 12. The method of claim 9, wherein the video input comprises a unidirectional video input configured to solely receive video signals or audiovisual signals.
  • 13. The method of claim 9, wherein the control information is solely transmitted within the one or more auxiliary images.
  • 14. The method of claim 9, wherein the control information is embedded in the one or more auxiliary images using steganography.
  • 15. The method of claim 9, further comprising modifying a predetermined function of the robotic surgical system in response to detecting the video signal.
  • 16. The method of claim 15, wherein modifying the predetermined function comprises disabling teleoperation of the robotic surgical system.
CROSS-REFERENCES TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Patent Application No. 62/869,587, filed Jul. 2, 2019, the content of which is incorporated herein in its entirety.

Related Publications (1)
Number Date Country
20210000557 A1 Jan 2021 US
Provisional Applications (1)
Number Date Country
62869587 Jul 2019 US