The present disclosure relates generally to videoconferencing. More particularly, various examples of the present disclosure relate to a robotic stand and systems and methods for controlling the stand during a videoconference.
Videoconferencing allows two or more locations to communicate simultaneously or substantially simultaneously via audio and video transmissions. Videoconferencing may connect individuals (such as point-to-point calls between two units, also known as videophone calls) or groups (such as conference calls between multiple locations). In other words, videoconferencing includes calling or conferencing on a one-to-one, one-to-many, or many-to-many basis.
Each site participating in a videoconference typically has videoconferencing equipment capable of two-way audio and video transmissions. The videoconferencing equipment generally includes a data processing unit, an audio input and output, a video input and output, and a network connection for data transfer. Some or all of the components may be packaged into a single piece of equipment.
Examples of the disclosure may include a robotic stand for supporting a computing device at an elevated position during a teleconference. For example, the robotic stand may support the computing device above a support or work surface including a tabletop, a floor, or other suitable surfaces. The robotic stand may be operative to orient a computing device about at least one of a pan axis or a tilt axis during a videoconference. The robotic stand may include a base, a first member attached to the base, a second member attached to the first member, and a remotely-controllable rotary actuator associated with the first member. The first member may be swivelable relative to the base about a pan axis, and the rotary actuator may be operative to swivel the first member about the pan axis. The second member may be tiltable relative to the first member about a tilt axis, and the computing device may be attached to the second member.
The robotic stand may include a remotely-controllable rotary actuator associated with the second member and operative to tilt the second member about the tilt axis. The robotic stand may include multiple elongate arms each pivotally attached to the second member. The multiple elongate arms may be biased toward one another. The robotic stand may include a gripping member attached to a free end of each elongate arm of the multiple elongate arms. The robotic stand may include a gripping member attached directly to the second member. The robotic stand may include a counterbalance spring attached at a first end to the first member and at a second end to the second member. The counterbalance spring may be offset from the tilt axis. The robotic stand may include a microphone array attached to at least one of the base, the first member, or the second member.
Examples of the disclosure may include a method of orienting a local computing device during a videoconference established between the local computing device and one or more remote computing devices. The method may include supporting the local computing device at an elevated position, receiving a motion command signal from the local computing device, and in response to receiving the motion command signal, autonomously moving the local computing device about at least one of a pan axis or a tilt axis according to a positioning instruction received at the one or more remote computing devices. The motion command signal may be generated from the positioning instruction received at the one or more remote computing devices.
The motion command signal may include a pan motion command operative to pan the local computing device about the pan axis. The motion command signal may include a tilt motion command operative to tilt the local computing device about the tilt axis. The method may include moving the local computing device about the pan axis and the tilt axis. The method may include rotating the local computing device about the pan axis and tilting the local computing device about the tilt axis. The method may include gripping opposing edges of the local computing device with pivotable arms. The method may include biasing the pivotable arms toward one another. The method may include counterbalancing a weight of the local computing device about the tilt axis.
Examples of the disclosure may include automatically tracking an object during a videoconference with a computing device supported on a robotic stand. The method may include receiving sound waves with a directional microphone array, transmitting an electrical signal containing directional sound data to a processor, determining, by the processor, a location of a source of the directional sound data, and rotating the robotic stand about at least one of a pan axis or a tilt axis without user interaction to aim the computing device at the location of the source of the directional sound data.
Rotating the robotic stand about the at least one of a pan axis or a tilt axis may include actuating a rotary actuator associated with the at least one of a pan axis or a tilt axis. The method may include generating, by the processor, a motion command signal and transmitting the motion command signal to the rotary actuator to actuate the rotary actuator.
Examples of the disclosure may include a method of remotely controlling an orientation of a computing device supported on a robotic stand during a videoconference. The method may include receiving a video feed from the computing device, displaying the video feed on a screen, receiving a positioning instruction from a user to move the computing device about at least one of a pan axis or a tilt axis, and sending over a communications network a signal comprising the positioning instruction to the computing device.
The method may include displaying a user interface that allows a user to remotely control the orientation of the computing device. The displaying a user interface may include overlaying the video feed with a grid comprising a plurality of selectable cells. Each cell of the plurality of selectable cells may be associated with a pan and tilt position of the computing device. The receiving the positioning instruction from the user may include receiving an indication the user pressed an incremental move button. The receiving the positioning instruction from the user may include receiving an indication the user selected an area of the video feed for centering. The receiving the positioning instruction from the user may include receiving an indication the user selected an object of the video feed for automatic tracking. The receiving the indication may include receiving a user input identifying the object of the video feed displayed on the screen; in response to receiving the identification, displaying a graphical symbol on the screen illustrating a time period associated with initiation of the automatic tracking; continuing to receive the user input identifying the object for the time period; and in response to completion of the time period, triggering the automatic tracking of the identified object. The method may include receiving a storing instruction from a user to store a pan and tilt position; in response to receiving the storing instruction, storing the pan and tilt position; and in response to receiving the storing instruction, associating the pan and tilt position with a user interface element. The method may include storing a still image of the video feed and associating position data with the still image in response to a gesture performed by the user.
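By way of illustration only, the press-and-hold interaction for initiating automatic tracking described above may be sketched in a few lines of Python. The class name `HoldToTrackGesture`, the `HOLD_SECONDS` period, and the callbacks are hypothetical stand-ins introduced for this sketch and are not part of the disclosure; the sketch merely shows one way a hold timer could gate the start of tracking.

```python
import time

HOLD_SECONDS = 1.5  # hypothetical hold period before tracking starts


class HoldToTrackGesture:
    """Trigger automatic tracking after the user holds a selection."""

    def __init__(self, on_progress, on_track):
        self.on_progress = on_progress  # e.g., draw the on-screen timer symbol
        self.on_track = on_track        # e.g., begin automatic tracking
        self._start = None
        self._target = None

    def press(self, target):
        """User touched an object in the video feed."""
        self._start = time.monotonic()
        self._target = target

    def release(self):
        """User lifted the touch before the period completed."""
        self._start = None
        self._target = None

    def tick(self):
        """Call periodically (e.g., every UI frame)."""
        if self._start is None:
            return
        elapsed = time.monotonic() - self._start
        self.on_progress(min(elapsed / HOLD_SECONDS, 1.0))
        if elapsed >= HOLD_SECONDS:
            target, self._start = self._target, None
            self.on_track(target)  # time period complete: start tracking


gesture = HoldToTrackGesture(on_progress=lambda frac: None,
                             on_track=lambda obj: print("tracking", obj))
gesture.press("person-1")
time.sleep(HOLD_SECONDS)
gesture.tick()  # prints: tracking person-1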
This summary of the disclosure is given to aid understanding, and one of skill in the art will understand that each of the various aspects and features of the disclosure may advantageously be used separately in some instances, or in combination with other aspects and features of the disclosure in other instances. Accordingly, while the disclosure is presented in terms of examples, it should be appreciated that individual aspects of any example can be claimed separately or in combination with aspects and features of that example or any other example.
This summary is neither intended nor should it be construed as being representative of the full extent and scope of the present disclosure. The present disclosure is set forth in various levels of detail in this application and no limitation as to the scope of the claimed subject matter is intended by either the inclusion or non-inclusion of elements, components, or the like in this summary.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate examples of the disclosure and, together with the general description given above and the detailed description given below, serve to explain the principles of these examples.
It should be understood that the drawings are not necessarily to scale. In certain instances, details that are not necessary for an understanding of the disclosure or that render other details difficult to perceive may have been omitted. In the appended drawings, similar components and/or features may have the same reference label. It should be understood that the claimed subject matter is not necessarily limited to the particular examples or arrangements illustrated herein.
The present disclosure describes examples of robotic stands for use in conducting a videoconference. The robotic stand, a local computing device, and a remote computing device may be in communication with one another during the videoconference. The local computing device may be mounted onto the robotic stand and may be electrically coupled to the stand (e.g., in electronic communication with the stand). A remote participant in the videoconference, or other entity, may control the orientation of the local computing device by interacting with the remote computing device and generating motion commands for the robotic stand. For example, the remote participant may generate pan and/or tilt commands using the remote computing device and transmit the commands to the local computing device, the robotic stand, or both. The robotic stand may receive the commands and rotate the local computing device about a pan axis, a tilt axis, or both in accordance with the commands received from the remote participant. As such, a user of a remote computing device may control the orientation of a local computing device in real time during a live videoconference.
The one or more remote computing devices 105 may include, but are not limited to, a desktop computer, a laptop computer, a tablet, a smart phone, or any other computing device capable of transmitting and receiving videoconference data. Each of the remote computing devices 105 may be configured to communicate over the network 110 with any number of devices, including the one or more servers 115, the local computing device 120, and the robotic stand 125. The network 110 may comprise one or more networks, such as campus area networks (CANs), local area networks (LANs), metropolitan area networks (MANs), personal area networks (PANs), wide area networks (WANs), cellular networks, and/or the Internet. Communications provided to, from, and within the network 110 may be wired and/or wireless, and further may be provided by any networking devices known in the art, now or in the future. Devices communicating over the network 110 may communicate by way of various communication protocols, including TCP/IP, UDP, RS-232, and IEEE 802.11.
The one or more servers 115 may include any type of processing resources dedicated to performing certain functions discussed herein. For example, the one or more servers 115 may include an application or destination server configured to provide the remote and/or local computing devices 105, 120 with access to one or more applications stored on the server. In some embodiments, for example, an application server may be configured to stream, transmit, or otherwise provide application data to the remote and/or local computing devices 105, 120 such that the devices 105, 120 and an application server may establish a session, for example a video client session, in which a user of the remote or local computing device 105, 120 may utilize a particular application hosted on the application server. As another example, the one or more servers 115 may include an Internet Content Adaptation Protocol (ICAP) server, which may reduce consumption of resources of another server, such as an application server, by separately performing operations such as content filtering, compression, and virus and malware scanning. In particular, the ICAP server may perform operations on content exchanged between the remote and/or local computing devices 105, 120 and an application server. As a further example, the one or more servers 115 may include a web server having hardware and software that delivers web pages and related content to clients (e.g., the remote and local computing devices 105, 120) via any type of markup language (e.g., HyperText Markup Language (HTML) or eXtensible Markup Language (XML)) or other suitable language or protocol.
The local computing device 120 may include a laptop computer, a tablet, a smart phone, or any other mobile or portable computing device that is capable of transmitting and receiving videoconference data. The local computing device 120 may be a mobile computing device including a display or screen that is capable of displaying video data. The local computing device 120 may be mounted onto the robotic stand 125 to permit a user of one of the remote computing devices 105 to remotely orient the local computing device 120 during a videoconference. For example, a user of one of the remote computing devices 105 may remotely pan and/or tilt the local computing device 120 during a videoconference, such as by controlling the robotic stand 125. The local computing device 120 may be electrically coupled to the robotic stand 125 by a wired connection, a wireless connection, or both. For example, the local computing device 120 and the robotic stand 125 may communicate wirelessly using Bluetooth.
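By way of a concrete illustration of the wireless coupling, the following Python sketch assumes (hypothetically) that the robotic stand 125 exposes a Bluetooth serial (RFCOMM) service bound at `/dev/rfcomm0` and accepts newline-terminated text commands. The port path, baud rate, and the `PAN`/`TILT` command strings are illustrative assumptions, not a protocol defined by the disclosure; the `pyserial` package is assumed to be installed.

```python
import serial  # pyserial

# Assumed RFCOMM binding for the stand's Bluetooth serial service.
stand = serial.Serial("/dev/rfcomm0", baudrate=115200, timeout=1.0)


def send_motion_command(axis: str, degrees: float) -> None:
    """Send a hypothetical pan/tilt command string to the stand."""
    stand.write(f"{axis}:{degrees:+.1f}\n".encode("ascii"))


send_motion_command("PAN", 15.0)   # pan 15 degrees in one direction
send_motion_command("TILT", -5.0)  # tilt 5 degrees in the other
```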
It is to be understood that the arrangement of computing components described herein is quite flexible. While a single memory or processing unit may be shown in a particular view or described with respect to a particular system, it is to be understood that multiple memories and/or processing units may be employed to perform the described functions.
In some implementations, the video client modules 220, 270 and the control modules 225, 275 are standalone software applications existing on the computing devices 105, 120, respectively, and running in parallel with one another. In these implementations, the video client modules 220, 270 may send video and audio data through a first session established between the video client modules 220, 270. The control modules 225, 275 may run in parallel with the video client modules 220, 270, respectively, and send motion control data through a second session established between the control modules 225, 275. The first and second sessions may be established, for example, by way of the network 110, the server(s) 115, the web browser modules 215, 265, or any combination thereof. In one implementation, the first and second sessions are established between the respective modules via the Internet.
In some implementations, the video client module 220, 270 and the control module 225, 275 are combined together into a single software application existing on the computing devices 105, 120, respectively. In these implementations, the video client modules 220, 270 and the control modules 225, 275 may send video data, audio data, and/or motion control data through a single session established between the computing devices 105, 120. The single session may be established, for example, by way of the network 110, the server(s) 115, the web browser modules 215, 265, or any combination thereof. In one implementation, the single session is established between the computing devices 105, 120 via the Internet.
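One way to realize the single-session variant is to multiplex video, audio, and motion control data as tagged messages over one connection. The Python sketch below frames each message with a channel tag and a length prefix; the channel names, framing, host name, and port are assumptions made for illustration only, not a wire format defined by the disclosure.

```python
import json
import socket
import struct


def send_message(sock: socket.socket, channel: str, payload: bytes) -> None:
    """Send one channel-tagged, length-prefixed message on a shared session."""
    header = json.dumps({"channel": channel, "length": len(payload)}).encode()
    sock.sendall(struct.pack("!I", len(header)) + header + payload)


# Example: motion control data and video data sharing one TCP session.
# sock = socket.create_connection(("local-device.example", 9000))
# send_message(sock, "control", b'{"pan": 5, "tilt": 0}')
# send_message(sock, "video", video_frame_bytes)
```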
Remote control of the robotic stand 125 may be accomplished through numerous types of user interfaces.
Each cell 302 may represent a discrete position within the coordinate system 304. The current tilt and pan position of the robotic stand 125 may be denoted by visually distinguishing a cell 312 from the rest of the cells, such as highlighting the cell 312 and/or distinctly coloring the cell 312. A remote user may incrementally move the robotic stand 125 by pressing incremental move buttons 314, 316 situated along side portions of the coordinate system 304. The incremental move buttons 314, 316 may be represented by arrows pointing in the desired movement direction. A remote user may click on an incremental pan button 314 to incrementally pan the robotic stand 125 in the direction of the clicked arrow. Similarly, a remote user may click on an incremental tilt button 316 to incrementally tilt the robotic stand 125 in the direction of the clicked arrow. Each click of the incremental move buttons 314, 316 may move the current cell 312 by one cell in the direction of the clicked arrow. Additionally or alternatively, each cell 302 may be a button and may be selectable by a user of the remote computing device 105. Upon a user clicking or tapping (e.g., touching) one of the cells 302, the remote computing device 105 may transmit a signal containing motion command data to the local computing device 120, the robotic stand 125, or both. The motion command data may include a motion command to pan and/or tilt the local computing device 120 to an orientation associated with the selected cell. The robotic stand 125 may receive the motion command and move the local computing device 120 to the desired pan and tilt position. A user of the remote computing device 105 may orient the local computing device 120 into any orientation within a motion range of the robotic stand 125 by selecting any cell 302 within the coordinate system 304. In some examples, the cells 302 may not be displayed. However, a touch or click at a location on the screen may be translated into pan and/or tilt commands in accordance with the position of the click or tap on the screen.
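The mapping from a selected cell to a pan and tilt target can be expressed directly. The following Python sketch assumes, for illustration only, a stand with a ±90° pan range, a ±30° tilt range, and a grid of the stated dimensions; the ranges, grid size, and function name are hypothetical.

```python
def cell_to_pan_tilt(col, row, cols, rows,
                     pan_range=(-90.0, 90.0), tilt_range=(-30.0, 30.0)):
    """Map a grid cell (col, row) to a pan/tilt target in degrees.

    Cells are indexed from the top-left; each cell is a discrete
    position within the stand's motion range (ranges are assumed).
    """
    pan_span = pan_range[1] - pan_range[0]
    tilt_span = tilt_range[1] - tilt_range[0]
    pan = pan_range[0] + (col + 0.5) / cols * pan_span
    tilt = tilt_range[1] - (row + 0.5) / rows * tilt_span  # row 0 = top
    return pan, tilt


# Selecting the center cell of a 9 x 5 grid yields the home position:
print(cell_to_pan_tilt(4, 2, cols=9, rows=5))  # (0.0, 0.0)
```

The same function covers the gridless variant: a click location on the screen reduces to fractional column and row coordinates before the mapping is applied.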
The provided user interface examples may be implemented using any computing system, such as but not limited to a desktop computer, a laptop computer, a tablet computer, a smart phone, or other computing systems. Generally, a computing system 105 for use in implementing example user interfaces described herein may include one or more processing unit(s) 210, and may include one or more computer readable media (which may be transitory or non-transitory and may be implemented, for example, using any type of memory or electronic storage 205 accessible to the computing system 105) encoded with executable instructions that, when executed by one or more of the processing unit(s) 210, may cause the computing system 105 to implement the user interfaces described herein. In some examples, therefore, a computing system 105 may be programmed to provide the example user interfaces described herein, including displaying the described images, receiving described inputs, and providing described outputs to a local computing device 120, a robotic stand 125, or both.
The rotary actuator module 615 may include a servomotor or a stepper motor, for example. In some implementations, the rotary actuator module 615 includes multiple servomotors associated with different axes. The rotary actuator module 615 may include a first servomotor associated with a first axis and a second servomotor associated with a second axis that is angled relative to the first axis. The first and second axes may be perpendicular or substantially perpendicular to one another. The first axis may be a pan axis, and the second axis may be a tilt axis. Upon receiving a motion command signal from the processor unit(s) 610, the first servomotor may rotate the local computing device 120 about the first axis. Likewise, upon receiving a motion command signal from the processor unit(s) 610, the second servomotor may rotate the local computing device 120 about the second axis. In some implementations, the rotary actuator module 615 may include a third servomotor associated with a third axis, which may be perpendicular or substantially perpendicular to the first and second axes. The third axis may be a roll axis. Upon receiving a motion command signal from the processor unit(s) 610, the third servomotor may rotate the local computing device 120 about the third axis. In some implementations, a user of the remote computing device 105 may control a fourth axis of the local computing device 120. For example, a user of the remote computing device 105 may remotely control a zoom functionality of the local computing device 120 in real time during a videoconference. The remote zoom functionality may be associated with the control modules 225, 275 of the remote and local computing devices 105, 120, for example.
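A simple dispatch layer for the per-axis servomotors might look like the following Python sketch. The `Servo` class and its `rotate_to` method are hypothetical stand-ins for whatever motor driver the stand actually uses; only the routing of motion command signals to the matching axis is being illustrated.

```python
class Servo:
    """Hypothetical driver for one servomotor; rotate_to is a stand-in."""

    def __init__(self, name: str):
        self.name = name
        self.position = 0.0  # current angle, degrees

    def rotate_to(self, degrees: float) -> None:
        print(f"{self.name} servo -> {degrees:+.1f} deg")
        self.position = degrees


class RotaryActuatorModule:
    """Route per-axis motion command signals to the matching servomotor."""

    def __init__(self):
        self.axes = {"pan": Servo("pan"),    # first axis
                     "tilt": Servo("tilt"),  # second, ~perpendicular axis
                     "roll": Servo("roll")}  # optional third axis

    def handle_motion_command(self, axis: str, degrees: float) -> None:
        self.axes[axis].rotate_to(degrees)


module = RotaryActuatorModule()
module.handle_motion_command("pan", 15.0)
module.handle_motion_command("tilt", -5.0)
```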
The microphone array 665 may include one or more microphones that receive sound waves from the environment associated with the local computing device 120 and convert the sound waves into an electrical signal for transmission to the local computing device 120, the remote computing device 105, or both during a videoconference. The microphone array 665 may include three or more microphones spatially separated from one another for triangulation purposes. The microphone array 665 may be directional such that the electrical signal containing the local sound data includes the direction of the sound waves received at each microphone. The microphone array 665 may transmit the directional sound data in the form of an electrical signal to the sound processor 670, which may use the directional sound data to determine the location of the sound source. For example, the sound processor 670 may use triangulation methods to determine the source location. The sound processor 670 may transmit the source location data to the processor unit(s) 610, which may use the data to generate motion commands for the rotary actuator(s) 620. The processor unit(s) 610 may transmit the motion commands to the rotary actuator module 615, which may produce rotary motion or torque based on the commands. As such, the robotic stand 125 may automatically track the sound originating around the local computing device 120 and may aim the local computing device 120 at the sound source without user interaction. The sound processor 670 may transmit the directional sound data to the local computing device 120, which in turn may transmit the data to the remote computing device(s) 105 for use in connection with a graphical user interface.
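For a sense of how directional sound data can yield a pan target, consider the simplified far-field, two-microphone case: a source at bearing θ produces an arrival-time difference Δt = d·sin(θ)/c across microphones separated by distance d, so θ = arcsin(c·Δt/d). The Python sketch below illustrates this geometry only; it is not the (unspecified) triangulation method of the disclosure, and with three or more microphones the same idea extends to triangulating a source position rather than a single bearing.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature


def bearing_from_tdoa(delta_t: float, mic_spacing: float) -> float:
    """Estimate source bearing (degrees) from the time-difference of
    arrival between two microphones, assuming a far-field source."""
    ratio = SPEED_OF_SOUND * delta_t / mic_spacing
    ratio = max(-1.0, min(1.0, ratio))  # clamp against measurement noise
    return math.degrees(math.asin(ratio))


# A 0.1 ms lead across mics 10 cm apart puts the source ~20 deg off-axis,
# which the processor could convert into a pan command for the stand.
print(f"pan to {bearing_from_tdoa(1e-4, 0.10):+.1f} deg")
```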
As explained above, various modules of the remote computing device(s) 105, the local computing device 120, and the robotic stand 125 may communicate with other modules by way of a wired or wireless connection. For example, various modules may be coupled to one another by a serial or parallel data connection. In some implementations, various modules are coupled to one another by way of a serial bus connection.
The local computing device 702 may be securely held by the robotic stand 704 such that the stand 704 may move the local computing device 702 about various axes without the local computing device 702 slipping relative to the stand 704. The stand 704 may include a vertical grip 706 that retains a lower edge of the local computing device 702.
At operation 1120, the local computing device 120 is mounted onto a robotic stand 125, which operation may occur prior to, concurrently with, or subsequent to establishing the video session. To mount the local computing device 120 onto the robotic stand 125, a lower edge of the local computing device 120 may be positioned on a gripping member 706 coupled to the stand 125. Additional gripping members 708 may be positioned in abutment with opposing side edges of the local computing device 120, thereby securing the local computing device 120 to the stand 125. The additional gripping members 708 may be coupled to pivotable arms 712, which may be biased toward one another. In some implementations, a user of the local computing device 120 may pivot the arms 712 away from one another by applying an outwardly-directed force to one of the arms 712. Once the free ends of the arms 712 are spread apart from one another a sufficient distance to permit the local computing device 120 to be placed between the gripping members 708, the local computing device 120 may be positioned between the gripping members 708 and the user may release the arm 712 to permit the arms 712 to drive the gripping members 708 into engagement with opposing sides of the local computing device 120.
At operation 1130, the local computing device 120, the robotic stand 125, or both may receive motion control data. In some situations, the motion control data is received from the remote computing device 105. The motion control data may be transceived between the remote and local computing devices 105, 120 by way of the respective control modules 225, 275. In some situations, the motion control data is received from a sound module 655. The sound module 655 may receive sound waves with a microphone array 665 and transmit an electrical signal containing the sound data to a sound processor 670, which may determine a location of a source of the sound waves. The sound processor 670 may transmit the sound data to a processing unit 610, which may process the sound data into motion control data. Although referred to as separate components, the sound processor 670 and the processing unit 610 may be a single processing unit. The motion control data may include motion commands such as positioning instructions. The positioning instructions may include instructions to pan the local computing device 120 about a pan axis in a specified direction, to tilt the local computing device about a tilt axis in a specified direction, or both.
At operation 1140, the robotic stand 125 may orient the local computing device 120 according to the motion control data. The processing unit 610 may actuate a rotary actuator 620 associated with at least one of a pan axis 728 or a tilt axis 722 by transmitting a signal containing a trigger characteristic (such as a certain current or voltage) to the rotary actuator 620. The processing unit 610 may continue to transmit the signal to the rotary actuator 620 until the robotic stand 125 moves the local computing device 120 into the instructed position. A separate rotary actuator 620 may be associated with each axis 728, 722. The processing unit 610 may monitor the current rotational position of the rotary actuator relative to the instructed rotational position to ensure the robotic stand 125 moves the local computing device 120 into the desired position.
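The monitor-until-positioned behavior of operation 1140 is essentially a closed feedback loop. A minimal Python sketch follows, with hypothetical `read_position` and `drive` methods standing in for the rotary actuator's real interface; the gain, tolerance, and polling rate are likewise assumptions.

```python
import time

TOLERANCE = 0.5  # degrees; assumed positioning tolerance


def move_to(actuator, target_degrees: float) -> None:
    """Keep driving the rotary actuator until it reaches the
    instructed rotational position (actuator interface is hypothetical)."""
    while True:
        error = target_degrees - actuator.read_position()
        if abs(error) <= TOLERANCE:
            actuator.drive(0.0)  # instructed position reached; stop
            return
        # Proportional drive signal, clamped to the actuator's input range.
        actuator.drive(max(-1.0, min(1.0, 0.05 * error)))
        time.sleep(0.01)  # poll the current rotational position at ~100 Hz
```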
At operation 1220, a video feed is displayed on a screen 401 of the remote computing device 105. At operation 1230, motion control data is received from a user of the remote computing device 105. The user of the remote computing device 105 may input a positioning instruction by way of the motion control input module 230. For example, an interactive user interface may be displayed on the screen 401 of the remote computing device 105 and may allow the user to input positioning instructions. The interactive user interface may overlay the video feed displayed on the screen 401. By interacting with the user interface, the user may generate positioning instructions for transmission to the local computing device 120, the robotic stand 125, or both.
At operation 1240, the remote computing device 105 may transmit motion control data including positioning instructions to the local computing device 120, the robotic stand 125, or both. The motion control data may be transmitted from the remote computing device 105 to the local computing device 120 via the respective control module 225, 275 real-time during a video session between the computing devices 105, 120. The motion control data may include motion commands such as positioning instructions. The positioning instructions may include instructions to pan the local computing device 120 about a pan axis in a specified direction, to tilt the local computing device about a tilt axis in a specified direction, or both.
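Concretely, the positioning instructions of operation 1240 might be serialized as a small JSON payload before transmission over the control session. The field names below are illustrative assumptions made for this sketch, not a message format defined by the disclosure.

```python
import json


def make_positioning_instruction(pan_deg=None, tilt_deg=None):
    """Build a motion-control message for the local device or stand.

    A cell selection, an incremental button press, or a center-on-click
    gesture would each reduce to a payload of this general shape.
    """
    command = {"type": "motion", "pan": pan_deg, "tilt": tilt_deg}
    return json.dumps(command)


# Sent in real time over the session between control modules 225 and 275.
print(make_positioning_instruction(pan_deg=12.5, tilt_deg=-3.0))
```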
As discussed, a robotic stand 125 may include pan and tilt functionality. A portion of the stand 125 may be rotatable about a pan axis, and a portion of the stand 125 may be rotatable about a tilt axis. In some implementations, a user of a remote computing device 105 may remotely orient a local computing device 120, which may be mounted onto the robotic stand 125, by issuing motion commands via a communication network, such as the Internet, to the local computing device 120. The motion commands may cause the stand 125 to move about one or more axes, thereby allowing the remote user to remotely control the orientation of the local computing device 120. In some implementations, the motion commands may be initiated autonomously from within the local computing device 120.
The foregoing description has broad application. While the provided examples are discussed in relation to a videoconference between computing devices, it should be appreciated that the robotic stand may be used as a pan and tilt platform for other devices such as cameras, mobile phones, and digital picture frames. Further, the robotic stand may operate via remote web control following commands manually input by a remote user or may be controlled locally by autonomous features of the software running on a local computing device. Accordingly, the discussion of any embodiment is meant only to be explanatory and is not intended to suggest that the scope of the disclosure, including the claims, is limited to these examples. In other words, while illustrative embodiments of the disclosure have been described in detail herein, it is to be understood that the inventive concepts may be otherwise variously embodied and employed, and that the appended claims are intended to be construed to include such variations, except as limited by the prior art.
The term “module” as used herein refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and software that is capable of performing the functionality associated with that element.
All directional references (e.g., proximal, distal, upper, lower, upward, downward, left, right, lateral, longitudinal, front, back, top, bottom, above, below, vertical, horizontal, radial, axial, clockwise, and counterclockwise) are only used for identification purposes to aid the reader's understanding of the present disclosure, and do not create limitations, particularly as to the position, orientation, or use of this disclosure. Connection references (e.g., attached, coupled, connected, and joined) are to be construed broadly and may include intermediate members between a collection of elements and relative movement between elements unless otherwise indicated. As such, connection references do not necessarily infer that two elements are directly connected and in fixed relation to each other. Identification references (e.g., primary, secondary, first, second, third, fourth, etc.) are not intended to connote importance or priority, but are used to distinguish one feature from another. The drawings are for purposes of illustration only and the dimensions, positions, order and relative sizes reflected in the drawings attached hereto may vary.
The foregoing discussion has been presented for purposes of illustration and description and is not intended to limit the disclosure to the form or forms disclosed herein. For example, various features of the disclosure are grouped together in one or more aspects, embodiments, or configurations for the purpose of streamlining the disclosure. However, it should be understood that various features of the certain aspects, embodiments, or configurations of the disclosure may be combined in alternate aspects, embodiments, or configurations. In methodologies directly or indirectly set forth herein, various steps and operations are described in one possible order of operation, but those skilled in the art will recognize that steps and operations may be rearranged, replaced, or eliminated or have other steps inserted without necessarily departing from the spirit and scope of the present disclosure. Moreover, the following claims are hereby incorporated into this Detailed Description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure.
This application claims the benefit of U.S. provisional patent application No. 61/708,440, filed Oct. 1, 2012, and U.S. provisional patent application No. 61/734,308, filed Dec. 6, 2012, the entire disclosures of which are hereby incorporated by reference herein.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US13/62692 | 9/30/2013 | WO | 00

Number | Date | Country
---|---|---
61708440 | Oct 2012 | US
61734308 | Dec 2012 | US