This invention relates to video systems. More particularly, the present invention relates to a video system that tracks and follows a target to collect dynamic video data.
Digital communication has become commonplace due to the speed and ease with which digital data and information can be transmitted between local and remote devices. Current digital communications systems, however, provide an impersonal and static interactive user experience.
On one end of the communication spectrum is “texting,” which includes text messaging and e-mailing. Texting and e-mailing are impersonal and devoid of expression but do provide quick and easy ways to convey information. On the other end of the communication spectrum are “meetings,” or face-to-face communications, which provide the most personal and expressive communication experience. However, meetings are not always convenient and in some cases are impossible. With the increased bandwidth and transmission speed of networks (internet, intranet and local area networks), video communication has increasingly been filling the void between texting or e-mailing and meetings.
For example, there are now several services that provide live-stream video through personal computers or cell phones. Internet-accessible video files that are posted (stored) on remote servers have become a commonplace method for distributing information to large audiences. These video systems do allow a greater amount of information to be disseminated and do allow a more personal and interactive experience. However, these video systems still do not provide a dynamic video experience.
Prior art video systems include surveillance video systems, including drone surveillance systems, with static or pivoting video cameras operated remotely using a controller to document and record subjects or targets. Action video systems, including hand-held cameras, head-mounted cameras and/or other portable devices with video capabilities, are used by an operator to document and record subjects or targets. Also, most desktop computer systems are now equipped with a video camera or include the capability to attach a video camera. Some of the video systems that are currently available require that the operator follow or track subjects or targets by physically moving a video capturing device or by moving a video capturing device with a remote control. Other video systems require that the subject or target be placed in a fixed or static location in front of a viewing field of the video capturing device.
For the purpose of this application, the terms below are ascribed the following meaning:
1) Mirroring means that two or more video screens are showing or displaying substantially the same representation of video data, usually originating from the same source.
2) Pushing is a process of transferring video data from one device to a video screen of another device.
3) Streaming means to display a representation of video data on a video screen from a video capturing device in real-time as the video data is being captured, within the limits of data transfer speeds for a given system.
4) Recording means to temporarily or permanently store video data from a video capturing device on a memory device.
Preferably, the present invention is directed to a video system that automatically follows or tracks a subject or target, once the subject or target has been selected, with a “hands-off” video capturing device. The system of the present invention seeks to expand the video experience by providing dynamic self-video capability. In the system of the present invention, video data that is captured with a video capturing device is shared between remote users, live-streamed to or between remote users, pushed from a video capturing device to one or more remote or local video screens or televisions, mirrored from a video capturing device to one or more remote or local video screens or televisions, recorded or stored on a local memory device or remote server, or any combination thereof.
The system of the present invention includes a robotic pod for coupling to a video capturing device, such as a web-camera, a smart phone or any device with video capturing capabilities. The robotic pod and the video capturing device are collectively referred to herein as a video robot or video unit. The robotic pod includes a servo-motor or any other suitable drive mechanism for automatically moving a coupled video capturing device to collect video data corresponding to dynamic or changing locations of a subject, object or person (hereafter, target) as the target moves through a space, such as a room. In other words, the system automatically changes the viewing field of the video capturing device by physically moving the video capturing device, or a portion thereof (such as a lens), to new positions in order to capture video data of the target as the target moves through the space.
In some embodiments of the invention, a base portion of the robotic pod remains substantially stationary and the drive mechanism moves or rotates the video device and/or its corresponding lens. In other embodiments of the invention, the robotic pod is also configured to move or rotate. Regardless of how the video capturing device follows the target, the system includes sensor technology for sensing locations of the target within a space and then causes or instructs the video capturing device to collect video data corresponding to the locations of the target within that space. Preferably, the system is capable of following the target such that the target is within the viewing field of the video capturing device with an error of 30 degrees or less from the center of the viewing field of the video capturing device.
In accordance with the embodiments of the invention, the sensor technology (one or more sensors, one or more micro-processors and corresponding software) locks onto and/or identifies the target being videoed and automatically moves the video capturing device to follow the motions or movements of the target within the viewing field of the video capturing device as the target moves through the space. For example, the robotic pod includes a receiving sensor and the target is equipped with, carries or wears a device with a transmitting sensor. The transmitting sensor can be any sensor in a smart phone, a clip-on device, a smart watch, a remote control device, a heads-up display (e.g., Google Glass) or a Bluetooth head-set, to name a few. The transmitting sensor or sensors and the receiving sensor or sensors are radio sensors, short-wavelength microwave (Bluetooth) sensors, infrared sensors, acoustic sensors, optical sensors, radio frequency identification (RFID) sensors or any other suitable sensors or combination of sensors that allow the system to track the target and move or adjust the field of view of the video capturing device, for example via the robotic pod, to collect dynamic video data as the target moves through a space.
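The tracking behavior described above can be illustrated as a simple control loop that converts a received location signal into a pan angle for the drive mechanism. This is a minimal sketch, not part of the specification; the function names, coordinate convention and per-update servo limit are hypothetical:

```python
import math

def bearing_to_target(pod_xy, target_xy):
    """Angle (degrees) from the robotic pod to the target, measured from the x-axis."""
    dx = target_xy[0] - pod_xy[0]
    dy = target_xy[1] - pod_xy[1]
    return math.degrees(math.atan2(dy, dx))

def track_step(current_pan, pod_xy, target_xy, max_step=15.0):
    """One control update: rotate the pan angle toward the target's reported
    location, limited to max_step degrees per update (a hypothetical servo limit)."""
    error = bearing_to_target(pod_xy, target_xy) - current_pan
    # Normalize the error to [-180, 180) so the pod always turns the short way.
    error = (error + 180.0) % 360.0 - 180.0
    step = max(-max_step, min(max_step, error))
    return current_pan + step
```

Repeating `track_step` as each location signal 115 arrives converges the pan angle onto the target's bearing, keeping the target near the center of the viewing field.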
The sensor technology is hosted in the robotic pod, the video capturing device, an external sensing unit and/or combinations thereof. Preferably, the video capturing device includes a video screen for displaying the video data being collected by the video capturing device and/or other video data transmitted, for example, over the internet. In addition, the system is configured to transmit and display (push and/or mirror) the video data being collected to a peripheral screen, such as a flat-screen TV monitor or computer monitor, using, for example, a wireless transmitter and receiver (Wi-Fi). The system of the present invention is particularly well suited for automated capturing of short-range (within 50 meters) video of a target within a mobile viewing field of the video capturing device. The system is capable of being adapted to collect dynamic video data from any suitable video capturing device including, but not limited to, a video camera, a smart phone, a web camera and a head-mounted camera.
The video system 100 of the present invention includes a video capturing device 101 that is coupled to a robotic pod 103 (video robot 102) through, for example, a cradle. In accordance with the embodiments of the invention, the robotic pod 103 is configured to power and/or charge the video capturing device 101 through a battery 109 and/or a power cord 107. The robotic pod 103 includes a servo-motor or stepper motor 119 for rotating or moving the video capturing device 101, or a portion thereof, in a circular motion represented by the arrow 131 and/or in any direction as indicated by the arrows 133, such that the viewing field of the video capturing device 101 follows a target 113′ as the target 113′ moves through a space. The robotic pod 103 includes, for example, wheels 139 and 139′ that move the robotic pod 103 and the video capturing device 101 along a surface, and/or the servo-motor or stepper motor 119 moves the video capturing device 101 while the robotic pod 103 remains stationary.
The robotic pod 103 includes a receiving sensor 113 for communicating with a target 113′ and a micro-processor with memory 117 programmed with software configured to instruct the servo-motor 119 to move the video capturing device 101, and/or a portion thereof, to track and follow locations of the target 113′ being videoed. The video capturing device 101 includes, for example, a smart phone with a screen 125 for displaying a representation of video data being captured by the video capturing device 101. The video capturing device 101 includes at least one camera 121 and can also include additional sensors 123 and/or software for instructing the servo-motor or stepper motor 119 where to position and re-position the video capturing device 101, such that the target 113′ remains in a field of view of the video capturing device 101 as the target 113′ moves through the space.
In accordance with the embodiments of the invention, the target 113′ includes a transmitting sensor that sends positioning or location signals 115 to the receiving sensor 113 and updates the micro-processor 117 with the current location of the target 113′ being videoed by the video capturing device 101. The target 113′ can also include a remote control for controlling the video capturing device 101 to change a position and/or size of the field of view (zoom in and zoom out) of the video capturing device 101.
In further embodiments of the invention the video system 100 (
In operation, multiple users are capable of video conferencing while moving, and each user is capable of seeing the other users even when their backs are facing their respective video capturing devices. Also, because the head-sets 500 and/or heads-up displays 315 transmit sound directly to an ear of each user and receive voice data through a microphone near the mouth of each user, the audio portion of the video data streamed, transmitted, received or recorded remains substantially constant as users move around during the video conferencing.
The video capturing unit 503 includes a housing 506, a camera 507, a servo-motor 505, a processor unit (computer) 519 with memory and a receiver 517, such as described above. In operation, the sensing unit 501 transmits location data, location signals or a version thereof to the video capturing unit 503 via the transmitter 523 or cord 526. The receiver 517 receives the location data, location signals or version thereof and communicates the location data or location signals, or a version thereof, to the processor unit 519. The processor unit 519 instructs the servo-motor 505 to move a field of view of the camera 507 in any number of directions, represented by the arrows 511 and 513, such that the target remains within the field of view of the camera 507 as the target moves through the two-dimensional or three-dimensional sensing field or sensing grid. In accordance with the embodiments of the invention, any portion of the software to operate the video capturing unit 503 is supported or hosted by the processor unit 525 of the sensing unit 501 or the processor unit 519 of the video capturing unit 503.
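The mapping from a target position in the sensing field to servo commands can be sketched as a small geometric computation. This is an illustrative assumption, not the specification's implementation; the coordinate frame and function name are hypothetical:

```python
import math

def aim_camera(cam_xyz, target_xyz):
    """Hypothetical mapping from a target position in the three-dimensional
    sensing grid to pan and tilt angles (degrees) for the servo-motor 505."""
    dx = target_xyz[0] - cam_xyz[0]
    dy = target_xyz[1] - cam_xyz[1]
    dz = target_xyz[2] - cam_xyz[2]
    pan = math.degrees(math.atan2(dy, dx))        # rotation in the horizontal plane
    horizontal = math.hypot(dx, dy)               # ground distance to the target
    tilt = math.degrees(math.atan2(dz, horizontal))  # elevation above the horizon
    return pan, tilt
```

Each time the receiver 517 delivers a new location, recomputing these two angles gives the processor unit 519 the next position to command.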
Also, as described above, the housing 506 of the video capturing unit 503 is moved by the servo-motor 505, the camera 507 is moved by the servo-motor 505 or a lens of the camera 507 is moved by the servo-motor 505. In any case, the field of view of the video capturing unit 503 adjusts to remain on and/or stay in focus with the target. It also should be noted that the video system 500 of the present invention can include auto-focus features and auto-calibration features that allow the video system 500 to run an initial set-up mode to calibrate starting locations of the sensing unit 501, the video capturing unit 503 and the target that is being videoed. The video data captured by the video capturing unit 503 is live-streamed to or between remote users, pushed from the video capturing unit 503 to one or more remote or local video screens or televisions, mirrored from the video capturing unit 503 to one or more remote or local video screens or televisions, recorded and stored in a remote memory device, the memory of the processor unit 525 of the sensing unit 501 or the memory of the processor unit 519 of the video capturing unit 503, or any combination thereof.
For example, the video units 701 and 703 use a continuous auto-focus feature and/or recognition software to lock onto a target, and the video units 701 and 703 include a mechanism for moving themselves, a camera or a portion thereof to keep the target in the field of view of the video units 701 and 703. In operation, the video units 701 and 703 take an initial image and, based on an analysis of the initial image, a processor unit coupled to the video units 701 and 703 then determines a set of identifiers. The processor unit in combination with a sensor (which can be an imaging sensor of the camera) then uses these identifiers to move the field of view of the video capturing units of the video units 701 and 703 to follow the target as the target moves through a space or between the rooms 705 and 707. Alternatively, or in addition to computing identifiers and using identifiers to follow the target, the processor unit of the video units 701 and 703 continuously samples portions of the video data stream and, based on comparisons of the samples, adjusts the field of view of the video capturing units, such that the target stays within the field of view of the video capturing units as the target moves through the space or between the rooms 705 and 707.
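The sample-comparison alternative described above can be illustrated with a minimal frame-differencing sketch: compare two sampled grayscale frames, locate the centroid of the changed pixels, and nudge the field of view toward it. This is a hedged illustration under assumed data structures (frames as lists of lists of intensity values), not the patented recognition software:

```python
def motion_centroid(prev_frame, frame, threshold=10):
    """Compare two sampled grayscale frames and return the (row, col) centroid
    of pixels that changed by more than threshold, or None if nothing moved."""
    row_sum, col_sum, count = 0, 0, 0
    for r, (prev_row, cur_row) in enumerate(zip(prev_frame, frame)):
        for c, (p, v) in enumerate(zip(prev_row, cur_row)):
            if abs(v - p) > threshold:
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        return None
    return row_sum / count, col_sum / count

def adjust_field_of_view(center, frame_shape):
    """Offset (rows, cols) to nudge the field of view so the motion
    centroid moves back toward the frame center."""
    dr = center[0] - frame_shape[0] / 2
    dc = center[1] - frame_shape[1] / 2
    return dr, dc
```

Running this on successive samples of the video stream yields a steering offset each cycle, which a drive mechanism could translate into pan and tilt corrections.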
In accordance with this embodiment, the video capturing device 1031 includes a wireless transmitter/receiver 1033 and a camera 1035 for capturing the local video data and/or receiving video data transmitted from one or more video capturing devices at remote locations (not shown). Representations of video data 1001 of the video data captured and/or received by the video capturing device 1031 can also be displayed on a screen of the video capturing device 1031, and the images displayed on the one or more video screens 1005 and 1007 can be mirrored images or partial image representations of the video data 1001 displayed on the screen of the video capturing device 1031.
Preferably, the video capturing device 1031 includes a user interface 1009 that is accessible from the screen of the video capturing device 1031, or a portion thereof, such that a user can select which of the one or more video screens or televisions 1005 and 1007, represented by images 1001′ and 1003′, the video data being captured or received by the video capturing device 1031 is displayed on. In further embodiments of the invention, the one or more video screens or televisions 1005 and 1007 are equipped with a sensor or sensor technology 1041 and 1043, for example, image recognition technology, such that the sensor or sensor technology 1041 and 1043 senses locations of the user and/or the video capturing device 1031 and displays representations of the video data captured and/or received by the video capturing device 1031 on the one or more video screens or televisions 1005 and 1007 corresponding to nearby locations of the user and/or the video capturing device 1031.
The present invention has been described in terms of specific embodiments incorporating details to facilitate the understanding of the principles of construction and operation of the invention. As such, references herein to specific embodiments and details thereof are not intended to limit the scope of the claims appended hereto. It will be apparent to those skilled in the art that modifications can be made in the embodiments chosen for illustration without departing from the spirit and scope of the invention.
This patent application claims priority under 35 U.S.C. 119(e) of the U.S. Provisional Patent Application Ser. No. 61/964,900 filed Jan. 17, 2014, and titled “SYSTEM FOR COLLECTING LIVE STREAM VIDEO DATA”, the U.S. Provisional Patent Application Ser. No. 61/965,508 filed Feb. 3, 2014, and titled “SYSTEM FOR COLLECTING LIVE STREAM VIDEO DATA OR RECORDING VIDEO DATA”, the U.S. Provisional Patent Application Ser. No. 61/966,027 filed Feb. 14, 2014, and titled “SYSTEM FOR COLLECTING LIVE STREAM VIDEO DATA OR RECORDING VIDEO DATA”. The U.S. Provisional Patent Application Ser. Nos. 61/964,900 filed Jan. 17, 2014, 61/965,508 filed Feb. 3, 2014, and 61/966,027 filed Feb. 14, 2014 are all hereby incorporated by reference.
Number | Date | Country
---|---|---
61964900 | Jan 2014 | US
61965508 | Feb 2014 | US
61966027 | Feb 2014 | US