Traditionally, video has been considered a linear medium, with time being the only variable controlling the content. The introduction of video streaming over the internet has added a further variable: alternative quality renditions of the same footage, for example, video at different resolutions.
According to aspects of the present disclosure, a system for generating aggregated data associated with an event is provided. The system includes at least one relay server, at least one communications server, and a processor in communication with at least one of the at least one relay server and the at least one communications server via a network. The processor is configured to receive first video data and first location data associated with a first device from the at least one relay server, determine a first location of the first device based on the first location data, and generate a first event area associated with the first device based on at least one of the first location and a predetermined event location. The processor is also configured to receive second video data and second location data associated with a second device from the at least one relay server, determine a second location of the second device based on the second location data, generate a second event area associated with the second device based on the second location, determine that the first event area and the second event area overlap, merge the first event area and the second event area into a combined event area and link the second video data with the first video data, and generate the aggregated data in response to the linking of the first video data and the second video data. The aggregated data includes the first video data and the second video data.
According to aspects of the present disclosure, a method for generating aggregated data associated with an event is provided. The method includes, by a processor, receiving first video data and first location data from a first device, determining a first location of the first device based on the first location data, and generating a first event area associated with the first device based on at least one of the first location and a predetermined event location. The method also includes receiving second video data and second location data from a second device, determining a second location of the second device based on the second location data, generating a second event area associated with the second device based on the second location, determining that the first event area and the second event area overlap, merging the first event area and the second event area into a combined event area and linking the second video data with the first video data, and generating the aggregated data in response to the linking of the first video data and the second video data. The aggregated data includes the first video data and the second video data.
According to aspects of the present disclosure, a non-transitory computer-readable medium having stored thereon sequences of instructions is provided. The medium has sequences of instructions which, when executed by a processor, cause the processor to receive first video data and first location data from a first device, determine a first location of the first device based on the first location data, generate a first event area associated with the first device based on at least one of the first location and a predetermined event location, receive second video data and second location data from a second device, determine a second location of the second device based on the second location data, generate a second event area associated with the second device based on the second location, determine that the first event area and the second event area overlap, merge the first event area and the second event area into a combined event area and link the second video data with the first video data, and generate the aggregated data in response to the linking of the first video data and the second video data. The aggregated data includes the first video data and the second video data.
According to aspects of the present disclosure, a method for generating aggregated data associated with an event is provided. The method includes, by a processor, receiving first video data and first location data from a first device, determining a first location of the first device based on the first location data, determining a region associated with the first device based on the first location, receiving second video data and second location data from a second device, determining a second location of the second device based on the second location data, determining that the second location is within the region, associating the second video data with the first video data based on the determination that the second location is within the region, and in response to the association of the first video data and the second video data, generating the aggregated data, wherein the aggregated data includes the first video data and the second video data.
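By way of a non-limiting illustration, the overlap-and-merge logic recited above can be sketched briefly. This is a minimal sketch, not the claimed implementation: it assumes each event area is a circle of a predetermined radius around a device's reported GPS location, and all names (`haversine_m`, `areas_overlap`, `aggregate`, `EVENT_AREA_RADIUS_M`) are hypothetical.

```python
import math

# Hypothetical radius, in meters, of the predetermined area surrounding a device.
EVENT_AREA_RADIUS_M = 100.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def areas_overlap(center_a, center_b, radius_m=EVENT_AREA_RADIUS_M):
    """Two circular event areas overlap when the distance between their
    centers is less than the sum of their radii."""
    return haversine_m(*center_a, *center_b) < 2 * radius_m

def aggregate(first, second):
    """If the two event areas overlap, merge them into a combined event
    area and link the second video data with the first, producing the
    aggregated data; otherwise return None."""
    if areas_overlap(first["location"], second["location"]):
        return {
            "centers": [first["location"], second["location"]],
            "video_data": [first["video"], second["video"]],  # aggregated data
        }
    return None
```

The combined event area is represented here simply as the set of member centers; any geometry (union of circles, bounding polygon) could stand in for it.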
The foregoing aspects and other features of the disclosed embodiments are explained in the following description, taken in connection with the accompanying drawings, wherein:
The embodiments described herein disclose an adaptive streaming protocol that adds an additional layer of control to the user's experience, aggregating multiple broadcasts into a single manifest, allowing an event's location and boundaries to be determined from user streaming activity and device locations, and enabling optimal playback of multi-perspective events. In embodiments, the user is able to select one or more live (i.e., real time) video broadcasts.
As used herein, the term “device” includes, but is not limited to, any mobile device such as a smart phone, cellular phone, or wearable electronic device; any computer such as a laptop, desktop, tablet, or notebook; any entertainment system such as a television or gaming system; any electronic device configured to receive or transmit streaming audio or video; or any other suitable electronic device. The term “event area” means an area associated with an event based on the location(s) of one or more devices, determined by a processor, wherein the event area is a predetermined area surrounding a device or a combination of overlapping event areas surrounding a plurality of respective devices. The term “predetermined event location” means a location associated with an event known to be occurring or anticipated to occur, and having a predetermined location.
As will be described in more detail below, an exemplary system 100 may include a network of different types of servers, where each type of server may provide a service to provide user experiences to be described below. System 100 may allow a user to activate and/or accept a location/event confirmation in order to activate a live stream function on a user device. System 100 may tag or mark each live stream with location specific data, allowing proper grouping into “events.” System 100 may cluster users into events based on geo-location, such as by using a partitioning technique (as illustrated in the accompanying drawings).
System 100 may further be implemented to allow one or more users to review live and past events in full or highlight form. System 100 may allow the one or more users to review an event in its entirety by providing access to all of the live streams that were broadcast within the event. An event timeline is completed using timestamps of associated broadcasts of that event. A viewer is able to watch an event sequentially from start to finish through any of the broadcasts taken by various users. Viewers can select which broadcasts they would like to watch the event through. While watching a broadcast, the viewer is given access to image previews of other broadcasts happening during the same time period as the broadcast currently being watched. The viewer can then switch to different broadcasts by interacting with their associated image previews. Viewers can search through an event timeline. Broadcasts a user can access match the timeline of the overall event. System 100 may further be implemented to create highlight versions of events by editing together multiple segments of streams within any event. The streams and segments from each stream are selected by criteria such as the highest number of viewers and the peak time segments of the live streams. System 100 may execute an instruction to generate a compilation of a set of streams or segments based on a rating system, and/or based on a number of views by users.
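The highlight-selection criterion above can be illustrated with a short sketch, assuming each stream segment record carries a viewer count; the function and field names are illustrative, not part of the disclosure.

```python
def build_highlight(segments, max_segments=3):
    """Pick the most-viewed segments, then order them by start time so the
    resulting compilation follows the overall event timeline."""
    top = sorted(segments, key=lambda s: s["viewers"], reverse=True)[:max_segments]
    return sorted(top, key=lambda s: s["start"])
```

A rating-based compilation, also mentioned above, would differ only in the sort key (e.g., a rating field instead of a viewer count).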
Devices 140, 150, 160 may be controlled by users 141, 151, 161, respectively. In some examples, devices 140, 150, 160 may each be a cellular phone, a smart phone, a tablet, a computer, a laptop computer, a wearable electronic device, etc., that may include image capturing devices such as a camera or a video recorder. Devices 140, 150, 160 and/or users 141, 151, 161 may be located in locations 142, 152, 162, respectively. Locations 142, 152, 162, may be locations within a vicinity of an event location 102. Event location 102 may be a location where an event may occur. In some examples, event location 102 may be a location where an event may be currently occurring. Examples of events which may occur at event location 102 include, but are not limited to, sports games, street protests, concerts, crimes, etc.
Device 140 may be configured to capture one or more images, or videos, and generate video data 146 based on the captured images and/or captured videos. Device 150 may be configured to capture one or more images, or videos, and generate video data 156 based on the captured images and/or captured videos. Device 160 may be configured to capture one or more images, or videos, and generate video data 166 based on the captured images and/or captured videos. Devices 140, 150, 160 may be configured to send video data 146, 156, 166 to the at least one relay server, respectively. The at least one relay server 130 may be configured to receive video data 146, 156, 166, and in response, may store video data 146, 156, 166 in a relay memory 132. Relay memory 132 may be configured to be in communication with the at least one relay server 130, and may be configured to store a relay instruction 134. Relay instruction 134 may include instructions that may be executed by relay server 130 to facilitate implementations of system 100.
The at least one communication server 120 may be configured to monitor the at least one relay server 130. In some examples, the at least one communication server 120 may determine a capacity of relay memory 132, and based on the capacity of relay memory 132, determine whether to assign relay server 130 to devices 140, 150, 160. In an example, the at least one communications server 120 may determine that a first capacity of a first relay memory, configured to be in communication with a first relay server, is greater than a storage threshold. The at least one communication server 120 may further determine that a second capacity of a second relay memory, configured to be in communication with a second relay server, is less than the storage threshold. In response to the first capacity being greater than the storage threshold, and the second capacity being less than the storage threshold, the at least one communication server 120 may assign one or more devices among devices 140, 150, 160 to the second relay server.
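The capacity-threshold assignment rule described above can be sketched as follows. This is a simplified illustration in which a relay's "capacity" is represented as a single memory-usage figure; the function name and data shapes are assumptions, not from the disclosure.

```python
def assign_relay(relays, storage_threshold):
    """Return the id of the first relay whose memory usage is below the
    storage threshold, mirroring the assignment rule above: a relay whose
    usage exceeds the threshold is skipped in favor of one with headroom.

    `relays` maps a relay id to its current memory usage (illustrative).
    """
    for relay_id, used in relays.items():
        if used < storage_threshold:
            return relay_id
    return None  # no relay currently has capacity for a new device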
The at least one communication server 120 may be configured to be in communication with a communication memory 122, where communication memory 122 may be configured to store a communication instruction 124 and/or a database 126. The communication instruction 124 may include instructions effective to be executed by the at least one communication server 120 to facilitate implementation of system 100. Database 126 may include state data related to states of system 100, such as data effective to indicate assignments of relay servers to devices such as devices 140, 150, 160. In some examples, database 126 may include data effective to indicate one or more video data currently being processed, and/or one or more video data that were processed, by devices 140, 150, 160. In some examples, database 126 may include data effective to indicate communication links among the at least one relay server 130. The at least one communication server 120 may control, maintain, and/or modify data in database 126 in response to assigning the at least one relay server 130 to new devices, or in response to changes in assignments of relay server 130 to devices 140, 150, 160.
In an example, user 141 may be at a location 142, where location 142 may be within event location 102, or may be within a vicinity of event location 102. When user 141 is in location 142, user 141 may use device 140 to capture one or more images or videos to generate video data 146. Video data 146 may include images or videos corresponding to a perspective 144. The perspective 144 may relate to a position and/or angle in which user 141 views an event occurring at event location 102, through device 140, when user 141 is at location 142. In some examples, device 140 may generate location data 148, where location data 148 may be an indication of location 142. In some examples, device 140 may include a global positioning system (GPS) component that may be configured to generate location data 148. Device 140 may send video data 146 and location data 148 to the at least one relay server 130, and the at least one relay server 130 may receive video data 146 and location data 148.
Similarly, user 151 may be at a location 152, where location 152 may be within event location 102, within a vicinity of location 142, and/or may be within a vicinity of event location 102. When user 151 is in location 152, user 151 may use device 150 to capture one or more images or videos to generate video data 156. Video data 156 may include images or videos corresponding to a perspective 154. The perspective 154 may relate to a position and/or angle in which user 151 views an event occurring at event location 102, through device 150, when user 151 is at location 152. In some examples, device 150 may generate location data 158, where location data 158 may be an indication of location 152. In some examples, device 150 may include a global positioning system (GPS) component that may be configured to generate location data 158. Device 150 may send video data 156 and location data 158 to the at least one relay server 130, and the at least one relay server 130 may receive video data 156 and location data 158.
Similarly, user 161 may be at a location 162, where location 162 may be within event location 102, within a vicinity of locations 142, 152, and/or may be within a vicinity of event location 102. When user 161 is in location 162, user 161 may use device 160 to capture one or more images or videos to generate video data 166. Video data 166 may include images or videos corresponding to a perspective 164. The perspective 164 may relate to a position and/or angle in which user 161 views an event occurring at event location 102, through device 160, when user 161 is at location 162. In some examples, device 160 may generate location data 168, where location data 168 may be an indication of location 162. In some examples, device 160 may include a global positioning system (GPS) component that may be configured to generate location data 168. Device 160 may send video data 166 and location data 168 to the at least one relay server 130, and the at least one relay server 130 may receive video data 166 and location data 168. In some examples, the at least one relay server 130 may request location data 148, 158, 168 from devices 140, 150, 160 periodically.
The at least one relay server 130 may receive video data 146, 156, 166 and/or location data 148, 158, 168. The at least one relay server 130 may store video data 146, 156, 166 and/or location data 148, 158, 168 in relay memory 132. In some examples, the at least one relay server 130 may send video data 146, 156, 166 and/or location data 148, 158, 168 to processor 110. The processor 110 may receive video data 146, 156, 166 and/or location data 148, 158, 168, and in response, may store video data 146, 156, 166 and/or location data 148, 158, 168 in central memory 112.
In an example, based on location data 148, processor 110 may determine that device 140 is located at location 142. In response to determining that device 140 is located at location 142, processor 110 may compare location data 148 with one or more pieces of stored location data 114 that may be stored in central memory 112. Stored location data 114 may correspond to respective locations that may include event locations (such as event location 102), or locations of other devices. Processor 110 may determine a distance between location 142 and each respective location that corresponds to stored location data 114 based on the comparison of location data 148 with stored location data 114. Processor 110 may compare the determined distances with a threshold 116 that may be stored in central memory 112.
In response to a determined distance being less than threshold 116, the processor 110 may determine that location 142 is located within a vicinity of an existing event. The processor 110 may associate video data 146 with the existing event, and with other video data that are associated with the existing event. The processor 110 may generate aggregated data 118 based on video data 146 and the other video data associated with the existing event, where aggregated data 118 may include one or more pieces of video data associated with the existing event.
In response to a determined distance being greater than threshold 116, the processor 110 may determine that location 142 is outside of a vicinity of an existing event, generate an indication of a new event, and may associate the new event with device 140. In an example, the processor 110 may receive location data 158 after receipt of location data 148. The processor 110 may compare location data 158 with stored location data 114 to determine a distance difference between location 152 and the respective locations, including location 142, that correspond to stored location data 114. If the distance difference between location data 158 and location data 148 is less than threshold 116, the processor 110 may determine that location 152 is within a vicinity of an event associated with device 140. In response to the determination that location 152 is within a vicinity of the event associated with device 140, the processor 110 may associate video data 156 with video data 146, and may generate aggregated data 118 based on video data 146 and video data 156.
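The threshold-based grouping in the preceding two paragraphs amounts to a simple incremental clustering rule, sketched below. For simplicity the sketch treats locations as planar (x, y) coordinates; the function name, event-id scheme, and data shapes are illustrative assumptions.

```python
import math

def cluster_stream(location, events, threshold):
    """Assign an incoming stream's location to an existing event if it is
    within `threshold` of any member location (cf. threshold 116);
    otherwise generate an indication of a new event.

    `events` maps an event id to the list of member locations.
    """
    for event_id, members in events.items():
        if any(math.dist(location, m) < threshold for m in members):
            members.append(location)
            return event_id
    new_id = f"event-{len(events) + 1}"
    events[new_id] = [location]
    return new_id
```

Each call updates the event state in place, so successive streams from nearby devices accumulate into the same event, mirroring how processor 110 grows an event as new location data arrives.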
In some examples, a first relay server of the at least one relay server 130 may send one or more pieces of video data to a second relay server of the at least one relay server 130 in order to relay video data from a first device to a second device. For example, device 140 may be configured to send video data 146 to a first relay server, and device 150 may be configured to send video data 156 to a second relay server. User 151 may use device 150 to request to view a video associated with video data 146, such as by sending a request signal to the second relay server. The second relay server, in response to receipt of the request signal, may request video data 146 from the first relay server. The first relay server may send video data 146 to the second relay server such that the second relay server may send video data 146 to device 150. In some examples, the at least one relay server 130 may use a WebRTC implementation based on MEETECHO's JANUS GATEWAY, the IONIC browser platform, etc., to communicate with various web browsers and/or mobile platforms that may be executed by devices 140, 150, 160. In some examples, the at least one relay server 130 may implement a hypertext transfer protocol (HTTP) based media streaming communications protocol, such as HTTP live streaming (HLS), to communicate with various web browsers and/or mobile platforms that may be executed by devices 140, 150, 160. In some examples, the processor 110 may be a streaming engine configured to be in communication with the at least one relay server 130 and the at least one communication server 120, where the processor 110 may be configured to execute an application associated with system 100 on devices 140, 150, 160.
Among other benefits, a system in accordance with this disclosure may facilitate the streaming and viewing of multi-perspective live broadcasts, the grouping of streams into events, and the editing of archived streams into archived events. The system in accordance with this disclosure may serve as a platform where users may initiate broadcasts, view live or archived video content, and search for events by applying various filters as described above and/or illustrated in the accompanying drawings.
Users may make information requests to communication servers, which can serve a list of events, streams, and archived videos. The relay servers may process multimedia data across the platform, such as by saving and redistributing video and audio clips, originating from a broadcasting endpoint (e.g., a location of a user and/or user device) to other devices. One or more relay servers of the at least one relay server 130 may further accept and relay streams from other relay servers of the at least one relay server 130 such that, through daisy chaining and broadcaster-to-endpoint time synchronization, a single broadcaster can reach any number of viewers. The at least one communication server 120 may mediate between user devices, such as devices 140, 150, 160, and the at least one relay server 130 by, for example, generating authentication tokens to initiate communication between the user devices 140, 150, 160 and the at least one relay server 130. The at least one communication server 120 may further maintain an accurate system state, in an external data store, that describes the entire mapping between user devices 140, 150, 160, the at least one relay server 130, and streams. Optimization of the platform may be achieved by iteratively shifting streams between relay servers of the at least one relay server 130, consolidating or expanding streams across multiple relay servers of the at least one relay server 130, and moving endpoint connections around.
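The daisy-chaining property described above has a simple capacity consequence that can be illustrated with a sketch: if each relay can feed a fixed number of downstream connections, the chain depth needed to reach a given audience grows only logarithmically. The function name and model are illustrative assumptions, not from the disclosure.

```python
def required_depth(viewers, fanout):
    """Smallest relay-chain depth d such that fanout**d >= viewers
    (at least 1), under the simplifying assumption that every relay
    sustains exactly `fanout` downstream connections."""
    depth, reach = 1, fanout
    while reach < viewers:
        depth += 1
        reach *= fanout
    return depth
```

This is why a single broadcaster can, in principle, reach an arbitrarily large audience: each added tier of relays multiplies the reachable endpoints by the per-relay fanout.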
As shown in the accompanying drawings, system 100 may provide an interface 200 for display on a user device.
The interface 200 may include an event page 220, which provides users with access to recorded and live events being streamed. The event page 220 may include a number of display features or “buttons” providing a user with information or allowing a user to interact with the interface 200. For example, the event page 220 may include a location filter 221 by which users can filter streamed events based on whether they want a focus on what is happening in their local towns, nationally significant events, and even global news. The event page 220 may include an event card 222, wherein each card represents a single event and within each event users are able to see the title of the event, where the event is occurring, specific data on the event, and all the different streams of the event. The event page 220 may include event data 223, shown as a portion of the event card 222, which shows data to users related to the entire event, for example, showing the total runtime of the event, the total number of people streaming the event, and the number of people viewing the entire event.
The event page 220 may include a record button 224 allowing users to begin streaming and, further, showing a thumbnail or title of the event within the button if a user is within the physical location of an already existing event. The event page 220 may include a search button 225 allowing users to conduct an advanced search, by which they can search by typing the name of an event, or by a specific category of interest. The event page 220 may include a current stream button 226, which may show the user the most popular stream being viewed out of the entire event, along with an image of the actual stream so that a viewer can understand the perspective of the respective stream. The current stream button may also allow a user to view the live feed by tapping on the display image (i.e., the current stream button 226).
The event page 220 may include an event title and location button 227 displaying the title and location of the event being streamed by multiple users, wherein the title may be generated by the user who starts streaming in an area where no other event is occurring. Alternatively, the title may be automatically generated by accessing the location of the devices that are streaming and determining an appropriate title. Additionally, a user may tap the location to bring up a map with pins of the exact location of the streamers of that event. The event page 220 may include a next stream button 228, which may show the user the next most popular stream. Users may tap either an arrow or a circular thumbnail to rotate the button to the center of the card. This operation may be performed on both the left side as well as the right side of the card. Continuing to tap one side may continue to rotate-in other streams of lower popularity, while continuing to tap the opposite side may continue to rotate-in other streams of higher popularity.
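The rotation behavior of the next stream button 228 can be modeled with a short sketch: streams are ordered by popularity, and taps on either side of the card rotate toward less or more popular streams. The class and method names are illustrative assumptions, not part of the disclosure.

```python
class StreamCarousel:
    """Illustrative model of the stream carousel on event card 222."""

    def __init__(self, streams):
        # streams: list of (name, viewer_count); most popular is centered first
        self.streams = sorted(streams, key=lambda s: s[1], reverse=True)
        self.index = 0  # position of the currently centered stream

    def current(self):
        return self.streams[self.index][0]

    def next_less_popular(self):
        """Tap one side of the card: rotate in the next less popular stream."""
        if self.index < len(self.streams) - 1:
            self.index += 1
        return self.current()

    def next_more_popular(self):
        """Tap the opposite side: rotate back toward more popular streams."""
        if self.index > 0:
            self.index -= 1
        return self.current()
```

Continuing to tap one side walks down the popularity ranking until the least popular stream is reached; tapping the opposite side walks back up, matching the description above.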
Alternatively, if the processor 110 determines that device 150 (user 151) is outside of event area 140A, the processor 110 may generate a separate event area associated with device 150 based on location 152, as described above.
As shown in the accompanying drawings, a client application 800 may be executed on devices 140, 150, 160 and may be configured to communicate with at least one relay server 900.
A broadcaster 820 feature of the client application 800 may allow the client application 800 to connect, by an RTC connection for example, to the at least one relay server 900. The broadcasting user may, using the client application 800, record video/audio input from the mobile device 821, determine, via the connection with the at least one relay server 900 (through WebSockets, for example), a location range to group with nearby events, and send current video/audio input through WebRTC 822. The broadcaster 820 feature of the client application 800 may also allow the client application, via the connection with the at least one relay server 900, to display current live broadcasts 823, display current emotions being shared on a video stream 824, and display a current number of viewers for a current broadcast 825 on device 140, 150, 160.
The viewer 830 feature of the client application 800 provides a real-time connection to the relay servers allowing the user to watch video generated by broadcasting users. For example, the client application 800 may initiate a connection with the at least one relay server 900 through WebSockets, for example, and request a current stream 831. The client application 800 may then receive the requested stream through WebRTC from the at least one relay server 900 at 832, along with current emotions and a number of viewers, and display the total emotions for the video stream and icons assigning an emotion to the video stream 833, play the video stream, or audio from the video stream 834, and display the current number of viewers for the video stream 835. The client application 800 may also request other linked video streams in the event from the at least one relay server 900 or an optimizing server 1000 at 836, and display the other linked video streams in a navigation menu for other broadcasts in the same location-based event 837.
The main feed 840 feature of the client application 800 provides the user with information about live and existing broadcast feeds and permits the user to select feeds for viewing. For example, the client application 800 may request current events (for example, groups of linked broadcasts), as well as the thumbnails of broadcasts within the respective current event, from the information server 841, and display a list of current events with swipeable thumbnails pertaining to the broadcasts in that event 842, with an event title (or location if a title has not been determined), location, number of viewers, and a number of broadcasts 843. The client application 800 may further display a record button providing a user with access to a broadcast state 844, display a profile icon providing a user with access to the user's profile 845, display an options icon providing a user with access to client application 800 features, options, or filters 846, and display available filters for the events in the feed 847.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
This application claims priority to and the benefit of U.S. Provisional Patent Application No. 62/433,522, filed on Dec. 13, 2016, the disclosure of which is incorporated herein by reference in its entirety.