In today's online environment, users often want to record and view live events and generate content from the recorded live events, such as video content, audio content, pictures, and so on. Enabling users to view streaming data from live events, record the live events, and manage the resulting content, however, can present challenges for application developers in a web-based environment. For example, in the context of web browser applications, a web browser typically must call an external utility to record live events for the web browser. This can slow the recording process and increase the complexity of the application development process since a developer typically has to design the web browser to interface with the external utility.
In addition, many current computing devices include multiple recording devices, such as multiple video cameras. Recording utilities, however, typically only enable one instance of a particular type of recording device to be used at a time. For example, a computing device with two video cameras often cannot record video concurrently with both video cameras.
A further challenge to content management in an online environment exists in the upload of content to a web resource. For example, a user that wants to record a live event and upload the resulting content to a web resource typically must first record the live event via a local device and then upload the resulting content to the web resource. This increases the time required to complete the recording and upload process, which in turn ties up computing resources that could be used for other tasks.
This document describes techniques for browser-based recording of content. In at least some embodiments, a web browser is configured to interface with recording devices (e.g., a video camera, a microphone, a still-image camera, and so on) of a computing device to record live events and produce content files from the live events. Examples of content files include a video file, an audio file, an image file, and so on. The web browser can also upload the content files to a web-based resource, such as a web server.
In at least some embodiments, live events can be captured using multiple recording devices to produce one or more content files and to enable access to streaming content data. For example, a computing device can include multiple recording devices, such as multiple video cameras, multiple microphones, and so on. According to some embodiments, the techniques can enable one or more of the recording devices to be selected for capturing live events and, in some embodiments, can enable multiple recording devices to be used concurrently to record one or more live events.
Also in at least some embodiments, the techniques can enable concurrent or semi-concurrent recording of live events and upload of content data produced from the recording of the live events. For example, a first portion of a live event can be captured to produce a first portion of content data. While the first portion of content data is being uploaded, a second portion of the live event can be recorded to produce a second portion of content data. Thus, in at least some embodiments, a recording process and a content data upload process can run concurrently or semi-concurrently. This can enable content to be captured and uploaded in an efficient manner.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit of a reference number identifies the figure in which the reference number first appears. The use of the same reference number in different instances in the description and the figures may indicate similar or identical items.
Example Environment
As also illustrated in
The computing device 102 further includes computer-readable media 118, which includes or has access to a web browser 120. The web browser 120 includes a content module 122 that is configured to implement various techniques discussed herein for browser-based recording of content. In at least some embodiments, the content module 122 is configured to interface with the recording devices 110 to enable various types of live events to be recorded and converted to digital content.
Further illustrated in
Note that one or more of the entities shown in
Example Processes for Browser-Based Recording of Content
The following discussion describes example processes for browser-based recording of content. Aspects of these processes may be implemented in hardware, firmware, software, or a combination thereof. These processes are shown as sets of blocks that specify operations performed, such as through one or more entities of
Block 204 interfaces via the web browser with one or more recording devices to record one or more live events as one or more content files. In at least some implementations, the web browser can include an application programming interface (API) (e.g., as part of the content module 122) that can communicate with a recording device to initialize and coordinate the recording of live events. In at least some embodiments, the API can enable the web browser to communicate directly with a recording device (e.g., via a device driver) without requiring a user to interact with an external application or other utility. The recorded live event can then be stored as one or more content files on a computing device local to the web browser. In at least some embodiments, one or more content files can include multiple content files that can be stored separately or merged into a single content file.
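The merging of separately stored content files into a single content file mentioned above can be sketched as follows. This is an illustrative model only: the chunks here are plain byte arrays, whereas in an actual browser they would typically be Blob segments delivered by a recording interface such as the MediaRecorder API. The helper name is hypothetical.

```javascript
// Merge separately recorded content chunks into one content file.
// Each "chunk" is a Uint8Array standing in for a recorded segment.
function mergeChunks(chunks) {
  const total = chunks.reduce((n, c) => n + c.length, 0);
  const merged = new Uint8Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    merged.set(chunk, offset); // copy each chunk contiguously
    offset += chunk.length;
  }
  return merged;
}
```

The merged array could then be stored as a single local content file before upload.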
Block 206 uploads the one or more content files via the web browser to a remote resource. For example and with reference to
In an example implementation where the content file(s) include an image file, a uniform resource identifier (URI) can be generated for the image file (e.g., by the content module 122) and used to reference the image file. The URI can be used to set a source for an image tag (e.g., a hypertext markup language (HTML) <img> tag) and/or can be uploaded to a remote resource such as the web application 124. The remote resource can then use the URI to retrieve the image file.
In a further example implementation, where the content file(s) include a video file, a uniform resource locator (URL) can be generated for the video file (e.g., by the content module 122) and used to reference the video file. In at least some embodiments, the URL can be used as a source attribute for a video tag (e.g., an HTML <video> tag) and can be used to cause the video file to be played based on the video tag.
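The tag-wiring step described in the two implementations above can be sketched as follows. In a real browser the URI would typically come from `URL.createObjectURL(blob)` and the tag would be an actual `<img>` or `<video>` element; plain objects stand in here so the logic runs anywhere, and all names are illustrative.

```javascript
// Set a generated content URI as the source attribute of a tag-like
// object, mirroring how an HTML <img> or <video> tag would reference
// an image or video content file.
function setTagSource(tag, uri) {
  if (tag.kind !== "img" && tag.kind !== "video") {
    throw new Error("unsupported tag kind: " + tag.kind);
  }
  tag.src = uri; // the browser would fetch or play from this URI
  return tag;
}

const imageTag = setTagSource({ kind: "img" }, "blob:example-image-uri");
const videoTag = setTagSource({ kind: "video" }, "blob:example-video-url");
```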
According to at least some embodiments and in the context of recording video content, a number of invocable methods can affect the video recording process. For example, invoking a stop method (e.g., StoppableOperation.stop()) can cause video content that is being recorded to be finalized and returned in response to a success callback. Additionally, invoking a cancel method (e.g., StoppableOperation.cancel()) can cause video content that is being recorded to be discarded and can further cause a fail callback to be invoked. In at least some embodiments, if it is determined that a video content file is too large (e.g., during the recording process), all or part of the video content file can be discarded and a notification that the video recording process has stopped and/or failed can be sent.
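The stop/cancel semantics above can be modeled in plain JavaScript as follows. This is a behavioral sketch, not the actual browser API: stop() finalizes the recorded content and fires the success callback, while cancel() discards the content and fires the fail callback.

```javascript
// Model of a stoppable recording operation with success/fail callbacks.
function createStoppableOperation(onSuccess, onFail) {
  let chunks = [];
  return {
    record(chunk) { chunks.push(chunk); },
    stop() {                  // finalize content, invoke success callback
      const content = chunks.join("");
      chunks = [];
      onSuccess(content);
    },
    cancel() {                // discard content, invoke fail callback
      chunks = [];
      onFail(new Error("recording canceled"));
    },
  };
}
```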
Block 304 interfaces via the web browser with one or more recording devices to record the live event and to stream video data captured from the live event. For example, the web browser can communicate with one or more drivers for the recording devices to record the live event and to access a video data stream from the recording devices. Block 306 enables the streaming video data to be accessed while the live event is being recorded. In at least some embodiments, the web browser can generate tags (e.g., URLs) for the streaming video data and/or the recorded video data that enable each to be accessed.
Further to certain implementations, the web browser can enable a video data stream to be toggled on and off by a user and/or a remote resource, such as the web application 124. Thus, implementations enable a video stream of a live event to be turned off while the live event is being recorded without affecting the recording process. Additionally, implementations enable a process of recording a live event to be turned off without affecting access to streaming video data from the live event. Thus, streaming video data and recorded video data from a single recording device and/or multiple recording devices can be independently accessed and controlled.
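The independence described above can be illustrated with a minimal state model: the streaming state and the recording state of a capture session are toggled separately and never affect one another. The names are hypothetical stand-ins for a real capture pipeline.

```javascript
// A capture session where streaming and recording toggle independently.
function createCaptureSession() {
  const state = { streaming: true, recording: true };
  return {
    state,
    setStreaming(on) { state.streaming = on; },  // does not touch recording
    setRecording(on) { state.recording = on; },  // does not touch streaming
  };
}
```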
Block 406 determines whether or not to allow access to one or more of the multiple recording devices. In at least some embodiments, a remote resource such as the web application 124 can request access to one or more of the multiple recording devices. Responsive to this request, a user can be given the option (e.g., via a user interface) to allow or deny the access. In accordance with at least some implementations, a user can allow or deny access on a device-by-device basis. For example, if the remote resource is requesting access to multiple recording devices, the user can allow or deny access to each of the multiple recording devices individually. Thus, a user may allow access to a first device yet deny access to another. This can enable a user to be aware of recording events and to have more control over the user's own privacy. If access to the one or more of the multiple recording devices is not allowed (“No”), block 408 denies access to the one or more of the recording devices.
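The device-by-device access decision of blocks 406 and 408 can be sketched as a simple resolution step: a remote resource requests several recording devices, and the user's per-device answers are applied individually, so access to a first camera can be allowed while access to a second is denied. The decision map here is a stand-in for a permission user interface.

```javascript
// Resolve a remote resource's request for multiple recording devices
// against the user's per-device allow/deny decisions.
function resolveDeviceAccess(requestedDevices, userDecisions) {
  const granted = [];
  const denied = [];
  for (const device of requestedDevices) {
    (userDecisions[device] ? granted : denied).push(device);
  }
  return { granted, denied };
}

const result = resolveDeviceAccess(
  ["camera1", "camera2", "microphone1"],
  { camera1: true, camera2: false, microphone1: true }
);
```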
If access to the one or more of the multiple recording devices is allowed (“Yes”), block 410 receives an indication to use two or more of the multiple recording devices to record the one or more live events. For example, the indication can be received responsive to user selection of the two or more of the multiple recording devices via a user interface. As mentioned above, a single computing device can include the multiple recording devices. Thus, in at least some embodiments, the two or more of the multiple recording devices can include devices that are configured to record a single type of content, e.g., two or more video cameras, two or more microphones, two or more still-image cameras, and so on.
Block 412 records the one or more live events via the web browser using the two or more of the multiple recording devices concurrently to produce one or more content files. In at least some embodiments, the two or more of the multiple recording devices can record the one or more live events simultaneously. For example, envision a scenario where the two or more of the multiple recording devices are two video cameras and a single computing device includes the two video cameras. According to at least some embodiments, techniques discussed herein enable the two video cameras on the single computing device to be operated simultaneously to record video content. This scenario is not intended to be limiting, however, and the two or more of the multiple recording devices can include devices that are configured to record a variety of different content, such as audio content, still images, and so on.
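The selection step that precedes concurrent recording can be sketched as follows. In a browser, the device list would come from `navigator.mediaDevices.enumerateDevices()`, which reports kinds such as "videoinput" and "audioinput"; a static array stands in here, and the device identifiers are made up for the example.

```javascript
// From an enumerated device list, select every device of one media
// kind, e.g. both video cameras on a single computing device.
function selectDevicesByKind(devices, kind) {
  return devices.filter(d => d.kind === kind).map(d => d.deviceId);
}

const devices = [
  { deviceId: "cam-front", kind: "videoinput" },
  { deviceId: "cam-rear",  kind: "videoinput" },
  { deviceId: "mic-0",     kind: "audioinput" },
];
const cameras = selectDevicesByKind(devices, "videoinput");
// Each selected deviceId could then seed its own capture session, so
// that both cameras record concurrently.
```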
In at least some embodiments, the process 500 can upload the content data (e.g., the first portion and/or the second portion of content data) to the remote resource according to time-based and/or byte-based intervals. For example, in the context of a time-based interval, the process can automatically upload content data from the local device to the network resource according to a predetermined time interval, e.g., every 10 milliseconds. Thus, as the live event is recorded, portions of the content data that have not already been uploaded can be uploaded to the remote resource according to the time interval, e.g., at each expiration of the time interval.
In the context of a byte-based interval, when a particular portion of the content data is produced (e.g., 1 kilobyte), the process can automatically upload the particular portion of content data to the remote resource. Thus, in at least some embodiments, content data associated with the recorded live event can be uploaded to the remote resource according to a byte-wise basis.
Further, a progress callback function can be used to upload the content data to the remote resource. For example, the progress callback function can be called when a time interval has expired and/or a certain amount of content data (e.g., in bytes) has been produced. In at least some embodiments, the time interval can be user-specified, such as via the content module 122. Responsive to the progress callback function being called (e.g., by a local device and/or a remote resource), a portion of the content data can be uploaded from the local device to the remote resource.
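The byte-based interval and progress callback described above can be simulated as follows. As recorded bytes accumulate, each time the buffer reaches the threshold (e.g., 1 kilobyte) a callback "uploads" that portion while recording continues. The uploader here just collects portions; a real implementation would transmit them to the remote resource. All names are illustrative.

```javascript
// Upload recorded content data in byte-based portions: whenever the
// buffered data reaches thresholdBytes, hand that portion to the
// progress callback and keep recording.
function createChunkedUploader(thresholdBytes, onProgress) {
  let buffer = [];
  return {
    addData(bytes) {
      buffer.push(...bytes);
      while (buffer.length >= thresholdBytes) {       // byte-based interval
        onProgress(buffer.slice(0, thresholdBytes));  // "upload" this portion
        buffer = buffer.slice(thresholdBytes);
      }
    },
    flush() {                                         // final partial portion
      if (buffer.length > 0) onProgress(buffer);
      buffer = [];
    },
  };
}
```

A time-based interval works the same way, except the callback fires on a timer (e.g., every 10 milliseconds) rather than on a byte count.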
Returning to the example process 500, block 506 uploads the second portion of the content data to the remote resource while completing the recording of the live event to produce one or more additional portions of content data. The second portion of the content data can be uploaded in a time-based and/or byte-based manner, examples of which are discussed above. Block 508 uploads the one or more additional portions of the content data to the remote resource, also in a time-based and/or byte-based manner.
Real-time Content Streaming
In at least some embodiments, techniques discussed herein can be used to stream real-time content, such as live video and/or audio. For example, content that is captured via one or more of the recording devices 110 can be streamed for consumption as it is being captured. To enable real-time content to be streamed, techniques herein can represent a real-time content stream via a URL. For example, the content module 122 can generate a URL that can be used to access a real-time content stream that is generated by one or more of the recording devices 110. In at least some embodiments, the URL can be used in a video tag (e.g., an HTML video tag) that can enable the real-time content stream to be accessed when the video tag is accessed.
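The idea of representing a real-time content stream via a URL can be sketched with a simple registry: the content module registers a stream under a generated URL, and a video tag can then reference that URL. In an actual browser, `URL.createObjectURL` has historically served this role for stream objects; a plain map stands in here so the idea is runnable anywhere, and the URL scheme is hypothetical.

```javascript
// Register live streams under generated URLs so that tags (e.g., an
// HTML video tag) can reference a real-time content stream by URL.
const streamRegistry = new Map();
let nextStreamId = 0;

function registerStream(stream) {
  const url = "stream:" + nextStreamId++;  // hypothetical URL scheme
  streamRegistry.set(url, stream);
  return url;
}

function resolveStream(url) {
  return streamRegistry.get(url);  // what a <video src=url> would bind to
}
```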
Further to some embodiments, recorded content (e.g., video content, audio content, still images, and so on) and real-time content can be configured for simultaneous or semi-simultaneous consumption. For example, a webpage associated with the network resource 106 can include markup (e.g., HTML) that includes tags that link to recorded content and real-time content. When the webpage is accessed (e.g., via the web browser 120), the recorded content can be played back and the real-time content can be streamed simultaneously or semi-simultaneously. By enabling recorded content and real-time content to be represented via tags and/or URLs, both types of content can be easily embedded in documents (e.g., webpages) and accessed for consumption.
Consistent API
In at least some embodiments, techniques discussed herein can be implemented using one or more consistent application programming interfaces (APIs). For example, an API can enable access to content discussed herein via the recognition of calling conventions, tag names, function names, and/or method names that are consistent across multiple different applications and/or requesting entities. With reference to the real-time content streaming discussed above, a consistent API can enable access to a real-time content and recorded content via a tag (e.g., a video tag) such that both types of content can be accessed in a similar manner. Thus, a developer or other entity can use the same type of tag to access multiple types of content via the consistent API. With reference to the environment 100 discussed above, the consistent API can be embodied as one or more portions of the content module 122.
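A consistent API of this kind can be sketched as a single accessor through which recorded content and real-time content are both resolved with the same calling convention and the same kind of tag. The source table and names below are hypothetical.

```javascript
// One accessor and one tag type for both recorded and real-time
// content, so callers need not distinguish between the two.
const contentSources = {
  "content/recorded.webm": { type: "recorded" },
  "content/live":          { type: "realtime" },
};

function openContent(url) {
  const entry = contentSources[url];
  if (!entry) throw new Error("unknown content: " + url);
  // Same calling convention regardless of content type:
  return { tag: "video", src: url, type: entry.type };
}
```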
Attributes Parameter
In some cases, attribute parameters can be provided that enable recording attributes for the recording of live events to be controlled. Examples of recording attributes include bit rate, sample rate, frame rate, exposure, brightness, zoom, contrast, and so on. Thus, a web browser user interface can be configured to enable a user to control the recording attributes via input to the user interface. A user that wants a faster content upload, for example, can specify a lower content resolution (e.g., video resolution and/or image resolution) such that the content can be uploaded faster. Alternatively, the user can specify a higher content resolution that will, in some embodiments, increase the time required to upload the content.
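The resolution-versus-upload-time trade-off above can be illustrated by mapping a user's choice to a set of recording attributes. The preset values are invented for the example; in a browser they would typically feed capture constraints such as `{ video: { width, height, frameRate } }` passed to `getUserMedia`.

```javascript
// Map a user-selected resolution to illustrative recording attributes;
// a lower resolution yields smaller content and thus a faster upload.
const resolutionPresets = {
  low:  { width: 640,  height: 360,  frameRate: 24 },
  high: { width: 1920, height: 1080, frameRate: 30 },
};

function recordingAttributes(resolution) {
  const preset = resolutionPresets[resolution];
  if (!preset) throw new Error("unknown resolution: " + resolution);
  return preset;
}
```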
Conclusion
This document describes techniques for browser-based recording of content. In some embodiments, these techniques enable a web browser to interface with recording devices to record live events as content without requiring an external utility or application. Although the invention has been described in language specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed invention.