This application claims priority to European Patent Application Serial No. 15307113.9, filed on Dec. 22, 2015, entitled “Video Stream Splicing,” invented by Eric Le Bars et al., the disclosure of which is hereby incorporated by reference in its entirety for all purposes as if fully set forth herein.
Embodiments of the invention relate to the distribution of video content over a delivery network.
The amount of video content delivered and consumed over delivery networks has dramatically increased over time. This increase is due in part to VOD (Video on Demand) services, but also to the increasing number of live services combined with the increasing number of devices capable of accessing a delivery network. By way of example only, video content can notably be accessed from various kinds of terminals, such as smart phones, tablets, PCs, TVs, set top boxes, game consoles, and the like, which are connected through various types of delivery networks including broadcast, satellite, cellular, ADSL, and fibre.
Due to the large size of raw video, video content is generally accessed in compressed form. Consequently, video content is generally expressed using a video compression standard. The most widely used video standards belong to the “MPEG” (Moving Picture Experts Group) family, which notably comprises the MPEG-2, AVC (Advanced Video Coding, also called H.264), and HEVC (High Efficiency Video Coding, also called H.265) standards. Generally speaking, more recent formats are considered more advanced, as newer formats support more encoding features and/or provide better compression ratios. For example, the HEVC format is more recent and more advanced than AVC, which is itself more recent and more advanced than MPEG-2. HEVC therefore offers more encoding features and greater compression efficiency than AVC, and the same applies to AVC in relation to MPEG-2. These compression standards are block-based compression standards, as are the Google formats VP8, VP9, and VP10.
Even within the same video compression standard, video content can be encoded using very different options. Video content can be encoded at different bitrates. Video content can also be encoded using only I frames (I Frame standing for Intra Frame), I and P Frames (P standing for Predicted Frame), or I, P and B frames (B standing for Bi-directional frames). Generally speaking, the number of available encoding options increases with the complexity of the video standard.
Conventional video coding methods use three types of frames: I or intra-predicted frames, P or predicted frames, and B or bi-directional frames. I frames can be decoded independently. P frames reference other frames that have been previously displayed, and B frames reference other frames that have been displayed or have yet to be displayed. The use of reference frames involves predicting image blocks as a combination of blocks in reference frames, and encoding only the difference between a block in the current frame and the combination of blocks from the reference frames.
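By way of illustration only, the following minimal Python sketch models this prediction-and-residual principle. The uniform averaging of two references and the fixed 8-bit block model are simplifying assumptions for this sketch, not the behaviour of any particular standard:

```python
import numpy as np

def encode_block(current_block, ref_block_a, ref_block_b=None):
    """Predict a block from one or two reference blocks and return the
    residual to be encoded. One reference models P-style prediction;
    two references model B-style (bi-directional) prediction."""
    if ref_block_b is None:
        prediction = ref_block_a.astype(np.int16)      # P-style: one reference
    else:
        # B-style: combine a past and a future reference block
        prediction = (ref_block_a.astype(np.int16) + ref_block_b) // 2
    # Only this difference is encoded, not the block itself
    return current_block.astype(np.int16) - prediction

def decode_block(residual, ref_block_a, ref_block_b=None):
    """Inverse operation: reconstruct the block from references + residual."""
    if ref_block_b is None:
        prediction = ref_block_a.astype(np.int16)
    else:
        prediction = (ref_block_a.astype(np.int16) + ref_block_b) // 2
    return (prediction + residual).clip(0, 255).astype(np.uint8)
```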
A GOP is generally defined as the Group of Pictures between one I frame and the next I frame in encoding/decoding order. Closed GOP refers to any block based encoding scheme where the information to decode a GOP is self-contained. In other words, a closed GOP contains one I frame, P frames that only reference the I frame and P frames within the GOP, and B frames that only reference frames within the GOP. Thus, in a closed GOP there is no need to obtain any reference frame from a prior GOP to decode the current GOP. In common decoder implementations, switching between resolutions at some point in a stream requires that a “closed GOP” encoding scheme is used, since the first GOP after a resolution change must not require any information from the previous GOP in order to be correctly decoded.
By contrast, in the coding scheme called open GOP, the first B frames in a current GOP which are displayed before the I frame can reference frames from prior GOPs. Open GOP coding schemes are widely used for broadcasting applications because this coding scheme provides a better video quality for a given bit rate.
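The closed/open distinction can be illustrated with a toy reference-closure check. The data model below (frames as type/reference-index pairs, with a negative index denoting a frame of the previous GOP) is purely hypothetical:

```python
def is_closed_gop(frames):
    """Check that every frame in a GOP references only frames inside it.

    `frames` is a list of (frame_type, referenced_indices) tuples, where
    indices refer to positions within this GOP; a negative index denotes
    a frame from a previous GOP."""
    for frame_type, refs in frames:
        if frame_type == "I" and refs:
            return False          # I frames must be self-contained
        if any(r < 0 for r in refs):
            return False          # a reference escapes into the previous GOP
    return True

# Closed GOP: every frame references only frames of this GOP.
print(is_closed_gop([("I", []), ("P", [0]), ("B", [0, 1])]))    # True
# Open GOP: a leading B frame, displayed before the I frame, references
# index -1, i.e. the last frame of the previous GOP.
print(is_closed_gop([("B", [-1, 1]), ("I", []), ("P", [1])]))   # False
```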
Video delivery has continued to grow in popularity over a wide range of networks. Among the different networks on which video delivery may be performed, IP networks demand particular attention as video delivery represents a growing portion of the total capacity of IP networks.
Before primary video stream 110 can be combined with the material of secondary video stream 120, the primary video stream 110 is decoded at a decoder 211 to generate the decoded primary video stream 210. In many scenarios the secondary video stream 120 may be un-encoded digital video, for example in a Y′UV format such as ITU-R BT.656, and will therefore not necessarily need to be decoded, although this may still be necessary in other scenarios. In some cases it may be desirable to perform editing operations on the secondary video stream at editing unit 221, to add logos, station identifiers, or other graphical overlays so as to ensure a visual correspondence between images from the two streams. The decoded primary video stream 210 and edited secondary video stream 220 can then be directly combined by switching between the two video streams at the desired instant at switcher 130 to generate the combined video signal 140, which can then be re-encoded by an encoder 241 to generate an encoded, combined video stream 240. As shown, the encoded, combined video stream 240 comprises a series of encoded blocks, with the subject matter of the secondary video stream stretched across a number of blocks.
The continuous decoding of primary video signal 110 and re-encoding of the combined video signal 140 dictated by this approach calls for significant processing and storage capacity, and necessitates continuous power consumption. It furthermore introduces additional transmission latency. It is desired to avoid or mitigate these drawbacks.
Embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
Approaches for combining a first video stream with a second video stream are presented herein. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described herein. It will be apparent, however, that the embodiments of the invention described herein may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form or discussed at a high level in order to avoid unnecessarily obscuring teachings of embodiments of the invention.
Embodiments of the invention are directed towards a video splicer for combining a first video stream with a second video stream. The first video stream may be encoded in accordance with a block based coding algorithm. This video splicer may comprise a header processor adapted to detect a key position picture in each group of pictures (GOP) of the first video stream. The header processor may also determine the presentation time of each key position picture in each group of pictures (GOP) of the first video stream. This may be performed for every GOP in the first video stream or a selected sequence of GOPs in the first video stream.
The video splicer of an embodiment may additionally comprise a timing mapper that identifies a respective image in the second video stream having a presentation time corresponding to one key position picture of the first video stream. The video splicer may also include an encoder adapted to encode the second video stream in accordance with the block based coding algorithm. The encoder may encode the second video stream so that a new group of pictures is started with the respective image in the second video stream having a presentation time corresponding to the key position picture of the first video stream.
This video splicer of an embodiment may further include a switcher configured to switch between outputting the encoded first video stream or the second video stream. The switching may be triggered by a signal from the timing mapper that indicates the start of a new group of pictures in whichever stream is selected.
Additional details and embodiments will be discussed in greater detail below.
First video stream 310 may correspond to the primary video stream as described above, and is encoded in accordance with a block based coding algorithm, for example, as described above.
Header processor 331 is adapted to detect a key position picture in each group of pictures (GOP) of first video stream 310, for example, by inspecting metadata at different parts of first video stream 310 which can be accessed without decoding first video stream 310. For example, in an MPEG-2 signal the Access Unit (AU) contains information specifying the image type, whilst timing information is available in the Packetized Elementary Stream (PES) header. A person of ordinary skill in the art will appreciate that other metadata resources in first video stream 310 may provide equivalent information, and that in data streams encoded in accordance with an alternative block encoding algorithm, such as those mentioned elsewhere in this description, corresponding information will be available at other positions in the data stream. On the basis of the retrieved information it is possible to determine the presentation time of each key position picture in each group of pictures (GOP) of first video stream 310.
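For example, the 33-bit Presentation Time Stamp can be read directly from the PES header bytes without touching the compressed video payload. The following sketch assumes a well-formed MPEG-2 video PES packet whose optional header fields begin with a PTS, and omits all error handling:

```python
def parse_pes_pts(pes: bytes):
    """Extract the 33-bit PTS from an MPEG-2 PES packet header, without
    decoding any of the compressed video payload."""
    assert pes[0:3] == b"\x00\x00\x01", "not a PES start code"
    flags = pes[7]                   # PTS_DTS_flags occupy the two top bits
    if not flags & 0x80:
        return None                  # no PTS present in this packet
    p = pes[9:14]                    # five bytes encoding the 33-bit PTS
    pts = ((p[0] >> 1) & 0x07) << 30  # bits 32..30
    pts |= p[1] << 22                 # bits 29..22
    pts |= (p[2] >> 1) << 15          # bits 21..15
    pts |= p[3] << 7                  # bits 14..7
    pts |= p[4] >> 1                  # bits 6..0
    return pts                        # in 90 kHz clock ticks

# pts / 90000.0 then gives the presentation time in seconds.
```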
Timing mapper 332 is adapted to identify a respective image in second video stream 320 having a presentation time corresponding to each key position picture of first video stream 310.
Encoder 333 is adapted to encode second video stream 320 in accordance with the block based coding algorithm, whereby a new group of pictures (GOP) is started with each respective image in second video stream 320 having a presentation time corresponding to each key position picture of first video stream 310. The block based encoding algorithm employed by encoder 333 may be any block based algorithm, such as MPEG-2, MPEG-4/AVC, HEVC, or VPx.
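A minimal sketch of this coupling between timing information and the encoder might look as follows. Here `encode_frame` and its `force_idr` flag are hypothetical stand-ins for whatever interface a real MPEG-2, AVC, HEVC, or VPx encoder exposes for forcing a GOP boundary:

```python
def encode_with_forced_gop_starts(frames, key_pts, encode_frame):
    """Drive a block based encoder so that a new GOP begins exactly at the
    images whose presentation time matches a key position picture of the
    first stream.

    `frames`       -- iterable of (pts, image) pairs from the second stream
    `key_pts`      -- set of key position presentation times
    `encode_frame` -- hypothetical encoder callback taking a force_idr flag
    """
    for pts, image in frames:
        force_idr = pts in key_pts   # start a new GOP on a key timestamp
        yield encode_frame(image, pts, force_idr=force_idr)
```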
Switcher 334 is configured to switch between outputting the encoded first video stream 310 or the second video stream 320. The switching is triggered or coordinated by a signal sent from timing mapper 332. The signal indicates the start of a new group of pictures (GOP) in whichever stream is selected.
The key position picture in each group of pictures (GOP) of first video stream 310 detected by header processor 331 may be the first image in a particular sequence of images, or the last image in a particular sequence of images, or any other instant in first video stream 310 which can be reliably detected as having some particular significance from an encoding perspective. The key position picture may be the first picture in each group of pictures (GOP) with respect to playback timing. The key position picture may be the last picture in each group of pictures (GOP) with respect to playback timing.
In particular, the key position picture may be the first image in a group of pictures (GOP) as described above, which in many encoding schemes will be an I frame. Header processor 331 may read the GOP header as a means to determine the group of pictures (GOP) structure and timing.
The key position picture may be the first image in a group of pictures (GOP) in playback sequence, which in many encoding mechanisms differs from transmission sequence. This may imply a reconstitution of the playback sequence as discussed below. Header processor 331 may read the GOP header as a means to determine the group of pictures (GOP) structure and timing.
One of ordinary skill in the art shall appreciate that while the present detailed description is couched in the language of MPEG-2 encoding, the principles presented are directly adaptable to any other block based encoding algorithms. It will further be appreciated that within a given encoding structure there may be alternative sources of equivalent information. For example, in MPEG-2 encoding, a new GOP, or sequence of GOPs, may be detected with reference to GOP headers, sequence headers, and the like. In H.264 encoding, the Sequence Parameter Set (SPS) may also provide relevant information, for example. One of ordinary skill in the art will be able to identify the proper sources of information in a given video stream on the basis of the applicable encoding type and parameters.
On the basis of group of pictures (GOP) structure and timing information, it is then possible for header processor 331 to determine the presentation time of each key position picture such as each I frame in each group of pictures (GOP) of first video stream 310 as described in more detail hereafter. In general, this may be determined with reference to the Presentation Time Stamp of each image, as retrieved by header processor 331. If the intention is to identify the first image to be displayed, it is sufficient to select the image with the lowest Presentation Time Stamp value.
Timing mapper 332 is then able to determine a correspondence in image timing between first video stream 310 and second video stream 320 by reference to the respective timing information of each stream, e.g., with reference to the timing reference signals of un-encoded video under ITU-R BT.656 on one hand and the timing information from the MPEG headers extracted by header processor 331 on the other. ITU-R BT.656 is a common video format; however, it will be appreciated that the described approach is adaptable to any video format not incorporating block based encoding.
The correspondence in image timing identified by timing mapper 332 can be seen as tying together respective images of the two streams which are specified to be displayed at the same time. Since the image chosen as the basis of this correspondence in first video stream 310 is the key position picture, the image from second video stream 320 to which it is tied is the image of the second stream which is specified to be displayed at the same time as the key position picture of first video stream 310.
Timing mapper 332 outputs this timing or correspondence information to encoder 333, which also receives second video stream 320 as an input, so that the encoder 333 encodes second video stream 320 to produce encoded second video stream 340 with a new group of pictures (GOP) coinciding with the image from second video stream 320 to which a respective key position picture of first video stream 310 is tied. As a consequence, the encoded second video stream 340 output by the block based encoder 333 is synchronized with and has a matching GOP structure to the first video stream 310. In other words, every GOP in first video stream 310 has a matching GOP in encoded second video stream 340 of equal length and intended for display at the same time.
In some embodiments, this outputting of timing or correspondence information may occur only to coincide with the start of the group of pictures (GOP) at the moment of a specified switch time as described hereafter, so as to avoid degrading the performance of encoder 333.
It will be appreciated that since the intention is to combine first video stream 310 and second video stream 320 to constitute a single combined video stream 350, some of the GOPs in either or both video streams may be blank or contain dummy video information, and outside switching periods, whichever stream is not being retransmitted may be suspended in certain embodiments.
On the basis of this synchronization of the two video streams (namely streams 310 and 320), it becomes possible to constitute a new composite video stream by switching between the two synchronized streams at will. As long as the switch is made at the beginning of a new GOP, there is no danger that the image will be corrupted or degraded. For this reason, timing mapper 332 provides timing information to switcher 334. This timing information may be combined with additional programming information to implement the switch from one video stream to the other.
Switcher 334 may be directly controlled to switch between two signals at a specified instant, or be adapted to switch on receiving a specified image or frame in one stream or the other, or the system may associate metadata with a particular image or frame in one stream or the other to indicate that it represents a particular reference point with respect to switching activity. For example, the last compressed image in a segment may be tagged “last picture for segment.” This tagging may be achieved either by adding data to the video stream itself or by a “virtual tag,” where the information is associated with the corresponding part of the video stream by reference, e.g., using a pointer or the like.
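One possible model of such a “virtual tag”, where the label is associated with a frame by reference rather than written into the stream itself, is sketched below; the frame index 1499 is an arbitrary hypothetical example:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualTag:
    """A label tied to a frame by reference, not by rewriting the stream."""
    frame_index: int     # position of the tagged frame in the stream
    label: str           # e.g. "last picture for segment"

@dataclass
class TaggedStream:
    tags: list = field(default_factory=list)

    def tag(self, frame_index, label):
        self.tags.append(VirtualTag(frame_index, label))

    def labels_for(self, frame_index):
        return [t.label for t in self.tags if t.frame_index == frame_index]

stream = TaggedStream()
stream.tag(1499, "last picture for segment")   # hypothetical frame index
# The switcher can then consult labels_for(i) as each frame i is emitted.
```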
When timing information for the intended switch is available, it may be sufficient for timing mapper 332 to identify only one respective image in second video stream 320 having a presentation time corresponding to the key position picture of first video stream 310 closest to the switch time, and correspondingly for encoder 333 to encode second video stream 320 in accordance with the block based coding algorithm so as to start a new group of pictures (GOP) with the image in the second video stream 320 having a presentation time corresponding to that particular key position picture of first video stream 310.
The definition of the time of switching may come from a manual input via a Graphical User Interface, e.g., controlling the timing mapper 332, 432 (shown in
In accordance with some embodiments, it may be desirable to provide buffers on one input to switcher 334, or the other input to switcher 334, or both.
In certain embodiments, timing mapper 332 may tag the respective image in first video stream 310. To implement this approach, header processor 331 may log each incoming key position picture of first video stream 310 and its associated timing. Similarly, timing mapper 332 may log every incoming picture of second video stream 320 and its associated timing, and detect matching timing information between the two sets of logged data. With unsynchronized streams, for example, two timings within a specified window may constitute a match. When a match is detected, a pointer between the two log entries may be created. Encoder 333 may then encode second video stream 320 with reference to the log and associated pointers, so as to begin a new GOP corresponding to each tagged image. Similarly, switcher 334 may also be coupled to timing mapper 332 so as to switch between outputting encoded first video stream 310 or encoded second video stream 340 with reference to the tagging, so as to synchronize a switch from one signal to the other with the occurrence of a new GOP.
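The matching of the two sets of logged timings might be sketched as follows, assuming for simplicity that each log holds bare presentation times in seconds and that the match window is half a frame period at 25 fps:

```python
def match_timings(key_log, candidate_log, window=1.0 / 50):
    """Pair each logged key position timing of the first stream with the
    closest logged picture timing of the second stream, treating two
    timings within `window` seconds as a match (the streams need not be
    perfectly synchronized). Returns pointers as (i, j) index pairs."""
    pointers = []
    for i, key_time in enumerate(key_log):
        j, nearest = min(enumerate(candidate_log),
                         key=lambda entry: abs(entry[1] - key_time))
        if abs(nearest - key_time) <= window:
            pointers.append((i, j))   # log entry i ties to picture j
    return pointers
```

A real implementation would log richer records than bare timestamps, but the windowed nearest-neighbour pairing is the essence of the match.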
The approach described with respect to
Auxiliary block based decoder 460 receives first video stream 310 as a data input. Auxiliary block based encoder 461 receives the output of the auxiliary block based decoder 460 as a data input. Timing mapper 432 of
Header processor 331 receives a transition time at which switcher 434 is to switch between outputting the encoded first video stream 310 and second video stream 340, and determines whether the transition time coincides with the start of a new group of pictures (GOP) in first video stream 310.
Auxiliary block based decoder 460 is configured to decode the group of pictures (GOP) of first video stream 310 during which the transition time occurs. Auxiliary block based decoder 460 may continually decode all GOPs of first video stream 310, or merely those coinciding with a transition time. Auxiliary block based encoder 461, meanwhile, is configured to re-encode the group of pictures (GOP) during which the transition time occurs, as output by the auxiliary block based decoder 460, as a first split group of pictures (GOP) and a second split group of pictures (GOP). The first split group of pictures (GOP) ends and the second split group of pictures (GOP) starts at the specified transition time. The constitution of these two split GOPs is determined with respect to information provided by timing mapper 432, such as the time of the transition, the frame number of the last frame in the first split GOP, the number of the first frame in the second split GOP, and so on. Accordingly, the output 470 of auxiliary block based encoder 461 corresponds to a very similar video sequence to that of first video stream 310, but having a different GOP structure, where the transition between two GOPs has been deliberately imposed so as to coincide with the transition time.
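In outline, the split imposed by the auxiliary encoder amounts to partitioning the decoded pictures around the transition time, as in the following sketch, in which pictures are modelled as presentation-time/image pairs:

```python
def split_gop_at(decoded_gop, transition_time):
    """Split one decoded GOP into two at a transition time, so the first
    split GOP ends and the second starts exactly on the transition.

    `decoded_gop` is a list of (pts, image) pairs in presentation order;
    each half would then be handed to the auxiliary encoder to be
    re-encoded as its own closed GOP. Illustrative only."""
    first_split = [(pts, img) for pts, img in decoded_gop
                   if pts < transition_time]
    second_split = [(pts, img) for pts, img in decoded_gop
                    if pts >= transition_time]
    return first_split, second_split
```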
As shown in
As a further development of the approach described with reference to
Either way, at least the group of pictures (GOP) of first video stream 310 during which the transition time occurs is decoded, and re-encoded as a first split group of pictures and a second split group of pictures, where the first split group of pictures (GOP) ends and the second split group of pictures (GOP) starts at the specified transition time.
As shown in
As a further development of the approach described with reference to
Either way, at least the group of pictures (GOP) of first video stream 310 during which the transition time occurs is decoded, and re-encoded as a first split group of pictures (GOP) and a second split group of pictures (GOP), wherein the first split group of pictures (GOP) ends and the second split group of pictures (GOP) starts at the specified transition time.
The frame rates of the two video streams may or may not be equal; when they are equal, there is always a one-to-one mapping between image timings in the two video streams. In a case where the video streams are at different rates, or are at the same rate but not synchronized, encoder 333 may be adapted to adjust timing so as to bring the signals into synchronization on the basis of information received from timing mapper 432. In many scenarios, it will be satisfactory to simply select the frame, from whichever stream is to be switched to, with the closest timing to that specified for the switch.
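This closest-frame policy reduces to simple arithmetic on the frame period, as sketched below; for instance, at 29.97 fps the frame nearest a requested switch at t = 1.000 s lies at roughly t = 1.001 s:

```python
def nearest_frame_time(switch_time, frame_rate, first_pts=0.0):
    """For a stream at `frame_rate` frames per second whose first frame is
    presented at `first_pts`, return the presentation time of the frame
    closest to the requested switch time."""
    period = 1.0 / frame_rate
    n = round((switch_time - first_pts) / period)   # nearest frame index
    return first_pts + n * period

print(nearest_frame_time(1.000, 25))      # 1.0     (25 fps stream)
print(nearest_frame_time(1.000, 29.97))   # ~1.001  (29.97 fps stream)
```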
In some block based encoding mechanisms, the order of playback may differ from the order of transmission of individual picture frames in the video stream. If the key position picture is the first picture to be displayed of a new GOP, this may not correspond to the first transmitted picture of that GOP. For this reason, depending on the block based encoding method used, and the key position picture chosen, it may be necessary to reconstruct the group of pictures in playback sequence in order to identify the key position picture's timing. Accordingly, header processor 331 may be adapted to decode headers of second video stream 320 and reconstitute the playback order of the images of second video stream 320 to determine a playback timing for each image.
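Once presentation time stamps have been recovered from the headers, reconstituting playback order is simply a sort on PTS, as the following sketch shows; for example, the I P B B transmission order of a typical MPEG GOP fragment becomes I B B P in display order:

```python
def reconstitute_playback_order(transmitted_frames):
    """Reorder frames from transmission (decode) order into playback order
    using their presentation time stamps."""
    return sorted(transmitted_frames, key=lambda entry: entry[0])

# Transmission order I P B B with PTS 0, 3, 1, 2 ...
frames = [(0, "I"), (3, "P"), (1, "B"), (2, "B")]
print(reconstitute_playback_order(frames))
# ... plays back as I B B P: [(0, 'I'), (1, 'B'), (2, 'B'), (3, 'P')]
```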
The group of pictures (GOP) concept is inherited from the MPEG video standards and refers to an I picture, followed by all the P and B pictures until the next I picture. A typical MPEG GOP structure might be IBBPBBPBBI. Although H.264 and other block-based compression standards do not strictly require more than one I picture per video sequence, the recommended rate control approaches do suggest that a repeating GOP structure is effective.
For a better video quality at a given bit rate, an open GOP encoding scheme may be used in many situations.
Stream 720 also has timing information, similarly represented by the numbering at the bottom of each frame as shown. On this basis, it is possible to identify the frames in the second video stream corresponding to the key position frames. Since frames 701, 702 and 703 have timing 1, 7 and 14 respectively, frames 721, 722 and 723 of the second video stream can be tagged as key position frames, on the basis that they have corresponding timing 1, 7 and 14 respectively. On this basis, encoder 333 can be instructed to start a new GOP for these frames. Once encoded, these frames will themselves be transposed correspondingly to a different part of the GOP as shown in encoded second video stream 740.
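Rendered as code, the mapping just walked through (key timings 1, 7 and 14 picking out frames 721, 722 and 723) reduces to a lookup on matching timings; the frame numbering below follows the figure description and is otherwise illustrative:

```python
def tag_key_frames(first_stream_keys, second_stream):
    """Return the labels of second-stream frames whose timing matches a
    key position frame of the first stream."""
    key_timings = set(first_stream_keys.values())
    return [label for label, timing in second_stream.items()
            if timing in key_timings]

first_keys = {701: 1, 702: 7, 703: 14}       # key frames and their timings
second = {721: 1, 724: 2, 722: 7, 723: 14}   # a few second-stream frames
print(tag_key_frames(first_keys, second))     # -> [721, 722, 723]
```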
The combined video stream 750 as shown comprises a first GOP 751 from the first video stream, a second GOP 752 from the second video stream, and a third GOP 751 from the first video stream, this combined video stream having been generated without the need to decode the first video stream, and without damaging or degrading the parts of the first video stream included in the combined video stream 750 when decoded and displayed.
The second video stream 820 also has timing information, similarly represented by the numbering at the bottom of each frame as shown. On this basis, it is possible to identify the frame in the second video stream corresponding to the key position frames. Since frames 801 and 802 have timing 4 and N+4 respectively, frames 821 and 822 of the second video stream can be tagged as key position frames, on the basis that they have corresponding timing 4 and N+4 respectively. On this basis, encoder 333 can be instructed to start a new GOP for these frames. Once encoded, these frames will themselves be transposed correspondingly to a different part of the GOP as shown in encoded second video stream 840.
The combined video stream 850 as shown comprises a first GOP 851 from the first video stream, a second GOP 852 from the second video stream, this combined video stream having been generated without the need to decode the first video stream, and without damaging or degrading the parts of the first video stream included in the combined video stream 850 when decoded and displayed.
Second video stream 920 also has timing information, similarly represented by the numbering at the bottom of each frame as shown. On this basis, it is possible to identify the frames in the second video stream corresponding to the key position frames. Since frames 901, 902 and 903 have timing 1, 7 and 14 respectively, frames 921, 922 and 923 of the second video stream can be tagged as key position frames, on the basis that they have corresponding timing 1, 7 and 14 respectively. On this basis, encoder 333 can be instructed to start a new GOP for these frames. Once encoded, these frames will themselves be transposed correspondingly to a different part of the GOP as shown in encoded second video stream 940.
Combined video stream 950 as shown comprises a first GOP 951 from the first video stream, a second GOP 952 from the second video stream, and a third GOP 951 from the first video stream, this combined video stream having been generated without the need to decode the first video stream, and without damaging or degrading the parts of the first video stream included in the combined video stream 950 when decoded and displayed.
Although the present description is primarily couched in terms of the vocabulary of MPEG-2 encoding, it will be appreciated that the described approaches apply to any block based compression scheme: MPEG standards such as MPEG-2, MPEG-4/AVC, and HEVC, and other formats that MPEG may produce in the future, but also specific formats such as VPx or AVS.
In some embodiments, the second video signal may be processed (resized, logo or text added, etc.) before encoding.
While the foregoing focuses on video data, it will also generally be necessary to consider synchronous splicing of the audio data accompanying the video stream. In this regard, there may additionally be provided means for adapting the length of the audio buffer for each stream. The required buffer length will be determined on the basis of the number of samples per frame, the sampling frequency, and the audio coding protocol used for the first video stream: for example, 1152 samples per frame in MPEG-1 Layer II, 1024 in AAC-LC, 1536 in DD/DD+, and so on. PTS information from the two audio streams and two video streams may, for example, be correlated to determine the timing of the audio frame for switching.
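For example, the duration that each coded audio frame contributes to the buffer follows directly from the samples-per-frame and sampling-frequency figures; at a 48 kHz sampling rate this gives 24 ms for MPEG-1 Layer II, about 21.3 ms for AAC-LC, and 32 ms for DD/DD+:

```python
def audio_frame_duration(samples_per_frame, sample_rate_hz):
    """Duration in seconds of one coded audio frame -- the quantity needed
    to size the audio buffer and to align an audio frame boundary with
    the video switch point."""
    return samples_per_frame / sample_rate_hz

print(audio_frame_duration(1152, 48000))   # MPEG-1 Layer II -> 0.024 s
print(audio_frame_duration(1024, 48000))   # AAC-LC          -> ~0.0213 s
print(audio_frame_duration(1536, 48000))   # DD/DD+          -> 0.032 s
```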
In accordance with one variant of this method, the key position picture is the first picture in each group of pictures as regards playback timing. Alternatively, the key position picture is the last picture in each group of pictures as regards playback timing.
In accordance with one variant of this method, the step 1030 of identifying a respective image in the second video stream having a presentation time corresponding to each key position picture of the first video signal may comprise tagging the respective image in the second video stream, and the step 1050 of switching between outputting the encoded first video stream or the second video stream may be carried out with reference to this tag.
In accordance with one variant of this method, the step 1010 of detecting the key position picture in each group of pictures of the first video stream may comprise decoding headers of the first video stream and reconstituting the playback order of the images of the first video stream to determine a playback timing for each image.
This method may be carried out once, to determine the proper encoding and transition time for one particular switch between two channels, or may be performed cyclically, for example so as to provide regular opportunities for switching. Where the method is performed cyclically, it may be applied to each GOP. It will be appreciated that it may be desirable to perform certain steps more often than others, for example it may be desirable to perform steps 1010, 1020, 1030 more frequently than step 1040, for example so that the system is ready to begin encoding of the second video stream on demand. Similarly, it may be desirable to perform step 1040 continuously, even if immediate switching to the second video signal is not foreseen, so as to support instantaneous transition on demand.
In accordance with a further variant, the method may comprise the further steps of specifying a transition time at which the step 1050 of switching between outputting the first video stream or the encoded second video stream should occur, and in a case where the transition time does not coincide with the start of a new group of pictures in the first video stream, decoding the group of pictures of the first video stream during which the transition time occurs and re-encoding this group of pictures as a first split group of pictures and a second split group of pictures, wherein the first split group of pictures ends and the second split group of pictures starts at the specified transition time, for example as described above with reference to
In accordance with certain embodiments, a splicer is able to combine an encoded video stream with a further video stream without needing to decode the encoded video stream, by reading timing and frame structure information from the metadata of the encoded video stream available in headers and the like, and encoding the further video stream with a structure synchronized with that of the first video stream as determined with reference to the metadata. It thus becomes possible to switch between the two signals at the desired instant without loss of data. Since encoded images are transmitted in a sequence that differs from the playback sequence, synchronizing the encoded streams means reconstructing the playback sequence of the encoded video stream to identify respective images having the same playback timing.
Other implementation details and variants of these methods may be envisaged, in particular corresponding to the variants of the apparatus described with reference to the preceding drawings.
The disclosed methods can take the form of an entirely hardware embodiment (e.g., FPGA), an entirely software embodiment (for example to control a system according to the invention), or an embodiment containing both hardware and software elements. Software embodiments include but are not limited to firmware, resident software, microcode, etc. Embodiments of the invention can take the form of a non-transitory computer program product accessible from a computer-usable or non-transitory computer-readable medium providing program code for use by or in connection with a computer or an instruction execution system. A computer-usable or computer-readable apparatus can be any apparatus that can contain, persistently store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device).
These methods and processes may be implemented by means of computer-application programs or services, an application-programming interface (API), a library, and/or other computer-program product, or any combination of such entities.
Logic device 1101 includes one or more physical devices configured to execute instructions. For example, logic device 1101 may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
Logic device 1101 may include one or more processors configured to execute software instructions. Additionally or alternatively, logic device 1101 may include one or more hardware or firmware logic devices configured to execute hardware or firmware instructions. Processors of logic device 1101 may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of logic device 1101 optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of logic device 1101 may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
Storage device 1102 includes one or more physical devices configured to hold instructions executable by the logic device to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage device 1102 may be transformed—e.g., to hold different data.
Storage device 1102 may include removable and/or built-in devices. Storage device 1102 may comprise one or more types of storage device including optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Storage device 1102 may include volatile, non-volatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
In certain arrangements, the system may comprise an interface 1103 adapted to support communications between logic device 1101 and further system components. For example, additional system components may comprise removable and/or built-in extended storage devices. Extended storage devices may comprise one or more types of storage device including optical memory 1132 (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory 1133 (e.g., RAM, EPROM, EEPROM, FLASH, etc.), and/or magnetic memory 1131 (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Such extended storage devices may include volatile, non-volatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It will be appreciated that storage device 1102 includes one or more physical devices, and excludes propagating signals per se. However, aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.), as opposed to being stored on a storage device.
Aspects of logic device 1101 and storage device 1102 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
The term “program” may be used to describe an aspect of a computing system implemented to perform a particular function. In some cases, a program may be instantiated via a logic device executing machine-readable instructions held by a storage device. It will be understood that different programs may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same program may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The term “program” may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
In particular, the system of
It will be appreciated that a “service”, as used herein, is an application program executable across multiple user sessions. A service may be available to one or more system components, programs, and/or other services. In some implementations, a service may run on one or more server-computing devices.
When included, display subsystem 1111 may be used to present a visual representation of the first video stream, the second video stream or the combined video stream, or may otherwise present statistical information concerning the processes undertaken. As the herein described methods and processes change the data held by storage device 1102, and thus transform the state of storage device 1102, the state of display subsystem 1111 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 1111 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic device 1101 and/or storage device 1102 in a shared enclosure, or such display devices may be peripheral display devices.
When included, input subsystem may comprise or interface with one or more user-input devices such as a keyboard 1112, mouse 1113, touch screen 1111, or game controller (not shown). In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, colour, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity.
When included, communication subsystem 1120 may be configured to communicatively couple the computing system with one or more other computing devices. For example, the communication subsystem may communicatively couple the computing device to a remote service hosted, for example, on a remote server 1076 via a network of any size including, for example, a personal area network, local area network, wide area network, or the internet. Communication subsystem 1120 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network 1174, or a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem may allow the computing system to send and/or receive messages to and/or from other devices via a network such as the Internet 1175. The communications subsystem may additionally support short range inductive communications 1021 with passive devices (NFC, RFID, etc.).
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.