The present patent application claims priority to European Patent Application No. EP 15306451.4, filed Sep. 17, 2015, entitled “Video Processing with Dynamic Resolution Changes,” the entire disclosure of which is hereby incorporated by reference for all purposes as if fully set forth herein.
Embodiments of the invention generally relate to the distribution of multimedia content over any delivery network.
The consumption of video content delivered over various networks has dramatically increased over time due, at least in part, to the availability of VOD (Video On Demand) services and live services as well as to the multiplication of devices on which such video content can be accessed. By way of example only, video content can be accessed from various kinds of terminals such as smart phones, tablets, personal computers (PCs), televisions, Set Top Boxes, and game consoles. Video content may also be distributed over various types of networks including broadcast, satellite, cellular, ADSL, and fibre.
Video content can be characterized by different parameters, such as the spatial resolution parameter, which defines the number of horizontal and vertical pixels of the video content. While the resolution may in principle be any pair of integers, in practice the resolution typically corresponds to one of a number of standard resolutions that have been defined. Popular resolutions available today include 480p (720×480 pixels), 576p (720×576 pixels), 720p (1280×720 pixels), 1080i (1920×1080 pixels split into two interlaced fields of 540 lines), 1080p (1920×1080 pixels), 2160p (3840×2160 pixels) and 4320p (7680×4320 pixels). The resolutions 720p, 1080i and 1080p are generally referred to as “HD” (High Definition) or “HDTV” (High Definition Television); the resolution 1080p may also be referred to as “Full HD” (Full High Definition). The resolutions 2160p and 4320p may be referred to as “UHD” (Ultra High Definition) or “UHDTV” (Ultra High Definition Television); resolution 2160p may also be referred to as “4K UHD” and resolution 4320p as “8K UHD”. Many intermediate resolutions exist between these standard resolutions. Such intermediate resolutions may be used during transmission of video content to reduce the footprint of the video content on the delivery network, even if the end device rescales the video content to full resolution just before display.
Due to the huge size of raw video, video content is generally accessed in compressed form: video content is digitally expressed or represented using a particular video compression standard. The most widely used video standards belong to the “MPEG” (Moving Picture Experts Group) family, which notably comprises the MPEG-2, AVC (Advanced Video Coding, also called H.264) and HEVC (High Efficiency Video Coding, also called H.265) standards. Generally speaking, more recent formats are considered to be more advanced, support more encoding features, and/or provide a better compression ratio than prior formats. For example, the HEVC format is more recent and more advanced than AVC, which is itself more recent and more advanced than MPEG-2; HEVC therefore offers more encoding features and greater compression efficiency than AVC, and the same applies for AVC in relation to MPEG-2. These compression standards are block-based compression standards, as are the Google formats VP8, VP9 and VP10.
Even using a single video compression standard, video content can be encoded in many different ways. Using the same video compression standard, digital video may be encoded at different bitrates. Also, using the same video compression standard, digital video may be encoded using only I-frames (I standing for Intra frame), I- and P-frames (P standing for Predicted frame), or I-, P- and B-frames (B standing for Bi-directional frame). Generally, the number of available encoding options increases with the complexity of the video standard.
Conventional video coding methods use three types of frames: I or Intra-predicted frames, P or Predicted frames, and B or Bi-directional frames. I frames can be decoded independently, like a static image. P frames use reference frames that have been previously displayed, and B frames use reference frames that are displayed prior to and/or later than the B frame to be encoded. The use of reference frames reduces the amount of information which needs to be encoded, as only the differences between blocks in a current frame and the reference frame(s) need be encoded.
A GOP is defined as the Group of Pictures between one I-frame and the next I-frame in encoding/decoding order. A closed GOP refers to any block-based encoding scheme where the information needed to decode a GOP is self-contained. In other words, a closed GOP may comprise one I-frame, P-frames that reference only that I-frame and other P-frames within the GOP, and B-frames that reference only frames within the GOP; consequently, there is no need to obtain any reference frame from a prior GOP to decode the current GOP. In common decoder implementations, switching between resolutions at some point in a stream requires that a “closed GOP” encoding scheme be used, since the first GOP after a resolution change must not require any information from the previous GOP in order to be correctly decoded.
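By way of non-limiting illustration, the following sketch shows how the closed-GOP property described above might be checked; the Frame record and its fields are hypothetical conventions chosen for illustration only, not part of any encoder described herein.

```python
# A minimal sketch of a closed-GOP check, assuming a hypothetical Frame
# record that lists the display indices of its reference frames.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    index: int              # position in display order (hypothetical convention)
    frame_type: str         # 'I', 'P' or 'B'
    references: List[int] = field(default_factory=list)  # indices of reference frames

def is_closed_gop(gop: List[Frame]) -> bool:
    """Return True if every reference used inside the GOP points at a frame
    that is itself part of the GOP, i.e. no prior-GOP frame is needed."""
    indices_in_gop = {frame.index for frame in gop}
    return all(ref in indices_in_gop
               for frame in gop
               for ref in frame.references)

# Example: a GOP whose leading B frame references a frame from the prior GOP
gop = [Frame(10, 'I'), Frame(12, 'P', [10]), Frame(11, 'B', [9, 12])]
print(is_closed_gop(gop))  # False: frame 9 belongs to the previous GOP
```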
By contrast, according to another coding scheme called Open GOP, B frames in a GOP that are displayed before the I-frame in that GOP can reference frames from prior GOPs. The Open GOP coding scheme is widely used for broadcasting applications as this encoding scheme provides a better video quality for a given bitrate.
Digital video is being distributed over IP networks with increasing frequency, and video content corresponds to an increasing percentage of the total traffic carried by IP networks. As video consumption has been increasing faster than the available bandwidth of content delivery networks, there is a pressing need to find more efficient compression schemes, especially at low bit rates.
The operators of content delivery networks continually weigh certain decisions when delivering content. For example, when the inherent costs are reasonable, video content may be converted to a more appropriate video codec. One of the inherent costs is the existing installed base of customer decoders, which may need to be replaced when changing the video codec. Changing the video codec used in a broadcast service is therefore always a costly decision for an operator and is typically made only when all other improvements using the current video codec have been exhausted.
For a particular video codec, the operator generally attempts to select an operating point (i.e., a video bit rate for a CBR encoding scheme) that will satisfy its customers' expectations for video quality while using the lowest possible bitrate.
A known drawback of block-based compression techniques, such as MPEG encoders, is that when the output bit rate is reduced too much for a given signal, the video encoder produces block artifacts. These block artifacts appear when the input signal has too much video entropy for the desired output bitrate. So, for a given bitrate, the operator may need to reduce the video entropy before the compression stage. One approach for reducing video entropy is to reduce the spatial resolution using a spatial resolution filter that removes a portion of the signal information.
Embodiments of the invention will be better understood, and their various characteristics and advantages will emerge, from the following description of a number of exemplary embodiments and the appended figures, in which:
Conventional MPEG encoder systems are configured to encode incoming video signals having known parameters. As a consequence, the bit-rate range in which the encoder can operate without generating block artifacts is limited. However, for a given bit rate, if a wide range of possible signal entropies is considered, then this fixed encoder setting may be challenged by an alternative approach using a scalable scheme, in which, at a given bitrate, the enhancement layers may be skipped for the most complex sequences in order to give more bitrate to the base layer running at a lower resolution.
Embodiments of the invention provide for a scalable compression scheme that can operate over a wider range of bit rates than prior approaches because the scalable compression scheme natively works with multiple resolution formats. Even if the compression efficiency of the scalable compression scheme is lower than that of a single-layer compression scheme, its behavior at very low bit rates is better than that of prior approaches.
As shown in
As shown in the two examples of
As the signal changes, or the targeted bitrate evolves as a consequence of bandwidth availability or user configuration, it may become desirable to switch back and forth between the available resolutions. Those skilled in the art will appreciate that while only two resolutions are shown in
As used herein, video quality refers to the fidelity of the resulting video after encoding and decoding compared to the input video used as a reference. Video quality is often associated with video blockiness, i.e., the level of visible block artifacts. While in everyday usage video quality may be considered to have objective and subjective aspects, in the context of embodiments of the invention the objective components of video quality are of primary interest, and more particularly, the objective components or indicators of video quality that can be ascertained automatically.
Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) are two known tools for measuring fidelity. A related consideration is complexity or entropy: a high-entropy signal is inherently less compressible and, to meet bandwidth constraints, will have to be compressed or down-filtered more than an equivalent low-entropy signal, thereby reducing quality. Accordingly, video quality can be predicted by comparing different input and output signals, or through consideration of the characteristics of the incoming signal together with knowledge of the behavior of the compression algorithm. Further, a person of ordinary skill in the art will recognize that other approaches to predicting video quality may be used with embodiments of the invention. In the following description, where the terms ‘video quality,’ ‘complexity,’ ‘entropy,’ or ‘Quantization Parameter’ are used, it shall be borne in mind that, in view of the relationships between them, any approach described with reference to one of these terms could equally be approached on the basis of any other of these terms, with the modifications implied by the relationship between the terms in question.
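By way of example only, the following sketch computes PSNR, one of the fidelity tools named above; it assumes 8-bit pictures supplied as NumPy arrays of identical shape, which is an illustrative convention rather than a requirement of the embodiments.

```python
# A minimal sketch of a PSNR fidelity measurement between a reference
# picture and its encoded/decoded counterpart, assuming 8-bit planes.
import numpy as np

def psnr(reference: np.ndarray, degraded: np.ndarray, peak: float = 255.0) -> float:
    """Return PSNR in decibels; higher values indicate higher fidelity."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # the pictures are identical
    return 10.0 * np.log10(peak ** 2 / mse)
```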
In operation, resolution selector 22 determines a desired resolution level of the video signal dynamically, that is to say in an iterative or repeated manner, based on a comparison of the output of video quality estimator 21 at the available transmission channel bandwidth with predefined quality thresholds, so as to continually present to encoder 23 a video signal at the optimal resolution. More particularly, there is provided a resolution selector 22 adapted to determine a desired resolution level of an input video signal for encoding based on a comparison of a predicted video quality for the video signal at the available transmission channel bandwidth with a predefined quality threshold, where a new resolution level is selected if the predicted video quality passes the quality threshold. Resolution selector 22 is further adapted to output an instruction for a resolution filter to reduce the resolution of the video signal to the desired resolution.
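A minimal sketch of the selection rule performed by resolution selector 22 follows; the predict_quality estimator and the ordering of candidate resolutions are assumptions made for illustration only, not a definitive implementation.

```python
# A minimal sketch of threshold-based resolution selection, assuming a
# hypothetical predict_quality(signal, resolution, bandwidth) estimator
# and a list of candidate resolutions ordered from highest to lowest.
def select_resolution(signal, resolutions, bandwidth, quality_threshold, predict_quality):
    """Return the highest candidate resolution whose predicted quality
    passes the threshold; fall back to the lowest candidate otherwise."""
    for resolution in resolutions:  # highest first
        if predict_quality(signal, resolution, bandwidth) >= quality_threshold:
            return resolution
    return resolutions[-1]  # even the lowest rung misses the threshold
```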
Video quality estimator 21 may provide a measurement of the video signal complexity or entropy. A measurement of the video signal complexity or entropy may be calculated for every image, for images occurring at a regular interval, for images corresponding to I, P or B frames, for images corresponding to the start or end of a GOP, for images corresponding to a change from open GOP to closed GOP encoding, or any combination of these.
The Group of Pictures (GOP) concept is inherited from MPEG and refers to an I-picture followed by all the P and B pictures until the next I-picture. As an example, a typical MPEG GOP structure might be IBBPBBPBBI, where each letter in that sequence corresponds to an I, B or P picture. Although H.264 and certain other block-based compression standards do not strictly require more than one I-picture per video sequence, the recommended rate-control approaches suggest a repeating GOP structure in order to be effective.
Resolution selector 22 is adapted to select a higher resolution for a signal with a lower entropy, and adapted to select a lower resolution for a signal having higher entropy.
In certain embodiments, video quality estimator 21 constitutes the first stage of a two-pass encoder structure as explained hereafter with reference to
The average of the QP over a given image is a strong indication of the image complexity when an encoder works in a constant bit rate mode. The QP used for the first-pass encoding will be high if the entropy is high, and vice versa. Also, the higher the QP, the greater the macroblock artifacts. An objective of encoder 23 is to generate video with a level of macroblock artifacts below a certain threshold; consequently, encoder 23 seeks to reduce the signal entropy using decimation filters to reduce the spatial resolution.
In a typical block-based video encoding algorithm, residuals (i.e., the differences between the source and prediction signals) are transformed into the spatial frequency domain by an integer transform, such as the Discrete Cosine Transform (DCT). The Quantization Parameter determines the step size for associating the transformed coefficients with a finite set of steps. Large values of QP represent big steps that crudely approximate the spatial transform, so that most of the signal can be represented by only a few coefficients. Small values of QP give a more accurate approximation of the block's spatial frequency spectrum, but at the cost of more bits. In an embodiment, the rate controller of a CBR encoder computes the QP needed to achieve the target bit rate at the GOP level.
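The following sketch illustrates the quantization behavior described above; the linear QP-to-step-size mapping is a deliberate simplification, as real codecs such as AVC derive the step size from QP through a logarithmic table.

```python
# A minimal sketch of uniform quantization of transformed residual
# coefficients: larger steps zero out more coefficients, smaller steps
# preserve more of the block's spatial frequency spectrum.
import numpy as np

def quantize(coefficients: np.ndarray, qp: int) -> np.ndarray:
    step = qp  # simplification: real codecs map QP to step size via tables
    return np.round(coefficients / step).astype(np.int32)

def dequantize(levels: np.ndarray, qp: int) -> np.ndarray:
    return levels * qp

coeffs = np.array([312.0, -47.0, 15.0, -6.0, 2.0, -1.0])
print(quantize(coeffs, qp=4))   # fine steps keep more detail
print(quantize(coeffs, qp=32))  # coarse steps zero out most coefficients
```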
Spatial resolution selector 22 renders a decision on the spatial resolution to be used, considering the current input signal entropy (information provided by video quality estimator 21) and the target bitrate configured by the user. Spatial resolution selector 22 selects the new resolution from a set of predefined sub-resolutions of the full input resolution. Spatial resolution selector 22 provides spatial resolution filter 24 with the next resolution to be applied once the current GOP is complete; spatial resolution selector 22 also provides this information to GOP manager 25 so that GOP manager 25 is aware of a call for a new resolution.
In accordance with the approach of
As shown in
Comparing the system of
According to an alternative approach, the system may initialise at the highest resolution, and if it is determined that predicted video quality falls below a threshold, the resolution may be taken to the next resolution down. This process can be repeated until a resolution is reached at which predicted video quality remains consistently above the threshold. In this approach, the system may revert to the highest resolution periodically, for example at system start-up, whenever a new GOP is initiated, when content type changes, or at regular predefined intervals.
According to a further alternative approach, the system may initialise at the lowest resolution, and if it is determined that predicted video quality is above a threshold (corresponding to a QP below a threshold), the resolution may be taken to the next resolution up. This process can be repeated until the highest resolution at which predicted video quality remains consistently above the threshold is reached. In this approach, the system may revert to the lowest resolution periodically, for example at system start-up, whenever a new GOP is initiated, when content type changes, or at regular predefined intervals.
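By way of illustration, the two alternative approaches above might be combined into a single stepping rule such as the following sketch; the measure_qp probe, the resolution ladder, and the single shared threshold are illustrative assumptions rather than elements of any specific embodiment.

```python
# A minimal sketch of step-down/step-up resolution adaptation, assuming
# a hypothetical measure_qp(resolution) probe and a ladder of resolutions
# ordered from lowest to highest. QP above the threshold stands for
# predicted video quality below the threshold, and vice versa.
def adapt_resolution(ladder, current_index, measure_qp, qp_threshold):
    """Move one rung down when quality is too low, one rung up when there
    is headroom, mirroring the two alternative approaches above."""
    qp = measure_qp(ladder[current_index])
    if qp > qp_threshold and current_index > 0:
        return current_index - 1  # step down to the next lower resolution
    if qp < qp_threshold and current_index < len(ladder) - 1:
        # A practical system would also verify that the higher rung itself
        # still passes the threshold before committing to the step up.
        return current_index + 1
    return current_index
```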
As shown in
Comparing the system of
In this approach, the role of resolution selector 62 is to observe the QP values corresponding to the available resolutions and to select the available resolution which fits best between the defined thresholds. For the duration of the cinematographic content 311, the high-resolution QP signal lies below the threshold 721; however, at the cut-off 741, when the content type shifts from the cinematographic content 311 to the sports broadcast 312, the high-resolution QP moves above the threshold 721. This move in QP above the threshold 721 reflects the fact that the higher entropy of the incoming video means that it can only be brought below the required bandwidth with strong compression. The line corresponding to the lower resolution 713 is now the best match for the threshold, as it is the only one below the QP threshold 721; consequently, resolution selector 62 instructs spatial resolution filter 64 to reduce the video signal size to the low resolution. This remains the case until the next cut-off 742, when the content type shifts from the sports broadcast 312 to the news report 313, whereupon the QP of the medium-resolution signal 712 drops below the threshold 722. This drop in QP reflects the fact that the lower entropy of the incoming video means that it can be brought below the required bandwidth with lower compression. On this basis, as the signal slips below the threshold 722, resolution selector 62 selects the medium resolution, on the basis that it now offers the best match to the threshold.
In an embodiment described with respect to
Spatial resolution filter blocks 24, 44, 64 may implement one or more downscaling algorithms as will readily occur to the skilled person, such as bi-linear, bi-cubic, Gaussian and Lanczos filtering. Spatial resolution filter blocks 24, 44, 64 may receive input from the resolution selector block that specifies which of the available output resolutions is required.
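As a non-limiting example, a spatial resolution filter of this kind might rely on an existing resampler; the sketch below uses the Pillow imaging library's Lanczos filter, one of the algorithms named above, with an arbitrary target resolution standing in for the value supplied by the resolution selector.

```python
# A minimal sketch of a spatial resolution filter built on Pillow's
# Lanczos resampler; the target resolution would come from the
# resolution selector block in practice.
from PIL import Image

def downscale(frame: Image.Image, target: tuple) -> Image.Image:
    """Reduce one frame to the requested (width, height)."""
    return frame.resize(target, resample=Image.LANCZOS)

# Example: reduce a full-HD frame to a 960x540 sub-resolution
# frame = Image.open("frame.png")
# reduced = downscale(frame, (960, 540))
```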
To obtain a better video quality at a given bit rate, an Open GOP encoding scheme may be used by certain embodiments, in which case the encoder shall properly manage the moment in time when the resolution change occurs. Note that the video resolution change must occur on an Intra picture. The Intra picture at which a change of resolution occurs must be the first image (in encoding order) of a closed GOP; this means that any other image contained in this GOP, or in any future GOP, must not reference any previous image (which would be encoded in the previous resolution). After this first GOP, the encoder will move back to an open GOP encoding scheme for a better video quality.
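The following sketch outlines the open/closed GOP switching rule just described; the encoder interface (set_resolution, start_closed_gop, start_open_gop) is hypothetical and stands in for whatever control surface a real encoder exposes.

```python
# A minimal sketch of GOP management around a resolution change: the
# first GOP after the change is closed, and the encoder then reverts
# to open GOPs for better quality at the same bitrate.
def on_new_gop(encoder, pending_resolution):
    """Called at each GOP boundary; returns the (possibly consumed)
    pending resolution change."""
    if pending_resolution is not None:
        encoder.set_resolution(pending_resolution)  # applied on an Intra picture
        encoder.start_closed_gop()  # no reference back to the old resolution
        return None                 # the change has been consumed
    encoder.start_open_gop()        # normal operation resumes
    return pending_resolution
```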
In the preceding embodiments, the possible resolutions that the resolution selector may specify are limited to three standard resolutions. It will be understood that any number of resolutions may be specified, and that some or all of these resolutions may be non-standard resolutions. According to some embodiments, the possible resolutions may evolve over time, such that the set of available resolutions is updated to better correspond to network conditions, video content types, and other relevant factors.
While certain preceding embodiments refer to a reduction of the resolution of the incoming video signal to the resolution specified by the resolution selector, it will be understood that in some cases the native resolution of the incoming video signal might be specified, for example, if the incoming video signal is of low resolution, if a very high bit rate is available, or if the video signal is of very low entropy. In this case, the resolution reduction can be considered to be zero.
In accordance with an embodiment, encoders 23, 43, and 63 operate in a closed GOP mode and GOP management blocks 25, 45, and 65 are adapted to issue an instruction to the encoder to begin a new Group of Pictures to coincide with a change in resolution demanded by a resolution selector such as 22, 42, and 62.
In accordance with an embodiment, encoders 23, 43, and 63 operate in a closed GOP mode and GOP management blocks 25, 45, and 65 control the Group of Pictures structure under processing in the encoder and enable the spatial resolution filters 24, 44, and 64 to implement a change in resolution requested by the resolution selector 22, 42, or 62 to coincide with the initiation of a new Group of Pictures.
In accordance with an embodiment, the encoder operates in an open GOP mode and the GOP management blocks 25, 45, and 65 are adapted to issue an instruction to the encoder to temporarily switch to a closed GOP mode, to begin a new closed Group of Pictures coinciding with a change in resolution demanded by the resolution selector 22, 42, or 62, and then to revert to an open GOP mode of encoding.
In accordance with an embodiment, the encoder operates in an open GOP mode and the GOP management blocks 25, 45, and 65 control the Group of Pictures structure under processing in the encoder and enable the spatial resolution filters 24, 44, and 64 to implement a change in resolution requested by the resolution selector 22, 42, or 62 to coincide with the initiation of a new closed Group of Pictures.
In accordance with an embodiment, the block-based encoder 23, 43, 63 implements a block-based encoding algorithm. The algorithm is preferably a standard block-based algorithm such as MPEG-2, MPEG-4/AVC, HEVC or VPx, etc., as shown in
It will be appreciated that certain features of embodiments may be partially available in existing encoder products, for example the first-pass encoding used in entropy estimation or features supporting the functions described for GOP manager 25.
Video is composed of a stream of individual pictures that can be broken down into individual blocks of x pixels by x lines called “macroblocks.” It is at the macroblock level that the following processing takes place:
Residuals (i.e., the differences between the source and prediction signals coming from the Intra or Inter prediction blocks) are transformed into the spatial frequency domain by a transform like, or close to, the Discrete Cosine Transform (DCT). Depending on the standard in question, this transform can be an integer or a floating-point transform. Then, at the quantization stage, a Quantization Parameter (QP) determines the step size for associating the transformed coefficients with a finite set of steps. Large values of QP represent big steps that crudely approximate the spatial transform, so that most of the signal can be represented by only a few coefficients. Small values of QP give a more accurate approximation of the block's spatial frequency spectrum, but at the cost of more bits. In usual implementations, CBR encoder rate controllers compute the QP needed to achieve the target bit rate at the GOP level.
Some standards define a de-blocking filter that operates on both 16×16 macroblock and 4×4 block boundaries. In the case of macroblocks, the filter is intended to remove artifacts that may result from adjacent macroblocks having different estimation types (e.g., motion vs. intra estimation) and/or a different quantizer scale. In the case of blocks, the filter is intended to remove artifacts that may be caused by transform/quantization and by motion vector differences between adjacent blocks. The loop filter typically modifies the two pixels on either side of the macroblock/block boundary using a content-adaptive non-linear filter.
Intra and motion estimation (prediction) may be used to identify and eliminate the spatial and temporal redundancies that exist inside and between individual pictures. Intra estimation attempts to predict the current block by extrapolating the neighboring pixels from adjacent blocks in a defined set of different directions. Inter prediction attempts to predict the current block using motion vectors to previous and/or future pictures.
Before entropy coding can occur, the 4×4 quantized coefficients must be serialized. Depending on whether these coefficients were originally motion estimated or intra estimated, a different scan pattern is selected to create the serialized stream. The scan pattern orders the coefficients from low frequency to high frequency. Then, since higher frequency quantized coefficients tend to be zero, run-length encoding is used to group trailing zeros, resulting in more efficient entropy coding.
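By way of example, the serialization and run-length grouping described above can be sketched as follows for a 4×4 block; the scan table is the standard 4×4 zig-zag pattern, while the (run, level) pairing is a simplified stand-in for the codec-specific syntax.

```python
# A minimal sketch of zig-zag serialization of a 4x4 block of quantized
# coefficients (low to high frequency) followed by run-length grouping
# of zeros, with trailing zeros dropped.
ZIGZAG_4x4 = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2),
              (2, 1), (3, 0), (3, 1), (2, 2), (1, 3), (2, 3), (3, 2), (3, 3)]

def serialize(block):
    """Order the 4x4 coefficients from low to high spatial frequency."""
    return [block[r][c] for r, c in ZIGZAG_4x4]

def run_length(levels):
    """Encode as (zero_run, level) pairs; trailing zeros are implicit."""
    pairs, run = [], 0
    for level in levels:
        if level == 0:
            run += 1
        else:
            pairs.append((run, level))
            run = 0
    return pairs

block = [[9, 3, 1, 0],
         [2, 0, 0, 0],
         [1, 0, 0, 0],
         [0, 0, 0, 0]]
print(run_length(serialize(block)))  # [(0, 9), (0, 3), (0, 2), (0, 1), (1, 1)]
```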
The entropy coding stage maps symbols representing motion vectors, quantized coefficients, and macroblock headers into actual bits. Entropy coding improves coding efficiency by assigning a smaller number of bits to frequently used symbols and a greater number of bits to less frequently used symbols.
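As one concrete instance of such a variable-length mapping, the sketch below implements unsigned Exp-Golomb coding, which H.264 uses for many header symbols; it is offered as an illustration of the shorter-codes-for-frequent-symbols principle rather than as the full entropy-coding stage (CAVLC or CABAC) of any particular standard.

```python
# A minimal sketch of unsigned Exp-Golomb coding: small (frequent) values
# get short codewords, larger values get progressively longer ones.
def exp_golomb(value: int) -> str:
    """Return the Exp-Golomb codeword of a non-negative integer as a bit string."""
    code = bin(value + 1)[2:]            # binary representation of value + 1
    return '0' * (len(code) - 1) + code  # leading-zero prefix, then the code

for v in range(5):
    print(v, exp_golomb(v))  # 0 -> '1', 1 -> '010', 2 -> '011', 3 -> '00100', ...
```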
As described, such an encoder may fulfil the functions described with regard to embodiments described in reference to
In certain embodiments, the resolution selector block also provides a signal to the GOP management block in order to warn the GOP management block that the resolution must be changed and, consequently, that the next GOP shall be a closed GOP.
In an embodiment, the resolution of the signal to be encoded is modified to adopt a picture format (spatial resolution) selected such that block-based compression does not generate block artifacts. Such an encoder can support a wider range of signal entropy at the input for a given bit rate.
In certain embodiments, a resolution selector 22, 42, or 62 may need to apply different thresholds depending on a number of system parameters, including the resolution of the input video signal, the characteristics of the outgoing transmission channel (in particular the available bandwidth), the configuration of the encoder, and the GOP structure in effect. These and other parameters may all affect the proper choice of video quality threshold. Accordingly, the resolution selector may be adapted to store or access a repository containing definitions of the proper threshold for each scenario. This technique applies to any block-based compression scheme: MPEG standards such as MPEG-2, MPEG-4/AVC, HEVC, and other formats that MPEG may produce in the future, but also specific formats such as VPx or AVS. The approach is particularly advantageous for Constant Bit Rate (CBR) encoders, such as IPTV encoders, since for a given bit rate setting such encoders have to adapt to any source entropy (ranging, for example, from sports material, known for its high-entropy signals, to film).
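A repository of per-scenario thresholds might take a form such as the following sketch; the key structure and the numeric QP values are purely illustrative assumptions, not values taken from any embodiment.

```python
# A minimal sketch of a threshold repository keyed by system parameters;
# both the key tuple and the QP values below are illustrative only.
QP_THRESHOLDS = {
    # (input resolution, GOP structure, bandwidth band) -> QP threshold
    ("1080p", "IBBP", "low"):  34,
    ("1080p", "IBBP", "high"): 38,
    ("2160p", "IBBP", "low"):  32,
}

def threshold_for(resolution: str, gop: str, band: str, default: int = 36) -> int:
    """Look up the proper threshold for a scenario, with a fallback default."""
    return QP_THRESHOLDS.get((resolution, gop, band), default)
```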
VBR encoders have the capability to work around this problem by increasing the bit rate in proportion to signal entropy. Nevertheless, the described approach remains equally applicable to VBR encoders, such as in the case of a linear TV channel where the program aggregator is putting different content back to back. With 4K in mind, traditional broadcasters are challenged by on-demand video service providers which can deliver 4K content because the delivery chain is there, film content is there, and the investment to put in place the 4K file workflow for content preparation in advance is inexpensive. Traditional broadcasters could envisage a similar approach on the basis of a dedicated 4K channel; however, it is far more costly in terms of production infrastructure to adopt a full 4K live workflow.
One way to work around this is to use a 4K encoder that works with a 4K format for film (file-based workflow) and switches to HD formats for live content (typically sports or news). The advantage is that traditional broadcasters may then broadcast 4K content like on-demand suppliers, but traditional broadcasters can aggregate the 4K content into a live channel, which such on-demand suppliers cannot offer.
For this particular use case, the switching criteria can be of two types: a decision made automatically by the encoder depending on the nature of the input signal (entropy or native spatial resolution), or a decision based on content type via playlist information. In both cases, the behavior of the encoder is a seamless encoding-format switch.
It will be appreciated that the foregoing embodiments are merely non-limiting examples. In particular, it will be appreciated that the functions required to implement the invention may be distributed in a variety of ways amongst system components, for example whether entropy estimation is performed entirely or partially by a first-pass encoder or by a separate subsystem, whether the GOP management is performed entirely or partially by the encoder or by a separate subsystem, etc.
The step 914 of selecting a new resolution may comprise selecting a resolution with reference to the threshold, or may simply proceed to the next resolution in a predefined sequence as described above. In either case, iteration of these steps will converge on an optimal resolution.
In some variants of the embodiment, the steps of
It will be appreciated that these steps can be carried out in different orders, and that certain steps may be carried out more frequently than others. In particular, it will be appreciated that while the signal will generally need to be continuously down filtered and encoded, video quality prediction and resolution selection may be carried out from time to time as appropriate, as discussed with regard to the foregoing embodiments.
In certain embodiments, the version of the incoming video signal used in the prediction of output video quality is the incoming video signal itself. In certain embodiments, the version of the incoming video signal used in the prediction of output video quality is the filtered video signal generated at step 915.
According to certain embodiments, there are provided approaches for video encoding which down-filter an incoming video signal to a standard resolution before encoding by a standard block-based encoding algorithm. The selection of the resolution to which the incoming signal is down-filtered is determined on the basis of a prediction of the video quality that may be expected at the system output with regard to the complexity or entropy of the signal. The predicted output video quality may be estimated on the basis of the Quantization Parameter of an encoder receiving the input video signal or a filtered video signal. The selection of a new down-filtered resolution may be carried out with regard to one or more thresholds.
The disclosed embodiments can take the form of an entirely hardware embodiment (e.g., an FPGA), an entirely software embodiment (for example to control a system according to the invention), or an embodiment containing both hardware and software elements. Software embodiments include, but are not limited to, firmware, resident software, microcode, etc. Embodiments may take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or an instruction execution system. A computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device).
These methods and processes may be implemented by means of computer-application programs or services, an application-programming interface (API), a library, and/or other computer-program product, or any combination of such entities.
Logic device 1001 includes one or more physical devices configured to execute instructions. For example, logic device 1001 may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
Logic device 1001 may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic device may include one or more hardware or firmware logic devices configured to execute hardware or firmware instructions. Processors of the logic device may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic device 1001 optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic device 1001 may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
Storage device 1002 includes one or more physical devices configured to hold instructions executable by the logic device to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage device 1002 may be transformed, e.g., to hold different data.
Storage device 1002 may include removable and/or built-in devices. Storage device 1002 may comprise one or more types of storage device including optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Storage device 1002 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
In certain arrangements, the system may comprise an interface 1003 adapted to support communications between logic device 1001 and further system components. For example, additional system components may comprise removable and/or built-in extended storage devices. Extended storage devices may comprise one or more types of storage device including optical memory 1032 (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory 1033 (e.g., RAM, EPROM, EEPROM, FLASH etc.), and/or magnetic memory 1031 (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Such extended storage device may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It will be appreciated that storage device 1002 includes one or more physical devices, and excludes propagating signals per se. However, aspects of the instructions described herein alternatively may be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.), as opposed to being stored on a storage device.
Aspects of logic device 1001 and storage device 1002 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
The term “program” may be used to describe an aspect of a computing system implemented to perform a particular function. In some cases, a program may be instantiated via logic device 1001 executing machine-readable instructions held by storage device 1002. It will be understood that different programs may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same program may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The term “program” may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
The system of
Accordingly, the invention may be embodied in the form of a computer program.
It will be appreciated that a “service”, as used herein, is an application program executable across multiple user sessions. A service may be available to one or more system components, programs, and/or other services. In some implementations, a service may run on one or more server-computing devices.
When included, display subsystem 1011 may be used to present a visual representation of data held by storage device. This visual representation may take the form of a graphical user interface (GUI). As the herein described methods and processes change the data held by storage device 1002, and thus transform the state of storage device 1002, the state of display subsystem 1011 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 1011 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic device and/or storage device in a shared enclosure, or such display devices may be peripheral display devices.
When included, input subsystem may comprise or interface with one or more user-input devices such as a keyboard 1012, mouse 1011, touch screen 1011, or game controller (not shown). In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity.
When included, communication subsystem 1020 may be configured to communicatively couple the computing system with one or more other computing devices. For example, the communication subsystem may communicatively couple the computing device to a remote service hosted, for example, on a remote server 1076 via a network of any size including, for example, a personal area network, local area network, wide area network, or the Internet. The communication subsystem may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network 1074, or a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem may allow the computing system to send and/or receive messages to and/or from other devices via a network such as the Internet 1075. The communication subsystem may additionally support short-range inductive communications 1021 with passive devices (NFC, RFID, etc.).
It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.