The present disclosure relates to a method of minimizing artifacts in video coding and compression. More specifically, it relates to methods for reducing “flashing” due to relatively low complexity portions of a sequence of video frames in video coding and processing systems such as within the High Efficiency Video Coding (HEVC) standard.
Many video compression standards, e.g. H.264/AVC and H.265/HEVC, currently published as ISO/IEC 23008-2 MPEG-H Part 2 and ITU-T H.265, have been widely used in video capture, video storage, real time video communication and video transcoding. Examples of popular applications include Apple AirPlay® Mirroring, FaceTime®, and video capture in iPhone® and iPad®.
Most video compression standards achieve much of their compression efficiency by motion compensating a reference picture and using it as a prediction for the current picture, and only coding the difference between the current picture and the prediction. The highest rates of compression can be achieved when the prediction is highly correlated to the current picture. When a portion of the view is generally not moving or is of relatively low complexity, i.e., substantially visually static from frame to frame for a duration of time, then there is high correlation of that portion to itself. When this happens, the video may be compressed by skipping the coding of that relatively low complexity portion for a number of frames, and/or coding the relatively low complexity portion with fewer bits than the more complex portions of the view.
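As a minimal illustration of this principle, the sketch below (illustrative only, not part of the disclosure; the block size and pixel values are arbitrary) shows that when a block is perfectly predicted from a reference, the residual the encoder must code is zero.

```python
# Illustrative sketch only: residual coding under motion-compensated prediction.
# A visually static block yields a near-zero residual, which costs few bits.
import numpy as np

def residual(current_block: np.ndarray, predicted_block: np.ndarray) -> np.ndarray:
    """Difference the encoder actually codes (transform and quantization follow)."""
    return current_block.astype(np.int16) - predicted_block.astype(np.int16)

current = np.full((8, 8), 120, dtype=np.uint8)     # static gray block in the current frame
prediction = np.full((8, 8), 120, dtype=np.uint8)  # perfect prediction from the reference
print(residual(current, prediction).any())         # False: nothing left to code
```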
However, skipping the coding of the relatively low complexity portions, or coding them with a low number of bits, may cause small errors to accumulate from reference frame to reference frame, such that eventually the accumulated error is encoded into the video data. Upon decoding and display, the relatively low complexity portion shows a “flashing” or “beating” visual effect, which is particularly noticeable because it appears as a relatively large image change (in, for example, luminance or color) within an otherwise static portion. Generally, this “flashing” occurs when refresh frames (e.g., intra frames or instantaneous decoder refresh frames) appear every 30 or so frames (approximately every 1 second) in the video data, which makes the “flashing” or “beating” effect resemble a heartbeat.
This “flashing” effect may be minimized by not skipping the coding of the relatively low complexity portions and/or by increasing the number of bits used to code them. However, doing so significantly increases the number of bits required in the coded video data, increasing computational time and decreasing compression efficiency, which is unacceptable for many applications.
Methods and systems provide removal of a flashing artifact from a sequence of frames of video data. In an embodiment, a method compares a first frame of the video data to a second frame of the video data. The method may then determine, based on the comparing, that at least a portion of the first frame and a corresponding portion of the second frame are of relatively low complexity. The method may determine a likelihood of flashing being present in the relatively low complexity portions of the first frame and the second frame, based on a tracked history of at least one block associated with the at least a portion of the first frame and the second frame. The method may then code the at least one associated block based on the first frame and the second frame responsive to a determination that the flashing artifact is more likely than not present.
For bidirectional transmission of data, however, each terminal 110, 120 may code video data captured at a local location for transmission to the other terminal via the network 130. Each terminal 110, 120 also may receive the coded video data transmitted by the other terminal, may decode the coded data and may display the recovered video data at a local display device.
The video source 215 may provide video to be coded by the terminal 210. In a videoconferencing system, the video source 215 may be a camera that captures local image information as a video sequence or it may be a locally-executing application that generates video for transmission (such as in gaming or graphics authoring applications). In a media serving system, the video source 215 may be a storage device storing previously prepared video.
The pre-processor 220 may perform various analytical and signal conditioning operations on video data. For example, the pre-processor 220 may search for video content in the source video sequence that is likely to generate artifacts when the video sequence is coded, decoded, and displayed. The pre-processor 220 also may apply various filtering operations to the frame data to improve efficiency of coding operations applied by a video coder 225.
The video coder 225 may perform coding operations on the video sequence to reduce the bit rate of the sequence. The video coder 225 may code the input video data by exploiting temporal and spatial redundancies in the video data. The transmitter 230 may buffer coded video data and prepare it for transmission to a second terminal 250. The controller 235 may manage operations of the first terminal 210.
The first terminal 210 may operate according to a coding policy, which may be implemented by the controller 235 and video coder 225. The controller 235 may select coding parameters to be applied by the video coder 225 in response to various operational constraints. Such constraints may be established by, among other things: a data rate that is available within the channel to carry coded video between terminals, a size and frame rate of the source video, a size and display resolution of a display at a terminal 250 that will decode the video, and error resiliency requirements of a protocol by which the terminals operate. Based upon such constraints, the controller 235 and/or the video coder 225 may select a target bit rate for coded video (for example, as N bits/sec) and an acceptable coding error for the video sequence. Thereafter, they may make various coding decisions for individual frames of the video sequence. For example, the controller 235 and/or the video coder 225 may select a frame type for each frame, a coding mode to be applied to pixel blocks within each frame, and quantization parameters to be applied to frames and/or pixel blocks.
During coding, the controller 235 and/or video coder 225 may assign to each frame a certain frame type, which can affect the coding techniques that are applied to the respective frame. Frames commonly are parsed spatially into a plurality of pixel blocks (for example, blocks of 4×4, 8×8, 16×16, 32×32, 64×64 pixels each) and coded on a pixel-block-by-pixel-block basis. Pixel blocks may be coded predictively with reference to other coded pixel blocks as determined by the coding assignment applied to the pixel blocks' respective frame. For example, pixel blocks of Intra Frames (“I frames”) can be coded non-predictively or they may be coded predictively with reference to pixel blocks of the same frame (spatial prediction). Pixel blocks of Predictive Frames (“P frames”) may be coded non-predictively, via spatial prediction or via temporal prediction with reference to one previously coded reference frame. Pixel blocks of Bidirectionally Predictive Frames (“B frames”) may be coded non-predictively, via spatial prediction or via temporal prediction with reference to one or two previously coded reference frames.
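As a rough sketch of the frame-type distinctions above (illustrative only; the enum and helper names are hypothetical, not from the disclosure), each type bounds how many previously coded reference frames a pixel block may draw on:

```python
from enum import Enum

class FrameType(Enum):
    I = "intra"          # spatial prediction only, within the same frame
    P = "predictive"     # adds temporal prediction from one reference frame
    B = "bipredictive"   # adds temporal prediction from up to two reference frames

def max_reference_frames(ftype: FrameType) -> int:
    """Maximum previously coded reference frames a pixel block of this type may use."""
    return {"intra": 0, "predictive": 1, "bipredictive": 2}[ftype.value]

print(max_reference_frames(FrameType.B))  # 2
```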
Receiver 255 may receive coded data from a channel 245 and parse it according to its constituent elements. For example, the receiver 255 may distinguish coded video data from coded audio data and route each coded data to decoders to handle them. In the case of coded video data, the receiver 255 may route it to the video decoder 260.
Video decoder 260 may perform decoding operations that invert processes applied by the video coder 225 of the first terminal 210. Thus, the video decoder 260 may perform prediction operations according to the coding mode that was identified and perform entropy decoding, inverse quantization and inverse transforms to generate recovered video data representing each coded frame.
Post-processor 265 may perform additional processing operations on recovered video data to improve quality of the video prior to rendering. Filtering operations may include, for example, filtering at pixel block edges, anti-banding filtering and the like.
Video sink 270 may consume the reconstructed video. The video sink 270 may be a display device that displays the reconstructed video to an operator. Alternatively, the video sink may be an application executing on the second terminal 250 that consumes the video (as in a gaming application).
The video coder 300 may operate in a pipelined fashion in which the HME 310 first analyzes data from a frame (labeled “frame N” herein) and the BPC 320 then codes that same frame (“frame N”) using the HME's results.
The HME 310 may estimate motion of image content from the content of a frame. Typically, the HME 310 may analyze frame content at two or more levels of resolution to estimate motion. The HME 310 may then output motion vectors representing the motion characteristics observed in the frame content. The motion data may be output to the BPC 320 to aid in prediction operations.
The HME 310 also may perform statistical analysis of the frame N and output data representing those statistics. The statistics also may be output to the BPC 320 to assist in mode selection operations, discussed below. The HME 310 further may determine weighting factors and offset values to be used in weighted prediction. The weighting factors and offset values also may be output to the BPC 320.
The BPC 320 may include: a subtractor 321, a transform unit 322, a quantizer 323, an entropy coder 324, an inverse quantizer 325, an inverse transform unit 326, an intra frame estimation and prediction unit 352, a motion estimation and compensation unit 327 that performs motion prediction, a mode selector 354, a multiplier 328, and an adder 329.
The BPC 320 may operate on an input frame on a pixel-block-by-pixel-block basis. Typically, a frame of content may be parsed into a plurality of pixel blocks, each of which may correspond to a respective spatial area of the frame. The BPC 320 may process each pixel block individually.
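A minimal sketch of this parsing (illustrative only; the 16×16 block size is one of the sizes listed above, and the helper name is hypothetical):

```python
import numpy as np

def parse_into_blocks(frame: np.ndarray, block_size: int = 16):
    """Yield (row, col, pixel_block) tuples covering the whole frame."""
    height, width = frame.shape[:2]
    for y in range(0, height, block_size):
        for x in range(0, width, block_size):
            yield y, x, frame[y:y + block_size, x:x + block_size]

frame = np.zeros((64, 64), dtype=np.uint8)
print(sum(1 for _ in parse_into_blocks(frame)))  # 16 pixel blocks of 16x16
```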
The subtractor 321 may perform a pixel-by-pixel subtraction between pixel values in the source frame and any pixel values that are provided to the subtractor 321 by the motion estimation and compensation units 327-329. The subtractor 321 may output residual values representing results of the subtraction on a pixel-by-pixel basis. In some cases, the motion estimation and compensation units 327-329 may provide no data to the subtractor 321, in which case the subtractor 321 may output the source pixel values without alteration.
The transform unit 322 may apply a transform to a pixel block of input data, which converts the pixel block to an array of transform coefficients. Exemplary transforms may include discrete sine transforms, discrete cosine transforms, and wavelet transforms. The transform unit 322 may output transform coefficients for each pixel block to the quantizer 323.
The quantizer 323 may apply a quantization parameter Qp to the transform coefficients output by the transform unit 322. The quantization parameter Qp may be a single value applied uniformly to each transform value in a pixel block or, alternatively, it may represent an array of values, each value being applied to a respective transform coefficient in the pixel block. The quantizer 323 may output quantized transform coefficients to the entropy coder 324.
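A toy sketch of the quantize/dequantize round trip (illustrative only: real codecs map Qp to a step size per the standard, roughly doubling every 6 Qp in HEVC, whereas this sketch takes the step directly). It shows why a larger step spends fewer bits and why the inverse quantizer 325 cannot recover the original coefficients exactly:

```python
import numpy as np

def quantize(coeffs: np.ndarray, qstep: float) -> np.ndarray:
    """Divide transform coefficients by the quantizer step and round (the lossy step)."""
    return np.round(coeffs / qstep).astype(np.int32)

def dequantize(levels: np.ndarray, qstep: float) -> np.ndarray:
    """Scale the levels back up; the rounding error from quantize() is not recovered."""
    return levels.astype(np.float64) * qstep

coeffs = np.array([100.0, 37.0, -12.0, 3.0])
levels = quantize(coeffs, qstep=10.0)   # larger steps -> smaller levels -> fewer bits
print(dequantize(levels, qstep=10.0))   # [100.  40. -10.   0.] -- not identical to the input
```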
The entropy coder 324, as its name implies, may perform entropy coding of the quantized transform coefficients presented to it. The entropy coder 324 may output a serial data stream representing the quantized transform coefficients. Typical entropy coding schemes include arithmetic coding, Huffman coding and the like. The entropy coded data may be output from the BPC 320 as coded data of the pixel block. Thereafter, it may be merged with other data, such as coded data from other pixel blocks and coded audio data, and be output to a channel (not shown).
The BPC 320 may include a local decoder formed of the inverse quantizer unit 325, inverse transform unit 326, and an adder (not shown) that reconstruct select coded frames, called “reference frames.” Reference frames are frames that are selected as a candidate for prediction of other frames in the video sequence. When frames are selected to serve as reference frames, a decoder (not shown) decodes the coded reference frame and stores it in a local cache for later use. The encoder also includes decoder components so it may decode the coded reference frame data and store it in its own cache. Thus, absent transmission errors, the encoder's reference picture cache 330 and the decoder's reference picture cache (not shown) should store the same data.
The inverse quantizer unit 325 may perform processing operations that invert coding operations performed by the quantizer 323. Thus, the transform coefficients that were divided down by a respective quantization parameter may be scaled by the same quantization parameter. Quantization often is a lossy process, however, and therefore the scaled coefficient values that are output by the inverse quantizer unit 325 oftentimes will not be identical to the coefficient values that were input to the quantizer 323.
The inverse transform unit 326 may invert transformation processes that were applied by the transform unit 322. Again, the inverse transform unit 326 may apply inverses of discrete sine transforms, discrete cosine transforms, or wavelet transforms to match those applied by the transform unit 322. The inverse transform unit 326 may generate pixel values, which approximate the prediction residuals input to the transform unit 322.
The mode selector 354 may perform mode selection operations for the input pixel block. In doing so, the mode selector 354 may select a type of coding to be applied to the pixel block, for example intra-prediction, unidirectional inter-prediction or bidirectional inter-prediction. For either type of inter prediction, the motion estimation and compensation unit 327 may perform a prediction search to identify a portion of a reference picture stored in the reference picture cache 330 to serve as a prediction reference for the input pixel block.
The intra frame estimation and prediction unit 352 may use Intra prediction, which uses pixels in the current frame to generate a prediction. When performing Intra prediction, the intra frame estimation and prediction unit 352 may use only the reconstructed pixels within the same frame and does not use data from the reference picture cache 330. The intra frame estimation and prediction unit 352 and the motion estimation and compensation unit 327 may generate identifiers of the prediction reference by providing motion vectors, Intra prediction modes or other metadata (not shown) for the prediction. The motion vector may be output from the BPC 320 along with other data representing the coded block.
The motion estimation and compensation unit 327 may also determine that the input pixel block belongs to a relatively low complexity portion by consulting statistics computed for previous frames. Based upon such a determination, the motion estimation and compensation unit 327 may further analyze the history, stored in the memory 335, of the pixel blocks corresponding to the reference pixel blocks stored in the reference picture cache 330. Then, the motion estimation and compensation unit 327 may control the various other portions of the BPC 320 to implement compression and coding of the input pixel block of the relatively low complexity portion.
For example, the motion estimation and compensation unit 327 may determine whether the input pixel block of a relatively low complexity portion should be skipped in coding, be allocated a default number of bits for coding, or be allocated additional bits for coding (for example in a P frame or a B frame), based upon the history of coding for that same pixel block. The motion estimation and compensation unit 327 may maintain and update the history of the pixel blocks from the memory 335 in reference to the reference pixel blocks stored in the reference picture cache 330.
“Skipped in coding” may represent several different scenarios of coding for a pixel block in various embodiments. In an embodiment according to the present disclosure, “skipped in coding” for a pixel block may indicate that the pixel block has no residual image information to code (although motion vectors for the pixel block may still be coded).
The memory 335 may store a sequence of frames, such as the sequence of frames in the example discussed below.
The multiplier 328 and adder 329 may apply a weighting factor and offset to the predicted data generated by the motion estimation and compensation unit 327. Specifically, the multiplier 328 may scale the predicted data according to the weighting factor provided by the HME 310. The adder 329 may add an offset value to the output of the multiplier, again, using a value that is provided by the HME. Data output from the adder 329 may be input to the subtractor 321 as prediction data.
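A sketch of the weighted prediction applied by the multiplier 328 and adder 329 (illustrative only; the weight and offset values are arbitrary stand-ins for those supplied by the HME 310):

```python
import numpy as np

def weighted_prediction(pred: np.ndarray, weight: float, offset: float) -> np.ndarray:
    """Scale the prediction (multiplier 328), then add an offset (adder 329)."""
    out = pred.astype(np.float64) * weight + offset
    return np.clip(out, 0, 255).astype(np.uint8)  # keep results in the valid pixel range

reference = np.full((4, 4), 100, dtype=np.uint8)  # e.g., a block from a fading scene
print(weighted_prediction(reference, weight=1.5, offset=5)[0, 0])  # 155
```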
The principles of the present disclosure reduce or eliminate “flashing” effects in the video by analyzing the history of coded blocks using the BPC 320 and by allocating slightly more coding bits, more frequently than conventional methods would, to reduce the accumulation of errors that leads to the “flashing” effect, without significantly increasing the overall number of coding bits. In an embodiment, the BPC 320 may be configured to reduce flashing effects as described herein, for example by performing methods 500, 600, and/or 650. Thus, the principles of the present disclosure reduce visual artifacts without compromising video compression and coding efficiencies. The concepts of the present disclosure apply as well to other encoders, e.g., encoders conforming to other standards or including additional filtering such as deblock filtering, sample adaptive offset, etc.
In an embodiment, the method 500 may compare two frames (box 502). Based on at least a portion of each frame, method 500 may determine whether a region is of relatively low complexity. For example, method 500 may determine that a region is of relatively low complexity if a difference between a first version of the region in a first frame and a second version of the same region in a different frame is below a difference threshold (box 504). If method 500 identifies a relatively low complexity region, the method 500 may then proceed to analyze a history of coding the corresponding blocks in multiple frames (box 506).
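A minimal sketch of boxes 502-506 (illustrative only; the threshold value and function name are hypothetical, and the disclosure does not prescribe a specific difference metric):

```python
import numpy as np

DIFF_THRESHOLD = 2.0  # assumed value; the disclosure does not prescribe a specific one

def is_low_complexity_region(region_a: np.ndarray, region_b: np.ndarray) -> bool:
    """Box 504: mean absolute difference between co-located regions vs. a threshold."""
    diff = np.abs(region_a.astype(np.int16) - region_b.astype(np.int16))
    return float(diff.mean()) < DIFF_THRESHOLD

a = np.full((16, 16), 60, dtype=np.uint8)
b = a.copy()
b[0, 0] += 1                           # a nearly static region across two frames
print(is_low_complexity_region(a, b))  # True -> analyze coding history (box 506)
```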
Method 600 may begin by initializing (box 602). The initializing may include setting or resetting one or more counters used by method 600. For each frame, the method 600 may then determine whether a number of frames processed is equal to a frame threshold (box 604). The frame threshold may determine a frequency of performing particular portions of method 600. The frame threshold may be set based on a frame rate. If the frame rate of the video encoding changes, the frequency of the checking and resetting of counters may be adjusted to ensure that the checking is performed sufficiently frequently in any given time period.
If the method 600 determines that the number of frames processed is not equal to the threshold, the method 600 may then determine, for each pixel block of the frame, whether the block is skip coded (box 614). If coding is skipped, method 600 may advance a corresponding counter (“skip counter”) for the block (box 616). If, however, coding is not skipped, method 600 may proceed to processing the next block in the frame (box 618). The skip counter may track the number of times a block is skip coded.
If the number of frames processed is equal to the frame threshold, method 600 may determine whether a skip counter for a pixel block exceeds a skip threshold (box 606). If the skip counter exceeds the skip threshold, the method may decrease a quantization parameter for the block (box 608). Decreasing a quantization parameter may increase the number of bits used for coding. In an embodiment, if the skip counter does not exceed the skip threshold, the method 600 may either use a default quantization parameter for the block (box 622) or increase the quantization parameter for the block (box 612). Increasing a quantization parameter may decrease the number of bits used for coding. Performing optional box 612 or box 622 for a block may compensate for any increase in bits used for coding of other pixel blocks in a frame. Thus, an overall number of bits consumed (i.e., bitrate) for coding the frame is maintained or reduced. As shown, boxes 606-618 may be performed for each block in a frame. A sketch of this counter logic appears below.
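The following sketch condenses boxes 602-622 (illustrative only; the threshold and QP values are assumptions, and a real encoder would integrate this logic with its rate control):

```python
from dataclasses import dataclass, field

# Assumed parameter values, for illustration only; the disclosure leaves them configurable.
FRAME_THRESHOLD = 3   # check and reset counters every 3 frames (box 604)
SKIP_THRESHOLD = 2    # force more bits if a block was skip coded more often than this
DEFAULT_QP, REDUCED_QP, RAISED_QP = 28, 26, 30

@dataclass
class SkipTracker:
    skip_counters: dict = field(default_factory=dict)  # pixel block index -> skip count
    frames_processed: int = 0

    def record_frame(self, skipped_blocks: set) -> None:
        """Boxes 614/616: advance the skip counter for every skip coded block."""
        for blk in skipped_blocks:
            self.skip_counters[blk] = self.skip_counters.get(blk, 0) + 1
        self.frames_processed += 1

    def qp_for_block(self, blk: int) -> int:
        """Boxes 606-612/622: choose a QP once the frame threshold is reached."""
        if self.frames_processed < FRAME_THRESHOLD:
            return DEFAULT_QP
        if self.skip_counters.get(blk, 0) > SKIP_THRESHOLD:
            return REDUCED_QP  # box 608: more bits, breaking the error accumulation
        return RAISED_QP       # optional box 612 (box 622 would return DEFAULT_QP)

    def reset(self) -> None:
        """Box 602: reset the counters after the periodic check."""
        self.skip_counters.clear()
        self.frames_processed = 0
```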
In an embodiment (not shown), block coding decisions may be delayed by one or more frames so that statistics from preceding frames are not used for an immediately subsequent frame. The number of frames by which to delay block coding may be based on available computational resources and channel speed. For instance, if resources are insufficient, force coding (i.e., increasing bits allocated or non-skip coding) of relatively low complexity portions may be delayed by a few frames.
In box 652, the method 650 may determine whether a quantization parameter (QP) for the block is less than a QP threshold; a QP below the threshold indicates that a relatively high number of encoded bits was spent on the block. If the QP is not less than the QP threshold, the method 650 may reset a bit counter (box 656).
The bit counter may track how often a block is coded with a relatively high number of bits, i.e., with a relatively low QP. If the QP is less than the QP threshold, the method 650 may increment the bit counter (box 654). The method 650 may then proceed to box 658 to determine whether the bit counter is greater than or equal to a change threshold. A bit counter at or above the change threshold may indicate that the pixel block changes relatively frequently from frame to frame, i.e., that it contains a relatively large amount of pixel change. If the bit counter is greater than or equal to the change threshold, the method 650 may de-allocate a number of bits used for coding the block (box 662). However, if the bit counter is less than the change threshold, the method 650 may proceed to the next block (box 664).
In an embodiment (not shown), the method 650 may also determine whether a complexity of the block is below a complexity threshold. This may indicate that the block is of relatively low complexity. If the bit counter exceeds the change threshold and the block is of relatively low complexity, method 650 may deallocate bits for coding the block. If either or both conditions are not true, method 650 may proceed to the next block.
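A sketch of this bit-counter logic, including the optional complexity test above (illustrative only; the threshold values are assumptions):

```python
# Assumed threshold values, for illustration only.
QP_THRESHOLD = 24      # a QP below this means the block was coded with many bits
CHANGE_THRESHOLD = 3   # consecutive high-bit codings before bits are deallocated

class BitTracker:
    """Tracks how often each block is coded with a relatively low QP (method 650)."""

    def __init__(self) -> None:
        self.bit_counters: dict = {}  # pixel block index -> consecutive low-QP codings

    def should_deallocate(self, blk: int, qp: int, low_complexity: bool) -> bool:
        """Returns True when bits should be deallocated for this block (box 662)."""
        if qp < QP_THRESHOLD:
            self.bit_counters[blk] = self.bit_counters.get(blk, 0) + 1  # box 654
        else:
            self.bit_counters[blk] = 0                                  # box 656
        # Boxes 658/662, including the optional complexity test described above.
        return self.bit_counters[blk] >= CHANGE_THRESHOLD and low_complexity
```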
Each frame shown in the following example includes four pixel blocks B0, B1, B2, and B3, for which corresponding skip counters C0, C1, C2, and C3 are maintained; in each frame, a pixel block is either skip coded (S) or not skip coded (NS).
In encoding frame0, pixel blocks B0, B1, and B3 are not skip coded (NS), and pixel block B2 is skip coded (S). Upon or after encoding frame0, the skip counter C2 (corresponding to skip coded pixel block B2) is incremented by 1 (box 616).
In encoding frame1, pixel blocks B0 and B3 are not skip coded (NS), and pixel blocks B1 and B2 are skip coded (S). Upon or after encoding frame1, the skip counters C1 and C2 (corresponding to skip coded pixel blocks B1 and B2) are each incremented by 1 (box 616).
In encoding frame2, pixel blocks B0, B1, and B3 are not skip coded (NS), and pixel block B2 is skip coded (S). Upon or after encoding frame2, the skip counter C2 (corresponding to skip coded pixel block B2) is incremented by 1 (box 616).
Suppose a frame threshold is set to 3. In this scenario, the statistics of the counters are checked and then reset once every 3 frames (i.e., statistics of 3 frames are used to make coding decisions for a 4th frame). However, the frequency of the checking and resetting of the counters may increase or decrease. For example, if the frame rate of the video encoding changes, the frequency of the checking and resetting of the counters may be adjusted to ensure that the checking is done sufficiently frequently in any given time period.
Upon reaching frame3, the number of processed frames matches the frame threshold because frame0, frame1, and frame2 have been processed (box 604).
In this case, before encoding frame3, skip counter C2 has a value of 3, which is greater than the threshold of 2. Thus, pixel block B2 is given a reduced QP value (QP of 26 in frame3). Optionally, in order to reduce (or not increase) the bitrate for frame3, the pixel blocks with skip counter values below the skip threshold (or another threshold) may use a default quantization parameter (box 622) or may have their QP values increased to reduce the bits used for their coding (box 612).
Upon or after encoding frame3, the skip counters (C0-C3) are all reset to 0, and then, based upon the frame3 coding, any skip counters (C0-C3) corresponding to a skip coded (S) pixel block in frame3 may be incremented by 1 (boxes 614 and 616).
This process may be iterated over a whole video sequence, i.e., statistics of the counters from frames 0, 1, and 2 may be used for frame 3, statistics from frames 3, 4, and 5 may be used for frame 6, statistics from frames 6, 7, and 8 may be used for frame 9, etc. As discussed herein, in an embodiment, the block coding decisions may be delayed (by, for example, one frame), such that statistics from frames 0, 1, and 2 are used for frame 4, statistics from frames 4, 5, and 6 are used for frame 8, and statistics from frames 8, 9, and 10 are used for frame 12. If resources such as computational resources or network resources are not sufficiently available, force coding of relatively low complexity portions may be delayed by a few frames.
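The frame0-frame3 walkthrough above can be replayed with a few lines of standalone code (illustrative only; it reuses the assumed skip threshold of 2 from the earlier sketch):

```python
SKIP_THRESHOLD = 2
skip_counters = {"B0": 0, "B1": 0, "B2": 0, "B3": 0}

# Which pixel blocks were skip coded (S) in frame0, frame1, and frame2.
frames = [
    {"B2"},        # frame0
    {"B1", "B2"},  # frame1
    {"B2"},        # frame2
]
for skipped in frames:
    for blk in skipped:
        skip_counters[blk] += 1

# Before coding frame3: C2 == 3 exceeds the skip threshold of 2, so only B2
# is force coded with a reduced QP.
forced = [blk for blk, count in skip_counters.items() if count > SKIP_THRESHOLD]
print(skip_counters)  # {'B0': 0, 'B1': 1, 'B2': 3, 'B3': 0}
print(forced)         # ['B2']
```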
Additionally, a memory may store a second set of counter values that includes a counter value for each pixel block, maintained or updated by the prediction unit. Each value of the second set may represent a running count of how many times the corresponding pixel block has been coded with a number of bits above a threshold value (a “high bit” counter). Each time a pixel block is coded in a frame with a QP value that is lower than some preset amount, the prediction unit may control the memory to increment the high bit counter value for the pixel block. If a pixel block is skip coded, or its QP value is higher than another preset amount such that a lower number of bits is used for the coding, the high bit counter value for the pixel block may be reset to zero.
In an embodiment, the prediction unit may control the various other portions of a block-pipelined coder to deallocate bits for coding of a specific pixel block (for example, by increasing the QP value of the pixel block). Whether a current pixel block is of relatively low complexity may be determined by the pixel block being in a portion of the video sequence with complexity below a complexity threshold. In effect, the prediction unit examines the history of coding of each pixel block and forces higher-QP coding of pixel blocks in high motion portions to reduce the number of bits spent there. The reduction of bits in the coding of high motion portions is not very noticeable during display, because rapid visual change masks it, whereas even small changes in low motion portions are readily noticeable. This allows additional saving of bits in compression and coding, and compensates for the additional bits allocated to force coding the relatively low complexity portions.
Alternatively, the memory may simply store a running history of QP values for each coded pixel block. This would allow the prediction unit to perform more flexible history analysis of each pixel block on the fly, in cases where counter values and preset threshold values would be too inflexible to track variations of coding within the video data. For example, a video stream may switch frame rate depending on lighting conditions; counter values alone may not compensate for the change in frame rate and the corresponding change in error accumulation rate. Thus, a more holistic decision may be made by the prediction unit based upon the history of QP values, determining potential error accumulation over time (instead of over a number of frames) for each pixel block, as well as noticeability for each pixel block. A flashing effect may occur if the error accumulation rate exceeds a threshold (“error threshold”). Noticeability may also depend on content: for example, flashing of a relatively dark pixel block may be more noticeable when the error is in the luminance. A pixel block may be determined to be relatively dark if the luminance of the pixel block is below a luminance threshold.
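A sketch of such time-based QP-history analysis (illustrative only; the window size, coarse-QP cutoff, and error threshold are all assumed values, and a real encoder would derive per-block error estimates from actual coding statistics):

```python
import time
from collections import deque
from typing import Optional

ERROR_THRESHOLD = 0.5  # assumed error-accumulation rate limit (arbitrary units)
COARSE_QP = 30         # assumed cutoff above which a coding is considered coarse

class QpHistory:
    """Stores (timestamp, QP) pairs per pixel block so error accumulation is
    judged over wall-clock time rather than frame count, which survives a
    mid-stream frame-rate switch."""

    def __init__(self, window: int = 90) -> None:
        self.history: dict = {}  # pixel block index -> deque of (timestamp, qp)
        self.window = window

    def record(self, blk: int, qp: int, now: Optional[float] = None) -> None:
        now = time.monotonic() if now is None else now
        self.history.setdefault(blk, deque(maxlen=self.window)).append((now, qp))

    def error_rate(self, blk: int, error_per_coarse_coding: float = 1.0) -> float:
        """Approximate accumulated error per second from coarse (high-QP) codings."""
        entries = self.history.get(blk)
        if not entries or len(entries) < 2:
            return 0.0
        span = entries[-1][0] - entries[0][0]
        coarse = sum(1 for _, qp in entries if qp >= COARSE_QP)
        return (coarse * error_per_coarse_coding) / span if span > 0 else 0.0

    def flashing_likely(self, blk: int) -> bool:
        return self.error_rate(blk) > ERROR_THRESHOLD
```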
Although the foregoing description includes several exemplary embodiments, it is understood that the words that have been used are words of description and illustration, rather than words of limitation. Changes may be made within the purview of the appended claims, as presently stated and as amended, without departing from the scope and spirit of the disclosure in its aspects. Although the disclosure has been described with reference to particular means, materials and embodiments, the disclosure is not intended to be limited to the particulars disclosed; rather the disclosure extends to all functionally equivalent structures, methods, and uses such as are within the scope of the appended claims.
As used in the appended claims, the term “computer-readable medium” may include a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term shall also include any medium that is capable of storing, encoding or carrying a set of instructions for execution by a processor or that cause a computer system to perform any one or more of the embodiments disclosed herein.
The computer-readable medium may comprise a non-transitory computer-readable medium or media and/or comprise a transitory computer-readable medium or media. In a particular non-limiting, exemplary embodiment, the computer-readable medium may include a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories. Further, the computer-readable medium may be a random access memory or other volatile re-writable memory. Additionally, the computer-readable medium may include a magneto-optical or optical medium, such as a disk or tape, or another storage device to capture carrier wave signals such as a signal communicated over a transmission medium. Accordingly, the disclosure is considered to include any computer-readable medium or other equivalents and successor media, in which data or instructions may be stored.
The present specification describes components and functions that may be implemented in particular embodiments which may operate in accordance with one or more particular standards and protocols. However, the disclosure is not limited to such standards and protocols. Such standards periodically may be superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions are considered equivalents thereof.
The illustrations of the embodiments described herein are intended to provide a general understanding of the various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Additionally, the illustrations are merely representational and may not be drawn to scale. Certain proportions within the illustrations may be exaggerated, while other proportions may be minimized. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive.
For example, operation of the disclosed embodiments has been described in the context of servers and terminals that implement video compression, coding, and decoding. These systems can be embodied in electronic devices or integrated circuits, such as application specific integrated circuits, field programmable gate arrays and/or digital signal processors. Alternatively, they can be embodied in computer programs that execute on personal computers, notebook computers, tablets, smartphones or computer servers. Such computer programs typically are stored in physical storage media such as electronic-, magnetic- and/or optically-based storage devices, where they may be read to a processor, under control of an operating system and executed. And, of course, these components may be provided as hybrid systems that distribute functionality across dedicated hardware components and programmed general-purpose processors, as desired.
In addition, in the foregoing Detailed Description, various features may be grouped or described together for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that all such features are required to provide an operable embodiment, nor that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, subject matter may be directed to less than all of the features of any of the disclosed embodiments. Thus, the following claims are incorporated into the Detailed Description, with each claim standing on its own as defining separately claimed subject matter.
Also, where certain claims recite methods, the sequence of recitation of a particular method in a claim does not require that the sequence be essential to an operable claim. Rather, particular method elements or steps could be executed in different orders without departing from the scope or spirit of the invention.
This application claims benefit under 35 U.S.C. §119(e) of U.S. Provisional Application Ser. No. 62/005,228, filed May 30, 2014, which is incorporated herein by reference in its entirety.