Systems and methods for protecting elementary bitstreams incorporating independently encoded tiles

Information

  • Patent Grant
  • Patent Number
    12,010,362
  • Date Filed
    Friday, February 4, 2022
  • Date Issued
    Tuesday, June 11, 2024
Abstract
Systems and methods for partial frame encryption in accordance with embodiments of the invention are disclosed. In one embodiment, the method receives a video bitstream that includes several frames, each frame including several independently encoded compression units within the frame, encrypts a portion of each of several compression units in several frames, and generates an output bitstream that includes the several independently encoded compression units including the encrypted portions of the compression units.
Description
FIELD OF INVENTION

The present invention relates to the field of encryption and decryption of video information. More specifically, the present invention is directed to methods and systems for generating a protected stream of compressed digital video using partial frame encryption.


BACKGROUND

Existing digital video compression techniques are complex processes that rely upon a variety of methods to transform (i.e., “encode”) a unit of uncompressed video data into an encoded form. Such encoding permits fewer bits to be used in representing the content of the original uncompressed video data. The resultant encoded data can be transformed using a reverse process (i.e., “decoding”) to yield a digital video unit of data that is either visually similar or identical to the original data. Modern digital video compression techniques can achieve very high levels of compression.


The Moving Picture Experts Group (MPEG) and the International Organization for Standardization (ISO) have produced various international standards specifying video compression and decompression algorithms. These standards include MPEG-1, MPEG-2, MPEG-4, H.261, H.264, and the newer High-Efficiency Video Coding (HEVC) standard, which has significantly improved compression efficiency relative to its predecessors. In particular, HEVC is able to achieve a 2× compression ratio at the same subjective quality as compared to the prior H.264 standard. To achieve these compression optimizations, the HEVC standard has introduced several new tools designed specifically for parallel processing of video content on multi-core processor architectures. In particular, many smartphone and tablet architectures available on the market now utilize multi-core processors and are thus capable of playing back HEVC content using their multi-core architectures. Furthermore, with the growth of video traffic over networks, the HEVC standard provides certain tools that alleviate some of the bandwidth requirements for distributing high quality content.


Protecting the distribution of digital content from piracy and other types of unlawful distribution is yet another concern for content providers. The term Digital Rights Management (DRM) is utilized to describe access control technologies that are used to control access to and/or copying of digital content. DRM systems typically involve the use of cryptographic information to control access to or protect a piece of content. Content protection is typically achieved using cryptographic information such as (but not limited to) one or more encryption keys to encrypt content.


Various types of encryption schemes currently exist that may be used to protect data. In the digital world, encryption is often implemented by using a collection of bits of some length known as a “key” to execute a predictable transform on a unit of data. This yields another unit of data that cannot be “read” without knowledge of the key used to execute the transform. The process of encryption is only easily reversible to the extent the encrypting key or its counterpart (e.g., the corresponding “private” key in an asymmetric scheme) is available for use in transforming or “decrypting” the encrypted data back into the original form. Video data is often encrypted using a symmetric block cipher conforming to, for example, the Data Encryption Standard (DES) or the Advanced Encryption Standard (AES). The particular techniques used to encrypt digital content may consume further processing resources that need to be taken into account with respect to the encoding and distribution of content across a network.


SUMMARY OF THE INVENTION

Systems and methods for partial frame encryption in accordance with embodiments of the invention are disclosed. In one embodiment, the method receives a video bitstream that includes several frames, each frame including several independently encoded compression units within the frame, encrypts a portion of each of several compression units in several frames, and generates an output bitstream that includes the several independently encoded compression units including the encrypted portions of the compression units.


In a further embodiment of the invention, a compression unit is an independently decodable portion of a particular frame of video such that it does not depend upon another compression unit within the particular frame in order to be decoded.


In still a further embodiment of the invention, the method further parses a metadata header to identify locations of compression units within a frame of video, and encrypts a portion of the video bitstream based on the locations of the compression units.


In yet still a further embodiment of the invention, the method encrypts the portion of each of the plurality of compression units by determining that compression units are enabled based on information within a header associated with the video bitstream.


In yet a further embodiment of the invention still, the method further includes encrypting a portion of each compression unit in a frame of video.


In yet another embodiment of the invention, the portion is selected from the group consisting of i) a first N bytes of the compression unit, ii) a last N bytes of the compression unit, iii) a middle portion of N bytes within the compression unit and iv) a pattern of N bytes within the compression unit.


In still yet another embodiment of the invention, the compression units are tiles within a High-Efficiency Video Coding (HEVC) standard and the video bitstream is encoded based on the HEVC standard.


In yet another embodiment of the invention further still, the method further includes parsing a Picture Parameter Set (PPS) of the HEVC video bitstream to identify the structure of the tiles within the video bitstream, and encrypting portions of the tiles based on the structure.


In another embodiment of the invention again, the method further encrypts the portion of each of the several compression units using a common encryption format (CENC).


Still another embodiment of the invention includes a content encoder that includes: a processor configured to communicate with a memory, where the memory contains an encoder application, where the encoder application directs the processor to: receive a video bitstream including several frames, each frame including several independently encoded compression units within the frame, encrypt a portion of each of several compression units in several frames, and generate an output bitstream including the several independently encoded compression units including the encrypted portions of the compression units.


In another embodiment of the invention, a compression unit is an independently decodable portion of a particular frame of video such that it does not depend upon another compression unit within the particular frame in order to be decoded.


In still another embodiment of the invention, the encoder application further directs the processor to parse a metadata header to identify locations of compression units within a frame of video, and encrypt a portion of the video bitstream based on the locations of the compression units.


In still yet another embodiment of the invention, encrypting the portion of each of the several compression units includes determining that compression units are enabled based on information within a header associated with the video bitstream.


In still yet another embodiment again, the encoder application further directs the processor to encrypt a portion of each compression unit in a frame of video.


In another embodiment of the invention again, the portion is selected from the group consisting of i) a first N bytes of the compression unit, ii) a last N bytes of the compression unit, iii) a middle portion of N bytes within the compression unit and iv) a pattern of N bytes within the compression unit.


In another further embodiment of the invention, the compression units are tiles within a High-Efficiency Video Coding (HEVC) standard and the video bitstream is encoded based on the HEVC standard.


In yet another embodiment of the invention again, the encoder application further directs the processor to parse a Picture Parameter Set (PPS) of the HEVC video bitstream to identify the structure of the tiles within the video bitstream, and encrypt portions of the tiles based on the structure.


In still a further embodiment of the invention again, encrypting the portion of each of the plurality of compression units includes using a common encryption format (CENC) to encrypt the portions.


In another embodiment of the invention, a content decoder includes: a processor configured to communicate with a memory, where the memory contains a decoder application, where the decoder application directs the processor to receive a video bitstream including several frames, each frame including several independently encoded compression units within the frame, decrypt a portion of each of several compression units in several frames, and generate an output decoded video for playback.


In yet another embodiment of the invention, a compression unit is an independently decodable portion of a particular frame of video such that it does not depend upon another compression unit within the particular frame in order to be decoded.


In yet another embodiment of the invention again, the decoder application further directs the processor to parse a metadata header to identify locations of compression units within a frame of video, and decrypt a portion of the video bitstream based on the locations of the compression units.


In still another embodiment of the invention again, decrypting the portion of each of the several compression units includes determining that compression units are enabled based on information within a header associated with the video bitstream.


In still another further embodiment of the invention, the decoder application further directs the processor to decrypt a portion of each compression unit in a frame of video.


In yet another embodiment of the invention again, the portion is selected from the group consisting of i) a first N bytes of the compression unit, ii) a last N bytes of the compression unit, iii) a middle portion of N bytes within the compression unit and iv) a pattern of N bytes within the compression unit.


In a further embodiment of the invention still, the compression units are tiles within a High-Efficiency Video Coding (HEVC) standard and the video bitstream is decoded based on the HEVC standard.


In another embodiment of the invention still, the decoder application further directs the processor to parse a Picture Parameter Set (PPS) of the HEVC video bitstream to identify the structure of the tiles within the video bitstream, and decrypt portions of the tiles based on the structure.


In yet still another embodiment of the invention, decrypting the portion of each of the plurality of compression units includes using a common encryption format (CENC) to decrypt the portions.





BRIEF DESCRIPTION OF FIGURES


FIG. 1 is a system diagram of a video encoding and delivery system in accordance with embodiments of the invention.



FIG. 2A conceptually illustrates a content encoder configured to generate partially encrypted content in accordance with embodiments of the invention.



FIG. 2B conceptually illustrates a content server configured to manage and distribute partially encrypted content in accordance with embodiments of the invention.



FIG. 2C conceptually illustrates a playback device configured to receive and play back partially encrypted content in accordance with embodiments of the invention.



FIG. 3 illustrates a process for partially encrypting content in accordance with embodiments of the invention.



FIG. 4 illustrates a process for partially encrypting content in accordance with embodiments of the invention.



FIG. 5 illustrates a process for decoding and playing back partially encrypted content in accordance with embodiments of the invention.



FIG. 6 illustrates an example of tiles within a frame of video in accordance with embodiments of the invention.



FIG. 7 illustrates an example of a syntax structure for tiles in an HEVC video in accordance with embodiments of the invention.





DETAILED DESCRIPTION

As described above, different techniques may be used to encrypt content, and each may consume different amounts of processing resources in addition to the processing costs associated with the compression technique (e.g., H.264 or HEVC) being utilized to compress or encode the video content. Accordingly, many embodiments of the invention are able to achieve efficiency in generating a protected compressed video sequence that has encrypted frames by encrypting only portions of a frame, rather than the entire frame. These techniques may generally be referred to as “partial frame encryption” since they encrypt only portions of a frame. The one or more portions that are encrypted within a frame of video can be specified within the frame by a start location and length. Often, this information may be provided within a header associated with the frame and used by a decoder to locate the encrypted portions of the frame for decryption.


In many video compression formats such as H.264/MPEG-4 AVC (Advanced Video Coding), there are dependencies within a frame and across multiple frames (due to compression algorithms). Because of the dependencies, when a portion that is encrypted cannot be decrypted and therefore cannot be played back properly, other portions within the frame or in other frames that depend on the encrypted portion also cannot be played back. Thus, in an AVC encoded bitstream, encrypting the beginning x number of bytes of a frame or sequence of units often is sufficient to prevent the decoding of many other parts of the frame or other units.


Many embodiments may utilize the ISO/IEC 23001-7:2012 Common Encryption Scheme (CENC), an industry standard that specifies encryption and key mapping methods that can be utilized by one or more digital rights management (DRM) systems to enable decryption of the same file using different DRM systems. The scheme allows for encrypting multiple discontinuous portions of a frame.
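

As an illustration of the kind of subsample protection CENC permits, the following is a minimal sketch that encrypts several discontinuous byte ranges of a frame with AES-128 in CTR mode, assuming the Python cryptography package is available. The ByteRange type and encrypt_ranges function are illustrative names introduced here, not part of the CENC specification or the patent, and the use of a single continuous counter stream across ranges is an assumption of this sketch.

    # Minimal sketch, assuming AES-CTR and the Python "cryptography" package;
    # ByteRange and encrypt_ranges are illustrative names, not a standard API.
    from dataclasses import dataclass
    from typing import List

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


    @dataclass
    class ByteRange:
        offset: int   # start of the protected run within the frame
        length: int   # number of bytes to encrypt


    def encrypt_ranges(frame: bytes, ranges: List[ByteRange], key: bytes, iv: bytes) -> bytes:
        """Return a copy of the frame with each listed range replaced by ciphertext."""
        encryptor = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
        out = bytearray(frame)
        for r in ranges:
            chunk = bytes(out[r.offset:r.offset + r.length])
            out[r.offset:r.offset + r.length] = encryptor.update(chunk)
        return bytes(out)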


Some video compression formats such as High Efficiency Video Coding (HEVC) allow for portions of a frame to be independently encoded and decoded, without reference to or dependence upon information in other portions, which enables the simultaneous parallel processing of different portions of a video frame. One such feature designed to enable parallel processing is “tiles” in HEVC. In particular, tiles can be used for encoding and decoding portions of a frame simultaneously by different processors, by dividing a picture into rectangular areas (tiles), where each tile consists of a number of coding tree units (CTUs).


Tiles can be contained within a single NAL (Network Abstraction Layer) unit or slice. Similar independently decodable portions of a frame can be referred to across different encoding formats as compression units (e.g., tiles in HEVC). Compression units enable parallelism when decoding the bitstream, as they can be processed independently of each other. In an HEVC encoded stream with tiles enabled, if merely the first x bytes of the video NAL units or frame were encrypted, other portions (tiles) may be fully decodable without having to decrypt the encrypted portion(s) because of their independence from the encrypted portion(s).


Thus, in many embodiments, the security of an encoded bitstream having tiles (or other compression units) can be improved by encrypting at least a portion of multiple tiles within a frame to render more of the frame unrecoverable without decrypting the encrypted portions. In several embodiments, an encoder and/or encoding process can be designed to decode at least part of the bitstream to determine where tiles are located and encrypt portions of tiles. The encoder may acquire information concerning the structure and/or location of tiles in order to encrypt information within multiple tiles and protect more of the bitstream from being decoded without being decrypted. Methods for acquiring this information regarding tiles (or other independently decodable units) can include parsing a NAL unit header to determine the start location of one or more tiles. Systems and methods for partial frame encryption of compression units in accordance with embodiments of the invention are discussed further below.


System Architecture for Encoding and Playing Back Video Using Partial Frame Encryption


As described above, many new compression standards provide new tools that allow for the parallel processing (i.e., encoding and decoding) of video content on multi-core architectures. These tools include, for example, the use of “tiles” in the HEVC standard, among other types of similar, independently decodable compression units that may be utilized to partition a frame of video content into separate decodable units. As will be described throughout this application, compression units (e.g., tiles in HEVC) may generally refer to partitioned and/or independently decodable portions of a single frame of video for a given encoding standard. Furthermore, “tiles” are a type of compression unit that has been introduced within the HEVC standard. Although many of the examples below describe partial frame encryption of tiles based on video compressed according to the HEVC standard, partial frame encryption may be used to encrypt video that has been compressed according to any other standard that uses similar types of compression units to partition a video frame, as appropriate to the requirements of specific applications in accordance with embodiments of the invention.


Furthermore, in order to protect digital content that has been compressed using independently decodable compression units, certain encryption techniques may be used that apply partial frame encryption to one or more portions of the compression units (i.e., tiles) within a frame of video. In particular, in the newer standards (e.g., HEVC) that have been designed to allow for the independent decoding of compression units within a frame of video, it may no longer be sufficient to encrypt only a portion of a total video frame (i.e., video picture) on the assumption, built into older compression standard designs, that other portions would have dependencies requiring the proper decryption of the encrypted portion. As described above, within these older compression standards, because of the dependencies between different portions of a single video frame, when a portion that is encrypted cannot be decrypted and therefore cannot be played back properly, other portions within the frame or in other frames that depend on the encrypted portion also cannot be played back. Thus, in many embodiments, partial frame encryption may be applied to portions of one or more compression units within a frame of video. A system for encoding video content using partial frame encryption in accordance with an embodiment of the invention is illustrated in FIG. 1.


The system 100 includes a content encoder 106 configured to encode source media into encoded video. In several embodiments, the content encoder may encode content using a compression standard that allows for the parallel processing of the content (e.g., the HEVC standard) by generating, for example, compression units (e.g., tiles) within each frame of video that allow for the independent encoding/decoding of portions of a frame without reference to other portions of the frame of video. In particular, in several embodiments, the content encoder may encode the content using the HEVC standard to encode frames of video content. The encoder may also generate one or more independently decodable tiles for each frame of video.


In addition to encoding frames of video based on a compression standard (e.g., HEVC), in many embodiments, the content encoder 106 may further encrypt portions of the video content to protect the content from unlawful distribution. In order to reduce the overhead costs associated with encrypting video content, in many embodiments, the content encoder 106 protects the video content using partial frame encryption whereby only a portion of one or more compression units (i.e., tiles) within a frame of video is encrypted (rather than encrypting the entire frame of video content). In some embodiments, the content encoder encrypts the starting x number of bytes of each tile within a frame of video. Other embodiments may encrypt different portions of a tile, including x bytes located somewhere within the tile's bitstream, the ending x bytes, or any other combination of bytes within the tile as appropriate to the requirements of specific applications. In certain embodiments, the content encoder may encrypt the same portion of every tile in a frame. In other embodiments, the content encoder may encrypt different portions of different tiles. In several embodiments, the content encoder may encrypt portions of only certain tiles (e.g., less than all tiles) within a frame of video. As can readily be appreciated, container files that contain the encrypted video can include a separate DRM track that contains information concerning the location of the encrypted portions of tiles within a frame and/or the cryptographic information utilized to encrypt all or each of the encrypted portions.
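

The portion-selection options mentioned above (the first x bytes, the last x bytes, a run in the middle, or a repeating pattern of bytes) can be expressed as a small helper that maps a tile's byte range to the runs to be encrypted. This is a hedged sketch: the function name, the mode strings, and the "n protected bytes out of every 10n" pattern are illustrative choices, not taken from the patent.

    # Illustrative sketch of choosing which bytes of a tile to protect.
    from typing import List, Tuple


    def select_portion(tile_offset: int, tile_size: int, n: int,
                       mode: str = "first") -> List[Tuple[int, int]]:
        """Return (offset, length) runs to encrypt within one tile."""
        n = min(n, tile_size)
        if mode == "first":        # first n bytes of the tile
            return [(tile_offset, n)]
        if mode == "last":         # last n bytes of the tile
            return [(tile_offset + tile_size - n, n)]
        if mode == "middle":       # n bytes centered within the tile
            return [(tile_offset + (tile_size - n) // 2, n)]
        if mode == "pattern":      # n protected bytes out of every 10*n bytes (arbitrary example)
            stride = 10 * n
            return [(tile_offset + start, min(n, tile_size - start))
                    for start in range(0, tile_size, stride)]
        raise ValueError(f"unknown mode: {mode}")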


In some embodiments, the content encoder 106 stores the content in a Matroska (MKV) container file. The Matroska container is a media container developed as an open standard project by the Matroska non-profit organization of Aussonne, France. The Matroska container is based upon Extensible Binary Meta Language (EBML), which is a binary derivative of the Extensible Markup Language (XML). Decoding of the Matroska container is supported by many consumer electronic (CE) devices. In other embodiments, any of a variety of container file formats can be utilized including (but not limited to) the MP4 container file format specified by the Moving Picture Experts Group as MPEG-4 Part 14 as appropriate to the requirements of specific applications.


In some embodiments, after the content encoder 106 has compressed and/or encrypted a video sequence, the content encoder 106 uploads the encoded video to the content server 102.


In many embodiments, the content server 102 facilitates the distribution of the source media to one or more playback devices 108-114. Content servers 102 in accordance with several embodiments of the invention can be responsible for storing protected content for distribution to playback devices. In many embodiments, the content server receives and processes download requests from the variety of playback devices 108-114 that seek to download the encoded video. In some embodiments, a device may request either to (i) download the entire file, or (ii) receive a streamed video for playback in either a progressive or adaptive streaming mode. When the distribution server receives a download request from a playback device, it can provide the playback device with the encoded video for storage and/or playback.


A downloaded video file may include one or more headers that contain data describing the structure of the compression units (e.g., tiles in HEVC encoded video) within frames of video. The headers may include pointers to the start location of one or more tiles. In some embodiments, the location of tiles within an encoded HEVC video sequence may be specified in a picture parameter set (PPS) that provides information regarding the tile structure within one or more frames of video. In some embodiments, the tiles may be fixed at certain locations within a frame while in other embodiments, the tiles may be at different locations for different frames of video. A decoder on a playback device may use this information to determine the portions of a frame that need to be decrypted in order to play back the video file.
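

For illustration only, assuming the header yields a list of tile start offsets within a frame payload (as the pointers described above would) and that tiles are stored back to back, the offsets can be turned into per-tile byte ranges as in the sketch below; tile_ranges is a hypothetical helper, not an API defined by HEVC or the patent.

    # Sketch: convert tile start pointers into (offset, size) pairs,
    # assuming tiles are stored contiguously within the frame payload.
    from typing import List, Tuple


    def tile_ranges(tile_starts: List[int], frame_size: int) -> List[Tuple[int, int]]:
        ranges = []
        for i, start in enumerate(tile_starts):
            end = tile_starts[i + 1] if i + 1 < len(tile_starts) else frame_size
            ranges.append((start, end - start))
        return ranges


    # Example: tile_ranges([0, 4096, 9000], 15000) -> [(0, 4096), (4096, 4904), (9000, 6000)]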


In some embodiments, the content server 102 receives stream requests from a variety of playback devices and subsequently streams the encoded video to the playback devices for progressive playback and/or as part of an adaptive bitrate streaming system. In several embodiments, the variety of playback devices can use HTTP or another appropriate stateless protocol to request streams via a network 104 such as the internet. In several embodiments, a variety of playback devices can use RTSP whereby the distribution server records the state of each playback device and determines the video to stream based upon instructions received from the playback devices and stored data describing the state of the playback device.


In several embodiments, the digital rights management (DRM) server 116 facilitates authorization of and access to the source media, including managing the keys required to encrypt/decrypt the source media.


DRM servers 116 in accordance with certain embodiments of the invention can be responsible for storing protected streams and/or files of content for distribution (e.g. streaming and/or downloading) to playback devices. DRM servers can also store common cryptographic information utilized to protect the content. In several embodiments, the common cryptographic information is identified utilizing an identifier associated with the common cryptographic information and a piece of content.


In the illustrated embodiment, playback devices include personal computers 108-110, and mobile phones 112-114. In other embodiments, playback devices can include consumer electronics devices such as DVD players, Blu-ray players, televisions, set top boxes, video game consoles, tablets, and other devices that are capable of connecting to a server via HTTP and playing back encoded video.


In the illustrated embodiment, the content encoder, content server and DRM server are server applications configured to execute on server computer hardware. In other embodiments, the content encoder, content server and DRM server can be any processing device including a processor and having sufficient resources to perform the encryption, distribution, and digital rights management of source media including (but not limited to) video, audio, and/or subtitles. Although a specific architecture is shown in FIG. 1, any of a variety of architectures can be utilized that enable playback devices to request encoded video with partial frame encryption as appropriate to the requirements of specific applications in accordance with embodiments of the invention.


The basic architecture of a content encoder 202 in accordance with an embodiment of the invention is illustrated in FIG. 2A. The content encoder 202 includes a processor 204 in communication with non-volatile memory 208, volatile memory 206, and a network interface 214. In the illustrated embodiment, the non-volatile memory includes a content encoder application 210 that configures the processor to encode content 212. In several embodiments, the content encoder application 210 encrypts content using partial frame encryption such that only portions of one or more compression units (e.g., tiles) are encrypted within a frame of video, rather than the entire frame, to reduce the overhead associated with the encryption of the compressed video.


In several embodiments, the network interface 214 may be in communication with the processor 204, the volatile memory 206 and/or the non-volatile memory 208. Although a specific content encoder architecture is illustrated in FIG. 2A, any of a variety of architectures including architectures where the content encoder application is located on disk or some other form of storage and is loaded into volatile memory at runtime can be utilized to implement content encoders in accordance with embodiments of the invention.


The basic architecture of a content server 222 in accordance with an embodiment of the invention is illustrated in FIG. 2B. The content server 222 includes a processor 224 in communication with non-volatile memory 228, volatile memory 226, and a network interface 234. In the illustrated embodiment, the non-volatile memory includes a content distribution application 230 that configures the processor to distribute content 232. In several embodiments, the network interface 234 may be in communication with the processor 224, the volatile memory 226 and/or the non-volatile memory 228. Although a specific content server architecture is illustrated in FIG. 2B, any of a variety of architectures including architectures where the content distribution application is located on disk or some other form of storage and is loaded into volatile memory at runtime can be utilized to implement content servers in accordance with embodiments of the invention.


The basic architecture of a playback device in accordance with an embodiment of the invention is illustrated in FIG. 2C. The playback device 252 includes a processor 254 in communication with non-volatile memory 258, volatile memory 256, and a network interface 264. In the illustrated embodiment, the non-volatile memory includes a decoder application 260 that configures the processor to decode content 262. In some embodiments, the decoder application 260 uses information provided within a video container file and/or video stream to identify the location of compression units within a frame of video and decrypts only certain portions of the compression units in order to decode the video.


In several embodiments, the network interface 264 may be in communication with the processor 254, the volatile memory 256, and/or the non-volatile memory 258. Although a specific playback device architecture is illustrated in FIG. 2C, any of a variety of architectures including architectures where the decoder application is located on disk or some other form of storage and is loaded into volatile memory at runtime can be utilized to implement playback devices in accordance with embodiments of the invention.


Systems and Methods for Partial Frame Encryption


As described above, some video compression formats (e.g., HEVC) allow for portions of a frame (e.g., compression units or tiles) to be independently encoded and decoded, without reference to or dependence upon information in other portions of the frame (or other frames). These independently decodable portions of a frame can be referred to across different encoding formats as compression units. Thus, during encryption of a stream with independent compression units, if merely the first x bytes of the frame were encrypted, other portions (compression units or tiles) may be fully decodable without having to decrypt the encrypted portions of the compression units because of their independence from the encrypted compression units. Thus, the security of an encoded bitstream having tiles (or other compression units) can be improved by encrypting at least a portion of multiple tiles within a frame to render more of the frame unrecoverable without decrypting the encrypted portions. A process for partial frame encryption of compression units of a video bitstream in accordance with an embodiment of the invention is illustrated in FIG. 3.


The process receives (at 302) video data. In some embodiments, the process may download video data from one or more content distributors. In other embodiments, the process may stream video data during video playback.


The process determines (at 304) locations of a plurality of compression units within the video data. The locations may be determined based on information provided by one or more header(s) associated with a frame of video. In some embodiments, the header may provide information regarding the start location of each compression unit within the frame. In some embodiments, the location of each compression unit may be fixed within each frame of video and thus may not need to be identified by a header. For example, an encoder may be pre-programmed with information regarding the structure of a video sequence.


The process determines (at 306) a portion of each compression unit within a frame of video to encrypt. In some embodiments, the process determines that a fixed x bytes of each compression unit should be encrypted. In several embodiments, the process determines different portions of different compression units based on characteristics of the compression units. In other embodiments, the process may encrypt the middle or last x number of bytes in one or more compression units for a frame of video. In certain embodiments, the process may not encrypt certain frames of video while only encrypting portions of other frames of video. As can readily be appreciated, the specific portions of particular frames that are encrypted and the manner of encryption are typically dependent upon the requirements of the application.


The process encrypts (at 308) the portions of the compression units. In some embodiments, the process encrypts the portions using a standard DES and/or AES cipher. Other embodiments may use other encryption mechanisms as appropriate to the requirements of specific applications.


The process generates (at 310) an output bitstream containing compression units with portions that have been encrypted. The process then ends.
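

A condensed, self-contained sketch of this flow for the simple case of protecting the first n bytes of each compression unit is shown below; it assumes AES-CTR via the Python cryptography package, and the tile_starts argument stands in for whatever location information step 304 derives.

    # Hedged sketch of the FIG. 3 flow ("first n bytes of each unit" case only).
    from typing import List

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


    def partially_encrypt_frame(frame: bytes, tile_starts: List[int],
                                key: bytes, iv: bytes, n: int = 32) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
        out = bytearray(frame)
        for i, start in enumerate(tile_starts):                    # (304) unit locations
            end = tile_starts[i + 1] if i + 1 < len(tile_starts) else len(frame)
            length = min(n, end - start)                           # (306) portion to protect
            chunk = bytes(out[start:start + length])
            out[start:start + length] = enc.update(chunk)          # (308) encrypt the portion
        return bytes(out)                                          # (310) output with encrypted runs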


Although a specific process for encrypting portions of compression units is described in FIG. 3, any of a variety of processes may be utilized to encrypt portions of compression units as appropriate to the requirements of specific applications in accordance with embodiments of the invention.


Overview of HEVC Standard


As described above, the HEVC video compression standard includes several new tools designed for playback of video content using multi-core architectures that support parallel processing. The tools include Wavefront Parallel Processing (WPP) and tiles, in addition to slice structures. When WPP and/or tiles are used, the video bitstream corresponding to one picture may be packetized into independently decodable subsets of bitstreams. In particular, HEVC includes independently decodable tiles that partition a frame of video into rectangular regions of certain sizes. An example of tiles within a frame of video in accordance with embodiments of the invention is illustrated in FIG. 6. In particular, FIG. 6 is a diagram illustrating an example of evenly partitioning a frame in the horizontal and the vertical dimensions into nine tiles, from tile 1 at the top left corner to tile 9 at the bottom right corner. Each tile includes a number of coding tree units (CTUs).


Tile related parameters may be signaled in a Picture Parameter Set (PPS) in HEVC. Within a video sequence, different pictures may be allowed to use different PPSs. The tile parameters may change from picture to picture in the same video sequence. In most video applications, the number of tiles and the locations of the tiles are likely to remain the same within a video sequence (e.g., a series of pictures); however, situations may arise where not only the configuration of tiles but also the grouping of tiles may be allowed to change from picture to picture in the same video sequence.



FIG. 7 illustrates an example of a syntax structure for tiles in a Picture Parameter Set (PPS) in an HEVC video. If the tiles_enabled_flag is set, then the number of tiles in each dimension may be signaled. If the tiles are uniformly sized (e.g., if uniform_spacing_flag is 1), then no additional information may be signaled; otherwise, the width and height of the tiles may be signaled. For example, as shown in FIG. 7, num_tile_columns_minus1 and num_tile_rows_minus1 may be set to 2 and uniform_spacing_flag may be set to 1.
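

When uniform_spacing_flag is set, no per-tile dimensions are carried in the PPS; the decoder derives the column widths and row heights (in coding tree blocks) from the two "minus1" counts and the picture size. The sketch below mirrors that uniform-spacing derivation as understood here; the function name and the worked example are illustrative rather than part of the HEVC specification text.

    # Sketch of uniform tile spacing: sizes (in CTBs) derived from the picture
    # size and the signaled tile count, with no per-tile sizes in the bitstream.
    from typing import List


    def uniform_tile_sizes(pic_size_in_ctbs: int, num_tiles_minus1: int) -> List[int]:
        n = num_tiles_minus1 + 1
        return [((i + 1) * pic_size_in_ctbs) // n - (i * pic_size_in_ctbs) // n
                for i in range(n)]


    # Example: a picture 10 CTBs wide split into 3 tile columns:
    # uniform_tile_sizes(10, 2) -> [3, 3, 4]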


An encoder may change how the tiles are partitioned from frame to frame by signaling a new PPS with new tile partition parameters. In many embodiments, tiles need not remain equally sized compared to each other, or the same size compared to the same tile at an earlier instance. In particular, the encoder may signal a new PPS with new tile partition parameters that would apply to a new set of one or more frames.


Partial Frame Encryption in HEVC


As described above, the HEVC standard introduces certain tools that support high level parallel processing. In particular, HEVC includes tiles, which allow a frame to be split into rectangular regions, which can then be independently encoded and decoded. A frame may be split uniformly or non-uniformly into tiles. An entry point for each tile may be specified in a slice header. In order to protect a video file encoded using the HEVC standard with partial frame encryption, many embodiments of the invention may partially encrypt multiple tiles. A process for partial encryption of HEVC tiles in accordance with an embodiment of the invention is illustrated in FIG. 4.


The process determines (at 402) if tiles are enabled. In many embodiments, when tiles are enabled, a bitstream may contain entry point offsets that indicate the start position of each picture partition, which is necessary for each core to immediately access its partition.


The process determines (at 404) the structure of NAL units within a frame and/or bitstream.


The process determines (at 406) the structure of tiles within a NAL unit. In some embodiments, the process parses a NAL header to determine the start locations of each tile within a frame of video. In some embodiments, HEVC tiles may partition a picture into rectangular regions of certain sizes. The parameter structure for tiles may be specified in a Picture Parameter Set (PPS), video usability information (VUI), and/or a supplemental enhancement information (SEI) message in HEVC. An example of a PPS in HEVC is illustrated in FIG. 7. If the tiles_enabled_flag is set, then the number of tiles in each dimension may be signaled. In some embodiments, if the tiles are uniformly sized (e.g., if uniform_spacing_flag is 1), then no additional information may be signaled; otherwise, the PPS may also signal the width and height of the tiles.


The process selects (at 408) a number of NAL units. In some embodiments, the process may select all of the NAL units. In certain embodiments, the process may select one or more NAL units.


The process selects (at 410) a number of tiles within each of the selected NAL units. In some embodiments, an encoder may change how the tiles are partitioned from picture to picture by signaling a new PPS with new tile partition parameters. FIG. 7 illustrates an example of signaling tiles in a PPS. In some embodiments, the tiles may be different sizes compared to each other, or different sizes compared to the same tile at an earlier instance. In some embodiments, the encoder may signal a new PPS with new tile partition parameters for each new picture, or when the tile partitions change from a previous picture.


The process encrypts at least a portion of the selected tiles. In some embodiments, the process may encrypt the first x number of bytes, ending x bytes, or some x number of bytes that lie within a certain portion of the bitstream of the tile. In several embodiments, the process may encrypt a number of blocks within the tile. Other embodiments may encrypt other portions of a tile as appropriate to the requirements of specific applications. In many embodiments, the process encrypts the portions of tiles using a common encryption format (CENC) that uses a common specification regarding how to encrypt bitstreams. The CENC specifies industry standard encryption and key mapping methods that can be used by DRM systems to enable decryption of files. The scheme operates by defining a common format for the encryption related metadata necessary to decrypt protected streams. The scheme leaves the details of rights mappings, key acquisition and storage, DRM compliance rules, among various other considerations, up to the DRM system supporting the CENC scheme. Furthermore, in many embodiments, encryption information may be stored within the MKV containers.
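

One way to record which bytes were protected, loosely following the CENC convention of alternating clear and protected byte counts per sample, is sketched below for the "first n bytes of each selected tile" case. The subsample_map name and the tuple layout are illustrative; real CENC sample auxiliary data is carried in container-specific structures rather than Python tuples.

    # Sketch: express "first n bytes of each selected tile encrypted" as
    # alternating (clear_bytes, protected_bytes) pairs, CENC-style.
    from typing import List, Tuple


    def subsample_map(tiles: List[Tuple[int, int]], total_size: int,
                      n: int) -> List[Tuple[int, int]]:
        pairs = []
        cursor = 0
        for offset, size in tiles:                 # tiles as (offset, size) within the NAL unit
            protected = min(n, size)
            pairs.append((offset - cursor, protected))
            cursor = offset + protected
        if cursor < total_size:
            pairs.append((total_size - cursor, 0)) # trailing clear run
        return pairs


    # Example: tiles at offsets 100 and 600 (sizes 500 and 400) in a 1000-byte unit, n = 32:
    # subsample_map([(100, 500), (600, 400)], 1000, 32) -> [(100, 32), (468, 32), (368, 0)]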


The process then ends. Although a specific process for encrypting a portion of tiles in HEVC video content is described in FIG. 4, any of a variety of processes may be utilized to encrypt portions of tiles as appropriate to the requirements of specific applications in accordance with embodiments of the invention.


Decoding Partially Encrypted Video


A process for decoding partially encrypted video in accordance with an embodiment of the invention is illustrated in FIG. 5.


The process receives (at 502) encrypted video data. In some embodiments, the process may download, stream, and/or stream to download video content from a content provider. In other embodiments, the video data may be stored on a disk or obtained by any other mechanism as appropriate to the requirements of specific applications.


The process determines (at 504) the locations of a plurality of compression units (e.g., tiles in HEVC) within the video data. In some embodiments, the locations of the tiles may be fixed within one or more frames of video. In other embodiments, the locations of tiles may change between frames, or sets of frames. The location of tiles may be determined based on information contained within a PPS corresponding to a frame. In particular, the process may parse the PPS to identify the particular bytes within a tile that have been encrypted.


The process determines (at 506) if compression units are encrypted and decrypts the encrypted compression units. In some embodiments, the process may obtain a decryption key for decrypting the encrypted content. The decryption key may be obtained based on authorizations received from a DRM service associated with the content.


The process decodes (at 508) the compression units. In many embodiments, the process decodes the content based on the particular compression standard used to encode the video (e.g. HEVC video).


The process generates (at 510) output decoded video for playback. The process then ends.
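

The playback side mirrors the encryption sketches above: the ranges identified by the metadata are decrypted with the same AES-CTR keystream and the reassembled frame is handed to the decoder. In the sketch below, decode_frame is a placeholder for the platform's HEVC decoder, not a real API, and the Python cryptography package is again assumed.

    # Hedged sketch of the FIG. 5 flow: decrypt the listed ranges, then decode.
    from typing import List, Tuple

    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


    def decode_frame(clear_frame: bytes):
        """Placeholder for the platform's HEVC decoder."""
        raise NotImplementedError


    def decrypt_and_decode(frame: bytes, encrypted_ranges: List[Tuple[int, int]],
                           key: bytes, iv: bytes):
        dec = Cipher(algorithms.AES(key), modes.CTR(iv)).decryptor()
        out = bytearray(frame)
        for offset, length in encrypted_ranges:              # (504/506) locate and decrypt portions
            chunk = bytes(out[offset:offset + length])
            out[offset:offset + length] = dec.update(chunk)
        return decode_frame(bytes(out))                      # (508) hand the clear frame to the decoder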


Although a specific process for decrypting portions of compression units in video content is described in FIG. 5, any of a variety of processes may be utilized to decrypt portions of compression units in video content as appropriate to the requirements of specific applications in accordance with embodiments of the invention.


Although the present invention has been described in certain specific aspects, many additional modifications and variations would be apparent to those skilled in the art. It is therefore to be understood that the present invention may be practiced otherwise than as specifically described. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive.


Furthermore, the foregoing discussion discloses and describes merely exemplary embodiments of the present invention. One skilled in the art will readily recognize from such discussion and from the accompanying drawings that various changes, modifications and variations may be made therein without departing from the spirit and scope of the invention. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims
  • 1. A non-transitory machine readable medium containing processor instructions, where execution of the instructions by a set of one or more processors causes the set of processors to perform a process comprising: receiving at least a portion of a container file that contains encrypted video comprising a plurality of encoded frames of video and encryption-related metadata, where: each of the plurality of encoded frames of video comprises a plurality of tiles that partition the encoded frame of video into rectangular regions and are independently encoded; and the plurality of encoded frames of video comprises a partially encrypted frame of video that is encrypted using partial frame encryption such that each of the plurality of tiles within the partially encrypted frame of video includes at least one portion that is protected using encryption and at least one portion that is not encrypted; locating encryption-related metadata from the container file, where the located encryption-related metadata identifies portions of the partially encrypted frame for decryption prior to decoding the plurality of tiles within the partially encrypted frame; decrypting the portions of the partially encrypted frame identified by the located encryption-related metadata to obtain a decrypted frame; and decoding the decrypted frame using information from a header associated with the frame to determine the locations of the plurality of tiles.
  • 2. The non-transitory machine readable medium of claim 1, wherein a tile is an independently decodable portion of a particular frame of video and does not depend upon another compression unit within the particular frame in order to be decoded.
  • 3. The non-transitory machine readable medium of claim 1, further comprising parsing the header associated with the frame to identify a structure of the tiles within the frame.
  • 4. The non-transitory machine readable medium of claim 1, wherein the header associated with the frame comprises pointers, where each pointer is a pointer to a start location of one of the plurality of tiles.
  • 5. The non-transitory machine readable medium of claim 1, wherein the header associated with the frame comprises offsets indicating a start location of the encrypted portion of one of the plurality of tiles within the frame.
  • 6. The non-transitory machine readable medium of claim 1, wherein the plurality of tiles is fixed at locations within the partially encrypted frame determined using the information from the header.
  • 7. The non-transitory machine readable medium of claim 1, wherein the plurality of tiles is located in different locations for different partially encrypted frames.
  • 8. The non-transitory machine readable medium of claim 1, wherein the encryption-related metadata are contained in a separate DRM track at least a portion of which is located within the at least one portion of the container file.
  • 9. The non-transitory machine readable medium of claim 1, wherein decrypting the portions of the partially encrypted frame comprises decrypting each of the at least one portion that is protected using encryption within each of the plurality of tiles within the partially encrypted frame of video using an AES cipher.
  • 10. The non-transitory machine readable medium of claim 1, wherein the plurality of encoded frames of video form part of a video bitstream further comprising at least one unit containing a parameter set and identifies the locations of the plurality of tiles within the encoded frames of video.
  • 11. The non-transitory machine readable medium of claim 10, wherein parameters related to each tile in the plurality of tiles are signaled in the parameter set.
  • 12. A playback device comprising: a memory; and a processor configured to communicate with the memory, where the memory contains a decoder application; wherein the decoder application directs the processor to: receive at least a portion of a container file that contains encrypted video comprising a plurality of encoded frames of video and encryption-related metadata, where: each of the plurality of encoded frames of video comprises a plurality of tiles that partition the encoded frame of video into rectangular regions and are independently encoded; and the plurality of encoded frames of video comprises a partially encrypted frame of video that is encrypted using partial frame encryption such that each of the plurality of tiles within the partially encrypted frame of video includes at least one portion that is protected using encryption and at least one portion that is not encrypted; locate encryption-related metadata from the container file, where the located encryption-related metadata identifies portions of the partially encrypted frame for decryption prior to decoding the plurality of tiles within the partially encrypted frame; decrypt the portions of the partially encrypted frame identified by the located encryption-related metadata to obtain a decrypted frame; decode the decrypted frame using information from a header associated with the frame to determine the locations of the plurality of tiles; and display the decoded frame.
  • 13. The playback device of claim 12, wherein a tile is an independently decodable portion of a particular frame of video and does not depend upon another compression unit within the particular frame in order to be decoded.
  • 14. The playback device of claim 12, further comprising parsing the header associated with the frame to identify a structure of the tiles within the frame.
  • 15. The playback device of claim 12, wherein the header associated with the frame comprises pointers, where each pointer is a pointer to a start location of one of the plurality of tiles.
  • 16. The playback device of claim 12, wherein the header associated with the frame comprises offsets indicating a start location of the encrypted portion of one of the plurality of tiles within the frame.
  • 17. The playback device of claim 12, wherein the plurality of tiles is fixed at certain locations within the partially encrypted frame.
  • 18. The playback device of claim 12, wherein the plurality of tiles is located in different locations for different partially encrypted frames.
  • 19. The playback device of claim 12, wherein the encryption-related metadata are contained in a separate DRM track within the at least one portion of the container file.
  • 20. The playback device of claim 12, wherein decrypting the portions of the partially encrypted frame comprises decrypting the encrypted portion using an AES cipher.
  • 21. The playback device of claim 12, wherein the plurality of encoded frames of video form part of a video bitstream further comprising at least one unit containing a parameter set and identifies the locations of the plurality of tiles within the encoded frames of video.
  • 22. The playback device of claim 21, wherein parameters related to each tile in the plurality of tiles are signaled in the parameter set.
  • 23. A playback device comprising: a memory; and a processor configured to communicate with a memory, where the memory contains a video player application; wherein the video player application directs the processor to: receive at least a portion of a container file that contains encrypted video comprising a plurality of encoded frames of video and encryption-related metadata, where: each of the plurality of encoded frames of video comprises a plurality of tiles that partition the encoded frame of video into rectangular regions and are independently encoded; and the plurality of encoded frames of video comprises a partially encrypted frame of video that is encrypted using partial frame encryption such that each of the plurality of tiles within the partially encrypted frame of video includes at least one portion that is protected using encryption and at least one portion that is not encrypted, where the locations of the plurality of tiles are determined using information from a header associated with the frame; locate encryption-related metadata from the container file, where the located encryption-related metadata are contained in a separate DRM track at least a portion of which is located within the at least one portion of the container file, and identifies portions of the partially encrypted frame for decryption prior to decoding the plurality of tiles within the partially encrypted frame; decrypt the portions of the partially encrypted frame identified by the located encryption-related metadata to obtain a decrypted frame, where decrypting the portions of the partially encrypted frame comprises decrypting each of the at least one portion that is protected using encryption within each of the plurality of tiles within the partially encrypted frame of video using an AES cipher; decode the decrypted frame using information from the header associated with the frame; and display the decoded frame.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 16/725,795 filed on Dec. 23, 2019, which is a continuation of U.S. application Ser. No. 15/693,016 filed on Aug. 31, 2017 and issued as U.S. Pat. No. 10,542,303 on Jan. 21, 2020, which is a continuation of U.S. patent application Ser. No. 14/821,454 filed on Aug. 7, 2015 and issued as U.S. Pat. No. 9,762,937 on Sep. 12, 2017, which claims the benefit of U.S. Provisional Patent Application No. 62/034,714 filed on Aug. 7, 2014, the disclosures of which are incorporated herein by reference in their entireties for all purposes.

US Referenced Citations (251)
Number Name Date Kind
4535355 Arn et al. Aug 1985 A
5185794 Thompson et al. Feb 1993 A
5267312 Thompson et al. Nov 1993 A
5285497 Thatcher, Jr. Feb 1994 A
5400401 Wasilewski et al. Mar 1995 A
5406627 Thompson et al. Apr 1995 A
5477263 Ocallaghan et al. Dec 1995 A
5546461 Ibaraki et al. Aug 1996 A
5574785 Ueno et al. Nov 1996 A
5594492 Ocallaghan et al. Jan 1997 A
5600721 Kitazato Feb 1997 A
5621794 Matsuda et al. Apr 1997 A
5642338 Fukushima et al. Jun 1997 A
5761302 Park Jun 1998 A
5778102 Sandford, II et al. Jul 1998 A
5805700 Nardone et al. Sep 1998 A
5813010 Kurano et al. Sep 1998 A
5854873 Mori et al. Dec 1998 A
5887192 Nishio Mar 1999 A
5907658 Murase et al. May 1999 A
5923869 Kashiwagi et al. Jul 1999 A
5963909 Warren et al. Oct 1999 A
5978916 Randall Nov 1999 A
6002834 Hirabayashi et al. Dec 1999 A
6005940 Kulinets Dec 1999 A
6009237 Hirabayashi et al. Dec 1999 A
6016381 Taira et al. Jan 2000 A
6028932 Park Feb 2000 A
6057832 Lev et al. May 2000 A
6065050 DeMoney May 2000 A
6167136 Chou Dec 2000 A
6236727 Ciacelli et al. May 2001 B1
6266483 Okada et al. Jul 2001 B1
6282320 Hasegawa et al. Aug 2001 B1
6285774 Schumann et al. Sep 2001 B1
6320905 Konstantinides Nov 2001 B1
6351538 Uz Feb 2002 B1
6373803 Ando et al. Apr 2002 B2
6415031 Colligan et al. Jul 2002 B1
6445877 Okada et al. Sep 2002 B1
6449719 Baker Sep 2002 B1
6453115 Boyle Sep 2002 B1
6453116 Ando et al. Sep 2002 B1
6504873 Vehvilaeinen Jan 2003 B1
6505266 Gu Jan 2003 B1
6512883 Shim et al. Jan 2003 B2
6594699 Sahai et al. Jul 2003 B1
6654933 Abbott et al. Nov 2003 B1
6671408 Kaku Dec 2003 B1
6690838 Zhou Feb 2004 B2
6711683 Laczko, Sr. et al. Mar 2004 B1
6724944 Kalevo et al. Apr 2004 B1
6725372 Lewis et al. Apr 2004 B1
6751623 Basso et al. Jun 2004 B1
6813437 Ando et al. Nov 2004 B2
6871006 Oguz et al. Mar 2005 B1
6876746 Pham Apr 2005 B2
6912513 Candelore Jun 2005 B1
6931531 Takahashi Aug 2005 B1
6957350 Demos Oct 2005 B1
6965993 Baker Nov 2005 B2
6970564 Kubota et al. Nov 2005 B1
6983079 Kim Jan 2006 B2
6996129 Krause et al. Feb 2006 B2
7006757 Ando et al. Feb 2006 B2
7007170 Morten Feb 2006 B2
7020284 Boykin et al. Mar 2006 B2
7020287 Unger Mar 2006 B2
7043473 Rassool et al. May 2006 B1
7058815 Morin Jun 2006 B2
7093277 Perlman Aug 2006 B2
7120252 Jones et al. Oct 2006 B1
7136487 Schon et al. Nov 2006 B1
7150045 Koelle et al. Dec 2006 B2
7151832 Fetkovich et al. Dec 2006 B1
7151833 Candelore et al. Dec 2006 B2
7165175 Kollmyer et al. Jan 2007 B1
7188183 Paul et al. Mar 2007 B1
7206940 Evans et al. Apr 2007 B2
7212726 Zetts May 2007 B2
7242772 Tehranchi Jul 2007 B1
7274861 Yahata et al. Sep 2007 B2
7295673 Grab et al. Nov 2007 B2
7299292 Morten et al. Nov 2007 B2
7328345 Morten et al. Feb 2008 B2
7349886 Morten et al. Mar 2008 B2
7352956 Winter et al. Apr 2008 B1
7356143 Morten Apr 2008 B2
7376831 Kollmyer et al. May 2008 B2
7380117 Baker May 2008 B2
7382879 Miller Jun 2008 B1
7397853 Kwon et al. Jul 2008 B2
7400679 Kwon et al. Jul 2008 B2
7406174 Palmer Jul 2008 B2
7418132 Hoshuyama Aug 2008 B2
7457415 Reitmeier et al. Nov 2008 B2
7499930 Naka et al. Mar 2009 B2
7546641 Robert et al. Jun 2009 B2
7594271 Zhuk et al. Sep 2009 B2
7639921 Seo et al. Dec 2009 B2
7640435 Morten Dec 2009 B2
7711052 Hannuksela et al. May 2010 B2
7817608 Rassool et al. Oct 2010 B2
7853980 Pedlow, Jr. et al. Dec 2010 B2
7864186 Robotham et al. Jan 2011 B2
7945143 Yahata et al. May 2011 B2
8000344 Frick Aug 2011 B1
8131875 Chen Mar 2012 B1
8169916 Pai et al. May 2012 B1
8243924 Chen et al. Aug 2012 B2
8286213 Seo Oct 2012 B2
8312079 Newsome et al. Nov 2012 B2
8369421 Kadono et al. Feb 2013 B2
8649669 Braness et al. Feb 2014 B2
8683066 Hurst et al. Mar 2014 B2
8782268 Pyle et al. Jul 2014 B2
8819116 Tomay et al. Aug 2014 B1
8849950 Stockhammer et al. Sep 2014 B2
9038116 Knox et al. May 2015 B1
9762937 Braness et al. Sep 2017 B2
10542303 Braness et al. Jan 2020 B2
11245938 Braness et al. Feb 2022 B2
20010021276 Zhou Sep 2001 A1
20010052077 Fung et al. Dec 2001 A1
20010052127 Seo et al. Dec 2001 A1
20020018565 Luttrell et al. Feb 2002 A1
20020048450 Zetts Apr 2002 A1
20020067432 Kondo et al. Jun 2002 A1
20020135607 Kato et al. Sep 2002 A1
20020138736 Morin Sep 2002 A1
20020141503 Kobayashi et al. Oct 2002 A1
20020154779 Asano et al. Oct 2002 A1
20020164024 Arakawa et al. Nov 2002 A1
20020169971 Asano et al. Nov 2002 A1
20020184637 Perlman Dec 2002 A1
20030002577 Pinder Jan 2003 A1
20030044080 Frishman et al. Mar 2003 A1
20030053541 Sun et al. Mar 2003 A1
20030063675 Kang et al. Apr 2003 A1
20030063745 Boykin et al. Apr 2003 A1
20030077071 Lin et al. Apr 2003 A1
20030133571 Chaubert Jul 2003 A1
20030135742 Evans Jul 2003 A1
20030142594 Tsumagari et al. Jul 2003 A1
20030159139 Candelore et al. Aug 2003 A1
20030206717 Yogeshwar et al. Nov 2003 A1
20040001594 Krishnaswamy et al. Jan 2004 A1
20040022391 Obrien Feb 2004 A1
20040028227 Yu Feb 2004 A1
20040037421 Truman Feb 2004 A1
20040047592 Seo et al. Mar 2004 A1
20040047607 Seo et al. Mar 2004 A1
20040076237 Kadono et al. Apr 2004 A1
20040081333 Grab et al. Apr 2004 A1
20040088574 Walter et al. May 2004 A1
20040093494 Nishimoto et al. May 2004 A1
20040101059 Joch et al. May 2004 A1
20040107356 Shamoon et al. Jun 2004 A1
20040141613 Hayashi Jul 2004 A1
20040143732 Choi et al. Jul 2004 A1
20040174998 Youatt et al. Sep 2004 A1
20040193871 Seshadri Sep 2004 A1
20040196972 Zhu et al. Oct 2004 A1
20040196975 Zhu et al. Oct 2004 A1
20040247122 Hobrock et al. Dec 2004 A1
20040258242 Hsu Dec 2004 A1
20050013494 Srinivasan et al. Jan 2005 A1
20050028192 Hooper et al. Feb 2005 A1
20050063541 Candelore Mar 2005 A1
20050076232 Kawaguchi Apr 2005 A1
20050144468 Northcutt Jun 2005 A1
20050169466 Graunke Aug 2005 A1
20050177741 Chen et al. Aug 2005 A1
20050185821 Lecomte et al. Aug 2005 A1
20050198686 Krause et al. Sep 2005 A1
20050243912 Kwon et al. Nov 2005 A1
20050265555 Pippuri Dec 2005 A1
20060013568 Rodriguez Jan 2006 A1
20060143018 Densham et al. Jun 2006 A1
20060164544 Lecomte et al. Jul 2006 A1
20060165163 Burazerovic et al. Jul 2006 A1
20060165232 Burazerovic et al. Jul 2006 A1
20060184790 Oliveira et al. Aug 2006 A1
20060210075 Lecomte et al. Sep 2006 A1
20060216003 Lecomte et al. Sep 2006 A1
20060280301 Oliveira et al. Dec 2006 A1
20070047645 Takashima Mar 2007 A1
20070067472 Maertens et al. Mar 2007 A1
20070071091 Lay et al. Mar 2007 A1
20070083467 Lindahl et al. Apr 2007 A1
20070180051 Kelly et al. Aug 2007 A1
20080040805 Yasue Feb 2008 A1
20080086570 Dey et al. Apr 2008 A1
20080101718 Yang et al. May 2008 A1
20080137847 Candelore et al. Jun 2008 A1
20080294786 Tinker et al. Nov 2008 A1
20090003600 Chen et al. Jan 2009 A1
20090010622 Yahata et al. Jan 2009 A1
20090013195 Ochi et al. Jan 2009 A1
20090077143 Macy, Jr. Mar 2009 A1
20090106082 Senti et al. Apr 2009 A1
20090132599 Soroushian et al. May 2009 A1
20090144560 Takenaka et al. Jun 2009 A1
20090178090 Oztaskent Jul 2009 A1
20090249081 Zayas Oct 2009 A1
20090282162 Mehrotra et al. Nov 2009 A1
20090310819 Hatano Dec 2009 A1
20100005483 Rao Jan 2010 A1
20100142915 Mcdermott et al. Jun 2010 A1
20100189131 Branam et al. Jul 2010 A1
20110010466 Fan et al. Jan 2011 A1
20110058675 Brueck et al. Mar 2011 A1
20110096828 Chen et al. Apr 2011 A1
20110103374 Lajoie et al. May 2011 A1
20110135090 Chan et al. Jun 2011 A1
20110145858 Philpott et al. Jun 2011 A1
20110173345 Knox et al. Jul 2011 A1
20110179185 Wang et al. Jul 2011 A1
20110188652 Yamaguchi et al. Aug 2011 A1
20110197261 Dong et al. Aug 2011 A1
20110246661 Manzari et al. Oct 2011 A1
20110296048 Knox et al. Dec 2011 A1
20110314130 Strasman Dec 2011 A1
20120005312 Mcgowan et al. Jan 2012 A1
20120042090 Chen et al. Feb 2012 A1
20120047542 Lewis et al. Feb 2012 A1
20120051539 Kar et al. Mar 2012 A1
20120110120 Willig et al. May 2012 A1
20120167132 Mathews et al. Jun 2012 A1
20120311174 Bichot et al. Dec 2012 A1
20120331167 Hunt Dec 2012 A1
20130013803 Bichot et al. Jan 2013 A1
20130080267 McGowan Mar 2013 A1
20130107952 Coban et al. May 2013 A1
20130182774 Wang et al. Jul 2013 A1
20140020111 Wang et al. Jan 2014 A1
20140093180 Esenlik et al. Apr 2014 A1
20140098110 Ju et al. Apr 2014 A1
20140140253 Lohmar et al. May 2014 A1
20140149557 Lohmar et al. May 2014 A1
20140157304 Fay et al. Jun 2014 A1
20140281481 Moroney Sep 2014 A1
20150033008 Einarsson et al. Jan 2015 A1
20150163540 Masterson Jun 2015 A1
20150288530 Oyman Oct 2015 A1
20160044346 Braness et al. Feb 2016 A1
20160156949 Hattori Jun 2016 A1
20160156968 Toma et al. Jun 2016 A1
20160162709 Staudenmaier et al. Jun 2016 A1
20170366829 Braness et al. Dec 2017 A1
20200137435 Braness et al. Apr 2020 A1
Foreign Referenced Citations (64)
Number Date Country
2237293 Jul 1997 CA
1623327 Jun 2005 CN
1623328 Jun 2005 CN
1726713 Jan 2006 CN
101461244 Jun 2009 CN
102804766 Nov 2012 CN
106664203 May 2017 CN
106664203 Feb 2021 CN
1187483 Mar 2002 EP
1453319 Sep 2004 EP
1283640 Oct 2006 EP
2180664 Apr 2010 EP
2337348 Jun 2011 EP
2360923 Aug 2011 EP
3134995 Jan 2017 EP
3134995 Dec 2021 EP
3989477 Apr 2022 EP
1235169 Mar 2018 HK
2005341316 Dec 2005 JP
2006510308 Mar 2006 JP
201223765 Feb 2012 JP
2017526228 Sep 2017 JP
2018174533 Nov 2018 JP
6666953 Feb 2020 JP
202078095 May 2020 JP
6914381 Jul 2021 JP
2021-182744 Nov 2021 JP
7204838 Jan 2023 JP
202314134 Jan 2023 JP
20040039852 May 2004 KR
20060045675 May 2006 KR
20060106250 Oct 2006 KR
20080070017 Jul 2008 KR
20130095324 Aug 2013 KR
20130133830 Dec 2013 KR
10-2014-0085539 Jul 2014 KR
20140085539 Jul 2014 KR
10-2306641 Sep 2021 KR
10-2426067 Jul 2022 KR
102548789 Jun 2023 KR
102597985 Oct 2023 KR
2016015022 Mar 2018 MX
370687 Dec 2019 MX
2328040 Jun 2008 RU
11201609457 Oct 2018 SG
2000049762 Aug 2000 WO
2000049763 Aug 2000 WO
0104892 Jan 2001 WO
20020961170 Nov 2002 WO
2003047262 Jun 2003 WO
2004012378 Feb 2004 WO
2004100158 Nov 2004 WO
2005008385 Jan 2005 WO
2005015935 Feb 2005 WO
2009006302 Jan 2009 WO
2009109976 Sep 2009 WO
2010044146 Apr 2010 WO
2011087449 Jul 2011 WO
2011101371 Aug 2011 WO
2011103364 Aug 2011 WO
2013063094 May 2013 WO
2014012073 Jan 2014 WO
2014050038 Apr 2014 WO
2016022979 Feb 2016 WO
Non-Patent Literature Citations (133)
Entry
Extended European Search Report for European Application No. 15829005.6, Search completed Oct. 13, 2017, dated Nov. 6, 2017, 9 Pgs.
Final draft ETSI ES 202 109, V1.1.1, ETSI Standard, Terrestrial Trunked Radio (TETRA); Security; Synchronization mechanism for end-to-end encryption, Oct. 2002, 17 pgs.
Information Technology—MPEG Systems Technologies—Part 7: Common Encryption in ISO Base Media File Format Files (ISO/IEC 23001-7), Apr. 2015, 24 pgs.
Information technology—MPEG systems technologies—Part 7:Common encryption in ISO base media file format files, International Standard, Third Edition, 2016, 34 pgs.
International Preliminary Report on Patentability for International Application PCT/US2015/044325, Report dated Feb. 7, 2017, dated Feb. 16, 2017, 7 Pgs.
International Search Report and Written Opinion for International Application No. PCT/US2015/044325, Search completed Oct. 6, 2015, dated Oct. 29, 2015, 8 Pgs.
International Telecommunication Union, Telecommunication Standardization Sector of ITU, H.233, Line Transmission of Non-Telephone Signals, Confidentiality System for Audiovisual Services, ITU-T Recommendation H.233, Mar. 1993, 18 pgs.
ISO/IEC 14496-12 Information technology—Coding of audio-visual objects—Part 12: ISO base media file format, Feb. 2004 (“MPEG-4 Part 12 Standard”), 62 pgs.
ISO/IEC 14496-12:2008(E) Informational Technology—Coding of Audio-Visual Objects Part 12: ISO Base Media File Format, Oct. 2008, 120 pgs.
ISO/IEC FCD 23001-6 MPEG systems technologies Part 6: Dynamic adaptive streaming over HTTP (DASH), Jan. 28, 2011, 86 pgs.
Microsoft Corporation, Advanced Systems Format (ASF) Specification, Revision 01.20.03, Dec. 2004, 121 pgs.
MPEG-DASH presentation at Streaming Media West 2011, Nov. 2011, 14 pgs.
Pomelo, LLC Tech Memo, Analysis of Netflix's Security Framework for ‘Watch Instantly’ Service, Mar.-Apr. 2009, 18 pgs.
Provisional Application Cover Sheet, U.S. Appl. No. 60/409,675, filed Sep. 9, 2002, 144 pgs.
Server-Side Stream Repackaging (Streaming Video Technologies Panorama, Part 2), Jul. 2011, 15 pgs.
Supplementary European Search Report for European Application No. EP15829005.6, Search completed Oct. 13, 2017, dated Nov. 6, 2017, 9 pgs.
Text of ISO/IEC 23001-6: Dynamic adaptive streaming over HTTP (DASH), Oct. 2010, 71 pgs.
Universal Mobile Telecommunications System (UMTS), ETSI TS 126 233 V9.1.0 (Jun. 2011) 3GPP TS 26.233 version 9.1.0 Release 9, 18 pgs.
Universal Mobile Telecommunications Systems (UMTS); ETSI TS 126 244 V9.4.0 (May 2011) 3GPP TS 26.244 version 9.4.0 Release 9, 58 pgs.
“Apple HTTP Live Streaming specification”, Aug. 2017, 60 pgs.
“Coding of Audio-Video Object with a Container File Format”, Microsoft Corporation, PIFF Provisional (SPARTA file format), 89 pgs.
“Data Encryption Decryption using AES Algorithm, Key and Salt with Java Cryptography Extension”, Available at https://www.digizol.com/2009/10/java-encrypt-decrypt-jce-salt.html, Oct. 2009, 6 pgs.
“Delivering Live and On-Demand Smooth Streaming”, Microsoft Silverlight, 2009, 28 pgs.
“Guidelines for Implementation: DASH-IF Interoperability Points”, DASH Industry Forum, Apr. 7, 2015, Version 3.0 (final version), 154 pgs.
“High-Definition Video UMA Subsystem with 2D/3D Graphics”, Broadcom., BCM7030 Product Brief, 3 pgs.
“HTTP Based Adaptive Streaming over HSPA”, Apr. 2011, 73 pgs.
“HTTP Live Streaming”, Mar. 2011, 24 pgs.
“HTTP Live Streaming”, Sep. 2011, 33 pgs.
“Information Technology—Coding of Audio Visual Objects—Part 2: Visual”, International Standard, ISO/IEC 14496-2, Third Edition, Jun. 1, 2004, pp. 1-724. (presented in three parts).
“Java Cryptography Architecture API Specification & Reference”, Available at https://docs.oracle.com/javase/1.5.0/docs/guide/security/CryptoSpec.html, Jul. 25, 2004, 68 pgs.
“Java Cryptography Extension, javax.crypto.Cipher class”, Available at https://docs.oracle.com/javase/1.5.0/docs/api/javax/crypto/Cipher.html, 2004, 24 pgs.
“JCE Encryption—Data Encryption Standard (DES) Tutorial”, Available at https://mkyong.com/java/jce-encryption-data-encryption-standard-des-tutorial/, Feb. 25, 2009, 2 pgs.
“Live and On-Demand Video with Silverlight and IIS Smooth Streaming”, Microsoft Silverlight, Windows Server Internet Information Services 7.0, Feb. 2010, 15 pgs.
“Microsoft Smooth Streaming specification”, Jul. 22, 2013, 56 pgs.
“MPEG-2 Stream Encryption Format for HTTP Live Streaming, 1.0 Introduction”, Apple Inc., Retrieved from: https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/HLS_Sample_Encryption/Intro/Intro.html#//apple_ref/doc/uid/TP40012862-CH5-SW1, Updated Oct. 29, 2015, Printed Jul. 12, 2017.
“MPEG-2 Stream Encryption Format for HTTP Live Streaming, 2.1 Encryption Overview”, Apple Inc., Retrieved from: https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/HLS_Sample_Encryption/Encryption/Encryption.html, Updated Oct. 29, 2015, Printed Jul. 12, 2017.
“MPEG-2 Stream Encryption Format for HTTP Live Streaming, 3.0 Transport Stream Signaling”, Apple Inc., Retrieved from: https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/HLS_Sample_Encryption/TransportStreamSignaling/TransportStreamSignaling.html, Updated Oct. 29, 2015, Printed Jul. 12, 2017.
“MPEG-2 Stream Encryption Format for HTTP Live Streaming, 4.0 References”, Apple Inc., Retrieved from: https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/HLS_Sample_Encryption/References/References.html, Updated Oct. 29, 2015, Printed Jul. 12, 2017.
“MPEG-2 Stream Encryption Format for HTTP Live Streaming, Document Revision History”, Apple Inc., Retrieved from: https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/HLS_Sample_Encryption/RevisionHistory.html, Updated Oct. 29, 2015, Printed Jul. 12, 2017.
“OpenDML AVI File Format Extensions Version 1.02”, OpenDMLAVI MJPEG File Format Subcommittee. Last revision: Feb. 28, 1996. Reformatting: Sep. 1997, 42 pgs.
“Series H: Audiovisual and Multimedia Systems Infrastructure of audiovisual services—Coding of moving video; High efficiency video coding”, International Telecommunication Union, ITU-T H.265, Apr. 2015, 634 pages (presented in six parts).
“Single-Encode Streaming for Multiple Screen Delivery”, Telestream Wowza Media Systems, 2009, 6 pgs.
“The MPEG-DASH Standard for Multimedia Streaming Over the Internet”, IEEE MultiMedia, vol. 18, No. 4, 2011, 7 pgs.
“Windows Media Player 9”, Microsoft, Mar. 23, 2017, 3 pgs.
Abomhara et al., “Enhancing Selective Encryption for H.264/AVC Using Advanced Encryption Standard”, International Journal of computer Theory and Engineering, Apr. 2010, vol. 2, No. 2, pp. 223-229.
Alattar et al, “Improved selective encryption techniques for secure transmission of MPEG video bit-streams”, Image Processing, 1999. ICIP 99. Proceedings. 1999 International Conference on, vol. 4, IEEE, 1999, pp. 256-260.
Antoniou et al., “Adaptive Methods for the Transmission of Video Streams in Wireless Networks”, 2015, 50 pgs.
Apostolopoulos et al., “Secure Media Streaming and Secure Transcoding”, Multimedia Security Technologies for Digital Rights Management, 2006, 33 pgs.
Asai et al., “Essential Factors for Full-Interactive VOD Server: Video File System, Disk Scheduling, Network”, Proceedings of Globecom '95, Nov. 14-16, 1995, 6 pgs.
Beker et al., “Cipher Systems, The Protection of Communications”, 1982, 40 pgs.
Bergeron et al, “Compliant Selective Encryption for H.264/AVC Video Streams”, Multimedia Signal Processing, 2005 IEEE 7th Workshop, IEEE, 2005, 5 pgs.
Bloom et al., “Copy Protection for DVD Video”, Proceedings of the IEEE, vol. 87, No. 7, Jul. 1999, pp. 1267-1276.
Bocharov et al, “Portable Encoding of Audio-Video Objects, The Protected Interoperable File Format (PIFF)”, Microsoft Corporation, First Edition Sep. 8, 2009, 30 pgs.
Boho et al., “End-to-end security for video distribution: the combination of encryption, watermarking, and video adaptation”, IEEE Signal Processing Magazine, vol. 30, Issue 2, Mar. 2013, pp. 97-107.
Bross et al., “High Efficiency Video Coding (HEVC) Text Specification Draft 8”, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11,10th Meeting: Stockholm, SE, Jul. 2012, JCTVC-J1003_d7 (version 8), pp. 30-31, 65-68.
Bulterman et al., “Synchronized Multimedia Integration Language (SMIL 3.0)”, W3C Recommendation, Dec. 1, 2008, https://www.w3.org/TR/2008/REC-SMIL3-20081201/, 321 pgs. (presented in five parts).
But, “A Novel MPEG-1 Partial Encryption Scheme for the Purposes of Streaming Video”, A Novel MPEG-1 Partial Encryption Scheme for the Purposes of Streaming Video, Monash University, 2004, 297 pgs.
But, “Limitations of existing MPEG-1 ciphers for streaming video”, vol. 40429. Technical Report CAIA, 2004, 7 pgs.
Cahill et al., “Locally Adaptive Deblocking Filter for Low Bit Rate Video”, Proceedings 2000 International Conference on Image Processing, Sep. 10-13, 2000, Vancouver, BC, Canada, 4 pgs.
Candelore, U.S. Appl. No. 60/372,901, filed Apr. 17, 2002, 5 pgs.
Chaddha et al., “A Frame-work for Live Multicast of Video Streams over the Internet”, Proceedings of 3rd IEEE International Conference on Image Processing, Sep. 19, 1996, Lausanne, Switzerland, 4 pgs.
Cheng, “Partial Encryption for Image and Video Communication”, Thesis, Fall 1998, 95 pgs.
Cheng et al., “Partial encryption of compressed images and videos”, IEEE Transactions on Signal Processing, vol. 48, No. 8, Aug. 2000, 33 pgs.
Cheung et al., “On the Use of Destination Set Grouping to Improve Fairness in Multicast Video Distribution”, Proceedings of IEEE INFOCOM'96, Conference on Computer Communications, vol. 2, IEEE, 1996, 23 pgs.
Collet, “Delivering Protected Content, An Approach for Next Generation Mobile Technologies”, Thesis, 2010, 84 pgs.
Diamantis et al., “Real Time Video Distribution using Publication through a Database”, Proceedings SIBGRAPI'98. International Symposium on Computer Graphics, Image Processing, and Vision (Cat. No.98EX237), Oct. 1998, 8 pgs.
Dubois et al., “Selective Encryption of Images and Videos: From JPEG to H.265/HEVC through JPEG2000 and H.264/AVC”, Nova Publishers, Progress in Data Encryption Research, Chapter 6, Nov. 2013, pp. 137-178.
Dworkin, “Recommendation for Block Cipher Modes of Operation: Methods and Techniques”, NIST Special Publication 800-38A, 2001, 66 pgs.
Fang et al., “Real-time deblocking filter for MPEG-4 systems”, Asia-Pacific Conference on Circuits and Systems, Oct. 28-31, 2002, Bali, Indonesia, pp. 541-544.
Farajallah et al., “ROI Encryption for the HEVC Coded Video Contents”, 2015 IEEE International Conference on Image Processing (ICIP), Sep. 2015, 5 pages.
Fecheyr-Lippens, “A Review of HTTP Live Streaming”, Jan. 2010, 38 pgs.
Fielding et al., “Hypertext Transfer Protocol—HTTP1.1”, Network Working Group, RFC 2616, Jun. 1999, 114 pgs.
Fukuda et al., “Reduction of Blocking Artifacts by Adaptive DCT Coefficient Estimation in Block-Based Video Coding”, Proceedings 2000 International Conference on Image Processing, Sep. 10-13, 2000, Vancouver, BC, Canada, pp. 969-972.
Fuldseth, “Replacing Slices with Tiles for High Level Parallelism”, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11 4th Meeting, Daegu, KR, Jan. 20-28, 2011.
Furht et al, “Fundamentals of Multimedia Encryption Techniques”, Multimedia Security Handbook 4, 2004, 50 pgs.
Hofbauer et al., “Transparent encryption for HEVC using bit-stream-based selective coefficient sign encryption”, 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 4-9, 2014, Florence, Italy, pp. 1986-1990.
Hooda et al, “A Comprehensive Survey of Video Encryption Algorithms”, International Journal of Computer Applications (0975-8887), vol. 59—No. 1, Dec. 2012, 6 pgs.
Huang, U.S. Pat. No. 7,729,426, U.S. Appl. No. 11/230,794, filed Sep. 20, 2005, 143 pgs.
Huang, Filed Amendment, U.S. Appl. No. 12/788,210, Amendment filed Aug. 15, 2011, 20 pgs.
Huang et al., “Adaptive MLP post-processing for block-based coded images”, IEEE Proceedings—Vision, Image and Signal Processing, vol. 147, No. 5, Oct. 2000, pp. 463-473.
Huang et al., “Architecture Design for Deblocking Filter in H.264/JVT/AVC”, 2003 International Conference on Multimedia and Expo., Jul. 6-9, 2003, Baltimore, MD, 4 pgs.
Jain et al., U.S. Appl. No. 61/522,623, filed Aug. 11, 2011, 44 pgs.
Jeong et al., “Slice-Level Selective Encryption for Protecting Video Data”, Information Networking (ICOIN), Mar. 26, 2011, pp. 54-57, XP031922441, DOI: 10.1109/ICOIN.2011.5723113.
Jung et al., “Design and Implementation of an Enhanced Personal Video Recorder for DTV”, IEEE Transactions on Consumer Electronics, vol. 47, No. 4, Nov. 2001, 6 pgs.
Kalva, Hari “Delivering MPEG-4 Based Audio-Visual Services”, 2001, 113 pgs.
Kang et al., “Access Emulation and Buffering Techniques for Streaming of Non-Stream Format Video Files”, IEEE Transactions on Consumer Electronics, vol. 43, No. 3, Aug. 2001, 7 pgs.
Kim et al, “A Deblocking Filter with Two Separate Modes in Block-Based Video Coding”, IEEE transactions on circuits and systems for video technology, vol. 9, No. 1, 1999, pp. 156-160.
Kim et al., “Tree-Based Group Key Agreement”, Feb. 2004, 37 pgs.
Laukens, “Adaptive Streaming—A Brief Tutorial”, EBU Technical Review, 2011, 6 pgs.
Legault et al., “Professional Video Under 32-bit Windows Operating Systems”, SMPTE Journal, vol. 105, No. 12, Dec. 1996, 10 pgs.
Li, “Overview of Fine Granularity Scalability in MPEG-4 Video Standard”, IEEE Transactions on circuits and systems for video technology 11, No. 3, 2001, 17 pgs.
Li et al., “Layered Video Multicast with Retransmission (LVMR): Evaluation of Hierarchical Rate Control”, Proceedings of IEEE INFOCOM'98, the Conference on Computer Communications. Seventeenth Annual Joint Conference of the IEEE Computer and Communications Societies. Gateway to the 21st Century, Cat. No. 98, vol. 3, 1998, 26 pgs.
List et al., “Adaptive deblocking filter”, IEEE transactions on circuits and systems for video technology, vol. 13, No. 7, Jul. 2003, pp. 614-619.
Liu et al, “A Selective Video Encryption Scheme for MPEG Compression Standard”, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences 89, No. 1, 2006, pp. 194-202.
Liu et al, “Selective Encryption of Multimedia Content in Distribution Networks: Challenges and New Directions”, IASTED Communications, Internet & Information Technology (CIIT), USA (2003), 10 pgs.
Lookabaugh et al, “Security Analysis of Selectively Encrypted MPEG-2 Streams”, Multimedia Systems and Applications VI, vol. 5241, International Society for Optics and Photonics, 2003, 12 pgs.
Massoudi et al., “Overview on Selective Encryption of Image and Video: Challenges and Perspectives”, EURASIP Journal on Information Security, Nov. 2008, 18 pgs.
McCanne et al., “Receiver-driven Layered Multicast”, Conference proceedings on Applications, technologies, architectures, and protocols for computer communications, Aug. 1996, 14 pgs.
Meier, “Reduction of Blocking Artifacts in Image and Video Coding”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, No. 3, Apr. 1999, pp. 490-500.
Misra et al., “An Overview of Tiles in HEVC”, IEEE journal of selected topics in signal processing 7, No. 6, 2013, pp. 969-977.
Nelson, “Smooth Streaming Deployment Guide”, Microsoft Expression Encoder, Aug. 2010, 66 pgs.
Newton et al., “Preserving Privacy by De-identifying Facial Images”, Carnegie Mellon University School of Computer Science, Technical Report, CMU-CS-03-119, Mar. 2003, 26 pgs.
O'Brien, U.S. Appl. No. 60/399,846, filed Jul. 30, 2002, 27 pgs.
Okubo et al., “H.264/AVC Textbook”, Third revision, Impress R&D, Jan. 1, 2009.
O'Rourke, “Improved Image Decompression for Reduced Transform Coding Artifacts”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 5, No. 6, Dec. 1995, pp. 490-499.
Ouamri et al., “Robust and fast selective encryption for HEVC videos”, Journal of Communications Software and Systems, vol. 10, No. 4, Dec. 2014, pp. 221-229.
Park et al., “A postprocessing method for reducing quantization effects in low bit-rate moving picture coding”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, No. 1, Feb. 1999, pp. 161-171.
Richardson, “H.264 and MPEG-4 Video Compression”, Wiley, 2003, 306 pgs. (presented in 2 parts).
Shahid et al., “Investigating the Structure Preserving Encryption of High Efficiency Video Coding (HEVC)”, Electronic Imaging, Feb. 2013, San Francisco, CA, United States. SPIE (8656), pp. 86560N, 2013, Real-Time Image and Video Processing. <10.1117/12.2011933>, 11 pages.
Shahid et al., “Visual Protection of HEVC Video by Selective Encryption of CABAC Binstrings”, IEEE Transactions on Multimedia, Institute of Electrical and Electronics Engineers, Jan. 2014, vol. 16, No. 1, pp. 24-36.
Sima et al., “An Efficient Architecture for Adaptive Deblocking Filter of H.264 AVC Video Coding”, IEEE Transactions on Consumer Electronics, vol. 50, No. 1, Feb. 2004, pp. 292-296.
Skrepth et al, “Selective Encryption of Visual Data”, In Advanced Communications and Multimedia Security, pp. 213-226. Springer, Boston, MA, 2002, 14 pgs.
Spanos et al., “Performance Study of a Selective Encryption Scheme for the Security of Networked, Real-Time Video”, Proceedings of the Fourth International Conference on Computer Communications and Networks, IC3N'95, Sep. 20-23, 1995, Las Vegas, NV, pp. 2-10.
Srinivasan et al., “Windows Media Video 9: overview and applications”, Signal Processing: Image Communication, 2004, 25 pgs.
Stockhammer, “Dynamic Adaptive Streaming over HTTP—Standards and Design Principles”, Proceedings of the second annual ACM conference on Multimedia Systems (MMSys '11), Feb. 2011, pp. 133-145.
Timmerer et al., “HTTP Streaming of MPEG Media”, Proceedings of Streaming Day, 2010, 4 pgs.
Tiphaigne et al., “A Video Package for Torch”, Jun. 2004, 46 pgs.
Trappe et al., “Key Management and Distribution for Secure Multimedia Multicast”, IEEE Transaction on Multimedia, vol. 5, No. 4, Dec. 2003, pp. 544-557.
Unknown, “Information Technology—Coding of Audio-Visual Objects—Part 2: Visual”, International Standard, ISO/IEC 14496-2, Third edition, Jun. 1, 2004, pp. 1-706.
Van Deursen et al., “On Media Delivery Protocols in the Web”, 2010 IEEE International Conference on Multimedia and Expo, Jul. 19-23, 2010, 6 pgs.
Ventura, Guillermo Albaida “Streaming of Multimedia Learning Objects”, AG Integrated Communication System, Mar. 2003, 101 pgs.
Waggoner, “Compression for Great Digital Video”, 2002, 184 pgs.
Wallendael et al., “Encryption for High Efficiency Video Coding with Video Adaptation Capabilities”, IEEE Transactions on Consumer Electronics, vol. 59, Issue 3, Aug. 2013, pp. 634-642.
Wallendael et al., “Format-Compliant Encryption Techniques for High Efficiency Video Coding”, 2013 IEEE International Conference on Image Processing, Sep. 15-18, 2013, Melbourne, VIC, Australia, pp. 4583-4587.
Watanabe et al., “MPEG-2 decoder enables DTV trick plays”, Researcher, System LSI Development Lab, Fujitsu Laboratories Ltd., Kawasaki, Japan, Jun. 2001, 2 pgs.
Wiegand, “Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG”, Jan. 2002, 70 pgs.
Willig et al., U.S. Appl. No. 61/409,285, filed Nov. 2, 2010, 43 pgs.
Yang et al., “Projection-Based Spatially Adaptive Reconstruction of Block-Transform Compressed Images”, IEEE Transactions on Image Processing, vol. 4, No. 7, Jul. 1995, pp. 896-908.
Yang et al., “Regularized Reconstruction to Reduce Blocking Artifacts of Block Discrete Cosine Transform Compressed Images”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, No. 6, Dec. 1993, pp. 421-432.
Yu et al., “Video deblocking with fine-grained scalable complexity for embedded mobile computing”, Proceedings 7th International Conference on Signal Processing, Aug. 31-Sep. 4, 2004, pp. 1173-1178.
Zakhor, “Iterative Procedures for Reduction of Blocking Effects in Transform Image Coding”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 2, No. 1, Mar. 1992, pp. 91-95.
Extended European Search Report for European Application No. 21215991.7, Search completed Mar. 21, 2022, dated Mar. 30, 2022, 11 Pgs.
Bross et al., “Proposed Editorial Improvements for High efficiency video coding (HEVC) Text Specification Draft 8”, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 11th Meeting: Shanghai, CN, Oct. 10-19, 2012.
Related Publications (1)
Number Date Country
20220408132 A1 Dec 2022 US
Provisional Applications (1)
Number Date Country
62034714 Aug 2014 US
Continuations (3)
Number Date Country
Parent 16725795 Dec 2019 US
Child 17650057 US
Parent 15693016 Aug 2017 US
Child 16725795 US
Parent 14821454 Aug 2015 US
Child 15693016 US