Systems, methods, and media for transcoding video data

Information

  • Patent Grant
  • Patent Number
    10,264,255
  • Date Filed
    Monday, February 26, 2018
  • Date Issued
    Tuesday, April 16, 2019
Abstract
Methods, systems, and computer readable media for transcoding video data based on metadata are provided. In some embodiments, methods for transcoding video data using metadata are provided, the methods comprising: receiving a first plurality of encoded images from a storage device; decoding the first plurality of encoded images based on a first coding scheme to generate a plurality of decoded images; receiving a plurality of encoding parameters from the storage device; and encoding the plurality of decoded images into a second plurality of encoded images based on a second coding scheme and the plurality of encoding parameters.
Description
BACKGROUND OF THE INVENTION

Transcoding is an important task in video distribution applications. For example, a transcoder can receive input video data having a first format and convert the input video data into video data having a second format. More particularly, for example, the first format and the second format can correspond to different video coding standards, such as Motion JPEG, JPEG 2000, MPEG-2, MPEG-4, H.263, H.264, AVC, High Efficiency Video Coding (HEVC), etc. Alternatively or additionally, the first format and the second format can have different bitrates and/or resolutions.


There are many current approaches to transcoding video data. For example, a transcoder can decode video data compressed in a first format into raw video data and re-encode the raw video data into a second format. More particularly, for example, the transcoder can estimate encoding parameters and re-encode the raw video data using the estimated encoding parameters. However, the estimation of encoding parameters within a transcoder is very time-consuming.


Accordingly, new mechanisms for transcoding video data are desirable.


SUMMARY OF THE INVENTION

In view of the foregoing, systems, methods, and media for transcoding video data using metadata are provided.


In some embodiments, methods for transcoding video data using metadata are provided, the methods comprising: receiving a first plurality of encoded images from a storage device; decoding the first plurality of encoded images based on a first coding scheme to generate a plurality of decoded images; receiving a plurality of encoding parameters from the storage device; and encoding the plurality of decoded images into a second plurality of encoded images based on a second coding scheme and the plurality of encoding parameters.


In some embodiments, systems for transcoding video data using metadata are provided, the systems comprising: processing circuitry configured to: receive a first plurality of encoded images from a storage device; decode the first plurality of encoded images based on a first coding scheme to generate a plurality of decoded images; receive a plurality of encoding parameters from the storage device; and encode the plurality of decoded images into a second plurality of encoded images based on a second coding scheme and the plurality of encoding parameters.


In some embodiments, non-transitory media containing computer-executable instructions that, when executed by a processing circuitry, cause the processing circuitry to perform a method for transcoding video data are provided, the method comprising: receiving a first plurality of encoded images from a storage device; decoding the first plurality of encoded images based on a first coding scheme to generate a plurality of decoded images; receiving a plurality of encoding parameters from the storage device; and encoding the plurality of decoded images into a second plurality of encoded images based on a second coding scheme and the plurality of encoding parameters.





BRIEF DESCRIPTION OF THE DRAWINGS

The above and other objects and advantages of the invention will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:



FIG. 1 shows a generalized block diagram of an example of an architecture of hardware that can be used in accordance with some embodiments of the invention;



FIG. 2 shows a block diagram of an example of storage device and transcoder in accordance with some embodiments of the invention;



FIG. 3 shows a flow chart of an example of a process for transcoding video data in accordance with some embodiments of the invention;



FIG. 4 shows a flow chart of an example of a process for decoding video data in accordance with some embodiments of the invention; and



FIG. 5 shows a flow chart of an example of a process for encoding video data in accordance with some embodiments of the invention.





DETAILED DESCRIPTION OF EMBODIMENTS

This invention generally relates to mechanisms (which can be systems, methods, media, etc.) for transcoding video data based on metadata. In some embodiments, the mechanisms can be used to transcode video data having a first format into video data having a second format.


In some embodiments, the mechanisms can receive a compressed bitstream and media metadata. The mechanisms can decompress the compressed bitstream and generate decoded video data based on a first coding scheme. The mechanisms can then encode the decoded video data based on a second coding scheme.


In some embodiments, the media metadata can include any suitable data. For example, the media metadata can include a set of coding parameters that can be used to encode video data. More particularly, the media metadata can include information about one or more video scenes, such as a scene change indication signal, the number of frames between two scenes, the type of a video scene, etc. The media metadata can also include motion data, intra-prediction information, picture complexity information, etc., about the video data.
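As a rough illustration, the kinds of metadata enumerated above could be carried in a structure along the following lines. This is a minimal sketch in Python; the field names and shapes are illustrative assumptions, not a format defined by this disclosure.

```python
# Illustrative container for the media metadata described above.
# All field names and types are assumptions for the sake of the sketch.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class MediaMetadata:
    # Frame indices where scene changes occur, with the change type
    # ("shot", "fade", "dissolve", ...).
    scene_changes: List[Tuple[int, str]] = field(default_factory=list)
    # Per-frame spatial/temporal complexity scores (e.g., MAD/SAD based).
    spatial_complexity: Dict[int, float] = field(default_factory=dict)
    temporal_complexity: Dict[int, float] = field(default_factory=dict)
    # Per-frame motion vector maps: (block_y, block_x) -> (dy, dx).
    motion_vectors: Dict[int, Dict[Tuple[int, int], Tuple[int, int]]] = field(default_factory=dict)
    # Per-frame candidate intra modes with their measured coding costs.
    intra_modes: Dict[int, List[Tuple[str, float]]] = field(default_factory=dict)
```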


In some embodiments, the mechanisms can encode the decoded video data using the media metadata. For example, the mechanisms can generate a prediction image based on the motion data, the intra-prediction information, etc. As another example, the mechanisms can perform rate-control on the decoded video data based on the information about the video scenes, picture complexity information, etc.


Turning to FIG. 1, a generalized block diagram of an example 100 of an architecture of hardware that can be used in accordance with some embodiments is shown. As illustrated, architecture 100 can include a media content source 102, a media encoder 104, a media metadata source 106, a communications network 108, a storage device 110, a transcoder 112, and communications paths 114, 116, 118, 120, 122, 124, 126, 128, and 130.


Media content source 102 can include any suitable device that can provide media content. For example, media content source 102 can include one or more suitable cameras that can be configured to capture still images or moving images. As another example, media content source 102 can include one or more types of content distribution equipment for distributing any suitable media content, including television distribution facility equipment, cable system head-end equipment, satellite distribution facility equipment, programming source equipment (e.g., equipment of television broadcasters, such as NBC, ABC, HBO, etc.), intermediate distribution facility equipment, Internet provider equipment, on-demand media server equipment, and/or any other suitable media content provider equipment. NBC is a trademark owned by the National Broadcasting Company, Inc., ABC is a trademark owned by the ABC, INC., and HBO is a trademark owned by the Home Box Office, Inc.


Media content source 102 may be operated by the originator of content (e.g., a television broadcaster, a Webcast provider, etc.) or may be operated by a party other than the originator of content (e.g., an on-demand content provider, an Internet provider of content of broadcast programs for downloading, etc.).


Media content source 102 may be operated by cable providers, satellite providers, on-demand providers, Internet providers, providers of over-the-top content, and/or any other suitable provider(s) of content.


Media content source 102 may include a remote media server used to store different types of content (including video content selected by a user), in a location remote from any of the user equipment devices. Systems and methods for remote storage of content, and providing remotely stored content to user equipment are discussed in greater detail in connection with Ellis et al., U.S. Pat. No. 7,761,892, issued Jul. 20, 2010, which is hereby incorporated by reference herein in its entirety.


As referred to herein, the term “media content” or “content” should be understood to mean one or more electronically consumable media assets, such as television programs, pay-per-view programs, on-demand programs (e.g., as provided in video-on-demand (VOD) systems), Internet content (e.g., streaming content, downloadable content, Webcasts, etc.), movies, films, video clips, audio, audio books, and/or any other media or multimedia and/or combination of the same. As referred to herein, the term “multimedia” should be understood to mean media content that utilizes at least two different content forms described above, for example, text, audio, images, video, or interactivity content forms. Media content may be recorded, played, displayed or accessed by user equipment devices, but can also be part of a live performance. In some embodiments, media content can include over-the-top (OTT) content. Examples of OTT content providers include YOUTUBE, NETFLIX, and HULU, which provide audio and video via IP packets. Youtube is a trademark owned by Google Inc., Netflix is a trademark owned by Netflix Inc., and Hulu is a trademark owned by Hulu, LLC.


Media content can be provided from any suitable source in some embodiments. In some embodiments, media content can be electronically delivered to a user's location from a remote location. For example, media content, such as a Video-On-Demand movie, can be delivered to a user's home from a cable system server. As another example, media content, such as a television program, can be delivered to a user's home from a streaming media provider over the Internet.


Media encoder 104 can include any suitable circuitry that is capable of encoding media content. For example, media encoder 104 can include one or more suitable video encoders, audio encoders, video decoders, audio decoders, etc. More particularly, for example, media encoder 104 can include one or more video encoders that can encode video data including a set of images in accordance with a suitable coding standard, such as Motion JPEG, JPEG 2000, MPEG-2, MPEG-4, H.263, H.264, AVC, High Efficiency Video Coding (HEVC), etc. As referred to herein, an image can have any suitable size and shape. For example, an image can be a frame, a field, or any suitable portion of a frame or a field, such as a slice, a block, a macroblock, a set of macroblocks, a coding tree unit (CTU), a coding tree block (CTB), etc.


Media metadata source 106 can include any suitable circuitry that is capable of providing metadata for media content. The metadata for media content can include any suitable information about the media content. For example, the metadata can include one or more coding parameters that can be used by suitable encoding circuitry and/or suitable decoding circuitry to encode and/or decode video data including multiple video frames.


In a more particular example, the metadata can include information about one or more video scenes, each of which can be composed of a set of images that have similar content. More particularly, for example, the metadata can include scene change information that can indicate the start and/or end of one or more scene changes in the video data. In some embodiments, the metadata can also include a set of parameters that can indicate the type of each of the scene changes, such as a shot change, a fading change, a dissolving change, etc. In some embodiments, the metadata can include the number of images between two scene changes. For example, the metadata can include the number of images between two consecutive scene changes, two scene changes of a given type (e.g., such as two shot changes), etc.


In another more particular example, the media metadata can include picture complexity information. The picture complexity information can include any suitable information about the spatial and/or temporal complexity of an image, such as a frame, a field, a slice, a macroblock, a sub-macroblock, a CTU, a CTB, etc.


In some embodiments, for example, the picture complexity information can include spatial complexity of an image that can indicate the amount of intra-distortion across the image. The amount of intra-distortion can be measured in any suitable manner. For example, the amount of intra-distortion of the image can be measured based on the variances of pixel values, luminance, brightness, or other characteristics of the image using a suitable metric, such as the mean absolute difference (MAD), the mean square error (MSE), etc. In some embodiments, the spatial complexity of a frame can be measured using the sum of the spatial complexity of the macroblocks and/or CTUs of the frame. In some embodiments, the picture complexity information can include a map of spatial complexity distribution within a frame for each frame of the video data.
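To make the spatial-complexity measure concrete, the following minimal sketch computes a per-macroblock intra MAD map for one luma frame and sums it into a frame-level score, as described above. The 16×16 block size and the MAD-from-the-block-mean metric are illustrative assumptions.

```python
import numpy as np

def spatial_complexity_map(luma: np.ndarray, block: int = 16) -> np.ndarray:
    """Return a (rows, cols) map of per-block intra MAD for one luma frame."""
    h, w = luma.shape
    rows, cols = h // block, w // block
    cmap = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            blk = luma[r*block:(r+1)*block, c*block:(c+1)*block].astype(np.float64)
            cmap[r, c] = np.abs(blk - blk.mean()).mean()  # deviation from block mean
    return cmap

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cmap = spatial_complexity_map(frame)
print(cmap.sum())  # frame-level spatial complexity = sum over macroblocks
```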


In some embodiments, for example, the picture complexity information can include temporal complexity of an image that can indicate the amount of motion between the image and one or more reference images. The amount of motion can be represented in any suitable manner. For example, the amount of motion between the image and a reference image can be measured using a suitable difference metric, such as the sum of the absolute difference (SAD), the sum of the squared difference (SSD), the mean absolute difference (MAD), the sum of absolute transformed differences (SATD), etc. More particularly, for example, the temporal complexity of a frame can be represented as the SAD, SSD, MAD, SATD, etc. between two consecutive frames. In some embodiments, the picture complexity information can include a map of temporal complexity distribution within a frame for each frame of the video data.
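A corresponding sketch for temporal complexity computes a per-block SAD map between co-located blocks of two consecutive luma frames, one of the difference metrics named above; the block size is again an illustrative assumption.

```python
import numpy as np

def temporal_complexity_map(cur: np.ndarray, ref: np.ndarray, block: int = 16) -> np.ndarray:
    """Per-block SAD between a frame and the preceding frame."""
    diff = np.abs(cur.astype(np.int32) - ref.astype(np.int32))
    rows, cols = cur.shape[0] // block, cur.shape[1] // block
    # Fold the pixel grid into (rows, block, cols, block) and sum each block.
    return diff[:rows*block, :cols*block].reshape(rows, block, cols, block).sum(axis=(1, 3))

prev = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(temporal_complexity_map(cur, prev).sum())  # frame-level temporal complexity
```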


In yet another more particular example, the metadata can include motion data about the video data. The motion data can be generated in any suitable manner and can include any suitable data about changes among video frames due to object motions, camera motions, uncovered regions, lighting changes, etc. More particularly, for example, media metadata source 106 can generate a motion vector map for each video frame of the media content, motion characteristics (e.g., high motion, slow motion, etc.) of one frame or a set of frames, the number of B-frames between two P-frames, etc. In some embodiments, the motion data can be generated based on a suitable motion estimation algorithm, such as a block matching algorithm, an optical flow algorithm, a sub-pixel motion estimation algorithm, a hierarchical block matching algorithm, etc. For example, in some embodiments, the motion vector map can include a set of integer motion vectors corresponding to each integer pixel of a video frame. As another example, the motion vector map can include a set of fractional motion vectors corresponding to each sub-pixel of the video frame (e.g., ½ pixel, ¼ pixel, ⅛ pixel, etc.). In some embodiments, the media metadata can also include one or more reference lists that can contain a set of frames that can serve as reference frames.
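The sketch below shows the simplest of the motion estimation algorithms named above, an integer-pel full-search block match, of the kind a metadata source could use to populate a motion vector map. The 16×16 block size and ±8 search range are illustrative assumptions.

```python
import numpy as np

def block_match(cur: np.ndarray, ref: np.ndarray, by: int, bx: int,
                block: int = 16, search: int = 8) -> tuple:
    """Return the (dy, dx) minimizing SAD for the block at (by, bx)."""
    target = cur[by:by+block, bx:bx+block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            sad = np.abs(target - ref[y:y+block, x:x+block].astype(np.int32)).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, 3), axis=(0, 1))   # synthetic global motion
print(block_match(cur, ref, 16, 16))      # -> (-2, -3) for this shift
```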


As yet another example, the media metadata can include intra-prediction data about the media content. The intra-prediction data can include any suitable data that can be used for intra prediction under a suitable coding standard. For example, the intra-prediction data can include a set of candidate intra-prediction modes, such as a vertical mode, a horizontal mode, a DC mode, a diagonal down-left mode, a diagonal down-right mode, a vertical-right mode, a horizontal-down mode, a vertical-left mode, a horizontal-up mode, a plane mode, an intra-angular mode, etc. Additionally, the intra-prediction data can include a coding cost and/or distortion corresponding to each intra-prediction mode.


In some embodiments, the media metadata can be stored based on the play order of the video frames.


Storage device 110 can be any suitable digital storage mechanism in some embodiments. For example, storage 110 can include any device for storing electronic data, program instructions, computer software, firmware, register values, etc., such as random-access memory, read-only memory, hard drives, optical drives, digital video disc (DVD) recorders, compact disc (CD) recorders, BLU-RAY disc (BD) recorders, BLU-RAY 3D disc recorders, digital video recorders (DVR, sometimes called a personal video recorder, or PVR), solid state devices, quantum storage devices, gaming consoles, gaming media, or any other suitable fixed or removable storage devices, and/or any combination of the same. Storage 110 may be used to store media content, media metadata, media guidance data, executable instructions (e.g., programs, software, scripts, etc.) for providing an interactive media guidance application, and for any other suitable functions, and/or any other suitable data or program code, in accordance with some embodiments. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions), in some embodiments. In some embodiments, storage 110 can store media content, encoded video data, and/or metadata provided by media content source 102, media encoder 104, and/or media metadata source 106.


Transcoder 112 can include any suitable circuitry that is capable of converting input media content having a first format into media content having a second format. For example, transcoder 112 can include a suitable video transcoder that can convert a first set of images that are encoded in accordance with a first coding scheme into a second set of images that are encoded in accordance with a second coding scheme. In some embodiments, the first coding scheme and the second coding scheme may have different target bitrates. In some embodiments, the first set of encoded images and the second set of encoded images may have different resolutions, such as spatial resolutions, temporal resolutions, quality resolutions, etc. In some embodiments, the first coding scheme and the second coding scheme may correspond to different coding standards, such as Motion JPEG, JPEG 2000, MPEG-2, MPEG-4/AVC, H.263, H.264, High Efficiency Video Coding (HEVC), etc. More particularly, for example, in some embodiments, transcoder 112 can convert a set of images encoded based on MPEG-2 standard into a set of images encoded based on HEVC standard.


In some embodiments, communications network 108 may be any one or more networks including the Internet, a mobile phone network, a mobile voice and/or data network (e.g., a 3G, 4G, or LTE network), a cable network, a satellite network, a public switched telephone network, a local area network, a wide area network, a fiber-optic network, any other suitable type of communications network, and/or any suitable combination of communications networks.


In some embodiments, media content source 102, media encoder 104, media metadata source 106, storage device 110, and transcoder 112 can be implemented in any suitable hardware. For example, each of media content source 102, media encoder 104, media metadata source 106, storage device 110, and transcoder 112 can be implemented in any of a general purpose device such as a computer or a special purpose device such as a client, a server, a mobile terminal (e.g., a mobile phone), etc. Any of these general or special purpose devices can include any suitable components such as a hardware processor (which can be a microprocessor, a digital signal processor, a controller, etc.).


In some embodiments, each of media content source 102, media encoder 104, media metadata source 106, storage device 110, and transcoder 112 can be implemented as a stand-alone device or integrated with other components of architecture 100.


In some embodiments, media content source 102 can be connected to media metadata source 106 through communications path 114. In some embodiments, media encoder 104 can be connected to media content source 102 and media metadata source 106 through communications paths 116 and 118, respectively. In some embodiments, communications network 108 can be connected to media content source 102, media encoder 104, media metadata source 106, storage device 110, and transcoder 112 through communications paths 120, 122, 124, 126, and 128, respectively. In some embodiments, storage device 110 can be connected to transcoder 112 through communications path 130.


Communications paths 114, 116, 118, 120, 122, 124, 126, 128, and 130 may separately or together include one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths, in some embodiments.


Turning to FIG. 2, a block diagram of an example 200 of storage device 110 and transcoder 112 of FIG. 1 in accordance with some embodiments of the disclosure is shown.


As illustrated, transcoder 112 may include decoding circuitry 202, encoding circuitry 204, video-data storage 206, and communications paths 208, 210, 212, and 214.


Decoding circuitry 202 can include any suitable circuitry that is capable of performing video decoding. For example, decoding circuitry 202 can include one or more decoders that can decode a set of encoded images based on a suitable coding standard, such as MPEG-2, MPEG-4, AVC, H.263, H.264, HEVC, etc.


Encoding circuitry 204 can include any suitable circuitry that is capable of performing video encoding. For example, encoding circuitry 204 can include one or more suitable encoders that can encode a set of images based on a suitable coding standard, such as MPEG-2, MPEG-4, AVC, H.263, H.264, HEVC, etc. In some embodiments, encoding circuitry 204 can also include scaler circuitry for upconverting and/or downconverting content into a preferred output format.


Decoding circuitry 202 can be connected to encoding circuitry 204 through communications path 212, and encoding circuitry 204 can be connected to video-data storage 206 through communications path 214. Transcoder 112 may be connected to storage device 110 through communications paths 208 and 210.


Each of decoding circuitry 202 and encoding circuitry 204 can include any suitable processing circuitry. As referred to herein, processing circuitry can be any suitable circuitry that includes one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), hardware processors, etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores) or a supercomputer, in some embodiments. In some embodiments, processing circuitry may be distributed across multiple separate processors or processing units, such as, for example, multiple of the same type of processing units (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor).


Video data storage 206 can be any suitable digital storage mechanism in some embodiments. For example, video data storage 206 can include any device for storing electronic data, program instructions, computer software, firmware, register values, etc., such as random-access memory, read-only memory, hard drives, optical drives, digital video disc (DVD) recorders, compact disc (CD) recorders, BLU-RAY disc (BD) recorders, BLU-RAY 3D disc recorders, digital video recorders (DVR, sometimes called a personal video recorder, or PVR), solid state devices, quantum storage devices, gaming consoles, gaming media, or any other suitable fixed or removable storage devices, and/or any combination of the same. Video data storage 206 may be used to store media content, media guidance data, executable instructions (e.g., programs, software, scripts, etc.) for providing an interactive media guidance application, and for any other suitable functions, and/or any other suitable data or program code, in accordance with some embodiments. Nonvolatile memory may also be used (e.g., to launch a boot-up routine and other instructions), in some embodiments.


Each of storage device 110, decoding circuitry 202, encoding circuitry 204, and video-data storage 206 can be provided as a stand-alone device or integrated with other components of architecture 200.


In some embodiments, storage device 110 can be connected to decoding circuitry 202 and encoding circuitry 204 through communications paths 208 and 210, respectively. In some embodiments, decoding circuitry 202 can be connected to encoding circuitry 204 through communications path 212. In some embodiments, encoding circuitry 204 can be connected to video-data storage 206 through communications path 214.


Communications paths 208, 210, 212, and 214 may separately or together include one or more communications paths, such as a satellite path, a fiber-optic path, a cable path, a path that supports Internet communications (e.g., IPTV), free-space connections (e.g., for broadcast or other wireless signals), or any other suitable wired or wireless communications path or combination of such paths, in some embodiments.


In some embodiments, transcoder 112 can also include demultiplexer circuitry (not shown in FIG. 2). The demultiplexer circuitry can be any suitable circuitry that is capable of demultiplexing a media content transport stream (TS). For example, the demultiplexer circuitry can receive a TS from storage device 110 and demultiplex the TS into a video stream, an audio stream, a program and system information protocol (PSIP) data stream, etc. The demultiplexer circuitry can also pass the video stream to decoding circuitry 202.


Turning to FIG. 3, a flow chart of an example 300 of a process for transcoding video data in accordance with some embodiments of the disclosure is shown. In some embodiments, process 300 can be implemented by transcoder 112 as illustrated in FIGS. 1 and 2.


As illustrated, process 300 can start by receiving a compressed bitstream at 302. The compressed bitstream can include any suitable data and can be received in any suitable manner. For example, the compressed bitstream can include video data generated based on any suitable coding standard, such as Motion JPEG, JPEG 2000, MPEG-2, MPEG-4, H.263, H.264, HEVC, etc. More particularly, for example, the video data can include encoded images, decoding parameters, header information, etc. In some embodiments, each of the encoded images can include one or more quantized transform coefficients.


In some embodiments, for example, the compressed bitstream can be received from storage 110 as illustrated in FIGS. 1 and 2. Alternatively or additionally, the compressed bitstream can be received from media encoder 104 and/or media content source 102.


Next, at 304, transcoder 112 can decompress the compressed bitstream and generate decoded video data. The compressed bitstream can be decompressed and the decoded video data can be generated in any suitable manner. For example, transcoder 112 can decompress the compressed bitstream and generate multiple decoded images based on a suitable coding standard, such as Motion JPEG, JPEG 2000, MPEG-2, MPEG-4, H.263, H.264, HEVC, etc. In some embodiments, the decoded images can have any suitable color format, such as RGB, YCrCb, YUV, etc.


More particularly, for example, each of the decoded images can be generated using a process 400 as illustrated in FIG. 4. In some embodiments, for example, process 400 can be implemented by decoding circuitry 202 of transcoder 112 (FIG. 2).


As shown, at 402, decoding circuitry 202 can perform entropy decoding on the compressed bitstream and extract the quantized transform coefficients associated with each of the encoded images, decoding parameters (e.g., quantization parameters, coding modes, macroblock partition information, motion vectors, reference lists, etc.), header information, etc.


At 404, decoding circuitry 202 can perform inverse quantization on the quantized transform coefficients associated with a current encoded image to generate one or more transform coefficients. The inverse quantization can be performed in any suitable manner. For example, decoding circuitry 202 can multiply each of the quantized transform coefficients by a quantization step size corresponding to a suitable quantization parameter. In some embodiments, for example, decoding circuitry 202 can obtain the quantization parameter from the decoding parameters.


At 406, decoding circuitry 202 can perform an inverse transform on the transform coefficients to generate a decoded residual image for the current encoded image. The inverse transform can be performed in any suitable manner. For example, the inverse transform can be an inverse Discrete Cosine Transform (IDCT).


Next, at 408, decoding circuitry 202 can generate a prediction image for the current encoded image. The prediction image can be calculated in any suitable manner. For example, decoding circuitry 202 can generate the prediction image based on a suitable inter-prediction method by referring to one or more previously decoded frames. More particularly, for example, decoding circuitry 202 can perform motion compensation on one or more previously decoded frames and produce a motion compensated reference image as the prediction image. In a more particular example, decoding circuitry 202 can locate a previously decoded image or a portion of the previously decoded image as a reference image for the current encoded image using a motion vector. The reference image can then be used as the motion compensated prediction for the current image. In another more particular example, decoding circuitry 202 can locate two reference images for the current encoded image using one or more motion vectors. Decoding circuitry 202 can then calculate a prediction image for the current encoded image based on the reference images. More particularly, for example, the prediction image can be a weighted prediction of the two reference images.


As another example, decoding circuitry 202 can generate the prediction image based on a suitable intra-prediction method by referring to one or more previously decoded pixels in the same frame. More particularly, for example, decoding circuitry 202 can perform spatial extrapolation to produce an intra-prediction image for the current encoded image. In some embodiments, one or more prediction images can be formed by extrapolating previously decoded pixels of the current frame in any suitable direction, such as vertical, horizontal, diagonal down-left, diagonal down-right, vertical-left, horizontal-down, vertical-right, horizontal-up, etc.
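For instance, the vertical direction reduces to copying the row of previously decoded pixels immediately above the block down through it. A minimal sketch (the 4×4 block size is an illustrative assumption):

```python
import numpy as np

def intra_vertical(above_row: np.ndarray) -> np.ndarray:
    # Vertical-mode spatial extrapolation: each column of the prediction
    # repeats the previously decoded pixel directly above the block.
    return np.tile(above_row, (len(above_row), 1))

above = np.array([10, 20, 30, 40])
print(intra_vertical(above))  # 4x4 prediction; every row is [10 20 30 40]
```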


At 410, decoding circuitry 202 can generate a decoded image for the current encoded image based on the residual image and the prediction image. The decoded image can be generated in any suitable manner. For example, decoding circuitry 202 can add the prediction image to the decoded residual image to produce the decoded image.
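Putting steps 404-410 together for one block, the following minimal sketch applies a flat scalar inverse quantizer, an orthonormal inverse DCT, and the prediction add. Real codecs use per-coefficient scaling and integer transforms; the flat quantizer and floating-point IDCT here are illustrative simplifications.

```python
import numpy as np
from scipy.fft import idctn

def reconstruct_block(qcoef: np.ndarray, qstep: float, prediction: np.ndarray) -> np.ndarray:
    coef = qcoef * qstep                            # 404: inverse quantization
    residual = idctn(coef, norm="ortho")            # 406: inverse transform (IDCT)
    return np.clip(prediction + residual, 0, 255)   # 410: prediction + residual

pred = np.full((8, 8), 128.0)
qc = np.zeros((8, 8)); qc[0, 0] = 4.0               # a single quantized DC coefficient
print(reconstruct_block(qc, qstep=10.0, prediction=pred)[0, 0])  # ~133
```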


Turning back to FIG. 3, at 306, transcoder 112 can receive media metadata. The media metadata can include any suitable data and can be received in any suitable manner. For example, the media metadata can be the metadata produced by media metadata source 106, as described above in connection with FIG. 1. More particularly, for example, the media metadata can include information about video scenes (e.g., scene change information, the number of the frames between scene changes, the type of a scene change, the number of B-frames between two P-frames, picture complexity information, etc.), motion data about the media content (e.g., motion vector maps, reference lists, etc.), intra-prediction data (e.g., a set of candidate intra-prediction modes, the coding cost and/or distortion corresponding to each candidate intra-prediction mode, etc.), etc.


In some embodiments, for example, encoding circuitry 204 (FIG. 2) can receive the media metadata from storage 110. In some embodiments, encoding circuitry 204 can receive the media metadata from media metadata source 106 through communications network 108 as illustrated in FIG. 1.


At 308, transcoder 112 can encode the decoded video data using the media metadata based on a second coding scheme. The decoded video data can be encoded in any suitable manner. For example, transcoder 112 can encode the decoded images into a set of encoded images based on any suitable coding standard, such as MPEG-2, MPEG-4, H.263, H.264, HEVC, etc. As another example, transcoder 112 can encode the decoded video data into a compressed bitstream including a set of encoded images that has a given bitrate. As yet another example, encoding circuitry 204 can encode the decoded images into a set of encoded images that has a given resolution, such as a spatial resolution, a temporal resolution, a quality resolution, etc.


More particularly, for example, transcoder 112 can generate each of the encoded images using a process 500 as illustrated in FIG. 5. In some embodiments, process 500 can be implemented by encoding circuitry 204 of transcoder 112.


At 502, encoding circuitry 204 can receive the set of decoded images and the media metadata. The set of decoded images and the media metadata can be received in any suitable manner. For example, encoding circuitry 204 can receive the set of decoded images from the decoding circuitry 202 and receive the media metadata from storage device 110.


At 504, encoding circuitry 204 can divide a decoded image into one or more suitable coding units based on the second coding scheme. Each of the coding units can have any suitable size and shape and can be obtained in any suitable manner. In some embodiments, for example, the second coding scheme can include the HEVC coding standard. Encoding circuitry 204 can divide a video frame into multiple coding tree units (CTU), each of which can have a size of 8×8, 16×16, 32×32, 64×64, etc. In some embodiments, each of the CTUs can be partitioned into multiple coding tree blocks (CTBs), each of which can have a size of 4×4, 8×8, 16×16, etc. based on the size of the CTU. In some embodiments, each of the CTBs can be further partitioned into multiple coding blocks (CBs) and coding units (CUs).
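A toy version of such a quadtree partition is sketched below: a block is recursively split into four while its spatial activity stays above a threshold, down to a minimum size. The variance-based split criterion and the thresholds are illustrative assumptions, not the HEVC mode decision.

```python
import numpy as np

def quadtree_partition(luma: np.ndarray, y: int, x: int, size: int,
                       min_size: int = 8, thresh: float = 500.0) -> list:
    """Return a list of (y, x, size) leaf coding blocks."""
    blk = luma[y:y+size, x:x+size].astype(np.float64)
    if size <= min_size or blk.var() <= thresh:
        return [(y, x, size)]          # flat enough, or smallest allowed size
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):           # recurse into the four quadrants
            leaves += quadtree_partition(luma, y + dy, x + dx, half, min_size, thresh)
    return leaves

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(len(quadtree_partition(frame, 0, 0, 64)))  # leaves of one 64x64 CTU
```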


At 506, encoding circuitry 204 can generate a prediction image for a coding unit. The prediction image can be generated in any suitable way. For example, encoding circuitry 204 can generate the prediction image based on the media metadata such as scene change information, motion data, picture complexity information, intra-prediction information, etc.


In some embodiments, for example, encoding circuitry 204 can generate the prediction image based on a suitable inter-prediction method by referring to one or more reference images. More particularly, for example, encoding circuitry 204 can calculate one or more suitable motion vectors for the coding unit based on the motion vector map corresponding to the coding unit. Encoding circuitry 204 can then generate a motion compensated prediction image for the coding unit based on the motion vectors by referring to one or more reference images. In some embodiments, the motion compensated prediction image can be generated based on one reference frame that can be located using the reference frame lists. For example, encoding circuitry 204 can locate a region in the reference frame as a reference image for the coding unit based on a motion vector. The reference image can then be used as a prediction image for the coding unit. In some embodiments, the motion compensated prediction image can be generated based on two reference frames that can be located using the reference frame lists. For example, encoding circuitry 204 can generate two reference images by locating a region in each of the two reference frames, respectively, based on one or more motion vectors. Encoding circuitry 204 can then produce a prediction for the coding unit using the two reference images. More particularly, for example, the prediction for the coding unit can be a weighted prediction of the two reference images.
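The weighted bi-prediction described above amounts to averaging two motion-compensated reference regions. A minimal sketch, with equal weights assumed as in default bi-prediction:

```python
import numpy as np

def bi_prediction(ref0: np.ndarray, ref1: np.ndarray,
                  mv0: tuple, mv1: tuple, y: int, x: int, size: int,
                  w0: float = 0.5, w1: float = 0.5) -> np.ndarray:
    # Motion-compensated regions from the two reference frames.
    p0 = ref0[y+mv0[0]:y+mv0[0]+size, x+mv0[1]:x+mv0[1]+size].astype(np.float64)
    p1 = ref1[y+mv1[0]:y+mv1[0]+size, x+mv1[1]:x+mv1[1]+size].astype(np.float64)
    return w0 * p0 + w1 * p1  # weighted prediction of the two references

r0 = np.random.randint(0, 256, (64, 64)); r1 = np.random.randint(0, 256, (64, 64))
print(bi_prediction(r0, r1, (1, -2), (0, 3), y=16, x=16, size=8).shape)  # (8, 8)
```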


In some embodiments, encoding circuitry 204 can generate the prediction image based on a suitable intra-prediction method. The intra-prediction can be performed in any suitable manner. For example, encoding circuitry 204 can generate an intra-prediction image for the coding unit based on the media metadata, such as the intra-prediction data including the set of candidate intra-prediction modes, the coding cost and/or distortion corresponding to each intra-prediction mode, etc. More particularly, for example, encoding circuitry 204 can determine a sub-set of the candidate intra-prediction modes that can be used in accordance with the second coding scheme. Additionally, encoding circuitry 204 can select an intra-prediction mode from the sub-set of candidate intra-prediction modes based on the coding costs and/or distortion corresponding to each of the sub-set of candidate intra-prediction modes. Encoding circuitry 204 can then generate a prediction image for the coding unit based on the selected intra-prediction mode. More particularly, for example, encoding circuitry 204 can predict each pixel of the coding unit by extrapolating pixel samples in a direction defined by the intra-prediction mode.
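The mode-reuse step described above can be sketched as a simple filter-and-argmin over the metadata's candidate list; the mode names and costs below are illustrative assumptions:

```python
def select_intra_mode(candidates, supported_modes):
    """candidates: list of (mode_name, coding_cost) pairs from the metadata."""
    # Keep only the candidate modes the second coding scheme supports.
    usable = [(m, c) for m, c in candidates if m in supported_modes]
    if not usable:
        return None  # fall back to a full mode search
    # Pick the supported candidate with the lowest recorded coding cost.
    return min(usable, key=lambda mc: mc[1])[0]

meta = [("vertical", 1400.0), ("horizontal", 1250.0), ("dc", 1600.0)]
print(select_intra_mode(meta, {"vertical", "dc", "planar"}))  # -> "vertical"
```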


At 508, encoding circuitry 204 can generate a residual image for the coding unit. The residual image can be generated in any suitable manner. For example, the residual image can be generated by subtracting the prediction image generated at 506 from the original image of the coding unit.


At 510, encoding circuitry 204 can perform a transform on the residual image and generate a set of transform coefficients. The set of transform coefficients can be generated in any suitable manner. For example, encoding circuitry 204 can perform a Discrete Cosine Transform (DCT) on the residual image and generate a set of DCT coefficients.
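For illustration, step 510 on one residual block might look as follows; scipy's orthonormal dctn stands in for whatever transform the second coding scheme actually specifies:

```python
import numpy as np
from scipy.fft import dctn

def transform_residual(residual: np.ndarray) -> np.ndarray:
    # Forward 2-D DCT (type-II, orthonormal) of one residual block.
    return dctn(residual.astype(np.float64), norm="ortho")

res = np.random.randint(-32, 32, (8, 8))
print(transform_residual(res).shape)  # (8, 8) block of DCT coefficients
```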


At 512, encoding circuitry 204 can perform quantization on the set of transform coefficients. The quantization can be performed in any suitable manner. For example, encoding circuitry 204 can determine a suitable quantization parameter (QP) for a coding unit based on a target bitrate of the second coding scheme. Encoding circuitry 204 can then quantize the transform coefficients using the QP. The target bitrate can be any suitable bitrate, such as a constant bitrate, a variable bitrate, etc. A QP can be determined in any suitable manner. In some embodiments, for example, encoding circuitry 204 can reduce the bitrate of a compressed bitstream by increasing the QP or increase the bitrate of a compressed bitstream by decreasing the QP. In some embodiments, for example, an I-frame can be encoded using the most bits, followed by a P-frame and then a B-frame.
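A minimal scalar-quantization sketch follows. The step-size rule Qstep ≈ 2^((QP − 4)/6), under which the step doubles every 6 QP, is the H.264/HEVC convention and illustrates why raising the QP lowers the bitrate:

```python
import numpy as np

def quantize(coef: np.ndarray, qp: int) -> np.ndarray:
    # H.264/HEVC-style step size: doubles every 6 QP.
    qstep = 2.0 ** ((qp - 4) / 6.0)
    return np.round(coef / qstep).astype(np.int32)

dct = np.array([[820.0, -32.5], [14.2, -3.1]])
print(quantize(dct, qp=22))  # coarser than qp=16, finer than qp=28
```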


In some embodiments, encoding circuitry 204 can determine a QP based on the media metadata (e.g., scene change information, the number of frames between two scenes, the type of each scene change, picture complexity information, etc.), the target bitrate in accordance with the second coding scheme, etc.


For example, encoding circuitry 204 can determine a QP for a group of pictures (GOP) based on the media metadata. The QP can be determined for the GOP in any suitable manner. More particularly, for example, encoding circuitry 204 can determine the structure of a GOP (e.g., the length of the GOP, the distance between P-frames, the distance between I-frames, etc.) based on the media metadata and determine the QP for the GOP based on the structure of the GOP.


In some embodiments, encoding circuitry 204 can calculate the number of bits available to encode the GOP based on the structure of a GOP, the frame rate of the video data, the target rate, etc. Encoding circuitry 204 can then calculate a QP for the GOP based on the number of bits available to encode the GOP. More particularly, for example, the QP can be calculated based on a suitable model that can define the relation between the QP and the target rate, such as a rate-distortion model, a rate-distortion optimization model, etc.
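As a concrete (and deliberately simplified) illustration of this GOP-level rate control, the sketch below derives the GOP bit budget from the target bitrate, frame rate, and GOP length, then inverts an assumed one-parameter rate model R(QP) = a · 2^(−QP/6) for the QP; a real rate-distortion model would be considerably richer:

```python
import math

def gop_bit_budget(gop_len: int, frame_rate: float, target_bps: float) -> float:
    # Bits available for the whole GOP at the target bitrate.
    return target_bps * gop_len / frame_rate

def qp_for_budget(budget_bits: float, a: float = 2.0e7,
                  qp_min: int = 0, qp_max: int = 51) -> int:
    # Invert the assumed model R(QP) = a * 2^(-QP/6) and clamp to [0, 51].
    qp = 6.0 * math.log2(a / budget_bits)
    return max(qp_min, min(qp_max, round(qp)))

budget = gop_bit_budget(gop_len=30, frame_rate=30.0, target_bps=2_000_000)
print(budget, qp_for_budget(budget))  # 2000000.0 bits and a QP of about 20
```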


In some embodiments, encoding circuitry 204 can determine the structure of a GOP based on the media metadata, such as scene change information, the number of frames between two scene changes, the number of B-frames between two P-frames, etc.


In a more particular example, the first frame of the GOP can be an I-frame that can be located using the scene change information. More particularly, for example, the first frame of the GOP can correspond to the start of a video scene.


In another more particular example, the length of the GOP, i.e., the number of frames in the GOP, can be determined based on the number of frames between two scene changes. In some embodiments, the length of the GOP can be equal to the number of frames between two adjacent scene changes. In some embodiments, the length of the GOP can be equal to the number of frames between two given scene changes, e.g., two shot changes, etc.
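The two rules just described (start each GOP at a scene change, set its length to the frame count until the next one) can be sketched as follows; the frame indices are illustrative:

```python
def gops_from_scene_changes(scene_change_frames, total_frames: int):
    """Return (start_frame, length) for each GOP; each start is an I-frame."""
    starts = sorted(set(scene_change_frames)) or [0]
    if starts[0] != 0:
        starts = [0] + starts          # the first GOP starts at frame 0
    bounds = starts + [total_frames]
    return [(bounds[i], bounds[i + 1] - bounds[i]) for i in range(len(bounds) - 1)]

print(gops_from_scene_changes([0, 48, 120], total_frames=150))
# -> [(0, 48), (48, 72), (120, 30)]
```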


In yet another more particular example, the distance between P-frames in the GOP can be determined based on the number of B-frames between two P-frames included in the media metadata. In a more particular example, the GOP can include a set of frames IBBPBBP . . . where the distance between P-frames is three.


As another example, encoding circuitry 204 can determine a QP for the coding unit based on the media metadata. More particularly, for example, encoding circuitry 204 can determine the complexity of the coding unit using the picture complexity information (e.g., the maps of spatial complexity, the maps of motion complexity, etc.). Encoding circuitry 204 can then calculate a target number of bits that are available to encode the coding unit based on the complexity of the coding unit. In some embodiments, for example, more bits can be allocated to a coding unit having relatively high complexity while fewer bits can be allocated to a coding unit having relatively lower complexity.
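A minimal sketch of this complexity-proportional bit allocation: each coding unit receives a share of the frame budget proportional to its complexity score from the metadata:

```python
def allocate_bits(complexities, frame_budget_bits: float):
    # More complex coding units get proportionally more of the frame budget.
    total = sum(complexities) or 1.0
    return [frame_budget_bits * c / total for c in complexities]

print(allocate_bits([4.0, 1.0, 3.0], frame_budget_bits=80_000))
# -> [40000.0, 10000.0, 30000.0]
```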


Additionally, encoding circuitry 204 can determine a QP for the coding unit to produce the target number of bits. More particularly, for example, the QP can be calculated based on a suitable model that can define the relation between the QP and the target rate, such as a rate-distortion model, a rate-distortion optimization model, etc.


Next, at 514, encoding circuitry 204 can perform entropy encoding on the quantized transform coefficients. The entropy encoding can be performed in any suitable manner. For example, encoding circuitry 204 can perform the entropy encoding using a suitable variable length encoding method.


It should be noted that the above steps of the flow diagrams of FIGS. 3-5 may be executed or performed in any order or sequence not limited to the order and sequence shown and described in the figures. Furthermore, it should be noted, some of the above steps of the flow diagrams of FIGS. 3-5 may be executed or performed substantially simultaneously where appropriate or in parallel to reduce latency and processing times. And still furthermore, it should be noted, some of the above steps of the flow diagrams of FIGS. 3-5 may be omitted.


In some embodiments, any suitable computer readable media can be used for storing instructions for performing the mechanisms and/or processes described herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as magnetic media (such as hard disks, floppy disks, etc.), optical media (such as compact discs, digital video discs, Blu-ray discs, etc.), semiconductor media (such as flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), etc.), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.


The above described embodiments of the present disclosure are presented for purposes of illustration and not of limitation, and the present disclosure is limited only by the claims which follow.

Claims
  • 1. A method for transcoding a source video file into a set of multiple alternate video streams, the method comprising performing the following at each of a plurality of transcoding devices in parallel: receiving at least a portion of the source video file that includes a first plurality of encoded images encoded according to a source format from a media content source; decoding the at least a portion of the source video file based on the source format to generate a decoded portion of video including a plurality of decoded images; receiving media metadata generated prior to the decoding of the portion of the encoded video over a communications network from a media metadata source, where the media metadata comprises scene change information indicating the start and end of a scene, and scene complexity information; and encoding the plurality of decoded images of the decoded portion of video into an alternate video stream including a second plurality of encoded images based on a target format and the media metadata, the alternate video stream being one of the set of multiple alternate video streams, by performing at least the following operations for images in the plurality of decoded images: generating a prediction image for each of a plurality of coding units of an image in the plurality of decoded images using the scene change information and the scene complexity information within the received media metadata according to the target format; performing transforms on residual images of the plurality of coding units to generate sets of transform coefficients based on the target format; and performing entropy encoding on the sets of transform coefficients to generate images for the second plurality of encoded images.
  • 2. The method of claim 1, wherein the media metadata is generated by a first device and then accessed by the plurality of transcoding devices.
  • 3. The method of claim 1, by further performing the following at each of the plurality of transcoding devices in parallel: performing quantization on the sets of transform coefficients for an image in the plurality of decoded images based at least in part on the scene complexity information within the received media metadata; and quantizing the generated set of transform coefficients according to the target format.
  • 4. The method of claim 1, by further performing the following at each of the plurality of transcoding devices in parallel: determining a number of bits to encode a group of pictures (GOP) based at least in part on a number of frames between the start and end of a scene as indicated by the received media metadata.
  • 5. The method of claim 1, wherein the source format and the target format have different resolutions.
  • 6. The method of claim 1, by further performing the following at each of the plurality of transcoding devices in parallel: dividing an image in the plurality of decoded images into a plurality of coding units based on the target format.
  • 7. A system for transcoding video data, the system comprising: a non-transitory memory storing a transcoding application; a processing circuitry; and wherein the transcoding application directs the processing circuitry to: receive at least a portion of the source video file that includes a first plurality of encoded images encoded according to a source format from a media content source; decode the at least a portion of the source video file based on the source format to generate a decoded portion of video including a plurality of decoded images; receive media metadata generated prior to the decoding of the portion of the encoded video over a communications network from a media metadata source, where the media metadata comprises scene change information indicating the start and end of a scene, and scene complexity information; and encode the plurality of decoded images of the decoded portion of video into an alternate video stream including a second plurality of encoded images based on a target format and the media metadata, the alternate video stream being one of the set of multiple alternate video streams, by performing at least the following operations for images in the plurality of decoded images: generating a prediction image for each of a plurality of coding units of an image in the plurality of decoded images using the scene change information and the scene complexity information within the received media metadata according to the target format; performing transforms on residual images of the plurality of coding units to generate sets of transform coefficients based on the target format; and performing entropy encoding on the sets of transform coefficients to generate images for the second plurality of encoded images.
  • 8. The system of claim 7, wherein the media metadata received by the system for transcoding video data is generated by another device.
  • 9. The system of claim 7, wherein the processing circuitry is further configured to transcode the source video file by further performing the following at each of the plurality of transcoding devices in parallel: performing quantization on the sets of transform coefficients for an image in the plurality of decoded images based at least in part on the scene complexity information within the received media metadata; and quantizing the generated set of transform coefficients according to the target format.
  • 10. The system of claim 7, wherein the processing circuitry is further configured to transcode the source video file by further performing the following at each of the plurality of transcoding devices in parallel: determining a number of bits to encode a group of pictures (GOP) based at least in part on a number of frames between the start and end of a scene as indicated by the received media metadata.
  • 11. The system of claim 7, wherein the processing circuitry is further configured to transcode the source video file by further performing the following at each of the plurality of transcoding devices in parallel: dividing an image in the plurality of decoded images into a plurality of coding units based on the target format.
  • 12. The system of claim 7, wherein the source format and the target format correspond to different video encoding standards.
  • 13. A non-transitory computer-readable medium containing computer-executable instructions that, when executed by a processing circuitry, cause the processing circuitry to perform a method for transcoding video data, the method comprising: receive at least a portion of the source video file that includes a first plurality of encoded images encoded according to a source format from a media content source; decode the at least a portion of the source video file based on the source format to generate a decoded portion of video including a plurality of decoded images; receive media metadata generated prior to the decoding of the portion of the encoded video over a communications network from a media metadata source, where the media metadata comprises scene change information indicating the start and end of a scene, and scene complexity information; and encode the plurality of decoded images of the decoded portion of video into an alternate video stream including a second plurality of encoded images based on a target format and the media metadata, the alternate video stream being one of the set of multiple alternate video streams, by performing at least the following operations for images in the plurality of decoded images: generating a prediction image for each of a plurality of coding units of an image in the plurality of decoded images using the scene change information and the scene complexity information within the received media metadata according to the target format; performing transforms on residual images of the plurality of coding units to generate sets of transform coefficients based on the target format; and performing entropy encoding on the sets of transform coefficients to generate images for the second plurality of encoded images.
  • 14. The non-transitory computer-readable medium of claim 13, wherein the received media metadata is generated by another device.
  • 15. The non-transitory computer-readable medium of claim 13, wherein the method further comprises transcoding the source video file by further performing the following at each of the plurality of transcoding devices in parallel: performing quantization on the sets of transform coefficients for an image in the plurality of decoded images based at least in part on the scene complexity information within the received media metadata; and quantizing the generated set of transform coefficients according to the target format.
  • 16. The non-transitory computer-readable medium of claim 13, wherein the method further comprises transcoding the source video file by further performing the following at each of the plurality of transcoding devices in parallel: determining a number of bits to encode a group of pictures (GOP) based at least in part on a number of frames between the start and end of a scene as indicated by the received media metadata.
  • 17. The non-transitory computer-readable medium of claim 13, wherein the method further comprises transcoding the source video file by further performing the following at each of the plurality of transcoding devices in parallel: dividing an image in the plurality of decoded images into a plurality of coding units based on the target format.
  • 18. The non-transitory computer-readable medium of claim 13, wherein the source format and the target format have different resolutions.
  • 19. The non-transitory computer-readable medium of claim 13, wherein the source format and the target format correspond to different video coding standards.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 13/841,943, entitled “Systems, Methods, and Media for Transcoding Video Data According to Encoding Parameters Indicated by Received Metadata”, filed Mar. 15, 2013. The disclosure of U.S. patent application Ser. No. 13/841,943 is incorporated herein by reference in its entirety.

Related Publications (1)
Number Date Country
20180262757 A1 Sep 2018 US
Continuations (1)
Number Date Country
Parent 13841943 Mar 2013 US
Child 15905695 US