Digital video can be used, for example, for remote business meetings via video conferencing, high definition video entertainment, video advertisements, or sharing of user-generated videos. Due to the large amount of data involved, high performance compression is needed for transmission and storage. Accordingly, it would be advantageous to provide techniques for transmitting high resolution video over communications channels having limited bandwidth.
This application relates to encoding and decoding of video stream data for transmission or storage. Disclosed herein are aspects of systems, methods, and apparatuses for encoding and decoding using right-edge extension for quad-tree intra-prediction.
An aspect is a method for performing right-edge extension for quad-tree intra-prediction, which may include identifying a current frame from a plurality of frames of a video stream, the current frame including a plurality of blocks, identifying a current block from the plurality of blocks, identifying a proximal reconstructed pixel, wherein the proximal reconstructed pixel is proximally above the current block, identifying a distant reconstructed pixel, wherein the distant reconstructed pixel is a most proximate reconstructed pixel above the proximal reconstructed pixel such that a reconstructed pixel proximally to the right of the distant reconstructed pixel is available for prediction, generating a prediction pixel based on the proximal reconstructed pixel, the distant reconstructed pixel, and the reconstructed pixel proximally to the right of the distant reconstructed pixel, and generating a predicted pixel for the current block based on the prediction pixel.
Another aspect is a method for performing right-edge extension for quad-tree intra-prediction. The method may include identifying a current pixel of a current block from a plurality of blocks of a current frame of a plurality of frames of a video stream, identifying a proximal above pixel, wherein the proximal above pixel is a pixel of a block proximally above the current block, and wherein the proximal above pixel is available for predicting the current pixel, identifying a distant above pixel, wherein the distant above pixel is a most proximate reconstructed pixel above the proximal above pixel such that a pixel proximally to the right of the distant above pixel is available for prediction, and generating, by a processor, a predicted pixel for the current pixel based on the proximal above pixel, the distant above pixel, and the pixel proximally to the right of the distant above pixel.
Another aspect is a method for performing right-edge extension for quad-tree intra-prediction. The method may include identifying a current frame from a plurality of frames of a video stream, the current frame including a plurality of superblocks, wherein each superblock in the plurality of superblocks includes a respective plurality of blocks, identifying a first superblock from the plurality of superblocks, identifying a first block from the plurality of blocks of the first superblock, generating predicted pixels for the first block based on reconstructed pixels from a block proximally above and to the right of the first block, identifying a second block from the plurality of blocks of the first superblock, and on a condition that reconstructed pixels from a block proximally above and to the right of the second block for predicting pixels for the second block are unavailable for prediction, generating predicted pixels for the second block. Generating predicted pixels for the second block may include identifying a proximal reconstructed pixel, wherein the proximal reconstructed pixel is proximally above the second block, identifying a distant reconstructed pixel, wherein the distant reconstructed pixel is a most proximate reconstructed pixel above the proximal reconstructed pixel such that a plurality of reconstructed pixels proximally to the right of the distant reconstructed pixel are available for prediction, generating, by a processor, a plurality of prediction pixels based on the proximal reconstructed pixel, the distant reconstructed pixel, and the plurality of reconstructed pixels proximally to the right of the distant reconstructed pixel, and generating a plurality of predicted pixels for the second block based on the plurality of prediction pixels.
Variations in these and other aspects will be described in additional detail hereafter.
The description herein makes reference to the accompanying drawings wherein like reference numerals refer to like parts throughout the several views, and wherein:
Digital video may be used for various purposes including, for example, remote business meetings via video conferencing, high definition video entertainment, video advertisements, and sharing of user-generated videos. Digital video streams may represent video using a sequence of frames or images. Each frame can include a number of blocks, which may include information indicating pixel attributes, such as color values or brightness. Transmission and storage of video can use significant computing or communications resources. Compression and other coding techniques may be used to reduce the amount of data in video streams.
Encoding a video stream, or a portion thereof, such as a frame or a block, can include using temporal and spatial similarities in the video stream to improve coding efficiency. Video encoding may include using prediction to generate predicted pixel values in a frame based on similarities between pixels. One form of prediction is intra-prediction, which can include predicting values for a current block based on values of reference blocks which correspond to spatially proximal previously encoded and decoded blocks in the current frame. However, some intra-prediction modes may not predict some portions of a frame well, such as where spatially proximal previously encoded and decoded blocks in the current frame are not available for prediction. For example, top-right diagonal down intra-prediction may include predicting pixels in a current block based on reference pixels in a block immediately above and to the right of the current block and may not predict pixels, such as right edge pixels, well for relatively large blocks where the reference pixels are not available for prediction.
Right-edge extension for quad-tree intra-prediction may improve the quality of intra-prediction, such as top-right diagonal down intra-prediction, where spatially proximal previously encoded and decoded blocks in the current frame are not available for prediction. Right-edge extension for quad-tree intra-prediction may include maintaining the top-right diagonal down intra-prediction pattern while improving prediction quality by generating prediction pixels corresponding to the unavailable reference pixels based on a combination of proximal and distant reference pixels.
The computing device 100 may be a stationary computing device, such as a personal computer (PC), a server, a workstation, a minicomputer, or a mainframe computer; or a mobile computing device, such as a mobile telephone, a personal digital assistant (PDA), a laptop, or a tablet PC. Although shown as a single unit, any one or more elements of the computing device 100 can be integrated into any number of separate physical units. For example, the UI 130 and the processor 140 can be integrated in a first physical unit and the memory 150 can be integrated in a second physical unit.
The communication interface 110 can be a wireless antenna, as shown, a wired communication port, such as an Ethernet port, an infrared port, a serial port, or any other wired or wireless unit capable of interfacing with a wired or wireless electronic communication medium 180.
The communication unit 120 can be configured to transmit or receive signals via a wired or wireless medium 180. For example, as shown, the communication unit 120 is operatively connected to an antenna configured to communicate via wireless signals. Although not explicitly shown in
The UI 130 can include any unit capable of interfacing with a user, such as a virtual or physical keypad, a touchpad, a display, a touch display, a speaker, a microphone, a video camera, a sensor, or any combination thereof. The UI 130 can be operatively coupled with the processor 140, as shown, or with any other element of the computing device 100, such as the power source 170. Although shown as a single unit, the UI 130 may include one or more physical units. For example, the UI 130 may include an audio interface for performing audio communication with a user, and a touch display for performing visual and touch based communication with the user. Although shown as separate units, the communication interface 110, the communication unit 120, and the UI 130, or portions thereof, may be configured as a combined unit. For example, the communication interface 110, the communication unit 120, and the UI 130 may be implemented as a communications port capable of interfacing with an external touchscreen device.
The processor 140 can include any device or system capable of manipulating or processing a signal or other information now-existing or hereafter developed, including optical processors, quantum processors, molecular processors, or a combination thereof. For example, the processor 140 can include a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a programmable logic array, a programmable logic controller, microcode, firmware, any type of integrated circuit (IC), a state machine, or any combination thereof. As used herein, the term “processor” includes a single processor or multiple processors. The processor can be operatively coupled with the communication interface 110, the communication unit 120, the UI 130, the memory 150, the instructions 160, the power source 170, or any combination thereof.
The memory 150 can include any non-transitory computer-usable or computer-readable medium, such as any tangible device that can, for example, contain, store, communicate, or transport the instructions 160, or any information associated therewith, for use by or in connection with the processor 140. The non-transitory computer-usable or computer-readable medium can be, for example, a solid state drive, a memory card, removable media, a read only memory (ROM), a random access memory (RAM), any type of disk, including a hard disk, a floppy disk, an optical disk, a magnetic or optical card, an application-specific integrated circuit (ASIC), or any type of non-transitory media suitable for storing electronic information, or any combination thereof. The memory 150 can be connected to the processor 140 through, for example, a memory bus (not explicitly shown).
The instructions 160 can include directions for performing any method, or any portion or portions thereof, disclosed herein. The instructions 160 can be realized in hardware, software, or any combination thereof. For example, the instructions 160 may be implemented as information stored in the memory 150, such as a computer program, that may be executed by the processor 140 to perform any of the respective methods, algorithms, aspects, or combinations thereof, as described herein. The instructions 160, or a portion thereof, may be implemented as a special purpose processor, or circuitry, that can include specialized hardware for carrying out any of the methods, algorithms, aspects, or combinations thereof, as described herein. Portions of the instructions 160 can be distributed across multiple processors on the same machine or different machines or across a network such as a local area network, a wide area network, the Internet, or a combination thereof.
The power source 170 can be any suitable device for powering the computing device 100. For example, the power source 170 can include a wired power source; one or more dry cell batteries, such as nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), or lithium-ion (Li-ion); solar cells; fuel cells; or any other device capable of powering the computing device 100. The communication interface 110, the communication unit 120, the UI 130, the processor 140, the instructions 160, the memory 150, or any combination thereof, can be operatively coupled with the power source 170.
Although shown as separate elements, the communication interface 110, the communication unit 120, the UI 130, the processor 140, the instructions 160, the power source 170, the memory 150, or any combination thereof can be integrated in one or more electronic units, circuits, or chips.
A computing and communication device 100A/100B/100C can be, for example, a computing device, such as the computing device 100 shown in
Each computing and communication device 100A/100B/100C can be configured to perform wired or wireless communication. For example, a computing and communication device 100A/100B/100C can be configured to transmit or receive wired or wireless communication signals and can include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a cellular telephone, a personal computer, a tablet computer, a server, consumer electronics, or any similar device. Although each computing and communication device 100A/100B/100C is shown as a single unit, a computing and communication device can include any number of interconnected elements.
Each access point 210A/210B can be any type of device configured to communicate with a computing and communication device 100A/100B/100C, a network 220, or both via wired or wireless communication links 180A/180B/180C. For example, an access point 210A/210B can include a base station, a base transceiver station (BTS), a Node-B, an enhanced Node-B (eNode-B), a Home Node-B (HNode-B), a wireless router, a wired router, a hub, a relay, a switch, or any similar wired or wireless device. Although each access point 210A/210B is shown as a single unit, an access point can include any number of interconnected elements.
The network 220 can be any type of network configured to provide services, such as voice, data, applications, voice over internet protocol (VoIP), or any other communications protocol or combination of communications protocols, over a wired or wireless communication link. For example, the network 220 can be a local area network (LAN), wide area network (WAN), virtual private network (VPN), a mobile or cellular telephone network, the Internet, or any other means of electronic communication. The network can use a communication protocol, such as the transmission control protocol (TCP), the user datagram protocol (UDP), the internet protocol (IP), the real-time transport protocol (RTP), the Hypertext Transfer Protocol (HTTP), or a combination thereof.
The computing and communication devices 100A/100B/100C can communicate with each other via the network 220 using one or more wired or wireless communication links, or via a combination of wired and wireless communication links. For example, as shown, the computing and communication devices 100A/100B can communicate via wireless communication links 180A/180B, and computing and communication device 100C can communicate via a wired communication link 180C. Any of the computing and communication devices 100A/100B/100C may communicate using any wired or wireless communication link, or links. For example, a first computing and communication device 100A can communicate via a first access point 210A using a first type of communication link, a second computing and communication device 100B can communicate via a second access point 210B using a second type of communication link, and a third computing and communication device 100C can communicate via a third access point (not shown) using a third type of communication link. Similarly, the access points 210A/210B can communicate with the network 220 via one or more types of wired or wireless communication links 230A/230B. Although
Other implementations of the computing and communications system 200 are possible. For example, in an implementation, the network 220 can be an ad-hoc network and can omit one or more of the access points 210A/210B. The computing and communications system 200 may include devices, units, or elements not shown in
The encoder 400 can encode an input video stream 402, such as the video stream 300 shown in
For encoding the video stream 402, each frame within the video stream 402 can be processed in units of blocks. Thus, a current block may be identified from the blocks in a frame, and the current block may be encoded.
At the intra/inter prediction unit 410, the current block can be encoded using either intra-frame prediction, which may be within a single frame, or inter-frame prediction, which may be from frame to frame. Intra-prediction may include generating a prediction block from samples in the current frame that have been previously encoded and reconstructed. Inter-prediction may include generating a prediction block from samples in one or more previously constructed reference frames. Generating a prediction block for a current block in a current frame may include performing motion estimation to generate a motion vector indicating an appropriate reference block in the reference frame.
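Motion estimation of the kind described above can be sketched as a search over candidate offsets that minimizes a distortion metric such as the sum of absolute differences (SAD). The following is a minimal full-search sketch, not the disclosed method; the block size, search range, and buffer layout are illustrative assumptions, and the reference pointer is assumed to have sufficient margin for the search window.

```c
#include <stdint.h>
#include <stdlib.h>
#include <limits.h>

#define BLOCK 16 /* assumed block size */
#define RANGE  8 /* assumed search range, in pixels */

/* Sum of absolute differences between the current block and a
 * candidate reference block, both indexed with the same stride. */
static int sad(const uint8_t *cur, const uint8_t *ref, int stride) {
    int total = 0;
    for (int y = 0; y < BLOCK; y++)
        for (int x = 0; x < BLOCK; x++)
            total += abs(cur[y * stride + x] - ref[y * stride + x]);
    return total;
}

/* Full-search motion estimation sketch: try every integer offset in a
 * small window around the co-located position and keep the best
 * (dy, dx) motion vector. */
static void motion_search(const uint8_t *cur, const uint8_t *ref,
                          int stride, int *best_dy, int *best_dx) {
    int best = INT_MAX;
    for (int dy = -RANGE; dy <= RANGE; dy++) {
        for (int dx = -RANGE; dx <= RANGE; dx++) {
            int cost = sad(cur, ref + dy * stride + dx, stride);
            if (cost < best) {
                best = cost;
                *best_dy = dy;
                *best_dx = dx;
            }
        }
    }
}
```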
The intra/inter prediction unit 410 may subtract the prediction block from the current block (raw block) to produce a residual block. The transform unit 420 may perform a block-based transform, which may include transforming the residual block into transform coefficients in, for example, the frequency domain. Examples of block-based transforms include the Karhunen-Loève Transform (KLT), the Discrete Cosine Transform (DCT), and the Singular Value Decomposition Transform (SVD). In an example, the DCT may include transforming a block into the frequency domain. The DCT may include using transform coefficient values based on spatial frequency, with the lowest frequency (i.e. DC) coefficient at the top-left of the matrix and the highest frequency coefficient at the bottom-right of the matrix.
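As a concrete illustration of the block-based transform, the following is a minimal floating-point 2-D DCT-II sketch for an assumed 4×4 residual block; production codecs typically use scaled integer approximations instead.

```c
#include <math.h>

#define N 4 /* assumed transform size for this sketch */

/* Minimal 2-D DCT-II sketch: coeff[0][0] is the DC (lowest frequency)
 * coefficient at the top left; spatial frequency increases toward the
 * bottom right of the matrix. */
static void dct_4x4(const double residual[N][N], double coeff[N][N]) {
    const double pi = acos(-1.0);
    for (int u = 0; u < N; u++) {
        for (int v = 0; v < N; v++) {
            double sum = 0.0;
            for (int x = 0; x < N; x++)
                for (int y = 0; y < N; y++)
                    sum += residual[x][y]
                         * cos((2 * x + 1) * u * pi / (2.0 * N))
                         * cos((2 * y + 1) * v * pi / (2.0 * N));
            const double cu = (u == 0) ? sqrt(1.0 / N) : sqrt(2.0 / N);
            const double cv = (v == 0) ? sqrt(1.0 / N) : sqrt(2.0 / N);
            coeff[u][v] = cu * cv * sum;
        }
    }
}
```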
The quantization unit 430 may convert the transform coefficients into discrete quantum values, which may be referred to as quantized transform coefficients or quantization levels. The quantized transform coefficients can be entropy encoded by the entropy encoding unit 440 to produce entropy-encoded coefficients. Entropy encoding can include using a probability distribution metric. The entropy-encoded coefficients and information used to decode the block, which may include the type of prediction used, motion vectors, and quantizer values, can be output to the compressed bitstream 404. The compressed bitstream 404 can be formatted using various techniques, such as run-length encoding (RLE) and zero-run coding.
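The conversion to quantization levels can be illustrated with a simple uniform quantizer; this is a minimal sketch assuming a single step size per block, whereas real quantizers vary the step with the quantizer index and coefficient position.

```c
#include <math.h>

/* Uniform quantization sketch: a transform coefficient is mapped to a
 * discrete quantization level by dividing by a step size and rounding
 * to the nearest integer. The rounding error is irrecoverable. */
static int quantize(double coeff, double qstep) {
    return (int)lround(coeff / qstep);
}
```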
The reconstruction path can be used to maintain reference frame synchronization between the encoder 400 and a corresponding decoder, such as the decoder 500 shown in
Other variations of the encoder 400 can be used to encode the compressed bitstream 404. For example, a non-transform based encoder 400 can quantize the residual block directly without the transform unit 420. In some implementations, the quantization unit 430 and the dequantization unit 450 may be combined into a single unit.
The decoder 500 may receive a compressed bitstream 502, such as the compressed bitstream 404 shown in
The entropy decoding unit 510 may decode data elements within the compressed bitstream 502 using, for example, Context Adaptive Binary Arithmetic Decoding, to produce a set of quantized transform coefficients. The dequantization unit 520 can dequantize the quantized transform coefficients, and the inverse transform unit 530 can inverse transform the dequantized transform coefficients to produce a derivative residual block, which may correspond with the derivative residual block generated by the inverse transform unit 460 shown in
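The decoder-side arithmetic mirrors the encoder's quantization; the following is a minimal sketch, assuming the uniform quantizer sketched above and 8-bit pixels, with the inverse transform itself omitted.

```c
/* Dequantization sketch: recover an approximate coefficient from a
 * quantization level by multiplying by the same step size. */
static double dequantize(int level, double qstep) {
    return level * qstep;
}

/* Reconstruction sketch: after the inverse transform (omitted here),
 * the derivative residual is added back to the prediction and clamped
 * to the 8-bit pixel range. */
static int clamp255(int v) {
    return v < 0 ? 0 : (v > 255 ? 255 : v);
}

static int reconstruct_pixel(int predicted, int derivative_residual) {
    return clamp255(predicted + derivative_residual);
}
```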
Other variations of the decoder 500 can be used to decode the compressed bitstream 502. For example, the decoder 500 can produce the output video stream 504 without the deblocking filtering unit 570.
In some implementations, video coding may include ordered block-level coding. Ordered block-level coding may include coding blocks of a frame in an order, such as raster-scan order, wherein blocks may be identified and processed starting with a block in the upper left corner of the frame, or portion of the frame, and proceeding along rows from left to right and from the top row to the bottom row, identifying each block in turn for processing. For example, the superblock in the top row and left column of a frame may be the first block coded, and the superblock immediately to the right of the first block may be the second block coded. Rows may be coded from top to bottom, such that the superblock in the left column of the second row may be coded after the superblock in the rightmost column of the first row.
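Raster-scan order reduces to a pair of nested loops; a minimal sketch follows, where code_superblock() is a hypothetical helper standing in for the per-block coding steps and the frame dimensions are assumed to be multiples of 64.

```c
/* Raster-scan coding order sketch: superblocks are visited left to
 * right within a row, and rows are visited top to bottom. */
void code_superblock(int row, int col); /* hypothetical helper */

void code_frame_raster_scan(int frame_width, int frame_height) {
    const int sb = 64; /* superblock size */
    for (int row = 0; row < frame_height; row += sb)    /* top to bottom */
        for (int col = 0; col < frame_width; col += sb) /* left to right */
            code_superblock(row, col);
}
```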
In some implementations, coding a block may include using quad-tree coding, which may include coding smaller block units within a block in raster-scan order. For example, the 64×64 superblock shown in the bottom left corner of the portion of the frame shown in
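Quad-tree coding can be sketched as a recursion that either codes a block as a unit or splits it into four quadrants, each visited in raster-scan order; should_split() and code_block() below are hypothetical helpers, and the 4×4 minimum size is an illustrative assumption.

```c
/* Quad-tree coding sketch: a block is either coded whole or split into
 * four sub-blocks, which are themselves visited in raster-scan order
 * (top-left, top-right, bottom-left, bottom-right). */
int should_split(int row, int col, int size); /* hypothetical helper */
void code_block(int row, int col, int size);  /* hypothetical helper */

void code_quadtree(int row, int col, int size) {
    if (size > 4 && should_split(row, col, size)) {
        int half = size / 2;
        code_quadtree(row,        col,        half); /* top-left */
        code_quadtree(row,        col + half, half); /* top-right */
        code_quadtree(row + half, col,        half); /* bottom-left */
        code_quadtree(row + half, col + half, half); /* bottom-right */
    } else {
        code_block(row, col, size); /* code as a single unit */
    }
}
```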
Although right-edge extension for quad-tree intra-prediction is described herein with reference to matrix or Cartesian representation of a frame for clarity, a frame may be stored, transmitted, processed, or any combination thereof, in any data structure such that pixel values may be efficiently predicted for a frame or image. For example, a frame may be stored, transmitted, processed, or any combination thereof, in a two dimensional data structure such as a matrix as shown, or in a one dimensional data structure, such as a vector array. In an implementation, a representation of the frame, such as a two dimensional representation as shown, may correspond to a physical location in a rendering of the frame as an image. For example, a pixel in the top left corner of a block in the top left corner of the frame may correspond with a physical pixel in the top left corner of a rendering of the frame as an image.
For example, horizontal prediction may be used to predict pixel values based on horizontal similarities between pixels and can include filling each column of a current block with a copy of a column to the left of the current block. Similarly, vertical prediction may be used to predict pixel values based on vertical similarities between pixels and can include filling each row of a current block with a copy of a row above the current block. Diagonal prediction may be used to predict pixel values based on diagonal similarities between pixels and can include filling a column or row of pixels of a current block with copies of pixels from a block diagonal to the current block.
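Horizontal and vertical prediction amount to copying a reconstructed reference column or row across the current block; a minimal sketch follows, assuming an 8-bit frame buffer indexed as frame[row * stride + col], a 4×4 block, and reference pixels that have already been reconstructed.

```c
#include <stdint.h>

#define BLOCK 4 /* assumed block size for this sketch */

/* Horizontal prediction: fill each row of the current block with the
 * reconstructed pixel immediately to its left. */
void predict_horizontal(uint8_t *frame, int stride, int row, int col) {
    for (int y = 0; y < BLOCK; y++) {
        uint8_t left = frame[(row + y) * stride + (col - 1)];
        for (int x = 0; x < BLOCK; x++)
            frame[(row + y) * stride + (col + x)] = left;
    }
}

/* Vertical prediction: fill each column of the current block with the
 * reconstructed pixel immediately above it. */
void predict_vertical(uint8_t *frame, int stride, int row, int col) {
    for (int x = 0; x < BLOCK; x++) {
        uint8_t above = frame[(row - 1) * stride + (col + x)];
        for (int y = 0; y < BLOCK; y++)
            frame[(row + y) * stride + (col + x)] = above;
    }
}
```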
For clarity, blocks in a frame may be identified based on location relative to a current block in the frame, wherein the current block refers to the block currently identified for coding, a pixel of the current block identified for coding may be referred to as the current pixel, the 8×8 block including the current block may be referred to as the current 8×8 block, the 16×16 block including the current block may be referred to as the current 16×16 block, the 32×32 block including the current block may be referred to as the current 32×32 block, and the 64×64 block including the current block may be referred to as the current 64×64 block or the current superblock.
In
For example, as shown in
In some implementations, diagonal intra-coding may include top-right diagonal down intra-prediction, wherein values of the rightmost column of the current block may be predicted by copying the values of blocks in a row diagonally above and to the right of the current block as shown.
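A minimal sketch of top-right diagonal down prediction follows, assuming the reference pixels in the row above and above-right of the current block are available; the exact reference indexing along the diagonal is an illustrative assumption.

```c
#include <stdint.h>

#define BLOCK 4 /* assumed block size */

/* Top-right diagonal down sketch: the pixel at (y, x) of the current
 * block is copied from a reconstructed pixel in the row immediately
 * above the block, offset to the right along the 45-degree diagonal.
 * This reads unreconstructed pixels when the above-right references
 * are unavailable, which is the case right-edge extension addresses. */
void predict_top_right_diag_down(uint8_t *frame, int stride,
                                 int row, int col) {
    for (int y = 0; y < BLOCK; y++)
        for (int x = 0; x < BLOCK; x++)
            frame[(row + y) * stride + (col + x)] =
                frame[(row - 1) * stride + (col + x + y + 1)];
}
```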
As shown, the blocks immediately above and to the right of the current block may be unavailable for predicting the current block. Although
Although not shown in
As shown in
Although
In some implementations, a current frame may be identified at 1110. The current frame may be a frame, such as the frame 330 shown in
In some implementations, a current block may be identified at 1120. Identifying the current block may include determining whether to code the current block using top-right diagonal down intra-prediction, and may include identifying a current block including current pixels, which may be right edge pixels of the current block, where reference pixels for performing diagonal intra-prediction of the current pixels are unavailable. For example, the top right 4×4 block of the top right 8×8 block of the bottom right 16×16 block of the bottom right 32×32 block of the bottom left superblock of a portion of a frame, such as the current block shown in
In some implementations, a proximal above pixel may be identified at 1130. For example, the proximal above pixel may be the bottom right pixel in the block immediately above the current block and the current pixels, and may be available for prediction, such as the proximal above pixel PA shown in
In some implementations, a distant above pixel may be identified at 1140. For example, identifying the distant above pixel may include identifying the nearest, or most proximate, pixel above the current block and the current pixels that is available for prediction and has a pixel proximally to the right that is available for prediction, such as the distant above pixel DA shown in
In some implementations, a prediction pixel may be generated at 1150. Generating a prediction pixel may include identifying a current pixel in the current block for coding, identifying a prediction pixel corresponding to the current pixel, and identifying a distant above right pixel corresponding to the prediction pixel. The distant above right pixel may be proximally to the right of the distant above pixel and may be available for prediction. For example, as shown in
In some implementations, a predicted pixel may be generated at 1160. Generating the predicted pixel for the current block may include using the prediction pixel generated at 1150. For example, a predicted pixel for a current pixel of the current block may be generated using the value of the corresponding prediction pixel as the value of the predicted pixel.
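Tying the steps at 1110-1160 together, the following sketch generates a prediction pixel from the proximal above pixel (PA), the distant above pixel (DA), and the distant above right pixel (DAR). The combination shown, carrying the distant row's horizontal gradient onto the proximal row, is an assumption chosen for illustration, not necessarily the exact combination of the disclosed method.

```c
#include <stdint.h>

static int clamp255(int v) {
    return v < 0 ? 0 : (v > 255 ? 255 : v);
}

/* Right-edge extension sketch. pa is the proximal above pixel
 * (reconstructed, immediately above the current block); da is the
 * distant above pixel (the most proximate reconstructed pixel above pa
 * whose right neighbor is available); dar is the pixel proximally to
 * the right of da. The returned prediction pixel stands in for the
 * unavailable above-right reference pixel; the predicted pixel for the
 * current block then takes the prediction pixel's value. The gradient
 * form pa + (dar - da) is an assumed combination. */
uint8_t extend_right_edge(uint8_t pa, uint8_t da, uint8_t dar) {
    return (uint8_t)clamp255((int)pa + ((int)dar - (int)da));
}
```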
Other implementations of the diagrams of right-edge extension for quad-tree intra-prediction as shown in
The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an embodiment” or “one embodiment” or “an implementation” or “one implementation” throughout is not intended to mean the same embodiment or implementation unless described as such. As used herein, the terms “determine” and “identify”, or any variations thereof, includes selecting, ascertaining, computing, looking up, receiving, determining, establishing, obtaining, or otherwise identifying or determining in any manner whatsoever using one or more of the devices shown in
Further, for simplicity of explanation, although the figures and descriptions herein may include sequences or series of steps or stages, elements of the methods disclosed herein can occur in various orders and/or concurrently. Additionally, elements of the methods disclosed herein may occur with other elements not explicitly presented and described herein. Furthermore, not all elements of the methods described herein may be required to implement a method in accordance with the disclosed subject matter.
The implementations of the transmitting station 100A and/or the receiving station 100B (and the algorithms, methods, instructions, etc. stored thereon and/or executed thereby) can be realized in hardware, software, or any combination thereof. The hardware can include, for example, computers, intellectual property (IP) cores, application-specific integrated circuits (ASICs), programmable logic arrays, optical processors, programmable logic controllers, microcode, microcontrollers, servers, microprocessors, digital signal processors or any other suitable circuit. In the claims, the term “processor” should be understood as encompassing any of the foregoing hardware, either singly or in combination. The terms “signal” and “data” are used interchangeably. Further, portions of the transmitting station 100A and the receiving station 100B do not necessarily have to be implemented in the same manner.
Further, in one implementation, for example, the transmitting station 100A or the receiving station 100B can be implemented using a general purpose computer or processor with a computer program that, when executed, carries out any of the respective methods, algorithms, and/or instructions described herein. In addition or alternatively, for example, a special purpose computer/processor can be utilized which can contain specialized hardware for carrying out any of the methods, algorithms, or instructions described herein.
The transmitting station 100A and receiving station 100B can, for example, be implemented on computers in a real-time video system. Alternatively, the transmitting station 100A can be implemented on a server and the receiving station 100B can be implemented on a device separate from the server, such as a hand-held communications device. In this instance, the transmitting station 100A can encode content using an encoder 400 into an encoded video signal and transmit the encoded video signal to the communications device. In turn, the communications device can then decode the encoded video signal using a decoder 500. Alternatively, the communications device can decode content stored locally on the communications device, for example, content that was not transmitted by the transmitting station 100A. Other suitable transmitting station 100A and receiving station 100B implementation schemes are available. For example, the receiving station 100B can be a generally stationary personal computer rather than a portable communications device and/or a device including an encoder 400 may also include a decoder 500.
Further, all or a portion of implementations can take the form of a computer program product accessible from, for example, a tangible computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport the program for use by or in connection with any processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or a semiconductor device. Other suitable mediums are also available.
The above-described implementations have been described in order to allow easy understanding of the application and are not limiting. On the contrary, the application covers various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structure as is permitted under the law.