Remote access encoding

Information

  • Patent Grant
  • Patent Number
    9,225,979
  • Date Filed
    Wednesday, January 30, 2013
  • Date Issued
    Tuesday, December 29, 2015
  • Inventors
  • Original Assignees
  • Examiners
    • Ustaris; Joseph
    • Nawaz; Talha
  • Agents
    • Young Basile, Hanlon & MacFarlane P.C.
  • CPC
    • H04N19/00006
  • Field of Search
    • US
  • International Classifications
    • H04B1/66
    • H04N7/12
    • H04N11/02
    • H04N11/04
    • H04N19/10
  • Term Extension
    417 days
Abstract
A method and apparatus for remote access encoding is provided. Remote access encoding may include receiving, at a host device, from a client device, a remote access request indicating a portion of a display area of an operating environment of the host device, rendering a representation of the portion of the display area, wherein rendering includes generating rendered content including a plurality of frames, generating an encoded block, and transmitting encoded content to the client device, wherein the encoded content includes the encoded block. Generating the encoded block may include identifying a current block from a plurality of blocks in a current frame, wherein the current frame is one of the plurality of frames, determining whether the current block is a static block, determining a coding quality for encoding the current block, and determining whether to encode the current block as a skipped block.
Description
TECHNICAL FIELD

This application relates to computer implemented applications.


BACKGROUND

A computing device may execute an operating environment that may include elements, such as file system objects and executing applications. The computing device may render a representation of the operating environment as part of a graphical interface, which may be output for presentation on a display unit of the computing device. The representation of the operating environment may be rendered at a defined display resolution, which may define a display area included in the graphical interface. Accordingly, it would be advantageous to provide high resolution video transmitted over communications channels having limited bandwidth.


SUMMARY

Disclosed herein are aspects of systems, methods, and apparatuses for remote access encoding.


An aspect is a method for remote access encoding. Remote access encoding may include receiving, at a host device, from a client device, a remote access request indicating a portion of a display area of an operating environment of the host device, rendering a representation of the portion of the display area, wherein rendering includes generating rendered content including a plurality of frames, generating an encoded block, and transmitting encoded content to the client device, wherein the encoded content includes the encoded block. Generating the encoded block may include identifying a current block from a plurality of blocks in a current frame, wherein the current frame is one of the plurality of frames, determining whether the current block is a static block, determining a coding quality for encoding the current block, and determining whether to encode the current block as a skipped block.


Another aspect is another method for remote access encoding. Remote access encoding may include receiving, at a host device, from a client device, a remote access request indicating a portion of a display area of an operating environment of the host device, rendering a representation of the portion of the display area, wherein rendering includes generating rendered content including a plurality of frames, generating an encoded block, and transmitting encoded content to the client device, wherein the encoded content includes the encoded block. Generating the encoded block may include identifying a current block from a plurality of blocks in a current frame, wherein the current frame is one of the plurality of frames, identifying a reference block from a plurality of blocks in a reference frame, on a condition that the reference block is a high quality reference block and the current block is a static block, encoding the current block as a skipped block and indicating that the skipped block is a high quality block, and on a condition that the reference block is a low quality reference block and the current block is a static block, encoding the current block as a skipped block and indicating that the skipped block is a low quality block.


Variations in these and other aspects will be described in additional detail hereafter.





BRIEF DESCRIPTION OF THE DRAWINGS

The description herein makes reference to the accompanying drawings wherein like reference numerals refer to like parts throughout the several views, and wherein:



FIG. 1 is a diagram of a computing device in accordance with implementations of this disclosure;



FIG. 2 is a diagram of a computing and communications system in accordance with implementations of this disclosure;



FIG. 3 is a diagram of a video stream for encoding and decoding in accordance with implementations of this disclosure;



FIG. 4 is a block diagram of a video compression device in accordance with implementations of this disclosure;



FIG. 5 is a block diagram of a video decompression device in accordance with implementations of this disclosure;



FIG. 6 is a diagram of remote access in accordance with implementations of this disclosure; and



FIG. 7 is a diagram of remote access encoding in accordance with implementations of this disclosure.





DETAILED DESCRIPTION

Remote access technologies, such as remote desktop or screen sharing, may allow a computing device (client) to remotely access an operating environment of another computing device (host). For example, the host device may render a representation of a display area of the operating environment, which may be associated with a defined resolution, and may transmit the rendered output to the client device for presentation on a display unit of the client device. Rendering the representation of the display area may include, for example, encoding the content of the display area as a series of frames, which may include video compression using one or more video compression schemes. Video compression schemes may include identifying temporal or spatial similarities between frames, or between blocks in a frame, and omitting repetitious information from the encoded output.


Content rendered for remote access may include significant areas of static content, wherein corresponding portions of consecutive frames remain unchanged and corresponding pixel values are identical. For example, elements of the operating environment, such as a background or an out of focus window, may remain static for two or more consecutive frames. Implementations of remote access encoding may improve coding efficiency and quality by increasing the likelihood that static content is compressed using high quality configuration, and encoding blocks including static content as skipped blocks. Portions including static content can be identified using a quality oriented technique. In some implementations, context information, such as information indicating movement of a window within the operating environment of the host device, may be used to identify portions to encode using high quality configuration.



FIG. 1 is a diagram of a computing device 100 in accordance with implementations of this disclosure. A computing device 100 can include a communication interface 110, a communication unit 120, a user interface (UI) 130, a processor 140, a memory 150, instructions 160, a power source 170, or any combination thereof. As used herein, the term “computing device” includes any unit, or combination of units, capable of performing any method, or any portion or portions thereof, disclosed herein.


The computing device 100 may be a stationary computing device, such as a personal computer (PC), a server, a workstation, a minicomputer, or a mainframe computer; or a mobile computing device, such as a mobile telephone, a personal digital assistant (PDA), a laptop, or a tablet PC. Although shown as a single unit, any one or more elements of the computing device 100 can be integrated into any number of separate physical units. For example, the UI 130 and processor 140 can be integrated in a first physical unit and the memory 150 can be integrated in a second physical unit.


The communication interface 110 can be a wireless antenna, as shown, a wired communication port, such as an Ethernet port, an infrared port, a serial port, or any other wired or wireless unit capable of interfacing with a wired or wireless electronic communication medium 180.


The communication unit 120 can be configured to transmit or receive signals via a wired or wireless medium 180. For example, as shown, the communication unit 120 is operatively connected to an antenna configured to communicate via wireless signals. Although not explicitly shown in FIG. 1, the communication unit 120 can be configured to transmit, receive, or both via any wired or wireless communication medium, such as radio frequency (RF), ultra violet (UV), visible light, fiber optic, wire line, or a combination thereof. Although FIG. 1 shows a single communication unit 120 and a single communication interface 110, any number of communication units and any number of communication interfaces can be used.


The UI 130 can include any unit capable of interfacing with a user, such as a virtual or physical keypad, a touchpad, a display, a touch display, a speaker, a microphone, a video camera, a sensor, or any combination thereof. The UI 130 can be operatively coupled with the processor, as shown, or with any other element of the communication device 100, such as the power source 170. Although shown as a single unit, the UI 130 may include one or more physical units. For example, the UI 130 may include an audio interface for performing audio communication with a user, and a touch display for performing visual and touch based communication with the user. Although shown as separate units, the communication interface 110, the communication unit 120, and the UI 130, or portions thereof, may be configured as a combined unit. For example, the communication interface 110, the communication unit 120, and the UI 130 may be implemented as a communications port capable of interfacing with an external touchscreen device.


The processor 140 can include any device or system capable of manipulating or processing a signal or other information now-existing or hereafter developed, including optical processors, quantum processors, molecular processors, or a combination thereof. For example, the processor 140 can include a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a programmable logic array, a programmable logic controller, microcode, firmware, any type of integrated circuit (IC), a state machine, or any combination thereof. As used herein, the term “processor” includes a single processor or multiple processors. The processor can be operatively coupled with the communication interface 110, the communication unit 120, the UI 130, the memory 150, the instructions 160, the power source 170, or any combination thereof.


The memory 150 can include any non-transitory computer-usable or computer-readable medium, such as any tangible device that can, for example, contain, store, communicate, or transport the instructions 160, or any information associated therewith, for use by or in connection with the processor 140. The non-transitory computer-usable or computer-readable medium can be, for example, a solid state drive, a memory card, removable media, a read only memory (ROM), a random access memory (RAM), any type of disk including a hard disk, a floppy disk, an optical disk, a magnetic or optical card, an application specific integrated circuit (ASIC), or any type of non-transitory media suitable for storing electronic information, or any combination thereof. The memory 150 can be connected to, for example, the processor 140 through, for example, a memory bus (not explicitly shown).


The instructions 160 can include directions for performing any method, or any portion or portions thereof, disclosed herein. The instructions 160 can be realized in hardware, software, or any combination thereof. For example, the instructions 160 may be implemented as information stored in the memory 150, such as a computer program, that may be executed by the processor 140 to perform any of the respective methods, algorithms, aspects, or combinations thereof, as described herein. The instructions 160, or a portion thereof, may be implemented as a special purpose processor, or circuitry, that can include specialized hardware for carrying out any of the methods, algorithms, aspects, or combinations thereof, as described herein. Portions of the instructions 160 can be distributed across multiple processors on the same machine or different machines or across a network such as a local area network, a wide area network, the Internet, or a combination thereof.


The power source 170 can be any suitable device for powering the computing device 100. For example, the power source 170 can include a wired power source; one or more dry cell batteries, such as nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion); solar cells; fuel cells; or any other device capable of powering the computing device 100. The communication interface 110, the communication unit 120, the UI 130, the processor 140, the instructions 160, the memory 150, or any combination thereof, can be operatively coupled with the power source 170.


Although shown as separate elements, the communication interface 110, the communication unit 120, the UI 130, the processor 140, the instructions 160, the power source 170, the memory 150, or any combination thereof can be integrated in one or more electronic units, circuits, or chips.



FIG. 2 is a diagram of a computing and communications system 200 in accordance with implementations of this disclosure. The computing and communications system 200 may include one or more computing and communication devices 100A/100B/100C, one or more access points 210A/210B, one or more networks 220, or a combination thereof. For example, the computing and communication system 200 can be a multiple access system that provides communication, such as voice, data, video, messaging, broadcast, or a combination thereof, to one or more wired or wireless communicating devices, such as the computing and communication devices 100A/100B/100C. Although, for simplicity, FIG. 2 shows three computing and communication devices 100A/100B/100C, two access points 210A/210B, and one network 220, any number of computing and communication devices, access points, and networks can be used.


A computing and communication device 100A/100B/100C can be, for example, a computing device, such as the computing device 100 shown in FIG. 1. For example, as shown, the computing and communication devices 100A/100B may be user devices, such as a mobile computing device, a laptop, a thin client, or a smartphone, and the computing and communication device 100C may be a server, such as a mainframe or a cluster. Although the computing and communication devices 100A/100B are described as user devices, and the computing and communication device 100C is described as a server, any computing and communication device may perform some or all of the functions of a server, some or all of the functions of a user device, or some or all of the functions of a server and a user device.


Each computing and communication device 100A/100B/100C can be configured to perform wired or wireless communication. For example, a computing and communication device 100A/100B/100C can be configured to transmit or receive wired or wireless communication signals and can include a user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a cellular telephone, a personal computer, a tablet computer, a server, consumer electronics, or any similar device. Although each computing and communication device 100A/100B/100C is shown as a single unit, a computing and communication device can include any number of interconnected elements.


Each access point 210A/210B can be any type of device configured to communicate with a computing and communication device 100A/100B/100C, a network 220, or both via wired or wireless communication links 180A/180B/180C. For example, an access point 210A/210B can include a base station, a base transceiver station (BTS), a Node-B, an enhanced Node-B (eNode-B), a Home Node-B (HNode-B), a wireless router, a wired router, a hub, a relay, a switch, or any similar wired or wireless device. Although each access point 210A/210B is shown as a single unit, an access point can include any number of interconnected elements.


The network 220 can be any type of network configured to provide services, such as voice, data, applications, voice over internet protocol (VoIP), or any other communications protocol or combination of communications protocols, over a wired or wireless communication link. For example, the network 220 can be a local area network (LAN), wide area network (WAN), virtual private network (VPN), a mobile or cellular telephone network, the Internet, or any other means of electronic communication. The network can use a communication protocol, such as the transmission control protocol (TCP), the user datagram protocol (UDP), the internet protocol (IP), the real-time transport protocol (RTP), the Hyper Text Transport Protocol (HTTP), or a combination thereof.


The computing and communication devices 100A/100B/100C can communicate with each other via the network 220 using one or more wired or wireless communication links, or via a combination of wired and wireless communication links. For example, as shown, the computing and communication devices 100A/100B can communicate via wireless communication links 180A/180B, and the computing and communication device 100C can communicate via a wired communication link 180C. Any of the computing and communication devices 100A/100B/100C may communicate using any wired or wireless communication link, or links. For example, a first computing and communication device 100A can communicate via a first access point 210A using a first type of communication link, a second computing and communication device 100B can communicate via a second access point 210B using a second type of communication link, and a third computing and communication device 100C can communicate via a third access point (not shown) using a third type of communication link. Similarly, the access points 210A/210B can communicate with the network 220 via one or more types of wired or wireless communication links 230A/230B. Although FIG. 2 shows the computing and communication devices 100A/100B/100C in communication via the network 220, the computing and communication devices 100A/100B/100C can communicate with each other via any number of communication links, such as a direct wired or wireless communication link.


Other implementations of the computing and communications system 200 are possible. For example, in an implementation the network 220 can be an ad-hoc network and can omit one or more of the access points 210A/210B. The computing and communications system 200 may include devices, units, or elements not shown in FIG. 2. For example, the computing and communications system 200 may include many more communicating devices, networks, and access points.



FIG. 3 is a diagram of a video stream 300 for use in encoding and decoding in accordance with implementations of this disclosure. A video stream 300, such as a video stream generated by a host device during a remote desktop session, may include a video sequence 310. The video sequence 310 may include a sequence of adjacent frames 320. Although three adjacent frames 320 are shown, the video sequence 310 can include any number of adjacent frames 320. Each frame 330 from the adjacent frames 320 may represent a single image from the video stream. A frame 330 may include blocks 340. Although not shown in FIG. 3, a block can include pixels. For example, a block can include a 16×16 group of pixels, an 8×8 group of pixels, an 8×16 group of pixels, or any other group of pixels. Unless otherwise indicated herein, the term ‘block’ can include a macroblock, a segment, a slice, or any other portion of a frame. A frame, a block, a pixel, or a combination thereof can include display information, such as luminance information, chrominance information, or any other information that can be used to store, modify, communicate, or display the video stream or a portion thereof.
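To make the frame and block structure described above concrete, the following is a minimal Python sketch of partitioning a frame into fixed-size blocks. The 16×16 block size, the list-of-lists frame layout, and all names are illustrative assumptions, not a representation required by this disclosure.

```python
# Minimal sketch: partitioning a luma-only frame into fixed-size blocks.
# The block size and the list-of-lists frame layout are illustrative.
from typing import List

Block = List[List[int]]  # rows of pixel values

def partition_into_blocks(frame: List[List[int]], block_size: int = 16) -> List[Block]:
    """Split a frame (rows of pixel values) into block_size x block_size blocks,
    scanning left-to-right, top-to-bottom. Assumes the frame dimensions are
    multiples of block_size for simplicity."""
    height = len(frame)
    width = len(frame[0]) if height else 0
    blocks = []
    for top in range(0, height, block_size):
        for left in range(0, width, block_size):
            block = [row[left:left + block_size] for row in frame[top:top + block_size]]
            blocks.append(block)
    return blocks

# Example: a 32x32 frame yields four 16x16 blocks.
frame = [[(r + c) % 256 for c in range(32)] for r in range(32)]
print(len(partition_into_blocks(frame)))  # 4
```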



FIG. 4 is a block diagram of an encoder 400 in accordance with implementations of this disclosure. The encoder 400 can be implemented in a device, such as the computing device 100 shown in FIG. 1 or the computing and communication devices 100A/100B/100C shown in FIG. 2, as, for example, a computer software program stored in a data storage unit, such as the memory 150 shown in FIG. 1. The computer software program can include machine instructions that may be executed by a processor, such as the processor 140 shown in FIG. 1, and may cause the device to encode video data as described herein. The encoder 400 can be implemented as specialized hardware included, for example, in the computing device 100.


The encoder 400 can encode an input video stream 402, such as the video stream 300 shown in FIG. 3 to generate an encoded (compressed) bitstream 404. In some implementations, the encoder 400 may include a forward path for generating the compressed bitstream 404. The forward path may include an intra/inter prediction unit 410, a transform unit 420, a quantization unit 430, an entropy encoding unit 440, or any combination thereof. In some implementations, the encoder 400 may include a reconstruction path (indicated by the broken connection lines) to reconstruct a frame for encoding of further blocks. The reconstruction path may include a dequantization unit 450, an inverse transform unit 460, a reconstruction unit 470, a loop filtering unit 480, or any combination thereof. Other structural variations of the encoder 400 can be used to encode the video stream 402.


For encoding the video stream 402, each frame within the video stream 402 can be processed in units of blocks. Thus, a current block may be identified from the blocks in a frame, and the current block may be encoded.


At the intra/inter prediction unit 410, the current block can be encoded using either intra-frame prediction, which may be within a single frame, or inter-frame prediction, which may be from frame to frame. Intra-prediction may include generating a prediction block from samples in the current frame that have been previously encoded and reconstructed. Inter-prediction may include generating a prediction block from samples in one or more previously constructed reference frames. Generating a prediction block for a current block in a current frame may include performing motion estimation to generate a motion vector indicating an appropriate reference block in the reference frame.
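As one illustration of the motion estimation step described above, the following sketch performs an exhaustive SAD-based search over a small window around the block's position. The search range, the SAD cost measure, and all names are assumptions for illustration; a production encoder would typically use a faster search strategy.

```python
# Minimal sketch of SAD-based motion estimation for inter prediction.
import numpy as np

def sad(a: np.ndarray, b: np.ndarray) -> int:
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def estimate_motion(current_block: np.ndarray, reference_frame: np.ndarray,
                    block_row: int, block_col: int, search_range: int = 8):
    """Return ((dy, dx), cost) minimizing SAD between the current block and a
    candidate reference block displaced by (dy, dx) in the reference frame."""
    h, w = current_block.shape
    best = (0, 0)
    best_cost = None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            top, left = block_row + dy, block_col + dx
            if top < 0 or left < 0 or top + h > reference_frame.shape[0] \
                    or left + w > reference_frame.shape[1]:
                continue  # candidate falls outside the reference frame
            cost = sad(current_block, reference_frame[top:top + h, left:left + w])
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best, best_cost
```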


The intra/inter prediction unit 410 may subtract the prediction block from the current block (raw block) to produce a residual block. The transform unit 420 may perform a block-based transform, which may include transforming the residual block into transform coefficients in, for example, the frequency domain. Examples of block-based transforms include the Karhunen-Loève Transform (KLT), the Discrete Cosine Transform (DCT), and the Singular Value Decomposition Transform (SVD). In an example, the DCT may include transforming a block into the frequency domain. The DCT may include using transform coefficient values based on spatial frequency, with the lowest frequency (i.e. DC) coefficient at the top-left of the matrix and the highest frequency coefficient at the bottom-right of the matrix.
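The following sketch illustrates the block-based DCT described above, producing a coefficient matrix with the DC coefficient at the top-left and higher spatial frequencies toward the bottom-right. The orthonormal 8×8 DCT-II construction and the helper names are illustrative assumptions.

```python
# Minimal sketch of an orthonormal 2D DCT-II applied to an 8x8 residual block.
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] *= 1.0 / np.sqrt(2.0)
    return m * np.sqrt(2.0 / n)

def dct2(block: np.ndarray) -> np.ndarray:
    """Forward 2D transform: DC coefficient ends up at [0, 0]."""
    d = dct_matrix(block.shape[0])
    return d @ block @ d.T

def idct2(coeffs: np.ndarray) -> np.ndarray:
    """Inverse 2D transform back to the spatial domain."""
    d = dct_matrix(coeffs.shape[0])
    return d.T @ coeffs @ d

residual = np.random.randint(-16, 16, (8, 8)).astype(float)
coeffs = dct2(residual)
assert np.allclose(idct2(coeffs), residual)  # the transform is invertible
```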


The quantization unit 430 may convert the transform coefficients into discrete quantum values, which may be referred to as quantized transform coefficients or quantization levels. The quantized transform coefficients can be entropy encoded by the entropy encoding unit 440 to produce entropy-encoded coefficients. Entropy encoding can include using a probability distribution metric. The entropy-encoded coefficients and information used to decode the block, which may include the type of prediction used, motion vectors, and quantizer values, can be output to the compressed bitstream 404. The compressed bitstream 404 can be formatted using various techniques, such as run-length encoding (RLE) and zero-run coding.
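A minimal sketch of the quantization step follows, assuming a single uniform quantizer step size; real encoders use per-coefficient quantization matrices and rate control, so the step value and names here are illustrative only.

```python
# Minimal sketch of uniform quantization of transform coefficients into
# quantization levels, and the corresponding dequantization.
import numpy as np

def quantize(coeffs: np.ndarray, step: float) -> np.ndarray:
    """Map transform coefficients to integer quantization levels."""
    return np.round(coeffs / step).astype(np.int32)

def dequantize(levels: np.ndarray, step: float) -> np.ndarray:
    """Reconstruct approximate coefficients from quantization levels."""
    return levels.astype(float) * step

# A larger step size gives coarser levels (more distortion, fewer bits).
coeffs = np.array([[52.3, -7.1], [3.4, 0.6]])
print(quantize(coeffs, step=4.0))   # e.g. [[13 -2] [ 1  0]]
print(dequantize(quantize(coeffs, step=4.0), step=4.0))
```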


The reconstruction path can be used to maintain reference frame synchronization between the encoder 400 and a corresponding decoder, such as the decoder 500 shown in FIG. 5. The reconstruction path may be similar to the decoding process discussed below, and may include dequantizing the quantized transform coefficients at the dequantization unit 450 and inverse transforming the dequantized transform coefficients at the inverse transform unit 460 to produce a derivative residual block. The reconstruction unit 470 may add the prediction block generated by the intra/inter prediction unit 410 to the derivative residual block to create a reconstructed block. The loop filtering unit 480 can be applied to the reconstructed block to reduce distortion, such as blocking artifacts.


Other variations of the encoder 400 can be used to encode the compressed bitstream 404. For example, a non-transform based encoder 400 can quantize the residual block directly without the transform unit 420. In some implementations, the quantization unit 430 and the dequantization unit 450 may be combined into a single unit.



FIG. 5 is a block diagram of a decoder 500 in accordance with implementations of this disclosure. The decoder 500 can be implemented in a device, such as the computing device 100 shown in FIG. 1 or the computing and communication devices 100A/100B/100C shown in FIG. 2, as, for example, a computer software program stored in a data storage unit, such as the memory 150 shown in FIG. 1. The computer software program can include machine instructions that may be executed by a processor, such as the processor 140 shown in FIG. 1, and may cause the device to decode video data as described herein. The decoder 500 can be implemented as specialized hardware included, for example, in the computing device 100.


The decoder 500 may receive a compressed bitstream 502, such as the compressed bitstream 404 shown in FIG. 4, and may decode the compressed bitstream 502 to generate an output video stream 504. The decoder 500 may include an entropy decoding unit 510, a dequantization unit 520, an inverse transform unit 530, an intra/inter prediction unit 540, a reconstruction unit 550, a loop filtering unit 560, a deblocking filtering unit 570, or any combination thereof. Other structural variations of the decoder 500 can be used to decode the compressed bitstream 502.


The entropy decoding unit 510 may decode data elements within the compressed bitstream 502 using, for example, Context Adaptive Binary Arithmetic Decoding, to produce a set of quantized transform coefficients. The dequantization unit 520 can dequantize the quantized transform coefficients, and the inverse transform unit 530 can inverse transform the dequantized transform coefficients to produce a derivative residual block, which may correspond with the derivative residual block generated by the inverse transform unit 460 shown in FIG. 4. Using header information decoded from the compressed bitstream 502, the intra/inter prediction unit 540 may generate a prediction block corresponding to the prediction block created in the encoder 400. At the reconstruction unit 550, the prediction block can be added to the derivative residual block to create a reconstructed block. The loop filtering unit 560 can be applied to the reconstructed block to reduce blocking artifacts. The deblocking filtering unit 570 can be applied to the reconstructed block to reduce blocking distortion, and the result may be output as the output video stream 504.


Other variations of the decoder 500 can be used to decode the compressed bitstream 502. For example, the decoder 500 can produce the output video stream 504 without the deblocking filtering unit 570.



FIG. 6 is a diagram of remote access in accordance with implementations of this disclosure. Remote access may include a host device 610, which may be a computing device, such as the computing device 100 shown in FIG. 1 or the computing and communication devices 100A/100B/100C shown in FIG. 2, communicating with a client device 620, which may be a computing device, such as the computing device 100 shown in FIG. 1 or the computing and communication devices 100A/100B/100C shown in FIG. 2, via a network 630, such as the network 220 shown in FIG. 2.


The host device 610 may execute an operating environment, which may include an instance of an operating system and may be associated with an account, such as a logged in user account. As shown, a representation of the operating environment may include a display area 640. The display area 640 may indicate a height and a width of the representation of the operating environment. For example, the display area 640 may be associated with a defined display resolution, which may be expressed in physical units of measure, such as millimeters, or logical units of measure, such as pixels. For example, the display area 640 may have a display resolution of 1920 (width) by 1080 (height) pixels. The host device 610 may render the display area and may transmit the rendered content to the client device 620 via the network 630. In some implementations, the host device 610 may render the content as a series of frames, which may include an I-frame followed by one or more P-frames. The rendered content may be encoded and the encoded content may be transmitted to the client device 620. For example, the rendered content may be encoded as shown in FIG. 7.


The client device 620 may execute an operating environment, which may include a remote access application 622. The client device 620 may receive the rendered output from the host device 610 via the network 630 and may present the representation of the display area 640A via a graphical display unit of the client device 620.


In some implementations, the client device 620 may be configured to present the representation of the display area 640A at a display resolution that differs from the display resolution rendered by the host device 610. For example, the client device 620 may scale the rendered output for presentation via the graphical display unit of the client device 620. In some implementations, the host device 610 may receive an indication of the display resolution of the client device 620 and may render the representation of the operating environment using the display resolution of the client device 620.


For example, the host device 610 may adjust the display resolution of the host device 610 to match the display resolution of the client device 620, and may render the representation of the display area at the adjusted display resolution. Adjusting the display resolution may cause unwanted interference with the operating environment of the host device 610.


In another example, rendering the representation of the display area at the host device 610 may include scaling or sampling the representation of the display area to generate output at the display resolution of the client device 620, which may consume significant resources, such as processing resources, and may produce graphical artifacts.



FIG. 7 is a block diagram of remote access encoding in accordance with implementations of this disclosure. Remote access encoding may include a host device performing remote access, such as the remote access shown in FIG. 6, with a client device via a network. The host device may execute an operating environment. A representation of the operating environment may include a display area, which may include elements of the operating environment, such as windows, and window content. The host device may render the display area, encode the rendered content, and output the encoded content to the client device. In some implementations, remote access encoding may be performed by an encoder, such as the encoder 400 shown in FIG. 4, of the host device.


Rendered video, such as remote access video, may include relatively large amounts of static content, wherein pixel values remain static (do not change) from frame to frame. Static content may be compressed using high quality encoding. Quality and contextual metrics may be used to simplify and improve encoding performance. Implementations of remote access encoding may include initiating remote access at 700, rendering content at 710, encoding rendered content at 720, and transmitting the encoded content at 730. Although not shown separately, the client device may receive the encoded content, may decode the content, and may output the content to a local graphical display unit for presentation.


Remote access may be initiated at 700. Initiating remote access may include establishing a connection between the client device and the host device. The client device and the host device may exchange information for performing remote access. For example, the host device may receive a remote access request from the client device.


The host device may render a representation (rendered content) of the display area, or a portion of the display area, of the operating environment of the host device at 710. In some implementations, the host device may generate the rendered content as a sequence of frames. Each frame may include implicit or explicit information, such as the request identifier, offset information, buffer information, a timestamp, or any other information relevant to the rendered sequence of frames.


The host device may encode the rendered content at 720. Encoding the rendered content may include identifying a current block at 722, determining whether a block is a static block at 724, determining a coding quality for encoding a current block at 726, determining whether to encode the current block as a skipped block at 728, or a combination thereof.


A current block of a current frame may be identified for encoding at 722. For example, the representation of the display area of the operating environment may be rendered as a video stream, such as the video stream 300 shown in FIG. 3, and the encoding may include block based encoding, wherein a block of a frame of the video stream may be encoded based on, for example, a reference block in a previously encoded reference frame.


Remote access encoding may include determining whether a block is a static block at 724. In some implementations, a portion or portions of a rendered display area may be static (static content) from frame to frame. A block in a frame that includes content that is the same as the content of a corresponding reference block may be referred to as a static block. In some implementations, static blocks may be identified based on differences between blocks of a current frame and corresponding blocks of an unencoded (raw) frame corresponding to the reference frame identified for encoding the current frame. Blocks for which the difference between the current block and the corresponding block is within a threshold, which may be zero, may be identified as static blocks and may be assigned a zero motion vector. In some implementations, static blocks may be identified prior to, instead of, or as part of, performing motion estimation.
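A minimal sketch of the static-block test described above follows, assuming luma-only frames as NumPy arrays and a SAD difference measure; the zero threshold, the block size, and all names are illustrative assumptions rather than a prescribed implementation.

```python
# Minimal sketch of static-block detection: a block is treated as static when
# its difference from the co-located block of the raw frame corresponding to
# the reference frame is within a threshold (zero here). Static blocks would
# be assigned a zero motion vector.
import numpy as np

def is_static_block(current_block: np.ndarray, raw_reference_block: np.ndarray,
                    threshold: int = 0) -> bool:
    diff = np.abs(current_block.astype(np.int32)
                  - raw_reference_block.astype(np.int32)).sum()
    return diff <= threshold

def classify_static_blocks(current_frame: np.ndarray, raw_reference_frame: np.ndarray,
                           block_size: int = 16, threshold: int = 0):
    """Yield (row, col, is_static) for each block position in the frame."""
    h, w = current_frame.shape
    for top in range(0, h, block_size):
        for left in range(0, w, block_size):
            cur = current_frame[top:top + block_size, left:left + block_size]
            ref = raw_reference_frame[top:top + block_size, left:left + block_size]
            yield top, left, is_static_block(cur, ref, threshold)
```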


In some implementations, identifying static blocks may include using information indicating movement of an element of the operating environment. For example, the movement information may indicate motion of a window in the operating environment or motion, such as scrolling, of content within a window of the operating environment, such that the content changes location within the frame, but otherwise remains static. The motion information may be used to identify a non-zero motion vector indicating the difference in location of the element between the current frame and the reference frame. Blocks including the static content may be identified as static blocks and may be assigned the non-zero motion vector.
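The following sketch extends the static-block test with a window-motion hint, so content that changed location but otherwise stayed the same can be flagged static and assigned a non-zero motion vector. The (dy, dx) hint, the sign convention, and all names are assumptions for illustration.

```python
# Minimal sketch: test a displaced raw reference block using reported window
# motion (e.g. a dragged or scrolled window), returning a non-zero motion
# vector for static content that moved within the frame.
import numpy as np

def static_with_window_motion(current_frame: np.ndarray, raw_reference_frame: np.ndarray,
                              top: int, left: int, window_motion: tuple,
                              block_size: int = 16, threshold: int = 0):
    """Return (is_static, motion_vector) for the block at (top, left).
    window_motion = (dy, dx) is how far the window moved between frames."""
    dy, dx = window_motion
    ref_top, ref_left = top - dy, left - dx  # where the content came from
    h, w = raw_reference_frame.shape
    if not (0 <= ref_top and 0 <= ref_left
            and ref_top + block_size <= h and ref_left + block_size <= w):
        return False, None  # displaced block falls outside the reference frame
    cur = current_frame[top:top + block_size, left:left + block_size]
    ref = raw_reference_frame[ref_top:ref_top + block_size,
                              ref_left:ref_left + block_size]
    diff = np.abs(cur.astype(np.int32) - ref.astype(np.int32)).sum()
    return (diff <= threshold), (dy, dx)
```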


Remote access encoding may include determining a coding quality for encoding a current block at 726. In some implementations, the coding quality of an encoded block may indicate differences between a current frame (raw frame) and a corresponding reconstructed frame. For example, the coding quality may be measured based on a sum of absolute differences (SAD) in the transform domain or the spatial domain; if the SAD for an encoded block is smaller than a threshold, the block may be identified as a high quality block. Pixels in a high quality block may be referred to as high quality pixels.
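A minimal sketch of this quality determination follows, comparing a raw block with its reconstruction using SAD; the threshold value is an illustrative assumption, not a value given in this disclosure.

```python
# Minimal sketch of labeling a block as high quality when the SAD between the
# raw block and its reconstruction is below a threshold.
import numpy as np

HIGH_QUALITY_SAD_THRESHOLD = 128  # illustrative value

def is_high_quality(raw_block: np.ndarray, reconstructed_block: np.ndarray,
                    threshold: int = HIGH_QUALITY_SAD_THRESHOLD) -> bool:
    sad = np.abs(raw_block.astype(np.int32)
                 - reconstructed_block.astype(np.int32)).sum()
    return sad < threshold
```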


In some implementations, determining a coding quality for encoding a current block may be based on a relative priority (importance) of the content included in the block. For example, video encoding may be subject to resource utilization limitations, such as bit rate constraints, and content may be prioritized so that blocks including important content (important blocks) are encoded using high quality encoding with a relatively large amount of resources (i.e., higher priority in bit allocation). Bits allocated for encoding important blocks that are not used for encoding those blocks may be used for encoding other blocks. In some implementations, important blocks may be encoded before other blocks. For example, a frame may be divided into slices and the important blocks may be included in a slice that may be encoded before other slices.


In some implementations, the priority for encoding a block may be based on the context of the content included in the block relative to the operating environment. For example, a topmost window in the operating environment may be relatively important to a user (in focus), and blocks of the rendered content including the topmost window may have a higher priority than other blocks. Blocks encoded using high quality encoding may suffer less quantization distortion than other blocks, and convergence between a current frame and subsequent frames may be faster than for blocks that are not encoded using high quality encoding. Corresponding blocks in subsequent frames may be encoded as skipped blocks. Some subsequent frames may be encoded without encoding residual data. For example, encoding a first subsequent block may include encoding residual data and encoding other subsequent blocks may not include encoding residual data. In another example, priority may be based on the frequency and recency with which a window has been in focus, or interacted with by a user: the higher the focus frequency and recency, the higher the priority. For example, windows may be indexed in order of focus frequency, focus recency, or based on a combination of focus frequency and focus recency.
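One way the focus-based prioritization described above might be sketched is shown below; the Window fields, the scoring weights, and the combination of frequency and recency are illustrative assumptions about how windows could be ranked for bit allocation.

```python
# Minimal sketch of ranking windows by focus frequency and recency to derive
# an encoding priority. All fields, weights, and names are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class Window:
    window_id: int
    focus_count: int        # how often the window has been focused
    last_focus_time: float  # timestamp of the most recent focus

def rank_windows_by_priority(windows: List[Window], now: float,
                             frequency_weight: float = 1.0,
                             recency_weight: float = 1.0) -> List[Window]:
    """Return windows sorted from highest to lowest encoding priority."""
    def score(w: Window) -> float:
        recency = 1.0 / (1.0 + (now - w.last_focus_time))  # more recent -> larger
        return frequency_weight * w.focus_count + recency_weight * recency
    return sorted(windows, key=score, reverse=True)
```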


In some implementations, a static block may be encoded using high quality encoding. For example, encoding a static block as a high quality block may allow subsequent corresponding blocks to be encoded using fewer resources, which may reduce overall resource utilization (bit count). Bits utilized for encoding a static block as a high quality block can be used to improve the efficiency of encoding blocks which use the static block as a reference block. For example, a current block may be identified as a static block in a current frame, and the likelihood that a corresponding block in a subsequent frame is a static block may be high. In some implementations, the likelihood that the corresponding block in the subsequent frame is a static block may be particularly high when the current block does not include a portion of a topmost window.


Remote access encoding may include determining whether to encode the current block as a skipped block at 728. In some implementations, determining whether to encode the current block as a skipped block may be based on whether the current block is a static block, whether a reference block identified for encoding the current block is a high quality reference block, or a combination thereof.


In some implementations, determining whether to encode the current block as a skipped block may include determining whether a reference block for encoding the current block is a high quality reference block or a low quality reference block. For example, a reference block aligned with a block boundary and encoded using high quality encoding may be a high quality reference block; a reference block that overlaps multiple blocks that were encoded using high quality encoding may be a high quality reference block; and a reference block that overlaps with a block that was not encoded using high quality encoding may be a low quality reference block.
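The reference block classification just described might be sketched as follows, assuming a per-block quality map for the reference frame; the map layout and all names are illustrative assumptions.

```python
# Minimal sketch of classifying a (possibly motion-displaced) reference block
# as high or low quality: a block-aligned high quality block, or a region
# overlapping only high quality blocks, counts as high quality; touching any
# block that was not encoded with high quality does not.
from typing import List

def reference_block_is_high_quality(ref_top: int, ref_left: int, block_size: int,
                                    quality_map: List[List[bool]]) -> bool:
    """quality_map[r][c] is True when the co-located encoded block is high quality."""
    rows = range(ref_top // block_size, (ref_top + block_size - 1) // block_size + 1)
    cols = range(ref_left // block_size, (ref_left + block_size - 1) // block_size + 1)
    return all(quality_map[r][c] for r in rows for c in cols)
```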


In some implementations, encoding a static block using a high quality reference block may include identifying the current block as a skipped block, identifying a motion vector indicating the reference block, and identifying the current block as a high quality block.


In some implementations, such as implementations where processing resources are limited, encoding a static block using a low quality reference block may include identifying the current block as a skipped block, identifying a motion vector indicating the reference block, and identifying the current block as a low quality block.


In some implementations, encoding a static block using a low quality reference block may include encoding the block without identifying the block as a skipped block, which may rely on accurate motion vector estimation. In some implementations, the encoder may utilize the motion vector directly in the coding process. In some implementations, the encoder may utilize the motion vector as a starting point for finding the best motion vector for coding the block.
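Combining the preceding paragraphs, a minimal sketch of the skip decision for a static block follows; the mode labels and the resources_limited flag are illustrative assumptions about how an encoder might branch, not the disclosure's required control flow.

```python
# Minimal sketch of the per-block decision described above: static blocks with
# a high quality reference are skipped and marked high quality; static blocks
# with a low quality reference are either skipped and marked low quality (when
# resources are limited) or re-encoded using the motion vector as a search hint.
def choose_block_mode(is_static: bool, reference_is_high_quality: bool,
                      resources_limited: bool) -> str:
    if not is_static:
        return "encode_normally"
    if reference_is_high_quality:
        return "skip_high_quality"           # skipped block, marked high quality
    if resources_limited:
        return "skip_low_quality"            # skipped block, marked low quality
    return "encode_with_motion_vector_hint"  # re-encode, seeding the motion search
```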


Non-static blocks in the current frame may be encoded using a non-static block encoding technique, such as the encoding shown in FIG. 4.


Other implementations of the diagram of remote access encoding as shown in FIG. 7 are available. In some implementations, additional elements of remote access encoding can be added, certain elements can be combined, and/or certain elements can be removed. Remote access encoding, or any portion thereof, can be implemented in a device, such as the computing device 100 shown in FIG. 1 or the computing and communication devices 100A/100B/100C shown in FIG. 2. For example, an encoder, such as the encoder 400 shown in FIG. 4, can implement remote access encoding, or any portion thereof, using instructions stored on a tangible, non-transitory, computer-readable medium, such as the memory 150 shown in FIG. 1.


The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an embodiment” or “one embodiment” or “an implementation” or “one implementation” throughout is not intended to mean the same embodiment or implementation unless described as such. As used herein, the terms “determine” and “identify”, or any variations thereof, include selecting, ascertaining, computing, looking up, receiving, determining, establishing, obtaining, or otherwise identifying or determining in any manner whatsoever using one or more of the devices described herein.


Further, for simplicity of explanation, although the figures and descriptions herein may include sequences or series of steps or stages, elements of the methods disclosed herein can occur in various orders or concurrently. Additionally, elements of the methods disclosed herein may occur with other elements not explicitly presented and described herein. Furthermore, not all elements of the methods described herein may be required to implement a method in accordance with the disclosed subject matter.


The implementations of the computing and communication devices (and the algorithms, methods, or any part or parts thereof, stored thereon or executed thereby) can be realized in hardware, software, or any combination thereof. The hardware can include, for example, computers, intellectual property (IP) cores, application-specific integrated circuits (ASICs), programmable logic arrays, optical processors, programmable logic controllers, microcode, microcontrollers, servers, microprocessors, digital signal processors or any other suitable circuit. In the claims, the term “processor” should be understood as encompassing any of the foregoing hardware, either singly or in combination. The terms “signal” and “data” are used interchangeably. Further, portions of the computing and communication devices do not necessarily have to be implemented in the same manner.


Further, all or a portion of implementations can take the form of a computer program product accessible from, for example, a tangible computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport the program for use by or in connection with any processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or a semiconductor device. Other suitable mediums are also available.


The above-described implementations have been described in order to allow easy understanding of the application and are not limiting. On the contrary, the application covers various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structure as is permitted under the law.

Claims
  • 1. A method of remote access encoding, the method comprising: receiving, at a host device, from a client device, a remote access request indicating a portion of a display area of an operating environment of the host device; rendering a representation of the portion of the display area, wherein rendering includes generating rendered content including a plurality of frames; generating an encoded block by: identifying a current block from a plurality of blocks in a current frame, wherein the current frame is one of the plurality of frames, determining whether the current block is a static block, by determining that the current block is a static block on a condition that a difference between the current block and a corresponding raw reference block does not exceed a threshold and determining that the current block is not a static block on a condition that the difference between the current block and the raw reference block exceeds the threshold, wherein determining that the current block is a static block includes associating the current block with a zero motion vector, determining a coding quality for encoding the current block, and determining whether to encode the current block as a skipped block; and transmitting encoded content to the client device, wherein the encoded content includes the encoded block.
  • 2. The method of claim 1, wherein determining whether the current block is a static block includes determining whether the current block includes static content.
  • 3. The method of claim 1, wherein the raw reference block is one of a plurality of blocks in a raw reference frame, and wherein a position of the raw reference block in the raw reference frame corresponds with a position of the current block in the current frame offset by information indicating movement of a window in the operating environment, and wherein determining that the current block is a static block includes associating the current block with a non-zero motion vector.
  • 4. The method of claim 1, wherein determining the coding quality for encoding the current block includes determining that the encoding quality is high quality on a condition that the current block is a static block.
  • 5. The method of claim 1, wherein determining whether to encode the current block as a skipped block includes encoding the current block as a skipped block and indicating that the skipped block is a high quality block on a condition that a reference block for encoding the current block is a high quality reference block and the current block is a static block.
  • 6. The method of claim 1, wherein determining whether to encode the current block as a skipped block includes encoding the current block as a skipped block and indicating that the skipped block is a low quality block on a condition that a reference block for encoding the current block is a low quality reference block and the current block is a static block.
  • 7. The method of claim 1, wherein determining whether to encode the current block as a skipped block includes determining whether a reference block for encoding the current block is a high quality reference block.
  • 8. A method of remote access encoding, the method comprising: receiving, at a host device, from a client device, a remote access request indicating a portion of a display area of an operating environment of the host device; rendering a representation of the portion of the display area, wherein rendering includes generating rendered content including a plurality of frames; generating an encoded block by: identifying a current block from a plurality of blocks in a current frame, wherein the current frame is one of the plurality of frames, determining whether the current block is a static block, determining a coding quality for encoding the current block, wherein determining the coding quality for encoding the current block includes determining that the encoding quality is high quality on a condition that the current block includes a portion of a topmost window of the operating environment, and determining whether to encode the current block as a skipped block; and transmitting encoded content to the client device, wherein the encoded content includes the encoded block.
  • 9. A method of remote access encoding, the method comprising: receiving, at a host device, from a client device, a remote access request indicating a portion of a display area of an operating environment of the host device; rendering a representation of the portion of the display area, wherein rendering includes generating rendered content including a plurality of frames; generating an encoded block by: identifying a current block from a plurality of blocks in a current frame, wherein the current frame is one of the plurality of frames, determining whether the current block is a static block, determining a coding quality for encoding the current block, wherein determining the coding quality for encoding the current block includes determining that the encoding quality is high quality on a condition that the current block includes a portion of a recently in-focus window of the operating environment, and determining whether to encode the current block as a skipped block; and transmitting encoded content to the client device, wherein the encoded content includes the encoded block.
  • 10. A method of remote access encoding, the method comprising: receiving, at a host device, from a client device, a remote access request indicating a portion of a display area of an operating environment of the host device; rendering a representation of the portion of the display area, wherein rendering includes generating rendered content including a plurality of frames; generating an encoded block by: identifying a current block from a plurality of blocks in a current frame, wherein the current frame is one of the plurality of frames, determining whether the current block is a static block, determining a coding quality for encoding the current block, wherein determining the coding quality for encoding the current block includes determining that the encoding quality is high quality on a condition that the current block includes a portion of a frequently in-focus window of the operating environment, and determining whether to encode the current block as a skipped block; and transmitting encoded content to the client device, wherein the encoded content includes the encoded block.
  • 11. A method of remote access encoding, the method comprising: receiving, at a host device, from a client device, a remote access request indicating a portion of a display area of an operating environment of the host device; rendering a representation of the portion of the display area, wherein rendering includes generating rendered content including a plurality of frames; generating an encoded block by: identifying a current block from a plurality of blocks in a current frame, wherein the current frame is one of the plurality of frames, determining whether the current block is a static block, determining a coding quality for encoding the current block, and determining whether to encode the current block as a skipped block, wherein determining whether to encode the current block as a skipped block includes determining whether a reference block for encoding the current block is a high quality reference block, and wherein determining whether the reference block is a high quality reference block includes determining whether the reference block was encoded using high quality encoding based on a difference between a reference frame and a corresponding reconstructed frame, wherein the reference block is one of a plurality of blocks from the reference frame; and transmitting encoded content to the client device, wherein the encoded content includes the encoded block.
  • 12. The method of claim 11, wherein determining whether the reference block was encoded using high quality encoding includes determining a sum of absolute differences in a transform domain or a spatial domain between the reference frame and the corresponding reconstructed frame, such that the reference block was encoded using high quality encoding on a condition that the sum of absolute differences does not exceed a threshold, and the reference block was encoded using low quality encoding on a condition that the sum of absolute differences exceeds the threshold.
  • 13. The method of claim 11, wherein determining whether the reference block is a high quality reference block includes: determining that the reference block is a high quality reference block on a condition that the reference block was encoded using high quality encoding and is aligned with a block boundary; determining that the reference block is a high quality reference block on a condition that the reference block overlaps with a plurality of blocks that were encoded using high quality encoding; and determining that the reference block is a low quality reference block on a condition that the reference block overlaps with a block that was encoded using low quality encoding.
  • 14. A method of remote access encoding, the method comprising: receiving, at a host device, from a client device, a remote access request indicating a portion of a display area of an operating environment of the host device; rendering a representation of the portion of the display area, wherein rendering includes generating rendered content including a plurality of frames; generating an encoded block by: identifying a current block from a plurality of blocks in a current frame, wherein the current frame is one of the plurality of frames, identifying a reference block from a plurality of blocks in a reference frame, determining whether the current block is a static block, wherein determining whether the current block is a static block includes at least one of determining that the current block is a static block and associating the current block with a zero motion vector, on a condition that a difference between the current block and a corresponding raw reference block does not exceed a threshold, wherein the reference block is a decoded block based on the raw reference block, determining that the current block is a static block and associating the current block with a non-zero motion vector, on a condition that a difference between the current block and a corresponding raw reference block does not exceed a threshold, wherein the raw reference block is one of a plurality of blocks in a raw reference frame, and wherein a position of the raw reference block in the raw reference frame corresponds with a position of the current block in the current frame offset by information indicating movement of a window in the operating environment, or determining that the current block is not a static block on a condition that the difference between the current block and the raw reference block exceeds the threshold, on a condition that the reference block is a high quality reference block and the current block is a static block, encoding the current block as a skipped block and indicating that the skipped block is a high quality block, and on a condition that the reference block is a low quality reference block and the current block is a static block, encoding the current block as a skipped block and indicating that the skipped block is a low quality block; and transmitting encoded content to the client device, wherein the encoded content includes the encoded block.
  • 15. The method of claim 14, wherein determining whether the current block is a static block includes determining whether the current block includes static content.
  • 16. The method of claim 14, wherein generating the encoded block includes determining whether the reference block is a high quality reference block.
  • 17. A method of remote access encoding, the method comprising: receiving, at a host device, from a client device, a remote access request indicating a portion of a display area of an operating environment of the host device; rendering a representation of the portion of the display area, wherein rendering includes generating rendered content including a plurality of frames; generating an encoded block by: identifying a current block from a plurality of blocks in a current frame, wherein the current frame is one of the plurality of frames, identifying a reference block from a plurality of blocks in a reference frame, determining whether the reference block is a high quality reference block, wherein determining whether the reference block is a high quality reference block includes at least one of determining whether the reference block was encoded using high quality encoding based on a difference between the reference frame and a corresponding reconstructed frame, determining that the reference block is a high quality reference block on a condition that the reference block was encoded using high quality encoding and is aligned with a block boundary, determining that the reference block is a high quality reference block on a condition that the reference block overlaps with a plurality of blocks that were encoded using high quality encoding, or determining that the reference block is a low quality reference block on a condition that the reference block overlaps with a block that was encoded using low quality encoding, on a condition that the reference block is a high quality reference block and the current block is a static block, encoding the current block as a skipped block and indicating that the skipped block is a high quality block, and on a condition that the reference block is a low quality reference block and the current block is a static block, encoding the current block as a skipped block and indicating that the skipped block is a low quality block; and transmitting encoded content to the client device, wherein the encoded content includes the encoded block.
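The sum-of-absolute-differences test recited in claim 12 can be illustrated with a short sketch. This is not the claimed method itself, only a minimal example under stated assumptions: the frame data are numpy arrays, the spatial-domain form of the comparison is used, and the function name is_high_quality_reference and the default threshold value are hypothetical.

```python
import numpy as np

def is_high_quality_reference(reference_block: np.ndarray,
                              reconstructed_block: np.ndarray,
                              threshold: int = 64) -> bool:
    """Classify a reference block per the SAD test of claim 12 (sketch).

    The block counts as encoded with high quality when the sum of
    absolute differences between the raw reference pixels and the
    corresponding reconstructed pixels does not exceed the threshold;
    otherwise it counts as encoded with low quality.
    """
    sad = int(np.abs(reference_block.astype(np.int32)
                     - reconstructed_block.astype(np.int32)).sum())
    return sad <= threshold
```

A transform-domain variant would apply the same comparison to transform coefficients rather than pixel values, as the claim also permits.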
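Claim 13 distinguishes high and low quality reference blocks by whether a motion-compensated reference is aligned with a block boundary and by the quality of the blocks it overlaps. The sketch below is one illustrative reading only; the argument names, the string return values, and the handling of an aligned low-quality reference are assumptions, not the claimed logic.

```python
def classify_reference_block(encoded_high_quality: bool,
                             aligned_with_block_boundary: bool,
                             overlapped_blocks_high_quality: list[bool]) -> str:
    """Return 'high' or 'low' for a reference block (sketch of claim 13).

    A block-aligned reference that was itself encoded with high quality
    is high quality; an unaligned reference is high quality only if every
    block it overlaps was encoded with high quality, and low quality if
    it overlaps any low-quality block.
    """
    if aligned_with_block_boundary:
        return "high" if encoded_high_quality else "low"
    if overlapped_blocks_high_quality and all(overlapped_blocks_high_quality):
        return "high"
    return "low"
```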
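Claim 14 ties the static-block decision to a comparison against a raw reference block taken at the current block's position, optionally offset by reported window movement, which yields a zero or non-zero motion vector. A minimal sketch of that comparison follows; the function name, the numpy frame representation, and the default threshold are assumptions, not the claimed implementation.

```python
import numpy as np

def detect_static_block(current_block: np.ndarray,
                        raw_reference_frame: np.ndarray,
                        block_x: int,
                        block_y: int,
                        window_motion: tuple[int, int] = (0, 0),
                        threshold: int = 0) -> tuple[bool, tuple[int, int]]:
    """Return (is_static, motion_vector) for the current block (sketch).

    The candidate raw reference block is taken at the current block's
    position offset by any reported window movement; if the difference
    does not exceed the threshold, the block is static and the offset
    (possibly zero) becomes its motion vector.
    """
    dx, dy = window_motion
    h, w = current_block.shape
    raw_reference_block = raw_reference_frame[block_y + dy:block_y + dy + h,
                                              block_x + dx:block_x + dx + w]
    difference = int(np.abs(current_block.astype(np.int32)
                            - raw_reference_block.astype(np.int32)).sum())
    return difference <= threshold, (dx, dy)
```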
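Finally, claims 14 and 17 both encode a static current block as a skipped block and carry the reference block's quality forward as a flag on the skip, so that low-quality skips can be distinguished from high-quality ones. The fragment below is only an illustrative decision table; the dictionary shape and mode names are hypothetical.

```python
def choose_block_coding(current_is_static: bool,
                        reference_is_high_quality: bool) -> dict:
    """Decide the coding of the current block (sketch of claims 14 and 17).

    A static block is coded as a skipped block, and the skip is marked
    high quality only when the reference block was a high quality
    reference; non-static blocks fall through to ordinary encoding.
    """
    if current_is_static:
        return {"mode": "skip",
                "quality": "high" if reference_is_high_quality else "low"}
    return {"mode": "encode", "quality": None}
```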