The present disclosure relates to coding/decoding systems for multi-directional imaging systems and, in particular, to the use of coding techniques, originally developed for flat images, on multi-directional image data.
In multi-directional imaging, a two-dimensional image represents image content taken from multiple fields of view. Omnidirectional imaging is one type of multi-directional imaging where a single image represents content viewable from a single vantage point in all directions—360° horizontally about the vantage point and 360° vertically about the vantage point. Other multi-directional images may capture data in fields of view that are not fully 360°.
Modern coding protocols tend to be inefficient when coding multi-directional images. Multi-directional images tend to allocate real estate within the image to the different fields of view in an essentially fixed manner. For example, in many multi-directional imaging formats, different fields of view may be allocated space in the multi-directional image equally. Some other multi-directional imaging formats allocate space unequally but in a fixed manner. And, many applications that consume multi-directional imaging tend to use only a portion of the multi-directional image during rendering, which causes resources spent coding the unused portions of the multi-directional image to be wasted.
Accordingly, the inventors recognized a need to improve coding systems to increase the efficiency with which multi-directional image data is coded.
Embodiments of the present disclosure provide techniques for implementing organizational configurations for multi-directional video and for switching between them. Source images may be assigned to formats that may change during a coding session. When a change occurs between formats, video coders and decoders may transform decoded reference frames from the first configuration to the second configuration. Thereafter, new frames in the second configuration may be coded or decoded predictively using transformed reference frame(s) as source(s) of prediction. In this manner, video coders and decoders may use inter-coding techniques and achieve high coding efficiency.
The video decoder 240 may invert coding operations performed by the video encoder 230 to obtain a reconstructed picture from the coded video data. Typically, the coding processes applied by the video coder 230 are lossy processes, which cause the reconstructed picture to possess various errors when compared to the original picture. The video decoder 240 may reconstruct picture data of select coded pictures, which are designated as “reference pictures,” and store the decoded reference pictures in the reference picture store 250. In the absence of transmission errors, the decoded reference pictures will replicate decoded reference pictures obtained by a decoder at the receiving terminal 120 (FIG. 1).
The predictor 260 may select prediction references for new input pictures as they are coded. For each portion of the input picture being coded (called a “pixel block” for convenience), the predictor 260 may select a coding mode and identify a portion of a reference picture that may serve as a prediction reference for the pixel block being coded. The coding mode may be an intra-coding mode, in which case the prediction reference may be drawn from a previously-coded (and decoded) portion of the picture being coded. Alternatively, the coding mode may be an inter-coding mode, in which case the prediction reference may be drawn from another previously-coded and decoded picture.
When an appropriate prediction reference is identified, the predictor 260 may furnish the prediction data to the video coder 230. The video coder 230 may code input video data differentially with respect to prediction data furnished by the predictor 260. Typically, prediction operations and the differential coding operate on a pixel block-by-pixel block basis. Prediction residuals, which represent pixel-wise differences between the input pixel blocks and the prediction pixel blocks, may be subject to further coding operations to reduce bandwidth further.
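The differential coding step can be illustrated with a short sketch. The example below is illustrative only; the 8×8 block size and the synthetic input and prediction data are assumptions made for the example and are not drawn from any particular codec.

```python
import numpy as np

def compute_residual(input_block: np.ndarray, prediction_block: np.ndarray) -> np.ndarray:
    """Form prediction residuals as pixel-wise differences (illustrative only)."""
    # Residuals typically are small in magnitude when the prediction is good,
    # which is what makes the subsequent transform/quantization stages efficient.
    return input_block.astype(np.int16) - prediction_block.astype(np.int16)

# Example: an 8x8 input block and a prediction drawn from a reference picture.
rng = np.random.default_rng(0)
input_block = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
prediction_block = np.clip(
    input_block.astype(np.int16) + rng.integers(-3, 4, size=(8, 8)), 0, 255
).astype(np.uint8)
residual = compute_residual(input_block, prediction_block)
```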
The video coder 230, the video decoder 240, the reference picture store 250 and the predictor 260 each operate on video frames in a formatted representation that is determined by the image processor 220. In an embodiment, the format may change from time to time during a coding session and, in response, the format of previously-coded reference frames may change correspondingly.
As indicated, the coded video data output by the video coder 230 should consume less bandwidth than the input data when transmitted and/or stored. The coding system 200 may output the coded video data to a transmitter 270 that may transmit the coded video data across a communication network 130 (FIG. 1).
Recovered frame data of reference frames may be stored in the reference picture store 350 for use in decoding later-received frames. The predictor 360 may respond to prediction information contained in coded video data to retrieve prediction data and supply it to the video decoder 320 for use in decoding new frames. As indicated, video coding operations often code pixel blocks from within a source image differentially with respect to prediction data. In a video decoder 320, the differential coding processes may be inverted—coded pixel block data may be decoded and then added to prediction data that the predictor 360 retrieves from the reference picture store 350.
In ideal operating conditions, where channel errors do not cause loss of information between a coding system 200 (FIG. 2) and a decoding system 300, the contents of the reference picture store 350 should replicate the contents of the reference picture store 250 (FIG. 2) at the coding system 200.
As discussed, the image processing system 220 (FIG. 2) may assign input video to any of a variety of coding formats.
Coding formats may vary based on the organization of views contained therein, based on the resolution of the views and based on projections used to represent the views. The coding formats may vary adaptively based on operating conditions at the coding system 200 and/or decoding system 300. Changes among the coding formats may occur at a frame level within a coding session. Alternatively, the coding formats may change at a slice-level, tile-level, group of frames-level or track-level within a coding session. Several exemplary image formats are described below.
As discussed, an image processing system 220 (FIG. 2) may convert source images from a native capture format to a format selected for coding.
As applied to the source image 420 illustrated in
In the example of
Frame formats used for coding also may alter other aspects of captured video data. For example, frame formats may project image data from its source projection to an alternate projection.
When projecting image data from the native representation to the spherical projection, an image processor 220 (FIG. 2) may transform coordinates (x, y) of locations in the source image to spherical coordinates (θ, φ), for example, as:
θ=α·x+θ0, and (Eq. 1.)
φ=β·y+φ0, where (Eq. 2.)
θ and φ respectively represent the longitude and latitude of a location in the spherical projection 720, α, β are scalars, θ0, φ0 represent an origin of the spherical projection 720, and x and y represent the horizontal and vertical coordinates of source data in top and bottom views 712, 714 of the source image 710.
The image processor 220 (FIG. 2) also may derive Cartesian coordinates for points in the spherical projection, for example, as:
x=r*sin(φ)*cos(θ), (Eq. 3.)
y=r*sin(φ)*sin(θ) (Eq. 4.)
z=r*cos(φ), where (Eq. 5.)
r represents a radial distance of the point φ from a center of the respective polar region 722, 724.
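For illustration only, the sketch below evaluates Eqs. 1-5 over a grid of source coordinates. The scalars α and β, the origin (θ0, φ0), and the radius r are placeholder values chosen for the example, not values mandated by the disclosure.

```python
import numpy as np

def source_to_sphere(x, y, alpha, beta, theta0, phi0):
    """Eqs. 1-2: map source coordinates (x, y) to longitude/latitude (theta, phi)."""
    theta = alpha * x + theta0
    phi = beta * y + phi0
    return theta, phi

def sphere_to_cartesian(theta, phi, r):
    """Eqs. 3-5: map (theta, phi) at radial distance r to Cartesian coordinates."""
    x = r * np.sin(phi) * np.cos(theta)
    y = r * np.sin(phi) * np.sin(theta)
    z = r * np.cos(phi)
    return x, y, z

# Example with placeholder parameters: one degree of arc per source pixel.
xs, ys = np.meshgrid(np.arange(4), np.arange(4))
theta, phi = source_to_sphere(xs, ys, alpha=np.pi / 180, beta=np.pi / 180, theta0=0.0, phi0=0.0)
print(sphere_to_cartesian(theta, phi, r=1.0))
```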
It is expected that, over time, as new input frames are coded and designated as reference frames, decoded reference frames will replace the transformed reference frames in the decoded picture buffer. Thus, any coding inefficiencies that might arise from use of the transformed reference frames will be overcome by the ordinary eviction policies under which the decoded picture buffer operates.
It is expected that, over time, as new coded frames are decoded and reference frames are obtained therefrom, decoded reference frames will replace the transformed reference frames in the decoded picture buffer. Thus, any coding inefficiencies that might arise from use of the transformed reference frames will be overcome by the ordinary eviction policies under which the decoded picture buffer operates.
Switching may be triggered in a variety of ways. In a first embodiment, activity at a decoding terminal 120 (FIG. 1) may trigger a switch between formats.
In another embodiment, an image processor 220 may assign priority to region(s) of a multi-directional image based on characteristics of the image data itself. For example, image analysis may identify regions within an input frame that indicate the presence of relatively close objects (identified by depth analysis of the frame) or in which motion activity occurs (identified by motion analysis of the frame). Such regions may be selected as high priority regions of the image, and a format that prioritizes these region(s) may be defined for coding.
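For illustration only, the sketch below scores tiles of a frame by the mean magnitude of a supplied motion field and selects the most active tile as a high-priority region. The tiling, the motion field, and the scoring rule are assumptions made for the example, not a mandated analysis.

```python
import numpy as np

def select_priority_tile(motion_field: np.ndarray, tiles_x: int, tiles_y: int):
    """Pick the tile with the largest mean motion magnitude (illustrative heuristic)."""
    h, w, _ = motion_field.shape
    magnitude = np.linalg.norm(motion_field, axis=-1)
    best, best_score = None, -1.0
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            tile = magnitude[ty * h // tiles_y:(ty + 1) * h // tiles_y,
                             tx * w // tiles_x:(tx + 1) * w // tiles_x]
            score = float(tile.mean())
            if score > best_score:
                best, best_score = (tx, ty), score
    return best

# Example: a random per-pixel motion field over a 6x4 tiling.
field = np.random.default_rng(1).normal(size=(240, 480, 2))
print(select_priority_tile(field, tiles_x=6, tiles_y=4))
```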
In a further embodiment, an image processor 220 may select frame formats based on estimates of distortion among candidate frame formats, selecting the frame format that minimizes distortion under a governing coding rate. For example, a governing coding rate may be imposed by a bit rate afforded by a channel between a coding system 200 and a decoding system 300. Distortion estimates may be calculated for eligible frame formats based on candidate viewing conditions, for example, estimates of how often a segment of video is likely to be viewed. Dynamic switching may be performed when an eligible frame format is identified that is estimated to have lower distortion than another frame format that is then being used for coding.
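A minimal sketch of such a selection follows. The candidate formats, the per-view distortion estimates at the governing coding rate, and the viewing-probability weights are hypothetical inputs assumed for the example.

```python
def select_format(candidates, view_probabilities):
    """
    candidates: {format_name: {view_name: estimated_distortion_at_target_rate}}
    view_probabilities: {view_name: probability the view is rendered}
    Returns the format whose expected distortion is lowest (illustrative only).
    """
    def expected_distortion(per_view):
        return sum(view_probabilities[v] * d for v, d in per_view.items())
    return min(candidates, key=lambda name: expected_distortion(candidates[name]))

# Hypothetical estimates for two formats at the same governing coding rate.
candidates = {
    "equal_allocation":  {"front": 4.0, "back": 4.0, "left": 4.0, "right": 4.0},
    "front_prioritized": {"front": 2.0, "back": 7.0, "left": 5.0, "right": 5.0},
}
view_probabilities = {"front": 0.7, "back": 0.1, "left": 0.1, "right": 0.1}
print(select_format(candidates, view_probabilities))  # -> "front_prioritized"
```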
The encoding terminal 110 may determine whether its format should be switched (box 1050). If the encoding terminal 110 determines that the format should be switched, the encoding terminal 110 may send a new message 1055 to the decoding terminal 120 identifying the new configuration. In response to the format configuration message 1055, both terminals 110, 120 may repack reference frames stored in their decoded picture buffers according to the new configuration (boxes 1060, 1065). The operations 1025-1065 may repeat for as long as necessary during the coding session.
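The exchange of boxes 1050-1065 can be sketched as follows. The message structure and the `repack()` hook are assumptions made for illustration only, not a defined signaling syntax.

```python
from dataclasses import dataclass, field

@dataclass
class FormatConfigMessage:
    new_format: str  # identifier of the new frame configuration

@dataclass
class Terminal:
    name: str
    current_format: str
    reference_frames: list = field(default_factory=list)

    def repack(self, new_format: str) -> None:
        # Transform every stored reference frame into the new configuration
        # (the region transform itself is sketched separately below).
        self.reference_frames = [f"{frame}->{new_format}" for frame in self.reference_frames]
        self.current_format = new_format

def switch_format(encoder: Terminal, decoder: Terminal, new_format: str) -> None:
    msg = FormatConfigMessage(new_format)   # message 1055
    encoder.repack(msg.new_format)          # box 1060
    decoder.repack(msg.new_format)          # box 1065

enc = Terminal("encoder", "format_A", ["RF1", "RF2"])
dec = Terminal("decoder", "format_A", ["RF1", "RF2"])
switch_format(enc, dec, "format_B")
```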
The coding session may begin at a time t1, when a first frame is coded. At this point, the reference picture store 1120 likely will be empty (because the input frame IF1 is the first frame to be processed). The input frame IF1 may be coded by intra-coding and output from the video coder. The coded input frame IF1 likely will be designated as a reference frame and, therefore, it may be decoded and stored to the reference picture store as reference frame RF1.
Input frames IF2-IFN may be coded according to the first format, also. The video coder 1110 may code the input frames predictively, using reference frames from the reference picture store 1120 as bases for prediction. The coded input frames IF2-IFN may be output from the video coder 1110. Select coded input frames IF2-IFN may be designated as reference frames and stored to the reference picture store 1120. Thus, at time t2, after the input frame IFN is coded, the reference picture store 1120 may store reference frames RF1-RFM.
The input frames' format may change to the second format when input frame IFN+1 is coded by the video coder. In response, the reference frames RF1-RFM may be transformed from a representation corresponding to the first format to a representation corresponding to the second format. The input frame IFN+1 may be coded predictively using select transformed frame(s) TF1-TFM as prediction references and output from the video coder. If the coded input frame IFN+1 is designated as a reference frame, it may be decoded and stored to the reference picture store as reference frame RFN+1 (not shown).
At the video decoder 1130, decoding may begin at a time t4, when a first coded frame is decoded. At this point, the reference picture store 1140 will be empty because the input frame IF1 is the first frame to be processed. The input frame IF1 may be decoded and output from the video decoder 1130. The decoded frame IF1 likely will have been designated as a reference frame and, therefore, it may be stored to the reference picture store 1140 as reference frame RF1.
Coded input frames IF2-IFN may be decoded according to the first format, also. The video decoder 1130 may decode the input frames according to the coding modes applied by the video coder 1110, using reference frames from the reference picture store 1140 as bases for prediction when so designated. The decoded input frames IF2-IFN may be output from the video decoder 1130. Decoded input frames IF2-IFN that are designated as reference frames also may be stored to the reference picture store 1140. Thus, at time t5, after the coded input frame IFN is decoded, the reference picture store 1140 may store reference frames RF1-RFM.
In this example, the frames' format changes to the second format when the coded input frame IFN+1 is decoded by the video decoder 1130. In response, the reference frames RF1-RFM may be transformed from a representation corresponding to the first format to a representation corresponding to the second format. The coded input frame IFN+1 may be decoded predictively using designated transformed frame(s) TF1-TFM as prediction references and output from the video decoder 1130. If the decoded input frame IFN+1 is designated as a reference frame, it may be stored to the reference picture store 1140 as reference frame RFN+1 (not shown).
Note that, in the foregoing example, there are no constraints on the timing between the coding events at times t1-t3 and the decoding events at times t4-t6. The principles of the present disclosure apply equally as well to real time coding scenarios, which may be appropriate for “live” video feeds, and also to store-and-forward coding scenarios, where video may be coded for storage and then delivered to decoding devices on demand.
Transforms of reference pictures may be performed in a variety of different ways. In a simple example, a region of image data that is being “demoted” in priority may be spatially downscaled according to the size differences between the region that the demoted content occupies in the reference frame and the region that the demoted content occupies in the transform frame. For example, the front region F in reference frame RF1 is demoted when generating transform frame TF1; it may be downscaled according to the size differences that occur due to this demotion.
Similarly, a region of image data that is “promoted” in priority may be spatially upsampled according to the size differences between the region that the promoted content occupies in the reference frame and the region that the promoted content occupies in the transform frame. For example, in
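For illustration only, the sketch below rescales a rectangular region of a reference frame to the size it occupies in the transformed frame using nearest-neighbor sampling. The region geometry and the sampling method are assumptions of the example; in practice any suitable resampling filter may be used.

```python
import numpy as np

def rescale_region(region: np.ndarray, new_h: int, new_w: int) -> np.ndarray:
    """Nearest-neighbor resample of a demoted (downscaled) or promoted (upsampled) region."""
    h, w = region.shape[:2]
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return region[rows[:, None], cols]

# Example: demote a 128x128 region to 64x64, then promote a 64x64 region to 128x128.
front = np.random.default_rng(2).integers(0, 256, size=(128, 128), dtype=np.uint8)
demoted = rescale_region(front, 64, 64)
promoted = rescale_region(demoted, 128, 128)
```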
Although not illustrated in
As in the prior example, a coding session may begin at a time t1, when a first frame is coded. At this point, the reference picture store 1220 likely will be empty (because the input frame IF1 is the first frame to be processed). The input frame IF1 may be coded by intra-coding and output from the video coder. The coded input frame IF1 likely will be designated as a reference frame and, therefore, it may be decoded and stored to the reference picture store as reference frame RF1.
Input frames IF2-IFN may be coded according to the first format, also. The video coder 1210 may code the input frames predictively, using reference frames from the reference picture store 1220 as bases for prediction. The coded input frames IF2-IFN may be output from the video coder 1210. Select coded input frames IF2-IFN may be designated as reference frames and stored to the reference picture store 1220. Thus, at time t2, after the input frame IFN is coded, the reference picture store 1220 may store reference frames RF1-RFM.
The input frames' format may change to the second format when input frame IFN+1 is coded by the video coder. In response, the reference frames RF1-RFM may be transformed from a representation corresponding to the first format to a representation corresponding to the second format. The input frame IFN+1 may be coded predictively using select transformed frame(s) TF1-TFM as prediction references and output from the video coder. If the coded input frame IFN+1 is designated as a reference frame, it may be decoded and stored to the reference picture store as reference frame RFN+1 (not shown).
At the video decoder 1230, decoding may begin at a time t4, when a first coded frame is decoded. At this point, the reference picture store 1240 will be empty because the input frame IF1 is the first frame to be processed. The input frame IF1 may be decoded and output from the video decoder 1230. The decoded frame IF1 likely will have been designated as a reference frame and, therefore, it may be stored to the reference picture store 1240 as reference frame RF1.
Coded input frames IF2-IFN may be decoded according to the first format, also. The video decoder 1230 may decode the input frames according to the coding modes applied by the video coder 1210, using reference frames from the reference picture store 1240 as bases for prediction when so designated. The decoded input frames IF2-IFN may be output from the video decoder 1230. Decoded input frames IF2-IFN that are designated as reference frames also may be stored to the reference picture store 1240. Thus, at time t5, after the coded input frame IFN is decoded, the reference picture store 1240 may store reference frames RF1-RFM.
In this example, the frames' format changes to the second format when the coded input frame IFN+1 is decoded by the video decoder 1230. In response, the reference frames RF1-RFM may be transformed from a representation corresponding to the first format to a representation corresponding to the second format. The coded input frame IFN+1 may be decoded predictively using designated transformed frame(s) TF1-TFM as prediction references and output from the video decoder 1230. If the decoded input frame IFN+1 is designated as a reference frame, it may be stored to the reference picture store 1240 as reference frame RFN+1 (not shown).
As in the prior example, there are no constraints on the timing between the coding events at times t1-t3 and the decoding events at times t4-t6. The principles of the present disclosure apply equally as well to real time coding scenarios, which may be appropriate for “live” video feeds, and also to store-and-forward coding scenarios, where video may be coded for storage and then delivered to decoding devices on demand.
The pixel block coder 1310 may include a subtractor 1312, a transform unit 1314, a quantizer 1316, and an entropy coder 1318. The pixel block coder 1310 may accept pixel blocks of input data at the subtractor 1312. The subtractor 1312 may receive predicted pixel blocks from the predictor 1350 and generate an array of pixel residuals therefrom representing a difference between the input pixel block and the predicted pixel block. The transform unit 1314 may apply a transform to the sample data output from the subtractor 1312, to convert data from the pixel domain to a domain of transform coefficients. The quantizer 1316 may perform quantization of transform coefficients output by the transform unit 1314. The quantizer 1316 may be a uniform or a non-uniform quantizer. The entropy coder 1318 may reduce bandwidth of the output of the coefficient quantizer by coding the output, for example, by variable length code words.
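The forward path of the pixel block coder can be illustrated as follows. The sketch uses an orthonormal 8×8 DCT-II built with numpy and a single scalar quantization step; it illustrates only the subtract/transform/quantize stages, not the entropy coding stage, and is not any particular codec's transform.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    mat = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    mat[0, :] = np.sqrt(1.0 / n)
    return mat

def forward_pixel_block_code(input_block, prediction_block, qp: float):
    """Subtractor 1312 -> transform 1314 -> quantizer 1316 (illustrative sketch)."""
    d = dct_matrix(input_block.shape[0])
    residual = input_block.astype(np.float64) - prediction_block.astype(np.float64)
    coefficients = d @ residual @ d.T                       # 2-D separable transform
    return np.round(coefficients / qp).astype(np.int32)     # uniform scalar quantization
```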
The transform unit 1314 may operate in a variety of transform modes as determined by the controller 1360. For example, the transform unit 1314 may apply a discrete cosine transform (DCT), a discrete sine transform (DST), a Walsh-Hadamard transform, a Haar transform, a Daubechies wavelet transform, or the like. In an embodiment, the controller 1360 may select a coding mode M to be applied by the transform unit 1314, may configure the transform unit 1314 accordingly and may signal the coding mode M in the coded video data, either expressly or impliedly.
The quantizer 1316 may operate according to a quantization parameter QP that is supplied by the controller 1360. In an embodiment, the quantization parameter QP may be applied to the transform coefficients as a multi-value quantization parameter, which may vary, for example, across different coefficient locations within a transform-domain pixel block. Thus, the quantization parameter QP may be provided as an array of quantization parameters.
The pixel block decoder 1320 may invert coding operations of the pixel block coder 1310. For example, the pixel block decoder 1320 may include a dequantizer 1322, an inverse transform unit 1324, and an adder 1326. The pixel block decoder 1320 may take its input data from an output of the quantizer 1316. Although permissible, the pixel block decoder 1320 need not perform entropy decoding of entropy-coded data since entropy coding is a lossless event. The dequantizer 1322 may invert operations of the quantizer 1316 of the pixel block coder 1310. The dequantizer 1322 may perform uniform or non-uniform de-quantization as specified by the decoded signal QP. Similarly, the inverse transform unit 1324 may invert operations of the transform unit 1314. The dequantizer 1322 and the inverse transform unit 1324 may use the same quantization parameters QP and transform mode M as their counterparts in the pixel block coder 1310. Quantization operations likely will truncate data in various respects and, therefore, data recovered by the dequantizer 1322 likely will possess coding errors when compared to the data presented to the quantizer 1316 in the pixel block coder 1310.
The adder 1326 may invert operations performed by the subtractor 1312. It may receive the same prediction pixel block from the predictor 1350 that the subtractor 1312 used in generating residual signals. The adder 1326 may add the prediction pixel block to reconstructed residual values output by the inverse transform unit 1324 and may output reconstructed pixel block data.
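Continuing the earlier sketch (and reusing the placeholder `dct_matrix()` helper and scalar quantization step defined there, which remain assumptions of the example), the embedded decode loop may be illustrated by inverting those stages:

```python
import numpy as np

def decode_pixel_block(quantized, prediction_block, qp: float):
    """Dequantizer 1322 -> inverse transform 1324 -> adder 1326 (illustrative sketch)."""
    d = dct_matrix(quantized.shape[0])
    coefficients = quantized.astype(np.float64) * qp     # inverse of uniform quantization
    residual = d.T @ coefficients @ d                     # inverse 2-D separable transform
    # The reconstruction carries quantization error relative to the original input block.
    return np.clip(np.round(residual + prediction_block), 0, 255).astype(np.uint8)
```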
The in-loop filter 1330 may perform various filtering operations on recovered pixel block data. For example, the in-loop filter 1330 may include a deblocking filter 1332 and a sample adaptive offset (“SAO”) filter 1333. The deblocking filter 1332 may filter data at seams between reconstructed pixel blocks to reduce discontinuities between the pixel blocks that arise due to coding. SAO filters may add offsets to pixel values according to an SAO “type,” for example, based on edge direction/shape and/or pixel/color component level. The in-loop filter 1330 may operate according to parameters that are selected by the controller 1360.
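As a rough illustration of the sample adaptive offset concept, the sketch below applies band offsets: pixels are classified into intensity bands and a small offset is added per band. The band width and the offset values are placeholders chosen for the example; actual SAO signaling (edge offsets, band positions) is richer than shown.

```python
import numpy as np

def sao_band_offset(pixels: np.ndarray, offsets: dict) -> np.ndarray:
    """Add per-band offsets: band = pixel_value // 8 for 8-bit content (illustrative)."""
    bands = pixels.astype(np.int16) // 8
    out = pixels.astype(np.int16)
    for band, offset in offsets.items():
        out[bands == band] += offset
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: nudge a few mid-gray bands up slightly.
frame = np.random.default_rng(3).integers(0, 256, size=(16, 16), dtype=np.uint8)
filtered = sao_band_offset(frame, {15: 1, 16: 2, 17: 2, 18: 1})
```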
The reference picture store 1340 may store filtered pixel data for use in later prediction of other pixel blocks. Different types of prediction data are made available to the predictor 1350 for different prediction modes. For example, for an input pixel block, intra prediction takes a prediction reference from decoded data of the same picture in which the input pixel block is located. Thus, the reference picture store 1340 may store decoded pixel block data of each picture as it is coded. For the same input pixel block, inter prediction may take a prediction reference from previously coded and decoded picture(s) that are designated as reference pictures. Thus, the reference picture store 1340 may store these decoded reference pictures.
As discussed, the predictor 1350 may supply prediction data to the pixel block coder 1310 for use in generating residuals. The predictor 1350 may include an inter predictor 1352, an intra predictor 1353 and a mode decision unit 1352. The inter predictor 1352 may receive pixel block data representing a new pixel block to be coded and may search reference picture data from store 1340 for pixel block data from reference picture(s) for use in coding the input pixel block. The inter predictor 1352 may support a plurality of prediction modes, such as P mode coding and B mode coding. The inter predictor 1352 may select an inter prediction mode and an identification of candidate prediction reference data that provides a closest match to the input pixel block being coded. The inter predictor 1352 may generate prediction reference metadata, such as motion vectors, to identify which portion(s) of which reference pictures were selected as source(s) of prediction for the input pixel block.
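The inter-prediction search may be illustrated with a simple full-search block match over a reference picture. The block size, search window, and SAD cost below are assumptions of the sketch, not the predictor's actual search strategy.

```python
import numpy as np

def full_search(ref: np.ndarray, block: np.ndarray, cx: int, cy: int, radius: int):
    """Return the motion vector (dx, dy) minimizing SAD within +/-radius of (cx, cy)."""
    n = block.shape[0]
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y and 0 <= x and y + n <= ref.shape[0] and x + n <= ref.shape[1]:
                cost = np.abs(ref[y:y + n, x:x + n].astype(np.int32)
                              - block.astype(np.int32)).sum()
                if cost < best_cost:
                    best, best_cost = (dx, dy), cost
    return best, best_cost

# Example: find where a 16x16 block from a "new" picture best matches in a reference picture.
rng = np.random.default_rng(4)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
block = ref[20:36, 24:40]                               # content copied straight from the reference
print(full_search(ref, block, cx=24, cy=20, radius=4))  # -> ((0, 0), 0)
```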
The intra predictor 1353 may support Intra (I) mode coding. The intra predictor 1353 may search, from among pixel block data of the same picture as the pixel block being coded, for pixel block data that provides a closest match to the input pixel block. The intra predictor 1353 also may generate prediction reference indicators to identify which portion of the picture was selected as a source of prediction for the input pixel block.
The mode decision unit 1352 may select a final coding mode to be applied to the input pixel block. Typically, as described above, the mode decision unit 1352 selects the prediction mode that will achieve the lowest distortion when video is decoded given a target bitrate. Exceptions may arise when coding modes are selected to satisfy other policies to which the coding system 1300 adheres, such as satisfying a particular channel behavior, or supporting random access or data refresh policies. When the mode decision unit 1352 selects the final coding mode, it may output a selected reference block from the store 1340 to the pixel block coder and decoder 1310, 1320 and may supply to the controller 1360 an identification of the selected prediction mode along with the prediction reference indicators corresponding to the selected mode.
The controller 1360 may control overall operation of the coding system 1300. The controller 1360 may select operational parameters for the pixel block coder 1310 and the predictor 1350 based on analyses of input pixel blocks and also external constraints, such as coding bitrate targets and other operational parameters. As is relevant to the present discussion, when it selects quantization parameters QP, the use of uniform or non-uniform quantizers, and/or the transform mode M, it may provide those parameters to the syntax unit 1370, which may include data representing those parameters in the data stream of coded video data output by the system 1300. The controller 1360 also may select between different modes of operation by which the system may generate reference images and may include metadata identifying the modes selected for each portion of coded data.
During operation, the controller 1360 may revise operational parameters of the quantizer 1316 and the transform unit 1314 at different granularities of image data, either on a per pixel block basis or on a larger granularity (for example, per picture, per slice, per largest coding unit (“LCU”) or another region). In an embodiment, the quantization parameters may be revised on a per-pixel basis within a coded picture.
Additionally, as discussed, the controller 1360 may control operation of the in-loop filter 1330 and the prediction unit 1350. Such control may include, for the prediction unit 1350, mode selection (lambda, modes to be tested, search windows, distortion strategies, etc.), and, for the in-loop filter 1330, selection of filter parameters, reordering parameters, weighted prediction, etc.
And, further, the controller 1360 may perform transforms of reference pictures stored in the reference picture store when new formats are defined for input video.
The principles of the present discussion may be used cooperatively with other coding operations that have been proposed for multi-directional video. For example, the predictor 1350 may perform prediction searches using input pixel block data and reference pixel block data in a spherical projection. Operation of such prediction techniques is described in U.S. patent application Ser. No. 15/390,202, filed Dec. 23, 2016 and assigned to the assignee of the present application. In such an embodiment, the coder 1300 may include a spherical transform unit 1390 that transforms input pixel block data to a spherical domain prior to being input to the predictor 1350.
The pixel block decoder 1420 may include an entropy decoder 1422, a dequantizer 1424, an inverse transform unit 1426, and an adder 1428. The entropy decoder 1422 may perform entropy decoding to invert processes performed by the entropy coder 1318 (FIG. 13).
The adder 1428 may invert operations performed by the subtractor 1312 (FIG. 13).
The in-loop filter 1430 may perform various filtering operations on reconstructed pixel block data. As illustrated, the in-loop filter 1430 may include a deblocking filter 1432 and an SAO filter 1434. The deblocking filter 1432 may filter data at seams between reconstructed pixel blocks to reduce discontinuities between the pixel blocks that arise due to coding. SAO filters 1434 may add offsets to pixel values according to an SAO type, for example, based on edge direction/shape and/or pixel level. Other types of in-loop filters may also be used in a similar manner. Operation of the deblocking filter 1432 and the SAO filter 1434 ideally would mimic operation of their counterparts in the coding system 1300 (FIG. 13).
The reference picture store 1440 may store filtered pixel data for use in later prediction of other pixel blocks. The reference picture store 1440 may store decoded pixel block data of each picture as it is coded for use in intra prediction. The reference picture store 1440 also may store decoded reference pictures.
As discussed, the predictor 1450 may supply the transformed reference block data to the pixel block decoder 1420. The predictor 1450 may supply predicted pixel block data as determined by the prediction reference indicators supplied in the coded video data stream.
The controller 1460 may control overall operation of the decoding system 1400. The controller 1460 may set operational parameters for the pixel block decoder 1420 and the predictor 1450 based on parameters received in the coded video data stream. As is relevant to the present discussion, these operational parameters may include quantization parameters QP for the dequantizer 1424 and transform modes M for the inverse transform unit 1426. As discussed, the received parameters may be set at various granularities of image data, for example, on a per pixel block basis, a per picture basis, a per slice basis, a per LCU basis, or based on other types of regions defined for the input image.
And, further, the controller 1460 may perform transforms of reference pictures stored in the reference picture store 1440 when new formats are detected in coded video data.
The foregoing discussion has described operation of the embodiments of the present disclosure in the context of video coders and decoders. Commonly, these components are provided as electronic devices. Video decoders and/or controllers can be embodied in integrated circuits, such as application specific integrated circuits, field programmable gate arrays and/or digital signal processors. Alternatively, they can be embodied in computer programs that execute on camera devices, personal computers, notebook computers, tablet computers, smartphones or computer servers. Such computer programs typically are stored in physical storage media such as electronic-, magnetic- and/or optically-based storage devices, where they are read to a processor and executed. Decoders commonly are packaged in consumer electronics devices, such as smartphones, tablet computers, gaming systems, DVD players, portable media players and the like; and they also can be packaged in consumer software applications such as video games, media players, media editors, and the like. And, of course, these components may be provided as hybrid systems that distribute functionality across dedicated hardware components and programmed general-purpose processors, as desired.
For example, the techniques described herein may be performed by a central processor of a computer system.
The central processor 1510 may read and execute various program instructions stored in the memory 1530 that define an operating system 1512 of the system 1500 and various applications 1514.1-1514.N. The program instructions may perform coding mode control according to the techniques described herein. As it executes those program instructions, the central processor 1510 may read, from the memory 1530, image data created either by the camera 1520 or the applications 1514.1-1514.N, which may be coded for transmission. The central processor 1510 may execute a program that operates according to the principles of
As indicated, the memory 1530 may store program instructions that, when executed, cause the processor to perform the techniques described hereinabove. The memory 1530 may store the program instructions on electrical-, magnetic- and/or optically-based storage media.
The transceiver 1540 may represent a communication system to transmit transmission units and receive acknowledgement messages from a network (not shown). In an embodiment where the central processor 1510 operates a software-based video coder, the transceiver 1540 may place data representing the state of acknowledgment messages in memory 1530 for retrieval by the processor 1510. In an embodiment where the system 1500 has a dedicated coder, the transceiver 1540 may exchange state information with the coder 1550.
The foregoing discussion has described the principles of the present disclosure in terms of encoding systems and decoding systems. As described, an encoding system typically codes video data for delivery to a decoding system where the video data is decoded and consumed. As such, the encoding system and decoding system support coding, delivery and decoding of video data in a single direction. In applications where bidirectional exchange is desired, a pair of terminals 110, 120 (FIG. 1) each may possess both the encoding functionality and the decoding functionality described hereinabove, so that each terminal may code video for delivery to the other terminal and decode video received from it.
Several embodiments of the present invention are specifically illustrated and described herein. However, it will be appreciated that modifications and variations of the present invention are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention.