1. Field
The invention is directed generally to multimedia data processing and, more particularly, to encoding multimedia data based on shot detection processing.
2. Background
Shot detection relates to determining when a frame in a group of pictures (GOP) exhibits data indicating that a scene change has occurred. Generally, within a GOP, there may be no significant changes between any two or three (or more) adjacent frames, there may be slow changes, or there may be fast changes. Of course, these scene change classifications can be further broken down into finer levels of change depending on the specific application, if necessary.
Detecting shot or scene changes is important for efficient encoding of video. Typically, when a GOP is not changing significantly, an I-frame at the beginning of the GOP followed by a number of predictive frames can sufficiently encode the video so that subsequent decoding and display of the video is visually acceptable. However, when a scene is changing, either abruptly or slowly, additional I-frames and less predictive encoding (P-frames and B-frames) may be needed to produce visually acceptable decoded results. Improvements in shot detection, and in encoding that uses the results of shot detection, could improve coding efficiency and overcome other problems in the art associated with GOP partitioning.
Each of the inventive apparatuses and methods described herein has several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of this invention, its more prominent features will now be discussed briefly. After considering this discussion, and particularly after reading the section entitled “Detailed Description,” one will understand how the features of this invention provide improvements for multimedia data processing apparatuses and methods.
A method of processing multimedia data comprises obtaining at least one metric indicative of a difference between a selected frame and frames temporally adjacent to the selected frame in a plurality of video frames, said at least one metric comprising bi-directional motion information and luminance difference information associated with the selected frame and the frames temporally adjacent to the selected frame, determining a shot event associated with the selected frame based on said at least one metric, and adaptively encoding the selected frame based on the shot event. In one aspect, obtaining the at least one metric comprises calculating the at least one metric. If the shot event indicates that the selected frame is an abrupt scene change, the selected frame can be adaptively encoded as an I-frame. If the shot event indicates that the selected frame is a portion of a plurality of frames comprising a slow scene change, the selected frame can be encoded as either a P-frame or a B-frame. In another aspect, if the shot event indicates that the selected frame contains at least one camera flashlight, the selected frame can be identified as requiring special processing. Examples of such special processing include removal of the selected frame from the video, and replicating a frame temporally adjacent to the selected frame and substituting the replicated frame for the selected frame. In some aspects, the shot event indicates that the selected frame comprises an abrupt scene change, a portion of a slow scene change, or at least one camera flashlight. In some aspects, adaptively encoding comprises encoding the selected frame as an I-frame if the shot event does not indicate that the selected frame comprises an abrupt scene change, a portion of a slow scene change, or at least one camera flashlight.
In another aspect, an apparatus for processing multimedia data includes a motion compensator configured to obtain at least one metric indicative of a difference between a selected frame and frames temporally adjacent to the selected frame in a plurality of video frames, said at least one metric comprising bi-directional motion information and luminance information, a shot classifier configured to determine a shot event associated with the selected frame based on said at least one metric, and an encoder configured to adaptively encode the selected frame based on the shot event.
In another aspect, an apparatus for processing multimedia data includes means for obtaining at least one metric indicative of a difference between a selected frame and frames temporally adjacent to the selected frame in a plurality of video frames, said at least one metric comprising bi-directional motion information and luminance difference information associated with the selected frame and the frames temporally adjacent to the selected frame, means for determining a shot event associated with the selected frame based on said at least one metric, and means for adaptively encoding the selected frame based on the shot event. In one aspect, where the shot event indicates that the selected frame is an abrupt scene change, the adaptively encoding means can comprise means for encoding the selected frame as an I-frame. In another aspect, where the shot event indicates that the selected frame is a portion of a plurality of frames comprising a slow scene change, the adaptively encoding means can comprise means for encoding the selected frame as either a P-frame or a B-frame. In another aspect, where the shot event indicates that the selected frame contains at least one camera flashlight, the adaptively encoding means can include means for identifying the selected frame as requiring special processing.
In another aspect, a machine readable medium includes instructions for processing multimedia data, wherein the instructions upon execution cause a machine to obtain at least one metric indicative of a difference between a selected frame and frames temporally adjacent to the selected frame in a plurality of video frames, the at least one metric comprising bi-directional motion information and luminance difference information associated with the selected frame and the frames temporally adjacent to the selected frame, determine a shot event associated with the selected frame based on said at least one metric, and adaptively encode the selected frame based on the shot event.
In another aspect, a processor for processing multimedia data comprises a configuration to obtain at least one metric indicative of a difference between a selected frame and frames temporally adjacent to the selected frame in a plurality of video frames, said at least one metric comprising bi-directional motion information and luminance difference information associated with the selected frame and the frames temporally adjacent to the selected frame, determine a shot event associated with the selected frame based on said at least one metric, and adaptively encode the selected frame based on the shot event.
In the following description, specific details are given to provide a thorough understanding of the aspects. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details. For example, communication systems and video processing devices may be shown in block diagrams in order not to obscure the aspects in unnecessary detail.
Described herein are certain inventive aspects of shot detection and encoding systems and methods that improve the performance of existing encoding systems. Such aspects utilize statistics (or metrics), including statistical comparisons between adjacent frames of video data, to determine whether an abrupt scene change occurred, whether a scene is slowly changing, or whether there are camera flashlights in the scene, which can make video encoding especially complex. The statistics can be obtained from a preprocessor and then sent to an encoding device, or they can be generated in an encoding device (e.g., by a processor configured to perform motion compensation). The resulting statistics aid in the scene change detection decision. In a system that performs transcoding, a suitable preprocessor or configurable processor often exists. If the preprocessor performs motion-compensation-aided deinterlacing, the motion compensation statistics are already available and ready to use.
A shot detector as described herein can utilize statistics from only a previous frame, a current frame, and a next frame, so that the algorithm has very low latency. The shot detector differentiates several different types of shot events, including abrupt scene changes, cross-fading and other slow scene changes, and camera flashlights. By handling the different types of shot events with different strategies in the encoder, encoding efficiency and visual quality are enhanced.
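For illustration only, the following Python sketch (with invented names such as FrameStats and ShotEvent that do not appear in the original disclosure) shows the three-frame window of statistics such a low-latency detector consumes and the shot events it distinguishes:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List

class ShotEvent(Enum):
    """Shot events the detector distinguishes."""
    ABRUPT_SCENE_CHANGE = auto()
    SLOW_SCENE_CHANGE = auto()
    CAMERA_FLASHLIGHT = auto()
    NONE = auto()

@dataclass
class FrameStats:
    """Per-frame statistics produced by the motion compensator."""
    sad_p: float      # SAD against the previous frame
    sad_n: float      # SAD against the next frame
    avg_luma: float   # average frame luminance
    luma_hist: List[int] = field(default_factory=lambda: [0] * 16)  # 16-bin histogram

def classify(prev: FrameStats, cur: FrameStats, nxt: FrameStats) -> ShotEvent:
    """Only a three-frame window is needed, so latency stays very low.
    The individual tests are sketched in the sections that follow."""
    raise NotImplementedError
```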
References herein to “one aspect,” “an aspect,” “some aspects,” or “certain aspects” mean that one or more of a particular feature, structure, or characteristic described in connection with the aspect can be included in at least one aspect of a shot detection and encoding system. The appearances of such phrases in various places in the specification are not necessarily all referring to the same aspect, nor are separate or alternative aspects mutually exclusive of other aspects. Moreover, various features are described which may be exhibited by some aspects and not by others. Similarly, various requirements are described which may be requirements for some aspects but not other aspects.
“Multimedia data” or “multimedia” as used herein is a broad term that includes video data (which can include audio data), audio data, or both video data and audio data. “Video data” or “video” as used herein is a broad term that refers to an image or one or more series or sequences of images containing text, image, and/or audio data; it can be used to refer to multimedia data, and the terms may be used interchangeably unless otherwise specified.
The motion compensator 23 can be configured to determine bi-directional motion information about frames in the video. The motion compensator 23 can also be configured to determine one or more difference metrics, for example, the sum of absolute differences (SAD) or the sum of squared differences (SSD), and to calculate other information including luminance information for one or more frames (e.g., macroblock (MB) luminance averages or differences), a luminance histogram difference, and a frame difference metric, examples of which are described in reference to Equations 1-3. The shot classifier can be configured to classify frames in the video into two or more categories of “shots” using information determined by the motion compensator. The encoder is configured to adaptively encode the plurality of frames based on the shot classifications. The motion compensator, shot classifier, and encoder are described below in reference to Equations 1-10.
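As a minimal illustration of the two block-difference metrics named above, assuming 8-bit luminance blocks stored as NumPy arrays:

```python
import numpy as np

def sad(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of absolute differences between two same-sized luminance blocks."""
    return float(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def ssd(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of squared differences between two same-sized luminance blocks."""
    d = a.astype(np.int32) - b.astype(np.int32)
    return float((d * d).sum())
```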
The encoding component 22, components thereof, and processes contained therein, can be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof. For example, the motion compensator 23, shot classifier 24, and the encoder 25 may be standalone components, incorporated as hardware, firmware, middleware in a component of another device, or be implemented in microcode or software that is executed on a processor, or a combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments that perform the motion compensation, shot classifying and encoding processes may be stored in a machine readable medium such as a storage medium. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents.
The multimedia processing device 30 can provide encoded video for further processing and/or transmission to other devices, for example, terminals 6.
Video encoding usually operates on a structured group of pictures (GOP). A GOP normally starts with an intra-coded frame (I-frame), followed by a series of P (predictive) or B (bi-directional) frames. Typically, an I-frame can store all the data needed to display the frame, a B-frame relies on data in the preceding and following frames (e.g., containing only data that has changed from the preceding frame or that differs from data in the next frame), and a P-frame contains data that has changed from the preceding frame.
In common usage, I-frames are interspersed with P-frames and B-frames in encoded video. In terms of size (e.g., the number of bits used to encode the frame), I-frames are typically much larger than P-frames, which in turn are larger than B-frames. For efficient encoding, transmission, and decoding processing, the length of a GOP should be long enough to reduce the efficiency loss from large I-frames, and short enough to limit mismatch between encoder and decoder, or channel impairment. In addition, macroblocks (MBs) in P-frames can be intra-coded for the same reasons.
Scene change detection can be used for a video encoder to determine a proper GOP length and insert I-frames based on the GOP length, instead of inserting an often unneeded I-frame at a fixed interval. In a practical streaming video system, the communication channel is usually impaired by bit errors or packet losses. Where to place I-frames or I MBs may significantly impact decoded video quality and viewing experience. One encoding scheme is to use intra-coded frames for pictures or portions of pictures that have significant change from collocated previous pictures or picture portions. Normally these regions cannot be predicted effectively and efficiently with motion estimation, and encoding can be done more efficiently if such regions are exempted from inter-frame coding techniques (e.g., encoding using B-frames and P-frames). In the context of channel impairment, those regions are likely to suffer from error propagation, which can be reduced or eliminated (or nearly so) by intra-frame encoding.
A selected frame, or portions of the GOP video, can be classified into two or more categories, where each frame or portion can have different intra-frame encoding criteria that may depend on the particular implementation. As an illustrative example, a selected frame in the video can be processed to determine whether it includes a certain “shot event,” which can be used to classify the frame into one of three categories based on its content; that is, each category indicates a type of shot event that is captured by the frame or that the frame is a part of. These three categories are an abrupt scene change, a portion of a cross-fading and/or other slow scene change, and a frame containing at least one camera flash, also referred to as “camera flashlights.”
A frame classified as an abrupt scene change is significantly different from the previous frame. These abrupt scene changes are usually caused by a camera operation during editing or generation of the video. For example, video generated from different cameras can include abrupt scene changes because the cameras have different viewpoints. Also, abruptly changing the field of view of a camera while recording video can result in an abrupt scene change. Since the content of a frame classified as an abrupt scene change is different from that of the previous frame, an abrupt scene change frame should typically be encoded as an I-frame.
A frame classified as a portion of a slow scene change includes video having cross-fading and other slow scene changes or slow switching of scenes. In some examples, this can be caused by computer processing of camera shots. Gradual blending of two different scenes may look more pleasing to human eyes, but it poses a challenge to video coding. For some slowly changing scenes, motion compensation may not reduce the bitrate of those frames effectively. In some circumstances, more intra-coded MBs can be used for these frames.
A frame classified as having camera flashlights, or a camera flash event, includes content with one or more camera flashes. Such flashes are relatively short in duration (e.g., one frame) and can be extremely bright, such that the pixels in a frame portraying the flashes exhibit unusually high luminance relative to a corresponding area in an adjacent frame. Camera flashlights shift the luminance of a picture suddenly and swiftly. Usually the duration of a camera flashlight is shorter than the temporal masking duration of the human visual system (HVS), which is typically defined to be 44 ms. Human eyes are not sensitive to the quality of these short bursts of brightness, so they can be encoded coarsely. Because flashlight frames cannot be handled effectively with motion compensation and they are poor prediction candidates for future frames, coarse encoding of these frames does not reduce the encoding efficiency of future frames. Scenes classified as flashlights should not be used to predict other frames because of the “artificial” high luminance, and other frames cannot effectively be used to predict these frames for the same reason. Once identified, these frames can be taken out because they may require a relatively high amount of processing. One option is to remove the frames determined to include camera flashlights and encode a DC coefficient in their place; such a solution is simple, computationally fast, and can save many bits during encoding.
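One form of the “special processing” described above, replicating a temporally adjacent frame in place of the flash frame, can be sketched as follows (an illustrative fragment, not a required implementation):

```python
def replace_flash_frame(frames: list, idx: int) -> None:
    """Substitute a copy of a temporally adjacent frame for the flash frame
    at position idx, so the flash never participates in prediction."""
    neighbor = idx - 1 if idx > 0 else idx + 1
    frames[idx] = frames[neighbor].copy()
```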
When any of the above types of scene changes is detected in a frame, a shot event is declared, and the detected scene type can be used to determine how the frame should be encoded; in other words, the frame can be adaptively encoded based on the determined shot event. Shot detection is not only useful for improving encoding quality; it can also aid in video content searching and indexing. One aspect of a scene detection process is described hereinbelow.
Process 40 then proceeds to block 44, where shot changes in the video are determined based on the metrics. A video frame can be classified into two or more categories according to the type of shot contained in the frame, for example, an abrupt scene change, a slowly changing scene, or a scene containing high luminance values (camera flashes). Certain encoding implementations may necessitate other categories. An illustrative example of shot classification is described below in reference to process B.
Once a frame is classified, process 40 proceeds to block 46, where the frame can be encoded, or designated for encoding, using the shot classification results. Such results can influence whether to encode the frame with an intra-coded frame or a predictive frame (e.g., a P-frame or B-frame). Process C illustrates an example of encoding using the shot classification results.
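As a sketch of one plausible mapping from shot classification to encoding decision (the labels and the default case here are illustrative, not mandated by the text):

```python
def choose_frame_type(event: str) -> str:
    """Map a detected shot event to an encoding decision."""
    policy = {
        "abrupt": "I",       # abrupt scene change: start a new GOP with an I-frame
        "slow": "P/B",       # slow scene change: predictive coding, extra intra MBs
        "flash": "special",  # camera flashlight: e.g., drop/replicate, encode coarsely
    }
    return policy.get(event, "P/B")  # otherwise: ordinary predictive coding
```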
Motion Compensation
To perform bi-directional motion estimation/compensation, a video sequence can be preprocessed with a bi-directional motion compensator that matches every 8×8 block of the current frame with blocks in the two most adjacent neighboring frames, one in the past and one in the future. The motion compensator produces motion vectors and difference metrics for every block.
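A minimal sketch of such a bi-directional matcher follows, assuming full-search matching over a small window (the 8-pixel search range is an assumption, not specified above):

```python
import numpy as np

def best_match(block: np.ndarray, ref: np.ndarray, by: int, bx: int, search: int = 8):
    """Full search in `ref` around (by, bx); returns (motion vector, best SAD)."""
    h, w = ref.shape
    best_dy, best_dx, best_sad = 0, 0, float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= h - 8 and 0 <= x <= w - 8:
                cand = ref[y:y + 8, x:x + 8]
                d = np.abs(block.astype(np.int32) - cand.astype(np.int32)).sum()
                if d < best_sad:
                    best_dy, best_dx, best_sad = dy, dx, d
    return (best_dy, best_dx), float(best_sad)

def bidirectional_sads(prev: np.ndarray, cur: np.ndarray, nxt: np.ndarray):
    """Match every 8x8 block of `cur` against the past and future frames and
    accumulate the frame-level SADP and SADN used by the shot detector."""
    sad_p = sad_n = 0.0
    for by in range(0, cur.shape[0] - 7, 8):
        for bx in range(0, cur.shape[1] - 7, 8):
            block = cur[by:by + 8, bx:bx + 8]
            sad_p += best_match(block, prev, by, bx)[1]
            sad_n += best_match(block, nxt, by, bx)[1]
    return sad_p, sad_n
```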
In MPEG, the Y, Cr and Cb components can be stored in a 4:2:0 format, where the Cr and Cb components are down-sampled by 2 in the X and the Y directions. Hence, each macroblock would consist of 256 Y components, 64 Cr components and 64 Cb components. Macroblock 136 of current picture 134 is predicted from reference picture 132 at a different time point than current picture 134. A search is made in reference picture 132 to locate the best matching macroblock 138 that is closest, in terms of Y, Cr and Cb values, to current macroblock 136 being encoded. The location of best matching macroblock 138 in reference picture 132 is encoded in motion vector 140. Reference picture 132 can be an I-frame or P-frame that a decoder will have reconstructed prior to the construction of current picture 134. Best matching macroblock 138 is subtracted from current macroblock 136 (a difference for each of the Y, Cr and Cb components is calculated), resulting in residual error 142. Residual error 142 is encoded with a 2D Discrete Cosine Transform (DCT) 144 and then quantized 146. Quantization 146 can be performed to provide spatial compression by, for example, allotting fewer bits to the high-frequency coefficients and more bits to the low-frequency coefficients. The quantized coefficients of residual error 142, along with motion vector 140 and information identifying reference picture 132, are encoded information representing current macroblock 136. The encoded information can be stored in memory for future use, operated on for purposes of, for example, error correction or image enhancement, or transmitted over network 4.
The encoded quantized coefficients of residual error 142, along with encoded motion vector 140 can be used to reconstruct current macroblock 136 in the encoder for use as part of a reference frame for subsequent motion estimation and compensation. The encoder can emulate the procedures of a decoder for this P-frame reconstruction. The emulation of the decoder will result in both the encoder and decoder working with the same reference picture. The reconstruction process, whether done in an encoder, for further inter-coding, or in a decoder, is presented here. Reconstruction of a P-frame can be started after the reference frame (or a portion of a picture or frame that is being referenced) is reconstructed. The encoded quantized coefficients are dequantized 150 and then 2D Inverse DCT, or IDCT, 152 is performed resulting in decoded or reconstructed residual error 154. Encoded motion vector 140 is decoded and used to locate the already reconstructed best matching macroblock 156 in the already reconstructed reference picture 132. Reconstructed residual error 154 is then added to reconstructed best matching macroblock 156 to form reconstructed macroblock 158. Reconstructed macroblock 158 can be stored in memory, displayed independently or in a picture with other reconstructed macroblocks, or processed further for image enhancement.
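To make the residual path concrete, a simplified round trip is sketched below; the single uniform quantizer step is an assumption (real encoders allot bits per frequency, as described above):

```python
import numpy as np
from scipy.fft import dctn, idctn

def residual_round_trip(cur_mb: np.ndarray, best_mb: np.ndarray,
                        q_step: int = 16) -> np.ndarray:
    """Residual -> 2-D DCT -> quantize, then the decoder-side inverse:
    dequantize -> IDCT -> add back the best match to reconstruct the block."""
    residual = cur_mb.astype(np.float64) - best_mb.astype(np.float64)
    coeffs = dctn(residual, norm="ortho")   # 2D DCT of the residual error
    quantized = np.round(coeffs / q_step)   # lossy step (spatial compression)
    rec_residual = idctn(quantized * q_step, norm="ortho")
    return np.clip(best_mb + rec_residual, 0, 255)  # reconstructed macroblock
```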
Encoding using B-frames (or any section coded with bi-directional prediction) can exploit temporal redundancy between a region in a current picture and a best matching prediction region in a previous picture and a best matching prediction region in a subsequent picture. The subsequent best matching prediction region and the previous best matching prediction region are combined to form a combined bi-directional predicted region. The difference between the current picture region and the best matching combined bi-directional prediction region is a residual error (or prediction error). The locations of the best matching prediction region in the subsequent reference picture and the best matching prediction region in the previous reference picture can be encoded in two motion vectors.
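A minimal sketch of forming the combined bi-directional prediction (a plain average is assumed here; weighted combinations are equally possible):

```python
import numpy as np

def bidirectional_residual(cur: np.ndarray, prev_best: np.ndarray,
                           next_best: np.ndarray):
    """Combine the two best-matching regions and form the prediction error."""
    combined = (prev_best.astype(np.float64) + next_best.astype(np.float64)) / 2.0
    return combined, cur.astype(np.float64) - combined
```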
Luminance Histogram Difference
In one aspect, the motion compensator can produce a difference metric for every block. The difference metric is based on luminance differences between blocks in one frame and corresponding blocks in a temporally adjacent previous frame and a temporally adjacent next frame. The difference metric can, for example, include a sum of squared differences (SSD) or a sum of absolute differences (SAD). Without loss of generality, SAD is used here as an illustrative example.
For the current (or selected) frame, a SAD ratio is calculated as shown in Equation 1:

γ = (ε + SADP)/(ε + SADN) [1]

where SADP and SADN are the sums of absolute differences of the forward and the backward difference metric, respectively, for the selected frame. It should be noted that the denominator contains a small positive number ε to prevent a “divide-by-zero” error; the numerator also contains ε to balance the effect of the ε in the denominator. For example, if the previous frame, the current frame, and the next frame are identical, motion search should yield SADP = SADN = 0. In this case, the above calculation generates γ = 1 instead of 0 or infinity.
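Equation 1 translates directly into code; the value of ε is an implementation choice (1.0 is assumed here):

```python
def sad_ratio(sad_p: float, sad_n: float, eps: float = 1.0) -> float:
    """Equation 1: gamma = (eps + SADP) / (eps + SADN).  With identical
    frames (SADP = SADN = 0) this yields gamma = 1 rather than 0/0."""
    return (eps + sad_p) / (eps + sad_n)
```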
A luminance histogram can be calculated for every frame. Typically, multimedia images have a luminance depth of eight bits. The number of “bins” used for calculating the luminance histogram according to some aspects can be set to 16. In other aspects, the number of bins can be set to an appropriate number which may depend upon the type of data being processed, the computational power available, or other predetermined criteria. In some aspects, the number of bins can be set dynamically based on a calculated or received metric, such as the content of the data.
Equation 2 illustrates one example of calculating a luminance histogram difference (λ):

λ = (Σi=1..16 |NPi − NCi|)/N [2]

where NPi is the number of blocks in the ith bin for the previous frame, NCi is the number of blocks in the ith bin for the current frame, and N is the total number of blocks in a frame. If the luminance histograms of the previous and the current frame are completely dissimilar (or disjoint), then λ = 2.
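A direct implementation of Equation 2, assuming the two 16-bin histograms hold block counts:

```python
def luma_histogram_difference(hist_prev, hist_cur, num_blocks: int) -> float:
    """Equation 2: lambda lies in [0, 2]; disjoint histograms give lambda = 2."""
    return sum(abs(p - c) for p, c in zip(hist_prev, hist_cur)) / num_blocks
```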
A frame difference metric D, discussed in reference to block 56, is defined as shown in Equation 3:

D = γC/γP + Aλ(2λ + 1) [3]

where A is a constant chosen by application, γC is the SAD ratio of the current frame, and γP is the SAD ratio of the previous frame.
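Equation 3 in code form (A = 1 is the value the simulations discussed below use):

```python
def frame_difference(gamma_c: float, gamma_p: float, lam: float,
                     a: float = 1.0) -> float:
    """Equation 3: D = gamma_C / gamma_P + A * lambda * (2 * lambda + 1)."""
    return gamma_c / gamma_p + a * lam * (2.0 * lam + 1.0)
```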
Abrupt Scene Change
The selected frame is designated an abrupt scene change if the frame difference metric D meets the criterion shown in Equation 4:

D = γC/γP + Aλ(2λ + 1) ≧ T1 [4]

where A is a constant chosen by application, and T1 is a threshold value (e.g., a threshold criterion). If the threshold value is met, at block 84 process D designates the frame as an abrupt scene change and, in this example, no further shot classification may be needed.
In one example, simulations show that setting A = 1 and T1 = 5 achieves good detection performance. If the current frame is an abrupt scene change frame, then γC should be large and γP should be small. The ratio γC/γP can be used instead of γC alone so that the metric is normalized to the activity level of the context.
It should be noted that the luminance histogram difference λ is used in the criterion of Equation 4 in a non-linear way.
Cross-Fading and Slow Scene Changes
T2 ≦ D < T1 [5]
for a certain number of continuous frames, where T1 is the same threshold value used in Equation 4 and T2 is another threshold value. Typically, T1 and T2 are determined by normal experimentation or simulation because of the differences among possible implementations. If the criteria in Equation 5 are satisfied, at block 94 process E classifies the frame as part of a slow-changing scene. No further classification of the frame may be needed, and shot classification for the selected frame ends.
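A compact sketch of the two tests of Equations 4 and 5 follows; T2 = 1.5 and the three-frame run length are placeholders, since the text leaves both to experimentation:

```python
def classify_scene_change(d_history, t1: float = 5.0, t2: float = 1.5,
                          run: int = 3):
    """Equation 4: abrupt change if the newest D >= T1.
    Equation 5: slow change if T2 <= D < T1 holds for `run` consecutive frames."""
    if d_history[-1] >= t1:
        return "abrupt"
    if len(d_history) >= run and all(t2 <= d < t1 for d in d_history[-run:]):
        return "slow"
    return None
```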
Camera Flashlight Events
Process F illustrates an example of determining whether the selected frame contains one or more camera flashlights.
In one illustrative example, at block 102, process F determines whether the average luminance of the current frame minus the average luminance of the previous frame equals or exceeds a threshold value T3, and whether the average luminance of the current frame minus the average luminance of the next frame equals or exceeds T3, as shown in Equations 6 and 7:

YC − YP ≧ T3 [6]

YC − YN ≧ T3 [7]
If the criteria of Equations 6 and 7 are not satisfied, the current frame is not classified as comprising camera flashlights and process F returns. If the criteria illustrated in Equations 6 and 7 are satisfied, process F proceeds to block 104, where it determines whether a backwards difference metric SADP and a forward difference metric SADN are greater than or equal to a certain threshold value T4, as illustrated in Equations 8 and 9 below:
SADP ≧ T4 [8]

SADN ≧ T4 [9]

where YC is the average luminance of the current frame, YP is the average luminance of the previous frame, YN is the average luminance of the next frame, and SADP and SADN are the difference metrics computed relative to the previous and next frames, respectively, for the current frame.
Values of threshold T3 are typically determined by normal experimentation, as implementations of the described processes can differ in operating parameters, including threshold values. SAD values are included in the determination because a camera flash typically lasts only one frame and, due to the luminance difference, this frame cannot be predicted well using motion compensation from either the forward or the backward direction.
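Equations 6-9 combine into a single flash test; the threshold values below are placeholders, as the text notes they are found by experimentation:

```python
def is_camera_flash(y_cur: float, y_prev: float, y_next: float,
                    sad_p: float, sad_n: float,
                    t3: float = 10.0, t4: float = 200.0) -> bool:
    """Flash if the current frame is much brighter than both neighbors
    (Equations 6-7) and poorly predicted from both directions (Equations 8-9)."""
    brighter = (y_cur - y_prev >= t3) and (y_cur - y_next >= t3)
    unpredictable = (sad_p >= t4) and (sad_n >= t4)
    return brighter and unpredictable
```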
In some aspects, one or more of the threshold values T1, T2, T3, and T4 are predetermined, and such values are incorporated into the shot classifier in the encoding device. Typically, these threshold values are selected through testing of a particular implementation of shot detection. In some aspects, one or more of the threshold values T1, T2, T3, and T4 can be set during processing (e.g., dynamically) based on information (e.g., metadata) supplied to the shot classifier or based on information calculated by the shot classifier itself.
In the above-described aspect, the amount of difference between the frame to be compressed and its two adjacent frames is indicated by the frame difference metric D. If a significant amount of monotonic luminance change is detected, it signifies a cross-fade effect in the frame. The more prominent the cross-fade is, the more gain may be achieved by using B-frames. In some aspects, a modified frame difference metric is used as shown in Equation 10 below:

D1 = (1 − α + α|dP − dN|/(dP + dN)) × D, if a consistent trend of luma shift is observed and the shift strength is large enough (e.g., dP ≧ Δ and dN ≧ Δ); otherwise D1 = D [10]

where dP = |YC − YP| and dN = |YC − YN| are the luma differences between the current frame and the previous frame, and between the current frame and the next frame, respectively, Δ represents a constant that can be determined by normal experimentation (as it can depend on the implementation), and α is a weighting variable having a value between 0 and 1.
The modified frame difference metric D1 differs from the original frame difference metric D only if a consistent trend of luma shift is observed and the shift strength is large enough; D1 is always equal to or less than D. If the change of luma is steady (dP = dN), the modified frame difference metric D1 reaches its lowest ratio to the original frame difference metric D, namely (1 − α).
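A sketch of the modified metric of Equation 10, using the Δ = 5.5 and α = 0.4 values reported in the simulations below; the monotonic-trend test is one reading of “consistent trend of luma shift”:

```python
def modified_frame_difference(d: float, y_cur: float, y_prev: float,
                              y_next: float, alpha: float = 0.4,
                              delta: float = 5.5) -> float:
    """Damp D toward (1 - alpha) * D when a strong, monotonic luma shift
    (a cross-fade signature) spans the three-frame window; otherwise D1 = D."""
    d_p, d_n = abs(y_cur - y_prev), abs(y_cur - y_next)
    monotonic = (y_prev < y_cur < y_next) or (y_prev > y_cur > y_next)
    if monotonic and d_p >= delta and d_n >= delta:
        return (1.0 - alpha + alpha * abs(d_p - d_n) / (d_p + d_n)) * d
    return d
```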
Table 1 below shows the performance improvement from adding abrupt scene change detection. The total number of I-frames in the non-scene-change (NSC) and scene-change (SC) cases is approximately the same. In the NSC case, I-frames are distributed uniformly across the whole sequence, while in the SC case, I-frames are assigned only to abrupt scene change frames.
It can be seen that typically a 0.2~0.3 dB improvement can be achieved in PSNR. Simulation results show that the shot detector is very accurate in determining the above-mentioned shot events. Simulation of five clips with normal cross-fade effects shows that, at Δ = 5.5 and α = 0.4, a PSNR gain of 0.226031 dB is achieved at the same bitrate.
It is noted that the shot detection and encoding aspects described herein may be described as a process which is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although the flowcharts shown in the figures may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
It should also be apparent to those skilled in the art that one or more elements of a device disclosed herein may be rearranged without affecting the operation of the device. Similarly, one or more elements of a device disclosed herein may be combined without affecting the operation of the device. Those of ordinary skill in the art would understand that information and multimedia data may be represented using any of a variety of different technologies and techniques. Those of ordinary skill would further appreciate that the various illustrative logical blocks, modules, and algorithm steps described in connection with the examples disclosed herein may be implemented as electronic hardware, firmware, computer software, middleware, microcode, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed methods.
For example, the steps of a method or algorithm described in connection with the shot detection and encoding examples and figures disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The methods and algorithms are particularly applicable to communication technology, including wireless transmission of video to cell phones, computers, laptop computers, PDAs, and all types of personal and business communication devices. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an Application Specific Integrated Circuit (ASIC). The ASIC may reside in a wireless modem. In the alternative, the processor and the storage medium may reside as discrete components in the wireless modem.
In addition, the various illustrative logical blocks, components, modules, and circuits described in connection with the examples disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The previous description of the disclosed examples is provided to enable any person of ordinary skill in the art to make or use the disclosed methods and apparatus. Various modifications to these examples will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other examples and additional elements may be added without departing from the spirit or scope of the disclosed method and apparatus. The description of the aspects is intended to be illustrative, and not to limit the scope of the claims.
The present Application for Patent claims priority to Provisional Application No. 60/727,644 entitled “METHOD AND APPARATUS FOR SHOT DETECTION IN VIDEO STREAMING” filed Oct. 17, 2005, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.