Telecine may refer to a technique used to convert film to video. Film material may be recorded at 24 frames per second, while National Television System Committee (NTSC) video may be recorded at a 59.94 Hz vertical scanning frequency (59.94 fields per second) and displayed at a 29.97 Hz frame rate of interlaced fields.
Systems, methods, and instrumentalities are disclosed to filter video. A plurality of frames may be converted to a plurality of fields. A field may be of an even parity. A field may be of an odd parity. A series of fields may contain at least one superfluous field (e.g., a series of even parity fields may contain at least one superfluous field, and a series of odd parity fields may contain at least one superfluous field). Comparing each field to at least one temporally adjacent field (in the same parity or in an opposing parity) may determine a pair of fields which are most similar to each other. A pair of such fields may comprise the superfluous field. The superfluous field may be the field of the pair of fields which is least similar to a respective temporally adjacent field (e.g., a field that is not the other of the pair of fields, and which may be a temporally adjacent field of the same parity). The determined field may be designated as the superfluous field. The plurality of frames may be reconstructed (e.g., from the plurality of fields without the determined superfluous field).
Methods, servers, filters, and displays comprising video filtering may comprise receiving and decoding a video sequence (e.g., an encoded video sequence) comprising a plurality of fields. A field of the plurality of fields may be one of an even parity field and an odd parity field. A field of the plurality of fields may include a superfluous field (e.g., a repeated or redundant field as a result of telecine). Comparing each field to at least one temporally adjacent field of the same parity may determine a pair of fields which are most similar to each other. The pair of fields may comprise the superfluous field (e.g., one of the fields in the pair is a superfluous field). Which of the pair of fields is the superfluous field may be determined by determining which of the pair of fields is least similar to a respective temporally adjacent field (e.g., of the same parity) that is not the other of the pair of fields. The video sequence may be reconstructed without the determined superfluous field. The video sequence may be re-encoded without the determined superfluous field.
A detailed description of illustrative embodiments will now be described with reference to the various Figures. Although this description provides a detailed example of possible implementations, it should be noted that the details are intended to be exemplary and in no way limit the scope of the application.
As shown in
The communications systems 100 may also include a base station 114a and a base station 114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the core network 106/107/109, the Internet 110, and/or the networks 112. By way of example, the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.
The base station 114a may be part of the RAN 103/104/105, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals within a particular geographic region, which may be referred to as a cell (not shown). The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, e.g., one for each sector of the cell. In another embodiment, the base station 114a may employ multiple-input multiple output (MIMO) technology and, therefore, may utilize multiple transceivers for each sector of the cell.
The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 115/116/117, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 115/116/117 may be established using any suitable radio access technology (RAT).
More specifically, as noted above, the communications system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 103/104/105 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunications System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 115/116/117 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink Packet Access (HSDPA) and/or High-Speed Uplink Packet Access (HSUPA).
In another embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 115/116/117 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A).
In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.16 (e.g., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.
The base station 114b in
The RAN 103/104/105 may be in communication with the core network 106/107/109, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. For example, the core network 106/107/109 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in
The core network 106/107/109 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or other networks 112. The PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and the Internet protocol (IP) in the TCP/IP internet protocol suite. The networks 112 may include wired or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another core network connected to one or more RANs, which may employ the same RAT as the RAN 103/104/105 or a different RAT.
Some or all of the WTRUs 102a, 102b, 102c, 102d in the communications system 100 may include multi-mode capabilities, e.g., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links. For example, the WTRU 102c shown in
The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While
The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 115/116/117. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In another embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.
In addition, although the transmit/receive element 122 is depicted in
The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as UTRA and IEEE 802.11, for example.
The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 130 and/or the removable memory 132. The non-removable memory 130 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).
The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCd), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.
The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 115/116/117 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.
The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.
As shown in
The core network 106 shown in
The RNC 142a in the RAN 103 may be connected to the MSC 146 in the core network 106 via an IuCS interface. The MSC 146 may be connected to the MGW 144. The MSC 146 and the MGW 144 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices.
The RNC 142a in the RAN 103 may also be connected to the SGSN 148 in the core network 106 via an IuPS interface. The SGSN 148 may be connected to the GGSN 150. The SGSN 148 and the GGSN 150 may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
As noted above, the core network 106 may also be connected to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.
The RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment. The eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the eNode-Bs 160a, 160b, 160c may implement MIMO technology. Thus, the eNode-B 160a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 102a.
Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink and/or downlink, and the like. As shown in
The core network 107 shown in
The MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like. The MME 162 may also provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM or WCDMA.
The serving gateway 164 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via the S1 interface. The serving gateway 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c. The serving gateway 164 may also perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when downlink data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.
The serving gateway 164 may also be connected to the PDN gateway 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.
The core network 107 may facilitate communications with other networks. For example, the core network 107 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices. For example, the core network 107 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the core network 107 and the PSTN 108. In addition, the core network 107 may provide the WTRUs 102a, 102b, 102c with access to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.
As shown in
The air interface 117 between the WTRUs 102a, 102b, 102c and the RAN 105 may be defined as an R1 reference point that implements the IEEE 802.16 specification. In addition, each of the WTRUs 102a, 102b, 102c may establish a logical interface (not shown) with the core network 109. The logical interface between the WTRUs 102a, 102b, 102c and the core network 109 may be defined as an R2 reference point, which may be used for authentication, authorization, IP host configuration management, and/or mobility management.
The communication link between each of the base stations 180a, 180b, 180c may be defined as an R8 reference point that includes protocols for facilitating WTRU handovers and the transfer of data between base stations. The communication link between the base stations 180a, 180b, 180c and the ASN gateway 182 may be defined as an R6 reference point. The R6 reference point may include protocols for facilitating mobility management based on mobility events associated with each of the WTRUs 102a, 102b, 102c.
As shown in
The MIP-HA 184 may be responsible for IP address management, and may enable the WTRUs 102a, 102b, 102c to roam between different ASNs and/or different core networks. The MIP-HA 184 may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices. The AAA server 186 may be responsible for user authentication and for supporting user services. The gateway 188 may facilitate interworking with other networks. For example, the gateway 188 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices. In addition, the gateway 188 may provide the WTRUs 102a, 102b, 102c with access to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.
Although not shown in
One or more embodiments described herein may be used in video processing systems, video display systems, and/or video compression systems.
Turning to
When film is transferred to a video format (e.g., NTSC video), a conversion technique which may be referred to as telecine may be used. One or more variations of telecine techniques may be used, such as the 2:3 pulldown or 3:2 pulldown technique.
When performing 2:3 pulldown, mixed or dirty video frames may be created. A mixed or dirty frame may refer to a video frame that includes fields from adjacent film frames, for example, instead of the same film frames. For example, frames 303 and 304 of
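The 2:3 pulldown cadence described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: field labels follow the document's convention (X0 = top/even field, X1 = bottom/odd field), and the function name is assumed.

```python
def pulldown_2_3(a, b, c, d):
    """2:3 pulldown: 4 progressive film frames -> 5 interlaced video frames.

    Each video frame is a (top_field, bottom_field) pair. The repeated
    fields (B1 appears twice in the bottom series, D0 twice in the top
    series) and the two mixed frames match the pattern described in the
    text; this cadence is illustrative.
    """
    return [
        (a + '0', a + '1'),  # clean frame:  A0/A1
        (b + '0', b + '1'),  # clean frame:  B0/B1
        (c + '0', b + '1'),  # mixed frame:  C0 paired with repeated B1
        (d + '0', c + '1'),  # mixed frame:  repeated D0 paired with C1
        (d + '0', d + '1'),  # clean frame:  D0/D1
    ]

frames = pulldown_2_3('A', 'B', 'C', 'D')
```

The third and fourth frames combine fields from adjacent film frames, which is exactly the mixed or dirty case the text describes.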
An inverse telecine process may be provided. There may be benefits to detecting and removing telecine in decoded video sequences. For example, non-interlaced displays (e.g., computer monitors, digital TVs, etc.) may show higher quality non-interlaced content. Removing mixed or dirty frames may improve results of video compression and/or processing techniques (e.g., filtering) that may be applied to decoded video.
A search for repeated (e.g., redundant or superfluous) fields may be performed. Fields from adjacent video frames may be compared to determine the 2:3 pulldown telecine patterns. For example, this may be done sequentially by examining frames of the interlaced video and keeping track of pair-wise differences between even and odd fields in a last number of frames (e.g., in the last 5-10 frames). The instances in which pair-wise differences are smaller than usual frame-wise differences may be suspected to be repetitive (e.g., redundant or superfluous) fields. If such instances form a systematic pattern with a periodicity of 5 frames, then the instances may be determined to be telecine generated.
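The pair-wise difference search described above might be sketched as follows, using MSE as the field-difference metric. The threshold and the 5-frame periodicity check are simplified assumptions; the text does not specify either.

```python
import numpy as np

def field_mse(f1, f2):
    """Mean squared error between two equal-sized fields."""
    return float(np.mean((f1.astype(np.float64) - f2.astype(np.float64)) ** 2))

def find_repeat_candidates(fields, threshold):
    """Indices i where same-parity fields i and i+1 are nearly identical,
    i.e., suspected repeated (redundant or superfluous) fields."""
    return [i for i in range(len(fields) - 1)
            if field_mse(fields[i], fields[i + 1]) < threshold]

def is_telecine_pattern(candidates, period=5):
    """True if the candidate repeats recur with the 5-frame telecine period."""
    return len(candidates) >= 2 and all(
        (b - a) % period == 0 for a, b in zip(candidates, candidates[1:]))
```

In practice the threshold would be derived from the usual frame-wise differences tracked over the last several frames, as the text suggests.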
A filter, server, and/or display device may receive (e.g., via a processor) an encoded video sequence. The filter, server, and/or display device may comprise a processor, which, for example, may perform one or more of the functions described herein. The filter may comprise one or more video filters (e.g., one or more video post-filters). The server may comprise a content server. The filter, server, and/or display device (e.g., via the processor) may decode the encoded video sequence. The decoded video sequence may comprise a plurality of fields. A frame may be represented by an even parity field (e.g., top, even, and/or 0 field) and an odd parity field (e.g., bottom, odd, and/or 1 field). The even field and/or the odd field (decoded video sequence) may comprise a series of fields (e.g., the fields A0′-D0″ and/or the fields A1′-D1′). At least one field in the series of fields may be a superfluous field.
Reconstructed (e.g., decoded) fields A0′-D0″ and A1′-D1′ may not be identical to original fields A0-D0 and A1-D1 after encoding, transmission/storage, and decoding. For example, decoded fields may not be the same as the corresponding frame data in the original video. Moreover, even reconstructed repeated fields may not be identical to each other (e.g., in addition to not being identical to the original repeated fields). For example, repeated fields B1′ and B1″ may not be identical to each other or to original repeated field B1. Repeated fields B1′ and B1″ may contain varying amounts of artifacts. Repeated fields B1′ and B1″ may not yield identical levels of quality for a reconstructed frame. Repeated fields B1′ and/or B1″ in the decoded video may be predicted from different reference fields. B1′ may be intracoded while B1″ may be motion predicted, for example, or vice versa. The result may be that the difference between B1′ and B1″ may be significant. A resulting non-interlaced frame may exhibit visible visual artifacts.
Repeated fields D0′ and D0″ may not be identical to each other or to original repeated field D0. Repeated fields D0′ and D0″ may contain varying amounts of artifacts. Repeated fields D0′ and D0″ may not yield identical levels of quality for a reconstructed frame.
Inverse telecine transformation of encoded video enabling more accurate reconstruction of the original progressive video may be performed. The effects of video compression may be reduced, for example, by identifying the repeated field (e.g., by identifying the superfluous field in a repeated pair) in the decoded sequence that most closely resembles the original sequence. An inverse telecine technique may be performed by identifying the pulldown pattern (e.g., 2:3 pulldown pattern), and determining and/or combining repeated fields to create a reconstructed frame.
Identifying the pulldown pattern may be provided. Although described with reference to a 2:3 pulldown pattern, the embodiments described herein may be applied to any pulldown pattern.
The identity of the repeated fields may be used to determine the 2:3 pulldown pattern. Once identified, a 2:3 pulldown pattern may be assumed to remain constant. However, a pulldown pattern may change due to editing, ad insertion, and/or the like, and so MSE may be tracked throughout the sequence and the pattern may be adjusted, for example, as needed.
MSE may be used in video coding techniques, such as motion estimation, for example, for objectively comparing video frames and/or video fields. MSE may track visual disparity. For example, a low MSE may be an indication that frames and/or fields are well matched, which, for example, may reduce the possibility of misidentification. The following equation (Equation 1) may be used to identify the pulldown pattern (e.g., the 2:3 pulldown pattern):
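Equation 1 is not reproduced in this text. The standard MSE definition, which the equation presumably resembles, is MSE = (1/N) Σᵢ (xᵢ − yᵢ)², and can be sketched as:

```python
import numpy as np

def mse(x, y):
    """Standard mean squared error between two equal-sized fields.

    Assumption: this is the conventional MSE definition; the document's
    Equation 1 is not reproduced in the text.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    return float(np.mean((x - y) ** 2))
```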
The repeated fields that minimize distortion may be the fields that minimize the expected distortion relative to the corresponding original field(s). For example, since there may be two or more repeated fields, the repeated field that minimizes distortion may be determined and selected using a reference field for comparison.
A first parity series of fields (e.g., top, even, and/or 0 field) may comprise A0′, B0′, C0′, D0′, and D0″. D0′ and D0″ may be identified as a repeated pair as described herein. In the inverse telecine, either D0′ or D0″ will be selected and the other field will be superfluous. The selected field (e.g., the field that minimizes distortion) may be represented as D0*.
A second parity series of fields (e.g., bottom, odd, and/or 1 field) may comprise A1′, B1′, B1″, C1′, and D1′. B1′ and B1″ may be identified as a repeated pair as described herein. In the inverse telecine, either B1′ or B1″ will be selected and the other field will be superfluous. The selected field (e.g., the field that minimizes distortion) may be represented as B1*.
For example, each of fields B1′ and B1″ may be compared to their adjacent (e.g., temporally adjacent) fields in the bottom field (e.g., same parity). B1′ may be compared to A1′ 1002. B1′ may be compared to C1′ 1004. B1″ may be compared to A1′ 1006. B1″ may be compared to C1′ 1008. The superfluous field may be selected by determining which of the pair of fields is least similar to its respective adjacent field(s). For example, when selecting the superfluous field, the similarity between adjacent fields may be determined by calculating the distortion between the adjacent field and its temporal neighbor (e.g., using a metric, such as but not limited to, MSE). A set of possible reference fields (e.g., A1′ and C1′ in
If (MSE(A1′, B1′)+MSE(A1′, B1″))<(MSE(C1′, B1′)+MSE(C1′, B1″)), A1′ is selected as the best reference; otherwise (e.g., if (MSE(A1′, B1′)+MSE(A1′, B1″))>(MSE(C1′, B1′)+MSE(C1′, B1″))), C1′ is selected as the best reference. Equation 2
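Equation 2's reference selection might be sketched as follows. The function and argument names are illustrative, and `mse` is any field-distortion metric passed in by the caller.

```python
def best_reference(a1, c1, b1p, b1pp, mse):
    """Equation 2 (as described): pick the neighbor (A1' or C1') whose
    total distortion against the repeated pair (B1', B1'') is smaller."""
    if mse(a1, b1p) + mse(a1, b1pp) < mse(c1, b1p) + mse(c1, b1pp):
        return a1  # A1' is the best reference
    return c1      # otherwise C1' is the best reference
```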
A reference field may refer to the field that may be used as a surrogate for the original field.
For example, for a given set of repeated fields, the comparison may be performed against one or more of the closest (e.g., temporally closest) fields of the same parity. For example, for repeated fields B1′ and B1″ of
The comparison may be performed over a search window of size N against one or more fields of one or more of the parities. For example, the comparison may be performed against the fields (e.g., all fields) that are within a distance of two field positions (N=2) of the detected repeated fields. For example, for repeated fields B1′ and B1″ of
The repeated field that minimizes distortion relative to the best reference field may be determined and this field may be selected for use in the reconstruction of the original progressive video sequence. For example, the selected field which minimizes distortion may be determined using the following equation (Equation 3):
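Equation 3 is likewise not reproduced in this text. As described, the selected field is the member of the repeated pair with the smaller distortion relative to the best reference; the other member is the superfluous field. A hedged sketch:

```python
def select_repeated_field(ref, b1p, b1pp, mse):
    """Keep the repeated field closest to the best reference field.

    Returns (selected, superfluous): the selected field (B1*) is used in
    reconstruction, the superfluous field is discarded. Names and the
    tie-breaking choice are assumptions; Equation 3 itself is not
    reproduced in the text.
    """
    if mse(ref, b1p) <= mse(ref, b1pp):
        return b1p, b1pp
    return b1pp, b1p
```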
A repeated field may be selected using quantization values. A video bitstream which encodes the fields may be analyzed to determine the quantization values. The selection of a repeated field that minimizes distortion may be done by evaluating quantization scale parameters used to encode a (e.g., each) macroblock of the repeated fields. A lower quantization scale parameter may indicate that fields and/or frames have lower quantization error, are less affected by coding artifacts, and/or most closely resemble the original.
Evaluation of the quantization parameters (QP) may be done by performing a statistical analysis. For a (e.g., each) macroblock in a given repeated field (e.g., B1′ or B1″ of
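The QP-based statistical analysis might be sketched as follows. Averaging per-macroblock QPs is one plausible statistic; the text does not fix the statistic, and the input format (flat lists of per-macroblock QP values parsed from the bitstream) is an assumption.

```python
def select_by_qp(qps_first, qps_second):
    """Pick the repeated field with the lower average macroblock QP.

    qps_first / qps_second: per-macroblock quantization scale parameters
    for the two repeated fields (hypothetical input format). Returns 0 if
    the first field is selected, 1 otherwise. A lower average QP suggests
    less quantization error and a closer resemblance to the original.
    """
    avg_first = sum(qps_first) / len(qps_first)
    avg_second = sum(qps_second) / len(qps_second)
    return 0 if avg_first <= avg_second else 1
```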
The repeated field that minimizes distortion may be selected based on prediction type. When repeated fields are part of a mixed or dirty frame, the selection of the repeated field that minimizes distortion (e.g., deselection of the superfluous field) may be done based on the prediction type used on the field. For example, a field and/or frame may be intracoded or non-intracoded (e.g., motion compensated). A non-intracoded field may be cleaner because it is predicted from a reference field and may be a closer representation of the original. Prediction type may be signaled at the picture (e.g., field) level, and such signaling in the encoded bitstream may be analyzed to determine the selected field based on the prediction type of each of the detected repeated fields.
A repeated field that minimizes distortion may be determined based on a macroblock level selection process. For example, a new field may be constructed by piecing together selected macroblocks from two or more different repeated fields. For example, the corresponding macroblocks of the different repeated fields may be compared against each other and the macroblock that is expected to most closely resemble the original progressive video sequence may be selected for use in the construction of the new field. The new field may then be used to reconstruct the original progressive video sequence. The comparison between the corresponding macroblocks of the different repeated fields may be done using MSE, QP, and/or prediction type comparisons, for example, as described herein.
As an example of macroblock comparison, a pair of repeated fields B1′ and B1″ may be detected as illustrated in
Instead of selecting a single best reference field to use for comparison with the macroblocks of the determined repeated fields, a best reference macroblock may be determined for use in the determination of each of the macroblocks B1*(n). For example, Equation 2 may be applied at the macroblock level in order to determine each best reference macroblock (e.g., “best reference(n)”) given the corresponding macroblocks B1′(n) and B1″(n) of the repeated fields. The corresponding macroblocks of a set of surrounding fields of the same parity, opposite parity, and/or both parities may be searched to find the corresponding macroblock which has the least distortion (e.g., least MSE) when compared to corresponding macroblocks B1′(n) and B1″(n) of the repeated fields.
As an example of macroblock comparison, a pair of repeated fields B1′ and B1″ may be detected (e.g., as illustrated in
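The macroblock-level construction of B1* described above might be sketched as follows. Macroblocks are modeled here as flat lists of samples, and `mse` is any distortion metric; both are illustrative assumptions.

```python
def build_field_from_macroblocks(mbs_b1p, mbs_b1pp, mbs_ref, mse):
    """Construct B1* macroblock-by-macroblock.

    For each macroblock position n, keep whichever of B1'(n) or B1''(n)
    is closer (lower distortion) to the reference macroblock at n. The
    reference macroblocks may come from a single best reference field or
    from per-macroblock best references, as the text describes.
    """
    new_field = []
    for mb_p, mb_pp, mb_ref in zip(mbs_b1p, mbs_b1pp, mbs_ref):
        new_field.append(
            mb_p if mse(mb_ref, mb_p) <= mse(mb_ref, mb_pp) else mb_pp)
    return new_field
```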
Post-filtering may be performed after inverse telecine is performed. Once the original progressive frames are reconstructed, a post-filter may be applied to remove artifacts that might have been introduced by interlaced encoding of the content. For example, if it is determined that fields belonging to different frames were coded as picture frames by codecs such as MPEG-2, then there may be significant noise at Nyquist vertically in the reconstructed progressive frames. A vertical low-pass filter with a cut-off frequency (fc) set below Nyquist may be applied.
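The vertical low-pass post-filter might be sketched as follows. The 3-tap [1 2 1]/4 kernel is an illustrative choice that attenuates energy near the vertical Nyquist frequency; the text does not specify the filter design or cut-off.

```python
import numpy as np

def vertical_lowpass(frame, kernel=(0.25, 0.5, 0.25)):
    """Apply a simple vertical low-pass filter to a reconstructed frame.

    Suppresses noise near vertical Nyquist introduced by interlaced
    encoding of mixed frames. Edge rows are handled by replication.
    The kernel is an assumption, not the patent's filter.
    """
    frame = np.asarray(frame, dtype=np.float64)
    padded = np.pad(frame, ((1, 1), (0, 0)), mode='edge')
    return (kernel[0] * padded[:-2]
            + kernel[1] * padded[1:-1]
            + kernel[2] * padded[2:])
```

A frame whose rows alternate between extremes (pure vertical Nyquist noise) is flattened toward the mid value by this filter.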
Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor. Examples of computer-readable media include electronic signals (transmitted over wired or wireless connections) and computer-readable storage media. Examples of computer-readable storage media include, but are not limited to, a read only memory (ROM), a random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.
This application claims the benefit of U.S. Provisional Patent Application No. 61/938,100, filed Feb. 10, 2014, the contents of which are hereby incorporated by reference herein.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US15/15187 | 2/10/2015 | WO | 00 |
Number | Date | Country | |
---|---|---|---|
61938100 | Feb 2014 | US |