Digital video content is typically generated to target a specific data format. A video data format generally conforms to a specific video coding standard or a proprietary coding algorithm, with a specific bit rate, spatial resolution, frame rate, and so on. Such coding standards include MPEG-2 and Windows Media Video (WMV). Most existing digital video content is coded according to the MPEG-2 data format. WMV is widely accepted as a qualified codec in the streaming realm: it is widely deployed throughout the Internet, has been adopted by the HD-DVD consortium, and is currently being considered as an SMPTE standard. Different video coding standards provide varying compression capabilities and visual quality.
Transcoding refers to the general process of converting one compressed bitstream into another. To match device capabilities and distribution networks, it is often desirable to convert a bitstream from one coding format to another, for example from MPEG-2 to WMV, to H.264, or even to a scalable format. Transcoding may also be utilized to achieve specific functionality such as VCR-like functionality, logo insertion, or enhanced error resilience of the bitstream for transmission over wireless channels.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In view of the above, efficient integrated digital video transcoding is described. In one aspect, an integrated transcoder receives an encoded bitstream. The integrated transcoder transcodes the encoded bitstream by partially decoding the encoded bitstream based on a first set of compression techniques associated with a first media data format. The decoding operations generate an intermediate data stream. The integrated transcoder then encodes the intermediate data stream using a second set of compression techniques associated with a second media data format. The first and second sets of compression techniques are not the same.
In the Figures, the left-most digit of a component reference number identifies the particular Figure in which the component first appears.
For purposes of discussion and illustration, color is used in the figures according to the following conventions. A blue solid arrow represents a pixel-domain signal carrying real or residual picture data. A red solid arrow represents a signal in the DCT domain. An orange dashed arrow represents motion information.
Overview
Systems and methods for efficient digital video transcoding are described below in reference to
In one implementation, where efficient digital video transcoding transcodes a bitstream data format (e.g., MPEG-2, etc.) to WMV, the high-quality profile transcoding operations support advanced coding features of WMV. In one implementation, high-speed profile transcoding operations implement arbitrary-resolution two-stage downscaling (e.g., when transcoding from high definition (HD) to standard definition (SD)). In such two-stage downscaling operations, part of the downscaling ratio is efficiently achieved in the DCT domain, while the remaining downscaling operations are implemented in the spatial domain at a substantially reduced resolution.
Exemplary Conceptual Basis
For purposes of description and exemplary illustration, system 300 is described with respect to transcoding from MPEG-2 to WMV with bit rate reduction, spatial resolution reduction, and their combination. Much existing digital video content is coded according to the MPEG-2 data format. WMV is widely accepted as a qualified codec in the streaming realm: it is widely deployed throughout the Internet, has been adopted by the HD-DVD Consortium, and is currently being considered as an SMPTE standard.
MPEG-2 and WMV provide varying compression and visual quality capabilities because the compression techniques they use are very different. For example, the motion vector (MV) precision and motion compensation (MC) filtering techniques differ. In MPEG-2, motion precision is only up to half-pixel accuracy and the interpolation method is bilinear filtering. In contrast, in WMV, motion precision can go up to quarter-pixel accuracy, and two interpolation methods, bilinear filtering and bicubic filtering, are supported. Moreover, a rounding control parameter is involved in the filtering process. Use of WMV may result in up to a 50% reduction in video bit rate with negligible visual quality loss, as compared to an MPEG-2 bit rate.
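For illustration only, the following Python sketch samples a reference frame at a half-pel and at a quarter-pel position using simple bilinear interpolation. It is a toy example, not the normative MPEG-2 or WMV interpolation filters (WMV's bicubic mode and rounding control are omitted), and the frame contents and function names are hypothetical.

```python
import numpy as np

def bilinear_sample(frame, y, x):
    """Bilinear interpolation of one pixel at fractional position (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    p = frame[y0:y0 + 2, x0:x0 + 2].astype(float)
    return ((1 - dy) * (1 - dx) * p[0, 0] + (1 - dy) * dx * p[0, 1]
            + dy * (1 - dx) * p[1, 0] + dy * dx * p[1, 1])

frame = np.arange(64, dtype=np.uint8).reshape(8, 8)  # hypothetical 8x8 reference

# MPEG-2: motion vectors address half-pel positions at most.
half_pel = bilinear_sample(frame, 2.5, 3.5)

# WMV: motion vectors may also address quarter-pel positions (and may use
# bicubic filtering with rounding control, which this toy sketch does not model).
quarter_pel = bilinear_sample(frame, 2.25, 3.75)
```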
In another example, the transforms used by MPEG-2 and WMV are different. For instance, MPEG-2 uses the standard DCT/IDCT and the transform size is fixed at 8×8. In contrast, WMV uses an integer transform (VC1-T) whose transform kernel matrix elements are all small integers. Additionally, in WMV the transform size can change from block to block among 8×8, 8×4, 4×8, and 4×4. MPEG-2 does not support frame-level optimization, whereas WMV supports various frame-level syntax elements for performance optimization. WMV also supports many other advanced coding features, such as intensity compensation, range reduction, and dynamic resolution change.
In view of the above, to provide bit rate reduction without resolution change, the filtering process bridging the MPEG-2 decoder and the WMV encoder shown in
$e_{i+1} = \hat{r}_{i+1} + MC_{mp2}(\hat{B}_i,\ MV_{mp2}) - MC_{vc1}(\tilde{B}_i,\ MV_{vc1})$   (1)
In this implementation, WMV coding efficiency of
$e_{i+1} = \hat{r}_{i+1} + MC_{mp2}(\hat{B}_i - \tilde{B}_i,\ MV_{mp2})$   (2)
According to Equation 2, the reference CPDT transcoder in
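Because motion compensation is linear in the reference picture, Equation 1 collapses into Equation 2 whenever the two MC operators coincide. The following toy numpy check, which models MC as a simple integer-pel shift (an assumption; the real operators use half- or quarter-pel interpolation), illustrates that algebra; all array names are hypothetical.

```python
import numpy as np

def mc_shift(ref, mv):
    """Toy motion compensation: an integer-pel translation. Like the real MC
    operators, it is linear in the reference picture."""
    dy, dx = mv
    return np.roll(np.roll(ref, dy, axis=0), dx, axis=1)

rng = np.random.default_rng(0)
B_hat = rng.normal(size=(16, 16))                    # decoder-side reference picture
B_tilde = B_hat + 0.1 * rng.normal(size=(16, 16))    # transcoder reference (with requantization error)
r_hat = rng.normal(size=(16, 16))                    # decoded MPEG-2 residue
mv = (1, -2)

# Equation 1: two MC calls on full reference pictures.
e1 = r_hat + mc_shift(B_hat, mv) - mc_shift(B_tilde, mv)
# Equation 2: a single MC call on the (much sparser) requantization error.
e2 = r_hat + mc_shift(B_hat - B_tilde, mv)
assert np.allclose(e1, e2)
```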
An Exemplary System
Although not required, efficient digital video transcoding is described in the general context of computer-program instructions being executed by a computing device such as a personal computer. Program modules generally include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. While the systems and methods are described in the foregoing context, acts and operations described hereinafter may also be implemented in hardware.
In this implementation, system 400 includes a general-purpose computing device 402. Computing device 402 represents any type of computing device such as a personal computer, a laptop, a server, or a handheld or mobile computing device. Computing device 402 includes program modules 404 and program data 406 to transcode an encoded bitstream in a first data format (e.g., MPEG-2) into a bitstream encoded in a different data format (e.g., WMV). Program modules 404 include, for example, efficient digital video transcoding module 408 (“transcoding module 408”) and other program modules 410. Transcoding module 408 transcodes encoded media 412 (e.g., MPEG-2 media) into transcoded media 414 (e.g., WMV media). Other program modules 410 include, for example, an operating system and an application utilizing the video bitstream transcoding capabilities of transcoding module 408. In one implementation, the application is part of the operating system. In one implementation, transcoding module 408 exposes its transcoding capabilities to the application via an Application Programming Interface (API) 416.
Please note that the WMV transform is different from the one used in MPEG-2. In MPEG-2, the standard floating-point DCT/IDCT is used, whereas WMV adopts an integer transform whose energy-packing property is akin to that of the DCT. As a result, the IDCT in the MPEG-2 decoder and the VC1-T in the WMV encoder do not cancel each other out. The integer transform in WMV is also different from an integer implementation of the DCT/IDCT; it is carefully designed so that all of the transform coefficients are small integers. Conventional transcoders are not integrated to transcode a bitstream encoded with respect to a first transform to a second transform that is not the same as the first transform.
Equation 3 provides an exemplary transform matrix for 8×8 VC1-T.
Equation 3, in combination with Equations 4 and 5 described below, indicates how the two different transforms are implemented in a scaling component of transcoding module 408 (
Let $b$ be an 8×8 pixel block and $B$ its standard DCT, so that

$b = C_8' B C_8$

where $C_8$ is the 8×8 DCT transform matrix. Let $\hat{B}$ be the VC1-T of $b$; then $\hat{B}$ is calculated as:

$\hat{B} = T_8\, b\, T_8' \circ N_{88}$

where $\circ$ denotes element-wise multiplication of two matrices, and $N_{88}$ is the normalization matrix for the VC1-T transform, which is calculated as follows:

$N_{88} = c_8 \cdot c_8'$

with

$c_8 = [\,8/288\;\ 8/289\;\ 8/292\;\ 8/298\;\ 8/288\;\ 8/289\;\ 8/292\;\ 8/298\,]$

$\hat{B}$ is therefore directly computed from $B$ using the following formula:

$\hat{B} = T_8 (C_8' B C_8) T_8' \circ N_{88}$   (4)

It can be verified that $T_8 C_8'$ and $C_8 T_8'$ are very close to diagonal matrices. Applying this approximation, Equation 4 becomes an element-wise scaling of the matrix $B$. That is,

$\hat{B} = B \circ S_{88}$   (5)

where

$S_{88} = \mathrm{diag}(T_8 C_8') \cdot \mathrm{diag}(C_8 T_8') \circ N_{88}$
Equation 5 shows that the VC1-T in the WMV encoder and the IDCT in the MPEG-2 decoder can be merged. Consequently, the architecture in
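The following numpy sketch checks that $T_8 C_8'$ is close to diagonal and builds the element-wise scaling matrix $S_{88}$ of Equation 5. It is a sketch under stated assumptions: the $T_8$ values are reproduced from the published VC-1 transform (the text's Equation 3 is not shown here), the $c_8$ vector is taken verbatim from the values above, and the helper names are hypothetical.

```python
import numpy as np

def dct_matrix(n):
    """n x n orthonormal DCT-II matrix (the standard DCT used by MPEG-2)."""
    C = np.array([[np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
                   for i in range(n)] for k in range(n)])
    C[0, :] /= np.sqrt(2.0)
    return C

# 8x8 integer VC1-T basis (assumed to be the matrix of Equation 3).
T8 = np.array([
    [12,  12,  12,  12,  12,  12,  12,  12],
    [16,  15,   9,   4,  -4,  -9, -15, -16],
    [16,   6,  -6, -16, -16,  -6,   6,  16],
    [15,  -4, -16,  -9,   9,  16,   4, -15],
    [12, -12, -12,  12,  12, -12, -12,  12],
    [ 9, -16,   4,  15, -15,  -4,  16,  -9],
    [ 6, -16,  16,  -6,  -6,  16, -16,   6],
    [ 4,  -9,  15, -16,  16, -15,   9,  -4],
], dtype=float)

C8 = dct_matrix(8)
M = T8 @ C8.T                      # should be nearly diagonal (premise of Equation 5)
off_diag = np.sum(M**2) - np.sum(np.diag(M)**2)
print("relative off-diagonal energy:", off_diag / np.sum(M**2))

# Normalization and merged scaling matrix S88 (Equation 5), using the c8
# vector exactly as given in the text above.
c8 = np.array([8/288, 8/289, 8/292, 8/298, 8/288, 8/289, 8/292, 8/298])
N88 = np.outer(c8, c8)
d = np.diag(M)
S88 = np.outer(d, d) * N88

# Merged IDCT/VC1-T step: for an 8x8 block B of MPEG-2 DCT coefficients,
# the VC1-T coefficients are approximated by B * S88 (element-wise).
```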
More particularly, conventional cascaded transcoder architectures (e.g., the architectures of
After transcoding module 408 of
Switch S0 controls when the requantization error of a block should be accumulated into the residue-error buffer. Compared to a standard reconstruction selector, the role of switch S0 is improved by adopting a fast lookup-table-based requantization process and by providing a finer drifting-control mechanism via a triple-threshold algorithm. Observations made with respect to switch S0 remain applicable. For example, in one implementation, the DCT-domain energy difference may be utilized as the indicator.
Switch S1 controls when the most time-consuming module, the MC of the accumulated residue error, is performed. In one implementation, switch S1 is on, and a binary activity mask is created for the reference frame. Each element of the activity mask corresponds to the activeness of an 8×8 block, as determined by
where Energy(block_i) is the energy of the block in the accumulated residue-error buffer. In one implementation, Energy(block_i) is calculated in the spatial domain or in the DCT domain. Energy(block_i) can also be approximated by the sum of absolute values. If the MV points to blocks belonging to an area of low activity, then MC of the accumulated residue error for that specific block is skipped.
Switch S2 performs early detection to determine whether a block's error should be encoded. This is especially useful in transrating applications, where the encoder applies a coarser quantization step size. In this implementation, if the input signal (the sum of the MC of the accumulated residue error and the reconstructed residue from the MPEG-2 decoder) is weaker than a threshold, then switch S2 is turned off so that no error is encoded.
In one implementation, the thresholds for switches S0, S1, and S2 are adjusted such that earlier reference frames are processed with higher quality and at slower speed. This is because the purpose of the switches is to achieve a better trade-off between quality and speed, and because, due to the predictive nature of the coding, errors in earlier reference frames propagate to subsequent frames.
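A minimal Python sketch of the block-level decisions behind switches S1 and S2 follows. It assumes hypothetical threshold values and function names (the text describes a triple-threshold scheme but gives no concrete numbers), and it approximates block energy by the sum of absolute values as suggested above.

```python
import numpy as np

T_ACTIVITY = 64.0   # hypothetical activity threshold for switch S1
T_WEAK = 16.0       # hypothetical weak-signal threshold for switch S2

def block_energy(block):
    # Energy approximated by the sum of absolute values.
    return float(np.sum(np.abs(block)))

def build_activity_mask(residue_error, block=8, thresh=T_ACTIVITY):
    """Binary activity mask for a reference frame: one flag per 8x8 block of
    the accumulated residue-error buffer (switch S1 skips MC of the
    accumulated residue error for MVs pointing into low-activity areas)."""
    h, w = residue_error.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            blk = residue_error[by * block:(by + 1) * block,
                                bx * block:(bx + 1) * block]
            mask[by, bx] = block_energy(blk) > thresh
    return mask

def should_encode_error(mc_error, reconstructed_residue, thresh=T_WEAK):
    """Switch S2: skip encoding when the combined input signal is weak."""
    return block_energy(mc_error + reconstructed_residue) > thresh
```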
If bit rate change is not significant or the input source quality is not very high, the architecture of
Resolution Change
In conventional media transcoding systems there are generally three sources of errors for transcoding with spatial resolution downscaling. These errors are as follows:
Let D denote the down-sampling filtering. Referring to the architecture of
$e_{i+1} = D(\hat{r}_{i+1}) + D(MC_{mp2}(\hat{B}_i,\ MV_{mp2})) - MC_{vc1}(\tilde{b}_i,\ mv_{vc1})$   (6)

Assume that $MC_{vc1} = MC_{mp2}$ and $mv_{mp2} = mv_{vc1} = MV_{mp2}/2$. With the approximation that

$D(MC_{mp2}(\hat{B}_i,\ MV_{mp2})) = MC'_{mp2}(D(\hat{B}_i),\ D(MV_{mp2})) = MC'_{mp2}(\hat{b}_i,\ mv_{mp2})$   (7),

Equation 6 is simplified to the following:

$e_{i+1} = D(\hat{r}_{i+1}) + MC'_{mp2}(\hat{b}_i - \tilde{b}_i,\ mv_{mp2})$   (8)
The first term in Equation 8, $D(\hat{r}_{i+1})$, refers to the downscaling of the decoded MPEG-2 residue signal. This first term can be determined using spatial-domain low-pass filtering and decimation. However, using DCT-domain downscaling to obtain this term results in reduced complexity and better PSNR and visual quality. DCT-domain downscaling results are substantially better than results obtained through spatial-domain bilinear filtering or spatial-domain 7-tap filtering with coefficients (−1, 0, 9, 16, 9, 0, −1)/32. In this implementation, DCT-domain downscaling retains only the top-left 4×4 low-frequency DCT coefficients. That is, applying a standard 4×4 IDCT to the retained DCT coefficients results in a spatially 2:1 downscaled image (i.e., transcoded media 414 of
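A short numpy sketch of this DCT-domain 2:1 downscaling follows: the top-left 4×4 coefficients are retained and a standard 4×4 IDCT produces the downscaled block. The 1/2 renormalization between the 8-point and 4-point orthonormal DCT conventions and the function name are assumptions of this sketch.

```python
import numpy as np

def dct_matrix(n):
    """n x n orthonormal DCT-II matrix (so b = C' B C is the 2-D IDCT)."""
    C = np.array([[np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
                   for i in range(n)] for k in range(n)])
    C[0, :] /= np.sqrt(2.0)
    return C

C4 = dct_matrix(4)

def downscale_2to1_dct(B):
    """2:1 downscale of one 8x8 block directly in the DCT domain: keep the
    top-left 4x4 low-frequency coefficients of B and apply a 4x4 IDCT."""
    low = 0.5 * B[:4, :4]          # 1/2 renormalizes the 8-point DCT scale to 4-point
    return C4.T @ low @ C4         # 4x4 spatial block of the downscaled picture
```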
The second term in Equation 8, $MC'_{mp2}(\hat{b}_i - \tilde{b}_i,\ mv_{mp2})$, implies requantization-error compensation at the downscaled resolution. In this implementation, the MC in the MPEG-2 decoder and the MC in the WMV encoder are merged into a single MC process that operates on accumulated requantization errors at the reduced resolution.
For example, let $\hat{B}_1$, $\hat{B}_2$, $\hat{B}_3$, and $\hat{B}_4$ represent the four 4×4 low-frequency sub-blocks of $B_1$, $B_2$, $B_3$, and $B_4$, respectively; let $C_4$ be the 4×4 standard IDCT transform matrix; let $T_8$ be the integer WMV transform matrix; and further let $T_8 = [T_L, T_R]$, where $T_L$ and $T_R$ are 8×4 matrices. In this scenario, $\hat{B}$ is directly calculated from $\hat{B}_1$, $\hat{B}_2$, $\hat{B}_3$, and $\hat{B}_4$ using the following equation:

$\hat{B} = (T_L C_4')\hat{B}_1(T_L C_4')' + (T_L C_4')\hat{B}_2(T_R C_4')' + (T_R C_4')\hat{B}_3(T_L C_4')' + (T_R C_4')\hat{B}_4(T_R C_4')'$

After some manipulation, $\hat{B}$ is more efficiently calculated as follows:

$\hat{B} = (X + Y)C' + (X - Y)D'$

wherein

$C = (T_L C_4' + T_R C_4')/2$

$D = (T_L C_4' - T_R C_4')/2$

$X = C(\hat{B}_1 + \hat{B}_3) + D(\hat{B}_1 - \hat{B}_3)$

$Y = C(\hat{B}_2 + \hat{B}_4) + D(\hat{B}_2 - \hat{B}_4)$

In one implementation, both $C$ and $D$ of the above equations are pre-computed. The final results are normalized with $N_{88}$.
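The following numpy sketch implements the merged computation above: it pre-computes $C$ and $D$, forms $X$ and $Y$, and returns the VC1-T coefficients of the 2:1-downscaled block normalized by $N_{88}$. The $T_8$ values (the text's Equation 3 is not reproduced) and the assumed raster ordering of the sub-blocks ($\hat{B}_1$, $\hat{B}_2$ on the top row, $\hat{B}_3$, $\hat{B}_4$ on the bottom) are assumptions of this sketch.

```python
import numpy as np

def dct_matrix(n):
    C = np.array([[np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
                   for i in range(n)] for k in range(n)])
    C[0, :] /= np.sqrt(2.0)
    return C

# 8x8 VC1-T basis (assumed values of Equation 3) and its 8x4 halves.
T8 = np.array([
    [12,  12,  12,  12,  12,  12,  12,  12],
    [16,  15,   9,   4,  -4,  -9, -15, -16],
    [16,   6,  -6, -16, -16,  -6,   6,  16],
    [15,  -4, -16,  -9,   9,  16,   4, -15],
    [12, -12, -12,  12,  12, -12, -12,  12],
    [ 9, -16,   4,  15, -15,  -4,  16,  -9],
    [ 6, -16,  16,  -6,  -6,  16, -16,   6],
    [ 4,  -9,  15, -16,  16, -15,   9,  -4],
], dtype=float)
TL, TR = T8[:, :4], T8[:, 4:]

C4 = dct_matrix(4)                       # C4' acts as the 4x4 IDCT (b = C4' B C4)
P, Q = TL @ C4.T, TR @ C4.T              # T_L C_4' and T_R C_4'
C_pre, D_pre = (P + Q) / 2.0, (P - Q) / 2.0   # pre-computed C and D

c8 = np.array([8/288, 8/289, 8/292, 8/298, 8/288, 8/289, 8/292, 8/298])
N88 = np.outer(c8, c8)

def merged_downscaled_vc1t(B1, B2, B3, B4):
    """VC1-T coefficients of the 2:1-downscaled 8x8 block, computed directly
    from the four retained 4x4 low-frequency DCT sub-blocks B1..B4
    (B1, B2 assumed to be the top pair; B3, B4 the bottom pair)."""
    X = C_pre @ (B1 + B3) + D_pre @ (B1 - B3)
    Y = C_pre @ (B2 + B4) + D_pre @ (B2 - B4)
    return ((X + Y) @ C_pre.T + (X - Y) @ D_pre.T) * N88
```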
Compared to a conventional low-drift transcoder with drifting-error compensation at the reduced resolution, the transcoders of
Existing mixed block processing operations typically require a decoding loop to reconstruct a full resolution picture. Therefore, the removal of mixed block processing provides substantial computation savings as compared to conventional systems.
Simplified DCT-domain 2:1 resolution downscaling transcoding architecture 800 is substantially drift-free for P-frames. This is a result of the four-MV coding mode. The only causes of drifting error, as compared with a CPDT architecture with downscaling filtering, are the rounding of MVs from quarter resolution to half resolution (which ensures $mv_{mp2} = mv_{vc1}$) and the non-commutative property of MC and downscaling. Any such remaining errors are negligible due to the low-pass downscaling filtering (e.g., achieved in the DCT domain or in the pixel domain).
Although WMV supports a four-MV coding mode, it is typically only intended for coding P-frames. As a result, system 400 (
Again, referring to the architecture of
$e_{i+1} = D(\hat{r}_{i+1}) + D(MC_{mp2}(\hat{B}_i,\ MV_{mp2})) - MC_{vc1}(\tilde{b}_i,\ mv_{vc1})$   (9);

with the approximation that

$D(MC_{mp2}(\hat{B}_i,\ MV_{mp2})) = MC'_{mp2}(D(\hat{B}_i),\ D(MV_{mp2})) = MC'_{mp2}(\hat{b}_i,\ mv_{mp2})$   (10)

Equation 9 is simplified to

$e_{i+1} = D(\hat{r}_{i+1}) + MC'_{mp2}(\hat{b}_i,\ mv_{mp2}) - MC'_{vc1}(\tilde{b}_i,\ mv_{vc1})$   (11)

In view of Equation 11, the following is obtained:

$e_{i+1} = D(\hat{r}_{i+1}) + MC'_{mp2}(\hat{b}_i,\ mv_{mp2}) - MC'_{vc1}(\tilde{b}_i,\ mv_{vc1})$
$\quad = D(\hat{r}_{i+1}) + [MC'_{mp2}(\hat{b}_i,\ mv_{mp2}) - MC'_{vc1}(\hat{b}_i,\ mv_{vc1})] + MC'_{vc1}(\hat{b}_i,\ mv_{vc1}) - MC'_{vc1}(\tilde{b}_i,\ mv_{vc1})$
$\quad = D(\hat{r}_{i+1}) + [MC'_{mp2}(\hat{b}_i,\ mv_{mp2}) - MC'_{vc1}(\hat{b}_i,\ mv_{vc1})] + MC'_{vc1}(\hat{b}_i - \tilde{b}_i,\ mv_{vc1})$   (12)
The two terms in the square brackets in Equation 12 compensate for the motion errors caused by inconsistent MVs (i.e., $mv_{mp2}$ differs from $mv_{vc1}$) or by the different MC filtering methods of MPEG-2 and WMV. The corresponding modules for this purpose are highlighted and grouped into a light-yellow block in
As to the MC, Intra-to-Inter or Inter-to-Intra conversion can be applied. This is possible because the MPEG-2 decoder has reconstructed both the B-frame and the reference frames. In this implementation, this conversion is done in the mixed block-processing module in
An exemplary architecture according to Equation 12 is shown in
The four frame-level switches ensure different coding paths for different frame types. Specifically, the architecture does not perform residue-error accumulation for B-frames (SIP), does not perform MV error compensation for I- and P-frames (SB), and does not reconstruct reference frames if there are no B-frames to be generated (SIP/B). Please note that the frame-level switch SB can be turned into a block-level switch, since the MV error needs to be compensated only when the corresponding four original MVs are significantly inconsistent.
More particularly, switch SIP is closed only for I-frames or P-frames, switch SP is closed only for P-frames, and switch SB is closed only for B-frames. The resulting architecture is not as complex as the reference cascaded pixel-domain transcoder of
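A minimal sketch of that block-level SB decision follows; the spread metric, its threshold, and the function name are assumptions (the text only states that the four original MVs must be significantly inconsistent).

```python
import numpy as np

MV_SPREAD_THRESHOLD = 2  # hypothetical threshold, in the MV units of the source stream

def needs_mv_error_compensation(mvs):
    """Block-level SB decision for one reduced-resolution MB.

    mvs: the four (mv_x, mv_y) motion vectors of the original MBs that map
    onto this MB after 2:1 downscaling. Compensation is applied only when
    they are significantly inconsistent."""
    mvs = np.asarray(mvs, dtype=float)
    spread = mvs.max(axis=0) - mvs.min(axis=0)
    return bool(spread.max() > MV_SPREAD_THRESHOLD)

# Example: nearly identical MVs -> no MV error compensation is needed.
print(needs_mv_error_compensation([(4, 2), (4, 2), (5, 2), (4, 3)]))  # False
```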
For applications that demand ultra-fast transcoding speed, the architecture of
With respect to chrominance components in MPEG-2 and in WMV, the MV and the coding mode of the chrominance components (UV) are derived from those of the luminance component (Y). If all four MBs at the original resolution that correspond to the MB at the reduced resolution have a consistent coding mode (i.e., all Inter-coded or all Intra-coded), there is no problem. However, if that is not the case, problems result due to the different derivation rules of MPEG-2 and WMV. In MPEG-2, the UV blocks are Inter-coded when the MB is coded in Inter mode. However, in WMV, the UV blocks are Inter-coded only when the MB is coded in Inter mode and there are fewer than three Intra-coded 8×8 Y blocks. This issue exists for both P-frames and B-frames. Transcoding module 408 of
Using error-concealment operations to handle mode conversion for the chrominance components, the error introduced into the current frame is negligible and can be ignored, although it may cause color drifting in subsequent frames. Drifting of the chrominance components is typically caused by incorrect motion. To address this and improve quality, in one implementation, transcoding module 408 uses reconstruction-based compensation for the chrominance components (i.e., the light-yellow module is always applied to the chrominance components).
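The derivation rules described above can be written down directly; the following sketch flags the macroblocks where MPEG-2 and WMV disagree on the chroma coding mode (the function names are hypothetical).

```python
def uv_is_inter_mpeg2(mb_is_inter):
    # MPEG-2 rule: UV blocks are Inter-coded whenever the MB is Inter-coded.
    return mb_is_inter

def uv_is_inter_wmv(mb_is_inter, num_intra_y_blocks):
    # WMV rule: UV blocks are Inter-coded only when the MB is Inter-coded AND
    # fewer than three of its four 8x8 Y blocks are Intra-coded.
    return mb_is_inter and num_intra_y_blocks < 3

def chroma_mode_mismatch(mb_is_inter, num_intra_y_blocks):
    """True when the two derivation rules disagree, i.e., when the transcoder
    must convert the chroma coding mode (handled by error concealment or by
    reconstruction-based compensation, as described above)."""
    return (uv_is_inter_mpeg2(mb_is_inter)
            != uv_is_inter_wmv(mb_is_inter, num_intra_y_blocks))
```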
Rate Control
For high bit rates, there is an approximate relationship between the number of coding bits (B) and the quantization step (QP), which is also used in the MPEG-2 TM-5 rate control method.
where S is the complexity of the frame and X is a model parameter. Assuming the complexity of a frame remains the same across different codecs:
where $QP_{vc1}$ is the QP value used in WMV re-quantization, $QP_{mp2}$ is the QP value of MPEG-2 quantization, and $k$ is the model parameter related to the target bit rate. In one implementation, the following linear model is utilized:
$QP_{vc1}/QP_{mp2} = k \cdot (B_{mp2}/B_{vc1}) + t$   (14)
The values of the parameters $k$ and $t$ for the low, medium, and high bit rate cases, obtained using linear regression, are listed in TABLE 4.
An exemplary detailed rate control algorithm based on Equation 14 is shown in TABLE 5, where the meanings of the various symbols used in the algorithm of TABLE 5 are defined in TABLE 6.
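A minimal sketch of Equation 14 in use follows. The $k$ and $t$ values are placeholders only, since TABLE 4 is not reproduced here, and the clamping to a 1–31 quantizer range and the function name are assumptions of this sketch.

```python
# Placeholder model parameters; the real k and t come from TABLE 4
# (obtained by linear regression) and are not reproduced here.
K_BY_RATE = {"low": 0.9, "medium": 1.0, "high": 1.1}
T_BY_RATE = {"low": 0.2, "medium": 0.1, "high": 0.0}

def wmv_qp_from_mpeg2(qp_mp2, bits_mp2, target_bits_vc1, rate_class="medium"):
    """Equation 14: QP_vc1 / QP_mp2 = k * (B_mp2 / B_vc1) + t."""
    k, t = K_BY_RATE[rate_class], T_BY_RATE[rate_class]
    qp_vc1 = qp_mp2 * (k * (bits_mp2 / float(target_bits_vc1)) + t)
    return int(min(31, max(1, round(qp_vc1))))   # clamp to an assumed 1..31 quantizer range

# Example: re-quantize a frame that used QP 6 and 120 kbits to fit an 80 kbit budget.
print(wmv_qp_from_mpeg2(qp_mp2=6, bits_mp2=120_000, target_bits_vc1=80_000))
```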
Arbitrary Resolution Change
Conversion of content from HD resolution to SD resolution, for example to support legacy SD receivers/players, is useful. Typical resolutions for the HD format are 1920×1080i and 1280×720p, while those for SD are 720×480i and 720×480p for NTSC. The horizontal and vertical downscaling ratios from 1920×1080i to 720×480i are 8/3 and 9/4, respectively. To keep the aspect ratio, the final downscaling ratio is chosen to be 8/3 and the resulting picture size is 720×404. Similarly, for 1280×720p to 720×480p, the downscaling ratio is chosen to be 16/9 and the resulting picture size is 720×404. Black banners are inserted by the decoder/player to make a full 720×480 picture (instead of being padded into the bitstream).
According to digital signal processing theory, a substantially optimal downscaling methodology for a downscaling ratio m/n would be to first up-sample the signal by n-fold (i.e., insert n−1 zeros between adjacent original samples), apply a low-pass filter (e.g., a sinc function with many taps), and then decimate the resulting signal by m-fold. With such operations, any spectral aliasing introduced by the downscaling would be maximally suppressed. However, this process would also be very computationally expensive and difficult to implement in real time because the input signal is high definition. To reduce this computational complexity, a novel two-stage downscaling strategy is implemented.
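For comparison, the classical single-stage m/n resampler described above can be sketched with scipy's polyphase resampler; the two-stage variant below splits an 8/3 ratio into a 2:1 stage (which the actual transcoder performs in the DCT domain, as in the 4×4-coefficient trick earlier) followed by a 4/3 stage at the already-halved resolution. The function names and the use of scipy are assumptions of this sketch.

```python
import numpy as np
from scipy.signal import resample_poly

def downscale_line(samples, m, n):
    """Classical approach for an m/n downscaling ratio: n-fold up-sampling,
    low-pass filtering, and m-fold decimation (done here by scipy's
    polyphase resampler)."""
    return resample_poly(samples, up=n, down=m)

def two_stage_downscale_line(samples):
    """Two-stage 8/3 downscaling: 2:1 first (a spatial stand-in for the
    DCT-domain stage of the actual transcoder), then 4/3 at the reduced
    resolution."""
    stage1 = resample_poly(samples, up=1, down=2)
    return resample_poly(stage1, up=3, down=4)

line = np.cos(np.linspace(0, 8 * np.pi, 1920))    # one hypothetical 1920-sample HD line
print(downscale_line(line, m=8, n=3).shape)       # -> (720,)
print(two_stage_downscale_line(line).shape)       # -> (720,)
```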
Referring to
Referring to
are associated with a new MB (the MV scaling and filtering modules).
Exemplary Procedure
At block 1310, the data decoded according to the first set of compression techniques is encoded with a second set of compression techniques. In one implementation, procedure 1300 is implemented within a non-integrated transcoding architecture, such as that shown and described with respect to
An Exemplary Operating Environment
The methods and systems described herein are operational with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use include, but are not limited to, personal computers, server computers, multiprocessor systems, microprocessor-based systems, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and so on. Compact or subset versions of the framework may also be implemented in clients with limited resources, such as handheld computers or other computing devices. The invention may also be practiced in a networked computing environment where tasks are performed by remote processing devices that are linked through a communications network.
With reference to
A computer 1410 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computer 1410, including both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 1410.
Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example and not limitation, communication media includes wired media such as a wired network or a direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
System memory 1430 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 1431 and random access memory (RAM) 1432. A basic input/output system 1433 (BIOS), containing the basic routines that help to transfer information between elements within computer 1410, such as during start-up, is typically stored in ROM 1431. RAM 1432 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 1418. By way of example and not limitation,
The computer 1410 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
A user may enter commands and information into the computer 1410 through input devices such as a keyboard 1462 and pointing device 1461, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, graphics pen and pad, satellite dish, scanner, etc. These and other input devices are often connected to the processing unit 1418 through a user input interface 1460 that is coupled to the system bus 1421, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). In this implementation, a monitor 1491 or other type of user interface device is also connected to the system bus 1421 via an interface, for example, such as a video interface 1490.
The computer 1410 operates in a networked environment using logical connections to one or more remote computers, such as a remote computer 1480. In one implementation, remote computer 1480 represents computing device 106 of a responder, as shown in
When used in a LAN networking environment, the computer 1410 is connected to the LAN 1471 through a network interface or adapter 1470. When used in a WAN networking environment, the computer 1410 typically includes a modem 1472 or other means for establishing communications over the WAN 1473, such as the Internet. The modem 1472, which may be internal or external, may be connected to the system bus 1421 via the user input interface 1460, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 1410, or portions thereof, may be stored in the remote memory storage device. By way of example and not limitation,
Conclusion
Although the above sections describe efficient digital video transcoding architectures in language specific to structural features and/or methodological operations or actions, the implementations defined in the appended claims are not necessarily limited to the specific features or actions described. Rather, the specific features and operations of the described efficient integrated digital video transcoding architecture are disclosed as exemplary forms of implementing the claimed subject matter.
For example, in one implementation, the described fast and high-quality transcoding systems and methodologies, including transcoding, arbitrarily sized downscaling, and rate reduction, are used for MPEG-2 to MPEG-4 transcoding and MPEG-4 to WMV transcoding. For instance, the simplified closed-loop DCT-domain transcoder in