Synchronization of video input data streams and video output data streams

Information

  • Patent Grant
  • Patent Number
    8,704,834
  • Date Filed
    Monday, December 3, 2007
  • Date Issued
    Tuesday, April 22, 2014
Abstract
A method for synchronizing an input data stream with an output data stream in a video processor. The method includes receiving an input data stream and receiving an output data stream, wherein the input data stream and the output data stream each comprise a plurality of pixels. The method further includes sequentially storing pixels of the input data stream using an input buffer and sequentially storing pixels of the output data stream using an output buffer. Timing information is determined by examining the input data stream and the output data stream. A synchronization adjustment is applied to the input buffer and the output buffer in accordance with the timing information. Pixels are output from the input buffer and the output buffer to produce a synchronized mixed video output stream.
Description
FIELD OF THE INVENTION

The present invention is generally related to hardware accelerated graphics computer systems.


BACKGROUND OF THE INVENTION

Digital computers are being used today to perform a wide variety of tasks. A primary means for interfacing a computer system with its user is through its graphics display. The graphical depiction of data, through, for example, full motion video, detailed true color images, photorealistic 3D modeling, and the like, has become a preferred mechanism for human interaction with computer systems. For example, the graphical depiction of data is often the most efficient way of presenting complex data to the user. Similarly, high-performance interactive 3D rendering has become a compelling entertainment application for computer systems.


Computer systems are increasingly being used to handle video streams and video information in addition to high performance 3D rendering. Typical video processing applications utilize computer systems that have been specifically configured for handling video information. Such computer systems usually include dedicated video processing hardware for the processing and handling of constituent video frame data comprising a video stream. Such video processing hardware includes, for example, video processor amplifiers (e.g., procamps), overlay engines (e.g., for compositing video or images), specialized DACs (digital to analog converters), and the like.


Problems exist with the implementation of video processing hardware that is configured to handle multiple video streams. The video technology deployed in many consumer electronics-type and professional level devices relies upon one or more video processors to mix multiple video streams and/or format and/or enhance the resulting video signals for display. For example, when performing video mixing or keying, it is important to align an input video stream to an output video stream before performing the mixing. Even when the systems are in synchronization (e.g., “genlock”), the output video stream can be several pixels offset from the input video stream.


The undesirable offset causes problems with the mixing process. One prior art solution to the offset problem is to buffer the entire frame in external memory, and then perform the mixing of video frame data on the next frame. This solution is problematic because it requires a large amount of external memory to buffer an entire video frame. It is also difficult to implement in real time, because large amounts of video data must be accessed in the memory and processed at 30 frames per second or more. Consequently, such solutions are inordinately expensive to implement for high-resolution video (e.g., HDTV, etc.). Thus, what is required is a solution that can implement high-quality video stream synchronization while eliminating undesirable offset effects.


SUMMARY OF THE INVENTION

Embodiments of the present invention provide a solution that can implement high-quality video stream synchronization while eliminating undesirable offset effects. Embodiments of the present invention can implement real-time multiple video stream synchronization of high-resolution video sources.


In one embodiment, the present invention is implemented as a video processor based method for synchronizing an input data stream with an output data stream in a video processor. The method includes receiving an input data stream (e.g., from a first video source) and receiving an output data stream (e.g., from a second video source). The input data stream and the output data stream each comprise a plurality of pixels, for example, from a scanline of a video frame. The method further includes sequentially storing pixels of the input data stream using an input buffer and sequentially storing pixels of the output data stream using an output buffer. Timing information is determined by examining the input data stream and the output data stream. In one embodiment, the timing information comprises the horizontal sync signals of each stream. A synchronization adjustment is applied to the input buffer and the output buffer in accordance with the timing information. Pixels are output from the input buffer and the output buffer to produce a synchronized mixed video output stream.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements.



FIG. 1 shows a computer system in accordance with one embodiment of the present invention.



FIG. 2 shows a diagram of a video processor synchronization system in accordance with one embodiment of the present invention.



FIG. 3 shows a portion of an output buffer and a portion of an input buffer in accordance with one embodiment of the present invention.



FIG. 4 shows a diagram of the internal components of the phase comparator in accordance with one embodiment of the present invention.



FIG. 5 shows an exemplary timing diagram of a phase comparator in accordance with one embodiment of the present invention.



FIG. 6 shows a diagram of an output pipe and an input pipe where the output stream and the input stream are in alignment in accordance with one embodiment of the present invention.



FIG. 7 shows a diagram of an output pipe and an input pipe where the input stream is leading the output stream in accordance with one embodiment of the present invention.



FIG. 8 shows a diagram of an output pipe and an input pipe where the input stream is lagging the output stream in accordance with one embodiment of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of embodiments of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be recognized by one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the embodiments of the present invention.


Notation and Nomenclature:


Some portions of the detailed descriptions, which follow, are presented in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, computer executed step, logic block, process, etc., is here, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.


It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present invention, discussions utilizing terms such as “processing” or “accessing” or “executing” or “storing” or “rendering” or the like, refer to the action and processes of a computer system (e.g., computer system 100 of FIG. 1), or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.


Computer System Platform:



FIG. 1 shows a computer system 100 in accordance with one embodiment of the present invention. Computer system 100 depicts the components of a basic computer system in accordance with embodiments of the present invention providing the execution platform for certain hardware-based and software-based functionality. In general, computer system 100 comprises at least one CPU 101, a system memory 115, and at least one graphics processor unit (GPU) 110. The CPU 101 can be coupled to the system memory 115 via a bridge component/memory controller (not shown) or can be directly coupled to the system memory 115 via a memory controller (not shown) internal to the CPU 101. The GPU 110 is coupled to a display 112. One or more additional GPUs can optionally be coupled to system 100 to further increase its computational power. The GPU(s) 110 is coupled to the CPU 101 and the system memory 115. System 100 can be implemented as, for example, a desktop computer system or server computer system, having a powerful general-purpose CPU 101 coupled to a dedicated graphics rendering GPU 110. In such an embodiment, components can be included that add peripheral buses, specialized graphics memory, IO devices, and the like. Similarly, system 100 can be implemented as a handheld device (e.g., cellphone, etc.) or a set-top video game console device such as, for example, the Xbox®, available from Microsoft Corporation of Redmond, Wash., or the PlayStation3®, available from Sony Computer Entertainment Corporation of Tokyo, Japan.


It should be appreciated that the GPU 110 can be implemented as a discrete component, a discrete graphics card designed to couple to the computer system 100 via a connector (e.g., AGP slot, PCI-Express slot, etc.), a discrete integrated circuit die (e.g., mounted directly on a motherboard), or as an integrated GPU included within the integrated circuit die of a computer system chipset component (not shown). Additionally, a local graphics memory 114 can be included for the GPU 110 for high bandwidth graphics data storage.


EMBODIMENTS OF THE INVENTION

Embodiments of the present invention provide a video processor stream synchronization solution that can implement a high-quality video stream synchronization while eliminating undesirable offset effects, and that can implement real-time multiple video stream synchronization of high-resolution video sources. In one embodiment, the present invention is implemented as a video processor based method for synchronizing an input data stream with an output data stream in a video processor (e.g., via processor 120). The method includes receiving an input data stream (e.g., from a first video source) and receiving an output data stream (e.g., from a second video source). The input data stream and the output data stream each comprise a plurality of pixels, for example, from a scanline of a video frame.


The method further includes sequentially storing pixels of the input data stream using an input buffer and sequentially storing pixels of the output data stream using an output buffer. Timing information is determined by examining the input data stream and the output data stream. In one embodiment, the timing information comprises the horizontal sync signals of each stream, or the like. A synchronization adjustment is applied to the input buffer and the output buffer in accordance with the timing information. Pixels are output from the input buffer and the output buffer to produce a synchronized mixed video output stream. The video processor can be mounted on a graphics card coupled to the computer system (e.g., via a PCI express connector, AGP connector, etc.), can be integrated within the GPU integrated circuit die, can be implemented as its own stand-alone add-in card, or the like. Embodiments of the present invention and their benefits are further described below.
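By way of illustration only, the buffer-and-tap flow described above can be modeled in software. The following sketch is not part of the disclosed hardware; the pipe depths, the sign convention of the offset, and the use of a simple average as the "mix" are all assumed values:

```python
from collections import deque

IN_PIPE_LEN = 8   # assumed input pipe depth: past, present, and future pixels
OUT_PIPE_LEN = 4  # assumed output pipe depth: look-ahead only

def mix_synchronized(input_stream, output_stream, lead_pixels):
    """Shift both streams through short pixel pipelines, tap the input
    pipeline at a stage chosen from the timing offset, and mix the two
    aligned pixels (here: a plain average)."""
    in_pipe = deque([0] * IN_PIPE_LEN, maxlen=IN_PIPE_LEN)
    out_pipe = deque([0] * OUT_PIPE_LEN, maxlen=OUT_PIPE_LEN)
    tap = (OUT_PIPE_LEN - 1) + lead_pixels  # selector into the input pipe
    mixed = []
    for in_px, out_px in zip(input_stream, output_stream):
        in_pipe.appendleft(in_px)    # pixels enter on the left and
        out_pipe.appendleft(out_px)  # shift toward the right
        mixed.append((in_pipe[tap] + out_pipe[-1]) // 2)
    return mixed
```

Here `lead_pixels` plays the role of the synchronization adjustment: it selects which stage of the input pipeline is tapped so that the tapped pixel is time-aligned with the oldest pixel in the output pipeline.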



FIG. 2 shows a diagram of a video processor synchronization system 200 in accordance with one embodiment of the present invention. As depicted in FIG. 2, system 200 includes a first input 231 coupled to a buffer 201 and a second input 232 coupled to a buffer 202. The inputs 231-232 are also coupled to the timing extractors 204-205 as shown. The outputs of the timing extractors 204-205 are coupled to a phase comparator 210, which controls an output multiplexer 212. Pixels from the buffers 201-202 are coupled to the mixer 215 for mixing into the resulting synchronized output video data signal 220.


The system 200 embodiment implements video data stream synchronization between the video data streams that are arriving from two separate video sources (e.g., inputs 231-232). System 200 implements the stream synchronization intelligently while using a minimal amount of memory.


The input data stream received at the input 231 is from a first video source. For example, this video source can be a broadcaster describing a sporting event, video of a remote location, or the like. This video stream needs to be properly synchronized with the data stream received at the input 232, which can be, for example, video of a news anchor, a sports studio, or the like. The input data stream and the output data stream each comprise a plurality of pixels, for example, from a scanline of a video frame. The objective of the system 200 is to mix the two video streams 231-232 together such that they can be properly rendered on a display. The mixing can involve, for example, picture-in-picture, with one video stream being displayed within a small window inside the second, larger video stream, two pictures side-by-side, or the like. The mixing needs to be performed such that the two video streams are in alignment to pixel clock accuracy. In other words, the system 200 needs to be "genlocked," with the timing of both video streams locked to a common reference timing.
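As a purely illustrative sketch of one such mixing mode (picture-in-picture), the following models frames as row-major lists of pixel rows; the frame sizes and window placement are made-up values, and this is not the disclosed mixer:

```python
def picture_in_picture(big, small, top, left):
    """Composite the `small` frame into the `big` frame with its
    upper-left corner at (top, left). Frames are lists of pixel rows."""
    out = [row[:] for row in big]  # leave the background frame intact
    for r, small_row in enumerate(small):
        out[top + r][left:left + len(small_row)] = small_row
    return out
```

Such a composite only produces a correct picture when the two streams feeding it are already pixel-aligned, which is precisely what the synchronization described here provides.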


System 200 uses the buffers 201-202 as pixel pipelines. The buffers 201-202 sequentially store pixels of the input data stream and the output data stream. As depicted in FIG. 2, pixels arrive on the left-hand side of the buffers 201-202 and are shifted along toward the right-hand side of the buffers 201-202. Timing information is determined by the input timing extractor 204 and the output timing extractor 205 examining the input data stream and the output data stream, respectively. The timing information examined by the extractors 204-205 typically comprises the horizontal sync signals, vertical sync signals, or field signals, or some combination thereof.


The phase comparator 210 examines the timing information provided by the extractors 204-205 and produces a synchronization adjustment signal 211. This synchronization adjustment signal 211 is a selector signal that controls the multiplexer 212. The multiplexer 212 then applies the synchronization adjustment to the input buffer 201 by selecting an appropriate stage of the buffer 201 at which to tap the input data stream, such that the input data stream arrives at the input 221 of the mixer 215 in synchronization with the output data stream arriving at the input 222 of the mixer 215. The streams arriving at the inputs 221-222 are in accordance with the timing information determined by the extractors 204-205. The mixer 215 then mixes the data streams from the inputs 221-222 to produce a synchronized mixed video output stream at the output 220. For example, the mixer 215 can perform video keying and compositing on the streams. In this manner, pixels are transmitted from the input buffer 201 and the output buffer 202 to produce the synchronized mixed video output stream 220.



FIG. 3 shows a portion of an output buffer 301 and a portion of an input buffer 302 in accordance with one embodiment of the present invention. As depicted in FIG. 3, the buffers 301-302 are depicted holding pixels of a respective data stream, represented by the numbers within the stages.


As described above, the buffers 301-302 can be thought of as pipelines that sequentially store arriving pixels of the data streams. The input pipeline tracks past, present, and future data, while the output pipeline allows a look-ahead of future input data. As used herein, the term past data refers to data which has already been sent with respect to the current data flow; in this context, it is the data which has been sent with respect to the current output video data. The term present data refers to data which is being sent out. The term future data refers to data which has not yet been sent out with respect to the current data flow. In the pipeline architecture of system 200, and with respect to the output data flow through the pipe, the future input data is known at the present; thus it is called future data.


The term genlock describes a situation in which system timing is in alignment with the input reference to pixel clock accuracy. When a system is genlocked to the reference input, its timing is in phase with the reference timing. While the system is genlocked, the input can be either a few pixels in advance of, or a few pixels lagging behind, the output stream. Having both pipes (e.g., buffers 301-302) allows for synchronization of the data regardless of where the data streams are with respect to each other.


The output pipeline 301 is used to allow a look-ahead of the input data stream. Its pipe length depends on the maximum number of pixels the input stream can lead the output stream while genlocked. The input pipeline 302 is used for tracking the past, present, and future data. Its pipe length should be longer than that of the output pipeline 301 (e.g., approximately twice as long). The data can be extracted at any point on this pipe (e.g., via the multiplexer 212 of FIG. 2). In the FIG. 3 illustration, the timing reference pixel is shown as "0," with future pixels being negative numbers and past pixels being positive numbers.



FIG. 4 shows a diagram of the internal components of the phase comparator 210 in accordance with one embodiment of the present invention. As depicted in FIG. 4, the phase comparator 210 includes an input synchronization pipe and an output synchronization pipe. These pipes are used to compare the sync information provided by the extractors 204-205; the comparator's job is to determine the relative position of the input data with respect to the output data. Based on this sync information, the phase comparator generates a selector index (e.g., a synchronization adjustment signal). This selector index 211 controls the multiplexer 212 to extract the aligned data from the appropriate stage of the input pipe. Thus, the selector index 211 effectively aligns the input stream to the output stream: the comparator detects where the alignment point is and generates a pointer via the selector index 211.
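The comparator's behavior can be sketched as follows. This is an assumption-laden software model, not the disclosed circuit: sync histories are represented as most-recent-first bit lists, with a 1 marking the clock on which a sync pulse was seen:

```python
def selector_index(input_syncs, output_syncs):
    """Return the signed pixel offset between the two streams:
    negative when the input leads the output, positive when it lags.
    Each argument is a most-recent-first list of sync-pulse bits."""
    in_age = input_syncs.index(1)    # clocks since the input sync pulse
    out_age = output_syncs.index(1)  # clocks since the output sync pulse
    return out_age - in_age
```

For example, an input sync seen three clocks ago against an output sync seen two clocks ago yields -1, matching the FIG. 5 case where the input leads by one clock.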



FIG. 5 shows an exemplary timing diagram of signals of the phase comparator 210 in accordance with one embodiment of the present invention. In the FIG. 5 example, it can be seen that the input sync is leading the output sync by one clock. Thus, the selector (e.g., selector index 211) is set to minus one.



FIG. 6, FIG. 7, and FIG. 8 show examples of the output pipe and input pipe where the streams are in alignment, where the input stream is leading the output stream, and where the input stream is lagging the output stream, respectively. As shown in FIG. 6, the reference pixels "0" are in alignment. Thus, the multiplexer selector (e.g., multiplexer 212 of FIG. 2) is set to extract pixel data at index 0. FIG. 7 shows a case where the input data stream leads the output data stream; in this case, the multiplexer selector extracts the pixel data at the -2 position. FIG. 8 shows a case where the input data stream lags the output data stream; in this case, the multiplexer selector extracts the pixel data at the +1 position.
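These three cases reduce to a single index computation. In the following sketch, the pipe length is an assumption, and the convention that the reference pixel "0" sits one output-pipe length into the input pipe follows the FIG. 3 style of illustration:

```python
OUT_PIPE_LEN = 4  # assumed look-ahead depth of the output pipe

def tap_stage(selector):
    """Physical input-pipe stage tapped for a signed selector index:
    0 taps the reference pixel, negative values tap "future" pixels,
    positive values tap "past" pixels."""
    return OUT_PIPE_LEN + selector
```

Under this convention, the aligned case (selector 0) taps stage 4, a two-pixel lead (selector -2) taps stage 2, and a one-pixel lag (selector +1) taps stage 5.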


The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.

Claims
  • 1. In a video processor, a method for synchronizing an input data stream with an output data stream, comprising: receiving an input data stream using said video processor; and receiving an output data stream, wherein the input data stream and the output data stream each comprise a plurality of pixels; sequentially storing pixels of the input data stream using an input buffer; sequentially storing pixels of the output data stream using an output buffer; determining timing information by examining the input data stream and the output data stream; applying a synchronization adjustment to the input buffer and the output buffer in accordance with the timing information; and outputting pixels from the input buffer and the output buffer to produce a synchronized mixed video output stream, wherein a mixer coupled to the input buffer and the output buffer outputs pixels from the input buffer and the output buffer to produce a synchronized mixed video output stream without buffering a full frame.
  • 2. The method of claim 1, wherein the pixels of the input data stream and the pixels of the output data stream each comprise a scan line of a video frame.
  • 3. The method of claim 1, wherein the input buffer comprises an input pixel pipeline and the output buffer comprises an output pixel pipeline.
  • 4. The method of claim 3, wherein the input buffer is longer than the output buffer.
  • 5. The method of claim 4, wherein the timing information comprises horizontal sync and vertical sync information.
  • 6. The method of claim 1, wherein the timing information is determined by using an input timing extractor coupled to receive the input data stream and an output timing extractor coupled to receive the output data stream.
  • 7. The method of claim 1, wherein the synchronization adjustment is determined by a phase comparator coupled to receive the timing information and generate a selector index, wherein the selector index controls the outputting of pixels from the input buffer and the output buffer.
  • 8. A video processor for synchronizing an input data stream with an output data stream, comprising: a first input for receiving an input data stream; and a second input for receiving an output data stream, wherein the input data stream and the output data stream each comprise a plurality of pixels; an input buffer coupled to the first input for sequentially storing pixels of the input data stream; an output buffer coupled to the second input for sequentially storing pixels of the output data stream; a timing extractor coupled to the first input and the second input for determining timing information by examining the input data stream and the output data stream; a phase comparator coupled to the input buffer and the output buffer for applying a synchronization adjustment to the input buffer and the output buffer in accordance with the timing information; and a mixer coupled to the input buffer and the output buffer for outputting pixels from the input buffer and the output buffer to produce a synchronized mixed video output stream in accordance with the synchronization adjustment from the phase comparator, wherein the mixer coupled to the input buffer and the output buffer outputs pixels from the input buffer and the output buffer to produce a synchronized mixed video output stream without buffering a full frame.
  • 9. The video processor of claim 8, wherein the pixels of the input data stream and the pixels of the output data stream each comprise a scan line of a video frame.
  • 10. The video processor of claim 8, wherein the input buffer comprises an input pixel pipeline and the output buffer comprises an output pixel pipeline.
  • 11. The video processor of claim 10, wherein the output pixel pipeline is at least twice as long as the input pixel pipeline.
  • 12. The video processor of claim 8, wherein the timing extractor further comprises: an input timing extractor coupled to receive the input data stream; and an output timing extractor coupled to receive the output data stream, wherein the timing information comprises horizontal sync and vertical sync information.
  • 13. The video processor of claim 8, further comprising: a multiplexer coupled to the input buffer and coupled to the mixer, wherein the synchronization adjustment is determined by a phase comparator coupled to receive the timing information and generate a selector index, wherein the selector index controls the multiplexer to ensure that the outputting of pixels from the input buffer to the mixer is in synchronization with the outputting of pixels from the output buffer to the mixer.
  • 14. A computer system, comprising: a system memory; a central processor unit coupled to the system memory; and a graphics processor unit communicatively coupled to the central processor unit; a video processor coupled to the graphics processor unit for synchronizing a video input data stream with a video output data stream for a video mixing process, wherein the video processor further comprises: a first input for receiving an input data stream; and a second input for receiving an output data stream, wherein the input data stream and the output data stream each comprise a plurality of pixels; an input buffer coupled to the first input for sequentially storing pixels of the input data stream; an output buffer coupled to the second input for sequentially storing pixels of the output data stream; a timing extractor coupled to the first input and the second input for determining timing information by examining the input data stream and the output data stream; a phase comparator coupled to the input buffer and the output buffer for applying a synchronization adjustment to the input buffer and the output buffer in accordance with the timing information; and a mixer coupled to the input buffer and the output buffer for outputting pixels from the input buffer and the output buffer to produce a synchronized mixed video output stream in accordance with the synchronization adjustment from the phase comparator, wherein the mixer coupled to the input buffer and the output buffer outputs pixels from the input buffer and the output buffer to produce a synchronized mixed video output stream without buffering a full frame.
  • 15. The computer system of claim 14, wherein the pixels of the input data stream and the pixels of the output data stream each comprise a scan line of a video frame.
  • 16. The computer system of claim 14, wherein the input buffer comprises an input pixel pipeline and the output buffer comprises an output pixel pipeline.
  • 17. The computer system of claim 16, wherein the input buffer is longer than the output buffer.
  • 18. The computer system of claim 14, wherein the timing extractor further comprises: an input timing extractor coupled to receive the input data stream; and an output timing extractor coupled to receive the output data stream, wherein the timing information comprises horizontal sync and vertical sync information.
  • 19. The computer system of claim 14, further comprising: a multiplexer coupled to the input buffer and coupled to the mixer, wherein the synchronization adjustment is determined by a phase comparator coupled to receive the timing information and generate a selector index, wherein the selector index controls the multiplexer to ensure that the outputting of pixels from the input buffer to the mixer is in synchronization with the outputting of pixels from the output buffer to the mixer.
Related Publications (1)
Number Date Country
20090141032 A1 Jun 2009 US