This application is related to application Ser. No. 10/751,035, entitled “Method and System for Synchronizing Multimedia I/O with CPU Clock”, filed on Dec. 31, 2003, and application Ser. No. 10/754,977, entitled “Method and System for Adaptation of Time Synchronization of a Plurality of Multimedia Streams”, filed on Jan. 9, 2004, which applications are assigned to the assignee of the present application.
Contained herein is material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction of the patent disclosure by any person as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
The present invention generally relates to the field of distributed multimedia synchronization. More particularly, an embodiment of the present invention relates to synchronizing platform clocks on CPUs, chipsets and I/O devices in a distributed wireless platform.
One approach to providing additional computing power has been to utilize distributed computing environments. This approach enables several computers to collaboratively perform computational tasks within a reduced amount of time. Generally, the divide-and-conquer strategy of such parallel computing approaches enables the utilization of available personal computers, rather than the purchase of a high-performance, server-based computer system, for performing the computationally intensive tasks.
Distributed computing has generally, however, been applied to purely computational tasks and not to the synchronized capture and/or processing of signals, especially audio/video signals (and data streams). Signal processing of audio/video signals (and data streams) is generally very sensitive to even very small differences in sampling rates (e.g., clock skew), jitter, and delays. Therefore, precise synchronization is critical for high-quality input/output processing, as well as for real-time performance and, in general, for robustness and reliability. However, precise capture and synchronized inputs are not guaranteed on current platforms.
For example, on the same personal computer (PC) platform, problems can arise when several input/output (I/O) devices are used to capture audio and visual information from video camera(s) and microphone(s). Because the different I/O devices are triggered by separate oscillators, the resulting audio samples and video frames will not be aligned on an absolute time line (thus inducing relative offsets). Moreover, due to differences in the oscillators' frequencies, audio and/or visual data will drift apart across multiple channels/streams over time. Instabilities in the oscillators' frequencies will, furthermore, not be perfectly correlated with one another.
Similarly, in the case of multiple PC platforms, the audio and visual I/O devices will not be synchronized to a common time scale, inducing relative offsets and causing data samples to drift relative to each other. The extent of the relative offset, drift, and jitter on existing platforms depends on many hardware and software parameters and can be very significant, sometimes causing total degradation of the processed signals (from the non-synchronized input streams). Such drift, delay, and jitter can cause significant performance degradation, for instance, for array signal processing algorithms.
For example, in an acoustic beam former with 10 centimeter (cm) spacing between microphones, a timing error of only 0.01 percent can cause an error of 20 degrees in the beam direction. Due to this fact, current implementations of audio array processing algorithms may rely on dedicated circuitry for the synchronization between multiple I/O channels. Unfortunately, implementing such an approach with existing PC platforms would require a major overhaul of the current hardware utilized by the PC platforms. Therefore, there remains a need to overcome one or more of the limitations in the above-described existing art.
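To make the sensitivity figure concrete, the following minimal Python sketch (not part of the original disclosure) reproduces the arithmetic, assuming a speed of sound of 343 m/s and interpreting the 0.01 percent timing error as 0.01 percent of a one-second interval (i.e., 100 microseconds):

```python
import math

c = 343.0            # assumed speed of sound, m/s
d = 0.10             # microphone spacing from the example, m
timing_error = 1e-4  # 0.01% of an assumed 1 s interval -> 100 us

# Near broadside, the inter-microphone delay is d*sin(theta)/c, so a
# delay error of dt shifts the estimated beam direction by asin(c*dt/d).
angle_error = math.degrees(math.asin(c * timing_error / d))
print(f"beam direction error: {angle_error:.1f} degrees")  # ~20.1 degrees
```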
The invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar or identical elements, and in which:
FIGS. 5a and 5b illustrate a piecewise linear model for GTC (Global Time Conversion), in accordance with one embodiment.
In the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
Also, the use of the term general purpose computer (GPC) herein is intended to denote laptops, PDAs, tablet PCs, mobile phones, and similar devices that can be a part of a distributed audio/visual system.
A chipset 107 is also coupled to the bus 105. The chipset 107 includes a memory control hub (MCH) 110. The MCH 110 may include a memory controller 112 that is coupled to a main system memory 115. Main system memory 115 stores data and sequences of instructions that are executed by the CPU 102 or any other device included in the system 100. In one embodiment, main system memory 115 includes dynamic random access memory (DRAM); however, main system memory 115 may be implemented using other memory types. Additional devices may also be coupled to the bus 105, such as multiple CPUs and/or multiple system memories.
The MCH 110 may also include a graphics interface 113 coupled to a graphics accelerator 130. In one embodiment, graphics interface 113 is coupled to graphics accelerator 130 via an accelerated graphics port (AGP) that operates according to an AGP Specification Revision 2.0 interface developed by Intel Corporation of Santa Clara, Calif. In an embodiment of the present invention, a flat panel display may be coupled to the graphics interface 113 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the flat-panel screen. It is envisioned that the display signals produced by the display device may pass through various control devices before being interpreted by and subsequently displayed on the flat-panel display monitor. The display device may be a liquid crystal display (LCD), a flat panel display, a plasma screen, a thin film transistor (TFT) display, and the like.
In addition, a hub interface couples the MCH 110 to an input/output control hub (ICH) 140. The ICH 140 provides an interface to input/output (I/O) devices within the computer system 100. In one embodiment of the present invention, the ICH 140 may be coupled to a Peripheral Component Interconnect (PCI) bus adhering to Specification Revision 2.1 developed by the PCI Special Interest Group of Portland, Oreg. Thus, the ICH 140 includes a bus bridge 146 that provides an interface to a bus 142. In one embodiment of the present invention, the bus 142 is a PCI bus. Moreover, the bus bridge 146 provides a data path between the CPU 102 and peripheral devices.
The bus 142 includes I/O devices 200 (which are further discussed with reference to FIG. 2).
In addition, other peripherals may also be coupled to the ICH 140 in various embodiments of the present invention. For example, such peripherals may include integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), universal serial bus (USB) port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), and the like. Moreover, the computer system 100 is envisioned to receive electrical power from one or more of the following sources for its operation: a power source (such as a battery, fuel cell, and the like), alternating current (AC) outlet (e.g., through a transformer and/or adaptor), automotive power supplies, airplane power supplies, and the like.
Additionally, the computer system 100 can also be coupled to a device for sound recording and playback 230, such as an audio digitization device coupled to a microphone for recording voice input for speech recognition, or for recording sound in general. The I/O devices 200 of the computer system 100 may also include a video digitizing device 220 that can be used to capture video images, alone or in conjunction with the sound recording device 230, to capture audio information associated with the video images. Furthermore, the I/O devices 200 may also include a hard copy device 204 (such as a printer) and a CD-ROM device 202. The I/O devices 200 (202-212) are also coupled to the bus 142.
Accordingly, the computer system 100 as depicted in FIG. 1 may be utilized to capture multimedia scene data, as described below.
In one embodiment of the present invention, an audio capture device such as a microphone may be utilized by the computer system 100 to capture audio information associated with the captured multimedia scene data. Accordingly, as individuals attempt to utilize their personal computers to capture, for example, live audio/video data, it is generally recognized that such data is most effectively captured utilizing one or more dedicated data capture devices.
With reference to FIG. 2, blocks of captured data travel from the I/O devices 200 through main memory to the CPU for processing, and back.
Unfortunately, the time it takes for a block of data to travel between an I/O device, main memory, and the CPU is variable and depends on many factors, such as the CPU load, cache state, the activity of other I/O devices that share the bus, and the behavior of the operating system. Therefore, applications that process the data have no way of knowing precisely when the data enters or leaves the I/O devices. The propagation delay may range from nanoseconds to milliseconds depending on the conditions mentioned above.
In existing applications, multiple video and audio streams are usually captured using a single I/O device, such as a multi-channel analog-to-digital (A/D) or audio/video (A/V) capture card. Special methods are needed to use multiple I/O devices synchronously, even on a single PC platform.
The situation becomes more complex when synchronization of I/O devices on separate platforms is desired. There, in addition to the I/O-CPU latencies, the network connection introduces additional delays that are variable due to the best-effort (and therefore variable-transmission-delay) media access protocols used in existing wired and wireless Ethernet.
Overview of the Synchronization Variations
In one embodiment, each GPC has a local CPU clock (e.g., a real-time counter). As described herein, t(t_i) is the value of the global time t at CPU clock value t_i on the i-th device. As further described herein, t(t_i) = a_i(t_i)·t_i + b_i(t_i), where a_i(t_i) and b_i(t_i) are the timing model parameters for the i-th device. The dependency of the model parameters on the CPU time t_i approximates instabilities in the clock frequency due to variations. Given this linear model, one embodiment described herein provides for generating values of a_i(t_i) and b_i(t_i) to synchronize platform clocks in a network of wireless platforms.
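As an illustration of the model (hypothetical code, not from the disclosure), converting a local timestamp to global time is a single affine evaluation, with a_i and b_i held fixed over the current linear region:

```python
def to_global_time(t_i: float, a_i: float, b_i: float) -> float:
    """Convert a local CPU timestamp t_i on the i-th device to global
    time using the linear model t(t_i) = a_i(t_i) * t_i + b_i(t_i)."""
    return a_i * t_i + b_i
```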
In order to understand the synchronization technique, a brief description is provided of the operations and timing relationships on a GPC, in accordance with one embodiment, as illustrated in FIG. 3.
The incoming packet is received and processed by a hardware device 306 (e.g., a network card), and is eventually put into a Direct Memory Access (DMA) buffer 308. The time spent in the hardware component is modeled in FIG. 3 as d_hw.
A DMA controller transfers the data to a memory block allocated by the system and signals the event to the CPU by an Interrupt ReQuest (IRQ). The stage issuing the IRQ introduces variable delay due to memory bus arbitration between different agents (i.e., the CPU, the graphics adapter, and other DMA agents).
The interrupt controller (APIC) 310 queues the interrupt and schedules a time slot for handling. Because the APIC handles requests from multiple devices, this stage introduces variable delay. Both of the previous stages are modeled by d_isr in FIG. 3.
As described above, a data packet traverses multiple hardware and software stages in order to travel from the network adapter to the CPU and back. The delay introduced by the various stages is highly variable, making the problem of providing a global clock to the GPCs a very complicated one.
Inter-Platform Synchronization
In one embodiment, a series of arrival times of multicast packets sent by the wireless access point (AP) is used to synchronize the CPU clocks of distributed platforms over a wireless network. In one embodiment, a pairwise time synchronization technique is used, with one node chosen as the master (i.e., t(t_0) = t_0). The other nodes in the wireless network (i.e., the clients) synchronize their clocks to the master. In an alternative embodiment, a joint timing synchronization may be used.
In one embodiment, to provide a global clock to distributed platforms, the global clock is to be monotonically increasing, and the speed of model parameter updates is limited to provide smooth adaptation of converted time interval lengths.
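A minimal sketch of the second constraint, clamping how far the slope parameter may move in one update (the max_step bound is an assumed illustrative value, not one from the disclosure):

```python
def limit_slope_update(a_old: float, a_new: float, max_step: float = 1e-6) -> float:
    """Clamp the change in the slope parameter between consecutive model
    updates so converted time intervals stretch or shrink smoothly."""
    return a_old + max(-max_step, min(max_step, a_new - a_old))
```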
The pairwise synchronization mechanism is now described in greater detail. Assuming beacon packet j arrives at multiple platforms at approximately the same global time, corresponding to local clock values t_i^j (with propagation delay d_prop = 0), the set of observations available on the platforms consists of pairs of timestamps (T_0^j, T_i^j). As stated above in the section on system synchronization, T^j = t^j + d_hw + d_isr (omitting the dependency on i) was introduced, which can also be approximated as T^j = t^j + d + n.
In this approximation, d represents the constant delay component and n represents the stochastic component. Given the set of observations (T_0^j, T_i^j), the timing model parameters a_i(t_i) and b_i(t_i) for the client (slave) platforms are to be generated.
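For illustration only, synthetic observation pairs matching the approximation T^j = t^j + d + n can be generated as follows; the clock parameters and delay statistics below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_observations(n_packets=200, a=1.0001, b=0.25, d=2e-4, noise=5e-5):
    """Generate timestamp pairs (T_0^j, T_i^j) for n_packets beacons.
    t holds the true global beacon times; the master clock is the identity
    (t(t_0) = t_0), the client clock satisfies t = a*t_i + b, and each
    observed timestamp adds the constant delay d plus stochastic noise n."""
    t = np.sort(rng.uniform(0.0, 60.0, n_packets))   # global beacon times
    T0 = t + d + rng.exponential(noise, n_packets)   # master timestamps
    Ti = (t - b) / a + d + rng.exponential(noise, n_packets)  # client timestamps
    return T0, Ti
```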
In one embodiment, upon receiving a new observation pair, updated current values of a_i(t_i) and b_i(t_i) are generated. In one embodiment, the updated values of a_i(t_i) and b_i(t_i) are generated using a least trimmed squares (LTS) regression. In one embodiment, LTS is equivalent to performing a least squares fit, trimming the observations that correspond to the largest residuals (a residual being defined as the distance of the observed value from the linear fit), and then computing a least squares regression model for the remaining observations.
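A minimal sketch of the LTS step as described, fitting T_0^j against T_i^j (the 20 percent trim fraction is an assumed parameter):

```python
import numpy as np

def lts_fit(Ti, T0, trim_fraction=0.2):
    """Least trimmed squares: ordinary least squares, drop the observations
    with the largest residuals, then refit on the remaining points.
    Returns estimates (a, b) for the model T0 ~ a*Ti + b."""
    a, b = np.polyfit(Ti, T0, 1)                  # initial least squares fit
    residuals = np.abs(T0 - (a * Ti + b))
    keep = np.argsort(residuals)[: int(len(Ti) * (1.0 - trim_fraction))]
    a, b = np.polyfit(Ti[keep], T0[keep], 1)      # refit on the trimmed set
    return a, b
```

Applied to the synthetic observations above, lts_fit recovers estimates close to the generating parameters while discarding the observations with the largest interrupt-latency outliers.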
In one embodiment, the adaptive model update described above is based on a piecewise linear model for global time conversion (GTC). In the description below, a model adaptation example on a single host is presented. Additionally, in the following, Y^j = T_0^j denotes the sequence of master timestamps for packets j, and the index i is omitted.
In one embodiment, the LTS regression over a current window of time takes as input a set of observations (Y^j, T^j) and produces estimates of the parameters (a, b). When a subsequent pair of observations becomes available, the parameters are updated to track the current time model, thereby producing a piecewise timing model function.
FIG. 5a illustrates one embodiment of the piecewise linear model with three linear regions for the CPU clock t_i: (-inf, t_s], [t_s, t_f], and [t_f, +inf), with respective parameters (a_0, b_0), (a_1, b_1), and (a_2, b_2). As previously stated, in one embodiment, the resulting function of the piecewise linear model is to be monotonically increasing and slowly varying.
An example of the model rule adaptation is illustrated in FIG. 5b.
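The conversion implied by FIG. 5a can be sketched as follows; this illustration assumes the three segments are chosen to join continuously at t_s and t_f, so that the resulting function remains monotonically increasing:

```python
def gtc(t_i, t_s, t_f, params):
    """Piecewise linear global time conversion over the three regions
    (-inf, t_s], [t_s, t_f], and [t_f, +inf), where
    params = [(a0, b0), (a1, b1), (a2, b2)]."""
    if t_i <= t_s:
        a, b = params[0]
    elif t_i <= t_f:
        a, b = params[1]
    else:
        a, b = params[2]
    return a * t_i + b
```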
In one embodiment, the current model parameters are stored in a shared computer memory in the form of two buffers, each containing (a_0, b_0, t_s, a_1, b_1, t_f, a_2, b_2), and an additional bit indicating which of the two buffers is currently valid.
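A sketch of this double-buffering scheme (Python stands in for shared memory here, a lock stands in for the atomic flip of the valid bit, and all names are illustrative):

```python
import threading

class GTCModelStore:
    """Two buffers, each holding (a0, b0, t_s, a1, b1, t_f, a2, b2),
    plus an index playing the role of the 'currently valid' bit."""

    def __init__(self, initial):
        self._buffers = [tuple(initial), tuple(initial)]
        self._valid = 0                  # which buffer readers should use
        self._lock = threading.Lock()

    def publish(self, params):
        """Write the inactive buffer, then flip the valid index."""
        with self._lock:
            inactive = 1 - self._valid
            self._buffers[inactive] = tuple(params)
            self._valid = inactive

    def read(self):
        """Return a consistent snapshot of the current model parameters."""
        return self._buffers[self._valid]
```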
Intra-Platform Synchronization
Having described synchronizing platform clocks on the devices forming the distributed rendering/capturing system (i.e., the wireless network), the synchronization of I/O devices within each individual platform is now described.
Architecture of Distributed Synchronization System
As illustrated in FIG. 8, the distributed synchronization system is organized into layers, with the lower layers synchronizing the I/O devices on each platform and combining them into local distributed I/O (LDIO) devices.
At the third layer 816, the LDIO devices are combined into a single distributed I/O (DIO) device 818 using the principles and techniques described above for providing inter-platform synchronization. In one embodiment, the third layer 816 is also responsible for transferring data between nodes (e.g., over a wireless network).
Whereas many alterations and modifications of the present invention will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description, it is to be understood that any particular embodiment shown and described by way of illustration is in no way intended to be considered limiting. For example, although much of the description herein references the multimedia stream as audio, the techniques described herein would also apply to video streams. Therefore, references to details of various embodiments are not intended to limit the scope of the claims, which in themselves recite only those features regarded as essential to the invention.