This disclosure relates generally to wireless communications systems. Embodiments of this disclosure relate to methods and apparatuses for time synchronization across wireless devices.
Event synchronization across multiple wireless platforms can enhance synchronization performance and help eliminate cables in media applications such as televisions, soundbars, and video walls. Wireless setups provide clear benefits, such as reducing equipment connection costs and increasing the flexibility of equipment setup. Also, when a media product is equipped such that wireless-fidelity (Wi-Fi) handles all signals and a wireless power pad handles all power, such a media product can easily be made waterproof for outdoor use.
Embodiments of the present disclosure provide methods and apparatuses for time synchronization across wireless devices.
In one embodiment, a method includes calibrating, at a wireless device, a Wi-Fi chipset-level timer based on a time synchronization function (TSF) frame received from a Wi-Fi access point. The method also includes performing, at the wireless device, a coarse synchronization of a microcontroller-level or CPU-level timer based on the calibrated Wi-Fi chipset-level timer. The method further includes performing, at the wireless device, a fine synchronization of the microcontroller-level or CPU-level timer after the coarse synchronization. The fine synchronization is performed according to a first time scale measured in first time units and the coarse synchronization is performed according to a second time scale measured in second time units that are a multiple of the first time units.
In another embodiment, a device includes a transceiver and a processor operably connected to the transceiver. The processor is configured to: calibrate a Wi-Fi chipset-level timer based on a TSF frame received from a Wi-Fi access point; perform a coarse synchronization of a microcontroller-level or CPU-level timer based on the calibrated Wi-Fi chipset-level timer; and perform a fine synchronization of the microcontroller-level or CPU-level timer after the coarse synchronization. The processor is configured to perform the fine synchronization according to a first time scale measured in first time units and perform the coarse synchronization according to a second time scale measured in second time units that are a multiple of the first time units.
In another embodiment, a non-transitory computer readable medium includes program code that, when executed by a processor of a device, causes the device to: calibrate a Wi-Fi chipset-level timer based on a TSF frame received from a Wi-Fi access point; perform a coarse synchronization of a microcontroller-level or CPU-level timer based on the calibrated Wi-Fi chipset-level timer; and perform a fine synchronization of the microcontroller-level or CPU-level timer after the coarse synchronization. The program code causes the device to perform the fine synchronization according to a first time scale measured in first time units and perform the coarse synchronization according to a second time scale measured in second time units that are a multiple of the first time units.
Other technical features may be readily apparent to one skilled in the art from the following figures, descriptions, and claims.
Before undertaking the DETAILED DESCRIPTION below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The term “couple” and its derivatives refer to any direct or indirect communication between two or more elements, whether or not those elements are in physical contact with one another. The terms “transmit,” “receive,” and “communicate,” as well as derivatives thereof, encompass both direct and indirect communication. The terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation. The term “or” is inclusive, meaning and/or. The phrase “associated with,” as well as derivatives thereof, means to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, have a relationship to or with, or the like. The term “controller” means any device, system or part thereof that controls at least one operation. Such a controller may be implemented in hardware or a combination of hardware and software and/or firmware. The functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. The phrase “at least one of,” when used with a list of items, means that different combinations of one or more of the listed items may be used, and only one item in the list may be needed. For example, “at least one of: A, B, and C” includes any of the following combinations: A, B, C, A and B, A and C, B and C, and A and B and C. As used herein, such terms as “1st” and “2nd,” or “first” and “second” may be used simply to distinguish a corresponding component from another and do not limit the components in other aspects (e.g., importance or order). It is to be understood that if an element (e.g., a first element) is referred to, with or without the term “operatively” or “communicatively”, as “coupled with,” “coupled to,” “connected with,” or “connected to” another element (e.g., a second element), it means that the element may be coupled with the other element directly (e.g., wiredly), wirelessly, or via a third element.
As used herein, the term “module” may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, “logic,” “logic block,” “part,” or “circuitry”. A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to an embodiment, the module may be implemented in a form of an application-specific integrated circuit (ASIC).
Moreover, various functions described below can be implemented or supported by one or more computer programs, each of which is formed from computer readable program code and embodied in a computer readable medium. The terms “application” and “program” refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer readable program code. The phrase “computer readable program code” includes any type of computer code, including source code, object code, and executable code. The phrase “computer readable medium” includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory. A “non-transitory” computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device.
Definitions for other certain words and phrases are provided throughout this patent document. Those of ordinary skill in the art should understand that in many if not most instances, such definitions apply to prior as well as future uses of such defined words and phrases.
For a more complete understanding of the present disclosure and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which like reference numerals represent like parts:
Aspects, features, and advantages of the disclosure are readily apparent from the following detailed description, simply by illustrating a number of particular embodiments and implementations, including the best mode contemplated for carrying out the disclosure. The disclosure is also capable of other and different embodiments, and its several details can be modified in various obvious respects, all without departing from the spirit and scope of the disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive. The disclosure is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
The present disclosure covers several components which can be used in conjunction or in combination with one another or can operate as standalone schemes. Certain embodiments of the disclosure may be derived by utilizing a combination of several of the embodiments listed below. Also, it should be noted that further embodiments may be derived by utilizing a particular subset of operational steps as disclosed in each of these embodiments. This disclosure should be understood to cover all such embodiments.
The wireless network 100 includes access points (APs) 101 and 103. The APs 101 and 103 communicate with at least one network 130, such as the Internet, a proprietary Internet Protocol (IP) network, or other data network. The AP 101 provides wireless access to the network 130 for a plurality of stations (STAs) 111-114 within a coverage area 120 of the AP 101. The APs 101 and 103 may communicate with each other and with the STAs 111-114 using Wi-Fi or other wireless local area network (WLAN) communication techniques. The STAs 111-114 may communicate with each other using peer-to-peer protocols, such as Tunneled Direct Link Setup (TDLS).
Depending on the network type, other well-known terms may be used instead of “access point” or “AP,” such as “router” or “gateway.” For the sake of convenience, the term “AP” is used in this disclosure to refer to network infrastructure components that provide wireless access to remote terminals. In WLAN, given that the AP also contends for the wireless channel, the AP may also be referred to as a STA. Also, depending on the network type, other well-known terms may be used instead of “station” or “STA,” such as “mobile station,” “subscriber station,” “remote terminal,” “user equipment,” “wireless terminal,” or “user device.” For the sake of convenience, the terms “station” and “STA” are used in this disclosure to refer to remote wireless equipment that wirelessly accesses an AP or contends for a wireless channel in a WLAN, whether the STA is a mobile device (such as a mobile telephone or smartphone) or is normally considered a stationary device (such as a desktop computer, AP, media player, stationary sensor, television, etc.).
Dotted lines show the approximate extents of the coverage areas 120 and 125, which are shown as approximately circular for the purposes of illustration and explanation only. It should be clearly understood that the coverage areas associated with APs, such as the coverage areas 120 and 125, may have other shapes, including irregular shapes, depending upon the configuration of the APs and variations in the radio environment associated with natural and man-made obstructions.
As described in more detail below, one or more of the APs may include circuitry and/or programming to enable time synchronization across wireless devices. Although
The AP 101 includes multiple antennas 204a-204n and multiple transceivers 209a-209n. The AP 101 also includes a controller/processor 224, a memory 229, and a backhaul or network interface 234. The transceivers 209a-209n receive, from the antennas 204a-204n, incoming radio frequency (RF) signals, such as signals transmitted by STAs 111-114 in the network 100. The transceivers 209a-209n down-convert the incoming RF signals to generate intermediate frequency (IF) or baseband signals. The IF or baseband signals are processed by receive (RX) processing circuitry in the transceivers 209a-209n and/or controller/processor 224, which generates processed baseband signals by filtering, decoding, and/or digitizing the baseband or IF signals. The controller/processor 224 may further process the baseband signals.
Transmit (TX) processing circuitry in the transceivers 209a-209n and/or controller/processor 224 receives analog or digital data (such as voice data, web data, e-mail, or interactive video game data) from the controller/processor 224. The TX processing circuitry encodes, multiplexes, and/or digitizes the outgoing baseband data to generate processed baseband or IF signals. The transceivers 209a-209n up-convert the baseband or IF signals to RF signals that are transmitted via the antennas 204a-204n.
The controller/processor 224 can include one or more processors or other processing devices that control the overall operation of the AP 101. For example, the controller/processor 224 could control the reception of forward channel signals and the transmission of reverse channel signals by the transceivers 209a-209n in accordance with well-known principles. The controller/processor 224 could support additional functions as well, such as more advanced wireless communication functions. For instance, the controller/processor 224 could support beam forming or directional routing operations in which outgoing signals from multiple antennas 204a-204n are weighted differently to effectively steer the outgoing signals in a desired direction. The controller/processor 224 could also support OFDMA operations in which outgoing signals are assigned to different subsets of subcarriers for different recipients (e.g., different STAs 111-114). Any of a wide variety of other functions could be supported in the AP 101 by the controller/processor 224 including time synchronization across wireless devices. In some embodiments, the controller/processor 224 includes at least one microprocessor or microcontroller. The controller/processor 224 is also capable of executing programs and other processes resident in the memory 229, such as an OS. The controller/processor 224 can move data into or out of the memory 229 as required by an executing process.
The controller/processor 224 is also coupled to the backhaul or network interface 234. The backhaul or network interface 234 allows the AP 101 to communicate with other devices or systems over a backhaul connection or over a network. The interface 234 could support communications over any suitable wired or wireless connection(s). For example, the interface 234 could allow the AP 101 to communicate over a wired or wireless local area network or over a wired or wireless connection to a larger network (such as the Internet). The interface 234 includes any suitable structure supporting communications over a wired or wireless connection, such as an Ethernet or RF transceiver. The memory 229 is coupled to the controller/processor 224. Part of the memory 229 could include a RAM, and another part of the memory 229 could include a Flash memory or other ROM.
As described in more detail below, the AP 101 may include circuitry and/or programming for time synchronization across wireless devices. Although
The STA 111 includes antenna(s) 205, transceiver(s) 210, a microphone 220, a speaker 230, a processor 240, an input/output (I/O) interface (IF) 245, an input 250, a display 255, and a memory 260. The memory 260 includes an operating system (OS) 261 and one or more applications 262.
The transceiver(s) 210 receives, from the antenna(s) 205, an incoming RF signal (e.g., transmitted by an AP 101 of the network 100). The transceiver(s) 210 down-converts the incoming RF signal to generate an IF or baseband signal. The IF or baseband signal is processed by RX processing circuitry in the transceiver(s) 210 and/or processor 240, which generates a processed baseband signal by filtering, decoding, and/or digitizing the baseband or IF signal. The RX processing circuitry sends the processed baseband signal to the speaker 230 (such as for voice data) or to the processor 240 for further processing (such as for web browsing data).
TX processing circuitry in the transceiver(s) 210 and/or processor 240 receives analog or digital voice data from the microphone 220 or other outgoing baseband data (such as web data, e-mail, or interactive video game data) from the processor 240. The TX processing circuitry encodes, multiplexes, and/or digitizes the outgoing baseband data to generate a processed baseband or IF signal. The transceiver(s) 210 up-converts the baseband or IF signal to an RF signal that is transmitted via the antenna(s) 205.
The processor 240 can include one or more processors and execute the basic OS program 261 stored in the memory 260 in order to control the overall operation of the STA 111. In one such operation, the processor 240 controls the reception of forward channel signals and the transmission of reverse channel signals by the transceiver(s) 210 in accordance with well-known principles. The processor 240 can also include processing circuitry configured to enable time synchronization across wireless devices. In some embodiments, the processor 240 includes at least one microprocessor or microcontroller.
The processor 240 is also capable of executing other processes and programs resident in the memory 260, such as operations for enabling time synchronization across wireless devices. The processor 240 can move data into or out of the memory 260 as required by an executing process. In some embodiments, the processor 240 is configured to execute a plurality of applications 262, such as applications to enable time synchronization across wireless devices. The processor 240 can operate the plurality of applications 262 based on the OS program 261 or in response to a signal received from an AP. The processor 240 is also coupled to the I/O interface 245, which provides STA 111 with the ability to connect to other devices such as laptop computers and handheld computers. The I/O interface 245 is the communication path between these accessories and the processor 240.
The processor 240 is also coupled to the input 250, which includes for example, a touchscreen, keypad, etc., and the display 255. The operator of the STA 111 can use the input 250 to enter data into the STA 111. The display 255 may be a liquid crystal display, light emitting diode display, or other display capable of rendering text and/or at least limited graphics, such as from web sites. The memory 260 is coupled to the processor 240. Part of the memory 260 could include a random-access memory (RAM), and another part of the memory 260 could include a Flash memory or other read-only memory (ROM).
Although
As discussed earlier, event synchronization across multiple wireless platforms can enhance synchronization performance and help eliminate cables in media applications such as televisions, soundbars, and video walls. Wireless setups provide clear benefits, such as reducing equipment connection costs and increasing the flexibility of equipment setup. Also, when a media product is equipped such that Wi-Fi handles all signals and a wireless power pad handles all power, such a media product can easily be made waterproof for outdoor use.
Recent Wi-Fi developments have enabled use of a Time Synchronization Function (TSF) clock. However, event and time synchronization requires more than just a synchronized clock. It is important to manage the latencies created by wireless platforms. For example, there are latencies from getting a synchronized clock time, scheduling target events, and triggering target events. Currently, although Wi-Fi modules are synchronized with a dedicated clock, the clock does not have an inherent trigger mechanism to properly handle latencies and synchronize events on main processors and other modules across multiple System-on-Chips (SoCs).
To address these and other issues, this disclosure provides systems and methods for time synchronization across wireless devices. As described in more detail below, the disclosed embodiments can achieve accurate event and time synchronization through Wi-Fi TSF clocks. At a high level, the disclosed embodiments include a two-step approach to enable accurate event synchronization. The steps include (1) a coarse timer synchronization that aligns multiple wireless platforms on a milliseconds scale prior to fine event synchronization, and (2) a fine event synchronization that triggers target events across multiple wireless platforms and reduces misalignments or jittering to the scale of microseconds.
In general, the disclosed embodiments can trigger target events at the required time by manipulating the latencies created by wireless platforms. With this approach, either hardware or software (or a combination of both) could be used to achieve accurate synchronization. The disclosed embodiments enable various applications, such as a synchronized audio experience across different devices. Note that while some of the embodiments discussed below are described in the context of Wi-Fi-enabled audio systems—such as televisions, soundbars, and video walls—these are merely examples. It will be understood that the principles of this disclosure may be implemented in any number of other suitable contexts or systems.
In some embodiments, the audio devices 301-303 are configured for synchronized audio service. Synchronized audio service is a feature that allows distributed audio devices, such as the audio devices 301-303, to transmit synchronized audio signals. In some embodiments, the audio devices 301-303 sample at approximately 40 kHz; the time interval between two digital samples is therefore approximately 25 microseconds. In order to achieve distributed beamforming, it is helpful or necessary to synchronize the audio devices 301-303 with low latency (e.g., less than 60 microseconds).
The AP 310 periodically broadcasts beacon signals that include one or more time synchronization function (TSF) frames. Whenever a beacon signal is detected by one of the audio devices 301-303, that audio device 301-303 can use the TSF frame to calibrate the TSF timer inside the Wi-Fi chipset. Once the chipset time is synchronized, there are several different ways to synchronize the microcontroller or CPU of the audio device 301-303.
As shown in
Whenever a beacon from the AP 310 is received by the Wi-Fi chipset 405, a UDP packet 430 with a TSF time is sent from the upper MAC layer 420 to the OS 415. Once the OS 415 receives the UDP packet 430, the OS 415 takes note of the OS time t_local and the received TSF time t_global. Since every audio device 301-303 is supposed to receive the beacon at the same time, the received TSF time will be treated as global time. The audio device 301-303 can conduct a regression operation to calibrate the local CPU time to global time, such as by the following:
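The specific regression equation is not reproduced here. As a rough illustration only, the calibration could be implemented as an ordinary least-squares fit of a linear clock model t_global ≈ slope · t_local + offset over the (t_local, t_global) pairs collected at recent beacons. The C sketch below makes that assumption; the type and function names (clock_cal_t, clock_cal_fit, local_to_global) are hypothetical and are not part of the disclosure.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical calibration state for a linear clock model:
 *   t_global ~= t_global_ref + slope * (t_local - t_local_ref) + offset   */
typedef struct {
    uint64_t t_local_ref;   /* first local timestamp in the fit window */
    uint64_t t_global_ref;  /* first TSF timestamp in the fit window   */
    double   slope;         /* relative clock drift (close to 1.0)     */
    double   offset;        /* residual offset in microseconds         */
} clock_cal_t;

/* Ordinary least-squares fit over n (t_local, t_global) pairs collected at
 * recent beacons; all times in microseconds. Anchoring to the first pair
 * keeps the arithmetic well conditioned. Returns 0 on success. */
static int clock_cal_fit(clock_cal_t *cal, const uint64_t *t_local,
                         const uint64_t *t_global, size_t n)
{
    if (n < 2) return -1;

    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    for (size_t i = 0; i < n; i++) {
        double x = (double)(t_local[i]  - t_local[0]);
        double y = (double)(t_global[i] - t_global[0]);
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    double denom = (double)n * sxx - sx * sx;
    if (denom == 0.0) return -1;

    cal->t_local_ref  = t_local[0];
    cal->t_global_ref = t_global[0];
    cal->slope  = ((double)n * sxy - sx * sy) / denom;
    cal->offset = (sy - cal->slope * sx) / (double)n;
    return 0;
}

/* Map a local CPU timestamp to the estimated global (TSF) time. */
static uint64_t local_to_global(const clock_cal_t *cal, uint64_t now_local)
{
    double  dx    = (double)(now_local - cal->t_local_ref);
    int64_t delta = (int64_t)(cal->slope * dx + cal->offset);  /* drift + offset correction */
    return cal->t_global_ref + (uint64_t)delta;
}
```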
As shown in
In some implementations, the jitter present in the sleep function could cause an error (for example, an error of approximately 500 microseconds), which can exceed the permissible system error. This jitter-induced timing error can be taken into account during synchronization. For example, a two-step hierarchical synchronization solution can be implemented to address the jitter, such as described in
As shown in
At operation 710, the audio device 301 performs a one-time coarse synchronization to synchronize a microcontroller-level or CPU-level timer based on the TSF timer 520. The coarse synchronization operation 710 is based on the initial reading of the TSF timer 520 (from operation 705) and a sleep function. For example, the microcontroller or CPU (e.g., the MCU 505) is instructed to sleep for T1−mod(tsf_cur1, T1) seconds, where T1 is the time period for how often the coarse synchronization operation 710 is performed. The coarse synchronization operation 710 can achieve a coarse synchronization accuracy of approximately 500 microseconds.
After the coarse synchronization operation 710, the audio device 301 performs a fine synchronization that includes operations 715 and 720 in an iterative algorithm, such as a while loop. In the while loop, the CPU continuously reads the TSF timer 520 until a period T2 is achieved. Here, T2 represents the time period for how often the while loop is performed in the fine synchronization. Let tsf_cur2 be the new current time of the TSF timer 520. In operation 715, the audio device 301 compares the current time to T2. For example, in operation 715, it is determined whether mod(tsf_cur2, T2)>0. If this is true, the CPU reads the TSF timer 520 again to get the new current time tsf_cur2. If false, the T2 boundary has been reached, the synchronization is complete, and the audio device 301 can control audio at operation 725. The fine synchronization further improves the synchronization accuracy to ±10 microseconds. Thus, as described, the process 700 can be performed to achieve a 10 microsecond latency. Also, the process 700 can reduce the CPU overhead for synchronization at low latency.
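As one hedged illustration of the process 700, the sketch below combines the sleep-based coarse step with the busy-wait fine step. The helper functions read_tsf_timer_us() and sleep_us() are assumed platform hooks and are not defined by the disclosure; T1 and T2 are left as parameters.

```c
#include <stdint.h>

/* Assumed platform hooks (not part of the disclosure). */
extern uint64_t read_tsf_timer_us(void);   /* read the Wi-Fi chipset TSF timer, in microseconds */
extern void     sleep_us(uint64_t us);     /* OS sleep; wake-up jitter on the order of 500 us    */

/* Two-step synchronization: a one-time coarse alignment to a T1 boundary via
 * sleep(), then a fine alignment to a T2 boundary via a short busy-wait. */
static void sync_to_tsf_boundary(uint64_t T1_us, uint64_t T2_us)
{
    /* Coarse step: sleep until roughly the next T1 boundary (~500 us accuracy). */
    uint64_t tsf_cur1 = read_tsf_timer_us();
    sleep_us(T1_us - (tsf_cur1 % T1_us));

    /* Fine step: spin on the TSF timer until the next T2 boundary is crossed,
     * reducing the alignment error to roughly +/-10 us. */
    uint64_t tsf_cur2 = read_tsf_timer_us();
    uint64_t target   = tsf_cur2 + (T2_us - (tsf_cur2 % T2_us));
    while (read_tsf_timer_us() < target) {
        /* busy-wait; kept short because the coarse step already ran */
    }
    /* Timers or events started here are aligned across devices that ran the
     * same procedure against the same TSF clock. */
}
```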
The software architecture used to perform the process 800 can be independent of operating systems and SoCs. The implementation of the process 800 can be effective as long as the conditions shown below in List 1 are met.
In List 1, there are two time scales—Time Scale 1 and Time Scale 2—with different time units, TimeUnit_1 and TimeUnit_2. The different time units represent different scales of elapsed time. In the examples described below, Time Scale 1 and TimeUnit_1 represent microseconds, while Time Scale 2 and TimeUnit_2 represent milliseconds. Together, Time Scale 1 and Time Scale 2 enable event synchronization on the scale of microseconds. However, other time scales are possible and within the scope of this disclosure. Also, the concept could be expanded to multiple levels of synchronization. For example, time units could be cascaded and expanded to achieve synchronization at different levels, such as 1 μs, 1 ms, 1 s, and so on, or 5 μs, 5 ms, 5 s, and so on. In general, the following can be observed:
TimeUnit_1=the time unit of the TSF Clock
TimeUnit_2=TimeUnit_1×UnitCnt_2
. . .
TimeUnit_N=TimeUnit_(N−1)×UnitCnt_N
Here, UnitCnt_2, ..., UnitCnt_N represent integer multipliers (e.g., 1000) that scale between the time units (e.g., microseconds to milliseconds or milliseconds to seconds). Each time unit should be chosen to be greater than the jitter caused by the timer and trigger mechanism on that time scale.
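For example (values assumed for illustration, not taken from the disclosure), a cascade built from TimeUnit_1 = 1 μs with UnitCnt = 1000 at each level yields units of 1 μs, 1 ms, and 1 s, and each derived unit can be checked against the jitter expected on its level:

```c
#include <assert.h>
#include <stdint.h>

#define NUM_LEVELS 3

/* Assumed example values: TimeUnit_1 is the TSF clock unit (1 us), and each
 * level is scaled by UnitCnt = 1000. The jitter figures are placeholders. */
static const uint64_t unit_cnt[NUM_LEVELS]  = { 1, 1000, 1000 };
static const uint64_t jitter_us[NUM_LEVELS] = { 0,  500, 500000 };

static void build_time_units(uint64_t time_unit_us[NUM_LEVELS])
{
    time_unit_us[0] = 1;                                      /* TimeUnit_1 = 1 us */
    for (int n = 1; n < NUM_LEVELS; n++) {
        time_unit_us[n] = time_unit_us[n - 1] * unit_cnt[n];  /* TimeUnit_N = TimeUnit_(N-1) x UnitCnt_N */
        assert(time_unit_us[n] > jitter_us[n]);               /* unit must exceed that level's jitter */
    }
}
```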
The process 800 can be summarized as comprising two main steps. The first step is a coarse synchronization that synchronizes local timers with the TSF Clock on Time Scale 2 (e.g., a milliseconds scale) using atomic operations 1 and 2 and part of atomic operation 3. The second step is a fine synchronization that triggers target events, with a lumped compensation for the previously accumulated latencies, on Time Scale 1 (e.g., a microseconds scale) in the remaining part of atomic operation 3. For accurate event synchronization, it is important that the atomic operations hold the CPU without interruption. Outside of the atomic operations, while timers are running in the background, the CPU is free to work on other tasks.
As shown in
At operation 815, the audio device 301 performs atomic operation 1 to achieve timer synchronization. During atomic operation 1, the audio device 301 captures TSF Clock values and calculates a delay to configure timers for Time Scale 2 alignment across multiple wireless platforms. The execution of atomic operation 1 is set with an interval greater than the Wi-Fi beacon interval for TSF Clock synchronization.
At step 910, the audio device 301 reads the current TSF clock in TimeUnit_1 and keeps the previous reading as TSF_TIME_PREV in a loop until TSF_TIME_CUR is equal to TSF_TIME_PREV at a scale of TimeUnit_2. It is noted that in some implementations, step 910 is optional.
At step 915, the audio device 301 reads the current TSF clock in TimeUnit_1 and keeps the previous reading as TSF_TIME_PREV in a loop until TSF_TIME_CUR is not equal to TSF_TIME_PREV at a scale of TimeUnit_2. At this point, MOD(TSF_TIME_CUR, 1 TimeUnit_2) is close to 0, and there is enough time to complete step 920.
At step 920, the audio device 301 sets a trigger from a one-shot timer with a timeout equal to (1 TimeUnit_2−MOD(TSF_TIME_CUR, 1 TimeUnit_2)+Adjustment_x).
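A minimal C sketch of atomic operation 1 follows, assuming TimeUnit_2 = 1 ms. The hooks read_tsf_us() and one_shot_timer_start(), the Adjustment_x value, and the function names are assumptions made for illustration and are not defined by the disclosure.

```c
#include <stdint.h>

/* Assumed platform hooks and constants (not defined by the disclosure). */
extern uint64_t read_tsf_us(void);                          /* TSF clock in TimeUnit_1 (us)  */
extern void     one_shot_timer_start(uint64_t timeout_us,
                                     void (*cb)(void));     /* arm a one-shot timer          */
extern void     atomic_operation_2(void);                   /* one-shot callback (see below) */

#define TIME_UNIT_2_US  1000ULL   /* TimeUnit_2 = 1 ms (example value)      */
#define ADJUSTMENT_X_US 0ULL      /* platform-specific latency compensation */

/* Atomic operation 1 (cf. steps 910-920; the optional step 910 is omitted):
 * spin until the TSF clock crosses a TimeUnit_2 boundary, then arm a one-shot
 * timer to expire at the next boundary. Intended to run without preemption. */
void atomic_operation_1(void)
{
    uint64_t tsf_cur  = read_tsf_us();
    uint64_t tsf_prev = tsf_cur;

    /* Step 915: loop until the reading changes at TimeUnit_2 scale, i.e. we
     * are just past a boundary and MOD(TSF_TIME_CUR, 1 TimeUnit_2) is ~0. */
    while (tsf_cur / TIME_UNIT_2_US == tsf_prev / TIME_UNIT_2_US) {
        tsf_prev = tsf_cur;
        tsf_cur  = read_tsf_us();
    }

    /* Step 920: timeout = 1 TimeUnit_2 - MOD(TSF_TIME_CUR, 1 TimeUnit_2) + Adjustment_x. */
    one_shot_timer_start(TIME_UNIT_2_US - (tsf_cur % TIME_UNIT_2_US) + ADJUSTMENT_X_US,
                         atomic_operation_2);
}
```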
After completion of atomic operation 1, at operation 820, the CPU can work on other tasks until it is time for operation 825, which includes atomic operation 2.
At operation 825, the audio device 301 performs atomic operation 2 to achieve event synchronization. Atomic operation 2 is the second operation in the coarse synchronization to synchronize local timers. Atomic operation 2 sets the period of a periodic timer to generate callbacks on the milliseconds scale at the timeout event of the one-shot timer. The execution of atomic operation 2 follows the execution of atomic operation 1.
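Continuing the same set of assumptions, atomic operation 2 reduces to a single call in the one-shot timer's callback, sketched below; periodic_timer_start() and Adjustment_y are again hypothetical placeholders.

```c
#include <stdint.h>

/* Assumed platform hook (not defined by the disclosure): start or restart a
 * periodic timer that invokes `cb` every `period_us` microseconds. */
extern void periodic_timer_start(uint64_t period_us, void (*cb)(void));
extern void atomic_operation_3(void);        /* periodic callback, sketched below */

#define TIME_UNIT_2_US  1000ULL   /* TimeUnit_2 = 1 ms (example value)      */
#define ADJUSTMENT_Y_US 0ULL      /* platform-specific latency compensation */

/* Atomic operation 2: runs in the one-shot callback armed by atomic
 * operation 1, i.e. at (or very near) a TimeUnit_2 boundary, and starts the
 * periodic timer whose timeouts drive atomic operation 3. */
void atomic_operation_2(void)
{
    periodic_timer_start(TIME_UNIT_2_US + ADJUSTMENT_Y_US, atomic_operation_3);
}
```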
After completion of atomic operation 2, at operation 830, the CPU can work on other tasks until it is time for operation 835, which includes atomic operation 3.
At operation 835, the audio device 301 performs atomic operation 3 to achieve event synchronization. The first part of atomic operation 3 is the last operation in the coarse synchronization to synchronize local timers. Atomic operation 3 is triggered by the timeout of the periodic timer configured in atomic operation 2. Once the callback function obtains the CPU, the second part of this operation starts and decouples event synchronization from the timeout event. Atomic operation 3 captures TSF Clock values immediately and generates synchronized events at the qualified time. The execution of atomic operation 3 is set at every timeout of the periodic timer.
At step 1110, the audio device 301 checks if the current time is the time to generate a synchronization pulse. If it is not the time, the audio device 301 returns to operation 810. If it is time to generate a synchronization pulse, then the process moves to step 1115.
At step 1115, the audio device 301 reads the current TSF clock in TimeUnit_2, TSF_TIME_CUR, in a loop until TSF_TIME_CUR is greater than TSF_TIME_ENTER. Then, at step 1120, the audio device 301 generates the desired synchronization pulse.
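Under the same assumptions, atomic operation 3 could be sketched as the periodic-timer callback below; pulse_due() and generate_sync_pulse() are hypothetical stand-ins for whatever application logic schedules and emits the synchronized event.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed platform/application hooks (not defined by the disclosure). */
extern uint64_t read_tsf_us(void);                   /* TSF clock in microseconds             */
extern bool     pulse_due(uint64_t tsf_time_unit2);  /* is a pulse scheduled for this period? */
extern void     generate_sync_pulse(void);           /* e.g., toggle a GPIO or release audio  */

#define TIME_UNIT_2_US 1000ULL   /* TimeUnit_2 = 1 ms (example value) */

/* Atomic operation 3: called at every timeout of the periodic timer started
 * by atomic operation 2. It latches the TSF time in TimeUnit_2 on entry
 * (TSF_TIME_ENTER) and, when a pulse is due, busy-waits until the TimeUnit_2
 * reading increments so the pulse lands on the boundary. */
void atomic_operation_3(void)
{
    uint64_t tsf_time_enter = read_tsf_us() / TIME_UNIT_2_US;

    if (!pulse_due(tsf_time_enter))
        return;                                       /* step 1110: not yet time for a pulse */

    /* Step 1115: loop until TSF_TIME_CUR (in TimeUnit_2) exceeds TSF_TIME_ENTER. */
    while (read_tsf_us() / TIME_UNIT_2_US <= tsf_time_enter) {
        /* short busy-wait; the periodic timer placed us just before the boundary */
    }

    generate_sync_pulse();                            /* step 1120 */
}
```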
As shown in
In the process 1200, the atomic operations 1-3 are the same as, or similar to, the atomic operations 1-3 shown in
Atomic Operation 1: CPU Synchronized with TimeUnit_2
Schedule: The execution of this operation is expected with intervals close to TimeUnit_3.
1. Read the current TSF clock in TimeUnit_1 as TSF_TIME_CUR.
2. Read the current TSF clock in TimeUnit_1 and keep the previous reading as TSF_TIME_PREV in a loop until TSF_TIME_CUR is equal to TSF_TIME_PREV at a scale of TimeUnit_2 (Note: This step is optional).
3. Read the current TSF clock in TimeUnit_1 and keep the previous reading as TSF_TIME_PREV in a loop until TSF_TIME_CUR is not equal to TSF_TIME_PREV at a scale of TimeUnit_2. At this point, MOD(TSF_TIME_CUR, 1 TimeUnit_2) is close to 0, and there is enough time to complete the following step.
4. Set a trigger from the one-shot timer with timeout of (1 TimeUnit_2−MOD(TSF_TIME_CUR, 1 TimeUnit_2)+Adjustment_x).
Atomic Operation 2: CPU Synchronized with TimeUnit_1
Schedule: The execution of this operation is expected at the timeout of the one-shot timer started by atomic operation 1.
1. Start or restart a trigger from the periodic timer with a timeout of (TimeUnit_2+Adjustment_y).
Atomic Operation 3: CPU Synchronized with TimeUnit_1
Schedule: The execution of this operation is expected at the timeout of the periodic timer started or restarted by atomic operation 2.
1. Read the current TSF clock in TimeUnit_2 as TSF_TIME_ENTER.
2. Check if this is the time to generate a synchronization pulse; if not, return.
3. Read the current TSF clock in TimeUnit_2, TSF_TIME_CUR, in a loop until TSF_TIME_CUR is greater than TSF_TIME_ENTER.
4. Generate the desired synchronization pulse.
From
It is noted that the processes disclosed above (such as the process 800 and the process 1200) are described as being implemented in software. However, the disclosed approaches can also be generalized into a hardware implementation. A hardware accelerator can be built to conserve CPU time. The use of the accelerator can be effective as long as the conditions shown below in List 2 are met. The implementation is similar to the software solution.
Although
As illustrated in
At step 1303, the wireless device performs a coarse synchronization of a microcontroller-level or CPU-level timer based on the calibrated Wi-Fi chipset-level timer. This could include, for example, the audio device 301 performing the coarse synchronization operation 710 of
At step 1305, the wireless device performs a fine synchronization of the microcontroller-level or CPU-level timer after the coarse synchronization. Here, the fine synchronization is performed according to a first time scale measured in first time units and the coarse synchronization is performed according to a second time scale measured in second time units that are a multiple of the first time units. This could include, for example, the audio device 301 performing the fine synchronization operation 715 of
At step 1307, the wireless device (and optionally one or more additional audio devices) provides stereo audio playback based on the microcontroller-level or CPU-level timer synchronization. This could include, for example, the audio device 301 (and optionally one or more of the audio devices 302 and 303) providing stereo audio playback after synchronization.
Although
Although the present disclosure has been described with an exemplary embodiment, various changes and modifications may be suggested to one skilled in the art. It is intended that the present disclosure encompass such changes and modifications as fall within the scope of the appended claims. None of the description in this application should be read as implying that any particular element, step, or function is an essential element that must be included in the claims scope. The scope of patented subject matter is defined by the claims.
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 63/522,990, filed on Jun. 23, 2023, which is hereby incorporated by reference in its entirety.