The present disclosure relates to correcting for a latency of a speaker.
A speaker can include a processor that converts a digital input to the speaker into an analog current that drives an air-vibrating element or elements in the speaker. The sound produced by the speaker can lag behind the digital input by a particular time known as a latency. Unfortunately, such a latency is not standard from speaker to speaker, or from speaker manufacturer to speaker manufacturer, or from speakers to video displays. Such non-standard latencies can desynchronize the speakers in a multi-speaker system, or can desynchronize an audio signal from a corresponding video signal.
One example includes a method for correcting for a latency of a speaker. A user device can communicate an indication to the speaker to play a sound at a first time. In some examples, the first time can be synchronized to a clock of a computer network. The user device can record a second time at which a microphone on the user device detects the sound. In some examples, the second time can be synchronized to the clock of the computer network. The user device can compare the first and second times to determine a latency of the speaker. The user device can communicate adjustment data corresponding to the determined latency to the speaker. The adjustment data can be used by the speaker to correct for the determined latency.
Another example includes a system, which can include a microphone; a processor; and a memory device storing instructions executable by the processor. The instructions can be executable by the processor to perform steps for correcting for a latency of a speaker. The steps can include communicating an indication to the speaker to play a sound at a first time, the first time being synchronized to a clock of a computer network; recording a second time at which the microphone detects the sound, the second time being synchronized to the clock of the computer network; comparing the first and second times to determine a latency of the speaker; and communicating adjustment data corresponding to the determined latency to the speaker. The adjustment data can be used by the speaker to correct for the determined latency.
Another example includes a method for correcting for a latency of a speaker. A user interface on a smart phone can display instructions to position the smart phone a specified distance from the speaker. The smart phone can communicate an indication to the speaker to play a sound at a first time. The first time can be synchronized to a clock of a computer network. The smart phone can timestamp a second time at which a microphone on the smart phone detects the sound. The second time can be synchronized to the clock of the computer network. The smart phone can subtract a time stamp corresponding to the first time from a time stamp corresponding to the second time, and account for a time-of-flight of sound to propagate along the specified distance, to determine a latency of the speaker. The smart phone can communicate adjustment data corresponding to the determined latency to the speaker. The adjustment data can be used by the speaker to correct for the determined latency.
Corresponding reference characters indicate corresponding parts throughout the several views. Elements in the drawings are not necessarily drawn to scale. The configurations shown in the drawings are merely examples, and should not be construed as limiting the scope of the invention in any manner.
The system 100 for controlling speaker latency can run as an application on a user device 104.
The user device 104 can include a processor 108 and a memory device 110 for storing instructions 112 executable by the processor 108. The processor 108 can execute the instructions 112 to perform steps to correct for a latency of the speaker 102. The steps can include communicating an indication to the speaker 102 to play a sound at a first time 114, the first time 114 being synchronized to a clock of a computer network 116; recording a second time 118 at which the microphone 106 detects the sound, the second time 118 being synchronized to the clock of the computer network 116; comparing the first and second times to determine a latency of the speaker 102; and communicating adjustment data corresponding to the determined latency to the speaker 102, the adjustment data used by the speaker 102 to correct for the determined latency.
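For illustration, the following Python sketch outlines these steps end to end. It is a minimal sketch, not the implementation of system 100: the names send_to_speaker, network_clock_now, and wait_for_sound_detection are hypothetical placeholders for the speaker's control channel, the network-synchronized clock, and the microphone's sound-detection routine, respectively. A real application would replace them with the speaker's actual control protocol and the device's audio capture APIs.

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # approximate speed of sound in air at room temperature


def measure_and_correct_latency(send_to_speaker, network_clock_now,
                                wait_for_sound_detection, distance_m=1.0):
    """Minimal sketch of the steps above; the three callables are placeholders."""
    # Communicate an indication to the speaker to play a sound at a first time,
    # expressed against the shared (network-synchronized) clock.
    first_time = network_clock_now() + 2.0
    send_to_speaker({"command": "play_test_sound", "play_at": first_time})

    # Record a second time at which the microphone detects the sound.
    second_time = wait_for_sound_detection()

    # Compare the two times, removing the acoustic time-of-flight over the
    # specified distance, to estimate the speaker's latency.
    time_of_flight = distance_m / SPEED_OF_SOUND_M_PER_S
    latency = (second_time - first_time) - time_of_flight

    # Communicate adjustment data so the speaker can correct for the latency.
    send_to_speaker({"command": "set_latency_adjustment", "seconds": latency})
    return latency
```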
The user device 104 can include a user interface 120 having a display. In some examples, the user device 104 can display instructions to position the user device 104 a specified distance from the speaker 102. The user device 104 can further account for a time-of-flight of sound to propagate along the specified distance. Time-of-flight refers to the amount of time a sound takes to propagate in air from the speaker 102 to the microphone 106.
These steps and others are discussed in detail below.
At operation 202, the smart phone can display, on a user interface on the smart phone, instructions to position the smart phone a specified distance from the speaker. For instance, the display on the smart phone can present instructions to position the smart phone one meter away from the speaker, and can present a button to be pressed by the user when the smart phone is suitably positioned. Other user interface features can also be used.
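Purely as an illustration of operation 202 (the prompt wording and the console-based confirmation are assumptions, not taken from this disclosure), a minimal stand-in for the positioning screen could look like this:

```python
def prompt_for_positioning(distance_m=1.0):
    """Hypothetical stand-in for the positioning screen: ask the user to place
    the device at the specified distance and wait for confirmation."""
    print(f"Place this device about {distance_m:g} meter(s) from the speaker.")
    input("Press Enter (or tap the on-screen button) when the device is in position...")
    return distance_m
```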
At operation 204, the smart phone can communicate an indication to the speaker to play a sound at a first time. For example, the indication can include instructions to play the sound at a specified first time in the future. In some examples, the first time can be synchronized to a clock of a computer network. In some examples, the first time can be synchronized to an absolute time standard determined by the computer network. For example, the first time can be synchronized to the absolute time standard via a Precision Time Protocol, or by another suitable protocol. In other examples, the first time can be synchronized to a relative time standard communicated via the computer network. For example, the relative time standard can be determined by the smart phone, the speaker, or another element not controlled directly by the computer network.
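As a hypothetical sketch of operation 204 (the JSON message format, the TCP transport, and the use of time.time() as a stand-in for a PTP- or NTP-synchronized clock are all assumptions, not part of this disclosure), the indication could carry the absolute first time at which the speaker should play the sound:

```python
import json
import socket
import time


def send_play_indication(speaker_host, speaker_port, seconds_from_now=2.0):
    """Tell the speaker to play a test sound at a future time on the shared clock.

    time.time() stands in for a clock that both devices have synchronized,
    for example via a Precision Time Protocol.
    """
    first_time = time.time() + seconds_from_now
    indication = {"command": "play_test_sound", "play_at": first_time}
    with socket.create_connection((speaker_host, speaker_port)) as connection:
        connection.sendall(json.dumps(indication).encode("utf-8"))
    return first_time
```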
At operation 206, the smart phone can timestamp a second time at which a microphone on the smart phone detects the sound. In some examples, the second time can be synchronized to the clock of the computer network, optionally in the same manner as the first time. In some examples, the second time can be synchronized to an absolute time standard determined by the computer network, such as via a Precision Time Protocol. In other examples, the second time can be synchronized to a relative time standard communicated via the computer network. In other examples, the first and second times can be synchronized to one another without using the clock of the computer network, such as by using a Network Time Protocol or another suitable technique.
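One simple way to produce the second timestamp, offered only as a sketch (a basic amplitude threshold rather than any particular detection method required by this disclosure), is to scan the recorded microphone samples for the first one that exceeds a threshold and convert its index into a time on the shared clock:

```python
def detection_time(samples, sample_rate, recording_start_time, threshold=0.1):
    """Return the shared-clock time at which the test sound is first detected.

    samples              -- microphone samples normalized to [-1.0, 1.0]
    sample_rate          -- samples per second
    recording_start_time -- shared-clock time of the first sample
    threshold            -- amplitude above which the sound is considered detected
    """
    for index, sample in enumerate(samples):
        if abs(sample) >= threshold:
            return recording_start_time + index / sample_rate
    return None  # the sound was not detected in this recording
```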
At operation 208, the smart phone can subtract a time stamp corresponding to the first time from a time stamp corresponding to the second time, to determine a latency of the speaker. In some examples, the smart phone can additionally account for a time-of-flight of sound to propagate along the specified distance, to determine the latency of the speaker. For example, if the smart phone is positioned one meter from the speaker, the time-of-flight can be expressed as one meter divided by the speed of sound in air, approximately 344 meters per second, giving a time-of-flight of about 2.9 milliseconds.
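Putting the numbers from this paragraph into a formula (the timestamps below are illustrative values, not taken from the disclosure), the latency works out as the measured delay minus the acoustic time-of-flight:

```python
def speaker_latency(first_time, second_time, distance_m, speed_of_sound_m_per_s=344.0):
    """Latency = (second time - first time) - time-of-flight over the specified distance."""
    time_of_flight = distance_m / speed_of_sound_m_per_s  # 1 m / 344 m/s is about 2.9 ms
    return (second_time - first_time) - time_of_flight


# Illustrative timestamps: sound scheduled for t = 10.0000 s on the shared clock,
# detected by the microphone at t = 10.0529 s, phone positioned 1 m from the speaker.
latency_s = speaker_latency(10.0000, 10.0529, 1.0)
print(f"{latency_s * 1000:.1f} ms")  # roughly 50.0 ms of speaker latency
```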
At operation 210, the smart phone can communicate adjustment data corresponding to the determined latency to the speaker. The speaker can use the adjustment data to correct for the determined latency. By adjusting or controlling the latency of the speaker, the latency of the speaker can optionally be set to match the latency of one or more additional audio or visual components.
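For example, in a multi-speaker system the adjustment data could add just enough delay to each speaker to align it with the slowest one. The following is a hypothetical sketch of one way such adjustment data might be computed; the disclosure does not require this particular scheme, and the speaker names and latency values are illustrative only.

```python
def adjustment_delays(measured_latencies):
    """Given a measured latency per speaker (seconds), return the extra delay
    each speaker should add so that all speakers align with the slowest one."""
    slowest = max(measured_latencies.values())
    return {name: slowest - latency for name, latency in measured_latencies.items()}


# Illustrative values only.
for name, delay in adjustment_delays({"left": 0.050, "right": 0.035, "sub": 0.080}).items():
    print(f"{name}: add {delay * 1000:.1f} ms")  # left: 30.0 ms, right: 45.0 ms, sub: 0.0 ms
```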
In some examples, the latency-adjustment system 300 can be configured as software executable on a user device, such as a smart phone, a tablet, a laptop, a computer, or another suitable device. In the specific example described below, the latency-adjustment system 300 can run on a mobile device 302.
The latency-adjustment system 300 can include a processor 304, and a memory device 306 storing instructions executable by the processor 304. The instructions can be executed by the processor 304 to perform a method for correcting for a latency of a speaker.
The mobile device 302 can include a processor 304. The processor 304 may be any of a variety of different types of commercially available processors 304 suitable for mobile devices 302 (for example, an XScale architecture microprocessor, a microprocessor without interlocked pipeline stages (MIPS) architecture processor, or another type of processor 304). A memory 306, such as a random access memory (RAM), a flash memory, or other type of memory, is typically accessible to the processor 304. The memory 306 may be adapted to store an operating system (OS) 308, as well as application programs 310, such as a mobile location enabled application. In some examples, the memory 306 can be used to store the lookup table discussed above. The processor 304 may be coupled, either directly or via appropriate intermediary hardware, to a display 312 and to one or more input/output (I/O) devices 314, such as a keypad, a touch panel sensor, a microphone, and the like. In some examples, the display 312 can be a touch display that presents the user interface to a user. The touch display can also receive suitable input from the user. Similarly, in some examples, the processor 304 may be coupled to a transceiver 316 that interfaces with an antenna 318. The transceiver 316 may be configured to both transmit and receive cellular network signals, wireless data signals, or other types of signals via the antenna 318, depending on the nature of the mobile device 302. Further, in some configurations, a GPS receiver 320 may also make use of the antenna 318 to receive GPS signals. In some examples, the transceiver 316 can transmit signals over a wireless network that correspond to logical volume levels for respective speakers in a multi-speaker system.
The techniques discussed above are applicable to a speaker, but can also be applied to other sound-producing devices, such as a set-top box, an audio receiver, a video receiver, an audio/video receiver, or a headphone jack of a device.
While this invention has been described as having example designs, the present invention can be further modified within the spirit and scope of this disclosure. This application is therefore intended to cover any variations, uses, or adaptations of the invention using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains and which fall within the limits of the appended claims.
This application is a Continuation of U.S. patent application Ser. No. 15/617,673, filed on Jun. 8, 2017, the contents of which are incorporated herein in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
7054544 | Tanaka | May 2006 | B1 |
7555354 | Walsh et al. | Jun 2009 | B2 |
8995240 | Erven et al. | Mar 2015 | B1 |
9219460 | Bush | Dec 2015 | B2 |
9226087 | Ramos | Dec 2015 | B2 |
9329831 | Fullerton et al. | May 2016 | B1 |
9330096 | Fullerton et al. | May 2016 | B1 |
9331799 | Gao et al. | May 2016 | B2 |
9363601 | Ramos | Jun 2016 | B2 |
9367283 | Kuper | Jun 2016 | B2 |
10334358 | Lau | Jun 2019 | B2 |
20140177864 | Kidron | Jun 2014 | A1 |
20150078596 | Sprogis | Mar 2015 | A1 |
20160011850 | Sheen et al. | Jan 2016 | A1 |
20160080887 | Tikkanen et al. | Mar 2016 | A1 |
20160255302 | Greene et al. | Sep 2016 | A1 |
20170346588 | Prins et al. | Nov 2017 | A1 |
20180359561 | Lau | Dec 2018 | A1 |
20190342659 | Lau | Nov 2019 | A1 |
Number | Date | Country |
---|---|---|
WO-2018227103 | Dec 2018 | WO |
Other Publications |
---|
“U.S. Appl. No. 15/617,673, Final Office Action dated Sep. 4, 2018”, 15 pgs. |
“U.S. Appl. No. 15/617,673, Non Final Office Action dated Feb. 28, 2018”, 13 pgs. |
“U.S. Appl. No. 15/617,673, Notice of Allowance dated Mar. 22, 2019”, 9 pgs. |
“U.S. Appl. No. 15/617,673, Pre-Appeal Brief filed Dec. 7, 2018”, 5 pgs. |
“U.S. Appl. No. 15/617,673, Response Filed May 7, 2018 to Non Final Office Action dated Feb. 28, 2018”, 10 pgs. |
“International Application Serial No. PCT/US2018/036680, International Search Report dated Jul. 9, 2018”, 3 pgs. |
“International Application Serial No. PCT/US2018/036680, Written Opinion dated Jul. 9, 2018”, 6 pgs. |
“International Application Serial No. PCT/US2018/036680, International Preliminary Report on Patentability dated Dec. 19, 2019”, 8 pgs. |
Number | Date | Country |
---|---|---|
20190268694 A1 | Aug 2019 | US |

Relation | Number | Date | Country |
---|---|---|---|
Parent | 15617673 | Jun 2017 | US |
Child | 16406601 | | US |