Method and system for improved acoustic transmission of data

Information

  • Patent Grant
  • Patent Number
    11,870,501
  • Date Filed
    Thursday, December 20, 2018
  • Date Issued
    Tuesday, January 9, 2024
Abstract
The present invention relates to a method for communicating data acoustically. The method includes segmenting the data into a sequence of symbols; encoding each symbol of the sequence into a plurality of tones; and acoustically generating the plurality of tones simultaneously for each symbol in sequence. Each of the plurality of tones for each symbol in the sequence may be at a different frequency.
Description

This application is the U.S. national phase of International Application No. PCT/GB2018/053733 filed 20 Dec. 2018, which designated the U.S. and claims priority to GB Patent Application No. 1721457.8 filed 20 Dec. 2017, the entire contents of each of which are hereby incorporated by reference.


FIELD OF INVENTION

The present invention is in the field of data communication. More particularly, but not exclusively, the present invention relates to a method and system for acoustic transmission of data.


BACKGROUND

There are a number of solutions for communicating data wirelessly over a short range to and from devices using radio frequencies. The most common of these is Wi-Fi. Other examples include Bluetooth and Zigbee.


An alternative solution for short-range data communication uses a “transmitting” speaker and a “receiving” microphone to send encoded data acoustically over the air.


Such an alternative may provide various advantages over radio frequency-based systems. For example, speakers and microphones are cheaper and more prevalent within consumer electronic devices, and acoustic transmission is limited to “hearing” distance.


There exist several over-the-air acoustic communications systems. A popular scheme amongst them is Frequency Shift Keying, in which digital information is transmitted by modulating the frequency of a carrier signal to convey one of two or more discrete levels (M-ary frequency shift keying, where M is the number of distinct levels).


One such acoustic communication system is described in US Patent Publication No. US2012/084131A1, DATA COMMUNICATION SYSTEM. This system, invented by Patrick Bergel and Anthony Steed, involves the transmission of data using an audio signal transmitted from a speaker and received by a microphone where the data, such as a shortcode, is encoded into a sequence of tones within the audio signal.


Acoustic communication systems using Frequency Shift Keying, such as the above system, can achieve a good level of robustness but are limited in throughput. The data rate is proportional to the number of bits conveyed per tone, which grows only logarithmically with the number of tones available (the alphabet size), divided by the duration of each tone. The approach is robust and simple, but spectrally inefficient.


Radio frequency data communication systems may use phase-shift keying (PSK) and amplitude-shift keying (ASK) to achieve high throughput. However, neither scheme is viable for over-the-air acoustic data transmission in most situations, as reflections and amplitude changes in real-world acoustic environments render them extremely susceptible to noise.


There is a desire for a system which provides improved throughput in acoustic data communication systems.


It is an object of the present invention to provide a method and system for improved acoustic data transmission which overcomes the disadvantages of the prior art, or at least provides a useful alternative.


SUMMARY OF INVENTION

According to a first aspect of the invention there is provided a method for communicating data acoustically, including:


a) Segmenting the data into a sequence of symbols;


b) Encoding each symbol of the sequence into a plurality of tones; and


c) Acoustically generating the plurality of tones simultaneously for each symbol in sequence;


wherein each of the plurality of tones for each symbol in the sequence is at a different frequency.


Other aspects of the invention are described within the claims.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:



FIG. 1: shows a block diagram illustrating a data communication system in accordance with an embodiment of the invention; and



FIG. 2: shows a flow diagram illustrating a data communication method in accordance with an embodiment of the invention.





DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

The present invention provides an improved method and system for acoustically communicating data.


The inventors have discovered that throughput can be increased significantly in a tone-based acoustic communication system by segmenting the data into symbols and transmitting K tones simultaneously for each symbol, where the tones are selected from an alphabet of size M. In this way, a single note comprising multiple tones can encode symbols of log2(M choose K) bits, compared to a single-tone note, which encodes a symbol into only log2(M) bits. The inventors have discovered that this method of increasing data density is significantly less susceptible to noise in typical acoustic environments than PSK and ASK at a given number of bits per symbol.
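As a minimal sketch of this gain (Python standard library only; the values of M and K are illustrative, not prescribed by the patent):

    import math

    def bits_per_symbol(m, k):
        """Usable bits per symbol: single-tone M-ary FSK vs. K-of-M multi-tone FSK."""
        single = math.floor(math.log2(m))               # log2(M), one tone at a time
        multi = math.floor(math.log2(math.comb(m, k)))  # log2(M choose K), K tones at once
        return single, multi

    print(bits_per_symbol(64, 4))  # (6, 19), matching the worked example later in the text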


In FIG. 1, an acoustic data communication system 100 in accordance with an embodiment of the invention is shown.


The system 100 may include a transmitting apparatus 101 comprising an encoding processor 102 and a speaker 103.


The encoding processor 102 may be configured for segmenting data into a sequence of symbols and for encoding each symbol of the sequence into a plurality of tones. Each symbol may be encoded such that each of the plurality of tones is different. Each symbol may be encoded into K tones. The data may be segmented into symbols corresponding to B bits of the data, where B may be log2(M choose K) and M is the size of the alphabet for the tones. The alphabet of tones may be spread evenly over a frequency spectrum (a sketch follows below), or may be distributed in other ways to improve transmission.
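A minimal sketch of one such evenly spaced alphabet, assuming a band from f_low to f_high (the 18 kHz to 20 kHz near-ultrasonic band mentioned later is used purely for illustration):

    def tone_alphabet(m, f_low=18_000.0, f_high=20_000.0):
        """Return m tone frequencies spread evenly across [f_low, f_high] (m >= 2)."""
        step = (f_high - f_low) / (m - 1)
        return [f_low + i * step for i in range(m)]

    # tone_alphabet(5) -> [18000.0, 18500.0, 19000.0, 19500.0, 20000.0]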


The processor 102 and/or speaker 103 may be configured for acoustically transmitting the plurality of tones simultaneously for each symbol in sequence. For example, the processor 102 may be configured for summing the plurality of tones into a single note or chord for generation at the speaker 103. Alternatively, the speaker 103 may include a plurality of cones and each cone may generate a tone.


The system 100 may include a receiving apparatus 104 comprising a decoding processor 105 and a microphone 106.


The microphone 106 may be configured for receiving an audio signal which originates at the speaker 103.


The decoding processor 105 may be configured for decoding the audio signal into a sequence of notes (or chords), for identifying a plurality of tones within each note, for decoding the plurality of tones for each note into a symbol to form a sequence of symbols, and for reconstituting data from the sequence of symbols.


It will also be appreciated by those skilled in the art that the above embodiments of the invention may be deployed on different apparatuses and in differing architectures. For example, the encoding processor 102 and speaker 103 may exist within different devices and the audio signal to be generated may be transmitted from the encoding processor 102 (e.g. the processor 102 may be located at a server) to the speaker 103 (e.g. via a network, or via a broadcast system) for acoustic generation (for example, the speaker 103 may be within a television or other audio or audio/visual device). Furthermore, the microphone 106 and decoding processor 105 may exist within different devices. For example, the microphone 106 may transmit the audio signal, or a representation thereof, to a decoding processor 105 in the cloud.


The functionality of the apparatuses 101 and 104 and/or processors 102 and 105 may be implemented, at least in part, by computer software stored on a non-transitory computer-readable medium.


Referring to FIG. 2, a method 200 for communicating data acoustically will be described.


The data may comprise a payload and error correction. In some embodiments, the data may include a header. The header may include a length related to the transmission (e.g. for the entire data or the payload). The length may be the number of symbols transmitted.


In step 201, the data is segmented into a sequence of symbols (e.g. at transmitting apparatus 101 by encoding processor 102). The data may be segmented by first treating the data as a stream of bits. The segment size (B) in bits may be determined by:

B = log2(M choose K)


M is the size of the alphabet of tones at different frequencies spanning an audio spectrum, and K is the number of tones per note or chord.


The audio spectrum may be wholly, or at least partially, audible to human beings (e.g. within 20 Hz to 20 kHz), and/or may be wholly, or at least partially, ultrasonic (e.g. above 20 kHz). In one embodiment, the audio spectrum is near-ultrasonic (18 kHz to 20 kHz).


In step 202, each symbol in the sequence may be mapped to a set of tones (e.g. at transmitting apparatus 101 by encoding processor 102). Each set may comprise K tones. The tones may be selected from the alphabet of M tones. Preferably, each tone within a set is a different tone selected from the alphabet. The symbol may be mapped to the set of tones via a bijective mapping. In one embodiment, a hash-table from symbol to tone set may be used to encode the symbol (a second hash-table, from tone set to symbol, may be used to decode a detected set of tones). One disadvantage of hash-tables is that they must cover all possible selections of tones for the set, so as M and/or K increases the memory requirements may become prohibitively large. A more efficient bijective mapping scheme is therefore desirable. One embodiment which addresses this uses a combinatorial number system (combinadics) method to map symbols to tone sets and detected tone sets back to symbols.


In the combinadics method, each symbol (as an integer) can be translated into a K-value combinatorial representation (e.g. a set of K tones selected from the alphabet of M tones). Furthermore, each set of K tones can be translated back into a symbol (as an integer).
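A minimal sketch of such a translation follows, using the standard combinatorial number system; the function names are illustrative, and the enumeration order is that of standard combinadics rather than anything fixed by the patent:

    import math

    def symbol_to_tones(n, m, k):
        """Map an integer symbol n in [0, C(m, k)) to k distinct tone indices."""
        tones = []
        x = m - 1
        for i in range(k, 0, -1):
            while math.comb(x, i) > n:   # find the largest x with C(x, i) <= n
                x -= 1
            tones.append(x)
            n -= math.comb(x, i)
        return tones                     # strictly decreasing indices into the alphabet

    def tones_to_symbol(tones):
        """Inverse mapping: k distinct tone indices back to the integer symbol."""
        tones = sorted(tones, reverse=True)
        k = len(tones)
        return sum(math.comb(t, k - i) for i, t in enumerate(tones))

    assert tones_to_symbol(symbol_to_tones(9, 5, 3)) == 9  # round-trips for any valid n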


In step 203, the set of tones may be generated acoustically simultaneously for each symbol in the sequence (e.g. at the transmitting apparatus 101). This may be performed by summing all the tones in the set into an audio signal and transmitting the audio signal via a speaker 103. The audio signal may include a preamble. The preamble may assist in triggering listening or decoding at a receiving apparatus (e.g. 104). The preamble may be comprised of a sequence of single or summed tones.
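A minimal sketch of the summing step, assuming NumPy; the note duration and sample rate are illustrative:

    import numpy as np

    def synthesize_note(freqs, duration=0.08, sample_rate=44_100):
        """Sum the tones of one chord into a single note, normalized to avoid clipping."""
        t = np.arange(int(duration * sample_rate)) / sample_rate
        note = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        return note / len(freqs)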


In step 204, the audio signal may be received by a microphone (e.g. 106 at receiving apparatus 104).


In step 205, the audio signal may be decoded (e.g. via decoding processor 105) into a sequence of notes. Decoding of the audio signal may be triggered by first detecting a preamble.


Each note may comprise a set of tones, and the set of tones may be detected within each note (e.g. by the decoding processor 105) in step 206. The tones may be detected by computing a series of FFT frames for the audio signal corresponding to a note length and detecting the K most significant peaks across the series of FFT frames. In other embodiments, other methods may be used to detect prominent tones.
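A minimal sketch of such a detector, assuming NumPy; the Hann window and the snap-to-alphabet-bin step are illustrative assumptions rather than details fixed by the description:

    import numpy as np

    def detect_tones(note, alphabet, k, sample_rate=44_100):
        """Return indices of the k alphabet tones with the strongest spectral peaks."""
        spectrum = np.abs(np.fft.rfft(note * np.hanning(len(note))))
        freqs = np.fft.rfftfreq(len(note), d=1.0 / sample_rate)
        # magnitude at the FFT bin nearest each alphabet frequency
        mags = [spectrum[np.argmin(np.abs(freqs - f))] for f in alphabet]
        return sorted(np.argsort(mags)[-k:].tolist())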


The set of detected tones can then be mapped to a symbol (e.g. via a hash-table or via the combinadics method described above) in step 207.


In step 208, the symbols can be combined to form data. For example, the symbols may be a stream of bits that is segmented into bytes to reflect the original data transmitted.


At one or more of the steps 205 to 208, error correction may be applied to correct errors created during acoustic transmission from the speaker (e.g. 103) to the microphone (e.g. 106). For example, forward error correction (such as Reed-Solomon) may form a part of the data and may be used to correct errors in the data.
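As a hedged sketch, such protection could be added with a third-party package such as reedsolo (the description names Reed-Solomon only as an example and prescribes no library; the parity-byte count below is illustrative):

    # pip install reedsolo -- third-party Reed-Solomon codec (an assumption,
    # not a library named by the patent)
    from reedsolo import RSCodec

    rsc = RSCodec(8)                    # 8 parity bytes, correcting up to 4 byte errors
    protected = rsc.encode(b"payload")  # payload + parity, then encoded as tones
    recovered = rsc.decode(protected)   # corrects mis-heard bytes on the receive side
                                        # (exact return shape varies across versions)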


Embodiments of the present invention will be further described below:


Symbols, Lexical Mappings and the Combinatorial Number System (Combinadics)


In monophonic M-ary FSK, each symbol can represent one of M different values, so it can store at most log2(M) bits of data. With multi-tone FSK, with a chord size of K and an alphabet size of M, the number of combinatoric selections is M choose K:

M!/(K!(M−K)!)


Thus, for a 6-bit (64-level) alphabet and a chord size K of 4, the total number of combinations is calculated as follows:

64!/(4!·60!) = 635,376


Each symbol should be expressible in binary. The log2 of this value is taken (rounded down) to deduce the number of binary combinations that can be expressed, which is in this case 2^19. The spectral efficiency is thus improved from 6 bits per symbol to 19 bits per symbol.


Combinadics


To translate between K-note chords and symbols within the potential range, a bijective mapping must be created between the two, allowing a lexicographic index A to be derived from a combination {X1, X2, . . . , XK} and vice versa.


A naive approach to mapping would work by:

    • generating all possible combinations, and
    • storing a pair of hash-tables mapping A ↔ {X1, X2, . . . , XK}


      Example for M=5, K=3
  • 0—{0, 1, 2}
  • 1—{0, 1, 3}
  • 2—{0, 1, 4}
  • 3—{0, 2, 3}
  • 4—{0, 2, 4}
  • 5—{0, 3, 4}
  • 6—{1, 2, 3}
  • 7—{1, 2, 4}
  • 8—{1, 3, 4}
  • 9—{2, 3, 4}


As the combinatoric possibilities grow far beyond the small example above, the memory requirements of such tables become prohibitively large. Thus, an approach is needed that is efficient in both memory and CPU.
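For concreteness, a sketch of this naive table-based mapping; Python's itertools happens to enumerate the M=5, K=3 example above in the same order:

    from itertools import combinations

    M, K = 5, 3
    index_to_set = dict(enumerate(combinations(range(M), K)))       # A -> {X1..XK}
    set_to_index = {tones: a for a, tones in index_to_set.items()}  # {X1..XK} -> A

    assert index_to_set[0] == (0, 1, 2)
    assert set_to_index[(2, 3, 4)] == 9
    # Fine at this scale, but with M = 64 and K = 4 each table already
    # needs 635,376 entries.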


Mapping from Data to Combinadics to Multi-tone FSK


To take a stream of bytes and map it to a multi-tone FSK signal, the process is therefore as follows (a segmentation sketch follows the list):

    • segment the stream of bytes into B-bit symbols, where 2^B is the maximum number of binary values expressible within the current combinatoric space (i.e. M choose K)
    • translate each symbol into its K-value combinatorial representation
    • synthesize the chord by summing the K tones contained within the combination
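A minimal sketch of the segmentation step, assuming a big-endian bit order (any consistent convention would work):

    import math

    def segment(data: bytes, m: int, k: int):
        """Split a byte stream into B-bit symbols, B = floor(log2(m choose k))."""
        b = math.floor(math.log2(math.comb(m, k)))
        bits = "".join(f"{byte:08b}" for byte in data)
        bits += "0" * (-len(bits) % b)   # pad the tail out to a whole symbol
        return [int(bits[i:i + b], 2) for i in range(0, len(bits), b)]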


In one embodiment, a transmission “frame” or packet may be ordered as follows:


1. Preamble/wakeup symbols (F)


2. Payload symbols (P)


3. Forward error-correction symbols (E)


FF PPPPPPPP EEEEEEEE


At decoding, a receiving apparatus may:

    • decode each of the constituent tones using an FFT
    • segment the input into notes, each containing a number of FFT frames equal to the entire expected duration of the note
    • use a statistical process to derive what seem to be the K most prominent tones within each note
    • translate the K tones into a numerical symbol using the combinatorial number system process described above
    • concatenate the symbols across the entire length of the payload (and FEC segment)
    • re-segment into bytes (a sketch of these two steps follows the list)
    • and finally, apply the FEC algorithm to correct any mis-heard tones
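A minimal sketch of the concatenate-and-re-segment steps, mirroring the big-endian assumption of the segmentation sketch above:

    def reassemble(symbols, b):
        """Concatenate b-bit symbols and regroup into bytes, dropping pad bits."""
        bits = "".join(f"{s:0{b}b}" for s in symbols)
        usable = len(bits) - (len(bits) % 8)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))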


In another embodiment, the FEC algorithm may be applied before re-segmentation into bytes.


A potential advantage of some embodiments of the present invention is that data throughput for acoustic data communication systems can be significantly increased (bringing throughput closer to the Shannon limit for the channel) in typical acoustic environments by improved spectral efficiency. Greater efficiency results in faster transmission of smaller payloads, and enables transmission of larger payloads which previously may have been prohibitively slow for many applications.


While the present invention has been illustrated by the description of the embodiments thereof, and while the embodiments have been described in considerable detail, it is not the intention of the applicant to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details, representative apparatus and method, and illustrative examples shown and described. Accordingly, departures may be made from such details without departure from the spirit or scope of applicant's general inventive concept.

Claims
  • 1. A method for communicating data acoustically, comprising: segmenting the data into a sequence of symbols, each of the symbols having a preset number of bits; determining for each symbol of the sequence of symbols a plurality of tones based on a bijective mapping between symbols and sets of tones selected from tones spread evenly over a frequency spectrum, wherein each set of tones among the sets of tones includes a same number of tones and each of the plurality of tones for each symbol in the sequence of symbols is at a different frequency; and for each symbol in the sequence of symbols, simultaneously acoustically generating, using a speaker, the plurality of tones determined for the respective symbol, wherein the bijective mapping includes a mapping between the symbols and a multi-tone Frequency Shift Keying signal including the plurality of tones.
  • 2. The method of claim 1, further comprising: generating a transmission packet for a first set of symbols of the sequence of symbols, wherein the transmission packet comprises a payload, the payload comprising the respective plurality of tones corresponding to the first set of symbols.
  • 3. The method of claim 2, wherein the transmission packet further comprises a header before the payload and error correction data after the payload.
  • 4. The method of claim 3, wherein the header comprises an indication of a length of the transmission packet.
  • 5. The method of claim 1, wherein simultaneously acoustically generating the plurality of tones for each symbol in sequence comprises: summing the plurality of tones into an audio signal; and transmitting, via the speaker, the audio signal.
  • 6. The method of claim 1, further comprising: receiving an audio signal comprising a sequence of a plurality of simultaneous tones at a microphone; decoding each of the plurality of simultaneous tones into a symbol; and combining the symbols into the data.
  • 7. The method of claim 6, further comprising: converting the received audio signal into a series of Fast Fourier Transform (FFT) frames.
  • 8. The method of claim 7, further comprising: after analyzing the FFT frames for a set of tones, detecting K most prominent peaks corresponding to K tones.
  • 9. The method of claim 1, wherein each symbol is associated with a plurality of tones via a combinadics encoding method.
  • 10. The method of claim 6, wherein each of the plurality of simultaneous tones is decoded into a symbol using a combinadics decoding method.
  • 11. The method of claim 10, wherein the combinadics encoding and decoding methods associate integers with a set of tones.
  • 12. The method of claim 1, wherein the plurality of tones are audibly generated.
  • 13. The method of claim 1, wherein each of the plurality of tones for each symbol is within an audio spectrum from 18 kHz to 20 kHz.
  • 14. A system for communicating data acoustically, comprising: an encoding device, comprising: one or more first processors; and a first non-transitory computer-readable medium storing instructions that, when executed by the one or more first processors, cause the encoding device to: segment the data into a sequence of symbols, each of the symbols having a preset number of bits; and determine for each symbol of the sequence of symbols a plurality of tones based on a bijective mapping between symbols and sets of tones selected from tones spread evenly over a frequency spectrum, wherein each set of tones among the sets of tones includes a same number of tones and each of the plurality of tones for each symbol in the sequence of symbols is at a different frequency; and a playback device, comprising: one or more second processors; a speaker; and a second non-transitory computer-readable medium storing instructions that, when executed by the one or more second processors, cause the playback device to: for each symbol in the sequence of the symbols, cause, via the speaker, simultaneous playback of the plurality of tones determined for the respective symbol, wherein the bijective mapping includes a mapping between the symbols and a multi-tone Frequency Shift Keying signal including the plurality of tones.
  • 15. The system of claim 14, wherein the speaker comprises a plurality of cones, wherein each cone of the speaker is controlled to output a different tone of the plurality of tones determined for the respective symbol.
  • 16. The system of claim 14, wherein the first non-transitory computer-readable medium stores instructions that, when executed by the one or more first processors, cause the encoding device to: generate a transmission packet for a first set of symbols of the sequence of symbols, wherein the transmission packet comprises a payload comprising the respective plurality of tones corresponding to the first set of symbols.
  • 17. The system of claim 14, wherein the second non-transitory computer-readable medium stores instructions that, when executed by the one or more second processors, cause the playback device to: sum the plurality of tones into an audio signal; and transmit, via a speaker, the audio signal.
  • 18. A non-transitory computer-readable medium storing instructions that, when executed by one or more second processors, cause a system to: segment data into a sequence of symbols, each of the symbols having a preset number of bits; determine for each symbol of the sequence of symbols a plurality of tones based on a bijective mapping between symbols and sets of tones selected from tones spread evenly over a frequency spectrum, wherein each set of tones among the sets of tones includes a same number of tones and each of the plurality of tones for each symbol in the sequence of symbols is at a different frequency; and for each symbol in the sequence of symbols, provide simultaneous playback of the plurality of tones determined for the respective symbol using a speaker, wherein the bijective mapping includes a mapping between the symbols and a multi-tone Frequency Shift Keying signal including the plurality of tones.
  • 19. The non-transitory computer-readable medium of claim 18, wherein the instructions, when executed by the one or more second processors, further cause the system to: generate a transmission packet for a first set of symbols of the sequence of symbols, wherein the transmission packet comprises a payload comprising the respective plurality of tones corresponding to the first set of symbols.
  • 20. An apparatus for communicating data acoustically, comprising: a memory; a speaker; and a processing system coupled to the memory and the speaker and configured to control the apparatus to: segment a stream of bytes into a sequence of symbols, each symbol comprising B number of bits; determine for each symbol of the sequence of symbols a plurality of tones based on a bijective mapping between symbols and sets of tones selected from tones spread evenly over a frequency spectrum, wherein each set of tones mapped to a respective symbol includes a same number of tones and each tone in the set of tones mapped to the respective symbol has a different frequency defined by the mapping; and sequentially for each symbol of the sequence of symbols, acoustically output, using the speaker, the set of tones determined for the respective symbol, wherein each of the tones of the set of tones for the respective symbol are simultaneously acoustically output by the speaker, wherein the bijective mapping includes a mapping between the symbols and a multi-tone Frequency Shift Keying signal including the plurality of tones.
  • 21. The apparatus of claim 20, wherein the speaker comprises a plurality of cones and each cone of the speaker is controlled to output a different tone of the plurality of tones determined for the respective symbol.
  • 22. The method of claim 1, wherein each set of the sets of tones corresponds to a chord.
  • 23. The method of claim 1, wherein the speaker comprises a plurality of cones and each cone of the speaker is controlled to output a different tone of the plurality of tones determined for the respective symbol.
Priority Claims (1)
Number Date Country Kind
1721457 Dec 2017 GB national
PCT Information
Filing Document Filing Date Country Kind
PCT/GB2018/053733 12/20/2018 WO
Publishing Document Publishing Date Country Kind
WO2019/122910 6/27/2019 WO A
US Referenced Citations (114)
Number Name Date Kind
4045616 Sloane Aug 1977 A
4048074 Bruenemann et al. Sep 1977 A
4101885 Blum Jul 1978 A
4323881 Mori Apr 1982 A
4794601 Kikuchi Dec 1988 A
6133849 McConnell et al. Oct 2000 A
6163803 Watanabe Dec 2000 A
6532477 Tang et al. Mar 2003 B1
6711538 Omori et al. Mar 2004 B1
6766300 Laroche Jul 2004 B1
6909999 Thomas et al. Jun 2005 B2
6996532 Thomas Feb 2006 B2
7058726 Osaku et al. Jun 2006 B1
7349668 Ilan et al. Mar 2008 B2
7379901 Philyaw May 2008 B1
7403743 Welch Jul 2008 B2
7944847 Trine et al. May 2011 B2
8494176 Suzuki et al. Jul 2013 B2
8594340 Takara et al. Nov 2013 B2
8782530 Beringer et al. Jul 2014 B2
9118401 Nieto et al. Aug 2015 B1
9137243 Suzuki et al. Sep 2015 B2
9237226 Frauenthal et al. Jan 2016 B2
9270811 Atlas Feb 2016 B1
9344802 Suzuki et al. May 2016 B2
10090003 Wang Oct 2018 B2
10186251 Mohammadi Jan 2019 B1
20020107596 Thomas et al. Aug 2002 A1
20020152388 Linnartz et al. Oct 2002 A1
20020184010 Eriksson et al. Dec 2002 A1
20030065918 Willey Apr 2003 A1
20030195745 Zinser, Jr. et al. Oct 2003 A1
20030212549 Steentra et al. Nov 2003 A1
20040002858 Attias et al. Jan 2004 A1
20040081078 McKnight Apr 2004 A1
20040133789 Gantman et al. Jul 2004 A1
20040148166 Zheng Jul 2004 A1
20040264713 Grzesek Dec 2004 A1
20050049732 Kanevsky et al. Mar 2005 A1
20050086602 Philyaw et al. Apr 2005 A1
20050219068 Jones et al. Oct 2005 A1
20060167841 Allan et al. Jul 2006 A1
20060253209 Hersbach et al. Nov 2006 A1
20060287004 Fuqua Dec 2006 A1
20070063027 Belfer et al. Mar 2007 A1
20070121918 Tischer May 2007 A1
20070144235 Werner et al. Jun 2007 A1
20070174052 Manjunath et al. Jul 2007 A1
20070192672 Bodin et al. Aug 2007 A1
20070192675 Bodin et al. Aug 2007 A1
20070232257 Otani et al. Oct 2007 A1
20080002882 Voloshynovskyy et al. Jan 2008 A1
20080011825 Giordano et al. Jan 2008 A1
20080027722 Haulick et al. Jan 2008 A1
20080031315 Ramirez et al. Feb 2008 A1
20080059157 Fukuda et al. Mar 2008 A1
20080112885 Okunev et al. May 2008 A1
20080232603 Soulodre Sep 2008 A1
20080242357 White Oct 2008 A1
20080262928 Michaelis Oct 2008 A1
20090034712 Grasley et al. Feb 2009 A1
20090119110 Oh May 2009 A1
20090141890 Steenstra et al. Jun 2009 A1
20090254485 Baentsch et al. Oct 2009 A1
20100030838 Atsmon et al. Feb 2010 A1
20100064132 Ravikiran Sureshbabu Mar 2010 A1
20100088390 Bai et al. Apr 2010 A1
20100134278 Srinivasan et al. Jun 2010 A1
20100146115 Bezos Jun 2010 A1
20100223138 Dragt Sep 2010 A1
20100267340 Lee Oct 2010 A1
20100290641 Steele Nov 2010 A1
20110173208 Vogel Jul 2011 A1
20110276333 Wang et al. Nov 2011 A1
20110277023 Meylemans et al. Nov 2011 A1
20110307787 Smith Dec 2011 A1
20120084131 Bergel et al. Apr 2012 A1
20120214416 Kent et al. Aug 2012 A1
20120214544 Shivappa et al. Aug 2012 A1
20130010979 Takara et al. Jan 2013 A1
20130030800 Tracey et al. Jan 2013 A1
20130034243 Yermeche et al. Feb 2013 A1
20130077798 Otani et al. Mar 2013 A1
20130216058 Furuta et al. Aug 2013 A1
20130223279 Tinnakornsrisuphap et al. Aug 2013 A1
20130275126 Lee Oct 2013 A1
20140028818 Brockway, III et al. Jan 2014 A1
20140046464 Reimann Feb 2014 A1
20140053281 Benoit et al. Feb 2014 A1
20140074469 Zhidkov Mar 2014 A1
20140142958 Sharma et al. May 2014 A1
20140164629 Barth et al. Jun 2014 A1
20140172141 Mangold Jun 2014 A1
20140172429 Butcher et al. Jun 2014 A1
20140258110 Davis et al. Sep 2014 A1
20150004935 Fu Jan 2015 A1
20150088495 Jeong et al. Mar 2015 A1
20150141005 Suryavanshi et al. May 2015 A1
20150215299 Burch et al. Jul 2015 A1
20150248879 Miskimen et al. Sep 2015 A1
20150271676 Shin et al. Sep 2015 A1
20150349841 Mani et al. Dec 2015 A1
20150371529 Dolecki Dec 2015 A1
20150382198 Kashef et al. Dec 2015 A1
20160007116 Holman Jan 2016 A1
20160098989 Layton et al. Apr 2016 A1
20170279542 Knauer et al. Sep 2017 A1
20180106897 Shouldice et al. Apr 2018 A1
20180115844 Lu et al. Apr 2018 A1
20180167147 Almada Jun 2018 A1
20180359560 Defraene et al. Dec 2018 A1
20200105128 Frank Apr 2020 A1
20200169327 Lin May 2020 A1
20210098008 Nesfield et al. Apr 2021 A1
Foreign Referenced Citations (26)
Number Date Country
105790852 Jul 2016 CN
106921650 Jul 2017 CN
1760693 Mar 2007 EP
2334111 Jun 2011 EP
2916554 Sep 2015 EP
3275117 Jan 2018 EP
3526912 Aug 2019 EP
2369995 Jun 2002 GB
2484140 Apr 2012 GB
H1078928 Mar 1998 JP
2001320337 Nov 2001 JP
2004512765 Apr 2004 JP
2004139525 May 2004 JP
2007121626 May 2007 JP
2007195105 Aug 2007 JP
2008219909 Sep 2008 JP
0115021 Mar 2001 WO
0150665 Jul 2001 WO
0161987 Aug 2001 WO
0163397 Aug 2001 WO
0211123 Feb 2002 WO
0235747 May 2002 WO
2004002103 Dec 2003 WO
2005006566 Jan 2005 WO
2008131181 Oct 2008 WO
2016094687 Jun 2016 WO
Non-Patent Literature Citations (72)
Entry
International Search Report for PCT/GB2018/053733, dated Apr. 11, 2019, 4 pages.
Written Opinion of the ISA for PCT/GB2018/053733, dated Apr. 11, 2019, 6 pages.
Advisory Action dated Mar. 1, 2022, issued in connection with U.S. Appl. No. 16/342,078, filed Apr. 15, 2019, 3 pages.
Bourguet et al. “A Robust Audio Feature Extraction Algorithm for Music Identification,” AES Convention 129; Nov. 4, 2010, 7 pages.
C. Beaugeant and H. Taddei, “Quality and computation load reduction achieved by applying smart transcoding between CELP speech codecs,” 2007, 2007 15th European Signal Processing Conference, pp. 1372-1376.
European Patent Office, Decision to Refuse dated Nov. 13, 2019, issued in connection with European Patent Application No. 11773522.5, 52 pages.
European Patent Office, European EPC Article 94.3 dated Oct. 8, 2021, issued in connection with European Application No. 17790809.2, 9 pages.
European Patent Office, European EPC Article 94.3 dated Dec. 10, 2021, issued in connection with European Application No. 18845403.7, 41 pages.
European Patent Office, European EPC Article 94.3 dated Oct. 12, 2021, issued in connection with European Application No. 17795004.5, 8 pages.
European Patent Office, European EPC Article 94.3 dated Oct. 28, 2021, issued in connection with European Application No. 18752180.2, 7 pages.
European Patent Office, European Extended Search Report dated Aug. 31, 2020, issued in connection with European Application No. 20153173.8, 8 pages.
European Patent Office, Summons to Attend Oral Proceedings mailed on Mar. 15, 2019, issued in connection with European Application No. 11773522.5-1217, 10 pages.
Final Office Action dated Oct. 16, 2014, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 22 pages.
Final Office Action dated Aug. 17, 2017, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 22 pages.
Final Office Action dated Nov. 30, 2015, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 25 pages.
Final Office Action dated May 10, 2022, issued in connection with U.S. Appl. No. 16/496,685, filed Sep. 23, 2019, 15 pages.
Final Office Action dated Mar. 18, 2022, issued in connection with U.S. Appl. No. 16/623,160, filed Dec. 16, 2019, 14 pages.
Final Office Action dated Apr. 20, 2020, issued in connection with U.S. Appl. No. 16/012,167, filed Jun. 19, 2018, 21 pages.
Gerasimov et al. “Things That Talk: Using sound for device-to-device and device-to-human communication”, Feb. 2000 IBM Systems Journal 39(3.4): 530-546, 18 pages. [Retrieved Online] URL: https://www.researchgate.net/publication/224101904_Things_that_talk_Using_sound_for_device-to-device_and_device-to-human_communication.
Glover et al. “Real-time detection of musical onsets with linear prediction and sinusoidal modeling.”, 2011 EURASIP Journal on Advances in Signal Processing 2011, 68, Retrieved from the Internet URL: https://doi.org/10.1186/1687-6180-2011-68, Sep. 20, 2011, 13 pages.
Gomez et al: “Distant talking robust speech recognition using late reflection components of room impulse response”, Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on, IEEE, Piscataway, NJ, USA, Mar. 31, 2008, XP031251618, ISBN: 978-1-4244-1483-3, pp. 4581-4584.
Gomez et al., “Robust Speech Recognition in Reverberant Environment by Optimizing Multi-band Spectral Subtraction”, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing ICASSP, Jan. 1, 2008, 6 pages.
Goodrich et al., Using Audio in Secure Device Pairing, International Journal of Security and Networks, vol. 4, No. 1.2, Jan. 1, 2009, p. 57, Inderscience Enterprises Ltd., 12 pages.
International Bureau, International Preliminary Report on Patentability and Written Opinion, dated Apr. 16, 2019, issued in connection with International Application No. PCT/GB2017/053112, filed on Oct. 13, 2017, 12 pages.
International Bureau, International Preliminary Report on Patentability and Written Opinion, dated Apr. 16, 2019, issued in connection with International Application No. PCT/GB2017/053113, filed on Oct. 13, 2017, 8 pages.
International Bureau, International Preliminary Report on Patentability and Written Opinion, dated Dec. 17, 2019, issued in connection with International Application No. PCT/GB2018/051645, filed on Jun. 14, 2018, 7 pages.
International Bureau, International Preliminary Report on Patentability and Written Opinion, dated Mar. 19, 2019, issued in connection with International Application No. PCT/GB2017/052787, filed on Sep. 19, 2017, 7 pages.
International Bureau, International Preliminary Report on Patentability and Written Opinion, dated Jun. 23, 2020, issued in connection with International Application No. PCT/GB2018/053733, filed on Dec. 20, 2018, 7 pages.
International Bureau, International Preliminary Report on Patentability and Written Opinion, dated Sep. 24, 2019, issued in connection with International Application No. PCT/GB2018/050779, filed on Mar. 23, 2018, 6 pages.
International Bureau, International Search Report and Written Opinion dated Oct. 4, 2018, issued in connection with International Application No. PCT/GB2018/051645, filed on Jun. 14, 2018, 14 pages.
International Searching Authority, International Search Report and Written Opinion dated Jan. 5, 2022, issued in connection with International Application No. PCT/US2021/048380, filed on Aug. 31, 2021, 15 pages.
International Searching Authority, International Search Report and Written Opinion dated Mar. 13, 2018, issued in connection with International Application No. PCT/GB2017/053112, filed on Oct. 13, 2017, 18 pages.
International Searching Authority, International Search Report and Written Opinion dated Nov. 29, 2017, in connection with International Application No. PCT/GB2017/052787, 10 pages.
International Searching Authority, International Search Report and Written Opinion dated Nov. 30, 2011, in connection with International Application No. PCT/GB2011/051862, 6 pages.
International Searching Authority, International Search Report dated Jan. 18, 2018, issued in connection with International Application No. PCT/GB2017/053113, filed on Oct. 17, 2017, 11 pages.
International Searching Authority, International Search Report dated Jun. 19, 2018, issued in connection with International Application No. PCT/GB2018/050779, filed on Mar. 23, 2018, 8 pages.
Japanese Patent Office, Office Action dated Jun. 23, 2015, issued in connection with JP Application No. 2013-530801, 8 pages.
Japanese Patent Office, Office Action dated Apr. 4, 2017, issued in connection with JP Application No. 2013-530801, 8 pages.
Japanese Patent Office, Office Action dated Jul. 5, 2016, issued in connection with JP Application No. 2013-530801, 8 pages.
Lopes et al. “Acoustic Modems for Ubiquitous Computing”, IEEE Pervasive Computing, Mobile and Ubiquitous Systems. vol. 2, No. 3 Jul.-Sep. 2003, pp. 62-71. [Retrieved Online] URL https://www.researchgate.net/publication/3436996_Acoustic_modems_for_ubiquitous_computing.
Madhavapeddy, Anil. Audio Networking for Ubiquitous Computing, Oct. 24, 2003, 11 pages.
Madhavapeddy et al., Audio Networking: The Forgotten Wireless Technology, IEEE CS and IEEE ComSoc, Pervasive Computing, Jul.-Sep. 2005, pp. 55-60.
Madhavapeddy et al., Context-Aware Computing with Sound, University of Cambridge 2003, pp. 315-332.
Monaghan et al. “A method to enhance the use of interaural time differences for cochlear implants in reverberant environments.”, published Aug. 17, 2016, Journal of the Acoustical Society of America, 140, pp. 1116-1129. Retrieved from the Internet URL: https://asa.scitation.org/doi/10.1121/1.4960572 Year: 2016, 15 pages.
Non-Final Office Action dated Mar. 25, 2015, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 24 pages.
Non-Final Office Action dated Mar. 28, 2016, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 26 pages.
Non-Final Office Action dated Jan. 6, 2017, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 22 pages.
Non-Final Office Action dated Aug. 9, 2019, issued in connection with U.S. Appl. No. 16/012,167, filed Jun. 19, 2018, 15 pages.
Non-Final Office Action dated Feb. 5, 2014, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 22 pages.
Non-Final Office Action dated Aug. 12, 2021, issued in connection with U.S. Appl. No. 16/342,060, filed Apr. 15, 2019, 88 pages.
Non-Final Office Action dated Oct. 15, 2021, issued in connection with U.S. Appl. No. 16/496,685, filed Sep. 23, 2019, 12 pages.
Non-Final Office Action dated Sep. 24, 2020, issued in connection with U.S. Appl. No. 16/012,167, filed Jun. 19, 2018, 20 pages.
Non-Final Office Action dated Jan. 29, 2021, issued in connection with U.S. Appl. No. 16/342,060, filed Apr. 15, 2019, 59 pages.
Non-Final Office Action dated Feb. 5, 2021, issued in connection with U.S. Appl. No. 16/342,078, filed Apr. 15, 2019, 13 pages.
Non-Final Office Action dated Sep. 7, 2021, issued in connection with U.S. Appl. No. 16/623,160, filed Dec. 16, 2019, 11 pages.
Notice of Allowance dated Mar. 15, 2018, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 10 pages.
Notice of Allowance dated Mar. 19, 2021, issued in connection with U.S. Appl. No. 16/012,167, filed Jun. 19, 2018, 9 pages.
Notice of Allowance dated Feb. 18, 2022, issued in connection with U.S. Appl. No. 16/564,766, filed Sep. 9, 2019, 8 pages.
Notice of Allowance dated Mar. 29, 2022, issued in connection with U.S. Appl. No. 16/342,060, filed Apr. 15, 2019, 24 pages.
Soriente et al., “HAPADEP: Human-Assisted Pure Audio Device Pairing*” Computer Science Department, University of California Irvine, 12 pages. [Retrieved Online] URL: https://www.researchgate.net/publication/220905534_HAPADEP_Human-assisted_pure_audio_device_pairing.
Tarr, E.W. “Processing perceptually important temporal and spectral characteristics of speech”, 2013, Available from ProQuest Dissertations and Theses Professional. Retrieved from https://dialog.proquest.com/professional/docview/1647737151?accountid=131444, 200 pages.
United Kingdom Patent Office, United Kingdom Examination Report dated Oct. 8, 2021, issued in connection with United Kingdom Application No. GB2113511.6, 7 pages.
United Kingdom Patent Office, United Kingdom Examination Report dated Jun. 11, 2021, issued in connection with United Kingdom Application No. GB1716909.5, 5 pages.
United Kingdom Patent Office, United Kingdom Examination Report dated Feb. 2, 2021, issued in connection with United Kingdom Application No. GB1715134.1, 5 pages.
United Kingdom Patent Office, United Kingdom Examination Report dated Oct. 29, 2021, issued in connection with United Kingdom Application No. GB1709583.7, 3 pages.
United Kingdom Patent Office, United Kingdom Office Action dated May 10, 2022, issued in connection with United Kingdom Application No. GB2202914.4, 5 pages.
United Kingdom Patent Office, United Kingdom Office Action dated Jan. 22, 2021, issued in connection with United Kingdom Application No. GB1906696.8, 2 pages.
United Kingdom Patent Office, United Kingdom Office Action dated Mar. 24, 2022, issued in connection with United Kingdom Application No. GB2202914.4, 3 pages.
United Kingdom Patent Office, United Kingdom Office Action dated Jan. 28, 2022, issued in connection with United Kingdom Application No. GB2113511.6, 3 pages.
United Kingdom Patent Office, United Kingdom Office Action dated Feb. 9, 2022, issued in connection with United Kingdom Application No. GB2117607.8, 3 pages.
United Kingdom Patent Office, United Kingdom Search Report dated Sep. 22, 2021, issued in connection with United Kingdom Application No. GB2109212.7, 5 pages.
Wang, Avery Li-Chun. An Industrial-Strength Audio Search Algorithm. Oct. 27, 2003, 7 pages. [online]. [retrieved on May 12, 2020] Retrieved from the Internet URL: https://www.researchgate.net/publication/220723446_An_Industrial_Strength_Audio_Search_Algorithm.
Related Publications (1)
Number Date Country
20210344428 A1 Nov 2021 US