The present invention is in the field of device authentication for communications. More particularly, but not exclusively, the present invention relates to a method and system for authenticating a device with a wireless access point.
Internet of Things (IoT) devices are computing devices which do not have the form factor of a traditional PC and usually perform a limited set of functions, such as measuring temperature, recording video, or providing lighting control. They are often connected to the internet and send/receive data over a network so that their behaviour can be coordinated and controlled from a central service.
Due to their form factor, IoT devices often do not have screens or extensive user input controls, such as a keyboard. Often, but not always, user input is limited to a small number of buttons, and output is reduced to a small number of indicator lights.
During the initial setup process, the IoT device must be brought onto a wireless network by passing the network's credentials to the IoT device so that it can then connect directly to the wireless network via a wireless access point. This is often done by configuring a temporary wireless network on the IoT device to which a second device, often a mobile phone, can connect and then pass the network credentials.
Current methods often rely on a temporary ad hoc ‘hotspot’ created by the offline device. Typically, a device owner will place the device into a configuration mode by pressing a button or interface element. Once in configuration mode, the device creates a hotspot network to which the owner can connect an additional device. Once a wireless connection is established between the two devices, credentials can be passed from the additional device to the offline device. When the credentials have been transferred, the offline device can be reconfigured to connect directly to the network.
There is a desire to make this setup process faster and simpler for the owner/user of the IoT device.
It is an object of the present invention to provide a method and system for authenticating a device with a wireless access point which overcomes the disadvantages of the prior art, or at least provides a useful alternative.
According to a first aspect of the invention there is provided a method for authenticating a device with a wireless access point, including: receiving an audio signal at the device via a microphone; processing the audio signal to extract a code; using the code, at least in part, to authenticate the device with the wireless access point; and, in response to the authentication, providing the device with access to one or more network services via the wireless access point.
Other aspects of the invention are described within the claims.
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings.
The present invention provides a method and system for authenticating a device with a wireless access point.
The inventors have determined that existing methods for authenticating new network-capable devices to wireless access points are cumbersome, particularly when the devices are not general-purpose computing devices, such as IoT devices.
The inventors have discovered that audio can be used to facilitate the authentication process by encoding information in an audio signal for receipt by a network-capable device to assist that device in authenticating itself with a wireless network. The information might include, for example, WiFi credentials.
A wireless access point 101 is shown. The wireless access point may be configured to broadcast an SSID (Service Set IDentifier) over a wireless protocol such as 802.11 or 802.15.1. In some embodiments, instead of WiFi, the wireless access point may use Bluetooth, Zigbee, or any other wireless standard.
A network-capable device 102 is shown. The network-capable device may be a non-general-purpose computing device, such as an Internet-of-Things (IoT) device. The IoT device may include, for example, sensors (e.g. for sensing light, heat, humidity, electricity, liquid levels, temperature, smoke, etc.) and/or control apparatus (e.g. to control electricity, mechanical/electrical apparatus, etc.).
The network-capable device 102 may include a processor 103, a wireless communication module 104 and a microphone 105.
The processor 103 may be configured for receiving an audio signal via the microphone 105, processing the audio signal to extract a code, and using the code to authenticate the device 102 with the wireless access point 101 via the wireless communication module 104.
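By way of illustration only, the following is a minimal sketch of this receive, extract, and authenticate flow. The helper functions record_audio, extract_code, and join_network are hypothetical stand-ins for the microphone 105, the decoder, and the wireless communication module 104, and are not named anywhere in this disclosure.

```python
# Illustrative sketch only. record_audio(), extract_code() and
# join_network() are hypothetical stand-ins for the microphone 105,
# the decoder, and the wireless communication module 104.

def record_audio(seconds: float) -> bytes:
    """Return raw PCM samples captured from the microphone."""
    raise NotImplementedError

def extract_code(samples: bytes) -> dict | None:
    """Decode the audio signal and return the embedded code, if any."""
    raise NotImplementedError

def join_network(ssid: str, passphrase: str) -> None:
    """Authenticate with the wireless access point."""
    raise NotImplementedError

def onboard() -> None:
    samples = record_audio(seconds=5.0)
    code = extract_code(samples)
    if code is not None:
        join_network(code["ssid"], code["passphrase"])
```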
A router 106 is shown. The router may be configured for mediating connections between devices across a network 107. The router 106 and wireless access point 101 may be collocated within the same apparatus.
A second device 108 is shown. The second device 108 may include or be connected to a speaker 109. The device 108 may be a user device such as a mobile user device (e.g. portable computer, smartphone, or tablet), a desktop computer, a television, a radio, or a landline telephone. In one embodiment, the second device 108 is another IoT device.
The second device 108 may include a user input apparatus 110 (e.g. a physical button, a touch-pad, a touch-screen, etc.), a processor 111, a memory 112, and a communications module 113.
The second device 108 may be configured to generate an audio signal at the speaker 109 for receipt by the microphone 105 at the network-capable device 102. The audio signal may encode the code which is subsequently extracted by the network-capable device 102. The second device 108 may generate the audio signal at the speaker 109 in response to input received at the user input apparatus.
It will be appreciated by those skilled in the art that the above embodiments of the invention may be deployed on different devices and in differing architectures.
Referring now to the drawings, a method for authenticating a device with a wireless access point will be described.
In step 201, an audio signal is received at the device (e.g. 102) via a microphone (e.g. 105). The audio signal may be received from a speaker (e.g. 109) at another device (e.g. 108). The code may be encoded within the audio signal via an audio protocol (such as described in US Patent Publication No. 2012/084131A1). The encoding may happen at the other device (e.g. 108), or the other device (e.g. 108) may receive an audio signal for play-back encoded at another location (e.g. a server or device) which may be local or remote to the devices.
In step 202, the audio signal is processed to extract a code (e.g. at processor 103). The audio signal may be processed locally or remotely. The code may include WiFi credentials such as an SSID and passphrase for the wireless access point. In some embodiments, the code may include additional information such as user account information. The code may be encrypted. The encryption may be via symmetric or asymmetric keys. In one embodiment, the device transmits its public key, which is used to encrypt the code via PKI during encoding by the other device (e.g. 108).
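As a concrete illustration of the PKI variant, the sketch below encrypts a credential payload under the device's public key using the third-party Python cryptography package. The 2048-bit key size, OAEP padding, and JSON payload are illustrative assumptions, not requirements of the invention.

```python
# Sketch: encrypting a credential payload under the device's RSA public
# key (one possible realisation of the PKI variant described above).
# Key size and padding are illustrative assumptions.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = device_key.public_key()  # in practice, shared by the device

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

payload = b'{"ssid": "HomeNet", "passphrase": "correct horse"}'
ciphertext = public_key.encrypt(payload, oaep)

# Only the device holding the private key can recover the credentials.
assert device_key.decrypt(ciphertext, oaep) == payload
```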
The code may be embedded within a packet structure within the audio signal. The packet structure may comprise one or more of a header, a payload (e.g. for the code), error correction, and a checksum. Part of the packet may be encrypted (e.g. just the payload). Exemplary packet structures are shown in the accompanying drawings.
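One possible packet layout matching that structure is sketched below. The magic bytes, version field, length prefix, and the choice of CRC-32 as the checksum are illustrative assumptions.

```python
# Sketch of a header + payload + checksum packet layout.
# Magic bytes, version field and CRC-32 are illustrative assumptions.
import struct
import zlib

MAGIC, VERSION = b"AQ", 1

def build_packet(payload: bytes) -> bytes:
    header = struct.pack(">2sBH", MAGIC, VERSION, len(payload))
    checksum = zlib.crc32(header + payload)
    return header + payload + struct.pack(">I", checksum)

def parse_packet(packet: bytes) -> bytes:
    magic, _version, length = struct.unpack(">2sBH", packet[:5])
    payload = packet[5:5 + length]
    (checksum,) = struct.unpack(">I", packet[5 + length:9 + length])
    if magic != MAGIC or zlib.crc32(packet[:5 + length]) != checksum:
        raise ValueError("corrupt or foreign packet")
    return payload

assert parse_packet(build_packet(b"ssid=HomeNet;pass=hunter2")) == b"ssid=HomeNet;pass=hunter2"
```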
In step 203, the code is used to authenticate the device, at least in part, with the wireless access point. For example, the device may utilise its wireless communications module (104) to connect to the SSID using the passphrase.
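On a Linux-based device managed by NetworkManager, for example, this step might be performed by invoking the nmcli command-line tool, as sketched below; the assumption that the device runs NetworkManager is illustrative only.

```python
# Sketch: joining the network with the extracted credentials on a
# Linux device that happens to run NetworkManager (an assumption).
import subprocess

def connect(ssid: str, passphrase: str) -> None:
    subprocess.run(
        ["nmcli", "dev", "wifi", "connect", ssid, "password", passphrase],
        check=True,  # raises CalledProcessError if the join fails
    )
```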
In step 204, in response to the authentication, access is provided to one or more network services to the device via the wireless access point. Partial authentication may be provided; for example, the device may utilise pre-stored identity information and/or security information to further validate itself with the wireless access point, the router, or a server in order to access network services.
In some embodiments, the same audio signal may be received by microphones at multiple devices; each device may process the audio signal to extract the code and use the code, at least in part, to authenticate itself with the wireless access point. In this way, multiple devices may be “onboarded” with the wireless access point at one time.
In embodiments, the device may be configured to listen for audio signals at the microphone, to process received audio signals, or to use codes extracted from audio signals only when the device is not authenticated with the wireless access point. That is, if the device is already authenticated, it may not continuously attempt to reauthenticate. In embodiments where the device subsequently loses authentication (for example, if the credentials are no longer valid), it may re-enter a “listening mode” in which received audio signals are processed and the extracted code is used to authenticate.
In one embodiment, the device may go into “listening mode” for a period of time after a user actuates a user input at the device (e.g. by pressing a physical button or virtual button), or when the device is powered up.
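A minimal sketch of such a time-limited “listening mode” is given below; the 60-second window and the callable helpers are illustrative assumptions.

```python
# Sketch: listen for a fixed window after power-up or a button press,
# stopping early once a code is heard or the device is authenticated.
# The 60-second window is an illustrative assumption.
import time

LISTEN_WINDOW_S = 60.0

def listening_mode(is_authenticated, decode_next_chunk):
    deadline = time.monotonic() + LISTEN_WINDOW_S
    while time.monotonic() < deadline and not is_authenticated():
        code = decode_next_chunk()  # returns None until a code is decoded
        if code is not None:
            return code
    return None
```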
In embodiments, the device may always be in “listening mode”. Embodiments of the present invention will now be described with reference to the accompanying drawings.
In one embodiment, the user provides power to the offline device. After checking its connection status, the device may automatically start listening for audio codes; this allows the configuration mode to be entered without user input. In one embodiment, the user presses an input button to enter this mode. In one embodiment, the device is always listening for audio codes, allowing it to respond to new codes at any point.
A second device, having the network credentials provided to it by input from the user, from a network connection, or by the operating system of the device, is used to encode the network credentials and extra arbitrary application information into audio. These credentials may comprise an SSID and password as defined by 802.11i or 802.11i-2004. This device may be physically at the same location as the offline device or may have its audio transmitted over a third channel, such as a telephone line or internet-streamed audio, to a speaker for local audio generation. In one embodiment, the audio code is recorded to and subsequently played from an audio storage medium. It is understood that the encoding of the data into an audio signal, and the broadcasting of this audio signal from a loudspeaker, may occur on separate devices.
The offline device, receiving audio from the credentialed device, decodes the audio code and uses these credentials to connect to a wired or wireless network.
In an alternative embodiment, the user provides power to the offline device. After checking its connection status, the device may automatically start broadcasting an audio signal to request credentials from a credentialed device. This broadcast may include the device's public key. In one embodiment, the user presses an input button to enter this mode. In one embodiment, the public key is provided to the credentialed device by means of a QR code, an NFC Forum-compatible tag, or a Bluetooth connection.
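The QR-code variant of this key exchange might look like the sketch below, which exports the device's public key in PEM form and renders it as a QR image using the third-party qrcode and cryptography packages; the key size and output file name are illustrative assumptions.

```python
# Sketch: exporting the offline device's public key and rendering it
# as a QR code for the credentialed device to scan. Key size and file
# name are illustrative assumptions.
import qrcode
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pem = key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
qrcode.make(pem.decode()).save("device_public_key.png")
```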
A second device, having the network credentials provided to it by input from the user, from a network connection, or by the operating system of the device, is used to encode the network credentials and extra arbitrary application information into audio. It may encrypt this data before sending using the offline device's public key. These credentials may comprise an SSID and passphrase as defined by 802.11i or 802.11i-2004. This device may be physically at the same location as the offline device or may have its audio transmitted over a third channel such as a telephone line or internet-streamed audio. In one embodiment, the audio code is recorded to and subsequently played from an audio storage medium. It is understood that the encoding of the data into an audio signal, and the broadcasting of this audio signal from a loudspeaker, may occur on separate devices.
The offline device, receiving audio from the credentialed device may decode the audio code and decrypt the received data to extract network credentials. The device may use these credentials to connect to a wired or wireless network. In one embodiment, the received data are used by the offline device to share the credentials with a third device.
In one embodiment shown in the drawings, a first device (e.g. 300 to 302) receives an audio signal, encoding a code, from a second device (e.g. 303).
It can be seen that, in some embodiments, in order to provide a code to the offline device, the sending device does not itself need to be connected to a network.
In one embodiment, the first device (e.g. 300 to 302) activates the microphone only if it is not connected to a wired or wireless network.
The second device (e.g. 303) may be actuated by the user of the first device (e.g. 300 to 302) to transmit the audio signal, for example by pressing a virtual button or issuing a voice command. In one embodiment, the second device may transmit the audio code continuously.
The audio signal may be decoded at the first device to extract a code. The code may be encoded within the audio signal via an audio protocol (such as described in US Patent Publication No. 2012/084131A1).
This encoding may use a series of pitched tones to designate each symbol in the data to be broadcast. These tones may be audible or may contain only high frequencies such that they are inaudible to humans. The series of pitched tones may contain broadcast designator tones at the beginning of the series which the receiver may use to initiate the decoding sequence. The broadcast may vary in length such that more complex credentials take more time to broadcast, and less complex credentials take less time to broadcast.
Those skilled in the art will understand that pitches may be modulated by a number of encoding strategies. A preferred embodiment uses Multi-Frequency Shift Keying (MFSK). It is understood that other modulation strategies can be used; these may include Frequency Shift Keying (FSK) or Frequency Division Multiplexing (FDM) techniques.
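By way of illustration, the sketch below implements a simple MFSK encoder in the spirit described above: each 4-bit symbol selects one of sixteen tones, preceded by two fixed broadcast designator tones. All frequencies, durations, and the 16-tone alphabet are illustrative assumptions rather than parameters of the invention.

```python
# Sketch of a simple MFSK encoder: each 4-bit symbol selects one of 16
# tones; two fixed designator tones precede the data so the receiver
# can synchronise. All numeric values are illustrative assumptions.
import numpy as np

RATE = 44100                      # samples per second
SYMBOL_S = 0.08                   # duration of each tone
BASE_HZ, STEP_HZ = 1760.0, 110.0  # tone for symbol k: BASE + k * STEP
START_TONES = [BASE_HZ - 220.0, BASE_HZ - 110.0]

def tone(freq: float) -> np.ndarray:
    t = np.arange(int(RATE * SYMBOL_S)) / RATE
    return np.sin(2 * np.pi * freq * t)

def encode(data: bytes) -> np.ndarray:
    symbols = [n for b in data for n in (b >> 4, b & 0x0F)]
    freqs = START_TONES + [BASE_HZ + s * STEP_HZ for s in symbols]
    return np.concatenate([tone(f) for f in freqs]).astype(np.float32)

# Longer credentials yield more symbols and hence a longer broadcast.
signal = encode(b"HomeNet\x00hunter2")
```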
The data symbols in each broadcast may be grouped such that they designate information about the broadcast or the device, or contain other information useful to the receiver to aid decoding or device functionality after the modulated audio has been decoded. The data symbols may represent the network credentials directly or may represent the network credentials in an encrypted or tokenized form. The data symbols may be grouped such that there is a checksum to validate the broadcast data integrity.
The broadcast may contain additional application information in addition to the network credentials. For example, this information may reference the device owner's account or be used by the device (e.g. 300 to 302) to configure its application code or its own configuration.
It is understood that the data broadcast may contain additional data to be used by the receiving device or to be passed via the network once a connection is established. For example, the sending device may send the network credentials as well as a customer account identifier, allowing the receiving device to connect to the network using the credentials, and subsequently retrieve relevant customer account information in order to be correctly configured for use. In one embodiment, network credentials and additional configuration data are sent within separate acoustic broadcasts.
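One simple way to carry credentials alongside such additional data in a single broadcast is a type-length-value (TLV) layout, sketched below; the tag numbers and field values are illustrative assumptions.

```python
# Sketch: a type-length-value (TLV) payload carrying credentials plus
# a customer account identifier. Tag numbers are illustrative.
import struct

TAG_SSID, TAG_PASS, TAG_ACCOUNT = 1, 2, 3

def tlv(tag: int, value: bytes) -> bytes:
    return struct.pack(">BB", tag, len(value)) + value

payload = (tlv(TAG_SSID, b"HomeNet")
           + tlv(TAG_PASS, b"hunter2")
           + tlv(TAG_ACCOUNT, b"acct-12345"))
```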
In a further embodiment, a device 501 receives the code via an audio data broadcast.
The code may include login credentials (for example, for an open network), and/or a wireless password (such as WPA2 or WEP). The code may include WiFi details such as the SSID (Service Set IDentifier).
The code may provide temporary or limited access to the network; further authentication steps may then be taken between the device and the network access point.
In one embodiment, the device 501 is able to receive audio data broadcasts continuously. Alternatively, the device 501 may enable audio data functionality only when no wired or wireless network is present.
Further embodiments are shown in the accompanying drawings.
Potential advantages of some embodiments of the present invention are that the setup process is made faster and simpler for the owner/user of the device, that devices without screens or extensive input controls can be authenticated with a wireless access point, and that multiple devices may be onboarded at one time.
While the present invention has been illustrated by the description of the embodiments thereof, and while the embodiments have been described in considerable detail, it is not the intention of the applicant to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details, representative apparatus and method, and illustrative examples shown and described. Accordingly, departures may be made from such details without departure from the spirit or scope of applicant's general inventive concept.
Number | Date | Country | Kind |
---|---|---|---|
1704636 | Mar 2017 | GB | national |
This application is a continuation of U.S. application Ser. No. 16/496,685, filed 23 Sep. 2019, which is the U.S. national phase of International Application No. PCT/GB2018/050779 filed 23 Mar. 2018, which designated the U.S. and claims priority to GB Patent Application No. 1704636.8 filed 23 Mar. 2017, the entire contents of each of which are hereby incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
4045616 | Sloane | Aug 1977 | A |
4048074 | Bruenemann et al. | Sep 1977 | A |
4088030 | Iversen et al. | May 1978 | A |
4101885 | Blum | Jul 1978 | A |
4323881 | Mori | Apr 1982 | A |
4794601 | Kikuchi | Dec 1988 | A |
6133849 | McConnell et al. | Oct 2000 | A |
6163803 | Watanabe | Dec 2000 | A |
6272535 | Iwamura | Aug 2001 | B1 |
6532477 | Tang et al. | Mar 2003 | B1 |
6711538 | Omori et al. | Mar 2004 | B1 |
6766300 | Laroche | Jul 2004 | B1 |
6798889 | Dicker et al. | Sep 2004 | B1 |
6909999 | Thomas et al. | Jun 2005 | B2 |
6996532 | Thomas | Feb 2006 | B2 |
7058726 | Osaku et al. | Jun 2006 | B1 |
7349668 | Ilan et al. | Mar 2008 | B2 |
7379901 | Philyaw | May 2008 | B1 |
7403743 | Welch | Jul 2008 | B2 |
7571014 | Lambourne et al. | Aug 2009 | B1 |
7944847 | Trine et al. | May 2011 | B2 |
8483853 | Lambourne | Jul 2013 | B1 |
8494176 | Suzuki et al. | Jul 2013 | B2 |
8594340 | Takara et al. | Nov 2013 | B2 |
8782530 | Beringer et al. | Jul 2014 | B2 |
9118401 | Nieto et al. | Aug 2015 | B1 |
9137243 | Suzuki et al. | Sep 2015 | B2 |
9237226 | Frauenthal et al. | Jan 2016 | B2 |
9270811 | Atlas | Feb 2016 | B1 |
9288597 | Carlsson et al. | Mar 2016 | B2 |
9344802 | Suzuki et al. | May 2016 | B2 |
10090003 | Wang | Oct 2018 | B2 |
10186251 | Mohammadi | Jan 2019 | B1 |
10236006 | Gurijala et al. | Mar 2019 | B1 |
10236031 | Gurijala | Mar 2019 | B1 |
10498654 | Shalev et al. | Dec 2019 | B2 |
20020054608 | Wan et al. | May 2002 | A1 |
20020107596 | Thomas et al. | Aug 2002 | A1 |
20020152388 | Linnartz et al. | Oct 2002 | A1 |
20020184010 | Eriksson et al. | Dec 2002 | A1 |
20030065918 | Willey | Apr 2003 | A1 |
20030195745 | Zinser, Jr. et al. | Oct 2003 | A1 |
20030212549 | Steentra et al. | Nov 2003 | A1 |
20040002858 | Attias et al. | Jan 2004 | A1 |
20040081078 | McKnight et al. | Apr 2004 | A1 |
20040133789 | Gantman et al. | Jul 2004 | A1 |
20040148166 | Zheng | Jul 2004 | A1 |
20040264713 | Grzesek | Dec 2004 | A1 |
20050049732 | Kanevsky et al. | Mar 2005 | A1 |
20050086602 | Philyaw et al. | Apr 2005 | A1 |
20050219068 | Jones et al. | Oct 2005 | A1 |
20060167841 | Allan et al. | Jul 2006 | A1 |
20060253209 | Hersbach et al. | Nov 2006 | A1 |
20060287004 | Fuqua | Dec 2006 | A1 |
20070063027 | Belfer et al. | Mar 2007 | A1 |
20070121918 | Tischer | May 2007 | A1 |
20070144235 | Werner et al. | Jun 2007 | A1 |
20070174052 | Manjunath et al. | Jul 2007 | A1 |
20070192672 | Bodin et al. | Aug 2007 | A1 |
20070192675 | Bodin et al. | Aug 2007 | A1 |
20070232257 | Otani et al. | Oct 2007 | A1 |
20070268162 | Viss et al. | Nov 2007 | A1 |
20080002882 | Voloshynovskyy et al. | Jan 2008 | A1 |
20080011825 | Giordano et al. | Jan 2008 | A1 |
20080027722 | Haulick et al. | Jan 2008 | A1 |
20080031315 | Ramirez et al. | Feb 2008 | A1 |
20080059157 | Fukuda et al. | Mar 2008 | A1 |
20080112885 | Okunev et al. | May 2008 | A1 |
20080144624 | Marcondes et al. | Jun 2008 | A1 |
20080232603 | Soulodre | Sep 2008 | A1 |
20080242357 | White | Oct 2008 | A1 |
20080262928 | Michaelis | Oct 2008 | A1 |
20090034712 | Grasley et al. | Feb 2009 | A1 |
20090119110 | Oh et al. | May 2009 | A1 |
20090123002 | Karthik et al. | May 2009 | A1 |
20090141890 | Steenstra et al. | Jun 2009 | A1 |
20090175257 | Belmonte et al. | Jul 2009 | A1 |
20090254485 | Baentsch et al. | Oct 2009 | A1 |
20100030838 | Atsmon et al. | Feb 2010 | A1 |
20100064132 | Ravikiran Sureshbabu | Mar 2010 | A1 |
20100088390 | Bai et al. | Apr 2010 | A1 |
20100134278 | Srinivasan et al. | Jun 2010 | A1 |
20100146115 | Bezos | Jun 2010 | A1 |
20100223138 | Dragt | Sep 2010 | A1 |
20100267340 | Lee | Oct 2010 | A1 |
20100290504 | Torimoto et al. | Nov 2010 | A1 |
20100290641 | Steele | Nov 2010 | A1 |
20110173208 | Vogel | Jul 2011 | A1 |
20110216783 | Takeuchi et al. | Sep 2011 | A1 |
20110276333 | Wang et al. | Nov 2011 | A1 |
20110277023 | Meylemans et al. | Nov 2011 | A1 |
20110307787 | Smith | Dec 2011 | A1 |
20120045994 | Koh et al. | Feb 2012 | A1 |
20120075083 | Isaacs | Mar 2012 | A1 |
20120084131 | Bergel et al. | Apr 2012 | A1 |
20120214416 | Kent et al. | Aug 2012 | A1 |
20120214544 | Shivappa et al. | Aug 2012 | A1 |
20130010979 | Takara et al. | Jan 2013 | A1 |
20130030800 | Tracey et al. | Jan 2013 | A1 |
20130034243 | Yermeche et al. | Feb 2013 | A1 |
20130077798 | Otani et al. | Mar 2013 | A1 |
20130113558 | Pfaffinger et al. | May 2013 | A1 |
20130216058 | Furuta et al. | Aug 2013 | A1 |
20130216071 | Maher et al. | Aug 2013 | A1 |
20130223279 | Tinnakornsrisuphap et al. | Aug 2013 | A1 |
20130275126 | Lee | Oct 2013 | A1 |
20130331970 | Beckhardt et al. | Dec 2013 | A1 |
20140003625 | Sheen et al. | Jan 2014 | A1 |
20140028818 | Brockway, III | Jan 2014 | A1 |
20140037107 | Marino, Jr. et al. | Feb 2014 | A1 |
20140046464 | Reimann | Feb 2014 | A1 |
20140053281 | Benoit et al. | Feb 2014 | A1 |
20140074469 | Zhidkov | Mar 2014 | A1 |
20140108020 | Sharma et al. | Apr 2014 | A1 |
20140142958 | Sharma et al. | May 2014 | A1 |
20140164629 | Barth et al. | Jun 2014 | A1 |
20140172141 | Mangold | Jun 2014 | A1 |
20140172429 | Butcher et al. | Jun 2014 | A1 |
20140258110 | Davis et al. | Sep 2014 | A1 |
20150004935 | Fu | Jan 2015 | A1 |
20150088495 | Jeong et al. | Mar 2015 | A1 |
20150141005 | Suryavanshi et al. | May 2015 | A1 |
20150215299 | Burch et al. | Jul 2015 | A1 |
20150248879 | Miskimen et al. | Sep 2015 | A1 |
20150271676 | Shin et al. | Sep 2015 | A1 |
20150349841 | Mani et al. | Dec 2015 | A1 |
20150371529 | Dolecki | Dec 2015 | A1 |
20150382198 | Kashef | Dec 2015 | A1 |
20160007116 | Holman | Jan 2016 | A1 |
20160021473 | Riggi et al. | Jan 2016 | A1 |
20160098989 | Layton et al. | Apr 2016 | A1 |
20160309276 | Ridihalgh et al. | Oct 2016 | A1 |
20170208170 | Mani et al. | Jul 2017 | A1 |
20170279542 | Knauer et al. | Sep 2017 | A1 |
20180106897 | Shouldice et al. | Apr 2018 | A1 |
20180115844 | Lu et al. | Apr 2018 | A1 |
20180167147 | Almada et al. | Jun 2018 | A1 |
20180213322 | Napoli et al. | Jul 2018 | A1 |
20180359560 | Defraene et al. | Dec 2018 | A1 |
20190035719 | Daitoku et al. | Jan 2019 | A1 |
20190045301 | Family et al. | Feb 2019 | A1 |
20190096398 | Sereshki | Mar 2019 | A1 |
20190348041 | Cella et al. | Nov 2019 | A1 |
20200091963 | Christoph et al. | Mar 2020 | A1 |
20200105128 | Frank | Apr 2020 | A1 |
20200169327 | Lin et al. | May 2020 | A1 |
20210098008 | Nesfield et al. | Apr 2021 | A1 |
Number | Date | Country |
---|---|---|
103259563 | Aug 2013 | CN |
105790852 | Jul 2016 | CN |
106921650 | Jul 2017 | CN |
1760693 | Mar 2007 | EP |
2334111 | Jun 2011 | EP |
2916554 | Sep 2015 | EP |
3275117 | Jan 2018 | EP |
3408936 | Dec 2018 | EP |
3526912 | Aug 2019 | EP |
2369995 | Jun 2002 | GB |
2484140 | Apr 2012 | GB |
H1078928 | Mar 1998 | JP |
2001320337 | Nov 2001 | JP |
2004512765 | Apr 2004 | JP |
2004139525 | May 2004 | JP |
2007121626 | May 2007 | JP |
2007195105 | Aug 2007 | JP |
2008219909 | Sep 2008 | JP |
0016497 | Mar 2000 | WO |
0115021 | Mar 2001 | WO |
0150665 | Jul 2001 | WO |
0161987 | Aug 2001 | WO |
0163397 | Aug 2001 | WO |
0211123 | Feb 2002 | WO |
0235747 | May 2002 | WO |
2004002103 | Dec 2003 | WO |
2005006566 | Jan 2005 | WO |
2008131181 | Oct 2008 | WO |
2016094687 | Jun 2016 | WO |
Entry |
---|
Advisory Action mailed on Mar. 1, 2022, issued in connection with U.S. Appl. No. 16/342,078, filed Apr. 15, 2019, 3 pages. |
Advisory Action mailed on Aug. 19, 2022, issued in connection with U.S. Appl. No. 16/496,685, filed Sep. 23, 2019, 3 pages. |
Bourguet et al. “A Robust Audio Feature Extraction Algorithm for Music Identification,” AES Convention 129; Nov. 4, 2010, 7 pages. |
C. Beaugeant and H. Taddei, “Quality and computation load reduction achieved by applying smart transcoding between CELP speech codecs,” 2007, 2007 15th European Signal Processing Conference, pp. 1372-1376. |
European Patent Office, Decision to Refuse mailed on Nov. 13, 2019, issued in connection with European Patent Application No. 11773522.5, 52 pages. |
European Patent Office, European EPC Article 94.3 mailed on Oct. 8, 2021, issued in connection with European Application No. 17790809.2, 9 pages. |
European Patent Office, European EPC Article 94.3 mailed on Dec. 10, 2021, issued in connection with European Application No. 18845403.7, 41 pages. |
European Patent Office, European EPC Article 94.3 mailed on Oct. 12, 2021, issued in connection with European Application No. 17795004.5, 8 pages. |
European Patent Office, European EPC Article 94.3 mailed on Oct. 25, 2022, issued in connection with European Application No. 20153173.8, 5 pages. |
European Patent Office, European EPC Article 94.3 mailed on Oct. 28, 2021, issued in connection with European Application No. 18752180.2, 7 pages. |
European Patent Office, European EPC Article 94.3 mailed on Jul. 6, 2022, issued in connection with European Application No. 20153173.8, 4 pages. |
European Patent Office, European Extended Search Report mailed on Aug. 31, 2020, issued in connection with European Application No. 20153173.8, 8 pages. |
European Patent Office, Summons to Attend Oral Proceedings mailed on Jul. 13, 2023, issued in connection with European Application No. 18752180.2, 6 pages. |
European Patent Office, Summons to Attend Oral Proceedings mailed on Mar. 15, 2019, issued in connection with European Application No. 11773522.5-1217, 10 pages. |
Final Office Action mailed Oct. 16, 2014, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 22 pages. |
Final Office Action mailed Aug. 17, 2017, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 22 pages. |
Final Office Action mailed Nov. 30, 2015, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 25 pages. |
Final Office Action mailed on Nov. 1, 2022, issued in connection with U.S. Appl. No. 16/623,160, filed Dec. 16, 2019, 10 pages. |
Final Office Action mailed on May 10, 2022, issued in connection with U.S. Appl. No. 16/496,685, filed Sep. 23, 2019, 15 pages. |
Final Office Action mailed on Nov. 15, 2022, issued in connection with U.S. Appl. No. 16/956,905, filed Jun. 22, 2020, 16 pages. |
Final Office Action mailed on Mar. 18, 2022, issued in connection with U.S. Appl. No. 16/623,160, filed Dec. 16, 2019, 14 pages. |
Final Office Action mailed on Apr. 20, 2020, issued in connection with U.S. Appl. No. 16/012,167, filed Jun. 19, 2018, 21 pages. |
Gerasimov et al. “Things That Talk: Using sound for device-to-device and device-to-human communication”, Feb. 2000 IBM Systems Journal 39(3.4):530-546, 18 pages. [Retrieved Online] URL: https://www.researchgate.net/publication/224101904_Things_that_talk_Using_sound_for_device-to-device_and_device-to-human_communication. |
Glover et al. “Real-time detection of musical onsets with linear prediction and sinusoidal modeling.”, 2011 EURASIP Journal on Advances in Signal Processing 2011, 68, Retrieved from the Internet URL: https://doi.org/10.1186/1687-6180-2011-68, Sep. 20, 2011, 13 pages. |
Gomez et al: “Distant talking robust speech recognition using late reflection components of room impulse response”, Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on, IEEE, Piscataway, NJ, USA, Mar. 31, 2008, XP031251618, ISBN: 978-1-4244-1483-3, pp. 4581-4584. |
Gomez et al., “Robust Speech Recognition in Reverberant Environment by Optimizing Multi-band Spectral Subtraction”, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing ICASSP, Jan. 1, 2008, 6 pages. |
Goodrich et al., Using Audio in Secure Device Pairing, International Journal of Security and Networks, vol. 4, No. 1.2, Jan. 1, 2009, p. 57, Inderscience Enterprises Ltd., 12 pages. |
International Bureau, International Preliminary Report on Patentability and Written Opinion, mailed on Apr. 16, 2019, issued in connection with International Application No. PCT/GB2017/053112, filed on Oct. 13, 2017, 12 pages. |
International Bureau, International Preliminary Report on Patentability and Written Opinion, mailed on Apr. 16, 2019, issued in connection with International Application No. PCT/GB2017/053113, filed on Oct. 13, 2017, 8 pages. |
International Bureau, International Preliminary Report on Patentability and Written Opinion, mailed on Dec. 17, 2019, issued in connection with International Application No. PCT/GB2018/051645, filed on Jun. 14, 2018, pages. |
International Bureau, International Preliminary Report on Patentability and Written Opinion, mailed on Mar. 19, 2019, issued in connection with International Application No. PCT/GB2017/052787, filed on Sep. 19, 2017, 7 pages. |
International Bureau, International Preliminary Report on Patentability and Written Opinion, mailed on Jun. 23, 2020, issued in connection with International Application No. PCT/GB2018/053733, filed on Dec. 20, 2018, 7 pages. |
International Bureau, International Preliminary Report on Patentability and Written Opinion, mailed on Sep. 24, 2019, issued in connection with International Application No. PCT/GB2018/050779, filed on Mar. 23, 2018, 6 pages. |
International Bureau, International Search Report and Written Opinion mailed on Apr. 11, 2019, issued in connection with International Application No. PCT/GB2018/053733, filed on Dec. 20, 2018, 10 pages. |
International Bureau, International Search Report and Written Opinion mailed on Sep. 21, 2022, issued in connection with International Application No. PCT/US2022/072465, filed on May 20, 2022, 32 pages. |
International Bureau, International Search Report and Written Opinion mailed on Oct. 4, 2018, issued in connection with International Application No. PCT/GB2018/051645, filed on Jun. 14, 2018, 14 pages. |
International Searching Authority, International Search Report and Written Opinion mailed on Jan. 5, 2022, issued in connection with International Application No. PCT/US2021/048380, filed on Aug. 31, 2021, 15 pages. |
International Searching Authority, International Search Report and Written Opinion mailed on Mar. 13, 2018, issued in connection with International Application No. PCT/GB2017/053112, filed on Oct. 13, 2017, 18 pages. |
International Searching Authority, International Search Report and Written Opinion mailed on Nov. 29, 2017, in connection with International Application No. PCT/GB2017/052787, 10 pages. |
International Searching Authority, International Search Report and Written Opinion mailed on Nov. 30, 2011, in connection with International Application No. PCT/GB2011/051862, 6 pages. |
International Searching Authority, International Search Report mailed on Jan. 18, 2018, issued in connection with International Application No. PCT/GB2017/053113, filed on Oct. 17, 2017, 11 pages. |
International Searching Authority, International Search Report mailed on Jun. 19, 2018, issued in connection with International Application No. PCT/GB2018/050779, filed on Mar. 23, 2018, 8 pages. |
Japanese Patent Office, Office Action dated Jun. 23, 2015, issued in connection with JP Application No. 2013-530801, 8 pages. |
Japanese Patent Office, Office Action dated Apr. 4, 2017, issued in connection with JP Application No. 2013-530801, 8 pages. |
Japanese Patent Office, Office Action dated Jul. 5, 2016, issued in connection with JP Application No. 2013-530801, 8 pages. |
Lopes et al. “Acoustic Modems for Ubiquitous Computing”, IEEE Pervasive Computing, Mobile and Ubiquitous Systems. vol. 2, No. 3 Jul.-Sep. 2003, pp. 62-71. [Retrieved Online] URL https://www.researchgate.net/publication/3436996_Acoustic_modems_for_ubiquitous_computing. |
Madhavapeddy, Anil. Audio Networking for Ubiquitous Computing, Oct. 24, 2003, 11 pages. |
Madhavapeddy et al., Audio Networking: The Forgotten Wireless Technology, IEEE CS and IEEE ComSoc, Pervasive Computing, Jul.-Sep. 2005, pp. 55-60. |
Madhavapeddy et al., Context-Aware Computing with Sound, University of Cambridge 2003, pp. 315-332. |
Monaghan et al. “A method to enhance the use of interaural time differences for cochlear implants in reverberant environments.”, published Aug. 17, 2016, Journal of the Acoustical Society of America, 140, pp. 1116-1129. Retrieved from the Internet URL: https://asa.scitation.org/doi/10.1121/1.4960572 Year: 2016, 15 pages. |
Non-Final Office Action mailed Mar. 25, 2015, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 24 pages. |
Non-Final Office Action mailed Mar. 28, 2016, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 26 pages. |
Non-Final Office Action mailed Jan. 6, 2017, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 22 pages. |
Non-Final Office Action mailed Aug. 9, 2019, issued in connection with U.S. Appl. No. 16/012,167, filed Jun. 19, 2018, 15 pages. |
Non-Final Office Action mailed on Oct. 4, 2022, issued in connection with U.S. Appl. No. 16/496,685, filed Sep. 23, 2019, 15 pages. |
Non-Final Office Action mailed on Feb. 5, 2014, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 22 pages. |
Non-Final Office Action mailed on Jul. 1, 2022, issued in connection with U.S. Appl. No. 16/623,160, filed Dec. 16, 2019, 10 pages. |
Non-Final Office Action mailed on Jul. 11, 2022, issued in connection with U.S. Appl. No. 17/660,185, filed Apr. 21, 2022, 20 pages. |
Non-Final Office Action mailed on Aug. 12, 2021, issued in connection with U.S. Appl. No. 16/342,060, filed Apr. 15, 2019, 88 pages. |
Non-Final Office Action mailed on Oct. 15, 2021, issued in connection with U.S. Appl. No. 16/496,685, filed Sep. 23, 2019, 12 pages. |
Non-Final Office Action mailed on May 19, 2023, issued in connection with U.S. Appl. No. 16/956,905, filed Jun. 22, 2020, 20 pages. |
Non-Final Office Action mailed on Jul. 21, 2022, issued in connection with U.S. Appl. No. 16/956,905, filed Jun. 22, 2020, 15 pages. |
Non-Final Office Action mailed on Sep. 24, 2020, issued in connection with U.S. Appl. No. 16/012,167, filed Jun. 19, 2018, 20 pages. |
Non-Final Office Action mailed on Dec. 27, 2021, issued in connection with U.S. Appl. No. 16/956,905, filed Jun. 22, 2020, 12 pages. |
Non-Final Office Action mailed on Jan. 29, 2021, issued in connection with U.S. Appl. No. 16/342,060, filed Apr. 15, 2019, 59 pages. |
Non-Final Office Action mailed on Feb. 5, 2021, issued in connection with U.S. Appl. No. 16/342,078, filed Apr. 15, 2019, 13 pages. |
Non-Final Office Action mailed on Sep. 7, 2021, issued in connection with U.S. Appl. No. 16/623,160, filed Dec. 16, 2019, 11 pages. |
Notice of Allowance mailed Mar. 15, 2018, issued in connection with U.S. Appl. No. 12/926,470, filed Nov. 19, 2010, 10 pages. |
Notice of Allowance mailed Mar. 19, 2021, issued in connection with U.S. Appl. No. 16/012,167, filed Jun. 19, 2018, 9 pages. |
Notice of Allowance mailed on Feb. 8, 2023, issued in connection with U.S. Appl. No. 16/623,160, filed Dec. 16, 2019, 10 pages. |
Notice of Allowance mailed on Aug. 11, 2022, issued in connection with U.S. Appl. No. 16/342,078, filed Apr. 15, 2019, 15 pages. |
Notice of Allowance mailed on Aug. 11, 2023, issued in connection with U.S. Appl. No. 17/883,020, filed Aug. 8, 2022, 21 pages. |
Notice of Allowance mailed on Feb. 18, 2022, issued in connection with U.S. Appl. No. 16/564,766, filed Sep. 9, 2019, 8 pages. |
Notice of Allowance mailed on Jan. 27, 2023, issued in connection with U.S. Appl. No. 16/496,685, filed Sep. 23, 2019, 7 pages. |
Notice of Allowance mailed on Mar. 29, 2022, issued in connection with U.S. Appl. No. 16/342,060, filed Apr. 15, 2019, 24 pages. |
Notice of Allowance mailed on Apr. 5, 2022, issued in connection with U.S. Appl. No. 16/956,905, filed Jun. 22, 2020, 9 pages. |
Notice of Allowance mailed on Feb. 7, 2023, issued in connection with U.S. Appl. No. 16/342,078, filed Apr. 15, 2019, 12 pages. |
Soriente et al., “HAPADEP: Human-Assisted Pure Audio Device Pairing” Computer Science Department, University of California Irvine, 12 pages. [Retrieved Online] URL: https://www.researchgate.net/publication/220905534_HAPADEP_Human-assisted_pure_audio_device_pairing. |
Tarr, E.W. “Processing perceptually important temporal and spectral characteristics of speech”, 2013, Available from ProQuest Dissertations and Theses Professional. Retrieved from https://dialog.proquest.com/professional/docview/1647737151?accountid=131444, 200 pages. |
United Kingdom Patent Office, United Kingdom Examination Report mailed on Oct. 8, 2021, issued in connection with United Kingdom Application No. GB2113511.6, 7 pages. |
United Kingdom Patent Office, United Kingdom Examination Report mailed on Jun. 11, 2021, issued in connection with United Kingdom Application No. GB1716909.5, 5 pages. |
United Kingdom Patent Office, United Kingdom Examination Report mailed on Feb. 2, 2021, issued in connection with United Kingdom Application No. GB1715134.1, 5 pages. |
United Kingdom Patent Office, United Kingdom Examination Report mailed on Oct. 29, 2021, issued in connection with United Kingdom Application No. GB1709583.7, 3 pages. |
United Kingdom Patent Office, United Kingdom Office Action mailed on May 10, 2022, issued in connection with United Kingdom Application No. GB2202914.4, 5 pages. |
United Kingdom Patent Office, United Kingdom Office Action mailed on Jan. 22, 2021, issued in connection with United Kingdom Application No. GB1906696.8, 2 pages. |
United Kingdom Patent Office, United Kingdom Office Action mailed on Mar. 24, 2022, issued in connection with United Kingdom Application No. GB2202914.4, 3 pages. |
United Kingdom Patent Office, United Kingdom Office Action mailed on Jan. 28, 2022, issued in connection with United Kingdom Application No. GB2113511.6, 3 pages. |
United Kingdom Patent Office, United Kingdom Office Action mailed on Feb. 9, 2022, issued in connection with United Kingdom Application No. GB2117607.8, 3 pages. |
United Kingdom Patent Office, United Kingdom Search Report mailed on Sep. 22, 2021, issued in connection with United Kingdom Application No. GB2109212.7, 5 pages. |
Wang, Avery Li-Chun. An Industrial-Strength Audio Search Algorithm. Oct. 27, 2003, 7 pages. [online]. [retrieved on May 12, 2020] Retrieved from the Internet URL: https://www.researchgate.net/publication/220723446_An_Industrial_Strength_Audio_Search_Algorithm. |
Number | Date | Country | |
---|---|---|---|
20230388798 A1 | Nov 2023 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16496685 | US | |
Child | 18140393 | US |