Field of the Invention
Embodiments of the present invention relate generally to associating two or more devices.
Description of the Related Art
Typically, electronic devices are paired using Bluetooth™ technology. The term “pairing” means that two devices exchange data in order to agree to work together to provide a predefined function. For example, a Bluetooth™ enabled mobile phone may be paired with a Bluetooth™ headset, and upon successful pairing, the headset provides a speaker and microphone to the mobile phone.
There are many issues with the above stated method of pairing or associating two or more devices. First, special hardware is needed at both ends to effect such pairing. Second, such pairing can only be used for prewired, specific functions based on configured profiles. Also, Bluetooth™ signals have a relatively wide range; hence, without proper security, unintended pairing may occur. Another issue with Bluetooth pairing is that the paired devices must remain in physical proximity to each other after the pairing. Moreover, having extra hardware in devices can put more stress on device batteries.
Digital codes have been used in radio transmissions in the past. For example, the National Weather Service has used digital codes at the beginning and end of every message concerning life- or property-threatening weather conditions targeting a specific area. The inclusion of these digital codes produces a distinct sound that is easily recognized by most people as an emergency alert sound.
Motorola two-way radios also transmit data between two devices using audio frequency-shift keying (AFSK). These radios have an option that allows the radio to filter out data bursts from the received audio. Instead of hearing the AFSK data, the user hears a short chirp from the radio speaker each time a data burst occurs.
Hosei Matsuoka, Yusuke Nakashima, Takeshi Yoshimura, and Tashiro Kawahara. 2008. Acoustic OFDM: embedding high bit-rate data in audio. In Proceedings of the 14th international conference on Advances in multimedia modeling (MMM'08), Shin'ichi Satoh, Frank Nack, and Minoru Etoh (Eds.). Springer-Verlag, Berlin, Heidelberg, 498-507 (“Acoustic OFDM”) discloses a method of aerial acoustic communication in which data is modulated using OFDM (Orthogonal Frequency Division Multiplexing) and embedded in regular audio material without significantly degrading the quality of the original sound. It can provide data transmission of several hundred bps, which is much higher than is possible with other audio data hiding techniques. The proposed method replaces the high frequency band of the audio signal with OFDM carriers, each of which is power-controlled according to the spectrum envelope of the original audio signal. The implemented system enables the transmission of short text messages from loudspeakers to mobile handheld devices at a distance of around 3 m.
The above uses of voice signals for communication include sending encoded signals over the audible part of a radio transmission and detecting these encoded signals at a receiver.
In one embodiment, a method of associating a first device with a second device is disclosed. The first device broadcasts, through its speaker, a request for association using an audio signal. The broadcasted audio signal is received by the second device through its microphone. The first and second devices then cooperatively verify a security code, and upon successful verification of the security code, the first and second devices are enabled to communicate with each other.
In another embodiment, a computer readable storage medium contains programming instructions for associating a first device and a second device. The programming instructions include programming instructions for receiving, via a microphone, a request for association sent using an audio signal from another device, the request being carried in an audio signal that is outputted from a speaker of the other device. Programming instructions for verifying a security code and programming instructions for enabling the first device and the second device to communicate with each other are also included.
In yet another embodiment, a system comprises a first device and a second device. The first device includes a speaker and the second device includes a microphone. The system further includes a modulator incorporated in the first device. The modulator is coupled to the speaker and is configured to generate an audio signal. A demodulator is incorporated in the second device. The demodulator is coupled to the microphone and is configured to receive the audio signal. The first device is configured to include data in the audio signal, the data being encoded using a frequency shift keying mechanism, and the demodulator is configured to retrieve the data from the audio signal, wherein the data includes a request for association.
Other embodiments include, without limitation, a non-transitory computer-readable storage medium that includes instructions that enable a processing unit to implement one or more aspects of the disclosed methods as well as a system configured to implement one or more aspects of the disclosed methods.
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one of skill in the art that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present invention.
Reference throughout this disclosure to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Frequency-shift keying (FSK) is a method of encoding digital signals and transmitting the encoded signals as analog signals. The two binary states (i.e., logic 0 and logic 1) are represented by analog waveforms of different frequencies. FSK is often used to transmit low-speed binary data over wireline and wireless links.
Minimum shift keying (MSK) is an efficient form of FSK. In MSK, the difference between the higher and lower frequencies is equal to half of the bit rate. Consequently, the waveforms used to represent a 0 and a 1 bit differ by exactly half a carrier period.
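By way of illustration only, the following sketch shows how a bit sequence might be encoded as phase-continuous FSK/MSK audio samples in software. The sample rate, bit rate, and tone frequencies are arbitrary example values chosen for this sketch, not values required by any embodiment described herein; note that the mark/space separation is set to half the bit rate, as in MSK.

```python
import numpy as np

SAMPLE_RATE = 44100                  # samples per second (example value)
BIT_RATE = 100                       # bits per second (example value)
F_SPACE = 1200.0                     # tone representing a 0 bit (example value)
F_MARK = F_SPACE + BIT_RATE / 2.0    # MSK: tone separation equals half the bit rate

def fsk_modulate(bits, sample_rate=SAMPLE_RATE, bit_rate=BIT_RATE):
    """Encode a bit sequence as a phase-continuous FSK/MSK audio waveform."""
    samples_per_bit = int(sample_rate / bit_rate)
    out = np.empty(len(bits) * samples_per_bit)
    phase = 0.0
    for i, bit in enumerate(bits):
        freq = F_MARK if bit else F_SPACE
        t = np.arange(samples_per_bit) / sample_rate
        out[i * samples_per_bit:(i + 1) * samples_per_bit] = np.sin(phase + 2 * np.pi * freq * t)
        phase += 2 * np.pi * freq * samples_per_bit / sample_rate  # keep phase continuous across bits
    return out

if __name__ == "__main__":
    waveform = fsk_modulate([1, 0, 1, 1, 0, 0, 1, 0])
    print(waveform.shape)  # 8 bits * samples_per_bit samples
```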
Audio FSK is a modulation technique by which digital data is represented by changes in the frequency of an audio tone, yielding an encoded signal suitable for transmission via radio. Normally, the transmitted audio alternates between two tones, one representing a binary one and the other representing a binary zero. There are other specialized FSK techniques based on the basic FSK principles. Dual-tone multi-frequency (DTMF) signaling may also be used instead of, or in conjunction with, the FSK techniques. DTMF is typically used for telecommunication signaling over analog telephone lines in the voice-frequency band, between telephone handsets or other communication devices and the switching center.
There are many other modulation techniques available to encode a digital signal in a voice signal. A person skilled in the art would realize that the embodiments described herein can be implemented using any other modulation technique so long as the modulation technique provides for encoding data in audio signals.
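As an illustrative sketch only, the following shows how standard DTMF dual tones could be synthesized in software to signal a short numeric code; the tone duration and sample rate are example values.

```python
import numpy as np

# Standard DTMF row/column frequency pairs (Hz)
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dtmf_tone(symbol, duration=0.1, sample_rate=44100):
    """Return the dual-tone waveform for one DTMF symbol."""
    low, high = DTMF[symbol]
    t = np.arange(int(duration * sample_rate)) / sample_rate
    return 0.5 * (np.sin(2 * np.pi * low * t) + np.sin(2 * np.pi * high * t))

# Example: signal the digits of a short code as a sequence of dual tones.
code_waveform = np.concatenate([dtmf_tone(d) for d in "4271"])
```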
Typically, the process of pairing two devices using the Bluetooth protocol begins with a discovery process, in which one device broadcasts its identity using a low powered signal transmission. The other device then contacts the first device using the broadcasted information; the devices typically synchronize frequencies and clocks, and a communication channel is established between them. Subsequently, typically upon entering a security key for the first device in the second device, the pairing is confirmed. The Bluetooth standard defines a certain number of application profiles in order to define which kinds of services are offered by a Bluetooth device. Even though one device can support multiple applications, these applications must be prewired and preconfigured in the device. For example, a Bluetooth™ wireless headset may only act as a headset because the device is prewired with a particular type of Bluetooth profile.
The device 120 also includes an application. The application in the device 120 may be a copy of the application 110 of the device 102. In another embodiment, the application in the device 120 may be a different application. In one embodiment, the application in the device 120 may be any application so long as the application in the device 120 complies with the data transport protocol of the application 110 in the device 102. The data transport protocol may be any communication protocol based on standard data transmission protocols such as TCP, UDP, etc. This data transport protocol is used by the device 102 to communicate data with the device 120. The device 102 and the device 120 may include an interface layer 108, for example to connect to a network such as the Internet.
In the device 102, the application 110 is coupled to a digital-to-analog (D/A) converter and an FM modulator 114. It may be noted that the D/A converter and/or the FM modulator may be implemented in hardware, or these modules may be a part of the application 110. Alternatively, the D/A converter and/or the FM modulator may be implemented as software drivers and services running in the OS 106. The D/A converter converts the binary data that the application 110 wants to transmit into analog signals. The FM modulator encodes the data to produce an FSK signal. The FSK signal is then outputted to the speaker of the device 102. Note that other techniques, which are well known to a person skilled in the art, may be used to convert the binary data produced by the application 110 into an FSK signal.
In the device 120, the application 110 (which may or may not be the same as the application 110 of the device 102) is coupled to an FM demodulator 116, which in turn is coupled to the microphone of the device 120. It should be noted that the device 120 may also include a D/A converter and/or an FM modulator, and likewise the device 102 may include an FM demodulator.
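The transmit chain described above may be sketched in software as follows. This is a simplified, hypothetical illustration: the payload string, tone frequencies, and bit rate are example values, and the resulting waveform is written to a WAV file that could be routed to the speaker of the device 102.

```python
import wave
import numpy as np

SAMPLE_RATE, BIT_RATE = 44100, 100   # example values
F0, F1 = 1200.0, 2200.0              # example space/mark frequencies

def bytes_to_bits(payload: bytes):
    """Flatten payload bytes into a list of bits, most significant bit first."""
    return [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]

def modulate(bits):
    """Map each bit to a burst of the corresponding tone (simple FSK, one tone per bit)."""
    spb = SAMPLE_RATE // BIT_RATE
    t = np.arange(spb) / SAMPLE_RATE
    return np.concatenate([np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

def write_wav(path, samples):
    """Write float samples in [-1, 1] as 16-bit mono PCM suitable for speaker playback."""
    pcm = (samples * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(pcm.tobytes())

# Hypothetical payload from the application: device identification plus a random code.
write_wav("association_request.wav", modulate(bytes_to_bits(b"192.0.2.10|4271")))
```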
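A corresponding receive-side sketch is shown below. It assumes the same example tone frequencies and bit rate as the transmit sketch above and recovers bits by comparing the energy at the two tone frequencies within each bit-length window of the microphone samples.

```python
import numpy as np

SAMPLE_RATE, BIT_RATE = 44100, 100   # must match the transmitter (example values)
F0, F1 = 1200.0, 2200.0

def tone_energy(window, freq):
    """Energy of `window` at a single frequency (one-bin discrete Fourier transform)."""
    n = np.arange(len(window))
    return np.abs(np.dot(window, np.exp(-2j * np.pi * freq * n / SAMPLE_RATE)))

def demodulate(samples):
    """Recover bits by comparing the two tone energies in each bit-length window."""
    spb = SAMPLE_RATE // BIT_RATE
    bits = []
    for start in range(0, len(samples) - spb + 1, spb):
        window = samples[start:start + spb]
        bits.append(1 if tone_energy(window, F1) > tone_energy(window, F0) else 0)
    return bits

def bits_to_bytes(bits):
    """Pack bits (MSB first) back into bytes."""
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        out.append(sum(b << (7 - j) for j, b in enumerate(bits[i:i + 8])))
    return bytes(out)
```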
The OS 106 is a general purpose operating system such as Android™, iOS™, Windows™ Mobile, WebOS™, Linux™, Windows™, etc. Although it is not shown, the OS 106 may include necessary drivers for network interface, sound, and other hardware such as FM modulator/demodulator, and other hardware components that are necessary to operate the device 102 and the device 120.
As noted above, a typical mobile device or computing device may not need any extra hardware (such as an FM modulator, FM demodulator, D/A converter, etc.) in order to practice the embodiments described herein because such functionality may be implemented in software itself. For example, if a device wishes to associate with another device through the methods described herein, an application (e.g., the application 110) may be installed on the device. Software and hardware implementations of FM modulators, FM demodulators, D/A converters, etc. are well known in the art; hence, such details are omitted.
In one embodiment, the FM modulator and demodulator may be tuned to use frequencies above the audible range. If frequencies above (or below) the audible range are used, the choice of frequencies may depend on the speaker and microphone configurations because the device 102 and the device 120 may have filters installed to filter out out-of-range frequencies. Further, RF frequencies may not be used because an RF signal may not be able to drive an audio speaker. Audible frequencies (including slightly higher frequencies, e.g., ultrasonic frequencies) may always be used because microphones and speakers are typically configured to work at these frequencies. In yet another embodiment, one set of frequencies may be used during the initial association and another set of frequencies may be used for subsequent data communication between the devices.
The application 110 may include a user interface (UI) 112. A user of the device 102 may use the UI 112 to instruct the device 102 to initiate the pairing or association process. In another embodiment, one or more buttons of the device 102 may be programmed to instruct the device 102, either directly or via the application 110, to initiate the association process. In one embodiment, the device 102 initiates the association process by broadcasting the identification of the device 102 using FSK/MSK. Any other type of audio signal may be used so long as the broadcasted signal can contain the encoded identification of the device 102 and the broadcasted audio signal can be detected by the microphone of the device 120. In one embodiment, the IP address of the device 102 is used for the identification. In other embodiments, other attributes of the device 102 may be used so long as the device 102 can be found and communicated with using those attributes. Along with the identification, a random code may also be broadcasted by the device 102.
The broadcasted audio signal is received at the microphones of all other devices in the vicinity. The receiving devices may have an “always active” listener module to capture these broadcasted audio signals. Alternatively, the listener module may be turned on or off as desired. The listener module (which may be a part of the FM demodulator 116 module of the application 110) may include programming to distinguish broadcast audio signals for device association from other audio signals. For example, the broadcasted signal may include a flag or code to inform the receiving devices that the audio signal pertains to device association. In one embodiment, the strength of the audio signals is configured to have a short range. For example, the range may be limited to the dimensions of a typical office conference room.
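As a purely hypothetical illustration of such a broadcast payload, the following sketch packs an association-request frame containing a flag, the device identification (here an example IP address), and a random code. The frame layout, field names, and marker bytes are assumptions made for this sketch only, not a format defined by the embodiments described herein.

```python
import json
import os
import struct

ASSOC_FLAG = b"ASOC"   # hypothetical marker identifying association broadcasts

def build_association_request(device_id: str) -> bytes:
    """Pack a hypothetical association-request frame: flag, body length, then a JSON
    body carrying the device identification and a random code."""
    body = json.dumps({
        "id": device_id,               # identification, e.g. the IP address of the device 102
        "code": os.urandom(4).hex(),   # random code broadcast along with the identification
    }).encode("utf-8")
    return ASSOC_FLAG + struct.pack(">H", len(body)) + body

def parse_association_request(frame: bytes):
    """Return the decoded body if the frame carries the association flag, else None."""
    if not frame.startswith(ASSOC_FLAG):
        return None
    (length,) = struct.unpack(">H", frame[4:6])
    return json.loads(frame[6:6 + length].decode("utf-8"))

# Example: build a frame for an example IP address, then parse it back.
frame = build_association_request("192.0.2.10")
assert parse_association_request(frame)["id"] == "192.0.2.10"
```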
During the association process, the device 102 may display a security code on the display of the device 102. The security code may be a text string or a numeric or alphanumeric code. In another embodiment, the device 102 may display a graphical representation (e.g., a QR code or a bar code) of the security code. In yet another embodiment, the device 102 may display a graphic such as a duck, a house, etc. In another embodiment, the security code may be given to the user of the device 120 privately, either orally or via a text message or an email, away from public view.
In another embodiment, a user of the device 120 provides a special security code to the user of the device 102. This special security code is associated with the device 120. This security code is entered into the device 102 and is then transmitted to the device 120 along with the request for association. Upon successful verification of the security code, the device 120 sends a success signal back to the device 102. The success signal may be sent in the same way as the request for association sent from the device 102.
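For illustration only, a security code of the kind described above might be generated as in the following sketch. The code length and alphabet are arbitrary choices, and the optional QR rendering assumes the third-party `qrcode` package is installed.

```python
import secrets
import string

def make_security_code(length=6):
    """Generate a short alphanumeric security code for display on the device 102."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

code = make_security_code()
print("Show this code on the display:", code)

# Optional graphical representation, assuming the third-party `qrcode` package is available:
# import qrcode
# qrcode.make(code).save("security_code.png")
```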
When the device 120 receives the transmitted audio signal, the device 120's FM demodulator/data extractor extracts the identification of the device 102. In one embodiment, the user of the device 120 enters the security code displayed on the display of the device 102. If the device 102 displays an encoded graphic, the device 120 may use a built-in camera to scan the encoded graphic. If the device 102 displays another type of graphic, such as a duck, the user of the device 120 may enter the name of the object in the displayed graphic. It should be noted that, unlike the Bluetooth pairing mechanism, the pairing mechanism described herein does not include a Bluetooth-type discovery mechanism.
After the user of the device 120 enters the security code, in one embodiment, the device 120 encodes its own identification in an FSK/MSK signal and broadcasts it, along with the security code entered by the user. The device 102 receives the data through the FSK signal and checks the security code. The association process concludes if the security code matches. The two devices may then exchange more data to determine what functions need to be distributed between them. In one embodiment, the devices may be configured to skip the security mechanism. Alternatively, in another embodiment, the security code may be generated by the device 120, and the device 102 transmits this security code for verification by the device 120. Still further, both devices 102, 120 may generate security codes that need to be verified through the above stated mechanism (e.g., the device 102 verifies the security code of the device 120 and vice versa).
In another embodiment, instead of broadcasting an FSK signal, the device 120 sends a response message through a network to which both devices are connected, using the identification of the device 102. Hence, in this embodiment, only the device 102 uses the FSK signal to broadcast its identification. Any subsequent communication between the device 102 and the device 120 takes place using the network (e.g., the Internet).
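A minimal sketch of the verification step on the device 102 side follows; the response dictionary and its field names are hypothetical and mirror the example frame format sketched earlier.

```python
import hmac

def verify_association_response(expected_code: str, response: dict) -> bool:
    """Device 102 side: check the security code echoed back by device 120.
    `response` is a hypothetical decoded frame such as {"id": "...", "code": "..."}."""
    return hmac.compare_digest(expected_code, response.get("code", ""))

# Example: the code displayed by device 102 was "4271"; device 120 echoes what its user typed.
if verify_association_response("4271", {"id": "192.0.2.20", "code": "4271"}):
    pass  # association concludes; the devices may now negotiate distributed functions
```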
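For illustration, the network-based response might look like the following sketch, which sends a short message over TCP to the identification (IP address) extracted from the broadcast. The port number and message fields are assumptions made for this sketch only.

```python
import json
import socket

def send_association_response(first_device_ip: str, own_id: str, security_code: str,
                              port: int = 50007) -> None:
    """Device 120 side: answer over the shared network instead of broadcasting another
    FSK signal. The port number is an arbitrary example, not one defined by the embodiments."""
    message = json.dumps({"id": own_id, "code": security_code}).encode("utf-8")
    with socket.create_connection((first_device_ip, port), timeout=5) as conn:
        conn.sendall(message)

# Example call, using the identification extracted from the broadcast audio signal:
# send_association_response("192.0.2.10", "192.0.2.20", "4271")
```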
In one embodiment, the device 102 and the device 120 include applications that are connected to a server 204. For example, the applications of the device 102 and the device 120 may be messaging clients, and the server 204 may be a messaging server through which the messaging clients connect to other messaging clients. Upon conclusion of the initial handshake as described above, the applications in the device 102 and the device 120 may communicate with the server 204 to cooperatively distribute functionality between the two devices. For example, the device 102 may be in an active communication session with a third device and may wish to transfer the active session to the device 120. Alternatively, the device 102 may want to transfer only the camera functions of the active session to the device 120 instead of transferring the entire active session to the device 120. In other words, the application 110 and the server 204 may be configured to provide various options to the devices after they associate with each other, without any need for prewiring the devices for a particular usage.
In one embodiment, to reduce noise interference, the encoded data in the broadcasted signal further includes a special flag or code so that the application in the device 120 may distinguish the audio signal sent by the device 102 for initiating device association from other audio signals.
At step 306, a second device receives the broadcasted message via the microphone of the second device. The second device then decodes the encoded signal and extracts the identification of the first device. Optionally, the first device also displays a security code or security graphic. If the security code or security graphic is displayed, after extracting the identification of the first device, the second device requests its user to enter the security code into the second device. The second device then contacts the first device either using the identification of the first device or through a broadcast message. The response broadcast message, or the message sent to the first device via the network, includes the identification of the second device and, optionally, the security code of the first device.
At step 308, the first and the second devices negotiate distribution of desired functions between them. A user may also participate in the selection of desired functions for each of the two devices after the device association process. At optional step 310, a communication channel may be established between the two devices.
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. For example, aspects of the present invention may be implemented in hardware or software or in a combination of hardware and software. One embodiment of the invention may be implemented as a program product for use with a computer system. The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, flash memory, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. Such computer-readable storage media, when carrying computer-readable instructions that direct the functions of the present invention, are embodiments of the present invention.
This application is a continuation of and claims priority to U.S. patent application Ser. No. 13/293,245, entitled “Device Association Using an Audio Signal” filed Nov. 10, 2011, the disclosure of which is incorporated by reference herein in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
7209947 | Lee | Apr 2007 | B1 |
7254708 | Silvester | Aug 2007 | B2 |
7729489 | Lee et al. | Jun 2010 | B2 |
8224354 | De Vries et al. | Jul 2012 | B2 |
9288229 | Kaufman | Mar 2016 | B2 |
9450930 | Foulds et al. | Sep 2016 | B2 |
9628514 | Kaufman | Apr 2017 | B2 |
20030095521 | Haller et al. | May 2003 | A1 |
20040253923 | Braley et al. | Dec 2004 | A1 |
20050159132 | Wright | Jul 2005 | A1 |
20060046719 | Holtschneider | Mar 2006 | A1 |
20060143455 | Gitzinger | Jun 2006 | A1 |
20060282649 | Malamud | Dec 2006 | A1 |
20070094490 | Lohr | Apr 2007 | A1 |
20070173212 | Mergler | Jul 2007 | A1 |
20070238505 | Okada | Oct 2007 | A1 |
20080049704 | Witteman et al. | Feb 2008 | A1 |
20080244721 | Barrus et al. | Oct 2008 | A1 |
20090199279 | Lange et al. | Aug 2009 | A1 |
20090240814 | Brubacher | Sep 2009 | A1 |
20090247152 | Manne | Oct 2009 | A1 |
20090287922 | Herwono et al. | Nov 2009 | A1 |
20100043056 | Ganapathy | Feb 2010 | A1 |
20100053169 | Cook | Mar 2010 | A1 |
20100115591 | Kane-Esrig | May 2010 | A1 |
20100197322 | Preston et al. | Aug 2010 | A1 |
20100227549 | Kozlay | Sep 2010 | A1 |
20100262696 | Oshiba | Oct 2010 | A1 |
20100278345 | Alsina et al. | Nov 2010 | A1 |
20110047607 | Chen et al. | Feb 2011 | A1 |
20110072263 | Bishop | Mar 2011 | A1 |
20110086593 | Hardacker | Apr 2011 | A1 |
20110092155 | Piemonte et al. | Apr 2011 | A1 |
20110093266 | Tham | Apr 2011 | A1 |
20110096174 | King | Apr 2011 | A1 |
20110179182 | Vadia Ravnas | Jul 2011 | A1 |
20110183614 | Tamura | Jul 2011 | A1 |
20110208659 | Easterly et al. | Aug 2011 | A1 |
20110219105 | Kryze et al. | Sep 2011 | A1 |
20110281523 | Oshiba | Nov 2011 | A1 |
20110295502 | Faenger | Dec 2011 | A1 |
20110296506 | Caspi | Dec 2011 | A1 |
20120011575 | Cheswick et al. | Jan 2012 | A1 |
20120017081 | Courtney | Jan 2012 | A1 |
20120044057 | Kang | Feb 2012 | A1 |
20120045994 | Koh | Feb 2012 | A1 |
20120054046 | Albisu | Mar 2012 | A1 |
20120131186 | Klos et al. | May 2012 | A1 |
20120140925 | Bekiares et al. | Jun 2012 | A1 |
20120158581 | Cooley et al. | Jun 2012 | A1 |
20120158898 | van Deventer et al. | Jun 2012 | A1 |
20120184372 | Laarakkers | Jul 2012 | A1 |
20120188147 | Hosein et al. | Jul 2012 | A1 |
20120189140 | Hughes | Jul 2012 | A1 |
20120198531 | Ort et al. | Aug 2012 | A1 |
20120214416 | Kent | Aug 2012 | A1 |
20120278727 | Ananthakrishnan et al. | Nov 2012 | A1 |
20120322376 | Couse | Dec 2012 | A1 |
20120324076 | Zerr | Dec 2012 | A1 |
20130031275 | Hanes | Jan 2013 | A1 |
20130036461 | Lowry | Feb 2013 | A1 |
20130088649 | Yum | Apr 2013 | A1 |
20130110723 | Huang | May 2013 | A1 |
20130115880 | Dal Bello et al. | May 2013 | A1 |
20130122810 | Kaufman | May 2013 | A1 |
20130124292 | Juthani | May 2013 | A1 |
20130125224 | Kaufman | May 2013 | A1 |
20130265857 | Foulds | Oct 2013 | A1 |
20130276079 | Foulds | Oct 2013 | A1 |
20140117075 | Weinblatt | May 2014 | A1 |
20140256260 | Ueda | Sep 2014 | A1 |
20140305828 | Salvo | Oct 2014 | A1 |
20150131539 | Tsfaty | May 2015 | A1 |
20150215299 | Burch | Jul 2015 | A1 |
20160171592 | Pugh | Jun 2016 | A1 |
Number | Date | Country |
---|---|---|
1638383 | Jul 2005 | CN |
101350723 | Jan 2009 | CN |
101872448 | Oct 2010 | CN |
1551140 | Jul 2005 | EP |
2005122651 | May 2005 | JP |
WO-0158080 | Aug 2001 | WO |
WO-2011010925 | Jan 2011 | WO |
Entry |
---|
“Advisory Action”, U.S. Appl. No. 13/828,717, dated Jul. 27, 2016, 3 pages. |
“Final Office Action”, U.S. Appl. No. 13/293,242, dated Apr. 17, 2015, 23 pages. |
“Final Office Action”, U.S. Appl. No. 13/293,242, dated Sep. 20, 2013, 17 pages. |
“Final Office Action”, U.S. Appl. No. 13/293,245, dated Mar. 25, 2014, 14 pages. |
“Final Office Action”, U.S. Appl. No. 13/293,245, dated Nov. 5, 2014, 17 pages. |
“Final Office Action”, U.S. Appl. No. 13/293,245, dated Dec. 9, 2015, 17 pages. |
“Final Office Action”, U.S. Appl. No. 13/828,343, dated Nov. 14, 2014, 14 pages. |
“Final Office Action”, U.S. Appl. No. 13/828,717, dated Feb. 7, 2017, 25 pages. |
“Final Office Action”, U.S. Appl. No. 13/828,717, dated May 20, 2015, 20 pages. |
“Final Office Action”, U.S. Appl. No. 13/828,717, dated Jun. 13, 2016, 25 pages. |
“Foreign Office Action”, CN Application No. 201210585999.1, dated May 6, 2016, 6 pages. |
“Foreign Office Action”, CN Application No. 201210585999.1, dated Jul. 27, 2015, 16 pages. |
“Foreign Office Action”, CN Application No. 201210597199.1, dated Jan. 29, 2015, 14 pages. |
“Foreign Office Action”, CN Application No. 201210597199.1, dated May 6, 2016, 6 pages. |
“Foreign Office Action”, CN Application No. 201210597199.1, dated Oct. 19, 2015, 13 Pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2012/064576, dated May 7, 2013, 10 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2012/064577, dated Feb. 21, 2013, 22 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/293,245, dated Nov. 26, 2013, 13 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/293,242, dated Jun. 3, 2013, 12 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/293,242, dated Dec. 4, 2014, 19 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/293,245, dated Apr. 17, 2015, 18 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/293,245, dated Jun. 15, 2016, 18 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/293,245, dated Jul. 3, 2014, 17 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/828,343, dated Mar. 27, 2014, 13 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/828,343, dated Dec. 7, 2015, 14 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/828,717, dated Aug. 25, 2016, 26 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/828,717, dated Nov. 28, 2014, 20 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/828,717, dated Dec. 17, 2015, 25 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/293,242, dated Nov. 12, 2015, 13 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/293,242, dated Dec. 2, 2015, 11 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/293,245, dated Dec. 6, 2016, 7 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/828,343, dated Apr. 6, 2016, 10 pages. |
Goodrich,“Loud and Clear—Human-Verifiable Authentication Based on Audio”, Proceedings of the 26th IEEE International Conference on Distributed Computing Systems, Available at <https://www.cs.duke.edu/˜msirivia/publications/icdcs.pdf>, Jul. 4, 2006, 15 pages. |
Prasad,“Efficient Device Pairing using Human-Comparable Synchronized Audiovisual Patterns”, Proceedings of the 6th International Conference on Applied Cryptography and Network Security, Available at <http://www.cis.uab.edu/saxena/docs/sr07.pdf>, Jun. 3, 2008, 19 pages. |
Saxena,“Secure Device Pairing Based on a Visual Channel”, IEEE Symposium on Security and Privacy, 2006, Available at <http://eprint.iacr.org/2006/050.pdf>, May 2006, pp. 1-17. |
Soriente,“HAPADEP—Human-Assisted Pure Audio Device Pairing”, Proceedings of the 11th International Conference on Information Security, Available at <http://sprout.ics.uci.edu/papers/hapadep.pdf>, Sep. 15, 2008, 11 pages. |
“Corrected Notice of Allowance”, U.S. Appl. No. 13/293,245, dated Mar. 17, 2017, 2 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/828,717, dated Jul. 6, 2017, 27 pages. |
“Foreign Office Action”, EP Application No. 12798499.5, dated Mar. 8, 2017, 5 pages. |
Number | Date | Country | |
---|---|---|---|
20170180350 A1 | Jun 2017 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13293245 | Nov 2011 | US |
Child | 15449737 | US |