Providing access with a portable device and voice commands

Information

  • Patent Grant
  • Patent Number
    10,224,039
  • Date Filed
    Friday, July 29, 2016
  • Date Issued
    Tuesday, March 5, 2019
Abstract
A computing device may compare a voice command to a customized voiceprint of a user. The computing device may, if a result of the comparison exceeds a threshold, determine the voice command matches the voiceprint, determine a security level associated with the voice command, generate a signal comprising an audible announcement, access website related information, and utilize customized user settings.
Description
BACKGROUND

The present invention relates to security systems. In particular, the invention relates to a smart security device or system which incorporates, amongst other features, voice-control and proximity detection to provide enhanced security.


Voice recognition has been, and continues to be, developed to provide control of electronic systems using spoken commands. Spoken commands may not only facilitate easier use of a system, but also enhance the security aspects of a system. For instance, the voice pattern of the user may be part of the security system which authenticates a user's authorization for access. U.S. Pat. No. 4,856,072 discloses a vehicle security system which can be voice actuated by any authorized person. In this manner, voice activated control systems are known which control various functions within an automobile.


Electronic identification devices which incorporate radio transponders are also known in the art. Transponders are generally categorized as either active or passive, based upon the power source used to power the transponder. Active transponders periodically transmit encoded data on a selected frequency to a remote receiver located on the item to which access is desired. For example, U.S. Pat. No. 3,772,668 discloses a freight security system comprising a base station and active transponders, each of which include a self-contained power supply.


In contrast, for passive transponders, the RF carrier signal received from an interrogating unit is the source of operating power. Accordingly, passive transponders are not activated until they come within the vicinity of the interrogating unit. U.S. Pat. No. 5,153,583 discloses a portable passive transponder having a single inductive coil for simultaneous reception and transmission of signals to and from an interrogating unit.


The security system according to the present invention increases security while eliminating the need for a key and/or a multiplicity of control buttons.


SUMMARY

The present invention is a security device or system which permits enhanced security and control using a voice signal in combination with a transponder unit. A user who desires the ability to control an object, such as an automobile, or to access and control an area, such as a building, carries a small portable radio transponder unit. The transponder unit is preferably incorporated into a plastic card, such as a credit card, but can also be incorporated in a wristband, watch, wallet or piece of jewelry. When the transponder comes within a predetermined range of a base unit located on the object to be controlled, the transponder transmits an RF signal including an identification code and interfaces with a base unit. The base unit receives the RF signal and verifies if that particular identification code is valid. If the identification code is valid, the base unit instructs the security device or system to await a vocal command.


Upon receiving a voice command, the security device or system authenticates the voice command by matching it with a voiceprint template that is previously stored. If there is a match, the device or system performs the function that was requested. For example, an authorized user may direct an automobile security system to perform various functions such as “unlock and open driver door” or “unlock and open all doors.” If both the transponder signal and voice command are approved, the system will perform the function accordingly. The present invention increases security by ensuring that the transponder and voice command are authentic and are within a predefined distance from each other and the base unit. The present invention further provides a variety of optional, smart features including virtual digital assistance, interactive personalities, and various smart function control features.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of a security system according to the present invention.



FIG. 2 is a flow diagram which illustrates the method of the present invention.



FIG. 3 is a block diagram of the correlator.



FIG. 4 is an alternative embodiment of the present invention utilizing the GPS.



FIG. 5 is a flow diagram which illustrates an alternative method of the present invention.





DETAILED DESCRIPTION OF THE INVENTION

The present invention will be described with reference to the drawing Figures where like numerals represent like elements throughout.


To facilitate understanding of the present invention, the invention will be described with reference to an automobile security system 20. However, those of skill in the art realize that the teachings of the present invention are equally applicable to a security system for a building such as a residence, commercial establishment or professional office building. The present invention is also adaptable to be installed as a security device on an individual door, window, any type of machinery, any type of computer or electronic equipment, any type of recreational equipment, or even individual padlocks or combination locks.


A block diagram of an automobile security system 20 embodying the present invention is shown in FIG. 1. A dashed line 10 graphically represents a vehicle, in which all of the components located within the area defined by the dashed line 10 are typically internal to the vehicle. The system 20 includes a portable radio transponder 22, an electronic receiver 24, a microcomputer 30 including an accessible memory circuit 32, a voice-recognition and synthesis unit 44, a microphone input circuit 45, external and internal microphones 50, 52, a speaker output circuit 42 and external and internal speakers 46, 48. The transponder 22 continually transmits an identification signal 15 which is unique to that user or that particular transponder 22.


As aforementioned, there are two types of transponders: 1) active transponders; and 2) passive transponders. Active transponders include a power source such as a battery and a transmitter, and periodically transmit an identification signal. A passive transponder is inactive until it comes within a predetermined range of an activation unit. The passive transponder is activated when it receives an RF signal from an activating unit; whereupon the transponder is energized and begins to transmit an identification signal. Although the present invention is being described with reference to an active transponder, it is equally applicable to the use of a passive transponder.


When a user carrying the active transponder 22, (which continually transmits an RF signal including an identification code 15), comes within a predetermined range of the receiver 24 which is mounted on the vehicle 10, the receiver 24 receives the RF signal 15. The receiver 24 then feeds the identification code to the microcomputer 30 which processes the identification code to determine whether the identification code matches the identification data previously stored in the memory circuit 32.
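Under the assumption that the memory circuit 32 simply holds a set of valid identification codes, the matching step performed by the microcomputer 30 can be sketched as follows (the codes and function name below are hypothetical):

```python
# Hypothetical sketch of the microcomputer 30 checking a received
# identification code against identification data stored in the memory
# circuit 32. The stored codes here are illustrative placeholders.
AUTHORIZED_IDS = {0x4A21B3, 0x7FE902}

def id_is_authorized(received_id: int) -> bool:
    """Return True if the received code matches the stored identification data."""
    return received_id in AUTHORIZED_IDS
```

Only after this check succeeds would the voice-recognition stage be armed.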


Once a positive identification has been made, the microcomputer 30 generates a control signal to activate the voice recognition and synthesis unit 44. The voice recognition and synthesis unit 44, which is connected to a microphone input circuit 45 and a speaker output circuit 42, sends a signal to the microphone input circuit 45 to begin detecting all audible signals within the range of the microphone 50.


Voice commands are recognized by either an external or internal microphone 50, 52 and are input signals 55 to the microphone input circuit 45. The microphone input circuit 45 converts the input signals 55 to digital format. The voice recognition and synthesis unit 44 then receives the digital input signal from the microphone input circuit 45 and sends it to the microcomputer 30, which analyzes the digital input signal.


The microcomputer 30 has an associated memory (not shown) which includes a plurality of “templates” for each potential spoken command as spoken by the user. Each spoken command will have at least one action for the system to initiate. For example, the template for the command “unlock driver's door” includes an entry in memory which comprises a recording of the user's voice speaking the command “unlock driver's door”, which was digitized, processed and stored in memory. As those skilled in the art should realize, there are many different signal processing techniques which may be utilized to process and store a voice signal. These voice recognition and processing techniques are being increasingly used for word processing (such as Dragon Naturally Speaking and Via Voice typing software), VCR programming and in many other types of consumer applications. It is intended that the present invention be utilized in conjunction with any current or future type of voice recognition. Accordingly, a voiceprint template in digital format for the words “unlock driver's door” as spoken by the user is stored in memory.


The system 20 permits errors in the commands within a predetermined tolerance. The system 20 is configurable to tailor the spoken commands particularly to a single user's voice, may be programmed for several or many users, or may include a large error tolerance such that it is generalized to any person's spoken commands. If a particular command is not tailored to a specific user's voice, the received voice command voiceprint template stored in memory will permit a greater degree of error than with the templates which are customized to a particular user's voice.











TABLE 1

COLUMN  1                2         3         4              5        6                      7
        VOICE            SECURITY            VOICEPRINT
        COMMAND          LEVEL     USER      TEMPLATE       ENABLED  ACTION                 HIERARCHY

ROW 1   Unlock Driver's  1         Richard   01101111 . . .  Yes     Unlock Door            1
        Door                       Victoria  01110110 . . .  Yes     Unlock Door            2
    2   Turn On Stereo   2         Richard   10101110 . . .  Yes     Turn On 1060 AM,       1
                                                                     Volume Setting 2
                                   Victoria  11011011 . . .  Yes     Turn On 100.3 FM,      2
                                                                     Max Volume
    3   Turn Off Stereo  10        Richard   11101110 . . .  Yes     Turn Off               1
                                   Victoria  01101110 . . .  Yes     Turn Off               2
    4   Turn On Wipers   10        Richard   11111000 . . .  Yes     Turn On Setting 3      1
        Low                        Victoria  10101010 . . .  No      XXXX                   X
    5   Turn On Wipers   10        Richard   10111110 . . .  Yes     Turn On Setting 9      1
        High                       Victoria  10100001 . . .  No      XXXX                   X
    6   Sunny Day        5         Richard   10111011 . . .  Yes     1) Open All Windows    1
                                                                     2) Open Sunroof
                                                                     3) Turn On Stereo To
                                                                        100.3 FM Max Volume
                                                                     4) Turn On Fan High

Table 1 shows the voiceprint templates as stored in memory. As shown, each voice command is associated with at least six additional fields of information, as shown by columns 2-7 in Table 1: security level, user, voiceprint template, enabled, action, hierarchy. For example, if the voice command is “unlock driver's door”, as shown in column 1, row 1, there are two associated voiceprint templates as shown in column 4, row 1. The first template is associated with the user Richard, and the second template is associated with the user Victoria. In this example, Richard and Victoria can each create their own voiceprint template by speaking the command “unlock driver's door” into the internal or external microphones 50, 52. This information is processed as described hereinbefore and stored in memory as a voiceprint template.
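One way such a table could be held in memory is as a nested mapping; the layout below is a hypothetical illustration (the field names are not from the patent, and the template strings are truncated stand-ins):

```python
# Illustrative in-memory form of Table 1: each voice command carries a
# security level plus per-user records (template, enabled, actions, hierarchy).
TEMPLATES = {
    "unlock driver's door": {
        "security_level": 1,
        "users": {
            "Richard":  {"template": "01101111", "enabled": True,
                         "actions": ["Unlock Door"], "hierarchy": 1},
            "Victoria": {"template": "01110110", "enabled": True,
                         "actions": ["Unlock Door"], "hierarchy": 2},
        },
    },
    "turn on stereo": {
        "security_level": 2,
        "users": {
            "Richard":  {"template": "10101110", "enabled": True,
                         "actions": ["Turn On 1060 AM", "Volume Setting 2"],
                         "hierarchy": 1},
            "Victoria": {"template": "11011011", "enabled": True,
                         "actions": ["Turn On 100.3 FM", "Max Volume"],
                         "hierarchy": 2},
        },
    },
}

def actions_for(command: str, user: str):
    """Return the customized actions, or None if the user is locked out."""
    record = TEMPLATES[command]["users"][user]
    return record["actions"] if record["enabled"] else None
```

The per-user action lists are what make the same spoken command produce different behavior for Richard and Victoria.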


The user may also customize the particular action associated with the voice command. As shown in column 6, row 1, the only action associated with “unlock driver's door” is to unlock the door. In a second example, however, for the voice command “turn on stereo” the user Richard has customized the action associated with turning on the stereo to the following: turn on 1060 am, volume setting 2. In contrast, the user Victoria has customized the action associated with her voiceprint template for turn on stereo to: turn on 100.3 FM, max volume. In this manner, each voice command is customized in accordance with the user's specific requirements.


There may be situations where different users' commands may conflict. In the aforementioned example regarding turning on the stereo, the actions associated with the voice command “turn on stereo” as shown in column 6 conflict regarding both the station selection and the volume setting. In this case, the hierarchy field shown in column 7 gives priority to the user Richard. The user Victoria has second priority. If additional users were specified, they can in turn have their own priority under the hierarchy.
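Resolving such a conflict amounts to selecting the action of the user with the best (lowest-numbered) hierarchy value; a minimal sketch, with hypothetical names:

```python
# Conflict resolution per column 7 of Table 1: the lowest hierarchy number wins.
def resolve_conflict(candidates):
    """candidates: iterable of (hierarchy, actions); return the winning actions."""
    return min(candidates, key=lambda c: c[0])[1]
```

With Richard at hierarchy 1 and Victoria at hierarchy 2, Richard's stereo settings would be applied.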


The security level column, shown as column 2, permits the system 20 to accept varying thresholds for voiceprint template identification. This enables certain voice commands to have a higher level of security than other commands. For example, the voice command “unlock driver's door” has a security level of 1, indicating that it is a highly secure function. For a higher security level, the threshold for voiceprint template identification will be extremely high; i.e., the voiceprint template must be highly correlated with the received voice command. However, other voice commands may have lower security levels. For some commands such as “turn on wipers low” or “turn on wipers high”, a security level of 10 permits an almost “generic” user to perform the command. This is a particularly useful feature, such as when valet personnel require access to certain functions and have not previously entered a voiceprint template into the system 20. It should be noted that, as shown in row 5 for the voice command “turn on wipers high”, although a “generic” user may operate the wipers, the system 20 may detect the voiceprint template for Victoria and refuse to grant access to that command since the enabling feature shown in column 5 is “no”. This “locks out” the user Victoria, keeping her from performing that particular command. A user with a higher hierarchy can modify, override or disable any commands executed by a user with a lower hierarchy.
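One plausible way to derive the correlation threshold from the security level is to allow a fixed amount of additional tolerance per level; the 5%-per-level figure below is an assumption for illustration only, not taken from the patent:

```python
# Illustrative mapping from Table 1's security level (1 = most secure,
# 10 = least) to the number of template bits that must match. The
# 5%-per-level slack is an invented figure.
def match_threshold(security_level: int, template_bits: int) -> int:
    slack_percent = 5 * (security_level - 1)   # level 1 -> 0%, level 10 -> 45%
    return template_bits - (template_bits * slack_percent) // 100
```

Integer arithmetic is used so the result is an exact bit count rather than a float.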


In the same manner, the system 20 may be utilized to create certain “mood” voice commands. For example, as shown in row 6, the user Richard may create a “sunny day” voice command with a plurality of associated actions to be undertaken by the system 20, such as:


1) open all windows;


2) open sun roof;


3) turn on stereo to 100.3 FM, max volume; and


4) turn on fan high.


In this manner, the system 20 may be customizable for any type of mood, weather or driving condition, and may also be tailored to a plurality of individual users on a predetermined hierarchical basis.
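A “mood” command is effectively a macro: one voiceprint match triggering an ordered list of actions. A sketch follows, where the dispatch callable standing in for the vehicle's I/O interface is hypothetical and the action strings mirror Table 1, row 6:

```python
# Executing a multi-action "mood" command in order; dispatch would drive
# the vehicle's I/O interface 57 in a real system.
SUNNY_DAY = [
    "Open All Windows",
    "Open Sunroof",
    "Turn On Stereo To 100.3 FM Max Volume",
    "Turn On Fan High",
]

def execute_mood(actions, dispatch):
    """Run each associated action in sequence."""
    for action in actions:
        dispatch(action)
```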


Referring to FIG. 3, the voiceprint template correlator 100 is shown. The correlator 100 is executed via the microcomputer 30 and the associated memory 32 (shown in FIG. 1), and includes a bitwise XNOR 106, an adder 108, a comparator 110 and a register 112. The voiceprint template 104 is compared to the received voiceprint 102 in the bitwise XNOR 106, which compares the received voiceprint 102 to the voiceprint template 104 on a bit-by-bit basis. Each bit match is output as a logical one, and each bit mismatch is output as a logical zero. The adder 108 adds up all of the matching bits, and the comparator 110 compares the total number of matching bits to a threshold stored in the register 112. If the total number of matching bits exceeds the threshold output by the register 112, a “hit” is registered and the received voiceprint 102 is considered to be the same as the voiceprint template 104. If the total number of matching bits is less than the threshold, a “miss” is output and the received voiceprint 102 is considered not to be the same as the voiceprint template 104. As aforementioned, the threshold set in the register 112 is variable depending upon the security level shown in column 2 of Table 1. Although a specific correlator 100 is shown in FIG. 3, it should be recognized that there are many types of correlators and comparators known to those skilled in the art which perform the same type of function. The correlator 100 as shown in FIG. 3 is intended to be illustrative and not exclusive.
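A minimal software model of this correlator, using illustrative 8-bit voiceprints (real templates would be far longer), makes the hit/miss logic concrete:

```python
# Software sketch of the FIG. 3 correlator: the XNOR 106 marks matching bits,
# the adder 108 counts them, and the comparator 110 tests the count against
# the threshold held in the register 112. The 8-bit width is illustrative.
def correlate(received: int, template: int, threshold: int, width: int = 8) -> bool:
    """Return True for a "hit" (matching bits exceed the threshold)."""
    mask = (1 << width) - 1
    xnor = ~(received ^ template) & mask   # 1 wherever the bits match
    matching_bits = bin(xnor).count("1")   # adder: total matching bits
    return matching_bits > threshold       # comparator vs. register value
```

A security level of 1 would set the threshold near the full template width; a level of 10 would set it much lower.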


All received voice commands are analyzed to determine whether they correspond to one of the voiceprint templates stored in memory. If there is no match, the system 20 will prompt the user to re-input the voice command by re-speaking the command. Once a predetermined number of attempts has been exhausted, the system 20 will enter a “safe mode” and will refuse to accept any more voice commands for a predetermined period of time, which may be set by an authorized user upon initial system setup.
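The retry-and-lockout behavior might look like the following sketch, in which the attempt limit, the lockout period, and the class and method names are all illustrative assumptions:

```python
import time

# Sketch of the "safe mode" policy: after max_attempts failed voice commands,
# refuse further commands until the lockout period elapses.
class CommandGate:
    def __init__(self, max_attempts=3, lockout_seconds=300.0):
        self.max_attempts = max_attempts
        self.lockout_seconds = lockout_seconds
        self.failures = 0
        self.locked_until = 0.0

    def report_failure(self, now=None):
        now = time.monotonic() if now is None else now
        self.failures += 1
        if self.failures >= self.max_attempts:       # enter safe mode
            self.locked_until = now + self.lockout_seconds
            self.failures = 0

    def accepting(self, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.locked_until
```

Passing `now` explicitly keeps the policy testable; in the device it would simply read the clock.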


If the voice command corresponds to a selected one of the voiceprint templates stored in the memory circuit 32, the microcomputer 30 generates a control signal to the voice recognition and synthesis unit 44. The voice recognition and synthesis unit 44 provides a digital output signal to the speaker output circuit 42, which converts the digital output signal to an audible output signal and supplies the output instruction signal 47 to the external or internal speakers 46, 48. These speakers 46, 48 generate alarm signals as well as audible announcements to inform the user of the present operating status of the automobile security system 20. Additionally, the microcomputer 30 implements the actions associated with the particular voiceprint template via an input/output (I/O) interface 57. The I/O interface 57 interfaces with many of the vehicle's electrical and electro-mechanical systems to implement the associated actions. In this manner, the system 20 may confirm that the instruction contained within the voice command is being carried out, or indicate whether additional security information or clarification is necessary.


Voice commands can be used in this manner to control the functions of the vehicle. The functions include turning the ignition switch on/off, locking/unlocking doors, opening/closing (i.e. power) doors and windows, controlling turn signals, hazard lights, head lights, parking lights, interior lights, horn, radio (on, off, scan, set and other specific settings), defroster, temperature controls, wipers (including variable speeds), mirrors and all other basic functions of a vehicle. The system also permits voice activation or deactivation of the alarm system.


The transponder 22 and receiver 24 are also capable of using various platforms including, but not limited to, Bluetooth technology. A security system equipped with a Bluetooth radio establishes instant connection to another Bluetooth radio as soon as it comes into range. Since the Bluetooth technology supports both point-to-point and point-to-multipoint connections, several piconets can be established and linked together ad hoc. Bluetooth topology is best described as a multiple piconet structure. Using a technology such as Bluetooth allows transponders 22 and receivers 24 to communicate through other transponders 22 and receivers 24 (i.e. repeaters).


Optionally, the system 20 may allow the processing of the transponder signal 15 and voice command simultaneously. When both the transponder signal 15 and voice command are approved, the system 20 will perform the function being requested.


It is to be emphasized that the vehicle security application depicted in FIG. 1 is only exemplary of the many and varied applications with which the present invention may be used. For instance, the present invention can permit a user in a work environment to have access through locked doors, cabinets or computers. Another use of this system 20 can be for hotels, whereby a guest can be given a plastic card having a transponder 22. The voice of the guest can be programmed at the front desk upon registration, such that the guest can open the door to selected hotel rooms. Further, various functions inside the hotel room may also be voice-controlled. The security system of the present invention can also secure electronic and other equipment, if a user's voice and identification data contained in the transponder unit have been pre-programmed. For instance, in order to use phones, computers or other electronic devices, or to have access to various web sites, the transponder 22 can include a memory, and these devices can be programmed into the transponder 22 along with the user's voice. This permits one to use the particular electronic device or surf the web using one's voice, with the transponder 22 controlling the access.


Referring to FIG. 2, a method of voice-controlling a security system with a proximity detector is described in accordance with the present invention. The method includes the steps of: positioning a radio transponder within a predetermined range from the activation unit (step 200); detecting an RF signal generated by the radio transponder including the identification code (step 201); verifying if the identification code detected is the identification code assigned to an authorized transponder (step 202); detecting all audible signals within range of the activation unit (step 203); converting the audible signal to a digital signal (step 204); verifying if the digital signal matches that of an authorized user (step 205); generating a digital instruction signal (step 206); and converting the digital instruction signal to an audible instruction signal (step 207).
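The steps above can be sketched as a single pipeline, with stubs standing in for the hardware stages; the UTF-8 encoding used for the analog-to-digital step and the “ACK” instruction format are purely illustrative:

```python
# FIG. 2 steps as a pipeline: RF detection and ID verification (steps 200-202),
# audio capture and conversion (203-204), voiceprint verification (205), and
# generation/conversion of the instruction signal (206-207).
def run_security_method(rf_id, spoken, authorized_ids, voice_matches):
    if rf_id not in authorized_ids:      # steps 200-202: verify transponder
        return None
    digital = spoken.encode("utf-8")     # steps 203-204: A/D conversion stand-in
    if not voice_matches(digital):       # step 205: voiceprint check
        return None
    instruction = b"ACK " + digital      # step 206: digital instruction signal
    return instruction.decode("utf-8")   # step 207: back to audible form
```

Either failed check short-circuits the pipeline, mirroring the flow diagram's rejection branches.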


An alternative embodiment 401 of the present invention is shown in FIG. 4. This embodiment is similar to the embodiment shown in FIG. 1, except that the radio transponder 122 includes a GPS locator 158 and the vehicle includes a GPS receiver 160 and a detector 162, which have outputs fed to the microcomputer 30. In this embodiment, the transponder 122 also sends a location signal to the receiver 24 as output by the GPS locator 158. The microcomputer 30 also receives a signal from the onboard GPS receiver 160. The detector 162 detects the location of a user. Although the user and the transponder 122 are generally intended to be co-located, i.e., the transponder 122 is within the possession of the user, this may not necessarily be the case. Accordingly, the system 401 determines with precise accuracy where the transponder 122 is located and where the user is located. The detector 162 may utilize infrared, microwave, radar or any other known or future system for locating objects or detecting the proximity of one object to another. If both the user and the transponder 122 are within a predetermined distance from each other, the microcomputer 30 proceeds with the authentication process as described hereinbefore. This provides an extra level of security.


The transponder 122 can also be programmed to advise the security device or system of the exact location of the transponder 122 and the distance between the origin of the voice command and the transponder 122. For example, the security system 401 described above can require that the user be in the driver's seat to activate those controls that pertain to operating the vehicle; the commands spoken by the user will then be valid only when the user is positioned in the driver's seat. In other words, the system validates that the transponder 122 is located at an exact location within the “driver's space” and then confirms that the voice command originates within a predefined distance “X” of the transponder 122, calculating that it is also within the driver's space.
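This validation can be sketched geometrically; the zone boundaries and the predefined distance below are invented for illustration:

```python
import math

# Check that the transponder 122 lies inside the "driver's space" and that the
# voice command originates within a predefined distance of the transponder.
DRIVER_SPACE = ((0.0, 0.0), (0.8, 0.9))   # (x, y) bounding box, meters (assumed)
MAX_VOICE_DISTANCE = 0.5                  # the predefined range "X" (assumed)

def command_location_valid(transponder_xy, voice_xy):
    (x1, y1), (x2, y2) = DRIVER_SPACE
    tx, ty = transponder_xy
    in_zone = x1 <= tx <= x2 and y1 <= ty <= y2
    close = math.dist(transponder_xy, voice_xy) <= MAX_VOICE_DISTANCE
    return in_zone and close
```

Both conditions must hold: a valid voiceprint spoken from the back seat, or a transponder left outside the driver's space, fails the check.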


Reiterating, the microcomputer 30 may not permit certain actions to be taken depending upon the location of the transponder 122 and the proximity of the user. For example, the microcomputer 30 may refuse to engage the transmission from “park” to “drive” until the transponder 122 and the user are both in the “driver's space”. In the same manner, if the vehicle is in motion, the microcomputer will disallow an otherwise authorized voice command to open the doors.


The present invention may take the place of existing remote keyless entry systems, whereby a button on a keyfob may be depressed and then the voice command entered into a speaker located on the keyfob. This provides certain functions, such as those performed by the microphone, within the keyfob rather than within the vehicle. In this manner, a user may from a distance, for example in inclement weather, start up the car, turn on the windshield wipers, set the rear defroster on, and set the heat to 80°.


As aforementioned, although the present invention was described with reference to a vehicle, it should be recognized that it is equally applicable to any other individually controllable item. For example, the components shown in FIG. 1 or 4 may not be located in and interfaced with a vehicle, but rather may comprise a security system for a computer. This system, if combined or integrated with any network system or computer using a standardized or uniform interface, can also transmit an encrypted user password (or passwords). Accordingly, this not only identifies the user but, by providing the password (or passwords), allows the network or computer system to automatically log in the user (when they arrive) and set the level of access being provided to that particular user. The system also automatically logs out the user when they depart. In this manner, the I/O interface 57 may interface with a computer system either electro-mechanically or via software. The particular levels of security may permit or restrict access to the computer, or even access to certain websites. These security levels and specific actions are implemented as described with reference to Table 1.
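The automatic login/logout behavior might be modeled as below; the session class, its method names, and the "token" credential format are hypothetical:

```python
# Sketch of proximity-driven login/logout for the computer-security variant:
# arrival of a recognized transponder logs the user in with a stored
# (illustratively "encrypted") password token; departure logs the user out.
class ProximitySession:
    def __init__(self, credentials):
        self.credentials = credentials   # user -> stored password token
        self.logged_in = set()

    def on_arrival(self, user):
        if user in self.credentials:
            self.logged_in.add(user)     # transmit password, auto-login
            return True
        return False

    def on_departure(self, user):
        self.logged_in.discard(user)     # auto-logout on departure
```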


If the present invention is utilized with the doors and windows of a residence or a business, the invention may be embodied in a complete “system” whereby all of the security of the windows and doors is centrally located. In contrast, each window and door may include a separate security device as shown in FIGS. 1 and 4. In this manner, each window and door is pre-programmable to receive the user's voiceprint templates. Upon initially setting up the security of each individual window and door, a security system may record an individual's voice command, and then replay those voice commands to each individual security device. These voiceprint templates can also be stored in the central files of the security company for even greater identification levels.


An alternative method of voice-controlling a security system with a proximity detector and GPS locating is described in FIG. 5. The method includes the steps of: positioning a radio transponder within a predetermined range from the activation unit (step 600); detecting an RF signal generated by the radio transponder including the identification code and location (step 601); verifying if the identification code detected is the identification code assigned to an authorized transponder (step 602); detecting all audible signals within range of the activation unit (step 603); converting the audible signal to a digital signal (step 604); verifying if the digital signal matches that of an authorized user and validating that the transponder and voice are within a predefined distance from each other and the receiver for the requested function (step 605); generating a digital instruction signal (step 606); and converting the digital instruction signal to an audible instruction signal (step 607).


In an alternative embodiment of the present invention, the system may act as a smart virtual assistant (hereinafter referred to as “VA”) to each user in control of a transponder. It is well known in the art that companies are developing artificial intelligence. For example, Artificial Intelligence Enterprises N.V. (Ai) is one company with a commercially-oriented research effort to achieve true artificial intelligence through the development of conversational skills in computers. Using cutting-edge mathematical and cognitive theories and techniques, Ai's technology will be able to conduct a conversation that is not discernible from a conversation with a human being. Ai is in the process of developing technology that can perform numerous functions, such as allowing a VA to book airline tickets for a user's upcoming vacation. One can ask the VA to find and book the best fare, or ask the VA to browse the Web for a specific present in a specific price range. The VA can track the package, ensuring that it arrives on time, or order another item should it be delayed.


This VA can perform numerous tasks such as maintaining your schedule, taking phone calls, typing memos that you dictate, receiving charges from third-party systems (such as a car at a gas station or going through a toll booth) and authorizing payment via a credit or debit card, or asking the user for authorization to pay. In this embodiment, the VA does not have to wait for a command. Rather, it can be proactive, initiating a request or action as long as a user is in control of a transponder.


For a particular system, one or more transponders can be the alpha or master transponder (hereinafter referred to as “AMT”) and can allow new transponders to be added to the system. When new transponders are added, the user in possession of the AMT can select from various options. These options can be chosen verbally (i.e., being led by the VA) or through a GUI (graphical user interface). Once a transponder is programmed, it may require the user of that particular transponder to walk through the commands to obtain the voiceprints of the user and to refine the commands with respect to what function the system is to perform when each command is spoken. Optionally, the system may initiate questions, using the user's responses to refine each command in real time, and may parse the user's answers to create the voiceprints for future reference (i.e., to use said voiceprints as templates when the system receives a future command).


As a VA, the system may use different voices and have different personalities (i.e., identities) depending on which transponder is present and which user is speaking. These VAs may be identified by, and respond to, specific names. This allows the system to represent itself as a different VA to each user. For example, the system may be set up to have a “Motherly” or a “Fatherly” tone/voice for a child. The VA may try to simulate another user's voice, or, as an alternative, it could have a sexy male or female voice. A VA can be programmed to be grumpy, sad, happy, etc. An AMT may be able to speak to or call multiple system identities merely by calling their name(s).


If a plurality of transponders are present, the system may respond to a command by asking questions to clarify a user's command. For example, if the command was “Open door”, the system can ask whether more than one door should be opened. If a child with a transponder is getting into the rear of the car, the system may ask the user with the highest hierarchy whether that user wants any other doors opened, or it may ask the child with the transponder which door she wants opened. The system may retain previous results of commands or questions asked by the VA and can respond to future commands in a similar fashion based upon which transponders/users are present. The system can retain and record previous conversations. The system may summarize or replay its conversation with another user to the AMT. While summarizing, it may, or may not, use the other VA's voice/identity.


The system may interface with other systems or modules and may modify its standard operating procedure based upon the present circumstances. For example, the system may be in communication with a vehicle that has a power door and a door sensor device. The door sensor device determines that it can only open the door a particular distance before it strikes an object. The power door then opens the door only as far as it can, and the VA advises the user of this. Optionally, the VA may control the door sensor and power door.
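The power-door example reduces to clamping the door's travel to the sensor's reported clearance and generating the VA's advisory. A sketch under assumed units and names (the full-travel angle and message wording are illustrative):

```python
# Hedged sketch of the power-door interaction: the door sensor reports the
# clearance to the nearest obstacle, the power door opens only that far, and
# the VA advises the user. All names and values are assumptions.

FULL_OPEN_DEG = 70  # assumed full door travel, in degrees

def open_door(clearance_deg):
    """Open as far as the sensor allows; return (angle, VA advisory)."""
    angle = min(clearance_deg, FULL_OPEN_DEG)
    if angle < FULL_OPEN_DEG:
        advisory = f"Door opened {angle} degrees; an obstacle limits further travel."
    else:
        advisory = "Door fully opened."
    return angle, advisory
```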


The system may send information back to a transponder that has a memory. For security reasons, the system may modify the password or identifying information and may send this updated information to the transponder. The system may also update the transponder so that it can retain and replay previously recorded conversations or commands made by the user or other users. One significant benefit of the transponder having a memory that is updated by the system is that the system can create an appearance of continuity between locations. For example, if systems were installed on a laptop, car, house, and office, the user can have a conversation with the VA whenever they are, or become, within the predefined distance from each of these locations or items. No information is lost, because the transponder can send the previous conversation to the system at each of these locations or items, updating the VA “on the fly” so that there is seamless transparency or continuity with the VA.
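The continuity mechanism above can be sketched as a write-back/read-back cycle against the transponder's memory. The field names and the credential-rotation scheme here are assumptions for illustration only:

```python
# Illustrative sketch: each location's system writes the latest conversation
# (and a rotated credential) to the transponder's memory, and the next
# location's system reads it back so the VA can resume "on the fly".

import secrets

class Transponder:
    """A transponder with a small writable memory, as described above."""
    def __init__(self):
        self.memory = {"password": "initial", "conversation": []}

def sync_to_transponder(tag, new_turns):
    # Rotate the identifying credential for security, then append the
    # conversation so the next location's VA can pick up where this one left off.
    tag.memory["password"] = secrets.token_hex(8)
    tag.memory["conversation"].extend(new_turns)

def resume_from_transponder(tag):
    """A system at another location restores the VA's conversational context."""
    return list(tag.memory["conversation"])
```

The same stored transcript is what would let a system summarize or replay a prior conversation to the AMT.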

Claims
  • 1. A computing device comprising: a microphone configured to receive an auditory signal comprising a voice command; a processor configured to: process the auditory signal based on a determined proximity to a portable radio device, wherein the proximity is determined based on a radio frequency (RF) signal received by the computing device from the portable radio device; and compare the voice command to a customized voiceprint of a user, and on a condition that a result of the comparison exceeds a threshold, determine the voice command matches the voiceprint, determine a security level associated with the voice command, generate a signal comprising an audible announcement, access website related information, and utilize customized user settings.
  • 2. The computing device of claim 1, wherein the voice command includes a password, includes a security code, or includes an identification code.
  • 3. The computing device of claim 1, wherein the voice command is stored locally in a memory device.
  • 4. The computing device of claim 1, wherein the voice command is utilized to log the user in and out.
  • 5. The computing device of claim 1, wherein the proximity is determined based on a bluetooth signal.
  • 6. The computing device of claim 1, wherein the audible announcement is a confirmation.
  • 7. A method performed by a computing device, the method comprising: receiving, by a microphone of the computing device, an auditory signal comprising a voice command; processing, by the computing device, the auditory signal based on a determined proximity to a portable radio device, wherein the proximity is determined based on a radio frequency (RF) signal received by the computing device from the portable radio device; comparing the voice command to a customized voiceprint of a user; and on a condition that a result of the comparison exceeds a threshold, determining the voice command matches the voiceprint, determining a security level associated with the voice command, generating a signal comprising an audible announcement, accessing website related information, and utilizing customized user settings.
  • 8. The method of claim 7, wherein the voice command includes a password, includes a security code, or includes an identification code.
  • 9. The method of claim 7, wherein the voice command is stored locally in a memory device.
  • 10. The method of claim 7, wherein the voice command is utilized to log the user in and out.
  • 11. The method of claim 7, wherein the proximity is determined based on a bluetooth signal.
  • 12. The method of claim 7, wherein the audible announcement is a confirmation.
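The flow recited in claims 1 and 7 can be sketched as a proximity-gated voiceprint check. This is a stand-in illustration only: the similarity score, the RSSI gate, the threshold values, and all names are assumptions, not the patented method.

```python
# Hypothetical sketch of the claimed flow: process the command only when the
# portable radio device is in proximity (per an RF signal such as Bluetooth
# RSSI), compare against the user's customized voiceprint, and act when the
# comparison result exceeds a threshold.

def handle_command(features, voiceprint, rssi_dbm, threshold=0.8, rssi_min=-70):
    # Proximity gate: the RF signal must indicate the portable device is near.
    if rssi_dbm < rssi_min:
        return {"matched": False, "reason": "device not in proximity"}
    # Simple normalized similarity as a placeholder for voiceprint comparison.
    score = sum(min(a, b) for a, b in zip(features, voiceprint)) / max(sum(voiceprint), 1)
    if score > threshold:
        # On a match: security level, audible announcement, and user settings.
        return {"matched": True, "security_level": "user", "announce": "Access granted"}
    return {"matched": False, "reason": "voiceprint mismatch"}
```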
CROSS REFERENCE TO RELATED APPLICATION(S)

This application is a continuation of U.S. patent application Ser. No. 14/173,426, filed Feb. 5, 2014, which is a continuation of U.S. patent application Ser. No. 11/205,891, filed Aug. 17, 2005, which issued as U.S. Pat. No. 8,648,692 on Feb. 11, 2014, which is a continuation in part of U.S. patent application Ser. No. 10/315,886, filed Dec. 10, 2002, which issued as U.S. Pat. No. 6,952,155 on Oct. 4, 2005, which is a continuation of U.S. patent application Ser. No. 09/624,300, filed Jul. 24, 2000, which issued as U.S. Pat. No. 6,496,107 on Dec. 17, 2002, which claims the benefit of U.S. Provisional Application Nos. 60/147,057, filed Aug. 3, 1999, and 60/145,378, filed Jul. 23, 1999, which are incorporated by reference as if fully set forth.

US Referenced Citations (219)
Number Name Date Kind
3772668 Smith Nov 1973 A
4506378 Noso et al. Mar 1985 A
4591823 Horvat May 1986 A
4593403 Kishi et al. Jun 1986 A
4677569 Nakano et al. Jun 1987 A
4827520 Zeinstra May 1989 A
4856072 Schneider et al. Aug 1989 A
5040212 Bethards Aug 1991 A
5086394 Shapira Feb 1992 A
5117460 Berry et al. May 1992 A
5153583 Murdoch Oct 1992 A
5153918 Tuai Oct 1992 A
5170499 Grothause Dec 1992 A
5199080 Kimura et al. Mar 1993 A
5226256 Fries et al. Jul 1993 A
5335276 Thompson et al. Aug 1994 A
5365574 Hunt et al. Nov 1994 A
5375063 Peck et al. Dec 1994 A
5389824 Moroto Feb 1995 A
5420794 James May 1995 A
5422816 Sprague et al. Jun 1995 A
5425058 Mui Jun 1995 A
5450525 Russell et al. Sep 1995 A
5451964 Babu Sep 1995 A
5459304 Eisenmann Oct 1995 A
5465290 Hampton et al. Nov 1995 A
5465401 Thompson Nov 1995 A
5473602 McKenna et al. Dec 1995 A
5479490 Nakashima et al. Dec 1995 A
5495609 Scott Feb 1996 A
5509852 Clark Apr 1996 A
5539398 Hall et al. Jul 1996 A
5572201 Graham et al. Nov 1996 A
5650774 Drori Jul 1997 A
5663734 Krasner Sep 1997 A
5668880 Alajajian Sep 1997 A
5668929 Foster, Jr. Sep 1997 A
5673034 Saliga Sep 1997 A
5675633 Kopp et al. Oct 1997 A
5704008 Duvall, Jr. Dec 1997 A
5705991 Kniffin Jan 1998 A
5706399 Bareis Jan 1998 A
5731785 Lemelson et al. Mar 1998 A
5731788 Reeds Mar 1998 A
5732074 Spaur et al. Mar 1998 A
5740532 Fernandez et al. Apr 1998 A
5767795 Schaphorst Jun 1998 A
5774858 Taubkin Jun 1998 A
5777571 Chuang Jul 1998 A
5801616 Ghazarian et al. Sep 1998 A
5805082 Hassett Sep 1998 A
5812067 Bergholz et al. Sep 1998 A
5847661 Ricci Dec 1998 A
5864603 Haavisto et al. Jan 1999 A
5875183 Nitadori Feb 1999 A
5884312 Dustan Mar 1999 A
5900825 Pressel et al. May 1999 A
5907793 Reams May 1999 A
5908454 Zyburt et al. Jun 1999 A
5917408 Cardillo et al. Jun 1999 A
5929769 Garnault Jul 1999 A
5952941 Mardirossian Sep 1999 A
5959540 Walter Sep 1999 A
5960085 De la Huerga Sep 1999 A
5970457 Brant et al. Oct 1999 A
5974368 Schepps et al. Oct 1999 A
5982281 Layson Nov 1999 A
5983161 Lemelson et al. Nov 1999 A
6009323 Heffield et al. Dec 1999 A
6014090 Rosen et al. Jan 2000 A
6018659 Ayyagari et al. Jan 2000 A
6018699 Baron et al. Jan 2000 A
6028537 Suman et al. Feb 2000 A
6032054 Schwinke Feb 2000 A
6041410 Hsu et al. Mar 2000 A
6052598 Rudrapatna et al. Apr 2000 A
6060995 Wicks et al. May 2000 A
6073840 Marion Jun 2000 A
6084512 Elberty et al. Jul 2000 A
6084870 Wooten et al. Jul 2000 A
6097313 Takahashi et al. Aug 2000 A
6097974 Camp et al. Aug 2000 A
6119088 Ciluffo Sep 2000 A
6119095 Morita Sep 2000 A
6122514 Spaur et al. Sep 2000 A
6140939 Flick Oct 2000 A
6150937 Rackman Nov 2000 A
6157317 Walker et al. Dec 2000 A
6161005 Pinzon Dec 2000 A
6175922 Wang Jan 2001 B1
6177873 Cragun Jan 2001 B1
6181026 Treharne et al. Jan 2001 B1
6189105 Lopes Feb 2001 B1
6195542 Griffith Feb 2001 B1
6198996 Berstis Mar 2001 B1
6199045 Giniger et al. Mar 2001 B1
6208844 Abdegany Mar 2001 B1
6208862 Lee Mar 2001 B1
6218958 Eichstaedt et al. Apr 2001 B1
6225890 Murphy May 2001 B1
6252539 Phillips et al. Jun 2001 B1
6252544 Hoffberg Jun 2001 B1
6253184 Ruppert Jun 2001 B1
6255953 Barber Jul 2001 B1
6275231 Obradovich et al. Aug 2001 B1
6275774 Baron, Sr. et al. Aug 2001 B1
6282464 Obradovich Aug 2001 B1
6288634 Weiss et al. Sep 2001 B1
6295001 Barber Sep 2001 B1
6295346 Markowitz et al. Sep 2001 B1
6295492 Lang et al. Sep 2001 B1
6304816 Berstis Oct 2001 B1
6311060 Evans et al. Oct 2001 B1
6323803 Jolley et al. Nov 2001 B1
6330499 Chou et al. Dec 2001 B1
6356868 Yuschik et al. Mar 2002 B1
6366893 Hannula et al. Apr 2002 B2
6377825 Kennedy et al. Apr 2002 B1
6389397 Otto May 2002 B1
6401068 Cherveny Jun 2002 B1
6405033 Kennedy, III et al. Jun 2002 B1
6408272 White Jun 2002 B1
6424912 Correia et al. Jul 2002 B1
6437689 Kawai et al. Aug 2002 B2
6463415 St John Oct 2002 B2
6463462 Smith et al. Oct 2002 B1
6466796 Jacobson et al. Oct 2002 B1
6480699 Lovoi Nov 2002 B1
6490525 Baron, Sr. et al. Dec 2002 B2
6493550 Raith Dec 2002 B1
6493633 Baron, Sr. et al. Dec 2002 B2
6493669 Curry et al. Dec 2002 B1
6498987 Kelly et al. Dec 2002 B1
6498989 Pisetski et al. Dec 2002 B1
6505123 Root et al. Jan 2003 B1
6510415 Talmor et al. Jan 2003 B1
6522772 Morrison et al. Feb 2003 B1
6522875 Dowling et al. Feb 2003 B1
6522977 Corrigan et al. Feb 2003 B2
6526268 Marrah et al. Feb 2003 B1
6535743 Kennedy, III et al. Mar 2003 B1
6539232 Hendrey et al. Mar 2003 B2
6542748 Hendrey et al. Apr 2003 B2
6542749 Tanaka et al. Apr 2003 B2
6542750 Hendrey et al. Apr 2003 B2
6549130 Joao Apr 2003 B1
6574592 Nankawa et al. Jun 2003 B1
6577227 Kirchlinde et al. Jun 2003 B1
6581036 Varney et al. Jun 2003 B1
6584439 Geilhufe et al. Jun 2003 B1
6590529 Schwoegler Jul 2003 B2
6594630 Zlokarnik et al. Jul 2003 B1
6603405 Smith Aug 2003 B2
6611201 Bishop et al. Aug 2003 B1
6618593 Drutman et al. Sep 2003 B1
6624739 Stobbe Sep 2003 B1
6628928 Crosby et al. Sep 2003 B1
6646559 Smith Nov 2003 B2
6647270 Himmelstein Nov 2003 B1
6650224 Weigl et al. Nov 2003 B1
6654689 Kelly et al. Nov 2003 B1
6678516 Nordman et al. Jan 2004 B2
6690681 Preston et al. Feb 2004 B1
6691089 Su et al. Feb 2004 B1
6700482 Ververs et al. Mar 2004 B2
6711474 Treyz et al. Mar 2004 B1
6726636 Der Ghazarian et al. Apr 2004 B2
6728323 Chen et al. Apr 2004 B1
6728528 Loke Apr 2004 B1
6745021 Stevens Jun 2004 B1
6748318 Jones Jun 2004 B1
6751452 Kupczyk et al. Jun 2004 B1
6754485 Obradovich et al. Jun 2004 B1
6754585 Root et al. Jun 2004 B2
6772331 Hind Aug 2004 B1
6812860 Schwarzwalder Nov 2004 B1
6816878 Zimmers et al. Nov 2004 B1
6819919 Tanaka Nov 2004 B1
6842774 Piccioni Jan 2005 B1
6853849 Taognazzini Feb 2005 B1
6915126 Mazzaro Jul 2005 B2
7042337 Borders et al. May 2006 B2
7054819 Loveland May 2006 B1
7062379 Videtich Jun 2006 B2
7092723 Himmelstein Aug 2006 B2
7123926 Himmelstein Oct 2006 B2
7155335 Rennels Dec 2006 B2
7174253 Videtich Feb 2007 B2
7216145 Collings, III May 2007 B2
7257426 Witkowski et al. Aug 2007 B1
7330693 Goss Feb 2008 B1
7450955 Himmelstein Nov 2008 B2
7463896 Himmelstein Dec 2008 B2
7505772 Himmelstein Mar 2009 B2
7536189 Himmelstein May 2009 B2
20010044309 Bar et al. Nov 2001 A1
20010052847 Auerbach Dec 2001 A1
20020016171 Doganata et al. Feb 2002 A1
20020118579 Lucy et al. Aug 2002 A1
20020147766 Vanska et al. Oct 2002 A1
20020174073 Nordman Nov 2002 A1
20030169181 Taylor Sep 2003 A1
20030169185 Taylor Sep 2003 A1
20040002348 Fracaroli Jan 2004 A1
20040010591 Sinn et al. Jan 2004 A1
20040014457 Stevens Jan 2004 A1
20040067752 Himmelstein Apr 2004 A1
20040145459 Himmelstein Jul 2004 A1
20040162064 Himmelstein Aug 2004 A1
20040203931 Karaoguz Oct 2004 A1
20040248569 Kondou et al. Dec 2004 A1
20050021666 Dinnage et al. Jan 2005 A1
20050233741 Zamani Oct 2005 A1
20070054677 Himmelstein Mar 2007 A1
20070082678 Himmelstein Apr 2007 A1
20080119212 Himmelstein May 2008 A1
20080146248 Himmelstein Jun 2008 A1
20090156125 Himmelstein Jun 2009 A1
20090156203 Himmelstein Jun 2009 A1
Foreign Referenced Citations (1)
Number Date Country
1998010412 Mar 1998 WO
Non-Patent Literature Citations (28)
Entry
US 6,731,928, 05/2004, Tanaka et al. (withdrawn)
“Match.com Launches Match Mobile with AT&T Wireless,” Match.com Corporate, Dec. 2003.
“Match.com's Wireless Dating Service MatchMobile Launches Photo Profiles,” Match.com Corporate, Dec. 2003.
“Our Biggest Store: Organization Find People Nearby,” AT&T Wireless, http://www.attwireless.com/personal/features/organization/findfriends.jhtml, Feb. 2004.
AT&T Wireless, Features and Services User Guide, How to Use Your At&T Wireless Features and Services, Feb. 6, 2004.
Auerbach, Emergency Management Communications System, U.S. Appl. No. 60/111,281, filed Dec. 7, 1998.
Auerbach, Emergency Management Communications System, U.S. Appl. No. 60/120,096, filed Feb. 16, 1999.
Final Rejection, U.S. Appl. No. 11/205,891, dated Apr. 11, 2013.
Final Rejection, U.S. Appl. No. 11/205,891, dated Jun. 18, 2010.
Final Rejection, U.S. Appl. No. 11/205,891, dated May 6, 2009.
Final Rejection, U.S. Appl. No. 11/205,891, dated Nov. 1, 2011.
IEEE Std. 802.11b-1999 (R2003); Supplement to ANSI/IEEE Std. 802.11, 1999 Edition, (96 pgs.).
IEEE Std. 802.11p, May 25, 2006 (4 pgs.).
Non-Final Rejection, U.S. Appl. No. 10/315,886, now U.S. Pat. No. 6,952,155, dated Dec. 17, 2004.
Non-Final Rejection, U.S. Appl. No. 11/205,891, dated Aug. 3, 2012.
Non-Final Rejection, U.S. Appl. No. 11/205,891, dated Apr. 21, 2011.
Non-Final Rejection, U.S. Appl. No. 11/205,891, dated Oct. 30, 2009.
Non-Final Rejection, U.S. Appl. No. 11/205,891, dated Nov. 20, 2008.
Non-Final Rejection, U.S. Appl. No. 14/173,426, dated Sep. 21, 2015.
Notice of Allowance, U.S. Appl. No. 10/315,886, now U.S. Pat. No. 6,952,155, dated May 17, 2005.
Notice of Allowance, U.S. Appl. No. 11/205,891, dated Oct. 2, 2013.
Notice of Allowance, U.S. Appl. No. 14/173,426, dated Mar. 28, 2016.
PCT International Search Report for PCT/US02/40114, dated Apr. 12, 2005 (24 pgs.).
Press Release, GM's OnStar Subsidiary Brings Location-Based, Real-Time Traffic and Road Condition Information Into the Vehicle, Nov. 13, 2000.
Selingo, Protecting the Cellphone User's Right to Hide, The New York Times, Feb. 5, 2004.
Specification of the Bluetooth System; Specification vol. 1; Wireless connections made easy; Core; V1.0 B, Dec. 1, 1999 (1082 pgs.).
Wireless Services from AccuWeather.com, http://www.accuweather.com/iwxpage/adc/mobile/messenger.htm, Sep. 9, 2004.
Yahoo Finance, Private Inventor Files Patent Application for Telematics-Based Public and Emergency First Responders Safety Advisory System, Feb. 2004.
Related Publications (1)
Number Date Country
20160351198 A1 Dec 2016 US
Provisional Applications (2)
Number Date Country
60147057 Aug 1999 US
60145378 Jul 1999 US
Continuations (3)
Number Date Country
Parent 14173426 Feb 2014 US
Child 15223977 US
Parent 11205891 Aug 2005 US
Child 14173426 US
Parent 09624300 Jul 2000 US
Child 10315886 US
Continuation in Parts (1)
Number Date Country
Parent 10315886 Dec 2002 US
Child 11205891 US