The following relates to the field of telecommunications and more specifically to embodiments of a device, system, and method for conducting real time messaging using speech-to-text conversion that can identify, prioritize, and authenticate incoming communications and voice commands.
Current telecommunications systems that include speech-to-text capabilities are inaccurate and cannot be customized to perform certain functions automatically. These devices require a physical interaction with the device to perform certain desired functions. Oftentimes, any individual can access the device and set it to perform functions, including speech-to-text conversion of voice commands. For instance, current speech-to-text devices do not customize themselves based on the primary user(s), nor can they automatically authenticate the user based purely on the speech produced by the user. Furthermore, current telecommunications devices are not selective when performing various functions of the device, such as selectively managing or allowing incoming messages.
Thus, a need exists for a device, system, and method for speech-to-text communication that allows users to authorize, prioritize and customize speech-to-text functionality by identifying the user's vocal signature and selectively managing incoming communications.
A first aspect relates to a computing device comprising: a receiver coupled to a processor for receiving an electronic communication from a separate computing device, a local storage medium coupled to the processor, the local storage medium storing an identification system for identifying the electronic communication received from the separate computing device, wherein the processor notifies a user of the identified information of the electronic communication, and a voice user interface coupled to the processor for receiving a voice communication from the user in response to the notifying of the identified information, without a physical interaction between the user and the computing device, wherein the processor determines an action based on the voice communication.
A second aspect relates to a computing device comprising: a receiver coupled to a processor for receiving an electronic communication from a separate computing device, and a local storage medium coupled to the processor, the local storage medium storing an identification system for identifying the electronic communication received from the separate computing device, and a priority database that contains a priority level specific to a source of the identified information, wherein, in response to the receiving the electronic communication from the separate computing system, the processor accesses the priority database, and if the priority level equals or exceeds a pre-determined value, a user is notified by the processor that the electronic communication is being received, and if the priority level is below the pre-determined value, the user is not notified that the electronic communication is being received from the separate computing device.
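The priority check recited in the second aspect can be summarized in a short sketch. This is an illustrative reading under assumed names (`PriorityDatabase`, `should_notify`, and the threshold value are not from the disclosure); it shows only the comparison of a source-specific priority level against the pre-determined value.

```python
# Illustrative sketch of the second aspect's priority check. All names and the
# threshold value are assumptions for illustration, not the patented design.

PRIORITY_THRESHOLD = 5  # stands in for the "pre-determined value"

class PriorityDatabase:
    """Maps a communication source (e.g. a phone number) to a priority level."""
    def __init__(self, levels):
        self._levels = dict(levels)

    def priority_of(self, source, default=0):
        # unknown sources fall back to a low default priority
        return self._levels.get(source, default)

def should_notify(db, source, threshold=PRIORITY_THRESHOLD):
    """Notify the user only if the source's priority equals or exceeds the threshold."""
    return db.priority_of(source) >= threshold
```

In this reading, a communication whose source falls below the threshold is still received; the device simply withholds the notification.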
A third aspect relates to a computing device comprising: a voice user interface coupled to a processor for receiving a voice communication from a user and converting the voice communication into a computer readable data, without a physical interaction between the user and the computing device, a local storage medium coupled to the processor, the local storage medium storing an authentication system for analyzing the computer readable data to authenticate a voice signature of the user communication with the computing device, and a transmitter for transmitting the computer readable data to a separate computing device if the voice signature is authenticated.
A fourth aspect relates to a method comprising: receiving, by a processor of the computing device, an electronic communication from a separate computing device, identifying, by the processor, the electronic communication received from the separate computing device, notifying, by the processor, the identified information of the electronic communication, receiving, through a voice user interface coupled to the processor, a voice communication from a first user in response to the notifying of the identified information, without a physical interaction with the computing device, and determining, by the processor, an action based on the voice communication.
A fifth aspect relates to a method comprising: receiving, by a processor of a computing device, an electronic communication from a separate computing device, identifying, by the processor, the electronic communication received from the separate computing device; and accessing, by the processor, a priority database to determine a priority level specific to a source of the identified information, in response to receiving the electronic communication from the separate computing system, wherein, if the priority level equals or exceeds a pre-determined value, a user is notified by the processor that the electronic communication is being received, wherein, if the priority level is below the pre-determined value, the user is not notified that the electronic communication is being received from the separate computing device.
Some of the embodiments will be described in detail, with reference to the following figures, wherein like designations denote like members, wherein:
A detailed description of the hereinafter described embodiments of the disclosed system and method is presented herein by way of exemplification and not limitation with reference to the Figures. Although certain embodiments of the present invention will be shown and described in detail, it should be understood that various changes and modifications may be made without departing from the scope of the appended claims. The scope of the present disclosure will in no way be limited to the number of constituting components, the materials thereof, the shapes thereof, the relative arrangement thereof, etc., which are disclosed simply as examples of embodiments of the present disclosure.
As a preface to the detailed description, it should be noted that, as used in this specification and the appended claims, the singular forms “a”, “an” and “the” include plural referents, unless the context clearly dictates otherwise.
Embodiments of processor 103 may be any device or apparatus capable of carrying out the instructions of a computer program. The processor 103 may carry out instructions of the computer program by performing arithmetical, logical, input, and output operations of the system. In some embodiments, the processor 103 may be a central processing unit (CPU), while in other embodiments the processor 103 may be a microprocessor. In an alternative embodiment of the computing system, the processor 103 may be a vector processor, while in other embodiments the processor may be a scalar processor. Additional embodiments may also include a cell processor or any other available processor. Embodiments of a computing device 100 may not be limited to a single processor 103 or a single processor type; rather, the device may include multiple processors and multiple processor types within a single system that may be in communication with each other.
Moreover, embodiments of the computing device 100 may also include a local storage medium 105. Embodiments of the local storage medium 105 may be a computer readable storage medium, and may include any form of primary or secondary memory, including magnetic tape, paper tape, punch cards, magnetic discs, hard disks, optical storage devices, flash memory, solid state memory such as a solid state drive, ROM, PROM, EPROM, EEPROM, RAM, and DRAM. Embodiments of the local storage medium 105 may be computer readable memory. Computer readable memory may be a tangible device used to store programs such as sequences of instructions or systems. In addition, embodiments of the local storage medium 105 may store data such as programmed state information, and general or specific databases. For instance, embodiments of the local storage medium 105 may contain, store, or otherwise include an identification system 210, a priority database 220, a general database 230, and an authentication system 240. Embodiments of the identification system 210, the priority database 220, and the authentication system 240 are described in greater detail infra. Moreover, the local storage medium 105 may store programs or data on a temporary or permanent basis. In some embodiments, the local storage medium 105 may be primary memory, while in alternative embodiments it may be secondary memory. Additional embodiments may contain a combination of both primary and secondary memory. Although embodiments of computing device 100 are described as including a local storage medium, the computing device 100 may also be coupled over a wireless or wired network to a remote database or remote storage medium that contains embodiments of the identification system 210, the priority database 220, the general database 230, and the authentication system 240.
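One way to picture the contents of the local storage medium described above is as a small record holding the four stored components. The field names below mirror reference numerals 210 through 240, but the data layout itself is purely an assumption for illustration, not the disclosure's implementation.

```python
# Minimal sketch of the local storage medium's contents (210-240). The dict
# layouts are assumed for illustration only.
from dataclasses import dataclass, field

@dataclass
class LocalStorage:
    identification: dict = field(default_factory=dict)    # 210: source -> identified info
    priorities: dict = field(default_factory=dict)        # 220: source -> priority level
    general: dict = field(default_factory=dict)           # 230: general database
    voice_signatures: dict = field(default_factory=dict)  # 240: user -> enrolled signature
```

As the paragraph notes, the same four components could equally live in a remote database reached over a wired or wireless network.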
Moreover, embodiments of local storage medium 105 may be primary memory that includes addressable semiconductor memory such as flash memory, ROM, PROM, EPROM, EEPROM, RAM, DRAM, SRAM, and combinations thereof. Embodiments of a computing device 100 that includes secondary memory may include magnetic tape, paper tape, punch cards, magnetic discs, hard disks, and optical storage devices. Furthermore, additional embodiments using a combination of primary and secondary memory may further utilize virtual memory. In an embodiment using virtual memory, a computing device 100 may move the least used pages of primary memory to a secondary storage device. In some embodiments, the secondary storage device may save the pages as swap files or page files. In a system using virtual memory, the swap files or page files may be retrieved by the primary memory as needed.
Moreover, embodiments of the computing device 100 may include a voice user interface 108. Embodiments of a voice user interface 108 may be a speech recognition platform that can convert an analog signal or human voice communication/signal to a digital signal to produce a computer readable format in real time. One example of a computer readable format is a text format. Embodiments of the voice user interface 108 may continually process incoming audio and may be programmed to recognize one or more triggers, such as a keyword or command from the user operating the computing device 100. For example, embodiments of the voice user interface 108 coupled to the processor 103 may receive a voice communication from a user without a physical interaction between the user and the device 100. Because the voice user interface may continually process incoming audio, once the voice user interface 108 recognizes a trigger/command given by the user, the processor coupled thereto determines and/or performs a particular action. The continuous processing of audio may commence when the electronic communication is first received, or may continue so long as power is being supplied to the computing device 100. Furthermore, embodiments of the voice user interface 108 may continuously collect and process incoming audio through one or more microphones of the computing device 100. However, external or peripheral accessories that are wired or wirelessly connected to the computing device 100 may also collect audio for processing by the processor 103 of the computing device 100. The collected and processed audio may be the voice of the user of the computing device 100, and the computing device 100 may have a variable range for collecting the audio.
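The trigger-recognition step described above can be sketched once the audio has already been transcribed to text. The speech-to-text conversion itself is out of scope here; `recognize_trigger` and the trigger-to-action table are illustrative names, not the disclosure's vocabulary.

```python
# Hedged sketch of continuous trigger recognition over transcribed audio.
# Trigger words and action names are assumptions for illustration.

TRIGGERS = {
    "reply": "compose_reply",
    "ignore": "dismiss",
    "read": "read_aloud",
}

def recognize_trigger(transcribed_words, triggers=TRIGGERS):
    """Scan a stream of transcribed words for the first known trigger
    and return the action the processor should perform, else None."""
    for word in transcribed_words:
        action = triggers.get(word.lower())
        if action is not None:
            return action
    return None
```

In a continuously listening device, this scan would run over each new chunk of transcription as it arrives, so the user never has to physically interact with the device.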
Embodiments of the computing device 100 executing the steps of software may authenticate, prioritize, and/or analyze both an incoming voice communication from a first user and electronic data coming from a second user. Moreover, embodiments of the computing system 10 may also convert voice signals from a user to computer readable data, such as text, for transmitting to a second electronic device, such as an electronic device operated by the second user. Embodiments of an electronic communication from a separate computing device or system 401, 402, 403 may be an SMS message, an MMS message, a text message, an email, a radio link, a signal from the Internet, a satellite communication, a signal sent over a cellular network, Wi-Fi network, or Bluetooth® network, or any communication using an electrical signal or electromagnetic waves.
Embodiments of the computing device 100 running the software described herein may execute or implement the steps of receiving, by a processor 103 of the computing device 100, an electronic communication from a separate computing device 401, identifying, by the processor 103, the electronic communication received from the separate computing device 401, notifying, by the processor 103, the identified information of the electronic communication, receiving, through a voice user interface 108 coupled to the processor 103, a voice communication from a first user in response to the notifying of the identified information, without a physical interaction with the computing device, and determining, by the processor 103, an action based on the voice communication. Further embodiments of the computing device 100 running software may execute or implement the steps of receiving, by a processor 103 of a computing device 100, an electronic communication from a separate computing device 401, identifying, by the processor 103, the electronic communication received from the separate computing device 401, accessing, by the processor 103, a priority database 220 to determine a priority level specific to a source of the identified information, in response to receiving the electronic communication from the separate computing system 401, wherein, if the priority level equals or exceeds a pre-determined value, a user may be notified by the processor 103 that the electronic communication is being received, and wherein, if the priority level is below the pre-determined value, the user may not be notified that the electronic communication is being received from the separate computing device 401.
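The first method recited above (receive, identify, notify, accept a voice response, determine an action) can be laid out end to end as a sketch. Every function and contact name here is a stand-in for a component of the disclosure, chosen for illustration, not the actual implementation.

```python
# End-to-end sketch of the receive/identify/notify/respond/act method.
# All names (handle_incoming, get_voice_command, the command words) are
# illustrative assumptions.

def handle_incoming(message, contacts, get_voice_command):
    # identify the electronic communication from its source
    sender = contacts.get(message["source"], "unknown sender")
    # notify the user of the identified information
    notification = f"Message from {sender}: {message['body']}"
    # receive a voice communication in response, with no physical interaction
    command = get_voice_command(notification)
    # determine an action based on the voice communication
    if command == "read":
        return ("read_aloud", message["body"])
    if command == "ignore":
        return ("dismiss", None)
    return ("await_command", None)
```

The second method differs only in inserting the priority-database lookup between the identifying and notifying steps, suppressing the notification when the source's level falls below the pre-determined value.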
While this disclosure has been described in conjunction with the specific embodiments outlined above, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, the preferred embodiments of the present disclosure as set forth above are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the invention, as required by the following claims. The claims provide the scope of the coverage of the invention and should not be limited to the specific examples provided herein.
Number | Date | Country | |
---|---|---|---|
20150229756 A1 | Aug 2015 | US |