One method of inputting text into an electronic device is to use a speech-to-text engine that converts a user's speech into text and displays the text on an electronic display. Accuracy of speech-to-text conversion has improved over the years, but sometimes the speech-to-text engine inserts erroneous characters or words when a user attempts to input punctuation using the speech-to-text engine.
An apparatus for using a gesture to insert a character is disclosed. A method and computer program product also perform the functions of the apparatus. The apparatus includes a processor and a memory that stores code executable by the processor. The executable code causes the processor to receive speech input from a user of an electronic device, convert the speech to text, receive a gesture from the user, associate the received gesture with a character, and input the character into the text.
A method for using a gesture to insert a character includes receiving, by use of a processor, speech input from a user of an electronic device, converting, by use of a processor, the speech to text, receiving, by use of a processor, a gesture from the user, associating, by use of a processor, the received gesture with a character, and inputting, by use of a processor, the character into the text.
A program product for using a gesture to insert a character includes a computer readable storage medium that stores code executable by a processor, where the executable code includes code to perform receiving speech input from a user of an electronic device, converting the speech to text, receiving a gesture from the user, associating the received gesture with a character, and inputting the character into the text.
A more particular description of the embodiments briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Understanding that these drawings depict only some embodiments and are not therefore to be considered to be limiting of scope, the embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
As will be appreciated by one skilled in the art, aspects of the embodiments may be embodied as a system, method or program product. Accordingly, embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, embodiments may take the form of a program product embodied in one or more computer readable storage devices storing machine readable code, computer readable code, and/or program code, referred to hereafter as code. The storage devices may be tangible, non-transitory, and/or non-transmission. The storage devices may not embody signals. In a certain embodiment, the storage devices only employ signals for accessing code.
Many of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.
Modules may also be implemented in code and/or software for execution by various types of processors. An identified module of code may, for instance, comprise one or more physical or logical blocks of executable code which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
Indeed, a module of code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different computer readable storage devices. Where a module or portions of a module are implemented in software, the software portions are stored on one or more computer readable storage devices.
Any combination of one or more computer readable media may be utilized. The computer readable medium may be a computer readable storage medium. The computer readable storage medium may be a storage device storing the code. The storage device may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, an infrared storage device, a holographic storage device, a micromechanical storage device, or a semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
More specific examples (a non-exhaustive list) of the storage device would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Code for carrying out operations for embodiments may be written in any combination of one or more programming languages including an object oriented programming language such as Python, Ruby, Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the “C” programming language, or the like, and/or machine languages such as assembly languages. The code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment, but mean “one or more but not all embodiments” unless expressly specified otherwise. The terms “including,” “comprising,” “having,” and variations thereof mean “including but not limited to,” unless expressly specified otherwise. An enumerated listing of items does not imply that any or all of the items are mutually exclusive, unless expressly specified otherwise. The terms “a,” “an,” and “the” also refer to “one or more” unless expressly specified otherwise.
Furthermore, the described features, structures, or characteristics of the embodiments may be combined in any suitable manner. In the following description, numerous specific details are provided, such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments. One skilled in the relevant art will recognize, however, that embodiments may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of an embodiment.
Aspects of the embodiments are described below with reference to schematic flowchart diagrams and/or schematic block diagrams of methods, apparatuses, systems, and program products according to embodiments. It will be understood that each block of the schematic flowchart diagrams and/or schematic block diagrams, and combinations of blocks in the schematic flowchart diagrams and/or schematic block diagrams, can be implemented by code. The code may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the schematic flowchart diagrams and/or schematic block diagrams block or blocks.
The code may also be stored in a storage device that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the storage device produce an article of manufacture including instructions which implement the function/act specified in the schematic flowchart diagrams and/or schematic block diagrams block or blocks.
The code may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the code which executes on the computer or other programmable apparatus provides processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The schematic flowchart diagrams and/or schematic block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, systems, methods and program products according to various embodiments. In this regard, each block in the schematic flowchart diagrams and/or schematic block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions of the code for implementing the specified logical function(s).
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more blocks, or portions thereof, of the illustrated Figures.
Although various arrow types and line types may be employed in the flowchart and/or block diagrams, they are understood not to limit the scope of the corresponding embodiments. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the depicted embodiment. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted embodiment. It will also be noted that each block of the block diagrams and/or flowchart diagrams, and combinations of blocks in the block diagrams and/or flowchart diagrams, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and code.
The description of elements in each figure may refer to elements of preceding figures. Like numbers refer to like elements in all figures, including alternate embodiments of like elements.
An apparatus for using a gesture to insert a character is disclosed. A method and computer program product also perform the functions of the apparatus. The apparatus includes a processor and a memory that stores code executable by the processor. The executable code causes the processor to receive speech input from a user of an electronic device, convert the speech to text, receive a gesture from the user, associate the received gesture with a character, and input the character into the text.
In one embodiment, associating a character with the received gesture is based on semantics of the text converted from the speech including a likelihood that the character belongs at an insertion point in the text converted from the speech. In another embodiment, the semantics includes determining a likelihood that the character is expected after text converted from the speech. In another embodiment, the character includes a punctuation character or a special character, where the special character is a character other than punctuation and alphabet characters.
In some embodiments, the gesture is input through a camera of the electronic device. In other embodiments, the gesture is input through a gyroscope, a proximity sensor and/or an accelerometer of the electronic device. In other embodiments, the gesture is input through a portion of a touchscreen of the electronic device, where the portion of the touchscreen is a portion of the touchscreen not displaying keyboard characters. In another embodiment, inputting the character into the text includes inputting the character after the text converted from speech and before additional text converted from additional speech.
In some embodiments, the memory includes a data structure with a plurality of characters and a stored gesture associated with each character. In the embodiment, associating a character with the received gesture includes looking up a stored gesture matching the received gesture and retrieving the character associated with the stored gesture. In other embodiments, the speech is received through a microphone in communication with the electronic device.
A method for using a gesture to insert a character includes receiving, by use of a processor, speech input from a user of an electronic device, converting, by use of a processor, the speech to text, receiving, by use of a processor, a gesture from the user, associating, by use of a processor, the received gesture with a character, and inputting, by use of a processor, the character into the text.
In some embodiments, associating a character with the received gesture is based on semantics of the text converted from the speech including a likelihood that the character belongs at an insertion point in the text converted from the speech. In some embodiments, the gesture is input through a camera of the electronic device, a gyroscope of the electronic device, an accelerometer of the electronic device and/or a portion of a touchscreen of the electronic device, where the portion of the touchscreen is a portion of the touchscreen not displaying keyboard characters. In some embodiments, inputting the character into the text includes inputting the character after the text converted from speech and before additional text converted from additional speech.
In some embodiments, the method includes providing a data structure with a plurality of characters and a stored gesture associated with each character, where associating a character with the received gesture includes looking up a stored gesture matching the received gesture and retrieving the character associated with the stored gesture.
A program product for using a gesture to insert a character includes a computer readable storage medium that stores code executable by a processor, where the executable code includes code to perform receiving speech input from a user of an electronic device, converting the speech to text, receiving a gesture from the user, associating the received gesture with a character, and inputting the character into the text.
In some embodiments, associating a character with the received gesture is based on semantics of the text converted from the speech including a likelihood that the character belongs at an insertion point in the text converted from the speech. In other embodiments, the gesture is input through a camera of the electronic device, a gyroscope of the electronic device, an accelerometer of the electronic device and/or a portion of a touchscreen of the electronic device, where the portion of the touchscreen is a portion of the touchscreen not displaying keyboard characters. In some embodiments, inputting the character into the text includes inputting the character after the text converted from speech and before additional text converted from additional speech.
The system 100 includes one or more electronic devices 102, such as a smart phone, a tablet computer, a smart watch, a fitness band or other wearable activity tracking device, a gaming device, a laptop computer, a virtual reality headset, smart glasses, a personal digital assistant, a digital camera, a video camera, and the like. Typically, an electronic device 102 is a mobile electronic device and is portable by a user. In other embodiments, the electronic device 102 is a desktop computer, a server, a workstation, a security system, a set-top box, a gaming console, a smart TV, etc. and may have components suitable for receiving a gesture.
Each electronic device 102 is capable of converting speech to text, receiving a gesture from the user, associating the gesture with a character, and inserting the character into the text. In some embodiments, the gesture is input by the user and the electronic device 102 is capable of detecting the gesture using one or more sensors. An electronic device 102, in some embodiments, includes one or more sensors that are able to detect a gesture of the user. The sensors are described further below with regard to the electronic device 102.
The electronic device 102 includes a processor (e.g., a central processing unit (“CPU”), a processor core, a field programmable gate array (“FPGA”) or other programmable logic device, an application specific integrated circuit (“ASIC”), a controller, a microcontroller, and/or another semiconductor integrated circuit device) and memory 204, such as a volatile memory, a non-volatile storage medium, and the like.
In certain embodiments, the electronic devices 102 are communicatively coupled to one or more other electronic devices 102 and/or to one or more servers 108 over a data network 106, described below. The electronic devices 102, in a further embodiment, may include processors, processor cores, an FPGA, an ASIC, etc. that are configured to execute various programs, program code, applications, instructions, functions, and/or the like. The electronic devices 102 may include speakers, or other hardware, configured to produce sounds, such as those played during startup, execution of an application, etc.
The gesture module 104 receives a gesture during a speech-to-text operation of the electronic device 102, associates the gesture with a character, and inserts the character into the text. The gesture differs from other input methods of the electronic device 102 used to insert text, such as typing, selecting characters on a display of characters, etc. For example, the gesture is not typing on a keyboard or keyboard display or input from a mouse. The gesture module 104 solves the technical problem of inputting punctuation, special characters, etc. into an electronic device 102 during a speech-to-text operation where speaking the character may be misinterpreted. For example, during a speech-to-text operation a user may want to insert an ellipsis, which includes three periods, into the text, and the gesture module 104 avoids the problem of a speech-to-text engine writing “dot dot dot” instead of “...” as the user desires.
In various embodiments, the gesture module 104 may be embodied as a hardware appliance that can be installed or deployed on an electronic device 102, or elsewhere on the data network 106. In certain embodiments, the gesture module 104 may include a hardware device such as a secure hardware dongle or other hardware appliance device (e.g., a set-top box, a network appliance, or the like) that attaches to a device such as a laptop computer, a server 108, a tablet computer, a smart phone, a security system, or the like, either by a wired connection (e.g., a universal serial bus (“USB”) connection) or a wireless connection (e.g., Bluetooth®, Wi-Fi, near-field communication (“NFC”), or the like); that attaches to an electronic display device (e.g., a television or monitor using an HDMI port, a DisplayPort port, a Mini DisplayPort port, VGA port, DVI port, or the like); and/or the like. A hardware appliance of the gesture module 104 may include a power interface, a wired and/or wireless network interface, a graphical interface that attaches to a display, and/or a semiconductor integrated circuit device as described below, configured to perform the functions described herein with regard to the gesture module 104.
The gesture module 104, in such an embodiment, may include a semiconductor integrated circuit device (e.g., one or more chips, die, or other discrete logic hardware), or the like, such as an FPGA or other programmable logic, firmware for an FPGA or other programmable logic, microcode for execution on a microcontroller, an ASIC, a processor, a processor core, or the like. In one embodiment, the gesture module 104 may be mounted on a printed circuit board with one or more electrical lines or connections (e.g., to volatile memory, a non-volatile storage medium, a network interface, a peripheral device, a graphical/display interface, or the like). The hardware appliance may include one or more pins, pads, or other electrical connections configured to send and receive data (e.g., in communication with one or more electrical lines of a printed circuit board or the like), and one or more hardware circuits and/or other electrical circuits configured to perform various functions of the gesture module 104.
The semiconductor integrated circuit device or other hardware appliance of the gesture module 104, in certain embodiments, includes and/or is communicatively coupled to one or more volatile memory media, which may include but is not limited to random access memory (“RAM”), dynamic RAM (“DRAM”), cache, or the like. In one embodiment, the semiconductor integrated circuit device or other hardware appliance of the gesture module 104 includes and/or is communicatively coupled to one or more non-volatile memory media, which may include but is not limited to: NAND flash memory, NOR flash memory, nano random access memory (nano RAM or NRAM), nanocrystal wire-based memory, silicon-oxide based sub-10 nanometer process memory, graphene memory, Silicon-Oxide-Nitride-Oxide-Silicon (“SONOS”), resistive RAM (“RRAM”), programmable metallization cell (“PMC”), conductive-bridging RAM (“CBRAM”), magneto-resistive RAM (“MRAM”), dynamic RAM (“DRAM”), phase change RAM (“PRAM” or “PCM”), magnetic storage media (e.g., hard disk, tape), optical storage media, or the like.
The data network 106, in one embodiment, includes a digital communication network that transmits digital communications. The data network 106 may include a wireless network, such as a wireless cellular network, a local wireless network, such as a Wi-Fi network, a Bluetooth® network, a near-field communication (“NFC”) network, an ad hoc network, and/or the like. The data network 106 may include a wide area network (“WAN”), a storage area network (“SAN”), a local area network (LAN), an optical fiber network, the internet, or other digital communication network. The data network 106 may include two or more networks. The data network 106 may include one or more servers, routers, switches, and/or other networking equipment. The data network 106 may also include one or more computer readable storage media, such as a hard disk drive, an optical drive, non-volatile memory, RAM, or the like.
The wireless connection may be a mobile telephone network. The wireless connection may also employ a Wi-Fi network based on any one of the Institute of Electrical and Electronics Engineers (“IEEE”) 802.11 standards. Alternatively, the wireless connection may be a Bluetooth® connection. In addition, the wireless connection may employ a Radio Frequency Identification (“RFID”) communication including RFID standards established by the International Organization for Standardization (“ISO”), the International Electrotechnical Commission (“IEC”), the American Society for Testing and Materials® (“ASTM®”), the DASH7™ Alliance, and EPCGlobal™.
Alternatively, the wireless connection may employ a ZigBee® connection based on the IEEE 802 standard. In one embodiment, the wireless connection employs a Z-Wave® connection as designed by Sigma Designs®. Alternatively, the wireless connection may employ an ANT® and/or ANT+® connection as defined by Dynastream® Innovations Inc. of Cochrane, Canada.
The wireless connection may be an infrared connection including connections conforming at least to the Infrared Physical Layer Specification (“IrPHY”) as defined by the Infrared Data Association® (“IrDA®”). Alternatively, the wireless connection may be a cellular telephone network communication. All standards and/or connection types include the latest version and revision of the standard and/or connection type as of the filing date of this application.
The electronic device 102 includes a processor 202 and memory 204 in communication with the processor 202. The processor 202 may be as described above and may execute commands stored in memory 204.
The memory 204 may be implemented as a volatile memory and/or a non-volatile memory as described above. In one embodiment, the gesture module 104 is loaded into memory 204 before execution, for example from a hard disk drive to RAM. In another embodiment, the gesture module 104 is stored in solid-state storage, ROM, etc. that is accessible by the processor 202. One of skill in the art will recognize other ways to implement a processor 202 and memory 204 to store and execute the gesture module 104.
The electronic device 102, in some embodiments, includes a touchscreen 206 or other electronic display (not shown) in communication with the processor 202 and other devices within the electronic device 102. The touchscreen 206 or other electronic display may display text converted from speech of a user of the electronic device 102. The touchscreen 206, in one embodiment, may receive a gesture from a user. In the embodiment, the gesture includes the user touching, swiping, etc. the touchscreen 206 while gesturing to input a character. The touchscreen 206 or other electronic display may be integrated into the electronic device 102 or may be separate from the electronic device 102 and in communication with the electronic device 102 wirelessly or through a cable.
The electronic device 102, in some embodiments, includes a gyroscope 208, an accelerometer 210, a proximity sensor 212 or other motion sensing device that may be used to detect a gesture from the user in the form of movement of the electronic device 102. For example, the user may move the electronic device 102 in a particular direction, may shake the electronic device 102, may tilt the electronic device 102, etc. as part of the gesture. The gesture may include a pattern of movements.
The electronic device 102, in some embodiments, includes a camera 214 that may be pointed at the user to receive a gesture from the user. For example, the gesture may include a hand signal, movement of a body part, etc. as a way for the user to input a character into text. In one embodiment, the camera 214 is integrated with the electronic device 102, such as a forward facing camera 214 that is on a same side as a touchscreen 206 of the electronic device 102. In another embodiment, the camera 214 is separate from the electronic device 102 and is connected to the electronic device 102 wirelessly or through a cable. The camera 214 may be controlled and directed by the processor 202 during execution of instructions that are part of the gesture module 104.
The electronic device 102, in some embodiments, includes a microphone 216, which may receive voice input from the user, for example during a speech-to-text operation. The microphone 216 may be integrated with or separate from the electronic device 102. The microphone 216, in one embodiment, may receive input from the user during execution of the gesture module 104. In one embodiment, a gesture may be received by a variety of sensors. For example, a gesture may include movement of a hand or other body part of the user along with movement of the electronic device 102 so that the camera 214, gyroscope 208, accelerometer 210, proximity sensor 212, etc. may receive input for the gesture.
The electronic device 102 includes communication hardware 218, such as a network adapter that may communicate with the server 108 or other electronic devices 102, as described above. The communication hardware 218 may allow the user to communicate, for example, during a phone call or through digital communications. The communication hardware 218 may include one or more input devices, such as a keyboard, a mouse, etc. The keyboard and/or mouse may be integrated with the electronic device 102, such as keys on the electronic device 102, a touchpad, a keyboard on the touchscreen 206, etc. or may be separate from the electronic device 102.
The electronic device 102 includes a speech-to-text engine 220 that receives speech of a user of the electronic device 102, for example through the microphone 216, and converts the speech to text. The speech-to-text engine 220, in one embodiment, is a hardware device, which may include a processor, memory, hardware circuits, etc. and the hardware device may be connected to the microphone 216. In some embodiments, the speech-to-text engine 220 is located in memory 204 and includes executable code stored in memory 204 where the executable code may be executed on the processor 202.
The speech-to-text engine 220 may be integrated with the processor 202 and may include other components useful in converting speech to text. The speech-to-text engine 220 interprets speech of a user to determine appropriate text that corresponds to speech. The speech-to-text engine 220 uses semantic analysis and other semantic tools to aid in conversion of speech to text. The speech-to-text engine 220 may include adaptive learning methods that recognize speech of a particular user to improve accuracy of speech-to-text conversion.
The gesture module 104 includes a speech receiver module 302 that receives speech input from the user of an electronic device 102. For example, the speech receiver module 302 may be part of the speech-to-text engine 220. In one embodiment, the electronic device 102 is in a speech-to-text mode while receiving speech input from the user. For example, the user may press a microphone button or other object displayed on a touchscreen 206 or other electronic display to signal the speech-to-text engine 220 and/or speech receiver module 302 to start receiving and analyzing speech of a user of the electronic device 102.
For instance, the user may be inputting text into a messaging application, a writing application, a note taking application, or other application where the user inputs text. The speech receiver module 302 may ignore speech prior to entering the speech-to-text mode and may then start receiving speech. In another instance, entering a speech-to-text mode is automatic without input from the user, for example when a particular application starts up. In another embodiment, the user may start an application and then may enter the speech-to-text mode.
The gesture module 104 includes a speech conversion module 304 that converts the speech to text. In one embodiment, the speech conversion module 304 is part of the speech-to-text engine 220 and receives speech from the speech receiver module 302 and then converts the received speech to text. In one embodiment, the speech conversion module 304 displays the text converted from speech on the touchscreen 206 or other electronic display. For example, the speech conversion module 304 displays text on a continuous basis as the user speaks so the user is able to view progress of the speech-to-text process. The speech conversion module 304 may input the text into a message of the user in a messaging application, as text on a page of a writing application, as text in an email, etc.
The gesture module 104 includes a gesture receiver module 306 that receives a gesture from the user of the electronic device 102. The gesture is received through a sensor of the electronic device 102. In one embodiment, the gesture receiver module 306 receives the gesture while the electronic device 102 is in a speech-to-text mode. In the embodiment, receiving a gesture is part of the speech to text operation.
In one embodiment, the sensor of the electronic device 102 includes a touchscreen 206 and the gesture receiver module 306 receives the gesture by sensing touches and/or swipes across the touchscreen 206 in a particular pattern. For example, the user may tap and/or swipe the touchscreen 206 in a particular pattern to input a particular gesture and the gesture receiver module 306 may detect the gesture. In the embodiment, the gesture excludes tapping, swiping, or otherwise selecting icons or similar items displayed on the touchscreen 206, such as particular keys on a virtual keyboard on the touchscreen 206. For example, the gesture does not include tapping a punctuation character or other special character or group of characters. In one embodiment, the special character is a character other than punctuation and alphabet characters. The gesture, in one embodiment, is performed in regions of the touchscreen 206 without displayed keys, controls, etc.
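As a minimal sketch only (written in Python; the screen dimensions, the keyboard region, and all names below are assumptions made for illustration), touches may be filtered to the portion of the touchscreen 206 not displaying keyboard characters and collected into a stroke for later matching:

# Hypothetical sketch: keep only touches in the region of the touchscreen 206
# that is not displaying keyboard characters and collect them into a stroke
# that can later be matched to a stored gesture.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class TouchEvent:
    x: float   # screen coordinates in pixels
    y: float
    t: float   # timestamp in seconds (kept for later timing comparisons)

# Assumed layout: the virtual keyboard occupies the bottom 40% of the screen.
SCREEN_HEIGHT = 1920
KEYBOARD_TOP_Y = SCREEN_HEIGHT * 0.6

def outside_keyboard(event: TouchEvent) -> bool:
    """True if the touch landed above the keyboard area."""
    return event.y < KEYBOARD_TOP_Y

def collect_stroke(events: List[TouchEvent]) -> Optional[List[Tuple[float, float]]]:
    """Return the (x, y) path of a candidate gesture, or None if any touch fell
    on the keyboard (those touches are treated as ordinary key input)."""
    if not events or not all(outside_keyboard(e) for e in events):
        return None
    return [(e.x, e.y) for e in events]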
In another embodiment, the sensor of the electronic device 102 includes an accelerometer 210, a proximity sensor 212, a gyroscope 208, etc. and the gesture receiver module 306 receives the gesture by sensing one or more movements of the electronic device 102 in a particular pattern. The movements may include tilting, shaking, rotating, etc. the electronic device 102 in a particular pattern where each pattern corresponds to one or more characters, such as a period, a question mark, an exclamation point, a special character, etc. For example, tilting the electronic device 102 to the right three times and once to the left may correlate to an ellipsis.
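By way of illustration only, the following Python sketch shows one way such motion patterns might be reduced to a symbolic tilt sequence and mapped to a character; the threshold, the pattern table, and the function names are assumptions for this sketch rather than part of any embodiment:

from typing import Dict, List, Optional, Tuple

# Hypothetical mapping of tilt patterns to characters; a real table would
# likely be user-configurable.
MOTION_PATTERNS: Dict[Tuple[str, ...], str] = {
    ("right", "right", "right", "left"): "...",  # ellipsis, per the example above
    ("down",): ".",
    ("down", "up"): "?",
}

def classify_tilts(angular_rates: List[Tuple[float, float]]) -> Tuple[str, ...]:
    """Reduce (roll_rate, pitch_rate) samples from the gyroscope 208 to a coarse
    sequence of tilts; a new tilt is counted only after a quiet period."""
    THRESHOLD = 1.0  # rad/s, assumed
    tilts: List[str] = []
    in_tilt = False
    for roll, pitch in angular_rates:
        if abs(roll) < THRESHOLD and abs(pitch) < THRESHOLD:
            in_tilt = False            # a quiet sample ends the current tilt
            continue
        if abs(roll) >= abs(pitch):
            direction = "right" if roll > 0 else "left"
        else:
            direction = "down" if pitch > 0 else "up"
        if not in_tilt:                # a new tilt begins after a quiet period
            tilts.append(direction)
            in_tilt = True
    return tuple(tilts)

def character_for_motion(angular_rates: List[Tuple[float, float]]) -> Optional[str]:
    return MOTION_PATTERNS.get(classify_tilts(angular_rates))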
In another embodiment, the sensor of the electronic device 102 includes the camera 214 and the gesture receiver module 306 receives the gesture by detecting a finger, a hand, etc. of the user moving in a particular pattern. In the embodiment, activating the speech-to-text mode activates the camera 214. The camera 214 may receive a gesture that corresponds to a character. For example, the user may point an index finger at the electronic device 102, to the touchscreen 206 or another electronic display, and the user may move the index finger toward the electronic device 102, the touchscreen 206, the electronic display, the camera 214, etc. and the gesture may correlate to a period. In another embodiment, a dot gesture with a slash gesture may correlate to a semicolon.
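A rough sketch of classifying such a camera gesture is given below; it assumes some external detector (not specified here) already supplies fingertip positions and apparent sizes from frames of the camera 214, and the thresholds are illustrative assumptions:

from typing import List, Tuple

# Each sample is (x, y, apparent_radius_px) for the detected fingertip; how the
# fingertip is detected (e.g., by a hand-landmark model) is assumed and out of
# scope for this sketch.
Fingertip = Tuple[float, float, float]

def is_dot_gesture(samples: List[Fingertip], growth: float = 1.5) -> bool:
    """Classify a 'period' gesture: the fingertip stays roughly in place while
    its apparent size grows, i.e., it moves toward the camera 214."""
    if len(samples) < 2:
        return False
    (x0, y0, r0), (x1, y1, r1) = samples[0], samples[-1]
    stayed_put = abs(x1 - x0) < 40 and abs(y1 - y0) < 40   # pixels, assumed
    approached = r0 > 0 and (r1 / r0) >= growth
    return stayed_put and approached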
In one embodiment, the sensor of the electronic device 102 includes a microphone 216 and the gesture receiver module 306 receives the gesture by detecting a particular sound or series of sounds. For example, the gesture receiver module 306 may detect a whistle, a clap or other sound other than language of the user. The sound may be a special sound that differs significantly from a user speaking a particular language.
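As one hedged illustration, a clap-like sound gesture might be distinguished from ordinary speech by its brief, high-amplitude transient; the thresholds below are assumptions, not measured values:

from typing import Sequence

def looks_like_clap(samples: Sequence[float], sample_rate: int = 16_000,
                    peak_threshold: float = 0.8, max_ms: float = 80.0) -> bool:
    """Heuristic sketch: a clap is a brief, high-amplitude burst. The samples
    are assumed normalized to [-1.0, 1.0]."""
    loud = [i for i, s in enumerate(samples) if abs(s) >= peak_threshold]
    if not loud:
        return False
    duration_ms = (loud[-1] - loud[0]) / sample_rate * 1000.0
    return duration_ms <= max_ms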
In some embodiments, a gesture is received by multiple sensors. For example, a gesture may include movement detected by the camera 214 and by the touchscreen 206. In another example, a gesture may include movements of the electronic device 102 as sensed by the gyroscope 208, accelerometer 210, etc. in combination with movements of a body part as sensed by the camera 214. One of skill in the art will recognize other combination gestures that use multiple sensors.
The gesture module 104 includes an association module 308 that associates the received gesture with a character. For example, the association module 308 may include a data structure such as a table, a database, a library, etc. with characters that are associated with gestures. The association module 308, in one embodiment, interprets the received gesture and converts the received gesture to a set of instructions, motions, etc. that are entered in a data structure where the set is linked to a particular character. For example, the set may include interpretations, such as “swipe right,” “point index finger,” “tilt electronic device to the right,” etc.
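A minimal sketch of the lookup the association module 308 might perform once sensor input has been reduced to such a symbolic set follows; the table contents and names are illustrative assumptions only:

from typing import Dict, Optional, Tuple

# Hypothetical gesture library: keys are symbolic interpretations of sensor
# input, values are the characters to insert.
GESTURE_LIBRARY: Dict[Tuple[str, ...], str] = {
    ("swipe right",): ",",
    ("point index finger", "move toward screen"): ".",
    ("tilt right", "tilt right", "tilt right", "tilt left"): "...",
    ("dot", "slash"): ";",
}

def associate(gesture: Tuple[str, ...]) -> Optional[str]:
    """Look up a stored gesture matching the received gesture and return the
    character associated with it, or None if nothing matches."""
    return GESTURE_LIBRARY.get(gesture)

# Example: associate(("swipe right",)) -> ","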
In another embodiment, each gesture includes previously recorded input from one or more sensors associated with one or more characters, and the association module 308 compares the previously recorded input with the received gesture. For example, for a particular character or set of characters a user may record a gesture with the camera 214, touchscreen 206, or sensors that react to motion, such as the gyroscope 208, accelerometer 210, proximity sensor 212, etc., and the association module 308 may associate the recorded sensor data with the character(s) in an operation prior to a speech-to-text operation. The previously recorded gesture may be input by the user or may be a feature provided with the gesture module 104. The association module 308 may then compare sensor data of the received gesture with the recorded sensor data and may select a character where the received gesture is deemed a match to the recorded sensor data associated with the character.
In one embodiment, the association module 308 may match sensor input with sensor data from the received gesture by looking at amplitudes and timing of the recorded sensor data and the received gesture sensor data. In another embodiment, the association module 308 compiles a gesture confidence score based on comparing data from various sensors and the association module 308 may associate the received gesture with a particular character when the gesture confidence score for that character is above a gesture confidence threshold.
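One possible, simplified scoring of amplitude and timing agreement between recorded sensor data and received gesture sensor data is sketched below; the resampling, normalization, and threshold value are assumptions made for illustration:

from typing import List

def resample(trace: List[float], n: int) -> List[float]:
    """Nearest-neighbour resample so two traces can be compared sample by sample."""
    if len(trace) == n:
        return list(trace)
    return [trace[int(i * len(trace) / n)] for i in range(n)]

def gesture_confidence(recorded: List[float], received: List[float]) -> float:
    """Score 0-100: 100 means the received amplitudes match the recorded gesture
    exactly; the normalization below is illustrative only."""
    n = min(len(recorded), len(received))
    if n == 0:
        return 0.0
    a, b = resample(recorded, n), resample(received, n)
    span = max(max(map(abs, a)), max(map(abs, b)), 1e-9)
    mean_err = sum(abs(x - y) for x, y in zip(a, b)) / n
    return max(0.0, 100.0 * (1.0 - mean_err / span))

# A received gesture might be treated as a match when its score meets an
# assumed threshold, e.g. gesture_confidence(...) >= GESTURE_CONFIDENCE_THRESHOLD.
GESTURE_CONFIDENCE_THRESHOLD = 60.0  # assumed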
In another embodiment, the association module 308 associates a character with the received gesture based on semantics of the text converted from the speech, including a likelihood that the character belongs at an insertion point in the text converted from the speech. In one example, the semantics include determining a likelihood that the character is expected after the text converted from the speech. The insertion point, in one embodiment, is at an end of the text converted from speech just before the gesture receiver module 306 received a gesture. In another embodiment, the insertion point is at a cursor in the text.
The association module 308 may analyze text before the insertion point to predict which character belongs at the insertion point. For example, the association module 308 may assign a character confidence score to each candidate character for the particular insertion point. In one example, the association module 308 uses a character confidence score between 1 and 100, and a period may have a character confidence score of 73, a question mark may have a character confidence score of 27, a comma may have a character confidence score of 19, etc. In the embodiment, the association module 308 may use the character confidence score when associating a character with a received gesture.
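The following toy sketch produces character confidence scores like those in the example above from the text before the insertion point; the heuristic (looking for question-starting words) and the reuse of the example numbers are purely illustrative:

import re
from typing import Dict, List

QUESTION_STARTERS = ("what", "who", "where", "when", "why", "how",
                     "is", "are", "do", "does", "can", "will")

def character_confidence(text_before_insertion: str) -> Dict[str, int]:
    """Rough heuristic scores (1-100) for the character expected at the insertion
    point; a real association module 308 might use a language model instead."""
    # Look only at the clause after the last sentence-ending punctuation mark.
    last_clause = re.split(r"[.!?]", text_before_insertion)[-1]
    words: List[str] = last_clause.strip().lower().split()
    scores = {".": 73, "?": 27, ",": 19}      # statement-like clause (example numbers above)
    if words and words[0] in QUESTION_STARTERS:
        scores = {"?": 73, ".": 27, ",": 19}  # question-like clause
    return scores

# character_confidence("That sounds fun. What is playing") -> {"?": 73, ".": 27, ",": 19}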
In another embodiment, the association module 308 uses both the character confidence score and the gesture confidence score in a combined confidence score to associate a character with a received gesture. For example, a gesture confidence score for a period may be relatively low because the user input a gesture for a period that is somewhat different than the recorded gesture for a period, but the character confidence score may be relatively high, so the combined confidence score is higher than a combined threshold and the association module 308 may then associate the period with the received gesture.
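Continuing the sketch, the two scores might be blended and compared against a combined threshold; the weighting and threshold values below are assumptions chosen so that a weak gesture score paired with a strong character score, as in the example above, still passes:

COMBINED_THRESHOLD = 55.0  # assumed

def combined_confidence(gesture_score: float, character_score: float,
                        gesture_weight: float = 0.6) -> float:
    """Weighted blend of the gesture confidence score and the character
    confidence score."""
    return gesture_weight * gesture_score + (1.0 - gesture_weight) * character_score

def accept_character(gesture_score: float, character_score: float) -> bool:
    """A weak gesture match (e.g. 45) can still be accepted when the semantics
    strongly expect the character (e.g. 73): 0.6*45 + 0.4*73 = 56.2 >= 55."""
    return combined_confidence(gesture_score, character_score) >= COMBINED_THRESHOLD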
The gesture module 104 includes a character input module 310 that inputs the character into the text. For example, the character input module 310 may insert the character in response to the association module 308 associating a character with the received gesture. In one embodiment, the character input module 310 inputs the character at the end of text converted from speech and before text from additional speech. In another embodiment, the character input module 310 inserts the character at a cursor location.
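A small sketch of the insertion itself, appending at the end of the converted text when no cursor position is available and otherwise inserting at the cursor, is shown below; the function name is illustrative:

from typing import Optional

def input_character(text: str, character: str, cursor: Optional[int] = None) -> str:
    """Insert the character at the cursor position, or append it to the end of
    the text converted so far when no cursor position is given."""
    if cursor is None:
        return text + character
    cursor = max(0, min(cursor, len(text)))
    return text[:cursor] + character + text[cursor:]

# input_character("That sounds fun. What is playing", "?")
#   -> "That sounds fun. What is playing?"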
The gesture module 104 is beneficial to prevent errors when inserting punctuation and other special characters from speech, where a typical speech-to-text engine 220 may transcribe the speech literally instead of inserting the intended punctuation or special character. For example, a user may enter a gesture associated with “π” so that the speech-to-text engine 220 does not enter “pie.” In another example, a user may enter a gesture associated with a period so that the character input module 310 enters a “.” instead of the word “period.”
A message window 406 may display messages 410 previously sent to the contact and messages 412 received from the contact. In the embodiment, messages received from the contact include an arrow to the left and messages sent by the user include an arrow to the right. The touchscreen 206 may also include a lower portion 408 that includes a camera icon 414 for attaching photographs or other media to a text message, a microphone icon 416, a message window 418 for entering messages, and a send button 420 for sending the message in the message window 418. The microphone icon 416 may be used to enter a speech-to-text mode. The electronic device 102 may include a button 422 to be used for various purposes, such as returning to a home screen, and may include other buttons, such as a “back” button, a windows button, etc. The electronic device 102 may also include a speaker 424, a proximity sensor 212, and a camera 214, as well as other buttons and sensors that are not shown.
In one embodiment, the user may press the microphone icon 416 to enter a speech-to-text mode where speech of the user is received through a microphone 216 (not shown) and by the speech receiver module 302 and is converted to text and input into the message window 418, for example by the speech conversion module 304. In the depicted embodiment, the message “That sounds fun. What is playing_” is displayed where a cursor, depicted with an underscore, is after the word “playing.” The user may gesture, for example with a hand signal or by moving the smartphone, to insert a question mark. The gesture corresponds to the question mark symbol and may be received through the gesture receiver module 306 using one or more sensors in the smartphone. The association module 308 associates the received gesture with a question mark and the character input module 310 inserts the question mark at the cursor. The user may then press the send button 420. The illustrated embodiment is one example and the gesture module 104 may be used for other devices and other applications.
If the method 600 determines 610 that the gesture matches a character or a group of characters, the method 600 associates the gesture with the matched character or group of characters and determines 614 a combined confidence score. The combined confidence score is based on a character confidence score and a gesture confidence score. The gesture confidence score, in one embodiment, is a measure of how close the received gesture is to a stored gesture associated with a character, and the character confidence score, in one embodiment, is a prediction of how likely the character is to follow the text.
The method 600 determines 616 if the combined confidence score is above a combined threshold. If the method 600 determines 616 that the combined confidence score is not above a combined threshold, the method 600 returns and receives 602 speech from the user. If the method 600 determines 616 that the combined confidence score is above a combined threshold, the method 600 inserts 618 the character(s) into the text, and the method 600 ends. In various embodiments, the method 600 may use the speech receiver module 302, the speech conversion module 304, the gesture receiver module 306, the association module 308 and/or the character input module 310.
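Pulling the earlier sketches together, the overall flow described for method 600 might look roughly like the following; the helper functions are the illustrative sketches defined above (not any particular embodiment), and already-transcribed strings stand in for the output of the speech-to-text engine 220:

from typing import List, Optional, Tuple

def speech_to_text_with_gestures(transcribed_segments: List[str],
                                 gestures: List[Optional[Tuple[str, ...]]]) -> str:
    """Illustrative end-to-end flow: each transcribed segment is appended to the
    text, and a gesture received after a segment may insert a character. Reuses
    the associate(), character_confidence(), accept_character() and
    input_character() sketches above, which are assumptions rather than required
    implementations."""
    text = ""
    for segment, gesture in zip(transcribed_segments, gestures):
        text = (text + " " + segment).strip()         # text converted from speech
        if gesture is None:
            continue                                  # no gesture: keep listening
        character = associate(gesture)                # look up the stored gesture
        if character is None:
            continue                                  # no match: back to receiving speech
        char_score = character_confidence(text).get(character, 10)
        gest_score = 80.0                             # stand-in for a sensor comparison score
        if accept_character(gest_score, char_score):
            text = input_character(text, character)   # insert at the end of the text
    return text

# speech_to_text_with_gestures(
#     ["that sounds fun", "what is playing"],
#     [None, ("point index finger", "move toward screen")])
#   -> "that sounds fun what is playing."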
Embodiments may be practiced in other specific forms. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.