One of the benefits of providing content to users over the Internet, and/or via stand-alone applications that consume web-based content, is that a relatively consistent user experience can be provided via this medium. Specifically, multiple users viewing content using different web browsers and/or different computing platforms may nonetheless have substantially identical experiences because the content is rendered from the same source code and/or interacted with using simple text input.
At the same time, the consistent input mechanisms used to interact with Internet and/or application content can be a limitation as users interact with such content using increasingly diverse devices and/or device platforms. More particularly, users may be limited to interacting with content via text input and/or mouse gestures, even though a device used to access the content may lack a mouse and/or a standard keyboard.
It is with respect to these and other considerations that the disclosure made herein is presented.
Concepts and technologies are described herein for multi-mode text input. In accordance with the concepts and technologies disclosed herein, users can interact with content using various input devices such as microphones or cameras in addition to, or instead of, using standard text input and/or mouse gestures. Input collected using the input devices can be converted into text and submitted to an entity associated with the content to allow interactions with the content to occur via multiple input modes. As such, users can interact with applications and sites via various types of input devices.
According to one aspect, a client device having an input device executes an application. The application is configured to receive content from a source and to analyze the content to determine if the content supports text input and/or if a user can interact with the content via other types of input devices. In some implementations, the determination is made by analyzing the content to identify one or more input indicators. The input indicators can include explicit indicators that the content supports multi-mode text input including, but not limited to, meta tags, flags, or other data that enable the interactions via the various input devices. In other implementations, the input indicators include implicit indicators such as, for example, one or more fields, forms, or other mechanisms by which users interact with the content.
According to another aspect, the application is configured to filter the input captured via the various input devices. The application can filter the input based upon contextual information associated with the content and/or a client device associated with a user submitting the input. Thus, for example, the application can take into account a location of a user or a client device when converting the input to text. If the input is submitted via a microphone or other audio input device, the application can filter the input based upon grammar, accents, dialects, and/or other information to apply contextual information to the speech-to-text process. Similar filtering can be completed for image-to-text processes as well, if desired, and location information can be filtered based upon a current location of the user or a client device associated with the user. In other embodiments, mapping applications, address books, and/or other applications or data sources can be used to obtain address information, telephone numbers, and the like.
It should be appreciated that the above-described subject matter may be implemented as a computer-controlled apparatus, a computer process, a computing system, or as an article of manufacture such as a computer-readable storage medium. These and various other features will be apparent from a reading of the following Detailed Description and a review of the associated drawings.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended that this Summary be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
The following detailed description is directed to concepts and technologies for multi-mode text input. According to the concepts and technologies described herein, content is received at and/or presented on a client device. The content can include one or more explicit or implicit input indicators. The input indicators can indicate to an application executed by the client device that user input can be used in conjunction with consumption or use of the content. The application is configured to recognize the input indicators and to analyze the content to determine context associated with the content and/or the client device executing the application. The application also is configured to determine, based upon the content and/or the contextual information, which input device to use to obtain input associated with use or consumption of the content.
The application also is configured to perform filtering on the input and to generate text based upon the filtered input. Thus, for example, the application can apply grammar, dialect, accent, and/or other information to speech or other audio input when performing a speech-to-text process. Similarly, images can be captured with a camera, and text can be recognized in the image using optical character recognition, by understanding the content of the image, by converting a logo to text, by reading a bar code or other information in the image, and/or by other processes. Other types of input, input devices, and filtering can be used in accordance with the concepts and technologies disclosed herein. Additionally, in some embodiments an input server executes an application so that conversion of the input to text can be performed remotely from the client device. The text generated by the application can be provided in place of text input and therefore can be submitted without use of a mouse, keyboard, and/or other traditional input devices.
While the subject matter described herein is presented in the general context of program modules that execute in conjunction with the execution of an operating system and application programs on a computer system, those skilled in the art will recognize that other implementations may be performed in combination with other types of program modules. Generally, program modules include routines, programs, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the subject matter described herein may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.
In the following detailed description, references are made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments or examples. Referring now to the drawings, in which like numerals represent like elements throughout the several figures, aspects of a computing system, computer-readable storage medium, and computer-implemented methodology for multi-mode text input will be presented.
Referring now to FIG. 1, aspects of one operating environment for the various embodiments presented herein will be described. The operating environment includes a client device 102 operating on or in communication with a network 104.
The client device 102 is configured to execute an operating system and one or more application programs such as, for example, an application 106 and/or other application programs. The operating system is a computer program for controlling the operation of the client device 102. The application 106 is an executable program configured to execute on top of the operating system to provide the functionality described herein for providing multi-mode text input. According to various implementations of the concepts and technologies disclosed herein, the application 106 is configured to control one or more input devices 108 associated with the client device 102. According to several illustrative embodiments, the input devices 108 include a microphone, a camera, and/or other types of input devices.
As will be described in more detail herein, the application 106 is configured to receive content 110 from a source 112 such as a website, a server computer, a database, or other systems or devices in communication with the client device 102. In some embodiments, the content 110 corresponds to a web page that is hosted or served by the source 112. As such, the functionality of the source 112 can be provided by one or more web servers or other hosting platforms. In other embodiments, the source 112 can correspond to an application or content hosted or executed by the client device 102. Thus, the source 112 can be, but is not necessarily, a remotely executed or accessed source of the content 110, and the embodiments illustrated and described herein are illustrative and should not be construed as being limiting in any way.
The application 106 is configured to analyze the content 110 to determine if the content 110 is configured to support multiple input modes. In particular, the content 110 can include a meta tag, a flag, or other data (“input indicator”) 114 for indicating to the application 106 whether or not the content 110 supports functionality for providing multiple modes of text input. In some embodiments, the meta tag or flag can be provided in a syntax associated with hypertext markup language (“HTML”), extensible markup language (“XML”), and/or other languages for coding and/or scripting various web pages, applications, scripts, and/or other types of content. The meta tags also can be provided in other syntaxes and/or formats, if desired.
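By way of a non-limiting illustration, the scan for an explicit indicator could be implemented as follows. The disclosure does not fix a concrete tag syntax, so the sketch assumes a hypothetical meta tag of the form <meta name="multimode-input" content="..."> and uses Python's standard html.parser module:

```python
from html.parser import HTMLParser

class IndicatorScanner(HTMLParser):
    """Detects a hypothetical explicit input indicator (114) of the form
    <meta name="multimode-input" content="speech,camera,map">.
    The tag name and content format are illustrative assumptions."""

    def __init__(self):
        super().__init__()
        self.supported_modes = None  # None until an indicator is found

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "meta" and attributes.get("name") == "multimode-input":
            self.supported_modes = attributes.get("content", "").split(",")

scanner = IndicatorScanner()
scanner.feed('<html><head><meta name="multimode-input" content="speech,camera"></head></html>')
print(scanner.supported_modes)  # ['speech', 'camera']
```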
In other embodiments, the content 110 does not include an explicit indication of whether multiple input modes are supported. As such, the input indicators 114 in the content 110 can include one or more form elements or other elements associated with inputting data into applications, web pages, or other types of files or documents, and the application 106 can determine, based upon contextual information associated with the content 110, whether or not the content 110 can be interacted with via multiple input modes.
If the application 106 determines that the content 110 can be interacted with via multiple input modes, the application 106 can determine what input modes can be supported for interacting with the content 110. More particularly, the application 106 is configured to recognize types of data and/or types of data input associated with the content 110 and/or fields or forms associated with the content 110, and to determine one or more input modes for interacting with the content 110. For example, the application 106 can determine that the content 110 includes a form for inputting an address. In particular, the application 106 can recognize a form element in the content 110, and can recognize that the form element is associated with an address, for example by recognizing that a value input into the form element is given the label or attribute "address" or another similar label or attribute.
Based upon this determination, the application 106 can determine that, in addition to rendering the form element to enable text input, a microphone for collecting speech input, a camera for collecting image input, and/or an address book or map interface for providing addresses can be provided to enable input of the address via multiple input modes. Thus, the application 106 can be configured to determine if the content 110 includes input mechanisms, and if so, what type of input is to be captured. As will be explained in more detail below, the application 106 also is configured to filter the multi-mode input based upon the contextual information associated with the content 110.
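As a minimal sketch of this determination, the mapping below associates keywords found in a field's name or label with candidate input devices 108. The keyword lists and mode names are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical hints mapping keywords in a form field's name or label
# to the input devices (108) that could populate the field.
MODE_HINTS = {
    "address": ["keyboard", "microphone", "camera", "map", "address book"],
    "zip": ["keyboard", "microphone"],
    "phone": ["keyboard", "microphone", "address book"],
}

def candidate_input_modes(field_label: str) -> list[str]:
    """Return the input modes offered for a field; text input is always kept."""
    label = field_label.lower()
    for keyword, modes in MODE_HINTS.items():
        if keyword in label:
            return modes
    return ["keyboard"]

print(candidate_input_modes("shipping_address"))
# ['keyboard', 'microphone', 'camera', 'map', 'address book']
```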
The application 106 also is configured to activate and/or control the input device 108 to obtain non-textual input ("input") 116 from a user or other entity. For example, the application 106 is configured to activate a camera to capture one or more images as the input 116, and to convert the one or more images into text input ("text") 118 for submission to an application, website, or other source 112 associated with the content 110. Similarly, the application 106 is configured to activate a microphone to obtain speech or sound as the input 116, to convert the speech or sound into a text format, and to pass text 118 corresponding to the speech to the source 112. As such, from the perspective of the source 112, the client device 102 interacts with the content 110 via text commands, though in reality a user may provide the input 116 via multiple input modes such as speech, imaging, address book entries, map locations, and/or other types of input that are converted by the application 106 into the text 118. The application 106 can be configured to activate additional or alternative input devices 108, and as such, it should be understood that the above-mentioned embodiments for interacting with the content 110 are illustrative, and should not be construed as being limiting in any way.
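The image-to-text conversion could be delegated to an off-the-shelf OCR engine. The sketch below assumes the third-party Pillow and pytesseract packages (and a locally installed Tesseract engine); barcode and logo recognition would require other libraries and are not shown:

```python
from PIL import Image   # pip install Pillow
import pytesseract      # pip install pytesseract; requires the Tesseract engine

def image_to_text(image_path: str) -> str:
    """Convert a captured image (input 116) into text (118) via OCR.
    Only the optical-character-recognition path is sketched here."""
    with Image.open(image_path) as image:
        return pytesseract.image_to_string(image).strip()
```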
As discussed above, the application 106 can be executed by the client device 102 to provide the functionality described herein. In some embodiments, the application 106 executed by the client device 102 is configured to capture the input 116 and to transmit the input 116 to an input server 120 that executes a server-side application 106′. In these embodiments, the conversion of the input 116 from images, sound, address book information, map information, and/or other types of input to the text 118 can be completed remotely from the client device 102, which can conserve computing, power, and/or other resources associated with the client device 102. In some embodiments, the input server 120 is configured to return the text 118 to the client device 102 for consumption or use at the client device 102 and/or for communication to the source 112 or another entity associated with the content 110. Although not shown in FIG. 1, the input server 120 can communicate with the client device 102 and/or the source 112 via the network 104.
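The division of labor between the application 106 and the server-side application 106′ can take many forms. The round trip below is a minimal sketch that assumes a hypothetical HTTP endpoint on the input server 120 and an invented payload shape:

```python
import requests  # pip install requests

# Hypothetical endpoint exposed by the input server (120).
CONVERT_URL = "https://input-server.example.com/convert"

def convert_remotely(captured: bytes, mode: str, context: dict) -> str:
    """Upload raw input (116) plus contextual hints; receive text (118)."""
    response = requests.post(
        CONVERT_URL,
        files={"input": captured},
        data={"mode": mode, **context},  # e.g. mode="speech", field_type="address"
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["text"]  # response shape is an assumption
```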
According to various implementations, as mentioned above, the application 106 also is configured to filter the input 116 based upon contextual information associated with the client device 102 and/or the content 110. More particularly, the application 106 can be configured to filter the input 116 by altering a vocabulary used by a speech-to-text and/or image-to-text converter based upon various considerations such as, for example, a type of input expected from a user or other entity, an accent, dialect, and/or speech patterns associated with a location at which the client device 102 is operating, local roads or street names, types of fields or form elements associated with a particular data input, combinations thereof, and the like. As such, embodiments of the concepts and technologies disclosed herein can be used to allow filtering of the input 116 to improve the results obtained via use of the application 106. These and other aspects of the application 106 are described below in more detail.
Turning now to FIG. 2, aspects of a method 200 for providing multi-mode text input will be described in detail, according to an illustrative embodiment. It should be understood that the operations of the method 200 are not necessarily presented in any particular order and that performance of some or all of the operations in an alternative order(s) is possible and is contemplated.
It also should be understood that the illustrated method 200 can be ended at any time and need not be performed in its entirety. Some or all operations of the method 200, and/or substantially equivalent operations, can be performed by execution of computer-readable instructions included on a computer storage medium, as defined herein. The term "computer-readable instructions," and variants thereof, as used in the description and claims, is used expansively herein to include routines, applications, application modules, program modules, programs, components, data structures, algorithms, and the like. Computer-readable instructions can be implemented on various system configurations, including single-processor or multiprocessor systems, minicomputers, mainframe computers, personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, combinations thereof, and the like.
Thus, it should be appreciated that the logical operations described herein are implemented (1) as a sequence of computer-implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. The implementation is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as states, operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules may be implemented in software, in firmware, in special-purpose digital logic, and in any combination thereof.
For purposes of illustrating and describing the concepts of the present disclosure, the method 200 disclosed herein is described as being performed by the client device 102 via execution of the application 106. As explained above with reference to FIG. 1, however, some or all of the functionality described herein can be provided by additional and/or alternative devices such as the input server 120 via execution of the application 106′. As such, the described embodiment is illustrative and should not be construed as being limiting in any way.
The method 200 begins at operation 202, wherein the client device 102 receives content 110. The content 110 can be received from a source 112 such as a web server or other host for providing web pages or applications accessible or executable by the client device 102 and/or other entities. The source 112 also can correspond to an application executed by the client device 102. As such, the content 110 can correspond to a web page, an application page, application data, other types of files, user interfaces (“UIs”), combinations thereof, and the like.
From operation 202, the method 200 proceeds to operation 204, wherein the application 106 determines if the content 110 is configured to allow submission of data as multi-mode input. As mentioned above, the application 106 can determine if the content 110 supports multi-mode input in a number of ways. For example, in some embodiments, the application 106 searches the content 110 for an explicit input indicator 114 such as, for example, a meta tag, a flag, or other data indicating that the content 110 supports submission of multi-mode input. In other embodiments, the application 106 determines if the content 110 supports multi-mode input by searching for an implicit input indicator 114 such as one or more keywords, form elements, other input mechanisms, combinations thereof, and the like that indicate that input is used in conjunction with consumption or use of the content 110. Because other methods of determining if the content 110 and/or the client device 102 support multi-mode input are contemplated, it should be understood that the above-mentioned embodiments are illustrative, and should not be construed as being limiting in any way.
If the application 106 determines, in operation 204, that multi-mode input is supported, the method 200 proceeds to operation 206. At operation 206, the application 106 determines a context associated with the content 110 and/or the client device 102. As mentioned above, the context can be used to determine which, if any, of the input devices 108 to activate and/or what types of information should be captured via the input devices 108. Additionally, or alternatively, the context can be used to filter the input 116 obtained via the input devices 108. The context can include a location associated with the client device 102, keywords or tags associated with or included in the content 110, names of variables or values associated with form elements in the content 110, combinations thereof, and the like.
From operation 206, the method 200 proceeds to operation 208, wherein the application 106 analyzes the content 110 and identifies one or more input fields in the content 110. For example, if the content 110 corresponds to a web page that includes a form, the application 106 can analyze the web page to identify each input field or other form element in the form. As is generally understood, input fields or other form elements can be identified by tags, flags, and/or other data. As such, the application 106 can identify the input fields in the content 110 by identifying these tags, flags, and/or other data. It should be understood that this embodiment is illustrative, and should not be construed as being limiting in any way.
From operation 208, the method 200 proceeds to operation 210, wherein the application 106 captures the input 116 by operating the input device 108 and/or by issuing one or more commands to the input device 108 to enable capture of the input 116. For example, in some embodiments, as explained herein, the application 106 is configured to control one or more input devices 108 associated with the client device 102 to obtain the input 116. In other embodiments, the application 106 is configured to command the input devices 108 as to how to capture the input 116. Regardless of how the application 106 interacts with the input devices 108, operation 210 can include capturing the input 116.
From operation 210, the method 200 proceeds to operation 212, wherein the application 106 filters the input 116 received in operation 210. As explained above, the application 106 is configured to filter the input 116 based upon various types of information including, but not limited to, operating information associated with the client device 102 such as a current location of the client device 102, enabled input devices 108 associated with the client device 102, keywords, tags, and/or other data associated with the content 110 received in operation 202, combinations thereof, and the like. Filtering the input 116 can include, for example, restricting or broadening a vocabulary used in speech-to-text conversions and/or image-to-text conversions such as optical character recognition ("OCR") and/or other processes, restricting street names or other map information based upon a present location, altering the input 116 to reflect accent, dialect, and/or other language differences, combinations thereof, and the like.
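One way to realize such filtering is to assemble a context-biased vocabulary that a recognizer can prefer, for example as a grammar or phrase-hint list. The base word lists below are invented for illustration:

```python
def build_biased_vocabulary(field_type: str, nearby_streets: list[str]) -> set[str]:
    """Assemble words to prefer during speech-to-text or OCR, given the
    field being filled and street names near the device's current location."""
    vocabulary: set[str] = set()
    if field_type == "address":
        vocabulary.update(nearby_streets)                     # location-restricted names
        vocabulary.update({"street", "avenue", "suite"})      # generic address terms
    elif field_type == "zip":
        vocabulary.update(str(digit) for digit in range(10))  # digits only
    return vocabulary

print(build_biased_vocabulary("address", ["Main Street", "First Avenue"]))
```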
From operation 212, the method 200 proceeds to operation 214, wherein the application 106 converts the filtered input 116 to text 118. As noted herein, the input 116 can be converted from audio to text, from an image to text, or from a selection on a map or other similar interface to location coordinates, street addresses, ZIP codes, combinations thereof, and the like. In one illustrative example, the input 116 can include sound input captured via a microphone of a client device 102, wherein the microphone captured a word pronounced like the word "one." As is known, this word can correspond at least to the words "won" and "one," and the numeral "1." According to various implementations of the concepts and technologies disclosed herein, the contextual information determined in operation 206 is used to determine which word or number is likely intended. For example, if the field into which this word is entered corresponds to a field for collecting data indicating a ZIP code, the application 106 can determine that the user intended to enter the numeral "1," and therefore can filter the input to reflect this expected intention. It should be understood that this embodiment is illustrative, and should not be construed as being limiting in any way.
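The "one"/"won"/"1" disambiguation could be implemented as a re-ranking of recognizer hypotheses against the type of the target field. The homophone table and field-type names below are assumptions:

```python
HOMOPHONES = {
    "one": ["one", "won", "1"],
    "two": ["two", "too", "to", "2"],
}

def disambiguate(token: str, field_type: str) -> str:
    """Choose among homophone candidates using the field being filled:
    numeric fields such as ZIP codes prefer the digit form."""
    candidates = HOMOPHONES.get(token.lower(), [token])
    if field_type in ("zip", "phone"):
        for candidate in candidates:
            if candidate.isdigit():
                return candidate
    return candidates[0]

print(disambiguate("one", "zip"))   # '1'
print(disambiguate("one", "name"))  # 'one'
```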
From operation 214, and/or from operation 204 if the application 106 determines that multi-mode input is not supported, the method 200 proceeds to operation 216. The method 200 ends at operation 216.
Turning now to FIG. 3A, a user interface diagram showing aspects of a user interface ("UI") for providing multi-mode text input will be described, according to an illustrative embodiment.
The screen display 300A shown in FIG. 3A includes a display area in which the content 110 received from the source 112 can be presented.
The screen display 300A also includes a menu area 306 that includes various UI controls for providing various functionality associated with the application 106. In the illustrated embodiment, the menu area 306 includes a UI control 308 for accessing a microphone mode of input, the selection of which causes the application 106 to enable or control operation of the microphone to capture sound as the input 116. The menu area 306 also includes a UI control 310 for accessing a camera, the selection of which causes the application 106 to enable or control operation of the camera to capture one or more images as the input 116. Illustrative user interfaces for capturing sound, images, and map information, and for converting these types of input to data, are illustrated and described below with reference to FIGS. 3B-3E.
Turning to FIG. 3B, an illustrative screen display 300B for capturing audio as the input 116 via a microphone of the client device 102 is shown.
The screen display 300B also includes a UI control 316 for starting recording, the selection of which causes the application 106 to begin recording audio. The screen display 300B also includes a UI control 318 for stopping recording of the audio, the selection of which causes the application 106 to stop recording audio. The screen display 300B also includes a UI control 320 for indicating that recording of audio is complete and/or a UI control 322 for cancelling recording of the audio. It will be appreciated that the screen display 300B can be presented to provide feedback to a user during recording of audio and/or to allow a user to control the recording of the audio. In some embodiments, the application 106 is configured to record audio and stop recording of the audio automatically, without the presentation of the screen display 300B. As such, the illustrated embodiment should be understood as being illustrative.
Turning to FIGS. 3C and 3D, illustrative screen displays 300C and 300D associated with capturing one or more images as the input 116 via a camera of the client device 102 are shown. As explained above, the application 106 can enable or control operation of the camera to capture the one or more images, and the captured images can be converted to the text 118 via optical character recognition and/or other image-to-text processes.
Turning now to FIG. 3E, an illustrative screen display 300E for providing location input via a map display 340 is shown.
In addition to displaying a map, the map display 340 also can display a current location indicator 342. The screen display 300E also includes a UI control 344 for entering a location using a current location detected or known by the client device 102, and a UI control 346 for cancelling input via selection of a location on a map. In the illustrated embodiment, a user selects a point on the map display 340 by touching the screen of the client device 102 at the point. In response to the touch, the application 106 determines coordinates, a street address, and/or another indication of location corresponding to the selected point. The coordinates, street address, and/or other indication of location can be converted to the text 118, as explained herein, and the text 118 can be submitted.
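Converting the touched point to submittable text could be as simple as interpolating coordinates over the visible map bounds. The sketch below ignores map projection for brevity; reverse geocoding the coordinates to a street address would require an external service and is not shown:

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    """Geographic bounds and pixel size of the visible map display (340)."""
    north: float
    south: float
    east: float
    west: float
    width_px: int
    height_px: int

def point_to_text(x_px: int, y_px: int, vp: Viewport) -> str:
    """Convert a touch at pixel (x_px, y_px) into text (118) containing
    coordinates, via linear interpolation over the viewport bounds."""
    lat = vp.north + (y_px / vp.height_px) * (vp.south - vp.north)
    lon = vp.west + (x_px / vp.width_px) * (vp.east - vp.west)
    return f"{lat:.6f}, {lon:.6f}"

vp = Viewport(north=47.62, south=47.60, east=-122.12, west=-122.14,
              width_px=640, height_px=480)
print(point_to_text(320, 240, vp))  # center of the viewport
```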
The computer architecture 400 illustrated in FIG. 4 includes a central processing unit 402 ("CPU"), a system memory, and a system bus 410 that couples the memory to the CPU 402. The computer architecture 400 further includes a mass storage device 412 for storing an operating system and one or more application programs including, but not limited to, the application 106.
The mass storage device 412 is connected to the CPU 402 through a mass storage controller (not shown) connected to the bus 410. The mass storage device 412 and its associated computer-readable media provide non-volatile storage for the computer architecture 400. Although the description of computer-readable media contained herein refers to a mass storage device, such as a hard disk or CD-ROM drive, it should be appreciated by those skilled in the art that computer-readable media can be any available computer storage media or communication media that can be accessed by the computer architecture 400.
Communication media includes computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics changed or set in a manner so as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
By way of example, and not limitation, computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, digital versatile disks ("DVD"), HD-DVD, BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer architecture 400. For purposes of the claims, the phrase "computer storage medium," and variations thereof, does not include waves, signals, and/or other transitory and/or intangible communication media, per se.
According to various embodiments, the computer architecture 400 may operate in a networked environment using logical connections to remote computers through a network such as the network 104. The computer architecture 400 may connect to the network 104 through a network interface unit 416 connected to the bus 410. It should be appreciated that the network interface unit 416 also may be utilized to connect to other types of networks and remote computer systems, for example, the source 112, the input server 120, and/or other networks, systems, devices, and/or other entities. The computer architecture 400 also may include an input/output controller 418 for receiving and processing input from a number of other devices, including a keyboard, mouse, or electronic stylus (not shown in FIG. 4). Similarly, the input/output controller 418 may provide output to a display screen, a printer, or another type of output device (also not shown in FIG. 4).
It should be appreciated that the software components described herein may, when loaded into the CPU 402 and executed, transform the CPU 402 and the overall computer architecture 400 from a general-purpose computing system into a special-purpose computing system customized to facilitate the functionality presented herein. The CPU 402 may be constructed from any number of transistors or other discrete circuit elements, which may individually or collectively assume any number of states. More specifically, the CPU 402 may operate as a finite-state machine, in response to executable instructions contained within the software modules disclosed herein. These computer-executable instructions may transform the CPU 402 by specifying how the CPU 402 transitions between states, thereby transforming the transistors or other discrete hardware elements constituting the CPU 402.
Encoding the software modules presented herein also may transform the physical structure of the computer-readable media presented herein. The specific transformation of physical structure may depend on various factors, in different implementations of this description. Examples of such factors may include, but are not limited to, the technology used to implement the computer-readable media, whether the computer-readable media is characterized as primary or secondary storage, and the like. For example, if the computer-readable media is implemented as semiconductor-based memory, the software disclosed herein may be encoded on the computer-readable media by transforming the physical state of the semiconductor memory. For example, the software may transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory. The software also may transform the physical state of such components in order to store data thereupon.
As another example, the computer-readable media disclosed herein may be implemented using magnetic or optical technology. In such implementations, the software presented herein may transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations may include altering the magnetic characteristics of particular locations within given magnetic media. These transformations also may include altering the physical features or characteristics of particular locations within given optical media, to change the optical characteristics of those locations. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this discussion.
In light of the above, it should be appreciated that many types of physical transformations take place in the computer architecture 400 in order to store and execute the software components presented herein. It also should be appreciated that the computer architecture 400 may include other types of computing devices, including hand-held computers, embedded computer systems, personal digital assistants, and other types of computing devices known to those skilled in the art. It is also contemplated that the computer architecture 400 may not include all of the components shown in FIG. 4, may include other components that are not explicitly shown in FIG. 4, or may utilize an architecture completely different than that shown in FIG. 4.
Based on the foregoing, it should be appreciated that technologies for multi-mode text input have been disclosed herein. Although the subject matter presented herein has been described in language specific to computer structural features, methodological and transformative acts, specific computing machinery, and computer-readable media, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features, acts, or media described herein. Rather, the specific features, acts, and media are disclosed as example forms of implementing the claims.
The subject matter described above is provided by way of illustration only and should not be construed as limiting. Various modifications and changes may be made to the subject matter described herein without following the example embodiments and applications illustrated and described, and without departing from the true spirit and scope of the present invention, which is set forth in the following claims.
This application is a continuation of co-pending U.S. patent application Ser. No. 13/109,023, filed May 17, 2011, entitled "Multi-Mode Text Input," which is expressly incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5165095 | Borcherding | Nov 1992 | A |
5600765 | Ando et al. | Feb 1997 | A |
5748841 | Morin | May 1998 | A |
6018736 | Gilai et al. | Jan 2000 | A |
6112174 | Wakisaka et al. | Aug 2000 | A |
6154526 | Dahlke et al. | Nov 2000 | A |
6501833 | Phillips et al. | Dec 2002 | B2 |
6674426 | McGee et al. | Jan 2004 | B1 |
6868383 | Bangalore et al. | Mar 2005 | B1 |
6983307 | Mumick et al. | Jan 2006 | B2 |
6999874 | Seto et al. | Feb 2006 | B2 |
7197331 | Anastasakos et al. | Mar 2007 | B2 |
7269563 | Douros | Sep 2007 | B2 |
7318031 | Bantz et al. | Jan 2008 | B2 |
7343288 | Geppert et al. | Mar 2008 | B2 |
7363027 | Hon et al. | Apr 2008 | B2 |
7415409 | Simoneau et al. | Aug 2008 | B2 |
7434158 | Chi et al. | Oct 2008 | B2 |
7555725 | Abramson et al. | Jun 2009 | B2 |
7610547 | Wang et al. | Oct 2009 | B2 |
7620549 | Di Cristo | Nov 2009 | B2 |
7640160 | Di Cristo | Dec 2009 | B2 |
7664649 | Jost et al. | Feb 2010 | B2 |
7787693 | Siegemund | Aug 2010 | B2 |
7805299 | Coifman | Sep 2010 | B2 |
7853582 | Gopalakrishnan | Dec 2010 | B2 |
7869998 | Di Fabbrizio et al. | Jan 2011 | B1 |
7945566 | James et al. | May 2011 | B2 |
8010286 | Templeton et al. | Aug 2011 | B2 |
8176438 | Zaman et al. | May 2012 | B2 |
8213917 | Anderl | Jul 2012 | B2 |
8219406 | Yu et al. | Jul 2012 | B2 |
8255218 | Cohen et al. | Aug 2012 | B1 |
8259082 | Shahraray | Sep 2012 | B2 |
8332224 | Di Cristo et al. | Dec 2012 | B2 |
8355915 | Rao | Jan 2013 | B2 |
8428654 | Oh et al. | Apr 2013 | B2 |
8442964 | Safadi et al. | May 2013 | B2 |
8543394 | Shin | Sep 2013 | B2 |
8571866 | Melamed et al. | Oct 2013 | B2 |
8635068 | Pulz et al. | Jan 2014 | B2 |
8700392 | Hart et al. | Apr 2014 | B1 |
8862475 | Ativanichayaphong et al. | Oct 2014 | B2 |
20020107696 | Thomas et al. | Aug 2002 | A1 |
20020169604 | Damiba et al. | Nov 2002 | A1 |
20030046401 | Abbott et al. | Mar 2003 | A1 |
20030055651 | Pfeiffer et al. | Mar 2003 | A1 |
20030093419 | Bangalore et al. | May 2003 | A1 |
20030125948 | Lyudovyk | Jul 2003 | A1 |
20040001100 | Wajda | Jan 2004 | A1 |
20040006480 | Ehlen et al. | Jan 2004 | A1 |
20040243415 | Commarford et al. | Dec 2004 | A1 |
20050027538 | Halonen et al. | Feb 2005 | A1 |
20050149337 | Asadi et al. | Jul 2005 | A1 |
20050251746 | Basson et al. | Nov 2005 | A1 |
20050283532 | Kim | Dec 2005 | A1 |
20060218191 | Gopalakrishnan | Sep 2006 | A1 |
20060218192 | Gopalakrishnan | Sep 2006 | A1 |
20060242576 | Nagel | Oct 2006 | A1 |
20060247925 | Haenel et al. | Nov 2006 | A1 |
20070005363 | Cucerzan et al. | Jan 2007 | A1 |
20070043758 | Bodin et al. | Feb 2007 | A1 |
20070124507 | Gurram | May 2007 | A1 |
20070136222 | Horvitz | Jun 2007 | A1 |
20070156853 | Hipp et al. | Jul 2007 | A1 |
20070168191 | Bodin et al. | Jul 2007 | A1 |
20070168578 | Balchandran | Jul 2007 | A1 |
20070266087 | Hamynen | Nov 2007 | A1 |
20080118162 | Siegemund | May 2008 | A1 |
20080255843 | Sun et al. | Oct 2008 | A1 |
20080300871 | Gilbert | Dec 2008 | A1 |
20080316888 | Reifman | Dec 2008 | A1 |
20090144056 | Aizenbud-Reshef et al. | Jun 2009 | A1 |
20090285444 | Erol et al. | Nov 2009 | A1 |
20090299745 | Kennewick et al. | Dec 2009 | A1 |
20090315740 | Hildreth et al. | Dec 2009 | A1 |
20090319266 | Brown et al. | Dec 2009 | A1 |
20100066684 | Shahraray et al. | Mar 2010 | A1 |
20100105364 | Yang | Apr 2010 | A1 |
20100169246 | Jang et al. | Jul 2010 | A1 |
20100217604 | Baldwin et al. | Aug 2010 | A1 |
20100280983 | Cho et al. | Nov 2010 | A1 |
20100312469 | Chen | Dec 2010 | A1 |
20100332234 | Agapi | Dec 2010 | A1 |
20110022393 | Waller et al. | Jan 2011 | A1 |
20110032845 | Agapi | Feb 2011 | A1 |
20110074693 | Ranford et al. | Mar 2011 | A1 |
20110093264 | Gopalakrishnan | Apr 2011 | A1 |
20110093271 | Bernard | Apr 2011 | A1 |
20110153324 | Ballinger et al. | Jun 2011 | A1 |
20110282664 | Tanioka et al. | Nov 2011 | A1 |
20120078597 | Ishaq | Mar 2012 | A1 |
20120296646 | Varthakavi et al. | Nov 2012 | A1 |
20160373553 | Mikkelsen | Dec 2016 | A1 |
Entry |
---|
Chang, et al., “Efficient Web Search on Mobile Devices with Multi-Modal Input and Intelligent Text Summarization”, Retrieved at <<http://www2002.org/CDROM/poster/190.pdf>>, In Proceedings of the eleventh international World Wide Web conference, 2002, pp. 4. |
Liu, et al., “Multiscale Edge-Based Text Extraction from Complex Images”, Retrieved at <<http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4036951>>, Proceedings of the 2006 IEEE International Conference on Multimedia and Expo, ICME, Jul. 9-12, 2006, pp. 1721-1724. |
Mueller, et al., “Interactive Multimodal User Interfaces for Mobile Devices”, Retrieved at <<http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1265674>>, Proceedings of the 37th Annual Hawaii International Conference on System Sciences, Jan. 5-8, 2004, pp. 1-10. |
Office action for U.S. Appl. No. 13/109,023, dated Apr. 1, 2015, Varthakavi et al., “Multi-mode Text Input”, 32 pages. |
Office action for U.S. Appl. No. 13/109,023, dated May 8, 2014, Varthakavi et al., “Multi-mode Text Input”, 27 pages. |
Office action for U.S. Appl. No. 13/109,023, dated Sep. 12, 2014, Varthakavi et al., “Multi-mode Text Input”, 29 pages. |
Siegemund, et al., “Using Digital Cameras for Text Input on Mobile Devices”, Retrieved at <<http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4144763>>, Fifth Annual IEEE International Conference on Pervasive Computing and Communications, Mar. 19-23, 2007, pp. 8. |
“Vlingo”, Retrieved at <<http://www.vlingo.com/>>, Retrieved Date: Jan. 7, 2011, pp. 2. |
Number | Date | Country | |
---|---|---|---|
20160148617 A1 | May 2016 | US |
Relation | Number | Date | Country |
---|---|---|---|
Parent | 13109023 | May 2011 | US |
Child | 15009375 | | US |