In many computer systems, it is common for an application to present a user interface in which a user is prompted to enter an access code, often called a personal identification number (PIN). The access code is a sequence of characters, i.e., numbers and/or letters and/or symbols, which is typically short, e.g., about four to twelve characters. The access code typically is transmitted to another device, which in turn displays the access code on a display, or otherwise communicates the access code to the user. The user then enters the access code through the user interface of the computer system, typically using an alphanumeric keyboard, which can be a separate device connected to the computer or a “soft” keyboard displayed on a touch screen, such as on a tablet computer.
Such transmission of access codes generally is used as a form of authentication before allowing the computer system and the other device to communicate with each other. Such an exchange of access codes occurs, for example, when two devices connect over a Bluetooth wireless connection.
Another form of authentication can occur using one-dimensional barcodes, two-dimensional matrix codes (e.g., quick response (QR) codes) or other optically scannable encoded information. However, both the computer system and the other device must have the capability of handling such codes. That is, one of the devices must be able to display a readable barcode, while the other of the devices must be able to read the barcode as displayed.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is intended neither to identify key or essential features, nor to limit the scope, of the claimed subject matter.
The manual entry of displayed access codes can be avoided by using a camera connected to or integrated with a computer system to capture an image of a display on another device containing a displayed access code. In response to an indication of where the access code is located in the captured image, optical character recognition is performed on the captured image to extract the access code and enter the access code into the computer system.
In the following description, reference is made to the accompanying drawings which form a part hereof, and in which are shown, by way of illustration, specific example implementations of this technique. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the disclosure.
The following section describes an example computer system that automatically captures and enters access codes through a camera.
Referring to
Such a pairing protocol typically includes a form of authentication, in which the computer system 100 transmits an access code 104 to the other device 102. The access code is a sequence of characters, i.e., numbers and/or letters and/or symbols, which is typically short, e.g., about four to twelve characters. The other device presents the access code received from the computer system to an individual. In turn, the individual inputs the access code through a user interface to the computer system.
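As an illustration only, the exchange described above can be sketched as follows. The function names are hypothetical and the sketch does not follow any particular pairing standard; it only shows the generate-display-enter-compare pattern:

```python
import secrets
import string

def generate_access_code(length: int = 6) -> str:
    """Generate a short access code of the kind described above.

    For simplicity this sketch uses digits only; an access code may
    in general contain numbers, letters, and/or symbols.
    """
    return "".join(secrets.choice(string.digits) for _ in range(length))

def authenticate(transmitted_code: str, entered_code: str) -> bool:
    """Pairing succeeds only if the code the user enters matches the
    code transmitted to (and displayed by) the other device.

    compare_digest avoids timing side channels in the comparison.
    """
    return secrets.compare_digest(transmitted_code, entered_code)
```

In practice the comparison is performed by the pairing protocol itself; the sketch merely makes explicit that the user acts as the channel between the display of one device and the input of the other.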
To provide a simple mechanism to enter the access code, the computer system is connected to or incorporates a camera 108 which captures an image 110 of the displayed access code 106 as presented on a display of the other device 102. The image is processed using character recognition, to extract the characters of the access code from the display. The extracted characters are presented to the application on the computer system that requested entry of the access code. Using the camera avoids keypad-based entry of access codes, which can be cumbersome on touch-based devices, many of which today incorporate a camera.
A data flow diagram of an example implementation of how an image captured by a camera can be processed to extract access codes will now be described in connection with
In this example implementation, the computer system includes an access code capture module 200, which is provided as a component of the operating system. This module can be used by an application 202 to capture an access code. The application 202 issues a request 204 to capture an access code, in response to which the access code capture module 200 provides characters 206 of the access code. In this implementation, a user can identify a selected region of an image to assist in capturing the access code; thus, the characters 206 are from a selected region of a captured image.
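The request/response relationship between the application 202 and the access code capture module 200 can be sketched as follows. All class and field names here are hypothetical, and the image-capture and recognition pipeline behind the module is elided:

```python
from dataclasses import dataclass

@dataclass
class CaptureRequest:
    """Stands in for request 204, issued by an application 202."""
    app_id: str

@dataclass
class CaptureResult:
    """Stands in for characters 206, returned to the application and
    taken from the user-selected region of the captured image."""
    characters: str

class AccessCodeCaptureModule:
    """Sketch of access code capture module 200."""

    def capture(self, request: CaptureRequest,
                recognized_regions: dict[int, str],
                selected_region: int) -> CaptureResult:
        # recognized_regions maps a region identifier to the characters
        # recognized within that region; the user identifies the
        # selected region, and its characters are returned.
        return CaptureResult(characters=recognized_regions[selected_region])
```

This mirrors the data flow described above: the application never touches the image directly; it receives only the characters 206.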
An interface component 208 receives and processes the request 204. A number of implementations are possible for the interface component, whose task is to coordinate the user's presentation of the other device, with its displayed access code, to the camera with the camera's capture of an image of the other device for processing. In this example implementation, the interface component 208 generates first display data 216. This display data is for a graphical user interface to prompt the user to enter the access code. An example is described below in connection with
A text region and character recognition component 220 receives the captured image data 214 from the camera 212, typically stored in memory of the computer to which the camera is connected or in which it is integrated. The text region and character recognition component 220 processes the image data to identify regions 222, which are areas in the image data that contain text. While it is possible that only one region of text, containing the desired access code, is detected in an image, it is also possible that the image captures extraneous text from the display, from the other device itself, or from background or interfering foreground objects. Thus, any regions of characters are first identified, and the characters within those regions are recognized, to provide region data 222. The text region and character recognition component can be implemented using conventional optical character recognition techniques, which, given an image, output data indicating characters and the locations of those characters in the image.
An example data structure for representing the region data output by the text region and character recognition component will now be described in connection with
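One plausible shape for such region data, assuming only what the foregoing description states (a bounding location in the image plus the characters recognized within it), is sketched below; the field names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class TextRegion:
    """One entry of region data 222: an area of the captured image
    data 214 that contains text, with the recognized characters."""
    x: int            # left edge of the region, in pixels
    y: int            # top edge of the region, in pixels
    width: int        # region width, in pixels
    height: int       # region height, in pixels
    characters: str   # characters recognized within this region

def regions_to_display_strings(regions: list[TextRegion]) -> list[str]:
    """Format regions for presentation to the user, who then selects
    the region that contains the access code."""
    return [r.characters for r in regions]
```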
Referring back again to
Example user interface displays are provided in
In
Referring now to
In
The captured image is processed 510 to extract one or more regions and recognize characters within those regions. The access code capture module presents 512 an interface to the user, and then receives 514 an input indicating one or more of the presented regions as the regions containing the access code. The characters recognized from the selected regions are provided 508 to the application as the access code.
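The sequence of steps just described can be sketched as a single function. This is a schematic rendering only: the `select_region` callable stands in for the user interaction of steps 512 and 514, and region extraction (step 510) is assumed to have already produced the recognized character strings:

```python
from typing import Callable

def capture_access_code(recognized_regions: list[str],
                        select_region: Callable[[list[str]], int]) -> str:
    """Schematic of the flow above: regions have been extracted and
    their characters recognized (510); the regions are presented and
    a selection is received (512, 514); the selected characters are
    provided to the application as the access code (508)."""
    index = select_region(recognized_regions)  # steps 512 and 514
    return recognized_regions[index]           # step 508
```

In a real implementation the selection would come from the graphical user interface rather than a callback, but the data dependency is the same.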
With such an access code capture module on a computer, entry of access codes, particularly when pairing a tablet or other touch-centric device with another device, can be simplified by automatically extracting the access code from an image of the other device.
Having now described an example implementation,
With reference to
A computer storage medium is any medium in which data can be stored and retrieved, by the computer, at addressable physical storage locations. Computer storage media includes volatile and nonvolatile, removable and non-removable media. Memory 604, removable storage 608 and non-removable storage 610 are all examples of computer storage media. Some examples of computer storage media are RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optically or magneto-optically recorded storage devices, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Computer storage media and communication media are mutually exclusive categories of media.
Computer 600 may also contain communications connection(s) 612 that allow the device to communicate with other devices over a communication medium. Communication media typically transmit computer program instructions, data structures, program modules or other data over a wired or wireless medium by propagating a modulated data signal, such as a carrier wave or other transport mechanism, over that medium. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal, thereby changing the configuration or state of the receiving device of the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Communications connections 612 are devices, such as a network interface or radio transmitter, that interface with the communication media to transmit data over, and receive data from, communication media.
Computer 600 may have various input device(s) 614 such as a keyboard, mouse, pen, camera, touch input device, and so on. Output device(s) 616 such as a display, speakers, a printer, and so on may also be included. All of these devices are well known in the art and need not be discussed at length here. Various input and output devices can implement a natural user interface (NUI), which is any interface technology that enables a user to interact with a device in a “natural” manner, free from artificial constraints imposed by input devices such as mice, keyboards, remote controls, and the like.
Examples of NUI methods include those relying on speech recognition, touch and stylus recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, voice and speech, vision, touch, gestures, and machine intelligence, and may include the use of touch sensitive displays, voice and speech recognition, intention and goal understanding, motion gesture detection using depth cameras (such as stereoscopic camera systems, infrared camera systems, and other camera systems and combinations of these), motion gesture detection using accelerometers or gyroscopes, facial recognition, three dimensional displays, head, eye, and gaze tracking, immersive augmented reality and virtual reality systems, all of which provide a more natural interface, as well as technologies for sensing brain activity using electric field sensing electrodes (EEG and related methods).
Each component of this system that operates on a computer generally is implemented using one or more computer programs processed by one or more processing units in the computer. A computer program includes computer-executable instructions and/or computer-interpreted instructions, which instructions are processed by one or more processing units in the computer. Generally, such instructions define routines, programs, objects, components, data structures, and so on, that, when processed by a processing unit, instruct the processing unit to perform operations on data, or configure the computer to include various devices or data structures. This computer system may be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer programs may be located in both local and remote computer storage media.
Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
The terms “article of manufacture”, “process”, “machine” and “composition of matter” in the preambles of the appended claims are intended to limit the claims to subject matter deemed to fall within the scope of patentable subject matter defined by the use of these terms in 35 U.S.C. §101.
Any or all of the aforementioned alternate embodiments described herein may be used in any combination desired to form additional hybrid embodiments. It should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific implementations described above. The specific implementations described above are disclosed as examples only.