User interfaces provide a convenience to a user who is accessing information by manipulation of on-screen content. A user may use, for example, a mouse or trackpad to actuate on-screen content displayed on the user interface. In so doing, the user may manipulate any data associated with the on-screen content. A user may also interact with the user interface via a touch screen device to accomplish similar tasks. However, a user implementing these devices may have difficulties in operating them. For example, a user manipulating on-screen content with his or her finger experiences occlusion, or covering up, of various objects on the screen while touching the screen. Additionally, it is difficult to select small objects on the screen, especially if the user's fingers are relatively larger than the average user's fingers for which the user interface device is designed. Still further, there is a limit on what a user may do with an on-screen object due to the binary functionality of either touching or not touching the object being manipulated.
As user interfaces become more diverse and unique, it also becomes more difficult to manipulate or transfer data between them. Indeed, due to the diversity and uniqueness of the various types of user interfaces, a user is left to use or implement multiple types of controls, one for each user interface. Additionally, the user interfaces may lack the support needed to manipulate data between them.
The accompanying drawings illustrate various examples of the principles described herein and are a part of the specification. The examples do not limit the scope of the claims.
Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
The present system, product and method disclose a user interface implementing a camera to track the location of a user's eye and fingers relative to the user interface. A user may be allowed to move a cursor across any screen by, for example, holding his or her index finger and thumb in an open-pinch configuration a distance from the screen of the user interface. The location of the cursor on the screen is determined by a line or vector created by the two spatial points defined by the user's eye and fingers. Therefore, the camera may determine the point in space where the user's eye exists, determine the point in space where the user's finger or fingers exist, and determine or calculate a line or vector created by those two points to determine which location on the screen of the user interface the user is attempting to interact with.
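The cursor-location calculation described above can be illustrated as a ray-plane intersection. The sketch below is a minimal geometric example, not the specification's implementation; it assumes, for simplicity, that the screen lies in the plane z = 0 of the camera's coordinate system and that the eye and finger positions are already known as 3-D points in metres.

```python
import numpy as np

def screen_intersection(eye, finger):
    """Return the (x, y) point where the ray from the user's eye through
    the user's finger meets the screen plane, assuming the screen lies in
    the plane z = 0 (a simplifying assumption, not from the specification)."""
    eye = np.asarray(eye, dtype=float)
    finger = np.asarray(finger, dtype=float)
    direction = finger - eye                 # vector from eye through finger
    if direction[2] == 0:                    # ray parallel to the screen plane
        return None
    t = -eye[2] / direction[2]               # solve eye_z + t * dir_z = 0
    if t <= 0:                               # intersection behind the eye
        return None
    hit = eye + t * direction
    return hit[0], hit[1]

# Eye 50 cm in front of the screen, finger 20 cm in front of it:
print(screen_intersection((0.0, 0.0, 0.5), (0.1, -0.05, 0.2)))
```

Extending the eye-to-finger segment until it reaches the plane yields the on-screen point the user is aiming at; a real system would additionally map these metric coordinates to pixel coordinates using the screen's physical dimensions.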
As briefly discussed above, the use of touch screen systems has a number of disadvantages. Specifically, handheld devices such as tablets and smart phones allow users to directly interact with a two-dimensional user interface depicting various on-screen content. A finger, and sometimes a stylus, may be used to manipulate that on-screen content. However, when directly interacting with the surface of the screen on the user interface, the user may cover up or obstruct other on-screen content or manipulatable objects on the screen. This directly affects the user's ability to easily interact with the on-screen content.
Additionally, it may be difficult for a user to precisely select or manipulate objects on the screen of a user interface. This issue may occur when the screen size is relatively small, the objects on the screen are relatively small, if the user's fingers are relatively large, or combinations of these. When this happens, the user interface may not be able to distinguish which, if any, of the objects on the screen are to be manipulated.
Further, a touch screen user interface is binary, which may limit the interaction expressivity of the user. Specifically, an object on a touch screen may not be manipulated unless and until the user physically touches the screen. Therefore, the screen may merely sense a selected or unselected on-screen object. This further limits the user's ability to initiate a hover state or temporary selection state of the on-screen content. A desire to have the ability to temporarily select on-screen content on a touch screen has led to awkward temporal modes being used to determine whether or not a selected object on a user interface has been selected long enough to qualify as a temporarily selected object.
Still further, the use of a touch screen may prove to be undesirable especially if the touch screen is to be used in a public setting. Various people using a single device may lead to the spread of bacteria and viruses. Consequently, this may deter users from touching the touch screen thereby resulting in a decrease in the use of services provided by the touch screen.
User interfaces also come in various forms, some of which may not be completely compatible with each other. The use of many varying types of user interfaces may result in the user implementing a different method of interaction with each user interface. For example, a user may need to actuate a number of push buttons on one device, use a remote control on another device, touch a screen on yet another, and use an external hardware device such as a mouse for another device. This may lead to user confusion and dissatisfaction with using these varying types of input methods. Additionally, the interaction support may not exist to allow users to manipulate data between the various displays. For example, a user reviewing an image on a mobile device may wish to move the data associated with that image to another device so as to view the image on the other device. Consequently, the first device is communicatively coupled to the other device and the data is transferred to the other device. The transfer of such information is usually accomplished via a physical cable between the two devices or through a wireless connection.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present systems and methods. It will be apparent, however, to one skilled in the art that the present apparatus, systems and methods may be practiced without these specific details. Reference in the specification to “an example” or similar language indicates that a particular feature, structure, or characteristic described in connection with that example is included as described, but may not be included in other examples.
In the present specification and in the appended claims, the term “user interface” is meant to be understood broadly as any hardware, or a combination of hardware and software, that enables a user to interact with a system, program, or device. In one example of the present specification, the user interface may comprise a screen. In another example, the user interface may comprise a screen and a camera. In yet another example, the user interface may comprise a screen and a camera integrated into a mobile device such as a tablet computer, a smart phone, a personal digital assistant (PDA), a laptop computer, a desktop computer, a television, or a printer, among others.
Additionally, in the present specification and in the appended claims, the term “user interface device” is meant to be understood broadly as any device that enables a user to interact with a system, program, or device through any hardware or a combination of hardware and software. In one example, a user interface device may comprise a mobile device such as a tablet computer, a smart phone, a personal digital assistant (PDA), a laptop computer, a desktop computer, a television, or a printer, among others.
Further, in the present specification and in the appended claims, the term “on-screen content” is meant to be understood broadly as any data or symbol representing data that is displayed on a two-dimensional screen associated with a mobile device such as a tablet computer, a smart phone, a personal digital assistant (PDA), a laptop computer, a desktop computer, a television, or a printer, among others.
As described above, the user interface device (105) may be a tablet computer, a smart phone, a personal digital assistant (PDA), a laptop computer, a desktop computer, a television, and a printer, among others. As will be described later, the user interface device (105) may contain hardware or a combination of hardware and software that accomplishes at least the functionality of determining the spatial position of a user's (115) eye (120) and fingers (125) and determining the position on the screen (130) of the user interface device (105) by calculating a line or vector (135) using the two spatial positions.
The camera (110) may be any type of camera that takes a number of consecutive frames within a certain timeframe. In one example, the camera may have a frame rate of up to 30 frames per second. In another example, the frame rate may be greater than 30 frames per second. In another example, a user (115) may be allowed to adjust the frame rate of the camera (110). This may be done so that the camera (110) may sufficiently determine the spatial position of the user's (115) facial features or eye (120) and fingers (125) while still increasing or decreasing the processing time of the images as they are produced and analyzed.
The camera (110) may further determine the distance of an object relative to the screen (130) of the user interface device (105). In one example, the camera may be a range imaging camera that determines the distance of objects from the camera (110). The images captured by the camera (110) may then be processed to determine the spatial position of the user's (115) eye (120) and fingers (125). Additionally, a processor may be used with the camera (110) to recognize facial features of a human face as well as the user's (115) fingers (125). The camera (110) may further capture images of the user's (115) face with sufficient resolution to determine the position of the user's (115) face, eye socket, eyeball, pupil, or combinations thereof. The resolution of the images may be increased to determine more accurately where on the screen (130) the user (115) is looking.
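The range-imaging capability described above can be sketched with a standard pinhole back-projection: given a pixel location and the distance measured at that pixel, the corresponding 3-D point in the camera's frame follows from the camera's intrinsic parameters. The function and all calibration values below are hypothetical illustrations, not taken from the specification.

```python
def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with a measured depth (metres) into a
    3-D point in the camera frame using a pinhole model. fx and fy are
    focal lengths in pixels and (cx, cy) is the principal point -- all
    hypothetical calibration values for illustration only."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# A pixel at the image centre maps straight onto the optical axis:
print(pixel_to_point(320, 240, 0.5, fx=600, fy=600, cx=320, cy=240))  # (0.0, 0.0, 0.5)
```

Applying this to the pixels where a face or fingertip detector locates the user's eye (120) and fingers (125) would yield the two spatial points needed to construct the line or vector (135).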
In one example, the processor may further be used to track the dominant eye (120) of the user and disregard the other eye. In this example, the user may identify his or her dominant eye (120) by entering the information into the user interface device (105).
In one example, the camera is a three-dimensional imaging camera that uses a number of lenses that each capture an image at the same time and combines those images to form a three-dimensional image. From the three-dimensional image, the system (100) may be able to determine the spatial position of the user's (115) eye (120) and fingers (125) and calculate the position on the screen (130) that the user (115) is looking at. As previously discussed, with a frame rate of, for example, about 30 frames per second, the system (100) may determine whether the user (115) is adjusting the distance between his or her fingers (125) a certain distance, thereby determining if the user is selecting any on-screen content on the screen (130) of the user interface device (105).
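The frame-by-frame pinch detection described above can be sketched as a simple classifier over the thumb-to-index distance measured in each frame. The thresholds and the hysteresis band below are illustrative assumptions, not values from the specification; hysteresis is used so that measurement jitter near a single threshold does not rapidly toggle the selection state.

```python
def detect_pinch(distances, close_mm=20.0, open_mm=35.0):
    """Classify each frame's thumb-to-index distance (millimetres) as
    pinched (True, i.e. selecting) or open (False), using hysteresis:
    the state only changes when the distance crosses close_mm downward
    or open_mm upward. Threshold values are hypothetical."""
    states = []
    pinched = False
    for d in distances:
        if not pinched and d <= close_mm:
            pinched = True        # fingers came together: select
        elif pinched and d >= open_mm:
            pinched = False       # fingers separated: release
        states.append(pinched)
    return states

# Distances from ten consecutive frames at roughly 30 frames per second:
frames = [50, 40, 30, 18, 17, 25, 30, 36, 45, 50]
print(detect_pinch(frames))
```

In this trace the state becomes True when the distance drops to 18 mm, stays True while the fingers drift apart within the hysteresis band, and releases once the distance exceeds 35 mm.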
The camera (205) may be any type of camera that takes a number of consecutive frames in a specific timeframe. The camera (205) may form a part of the user interface device (105) or may be a peripheral device that is communicatively coupled to the user interface device (105) by, for example, a peripheral device adapter (260). As mentioned above, the camera may capture and process a number of sequential images of a user's (
The user interface device (105) may also include an image processor (210). The image processor (210) may include hardware architecture that retrieves executable code from a data storage device (235) and executes the executable code. The executable code may, when executed by the image processor (210), cause the image processor (210) to implement at least the functionality of determining the spatial location of a user's (
The user interface device (105) may further comprise a number of output devices (215). In one example, the output device (215) is a screen (
In another example, the number of output devices may include a device to produce haptic feedback such as a vibratory motor or other actuator, and a speaker, among others. These other output devices (215) may work in cooperation with the screen (
The user interface device (105) may further include a data storage device (235) and a peripheral device adapter (260). The data storage device (235) may digitally store data received from or produced by a processor (210, 230) associated with the user interface device (105). The data storage device (235) may include Random Access Memory (RAM) (250), Read Only Memory (ROM) (255), flash memory (245), and Hard Disk Drive (HDD) memory (240). Many other types of memory are available, and the present specification contemplates the use of any type of data storage device (235) as may suit a particular application of the principles described herein. In certain examples, different types of memory in the data storage device (235) may be used for different data storage needs.
The peripheral device adapter (260) may provide an interface between the user interface device (105) and the camera (205). The peripheral device adapter (260) may thereby enable the transmission of data related to the captured images to be provided to the user interface device (105) and more specifically to the image processor (210).
An input device (220) may also be included in the user interface device (105). In one example, the user interface device (105) may include input devices (220) such as a microphone, a soft key alpha-numeric keyboard, and a hard key alpha-numeric keyboard, among others.
During operation of the user interface device (105) the user (
In one example, the user (
In another example, the user (
In yet another example, the user (
In another example, a single finger (
In yet another example, the user (
In still a further example, the user interface device (105), and more specifically, the image processor (210) may detect an eye blink by the user (
As previously mentioned, the user (
The user interface device (105) may further allow a user (
In another example, the user interface device (105) may allow a user (
Turning now to
After the camera (
A processor (
After it has been determined (Block 315) where on the screen (
When the user (
Throughout the process, the camera (
The methods described above may be accomplished in conjunction with a computer program product comprising a non-transitory computer readable medium having computer usable program code embodied therewith that, when executed by a processor, performs the above processes and methods. Specifically, the computer program product may comprise computer usable program code embodied therein that, when executed by a processor, receives a captured image from a camera (
The specification and figures describe a user interface device. The user interface device includes a camera and a processor. The processor may receive images from the camera, determine the spatial location of a user's facial features and fingers, and determine, using that information, where on the screen the user is viewing on-screen content. This user interface device may have a number of advantages, including manipulation of on-screen content without touching the screen or using a mouse or track pad. Additionally, the user interface device allows a user to drag on-screen content from the screen of the user interface device to another screen of another user interface device. Still further, the user interface device of the present specification allows a user to select on-screen content without obstructing the view of the on-screen content with, for example, a finger.
The preceding description has been presented to illustrate and describe examples of the principles described. This description is not intended to be exhaustive or to limit these principles to any precise form disclosed. Many modifications and variations are possible in light of the above teaching.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/IN2011/000897 | 12/27/2011 | WO | 00 | 6/27/2014 |