This application claims priority to Chinese Patent Application No. 201510507497.0 filed on Aug. 18, 2015, the contents of which are incorporated by reference herein in their entirety.
The subject matter herein generally relates to user interface technology, and particularly to an electronic device and a hands-free control method of the electronic device.
Electronic devices are widely used. People control an electronic device through an interface of human-computer interaction, such as a keyboard, a mouse, and others.
However, usage of an electronic device is limited in situations where a keyboard or a mouse is not adequate.
Many aspects of the disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
It will be appreciated that for simplicity and clarity of illustration, where appropriate, reference numerals have been repeated among the different figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein can be practiced without these specific details. In other instances, methods, procedures, and components have not been described in detail so as not to obscure the relevant features being described. Also, the description is not to be considered as limiting the scope of the embodiments described herein. The drawings are not necessarily to scale and the proportions of certain parts may be exaggerated to better illustrate details and features of the present disclosure.
The present disclosure, including the accompanying drawings, is illustrated by way of examples and not by way of limitation. It should be noted that references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references mean “at least one.”
The term “module”, as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language, such as, Java, C, or assembly. One or more software instructions in the modules can be embedded in firmware, such as in an EPROM. The modules described herein can be implemented as either software and/or hardware modules and can be stored in any type of non-transitory computer-readable medium or other storage device. Some non-limiting examples of non-transitory computer-readable media include CDs, DVDs, BLU-RAY™, flash memory, and hard disk drives. The term “comprising” means “including, but not necessarily limited to”; it specifically indicates open-ended inclusion or membership in a so-described combination, group, series and the like.
In at least one embodiment, the display device 11 can display information. The information can include pictures, web pages, documents, and any other content capable of being displayed on the display device 11. The display device 11 can be in the front of the electronic device 1. In some embodiments, the display device 11 can display raster regions arranged in m rows and n columns, where m and n are positive integers.
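For illustration only, the following Python sketch shows how a point of gaze on the display device 11 might be mapped to one of such raster regions; the function name and the example dimensions are assumptions, not part of the disclosure.

```python
# Illustrative sketch: map a point of gaze (x, y), in pixels, to the
# raster region of a display divided into m rows and n columns. The
# function name and example dimensions are assumptions.

def to_raster_region(x, y, width, height, m, n):
    """Return the (row, column) raster region containing pixel (x, y)."""
    row = min(int(y * m / height), m - 1)      # clamp to the last row
    column = min(int(x * n / width), n - 1)    # clamp to the last column
    return row, column

# Example: a 1920x1080 display divided into 9 rows and 6 columns.
print(to_raster_region(x=640, y=300, width=1920, height=1080, m=9, n=6))
```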
The camera 12 can be an infrared image capturing device. In at least one embodiment, the camera 12 can capture an infrared image of a user's eye 20 (shown in
In at least one embodiment, the infrared source 13 can emit infrared light to the user's eye 20. The infrared source 13 can constantly emit infrared light to the user's eye 20 when the infrared source 13 is activated. In some embodiments, the infrared source 13 can be a Light Emitting Diode (LED). The infrared source 13 can facilitate capture of a clear infrared image of the user's eye 20 even under poor lighting conditions, because the user's eye 20 is not sensitive to the infrared light. As shown in
In at least one embodiment, the processor 14 can be a central processing unit (CPU), a microprocessor, or other data processor chip that performs functions of the hands-free control system 10. The processor 14 is connected to the display device 11, the camera 12, the infrared source 13 and the storage device 15.
In at least one embodiment, the storage device 15 can include various types of non-transitory computer-readable storage mediums. For example, the storage device 15 can be an internal storage system, such as a flash memory, a random access memory (RAM) for temporary storage of messages, and/or a read-only memory (ROM) for permanent storage of messages. The storage device 15 can also be an external storage system, such as a hard disk, a storage card, or a data storage medium.
In some embodiments, the storage device 15 can store preset data. The preset data can include a relationship between control instructions and movements, and/or a relationship between link instructions and the focus of an eye 20 on the display device 11. The movements can include moving up, moving down, moving right, and moving left. The control instructions corresponding to the movements can include controlling the information currently displayed on the display device 11 (hereinafter referred to as current information) to move up, controlling the current information to move down, controlling the information previously displayed (hereinafter referred to as previous information), which the current information replaced, to be displayed again on the display device 11, and controlling new information (hereinafter referred to as next information) to replace the current information.
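By way of example and not limitation, one possible form of such preset data is sketched below in Python; the concrete instruction strings and the sample focus position are assumptions chosen to match the exemplary embodiments described later.

```python
# Non-limiting sketch of the preset data: a relationship between eye
# movements and control instructions, and between a focus position
# (row, column) on the display device and a link instruction. The
# concrete strings and positions are assumptions.

MOVEMENT_TO_CONTROL = {
    "up":    "move current information up",
    "down":  "move current information down",
    "left":  "display previous information",
    "right": "display next information",
}

FOCUS_TO_LINK = {
    (4, 2): "display TV information",   # a link at row 4, column 2
}
```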
The acquiring module 101 can activate the camera 12 to capture infrared images of the user's eye 20, when detecting an activation of the hands-free control system 10.
In some embodiments, the acquiring module 101 is connected to the camera 12 and the infrared source 13. The display device 11 can display an application icon corresponding to the hands-free control system 10. When the user touches the application icon, the hands-free control system 10 is activated. The acquiring module 101 then starts the infrared source 13 to emit infrared light and activates the camera 12 to capture infrared images.
The location module 102 can analyze a direction of gaze of the user's eye 20 according to the captured infrared images, to determine a focus of the eye 20 on the display device 11.
In some embodiments, the location module 102 determines a focus of the eye 20 on the display device 11 by using a pupil-corneal reflection method. As shown in
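The pupil-corneal reflection method itself is not detailed in this disclosure. By way of background only, the following sketch illustrates the general idea: the vector from the corneal glint (the reflection of the infrared source 13) to the pupil center, both detected in the infrared image, is mapped to a point on the display through calibration-derived coefficients. The first-order mapping and all values below are assumptions.

```python
# Background sketch of pupil-corneal reflection gaze estimation: the
# pupil-glint vector is mapped to display coordinates through a
# calibration-derived first-order mapping. All names and values here
# are assumptions, not part of the disclosure.

def estimate_focus(pupil, glint, coeffs):
    """Map the pupil-glint vector to a point (x, y) on the display."""
    dx = pupil[0] - glint[0]
    dy = pupil[1] - glint[1]
    a, b, c, d, e, f = coeffs
    x = a + b * dx + c * dy   # first-order (linear) calibration mapping
    y = d + e * dx + f * dy
    return x, y

# Placeholder coefficients; real values would come from a calibration step.
print(estimate_focus(pupil=(312, 240), glint=(300, 236),
                     coeffs=(960, 40, 0, 540, 0, 60)))
```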
The analyzing module 103 can analyze a movement of the focus of the user's eye 20 on the display device 11 and generate a control instruction according to the relationship between control instructions and movements stored in the storage device 15. Details of the analysis of the movement are not described here.
In at least one embodiment, in order to allow for limitations of the user, the analyzing module 103 can ignore a deviation angle of the direction of gaze of the user within a predetermined angle. That is to say, if the deviation angle of the direction of gaze does not exceed the predetermined angle (e.g., 20 degrees), the analyzing module 103 analyzes the movement of the user's eye 20 without consideration of the deviation angle.
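As a non-limiting illustration, such a tolerance might be applied as in the following sketch, which treats a gaze movement as a cardinal direction whenever it deviates from that direction by no more than the predetermined angle; the function itself is an assumption.

```python
import math

# Hypothetical sketch of movement classification under the tolerance
# described above: a gaze movement counts as a cardinal direction if it
# deviates from that direction by no more than 20 degrees.

TOLERANCE_DEGREES = 20   # example value from the text

def classify_movement(previous, current):
    """Classify movement between two display points as a cardinal direction."""
    dx = current[0] - previous[0]
    dy = current[1] - previous[1]               # screen y grows downward
    if dx == 0 and dy == 0:
        return None
    angle = math.degrees(math.atan2(-dy, dx))   # 0 deg = right, 90 deg = up
    for name, target in (("right", 0), ("up", 90), ("left", 180), ("down", -90)):
        deviation = abs((angle - target + 180) % 360 - 180)
        if deviation <= TOLERANCE_DEGREES:
            return name
    return None                                 # outside every tolerance cone

print(classify_movement((500, 400), (510, 200)))   # nearly straight up -> 'up'
```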
The analyzing module 103 can also generate a link instruction according to a relationship between link instructions stored in the storage device 15 and the focus of the eye 20 on the display device 11, when the eye 20 is focusing on a link on the display device 11.
The voice module 104 can receive voice commands from the user and generate a voice instruction based on the received voice commands. For example, when the user says “next information”, the voice module 104 can generate an instruction to control the display device 11 to display the next information.
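By way of example only, the mapping from recognized voice commands to voice instructions might resemble the following sketch; the entries are assumptions based on the examples in this disclosure, and speech recognition itself is outside its scope.

```python
# Hypothetical mapping from recognized voice commands to voice
# instructions; the entries are assumptions based on the examples in
# this disclosure.

VOICE_TO_INSTRUCTION = {
    "next information":     "display next information",
    "previous information": "display previous information",
    "TV":                   "display TV information",
    "music":                "play music",
}

def to_voice_instruction(command):
    """Return the voice instruction for a recognized command, if any."""
    return VOICE_TO_INSTRUCTION.get(command)
```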
The determination module 105 can firstly determine whether the voice instruction matches with the link instruction. If a determination is made that the voice instruction matches with the link instruction, the control module 106 can control the information linked to the link address to be displayed on the display device 11. If a determination is made that the voice instruction does not match with the link instruction, the determination module 105 can secondly determine whether the voice instruction matches with the control instruction.
If a determination is made that the voice instruction matches with the control instruction, the control module 106 can control the display device 11 to perform predetermined actions. The predetermined actions can include, for example, controlling the current information to move up, controlling the current information to move down, controlling the previous information to replace the current information, or controlling the next information to replace the current information. If a determination is made that the voice instruction does not match with the control instruction, the control module 106 can control the display device 11 to retain the display of current information.
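By way of example and not limitation, the two-stage matching performed by the determination module 105 and the control module 106 can be summarized by the following sketch; the instruction strings are illustrative only.

```python
# Non-limiting sketch of the two-stage matching: the voice instruction
# is compared against the link instruction first, then against the
# control instruction; with no match, the current display is retained.

def dispatch(voice_instruction, link_instruction, control_instruction):
    """Return the action the control module would take."""
    if link_instruction is not None and voice_instruction == link_instruction:
        return "display the information linked to the link address"
    if control_instruction is not None and voice_instruction == control_instruction:
        return "perform predetermined action: " + control_instruction
    return "retain the display of current information"

# Example matching the second exemplary embodiment below.
print(dispatch("display next information", None, "display next information"))
```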
The following are several exemplary embodiments about the usage of the hands-free control system 10.
In a first exemplary embodiment, the user can view pictures, documents, or other information without a link. The analyzing module 103 analyzes the user's eye 20 as moving left, and then generates a control instruction to control the previous information to be displayed on the display device 11. The voice module 104 generates a voice instruction to control the next information to be displayed on the display device 11 after receiving a “next information” voice command from the user. The determination module 105 determines that the control instruction to display the previous information does not match with the voice instruction to display the next information. The control module 106 controls the display device 11 to retain the display of current information.
In a second exemplary embodiment, the user can view pictures, documents, or other information without a link. The analyzing module 103 analyzes the user's eye 20 as moving right, and then generates a control instruction to control the next information to be displayed on the display device 11. The voice module 104 generates a voice instruction to control the next information to be displayed on the display device 11 after receiving a “next information” voice command from the user. The determination module 105 determines that the control instruction matches with the voice instruction. The control module 106 controls the display device 11 to display the next information.
In a third exemplary embodiment, when the user is accessing the Internet, the display device 11 can display a plurality of links. The previous focus of the eye 20 on the display device 11 was in row 4, column 2. The analyzing module 103 generates a link instruction to link to the “TV” information referenced at row 4, column 2. If the user says “TV”, the voice module 104 can generate a voice instruction to control the TV information to be displayed on the display device 11. The determination module 105 determines that the link instruction to link to the “TV” information matches with the voice instruction to display the TV information. The control module 106 controls the display device 11 to display the TV information. If the user says “music”, the voice module 104 can generate a voice instruction to control music to be played by a loudspeaker of the electronic device 1. The determination module 105 determines that the link instruction to link to the “TV” information does not match with the voice instruction to play the music. The control module 106 controls the display device 11 to retain the display of current information.
That is to say, when the user is browsing the Internet, the analyzing module 103 first attempts to generate a link instruction in accordance with the focus of the eye 20.
If there is no link instruction in accordance with the focus of the eye 20, the analyzing module 103 then analyzes any movement of the focus of the user's eye 20. For example, if the previous focus of the eye 20 was in row 9, column 5, and the current focus of the eye 20 on the display device 11 is in row 1, column 5, the analyzing module 103 first determines that there is no link instruction corresponding to the focus at row 1, column 5. The analyzing module 103 then determines that the focus of the eye 20 has moved up, and generates a control instruction to control the current information displayed on the display device 11 to move up.
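By way of example and not limitation, this link-first analysis can be sketched as follows; the grid positions and instruction strings reuse the examples above, and the function and table names are assumptions.

```python
# Hypothetical sketch of the link-first analysis: a link at the current
# focus takes precedence; otherwise the movement of the focus is
# classified and mapped to a control instruction.

FOCUS_TO_LINK = {(4, 2): "display TV information"}   # link at row 4, column 2

MOVEMENT_TO_CONTROL = {
    "up":    "move current information up",
    "down":  "move current information down",
    "left":  "display previous information",
    "right": "display next information",
}

def analyze(previous_focus, current_focus):
    """Return ('link', instruction) or ('control', instruction)."""
    link = FOCUS_TO_LINK.get(current_focus)
    if link is not None:                    # a link exists at the focus
        return "link", link
    if current_focus == previous_focus:     # no movement detected
        return "control", None
    row_delta = current_focus[0] - previous_focus[0]   # rows grow downward
    col_delta = current_focus[1] - previous_focus[1]
    if abs(row_delta) >= abs(col_delta):
        movement = "up" if row_delta < 0 else "down"
    else:
        movement = "left" if col_delta < 0 else "right"
    return "control", MOVEMENT_TO_CONTROL[movement]

# Example from the text: focus moves from row 9, column 5 to row 1, column 5.
print(analyze((9, 5), (1, 5)))   # ('control', 'move current information up')
```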
An exemplary method 600 is provided by way of example, as there are a variety of ways to carry out the method. The exemplary method 600 described below can be carried out using the configurations illustrated in
At block 41, an acquiring module activates the camera to capture an infrared image of the user's eye, when the hands-free control system is activated by the user.
At block 42, a location module analyzes the direction of gaze of the user's eye according to the captured infrared image, to determine a focus of the eye on the display device.
At block 43, an analyzing module analyzes a movement of the focus of the eye on the display device according to a previous position and a current position of the focus of the eye, and then generates a control instruction according to the relationship between control instructions and movements stored in the storage device.
At block 44, a voice module receives voice commands from the user and generates a voice instruction based on the received voice commands.
At block 45, a determination module determines whether the voice instruction matches with the control instruction. If the voice instruction matches with the control instruction, the process goes to block 46. If the voice instruction does not match with the control instruction, the process goes to block 47.
At block 46, a control module controls the display device to perform predetermined actions, for example, controls the current information displayed on the display device to move up, controls the current information to move down, controls the previous information to replace the current information, or controls the next information to replace the current information.
At block 47, a control module controls the display device to retain the display of current information.
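By way of example and not limitation, blocks 41 through 47 can be condensed into the following sketch, in which the helper callables stand in for the modules described above; their names and signatures are hypothetical.

```python
# Condensed, non-limiting sketch of blocks 41 through 47. The helper
# callables stand in for the acquiring, location, analyzing, voice,
# determination, and control modules; their names are hypothetical.

def hands_free_step(capture_image, locate_focus, analyze_movement,
                    generate_control, receive_voice, display):
    image = capture_image()                        # block 41
    focus = locate_focus(image)                    # block 42
    movement = analyze_movement(focus)             # block 43
    control_instruction = generate_control(movement)
    voice_instruction = receive_voice()            # block 44
    if voice_instruction == control_instruction:   # block 45
        display.perform(control_instruction)       # block 46
    else:
        display.retain()                           # block 47
```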
At block 71, an acquiring module activates the camera to capture an infrared image of the user's eye, when the hands-free control system is activated by the user.
At block 72, a location module analyzes the direction of gaze of the user's eye according to the captured infrared image, to determine a focus of the eye on the display device.
At block 73, an analyzing module generates a link instruction according to the relationship between the link instructions stored in the storage device and the focus of the eye on the display device, when the user's eye is focusing on a link on the display device.
At block 74, a voice module receives a voice command from the user and generates a voice instruction based on the received voice command.
At block 75, a determination module determines whether the voice instruction matches with the link instruction. If a determination is made that the voice instruction matches with the link instruction, the process goes to block 76. If a determination is made that the voice instruction does not match with the link instruction, the process goes to block 77.
At block 76, a control module controls information linked to the link address to be displayed on the display device.
At block 77, the analyzing module further analyzes a movement of the focus of the eye on the display device according to a previous position and a current position of the focus of the eye, and then generates a control instruction according to the relationship between control instructions and movements stored in the storage device.
At block 78, the determination module determines whether the voice instruction matches with the control instruction. If the voice instruction matches with the control instruction, the process goes to block 79. If the voice instruction does not match with the control instruction, the process goes to block 710.
At block 79, the control module controls the display device to perform the predetermined actions, for example, controls the current information displayed on the display device to move up, controls the current information to move down, controls the previous information to replace the current information, or controls next information to replace the current information.
At block 710, the control module controls the display device to retain the display of current information.
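Similarly, and again by way of example only, blocks 71 through 710 can be condensed into the following sketch with the same hypothetical helpers; the only difference from the previous sketch is that the link instruction is checked before the control instruction.

```python
# Condensed, non-limiting sketch of blocks 71 through 710, with the same
# hypothetical helpers; the link instruction is checked first.

def hands_free_step_with_links(capture_image, locate_focus, find_link,
                               analyze_movement, generate_control,
                               receive_voice, display):
    image = capture_image()                        # block 71
    focus = locate_focus(image)                    # block 72
    link_instruction = find_link(focus)            # block 73
    voice_instruction = receive_voice()            # block 74
    if link_instruction is not None and voice_instruction == link_instruction:
        display.open_link(link_instruction)        # blocks 75 and 76
        return
    movement = analyze_movement(focus)             # block 77
    control_instruction = generate_control(movement)
    if voice_instruction == control_instruction:   # block 78
        display.perform(control_instruction)       # block 79
    else:
        display.retain()                           # block 710
```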
It should be emphasized that the above-described embodiments of the present disclosure, including any particular embodiments, are merely possible examples of implementations, set forth for a clear understanding of the principles of the disclosure. Many variations and modifications can be made to the above-described embodiment(s) of the disclosure without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.