1. Field of the Invention
The present invention relates to an operation device for a graphical user interface (GUI). In particular, the present invention relates to an operation device for a graphical user interface that senses the user's body motions so that the user can utilize body motions to operate the graphical user interface.
2. Description of Related Art
The graphical user interface (GUI) is a user interface that uses graphics as the front end for control operations. It uses uniform graphical elements and control operations, such as windows, menus, and a cursor, as the interaction interface between the user and a computer system. Thereby, even though the user cannot input a direct command to the computer system, the user can still input instructions to the computer system via the GUI to search for and operate the computer's functions.
Since the 1980s, the GUI has been considered mature and has been applied to a variety of electronic devices, such as desktop computers, laptops, mobile communication devices, PDAs, and mobile GPS units. It is a handy, user-friendly, and rapid operation interface. However, when the user uses the GUI to operate the computer system or interact with it, the user still needs a keyboard, a mouse, a touch panel, or another operation device to input the related instructions. This is a limitation of the GUI, and it cannot provide a situational operation environment.
Therefore, GUIs that are operated by detecting the user's body motions have been developed. However, the user still needs to use a specific input device, such as a handle or a remote control, and the GUI cannot exactly react to the user's specific motion, so the cursor displayed on the computer system or electronic game machine cannot react to the user's body motions sensitively and immediately.
One particular aspect of the present invention is to provide an operation device for a graphical user interface (GUI) that senses the user's body motions to operate a corresponding GUI.
The operation device for a graphical user interface includes an image sensing unit and a GUI. The image sensing unit includes an IR lighting device, an image obtaining device, and a calculation control module. The IR lighting device is used for emitting IR toward the user. The IR reflected from the user passes into the image obtaining device, which forms a photo image so as to obtain an IR image. The image obtaining device digitalizes the IR image and outputs a digital image signal. The calculation control module is connected with the image obtaining device for receiving the digital image signal so as to identify changes of a user image, which is the portion of the photo image that represents the user; the changes are identified along a time coordinate axis on a two-dimensional reference coordinate, correspond to the user's body motions, and are used to generate an operation signal.
The GUI is displayed on a display screen and is connected with the image sensing unit for receiving and reacting to the operation signal to display a specific output response.
The present invention has the following characteristics:
1. The image sensing unit of the present invention can exactly identify the user's body image according to the photo image and has excellent sensitivity.
2. The image sensing unit can calculate the depth of field of the user's body motions to obtain the user's body dimensions or movements. Thereby, the user can fully utilize body motions to operate the GUI.
3. The user does not need to use other input devices, such as a keyboard, mouse, touch panel, or joystick. The user is naturally immersed in the situational environment of the GUI and utilizes body motions to operate the GUI.
4. The present invention can provide a variety of operations for the GUI, including a two-dimensional GUI and a three-dimensional GUI. The virtual simulation effect is versatile.
For further understanding of the present invention, reference is made to the following detailed description illustrating the embodiments and examples of the present invention. The description is for illustrative purposes only and is not intended to limit the scope of the claims.
The drawings included herein provide a further understanding of the present invention. A brief introduction of the drawings is as follows:
Reference is made to
The image sensing unit 1 includes an IR lighting device 11, an image obtaining device 12, and a calculation control module 13. The IR lighting device 11 is used for emitting IR toward the user and includes a plurality of IR lighting units 111. For example, each IR lighting unit 111 can be an LED that emits IR with a wavelength between 750 nm and 1300 nm. In this embodiment, the wavelength of the IR is 850 nm. As shown in
The image obtaining device 12 includes an IR filter module 121 and an image sense module 122. The IR filter module 121 includes a color filter plate for filtering out light that is not within the IR wavelength range. The image obtaining device 12 uses the IR filter module 121 so that only the IR reflected from the user passes through the image obtaining device 12 and forms a photo image, and thereby an IR image is received by the image obtaining device 12. The image sense module 122 receives the IR image, increases the contrast between the photo image that represents the user (hereinafter the user image) and the environmental background image in the IR image, digitalizes the IR image, and outputs a digital image signal. The digital image signal includes the user image and the environmental background image. In this embodiment, increasing the contrast between the user image and the environmental background image can be implemented by making the brightness of the user image higher than the brightness of the environmental background image, or by making the brightness of the user image lower than the brightness of the environmental background image. Alternatively, auxiliary information is provided in advance. For example, an image reference value is set, and the digital image signal is localized. When the change rate of the localized digital image signal is larger than the image reference value, that region is set as the user image (foreground). When the change rate of the localized digital image signal is lower than the image reference value, that region is set as the environmental background image (background). Thereby, the user image is obtained and selected, and the environmental background image is removed to identify the user's body motions.
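The foreground/background separation described above can be illustrated by a minimal sketch, assuming the digital image signal is available as two consecutive grayscale IR frames held in NumPy arrays and that the image reference value is a simple per-pixel threshold on the frame-to-frame change rate; the function name and threshold value are illustrative assumptions, not taken from the specification:

```python
import numpy as np

def segment_user_image(prev_frame: np.ndarray,
                       curr_frame: np.ndarray,
                       image_reference_value: float = 25.0) -> np.ndarray:
    """Return a boolean mask of the user image (foreground).

    prev_frame, curr_frame: consecutive digitalized IR frames (2-D uint8 arrays).
    image_reference_value: threshold on the local change rate; regions changing
    faster than this are treated as the user image, the rest as the
    environmental background image.
    """
    # Local change rate between two consecutive frames of the digital image signal.
    change_rate = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))

    # Pixels whose change rate exceeds the image reference value are kept as the
    # user image (foreground); the others are removed as the environmental
    # background image (background).
    return change_rate > image_reference_value
```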
Because the distance from the user to the image obtaining device 12 is different from the distance from the environmental background to the image obtaining device 12, the respective depths of field are also different. Therefore, the calculation control module 13 is connected with the image obtaining device 12 for receiving the digital image signal and calculating the depth of field of the user image in the digital image signal, so as to provide the auxiliary information necessary to remove the environmental background image from the digital image signal. Thereby, once the user image is locked, it is tracked so that only the relevant user image is kept, and any subsequent extra image that happens to have the same depth of field as the user image is filtered out. Next, a two-dimensional reference coordinate is defined for the locked user image, and the change of the user image along a time coordinate axis on the two-dimensional reference coordinate is calculated to generate an operation signal corresponding to the user's body motion. As shown in
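The coordinate tracking described above can likewise be sketched, assuming the locked user image is reduced to its centroid on the two-dimensional reference coordinate and that the operation signal is simply the frame-to-frame displacement of that centroid; the function names and the displacement-based signal are illustrative assumptions rather than the specification's exact calculation:

```python
import numpy as np

def user_image_centroid(user_mask: np.ndarray) -> tuple[float, float]:
    """Centroid (x, y) of the locked user image on the 2-D reference coordinate."""
    ys, xs = np.nonzero(user_mask)
    if xs.size == 0:
        raise ValueError("no user image present in this frame")
    return float(xs.mean()), float(ys.mean())

def operation_signal(prev_mask: np.ndarray,
                     curr_mask: np.ndarray) -> tuple[float, float]:
    """Operation signal as the displacement of the user image between two
    points on the time coordinate axis (two consecutive frames)."""
    x0, y0 = user_image_centroid(prev_mask)
    x1, y1 = user_image_centroid(curr_mask)
    # The GUI could, for example, use this displacement to move a cursor
    # or a displayed object in response to the user's body motion.
    return x1 - x0, y1 - y0
```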
The GUI 2 receives the operation signal generated by the calculation control module 13 and displays a specific output response corresponding to the operation signal. The display screen 20 can be a flat display or a screen onto which the image is projected by a projector. As shown in
As shown in
In the embodiments, the GUI 2 can display a variety of specific situations, such as a living room, a meeting, or a party. As shown in
The present invention has the following characteristics:
1. The image sensing unit 1 of the present invention has an excellent background image filtering effect, can exactly identify the user's body image through the utilization of depth of field and tracking, and has excellent sensitivity.
2. The image sensing unit 1 can calculate the depth of field of the user's body motions to obtain the user's body dimensions or movements. Thereby, the user can fully utilize body motions to operate the GUI.
3. The user does not need to use other input devices, such as a keyboard, mouse, touch panel, or joystick. The user is naturally immersed in the situational environment of the GUI and utilizes body motions to operate the GUI.
4. The present invention can provide a variety of operations for the GUI, including a two-dimensional GUI and a three-dimensional GUI. The virtual simulation effect is versatile.
The description above only illustrates specific embodiments and examples of the present invention. The present invention should therefore cover various modifications and variations made to the herein-described structure and operations of the present invention, provided they fall within the scope of the present invention as defined in the following appended claims.