This application claims the priority benefit of Taiwan application serial no. 101111860, filed on Apr. 3, 2012. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
1. Field of the Invention
The invention relates to a sensing apparatus and particularly relates to a gesture sensing apparatus.
2. Description of Related Art
The conventional user interface usually utilizes keys, a keyboard, or a mouse to control an electronic apparatus. As technology advances, new user interfaces are becoming more user-friendly and convenient. The touch control interface is one successful example, which allows the user to intuitively touch and select items on the screen to control the apparatus.
However, the touch control interface still requires the user to touch the screen with fingers or a stylus to control the apparatus, and the methods for achieving touch control are still limited to a few types: single-point touch control, multiple-point touch control, dragging, etc. In addition, because touch control requires the user to touch the screen with his or her fingers, the applicability of touch control is limited. For example, when a housewife is cooking and touches the screen with greasy hands to display a recipe, the screen may be greased as well, which is inconvenient. Likewise, when a surgeon wearing sterile gloves is performing an operation, it is inconvenient for him or her to touch the screen to look up a patient's image data because the gloves may be contaminated. Similarly, when a mechanic is repairing a machine, it is inconvenient for the mechanic to touch the screen to display a maintenance manual because his or her hands may be dirty. Moreover, when the user is watching television in the bathtub, touching the screen with wet hands may damage the television.
By contrast, a gesture sensing apparatus allows the user to perform control by posing the user's hands or other objects spatially in a certain way, without touching the screen. The conventional gesture sensing apparatus usually uses a three-dimensional camera to sense the gesture in space, but the three-dimensional camera and the processor for processing three-dimensional images are usually expensive. As a result, the cost of producing conventional gesture sensing apparatuses is high, and such apparatuses are not widely adopted.
The invention provides a gesture sensing apparatus, which achieves efficient gesture sensing at low cost.
According to an embodiment of the invention, a gesture sensing apparatus is provided, which is configured to be disposed on an electronic apparatus. The gesture sensing apparatus includes at least an optical unit set that is disposed beside a surface of the electronic apparatus and defines a virtual plane. The optical unit set includes a plurality of optical units, and each of the optical units includes a light source and an image capturing device. The light source emits a detecting light towards the virtual plane, and the virtual plane extends from the surface towards a direction away from the surface. The image capturing device captures an image along the virtual plane. When an object intersects the virtual plane, the object reflects the detecting light transmitted in the virtual plane into a reflected light. The image capturing device detects the reflected light to obtain information of the object.
According to an embodiment of the invention, an electronic system having a gesture input function is provided, which includes the electronic apparatus and the gesture sensing apparatus.
According to an embodiment of the invention, a gesture determining method is provided, which includes the following. At a first time, a first section information and a second section information of an object are respectively obtained at a first sampling place and a second sampling place. At a second time, a third section information and a fourth section information of the object are respectively obtained at the first sampling place and the second sampling place. The first section information and the third section information are compared to obtain a first variation information. The second section information and the fourth section information are compared to obtain a second variation information. A gesture change of the object is determined according to the first variation information and the second variation information.
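The gesture determining method above can be illustrated with a minimal sketch. The `SectionInfo` record, the comparison rules, and the gesture labels below are assumed names for illustration only, not part of the original disclosure:

```python
# Hypothetical sketch of the claimed method: sample section information of
# an object at two places and two times, compute two variation records, and
# classify the gesture change from them. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class SectionInfo:
    position: tuple  # (x, y) centre of the section within its virtual plane
    size: float      # width of the section

def compare(before: SectionInfo, after: SectionInfo) -> dict:
    """Return variation information between two samples at one sampling place."""
    dx = after.position[0] - before.position[0]
    dy = after.position[1] - before.position[1]
    return {"displacement": (dx, dy), "size_change": after.size - before.size}

def determine_gesture(first, second, third, fourth):
    """first/second: time-1 samples at places 1 and 2;
    third/fourth: time-2 samples at the same two places."""
    var1 = compare(first, third)    # first variation information
    var2 = compare(second, fourth)  # second variation information
    # An illustrative rule: both sections growing suggests the object is
    # approaching the sensing planes (or opening, e.g. a spreading palm);
    # both shrinking suggests the reverse; otherwise treat it as translation.
    if var1["size_change"] > 0 and var2["size_change"] > 0:
        return "expand"
    if var1["size_change"] < 0 and var2["size_change"] < 0:
        return "contract"
    return "translate"
```

The key design point the claim captures is that only per-plane section records are compared, never full three-dimensional frames, which is what keeps the computation light.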
Based on the above, the gesture sensing apparatus and the electronic system having gesture input function in the embodiment of the invention utilize the optical unit set to define the virtual plane and detect the light reflected by the object that intersects the virtual plane. Accordingly, the embodiment of the invention uses a simple configuration to achieve spatial gesture sensing. Therefore, the gesture sensing apparatus in the embodiment of the invention achieves efficient gesture sensing with low costs. In addition, according to the embodiment of the invention, the gesture change is determined based on the variation of the section information of the object, and thus the gesture determining method in the embodiment of the invention is simpler and achieves favorable gesture determining effect.
In order to make the aforementioned features and advantages of the invention more comprehensible, exemplary embodiments accompanying figures are described in detail below.
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
The gesture sensing apparatus 200 is configured to be disposed on the electronic apparatus 110. The gesture sensing apparatus 200 includes at least an optical unit set 210, which is disposed beside the surface 111 of the electronic apparatus 110 and defines a virtual plane V (one optical unit set 210 is depicted in
The light source 211 emits a detecting light D towards the virtual plane V, and the virtual plane V extends from the surface 111 towards a direction away from the surface 111. In this embodiment, the light source 211, for example, emits the detecting light D along the virtual plane V. Moreover, in this embodiment, the detecting light D is an invisible light, such as an infrared light. However, in some other embodiments, the detecting light D is a visible light. In addition, in this embodiment, the virtual plane V is substantially perpendicular to the surface 111. However, in some other embodiments, the virtual plane V and the surface 111 form an included angle that is not equal to 90 degrees, but the virtual plane V and the surface 111 are not parallel to each other.
The image capturing device 213 captures an image along the virtual plane V, so as to detect an object in the virtual plane V. In this embodiment, the image capturing device 213 is a line sensor; in other words, the region it detects is linear. For instance, the image capturing device 213 is a complementary metal oxide semiconductor (CMOS) sensor or a charge coupled device (CCD).
When an object 50 (a hand of the user or other suitable objects) intersects the virtual plane V, the object 50 reflects the detecting light D transmitted in the virtual plane V into a reflected light R, and the image capturing device 213 detects the reflected light R so as to obtain information of the object 50, such as position information, size information, etc. of the object 50.
In this embodiment, an optical axis A1 of the light sources 211 and an optical axis A2 of the image capturing devices 213 of the optical units 212a and 212b of the optical unit set 210 are substantially in the virtual plane V, so as to ensure that the detecting light D is transmitted in the virtual plane V and further to ensure that the image capturing device 213 captures the image along the virtual plane V, that is, to detect the reflected light R transmitted in the virtual plane V.
Regarding the aforementioned “the light source 211 emits a detecting light D along the corresponding virtual plane V” and the description “optical axis A1 of the light sources 211 of the optical units 212a and 212b are substantially in the virtual plane V”, the described direction of the light source 211 is one of the embodiments of the invention. For example, in another embodiment as shown in
In this embodiment, the gesture sensing apparatus 200 further includes a memory unit 230, which stores the position and size of the section S of the object 50 calculated by the in-plane position calculating unit 220. In this embodiment, the gesture sensing apparatus 200 further includes a gesture determining unit 240, which determines a gesture generated by the object 50 according to the position and size of the section S of the object 50 stored in the memory unit 230. More specifically, the memory unit 230 stores a plurality of positions and sizes of the section S at different times for the gesture determining unit 240 to determine the movement of the section S and further determine the movement of the gesture. In this embodiment, the gesture determining unit 240 determines the movement of the gesture of the object 50 according to a time-varying variation of the position and a time-varying variation of the size of the section S of the object 50 stored in the memory unit 230.
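The role of the memory unit 230 can be sketched as a bounded history of section records that the gesture determining unit reads back to compute time-varying variations. The class name, record layout, and buffer length below are assumptions for illustration, not from the disclosure:

```python
# Hypothetical sketch of the memory unit's role: store (time, position, size)
# records of the section S so a later stage can compute how the section
# varies over time. All names and the buffer length are illustrative.
from collections import deque

class SectionHistory:
    def __init__(self, maxlen=32):
        # A bounded buffer: the oldest record is dropped automatically.
        self.records = deque(maxlen=maxlen)

    def store(self, t, position, size):
        self.records.append((t, position, size))

    def movement(self):
        """Time-varying variation between the oldest and newest records,
        as the gesture determining unit might consume it."""
        if len(self.records) < 2:
            return None
        (t0, p0, s0) = self.records[0]
        (t1, p1, s1) = self.records[-1]
        return {"dt": t1 - t0,
                "dpos": (p1[0] - p0[0], p1[1] - p0[1]),
                "dsize": s1 - s0}
```

A fixed-length buffer is a natural fit here because only recent samples matter for recognizing a gesture in progress.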
In this embodiment, the gesture sensing apparatus 200 further includes a transmission unit 250, which transmits a command corresponding to the gesture determined by the gesture determining unit 240 to a circuit unit for receiving the command. For example, if the electronic apparatus 110 is a tablet computer, an all-in-one computer, a personal digital assistant (PDA), a mobile phone, a digital camera, a digital video camera, or a laptop computer, the circuit unit for receiving the command is a central processing unit (CPU) in the electronic apparatus 110. In addition, if the electronic apparatus 110 is a display screen, the circuit unit for receiving the command is a computer electrically connected to the display screen, or a central processing unit or control unit of a suitable host.
Take
In this embodiment, the virtual plane V extends from the surface 111 towards a direction away from the surface 111. For example, the virtual plane V is substantially perpendicular to the surface 111. Therefore, the gesture sensing apparatus 200 not only detects the upward, downward, leftward, and rightward movements of objects in front of the screen 112 but also detects the distance between the object 50 and the screen 112, that is, the depth of the object 50. For instance, the text or figure on the screen 112 is reduced in size when the object 50 moves close to the screen 112, and the text or figure on the screen 112 is enlarged when the object 50 moves away from the screen 112. In addition, other gestures may indicate other commands, or the aforementioned gestures can be used to indicate other commands. To be more specific, when the gesture determining unit 240 detects a continuous increase of the y coordinate of the position of the object 50 and the increase reaches a threshold value, the gesture determining unit 240 determines that the object 50 is moving in a direction away from the screen 112. Conversely, when the gesture determining unit 240 detects a continuous decrease of the y coordinate of the position of the object 50 and the decrease reaches a threshold value, the gesture determining unit 240 determines that the object 50 is moving in a direction towards the screen 112.
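The threshold rule described above can be sketched as follows. The threshold value, the sampling of y coordinates into a list, and the monotonicity check are assumptions made for illustration:

```python
# Illustrative sketch of depth (y-coordinate) movement detection: a
# continuous increase of y reaching a threshold means the object moves
# away from the screen; a continuous decrease means it moves toward it.
# THRESHOLD and the list-of-samples interface are assumed, not disclosed.
THRESHOLD = 5.0  # hypothetical units along the virtual plane's y axis

def depth_movement(y_samples):
    """Classify movement relative to the screen from successive y
    coordinates of the object's section in the virtual plane."""
    total = y_samples[-1] - y_samples[0]
    increasing = all(b >= a for a, b in zip(y_samples, y_samples[1:]))
    decreasing = all(b <= a for a, b in zip(y_samples, y_samples[1:]))
    if total >= THRESHOLD and increasing:
        return "away"       # continuous increase reached the threshold
    if total <= -THRESHOLD and decreasing:
        return "toward"     # continuous decrease reached the threshold
    return "none"
```

Requiring both monotonicity and a net change above the threshold filters out jitter, which matches the "continuous increase ... reaches a threshold value" wording of the embodiment.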
The gesture sensing apparatus 200 and the electronic system 100 having the gesture input function in the embodiment of the invention utilize the optical unit set 210 to define the virtual plane V and detect the light (i.e. the reflected light R) reflected by the object 50 that intersects the virtual plane V. Therefore, the embodiment of the invention achieves spatial gesture sensing by a simple configuration. Compared with the conventional technique which uses an expensive three-dimensional camera and a processor that processes three-dimensional images to sense the gesture spatially, the gesture sensing apparatus 200 disclosed in the embodiments of the invention has a simpler configuration and achieves efficient gesture sensing with low costs.
Moreover, the gesture sensing apparatus 200 of this embodiment has a small, thin, and light structure. Therefore, the gesture sensing apparatus 200 is easily embedded in the electronic apparatus 110 (such as a tablet computer or a laptop computer). In addition, the gesture sensing apparatus 200 and the electronic system 100 having the gesture input function disclosed in the embodiment of the invention sense the position and size of the area where the object 50 intersects the virtual plane V (i.e. the section S). Thus, the calculation process is simpler, and the frame rate of the gesture sensing apparatus 200 is improved, which helps predict the gesture of the object 50 (a gesture of the palm, for example).
When using the gesture sensing apparatus 200 and the electronic system 100 having the gesture input function disclosed in the embodiment of the invention, the user can input by gesture without touching the screen 112. Therefore, the applicability of the gesture sensing apparatus 200 and the electronic system 100 having the gesture input function is greatly increased. For example, when a housewife is cooking, she can wave her hand before the screen 112 to turn the pages of the recipe displayed on the screen 112. She does not need to touch the screen 112 with greasy hands, which may grease the surface of the screen 112. In addition, when a surgeon is wearing sterile gloves and performing an operation, the surgeon can wave his or her hand before the screen 112 to look up image data of a patient and avoid contaminating the gloves. When a mechanic is repairing a machine, the mechanic can wave his or her hand before the screen 112 to look up the maintenance manual without touching the screen with dirty hands. Moreover, when the user is watching television in the bathtub, the user can select channels or adjust the volume by hand gesture before the screen 112. Thus, the user does not need to touch the television with wet hands, which may damage the television. Commands such as displaying a recipe, checking a patient's data or a technical manual, selecting channels, and adjusting volume can be issued with simple hand gestures. Therefore, all of the above can be achieved by the gesture sensing apparatus 200 of this embodiment, which has a simple configuration. Since expensive three-dimensional cameras and processors or software for reading three-dimensional images are not required, the costs are effectively reduced.
In this embodiment, the virtual planes V are arranged substantially from top to bottom along the screen 112, and each of the virtual planes V extends substantially from left to right along the screen 112. Therefore, the gesture sensing apparatus 200a not only detects the leftward/rightward and forward/backward movements (that is, movements in the depth direction) of the object 50 but also detects the upward/downward movements of the object 50 with respect to the screen 112. For instance, when the object 50 moves upward in a direction C1, the object 50 sequentially intersects the lower virtual plane V and the upper virtual plane V of
In this embodiment, the optical axes A1 of the light sources 211 and the optical axes A2 of the image capturing devices 213 of the optical units 212 of the optical unit set 210 are substantially in the lower virtual plane V of
In another embodiment, the virtual planes V are arranged substantially from left to right along the screen 112, and each of the virtual planes V substantially extends from top to bottom along the screen 112. In addition, in other embodiments, the virtual planes V are arranged and extend in other directions with respect to the screen 112.
In this embodiment, the screen 112 of the electronic apparatus 110b is located at a side of the virtual planes V1, V2, and V3. For example, the screen 112 can be turned to a position substantially parallel to the virtual planes V1, V2, and V3, or turned to an angle that is less inclined relative to the virtual planes V1, V2, and V3. Thereby, the gesture sensing apparatus 200b detects the gesture before the screen 112. In an embodiment, the screen 112 is configured to display a three-dimensional image, and the three-dimensional image intersects the virtual planes V1, V2, and V3 spatially. Accordingly, after the gesture determining unit 240 integrates the position coordinates of the virtual planes V1, V2, and V3 with the position coordinates of the three-dimensional image displayed by the screen 112, or verifies the conversion relationship therebetween, the gesture in front of the screen 112 can interact with a three-dimensional object of the three-dimensional image spatially before the screen 112.
As illustrated in
Next, Step S20 is performed to obtain a third section information (information of section S1′, for example) and a fourth section information (information of section S3′, for example) of the object 50 respectively at the first sampling place and the second sampling place at a second time. In this embodiment, information of the section S1′, information of a section S2′, and information of the section S3′ of the object 50 are respectively obtained in the virtual planes V1, V2, and V3 at the second time. The sections S1′, S2′, and S3′ are in the virtual planes V1, V2, and V3 respectively. In this embodiment, information of the sections S1˜S3 and S1′˜S3′ each includes at least one of a section position, a section size, and the number of sections.
Then, Step S30 is performed to compare the first section information (information of the section S1, for example) and the third section information (information of the section S1′) to obtain a first variation information. The second section information (information of the section S3, for example) and the fourth section information (information of the section S3′) are compared to obtain a second variation information. In this embodiment, information of the section S2 and information of the section S2′ are further compared to obtain a third variation information. In this embodiment, the first variation information, the second variation information, and the third variation information each include at least one of the displacement of the section, the rotation amount of the section, the variation of section size, and the variation of the number of the sections.
Thereafter, Step S40 is performed to determine a gesture change of the object according to the first variation information and the second variation information. In this embodiment, the gesture change of the object is determined according to the first variation information, the second variation information, and the third variation information. The gesture of this embodiment refers to various gestures of the hand of the user, or various changes of the position, shape, and rotation angle of a touch object (such as a stylus).
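Steps S10 through S40 across the three virtual planes can be summarized in a short sketch. The per-plane section tuple, the scalar position, and the example classification rule are illustrative assumptions rather than part of the disclosure:

```python
# A minimal sketch of Steps S10-S40: gather section info per virtual plane
# (V1-V3) at two times, derive a variation record per plane (S30), then
# classify the gesture change (S40). The tuple layout (position, size,
# count) and the classification rule are assumed for illustration.

def variation(before, after):
    """Step S30: compare two section tuples (position, size, count)."""
    (p0, s0, n0), (p1, s1, n1) = before, after
    return {"displacement": p1 - p0,
            "size_change": s1 - s0,
            "count_change": n1 - n0}

def determine_change(t1_sections, t2_sections):
    """Steps S10-S40: each argument maps a plane name (e.g. "V1") to the
    section info sampled there at the first or second time."""
    variations = {plane: variation(t1_sections[plane], t2_sections[plane])
                  for plane in t1_sections}               # Step S30
    # Step S40, an example rule: if the sections in all planes shifted in
    # the same direction, report a translation of the whole object.
    shifts = [v["displacement"] for v in variations.values()]
    if all(s > 0 for s in shifts):
        return "move right"
    if all(s < 0 for s in shifts):
        return "move left"
    return "other"
```

Because each plane contributes an independent variation record, differing displacements between planes (e.g. V1 moving right while V3 moves left) could likewise be mapped to a rotation, which is the kind of multi-plane inference the three-plane embodiment enables.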
For example, referring to
FIGS. 8 and 9A˜9C illustrate four different types of gesture changes as examples. However, it is noted that the electronic system 100b having the gesture input function and the gesture determining unit 240 of
According to this embodiment, the gesture change is determined based on the variation of the section information of the object 50, and thus the gesture determining method of this embodiment is simpler and achieves a favorable gesture determining effect. Therefore, the algorithm for performing the gesture determining method is simplified, which reduces the costs of software development and hardware production.
The gesture sensing and recognition process of
To conclude the above, the gesture sensing apparatus and the electronic system having the gesture input function in the embodiment of the invention utilize the optical unit set to define the virtual plane and detect the light reflected by the object that intersects the virtual plane. Accordingly, the embodiment of the invention uses a simple configuration to achieve spatial gesture sensing. Therefore, the gesture sensing apparatus of the embodiment of the invention achieves efficient gesture sensing with low costs. In addition, the gesture determining method of the embodiment of the invention determines the gesture change based on variation of the section information of the object, and thus the gesture determining method of the embodiment of the invention is simpler and achieves favorable gesture determining effect.
Although the invention has been described with reference to the above embodiments, it will be apparent to one of ordinary skill in the art that modifications to the described embodiments may be made without departing from the spirit of the invention. Therefore, the scope of the invention is defined by the appended claims.
Number | Date | Country | Kind |
---|---|---|---
101111860 | Apr 2012 | TW | national |