The present disclosure relates to a virtual touch device for remotely controlling electronic equipment, and more particularly, to a virtual touch device for accurately controlling electronic equipment from a distance without displaying a pointer on a display surface of the electronic equipment.
Recently, electronic equipment that includes a touch panel, such as smartphones, has come into wide use. Unlike typical electronic equipment such as computers controlled by a mouse, touch panel technology does not need to display a 'pointer' on the display. To control such equipment, a user places a finger directly on an icon and touches it, without first positioning a pointer (e.g., a computer cursor) at a target location (e.g., a program icon). Because it dispenses with the 'pointer' that is essential to controlling typical electronic equipment, touch panel technology enables quick control of electronic equipment.
However, despite this convenience, a user must directly touch the display surface, so touch panel technology has an intrinsic limitation: it cannot be used for remote control. Accordingly, even electronic equipment using touch panel technology has to depend on a device such as a conventional remote controller for remote control.
Korean Patent Publication No. 10-2010-0129629, published Dec. 9, 2010, discloses a technology in which a remote control apparatus for electronic equipment generates a pointer at an exact point, as in touch panel technology. The technology photographs the front of a display with two cameras and then generates a pointer at the point where the straight line extending between the user's eye and finger meets the display. However, the technology is inconvenient in that a pointer must first be generated as a preliminary step for controlling the electronic equipment (which includes a pointer controller), and the user's gestures must then be compared with previously stored patterns for concrete operation control.
The present disclosure provides a convenient user interface for remote control of electronic equipment, as if the user were touching a touch panel surface. To this end, the present disclosure provides a method of controlling electronic equipment without using a pointer on its display surface, while accurately selecting a specific area of the display surface as if the user were delicately touching a touch panel.
Embodiments of the present invention provide a virtual touch device for remotely controlling electronic equipment having a display surface, and more particularly a virtual touch device that accurately controls the electronic equipment from a distance without displaying a pointer on the display surface, the device comprising: an image acquisition unit including two image sensors disposed at different locations and photographing a user's body in front of the display surface; a spatial coordinate calculation unit calculating three-dimensional coordinate data of the user's body using the images from the image acquisition unit; a touch location calculation unit calculating a contact point coordinate at which a straight line connecting a first spatial coordinate and a second spatial coordinate, received from the spatial coordinate calculation unit, meets the display surface; and a virtual touch processing unit creating a command code for performing an operation corresponding to the contact point coordinate received from the touch location calculation unit and inputting the command code into a main controller of the electronic equipment.
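Purely as an illustration of how the four units recited above might cooperate in software, a minimal sketch follows. It is written in Python, and every class, method, and attribute name is a hypothetical placeholder chosen for this illustration; it is not an implementation of the disclosed device.

```python
# Hypothetical sketch of the processing pipeline: image acquisition ->
# spatial coordinates -> contact point on the display -> command code.
from dataclasses import dataclass
from typing import Optional, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class StereoFrame:
    left: object   # image from image sensor 11
    right: object  # image from image sensor 12

class VirtualTouchDevice:
    def __init__(self, image_acquisition, spatial_calc, touch_calc,
                 touch_processor, main_controller):
        self.image_acquisition = image_acquisition  # two image sensors at different locations
        self.spatial_calc = spatial_calc            # 3-D coordinates of the user's body
        self.touch_calc = touch_calc                # eye-to-fingertip line vs. display surface
        self.touch_processor = touch_processor      # turns a contact coordinate into a command code
        self.main_controller = main_controller      # main controller of the electronic equipment

    def step(self) -> None:
        frame: StereoFrame = self.image_acquisition.capture()
        fingertip, eye = self.spatial_calc.body_coordinates(frame)   # first / second spatial coordinates
        contact: Optional[Point3D] = self.touch_calc.contact_point(fingertip, eye)
        if contact is None:
            return
        command = self.touch_processor.process(contact, fingertip, eye)  # None until a selection is confirmed
        if command is not None:
            self.main_controller.input(command)
```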
In some embodiments, the spatial coordinate calculation unit may calculate the three-dimensional coordinate data of the user's body from the photographed image using an optical triangulation method.
In other embodiments, the first spatial coordinate may be a three-dimensional coordinate of the tip of one of the user's fingers or of the tip of a pointer gripped by the user's fingers, and the second spatial coordinate may be a three-dimensional coordinate of the central point of one of the user's eyes.
In still other embodiments, the virtual touch processing unit may determine whether the contact point coordinate changes during a predetermined time or more after the initial contact point coordinate is calculated; when there is no change in the contact point coordinate for the predetermined time or more, the virtual touch processing unit may create a command code for performing an operation corresponding to the contact point coordinate and input the command code into the main controller of the electronic equipment.
In even other embodiments, the virtual touch processing unit may determine whether the contact point coordinate changes during a predetermined time or more after the initial contact point coordinate is calculated; when there is no change in the contact point coordinate for the predetermined time or more, the virtual touch processing unit may then determine whether the distance between the first spatial coordinate and the second spatial coordinate changes by more than a predetermined distance, and when there is such a distance change, the virtual touch processing unit may create a command code for performing an operation corresponding to the contact point coordinate and input the command code into the main controller of the electronic equipment.
In yet other embodiments, when the change of the contact point coordinate remains within a predetermined region of the display surface, the contact point coordinate may be regarded as unchanged.
In further embodiments, the first spatial coordinate may include three-dimensional coordinates of the tips of two or more of the user's fingers, and the second spatial coordinate may include a three-dimensional coordinate of the central point of one of the user's eyes.
In still further embodiments, the first spatial coordinate may include three-dimensional coordinates of the tips of one or more fingers of each of two or more users, and the second spatial coordinate may include three-dimensional coordinates of the central point of one eye of each of the two or more users.
The accompanying drawings are included to provide a further understanding of the present invention, and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present invention and, together with the description, serve to explain principles of the present invention. In the drawings:
Exemplary embodiments of the present invention will be described below in more detail with reference to the accompanying drawings. The present invention may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present invention to those skilled in the art.
Referring to the accompanying drawings, the virtual touch device 1 according to an embodiment of the present invention may include an image acquisition unit 10, a spatial coordinate calculation unit 20, a touch location calculation unit 30, and a virtual touch processing unit 40.
The image acquisition unit 10 may include two or more image sensors 11 and 12, such as CCD or CMOS sensors. The image sensors 11 and 12, which are a kind of camera module, may detect an image and convert it into an electrical image signal.
The spatial coordinate calculation unit 20 may calculate three-dimensional coordinate data of a user's body using the images received from the image acquisition unit 10. In this embodiment, the image sensors constituting the image acquisition unit 10 may photograph the user's body at different angles, and the spatial coordinate calculation unit 20 may calculate the three-dimensional coordinate data of the user's body using a passive optical triangulation method.
Generally, optical three-dimensional coordinate calculation methods may be classified into an active type and a passive type according to the sensing method. In the active type, a predefined pattern or sound wave may be projected onto an object, and the variation of energy or focus may then be measured, through the control of a sensor parameter, to calculate the three-dimensional coordinate data of the object. Representative active methods use structured light or a laser beam. The passive type, on the other hand, uses the parallax and intensity of images photographed without artificially projecting energy onto the object.
In this embodiment, the passive type, in which no energy is projected onto the object, is adopted. The passive type may be slightly lower in precision, but it is simpler in terms of equipment and has the advantage that texture can be acquired directly from the input image.
In the passive type, three-dimensional information can be acquired by applying triangulation to corresponding feature points between the photographed images. Examples of methods for extracting three-dimensional coordinates using triangulation include a camera self-calibration method, the Harris corner detection method, the SIFT method, the RANSAC method, and the Tsai method. In particular, a stereo camera method may also be used to calculate the three-dimensional coordinate data of a user's body. The stereo camera method may measure the same point on the surface of an object from two different positions and may acquire the distance from the viewing angles to that point, similarly to stereo vision, in which a disparity is obtained by observing an object with two human eyes. Since the above-mentioned three-dimensional coordinate calculation methods can be easily understood and implemented by those skilled in the art, a detailed description thereof is omitted herein. Meanwhile, Korean Patent Application Nos. 10-0021803, 10-2004-0004135, 10-2007-0066382, and 10-2007-0117877 disclose methods of calculating three-dimensional coordinate data using two-dimensional images.
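As a concrete, simplified illustration of the passive triangulation described above, the sketch below recovers a 3-D point from one matched pixel pair in two rectified images taken by horizontally offset cameras. The focal length, baseline, and principal point are made-up calibration values, and an idealized parallel stereo rig is assumed rather than any of the particular methods cited above.

```python
import numpy as np

def triangulate_rectified(u_left, v_left, u_right,
                          focal_px=800.0, baseline_m=0.1,
                          cx=320.0, cy=240.0):
    """Triangulate one feature (e.g., a fingertip) seen in two rectified images.

    (u_left, v_left) and (u_right, v_left) are pixel coordinates of the same
    feature in the left and right images; calibration values are placeholders.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("non-positive disparity: invalid correspondence")
    z = focal_px * baseline_m / disparity   # depth from similar triangles
    x = (u_left - cx) * z / focal_px        # back-project into camera coordinates
    y = (v_left - cy) * z / focal_px
    return np.array([x, y, z])

# A fingertip at pixel (400, 260) in the left image and (384, 260) in the
# right image lies about 5 m from the cameras with these placeholder values.
print(triangulate_rectified(400, 260, 384))
```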
The touch location calculation unit 30 may serve to calculate the contact point coordinate at which the straight line connecting the first spatial coordinate and the second spatial coordinate, received from the spatial coordinate calculation unit 20, meets the display surface.
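Geometrically, this is a line-plane intersection: the ray that starts at the second spatial coordinate (the eye) and passes through the first spatial coordinate (the fingertip) is extended until it meets the plane of the display. The sketch below assumes, for illustration only, that the display surface is given by one point on it and a unit normal vector; all coordinates are hypothetical.

```python
import numpy as np

def contact_point(fingertip, eye, display_point, display_normal):
    """Intersect the eye-to-fingertip ray with the display plane.

    fingertip      -- first spatial coordinate
    eye            -- second spatial coordinate
    display_point  -- any point lying on the display surface
    display_normal -- unit normal of the display surface
    Returns the contact point coordinate, or None if the ray is parallel
    to the display or points away from it.
    """
    fingertip = np.asarray(fingertip, dtype=float)
    eye = np.asarray(eye, dtype=float)
    normal = np.asarray(display_normal, dtype=float)
    direction = fingertip - eye                      # ray from the eye through the fingertip
    denom = np.dot(normal, direction)
    if abs(denom) < 1e-9:
        return None                                  # ray parallel to the display plane
    t = np.dot(normal, np.asarray(display_point, dtype=float) - eye) / denom
    if t <= 0:
        return None                                  # display is behind the user
    return eye + t * direction

# Display lying in the plane z = 0, user about 2 m in front of it:
print(contact_point(fingertip=(0.1, 0.0, 1.6), eye=(0.0, 0.1, 2.0),
                    display_point=(0.0, 0.0, 0.0), display_normal=(0.0, 0.0, 1.0)))
```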
Generally, the fingers are the only part of the human body capable of elaborate and delicate manipulation. In particular, the thumb and/or the index finger can perform a delicate pointing operation. Accordingly, it may be very effective to use the tip of the thumb and/or the index finger as the first spatial coordinate.
In a similar context, a pointer with a sharp tip (e.g., the tip of a pen) gripped by the hand may be used instead of a fingertip as the first spatial coordinate. When such a pointer is used, the portion blocking the user's view is smaller, and more delicate pointing is possible than with a fingertip.
Also, in this embodiment the central point of only one of the user's eyes may be used. For example, when a user views his or her index finger held in front of the eyes, the finger may appear doubled. This is because the images of the finger seen by the two eyes differ from each other (i.e., because of the angular difference between the eyes). However, when the index finger is viewed with only one eye, it is seen clearly. Even without closing one eye, the finger can be seen clearly when the user consciously views it with only one eye. Aiming at a target with one eye in archery and shooting, which demand a high degree of accuracy, relies on the same principle.
In this embodiment, the principle that the shape of the fingertip (the first spatial coordinate) is clearly recognized when viewed with only one eye is applied. Thus, when the user can view the first spatial coordinate exactly, a specific area of the display corresponding to the first spatial coordinate can be pointed at.
When one user uses one finger, the first spatial coordinate may be the three-dimensional coordinate of the tip of that finger or of the tip of a pointer gripped by the user's fingers, and the second spatial coordinate may be the three-dimensional coordinate of the central point of one of the user's eyes.
Also, when one user uses two or more fingers, the first spatial coordinate may include the three-dimensional coordinates of the tips of the two or more fingers, and the second spatial coordinate may include the three-dimensional coordinate of the central point of one of the user's eyes.
When there are two or more users, the first spatial coordinate may include the three-dimensional coordinates of the tips of one or more fingers of each user, and the second spatial coordinate may include the three-dimensional coordinates of the central point of one eye of each of the two or more users.
In this embodiment, the virtual touch processing unit 40 may determine whether the contact point coordinate changes during a predetermined time or more after the initial contact point coordinate is calculated. If there is no change in the contact point coordinate for the predetermined time or more, the virtual touch processing unit 40 may create a command code for performing an operation corresponding to the contact point coordinate and may input the command code into a main controller 91 of the electronic equipment. The virtual touch processing unit 40 may operate similarly when one user uses two or more fingers or when there are two or more users.
Also, the virtual touch processing unit 40 may determine whether the contact point coordinate changes during a predetermined time or more after the initial contact point coordinate is calculated. If there is no change in the contact point coordinate for the predetermined time or more, the virtual touch processing unit 40 may then determine whether the distance between the first spatial coordinate and the second spatial coordinate changes by more than a predetermined distance. If there is such a distance change, the virtual touch processing unit 40 may create a command code for performing an operation corresponding to the contact point coordinate and may input the command code into the main controller 91 of the electronic equipment. The virtual touch processing unit 40 may operate similarly when one user uses two or more fingers or when there are two or more users.
On the other hand, when the change of the contact point coordinate remains within a predetermined region of the display 90, it may be regarded as if there were no change in the contact point coordinate. Since a slight movement or tremor of the finger or body occurs when a user points the tip of a finger or pointer at the display 90, it may be very difficult to keep the contact point coordinate perfectly fixed. Accordingly, as long as the values of the contact point coordinate stay within the predetermined region of the display 90, the coordinate is treated as unchanged, so that a command code for performing the predetermined operation can be generated and input into the main controller 91 of the electronic equipment.
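To make the selection logic above concrete, the following sketch keeps the contact point coordinate anchored while it drifts within a tolerance region, and confirms a selection only after the coordinate has stayed there for a dwell time and the fingertip-to-eye distance has then changed beyond a threshold (the variant described above in which both conditions are checked). The thresholds, the time source, and the returned command code are hypothetical placeholders, not values from the disclosure.

```python
import math
import time

class VirtualTouchProcessor:
    """Hypothetical dwell-and-push selection logic for the virtual touch processing unit."""

    def __init__(self, dwell_s=1.0, region_tolerance_m=0.02, distance_change_m=0.05):
        self.dwell_s = dwell_s                        # "predetermined time"
        self.region_tolerance_m = region_tolerance_m  # radius of the "predetermined region"
        self.distance_change_m = distance_change_m    # "predetermined distance" change
        self._anchor = None                           # initial contact point coordinate
        self._anchor_time = None
        self._anchor_eye_finger_dist = None

    def process(self, contact, fingertip, eye, now=None):
        now = time.monotonic() if now is None else now
        eye_finger_dist = math.dist(fingertip, eye)

        # Drift inside the tolerance region counts as "no change"; anything larger re-anchors.
        if self._anchor is None or math.dist(contact, self._anchor) > self.region_tolerance_m:
            self._anchor = tuple(contact)
            self._anchor_time = now
            self._anchor_eye_finger_dist = eye_finger_dist
            return None

        held_long_enough = (now - self._anchor_time) >= self.dwell_s
        pushed_or_pulled = abs(eye_finger_dist - self._anchor_eye_finger_dist) >= self.distance_change_m

        if held_long_enough and pushed_or_pulled:
            # Placeholder command code for the operation at this contact point.
            return {"operation": "select", "contact_point": self._anchor}
        return None
```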
Electronic equipment subject to remote control according to an embodiment may include digital televisions as a representative example. Generally, a digital television receiver includes a broadcasting signal receiving unit, an image signal processing unit, and a system control unit, but these components are well known to those skilled in the art, so a detailed description thereof is omitted herein. Examples of electronic equipment subject to remote control according to an embodiment may further include home appliances, lighting appliances, gas appliances, heating apparatuses, and the like, which constitute a home network.
The virtual touch device 1 according to an embodiment of the present invention may be installed on the frame of electronic equipment, or may be installed separately from electronic equipment.
A virtual touch device according to an embodiment of the present invention has the following advantages.
A virtual touch device according to an embodiment of the present invention enables prompt control of electronic equipment without using a pointer on a display. Accordingly, the present invention provides a device that applies the above-mentioned advantages of a touch panel to remote control of electronic equipment. Generally, electronic equipment such as computers and digital televisions has been controlled by creating a pointer at a corresponding area and then performing a specific additional operation, and most existing technologies have been limited to pointer-based applications, such as methods for quickly setting the location of a display pointer, selecting the speed of a pointer on a display, using one or more pointers, and controlling a pointer with a remote controller.
Also, a user can delicately locate a pointer on a specific area on a display surface of electronic equipment.
For delicate pointing on the display surface of electronic equipment, the virtual touch device adopts the principle that the location of an object can be pointed at exactly by using the tip of a finger together with only one eye (the fingertip appears doubled when viewed with both eyes). Thus, a user can delicately point at a menu on a remote screen as if using a touch panel.
The above-disclosed subject matter is to be considered illustrative and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments, which fall within the true spirit and scope of the present invention. Thus, to the maximum extent allowed by law, the scope of the present invention is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.
Number | Date | Country | Kind |
---|---|---|---
10-2011-0014523 | Feb 2011 | KR | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---
PCT/KR2012/001198 | 2/17/2012 | WO | 00 | 8/19/2013 |