This application claims the benefit under 35 U.S.C. §119(a) of Korean Patent Application Nos. 10-2009-0024504, filed on Mar. 23, 2009, and 10-2010-0011639, filed on Feb. 8, 2010, in the Korean Intellectual Property Office, the entire disclosures of which are incorporated herein by reference for all purposes.
1. Field
One or more embodiments relate to pointing input technology and gesture recognition technology for controlling a virtual object.
2. Description of the Related Art
In recent times, as terminals such as personal digital assistants (PDAs), mobile phones, etc., have come to provide an increasing number of additional functions, additional user interfaces have also been provided to support these functions. For example, recently developed terminals include various menu keys or buttons for the additional user interfaces.
However, since a wide variety of functions are provided and the various menu keys or buttons are typically not intuitively arranged, it may be difficult for users of the terminals to learn how to operate the menu keys for specific functions.
A touch interface is a typical example of an intuitive interface provided for convenience of use. The touch interface is one of the simplest interface methods, allowing a user to interact directly with virtual objects displayed on a screen.
In one or more embodiments, there is provided a virtual object control method including detecting position information of a virtual object control unit remotely interacting with a virtual object, detecting motion information including at least one of a pointing position, a number of pointed-to points, a moving type for moving the virtual object control unit, and a moving position of the virtual object control unit using the detected position information, selecting a gesture to control the virtual object based on the detected motion information, linking the selected gesture to the virtual object, and performing an event corresponding to the selected gesture with respect to the virtual object.
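By way of illustration only, this flow can be summarized in the following minimal sketch. Every identifier is a hypothetical placeholder rather than a name used in this disclosure, and the stub logic merely stands in for the embodiments described below.

```python
# A minimal, hypothetical sketch of the claimed flow; every name here is a
# placeholder, not an identifier from the disclosure.
def detect_position(control_unit):
    return control_unit["position"]            # e.g., 3D coordinates

def detect_motion(position):
    # Stand-in for deriving the pointing position, number of pointed-to
    # points, moving type, and moving position from the position information.
    return {"num_points": 1, "moving_type": "straight", "moving_position": "inside"}

def select_gesture(motion):
    # Stub rule; the actual selection logic is described in the embodiments.
    return "movement" if motion["num_points"] == 1 else "rotation"

def perform_event(virtual_object, gesture):
    virtual_object["last_event"] = gesture     # link gesture and execute event

control_unit = {"position": (0.2, 0.1, 1.5)}
virtual_object = {"last_event": None}
perform_event(virtual_object, select_gesture(detect_motion(detect_position(control_unit))))
print(virtual_object)   # {'last_event': 'movement'}
```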
In one or more embodiments, there is provided a virtual object display device including a position detector to detect position information of a virtual object control unit to remotely interact with a virtual object, a gesture determination part to detect motion information including at least one of a pointing position, a number of pointed-to points, a moving type for moving the virtual object control unit, and a moving position of the virtual object control unit using the detected position information, and to select a gesture for controlling the virtual object based on the detected motion information, and an event executor to link the selected gesture to the virtual object and to execute an event corresponding to the selected gesture with respect to the virtual object.
In one or more embodiments, the selected gesture may be at least one of a selection gesture, an expansion/contraction gesture, and a rotation gesture according to the detected motion information, i.e., a pointing position, a number of pointed-to points, a moving type for moving the virtual object control unit, and a moving position of the virtual object control unit. The motion information may be detected from the position information of the virtual object control unit, and the position information of the virtual object control unit may be acquired from an optical signal received from the virtual object control unit or a distance measured to the virtual object control unit.
In one or more embodiments, there is provided a multi-telepointer including a light projector to project an optical signal, an input detector to detect touch and moving information, and an input controller to control the light projector and provide detected information including position information and the touch and moving information through the optical signal.
Other features will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the attached drawings, discloses one or more embodiments of the invention.
These and/or other aspects and advantages will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, embodiments of the present invention may be embodied in many different forms and should not be construed as being limited to embodiments set forth herein. Accordingly, embodiments are merely described below, by referring to the figures, to explain aspects of the present invention.
Referring to the figure, the virtual object display device 101 provides a virtual object 103. For example, the virtual object display device 101 can display the virtual object 103 on a display screen provided therein. Here, the virtual object 103 may be one of various characters, icons, avatars, and virtual worlds, which are expressed as three-dimensional graphic images. The virtual object display device 101 providing such a virtual object 103 may be a television, a computer, a mobile phone, a personal digital assistant (PDA), etc.
The virtual object control device 102 remotely interacts with the virtual object. The virtual object control device 102 may be a portion of a user's body. In addition, the virtual object control device 102 may be a pointing device such as a remote controller that emits a predetermined optical signal. For example, a user can use his/her finger or a separate pointing device to select the virtual object 103 displayed on the virtual object display device 101, or to move, rotate, or expand/contract the selected virtual object 103.
The virtual object display device 101 detects position information of the virtual object control device 102, and acquires motion information of the virtual object control device 102 on the basis of the detected position information.
The position information of the virtual object control device 102 may be three-dimensional position coordinates of the virtual object control device 102. The virtual object display device 101 can acquire three-dimensional position coordinates of the virtual object control device 102 using an optical response sensor for detecting an optical signal emitted from the virtual object control device 102 or a distance sensor for measuring a distance of the virtual object control device 102.
In addition, the motion information of the virtual object control device 102 may be a pointing position, the number of pointed-to points, a moving type for moving the virtual object control device 102, a moving position of the virtual object control device 102, etc., calculated on the basis of the detected position information. Here, the pointing position refers to a specific position of the virtual object display device 101 pointed to by the virtual object control device 102. In addition, the number of points is the number of pointing positions. Further, the moving type of the virtual object control device 102 may be a straight line or a curved line, depending on variation in the pointing position. The moving position indicates whether the motion originates from a position inside or outside of the virtual object 103.
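As a non-authoritative sketch of how such motion information might be derived, the fragment below classifies a sequence of pointing positions as a straight line or a curve (by deviation from the chord between the endpoints) and reports whether the motion starts inside or outside an object's bounding rectangle. The names and the tolerance value are assumptions made for illustration.

```python
# Hypothetical sketch: deriving "moving type" and "moving position" from a
# series of detected pointing positions. Names and thresholds are assumptions.
from typing import List, Tuple

Point = Tuple[float, float]

def moving_type(path: List[Point], tol: float = 5.0) -> str:
    """Classify the path as 'straight' or 'curve' by the maximum deviation
    of its points from the line joining the endpoints."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    length = (dx * dx + dy * dy) ** 0.5 or 1.0
    max_dev = max(abs(dy * (x - x0) - dx * (y - y0)) / length for x, y in path)
    return "straight" if max_dev <= tol else "curve"

def moving_position(start: Point, obj_rect: Tuple[float, float, float, float]) -> str:
    """Report whether the motion starts inside or outside the object's
    bounding rectangle (left, top, right, bottom)."""
    left, top, right, bottom = obj_rect
    x, y = start
    return "inside" if left <= x <= right and top <= y <= bottom else "outside"

# Example: a path bowing away from the straight line between its endpoints.
path = [(0.0, 0.0), (50.0, 30.0), (100.0, 0.0)]
print(moving_type(path))                             # -> "curve"
print(moving_position(path[0], (-10, -10, 10, 10)))  # -> "inside"
```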
The virtual object display device 101 selects an appropriate gesture for controlling the virtual object 103 according to the acquired motion information of the virtual object control device 102. That is, the virtual object display device 101 can analyze a user's action in operating the virtual object control device 102, and determine a gesture appropriate to the user's action according to the analyzed results. The determined gesture may be a selection gesture for selecting the virtual object 103, a moving gesture for changing a display position of the virtual object 103, an expansion/contraction gesture for increasing or reducing the size of the virtual object 103, or a rotation gesture for rotating the virtual object 103. How the virtual object display device 101 selects a gesture using the acquired motion information is described in more detail below.
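One plausible shape for this decision logic is sketched below. The exact mapping is left to the embodiments, so the rules shown here are illustrative assumptions, not the disclosed method.

```python
# Hypothetical decision sketch consistent with the gestures named above.
# The exact rules belong to the embodiments; this only shows the shape of
# the mapping from motion information to a gesture.
def select_gesture(num_points: int, moving_type: str, moving_position: str,
                   touch_or_z_motion: bool) -> str:
    if touch_or_z_motion and num_points == 1 and moving_position == "inside":
        return "selection"            # point at the object and confirm
    if num_points == 1:
        return "movement" if moving_type == "straight" else "rotation"
    # Two or more points: straight paths suggest resizing, curves rotation.
    return "expansion/contraction" if moving_type == "straight" else "rotation"

print(select_gesture(1, "straight", "inside", True))   # -> "selection"
print(select_gesture(2, "straight", "inside", False))  # -> "expansion/contraction"
```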
When a predetermined gesture is selected, the virtual object display device 101 links the selected gesture to the virtual object 103. Then, the virtual object display device 101 performs an event corresponding to the selected gesture. For example, the virtual object display device 101 can select, move, expand/contract, or rotate the virtual object 103.
As described above, since the virtual object display device 101 detects motion information of the virtual object control device 102, selects an appropriate gesture according to the detected motion information, and then controls selection, movement, expansion/contraction, and rotation of the virtual object 103 according to the selected gesture, a user can intuitively operate the virtual object control device 102 to control the virtual object as in the real world.
Referring to the figures, a first virtual object control device 201 and a second virtual object control device 202 may be used. Further, the first virtual object control device 201 may be coupled to the second virtual object control device 202 as shown.
In the figure, the virtual object control device 200 includes an emission device 210, a touch sensor 220, and a motion detection sensor 230.
The touch sensor 220 detects whether or not a user contacts it. For example, the touch sensor 220 may be formed using a button, a piezoelectric device, a touch screen, etc. The touch sensor 220 may be modified into various shapes; for example, it may have a circular, oval, square, rectangular, triangular, or other shape. An outer periphery of the touch sensor 220 defines its operation boundary. When the touch sensor 220 has a circular shape, the circular touch sensor enables a user to freely and continuously move his/her finger in a vortex shape. In addition, the touch sensor 220 may use a sensor for detecting the pressure, etc., of a finger (or a subject). For example, the sensor may operate on the basis of resistive detection, surface acoustic wave detection, pressure detection, optical detection, capacitive detection, etc. A plurality of sensors may be activated when a finger is disposed on the sensors, taps the sensors, or passes over the sensors. When the touch sensor 220 is implemented as a touch screen, various interfaces for controlling the virtual object 103, as well as the controlled results, can also be presented through the touch sensor 220.
The motion detection sensor 230 measures acceleration, angular velocity, etc., of the virtual object control device 200. For example, the motion detection sensor 230 may be a gravity detection sensor or an inertia sensor.
When a user operates the virtual object control device 200, the virtual object control device 200 can encode touch information generated from the touch sensor 220 or motion information generated from the motion detection sensor 230 into the optical signal of the emission device 210 to provide the information to the virtual object display device 101.
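A toy sketch of this idea follows: touch and motion flags are packed into an on-off pulse pattern that an emitter could blink and a receiver could decode. The frame marker and bit layout are assumptions made for illustration, not the encoding used by the disclosure.

```python
# Toy sketch: packing touch and motion flags into a pulse pattern that an
# emission device could blink. The framing (start bits, bit period) is an
# assumption made for illustration.
def encode_state(touched: bool, moving: bool) -> list:
    bits = [1, 0]                  # hypothetical start-of-frame marker
    bits.append(1 if touched else 0)
    bits.append(1 if moving else 0)
    return bits                    # 1 = emitter on, 0 = emitter off per period

def decode_state(bits: list):
    assert bits[:2] == [1, 0], "missing frame marker"
    return {"touched": bool(bits[2]), "moving": bool(bits[3])}

frame = encode_state(touched=True, moving=False)
print(frame)                # [1, 0, 1, 0]
print(decode_state(frame))  # {'touched': True, 'moving': False}
```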
The virtual object control device 200 may be a standalone unit or may be integrated with an electronic device. In the case of the standalone unit, the virtual object control device 200 has its own housing, and in the case of the integration type, the virtual object control device 200 may use a housing of the electronic device. Here, the electronic device may be a PDA, a media player such as a music player, a communication terminal such as a mobile phone, etc.
Referring to the figure, a multi-telepointer includes a light projector 301, an input detector 302, and an input controller 303.
The light projector 301 corresponds to an emission device 210, and generates a predetermined optical signal.
The input detector 302 receives touch information and motion information from a touch sensor 220 and a motion detection sensor 230, respectively. The input detector 302 can appropriately convert and process the received touch information and motion information. The converted and processed information may be displayed on the touch sensor 220 formed as a touch screen.
The input controller 303 controls the light projector 301 according to the touch information and motion information of the input detector 302. For example, a wavelength of an optical signal can be adjusted depending on whether a user pushes the touch sensor 220 or not. In addition, optical signals having different wavelengths can be generated depending on the motion information.
For example, a user can direct the light projector 301 toward a desired position, and push the touch sensor 220 so that light can enter a specific portion of the virtual object display device 101 to provide a pointing position.
Referring to the figure, the virtual object display device 400 includes an optical response device 401 to detect position information of the virtual object control device 102.
When the virtual object control device 102 emits an optical signal, the virtual object display device 400 can detect an optical signal of the virtual object control device 102 using the optical response device 401, and acquire three-dimensional position information of the virtual object control device 102 on the basis of the detected optical signal.
Referring to the figure, the virtual object display device 400 may include a motion detection sensor 402.
When the virtual object control device 102 emits an optical signal, the motion detection sensor 402 can detect the optical signal and acquire three-dimensional position information of the virtual object control device 102 on the basis of the detected optical signal. In addition, when a user's hand is used as the virtual object control device 102, at least two motion detection sensors 402 can measure distances to the user's hand and apply trigonometry to the measured distances, thereby acquiring three-dimensional position information of the user's hand.
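A simplified planar version of this trigonometric step is sketched below, assuming two sensors at known positions along a baseline; resolving a full three-dimensional position without ambiguity would generally require an additional sensor. The function name and units are assumptions.

```python
import math

def triangulate_2d(baseline: float, r1: float, r2: float):
    """Locate a target from two distance measurements, with sensor 1 at the
    origin and sensor 2 at (baseline, 0). Planar simplification of the
    trigonometry mentioned above; returns (x, y) with y >= 0."""
    x = (r1**2 - r2**2 + baseline**2) / (2 * baseline)
    y_sq = r1**2 - x**2
    if y_sq < 0:
        raise ValueError("distances are inconsistent with the baseline")
    return x, math.sqrt(y_sq)

# Two sensors 1.0 m apart each report the distance to the user's hand.
print(triangulate_2d(1.0, r1=0.8, r2=0.9))  # -> approximately (0.415, 0.684)
```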
Referring to the figure, the virtual object display device 500 includes a position detector 501, a gesture determination part 502, and an event executor 503.
The position detector 501 detects position information of the virtual object control device 102 remotely interacting with the virtual object 103. For example, the position detector 501 can detect an optical signal emitted from the virtual object control device 102 through the optical response device 401 and acquire three-dimensional position information on the basis of the detected optical signal. In addition, when the virtual object control device 102 does not emit an optical signal, the position detector 501 can measure a distance to the virtual object control device 102 through the motion detection sensor 402 and acquire three-dimensional position information on the basis of the measured distance.
The gesture determination part 502 detects motion information of the virtual object control device 102 using the detected position information, and selects a gesture for controlling the virtual object 103 on the basis of the detected motion information. The motion information may include at least one of a pointing position, the number of points, a moving type, and a moving position of the virtual object control device 102. The selected gesture may be at least one of a selection gesture for selecting the virtual object 103, a moving gesture for changing a display position of the virtual object 103, an expansion/contraction gesture for increasing or reducing the size of the virtual object 103, and a rotation gesture for rotating the virtual object 103. For example, the gesture determination part 502 can determine whether an operation of the virtual object control device 102 by the user is to select, move, rotate, or expand/contract the virtual object 103 on the basis of the detected motion information.
The event executor 503 links the selected gesture to the virtual object 103, and executes an event corresponding to the selected gesture with respect to the virtual object 103. For example, the event executor 503 can select, move, rotate, or expand/contract the virtual object 103 depending on the selected gesture.
Referring to the figure, the virtual object control method 600 includes detecting a pointing position of the virtual object control device 102.
The virtual object control method 600 includes determining whether the detected pointing position substantially coincides with a display position of the virtual object 103 (operation 602). According to the embodiment, substantial coincidence between a pointing position and a display position of the virtual object 103 may include the case in which the pointing positions form a predetermined closed loop about the virtual object 103. For example, even when a user points the virtual object control device 102 around the virtual object 103 to be selected and draws a circle about the virtual object 103, the pointing position may be considered to substantially coincide with the display position of the virtual object 103.
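The closed-loop case can be illustrated with a standard ray-casting point-in-polygon test: the traced pointing positions are treated as a polygon, and the pointing is treated as substantially coinciding with the virtual object when the object's display position falls inside it. The function names below are assumptions.

```python
# Hypothetical sketch: treat the traced pointing positions as a polygon and
# test whether the object's display position falls inside it (ray casting).
from typing import List, Tuple

def encloses(loop: List[Tuple[float, float]], obj: Tuple[float, float]) -> bool:
    x, y = obj
    inside = False
    j = len(loop) - 1
    for i in range(len(loop)):
        xi, yi = loop[i]
        xj, yj = loop[j]
        # Count edge crossings of a horizontal ray cast from the object.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]   # a closed loop drawn by the user
print(encloses(square, (5, 5)))    # -> True: object is treated as pointed to
print(encloses(square, (15, 5)))   # -> False
```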
The virtual object control method 600 includes determining whether there is a touch signal or a z-axis motion when the detected pointing position substantially coincides with the display position of the virtual object 103 (operation 603). The touch signal may be a specific optical signal or a variation in the optical signal of the virtual object control device 102, and the z-axis motion may be motion in a vertical direction with respect to a screen of the virtual object display device 101, i.e., in a depth direction. The touch signal may be generated when a user touches the touch sensor 220 of the virtual object control device 200. The z-axis motion may be acquired on the basis of the position information detected through the optical response device 401 or the motion detection sensor 402.
The virtual object control method 600 includes selecting the selection gesture for the virtual object 103 when there is a touch signal or z-axis motion (operation 604).
When the selection gesture is selected, the event executor 503 changes a color of the selected virtual object 103 or executes an event of emphasizing a periphery thereof to inform the user that the virtual object 103 has been selected.
Therefore, the user can align the pointing position of the virtual object control device 102 with the virtual object 103 and push a selection button (for example, the touch sensor 220), or move the virtual object control device 102 in a vertical direction with respect to the screen of the virtual object display device 101, thereby intuitively selecting the virtual object 103.
Referring to the figure, the number of points pointed to by the virtual object control device 102 is detected.
When the number of points is one, process A is carried out.
The moving type and the moving position of the detected point or points then determine which of the moving, rotation, and expansion/contraction gestures is selected, as illustrated in the corresponding figures.
Referring to the figure, the virtual object control method 800 includes selecting a gesture for controlling the virtual object 103 on the basis of the detected motion information of the virtual object control device 102.
In addition, the virtual object control method 800 includes performing an event corresponding to the selected gesture with respect to the virtual object 103 (operation 802). For example, when the selection gesture is selected, an event of changing a color or emphasizing a periphery of the virtual object 103 can be performed. When the moving gesture is selected, an event of changing a display position of the virtual object 103 can be performed. When the rotation gesture is selected, an event of rotating the virtual object 103 or an environment of the virtual object 103 can be performed. When the expansion/contraction gesture is selected, an event of increasing or reducing the size of the virtual object 103 can be performed.
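For illustration, the event execution step might be organized as a dispatch from the selected gesture to a handler that updates the object's display state, as in the hypothetical sketch below; all names and parameters are placeholders.

```python
# Hypothetical sketch of an event executor: each recognized gesture is linked
# to a handler that updates the virtual object's display state.
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    position: list = field(default_factory=lambda: [0.0, 0.0])
    scale: float = 1.0
    angle: float = 0.0
    selected: bool = False

def execute_event(obj: VirtualObject, gesture: str, **params):
    handlers = {
        "selection": lambda: setattr(obj, "selected", True),  # e.g., highlight
        "movement": lambda: obj.position.__setitem__(slice(None), params["to"]),
        "rotation": lambda: setattr(obj, "angle", obj.angle + params["by"]),
        "expansion/contraction": lambda: setattr(obj, "scale", obj.scale * params["factor"]),
    }
    handlers[gesture]()   # perform the event linked to the selected gesture

obj = VirtualObject()
execute_event(obj, "selection")
execute_event(obj, "movement", to=[4.0, 2.0])
execute_event(obj, "expansion/contraction", factor=1.5)
print(obj)  # VirtualObject(position=[4.0, 2.0], scale=1.5, angle=0.0, selected=True)
```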
As described above, the virtual object display device extracts motion information such as a pointing position, the number of points, a moving type, and a moving position on the basis of position information of the virtual object control device 102, and selects an appropriate gesture according to the extracted motion information, allowing a user to control the virtual object 103 as in the real world.
Referring to the figure, the virtual object 103 can be selected using the virtual object control device 102.
For example, a user may align a pointing position 901 with a display position of the virtual object 103 and push the touch sensor 220, or change the pointing position 901 of the virtual object control device 102 while pushing the touch sensor 220 so as to draw a predetermined closed loop 902 about the virtual object 103.
Meanwhile, according to the embodiment, when the virtual object 103 is selected, a predetermined guide line may be displayed for performing movement, expansion/contraction, and rotation, as will be described below.
Referring to the figure, the selected virtual object 103 can be moved by changing the pointing position of the virtual object control device 102.
Variation in pointing position, i.e., motion of the virtual object control device 102, can be three-dimensionally performed. For example, when the user selects the virtual object 103 and moves the virtual object control device 102 to the right of the virtual object display device 101 (i.e., a +x-axis direction), the virtual object 103 can move rightward on a screen of the virtual object display device 101. In addition, when the user pulls the virtual object control device 102 in a direction away from the virtual object display device 101 (i.e., a +z-axis direction), the virtual object 103 can move forward from a screen of the virtual object display device 101. Since the screen of the virtual object display device 101 is a two-dimensional plane, forward and rearward movement of the virtual object 103 can be implemented with an appropriate size and variation in position according to the embodiment.
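A minimal sketch of this screen mapping is given below, with assumed constants for pixels-per-metre and depth gain: lateral motion is rendered as translation, and z-axis motion as a change in drawn size.

```python
# Hypothetical sketch: rendering a 3D move of the control device on a 2D
# screen. Lateral motion shifts the object; motion along the z-axis (toward
# or away from the screen) is shown as a change in drawn size.
def apply_motion(screen_pos, size, dx, dy, dz, depth_gain=0.1):
    """dx, dy, dz: displacement of the control device in metres.
    +z points out of the screen toward the user."""
    x, y = screen_pos
    new_size = size * (1.0 + depth_gain * dz)    # pulled closer -> drawn larger
    return (x + dx * 500, y + dy * 500), max(new_size, 1.0)  # 500 px/m assumed

pos, size = apply_motion((400, 300), 64, dx=0.1, dy=0.0, dz=0.2)
print(pos, size)   # (450.0, 300.0) 65.28
```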
Referring to the corresponding figures, the rotation and expansion/contraction gestures are performed in a similar manner. According to embodiments, a rotation gesture may be selected when the moving type of the pointing position is a curve, and an expansion/contraction gesture may be selected when the moving type is a straight line, and the moving position may determine whether the virtual object 103 itself or an environment of the virtual object 103 is controlled.
According to an embodiment, the above-mentioned selection, movement, expansion/contraction, and rotation may be performed individually with respect to each virtual object 103, or simultaneously with respect to any one virtual object 103. For example, it may be possible to move and rotate the virtual object 103 at the same time, or to assign movement in the x-y plane to one pointing position and movement along the z-axis to another pointing position.
Referring to the figure, the virtual object display device includes a gesture recognizer 22, a pointing linker 24, and an event executor 26.
According to the embodiment, the gesture recognizer 22 may recognize designation of a specific point or region pointed to by the virtual object control device 102 as a selection operation on the virtual object 103. In addition, the gesture recognizer 22 may recognize a user's gesture as a movement, rotation, or expansion/contraction operation according to the number of points, the moving type, and the moving position with respect to the virtual object 103.
The pointing linker 24 links the pointing position pointed to by the virtual object control device 102 to the virtual object 103 displayed on the screen according to the gesture recognized through the gesture recognizer 22.
Meanwhile, the event executor 26 performs an event with respect to the virtual object linked through the pointing linker 24. That is, an event is performed with respect to the virtual object corresponding to the pointing position of the virtual object control device 102, according to the gesture recognized through the gesture recognizer 22. For example, it is possible to perform a selection, movement, rotation, or expansion/contraction operation with respect to the virtual object. Therefore, even at a remote distance, it is possible to provide a user with the feeling of directly operating the virtual object in a touch manner.
Embodiments of the present invention may be implemented through a computer readable medium that includes computer-readable codes to control at least one processing device, such as a processor or computer, to implement such embodiments. The computer-readable medium includes all kinds of recording devices in which computer-readable data are stored.
The computer-readable recording medium includes a read only memory (ROM), a random access memory (RAM), a compact disc read only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, etc. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.
While aspects of the present invention have been particularly shown and described with reference to differing embodiments thereof, it should be understood that these embodiments should be considered in a descriptive sense only and not for purposes of limitation. Descriptions of features or aspects within each embodiment should typically be considered as available for other similar features or aspects in the remaining embodiments.
Thus, although a few embodiments have been shown and described, with additional embodiments being equally available, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
Number | Date | Country | Kind
---|---|---|---
10-2009-0024504 | Mar 2009 | KR | national
10-2010-0011639 | Feb 2010 | KR | national