This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2009-17037, filed on Jan. 28, 2009; the entire contents of which are incorporated herein by reference.
The present invention relates to an apparatus and a method for detecting an object pointed by a user's touch operation on a touch panel.
Recently, touch panels on which a user performs a pointing operation have become widely used. For example, the user points at an object on a display screen using his/her finger or a stylus. Various methods have been developed to detect that the user has placed his/her finger on the surface of a touch panel and to determine the finger position.
For example, a light source to emit light and an optical sensor to detect the light (as a pair) are located on opposite sides of the outer frame of the touch panel. When the user puts his/her finger on the surface of the touch panel, the light emitted from the light source on one side is blocked by the finger before it reaches the optical sensor on the other side. By using this blocking, the finger's position along the horizontal and vertical directions on the surface can be detected. This technique is disclosed in JP-A 2006-11568 (Kokai).
In this kind of touch panel, no special material is necessary for the surface of the touch panel. For example, by using a transparent acrylic plate as the surface, the user can see the opposite side through the touch panel. Accordingly, by attaching the touch panel to an existing liquid crystal display, the above-mentioned touch panel function can be added to the display.
However, the touch panel having a transparent panel surface may be located separately from the display unit that displays the object to be pointed at by the touch operation. In this situation, when the user's facial position (view position) moves, motion parallax occurs. As a result, the position of the user's finger on the panel surface is shifted from the intended pointing position on the display unit, and the user cannot easily point at a display object on the display unit.
The present invention is directed to an apparatus and a method for accurately deciding the display object pointed at by the user's touch operation on the touch panel, even if motion parallax occurs due to movement of the user's view position.
According to an aspect of the present invention, there is provided an apparatus for detecting an object pointed at by a user, comprising: a first storage unit configured to store position information of a touch position detector in a world coordinate system, the touch position detector detecting a user's touch position on a touch panel in a touch panel coordinate system; a first conversion unit configured to convert the user's touch position to a touch position in the world coordinate system using the position information; a half-line generation unit configured to generate a half-line connecting the touch position and a view position of the user in the world coordinate system; a second storage unit configured to store position information of an object in the world coordinate system, the object being located separately from the touch panel and visible to the user via the touch panel; and a decision unit configured to decide whether the half-line crosses the object using the position information of the object.
Hereinafter, embodiments of the present invention will be explained by referring to the drawings. The present invention is not limited to the following embodiments.
In the first embodiment, an information processing apparatus uses a touch panel to decide which object the user points at on a display unit located separately from the touch panel.
The view position calculation unit 11 calculates a user's view position from the user's facial image taken by a camera unit 8. The fourth information storage unit 22 stores position information of the camera unit 8 in a world coordinate system (a space defined by world coordinates). The second conversion unit 12 calculates a view position in the world coordinate system using the position information of the camera unit 8. The first information storage unit 21 stores position information of a touch position detector (touch panel) 6 in the world coordinate system. The first conversion unit 13 calculates a touch position in the world coordinate system from a touch position detected by the touch position detector 6, using the position information of the touch panel.
The half-line generation unit 14 generates information of a half-line connecting the view position and the touch position in the world coordinate system. The second information storage unit 23 stores position information of a display unit (displaying an object visible to the user via the touch panel) in the world coordinate system. The first decision unit 15 decides whether the half-line crosses the display unit, using the position information of the display unit. The third information storage unit 24 stores position information of a display object in a screen coordinate system of the display unit. The second decision unit 16 decides whether the half-line crosses a display object on the display unit that was decided to cross the half-line, using the position information of the display object.
The information processing apparatus of the first embodiment is packaged as shown in
A program executed by the information processing apparatus of the first embodiment is read from the external storage 5 into the main storage 3 or recorded in the read-only storage 4, and executed by the processor 2. The main storage 3 stores parameters and calculation results of the executed program.
A first display unit 7 is located at the lower half region of the touch position detector 6 so as to be in contact with the touch position detector 6. A second display unit 9 is located separately from (on the other side of) the touch position detector 6. The first display unit 7 and the second display unit 9 each have a display to output video and a graphics processor to generate the video. As the first display unit 7, for example, a 19-inch liquid crystal monitor is used. As the second display unit 9, for example, a large-sized display such as a 100-inch projection screen is used.
The camera unit 8 is located at the lower part of the touch position detector 6. The camera unit 8 includes a video camera to take a facial image of the user 31 in real time, and a video input processor to input the facial image taken. As the camera unit 8, for example, a Web camera connectable via USB is used. In the first embodiment, an image having VGA resolution (640×480 pixels) is captured 30 times per second.
In the positional relationship of
In order to represent a coordinate of each apparatus in the positional relationship of
A rectangle on a surface of the touch position detector 6 is defined as points T1, T2, T3 and T4. A rectangle on a surface of the first display unit 7 is defined as points D1, D2, D3 and D4. A rectangle on a surface of the second display unit 9 is defined as points M1, M2, M3 and M4. A position of a right eye of the user 31 is defined as a point Er, a position of a left eye of the user 31 is defined as a point El, a view position of the user 31 is defined as a point E, a touch position of the user 31 on the touch position detector 6 is defined as a point P, and a touch position of the user 31 on the second display unit 9 is defined as a point S. All T1-T4, D1-D4, M1-M4, Er, El, E, P and S are points in a three-dimensional space.
As shown in
As the touch position detector 6 in the first embodiment, an "XYFer (registered trademark)" is installed rotated clockwise by 90 degrees. Assume that calibration of the touch position detector 6 is accurately performed. In this case, as the output value of a detection position along the horizontal direction, the touch position detector 6 outputs an integer in the range "0˜65535" that interiorly divides the distance between both sides of the panel surface according to the detection position. Briefly, "(x,y)=(0,0)" is output at a point T′3, "(x,y)=(0,65535)" is output at a point T′4, and "(x,y)=(0,32768)" is output at the midpoint between points T′3 and T′4. The same processing is executed for the vertical direction.
The touch position detector 6 outputs a coordinate P(t)(x, y, z, 1) of a touch position P (in three-dimensional space) in the touch panel coordinate system 51. Here, P(t) is a homogeneous coordinate.
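For illustration only, the following Python sketch shows how such raw integer detector output might be normalized into a coordinate P(t) in the touch panel coordinate system 51. The panel dimensions used here are assumed values, and the code is not the actual driver interface of the detector.

```python
# Hypothetical sketch: convert raw detector integers (0-65535 along each
# axis, as described above) into a homogeneous coordinate P(t) = (x, y, 0, 1)
# in the touch panel coordinate system. The panel width and height in
# millimeters are assumed values chosen only for illustration.

PANEL_WIDTH_MM = 400.0   # assumed physical width of the panel surface
PANEL_HEIGHT_MM = 300.0  # assumed physical height of the panel surface
RAW_MAX = 65535          # maximum raw value output by the detector

def raw_to_panel_coordinate(raw_x, raw_y):
    """Map raw detector output to a homogeneous touch-panel coordinate."""
    x = raw_x / RAW_MAX * PANEL_WIDTH_MM
    y = raw_y / RAW_MAX * PANEL_HEIGHT_MM
    z = 0.0  # the panel surface is flat, so z is fixed at 0
    return (x, y, z, 1.0)  # homogeneous coordinate P(t)
```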
In the case of "XYFer (registered trademark)" as the touch position detector 6, the panel surface has a rectangular flat shape. Accordingly, the two sides of the touch panel along the horizontal and vertical directions are defined as the x-axis and y-axis, and the z-coordinate is defined as "0". However, if the panel surface of the touch position detector 6 is curved, the z-coordinate is defined based on the depth of the touch position.
Furthermore, when the user 31 touches the panel surface with both hands, the touch position detector 6 can detect a plurality of touch positions. In the case of using information of a plurality of touch positions on the panel surface, the touch position detector 6 outputs a plurality of coordinates P(t)(x, y, z, 1) in three-dimensional space, one for each detected position.
By using information of the touch position detector 6 read from the first information storage unit 21, the first conversion unit 13 calculates a touch position in the world coordinate system from the touch position detected by the touch position detector 6. The information of the touch position detector 6 represents the positional relationship between the touch panel coordinate system 51 and the world coordinate system 41. The coordinate conversion from the touch panel coordinate system to the world coordinate system is represented as a homogeneous transformation matrix M(t) having "(4 rows)×(4 columns)". This matrix M(t) may be determined by previously measuring the position of the touch position detector 6. Alternatively, by equipping the touch position detector 6 with a position sensor (for positioning), the matrix M(t) may be determined by acquiring the position of the touch position detector 6 from the position sensor. For the processing to calculate the matrix M(t) from the position/posture of the touch position detector 6, a well-known method in the computer-vision field or the spatial positioning field can be used.
By using the matrix M(t), the first conversion unit 13 converts the coordinate P(t) of the touch position in the touch panel coordinate system 51 to a coordinate P(w) of the touch position in the world coordinate system. This conversion processing is represented as the following equation (1).
P(w)=M(t) P(t) (1)
In the equation (1), P(w) is a homogeneous coordinate. The first conversion unit 13 outputs P(w).
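Equation (1) is an ordinary homogeneous transformation. The following minimal Python sketch assumes that M(t) has already been obtained by measurement or from a position sensor as described above; the conversion of equation (2) below is computed in exactly the same way with M(c).

```python
import numpy as np

def to_world(homogeneous_point, transform_4x4):
    """Apply a 4x4 homogeneous transformation, e.g. P(w) = M(t) P(t).

    `transform_4x4` is assumed to have been measured beforehand or read
    from a position sensor; it is not computed here.
    """
    p = np.asarray(homogeneous_point, dtype=float).reshape(4)
    p_world = transform_4x4 @ p
    # Normalize so the last (w) component is 1, keeping a valid
    # homogeneous coordinate.
    return p_world / p_world[3]

# Example usage with an identity transform as a placeholder for a real M(t).
M_t = np.eye(4)
P_t = (120.0, 80.0, 0.0, 1.0)
P_w = to_world(P_t, M_t)
```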
Furthermore, if the user can carry the touch position detector 6 and change its position/direction, a position sensor for positioning using magnetism or ultrasonic waves is attached to the touch position detector 6, and the position/posture of the touch position detector 6 in the world coordinate system is acquired from the sensor.
From the facial image of the user 31 (taken by the camera unit 8), the view position calculation unit 11 calculates a view position of the user 31 in a camera coordinate system of the image taken by the camera unit 8. For the facial image, correction of lens distortion and correction of the center position on the CCD surface are executed as needed. In order to calculate the view position, a well-known method in the image processing field for estimating the view position using facial features is used.
As shown in
Furthermore, instead of using facial features in the image, a color marker may be painted between both eyes on the user's face. By extracting the color of the marker and detecting its position, the view position of the user may be output.
On the assumption that the view point of the user 31 exists at a predetermined depth from the camera unit 8, the view position calculation unit 11 outputs E(c)(x, y, z, 1) as view position information in the camera coordinate system. In order to estimate the depth z at which the view point exists, size information of the face in the image is used. In the view position information, E(c) is a homogeneous coordinate.
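The specification states only that face size information is used to estimate the depth z. The following sketch shows one possible (assumed) realization based on a pinhole camera model, taking the view position as the midpoint between the detected eye positions Er and El in the image; the focal length and average face width are illustrative constants, not values from the embodiment.

```python
# Assumed sketch of depth estimation from face size using a pinhole camera
# model. The constants below are illustrative, not from the specification.

FOCAL_LENGTH_PX = 600.0        # assumed camera focal length in pixels
AVERAGE_FACE_WIDTH_MM = 160.0  # assumed real-world face width

def estimate_view_point(eye_right_px, eye_left_px, face_width_px):
    """Return E(c) = (x, y, z, 1) in the camera coordinate system."""
    # View position on the image: midpoint between the two detected eyes.
    # cx, cy are assumed to be measured relative to the image center,
    # consistent with the CCD center correction mentioned above.
    cx = (eye_right_px[0] + eye_left_px[0]) / 2.0
    cy = (eye_right_px[1] + eye_left_px[1]) / 2.0
    # Pinhole model: depth is inversely proportional to apparent face width.
    z = FOCAL_LENGTH_PX * AVERAGE_FACE_WIDTH_MM / face_width_px
    # Back-project the image midpoint to camera coordinates at that depth.
    x = cx * z / FOCAL_LENGTH_PX
    y = cy * z / FOCAL_LENGTH_PX
    return (x, y, z, 1.0)  # homogeneous coordinate E(c)
```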
By using position information of the camera unit 8 (read from the fourth information storage unit 22), the second conversion unit 12 calculates a view position of the user 31 in the world coordinate system 41 from the view position E(c) of the user 31 in the camera coordinate system (calculated by the view position calculation unit 11).
The position information of the camera unit 8 represents a coordinate conversion from the camera coordinate system to the world coordinate system 41. This conversion is represented as a homogeneous transformation matrix M(c) having "(4 rows)×(4 columns)". This matrix M(c) may be determined by previously measuring the position of the camera unit 8. Alternatively, by equipping the camera unit 8 with a position sensor (for positioning), the matrix M(c) may be determined by acquiring the position of the camera unit 8 from the position sensor. For the processing to calculate the matrix M(c) from the position/posture of the camera unit 8, a well-known method in the computer-vision field or the spatial positioning field can be used.
By using the matrix M(c), the second conversion unit 12 converts the view position E(c) in the camera coordinate system to a view position E(w) in the world coordinate system 41. This conversion processing is represented as the following equation (2).
E(w)=M(c) E(c) (2)
In the equation (2), E(w) is a homogeneous coordinate. The second conversion unit 12 outputs E(w).
In the first embodiment, a single camera unit 8 is used to acquire the view position of the user 31. However, the depth may be acquired by stereo imaging, i.e., two cameras may be used. Alternatively, a position sensor can be used.
The view position E(w) of the user 31 in the world coordinate system 41 (converted by the second conversion unit 12) is set as a start point. The half-line generation unit 14 generates a half-line from the start point toward the touch position P(w) in the world coordinate system 41 (converted by the first conversion unit 13). Concretely, the start point E(w) of the half-line and a unit vector of the direction vector P(w)−E(w) are generated.
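A minimal sketch of this half-line representation is given below, assuming the homogeneous world coordinates have already been normalized so that their fourth component is 1.

```python
import numpy as np

def make_half_line(view_position_w, touch_position_w):
    """Build the half-line E(w) + s*V (s >= 0) used by the decision units.

    Both inputs are homogeneous world coordinates (x, y, z, 1).
    Returns the start point E and the unit direction vector V.
    """
    e = np.asarray(view_position_w, dtype=float)[:3]
    p = np.asarray(touch_position_w, dtype=float)[:3]
    direction = p - e
    v = direction / np.linalg.norm(direction)  # unit vector of P(w) - E(w)
    return e, v
```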
As shown in
In the display position information 71, the positions of the corner points of the display unit in the world coordinate system 41 are defined counterclockwise as seen from the user side of the display unit. Counterclockwise order is used because the right-handed system is applied to the coordinate system of the first embodiment. However, a clockwise definition can also be handled, depending on how the program is implemented.
In the case of using a rectangular display such as the first display unit 7 or the second display unit 9, the display is defined by four points in the world coordinate system 41. Furthermore, if the display has a curved surface, the curved surface is approximated as a set of planes.
In the first embodiment, the first display unit 7 and the second display unit 9 are used. In this case, the four points D1, D2, D3 and D4 (in the world coordinate system 41) composing the rectangle of the first display unit 7, and the four points M1, M2, M3 and M4 (in the world coordinate system 41) composing the rectangle of the second display unit 9, are stored. When the location of a display unit changes in the world coordinate system 41, its four points are updated accordingly. When an ID of a display unit is indicated, information of the display unit corresponding to the ID is output from the second information storage unit 23.
The first decision unit 15 reads information of each display unit in order from the second information storage unit 23, and decides whether the half-line (generated by the half-line generation unit 14) spatially crosses the display unit. If the half-line crosses a plurality of display units, the first decision unit 15 outputs information of the display unit having a cross position nearest to the view position of the user.
Next, processing of the first decision unit 15 is explained by referring to a flow chart of
(A) At S101, as initialization processing, "1" is substituted for a variable "i" representing an ID of the display unit. An infinite value "∞" is substituted for a variable "ds" representing the distance between the view position E and the cross position of the display unit nearest to the view position. An invalid value "0" is substituted for a variable "Is" representing the display unit (crossing the half-line) nearest to the view position. At S102, information of the display unit "ID=i" is read from the second information storage unit 23.
At S103, the polygon of the display unit (display screen) is divided into a set of triangles that do not mutually overlap. As a method for dividing a polygon (defined by a contour line) into triangles, a well-known method in the computer graphics field is used. For example, the tessellation functions ("gluTess") of the utility library "GLU" in the graphics library "OpenGL" are used. As shown in
(B) At S104, for each triangle divided from the surface, the point where the half-line crosses the plane containing the triangle is calculated, and it is decided whether the point is included in the triangle. As processing to decide the cross point between the half-line and a triangle, a well-known method in the computer graphics field is used (an illustrative sketch of this decision is given after these steps). If the cross point is included in any triangle divided from the surface, it is decided that the half-line crosses the surface. This processing is well known as ray tracing in computer graphics. At S105, if the half-line crosses any triangle, processing is forwarded to S106. If the half-line does not cross any of the triangles, processing is forwarded to S111.
(C) At S106, a distance between the view position E and the cross position S is calculated, and substituted for a variable d. At S107, the variable “ds” is compared to the variable “d”. If the variable “ds” is larger than the variable “d”, processing is forwarded to S108. If the variable “ds” is not larger than the variable “d”, processing is forwarded to S109. At S108, the variable “d” is substituted for the variable “ds”, and “i” is substituted for the variable “Is”.
(D) At S109, it is decided whether all display units registered in the second information storage unit 23 have been checked. If all display units registered in the second information storage unit 23 have already been checked, processing is forwarded to S110. If at least one display unit is not yet checked, processing is forwarded to S111. At S111, the variable "i" is increased by "1", and processing is forwarded to S102.
(E) At S110, it is decided whether the variable "Is" is a valid value other than "0". If the variable "Is" is a valid value other than "0", processing is forwarded to S112. If the variable "Is" is "0", processing is forwarded to S113.
(F) At S112, the information E and V of the half-line, and the value of the cross position S converted into the screen coordinate system 91, are output. For example, as shown in
S(s)=M(d2)S(w) (3)
S(s) is a homogeneous coordinate. On the other hand, at S113, information that no display unit crosses the half-line is output, and processing is completed.
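The following Python sketch summarizes the flow of S101 through S113, assuming each display unit's surface has already been divided into triangles at S103. The ray-triangle test shown is the Möller–Trumbore method, a standard technique in the computer graphics field, used here only as an illustrative stand-in for whatever intersection routine an actual implementation adopts; the conversion of the cross position into the screen coordinate system by equation (3) is omitted.

```python
import numpy as np

def ray_triangle_intersection(e, v, tri, eps=1e-9):
    """Möller–Trumbore test: return the distance along the half-line (start
    point e, unit direction v) to triangle `tri` (three 3D vertices), or
    None if there is no crossing."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in tri)
    edge1, edge2 = p1 - p0, p2 - p0
    h = np.cross(v, edge2)
    a = np.dot(edge1, h)
    if abs(a) < eps:            # half-line parallel to the triangle's plane
        return None
    f = 1.0 / a
    s = e - p0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, edge1)
    w = f * np.dot(v, q)
    if w < 0.0 or u + w > 1.0:
        return None
    t = f * np.dot(edge2, q)
    return t if t > eps else None  # keep only crossings in front of E

def nearest_display_unit(e, v, display_units):
    """Return (display_id, cross_position) for the display unit whose cross
    position is nearest to the view position E, or (0, None) if no display
    unit crosses the half-line. `display_units` maps an ID to the list of
    triangles covering the display surface (the result of step S103)."""
    ds, best_id, best_point = float("inf"), 0, None   # S101
    for display_id, triangles in display_units.items():   # S102
        for tri in triangles:                              # S104
            t = ray_triangle_intersection(e, v, tri)
            if t is not None and t < ds:                   # S105-S108
                ds, best_id = t, display_id
                best_point = e + t * v                     # cross position S
    return best_id, best_point                             # S110-S113
```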
As shown in
For example, as shown in
The second decision unit 16 reads the display object position information from the third information storage unit 24 in order, and decides whether a cross position S is included in a region of the display object on the display unit (output from the first decision unit 15) crossing the half-line. Processing of the second decision unit 16 is explained by referring to
(A) At S201, it is decided whether the variable "i" of the display unit crossing the half-line is a value other than "0". If the variable "i" is a value other than "0", processing is forwarded to S202. If the variable "i" is "0", processing is forwarded to S208.
(B) At S202, as initialization processing, "1" is substituted for a variable "j" representing a display object. At S203, information of the display object "ID=j" on the display unit "ID=i" is read from the third information storage unit 24. At S204, it is decided whether the cross position S is included in the region of the display object "ID=j" (an illustrative sketch of this inclusion test is given after these steps). If the cross position S is included, processing is forwarded to S207. If the cross position S is not included, processing is forwarded to S205. For example, it is decided whether the cross position S is included in the window 112 defined by the diagonal points WA1 and WA2.
(C) At S205, it is decided whether all display objects registered in the third information storage unit 24 have been checked. If all display objects have already been checked, processing is forwarded to S208. If at least one display object is not yet checked, processing is forwarded to S206. At S206, the value of the variable "j" is increased by "1", and processing is forwarded to S203.
(D) At S207, a value of the variable “i” (ID of the display unit), a value of the variable “j” (ID of the display object), and a coordinate of the cross position S, are output as a pointing decision result (information of the user's pointed object), and processing is completed. On the other hand, at S208, information that no objects are pointed is output as the pointing decision result, and processing is completed.
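As an illustrative sketch of the inclusion test at S204, the following assumes that each display object region is stored as two diagonal corner points in the screen coordinate system (such as the points WA1 and WA2 mentioned above); the storage layout is an assumption for this example.

```python
def find_pointed_object(cross_position_s, display_objects):
    """Return the ID of the first display object whose rectangular region
    contains the cross position S(s), or 0 if no object is pointed at.

    `display_objects` maps an object ID to its two diagonal corner points
    ((x1, y1), (x2, y2)) in the screen coordinate system (assumed layout).
    """
    sx, sy = cross_position_s
    for object_id, ((x1, y1), (x2, y2)) in display_objects.items():  # S203
        if min(x1, x2) <= sx <= max(x1, x2) and \
           min(y1, y2) <= sy <= max(y1, y2):                         # S204
            return object_id                                         # S207
    return 0                                                         # S208
```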
By the above processing, for the plurality of display units 7 and 9 located separately from the touch position detector 6, it is decided which display object on these displays the user 31 points at by a touch operation. Then, the series of processing to output information of the display object is completed.
Next, in the information processing apparatus of the first embodiment, one example of an information processing method is explained by referring to a flow chart of
(A) At S1, the first display unit 7 and the second display unit 9 each output a display object visible to the user 31 via the touch position detector 6. The camera unit 8 takes a facial image of the user 31. At S2, the view position calculation unit 11 calculates a view position of the user 31 from the facial image of the user 31 (taken by the camera unit 8). At S3, the second conversion unit 12 calculates a view position in the world coordinate system from the view position of the user 31 (calculated by the view position calculation unit 11) and the position of the camera unit 8 in the world coordinate system.
(B) At S4, the touch position detector 6 detects a touch position of the user 31. At S5, the first conversion unit 13 calculates a touch position in the world coordinate system from the touch position detected by the touch position detector 6.
(C) At S6, the half-line generation unit 14 generates a half-line connecting the view position of the user 31 and the touch position in the world coordinate system.
(D) At S7, the first decision unit 15 reads position information of the first display unit 7 and the second display unit 9 from the second information storage unit 23. Then, the first decision unit 15 decides whether the half-line (generated by the half-line generation unit 14) crosses the first display unit 7 and whether it crosses the second display unit 9.
(E) At S8, the second decision unit 16 reads information of a display object on the display unit which is decided to cross the half-line, from the third information storage unit 24. Then, the second decision unit 16 decides whether the half-line crosses the display object. If it is decided that the half-line crosses the display object, information of the display object is stored in the fifth information storage unit 25.
As mentioned above, in the first embodiment, even if motion parallax occurs when the user 31 moves his/her view position, the motion parallax is corrected. In other words, it is decided which display object on the second display unit 9 (located separately from the touch position detector 6) the user 31 points at by a touch operation. Briefly, by operating the touch position detector 6, the user 31 can accurately point at a display object on the second display unit 9.
In the first embodiment, the first display unit 7 and the second display unit 9 are prepared. However, the number and locations of display units are not limited. Furthermore, in the first embodiment, the first display unit 7 is located in contact with the touch position detector, and the second display unit 9 is located separately from the touch position detector. However, a plurality of display units may be located separately from the touch position detector.
(The First Modification)
In the first embodiment, the view position of the user is acquired using the view position calculation unit 11 and the second conversion unit 12. In the first modification, however, the case where the view position is assumed to be fixed is explained. As shown in
(The Second Modification)
In the first embodiment, when the user performs a touch operation, a decision result of the pointed object is acquired. In the second modification, processing to move the display position of the display object when the user performs a drag operation after the decision result is acquired is explained.
After the user touches the touch position detector (touch panel) 6, the user moves the touch position of his/her finger without detaching the finger from the touch panel. After moving his/her finger on the touch panel, the user detaches the finger from the touch panel. This operation is called a "drag operation". Furthermore, when the user moves the position of his/her face while keeping the touch position of his/her finger fixed on the touch panel, the user's pointing position on the display unit moves relatively. In this case, the same processing as in the second modification is also used.
An example of the drag operation is explained by referring to
Next, as to the drag operation, in case that the user moves a touch position of his/her finger without detaching the finger from the touch panel, processing of the second modification is explained by referring to
(A) Before S301, when the user performs a touch operation, the pointing decision result (the pointed display object) is output in the information processing apparatus of
(B) At S302, the second decision unit 16 decides whether the present touch position is included in any display unit. If the present touch position is included, processing is forwarded to S303. If the present touch position is not included in any of the display units, i.e., if the user points at something other than a display unit, processing is forwarded to S304.
(C) At S303, a touch position (corresponding to the present touch position) on the display unit is set to an argument, and a position of the display object for dragging in the third information storage unit 24 is updated using the argument. Briefly, the position of the display object for dragging is updated to the present touch position. After that, processing is completed.
(D) At S304, there is no display unit on which to output the display object. Accordingly, the second decision unit 16 sets the information of the display object for dragging in the third information storage unit 24 to non-display. After that, processing is completed.
Next, in case that the user completes the drag operation by detaching his/her finger from the touch panel, processing of the second modification is explained by referring to
(A) At S401, by using processing of the first decision unit 15 in
(B) At S402, the second decision unit 16 decides whether the detach position is included in any display unit. If the detach position is included in a display unit, processing is forwarded to S403. If the detach position is not included in any of the display units (i.e., the user detaches the finger after pointing at something other than a display unit), processing is forwarded to S404.
(C) At S403, a touch position (corresponding to the detach position) on the display unit is set to an argument, and a position of the moving display object in the third information storage unit 24 is updated using the argument. Briefly, the position of the moving display object is updated to the detach position. After that, processing is forwarded to S405.
(D) At S404, there is no display unit on which to output the display object. Accordingly, the second decision unit 16 outputs information that the drag operation is invalid. After that, processing is forwarded to S405.
(E) At S405, information of the display object for dragging is deleted from the third information storage unit 24, and processing is completed.
In the second modification, an example in which the display object is moved by the drag operation is explained. Whether the display object is moved by the drag operation, or is copied and displayed at the destination position while remaining displayed at the source position, can be arbitrarily defined based on the operation. For example, when the user drags within the first display unit 7, the display object is moved on the first display unit 7. When the user drags from the first display unit 7 to the second display unit 9 (i.e., onto a different display unit), the display object is copied and displayed on the second display unit 9.
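The following Python sketch illustrates the two branches handled during the drag (S302 to S304) and on release (S402 to S405). The storage object and its methods are hypothetical stand-ins introduced only for this example, not an actual interface from the specification.

```python
# Hypothetical sketch of the drag handling described above. `storage` and
# its methods are illustrative stand-ins for the third information storage
# unit 24; they are assumptions, not a real API.

def handle_drag_move(display_id, touch_position_on_display, storage,
                     drag_object_id):
    """Processing while the finger is moving (S302-S304)."""
    if display_id != 0:
        # The present touch position lies on a display unit: move the
        # display object for dragging to that position (S303).
        storage.update_object_position(drag_object_id, display_id,
                                       touch_position_on_display)
    else:
        # The touch position is outside every display unit: set the
        # display object for dragging to non-display (S304).
        storage.set_object_visible(drag_object_id, False)

def handle_drag_release(display_id, detach_position_on_display, storage,
                        drag_object_id, moving_object_id):
    """Processing when the finger is detached (S402-S405)."""
    if display_id != 0:
        # Commit the moving display object to the detach position (S403).
        storage.update_object_position(moving_object_id, display_id,
                                       detach_position_on_display)
    else:
        # No display unit at the detach position: the drag is invalid (S404).
        pass
    # In either case, delete the temporary display object for dragging (S405).
    storage.delete_object(drag_object_id)
```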
In the second embodiment, in addition to the processing of the first embodiment, position information of a real object other than the display unit is used to decide whether the user points at the real object via the touch panel.
In the second embodiment, as shown in
In positional relationship of
As shown in
As shown in
As shown in
The third decision unit 17 reads information of each real object in order from the real object position storage unit 27, and decides whether the half-line (generated by the half-line generation unit 14) spatially crosses the real object. If the half-line crosses a plurality of real objects, the third decision unit 17 selects the one real object having a cross position nearest to the view position of the user from the plurality of real objects.
Next, processing of the third decision unit 17 is explained by referring to a flow chart of
(A) At S501, as initialization processing, “1” is substituted for a variable “k” representing an ID of the real object. An invalid value “0” is substituted for a variable “ks” representing a real object (crossing the half-line) nearest to the view position. As to a variable “ds” representing a distance between the view position E and a cross position of the display unit (nearest to the view position), an output value from the first decision unit 15 is used.
(B) At S502, information of the real object "ID=k" is read from the real object position storage unit 27. At S503, the polygon of the real object is divided into a set of triangles that do not mutually overlap. This processing is the same as in S103.
Next, at S504, for each triangle, the point where the half-line crosses the plane containing the triangle is calculated, and it is decided whether the point is included in the triangle. This processing is the same as in S104.
(C) At S505, if the half-line crosses any triangle, processing is forwarded to S506. If the half-line does not cross any of the triangles, processing is forwarded to S511. At S506, as shown in
(D) At S508, the variable "d" is substituted for the variable "ds", and "k" is substituted for the variable "ks". At S509, it is decided whether all real objects registered in the real object position storage unit 27 have been checked. If all real objects registered in the real object position storage unit 27 have already been checked, processing is forwarded to S510. If at least one real object is not yet checked, processing is forwarded to S511. At S511, the variable "k" is increased by "1".
(E) At S510, it is decided whether the variable "ks" is a valid value other than "0". If the variable "ks" is a valid value other than "0", processing is forwarded to S512. If the variable "ks" is "0", processing is forwarded to S513.
(F) At S512, information of the real object "ID=ks" crossing the half-line is output, and processing is completed.
(G) At S513, it is decided whether the variable "Is" is a valid value other than "0". If the variable "Is" is a valid value other than "0", processing is forwarded to S514. If the variable "Is" is "0", processing is forwarded to S515. At S514, the information E and V of the half-line, and the value of the cross position S converted into the screen coordinate system, are output, and processing is completed. This processing is the same as in S112. At S515, information that neither a real object nor a display unit crosses the half-line is output, and processing is completed.
If the third decision unit 17 decides that the half-line crosses a real object and the first decision unit 15 decides that the half-line crosses a display unit, the third decision unit 17 decides whether the cross position on the display unit is nearer to the view position (in the world coordinate system) than the cross position on the real object. If the cross position on the display unit is farther from the view position than the cross position on the real object, information of the real object "ID=ks" is output. On the other hand, if the cross position on the display unit is nearer to the view position than the cross position on the real object, the second decision unit 16 decides whether the half-line crosses a display object on the display unit.
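A compact illustration of this comparison is given below, under the same assumptions as the earlier sketches: the distances are measured along the half-line from the view position E to the respective cross positions, and float('inf') indicates that no crossing was found.

```python
def resolve_pointed_target(display_distance, display_id,
                           real_object_distance, real_object_id):
    """Choose the target whose cross position is nearer to the view
    position E along the half-line. An ID of 0 means "not crossed"."""
    if real_object_id != 0 and real_object_distance < display_distance:
        return ("real_object", real_object_id)  # the real object is pointed at
    if display_id != 0:
        # The display unit is nearer: hand over to the second decision unit,
        # which checks the display objects on that display unit.
        return ("display_unit", display_id)
    return ("none", 0)  # nothing crosses the half-line
```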
As mentioned above, in the second embodiment, even if motion parallax occurs due to movement of the view point of the user 31, the motion parallax is corrected. Briefly, it is decided whether the user 31 points, by a touch operation via the touch panel, at an object displayed on the first display unit 7 (located separately from the touch position detector 6) or at the real object 181. Accordingly, by operating the touch position detector 6, the user can accurately point at the object displayed on the display unit or at the real object (located remotely from the user).
In the second embodiment, the locations and the number of real objects (and display units) are not limited. Furthermore, in the second embodiment, the first display unit 7 is located in contact with the touch position detector 6, and the real object 181 is located far from the view point of the user. However, a plurality of display units and real objects may be located far from the view point of the user.
(The First Modification)
In the second embodiment, the view position of the user is acquired using the view position calculation unit 11 and the second conversion unit 12. In the first modification, however, the case where the view position is assumed to be fixed is explained. As shown in
(The Second Modification)
In the information processing apparatus of
First, by referring to
After that, when the user detaches his/her finger 250 from the real object 181, processing corresponding to the moving display object 251 and the real object 181 is executed. This processing can be defined based on the characteristics of the real object 181. For example, if the real object 181 is a printer, the information of the display object 251 is printed in a predetermined format. Furthermore, without using the display object for dragging 252, the display object 251 may be moved directly on the first display unit 7, based on the drag operation.
Furthermore, processing of the drag operation from the display unit to the real object in
Next, by referring to
After that, when the user detaches the finger 260 from the first display unit 7, information of the real object 181 is displayed as a display object 261. This information can be defined based on the characteristics of the real object 181. For example, if the real object 181 is a monitoring camera, the display object 261 may be video taken by the monitoring camera. If the real object 181 is a building viewable by the user, the display object 261 may be information related to the building.
Without using the display object for dragging 262, the display object 261 may be moved directly on the first display unit 7, based on the drag operation.
Furthermore, processing of the drag operation from the real object to the display unit in
Furthermore, in the second modification, processing corresponding to the drag operation between a display unit and a real object is explained. However, even if the user points at a first real object and drags it onto a second real object, the processing can be realized in the same way.
For example, as shown in
The information processing apparatus of the above-mentioned embodiments can be utilized by installing it in a personal computer, a next-generation cellular terminal, a monitoring system, or a control system.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and embodiments of the invention disclosed herein. It is intended that the specification and embodiments be considered as exemplary only, with the scope and spirit of the invention being indicated by the claims.