This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2013-077202, filed on Apr. 2, 2013, the entire contents of which are incorporated herein by reference.
The embodiments discussed herein are related to an information operation display system, a display program, and a display method.
In recent years, augmented reality technology has been known in which a virtual image is projected onto a real object using a projector to present a note, a menu, or the like related to the real object. In such augmented reality technology, a user interface technique is used to detect a gesture of an operation object such as a hand or a finger and realize an interaction between the operation object and an operation target object such as a virtual image. For example, a command to select a point on the operation target object is issued by making a gesture such as touching a part of the operation target object, such as a virtual image, with a fingertip.
A description of a related technique may be found, for example, in P. Mistry, P. Maes, “SixthSense-A Wearable Gestural Interface”, in the Proceedings of SIGGRAPH Asia 2009, Emerging Technologies, Yokohama, Japan, 2009.
According to an aspect of the invention, an information operation display system includes a camera, a projector, and an information processing apparatus. The information processing apparatus includes an acquisition unit configured to acquire an image taken by the camera, a measurement unit configured to measure a 3-dimensional coordinate position of an operation object included in the image acquired by the acquisition unit, and a display unit configured to control the projector such that an image indicating a point on which a selection operation is performed by the operation object is displayed on an operation target object according to the 3-dimensional coordinate position of the operation object measured by the measurement unit.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed.
In the conventional technique described in the background, however, it is difficult for a user to recognize the situation of an operation performed by the user, which may cause the apparatus to incorrectly recognize an input given by the user. For example, an operation using a hand or a finger is, in most cases, performed to issue a command specifying a precise position. In such an operation, if even a slight difference occurs between the actually intended position and the position recognized by the apparatus, the user may have to repeat the operation many times until the intended position is correctly recognized by the apparatus, which results in a reduction in operability.
In view of the above situation, it is desired to provide an information operation display system, a display program, and a display method that make it possible to improve operability.
Referring to the figures, embodiments of an information operation display system, a display program, and a display method are described below. Note that the embodiments described below are merely illustrative examples and do not limit the technique disclosed herein. Also note that the embodiments may be combined in various manners as long as no contradiction arises.
[Configuration of Information Operation Display System]
The information operation display system 100 has a projection plane onto which an image is projected by the projector 3. The projection plane is used as a work plane, and a virtual image is provided in a work environment by projecting the virtual image onto the projection plane. The projector 3 and the two cameras 1 and 2 are installed above the projection plane so as to face vertically downward. The two cameras 1 and 2 have known parameters, and they are installed such that their optical axes are parallel to each other and their horizontal axes lie on the same straight line in the image. The corresponding parameters of the cameras 1 and 2 are preferably as close to equal as possible. Using these cameras 1 and 2, color information and depths of the projection plane or the work plane are acquired. A virtual image is projected onto the work plane by the projector 3. A user performs an interaction by placing his/her hand on the work plane from a particular direction.
The information processing apparatus 10 calculates the 3-dimensional position of the operation object from a time series of images taken by the cameras 1 and 2. The information processing apparatus 10 then determines an operation performed on an operation target object such as a document based on the calculated 3-dimensional position of the operation object. More specifically, for example, the information processing apparatus 10 determines which information part in the document is touched (selected) or released from the touched (selected) state. A network for connecting the cameras 1 and 2 and the projector 3 may be a wired or wireless communication network such as a local area network (LAN), a virtual private network (VPN), or the like.
Note that the shutter operation timing may not be synchronous between the cameras 1 and 2. That is, the cameras 1 and 2 may not be synchronous in operation. Furthermore, the information operation display system 100 may include three or more cameras. Although in the present embodiment it is assumed by way of example that the projector 3 is connected to the information processing apparatus 10 via the network, a cable, or a radio link, the information processing apparatus 10 may not be connected to the network. Furthermore, in the following description, it is assumed by way of example but not limitation that an object whose image is taken by the cameras 1 and 2 is a hand or a finger of an operator who operates the projected document. Alternatively, the object may be a pen, a stick, or the like.
In the information operation display system 100, a calibration is performed in advance in terms of the relative position between the recognition coordinate system of the cameras 1 and 2 and the display coordinate system of the projector 3. In the information operation display system 100, whenever a change occurs in the relative positional relationship among the cameras 1 and 2 and the projector 3, a calibration is performed. A specific method of the calibration is to read out an image output from the projector 3 using the cameras 1 and 2 and internally perform the calibration as described below. Note that the method of the calibration is not limited to this. In the information operation display system 100, the calibration is performed for each of the two cameras.
In the information operation display system 100, first, a marker is displayed at a position with certain arbitrary coordinate values (x_p, y_p) in the display coordinate system of the projector 3. The marker may have an arbitrary color and a shape that allow the marker to be easily distinguished from a background. The cameras 1 and 2 each take an image of a situation projected on the projection plane. Thereafter, the information processing apparatus 10 reads the marker by performing image processing. In a case where the marker has a circular pattern, the circular pattern may be read out by performing a Hough transform disclosed, for example, in Kimme et al., "Finding circles by an array of accumulators", Communications of the Association for Computing Machinery, 18, pp. 120-122, 1975. Coordinate values obtained via the reading process are denoted as (x_i, y_i).
The information processing apparatus 10 performs the process of reading the marker for four points at arbitrary positions. The information processing apparatus 10 determines each component of a homography matrix H with 3 rows and 3 columns by solving the set of 8 simultaneous linear equations given by the four sets of coordinate values (x_i, y_i) corresponding to (x_p, y_p) obtained via the marker reading process described above. Because a homography is determined only up to scale, one component may be fixed to 1, leaving 8 unknowns for the 8 equations. The homography matrix H is a matrix representing a projective transform from a plane in a 3-dimensional space to another plane. More specifically, in the present embodiment, a correspondence between the camera coordinate plane and the projector coordinate plane is determined. The information processing apparatus 10 stores the homography matrix obtained in the above-described manner for use in projecting a virtual image.
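A minimal sketch of this computation follows, assuming the four marker correspondences have already been collected; the coordinate values and the numpy-based solver are illustrative assumptions rather than part of the embodiment.

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 homography H with dst ~ H * src from four point
    correspondences. Fixing h_33 = 1 leaves 8 unknowns, so the four
    correspondences give exactly 8 simultaneous linear equations."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # u = (h11 x + h12 y + h13) / (h31 x + h32 y + 1)
        A.append([x, y, 1, 0, 0, 0, -x * u, -y * u]); b.append(u)
        # v = (h21 x + h22 y + h23) / (h31 x + h32 y + 1)
        A.append([0, 0, 0, x, y, 1, -x * v, -y * v]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Hypothetical readings: marker positions (x_p, y_p) in the projector
# coordinate system and the coordinates (x_i, y_i) read by a camera.
proj_pts = [(100, 100), (500, 100), (500, 400), (100, 400)]
cam_pts = [(120, 95), (480, 110), (470, 390), (115, 380)]
H = estimate_homography(proj_pts, cam_pts)  # maps projector -> camera
```

With H oriented this way (projector plane to camera plane), its inverse maps recognized finger positions back into the projector display plane, as used later in equations (2) and (3).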
[Configuration of Information Processing Apparatus]
Next, the configuration of the information processing apparatus 10 is described below. The information processing apparatus 10 includes a communication I/F unit 11, a display unit 12, an input unit 13, a storage unit 14, and a control unit 15.
The communication I/F unit 11 is an interface configured to control communication with another apparatus. The communication I/F unit 11 receives various kinds of information via the network. For example, the communication I/F unit 11 receives an image of a document and/or an operation object from the cameras 1 and 2. An example of the communication I/F unit 11 is a network interface card such as a LAN card.
The display unit 12 is a display device configured to display various kinds of information. Examples of display devices usable as the display unit 12 include a liquid crystal display (LCD), a cathode ray tube (CRT), and the like. The display unit 12 displays various kinds of information. For example, the display unit 12 displays various kinds of information stored in the storage unit 14.
The input unit 13 is an input device for use in inputting various kinds of information. Examples of input devices usable as the input unit 13 include a mouse, a keyboard, and a touch sensor. The input unit 13 outputs information input by a user of the information processing apparatus 10 to the control unit 15. For example, when the input unit 13 receives information from which other pieces of information, such as work plane coordinate information 141, finger coordinate information 142, and display information 143 described later, are to be generated, the input unit 13 outputs the received information to the control unit 15 such that the information is stored in the storage unit 14 via the control unit 15.
The storage unit 14 is a nonvolatile storage apparatus such as a hard disk, a solid state drive (SSD), an optical disk, or the like. Alternatively, the storage unit 14 may be a data-rewritable semiconductor memory such as a random access memory (RAM), a flash memory, a non-volatile static random access memory (NVSRAM), or the like.
The storage unit 14 stores an operating system (OS) and various programs executed by the control unit 15. The storage unit 14 may also store various kinds of data used or generated by the programs. For example, the storage unit 14 stores the work plane coordinate information 141, the finger coordinate information 142, and the display information 143.
The work plane coordinate information 141 is information associated with a 3-dimensional shape of the work plane. More specifically, for example, the work plane coordinate information 141 includes, for each pixel, coordinates with respect to an arbitrary reference point in 3-dimensional orthogonal coordinates in the work plane and a coordinate indicating a depth coupled thereto.
The work plane coordinate information 141 may be acquired and stored in advance. For example, the information processing apparatus 10 may acquire in advance the 3-dimensional shape of the work plane using a method called an active stereoscopic method to acquire the work plane coordinate information 141. In the active stereoscopic method, a predetermined pattern is projected by the projector 3 onto an object, and the 3-dimensional shape of the object is acquired by measuring a change in the projected pattern between the cameras 1 and 2.
The active stereoscopic method has various versions. In the present embodiment, by way of example but not limitation, a space coding method disclosed, for example, in Japanese Laid-open Patent Publication No. 60-152903 is employed. Note that methods other than the space coding method may be used. In the space coding method, luminance patterns encoding IDs of the coordinates of all pixels of the projector 3 are produced, and the patterns are projected a plurality of times. From the result, the depth [m] at each pixel of the projector 3 is calculated by triangulation.
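As a rough illustration of how such per-pixel coordinate IDs can be encoded (the actual method is defined in the cited publication; the Gray-code scheme below is a common variant used here only as an assumption):

```python
import numpy as np

def gray_code_patterns(width, height):
    """Generate binary stripe patterns that assign each projector column
    a Gray-code ID. Projecting ceil(log2(width)) patterns (plus their
    inverses, in practice) lets each camera pixel recover the projector
    column that illuminated it; rows are handled the same way."""
    n_bits = int(np.ceil(np.log2(width)))
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)              # binary-reflected Gray code
    patterns = []
    for bit in range(n_bits - 1, -1, -1):  # most significant bit first
        stripe = ((gray >> bit) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(stripe, (height, 1)))
    return patterns

def decode_column(bits):
    """Recover the column index from observed Gray-code bits (MSB first)."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    mask = value >> 1                      # Gray -> binary conversion
    while mask:
        value ^= mask
        mask >>= 1
    return value

patterns = gray_code_patterns(1024, 768)   # e.g. an XGA-resolution projector
```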
The finger coordinate information 142 is information associated with the 3-dimensional coordinate positions, measured by the measurement unit 152, of the fingers given as the operation objects. By way of example, the finger coordinate information 142 stores, for each detected fingertip, a fingertip ID and the corresponding fingertip coordinates.
For example, fingertip coordinates of each fingertip are calculated from images taken by the two cameras 1 and 2 in a state in which one hand of a user is opened, and the resultant fingertip coordinates are stored together with fingertip IDs as the finger coordinate information 142. The fingertip IDs may be given, for example, by assigning serial numbers in ascending order of horizontal coordinate value. A reference point for the coordinates of the fingertip pixels may be taken, for example, at the upper left corner of the image.
Furthermore, the finger coordinate information 142 also stores, by way of example, the depth estimated for each fingertip.
The display information 143 is information associated with an image which is displayed by the projector 3 to indicate a point on which a selection operation with a finger is performed. The display information 143 is referred to when the display unit 153 displays the image indicating the point on which the selection operation with the finger is performed.
The control unit 15 is a device that controls the information processing apparatus 10. The control unit 15 may be realized by an electronic circuit such as a central processing unit (CPU) or a micro processing unit (MPU), or by an integrated circuit such as an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA). The control unit 15 includes an internal memory for storing a program defining various processing procedures and associated control data, and executes various processes using the program and the data. In the control unit 15, various programs operate such that the control unit 15 functions as various processing units. The control unit 15 includes an acquisition unit 151, a measurement unit 152, and a display unit 153.
The acquisition unit 151 acquires images taken by the cameras 1 and 2. For example, the acquisition unit 151 acquires images from the two cameras 1 and 2 a predetermined number of times (for example, 60 times) every second.
The acquisition unit 151 then performs a finger position detection on each of the acquired images. The finger position may be detected, for example, by estimating the finger position only using the image via image processing based on a method disclosed, for example, in Japanese Laid-open Patent Publication No. 2003-346162.
The information processing apparatus 10 may store in advance learning data associated with hand shapes, and may estimate the finger shape by calculating the similarity of the current image relative to the learning data. A specific example of the method of estimating the finger shape by calculating the similarity of the current image relative to the learning data may be found, for example, in Yamashita et al., "Hand shape recognition using 3-dimensional active appearance model", Symposium on image recognition and understanding, MIRU 2012, IS 3-70, 2012-08. In the following description, it is assumed that the finger position is estimated only from the image via image processing. In this method, a flesh-colored part is extracted from an input image to extract hand areas. Thereafter, the number of hands is recognized, and fingertip coordinates are estimated from a contour of each hand area.
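A minimal sketch of such flesh-color extraction and contour-based fingertip estimation follows; the HSV thresholds, the area cutoff, and the one-tip-per-hand simplification are assumptions for illustration and not the method of the cited publications.

```python
import cv2
import numpy as np

def detect_fingertips(image_bgr):
    """Extract flesh-colored regions, treat each large contour as a hand
    area, and take the topmost convex-hull point of each as a fingertip
    candidate (a real implementation would use contour curvature to
    find all fingers of the hand)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Flesh-color range is an assumption; it must be tuned per lighting.
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    tips = []
    for c in contours:
        if cv2.contourArea(c) < 1000:      # ignore small blobs
            continue
        pts = cv2.convexHull(c).reshape(-1, 2)
        tips.append(tuple(pts[pts[:, 1].argmin()]))  # topmost hull point
    # Assign fingertip IDs in ascending order of horizontal coordinate.
    return sorted(tips, key=lambda p: p[0])
```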
Next, the acquisition unit 151 determines whether there is a finger. More specifically, the acquisition unit 151 checks whether there is output data associated with a finger position detection. In a case where there is no output data, a virtual image of a previous frame is displayed at the same position and the process for the current frame is ended.
The measurement unit 152 measures a 3-dimensional coordinate position of an operation object included in the captured images acquired by the acquisition unit 151. For example, the measurement unit 152 calculates 3-dimensional coordinate positions of fingers given as the operation objects. In the present example, the coordinates of the fingers are calculated using a stereoscopic camera as described below. The measurement unit 152 determines the depth Z in a depth direction in a 3-dimensional space by triangulation according to equation (1) described below, in which b denotes the length (base-line length) of a segment between the two cameras, f denotes the focal length of the cameras, and (u, v) and (u′, v′) denote 2-dimensional coordinates of two corresponding points in the left and right images.

Z=bf/(u−u′)  (1)

An example of the method of calculating the depth Z in the depth direction in the 3-dimensional space is disclosed, for example, in "Digital Image Processing", edited by CG-ARTS Society, p. 259.
The measurement unit 152 then estimates the depth of a tip of each finger using equation (1). The measurement unit 152 assigns serial numbers to the fingertips in the order from smallest to greatest in horizontal coordinate value for each of the images taken by the left and right cameras 1 and 2. The measurement unit 152 regards fingertips having the same number as corresponding points and substitutes the values of the corresponding points into the above-described equation thereby obtaining Z. The measurement unit 152 stores the estimated depth of each finger in the storage unit 14. Internal parameters of the cameras used in calculating f may be estimated using, for example, a calibration method described in Zhengyou Zhang, “A flexible new technique for camera calibration”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), pp. 1330-1334, 2000.
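A sketch of this triangulation and fingertip matching follows, under the parallel rectified camera arrangement described earlier; the baseline, focal length, and fingertip coordinates are hypothetical values.

```python
def stereo_depth(u_left, u_right, baseline_m, focal_px):
    """Depth by triangulation for parallel rectified cameras:
    Z = b * f / (u - u'), where (u - u') is the horizontal disparity."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("corresponding point must have positive disparity")
    return baseline_m * focal_px / disparity

# Fingertips are matched by assigning serial numbers in ascending order
# of horizontal coordinate in each image (all values below are hypothetical).
left_tips = sorted([(310, 200), (350, 190)])
right_tips = sorted([(210, 200), (251, 190)])
depths = [stereo_depth(l[0], r[0], baseline_m=0.1, focal_px=800.0)
          for l, r in zip(left_tips, right_tips)]   # ~[0.80 m, 0.81 m]
```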
Thereafter, the measurement unit 152 performs a pressing-down judgment. In the present embodiment, by way of example, the judgment is made by detecting contact between a finger and the work plane. Before execution begins, the depth of the work plane is measured in advance using the active stereoscopic method described above. When the difference between the depth of a finger and the depth of the document plane falls within a threshold range, the measurement unit 152 determines that pressing down is performed. In a case where the depth falls within the threshold range for a plurality of fingers, the measurement unit 152 determines that pressing down is performed with the plurality of fingers. In a case where the depth of no finger falls within the threshold range, a virtual image of a previous frame is displayed at the same position and the process on the current frame is ended.
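A minimal sketch of the pressing-down judgment; the 5 mm threshold (borrowed from the example given later in the text) and the depth values are assumptions.

```python
def is_pressed_down(finger_depth_m, plane_depth_m, threshold_m=0.005):
    """Judge pressing down: a fingertip is regarded as touching the work
    plane when the difference between its depth and the plane depth
    measured in advance falls within the threshold."""
    return abs(plane_depth_m - finger_depth_m) <= threshold_m

# Pressing down with multiple fingers is judged finger by finger.
finger_depths = [0.798, 0.760]              # hypothetical values [m]
pressed = [is_pressed_down(z, 0.800) for z in finger_depths]  # [True, False]
```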
The display unit 153 controls the projector 3 such that a selection position indicator image indicating a point on which a selection operation is performed by a finger is displayed on an operation target object on which the virtual image is projected according to the 3-dimensional coordinate position of the finger measured by the measurement unit 152.
The display unit 153 displays the selection position indicator image indicating the selection position on the work plane even when the finger is located in the air apart from the work plane. In the examples described below, the selection position indicator image is displayed as a circle whose center point indicates the selection position.
On the other hand, in the conventional technique, no image indicating the selection position is displayed while the finger is located in the air, and thus it is difficult for the user to recognize which position on the operation target object will be selected before the finger touches it.
Concerning the display position, the display unit 153 determines coordinates in the projector coordinate plane on which the selection position indicator image is projected from the predetermined homography matrix indicating the relationship between the camera recognition coordinate system and the projector display coordinate system, according to equations (2) and (3) described below. Let (x_src, y_src) denote the center coordinates of the display position in the camera recognition coordinate system, and (x_dst, y_dst) denote the center coordinates of the display position in the projector display coordinate system. Note that h_11 to h_33 are components of the inverse matrix H^-1 of the homography matrix obtained via the calibration process described above.

x_dst=(h_11 x_src+h_12 y_src+h_13)/(h_31 x_src+h_32 y_src+h_33)  (2)
y_dst=(h_21 x_src+h_22 y_src+h_23)/(h_31 x_src+h_32 y_src+h_33)  (3)
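Equations (2) and (3) can be sketched in code as follows, reusing a homography H of the kind estimated in the calibration sketch above; the sample matrix is a made-up placeholder.

```python
import numpy as np

def to_projector_plane(x_src, y_src, H_inv):
    """Map the display center from the camera recognition coordinate
    system into the projector display coordinate system with the inverse
    homography, per equations (2) and (3)."""
    x = H_inv[0, 0] * x_src + H_inv[0, 1] * y_src + H_inv[0, 2]
    y = H_inv[1, 0] * x_src + H_inv[1, 1] * y_src + H_inv[1, 2]
    w = H_inv[2, 0] * x_src + H_inv[2, 1] * y_src + H_inv[2, 2]
    return x / w, y / w

# H is a placeholder for the homography obtained via the calibration.
H = np.array([[1.02, 0.01, -15.0],
              [0.00, 0.98,   8.0],
              [0.00, 0.00,   1.0]])
x_dst, y_dst = to_projector_plane(320.0, 240.0, np.linalg.inv(H))
```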
A method of determining the size of the circle depending on the depth is described below. When the depth of the work plane is denoted by L and the depth of the fingertip is denoted by Z, the display unit 153 determines the radius r of the circle according to the equation described below.
r=α(L−Z)+b
where α and b are allowed to take arbitrary values. Note that when the fingertip is in contact with the work plane, r=b.
Note that the manner of displaying the selection position indicator image depending on the depth of the fingertip is not limited to the above-described manner using the circle. For example, brightness may be changed depending on the depth. More specifically, for example, the brightness is reduced as the fingertip goes away from the work plane while the brightness is increased as the fingertip comes closer to the work plane. Still alternatively, the color of the selection position indicator image may be changed depending on the depth of the fingertip. For example, the color may be changed toward red as the fingertip goes away from the work plane, while the color may be changed toward blue as the fingertip comes closer to the work plane.
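The depth-dependent cues discussed above might be computed as in the following sketch; the coefficients are arbitrary placeholder values, as the text itself allows.

```python
def indicator_radius(plane_depth_m, finger_depth_m, alpha=200.0, b=5.0):
    """Radius of the indicator circle: r = alpha * (L - Z) + b, so the
    circle shrinks to r = b pixels when the fingertip touches the plane."""
    return alpha * (plane_depth_m - finger_depth_m) + b

def indicator_brightness(plane_depth_m, finger_depth_m, full=255.0, k=2000.0):
    """Alternative cue: dim the indicator as the fingertip moves away
    from the work plane (a hypothetical linear falloff)."""
    return max(0.0, full - k * (plane_depth_m - finger_depth_m))

r = indicator_radius(0.800, 0.780)   # fingertip 20 mm above plane -> r = 9
```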
To ensure that the selection position is displayed when the fingertip is located in the air apart from the work plane, the display position may be offset such that the display position is not hidden by the fingertip. However, when the fingertip comes close to the work plane, if there is still an offset, a wrong position is selected. To handle this situation, the display unit 153 changes the offset depending on the depth of the fingertip such that the offset is reduced as the fingertip approaches the work plane.
For example, when the offset is given in a Y direction, and when the depth of the work plane is denoted by L and the depth of the fingertip is denoted by Z, the amount of the offset Yos is given by the following equation.
Yos=α(L−Z)+b
where α and b are allowed to take arbitrary values. In a case where b=0, the offset is equal to zero when the fingertip is in contact with the work plane.
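A corresponding sketch of the offset computation, again with placeholder coefficients:

```python
def indicator_offset_y(plane_depth_m, finger_depth_m, alpha=300.0, b=0.0):
    """Vertical display offset Yos = alpha * (L - Z) + b. With b = 0 the
    offset vanishes on contact, so the indicated point and the touched
    point coincide exactly when the fingertip reaches the work plane."""
    return alpha * (plane_depth_m - finger_depth_m) + b

y_offset = indicator_offset_y(0.800, 0.780)  # 20 mm above -> 6 px offset
```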
[Process Performed by Information Processing Apparatus]
Next, a flow of the process performed by the information processing apparatus 10 is described below.
First, the acquisition unit 151 acquires images taken by the cameras 1 and 2 (step S101).
The acquisition unit 151 then extracts only the finger area from the captured images (step S102). For example, the acquisition unit 151 detects a flesh-colored area and extracts the finger area based on color information of each pixel in the images according to a condition associated with the color extraction.
The acquisition unit 151 then determines whether there is output data associated with the finger position detection (step S103). In a case where the determination performed by the acquisition unit 151 indicates that there is no output data associated with the finger (i.e., when the answer to step S103 is negative), the processing flow jumps to step S106. In this case, the display unit 153 performs a display update process as a process for a current frame such that a virtual image of a previous frame is displayed at the same position (step S106).
On the other hand, in a case where the determination performed by the acquisition unit 151 is that there is output data associated with the finger (i.e., when the answer to step S103 is affirmative), the measurement unit 152 calculates the 3-dimensional coordinates of the finger (step S104). The display unit 153 then determines a position at which to display the selection position indicator image superimposed on an operation target object so as to indicate the position on which the selection operation with the finger is performed (step S105). Concerning the display position, for example, the display unit 153 determines the coordinates in the projector coordinate plane, on which the selection position indicator image is projected, according to equations (2) and (3) described above on the basis of the predetermined homography matrix indicating the relationship between the camera recognition coordinate system and the projector display coordinate system.
After the display unit 153 determines the position at which the selection position indicator image is to be displayed so as to be superimposed on the operation target object, the display unit 153 performs the display update process to display the selection position indicator image at the determined position (step S106). The measurement unit 152 then performs a pressing-down judgment (step S107). For example, the measurement unit 152 performs the press-down determination by detecting a contact of a finger with the work plane.
In a case where the determination by the measurement unit 152 is that pressing down is not detected (i.e., when the answer to step S107 is negative), the processing flow returns to step S101. On the other hand, in a case where the determination by the measurement unit 152 is that pressing down is detected (i.e., when the answer to step S107 is affirmative), a further determination is performed as to whether the finger has been kept at rest for a predetermined period (step S108). In a case where the determination by the measurement unit 152 is that the finger has not been kept at rest for the predetermined period (i.e., when the answer to step S108 is negative), the processing flow returns to step S101. On the other hand, in a case where the determination by the measurement unit 152 is that the finger has been kept at rest for the predetermined period (i.e., when the answer to step S108 is affirmative), the projector 3 displays the selection position indicator image such that the selection position indicator image moves to the exact position of the fingertip (step S109). When the selection position indicator image is moved, a beep sound or the like may be generated. Thereafter, the measurement unit 152 detects a meaning of the operation from a gesture given by the finger (step S110). Thereafter, the process is ended.
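The per-frame flow of steps S101 through S110 can be summarized as in the sketch below; the `system` object and all of its method names are hypothetical glue standing in for the units described above, not an interface defined by the embodiment.

```python
def process_frame(system):
    """One iteration of the flow in steps S101-S110."""
    left, right = system.acquire_images()               # S101
    fingers = system.extract_finger_area(left, right)   # S102
    if not fingers:                                     # S103: no output data
        system.redisplay_previous_frame()               # S106 (previous frame)
        return
    coords = system.measure_3d_coordinates(fingers)     # S104
    position = system.project_to_display_plane(coords)  # S105
    system.update_display(position)                     # S106
    if not system.pressed_down(coords):                 # S107
        return                                          # back to S101
    if not system.at_rest_for(coords, seconds=1.0):     # S108
        return                                          # back to S101
    system.snap_indicator_to_fingertip(position)        # S109
    system.interpret_gesture(coords)                    # S110
```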
As described above, the information operation display system 100 includes the cameras 1 and 2, the projector 3, and the information processing apparatus 10. The information processing apparatus 10 acquires images taken by the cameras 1 and 2 and measures the 3-dimensional coordinate position of the finger included in the acquired images. According to the measured 3-dimensional coordinate position of the finger, the information processing apparatus 10 controls the projector 3 to display, on the operation target object, the selection position indicator image indicating the point on which the selection operation with the finger is performed. Thus, it becomes possible to reduce input errors that go against the intention of a user, which allows a reduction in operation errors and an improvement in operability.
Furthermore, the information processing apparatus 10 controls the projector 3 to change the contour shape of the selection position indicator image displayed on the operation target object depending on the distance between the finger and the operation target object. Thus, a user is allowed to precisely specify an operation position by touching the work plane such that the center point comes to the intended position.
The information processing apparatus 10 controls the projector 3 to change the brightness of the selection position indicator image displayed on the operation target object depending on the distance between the finger and the operation target object. This makes it possible for an operator to easily recognize the selection position indicator image, which allows the operator to precisely specify the operation position.
The information processing apparatus 10 controls the projector 3 to change the color of the selection position indicator image displayed on the operation target object depending on the distance between the finger and the operation target object. This makes it possible for an operator to easily recognize the selection position indicator image, which allows the operator to precisely specify the operation position.
The information processing apparatus 10 displays the selection position indicator image at a position whose distance from the measured 3-dimensional coordinate position increases as the distance between the finger and the operation target object increases. Conversely, the information processing apparatus 10 controls the projector 3 such that the selection position indicator image is displayed at a position whose distance from the measured 3-dimensional coordinate position decreases as the distance between the finger and the operation target object decreases. This reduces the possibility that light emitted from the projector 3 is blocked by a fingertip so that the selection position indicator image is not displayed on the work plane. Thus, the whole circle and the center point thereof are displayed at all times, which allows a user to easily recognize the selection position.
In the first embodiment described above, displaying the selection position indicator image indicating the selection position on the work plane is started when the finger is located in the air apart from the work plane to make it possible for a user to precisely specify the operation position. However, embodiments are not limited to that described above. For example, the information processing apparatus may detect a resting state of a finger and may detect a command to start or end an operation on the operation target object from the resting state. During a period in which detecting of the resting state is in progress, the information processing apparatus may present information to instruct to keep the resting state of the finger.
In a conventional technique, a copy area is specified, for example, as follows. That is, when the difference between the depth of a fingertip and the depth of the work plane becomes smaller than a threshold value, it is determined that the start position of the copy area is specified. On the other hand, when the difference between the depth of the fingertip and the depth of the work plane becomes greater than a threshold value, it is determined that the end position of the copy area is specified. However, in practice, when the depth detection accuracy is not good enough, the threshold value is set to be large. For example, the threshold value may be set to 5 mm. In such a case, in addition to the depth, changes in X and Y coordinates of the fingertip are monitored, and a contact state or a pressing-down operation is detected based on a combination of detecting a resting state of the fingertip and determining the depth with respect to the threshold value.
However, it may be difficult for a user to know whether the resting state of the fingertip has been kept for a sufficiently long period. Thus, there is a possibility that the period during which the user keeps the fingertip at rest is not long enough for the apparatus to determine that pressing down has occurred, and the user may repeat the same operation many times until the apparatus determines that pressing down is detected. To handle the above situation, the apparatus may display information on the work plane to inform the user of the remaining time during which the fingertip is to be kept at rest until the apparatus determines that pressing down is detected, or to inform the user of a result of the determination, thereby prompting the user to keep the fingertip at rest for the particular period.
In view of the above, the second embodiment provides an information operation display system 100A, which is described below.
First, the information processing apparatus 10A according to the second embodiment includes a detection unit 154 in addition to the acquisition unit 151, the measurement unit 152, and the display unit 153 described in the first embodiment. The detection unit 154 detects a resting state of the finger using the 3-dimensional coordinate position measured by the measurement unit 152, and detects, from the resting state, a command to start or end an operation on the operation target object.
The display unit 153 controls the projector 3 to present information instructing the user to keep the finger at rest during the period in which detection of the resting state by the detection unit 154 is in progress. More specifically, to present this instruction, the display unit 153 controls the projector 3 to present information indicating the progress of the detection of the resting state until the detection is completed.
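One possible way to track the rest period and report its progress for display is sketched below; the class, its interface, the 1-second requirement, and the movement tolerance are all assumptions.

```python
import time

class RestProgressNotifier:
    """Report what fraction of the required rest period has elapsed, so
    the projector can render it (e.g. as an expanding shaded area or a
    remaining-time indication)."""
    def __init__(self, required_s=1.0, move_tol_px=3.0):
        self.required_s = required_s
        self.move_tol_px = move_tol_px
        self.anchor = None     # fingertip position being watched
        self.since = None      # moment the fingertip last came to rest

    def update(self, x, y):
        now = time.monotonic()
        if (self.anchor is None or
                abs(x - self.anchor[0]) > self.move_tol_px or
                abs(y - self.anchor[1]) > self.move_tol_px):
            self.anchor, self.since = (x, y), now   # movement: restart timer
        progress = min(1.0, (now - self.since) / self.required_s)
        return progress        # 1.0 means the resting state is detected
```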
Next, a specific example of a manner of presenting such information is described below, taking as an example an operation of selecting an area of a document and copying it.
When the end position is reached, a progress of the detection is displayed. For example, a shaded area is displayed in a superimposed manner such that the shaded area gradually expands until the selection area is completely shaded.
Next, a flow of a process performed by the information processing apparatus 10A is described below.
First, when the information processing apparatus 10A detects a pressing-down operation with a finger (step S201), the information processing apparatus 10A determines whether the finger has been kept at rest for a predetermined period (step S202). In a case where the determination by the information processing apparatus 10A is that the resting state has not been kept for the predetermined period (i.e., when the answer to step S202 is negative), the process is ended. On the other hand, in a case where the determination by the information processing apparatus 10A is that the resting state has been kept for the predetermined period (i.e., when the answer to step S202 is affirmative), an image is displayed to indicate that specifying of a start point or pressing down is detected (step S203).
When the information processing apparatus 10A then detects movement of the fingertip (step S204), the information processing apparatus 10A determines whether a resting state of the fingertip is detected (step S205). In a case where the determination by the information processing apparatus 10A is that a resting state of the fingertip is not detected (i.e., when the answer to step S205 is negative), the process is ended. On the other hand, in a case where the determination by the information processing apparatus 10A is that a resting state of the fingertip is detected (i.e., when the answer to step S205 is affirmative), the shading is updated to expand the shaded area (step S206). For example, the information processing apparatus 10A controls the projector 3 to display a shaded area in a superimposed manner such that the shaded area is gradually expanded over a few seconds until the selection area is completely shaded.
Thereafter, the information processing apparatus 10A determines whether the shading reaches the end point (step S207). In a case where the determination by the information processing apparatus 10A is that the shading has not yet reached the end point (i.e., when the answer to step S207 is negative), the processing flow returns to step S205. In a case where the determination by the information processing apparatus 10A is that the shading has reached the end point (i.e., when the answer to step S207 is affirmative), the information processing apparatus 10A controls the projector 3 to display a copied image in a superimposed manner (step S208). The information processing apparatus 10A then displays an animation of moving the copied image (step S209). For example, the information processing apparatus 10A controls the projector 3 to display the copied image so as to be superimposed on the source area and display an animation of moving the copied image to a save area.
As described above, in the second embodiment, the information operation display system 100A includes the cameras 1 and 2 and the information processing apparatus 10A. The information processing apparatus 10A acquires images taken by the cameras 1 and 2, and measures the 3-dimensional coordinate position of the operation object included in the acquired images. The information processing apparatus 10A detects a resting state of the finger using the measured 3-dimensional coordinate position of the operation object, and furthermore the information processing apparatus 10A detects a command to start or end an operation on the operation target object from the resting state. During the period in which detecting of the resting state is in progress, the information processing apparatus 10A controls the projector 3 to present information to instruct to keep the resting state of the finger. This makes it possible to properly prompt a user to keep the resting state of the finger for a particular period.
In the second embodiment, to present information to instruct to keep the resting state of the finger, the display unit 153 controls the projector 3 to present information indicating a progress of the detecting of the resting state until the detection is completed. This makes it possible to properly prompt a user to keep the resting state of the fingertip by presenting information indicating the progress of the process of detecting the resting state until the detection is completed.
[System Configuration]
Note that the constituent elements of each apparatus illustrated above are conceptual elements that provide particular functions, and they may not be physically configured as illustrated in the figures. That is, the manner of distributing or integrating elements or apparatuses is not limited to that described above with reference to the figures, and all or part of them may be functionally or physically separated or combined in arbitrary units depending on loads or usage conditions. For example, the acquisition unit 151, the measurement unit 152, and the display unit 153 may be combined or divided in arbitrary units.
[Program]
One or more processes according to the embodiments described above may be realized by executing a program prepared in advance on a computer system such as a personal computer, a workstation, or the like. Thus, an example of a computer system capable of executing a program to realize all or part of the functions according to the embodiments described above is described below.
The computer 300 includes a CPU 310, a ROM 320, and an HDD 330.
In the ROM 320, a display program 320a is stored in advance to realize functions similar to those realized by the respective processing units according to the embodiments described above. For example, the display program 320a stored in the ROM 320 may realize a function similar to that realized by the control unit 15 according to the embodiments described above. Note that the display program 320a may be divided into a plurality of parts as appropriate.
The HDD 330 stores various kinds of data. More specifically, for example, the HDD 330 stores an operating system (OS) and various kinds of data.
The CPU 310 reads out the display program 320a from the ROM 320 and executes the display program 320a to perform operations similar to those performed by the processing units according to the embodiments described above. More specifically, for example, the display program 320a is executed to perform an operation similar to that performed by the control unit 15 according to the embodiments.
Note that the display program 320a described above may not be stored in the ROM 320 at the beginning. The display program 320a may be stored in the HDD 330.
The program may be stored in a portable physical medium inserted in the computer 300 such as a flexible disk (FD), a compact disk read only memory (CD-ROM), a digital versatile disk (DVD), a magneto-optical disk, an IC card, or the like, and the computer 300 may read out the program from the portable physical medium and execute the program.
Alternatively, the program may be stored in another computer (or a server) connected to the computer 300 via a public communication line, the Internet, a LAN, a WAN, or the like, and the computer 300 may read out the program therefrom and execute the program.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.