1. Field of the Invention
The present invention relates to an information input device and an information input method, and more particularly, to an information input device and an information input method that utilize motions of an operator's hands to perform input operations.
2. Description of the Related Art
Information input devices are known that receive input instructions to a computer through motions of an operator's hand and perform input operations based on those instructions. For example, Japanese Patent Laid-Open No. 2004-078977 discloses a device that includes a CCD camera, a computer that recognizes the shape and the like of an object in an image captured by the CCD camera, and a display for displaying the object recognized by the computer. The device performs a selecting operation of a cursor on the display according to motions of a user's hand.
Japanese Patent Laid-Open No. 2004-258714 discloses a device adapted to perform, for example, a drag operation through motions of an operator's hand on a virtual plane.
In the devices disclosed in Japanese Patent Laid-Open No. 2004-078977 and Japanese Patent Laid-Open No. 2004-258714, the range of the target screen operated by motions of the operator's hand is large. When the range of the target screen to be operated is large, the motions of the operator's hand are not correctly recognized, and therefore these conventional devices may be unable to appropriately move an object to be selected, such as a cursor, according to the motions of the hand.
In view of the above, it is an object of the present invention to provide an information input device and an information input method whereby motions of an operator's hand can be correctly recognized by narrowing the range of an operation target screen, and an operation on an object based on the motions of the hand can be appropriately performed.
To achieve the above objects, the information input device includes: a display part with a display area; an area setting part for setting input instruction areas for an operator to give input instructions; an obtainment part for obtaining a situation of the operator giving the input instructions; and a control part for distinctively arranging a selection area and a decision area in the input instruction areas in response to motions of both hands of the operator determined based on information on the obtained situation, the selection area being related to a partial area of the entire display area of the display part and being for receiving a selecting operation by the operator in the partial area, and the decision area being for receiving a deciding operation by the operator.
Also, to achieve the objects as described above, the information input method includes: obtaining a situation where an operator gives input instructions; and distinctively arranging a selection area and a decision area in response to motions of both hands of the operator determined based on information on the obtained situation, the selection area being related to a partial area of an entire display area of a display part and being for receiving a selecting operation by the operator in the partial area, and the decision area being for receiving a deciding operation by the operator.
According to the present invention, by narrowing the range of an operation target screen, motions of an operator's hand can be correctly recognized, and an operation on an object based on the motions of the hand can be appropriately performed.
Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
In the following, an information input device according to an embodiment of the present invention is described. The information input device according to the embodiment is a device that allows an operator to perform input operations through motions of the operator's hands.
The CPU 11 is connected to the respective components through a bus, performs transfer of control signals and data, executes various programs for realizing the overall operation of the information input device 1, and performs processes such as arithmetic processing.
The ROM 12 stores the programs and data necessary for the overall operation of the information input device 1. These programs are stored on a storage medium such as a DVD-ROM and read into the RAM 13, where the CPU 11 starts executing them; the information input device 1 of the present embodiment is thereby realized.
The RAM 13 temporarily retains data or a program.
The camera 14 captures the situation in which an operator gives input instructions, and the captured image is transmitted to the CPU 11, where image recognition is performed. As will be described later, in the information input device 1 of this embodiment, the camera 14 captures a two-dimensional or three-dimensional image for recognizing the input instructions given by the operator. Any camera system capable of such imaging can be employed; for example, the camera 14 may be a camera with a CCD (Charge Coupled Device) sensor, a camera with a CMOS (Complementary Metal Oxide Semiconductor) sensor, or an infrared camera.
The display device 15 can be a flat panel display such as a liquid crystal display or an EL (Electro-Luminescence) display.
The input device 16 includes, for example, a keyboard, mouse, operation buttons, touch panel, input pen, sensor and the like.
Next, the outline of the input instructions that are realized by the information input device 1 and given by an operator will be described.
First, a mode of the input instructions by the operator will be described with reference to
As shown in
The information input device 1 is configured to recognize the configurations of the fingers of both hands 502 and 503 of the operator from an image captured by the camera 14 and, on the basis of the recognition result, distinctively arrange the selection area R1 and the decision area R2 as virtual input instruction areas.
A selecting operation is to give a selecting instruction on an object (such as an icon, pointer or cursor) displayed on the display device 15. A deciding operation is to give a deciding instruction such as clicking. The selecting operation and the deciding operation will be described later in detail.
For example, in the example of
In the following description of the embodiment, the hand shape with an index finger raised is referred to as the “pattern of the hand” for the selection area R1. In the information input device 1 of this embodiment, when the operator 500 forms this pattern with one hand, the selection area R1 is arranged on the side closer to that hand, and the decision area R2 is arranged on the side closer to the other hand. That is, every time the pattern of the hand for the selection area R1 is recognized by the CPU 11, the selection area R1 and the decision area R2 are interchanged and arranged accordingly, and a selecting operation by the operator 500 is performed with, for example, the index finger raised.
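The interchange logic described above can be sketched as follows. This is a minimal illustration only: it assumes a recognizer that labels each hand's shape with a string such as "index_finger_raised", and the function and label names are hypothetical, not part of the embodiment.

```python
def arrange_areas(left_pattern, right_pattern, sel_pattern="index_finger_raised"):
    """Return (selection side, decision side) for areas R1 and R2.

    R1 is placed on the side of the hand showing the pattern for the
    selection area; R2 goes to the other side. If neither hand shows
    the pattern, (None, None) stands for keeping the previous arrangement.
    """
    if right_pattern == sel_pattern:
        return "right", "left"
    if left_pattern == sel_pattern:
        return "left", "right"
    return None, None


# The areas interchange as soon as the other hand shows the pattern.
print(arrange_areas("open", "index_finger_raised"))  # R1 near the right hand
print(arrange_areas("index_finger_raised", "open"))  # R1 near the left hand
```

In this sketch the interchange follows directly from which hand last showed the pattern, mirroring the behavior that each recognition of the pattern rearranges R1 and R2.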
The two areas shown in
The locations where the selection area R1 and the decision area R2 are arranged are not limited to those described in the present example; they can be changed as long as the situation of the input instructions by the operator 500 can be recognizably imaged.
Next, motions of the hands of the operator 500 for interchanging the selection area R1 and the decision area R2 to arrange them will be described with reference to
In
Next, a selecting operation by the operator 500 in the selection area R1 will be described with reference to
In
In the example of
Also, in
On the other hand, the partial area 151 related to the selection area R1 shown in
The determination of the selecting operation in the selection area R1, shown in
The range of the area 151 related to the selection area R1 is not limited to that described in the above example; it can be changed as long as the area 151 is set to satisfy the condition: area obtained by halving the entire display area < area 151 < entire display area.
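The size condition on the area 151 can be expressed as a simple check. The following is an illustrative sketch, treating areas as pixel counts; the concrete resolutions are assumptions for the example, not values from the embodiment.

```python
def partial_area_size_ok(partial_px, full_px):
    """Check the condition: half of the entire display area < area 151 < entire area.

    Both arguments are areas in pixels. The comparisons are strict, so the
    partial area must exceed an exact half and be smaller than the whole.
    """
    return full_px / 2 < partial_px < full_px


# For a 1920x1080 (16:9) display, a 1440x1080 partial area satisfies the
# condition, while an exact half (960x1080) does not.
print(partial_area_size_ok(1440 * 1080, 1920 * 1080))  # True
print(partial_area_size_ok(960 * 1080, 1920 * 1080))   # False
```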
As described, in the information input device 1 of the present embodiment, the selection area R1 and the decision area R2 are interchanged and arranged in response to motions of the hands of the operator 500, and therefore the operator 500 can use either hand to perform a selecting operation. This improves operability: the operator 500 gives a selecting instruction with the hand closer to an object (such as an icon) to be selected, and consequently the selecting operation is performed more intuitively.
Next, a deciding operation by the operator 500 in the decision area R2 will be described with reference to
In the information input device 1, the operator 500 uses the left hand 502 to perform at least one predetermined motion in the decision area R2 shown in
As shown in
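One way such a predetermined motion in the decision area R2 could be detected is sketched below. The push-style gesture and its threshold are purely illustrative assumptions; the embodiment only requires that some predetermined motion in R2 be recognized as a deciding operation.

```python
def is_deciding_motion(depth_samples, push_threshold=0.15):
    """Interpret a quick forward (toward-camera) hand movement in the
    decision area R2 as a deciding operation such as a click.

    depth_samples: recent hand-to-camera distances in metres, oldest first.
    Returns True when the hand moved toward the camera by more than the
    threshold over the sampled interval.
    """
    if len(depth_samples) < 2:
        return False
    return (depth_samples[0] - depth_samples[-1]) > push_threshold


print(is_deciding_motion([0.80, 0.72, 0.60]))  # True: a 0.20 m push
print(is_deciding_motion([0.80, 0.79, 0.78]))  # False: too small a movement
```

Depth-based detection of this kind would presuppose a three-dimensional image such as one from the infrared camera mentioned earlier; a two-dimensional camera would need a different motion cue.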
The display part 101 is configured by the display device 15 in
The obtainment part 102 obtains a situation at the time when the operator gives the input instructions. In this embodiment, the obtainment part 102 is configured by, for example, the camera (imaging part) 14 in
The storage part 103 is configured by the ROM 12 and the RAM 13 in
The area setting part 104 and the control part 105 are implemented by the CPU 11. The area setting part 104 sets the input instruction areas for the operator 500 to give the input instructions. The input instruction areas are virtual areas in which the above-described selection area R1 and decision area R2 are arranged.
In response to motions of both hands of the operator 500, determined on the basis of the situation information obtained by the obtainment part 102, the control part 105 distinctively arranges, in the input instruction areas, the selection area R1, which is related to the partial area 151 of the entire display area of the display part 101 and receives a selecting operation by the operator 500 in the area 151, and the decision area R2, which receives a deciding operation by the operator 500. In this embodiment, as an example, the obtainment part 102 is configured as the camera 14, and therefore the control part 105 arranges the selection area R1 and the decision area R2 based on an image captured by the camera 14, i.e., based on the situation information.
The action of the information input device 1 will be described below with reference to
In
The camera 14 (obtainment part 102) images (obtains) a situation as shown in
The CPU 11 (control part 105) distinctively arranges the selection area R1 and the decision area R2 on the basis of an image (situation information) obtained by the camera 14. At this time, when the CPU 11 determines that the pattern of the hand for the selection area R1 is present in the image, the CPU 11 arranges the selection area R1 on the side closer to that hand, and arranges the decision area R2 on the side closer to the other hand.
For example, in the example of
Note that the locations of the respective areas R1 and R2 arranged in the input instruction areas are related to each other through, for example, coordinate data or the like.
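The correspondence between a position in the virtual selection area R1 and a pointer position in the related partial area 151 can be sketched as a linear coordinate mapping. All rectangle values below are hypothetical; the actual coordinate data used by the embodiment is not specified.

```python
def map_r1_to_area151(hand_x, hand_y, r1_rect, area151_rect):
    """Map a fingertip position inside the virtual selection area R1 to
    pointer coordinates inside the related partial display area 151.

    Rectangles are (x, y, width, height). The mapping is a linear
    interpolation of the normalized position within R1.
    """
    rx, ry, rw, rh = r1_rect
    ax, ay, aw, ah = area151_rect
    u = (hand_x - rx) / rw  # normalized horizontal position in R1
    v = (hand_y - ry) / rh  # normalized vertical position in R1
    return ax + u * aw, ay + v * ah


# The centre of R1 maps to the centre of a hypothetical 1440x1080 partial
# area placed at x = 480 on a 1920x1080 display.
print(map_r1_to_area151(50, 50, (0, 0, 100, 100), (480, 0, 1440, 1080)))  # (1200.0, 540.0)
```

Because the mapping targets only the partial area 151 rather than the entire display, a small hand movement in R1 corresponds to a proportionally smaller pointer movement than a full-screen mapping would give.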
The CPU 11 (control part 105) performs display based on an input instruction by the operator 500 (S4). For example, in the example of
Note that, in
As described above, according to the information input device 1 of the present embodiment, a selecting operation is performed when the operator 500 forms the pattern of the hand for the selection area R1 with either hand. Because the selecting operation is performed in the partial area of the entire display area that is related to the selection area R1, the viewing angle of the camera 14 can be set so as to image the selection area R1. This allows the viewing angle of the camera 14 of the present embodiment to be narrowed, unlike a conventional device in which the viewing angle is set so as to image the entire display area. Also, since the information input device 1 determines a selecting operation by the operator 500 on the basis of an image from the camera 14 having this narrowed viewing angle, the target area for image recognition of the motions of the hands of the operator 500 is narrowed, which increases the accuracy of the image recognition. For example, in the case where the display device 15 is a horizontally wide display (such as a display having an aspect ratio (horizontal to vertical ratio) of 16:9), the area subjected to image recognition for a gesture operation can be narrowed, and therefore the information input device 1 can increase the recognition accuracy of images obtained from the camera 14. In other words, the information input device 1 can more accurately recognize motions of the hands of the operator 500, and accordingly a selecting operation by the operator 500 can be performed more accurately.
Also, in the information input device 1 of the present embodiment, the selection area R1 is related to the partial area 151 of the entire display area, and therefore a gesture operation by the operator 500 can move, for example, the pointer 40 within the partial area 151. At this time, a recognition result based on an image obtained from the camera 14 can be reflected in the operation of the pointer 40 with a focus on the partial area 151 rather than on the entire display area. For this reason, in the information input device 1, an operation on an object based on a gesture operation by the operator 500 can be appropriately performed.
Further, pointing at a selection object and the gesture serving as a deciding operation may be performed with the left and right areas interchanged, which prevents the operator 500 from moving only one hand. This not only reduces physical fatigue but also allows a pointing operation to be performed with the hand or finger closer to the pointing target.
Further, by limiting the area related to the selection area R1 to a part of the entire display area, the selection area R1 is made less likely to enter the visual field of the operator 500. Accordingly, the camera 14 can more easily image the shapes of the hands of the operator 500, and the information input device 1 can more easily perform image recognition. Further, the viewing angle of the equipped camera 14 can be limited to the necessary minimum, and therefore a common, non-wide-angle camera, such as one used for video conferencing, can be used.
Next, variations of the information input device 1 of the present embodiment will be described.
In the foregoing, the case of using one camera 14 to image the situation where the operator gives the input instructions has been described with reference to
The locations of the selection area R1 and the decision area R2 can be freely set. For example,
The pattern of the hand for the selection area R1 can be changed. The present invention can also be adapted to use, for example, a raised thumb or a closed hand.
(Appendix 1)
An information input device including:
a display part with a display area;
an area setting part for setting input instruction areas for an operator to give input instructions;
an obtainment part for obtaining a situation of the operator to give the input instructions; and
a control part for distinctively arranging a selection area and a decision area in input instruction areas in response to motions of both hands of the operator determined based on information on the obtained situation, the selection area being related to a partial area of an entire display area of a display part and being for receiving a selecting operation by the operator in the partial area, and the decision area being for receiving a deciding operation by the operator.
(Appendix 2)
The information input device according to appendix 1, wherein the control part interchanges and arranges the selection area and the decision area in response to patterns of the fingers of the operator, the patterns being recognized from changes in the motions of both hands of the operator.
(Appendix 3)
The information input device according to appendix 1 or 2, wherein the partial area is set so as to be larger than an area obtained by evenly halving the display area along a centerline of the display area.
(Appendix 4)
The information input device according to any one of appendices 1 to 3, wherein when the obtainment part is an imaging part for imaging the situation of the operator, the imaging part is configured to set a viewing angle so as to image the selection area related to the partial area, and obtain an imaged image as the information on the situation, and
the control part is configured to determine the selecting operation by the operator in the selection area based on the image from the imaging part.
(Appendix 5)
An information input method including:
obtaining a situation where an operator gives input instructions; and
distinctively arranging a selection area and a decision area in response to motions of both hands of the operator determined based on information on the obtained situation, the selection area being related to a partial area of an entire display area of a display part and being for receiving a selecting operation by the operator in the partial area, and the decision area being for receiving a deciding operation by the operator.
(Appendix 6)
The information input method according to appendix 5, wherein in the arranging step, the selection area and the decision area are interchanged and arranged in response to patterns of the fingers of the operator, the patterns being recognized from changes in the motions of both hands of the operator.
(Appendix 7)
The information input method according to appendix 5 or 6, wherein the partial area is set so as to be larger than an area obtained by evenly halving the display area along a centerline of the display area.
(Appendix 8)
A program for causing a computer to perform the information input method according to any one of appendices 5 to 7.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.