The present application is based on Japanese Patent Applications No. 2008-251783 filed on Sep. 29, 2008 and No. 2009-020635 filed on Jan. 30, 2009, the disclosures of which are incorporated herein by reference.
1. Field of the Invention
The present invention relates to a control device, more particularly to a control device for an in-vehicle apparatus.
2. Description of Related Art
There have been proposed various types of control devices for an in-vehicle apparatus such as a car navigation apparatus. One type of such control devices is an operating device that captures an image of a hand of a user, extracts a finger image from the captured image, and superimposes the extracted finger image on a GUI input window, such as a navigation window, of the in-vehicle apparatus.
For example, Patent Document 1 discloses an operating device that uses a camera mounted to a ceiling of a vehicle body to capture an image of a hand of a user who is manipulating a switch panel located next to a seat, and causes a liquid crystal panel located in front of the user to display the captured image of the hand and the switch panel. Patent Document 2 discloses another operating device that uses a camera located on a roof of a vehicle to capture an image of a hand of a driver, specifies an outline of the hand and superimposes the image of the outline on an image of buttons. Patent Document 3 discloses another operating device that captures an image of a manipulation button and a hand above a switch matrix, detects the hand getting access to the switch matrix, and superimposes the hand image.
Patent Document 1: JP-2000-335330A (U.S. Pat. No. 6,407,733)
Patent Document 2: JP-2000-6687A
Patent Document 3: JP-2004-26046A
The conventional technique uses the information on the captured image only to superimpose a hand contour image that indicates an operation position on a window. The information on the captured image is not effectively used as input information.
More specifically, the above-described operating devices include a touch input device having a two-dimensional input surface. The touch input device is capable of continuous two-dimensional position detection, as a mouse, a track ball and a track pad are. However, when menu selection, character input and point selection on a map are the main user operations using the touch input device, and in particular when the operating devices are used for an in-vehicle electronic apparatus, the main user manipulation becomes a touch manipulation aiming at a certain item, a certain button, a desired location on a map, or the like. In such a case, the operating devices typically do not allow a manipulation of continuous movement of a finger while the finger is in contact with the input surface, because an error input can easily occur during the continuous movement. Thus, an input form of the operating device is typically a discrete one, in which a finger is spaced apart from the input surface in a case irrelevant to an input, and contacts the input surface only at a location relevant to the desired input. One reason for the use of the above-described input form is as follows. In the touch input device, a mechanism for detecting a contact on a touch manipulation surface plays both the role of a mechanism for position detection on the touch manipulation surface and the role of a mechanism for detecting an input. A touch input device does not have an input detection mechanism that is provided separately from the mechanism for position detection. Note that a mouse has a click button as such a mechanism for input detection.
In a case of a mouse, a user can easily perform a drag operation on a target item on a window through: moving a pointer to the target item such as an icon; clicking a button to switch the target item into a selected state; and moving the mouse on a manipulation plane while maintaining the selected state. When the mouse is moving, the mechanism for position detection detects the position of the mouse in real time, and thus, a movement trajectory of the target item on the window can well correspond to that of the mouse, realizing an intuitive operation. In a case of a touch input device, however, although a user can switch a target item into a selected state and specify a destination by performing a touch manipulation, when the user spaces a finger apart from the touch manipulation surface, the touch input device cannot detect the finger position and cannot monitor a drag movement trajectory. As a result, the operating device cannot display the movement of the target item in accordance with a movement trajectory of the fingertip. The operating device cannot realize an intuitive operation at the same level as a mouse can realize.
In view of the above and other points, it is an objective of the present invention to provide a control device capable of effectively using information on a captured image as input information and thereby capable of considerably extending an input form. For example, the control device may be configured to display a pointer image indicative of the present position of a fingertip and a move target image so that the pointer image and the move target image are movable together even when a finger of a user is spaced apart from a manipulation surface of a touch input device.
According to a first aspect of the present disclosure, there is provided a control device including: a touch input device that has a manipulation surface adapted to receive a touch manipulation made by a finger of a user, and detects and outputs an input location of the touch manipulation; an imaging device that has a photographing range having one-to-one coordinate relationship to the manipulation surface, and captures an image of a hand of the user getting access to the manipulation surface; a fingertip specifying section that specifies a fingertip of the hand based on data of the image of the hand; a display device that includes a display screen having one-to-one coordinate relationship to the photographing range and the manipulation surface; a pointer image display control section that causes the display device to display a pointer image on the display screen, the pointer image pointing to a place corresponding to the fingertip; a selection reception region setting section that sets a selection reception region on the display screen so that the selection reception region is located at a predetermined place on the display screen; a move target image selection section that switches a move target image prepared on the selection reception region into a selected state when the touch input device detects that the touch manipulation is performed at the input location corresponding to the move target image; and an image movement display section that (i) detects a target fingertip, which is the fingertip that makes the touch manipulation at the input location corresponding to the move target image, (ii) causes the display device to display the move target image in the selected state and the pointer image at a place corresponding to the position of the target fingertip, and (iii) causes the move target image in the selected state and the pointer image to move together on the display screen in response to movement of the target fingertip in the photographing range, in such a manner that a trajectory of movement of the selected move target image and the pointer image corresponds to a trajectory of the movement of the target fingertip.
According to the above control device, it is possible to utilize information on the position of the fingertip of the user based on the image of the hand even while the finger is spaced apart from the manipulation surface. The control device can detect the position of the fingertip and the input location of the touch manipulation independently of each other. It is therefore possible to effectively use information on a captured image as input information and thereby possible to considerably extend an input form. For example, the control device enables an input operation, such as a drag operation on an image item, in an intuitive manner based on the captured image.
According to a second aspect of the present disclosure, there is provided a control device for a user to operate an in-vehicle electronic apparatus in a vehicle by manipulating the control device. The control device includes: a manipulation input element that is located so as to be within reach of the user who is sitting in a seat of the vehicle, and that has a manipulation input region having a predetermined area; an imaging device that has a photographing range covering the manipulation input region, and that captures an image including a hand image region representative of a hand of the user getting access to the manipulation input element; a hand image region identification section that identifies the hand image region in the image; an area ratio calculation section that calculates a value of a hand image area ratio, which is the area ratio of the hand image region to the manipulation input region; and an operation input information generation section that generates and outputs operation input information based on the calculated value of the hand image area ratio and a manipulation state of the manipulation input region, the operation input information being directed to the in-vehicle electronic apparatus.
According to the second aspect, the control device including the manipulation input element and the imaging device can generate and output the operation input information directed to the in-vehicle electronic apparatus based on the calculated value of the hand image area ratio and the manipulation state of the manipulation input region. Thus, as input information, it is possible to efficiently use the information on the image captured by the imaging device in addition to the input information provided from the manipulation input element. Therefore, it is possible to largely extend input forms in utilizing the control device.
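As an illustrative sketch only (not part of the disclosed embodiments; the function names, threshold value, and binary-mask representation are assumptions), the hand image area ratio and its combination with the manipulation state might look like the following:

```python
import numpy as np

def hand_area_ratio(hand_mask):
    """Ratio of hand-image pixels to the whole manipulation input
    region; hand_mask is a binary image whose frame corresponds
    one-to-one to the manipulation input region."""
    return np.count_nonzero(hand_mask) / hand_mask.size

def generate_operation_input(hand_mask, touched):
    """Combine the hand image area ratio with the manipulation state of
    the manipulation input region to produce operation input
    information (the threshold and labels are illustrative)."""
    ratio = hand_area_ratio(hand_mask)
    if touched:
        return "touch_input"
    if ratio > 0.6:
        return "hand_over_surface"  # a large part of the hand covers the region
    return "no_input"
```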
The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description made with reference to the accompanying drawings. In the drawings:
The exemplary embodiments are described below with reference to the accompanying drawings.
The manipulation part 12 has an input manipulation surface acting as a manipulation surface, and is positioned so that the input manipulation surface faces upward. The manipulation part 12 includes a touch panel 12a providing the input manipulation surface. The touch panel 12a may be a touch-sensitive panel of a resistive type, a surface acoustic wave type, a capacitive type, or the like. The touch panel 12a includes a transparent resin plate acting as a base, or a glass plate acting as a transparent input support plate. An upper surface of the touch panel 12a receives and supports a touch manipulation performed by a user using a finger. The control device 1 sets an input coordinate system on the input manipulation surface, which has one-to-one coordinate relationship to the display screen of the monitor 15.
The imaging optical system includes a first reflecting portion 12p and a second reflecting portion 12r. The first reflecting portion 12p is, for example, a prism plate 12p, on a surface of which multiple tiny triangular prisms are arranged in parallel rows. The prism plate 12p is transparent and located just below the touch panel 12a. The prism plate 12p and the touch panel 12a are located on opposite sides of the case 12d so as to define therebetween a space 12f. The first reflecting portion 12p reflects the first reflected light RB1 in an upper oblique direction, and thereby outputs a second reflected light RB2 toward a laterally outward side of the space 12f. The second reflecting portion 12r is, for example, a flat mirror 12r located on the laterally outward side of the space 12f. The second reflecting portion 12r reflects the second reflected light RB2 in a lateral direction, and thereby outputs a third reflected light RB3 toward the camera 12b, which is located on an opposite side of the space 12f from the second reflecting portion 12r. The camera 12b is located at a focal point of the third reflected light RB3, and captures and acquires an image (i.e., a hand image) of the hand H and the finger of the user.
Since the second reflecting portion 12r and the camera 12b are located on laterally opposite sides of the space 12f, the third reflected light RB3 can be directly introduced into the camera 12b while traveling across the space 12f. Thus, the second reflecting portion 12r and the camera 12b can be placed close to lateral edges of the touch panel 12a, and a path of the light from the hand H to the camera 12b can be folded in three in the space 12f. The imaging optical system can therefore be remarkably compact as a whole, and the case 12d can be thin. In particular, since reducing the size of the touch panel 12a, or the area of the input manipulation surface 102a, enables the input part 12 to be remarkably downsized or thinned as a whole, it becomes possible to mount the input part 12 in vehicles whose center console C has a small width or vehicles that have a small attachment space in front of a gear shift lever.
The input manipulation surface 102a of the touch panel 12a corresponds to a photographing range of the camera 12b.
An image signal, which is a digital signal or an analog signal representative of an image captured by the camera 12b, is continuously inputted to the video interface 112. The video RAM 113 stores therein the image signal as image frame data at predetermined time intervals. Memory content of the video RAM 113 is updated on an as-needed basis each time the video RAM 113 reads new image frame data.
The touch panel interface 114 includes a driver circuit that may be dedicated to correspond to a type of the touch panel 12a. Based on the input of a signal from the touch panel 12a, the touch panel interface 114 detects an input location of a touch manipulation on the touch panel 12a and outputs a detection result as location input coordinate information.
Coordinate systems are set on the photographing range of the camera 12b, the input manipulation surface of the touch panel 12a and the display screen of the monitor 15 and have one-to-one correspondence relationship to each other. The photographing range corresponds to an image captured by the camera 12b. The input manipulation surface acts as a manipulation input region. The display screen corresponds to the input window image frame data and the pointer image frame data, which determine display content on the display screen.
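As a minimal sketch of this one-to-one correspondence (the proportional mapping is an assumption; the example dimensions are the illustrative 70 mm by 43 mm surface size given later in this description):

```python
def to_screen(x_in, y_in, in_size, screen_size):
    """Map a point in the input-surface/photographing-range coordinate
    system onto the display screen by a proportional mapping."""
    return (int(x_in * screen_size[0] / in_size[0]),
            int(y_in * screen_size[1] / in_size[1]))

# E.g., a 70 mm x 43 mm manipulation surface mapped onto an 800 x 480
# pixel screen: the surface center lands on the screen center.
print(to_screen(35.0, 21.5, (70.0, 43.0), (800, 480)))  # -> (400, 240)
```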
The ROM 103 stores therein a variety of software that the CPU 101 can execute. The variety of software includes touch panel control software 103a, fingertip point calculation software 103b, display control software 103c, and image synthesis software 103d.
The touch panel control software 103a is described below. By executing the touch panel control software 103a, the CPU 101 acquires a coordinate of the input location of a touch manipulation from the touch panel interface 114, and acquires the input window image frame data from the navigation ECU 51. The input window image frame data is transmitted from the navigation ECU 51 together with determination reference information used for specifying content of the manipulation input. The determination reference information may include, for example, information used for specifying a region of a soft button and information used for specifying content of an operation command to be issued when the soft button is selected by the touch manipulation. The CPU 101 specifies content of the manipulation input based on the coordinate of the input location and the acquired determination reference information, and issues and outputs a command signal to the navigation ECU 51 to command the navigation ECU 51 to perform an operation corresponding to the manipulation input. The navigation ECU 51 can function as a control command activation means or section.
The fingertip point calculation software 103b is described below. The CPU 101 executing the fingertip point calculation software 103b can function as a fingertip specification means or section that specifies a fingertip of the hand based on data of the image of the hand in the following ways. The CPU 101 uses a fingertip calculation processing memory 102a′ in the RAM 102 as a work area. The CPU 101 binarizes an image of a user's hand captured by the camera 12b, and specifies a fingertip position in the binarized image as a fingertip point. More specifically, a predetermined representation point (e.g., a geometric center) of a tip region "ta" in the binarized image is calculated and specified as an image tip position "tp" (e.g., a fingertip point "tp"). The tip region "ta" may be an end portion of the hand in an insertion direction of the hand. Based on the size or area of the tip region "ta", it is determined whether the image tip position "tp" is a true fingertip point "tp". In connection with the above process, a circuit for binarizing pixels may be integrated into an output part of the video interface in order to preliminarily perform the binarization of the image.
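A minimal sketch of the fingertip point calculation described above, assuming a grayscale camera frame, a fixed binarization threshold, and a band-shaped tip region; all names and parameter values are illustrative, not the disclosed implementation:

```python
import numpy as np

def fingertip_point(frame, threshold=128, tip_band=20):
    """Binarize the camera image, take the end portion of the hand
    region in the hand insertion direction, and return the geometric
    center of that tip region "ta" as the fingertip point "tp".
    The fingertip side is assumed here to lie at small y values."""
    binary = (frame > threshold).astype(np.uint8)
    ys, xs = np.nonzero(binary)          # pixels of the photographic subject
    if ys.size == 0:
        return None                      # no subject in the photographing range
    tip_edge = ys.min()                  # leading edge of the hand region
    in_tip = ys <= tip_edge + tip_band   # band forming the tip region "ta"
    # Representation point (geometric center) = image tip position "tp".
    return int(xs[in_tip].mean()), int(ys[in_tip].mean())
```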
The display control software 103c is described below. The CPU 101 executing the display control software 103c can function as a selection reception region setting means or section, a move target image selection means or section, and an operation button image display control section or means. The CPU 101 sets a selection reception region at a predetermined place on the display screen of the monitor 15. The CPU 101 causes the display device to display operation button images 161 to 165 on the selection reception regions.
The CPU 101 switches a movement target image prepared on the corresponding selection reception region into a selected state. In the present embodiment, the movement target image is the marking image (e.g., the icons 161i to 165i) contained in the operation button images.
The image synthesis software 103d is described below. The CPU 101 executing the image synthesis software 103d can function as a pointer image display control section or means and an image movement display section or means. The CPU 101 uses an image synthesis memory 102b in the RAM 102 as a work area. The CPU 101 performs a process of pasting a pointer image on a pointer image frame. The pointer image may be an actual finger image FI.
The car navigation apparatus 200 and the control device 1 are connected with each other via the serial communication bus 30. Manipulation input for operating and controlling the car navigation apparatus 200 can be performed by using the control device 1. Further, a variety of commands can be input to the car navigation apparatus 200 by using a speech recognition unit 230. More specifically, speech can be input to a microphone 231 connected with the speech recognition unit 230, and a signal associated with the speech is processed by a known speech recognition technique and converted into an operation signal in accordance with a result of the processing.
The location detection device 201 includes a geomagnetic sensor 202, a gyroscope 203, a distance sensor 204, and a GPS receiver 205 for detecting the present location of a vehicle based on a GPS signal from satellites. Because the respective sensors 202 to 205 have errors whose properties are different, the multiple sensors are used while complementing each other.
The navigation ECU 51 includes microcomputer hardware as a main component, the microcomputer hardware including a CPU 281, a ROM 282, a RAM 283, an I/O 284, and a bus 515 connecting the foregoing components with each other. The HDD 221 is bus-connected via an interface 229f. A graphic controller 210 can function to output an image to the monitor 15 based on drawing information for displaying a map or a navigation operation window. The graphic controller 210 is connected with the bus 515. A display video RAM 211 for the drawing process is also connected with the bus 515. The graphic controller 210 acquires the input window image frame data from the navigation ECU 51. Further, from the control device 1 via the communication interface 226 and the serial communication bus 30, the graphic controller 210 acquires the pointer image frame data, which is made based on the GUI display data 221u such that the pointer image is pasted at a predetermined region. Further, as needed, the graphic controller 210 acquires the icons 161i to 165i acting as the marking images, which are made based on the GUI display data 221u. The graphic controller 210 then performs a frame synthesis operation by alpha blending or the like on the display video RAM 211 and outputs the synthesized frame to the monitor 15.
When a navigation program 221p is activated by the CPU 281 of the navigation ECU 51, information on the present location of the vehicle is acquired from the location detection device 201, and map data 221m indicative of a map around the present location is read from the HDD 221. Further, the map and a present location mark 152 indicative of the present location are displayed on a map display region 150′ of the display screen.
The operation button images 161 to 165 are displayed in a periphery of the map display region 150′ of the display screen of the monitor 15. The periphery is, for example, a blank space located on a right side of the map display region 150′.
Operation will be explained below.
An operation flow in activating the destination setting command by using the operation button image 161 is as follows.
Hereinafter, a coupling movement mode refers to a mode where the marking image and the hand image are movable together and the marking image 161i is attached to the fingertip of the hand image FI. The CPU 101 can function as a target fingertip movement detection section that detects movement of the target fingertip based on the images captured by the camera 12b.
The pre-set destination can be canceled by using the eraser tool in the following ways. A user can use the eraser tool by operating the operation button image 165, which is also referred to as an eraser button 165.
Then, the user can point the fingertip at a desired point on the map, and perform the second touch manipulation.
Operation of the control device 1 is described below with reference to flowcharts.
At S103, an area ratio σ of the photographic subject region in the first image is calculated. At S104, it is determined whether the area ratio σ is larger than a threshold ratio σ0. When the area ratio σ is less than or equal to the threshold ratio σ0, the fingertip position specification process is ended because no photographic subject is expected to exist within the photographing range of the camera 12b.
At S105, a second image "B" is created by displacing the first image a predetermined distance in a finger extension direction, which is a direction in which a finger is extended (e.g., the Y direction).
A difference image "C" is then created from the first image "A" and the second image "B".
Since the first image "A" and the second image "B" are binarized, the non-overlapping region can be specified by calculating an image difference between the first image "A" and the second image "B". Thus, a process of specifying the pixels representative of the non-overlapping region can be a logical operation between pixels of the first image "A" and corresponding pixels of the second image "B". More specifically, the pixels of the non-overlapping region can be specified as the pixels where the exclusive-or operation between the first and second images "A" and "B" results in "1". In some cases, the non-overlapping region between the first image "A" and the second image "B" appears in a side part of the finger. Such a side part can be easily removed in the following way, for instance. When the number of consecutive "1" pixels in the X direction is smaller than a predetermined number, the consecutive "1" pixels are inverted into "0".
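The difference-image step can be sketched as follows, assuming binarized images held as numpy arrays with the finger extension direction along increasing Y; the displacement distance and the minimum run length are assumptions:

```python
import numpy as np

def difference_image(first, dy=10, min_run=5):
    """Create the second image "B" by displacing the binarized first
    image "A" by dy pixels in the finger extension direction (here +Y),
    and take the pixel-wise exclusive-or as the non-overlapping
    region "C" (1 where exactly one of the two images is 1)."""
    second = np.zeros_like(first)
    second[dy:, :] = first[:-dy, :]        # parallel displacement in Y
    diff = np.bitwise_xor(first, second)   # difference image "C"

    # Remove thin residue along a side of the finger: invert runs of
    # consecutive "1" pixels in the X direction shorter than min_run.
    for row in diff:
        run = 0
        for x in range(row.size + 1):
            if x < row.size and row[x] == 1:
                run += 1
            else:
                if 0 < run < min_run:
                    row[x - run:x] = 0
                run = 0
    return diff
```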
At S107, a contraction process is performed on the difference image "C".
After the contraction process is performed, a process of separating tip regions is performed on the image data.
At S108, it is determined whether each separated tip region "ta" is a true fingertip region, based on whether a width "W" of the tip region "ta" is in a predetermined range.
However, the present embodiment can reliably distinguish the hand from things other than the hand, because the present embodiment employs the identification method using the width "W" of the tip region "ta", which is extracted from the difference image "C" between the first image "A" and the second image "B", wherein the first image is a captured image and the second image is one made by parallel-displacing the first image in the Y direction.
When the hand is imaged, there may arise the following case: one or two fingers are extended (e.g., only the forefinger is extending, or the forefinger and the middle finger are extending), and the rest of the fingers are closed (e.g., clenched into a fist). In such a case, the width of a tip region of the closed fingers may exceed the upper limit W_th1 of the predetermined range while the width of the extended finger is in the predetermined range. In view of the above, when multiple tip regions are extractable from one thing, if at least one of the multiple tip regions has a width in the predetermined range, the tip region in the predetermined range may be determined as a true fingertip region.
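A sketch of the width test, assuming the tip-region widths have already been measured; the lower limit (here w_lo) is not named in the description above and is an assumption, as are the illustrative values:

```python
def true_fingertip_indices(tip_widths, w_lo=8.0, w_hi=25.0):
    """Keep only tip regions whose width "W" falls inside the
    predetermined range (w_lo <= W <= w_hi, with w_hi playing the role
    of the upper limit W_th1). When several tip regions are extracted
    from one thing, e.g., an extended forefinger plus a clenched fist,
    only the in-range regions count as true fingertip regions."""
    return [i for i, w in enumerate(tip_widths) if w_lo <= w <= w_hi]

# Illustrative: a 15 mm forefinger passes, an 80 mm fist region fails.
print(true_fingertip_indices([15.0, 80.0]))  # -> [0]
```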
In some cases, a user may put on the input manipulation surface 102a of the touch panel 12a a thing whose tip region is not actually a fingertip but can be wrongly detected as a true fingertip region because the width of the tip region is in the predetermined range.
A difference between a finger and a coin on an image includes the following. In a case of a finger, a finger base reaches a back end of the photographing range 102b (the back end is an end in the insertion direction of the hand and may be located closest to the rear of the vehicle among the ends of the photographing range 102b). In a case of a coin, on the other hand, the coin forms a circular region that is isolated in the photographing range 102b, and forms the background region (a region with "0" pixel values) between the back end of the circular region and the back end of the photographing range 102b. Taking into account the difference, it is possible to avoid the above-described wrong identification in the following way. A total area "S" of the photographic subject is calculated and taken into account.
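A sketch of this finger/coin discrimination, assuming the back end of the photographing range corresponds to the last row of the binarized frame and that an area threshold s_min is available; both are assumptions:

```python
import numpy as np

def is_plausible_finger(binary, s_min=2000):
    """A finger region reaches the back end of the photographing range
    (assumed here to be the last row of the frame), whereas a coin forms
    an isolated region with background between it and the back end.
    Combine that test with the total area "S" of the subject."""
    total_area = int(np.count_nonzero(binary))   # total area "S"
    reaches_back_end = bool(binary[-1, :].any())
    return total_area >= s_min and reaches_back_end
```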
A region of a finger that actually contacts the touch panel 12a may be a region around the finger pulp that is away from the finger end in the Y direction.
There may arise a case where the representation point determined by the above-described algorithm using the difference image does not correspond to a true fingertip point, depending on a positional relationship between the hand and the photographing range. More specifically, there may arise a case where a fingertip region sticks out of the photographing range.
The present embodiment addresses the above difficulty in the following ways. A non-display imaging region is set to the outer boundary region of the photographing range 102b.
It should be noted that it is possible to use a variety of algorithms different from the above-described algorithm as an algorithm for determining whether a tip region "ta" is a true fingertip region. For example, the displacement distance, by which the first image is displaced in the Y direction to obtain the second image, may be set smaller than a common adult finger width. In such a case, a tip region appearing in the difference image between the first and second images tends to have such dimensions that the dimension WX in the X direction is larger than the dimension WY in the Y direction. Thus, it is possible to determine whether the tip region "ta" is a true fingertip region based on whether an aspect ratio φ (=WX/WY) of the tip region "ta" is in a predetermined range. For example, the aspect ratio φ of a wide thing such as a sheet of paper or a document falls outside the predetermined range.
Taking into consideration a case where an inserted finger is inclined with respect to the Y direction, the aspect ratio φ may be calculated in a manner that takes the inclination of the finger into account.
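A sketch of the aspect-ratio variant, assuming the tip region is given as a binary array and using its axis-aligned bounding box; the bounds of the predetermined range are assumptions:

```python
import numpy as np

def aspect_ratio_is_fingertip(tip_region, phi_lo=1.2, phi_hi=4.0):
    """Compute phi = WX / WY from the bounding box of the tip region and
    test whether it lies in a predetermined range. A fingertip tends to
    give WX > WY when the displacement distance is smaller than a common
    adult finger width, while a wide object such as a sheet of paper
    gives a far larger phi and is rejected."""
    ys, xs = np.nonzero(tip_region)
    if ys.size == 0:
        return False
    wx = xs.max() - xs.min() + 1   # dimension WX in the X direction
    wy = ys.max() - ys.min() + 1   # dimension WY in the Y direction
    phi = wx / wy                  # aspect ratio phi
    return phi_lo <= phi <= phi_hi
```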
Explanation now returns to the flowchart.
At S307, a history of the fingertip position is read from the icon registration memory 102c, and a movement indicated by the history is analyzed. At S308, it is determined whether the analyzed movement corresponds to the cancellation movement. When the analyzed movement is determined to correspond to the cancellation movement, the process proceeds to S309, where the icon registration is cancelled. For example, a left-right finger wave movement may be set as the cancellation movement.
When all of the determinations at S302, S304 and S308 result in "YES", the process proceeds to S310, where the icon registration is maintained. It should be noted that the registered fingertip position is updated to the latest position of the currently-detected fingertip.
At S506, the type of the specified control command is identified. When the specified control command is of the icon pasting type, the process proceeds to S507, where the icon is pasted at a place corresponding to the second touch manipulation. When the specified control command is of the icon delete type, the process proceeds to S508 while skipping S507. At S508, the corresponding control command is executed. At S509, the icon registration is canceled.
When it is determined at S504 that a touch manipulation on the touch panel is not performed at an input location corresponding to the registered fingertip position, the process proceeds to S510. At S510, it is determined whether a touch manipulation is performed at a place that is inside the map display region 150′ and is different from the registered fingertip position. The detection of the touch manipulation at S510 indicates that the touch manipulation is made by a finger different from the finger whose fingertip position is registered. Thus, when the determination at S510 results in "YES", the process proceeds to S511, where the map scroll process is performed.
The first embodiment can be modified in various ways, examples of which are described below.
In the first embodiment, when a finger escapes to the outside of the display range (corresponding to the pointer displayable region) in the coupling movement mode, the coupling movement mode is turned off. Even if the same finger is then returned to the display range, the coupling movement mode is maintained in an off state. Alternatively, the coupling movement mode may be maintained in an on state when the finger escapes to the outside of the display range (corresponding to the pointer displayable region). Further, when the finger is returned to the display range, the icon may be displayed so as to be attached to the finger.
In the first embodiment, the move target image is the marking image acting as an icon. Alternatively, the move target image may be an icon 701 representative of a folder or a file. In the alternative, the first touch manipulation switches the icon 701 into the selected state. When the finger is then spaced apart from the touch panel 12a and is moved, the icon 701 is moved together with the pointer image until the second touch manipulation is performed. It is thereby possible to perform a so-called drag operation on a file or a folder.
In the first embodiment, an actual finger image is used as a pointer image. Alternatively, an image irrelevant in data to the actual finger image may be used as a pointer image.
The tip region that is determined as a non-true fingertip region is not stored as the fingertip position, and as a result, a pointer image is not pasted on the non-true fingertip region. Thus, the following difficulty fundamentally does not arise: a pointer image is pasted at a point that is associated with a photographic subject other than a finger but is wrongly detected as a fingertip position, so that a finger image is displayed on the display screen although the user clearly knows that the hand is not put in the photographing range 102b. The control device 1 can thereby minimize the user's feeling that something is wrong.
A simulated finger image imitating an outline shape of a finger may be used as a pointer image. A simulated finger image according to a simple example may be a combination of a circular arc and straight lines forming a finger outline.
Alternatively, an image of an actual finger, which has been preliminarily imaged for each finger, may be used as a pointer image. For example, the image of an actual finger may be an image of a finger of the user, or an image of a finger of a model such as a professional hand model. In such a case, an outline may be extracted from the image of the finger by using a known edge detection process, and vector outline data approximating the outline is created. Thereby, finger outline image data SF1 to SF5 can be created.
Because the bones of the fingers are arranged so as to approximately converge at a joint of the wrist, the finger direction regulation point W may be set at a position corresponding to the wrist joint.
The pointer image frame can be synthesized with the input window image frame data by one of the following methods.
(1) When the pointer image data is described as bitmap data from the beginning, the pointer image with transparency is superimposed on the input window by performing an alpha blending process between corresponding pixels.
(2) When the pointer image data is described as vector outline data, an outline of the pointer image is generated on the pointer image frame data by using the vector outline data; further, a region inside the outline is converted into bitmap by rasterizing, and then an alpha blending process is performed in a way similar to that in (1).
(3) An outline is drawn on the input window image frame data by using the vector outline data forming the pointer image data, the pixels located inside the outline on the input window image are extracted, and values of the extracted pixels are uniformly shifted.
According to any one of the methods (1) to (3), regarding the pixels forming the outline of the pointer image data, it is possible to superimpose the pointer image whose outline is highlighted by increasing the blend ratio of the pointer image data. Alternatively, the pointer image data may be image data representing only the outline in the form of bitmap data or vector outline data, and only the outline may be superimposed.
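A sketch of method (1), assuming the pointer image is supplied with a binary mask and approximating the outline as mask pixels bordering the background; the blend ratios are assumptions:

```python
import numpy as np

def superimpose_pointer(window, pointer, mask, alpha=0.5, outline_alpha=0.9):
    """Method (1): per-pixel alpha blending of a bitmap pointer image
    onto the input window frame, with a higher blend ratio for the
    outline pixels so that the outline is highlighted."""
    out = window.astype(np.float32).copy()
    inside = mask.astype(bool)

    # Outline = mask pixels having at least one background 4-neighbor.
    padded = np.pad(mask.astype(np.uint8), 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:]).astype(bool)
    outline = inside & ~interior

    body = inside & ~outline
    out[body] = (1 - alpha) * out[body] + alpha * pointer[body]
    out[outline] = ((1 - outline_alpha) * out[outline]
                    + outline_alpha * pointer[outline])
    return out.astype(window.dtype)
```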
The above-described merit becomes more notable when the photographing range 102b and the input manipulation surface 102a of the touch panel are downsized.
An X direction dimension of the photographing range 102b (the input manipulation surface 102a) in the above case may be in a range between 60 mm and 80 mm and may be 70 mm in an illustrative case, and a Y direction dimension may be in a range between 30 mm and 55 mm and may be 43 mm in an illustrative case.
When the number of fingers received in the photographing range is two, and when the actual finger image FI being only binarized is displayed on the display screen of the monitor 15, the two actual finger images FI are displayed in a relatively large size because of the downsizing of the photographing range.
When the input location largely varies in the Y direction, it may be necessary to take into account a change in the wrist position in the Y direction.
In the following, explanation is given of a situation where a user sitting in the driver seat 2D or the passenger seat 2P manipulates the manipulation part 12.
In order to reflect the above movement of the hand, the wrist point W may be set in the following manner.
On an assumption that the rotation axis is located inside the palm, the position of the fingertip and the position of the wrist are moved in opposite directions due to the rotation movement. Thus, for the actual finger image FI inclined upper rightward with respect to the Y direction, the X coordinate of the wrist point W is set so as to be displaced leftward in the X direction from the reference wrist point W0. For the actual finger image FI inclined upper leftward with respect to the Y direction, the X coordinate of the wrist point W is set so as to be displaced rightward in the X direction from the reference wrist point W0.
Alternatively, the representation actual finger image employed may be the actual finger image whose X coordinate or Y coordinate of the fingertip point is closest to the X direction center or the Y direction center of the photographing range among the multiple actual finger images.
Alternatively, the wrist point W may be set by using a representation fingertip point, which is obtained by averaging X coordinates and Y coordinates of multiple fingertip points G1 to G5.
Alternatively, when the number of fingertip positions is odd, the representation fingertip point may be set to the fingertip point of the actual finger image located at the center. When the number of fingertip positions is even, the representation fingertip point may be set to a point obtained by averaging X coordinates and Y coordinates of two actual finger images located close to the center.
In the actual hand, the finger bones have respective widths at the wrist, and are connected with different points of the wrist joint in the X direction. In view of the above, the wrist points of the respective fingers may be set so as to be offset from each other in the X direction, as in the sketch below.
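A sketch of the per-finger wrist point setting discussed above, combining the rotation-based X displacement with a per-finger X offset; the gain and spread values are assumptions:

```python
def wrist_points(fingertip_xs, w0_x, w0_y, gain=0.5, spread=8.0):
    """Set a wrist point W for each detected fingertip: displace the X
    coordinate from the reference wrist point W0 oppositely to the
    fingertip's X offset (rotation about an axis inside the palm moves
    fingertip and wrist in opposite directions), then add a per-finger
    X offset, since the finger bones connect to different points of the
    wrist joint in the X direction."""
    n = len(fingertip_xs)
    points = []
    for i, gx in enumerate(fingertip_xs):
        rot_shift = -gain * (gx - w0_x)            # opposite to fingertip offset
        bone_shift = (i - (n - 1) / 2.0) * spread  # per-finger offset in X
        points.append((w0_x + rot_shift + bone_shift, w0_y))
    # Each simulated finger image is then drawn along the line from its
    # fingertip point G to its wrist point W.
    return points
```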
In the above examples, the control device is applied to an in-vehicle electronic apparatus. However, the control device is applicable to another apparatus. For example, the control device may be applied to a GUI input device for a PC.
The first embodiment and modification have the following aspects.
According to an aspect, there is provided a control device including: a touch input device that has a manipulation surface adapted to receive a touch manipulation made by a finger of a user, and detects and outputs an input location of the touch manipulation; an imaging device that has a photographing range having one-to-one coordinate relationship to the manipulation surface, and captures an image of a hand of the user getting access to the manipulation surface; a fingertip specifying section (or means) that specifies a fingertip of the hand based on data of the image of the hand; a display device that includes a display screen having one-to-one coordinate relationship to the photographing range and the manipulation surface; a pointer image display control section (or means) that causes the display device to display a pointer image on the display screen, the pointer image pointing to a place corresponding to the fingertip; a selection reception region setting section (or means) that sets a selection reception region on the display screen so that the selection reception region is located at a predetermined place on the display screen; a move target image selection section (or means) that switches a move target image prepared on the selection reception region into a selected state when the touch input device detects that the touch manipulation is performed at the input location corresponding to the move target image; and an image movement display section (or means) that (i) detects a target fingertip, which is the fingertip that makes the touch manipulation at the input location corresponding to the move target image, (ii) causes the display device to display the move target image in the selected state and the pointer image at a place corresponding to the position of the target fingertip, and (iii) causes the move target image in the selected state and the pointer image to move together on the display screen in response to movement of the target fingertip in the photographing range, in such a manner that a trajectory of movement of the selected move target image and the pointer image corresponds to a trajectory of the movement of the target fingertip.
The imaging device of the control device captures the image representative of a hand of a user getting access to the touch input device, as the conventional operating devices disclosed in Patent Documents 1 to 3 do. The conventional operating device utilizes the captured image of the hand only as a hand line image that is superimposed on the display screen to indicate a manipulation position, and thus, the conventional operating device cannot effectively utilize the information on the captured image of the hand as input information. The control device of the present disclosure can utilize information on the position of the fingertip of the user based on the image of the hand. The control device can detect the position of the fingertip and the input location of the touch manipulation independently of each other.
More specifically, the control device can recognize, as the target fingertip, one of the specified fingertips that is associated with the touch manipulation. The control device displays the move target image being in the selected state and the pointer image at a place on the display screen, the place corresponding to the position of the target fingertip. The control device moves the move target image in the selected state and the pointer image in response to the movement of the target fingertip in the photographing range, in such a manner that the trajectory of the movement of the move target image and the pointer image corresponds to the trajectory of the movement of the fingertip. Through the above ways, even when the finger is spaced apart from the manipulation surface after the touch manipulation for switching the move target image into the selected state is performed, it is possible to track the position of the fingertip based on the captured image of the hand, and it is possible to display the move target image and the pointer image indicating the present position of the fingertip so that the move target image and the pointer image are movable together. The control device therefore enables an input operation such as a drag operation on an image item in an intuitive manner.
The above control device may be configured such that the pointer image display control section uses an actual finger image as the pointer image, the actual finger image being extracted from the image of the hand. According to this configuration, a user can perform input operation using the touch input device while seeing the actual finger image superimposed on the display screen. The actual finger image may be an image of the finger of the user. The control device therefore enables input operation in a more intuitive manner.
The above control device may be configured such that the pointer image display control section uses a pre-prepared image item as the pointer image, the pre-prepared image item being different from an actual finger image extracted from the image of the hand. The pre-prepared image item may be, for example, a commonly-used pointer image having an arrow shape, or alternatively, a preliminarily-captured image of a hand or a finger of a user or another person.
When the actual finger image extracted from the captured image is used as the pointer image, and when the size of the manipulation surface is relatively smaller than that of the display screen, the size of the displayed image of the finger is enlarged on the display screen. Thus, it may become difficult for a user to understand the position of the fingertip precisely, because the displayed finger may be excessively wide. In such a case, the actual finger image that is extracted from the captured image of the hand in real time may be used only to specify the position of the fingertip, and the pre-prepared image item may be pasted and displayed on the display screen. Thereby, regardless of the appearance of the actual finger image extracted from the image of the hand, it becomes possible to reliably display the pointer images corresponding to the respective fingers such that the displayed pointer images are thinner than the actual finger image, and as a result, it is possible to prevent a user from having an odd feeling caused by the display of an excessively wide finger image.
The pre-prepared image item may be a simulated finger image whose width is smaller than that of the actual finger image extracted from the hand image. The simulated finger image may represent an outline of the finger. The use of such a simulated finger image enables a user to catch the present manipulation location in a more intuitive manner.
The control device may be configured such that: the touch manipulation includes a first touch manipulation, which is the touch manipulation that is performed by the target fingertip at the input location corresponding to the selection reception region; the first touch manipulation switches the move target image into the selected state; when the target fingertip is spaced apart from the manipulation surface and is moved after the first touch manipulation is performed, the image movement display section switches a display mode into a coupling movement mode, in which the move target image in the selected state and the pointer image are moved together in response to the movement of the target fingertip; the touch manipulation further includes a second touch manipulation, which is the touch manipulation that is performed at the input location corresponding to the target fingertip after the target fingertip is moved in the coupling movement mode; and the image movement display section switches off the coupling movement mode when the touch input device detects that the second touch manipulation is performed. According to the above configuration, the above control device can switch the move target image into the selected state in response to the first touch manipulation performed at the selection reception region. Then, the control device can display and move the move target image and the pointer image to a desired location (e.g., display of a drag operation) in the coupling movement mode while not receiving a touch. Then, when the control device detects that the second touch manipulation is performed, the control device switches off the coupling movement mode. In the above, the first and second touch manipulations have therebetween a period where no touch is made on the manipulation surface. The first and second touch manipulations can respectively indicate a start time and an end time of the coupling movement mode (e.g., display of a drag operation) in a simple and clear manner.
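A minimal sketch of this first-touch/second-touch protocol as a small state machine; the class and method names are illustrative only, not the disclosed implementation:

```python
class CouplingMovementController:
    """A first touch inside the selection reception region switches the
    move target image into the selected state and turns the coupling
    movement mode on; fingertip positions from the camera then move the
    image and pointer together; a second touch turns the mode off."""

    def __init__(self, selection_region):
        self.selection_region = selection_region  # (x0, y0, x1, y1)
        self.coupling = False
        self.position = None

    def on_touch(self, x, y):
        if not self.coupling:
            x0, y0, x1, y1 = self.selection_region
            if x0 <= x <= x1 and y0 <= y <= y1:
                self.coupling = True      # first touch manipulation
                self.position = (x, y)
        else:
            self.coupling = False         # second touch manipulation

    def on_fingertip(self, x, y):
        """Fingertip position specified from the captured image, usable
        even while no touch is made on the manipulation surface."""
        if self.coupling:
            self.position = (x, y)        # image and pointer move together
```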
The control device of the present disclosure may be applied to a data-processing device including computer hardware as a main component, the data-processing device being configured to perform a data-processing operation by using input information based on execution of a predetermined program. The target fingertip specified from the captured image is always associated with a touch manipulation performed at a corresponding position, and the touch manipulation can be used for activating a data-processing operation of the data-processing device. When the touch input device is manipulated by the hand, multiple fingers may be specified from the captured image in some cases. In such cases, multiple finger points are set on the display screen, and multiple pointer images corresponding to the multiple finger points may be displayed. In order to realize an intuitive operation in the above case, it may be necessary for the control device to enable a user to clearly distinguish which one of the multiple fingers has performed the touch manipulation that triggers activation of the data-processing operation. In other words, it may be necessary for the control device to enable a user to clearly distinguish which one of the multiple fingertips is the target fingertip.
According to the conventional technique, a trigger signal for activating the data-processing operation is provided when there occurs a touch manipulation directed to a key or a button fixedly displayed on a display screen. Thus, the conventional technique enables a user to catch which finger performs the touch manipulation by reversing the color of the key or the button aimed at by the touch manipulation or by outputting an operation sound. However, the conventional technique essentially cannot track a change in position of the target finger based on input information provided by touch, in order to track the movement of the target fingertip after the touch manipulation is finished (i.e., after the target fingertip is spaced apart from the touch input device). In view of the above difficulty of the conventional technique, the control device of the present disclosure may be configured such that the move target image is a marking image that highlights the position of the target fingertip. The control device having the above configuration can track the target fingertip by using the captured image and can use the marking image as the move target image accompanying the target fingertip, and thereby enables a user to grasp the movement of the target fingertip even after the touch manipulation is finished (i.e., after the target fingertip is spaced apart from the touch input device).
The above control device may further include an operation button image display control section (or means) that causes the display device to display an operation button image on the selection reception region of the display screen, the operation button image containing the marking image as design display. When the operation button image is displayed on the selection reception region, a user can intuitively select a function corresponding to the operation button image by performing a touch manipulation directed to the operation button image. The marking image on the operation button image can act as the move target image and can be pasted at a target fingertip point, so that the marking image is movable in response to the movement of the target fingertip. Thus, a user can constantly and clearly see which operation button image is in the selected state, even when the user is moving the target fingertip.
The above control device may further include a marking image pasting section (or means) that causes the display device to display the marking image on the display screen, such that the marking image is fixedly pasted at a place corresponding to the input location of the second touch manipulation when the coupling movement mode is switched off. The above configuration is particularly effective for apparatus-function-setting that requires the specifying of a point for setting on a window. By using the above control device, a user can easily grasp the present position and the movement trajectory of the fingertip that has operated the operation button image, through the movement of the marking image in the coupling movement mode. Further, a user can easily grasp the point for setting fixed by the second touch manipulation, through the fixedly-pasted place of the marking image. For canceling the point for setting fixed in the above-described operation, for instance, the above control device may further include a marking image deletion section (or means) that deletes the marking image, which has been displayed together with the pointer image, from the place corresponding to the input location of the second touch manipulation when the coupling movement mode is switched off.
The above control device may be configured such that the marking image has a one-to-one correspondence to a predetermined function of an electronic apparatus, which is a control target of the subject control device. The control device may further include a control command activation section (or means) that activates a control command of the predetermined function corresponding to the marking image when the touch input device detects that the second touch manipulation is performed. According to the above configuration, it becomes possible to specify the point for setting for the predetermined function and activate the predetermined function at the same time. Further, a user can more easily grasp the type of the selected predetermined function and the final point for setting, through the pasted marking image.
The above control device may be configured such that: the selection reception region is multiple selection reception regions; the predetermined function of the electronic apparatus is multiple predetermined functions; the marking image is multiple marking images; and the multiple marking images respectively correspond to the multiple predetermined functions. In such a configuration, the control device may further include an operation button image display control section (or means) that causes the display device to respectively display a plurality of operation button images on a plurality of selection reception regions, so that the plurality of operation button images respectively contain the plurality of marking images as design display. When the first touch manipulation is performed at the input location corresponding to one operation button image of the operation button images, the image movement display section (i) switches one marking image of the marking images that corresponds to the one operation button image into the selected state, and (ii) switches the display mode into the coupling movement mode. When the touch input device detects that the second touch manipulation is performed, the control command activation section activates the control command of one of the predetermined functions corresponding to the one marking image being in the selected state. By arranging the multiple operation button images on the multiple selection reception regions so that designs of the multiple marking images are different from each other, the control device enables a user to visually and easily grasp a lineup of the multiple predetermined functions. Further, the control device enables a user to distinguish which one of the predetermined functions is being selected, through the design of the marking image in the selected state.
The control device may be configured such that: a part of the manipulation surface is a command activation enablement part; a part of the display screen is a window outside part, which corresponds to the command activation enablement part; the operation button image is displayed on the window outside part of the display screen; the control command activation section activates the control command of the predetermined function when the touch input device detects that the second touch manipulation is performed on the command activation enablement part of the manipulation surface; and the control command activation section does not activate the control command of the predetermined function when the touch input device detects that the second touch manipulation is performed outside the command activation enablement part. According to this configuration, if an error touch manipulation is made at a place corresponding to a region around the operation button image, such an error touch manipulation, which is not aimed at activation of the predetermined function, falls outside the command activation enablement part. Thus, it is possible to prevent an error operation of the electronic apparatus from occurring.
In particular, when the electronic apparatus is an in-vehicle electronic apparatus, the display screen of the display device may be placed so as to be out of sight of the user who is looking straight at the finger on the manipulation surface; the user cannot look straight at both the display screen and the hand performing the operation at the same time. According to this configuration, since the pointer image and the marking image are movable together on the display screen, a user can intuitively and reliably perform an operation including specification of a point without looking at the hand.
The in-vehicle electronic apparatus may be a car navigation system, for instance. In this case, the manipulation surface may be placed next to or obliquely forward of a seat for a user, and the display screen may be placed higher than the manipulation surface so that the display screen is placed in front of or obliquely forward of the user. The control device may be configured such that: a part of the display screen is a map display region for displaying a map for use in the car navigation system; the operation button image is displayed on the selection reception region and is displayed on an outside of the map display region; the control command enables a user to specify a point on the map displayed on the map display region; the control command is assigned to correspond to the operation button image; the control command activation section activates the control command when the touch input device detects that the second touch manipulation is performed inside the map display region; and the control command activation section does not activate the control command when the touch input device detects that the second touch manipulation is performed outside the map display region. In the above, the control command may be associated with specification of a point on the map display region, and may be one of (i) a destination setting command to set a destination on the map display region, (ii) a stopover point setting command to set a stopover point on the map display region, (iii) a peripheral facilities search command, and (iv) a map enlargement command.
The above control device may be configured such that: the display screen has a pointer displayable part, in which the pointer image is displayable; and the image movement display section switches off the coupling movement mode and switches the marking image into an unselected state when the target fingertip escapes from the pointer displayable part in the coupling movement mode. According to this configuration, after the marking image is switched into the selected state, a user can easily switch the marking image from the selected state into the unselected state by moving the finger to an outside of the pointer displayable part.
Alternatively, the above control device may be configured such that, when the target fingertip escapes from the pointer displayable part in the coupling movement mode, the image movement display section maintains the selected state of the marking image. Further, the above control device may be configured such that, when the escaped target fingertip or a substitution fingertip, which is a substitute for the escaped target fingertip, is detected in the pointer displayable part after the target fingertip has escaped from the pointer displayable part, the image movement display section keeps the coupling movement mode by newly setting the target fingertip to the escaped target fingertip or the substitution fingertip and by using the marking image being in the selected state. According to this configuration, even when the finger moves to an outside of the pointer displayable part, it is possible to keep the selected state of the marking image, and it becomes unnecessary to select the marking image again.
The above control device may further include a target fingertip movement detection section (or means) that detects the movement of the target fingertip in the coupling movement mode. Further, the control device may be configured such that, when the detected movement of the target fingertip in the coupling movement mode corresponds to a predetermined mode switch off movement, the image movement display section switches off the coupling movement mode and switches the marking image into an unselected state. When a certain movement of the target fingertip is preliminarily determined as the predetermined mode switch off movement, a user can switch the marking image into the unselected state by performing the predetermined mode switch off movement after the marking image is switched into the selected state.
The control device may be configured such that the hand of the user is inserted into the photographing range in a predetermined insertion direction. Further, the control device may further include: a tip extraction section (or means) that extracts a tip region of the hand on the captured image in the predetermined insertion direction; a tip position specification section (or means) that specifies a position of the tip region in the photographing range as an image tip position; a fingertip determination section (or means) that determines whether the image tip position indicates a true fingertip point, based on a size or area of the tip region; and a fingertip point coordinate output section (or means) that outputs a coordinate of the image tip position as a coordinate of a true fingertip point when it is determined that the image tip position indicates the true fingertip point.
According to the above configuration, the hand of the user is inserted into the photographing range of the imaging device in the predetermined insertion direction, and the fingertip is located at a tip of the hand in the predetermined insertion direction. Thus, by extracting the tip region of the hand on the captured image in the predetermined insertion direction, and by determining whether the size or the area of the tip region has appropriate values, it is possible to accurately determine whether the tip region indicates a true fingertip.
The control device may be configured such that: the tip extraction section acquires the captured image as a first image; the tip extraction section acquires a second image by parallel-displacing the first image in the predetermined insertion direction, and extracts, as the tip region (fingertip region), a non-overlapping region of the hand between the first image and the second image. According to this configuration, it is possible to easily specify the non-overlapping region between the first and second images as the fingertip region, by parallel-displacing the captured image in the predetermined insertion direction (i.e., a longitudinal direction of a palm of the hand) and by overlapping the parallel-displaced image on the captured image.
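The parallel-displacement extraction can be illustrated by the following minimal Python sketch (using NumPy). The orientation conventions are assumptions: the hand is taken to enter from the bottom edge of a binary mask with fingertips pointing toward row 0, the displacement is taken toward the entry edge so that the uncovered strips coincide with the fingertip ends, and the displacement amount is a placeholder.

    import numpy as np

    def extract_tip_regions(hand_mask: np.ndarray, shift: int = 20) -> np.ndarray:
        """First image: the binary hand mask. Second image: a copy
        parallel-displaced by `shift` pixels toward the entry edge
        (downward under the assumed orientation). The non-overlapping
        region, i.e. hand pixels not covered by the displaced copy,
        forms a strip at the tip of each finger."""
        shifted = np.zeros_like(hand_mask)
        shifted[shift:, :] = hand_mask[:-shift, :]
        return hand_mask & ~shifted

    def representation_position(tip_region: np.ndarray):
        """Geometrical center of a non-overlapping region, used here as
        the representation position of the fingertip point."""
        ys, xs = np.nonzero(tip_region)
        if xs.size == 0:
            return None
        return float(xs.mean()), float(ys.mean())

When several fingers are inserted, the returned mask contains one disconnected strip per finger, which matches the multi-candidate extraction described below.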
The above control device may be configured such that the imaging device images the hand inserted into the photographing range by utilizing light reflected from a volar aspect of the palm. When the control device extracts and specifies the fingertip region based on the difference between the first and second images in the above-described way, the control device may be configured such that: the imaging device is located lower than the hand; and the imaging device images the hand that is inserted into the photographing range in a horizontal direction with the volar aspect of the palm directed downward. It should be noted that, in Patent Document 2, a camera is mounted to a ceiling of a vehicle body and located obliquely above the hand. Thus, unlike Patent Document 2, the control device of the present disclosure is not influenced by ambient light or foreign substances between the hand and a camera mounted to the ceiling. The above control device may further include an illumination section (or means) that illuminates the photographing range with illumination light. Further, the imaging device of the control device may capture the image of the hand based on the illumination light reflected from the hand. According to this configuration, it becomes possible to easily separate a background region and a hand region from each other on the captured image.
The tip position specification section can specify the position of the tip region as the position of the tip of the hand on the image. In the above, the tip region is specified as the non-overlapping region, and the position of the tip of the hand on the image can serve as the coordinate of the fingertip point. The position of the non-overlapping region can be specified from a representation position, which satisfies a predetermined geometrical relationship to the non-overlapping region. For example, a geometrical center of the non-overlapping region can be employed as the representation position. It should be noted that the representation position is not limited to the geometrical center.
When the non-overlapping region is a true fingertip region, the size and the area of the non-overlapping region should be in a predetermined range corresponding to fingers of human beings. Thus, when the size or the area of the non-overlapping region is outside the predetermined range, it is possible to determine that the non-overlapping region is not a true fingertip region associated with the fingertip of the user, and that the non-overlapping region is associated with a photographing subject other than the fingertip of the user or with a part of the hand other than the fingertip. Thus, the control device can be configured such that the fingertip determination section determines whether the non-overlapping region is the true fingertip region based on determining whether the size or area of the non-overlapping region corresponding to the extracted tip region is in the predetermined range.
In connection with the imaging device, the control device may further include a hand guide part that provides a guide direction and regulates the predetermined insertion direction to the guide direction, so that the hand is inserted into the photographing range in the guide direction. According to this configuration, the predetermined insertion direction, in which the hand of the user is inserted into the photographing range, can be substantially fixed. As a result, a longitudinal direction of the finger of the hand to be imaged can be substantially parallel to the guide direction, and the size of the non-overlapping region in a direction perpendicular to the guide direction can substantially match or correspond to a width of the finger. Thus, the control device can be configured such that the fingertip determination section determines whether the tip region is the true fingertip region based on determining whether a width of the non-overlapping region in the direction perpendicular to the guide direction is in a predetermined range. According to this configuration, a measurement direction of the size of the non-overlapping region can be fixed. For example, the measurement direction can be fixed to the direction perpendicular to the guide direction, or to a direction in a range between about +45 degrees and −45 degrees from the direction perpendicular to the guide direction. It is thereby possible to remarkably simplify a measurement algorithm for determining whether the tip region is the true fingertip region.
As described above, the fingertip region can be extracted from the non-overlapping region between the first image, which is the captured image, and the second image, which is obtained by parallel-displacing the first image. When multiple fingers are inserted into the photographing range, the non-overlapping regions between the first and second images can be separated into multiple pieces. In such a case, the tip extraction section can extract the multiple non-overlapping regions as candidates of the fingertip regions. As a result, it becomes possible to utilize the multiple fingertip regions for location inputs at the same time, and thus, it is possible to increase the degree of freedom of input in the control device. Further, even if some fingers are closed and in contact with each other, it is possible to reliably separate and specify the individual fingertip regions.
The fingertip determination section may be configured to estimate a value of S/d as a finger width from the multiple non-overlapping regions, where S is the total area of a photographing subject on the captured image and d is the sum of distances from the non-overlapping regions to a back end of the photographing range. The back end is an end of the photographing range in the predetermined insertion direction, such that the hand is inserted into the photographing range through the back end earlier than through another end opposite to the back end. Thus, the fingertip determination section can determine whether the non-overlapping region is the true fingertip region based on determining whether S/d is in a predetermined range. According to this configuration, the fingertip determination section can estimate the value of S/d as the finger width, in addition to specifying the width of the non-overlapping region. Thereby, it is possible to determine whether the captured image includes, as a finger image, a photographing subject that continuously extends from the tip region to an end of the captured image corresponding to the back end of the photographing range. Thus, a photographing subject other than a finger (e.g., a small foreign object such as a coin and the like) is effectively prevented from wrongly being identified as a finger. In the above determination, a region around a tip of such a small foreign object may still be detected as a candidate tip region. The fingertip determination section may therefore estimate the value of S/N as an average finger area and may be configured to determine whether the non-overlapping region is the true fingertip region based on determining whether S/N is in a predetermined range, where S is the total area of the photographing subject and N is the number of non-overlapping regions.
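The S/d and S/N checks can be condensed into the following Python sketch, which is only a schematic reading of the text. The numeric ranges, pixel units, and orientation (back end of the photographing range taken as the bottom row, tip centroids given as (x, y) pairs per the earlier sketch) are placeholder assumptions.

    import numpy as np

    def plausible_fingertips(subject_mask: np.ndarray, tip_centroids,
                             width_range=(8.0, 60.0),
                             area_range=(500.0, 20000.0)) -> bool:
        """S: total subject area in pixels. d: sum of distances from each
        non-overlapping region to the back end (bottom row here). The
        estimates S/d (finger width) and S/N (average finger area) must
        both fall in predetermined ranges for a true-fingertip verdict."""
        S = float(subject_mask.sum())
        N = len(tip_centroids)
        if N == 0:
            return False
        back_end_row = subject_mask.shape[0] - 1
        d = sum(back_end_row - cy for (_cx, cy) in tip_centroids)
        if d <= 0:
            return False
        width_ok = width_range[0] <= S / d <= width_range[1]
        area_ok = area_range[0] <= S / N <= area_range[1]
        return width_ok and area_ok

A coin lying near the tip of the range yields a small S but a large d, so S/d falls below any plausible finger width and the candidate is rejected, which is the filtering effect described above.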
The above control device may be configured such that: the pointer image is displayed at the fingertip point indicated by the fingertip region only when it is determined that the tip region on the captured image is the true fingertip region; and the pointer image is not displayed when it is determined that the tip region on the captured image is not the true fingertip region. According to this configuration, the following difficulty fundamentally does not arise: the pointer image is pasted at a point that is associated with a photographing subject other than a finger but is wrongly detected as a fingertip position, so that a finger image is displayed on the display screen although the user clearly knows that the hand is not put in the photographing range 102b. The control device thus can minimize the user's feeling that something is wrong.
There may arise a difficulty that, when a part of the finger of the user moves to an outside of the photographing range, an end of another part of the finger remaining in the photographing range is wrongly identified as the true fingertip region. To address this difficulty, the control device may be configured such that: the photographing range includes a window corresponding region and a window periphery region; the window corresponding region corresponds to a window on the display screen; the window periphery region is located outside of the window corresponding region, extends along an outer periphery of the window corresponding region, and has a predetermined width; and the fingertip point coordinate output section is configured to output the coordinate of the tip position when the tip position specification section determines that the coordinate of the tip position on the captured image is within the window corresponding region. According to this configuration, when the hand of the user protruding into the window periphery region is imaged, it is possible to determine that an actual fingertip is located outside of the window corresponding region, which is a target of display. In such a case, a tip of the hand extracted from the image is not recognized as the fingertip point, and thereby, it is possible to prevent the above difficulty from occurring.
The manipulation part 2012 has a manipulation input surface 2102a acting as a manipulation input region. The manipulation part 2012 is positioned so that the manipulation input surface 2102a faces upward. A touch panel 2012a provides the manipulation input surface. The touch panel 2012a may be a resistive type panel, a surface acoustic wave (SAW) type panel, a capacitive type panel, or the like. The touch panel 2012a includes a transparent resin plate acting as a base, or a glass plate, acting as a transparent input support plate. An upper surface of the touch panel 2012a receives and supports a touch manipulation performed by a user using a finger. The control device 2001 sets an input coordinate system on the manipulation input surface. The input coordinate system has a one-to-one coordinate relationship to the display screen of the monitor 2015. The touch panel 2012a can act as a manipulation input element or a location input device. The transparent resin plate can act as a transparent input reception plate.
The imaging optical system includes a first reflecting portion 2012p and a second reflecting portion 2012r. As shown in
As shown in
Since the second reflecting portion 2012r and the camera 2012b are located on laterally opposite sides of the space 2012f, the third reflecting light XXRB3 can be directly introduced into the camera 2012b by traveling across the space 2012f. Thus, the second reflecting portion 2012r and the camera 2012b can be placed close to lateral edges of the touch panel 2012a, and a path of the light from the hand XXH to the camera 2012b can be, so to speak, folded in three in the space 2012f. The imaging optical system can therefore be remarkably compact as a whole, and the case 2012d can be thin. In particular, since reducing the size of the touch panel 2012a or the area of the manipulation input surface 2102a enables the input part 2012 to be remarkably downsized or thinned as a whole, it becomes possible to mount the input part 2012 to vehicles whose center console XXC has a small width or vehicles that have a small attachment space in front of a gear shift lever. The input part 2012 can detect a hand as a hand image region when the hand is relatively close to the touch panel 2012a, because a large amount of the reflected light can reach the camera 2012b. However, as the hand is spaced apart from the touch panel 2012a, the amount of the reflected light decreases. Thus, the input part 2012 does not recognize a hand spaced a predetermined distance apart from the touch panel 2012a as a hand image region. For example, when a user moves a hand across and above the touch panel 2012a to operate a different control device (e.g., a gear shift lever) located close to the input part 2012, if the hand is sufficiently spaced apart from the touch panel 2012a, a hand image region with a valid area ratio is not detected, and thus, errors hardly occur in the below-described information input process using hand image recognition.
The manipulation input surface 2102a of the touch panel 2012a corresponds to a photographing range of the camera 2012b. As shown in
An image signal, which is a digital signal or an analog signal representing an image captured by the camera 2012b, is continuously inputted to the video interface 2112. The imaging video RAM 2113 stores therein the image signal as image frame data at predetermined time intervals. Memory content of the imaging video RAM 2113 is updated on an as-needed basis each time the imaging video RAM 2113 stores new image frame data.
The graphic controller 2110 acquires data of an input window image frame from the navigation ECU 2200 via the serial communication interface 2116, and acquires data of a pointer image frame from the CPU 2101. In the pointer image frame, a pointer image is pasted at a predetermined place. The graphic controller 2110 performs alpha blending or the like to synthesize the frames on the display video RAM 2111, and outputs the result to the monitor 2015.
The touch panel interface 2114 includes a drive circuit corresponding to a type of the touch panel 2012a. Based on the input of a signal from the touch panel 2012a, the touch panel interface 2114 detects an input location of a touch manipulation on the touch panel 2012a and outputs a detection result as location input coordinate information.
Coordinate systems having one-to-one correspondence relationship to each other are set on the photographing range of the camera 2012b, the manipulation input surface of the touch panel 2012a and the display screen of the monitor 2015. The photographing range corresponds to an image captured by the camera 2012b. The manipulation input surface acts as a manipulation input region. The display screen corresponds to the input window image frame data and the pointer image frame data, which determine display content on the display screen.
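A minimal sketch of such a one-to-one correspondence, assuming simple proportional scaling between the three coordinate systems, is shown below; the resolutions and function names are placeholders, not values from the disclosure.

    # Assumed resolutions (placeholders) for the three coordinate systems.
    CAMERA_SIZE = (320, 240)    # captured image (width, height), in pixels
    PANEL_SIZE = (100, 75)      # manipulation input surface, in mm
    DISPLAY_SIZE = (800, 600)   # display screen, in pixels

    def camera_to_display(cx: float, cy: float):
        """Map a point on the captured image to the display screen by the
        one-to-one proportional correspondence."""
        return (cx * DISPLAY_SIZE[0] / CAMERA_SIZE[0],
                cy * DISPLAY_SIZE[1] / CAMERA_SIZE[1])

    def panel_to_display(px: float, py: float):
        """Map a touch location on the manipulation input surface to the
        display screen in the same proportional way."""
        return (px * DISPLAY_SIZE[0] / PANEL_SIZE[0],
                py * DISPLAY_SIZE[1] / PANEL_SIZE[1])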
The ROM 2103 stores therein a variety of software to cause the CPU 2101 to function as a hand image region identification means or section, an area ratio calculation means or section, and an operation input information generation means or section. The variety of software includes touch panel control software 2103a, display control software 2103b, hand image area ratio calculation software 2103c, and operation input information generation software 2103d.
The touch panel control software 2103a is described below. The CPU 2101 acquires an input location coordinate from the touch panel interface 2114. The CPU 2101 further acquires the input window image frame and determination reference information from the navigation ECU 2200. The determination reference information can be used for determining content of the manipulation input. For example, the determination reference information includes information used for specifying a region for soft button, and information used for specifying content of a control command that is to be outputted in response to a touch manipulation directed to the soft button. The CPU 2101 specifies content of the present manipulation input based on the input location coordinate and the determination reference information, and issues and outputs a command to cause the navigation ECU 2200 to perform an operation corresponding to the manipulation input.
The display control software 2103b is described below. The CPU 2101 instructs the graphic controller 2110 to read the input window image frame data. Further, the CPU 2101 generates the pointer image frame data in the below described way, and transmits the pointer image frame data to the graphic controller 2110.
The hand image area ratio calculation software 2103c is described below. The CPU 2101 identifies a hand region XXFI in the captured image as shown in
The operation input information generation software 2103d is described below. The CPU 2101 generates operation input information directed to the in-vehicle electronic apparatus based on manipulation state on the touch panel and the hand image area ratio.
For example, the following items can be illustrated as the operation input information.
(1) A one-to-one relationship between the value of the hand image area ratio and the content of the operation input information is predetermined. The CPU 2101 determines the content of the operation input information that corresponds to the calculated value of the hand image area ratio, based on the one-to-one relationship. More specifically, when the calculated value of the hand image area ratio exceeds a predetermined area ratio threshold (in particular, when the hand cover state, in which the hand image area ratio exceeds 80%, is detected), the CPU 2101 outputs predetermined-function activation request information as the operation input information. The predetermined-function activation request information requests activation of a predetermined function of the in-vehicle electronic apparatus. The predetermined function of the in-vehicle electronic apparatus is, for example, to switch display from a first window 2301, which is illustrated in
(2) When a predetermined manipulation input is provided on the touch panel 2012a after the predetermined function is activated in the in-vehicle electronic apparatus, the CPU 2101 outputs operation change request information for changing an operation state of the predetermined function. For example, the operation change request information is operation recovery request information that requests deactivation of the predetermined function in the in-vehicle electronic apparatus and recovery of the in-vehicle electronic apparatus into a pre-activation stage of the predetermined function. For example, when a touch manipulation on the touch panel 2012a is made after the display is switched into the second window 2302 on the display screen of the monitor 2015, the CPU 2101 outputs, as the operation input information, window recovery request information to switch the display on the monitor into the first window 2301.
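The behavior of items (1) and (2) can be condensed into the following Python sketch. The 80% threshold follows the hand cover state named above, while the class, method, and window identifiers are illustrative assumptions only.

    import numpy as np

    AREA_RATIO_THRESHOLD = 0.8  # hand cover state: ratio exceeds 80%

    def hand_image_area_ratio(hand_mask: np.ndarray) -> float:
        """Area ratio of the hand image region to the manipulation input
        region (the captured frame is assumed to cover exactly that region)."""
        return float(hand_mask.sum()) / hand_mask.size

    class WindowSwitcher:
        """Illustrative controller: a detected hand cover state requests the
        predetermined function (switching to the second window 2302); a
        subsequent touch manipulation requests recovery of the first
        window 2301."""
        def __init__(self):
            self.window = "first window 2301"

        def on_frame(self, hand_mask: np.ndarray) -> None:
            if hand_image_area_ratio(hand_mask) > AREA_RATIO_THRESHOLD:
                self.window = "second window 2302"  # activation request

        def on_touch(self) -> None:
            if self.window == "second window 2302":
                self.window = "first window 2301"   # recovery request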
The operation of the control device 2001 is explained below.
It is here assumed that, due to a previous input of a command based on a touch manipulation made in another window, for example, an input window illustrated in
An outline of the hand image region is extracted. A pixel value for a region inside the outline and a pixel value for another region outside the outline are set to different values so that the region outside the outline and the region inside the outline can be visually distinguished from each other. The CPU 2101 generates the pointer image frame, in which a pointer image XXSF corresponding to a shape of the finger image region is pasted at a place corresponding to the hand image region. The pointer image frame is transferred to the graphic controller 2110, is combined with the input window image frame, and is displayed on the display screen of the monitor 2015. A way of combining the input window image frame and the pointer image frame may depend on the data format of the pointer image XXSF, and may be one of the following ways.
(1) When bitmap data is used for the pointer image, an alpha blending process is performed on the corresponding pixels, so that the pointer image with partial transparency can be superimposed on the input window.
(2) Data of the outline of the pointer image is converted into vector outline data. Thereby, it is possible to use a pointer image frame in which handling points of the outline are mapped on the frame. In this case, the graphic controller 2110 generates the outline of the pointer image by using the data on the frame, performs a raster writing process to generate bitmaps in an inside of the outline, and then performs alpha blending similar to that used in (1).
(3) In a way similar to the above-described (2), the outline is drawn on the input window image frame by using the vector outline data corresponding to the pointer image data, the pixels inside the outline in the input window image are extracted, and setting values of the extracted pixels are shifted uniformly.
According to any one of the methods (1) to (3), regarding the pixels forming the outline of the pointer image data, it is possible to superimpose the pointer image whose outline is highlighted due to an increase in the blend ratio of the pointer image data. Alternatively, the pointer image may be an image of only the outline in the form of bitmap data or vector outline data, and only the outline may be superimposed.
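Method (1) amounts to standard per-pixel alpha blending. The following sketch shows the arithmetic, assuming an RGBA pointer image the same size as the RGB window frame; the function name and array layout are illustrative assumptions.

    import numpy as np

    def superimpose_pointer(window_frame: np.ndarray,
                            pointer_rgba: np.ndarray) -> np.ndarray:
        """Blend an RGBA pointer image onto an RGB input window frame so
        that the pointer appears with partial transparency:
        out = alpha * pointer + (1 - alpha) * window."""
        rgb = pointer_rgba[..., :3].astype(np.float32)
        alpha = pointer_rgba[..., 3:4].astype(np.float32) / 255.0
        base = window_frame.astype(np.float32)
        blended = alpha * rgb + (1.0 - alpha) * base
        return blended.astype(np.uint8)

Raising the alpha values along the outline pixels yields the highlighted-outline effect described above.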
As shown in
In the above-described process, the touch panel 2012a independently generates the user operation input information, which does not involve information on an image captured by the camera 2012b. In the present embodiment, an input information generation procedure for generating input information that involves the information on an image captured by the camera 2012b is performed in parallel by the hand image area ratio calculation software 2103c and the operation input information generation software 2103d in accordance with a flowchart illustrated in
The input information generation procedure is described below with reference to
As shown by the state 59A in
When the display is switched into the second window 2302, an accompanying function may be activated as a different predetermined function of the in-vehicle electronic apparatus. Examples of such an accompanying function are as follows: (1) to mute, turn down the volume, or pause an audio apparatus and the like; (2) to change an amount of airflow on a vehicle air conditioner, such as a temporary increase in the amount of airflow and the like. In the above cases, the switching of the display into the second window 2302 is used as visual notification information indicative of the activation of the predetermined function. The second window 2302 may be a simple blackout screen, in which the display is OFF. Alternatively, for convenience, an information item showing content of the accompanying function may be displayed.
Explanation returns to
The second embodiment can be modified in various ways, examples of which are described below.
The operation input information generation software 2103d may be configured to detect a time variation in the value of the hand image area ratio. When the detected time variation matches a predetermined time variation, the operation input information generation software 2103d may generate and output the operation input information having the content corresponding to the predetermined time variation. According to the above configuration, it is possible to relate a more notable input hand movement to the operation input information. It is therefore possible to realize a more intuitive input operation.
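One plausible realization, sketched below, records the recent area-ratio samples and compares them against a predetermined time variation within a tolerance. The template values, tolerance, and names are placeholder assumptions, not values from the disclosure.

    def matches_time_variation(samples, template, tolerance=0.15) -> bool:
        """`samples` is the recent time series of the hand image area
        ratio; a match requires every sample to lie within `tolerance` of
        the template value at the same time step."""
        if len(samples) != len(template):
            return False
        return all(abs(s - t) <= tolerance for s, t in zip(samples, template))

    # Hypothetical template for the page-flip movement described below: the
    # ratio rises as the palm passes over the surface, then falls again as
    # the palm withdraws.
    PAGE_FLIP_TEMPLATE = [0.1, 0.3, 0.6, 0.8, 0.6, 0.3, 0.1]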
The control device 2001 can operate the above-described book viewer BV in the following way. The camera 2012b captures a moving image of a user input movement that is imitative of the flipping of a page. Based on the moving image, a time variation in the shape of the hand image region is detected as a time variation of the hand image area ratio. When the time variation of the hand image area ratio matches a predetermined time variation, a command to flip a page is issued as the operation input information. In this way, a user can virtually and realistically flip a page on the book viewer displayed on the display screen by performing, above the touch panel 2012a, the input hand movement that is imitative of the flipping of a page. As shown in
A graph illustrated in the upper part of
In the above, a time variation in the position of the center of the hand image region XXFI may be further determined. When both the time variation of the hand image area ratio and the time variation in the position of the center respectively match predetermined time variations, the command to flip a page may be issued. In this configuration, it is possible to identify the input hand movement that is imitative of the flipping of a page with higher accuracy. Since the manipulation input surface 2102a is relatively small, the palm is reversed while the position of the wrist in the X direction is kept, and thus, the position of the center of the palm does not change so much. Thus, regarding each of the coordinate values of the X and Y axes, a determination area (window) having a predetermined allowance width in the coordinate axis is set on a time-coordinate plane, as shown in the bottom of
As another method, as shown in
More specifically, as the predetermined input hand movement, it is possible to use a hand movement including a series of actions respectively illustrated in
When the hand is not close to the touch panel 2012a, all of the sub-regions "XXA1", "XXA2", and "XXA3" become second state sub-regions. When the hand approaches the touch panel 2012a from the right side of the touch panel 2012a, the hand image area ratio in the sub-region "XXA1" increases, and then the sub-region "XXA1" becomes a first state sub-region while the sub-regions "XXA2" and "XXA3" remain second state sub-regions. The state distribution in the above case can be expressed as {XXA1, XXA2, XXA3}={1, 0, 0} according to the above-described definition in the encoding. When the hand is further moved from right to left as shown by the states 64B and 64C in
It is possible to detect the movement of the hand from left to right by detecting the state distribution {XXA1, XXA2, XXA3} whose change is opposite to that shown in the movement of the hand from right to left. Using the above-described ways, it is possible to distinctly detect different manipulations: one is to move a hand from right to left or from left to right while the hand is spaced apart from the touch panel 2012a; another is to move a finger or a hand while the finger or the hand is contacting or pressing down the touch panel 2012a. For example, when the control device 2001 is in an audio apparatus operation mode or displays the input window, a command to select a next track or a previous track can correspond to a manipulation of moving the finger between left and right with the finger making touch, and a command to select a next album or a previous album can correspond to a manipulation of moving the hand between left and right without making touch. Accordingly, it is possible to provide a user with intuitive and natural operation manners.
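The encoding and transition matching can be sketched as follows. The threshold value and the exact transition pattern are placeholders chosen to be consistent with the {XXA1, XXA2, XXA3} example above, not values fixed by the disclosure.

    RIGHT_TO_LEFT = [
        (1, 0, 0),  # hand enters above sub-region XXA1 (right side)
        (0, 1, 0),  # hand passes above XXA2
        (0, 0, 1),  # hand leaves above XXA3 (left side)
    ]

    def encode_states(ratios, threshold=0.5):
        """Encode each sub-region as 1 (first state: its hand image area
        ratio is at or above the threshold) or 0 (second state)."""
        return tuple(1 if r >= threshold else 0 for r in ratios)

    def detect_transition(state_history, pattern=RIGHT_TO_LEFT) -> bool:
        """Detect the predetermined state distribution change by checking
        that the pattern appears, in order, within the observed history."""
        it = iter(state_history)
        return all(any(state == step for state in it) for step in pattern)

Reversing the pattern detects the opposite left-to-right movement, so the same routine serves both swipe directions.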
A shape of the sub-region is not limited to a rectangle or a square. Alternatively, the sub-region may have other shapes. For example, the sub-region may have any polygonal shape, including rectangular, triangular, and the like. As an example, triangular sub-regions are illustrated in
In the above case, the center XXG of the hand image region XXFI does not change largely in the X direction but changes remarkably in the Y direction. Thus, when a time variation in the coordinate of the center XXG is within a determination area (window) illustrated in the lower part of
The second embodiment and its modifications have the following aspects.
According to an aspect, there is provided a control device for a user to operate an in-vehicle electronic apparatus in a vehicle by manipulating the control device. The control device includes: a manipulation input element that is located so as to be within reach of the user who is sitting in a seat of the vehicle, and that has a manipulation input region having a predetermined area; an imaging device that has a photographing range covering the manipulation input region, and that captures an image including a hand image region representative of the hand of the user getting access to the manipulation input element; a hand image region identification section that identifies the hand image region in the image; an area ratio calculation section that calculates a value of hand image area ratio, which is area ratio of the hand image region to the manipulation input region; and an operation input information generation section that generates and outputs operation input information based on the calculated value of the hand image area ratio and a manipulation state of the manipulation input region, the operation input information being directed to the in-vehicle electronic apparatus.
According to the above aspect, as input information, it is possible to efficiently use information on the image captured by the imaging device in addition to the input information provided from the manipulation input element. Therefore, it is possible to largely extend input forms in utilizing the control device.
The above control device may be configured such that: the operation input information generation section determines content of the operation input information, based on a predetermined correspondence relationship between the content of the operation input information and the value of the hand image area ratio; and the operation input information generation section generates and outputs the operation input information having the content that corresponds to the calculated value of the hand image area ratio. According to this configuration, by preliminarily determining the content of the operation input information in accordance with the value of the hand image area ratio, it is possible to easily determine the content of the operation input information to be outputted.
The above control device may be configured such that, when the calculated value of the hand image area ratio exceeds a predetermined threshold, the operation input information generation section outputs predetermined-function activation request information as the operation input information to request a predetermined function of the in-vehicle electronic apparatus to be activated. According to this configuration, it is possible to assign a distinctive input manipulation as an operation for calling the predetermined function of the in-vehicle electronic apparatus, and thus, it is possible to activate the predetermined function in an intuitive manner by using a simple manipulation. For example, the predetermined threshold of the hand image area ratio may be set to a value large enough that an input manipulation causing a hand image area ratio larger than the predetermined threshold is distinguishable from a normal input manipulation such as a mere touch manipulation made by a finger and the like. With such a setting, it is possible to minimize an occurrence of an error operation of activating the predetermined function at an undesirable timing. For example, the above control device may be configured such that: the predetermined threshold is larger than 0.6 or 0.7, and may be set to 0.7 for instance; a value of the hand image area ratio larger than the predetermined threshold corresponds to an occurrence of a hand cover state in the manipulation input region; and the operation input information generation section outputs the predetermined-function activation request information when the hand cover state is detected.
The above control device may be configured such that, when the manipulation input element receives a predetermined manipulation input after the predetermined function of the in-vehicle electronic apparatus is activated, the operation input information generation section outputs operation change request information to request a change in the operation state of the predetermined function of the in-vehicle electronic apparatus. According to this configuration, since it is possible to change the operation state of the predetermined function based on an input via the manipulation input element, it is possible to increase a variation of control related to the operation of the predetermined function.
For example, the operation change request information may be operation recovery request information that requests deactivation of the predetermined function of the in-vehicle electronic apparatus to recover the in-vehicle electronic apparatus into a pre-activation stage of the predetermined function. In this configuration, it is possible to easily and smoothly suspend the operation of the predetermined function in response to the predetermined manipulation input on the manipulation input element.
The above control device may further include an area ratio variation detection section that detects a time variation in the value of the hand image area ratio, the time variation being caused by a predetermined input hand movement in the manipulation input region. Further, when the detected time variation matches a predetermined time variation, the operation input information generation section may generate and output the operation input information having the content that corresponds to the predetermined time variation. In this configuration, it is possible to relate a more distinctive manipulation input to the operation input information, and the control device enables a more intuitive input operation. The above control device may be configured such that: a time variation in the location of the center of the hand image region may be further detected in addition to the time variation in the value of the hand image area ratio; and the operation input information generation section may generate and output the operation input information having the corresponding content when both of the above time variations respectively match predetermined time variations. In this configuration, it is possible to more precisely detect and specify the hand movement that is defined as a specific input manipulation. Further, it is possible to more reliably distinguish the specific input manipulation from the normal input manipulation. With such a setting, it is possible to further minimize an occurrence of an error operation of activating the predetermined function at an undesirable timing.
The above control device may be configured such that: the manipulation input region is divided into multiple sub-regions; the hand image area ratio calculation section calculates the hand image area ratio in each of the multiple sub-regions; and the hand image area ratio variation detection section detects and specifies the time variation in the value of the hand image area ratio in each of the multiple sub-regions. According to this configuration, it is possible to detect and specify the input hand movement in more detail.
The above control device may be configured such that: the hand image area ratio variation detection section detects a first state sub-region, which is a sub-region whose value of the hand image area ratio is greater than or equal to the predetermined threshold; the hand image area ratio variation detection section detects, as a transition behavior, (i) the number of first state sub-regions and (ii) a change in the appearance location of the first state sub-region in the multiple sub-regions over time; and the operation input information generation section generates and outputs the operation input information when the detected transition behavior matches a predetermined transition behavior. According to this configuration, by using (i) the number of first state sub-regions and (ii) the change in the appearance location of the first state sub-region over time, it is possible to detect and specify the hand moving above the manipulation input region without detecting the location of the center of the hand image region. It is thus possible to easily detect the movement of the hand related to the manipulation input in a more detailed manner.
For example, the above control device may be configured such that: the multiple sub-regions are arranged adjacent to each other in a row extending in a predetermined direction; the hand image area ratio variation detection section detects a second state sub-region, which is the sub-region whose value of the hand image area ratio is less than the predetermined threshold; the hand image area ratio variation detection section detects a state distribution change, which includes a change in distribution of the first state sub-region and the second state sub-region on the manipulation input region over time; and the operation input information generation section generates and outputs the operation input information when the detected state distribution change matches a predetermined state distribution change. According to this configuration, it is possible to perform coding of states of the sub-regions based on whether the hand image area ratio of each sub-region exceeds the predetermined threshold, and thereby, it is possible to more simply describe the states of the sub-regions by using macroscopic bitmap information in the unit of the sub-region.
The above control device may be configured such that the state distribution change further includes a change in the appearance location distribution of the first state sub-region and the second state sub-region on the manipulation input region over time. According to this configuration, it is possible to easily detect the input hand movement of the user by detecting the change in the appearance location distribution of the first state sub-region and the second state sub-region over time. Further, the above control device may be configured such that the hand image area ratio variation detection section determines the state distribution change by detecting one of: a change in the number of first state sub-regions over time; and a change in the number of second state sub-regions over time. In this configuration, it is possible to easily detect the input hand movement of the user in a more detailed manner. For example, the above control device may be configured such that: the manipulation input region has a first end and a second end opposite to each other in the predetermined direction; the multiple sub-regions are aligned in the predetermined direction so as to be arranged between the first end and the second end; the predetermined input hand movement is a movement of the hand across the multiple sub-regions in the predetermined direction; and the hand image area ratio variation detection section determines the state distribution change caused by the predetermined input hand movement by detecting the movement behavior of the appearance location of the first state sub-region. According to this configuration, it is possible to more easily detect the hand moving across the manipulation input region in the predetermined direction, based on the movement behavior of the appearance location of the first state sub-region.
The above control device may be configured such that: the manipulation input element is a location input device; the location input device includes a transparent input reception plate; one surface of the transparent input reception plate is included in the manipulation input region and is adapted to receive a touch manipulation made by a finger of the user; the location input device sets an input coordinate system on the manipulation input region; the location input device detects a location of the touch manipulation on the input coordinate system and outputs coordinate information on the location of the touch manipulation on the input coordinate system; and the imaging device is located on an opposite side of the transparent input reception plate from the manipulation input region, so that the imaging device captures, through the transparent input reception plate, the image of the hand. Further, the above control device may further include: a display device that displays an input window, which provides a reference for the user to perform an input operation on the location input device; and a pointer image display section that superimposes a pointer image, which is generated based on image information on the hand image region, on the input window. On the input window, the pointer image is located at a place corresponding to place of the hand image region in the captured image.
According to the above configuration, the user can perceive the position of a finger of the user on the manipulation input surface by watching the pointer image on the input window. In particular, when the display screen of the display device is placed so as to be out of a line of sight of the user who is looking straight at the finger on the manipulation input surface, the pointer image on the input window can be the only information source that the user can use to perceive the operation position of the hand, because the user cannot look straight at both the manipulation input surface and the display screen. For example, when the control device is used to operate the in-vehicle electronic apparatus such as a car navigation apparatus and the like, the manipulation input surface is placed next to (or obliquely forward of) a vehicle seat for the user to sit in, and the display screen of the display device is placed above the manipulation input surface so that the display screen is located in front of or obliquely in front of the user sitting in the seat.
The above control device may further include an illumination light source that is located on the opposite side of the transparent input reception plate from the manipulation input region. The illumination light source irradiates the manipulation input region with illumination light, and the imaging device captures the image including the hand image region based on the illumination light reflected from the hand. Based on the hand image area ratio of the hand image region to the manipulation input region, the control device uses the information on the image captured by the imaging device as the input information. Because of the illumination light source, when the hand is relatively close to the transparent input reception plate, the reflected light reaching the imaging device is increased. However, a hand spaced a predetermined distance or more apart from the transparent input reception plate cannot be recognized as the hand image region. Thus, when the hand moves across over the transparent input reception plate to manipulate a different control device (e.g., a gear shift lever) proximal to the subject control device, the hand is not recognized as a hand image region having a valid hand image area ratio and does not cause an error input, provided that a distance between the hand and the transparent input reception plate is sufficiently large.
The above control device may be configured such that: each of the display device and the display control section is a component of the in-vehicle electronic apparatus (e.g., a car navigation system); the operation input information generation section outputs window content change command information as the operation input information to the display control section, based on the calculated value of the hand image area ratio; and the display control section causes the display device to change content of the input window when the display control section receives the window content change command information. According to this configuration, it is possible to control a change in content of the input window based on the hand image area ratio, which is calculated from the captured image, and it is possible to considerably increase freedom of control forms for the change in content of the input window.
For example, the above control device may be configured such that, when the hand image area ratio increases from a value lower than the predetermined threshold to a value larger than the predetermined threshold, the operation input information generation section outputs window switch command information as the window content change command information to request the display device to switch the input window from (i) a first window that is presently displayed into (ii) a second window different from the first window. According to this configuration, it is possible to perform an operation of switching the window into a certain window by using a characteristic manipulation form based on the hand image area ratio. An intuitive and easy-to-follow window switching operation becomes possible. An operation of switching the behavior of another cooperating electronic apparatus (e.g., an audio apparatus, an air conditioner, and the like) is also possible. Further, such a characteristic manipulation form is highly distinguishable from the normal input manipulation such as a mere touch manipulation and the like. It is possible to minimize an occurrence of an error such as the switching of the window or the activation of the predetermined function at an undesirable timing. The above control device may be configured such that, when the location input device receives a predetermined touch manipulation after the input window is switched into the second window, the operation input information generation section outputs window recovery request information to request the display device to recover the input window into the first window.
While the invention has been described above with reference to various embodiments thereof, it is to be understood that the invention is not limited to the above described embodiments and constructions. The invention is intended to cover various modifications and equivalent arrangements. In addition, while the various combinations and configurations described above are contemplated as embodying the invention, other combinations and configurations, including more, less or only a single element, are also contemplated as being within the scope of embodiments.
Further, each or any combination of processes, steps, or means explained in the above can be achieved as a software section or unit (e.g., subroutine) and/or a hardware section or unit (e.g., circuit or integrated circuit), including or not including a function of a related device; furthermore, the hardware section or unit can be constructed inside of a microcomputer.
Furthermore, the software section or unit or any combinations of multiple software sections or units can be included in a software program, which can be contained in a computer-readable storage media or can be downloaded and installed in a computer via a communications network.