This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2014-258102, filed on Dec. 19, 2014, the entire contents of which are incorporated herein by reference.
The embodiments discussed herein are related to an input supporting method, an input supporting program, and an input supporting device.
In recent years, wearable devices have been used for work support. Because a wearable device is worn when used, it is not possible to make input by, for example, touching a screen as on a smartphone, and thus it is difficult to make operational input. For this reason, there is a technology for making input by gesture. For example, motions of a finger are detected by a wearable device that is worn on the finger to input handwritten characters.
Patent Document 1: Japanese Laid-open Patent Publication No. 2006-53909
Patent Document 2: Japanese Laid-open Patent Publication No. 2002-318662
Patent Document 3: Japanese Laid-open Patent Publication No. 2001-236174
With the conventional technology, however, it may be difficult to make input by a finger. A motion detected by the wearable device worn on the finger contains translational components and rotation components. The rotation components are detected depending on a variation in the posture of the finger, such as bending and stretching of the finger. The translational components are detected depending on translational movement of the hand, such as parallel movement in the leftward/rightward direction, and also contain many components resulting from movement of the whole body. For this reason, it is difficult to extract only motions of the finger from the translational components. Accordingly, a detected motion may differ from that intended by the user, and it may thus be difficult to make input by using the finger.
According to an aspect of an embodiment, an input supporting method includes detecting, using a processor, a motion of a finger on which a wearable device is worn; detecting, using a processor, an axis representing a posture of the finger on the basis of the detected motion of the finger; and displaying, using a processor, a virtual laser pointer that moves in association with the detected axis and a trace of the axis on a head-mounted display.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed.
Preferred embodiments of the present invention will be explained with reference to accompanying drawings. The embodiments do not limit the invention. Each embodiment may be combined as long as the process contents keep consistency.
First, an exemplary input system that makes input by using an input supporting device according to a first embodiment will be explained.
The input system 10 is a system that supports a user in making input. For example, the input system 10 is used to support the work of users in, for example, a factory, and is used when, for example, a user inputs a memo, or the like, by gesture. A user may work while moving between various locations. For this reason, enabling input by gesture with the wearable device 11, rather than with a fixed terminal such as a personal computer, allows the user to make input while moving between various locations.
The wearable device 11 is a device that is worn and used by a user and that detects gestures of the user. According to the first embodiment, the wearable device 11 is a device that is worn on a finger. The wearable device 11 detects a variation in the posture of the finger as a gesture of the user and transmits information on the variation in the posture of the finger to the input supporting device 13.
The head-mounted display 12 incorporates a camera between two lens parts and the camera enables capturing of an image in the direction of the line of sight of the user wearing the head-mounted display 12.
The input supporting device 13 recognizes an input by a user's gesture on the basis of information on a variation in the posture of the finger that is transmitted from the wearable device 11 and causes the head-mounted display 12 to display information corresponding to the contents of the recognized input.
Configuration of Each Device
A device configuration of each of the wearable device 11, the head-mounted display 12, and the input supporting device 13 will be explained.
First, the wearable device 11 will be explained. As illustrated in
The switch 20 is a device that accepts an input from the user. The switch 20 is provided on a side surface of the ring of the wearable device 11 as illustrated in FIG. 2C. The switch 20 is turned on when pressed and turned off when released. The switch 20 accepts operational input from the user. For example, when the wearable device 11 is worn on the index finger of the user, the switch 20 accepts an operational input by the thumb of the user. The switch 20 outputs operational information representing the accepted operational contents to the control unit 23. The user operates the switch 20 to make various types of input. For example, the user turns on the switch 20 when starting input by gesture.
The posture sensor 21 is a device that detects a gesture of the user. For example, the posture sensor 21 is a three-axis gyro sensor. As depicted in
The wireless communication I/F unit 22 is an interface that performs wireless communication control between the wearable device 11 and other devices. For the wireless communication I/F unit 22, it is possible to use a network interface card, such as a wireless chip.
The wireless communication I/F unit 22 is a device that performs communications wirelessly. The wireless communication I/F unit 22 transmits/receives various types of information to/from other devices wirelessly. For example, under the control of the control unit 23, the wireless communication I/F unit 22 transmits operational information and posture variation information to the input supporting device 13.
The control unit 23 is a device that controls the wearable device 11. For the control unit 23, it is possible to use an integrated circuit, such as a microcomputer, an application specific integrated circuit (ASIC), or a field programmable gate array (FPGA). The control unit 23 controls the wireless communication I/F unit 22 to transmit operational information from the switch 20 to the input supporting device 13. When the switch 20 is turned on, the control unit 23 controls the posture sensor 21 to cause the posture sensor 21 to detect a variation in the posture. The control unit 23 controls the wireless communication I/F unit 22 to transmit posture variation information that is detected by the posture sensor 21 to the input supporting device 13.
The power unit 24 includes a power supply, such as a battery, and supplies power to each electronic part of the wearable device 11.
The head-mounted display 12 will be explained here. As illustrated in
The display unit 30 is a device that displays various types of information. As illustrated in
The camera 31 is a device that captures an image. As illustrated in
The wireless communication I/F unit 32 is a device that performs communications wirelessly. The wireless communication I/F unit 32 transmits/receives various types of information from/to other devices wirelessly. For example, the wireless communication I/F unit 32 receives image information of an image to be displayed on the display unit 30 and an operation command of an instruction for imaging from the input supporting device 13. The wireless communication I/F unit 32 transmits image information of an image that is captured by the camera 31 to the input supporting device 13.
The control unit 33 is a device that controls the head-mounted display 12. For the control unit 33, an electronic circuit, such as a central processing unit (CPU) or a micro processing unit (MPU), or an integrated circuit, such as a microcomputer, an ASIC, or an FPGA, may be used. The control unit 33 performs control to cause the display unit 30 to display image information received from the input supporting device 13. Upon receiving an operation command of an instruction for imaging from the input supporting device 13, the control unit 33 controls the camera 31 to capture an image. The control unit then controls the wireless communication I/F unit 32 to transmit the image information of the captured image to the input supporting device 13.
The power unit 34 includes a power supply, such as a battery, and supplies power to each electronic part of the head-mounted display 12.
The input supporting device 13 will be explained here. As illustrated in
The wireless communication I/F unit 40 is a device that performs communications wirelessly. The wireless communication I/F unit 40 transmits/receives various types of information from/to other devices wirelessly. For example, the wireless communication I/F unit 40 receives operation information and posture variation information from the wearable device 11. The wireless communication I/F unit 40 transmits image information of an image to be displayed on the head-mounted display 12 and various operation commands to the head-mounted display 12. The wireless communication I/F unit 40 further receives image information of an image that is captured by the camera 31 of the head-mounted display 12.
The storage unit 41 is a storage device, such as a hard disk, a solid state drive (SSD), or an optical disk. The storage unit 41 may be a data rewritable semiconductor memory, such as a random access memory (RAM), a flash memory, or a non-volatile static random access memory (NVSRAM).
The storage unit 41 stores an operating system (OS) and various programs that are executed by the control unit 42. For example, the storage unit 41 stores various programs that are used for supporting input. Furthermore, the storage unit 41 stores various types of data used for the programs to be executed by the control unit 42. For example, the storage unit 41 stores recognition dictionary data, memo information 51, and image information 52.
Recognition dictionary data 50 is dictionary data for recognizing characters that are input by handwriting. For example, the recognition dictionary data 50 stores standard trace information of various characters.
The memo information 51 is data in which information on a memo that is input by handwriting is stored. For example, in the memo information 51, an image of a character that is input by handwriting and character information that is the result of recognition of the character input by handwriting are stored in association with each other.
The image information 52 is image information of the image captured by the camera 31 of the head-mounted display 12.
The control unit 42 is a device that controls the input supporting device 13. For the control unit 42, an electronic circuit, such as a CPU or an MPU, or an integrated circuit, such as a microcomputer, an ASIC, or an FPGA, may be used. The control unit 42 includes an internal memory for storing programs that define various processing procedures and control data and executes various processes by using the programs and control data. The control unit 42 functions as various processing units by running the various programs. For example, the control unit 42 includes an input detection unit 60, a display control unit 61, a calibration unit 62, an axis detection unit 63, a trace recording unit 64, a determination unit 65, a recognition unit 66, a storage unit 67, and an operation command output unit 68.
The input detection unit 60 detects various inputs on the basis of operation information and posture variation information that are received from the wearable device 11. For example, the input detection unit 60 detects an operation on the switch 20 on the basis of the operation information. For example, the input detection unit 60 detects, from the number of times the switch 20 is pressed within a given time, a single click, a double click, a triple click, or a long-press operation on the switch 20. The input detection unit 60 detects a variation in the posture of the finger, as rotation about the three axes, from the posture variation information that is received from the wearable device 11.
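A minimal sketch of such a classification is given below. The press-count rule follows the description above, while the function name and the time thresholds are illustrative assumptions and not values specified by the embodiment.

# Hedged sketch: classify switch operations from presses observed within a
# given time window. Thresholds are illustrative assumptions.
def classify_switch_operation(press_durations_ms, long_press_ms=800):
    # press_durations_ms: durations of the presses observed within the window.
    if len(press_durations_ms) == 1 and press_durations_ms[0] >= long_press_ms:
        return "long_press"
    return {1: "single_click", 2: "double_click", 3: "triple_click"}.get(
        len(press_durations_ms), "unknown")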
The display control unit 61 performs various types of display control. For example, the display control unit 61 generates image information on various screens in accordance with the result of detection by the input detection unit 60 and controls the wireless communication I/F unit 40 to transmit the generated image information to the head-mounted display 12. Accordingly, the image of the image information is displayed on the display unit 30 of the head-mounted display 12. For example, when the input detection unit 60 detects a double click, the display control unit 61 causes the display unit 30 of the head-mounted display 12 to display a menu screen.
With the input supporting device 13 according to the first embodiment, it is possible to select items on the menu screen 70 by handwriting input or by using a cursor. For example, when the recognition unit 66, which will be described below, recognizes the trace input by handwriting as a number from “1” to “4”, the display control unit 61 determines that the mode of the item corresponding to the recognized number is selected. The display control unit 61 displays a cursor on the screen and moves the cursor in accordance with the variation in the posture of the finger that is detected by the input detection unit 60. For example, when rotation about the Y-axis is detected, the display control unit 61 moves the cursor leftward/rightward on the screen at a speed according to the rotation. When rotation about the Z-axis is detected, the display control unit 61 moves the cursor upward/downward on the screen at a speed according to the rotation. When the cursor is positioned on any one of the items on the menu screen 70 and the input detection unit 60 detects a single click, the display control unit 61 determines that the mode of the item at which the cursor is positioned is selected. When any one of the items on the menu screen 70 is selected, the display control unit 61 deletes the menu screen 70.
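A minimal sketch of this cursor control is given below, under the assumption that the posture variation is available as angular rates about the Y- and Z-axes; the gain value, update interval, and function name are illustrative and not specified by the embodiment.

# Hedged sketch: move the cursor at a speed according to the detected rotation.
def update_cursor(cursor_x, cursor_y, rate_about_y_dps, rate_about_z_dps,
                  dt_s, gain=4.0):
    # Rotation about the Y-axis moves the cursor leftward/rightward,
    # rotation about the Z-axis moves it upward/downward.
    cursor_x += gain * rate_about_y_dps * dt_s
    cursor_y += gain * rate_about_z_dps * dt_s
    return cursor_x, cursor_y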
The calibration unit 62 performs calibration on the information on the detected posture of the finger. For example, when the calibration mode is selected on the menu screen 70, the calibration unit 62 performs calibration on the information on the detected posture of the finger.
The wearable device 11 may be worn in a shifted state where the wearable device 11 is turned in the circumferential direction with respect to the finger. When the wearable device 11 is worn on the finger in the shifted state, a shift corresponding to the turn may occur in the posture variation detected by the wearable device 11, and thus the detected motion may differ from that intended by the user. In such a case, the user selects the calibration mode on the menu screen 70. After selecting the calibration mode on the menu screen 70, the user opens and closes the hand of the finger on which the wearable device 11 is worn. The wearable device 11 transmits, to the input supporting device 13, posture variation information on the variation in the posture of the finger occurring when the hand is opened and closed.
On the basis of the posture variation information, the calibration unit 62 detects a motion of the finger that is caused when the finger on which the wearable device 11 is worn is bent and stretched and that is a motion caused by opening and closing the hand. The calibration unit 62 performs calibration on the reference direction of finger motion on the basis of the detected motion of the finger.
On the basis of the posture variation information obtained when the finger is bent and stretched, the calibration unit 62 calculates correction information with which the reference direction of the motion of the finger is corrected. For example, the calibration unit 62 calculates, as correction information, angles of rotation to which the rotation axes X, Y, and Z illustrated in
When the calibration by the calibration unit 62 ends, the input detection unit 60 corrects the posture variation information by using the correction information that is calculated by the calibration unit 62 and detects a variation in the posture. By correcting the posture variation information by using the correction information, the posture variation information is corrected to one based on each of the rotation axes of X, Y, and Z illustrated in
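The embodiment does not specify how the correction information is computed; the following is a hedged sketch of one possible approach, which estimates the dominant bending axis from the angular-rate samples recorded while the hand is opened and closed and derives a rotation that maps it onto the nominal X-axis. The use of numpy, the function names, and the estimation method are assumptions for illustration only.

# Hedged sketch of one possible calibration computation (not the patented method).
import numpy as np

def estimate_correction(gyro_samples, nominal_bend_axis=np.array([1.0, 0.0, 0.0])):
    # gyro_samples: N x 3 angular rates recorded while the hand is opened and closed.
    w = np.asarray(gyro_samples, dtype=float)
    # Dominant rotation axis of the bending/stretching motion (first principal
    # direction of the angular-rate samples); sign ambiguity is ignored here.
    axis = np.linalg.svd(w, full_matrices=False)[2][0]
    axis = axis / np.linalg.norm(axis)
    # Rotation matrix mapping the measured bending axis onto the nominal one
    # (Rodrigues' formula; the antiparallel case is omitted for brevity).
    v = np.cross(axis, nominal_bend_axis)
    c = float(np.dot(axis, nominal_bend_axis))
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def correct(gyro_sample, correction):
    # Posture variation re-expressed about the corrected X, Y, and Z axes.
    return correction @ np.asarray(gyro_sample, dtype=float)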
The axis detection unit 63 detects an axis representing the posture on the basis of the variation in the posture of the finger that is detected by the input detection unit 60. For example, the axis detection unit 63 detects an axis whose direction moves in accordance with the variation in the posture of the finger. For example, the axis detection unit 63 calculates direction vectors of axes that pass through the origin of a three-dimensional space and that move in accordance with the direction of rotation and the rotation speed about each of the rotation axes X, Y, and Z. When the motion is detected from the posture alone, it becomes harder to move the wrist widely as the wrist turns farther away from its neutral direction. In addition, when the palm is kept horizontal, the wrist tends to have less flexibility in the leftward/rightward direction than in the upward/downward direction. The axis detection unit 63 may therefore change the pointing sensitivity in the upward/downward direction and the leftward/rightward direction from the center point of the axis direction that is corrected by the calibration unit 62. For example, the axis detection unit 63 calculates the direction vector of the axis by applying a larger correction to rotation of the hand in the leftward/rightward direction than to rotation of the hand in the upward/downward direction. In other words, for the same amount of rotation, the axis detection unit 63 makes the amount of movement due to leftward/rightward rotation larger than the amount of movement due to upward/downward rotation. Furthermore, the axis detection unit 63 may increase the sensitivity as the axis moves away from the center point of the corrected axis direction. For example, the axis detection unit 63 calculates the direction vector of the axis by applying a larger correction to rotation the farther the axis is from the center point. In other words, for the same amount of rotation, the axis detection unit 63 makes the amount of movement due to rotation in a peripheral area apart from the center point larger than the amount of movement due to rotation near the center point. In this manner, the sensitivity of rotation is set in accordance with how easily the wrist can be moved, which enables the input system 10 to perform accurate pointing easily.
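The following sketch illustrates one possible way of computing the direction vector of the axis with the sensitivity adjustments described above; the yaw/pitch parameterization, gain values, and function name are assumptions for illustration, not values given by the embodiment.

# Hedged sketch: update the axis direction with higher gain for
# leftward/rightward rotation and for positions far from the calibrated center.
import math

def update_axis(yaw_deg, pitch_deg, rate_about_y_dps, rate_about_z_dps, dt_s,
                gain_lr=1.5, gain_ud=1.0, peripheral_gain=0.02):
    # Sensitivity grows with distance from the center point of the axis direction.
    boost = 1.0 + peripheral_gain * math.hypot(yaw_deg, pitch_deg)
    # Larger correction for leftward/rightward rotation than for upward/downward rotation.
    yaw_deg += gain_lr * boost * rate_about_y_dps * dt_s
    pitch_deg += gain_ud * boost * rate_about_z_dps * dt_s
    # Direction vector of an axis through the origin of the three-dimensional space.
    cy, sy = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    cp, sp = math.cos(math.radians(pitch_deg)), math.sin(math.radians(pitch_deg))
    return yaw_deg, pitch_deg, (cp * cy, sp, cp * sy)  # unit vector (x, y, z)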
The display control unit 61 causes the display unit 30 of the head-mounted display 12 to display a virtual laser pointer that moves in association with the axis detected by the axis detection unit 63. For example, when the calibration mode is selected on the menu screen 70, the display control unit 61 generates image information of a screen where a virtual laser pointer that moves in association with the axis detected by the axis detection unit 63 is arranged. The display control unit 61 controls the wireless communication I/F unit 40 to transmit the generated image information to the head-mounted display 12. Accordingly, the image of the virtual laser pointer is displayed on the display unit 30 of the head-mounted display 12.
The trace recording unit 64 detects a gesture relating to input. For example, the trace recording unit 64 detects a character that is handwritten by gesture in a free space. For example, when the input detection unit 60 detects a long-press operation on the switch 20, the trace recording unit 64 detects the handwritten character by recording the trace of the axis during the long-press operation.
The display control unit 61 also displays the trace of the axis recorded by the trace recording unit 64. According to the example illustrated in
Displaying the laser pointer P as described above enables the user wearing the head-mounted display 12 to easily make input to the free space. When the user makes input to the free space by using a finger, the detected motion of the finger contains translational components and rotational components. The translational components come from parallel movement of the hand and movement of the whole body of the user, and thus it is difficult to detect only motions of the finger. For this reason, the detected motion may differ from that intended by the user and it may be difficult to make input by using the finger. The input supporting device 13 therefore detects the rotation components, that is, a variation in the posture of the finger, detects an axis representing the posture of the finger from the detected rotation components, displays a virtual laser pointer that moves in association with the axis, and feeds the result of detection back to the user.
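How the detected axis is mapped to the position of the virtual laser pointer is not detailed in the text; the sketch below assumes the pointer position is obtained by projecting the axis direction onto a virtual plane in front of the user. The plane distance, coordinate convention, and function name are illustrative assumptions.

# Hedged sketch: project the axis direction onto the virtual surface on which
# the laser pointer and its trace are drawn.
def project_to_virtual_surface(direction, plane_distance=1.0):
    # direction: unit vector (x, y, z) of the axis; the virtual surface is a
    # plane perpendicular to the forward (x) direction at plane_distance.
    dx, dy, dz = direction
    if dx <= 0.0:
        return None                    # the axis does not point at the surface
    t = plane_distance / dx
    return (dz * t, dy * t)            # horizontal, vertical position of the pointer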
The determination unit 65 determines a gesture that is a subject not to be input. For example, a gesture that satisfies a given condition from among detected gestures is determined as a subject not to be input.
The trace of a character that is handwritten by gesture in the free space contains line parts, referred to as strokes, and moving parts that move between line parts. When a handwritten character contains a moving part, it is difficult to recognize the character, and it may be recognized as a character different from that intended by the user. A handwritten character having many strokes tends to be erroneously recognized. Particularly, because a character written in a single continuous stroke contains many moving parts, it is difficult to recognize the character.
On the other hand, many characters, such as kanji, are written with movement from left to right or from top to bottom. In many cases, movement to the upper left is movement between line parts.
It is assumed that the given condition is a gesture of movement to the upper left. The determination unit 65 determines a gesture of movement to the upper left as a subject not to be input and determines gestures other than the gesture of movement to the upper left as subjects to be input.
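A minimal sketch of this determination is given below, assuming the trace is available as a sequence of points with x increasing rightward and y increasing upward; the function names are illustrative.

# Hedged sketch: segments moving to the upper left are treated as subjects not
# to be input (moving parts); all other segments as subjects to be input.
def is_subject_not_to_be_input(p_prev, p_curr):
    dx = p_curr[0] - p_prev[0]
    dy = p_curr[1] - p_prev[1]
    return dx < 0 and dy > 0           # movement to the upper left

def split_trace(points):
    strokes, moves = [], []
    for prev, curr in zip(points, points[1:]):
        (moves if is_subject_not_to_be_input(prev, curr) else strokes).append((prev, curr))
    return strokes, moves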
The display control unit 61 displays subjects not to be input and subjects to be input separately. For example, the display control unit 61 displays a subject not to be input with visibility lower than that of a subject to be input. For example, the display control unit 61 displays the trace of a gesture that is determined as a subject not to be input in a color lighter than that of the trace of a gesture determined as a subject to be input. According to the example illustrated in
The display control unit 61 may display subjects not to be input with visibility lower than that of subjects to be input by changing the color. For example, the display control unit 61 may display subjects not to be input in red and display subjects to be input in gray. Alternatively, the display control unit 61 may delete the trace of a gesture determined as a subject not to be input and display the trace of a gesture determined as a subject to be input. In other words, the display control unit 61 may perform display control such that the traces of the points X4 and X5 according to the example illustrated in
The recognition unit 66 recognizes a character from the trace that is recorded by the trace recording unit 64. For example, the recognition unit 66 performs character recognition on traces determined as subjects to be input from among traces that are recorded by the trace recording unit 64. For example, the recognition unit 66 performs character recognition on the traces that are represented by dark lines according to
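The embodiment does not specify the recognition algorithm; the following is a deliberately simplified sketch that resamples a trace and compares it point-wise with the standard trace information in the recognition dictionary data 50. A practical recognizer would be considerably more elaborate, and the function names and parameters are assumptions.

# Hedged sketch of template matching against standard traces.
import math

def resample(points, n=32):
    # Evenly resample a trace to n points along its arc length.
    seg = [math.dist(a, b) for a, b in zip(points, points[1:])]
    cum = [0.0]
    for d in seg:
        cum.append(cum[-1] + d)
    total = cum[-1] or 1.0
    out = []
    for k in range(n):
        s, i = total * k / (n - 1), 0
        while i < len(seg) - 1 and cum[i + 1] < s:
            i += 1
        t = (s - cum[i]) / (seg[i] or 1.0)
        a, b = points[i], points[i + 1]
        out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out

def recognize(trace, dictionary):
    # dictionary: {character: standard trace (list of (x, y) points)}
    query = resample(trace)
    def score(std):
        return sum(math.dist(p, q) for p, q in zip(query, resample(std)))
    return min(dictionary, key=lambda ch: score(dictionary[ch]))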
When a character recognized by the recognition unit 66 has a hook, the display control unit 61 displays the trace corresponding to the hook of the character, from among the traces of gestures determined as subjects not to be input, in the same manner as the trace of a gesture determined as a subject to be input. For example, the trace recording unit 64 changes the trace corresponding to the hook of the character, from among the recorded traces, so that it is treated in the same manner as the trace of a character part. The display control unit 61 displays an image of the character that is changed by the trace recording unit 64.
The storage unit 67 performs various types of storage. For example, the storage unit 67 stores the trace of a handwritten character and a recognized character in the memo information 51. For example, when the memo input mode is selected on the menu screen 70, the storage unit 67 stores, in the memo information 51, an image of the character recorded by the trace recording unit 64 and the character recognized by the recognition unit 66 in association with each other together with date information. It is possible to refer to the information stored in the memo information 51. For example, when the memo browsing mode is selected on the menu screen 70, the display control unit 61 displays the information that is stored in the memo information 51 of the storage unit 41.
The operation command output unit 68 outputs an operation command to another device in accordance with a recognized character or symbol. For example, when performing imaging by using the camera 31 of the head-mounted display 12, the user selects the imaging mode on the menu screen 70. At a timing at which imaging is desired, the user performs the long-press operation on the switch 20 of the wearable device 11 and inputs a given character by handwriting. The given character may be any character, number, or symbol; for example, it may be “1”. When the imaging mode is selected on the menu screen 70, the operation command output unit 68 enters an imaging preparation state. The trace recording unit 64 records the trace that is input by handwriting in the imaging preparation state. The recognition unit 66 recognizes a character from the trace. When the recognition unit 66 recognizes the given character, the operation command output unit 68 transmits an operation command for an instruction for imaging to the head-mounted display 12. Upon receiving the operation command for the instruction for imaging, the head-mounted display 12 performs imaging by using the camera 31 and transmits image information of the captured image to the input supporting device 13. The storage unit 67 stores the image information of the captured image as the image information 52 in the storage unit 41. In this manner, the input supporting device 13 is capable of outputting an operation command to another device to cause the device to perform an operation.
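A minimal sketch of this trigger logic is shown below; send_command() is a hypothetical stand-in for transmission through the wireless communication I/F unit 40, and the trigger character follows the "1" of the example above.

# Hedged sketch: issue an imaging command when the given character is recognized.
IMAGING_TRIGGER = "1"                  # the given character assumed in the example

def on_character_recognized(ch, send_command):
    # Returns True when an imaging command was issued to the head-mounted display.
    if ch != IMAGING_TRIGGER:
        return False
    send_command({"target": "head_mounted_display", "command": "capture_image"})
    return True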
The power unit 43 includes a power source, such as a battery, and supplies power to each electronic part of the input supporting device 13.
Process Flow
A flow of making input by the input supporting device 13 will be described. First, a flow of a menu process of accepting mode selection on the menu screen will be explained.
As illustrated in
The display control unit 61 determines whether the input detection unit 60 detects a long-press operation on the switch 20 (S14). When the long-press operation on the switch 20 is detected (YES at S14), the trace recording unit 64 records the trace of the axis (S15). The determination unit 65 determines whether a gesture is a subject not to be input (S16). For example, the determination unit 65 determines a gesture of movement to the upper left as a subject not to be input and determines gestures other than the gesture of movement to the upper left as subjects to be input. The display control unit 61 generates image information of a screen where the recorded trace of the axis is displayed on a virtual surface that is the display area for the virtual laser pointer and transmits the generated image information to the head-mounted display 12 to cause the display unit 30 to also display the trace of the axis (S17). The display control unit 61 displays subjects to be input and subjects not to be input separately. For example, the display control unit 61 displays the trace of a subject not to be input with visibility lower than that of the trace of a subject to be input. The display control unit 61 determines whether the long-press operation on the switch 20 detected by the input detection unit 60 ends (S18). When the long-press operation on the switch 20 does not end (NO at S18), the process moves to S11.
On the other hand, when the long-press operation on the switch 20 ends (YES at S18), the recognition unit 66 recognizes a character from the trace recorded by the trace recording unit 64 (S19). The display control unit 61 determines whether the recognition unit 66 recognizes any one of the numbers of “1” to “4” (S20). When none of the numbers of “1” to “4” is recognized (NO at S20), the trace recording unit 64 deletes the trace of the axis (S21). The display control unit 61 generates image information of the screen from which the trace of the axis displayed on the virtual surface that is the display area for the virtual laser pointer has been deleted and transmits the generated image information to the head-mounted display 12 to delete the trace of the axis (S22), and the process then moves to the above-described S11.
On the other hand, when any one of the numbers of “1” to “4” is recognized (YES at S20), the display control unit 61 determines that the mode of the item corresponding to the recognized number is selected and deletes the menu screen (S23), and then the process ends.
On the other hand, when the long-press operation on the switch 20 is not detected (NO at S14), the display control unit 61 determines whether the input detection unit 60 detects a single click on the switch 20 (S24). When no single click on the switch 20 is detected (NO at S24), the process moves to S11 described above.
On the other hand, when a single click on the switch 20 is detected (YES at S24), the display control unit 61 determines whether the position at which the single click is detected is on any one of the items on the menu screen 70 (S25). When the position at which the single click is detected is on none of the items on the menu screen 70 (NO at S25), the process moves to the above-described S11. On the other hand, when the position at which the single click is detected is on any of the items on the menu screen 70 (YES at S25), the display control unit 61 determines that the mode of the item on which the cursor is positioned is selected and deletes the menu screen 70 (S26), and the process ends.
A flow of a calibration process of performing calibration on posture information on a finger will be explained here.
As illustrated in
A flow of a memo input process of inputting a memo by handwriting will be explained.
As illustrated in
The display control unit 61 determines whether the input detection unit 60 detects a long-press operation on the switch 20 (S43). When no long-press operation on the switch 20 is detected (NO at S43), the process moves to S40.
On the other hand, when a long-press operation on the switch 20 is detected (YES at S43), the trace recording unit 64 records the trace of the axis (S44). The determination unit 65 determines whether a gesture is a subject not to be input (S45). For example, the determination unit 65 determines a gesture of movement to the upper left as a subject not to be input and determines gestures other than the gesture of movement to the upper left as subjects to be input. The display control unit 61 generates image information of a screen where the recorded trace of the axis is displayed on a virtual surface that is the display area for the virtual laser pointer and transmits the generated image information to the head-mounted display 12 to cause the display unit 30 to also display the trace of the axis (S46). The display control unit 61 displays subjects to be input and subjects not to be input separately. For example, the display control unit 61 displays the trace of a subject not to be input with visibility lower than that of the trace of a subject to be input. The display control unit 61 determines whether the long-press operation on the switch 20 detected by the input detection unit 60 ends (S47). When the long-press operation on the switch 20 does not end (NO at S47), the process moves to S40.
On the other hand, when the long-press operation on the switch 20 ends (YES at S47), the recognition unit 66 recognizes a character from the trace recorded by the trace recording unit 64 (S48). The display control unit 61 determines whether the character recognized by the recognition unit 66 has a hook (S49). When there is no hook (NO at S49), the process moves to S52. When there is a hook (YES at S49), the trace recording unit 64 changes the trace corresponding to the hook of the character, from among the recorded traces, similarly to the trace of character parts (S50). The display control unit 61 displays the trace changed by the trace recording unit 64 (S51).
The storage unit 67 stores the image of the trace of the character and the character recognized by the recognition unit 66 in the memo information 51 (S52). The trace recording unit 64 deletes the trace of the axis (S53). The display control unit 61 generates image information of the screen from which the trace of the axis displayed on the virtual surface that is the display area for the virtual laser pointer has been deleted and transmits the generated image information to the head-mounted display 12 to delete the trace of the axis displayed on the virtual surface of the display unit 30 (S54). The display control unit 61 may temporarily display the character recognized by the recognition unit 66.
The display control unit 61 determines whether the input detection unit 60 detects a given end operation of ending handwriting input (S55). The end operation is, for example, a triple click. When the given end operation is not detected (NO at S55), the process moves to S40 described above. On the other hand, when the end operation is detected (YES at S55), the process ends.
A memo browsing process of browsing a memo will be explained here.
As illustrated in
The display control unit 61 determines whether the input detection unit 60 detects a given end operation of ending handwriting input (S61). The end operation is, for example, a triple click. When the given end operation is not detected (NO at S61), the process returns to S61. On the other hand, when the end operation is detected (YES at S61), the display control unit 61 ends the display of the information in the memo information 51 and the process ends.
An operation command output process of outputting an operation command for imaging will be explained here.
As illustrated in
The display control unit 61 determines whether the input detection unit 60 detects the long-press operation on the switch 20 (S72). When no long-press operation on the switch 20 is detected (NO at S72), the process moves to S81.
On the other hand, when the long-press operation on the switch 20 is detected (YES at S72), the trace recording unit 64 records the trace of the axis (S73). The determination unit 65 determines whether the gesture is a subject not to be input (S74). For example, the determination unit 65 determines a gesture of movement to the upper left as a subject not to be input and determines gestures other than the gesture of movement to the upper left as subjects to be input. The display control unit 61 determines whether the long-press operation on the switch 20 that is detected by the input detection unit 60 ends (S75). When the long-press operation on the switch 20 does not end (NO at S75), the process moves to S70 described above.
On the other hand, when the long-press operation on the switch 20 ends (YES at S75), the recognition unit 66 recognizes a character from the trace recorded by the trace recording unit 64 (S76). The operation command output unit 68 determines whether the recognition unit 66 recognizes a given character (S77). When the given character is recognized (YES at S77), the operation command output unit 68 transmits an operation command for an instruction for imaging to the head-mounted display 12 (S78). The storage unit 67 stores image information of a captured image received from the head-mounted display 12 as the image information 52 in the storage unit 41 (S79).
On the other hand, when the given character is not recognized (NO at S77), the process moves to S80 described below.
The trace recording unit 64 deletes the trace of the axis (S80).
The display control unit 61 determines whether the input detection unit 60 detects a given end operation of ending handwriting input (S81). The end operation is, for example, a triple click. When the given end operation is not detected (NO at S81), the process moves to S70 described above. On the other hand, when the end operation is detected (YES at S81), the process ends.
Effect
As described above, the input supporting device 13 according to the first embodiment detects a motion of a finger on which the wearable device 11 is worn. The input supporting device 13 detects an axis representing the posture of the finger on the basis of the detected motion of the finger. The input supporting device 13 displays a virtual laser pointer that moves in association with the detected axis and the trace of the axis on the head-mounted display 12. Accordingly, the input supporting device 13 is capable of supporting input made by using the finger.
The input supporting device 13 according to the first embodiment detects a motion of the finger being bent and stretched. The input supporting device 13 performs calibration on the reference direction of the motion of the finger. Accordingly, even when the wearable device 11 is shifted and worn on the finger, the input supporting device 13 is capable of accurately detecting a motion of the finger.
Furthermore, the input supporting device 13 according to the first embodiment recognizes a character from the trace of the axis. The input supporting device 13 stores the recognized character and the trace in association with each other. Accordingly, even when a character is erroneously converted upon recognition of the trace, the input supporting device 13 makes it possible for the user to know what is stored as a memo.
Furthermore, the input supporting device 13 according to the first embodiment recognizes a character or symbol from the trace of the axis. The input supporting device 13 outputs an operation command to another device on the basis of the recognized character or symbol. Accordingly, the input supporting device 13 is capable of operating another device by using the handwritten character.
The first embodiment of the disclosed device has been explained above; however, the disclosed technology may be carried out in various different modes in addition to the above-described first embodiment. Another embodiment covered by the invention will be explained here.
For example, for the first embodiment, the case has been described where the input system 10 includes the wearable device 11, the head-mounted display 12, and the input supporting device 13. Alternatively, for example, the wearable device 11 or the head-mounted display 12 may have the functions of the input supporting device 13.
For the first embodiment described above, the case has been described where handwriting input is made by using the wearable device 11. Alternatively, for example, a character that is input by handwriting to a touch panel of an information processing device, such as a smartphone or a tablet terminal, may be used. Alternatively, a character that is input by handwriting to a personal computer by using an input device capable of specifying a position, such as a mouse, may be used. Also in this case, easy recognition of a handwritten character is enabled.
Furthermore, for the first embodiment, the case where display is performed on the head-mounted display 12 has been explained. Alternatively, for example, display may be performed on an external display or a touch panel of, for example, a smartphone or a tablet terminal.
Furthermore, for the first embodiment, the case where an operation command for imaging is output to the head-mounted display 12 has been explained. Alternatively, the other device may be any device. For example, the input supporting device 13 may store operation commands for various devices in the storage unit 41. The input supporting device 13 may determine a device by image recognition from an image captured by the camera 31 of the head-mounted display 12 and, in accordance with handwriting input, output an operation command to the determined device.
For the first embodiment, the case where correction information is calculated in the calibration mode has been explained. Alternatively, for example, in the calibration mode, a notification indicating that the wearable device 11 is not worn normally may be made to prompt the user to put the wearable device 11 into a normal worn state.
For the first embodiment, the case where the recognition unit 66 performs character recognition on a trace determined as a subject to be input from among traces recorded by the trace recording unit 64 has been explained. Alternatively, the recognition unit 66 may perform character recognition after performing various types of filter processing focusing on an event unique to handwriting input by using the wearable device 11.
For example, as for handwriting input, even when input is intended to be made in the upward/downward direction or the leftward/rightward direction, the angle of the trace may be shifted with respect to the intended direction. Particularly in handwriting input into a free space, because there is no surface on which input is made, such a shift tends to occur. For this reason, when a trace can be regarded as a line within a given angle with respect to the upward/downward direction or the leftward/rightward direction, character recognition may be performed after angle correction is performed on the trace. When a straight line connecting the start point and the end point of a trace is within a given angle with respect to the upward/downward direction or the leftward/rightward direction and the trace is within a given width from the straight line, the recognition unit 66 may perform character recognition after correcting the angle of the trace in accordance with the shift in the angle with respect to the upward/downward direction or the leftward/rightward direction. The given angle may be, for example, 30 degrees. The given width may be, for example, one tenth of the distance between the start point and the end point. The given angle and the given width may be set from the outside. Alternatively, the user may be caused to input a trace in the upward/downward direction or the leftward/rightward direction, and the recognition unit 66 may detect the shift in the angle of the straight line connecting the start point and the end point of the input trace and the width of the trace from the straight line to learn the given angle and the given width.
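A hedged sketch of this angle correction, using the example thresholds above (30 degrees and one tenth of the start-to-end distance), is given below; the coordinate handling and function name are illustrative assumptions.

# Hedged sketch: if the start-to-end line of a trace is near-horizontal or
# near-vertical and the trace stays within a given width of that line, rotate
# the trace so the line becomes exactly axis-aligned before recognition.
import math

def correct_trace_angle(points, max_angle_deg=30.0, width_ratio=0.1):
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy) or 1.0
    angle = math.degrees(math.atan2(dy, dx))
    target = round(angle / 90.0) * 90.0          # nearest axis-aligned direction
    if abs(angle - target) > max_angle_deg:
        return points                            # not regarded as a line; leave as-is
    nx, ny = -dy / length, dx / length           # unit normal of the start-end line
    if any(abs((px - x0) * nx + (py - y0) * ny) > width_ratio * length
           for px, py in points):
        return points                            # trace strays too far from the line
    theta = math.radians(target - angle)
    c, s = math.cos(theta), math.sin(theta)
    # Rotate the whole trace about the start point.
    return [(x0 + (px - x0) * c - (py - y0) * s,
             y0 + (px - x0) * s + (py - y0) * c) for px, py in points]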
Furthermore, for example, as for handwriting input by using the wearable device 11, the operation on the switch 20 may be slow or fast. The recognition unit 66 may correct the trace on the basis of the posture variation information before and after the long-press operation on the switch 20 and then perform character recognition. For example, when one or both of an abrupt change in the posture and an abrupt change in the acceleration are detected before and after the end of the long-press operation on the switch 20, the recognition unit 66 may perform character recognition excluding the trace part corresponding to the abrupt change. A threshold for detecting the abrupt change may be fixed, or the user may be caused to input a given character by handwriting and the threshold may be learned from the posture variation information before and after the end of the character.
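A minimal sketch of excluding the trace part corresponding to such an abrupt change near the end of the long-press operation is shown below; the fixed threshold, the window size, and the use of angular-rate magnitudes are illustrative assumptions (as noted above, the threshold may instead be learned per user).

# Hedged sketch: trim trace points recorded during an abrupt change just
# before the long-press operation ends.
def trim_abrupt_end(trace, rotation_rate_magnitudes, threshold_dps=300.0, window=5):
    # trace: recorded points; rotation_rate_magnitudes: angular-rate magnitude per point.
    cut = len(trace)
    for i in range(max(0, len(trace) - window), len(trace)):
        if rotation_rate_magnitudes[i] > threshold_dps:
            cut = i
            break
    return trace[:cut]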
Furthermore, for example, the recognition unit 66 may perform character recognition after performing correction of adding the trace prior to the long-press operation on the switch 20. For example, when the recognition rate resulting from the character recognition is low, or when the same posture variation as that appearing when the long-press operation is started has continued before the long-press operation, the recognition unit 66 may perform correction of adding the trace from a given time before the long-press operation on the switch 20, or the trace from the stop state just before the long-press operation, and then perform character recognition.
Each component of each device illustrated in the drawings is functionally conceptual and does not necessarily have to be physically configured as illustrated in the drawings. In other words, a specific mode of dispersion and integration of each device is not limited to that illustrated in the drawings. All or part of the devices may be configured by dispersing or integrating them functionally or physically in arbitrary units in accordance with various loads or the usage. For example, the processing units of the input detection unit 60, the display control unit 61, the calibration unit 62, the axis detection unit 63, the trace recording unit 64, the determination unit 65, the recognition unit 66, the storage unit 67, and the operation command output unit 68 may be properly integrated. The processing performed by each processing unit may be separated into processing performed by multiple processing units. Furthermore, all or an arbitrary part of the processing functions implemented by the respective processing units may be implemented by a CPU and a program that is analyzed and executed by the CPU, or may be implemented as hard-wired logic.
Input Supporting Program
Various processes explained according to the above-described embodiments may be implemented by executing a prepared program by using a computer system, such as a personal computer or a work station. An exemplary computer system that executes a program having the same functions as those of the above-described embodiments will be explained below.
As illustrated in
The HDD 320 previously stores an input supporting program 320a that implements the same functions as those of the input detection unit 60, the display control unit 61, the calibration unit 62, the axis detection unit 63, the trace recording unit 64, the determination unit 65, the recognition unit 66, the storage unit 67, and the operation command output unit 68. The input supporting program 320a may be properly separated.
The HDD 320 stores various types of information. For example, the HDD 320 stores an OS and various types of data used for the above-described processes.
The CPU 310 reads the input supporting program 320a from the HDD 320 and executes the input supporting program 320a so that the same operations as those of the respective processing units according to the embodiments are implemented. In other words, the input supporting program 320a implements the same operations as those of the input detection unit 60, the display control unit 61, the calibration unit 62, the axis detection unit 63, the trace recording unit 64, the determination unit 65, the recognition unit 66, the storage unit 67, and the operation command output unit 68.
The input supporting program 320a does not necessarily have to be stored in the HDD 320 from the beginning.
For example, the program may be stored in a “portable physical medium”, such as a flexible disk (FD), a CD-ROM, a DVD disk, a magneto-optical disk, or an IC card. The computer 300 may read the program from the portable physical medium and execute the program.
Furthermore, the program may be stored in “another computer (or a server)” that is connected to the computer 300 via a public line, the Internet, a LAN, or a WAN and the computer 300 may read the program from “another computer (or a server)”.
According to an aspect of the present invention, an effect that it is possible to support input by a finger is obtained.
All examples and conditional language recited herein are intended for pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.