The disclosure is related to a method of receiving input characters, a character input reception apparatus and a character input reception program.
A tool is known which accepts handwritten input, speech input, etc., by a user and processes recognized characters, which have been obtained by character recognition of the handwritten input, the speech input, etc., as input characters.
[Patent Document 1] Japanese Laid-open Patent Publication No. 2012-43385
[Patent Document 2] Japanese Laid-open Patent Publication No. 2005-235116
[Patent Document 3] Japanese Laid-open Patent Publication No. 2001-337993
According to an aspect of the disclosure, a method of receiving input characters with a computer is provided, the computer executing a process, the process comprising: receiving, on a display image screen for displaying handwritten characters, a selection instruction with respect to a certain handwritten character; displaying, on the display image screen, a candidate display image screen for displaying a correction candidate with respect to the certain character, in response to a reception of the selection instruction, a display area for the candidate display image screen extending at least to a display area for the certain character; and performing a correction process for the certain character in response to a handwritten input with respect to the candidate display image screen.
The object and advantages of the embodiment will be realized and attained by means of the elements and combinations particularly pointed out in the claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention.
With respect to the speech input or the handwritten input, when a first candidate presented by a recognition application is incorrect, the recognition application, in accordance with an instruction from the user, generates a candidate display image screen including correction candidates to be presented to the user so that the user can select the appropriate correction candidate.
However, it is not possible for the tool, which processes a recognized character whose input has been confirmed by the recognition application as an input character, to utilize other information obtained in the recognition of the recognized character (information other than the recognized character itself). Other information obtained in the recognition of the recognized character includes information related to handwritten strokes, and information related to correction candidates which could replace the recognized character, etc., for example.
Therefore, the conventional tool as described above cannot generate a candidate display image screen, and thus cannot display one for a user who wants to correct the recognized character given as an input character. Accordingly, in the above conventional tool, when the user desires to correct a certain character given as an input character, the user needs to select the certain character and then perform speech input or handwriting input again so that the certain character is deleted or replaced with another character.
Also, when the user selects the character that the user desires to correct, there is a high probability that the correction candidates presented via the candidate display image screen are also inappropriate, because the first candidate (the character to be corrected by the user) recognized based on the same speech or handwritten input is inappropriate. If the appropriate correction candidate does not exist among the correction candidates presented via the candidate display image screen, the user is required to cancel the candidate display (i.e., return to the original input screen) and perform the input again.
In the following, embodiments are described in detail with reference to appended drawings.
In the following description, a character string is a concept that represents an aggregate including one or more characters. Therefore, the character string may be a single character. Not only Hiragana, Katakana, alphabetic characters, Chinese characters, and numbers, but also symbols fall within the concept of a character. Further, a correction of the character string is a concept that represents not only replacement of the character string with a new character string and deletion of the character string, but also insertion of a new character string into the character string, etc.
A program PR includes an application 10. In the following description, an apparatus for executing the application 10 is represented as “character input reception apparatus 1”.
The application 10 includes a handwriting character recognition application (an example of a recognition application) that recognizes handwritten input characters and displays the recognized characters. "Handwriting input" is an input method for expressing characters that a user wants to input by moving a hand (a finger in the following example), and it is a concept including an input method using a stylus.
The application 10 generates, in response to a selection instruction with respect to a character string whose input has been confirmed by a recognition application, a candidate display image screen that displays correction candidates. The character string whose input has been confirmed by the recognition application is, for example, a character string whose input has been confirmed at least temporarily, and is different from a character string for which an underline, etc., representing an unconfirmed state (i.e., a state in which a correction is possible) is displayed. For example, the character string whose input has been confirmed by the recognition application is in a state in which the character string cannot be corrected unless a selection instruction is input for the character string. The selection instruction is an instruction which a user inputs. The selection instruction may be in any form as long as the application 10 is able to identify a correction target character in the character string and the application 10 can recognize that the user intends the correction of the character string (the character to be corrected). Note that, if the character string to be corrected consists of one character, the selection instruction may be in any form as long as the application 10 can recognize that the user intends the correction of the character. A way of inputting the selection instruction is described hereinafter.
The candidate display image screen is generated based on stroke information stored by the application 10. This stroke information is described hereinafter.
The application 10 recognizes, based on the stroke the user has handwritten to the candidate display image screen, a character representing a correction content with respect to the character to be corrected.
The application 10 corrects, based on a recognition result (a content of a handwritten input) based on strokes handwritten to the candidate display image screen, the character string relating to the selection instruction. A specific example of a correction content will be described hereinafter.
A stroke corresponds to a trajectory (coordinate sequence) of a finger or a stylus at the time of handwriting input. One unit of the stroke is one continuous track (i.e., a trajectory of one handwriting) on coordinate axes and a time axis. The coordinate axes are associated to an operation display surface 90 on which handwriting input is to be performed (see
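As a rough illustration only (not part of the disclosure), a stroke in the above sense can be modeled as a timestamped coordinate sequence. The following is a minimal Python sketch; the class name `Stroke` and its fields are illustrative assumptions:

```python
import math
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Stroke:
    """One continuous trajectory on the coordinate axes and the time axis."""
    stroke_id: int
    points: List[Tuple[float, float, float]] = field(default_factory=list)  # (x, y, t)

    def add_point(self, x: float, y: float, t: float) -> None:
        self.points.append((x, y, t))

    def length(self) -> float:
        # Total geometric length of the trajectory (ignoring the time axis).
        return sum(
            math.dist(self.points[i][:2], self.points[i + 1][:2])
            for i in range(len(self.points) - 1)
        )
```
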
As illustrated in
For example, as illustrated in
The character input reception apparatus 1 includes a display operation device 101, a drive device 102, an auxiliary storage device 103, a memory device 104, an arithmetic processing unit 105, and an interface device 106 such that they are mutually connected by a bus B.
The display operation device 101 is, for example, a touch panel or the like, and is used for input of various signals and display (output) of various signals. The display operation device 101 includes an operation display surface on which a touch operation is possible. The touch operation includes an operation for handwriting input, and also includes any type of operations such as tap, slide, pinch, and the like. In the case of the device 2 illustrated in
The application 10 is at least a part of various programs for controlling the character input reception apparatus 1. These programs are provided by distribution via a recording medium 107 or downloaded from the network, for example. The recording medium 107 having the application 10 stored therein may be of various types, including a recording medium for optically, electrically or magnetically storing information, such as a CD-ROM, a flexible disk, a magneto-optical disk, and a semiconductor memory for electrically storing information, such as a ROM, a flash memory.
When the recording medium 107 storing the application 10 is set in the drive device 102, each program is installed in the auxiliary storage device 103 from the recording medium 107 via the drive device 102. Each program downloaded from the network is installed in the auxiliary storage device 103 via the interface device 106.
The auxiliary storage device 103 stores the installed application 10 and stores an OS (Operating System) which is basic software, necessary files, data, and the like. The memory device 104 reads and stores the respective programs in the auxiliary storage device 103 at the time of activation of each program. Then, the arithmetic processing unit 105 implements various processes as described hereinafter in accordance with each program stored in the memory device 104.
The processing unit 11 is realized by the arithmetic processing unit 105 executing the application 10. The storage unit 41 is realized by the auxiliary storage device 103 and the memory device 104.
The character input reception apparatus 1 executes a character input process and a character correction process. Hereinafter, examples of the character input process and the character correction process executed by the character input reception apparatus 1 are explained.
In step S600, the processing unit 11 detects a start of the handwriting input. The start of handwriting input is detected when the handwriting input is detected in a state where handwritten input is not detected for a certain time or more, for example.
In step S602, the processing unit 11 detects an end of the handwriting input. The end of handwriting input is detected when the handwriting input is not detected for the certain time.
In step S604, the processing unit 11 recognizes, based on one or more strokes obtained from the start of the handwriting input detected in step S600 until the end of the handwriting input detected in step S602, one or more characters corresponding to the one or more strokes. Hereinafter, the character thus recognized is also referred to as a "recognized character". For example, when recognizing one or more characters based on one or more strokes, the processing unit 11 derives a plurality of candidates that can be formed by the one or more strokes, and calculates an evaluation value (score) representing the likelihood of each of the derived candidates. In this case, when there is a possibility of a combination of two or more characters, the processing unit 11 creates a plurality of combination candidates, and calculates, for each combination candidate, the evaluation value representing the likelihood of the candidate. Then, the processing unit 11 outputs the candidate having the highest evaluation value as the first candidate (recognized character). Note that the recognized character is different from a character of a correction candidate (suggestion) described hereinafter. The recognized character generally corresponds to the first candidate (the likeliest candidate), and the correction candidates correspond to the second-likeliest candidate and so on. In the following, the characters of the correction candidates are simply referred to as "correction candidates".
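The selection of the first candidate and the remaining correction candidates described above can be sketched as follows. This is a hypothetical Python illustration; the function name `rank_candidates` and the example scores are assumptions, not part of the disclosure:

```python
def rank_candidates(scored):
    """Given (character, evaluation value) pairs, return the first candidate
    (highest evaluation value) and the remaining correction candidates in
    descending order of likelihood."""
    ordered = sorted(scored, key=lambda cs: cs[1], reverse=True)
    first_candidate = ordered[0][0]
    correction_candidates = [ch for ch, _ in ordered[1:]]
    return first_candidate, correction_candidates
```
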
In step S606, the processing unit 11 stores stroke information, which has been used for recognition in step S604, in the storage unit 41 (see the arrow R1). In the following, as an example, the stroke information is assumed to include stroke coordinate information, and recognition result information.
The stroke coordinate information includes coordinate information of each stroke. In the example illustrated in
For example, in the example illustrated in
The recognition result information represents a correspondence between the strokes used for recognition in step S604 and the character (recognized character) recognized by the strokes. The recognition result information is stored for each recognized character, as illustrated in
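The stroke coordinate information and the recognition result information described above can be sketched, purely as an illustration, with the following hypothetical Python data layout (`store_recognition` and the field names are assumptions):

```python
def store_recognition(char, strokes, stroke_coords, recognition_results, next_id):
    """Store one recognized character: assign a stroke ID to each of its strokes
    (stroke coordinate information) and record the correspondence between the
    character and its stroke IDs (recognition result information)."""
    stroke_ids = []
    for points in strokes:                 # each stroke is a list of (x, y)
        stroke_coords[next_id] = points
        stroke_ids.append(next_id)
        next_id += 1
    recognition_results.append({"char": char, "stroke_ids": stroke_ids})
    return next_id                         # next unused stroke ID
```
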
In step S608, the processing unit 11 displays one or more recognized characters based on the recognition result information. That is, the processing unit 11 displays a character string containing one or more recognized characters on the display operation device 101 based on the recognition result information. In this manner, the character string displayed on the display operation device 101 by a series of handwritten inputs is confirmed when the latest recognition result for the series of handwritten inputs is reflected thereon.
Further, in step S608, the processing unit 11 generates display position information indicating the one or more displayed recognized characters and the display positions thereof, and stores the display position information in the storage unit 41. The display position information is stored for each recognized character, as illustrated in
Next, referring again to
In step S700, the processing unit 11 detects the selection instruction for a character string displayed according to the process illustrated in
In step S702, the processing unit 11 obtains the stroke information stored in the storage unit 41 (the stroke coordinate information and the recognition result information) (refer to an arrow R2). In this case, the processing unit 11 obtains the stroke information relating to the recognized character of the correction target.
In step S704, the processing unit 11 derives correction candidates based on the stroke information obtained in step S702, and generates a candidate display image screen that displays the derived correction candidates. The candidate display image screen includes selectable correction candidates. The candidate display image screen is formed on a portion of, or substantially the entire screen of, the operation display surface 90 of the display operation device 101. In this case, the display area of the candidate display on the candidate display image screen extends to the display area of the recognized character of the correction target. For example, if the recognized character is the correction target, the processing unit 11 derives the correction candidates “ ”, “ ”, . . . , “ ”. In the following, as an example, it is assumed that the candidate display image screen is formed on the entire screen of the operation display surface 90 of the display operation device 101.
In the example illustrated in , the candidate display 80 displays the correction candidates “ ”, “ ”, . . . , “ ” such that the candidate display 80 is superimposed on the display area of the recognized character “3” of the correction target. Note that, in the example illustrated in
In step S706, the processing unit 11, after having displayed the candidate display image screen, detects a touch input on the operation display surface 90.
In step S708, the processing unit 11 determines whether the touch input detected in step S706 is a selection operation or a handwritten input. For example, when the touch input detected in step S706 is a short single touch input (i.e., a single tap) to the display area of the candidate display, the processing unit 11 determines that the touch input is a selection operation. The handwriting input is detected when a stroke of more than a certain length is detected, for example. If the touch input detected in step S706 is determined to be a selection operation, the process goes to step S710, and if it is determined to be a handwritten input, the process goes to step S712.
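The discrimination between a selection operation (a single tap) and a handwritten input in step S708 can be sketched as follows; the function name `classify_touch` and the threshold value are illustrative assumptions:

```python
import math

def classify_touch(points, min_stroke_length=20.0):
    """Classify a touch input as a selection operation ('selection') or a
    handwritten input ('handwriting') based on the trajectory length."""
    length = sum(
        math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)
    )
    return "handwriting" if length >= min_stroke_length else "selection"
```
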
For example, in the example illustrated in , the user performs a touch input to the display position of the character “ ” of the candidate display 80. In
In the example illustrated in
In the example illustrated in
In step S710, the processing unit 11 identifies (selects) the correction candidate corresponding to the touch input position (the position of the single tap) as a correction recognition character. Then, the processing unit 11 updates (reflects the replacement) the recognition result information in the storage unit 41 such that the recognized character of the correction target is replaced with the selected correction candidate (correction recognition character) (see the arrow R5). As a result, by a screen update process in the next step S728, as illustrated in
In step S712, the processing unit 11 detects the end of the handwriting input. The end of handwriting input is detected when the handwriting input is not detected for the certain time ΔT.
In step S714, the processing unit 11 determines whether handwriting input, which has been detected in step S706, is performed in the candidate display of the display area. For example, if a predetermined ratio or more of a passage region of the stroke of the handwriting input is present in the candidate display of the display area, the processing unit 11 determines that the handwriting input has been made in the candidate display of the display area. If the handwritten input is determined to have been performed in the candidate display of the display area, the process goes to step S718, otherwise, the process goes to step S730. For example, in the example illustrated in
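The determination in step S714 (whether a predetermined ratio or more of the passage region of the stroke lies within the display area of the candidate display) can be sketched as follows; the ratio of 0.8 and the rectangle representation of the display area are assumptions:

```python
def in_candidate_area(points, area, min_ratio=0.8):
    """Return True if at least min_ratio of the stroke's points lie inside
    the rectangular display area (x0, y0, x1, y1) of the candidate display."""
    x0, y0, x1, y1 = area
    inside = sum(1 for (x, y) in points if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / len(points) >= min_ratio
```
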
In step S718, the processing unit 11 recognizes, based on one or more strokes obtained from the start of the input detected in step S706 until the end of the handwriting input detected in step S712, one character corresponding to the one or more strokes. The recognition method itself may be similar to that of step S604. However, in order to implement the correction of each character, the processing unit 11 performs the character recognition based on single-character candidates, without creating any combination candidate. Hereinafter, the character thus recognized is referred to as a "correction recognition character". Note that, in step S718, the correction recognition character is recognized together with its correction candidates, like the recognized character in step S604. The correction recognition character generally corresponds to the first candidate (the likeliest candidate), and its correction candidates correspond to the second-likeliest candidate and so on.
In step S720, the processing unit 11 determines whether the correction recognition character is a deletion instruction. The deletion instruction is used to instruct to delete the recognized character of the correction target (an example of a control content). For example, the deletion instruction is realized by handwriting a predetermined character to the candidate display of the display area. Preferably, the predetermined character can conceptually represent a deletion, and may include a horizontal line “−”, “/”, “x” and the like, for example. The predetermined character (“−”, etc.,) is set in advance, and users are informed of the predetermined character. For example, if the correction recognition character is the predetermined character and the evaluation value indicating the likelihood is greater than or equal to a predetermined threshold value, the processing unit 11 determines that the correction recognition character is a deletion instruction. In the example illustrated in
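The determination of step S720 can be sketched as follows; the set of predetermined characters and the threshold value are illustrative assumptions:

```python
DELETION_CHARS = {"-", "/", "x"}   # predetermined characters set in advance

def is_deletion_instruction(char, evaluation_value, threshold=0.6):
    """A correction recognition character is treated as a deletion instruction
    when it is one of the predetermined characters and its evaluation value
    (likelihood) is greater than or equal to the threshold."""
    return char in DELETION_CHARS and evaluation_value >= threshold
```
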
In step S722, the processing unit 11, in response to the deletion instruction, updates (reflects the deletion) the recognition result information in the storage unit 41 (see the arrow R4) such that the recognized character of the correction target is deleted. That is, the processing unit 11 deletes the recognition result information associated with the recognized character of the correction target from the storage unit 41. Further, the processing unit 11 deletes the stroke information relating to the recognized character of the correction target from the storage unit 41. As a result, by a screen update process in the next step S728, as illustrated in
In step S724, the processing unit 11 updates (reflects the replacement) the recognition result information in the storage unit 41 such that the recognized character of the correction target is replaced with the correction recognition character (see the arrow R3). Further, the processing unit 11 updates (reflects the replacement) the stroke information (the stroke coordinate information and the recognition result information) in the storage unit 41 (see the arrow R3). Specifically, the processing unit 11 allocates a new stroke ID to one or more strokes used for recognition in step S718 and generates the stroke coordinate information related to the one or more strokes. Then, the processing unit 11 replaces the stroke coordinate information, which is related to the recognized character of the correction target in the storage unit 41, with the generated stroke coordinate information. Further, the processing unit 11 updates the recognition result information in the storage unit 41 by replacing the recognized character of the correction target and the stroke IDs thereof with the correction recognition character and the stroke IDs thereof. As a result, by a screen update process in the next step S728, as illustrated in
In step S728, the processing unit 11 updates the screen based on the updated recognition result information. In this case, if the correction is a deletion, the processing unit 11 deletes the display position information of the recognized character of the correction target and changes the coordinate information of the other recognized characters (i.e., reflects the change of the display positions caused by filling the empty portion due to the deletion). Further, if the correction is a replacement, the processing unit 11 updates the display position information (only the recognized character is replaced and the coordinate information is maintained).
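The display position update for a deletion in step S728 (filling the empty portion) can be sketched as follows, assuming fixed-width characters laid out along the x axis; the function name and data layout are assumptions:

```python
def apply_deletion(display_positions, target_index, char_width):
    """Delete the display position entry of the correction target and shift
    the following characters left to fill the empty portion."""
    display_positions.pop(target_index)
    for entry in display_positions[target_index:]:
        entry["x"] -= char_width
    return display_positions
```
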
In step S730, the processing unit 11 terminates the display of the candidate display to return to the original screen (see
With the character input reception apparatus 1 according to the first embodiment described above, the following effects can be obtained, for example.
First, as described above, according to the character input reception apparatus 1, because of step S606 of
Here, the display area of the candidate display 80 is used as a writing area for correcting the recognized character of the correction target as described above, and thus it is desirable, in terms of recognition accuracy, that the display area of the candidate display 80 be larger. In this respect, according to the character input reception apparatus 1, the display area of the candidate display 80 extends to the display area of the recognized character of the correction target. That is, the display area of the candidate display 80 is large enough that the candidate display 80 hides the display of the recognized character of the correction target. As a result, the display area of the candidate display 80 can be enlarged, which improves the recognition accuracy for the handwritten input to the display area of the candidate display 80.
Note that such effects are particularly useful when the operation display surface 90 of the display operation device 101 in the character input reception apparatus 1 is relatively small (e.g., in the case of the device being in a form that is wearable on a user illustrated in
Further, as described above, according to the character input reception apparatus 1, when the deletion instruction is input in the state of the candidate display image screen, the recognized character of the correction target is deleted. Therefore, for a user who wants to delete the recognized character of the correction target, the user can directly remove the recognized character of the correction target by handwriting the predetermined character (e.g., “−”) in the state of the candidate display image screen.
In the first embodiment described above, the processes of
Next, with reference to
Program PR2 includes a character input application 10A (an example of a recognition application) and a character correction application 10B.
The character input application 10A is a program that recognizes a handwritten character and displays the recognized character.
The character correction application 10B generates, in response to a selection instruction with respect to a character string (an example of a character string whose input has been confirmed by a recognition application) given as an input from the character input application 10A, a candidate display image screen that displays correction candidates. The candidate display image screen is generated based on stroke information stored by the character input application 10A. The character correction application 10B recognizes a character from strokes handwritten on the candidate display image screen. The character correction application 10B corrects, based on a recognition result (a content of a handwritten input) based on strokes handwritten to the candidate display image screen, the character string relating to the selection instruction. The candidate display and the correction way are the same as those of the first embodiment.
In the following description, an apparatus that executes the character input application 10A and the character correction application 10B is referred to as a "character input reception apparatus 1A".
The form of the character input reception apparatus 1A may be the same as that of the character input reception apparatus 1 according to the first embodiment described above (see
Further, the hardware configuration of the character input reception apparatus 1A may be the same as that of the character input reception apparatus 1 according to the first embodiment described above (see
The character input processing unit 20 is realized by the arithmetic processing unit 105 executing the character input application 10A. The character correction processing unit 21 is realized by the arithmetic processing unit 105 executing the character correction application 10B. The storage unit 41 is accessible by the character input processing unit 20 and the character correction processing unit 21.
In
In
In the following, first, a process implemented by the character correction application 10B, and then a process implemented by the character input application 10A, will be described.
The character correction processing unit 21, upon detecting a touch input on the operation display surface 90 of the display operation device 101 (see the arrow R2) (step S706), obtains the stroke information stored in the storage unit 41 (step S702). The character correction processing unit 21 determines whether the candidate display is displayed on the operation display surface 90 (step S902). For example, the character correction processing unit 21 derives the correction candidates based on the obtained stroke information, and determines that the candidate display is displayed on the operation display surface 90 if the derived correction candidates correspond to a plurality of characters displayed on the operation display surface 90. This is because, when the candidate display is displayed on the operation display surface 90, the plurality of characters displayed on the operation display surface 90 should include the plurality of correction candidates derived based on the obtained stroke information. Note that the plurality of characters displayed on the operation display surface 90 (including the characters of the candidate display when the candidate display is displayed) can be determined based on information from the OS, for example. If it is determined that the candidate display is not displayed, the character correction processing unit 21 sends the detected touch input (touch event) to the character input application 10A side as it is (step S901). On the other hand, if it is determined that the candidate display is displayed, the character correction processing unit 21 determines whether the input detected in step S706 is a selection operation or a handwriting input (step S708). If it is determined that the input detected in step S706 is a selection operation, the character correction processing unit 21 sends the detected touch input (touch event) to the character input application 10A side as it is (step S901).
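The heuristic of step S902 (judging that the candidate display is shown when the derived correction candidates appear among the characters currently displayed) can be sketched as follows; this is a simplified illustration, and the function name is an assumption:

```python
def candidate_display_shown(derived_candidates, displayed_chars):
    """Return True if every derived correction candidate appears among the
    characters currently displayed on the operation display surface."""
    return set(derived_candidates) <= set(displayed_chars)
```
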
On the other hand, the character correction processing unit 21 detects the end of handwritten input (step S712) if it is determined that the input detected in step S706 is a handwritten input. The character correction processing unit 21 determines, when having detected the end of the handwriting input, whether the handwriting input has been performed within the candidate display of the display area (step S714). If it is determined that the handwriting input has been performed within the candidate display of the display area, the character correction processing unit 21 performs, based on one or more strokes input by the handwriting, the recognition processing of the character corresponding to the one or more strokes (Step S718). On the other hand, if it is determined that the handwriting input has not been performed within the candidate display of the display area, the character correction processing unit 21 sends an instruction (hereinafter, referred to as “candidate display end instruction”) to end the display of the candidate display to the character input application 10A side (step S904). If it is determined that the correction recognition character is a deletion instruction (“YES” in step S720), the character correction processing unit 21 sends a correction instruction (hereinafter, referred to as “correction target deletion instruction”) to delete the recognized character of the correction target to the character input application 10A side (step S908). The correction target deletion instruction includes information identifying the recognized character of the correction target. The correction target deletion instruction causes the character correction processing unit 21 to update (reflect the deletion) the stroke information and the recognition result information in the storage unit 41 (arrow R4). 
If it is determined that the correction recognition character is not a deletion instruction (“NO” in step S720), the character correction processing unit 21 sends a correction target replacement instruction to the character input application 10A side (step S906), after the process of step S724. The correction target replacement instruction is a correction instruction that causes the character input application 10A to replace the recognized character of the correction target with the correction recognition character. The correction target replacement instruction includes information identifying the recognized character of the correction target and information of the correction recognition character.
The character input processing unit 20 operates in response to the inputs (touch events and various instructions) from the character correction application 10B.
When the character input processing unit 20 receives the touch event (step S800), the process goes to step S600, step S700, step S802 or step S730, depending on the form of the touch event.
If the touch event is handwriting input, the character input processing unit 20 detects the start of the handwriting input (step S600), and performs the processes until steps S608 (see
If the touch event is a touch input to the display area of one or more recognized characters displayed on the operation display surface 90, the character input processing unit 20 detects a selection instruction with respect to the one or more recognized characters (an example of a character string whose input has been confirmed by a recognition application) (step S700). In this case, the character input processing unit 20 identifies, based on the display position information stored in the storage unit 41, the recognized character, which corresponds to the touch input position, as a correction target. Then, the character input processing unit 20 derives correction candidates based on the stroke information stored in the storage unit 41, and generates the candidate display image screen that displays the derived correction candidates (step S704).
If the touch event is a single tap of the display area of the candidate (see the selection operation step S708), the character input processing unit 20 identifies (selects) the correction candidate corresponding to the touch input position as a correction recognition character (step S802). In this case, the character input processing unit 20 replaces the recognized character of the correction target with the correction recognition character (step S804) and performs the screen update processing (step S808). In the replacement process using the correction recognition character selected in step S802, the character input processing unit 20 updates (reflects the replacement in) the recognition result information in the storage unit 41 (see the arrow R5). In the screen update processing in step S808, the character input processing unit 20 updates the display position information (see the arrow R6).
If the touch event is a single tap instructing the end of the candidate display, the character input processing unit 20 ends the display of the candidate display such that the original screen is restored (see
The character input processing unit 20, when having received the candidate display end instruction (the step S904), ends the display of the candidate display such that the original screen is restored (see
The character input processing unit 20, when having received the correction target deletion instruction (step S908), deletes the recognized character of the correction target (step S806) based on the correction target deletion instruction, and performs the screen update processing (step S808). In the screen update processing in step S808, the character input processing unit 20 updates the display position information (see the arrow R6).
The character input processing unit 20, when having received the correction target replacement instruction (step S906), replaces the recognized character of the correction target with the correction recognition character (step S804), and performs the screen update processing (step S808). In the replacement process in step S804, the character input processing unit 20 updates (reflects the replacement in) the stroke information in the storage unit 41 (see the arrow R5). In the screen update processing in step S808, the character input processing unit 20 updates the display position information (see the arrow R6).
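The event handling described in steps S800 through S908 amounts to a dispatch on the form of the received touch event or instruction. The following sketch is purely illustrative; the event encoding (a dict with a `"kind"` key) is an assumption, and only the step numbers come from the description above.

```python
# Hypothetical sketch of the dispatch performed by the character input
# processing side on events and instructions received from the character
# correction application. Step labels follow the description above.

def dispatch(event):
    kind = event["kind"]
    if kind == "handwriting":
        return "S600"  # start of handwriting input, processes up to S608
    if kind == "tap_recognized_character":
        return "S700"  # selection instruction; candidate display built in S704
    if kind == "tap_candidate":
        return "S802"  # select correction candidate; replace (S804), update (S808)
    if kind == "tap_end_candidate_display":
        return "S730"  # end candidate display; restore the original screen
    if kind == "candidate_display_end_instruction":
        return "S904"  # end candidate display on instruction from unit 21
    if kind == "correction_target_deletion_instruction":
        return "S908"  # delete target (S806), screen update (S808)
    if kind == "correction_target_replacement_instruction":
        return "S906"  # replace target (S804), screen update (S808)
    raise ValueError("unknown event: " + kind)
```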
According to the character input reception apparatus 1A of the second embodiment described above, the same effect as the character input reception apparatus 1 according to the first embodiment described above is obtained. Further, according to the character input reception apparatus 1A of the second embodiment, by linking the character correction application 10B to the character input application 10A, which has high versatility, it is possible to obtain the same effect as the character input reception apparatus 1. Therefore, by additionally installing the character correction application 10B on a device on which the character input application 10A is installed, it is possible to enhance the convenience of the correction function.
Note that in the second embodiment, two applications are used; however, it is also possible to realize the same function by using more than two applications.
Next, with reference to
A character input reception apparatus 1B according to the third embodiment (not illustrated) differs from the first embodiment in that the application to be executed is not the application 10 but an application 10B (not illustrated).
The application 10 includes a handwriting character recognition application (an example of a recognition application) that recognizes handwritten input characters and displays recognized characters.
The application 10B generates, in response to a selection instruction with respect to a character string whose input has been confirmed by a recognition application, a candidate display image screen that displays correction candidates. The candidate display image screen is generated based on stored candidate information (see
The form of the character input reception apparatus 1B may be the same as that of the character input reception apparatus 1 according to the first embodiment described above (see
The character input reception apparatus 1B includes a processing unit 11B, and a storage unit 41B (not illustrated). The processing unit 11B is realized by the arithmetic processing unit 105 executing the application 10B. The storage unit 41B is realized by the auxiliary storage device 103 and the memory device 104.
The character input reception apparatus 1B executes a character input process and a character correction process. Hereinafter, examples of the character input process and the character correction process executed by the character input reception apparatus 1B are explained.
In
In step S2506, the processing unit 11B holds, in the storage unit 41B, information (candidate information) identifying the characters of one or more correction candidates derived during the recognition in step S604 (see the arrow R11). The candidate information is stored for each recognized character. For example, in the example illustrated in
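Holding candidate information per recognized character (step S2506) and obtaining it later for the correction target (step S2602) can be sketched as follows. The dictionary layout and the function names are assumptions made only for illustration; the actual storage format in the storage unit 41B is not specified here.

```python
# Illustrative data-structure sketch: candidate information held for each
# recognized character, keyed by the character's position in the string.
# This layout is a hypothetical model, not the actual implementation.

candidate_info = {}  # position of recognized character -> candidate record

def store_candidates(position, first_candidate, other_candidates):
    # Step S2506: hold the candidates derived during recognition (arrow R11).
    candidate_info[position] = {
        "recognized": first_candidate,          # the displayed first candidate
        "candidates": list(other_candidates),   # remaining correction candidates
    }

def candidates_for(position):
    # Step S2602: obtain the candidate information relating to the recognized
    # character of the correction target (arrow R12).
    return candidate_info[position]["candidates"]
```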
Next, the processing in
In step S2602, the processing unit 11B obtains the candidate information stored in the storage unit 41B (see the arrow R12). In this case, the processing unit 11B obtains the candidate information relating to the recognized character of the correction target.
In step S2604, the processing unit 11B generates, based on the candidate information obtained in step S2602, the candidate display image screen that displays the correction candidates. The candidate display image screen includes selectable correction candidates. The candidate display itself is as described above.
In step S2622, the processing unit 11B, in response to the deletion instruction, updates (reflects the deletion) the recognition result information in the storage unit 41B (see the arrow R14) such that the recognized character of the correction target is deleted. That is, the processing unit 11B deletes the recognition result information associated with the recognized character of the correction target. As a result, by a screen update process in the next step S728, the screen of the operation display surface 90 of the display operation device 101 is updated to display a character string in which the recognized character of the correction target is deleted. In this case, the processing unit 11B deletes the candidate information relating to the recognized character of the correction target (see the arrow R14).
In step S2624, the processing unit 11B updates (reflects the replacement) the recognition result information in the storage unit 41B such that the recognized character of the correction target is replaced with the correction recognition character (see the arrow R13). Further, the processing unit 11B updates (reflects the replacement) the candidate information in the storage unit 41B (see the arrow R13). As a result, by a screen update process in the next step S728, the screen of the operation display surface 90 of the display operation device 101 is updated to display a character string in which the recognized character of the correction target is replaced with the correction recognition character.
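Steps S2622 and S2624 can be summarized in a short sketch. The list-based model of the recognition result information and candidate information is an assumption, as is the choice to keep the replaced character as a candidate; the description above only requires that both stores be updated (arrows R13, R14) before the screen update in step S728.

```python
# Illustrative sketch (assumed data model) of deletion and replacement of the
# correction target in the recognition result and candidate information.

def delete_target(results, candidates, index):
    # Step S2622: delete the recognized character of the correction target and
    # the candidate information associated with it (arrow R14).
    del results[index]
    del candidates[index]

def replace_target(results, candidates, index, correction_char):
    # Step S2624: replace the recognized character with the correction
    # recognition character and update the candidate information (arrow R13).
    # Keeping the replaced character as a candidate is an assumption.
    old = results[index]
    results[index] = correction_char
    if correction_char in candidates[index]:
        candidates[index].remove(correction_char)
    candidates[index].insert(0, old)
```

After either call, a screen update (step S728) would redraw the character string from the updated recognition result information.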
According to the character input reception apparatus 1B of the third embodiment described above, the same effect as the character input reception apparatus 1 according to the first embodiment described above is obtained. Further, according to the character input reception apparatus 1B of the third embodiment, as compared with the first embodiment described above, the processing load for deriving correction candidates can be reduced. Specifically, in the first embodiment described above, the processing unit 11 obtains the stroke information stored in the storage unit 41, and derives, based on the obtained stroke information, the correction candidates related to the recognized character of the correction target. In contrast, according to the third embodiment, the processing unit 11B obtains the candidate information stored in the storage unit 41B. Therefore, according to the third embodiment, since the processing unit 11B can generate the candidate display using the past character input processing result, it is not necessary for the processing unit 11B to derive the correction candidates again, which reduces the processing load of the processing unit 11B for generating the candidate display.
Note that, also in the third embodiment, the application 10B may be implemented in two applications in the same manner as in the second embodiment described above. That is, in the second embodiment described above, in a similar manner as the third embodiment, the candidate display may be generated by using the candidate information obtained at the time of handwriting input, instead of the stroke information. In this case, likewise, in step S704, the character input processing unit 20 need not derive the correction candidate again and can generate the candidate display using the past character input processing result. Further, in this case, in step S902, the character correction processing unit 21 can use the candidate information to determine whether the candidate display is displayed on the operation display surface 90.
Next, with reference to
A character input reception apparatus 1C according to the fourth embodiment (not illustrated) differs from the first embodiment in that the application to be executed is not the application 10 but an application 10C (not illustrated).
The application 10C includes a speech recognition application (an example of a recognition application) that recognizes speech input characters and displays recognized characters. The application 10C further includes a program that recognizes a handwritten character and displays the recognized character.
The application 10C generates, in response to a selection instruction with respect to a character string whose input has been confirmed by a recognition application, a candidate display image screen that displays correction candidates. The candidate display image screen is generated based on stored candidate information. The candidate information itself is as described above. The candidate display is the same as that of the first embodiment. The application 10C recognizes a character from strokes handwritten on the candidate display image screen. The application 10C corrects, based on a recognition result (a content of a handwritten input) obtained on the candidate display image screen, the character string relating to the selection instruction. The correction way is the same as that of the third embodiment.
The form of the character input reception apparatus 1C may be the same as that of the character input reception apparatus 1 according to the first embodiment described above (see
The character input reception apparatus 1C includes a processing unit 11C, and a storage unit 41C (not illustrated). The processing unit 11C is realized by the arithmetic processing unit 105 executing the application 10C. The storage unit 41C is realized by the auxiliary storage device 103 and the memory device 104.
The character input reception apparatus 1C executes a character input process and a character correction process. The character correction process that the character input reception apparatus 1C performs is similar to the character correction process (
In
In step S2800, the processing unit 11C detects a start of the speech input. The start of the speech input is detected, for example, when speech input is detected after no speech input has been detected for a certain time or more.
In step S2802, the processing unit 11C detects an end of the speech input. The end of the speech input is detected when no speech input is detected for the certain time.
In step S2804, the processing unit 11C recognizes, based on the speech data obtained from the start of the speech input detected in step S2800 until the end of the speech input detected in step S2802, one or more characters corresponding to the speech data. Hereinafter, as described above, a character thus recognized is also referred to as a “recognition character”. For example, in recognizing a character based on the speech data, the processing unit 11C creates a plurality of candidates, and calculates, for each candidate, an evaluation value representing the likelihood of the candidate. Then, the processing unit 11C outputs the candidate having the highest evaluation value as the first candidate (recognized character).
In step S2806, the processing unit 11C holds, in the storage unit 41C, information (candidate information) identifying the characters of one or more correction candidates derived during the recognition in step S2804 (see the arrow R21). The candidate information is as described above.
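The selection of the first candidate by evaluation value (step S2804) and the retention of the remaining candidates as candidate information (step S2806) can be sketched as follows. The scoring representation (a list of character/value pairs) and the function name are hypothetical; only the ranking by evaluation value comes from the description above.

```python
# Illustrative sketch of steps S2804-S2806: recognition creates several
# candidates with evaluation values (likelihoods); the highest-scoring one is
# output as the first candidate (recognized character), and the rest are held
# as candidate information for later candidate displays.

def recognize(scored_candidates):
    """scored_candidates: list of (character_string, evaluation_value) pairs.
    Returns (recognized, candidate_info): the first candidate and the
    remaining candidates ordered by decreasing evaluation value."""
    ranked = sorted(scored_candidates, key=lambda c: c[1], reverse=True)
    recognized = ranked[0][0]
    candidate_info = [c for c, _ in ranked[1:]]
    return recognized, candidate_info
```

For example, with candidates scored 0.7, 0.5, and 0.2, the 0.7 candidate is displayed and the other two become the stored candidate information.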
According to the character input reception apparatus 1C of the fourth embodiment described above, the same effect as the character input reception apparatus 1 according to the first embodiment described above is obtained. Further, according to the character input reception apparatus 1C of the fourth embodiment, as compared with the third embodiment described above, the processing load for deriving correction candidates can be reduced. Specifically, according to the fourth embodiment, the processing unit 11C obtains the candidate information stored in the storage unit 41C, and generates the candidate display based on the obtained candidate information. Therefore, according to the fourth embodiment, since the processing unit 11C can generate the candidate display using the past character input processing result, it is not necessary for the processing unit 11C to derive the correction candidates again, which reduces the processing load of the processing unit 11C for generating the candidate display.
Note that, also in the fourth embodiment, the application 10C may be implemented in two applications in the same manner as in the second embodiment described above. That is, in the second embodiment described above, in a similar manner as the fourth embodiment, the candidate display may be generated by using the candidate information obtained at the time of speech input, instead of the stroke information.
In the fourth embodiment, the candidate information obtained in the speech recognition is used to generate the candidate display; however, it is also possible to use the speech data used for the speech recognition to generate the candidate display. In this case, in step S2806, the processing unit 11C may hold, in the storage unit 41C, the speech data used at the time of the recognition in step S2804. In this case, the processing unit 11C, when generating the candidate display, may derive, based on the speech data stored in the storage unit 41C, the correction candidates to generate the candidate display. Thus, the processing unit 11C can generate, in accordance with the selection instruction with respect to the character string whose input has been confirmed by the recognition application, the candidate display image screen by using the held speech data or candidate information.
Next, with reference to
In the example illustrated in
Here, in the first embodiment described above, the area in which handwriting input for the correction can be accepted is the display area of the candidate display (see
If the way of handwriting the correction content illustrated in
According to the example illustrated in
In the example illustrated in
Here, in the first embodiment described above, the correction content to be handwritten includes only one character (see
If the way of handwriting the correction content illustrated in
According to the example illustrated in
In the example illustrated in
In the example illustrated in
Here, in the first embodiment described above, the correction content to be handwritten includes only one character (see
If the way of handwriting the correction content illustrated in
According to the example illustrated in
In the example illustrated in
According to the example illustrated in
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiments of the present invention have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention. Further, all or part of the components of the embodiments described above can be combined.
For example, in the embodiments described above, the display area of the candidate display 80 is set in a range excluding a portion of the display area on the right side of the candidate display image screen; however, this is not indispensable. The display area of the candidate display 80 may be set to the entire candidate display image screen.
For example, in the embodiments described above, the candidate display 80 includes a plurality of correction candidates arranged in two lateral rows; however, the number of displayed correction candidates and the manner of display are arbitrary. For example, the candidate display 80 may provide a single correction candidate, or may provide a plurality of correction candidates in a single horizontal row.
Further, in the third embodiment described above, the character correction process illustrated in
Further, in the first to fourth embodiments described above, the candidate display is displayed on a part of the candidate display image screen. However, the candidate display may be displayed on the entire candidate display image screen. In this case, it becomes easy for users to handwrite the correction content because the users can handwrite the correction content in a wider area, although the users cannot handwrite the correction content while viewing the character string to be corrected. From the same viewpoint, even when the candidate display has a smaller area than the entire candidate display image screen, the candidate display may be displayed in a region overlapping the character string to be corrected.
Further, in the first to fourth embodiments described above, the candidate display may be formed such that the predetermined character related to the deletion instruction (e.g., “−”) can be selected. That is, the candidate display may include the predetermined character related to the deletion instruction along with the correction candidates. In this case, a user who wants to delete the recognized character of the correction target need not handwrite the predetermined character related to the deletion instruction; the user may simply select the predetermined character related to the deletion instruction included in the candidate display to input the deletion instruction.
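This variation can be sketched as follows. The function names and the internal instruction labels are hypothetical; only the idea of appending the predetermined deletion character to the selectable candidates comes from the description above.

```python
# Illustrative sketch: building the selectable items of the candidate display
# with the predetermined deletion character appended, so that a user can
# delete the correction target by selection alone, without handwriting.

DELETE_MARK = "-"  # the predetermined character related to the deletion instruction

def build_candidate_display(correction_candidates):
    # The deletion mark is shown alongside the ordinary correction candidates.
    return list(correction_candidates) + [DELETE_MARK]

def on_select(selected):
    # Return the instruction implied by the selected item: either delete the
    # correction target or replace it with the selected correction candidate.
    if selected == DELETE_MARK:
        return "correction_target_deletion"
    return ("correction_target_replacement", selected)
```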
Further, in the first embodiment described above (the same applies to the second to fourth embodiments), the selection operation with respect to the character string is realized by the touch input to the display position of the character in the character string; however, the selection operation may be input in other ways. For example, in the example illustrated in
This is a continuation of International Application No. PCT/JP2015/063957, filed on May 14, 2015, the entire contents of which are incorporated herein by reference.
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | PCT/JP2015/063957 | May 2015 | US |
| Child | 15810672 | | US |