The present disclosure relates to an input device, an input method, and an input system.
JP1992-101286A discloses a license plate information reading device that captures a scene image containing vehicle license plates and detects a license plate area from the captured scene image, to thereby read character information written on the license plate.
An object of the present disclosure is to provide an input device, an input method, and an input system capable of easily correcting input information.
The input device of an aspect of the present disclosure is an input device mounted on a moving body, comprising:
an input unit that accepts input information containing character strings and correction information containing one or more characters;
a degree-of-similarity calculation unit that edits character strings of the input information using the one or more characters of the correction information, to calculate degrees of similarity between character strings of the input information before and after editing; and
a correction processing unit that corrects a character string of the input information based on the degrees of similarity calculated by the degree-of-similarity calculation unit.
The input method of an aspect of the present disclosure is an input method which is performed on a moving body, comprising:
accepting input information containing character strings;
accepting correction information containing one or more characters;
editing character strings of the input information using the one or more characters of the correction information, to calculate degrees of similarity between character strings of the input information before and after editing; and correcting a character string of the input information based on the degrees of similarity calculated.
The input system of an aspect of the present disclosure is an input system comprising:
an arithmetic processing device mounted on a moving body; and
a server communicating with the arithmetic processing device via a network,
the arithmetic processing device comprising:
an input unit that accepts input information containing character strings and correction information containing one or more characters; and
a first communication unit communicating with the server via the network,
the server comprising:
a second communication unit communicating with the arithmetic processing device via the network;
a degree-of-similarity calculation unit that edits character strings of the input information using the one or more characters of the correction information, to calculate degrees of similarity between character strings of the input information before and after editing; and
a correction processing unit that corrects a character string of the input information based on the degrees of similarity calculated by the degree-of-similarity calculation unit.
According to the present disclosure, there can be provided the input device, the input method, and the input system allowing easy correction of input information.
(Circumstances Leading to this Disclosure)
In the reading device described in JP1992-101286A, character information may be incorrectly read. In such a case, the user performs a work of correcting the character information. For example, the user operates a touch panel or the like with a finger to correct the character information. Alternatively, the user corrects the character information by voice input.
Such a reading device may be mounted on police vehicles such as police cars. For example, the user reads the character information of the license plate of an automobile traveling in front of a police vehicle by the reading device. The user uses the character information read by the reading device as input information, to collate the number of the automobile with a database or the like. At this time, if the reading device erroneously reads the character information, the user performs the work of correcting the input information.
Further, as an input form other than use of the reading device, input information may be input by voice. Police vehicles are in an environment where noise is more likely to occur than in general vehicles. Therefore, when input information is input by voice, it is likely to be erroneously recognized due to the noise. For this reason, the input information may need to be corrected more often than in general vehicles.
When the user is driving, however, it is difficult to correct the input information. Police vehicles therefore require easy correction of the input information. Police vehicles may also require urgency, so that rapid and smooth correction of the input information is required.
It is required even in a moving body such as a general vehicle that the input information be easily corrected. For example, in the car navigation system of a general vehicle, the input information may be erroneously recognized when the destination address, etc. is input by voice input. Also in such a case, easy correction of the input information is required.
Thus, the inventors have diligently studied these problems and have found that calculating degrees of similarity based on the input information and the correction information, and then correcting the input information based on those degrees of similarity, solves them, leading to the following disclosure.
An input device of a first aspect of the present disclosure is an input device mounted on a moving body, comprising:
an input unit that accepts input information containing character strings and correction information containing one or more characters;
a degree-of-similarity calculation unit that edits character strings of the input information using the one or more characters of the correction information, to calculate degrees of similarity between character strings of the input information before and after editing; and
a correction processing unit that corrects a character string of the input information based on the degrees of similarity calculated by the degree-of-similarity calculation unit.
With such a configuration, the input information can be corrected easily.
In the input device of a second aspect of the present disclosure,
the degree-of-similarity calculation unit may comprise a distance calculation unit that edits character strings of the input information using the one or more characters of the correction information, to calculate distances between character strings of the input information before and after editing, and
the correction processing unit may correct a character string of the input information based on the distances between character strings calculated by the distance calculation unit.
With such a configuration, the input information can be corrected more easily.
In the input device of a third aspect of the present disclosure,
the distance calculation unit may carry out at least any one of editing processes of insert, delete, and replace on character strings of the input information, to thereby calculate distances between character strings of the input information before and after editing.
With such a configuration, the input information can be corrected more easily.
In the input device of a fourth aspect of the present disclosure,
the correction processing unit may correct a character string of the input information of a portion having a smallest distance among the distances between character strings calculated by the distance calculation unit.
With such a configuration, the input information can be corrected more accurately.
In the input device of a fifth aspect of the present disclosure,
the input information may have a plurality of attributes for classifying a plurality of character strings of the input information,
the degree-of-similarity calculation unit may comprise an attribute determination unit that determines into which attribute among the plurality of attributes the correction information is classified, and
the degree-of-similarity calculation unit may calculate the degrees of similarity based on the attributes of the input information and of the correction information.
With such a configuration, the input information can be corrected more rapidly.
In the input device of a sixth aspect of the present disclosure,
the correction processing unit may correct a character of a portion having a highest degree of similarity in the character strings of the input information having the same attribute as the correction information.
With such a configuration, the input information can be corrected more accurately.
In the input device of a seventh aspect of the present disclosure,
if in the character strings of the input information there exist a plurality of portions having a highest degree of similarity among a plurality of degrees of similarity calculated by the degree-of-similarity calculation unit, the correction processing unit may correct a character of a first-calculated portion having a highest degree of similarity.
With such a configuration, the input information can be corrected more rapidly and more accurately.
The input device of an eighth aspect of the present disclosure may further comprise:
a display that displays the input information and the input information corrected.
With such a configuration, the input information can be displayed.
In the input device of a ninth aspect of the present disclosure,
the input unit may comprise a voice input unit that accepts voice information indicative of the input information and voice information indicative of the correction information,
the input device may further comprise:
a determination unit that determines whether the voice information accepted by the voice input unit is the input information or the correction information, and
if the determination unit determines that the voice information is the correction information, the degree-of-similarity calculation unit may calculate the degrees of similarity.
With such a configuration, input and correction of information can be performed easily by voice input.
In the input device of a tenth aspect of the present disclosure,
the input information may include image information having a character string captured,
the correction information may include voice information containing information of one or more characters,
the input unit may comprise an image acquisition unit acquiring the image information and a voice input unit accepting the voice information, and
the input device may further comprise:
a first conversion unit that converts character string information contained in the image information acquired by the image acquisition unit, into text information; and
a second conversion unit that converts information of one or more characters contained in the voice information accepted by the voice input unit, into text information.
With such a configuration, the input information acquired from the image information can be corrected easily by voice input.
An input method of an eleventh aspect of the present disclosure is an input method which is performed on a moving body, comprising:
accepting input information containing character strings;
accepting correction information containing one or more characters;
editing character strings of the input information using the one or more characters of the correction information, to calculate degrees of similarity between character strings of the input information before and after editing; and correcting a character string of the input information based on the degrees of similarity calculated.
With such a configuration, the input information can be corrected easily.
An input system of a twelfth aspect of the present disclosure comprises:
an arithmetic processing device mounted on a moving body; and
a server communicating with the arithmetic processing device via a network,
the arithmetic processing device comprising:
an input unit that accepts input information containing character strings and correction information containing one or more characters; and
a first communication unit communicating with the server via the network,
the server comprising:
a second communication unit communicating with the arithmetic processing device via the network;
a degree-of-similarity calculation unit that edits character strings of the input information using the one or more characters of the correction information, to calculate degrees of similarity between character strings of the input information before and after editing; and
a correction processing unit that corrects a character string of the input information based on the degrees of similarity calculated by the degree-of-similarity calculation unit.
With such a configuration, the input information can be corrected easily.
Embodiments of the present disclosure will now be described with reference to the accompanying drawings. In the figures, some elements are shown exaggerated for ease of explanation.
The input information is information that is input to the input device 1 and includes character information to be recognized by the input device 1. The correction information is information for correcting the input information and includes character information for correcting the character information included in the input information. In the first embodiment, the input information includes character information containing the character strings on an automobile license plate. The character strings on the automobile license plate include, e.g., alphabetic characters, numerals, and a place name. The correction information includes information of one or more characters used for the automobile license plate.
Further, the input information has a plurality of pieces of attribute information. Specifically, in the input information, attribute information is imparted to each of the plurality of character strings. In the example shown in
An example of correction of the input information by the input device 1 will then be briefly described with reference to
In the example shown in
In this manner, in the input device 1, the character strings of the input information can be corrected by inputting part of a character string to be corrected as correction information, instead of correcting all the character strings of the input information. Correction of the input information based on the correction information is performed based on the degree of similarity between the character strings. Detailed description of the degree-of-similarity-based correction will be given later.
The detailed configuration of the input device 1 will then be described. As shown in
Input unit 10 accepts input information containing character strings and correction information containing one or more characters.
The input unit 10 comprises a voice input unit that accepts the input information and the correction information by voice for example. Examples of the voice input unit include a microphone. In the first embodiment, the input information and the correction information are input to the input unit 10 by voice input. That is, the input unit 10 accepts voice information indicative of the input information and voice information indicative of the correction information.
The voice information input to the input unit 10 is transmitted to the information processing unit 20.
The information processing unit 20 processes information input to the input unit 10. Specifically, the information processing unit 20 comprises a conversion unit that converts the voice information input to the input unit 10 into text information (character information). By converting the voice information into the text information (character information), the conversion unit acquires the input information and the correction information. An available algorithm for converting voice information into character information can be, e.g., various deep learning techniques or methods utilizing hidden Markov models.
The information processed by the information processing unit 20 is transmitted to the determination unit 30.
The determination unit 30 determines whether the voice information input to the input unit 10 is input information or correction information. For example, the determination unit 30 counts the number of characters, based on the text information acquired in the information processing unit 20. If the number of characters is equal to or greater than a predetermined number, the determination unit 30 determines that the information input to the input unit 10 is the input information. If the number of characters is less than the predetermined number, the determination unit 30 determines that the information input to the input unit 10 is the correction information.
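The character-count rule above can be sketched as follows; the threshold value is a hypothetical assumption, since the disclosure leaves the predetermined number unspecified:

```python
# Sketch of the determination logic: classify a recognized utterance as
# input information or correction information by its character count.
# THRESHOLD is an assumed value; the disclosure does not fix the number.
THRESHOLD = 5

def classify(text: str) -> str:
    """Return 'input' if the text is at least THRESHOLD characters long,
    otherwise 'correction'."""
    if len(text) >= THRESHOLD:
        return "input"
    return "correction"

print(classify("ABC AECD"))  # a full plate string -> 'input'
print(classify("ABC"))       # a short fragment    -> 'correction'
```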
If determining that the information input to the input unit 10 is the input information, the determination unit 30 transmits the input information to the input storage 40. If determining that the information input to the input unit 10 is the correction information, the determination unit 30 transmits the correction information to the degree-of-similarity calculation unit 50.
The input storage 40 is a storage medium that stores input information. The input storage 40 receives and stores the input information from the determination unit 30 and the correction processing unit 60. For example, the input storage 40 can be implemented by a hard disk (HDD), an SSD, a RAM, a DRAM, a ferroelectric memory, a flash memory, a magnetic disk, or a combination thereof.
The degree-of-similarity calculation unit 50 edits character strings of the input information using one or more characters of the correction information, to calculate degrees of similarity between character strings of the input information before and after editing. Specifically, the degree-of-similarity calculation unit 50 sets each of the first to n-th characters of the input information as an edit start position and edits the input information by changing its characters into the characters of the correction information from that edit start position. The degree-of-similarity calculation unit 50 calculates the degree of similarity between the character strings of the input information before and after each editing. Here, "n" is determined based on the number of characters of the input information and the number of characters of the correction information; for example, it is calculated as "n = (the number of characters of the input information) - (the number of characters of the correction information)". That is, the degree-of-similarity calculation unit 50 edits the character strings of the input information n times, calculating a degree of similarity for each editing process.
An example of calculation of the degree of similarity between character strings will be described with reference to
As shown in
The degree-of-similarity calculation unit 50 then sets the edit start position to the second character “D” of the input information. The degree-of-similarity calculation unit 50 starts editing from the second character “D” of the input information. As shown in
In this manner, the degree-of-similarity calculation unit 50 edits character strings of the input information sequentially from the edit start position of the first to n-th characters of the input information using one or more characters of the correction information, to calculate respective degrees of similarity between character strings of the input information before and after editing.
Any algorithm may be adopted for a degree-of-similarity calculation method. For example, the degree-of-similarity calculation method may adopt algorithms calculating Levenshtein distance, Jaro-Winkler distance, etc.
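As one concrete example of such an algorithm, the Levenshtein distance can be computed by the standard dynamic-programming recurrence; this sketch is illustrative and not the device's actual implementation:

```python
def levenshtein(a: str, b: str) -> int:
    """Levenshtein distance: the minimum number of single-character
    inserts, deletes, and replaces that turn a into b."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            cur.append(min(prev[j] + 1,          # delete from a
                           cur[j - 1] + 1,       # insert into a
                           prev[j - 1] + cost))  # replace (or match)
        prev = cur
    return prev[-1]

print(levenshtein("ADC", "ABC"))  # one replace -> 1
```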
In the first embodiment, the degree-of-similarity calculation unit 50 calculates a distance between character strings as the degree of similarity. In the distance between character strings, a smaller distance between character strings shows a higher degree of similarity therebetween, and a larger distance between character strings means a lower degree of similarity therebetween. An example of the configuration calculating the distance between character strings will hereinafter be described.
Referring back to
The distance calculation unit 51 edits character strings of the input information using one or more characters of the correction information, to calculate distances between character strings of the input information before and after editing. Specifically, the distance calculation unit 51 carries out at least any one of editing processes of insert, delete, and replace on character strings of the input information, to thereby calculate the distances between character strings of the input information before and after editing. The distance calculation unit 51 acquires the input information before editing from the input storage 40.
As used herein, “delete” means deleting one character of the input information character string. “Insert” means inserting one character into the input information character string. “Replace” means replacing one character of the input information character string with another one.
Referring to
The examples shown in
The example shown in
In the case of changing the first to third characters “ADC” of the input information into the correction information “ABC”, the distance calculation unit 51 compares the characters of the correction information and the first to third characters of the input information, to identify the position of a character to be changed among the first to third characters of the input information. In the example of
After identifying the position of the character to be changed, the distance calculation unit 51 edits the character at the identified position. For example, the distance calculation unit 51 deletes the second character “D” of the input information. The distance calculation unit 51 then inserts the second character “B” of the correction information into the deleted portion. In this manner, in the example of
The distance calculation unit 51 calculates the distances between character strings of the input information before and after editing, based on the number of edits and the editing cost. For example, if the delete cost is “+1” with the insert cost of “+1”, the distance calculation unit 51 calculates the distance between character strings of the input information before and after editing to be “+2” since the delete and the insert are each performed once in the example shown in
The example shown in
In the example of
After identifying the position of the character to be changed, the distance calculation unit 51 edits the character at the identified position. For example, the distance calculation unit 51 replaces the second character “D” of the input information with “B”. In this manner, in the example of
The distance calculation unit 51 calculates the distance between character strings of the input information before and after editing, based on the number of edits and the editing cost. For example, if the replace cost is "+3", the distance calculation unit 51 calculates the distance between character strings of the input information before and after editing to be "+3" since the replace is performed once in the example shown in
Referring next to
In the case of changing the second to fourth characters “DCA” of the input information into the correction information “ABC”, the distance calculation unit 51 compares the characters of the correction information and the second to fourth characters of the input information, to identify the position of a character to be changed among the second to fourth characters of the input information. In the example of
After identifying the position of the character to be changed, the distance calculation unit 51 edits the character at the identified position. For example, the distance calculation unit 51 deletes the second character “D” of the input information, and then inserts the first character “A” of the correction information into the deleted portion. The distance calculation unit 51 deletes the third character “C” of the input information, and then inserts the second character “B” of the correction information into the deleted portion. Furthermore, the distance calculation unit 51 deletes the fourth character “A” of the input information, and then inserts the third character “C” of the correction information into the deleted portion. In this manner, in the example of
In the example of
When comparing the example of
In this manner, the distance calculation unit 51 carries out at least any one of editing processes of the insert, delete, and replace on the input information character string, to thereby calculate the distance between character strings of the input information before and after editing. Note that the above numerical values of the editing cost of the delete, insert, and replace are mere exemplifications and that the present disclosure is not limited thereto. The editing cost may be set to any numerical value.
Information of the distances between character strings calculated by the distance calculation unit 51 is transmitted to the correction processing unit 60.
The attribute determination unit 52 determines into which attribute among a plurality of attributes the correction information is classified. For example, the attribute determination unit 52 receives correction information from the determination unit 30 and determines into which attribute between the first attribute information and the second attribute information of the input information shown in
For example, if the character information of the correction information is one or more alphabetical letters, the attribute determination unit 52 recognizes that the correction information is information of the number part of an automobile. In this case, the attribute determination unit 52 determines that the correction information is the first attribute information. Alternatively, if the character information of the correction information is a place name, the attribute determination unit 52 recognizes that the correction information is information of the place name. In this case, the attribute determination unit 52 determines that the correction information is the second attribute information.
The attribute information determined by the attribute determination unit 52 is transmitted to the distance calculation unit 51. The distance calculation unit 51 determines which character string is to be edited among a plurality of character strings of the input information, based on the attribute information determined by the attribute determination unit 52. For example, if the correction information is classified into the first attribute information, the distance calculation unit 51 calculates the distance of the part of “ABC AECD” shown in
Rapid and smooth correction of input information becomes feasible by calculating the distance based on the attribute information in this manner.
The correction processing unit 60 corrects an input information character string based on the degree of similarity calculated by the degree-of-similarity calculation unit 50. As described above, the degree-of-similarity calculation unit 50 edits character strings of the input information n times, to calculate a degree of similarity for each editing process. The correction processing unit 60 identifies the editing process having the highest degree of similarity from among the plurality of degrees of similarity calculated by the degree-of-similarity calculation unit 50. The correction processing unit 60 corrects the input information based on the editing process having the highest degree of similarity.
In the first embodiment, the correction processing unit 60 corrects an input information character string based on the distance between character strings calculated by the distance calculation unit 51. The correction processing unit 60 corrects an input information character string of a portion having a smallest distance among the distances between character strings calculated by the distance calculation unit 51. For example, when comparing the example of
Processing will be described that is performed when there exist a plurality of editing processes having a highest degree of similarity among a plurality of degrees of similarity calculated by the degree-of-similarity calculation unit 50. Due to use of the distance between character strings as the degree of similarity in the first embodiment, description will be made using the distance between character strings.
In the example of
Since the delete and the insert are each performed once in the example of
In this manner, if there exist a plurality of portions having a smallest distance among a plurality of distances calculated by the distance calculation unit 51 in a character string of the input information, the correction processing unit 60 corrects a character of the first-calculated portion having a smallest distance. In other words, if there exist a plurality of portions having a highest degree of similarity among a plurality of degrees of similarity calculated by the degree-of-similarity calculation unit 50 in a character string of the input information, the correction processing unit 60 corrects a character of a first-calculated portion having a highest degree of similarity.
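This first-calculated tie-break can be sketched as follows; Python's min() returns the first minimal element when keys tie, which matches the behavior described:

```python
def pick_correction(candidates):
    """candidates: list of (distance, edited_string) in calculation order.
    min() keeps the first element among equal-distance ties, i.e. the
    first-calculated portion having the smallest distance."""
    return min(candidates, key=lambda c: c[0])

result = pick_correction([(2, "ABC AECD"), (4, "AAB CECD"), (2, "ABA BCCD")])
print(result)  # (2, 'ABC AECD') -- the first of the two distance-2 candidates
```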
The input information corrected by the correction processing unit 60 is transmitted to the input storage 40.
The display 70 displays input information and corrected input information. The display 70 acquires the input information and the corrected input information from the input storage 40. The display 70 can be implemented by e.g. a display or a head-up display.
The elements making up the input device 1 can be implemented by e.g. a semiconductor element. The elements making up the input device 1 can be e.g. a microcomputer, a CPU, an MPU, a GPU, a DSP, an FPGA, or an ASIC. The functions of the elements making up the input device 1 may be implemented by hardware only or by combination of hardware and software.
The elements making up the input device 1 are collectively controlled by e.g. a controller. The controller comprises e.g. a memory storing programs and a processing circuit (not shown) corresponding to a processor such as a central processing unit (CPU). For example, in the controller, the processor executes a program stored in the memory. In the first embodiment, the controller controls the input unit 10, the information processing unit 20, the determination unit 30, the input storage 40, the degree-of-similarity calculation unit 50, the correction processing unit 60, and the display 70.
Referring to
As shown in
The voice information input at step ST1 is used as input information or correction information. In the case of inputting the input information in the form of voice information, as in the example of
At step ST2, the voice information is converted into text information by the information processing unit 20. At Step ST2, the voice information input to the input unit 10 at step ST1 is converted into text information (character information). The input information and the correction information are hereby acquired. At this time, the information processing unit 20 may erroneously recognize and convert the voice information. For example, as in the example of
At step ST3, it is determined by the determination unit 30 whether the information input to the input unit 10 is the input information or the correction information. Specifically, the determination unit 30 determines whether it is the input information or the correction information, based on the number of characters of the character information obtained by the text conversion.
If at step ST3 the determination unit 30 determines that the information input to the input unit 10 is the input information, the process proceeds to step ST4. If the determination unit 30 determines that the information input to the input unit 10 is the correction information, the process proceeds to step ST5.
At step ST4, the input information is displayed by the display 70.
At step ST5, character strings of the input information are edited using one or more characters of the correction information by the degree-of-similarity calculation unit 50, to calculate the degrees of similarity between character strings of the input information before and after editing. In the first embodiment, at step ST5, the distances between character strings are calculated as the degrees of similarity between character strings.
Step ST5 includes step ST5A of determining the attribute of the correction information and step ST5B of calculating the distance between character strings.
At step ST5A, it is determined by the attribute determination unit 52 into which attribute among a plurality of attributes the correction information is classified. For example, at step ST5A, it is determined by the attribute determination unit 52 into which attribute between the first attribute information and the second attribute information shown in the example of
At step ST5B, the distances between character strings are calculated by the distance calculation unit 51, based on the attributes of the input information and the correction information. For example, if the correction information is classified into the attribute of the first attribute information, at step ST5B the distance calculation unit 51 edits a portion of the first attribute information of the input information using one or more characters of the correction information, to calculate the distances between character strings of the input information before and after editing.
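One possible sketch of the attribute determination at step ST5A is to match the correction against a small gazetteer of place names: a match is classified as the place-name attribute and anything else as the number-part attribute. The gazetteer contents and the two attribute labels are hypothetical, not given in the disclosure.

```python
KNOWN_PLACES = {"Chicago", "Springfield", "Denver"}  # hypothetical gazetteer

def attribute_of(correction: str) -> str:
    """Classify correction information into one of two assumed attributes.

    "place"  -> second attribute information (place-name part of the plate)
    "number" -> first attribute information (number part of the plate)
    """
    if correction in KNOWN_PLACES:
        return "place"
    return "number"
```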
At step ST6, a character string of the input information is corrected by the correction processing unit 60, based on the degrees of similarity. Specifically, the correction processing unit 60 corrects the character string of the portion of the input information having the smallest distance among the distances between character strings calculated at step ST5B.
After correcting the input information at step ST6, the process proceeds to step ST4. As a result, the corrected input information is displayed by the display 70.
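Steps ST5 and ST6 together can be sketched with Python's standard `difflib` ratio standing in for the degree of similarity. The disclosure itself calculates distances between character strings; the ratio used here and the split of the input information into space-separated portions are both assumptions for illustration.

```python
from difflib import SequenceMatcher

def correct(tokens, correction):
    """Apply the correction to the most similar portion of the input information.

    tokens: portions of the input information (assumed space-separated here).
    The portion with the highest degree of similarity to the correction
    is replaced by the correction (steps ST5 and ST6).
    """
    sims = [SequenceMatcher(None, t, correction).ratio() for t in tokens]
    best = max(range(len(tokens)), key=sims.__getitem__)
    out = list(tokens)
    out[best] = correction
    return out
```

For example, given the misrecognized portion "Chicag0", uttering "Chicago" replaces that portion and leaves the rest of the input information untouched.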
Referring to
The example shown in
The example shown in
According to the input device 1 and the input method of the first embodiment, the following effects can be achieved.
The input device 1 is an input device mounted on a moving body and comprises the input unit 10, the information processing unit 20, the determination unit 30, the input storage 40, the degree-of-similarity calculation unit 50, the correction processing unit 60, and the display 70. The input unit 10 accepts input information containing a character string and correction information containing one or more characters through voice input. The information processing unit 20 converts the voice information input to the input unit 10 into text information. The determination unit 30 determines whether the voice information input to the input unit 10 is the input information or the correction information. The input storage 40 is a storage medium storing the input information. The degree-of-similarity calculation unit 50 edits character strings of the input information using one or more characters of the correction information, to calculate the degrees of similarity between character strings of the input information before and after editing. The correction processing unit 60 corrects a character string of the input information based on the degrees of similarity calculated by the degree-of-similarity calculation unit 50. The display 70 displays the input information and the corrected input information.
Such a configuration enables input information to be easily corrected even if the input information is erroneously accepted. Further, rapid and smooth correction of input information can be achieved by voice input even when the user is driving a moving body such as an automobile.
The degree-of-similarity calculation unit 50 includes the distance calculation unit 51 that edits character strings of the input information using one or more characters of the correction information, to calculate the degrees of similarity between character strings of the input information before and after editing. The correction processing unit 60 corrects the input information character string based on the distance between character strings calculated by the distance calculation unit 51.
With such a configuration, the degrees of similarity can be calculated based on the distances between character strings so that the input information can be corrected more easily. Further, the correction accuracy can be improved.
The distance calculation unit 51 carries out at least one of insert, delete, and replace editing operations on character strings of the input information, to thereby calculate distances between character strings of the input information before and after editing. The distance calculation unit 51 acquires the input information before editing from the input storage 40.
With such a configuration, the input information can be corrected more easily. Further, the correction accuracy can be improved.
The correction processing unit 60 corrects an input information character string of a portion having a smallest distance among distances between character strings calculated by the distance calculation unit 51.
With such a configuration, the input information can be corrected more easily. Further, the correction accuracy can be further improved.
Input information has a plurality of attributes for classifying a plurality of character strings of the input information. The degree-of-similarity calculation unit 50 includes the attribute determination unit 52 that determines into which attribute among a plurality of attributes the correction information is classified. The degree-of-similarity calculation unit 50 calculates the degrees of similarity between the input information and the correction information.
Such a configuration enables the input information to be corrected more rapidly and smoothly.
The correction processing unit 60 corrects a character of the portion having the highest degree of similarity within the character string of the input information whose attribute matches that of the correction information.
With such a configuration, the input information can be corrected more easily. Further, the input information can be corrected more rapidly and smoothly.
If there exist a plurality of portions having the highest degree of similarity among the plurality of degrees of similarity calculated by the degree-of-similarity calculation unit 50 in a character string of the input information, the correction processing unit 60 corrects a character of the first-calculated portion having the highest degree of similarity.
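This tie-breaking rule falls out naturally if the maximum is taken over the degrees of similarity in calculation order, as in this small sketch (the score values are illustrative):

```python
scores = [0.6, 0.9, 0.9]  # illustrative degrees of similarity per portion
best = max(range(len(scores)), key=scores.__getitem__)
# Python's max() keeps the first of equal candidates, so the
# first-calculated portion (index 1) wins over the later tie (index 2).
```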
With such a configuration, the input information can be corrected more easily. Further, the input information can be corrected more rapidly and smoothly.
The input method of the first embodiment also has the same effect as the effect of the input device 1 described above.
In the first embodiment, the example has been described where the input information is the character string information of the automobile license plate, but the present disclosure is not limited thereto. Any input information is acceptable as long as it has character string information. For example, the input information may include character string information of an address, a place name, a person's name, a building name, a telephone number, etc.
In the first embodiment, the example has been described where the input information has a plurality of character strings, but the present disclosure is not limited thereto. For example, the input information may have one or more character strings.
In the first embodiment, the example has been described where the input information and the correction information have attribute information, but the present disclosure is not limited thereto. For example, the input information and the correction information may not have the attribute information.
In the first embodiment, the example has been described where the attribute information includes the first attribute information indicative of the number part of the automobile license plate and the second attribute information indicative of the place name, but the present disclosure is not limited thereto. The attribute information may be information indicative of an attribute. For example, the attribute information may be a code such as Alpha and Bravo.
Although in the first embodiment,
Although in the first embodiment, the example has been described where the input unit 10 comprises the voice input unit, the present disclosure is not limited thereto. The input unit 10 may allow input of input information and correction information. For example, the input unit 10 may comprise an input interface such as a touch panel or a keyboard. Alternatively, the input unit 10 may comprise an image acquisition unit. In this case, character information is acquired from image information obtained by the image acquisition unit.
Although in the first embodiment, the example has been described where the input device 1 comprises the information processing unit 20 and the determination unit 30, the present disclosure is not limited thereto. The information processing unit 20 and the determination unit 30 are not essential constituent elements. For example, in the case where information input to the input unit 10 is character information that is text information, the input device 1 may not comprise the information processing unit 20. Further, in the case where the input information and the correction information are acquired by respective different devices, the input device 1 may not comprise the determination unit 30.
Although in the first embodiment, the example has been described where the determination unit 30 determines the input information and the correction information based on the number of characters, the present disclosure is not limited thereto. For example, the determination unit 30 may determine the input information and the correction information based on the attribute information, etc.
In the first embodiment, the example has been described where the input device 1 comprises the input storage 40, but this is not limitative. The input storage 40 may not be an essential constituent element.
In the first embodiment, the distances between character strings calculated by the distance calculation unit 51 have been described as the example of the degrees of similarity of the degree-of-similarity calculation unit 50, but this is not limitative. The distance calculation unit 51 may not be an essential constituent element. The degree-of-similarity calculation unit 50 may be able to calculate the degrees of similarity between character strings. For example, algorithms calculating Levenshtein distance, Jaro-Winkler distance, etc. can be used as the algorithm for calculating the degree of similarity between character strings.
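For reference, the Levenshtein distance named above can be computed with the standard dynamic-programming recurrence over insert, delete, and replace edits:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insert, delete, and replace edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # replace (or match)
        prev = cur
    return prev[-1]
```

For instance, the misrecognized portion "AECD" is a single replace edit away from "ABCD", so its distance is 1.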
In the first embodiment, the example has been described where the degree-of-similarity calculation unit 50 comprises the attribute determination unit 52, but this is not limitative. The attribute determination unit 52 may not be an essential constituent element.
Although in the first embodiment, the example has been described where the input device 1 comprises the display 70, this is not limitative. The display 70 is not an essential constituent element. For example, the input device 1 may comprise a voice output unit audibly outputting the input information, in place of the display 70. Alternatively, the input device 1 may comprise both the display 70 and the voice output unit.
Although in the first embodiment, the example has been described where the input device 1 comprises the input unit 10, the information processing unit 20, the determination unit 30, the input storage 40, the degree-of-similarity calculation unit 50, the correction processing unit 60, and the display 70, this is not limitative. The elements making up the input device 1 may be increased or decreased. Alternatively, two or more elements of the plurality of elements making up the input device 1 may be integrated.
Although in the first embodiment, the example has been described where the input method includes steps ST1 to ST6, this is not limitative. The input method may include an increased or decreased number of steps or an integrated step. For example, if the input information and the correction information are input by different methods, the input method may not include step ST3. Alternatively, if the input information does not include the attribute information, the input method may not include step ST5A.
An input device according to a second embodiment of the present disclosure will be described. In the second embodiment, differences from the first embodiment will mainly be described. In the second embodiment, the same or equivalent constituent elements as those in the first embodiment will be described with the same reference numerals. Further, in the second embodiment, descriptions overlapping with those of the first embodiment are omitted.
An example of the input device of the second embodiment will be described with reference to
The second embodiment differs from the first embodiment in that input information is acquired by an image acquisition unit 11 and that correction information is acquired by a voice input unit 12.
As shown in
The input unit 10A includes the image acquisition unit 11 and the voice input unit 12.
The image acquisition unit 11 acquires image information. The image acquisition unit 11 is e.g. a camera that captures an image of a character string to be input. In the second embodiment, the image acquisition unit 11 acquires image information containing a character string written on a license plate of an automobile. For example, the image acquisition unit 11 acquires image information containing an automobile license plate written as “ABC AECD, Chicago”. The image information acquired by the image acquisition unit 11 is transmitted to the information processing unit 20A. The image information can be e.g. information such as a still image or a moving image.
The voice input unit 12 accepts voice information. The voice input unit 12 is e.g. a microphone that accepts user's voice information. For example, when the user utters “ABC” toward the voice input unit 12, the voice information is input to the voice input unit 12. The voice information input to the voice input unit 12 is transmitted to the information processing unit 20A.
In the second embodiment, the image acquisition unit 11 may be controlled by voice input to the voice input unit 12. For example, the user utters “Capture” as voice input toward the voice input unit 12. In response to this voice input as a trigger, the image acquisition unit 11 may acquire image information.
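The voice-triggered capture could be sketched as below. The keyword "Capture" follows the example in the text, while the callback interface is an assumption introduced for illustration.

```python
def on_utterance(text, capture):
    """Invoke the image-acquisition callback when the trigger word is heard."""
    if text.strip().lower() == "capture":
        return capture()   # trigger: acquire image information
    return None            # any other utterance is not a trigger
```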
The information processing unit 20A converts the image information and the voice information acquired by the input unit 10A into text information (character information). The information processing unit 20A includes an image processing unit 21, a voice processing unit 22, a first conversion unit 23, and a second conversion unit 24.
The image processing unit 21 performs a process of extracting character string information from the image information acquired by the image acquisition unit 11. For example, if the image information includes license plates of a plurality of automobiles, the image processing unit 21 extracts character string information written on the license plate of an automobile selected by the user. The image information processed by the image processing unit 21 is transmitted to the first conversion unit 23.
The voice processing unit 22 performs a process of extracting character information from the voice information input to the voice input unit 12. For example, if the voice information contains noise, the voice processing unit 22 extracts information of one or more characters uttered by the user while filtering the noise. The voice information processed by the voice processing unit 22 is transmitted to the second conversion unit 24.
The first conversion unit 23 converts character string information contained in the image information processed by the image processing unit 21, into text information. As a result, input information is acquired. As an algorithm for converting image information into character string information, for example, a method using deep learning, simple pattern matching, or the like can be used.
The second conversion unit 24 converts information of one or more characters contained in the voice information processed by the voice processing unit 22, into text information. As a result, correction information is acquired.
Since the input storage 40, the degree-of-similarity calculation unit 50, the correction processing unit 60, and the display 70 are the same as those in the first embodiment, description thereof will be omitted. In the second embodiment, the image information acquired by the image acquisition unit 11 and the image information processed by the image processing unit 21 may be transmitted to and displayed on the display 70.
The elements making up the input device 1A can be implemented by, e.g. the semiconductor element. The elements making up the input device 1A can be e.g. the microcomputer, the CPU, the MPU, the GPU, the DSP, the FPGA, or the ASIC. The functions of the elements making up the input device 1A may be implemented by hardware only or by combination of hardware and software.
The elements making up the input device 1 are collectively controlled by e.g. the controller. The controller comprises e.g. the memory storing programs and the processing circuit (not shown) corresponding to the processor such as the central processing unit (CPU). For example, in the controller, the processor executes a program stored in the memory. In the second embodiment, the controller controls the input unit 10A, the information processing unit 20A, the input storage 40, the degree-of-similarity calculation unit 50, the correction processing unit 60, and the display 70.
Referring to
As shown in
At step ST11, the character string information contained in the image information acquired by the image acquisition unit 11 is converted into text information (character information) by the image processing unit 21 and the first conversion unit 23. For example, if there exists character string information “ABC AECD, Chicago” in the image information, this character string information is converted into text information. Input information is thus acquired. At this time, similar to the example shown in
At step ST12, the input information is displayed by the display 70. At step ST12, the input information acquired based on the image information is displayed by the display 70. The user can confirm the input information displayed on the display 70. As a result, the user can confirm that the input information is erroneously accepted.
At step ST13, voice information is accepted by the voice input unit 12. At step ST13, when the user utters “ABC”, voice information is input to the voice input unit 12.
At step ST14, information of one or more characters contained in the voice information accepted by the voice input unit 12 is converted into text information by the voice processing unit 22 and the second conversion unit 24. Correction information is thus acquired.
At step ST15, character strings of the input information are edited using one or more characters of the correction information by the degree-of-similarity calculation unit 50, to calculate the degrees of similarity between character strings of the input information before and after editing.
Step ST15 includes step ST15A of determining the attribute of the correction information and step ST15B of calculating the distance between character strings. Since steps ST15A and ST15B are the same as steps ST5A and ST5B of the first embodiment, description thereof will be omitted.
At step ST16, the input information character string is corrected based on the degree of similarity by the correction processing unit 60.
At step ST17, the corrected input information is displayed by the display 70.
Referring to
As shown in
As shown in
As shown in
As shown in
As in the examples of
Referring to
As shown in
As shown in
As shown in
As in the example shown in
Referring to
As shown in
As shown in
As in the example shown in
Referring to
As shown in
For this reason, as shown in
As shown in
In the example shown in
As in the example shown in
According to the input device 1A and the input method of the second embodiment, the following effects can be achieved.
In the input device 1A, the input information includes image information in which a character string is captured, and the correction information includes voice information containing information of one or more characters. The input unit 10A includes the image acquisition unit 11 acquiring image information and the voice input unit 12 accepting voice information. The information processing unit 20A includes the first conversion unit 23 and the second conversion unit 24. The first conversion unit 23 converts character string information contained in the image information acquired by the image acquisition unit 11, into text information. The second conversion unit 24 converts information of one or more characters contained in the voice information input to the voice input unit 12, into text information.
With such a configuration, the input information can be corrected more easily. Further, by acquiring input information from image information and by accepting correction information in the form of voice information, rapid and smooth acquisition and correction of the input information can be achieved.
The input method of the second embodiment also presents the same effects as those of the input device 1A described above.
Although in the second embodiment, the example has been described where the information processing unit 20A comprises the image processing unit 21 and the voice processing unit 22, the present disclosure is not limited thereto. The image processing unit 21 and the voice processing unit 22 may not be essential constituent elements.
Although in the second embodiment, the example has been described where the input method includes steps ST10 to ST17, the present disclosure is not limited thereto. The input method may include an increased or decreased number of steps or an integrated step. For example, the input method may include a step of determining whether or not correction information is input. In this case, if the correction information is input, the process may proceed to steps ST14 to ST17. If the correction information is not input, the process may come to an end.
Although in the second embodiment, the examples of acquisition of the input information have been described with the examples shown in
An input device according to a third embodiment of the present disclosure will be described. In the third embodiment, differences from the second embodiment will mainly be described. In the third embodiment, the same or equivalent constituent elements as those in the second embodiment will be described with the same reference numerals. Further, in the third embodiment, descriptions overlapping with those of the second embodiment are omitted.
An example of the input device of the third embodiment will be described with reference to
The third embodiment differs from the second embodiment in comprising a line-of-sight detection unit 13.
As shown in
The line-of-sight detection unit 13 detects the user's line of sight. The line-of-sight detection unit 13 is e.g. a camera that captures a user's face portion. Information of the user's line-of-sight detected by the line-of-sight detection unit 13 is transmitted to the image processing unit 21.
As shown in
The image processing unit 21 may display a rectangular frame for the automobile C3 determined to be looked at by the user. This enables the user to confirm the automobile being selected by the user's own line of sight.
When the user utters “Capture” toward the voice input unit 12, the image acquisition unit 11 acquires image information of the license plate portion of the automobile C3. The first conversion unit 23 converts character string information contained in the image information into text information.
In this manner, in the image information containing plural pieces of character string information, input information can be acquired by selecting one piece of character string information from among plural pieces of character string information in accordance with the user's line of sight, based on the user's line-of-sight information.
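Selecting one character string in accordance with the user's line of sight could be sketched as picking the detected plate whose bounding-box center lies nearest the gaze point. The box format, the coordinate data, and the plate texts below are illustrative assumptions.

```python
def select_by_gaze(plates, gaze):
    """Select the plate text whose bounding-box center is nearest the gaze point.

    plates: list of (text, (x, y, w, h)) bounding boxes in image coordinates.
    gaze:   (x, y) gaze point from the line-of-sight detection unit.
    """
    def sq_dist(box):
        x, y, w, h = box
        cx, cy = x + w / 2, y + h / 2
        return (cx - gaze[0]) ** 2 + (cy - gaze[1]) ** 2
    return min(plates, key=lambda p: sq_dist(p[1]))[0]
```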
According to the input device 1B of the third embodiment, the following effect can be achieved.
The input unit 10B of the input device 1B comprises the line-of-sight detection unit 13 in addition to the image acquisition unit 11 and the voice input unit 12. With such a configuration, the user's line-of-sight information can be acquired by the line-of-sight detection unit 13. Hereby, for example, in the image information containing plural pieces of character string information, input information can be acquired by selecting one piece of character string information from among plural pieces of character string information, based on the user's line-of-sight information. This results in rapid and smooth acquisition of input information.
An input system according to a fourth embodiment of the present disclosure will be described. In the fourth embodiment, differences from the second embodiment will mainly be described. In the fourth embodiment, the same or equivalent constituent elements as those in the second embodiment will be described with the same reference numerals. Further, in the fourth embodiment, descriptions overlapping with those of the second embodiment are omitted.
An example of the input system of the fourth embodiment will be described with reference to
As shown in
The arithmetic processing device 80 acquires image information and voice information, for transmission to the server 90.
The arithmetic processing device 80 comprises the input unit 10A, the display 70, a storage 81, and a first communication unit 82. The input unit 10A and the display 70 are the same as those of the second embodiment and hence will not again be explained.
The storage 81 is a storage medium that stores information acquired by the input unit 10A and information received from the server 90. Specifically, the storage 81 stores image information acquired by the image acquisition unit 11, voice information accepted by the voice input unit 12, and information processed by the server 90.
The storage 81 can be implemented by a hard disk drive (HDD), an SSD, a RAM, a DRAM, a ferroelectric memory, a flash memory, a magnetic disk, or a combination thereof.
The first communication unit 82 communicates with the server 90 via a network. The first communication unit 82 includes a circuit that communicates with the server 90 in accordance with a predetermined communication standard. The predetermined communication standard includes e.g. LAN, Wi-Fi (registered trademark), Bluetooth (registered trademark), USB, HDMI (registered trademark), controller area network (CAN), and serial peripheral interface (SPI).
The arithmetic processing device 80 stores the image information and the voice information accepted by the input unit 10A, into the storage 81. By the first communication unit 82, the arithmetic processing device 80 transmits the image information and the voice information stored in the storage 81 to the server 90 via the network. By the first communication unit 82, the arithmetic processing device 80 receives input information from the server 90 via the network, for storage in the storage 81. The arithmetic processing device 80 displays the input information by the display 70.
The elements making up the arithmetic processing device 80 can be implemented by, e.g. the semiconductor element. The elements making up the arithmetic processing device 80 can be e.g. the microcomputer, the CPU, the MPU, the GPU, the DSP, the FPGA, or the ASIC. The functions of the elements making up the arithmetic processing device 80 may be implemented by hardware only or by combination of hardware and software.
The elements making up the arithmetic processing device 80 are collectively controlled by e.g. a first controller. The first controller comprises e.g. the memory storing programs and the processing circuit (not shown) corresponding to the processor such as the central processing unit (CPU). For example, in the first controller, the processor executes a program stored in the memory. In the fourth embodiment, the first controller controls the input unit 10A, the display 70, the storage 81, and the first communication unit 82.
The server 90 receives image information and voice information from the arithmetic processing device 80 and acquires input information and correction information based on the image information and the voice information. The server 90 corrects the input information obtained from the image information, based on the correction information obtained from the voice information.
The server 90 comprises the information processing unit 20A, the input storage 40, the degree-of-similarity calculation unit 50, the correction processing unit 60, and a second communication unit 91. The information processing unit 20A, the input storage 40, the degree-of-similarity calculation unit 50, and the correction processing unit 60 are the same as those of the second embodiment and hence will not again be explained.
The second communication unit 91 communicates with the arithmetic processing device 80 via the network. The second communication unit 91 includes a circuit that communicates with the arithmetic processing device 80 in accordance with a predetermined communication standard. The predetermined communication standard includes e.g. LAN, Wi-Fi (registered trademark), Bluetooth (registered trademark), USB, HDMI (registered trademark), controller area network (CAN), and serial peripheral interface (SPI).
By the second communication unit 91, the server 90 receives image information and voice information via the network from the arithmetic processing device 80. In the server 90, the received image information and voice information are transmitted to the information processing unit 20A.
The information processing unit 20A converts image information and voice information into text information to acquire input information and correction information. The input information is transmitted to the input storage 40 and is stored therein. The correction information is transmitted to the degree-of-similarity calculation unit 50. The degree-of-similarity calculation unit 50 edits character strings of the input information using one or more characters of the correction information, to calculate the degree of similarity between character strings of the input information before and after editing. The degree-of-similarity information is transmitted to the correction processing unit 60. The correction processing unit 60 corrects the input information character string based on the degree of similarity. The corrected input information is transmitted to the input storage 40 and stored therein.
By the second communication unit 91, the server 90 transmits the input information stored in the input storage 40 to the arithmetic processing device 80 via the network.
The elements making up the server 90 can be implemented by, e.g. the semiconductor element. The elements making up the server 90 can be e.g. the microcomputer, the CPU, the MPU, the GPU, the DSP, the FPGA, or the ASIC. The functions of the elements making up the server 90 may be implemented by hardware only or by combination of hardware and software.
The elements making up the server 90 are collectively controlled by e.g. a second controller. The second controller comprises e.g. the memory storing programs and the processing circuit (not shown) corresponding to the processor such as the central processing unit (CPU). For example, in the second controller, the processor executes a program stored in the memory. In the fourth embodiment, the second controller controls the information processing unit 20A, the input storage 40, the degree-of-similarity calculation unit 50, the correction processing unit 60, and the second communication unit 91.
Referring to the drawings, the input method performed by the input system 100 of the fourth embodiment will next be described.
As shown in the drawings, the input method includes steps ST20 to ST31. At step ST20, image information is accepted by the input unit 10A of the arithmetic processing device 80.
At step ST21, the image information is transmitted via the network to the server 90 by the first communication unit 82 of the arithmetic processing device 80. The server 90 receives the image information by the second communication unit 91.
At step ST22, character string information contained in the image information is converted into text information by the information processing unit 20A of the server 90. The input information is thus acquired.
At step ST23, the input information is transmitted via the network to the arithmetic processing device 80 by the second communication unit 91 of the server 90. The arithmetic processing device 80 receives the input information by the first communication unit 82.
At step ST24, the input information is displayed by the display 70 of the arithmetic processing device 80. This enables the user to confirm whether or not the input information has been accepted erroneously.
At step ST25, voice information is accepted by the voice input unit 12 of the arithmetic processing device 80.
At step ST26, the voice information is transmitted via the network to the server 90 by the first communication unit 82 of the arithmetic processing device 80. The server 90 receives the voice information by the second communication unit 91.
At step ST27, information of one or more characters contained in the voice information is converted into text information by the information processing unit 20A of the server 90. The correction information is thus acquired.
At step ST28, character strings of the input information are edited using one or more characters of the correction information by the degree-of-similarity calculation unit 50 of the server 90, to calculate the degrees of similarity between character strings of the input information before and after editing.
Step ST28 includes the step ST28A of determining the attribute of the correction information and the step ST28B of calculating the distance between character strings. Steps ST28A and ST28B are the same as steps ST15A and ST15B of the second embodiment and hence the explanations thereof are omitted.
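The embodiments leave the exact string-distance metric of step ST28B open; a common choice is the Levenshtein (edit) distance, from which a normalized degree of similarity in [0, 1] can be derived. The sketch below assumes that convention; the function names are illustrative and not taken from the disclosure.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Degree of similarity in [0, 1]: 1.0 means identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))
```

Under this convention, a candidate edit that changes fewer characters of the accepted string yields a higher degree of similarity.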
At step ST29, a character string of the input information is corrected based on the degrees of similarity by the correction processing unit 60 of the server 90.
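As one illustrative way to realize the editing of step ST28 and the correction of step ST29 (the disclosure does not prescribe this exact procedure, and the function name is hypothetical), the correction fragment can be substituted at every position of the accepted string, each candidate scored against the string before editing, and the candidate with the highest degree of similarity, i.e. the placement requiring the fewest real changes, adopted. `SequenceMatcher.ratio()` stands in here for any degree-of-similarity metric.

```python
from difflib import SequenceMatcher

def apply_correction(accepted: str, correction: str) -> str:
    """Try the correction fragment at every position of the accepted
    string and keep the candidate most similar to the original, i.e.
    the placement that requires the fewest actual character changes."""
    best, best_score = accepted, -1.0
    for i in range(len(accepted) - len(correction) + 1):
        candidate = accepted[:i] + correction + accepted[i + len(correction):]
        if candidate == accepted:
            continue  # nothing would change at this position
        score = SequenceMatcher(None, accepted, candidate).ratio()
        if score > best_score:
            best, best_score = candidate, score
    return best
```

For example, a license plate misread as "ABC1234" with the spoken correction "84" is corrected at the position that disturbs the original least, yielding "ABC1284".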
At step ST30, the corrected input information is transmitted via the network to the arithmetic processing device 80 by the second communication unit 91 of the server 90. The arithmetic processing device 80 receives the corrected input information by the first communication unit 82.
At step ST31, the corrected input information is displayed by the display 70 of the arithmetic processing device 80.
According to the input system and the input method of the fourth embodiment, the following effects can be achieved.
The input system 100 comprises the arithmetic processing device 80 mounted on a moving body and the server 90 that communicates with the arithmetic processing device 80 via a network. The arithmetic processing device 80 comprises the input unit 10A, the display 70, the storage 81, and the first communication unit 82. The input unit 10A accepts image information and voice information. The display 70 displays input information. The storage 81 stores the image information, the voice information, and the input information. The first communication unit 82 communicates with the server 90 via the network. The server 90 includes the information processing unit 20A, the input storage 40, the degree-of-similarity calculation unit 50, the correction processing unit 60, and the second communication unit 91. The information processing unit 20A converts image information and voice information into text information. The input storage 40 stores input information. The degree-of-similarity calculation unit 50 edits character strings of the input information using one or more characters of the correction information and calculates the degrees of similarity between character strings of the input information before and after editing. The correction processing unit 60 corrects a character string of the input information based on the degrees of similarity calculated by the degree-of-similarity calculation unit 50.
Such a configuration enables input information to be corrected more easily. Further, by acquiring the input information as image information and by accepting the correction information as voice information, rapid and smooth acquisition and correction of the input information can be achieved.
In the input system 100, the image information and the voice information acquired by the arithmetic processing device 80 are transmitted to the server 90 so that the server 90 corrects the input information based on these pieces of information. This achieves a reduction in processing load on the arithmetic processing device 80.
The input method of the fourth embodiment also provides the same effects as those of the input system 100 described above.
In the fourth embodiment, the example has been described where the input system 100 acquires the input information based on the image information and acquires the correction information based on the voice information, but the present disclosure is not limited thereto. It suffices that the input system 100 can acquire input information containing a character string and correction information containing one or more characters. The input information may be acquired based on, e.g., voice information acquired by the voice input unit or character information acquired by the input interface. The correction information may also be acquired based on, e.g., character information acquired by the input interface.
In the fourth embodiment, the example has been described where the input system comprises the arithmetic processing device 80 and the server 90, but the present disclosure is not limited thereto. The input system 100 may comprise equipment other than the arithmetic processing device 80 and the server 90. The input system 100 may comprise a plurality of arithmetic processing devices 80.
In the fourth embodiment, the example has been described where the arithmetic processing device 80 includes the input unit 10A, the display 70, the storage 81, and the first communication unit 82, but the present disclosure is not limited thereto. The display 70 and the storage 81 are not essential constituent elements. The elements making up the arithmetic processing device 80 may be increased or decreased. Alternatively, two or more elements of a plurality of elements making up the arithmetic processing device 80 may be integrated. For example, the arithmetic processing device 80 may include the information processing unit 20A.
Although in the fourth embodiment, the example has been described where the server 90 includes the information processing unit 20A, the input storage 40, the degree-of-similarity calculation unit 50, the correction processing unit 60, and the second communication unit 91, the present disclosure is not limited thereto. The information processing unit 20A and the input storage 40 are not essential constituent elements. The elements making up the server 90 may be increased or decreased. Alternatively, two or more of a plurality of elements making up the server 90 may be integrated.
Although in the fourth embodiment, the example has been described where the input method includes steps ST20 to ST31, the present disclosure is not limited thereto. The input method may include an increased or decreased number of steps or an integrated step. For example, the input method may include a step of determining whether or not the correction information is accepted. In this case, if the correction information is accepted, the process may proceed to steps ST25 to ST31. If the correction information is not accepted, the process may come to an end.
The input devices 1, 1A, and 1B of the first to third embodiments and the input system 100 of the fourth embodiment may carry out a learning process that learns optimal corrections by using, as teaching data, the input information and the correction information acquired based on information (e.g. image information and voice information) input to the input units 10, 10A, and 10B. Carrying out the learning process improves the accuracy of correcting input information based on information input to the input units 10, 10A, and 10B. For example, the input devices 1, 1A, and 1B of the first to third embodiments and the input system 100 of the fourth embodiment may comprise a learning unit that learns using, as teaching data, the input information and the correction information acquired based on information (e.g. image information and voice information) input to the input units 10, 10A, and 10B. For example, the learning unit may execute machine learning in accordance with a neural network model.
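As a purely illustrative sketch of such a learning unit (the class and method names are hypothetical and not taken from the disclosure), accumulated pairs of accepted and corrected strings could serve as teaching data for learning which character confusions recur, so that frequent misrecognitions can be proposed first when correcting:

```python
from collections import Counter

class LearningUnit:
    """Illustrative sketch: accumulate (accepted, corrected) pairs as
    teaching data and count recurring character confusions."""

    def __init__(self):
        self.confusions = Counter()

    def learn(self, accepted: str, corrected: str):
        """Record which characters were replaced in a correction."""
        if len(accepted) != len(corrected):
            return  # this sketch only handles same-length substitutions
        for a, c in zip(accepted, corrected):
            if a != c:
                self.confusions[(a, c)] += 1

    def likely_fix(self, ch: str):
        """Most frequently learned replacement for a character, if any."""
        fixes = [(n, c) for (a, c), n in self.confusions.items() if a == ch]
        return max(fixes)[1] if fixes else None
```

A production learning unit could instead train a neural network model on the same teaching data, as the embodiments suggest; this counter-based variant merely illustrates how the accumulated pairs carry learnable signal.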
Although in the first to fourth embodiments, the example has been described where the moving body is an automobile, the present disclosure is not limited thereto. The moving body may be e.g. a motorcycle, an airplane, or a ship.
The input devices 1, 1A, and 1B of the first to third embodiments and the input system 100 of the fourth embodiment are particularly beneficial in the case where the moving body is a police vehicle. Police vehicles may need to correct input information in urgent situations. Compared with general vehicles, police vehicles operate in environments where noise is liable to occur and in situations where input information is likely to be erroneously recognized. Because they allow easy correction of input information, the input devices 1, 1A, and 1B and the input system 100 are particularly beneficial when mounted on police vehicles.
Although the present disclosure has been fully described in relation to the preferred embodiments with reference to the accompanying drawings, it will be obvious to those skilled in the art that various modifications and alterations are feasible. Such modifications and alterations should be construed as being encompassed within the scope of the present disclosure defined by the appended claims, without departing therefrom.
Because of enabling input information to be corrected easily, the present disclosure is useful for the input device mounted on a moving body such as an automobile.
This is a continuation application of International Application No. PCT/JP2019/038287, with an international filing date of Sep. 27, 2019, which claims priority of U.S. Provisional Application No. 62/740,677 filed on Oct. 3, 2018, the content of which is incorporated herein by reference.
Number | Date | Country
---|---|---
62740677 | Oct 2018 | US

Relation | Number | Date | Country
---|---|---|---
Parent | PCT/JP2019/038287 | Sep 2019 | US
Child | 17220113 | | US