This application is based on and incorporates herein by reference Japanese Patent Application No. 2003-311633 filed on Sep. 3, 2003.
The present invention relates to a voice (or speech) recognition device that recognizes a voice broadcasted from a broadcast station.
As a method of distributing traffic information, a VICS (Vehicle Information and Communication System) is known. However, VICS is a service still under development, so its service area is limited to certain cities and to areas along arterial roads. Traffic information provided by radio broadcasts dedicated to traffic information, which the Public Roads Administration or the like broadcasts, and traffic information provided by private radio broadcasts (traffic information broadcast between radio programs) are therefore still widely used.
Among these, with respect to the radio broadcast dedicated to traffic information, a driver needs to intentionally tune the reception frequency of a radio receiver to the broadcast frequency of that dedicated broadcast, so its convenience is limited. Further, even unrelated information (for example, information for the opposite direction on a highway) is broadcast, so the driver needs to listen and pick out the necessary information from among the traffic information broadcast.
Further, with respect to the private radio broadcast, the primary programming consists of news or entertainment programs, so the driver needs to listen and pick out traffic information from among the broadcast programs. Furthermore, the driver needs to determine whether the selected information is traffic information that the driver actually needs.
Thus, distributing traffic information by the radio broadcast dedicated to traffic information or by the private radio broadcast does not greatly enhance convenience for the driver. A road traffic information notice device described in Patent Document 1 is therefore known. This road traffic information notice device receives a radio broadcast, converts the received voice of the radio broadcast into language data by voice (or speech) recognition, extracts information necessary for the own vehicle from the converted language data, and notifies the driver of the necessary information by voice (or speech) synthesis.
According to this road traffic information notice device, the driver can grasp the information necessary for the own vehicle without listening to the radio broadcast and determining whether each piece of broadcast information is necessary for the own vehicle.
[Patent Document 1] JP 2000-222682 A
In recent years, voice recognition technology has developed remarkably, so that voices can be recognized with high accuracy and converted into language data irrespective of whether the speaker is old or young, male or female. However, determining whether the converted (or generated) language data (information) is really necessary for the own vehicle requires complicated and skilled inference or subjective judgment, so present technology cannot easily execute that determination. Further, private radio broadcasts often use irregular sentence forms, so it is especially difficult to determine whether the converted language data is necessary or not. Consequently, directly using the determination result for an important matter (e.g., automatically setting a detour route while traveling, automatically changing the destination, or the like) still poses a problem.
It is an object of the present invention to provide a voice recognition device or the like that enables language data based on a broadcast transmitted from a broadcast station to be used for an important matter.
To achieve the above object, a voice recognition device is provided with the following. A broadcast voice signal is retrieved from a broadcast station. The retrieved voice signal is recognized and converted into language data. The converted language data is then stored. Here, in an extracting process, language data according with a given condition is extracted from among the stored language data; then, in a notifying process, the information indicated by the extracted language data is notified so that the validity or invalidity of the language data is determined based on a command by a user.
Here, “language data” means data, based on a voice signal, in a quasi-language form that a person can understand, such as text data. Further, “a given condition” means a condition set for extracting necessary language data, such as a condition for extracting information relating to traffic information or a condition for extracting information relating to weather information. Furthermore, “to determine validity or invalidity” means, in detail, to attach a flag indicating validity to valid language data, or to attach a flag indicating invalidity to invalid language data or to delete the invalid language data.
Namely, according to the voice recognition device of the present invention, as the first stage, necessary language data is extracted based on the above-described given condition; then, as the second stage, the user is notified of the information indicated by the extracted language data and judges its validity or invalidity. The validity or invalidity of the language data is thereby determined according to the judgment result by the user. Thus, in the second stage, the validity or invalidity of the language data is judged by the user, so whether the language data is really necessary for the user is determined accurately. Further, the validity determination is applied not to the entire language data, but to the language data that was screened in the first stage and is assumed to be relatively necessary. This decreases the user's load in comparison with the case where the user judges the entire language data.
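By way of illustration only, a minimal sketch of this two-stage flow is shown below; the function names, keyword list, and console interaction are hypothetical assumptions for the sketch and are not part of the disclosed device.

    def matches_condition(entry):
        """Stage 1 screen: a given condition, here traffic-related keywords."""
        keywords = ("expressway", "interchange", "accident", "congestion")
        return any(k in entry for k in keywords)

    def two_stage_filter(stored_language_data):
        valid = []
        # Stage 1: extract only language data according with the given condition.
        candidates = [e for e in stored_language_data if matches_condition(e)]
        # Stage 2: notify the user of each candidate and obtain a judgment.
        for entry in candidates:
            answer = input("Use this information? %r [y/n] " % entry)
            if answer.lower().startswith("y"):
                valid.append(entry)  # treated as valid (e.g., flag attached)
            # otherwise the entry is discarded as invalid (e.g., deleted)
        return valid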
The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description made with reference to the accompanying drawings. In the drawings:
An embodiment to which the present invention is directed will now be explained with reference to the drawings. Here, the embodiment of the present invention is not limited to the examples described below and includes various structures as long as those structures are within the scope of the present invention.
The position detector 21 includes: a GPS (Global Positioning System) receiver 21a for receiving radio waves transmitted from GPS satellites via a GPS antenna to detect the current position, orientation, speed, etc. of the vehicle; a gyroscope 21b for detecting the magnitude of rotational movement applied to the vehicle; a distance sensor 21c for detecting the traveling distance from the back-and-forth acceleration or the like of the vehicle; and a geomagnetic sensor 21d for detecting the advancing orientation from earth magnetism. Since these sensors 21a to 21d each produce errors of different characteristics, a plurality of them are used so as to complement one another.
The manipulation switch group 22 is constructed of a touch panel integrated with the displaying unit 26 and disposed on its screen, mechanical switches in the periphery of the displaying unit 26, and the like. The touch panel and the displaying unit 26 are integrated by lamination. The touch panel can be any one of various types, such as a pressure-sensitive type, an electromagnetic induction type, a capacitance type, or a combination of the foregoing.
The external information input and output unit 24 is connected with other external devices or the like and functions to input and output information to and from those external devices. The external information input and output unit 24 is connected with an audio device (not shown) as one of the external devices and is capable of receiving a radio voice signal, and further of outputting signals for controlling the audio device, such as selecting channels and turning the power on or off.
The map data input unit 25 is a device for receiving various data stored in a storage medium (not shown). The storage medium stores map data (road data, landform data, mark data, intersection data, entity data, etc.), voice data for guidance, voice recognition data, etc. The storage medium is typically a CD-ROM or a DVD-ROM in view of the data volume; however, a magnetic storage device such as a hard disk, or a medium such as a memory card, can also be used.
The displaying unit 26 is a color display device such as a liquid crystal display, an organic electroluminescent (EL) display, or a CRT. The screen of the displaying unit 26 shows a map overlaid with a mark for the current position, a guiding route to a destination, names, landmarks, and marks for various entities, using the current position of the vehicle detected by the position detector 21 and the map data input from the map data input unit 25. Further, guidance for the entities can also be shown.
The voice output unit 27 can output voice guidance for the entities and other guidance, which is input from the map data input unit 25. The microphone 28 outputs electric signals based on the voice input when the user speaks. The user can operate the navigation device 20 by inputting various voice commands to the microphone 28.
The processing unit 29 is constructed mainly of a known microcomputer containing a CPU, a ROM, a RAM, an I/O, and a bus line connecting these components. The processing unit 29 executes various processing based on programs stored in the ROM or RAM. For example, in display processing, the current position of the vehicle is computed as a set of coordinates and an advancing orientation based on detection signals from the position detector 21, and a map or the like around the computed current position is read through the map data input unit 25 and displayed. In route guiding processing, an appropriate route to a destination is computed based on the position data stored in the map data input unit 25 and the destination set according to manipulation of the manipulation switch group 22 or the remote controller 23a.
The internal structure of the processing unit 29 will be explained using a block diagram in
The controlling unit 29a receives signals or the like from the position detector 21, the manipulation switch group 22, the remote control sensor 23b, the external information input and output unit 24, the map data input unit 25, and the like; further, the controlling unit 29a outputs signals or the like for controlling the external information input and output unit 24, the displaying unit 26, and the voice output unit 27. The controlling unit 29a controls the parts of the processing unit 29 as a whole.
The voice signal temporary storing unit 29b can store the input voice signals for a given period (e.g., one minute). Here, the newest voice signal is constantly stored while the oldest voice signal is deleted.
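For illustration only, such a fixed-period temporary store behaves like the following sketch; the frame rate and buffer length are hypothetical assumptions.

    from collections import deque

    FRAMES_PER_SECOND = 10   # assumed frame rate of the incoming audio
    BUFFER_SECONDS = 60      # "a given period (e.g., one minute)"

    # A fixed-capacity buffer: appending beyond maxlen automatically
    # deletes the oldest frame, so only the newest minute is kept.
    voice_buffer = deque(maxlen=FRAMES_PER_SECOND * BUFFER_SECONDS)

    def on_voice_frame(frame):
        voice_buffer.append(frame)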
The language data generating unit 29c has a function of generating language data based on the input voice signals. The generated language data is sent piece by piece to the language data analyzing unit 29d. The language data analyzing unit 29d analyzes the language data sent from the language data generating unit 29c and sends only the necessary language data from among the analyzed language data to the language data storing unit 29e.
The language data storing unit 29e stores the necessary language data sent from the language data analyzing unit 29d. The voice signal storing unit 29f stores the designated voice signal from among the voice signals stored in the voice signal temporary storing unit 29b.
(1) Receiving Processing
In the next place, the receiving processing executed by the processing unit 29 will be explained using the flow chart in
The language data generated by the language data generating unit 29c is sent to the language data analyzing unit 29d, which analyzes the language data (S125). Here, “analyzing” means determining the context from the words or the word order. This analysis is executed on every sentence, also using the preceding and following sentences. Thus, the processing from S115 to S125 is started and executed one sentence at a time.
After the processing at S115 to S125 is started, the language data analyzing unit 29d determines, based on the analysis result of the language data, whether the voice signal broadcast from the broadcast station relates to the news; further, the unit 29d determines whether broadcast information not relating to the news has continued for a given period (e.g., 3 minutes) (S130). Whether the broadcast information does not relate to the news is determined by, for example, whether colloquial personal expressions are included (information relating to music is thereby excluded), or by whether more than a given number of terms used in news reports are included. When the broadcast information not relating to the news continues for the given period, the processing advances to S135; otherwise, it advances to S140.
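A minimal sketch of this S130 check is shown below for illustration; the term lists, threshold, and timer interface are hypothetical assumptions, not the disclosed determination method.

    import time

    NEWS_TERMS = ("reported", "announced", "according to", "authorities")
    COLLOQUIAL_TERMS = ("you know", "well,", "hey", "wow")
    NON_NEWS_LIMIT_SEC = 180  # "a given period (e.g., 3 minutes)"

    def relates_to_news(sentence):
        text = sentence.lower()
        if any(t in text for t in COLLOQUIAL_TERMS):
            return False  # colloquial expressions suggest non-news content
        # Enough news-style terms suggests a news broadcast.
        return sum(t in text for t in NEWS_TERMS) >= 1

    class NonNewsTimer:
        """Tracks how long non-news content has continued."""
        def __init__(self):
            self.non_news_since = None

        def update(self, sentence):
            """Return True when non-news content exceeds the given period."""
            if relates_to_news(sentence):
                self.non_news_since = None
                return False
            if self.non_news_since is None:
                self.non_news_since = time.monotonic()
            return time.monotonic() - self.non_news_since >= NON_NEWS_LIMIT_SEC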
At S135, the controlling unit 29a outputs, to the audio device via the external information input and output unit 24, a command for changing the broadcast station to be received. The broadcast to be received is thereby changed, and the voice signal from another broadcast station is newly received via the external information input and output unit 24; the processing then returns to S130, where whether broadcast information not relating to the news continues for the given period is determined again.
Meanwhile, at S140, the language data analyzing unit 29d determines whether the language data relates to traffic information. In detail, this is determined by whether terms relating to traffic or terms relating to places are included in the language data. When the language data does not relate to traffic information, the processing returns to S130, where whether broadcast information not relating to the news continues for the given period is determined again.
By contrast, when the language data relates to traffic information (S140: YES), the language data is stored in the language data storing unit 29e along with an ID for identification. Further, the voice signal corresponding to the language data is retrieved from the voice signal temporary storing unit 29b by the voice signal storing unit 29f, and the voice signal storing unit 29f stores the voice signal while attaching the same ID as that used when the language data storing unit 29e stored the language data (S145). Accordingly, the language data stored in the language data storing unit 29e and the voice signal stored in the voice signal storing unit 29f are managed by the same ID.
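For illustration, the following sketch shows how the two stores could be managed under a shared ID; the keyword test and store structures are hypothetical assumptions.

    import itertools

    TRAFFIC_TERMS = ("expressway", "interchange", "lane", "accident")

    _id_counter = itertools.count(1)
    language_store = {}  # stands in for the language data storing unit 29e
    voice_store = {}     # stands in for the voice signal storing unit 29f

    def relates_to_traffic(sentence):
        return any(t in sentence.lower() for t in TRAFFIC_TERMS)

    def store_if_traffic(sentence, audio):
        """Store text and its source audio under the same ID (S140-S145)."""
        if not relates_to_traffic(sentence):
            return None
        entry_id = next(_id_counter)
        language_store[entry_id] = sentence
        voice_store[entry_id] = audio  # the same ID manages both stores
        return entry_id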
When the language data storing unit 29e and the voice signal storing unit 29f complete their storing procedures, the processing returns to S130, where whether broadcast information not relating to the news continues for the given period is determined again.
Here, it would be favorable if a broadcast station constantly broadcast voice signals including the desired contents; however, a private radio broadcast does not always broadcast the desired contents, since it includes various programs. Therefore, it is favorable that the broadcast station be changed when no language data is extracted in the receiving processing for a given period.
An operational example of the receiving processing is shown below. Suppose that the following is broadcast by radio: “A rescue training took place, the training assuming that an accident occurs at a spot five kilometers towards Tokyo from Nagoya interchange on the Tomei (Tokyo-Nagoya) Expressway.” Here, as language data, “Tomeikousokudouro (Tomei Expressway)-nagoyainta (Nagoya interchange)-noborishasenno (lane directing to Tokyo)-tokyogawa (towards Tokyo)-gokirochitende (a spot of five kilometers)-jikoga (accident)-okitatosouteishita (assuming occurrence)-kyujyokunrenga (rescue training)-jissisaremashita (took place)” is generated and stored in the language data storing unit 29e. Further, as a voice signal, the voice read by the announcer, indicating “A rescue training took place, the training assuming that an accident occurs at a spot five kilometers towards Tokyo from Nagoya interchange on the Tomei (Tokyo-Nagoya) Expressway,” is stored directly in the voice signal storing unit 29f.
(2) Determining Processing
In the next place, the determining processing executed by the processing unit 29 will be explained using the flow chart in
When the processing is started, the controlling unit 29a retrieves the voice signal corresponding to an ID from the voice signal storing unit 29f (S210). (When the user designates an ID, the designated ID is used for the retrieval; otherwise, the ID attached when the language data storing unit 29e stored the language data is used.) Next, the voice output unit 27 is caused to reproduce the retrieved voice signal (S215); namely, the user can listen to what the radio broadcast. The user determines whether the reproduced information can be used for route guidance and intentionally inputs the determination, which is received by the manipulation switch group 22 or the remote control sensor 23b (S220). When the received input means “this information is to be used for route guidance,” namely “the information is valid,” mark data (e.g., a flag) indicating validity is attached to the language data that is stored in the language data storing unit 29e and has the same ID as that attached to the reproduced voice signal (S230). The determining processing is then terminated.
By contrast, when the received input means “this information is not to be used for route guidance,” namely “the information is invalid,” the reproduced voice signal is deleted from the voice signal storing unit 29f. Further, the language data having the same ID as that attached to the reproduced voice signal is deleted from the language data storing unit 29e (S235). The determining processing is then terminated.
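Continuing the earlier illustrative sketch (and reusing its hypothetical language_store and voice_store), the determining processing could be sketched as follows; play_audio() is a placeholder standing in for the voice output unit 27.

    valid_flags = {}  # mark data indicating validity, keyed by ID

    def play_audio(audio):
        pass  # placeholder: reproduce the signal via the voice output unit 27

    def determine(entry_id):
        play_audio(voice_store[entry_id])                      # S215: reproduce
        answer = input("Use this for route guidance? [y/n] ")  # S220: user input
        if answer.lower().startswith("y"):
            valid_flags[entry_id] = True                       # S230: attach flag
        else:
            del voice_store[entry_id]                          # S235: delete both
            del language_store[entry_id]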
An operational example of the determining processing is shown below. For example, suppose that the language data storing unit 29e stores the language data “Tomeikousokudouro (Tomei Expressway)-nagoyainta (Nagoya interchange)-noborishasenno (lane directing to Tokyo)-tokyogawa (towards Tokyo)-gokirochitende (a spot of five kilometers)-jikoga (accident)-okitatosouteishita (assuming occurrence)-kyujyokunrenga (rescue training)-jissisaremashita (took place).” Further suppose that a vehicle having the navigation device 20 approaches Nagoya interchange on the Tomei Expressway. Here, a notice is shown on the displaying unit 26 indicating that there is information not yet confirmed by the user, based on the language data (having no mark data indicating validity). When the user performs a manipulation to confirm the unconfirmed information, the above-described determining processing is started. In the determining processing, the radio-broadcast “A rescue training took place, the training assuming that an accident occurs at a spot five kilometers towards Tokyo from Nagoya interchange on the Tomei (Tokyo-Nagoya) Expressway” is reproduced; thereafter, the user inputs a determination of whether the information is valid or not. Based on the input by the user, the navigation device 20 either attaches the mark data indicating validity to the language data or deletes the language data and the voice signal.
The language data determined to be valid in this determining processing is used for route change, warning, or the like. In the above example, the rescue training is assumed to take place without obstructing traffic, so the relevant language data is determined to be invalid in the determining processing. The relevant language data and voice signal are thereby deleted, so they are not used for route change or warning.
Thus, according to the navigation device 20 of this embodiment, only information relating to traffic information is extracted from among the information broadcast from a broadcast station, and is then confirmed by the user. Further, the information determined by the user to be necessary is used for other processing such as route guidance. Accordingly, the accuracy of the information is enhanced, and the accuracy of other processing such as route guidance is naturally enhanced as well.
Other embodiments will be explained.
(1) The voice signal temporary storing unit 29b and the voice signal storing unit 29f can be removed. In this case, in the determining processing, instead of the voice signal, the language data can be read aloud by a synthetic voice (or speech). With this structure, hardware resources can be reduced.
(2) In the determining processing, along with the reproduction of the voice signal, the language data can be read aloud by a synthetic voice. With this structure, the user can compare the information indicated by the language data with the contents of the voice notified based on the voice signal, so the user can notice recognition errors made when the language data was generated. Therefore, when a recognition error occurs, the user can take countermeasures such as correcting or deleting the language data. As a result, the accuracy of the language data is enhanced, which enhances the utility value of the language data.
(3) In the above embodiment, the functions of the voice recognition device are realized by being built into the navigation device 20; however, they can also be realized as a standalone voice recognition device. Furthermore, the valid language data held by the voice recognition device can be retrieved and utilized by other devices (e.g., a personal computer or a navigation device).
(4) A program that functions as the processing unit of the voice recognition device of the embodiment can be executed by a computer built into a voice recognition device. In this structure, for example, the program is stored in a computer-readable medium such as a flexible disk, a magneto-optical disk, a CD-ROM, a hard disk, a ROM, a RAM, etc. By loading the program into a computer and activating it as needed, the program functions as the voice recognition device. Further, the program can be distributed via a network, so the functions of the voice recognition device can be upgraded.
It will be obvious to those skilled in the art that various changes may be made in the above-described embodiments of the present invention. However, the scope of the present invention should be determined by the following claims.