This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2011-208968, filed on Sep. 26, 2011; the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to an information processing apparatus, an information processing method, and a computer program product.
Various techniques for improving the efficiency of the operation of transcribing voice data into text have conventionally been known. For example, there is a known technique for estimating and presenting the phrase that a user intends to input while the user is inputting a character string, by use of the voice data targeted for transcription. As an example, there is a known technique for retrieving (searching for), from among a plurality of phrases obtained by a voice recognition process performed on the voice data, a phrase for which at least a part of a character string representing the reading of the phrase agrees with the character string being inputted, and presenting the retrieved phrase as an input candidate.
However, such conventional techniques suffer from low accuracy because all phrases included in the voice recognition results become search targets, so the search yields a large number of candidates.
According to an embodiment, an information processing apparatus includes a storage unit, a detector, an acquisition unit, and a search unit. The storage unit stores therein voice indices, each of which associates a character string included in voice text data obtained from a voice recognition process with voice positional information. The voice positional information indicates a temporal position in voice data and corresponds to the character string. The detector detects played-back section information indicating a section of the voice data that has been played back. The acquisition unit acquires reading information, which is at least a part of a character string representing a reading of a phrase to be transcribed from the voice data that has been played back. The search unit specifies, as search targets, character strings whose associated voice positional information is included in the played-back section information among the character strings included in the voice indices, and retrieves a character string including the reading represented by the reading information from among the specified character strings.
Various embodiments will be described hereinafter with reference to the accompanying drawings. In the following embodiments, a description will be given taking, as an example of an information processing apparatus, a PC (Personal Computer) that has a function of playing back voice data and a text creation function of creating text in accordance with a user's operation; however, the information processing apparatus is not limited thereto. In the following embodiments, when the transcription operation is performed, the user (transcription operator) operates a keyboard to input text while playing back recorded voice data, and thereby converts the voice data into text.
The first storage unit 11 stores therein voice data. The voice data is, for example, an audio file in a format such as WAV or MP3. The method for acquiring the voice data is arbitrary; for example, the voice data can be acquired via a network such as the Internet, or by use of a microphone and the like.
The second storage unit 12 is configured to store therein voice indices, each of which associates a character string included in text data (referred to as voice text data) obtained from a voice recognition process with voice positional information, where the voice positional information indicates the temporal position in the voice data that corresponds to the character string. Various known techniques can be used for the voice recognition process. In the voice recognition process, the voice data is processed at regular intervals of approximately 10 to 20 milliseconds (ms). The association between the voice text data and the voice positional information can be obtained during the recognition process on the voice data.
In this embodiment, the voice text data obtained from the voice recognition process is divided into segments, such as words, morphemes, or clauses, each of which is smaller than a sentence, and the voice text data is expressed by a network structure called a lattice in which recognition candidates (candidates for segmentation) are connected. The expression of the voice text data is not limited thereto. For example, the voice text data can also be expressed by a linear structure (one path) indicating the optimum recognition result of the voice recognition process. In this embodiment, the second storage unit 12 is configured to store therein voice indices, each of which associates a morpheme (an example of a character string) included in the voice text data in the lattice structure with voice positional information.
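As an illustration only, a voice index of this kind could be represented in the following minimal form; the class, the field names, and the concrete entries are assumptions made for this sketch and are not part of the embodiment.

```python
from dataclasses import dataclass

@dataclass
class VoiceIndexEntry:
    """One voice index entry: a character string (e.g., a morpheme) recognized
    from the voice data, associated with voice positional information that
    gives its temporal position in the voice data."""
    reading: str   # reading (kana, romanized here) of the recognized character string
    surface: str   # character string as written, e.g., in combination with Kanji
    start_ms: int  # start position in the voice data, in milliseconds
    end_ms: int    # end position in the voice data, in milliseconds

# A hypothetical voice index for a short stretch of voice data.
voice_index = [
    VoiceIndexEntry(reading="kyouto", surface="京都", start_ms=0,   end_ms=400),
    VoiceIndexEntry(reading="juubun", surface="十分", start_ms=400, end_ms=900),
    VoiceIndexEntry(reading="kaigi",  surface="会議", start_ms=900, end_ms=1400),
]
```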
The playback unit 13 is a unit that plays back voice data, and is realized by equipment such as a speaker, a D/A converter, and headphones, for example. When the playback instruction acceptance unit 14 accepts a playback start instruction to start playback, the playback controller 15 controls the playback unit 13 so as to play back the voice data. Moreover, when the playback instruction acceptance unit 14 accepts a playback stop instruction to stop playback, the playback controller 15 controls the playback unit 13 so as to stop playback of the voice data. The playback controller 15 is realized by, for example, the audio function provided by the operating system or a driver of the PC, but may also be realized by a hardware circuit such as an electronic circuit.
The detection unit 16 detects played-back section information that indicates a section of the voice data that has been played back by the playback unit 13. More specifically, the detection unit 16 detects, as the played-back section information, temporal information that indicates the section between a playback start position, which indicates the position in the voice data where playback by the playback unit 13 is started, and a playback stop position, which indicates the position where playback by the playback unit 13 is stopped.
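A minimal sketch of how such start and stop positions might be recorded is given below; the class and method names are hypothetical and serve only to make the bookkeeping concrete.

```python
class PlaybackSectionDetector:
    """Records the section of the voice data that has been played back, as the
    pair (playback start position, playback stop position) in milliseconds."""

    def __init__(self):
        self.start_ms = None
        self.stop_ms = None

    def on_playback_started(self, position_ms: int) -> None:
        # Position in the voice data at which playback started.
        self.start_ms = position_ms

    def on_playback_stopped(self, position_ms: int) -> None:
        # Position in the voice data at which playback stopped.
        self.stop_ms = position_ms

    def played_back_section(self):
        """Return the played-back section information as (start_ms, stop_ms)."""
        return (self.start_ms, self.stop_ms)
```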
The acquisition unit 17 acquires reading information that is at least a part of a character string representing the reading of a phrase to be transcribed from the voice data that has been played back by the playback unit 13. For example, when a user attempts to transcribe the Japanese word 10a (read “kyouto”) in
The search unit 18 specifies, as search targets, character strings whose associated voice positional information is included in the played-back section information detected by the detection unit 16, from among the plurality of character strings included in the voice indices stored in the second storage unit 12. For example, when the playback start position of the voice data is “0 s” and the playback stop position is “1.5 s (1500 ms)”, the detection unit 16 detects temporal information that indicates the section between the playback start position “0 s” and the playback stop position “1.5 s (1500 ms)” as the played-back section information. In this case, the search unit 18 specifies, as character strings targeted for the search, character strings whose associated voice positional information is included in the section between “0 s” and “1.5 s (1500 ms)” among the plurality of character strings included in the voice indices stored in the second storage unit 12. The search unit 18 then retrieves, from among the character strings specified in this manner, character strings including the reading represented by the reading information acquired by the acquisition unit 17.
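For illustration, this two-step operation of the search unit 18 could be sketched as follows, operating on voice index entries of the form shown earlier; the function name and the rule that an entry counts as included when it lies entirely within the played-back section are assumptions of this sketch.

```python
def search_candidates(voice_index, section_ms, reading_info):
    """Retrieve input candidates for the given reading information.

    Step 1: restrict the search targets to entries whose voice positional
            information is included in the played-back section.
    Step 2: among those targets, retrieve the entries whose reading includes
            the reading represented by the reading information.
    """
    start_ms, stop_ms = section_ms
    targets = [
        entry for entry in voice_index
        if start_ms <= entry.start_ms and entry.end_ms <= stop_ms
    ]
    return [entry for entry in targets if reading_info in entry.reading]
```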
For example, assume that the plurality of character strings illustrated in
The display unit 19 controls an unillustrated display device so as to display the character strings retrieved by the search unit 18 as input candidates. For example, the character strings can be displayed as input candidates in units of words or in units of phrases. The user (transcription operator) can make a select input that specifies which of the displayed input candidates is to be selected. The select input method is arbitrary; for example, the select input can be made by touching the position on the screen of the display device where the desired input candidate is displayed, or by operating an operation device such as a keyboard, a mouse, or a pointing device. When accepting the select input of an input candidate, the selection unit 20 selects the input candidate specified by the select input and determines the selected input candidate as input text. In this embodiment, character strings written in combination with Kanji characters are presented as input candidates to promote efficiency in the user's input operation.
Next, the search unit 18 specifies, as search targets, character strings from among the plurality of character strings included in the voice indices stored in the second storage unit 12, by use of the played-back section information detected in Step S402 (Step S403). Next, the search unit 18 retrieves character strings including the reading represented by the reading information acquired in Step S401 from among the character strings specified in Step S403 (Step S404).
Next, the display unit 19 controls the unillustrated display device so as to display the character strings retrieved in Step S404 as input candidates (Step S405).
As a specific example, assume that the character string “juu” is acquired as reading information (Step S401 in
In this case, the search unit 18 retrieves character strings including the reading “juu” from among the plurality of character strings illustrated in
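Continuing the hypothetical sketches above, the flow of Steps S401 to S405 for the reading “juu” might look like the following; the concrete readings and timings are invented for illustration.

```python
# Played-back section detected in Step S402: 0 s to 1.5 s (1500 ms).
detector = PlaybackSectionDetector()
detector.on_playback_started(0)
detector.on_playback_stopped(1500)

# Reading information acquired in Step S401.
reading_info = "juu"

# Steps S403 and S404: specify the search targets and retrieve candidates.
candidates = search_candidates(voice_index, detector.played_back_section(), reading_info)

# Step S405: display the retrieved character strings as input candidates.
for entry in candidates:
    print(entry.surface, entry.reading)  # e.g., 十分 juubun
```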
As described above, in this embodiment, when the acquisition unit 17 acquires reading information that is at least a part of a character string representing the reading of a phrase that the user is attempting to transcribe, the search unit 18 specifies, as search targets, character strings whose associated voice positional information is included in the played-back section information detected by the detection unit 16, from among the plurality of character strings included in the voice indices. The search unit 18 then retrieves character strings including the reading represented by the reading information from among the specified character strings. Accordingly, it is possible to improve the accuracy of the search process compared with a case where all character strings included in the voice indices become search targets.
In this embodiment, the first storage unit 11, the second storage unit 12 and the playback unit 13 are configured of hardware circuits. On the other hand, each of the playback instruction acceptance unit 14, the playback controller 15, the detection unit 16, the acquisition unit 17, the search unit 18, the display unit 19, and the selection unit 20 is realized by a CPU mounted on the PC executing a program stored in ROM or the like; however, the configuration is not limited thereto, and for example, at least parts of the playback instruction acceptance unit 14, the playback controller 15, the detection unit 16, the acquisition unit 17, the search unit 18, the display unit 19, and the selection unit 20 may be configured of hardware circuits.
Moreover, the information processing apparatus may be realized by preinstalling the above program on a computer device, or may be realized by storing the above program in a recording medium such as a CD-ROM, or by distributing the above program via a network, and appropriately installing the program on a computer device. Moreover, if various data files are necessary for utilizing a language processing technique or a pronunciation estimation technique, recording media for holding the data files can be realized by appropriately using memory or a hard disk installed internally or externally in the above computer device, or a CD-R, a CD-RW, a DVD-RAM, a DVD-R, and the like.
As described above, the description has been given of the embodiment of the present invention; however, the embodiment has been presented by way of example only, and is not intended to limit the scope of the invention. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions. Moreover, a configuration that excludes, from all the elements shown in the embodiment (the first storage unit 11, the second storage unit 12, the playback unit 13, the playback instruction acceptance unit 14, the playback controller 15, the detection unit 16, the acquisition unit 17, the search unit 18, the display unit 19, and the selection unit 20), at least either the elements for playing back voice data (for example, the first storage unit 11, the playback unit 13, the playback instruction acceptance unit 14, and the playback controller 15) or the element for displaying search results (the display unit 19 as an example here) may also be regarded as the information processing apparatus according to the present invention. In short, various inventions can be formed by appropriately combining a plurality of the elements disclosed in the embodiment.
Modifications will be described in the following. The following modifications can be combined arbitrarily.
(1) First Modification
As illustrated in
For example, assume that the plurality of character strings illustrated in
Moreover, for example, assume that the plurality of character strings illustrated in
(2) Second Modification
In the above embodiment, the search unit 18 specifies, as search targets, character strings whose associated voice positional information is included in the played-back section information detected by the detection unit 16, from among the plurality of character strings included in the voice indices; however, the search targets are not limited to this. For example, it is also possible to specify, as search targets, character strings whose associated voice positional information is included in a section obtained by extending the section indicated by the played-back section information by a predetermined range, as sketched below.
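Under the same assumptions as the earlier sketches, this modification would only change how the search targets are specified, for example as follows; the margin value is arbitrary.

```python
def search_candidates_extended(voice_index, section_ms, reading_info, margin_ms=500):
    """Variant of search_candidates in which the section indicated by the
    played-back section information is extended by a predetermined range
    (margin_ms) at both ends before the search targets are specified."""
    start_ms, stop_ms = section_ms
    extended_section = (max(0, start_ms - margin_ms), stop_ms + margin_ms)
    return search_candidates(voice_index, extended_section, reading_info)
```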
(3) Third Modification
In the above embodiment, the language targeted for the transcription operation is Japanese; however, the language is not limited to this, and the kinds of languages targeted for the transcription operation are arbitrary. For example, the language targeted for the transcription operation may be English or Chinese. Even if the language targeted for the transcription operation is English or Chinese, the configuration of the information processing apparatus is similar to the one for Japanese.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.