The present disclosure relates to the generation of transcription text from input audio data, particularly the generation of a transcript file that can be printed or displayed in a selected presentation format.
Transcripts of audio recordings are often used for purposes such as analysis, education, and archiving. Transcripts allow transcribed text to be read, searched, and edited, which can be useful for users such as news agencies and other media outlets, universities and other educational institutions, researchers, libraries, and government agencies. For example, news organizations often search transcripts of recorded interviews and broadcasts when performing research for a news story.
Unfortunately, manually transcribing an audio recording into text can be tedious and time-consuming. Some software systems have been developed that can generate transcribed text from audio recordings more quickly. However, such applications often output generated text without formatting that corresponds to the content and/or context of the source audio recording.
Users often need to manually format and edit raw text generated from automatic transcription systems into a format they want to use when printing or displaying the transcribed text. This process can also be tedious and time-consuming. For example, some automatic transcription systems generate raw unformatted text when transcribing a recording of a conversation between two people. Users may then need to edit the transcribed text to separate out words spoken by each speaker, indicate which person is speaking at which time, and/or format the text so that it can be read and followed more easily as a back-and-forth dialog.
What is needed is a system that can transcribe audio into text and format the transcribed text according to a presentation format that corresponds to the type of audio that was transcribed, such that the text can be printed and/or displayed in the selected presentation format.
The present disclosure provides a method of generating a transcript file. Input audio data can be received at an audio transcription device running a transcription component. The transcription component can divide the input audio data into a plurality of sound tokens. The transcription component can identify transcription text for each subset of sound tokens by finding a best match for the subset in sound samples in a sound database. The transcription component can create a transcript file and format the transcription text in the transcript file according to a presentation format that corresponds to a sound type of the transcription text.
The present disclosure also provides a printer comprising a transcription component and a print engine. The transcription component can receive input audio data when a print job is initiated at the printer, and divide the input audio data into a plurality of sound tokens. The transcription component can identify transcription text for each subset of sound tokens by finding a best match for the subset in sound samples in a sound database. The transcription component can create a transcript file and format the transcription text in the transcript file according to a presentation format that corresponds to a sound type of the transcription text. The print engine can then print images on paper based on the transcript file according to its presentation format.
The present disclosure also provides an audio transcription device comprising a processor, digital memory, a microphone, a non-transitory machine-readable medium, and a data interface. The microphone can be configured to record input audio data and store said input audio data as a digital file in the digital memory. The non-transitory machine-readable medium can have instructions recorded thereon for causing the processor to perform the steps of dividing the input audio data into a plurality of sound tokens, identifying transcription text for each subset of the plurality of sound tokens by finding a best match for the subset in sound samples in a sound database, creating a transcript file and formatting the transcription text in the transcript file according to a presentation format that corresponds to a sound type of the transcription text, and storing the transcript file in the digital memory. The data interface can be configured to transfer the transcript file from the digital memory to a separate device.
An audio source 100 can provide live or prerecorded audio data to the audio transcription device 102 via a direct wired or wireless connection, via a network connection, via removable storage, and/or through any other data transfer method. In some embodiments the audio source 100 and the audio transcription device 102 can be directly connected via a cable such as a USB cable, Firewire cable, digital audio cable, or analog audio cable. In other embodiments the audio source 100 and the audio transcription device 102 can both be connected to the same LAN (local area network) through a WiFi or Ethernet connection such that they can exchange data through the LAN. In still other embodiments the audio source 100 and the audio transcription device 102 can be directly connected via Bluetooth, NFC (near-field communication), or any other peer-to-peer (P2P) connection. In yet other embodiments the audio source 100 can be a cloud server, network storage, or any other device that is remote from the audio transcription device 102, and the audio source 100 can provide input audio data to the audio transcription device 102 remotely over an internet connection. In still further embodiments the audio source 100 can load input audio data onto an SD card, removable flash memory, a CD, a removable hard drive, or any other type of removable memory that can be accessed by the audio transcription device 102.
In some embodiments the audio source 100 can be a device comprising a microphone that can provide live input audio data to the audio transcription device 102 while it captures sound from its surrounding environment. In other embodiments the audio source 100 can be a device that can record audio data using a microphone and/or store audio data received from other devices such that it can provide prerecorded audio data to the audio transcription device 102. By way of non-limiting examples, the audio source 100 can be a microphone, telephone, radio, MP3 player, CD player, audio tape player, computer, smartphone, tablet computer, or any other device.
In some embodiments or situations the input audio data can be an audio file or signal provided by the audio source 100. By way of a non-limiting example, the audio source 100 can provide input audio data to the audio transcription device 102 as an encoded audio file in a file format such as MP3, WAV, WMA, ALC, ARF, AAC, or any other audio file format. By way of another non-limiting example, the audio source 100 can provide input audio data to the audio transcription device 102 as analog audio signals or unencoded digital audio, and the audio transcription device 102 or transcription component 104 can convert the input audio into an encoded audio file.
In alternate embodiments or situations the input audio data can be extracted from a video file or signal provided by the audio source 100. By way of a non-limiting example, the audio source 100 can provide the audio transcription device 102 with an encoded video file in a file format such as AVI, WMV, MP4, MOV, MPG, 3GP, or any other video file format. In these embodiments, the audio transcription device 102 or transcription component 104 can extract the video's audio components to use as the input audio data. By way of a non-limiting example, when a provided video file is an MP4 file with video components encoded with H.264 and audio components encoded with MP3, the transcription component 104 can use the MP3 audio components as the input audio data. In some embodiments the audio transcription device 102 or transcription component 104 can also extract one or more frames from the video components as images to include in the transcript file 108, as discussed below. In other embodiments the transcription component 104 can include some or all of the video data in the transcript file 108, such as when the transcript file 108 is itself a video file as described below.
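By way of a non-limiting illustration, the following sketch shows one way the audio components could be extracted from a provided video file. It assumes the external ffmpeg command-line tool is available to the audio transcription device 102; the file names are hypothetical placeholders.

```python
# Illustrative sketch: copy the audio stream out of a video container so it can
# be used as input audio data. Assumes the external "ffmpeg" tool is installed;
# file names are hypothetical placeholders.
import subprocess

def extract_audio(video_path: str, audio_path: str) -> str:
    subprocess.run(
        ["ffmpeg",
         "-i", video_path,     # input video, e.g. an MP4 with H.264 video and MP3 audio
         "-vn",                # drop the video stream
         "-acodec", "copy",    # keep the existing audio encoding (MP3 in this example)
         audio_path],
        check=True,
    )
    return audio_path

# extract_audio("recording.mp4", "recording.mp3")
```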
The audio transcription device 102 can be a computing device that comprises, or is connected to, at least one processor and at least one digital storage device. The processor can be a chip, circuit, or controller configured to execute instructions to direct the operations of the audio transcription device 102, such as a central processing unit (CPU), application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), graphics processing unit (GPU), or any other chip, circuit, or controller. The digital storage device can be internal, external, or remote digital memory, such as random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory, a digital tape, a hard disk drive (HDD), a solid state drive (SSD), cloud storage, and/or any other type of volatile or non-volatile digital memory.
In some embodiments the audio transcription device 102 can be a printer, such as a standalone printer, multifunctional printer (MFP), fax machine, or other imaging device. In embodiments in which the audio transcription device 102 is a printer, the printer can directly print transcripts described by a transcript file 108 generated by the transcription component 104. In other embodiments the audio transcription device 102 can be a computer, smartphone, tablet computer, microphone, voice recorder or other portable audio recording device, television or other display device, home theater equipment, set-top box, radio, portable MP3 player or other portable media player, or any other type of computing or audio-processing device. When the audio transcription device 102 comprises a screen or can output images to a connected screen, in some embodiments the audio transcription device 102 can display a transcript file 108 on the screen. By way of a non-limiting example, when the audio transcription device 102 is a television that has a transcription component 104, the television can display on its screen text transcribed by the transcription component 104 from input audio data.
As shown in
In alternate embodiments the audio source 100 can be a part of the audio transcription device 102, such that the audio source 100 directly provides input audio data to the transcription component 104 running on the same device. By way of a non-limiting example the audio transcription device 102 can be a microphone unit or a standalone portable audio recording device comprising a microphone, and the transcription component 104 can be firmware running in the microphone unit or recording device that can receive audio data captured by its microphone. By way of another non-limiting example the audio transcription device 102 can be a smartphone comprising a microphone and/or other audio inputs, and the transcription component 104 can be run as an application on the smartphone.
When the audio source 100 provides live or prerecorded audio data in real time over an audio cable or other connection, the audio transcription device 102 and/or transcription component 104 can digitally record and store the audio data in digital storage. By way of a non-limiting example, the transcription component 104 can encode the received audio data into an audio file, such as an MP3 file or an audio file encoded using any other lossless or compressed format. Similarly, when an audio source 100 provides audio data as an already encoded audio file, the audio transcription device 102 and/or transcription component 104 can store the received audio file in digital storage.
The transcription component 104 can be software or firmware that follows a set of instructions to generate one or more transcript files 108 from input audio data received from an audio source 100. As will be discussed further below, the transcription component 104 can identify individual words and/or sounds present within the input audio data based on a sound database 106, and include text or descriptions associated with those words and sounds in a transcript file 108.
The sound database 106 can be a database of preloaded sound information, including prerecorded sound samples, that the transcription component 104 can use to interpret input audio data and generate a transcript file 108. As shown in
Vocal sounds 202 can comprise spoken words 212 and/or sung words 214. Spoken words 212 can be prerecorded samples of spoken words and/or spoken phonemes by one or more human speakers. Sung words 214 can be prerecorded samples of sung words and/or sung phonemes by one or more human singers.
Each sung or spoken word in the sound database 106 can be mapped to the text of an associated word in the sound database 106. By way of a non-limiting example, a recording of a person saying the word “table” can be mapped to the character string “table” in the sound database 106.
Each sung or spoken phoneme in the sound database 106 can be mapped to one or more strings of characters that can represent that phoneme in the sound database 106, along with rules for combining a character string for the phoneme with other character strings for other phonemes to create a character string for a word. By way of a non-limiting example, the sound database 106 can contain recordings of a person saying the phonemes /t/, /a/, and /bl/. When the transcription component 104 recognizes those individual phonemes being spoken together in input audio data, it can use the associated character strings to build the character string “table” for the transcript file 108.
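By way of a non-limiting illustration, the following sketch shows how character strings mapped to recognized phonemes could be combined into a word; the phoneme notation and the simple concatenation rule are illustrative assumptions.

```python
# Illustrative sketch: build a word's character string from phoneme mappings in
# the sound database 106. The mapping values and the concatenation rule are
# simplified assumptions.
PHONEME_TO_TEXT = {"/t/": "t", "/a/": "a", "/bl/": "ble"}

def build_word(recognized_phonemes):
    return "".join(PHONEME_TO_TEXT[p] for p in recognized_phonemes)

print(build_word(["/t/", "/a/", "/bl/"]))  # prints "table"
```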
Animal sounds 204 can be prerecorded samples of noises made by animals. By way of non-limiting examples, animal sounds 204 can be sounds such as dog barks, bird tweets, cow moos, horse neighs, or any other sound produced by an animal. Each animal sound sample in the sound database 106 can be mapped to a text description of that animal sound in the sound database 106.
Musical instrument sounds 206 can be prerecorded samples of musical sounds produced by one or more musical instruments. The sound database 106 can contain a plurality of different sound samples for each musical instrument, such as samples for different notes, chords, beats, or other sounds that the instrument can produce. Each musical sample in the sound database 106 can be mapped to a text description of that musical sound, and an identifier of which type of instrument made the sound. The text description of a musical sample can be the name of the identified note, such as a C note, and/or a solfège syllable that corresponds to its sound, such as “do,” “re,” “mi,” “fa,” “sol,” “la,” and “ti.” The sound database 106 can also comprise a plurality of musical symbols that the transcription component 104 can use to build a musical score in a transcript file 108 based on identified musical sounds, such as symbols for different notes of varying lengths that it can place on a musical staff in a transcript file 108.
Nature sounds 208 can be prerecorded samples of sounds produced by natural or atmospheric conditions. By way of non-limiting examples, nature sounds can be sounds of wind, fire, running river water, thunder, rain, waves on a beach, volcanic eruptions, or any other natural element. Each nature sound sample in the sound database 106 can be mapped to a text description of that nature sound in the sound database 106.
Sound effects 210 can be prerecorded samples of any other sounds, such as man-made sounds, sounds produced by machines, or sound effects commonly used in movies and television shows. By way of non-limiting examples, sound effects can be sounds of footsteps, clapping, tapping, whistling, drumming, car engines, airplane jets, helicopters, explosions, or any other sound effect. Each sound effect sample in the sound database 106 can be mapped to a text description of that sound effect in the sound database 106.
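By way of a non-limiting illustration, the following sketch shows one way the categories and mappings described above could be organized in the sound database 106; the field names are illustrative assumptions rather than a required schema.

```python
# Illustrative sketch: a simplified record structure for entries in the sound
# database 106. Field names are assumptions, not a required schema.
from dataclasses import dataclass

@dataclass
class SoundSample:
    category: str         # e.g. "spoken word", "sung word", "animal", "instrument", "nature", "effect"
    waveform_path: str    # location of the prerecorded sample
    text: str             # mapped word, note name/solfege syllable, or text description
    instrument: str = ""  # populated only for musical instrument sounds 206

sound_database = [
    SoundSample("spoken word", "samples/table.wav", "table"),
    SoundSample("animal", "samples/bark.wav", "a dog barks"),
    SoundSample("instrument", "samples/piano_c4.wav", "C (do)", instrument="piano"),
    SoundSample("nature", "samples/thunder.wav", "thunder rumbles"),
]
```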
Returning to
The transcription component 104 can compare the speakers' voices against frequencies of audio samples from known speakers in the sound database 106 and attempt to find a matching speaker. If a matching speaker is found, the dialog format can list the speaker's name in front of the transcribed text. However, if no matching speaker is found but the transcription component 104 identifies distinct voices in the input audio, it can use generic identifiers such as “Speaker 1” and “Speaker 2.” In some embodiments when the frequencies of a vocal sample fall into a range that is statistically more likely to be a man's voice, a woman's voice, or a child's voice, the transcription component 104 can identify the speaker in the transcript file 108 as a “Man,” “Woman,” or “Child.”
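By way of a non-limiting illustration, the following sketch shows one way a distinct voice could be labeled for the dialog format; the pitch thresholds and matching tolerance are rough illustrative assumptions, not values required by this disclosure.

```python
# Illustrative sketch: label a distinct voice for the dialog format. Pitch
# thresholds and the matching tolerance are rough assumptions.
def label_speaker(mean_pitch_hz, known_speakers, generic_index,
                  tolerance_hz=10.0, guess_by_range=True):
    # First try to match the voice against known speakers from the sound database 106.
    for name, typical_pitch_hz in known_speakers.items():
        if abs(mean_pitch_hz - typical_pitch_hz) <= tolerance_hz:
            return name
    # In some embodiments, guess "Man", "Woman", or "Child" from the pitch range.
    if guess_by_range:
        if mean_pitch_hz >= 250:
            return "Child"
        if mean_pitch_hz >= 165:
            return "Woman"
        if mean_pitch_hz >= 85:
            return "Man"
    # Otherwise use a generic identifier such as "Speaker 1".
    return "Speaker {}".format(generic_index)
```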
In some embodiments the transcription component 104 can select the dialog format by default when the input audio data is identified by the transcription component 104 or via user input as a recording of a conversation between more than one person and a low level of background noise is detected. By way of a non-limiting example, a dialog format can be selected when the input audio data is a recording of two people having a phone conversation.
When the transcription component 104 is set to use a screenplay format and it encounters non-vocal data in the input audio data, it can attempt to identify that non-vocal data using animal sounds 204, musical instrument sounds 206, nature sounds 208, and/or sound effects 210 in the sound database 106. When the transcription component 104 identifies non-vocal data, it can insert a text description of the sounds into the transcript file 108.
In some embodiments the transcription component 104 can select the screenplay format by default when the input audio data is identified by the transcription component 104 or via user input as an audio or video recording of a movie, television show, play, or other performance. By way of a non-limiting example, a screenplay format can be selected when three actors in a play are recorded speaking their lines and the transcription component 104 identifies three distinct voices as well as other sound effects.
In some embodiments the transcribed text in an illustrated book format can follow a dialog or screenplay format in which identifiers for different speakers precede transcribed words and/or descriptions of identified non-vocal sounds are included, as described above. By way of a non-limiting example,
In other embodiments the transcribed text in an illustrated book format can follow the essay format without identifying the speaker. By way of a non-limiting example,
In some embodiments the transcription component 104 can select the captioning format by default when the input audio data is identified by the transcription component 104 or via user input as a video. By way of a non-limiting example, a captioning format can be selected when the input audio data is any other type of video, such that the transcript file 108 can be used for subtitles when the video is played. By way of another non-limiting example, a captioning format can be selected when the input audio data is a music video, such that the transcription component 104 can identify the lyrics being sung over background music and generate captioning data that can be used for karaoke when the music video is played. In alternate embodiments still video frame images can be extracted from a music video and transcribed song lyrics can be displayed over or around the video frames in a static format for printing or display, similar to the illustrated book format described above.
As described above, the transcription component 104 can use information in the sound database 106 to generate a transcript file 108 based on audio recognized and identified within the input audio data. The transcription component 104 can use frequency analysis, pattern analysis, statistical analysis, machine learning, and/or artificial intelligence to compare audio segments from the input audio data against sound samples in the sound database 106 to find the best match. In some embodiments individual audio segments from the input audio data can be referred to as sound tokens 1100, as discussed further below. The transcription component 104 can use text mapped to an audio sample in the sound database 106 that best matches a sound token 1100 to transcribe that audio sample into text for the transcript file 108. In some embodiments the transcription component can be set to transcribe spoken words in the input audio data. In other embodiments the transcription component can be set to transcribe spoken words as well as other sounds, such as animal sounds, nature sounds, and other sound effects. In still other embodiments the transcription component can be set to transcribe spoken words and/or other sounds, as well as human singing and/or musical sounds produced by musical instruments.
At step 1002, an audio source 100 can provide input audio data to the audio transcription device 102. As described above the input audio data can be live or prerecorded sounds provided in audio or video data. If the input audio data is provided in an analog format, the audio transcription device 102 and/or transcription component 104 can convert the analog audio to digital audio using a device driver, software utility, or other processing component. Similarly, if the input audio data is provided as an un-encoded raw digital audio signal, the audio transcription device 102 and/or transcription component 104 can convert it into an encoded digital audio file. In some embodiments the transcription component 104 can use digital filtering, noise elimination, modulation, or other processing steps to clean the input audio data, such as eliminating static.
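By way of a non-limiting illustration, the following sketch shows a simple cleanup pass that smooths short static spikes in digitized input audio data with a moving average; a real device driver or filtering component could use more sophisticated processing. NumPy is assumed to be available.

```python
# Illustrative sketch: smooth short static spikes in a digitized signal with a
# moving average. Real filtering and noise elimination could be more elaborate.
import numpy as np

def smooth(samples, window=5):
    kernel = np.ones(window) / window
    return np.convolve(samples, kernel, mode="same")
```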
At step 1004, the transcription component 104 can divide the input audio data into discrete sound tokens 1100. As shown in
In some embodiments the transcription component 104 can find local or global crests and troughs in a sound wave, and identify sound tokens 1100 as portions of the wave that are between identified crests or between identified troughs. As such, each sound token 1100 can have at least one wavelength.
By way of a first non-limiting example,
By way of another non-limiting example, the transcription component 104 can identify sound tokens 1100 between each global or local trough or crest, such that each sound token 1100 has one wavelength. For example, the third sound token 1100c shown in
In alternate embodiments the transcription component 104 can identify sound tokens 1100 in the input audio data by finding periods of sound bounded by periods of silence. The transcription component 104 can identify segments of the input audio data's sound wave with amplitudes that are likely to be silence or periods of low volume. The transcription component 104 can then identify sound tokens 1100 as segments of the sound wave that are between such periods of silence or low volume. By way of a non-limiting example, human speech typically has short periods of silence between spoken words. As such, sound tokens 1100 corresponding to individual spoken words can often be found by selecting sound data between identified periods of silence in a sound wave.
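By way of a non-limiting illustration, the following sketch splits a mono digital signal into sound tokens 1100 bounded by periods of low volume; the amplitude threshold and minimum silence duration are illustrative assumptions, and NumPy is assumed to be available.

```python
# Illustrative sketch: identify sound tokens 1100 as segments bounded by
# stretches of low volume. Threshold values are assumptions, not requirements.
import numpy as np

def split_on_silence(samples, sample_rate, silence_threshold=500, min_silence_seconds=0.2):
    """Return (start, end) sample indices for segments between quiet stretches."""
    quiet = np.abs(samples) < silence_threshold
    min_quiet_run = int(min_silence_seconds * sample_rate)
    tokens, start, quiet_run = [], None, 0
    for i, is_quiet in enumerate(quiet):
        if is_quiet:
            quiet_run += 1
            if start is not None and quiet_run >= min_quiet_run:
                tokens.append((start, i - quiet_run + 1))  # close the current token
                start = None
        else:
            quiet_run = 0
            if start is None:
                start = i                                  # open a new token
    if start is not None:
        tokens.append((start, len(samples)))
    return tokens
```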
After sound tokens 1100 have been identified, the transcription component 104 can track timing attributes of the sound tokens 1100, such as how long they are and a timestamp of when they began within the input audio data. The transcription component 104 can also track other attributes of each sound token 1100, such as the number of wavelengths it contains, its minimum and/or maximum amplitude, information about its sequence of crests and troughs, or other information about the sound token's waveform.
In some embodiments the transcription component 104 can process each channel of multi-channel input audio data separately to identify sound tokens 1100 within each channel. By way of a non-limiting example, when the input audio data is a two-channel stereo or a 5.1 surround sound recording of dialog between two speakers that has been mixed such that one speaker's voice is primarily represented in one channel and the other speaker's voice is primarily represented in another channel, processing the sound channels separately can assist in identifying sound tokens 1100 associated with each distinct speaker.
At step 1006, the transcription component 104 can compare each identified sound token 1100 against audio data in the sound database 106 to find matching text or descriptions. The transcription component 104 can compare each sound token 1100 against prerecorded audio samples in the sound database 106 to find one that best matches the sound token 1100. In some embodiments the transcription component 104 can compare original sound tokens 1100 against audio samples. In other embodiments the transcription component 104 can transform original sound tokens 1100 to make them more similar to audio samples in the sound database 106 prior to performing a comparison. By way of a non-limiting example the transcription component 104 can adjust the volume of a sound token 1100 by downscaling or upscaling the magnitude of its signal to more closely match the volume of known audio samples. By way of another non-limiting example the transcription component 104 can adjust the pitch of a sound token 1100 by shrinking or expanding the sound wave to more closely match the pitch of known audio samples.
In some embodiments the transcription component 104 can do a bitwise comparison between an original or transformed sound token 1100 and audio samples from the sound database 106 to find the closest matching audio sample. In other embodiments the transcription component 104 can compare an original or transformed sound token's waveform, or other attributes of the sound token 1100 such as its number of wavelengths, sequence of crests and troughs, pitch, frequency, or other attributes, against corresponding attributes of audio samples in the sound database 106 to find the closest match.
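By way of a non-limiting illustration, the following sketch normalizes a sound token's volume and scores it against prerecorded samples, with a simple correlation measure standing in for the bitwise or waveform comparisons described above; NumPy is assumed to be available.

```python
# Illustrative sketch: scale a sound token's volume and find the best-matching
# prerecorded sample using a simple correlation score. The scoring choice is an
# assumption; other embodiments could compare wavelength counts, crests, etc.
import numpy as np

def normalize_volume(token):
    peak = np.max(np.abs(token))
    return token / peak if peak > 0 else token

def best_match(token, samples):
    """samples maps a text label to a volume-normalized reference waveform."""
    token = normalize_volume(np.asarray(token, dtype=float))
    best_label, best_score = None, float("-inf")
    for label, reference in samples.items():
        n = min(len(token), len(reference))            # compare the overlapping portion
        score = float(np.dot(token[:n], reference[:n]))
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
```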
In some embodiments if the transcription component 104 does not find a sufficient match for a single sound token 1100, the transcription component 104 can compare combinations of two or more sound tokens 1100 against audio samples in the sound database 106.
As described above, the sound database 106 can comprise audio samples in a plurality of categories, including vocal sounds 202, animal sounds 204, musical instrument sounds 206, nature sounds 208, and/or sound effects 210. When the sound database 106 does contain audio samples from more than one category, the transcription component 104 can attempt to determine the category that is most likely to contain the closest match to an audio segment before performing further comparisons in the category.
In some embodiments the transcription component 104 can receive instructions from users that identify the most likely category. By way of a non-limiting example, a user who has listened to the input audio data can determine that it is primarily dialog between human speakers, and thus can instruct the transcription component 104 to prioritize comparisons between sound tokens 1100 and audio samples in the vocal sounds 202 portion of the sound database 106.
In other embodiments the transcription component 104 can use the file type of the input audio data to identify the most likely category. By way of a non-limiting example, input audio data provided as an MP3 can be likely to include musical sounds, and thus the transcription component 104 can prioritize comparisons between sound tokens 1100 and audio samples in the sung words 214 and/or musical instrument sounds 206 portions of the sound database 106. By way of another non-limiting example, input audio data provided as a video file can be likely to include dialog, music, and sound effects, and thus the transcription component 104 can prioritize comparisons between sound tokens 1100 and audio samples in the spoken words 212, musical instrument sounds 206, and/or sound effects 210 portions of the sound database 106.
In still other embodiments the transcription component 104 can identify the most likely category based on representative samples in the sound database 106. In these embodiments one or more audio samples in each category can be designated as a representative sample for that category. The transcription component 104 can compare a sound token 1100 against a representative sample from a category to determine if a match for that sound token 1100 is likely to be found in that category. If the result of the comparison between a sound token 1100 and a representative sample for a category is above a predefined threshold, the transcription component 104 can perform additional comparisons on other audio samples in that category. By way of a non-limiting example, the frequencies in a sound token 1100 of a human voice can be a closer match for a representative sample in the spoken words 212 category than one in the sound effects 210 category, and as such the transcription component 104 can prioritize further comparisons against other audio samples in the spoken words 212 category.
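By way of a non-limiting illustration, the following sketch picks the category whose representative sample best matches a sound token before any further comparisons are performed; the score function can be any similarity measure, such as the correlation score sketched above.

```python
# Illustrative sketch: use one representative sample per category to decide
# where to search first. "score" is any similarity measure (an assumption).
def likely_category(token, representatives, score, threshold):
    """representatives maps a category name to one representative waveform."""
    scored = {category: score(token, sample) for category, sample in representatives.items()}
    best = max(scored, key=scored.get)
    return best if scored[best] >= threshold else None
```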
In some embodiments the transcription component 104 can group sound tokens 1100 based on common frequency ranges, common pitch ranges, or other factors that can identify sound data produced by the same source. As such, after a likely category of audio samples is determined for one sound token 1100 in a group, subsequent sound tokens 1100 in the group can be compared against audio samples in the same category.
At step 1008, the transcription component 104 can use mapping data between the closest-match audio samples and text descriptions of the audio samples to build the content of a transcript file 108. When an audio sample is determined to be a phoneme of human speech, the transcription component 104 can use surrounding phonemes in the input audio data to build words based on statistical measurements of the phonemes and/or other algorithms. As such, the transcription component 104 can add words or descriptions that match one or more sound tokens 1100 to the transcript file 108.
In some embodiments, when the transcription component 104 has grouped sound tokens 1100 based on a common frequency range, pitch range, or other common attributes, the transcription component 104 can add identifying text to the transcript file 108 for sound tokens 1100 in the group. By way of a non-limiting example, the transcription component 104 can use distinct frequency ranges and/or vocal patterns in sound tokens 1100 of human speech to identify sound tokens 1100 associated with different speakers within the input audio data, such that the transcription component 104 can add a label for each distinct speaker in the transcript file 108 in a dialog presentation format or other presentation format.
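By way of a non-limiting illustration, the following sketch groups sound tokens by rough pitch range so each group can be given a distinct speaker label; the bucket width is an illustrative assumption.

```python
# Illustrative sketch: group sound tokens 1100 by rough pitch range and assign
# generic speaker labels. The bucket width is an assumption.
def group_by_pitch(token_pitches, bucket_hz=30):
    """token_pitches is a list of (token_id, mean_pitch_hz) pairs."""
    groups = {}
    for token_id, pitch_hz in token_pitches:
        bucket = round(pitch_hz / bucket_hz)
        groups.setdefault(bucket, []).append(token_id)
    return {"Speaker {}".format(i + 1): ids for i, ids in enumerate(groups.values())}
```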
At step 1010, the transcription component 104 can format the transcript file 108 in a particular presentation format, and output the transcript file 108 for storage, printing, and/or display. In some embodiments, a user can input a command at the audio transcription device 102 to select a desired presentation format for the transcript file 108, either before the process begins or after words and sounds have been recognized. In other embodiments the transcription component 104 can attempt to identify an appropriate presentation format for the transcript file 108 based on one or more groups of recognized sound tokens 1100, as described below.
In some embodiments or situations the input audio data can already contain textual representations of its content when it is provided by an audio source 100. By way of non-limiting examples, the input audio data can be a video that contains closed captioning data and/or descriptive text that describes actions that are occurring during the video. In some embodiments, when the transcription component 104 receives input audio data that already contains textual representations of its content, it can move directly to step 1010 to format that textual representation in a selected presentation format. By way of a non-limiting example, the transcription component 104 can use pre-existing closed captioning data in a video as the text content of a transcript file 108 generated in an illustrated book format.
At step 1202, the transcription component 104 can consider a group of sound tokens 1100 that were determined through the process of
At step 1204, the transcription component 104 can determine if the shared audio type of the group of sound tokens 1100 was human vocal sounds. If the group's audio type was human vocals, at step 1206 the transcription component 104 can determine if the group's audio type was human speech or human singing.
If the transcription component 104 determines that the audio type of the group of sound tokens 1100 is human speech during step 1206, the transcription component 104 can determine at step 1208 whether the audio frequencies in the group indicate more than one speaker.
If at step 1208 the transcription component 104 identifies only one speaker in a group of sound tokens 1100 found to be human speech, at step 1210 it can prepare the text of the words mapped to the sound tokens 1100 during the process of
If at step 1208 the transcription component 104 identifies more than one speaker in a group of sound tokens 1100 found to be human speech, at step 1212 it can prepare the text of the words mapped to the sound tokens 1100 during the process of
Returning to step 1206, if the transcription component 104 determines that the audio type of the group of sound tokens 1100 is human singing, the transcription component 104 can determine at step 1214 whether the audio frequencies in the group indicate more than one singer.
If at step 1214 the transcription component 104 identifies only one singer in a group of sound tokens 1100 found to be human singing, at step 1216 it can prepare the text of the words mapped to the sound tokens 1100 during the process of
If at step 1214 the transcription component 104 identifies more than one singer in a group of sound tokens 1100 found to be human singing, at step 1218 it can prepare the text of the words mapped to the sound tokens 1100 during the process of
Returning to step 1204, if the transcription component 104 found that the group of sound tokens 1100 was not human speech or human singing, it can determine at step 1220 whether the group of sound tokens 1100 was sound produced by musical instruments.
If at step 1220 the group of sound tokens 1100 is found to be sound produced by musical instruments, at step 1222 the transcription component 104 can prepare the musical symbols mapped to the sound tokens 1100 during the process of
If at step 1220 the group of sound tokens 1100 is not found to be sound produced by musical instruments, at step 1224 the transcription component 104 can prepare the text of a description of a nature sound or sound effect mapped to the sound tokens 1100 during the process of
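By way of a non-limiting illustration, the following sketch condenses the per-group decisions described above into a single function; the audio type strings and the returned format names are illustrative placeholders rather than required values.

```python
# Illustrative sketch: select a presentation treatment for one group of sound
# tokens 1100 based on its shared audio type. Type strings and format names are
# placeholders, not required values.
def select_group_format(audio_type, voice_count=1):
    if audio_type == "speech":                     # steps 1208-1212
        return "essay" if voice_count <= 1 else "dialog"
    if audio_type == "singing":                    # steps 1214-1218
        return "single-voice lyrics" if voice_count <= 1 else "multi-voice lyrics"
    if audio_type == "instrument":                 # musical score branch
        return "musical score"
    return "sound description"                     # nature sounds and sound effects
```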
At step 1226 the transcription component 104 can select a final presentation format for the transcript file 108 based on the presentation formats selected during the preceding steps for each group of sound tokens 1100 that share a common audio type. The transcript file 108 can use text formatted for different presentation formats in a single file, such as having some sections formatted in an essay format, some sections formatted in a dialog format, some sections formatted as a musical score, and some sections being descriptions of nature sounds or other sound effects inserted between other transcribed text. By way of a non-limiting example, if all of the sound tokens 1100 were human speech from the same speaker and the text was formatted for an essay format in step 1210, the transcription component 104 can generate the transcript file 108 in the essay format. However, if one group of sound tokens 1100 was human speech from one speaker, another group of sound tokens 1100 was human speech from two different speakers, and yet another group of sound tokens 1100 was sound effects, the transcription component 104 can generate the transcript file 108 in a screenplay format by inserting the essay-formatted text from the single speaker and identifying it as being spoken by a narrator, inserting the dialog-formatted text from the two other speakers and identifying it as being spoken by individual characters, and inserting the text descriptions of other sound effects between the speech transcriptions.
If the audio input data was a video file, the transcription component can also at step 1226 use extracted frames and add them to the transcript file 108 along with text formatted in any other presentation format to create an illustrated book format. Similarly, the text formatted for any presentation format can be added to the video file as caption data for the captioning format.
At step 1302, a print job can be initiated at the printer. In some embodiments the audio input file can be sent over a network connection to the printer to begin a print job, such as initiating a remote print job with the LPR (line printer remote) protocol or sending a file to the printer via an FTP (file transfer protocol) connection. In other embodiments a user can use a control panel or other user interface on the printer to select an audio input file from a local or networked data location to begin a print job.
At step 1304, the printer can determine whether the received print job is a regular print job or an audio print job. If the printer receives a file that has page content described by page description language (PDL) commands, the printer can determine that it is a regular print job. The printer can proceed at step 1306 to print the file by using a PDL raster image processor (RIP) to create raster representations of pages by interpreting its PDL commands, and then printing the raster representations onto paper with the printer's print engine.
However, if at step 1302 the printer received an audio or video file to initiate the print job, it can determine that this is an audio print job and proceed to transcribing the audio at step 1308 and identifying a presentation format for the transcribed audio at step 1310.
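By way of a non-limiting illustration, the following sketch classifies an incoming job by file extension as a stand-in for detecting whether the file contains PDL-described page content; the extension lists are illustrative assumptions.

```python
# Illustrative sketch: classify an incoming job at step 1304. A file-extension
# check stands in for detecting PDL page content; the lists are assumptions.
AUDIO_EXTENSIONS = {".mp3", ".wav", ".wma", ".aac"}
VIDEO_EXTENSIONS = {".avi", ".wmv", ".mp4", ".mov", ".mpg", ".3gp"}

def classify_print_job(filename):
    suffix = filename[filename.rfind("."):].lower() if "." in filename else ""
    if suffix in AUDIO_EXTENSIONS or suffix in VIDEO_EXTENSIONS:
        return "audio print job"       # transcribe first (steps 1308-1312)
    return "regular print job"         # send straight to the PDL RIP (step 1306)
```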
At step 1308, the printer can run a transcription component 104 to find text associated with audio in the provided audio or video file for the transcript file 108, as described above with respect to
At step 1310, the printer's transcription component 104 can determine a presentation format for the transcript file 108 based on the audio types for transcribed audio found during step 1308, as described above with respect to the process of
At step 1312, the transcription component 104 can output the transcript file 108 as a print job using PDL commands that describe the content of each page according to the selected presentation format. The printer can then proceed at step 1306 to print the file by using its PDL RIP to create raster representations of each page by interpreting the PDL commands, and then printing the raster representations onto paper with the printer's print engine.
In alternate embodiments, such as ones in which the printer is an MFP, the printer can be set to perform alternate operations based on a transcript file generated from an audio file or video file. By way of non-limiting examples, when an audio file is provided to an MFP to initiate a print job, the MFP can create a transcript file 108 based on the audio file as described in
As described above, a transcription component 104 can generate a transcript file 108 based on input audio data such that it can be printed and/or displayed in a presentation format. As such, audio from interviews, speeches, podcasts, audio blogs, movies, or any other source can be transcribed into text by a transcription component 104. The transcribed text can be printed, displayed on a screen, stored, transferred, searched, and/or be read or used for any desired purpose. By way of non-limiting examples, when audio from an interview with a historical figure is transcribed by the transcription component 104, the transcribed text can be stored in archives, presented or analyzed in news articles, or included in a textbook for students. By way of another non-limiting example, a transcript file 108 generated by a transcription component 104 from an audio source in one language can be provided to a translator to translate into another language.
Additionally, the formatting of a transcript file 108 into an appropriate presentation format can assist in allowing a reader to comprehend the content and/or context of the input audio data by how it is formatted. By way of a non-limiting example, generating a transcript file 108 in a dialog format from a recording of a conversation between two people can help a reader understand which person spoke which word in the original audio.
In some embodiments a transcription component 104 can be activated on demand upon the occurrence of a particular event. By way of non-limiting examples, a printer can activate a transcription component 104 when a print job is initiated at the printer based on an input audio file. Similarly, a transcription component 104 at a standalone audio transcription device 102 can be activated when a user inputs a command to transcribe a particular piece of input audio data.
In other embodiments a transcription component 104 can be activated automatically as part of a service. By way of a first non-limiting example, a transcription component 104 can be integrated into a teleconference service. In this example the audio source 100 or the audio transcription device 102 can be a teleconferencing server through which remote business partners or team members can connect to over the internet or via phone to conduct a meeting. The teleconferencing server can record audio of the meeting, and provide the audio to a transcription component 104. The transcription component 104 can generate a transcript file 108 in a particular presentation format, such as a dialog format. The teleconferencing service can then email the transcript file 108 to the meeting attendees and/or others who were not in attendance, such that they can have a copy of transcribed text from the meeting for their records. Similarly, the teleconferencing service can archive a copy of the transcribed text or make it available to users on a web site.
By way of a second non-limiting example, a transcription component 104 can be integrated into a security service. In this example the audio source 100 can be one or more microphones set up around an environment that is being monitored by the security service. The microphones can be set up to record ambient audio on a permanent basis or during selected time periods. In some embodiments the microphones can record audio when they detect noise at higher than a designated volume threshold, while in other embodiments they can record audio indefinitely. The microphones can pass recorded audio to an audio transcription device 102 running a transcription component 104, such that it can generate a transcript file 108 from the recorded audio. In some embodiments the audio transcription device 102 can enter a power-saving mode when not transcribing recorded audio, but wake up when audio is received from the microphones. When a transcript file 108 contains a transcription of recorded noise, the security system can notify a user, email the transcript file 108 to a user, and/or archive the transcript file 108 for later review. In some embodiments, if the security system comprises one or more cameras, images from the cameras can be included in a transcript file 108 along with transcribed text.
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, the invention as described and hereinafter claimed is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.