The present invention relates to a caption service system for remote speech recognition, and more particularly to a system that uses a caption server and a listener-typist to provide a remote speech-recognition caption service for the hearing impaired.
Because of the COVID-19 outbreak, remote live broadcasting and teaching have become widely adopted. However, most current remote live broadcasts and lessons lack captions, so it is impossible for students with hearing impairments to follow the classes.
In ordinary classrooms, hearing-impaired students also have problems in class, because there is no monitor that directly displays captions of the teacher's lecture content. Likewise, the hearing impaired cannot participate in presentations and conferences because no monitor directly displays captions.
Therefore, providing captions that show what the teacher or speaker says would be a great boon for the hearing impaired.
Nowadays, some conferences employ a listener-typist who types the speaker's content on a computer on the spot and displays it on the screen as captions, so that the hearing impaired can follow the proceedings. However, the listener-typist expends a great deal of energy listening to the speaker; once the working hours become too long, sentences may be missed and typos introduced. Therefore, a more complete remote listener-typist solution must be provided.
The object of the present invention is to provide a caption service system for remote speech recognition that supplies captions to the hearing impaired. The contents of the present invention are described below.
This system includes a speaker and live broadcast equipment at location A, a listener-typist and a computer at location B, a hearing-impaired viewer and a live screen at location C, and an automatic speech recognition (ASR) caption server at location D. The live broadcast equipment, the computer, the live screen and the ASR caption server are connected through a network.
The automatic speech recognition (ASR) caption server includes: a real-time messaging protocol (RTMP) endpoint to receive the live stream from location A through the network; an open-source speech recognition toolkit for speech recognition and signal processing; a web server responsible for providing the web-page interface, which is transmitted to the live broadcast equipment, the computer and the live screen through the HTTP protocol; and a recording module used for the listener-typist's playback function.
The speaker's audio is sent to the ASR caption server to be converted into text, the text is corrected by the listener-typist, and then the text caption is sent to the live screen of the hearing impaired together with the speaker's video and audio, so that the hearing impaired can see the caption of what the speaker says.
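The flow described above can be sketched as a small pipeline. This is a minimal illustration only, assuming hypothetical names (`recognize`, `correct`, `Caption`); the actual server uses Kaldi ASR and a human listener-typist rather than the stand-in functions shown here.

```python
# Sketch of the caption pipeline: speaker audio is transcribed by the ASR
# server, corrected by the listener-typist, and the final caption is sent
# to the live screen together with the stream.
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator

@dataclass
class Caption:
    start_sec: float   # position of the utterance in the live stream
    text: str          # caption text shown on the live screen

def caption_pipeline(
    utterances: Iterable[tuple[float, bytes]],
    recognize: Callable[[bytes], str],
    correct: Callable[[str], str],
) -> Iterator[Caption]:
    """Convert (timestamp, audio) pairs into corrected captions."""
    for start_sec, audio in utterances:
        raw_text = recognize(audio)   # ASR caption server (e.g. Kaldi)
        fixed = correct(raw_text)     # listener-typist revision
        yield Caption(start_sec, fixed)

# Stand-in recognizer and corrector, for demonstration only.
fake_asr = lambda audio: audio.decode().lower()
fix_typo = lambda text: text.replace("speech recogniton", "speech recognition")

captions = list(caption_pipeline(
    [(0.0, b"Hello class"), (3.5, b"Speech recogniton demo")],
    fake_asr, fix_typo,
))
```

In the real system the correction step is asynchronous: the ASR output is shown immediately and the listener-typist's revision replaces it shortly after.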
The ASR caption server 7 uses the open-source speech recognition toolkit Kaldi ASR 10 for speech recognition and signal processing, which is freely available under the Apache License v2.0.
The ASR caption server 7 is equipped with a web server 11, which provides the web interface delivered to clients through HTTP (web browsers). The clients are the live broadcast equipment 2, the computer 4 and the live screen 6.
The ASR caption server 7 has a recording module 12 used by the listener-typist to perform a replay function.
Referring to
The second path, containing only the audio of the speaker 1, is input into the ASR uploading interface 14, which packetizes the audio and then live-streams it through RTMP 9 (or HLS) to the ASR caption server 7.
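The patent does not specify how the ASR uploading interface 14 packetizes the audio; a common way to implement this step is to let ffmpeg encode the audio-only stream and push it to the server's RTMP ingest. The URL and input source below are placeholders, and the snippet only builds the command rather than executing it.

```python
# Build an ffmpeg command that streams audio only (no video) to an RTMP or
# HLS endpoint, matching the second path described above.
def build_rtmp_push_cmd(input_source: str, url: str, use_hls: bool = False) -> list[str]:
    """Return an ffmpeg command list for audio-only live streaming."""
    cmd = [
        "ffmpeg",
        "-i", input_source,  # e.g. a capture device or local recording
        "-vn",               # drop video: the second path carries audio only
        "-c:a", "aac",       # AAC audio, the codec expected in RTMP/FLV
        "-b:a", "128k",
    ]
    if use_hls:
        cmd += ["-f", "hls", url]  # HLS playlist output instead of RTMP
    else:
        cmd += ["-f", "flv", url]  # RTMP carries FLV-packetized streams
    return cmd

# Placeholder server address for illustration.
cmd = build_rtmp_push_cmd("mic.wav", "rtmp://asr-caption-server/live/speaker1")
```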
Referring to
Referring to
The listener-typist 3 is granted read and write authority in the ASR caption server 7 so as to be capable of revising the text generated by the Kaldi ASR 10 in the web server 11. Each section of the text has a label; for example, if the listener-typist 3 double-clicks on section C of the text, the web server 11 follows the instructions of the related label and asks the audio record 16 to play back the paragraph starting at the N3-th second with time length Z seconds, so that the listener-typist 3 can recognize the content spoken by the speaker 1 and amend the text.
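The label-to-playback lookup above can be sketched as follows, under the assumption that each text section stores the second N at which its audio starts and a length Z; the `SectionLabel` structure and `playback_request` helper are illustrative names, not part of the invention.

```python
# Map a double-clicked text section to the (start, length) replay request
# that the web server sends to the recording module.
from typing import NamedTuple

class SectionLabel(NamedTuple):
    section: str     # e.g. "C"
    start_sec: int   # N: offset of the paragraph in the recording
    length_sec: int  # Z: duration to replay

def playback_request(labels: dict[str, SectionLabel], section: str) -> tuple[int, int]:
    """Return the (start, length) the recording module should replay."""
    label = labels[section]
    return (label.start_sec, label.length_sec)

labels = {
    "A": SectionLabel("A", 0, 6),
    "B": SectionLabel("B", 6, 8),
    "C": SectionLabel("C", 14, 5),  # section C starts at second 14, lasts 5 s
}
start, length = playback_request(labels, "C")  # → (14, 5)
```

Replaying only the labeled paragraph, rather than the whole recording, is what lets the listener-typist verify a single doubtful sentence quickly.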
Referring to
The OBS 13 is capable of merging pictures. The speaker 1 at the live broadcast equipment 2 selects the caption content 18 from the web server 11 of the ASR caption server 7 and merges it with the video and audio 17 from the live broadcast equipment 2 through the OBS 13, outputting to the live screen 6 a picture containing the caption generated by the ASR caption server 7; the OBS 13 then uploads the result to the YouTube, Facebook or Twitch platform, so that the hearing impaired 5 at location C can see the caption content 18 in the caption area 61 on the live screen 6.
The scope of the present invention depends upon the following claims, and is not limited by the above embodiments.
Number | Name | Date | Kind |
---|---|---|---|
6856960 | Dragosh | Feb 2005 | B1 |
8209184 | Dragosh | Jun 2012 | B1 |
11069368 | Lipman | Jul 2021 | B2 |
20060122836 | Cross | Jun 2006 | A1 |
20170069311 | Grost | Mar 2017 | A1 |
20180233135 | Talwar | Aug 2018 | A1 |
20200013388 | Lee | Jan 2020 | A1 |
20210314523 | Kamisetty | Oct 2021 | A1 |
20220103683 | Engelke | Mar 2022 | A1 |
20220366904 | Martinson | Nov 2022 | A1 |
Number | Date | Country
---|---|---
20230055924 A1 | Feb 2023 | US