The present invention relates to IPCTS (Internet Protocol Captioned Telephone Service), preferably using Automated Speech Recognition (ASR), and still more particularly to systems configured to provide a user with at least one of (a) an ability to connect to and use the IPCTS service when a third party calls a number of the user not associated with the IPCTS service, (b) call conferencing IPCTS which identifies each caller's contribution to the call, and/or (c) personal speech preference recognition (such as when odd geographic names or specialized medical or other terms are often used which are not particularly common to the general public).
Ever since the telephone was invented, individuals with hearing loss have struggled to communicate on it. Hearing the person on the other end of the call is a major problem for a hard of hearing user, and the traditional PSTN (Public Switched Telephone Network) has never been configured to accommodate this need. Historically, the use of a relay service has been the solution. A relay service traditionally involves a third-party operator who helps facilitate communication between the two parties on a call. The first relay service used a TTY (Teletypewriter) that allowed the hearing-impaired user to read what the other party was saying. It would work by having the far end caller speak; the operator would listen to what was being said and then type it word for word, in essence “relaying the message”. The individual with hearing loss would then read the conversation that the relay operator sent.
As technology changed, so did relay services. The latest iteration of the relay service is IPCTS (Internet Protocol Captioned Telephone Service). IPCTS functions much in the same way as the traditional relay service except that it relies on technology such as the internet and computers to speed up the transmission of the captions to the end user. IPCTS currently operates in one of two ways. The first uses a CA (Captioning Assistant) who listens to the conversation and then re-voices it to a computer. The computer then uses voice recognition software to turn the CA's speech into text, which is then sent to the end user to read. A competing method of IPCTS uses highly trained transcriptionists who listen to the conversation and then retype every word, which is ultimately sent to the user. In both cases the human CAs or transcriptionists are an integral part of the conversation and service. Thanks to continued advances in technology, computers are finally surpassing humans in accuracy and efficiency. ASR (Automated Speech Recognition) is the next major leap forward in IPCTS. Some IPCTS providers are beginning to experiment with a new “hybrid” approach in which their call centers give the CA the ability to switch between ASR and the CA computer re-voicing to provide a better experience for the hearing-impaired user. U.S. Pat. No. 10,044,854, incorporated herein by reference, describes one such improvement.
One feature of IPCTS is that the captioned text is not stored on a server controlled by the IPCTS provider. Such storage is currently expressly forbidden by the Federal Communications Commission. Once the text is sent, it cannot be recalled by the IPCTS provider.
Another issue arises for IPCTS users when a doctor or other party calls them on a line that is not associated with an IPCTS service.
Additionally, problems arise for IPCTS users when participating in a conference call. Current IPCTS systems could not handle conference calls well, and certainly could not separate out which party said what.
Finally, another problem is created by traditional automated speech recognition (ASR) software. Most versions of modern ASR software preferentially “recognize” commonly used words. If someone lives in an area dominated by awkwardly spelled place names, particularly in low population areas (such as Chuathbaluk or Kupreanof, Alaska, etc.), much ASR technology would not normally recognize some words properly. The same problem can occur when trying to identify highly technical language utilized in some fields such as medicine (such as prophylaxis, etc.) and others. While there are software programs that can customize preferred words to a specific individual's likely usage, no party is known to employ such software with IPCTS.
The applicant appreciates improvements have been made in the field of IPCTS, but still further improvements are believed to be desirable to provide improved services for at least users with hearing loss.
It is an object of many embodiments of the present invention to provide improved IPCTS services to a user.
It is another object of many embodiments to provide improvements to the field of captioned telephone services.
It is another object of at least some embodiments of the present invention to provide improved communication services through captioned telephone services, such as an ability for an IPCTS subscriber to receive a call from another party on a line not affiliated with an IPCTS service and then conference in or otherwise join the user's IPCTS number and/or account so that the IPCTS service can transcribe the call.
Another object of at least some embodiments of the present invention is to have an ability to perform conference call IPCTS whereby each party's contribution to the call is separated out with transcribed text, such as by color, telephone number, caller 1, caller 2, etc., and/or names, and/or by other means.
Finally, for at least some embodiments, personalized vocabulary words can be prioritized for an individual by the IPCTS without contributing to the ASR vocabulary preference body of the public as a whole. The IPCTS system may then recognize odd geographical terms, infrequently utilized technical terms, and/or other terms utilized by that individual which may not necessarily correspond to usage by the public as a whole.
Accordingly, in accordance with many embodiments of the present invention, a captioned telephone service is provided. A first caller (aka a remote party) attempts to call a user through the service. Specifically, the first caller could call a telephone number controlled by the system. Alternatively, the user (aka a subscriber to the IPCTS system) may call the first caller. However, when neither of these occurs, i.e., when two numbers not supported by an IPCTS system are connected, a lack of transcription capability would normally result. The applicant, however, has discovered and implemented a way to provide transcription services in such situations. The IPCTS subscriber can conference in or otherwise join the user's IPCTS number or account, such as by conferencing in the user's IPCTS number, so that the IPCTS system then transcribes the call.
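The conference-join flow described above can be illustrated with a minimal sketch. All names here (Call, IPCTSService, the telephone numbers) are hypothetical illustrations, not the applicant's actual implementation; the sketch only shows the sequence in which an unsupported call gains captioning once the subscriber conferences in the IPCTS number.

```python
class Call:
    """An in-progress call between two numbers, neither on the IPCTS."""
    def __init__(self, caller, callee):
        self.participants = [caller, callee]
        self.transcribing = False

    def conference_in(self, number):
        # Any party to the call may add a third number via conferencing.
        self.participants.append(number)


class IPCTSService:
    """Transcribes any call that its service number is conferenced into."""
    def __init__(self, service_number):
        self.service_number = service_number

    def join(self, call):
        # Once the IPCTS number is on the call, the service can hear all
        # parties and begin sending captions to the subscriber.
        call.conference_in(self.service_number)
        call.transcribing = True


# A doctor calls the subscriber on a line not associated with the IPCTS...
call = Call("555-0100", "555-0199")
assert not call.transcribing

# ...so the subscriber conferences in their IPCTS number to get captions.
service = IPCTSService("555-0150")
service.join(call)
assert call.transcribing
```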
With a call initiated, the user, if not previously connected to the IPCTS system, may connect to the IPCTS system. At least the user preferably has an IPCTS telephone number, and the service preferably applies ASR to the call. The first caller's text and the user's text, along with any other caller's text, can be transcribed in almost real time. Furthermore, all of the parties connected with the user to the IPCTS system can be transcribed so that the user can view the conversation as it occurs, provided the user is connected to the IPCTS service during the entire call. The ASR software, if utilized, can transfer to text not only the first (and any other) caller's spoken words, but also the user's speech, while potentially separating the callers' speech as has been done in application Nos. 62/849,425, filed May 17, 2019, 62/851,918, filed May 23, 2019, and/or 62/854,774, filed May 30, 2019, all of which are incorporated herein by reference in their entirety.
With prior art IPCTS systems, conference calls would have been extremely difficult to transcribe, if they could be transcribed at all. Furthermore, since an individual would have been listening and typing, it is extremely unlikely that the transcriber could or would have identified which party said which text. This would have become more difficult as the number of callers on the conference call increased. However, with one option provided by the applicant, for at least some embodiments, each caller's contribution on a conference call can be transcribed and ascribed to that particular caller. Not only does this work for each telephone number that calls into a number provided through the IPCTS service, but also for a conference call which the IPCTS service joins as described above. Specifically, the software utilized by the applicant can, even if the voices are not provided through separate telephone lines, still distinguish the voices and identify them as Speaker 1, Speaker 2, etc., or otherwise.
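The per-speaker attribution described above can be sketched as follows. A real system would run a speaker-diarization model on the mixed audio; here the diarizer output is simulated as (speaker_id, text) pairs, and the function names and ids are illustrative assumptions.

```python
def label_transcript(segments):
    """Map raw diarizer speaker ids to 'Speaker 1', 'Speaker 2', ... in
    order of first appearance, and format the captioned lines."""
    labels = {}
    lines = []
    for speaker_id, text in segments:
        if speaker_id not in labels:
            # First time we hear this voice: assign the next label.
            labels[speaker_id] = f"Speaker {len(labels) + 1}"
        lines.append(f"{labels[speaker_id]}: {text}")
    return lines


# Simulated diarizer output for a three-utterance conference call.
segments = [
    ("spk_a", "Can everyone hear me?"),
    ("spk_b", "Yes, loud and clear."),
    ("spk_a", "Great, we can begin."),
]
captions = label_transcript(segments)
# captions[0] is "Speaker 1: Can everyone hear me?"
```

In practice the labels could instead be colors, telephone numbers, or names, as the text above notes; only the mapping step changes.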
Furthermore, some embodiments of the present invention provide an ability to prioritize certain vocabulary words. Most ASR software utilizes a global database of terms and likely usage, and that algorithm is normally in a continuous state of update. The applicant's algorithm, however, can be changed for certain individuals without contributing to the global usage database, for at least some embodiments.
Specifically, the applicant intends to have users be able to prioritize certain vocabulary words that they may use, but that the general public may not, in an effort to better suit their specific situation. Alternatively, the system may learn the user's words over time. For instance, some users may live in an area whose name sees relatively insignificant use among the general public but would be used with great frequency by that particular user (e.g., Kupreanof, Alaska). Other users may utilize specific medical terms that are not used by the general public with great frequency but are used by that individual on a relatively routine basis (e.g., prophylaxis). Still other words may advantageously be prioritized by specific users.
Any of these various improvements may be desirable for various users.
The accompanying drawings illustrate preferred embodiments of the invention and, together with the description, serve to explain the invention. The drawings may not show elements to scale. These drawings are offered by way of illustration and not by way of limitation:
A flow chart of this embodiment is shown in
Finally,
Accordingly, the database 56 is not passed to other users as normally happens in ASR technology. One likely word is “deaf”, which such a user may utilize with great frequency compared to the general public; the pronunciation of this word is often confused with “death”, which might otherwise appear more frequently in the text streams as a mistranscription of the term “deaf” by certain ASR software. Accordingly, the user can help the algorithm identify which words they utilize more frequently than the general public. How much weight is given to any particular word could be at least partially controlled by the user. Furthermore, these preferences could change over time, daily or at other intervals. Priorities could range from above average to high priority, etc.
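The per-user weighting described above can be sketched as a simple rescoring step: the ASR produces several scored hypotheses, and a per-user priority table (never merged into any global model) boosts hypotheses containing the subscriber's preferred words. The table contents, boost values, and priority tiers are illustrative assumptions, not the applicant's actual weights.

```python
USER_PRIORITIES = {      # per-user table, kept out of the global model
    "deaf": 2.0,         # high priority: beats the near-homophone "death"
    "kupreanof": 1.5,    # above average: local place name
    "prophylaxis": 1.5,  # above average: routine medical term for this user
}


def rescore(hypotheses, priorities):
    """Return the hypothesis whose ASR score plus per-user word boosts
    is highest."""
    def score(hyp):
        text, asr_score = hyp
        boost = sum(priorities.get(w.lower(), 0.0) for w in text.split())
        return asr_score + boost
    return max(hypotheses, key=score)[0]


# The generic model slightly prefers the more common word "death"...
hypotheses = [
    ("I am death", 0.62),
    ("I am deaf", 0.58),
]
# ...but this user's boost for "deaf" flips the choice.
best = rescore(hypotheses, USER_PRIORITIES)  # "I am deaf"
```

User-controlled weights and time-varying preferences, as described above, would simply update the priority table between calls.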
Numerous alterations of the structure herein disclosed will suggest themselves to those skilled in the art. However, it is to be understood that the present disclosure relates to the preferred embodiment of the invention which is for purposes of illustration only and not to be construed as a limitation of the invention. All such modifications which do not depart from the spirit of the invention are intended to be included within the scope of the appended claims.
This application claims the benefit of U.S. Provisional Application No. 62/980,686 filed Feb. 24, 2020, and is a continuation-in-part of U.S. patent application Ser. No. 15/930,612 filed May 13, 2020, which claims the benefit of US Provisional Application Nos. 62/849,425 filed May 17, 2019, 62/851,918 filed May 23, 2019 and 62/854,774 filed May 30, 2019, all of which are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
5774857 | Newlin | Jun 1998 | A |
6266615 | Jin | Jul 2001 | B1 |
6332122 | Ortega | Dec 2001 | B1 |
7035804 | Saindon | Apr 2006 | B2 |
7822050 | DeGrazia | Oct 2010 | B2 |
7830408 | Asthana | Nov 2010 | B2 |
8140632 | Jablokov | Mar 2012 | B1 |
8407049 | Cromack | Mar 2013 | B2 |
8447285 | Bladon | May 2013 | B1 |
8593501 | Kjeldaas | Nov 2013 | B1 |
8825770 | Jablokov | Sep 2014 | B1 |
9374536 | Nola | Jun 2016 | B1 |
9436357 | Pallakoff | Sep 2016 | B2 |
9674351 | Mason | Jun 2017 | B1 |
9917947 | Charugundla | Mar 2018 | B2 |
9946842 | Stringham | Apr 2018 | B1 |
10389876 | Engelke | Aug 2019 | B2 |
10573312 | Thomson | Feb 2020 | B1 |
20020129106 | Gutfreund | Sep 2002 | A1 |
20060020962 | Stark | Jan 2006 | A1 |
20070011012 | Yurick | Jan 2007 | A1 |
20080215971 | Gillo | Sep 2008 | A1 |
20090276215 | Hager | Nov 2009 | A1 |
20100100376 | Harrington | Apr 2010 | A1 |
20100323728 | Gould | Dec 2010 | A1 |
20160360034 | Engelke | Dec 2016 | A1 |
20170085506 | Gordon | Mar 2017 | A1 |
20170201613 | Engelke | Jul 2017 | A1 |
20180034961 | Engelke | Feb 2018 | A1 |
20200007671 | Engelke | Jan 2020 | A1 |
20200243094 | Thomson | Jul 2020 | A1 |
20200366789 | Patron | Nov 2020 | A1 |
20220115020 | Bradley | Apr 2022 | A1 |
Number | Date | Country | |
---|---|---|---|
20210250441 A1 | Aug 2021 | US |
Number | Date | Country | |
---|---|---|---|
62980686 | Feb 2020 | US | |
62854774 | May 2019 | US | |
62851918 | May 2019 | US | |
62849425 | May 2019 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15930612 | May 2020 | US |
Child | 17184006 | US |