Semiautomated relay method and apparatus

Abstract
A call captioning system for captioning a hearing user's (HU's) voice signal during an ongoing call with an assisted user (AU) includes an AU communication device with a display screen and a caption service activation feature, and a first processor programmed to receive the HU's voice signal during an ongoing call. Prior to activation of the caption service via the activation feature, the processor uses an automated speech recognition (ASR) engine to generate HU voice signal captions, detects errors in the HU voice signal captions, uses the errors to train the ASR engine to the HU's voice signal to increase the accuracy of the HU captions generated by the ASR engine, and stores the trained ASR engine for subsequent use. Upon activation of the caption service during the ongoing call, the processor uses the trained ASR engine to generate HU voice signal captions and presents them to the AU via the display screen.
Description
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

Not applicable.


BACKGROUND OF THE INVENTION

The present invention relates to relay systems for providing voice-to-text captioning for hearing impaired users and more specifically to a relay system that uses automated voice-to-text software to transcribe a hearing user's voice signal to text.


Many people have at least some degree of hearing loss. For instance, in the United States, about 3 out of every 1000 people are functionally deaf and about 17 percent (36 million) of American adults report some degree of hearing loss, which typically gets worse as people age. Many people with hearing loss have developed ways to cope with how their loss affects their ability to communicate. For instance, many deaf people have learned to use their sight to compensate for hearing loss by either communicating via sign language or by reading another person's lips as they speak.


When it comes to remotely communicating using a telephone, unfortunately, there is no way for a hearing impaired person (e.g., an assisted user (AU)) to use sight to compensate for hearing loss, as conventional telephones do not enable an assisted user to see a person on the other end of the line (e.g., no lip reading or sign viewing). For persons with only partial hearing impairment, some simply turn up the volume on their telephones to try to compensate for their loss and can make do in most cases. For others with more severe hearing loss, conventional telephones cannot compensate for their loss and telephone communication is a poor option.


An industry has evolved for providing communication services to assisted users whereby voice communications from a person linked to an assisted user's communication device are transcribed into text and displayed on an electronic display screen for the assisted user to read during a communication session. In many cases the assisted user's device will also broadcast the linked person's voice substantially simultaneously with the display of the text so that an assisted user who has some ability to hear can use their hearing sense to discern most phrases and can refer to the text when some part of a communication is not understandable from what was heard.


U.S. Pat. No. 6,603,835 (hereinafter “the '835 patent”), titled “System For Text Assisted Telephony,” teaches several different types of relay systems for providing text captioning services to assisted users. One captioning service type is referred to as a single line system where a relay is linked between an AU's device and a telephone used by the person communicating with the AU. Hereinafter, unless indicated otherwise, the other person communicating with the assisted user will be referred to as a hearing user (HU) even though the AU may in fact be communicating with another assisted user. In single line systems, one line links the HU to the relay and one line (e.g., the single line) links the relay to the AU device. Voice from the HU is presented to a relay call assistant (CA) who transcribes the voice to text and then the text is transmitted to the AU device to be displayed. The HU's voice is also, in at least some cases, carried or passed through the relay to the AU device to be broadcast to the AU.


The other captioning service type described in the '835 patent is a two line system. In a two line system a hearing user's telephone is directly linked to an assisted user's device for voice communications between the AU and the HU. When captioning is required, the AU can select a captioning control button on the AU device to link to the relay and provide the HU's voice to the relay on a first line. Again, a relay CA listens to the HU voice message and transcribes the voice message into text which is transmitted back to the AU device on a second line to be displayed to the AU. One of the primary advantages of the two line system over one line systems is that the AU can add captioning to an ongoing call. This is important as many AUs are only partially impaired and may only want captioning when absolutely necessary. The option to forego captioning is also important in cases where an AU device can be used as a normal telephone and where non-assisted users (e.g., a spouse that has good hearing capability) who do not need captioning may also use the AU device.


With any relay system, the primary factors for determining the value of the system are accuracy, speed and cost to provide the service. Regarding accuracy, text should accurately represent voice messages from hearing users so that an AU reading the text has an accurate understanding of the meaning of the message. Erroneous words provide inaccurate messages and also can cause confusion for an AU reading the messages.


Regarding speed, ideally text is presented to an AU simultaneously with the voice message corresponding to the text so that an AU sees text associated with a message as the message is heard. In this regard, text that trails a voice message by several seconds can cause confusion. Current systems present captioned text relatively quickly (e.g., 1-3 seconds after the voice message is broadcast) most of the time. However, at times a CA can fall behind when captioning so that longer delays (e.g., 10-15 seconds) occur.


Regarding cost, existing systems require a unique and highly trained CA for each communication session. In known cases CAs need to be able to speak clearly and need to be able to type quickly and accurately. CA jobs are also relatively high pressure jobs and therefore turnover is relatively high when compared to jobs in many other industries, which further increases the costs associated with operating a relay.


One innovation that has increased captioning speed appreciably and that has reduced the costs associated with captioning at least somewhat has been the use of voice-to-text transcription software by relay CAs. In this regard, early relay systems required CAs to type all of the text presented via an AU device. To present text as quickly as possible after broadcast of an associated voice message, highly skilled typists were required. During normal conversations people routinely speak at a rate between 110 to 150 words per minute. During a conversation between an AU and an HU, typically only about half the words voiced have to be transcribed (e.g., the AU typically communicates to the HU during half of a session). This means that to keep up with transcribing the HU's portion of a typical conversation a CA has to be able to type at around 55 to 75 words per minute. By comparison, most professional typists type at around 50 to 80 words per minute and therefore can keep up with a normal conversation for at least some time. Professional typists are relatively expensive. In addition, despite being able to keep up with a conversation most of the time, at other times (e.g., during long conversations or during particularly high speed conversations) even professional typists fall behind when transcribing real time text and more substantial delays occur.


In relay systems that use voice-to-text transcription software trained to a CA's voice, a CA listens to an HU's voice and revoices the HU's voice message to a computer running the trained software. The software, being trained to the CA's voice, transcribes the revoiced message much more quickly than a typist can type text and with only minimal errors. In many respects revoicing techniques for generating text are easier and much faster to learn than high speed typing and therefore training costs and the general costs associated with CAs are reduced appreciably. In addition, because revoicing is much faster than typing in most cases, voice-to-text transcription can be expedited appreciably using revoicing techniques.


At least some prior systems have contemplated further reducing costs associated with relay services by replacing CAs with computers running voice-to-text software to automatically convert HU voice messages to text. In the past there have been several problems with this solution which have resulted in no one implementing a workable system. First, most voice messages (e.g., an HU's voice message) delivered over most telephone lines to a relay are not suitable for direct transcription by voice-to-text software. In this regard, automated transcription software on the market has been tuned to work well with a voice signal that includes a much larger spectrum of frequencies than the range used in typical phone communications. The frequency range of voice signals on phone lines is typically between 300 and 3000 Hz. Thus, automated transcription software does not work well with voice signals delivered over a telephone line and large numbers of errors occur. Accuracy further suffers where noise exists on a telephone line, which is a common occurrence.


Second, most automated transcription software has to be trained to the voice of a speaker to be accurate. When a new HU calls an AU's device, there is no way for a relay to have previously trained the software to the HU's voice and therefore the software cannot accurately generate text from the HU's voice messages.


Third, many automated transcription software packages use context in order to generate text from a voice message. To this end, the words around each word in a voice message can be used by software as context for determining which word has been uttered. To use words around a first word to identify the first word, the words around the first word have to be obtained. For this reason, many automated transcription systems wait to present transcribed text until after subsequent words in a voice message have been transcribed so that context can be used to correct prior words before presentation. Systems that hold off presenting text so that it can be corrected using subsequent context introduce delays in text presentation that are inconsistent with the relay system need for real time or close to real time text delivery.


BRIEF SUMMARY OF THE INVENTION

It has been recognized that a hybrid semi-automated system can be provided where, when acceptable accuracy can be achieved using automated transcription software, the system can automatically use the transcription software to transcribe HU voice messages to text and when accuracy is unacceptable, the system can patch in a human CA to transcribe voice messages to text. Here, it is believed that the number of CAs required at a large relay facility may be reduced appreciably (e.g., 30% or more) where software can accomplish a large portion of transcription to text. In this regard, not only is the automated transcription software getting better over time, in at least some cases the software may train to an HU's voice and the vagaries associated with voice messages received over a phone line (e.g., the limited 300 to 3000 Hz range) during a first portion of a call so that during a later portion of the call accuracy is particularly good. Training may occur while and in parallel with a CA manually (e.g., via typing, revoicing, etc.) transcribing voice-to-text and, once accuracy is at an acceptable threshold level, the system may automatically delink from the CA and use the text generated by the software to drive the AU display device.


It has been recognized that in a relay system there are at least two processors that may be capable of performing automated voice recognition processes and therefore that can handle the automated voice recognition part of a triage process involving a call assistant. To this end, in most cases either a relay processor or an assisted user's device processor may be able to perform the automated transcription portion of a hybrid process. For instance, in some cases an assisted user's device will perform automated transcription in parallel with a relay assistant generating call assistant generated text where the relay and the assisted user's device cooperate to provide text and assess when the call assistant should be cut out of a call with the automated text replacing the call assistant generated text.


In other cases where a hearing user's communication device is a computer or includes a processor capable of transcribing voice messages to text, a hearing user's device may generate automated text in parallel with a call assistant generating text and the hearing user's device and the relay may cooperate to provide text and determine when the call assistant should be cut out of the call.


Regardless of which device is performing automated captioning, the call assistant generated text may be used to assess accuracy of the automated text for the purpose of determining when the call assistant should be cut out of the call. In addition, regardless of which device is performing automated text captioning, the call assistant generated text may be used to train the automated voice-to-text software or engine on the fly to expedite the process of increasing accuracy until the call assistant can be cut out of the call.


It has also been recognized that there are times when a hearing impaired person is listening to a hearing user's voice without an assisted user's device providing simultaneous text and the hearing impaired person becomes confused and would like a transcription of the hearing user's recent voice messages. For instance, an assisted user may use an assisted user's device to carry on a non-captioned call and have difficulty understanding a voice message, so the assisted user initiates a captioning service to obtain text for subsequent voice messages. Here, while text is provided for subsequent messages, the assisted user still cannot obtain an understanding of the voice message that prompted initiation of captioning. As another instance, where call assistant generated text lags appreciably behind a current hearing user's voice message, an assisted user may request that the captioning catch up to the current message.


To provide captioning of recent voice messages in these cases, in at least some embodiments of this disclosure an assisted user's device stores a hearing user's voice messages and, when captioning is initiated or a catch up request is received, the recorded voice messages are used to either automatically generate text or to have a call assistant generate text corresponding to the recorded voice messages.


In at least some cases when automated software is trained to a hearing user's voice, a voice model for the hearing user that can be used subsequently to tune automated software to transcribe the hearing user's voice may be stored along with a voice profile for the hearing user that can be used to distinguish the hearing user's voice from other hearing users. Thereafter, when the hearing user calls an assisted user's device again, the profile can be used to identify the hearing user and the voice model can be used to tune the software so that the automated software can immediately start generating highly accurate or at least relatively more accurate text corresponding to the hearing user's voice messages.
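
By way of illustration only, the following Python sketch shows one way such a store of voice profiles and voice models could be organized; the class name VoiceModelStore, the cosine-similarity matching and the 0.85 match threshold are assumptions of this sketch rather than features required by the disclosure.

import math

class VoiceModelStore:
    """Hypothetical store pairing a hearing user's voice profile (here, a simple
    numeric feature vector) with a trained voice model for later reuse."""

    def __init__(self, match_threshold=0.85):
        self.entries = []                  # list of (profile_vector, voice_model) pairs
        self.match_threshold = match_threshold

    @staticmethod
    def _similarity(a, b):
        # Cosine similarity between two equal-length feature vectors.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def save(self, profile_vector, voice_model):
        # Store the trained model along with the profile used to recognize the speaker.
        self.entries.append((profile_vector, voice_model))

    def lookup(self, incoming_profile):
        # Return the stored model whose profile best matches the incoming voice,
        # or None if no stored profile is similar enough.
        best_model, best_score = None, 0.0
        for profile, model in self.entries:
            score = self._similarity(incoming_profile, profile)
            if score > best_score:
                best_model, best_score = model, score
        return best_model if best_score >= self.match_threshold else None

# Example: when a call connects, try to identify the hearing user and pre-tune the engine.
store = VoiceModelStore()
store.save([0.20, 0.71, 0.09], {"acoustic_adaptation": "..."})
model = store.lookup([0.21, 0.69, 0.11])   # returns the stored model if the profiles match

In a production system the simple vector comparison would be replaced by a proper speaker identification technique, but the lookup-then-tune structure is the point of the example.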


To the accomplishment of the foregoing and related ends, the invention, then, comprises the features hereinafter fully described. The following description and the annexed drawings set forth in detail certain illustrative aspects of the invention. However, these aspects are indicative of but a few of the various ways in which the principles of the invention can be employed. Other aspects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS


FIG. 1 is a schematic showing various components of a communication system including a relay that may be used to perform various processes and methods according to at least some aspects of the present invention;



FIG. 2 is a schematic of the relay server shown in FIG. 1;



FIG. 3 is a flow chart showing a process whereby an automated voice-to-text engine is used to generate automated text in parallel with a call assistant generating text where the automated text is used instead of call assistant generated text to provide captioning to an assisted user's device once an accuracy threshold has been exceeded;



FIG. 4 is a sub-process that may be substituted for a portion of the process shown in FIG. 3 whereby a call assistant can determine whether or not the automated text takes over the process after the accuracy threshold has been achieved;



FIG. 5 is a sub-process that may be added to the process shown in FIG. 3 wherein, upon an assisted user's requesting help, a call is linked to a second call assistant for correcting the automated text;



FIG. 6 is a process whereby an automated voice-to-text engine is used to fill in text for a hearing user's voice messages that are skipped over by a call assistant when an assisted user requests instantaneous captioning of a current message;



FIG. 7 is a process whereby automated text is automatically used to fill in captioning when transcription by a call assistant lags behind a hearing user's voice messages by a threshold duration;



FIG. 8 is a flow chart illustrating a process whereby text is generated for a hearing user's voice messages that precede a request for captioning services;



FIG. 9 is a flow chart illustrating a process whereby voice messages prior to a request for captioning service are automatically transcribed to text by an automated voice-to-text engine;



FIG. 10 is a flow chart illustrating a process whereby an assisted user's device processor performs transcription processes until a request for captioning is received at which point the assisted user's device presents text related to hearing user voice messages prior to the request and ongoing voice messages are transcribed via a relay;



FIG. 11 is a flow chart illustrating a process whereby an assisted user's device processor generates automated text for a hearing user's voice messages which is presented via a display to an assisted user and also transmits the text to a call assistant at a relay for correction purposes;



FIG. 12 is a flow chart illustrating a process whereby high definition digital voice messages and analog voice messages are handled differently at a relay;



FIG. 13 is a process similar to FIG. 12, albeit where an assisted user also has the option to link to a call assistant for captioning service regardless of the type of voice message received;



FIG. 14 is a flow chart that may be substituted for a portion of the process shown in FIG. 3 whereby voice models and voice profiles are generated for frequent hearing users that communicate with an assisted user where the models and profiles can be subsequently used to increase accuracy of a transcription process;



FIG. 15 is a flow chart illustrating a process similar to the sub-process shown in FIG. 14 where voice profiles and voice models are generated and stored for subsequent use during transcription;



FIG. 16 is a flow chart illustrating a sub-process that may be added to the process shown in FIG. 15 where the resulting process calls for training of a voice model at each of an assisted user's device and a relay;



FIG. 17 is a schematic illustrating a screen shot that may be presented via an assisted user's device display screen;



FIG. 18 is similar to FIG. 17, albeit showing a different screen shot;



FIG. 19 is a process that may be performed by the system shown in FIG. 1 where automated text is generated for line check words and is presented to an assisted user immediately upon identification of the words;



FIG. 20 is similar to FIG. 17, albeit showing a different screen shot; and



FIG. 21 is a flow chart illustrating a method whereby an automated voice-to-text engine is used to identify errors in call assistant generated text which can be highlighted and can be corrected by a call assistant.





While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.


DETAILED DESCRIPTION OF THE INVENTION

The various aspects of the subject invention are now described with reference to the annexed drawings, wherein like reference numerals correspond to similar elements throughout the several views. It should be understood, however, that the drawings and detailed description hereafter relating thereto are not intended to limit the claimed subject matter to the particular form disclosed. Rather, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the claimed subject matter.


As used herein, the terms “component,” “system” and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computer and the computer can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers or processors.


The word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs.


Furthermore, the disclosed subject matter may be implemented as a system, method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer or processor based device to implement aspects detailed herein. The term “article of manufacture” (or alternatively, “computer program product”) as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick). Additionally it should be appreciated that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN). Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.


Referring now to the drawings wherein like reference numerals correspond to similar elements throughout the several views and, more specifically, referring to FIG. 1, the present disclosure will be described in the context of an exemplary communication system 10 including an assisted user's communication device 12, a hearing user's telephone or other type of communication device 14, and a relay 16. The AU's device 12 is linked to the hearing user's device 14 via any network connection capable of facilitating a voice call between the AU and the HU. For instance, the link may be a conventional telephone line, a network connection such as an internet connection or other network, a wireless connection, etc. AU device 12 includes a keyboard 20, a display screen 18 and a handset 22. Keyboard 20 can be used to dial any telephone number to initiate a call and, in at least some cases, includes other keys or may be controlled to present virtual buttons via screen 18 for controlling various functions that will be described in greater detail below. Other identifiers such as IP addresses or the like may also be used in at least some cases to initiate a call. Screen 18 is a flat panel display screen for displaying, among other things, text transcribed from a voice message generated using HU's device 14, control icons or buttons, caption feedback signals, etc. Handset 22 includes a speaker for broadcasting a hearing user's voice messages to an assisted user and a microphone for receiving a voice message from an assisted user for delivery to the hearing user's device 14. Assisted user device 12 may also include a second loudspeaker so that device 12 can operate as a speaker phone type device. Although not shown, device 12 further includes a processor and a memory for storing software run by the processor to perform various functions that are consistent with at least some aspects of the present disclosure. Device 12 is also linked or is linkable to relay 16 via any communication network including a phone network, a wireless network, the internet or some other similar network, etc.


Hearing user's device 14 includes a communication device (e.g., a telephone) including a keyboard for dialing phone numbers and a handset including a speaker and a microphone for communication with other devices. In other embodiments device 14 may include a computer, a smart phone, a smart tablet, etc., that can facilitate audio communications with other devices. Devices 12 and 14 may use any of several different communication protocols including analog or digital protocols, a VOIP protocol or others.


Referring still to FIG. 1, relay 16 includes, among other things, a relay server 30 and a plurality of call assistant work stations 32, 34, etc. Each of the call assistant work stations 32, 34, etc., is similar and operates in a similar fashion and therefore only station 32 is described here in any detail. Station 32 includes a display screen 50, a keyboard 52 and a headphone/microphone headset 54. Screen 50 may be any type of electronic display screen for presenting information including text transcribed from a hearing user's voice signal or message. In most cases screen 50 will present a graphical user interface with on screen tools for editing text that appears on the screen. One text editing system is described in U.S. Pat. No. 7,164,753 which issued on Jan. 16, 2007, which is titled “Real Time Transcription Correction System” and which is incorporated herein by reference in its entirety.


Keyboard 52 is a standard text entry QWERTY type keyboard and can be used to type text or to correct text presented on display screen 50. Headset 54 includes a speaker in an ear piece and a microphone in a mouth piece and is worn by a call assistant. The headset enables a call assistant to listen to the voice of a hearing user and the microphone enables the call assistant to speak voice messages into the relay system such as, for instance, revoiced messages from a hearing user to be transcribed into text. For instance, typically during a call between a hearing user on device 14 and an assisted user on device 12, the hearing user's voice messages are presented to a call assistant via headset 54 and the call assistant revoices the messages into the relay system using headset 54. Software trained to the voice of the call assistant transcribes the assistant's voice messages into text which is presented on display screen 50. The call assistant then uses keyboard 52 and/or headset 54 to make corrections to the text on display 50. The corrected text is then transmitted to the assisted user's device 12 for display on screen 18. In the alternative, the text may be transmitted prior to correction to the assisted user's device 12 for display and corrections may be subsequently transmitted to correct the displayed text via in-line corrections where errors are replaced by corrected text.
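
As one illustrative way to implement the in-line correction alternative, the assisted user's device could track displayed words by position and apply correction messages that replace a span of words; the message format shown below (a start index, an end index and replacement words) and the class name CaptionView are assumptions of this sketch, not a defined protocol of the system.

class CaptionView:
    """Minimal model of the assisted user's caption display supporting in-line correction."""

    def __init__(self):
        self.words = []                        # words currently shown on screen 18

    def append_text(self, text):
        # Uncorrected text arrives first and is displayed immediately.
        self.words.extend(text.split())

    def apply_correction(self, start, end, replacement):
        # A later correction replaces the words from start (inclusive) to end (exclusive).
        self.words[start:end] = replacement.split()

    def rendered(self):
        return " ".join(self.words)

# Example: display uncorrected text, then replace one erroneous word in line.
view = CaptionView()
view.append_text("the doctor will tall you tomorrow")
view.apply_correction(3, 4, "call")            # replaces "tall" with "call"
print(view.rendered())                         # "the doctor will call you tomorrow"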


Although not shown, call assistant work station 32 may also include a foot pedal or other device for controlling the speed with which voice messages are played via headset 54 so that the call assistant can slow or even stop play of the messages while the assistant either catches up on transcription or correction of text.


Referring still to FIG. 1 and also to FIG. 2, server 30 is a computer system that includes, among other components, at least a first processor 56 linked to a memory or database 58 where software run by processor 56 to facilitate various functions that are consistent with at least some aspects of the present disclosure is stored. The software stored in memory 58 includes pre-trained call assistant voice-to-text transcription software 60 for each call assistant where call assistant specific software is trained to the voice of an associated call assistant thereby increasing the accuracy of transcription activities. For instance, Naturally Speaking continuous speech recognition software by Dragon, Inc. may be pre-trained to the voice of a specific call assistant and then used to transcribe voice messages voiced by the call assistant into text.


In addition to the call assistant trained software, a voice-to-text software program 62 that is not pre-trained to a CA's voice and that instead trains to any voice on the fly as voice messages are received is stored in memory 58. Again, Naturally Speaking software that can train on the fly may be used for this purpose.


Moreover, software 64 that automatically performs one of several different types of triage processes to generate text from voice messages accurately, quickly and in a relatively cost effective manner is stored in memory 58. The triage programs are described in detail hereafter.


One issue with existing relay systems is that each call is relatively expensive to facilitate. To this end, in order to meet required accuracy standards for text caption calls, each call requires a dedicated call assistant. While automated voice-to-text systems that would not require a call assistant have been contemplated, none has been implemented because of accuracy and speed problems.


One aspect of the present disclosure is related to a system that is semi-automated wherein a call assistant is used when accuracy of an automated system is not at required levels and the assistant is cut out of a call automatically or manually when accuracy of the automated system meets or exceeds accuracy standards. For instance, in at least some cases a call assistant will be assigned to every new call linked to a relay and the call assistant will transcribe voice-to-text as in an existing system. Here, however, the difference will be that, during the call, the voice of a hearing user will also be processed by server 30 to automatically transcribe the hearing user's voice messages to text (e.g., into “automated text”). Server 30 compares corrected text generated by the call assistant to the automated text to identify errors in the automated text. Server 30 uses identified errors to train the automated voice-to-text software to the voice of the hearing user. During the beginning of the call the software trains to the hearing user's voice and accuracy increases over time as the software trains. At some point the accuracy increases until required accuracy standards are met. Once accuracy standards are met, server 30 is programmed to automatically cut out the call assistant and start transmitting the automated text to the assisted user's device 12.


In at least some cases, when a call assistant is cut out of a call, the system may provide a “Help” or an “Assist” or “Assistance Request” type button (see 68 in FIG. 1) to an assisted user so that, if the assisted user recognizes that the automated text has too many errors for some reason, the assisted user can request a link to a call assistant to increase transcription accuracy (e.g., generate an assistance request). In some cases the help button may be a persistent mechanical button on the assisted user's device 12. In the alternative the help button may be a virtual on screen icon (e.g., see 68 in FIG. 1) and screen 18 may be a touch sensitive screen so that contact with the virtual button can be sensed. Where the help button is virtual, the button may only be presented after the system switches from providing call assistant generated text to an assisted user's device to providing automated text to the assisted user's device to avoid confusion (e.g., to avoid a case where an assisted user is already receiving call assistant generated text but thinks, because of a help button, that even better accuracy can be achieved in some fashion). Thus, while call assistant generated text is displayed on an assisted user's device 12, no “help” button is presented and after automated text is presented, the “help” button is presented. After the help button is selected and a call assistant is re-linked to the call, the help button is again removed from the assisted user's device display 18 to avoid confusion.


Referring now to FIGS. 2 and 3, a method or process 70 is illustrated that may be performed by server 30 to cut out a call assistant when automated text reaches an accuracy level that meets a standard threshold level. Referring also to FIG. 1, at block 72, help and auto flags are each set to a zero value. The help flag indicates that an assisted user has selected a help or assist button via the assisted user's device 12 because of a perception that too many errors are occurring in the transcribed text. The auto flag indicates that automated text accuracy has exceeded a standard threshold requirement. Zero values indicate that the help button has not been selected and that the standard requirement has yet to be met; values of one indicate that the button has been selected and that the standard requirement has been met.


Referring still to FIGS. 1 and 2, at block 74, during a phone call between a hearing user using device 14 and an assisted user using device 12, the hearing user's voice messages are transmitted to server 30 at relay 16. Upon receiving the hearing user's voice messages, server 30 checks the auto and help flags at blocks 76 and 84, respectively. At least initially the auto flag will be set to zero at block 76 meaning that automated text has not reached the accuracy standard requirement and therefore control passes down to block 78 where the hearing user's voice messages are provided to a call assistant. At block 80 the call assistant listens to the hearing user's voice messages and generates corresponding text by typing the messages, revoicing the messages to voice-to-text transcription software trained to the call assistant's voice, or a combination of both. The text generated is presented on screen 50 and the call assistant makes corrections to the text using keyboard 52 and/or headset 54 at block 80. At block 82 the call assistant generated text is transmitted to assisted user device 12 to be displayed for the assisted user on screen 18.


Referring again to FIGS. 1 and 2, at block 84, at least initially the help flag will be set to zero indicating that the assisted user has not requested additional captioning assistance. In fact, at least initially the “help” button 68 may not be presented to an assisted user as call assistant generated text is initially presented. Where the help flag is zero at block 84, control passes to block 86 where the hearing user's voice messages are fed to voice-to-text software run by server 30 that has not been previously trained to any particular voice. At block 88 the software automatically converts the hearing user's voice to text, generating automated text. At block 90 server 30 compares the call assistant generated text to the automated text to identify errors in the automated text. At block 92 server 30 uses the errors to train the voice-to-text software for the hearing user's voice. In this regard, for instance, where an error is identified, server 30 modifies the software so that the next time the utterance that resulted in the error occurs, the software will generate the word or words that the call assistant generated for the utterance. Other ways of altering or training the voice-to-text software are well known in the art and any way of training the software may be used at block 92.
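
A minimal sketch of the comparison at block 90 is given below, treating the call assistant generated text as ground truth and using word-level alignment from the Python standard library's difflib module; the function name and the idea of returning (wrong, correct) word pairs for the training step of block 92 are assumptions of this sketch, not a required implementation.

import difflib

def find_automated_text_errors(ca_words, auto_words):
    """Align automated text against call assistant generated text (treated as the
    accurate reference) and return (erroneous_auto_words, correct_ca_words) pairs."""
    errors = []
    matcher = difflib.SequenceMatcher(a=auto_words, b=ca_words, autojunk=False)
    for tag, a0, a1, b0, b1 in matcher.get_opcodes():
        if tag != "equal":                       # substitution, insertion, or deletion
            errors.append((auto_words[a0:a1], ca_words[b0:b1]))
    return errors

ca_text = "please call the pharmacy before noon".split()
auto_text = "please fall the pharmacy before new".split()
for wrong, right in find_automated_text_errors(ca_text, auto_text):
    # Each pair can drive the adaptation of block 92 so that the utterance that
    # produced `wrong` is transcribed as `right` the next time it occurs.
    print(wrong, "->", right)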


After block 92 control passes to block 94 where server 30 monitors for a selection of the “help” button 68 by the assisted user. If the help button has not been selected, control passes to block 96 where server 30 compares the accuracy of the automated text to a threshold standard accuracy requirement. For instance, the standard requirement may require that accuracy be greater than 96% measured over at least a most recent forty-five second period or a most recent 100 words uttered by a hearing user, whichever is longer. Where accuracy is below the threshold requirement, control passes back up to block 74 where the process described above continues. At block 96, once the accuracy is greater than the threshold requirement, control passes to block 98 where the auto flag is set to one indicating that the system should start using the automated text and delink the call assistant from the call to free up the assistant to handle a different call. A virtual “help” button may also be presented via the assisted user's display 18 at this time. Next, at block 100, the call assistant is delinked from the call and at block 102 the processor generated automated text is transmitted to the AU device to be presented on display screen 18.


Referring again to block 74, the hearing user's voice is continually received during a call and at block 76, once the auto flag has been set to one, the lower portion of the left hand loop including blocks 78, 80 and 82 is cut out of the process as control loops back up to block 74.


Referring again to block 94, if, during an automated portion of a call when automated text is being presented to the assisted user, the assisted user decides that there are too many errors in the transcription presented via display 18 and the assisted user selects the “help” button 68 (see again FIG. 1), control passes to block 104 where the help flag is set to one indicating that the assisted user has requested the assistance of a call assistant and the auto flag is reset to zero indicating that call assistant generated text will be used to drive the assisted user's display 18 instead of the automated text. Thereafter control passes back up to block 74. Again, with the auto flag set to zero, the next time through decision block 76 control passes back down to block 78 where the call is again linked to a call assistant for transcription as described above. In addition, the next time through block 84, because the help flag is set to one, control passes back up to block 74 and the automated text loop including blocks 86 through 104 is effectively cut out of the rest of the call.
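
The control flow of blocks 72 through 104 can be summarized in the following Python-style sketch; the call, ca, asr and au_device objects, their method names, and the compare helper are placeholders standing in for the operations described above, and the 0.96 threshold simply reuses the example accuracy requirement, so the sketch shows structure only and is not an interface of any particular relay product.

ACCURACY_THRESHOLD = 0.96          # e.g., measured over the most recent 45 seconds or 100 words

def triage_call(call, ca, asr, au_device):
    help_flag, auto_flag = False, False                    # block 72
    while call.is_active():
        segment = call.next_hu_voice_segment()             # block 74
        ca_text = None
        if not auto_flag:                                  # block 76
            ca_text = ca.transcribe_and_correct(segment)   # blocks 78-80
            au_device.display(ca_text)                     # block 82
        if help_flag:                                      # block 84: call assistant only from here on
            continue
        auto_text = asr.transcribe(segment)                # blocks 86-88
        if auto_flag:
            au_device.display(auto_text)                   # block 102
        elif ca_text is not None:
            asr.train(compare(ca_text, auto_text))         # blocks 90-92
        if au_device.help_requested():                     # block 94
            help_flag, auto_flag = True, False             # block 104
        elif not auto_flag and asr.recent_accuracy() >= ACCURACY_THRESHOLD:   # block 96
            auto_flag = True                               # block 98
            ca.delink(call)                                # block 100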


In at least some embodiments, there will be a short delay (e.g., 5 to 10 seconds in most cases) between setting the flags at block 104 and stopping use of the automated text so that a new call assistant can be linked up to the call and start generating call assistant generated text prior to halting the automated text. In these cases, until the call assistant is linked and generating text for at least a few seconds (e.g., 3 seconds), the automated text will still be used to drive the assisted user's display 18. The delay may either be a pre-defined delay or may have a case specific duration that is determined by server 30 monitoring call assistant generated text and switching over to the call assistant generated text once the call assistant is up to speed.


In some embodiments, prior to delinking a call assistant from a call at block 100, server 30 may store a call assistant identifier along with a call identifier for the call. Thereafter, if an assisted user requests help at block 94, server 30 may be programmed to identify if the call assistant previously associated with the call is available (e.g., not handling another call) and, if so, may re-link to that call assistant at block 78. In this manner, if possible, a call assistant that has at least some context for the call can be linked up to restart transcription services.


In some embodiments it is contemplated that after an assisted user has selected a help button to receive call assistance, the call will be completed with a call assistant on the line. In other cases it is contemplated that server 30 may, when a call assistant is re-linked to a call, start a second triage process to attempt to delink the call assistant a second time if a threshold accuracy level is again achieved. For instance, in some cases, midstream during a call, a second hearing user may start communicating with the assisted user via the hearing user's device. For instance, a child may yield the hearing user's device 14 to a grandchild that has a different voice profile, causing the assisted user to request help from a call assistant because of perceived text errors. Here, after the hand back to the call assistant, server 30 may start training on the grandchild's voice and may eventually achieve the threshold level required. Once the threshold is again met, the call assistant may be delinked a second time so that automated text is again fed to the assisted user's device.


As another example, errors in automated text may be caused by temporary noise in one or more of the lines carrying the hearing user's voice messages to relay 16. Here, once the noise clears up, automated text may again be a suitable option. Thus, here, after an assisted user requests call assistant help, the triage process may again commence and if the threshold accuracy level is again exceeded, the call assistant may be delinked and the automated text may again be used to drive the assisted user's device 12. While the threshold accuracy level may be the same each time through the triage process, in at least some embodiments the accuracy level may be changed each time through the process. For instance, the first time through the triage process the accuracy threshold may be 96%. The second time through the triage process the accuracy threshold may be raised to 98%.


In at least some embodiments, when the automated text accuracy exceeds the standard accuracy threshold, there may be a short transition time during which a call assistant on a call observes automated text while listening to a hearing user's voice message to manually confirm that the handover from call assistant generated text to automated text is smooth. During this short transition time, for instance, the call assistant may watch the automated text on her workstation screen 50 and may correct any errors that occur during the transition. In at least some cases, if the call assistant perceives that the handoff does not work or the quality of the automated text is poor for some reason, the call assistant may opt to retake control of the transcription process.


One sub-process 120 that may be added to the process shown in FIG. 3 for managing a call assistant to automated text handoff is illustrated in FIG. 4. Referring also to FIGS. 1 and 2, at block 96 in FIG. 3, if the accuracy of the automated text exceeds the accuracy standard threshold level, control may pass to block 122 in FIG. 4. At block 122, a short duration transition timer (e.g., 10-15 seconds) is started. At block 124 automated text (e.g., text generated by feeding the hearing user's voice messages directly to voice-to-text software) is presented on the call assistant's display 50. At block 126 an on-screen “Retain Control” icon or virtual button is provided to the call assistant via the assistant's display screen 50 which can be selected by the call assistant to forego the handoff to the automated voice-to-text software. At block 128, if the “Retain Control” icon is selected, control passes to block 132 where the help flag is set to one and then control passes back up to block 76 in FIG. 3 where the call assistant process for generating text continues as described above. At block 128, if the call assistant does not select the “Retain Control” icon, control passes to block 130 where the transition timer is checked. If the transition timer has not timed out control passes back up to block 124. Once the timer times out at block 130, control passes back to block 98 in FIG. 3 where the auto flag is set to one and the call assistant is delinked from the call.
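
For illustration only, the transition of blocks 122 through 132 could be handled along the following lines; the ten second transition period is one example value within the range given above, and the call, ca_station, asr and retain_control_selected names are placeholders assumed for this sketch rather than defined interfaces.

import time

TRANSITION_S = 10.0           # example short duration transition period (block 122)

def handoff_with_transition(call, ca_station, asr, retain_control_selected):
    """Show automated text to the call assistant for a short transition period; if the
    assistant selects "Retain Control" the handoff is abandoned, otherwise the assistant
    is delinked and automated text takes over."""
    deadline = time.time() + TRANSITION_S                  # start the transition timer (block 122)
    while time.time() < deadline:                          # block 130: has the timer expired?
        segment = call.next_hu_voice_segment()
        ca_station.display(asr.transcribe(segment))        # block 124 (block 126 shows the icon)
        if retain_control_selected():                      # block 128
            return "ca_retained"                           # block 132: help flag set, assistant keeps the call
    ca_station.delink(call)                                # timer expired: complete the handoff
    return "automated"                                     # block 98: auto flag set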


In at least some embodiments it is contemplated that after voice-to-text software takes over the transcription task and the call assistant is delinked from a call, server 30 itself may be programmed to sense when transcription accuracy has degraded substantially and the server 30 may cause a re-link to a call assistant to increase accuracy of the text transcription. For instance, server 30 may assign a confidence factor to each word in the automated text based on how confident the server is that the word has been accurately transcribed. The confidence factors over a most recent number of words (e.g., 100) or a most recent period (e.g., 45 seconds) may be averaged and the average used to assess an overall confidence factor for transcription accuracy. Where the confidence factor is below a threshold level, server 30 may re-link to a call assistant to increase transcription accuracy. The automated process for re-linking to a call assistant may be used instead of or in addition to the process described above whereby an assisted user selects the “help” button to re-link to a call assistant.
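
As a minimal sketch under the assumptions stated above (a per-word confidence value between 0 and 1 reported by the engine and a window of roughly the most recent 100 words or 45 seconds), the rolling confidence check might look like the following Python; the exact window policy, the 0.90 re-link threshold and the class name are design choices of this sketch, not requirements of the disclosure.

import time
from collections import deque

class ConfidenceMonitor:
    """Tracks per-word transcription confidence and signals when server 30
    should re-link a call assistant to the call."""

    def __init__(self, max_words=100, max_age_s=45.0, relink_threshold=0.90):
        self.window = deque()              # (timestamp, confidence) pairs, newest last
        self.max_words = max_words
        self.max_age_s = max_age_s
        self.relink_threshold = relink_threshold

    def add_word(self, confidence, now=None):
        now = time.time() if now is None else now
        self.window.append((now, confidence))
        # Trim to the most recent max_words words that are no older than max_age_s.
        while self.window and (len(self.window) > self.max_words
                               or (now - self.window[0][0]) > self.max_age_s):
            self.window.popleft()

    def should_relink(self):
        # Average confidence over the window; a low average suggests degraded accuracy.
        if not self.window:
            return False
        average = sum(c for _, c in self.window) / len(self.window)
        return average < self.relink_threshold

# Example: feed word confidences as the engine emits them and poll the monitor.
monitor = ConfidenceMonitor()
for conf in (0.97, 0.94, 0.62, 0.55, 0.58):
    monitor.add_word(conf)
print(monitor.should_relink())             # True once the rolling average falls below 0.90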


In at least some cases when an assisted user selects a “help” button to re-link to a call assistant, partial call assistance may be provided instead of full call assistant service. For instance, instead of adding a call assistant that transcribes a hearing user's voice messages and then corrects errors, a call assistant may be linked only for correction purposes. The idea here is that while software trained to a hearing user's voice may generate some errors, the number of errors after training will still be relatively small in most cases even if objectionable to an assisted user. In at least some cases call assistants may be trained to have different skill sets where highly skilled and relatively more expensive to retain call assistants are trained to re-voice hearing user voice messages and correct the resulting text and less skilled call assistants are trained to simply make corrections to automated text. Here, initially all calls may be routed to highly skilled revoicing or “transcribing” call assistants and all re-linked calls may be routed to less skilled “corrector” call assistants.


A sub-process 134 that may be added to the process of FIG. 3 for routing re-linked calls to a corrector call assistant is shown in FIG. 5. Referring also to FIGS. 1 and 3, at decision block 94, if an assisted user selects the help button, control may pass to block 136 in FIG. 5 where the call is linked to a second corrector call assistant. At block 138 the automated text is presented to the second call assistant via the call assistant's display 50. At block 140 the second call assistant listens to the voice of the hearing user and observes the automated text and makes corrections to errors perceived in the text. At block 142 server 30 transmits the corrected automated text to the assisted user's device for display via screen 18. After block 142 control passes back up to block 76 in FIG. 3.


In some cases where a call assistant generates text that drives an assisted user's display screen 18 (see again FIG. 1), for one reason or another the call assistant's transcription to text may fall behind the hearing user's voice message stream by a substantial amount. For instance, where a hearing user is speaking quickly, is using odd vocabulary, and/or has an unusual accent that is hard to understand, call assistant transcription may fall behind a voice message stream by 20 seconds, 40 seconds or more.


In many cases when captioning falls behind, an assisted user can perceive that presented text has fallen far behind broadcast voice messages from a hearing user based on memory of recently broadcast voice message content and observed text. For instance, an assisted user may recognize that currently displayed text corresponds to a portion of the broadcast voice message that occurred thirty seconds ago. In other cases some captioning delay indicator may be presented via an assisted user's device display 18. For instance, see FIG. 17 where captioning delay is indicated in two different ways on a display screen 18. First, text 212 indicates an estimated delay in seconds (e.g., 24 second delay). Second, at the end of already transcribed text 214, blanks 216 for words already voiced but yet to be transcribed may be presented to give an assisted user a sense of how delayed the captioning process has become.
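
For illustration, the two delay indications of FIG. 17 could be produced along the following lines; the assumed average speaking rate of 130 words per minute (used only to estimate how many blanks to draw) and the helper names are assumptions of this sketch rather than features of the disclosure.

AVG_WORDS_PER_MINUTE = 130        # assumed conversational speaking rate for the estimate

def caption_delay_seconds(live_audio_time_s, last_captioned_audio_time_s):
    # The delay is how far the most recently captioned audio trails the live audio.
    return max(0.0, live_audio_time_s - last_captioned_audio_time_s)

def delay_indicator(delay_s, max_blanks=20):
    # Build the two indications of FIG. 17: an estimated delay in seconds (item 212)
    # and a run of blanks for words already voiced but not yet captioned (item 216).
    pending_words = int(delay_s * AVG_WORDS_PER_MINUTE / 60.0)
    blanks = " ".join("____" for _ in range(min(pending_words, max_blanks)))
    return "(%d second delay) %s" % (round(delay_s), blanks)

print(delay_indicator(caption_delay_seconds(92.0, 68.0)))   # e.g., "(24 second delay) ____ ____ ..."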


When an assisted user perceives that captioning is too far behind or when the user cannot understand a recently broadcast voice message, the assisted user may want the text captioning to skip ahead to the currently broadcast voice message. For instance, if an assisted user had difficulty hearing the most recent five seconds of a hearing user's voice message and continues to have difficulty hearing but generally understood the preceding 25 seconds, the assisted user may want the captioning process to be re-synced with the current hearing user's voice message so that the assisted user's understanding of current words is accurate.


Here, however, because the assisted user could not understand the most recent 5 seconds of broadcast voice message, a re-sync with the current voice message would leave the assisted user with at least some void in understanding the conversation (e.g., at least the most recent 5 seconds of misunderstood voice message would be lost). To deal with this issue, in at least some embodiments, it is contemplated that server 30 may run automated voice-to-text software on a hearing user's voice message simultaneously with a call assistant generating text from the voice message. When an assisted user requests a “catch-up” or “re-sync” of the transcription process to the current voice message, server 30 may provide “fill in” automated text corresponding to the portion of the voice message between the most recent call assistant generated text and the instantaneous voice message, which may be provided to the assisted user's device for display and also, optionally, to the call assistant's display screen to maintain context for the call assistant. In this case, while the fill in automated text may have some errors, the fill in text will be better than no text for the associated period and can be referred to by the assisted user to better understand the voice messages.


In cases where the fill in text is presented on the call assistant's display screen, the call assistant may correct any errors in the fill in text. This correction and any error correction by a call assistant for that matter may be made prior to transmitting text to the assisted user's device or subsequent thereto. Where corrected text is transmitted to an assisted user's device subsequent to transmission of the original error prone text, the assisted user's device corrects the errors by replacing the erroneous text with the corrected text.


Because it is often the case that assisted users will request a re-sync only when they have difficulty understanding words, server 30 may only present automated fill in text to an assisted user corresponding to a pre-defined duration period (e.g., 8 seconds) that precedes the time when the re-sync request occurs. For instance, consistent with the example above where call assistant captioning falls behind by thirty seconds, an assisted user may only request re-sync at the end of the most recent five seconds as inability to understand the voice message may only be an issue during those five seconds. By presenting the most recent eight seconds of automated text to the assisted user, the user will have the chance to read text corresponding to the misunderstood voice message without being inundated with a large segment of automated text to view. Where automated fill in text is provided to an assisted user for only a pre-defined duration period, the same text may be provided for correction to the call assistant.


Referring now to FIG. 6, a method 190 by which an assisted user requests a re-sync of the transcription process to current voice messages when call assistant generated text falls behind current voice messages is illustrated. Referring also to FIG. 1, at block 192 a hearing user's voice messages are received at relay 16. After block 192, control passes down to each of blocks 194 and 200 where two simultaneous sub-processes occur in parallel. At block 194, the hearing user's voice messages are stored in a rolling buffer. The rolling buffer may, for instance, have a two minute duration so that the most recent two minutes of a hearing user's voice messages are always stored. At block 196, a call assistant listens to the hearing user's voice message and transcribes text corresponding to the messages via re-voicing to software trained to the call assistant's voice, typing, etc. At block 198 the call assistant generated text is transmitted to assisted user's device 12 to be presented on display screen 18 after which control passes back up to block 192. Text correction may occur at block 196 or after block 198.


Referring again to FIG. 6, at process block 200, the hearing user's voice is fed directly to voice-to-text software run by server 30 which generates automated text at block 202. Although not shown in FIG. 6, after block 202, server 30 may compare the automated text to the call assistant generated text to identify errors and may use those errors to train the software to the hearing user's voice so that the automated text continues to get more accurate as a call proceeds.


Referring still to FIGS. 1 and 6, at decision block 204, server 30 monitors for a catch up or re-sync command received via the assisted user's device 12 (e.g., via selection of an on-screen virtual “catch up” button 220, see again FIG. 17). Where no catch up or re-sync command has been received, control passes back up to block 192 where the process described above continues to cycle. At block 204, once a re-sync command has been received, control passes to block 206 where the buffered voice messages are skipped and a current voice message is presented to the ear of the call assistant to be transcribed. At block 208 the automated text corresponding to the skipped voice message segment is filled in to the text on the call assistant's screen for context and at block 210 the fill in text is transmitted to the assisted user's device for display.
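
A condensed sketch of the rolling buffer and re-sync handling of blocks 194 through 210 follows; the two minute buffer duration and the eight second fill-in window mirror the example figures used in this description, while the class and method names are placeholders assumed for the sketch.

from collections import deque

BUFFER_DURATION_S = 120.0     # rolling buffer holds the most recent two minutes (block 194)
FILL_IN_WINDOW_S = 8.0        # example pre-defined period of fill-in text preceding a re-sync request

class RollingVoiceBuffer:
    def __init__(self):
        self.segments = deque()                      # (timestamp_s, audio_segment) pairs

    def add(self, timestamp_s, segment):
        self.segments.append((timestamp_s, segment))
        # Discard anything older than the buffer duration.
        while self.segments and timestamp_s - self.segments[0][0] > BUFFER_DURATION_S:
            self.segments.popleft()

    def catch_up(self, request_time_s):
        """Handle a re-sync request (block 204): return the buffered segments in the
        fill-in window so automated text can be generated for them (block 208), and
        clear the backlog so the call assistant resumes at the current message (block 206)."""
        fill_in = [segment for t, segment in self.segments
                   if request_time_s - FILL_IN_WINDOW_S <= t <= request_time_s]
        self.segments.clear()
        return fill_in

# Example: buffer arriving segments, then service a catch-up request at t = 60 seconds.
buffer = RollingVoiceBuffer()
for t in range(0, 60, 2):
    buffer.add(float(t), "audio@%ds" % t)
print(buffer.catch_up(60.0))                         # segments from roughly t = 52 s to t = 60 s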


Where automated text is filled in upon the occurrence of a catch up process, the fill in text may be visually distinguished on the assisted user's screen and/or on the call assistant's screen. For instance, fill in text may be highlighted, underlined, bolded, shown in a distinct font, etc. For example, see FIG. 18 which shows fill in text 222 that is underlined to visually distinguish it. See also that the captioning delay 212 has been updated. In some cases, fill in text corresponding to voice messages that occur after or within some pre-defined period prior to a re-sync request may be distinguished in yet a third way to point out the text corresponding to the portion of a voice message that the assisted user most likely found interesting (e.g., the portion that prompted selection of the re-sync button). For instance, where 24 previous seconds of text are filled in when a re-sync request is initiated, all 24 seconds of fill in text may be underlined and the 8 seconds of text prior to the re-sync request may also be highlighted in yellow. See in FIG. 18 that some of the fill in text is shown in a phantom box 226 to indicate highlighting.


In at least some cases it is contemplated that server 30 may be programmed to automatically determine when call assistant generated text substantially lags a current voice message from a hearing user and server 30 may automatically skip ahead to re-sync a call assistant with a current message while providing automated fill in text corresponding to intervening voice messages. For instance, server 30 may recognize when call assistant generated text is more than thirty seconds behind a current voice message and may skip the voice messages ahead to the current message while filling in automated text to fill the gap. In at least some cases this automated skip ahead process may only occur after at least some (e.g., 2 minutes) training to a hearing user's voice to ensure that minimal errors are generated in the fill in text.


A method 150 for automatically skipping to a current voice message in a buffer when a call assistant falls too far behind is shown in FIG. 7. Referring also to FIG. 1, at block 152, a hearing user's voice messages are received at relay 16. After block 152, control passes down to each of blocks 154 and 162 where two simultaneous sub-processes occur in parallel. At block 154, the hearing user's voice messages are stored in a rolling buffer. At block 156, a call assistant listens to the hearing user's voice message and transcribes text corresponding to the messages via re-voicing to software trained to the call assistant's voice, typing, etc., after which control passes to block 170.


Referring still to FIG. 7, at process block 162, the hearing user's voice is fed directly to voice-to-text software run by server 30 which generates automated text at block 164. Although not shown in FIG. 7, after block 164, server 30 may compare the automated text to the call assistant generated text to identify errors and may use those errors to train the software to the hearing user's voice so that the automated text continues to get more accurate as a call proceeds.


Referring still to FIGS. 1 and 7, at decision block 166, controller 30 monitors how far call assistant text transcription is behind the current voice message and compares that value to a threshold value. If the delay is less than the threshold value, control passes down to block 170. If the delay exceeds the threshold value, control passes to block 168 where server 30 uses automated text from block 164 to fill in the call assistant generated text and skips the call assistant up to the current voice message. After block 168 control passes to block 170. At block 170, the text including the call assistant generated text and the fill in text is presented to the call assistant via display screen 50 and the call assistant makes any corrections to observed errors. At block 172, the text is transmitted to assisted user's device 12 and is displayed on screen 18. Again, uncorrected text may be transmitted to and displayed on device 12 and corrected text may be subsequently transmitted and used to correct errors in the prior text in line on device 12. After block 172 control passes back up to block 152 where the process described above continues to cycle.
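A minimal sketch of the lag test at decision block 166 appears below; the thirty second threshold follows the example above, while the position bookkeeping and helper objects are assumptions made for illustration.

```python
# Illustrative sketch only; `buffer` and `asr_engine` are assumed helper objects.
def check_lag_and_skip(ca_position_s, current_position_s, buffer, asr_engine,
                       threshold_s=30.0):
    """Return automated fill in text when the call assistant lags too far, else None."""
    lag_s = current_position_s - ca_position_s
    if lag_s <= threshold_s:
        return None  # Delay is acceptable; normal call assistant transcription continues.
    # Lag exceeds the threshold: transcribe the skipped audio automatically and
    # advance the call assistant to the current voice message (block 168).
    skipped_audio = buffer.skip_to_current()
    return asr_engine.transcribe(b"".join(skipped_audio))
```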


Many assisted user's devices can be used as conventional telephones without captioning service or as assisted user devices where captioning is presented and voice messages are broadcast to an assisted user. The idea here is that one device can be used by hearing impaired persons and persons that have no hearing impairment and that the overall costs associated with providing captioning service can be minimized by only using captioning when necessary. In many cases even a hearing impaired person may not need captioning service all of the time. For instance, a hearing impaired person may be able to hear the voice of a person that speaks loudly fairly well but may not be able to hear the voice of another person that speaks more softly. In this case, captioning would be required when speaking to the person with the soft voice but may not be required when speaking to the person with the loud voice. As another instance, an impaired person may hear better when well rested but hear relatively more poorly when tired so captioning is required only when the person is tired. As still another instance, an impaired person may hear well when there is minimal noise on a line but may hear poorly if line noise exceeds some threshold. Again, the impaired person would only need captioning some of the time.


To minimize captioning service costs and still enable an impaired person to obtain captioning service whenever needed and even during an ongoing call, some systems start out all calls with a default setting where an assisted user's device 12 is used like a normal telephone without captioning. At any time during an ongoing call, an assisted user can select either a mechanical or virtual “Caption” icon or button (see again 68 in FIG. 1) to link the call to a relay, provide a hearing user's voice messages to the relay and commence captioning service. One problem with starting captioning only after an assisted user experiences problems hearing words is that at least some words (e.g., words that prompted the assisted user to select the caption button in the first place) typically go unrecognized and therefore the assisted user is left with a void in their understanding of a conversation.


One solution to the problem of lost meaning when words are not understood just prior to selection of a caption button is to store a rolling recordation of a hearing user's voice messages that can be transcribed subsequently when the caption button is selected to generate “fill in” text. For instance, the most recent 20 seconds of a hearing user's voice messages may be recorded and then transcribed only if the caption button is selected. The relay generates text for the recorded message either automatically via software or via revoicing or typing by a call assistant or a combination of both. In addition, the call assistant starts transcribing current voice messages. The text from the recording and the real time messages is transmitted to and presented via assisted user's device 12 which should enable the assisted user to determine the meaning of the previously misunderstood words. In at least some embodiments the rolling recordation of hearing user's voice messages may be maintained by the assisted user's device 12 (see again FIG. 1) and that recordation may be sent to the relay for immediate transcription upon selection of the caption button.


Referring now to FIG. 8, a process 230 that may be performed by the system of FIG. 1 to provide captioning for voice messages that occur prior to a request for captioning service is illustrated. Referring also to FIG. 1, at block 232 a hearing user's voice messages are received during a call with an assisted user at the assisted user's device 12. At block 234 the assisted user's device 12 stores a most recent 20 seconds of the hearing user's voice messages on a rolling basis. The 20 seconds of voice messages are stored without captioning initially. At decision block 236, the assisted user's device monitors for selection of a captioning button (not shown). If the captioning button has not been selected, control passes back up to block 232 where blocks 232, 234 and 236 continue to cycle.


Once the caption button has been selected, control passes to block 238 where assisted user's device 12 establishes a communication link to relay 16. At block 240 assisted user's device 12 transmits the stored 20 seconds of the hearing user's voice messages along with current ongoing voice messages from the hearing user to relay 16. At this point a call assistant and/or software at the relay transcribes the voice-to-text, corrections are made (or not), and the text is transmitted back to device 12 to be displayed. At block 242 assisted user's device 12 receives the captioned text from the relay 16 and at block 244 the received text is displayed or presented on the assisted user's device display 18. At block 246, in at least some embodiments, text corresponding to the 20 seconds of hearing user voice messages prior to selection of the caption button may be visually distinguished (e.g., highlighted, bold, underlined, etc.) from other text in some fashion. After block 246 control passes back up to block 232 where the process described above continues to cycle.


Referring to FIG. 9, a relay server process 270 whereby automated software transcribes voice messages that occur prior to selection of a caption button and a call assistant at least initially captions current voice messages is illustrated. At block 272, after an assisted user requests captioning service by selecting a caption button, server 30 receives a hearing user's voice messages including current ongoing messages as well as the most recent 20 seconds of voice messages that had been stored by assisted user's device 12 (see again FIG. 1). After block 272, control passes to each of blocks 274 and 278 where two simultaneous processes commence in parallel. At block 274 the stored 20 seconds of voice messages are provided to voice-to-text software run by server 30 to generate automated text and at block 276 the automated text is transmitted to the assisted user's device 12 for display. At block 278 the current or real time hearing user's voice messages are provided to a call assistant and at block 280 the call assistant transcribes the current voice messages to text. The call assistant generated text is transmitted to an assisted user's device at block 282 where the text is displayed along with the text transmitted at block 276. Thus, here, the assisted user receives text corresponding to misunderstood voice messages that occur just prior to the assisted user requesting captioning. One other advantage of this system is that when captioning starts, the call assistant does not start with an already existing backlog of words to transcribe; instead, automated software is used to provide the prior text.


In addition to using a service provided by relay 16 to transcribe stored rolling text, other resources may be used to transcribe the stored rolling text. For instance, in at least some embodiments an assisted user's device may link via the Internet or the like to a third party provider that can receive voice messages and transcribe those messages, at least somewhat accurately, to text. In these cases it is contemplated that real time transcription where accuracy needs to meet a high accuracy standard would still be performed by a call assistant or software trained to a specific voice while less accuracy sensitive text may be generated by the third party provider, at least some of the time for free, and transmitted back to the assisted user's device for display.


In other cases, it is contemplated that the assisted user's device 12 itself may run voice-to-text software that could be used to at least somewhat accurately transcribe voice messages to text where the text generated by the assisted user's device would only be provided in cases where accuracy sensitivity is less than normal such as where rolling voice messages prior to selection of a caption icon to initiate captioning are to be transcribed.



FIG. 10 shows another method 300 for providing text for voice messages that occurred prior to a caption request, albeit where the assisted user's device generates the pre-request text as opposed to a relay. Referring also to FIG. 1, at block 310 a hearing user's voice messages are received at an assisted user's device 12. At block 312, the assisted user's device 12 runs voice-to-text software that, in at least some embodiments, trains on the fly to the voice of a linked hearing user and generates caption text. In this embodiment, at least initially, the caption text generated by the assisted user's device 12 is not displayed to the assisted user. At block 314, until the assisted user requests captioning, control simply routes back up to block 310. Once captioning is requested by an assisted user, control passes to block 316 where the text corresponding to the last 20 seconds generated by the assisted user's device is presented on the assisted user's device display 18. Here, while there may be some errors in the displayed text, at least some text associated with the most recent voice message can be quickly presented and give the assisted user the opportunity to attempt to understand the voice messages associated therewith. At block 318 the assisted user's device links to a relay and at block 320 the hearing user's ongoing voice messages are transmitted to the relay. At block 322, after call assistant transcription at the relay, the assisted user's device receives the transcribed text from the relay and at block 324 the text is displayed. After block 324 control passes back up to block 320 where the sub-loop including blocks 320, 322 and 324 continues to cycle.


In at least some cases it is contemplated that voice-to-text software run outside control of the relay may be used to generate at least initial text for a hearing user's voice and that the initial text may be presented via an assisted user's device. Here, because known software may still include more errors than allowed given standard accuracy requirements, a relay correction service may be provided. For instance, in addition to presenting text transcribed by the assisted user's device via the device display 18, the text transcribed by the assisted user's device may also be transmitted to a relay 16 for correction. In addition to transmitting the text to the relay, the hearing user's voice messages may also be transmitted to the relay so that a call assistant can compare the text generated by the assisted user's device to the voice messages. At the relay, the call assistant can listen to the voice of the hearing person and can observe the text. Any errors in the text can be corrected and corrected text blocks can be transmitted back to the assisted user's device and used for in line correction on the assisted user's display screen. One advantage to this type of system is that relatively less skilled call assistants may be retained at a lesser cost to perform the call assistant tasks. A related advantage is that the stress level on call assistants may be reduced appreciably by eliminating the need to both transcribe and correct at high speeds and therefore call assistant turnover at relays may be appreciably reduced which ultimately reduces costs associated with providing relay services.


A similar system may include an assisted user's device that links to some other third party provider transcription/caption server to obtain initial captioned text which is immediately displayed to an assisted user and which is also transmitted to the relay for call assistant correction. Here, again, the call assistant corrections may be used by the third party provider to train the software on the fly to the hearing user's voice. In this case, the assisted user's device may have three separate links, one to the hearing user, a second link to a third party provider server, and a third link to the relay.


Referring to FIG. 11, a method 360 whereby an assisted user's device transcribes hearing user's voice-to-text and where corrections are made to the text at a relay is illustrated. At block 362 a hearing user's voice messages are received at an assisted user's device 12 (see also again FIG. 1). At block 364 the assisted user's device runs voice-to-text software to generate text from the received voice messages and at block 366 the generated text is presented to the assisted user via display 18. At block 370 the transcribed text is transmitted to the relay 16 and at block 372 the text is presented to a call assistant via the call assistant's display 50. At block 374 the call assistant corrects the text and at block 376 corrected blocks of text are transmitted to the assisted user's device 12. At block 378 the assisted user's device 12 uses the corrected blocks to correct the text errors via in line correction. At block 380, the assisted user's device uses the errors, the corrected text and the voice messages to train the captioning software to the hearing user's voice.
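The in line correction step at block 378 could be implemented along the lines of the following sketch, in which each corrected block received from the relay identifies the span of displayed words it replaces; the data shapes shown here are assumptions made for illustration rather than part of this disclosure.

```python
# Illustrative sketch only; the correction record format is an assumption.
def apply_inline_correction(displayed_words, correction):
    """Replace words [start_index, end_index) of the displayed transcript."""
    start = correction["start_index"]
    end = correction["end_index"]
    corrected = correction["words"]
    return displayed_words[:start] + corrected + displayed_words[end:]


# Usage example: the device initially shows automated text with two errors,
# then receives corrected blocks from the relay covering those words.
shown = ["I", "will", "sea", "you", "at", "the", "meating", "tomorrow"]
shown = apply_inline_correction(shown, {"start_index": 2, "end_index": 3, "words": ["see"]})
shown = apply_inline_correction(shown, {"start_index": 6, "end_index": 7, "words": ["meeting"]})
# shown -> ['I', 'will', 'see', 'you', 'at', 'the', 'meeting', 'tomorrow']
```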


In some cases, instead of having a relay or an assisted user's device run automated voice-to-text transcription software, a hearing user's device may include a processor that runs transcription software to generate text corresponding to the hearing user's voice messages. To this end, device 14 may, instead of including a simple telephone, include a computer that can run various applications including a voice-to-text program or may link to some third party real time transcription software program to obtain an initial text transcription substantially in real time. Here, as in the case where an assisted user's device runs the transcription software, the text will often have more errors than allowed by the standard accuracy requirements. Again, to correct the errors, the text and the hearing user's voice messages are transmitted to relay 16 where a call assistant listens to the voice messages, observes the text on screen 50 and makes corrections to eliminate transcription errors. The corrected blocks of text are transmitted to the assisted user's device for display. The corrected blocks may also be transmitted back to the hearing user's device for training the captioning software to the hearing user's voice. In these cases the text transcribed by the hearing user's device and the hearing user's voice messages may either be transmitted directly from the hearing user's device to the relay or may be transmitted to the assisted user's device 12 and then on to the relay. Where the hearing user's voice messages and text are transmitted directly to the relay 16, the voice messages and text may also be transmitted directly to the assisted user's device for immediate broadcast and display and the corrected text blocks may be subsequently used for in line correction.


In these cases the caption request option may be supported so that an assisted user can initiate captioning during an on-going call at any time by simply transmitting a signal to the hearing user's device instructing the hearing user's device to start the captioning process. Similarly, in these cases the help request option may be supported. Where the help option is facilitated, the automated text may be presented via the assisted user's device and, if the assisted user perceives that too many text errors are being generated, the help button may be selected to cause the hearing user's device or the assisted user's device to transmit the automated text to the relay for call assistant correction.


One advantage to having a hearing user's device manage or perform voice-to-text transcription is that the voice signal being transcribed can be a relatively high quality voice signal. To this end, a standard phone voice signal has a range of frequencies between 300 and about 3000 Hertz, which is only a fraction of the frequency range used by most voice-to-text transcription programs and therefore, in many cases, automated transcription software does a relatively poor job of transcribing voice signals that have passed through a telephone connection. Where transcription can occur within a digital signal portion of an overall system, the frequency range of voice messages can be optimized for automated transcription. Thus, where a hearing user's computer that is all digital receives and transcribes voice messages, the frequency range of the messages is relatively large and accuracy can be increased appreciably. Similarly, where a hearing user's computer can send digital voice messages to a third party transcription server, accuracy can be increased appreciably.


In at least some configurations it is contemplated that the link between an assisted user's device 12 and a hearing user's device 14 may be either a standard analog phone type connection or may be a digital connection depending on the capabilities of the hearing user's device that links to the assisted user's device. Thus, for instance, a first call may be analog and a second call may be digital. Because digital voice messages have a greater frequency range and therefore can be automatically transcribed more accurately than analog voice messages in many cases, it has been recognized that a system where automated voice-to-text program use is implemented on a case by case basis depending upon the type of voice message received (e.g., digital or analog) would be advantageous. For instance, in at least some embodiments, where a relay receives an analog voice message for transcription, the relay may automatically link to a call assistant for full call assistant transcription service where the call assistant transcribes and corrects text via revoicing and keyboard manipulation. Where the relay receives a high definition digital voice message for transcription, the relay may run an automated voice-to-text transcription program to generate automated text. The automated text may either be immediately corrected by a call assistant or may only be corrected by an assistant after a help feature is selected by an assisted user as described above.


Referring to FIG. 12, one process 400 for treating high definition digital messages differently than analog voice messages is illustrated. Referring also to FIG. 1, at block 402 a hearing user's voice messages are received at a relay 16. At decision block 404, relay server 30 determines if the received voice message is a high definition digital message or is an analog message. Where a high definition message has been received, control passes to block 406 where server 30 runs an automated voice-to-text program on the voice messages to generate automated text. At block 408 the automated text is transmitted to the assisted user's device 12 for display. Referring again to block 404, where the hearing user's voice messages are in analog, control passes to block 412 where a link to a call assistant is established so that the hearing user's voice messages are provided to a call assistant. At block 414 the call assistant listens to the voice messages and transcribes the messages into text. Error correction may also be performed at block 414. After block 414, control passes to block 408 where the call assistant generated text is transmitted to the assisted user's device 12. Again, in some cases, when automated text is presented to an assisted user, a help button may be presented that, when selected, causes automated text to be presented to a call assistant for correction. In other cases automated text may be automatically presented to a call assistant for correction.
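A simple sketch of the triage at decision block 404 is shown below; the is_high_definition flag, the message structure and the queue object are illustrative assumptions, not features of any particular implementation.

```python
# Illustrative sketch only; message fields and helper objects are assumptions.
def route_voice_message(message, asr_engine, call_assistant_queue):
    """Route a received voice message per decision block 404."""
    if message.get("is_high_definition"):
        # Wide-band digital audio: automated transcription (block 406).
        return asr_engine.transcribe(message["audio"])
    # Narrow-band analog audio (roughly 300-3000 Hz): hand to a call assistant
    # for full transcription and correction (blocks 412 and 414).
    call_assistant_queue.put(message["audio"])
    return None  # Text will arrive later from the call assistant.
```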


Another system is contemplated where all incoming calls to a relay are initially assigned to a call assistant for at least initial captioning where the option to switch to automated software generated text is only available when the call includes high definition audio and after accuracy standards have been exceeded. Here, all analog hearing user's voice messages would be captioned by a call assistant from start to finish and any high definition calls would cut out the call assistant when the standard is exceeded.


In at least some cases where an assisted user's device is capable of running automated voice-to-text transcription software, the assisted user's device 12 may be programmed to select either automated transcription when a high definition digital voice message is received or a relay with a call assistant when an analog voice message is received. Again, where device 12 runs an automated text program, call assistant correction may be automatic or may only start when a help button is selected.



FIG. 13 shows a process 430 whereby an assisted user's device 12 selects either automated voice-to-text software or a call assistant to transcribe based on the type of voice messages received. At block 432 a hearing user's voice messages are received by an assisted user's device 12. At decision block 434, a processor in device 12 determines if the assisted user has selected a help button. Initially no help button is selected as no text has been presented so at least initially control passes to block 436. At decision block 436, the device processor determines if a hearing user's voice signal that is received is high definition digital or is analog. Where the received signal is high definition digital, control passes to block 438 where the assisted user's device processor runs automated voice-to-text software to generate automated text which is then displayed on the assisted user device display 18 at block 440. Referring still to FIG. 13, if the help button has been selected at block 434 or if the received voice messages are in analog, control passes to block 442 where a link to a call assistant at relay 16 is established and the hearing user's voice messages are transmitted to the relay. At block 444 the call assistant listens to the voice messages and generates text and at block 446 the text is transmitted to the assisted user's device 12 where the text is displayed at block 440.


It has been recognized that in many cases most calls facilitated using an assisted user's device will be with a small group of other hearing or non-hearing users. For instance, in many cases as much as 70 to 80 percent of all calls to an assisted user's device will be with one of five or fewer hearing user's devices (e.g., family, close friends, a primary care physician, etc.). For this reason it has been recognized that it would be useful to store voice-to-text models for at least routine callers that link to an assisted user's device so that the automated voice-to-text training process can either be eliminated or substantially expedited. For instance, when an assisted user initiates a captioning service, if a previously developed voice model for a hearing user can be identified quickly, that model can be used without a new training process and the switchover from a full service call assistant to automated captioning may be expedited (e.g., instead of taking a minute or more the switchover may be accomplished in 15 seconds or less, in the time required to recognize or distinguish the hearing user's voice from other voices).



FIG. 14 shows a sub-process 460 that may be substituted for a portion of the process shown in FIG. 3 wherein voice-to-text templates along with related voice recognition profiles for callers are stored and used to expedite the handoff to automated transcription. Prior to running sub-process 460, referring again to FIG. 1, server 30 is used to create a voice recognition database for storing hearing user device identifiers along with associated voice recognition profiles and associated voice-to-text models. A voice recognition profile is a data construct that can be used to distinguish one voice from others. In the context of the FIG. 1 system, voice recognition profiles are useful because more than one person may use a hearing user's device to call an assisted user. For instance, in an exemplary case, an assisted user's son, daughter-in-law, or any one of three grandchildren may use device 14 to call an assisted user and therefore, to access the correct voice-to-text model, server 30 needs to distinguish which caller's voice is being received. Thus, in many cases, the voice recognition database will include several voice recognition profiles for each hearing user device identifier (e.g., each hearing user phone number). A voice-to-text model includes parameters that are used to customize voice-to-text software for transcribing the voice of an associated hearing user to text. The voice recognition database will include at least one voice model for each voice profile to be used by server 30 to automate transcription whenever a voice associated with the specific profile is identified. Data in the voice recognition database will be generated on the fly as an assisted user uses device 12. Thus, initially the voice recognition database will include a simple construct with no device identifiers, profiles or voice models.
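One possible in-memory representation of the voice recognition database described above is sketched below; the field names and the embedding-style profile are assumptions used only to show how a single device identifier can map to multiple voice profiles, each paired with its own voice-to-text model.

```python
# Illustrative sketch only; field names and the matcher callable are assumptions.
voice_recognition_db = {
    # hearing user device identifier (e.g., phone number) -> known speakers
    "+15551234567": [
        {
            "profile": {"embedding": [0.12, -0.43, 0.88]},    # distinguishes one voice from others
            "model": {"acoustic_params": {}, "word_bias": {}}  # tunes the voice-to-text software
        },
    ],
}

def lookup_model(db, device_id, voice_profile_matcher, voice_sample):
    """Return the stored voice model for the speaker on this call, if any."""
    for entry in db.get(device_id, []):
        if voice_profile_matcher(entry["profile"], voice_sample):
            return entry["model"]
    # Unknown device or unrecognized voice: caller falls back to a general model.
    return None
```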


Referring still to FIGS. 1 and 14 and now also to FIG. 3, at decision block 84 in FIG. 3, if the help flag is still zero (e.g., an assisted user has not requested call assistant help to correct automated text errors) control may pass to block 464 in FIG. 14 where the hearing user's device identifier (e.g., a phone number, an IP address, a serial number of a hearing user's device, etc.) is received by server 30. At block 468 server 30 determines if the hearing user's device identifier has already been added to the voice recognition database. If the hearing user's device identifier does not appear in the database (e.g., the first time the hearing user's device is used to connect to the assisted user's device) control passes to block 482 where server 30 uses a general voice-to-text program to convert the hearing user's voice messages to text after which control passes to block 476. At block 476 the server 30 trains a voice-to-text model using transcription errors. Again, the training will include comparing call assistant generated text to automated text to identify errors and using the errors to adjust model parameters so that the next time the word associated with the error is uttered by the hearing user, the software will identify the correct word. At block 478 server 30 trains a voice profile for the hearing user's voice so that the next time the hearing user calls, a voice profile will exist for the specific hearing user that can be used to identify the hearing user. At block 480 the server 30 stores the voice profile and voice model for the hearing user along with the hearing user device identifier for future use after which control passes back up to block 94 in FIG. 3.


Referring still to FIGS. 1 and 14, at block 468 if the hearing user's device is already represented in the voice recognition database, control passes to block 470 where server 30 runs voice recognition software on the hearing user's voice messages in an attempt to identify a voice profile associated with the specific hearing user. At decision block 472, if the hearing user's voice does not match one of the previously stored voice profiles associated with the device identifier, control passes to block 482 where the process described above continues. At block 472, if the hearing user's voice matches a previously stored profile, control passes to block 474 where the voice model associated with the matching profile is used to tune the voice-to-text software to be used to generate automated text.


Referring still to FIG. 14, at blocks 476 and 478, the voice model and voice profile for the hearing user are continually trained. Continual training enables the system to constantly adjust the model for changes in a hearing user's voice that may occur over time or when the hearing user experiences some physical condition (e.g., a cold, a raspy voice) that affects the sound of their voice. At block 480, the voice profile and voice model are stored with the HU device identifier for future use.


In at least some embodiments server 30 may adaptively change the order of voice profiles applied to a hearing user's voice during the voice recognition process. For instance, while server 30 may store five different voice profiles for five different hearing users that routinely connect to an assisted user's device, a first of the profiles may be used 80 percent of the time. In this case, when captioning is commenced, server 30 may start by using the first profile to analyze a hearing user's voice at block 472 and may cycle through the profiles from the most matched to the least matched.


To avoid server 30 having to store a different voice profile and voice model for every hearing person that communicates with an assisted user via device 12, in at least some embodiments it is contemplated that server 30 may only store models and profiles for a limited number (e.g., 5) of frequent callers. To this end, in at least some cases server 30 will track calls and automatically identify the most frequent hearing user devices used to link to the assisted user's device 12 over some rolling period (e.g., 1 month) and may only store models and profiles for the most frequent callers. Here, a separate counter may be maintained for each hearing user device used to link to the assisted user's device over the rolling period and different models and profiles may be swapped in and out of the stored set based on frequency of calls.
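The rolling-period counters described above could be maintained along the lines of the following sketch; the one month period and five caller limit follow the examples given, while the structures and function names themselves are illustrative assumptions.

```python
# Illustrative sketch only; constants and structures are assumptions.
import collections
import time

ROLLING_PERIOD_S = 30 * 24 * 3600   # e.g., a one month rolling period
MAX_STORED_CALLERS = 5

call_log = collections.defaultdict(list)  # device identifier -> call timestamps

def record_call(device_id):
    """Log a call and drop calls that have aged out of the rolling window."""
    now = time.time()
    call_log[device_id].append(now)
    call_log[device_id] = [t for t in call_log[device_id]
                           if now - t <= ROLLING_PERIOD_S]

def devices_to_keep():
    """Return the device identifiers whose voice models/profiles stay stored."""
    by_frequency = sorted(call_log, key=lambda d: len(call_log[d]), reverse=True)
    return set(by_frequency[:MAX_STORED_CALLERS])
```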


In other embodiments server 30 may query an assisted user for some indication that a specific hearing user is or will be a frequent contact and may add that person to a list for which a model and a profile should be stored for a total of up to five persons.


While the system described above with respect to FIG. 14 assumes that the relay 16 stores and uses voice models and voice profiles that are trained to hearing user's voices for subsequent use, in at least some embodiments it is contemplated that an assisted user's device 12 processor may maintain and use or at least have access to and use the voice recognition database to generate automated text without linking to a relay. In this case, because the assisted user's device runs the software to generate the automated text, the software for generating text can be trained any time the user's device receives a hearing user's voice messages without linking to a relay. For example, during a call between a hearing user and an assisted user on devices 14 and 12, respectively, in FIG. 1, and prior to an assisted user requesting captioning service, the voice messages of even a new hearing user can be used by the assisted user's device to train a voice-to-text model and a voice profile for the user. In addition, prior to a caption request, as the model is trained and gets better and better, the model can be used to generate text that can be used as fill in text (e.g., text corresponding to voice messages that precede initiation of the captioning function) when captioning is selected.



FIG. 15 shows a process 500 that may be performed by an assisted user's device to train voice models and voice profiles and use those models and profiles to automate text transcription until a help button is selected. Referring also to FIG. 1, at block 502, an assisted user's device 12 processor receives a hearing user's voice messages as well as an identifier (e.g., a phone number) of the hearing user's device 14. At block 504 the processor determines if the assisted user has selected the help button (e.g., indicating that current captioning includes too many errors). If an assisted user selects the help button at block 504, control passes to block 522 where the assisted user's device is linked to a call assistant at relay 16 and the hearing user's voice is presented to the call assistant. At block 524 the assisted user's device receives text back from the relay and at block 534 the call assistant generated text is displayed on the assisted user's device display 18.


Where the help button has not been selected, control passes to block 505 where the processor uses the device identifier to determine if the hearing user's device is represented in the voice recognition database. Where the hearing user's device is not represented in the database control passes to block 528 where the processor uses a general voice-to-text program to convert the hearing user's voice messages to text after which control passes to block 512.


Referring again to FIGS. 1 and 15, at block 512 the processor adaptively trains the voice model using perceived errors in the automated text. To this end, one way to train the voice model is to generate text phonetically and thereafter perform a context analysis of each text word by looking at other words proximate the word to identify errors. Another example of using context to identify errors is to look at several generated text words as a phrase and compare the phrase to similar prior phrases that are consistent with how the specific hearing user strings words together and identify any discrepancies as possible errors. At block 514 a voice profile for the hearing user is generated from the hearing user's voice messages so that the hearing user's voice can be recognized in the future. At block 516 the voice model and voice profile for the hearing user are stored for future use during subsequent calls and then control passes to block 518 where the process described above continues. Thus, blocks 528, 512, 514 and 516 enable the assisted user's device to train voice models and voice profiles for hearing users that call in anew where a new voice model can be used during an ongoing call and during future calls to provide generally accurate transcription.
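The context analysis described above could, for example, be approximated with simple word-pair statistics learned from the specific hearing user's prior phrases, as in the sketch below; the bigram-count approach is an assumption made for illustration, since the disclosure only requires looking at words proximate the word being checked.

```python
# Illustrative sketch only; the bigram approach is one possible context analysis.
import collections

class ContextChecker:
    """Flags transcribed words that rarely follow their neighbor in prior phrases."""

    def __init__(self):
        self.bigram_counts = collections.Counter()

    def learn_phrase(self, words):
        """Record how this hearing user strings words together."""
        for prev, curr in zip(words, words[1:]):
            self.bigram_counts[(prev.lower(), curr.lower())] += 1

    def suspicious_words(self, words, min_count=1):
        """Return indices of words that do not fit their local context."""
        flagged = []
        for i in range(1, len(words)):
            pair = (words[i - 1].lower(), words[i].lower())
            if self.bigram_counts[pair] < min_count:
                flagged.append(i)  # candidate transcription error for retraining
        return flagged
```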


Referring still to FIGS. 1 and 15, if the hearing user's device is already represented in the voice recognition database at block 505, control passes to block 506 where the processor runs voice recognition software on the hearing user's voice messages in an attempt to identify one of the voice profiles associated with the device identifier. At block 508, where no voice profile is recognized, control passes to block 528.


At block 508, if the hearing user's voice matches one of the stored voice profiles, control passes to block 510 where the voice-to-text model associated with the matching profile is used to generate automated text from the hearing user's voice messages. Next, at block 518, the assisted user's device processor determines whether the caption button on the assisted user's device has been selected. If captioning has not been selected, control passes to block 502 where the process continues to cycle. Once captioning has been requested, control passes to block 520 where assisted user's device 12 displays the most recent 10 seconds of automated text and continuing automated text on display 18.


In at least some embodiments it is contemplated that different types of voice model training may be performed by different processors within the overall FIG. 1 system. For instance, while an assisted user's device is not linked to a relay, the assisted user's device cannot use any errors identified by a call assistant at the relay to train a voice model as no call assistant is identifying errors. Nevertheless, the assisted user's device can use context to identify errors and train a model. Once an assisted user's device is linked to a relay where a call assistant corrects errors, the relay server can use the call assistant identified errors and corrections to train a voice model which can, once sufficiently accurate, be transmitted to the assisted user's device where the new model is substituted for the old content based model or where the two models are combined into a single robust model in some fashion. In other cases, when an assisted user's device links to a relay for call assistant captioning, a context based voice model generated by the assisted user's device for the hearing user may be transmitted to the relay server and used as an initial model to be further trained using call assistant identified errors and corrections. In still other cases call assistant errors may be provided to the assisted user's device and used by that device to further train a context based voice model for the hearing user.


Referring now to FIG. 16, a sub-process 550 that may be added to the process shown in FIG. 15 whereby an assisted user's device trains a voice model for a hearing user using voice message content and a relay server further trains the voice model generated by the assisted user's device using call assistant identified errors is illustrated. Referring also to FIG. 15, sub-process 550 is intended to be performed in parallel with blocks 524 and 534 in FIG. 15. Thus, after block 522, in addition to block 524, control also passes to block 552 in FIG. 16. At block 552 the voice model for a hearing user that has been generated by an assisted user's device 12 is transmitted to relay 16 and at block 553 the voice model is used to modify a voice-to-text program at the relay. At block 554 the modified voice-to-text program is used to convert the hearing user's voice messages to automated text. At block 556 the call assistant generated text is compared to the automated text to identify errors. At block 558 the errors are used to further train the voice model. At block 560, if the voice model has an accuracy below the required standard, control passes back to block 502 in FIG. 15 where the process described above continues to cycle. At block 560, once the accuracy exceeds the standard requirement, control passes to block 562 wherein server 30 transmits the trained voice model to the assisted user's device for handling subsequent calls from the hearing user for which the model was trained. At block 564 the new model is stored in the database maintained by the assisted user's device.
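The accuracy test at block 560 could be implemented as a word-level comparison between the automated text and the call assistant generated text, as sketched below; the use of difflib and the particular threshold value are assumptions made for illustration, since the disclosure refers only to a required accuracy standard.

```python
# Illustrative sketch only; the alignment method and threshold are assumptions.
import difflib

def caption_accuracy(automated_words, ca_words):
    """Rough word-level accuracy of the automated text against the CA text."""
    matcher = difflib.SequenceMatcher(a=automated_words, b=ca_words)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / max(len(ca_words), 1)

def should_hand_off(automated_words, ca_words, required_accuracy=0.96):
    """Decision at block 560: hand off to the trained model once accuracy suffices."""
    return caption_accuracy(automated_words, ca_words) >= required_accuracy
```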


Referring still to FIG. 16, in addition to transmitting the trained model to the assisted user's device at block 562, once the model is accurate enough to meet the standard requirements, server 30 may perform an automated process to cut out the call assistant and instead transmit automated text to the assisted user's device as described above in FIG. 1. In the alternative, once the model has been transmitted to the assisted user's device at block 562, the relay may be programmed to hand off control to the assisted user's device which would then use the newly trained and relatively more accurate model to perform automated transcription so that the relay could be disconnected.


Several different concepts and aspects of the present disclosure have been described above. It should be understood that many of the concepts and aspects may be combined in different ways to configure other triage systems that are more complex. For instance, one exemplary system may include an assisted user's device that attempts automated captioning with on the fly training first and, when automated captioning by the assisted user's device fails (e.g., a help icon is selected by an assisted user), the assisted user's device may link to a third party captioning system via the internet or the like where another more sophisticated voice-to-text captioning software is applied to generate automated text. Here, if the help button is selected again, the assisted user's device may link to a call assistant at the relay for call assistant captioning with simultaneous voice-to-text software transcription where errors in the automated text are used to train the software until a threshold accuracy requirement is met. Here, once the accuracy requirement is exceeded, the system may automatically cut out the call assistant and switch to the automated text from the relay until the help button is again selected. In each of the transcription hand offs, any learning or model training performed by one of the processors in the system may be provided to the next processor in the system to be used to expedite the training process.


In at least some embodiments an automated voice-to-text engine may be utilized in other ways to further enhance calls handled by a relay. For instance, in cases where transcription by a call assistant lags behind a hearing user's voice messages, automated transcription software may be programmed to transcribe text all the time and to identify specific words in a hearing user's voice messages that, when identified, are presented via an assisted user's display immediately to help the assisted user determine when a hearing user is confused by a communication delay. For instance, assume that transcription by a call assistant lags a hearing user's most current voice message by 20 seconds and that an assisted user is relying on the call assistant generated text to communicate with the hearing user. In this case, because the call assistant generated text lag is substantial, the hearing user may be confused when the assisted user's response also lags a similar period and may generate a voice message questioning the status of the call. For instance, the hearing user may utter "Are you there?" or "Did you hear me?" or "Hello" or "What did you say?". These phrases and others like them querying call status are referred to herein as "line check words" (LCWs) as the hearing user is checking the status of the call on the line.


If the line check words were not presented until they occurred sequentially in the transcription of the hearing user's voice messages, they would be delayed for 20 or more seconds in the above example. In at least some embodiments it is contemplated that the automated voice engine may search for line check words in a hearing user's voice messages and present the line check words immediately via the assisted user's device during a call regardless of which words have been transcribed and presented to an assisted user. The assisted user, seeing the line check words or phrase, can verbally respond that the captioning service is lagging but catching up so that the parties can avoid or at least minimize confusion.


When line check words are presented to an assisted user the words may be presented in-line within text being generated by a call assistant with intermediate blanks representing words yet to be transcribed by the call assistant. To this end, see again FIG. 17 that shows line check words “Are you still there?” in a highlighting box 590 at the end of intermediate blanks 216 representing words yet to be transcribed by the call assistant. Line check words will, in at least some embodiments, be highlighted on the display or otherwise visually distinguished. In other embodiments the line check words may be located at some prominent location on the assisted user's display screen (e.g., in a line check box or field at the top or bottom of the display screen).


One advantage of using an automated voice engine to only search for specific words and phrases is that the engine can be tuned for those words and will be relatively more accurate than a general purpose engine that transcribes all words uttered by a hearing user. In at least some embodiments the automated voice engine will be run by an assisted user's device processor while in other embodiments the automated voice engine may be run by the relay server with the line check words transmitted to the assisted user's device immediately upon generation and identification.


Referring now to FIG. 19, a process 600 that may be performed by an assisted user's device 12 and a relay to transcribe hearing user's voice messages and provide line check words immediately to an assisted user when transcription by a call assistant lags is illustrated. At block 602 a hearing user's voice messages are received by an assisted user's device 12. After block 602 control continues along parallel sub-processes to blocks 604 and 612. At block 604 the assisted user's device processor uses an automated voice engine to transcribe the hearing user's voice messages to text. Here, it is assumed that the voice engine may generate several errors and therefore likely would be insufficient for the purposes of providing captioning to the assisted user. The engine, however, is optimized and trained to caption a set (e.g., 10 to 100) of line check words and/or phrases which the engine can do extremely accurately. At block 606 the assisted user's device processor searches for line check words in the automated text. At block 608, if a line check word or phrase is not identified, control passes back up to block 602 where the process continues to cycle. At block 608, if a line check word or phrase is identified, control passes to block 610 where the line check word/phrase is immediately presented to the assisted user via display 18 either in-line or in a special location and, in at least some cases, in a visually distinct manner.
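A minimal sketch of the line check word search at blocks 606 through 610 follows; the short phrase list mirrors the examples given above, whereas a tuned engine might handle on the order of 10 to 100 such phrases.

```python
# Illustrative sketch only; the phrase list is a small example set.
LINE_CHECK_PHRASES = [
    "are you there",
    "did you hear me",
    "hello",
    "what did you say",
]

def find_line_check_words(automated_text):
    """Return a line check phrase found in the latest automated text, if any."""
    lowered = automated_text.lower()
    for phrase in LINE_CHECK_PHRASES:
        if phrase in lowered:
            return phrase
    return None

# Usage: present the phrase immediately, visually distinguished, ahead of the
# call assistant text that is still catching up (block 610).
hit = find_line_check_words("uh are you there I think we got cut off")
if hit:
    print(f"[LINE CHECK] {hit!r}")
```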


Referring still to FIG. 19, at block 612 the hearing user's voice messages are sent to a relay for transcription. At block 614 transcribed text is received at the assisted user's device back from the relay. At block 616 the text from the relay is used to fill in the intermediate blanks (see again FIG. 17 and also FIG. 18 where text has been filled in) on the assisted user's display.


In at least some embodiments it is contemplated that an automated voice-to-text engine may operate all the time and may check for and indicate any potential errors in call assistant generated text so that the call assistant can determine if the errors should be corrected. For instance, in at least some cases, the automated voice engine may highlight potential errors in call assistant generated text on the call assistant's display screen inviting the call assistant to correct the potential errors. In these cases the call assistant would have the final say regarding whether or not a potential error should be altered.


Consistent with the above comments, see FIG. 20 that shows a screen shot of a call assistant's display screen where potential errors have been highlighted to distinguish the errors from other text. Exemplary call assistant generated text is shown at 650 with errors shown in phantom boxes 652, 654 and 656 that represent highlighting. In the illustrated example, exemplary words generated by an automated voice-to-text engine are also presented to the call assistant in hovering fields above the potentially erroneous text as shown at 658, 660 and 662. Here, a call assistant can simply touch a suggested correction in a hovering field to make a correction and replace the erroneous word with the automated text suggested in the hovering field. If a call assistant instead touches an error, the call assistant can manually change the word to another word. If a call assistant does not touch an error or an associated corrected word, the word remains as originally transcribed by the call assistant. An "Accept All" icon is presented at 669 that can be selected to accept all of the suggestions presented on a call assistant's display. All corrected words are transmitted to an assisted user's device to be displayed.


Referring to FIG. 21, a method 700 by which a voice engine generates text to be compared to call assistant generated text and by which a correction interface as in FIG. 20 is provided for the call assistant is illustrated. At block 702 the hearing user's voice messages are provided to a relay. After block 702 control follows two parallel paths to blocks 704 and 716. At block 704 the hearing user's voice messages are transcribed into text by an automated voice-to-text engine run by the relay server before control passes to block 706. At block 716 a call assistant transcribes the hearing user's voice messages to call assistant generated text. At block 718 the call assistant generated text is transmitted to the assisted user's device to be displayed. At block 720 the call assistant generated text is displayed on the call assistant's display screen 50 for correction after which control passes to block 706.


Referring still to FIG. 21, at block 706 the relay server compares the call assistant generated text to the automated text to identify any discrepancies. Where the automated text matches the call assistant generated text at block 708, control passes back up to block 702 where the process continues. Where the automated text does not match the call assistant generated text at block 708, control passes to block 710 where the server visually distinguishes the mismatched text on the call assistant's display screen 50 and also presents suggested correct text (e.g., the automated text). Next, at block 712 the server monitors for any error corrections by the call assistant and at block 714 if an error has been corrected, the corrected text is transmitted to the assisted user's device for in-line correction.
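The comparison at block 706 could align the two text streams and report each mismatch together with the automated suggestion to be shown in a hovering field, as in the sketch below; difflib is used here purely for illustration, since the disclosure does not specify an alignment algorithm.

```python
# Illustrative sketch only; alignment via difflib is an assumption.
import difflib

def find_discrepancies(ca_words, auto_words):
    """Return mismatched spans of CA text plus the automated suggestion for each."""
    matcher = difflib.SequenceMatcher(a=ca_words, b=auto_words)
    discrepancies = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            discrepancies.append({
                "ca_range": (i1, i2),             # words to highlight on screen 50
                "suggestion": auto_words[j1:j2],  # automated text for the hovering field
            })
    return discrepancies

# Usage example: one replaced span is flagged for the call assistant to review.
print(find_discrepancies(
    ["meet", "me", "at", "the", "fair", "grounds"],
    ["meet", "me", "at", "the", "fairgrounds"]))
```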


In at least some embodiments the relay server may be able to generate some type of probability factor related to how likely a discrepancy between automated and call assistant generated text is related to a call assistant error and may only indicate errors and present suggestions for probable errors or discrepancies likely to be related to errors. For instance, where an automated text segment is different from an associated call assistant generated text segment but the automated segment makes no sense contextually in a sentence, the server may not indicate the discrepancy or show the automated text segment as an option for correction. The same discrepancy may be shown as a potential error at a different time if the automated segment makes contextual sense.


In still other embodiments automated voice-to-text software that operates at the same time as a call assistant to generate text may be trained to recognize words often missed by a call assistant such as articles, for instance, and to ignore other words that a call assistant is trained to transcribe.


The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below.


Thus, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the following appended claims. For example, while the methods above are described as being performed by specific system processors, in at least some cases various method steps may be performed by other system processors. For instance, where a hearing user's voice is recognized and then a voice model for the recognized hearing user is employed for voice-to-text transcription, the voice recognition process may be performed by an assisted user's device and the identified voice may be indicated to a relay 16 which then identifies a related voice model to be used. As another instance, a hearing user's device may identify a hearing user's voice and indicate the identity of the hearing user to the assisted user's device and/or the relay.


As another example, while the system is described above in the context of a two line captioning system where one line links an assisted user's device to a hearing user's device and a second line links the assisted user's device to a relay, the concepts and features described above may be used in any transcription system including a system where the hearing user's voice is transmitted directly to a relay and the relay then transmits transcribed text and the hearing user's voice to the assisted user's device.


As still one other example, while inputs to an assisted user's device may include mechanical or virtual on screen buttons/icons, in some embodiments other input arrangements may be supported. For instance, in some cases help or captioning may be indicated via a voice input (e.g., a verbal request for assistance or for captioning).


As another example, in at least some cases where a relay includes first and second differently trained call assistants where first call assistants are trained to and capable of transcribing and correcting text and second call assistants are only trained to or capable of correcting text, a call assistant may always be on a call but the automated voice-to-text software may aid in the transcription process whenever possible to minimize overall costs. For instance, when a call is initially linked to a relay so that a hearing user's voice is received at the relay, the hearing user's voice may be provided to a first call assistant fully trained to transcribe and correct text. Here, voice-to-text software may train to the hearing user's voice while the first call assistant transcribes the text and after the voice-to-text software accuracy exceeds a threshold, instead of completely cutting out the relay or call assistant, the automated text may be provided to a second call assistant that is only trained to correct errors. Here, after training, the automated text should have minimal errors and therefore even a minimally trained call assistant should be able to make corrections to the errors in a timely fashion.


In other systems an assisted user's device processor may run automated voice-to-text software to transcribe a hearing user's voice messages and may also assign a confidence factor to each word in the automated text based on how confident the processor is that the word has been accurately transcribed. The confidence factors over a most recent number of words (e.g., 100) or a most recent period (e.g., 45 seconds) may be averaged and the average used to assess an overall confidence factor for transcription accuracy. Where the confidence factor is below a threshold level, the device processor may link to a relay for more accurate transcription either via more sophisticated automated voice-to-text software or via a call assistant. The automated process for linking to a relay may be used instead of or in addition to the process described above whereby an assisted user selects a "caption" button to link to a relay.
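The confidence triage described above might be implemented along the lines of the following sketch, in which per-word confidence values are averaged over a recent window and a relay link is requested when the average falls below a threshold; the window size and threshold value are assumptions for illustration only.

```python
# Illustrative sketch only; window size and threshold are assumptions.
import collections

class ConfidenceMonitor:
    """Tracks recent per-word confidence and decides when relay help is needed."""

    def __init__(self, window=100, threshold=0.85):
        self.recent = collections.deque(maxlen=window)  # most recent word confidences
        self.threshold = threshold

    def add_word(self, confidence):
        self.recent.append(confidence)

    def needs_relay(self):
        """True when the rolling average confidence drops below the threshold."""
        if not self.recent:
            return False
        average = sum(self.recent) / len(self.recent)
        return average < self.threshold
```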


In addition to storing hearing user voice models, a system may also store other information that can be used when an assisted user is communicating with a specific hearing user to increase the accuracy of automated voice-to-text software when used. For instance, a specific hearing user may routinely use complex words from a specific industry when conversing with an assisted user. The system software can recognize when a complex word is corrected by a call assistant or contextually by automated software and can store the word and the specific hearing user's pronunciation of the word in a hearing user word list for subsequent use. Then, when the specific hearing user subsequently links to the assisted user's device to communicate with the assisted user, the stored word list for the hearing user may be accessed and used to automate transcription. The hearing user's word list may be stored at a relay, by an assisted user's device, or even by a hearing user's device where the hearing user's device has data storage capability.
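A minimal, non-limiting sketch of such a hearing user word list is shown below; the identifiers and example words are hypothetical and are used only to show how corrections made on one call could be reused on a later call.

    # Illustrative sketch of building and reusing a hearing-user-specific word list
    # from corrections, keyed by the hearing user's identifier (e.g., phone number).
    hu_word_lists: dict[str, dict[str, str]] = {}

    def record_correction(hu_id: str, misrecognized: str, corrected: str) -> None:
        """Store a corrected complex word against the hearing user's identifier."""
        hu_word_lists.setdefault(hu_id, {})[misrecognized.lower()] = corrected

    def apply_word_list(hu_id: str, caption: str) -> str:
        """Substitute previously corrected words into a newly generated caption."""
        corrections = hu_word_lists.get(hu_id, {})
        return " ".join(corrections.get(word.lower(), word) for word in caption.split())

    # A call assistant (or contextual software) corrects a complex industry term once...
    record_correction("608-555-0123", "infartion", "infarction")
    # ...and on a later call the stored list is applied to the automated captions.
    print(apply_word_list("608-555-0123", "the infartion was treated promptly"))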


In other cases a word list specific to an assisted user's device (i.e., to an assisted user) that includes complex or common words routinely used to communicate with the assisted user may be generated, stored and updated by the system. This list may include words used on a regular basis by any hearing user that communicates with an assisted user. In at least some cases this list or the hearing user's word lists may be stored on an internet accessible database so that the assisted user has the ability to access the list(s) and edit words on the list via an internet portal or some other network interface.


In still other embodiments various aspects of a hearing user's voice messages may be used to select among different voice-to-text software programs that are optimized for voices having different characteristic sets. For instance, there may be different voice-to-text programs optimized for male and female voices or for voices having different dialects. Here, system software may be able to distinguish one dialect from others and select an optimized voice engine/software program to increase transcription accuracy. Similarly, a system may be able to distinguish a high-pitched voice from a low-pitched voice and select a voice engine accordingly.
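By way of non-limiting illustration, the sketch below selects among hypothetical engines using a crude pitch measure and a dialect label; the categories, pitch cutoff, and engine names are assumptions, not part of the disclosure.

    # Illustrative engine selection based on measured voice characteristics.
    ENGINES = {
        ("female", "southern_us"): "engine_female_southern",
        ("male", "southern_us"): "engine_male_southern",
        ("female", "general_us"): "engine_female_general",
        ("male", "general_us"): "engine_male_general",
    }

    def classify_voice(mean_pitch_hz: float) -> str:
        """Crude pitch-based voice category, for illustration only."""
        return "female" if mean_pitch_hz >= 165.0 else "male"

    def select_engine(mean_pitch_hz: float, dialect: str) -> str:
        """Pick the engine optimized for the measured characteristic set."""
        key = (classify_voice(mean_pitch_hz), dialect)
        return ENGINES.get(key, "engine_general")

    print(select_engine(210.0, "southern_us"))  # -> engine_female_southern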


In some cases a voice engine may be selected for transcribing a hearing user's voice based on the region of a country in which the hearing user's device resides. For instance, where a hearing user's device is located in the southern part of the United States, an engine optimized for a southern dialect may be used, while a device in New England may cause the system to select an engine optimized for another dialect. Different word lists may also be used based on the region of the country in which a hearing user's device resides.
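The region-based selection might, purely for illustration, key off the area code of the hearing user's phone number, as in the following sketch; the area-code-to-region mapping and resource names are invented for the example.

    # Hedged example of choosing a dialect-specific engine and word list from the
    # hearing user's phone number; mappings are illustrative only.
    AREA_CODE_REGIONS = {
        "404": "southern_us",   # Atlanta
        "617": "new_england",   # Boston
    }

    REGION_RESOURCES = {
        "southern_us": ("engine_southern_dialect", "wordlist_southern"),
        "new_england": ("engine_new_england_dialect", "wordlist_new_england"),
    }

    def resources_for_number(phone_number: str):
        """Map a ten-digit phone number to a regional engine and word list."""
        region = AREA_CODE_REGIONS.get(phone_number[:3], "general_us")
        return REGION_RESOURCES.get(region, ("engine_general", "wordlist_general"))

    print(resources_for_number("4045550147"))  # -> southern dialect resources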


To apprise the public of the scope of this invention, the following claims are made:

Claims
  • 1. A call captioning system for captioning a hearing user's (HU's) voice signal during an ongoing call with an assisted user (AU) where the HU uses an HU communication device, the captioning system comprising:
    an AU communication device including a display screen, a speaker, a microphone and an interface including at least one caption activation feature for activating a caption service;
    at least a first processor programmed to perform the steps of, during an ongoing call, receiving the HU's voice signal;
    (a) prior to activating the caption service via the caption activation feature:
      (i) using an automated speech recognition (ASR) engine to generate HU voice signal captions corresponding to the HU's voice signal;
      (ii) detecting errors in the HU voice signal captions;
      (iii) using the errors in the HU voice signal captions to train the ASR software to the HU's voice signal so that accuracy of the HU voice signal captions that can be generated by the ASR engine is increased; and
      (iv) storing the trained ASR engine for subsequent use;
    (b) upon activating the caption service via the caption activation feature during the ongoing call:
      (i) using the trained ASR engine to generate HU voice signal captions; and
      (ii) presenting the HU voice signal captions to the AU via the display screen.
  • 2. The call captioning system of claim 1 wherein the caption activation feature is presented as a virtual button.
  • 3. The call captioning system of claim 1 wherein the step of detecting errors includes automatically detecting errors in the HU voice signal captions.
  • 4. The call captioning system of claim 3 wherein the AU device includes the at least a first processor that is programmed.
  • 5. The call captioning system of claim 3 wherein the at least a first processor is located at a relay that is remote from the AU device and the HU device.
  • 6. The call captioning system of claim 5 wherein the AU device includes a second processor, the second processor receiving the HU voice signal from the HU device and transmitting the HU voice signal to the relay.
  • 7. The call captioning system of claim 1 further including, storing at least a portion of the HU voice signal captions prior to activation of the caption service and, when the caption service is activated, presenting at least some HU voice signal captions generated prior to activation via the display screen.
  • 8. The call captioning system of claim 7 wherein HU voice signal captions corresponding to HU voice signals received after the caption service is activated are visually distinguished from voice signal captions generated prior to activation of the caption service.
  • 9. The call captioning system of claim 7 wherein the at least a portion of the HU voice signal captions is stored on the AU device.
  • 10. The call captioning system of claim 7 wherein the at least a portion of the HU voice signal captions includes captions corresponding to the most recent 20 seconds of the HU's voice signal.
  • 11. The call captioning system of claim 1 wherein the steps of using an automated speech recognition (ASR) engine to generate HU voice signal captions corresponding to the HU's voice signal, detecting errors in the HU voice signal captions, and using the errors in the HU voice signal captions to train the ASR software to the HU's voice signal are performed by a processor included within the AU device.
  • 12. The call captioning system of claim 1 wherein a call assistant (CA) identifies errors in the HU voice signal captions.
  • 13. The call captioning system of claim 1 wherein the trained ASR engine is stored for use during subsequent calls between the HU and the AU.
  • 14. The call captioning system of claim 12 wherein the HU device is associated with an HU device identifier and wherein the HU device identifier is stored along with the trained ASR engine for use during subsequent calls, upon commencement of a subsequent call, the at least one processor further programmed to identify the HU device via the device identifier and access the trained ASR engine for use in captioning during the call.
  • 15. The call captioning system of claim 14 wherein the device identifier is a phone number.
  • 16. The call captioning system of claim 1 wherein the steps of detecting errors in the HU voice signal captions, using the errors to train the ASR software to the HU's voice signal so that accuracy of the HU voice signal captions that can be generated by the ASR engine is increased, and storing the trained ASR engine, continue cyclically for at least a period after activation of the caption services.
  • 17. The call captioning system of claim 16 wherein the steps continue cyclically for the duration of the ongoing call.
  • 18. The call captioning system of claim 1 wherein the caption activation feature includes a virtual caption button presented via the display screen and wherein the caption button is presented via the AU communication device upon initiation of the call.
  • 19. A call captioning system for captioning a hearing user's (HU's) voice signal during an ongoing call with an assisted user (AU) where the HU uses an HU communication device, the captioning system comprising:
    an AU communication device including a display screen, a speaker, a microphone and an interface including at least one caption activation feature for activating a caption service;
    at least a first processor programmed to perform the steps of, during an ongoing call:
      (i) receiving the HU's voice signal;
      (ii) using an automated speech recognition (ASR) engine to generate HU voice signal captions corresponding to the received HU voice signal;
      (iii) upon activating the caption service via the caption activation feature during the ongoing call:
        (a) presenting at least a portion of the HU voice signal captions that were generated prior to activating the caption service via the display screen; and
        (b) presenting HU voice signal captions via the display screen as they are generated for HU voice signal received after activation of the caption service.
  • 20. The call captioning system of claim 19 further including storing at least a portion of the HU voice signal that occurs prior to activation of the caption service and wherein the step of using an ASR engine to generate HU voice signal text for HU voice signal received prior to activation of the caption service occurs after activation of the caption service.
  • 21. The call captioning system of claim 19 wherein the step of using an ASR engine to generate HU voice signal text for HU voice signal received prior to activation of the caption service occurs prior to activation of the caption service, the method further including storing the HU voice signal text generated prior to activation of the caption service.
  • 22. The call captioning system of claim 19 wherein the steps of presenting HU voice signal text includes presenting at least a portion of the HU voice signal captions that were generated prior to activating the caption service via the display screen with a first appearance characteristic and presenting HU voice signal captions via the display screen as they are generated for HU voice signal received after activation of the caption service with a second appearance characteristic that is different than the first appearance characteristic.
  • 23. The call captioning system of claim 19 wherein the AU communication device includes the at least a first processor.
  • 24. The call captioning system of claim 23 wherein the AU communication device includes a memory and wherein the at least a portion of the HU voice signal captions that were generated prior to activating the caption service are stored in the AU device memory.
  • 25. The call captioning system of claim 19 wherein the at least a first processor includes an AU communication device processor and a remote processor that is remote from the AU communication device, the remote processor programmed to perform the steps of, during an ongoing call, receiving the HU's voice signal, using an automated speech recognition (ASR) engine to generate HU voice signal captions corresponding to the received HU voice signal, and, upon activating the caption service via the caption activation feature during the ongoing call, transmitting the at least a portion of the HU voice signal captions that were generated prior to activating the caption service and transmitting HU voice signal captions generated after activation of the caption service to the AU communication device, the AU communication device processor programmed to perform the steps of, presenting at least a portion of the HU voice signal captions that were generated prior to activating the caption service via the display screen and presenting HU voice signal captions via the display screen as they are generated for HU voice signal received after activation of the caption service.
  • 26. The call captioning system of claim 25 wherein the remote processor is an HU communication device processor.
  • 27. The call captioning system of claim 25 wherein the remote processor is a relay processor.
  • 28. The call captioning system of claim 19 wherein the at least a first processor is further programmed to perform the steps of detecting errors in the HU voice signal captions, using the errors in the HU voice signal captions to train the ASR software to the HU's voice signal so that accuracy of the HU voice signal captions that can be generated by the ASR engine is increased as the ongoing call progresses.
  • 29. The call captioning system of claim 28 wherein the trained ASR software is stored for captioning other calls after the ongoing call ends.
  • 30. The call captioning system of claim 20 wherein the AU communication device includes the at least a first processor and a memory for storing the HU voice signal that occurs prior to activation of the caption service.
  • 31. The call captioning system of claim 21 wherein the AU communication device includes the at least a first processor and a memory for storing the HU voice signal that occurs prior to activation of the caption service.
  • 32. The call captioning system of claim 23 further including a relay remote from the AU communication device and, wherein the AU communication device processor is further programmed to perform the steps of, upon activating the caption service during the ongoing call, transmitting at least a subset of the captions generated after activation of the caption service to the relay, receiving error corrections to the captions from the relay, and making error corrections to the captions presented via the display screen as the error corrections are received.
  • 33. The call captioning system of claim 19 wherein the AU communication device further includes a receiver and an AU device processor and wherein the AU device processor is programmed to perform at least the steps of presenting at least a portion of the HU voice signal captions that were generated prior to activating the caption service via the display screen, presenting HU voice signal captions via the display screen as they are generated for HU voice signal received after activation of the caption service, receiving the HU voice signal, and broadcasting the HU voice signal via the speaker.
  • 34. The call captioning system of claim 33 wherein the AU device processor is the at least a first processor.
  • 35. The call captioning system of claim 23 further including a relay remote from the AU communication device and, wherein the AU communication device processor is further programmed to perform the steps of, upon activating the caption service during the ongoing call, presenting a help activation feature for activating a relay captioning service, upon activating the relay captioning service via the help activation feature during the ongoing call, transmitting at least a subset of the captions generated by the ASR engine after activation of the caption service to the relay, receiving error corrections to the captions from the relay, and making error corrections to the captions presented via the display screen as the error corrections are received.
  • 36. The call captioning system of claim 35 wherein the help activation feature includes a selectable button presented for selection via the AU communication device.
  • 37. The call captioning system of claim 35 wherein the AU device further includes a receiver and wherein the AU communication device processor receives the HU voice signal from the HU communication device and broadcasts the HU voice signal via the speaker and, upon activation of the help activation feature, also transmits the HU voice signal to the relay.
  • 38. The call captioning system of claim 19 wherein HU voice signal captions corresponding to HU voice signals received after the caption service is activated are visually distinguished from voice signal captions generated prior to activation of the caption service when the captions are presented via the display screen.
  • 39. A call captioning system for captioning a hearing user's (HU's) voice signal during an ongoing call with an assisted user (AU) where the HU uses an HU communication device, the captioning system comprising:
    an AU communication device including a display screen, a speaker, a microphone and an interface including at least one caption activation feature for activating a caption service;
    at least a first processor programmed to perform the steps of, during an ongoing call, receiving the HU's voice signal;
    (a) prior to activating the caption service via the caption activation feature:
      (i) using an automated speech recognition (ASR) engine to generate HU voice signal captions corresponding to the HU's voice signal; and
      (ii) storing at least a most recent portion of the HU voice signal captions;
    (b) upon activating the caption service via the caption activation feature during the ongoing call:
      (i) presenting the most recent portion of the stored HU voice signal captions via the display screen;
      (ii) using an automated speech recognition (ASR) engine to generate ongoing HU voice signal captions corresponding to the ongoing HU's voice signal; and
      (iii) presenting the ongoing HU voice signal captions via the display screen.
  • 40. The call captioning system of claim 39 wherein a single ASR engine is used to generate captions for the HU voice signal before and after activation of the caption activation feature.
  • 41. The call captioning system of claim 40 wherein the AU communication device includes the at least a first processor.
  • 42. The call captioning system of claim 40 wherein the at least a first processor includes a processor remote from the AU communication device.
  • 43. The call captioning system of claim 42 wherein the at least a first processor is located at a relay station.
  • 44. The call captioning system of claim 40 wherein the at least a first processor is further programmed to perform the steps of automatically identifying errors in the HU voice signal captions prior to activation of the caption service and using the errors to train the ASR engine to the voice of the HU during the call.
  • 45. The call captioning system of claim 44 wherein the at least a first processor is further programmed to perform the steps of automatically identifying errors in the HU voice signal captions subsequent to activation of the caption service and using the errors to train the ASR engine to the voice of the HU during the call.
  • 46. The call captioning system of claim 45 wherein the at least a first processor is further programmed to perform the step of making corrections to the captions presented via the display screen as errors in those captions are identified.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 15/477,958, which was filed on Apr. 3, 2017, and which is titled “SEMIAUTOMATED RELAY METHOD AND APPARATUS,” which is a continuation of U.S. patent application Ser. No. 14/632,257, which was filed on Feb. 26, 2015, which issued as U.S. Pat. No. 10,389,876 on Aug. 20, 2019, and which is titled “SEMIAUTOMATED RELAY METHOD AND APPARATUS,” which claims priority to U.S. Provisional Patent Application Ser. No. 61/946,072, filed on Feb. 28, 2014, and entitled “SEMIAUTOMATIC RELAY METHOD AND APPARATUS.”

US Referenced Citations (549)
Number Name Date Kind
3372246 Knuepfer et al. Mar 1968 A
3507997 Weitbrecht Apr 1970 A
3515814 Morgan et al. Jun 1970 A
3585303 Chieffo et al. Jun 1971 A
3598920 Fischer et al. Aug 1971 A
3800089 Reddick Mar 1974 A
3896267 Sachs et al. Jul 1975 A
3959607 Margo May 1976 A
3976995 Sebestyen Aug 1976 A
4012599 Meyer Mar 1977 A
4039768 O'Maley Aug 1977 A
4126768 Grenzow Nov 1978 A
4151380 Blomeyer et al. Apr 1979 A
4160136 McGough Jul 1979 A
4188665 Nagel et al. Feb 1980 A
4191854 Coles Mar 1980 A
4201887 Burns May 1980 A
4254308 Blomeyer et al. Mar 1981 A
D259348 Sakai et al. May 1981 S
4268721 Nielson et al. May 1981 A
4289931 Baker Sep 1981 A
4302629 Foulkes et al. Nov 1981 A
4307266 Messina Dec 1981 A
4354252 Lamb Oct 1982 A
4415065 Sandstedt Nov 1983 A
4426555 Underkoffler Jan 1984 A
4430726 Kasday Feb 1984 A
D273110 Genaro et al. Mar 1984 S
4451701 Bendig May 1984 A
4471165 DeFino et al. Sep 1984 A
D275857 Moraine Oct 1984 S
4490579 Godoshian Dec 1984 A
4503288 Kessler Mar 1985 A
D278435 Hikawa Apr 1985 S
4524244 Faggin Jun 1985 A
D280099 Topp Aug 1985 S
4533791 Read et al. Aug 1985 A
4568803 Frola Feb 1986 A
4569421 Sandstedt Feb 1986 A
D283421 Brier Apr 1986 S
4625080 Scott Nov 1986 A
RE32365 Sebestyen Mar 1987 E
4650927 James Mar 1987 A
4659876 Sullivan et al. Apr 1987 A
4713808 Gaskill Dec 1987 A
4754474 Feinson Jun 1988 A
D296894 Chen Jul 1988 S
4777469 Engelke et al. Oct 1988 A
4790003 Kepley et al. Dec 1988 A
4799254 Dayton Jan 1989 A
4815121 Yoshida Mar 1989 A
4817135 Winebaum Mar 1989 A
4839919 Borges Jun 1989 A
4849750 Andros Jul 1989 A
4866778 Baker Sep 1989 A
4868860 Andros et al. Sep 1989 A
4879738 Petro Nov 1989 A
4897868 Engelke et al. Jan 1990 A
D306727 Fritzsche Mar 1990 S
4908866 Goldwasser et al. Mar 1990 A
4918723 Iggulden et al. Apr 1990 A
4926460 Gutman et al. May 1990 A
4951043 Minami Aug 1990 A
4959847 Engelke et al. Sep 1990 A
D312457 Inatomi Nov 1990 S
4995077 Malinowski Feb 1991 A
5025442 Lynk et al. Jun 1991 A
5027406 Roberts et al. Jun 1991 A
5033088 Shipman Jul 1991 A
5051924 Bergeron et al. Sep 1991 A
D322785 Wu Dec 1991 S
5081673 Engelke et al. Jan 1992 A
5086453 Senoo et al. Feb 1992 A
5091906 Reed et al. Feb 1992 A
5095307 Shimura et al. Mar 1992 A
5099507 Mukai et al. Mar 1992 A
5121421 Alheim Jun 1992 A
5128980 Choi Jul 1992 A
5134633 Werner Jul 1992 A
5146502 Davis Sep 1992 A
5163081 Wycherley et al. Nov 1992 A
5192948 Neustein Mar 1993 A
5199077 Wilcox et al. Mar 1993 A
5210689 Baker et al. May 1993 A
5214428 Allen May 1993 A
5216702 Ramsden Jun 1993 A
5249220 Moskowitz et al. Sep 1993 A
5280516 Jang Jan 1994 A
5289523 Vasile et al. Feb 1994 A
5294982 Salomon et al. Mar 1994 A
5307399 Dai et al. Apr 1994 A
5311516 Kuznicki et al. May 1994 A
5318340 Henry Jun 1994 A
5325417 Engelke et al. Jun 1994 A
5327479 Engelke et al. Jul 1994 A
5339358 Danish et al. Aug 1994 A
5343519 Feldman Aug 1994 A
5351288 Engelke et al. Sep 1994 A
D351185 Matsuda et al. Oct 1994 S
5359651 Draganoff Oct 1994 A
5375160 Guidon et al. Dec 1994 A
5377263 Bazemore et al. Dec 1994 A
5392343 Davitt et al. Feb 1995 A
5393236 Blackmer et al. Feb 1995 A
5396650 Terauchi Mar 1995 A
D357253 Wong Apr 1995 S
5410541 Hotto Apr 1995 A
5423555 Kidrin Jun 1995 A
5424785 Orphan Jun 1995 A
5426706 Wood Jun 1995 A
5432837 Engelke et al. Jul 1995 A
5459458 Richardson et al. Oct 1995 A
5463665 Millios et al. Oct 1995 A
D364865 Engelke et al. Dec 1995 S
5475733 Eisdorfer et al. Dec 1995 A
5475798 Handlos Dec 1995 A
5477274 Akiyoshi et al. Dec 1995 A
5487102 Rothschild et al. Jan 1996 A
5487671 Shpiro Jan 1996 A
5497373 Hulen et al. Mar 1996 A
5508754 Orphan Apr 1996 A
5517548 Engelke et al. May 1996 A
5519443 Salomon et al. May 1996 A
5519808 Benton, Jr. et al. May 1996 A
5521960 Aronow May 1996 A
5522089 Kikinis et al. May 1996 A
5537436 Bottoms et al. Jul 1996 A
5559855 Dowens et al. Sep 1996 A
5559856 Dowens Sep 1996 A
5574776 Leuca et al. Nov 1996 A
5574784 LaPadula et al. Nov 1996 A
5581593 Engelke et al. Dec 1996 A
5604786 Engelke et al. Feb 1997 A
D379181 Sawano et al. May 1997 S
5649060 Ellozy et al. Jul 1997 A
5671267 August et al. Sep 1997 A
5680443 Kasday et al. Oct 1997 A
5687222 McLaughlin et al. Nov 1997 A
5701338 Leyen et al. Dec 1997 A
5710806 Lee et al. Jan 1998 A
5712901 Meermans Jan 1998 A
5724405 Engelke et al. Mar 1998 A
5745550 Eisdorfer et al. Apr 1998 A
5751338 Ludwig, Jr. May 1998 A
5787148 August Jul 1998 A
5799273 Mitchell et al. Aug 1998 A
5799279 Gould et al. Aug 1998 A
5809112 Ryan Sep 1998 A
5809425 Colwell et al. Sep 1998 A
5815196 Alshawi Sep 1998 A
5826102 Escobar et al. Oct 1998 A
5850627 Gould et al. Dec 1998 A
5855000 Waibel et al. Dec 1998 A
D405793 Engelke et al. Feb 1999 S
5867817 Catallo et al. Feb 1999 A
5870709 Bernstein Feb 1999 A
5883986 Kopec et al. Mar 1999 A
5893034 Hikuma et al. Apr 1999 A
5899976 Rozak May 1999 A
5905476 McLaughlin et al. May 1999 A
5909482 Engelke Jun 1999 A
5915379 Wallace et al. Jun 1999 A
5917888 Giuntoli Jun 1999 A
5926527 Jenkins et al. Jul 1999 A
5940475 Hansen Aug 1999 A
5974116 Engelke et al. Oct 1999 A
5978014 Martin et al. Nov 1999 A
5978654 Colwell et al. Nov 1999 A
5982853 Liebermann Nov 1999 A
5982861 Holloway et al. Nov 1999 A
5991291 Asai et al. Nov 1999 A
5991723 Duffin Nov 1999 A
5995590 Brunet et al. Nov 1999 A
6002749 Hansen et al. Dec 1999 A
6067516 Levay et al. May 2000 A
6072860 Kek et al. Jun 2000 A
6075534 VanBuskirk et al. Jun 2000 A
6075841 Engelke et al. Jun 2000 A
6075842 Engelke et al. Jun 2000 A
6100882 Sharman et al. Aug 2000 A
6122613 Baker Sep 2000 A
6141341 Jones et al. Oct 2000 A
6141415 Rao Oct 2000 A
6173259 Bijl et al. Jan 2001 B1
6175819 Van Alstine Jan 2001 B1
6181736 McLaughlin et al. Jan 2001 B1
6181778 Ohki et al. Jan 2001 B1
6188429 Martin et al. Feb 2001 B1
6233314 Engelke May 2001 B1
6243684 Stuart et al. Jun 2001 B1
6278772 Bowater et al. Aug 2001 B1
6298326 Feller Oct 2001 B1
6307921 Engelke et al. Oct 2001 B1
6314396 Monkowski Nov 2001 B1
6317716 Braida et al. Nov 2001 B1
6324507 Lewis et al. Nov 2001 B1
6345251 Jansson et al. Feb 2002 B1
6366882 Bijl et al. Apr 2002 B1
6374221 Haimi-Cohen Apr 2002 B1
6377925 Greene, Jr. et al. Apr 2002 B1
6381472 LaMedica, Jr. et al. Apr 2002 B1
6385582 Iwata May 2002 B1
6385586 Dietz May 2002 B1
6389114 Dowens et al. May 2002 B1
6424935 Taylor Jul 2002 B1
6430270 Cannon et al. Aug 2002 B1
6445799 Taenzer et al. Sep 2002 B1
6457031 Hanson Sep 2002 B1
6473778 Gibbon Oct 2002 B1
6493426 Engelke et al. Dec 2002 B2
6493447 Goss et al. Dec 2002 B1
6504910 Engelke et al. Jan 2003 B1
6507735 Baker et al. Jan 2003 B1
6510206 Engelke et al. Jan 2003 B2
6549611 Engelke et al. Apr 2003 B2
6549614 Zebryk et al. Apr 2003 B1
6567503 Engelke et al. May 2003 B2
6594346 Engelke Jul 2003 B2
6603835 Engelke et al. Aug 2003 B2
6625259 Hollatz et al. Sep 2003 B1
6633630 Owens et al. Oct 2003 B1
6661879 Schwartz et al. Dec 2003 B1
6668042 Michaelis Dec 2003 B2
6668044 Schwartz et al. Dec 2003 B1
6701162 Everett Mar 2004 B1
6704709 Kahn et al. Mar 2004 B1
6748053 Engelke et al. Jun 2004 B2
6763089 Feigenbaum Jul 2004 B2
6775360 Davidson et al. Aug 2004 B2
6778824 Wonak et al. Aug 2004 B2
6813603 Groner et al. Nov 2004 B1
6816468 Cruickshank Nov 2004 B1
6816469 Kung et al. Nov 2004 B1
6816834 Jaroker Nov 2004 B2
6831974 Watson et al. Dec 2004 B1
6850609 Schrage Feb 2005 B1
6865258 Polcyn Mar 2005 B1
6876967 Goto et al. Apr 2005 B2
6882707 Engelke et al. Apr 2005 B2
6885731 Engelke et al. Apr 2005 B2
6894346 Onose et al. May 2005 B2
6934366 Engelke et al. Aug 2005 B2
6934376 McLaughlin et al. Aug 2005 B1
6947896 Hanson Sep 2005 B2
6948066 Hind et al. Sep 2005 B2
6950500 Chaturvedi et al. Sep 2005 B1
6980953 Kanevsky et al. Dec 2005 B1
7003082 Engelke et al. Feb 2006 B2
7003463 Maes et al. Feb 2006 B1
7006604 Engelke Feb 2006 B2
7016479 Flathers et al. Mar 2006 B2
7016844 Othmer et al. Mar 2006 B2
7035383 ONeal Apr 2006 B2
7042718 Aoki et al. May 2006 B2
7088832 Cooper Aug 2006 B1
7117152 Mukherji et al. Oct 2006 B1
7117438 Wallace et al. Oct 2006 B2
7130790 Flanagan et al. Oct 2006 B1
7136478 Brand Nov 2006 B1
7142642 McClelland et al. Nov 2006 B2
7142643 Brooksby Nov 2006 B2
7164753 Engelke et al. Jan 2007 B2
7191135 O'Hagan Mar 2007 B2
7199787 Lee et al. Apr 2007 B2
7221405 Basson et al. May 2007 B2
7233655 Gailey et al. Jun 2007 B2
7287009 Liebermann Oct 2007 B1
7295663 McLaughlin et al. Nov 2007 B2
7313231 Reid Dec 2007 B2
7315612 McClelland Jan 2008 B2
7319740 Engelke et al. Jan 2008 B2
7330737 Mahini Feb 2008 B2
7346506 Lueck et al. Mar 2008 B2
7363006 Mooney Apr 2008 B2
7406413 Geppert et al. Jul 2008 B2
7428702 Cervantes et al. Sep 2008 B1
7430283 Steel, Jr. Sep 2008 B2
7480613 Kellner Jan 2009 B2
7519536 Maes et al. Apr 2009 B2
7555104 Engelke Jun 2009 B2
7573985 McClelland et al. Aug 2009 B2
7606718 Cloran Oct 2009 B2
7613610 Zimmerman et al. Nov 2009 B1
7660398 Engleke et al. Feb 2010 B2
7747434 Flanagan et al. Jun 2010 B2
7792701 Basson et al. Sep 2010 B2
7831429 O'Hagan Nov 2010 B2
7836412 Zimmerman Nov 2010 B1
7844454 Coles et al. Nov 2010 B2
7848358 LaDue Dec 2010 B2
7881441 Engelke et al. Feb 2011 B2
7904113 Ozluturk et al. Mar 2011 B2
7962339 Pieraccini et al. Jun 2011 B2
8019608 Carraux et al. Sep 2011 B2
8032383 Bhardwaj et al. Oct 2011 B1
8180639 Pieraccini et al. May 2012 B2
8213578 Engleke et al. Jul 2012 B2
8249878 Carraux et al. Aug 2012 B2
8259920 Abramson et al. Sep 2012 B2
8265671 Gould et al. Sep 2012 B2
8286071 Zimmerman et al. Oct 2012 B1
8325883 Schultz et al. Dec 2012 B2
8332212 Wittenstein et al. Dec 2012 B2
8332227 Maes et al. Dec 2012 B2
8335689 Wittenstein et al. Dec 2012 B2
8352883 Kashik et al. Jan 2013 B2
8369488 Sennett et al. Feb 2013 B2
8370142 Frankel et al. Feb 2013 B2
8379801 Romriell Feb 2013 B2
8407052 Hager Mar 2013 B2
8416925 Engelke et al. Apr 2013 B2
8447366 Ungari et al. May 2013 B2
8473003 Jung et al. Jun 2013 B2
8504372 Carraux et al. Aug 2013 B2
8526581 Charugundla Sep 2013 B2
8538324 Hardacker et al. Sep 2013 B2
8605682 Efrati et al. Dec 2013 B2
8626249 Ungari et al. Jan 2014 B2
8645136 Milstein Feb 2014 B2
8682672 Ha et al. Mar 2014 B1
8781510 Gould et al. Jul 2014 B2
8806455 Katz Aug 2014 B1
8867532 Wozniak et al. Oct 2014 B2
8868425 Maes et al. Oct 2014 B2
8874070 Basore et al. Oct 2014 B2
8892447 Srinivasan et al. Nov 2014 B1
8908838 Engelke et al. Dec 2014 B2
8917821 Engelke et al. Dec 2014 B2
8917822 Engelke et al. Dec 2014 B2
8930194 Newman et al. Jan 2015 B2
8972261 Milstein Mar 2015 B2
9069377 Wilson et al. Jun 2015 B2
9124716 Charugundla Sep 2015 B1
9161166 Johansson et al. Oct 2015 B2
9183843 Fanty et al. Nov 2015 B2
9185211 Roach et al. Nov 2015 B2
9191789 Pan Nov 2015 B2
9215406 Paripally et al. Dec 2015 B2
9215409 Montero et al. Dec 2015 B2
9218808 Milstein Dec 2015 B2
9231902 Brown et al. Jan 2016 B2
9245522 Hager Jan 2016 B2
9247052 Walton Jan 2016 B1
9277043 Bladon et al. Mar 2016 B1
9305552 Kim et al. Apr 2016 B2
9318110 Roe Apr 2016 B2
9324324 Knighton Apr 2016 B2
9336689 Romriell et al. May 2016 B2
9344562 Moore et al. May 2016 B2
9355611 Wang et al. May 2016 B1
9380150 Bullough et al. Jun 2016 B1
9392108 Milstein Jul 2016 B2
9460719 Antunes et al. Oct 2016 B1
9495964 Kim et al. Nov 2016 B2
9502033 Carraux et al. Nov 2016 B2
9535891 Raheja et al. Jan 2017 B2
9536567 Garland et al. Jan 2017 B2
9571638 Knighton et al. Feb 2017 B1
9576498 Zimmerman et al. Feb 2017 B1
9628620 Rae et al. Apr 2017 B1
9632997 Johnson et al. Apr 2017 B1
9633657 Svendsen et al. Apr 2017 B2
9633658 Milstein Apr 2017 B2
9633696 Miller et al. Apr 2017 B1
9653076 Kim May 2017 B2
9672825 Arslan et al. Jun 2017 B2
9704111 Antunes et al. Jul 2017 B1
9715876 Hager Jul 2017 B2
9761241 Maes et al. Sep 2017 B2
9774747 Garland et al. Sep 2017 B2
9805118 Ko et al. Oct 2017 B2
9858256 Hager Jan 2018 B2
9858929 Milstein Jan 2018 B2
9886956 Antunes et al. Feb 2018 B1
9916295 Crawford Mar 2018 B1
9947322 Kang et al. Apr 2018 B2
9953653 Newman et al. Apr 2018 B2
10032455 Newman et al. Jul 2018 B2
10044854 Rae et al. Aug 2018 B2
10049669 Newman et al. Aug 2018 B2
10051120 Engelke et al. Aug 2018 B2
10389876 Engelke et al. Aug 2019 B2
10469660 Engelke et al. Nov 2019 B2
10491746 Engelke et al. Nov 2019 B2
10574804 Bullough et al. Feb 2020 B2
10587751 Engelke et al. Mar 2020 B2
10742805 Engelke et al. Aug 2020 B2
11011157 Demoncourt May 2021 B2
11017778 Thomson et al. May 2021 B1
11170782 Stoker et al. Nov 2021 B2
11363141 Friio Jun 2022 B2
11368581 Engelke et al. Jun 2022 B2
20010005825 Engelke et al. Jun 2001 A1
20020007275 Goto et al. Jan 2002 A1
20020049589 Poirier Apr 2002 A1
20020055351 Elsey et al. May 2002 A1
20020085685 Engelke et al. Jul 2002 A1
20020085703 Proctor Jul 2002 A1
20020094800 Trop et al. Jul 2002 A1
20020101537 Basson et al. Aug 2002 A1
20020103008 Rahn et al. Aug 2002 A1
20020114429 Engelke et al. Aug 2002 A1
20020119800 Jaggers et al. Aug 2002 A1
20020161578 Saindon et al. Oct 2002 A1
20020178001 Balluff et al. Nov 2002 A1
20020178002 Boguraev et al. Nov 2002 A1
20020193076 Rogers et al. Dec 2002 A1
20030045329 Kinoshita Mar 2003 A1
20030063731 Woodring Apr 2003 A1
20030097262 Nelson May 2003 A1
20030212547 Engelke et al. Nov 2003 A1
20040066926 Brockbank et al. Apr 2004 A1
20040083105 Jaroker Apr 2004 A1
20040143430 Said et al. Jul 2004 A1
20050025290 Doherty et al. Feb 2005 A1
20050048992 Wu et al. Mar 2005 A1
20050049879 Audu et al. Mar 2005 A1
20050063520 Michaelis Mar 2005 A1
20050094776 Haldeman et al. May 2005 A1
20050094777 McClelland May 2005 A1
20050129185 McClelland et al. Jun 2005 A1
20050144012 Afrashteh et al. Jun 2005 A1
20050180553 Moore Aug 2005 A1
20050183109 Basson et al. Aug 2005 A1
20050225628 Antoniou Oct 2005 A1
20050226394 Engelke et al. Oct 2005 A1
20050226398 Bojeun Oct 2005 A1
20050232169 McLaughlin et al. Oct 2005 A1
20050277431 White Dec 2005 A1
20060026003 Carus et al. Feb 2006 A1
20060089857 Zimmerman et al. Apr 2006 A1
20060105712 Glass et al. May 2006 A1
20060133583 Brooksby Jun 2006 A1
20060140354 Engelke Jun 2006 A1
20060149558 Kahn et al. Jul 2006 A1
20060167686 Kahn Jul 2006 A1
20060172720 Islam et al. Aug 2006 A1
20060190249 Kahn et al. Aug 2006 A1
20060285652 McClelland et al. Dec 2006 A1
20060285662 Yin et al. Dec 2006 A1
20070011012 Yurick et al. Jan 2007 A1
20070024583 Gettemy et al. Feb 2007 A1
20070036282 Engelke et al. Feb 2007 A1
20070118373 Wise et al. May 2007 A1
20070126926 Miyamoto et al. Jun 2007 A1
20070153989 Howell et al. Jul 2007 A1
20070208570 Bhardwaj et al. Sep 2007 A1
20070282597 Cho et al. Dec 2007 A1
20080005440 Li et al. Jan 2008 A1
20080043936 Liebermann Feb 2008 A1
20080064326 Foster et al. Mar 2008 A1
20080129864 Stone et al. Jun 2008 A1
20080152093 Engelke et al. Jun 2008 A1
20080187108 Engelke et al. Aug 2008 A1
20080215323 Shaffer et al. Sep 2008 A1
20080319745 Caldwell et al. Dec 2008 A1
20090037171 McFarland et al. Feb 2009 A1
20090174759 Yeh et al. Jul 2009 A1
20090276215 Hager Nov 2009 A1
20090299743 Rogers Dec 2009 A1
20090306981 Cromack et al. Dec 2009 A1
20090326939 Toner et al. Dec 2009 A1
20100007711 Bell Jan 2010 A1
20100027765 Schultz et al. Feb 2010 A1
20100030738 Geer Feb 2010 A1
20100063815 Cloran et al. Mar 2010 A1
20100076752 Zweig et al. Mar 2010 A1
20100121629 Cohen May 2010 A1
20100141834 Cuttner Jun 2010 A1
20100145729 Katz Jun 2010 A1
20100228548 Liu et al. Sep 2010 A1
20100299131 Lanham et al. Nov 2010 A1
20100323728 Gould et al. Dec 2010 A1
20110013756 Davies et al. Jan 2011 A1
20110022387 Hager Jan 2011 A1
20110087491 Wittenstein et al. Apr 2011 A1
20110123003 Romriell et al. May 2011 A1
20110128953 Wozniak et al. Jun 2011 A1
20110170672 Engelke et al. Jul 2011 A1
20110231184 Kerr Sep 2011 A1
20110289134 de los Reyes et al. Nov 2011 A1
20120016671 Jaggi et al. Jan 2012 A1
20120022865 Milstein Jan 2012 A1
20120062791 Thakolsri et al. Mar 2012 A1
20120108196 Musgrove et al. May 2012 A1
20120178064 Katz Jul 2012 A1
20120214447 Russell et al. Aug 2012 A1
20120245936 Treglia Sep 2012 A1
20120250837 Engleke et al. Oct 2012 A1
20120284015 Drewes Nov 2012 A1
20130013904 Tran Jan 2013 A1
20130017800 Gouvia et al. Jan 2013 A1
20130045720 Madhavapeddl et al. Feb 2013 A1
20130086293 Bosse et al. Apr 2013 A1
20130171958 Goodson et al. Jul 2013 A1
20130219098 Turnpenny et al. Aug 2013 A1
20130254264 Hankinson et al. Sep 2013 A1
20130262563 Lu Oct 2013 A1
20130289971 Parkinson et al. Oct 2013 A1
20130308763 Engleke et al. Nov 2013 A1
20130317818 Bigham et al. Nov 2013 A1
20130331056 McKown et al. Dec 2013 A1
20130340003 Davis et al. Dec 2013 A1
20140018045 Tucker Jan 2014 A1
20140039871 Crawford Feb 2014 A1
20140099909 Daly et al. Apr 2014 A1
20140153705 Moore et al. Jun 2014 A1
20140180667 Johansson Jun 2014 A1
20140270101 Maxwell et al. Sep 2014 A1
20140314220 Charugundla Oct 2014 A1
20140341359 Engelke et al. Nov 2014 A1
20150032450 Hussain et al. Jan 2015 A1
20150073790 Steuble et al. Mar 2015 A1
20150088508 Bharadwaj Mar 2015 A1
20150094105 Pan Apr 2015 A1
20150106091 Wetjen et al. Apr 2015 A1
20150130887 Thelin et al. May 2015 A1
20150131786 Roach et al. May 2015 A1
20150279352 Willett et al. Oct 2015 A1
20150288815 Charugundla Oct 2015 A1
20150341486 Knighton Nov 2015 A1
20150358461 Klaban Dec 2015 A1
20160012751 Hirozawa Jan 2016 A1
20160119571 Ko Apr 2016 A1
20160133251 Kadirkamanathan et al. May 2016 A1
20160155435 Mohideen Jun 2016 A1
20160179831 Gruber et al. Jun 2016 A1
20160277709 Stringham et al. Sep 2016 A1
20160295293 McLaughlin Oct 2016 A1
20170085506 Gordon Mar 2017 A1
20170178182 Kuskey et al. Jun 2017 A1
20170187826 Russell et al. Jun 2017 A1
20170187876 Hayes et al. Jun 2017 A1
20170206808 Engelke et al. Jul 2017 A1
20180013886 Rae et al. Jan 2018 A1
20180081869 Hager Mar 2018 A1
20180102130 Holm et al. Apr 2018 A1
20180197545 Willett et al. Jul 2018 A1
20180270350 Engelke et al. Sep 2018 A1
20180315417 Flaks et al. Nov 2018 A1
20190108834 Nelson et al. Apr 2019 A1
20190295542 Huang et al. Sep 2019 A1
20200143820 Donofrio et al. May 2020 A1
20200364067 Accame et al. Nov 2020 A1
20210073468 Deshmukh et al. Mar 2021 A1
20210210115 Kothari et al. Jul 2021 A1
20210233530 Thomson et al. Jul 2021 A1
20220284904 Pu et al. Sep 2022 A1
20220319521 Liu Oct 2022 A1
Foreign Referenced Citations (53)
Number Date Country
2647097 Apr 1978 DE
2749923 May 1979 DE
3410619 Oct 1985 DE
3632233 Apr 1988 DE
10328884 Feb 2005 DE
0016281 Oct 1980 EP
0029246 May 1981 EP
0651372 May 1995 EP
0655158 May 1995 EP
0664636 Jul 1995 EP
0683483 Nov 1995 EP
1039733 Sep 2000 EP
1330046 Jul 2003 EP
1486949 Dec 2004 EP
2093974 Aug 2009 EP
2373016 Oct 2011 EP
2403697 Apr 1979 FR
2432805 Feb 1980 FR
2538978 Jul 1984 FR
2183880 Jun 1987 GB
2285895 Jul 1995 GB
2327173 Jan 1999 GB
2335109 Sep 1999 GB
2339363 Jan 2000 GB
2334177 Dec 2002 GB
S5544283 Mar 1980 JP
S5755649 Apr 1982 JP
S58134568 Aug 1983 JP
S60259058 Dec 1985 JP
S63198466 Aug 1988 JP
H04248596 Sep 1992 JP
20050004503 Dec 2005 KR
9323947 Nov 1993 WO
9405006 Mar 1994 WO
9500946 Jan 1995 WO
9519086 Jul 1995 WO
9750222 Dec 1997 WO
9839901 Sep 1998 WO
9913634 Mar 1999 WO
9952237 Oct 1999 WO
0049601 Aug 2000 WO
0155914 Aug 2001 WO
0158165 Aug 2001 WO
0180079 Oct 2001 WO
0225910 Mar 2002 WO
02077971 Oct 2002 WO
03026265 Mar 2003 WO
03030018 Apr 2003 WO
03071774 Aug 2003 WO
2005081511 Sep 2005 WO
2008053306 May 2008 WO
2015131028 Sep 2015 WO
2015148037 Oct 2015 WO
Non-Patent Literature Citations (228)
Entry
Choi, et al., Employing Speech Recognition Through Common Telephone Equipment, IBM Technical Disclosure Bulletin, Dec. 1995, pp. 355-356.
Choi, et al., Splitting and Routing Audio Signals in Systems with Speech Recognition, IBM Technical Disclosure Bulletin, Dec. 1995, 38(12):503-504.
Cook, A First Course in Digital Electronics, Published by Prentice-Hall, Inc., 1999, pp. 692-693.
Cooper, R. J., Break Feature for Half-Duplex Modem, IBM Technical Disclosure Bulletin, vol. 17, No. 8, pp. 2386-2387, Jan. 1975.
De Gennaro, et al., (Cellular) Telephone Steno Captioning Service, IBM Technical Disclosure Bulletin, Jul. 1992, pp. 346-349.
Goodrich, et al., Engineering Education for Students with Disabilities: Technology, Research and Support, In Frontiers in Education Conference, 1993, 23rd Annual Conference ‘Engineering Education: Renewing America's Technology’ Proceedings, IEEE, pp. 92-97.
Gopalakrishnan, Effective Set-Up for Performing Phone Conversations by the Hearing Impaired, IBM Technical Disclosure Bulletin, vol. 34, No. 7 8, pp. 423-426, 1991.
IBM, Software Verification of Microcode Transfer Using Cyclic Redundancy Code Algorithm, IBM Technical Disclosure Bulletin, Dec. 1988, 31(7):149-153.
IBM, Use of Cyclic Redundancy Code for Testing ROM and RAM in a Writeable Control Store, IBM Technical Disclosure Bulletin, Nov. 1990, 33(6A):219-220.
Karjalainen, et al., Applications for the Hearing-Impaired: Evaluation of Finnish Phoneme Recognition Methods, Eurospeech, 1997, 4 pages.
Kitai, et al., Trends of ASR and Its Applications in Japan, Third IEEE Workshop on Interactive Voice Technology for Telecommunications Applications, 1996, pp. 21-24.
Kukich, Spelling Correction for the Telecommunications Network for the Deaf, Communications of the ACM, 1992, 35(5):80-90.
Makhoul, et al., State of the Art in Continuous Speech Recognition, Proc. Natl. Acad. Sci. USA, 1995, 92:9956-9963.
Microchip Technology, Inc., MCRF250, Contactless Programmable Passive RFID Device With Anti-Collision, 1998, DS21267C, pp. 1-12.
Moskowitz, Telocator Alphanumeric Protocol, Version 1.8, Feb. 4, 1997.
Oberteuffer, Commercial Applications of Speech Interface Technology: An Industry at the Threshold, Proc. Natl. Acad. Sci. USA, 1995, 92:10007-10010.
Osman-Allu, Telecommunication Interfaces for Deaf People, IEE Colloquium on Special Needs and the Interface, IET, 1993, pp. 811-814.
Paul, et al., The Design for the Wall Street Journal-based CSR Corpus, Proceedings of the Workshop on Speech and Natural Language, Association for Computational Linguistics, 1992, pp. 357-362.
Rabiner, et al., Fundamentals of Speech Recognition, Copyright 1993 by AT&T, Published by Prentice Hall PTR, pp. 1, 6-9, 284-285, 482-488.
Rabiner, Applications of Speech Recognition in the Area of Telecommunications, IEEE Workshop on Automatic Speech Recognition and Understanding, IEEE, 1997, pp. 501-510.
Schmitt, et al., An Experimental Study of Synthesized Speech Intelligibility Using Text Created by Telecommunication Device for the Deaf (TDD) Users, IEEE Global Telecommunications Conference & Exhibition, 1990, pp. 996-999.
Scott, Understanding Cyclic Redundancy Check, ACI Technical Support, Technical Note 99-11, 1999, 13 pages.
Seltzer, et al., Expediting the Turnaround of Radiology Reports in a Teaching Hospital Setting, AJR, 1997, 168:889-893.
Smith, R. L., ASCII to Baudot, Radio Electronics, pp. 51-58, Mar. 1976.
Supnik, et al., Can You Hear Me?—DragonDictate for Windows Minces Words for Your Office, Originally Published in Computer Counselor Column of the May 1995 Issue of the Los Angeles Lawyer Magazine, http://www.supnik.com/voice.htm, accessed Aug. 7, 2012.
Vaseghi, Chapter 14: Echo Cancellation, Advanced Digital Signal Processing and Noise Reduction, Second Edition, John Wiley & Sons, Ltd., 2000, pp. 396-415.
Wactlar, et al., Informedia(TM): News-on-Demand Experiments in Speech Recognition, Proceedings of ARPA Speech Recognition Workshop, 1996, pp. 18-21.
Wegmann, Final Technical Report on Phase I SBIR Study on “Semi-Automated Speech Transcription System” at Dragon Systems, Advanced Research Projects Agency Order No. 5916, 1994, 21 pages.
Williams, A Painless Guide to CRC Error Detection Algorithms, 1993, 35 pages.
Yamamoto, et al., Special Session (New Developments in Voice Recognition) (Invited Presentation), New Applications of Voice Recognition, Proceedings of the Acoustical Society of Japan, Spring 1996 Research Presentation Conrerence, pp. 33-36.
Young, A Review of Large-Vocabulary Continuous-Speech Recognition, IEEE Signal Processing Magazine, 1996, pp. 45-57.
Cyclic Redundancy Check, Source: http://utopia.knoware.nl/users/eprebel/Communication/CRC/index.html, 1998, 4 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of U.S. Pat. No. 10,742,805, CaptionCall LLC v. Ultratec Inc., Case IPR2021-01337, Aug. 24, 2021, 65 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Declaration of Benedict J. Occhiogrosso, Re: U.S. Pat. No. 10,742,805, CaptionCall LLC v. Ultratec Inc., Case No. to be Assigned, Jul. 29, 2021, 83 pages.
Rodman, The Effect of Bandwidth on Speech Intelligibility, White Paper, Jan. 16, 2003, Copyright 2003 Polycom, Inc., 9 pages.
Declaration of Paul W. Ludwick, In Re: U.S. Pat. No. 6,594,346, Case IPR2013-00545, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 11, 2014.
Declaration of Paul W. Ludwick, In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00542 and IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014.
Declaration of Paul W. Ludwick, In Re: U.S. Pat. No. 7,319,740, Case IPR2013-00542 and IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014.
Declaration of Paul W. Ludwick, In Re: U.S. Pat. No. 6,233,314, Case IPR2013-00540, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014.
Declaration of Paul W. Ludwick, In Re: U.S. Pat. No. 5,909,482, Case IPR2013-00541, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014.
Declaration of Paul W. Ludwick, In Re: U.S. Pat. No. 8,213,578, Case IPR2013-00544, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014.
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014.
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 7,319,740, Case IPR2013-00542, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014.
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00545, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014.
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 6,594,346, Case IPR2013-00545, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014.
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 6,233,314, Case IPR2013-00540, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014.
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 5,909,482, Case IPR2013-00541, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014.
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 8,213,578, Case IPR2013-00544, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014.
Declaration of Kelby Brick, Esq., CDI, In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,003,082, Case IPR2013-00550, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,594,346, Case IPR2013-00545, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00549, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,319,740, Case IPR2013-00542, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,233,314, Case IPR2013-00540, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 5,909,482, Case IPR2013-00541, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,213,578, Case IPR2013-00544, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 7,003,082, Case IPR2013-00550, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 6,594,346, Case IPR2013-00545, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 8,213,578, Case IPR2013-00544, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 5,909,482, Case IPR2013-00541, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 7,319,740, Case IPR2013-00542, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 6,233,314, Case IPR2013-00540, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00549, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014.
Decision, CaptionCall's Request for Rehearing, In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00549, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Apr. 28, 2014.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1-29 of U.S. Pat. No. 8,917,822, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Jan. 29, 2015, 67 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Jun. 9, 2015, 66 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Instituting Review, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Sep. 8, 2015, 20 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Nov. 23, 2015, 65 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Contingent Motion to Amend, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Nov. 23, 2015, 39 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petitioner's Reply to Patent Owner Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Jan. 26, 2016, 29 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petitioner's Opposition to Patent Owner's Contingent Motion to Amend, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Jan. 26, 2016, 28 pages.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jan. 29, 2015, 65 pages.
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jan. 26, 2016, 60 pages.
Declaration of Ivan Zatkovich, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 108 pages.
Declaration of Paul Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 37 pages.
Declaration of Brenda Battat Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 61 pages.
Declaration of Katie Kretschman, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patenl and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 5 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1-30 of U.S. Pat. No. 8,908,838, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Jan. 29, 2015, 67 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Jun. 9, 2015, 65 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Instituting Review, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Sep. 8, 2015, 25 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Nov. 23, 2015, 65 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Contingent Motion to Amend, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Nov. 23, 2015, 38 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petitioner's Reply to Patent Owner Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Jan. 26, 2016, 29 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petitioner's Opposition to Patent Owner's Contingent Motion to Amend, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Jan. 26, 2016, 28 pages.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jan. 29, 2015, 62 pages.
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jan. 26, 2016, 62 pages.
Declaration of Ivan Zatkovich, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 110 pages.
Declaration of Paul Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 37 pages.
Declaration of Brenda Battat Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 24, 2015, 61 pages.
Declaration of Katie Kretschman, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patenl and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 5 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1-74 of U.S. Pat. No. 9,131,045, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01889, U.S. Pat. No. 9,131,045, Sep. 9, 2015, 66 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01889, U.S. Pat. No. 9,131,045, Dec. 18, 2015, 26 pages.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 9,131,045, Case IPR2015-01889, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Sep. 9, 2015, 63 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1-18 of U.S. Pat. No. 5,974,116, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01355, U.S. Pat. No. 5,974,116, Jun. 8, 2015, 65 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01355, U.S. Pat. No. 5,974,116, Sep. 18, 2015, 43 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Instituting Review, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01355, U.S. Pat. No. 5,974,116, Dec. 16, 2015, 34 pages.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 5,974,116, Case IPR2015-01355, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jun. 8, 2015, 45 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claim 1 of U.S. Pat. No. 6,934,366, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01357, U.S. Pat. No. 6,934,366, Jun. 8, 2015, 65 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01357, U.S. Pat. No. 6,934,366, Sep. 22, 2015, 37 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Instituting Review, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01357, U.S. Pat. No. 6,934,366, Dec. 18, 2015, 16 pages.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,934,366, Case IPR2015-01357, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jun. 8, 2015, 46 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claim 1 of U.S. Pat. No. 7,006,604, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01358, U.S. Pat. No. 7,006,604, Jun. 8, 2015, 65 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01358, U.S. Pat. No. 7,006,604, Sep. 22, 2015, 34 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Instituting Review, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01358, U.S. Pat. No. 7,006,604, Dec. 18, 2015, 12 pages.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,006,604, Case IPR2015-01358, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jun. 8, 2015, 45 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1-3 and 5-7 of U.S. Pat. No. 6,493,426, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01359, U.S. Pat. No. 6,493,426, Jun. 8, 2015, 65 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01359, U.S. Pat. No. 6,493,426, Sep. 22, 2015, 40 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Instituting Review, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01359, U.S. Pat. No. 6,493,426, Dec. 18, 2015, 17 pages.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,493,426, Case IPR2015-01359, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jun. 8, 2015, 47 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1-4 of U.S. Pat. No. 8,515,024, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01885, U.S. Pat. No. 8,515,024, Sep. 8, 2015, 35 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01885, U.S. Pat. No. 8,515,024, Dec. 17, 2015, 25 pages.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,515,024, Case IPR2015-01885, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Sep. 8, 2015, 23 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1, 3, 6, 9-11, 13, 15, 19-23, 25-27, 34, and 36-38 of U.S. Pat. No. 7,881,441, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01886, U.S. Pat. No. 7,881,441, Sep. 8, 2015, 61 pages.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,881,441, Case IPR2015-01886, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Sep. 8, 2015, 29 pages.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,003,082, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,603,835, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,233,314, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 5,909,482, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,319,740, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,594,346, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,555,104, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,213,578, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013.
Request for Rehearing Under 37 C.F.R. 42.71(d), In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00549, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Mar. 19, 2014.
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 6,594,346, Case IPR2013-00545, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014.
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 7,003,082, Case IPR2013-00550, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014.
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014.
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 7,319,740, Case IPR2013-00542, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014.
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00549, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014.
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 8,213,578, Case IPR2013-00544, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014.
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 5,909,482, Case IPR2013-00541, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014.
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 6,233,314, Case IPR2013-00540, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014.
Declaration of Brenda Battat, In Re: U.S. Pat. No. 8,213,578, Case IPR2013-00544, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 8, 2014.
Declaration of Constance Phelps, In Re: U.S. Pat. No. 6,233,314, Case IPR2013-00540, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 9, 2014.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,603,835, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 19, 2014.
Declaration of James A. Steel, Jr., In Re: U.S. Pat. No. 7,319,740, Case IPR2013-00542, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 10, 2014.
Declaration of James A. Steel, Jr., In Re: U.S. Pat. No. 7,003,082, Case IPR2013-00550, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 10, 2014.
Declaration of James A. Steel, Jr., In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00549, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 11, 2014.
Declaration of James A. Steel, Jr., In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00543, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014.
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1-30 of U.S. Pat. No. 8,908,838 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Jan. 29, 2015, 67 pages.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jan. 28, 2015, 62 pages.
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1-29 of U.S. Pat. No. 8,917,822 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Jan. 29, 2015, 67 pages.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jan. 28, 2015, 65 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2014-00780, U.S. Pat. No. 6,603,835, Dec. 4, 2014, 14 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner Response Under 37 C.F.R. 42.120, CaptionCall LLC v. Ultratec Inc., Case IPR2014-00780, U.S. Pat. No. 6,603,835, Feb. 11, 2015, 68 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00540, U.S. Pat. No. 6,233,314, Mar. 3, 2015, 55 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00541, U.S. Pat. No. 5,909,482, Mar. 3, 2015, 77 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00542, U.S. Pat. No. 7,319,740, Mar. 3, 2015, 31 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00543, U.S. Pat. No. 7,555,104, Mar. 3, 2015, 29 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00544, U.S. Pat. No. 8,213,578, Mar. 3, 2015, 56 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00545, U.S. Pat. No. 6,594,346, Mar. 3, 2015, 41 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00549, U.S. Pat. No. 6,603,835, Mar. 3, 2015, 35 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00550, U.S. Pat. No. 7,003,082, Mar. 3, 2015, 25 pages.
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2020-01215, U.S. Pat. No. 10,469,660, Jan. 27, 2021, 24 pages.
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petitioner's Request for Rehearing Pursuant to 37 C.F.R. 42.71(d), CaptionCall LLC v. Ultratec Inc., Case IPR2020-01215, U.S. Pat. No. 10,469,660, Feb. 18, 2021, 19 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petitioner's Reply to Patent Owner's Response, CaptionCall LLC v. Ultratec Inc., Case IPR2014-00780, U.S. Pat. No. 6,603,835, Apr. 20, 2015, 30 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2014-00780, U.S. Pat. No. 6,603,835, Dec. 1, 2015, 56 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2014-00780, U.S. Pat. No. 6,603,835, Dec. 31, 2015, 20 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00540, U.S. Pat. No. 6,233,314, Apr. 2, 2015, 19 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00540, U.S. Pat. No. 6,233,314, Dec. 1, 2015, 18 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00540, U.S. Pat. No. 6,233,314, Feb. 2, 2016, 19 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00541, U.S. Pat. No. 5,909,482, Apr. 2, 2015, 19 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00541, U.S. Pat. No. 5,909,482, Dec. 1, 2015, 18 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00541, U.S. Pat. No. 5,909,482, Feb. 2, 2016, 19 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00542, U.S. Pat. No. 7,319,740, Apr. 2, 2015, 19 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00542, U.S. Pat. No. 7,319,740, Dec. 1, 2015, 15 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00542, U.S. Pat. No. 7,319,740, Feb. 2, 2016, 12 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00543, U.S. Pat. No. 7,555,104, Apr. 2, 2015, 19 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00543, U.S. Pat. No. 7,555,104, Dec. 1, 2015, 15 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00543, U.S. Pat. No. 7,555,104, Feb. 2, 2016, 11 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00544, U.S. Pat. No. 8,213,578, Apr. 2, 2015, 19 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00544, U.S. Pat. No. 8,213,578, Dec. 1, 2015, 19 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00544, U.S. Pat. No. 8,213,578, Feb. 2, 2016, 11 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00545, U.S. Pat. No. 6,594,346, Apr. 2, 2015, 16 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00545, U.S. Pat. No. 6,594,346, Dec. 1, 2015, 15 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00545, U.S. Pat. No. 6,594,346, Feb. 2, 2016, 11 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00549, U.S. Pat. No. 6,603,835, Apr. 2, 2015, 19 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00549, U.S. Pat. No. 6,603,835, Dec. 1, 2015, 15 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00549, U.S. Pat. No. 6,603,835, Feb. 2, 2016, 11 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00550, U.S. Pat. No. 7,003,082, Apr. 2, 2015, 19 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00550, U.S. Pat. No. 7,003,082, Dec. 1, 2015, 10 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00550, U.S. Pat. No. 7,003,082, Feb. 2, 2016, 11 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2014-01287, U.S. Pat. No. 7,660,398, Feb. 12, 2015, 15 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2014-01287, U.S. Pat. No. 7,660,398, Mar. 13, 2015, 18 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2014-01287, U.S. Pat. No. 7,660,398, Nov. 5, 2015, 7 pages.
Petition for Inter Partes Review for U.S. Pat. No. 10,469,660, CaptionCall LLC v. Ultratec Inc., United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 1, 2020, 68 pages.
Declaration of Benedict J. Occhiogrosso for U.S. Pat. No. 10,469,660, CaptionCall LLC v. Ultratec Inc., United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jun. 23, 2020, 113 pages.
U.S. Appl. No. 60/562,795 to McLaughlin et al., filed Apr. 16, 2004, 126 pages.
Blackberry, Rim Introduces New Color BlackBerry Handheld for CDMA2000 1X Wireless Networks, BlackBerry Press Release, Mar. 22, 2004, 2 pages.
Blackberry Wireless Handheld User Guide, 7750, Mar. 16, 2004, 144 pages.
Federal Communications Commission, Telecommunication Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities, 68 Fed. Reg. 50973-50978 (Aug. 25, 2003).
PhoneDB, RIM BlackBerry 7750 Device Specs, Copyright 2006-2020 PhoneDB, 6 pages.
Phonesdata, Nokia 6620 Specs, Review, Opinions, Comparisons, Copyright 2020, 9 pages.
Sundgot, Nokia Unveils the 6600, InfoSync World, Jun. 16, 2003, 2 pages.
Wikipedia, Dell Axim, https://en.wikipedia.org/wiki/Dell_Axim, Last Edited on Feb. 23, 2020, 4 pages.
Wikipedia, Palm Tungsten, https://en.wikipedia.org/wiki/Palm_Tungsten, Last Edited on Oct. 6, 2019, 10 pages.
Final Written Decision, U.S. Pat. No. 9,131,045, Case IPR2015-01889, CaptionCall, LLC v. Ultratec, Inc., United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Apr. 11, 2017, 118 pages.
Judgment, U.S. Pat. No. 7,881,441, Case IPR2015-01886, CaptionCall, LLC v. Ultratec, Inc., United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jun. 9, 2016, 4 pages.
Petition for Inter Partes Review for U.S. Pat. No. 10,491,746, CaptionCall, LLC v. Ultratec, Inc., United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 1, 2020, 61 pages.
Declaration of Benedict J. Occhiogrosso for U.S. Pat. No. 10,491,746, CaptionCall, LLC v. Ultratec, Inc., United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jun. 23, 2020, 79 pages.
Arlinger, Negative Consequences of Uncorrected Hearing Loss—A Review, International Journal of Audiology, 2003, 42:2S17-2S20.
Petition for Inter Partes Review for U.S. Pat. No. 10,587,751, CaptionCall, LLC v. Ultratec, Inc., United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 1, 2020, 64 pages.
Declaration of Benedict J. Occhiogrosso for U.S. Pat. No. 10,587,751, CaptionCall, LLC v. Ultratec, Inc., United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jun. 23, 2020, 106 pages.
Curtis et al., Doctor-Patient Communication on the Telephone, Can Fam Physician, 1989, 35:123-128.
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1 and 2 of U.S. Pat. No. 7,555,104 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 65 pages.
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1 and 2 of U.S. Pat. No. 6,233,314 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 39 pages.
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1 and 2 of U.S. Pat. No. 6,594,346 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 67 pages.
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1-15 of U.S. Pat. No. 5,909,482 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 67 pages.
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 7-11 of U.S. Pat. No. 8,213,578 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 67 pages.
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1-8 of U.S. Pat. No. 6,603,835 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 66 pages.
CaptionCall L.L.C. Petition for Inter Partes Review of Claim 1 of U.S. Pat. No. 7,003,082 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 51 pages.
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1 and 2 of U.S. Pat. No. 7,319,740 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 67 pages.
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00550, U.S. Pat. No. 7,003,082 B2, Mar. 5, 2014, 13 pages.
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00543, U.S. Pat. No. 7,555,104, Mar. 5, 2014, 16 pages.
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00540, U.S. Pat. No. 6,233,314, Mar. 5, 2014, 17 pages.
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00545, U.S. Pat. No. 6,594,346, Mar. 5, 2014, 21 pages.
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00541, U.S. Pat. No. 5,909,482, Mar. 5, 2014, 32 pages.
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00544, U.S. Pat. No. 8,213,578, Mar. 5, 2014, 22 pages.
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00542, U.S. Pat. No. 7,319,740, Mar. 5, 2014, 17 pages.
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00549, U.S. Pat. No. 6,603,835 B2, Mar. 5, 2014, 26 pages.
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 6 and 8 of U.S. Pat. No. 6,603,835 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., May 19, 2014, 67 pages.
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 11-13 of U.S. Pat. No. 7,660,398 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 13, 2014, 64 pages.
Prosecution History of U.S. Pat. No. 7,660,398, 489 pages.
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,660,398, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 13, 2014, 62 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Granting Institution of Inter Partes Review of U.S. Pat. No. 10,587,751, CaptionCall LLC v. Ultratec Inc., Case IPR2020-01217, Jan. 27, 2021, 24 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Judgment Granting Request for Entry of Adverse Judgment After Institution of Trial, U.S. Pat. No. 10,587,751, CaptionCall LLC v. Ultratec Inc., Case IPR2020-01217, Apr. 28, 2021, 3 pages.
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Institution of Inter Partes Review, U.S. Pat. No. 10,491,746, CaptionCall LLC v. Ultratec Inc., Case IPR2020-01216, Jan. 27, 2021, 22 pages.
Related Publications (1)
Number Date Country
20200329139 A1 Oct 2020 US
Provisional Applications (1)
Number Date Country
61946072 Feb 2014 US
Continuations (2)
Number Date Country
Parent 15477958 Apr 2017 US
Child 16911691 US
Parent 14632257 Feb 2015 US
Child 15477958 US