Not applicable.
The present invention is related to communication enhancement systems and more specifically to captioning systems for assisting hard of hearing users (e.g., assisted users (AUs)) in understanding voice communications by remote hearing users where both users use communication devices.
The ability for people to communicate with each other is very important. While most normally hearing and seeing persons do not think much about their ability to perceive meaning in communications (e.g., written language, audible language, visual representations (e.g., expressions, gestures, etc.), etc.) initiated by other people, the communication process is fairly complex. To this end, think of what it takes for a first person to communicate a complex thought to a second person. First, the first person has to impart the complex thought to the second person in some way that can be perceived by the second person. For instance, in the case of written text, a clear transcript of the text which captures the first person's thought has to be provided to and visually perceivable by the second person. As another instance, in a case where the first person speaks to the second person, the first person's voice has to be clearly received and heard by the second person.
Second, the second person has to understand the meaning of each separate word received either in text or in sound. Thus, if the first person uses one or more words that the second person is unfamiliar with, the second person will be unable to perceive the first person's thought even if the second person correctly hears all the words uttered by the first person.
Third, the second person has to be able to stitch the first person's uttered words together to form a more complex concept corresponding to the sequence of uttered words.
Fourth, in many cases the more complex concept has to be modified in the second person's mind based on context. Here, context is typically based at least in part on prior thoughts communicated by the first person. For instance, where recent prior thoughts relate to politics, a new complex thought including the word “party” may be automatically associated with politics as opposed to a fun gathering (e.g., “politics” provides context for use of the word “party”). If the second person misperceived prior communications, the context is wrong and the intended thought can be misunderstood.
Fifth, to really understand a communication, in many cases the second person has to consider communication prosody. Here, the term “prosody” is used to refer to elements of speech that are not individual phonetic segments (vowels and consonants) but are properties of syllables and larger units of speech, including linguistic functions such as intonation, tone, stress, and rhythm. In this regard, a single phrase or word can have several different meanings or intentions where each is conveyed by the prosody associated with utterance of the phrase or word. For instance, consider the different prosody with which a person can utter the words “Yeah, right”, where different prosodies cause the phrase to reveal agreement or skepticism or to indicate a query.
When it comes to verbal communication capabilities, obviously deaf and hard of hearing persons (e.g., assisted users (AUs)) are disadvantaged as, by definition, they simply cannot consistently perceive clear voiced communications. In addition, because these people cannot hear well, often they cannot pick up on communication prosody and therefore are frequently left with no mechanism for perceiving communication related thereto. Moreover, for an AU that has some hearing capability, when the AU misperceives one or more audible communications (e.g., speech), those misperceptions typically wrongly skew understanding of subsequent communications (e.g., any misperceptions contextually affect understanding of subsequent communications).
Inability or compromised ability to hear has made it difficult and in some cases impossible for AUs to carry on conventional phone communications with hearing users (HUs). To overcome the problems that AUs have using conventional phone systems, an entire industry has evolved for supporting AUs during phone type voice communications. One particularly advantageous solution affords a captioned telephone device to an AU for providing captions of an HU's voice signal to the AU essentially in real time as an HU speaks. Here, while an HU is speaking, the AU reads HU voice signal captions when necessary to determine what the HU said.
In some cases, the captions are generated via an automated speech recognition (ASR) engine that feeds the HU voice signal to software that automatically converts that voice signal to captions. Here, initial captions may be erroneous and in many cases the ASR engine continues to consider caption words in context as more HU voice signal words are converted to text subsequent to any uttered word and makes caption error corrections on the fly. In other cases an HU voice signal is provided to a remote human call assistant (CA) who listens to the HU voice signal and generates CA captions and/or corrections which are transmitted back to the AU's captioned device to be presented as text while the HU speaks.
While existing captioning systems work well for quickly generating highly accurate verbatim HU voice signal captions, some AUs find it difficult to discern caption meanings in real time for several reasons. First, many aspects of spoken language result in captions that have characteristics different than the types of things people are accustomed to reading. To this end, published written language (e.g., books, articles in newspapers, magazines, online publications, etc.) typically includes well thought out and properly constructed sentences. In contrast, spoken language, especially in the context of a two or more person conversation, often includes speaking turns that comprise partial and incomplete phrases, single words or other non-word utterances (e.g., a grunt, a moan, a heavy breath, etc.). While people are generally comfortable reading well thought out written language (e.g., complete sentences), they simply are not used to reading short spoken language utterances and tend to have a hard time following the meaning of those utterances in real time.
Second, the difficulty of following partial phrase, single word and non-word utterances is exacerbated, in many cases, by generally discontinuous speech where a speaker may utter the beginning of one thought, then redirect verbally to a next thought only to redirect again to yet a third thought or back to the first thought. In these cases, captions corresponding to discontinuous thought are confusing and exacerbate the difficulty of perceiving meaning.
Third, lack of context or misperceived context, which often occurs when communicating with an AU, exacerbates confusion. Consider an HU voice caption “So that's it”. What does this phrase mean without a contextual understanding of what was previously conveyed? Existing captioning systems attempt to solve this problem by enabling an AU to scroll back through prior captions to read them and assess meaning. One problem here is that often, as described above, prior text reflects spoken language, not written language, and includes only the HU side of the conversation, not the AU's utterances, making it difficult to discern context after the fact. Another problem is that scrolling up requires additional AU activity while actively listening, which can be burdensome for some AUs, especially older AUs that may be uncomfortable with technology and multitasking while carrying on a conversation.
Fourth, without the ability to perceive prosody at times, AUs are at a substantial communication disadvantage that cannot be addressed via captioned devices.
In many cases while two people are talking, one or both are also performing some other parallel task or activity. For instance, in many cases as two people converse via a phone system, one or both of the people may be looking at their schedules to assess availability for a subsequent call or they may be adding tasks to “To Do” lists. Thus, for instance, an AU trying to listen to an HU voice may have to also read and perceive captions and simultaneously use scheduling software to set up a future appointment or add tasks to a list. In the case of an AU, especially an elderly AU that may be uncomfortable with technology in general, multitasking while communicating via voice and viewing captions can be difficult, exhausting and in many cases simply foregone.
In some embodiments, while an HU and an AU participate in a voice call, a system processor generates verbatim HU voice signal captions, identifies an HU's intended communication for each uttered phrase, and then converts the verbatim captions to enhanced captions that are presented to the AU via a display screen or the like. One advantage associated with enhanced captions is that the enhanced captions often result in better communications between an HU and an AU than possible with verbatim captions. For instance, in some cases enhanced captions can include word simplifications so that complex or confusing words are replaced by simpler and more clear words. As another instance, when an HU utters a long phrase, that phrase may be simplified and shortened to convey the HU's intended communication in a more precise and succinct manner. As one other instance, in some cases where an utterance is part of a larger topic of conversation, each phrase uttered by an HU may be presented in context to avoid confusion that can result from utterances consumed out of context.
Another advantage is that enhanced captions may speed up AU consumption of captions so that captioning services can be provided more rapidly resulting in better alignment between HU voice signal broadcast and caption presentation.
Another advantage is related to reducing the time required to bring captions in line with broadcast HU voice signals when captioning is delayed for some reason. In this regard, at times captions fall behind the HU voice signal broadcast so that the two types of communication are not synced, which can cause AU confusion (e.g., the AU hears one thing and sees captions that are unrelated (e.g., that correspond to a prior audio broadcast)). With verbatim captions, if captioning is behind by 35 words, the solution is to transcribe and present those 35 words to the AU as quickly as possible. Here, simply presenting 35 words in a block can be overwhelming and result in frustration and confusion. In the alternative, if the 35 words are presented progressively at a typical AU reading rate, catching up to the broadcast voice signal can take a long time. In the present case, by providing an abbreviated summary type caption instead of the verbatim caption, the HU's intended communication can be conveyed in a shorter time so that the caption time delay is reduced or substantially eliminated.
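By way of non-limiting illustration, the following Python sketch shows one simple way the catch-up decision described above could be made. The assumed reading rate, the backlog threshold and the summarize( ) placeholder are illustrative assumptions only and do not represent a required implementation.

```python
# Minimal sketch of the catch-up logic described above. The word-count
# threshold, the assumed reading rate, and the summarize() placeholder are
# illustrative assumptions only.

ASSUMED_READING_RATE_WPM = 180   # assumed AU reading rate, words per minute
BACKLOG_THRESHOLD_WORDS = 20     # assumed backlog at which summaries kick in


def summarize(verbatim_words):
    """Placeholder for a summary-type enhanced caption generator.

    A real system would use intended communication (IC) identification here;
    this placeholder merely truncates to keep the sketch self-contained.
    """
    return " ".join(verbatim_words[:max(3, len(verbatim_words) // 3)]) + " ..."


def choose_caption(pending_verbatim_words):
    """Return (caption to present, estimated seconds to read the backlog).

    When the backlog would take too long to read at the assumed reading rate,
    an abbreviated summary-type caption is presented instead of the verbatim
    backlog so presentation catches up to the voice broadcast.
    """
    backlog = len(pending_verbatim_words)
    seconds_to_read = backlog / (ASSUMED_READING_RATE_WPM / 60.0)
    if backlog > BACKLOG_THRESHOLD_WORDS:
        return summarize(pending_verbatim_words), seconds_to_read
    return " ".join(pending_verbatim_words), seconds_to_read


if __name__ == "__main__":
    words = ("so what I was trying to say earlier about the results of the "
             "test we ran last week is that everything looks fine").split()
    caption, eta = choose_caption(words)
    print(f"backlog={len(words)} words (~{eta:.0f}s to read): {caption}")
```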
In cases where summary type captions are presented to an AU instead of verbatim captions, another advantage is that error corrections required to captions presented to an AU can be reduced or eliminated. In this regard, while accuracy is important in cases where captions are purported to be verbatim, accuracy is less important in cases where captions are summaries. This is because a summary simply has to capture the communication intended by an HU and present that intended communication for consumption and most intended communications can be discerned despite many verbatim captioning errors. Thus, where an erroneous verbatim caption is generated but the errors do not affect an HU's intended communication, once a summary type enhanced caption is presented to an AU, even if the system identifies the verbatim error and corrects the error, if the correction does not change the HU's intended communication, there is no reason to modify the summary caption presented to the AU.
In a similar fashion, in cases where verbatim captions are presented to an AU, if one or more errors are identified in an initial presented caption but the HU's intended communication is the same between the initial presented caption and a corrected caption, the system may automatically skip error correction to the initial presented caption to avoid distracting the AU. In this case, where an error in a caption presented to an AU causes a discrepancy between what the caption conveys and what a corrected caption would convey, the system would automatically effect an error correction to bring the caption in line with the HU's intended communication.
One other form of caption enhancement is communication augmentation. In this regard, while an HU and an AU participate in a call, a system processor may be programmed to examine text captions for words and phrases of particular importance or interest and then seek out additional information to present to the AU to augment captions. For instance, augmented information may include definitions of complex words or words corresponding to acronyms. As another instance, augmented information may include information derived from Google or other types of internet searches related to caption text or phrases. As one example, if an HU utters the phrase “Green Bay Packers”, the system may automatically obtain and present the Packers' record from last year, current year schedule and a website link to content related to the Green Bay Packers. Many other forms of content augmentation are contemplated.
In some cases, in addition to or instead of generating enhanced captions, the system may also be programmed to initiate some type of supplemental activity associated with an ongoing HU-AU conversation. For instance, in a case where a physician requests that a patient AU schedule an appointment with an oncologist, the system may automatically access the AU's electronic calendar and identify one or more suitable time slots for scheduling the appointment. Here, the slot selection may be based solely on open or unscheduled time slots or it may be based on more sophisticated information like the time to travel to an oncologist's office location from the location at which the AU is scheduled to be located just prior to an open time slot as well as other information.
In some embodiments the disclosure includes a method for facilitating communication between an assisted user (AU) using an AU communication device including a display and a hearing user (HU) using an HU communication device, each communication device including a speaker and a microphone and the AU communication device also including a display screen, the method comprising the steps of receiving an HU voice signal as the AU and HU participate in a call using the AU and HU communication devices, respectively, transcribing HU voice signal segments into verbatim caption segments, processing each verbatim caption segment to identify an intended communication (IC) wherein the IC is the communication intended by the HU upon uttering an associated one of the HU voice signal segments, for at least a portion of the HU voice signal segments, (i) using an associated IC to generate an enhanced caption that is different than the associated verbatim caption, (ii) for each of a first subset of the HU voice signal segments, presenting the verbatim captions via the AU communication device display for consumption, and (iii) for each of a second subset of the HU voice signal segments, presenting enhanced captions via the AU communication device display for consumption.
In some cases, the step of transcribing includes using an automated speech recognition engine to convert the HU voice signal segments to verbatim caption segments. In some cases, at least a subset of the enhanced captions includes summary type enhanced segments. In some cases, at least a subset of the enhanced captions includes word simplification enhanced segments.
In some cases, at least a subset of the enhanced captions includes communication contextualization type enhanced segments. In some cases, the method further includes, for each enhanced segment, calculating a confidence factor (CF) indicating the likelihood that the segment reflects an HU's intended communication. In some embodiments the method further includes the step of, for each CF, comparing the CF to a threshold CF and, when the CF exceeds the threshold CF, presenting the enhanced caption associated with the CF via the AU communication device display.
In some cases, the method includes the step of, for each CF, when the CF is less than the threshold CF, presenting the verbatim caption associated with the CF via the AU communication device display. In some cases, the enhanced captions include first and second sets of enhanced captions and wherein the first set is of a type different than the second set.
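By way of non-limiting illustration, the confidence factor gating described above may be implemented along the lines of the following sketch. The 0.0-1.0 CF scale and the specific threshold value are illustrative assumptions only.

```python
# Minimal sketch of confidence-factor (CF) gating: the enhanced caption is
# presented only when its CF exceeds a threshold; otherwise the verbatim
# caption is presented. Scale and threshold are illustrative assumptions.

CF_THRESHOLD = 0.85  # assumed threshold confidence factor


def select_caption(verbatim_caption, enhanced_caption, confidence_factor):
    """Return the caption to present based on the enhanced caption's CF."""
    if confidence_factor > CF_THRESHOLD:
        return enhanced_caption
    return verbatim_caption


if __name__ == "__main__":
    verbatim = "Um, so, yeah, we, uh, found a blockage in the artery"
    enhanced = "We found a blockage in the artery"
    print(select_caption(verbatim, enhanced, 0.92))  # enhanced caption shown
    print(select_caption(verbatim, enhanced, 0.60))  # verbatim caption shown
```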
In some cases, the method includes visually distinguishing the enhanced captions from the verbatim captions on the display. In some cases, the method includes presenting the verbatim captions in one column on the display and presenting the enhanced captions in a second column on the display. In some cases, the method includes presenting a user interface to the AU and receiving commands via the interface selecting one of verbatim and enhanced captions, the method including presenting verbatim captions when the verbatim option is selected and presenting enhanced captions when the enhanced option is selected. In some cases, the interface also enables the AU to select a third option for presenting each of verbatim captions and enhanced captions.
Other embodiments include a method for facilitating communication between an assisted user (AU) using an AU communication device including a display and a hearing user (HU) using an HU communication device, each communication device including a speaker and a microphone and the AU communication device also including a display screen, the method comprising the steps of receiving an HU voice signal as the AU and HU participate in a call using the AU and HU communication devices, respectively, transcribing HU voice signal segments into verbatim caption segments, presenting each verbatim caption segment via the AU communication device display for consumption, processing at least a subset of the verbatim caption segments to identify an intended communication (IC) wherein the IC is the communication intended by the HU upon uttering an associated one of the HU voice signal segments, for each of at least a subset of the HU voice signal segments, using an associated IC to generate an enhanced caption that is different than the associated verbatim caption, and presenting at least a subset of the enhanced captions via the AU communication device display for consumption.
In some cases, an IC is identified for each of the verbatim caption segments. In some cases, the verbatim caption segments are presented in a first vertical column and the enhanced caption segments are presented in a second vertical column. In some cases, each enhanced caption includes a summary type caption that has the same meaning as an associated verbatim caption. In some embodiments the method further includes broadcasting the HU voice signal to the AU via a speaker. In some cases, each of the enhanced captions includes a summary type enhanced caption.
Still other embodiments include a method for facilitating communication between an assisted user (AU) using an AU communication device including a display and a hearing user (HU) using an HU communication device, each communication device including a speaker and a microphone and the AU communication device also including a display screen, the method comprising the steps of receiving an HU voice signal as the AU and HU participate in a call using the AU and HU communication devices, respectively, transcribing HU voice signal segments into verbatim caption segments, presenting each verbatim caption segment via the AU communication device display for consumption, for each verbatim caption segment, (i) processing the verbatim caption segment to identify an intended communication (IC) wherein the IC is the communication intended by the HU upon uttering an associated one of the HU voice signal segments, (ii) using the IC to generate an enhanced caption segment that is different than the associated verbatim caption segment, (iii) automatically selecting one of the verbatim caption segment and the enhanced caption segment; and (iv) presenting the selected one of the caption segments on the AU device display screen for consumption.
The various aspects of the subject disclosure are now described with reference to the drawings, wherein like reference numerals correspond to similar elements throughout the several views. It should be understood, however, that the drawings and detailed description hereafter relating thereto are not intended to limit the claimed subject matter to the particular form disclosed. Rather, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the claimed subject matter.
In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration, specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the disclosure. It should be understood, however, that the detailed description and the specific examples, while indicating examples of embodiments of the disclosure, are given by way of illustration only and not by way of limitation. From this disclosure, various substitutions, modifications, additions, rearrangements, or combinations thereof within the scope of the disclosure may be made and will become apparent to those of ordinary skill in the art.
In accordance with common practice, the various features illustrated in the drawings may not be drawn to scale. The illustrations presented herein are not meant to be actual views of any particular method, device, or system, but are merely idealized representations that are employed to describe various embodiments of the disclosure. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may be simplified for clarity. Thus, the drawings may not depict all of the components of a given apparatus (e.g., device) or method. In addition, like reference numerals may be used to denote like features throughout the specification and figures.
Information and signals described herein may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof. Some drawings may illustrate signals as a single signal for clarity of presentation and description. It will be understood by a person of ordinary skill in the art that the signal may represent a bus of signals, wherein the bus may have a variety of bit widths and the disclosure may be implemented on any number of data signals including a single data signal.
The various illustrative logical blocks, modules, circuits, and algorithm acts described in connection with embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and acts are described generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments of the disclosure described herein.
In addition, it is noted that the embodiments may be described in terms of a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe operational acts as a sequential process, many of these acts can be performed in another sequence, in parallel, or substantially concurrently. In addition, the order of the acts may be re-arranged. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. Furthermore, the methods disclosed herein may be implemented in hardware, software, or both. If implemented in software, the functions may be stored or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
It should be understood that any reference to an element herein using a designation such as “first,” “second,” and so forth does not limit the quantity or order of those elements, unless such limitation is explicitly stated. Rather, these designations may be used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. Also, unless stated otherwise a set of elements may comprise one or more elements.
As used herein, the terms “component,” “system” and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computer and the computer can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers or processors.
The word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs.
Furthermore, the disclosed subject matter may be implemented as a system, method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer or processor based device to implement aspects detailed herein. The term “article of manufacture” (or alternatively, “computer program product”) as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick). Additionally, it should be appreciated that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN). Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
Referring now to the drawings wherein like reference numerals correspond to similar elements throughout the several views and, more specifically, referring to
In at least some cases the AU's communication device includes a captioned phone device 20 that includes, among other things, a base assembly including a display screen 24 for visual output, a keypad 22 for receiving tactile input and a handset 26 that includes a microphone (not shown) and a speaker 18 for capturing the AU's voice signal to be transmitted to the HU using device 12 and for broadcasting the HU's voice signal for the AU to hear, respectively. Keypad 22 is typically a 0 through 9 numerical type pad (e.g., akin to the pads on a conventional telephone device) suitable for entering a phone number of an HU's phone device 12 or other HU phone numbers but may also include a full QWERTY board for entering other information (e.g., names, addresses and other information for an electronic contacts file, etc.). Device 20 may also include a touch pad 23 or a hardware button 25 that are purposefully provided in a location that is easily accessible to an AU using the device 20 as described in more detail below so that the AU does not become fatigued during device use from having to stretch out and select on screen icons on display 24 in at least some operating modes.
To further facilitate easy access, a remote control device 29 (e.g., dedicated to the system or an application program loaded onto a smart phone or other portable computing device) may be provided so that an AU can hold a device in her hand with her arm in a comfortable position while still wirelessly interacting with captioned phone device 20 via an interface including virtual buttons on device 29 or via a touch sensitive screen on device 29 that can be used to move a cursor around on the device screen 24 for selecting control icons and other information presented on screen 24. Although not shown, the keyboard and other input devices may also be provided as virtual buttons or the like that are touch selectable on display screen 24. Although not shown, in at least some embodiments, device 20 may cooperate with other input or output devices for facilitating enhanced audio and visual communications such as, for instance, larger display screens, more sensitive microphones and/or higher fidelity speakers included in a tablet or phone type device or other standalone devices, earbuds, a headphone set, etc., where linkages to those devices are wired or wireless (e.g., Bluetooth, 802.11b, NFC, or other wireless protocols) and are automatic (e.g., via automatic wireless pairing) or manual based on some user input to initiate a linkage.
Referring still to
Relay 14 includes, among other things, a relay server or other computing/processing device 32 that is linked to an electronic storage memory device or system 40 for accessing software engines, modules or computer programs stored thereon as well as to access and update various types of operational information and user preferences described hereafter. In some cases, server 32 may include more than one networked server or processor that cooperate to perform various functions and processes. In at least some cases one or more call assistant (CA) workstations 31 may be linked to server 32 for participating in captioning sessions as described hereafter.
Referring still to
In other cases, the system may be configured to tune one or more ASR engines during operation to quickly and accurately caption specific speakers' voice signals. For instance, during an AU-HU call, as an ASR engine 33 is used to generate HU voice signal captions or text, server 32 may compare the generated text to “true” text (e.g., accurate text for the voice signal) and then, based on the comparison, tune ASR engine parameters to increase caption accuracy and speed metrics over time. For instance, in some cases, as an ASR engine uses context from surrounding words in a caption hypothesis generated by the engine, the engine will correct word hypotheses automatically until the hypothesis no longer changes and, in that case, the final hypothesis will be treated as “true” text for engine training purposes. As another instance, in some cases a human call assistant (CA) associated with relay 14 may use workstation 31 to manually correct automated caption errors (e.g., listen to an HU voice signal and compare what was voiced to the ASR text hypothesis to identify and correct errors) and the corrected text may be treated as “true” text for engine training purposes. True text is used to modify engine filters so that prior voice signal-to-text errors for a specific user voice signal are substantially less likely to occur subsequently. Similar training may be performed on an AU voice signal during one or more conversations with one or several HUs.
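The following sketch illustrates, by way of example only, one simple way the hypothesis versus “true” text comparison could be used to accumulate speaker-specific correction statistics. The word-by-word alignment and the Counter-based profile structure are simplifying assumptions; actual ASR tuning parameters are engine specific and are not modeled here.

```python
# Sketch: compare ASR hypotheses against "true" text (a stabilized hypothesis
# or CA-corrected text) and accumulate speaker-specific substitution counts
# that could later be used to bias recognition. Illustrative assumptions only.

from collections import Counter, defaultdict


def word_errors(hypothesis, truth):
    """Yield (hypothesis_word, true_word) pairs that disagree.

    For simplicity this sketch aligns word-by-word; a real system would use
    an edit-distance alignment.
    """
    for hyp, ref in zip(hypothesis.lower().split(), truth.lower().split()):
        if hyp != ref:
            yield hyp, ref


def update_speaker_profile(profile, hypothesis, truth):
    """Accumulate per-speaker correction counts from one caption segment."""
    for hyp, ref in word_errors(hypothesis, truth):
        profile[hyp][ref] += 1


if __name__ == "__main__":
    profile = defaultdict(Counter)   # speaker-specific correction statistics
    update_speaker_profile(profile, "I sent the male yesterday",
                           "I sent the mail yesterday")
    update_speaker_profile(profile, "check the male box",
                           "check the mail box")
    # Most likely intended word for a frequently misrecognized word:
    print(profile["male"].most_common(1))   # [('mail', 2)]
```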
Once an ASR engine has been tuned for a specific HU's voice signal, ASR tuning parameters for that voice signal may be stored for use during subsequent calls with that HU so that the training process does not have to be repeated or can be expedited at the beginning of subsequent calls with that specific HU. In addition to storing HU specific ASR tuning parameter sets for specific HU voice signals, in at least some cases the system will also generate and store HU specific voice profiles for each HU voice signal that are useable to identify a current HU voice signal so that associated HU specific tuning parameters can be identified at the beginning of any subsequent call. For instance, where an AU routinely converses with 10 different HUs, during initial calls with each HU, the system may identify 20 different voice characteristics of the HU's voice signal that can be used to distinguish one HU voice signal from others and those 20 characteristics are then stored as an HU specific voice profile for that specific HU. Here, HU specific ASR tuning parameters are also stored with the HU specific voice profile. At the beginning of any subsequent AU-HU voice call, the system processes an HU voice signal and uses characteristics thereof to identify an associated HU voice profile and ultimately the HU specific ASR tuning parameter set for the speaking HU.
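One simple, non-limiting way to match a current caller against stored voice profiles is sketched below. The four-element feature vectors (standing in for the 20 characteristics mentioned above), the distance measure and the match threshold are illustrative assumptions.

```python
# Sketch of HU voice-profile matching: each known HU is represented by a
# stored vector of voice characteristics and the current caller is matched
# to the nearest stored profile so that the associated ASR tuning parameters
# can be loaded. Feature vectors and threshold are illustrative assumptions.

import math

STORED_PROFILES = {
    # hu_id: (voice characteristic vector, tuning parameter set identifier)
    "hu_smith": ([0.12, 0.80, 0.43, 0.91], "tuning_params_smith"),
    "hu_jones": ([0.75, 0.22, 0.10, 0.35], "tuning_params_jones"),
}

MATCH_THRESHOLD = 0.5  # assumed maximum distance for a confident match


def match_profile(observed_features):
    """Return (hu_id, tuning_params) for the closest stored profile, or None."""
    best = None
    for hu_id, (features, params) in STORED_PROFILES.items():
        dist = math.dist(observed_features, features)
        if best is None or dist < best[0]:
            best = (dist, hu_id, params)
    if best and best[0] <= MATCH_THRESHOLD:
        return best[1], best[2]
    return None  # unknown caller; fall back to a generic ASR configuration


if __name__ == "__main__":
    print(match_profile([0.10, 0.82, 0.45, 0.90]))  # ('hu_smith', ...)
    print(match_profile([0.50, 0.50, 0.50, 0.50]))  # None (no confident match)
```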
In at least some cases the task of identifying the right ASR tuning parameter set for an HU's voice signal is expedited by associating a phone number or other calling address of an HU's device 20 (see again
In other cases, an HU phone or other device 12 may automatically provide an ASR tuning parameter set to the relay 14 when a call or captioning service commences so that there is no need to identify an HU's voice profile and instead the tuning parameters provided by device 12 can simply be used to customize one or more of the ASR engines. Similarly, an AU device may store an AU's ASR tuning parameter set and provide that dataset to relay 14 at the beginning of a call for use in tuning an ASR engine 33 to increase captioning accuracy of the AU voice signal where required by the system.
While both HU and AU voice signals may be captioned during AU-HU communications in at least some embodiments, unless indicated otherwise hereafter, the systems in the present disclosure will be described in the context of processes wherein only HU voice signals are transcribed to text which is used to communicate with an AU in some fashion.
Referring still to
When captioning is enabled or initiated, in some cases a second communication link 19 (e.g., phone line, Internet connection, etc.) is established between captioned device 20 and relay server 32 and, in at least some embodiments, in addition to broadcasting the HU voice signal via speaker 18 to the AU, captioned device 20 also transmits the HU voice signal to server 32 for captioning. In other cases when captioning is initiated, HU device 12 may be controlled to send the HU voice signal directly to relay 14 for captioning. In any event, the HU voice signal is sent in some fashion to the relay or whatever system processor transcribes voice to text for captioning. In cases where the AU voice signal is also captioned, that signal is also transmitted to the relay or other captioning processor operating within the system.
Server 32 performs any of several different processes to generate verbatim caption text (e.g., captions that are essentially (e.g., but for captioning errors) word for word accurate when compared to HU utterances) corresponding to the HU voice signal. For instance, server 32 may use an HU's calling address and an associated HU voice profile to locate an ASR tuning parameter set for the specific HU and may then use that tuning parameter set to tune an ASR engine for optimized verbatim transcription. In other cases, a CA using workstation 31 (
Referring still to
In the case of word simplification, the idea is that a complex word or phrase in HU voice captions is replaced by a simplified word or phrase that has essentially the same meaning. As some instances, in a phrase uttered by an HU, the word “perplexing” may be replaced by the phrase “difficult to understand”, the term “exultant” may be replaced by the word “happy” and the word “staid” may be replaced by the word “serious”. As other instances, where an HU uses an acronym, the system may automatically replace the acronym with the words associated therewith or may add the associated words in brackets after the acronym within a caption phrase.
Here, it is contemplated that enhancement module 36 (
Language swap enhancements may occur when an AU speaks primary and secondary languages and a word or phrase in the AU's secondary language has several different meanings that each correspond to different words in the AU's primary language. For instance, assume an AU's primary and secondary languages are Spanish and English, respectively. Here, if an HU voice caption includes the phrase “Are you going to join the party?”, the word “party” could be confusing as it could mean a fun gathering or a political party. In Spanish, the words “fiesta” and “partido” clearly mean a fun gathering and a political party, respectively. Thus, where context from the HU and AU voice communications can be used to determine that the HU meant a fun gathering when the word “party” was uttered, the system may automatically swap in the word “fiesta” for “party” so that enhanced caption text reads “Are you going to join the fiesta?” Here, again, the system may maintain a language swap reference database accessible and useable for identifying when words should be replaced to increase likelihood of AU understanding an intended communication by an HU.
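By way of non-limiting illustration, the language swap just described could be implemented along the lines of the following sketch. The reference table and the context keyword sets are illustrative assumptions; a deployed system would draw on a much larger language swap reference database as described above.

```python
# Sketch of the language-swap enhancement: when a word is ambiguous in the
# AU's secondary language, context words from the recent conversation are
# used to pick the primary-language word with the intended sense.
# The table and keyword sets below are illustrative assumptions only.

LANGUAGE_SWAP_TABLE = {
    # ambiguous English word -> list of (Spanish sense, context keywords)
    "party": [
        ("fiesta", {"birthday", "dance", "fun", "celebrate", "music"}),
        ("partido", {"election", "vote", "candidate", "politics"}),
    ],
}


def swap_ambiguous_words(caption, recent_context):
    """Replace ambiguous words with the context-appropriate sense."""
    context_words = set(recent_context.lower().split())
    out = []
    for word in caption.split():
        senses = LANGUAGE_SWAP_TABLE.get(word.lower().strip("?,."))
        if senses:
            # Pick the sense whose context keywords overlap the conversation.
            best = max(senses, key=lambda s: len(s[1] & context_words))
            if best[1] & context_words:
                word = best[0] + word[len(word.rstrip("?,.")):]  # keep punctuation
        out.append(word)
    return " ".join(out)


if __name__ == "__main__":
    context = "We are going to celebrate her birthday with music and a dance"
    print(swap_ambiguous_words("Are you going to join the party?", context))
    # -> "Are you going to join the fiesta?"
```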
Filler word removal enhancements simply mean that when a sound, word or phrase includes an utterance that adds no discernable meaning to a larger phrase, any caption corresponding to the word, phrase or sound is removed from the HU voice captions. So, for instance, if an HU utters the sounds, “Um, ah, Hmmmmm, . . . ” in the middle of a phrase, captions corresponding thereto can be removed thereby shortening the length of the overall captions presented to the AU. Here, in addition to eliminating meaningless captions, the amount of meaningful captions that can be presented on the display screen 24 of limited size can be increased substantially in many cases. In addition, meaningful captions can persist on a display for relatively longer durations when filler words that have no meaning are not presented.
To assess when utterances have no effect on the meaning of larger including phrases, the system may maintain a list of hundreds of utterance captions that typically have no bearing on the meanings of phrases that include those utterances and may, at least initially, remove those captioned utterances from larger including phrases. In at least some cases when an utterance caption is initially removed from an including phrase, the system may perform parallel processes on the including phrase with and without the removed utterance to assess the HU's intended communications for each and to determine whether removal of the utterance caption has any effect on caption meaning. In the rare case where a removed caption did affect the ultimate meaning of an including phrase, the system may automatically go back in the caption presented to the AU and modify the caption to include the initially removed utterance caption to change meaning. Any corrections in captions presented to an AU may be in line and may be highlighted or otherwise visually distinguished so that the AU can perceive that a caption meaning has changed based on a caption correction.
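A minimal, non-limiting sketch of filler word removal follows. The short filler list and the placeholder meaning check are illustrative assumptions; a real system would compare identified intended communications with and without the removed utterances as described above.

```python
# Sketch of filler-word removal: utterance captions on a stored filler list
# are dropped from the larger phrase, and a placeholder check stands in for
# the parallel meaning comparison described above. Illustrative assumptions.

FILLER_WORDS = {"um", "uh", "ah", "er", "hmm", "hmmm", "mmm"}


def remove_fillers(caption_words):
    """Return (shortened_words, removed_words)."""
    kept = [w for w in caption_words if w.lower().strip(",.") not in FILLER_WORDS]
    removed = [w for w in caption_words if w.lower().strip(",.") in FILLER_WORDS]
    return kept, removed


def meaning_changed(with_fillers, without_fillers):
    """Placeholder meaning comparison.

    A real system would run IC identification on both versions and compare
    the identified intended communications; this sketch only compares the
    non-filler content, so it never reports a change.
    """
    return [w for w in with_fillers
            if w.lower().strip(",.") not in FILLER_WORDS] != without_fillers


if __name__ == "__main__":
    words = "Um, ah, I think, hmm, the test results look fine".split()
    shortened, removed = remove_fillers(words)
    print(" ".join(shortened))                # "I think, the test results look fine"
    print(meaning_changed(words, shortened))  # False
```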
Meaning summary enhancements occur when enhancement module 36 determines the meaning of a phrase and summarizes that phrase in a shorter, more succinct and ideally more quickly digestible form. In this regard, software programs (e.g., Dialogflow from Google, LEX from Amazon, etc.) are well known that can be trained to process verbatim captions and identify actual meaning of the verbatim captions. Once an intended meaning has been identified, the system can often summarize that meaning in a shorter and clearer manner than the way the information was expressed by a speaker and the summary can be provided instead of or in addition to the verbatim caption to increase consumption speed as well as understanding of the intended communication.
Communication contextualization is used to generate contextualized captions so that a caption or communication and its context can be fully understood on its own substantially independent of other prior communications. To this end, during a typical conversation, participants typically serially communicate about a small set of general topics and, while talking about each separate general topic, talk about more detailed sub-topics or points of interest related to the general topic. Here, instead of prefacing every specific sub-topic or comment with some type of association to one of the general topics, participants typically assume that utterances are contextually associated with a persistent general topic until there is some communication cue that changes the general topic of conversation. Thus, for instance, during the beginning of a conversation, an HU physician and an AU patient may talk initially and briefly about recent weather, then about last weekend's football game, then about test results for a recent patient test and finally about scheduling an appointment with an oncologist. Here, there are four general topics including the weather, a recent game, test results and scheduling a new appointment. While discussing one of the general topics (e.g., weather), the doctor and patient would assume all subtopics discussed to be associated with that one general topic. Thus, a physician's comment that “It was really a mess” during a portion of the conversation that occurs while discussing the weather would be associated with the weather and not the game, the test results or the appointment scheduling topics to follow. Similarly, a physician's comment that “We found a blockage” during a portion of the conversation that occurs while discussing test results would be associated with the test results and not the weather, the game or subsequent appointment topics.
Communication contextualization is used by the system to add context to at least some utterances so that if an AU is confused as to context, display screen 24 can be referenced to quickly assess context and better understand either a verbatim caption or some type of enhanced caption. For instance, in the above example where a doctor utters “It was really a mess”, the system may automatically present the caption “The weather was really a mess” and, where the doctor utters “We found a blockage”, the system may automatically present the caption “Reviewing the test results we located a blockage”, to provide context to each of those utterances so that each utterance is understandable on its own. Context may be provided in many other ways, some of which are described hereafter.
In some cases, contextualization may only be applied to specific topics. For instance, in the case of a physician that routinely discusses test results, diagnosis, prescriptions and next appointments with patients, the system may be programmed for that physician to attempt to identify each of those general topics of conversation and to contextualize only utterances related to those topics as opposed to any of thousands of topics that an HU and AU may discuss. Here, by limiting the number of topics of conversation supported by the system for a specific HU or AU or generally, the ability for the system to clearly and accurately associate utterances with specific topics of conversation is enhanced appreciably.
Cases where contextualization is based on prior HU utterances are examples of multi-utterance enhanced captions where an ultimate enhanced caption associated with a specific HU utterance is modified, typically to be more informative, based on prior HU utterances during a call. Thus, again, a verbatim utterance “We found a blockage” may be enhanced and changed to “Reviewing the test results we located a blockage” based on prior HU utterances that are usable to determine the current topic of conversation related to the verbatim utterance. Thus, in some cases, server 32 uses the captions to assess instantaneous conversation contexts as well as to identify changes between contexts during a conversation. To this end, for instance, server 32 may be programmed to recognize typical context changing phrases such as “Let's turn to . . . ” or “Why don't we talk about your . . . ”, etc. In some cases, server 32 is programmed to simply assume a new context when one or a threshold number of uttered phrases have nothing to do with a prior context and/or are related to a second topic that is different from a prior topic. For instance, where three consecutive phrases are football related and have nothing to do with weather, server 32 may be programmed to determine that the context changed from weather to football and may automatically change captions previously presented to an AU so that the modified text reflects the new topic of conversation (e.g., football).
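By way of non-limiting illustration, the topic tracking and threshold-based topic switching described above could be sketched as follows. The keyword lists, the three-utterance threshold and the bracketed topic prefix are illustrative assumptions; a deployed system would use richer intended communication analysis rather than keyword overlap.

```python
# Sketch of topic tracking for contextualization: the current general topic
# persists until a threshold number of consecutive utterances match a
# different topic, and each presented caption is prefixed with the current
# topic so it stands on its own. Illustrative assumptions throughout.

TOPIC_KEYWORDS = {
    "weather": {"rain", "snow", "storm", "sunny", "forecast", "weather"},
    "test results": {"test", "results", "blockage", "scan", "lab"},
    "scheduling": {"appointment", "schedule", "calendar", "oncologist"},
}
TOPIC_CHANGE_THRESHOLD = 3  # assumed number of off-topic utterances to switch


class TopicTracker:
    def __init__(self):
        self.current_topic = None
        self.off_topic_streak = {}

    def _best_topic(self, words):
        scores = {t: len(kw & words) for t, kw in TOPIC_KEYWORDS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else None

    def contextualize(self, caption):
        words = set(caption.lower().split())
        topic = self._best_topic(words)
        if topic and topic != self.current_topic:
            self.off_topic_streak[topic] = self.off_topic_streak.get(topic, 0) + 1
            if (self.current_topic is None or
                    self.off_topic_streak[topic] >= TOPIC_CHANGE_THRESHOLD):
                self.current_topic = topic
                self.off_topic_streak = {}
        elif topic == self.current_topic:
            self.off_topic_streak = {}
        if self.current_topic:
            return f"[{self.current_topic}] {caption}"
        return caption


if __name__ == "__main__":
    tracker = TopicTracker()
    for caption in [
        "The storm last night was really a mess",       # establishes "weather"
        "We got the lab results back from your scan",    # off-topic utterance 1
        "The test shows a small blockage",                # off-topic utterance 2
        "The results are otherwise fine",                 # 3rd -> topic switches
    ]:
        print(tracker.contextualize(caption))
```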
Communication augmentation is used to embellish the meaning of what is said in a voice signal. For instance, where a hearing user states “I really liked the Hamilton show last night”, in reference to a popular musical/theatrical performance that occurred the prior evening, augmentation may include accessing a review of a show called “Hamilton” that occurred locally near an HU's home and presenting the review via the captioned device display 24. As another instance, augmentation may include identifying the definition of a complex word uttered by an HU and presenting that definition in text adjacent to or in some associating way relative to the word within a caption presented to an AU. As still one other instance, where an HU physician asks an AU to schedule an MRI session, the system may identify MRI as a sub-topic and may use Wikipedia or some other online resources to provide textual information related to what an MRI is and what it is used for. Many other types of automatic augmentation are contemplated.
In at least some embodiments it is contemplated that, in addition to generating enhanced captions of different types as described above and in more detail hereafter, the system may also operate to automatically initiate various actions that are useful to an AU or at least to generate and provide action suggestions to an AU based on what is said during an AU-HU conversation. For instance, where an HU doctor utters the words “I want you to schedule an appointment with an oncologist can you hold on the line?”, the system may automatically recognize the request to schedule the AU for an appointment, access an AU's electronic on-line schedule to identify scheduling options for the AU and present the AU's next week schedule with open time slots during normal business hours highlighted as selectable options for scheduling the appointment. Once the call is transferred to a scheduling assistant, the system may listen for an oncologist's identity and once the oncologist is identified, the system may automatically identify the oncologist's office location and a duration of time required to travel from the AU's scheduled locations preceding the AU's open schedule time slots to the oncologist's office location as well as travel times from the office location to the AU's scheduled locations subsequent to the open time slots and then limit scheduling options suggested to the patient accordingly. Where an appointment time and location are confirmed verbally, the system may automatically present the scheduling option via display screen 24 (e.g., as a selectable scheduling icon) or the like to be added to the patient's schedule. In at least some embodiments many other action initiation processes based on HU utterances, AU utterances or combinations of HU and AU utterances are contemplated including but not limited to, ordering medications or supplements, placing orders for products or services, identifying products or services that may be useful to an AU, generating reminders for an AU, instantiating or updating AU task lists, etc.
Referring yet again to
AUs may have different preferences when it comes to enhanced captions and action initiation options. For instance, a first AU may want the system to facilitate all enhanced caption options, a second AU may prefer only word simplifications, a third AU may prefer word simplifications and context augmentation, a fourth AU may want meaning summary and context augmentation and a fifth AU may simply want verbatim captions without any enhanced caption options. In many cases a specific AU's preferences will not vary substantially between calls and, for that reason, it is contemplated that in at least some cases memory 40 will include AU preferences database 39 that, for each AU, lists the AU's enhanced caption and action initiation preferences. In some cases, an AU's preferences may be applied to all HU calls. In other cases, an AU may have HU specific preferences which are stored in a relay or device 20 database or memory device and which are implemented once a specific HU's voice signal is identified at the beginning of a call. Thus, for instance, for a first HU, an AU may prefer word simplification and communication contextualization, for a second HU, the AU may prefer meaning summary enhanced captions and for a third HU, the AU may prefer verbatim captions without any enhancements. Here, the system would apply the AU's preferences for each of the first, second and third HUs.
When an AU's device 20 establishes a communication link to the relay 14, relay 14 automatically identifies the AU via a device identifier or the identifier in combination with the AU's voice signal (e.g., based on the AU voice profiles stored in database 35) or other information and accesses the AU's enhanced caption and action initiation preferences in database 39. In other cases, the AU device 20 may maintain preferences for an AU that routinely uses device 20 and may provide those preferences to server 32 upon linking thereto. The preferences are used to configure enhancement module 36 so that server 32 facilitates the AU's preferences automatically for each call unless the AU manually changes one or more preferences for a specific call. Again, in cases where an AU has defined enhanced caption and/or action initiation preferences for a specific HU, upon identifying the HU or the HU's voice signal, those HU specific preferences are implemented for the specific HU.
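The preference lookup described above may be organized, purely by way of example, along the lines of the following sketch. The preference names, identifiers and structure are illustrative assumptions and do not correspond to a specific schema of database 39.

```python
# Sketch of AU preference handling: preferences are looked up per AU, with
# optional HU-specific overrides, and used to configure which enhancement
# steps run for a call. Names and structure are illustrative assumptions.

AU_PREFERENCES = {
    "au_123": {
        "default": {"word_simplification", "contextualization"},
        "per_hu": {
            "hu_smith": {"meaning_summary"},
            "hu_jones": set(),   # verbatim captions only for this HU
        },
    },
}


def enabled_enhancements(au_id, hu_id=None):
    """Return the set of enhancement types to apply for this call."""
    prefs = AU_PREFERENCES.get(au_id, {"default": set(), "per_hu": {}})
    if hu_id is not None and hu_id in prefs["per_hu"]:
        return prefs["per_hu"][hu_id]
    return prefs["default"]


if __name__ == "__main__":
    print(enabled_enhancements("au_123"))              # default preferences
    print(enabled_enhancements("au_123", "hu_jones"))  # set() -> verbatim only
```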
In at least some cases, some caption enhancements can be performed relatively simply. For instance, in the case of many language simplification enhancements (e.g., where a complex word is replaced via a simpler and more familiar word), the system processor can simply employ a thesaurus to identify synonyms as replacement candidates for relatively complex words that appear in a caption. In at least some cases it is contemplated that thesaurus entries may be ranked from most familiar to least familiar and, when a complex caption word is identified, the server may simply replace the complex word with the most familiar word (e.g., highest ranked word) in the thesaurus.
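The following sketch illustrates this thesaurus-based simplification. The ranked thesaurus entries and the punctuation handling are illustrative assumptions; in practice the thesaurus would cover far more words and, as discussed next, context would often be needed before replacing a word.

```python
# Sketch of thesaurus-based word simplification: complex caption words are
# replaced with the highest-ranked (most familiar) synonym from a ranked
# thesaurus. Entries below are illustrative assumptions only.

RANKED_THESAURUS = {
    # complex word -> synonyms ordered from most to least familiar
    "perplexing": ["confusing", "puzzling", "baffling"],
    "exultant": ["happy", "overjoyed", "jubilant"],
    "staid": ["serious", "sober", "sedate"],
}


def simplify_words(caption):
    out = []
    for word in caption.split():
        bare = word.lower().strip(",.?!")
        if bare in RANKED_THESAURUS:
            replacement = RANKED_THESAURUS[bare][0]     # highest-ranked synonym
            word = replacement + word[len(word.rstrip(",.?!")):]
        out.append(word)
    return " ".join(out)


if __name__ == "__main__":
    print(simplify_words("The results were perplexing, but he seemed exultant."))
    # -> "The results were confusing, but he seemed happy."
```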
In most cases caption enhancements require at least some degree of understanding of the context in which words are used in order for the enhancement to be implemented. For example, in many cases even language or word simplifications (e.g., replacement of a complex word with a simple or more widely used word) require more than just a correctly spelled word in order to identify simplifying replacements. For instance, the word “manual” may be used as either a noun (e.g., a handbook) or an adjective (e.g., labor-intensive) and, if that word is to be replaced, the context in which the word is used within an utterance would need to be known in order to replace the word with a simplified adjective or a simplified noun. Similarly, the word “complex” can be used as an adjective or a noun and which words can be used to replace the word “complex” in a sentence will therefore be a function of the context in which that word is used. Likewise, word swaps (e.g., replacing the word “party” with “fiesta” or “partido”) require an understanding of context that surrounds the original word prior to selecting a replacement word. Filler word removal (e.g., eliminating words like “um”, “ah”, “gee”, “Oh”, “Hmmm . . . ”, etc.) also requires contextual understanding so that those words are not eliminated from a caption when they actually have meaning.
Hereafter, unless indicated otherwise, the phrase “intended communication” (IC) will be used to refer to the intended meaning of a phrase or voiced communication from an HU and, in cases where captions are generated for an AU's voice signal and meanings of those captions are generated, also to the intended meaning of a phrase or voiced communication from the AU. The process of determining an HU or AU IC will be referred to as “IC identification”.
To enable IC identification needed to drive any of language simplification, word swapping, filler word removal, meaning summaries, communication contextualization and communication augmentation as well as to drive any type of action initiation, full understanding of the meaning of an utterance is required so that a speaker's intended meaning or intended communication is not lost in translation. Thus, here, it is insufficient for the system to simply perceive the meanings of separate words and instead the system has to be able to ascertain overall ideas or intended messages that are being delivered via utterances so that those ideas and messages can be rephrased, shortened or otherwise smartly enhanced resulting in better, more-clear and more rapid communication with an AU.
For these reasons, in at least some embodiments of the present disclosure, an intended communication (IC) identification sub-system is contemplated for identifying the intended meanings of utterances generated during AU-HU communications where those meanings are then used to drive enhanced caption processes in any one or a combination of the ways described above. Here, the general idea is to be able to effectively identify a clear and succinct IC for many and in some cases most HU utterances (and AU utterances in at least some cases) that occur during a normal conversation with an AU and then present enhanced captions that clearly state the IC to the AU as opposed to what the HU actually uttered (e.g., verbatim captions), thereby increasing communication effectiveness.
One problem with identifying ICs for each possible HU utterance (and AU utterance in some cases) is that there are literally thousands of ways to express most ideas or meanings and defining all those ways as part of an IC identification system is daunting and likely impossible via simple brute force manual specification of all possible ways to express all possible intended communications. In the present case one partial solution is to identify a set of intended communications that are commonly expressed by HUs and AUs during phone calls as a possible intended communication (PIC) set and, for each of those PICs, use a machine learning system to generate a large number of “associated phrases” that correspond to the PIC (e.g., have essentially the same meaning as the PIC) so that during operation, when one of the large number of associated phrases is received, the system can associate the utterance with the PIC and then replace the utterance with a simplified phrase or other enhanced caption that is consistent with the intended communication (e.g., consistent with the PIC).
To this end, Google's Dialogflow software and other similar software modules and systems (e.g., Amazon's LEX, etc.) have been developed primarily for fielding verbalized user queries or other user utterances (e.g., answers to automated clarifying queries) wherein any user utterance is associated with a specific user intent and then a response process associated with that intent is performed. For instance, a bike owner may call a bike shop to schedule an appointment to replace a tire and, once greeted by an automated voice and asked how the user can be helped, the user may utter “I need to get a tire fixed.” In the alternative, the user may utter the phrase “I want to have my bike serviced” or “Can I bring my bike in?” Here, regardless of the user's actual utterance, the system, examining the utterance and recognizing the user's intent to schedule a maintenance appointment, may confirm the user's intent and may then automatically walk the user through selection of a calendar appointment for dropping off the user's bike for repair.
In at least some cases, once a general intent is identified, the Dialogflow or other system module may be programmed to examine the utterance to identify qualifying parameters that qualify the intent and make the intent more specific. For instance, once the intent to schedule a maintenance appointment is identified, the system may be programmed to search the transcribed utterance for a date, a day, a time, etc. As another instance, the system may be programmed to search the transcription for the words "tire", "brake", "new part", "inventory", etc. Knowing a general intent and other uttered qualifying parameters enables the system to ascribe a richer meaning to the user's utterance and may be used to identify a more specific activity for the system to perform in response.
In the above example, the system is programmed to recognize a user's intent regardless of how the intent is phrased. To do this, during a commissioning procedure, a system programmer knowing a possible first user intent (e.g., to schedule an appointment, speak to a sales representative, determine store hours, order a part, etc.) enters that first intent into the system and then specifies (e.g., enters into a computer system) a small set (e.g., 3-6) of differently worded phrases that have similar meaning and that are more or less consistent with the first intent. Once the small intent specific phrase set has been specified for the first intent, a machine learning module uses that small intent specific phrase set to generate a massive intent specific phrase set (e.g., hundreds or even thousands of phrases) including differently worded phrases that are all generally consistent with the first intent. Once a large intent specific associated phrase set for the first intent is generated and stored, the programmer specifies qualifying parameter sets for the first intent that have a good probability of being uttered by a speaking user in conjunction with a phrase associated with the first intent and that can be used to more specifically define the user intent associated with an utterance. A list of qualifying parameters is stored for the first intent. The process of specifying an intent and a small set of phrases that are consistent with the meaning of the intent, using machine learning to generate a large associated phrase set that is associated with the intent, and specifying a qualifying parameter set for the intent is performed for many possible or anticipated intents (e.g., 2000), and those intents, associated large phrase sets and qualifying parameter sets are stored for subsequent use in understanding uttered phrases.
After system commissioning, during system use, when a voice signal is received, the signal is transcribed into caption text and the caption text is compared to at least a subset of the phrases in the large associated phrase sets to identify a closest match. Once a closest matching phrase is identified, the intent associated with the closest matching phrase is selected as an intent associated with the caption text. Next, a processor accesses the qualifying parameter set for the selected intent and attempts to identify any of those parameters in the caption text. Identified qualifying parameters are used to more specifically identify the speaker's intent. The intent and qualifying parameters are used to perform one or more actions in response to the received voice signal and that are consistent with the intent associated with the initial utterance.
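By way of illustration only, the following Python sketch shows one simple way the closest-phrase matching step described above could be approximated. The PIC names, the example phrases and the use of a string-similarity ratio are assumptions chosen for illustration and are not part of the disclosed system, which may use any suitable matching technique.

```python
# Hypothetical sketch of closest-phrase intent selection; the PICs, phrases
# and the difflib-based similarity measure are illustrative assumptions only.
from difflib import SequenceMatcher

PIC_PHRASE_SETS = {
    "schedule_maintenance": [
        "i need to get a tire fixed",
        "i want to have my bike serviced",
        "can i bring my bike in",
    ],
    "store_hours": [
        "what time do you open",
        "how late are you open today",
    ],
}

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity score between two phrases."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def closest_pic(caption: str):
    """Return (pic, best_score) for the associated phrase most similar to the caption."""
    best_pic, best_score = None, 0.0
    for pic, phrases in PIC_PHRASE_SETS.items():
        for phrase in phrases:
            score = similarity(caption, phrase)
            if score > best_score:
                best_pic, best_score = pic, score
    return best_pic, best_score

print(closest_pic("I need to get a tire fixed on my bike"))
```

In a deployed system the similarity measure would likely be replaced by a trained model, but the selection logic (score every phrase in every large associated phrase set and adopt the intent of the best match) is the same.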
In the present case, for enhanced caption processes, Dialogflow or some other comparable software application program may be used to specify a large number (e.g., hundreds or even thousands) of possible intended communications (PICs) that are akin to Dialogflow intents and a separate large associated phrase (AP) set and a separate qualifying parameter (QP) list for each of the PICs which are correlated and stored in system database 40. Hereafter, unless indicated otherwise, the phrase “PIC dataset” will be used to refer to a single PIC and its associated large AP set and QP set.
In at least some cases it is contemplated that a generalized set of PIC datasets (e.g., a “generalized PIC set”) may be developed and used to handle all phone calls for all HUs and AUs, regardless of user identities. In other cases specialized PIC datasets may be created and used in special circumstances (e.g., based on HU identity, AU identity, HU specialty, HU employer, etc.) to identify ICs either independent of a generalized set or in parallel or otherwise in conjunction with a generalized set of PIC datasets as is described in greater detail hereafter. Thus, here, when captioning is initiated, the system may use user identity to ID a user specific PIC dataset as well as user specific enhanced captioning preferences. Once a speaker's intended communication is determined, a system processor performs one or more of the caption enhancing processes described above and hereafter per AU preferences 39.
In systems that are consistent with the present disclosure, for most enhanced caption preferences, a speaker's IC is only needed as seed information for generating an enhanced caption or information set of some type to present to the AU. In these cases, no action occurs because of the meaning of the utterance; instead, the caption enhancing action is prescribed by the system as specified in an AU's preferences or as programmed. For instance, where an AU's only caption enhancement preference is to simplify caption words, irrespective of the IC ascribed to the utterance, the only actions performed are to replace complex words with simplified words that have essentially the same meaning and to present the enhanced caption (e.g., the caption with simplified words replacing complex words) to the AU. As another instance, where an AU's only caption enhancement preference is to provide meaning summaries, irrespective of the IC ascribed to the utterance, the only actions performed are to generate a summary caption for the transcribed text and present the summary caption (e.g., the enhanced caption) to the AU.
In cases where two or more caption enhancing preferences are specified for an AU, a first of the preferences may automatically take precedence over the second of the preferences so that the first preference is processed first followed by the second preference. For instance, where an AU specifies both a word simplification preference and a meaning summary preference, the system may automatically perform any word simplifications first followed by a meaning summary. Thus, there may be a default hierarchy of caption enhancing preferences that is automatically applied to an AU preference set. In other cases, the system may enable an AU to rank the importance of enhanced captioning preferences during a commissioning procedure of some type so that the order of enhancements to captions can be customized for an AU.
In still other cases performance of enhanced caption preferences may be contingent on each other. For instance, where an AU's preference set includes meaning summary and word simplification, the system may be programmed to attempt to perform a meaning summary first on every HU utterance and the system may only perform a word simplification when the system cannot generate a highly accurate meaning summary. As another instance, where an AU's preference set includes meaning summary and communication augmentation, the augmentation may be premised on successful meaning summary.
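The contingent-preference behavior described above can be pictured with the following sketch, in which a meaning summary is attempted first and word simplification is applied only when a sufficiently confident summary cannot be produced. The summarizer stub, the word list and the confidence threshold are assumptions made for illustration only.

```python
# Hypothetical sketch of contingent caption-enhancement preferences; the
# enhancement functions and the threshold value are illustrative assumptions.
SIMPLE_WORDS = {"thrombocytes": "platelets", "laboratory": "lab"}

def summarize(verbatim: str):
    """Stand-in summarizer returning (summary, confidence)."""
    if "test results" in verbatim.lower():
        return "Your test results are back.", 0.92
    return verbatim, 0.40  # low confidence: no reliable summary available

def simplify(verbatim: str) -> str:
    """Swap complex words for simpler equivalents."""
    return " ".join(SIMPLE_WORDS.get(w.lower(), w) for w in verbatim.split())

def enhance(verbatim: str, summary_threshold: float = 0.85) -> str:
    summary, confidence = summarize(verbatim)
    if confidence >= summary_threshold:
        return summary            # preferred enhancement succeeded
    return simplify(verbatim)     # fall back to word simplification

print(enhance("We received your test results from the laboratory."))
print(enhance("The thrombocytes count from the laboratory looks normal."))
```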
Regarding the IC identification process in general, while systems are contemplated that can identify ICs for a majority of HU utterances, it has been recognized that in most cases the ICs of at least some HU (or AU) utterances will not be discernable or that the accuracy of a discerned IC will be dubious. For instance, in at least some cases, despite extensive commissioning to support large numbers of PICs, an HU may utter a phrase that cannot be reasonably associated with a specific PIC or where confidence in an utterance-PIC association is low.
In most Dialogflow based systems, inability to identify an intent or meaning is a problem, as actions premised on an intent obviously cannot be performed unless the speaker's intent is known. In the case of a captioning system as in the present disclosure, when a speaker's intended communication cannot be confidently identified, there is a relatively good fallback option, which is to simply present verbatim text to an AU instead of a preferred enhanced caption of some type. Thus, in some cases where the disclosed processor/server cannot identify an IC for an utterance, the server may simply default to providing a verbatim caption instead of an enhanced caption.
In some embodiments each IC identified by the system may be assigned a confidence factor indicating a calculated likelihood that the IC is accurate. Here, for instance, assume that a caption corresponding to an HU's utterance closely matches only one phrase in one large associated phrase set associated with a first PIC and is appreciably different than all phrases in all other large associated phrase sets that are associated with all of the other system supported PICs. In this case, a high confidence factor would be assigned to the first PIC. In contrast, assume that a caption corresponding to an HU's utterance is most like one phrase in one large associated phrase set but is still very different than that one phrase. In this case a midlevel confidence factor may be assigned to an identified IC. As another example, assume that a caption corresponding to an HU's utterance is similar to phrases that are in large associated phrase sets corresponding to first, second and third different PICs. Here, a low confidence factor may be assigned to an identified IC. In other cases, verbatim captions may be low confidence, and, in those cases, any derived IC would likewise be associated with a low confidence factor.
In cases where an identified IC confidence factor is below a threshold level (e.g., 85%, 90%, 95%, whatever is acceptable within the industry or to an AU), in at least some cases the system will be programmed to simply ignore low confidence ICs. In these cases, instead of performing some enhanced caption function, original transcribed verbatim text may be presented to the AU for consumption. In other cases where low CF ICs occur, the system may be programmed to provide an enhanced caption based thereon along with an associated verbatim caption (e.g., in left and right columns on a display screen) for AU consideration.
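As a minimal sketch of the confidence-factor gating just described, the following function either presents the enhanced caption alone, presents it with a low-confidence tag alongside the verbatim caption, or falls back to verbatim only. The specific threshold, tag text and "show both" policy are assumptions for illustration.

```python
# Hypothetical sketch of confidence-factor gating for enhanced captions; the
# 0.90 threshold and the "show both" policy are illustrative assumptions.
def select_caption(verbatim: str, enhanced: str, cf: float,
                   threshold: float = 0.90, show_both_when_low: bool = True):
    """Return the caption text (or texts) to present to the AU."""
    if cf >= threshold:
        return [enhanced]                         # confident: enhanced caption only
    if show_both_when_low:
        return [f"Low CF: {enhanced}", verbatim]  # flag the low-confidence IC
    return [verbatim]                             # otherwise ignore the low CF IC

print(select_caption("Your labs are complete and we have received data back.",
                     "Your test results are back.", cf=0.95))
print(select_caption("Your labs are complete and we have received data back.",
                     "Your test results are back.", cf=0.70))
```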
In most cases it is expected that enhanced captioning will be highly effective for most HU utterances. In this regard, while HUs are obviously free to voice whatever they want during phone calls, in most cases and on most calls, most if not all HU utterances can be successfully associated with a relatively small set of anticipated PICs. For example, for most calls, it is believed that if the enhanced captioning system is programmed to support a few hundred anticipated or common PICs, the system will be able to successfully identify more than 85% of the ICs associated with a typical HU's utterances and therefore will be able to successfully enhance more than 85% of the captions generated by the transcription system. In a case where the system is programmed to support two to five thousand PICs, the system should be able to successfully identify more than 90% of the ICs associated with a typical HU's utterances and therefore will be able to successfully enhance more than 90% of the captions generated by the transcription system.
Even in the case of a relatively ineffective enhanced captioning system where only 50% or even 25% of captioned phrases can be turned into enhanced captions of an AU's preferred type, the system can be programmed to provide all possible enhanced captions with verbatim captions provided as a backup to enhance and appreciably expedite communications.
While the system described above assumes a generalized PIC set (e.g., a generalized set of PIC datasets (e.g., PICs and corresponding large associated phrase sets and qualifying parameter sets)) developed for general use, in some cases it is contemplated that specialized or customized PIC sets (e.g., specialized sets of PICs and associated large phrase sets and qualifying parameter sets) may be developed for specific industries, specific HUs, specific enterprises (e.g., a large medical practice) or specific types of HUs (e.g., physicians, a specific physician type, IT specialists, etc.). In these cases, depending on who an AU is speaking to, the relay 14 processor may access one or more of the specialized PIC sets and use that set(s) to perform enhanced caption processes more effectively or accurately. Thus, for instance, if an AU is participating in a phone call with her dermatologist, a dermatology PIC set may be accessed and used for IC identification. Here, in addition to using the dermatology PIC set, the system may also use the generalized PIC set so that more PICs are supported by the system in operation. Again, HU voice or identity may be assessed in many different ways at the beginning of any call and used to select optimized PIC datasets to drive caption enhancements.
As another example, people working in a particular high-tech industry may routinely express intended communications that are unique to that industry and which would be difficult to associate with generalized PICs because of the unique intended meaning, phrases or terms used in the industry, etc. Here, an industry specific PIC set may be specified and stored for use when an AU and/or an HU on a call works in the industry associated with the specific PIC set.
Referring now to
Identification sub-system 41 includes an IC engine 40, a qualifying parameters engine 42, and a possible intended communication (PIC) database 45. Exemplary database 45 includes a generalized PIC dataset 47 that includes a list of possible intended communications (PICs) that are supported by the system. For instance, the PIC list includes PIC-1, PIC-2 through PIC-N. Each PIC comprises a possible intended communication which specifies a possible meaning for an HU utterance. For instance, PIC-1 may be “How are you feeling?” while PIC-2 may be “We received your test results.”
Database 45 also includes a large associated phrase set for each of the PICs. For instance, the associated phrases associated with PIC-1 include AP-1, AP-2, AP-3 through AP-NNNN. Where PIC-1 includes the intended communication "How are you feeling?", AP-1 may be "What are you feeling right now?", AP-2 may be "Are you feeling good right now?", AP-3 may be "What are your current physical conditions?", etc. Where PIC-2 includes "We received your test results.", associated APs may include "Your test results are back.", "Your labs are complete and we have received data back.", "Your blood work scores are back from the laboratory.", etc.
Database 45 further includes a qualifying parameter set for each PIC including, for instance, QP-1 through QP-MM that correspond to PIC-1. Where PIC-2 is "We received your test results.", exemplary QPs may include "blood work", "flu", "negative", "positive", "good", "bad", etc.
Each PIC and associated AP set and QP set is referred to above and generally herein as a PIC dataset (e.g., PIC-1, AP-1 through AP-NNNN and QP-1 through QP-MM comprise a first PIC dataset). Exemplary first and second specialized PIC sets 49 and 51 are also illustrated which may be used in some exemplary systems that are consistent with the present disclosure.
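For illustration only, a PIC dataset of the type just described might be represented by a simple record holding the PIC text, its associated phrase set and its qualifying parameter set. The field names and the qualifying parameters shown for PIC-1 are assumptions, not content specified by the disclosure.

```python
# Hypothetical sketch of a PIC dataset record (PIC, AP set, QP set); the field
# names and the PIC-1 qualifying parameters are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PICDataset:
    pic: str                                                    # possible intended communication
    associated_phrases: list = field(default_factory=list)      # AP-1 ... AP-NNNN
    qualifying_parameters: list = field(default_factory=list)   # QP-1 ... QP-MM

generalized_pic_set = [
    PICDataset(
        pic="How are you feeling?",
        associated_phrases=["What are you feeling right now?",
                            "Are you feeling good right now?",
                            "What are your current physical conditions?"],
        qualifying_parameters=["pain", "dizzy", "tired"],   # assumed examples
    ),
    PICDataset(
        pic="We received your test results.",
        associated_phrases=["Your test results are back.",
                            "Your labs are complete and we have received data back.",
                            "Your blood work scores are back from the laboratory."],
        qualifying_parameters=["blood work", "flu", "negative", "positive"],
    ),
]
```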
In operation, upon receiving verbatim captioned text associated with an HU utterance, IC engine 40 analyzes the verbatim text by comparing the captioned text to at least a subset of the associated phrases in PIC set 47 (and/or sets 49, 51, etc.) and, once a best matching associated phrase is identified, the PIC correlated with the best matching phrase is selected as the IC for the phrase. Next, parameters engine 42 accesses the qualifying parameters for the selected PIC and attempts to locate at least a portion of those parameters in the utterance. When QPs are located in the utterance, they are used along with the PIC to generate a more detailed IC with additional QP information. Hereafter, unless indicated otherwise, a more detailed IC based on one or more QPs will be referred to as a qualified IC ("QIC"). At this point a QIC is associated with the utterance and therefore caption enhancement preferences can be applied.
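The qualifying-parameter step performed by parameters engine 42 can be sketched as follows. The simple substring test and the returned data shape are assumptions for illustration; the disclosed system may locate qualifying parameters in any suitable manner.

```python
# Hypothetical sketch of qualifying-parameter extraction to form a QIC; the
# data shapes and the simple substring test are illustrative assumptions.
def build_qic(utterance: str, pic: str, qualifying_parameters: list) -> dict:
    """Attach any qualifying parameters found in the utterance to the selected PIC."""
    found = [qp for qp in qualifying_parameters if qp.lower() in utterance.lower()]
    return {"pic": pic, "qualifiers": found}

utterance = "Your blood work scores are back and everything looks negative."
print(build_qic(utterance,
                "We received your test results.",
                ["blood work", "flu", "negative", "positive"]))
# -> {'pic': 'We received your test results.', 'qualifiers': ['blood work', 'negative']}
```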
Referring to
In some cases, enhancement modules use only a QIC associated with an utterance to generate enhanced captions (e.g., they do not use the verbatim text). For instance, in the case of the meaning summary module 52, that module is provided simply to generate a summary of a verbatim caption and therefore the module only needs the QIC to generate the summary. For instance, where an HU utters the phrase "Well, ah, we received the thrombocytes count and other information from the laboratory today at noon and, um, ah, would like to share the results with you at this, ah, time.", the summary module may generate the summary phrase "Your blood work test results are back." As another instance, the same phrase may be summarized as "Your results are back." and may be presented in the context of a topic indicator (as identified by communication contextualization module 54 described hereafter) like "Blood test results from St. Mary's Lab".
In other cases a module may use an IC associated with verbatim text to understand the "meaning" of the text or how a specific word in the verbatim text is being used and then may also require the verbatim text, which can be modified as a function of the IC. For instance, in at least some cases the language simplification module and the filler word removal module may use an IC associated with an utterance to assess the meanings of specific words within the utterance and may then identify simplified words or phrases for portions of the verbatim text, swap the simplified words into the verbatim text, and also remove filler words prior to presenting to the AU. For example, where an HU utters the phrase "Well, ah, we received the thrombocytes count and other information from the laboratory today at noon and, um, ah, would like to share the results with you at this, ah, time.", the system may identify an IC, use the IC to understand the meaning of the verbatim utterance and then swap in words resulting in the following phrase: "We received the platelets count and other information from the lab today and would like to share the results."
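A stripped-down sketch of the filler removal and word swapping applied to the verbatim text in the example above follows. The filler list, swap table and tokenization (which, in this sketch, does not restore capitalization or punctuation) are assumptions for illustration; in the disclosed system the swaps would be informed by the IC.

```python
# Hypothetical sketch of filler-word removal and word simplification applied
# to verbatim text; the word lists and tokenization are illustrative assumptions.
import re

FILLERS = {"well", "ah", "um", "hmm", "ok"}
SWAPS = {"thrombocytes": "platelets", "laboratory": "lab"}

def simplify_verbatim(verbatim: str) -> str:
    """Drop filler words and swap complex words for simpler equivalents."""
    words = re.findall(r"[A-Za-z']+", verbatim)
    kept = [SWAPS.get(w.lower(), w) for w in words if w.lower() not in FILLERS]
    return " ".join(kept)

print(simplify_verbatim(
    "Well, ah, we received the thrombocytes count and other information "
    "from the laboratory today at noon and, um, ah, would like to share "
    "the results with you at this, ah, time."))
```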
Referring still to
In some cases, instead of switching the current topic of conversation after an IC topic mismatch to the most recent IC topic, module 54 may implement a hysteretic process whereby two or three consecutive HU utterances have to be at least consistent with a new IC topic that is different than the most recent known topic of conversation in order for the topic engine to switch to a new current topic of conversation. Here, where a current topic is used to qualify an IC or enhanced captions in some way, once a new topic is identified, module 54 may automatically go back and alter enhanced captions or related information presented to an AU to reflect the change in topic at the right point in a conversation. For instance, where a hysteretic lag requires three ICs associated with a new topic of conversation to confirm the new topic, once the new topic is identified module 54 may automatically go back and modify content presented to an AU to indicate in some way that all three of the ICs associated with the new topic are in fact associated with the new topic, or may go back and actually generate different ICs for all three utterances by using the new topic as a QP for each of the three prior ICs. In at least some cases current topics are fed back from module 54 to IC engine 40 to inform the PIC identification process and/or are provided to others of the enhancement modules 46, 48, 50 and 56 for processing when generating enhanced captions.
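The hysteretic topic-switching rule can be illustrated with the following sketch, in which three consecutive ICs on a new topic are required before the current topic of conversation changes. The class shape and the choice to adopt the very first observed topic immediately are assumptions made for illustration.

```python
# Hypothetical sketch of hysteretic topic switching; the class shape and the
# three-consecutive-IC rule mirror the example above and are illustrative only.
class TopicTracker:
    def __init__(self, required_consecutive: int = 3):
        self.current_topic = None
        self.required = required_consecutive
        self.candidate = None
        self.count = 0

    def observe(self, ic_topic: str) -> str:
        """Feed the topic of the latest IC; return the current conversation topic."""
        if self.current_topic is None:
            self.current_topic = ic_topic            # first topic is adopted directly
        elif ic_topic == self.current_topic:
            self.candidate, self.count = None, 0     # reset any pending candidate
        elif ic_topic == self.candidate:
            self.count += 1
            if self.count >= self.required:
                self.current_topic = ic_topic        # confirmed topic change
                self.candidate, self.count = None, 0
        else:
            self.candidate, self.count = ic_topic, 1
        return self.current_topic

tracker = TopicTracker()
for topic in ["test results", "test results", "travel", "travel", "travel"]:
    print(tracker.observe(topic))
```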
Referring still to
Based on CFs associated with enhanced captions, the system may be programmed to operate differently. For instance, where an enhanced caption CF is above some high threshold level, the system may simply present the enhanced caption to an AU without qualification, and when the CF for a specific enhanced caption is below the threshold, the system may either present the enhanced caption to the AU with a qualification tag (e.g., "Low CF" above the enhanced caption text) or may not present the enhanced caption at all, instead simply presenting the verbatim caption text or a different enhanced caption text. For example, where a CF associated with a caption summary type enhancement is below a CF threshold, the system may instead generate and present an enhanced caption that includes language simplification of verbatim text (e.g., a simpler enhancement).
In some cases the system may generate several enhanced caption options, either in series when needed (e.g., in a case where a CF for a first caption is low, a second enhanced caption may be generated and its CF compared to a threshold CF level) or in parallel (e.g., generate first and second enhanced captions in parallel). Where two or more enhanced captions are generated in parallel for one verbatim caption, the system may only use or consider using the second when a CF for the first enhanced caption is below a threshold level. In other cases the system may select between two enhanced captions (e.g., one being a summary caption and the other being a language simplification caption) based on CFs (e.g., pick the caption with the higher CF, or, where one CF is close to the other (e.g., the CF for a summary caption is within 10% of the CF for a language simplification caption), pick the caption type that is preferred).
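The parallel-caption selection rule just described can be sketched as follows. The 10% closeness test and the default preference for summary captions are assumptions taken from the example above purely for illustration.

```python
# Hypothetical sketch of choosing between two parallel enhanced captions by
# confidence factor; the 10% closeness rule and preference order are assumptions.
def choose_caption(summary: str, summary_cf: float,
                   simplified: str, simplified_cf: float,
                   preferred: str = "summary") -> str:
    """Pick the preferred caption type when the two CFs are within 10% of each other."""
    if abs(summary_cf - simplified_cf) <= 0.10 * max(summary_cf, simplified_cf):
        return summary if preferred == "summary" else simplified
    return summary if summary_cf > simplified_cf else simplified

print(choose_caption("Your test results are back.", 0.88,
                     "We received the platelets count from the lab.", 0.91))
```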
In some cases, which caption (e.g., enhanced or verbatim, if enhanced which enhanced, etc.) is presented to an AU may be a function of other factors instead of or in addition to AU preferences and CFs. For instance, in some cases where captioning falls behind AU-HU voice communications, the system may be programmed to favor summary captioning instead of other enhanced captioning forms and/or instead of verbatim captions in order to reduce the voice-captioning lag time. As another instance, where the meaning of a caption is heavily dependent on a current topic of conversation, the system may automatically present an enhanced caption that includes the contextual information as opposed to a verbatim or other enhanced caption type.
Referring now to
Referring still to
Referring now to
In at least some cases an AU will be able to set caption preferences for different calls while a call is ongoing. Thus, for instance, for a first call where an HU speaks slowly, an AU may prefer verbatim captions instead of enhanced as the AU should be able to consume essentially real time captions as the HU speaks. In other cases where an HU speaks quickly and an AU would have to consume a lot of words to keep up with HU utterances and/or where the HU uses many complex words that the AU is unfamiliar with, the AU may prefer some type of enhanced captioning as described herein during a call. In
In other cases, the system may automatically switch between different verbatim and enhanced captioning options based on one or more system operating parameters. For instance, in a case where verbatim captions are preferred and an HU is speaking at a relatively slow rate, the system may automatically present verbatim captions. Here, if the speaker increases speaking rate to above some threshold level, the system may automatically switch to a summary type caption enhancement to minimize the content that an AU needs to consume via presented text. In other cases a switch to summary type captioning may occur only if the ASR or other verbatim captioning process falls behind the broadcast HU voice signal by some threshold duration (e.g., 12 seconds, 20 seconds, etc.) and, once captioning is within a second threshold duration (e.g., three seconds) of the broadcast voice signal 71, the system may revert back to verbatim captions.
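The lag-based switching behavior can be pictured with the following sketch, which uses the 12-second and 3-second thresholds from the example above; the function shape itself is an assumption made for illustration.

```python
# Hypothetical sketch of lag-based switching between verbatim and summary
# captioning; the threshold values mirror the example above, the function
# shape is an illustrative assumption.
def caption_mode(current_mode: str, lag_seconds: float,
                 fall_behind: float = 12.0, caught_up: float = 3.0) -> str:
    """Return 'verbatim' or 'summary' based on how far captioning lags the voice."""
    if current_mode == "verbatim" and lag_seconds > fall_behind:
        return "summary"      # shrink the amount of text the AU must consume
    if current_mode == "summary" and lag_seconds <= caught_up:
        return "verbatim"     # revert once captioning catches back up
    return current_mode

mode = "verbatim"
for lag in [2.0, 8.0, 14.0, 9.0, 2.5]:
    mode = caption_mode(mode, lag)
    print(lag, mode)
```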
In some cases, the system may enable an AU to receive both verbatim and enhanced captions simultaneously so that the AU can view and choose which captions to consume essentially in real time as they are presented. Thus, here, where an AU reads verbatim captions on screen 24 and has a hard time understanding the meaning, the AU could simply refer to summary captions for clarity. Similarly, where an AU reads enhanced captions on screen 24 and is confused as to what was said, the AU could refer to the verbatim captions which may provide clarity. A third on screen “Verbatim/Enhanced Captions” button 109 is presented in
Referring now to
Interface controlled pointing icon 101 can be moved around on display screen 24 to perform pointing and then selection (e.g., click, double click, hover) activities. Here, the interface for controlling pointing icon 101 may include the touch sensitive pad 23 shown in
In other cases, AU preferences regarding verbatim, enhanced and verbatim/enhanced captioning may be settable in the AU preferences database 39 (see again
Referring now to
In the case of the communication contextualization enhanced caption option, the idea is to generate enhanced captions where each caption has more information than an associated verbatim caption, where the additional information renders the enhanced caption more informative and easier to understand than the original verbatim caption on its own. In this regard, it has been recognized that simple captions can cause confusion as they are often presented without context. Confusion is particularly problematic for some elder AUs when captions are considered out of context. For instance, many elder AUs suffer from many different ailments, see and communicate with doctors and other health service providers, are issued many diagnoses and often undergo many tests and treatments, where captions out of context can be erroneously associated with wrong information. In contextual enhanced captions, information from prior utterances during a call is used to add information to a current IC to generate a more complex IC that is more likely to be understood on its own. While these types of enhanced captions are typically more informative than associated verbatim captions, that does not necessarily mean the more informative captions will be longer or more verbose. For example, assume Dr. Petrie's nurse practitioner (an HU) and an AU are participating in a voice call and have been discussing a possible skin cancer diagnosis when the nurse utters the phrase "Your test results are back. Um. Well, let me see here. It looks like I can conclude, Hmmm. Ok, there is nothing irregular here. Your tests, ah, the results were normal and you are not sick."
Referring again to
Consistent with the above example, see
Referring still to
Obviously, the utterance at 130 is long and would likely be voiced over a period of tens of seconds in most cases. Here, the system would continually work up one IC at a time as new voice signal is received so the enhanced caption 132 would be constructed over some time and in fact its format and information may be changed over time. Thus, when the utterance about direct sun light is voiced, the other utterances about sun block and a checkup with Dr. Hart would still not have been voiced. At this point the caption at 132 would only include the single recommendation that the AU stay out of direct sun light and the recommendation likely would not be provided as a list. Once the second recommendation to use 50 UV sunscreen is voiced and associated with an IC, the system may go back and reformat the first light related recommendation as part of a list also including the 50 UV protection sunscreen. Thus, in at least some cases, enhanced captions may be provided to an AU and then subsequently modified in form and/or content based on subsequently voiced content.
Referring now to
In some cases, the system may be programmed to automatically generate separate topic of conversation fields to help an AU understand how a conversation is progressing and how topics are changing over time. This feature can also help an AU locate different parts of a conversation if searching back in captions for specific information that was discussed. To this end see the exemplary screen shot shown on screen 24 in
Again, in general the systems described above are described in the context of processes that caption HU voice signals and ignore AU voice signals or utterances. In other cases, verbatim captions may be generated for AU utterances as well as HU utterances and those AU verbatim captions may be used for various purposes. For instance, as shown in
As another instance, an AU's verbatim captions may be tracked and used as additional context for generating verbatim HU captions and/or enhanced HU captions. For example, an HU may not mention test results during a call but an AU may mention test results, and the AU's utterance may then be used to provide context for the HU's utterance "We found a blockage" so that an enhanced caption "The test results showed that you have a blockage." can be generated. Thus, in some cases captions of AU utterances may be used to generate enhanced HU captions where only the HU captions are presented to the AU for consumption.
In cases where the system automatically switches between enhanced captions and verbatim captions for any reason, in at least some embodiments the system will visually tag or otherwise highlight or visually distinguish enhanced and verbatim captions in some manner so that the AU understands the difference. For instance, see
In most cases verbatim captions will be generated at least some time (e.g., one or more seconds) prior to summary or other types of enhanced captions that are derivatives of the verbatim captions. In the interest of presenting text representations of HU utterances to an AU as quickly as possible, in at least some cases verbatim text is transmitted to and presented by an AU caption device 20 immediately upon being generated. Here, if the system subsequently generates an enhanced caption, the system may transmit the enhanced caption to the AU device and the AU device may be programmed to swap in any enhanced caption for an associated verbatim caption. In other cases, the AU device may present summary captions adjacent verbatim captions as an alternative text to be consumed.
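One way to picture the device-side swap is the following sketch of a caption store keyed by utterance identifier, where a verbatim line is presented immediately and replaced in place when an enhanced caption later arrives. The dictionary-based display model and method names are assumptions for illustration only.

```python
# Hypothetical sketch of an AU-device caption store that shows verbatim text
# immediately and swaps in an enhanced caption when one later arrives.
class CaptionDisplay:
    def __init__(self):
        self.lines = {}   # utterance id -> currently displayed text

    def show_verbatim(self, uid: int, text: str):
        self.lines[uid] = text

    def swap_in_enhanced(self, uid: int, text: str):
        if uid in self.lines:          # replace the verbatim line in place
            self.lines[uid] = text

display = CaptionDisplay()
display.show_verbatim(1, "Well, ah, we received the thrombocytes count ...")
display.swap_in_enhanced(1, "Your blood work test results are back.")
print(display.lines[1])
```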
In at least some embodiments it is contemplated that an HU device or communication system will play a great role in enhanced captioning. For instance, in the case of an oncologist, the oncologist may use complex cancer related and genetics terms, phrases and acronyms which a relay may be ill equipped to simplify. In these cases, the HU system may automatically identify any complex terms of art uttered by an oncologist and may automatically provide information to the relay or other system processing device for developing enhanced captions.
Referring to
Referring still to
One problem with summarizing or otherwise enhancing captions is that there is a risk that any type of enhancement may simply be wrong, and intended communication errors could have a substantial effect on AU understanding and even activities. For instance, where the system inaccurately determines that a doctor prescribes a specific dose of pain medication for a patient, that could have unintended results. One way to deal with potential enhanced captioning errors is for the system to identify utterances related to topics that are particularly important and, for those utterances, always present verbatim captions. For instance, when a physician HU utters a phrase that has anything to do with a drug, a test result, an appointment or a procedure, the system may automatically recognize the topic of the phrase and then present verbatim HU captions, or both verbatim and enhanced captions, for that phrase to limit the possibility of any type of enhanced captioning error.
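A minimal sketch of this critical-topic override follows; the keyword list is a crude stand-in (an assumption) for whatever topic recognition the system actually uses.

```python
# Hypothetical sketch of forcing verbatim captions for critical topics; the
# keyword list is an illustrative assumption standing in for topic detection.
CRITICAL_TERMS = {"drug", "dose", "prescription", "test result",
                  "appointment", "procedure"}

def captions_to_present(verbatim: str, enhanced: str) -> list:
    """Show both captions when the utterance touches a critical topic."""
    text = verbatim.lower()
    if any(term in text for term in CRITICAL_TERMS):
        return [verbatim, enhanced]   # limit the effect of any enhancement error
    return [enhanced]

print(captions_to_present(
    "Take one 5 mg dose of the drug each morning before breakfast.",
    "Take one 5 mg dose every morning."))
```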
Another solution for minimizing the effects of enhanced captioning errors is to route captioning feedback back to an HU and present that feedback for HU consumption. To this end, see again
As described above, captioning may be dynamic even after initial hypotheses are presented to an AU so that any captioning errors can be corrected upon being identified (e.g., in-line corrections may be made in real time as errors are identified). Also, as described above, in many cases, in order to be able to generate enhanced captions, the system has to be able to identify an HU's IC (e.g., intended communication) to inform the enhancement. In addition to ascribing an IC to each initial verbatim caption hypothesis, the system can also ascribe an IC to any corrected verbatim caption. In many cases caption errors will not change the IC associated with an HU's utterance and, in those cases, the system may be programmed to only make caption error corrections when the ICs related to an initial caption hypothesis and an error corrected caption are different. In other words, if the IC associated with an initial caption hypothesis is the same as an IC associated with an error corrected caption, the system may forego making the error correction in order to limit error correction changes and the distraction that could be associated therewith. Here, only "meaningful" error corrections that change the IC related to an HU utterance would be facilitated on an AU display screen.
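The "meaningful correction" rule can be sketched as a simple comparison of the ICs derived from the initial and corrected captions; a correction is pushed to the AU display only when the IC changes. The derive_ic stub below is a placeholder assumption standing in for the IC engine.

```python
# Hypothetical sketch of "meaningful" error correction: a corrected verbatim
# caption is only sent to the AU display when its IC differs from the IC of
# the initial hypothesis; derive_ic is a stand-in for the IC engine.
def derive_ic(caption: str) -> str:
    return "test_results" if "result" in caption.lower() else "other"

def should_push_correction(initial_caption: str, corrected_caption: str) -> bool:
    return derive_ic(initial_caption) != derive_ic(corrected_caption)

print(should_push_correction("Your test results are back",
                             "Your test result is back"))      # False: same IC
print(should_push_correction("Your rest is back",
                             "Your test results are back"))    # True: IC changed
```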
While the invention may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. For example, while the systems described above each audibly broadcasts HU voice signals to an AU, in some cases the system may instead broadcast simulated voice messages based on enhanced captions (e.g., a summary voice message as opposed to the actual HU voice message).
As another example, in at least some embodiments essentially all relay functions and processes may be performed by an AU's device 20 where the AU runs one or more ASRs as well as caption enhancing programs or modules as described above. In other cases an HU device may generate HU voice signal verbatim captions and send those captions on to a relay for generating enhanced captions that are then sent to the AU device for display (e.g., either directly or back through the HU device to the AU device).
Thus, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the following appended claims.
This application is a continuation of U.S. patent application Ser. No. 17/180,702, filed on Feb. 19, 2021, which application claims the benefit of priority to U.S. Provisional Application No. 62/979,708, filed Feb. 21, 2020. The contents of each of these applications are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
2004253 | Tasker | Jun 1935 | A |
3372246 | Knuepfer et al. | Mar 1968 | A |
3507997 | Weitbrecht | Apr 1970 | A |
3515814 | Morgan et al. | Jun 1970 | A |
3585303 | Chieffo et al. | Jun 1971 | A |
3598920 | Fischer et al. | Aug 1971 | A |
3800089 | Reddick | Mar 1974 | A |
3896267 | Sachs et al. | Jul 1975 | A |
3959607 | Vargo | May 1976 | A |
3976995 | Sebestyen | Aug 1976 | A |
4012599 | Meyer | Mar 1977 | A |
4039768 | O'Maley | Aug 1977 | A |
4126768 | Grenzow | Nov 1978 | A |
4151380 | Blomeyer et al. | Apr 1979 | A |
4160136 | McGough | Jul 1979 | A |
4188665 | Nagel et al. | Feb 1980 | A |
4191854 | Coles | Mar 1980 | A |
4201887 | Burns | May 1980 | A |
4254308 | Blomeyer et al. | Mar 1981 | A |
D259348 | Sakai et al. | May 1981 | S |
4268721 | Nielson et al. | May 1981 | A |
4289931 | Baker | Sep 1981 | A |
4302629 | Foulkes et al. | Nov 1981 | A |
4307266 | Messina | Dec 1981 | A |
4354252 | Lamb | Oct 1982 | A |
4415065 | Sandstedt | Nov 1983 | A |
4426555 | Underkoffler | Jan 1984 | A |
4430726 | Kasday | Feb 1984 | A |
D273110 | Genaro et al. | Mar 1984 | S |
4451701 | Bendig | May 1984 | A |
4471165 | DeFino et al. | Sep 1984 | A |
D275857 | Moraine | Oct 1984 | S |
4490579 | Godoshian | Dec 1984 | A |
4503288 | Kessler | Mar 1985 | A |
D278435 | Hikawa | Apr 1985 | S |
4524244 | Faggin | Jun 1985 | A |
D280099 | Topp | Aug 1985 | S |
4533791 | Read et al. | Aug 1985 | A |
4568803 | Frola | Feb 1986 | A |
4569421 | Sandstedt | Feb 1986 | A |
D283421 | Brier | Apr 1986 | S |
4625080 | Scott | Nov 1986 | A |
RE32365 | Sebestyen | Mar 1987 | E |
4650927 | James | Mar 1987 | A |
4659876 | Sullivan et al. | Apr 1987 | A |
4713808 | Gaskill | Dec 1987 | A |
4754474 | Feinson | Jun 1988 | A |
D296894 | Chen | Jul 1988 | S |
4777469 | Engelke et al. | Oct 1988 | A |
4790003 | Kepley et al. | Dec 1988 | A |
4799254 | Dayton | Jan 1989 | A |
4815121 | Yoshida | Mar 1989 | A |
4817135 | Winebaum | Mar 1989 | A |
4839919 | Borges | Jun 1989 | A |
4849750 | Andros | Jul 1989 | A |
4866778 | Baker | Sep 1989 | A |
4868860 | Andros et al. | Sep 1989 | A |
4879738 | Petro | Nov 1989 | A |
4897868 | Engelke et al. | Jan 1990 | A |
D306727 | Fritzsche | Mar 1990 | S |
4908866 | Goldwasser et al. | Mar 1990 | A |
4918723 | Iggulden et al. | Apr 1990 | A |
4926460 | Gutman et al. | May 1990 | A |
4951043 | Minami | Aug 1990 | A |
4959847 | Engelke et al. | Sep 1990 | A |
D312457 | Inatomi | Nov 1990 | S |
4995077 | Malinowski | Feb 1991 | A |
5025442 | Lynk et al. | Jun 1991 | A |
5027406 | Roberts et al. | Jun 1991 | A |
5033088 | Shipman | Jul 1991 | A |
5051924 | Bergeron et al. | Sep 1991 | A |
D322785 | Wu | Dec 1991 | S |
5081673 | Engelke et al. | Jan 1992 | A |
5086453 | Senoo et al. | Feb 1992 | A |
5091906 | Reed et al. | Feb 1992 | A |
5095307 | Shimura et al. | Mar 1992 | A |
5099507 | Mukai et al. | Mar 1992 | A |
5121421 | Alheim | Jun 1992 | A |
5128980 | Choi | Jul 1992 | A |
5134633 | Werner | Jul 1992 | A |
5146502 | Davis | Sep 1992 | A |
5163081 | Wycherley et al. | Nov 1992 | A |
5192948 | Neustein | Mar 1993 | A |
5199077 | Wilcox et al. | Mar 1993 | A |
5210689 | Baker et al. | May 1993 | A |
5214428 | Allen | May 1993 | A |
5216702 | Ramsden | Jun 1993 | A |
5249220 | Moskowitz et al. | Sep 1993 | A |
5280516 | Jang | Jan 1994 | A |
5289523 | Vasile et al. | Feb 1994 | A |
5294982 | Salomon et al. | Mar 1994 | A |
5307399 | Dai et al. | Apr 1994 | A |
5311516 | Kuznicki et al. | May 1994 | A |
5318340 | Henry | Jun 1994 | A |
5325417 | Engelke et al. | Jun 1994 | A |
5327479 | Engelke et al. | Jul 1994 | A |
5339358 | Danish et al. | Aug 1994 | A |
5343519 | Feldman | Aug 1994 | A |
5351288 | Engelke et al. | Sep 1994 | A |
D351185 | Matsuda et al. | Oct 1994 | S |
5359651 | Draganoff | Oct 1994 | A |
5375160 | Guidon et al. | Dec 1994 | A |
5377263 | Bazemore et al. | Dec 1994 | A |
5392343 | Davitt et al. | Feb 1995 | A |
5393236 | Blackmer et al. | Feb 1995 | A |
5396650 | Terauchi | Mar 1995 | A |
D357253 | Wong | Apr 1995 | S |
5410541 | Hotto | Apr 1995 | A |
5423555 | Kidrin | Jun 1995 | A |
5424785 | Orphan | Jun 1995 | A |
5426706 | Wood | Jun 1995 | A |
5432837 | Engelke et al. | Jul 1995 | A |
5459458 | Richardson et al. | Oct 1995 | A |
5463665 | Millios et al. | Oct 1995 | A |
D364865 | Engelke et al. | Dec 1995 | S |
5475733 | Eisdorfer et al. | Dec 1995 | A |
5475798 | Handlos | Dec 1995 | A |
5477274 | Akiyoshi et al. | Dec 1995 | A |
5487102 | Rothschild et al. | Jan 1996 | A |
5487671 | Shpiro | Jan 1996 | A |
5497373 | Hulen et al. | Mar 1996 | A |
5508754 | Orphan | Apr 1996 | A |
5517548 | Engelke et al. | May 1996 | A |
5519443 | Salomon et al. | May 1996 | A |
5519808 | Benton, Jr. et al. | May 1996 | A |
5521960 | Aronow | May 1996 | A |
5522089 | Kikinis et al. | May 1996 | A |
5537436 | Bottoms et al. | Jul 1996 | A |
5559855 | Dowens et al. | Sep 1996 | A |
5559856 | Dowens | Sep 1996 | A |
5574776 | Leuca et al. | Nov 1996 | A |
5574784 | LaPadula et al. | Nov 1996 | A |
5581593 | Engelke et al. | Dec 1996 | A |
5604786 | Engelke et al. | Feb 1997 | A |
D379181 | Sawano et al. | May 1997 | S |
5649060 | Ellozy et al. | Jul 1997 | A |
5671267 | August et al. | Sep 1997 | A |
5680443 | Kasday et al. | Oct 1997 | A |
5687222 | McLaughlin et al. | Nov 1997 | A |
5701338 | Leyen et al. | Dec 1997 | A |
5710806 | Lee et al. | Jan 1998 | A |
5712901 | Meermans | Jan 1998 | A |
5724405 | Engelke et al. | Mar 1998 | A |
5745550 | Eisdorfer et al. | Apr 1998 | A |
5751338 | Ludwig, Jr. | May 1998 | A |
5787148 | August | Jul 1998 | A |
5799273 | Mitchell et al. | Aug 1998 | A |
5799279 | Gould et al. | Aug 1998 | A |
5809112 | Ryan | Sep 1998 | A |
5809425 | Colwell et al. | Sep 1998 | A |
5815196 | Alshawi | Sep 1998 | A |
5826102 | Escobar et al. | Oct 1998 | A |
5850627 | Gould et al. | Dec 1998 | A |
5855000 | Waibel et al. | Dec 1998 | A |
D405793 | Engelke et al. | Feb 1999 | S |
5867817 | Catallo et al. | Feb 1999 | A |
5870709 | Bernstein | Feb 1999 | A |
5883986 | Kopec et al. | Mar 1999 | A |
5893034 | Hikuma et al. | Apr 1999 | A |
5899976 | Rozak | May 1999 | A |
5905476 | McLaughlin et al. | May 1999 | A |
5909482 | Engelke | Jun 1999 | A |
5915379 | Wallace et al. | Jun 1999 | A |
5917888 | Giuntoli | Jun 1999 | A |
5926527 | Jenkins et al. | Jul 1999 | A |
5940475 | Hansen | Aug 1999 | A |
5974116 | Engelke et al. | Oct 1999 | A |
5978014 | Martin et al. | Nov 1999 | A |
5978654 | Colwell et al. | Nov 1999 | A |
5982853 | Liebermann | Nov 1999 | A |
5982861 | Holloway et al. | Nov 1999 | A |
5991291 | Asai et al. | Nov 1999 | A |
5991723 | Duffin | Nov 1999 | A |
5995590 | Brunet et al. | Nov 1999 | A |
6002749 | Hansen et al. | Dec 1999 | A |
6067516 | Levay et al. | May 2000 | A |
6072860 | Kek et al. | Jun 2000 | A |
6075534 | VanBuskirk et al. | Jun 2000 | A |
6075841 | Engelke et al. | Jun 2000 | A |
6075842 | Engelke et al. | Jun 2000 | A |
6100882 | Sharman et al. | Aug 2000 | A |
6101532 | Horibe et al. | Aug 2000 | A |
6122613 | Baker | Sep 2000 | A |
6141341 | Jones et al. | Oct 2000 | A |
6141415 | Rao | Oct 2000 | A |
6173259 | Bijl et al. | Jan 2001 | B1 |
6175819 | Van Alstine | Jan 2001 | B1 |
6181736 | McLaughlin et al. | Jan 2001 | B1 |
6181778 | Ohki et al. | Jan 2001 | B1 |
6188429 | Martin et al. | Feb 2001 | B1 |
6233314 | Engelke | May 2001 | B1 |
6243684 | Stuart et al. | Jun 2001 | B1 |
6278772 | Bowater et al. | Aug 2001 | B1 |
6298326 | Feller | Oct 2001 | B1 |
6307921 | Engelke et al. | Oct 2001 | B1 |
6314396 | Monkowski | Nov 2001 | B1 |
6317716 | Braida et al. | Nov 2001 | B1 |
6324507 | Lewis et al. | Nov 2001 | B1 |
6345251 | Jansson et al. | Feb 2002 | B1 |
6345252 | Beigi et al. | Feb 2002 | B1 |
6366882 | Bijl et al. | Apr 2002 | B1 |
6374221 | Haimi-Cohen | Apr 2002 | B1 |
6377925 | Greene, Jr. et al. | Apr 2002 | B1 |
6381472 | LaMedica, Jr. et al. | Apr 2002 | B1 |
6385582 | Iwata | May 2002 | B1 |
6385586 | Dietz | May 2002 | B1 |
6389114 | Dowens et al. | May 2002 | B1 |
6424935 | Taylor | Jul 2002 | B1 |
6430270 | Cannon et al. | Aug 2002 | B1 |
6434599 | Porter | Aug 2002 | B1 |
6445799 | Taenzer et al. | Sep 2002 | B1 |
6457031 | Hanson | Sep 2002 | B1 |
6473778 | Gibbon | Oct 2002 | B1 |
6493426 | Engelke et al. | Dec 2002 | B2 |
6493447 | Goss et al. | Dec 2002 | B1 |
6504910 | Engelke et al. | Jan 2003 | B1 |
6507735 | Baker et al. | Jan 2003 | B1 |
6510206 | Engelke et al. | Jan 2003 | B2 |
6549611 | Engelke et al. | Apr 2003 | B2 |
6549614 | Zebryk et al. | Apr 2003 | B1 |
6567503 | Engelke et al. | May 2003 | B2 |
6594346 | Engelke | Jul 2003 | B2 |
6603835 | Engelke et al. | Aug 2003 | B2 |
6625259 | Hollatz et al. | Sep 2003 | B1 |
6633630 | Owens et al. | Oct 2003 | B1 |
6661879 | Schwartz et al. | Dec 2003 | B1 |
6668042 | Michaelis | Dec 2003 | B2 |
6668044 | Schwartz et al. | Dec 2003 | B1 |
6701162 | Everett | Mar 2004 | B1 |
6704709 | Kahn et al. | Mar 2004 | B1 |
6748053 | Engelke et al. | Jun 2004 | B2 |
6754631 | Din | Jun 2004 | B1 |
6763089 | Feigenbaum | Jul 2004 | B2 |
6775360 | Davidson et al. | Aug 2004 | B2 |
6778824 | Wonak et al. | Aug 2004 | B2 |
6813603 | Groner et al. | Nov 2004 | B1 |
6816468 | Cruickshank | Nov 2004 | B1 |
6816469 | Kung et al. | Nov 2004 | B1 |
6816834 | Jaroker | Nov 2004 | B2 |
6831974 | Watson et al. | Dec 2004 | B1 |
6850609 | Schrage | Feb 2005 | B1 |
6865258 | Polcyn | Mar 2005 | B1 |
6876967 | Goto et al. | Apr 2005 | B2 |
6882707 | Engelke et al. | Apr 2005 | B2 |
6885731 | Engelke et al. | Apr 2005 | B2 |
6894346 | Onose et al. | May 2005 | B2 |
6934366 | Engelke et al. | Aug 2005 | B2 |
6934376 | McLaughlin et al. | Aug 2005 | B1 |
6947896 | Hanson | Sep 2005 | B2 |
6948066 | Hind et al. | Sep 2005 | B2 |
6950500 | Chaturvedi et al. | Sep 2005 | B1 |
6980953 | Kanevsky et al. | Dec 2005 | B1 |
7003082 | Engelke et al. | Feb 2006 | B2 |
7003463 | Maes et al. | Feb 2006 | B1 |
7006604 | Engelke | Feb 2006 | B2 |
7016479 | Flathers et al. | Mar 2006 | B2 |
7016844 | Othmer et al. | Mar 2006 | B2 |
7035383 | ONeal | Apr 2006 | B2 |
7042718 | Aoki et al. | May 2006 | B2 |
7088832 | Cooper | Aug 2006 | B1 |
7117152 | Mukherji et al. | Oct 2006 | B1 |
7117438 | Wallace et al. | Oct 2006 | B2 |
7130790 | Flanagan et al. | Oct 2006 | B1 |
7136478 | Brand et al. | Nov 2006 | B1 |
7142642 | McClelland et al. | Nov 2006 | B2 |
7142643 | Brooksby | Nov 2006 | B2 |
7164753 | Engelke et al. | Jan 2007 | B2 |
7191135 | O'Hagan | Mar 2007 | B2 |
7199787 | Lee et al. | Apr 2007 | B2 |
7221405 | Basson et al. | May 2007 | B2 |
7233655 | Gailey et al. | Jun 2007 | B2 |
7236580 | Sarkar et al. | Jun 2007 | B1 |
7260771 | Chiu et al. | Aug 2007 | B2 |
7287009 | Liebermann | Oct 2007 | B1 |
7292977 | Liu | Nov 2007 | B2 |
7295663 | McLaughlin et al. | Nov 2007 | B2 |
7313231 | Reid | Dec 2007 | B2 |
7315612 | McClelland | Jan 2008 | B2 |
7319740 | Engelke et al. | Jan 2008 | B2 |
7330737 | Mahini | Feb 2008 | B2 |
7346506 | Lueck et al. | Mar 2008 | B2 |
7363006 | Mooney | Apr 2008 | B2 |
7406413 | Geppert et al. | Jul 2008 | B2 |
7428702 | Cervantes et al. | Sep 2008 | B1 |
7430283 | Steel, Jr. | Sep 2008 | B2 |
7480613 | Kellner | Jan 2009 | B2 |
7496510 | Frank et al. | Feb 2009 | B2 |
7519536 | Maes et al. | Apr 2009 | B2 |
7555104 | Engelke | Jun 2009 | B2 |
7573985 | McClelland et al. | Aug 2009 | B2 |
7606718 | Cloran | Oct 2009 | B2 |
7613610 | Zimmerman et al. | Nov 2009 | B1 |
7660398 | Engleke et al. | Feb 2010 | B2 |
7660715 | Thambiratnam | Feb 2010 | B1 |
7739114 | Chen et al. | Jun 2010 | B1 |
7747434 | Flanagan et al. | Jun 2010 | B2 |
7792701 | Basson et al. | Sep 2010 | B2 |
7831429 | O'Hagan | Nov 2010 | B2 |
7836412 | Zimmerman | Nov 2010 | B1 |
7844454 | Coles et al. | Nov 2010 | B2 |
7848358 | LaDue | Dec 2010 | B2 |
7881441 | Engelke et al. | Feb 2011 | B2 |
7904113 | Ozluturk et al. | Mar 2011 | B2 |
7962339 | Pieraccini et al. | Jun 2011 | B2 |
8019608 | Carraux et al. | Sep 2011 | B2 |
8032383 | Bhardwaj et al. | Oct 2011 | B1 |
8055503 | Scarano et al. | Nov 2011 | B2 |
8180639 | Pieraccini et al. | May 2012 | B2 |
8213578 | Engelke et al. | Jul 2012 | B2 |
8249878 | Carraux et al. | Aug 2012 | B2 |
8259920 | Abramson et al. | Sep 2012 | B2 |
8265671 | Gould et al. | Sep 2012 | B2 |
8286071 | Zimmerman et al. | Oct 2012 | B1 |
8325883 | Schultz et al. | Dec 2012 | B2 |
8332212 | Wittenstein et al. | Dec 2012 | B2 |
8332227 | Maes et al. | Dec 2012 | B2 |
8335689 | Wittenstein et al. | Dec 2012 | B2 |
8352883 | Kashik et al. | Jan 2013 | B2 |
8369488 | Sennett et al. | Feb 2013 | B2 |
8370142 | Frankel et al. | Feb 2013 | B2 |
8379801 | Romriell et al. | Feb 2013 | B2 |
8407052 | Hager | Mar 2013 | B2 |
8416925 | Engelke et al. | Apr 2013 | B2 |
8423361 | Chang et al. | Apr 2013 | B1 |
8447366 | Ungari et al. | May 2013 | B2 |
8473003 | Jung et al. | Jun 2013 | B2 |
8504372 | Carraux et al. | Aug 2013 | B2 |
8526581 | Charugundla | Sep 2013 | B2 |
8538324 | Hardacker et al. | Sep 2013 | B2 |
8605682 | Efrati et al. | Dec 2013 | B2 |
8626249 | Ungari et al. | Jan 2014 | B2 |
8645136 | Milstein | Feb 2014 | B2 |
8682672 | Ha et al. | Mar 2014 | B1 |
8781510 | Gould et al. | Jul 2014 | B2 |
8806455 | Katz | Aug 2014 | B1 |
8867532 | Wozniak et al. | Oct 2014 | B2 |
8868425 | Maes et al. | Oct 2014 | B2 |
8874070 | Basore et al. | Oct 2014 | B2 |
8892447 | Srinivasan et al. | Nov 2014 | B1 |
8908838 | Engelke et al. | Dec 2014 | B2 |
8917821 | Engelke et al. | Dec 2014 | B2 |
8917822 | Engelke et al. | Dec 2014 | B2 |
8930194 | Newman et al. | Jan 2015 | B2 |
8972261 | Milstein | Mar 2015 | B2 |
9069377 | Wilson et al. | Jun 2015 | B2 |
9124716 | Charugundla | Sep 2015 | B1 |
9161166 | Johansson et al. | Oct 2015 | B2 |
9183843 | Fanty et al. | Nov 2015 | B2 |
9185211 | Roach et al. | Nov 2015 | B2 |
9191789 | Pan | Nov 2015 | B2 |
9215406 | Paripally et al. | Dec 2015 | B2 |
9215409 | Montero et al. | Dec 2015 | B2 |
9218808 | Milstein | Dec 2015 | B2 |
9231902 | Brown et al. | Jan 2016 | B2 |
9245522 | Hager | Jan 2016 | B2 |
9247052 | Walton | Jan 2016 | B1 |
9277043 | Bladon et al. | Mar 2016 | B1 |
9305552 | Kim et al. | Apr 2016 | B2 |
9318110 | Roe | Apr 2016 | B2 |
9324324 | Knighton | Apr 2016 | B2 |
9336689 | Romriell et al. | May 2016 | B2 |
9344562 | Moore et al. | May 2016 | B2 |
9355611 | Wang et al. | May 2016 | B1 |
9380150 | Bullough et al. | Jun 2016 | B1 |
9392108 | Milstein | Jul 2016 | B2 |
9460719 | Antunes et al. | Oct 2016 | B1 |
9495964 | Kim et al. | Nov 2016 | B2 |
9502033 | Carraux et al. | Nov 2016 | B2 |
9535891 | Raheja et al. | Jan 2017 | B2 |
9536567 | Garland et al. | Jan 2017 | B2 |
9571638 | Knighton et al. | Feb 2017 | B1 |
9576498 | Zimmerman et al. | Feb 2017 | B1 |
9628620 | Rae et al. | Apr 2017 | B1 |
9632997 | Johnson et al. | Apr 2017 | B1 |
9633657 | Svendsen et al. | Apr 2017 | B2 |
9633658 | Milstein | Apr 2017 | B2 |
9633696 | Miller et al. | Apr 2017 | B1 |
9653076 | Kim | May 2017 | B2 |
9672825 | Arslan et al. | Jun 2017 | B2 |
9704111 | Antunes et al. | Jul 2017 | B1 |
9715876 | Hager | Jul 2017 | B2 |
9761241 | Maes et al. | Sep 2017 | B2 |
9774747 | Garland et al. | Sep 2017 | B2 |
9805118 | Ko et al. | Oct 2017 | B2 |
9848082 | Lillard et al. | Dec 2017 | B1 |
9852130 | Oh | Dec 2017 | B2 |
9858256 | Hager | Jan 2018 | B2 |
9858929 | Milstein | Jan 2018 | B2 |
9886956 | Antunes et al. | Feb 2018 | B1 |
9916295 | Crawford | Mar 2018 | B1 |
9947322 | Kang et al. | Apr 2018 | B2 |
9953653 | Newman et al. | Apr 2018 | B2 |
10032455 | Newman et al. | Jul 2018 | B2 |
10044854 | Rae et al. | Aug 2018 | B2 |
10049669 | Newman et al. | Aug 2018 | B2 |
10051120 | Engelke et al. | Aug 2018 | B2 |
10389876 | Engelke et al. | Aug 2019 | B2 |
10469660 | Engelke et al. | Nov 2019 | B2 |
10491746 | Engelke et al. | Nov 2019 | B2 |
10574804 | Bullough et al. | Feb 2020 | B2 |
10581625 | Pandey et al. | Mar 2020 | B1 |
10587751 | Engelke et al. | Mar 2020 | B2 |
10742805 | Engelke et al. | Aug 2020 | B2 |
10878721 | Engelke et al. | Dec 2020 | B2 |
10917519 | Engelke et al. | Feb 2021 | B2 |
11011157 | Dernoncourt | May 2021 | B2 |
11017778 | Thomson et al. | May 2021 | B1 |
11170782 | Stoker et al. | Nov 2021 | B2 |
11176944 | Boekweg et al. | Nov 2021 | B2 |
11363141 | Friio | Jun 2022 | B2 |
11368581 | Engelke et al. | Jun 2022 | B2 |
11539900 | Engelke | Dec 2022 | B2 |
11627221 | Engelke et al. | Apr 2023 | B2 |
11636859 | Boekweg et al. | Apr 2023 | B2 |
11664029 | Engelke et al. | May 2023 | B2 |
11741963 | Engelke et al. | Aug 2023 | B2 |
20010005825 | Engelke et al. | Jun 2001 | A1 |
20020007275 | Goto et al. | Jan 2002 | A1 |
20020049589 | Poirier | Apr 2002 | A1 |
20020055351 | Elsey et al. | May 2002 | A1 |
20020085685 | Engelke et al. | Jul 2002 | A1 |
20020085703 | Proctor | Jul 2002 | A1 |
20020094800 | Trop et al. | Jul 2002 | A1 |
20020101537 | Basson et al. | Aug 2002 | A1 |
20020103008 | Rahn et al. | Aug 2002 | A1 |
20020114429 | Engelke et al. | Aug 2002 | A1 |
20020119800 | Jaggers et al. | Aug 2002 | A1 |
20020161578 | Saindon et al. | Oct 2002 | A1 |
20020178001 | Balluff et al. | Nov 2002 | A1 |
20020178002 | Boguraev et al. | Nov 2002 | A1 |
20020184373 | Maes | Dec 2002 | A1 |
20020193076 | Rogers et al. | Dec 2002 | A1 |
20030045329 | Kinoshita | Mar 2003 | A1 |
20030061396 | Wen et al. | Mar 2003 | A1 |
20030063731 | Woodring | Apr 2003 | A1 |
20030069997 | Bravin et al. | Apr 2003 | A1 |
20030097262 | Nelson | May 2003 | A1 |
20030128820 | Hirschberg et al. | Jul 2003 | A1 |
20030177190 | Moody et al. | Sep 2003 | A1 |
20030212547 | Engelke et al. | Nov 2003 | A1 |
20040064317 | Othmer et al. | Apr 2004 | A1 |
20040064322 | Georgiopoulos et al. | Apr 2004 | A1 |
20040066926 | Brockbank et al. | Apr 2004 | A1 |
20040083105 | Jaroker | Apr 2004 | A1 |
20040122657 | Brants et al. | Jun 2004 | A1 |
20040143430 | Said et al. | Jul 2004 | A1 |
20040186989 | Clapper | Sep 2004 | A1 |
20040266410 | Sand et al. | Dec 2004 | A1 |
20050025290 | Doherty et al. | Feb 2005 | A1 |
20050048992 | Wu et al. | Mar 2005 | A1 |
20050049879 | Audu et al. | Mar 2005 | A1 |
20050063520 | Michaelis | Mar 2005 | A1 |
20050094776 | Haldeman et al. | May 2005 | A1 |
20050094777 | McClelland | May 2005 | A1 |
20050129185 | McClelland et al. | Jun 2005 | A1 |
20050144012 | Afrashteh et al. | Jun 2005 | A1 |
20050180553 | Moore | Aug 2005 | A1 |
20050183109 | Basson et al. | Aug 2005 | A1 |
20050193062 | Komine et al. | Sep 2005 | A1 |
20050225628 | Antoniou | Oct 2005 | A1 |
20050226394 | Engelke et al. | Oct 2005 | A1 |
20050226398 | Bojeun | Oct 2005 | A1 |
20050232169 | McLaughlin et al. | Oct 2005 | A1 |
20050277431 | White | Dec 2005 | A1 |
20060026003 | Carus et al. | Feb 2006 | A1 |
20060047816 | Lawton et al. | Mar 2006 | A1 |
20060080432 | Spataro et al. | Apr 2006 | A1 |
20060089857 | Zimmerman et al. | Apr 2006 | A1 |
20060095575 | Sureka et al. | May 2006 | A1 |
20060105712 | Glass et al. | May 2006 | A1 |
20060133583 | Brooksby | Jun 2006 | A1 |
20060140354 | Engelke | Jun 2006 | A1 |
20060149558 | Kahn et al. | Jul 2006 | A1 |
20060167686 | Kahn | Jul 2006 | A1 |
20060172720 | Islam et al. | Aug 2006 | A1 |
20060190249 | Kahn et al. | Aug 2006 | A1 |
20060285652 | McClelland et al. | Dec 2006 | A1 |
20060285662 | Yin et al. | Dec 2006 | A1 |
20070011012 | Yurick et al. | Jan 2007 | A1 |
20070024583 | Gettemy et al. | Feb 2007 | A1 |
20070036282 | Engelke et al. | Feb 2007 | A1 |
20070048719 | He et al. | Mar 2007 | A1 |
20070074116 | Thomas | Mar 2007 | A1 |
20070100634 | Cooper et al. | May 2007 | A1 |
20070116190 | Bangor et al. | May 2007 | A1 |
20070118373 | Wise | May 2007 | A1 |
20070124405 | Ulmer et al. | May 2007 | A1 |
20070126926 | Miyamoto et al. | Jun 2007 | A1 |
20070130257 | Bedi et al. | Jun 2007 | A1 |
20070153989 | Howell et al. | Jul 2007 | A1 |
20070168552 | Alse et al. | Jul 2007 | A1 |
20070208570 | Bhardwaj et al. | Sep 2007 | A1 |
20070274296 | Cross, Jr. et al. | Nov 2007 | A1 |
20070282597 | Cho | Dec 2007 | A1 |
20080005440 | Li et al. | Jan 2008 | A1 |
20080040111 | Miyamoto et al. | Feb 2008 | A1 |
20080043936 | Liebermann | Feb 2008 | A1 |
20080043953 | Newsom et al. | Feb 2008 | A1 |
20080064326 | Foster et al. | Mar 2008 | A1 |
20080129864 | Stone et al. | Jun 2008 | A1 |
20080152093 | Engelke et al. | Jun 2008 | A1 |
20080187108 | Engelke et al. | Aug 2008 | A1 |
20080215323 | Shaffer et al. | Sep 2008 | A1 |
20080319744 | Goldberg | Dec 2008 | A1 |
20080319745 | Caldwell et al. | Dec 2008 | A1 |
20090037171 | McFarland et al. | Feb 2009 | A1 |
20090089236 | Lamprecht et al. | Apr 2009 | A1 |
20090174759 | Yeh et al. | Jul 2009 | A1 |
20090276215 | Hager | Nov 2009 | A1 |
20090282106 | Jaffer et al. | Nov 2009 | A1 |
20090299743 | Rogers | Dec 2009 | A1 |
20090306981 | Cromack et al. | Dec 2009 | A1 |
20090326939 | Toner et al. | Dec 2009 | A1 |
20100007711 | Bell | Jan 2010 | A1 |
20100027765 | Schultz et al. | Feb 2010 | A1 |
20100030738 | Geer | Feb 2010 | A1 |
20100063815 | Cloran et al. | Mar 2010 | A1 |
20100076752 | Zweig et al. | Mar 2010 | A1 |
20100121629 | Cohen | May 2010 | A1 |
20100141834 | Cuttner | Jun 2010 | A1 |
20100145729 | Katz | Jun 2010 | A1 |
20100158213 | Mikan et al. | Jun 2010 | A1 |
20100204989 | Boes et al. | Aug 2010 | A1 |
20100228548 | Liu et al. | Sep 2010 | A1 |
20100268534 | Kishan Thambiratnam et al. | Oct 2010 | A1 |
20100293232 | Jackson et al. | Nov 2010 | A1 |
20100299131 | Lanham et al. | Nov 2010 | A1 |
20100323728 | Gould et al. | Dec 2010 | A1 |
20110013756 | Davies et al. | Jan 2011 | A1 |
20110022387 | Hager | Jan 2011 | A1 |
20110087491 | Wittenstein et al. | Apr 2011 | A1 |
20110099006 | Sundararaman et al. | Apr 2011 | A1 |
20110123003 | Romriell | May 2011 | A1 |
20110128953 | Wozniak et al. | Jun 2011 | A1 |
20110161085 | Boda et al. | Jun 2011 | A1 |
20110170672 | Engelke et al. | Jul 2011 | A1 |
20110206189 | Kennedy et al. | Aug 2011 | A1 |
20110231184 | Kerr | Sep 2011 | A1 |
20110270609 | Jones et al. | Nov 2011 | A1 |
20110289134 | de los Reyes et al. | Nov 2011 | A1 |
20120016671 | Jaggi et al. | Jan 2012 | A1 |
20120022865 | Milstein | Jan 2012 | A1 |
20120062791 | Thakolsri et al. | Mar 2012 | A1 |
20120108196 | Musgrove et al. | May 2012 | A1 |
20120178064 | Katz | Jul 2012 | A1 |
20120179465 | Cox et al. | Jul 2012 | A1 |
20120214447 | Russell et al. | Aug 2012 | A1 |
20120245936 | Treglia | Sep 2012 | A1 |
20120250837 | Engelke et al. | Oct 2012 | A1 |
20120265529 | Nachtrab et al. | Oct 2012 | A1 |
20120284015 | Drewes | Nov 2012 | A1 |
20130013904 | Tran | Jan 2013 | A1 |
20130017800 | Gouvia et al. | Jan 2013 | A1 |
20130030804 | Zavaliagkos et al. | Jan 2013 | A1 |
20130035936 | Garland et al. | Feb 2013 | A1 |
20130045720 | Madhavapeddi et al. | Feb 2013 | A1 |
20130086293 | Bosse et al. | Apr 2013 | A1 |
20130144610 | Gordon et al. | Jun 2013 | A1 |
20130171958 | Goodson et al. | Jul 2013 | A1 |
20130219098 | Turnpenny et al. | Aug 2013 | A1 |
20130254264 | Hankinson et al. | Sep 2013 | A1 |
20130262563 | Lu | Oct 2013 | A1 |
20130289971 | Parkinson et al. | Oct 2013 | A1 |
20130308763 | Engelke et al. | Nov 2013 | A1 |
20130317818 | Bigham et al. | Nov 2013 | A1 |
20130331056 | McKown et al. | Dec 2013 | A1 |
20130340003 | Davis et al. | Dec 2013 | A1 |
20140018045 | Tucker | Jan 2014 | A1 |
20140039871 | Crawford | Feb 2014 | A1 |
20140039888 | Taubman et al. | Feb 2014 | A1 |
20140099909 | Daly et al. | Apr 2014 | A1 |
20140153705 | Moore et al. | Jun 2014 | A1 |
20140163981 | Cook et al. | Jun 2014 | A1 |
20140180667 | Johansson | Jun 2014 | A1 |
20140270101 | Maxwell et al. | Sep 2014 | A1 |
20140314220 | Charugundla | Oct 2014 | A1 |
20140341359 | Engelke et al. | Nov 2014 | A1 |
20150032450 | Hussain et al. | Jan 2015 | A1 |
20150058005 | Khare et al. | Feb 2015 | A1 |
20150073790 | Steuble et al. | Mar 2015 | A1 |
20150088508 | Bharadwaj et al. | Mar 2015 | A1 |
20150094105 | Pan | Apr 2015 | A1 |
20150106091 | Wetjen et al. | Apr 2015 | A1 |
20150120289 | Lev-Tov et al. | Apr 2015 | A1 |
20150130887 | Thelin et al. | May 2015 | A1 |
20150131786 | Roach et al. | May 2015 | A1 |
20150279352 | Willett et al. | Oct 2015 | A1 |
20150287043 | Michaelis et al. | Oct 2015 | A1 |
20150288815 | Charugundla | Oct 2015 | A1 |
20150341486 | Knighton | Nov 2015 | A1 |
20150358461 | Klaban | Dec 2015 | A1 |
20160012751 | Hirozawa | Jan 2016 | A1 |
20160027442 | Burton et al. | Jan 2016 | A1 |
20160119571 | Ko | Apr 2016 | A1 |
20160133251 | Kadirkamanathan et al. | May 2016 | A1 |
20160155435 | Mohideen | Jun 2016 | A1 |
20160179831 | Gruber et al. | Jun 2016 | A1 |
20160277709 | Stringham et al. | Sep 2016 | A1 |
20160295293 | McLaughlin | Oct 2016 | A1 |
20170004207 | Baughman et al. | Jan 2017 | A1 |
20170083214 | Furesjo et al. | Mar 2017 | A1 |
20170085506 | Gordon | Mar 2017 | A1 |
20170178182 | Kuskey et al. | Jun 2017 | A1 |
20170187826 | Russell et al. | Jun 2017 | A1 |
20170187876 | Hayes et al. | Jun 2017 | A1 |
20170206808 | Engelke | Jul 2017 | A1 |
20170270929 | Aleksic et al. | Sep 2017 | A1 |
20180012619 | Ryan et al. | Jan 2018 | A1 |
20180013886 | Rae et al. | Jan 2018 | A1 |
20180081869 | Hager | Mar 2018 | A1 |
20180102130 | Holm et al. | Apr 2018 | A1 |
20180197545 | Willett et al. | Jul 2018 | A1 |
20180212790 | Jacobson et al. | Jul 2018 | A1 |
20180270350 | Engelke et al. | Sep 2018 | A1 |
20180315417 | Flaks et al. | Nov 2018 | A1 |
20190108834 | Nelson et al. | Apr 2019 | A1 |
20190295542 | Huang et al. | Sep 2019 | A1 |
20200004803 | Dernoncourt et al. | Jan 2020 | A1 |
20200007679 | Engelke et al. | Jan 2020 | A1 |
20200143820 | Donofrio et al. | May 2020 | A1 |
20200153958 | Engelke et al. | May 2020 | A1 |
20200244800 | Engelke et al. | Jul 2020 | A1 |
20200252507 | Engelke et al. | Aug 2020 | A1 |
20200364067 | Accame et al. | Nov 2020 | A1 |
20210058510 | Engelke et al. | Feb 2021 | A1 |
20210073468 | Deshmukh et al. | Mar 2021 | A1 |
20210210115 | Kothari et al. | Jul 2021 | A1 |
20210233530 | Thomson et al. | Jul 2021 | A1 |
20210234959 | Engelke et al. | Jul 2021 | A1 |
20210274039 | Engelke et al. | Sep 2021 | A1 |
20220014622 | Engelke et al. | Jan 2022 | A1 |
20220014623 | Engelke et al. | Jan 2022 | A1 |
20220028394 | Engelke et al. | Jan 2022 | A1 |
20220103683 | Engelke et al. | Mar 2022 | A1 |
20220284904 | Pu et al. | Sep 2022 | A1 |
20220319521 | Liu | Oct 2022 | A1 |
Number | Date | Country |
---|---|---|
102572372 | Jul 2012 | CN |
2647097 | Apr 1978 | DE |
2749923 | May 1979 | DE |
3410619 | Oct 1985 | DE |
3632233 | Apr 1988 | DE |
10328884 | Feb 2005 | DE |
0016281 | Oct 1980 | EP |
0029246 | May 1981 | EP |
0651372 | May 1995 | EP |
0655158 | May 1995 | EP |
0664636 | Jul 1995 | EP |
0683483 | Nov 1995 | EP |
1039733 | Sep 2000 | EP |
1330046 | Jul 2003 | EP |
1486949 | Dec 2004 | EP |
2093974 | Aug 2009 | EP |
2373016 | Oct 2011 | EP |
2403697 | Apr 1979 | FR |
2432805 | Feb 1980 | FR |
2538978 | Jul 1984 | FR |
2183880 | Jun 1987 | GB |
2285895 | Jul 1995 | GB |
2327173 | Jan 1999 | GB |
2335109 | Sep 1999 | GB |
2339363 | Jan 2000 | GB |
2334177 | Dec 2002 | GB |
S5544283 | Mar 1980 | JP |
S5755649 | Apr 1982 | JP |
S58134568 | Aug 1983 | JP |
S60259058 | Dec 1985 | JP |
S63198466 | Aug 1988 | JP |
H04248596 | Sep 1992 | JP |
2011087005 | Apr 2011 | JP |
20050004503 | Dec 2005 | KR |
9323947 | Nov 1993 | WO |
9405006 | Mar 1994 | WO |
9500946 | Jan 1995 | WO |
9519086 | Jul 1995 | WO |
9750222 | Dec 1997 | WO |
9839901 | Sep 1998 | WO |
9913634 | Mar 1999 | WO |
9952237 | Oct 1999 | WO |
0049601 | Aug 2000 | WO |
0155914 | Aug 2001 | WO |
0158165 | Aug 2001 | WO |
0180079 | Oct 2001 | WO |
0225910 | Mar 2002 | WO |
02077971 | Oct 2002 | WO |
03026265 | Mar 2003 | WO |
03030018 | Apr 2003 | WO |
03071774 | Aug 2003 | WO |
2005081511 | Sep 2005 | WO |
2008053306 | May 2008 | WO |
2015131028 | Sep 2015 | WO |
2015148037 | Oct 2015 | WO |
Entry |
---|
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petitioner's Reply to Patent Owner's Response, CaptionCall LLC v. Ultratec Inc., Case IPR2014-00780, U.S. Pat. No. 6,603,835, Apr. 20, 2015, 30 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2014-00780, U.S. Pat. No. 6,603,835, Dec. 1, 2015, 56 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2014-00780, U.S. Pat. No. 6,603,835, Dec. 31, 2015, 20 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00540, U.S. Pat. No. 6,233,314, Apr. 2, 2015, 19 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00540, U.S. Pat. No. 6,233,314, Dec. 1, 2015, 18 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00540, U.S. Pat. No. 6,233,314, Feb. 2, 2016, 19 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00541, U.S. Pat. No. 5,909,482, Apr. 2, 2015, 19 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00541, U.S. Pat. No. 5,909,482, Dec. 1, 2015, 18 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00541, U.S. Pat. No. 5,909,482, Feb. 2, 2016, 19 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00542, U.S. Pat. No. 7,319,740, Apr. 2, 2015, 19 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00542, U.S. Pat. No. 7,319,740, Dec. 1, 2015, 15 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00542, U.S. Pat. No. 7,319,740, Feb. 2, 2016, 12 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00543, U.S. Pat. No. 7,555,104, Apr. 2, 2015, 19 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00543, U.S. Pat. No. 7,555,104, Dec. 1, 2015, 15 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00543, U.S. Pat. No. 7,555,104, Feb. 2, 2016, 11 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00544, U.S. Pat. No. 8,213,578, Apr. 2, 2015, 19 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00544, U.S. Pat. No. 8,213,578, Dec. 1, 2015, 19 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00544, U.S. Pat. No. 8,213,578, Feb. 2, 2016, 11 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00545, U.S. Pat. No. 6,594,346, Apr. 2, 2015, 16 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00545, U.S. Pat. No. 6,594,346, Dec. 1, 2015, 15 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00545, U.S. Pat. No. 6,594,346, Feb. 2, 2016, 11 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00549, U.S. Pat. No. 6,603,835, Apr. 2, 2015, 19 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00549, U.S. Pat. No. 6,603,835, Dec. 1, 2015, 15 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00549, U.S. Pat. No. 6,603,835, Feb. 2, 2016, 11 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Request for Rehearing by Expanded Panel, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00550, U.S. Pat. No. 7,003,082, Apr. 2, 2015, 19 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Patent Owner's Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00550, U.S. Pat. No. 7,003,082, Dec. 1, 2015, 10 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Notice of Appeal, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00550, U.S. Pat. No. 7,003,082, Feb. 2, 2016, 11 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2014-01287, U.S. Pat. No. 7,660,398, Feb. 12, 2015, 15 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2014-01287, U.S. Pat. No. 7,660,398, Mar. 13, 2015, 18 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Request for Rehearing, CaptionCall LLC v. Ultratec Inc., Case IPR2014-01287, U.S. Pat. No. 7,660,398, Nov. 5, 2015, 7 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review for U.S. Pat. No. 10,469,660, CaptionCall, LLC v. Ultratec, Inc., Case IPR2020-01215, Jul. 1, 2020, 68 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Declaration of Benedict J. Occhiogrosso for U.S. Pat. No. 10,469,660, CaptionCall, LLC v. Ultratec, Inc., Case IPR2020-01215, Jun. 23, 2020, 113 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Institution of Inter Partes Review for U.S. Pat. No. 10,469,660, CaptionCall LLC v. Ultratec Inc., Case IPR2020-01215, Jan. 27, 2021, 24 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petitioner's Request for Rehearing Pursuant to 37 C.F.R. 42.71(d), CaptionCall LLC v. Ultratec Inc., Case IPR2020-01215, U.S. Pat. No. 10,469,660, Feb. 18, 2021, 19 pages. |
U.S. Appl. No. 60/562,795 Provisional Application to McLaughlin et al., filed Apr. 16, 2004, 126 pages. |
Blackberry, RIM Introduces New Color BlackBerry Handheld for CDMA2000 1X Wireless Networks, BlackBerry Press Release, Mar. 22, 2004, 2 pages. |
Blackberry Wireless Handheld User Guide, 7750, Mar. 16, 2004, 144 pages. |
Federal Communications Commission, Telecommunication Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities, 68 Fed. Reg. 50973-50978 (Aug. 25, 2003). |
PhoneDB, RIM BlackBerry 7750 Device Specs, Copyright 2006-2020 PhoneDB, 6 pages. |
Phonesdata, Nokia 6620 Specs, Review, Opinions, Comparisons, Copyright 2020, 9 pages. |
Sundgot, Nokia Unveils the 6600, InfoSync World, Jun. 16, 2003, 2 pages. |
Wikipedia, Dell Axim, https://en.wikipedia.org/wiki/Dell_Axim, Last Edited on Feb. 23, 2020, 4 pages. |
Wikipedia, Palm Tungsten, https://en.wikipedia.org/wiki/Palm_Tungsten, Last Edited on Oct. 6, 2019, 10 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, U.S. Pat. No. 9,131,045, CaptionCall, LLC v. Ultratec, Inc., Case IPR2015-01889, Apr. 11, 2017, 118 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Judgment for U.S. Pat. No. 7,881,441, CaptionCall, LLC v. Ultratec, Inc., Case IPR2015-01886, Jun. 9, 2016, 4 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review for U.S. Pat. No. 10,491,746, CaptionCall, LLC v. Ultratec, Inc., Case IPR2020-01216, Jul. 1, 2020, 61 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Declaration of Benedict J. Occhiogrosso for U.S. Pat. No. 10,491,746, CaptionCall, LLC v. Ultratec, Inc., Case IPR2020-01216, Jun. 23, 2020, 79 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Denying Institution of Inter Partes Review for U.S. Pat. No. 10,491,746, CaptionCall, LLC v. Ultratec, Inc., Case IPR2020-01216, Jan. 27, 2021, 22 pages. |
Arlinger, Negative Consequences of Uncorrected Hearing Loss—A Review, International Journal of Audiology, 2003, 42:2S17-2S20. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,003,082, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,603,835, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,233,314, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 5,909,482, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,319,740, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,594,346, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,555,104, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,213,578, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 30, 2013. |
Request for Rehearing Under 37 C.F.R. 42.71(d), In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00549, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Mar. 19, 2014. |
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 6,594,346, Case IPR2013-00545, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014. |
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 7,003,082, Case IPR2013-00550, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014. |
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014. |
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 7,319,740, Case IPR2013-00542, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014. |
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00549, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014. |
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 8,213,578, Case IPR2013-00544, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014. |
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 5,909,482, Case IPR2013-00541, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014. |
Patent Owner Response Under 37 C.F.R. 42.120 (to the Institution of Inter Partes Review), In Re: U.S. Pat. No. 6,233,314, Case IPR2013-00540, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014. |
Declaration of Brenda Battat, In Re: U.S. Pat. No. 8,213,578, Case IPR2013-00544, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 8, 2014. |
Declaration of Constance Phelps, In Re: U.S. Pat. No. 6,233,314, Case IPR2013-00540, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 9, 2014. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,603,835, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 19, 2014. |
Declaration of James A. Steel, Jr., In Re: U.S. Pat. No. 7,319,740, Case IPR2013-00542, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 10, 2014. |
Declaration of James A. Steel, Jr., In Re: U.S. Pat. No. 7,003,082, Case IPR2013-00550, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 10, 2014. |
Declaration of James A. Steel, Jr., In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00549, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 11, 2014. |
Declaration of James A. Steel, Jr., In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00543, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014. |
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1-30 of U.S. Pat. No. 8,908,838 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Jan. 29, 2015, 67 pages. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jan. 28, 2015, 62 pages. |
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1-29 of U.S. Pat. No. 8,917,822 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Jan. 29, 2015, 67 pages. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jan. 28, 2015, 65 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2014-00780, U.S. Pat. No. 6,603,835, Dec. 4, 2014, 14 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner Response Under 37 C.F.R. 42.120, CaptionCall LLC v. Ultratec Inc., Case IPR2014-00780, U.S. Pat. No. 6,603,835, Feb. 11, 2015, 68 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00540, U.S. Pat. No. 6,233,314, Mar. 3, 2015, 55 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00541, U.S. Pat. No. 5,909,482, Mar. 3, 2015, 77 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00542, U.S. Pat. No. 7,319,740, Mar. 3, 2015, 31 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00543, U.S. Pat. No. 7,555,104, Mar. 3, 2015, 29 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00544, U.S. Pat. No. 8,213,578, Mar. 3, 2015, 56 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00545, U.S. Pat. No. 6,594,346, Mar. 3, 2015, 41 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00549, U.S. Pat. No. 6,603,835, Mar. 3, 2015, 35 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Final Written Decision, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00550, U.S. Pat. No. 7,003,082, Mar. 3, 2015, 25 pages. |
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1 and 2 of U.S. Pat. No. 7,555,104 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 65 pages. |
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1 and 2 of U.S. Pat. No. 6,233,314 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 39 pages. |
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1 and 2 of U.S. Pat. No. 6,594,346 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 67 pages. |
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1-15 of U.S. Pat. No. 5,909,482 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 67 pages. |
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 7-11 of U.S. Pat. No. 8,213,578 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 67 pages. |
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1-8 of U.S. Pat. No. 6,603,835 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 66 pages. |
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1 of U.S. Pat. No. 7,003,082 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 51 pages. |
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 1 and 2 of U.S. Pat. No. 7,319,740 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 30, 2013, 67 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00550, U.S. Pat. No. 7,003,082 B2, Mar. 5, 2014, 13 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00543, U.S. Pat. No. 7,555,104, Mar. 5, 2014, 16 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00540, U.S. Pat. No. 6,233,314, Mar. 5, 2014, 17 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00545, U.S. Pat. No. 6,594,346, Mar. 5, 2014, 21 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00541, U.S. Pat. No. 5,909,482, Mar. 5, 2014, 32 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00544, U.S. Pat. No. 8,213,578, Mar. 5, 2014, 22 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00542, U.S. Pat. No. 7,319,740, Mar. 5, 2014, 17 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision—Institution of Inter Partes Review, CaptionCall LLC v. Ultratec Inc., Case IPR2013-00549, U.S. Pat. No. 6,603,835 B2, Mar. 5, 2014, 26 pages. |
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 6 and 8 of U.S. Pat. No. 6,603,835 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., May 19, 2014, 67 pages. |
CaptionCall L.L.C. Petition for Inter Partes Review of Claims 11-13 of U.S. Pat. No. 7,660,398 Under 35 U.S.C. 311-319 and 37 C.F.R. 42.100 Et Seq., Aug. 13, 2014, 64 pages. |
Prosecution History of U.S. Pat. No. 7,660,398, 489 pages. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,660,398, United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Aug. 13, 2014, 62 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1-29 of U.S. Pat. No. 8,917,822, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Jan. 29, 2015, 67 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Jun. 9, 2015, 66 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Instituting Review, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Sep. 8, 2015, 20 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Nov. 23, 2015, 65 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Contingent Motion to Amend, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Nov. 23, 2015, 39 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petitioner's Reply to Patent Owner Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Jan. 26, 2016, 29 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petitioner's Opposition to Patent Owner's Contingent Motion to Amend, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00636, U.S. Pat. No. 8,917,822, Jan. 26, 2016, 28 pages. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jan. 29, 2015, 65 pages. |
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jan. 26, 2016, 60 pages. |
Declaration of Ivan Zatkovich, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 108 pages. |
Declaration of Paul Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 37 pages. |
Declaration of Brenda Battat Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 61 pages. |
Declaration of Katie Kretschman, In Re: U.S. Pat. No. 8,917,822, Case IPR2015-00636, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 5 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1-30 of U.S. Pat. No. 8,908,838, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Jan. 29, 2015, 67 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Jun. 9, 2015, 65 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Instituting Review, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Sep. 8, 2015, 25 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Nov. 23, 2015, 65 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Contingent Motion to Amend, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Nov. 23, 2015, 38 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petitioner's Reply to Patent Owner Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Jan. 26, 2016, 29 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petitioner's Opposition to Patent Owner's Contingent Motion to Amend, CaptionCall LLC v. Ultratec Inc., Case IPR2015-00637, U.S. Pat. No. 8,908,838, Jan. 26, 2016, 28 pages. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jan. 29, 2015, 62 pages. |
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jan. 26, 2016, 62 pages. |
Declaration of Ivan Zatkovich, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 110 pages. |
Declaration of Paul Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 37 pages. |
Declaration of Brenda Battat Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 24, 2015, 61 pages. |
Declaration of Katie Kretschman, In Re: U.S. Pat. No. 8,908,838, Case IPR2015-00637, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Nov. 23, 2015, 5 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1-74 of U.S. Pat. No. 9,131,045, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01889, U.S. Pat. No. 9,131,045, Sep. 9, 2015, 66 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01889, U.S. Pat. No. 9,131,045, Dec. 18, 2015, 26 pages. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 9,131,045, Case IPR2015-01889, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Sep. 9, 2015, 63 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1-18 of U.S. Pat. No. 5,974,116, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01355, U.S. Pat. No. 5,974,116, Jun. 8, 2015, 65 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01355, U.S. Pat. No. 5,974,116, Sep. 18, 2015, 43 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Instituting Review, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01355, U.S. Pat. No. 5,974,116, Dec. 16, 2015, 34 pages. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 5,974,116, Case IPR2015-01355, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jun. 8, 2015, 45 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claim 1 of U.S. Pat. No. 6,934,366, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01357, U.S. Pat. No. 6,934,366, Jun. 8, 2015, 65 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01357, U.S. Pat. No. 6,934,366, Sep. 22, 2015, 37 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Instituting Review, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01357, U.S. Pat. No. 6,934,366, Dec. 18, 2015, 16 pages. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,934,366, Case IPR2015-01357, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jun. 8, 2015, 46 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claim 1 of U.S. Pat. No. 7,006,604, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01358, U.S. Pat. No. 7,006,604, Jun. 8, 2015, 65 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01358, U.S. Pat. No. 7,006,604, Sep. 22, 2015, 34 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Instituting Review, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01358, U.S. Pat. No. 7,006,604, Dec. 18, 2015, 12 pages. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,006,604, Case IPR2015-01358, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jun. 8, 2015, 45 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1-3 and 5-7 of U.S. Pat. No. 6,493,426, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01359, U.S. Pat. No. 6,493,426, Jun. 8, 2015, 65 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01359, U.S. Pat. No. 6,493,426, Sep. 22, 2015, 40 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Instituting Review, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01359, U.S. Pat. No. 6,493,426, Dec. 18, 2015, 17 pages. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,493,426, Case IPR2015-01359, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jun. 8, 2015, 47 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1-4 of U.S. Pat. No. 8,515,024, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01885, U.S. Pat. No. 8,515,024, Sep. 8, 2015, 35 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Patent Owner's Preliminary Response, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01885, U.S. Pat. No. 8,515,024, Dec. 17, 2015, 25 pages. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,515,024, Case IPR2015-01885, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Sep. 8, 2015, 23 pages. |
In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of Claims 1, 3, 6, 9-11, 13, 15, 19-23, 25-27, 34, and 36-38 of U.S. Pat. No. 7,881,441, CaptionCall LLC v. Ultratec Inc., Case IPR2015-01886, U.S. Pat. No. 7,881,441, Sep. 8, 2015, 61 pages. |
Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,881,441, Case IPR2015-01886, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Sep. 8, 2015, 29 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review of U.S. Pat. No. 10,742,805, CaptionCall LLC v. Ultratec Inc., Case IPR2021-01337, Aug. 24, 2021, 65 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Declaration of Benedict J. Occhiogrosso, Re: U.S. Pat. No. 10,742,805, CaptionCall LLC v. Ultratec Inc., Case IPR2021-01337, Jul. 29, 2021, 83 pages. |
Rodman, The Effect of Bandwidth on Speech Intelligibility, White Paper, Jan. 16, 2003, Copyright 2003 Polycom, Inc., 9 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Petition for Inter Partes Review for U.S. Pat. No. 10,587,751, CaptionCall, LLC v. Ultratec, Inc., Case IPR2020-01217, Jul. 1, 2020, 64 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Declaration of Benedict J. Occhiogrosso for U.S. Pat. No. 10,587,751, CaptionCall, LLC v. Ultratec, Inc., Case IPR2020-01217, Jun. 23, 2020, 106 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Decision Granting Institution of Inter Partes Review for U.S. Pat. No. 10,587,751, CaptionCall, LLC v. Ultratec, Inc., Case IPR2020-01217, Jan. 27, 2021, 24 pages. |
United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Judgment Granting Request for Entry of Adverse Judgment After Institution of Trial for U.S. Pat. No. 10,587,751, CaptionCall, LLC v. Ultratec, Inc., Case IPR2020-01217, Apr. 28, 2021, 3 pages. |
Declaration of Paul W. Ludwick, In Re: U.S. Pat. No. 6,594,346, Case IPR2013-00545, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 11, 2014. |
Declaration of Paul W. Ludwick, In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00542 and IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014. |
Declaration of Paul W. Ludwick, In Re: U.S. Pat. No. 7,319,740, Case IPR2013-00542 and IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014. |
Declaration of Paul W. Ludwick, In Re: U.S. Pat. No. 6,233,314, Case IPR2013-00540, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014. |
Declaration of Paul W. Ludwick, In Re: U.S. Pat. No. 5,909,482, Case IPR2013-00541, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014. |
Declaration of Paul W. Ludwick, In Re: U.S. Pat. No. 8,213,578, Case IPR2013-00544, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014. |
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014. |
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 7,319,740, Case IPR2013-00542, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014. |
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00545, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014. |
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 6,594,346, Case IPR2013-00545, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 12, 2014. |
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 6,233,314, Case IPR2013-00540, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014. |
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 5,909,482, Case IPR2013-00541, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014. |
Declaration of Paul W. Ludwick Regarding Secondary Considerations of Non-Obviousness, In Re: U.S. Pat. No. 8,213,578, Case IPR2013-00544, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, May 30, 2014. |
Declaration of Kelby Brick, Esq., CDI, In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,003,082, Case IPR2013-00550, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,594,346, Case IPR2013-00545, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00549, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 7,319,740, Case IPR2013-00542, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 6,233,314, Case IPR2013-00540, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 5,909,482, Case IPR2013-00541, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Supplemental Declaration of Benedict J. Occhiogrosso, In Re: U.S. Pat. No. 8,213,578, Case IPR2013-00544, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 7,003,082, Case IPR2013-00550, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 6,594,346, Case IPR2013-00545, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 8,213,578, Case IPR2013-00544, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 7,555,104, Case IPR2013-00543, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 5,909,482, Case IPR2013-00541, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 7,319,740, Case IPR2013-00542, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 6,233,314, Case IPR2013-00540, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Petitioner's Reply to Patent Owner's Response Under 37 C.F.R. 42.23, In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00549, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Jul. 7, 2014. |
Decision, CaptionCall's Request for Rehearing, In Re: U.S. Pat. No. 6,603,835, Case IPR2013-00549, In the United States Patent and Trademark Office Before the Patent Trial and Appeal Board, Apr. 28, 2014. |
Curtis et al., Doctor-Patient Communication on the Telephone, Can Fam Physician, 1989, 35:123-128. |
Choi, et al., Employing Speech Recognition Through Common Telephone Equipment, IBM Technical Disclosure Bulletin, Dec. 1995, pp. 355-356. |
Choi, et al., Splitting and Routing Audio Signals in Systems with Speech Recognition, IBM Technical Disclosure Bulletin, Dec. 1995, 38(12):503-504. |
Cook, A First Course in Digital Electronics, Published by Prentice-Hall, Inc., 1999, pp. 692-693. |
Cooper, R. J., Break Feature for Half-Duplex Modem, IBM Technical Disclosure Bulletin, vol. 17, No. 8, pp. 2386-2387, Jan. 1975. |
De Gennaro, et al., (Cellular) Telephone Steno Captioning Service, IBM Technical Disclosure Bulletin, Jul. 1992, pp. 346-349. |
Goodrich, et al., Engineering Education for Students with Disabilities: Technology, Research and Support, In Frontiers in Education Conference, 1993, 23rd Annual Conference ‘Engineering Education: Renewing America's Technology’ Proceedings, IEEE, pp. 92-97. |
Gopalakrishnan, Effective Set-Up for Performing Phone Conversations by the Hearing Impaired, IBM Technical Disclosure Bulletin, vol. 34, No. 78, pp. 423-426, 1991. |
IBM, Software Verification of Microcode Transfer Using Cyclic Redundancy Code Algorithm, IBM Technical Disclosure Bulletin, Dec. 1988, 31(7):149-153. |
IBM, Use of Cyclic Redundancy Code for Testing ROM and RAM in a Writeable Control Store, IBM Technical Disclosure Bulletin, Nov. 1990, 33(6A):219-220. |
Karjalainen, et al., Applications for the Hearing-Impaired: Evaluation of Finnish Phoneme Recognition Methods, Eurospeech, 1997, 4 pages. |
Kitai, et al., Trends of ASR and Its Applications in Japan, Third IEEE Workshop on Interactive Voice Technology for Telecommunications Applications, 1996, pp. 21-24. |
Kukich, Spelling Correction for the Telecommunications Network for the Deaf, Communications of the ACM, 1992, 35(5):80-90. |
Makhoul, et al., State of the Art in Continuous Speech Recognition, Proc. Natl. Acad. Sci. USA, 1995, 92:9956-9963. |
Microchip Technology, Inc., MCRF250, Contactless Programmable Passive RFID Device With Anti-Collision, 1998, DS21267C, pp. 1-12. |
Moskowitz, Telocator Alphanumeric Protocol, Version 1.8, Feb. 4, 1997. |
Oberteuffer, Commercial Applications of Speech Interface Technology: An Industry at the Threshold, Proc. Natl. Acad. Sci. USA, 1995, 92:10007-10010. |
Osman-Allu, Telecommunication Interfaces for Deaf People, IEE Colloquium on Special Needs and the Interface, IET, 1993, pp. 811-814. |
Paul, et al., The Design for the Wall Street Journal-based CSR Corpus, Proceedings of the Workshop on Speech and Natural Language, Association for Computational Linguistics, 1992, pp. 357-362. |
Rabiner, et al., Fundamentals of Speech Recognition, Copyright 1993 by AT&T, Published by Prentice Hall PTR, pp. 1, 6-9, 284-285, 482-488. |
Rabiner, Applications of Speech Recognition in the Area of Telecommunications, IEEE Workshop on Automatic Speech Recognition and Understanding, IEEE, 1997, pp. 501-510. |
Schmitt, et al., An Experimental Study of Synthesized Speech Intelligibility Using Text Created by Telecommunication Device for the Deaf (TDD) Users, IEEE Global Telecommunications Conference & Exhibition, 1990, pp. 996-999. |
Scott, Understanding Cyclic Redundancy Check, ACI Technical Support, Technical Note 99-11, 1999, 13 pages. |
Seltzer, et al., Expediting the Turnaround of Radiology Reports in a Teaching Hospital Setting, AJR, 1997, 168:889-893. |
Smith, R. L., ASCII to Baudot, Radio Electronics, pp. 51-58, Mar. 1976. |
Supnik, et al., Can You Hear Me?—DragonDictate for Windows Minces Words for Your Office, Originally Published in Computer Counselor Column of the May 1995 Issue of the Los Angeles Lawyer Magazine, http://www.supnik.com/voice.htm, accessed Aug. 7, 2012. |
Vaseghi, Chapter 14: Echo Cancellation, Advanced Digital Signal Processing and Noise Reduction, Second Edition, John Wiley & Sons, Ltd., 2000, pp. 396-415. |
Wactlar, et al., Informedia(TM): News-On-Demand Experiments in Speech Recognition, Proceedings of ARPA Speech Recognition Workshop, 1996, pp. 18-21. |
Wegmann, Final Technical Report on Phase I SBIR Study on “Semi-Automated Speech Transcription System” at Dragon Systems, Advanced Research Projects Agency Order No. 5916, 1994, 21 pages. |
Williams, A Painless Guide to CRC Error Detection Algorithms, 1993, 35 pages. |
Yamamoto, et al., Special Session (New Developments in Voice Recognition) (Invited Presentation), New Applications of Voice Recognition, Proceedings of the Acoustical Society of Japan, Spring 1996 Research Presentation Conference, pp. 33-36. |
Young, A Review of Large-Vocabulary Continuous-Speech Recognition, IEEE Signal Processing Magazine, 1996, pp. 45-57. |
Cyclic Redundancy Check, Source: http://utopia.knoware.nl/users/eprebel/Communication/CRC/index.html, 1998, 4 pages. |
PCT International Search Report and Written Opinion, PCT/US2015/017954, dated Aug. 17, 2015, 15 pages. |
Number | Date | Country |
---|---|---|
20230066793 A1 | Mar 2023 | US

Number | Date | Country |
---|---|---|
62979708 | Feb 2020 | US

 | Number | Date | Country |
---|---|---|---|
Parent | 17180702 | Feb 2021 | US
Child | 17984445 | | US