This disclosure relates to the analysis of audio recordings and specifically the detection of indicia of dictation in audio recordings generated as part of physician/patient encounters.
Physicians have generated recordings regarding patient encounters as a means for improving the accuracy of medical records. Typically, dictations are generated either contemporaneously with the provision of medical care or soon thereafter to preserve the accuracy of the physician's recollections of the encounter.
In asynchronous scribing, a person other than the physician (e.g., a scribe) takes a recording of the patient encounter and enters the information into the medical records on behalf of the physician. In a scribing environment generally, and in asynchronous scribing more specifically, it often happens that a physician provides some indication that certain portions of the recorded physician/patient encounter are directed at the scribe. Also, depending on the location of the dictation in the recorded audio, the scribe may be required to listen to the entire audio recording. In some situations, information provided earlier in the audio recording is rendered obsolete by dictations occurring later in the audio recording.
Existing systems are able to extract dictation from portions of speech that are limited only to dictation but perform poorly when the portions of speech transition between dictation and non-dictation. As a result, scribing systems tend to rely on the physician to identify which portions of the audio are dictation and which portions are not. This may include requiring the physician to provide an audio cue before reciting information the physician wants the scribe to enter into the medical records.
The present disclosure describes systems and techniques for automatically identifying portions of a recorded physician/patient encounter as dictation without relying on the physician to self-identify those portions of the recorded encounter as being dictation.
One goal of the disclosure is to improve the accuracy of systems that engage in dictation detection by making the dictation detection model more resilient to transitions between dictation and non-dictation.
Another goal of the disclosure is to improve the efficiency of the scribing process by allowing the scribe to navigate to identified portions of the audio recording in a non-linear way, which improves the efficiency of the scribing process generally and reduces the utilization of audio playback and editing systems more specifically.
Yet another goal of the disclosure is to improve the accuracy of the scribing process by automatically identifying portions of the audio recording that are intended to be included in electronic medical records without heavy reliance on the physician to self-identify portions of the audio recording as dictation.
One example scenario in which the system 100 may be deployed is in a setting where the physician 103B examines the patient 103A in an examination room 103, although it should be understood that the system 100 may be deployed in any setting where physician/patient encounters are recorded. It should be noted that, in some situations, the physician 103B may choose to begin the recording before the patient 103A enters the room 103 to provide medical history regarding the patient 103A, the reason for the visit, or for other reasons. Furthermore, the physician 103B may continue recording after the patient 103A leaves the room 103 for any number of reasons. It is also not uncommon for a physician 103B to verbalize findings of an examination while the physician 103B is conducting the examination. For example, a physician 103B may comment during an evaluation of a patient 103A's neck, describing irregularities found during the examination, such as swollen lymph nodes. As a result, dictations may occur at the beginning of the recording, at the end of the recording, and at any point in between.
The system 100 includes an ambient listening device 105. In general, the ambient listening device 105 can be any number of conventional recording devices, such as a Bluetooth® enabled microphone or a microphone embedded in another computing device, such as a smart phone or other mobile device, to name a few examples. According to particular implementations, the ambient listening device 105 may stream the captured audio to one or more dictation detection modules 101 stored on one or more computers in communication with networks 101A and 102A or may store the captured audio on the ambient listening device 105. In some implementations, the ambient listening device 105 may be configured to process the stored audio in the same manner as the one or more dictation detection modules 101 in the networks 101A and 102A or may be configured to transmit the stored audio to one or more computers in communication with networks 101A and 102A at some later time for processing.
In addition to the audio, the system 100 receives additional metadata from the ambient listening device 105, including information related to the type of ambient listening device 105 and other information that can be useful in detecting dictated audio. For instance, in some implementations, the ambient listening device 105 may be integrated with motion sensors, position sensors, and other sensors that can be used in determining where the ambient listening device 105 is relative to the physician 103B, the patient 103A, or both.
The system 100 may also be configured to receive information from the physician 103B that identifies the physician in some way. For instance, the system 100 may store one or more unique physician identifiers, one of which is assigned to the physician 103B. In some implementations, the system 100 may recall the physician identifier when the physician 103B provides his or her credentials to the system 100. For instance, the system 100 may accept a username and password from the physician 103B, may scan an RFID tag in possession of the physician 103B, or may take one or more biometric readings of the physician 103B to authenticate the physician 103B's identity, although other means for determining the physician identifier are also possible.
In general, the system 100 attempts to learn the speech patterns of the physician 103B to automatically detect when the physician is dictating medical information and when the physician is not. One problem that has been identified is that physicians as a group typically use a wide range of vocabulary to start dictations, but individual physicians tend to be somewhat consistent in how they begin dictations. As a result, implementations of the system 100 may be configured to learn when the physician 103B is introducing a dictation based on the historical tendencies of the physician 103B. For instance, the system 100 may use natural language processing techniques to aggregate data for particular phrases used by the physician 103B that do (or do not) suggest the start of a dictation in the audio recording to determine a probability that a particular spoken word or phrase is indicative of dictation. As another example, the system 100 can leverage other types of semantic and/or linguistic patterns in the audio recording to identify indicia of dictation. One example is that physicians tend to recite punctuation (e.g., using words like “comma,” “period,” and the like) when dictating, and those types of words typically do not arise when the physician 103B is not dictating. As used herein, the terms audio, audio recording, audio signal, and recorded audio signal are used interchangeably. That is, in general, the audio signal captured by the ambient listening device 105 is received and stored by the system in preparation for the techniques described herein. For instance, audio recordings 105A and 105B can be captured by the ambient listening device 105 and transmitted by the system 100 to storage device 107 contained within the computing network 101A.
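By way of illustration only, per-physician phrase statistics of this kind might be aggregated as in the following sketch; the class name, example phrases, physician identifier, and smoothing constant are hypothetical and not part of this disclosure.

```python
from collections import defaultdict

class PhraseDictationStats:
    """Tracks, per physician, how often a spoken phrase precedes a dictation."""

    def __init__(self, smoothing=1.0):
        self.smoothing = smoothing
        # counts[physician_id][phrase] = [times followed by dictation, total times observed]
        self.counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def observe(self, physician_id, phrase, started_dictation):
        entry = self.counts[physician_id][phrase.lower()]
        entry[1] += 1
        if started_dictation:
            entry[0] += 1

    def probability_of_dictation(self, physician_id, phrase):
        dictated, total = self.counts[physician_id][phrase.lower()]
        # Laplace-smoothed estimate of P(dictation follows | phrase, physician).
        return (dictated + self.smoothing) / (total + 2.0 * self.smoothing)

stats = PhraseDictationStats()
stats.observe("physician-103B", "note for the chart", started_dictation=True)
stats.observe("physician-103B", "how are you feeling", started_dictation=False)
print(stats.probability_of_dictation("physician-103B", "note for the chart"))
```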
That said, because physicians are not always consistent when introducing dictations, it may not be sufficient to analyze only the linguistic content of the audio recording as an indication of dictation. For example, in some implementations, the system 100 is configured to analyze the recorded audio signal to identify other indicia of dictation, such as the physician 103B's position relative to the ambient listening device 105, where a closer proximity to the ambient listening device 105 may be suggestive of dictation while a greater distance from the ambient listening device 105 may be suggestive of non-dictated conversation between the physician 103B and the patient 103A.
As will be described in more detail below, the system 100 is configured to segment the audio recording according to one or more determined probabilities associated with portions of the recorded audio signal to identify those portions of the audio signal that are dictations and those portions of the audio signal that are not dictations. Also, as will be described in more detail below, the system 100 is configured to use these generated segments in several ways. For example, the segments can be provided to a natural language processing module as training data to improve the overall accuracy of the system 100 in predicting which segments of a received audio recording are dictation and which segments are not. As another example, the system 100 can present these segments as a draft dictation to the scribe 104A for the scribe's review. For instance, in some implementations, a computing device 104 that the scribe 104A uses to listen to recorded audio can receive the segments from the computing network 101A. This allows, e.g., the scribe 104A to listen to the audio (such as audio 105A and 105B) and compare it to the draft dictation generated by the system 100. In some implementations, the scribe may make edits to the draft dictation to correct errors caused by the system 100 and ensure that accurate information is included in the EHR 102. As used herein, a scribe 104A is a human who reviews information for inclusion in the EHR 102 and should not be confused with a transcriber, which, as used herein, is a human who listens to recorded audio and generates transcriptions of the recorded audio.
Because the audio is presented in a segmented form, it allows the scribe 104A to easily navigate the audio recording and listen to those segments that appear to the scribe 104A as potentially including dictated information. This can be of particular importance because the scribe 104A can listen to the one or more identified dictations first as a means of contextualizing the entire conversation between the physician 103B and the patient 103A, which can reduce the number of scribing errors that the scribe 104A may need to revisit if the scribe 104A were to listen to the audio recording in a more linear fashion. For example, in some situations, information in earlier, non-dictated portions of the audio may be incorrect or inconsistent with dictations found later in the audio recording because, e.g., the physician 103B's opinion may change during examination or for other reasons. As a result, both the accuracy of the information entered into the EHR 102 and the utilization of the computing system 104 can be enhanced by allowing the scribe 104A to first review dictations before listening to the whole audio recording from start to finish.
In the first stage 201A, the dictation detection module 101 may first perform low-level feature extraction 203 on the audio. In general, the low-level feature extraction 203 is conducted on a feature vector describing the audio's characteristics. Stated differently, the system 100 may perform one or more transformations on the audio signal to conduct a spectral analysis of the audio signal to determine one or more characteristics of the audio signal that may be indicative of speech that is dictation or may be indicative of speech that is not dictation.
In some implementations, performing low-level feature extraction 203 may include determining mel-frequency cepstral coefficient (“MFCC”) features, Fourier transformed features, log-mel features, power features, zero crossing features, and other numeric representations that describe characteristics of the audio that are agnostic to the underlying linguistic meaning of the audio. That is, the one or more extracted features can be used to describe various non-linguistic characteristics of the audio signal. One example is that these low-level features may describe the loudness of the audio without regard to the linguistic meaning of the audio itself. Another example is that these low-level features may describe how fast someone is speaking, again without regard to what the speaker is actually saying. In one implementation, the dictation detection module 101 may determine the number of harmonic peaks in the audio signal waveform that are indicative of vowels or syllables spoken during a period of time to determine how fast the speaker is talking. That is, the closer together the harmonic peaks, the faster the rate of speech. This can be useful, e.g., because a high rate of speech may be associated with a dictation while a slower rate of speech may be indicative of speech that is not dictation. Stated differently, the low-level feature extraction 203 may be performed by analyzing the audio waveforms and quantifying the various energy peaks in the waveform.
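As a minimal sketch, such low-level, linguistics-agnostic features might be computed over short frames using an open-source audio library such as librosa; the frame length, hop size, sample rate, and feature set below are illustrative assumptions rather than requirements of this disclosure.

```python
import numpy as np
import librosa

def extract_low_level_features(path, frame_ms=25, hop_ms=10):
    """Compute frame-level descriptors of an audio file that ignore linguistic content."""
    y, sr = librosa.load(path, sr=16000)
    n_fft = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)

    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=n_fft, hop_length=hop, n_mels=40)
    log_mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=40))
    rms = librosa.feature.rms(y=y, frame_length=n_fft, hop_length=hop)             # loudness proxy
    zcr = librosa.feature.zero_crossing_rate(y, frame_length=n_fft, hop_length=hop)

    # One column per ~10 ms hop; rows are the stacked feature dimensions.
    return np.vstack([mfcc, log_mel, rms, zcr])
```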
In one example, low-level feature extraction 203 is performed on a feature vector that describes 10 ms of the audio signal, although it should be understood that particular feature vectors may be of any length and need not be a uniform length across all low-level features. The one or more low-level features that are extracted from the feature vector may be aggregated into higher-level features, which may also be analyzed by the system 100 in the first stage 201A. By way of example, a higher-level feature can be generated by analyzing how a particular low-level feature changes over time. In such instances, where a higher-level feature specifies how a low-level feature changes over time, the higher-level feature may be many orders of magnitude longer in duration than the low-level feature from which it is derived. It should also be appreciated that the system 100 can be configured to analyze other types of higher-level features and may even be configured to generate and analyze mid-level features.
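A minimal sketch of aggregating the 10 ms frame-level features into longer higher-level features follows, here as per-window statistics and deltas over an assumed one-second window; the window length and choice of statistics are illustrative assumptions.

```python
import numpy as np

def aggregate_higher_level(frame_features, frames_per_window=100):
    """Summarize 10 ms frame-level features over ~1 s windows (mean, std, and mean delta)."""
    n_dims, n_frames = frame_features.shape
    windows = []
    for start in range(0, n_frames - frames_per_window + 1, frames_per_window):
        chunk = frame_features[:, start:start + frames_per_window]
        delta = np.diff(chunk, axis=1)  # how each low-level feature changes over time
        windows.append(np.concatenate(
            [chunk.mean(axis=1), chunk.std(axis=1), delta.mean(axis=1)]))
    return np.array(windows)            # one row of higher-level features per window
```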
The dictation detection module 101 may perform pitch estimation 205 (or prosodic analysis), which measures how the fundamental frequency of the audio shifts. This may be useful in capturing when questions are asked (e.g., because prosodic frequency may go up at the end of a sentence that contains a question) or when the audio is suggestive of some amount of emotion being presented by the speaker. For instance, the prosodic frequency of the physician 103B may shift when the physician 103B conveys empathy to the patient 103A. Such changes may define a larger dynamic range in the prosodic frequency which may be indicative that the analyzed portion of speech is not dictation. Conversely, a lower dynamic range of the prosodic frequency may be indicative of dictation.
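For illustration, the prosodic dynamic range described above could be approximated with a pitch tracker such as librosa's pyin implementation; the frequency bounds and percentile spread used below are assumptions, not requirements of this disclosure.

```python
import numpy as np
import librosa

def prosodic_dynamic_range(y, sr):
    """Estimate the spread of the fundamental frequency over a portion of audio."""
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr)
    voiced_f0 = f0[voiced_flag & ~np.isnan(f0)]
    if voiced_f0.size == 0:
        return 0.0
    # A wide range suggests conversational or emotive speech; a narrow range suggests dictation.
    return float(np.percentile(voiced_f0, 95) - np.percentile(voiced_f0, 5))
```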
In the first stage 201A, the dictation detection module 101 may also analyze speaker distance 209. In some implementations, the dictation detection module 101 may attempt to determine when the speaker is near-field, far-field, or close-speaking. In one implementation, far-field is defined as being two meters or more away from the ambient listening device 105, close-speaking is defined when the speaker is holding the ambient listening device 105 close (e.g., less than 5 decimeters from the speaker's mouth), and near-field is defined as something greater than close-speaking but less than far-field (e.g., 5 decimeters to 2 meters). This can be a useful distinction because, e.g., it may be more likely that the physician 103B is dictating when he or she is determined to be close-speaking into the ambient listening device 105.
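A trivial sketch of mapping an estimated speaker-to-microphone distance onto the three categories described above, using the example thresholds of 5 decimeters and 2 meters:

```python
def distance_category(estimated_distance_m):
    """Bucket an estimated speaker-to-microphone distance, given in meters."""
    if estimated_distance_m < 0.5:   # less than 5 decimeters
        return "close-speaking"      # often suggestive of dictation
    if estimated_distance_m < 2.0:
        return "near-field"
    return "far-field"
```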
The various steps that the dictation detection module 101 performs during the first stage 201A of the workflow 200 are illustrative and should not be seen as exhaustive. That is, there are many other types of non-speech-recognition-based approaches that can be used to determine useful information regarding the portion of the analyzed audio. For instance, the dictation detection module 101 may also attempt to identify a change in the speaker based on differences in the statistical distribution of the waveform. In such implementations, one or more changes in the statistical distribution of the audio waveform may be suggestive of more than one speaker, which is indicative of a conversation between the physician 103B and the patient 103A and not indicative of dictation by the physician 103B.
In the first stage 201A, the dictation detection module 101 takes all of the information (e.g., the extracted low-level features and the aggregated higher-level features) determined in the previous steps, such as steps 203, 205, and 209 of the workflow 200, and provides this information as inputs to a first classification model 211. In some implementations, the first classification model 211 can be a combination of one or more of decision trees, maximum entropy models, and support vector machines. In other implementations, the first classification model 211 can be configured to include one or more neural network layers. In some implementations, the one or more neural network layers may include one or more deep neural networks (“DNN”), convolutional neural networks (“CNN”), long short-term memory (“LSTM”) neural networks, and conditional random fields (“CRF”) neural networks, to name a few examples.
In general, the first classification model 211 uses various techniques to determine one or more probabilities that the portion of the audio signal is or is not dictation. By way of example, in implementations where the first classification model 211 includes one or more neural network layers, machine learning techniques can be used to determine one or more probabilities that the portion of audio is or is not dictation. That said, regardless of the specific techniques used, the first classification model 211 can be configured to determine one or more probabilities that the analyzed audio signal is dictation based on determinations made during the first stage 201A, including, e.g., low-level feature extraction 203, pitch estimation 205, speaker distance 209, change of speaker, and the like. In addition, the first classification model 211 can determine one or more probabilities that the analyzed audio signal is or is not dictation based on higher-level features aggregated from the low-level features extracted by the system 100.
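As one hypothetical realization only, the first classification model 211 could be a small recurrent neural network over the aggregated feature vectors; the framework, layer sizes, and feature dimensionality below are illustrative assumptions rather than the model required by this disclosure.

```python
import torch
import torch.nn as nn

class DictationClassifier(nn.Module):
    """Maps a sequence of aggregated feature vectors to per-window dictation probabilities."""

    def __init__(self, feature_dim=256, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, features):                  # features: (batch, windows, feature_dim)
        encoded, _ = self.lstm(features)
        return torch.sigmoid(self.head(encoded))  # (batch, windows, 1) dictation probabilities

# Example: score 120 one-second windows of assumed 256-dimensional aggregated features.
model = DictationClassifier()
probabilities = model(torch.randn(1, 120, 256))
```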
In the second stage 201B, the dictation detection module 101 performs additional analysis of the portion of audio using automatic speech recognition techniques. In one implementation, only portions of the audio that are determined to be dictation in the first stage 201A are passed to the second stage 201B for processing. This can improve the accuracy of the determination because automatic speech recognition models that are trained on detecting dictation can be confused generally when analyzing portions of audio that are not dictation (e.g., a conversation between the physician 103B and the patient 103A), and particularly where there are transitions between dictation and non-dictation in the audio.
In the second stage 201B, the automatic speech recognition engine trained on dictation 213 can be used to extract information from the portions of the audio that were determined to be dictation by the first stage 201A. In general, the automatic speech recognition engine trained on dictation 213 can be used to generate more accurate speech recognition results from the one or more portions of the audio that were determined to be dictation by the first stage 201A. For instance, in one example, the speech results returned by the automatic speech recognition engine trained on dictation 213 can be provided to a natural language understanding engine 215. The natural language understanding engine 215 can extract information from the identified speech that can be provided to the scribe 104A (e.g., as analyzed audio output 225A and 225B). In some implementations, the information extracted may be suggestive of other information that the system 100 should provide to the scribe 104A for entry into the EHR 102. In one example, if the dictated speech includes an indication that the patient 103A is a smoker, the natural language understanding engine 215 may be configured to extract medications that it identifies in the dictated speech for inclusion in the EHR 102.
In another example, the system 100 can be configured with one or more templates that are preferred by the physician 103B for medical reporting purposes. Using the natural language understanding engine 215, the system 100 could, by way of example, automatically access a relevant template identified by the natural language understanding engine 215, populate the template with relevant information extracted from the audio signal (as identified by the natural language understanding engine 215), and present the populated template to the scribe 104A for the scribe 104A's review and eventual inclusion in the EHR 102. That is, the system 100 may include any number of templates that are associated with the physician 103B (e.g., according to the physician identifier) and select a relevant template to be automatically populated by the system 100 after conducting automatic speech recognition and natural language understanding analysis of the portions of the audio that are determined to be dictation.
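A minimal sketch of the template lookup and population flow described above follows; the template text, field names, and physician identifier are hypothetical and used only for illustration.

```python
# Hypothetical templates keyed by (physician identifier, template name).
TEMPLATES = {
    ("dr-103B", "neck_exam"): "Neck exam: {findings}. Lymph nodes: {lymph_nodes}.",
}

def populate_template(physician_id, template_name, extracted_fields):
    """Fill a physician-preferred template with fields extracted by the NLU engine."""
    template = TEMPLATES[(physician_id, template_name)]
    # Fields the NLU engine did not extract are left blank rather than invented.
    fields = {key: extracted_fields.get(key, "") for key in ("findings", "lymph_nodes")}
    return template.format(**fields)

draft = populate_template(
    "dr-103B", "neck_exam",
    {"findings": "mild swelling", "lymph_nodes": "enlarged on the left"})
```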
The system 100 may utilize the natural language understanding engine 215 in other ways to suggest information to the scribe 104A that may be entered into the EHR 102. For instance, if the detected dictation was related to a tumor identified in a scan, the natural language understanding engine 215 may suggest one or more headers to be added to the EHR 102 that introduces such things as the type of tumor, the size of the tumor, the location of the tumor, and other characteristics that are established in the medical literature to quantify a tumor. In yet another example, the natural language understanding engine 215 can provide suggestions to the scribe 104A regarding where to insert information into the EHR 102. For instance, after suggesting the inclusion of a “Tumor Type” header in the EHR 102 on the basis of extracting information for the dictated portions of speech, the natural language understanding engine 215 may identify the tumor type as “lymphoma” based on additional extracted speech and suggest that “lymphoma” be included under the “Tumor Type” header.
Stated differently, the dictation detection module 101 can translate one or more words or phrases in the dictated portion of the audio signal (such as audio signals 105A and 105B) into a format (such as analyzed audio output 225A and 225B) that is conducive to scribing using the automatic speech recognition engine trained on dictation 213. That is, the dictation detection module 101 can translate one or more words or phrases from the audio signal into a format that is used by the scribe 104A when making modifications to the EHR 102. In some implementations, the second classification model 214 can generate a draft of the text extracted from the audio for inclusion in the EHR 102. As will be described below, this draft can be reviewed and edited by the scribe 104A to finalize entries into the EHR 102.
In addition, the system 100 can use the draft data to adapt both the first classification model 211 and the second classification model 214, according to particular implementations. For instance, the first classification model 211 and second classification model 214 can be adapted using implicit feedback. If, for example, the scribe 104A incorporates the portion of audio identified as dictation in the EHR 102, this strongly suggests that the identified audio portion was dictation. In response, the system 100 can bias the first classification model 211 in such a way that future processing by the first classification model 211 may assign a higher probability that the portion of audio is dictation when later analyzing a similar audio signal. Conversely, if the scribe 104A does not incorporate portions of audio identified as dictation in the EHR 102, the system 100 may bias the first classification model 211 in such a way that future processing by the first classification model 211 may assign a lower probability that the portion of audio is dictation when later analyzing a similar audio signal.
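A sketch of converting such implicit feedback into weakly labeled training examples for the first classification model 211 is shown below; the labeling and weighting scheme is an assumption made for illustration, not a requirement of this disclosure.

```python
def feedback_to_training_examples(segments, scribe_used_segment):
    """Convert scribe behavior into weakly labeled examples for re-training.

    segments: list of (features, predicted_probability) pairs for segments shown to the scribe.
    scribe_used_segment: list of booleans; True if the scribe incorporated the segment into
    the EHR (implicit agreement), False otherwise (implicit disagreement).
    """
    examples = []
    for (features, predicted), used in zip(segments, scribe_used_segment):
        label = 1.0 if used else 0.0
        # Weight examples more heavily where the model's prediction disagreed with the scribe,
        # so the feedback biases future probabilities in the direction described above.
        weight = max(abs(label - predicted), 0.1)
        examples.append((features, label, weight))
    return examples
```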
Regarding the second classification model 214, the draft data can be used to improve the accuracy of the second classification model 214 using one or more conventional approaches. For instance, one example technique that can be used to adapt the automatic speech recognition engine trained on dictation 213 is described in U.S. Pat. No. 9,552,809.
The system 100 may also be configured with one or more feedback models to adapt the first classification model 211 and the second classification model 214 using explicit feedback. For instance, the system 100 can configure and display a graphical user interface to the scribe 104A that includes one or more graphical user interface components that allow the scribe 104A to rate the accuracy of the identified dictation. In this way, the system can bias the first classification model 211 and/or the second classification model 214 based on the scribe 104A's feedback.
In some implementations, the system 100 can provide a selected physician model to an audio classifier and augmentor 301 that applies the selected physician model to the audio signal. In some implementations, the audio classifier and augmentor 301 is the first classification model 211 that uses the selected physician model as part of the first stage 201A analysis of the audio signal. In other words, the audio classifier and augmentor 301 can use the selected physician model in the collection of physician models 303 to determine a probability that a portion of audio is indicative of dictation according to characteristics of the selected physician model.
The system 100 then generates a graphical user interface 311 that includes one or more different representations of the segmented audio portions.
The graphical user interface 311 may also include a preview of one or more words in the draft dictation segment as shown by preview areas 317A, 317B, and 317C. The scribe 104A can expand a preview area to read the draft dictation generated by the dictation detection module 101. The graphical user interface 311 may also include a user interface component that allows the scribe 104A to copy the draft text onto a clipboard of the computing device 104 and allows the scribe 104A to paste the copied text into the correct portion(s) of the EHR 102. In some implementations, portions of the EHR 102 may include binary check boxes or other non-textual information. The system 100 may provide the scribe with a similar data representation that the scribe 104A can select (or de-select) after reviewing a particular draft dictation segment to have the system 100 make a similar modification to the EHR 102. For instance, a patient's status as a smoker may be represented as a check box (checked if a smoker, unchecked if a non-smoker). If the automatic speech recognition engine 213 in combination with the natural language understanding engine 215 determines that a portion of the dictation includes a discussion regarding the patient's status as a smoker, the system 100 may provide a check-box prompt to allow the scribe 104A to check the relevant boxes in the EHR 102 that correspond to this fact. This reduces the resource utilization of the system 100 because the scribe does not need to search the EHR 102 for the one or more relevant check boxes and can therefore complete his or her scribing tasks more efficiently.
Furthermore, as described above, the system 100 can be configured to select and populate one or more templates for a particular physician according to information identified by the natural language understanding engine 215. In such implementations, the graphical user interface 311 may include a template preview showing the scribe 104A a template stored in the selected physician model in the collection of physician models 303. The system 100 may also preview for the scribe 104A a populated template according to the information the natural language understanding engine 215 identified for inclusion in the selected template.
The graphical user interface 311 may also include a display area for presenting a waveform representation 321 of the audio signal to demonstrate to the scribe 104A what portion of the audio is currently being played and one or more controls 319 to control playback of the audio. For instance, the graphical user interface 311 may present a play button 319A, a pause button 319B, and a skip back button 319C. In some implementations, the skip back button 319C is automatically configured to skip back to the last spoken word, as identified and time stamped by the dictation detection module 101 when processing the audio signals as described elsewhere in this disclosure. In other implementations, the skip back button 319C can be configured to skip back to the beginning of the selected dictation segment, skip back to the beginning of the previous dictation segment in the analyzed audio output (e.g., analyzed audio output 225A and 225B), or configured in other ways to allow the scribe 104A efficient means of navigating playback.
As described above, the scribe 104A may wish to modify certain portions of the draft dictation segments 317A, 317B, 317C, or the EHR 102 (including prepopulated templates) in response to reviewing the selected audio portion. In some implementations, the system 100 can capture these changes and use the changes to improve the accuracy of the underlying physician model maintained in the collection of physician models 303. For instance, similar to how implicit and explicit feedback can be used by the system 100 to bias the first classification model 211 and the second classification model 214, the system 100 can also use implicit and explicit feedback to bias, e.g., the probability that the physician associated with the physician model is dictating when stating punctuation. That is, by way of example, changes to the physician 103B's physician model may be stored by the system 100 in the collection of physician models 303 and used the next time the system 100 analyzes audio segments generated during an encounter between the physician 103B and his or her patients.
As described above, the system 100 may perform signal analysis of the audio signal in a first stage 201A that does not include automatic speech recognition. The system 100 may also perform a different first stage analysis that does include automatic speech recognition (as illustrated by first stage 201C).
In some implementations, during the first stage 201C, the system 100 may identify regions within the audio where people are speaking using a voice activity detection module 407A. For instance, the first stage 201C can be configured to perform voice activity detection using an adaptive signal-based approach or a model-based approach using Gaussian mixture models or neural networks. Once the audio is processed to identify those portions of audio that include speech, the system 100 may process the identified portions of audio with a speaker diarisation module 407B, which allows the system to determine whether the speaker is the physician 103B, the patient 103A, or someone else (e.g., a physician's assistant or a family member of the patient). For instance, the speaker diarisation module 407B subdivides the audio signal into one or more segments (e.g., using MFCC features) and computes a single Gaussian for each segment. The speaker diarisation module 407B may then perform a bottom-up clustering by merging neighboring segments into larger segments. The speaker diarisation module 407B may continue to merge segments until the loss in likelihood is greater than a threshold value. In addition, the system 100 can be configured to leverage information contained in the physician model 409 stored in the collection of physician models 303 to improve the accuracy of one or both of the voice activity detection module 407A and the speaker diarisation module 407B. For instance, the system 100 can match acoustic markers stored in the physician model 409 with acoustic waveforms in the audio signal to determine or otherwise modify a probability determination that the physician is speaking in a portion of the audio signal.
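A simplified sketch of the bottom-up merging described for the speaker diarisation module 407B follows, using a single Gaussian per segment and a likelihood-loss stopping criterion; the regularization constant and threshold value are illustrative assumptions, and each segment is assumed to contain several MFCC frames.

```python
import numpy as np
from scipy.stats import multivariate_normal

def segment_loglik(frames):
    """Log-likelihood of a segment's MFCC frames under a single full-covariance Gaussian."""
    mean = frames.mean(axis=0)
    cov = np.cov(frames, rowvar=False) + 1e-6 * np.eye(frames.shape[1])
    return multivariate_normal(mean, cov).logpdf(frames).sum()

def merge_neighbors(segments, max_loss=50.0):
    """Greedily merge adjacent segments until merging would cost more than max_loss."""
    segments = list(segments)  # each entry: an (n_frames, n_mfcc) array
    while len(segments) > 1:
        losses = []
        for i in range(len(segments) - 1):
            a, b = segments[i], segments[i + 1]
            # Loss in likelihood from modeling both segments with one Gaussian.
            losses.append(segment_loglik(a) + segment_loglik(b) - segment_loglik(np.vstack([a, b])))
        best = int(np.argmin(losses))
        if losses[best] > max_loss:
            break
        neighbor = segments.pop(best + 1)
        segments[best] = np.vstack([segments[best], neighbor])
    return segments
```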
In some implementations, the results of the voice activity detection module 407A and the speaker diarisation module 407B are provided to the automatic speech recognition engine 407C as an initial pass. The pass using the automatic speech recognition engine 407C in the first stage 201C may improve system performance generally by allowing the system 100 to additionally train or otherwise adapt the automatic speech recognition engine 407C in the first stage 201C and improve the accuracy of extracted words and concepts when the system 100 uses the automatic speech recognition engine 213 to analyze the portions of speech determined to be dictation in the second stage 201B. That said, while automatic speech recognition engine 407C is labeled separately from automatic speech recognition engine 213, it is contemplated that the models may in practice be the same according to particular implementations. But it should be appreciated that some implementations may configure engine 407C differently from engine 213 to leverage the fact that automatic speech recognition engine 407C may not have the benefit of receiving determinations from the entire first classification model 211. Furthermore, as stated above, it should be appreciated that the voice activity detection module 407A and the speaker diarisation module 407B can provide inputs to the first classification model 211 during the first stage 201A that does not use automatic speech recognition. In other words, there is nothing that requires the voice activity detection module 407A and the speaker diarisation module 407B to be used only in connection with automatic speech recognition.
In some implementations, portions of the first stage 201A and second stage 201B may be reversed. Specifically, the voice activity detection 407A can occur after automatic speech recognition (using either automatic speech recognition engine 213 or engine 407C) and not before. As a result, it should be appreciated that the order of steps described is illustrative and should not be seen as limiting the manner in which the audio signal is processed.
In some implementations, the ambient listening device 105 may be a multiple-microphone device. In such implementations, in addition to analyzing speaker distance (in either steps 209 or 415), the system can measure the direction of arrival of the audio. Typically, direction of arrival is represented as an angle measured by determining the direction in which the speaker's voice propagates incident to the multi-microphone device. This determination allows the system 100 to estimate where the speaker is located, which aids in determining who is speaking.
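By way of illustration, for a two-microphone device the direction of arrival could be estimated from the time difference of arrival using a GCC-PHAT style cross-correlation; the microphone spacing, sample rate, and function below are assumptions rather than the method required by this disclosure.

```python
import numpy as np

def direction_of_arrival(ch1, ch2, sr=16000, mic_distance_m=0.1, speed_of_sound=343.0):
    """Estimate the arrival angle (degrees from broadside) for a two-microphone array."""
    n = len(ch1) + len(ch2)
    cross_spec = np.fft.rfft(ch1, n) * np.conj(np.fft.rfft(ch2, n))
    cross_spec /= np.abs(cross_spec) + 1e-12             # PHAT weighting
    cc = np.fft.irfft(cross_spec, n)
    max_shift = max(1, int(sr * mic_distance_m / speed_of_sound))
    cc = np.concatenate([cc[-max_shift:], cc[:max_shift + 1]])
    tdoa = (np.argmax(np.abs(cc)) - max_shift) / sr       # time difference of arrival (seconds)
    # Clamp before arcsin to guard against numeric overshoot.
    return float(np.degrees(np.arcsin(np.clip(tdoa * speed_of_sound / mic_distance_m, -1.0, 1.0))))
```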
In some implementations, the system 100 can be configured to compute WN, represented by element 431, which is defined as the word/token sequence of segment N consisting of n words/tokens. In other words, WN=w1(S1,E1) w2(S2,E2) . . . wn(Sn,En). According to this representation, w1(S1,E1) is the first word/token, S1 is the start time of the first word/token, and E1 is the end time of the first word/token. As a specific example, the system 100 may identify the words “what brings you in Wyatt” in an audio segment starting at 2.7 seconds into the audio recording and ending at 3.45 seconds into the audio recording. The system 100 may internally represent this information using the WN notation, e.g.: “what (2.70, 2.86) brings (2.87, 3.02) you (3.03, 3.12) in (3.18, 3.28) Wyatt (3.33, 3.45).” Accordingly, w1(S1, E1) would be the word “what,” which starts at 2.70 seconds in the audio and ends at 2.86 seconds, w2(S2,E2) would be the word “brings,” starting at 2.87 seconds in the audio and ending at 3.02 seconds in the audio, etc.
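The WN representation can be illustrated directly using the example timings above; the data structure below is merely one convenient encoding and is not mandated by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class TimedToken:
    word: str
    start: float  # seconds into the audio recording
    end: float

# W_N for the example segment "what brings you in Wyatt" (2.70 s to 3.45 s).
W_N = [
    TimedToken("what", 2.70, 2.86),
    TimedToken("brings", 2.87, 3.02),
    TimedToken("you", 3.03, 3.12),
    TimedToken("in", 3.18, 3.28),
    TimedToken("Wyatt", 3.33, 3.45),
]
segment_start, segment_end = W_N[0].start, W_N[-1].end  # 2.70, 3.45
```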
In some implementations, the system 100 can be configured to determine the value of rho = log(P_LMDict(Wi:e)/P_LMNotDict(Wi:e)), represented by elements 435 and 437. In general, rho represents the log-ratio of language model probabilities computed based on identified spoken words (Wi:e) extracted from the audio signal. According to one implementation, a first language model is trained to model word usage in dictation and provides a probability for the word sequence under the condition that it was dictated (specified by LMDict), while a second language model computes the probability for the word sequence under the condition that the word sequence was not dictated but conversational (specified by LMNotDict). If LMDict assigns the word sequence a larger probability than LMNotDict, rho will be positive; otherwise, rho will be negative. In some implementations, the system 100 identifies areas indicative of dictation when the system evaluates rho to be greater than 0. In some implementations, the system 100 may select one or more portions of the audio signal that have rho greater than a separate threshold tau or may perform other filtering on the portions of audio that are determined to have a rho value greater than zero.
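A sketch of the rho computation and thresholding follows, under the assumption that each language model exposes a log-probability scoring function; that interface is hypothetical and used only for illustration.

```python
def rho(words, lm_dict, lm_not_dict):
    """Log-ratio of dictation vs. non-dictation language model probabilities for a word sequence.

    lm_dict and lm_not_dict are assumed to expose log_prob(words) -> float, the
    log-probability of the word sequence under that model (a hypothetical interface).
    """
    return lm_dict.log_prob(words) - lm_not_dict.log_prob(words)

def select_dictation_segments(segments, lm_dict, lm_not_dict, tau=0.0):
    """Keep the segments whose rho exceeds the threshold tau (tau = 0 keeps all positive rho)."""
    return [words for words in segments if rho(words, lm_dict, lm_not_dict) > tau]
```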
It should be appreciated that the system 100 configured in accordance with the present disclosure may perform a computer-implemented process using one or more of the following steps, according to particular implementations. The system 100 can be used to train an initial first classification model 211 stored in a computer memory of one or more computing devices that make up the system 100. The first classification model 211 is configured to process audio data to identify one or more audio segments that are indicative of dictation without using automatic speech recognition. The system 100 can be used to train an initial second classification model 214 stored in the computer memory of the one or more computing devices that make up the system. The second classification model 214 is configured to process audio data using automatic speech recognition. The system 100 can receive audio data from a recording device 105. The system 100 can store the received audio data in the computer memory. The system 100 can analyze the stored audio data using the first classification model 211 to identify one or more segments in the audio data that are indicative of dictation. And the system 100 can analyze the one or more identified segments that are indicative of dictation using the second classification model 214 to extract one or more features from the one or more identified segments.
In some implementations, when analyzing the audio data using the first classification model 211, the system 100 may perform additional steps, including analyzing one or more low-level features 203, pitch estimation 205, speaker distance 209, and change of speaker. The system 100 can then provide the results of the analyzing to the first classification model. Using the first classification model 211, the system 100 can determine one or more probabilities for one or more segments in the audio data, wherein the one or more probabilities describe the likelihood that each of the one or more segments in the audio data is dictation. And the system 100 can select the one or more segments with a determined probability higher than a predetermined threshold as the identified one or more segments in the audio data that are indicative of dictation.
In some implementations, the low-level feature extraction 203 is configured to extract one or more mel-frequency cepstral coefficient features, Fourier transformed features, log-mel features, power features, and zero crossings to identify one or more low-level features. In some implementations, the system 100 can aggregate the results of analyzing one or more low-level features, pitch estimation 205, speaker distance 209, and change of speaker into one or more higher-level features. For instance, the system 100 can aggregate one or more low-level features into one or more higher-level features that describe how a respective low-level feature changes over time. In some implementations, the recording device 105 is a multiple-microphone device and speaker distance is determined using stereoscopic information received from the recording device 105.
In some implementations, the first classification model 211 is a first neural network. For instance, the first classification model 211 can be a neural network that is configured to include one or more feed-forward layers, one or more convolutional neural network layers, one or more long short-term memory neural network layers, and one or more conditional random field neural network layers. In some implementations, the second classification model 214 is a second neural network.
In some implementations, the system 100 can also generate a graphical user interface (e.g., graphical user interface 311) that presents the one or more identified segments that are indicative of dictation to a user and includes one or more of the features extracted from the one or more identified segments. The system 100 can present the graphical user interface on a display device of the one or more computing devices, such as computing device 104. The system 100 can receive input from the one or more computing devices that represents agreement or disagreement with the extracted features from the one or more identified segments. The system 100 can store the information in an EHR 102 based on the received input. And the system 100 can adapt the first classification model and the second classification model using the received input as additional training data and store the adapted first classification model and the adapted second classification model. In some implementations, the extracted features are a plurality of textual representations corresponding to a plurality of identified spoken words in the one or more identified segments. In some implementations, the received input is implicit input derived from one or more actions of a human user using the one or more computing devices. For instance, the one or more activities the scribe 104A performs on the computing device 104 can be interpreted by the system 100 as implicit agreement or implicit disagreement with the identified words that are indicative of dictation. In some implementations, the received input is explicit input from the scribe 104A using the computing device 104. For instance, the scribe 104A can provide input into the computing device 104 that corresponds to a correction to at least one textual representation in the plurality of textual representations, and a rating for at least one textual representation in the plurality of textual representations.
The system 100 can also use the adapted first classification model to train a new first classification model stored in the computer memory of the one or more computing devices that is configured to process audio data to identify one or more audio segments that are indicative of dictation without using automatic speech recognition. The system 100 can also use the adapted second classification model 214 to train a new second classification model 214 stored in the computer memory of the one or more computing devices that is configured to process audio data using automatic speech recognition. The system 100 can analyze the stored audio data using the new first classification model to identify one or more segments in the audio data that are indicative of dictation. And the system 100 can analyze the one or more identified segments that are indicative of dictation using the new second classification model 214 to extract one or more features from the one or more identified segments. In some implementations, the system 100 can generate a graphical user interface, present the graphical user interface, receive input from the one or more computing devices, and store information in the EHR 102 as described above to adapt at least one of the new first classification model and the new second classification model 214 using the received input as additional training data.