The subject matter of this invention relates to extracting and processing clinical information from unstructured data and image data collected during a procedure.
Doctors and other medical professionals (i.e., clinicians) spend large amounts of time documenting information obtained from medical procedures. This process includes dictating information, reviewing dictation, editing reports and letters, entering information into software applications, etc. The end result is that medical professionals have less time to spend on patient care.
Aspects of the disclosure provide an artificial intelligence (AI) platform for extracting clinical information from speech and image data collected during a procedure and generating structured information. The resulting information may, for example, be utilized to generate a report, populate an electronic medical records (EMR) database, provide quality-of-care indicators, etc. The AI platform utilizes a text classifier with natural language processing, pattern recognition, next word prediction and image classification to integrate and generate structured data.
In one aspect, an AI platform for processing information collected during a medical procedure is provided, comprising: an image processing system that processes images captured during the procedure using a trained classifier to identify image-based quality-of-care indicators (QIs); a natural language (NL) processing system that processes captured speech uttered during the procedure and includes: converting the speech into text; parsing the text into sentences; performing a search and replace on predefined text patterns; identifying text-based QIs in the sentences; classifying sentences into sentence types based on a trained model; and updating sentences by integrating the image-based QIs with text-based QIs; and an output module configured to output structured data that includes sentences organized by sentence type and images organized by image-based QIs.
In another aspect, a method for processing information collected during a medical procedure is provided, comprising: capturing images and speech during a medical procedure; processing the images using a trained classifier to identify image-based quality-of-care indicators (QIs); converting the speech into text; parsing the text into sentences; performing a search and replace on predefined text patterns in the sentences; identifying text-based QIs in the sentences; classifying sentences into sentence types using a trained model; updating sentences by integrating the image-based QIs with text-based QIs; and outputting structured data that includes sentences organized by sentence type.
In a further aspect, a system is provided comprising: a camera for collecting image data during a medical procedure; a microphone for collecting speech during the medical procedure; an image processing system that processes image data using a trained classifier to identify image-based quality-of-care indicators (QIs); a natural language (NL) processing system that processes captured speech uttered during the procedure and is configured to: convert the speech into text; parse the text into sentences; perform a search and replace on predefined text patterns; identify text-based QIs in the sentences; classify sentences into sentence types based on a trained model; and update sentences by integrating the image-based QIs with text-based QIs; and an output module configured to output structured data that includes sentences organized by sentence type and images organized by image-based QIs.
Other aspects may include one or more of the following. The system or methods wherein the procedure includes a colonoscopy and the image-based QIs include landmarks involving at least one of: a cecum, a rectum, an ascending colon, a descending colon, and a hemorrhoid; wherein the image-based QIs further include at least one of: polyps detected, polyp size, and histology; wherein performing a search and replace on predefined text patterns includes using regular expressions (regex) to identify patterns; wherein the procedure involves detecting and/or analyzing lesions; wherein identified patterns are replaced with standardized medical expressions; wherein the NL processing system further includes filtering out irrelevant sentences; wherein the NL processing system further includes providing an editor for displaying and editing sentences, wherein the editor includes a next word prediction system; wherein the next word prediction system uses a Markov Chain Algorithm and a model trained on a database of medical records; wherein the output module is configurable to output a medical report, an Electronic Medical Record (EMR), or a QI registry entry; and/or wherein the output module is configurable to output a medical report with sections organized by sentence type.
Additional aspects include a method for processing content being collected during a gastrointestinal (GI) procedure, comprising: displaying a live video feed from a scope within a GI tract of a patient; capturing and displaying an image from the live video feed in response to a natural language (NL) command recognized during the GI procedure with an artificial intelligence (AI) language processor; extracting a label and a GI location associated with the image from uttered NL using the AI language processor during the GI procedure; annotating the image with the label and GI location while the image is displayed during the GI procedure; and displaying a GI tract replica during the procedure, wherein the replica includes a visual indicator that maps the GI location of the image in the GI tract.
Still further aspects include a system for processing content during a gastrointestinal (GI) procedure, comprising: a memory; and a processor coupled to the memory and configured to perform processing that includes: displaying a live video feed from a scope within a GI tract of a patient; capturing and displaying an image from the live video feed in response to a natural language (NL) command recognized during the GI procedure with an artificial intelligence (AI) language processing system; extracting a label and a GI location associated with the image from uttered NL using the AI language processing system during the procedure; annotating the image with the label and GI location while the image is displayed during the GI procedure; and displaying a GI tract replica during the GI procedure, wherein the replica includes a visual indicator that maps a GI location of the image in the GI tract.
Other aspects include a method for generating clinical predictions, comprising: displaying a reporting module graphical user interface (GUI) configured to generate post-operative reports; generating predictions in the reporting module GUI, wherein the predictions include a predictive diagnosis, suggested recommendations, and CPT codes, and wherein the predictions are generated with an artificial intelligence (AI) model; wherein the AI model includes a generative language model trained on previously published biomedical research articles and historical clinical data; and wherein training of the AI model includes: obtaining the historical clinical data; generating preprocessed data by editing the historical clinical data to remove unnecessary punctuation and to remove privacy details; and utilizing the preprocessed data to further train the generative language model.
Additional aspects may include any of the following in combination with any of the other aspects: extracting biopsy and jar information associated with a removed tissue specimen during the procedure from uttered NL; printing a label and pathology requisitions using the biopsy and jar information for the removed tissue specimen; selecting an annotated image to appear in a post-operative report during the procedure based on uttered NL captured by the AI language processor; displaying a reporting module graphical user interface (GUI); generating predictions in the reporting module GUI, wherein the predictions include a predictive diagnosis, suggested recommendations, and recommended CPT and ICD codes, and wherein the predictions are generated with an artificial intelligence (AI) model; wherein the predictions are generated in response to a set of inputs to the AI model that include an age of the patient, clinical findings, and a procedure name; wherein the AI model includes a generative language model trained on previously published biomedical research articles and historical GI clinical data; and wherein the AI language processor includes an automatic speech recognition system and an automated text analysis system.
These and other features of this invention will be more readily understood from the following detailed description of the various aspects of the invention taken in conjunction with the accompanying drawings in which:
The drawings are not necessarily to scale. The drawings are merely schematic representations, not intended to portray specific parameters of the invention. The drawings are intended to depict only typical embodiments of the invention, and therefore should not be considered as limiting the scope of the invention. In the drawings, like numbering represents like elements.
Embodiments of the present disclosure describe a platform for collecting and processing information generated during a medical procedure, such as a gastroenterology procedure. Although illustrative embodiments include a colonoscopy procedure, it is understood that the solution could be applied to any clinical procedure, including those that analyze and detect lesions. The platform provides a technical solution to address the time-consuming process of manually managing, analyzing, integrating and reporting information resulting from procedures. In certain aspects, an AI platform is provided that will automate and streamline information processing of mixed media clinical data collected during a procedure, including audio (e.g., natural language “NL” speech uttered by a doctor during a procedure) and video (e.g., streamed images captured by a camera during a procedure). The platform outputs relevant clinical data in a structured format and eliminates manual actions, such as point and click workflows from clinical application settings. The platform uses various techniques to identify relevant information and remove unnecessary data, e.g., unwanted sentences uttered during a procedure. The resulting structured clinical data can for example be used for clinical report generation, statistical analysis, registry reporting, etc.
After the procedure is completed, the clinician is able to interact with the data integration tool 20 via a user interface 22. Data integration tool 20 includes an NL processing system 24 configured to process NL stored in the unstructured NL datastore 14 and generate structured clinical data. Illustrative processes, which are described in further detail herein, include error/misspelling replacement, punctuation evaluation, sentence generation, keyword/expression replacement, noise filtering, sentence post processing, and user editing.
Data integration tool 20 also includes a QI processing/data integrator 26 that integrates image-based QI data with text-based QI data to provide structured clinical data. In one illustrative embodiment, the structured clinical data includes various classifications or types, such as: Procedure Performed, Extent of Exam, Medications, Findings, Diagnosis, Limitations of Exam, ICD and CPT codes, QIs, and data elements for the registries (e.g., GIQuIC). A sentence classifier 28 is provided to classify sentences that make up the structured clinical data into appropriate categories or sentence types, e.g., procedure performed, findings, diagnosis, recommendations, etc. Report/Output module 30 is configured to generate a report 36 or otherwise output information in a structured format for another system, such as an electronic medical record (EMR) system 32, a QI registry 34, etc.
In one illustrative embodiment, the text is processed using regular expression (regex) techniques. A regular expression is a sequence of characters that defines a search pattern, which can be used to locate or replace a set of patterns in a text dataset. Regular expressions can be used for string-related functions such as searching a string or replacing a string. Pattern elements may, for example, match: digits (e.g., 1/2/100); letters (e.g., a/v/t); any character (e.g., +/?/<); or a set of digits. Illustrative regex functions include: search, to find a specific pattern in a string; match, to find the very first occurrence of a pattern in a string; findall, to find all occurrences of a pattern in a given string; split, to split the text at the given regular expression; and sub, to search and replace in a string.
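As an illustration, the following minimal sketch exercises these functions using Python's built-in re module, whose function names (search, match, findall, split, and sub) match those listed above; the sample text is hypothetical.

```python
import re

text = "Scope inserted at 9:02. Reached cecum at 9:14. reached cecum confirmed."

# search: find the first place a pattern occurs anywhere in the string
m = re.search(r"\d+:\d+", text)
print(m.group())                            # -> 9:02

# match: succeed only if the pattern occurs at the very start of the string
print(re.match(r"Scope", text) is not None)  # -> True

# findall: return every non-overlapping occurrence of the pattern
print(re.findall(r"\d+:\d+", text))          # -> ['9:02', '9:14']

# split: break the text at each occurrence of the pattern
print(re.split(r"\.\s*", text))

# sub: search and replace, e.g., normalizing an informal phrase
print(re.sub(r"(?i)reached cecum", "cecum reached", text))
```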
Returning to S4 in the drawings, errors and misspellings in the unstructured text are identified and replaced, e.g., using the regex techniques described above.
Next, at S5 in the drawings, punctuation is evaluated and the processed text is parsed into sentences.
Once the sentences are generated, keywords are identified and replaced with standardized expressions. For example, less formal terms uttered by the doctor to describe something in the unstructured text are replaced with standardized expressions from a medical dictionary.
In certain embodiments, if a record is found in the search pattern list 62, the replacement process will further check for other conditions, such as Not Present and Present in Report conditions 66, before performing a replacement. For instance, if the two associated boxes 66 are populated for an identified search pattern 62, two additional conditions must be met to have the identified search pattern replaced with a replace pattern 64. In a first condition, the contents of the first box “Not Present in Report” must not be present anywhere in the text. For example, “Polypectomy performed” must not be present in order for the located text “cold snare used” to be replaced with “Polypectomy performed with a cold snare”. In a second condition, if the second box “Present in Report” contains a value, that value must be present in the text. For instance, if the second box included the term “colonoscopy”, the second condition will be met if “colonoscopy” is present anywhere in the text. In this embodiment, if either condition is not met, no replacement occurs. These additional conditions allow users to customize replacement strategies.
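A minimal sketch of this conditional replacement logic follows; the pattern table and field names are hypothetical stand-ins for the search pattern list 62, replace pattern 64, and condition boxes 66 described above.

```python
import re

# Hypothetical pattern table: search/replace pairs plus optional conditions.
PATTERNS = [
    {
        "search": r"cold snare used",
        "replace": "Polypectomy performed with a cold snare",
        "not_present": "Polypectomy performed",  # must NOT appear anywhere
        "present": "colonoscopy",                # must appear somewhere
    },
]

def apply_replacements(text: str) -> str:
    for p in PATTERNS:
        if p.get("not_present") and p["not_present"].lower() in text.lower():
            continue  # first condition fails: target phrasing already present
        if p.get("present") and p["present"].lower() not in text.lower():
            continue  # second condition fails: required context missing
        text = re.sub(p["search"], p["replace"], text, flags=re.IGNORECASE)
    return text

print(apply_replacements("Colonoscopy to cecum. Cold snare used on polyp."))
# -> Colonoscopy to cecum. Polypectomy performed with a cold snare on polyp.
```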
Next, at step S7 in the drawings, irrelevant sentences (i.e., noise) are filtered out of the generated text.
At S8, sentences are classified by a model to determine a sentence type, e.g., which part of the report each sentence belongs to.
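By way of illustration, the sketch below trains a simple sentence-type classifier with scikit-learn; the disclosure only requires a trained model, so the tiny training set, labels, and model choice here are illustrative assumptions rather than the actual implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; a real model would train on labeled report sentences.
train_sentences = [
    "Colonoscopy performed to the cecum.",
    "One sessile polyp found in the sigmoid colon.",
    "Internal hemorrhoids noted.",
    "Repeat colonoscopy in five years.",
]
train_labels = ["Procedure Performed", "Findings", "Diagnosis", "Recommendations"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_sentences, train_labels)

# Each new sentence is assigned to a report section (sentence type).
print(clf.predict(["Two polyps found in the ascending colon."]))
```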
The sentence can be further evaluated based on the classification, and unwanted words can be removed using regex patterns. For example, for sentences classified in “Extent of Exam,” the located text “reached cecum” can be replaced with “cecum” using the expression “(\W|^)(reached)(?'VALUE'(.+|)(\W|$))” with the replacement “cecum”.
Returning again to the collect video step S11 in the drawings, images captured during the procedure are processed, e.g., by a lesion detection AI module, to identify image-based QIs.
As QI data can be obtained from both the speech data as well as the image data, the QI data from both is processed and integrated at S9. For example, an image-based QI, “One polyp found in sigmoid colon” could be output by a lesion detection AI module during image processing. A physician might also dictate the text-based QI “Two polyps found in ascending colon,” which would be presented in a sentence. Accordingly, the information from both can be integrated together so a final integrated QI finding would be: “Number of polyps=3”, and the associated sentence can be updated, e.g., “Three total polyps found—two polyps found in ascending colon, and one in the sigmoid colon.”
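The following sketch illustrates the integration step for the worked example above; the data structures and field names are assumptions for illustration only.

```python
# Image-based QI from the lesion detection module, and text-based QI from
# the physician's dictation (hypothetical representations).
image_qis = {"polyps": [{"count": 1, "location": "sigmoid colon"}]}
text_qis = {"polyps": [{"count": 2, "location": "ascending colon"}]}

def integrate_polyp_qis(image_qis, text_qis):
    findings = image_qis["polyps"] + text_qis["polyps"]
    total = sum(f["count"] for f in findings)          # integrated QI value
    details = ", ".join(f"{f['count']} in {f['location']}" for f in findings)
    return {"number_of_polyps": total,
            "sentence": f"{total} total polyps found ({details})."}

print(integrate_polyp_qis(image_qis, text_qis))
# -> {'number_of_polyps': 3,
#     'sentence': '3 total polyps found (1 in sigmoid colon, 2 in ascending colon).'}
```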
After the QI values are determined by the QI manager 80, they can be further integrated and updated based on the image-based QI data S13 generated from the image data processing step, or based on previously stored data. For example, the polyps-found value may change from yes to no (or no to yes); the number of polyps found may be increased; and, if a polypectomy was performed, the total number of polypectomies might be recalculated. The generated sentences can be updated based on the integration process.
Once a structured set of sentences is generated, an editor may be utilized at S10 by the user to refine the output. In some embodiments, the editor includes language modeling, i.e., next word prediction, which, for a given character or sequence of characters, automatically generates the most probable next character or sequence of characters (e.g., a word or phrase). This feature accordingly further reduces the time a clinician is required to interact with the data to complete a report or the like.
Language modeling may utilize a trained model to allow word searching using NLP and deep learning AI. In this case, the input to the model is a sequence of characters, and the model is trained to predict the output using historical information. For example, to predict the next word in the sentence “Polypectomy performed with ______”: (1) a recurrent neural network (RNN) neuron receives a command that indicates the start of a sentence; (2) the neuron receives the word “Polypectomy” and then outputs a vector of numbers that feeds back into the neuron to help it “remember” that it received “Polypectomy” (and that it received it first). The same process occurs when it receives “performed” and “with,” with the state of the neuron updating upon receiving each word; (3) after receiving “with,” the neuron assigns a probability to every word in a vocabulary created from clinical findings data to complete the sentence. The RNN might assign the phrase “cold snare” one of the highest probabilities and will choose it to complete the sentence.
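A hedged sketch of such an RNN-based next-word predictor follows, using a small LSTM (an RNN variant) in PyTorch; the toy vocabulary and single training example stand in for the clinical-findings vocabulary and historical data described above.

```python
import torch
import torch.nn as nn

vocab = ["<s>", "polypectomy", "performed", "with", "cold", "snare", "forceps"]
stoi = {w: i for i, w in enumerate(vocab)}

class NextWordRNN(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, idx):
        h, _ = self.rnn(self.embed(idx))  # hidden state "remembers" the prefix
        return self.head(h[:, -1])        # logits over the next word

model = NextWordRNN(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
seq = torch.tensor([[stoi[w] for w in ["<s>", "polypectomy", "performed", "with"]]])
target = torch.tensor([stoi["cold"]])

for _ in range(200):  # overfit the single toy example for demonstration
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(seq), target)
    loss.backward()
    opt.step()

probs = torch.softmax(model(seq), dim=-1)  # probability over the vocabulary
print(vocab[int(probs.argmax())])          # -> cold
```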
Examples of predicting text associated with clinical findings, e.g., selected based on the highest probability, are illustrated in the drawings.
In a further embodiment, predictive typing may utilize a Markov Chain Algorithm. Examples of predicting text associated with clinical findings are shown in bold below, and an associated hierarchy is shown in the drawings. Consider the following example sentences:
Flat Polyp was found in the proximal ascending colon.
Flat Polyp was found in the Sigmoid colon.
Flat Polyp, measuring ### mm, was found in the Descending colon.

All the unique phrases from the above sentences, i.e., “Flat Polyp”, “was found in the”, “proximal ascending colon”, “measuring ### mm was found in the”, “Sigmoid colon”, and “Descending colon”, form the different states. Representing the above mathematically as conditional probabilities:
P(was found in the | Flat Polyp) = 0.67
P(measuring ### mm was found in the | Flat Polyp) = 0.33
P(proximal ascending colon | was found in the) = P(Sigmoid colon | was found in the) = 0.5
P(Descending colon | measuring ### mm was found in the) = 1
The same conditional probabilities can be computed over single words instead of phrases.
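These probabilities can be recovered by simply counting phrase-to-phrase transitions, as in the sketch below (the transition list is transcribed from the three example sentences above):

```python
from collections import Counter, defaultdict

# Phrase-state transitions observed in the three example sentences.
transitions = [
    ("Flat Polyp", "was found in the"),
    ("was found in the", "proximal ascending colon"),
    ("Flat Polyp", "was found in the"),
    ("was found in the", "Sigmoid colon"),
    ("Flat Polyp", "measuring ### mm was found in the"),
    ("measuring ### mm was found in the", "Descending colon"),
]

counts = defaultdict(Counter)
for prev, nxt in transitions:
    counts[prev][nxt] += 1

# Conditional probability = transition count / total transitions from state.
for prev, nxt_counts in counts.items():
    total = sum(nxt_counts.values())
    for nxt, c in nxt_counts.items():
        print(f"P({nxt} | {prev}) = {c / total:.2f}")
# e.g., P(was found in the | Flat Polyp) = 0.67
```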
Sections from an EMR database of procedure records, e.g., Findings, Diagnosis, Indications, Recommendations, etc., may be used to train a predictive model. In this case, text in procedure reports is split into sentences. Duplicate sentences are removed, and sentences containing uncommon words (e.g., words found fewer than four times across all training sentences) are removed to eliminate irrelevant words. In one example, 946,198 unique sentences and 15,942 unique words were collected for training.
Different techniques may be used for word prediction and sentence prediction training. Word prediction training includes building a word hierarchy based on frequency of word usage in the training set. The output may show suggested word options according to that hierarchy. Sentence prediction training includes the following steps:
First, tokenization is performed to break each sentence down into words. The second stage consists of forming previous and current state pairs. If a 5th-order Markov model is used, the previous state will consist of one to five words. The words in each sentence are grouped by the number of words, up to a maximum of five words: one-word groups, then two-, three-, four-, and five-word groups. For example, in the sentence “Serrated polyp found in the descending colon,” one-word groups include: serrated, polyp, found, in, the, descending, and colon. Two-word groups include: serrated polyp, polyp found, found in, in the, the descending, and descending colon. The groups with the same number of words from all sentences are combined into large groups, so five final word groups are created. Duplicate word groups are removed.
All sentences in the training set (946,198) are analyzed for each word group to identify the next possible word and the probability of its appearance. A hierarchy of possible words is then built for each word group. For the sample sentence above, “found” is the next possible word for the two-word group “serrated polyp”. As a result of training, when a user types in “serrated polyp”, the system immediately cycles through the options: “serrated polyp—found—in—the—descending—colon” and comes up with the whole sentence in the suggested options. An illustrative hierarchy is shown in the drawings.
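The following sketch illustrates these training steps: tokenizing, forming one- to five-word groups as Markov states, counting next-word frequencies, and then cycling through the most probable options to complete a typed prefix. The single training sentence is the example above; a real model would use the full 946,198-sentence set.

```python
from collections import Counter, defaultdict

sentences = ["serrated polyp found in the descending colon"]
MAX_ORDER = 5

# Map each word group (previous state) to a frequency count of next words.
next_words = defaultdict(Counter)
for sentence in sentences:
    words = sentence.split()
    for n in range(1, MAX_ORDER + 1):              # one- to five-word groups
        for i in range(len(words) - n):
            group = " ".join(words[i:i + n])       # previous state
            next_words[group][words[i + n]] += 1   # current state (next word)

# Suggest a completion by repeatedly taking the most probable next word.
state = "serrated polyp"
suggestion = state.split()
while state in next_words:
    word = next_words[state].most_common(1)[0][0]
    suggestion.append(word)
    state = " ".join(suggestion[-MAX_ORDER:])      # slide the 5-word window

print(" ".join(suggestion))
# -> serrated polyp found in the descending colon
```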
Referring again to the image processing at S12 in the drawings, landmark detection can be used to record timing data during the procedure, e.g., a Scope Insert Time (A), a Cecum Time (B), and a Rectum Time (C).
In this example, a QI for the Total Intubation time for the procedure would be calculated as:
Total Intubation Time = Rectum Time (C) − Scope Insert Time (A)
And the Withdrawal Time of the Procedure would be calculated as:
Withdrawal Time = Rectum Time (C) − Cecum Time (B)
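As a simple illustration, these timing QIs could be computed from landmark timestamps as follows (the timestamps are hypothetical):

```python
from datetime import datetime

fmt = "%H:%M:%S"
scope_insert_time = datetime.strptime("09:02:00", fmt)  # (A)
cecum_time = datetime.strptime("09:14:30", fmt)         # (B)
rectum_time = datetime.strptime("09:25:00", fmt)        # (C)

total_intubation_time = rectum_time - scope_insert_time  # C - A
withdrawal_time = rectum_time - cecum_time               # C - B

print(f"Total intubation time: {total_intubation_time}")  # 0:23:00
print(f"Withdrawal time: {withdrawal_time}")              # 0:10:30
```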
Aspects of the disclosure accordingly provide embodiments that recognize patterns in unstructured text using NLP with regular expressions (regex) and deep learning, and that identify QI findings from images using AI-based image processing. Classification of data is based on knowledge previously gained or information extracted from patterns and/or their representation. In some embodiments, clinical information is extracted from an unstructured text dataset using a text classifier (NLP) and pattern recognition (regex), after which the user is given an option to use predictive text typing with deep learning to refine the output. Extracted relevant clinical data can be used in clinical report generation and statistical analysis, and discrete data elements can be sent to registries. All the unwanted sentences (i.e., noise) in the unstructured text are automatically filtered out. In one illustrative embodiment, extracted structured information may include, e.g., Procedure Performed, Extent of Exam, Medications, Findings, Diagnosis, Limitations of Exam, ICD and CPT codes, QIs, and data elements for registries (e.g., GIQuIC).
During a procedure, two paths collect data. The first path collects speech (i.e., uttered conversation) from the procedure room. The second path collects and classifies live images, e.g., to detect landmarks. In one embodiment involving speech collected from the procedure room, a resulting report is automatically generated. In a second embodiment, speech can be collected during a consultation, and an initial consultation report and progress/follow-up report are automatically generated. In this second embodiment, image data may or may not be included in the reports (i.e., only the first path is utilized).
In further aspects, the platform includes a content processing system that integrates NL inputs with image capture and other actions to automate reporting and requisitioning activities required as part of a GI procedure, such as a colonoscopy. As noted, physicians and other medical providers are required to spend a significant amount of time and effort generating post-operative reports and the like that detail findings, recommendations, diagnosis, CPT and ICD codes, etc. To the extent that automated tools can reduce this burden, providers have more time to focus on providing medical care, rather than dealing with administrative overhead. To facilitate this process, a system is described herein that allows the physician to generate required reporting and biopsy-related content during the actual procedure using natural language. The system further includes an AI-based prediction system that will automatically predict relevant reporting details within a reporting GUI.
Referring to the drawings, a content processing system 120 includes an AI language processor 125 having an automatic speech recognition (ASR) system 122 and an automated text analysis (ATA) system 123, which process NL uttered by a physician 121 during the procedure.
ASR system 122 may for example be implemented with a Whisper model from OpenAI, which is an AI-based sequence-to-sequence Transformer model that provides general-purpose speech recognition. In certain implementations, the Whisper model is trained with training data designed to support various tasks within system 120 during a procedure. For example, during the procedure, detected key words such as “capture,” “report,” “jar,” “biopsy specimen,” etc., can be recognized to trigger actions within system 120.
Automated text analysis system 123 extracts relevant information from an NL text string, e.g., in response to a detected keyword, and can for example be implemented with a script using Python's Pandas library. The script imports comma-separated value (CSV) files (e.g., “test.csv,” “labels.csv,” and “comments.csv”), which are converted into data frames. (These could alternatively be a single data set.) Labels and comments are then converted into lists for easy processing. The main function iterates through the ‘findings’ column of the ‘test’ data frame, cleaning the text by removing punctuation and converting it to lowercase. The script then checks for n-gram matches between the text and the labels and comments. An n-gram is a contiguous sequence of n words in a text. The function exact_ngram_match( ) takes three parameters: the text to search, the label or comment to search for, and the number of words in the n-gram (defaulting to 1). It then generates n-grams for both the text and the label or comment and checks whether the label or comment n-grams (with n restricted to the number of words in the label or comment) are a subset of the text n-grams. If a match is found, the label or comment is appended to the respective list. Finally, the script appends a dictionary containing the original text, matched labels, and matched comments to the final list. Matched labels and comments can, for example, be used as inputs to system 120.
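A hedged reconstruction of the matching routine is sketched below. Only exact_ngram_match( ) is named in the description, so the example labels, comments, and findings text are assumptions, and the CSV/data-frame scaffolding is omitted for brevity.

```python
import string

def ngrams(words, n):
    """Return the set of n-grams (as word tuples) in a word list."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def exact_ngram_match(text, phrase, n=1):
    """Check whether the phrase's n-grams all occur in the text."""
    text_words = text.split()
    phrase_words = phrase.lower().split()
    n = len(phrase_words)  # restrict n to the phrase length, per the text
    return ngrams(phrase_words, n) <= ngrams(text_words, n)

labels = ["sessile polyp", "hemorrhoid"]              # hypothetical labels.csv
comments = ["cold snare polypectomy"]                 # hypothetical comments.csv

finding = "One sessile polyp removed via cold snare polypectomy."
# Clean the text: strip punctuation and lowercase, as described above.
clean = finding.translate(str.maketrans("", "", string.punctuation)).lower()

matched = {
    "text": finding,
    "labels": [l for l in labels if exact_ngram_match(clean, l)],
    "comments": [c for c in comments if exact_ngram_match(clean, c)],
}
print(matched)
# -> labels: ['sessile polyp'], comments: ['cold snare polypectomy']
```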
As noted, keywords and extracted information from the AI language processor 125 can in turn be utilized to control various modules shown in the drawings.
Additionally, image labeling and location module 126 can utilize ASR system 122 to identify relevant NL input uttered by the physician and display the information as a text string in a description window 166. ATA system 123 can then extract relevant information from the description, such as a label and a location.
In addition, visual GI mapping module 128 displays a GI tract replica 173 and automatically maps the captured image 170 onto the replica 173 with an indicator 174. Accordingly, as the physician 121 collects and annotates images during the procedure with NL inputs, the images are automatically mapped to the replica 173 for reference and later reporting purposes.
Image handling module 130 allows physician 121 to utter NL inputs to determine what to do with the captured image, e.g., include an annotated image in a post-operative report 134 or save the image to storage 132. These modules 124, 126, 128 and 130, which create report-ready content during the procedure, accordingly greatly reduce the time it later takes to generate the post-operative report 134.
After the procedure, a reporting module GUI 138 allows the physician to edit and finalize a report 134, which is automatically prepopulated with annotated images, a GI tract replica (with mappings), biopsy details, etc. Furthermore, reporting module GUI 138 includes a prediction system 150 that utilizes AI to predict other components of the report 134.
For example, training of the AI model may include obtaining historical clinical data at S31 and, at S32, generating preprocessed data by editing the historical clinical data to remove unnecessary punctuation and privacy details.
At S33, the preprocessed data is used to fine-tune a biomedical AI model such as a Generative Pre-trained Transformer for Biomedical Text Generation and Mining model (BioGPT model 181). The resulting trained AI model 182 is thus configured to generate accurate predictions for diagnosis, recommendation, and codes based on the defined inputs 177, e.g., an age of the patient, clinical findings, and a procedure name.
Fine-tuning is a process of adapting a pre-trained language model to a specific task by updating its weights on a smaller domain-specific dataset. The objective is to enhance the performance of the pre-trained model by allowing it to learn from the task-specific data. The fine-tuning process involves updating the weights of the model through backpropagation, where the gradient is computed with respect to the task-specific loss function.
In the case of a GPT model for predicting diagnosis, recommendation, and medical codes, the fine-tuning process involves feeding the preprocessed data into the pre-trained BioGPT model and updating its weights to optimize its predictions on the target task. The BioGPT model is fine-tuned by minimizing the cross-entropy loss between the predicted and actual labels for the target task. The model's weights are updated using the backpropagation algorithm, which computes the gradient of the loss function with respect to the model parameters. The fine-tuning process continues until the model's performance on the validation set converges or stops improving.
Given a sequence of input tokens X = [x_1, x_2, …, x_n], the goal is to predict the next token x_{n+1}:

p(x_{n+1} | X) = softmax(W_h h_n)
The transformer model used in GPT consists of multiple layers of self-attention and feedforward neural networks. Each layer is designed to process the input tokens and compute intermediate hidden states, which are then fed into the next layer. The final hidden state h_n is then used to compute the probability distribution over the next token x_{n+1} using the softmax function and the weight matrix W_h.
During training, the model is trained to minimize the negative log-likelihood of the ground truth next token x_{n+1} given the input sequence X:
L(X) = −log p(x_{n+1} | X)
This training objective encourages the model to assign higher probabilities to the correct next token and lower probabilities to the incorrect ones, which helps the model learn to generate coherent and fluent text.
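A minimal sketch of such a fine-tuning loop follows, using the Hugging Face transformers implementation of BioGPT; the checkpoint name (microsoft/biogpt), training example format, and hyperparameters are assumptions rather than the actual configuration. Passing the input ids as labels makes the model return the cross-entropy (negative log-likelihood) loss described above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = AutoModelForCausalLM.from_pretrained("microsoft/biogpt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Preprocessed (de-identified, punctuation-cleaned) examples mapping the
# defined inputs to target predictions; this format is illustrative only.
examples = [
    "age: 58; procedure: colonoscopy; findings: 6 mm sessile polyp in sigmoid "
    "colon => diagnosis: colon polyp; recommendation: repeat colonoscopy in "
    "5 years; cpt: 45385; icd: K63.5",
]

model.train()
for text in examples:
    batch = tokenizer(text, return_tensors="pt")
    # With labels set to the input ids, the model computes the shifted
    # next-token cross-entropy loss internally.
    out = model(**batch, labels=batch["input_ids"])
    out.loss.backward()   # backpropagate the task-specific loss
    optimizer.step()
    optimizer.zero_grad()
```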
In response to the inputs, prediction system 150 automatically generates a suggested diagnosis 184, suggested recommendations 185, suggested CPT codes 186, and suggested ICD codes 187 within the reporting module GUI 138. From the GUI, the physician can accept or reject the predictions for the post-operative report 134.
It is understood that the described platform can be implemented using any computing technique, e.g., as a stand-alone system, a distributed system, within a network environment, etc. It is understood that various technologies, e.g., generative AI using LLMs and ASR, can be installed locally or in the cloud.
In some embodiments, the client machines 202A-202N communicate with the remote machines 206A-206N via an intermediary appliance 208. The illustrated appliance 208 is positioned between the networks 204, 204′ and may also be referred to as a network interface or gateway. In some embodiments, the appliance 208 may operate as an application delivery controller (ADC) to provide clients with access to business applications and other data deployed in a datacenter, the cloud, or delivered as Software as a Service (SaaS) across a range of client devices, and/or provide other functionality such as load balancing, etc. In some embodiments, multiple appliances 208 may be used, and the appliance(s) 208 may be deployed as part of the network 204 and/or 204′.
The client machines 202A-202N may be generally referred to as client machines 202, local machines 202, clients 202, client nodes 202, client computers 202, client devices 202, computing devices 202, endpoints 202, or endpoint nodes 202. The remote machines 206A-206N may be generally referred to as servers 206 or a server farm 206. In some embodiments, a client device 202 may have the capacity to function as both a client node seeking access to resources provided by a server 206 and as a server 206 providing access to hosted resources for other client devices 202A-202N. The networks 204, 204′ may be generally referred to as a network 204. The networks 204 may be configured in any combination of wired and wireless networks.
A server 206 may be any server type such as, for example: a file server; an application server; a web server; a proxy server; an appliance; a network appliance; a gateway; an application gateway; a gateway server; a virtualization server; a deployment server; a Secure Sockets Layer Virtual Private Network (SSL VPN) server; a firewall; a server executing an active directory; a cloud server; or a server executing an application acceleration program that provides firewall functionality, application functionality, or load balancing functionality.
A server 206 may execute, operate or otherwise provide an application that may be any one of the following: software; a program; executable instructions; a virtual machine; a hypervisor; a web browser; a web-based client; a client-server application; a thin-client computing client; an ActiveX control; a Java applet; software related to voice over internet protocol (VoIP) communications like a soft IP telephone; an application for streaming video and/or audio; an application for facilitating real-time-data communications; an HTTP client; an FTP client; an Oscar client; a Telnet client; or any other set of executable instructions.
In some embodiments, a server 206 may execute a remote presentation services program or other program that uses a thin-client or a remote-display protocol to capture display output generated by an application executing on a server 206 and transmit the application display output to a client device 202.
In yet other embodiments, a server 206 may execute a virtual machine providing, to a user of a client device 202, access to a computing environment. The client device 202 may be a virtual machine. The virtual machine may be managed by, for example, a hypervisor, a virtual machine manager (VMM), or any other hardware virtualization technique within the server 206.
In some embodiments, the network 204 may be: a local-area network (LAN); a metropolitan area network (MAN); a wide area network (WAN); a primary public network 204; and a primary private network 204. Additional embodiments may include a network 204 of mobile telephone networks that use various protocols to communicate among mobile devices. For short range communications within a wireless local-area network (WLAN), the protocols may include 802.11, Bluetooth, and Near Field Communication (NFC).
Elements of the described solution may be embodied in a computing system, such as that shown in the drawings.
Processor(s) 302 may be implemented by one or more programmable processors executing one or more computer programs to perform the functions of the system. As used herein, the term “processor” describes an electronic circuit that performs a function, an operation, or a sequence of operations. The function, operation, or sequence of operations may be hard coded into the electronic circuit or soft coded by way of instructions held in a memory device. A “processor” may perform the function, operation, or sequence of operations using digital values or using analog signals. In some embodiments, the “processor” can be embodied in one or more application specific integrated circuits (ASICs), microprocessors, digital signal processors, microcontrollers, field programmable gate arrays (FPGAs), programmable logic arrays (PLAs), multi-core processors, or general-purpose computers with associated memory. The “processor” may be analog, digital or mixed-signal. In some embodiments, the “processor” may be one or more physical processors or one or more “virtual” (e.g., remotely located or “cloud”) processors.
Communications interfaces 306 may include one or more interfaces to enable computer 300 to access a computer network such as a LAN, a WAN, or the Internet through a variety of wired and/or wireless or cellular connections.
In described embodiments, a first computing device 300 may execute an application on behalf of a user of a client computing device (e.g., a client), may execute a virtual machine, which provides an execution session within which applications execute on behalf of a user or a client computing device (e.g., a client), such as a hosted desktop session, may execute a terminal services session to provide a hosted desktop environment, or may provide access to a computing environment including one or more of: one or more applications, one or more desktop applications, and one or more desktop sessions in which one or more applications may execute.
As will be appreciated by one of skill in the art upon reading the following disclosure, various aspects described herein may be embodied as a system, a device, a method or a computer program product (e.g., a non-transitory computer-readable medium having computer executable instructions for performing the noted operations or steps). Accordingly, those aspects may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, such aspects may take the form of a computer program product stored by one or more computer-readable storage media having computer-readable program code, or instructions, embodied in or on the storage media. Any suitable computer readable storage media may be utilized, including hard disks, CD-ROMs, optical storage devices, magnetic storage devices, and/or any combination thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. “Optional” or “optionally” means that the subsequently described event or circumstance may or may not occur, and that the description includes instances where the event occurs and instances where it does not.
Approximating language, as used herein throughout the specification and claims, may be applied to modify any quantitative representation that could permissibly vary without resulting in a change in the basic function to which it is related. Accordingly, a value modified by a term or terms such as “about,” “approximately” and “substantially” is not to be limited to the precise value specified. In at least some instances, the approximating language may correspond to the precision of an instrument for measuring the value. Here and throughout the specification and claims, range limitations may be combined and/or interchanged; such ranges are identified and include all the sub-ranges contained therein unless context or language indicates otherwise. “Approximately” as applied to a particular value of a range applies to both values, and unless otherwise dependent on the precision of the instrument measuring the value, may indicate +/−10% of the stated value(s).
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiment was chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.
The foregoing drawings show some of the processing associated with several embodiments of this disclosure. In this regard, each drawing or block within a flow diagram of the drawings represents a process associated with embodiments of the method described. It should also be noted that in some alternative implementations, the acts noted in the drawings or blocks may occur out of the order noted in the figure or, for example, may in fact be executed substantially concurrently or in the reverse order, depending upon the act involved. Also, one of ordinary skill in the art will recognize that additional blocks that describe the processing may be added.
This Continuation in Part application claims priority to co-pending application Ser. No. 17/867,778, filed on Jul. 19, 2022, entitled AI PLATFORM FOR PROCESSING SPEECH AND VIDEO INFORMATION COLLECTED DURING A MEDICAL PROCEDURE, the contents of which are hereby incorporated by reference.
Related U.S. Application Data: Provisional Application No. 63/224,001, filed Jul. 2021, US. Parent Application Ser. No. 17/867,778, filed Jul. 2022, US; Child Application Ser. No. 18/324,249, US.