The present invention relates to Artificial Intelligence (AI) and machine learning (ML), and in particular to medical AI and a method, system, data structure, computer program product and computer-readable medium for generating safe text, having applications to medical reports for medical use cases such as disease classification.
In an embodiment, the present invention provides a computer-implemented, machine learning method for generating safe text. A first portion of a trainable prompt is generated using negative influential features and positive influential features of a predicted condition. A second portion of the trainable prompt is trained to steer a pre-trained large language model (PLLM) to generate the safe text using at least the first portion of the trainable prompt. The method has applications including, but not limited to, use cases in medicine (e.g., digital medicine, personalized healthcare, AI-assisted drug or vaccine development, diagnosis or treatment, disease prediction, etc.), and cyber security.
Embodiments of the present invention will be described in even greater detail below based on the exemplary figures. The present invention is not limited to the exemplary embodiments. All features described and/or illustrated herein can be used alone or combined in different combinations in embodiments of the present invention. The features and advantages of various embodiments of the present invention will become apparent by reading the following detailed description with reference to the attached drawings which illustrate the following:
Embodiments of the present invention provide an approach to adapt a pre-trained large language model to steer it to generate safe medical reports conditioned on facts identified by explainable AI (XAI) from patient data. The approach trains a soft prompt consisting of a finite number of trainable continuous vectors to control the text generation within a safe scope. In addition to enabling the generation of more accurate and safer reports, the approach is also data- and resource-efficient, thereby conserving computational resources.
Medical reports contain comprehensive summaries of a patient's clinical history including background information, examinations, treatment, doctor opinions and more. Traditionally, these reports have been written by doctors themselves, consuming their valuable time and resources. Large Language Models (LLMs) have the potential to support doctors in writing medical reports more efficiently. The input of the LLM (the prompt) for this could be created by a hard-coded prompt template with manually-selected patient data by doctors. The LLM would then output the final medical reports given the prompt.
Hard-coded prompt templates are templates that contain natural language (e.g., tasks and guides for the LLM) and placeholders for individual data inputs. The prompt templates could be written manually and used by all doctors for all patients. The placeholders could be filled with the patient's data, identified by doctors, enabling the LLM to write a medical report for a specific patient and use case. The patient data could be selected and hand-crafted by the doctor to contain all the important information needed by the LLM to generate the report.
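As a simple illustration of how such a hard-coded template could be filled with manually selected patient data, the following minimal Python sketch uses the standard library; the placeholder names and values are hypothetical and purely illustrative:

    from string import Template

    # Hypothetical hard-coded prompt template; placeholder names are illustrative.
    template = Template(
        "Write a medical report for patient $patient_id. "
        "Symptoms: $symptoms. Test results: $test_results."
    )
    prompt = template.substitute(
        patient_id="XXX",
        symptoms="chest pain, shortness of breath",
        test_results="total cholesterol 7.6 mmol/L, LDL cholesterol 3.5 mmol/L",
    )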
While this approach would offer an efficient way to write medical reports, there are a number of technical problems to overcome in order to be able to safely rely on the output, including that there could be incorrect and unsafe text in the report and that the generated text could contain hallucinations of fact. This presents a safety problem, which is particularly important in the medical domain. The doctor lacks control over guiding and improving the automatic report generation process. For example, existing technology does not provide the capability for the doctor to provide feedback to the LLM if the generated report contains unsafe content. The LLM also does not have the technical capability to learn from past mistakes and will repeat them over and over. The doctor will need to recognize and fix the flawed generated outputs every time. Consequently, the doctor needs to spend significant time and resources to evaluate and correct every single generated report, which often contains very similar or even the same errors as previously generated ones. Further, the generated medical reports cannot be individualized. Although each doctor has their own style, the LLM is limited by existing technology to always output in the same generic style.
In a first aspect, the present invention provides a computer-implemented, machine learning method for generating safe text. A first portion of a trainable prompt is generated using negative influential features and positive influential features of a predicted condition. A second portion of the trainable prompt is trained to steer a pre-trained large language model (PLLM) to generate the safe text using at least the first portion of the trainable prompt.
In a second aspect, the present invention provides the method according to the first aspect, wherein the second portion of the trainable prompt includes a number of embedding vectors that are unknown prior to training, and wherein the method further comprises: obtaining patient data; and predicting a disease as the predicted condition using explainable artificial intelligence (XAI) and the patient data, wherein the XAI provides the negative influential features and the positive influential features along with explanation values.
In a third aspect, the present invention provides the method according to the first aspect or second aspect, further comprising: receiving new negative influential features and new positive influential features; generating a new first portion of the trainable prompt using the new negative influential features and the new positive influential features; generating an input prompt using the new first portion of the trainable prompt and the second portion of the trainable prompt; and generating a medical report using the PLLM and the input prompt.
In a fourth aspect, the present invention provides the method according to any of the first to third aspects, further comprising: receiving corrected medical reports from a particular user; updating the second portion of the trainable prompt using the corrected medical reports to be personalized to the particular user; and generating new medical reports for the user using the updated second portion of the trainable prompt and the PLLM.
In a fifth aspect, the present invention provides the method according to any of the first to fourth aspects, wherein the embedding vectors are frozen for the PLLM.
In a sixth aspect, the present invention provides the method according to any of the first to fifth aspects, wherein training the second portion of the trainable prompt to steer the PLLM to generate the safe text further comprises: constructing the second portion of the trainable prompt with a set of embedding vectors that will be trained to steer the PLLM to generate safe reports; generating medical reports using the PLLM, the negative influential features, and the positive influential features for pseudo patients; receiving labels and corrections from users analyzing the medical reports to generate corrected medical reports; computing a loss for each training example including a pair of: a) a medical report from the PLLM; and b) a corrected medical report of the corrected medical reports; and optimizing the loss of the training data for the embedding vectors of the second portion of the trainable prompt to learn to steer the PLLM to generate the safe text.
In a seventh aspect, the present invention provides the method according to any of the first to sixth aspects, wherein optimizing the loss of the training data for the embedding vectors includes obtaining optimal values of the embedding vectors to lead the PLLM to fit the corrected medical reports of training examples.
In an eighth aspect, the present invention provides the method according to any of the first to seventh aspects, further comprising predicting a disease as the predicted condition using explainable artificial intelligence (XAI) and patient data by training an XAI model to predict the disease using the patient data, wherein the XAI model ranks the negative influential features and the positive influential features with influential scores.
In a ninth aspect, the present invention provides the method according to any of the first to eighth aspects, wherein generating the first portion of the trainable prompt using the negative influential features and the positive influential features includes mapping the negative influential features and the positive influential features to a text with templates and a verbalizer.
In a tenth aspect, the present invention provides the method according to any of the first to ninth aspects, wherein the verbalizer projects class labels of the negative influential features and the positive influential features to pairs of verbs, and wherein the templates are used to place the negative influential features and the positive influential features.
In an eleventh aspect, the present invention provides the method according to any of the first to tenth aspects, further comprising: generating an input prompt comprising the first portion of the trainable prompt and the second portion of the trainable prompt, wherein the first portion of the trainable prompt is provided to the PLLM as a cloze instruction of influential features that corresponds to the negative influential features and the positive influential features; and generating a report in response to providing the input prompt to the PLLM.
In a twelfth aspect, the present invention provides the method according to any of the first to eleventh aspects, wherein the report identifies a patient, a predicted disease, and an explanation of the negative influential features and the positive influential features from the predicted disease.
In a thirteenth aspect, the present invention provides the method according to any of the first to twelfth aspects, wherein the second portion of the trainable prompt includes a number of continuous vectors of a same size as embedding vectors of the PLLM, wherein the embedding vectors are initialized randomly prior to training.
In a fourteenth aspect, the present invention provides a computer system for generating safe text comprising one or more processors which, alone or in combination, are configured to perform a machine learning method for generating safe text according to any of the first to thirteenth aspects.
In a fifteenth aspect, the present invention provides a tangible, non-transitory computer-readable medium for generating safe text which, upon being executed by one or more hardware processors, provides for execution of a machine learning method according to any of the first to thirteenth aspects.
Referring to
Each step of the method and system is detailed as follows:
Historical patient data is obtained, for example, including:
The data can be obtained from public datasets or electronic health record (EHR) databases.
Step 2.2 (Represented as 2 in
The patient data is used to train an XAI model to predict a disease of a patient. The XAI model will also provide explanations, such as the negative influential features and positive influential features, ranked with influential scores. For example, given a patient's data, an XAI model could predict that the patient has Coronary Artery Atherosclerosis with a risk level of 76% and explain the prediction as:
XAI approaches such as Local Interpretable Model-Agnostic Explanations (LIME), KBLRN (see Alberto Garcia-Duran, Mathias Niepert, "KBLRN: End-to-End Learning of Knowledge Base Representations with Latent, Relational, and Numerical Features," arXiv:1709.04676 (2018), which is hereby incorporated by reference herein) and gradient rollback (see U.S. Patent Application Publication No. 2022/0366274, which is hereby incorporated by reference herein) could be used to obtain the predictions and explanations.
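As a non-limiting sketch of how such influential features could be obtained in practice with LIME, the following assumes a fitted tabular classifier clf exposing predict_proba, training data X_train, a patient feature vector patient_row, and feature names; all of these names are hypothetical:

    from lime.lime_tabular import LimeTabularExplainer

    # Hypothetical inputs: X_train, feature_names, patient_row, and clf
    # (a fitted classifier with predict_proba) are assumed to exist.
    explainer = LimeTabularExplainer(
        X_train,
        feature_names=feature_names,
        class_names=["no disease", "Coronary Artery Atherosclerosis"],
        mode="classification",
    )
    exp = explainer.explain_instance(patient_row, clf.predict_proba, num_features=6)

    # LIME returns (feature, weight) pairs; here, features whose weights push
    # the predicted risk up are treated as the risky (negative) influential
    # features, and those pushing it down as the healthy (positive) ones.
    weights = exp.as_list()
    negative_influential = [(f, w) for f, w in weights if w > 0]
    positive_influential = [(f, w) for f, w in weights if w < 0]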
A trainable mixture prompt is constructed to be used as an input to a pre-trained large language model (PLLM). Embodiments of the present invention provide for learning the prompt so as to generate safe reports that avoid hallucinations of fact. The prompt consists of two parts:
Putting the two parts together, the input prompts to a PLLM according to an embodiment of the present invention can be, for example:
A Pre-trained Large Language Model (PLLM) is used to generate the text based on the input prompts. Advantageously, any private or public PLLM can be used. The model and its parameters are frozen, and are not trained during the training process.
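A minimal sketch of how such a mixture prompt could be assembled around a frozen causal language model, assuming the Hugging Face transformers library; the model name, instruction text, and verbalizations are illustrative only:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")        # any causal PLLM could be substituted
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    for p in model.parameters():                        # the PLLM stays frozen
        p.requires_grad = False

    K = 8                                               # number of trainable soft-prompt vectors
    dim = model.get_input_embeddings().embedding_dim
    soft_prompt = torch.nn.Parameter(torch.randn(K, dim) * 0.02)  # random initialization

    # Hard part: a cloze instruction built from the verbalized XAI features.
    instruction = (
        "The healthy features are: no family history. "
        "The risky features are: smoking, LDL cholesterol 3.5 mmol/L. "
        "The medical report is:"
    )
    ids = tok(instruction, return_tensors="pt").input_ids
    instr_emb = model.get_input_embeddings()(ids)

    # Mixture prompt: trainable soft vectors prepended to the instruction embeddings.
    prompt_emb = torch.cat([soft_prompt.unsqueeze(0), instr_emb], dim=1)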
Embodiments of the present invention provide a safety-aware training process to learn the trainable soft prompt. First, safe medical reports 200 are collected as training data. An LLM or PLLM can be used to generate some medical reports conditioned on the important features identified by XAI for different (pseudo) patients. The input prompts use the instruction part only. These reports, denoted as o_i, are labeled and corrected by medical professionals to be safe, in particular to match the truth of the domain knowledge and the facts of the patients. A corrected report 202 is denoted as c_i. Second, for each training example comprising a pair of an LLM-generated report 204 and a corrected report 202 by the medical professionals, embodiments of the present invention use the full prompt to generate a report, denoted as g_i, and compute the loss as:

L_i = L_i,mle + D(g_i, c_i) - D(g_i, o_i)
where L_i,mle is any commonly used maximum likelihood loss of a generated text. The term D(g_i, c_i) specifies the distance between the text generated with the full prompt and the corrected one. They should be as close as possible to ensure that the generated reports 204 are safe. In contrast, D(g_i, o_i) denotes the distance of the generated report 204 to the unsafe one, which should be larger. Third, the loss over the entire training data is optimized with respect to the embedding vectors of the soft prompt. Optimization methods, such as Adafactor, can be used to find the optimal values of the embedding vectors shared by all training examples. Adafactor may refer to a stochastic optimization method that reduces memory usage while retaining the empirical benefits of adaptivity. In embodiments, the PLLM is frozen, but the soft part of the prompt has K vectors. The methods of the current disclosure will learn to update the K vectors to fit the corrected reports from doctors. In embodiments, the soft part of the trainable prompt is not frozen but is optimized during a training period.
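The following sketch shows one way this safety-oriented loss and the optimization of the soft prompt could look, continuing the previous sketch. It assumes D is approximated by a mean-squared distance between text embeddings; the training iterator and the embedding inputs (mle_loss, g_emb, c_emb, o_emb) are hypothetical helpers, not part of the original disclosure:

    import torch
    from transformers import Adafactor

    def distance(a, b):
        # D(x, y): mean-squared distance between embedded texts (one possible choice).
        return torch.nn.functional.mse_loss(a, b)

    def safety_loss(mle_loss, g_emb, c_emb, o_emb):
        # L_i = L_i,mle + D(g_i, c_i) - D(g_i, o_i): pull the generation toward
        # the corrected report and push it away from the original unsafe report.
        return mle_loss + distance(g_emb, c_emb) - distance(g_emb, o_emb)

    # Only the K soft-prompt vectors are optimized; the PLLM itself stays frozen.
    optimizer = Adafactor([soft_prompt], lr=1e-3,
                          relative_step=False, scale_parameter=False)

    for mle_loss, g_emb, c_emb, o_emb in training_examples:  # hypothetical iterator
        loss = safety_loss(mle_loss, g_emb, c_emb, o_emb)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()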
Step 2.6—Generating, for a New Patient, a Medical Report with the Trained Prompt:
After the training procedure, the soft part of the prompt 206, in particular its K embedding vectors, has been learned. The learned embedding vectors are frozen and used by all prompts, which steers the PLLM to generate safe reports. For a new patient, given the positive and negative influential features 208, the prompt is constructed with the learned embedding vectors followed by the cloze instruction of influential features. With the prompt input 210, the PLLM will generate a safe report.
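Continuing the earlier sketches (soft_prompt, model, and tok as defined above), inference for a new patient could look as follows, assuming a transformers version whose generate method supports inputs_embeds; the instruction text is illustrative:

    # The learned soft prompt is now frozen and reused for every new patient.
    soft_prompt.requires_grad_(False)

    new_instruction = (
        "The healthy features are: no family history. "
        "The risky features are: total cholesterol 7.6 mmol/L, smoking. "
        "The medical report is:"
    )
    ids = tok(new_instruction, return_tensors="pt").input_ids
    emb = torch.cat([soft_prompt.unsqueeze(0),
                     model.get_input_embeddings()(ids)], dim=1)

    with torch.no_grad():
        out_ids = model.generate(inputs_embeds=emb, max_new_tokens=300)
    report = tok.decode(out_ids[0], skip_special_tokens=True)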
It is provided according to an embodiment of the present invention to personalize or individualize the prompts, for example to fit the individual styles of individual doctors. In particular, since every doctor has their own background, preferences and style in writing medical reports for their patients, an embodiment of the present invention provides a personalization mechanism in a data- and resource-efficient manner. To do so, a global soft prompt is trained with generic corrected reports. Then, for each doctor, a single copy of the global soft prompt is obtained. Next, the medical reports corrected by a particular doctor are collected. After that, the soft prompt is fine-tuned with the doctor-specific reports. Finally, a personalized soft prompt will be learned, which will steer the PLLM to generate the medical report preferred by the doctor. In embodiments, the soft prompt can be fine-tuned by changing the soft prompt slightly in the direction of the corrected reports (e.g., doctor-specific reports) and away from the generated ones, i.e., by back-propagation via the loss function. Personalization of the soft prompt can be performed for one user or entity such as a doctor or a hospital. This further reduces the effort, time, and required resources of each individual doctor. Also, since this personalization mechanism only trains and stores a soft prompt for each doctor, which consists of a limited number of embedding vectors, the training and storage is computational-resource friendly.
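A short sketch of this personalization step, under the same assumptions as the earlier sketches (safety_loss and Adafactor as defined above); the per-doctor data iterator and file name are hypothetical:

    # Each doctor receives a private copy of the global soft prompt, which is
    # then fine-tuned only on that doctor's corrected reports.
    doctor_prompt = torch.nn.Parameter(soft_prompt.detach().clone())
    optimizer = Adafactor([doctor_prompt], lr=1e-4,
                          relative_step=False, scale_parameter=False)

    for mle_loss, g_emb, c_emb, o_emb in doctor_examples:  # doctor-specific data
        loss = safety_loss(mle_loss, g_emb, c_emb, o_emb)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Only K x dim floats are stored per doctor (e.g., 8 x 256 values).
    torch.save(doctor_prompt, "soft_prompt_doctor_xxx.pt")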
Embodiments of the present invention thus provide for general improvements to computers in machine learning systems to provide an LLM system capable of generating text that is safer, more trustworthy, reliable and accurate, as well as explainable and personalizable. Although an especially advantageous use of the improved LLM system is for medical report generation, embodiments of the present invention also apply for the generation of other text for other machine learning tasks in other technical domains. Thus, in addition to the improvements in computer functionality in an AI LLM-based system to generate safe and correct text, embodiments of the present invention enable improvements also in the technical fields of application through improved, safer and more reliable decision-making and improved optimization of the machine learning task at hand that uses the generated text. Exemplary use cases in medical AI include disease prediction, digital medicine, personalized healthcare, AI-assisted drug or vaccine development, etc.; an exemplary use case in cyber security is generating cyber-attack reports. For example, collected data can be properties of activities, e.g., time stamps, source and destination IPs, source and destination ports, and protocols. The AI-LLM-based system can generate reports that detail why a user's behavior can be an attack on a system. Negative influential factors can include the most suspicious factors, such as source IPs.
In an exemplary embodiment, the present invention can be applied for medical report generation for disease prediction for patients. Besides predictions, the explanations can also be provided to increase the trust of the doctors in the AI system. Since doctors generally have no machine learning background, embodiments of the present invention advantageously provide that the explanations can be represented in natural language with some domain knowledge. In addition, since each doctor serves many patients per day, the automatic generation of medical reports in a manner still controlled by doctors and conditioned on the predictions, explanations and domain-specific knowledge makes it possible to significantly reduce the effort, time and resources of doctors. To this end, embodiments of the present invention can generate safe medical reports for doctors based on AI-driven predictions and explanations, as well as domain knowledge retained in the LLM system. The data source for this use case includes patient data about symptoms and test results. Corrected medical reports by the doctors of the customer hospitals can also be used if the doctors request personalized reports fitting their background and preferences specifically. Application of the method according to an embodiment of the present invention generates safe medical reports based on AI-driven disease predictions and explanations. The output is safe, reliable, trustworthy, accurate and explainable, and possibly also personalized, medical reports on disease predictions for patients.
In another exemplary embodiment, the present invention can be applied to reduce unsafe generations in medical reports, for example for heart diseases. Heart diseases can be caused by multiple distinct factors. Usually, each patient presents a unique case requiring an individual analysis, evaluation, and medical report. The following steps 1-5 are performed.
The patient's data consists of general information, symptoms, and the test results, for example:
XAI methods such as Decision Trees can not only predict the patient's condition, such as a specific heart disease, but can also provide explanation values for each factor within the patient's data that might have been influential in making the prediction. For example, the factors and explanation values could be:
The prompt template contains not only placeholders for the XAI factors, but also an instruction part and a verbalizer that projects the positive and negative class labels to word pairs describing the influential factors. In embodiments, the explanation values above can be used to rank the XAI factors. The top K factors with the largest/smallest values are selected as the positive/negative influential factors. In embodiments, positive influential features can be defined as features which are good for patient health, and negative features can be defined as features which are risky for patient health. Words such as risky, dangerous, bad, and negative can serve as candidates for verbalizations of negative influential features, whereas words such as healthy, good, and positive are candidates for verbalizations of positive influential features. Different verbalizations can have a different effect in steering the language models. The model is trained to find the optimal verbalizations. For example:
Furthermore, the prompt also includes a number (e.g., 8) of continuous vectors of the same size as the embedding vectors of the LLM (e.g., 256). The vectors are initialized randomly and will be trained during the training procedure. For example:
A PLLM is used to generate the medical report based on the input prompt. The medical report might be flawed or contain unsafe content, for example:
Medical Report for patient XXX
. . .
Patient XXX suffers from heart disease ABC. The patient's history with smoking might be one of the factors for their condition.
. . .
The healthy influence of the high LDL value suggests that the patient should generally consume more fats.
. . .
After collecting a training set containing generated medical reports with safe/unsafe flags together with corrected versions of the reports, the training set is used to train the soft prompt to stop generating unsafe content and to mimic the corrected medical reports, for example:
In an embodiment, the present invention provides a method to generate safe medical reports for disease predictions or other machine learning tasks in medical AI, the method comprising the steps of:
Embodiments of the present invention provide for the following improvements and technical advantages over existing technology:
Safety is a critical issue in LLMs (see Derek Tam, Anisha Mascarenhas, Shiyue Zhang, Sarah Kwan, Mohit Bansal, Colin Raffel, "Evaluating the Factual Consistency of Large Language Models Through News Summarization," arXiv:2211.08412 (2023); and Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen, "HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models," arXiv:2305.11747 (2023), each of which is hereby incorporated by reference herein). For medical report generation, existing technology does not address the safety issue (see Omar Alfarghaly, Rana Khaled, Abeer Elkorany, Maha Helal, Aly Fahmy, "Automated radiology report generation using conditioned transformers," Informatics in Medicine Unlocked, Vol. 24 (2021); and Zhihong Chen, Yan Song, Tsung-Hui Chang, Xiang Wan, "Generating Radiology Reports via Memory-driven Transformer," arXiv:2010.16056 (2020), each of which is hereby incorporated by reference herein). The generated medical report can be hallucinated (thereby containing factually incorrect information), which largely reduces the trust of end users (doctors). Although additional information, e.g., retrieved documents and facts in a knowledge base, can improve the faithfulness of the generated texts, there still exists a risk of fact hallucinations and generation of incorrect and unsafe information. Embodiments of the present invention overcome these technical problems by providing a data- and computation-efficient solution, which trains a mixture prompt (~2,000 parameters) with a safety-oriented loss to adapt and steer LLMs to generate truthful texts. An embodiment of the present invention can be especially advantageously applied to generate safe medical reports in a data- and computation-efficient manner via a trainable mixture prompt and safety-oriented training process.
Referring to
Processors 502 can include one or more distinct processors, each having one or more cores. Each of the distinct processors can have the same or different structure. Processors 502 can include one or more central processing units (CPUs), one or more graphics processing units (GPUs), circuitry (e.g., application specific integrated circuits (ASICs)), digital signal processors (DSPs), and the like. Processors 502 can be mounted to a common substrate or to multiple different substrates.
Processors 502 are configured to perform a certain function, method, or operation (e.g., are configured to provide for performance of a function, method, or operation) at least when one of the one or more of the distinct processors is capable of performing operations embodying the function, method, or operation. Processors 502 can perform operations embodying the function, method, or operation by, for example, executing code (e.g., interpreting scripts) stored on memory 504 and/or trafficking data through one or more ASICs. Processors 502, and thus processing system 500, can be configured to perform, automatically, any and all functions, methods, and operations disclosed herein. Therefore, processing system 500 can be configured to implement any of (e.g., all of) the protocols, devices, mechanisms, systems, and methods described herein.
For example, when the present disclosure states that a method or device performs task “X” (or that task “X” is performed), such a statement should be understood to disclose that processing system 500 can be configured to perform task “X”. Processing system 500 is configured to perform a function, method, or operation at least when processors 502 are configured to do the same.
Memory 504 can include volatile memory, non-volatile memory, and any other medium capable of storing data. Each of the volatile memory, non-volatile memory, and any other type of memory can include multiple different memory devices, located at multiple distinct locations and each having a different structure. Memory 504 can include remotely hosted (e.g., cloud) storage.
Examples of memory 504 include non-transitory computer-readable media such as RAM, ROM, flash memory, EEPROM, any kind of optical storage disk such as a DVD, a Blu-Ray® disc, magnetic storage, holographic storage, an HDD, an SSD, any medium that can be used to store program code in the form of instructions or data structures, and the like. Any and all of the methods, functions, and operations described herein can be fully embodied in the form of tangible and/or non-transitory machine-readable code (e.g., interpretable scripts) saved in memory 504.
Input-output devices 506 can include any component for trafficking data such as ports, antennas (i.e., transceivers), printed conductive paths, and the like. Input-output devices 506 can enable wired communication via USB®, DisplayPort®, HDMI®, Ethernet, and the like. Input-output devices 506 can enable electronic, optical, magnetic, and holographic communication with suitable memory 504. Input-output devices 506 can enable wireless communication via WiFi®, Bluetooth®, cellular (e.g., LTE®, CDMA®, GSM®, WiMax®, NFC®), GPS, and the like. Input-output devices 506 can include wired and/or wireless communication pathways.
Sensors 508 can capture physical measurements of the environment and report the same to processors 502. User interface 510 can include displays, physical buttons, speakers, microphones, keyboards, and the like. Actuators 512 can enable processors 502 to control mechanical forces.
Processing system 500 can be distributed. For example, some components of processing system 500 can reside in a remote hosted network service (e.g., a cloud computing environment) while other components of processing system 500 can reside in a local computing system. Processing system 500 can have a modular design where certain modules include a plurality of the features/functions shown in
The following provides further background and description of exemplary embodiments of the present invention, which may overlap to some extent with some of the information provided above. To the extent the terminology used to describe the exemplary embodiments may differ from the terminology used to describe the above embodiments, a person having skill in the art would understand that certain terms correspond to one another in the different embodiments. Features described in the Attachment can be combined with features described above in various embodiments.
Explanations in machine learning are meaningful only if they are understandable to human users. In daily life, humans explain to each other in natural language. However, eXplainable AI (XAI) identifies, e.g., important attributes from tabular data as explanations for a prediction, which are often difficult for humans to understand, especially when they have no AI background. To address this limitation, the current disclosure generates comprehensive text-based explanations conditioned on the important features selected by an XAI method. The explanations can borrow strength from both XAI and Large Language Models (LLMs), integrating the concrete information extracted by XAI from tabular data and the general domain knowledge retained in LLMs from large corpora.
Explainable AI has become essential for AI-driven systems, as users request to know how a black-box AI system arrives at a prediction. For example, the off-the-shelf XAI tool LIME identifies important features from tabular data as explanations of a prediction.
Existing Natural Language Explanation (NLE) methods mainly focus on explaining natural language inference and image captioning. There is a lack of NLE methods to generate texts explaining predictions inferred by an XAI method from tabular data, which is a typical need in real-world applications.
To solve this problem, the current disclosure provides users with comprehensive text-based explanations given the important features identified by XAI, such that the explanations can borrow strengths from both areas: they integrate the concrete tabular data selected by XAI about how a black-box AI system arrives at a prediction, and the general domain knowledge of the prediction retained by LLMs from large corpora.
Two LLMs, MPT-7B and GPT3.5-175B, were studied to determine answers to questions related to problems implementing an XAI solution, such as: can existing LLMs generate good text-based explanations given the features identified by XAI; if features identified by XAI conflict with domain knowledge, can LLMs find this; are there hallucinations in the explanations generated by LLMs; which metrics can be used to measure the performance; which datasets can be used for benchmarking; and is human evaluation needed to test the understandability of the text-based explanations? Each LLM was used to generate natural language explanations conditioned on the features identified by XAI. In the studies, GPT3.5-175B outperformed MPT-7B. The below text shows example outputs of the two LLMs given the following crafted prompt: "Explain why the features cause the prediction. Predicted disease: Coronary Artery Atherosclerosis (risk 76%), Good features: no family history, Dangerous features: total cholesterol (7.6 mmol/L), Smoke (Yes), LDL cholesterol (3.5 mmol/L)."
Output by MPT-7B from the prompt above: I am not sure what you mean by “prediction”. If it is about risk assessment then I would say that this patient has an increased chance of developing coronary artery atherosclerosis in his lifetime but there are other factors which may influence whether or not he develops CAD during his life time e.g., smoking status etc.
Output by GPT3.5-175B from the prompt above: The prediction of a 76% risk of Coronary Artery Atherosclerosis (CAA) is based on a combination of good and dangerous features that are well-established risk factors for this condition. Let's break down why each of these features contributes to such a prediction:
With limited manual tests, GPT3.5-175B produced good results without showing obvious hallucinations. It even found the conflict when the dangerous feature of smoking was modified from yes to no. Finding appropriate datasets as a benchmark was difficult, as both tabular and text data are required. Recruiting human annotators to generate text-based explanations given the features identified by XAI can be expensive and time-consuming, especially when domain knowledge (such as medicine) is needed. A potential solution is to employ very large LLMs to generate labeled data. To reduce possible hallucinations, instruction-based self-validation can be used together with proper human validation.
The current disclosure designs an initial framework based on prefix-tuning to train an NLE-specific LLM as described above in
A task-specific LLM is trained according to prefix-tuning, which is data- and computation-efficient. In particular, an additional set of trainable embeddings is prepended to the instruction and the identified positive and negative important features, as shown in the below example:
During a training process, the unknown embeddings are trained to fit the labeled data, i.e., the faithful text-based explanation conditioned on the important features. The current disclosure provides technical solutions to the technical problems presented by integrating XAI methods with medical data, by improving the understandability of XAI using the strength of LLMs. This can generate text based on the features identified by XAI and provide proper domain knowledge to further help users understand the predictions.
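As a practical note, this kind of prompt/prefix tuning can be realized, for example, with the PEFT library, where only the prepended virtual-token embeddings are trainable; the following is a sketch under that assumption, with an illustrative model name:

    from peft import PromptTuningConfig, TaskType, get_peft_model
    from transformers import AutoModelForCausalLM

    base = AutoModelForCausalLM.from_pretrained("gpt2")   # any causal PLLM
    config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=8)
    model = get_peft_model(base, config)

    # Only the 8 virtual-token embeddings are trainable; the base model is frozen.
    model.print_trainable_parameters()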
While subject matter of the present disclosure has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. Any statement made herein characterizing the invention is also to be considered illustrative or exemplary and not restrictive as the invention is defined by the claims. It will be understood that changes and modifications may be made, by those of ordinary skill in the art, within the scope of the following claims, which may include any combination of features from different embodiments described above.
The terms used in the claims should be construed to have the broadest reasonable interpretation consistent with the foregoing description. For example, the use of the article “a” or “the” in introducing an element should not be interpreted as being exclusive of a plurality of elements. Likewise, the recitation of “or” should be interpreted as being inclusive, such that the recitation of “A or B” is not exclusive of “A and B,” unless it is clear from the context or the foregoing description that only one of A and B is intended. Further, the recitation of “at least one of A, B and C” should be interpreted as one or more of a group of elements consisting of A, B and C, and should not be interpreted as requiring at least one of each of the listed elements A, B and C, regardless of whether A, B and C are related as categories or otherwise. Moreover, the recitation of “A, B and/or C” or “at least one of A, B or C” should be interpreted as including any singular entity from the listed elements, e.g., A, any subset from the listed elements, e.g., A and B, or the entire list of elements A, B and C.
Priority is claimed to U.S. Provisional Application Ser. No. 63/535,327 filed on Aug. 30, 2023, the entire contents of which is hereby incorporated by reference herein.