The present application generally relates to evaluating a medical workflow and, in particular, to using machine learning to improve medical environment safety and efficiency.
Existing operating room dashboards present static content. This content may describe certain aspects of a surgical procedure being performed (e.g., sponge count, blood loss, etc.). These aspects, however, change during the surgical procedure. Because existing operating room dashboards update content only in response to human input, medical staff must divert attention to keep the displayed content current. This can hinder and/or prevent medical staff from performing tasks that may need to be completed, thereby decreasing operating room safety and efficiency.
Idle and/or inexperienced medical staff may also contribute to operating room inefficiency. Idle/inexperienced medical staff can increase medical procedure durations, which can negatively impact patient care. Inexperienced medical staff may not know what tasks to perform, when to perform those tasks, or how to perform those tasks. As a result, more experienced medical staff may have to provide instructions or perform those tasks themselves.
Therefore, it would be beneficial to have systems, methods, and programming for intelligently selecting and automatically updating content presented during a surgical procedure to improve operating room safety and efficiency.
Described are systems, methods, and programming for evaluating a medical workflow to improve medical environment safety and efficiency. The medical workflow describes different stages associated with a medical procedure. For example, the medical workflow may include one or more stages occurring prior to the medical procedure beginning (e.g., preparing an operating room, bringing in a patient, medical staff entering the operating room, etc.), one or more stages occurring while the medical procedure is performed (e.g., preparing a mixture, cleaning a surface of a medical device within the operating room, making an incision, etc.), and/or one or more stages occurring after the medical procedure has ended (e.g., wheeling a patient out of the operating room, cleaning the operating room, etc.).
Some of these stages may include a set of tasks to be performed by one or more medical staff (e.g., surgeons, nurses, technicians, cleaning staff, etc.). Completion of these tasks may indicate that the current stage has ended and/or a subsequent stage may begin. For example, creating a mixture may include tasks such as measuring the materials, combining the materials, curing the materials, or other tasks. Completion of the tasks may indicate that the mixture has been made. However, not all of the medical staff may know how to perform tasks in a given stage, may be preoccupied with other tasks, or may be idle. This can increase surgical procedure duration, which can negatively impact patient outcomes.
One or more machine learning models may be implemented to intelligently provide medical content to the medical staff to help decrease the amount of time needed to perform certain tasks. The machine learning models may be trained to determine contextual information describing the operating room, identify a current stage of the medical workflow, identify a subsequent stage of the medical workflow, compute an efficiency score, and/or update aspects of the medical workflow to improve the efficiency score. As an example, medical content describing the contextual information (e.g., a user interface displaying an amount of blood lost during the surgery, a number of surgical sponges used, etc.) may be presented to some or all of the medical staff. As another example, medical content describing the subsequent medical stage of the medical workflow (e.g., a video, checklist, audio message, etc.) may be presented to some or all of the medical staff.
A medical workflow efficiency score may be computed indicating the efficiency of the medical staff in carrying out the different stages of the medical workflow. The medical workflow efficiency score may be determined based on the contextual information, the identified medical stage, the subsequent medical stage, or other information. For example, the medical workflow efficiency score may indicate an amount (e.g., a percentage) of time that medical staff were idle during the surgical procedure. Depending on the medical workflow efficiency score, one or more aspects of the medical workflow may be adjusted. For example, the number of medical staff allocated for performances of a given surgical procedure may be adjusted based on the medical workflow efficiency score. This can improve the overall medical workflow efficiency, thereby optimizing resource allocation and improving patient outcomes.
According to some examples, a method for evaluating a medical workflow includes: receiving one or more images depicting a medical environment and a medical workflow performed in the medical environment; determining contextual information associated with the medical workflow based on the one or more images; identifying a stage of the medical workflow based on the one or more images; determining a subsequent stage of the medical workflow based on the identified stage; and presenting, to one or more medical staff located within the medical environment, first medical content associated with the contextual information and second medical content associated with the subsequent stage.
In any of the examples, the method can further include: generating a report for the medical workflow based on the contextual information.
In any of the examples, the medical workflow can comprise a first stage occurring prior to a beginning of a medical procedure, a second stage occurring during the medical procedure, and a third stage occurring after the medical procedure has been completed, wherein identifying the stage of the medical workflow comprises: determining whether the one or more images were captured during the first stage, the second stage, or the third stage.
In any of the examples, the method can further include: inputting the one or more images into one or more machine learning models to obtain one or more image representations.
In any of the examples, identifying the stage of the medical workflow can comprise: identifying activities of the one or more medical staff based on the one or more image representations, the stage of the medical workflow being determined based on the identified activities.
In any of the examples, determining the contextual information can comprise: detecting, based on the one or more image representations, one or more objects present within the one or more images; and generating the contextual information based on the one or more detected objects.
In any of the examples, the method can further include: generating a report comprising at least some of the first medical content, the second medical content, or the first medical content and the second medical content; and presenting, via one or more display devices, the report to the one or more medical staff.
In any of the examples, the method can further include: generating an audio message based on subsequent stage information, the audio message describing the subsequent stage of the medical workflow.
In any of the examples, the method can further include: outputting the audio message describing the subsequent stage to the one or more medical staff.
In any of the examples, presenting the second medical content can comprise: selecting one or more display devices with which to present the second medical content, the one or more display devices being located within the medical environment; and sending the second medical content to the one or more selected display devices.
In any of the examples, selecting the one or more display devices can comprise: identifying, based on the one or more images, one or more relevant medical staff located within the medical environment, the one or more relevant medical staff associated with the subsequent stage; and identifying, based on the one or more images, one or more displays located within the medical environment, the one or more displays proximate to the one or more relevant medical staff.
In any of the examples, identifying the one or more relevant medical staff can comprise: detecting, using one or more machine learning models, activities of the one or more relevant medical staff based on the one or more images, the one or more relevant medical staff being identified from the one or more medical staff based on the detected activities.
In any of the examples, the method can further include: detecting, using one or more machine learning models, activities of the one or more medical staff during the stage of the medical workflow based on the one or more images; and generating a report for the medical workflow based on the detected activities.
In any of the examples, generating the report can comprise: generating an efficiency score indicating an efficiency of the medical workflow, the report comprising the generated efficiency score.
In any of the examples, generating the efficiency score can comprise: determining a number of medical staff associated with the stage of the medical workflow based on the detected activities of the one or more medical staff; and comparing the number of medical staff to a predefined number of medical staff to be used for the stage of the medical workflow, the efficiency score being based at least in part on a result of the comparison.
In any of the examples, the method can further include: detecting, using one or more machine learning models, activities of the one or more medical staff during the stage of the medical workflow based on the one or more images; updating at least one of the first medical content or the second medical content based on the activities; and presenting, using one or more display devices, at least one of the updated first medical content or the updated second medical content to the one or more medical staff.
In any of the examples, the method can further include: determining, based on the contextual information, that the medical workflow has progressed from a first stage to a second stage; and updating the first medical content based on the medical workflow progressing from the first stage to the second stage.
In any of the examples, presenting the first medical content and the second medical content can comprise: identifying at least one of (i) one or more checklists associated with the medical workflow or (ii) one or more videos associated with the medical workflow; and generating at least one of the first medical content or the second medical content based on the at least one of (i) the one or more checklists or (ii) the one or more videos.
In any of the examples, the method can further include: identifying, based on the one or more images, a current task of a plurality of tasks associated with the identified stage; determining, based on the one or more images, that the current task has been completed; and modifying a checklist associated with the identified stage to indicate that the current task has been completed.
In any of the examples, the method can further include: identifying, from a plurality of tasks associated with the identified stage, a current task and a subsequent task; determining, based on the one or more images, that the subsequent task has been started by the one or more medical staff prior to completion of the current task; and sending a notification to the one or more medical staff indicating that the current task has not been completed.
In any of the examples, sending the notification to the one or more medical staff can comprise: generating and outputting at least one of an audible alert, a visual alert, or a haptic alert to the one or more medical staff.
In any of the examples, the method can further include: detecting, using one or more machine learning models, activities of the one or more medical staff during the stage of the medical workflow based on the one or more images; and generating a report indicating that a quantity of the one or more medical staff is to be adjusted.
In any of the examples, the quantity of the one or more medical staff being adjusted can comprise: increasing the quantity of medical staff for the identified stage; or decreasing the quantity of medical staff for the identified stage.
According to some examples, a system includes: one or more processors programmed to perform the method of any of the examples.
According to some examples, a non-transitory computer-readable medium stores computer program instructions that, when executed, effectuate operations including the method of any of the examples.
According to some examples, a computer program product comprises software code portions comprising instructions that, when executed, effectuate operations including the method of any of the examples.
It will be appreciated that any of the variations, aspects, features, and options described in view of the systems apply equally to the methods and vice versa. It will also be clear that any one or more of the above variations, aspects, features, and options can be combined.
The invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Reference will now be made in detail to implementations and various aspects and variations of systems and methods described herein. Although several example variations of the systems and methods are described herein, other variations may combine all or some of the described aspects in any suitable manner.
Described are systems, methods, and programming for evaluating a medical workflow. This evaluation may be used to improve operating room efficiency. Operating room efficiency can depend on a variety of factors. For example, medical staff who are inexperienced with certain tasks to be performed may decrease operating room efficiency. As another example, medical staff that remain idle when tasks could be performed can also decrease operating room efficiency. As yet another example, operating room dashboards that present static content can decrease operating room efficiency. Greater operating room efficiency can help improve patient care during the medical procedure and may also improve medical procedure outcomes.
To improve operating room efficiency, one or more machine learning models may be used to intelligently identify a current stage of a medical workflow and retrieve content associated with the current stage and/or a subsequent stage. Additionally, one or more machine learning models may be used to extract contextual information about the medical procedure from operating room images. This contextual information can be used to dynamically update the content presented via operating room dashboards.
The presented content may include checklists of tasks to be performed. The tasks may help facilitate compliance with the medical workflow's requirements. The presented content may also include images, text, and/or videos describing how to perform one or more of the tasks.
The presented content can be utilized by idle roles (i.e., medical staff identified as being idle) to assist other medical staff and improve operating room efficiency. For example, the presented content can inform idle medical staff what tasks are to be performed by non-idle medical staff. This may enable those individuals previously identified as idle to provide aid to the other medical staff. Furthermore, the presented content can inform medical staff what should be transpiring within the medical environment, thereby providing improved oversight and, in turn, improving patient outcomes.
Images and/or video of the operating room may be obtained and analyzed via the one or more machine learning models. The one or more machine learning models may be trained to identify objects within the operating room as well as actions, if any, occurring within the operating room. For example, the machine learning models may detect medical sponges and/or blood loss and compute a quantity of medical sponges used and/or blood lost during a surgical procedure. As another example, the machine learning models may detect medical staff within the operating room and surgical activities, if any, being performed by the medical staff.
The outputs from the machine learning models may be used to dynamically update content presented to the medical staff. For example, contextual information describing changes in the quantity of medical sponges used and/or blood lost during the surgery can be determined using the machine learning models. Content presented to the medical staff may be dynamically updated based on changes in contextual information. For example, the content may be updated based on activities of users within the medical environment. The updated content can improve operating room efficiency as compared to conventional techniques. For example, instead of (or in addition to) manually tracking and inputting contextual information to update the presented content, the machine learning models may be configured to detect changes in contextual information and update the presented content automatically. Medical staff may also monitor these changes (e.g., medical sponge count, blood loss) and can compare their findings with the determinations from the machine learning models. Discrepancies between the machine learning models' determinations and the medical staff's findings may be further investigated to identify the cause of the discrepancies and mitigate the discrepancies (e.g., by re-training one or more of the machine learning models).
A report for the medical workflow may be generated describing the presented content, the contextual information, the identified stage, the subsequent stage, and/or other information. For example, a report may be generated that describes discrepancies found between the machine learning models' determinations (e.g., the contextual information) and the medical staff's findings.
As another example, the activities performed by the medical staff may be used to determine stages of the medical workflow. These stages may each have associated content that is to be presented to some or all of the medical staff. By identifying the current stage, content associated with the current stage as well as the known subsequent stage may be retrieved in advance and presented to the medical staff. This can improve operating room efficiency by providing medical staff with the necessary information to perform tasks associated with each stage, thereby minimizing idleness and optimizing medical staff resource allocation.
In the following description, it is to be understood that the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It is also to be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It is further to be understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used herein, specify the presence of stated features, integers, steps, operations, elements, components, and/or units but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, units, and/or groups thereof.
Certain aspects of the present disclosure include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present disclosure could be embodied in software, firmware, or hardware and, when embodied in software, could be downloaded to reside on and be operated from different platforms used by a variety of operating systems. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that, throughout the description, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “displaying,” “generating,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission, or display devices.
The present disclosure in some examples also relates to a device for performing the operations herein. This device may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, computer readable storage medium, such as, but not limited to, any type of disk, including floppy disks, USB flash drives, external hard drives, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may employ architectures with multiple processors for increased computing capability. Suitable processors include central processing units (CPUs), graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and ASICs.
The methods, devices, and systems described herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein.
Medical environment 10 may include devices used to prepare for and/or perform a medical procedure on a patient 12. These devices may also be used after the medical procedure. Such devices may include one or more sensors, one or more medical devices, one or more display devices, one or more light sources, one or more computing devices, or other components. For example, at least one medical device 120 may be located within medical environment 10. Medical device 120 may be used to assist medical staff while performing a medical procedure (e.g., surgery). Medical device 120 may also be used to document events and information from the medical procedure. For example, medical device 120 may be used to input or receive patient information (e.g., to/from electronic medical records (EMRs), electronic health records (EHRs), a hospital information system (HIS), communicated in real-time from another system, etc.). The received patient information may be saved onto medical device 120. Alternatively or additionally, the patient information may be displayed using medical device 120. In some aspects, medical device 120 may be used to record patient information. For example, medical device 120 may be used to store the patient information or images in an EMR, EHR, HIS, or other databases.
Medical device 120 may be capable of obtaining, measuring, detecting, and/or saving information related to the patient 12. Medical device 120 may or may not be coupled to a network that includes records of patient 12, for example, an EMR, EHR, or HIS. Medical device 120 may include or be integrated with a computing system 102 (e.g., a desktop computer, a laptop computer, a tablet device, etc.) having an application server. For example, medical device 120 may include processors or other hardware components that enable data to be captured, stored, saved, and/or transmitted to other devices. Computing system 102 can have a motherboard that includes one or more processors or other similar control devices as well as one or more memory devices. The processors may control the overall operation of computing system 102 and can include hardwired circuitry, programmable circuitry that executes software, or a combination thereof. The processors may, for example, execute software stored in the memory device. The processors may include, for example, one or more general- or special-purpose programmable microprocessors and/or microcontrollers, graphics processing units (GPUs), tensor processing units (TPUs), application-specific integrated circuits (ASICs), programmable logic devices (PLDs), programmable gate arrays (PGAs), or the like. The memory devices may include any combination of one or more random access memories (RAMs), read-only memories (ROMs) (which may be programmable), flash memory, and/or other similar storage devices. Patient information may be inputted into computing system 102 (e.g., making an operative note during the medical or surgical procedure on patient 12 in medical environment 10) and/or computing system 102 can transmit the patient information to another medical device 120 (via either a wired connection or wirelessly).
Computing system 102 can be positioned in medical environment 10 on a table (stationary or portable), a floor 104, a portable cart 106, an equipment boom, and/or a shelving 103.
In some aspects, medical environment 10 may be an integrated suite used for minimally invasive surgery (MIS) or fully invasive procedures. Video components, audio components, and associated routing may be located throughout medical environment 10. For example, monitor 14 may present video and speakers 118 may output audio. The components may be located on or within the walls, ceilings, or floors of medical environment 10. For example, room cameras 146 may be mounted to walls 148 or a ceiling 150. Wires, cables, and hoses can be routed through suspensions, equipment booms, and/or interstitial space. The wires, cables, and/or hoses in medical environment 10 may be capable of connecting to mobile equipment, such as portable cart 106, C-arms, microscopes, etc., to route audio, video, and data.
Imaging system 108 may be configured to capture images and/or video, and may route audio, video, and other data (e.g., device control data) throughout medical environment 10. Imaging system 108 and/or associated router(s) may route the information between devices within or proximate to medical environment 10. In some aspects, imaging system 108 and/or associated router(s) (not shown) may be located external to medical environment 10 (e.g., in a room outside of an operating room), such as in a closet. As an example, the closet may be located within a predefined distance of medical environment 10 (e.g., within 325 feet (approximately 100 meters)). In some aspects, imaging system 108 and/or the associated router(s) may be located in a cabinet inside or adjacent to medical environment 10.
The captured images and/or videos may be displayed via one or more display devices. For example, images captured by imaging system 108 may be displayed using monitor 14. Imaging system 108, alone or in combination with one or more audio sensors, may also be capable of recording audio, outputting audio, or a combination thereof. In some aspects, patient information can be inputted into imaging system 108 and associated with the images and videos recorded and/or displayed. Imaging system 108 can include internal storage (e.g., a hard drive, a solid-state drive, etc.) for storing the captured images and videos. Imaging system 108 can also display any captured or saved images (e.g., from the internal hard drive). For example, imaging system 108 may cause monitor 14 to display a saved image. As another example, imaging system 108 may display a saved video using a touchscreen monitor 22. Touchscreen monitor 22 and/or monitor 14 may be coupled to imaging system 108 via either a wired connection or wirelessly. It is contemplated that imaging system 108 could obtain or create images of patient 12 during a medical or surgical procedure from a variety of sources (e.g., from video cameras, video cassette recorders, X-ray scanners (which convert X-ray films to digital files), digital X-ray acquisition apparatus, fluoroscopes, computed tomography (CT) scanners, magnetic resonance imaging (MRI) scanners, ultrasound scanners, charge-coupled devices (CCDs), and other types of scanners (handheld or otherwise)). If coupled to a network, imaging system 108 can also communicate with a picture archiving and communication system (PACS), as is well known to those skilled in the art, to save images and videos in the PACS and to retrieve images and videos from the PACS. Imaging system 108 can couple to and/or integrate with, e.g., an electronic medical records database (e.g., EMR) and/or a media asset management database.
Touchscreen monitor 22 and/or monitor 14 may display images and videos captured live by imaging system 108. Imaging system 108 may include at least one image sensor, for example, disposed within video camera 140. Video camera 140 may be configured to capture an image or a sequence of images (e.g., video frames) of patient 12. Video camera 140 can be a hand-held device, such as an open-field camera or an endoscopic camera. For example, imaging system 108 may be coupled to an endoscope 142, which may include or be coupled to video camera 140. Endoscope 142 may communicate with a camera control unit 144 via a fiber optic cable 147, and camera control unit 144 may communicate with imaging system 108 (e.g., via a wired or wireless connection).
Room cameras 146 may also be configured to capture an image or a sequence of images (e.g., video frames) of medical environment 10. The captured image(s) may be displayed using touchscreen monitor 22 and/or monitor 14. In addition to room cameras 146, a camera 152 may be disposed on a surgical light 154 within medical environment 10. Camera 152 may be configured to capture an image or a sequence of images of medical environment 10 and/or patient 12. Images captured by video camera 140, room cameras 146, and/or camera 152 may be routed to imaging system 108, which may then be displayed using touchscreen monitor 22, monitor 14, another display device, or a combination thereof. Additionally, the images captured by video camera 140, room cameras 146, and/or camera 152 may be provided to a database for storage (e.g., an EMR).
Room cameras 146, camera 152, and/or video camera 140 (or another camera of imaging system 108) may include at least one solid state image sensor. For example, the image sensor of room cameras 146, camera 152, and/or video camera 140 may include a charge coupled device (CCD), a complementary metal-oxide semiconductor (CMOS) sensor, a charge-injection device (CID), or another suitable sensor technology. The image sensor of room cameras 146, camera 152, and/or video camera 140 may include a single image sensor. The single image sensor may be a grayscale image sensor or a color image sensor having an RGB color filter array deposited on its pixels. The image sensor of room cameras 146, camera 152, and/or video camera 140 may alternatively include three sensors: one sensor for detecting red light, one sensor for detecting green light, and one sensor for detecting blue light.
The medical procedure in which the images may be captured using room cameras 146, camera 152, and/or video camera 140 may be an exploratory procedure, a diagnostic procedure, a study, a surgical procedure, a non-surgical procedure, an invasive procedure, or a non-invasive procedure. As mentioned above, video camera 140 may be an endoscopic camera (e.g., coupled to endoscope 142). It is to be understood that the term endoscopic (and endoscopy in general) is not intended to be limiting, and rather video camera 140 may be configured to capture medical images from various scope-based procedures including but not limited to arthroscopy, ureteroscopy, laparoscopy, colonoscopy, bronchoscopy, etc.
Speakers 118 may be positioned within medical environment 10 to provide sounds, such as music, audible information, and/or alerts, that can be played within the medical environment during the medical procedure. For example, speaker(s) 118 may be installed on the ceiling 150, installed on the wall 148 and/or positioned on a bookshelf, a station, etc.
One or more microphones 16 may sample audio signals within medical environment 10. The sampled audio signals may comprise the sounds played by speakers 118, noises from equipment within medical environment 10, and/or human speech (e.g., voice commands to control one or more medical devices, verbal information conveyed for documentation purposes, etc.). Microphone(s) 16 may be located within a speaker (e.g., a smart speaker) attached to monitor 14, as shown in
Medical devices 120 may include one or more sensors, such as an image sensor, an audio sensor, a motion sensor, or other types of sensors. The sensors may be configured to capture one or more images, one or more videos, audio data, or other data relating to a medical procedure. As an example, with reference to
Client devices 130-1 to 130-N may be capable of communicating with one or more components of system 100 via a wired and/or wireless connection (e.g., network 170). Client devices 130 may interface with various components of system 100 to cause one or more actions to be performed. For example, client devices 130 may represent one or more devices used to display images and videos to a user (e.g., a surgeon). Examples of client devices 130 may include, but are not limited to, desktop computers, servers, mobile computers, smart devices, wearable devices, cloud computing platforms, display devices, mobile terminals, fixed terminals, or other client devices. Each client device 130-1 to 130-N of client devices 130 may include one or more processors, memory, communications components, display components, audio capture/output devices, imaging components, other components, and/or combinations thereof.
As described above with respect to
In addition to or instead of room cameras 146, camera 152 of
Computing system 102 may include one or more subsystems, such as an image analysis subsystem 110, a content rendering subsystem 112, or other subsystems. Subsystems 110, 112 may be implemented using one or more processors (e.g., CPUs, GPUs, TPUs, and the like), memory, and interfaces. Distributed computing architectures and/or cloud-based computing architectures may alternatively or additionally be used to implement at least a portion of the functionalities associated with subsystems 110, 112.
It should be noted that, while one or more operations are described herein as being performed by particular components of computing system 102, those operations may be performed by other components of computing system 102 or other components of system 100. As an example, one or more operations described herein as being performed by components of computing system 102 may alternatively be performed by one or more of medical devices 120 and/or client devices 130.
Subsystems 110, 112 may be configured to implement various portions of a medical workflow evaluation process. As an example, with respect to
First medical stage 202 may correspond to one or more activities performed prior to a medical procedure beginning. For example, first medical stage 202 may include bringing a patient, such as patient 12 of
Returning to
Returning to
During the various stages of medical workflow 200 of
Returning to
Image analysis subsystem 110 may be configured to analyze the images to extract contextual information associated with the medical procedure and/or identify a medical stage of the medical workflow. For example, image analysis subsystem 110 may be configured to extract contextual information and/or identify the medical stage using one or more machine learning models stored in model database 164. The machine learning models may be arranged as a machine learning pipeline. As an example, with reference to
It is to be understood that although some aspects are described herein with respect to machine learning models, other prediction models (e.g., statistical models or other analytics models) may be used instead of or in addition to the machine learning models described herein. For example, a statistical model may replace a machine learning model, or vice versa.
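By way of example only, the two-model arrangement described above could be organized as in the following non-limiting Python sketch, with the stage model optionally consuming the contextual model's output (or the order reversed, or the models run in parallel, as discussed below). The wrapper objects and their predict methods are hypothetical placeholders, not a prescribed implementation of contextual ML model 310 or medical stage ML model 320.

```python
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class PipelineOutput:
    contextual_info: Dict[str, Any]  # e.g., sponge count, blood loss, activities
    stage_info: Dict[str, Any]       # e.g., current stage, subsequent stage

def run_pipeline(images: List[Any], contextual_model: Any,
                 stage_model: Any) -> PipelineOutput:
    """Run the contextual model first, then the stage model, which may
    optionally consume the contextual information as an additional input."""
    contextual_info = contextual_model.predict(images)
    stage_info = stage_model.predict(images, context=contextual_info)
    return PipelineOutput(contextual_info, stage_info)
```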
Contextual ML model 310 may be configured to extract and output contextual information 312 from input images 302. For example, contextual information 312 may be associated with a medical procedure and/or a medical environment where the medical procedure is being performed, such as medical environment 10 of
Medical stage ML model 320 may be configured to output medical stage information 322 based on one or more inputs. For example, medical stage information 322 may indicate a medical stage of the medical workflow identified based on input images 302, as well as, optionally, contextual information 312 extracted using contextual ML model 310. For example, medical stage ML model 320 may determine whether medical workflow 200 of
In some aspects, although contextual ML model 310 is illustrated as analyzing images 302 prior to medical stage ML model 320, the order of the models may be reversed, or, as mentioned above, the models may be executed in parallel. If the order is reversed, medical stage information 322 may be provided as input to contextual ML model 310 such that contextual information 312 may be based on images 302 and/or medical stage information 322. Additionally, other inputs not illustrated in
As mentioned above, contextual ML model 310 may be configured to extract contextual information 312 from images 302. Contextual information 312 may describe aspects of a medical procedure, activities performed within a medical environment, objects detected within the medical environment, or other information. For example, contextual information 312 may include a surgical sponge count, information related to blood loss, patient biometric data (e.g., respiratory rate), the number of medical personnel present within medical environment 10, activities performed by some or all of the medical staff, or combinations thereof. An example of contextual ML model 310 is described below with reference to
Contextual information 312 may include information associated with cleaning of the medical environment. Contextual ML model 310 may be configured to analyze images 302 to detect cleaning of the medical environment and aspects of the cleaning and generate contextual information 312 that includes information about the detected cleaning. For example, contextual ML model 310 may be configured to detect activity of medical personnel that is associated with cleaning, such as the use of cleaning implements by the medical staff in proximity to objects and/or surfaces to be cleaned. This could include, for example, contextual ML model 310 being trained to detect that one or more medical staff members are grasping a cleaning implement (e.g., a cleaning cloth) and moving the cleaning implement in a cleaning motion in proximity to an object and/or surface to be cleaned (e.g., moving the cloth back and forth in proximity to the surface of a surgical table). Contextual ML model 310 may be configured to track which objects and/or surfaces of objects in the medical environment have been cleaned. For example, contextual ML model 310 may be configured to track the cleaning of tables, surgical lights and control handles, imaging equipment (e.g., a mobile CT scanner), suction regulators, anesthesia carts, compressed gas tanks, floors, walls, and/or any other objects and/or surfaces in the medical environment. The contextual information 312 may include any information associated with detected cleaning activities, such as information that a particular object or surface (e.g., a computer mouse or various surfaces of a surgical table) has or has not been cleaned, an order of cleaning of surfaces of an object (e.g., a surgical table top cleaned before a pedestal of the surgical table), and/or an order of cleaning of objects and/or surfaces (e.g., the surgical table is cleaned before the floors).
Encoder 412 may be implemented as a neural network. The neural network of encoder 412 may include one or more fully connected layers, global and/or max pooling layers, input layers, etc. For example, encoder 412 can be implemented using a variant of a recurrent neural network architecture (RNN) (e.g., a long short-term memory (LSTM) model), a convolutional neural network (CNN) (e.g., ResNet), transformer architectures, other machine learning models, or combinations thereof. In an example, encoder 412 may be implemented using a temporal convolutional network (e.g., a 3D CNN).
Encoder 412 may receive one or more images 302, as described above with reference to
Output image representations 414 from encoder 412 may be provided as input to classifier 416 of contextual ML model 310. Classifier 416 may be a regression classifier, such as a logistic regression classifier, configured to classify an image of images 302 into one or more categories. Each category may be associated with contextual information including but not limited to a particular object, activity, and/or other aspect of a medical procedure. For example, contextual ML model 310 may detect one or more objects present within images 302 based on output image representations 414. Contextual ML model 310 may generate contextual information 312 based on the detected objects. As another example, output image representations 414 may be used to identify activities of medical staff within medical environment 10 (shown in
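By way of illustration only, a minimal PyTorch-style sketch of an encoder feeding a logistic-regression-style classification head is shown below. The toy convolutional backbone, embedding size, and category count are illustrative assumptions standing in for the ResNet/LSTM/transformer variants mentioned above, and do not represent a prescribed implementation of encoder 412 or classifier 416.

```python
import torch
import torch.nn as nn

class ContextualNet(nn.Module):
    """Sketch of encoder 412 plus classifier 416; all sizes are illustrative."""

    def __init__(self, embedding_dim: int = 128, num_categories: int = 10):
        super().__init__()
        # Toy convolutional encoder standing in for a ResNet/LSTM/transformer.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, embedding_dim),  # low-dimensional embedding
        )
        # A single linear layer acts as a multinomial logistic regression head.
        self.classifier = nn.Linear(embedding_dim, num_categories)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        embeddings = self.encoder(images)   # (batch, embedding_dim)
        return self.classifier(embeddings)  # (batch, num_categories) logits

# Example: four RGB frames at 64x64; softmax yields per-category confidences.
logits = ContextualNet()(torch.randn(4, 3, 64, 64))
confidences = torch.softmax(logits, dim=-1)
```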
As illustrated in
Contextual ML model 310, including encoder 412 and classifier 416, may be trained using training data including images depicting various medical objects, medical activities, or other aspects of a medical workflow. The images may be inputted to contextual ML model 310, and contextual ML model 310 may generate a prediction of what objects, activities, etc. those images depict. The predictions may be compared to a ground truth, which may be used to compute a loss (e.g., a cross-entropy loss). The computed loss may be used to adjust one or more parameters (e.g., weights) of contextual ML model 310. The process of inputting training data, generating a prediction, comparing the prediction to the ground truth, and computing the loss may be repeated (e.g., iterated) any number of times. Training of contextual ML model 310 may end after a predefined number of training iterations have been performed and/or when contextual ML model 310 produces predictions that meet or exceed a threshold level of accuracy.
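By way of example only, the training procedure described above might resemble the following non-limiting sketch; the optimizer, learning rate, and stopping criteria are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, max_iterations: int = 10_000,
          target_accuracy: float = 0.95) -> None:
    """Sketch of the loop described above: predict, compare to ground truth
    via cross-entropy, adjust the model's learnable parameters, and stop
    after a fixed iteration budget or once accuracy meets a threshold.
    The loader is assumed re-iterable (e.g., a torch DataLoader)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    iteration = 0
    while iteration < max_iterations:
        for images, labels in loader:
            logits = model(images)          # generate a prediction
            loss = loss_fn(logits, labels)  # compare against the ground truth
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                # adjust parameters based on the loss
            accuracy = (logits.argmax(dim=-1) == labels).float().mean().item()
            iteration += 1
            if iteration >= max_iterations or accuracy >= target_accuracy:
                return
```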
Returning to
Encoder 452 may be implemented as a neural network. The neural network of encoder 452 may include one or more fully connected layers, global and/or max pooling layers, input layers, etc. For example, encoder 452 can be implemented using a variant of a recurrent neural network architecture (RNN) (e.g., a long short-term memory (LSTM) model), a convolutional neural network (CNN) (e.g., ResNet), transformer architectures, other machine learning models, or combinations thereof. In an example, encoder 452 may be implemented using a temporal convolutional network (e.g., a 3D CNN).
Encoder 452 may receive one or more images 302 and may generate a corresponding image representation 454 for each received image of images 302. Image representation 454 may be an embedding representing the given image of images 302. The embedding can be a vector representation of an image in a latent space. The embedding may significantly reduce the size and dimensionality of the original image data. As mentioned above, lower-dimensional embeddings may be used for more efficient downstream processing than processing with the original images 302. The embeddings may be constructed to retain invariant features of the input images 302 while minimizing image-specific characteristics (e.g., imaging angle, resolution, artifacts, etc.). For example, an embedding may be a vector or array of values representing features of a given image of images 302. The quantity of values in the vector may be dependent on the number of features medical stage ML model 320 is trained to identify. For example, the embedding may be a vector having 16 values (corresponding to 16 dimensions in the latent space), 128 values (corresponding to 128 dimensions in the latent space), 512 values (corresponding to 512 dimensions in the latent space), or other quantities.
The output image representations 454 from encoder 452 may be provided as input to classifier 456 of medical stage ML model 320. Classifier 456 may be a regression classifier, such as a logistic regression classifier, configured to classify an image of images 302 into one or more categories. Each category may be associated with a particular medical stage of a medical workflow, such as first medical stage 202, second medical stage 204, third medical stage 206, etc. of medical workflow 200 illustrated in
As illustrated in
Classification result 458 may also include an indication of a subsequent medical stage. The subsequent medical stage may correspond to the known stage of the medical workflow that follows the current medical stage. For example, medical stage ML model 320 may determine that medical workflow 200 is currently in first medical stage 202. Thus, classification result 458 may indicate the current stage (first medical stage 202 of medical workflow 200), as well as a following medical stage of the medical workflow (e.g., second medical stage 204).
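Because the order of stages in a given workflow is known in advance, the subsequent stage may be derived from the identified stage with a simple lookup, as in the following non-limiting sketch (the stage names are hypothetical):

```python
from typing import Optional

# Ordered stages of a hypothetical workflow mirroring medical workflow 200.
WORKFLOW_STAGES = ["pre_procedure", "procedure", "post_procedure"]

def subsequent_stage(current_stage: str) -> Optional[str]:
    """Return the known stage that follows the current stage, or None if
    the current stage is the final stage of the workflow."""
    index = WORKFLOW_STAGES.index(current_stage)
    if index + 1 < len(WORKFLOW_STAGES):
        return WORKFLOW_STAGES[index + 1]
    return None

assert subsequent_stage("pre_procedure") == "procedure"
assert subsequent_stage("post_procedure") is None
```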
Medical stage ML model 320, including encoder 452 and classifier 456, may be trained using training data including images depicting various medical stages of a medical workflow. The images may be inputted to medical stage ML model 320, and medical stage ML model 320 may generate a prediction of what medical stages those images depict. The predictions may be compared to a ground truth, which may be used to compute a loss (e.g., a cross-entropy loss). The computed loss may be used to adjust one or more parameters (e.g., weights) of medical stage ML model 320. The process of inputting training data, generating a prediction, comparing the prediction to the ground truth, and computing the loss may be repeated (e.g., iterated) any number of times. Training of medical stage ML model 320 may end after a predefined number of training iterations have been performed and/or when medical stage ML model 320 produces predictions that meet or exceed a threshold level of accuracy.
Medical content presented to medical staff within a medical environment (e.g., medical environment 10 illustrated in
A report describing the medical workflow may be presented to a user (e.g., a surgeon, medical staff, etc.) via first client device 130-1, second client device 130-2, monitor 14, touchscreen 22, and/or another device. The report may include first medical content 510, second medical content 520, other content, and/or combinations thereof. The report may also include the contextual information, the identified stage, the subsequent stage, and/or other information. For example, a report may be generated that describes discrepancies found between the machine learning models' determinations (e.g., the contextual information) and the medical staff's findings.
As illustrated in
Blood loss information 572 may indicate an estimated amount of blood lost during the medical procedure. Blood loss information 572 may be determined based on contextual ML model 310 (shown in
Used sponge count information 574 may indicate a number of surgical sponges used during surgery. For example, contextual ML model 310 may analyze images of medical environment 10 to determine when a new surgical sponge is obtained, a used surgical sponge is disposed, etc. Additionally, contextual ML model 310 may analyze audio (e.g., captured by microphone 16 shown in
Safety score 576 may be or include a numerical value (e.g., ranging from 0 to 100) indicating a quantified safety of patient 12. Safety score 576 may be computed based on inputs from medical devices 120 and/or sensors within medical environment 10. For example, an entryway to medical environment 10 may include one or more sensors that detect when the entryway has been opened or when someone has passed through the entryway. The number of times that the entryway has been opened or someone has passed through the entryway may be used in determining safety score 576, where a number of entries less than a threshold number of entries may yield a higher safety score than a number of entries greater than the threshold number of entries. As another example, one or more temperature sensors disposed in medical environment 10 may be configured to monitor the temperature of medical environment 10. The monitored temperature may also be used in determining safety score 576, where a monitored temperature within a threshold temperature range may yield a higher safety score 576 than a monitored temperature outside of the threshold temperature range.
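By way of illustration only, one non-limiting way to combine the entryway and temperature signals into safety score 576 is sketched below; the thresholds, deductions, and weighting are hypothetical assumptions and not clinical guidance.

```python
def safety_score(entry_count: int, temperature_c: float,
                 entry_threshold: int = 20,
                 temp_range: tuple = (18.0, 23.0)) -> float:
    """Sketch of safety score 576 on a 0-100 scale: entries below the
    threshold and an in-range room temperature yield a higher score."""
    score = 100.0
    if entry_count > entry_threshold:
        # Deduct proportionally for excess entries, capped at 50 points.
        score -= min(50.0, 2.0 * (entry_count - entry_threshold))
    low, high = temp_range
    if not low <= temperature_c <= high:
        score -= 25.0  # flat deduction for an out-of-range temperature
    return max(score, 0.0)

print(safety_score(entry_count=25, temperature_c=21.0))  # 90.0
```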
Safety score 576 may be associated with effectiveness of cleaning of the medical environment, which can impact patient safety (e.g., by affecting infection risk to a patient). Safety score 576 may correspond with how effectively the medical environment 10 was cleaned according to predetermined rules. For example, the safety score 576 may be based on a proportion of the surfaces and/or objects in the medical environment 10 that were cleaned (such as determined by contextual ML model 310 of
Optionally, multiple different safety scores 576 may be determined. The different safety scores 576 may include different types of safety scores 576. For example, one safety score may be based on a number of entries into a medical environment and another safety score may be based on an effectiveness of cleaning of the medical environment. The different safety scores 576 may include the same type of safety scores 576 but for different medical stages. For example, one safety score may be associated with pre-operative cleaning and another safety score may be associated with post-operative cleaning. The different safety scores 576 may be displayed simultaneously or during different medical stages. For example, a safety score associated with a number of entries into the medical environment may be displayed during a medical procedure and a safety score associated with an effectiveness of cleaning may be displayed after completion of a medical procedure. Different safety scores can be provided simultaneously. For example, a post-operative report may include a safety score associated with a number of entries into the medical environment and a safety score associated with an effectiveness of pre-operative cleaning. A safety score 576 associated with effectiveness of cleaning can be used to improve the cleaning (e.g., cleaners can continue cleaning until the safety score 576 reaches a desired value) and/or can be used for post-cleaning assessment and/or correlation with patient outcomes.
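By way of example only, a cleaning-effectiveness safety score of the kind described above might be computed as sketched below; the surface names, coverage scaling, and order penalty are illustrative assumptions rather than prescribed rules.

```python
from typing import List, Set

def cleaning_score(required: Set[str], cleaned_in_order: List[str],
                   expected_order: List[str]) -> float:
    """Sketch of a cleaning-based safety score 576: the proportion of
    required surfaces detected as cleaned (scaled to 0-100), with a flat
    deduction if the detected order deviates from the predetermined order."""
    if not required:
        return 100.0
    coverage = len(set(cleaned_in_order) & required) / len(required)
    # Compare detected order with expected order over the common surfaces.
    detected = [s for s in cleaned_in_order if s in expected_order]
    expected = [s for s in expected_order if s in cleaned_in_order]
    order_penalty = 0.0 if detected == expected else 10.0
    return max(coverage * 100.0 - order_penalty, 0.0)

# Table top cleaned before the pedestal and floor, as expected: full score.
print(cleaning_score({"table_top", "pedestal", "floor"},
                     ["table_top", "pedestal", "floor"],
                     ["table_top", "pedestal", "floor"]))  # 100.0
```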
Procedure information 578 may indicate information associated with medical stages of the medical workflow. For example, procedure information 578 may indicate that medical workflow 200 of
Contextual information 512 may be presented using various devices within medical environment 10 (shown in
Returning to
Medical stage information 522 may include information associated with cleaning of the medical environment. For example, medical stage information 522 may include a checklist indicating the objects and/or surfaces to be cleaned and/or the status of cleaning of the objects and/or surfaces (e.g., whether a given object or surface has been cleaned).
The particular types of medical stage information 522, and to whom that medical stage information is to be presented, may be determined based on images (e.g., images 302) captured during the medical workflow (e.g., medical workflow 200 illustrated in
Returning to
As shown in
Medical workflow database 166 may include a data structure 650. Data structure 650 may store medical workflow information (e.g., WF1, WF2, and WF3) and reference medical workflow information (e.g., RWF1, RWF2, and RWF3). Medical workflow information WF1, WF2, WF3 may be associated with a first medical workflow, a second medical workflow, and a third medical workflow, respectively. For example, information related to different medical procedures may correspond with one or more different medical workflows. Each medical workflow may include one or more stages, which may be indicated by the respective medical workflow information. For example, medical workflow information WF1 may relate to medical workflow 200 of
For each medical workflow, data structure 650 may store reference medical workflow information RWF1, RWF2, RWF3 respectively corresponding to WF1, WF2, WF3. Reference medical workflow information RWF1, RWF2, RWF3 may indicate efficiency parameters. The efficiency parameters may include, for example, benchmark times to complete one or more stages of the medical workflow, a number of medical staff allocated for the medical workflow and/or for particular stages of the medical workflow, etc. The efficiency parameters of a medical workflow may be determined from data related to previously recorded performances of the medical workflow. For example, the amount of time allotted for a given medical workflow may be determined based on an average amount of time expended by one or more other surgeons in completing the given medical workflow.
Each workflow WF1, WF2, WF3 may indicate a total surgical time of a surgical procedure, an operating room turnaround time, staff efficiencies, a surgical procedure efficiency score, or other information gathered during use of the medical environment. The total surgical time may indicate the amount of time that a given surgical procedure takes to complete. The operating room turnaround time may indicate the amount of time that it takes the medical staff to clean and prepare the medical environment (e.g., an operating room) for a subsequent surgical procedure that may occur in the medical environment. The staff efficiencies may indicate the efficiency of some or all of the medical staff during the given surgical procedure. The surgical procedure efficiency score may indicate the overall efficiency of the current surgical procedure in comparison to a reference surgical procedure (e.g., based on the reference workflow information).
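By way of illustration only, data structure 650's pairing of observed workflow information with reference efficiency parameters could be organized as in the following non-limiting sketch; the field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class StageRecord:
    name: str
    duration_minutes: float  # observed time to complete the stage
    staff_count: int         # staff observed participating in the stage

@dataclass
class WorkflowRecord:
    """Observed medical workflow information (e.g., WF1)."""
    workflow_id: str
    stages: List[StageRecord] = field(default_factory=list)
    turnaround_minutes: float = 0.0  # operating room turnaround time

@dataclass
class ReferenceWorkflow:
    """Reference efficiency parameters (e.g., RWF1) derived from previously
    recorded performances of the same workflow."""
    workflow_id: str
    benchmark_minutes: Dict[str, float] = field(default_factory=dict)
    allocated_staff: Dict[str, int] = field(default_factory=dict)

# Data structure 650 as a mapping from workflow id to its paired records.
workflow_db: Dict[str, Tuple[WorkflowRecord, ReferenceWorkflow]] = {
    "WF1": (WorkflowRecord("WF1"), ReferenceWorkflow("WF1")),
}
```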
Content rendering subsystem 112 may be configured to determine which content to present based on the current stage, the subsequent stage (e.g., subsequent stage information 602), the preceding stage, or combinations thereof. For example, content rendering subsystem 112 may access a data structure 610 (e.g., stored in content database 168) and may identify row 604 in data structure 610 that corresponds to subsequent stage information 602 (e.g., “Stage 2”). Data structure 610 may store data indicating which content can be presented for a particular medical stage. Some stages may have a single content item to present (e.g., Stage 1 includes a video to be presented), whereas other stages may indicate multiple content items to be presented (e.g., Stage 2 includes two checklists and a video to be presented). In the instance multiple content items are to be presented, these content items may be presented on the same display or different displays. For example, some content items may be presented using monitor 14, while other content items may be presented using touchscreen monitor 22 of
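By way of example only, the lookup into data structure 610 might resemble the following non-limiting sketch; the content identifiers are hypothetical placeholders.

```python
from typing import Dict, List

# Sketch of data structure 610: each stage maps to its content items.
STAGE_CONTENT: Dict[str, List[str]] = {
    "Stage 1": ["video:room_preparation"],
    "Stage 2": ["checklist:instrument_count",
                "checklist:patient_positioning",
                "video:mixture_preparation"],
}

def content_for_stage(stage: str) -> List[str]:
    """Return the content items to present for a stage; multiple items may
    then be routed to the same display or to different displays."""
    return STAGE_CONTENT.get(stage, [])

print(content_for_stage("Stage 2"))
```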
As the medical staff proceeds through the different tasks associated with the different stages of the medical workflow, the content may dynamically update. For example, as seen with reference to
Audio message 706 may also indicate tasks and/or actions to be performed by a user during the subsequent medical stage. Audio message 706 describing the tasks and/or actions to be performed during the subsequent medical stage may be stored, for example, in content database 168 (illustrated at least in
As step 1 is being executed, images depicting the medical environment may continue to be received at computing system 102. For example, room cameras 146 shown in
Content rendering subsystem 112 may be configured to present updated medical content to reflect the next step. For example, with reference to
Content rendering subsystem 112 may also be configured to notify one or more medical staff of a task being performed out of order or before another task has been completed. For example, for a given medical stage, a first step associated with a current task and a second step associated with a subsequent task may be identified. The first and second steps may be identified from a plurality of steps associated with the given stage. If the second step is detected as being performed before the first step has been completed, content rendering subsystem 112 may present a notification to one or more of the medical staff.
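The out-of-order check described above might be sketched as follows, assuming the steps of a stage are known in order; the step names are hypothetical placeholders.

```python
# Hypothetical ordered steps for a given medical stage.
STAGE_STEPS = ["step_1", "step_2", "step_3"]

def out_of_order(detected_step: str, completed: set) -> bool:
    """True if any step ordered before detected_step is not yet complete."""
    preceding = STAGE_STEPS[: STAGE_STEPS.index(detected_step)]
    return any(step not in completed for step in preceding)

# step_3 detected while step_2 is still incomplete -> notify the staff.
if out_of_order("step_3", completed={"step_1"}):
    print("Notification: a preceding step has not been completed.")
```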
As mentioned above, content rendering subsystem 112 may be configured to present medical content using one or more display devices, such as display devices 802a-802c.
User groups 806a-806c may be located proximate display devices 802a-802c, respectively. User groups 806a-806c may be identified as being proximate to display devices 802a-802c based on contextual information (e.g., contextual information 410 described above).
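For illustration, the proximity between user groups and display devices might be resolved as below, assuming the contextual information yields planar positions for each group and display; the coordinates are hypothetical.

```python
import math

# Hypothetical positions derived from the contextual information.
displays = {"802a": (0.0, 0.0), "802b": (4.0, 0.0), "802c": (8.0, 0.0)}
groups = {"806a": (0.5, 1.0), "806b": (4.2, 0.8), "806c": (7.5, 1.5)}

def nearest_display(group_xy: tuple) -> str:
    """Return the display device closest to the given user group."""
    return min(displays, key=lambda d: math.dist(displays[d], group_xy))

assignments = {g: nearest_display(xy) for g, xy in groups.items()}
print(assignments)  # {'806a': '802a', '806b': '802b', '806c': '802c'}
```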
A given member's role in the medical workflow may be determined based on activities performed by that member. As mentioned above, one or more machine learning models may detect activities occurring within the medical environment, and the detected activities may be attributed to individual members of the medical staff.
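One way this role inference might look, assuming a lookup from detected activity labels to roles; the activity labels and role names are hypothetical.

```python
# Hypothetical mapping from detected activity labels to roles.
ACTIVITY_TO_ROLE = {
    "making_incision": "surgeon",
    "passing_instruments": "scrub_nurse",
    "adjusting_imaging_system": "technician",
}

def infer_role(detected_activities: list) -> str:
    """Return the role most frequently implied by a member's activities."""
    roles = [ACTIVITY_TO_ROLE[a] for a in detected_activities if a in ACTIVITY_TO_ROLE]
    return max(set(roles), key=roles.count) if roles else "unknown"

print(infer_role(["passing_instruments", "passing_instruments", "making_incision"]))
# scrub_nurse
```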
An efficiency score for the medical procedure (e.g., a surgical efficiency) may be computed based on detected activities. For example, the efficiency score may be computed on a medical procedure level, a medical staff level, a medical stage level, or at other granularities. An efficiency score may be calculated based on a total time that a given medical procedure takes to complete, an amount (e.g., percentage) of idleness detected within the medical environment, a comparison between the total time and an expected time for completion of the given surgical procedure, an amount of time taken to clean the medical environment, and/or the adherence of the cleaning to predetermined rules associated with cleaning efficiency. To evaluate the efficiency of a surgery, and therefore compute the efficiency score, a video recording of the medical procedure can be analyzed using one or more machine learning models (e.g., contextual ML model 310 and/or medical stage ML model 320 described above).
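As a minimal sketch of one such calculation, the score below combines the time comparison with the detected idleness; the formula and its weighting are hypothetical, not the described method.

```python
def efficiency_score(total_minutes: float,
                     expected_minutes: float,
                     idle_fraction: float) -> float:
    """Hypothetical score in [0, 1]: the ratio of expected to actual
    completion time, discounted by the fraction of detected idleness."""
    time_ratio = min(expected_minutes / total_minutes, 1.0)
    return time_ratio * (1.0 - idle_fraction)

# 150 minutes actual vs. 120 expected, with 10% idleness detected.
print(round(efficiency_score(150.0, 120.0, 0.1), 2))  # 0.72
```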
As an example, the report may include the identified stages of the medical workflow, the activities detected during each stage, and the computed efficiency scores.
In addition, the machine learning models may be configured to generate a video summary. The video summary may include an identification of the stages of the medical workflow. The video summary may be integrated with the medical report, which may enable a reviewer to easily navigate to a specific time during the medical workflow and watch a corresponding video summary (e.g., a video snippet). The video summary may instead or additionally be generated for each member of the medical staff, particularly corresponding to when each member was actively involved in the various stages of the medical workflow.
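For illustration, the stage-indexed video summary might be represented as below so a reviewer can jump to the snippet for a given stage; the field names and timestamps are hypothetical.

```python
# Hypothetical per-stage index into the recorded video.
video_summary = [
    {"stage": "Stage 1", "start_s": 0, "end_s": 540},
    {"stage": "Stage 2", "start_s": 540, "end_s": 2310},
]

def snippet_for(stage: str):
    """Return the video snippet corresponding to the given stage."""
    return next((s for s in video_summary if s["stage"] == stage), None)

print(snippet_for("Stage 2"))  # {'stage': 'Stage 2', 'start_s': 540, 'end_s': 2310}
```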
A report for the medical workflow may be generated describing the presented content, the contextual information, the identified stage, the subsequent stage, and/or other information. For example, a report may be generated that describes discrepancies found between the machine learning models' determinations (e.g., the contextual information) and the medical staff's findings. The report may include first medical content associated with the contextual information and second medical content associated with the subsequent stage of the medical workflow.
As described above, the efficiency score may be computed in a number of different ways, each of which may be aggregated together to obtain an overall efficiency score for the medical workflow. Alternatively, one or more of the individual efficiency scores may be used to describe the efficiency of a given medical workflow.
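A minimal sketch of such an aggregation, assuming a weighted average over the individual scores; the score categories and weights are hypothetical.

```python
def overall_score(scores: dict, weights: dict) -> float:
    """Hypothetical weighted average of the individual efficiency scores."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

scores = {"procedure": 0.80, "staff": 0.90, "cleaning": 0.70}
weights = {"procedure": 0.5, "staff": 0.3, "cleaning": 0.2}
print(round(overall_score(scores, weights), 3))  # 0.81
```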
In one example, the machine learning models implemented by computing system 102, room cameras 146, medical devices 120, and/or client devices 130 may compute an efficiency score based on a total amount of time taken to complete the medical workflow.
In another example, an efficiency score may be computed based on a comparison with reference medical workflow information. For example, the efficiency score may be compared with efficiency scores and/or other benchmark parameters stored in data structure 650 described above.
As another example, an efficiency score for the medical workflow may be computed based on an amount of time that one or more members of the medical staff were idle during the stages of the medical workflow. Based on images captured by room cameras 146, camera 152, and/or video camera 140, an amount of time that each member of the medical staff was idle may be determined, and the efficiency score may be computed based on the determined idle times.
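The idle-time component might be estimated as below, assuming the machine learning models emit a per-frame activity label for each staff member; the labels are hypothetical.

```python
def idle_fraction(frame_labels: list) -> float:
    """Fraction of frames in which a staff member was labeled idle."""
    if not frame_labels:
        return 0.0
    return frame_labels.count("idle") / len(frame_labels)

# Hypothetical per-frame labels for one staff member.
labels = ["idle", "idle", "prep_tray", "prep_tray", "idle", "suction"]
print(idle_fraction(labels))  # 0.5
```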
As described above, the efficiency score may be used to optimize aspects of the medical workflow. Optimizing the medical workflow may include: (i) notifying the team of an area in the medical environment prone to collision and optimizing the layout of objects in the medical environment based on the notification before surgery, (ii) providing training to medical staff, (iii) increasing the number of medical staff allocated for a medical procedure (e.g., in the instance the efficiency score determined for each of the currently allocated medical staff meets or exceeds a threshold efficiency score, such as greater than 75% efficiency, greater than 85% efficiency, greater than 90% efficiency, etc.), and/or (iv) adjusting a location of medical devices 120 or other equipment within medical environment 10.
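As a sketch of rule (iii) above, a staffing increase might be recommended only when every currently allocated member meets the threshold; the threshold value and staff identifiers are hypothetical.

```python
def recommend_staff_increase(staff_scores: dict, threshold: float = 0.85) -> bool:
    """True when each allocated member meets or exceeds the threshold."""
    return all(score >= threshold for score in staff_scores.values())

if recommend_staff_increase({"nurse_1": 0.92, "tech_1": 0.88}):
    print("All allocated staff meet the threshold; consider adding staff.")
```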
At step 904, contextual information associated with the medical workflow may be determined based on the received images. For example, contextual ML model 310 described above may be used to determine the contextual information from the received images.
At step 906, a stage of the medical workflow may be identified based on the images. One or more machine learning models may be used to identify the stage from the images. For example, medical stage ML model 320 described above may be used to identify the current stage of the medical workflow.
At step 908, a subsequent stage of the medical workflow may be determined based on the identified stage. For example, in the instance the identified stage is first medical stage 202, the next stage in the ordering of the medical workflow may be determined to be the subsequent stage.
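The subsequent-stage lookup at step 908 might be sketched as follows, assuming the medical workflow stores its stages in order; the stage labels are hypothetical.

```python
# Hypothetical ordered stages of the medical workflow.
WORKFLOW_STAGES = ["Stage 1", "Stage 2", "Stage 3"]

def subsequent_stage(identified: str):
    """Return the stage following the identified stage, or None at the end."""
    i = WORKFLOW_STAGES.index(identified)
    return WORKFLOW_STAGES[i + 1] if i + 1 < len(WORKFLOW_STAGES) else None

print(subsequent_stage("Stage 1"))  # Stage 2
```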
At step 910, first medical content associated with the contextual information may be retrieved. The first medical content may be stored in content database 168.
For example, first medical content 510 may be retrieved from content database 168 based on the contextual information.
At step 912, second medical content associated with the subsequent medical stage may be retrieved. The second medical content may be stored in content database 168.
At step 914, the first medical content and the second medical content may be presented. The first medical content and the second medical content may be presented via the same display device or using different display devices. For example, the first medical content may be presented using one display device while the second medical content is presented using another display device.
At step 1004, one or more adjustments to resource allocations may be determined based on the medical workflow efficiency score. For example, reference workflow information RWF1 of data structure 650 may indicate a benchmark number of medical staff to be allocated for the medical workflow, and the one or more adjustments may be determined based on a comparison of the current allocation with the benchmark.
At step 1006, the resource allocations may be updated based on the determined adjustments. For example, the number of medical staff allocated for the medical workflow may be adjusted from the current number (e.g., five medical staff) to an updated number (e.g., three medical staff).
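Steps 1004-1006 might be sketched as below, assuming the adjustment simply moves the current allocation toward the benchmark when the efficiency score meets a target; the target value is hypothetical.

```python
def adjust_allocation(current_staff: int,
                      reference_staff: int,
                      score: float,
                      target: float = 0.85) -> int:
    """Return the updated staff allocation for the medical workflow."""
    if score >= target and current_staff > reference_staff:
        return reference_staff  # reduce toward the benchmark allocation
    return current_staff

print(adjust_allocation(current_staff=5, reference_staff=3, score=0.9))  # 3
```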
Computing system 1100 may be used for performing any of the methods described herein, including processes 900 and 1000 described above. Computing system 1100 may include, for example, processor 1110, input device 1120, output device 1130, storage 1140, software 1150, and communication device 1160.
Input device 1120 can be any suitable device that provides input, such as a touch screen, keyboard or keypad, mouse, gesture recognition component of a virtual/augmented reality system, or voice recognition device. Output device 1130 can be or include any suitable device that provides output, such as a touch screen, haptics device, virtual/augmented reality display, or speaker.
Storage 1140 can be any suitable device that provides storage, such as an electrical, magnetic, or optical memory including a RAM, cache, hard drive, removable storage disk, or other non-transitory computer readable medium. Communication device 1160 can include any suitable device capable of transmitting and receiving signals over a network, such as a network interface chip or device. The components of the computer can be coupled in any suitable manner, such as via a physical bus or wirelessly.
Software 1150, which can be stored in storage 1140 and executed by processor 1110, can include, for example, the programming that embodies the functionality of the present disclosure (e.g., as embodied in the devices as described above). For example, software 1150 can include one or more programs for performing one or more of the steps of the methods disclosed herein.
Software 1150 can also be stored and/or transported within any non-transitory computer-readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a computer-readable storage medium can be any medium, such as storage 1140, that can contain or store programming for use by or in connection with an instruction execution system, apparatus, or device.
Software 1150 can also be propagated within any transport medium for use by or in connection with an instruction execution system, apparatus, or device, such as those described above, that can fetch instructions associated with the software from the instruction execution system, apparatus, or device and execute the instructions. In the context of this disclosure, a transport medium can be any medium that can communicate, propagate or transport programming for use by or in connection with an instruction execution system, apparatus, or device. The transport medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, or infrared wired or wireless propagation medium.
Computing system 1100 may be coupled to a network, which can be any suitable type of interconnected communication system. The network can implement any suitable communications protocol and can be secured by any suitable security protocol. The network can comprise network links of any suitable arrangement that can implement the transmission and reception of network signals, such as wireless network connections, T1 or T3 lines, cable networks, DSL, or telephone lines.
Computing system 1100 can implement any operating system suitable for operating on the network. Software 1150 can be written in any suitable programming language, such as C, C++, C#, Java, or Python. In various examples, application software embodying the functionality of the present disclosure can be deployed in different configurations, such as in a client/server arrangement or through a Web browser as a Web-based application or Web service, for example.
As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). The words “include”, “including”, and “includes” and the like mean including, but not limited to. As used throughout this application, the singular forms “a,” “an,” and “the” include plural referents unless the content explicitly indicates otherwise. Thus, for example, reference to “an element” or “a element” includes a combination of two or more elements, notwithstanding use of other terms and phrases for one or more elements, such as “one or more.” The term “or” is, unless indicated otherwise, non-exclusive, i.e., encompassing both “and” and “or.” Terms describing conditional relationships, e.g., “in response to X, Y,” “upon X, Y,” “if X, Y,” “when X, Y,” and the like, encompass causal relationships in which the antecedent is a necessary causal condition, the antecedent is a sufficient causal condition, or the antecedent is a contributory causal condition of the consequent, e.g., “state X occurs upon condition Y obtaining” is generic to “X occurs solely upon Y” and “X occurs upon Y and Z.” Such conditional relationships are not limited to consequences that instantly follow the antecedent obtaining, as some consequences may be delayed, and in conditional statements, antecedents are connected to their consequents, e.g., the antecedent is relevant to the likelihood of the consequent occurring. Statements in which a plurality of attributes or functions are mapped to a plurality of objects (e.g., one or more processors performing steps A, B, C, and D) encompasses both all such attributes or functions being mapped to all such objects and subsets of the attributes or functions being mapped to subsets of the attributes or functions (e.g., both all processors each performing steps A-D, and a case in which processor 1 performs step A, processor 2 performs step B and part of step C, and processor 3 performs part of step C and step D), unless otherwise indicated. Further, unless otherwise indicated, statements that one value or action is “based on” another condition or value encompass both instances in which the condition or value is the sole factor and instances in which the condition or value is one factor among a plurality of factors. Unless otherwise indicated, statements that “each” instance of some collection have some property should not be read to exclude cases where some otherwise identical or similar members of a larger collection do not have the property, i.e., each does not necessarily mean each and every. Limitations as to a sequence of recited steps should not be read into the claims unless explicitly specified, e.g., with explicit language like “after performing X, performing Y,” in contrast to statements that might be improperly argued to imply sequence limitations, like “performing X on items, performing Y on the X'ed items,” used for purposes of making claims more readable rather than specifying sequence. Statements referring to “at least Z of A, B, and C,” and the like (e.g., “at least Z of A, B, or C”), refer to at least Z of the listed categories (A, B, and C) and do not require at least Z units in each category. 
Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining” or the like refer to actions or processes of a specific apparatus, such as a special purpose computer or a similar special purpose electronic processing/computing device. Features described with reference to geometric constructs, like “parallel,” “perpendicular/orthogonal,” “square”, “cylindrical,” and the like, should be construed as encompassing items that substantially embody the properties of the geometric construct, e.g., reference to “parallel” surfaces encompasses substantially parallel surfaces. The permitted range of deviation from Platonic ideals of these geometric constructs is to be determined with reference to ranges in the specification, and where such ranges are not stated, with reference to industry norms in the field of use, and where such ranges are not defined, with reference to industry norms in the field of manufacturing of the designated feature, and where such ranges are not defined, features substantially embodying a geometric construct should be construed to include those features within 15% of the defining attributes of that geometric construct. The terms “first”, “second”, “third,” “given” and so on, if used in the claims, are used to distinguish or otherwise identify, and not to show a sequential or numerical limitation. As is the case in ordinary usage in the field, data structures and formats described with reference to uses salient to a human need not be presented in a human-intelligible format to constitute the described data structure or format, e.g., text need not be rendered or even encoded in Unicode or ASCII to constitute text; images, maps, and data-visualizations need not be displayed or decoded to constitute images, maps, and data-visualizations, respectively; speech, music, and other audio need not be emitted through a speaker or decoded to constitute speech, music, or other audio, respectively.
The foregoing description, for the purpose of explanation, has been described with reference to specific aspects. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The aspects were chosen and described in order to best explain the principles of the techniques and their practical applications. Others skilled in the art are thereby enabled to best utilize the techniques and various aspects with various modifications as are suited to the particular use contemplated.
Although the disclosure and examples have been fully described with reference to the accompanying figures, it is to be noted that various changes and modifications will become apparent to those skilled in the art. Such changes and modifications are to be understood as being included within the scope of the disclosure and examples as defined by the claims. Finally, the entire disclosure of the patents and publications referred to in this application are hereby incorporated herein by reference.
This application claims the benefit of U.S. Provisional Application No. 63/477,371, filed Dec. 27, 2022, the entire contents of which are hereby incorporated by reference herein.