The present disclosure relates generally to the field of 3-dimensional (3D) modeling, and more specifically to systems and methods for modeling a human respiratory tract.
Accurate 3D models of the human respiratory system (e.g., a respiratory tract, etc.) may be useful in many scenarios. For example, accurate 3D models of the human respiratory system may be used (i) to generate particle deposition profiles, (ii) to analyze drug delivery and inhalation therapies (e.g., in internal radiation dosimetry, in respiratory device design, etc.), (iii) to determine environmental health impacts (e.g., due to inhaled pollutants, etc.), and/or (iv) to determine particle behavior in an airway under varying conditions (e.g., in aerospace/high-altitude contexts, etc.).
An implementation of the present disclosure is a method for modeling a human respiratory tract. The method includes receiving a number of images, wherein each of the number of images includes at least a portion of a human respiratory tract, generating, based on the number of images, a 3D model of at least a portion of the human respiratory tract, wherein the 3D model includes an uncapped endpoint of the human respiratory tract, and automatically capping the endpoint within the 3D model by identifying, using a computer-implemented algorithm, the endpoint of the human respiratory tract within the 3D model, generating a 3D model of a shape, aligning the 3D model of the shape with the endpoint by positioning a center of the 3D model of the shape at a centroid corresponding to the endpoint, and merging the 3D model of the shape with the 3D model.
In some embodiments, the number of images include a computed tomography (CT) scan or a magnetic resonance imaging (MRI) image. In some embodiments, generating the 3D model includes generating a first 3D model of an upper respiratory tract comprising a nasal cavity and an oral cavity, generating a second 3D model of a lower respiratory tract comprising a trachea, and merging the first 3D model and the second 3D model to form the 3D model. In some embodiments, the second 3D model of the lower respiratory tract includes a fifth generation of bronchi. In some embodiments, generating the 3D model includes (i) executing a neural network trained on a number of volumetric chest CT scans or (ii) executing morphological-based algorithms. In some embodiments, the method further includes parameterizing the 3D model to identify at least two of (i) a length of a trachea, (ii) an average diameter of the trachea, (iii) a G0-to-G1 branching angle, or (iv) a volume of the 3D model. In some embodiments, the method includes modifying a mesh of the 3D model to increase or decrease a number of polygons in the mesh and applying a smoothing filter to the mesh. In some embodiments, identifying the endpoint of the human respiratory tract within the 3D model includes identifying, based on the 3D model, a centerline associated with a structure of the human respiratory tract and identifying an endpoint of the centerline. In some embodiments, identifying the endpoint of the centerline includes identifying a first endpoint of the centerline, identifying a second endpoint of the centerline, determining a first cross-sectional area associated with the structure of the human respiratory tract within the 3D model at the first endpoint, determining a second cross-sectional area associated with the structure of the human respiratory tract within the 3D model at the second endpoint, and labeling the first endpoint or the second endpoint based on comparing the first cross-sectional area to the second cross-sectional area. In some embodiments, automatically capping the endpoint within the 3D model includes producing a sealed 3D volume representing at least the portion of the human respiratory tract.
Another implementation of the present disclosure is a system for modeling a human respiratory tract. The system includes a processing circuit comprising a processor and memory, the memory having instructions stored thereon that, when executed by the processor, cause the processing circuit to receive a number of images, wherein each of the number of images includes at least a portion of a human respiratory tract, generate, based on the number of images, a 3D model of at least a portion of the human respiratory tract, automatically identify, using a computer-implemented algorithm, an endpoint of the human respiratory tract within the 3D model, and automatically cap the endpoint within the 3D model.
In some embodiments, the number of images include a computed tomography (CT) scan or a magnetic resonance imaging (MRI) image. In some embodiments, generating the 3D model includes generating a first 3D model of an upper respiratory tract comprising a nasal cavity and an oral cavity, generating a second 3D model of a lower respiratory tract comprising a trachea, and merging the first 3D model and the second 3D model to form the 3D model. In some embodiments, the second 3D model of the lower respiratory tract includes a fifth generation of bronchi. In some embodiments, generating the 3D model includes (i) executing a neural network trained on a number of volumetric chest CT scans or (ii) executing morphological-based algorithms. In some embodiments, the instructions further cause the processing circuit to parameterize the 3D model to identify at least two of (i) a length of a trachea, (ii) an average diameter of the trachea, (iii) a G0-to-G1 branching angle, or (iv) a volume of the 3D model. In some embodiments, the instructions further cause the processing circuit to modify a mesh of the 3D model to increase a number of polygons in the mesh and apply a smoothing filter to the mesh. In some embodiments, automatically identifying the endpoint of the human respiratory tract within the 3D model includes identifying, based on the 3D model, a centerline associated with a structure of the human respiratory tract, identifying a first endpoint of the centerline, identifying a second endpoint of the centerline, determining a first cross-sectional area associated with the structure of the human respiratory tract within the 3D model at the first endpoint, determining a second cross-sectional area associated with the structure of the human respiratory tract within the 3D model at the second endpoint, and labeling the first endpoint or the second endpoint based on comparing the first cross-sectional area to the second cross-sectional area. In some embodiments, automatically capping the endpoint includes generating a 3D model of a shape, aligning the 3D model of the shape with the endpoint by positioning a center of the 3D model of the shape at a centroid corresponding to the endpoint, and merging the 3D model of the shape with the 3D model to produce a sealed 3D volume representing at least the portion of the human respiratory tract.
Another implementation of the present disclosure is a non-transitory computer-readable storage medium having instructions stored thereon that, when executed by a processor, cause the processor to receive a number of images comprising a computed tomography (CT) scan or a magnetic resonance imaging (MRI) image, wherein each of the number of images includes at least a portion of a human respiratory tract, generate, based on the number of images, a 3D model of at least a portion of the human respiratory tract, wherein the 3D model includes an uncapped endpoint of the human respiratory tract, and automatically cap the endpoint within the 3D model by identifying, using a computer-implemented algorithm, the endpoint of the human respiratory tract within the 3D model by identifying, based on the 3D model, a centerline associated with a structure of the human respiratory tract using a Fast Marching Method (FMM) and identifying an endpoint of the centerline as the endpoint of the human respiratory tract, generating a 3D model of a shape, aligning the 3D model of the shape with the endpoint of the human respiratory tract by positioning a center of the 3D model of the shape at a centroid corresponding to the endpoint of the human respiratory tract, and merging the 3D model of the shape with the 3D model to produce a sealed 3D volume representing at least the portion of the human respiratory tract.
The above and other aspects and features of the present disclosure will become more apparent to those skilled in the art from the following detailed description of the example embodiments with reference to the accompanying drawings.
Referring generally to the FIGURES, described herein are systems and methods for modeling a human respiratory tract. Human respiratory tracts may vary greatly. Because of this variability, there are many scenarios where a “generic” 3D model of a human respiratory tract (e.g., a 3D model of a human respiratory tract that is not specific to a person/application it is being used for, etc.) may be poorly suited. For example, a drug dosage for an inhaled drug may vary greatly depending on the specific characteristics of an individual's respiratory tract. Therefore, it may be useful to create custom 3D models of a human respiratory tract to facilitate determining drug dosing. However, conventional systems and methods for creating 3D models of a human respiratory tract may require substantial human input, thereby making the use of custom 3D models of a human respiratory tract infeasible. For example, creating a custom 3D model of a human respiratory tract may require a person skilled in 3D modeling software to manually generate a 3D model based on images of a human respiratory tract. Manually generating a 3D model based on images may be time consuming, expensive, and/or prone to error (e.g., inaccurate modeling due to human error in the modeling process, etc.). Therefore, use of custom 3D models of a human respiratory tract may be infeasible in many scenarios. Moreover, there are features of the human respiratory tract that may be difficult (e.g., time consuming, require high levels of 3D modeling skill, etc.) to 3D model. For example, a 3D model of a human respiratory tract may have a number of outlets (e.g., openings, etc.) that must be capped in order to use the 3D model (e.g., thereby creating a sealed 3D volume). A 3D model of a human respiratory tract may have 2^n + I such outlets, where n is the number of bronchi generations and I varies based on the inclusion/exclusion of upper respiratory cavities (e.g., nasal cavities, oral cavities, etc.). For example, a 3D model of a human respiratory tract including the 12th generation of bronchi may include over 4,000 outlets (e.g., openings, etc.) that must be manually capped by a skilled 3D modeler with advanced knowledge of respiratory anatomy in order to make the 3D model usable (e.g., for computational fluid and particle dynamics (CFPD) simulations, etc.). These challenges may be multiplied at scale. For example, during drug development/drug trials it may be useful to generate a number of custom 3D models of the human respiratory tract corresponding to different patient populations (e.g., young patients, old patients, overweight patients, chronic smokers, scuba divers, etc.) having respiratory tracts with different characteristics (e.g., trachea diameter, lung volume, bronchi length, bronchi branching angle, etc.). However, generating a number of custom 3D models of the human respiratory system may be prohibitively time consuming and/or expensive.
Therefore, there is a need for systems and methods to generate custom 3D models of a human respiratory tract without requiring a skilled 3D modeler. Systems and methods of the present disclosure may facilitate automatically generating custom 3D models of human respiratory tracts based on images such as computed tomography (CT) scan images. Systems and methods of the present disclosure may improve upon conventional systems and methods by automating the process of generating custom 3D models of the human respiratory tract, thereby saving time and money and reducing modeling error. For example, systems and methods of the present disclosure may facilitate automatically identifying and capping endpoints within a 3D model of a human respiratory tract to produce a sealed 3D volume that is suitable for various applications.
Referring now to
At step 102, method 100 may include generating 3D model 120 based on images 110. 3D model 120 may include at least a portion of a human respiratory tract (e.g., shown as structure 122). For example, 3D model 120 may be and/or include a 3D model of an upper respiratory tract. As another example, 3D model 120 may be and/or include a 3D model of a lower respiratory tract. It should be understood that while
In some embodiments, step 102 includes one or more pre-processing and/or post-processing steps. For example, step 102 may include applying a filter to images 110 (e.g., a smoothing filter, a threshold filter, a contrast filter, etc.), performing an operation on images 110 (e.g., a binary dilation operation, a binary erosion operation, Laplacian digging, etc.), and/or removing a background from images 110. As another example, step 102 may include modifying 3D model 120 to increase a number/density of polygons within 3D model 120 and/or applying a filter to 3D model 120 (e.g., applying a Taubin filter to smooth the geometry of 3D model 120 while retaining finer bronchial structures, etc.). In various embodiments, a smoothing filter is applied in a manner that preserves the geometrical structures of a mesh of 3D model 120. Pre-processing and/or post-processing may be important for ensuring that 3D model 120 is usable in CFPD simulations. For example, smoothing 3D model 120 may facilitate removing artifacts (e.g., such as sharp wall edges, etc.) that may lead to numerical inaccuracies during CFPD simulation. As another example, remeshing (e.g., modifying 3D model 120 to increase/decrease a number of polygons, etc.) may prevent geometric degradation of 3D model 120 that may result from a smoothing filter, thereby increasing an accuracy of 3D model 120 (e.g., especially in relation to small bronchial structures that may be greatly affected by a smoothing filter, etc.). In various embodiments, remeshing improves a quality of the polygons within the mesh. In some embodiments, step 102 includes merging two or more 3D models to generate 3D model 120. For example, step 102 may include generating a first 3D model of an upper respiratory tract, generating a second 3D model of a lower respiratory tract, and combining the first 3D model and the second 3D model to generate 3D model 120. Generating 3D model 120 is discussed in greater detail below with reference to
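By way of non-limiting illustration, the remeshing and smoothing post-processing described above may be sketched as follows using the open-source trimesh library; the library choice, file names, and filter parameters (e.g., the Taubin lamb/nu values and iteration count) are illustrative assumptions and are not required by the present disclosure.

```python
import trimesh
from trimesh.smoothing import filter_taubin

# Load the extracted airway surface (file name is illustrative).
mesh = trimesh.load("airway_surface.stl")

# Remesh: one round of subdivision increases polygon density so that small
# bronchial structures are better preserved by the subsequent smoothing pass.
mesh = mesh.subdivide()

# Taubin smoothing removes sharp wall artifacts (which can cause numerical
# inaccuracies in CFPD simulation) with little volume shrinkage.
filter_taubin(mesh, lamb=0.5, nu=0.53, iterations=10)

mesh.export("airway_surface_smoothed.stl")
```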
At step 104, method 100 may include identifying endpoint 124. Endpoint 124 may correspond to an endpoint within 3D model 120 (e.g., an inlet/outlet of a human respiratory tract represented by 3D model 120, etc.). For example, endpoint 124 may be an endpoint of a bronchus. In some embodiments, step 104 includes identifying a number of endpoints. In various embodiments, step 104 includes automatically identifying endpoint 124 by identifying a centerline (e.g., a representation of a central flow path within the airway geometry, a center of the tubular structure, etc.) associated with structure 122 and identifying an endpoint of the centerline as endpoint 124. For example, step 104 may include computing a minimal cost path through a lumen of structure 122 to determine the centerline associated with structure 122. In some embodiments, step 104 includes using Dijkstra's algorithm. In some embodiments, step 104 includes generating a Voronoi Diagram and identifying the centerline based on the Voronoi Diagram. Additionally or alternatively, step 104 may include performing the Fast Marching Method (FMM). Identifying endpoint 124 is discussed in greater detail below with reference to
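As one possible realization of the minimal-cost-path approach described above, the following sketch uses SciPy and scikit-image to trace a Dijkstra-style path through the lumen of a binary airway mask; the function name, seed voxels, and cost weighting are illustrative assumptions, and the FMM or Voronoi variants mentioned above could be substituted.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.graph import route_through_array

def extract_centerline(airway_mask, inlet_voxel, outlet_voxel):
    """Trace a centerline between two endpoint voxels of a binary airway mask."""
    # Distance to the airway wall: large values lie near the center of the lumen.
    dist = distance_transform_edt(airway_mask)

    # Low cost near the lumen center, large penalty outside the airway, so the
    # minimal-cost path stays on (or near) the centerline.
    cost = np.where(airway_mask, 1.0 / (dist + 1e-6), 1e6)

    path, _ = route_through_array(cost, start=inlet_voxel, end=outlet_voxel,
                                  fully_connected=True, geometric=True)
    return path  # list of voxel indices along the centerline
```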
At step 106, method 100 may include capping endpoint 124. In various embodiments, step 106 includes generating a cap (e.g., a 3D model of a shape, etc.), positioning/orienting the cap at endpoint 124, and/or merging the cap with 3D model 120. In various embodiments, the result of step 106 is a sealed 3D volume usable for CFPD simulations. Capping endpoint 124 is discussed in greater detail below with reference to
In various embodiments, steps 104-106 are repeated for every inlet/outlet of 3D model 120. For example, method 100 may include identifying and capping every endpoint (e.g., inlet/outlet) corresponding to structure 122 such that method 100 produces a sealed 3D volume usable in CFPD simulations. In various embodiments, method 100 results in a sealed 3D volume that is usable to generate patient-specific particle deposition profiles. In various embodiments, method 100 may be performed partially or wholly by a computer. For example, method 100 may be implemented on a computer and may generate a sealed 3D volume with little to no human intervention.
Referring now to
At step 202, method 200 may include generating first 3D model 230 based on first image 210. Step 202 may include executing a computer-implemented algorithm such as NASAL-Geom. In various embodiments, step 202 includes generating first 3D model 230 such that it includes specific anatomical structures relevant for CFPD simulation. For example, first 3D model 230 may include nasal and oral cavities and the sinuses. Additionally or alternatively, step 202 may include generating first 3D model 230 such that it excludes specific structures (e.g., disconnected/small-volume bodies, etc.).
At step 204, method 200 may include generating second 3D model 240 based on second image 220. Step 204 may include executing a neural network trained on a dataset including medical imaging data. For example, step 204 may include generating second 3D model 240 by using second image 220 as an input for a 3D U-Net Segmentation neural network trained on CT scan images from the EXACT'09 dataset. In some embodiments, step 204 includes parameterizing second 3D model 240. For example, step 204 may include determining (i) a length of a trachea, (ii) an average diameter of a trachea, (iii) a G0-to-G1 branching angle, and/or (iv) an internal volume of second 3D model 240. Second 3D model 240 may have various resolutions based on a resolution of second image 220. For example, second 3D model 240 may include generation zero through generation 23 of the human respiratory tract. As another example, second 3D model 240 may include generation zero (G0) through generation 16 (G16) of the human respiratory tract. Generation zero may correspond to the trachea, generation one may correspond to the primary bronchi, generation two may correspond to the secondary bronchi, generation three may correspond to the tertiary bronchi, generation four may correspond to the small bronchi, generation five may correspond to the bronchioles, generations six-sixteen may correspond to the terminal bronchioles, generations 17-19 may correspond to the respiratory bronchioles, and/or generation 23 may correspond to the alveolar sacs. In various embodiments, steps 202-204 include one or more pre/post-processing steps (e.g., remeshing, filtering, etc.).
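For illustration, the neural-network segmentation of step 204 might be sketched as follows, assuming the open-source MONAI library's 3D U-Net implementation; the channel sizes, inference window, and weights file are illustrative assumptions, and the disclosure does not mandate any particular framework or training dataset.

```python
import torch
from monai.networks.nets import UNet
from monai.inferers import sliding_window_inference

# Network hyperparameters are illustrative; the weights file is a placeholder for a
# model trained on volumetric chest CT scans (e.g., EXACT'09-style data).
model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64, 128, 256), strides=(2, 2, 2, 2), num_res_units=2)
model.load_state_dict(torch.load("airway_unet_weights.pt"))
model.eval()

def segment_airway(ct_volume: torch.Tensor) -> torch.Tensor:
    """ct_volume: intensity-normalized tensor of shape (1, 1, D, H, W)."""
    with torch.no_grad():
        logits = sliding_window_inference(ct_volume, roi_size=(96, 96, 96),
                                          sw_batch_size=1, predictor=model)
    # Channel 1 is the airway class; argmax yields a binary airway mask.
    return logits.argmax(dim=1, keepdim=True)
```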
At step 206, method 200 may include merging first 3D model 230 and second 3D model 240 to form combined 3D model 250. In various embodiments, step 206 includes merging first 3D model 230 and second 3D model 240 at an overlapping point. For example, first 3D model 230 may include a portion of the trachea and second 3D model 240 may include the same portion of the trachea (e.g., an overlapping portion of the trachea), and step 206 may include merging first 3D model 230 and second 3D model 240 at the overlapping portion of the trachea. In some embodiments, step 206 includes (i) determining a first tangent vector corresponding to an endpoint in first 3D model 230 (e.g., where the first tangent vector represents a direction of airflow at that point, tangent to a centerline of first 3D model 230 at the endpoint, etc.), (ii) determining a second tangent vector corresponding to an endpoint in second 3D model 240, (iii) positioning first 3D model 230 and second 3D model 240 such that the first tangent vector is the opposite of the second tangent vector and/or the endpoint in first 3D model 230 is positioned at the same point as the endpoint in second 3D model 240, and/or (iv) executing a Boolean difference operation to merge a mesh structure of first 3D model 230 with a mesh structure of second 3D model 240. In various embodiments, combined 3D model 250 is a sealed 3D volume (e.g., a watertight model) usable in CFPD simulations. For example, combined 3D model 250 may be used to generate a CFPD deposition profile using OpenFOAM and/or StarCCM+. A 3D model may be watertight if its boundary forms a closed, continuous, and orientable 2-manifold, where every edge is shared by exactly two adjacent faces, and no holes or open edges exist on the surface.
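A minimal sketch of the tangent-vector alignment and merge of step 206 is shown below, assuming the trimesh library; the endpoint coordinates and tangent vectors are assumed to come from a centerline computation, and a Boolean union is used here as a stand-in for the Boolean mesh operation described above.

```python
import numpy as np
import trimesh

def merge_tracts(upper, lower, p_upper, t_upper, p_lower, t_lower):
    """Merge upper- and lower-tract meshes at an overlapping tracheal endpoint.

    p_* are endpoint coordinates; t_* are centerline tangent vectors at those
    endpoints (both assumed inputs from an earlier centerline step).
    """
    t_upper = np.asarray(t_upper, dtype=float) / np.linalg.norm(t_upper)
    t_lower = np.asarray(t_lower, dtype=float) / np.linalg.norm(t_lower)

    # Rotate the lower model so its endpoint tangent opposes the upper model's tangent.
    rotation = trimesh.geometry.align_vectors(t_lower, -t_upper)
    lower.apply_transform(rotation)

    # Translate the rotated lower endpoint onto the upper endpoint.
    p_lower_rot = rotation[:3, :3] @ np.asarray(p_lower, dtype=float) + rotation[:3, 3]
    lower.apply_translation(np.asarray(p_upper, dtype=float) - p_lower_rot)

    # Boolean merge of the two mesh structures (requires a Boolean backend, e.g., Blender).
    merged = upper.union(lower)
    return merged  # merged.is_watertight checks the closed-manifold criterion above
```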
Referring now to
Referring now to
3D model 402 may be and/or include a 3D model of at least a portion of a human respiratory tract. In various embodiments, step 420 includes generating a distance map using a Fast-Marching Method and/or using Dijkstra's algorithm to extract centerline 404 from the distance map based on minimal cost paths. In various embodiments, centerline 404 is represented as one or more segments. For example, centerline 404 may be represented by 30 segments where each segment is a straight line. Additionally or alternatively, centerline 404 may be represented as a continuous line. For example, a first centerline may be continuous and a second centerline may be segmented. In various embodiments, each segment of centerline 404 includes two ends. Segmenting (i.e., determining the segments that make up centerline 404) may occur during generation of the 3D model (e.g., in step 102, etc.). Additionally or alternatively, segmenting may occur during identification of centerline 404.
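By way of example, the FMM distance map described above may be computed with the open-source scikit-fmm package (an assumption; any Fast Marching implementation would serve), after which a minimal-cost-path extraction such as the earlier sketch can trace centerline 404 from the map.

```python
import numpy as np
import skfmm

def fmm_distance_map(airway_mask):
    """Signed distance from the airway wall computed by the Fast Marching Method."""
    # The zero level set of phi is the airway wall: positive inside the lumen,
    # negative outside, so the returned map peaks along the centerline.
    phi = np.where(airway_mask, 1.0, -1.0)
    return skfmm.distance(phi, dx=1.0)
```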
At step 430, method 400 may include identifying an endpoint of centerline 404. For example, step 430 may include identifying first endpoint 406 and/or second endpoint 408. First endpoint 406 and/or second endpoint 408 may correspond to an inlet or an outlet of an airway represented by 3D model 402. In various embodiments, step 430 includes identifying an end of a segment of centerline 404 that is not connected to another segment. For example, step 430 may include identifying a segment of centerline 404 that is only connected to one other segment and then identifying an end of the identified segment that is not connected to another segment. At step 440, method 400 may include labeling the endpoint (e.g., first endpoint 406 and/or second endpoint 408). For example, step 440 may include labeling first endpoint 406 as an outlet. In various embodiments, step 440 includes (i) determining first cross-sectional area 410 of an airway structure of 3D model 402 at first endpoint 406 (shown as step 442), (ii) determining second cross-sectional area 412 of an airway structure of 3D model 402 at second endpoint 408 (shown as step 444), (iii) comparing first cross-sectional area 410 to second cross-sectional area 412 (shown as step 446), and/or (iv) labeling first endpoint 406 and/or second endpoint 408 based on the comparison. For example, step 440 may include labeling the endpoint associated with the smaller cross-sectional area as an outlet. In some embodiments, step 440 includes comparing a number of cross-sectional areas (e.g., comparing the cross-sectional area associated with every endpoint associated with a 3D model of a human respiratory tract, etc.). For example, step 440 may include comparing a cross-sectional area associated with every endpoint and labeling the endpoint associated with the largest cross-sectional area as an inlet (e.g., trachea) and all the other endpoints as outlets (e.g., terminal bronchioles). In some embodiments, step 440 includes slicing a geometry of 3D model 402 perpendicular to centerline 404 to produce a cross-sectional plane (e.g., slice) and computing an area of the slice.
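For illustration, the slice-and-compare labeling of step 440 might be sketched as follows, again assuming the trimesh library for the perpendicular slicing; the endpoint positions and tangents are assumed inputs from the centerline step, and isolating the slice polygon nearest each endpoint is omitted for brevity.

```python
import numpy as np
import trimesh

def label_endpoints(mesh, endpoints):
    """endpoints: list of (point, tangent) pairs taken at the ends of the centerline."""
    areas = []
    for point, tangent in endpoints:
        # Slice the airway geometry perpendicular to the centerline at the endpoint.
        section = mesh.section(plane_origin=point, plane_normal=tangent)
        if section is None:
            areas.append(0.0)
            continue
        planar, _ = section.to_planar()
        areas.append(planar.area)  # cross-sectional area of the airway at this endpoint

    # Largest opening is treated as the inlet (trachea); all others as outlets.
    labels = ["outlet"] * len(endpoints)
    labels[int(np.argmax(areas))] = "inlet"
    return labels, areas
```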
Referring now to
At step 570, method 500 may include aligning cap 520 to 3D model 510. 3D model 510 may be and/or include a 3D model of at least a portion of a human respiratory tract. In various embodiments, there may be a centerline (shown as centerline 512) associated with a structure of 3D model 510. For example, centerline 512 may be a centerline of an airway such as a bronchus represented by 3D model 510. Centerline 512 may have endpoint 514 and tangent vector 518. Tangent vector 518 may represent a tangent of centerline 512 at endpoint 514. In various embodiments, tangent vector 518 represents the direction of airflow within the structure at endpoint 514. In various embodiments, step 570 includes positioning center 526 at a center (shown as center 516) of a cross-sectional plane of the structure of 3D model 510 taken at endpoint 514. Additionally or alternatively, step 570 may include aligning normal vector 528 with tangent vector 518 (e.g., such that n = t, where n is normal vector 528 and t is tangent vector 518). In various embodiments, step 570 includes positioning cap 520 such that a capping surface (e.g., a portion of cap 520 that will be used to seal an opening in the structure of 3D model 510, etc.) is orthogonal to the direction of airflow at endpoint 514.
At step 580, method 500 may include merging cap 520 with 3D model 510. Step 580 may include performing a Boolean difference operation to merge a mesh of cap 520 with a mesh of 3D model 510. Step 580 may result in a sealed 3D volume usable in CFPD simulations (shown as sealed 3D model 530).
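By way of non-limiting illustration, the cap generation, alignment (step 570), and merge (step 580) might be sketched as follows using the trimesh library; the use of a thin cylinder as the cap shape, together with the radius and thickness values, is an illustrative assumption, and a Boolean union is used as a stand-in for the Boolean mesh operation described above.

```python
import numpy as np
import trimesh

def cap_endpoint(airway_mesh, center, tangent, radius, thickness=0.5):
    """Generate a cap, align it with the endpoint, and merge it into the airway mesh."""
    t = np.asarray(tangent, dtype=float)
    t /= np.linalg.norm(t)

    # A thin cylinder serves as the cap; its axis (local +Z) becomes the cap normal.
    cap = trimesh.creation.cylinder(radius=radius, height=thickness, sections=64)

    # Align the cap normal with the centerline tangent (n = t) so the capping
    # surface is orthogonal to the direction of airflow at the endpoint.
    cap.apply_transform(trimesh.geometry.align_vectors([0.0, 0.0, 1.0], t))

    # Position the cap's center at the center of the endpoint's cross-sectional plane.
    cap.apply_translation(np.asarray(center, dtype=float) - cap.centroid)

    # Boolean merge of the cap mesh with the airway mesh (requires a Boolean backend).
    sealed = airway_mesh.union(cap)
    return sealed  # sealed.is_watertight reports whether the volume is now closed
```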
Sealed 3D model 530 may be useful in many scenarios. For example, researchers may use sealed 3D model 530 to determine respiratory therapies. This may be particularly important for respiratory conditions such as asthma and chronic obstructive pulmonary disease (COPD), the treatment for which requires drugs to reach specific regions of the lungs. Therefore, individualized CFPD simulations may be useful for developing inhaler designs and dosing strategies that disperse and deposit particles in patient-specific airway geometries (e.g., thereby enhancing treatment effectiveness, reducing side effects, and improving patient outcomes, etc.). As another example, public health officials may use sealed 3D model 530 to analyze the health effects of inhaling radioactive particles (e.g., in the event of a radioactive leak, etc.). As another example, regulators may use sealed 3D model 530 to assess occupational hazards associated with airborne contaminants. In many scenarios, sealed 3D model 530 offers benefits over conventional systems and methods by facilitating individualized modeling and simulation (e.g., for precision health risk assessment, determining individualized drug dosing, etc.). For example, conventional systems and methods may not account for complex airflow patterns and particle dynamics within a patient's airway, thereby leading to inaccuracies. Conversely, sealed 3D model 530 enables individualized CFPD simulations that may provide greater accuracy. In various embodiments, systems and methods of the present disclosure enable large-scale population studies that were previously impractical due to resource constraints. For example, researchers may use the systems and methods of the present disclosure to generate customized 3D models of patients' respiratory tracts at scale for use in studies (e.g., where this was previously impractical due to the time and cost associated with hiring a skilled 3D modeler to create custom 3D models of the patients' respiratory tracts). In various embodiments, sealed 3D model 530 may be used to generate subject-specific particle deposition profiles (e.g., via CFPD simulations).
Referring now to
Communication interface 670 may facilitate communication with one or more systems/devices. For example, computer system 600 may communicate with an electronic health records (EHR) system to receive a number of CT scan images via communication interface 670. Communication interface 670 may be or include wired or wireless communications interfaces (e.g., jacks, antennas, transmitters, receivers, transceivers, wire terminals, etc.) for conducting data communications with external systems or devices. In various embodiments, communications via communication interface 670 are direct (e.g., local wired or wireless communications). Additionally or alternatively, communications via communication interface 670 may utilize a network (e.g., a WAN, the Internet, a cellular network, etc.).
Storage 680 may store data/information associated with modeling a human respiratory tract. For example, storage 680 may store a number of CT scan images. As another example, storage 680 may store a 3D model of a human respiratory tract. Storage 680 may be and/or include one or more memory devices (e.g., hard drive storage, temporary storage, non-volatile memory, flash memory, optical memory, and/or any other suitable memory device).
I/O interface 690 may facilitate input/output operations. For example, I/O interface 690 may include a display capable of presenting information to a user and an interface capable of receiving input from the user. In some embodiments, I/O interface 690 includes a display device configured to present a 3D model of a human respiratory tract to a user. I/O interface 690 may include hardware and/or software components. For example, I/O interface 690 may include a physical input device (e.g., a mouse, a keyboard, a touchscreen device, etc.) and software to enable the physical input device to communicate with computer system 600 (e.g., firmware, drivers, etc.).
As utilized herein with respect to numerical ranges, the terms “approximately,” “about,” “substantially,” and similar terms generally mean +/−10% of the disclosed values, unless specified otherwise. As utilized herein with respect to structural features (e.g., to describe shape, size, orientation, direction, relative position, etc.), the terms “approximately,” “about,” “substantially,” and similar terms are meant to cover minor variations in structure that may result from, for example, the manufacturing or assembly process and are intended to have a broad meaning in harmony with the common and accepted usage by those of ordinary skill in the art to which the subject matter of this disclosure pertains. Accordingly, these terms should be interpreted as indicating that insubstantial or inconsequential modifications or alterations of the subject matter described and claimed are considered to be within the scope of the disclosure as recited in the appended claims.
It should be noted that the term “exemplary” and variations thereof, as used herein to describe various embodiments, are intended to indicate that such embodiments are possible examples, representations, or illustrations of possible embodiments (and such terms are not intended to connote that such embodiments are necessarily extraordinary or superlative examples).
The term “coupled” and variations thereof, as used herein, means the joining of two members directly or indirectly to one another. Such joining may be stationary (e.g., permanent or fixed) or moveable (e.g., removable or releasable). Such joining may be achieved with the two members coupled directly to each other, with the two members coupled to each other using a separate intervening member and any additional intermediate members coupled with one another, or with the two members coupled to each other using an intervening member that is integrally formed as a single unitary body with one of the two members. If “coupled” or variations thereof are modified by an additional term (e.g., directly coupled), the generic definition of “coupled” provided above is modified by the plain language meaning of the additional term (e.g., “directly coupled” means the joining of two members without any separate intervening member), resulting in a narrower definition than the generic definition of “coupled” provided above. Such coupling may be mechanical, electrical, or fluidic.
References herein to the positions of elements (e.g., “top,” “bottom,” “above,” “below”) are merely used to describe the orientation of various elements in the figures. It should be noted that the orientation of various elements may differ according to other exemplary embodiments, and that such variations are intended to be encompassed by the present disclosure.
The present disclosure contemplates methods, systems, and program products on any machine-readable media for accomplishing various operations. The embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, EPROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, for example, instructions and data which cause a general-purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions.
Although the figures and description may illustrate a specific order of method steps, the order of such steps may differ from what is depicted and described, unless specified differently above. Also, two or more steps may be performed concurrently or with partial concurrence, unless specified differently above. Such variation may depend, for example, on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations of the described methods could be accomplished with standard programming techniques with rule-based logic and other logic to accomplish the various connection steps, processing steps, comparison steps, and decision steps.
The terms “client” or “server” include all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus may include special purpose logic circuitry, e.g., a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC). The apparatus may also include, in addition to hardware, code that creates an execution environment for the computer program in question (e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them). The apparatus and execution environment may realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
The systems and methods of the present disclosure may be completed by any computer program. A computer program (also known as a program, software, software application, script, or code) may be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program may be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program may be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification may be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows may also be performed by, and apparatus may also be implemented as, special purpose logic circuitry (e.g., an FPGA or an ASIC).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random-access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data (e.g., magnetic, magneto-optical disks, or optical disks). However, a computer need not have such devices. Moreover, a computer may be embedded in another device (e.g., a vehicle, a Global Positioning System (GPS) receiver, etc.). Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices); magnetic disks (e.g., internal hard disks or removable disks); magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, implementations of the subject matter described in this specification may be implemented on a computer having a display device (e.g., a CRT (cathode ray tube), LCD (liquid crystal display), OLED (organic light emitting diode), TFT (thin-film transistor), or other flexible configuration) or any other monitor for displaying information to the user. Other kinds of devices may be used to provide for interaction with a user as well; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback).
Implementations of the subject matter described in this disclosure may be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer) having a graphical user interface or a web browser through which a user may interact with an implementation of the subject matter described in this disclosure, or any combination of one or more such back end, middleware, or front end components. The components of the system may be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a LAN and a WAN, an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
This application claims the benefit of U.S. Provisional Patent Application No. 63/601,958, filed on Nov. 22, 2023, the entire contents of which are incorporated herein by reference.
This invention was made with government support under grant no. DOD-PRMRP W81XWH2110984 awarded by the Department of Defense and grant no. 1P01AI165380-01 awarded by the National Institutes of Health. The government has certain rights in the invention.