Various embodiments pertain to an application that is designed to derive shareable healthcare-related insights.
Validation, verification, and registration are important steps in the development and then deployment of medical devices and software to be used as a medical device. The terms “Software as a Medical Device” and “SaMD” refer to software that is intended to be used for medical purposes and that performs those purposes without being part of a medical device. For convenience, such software may simply be referred to as “medical software.”
There are difficulties in gathering the information necessary for validation, verification, and registration, however. These difficulties include obtaining the clinical data needed to establish (or ensure compliance with) the Good Clinical Practice (GCP) standard, securing the informed consent of stakeholders, and storing patient health information (also referred to as “protected health information” or “personal health information”) in an accessible-yet-secure manner. These are significant obstacles in the development and deployment of medical devices and medical software.
Embodiments are illustrated by way of example and not limitation in the drawings. While the drawings depict various embodiments for the purpose of illustration, those skilled in the art will recognize that alternative embodiments may be employed without departing from the principles of the technology. Accordingly, while specific embodiments are shown in the drawings, the technology is amenable to various modifications.
Introduced here is a decentralized computing system (or simply “decentralized system”) that is designed to efficiently connect various stakeholders involved in the healthcare system. As further discussed below, an application (also referred to as a “platform”) that runs on the decentralized system may be responsible for performing activities in a secure manner to accelerate the accumulation and sharing of information among the various stakeholders.
These activities may be governed by iterative protocols (e.g., blockchain protocols) that are implemented by the application using the decentralized system. As an example, assume that the application obtains an audio file that includes the respiratory sounds of an individual (also referred to as a “patient”). In such a scenario, the application may derive a healthcare-related insight by examining the audio file. This healthcare-related insight may be shared by the application with entities such as medical facilities (e.g., hospitals and clinics) and insurance providers to facilitate the diagnostic process.
For the purpose of illustration, the application may be described in the context of facilitating development and/or deployment of an electronic stethoscope system able to generate audio files that include the respiratory sounds of patients. However, those skilled in the art will recognize that the technology described herein could be used to facilitate development and/or deployment of other medical devices and medical software.
Embodiments may be described with reference to particular medical devices, computing devices, and networks. However, those skilled in the art will recognize that the features are similarly applicable to other medical devices, computing devices, and networks. For example, while the application may be described as configured to obtain and then examine audio files that contain respiratory sounds, the application could be configured to obtain and then examine digital images (e.g., of the eye, lungs, etc.). Thus, the high-level approach to improving communication between various stakeholders in the healthcare system may be implemented regardless of the type of analysis performed by the application.
Moreover, while embodiments may be described in the context of computer-executable instructions, aspects of the technology could be implemented via software, firmware, or hardware. As an example, aspects of the application may be implemented on an inference server to simplify deployment of computer-implemented models (or simply “models”) at scale. Such an approach may allow models (e.g., for discovering breathing abnormalities in respiratory sounds) to be deployed from various frameworks and storage mediums (e.g., local memory or cloud-based memory).
Brief definitions of terms, abbreviations, and phrases used throughout the application are given below.
The terms “connected,” “coupled,” and any variants thereof are intended to include any connection or coupling between two or more elements, either direct or indirect. The connection or coupling can be physical, logical, or a combination thereof. For example, objects may be electrically or communicatively connected to one another despite not sharing a physical connection.
The term “module” may be used to refer broadly to components implemented via software, firmware, or hardware. Generally, modules are functional components that generate output(s) based on input(s). A computer program may include one or more modules. Thus, a computer program may include multiple modules responsible for completing different tasks or a single module responsible for completing all tasks.
Overview of Electronic Stethoscope System
Electronic stethoscope systems can be designed to simultaneously monitor sounds originating from within a living body (or simply “body”) under examination and the ambient environment. An electronic stethoscope system can include one or more input units that are connected to a hub unit. Each input unit may have a resonator and/or a diaphragm designed to direct acoustic sound waves toward at least one microphone configured to produce audio data indicative of internal sounds. These microphone(s) may be referred to as “auscultation microphones.” Moreover, each input unit may include at least one microphone configured to produce audio data indicative of sounds external to the body under examination. These microphone(s) may be referred to as “ambient microphones” or “environmental microphones.”
For the purpose of illustration, an “ambient microphone” may be described as capable of producing audio data indicative of “ambient sounds.” However, these “ambient sounds” generally include a combination of sounds produced by three different sources: (1) sounds originating from within the ambient environment (e.g., environmental noises); (2) sounds leaked through the resonator; and (3) sounds that penetrate the body under examination. Examples of ambient sounds include sounds originating directly from the structural body of the input unit (e.g., scratching by a finger or the chest) and low-frequency environmental noises that penetrate the structural body of the input unit.
As further described below, the input unit 100 can collect acoustic sound waves representative of biological activities within a body under examination, convert the acoustic sound waves into an electrical signal, and then digitize the electrical signal (e.g., for easier transmission, to ensure higher fidelity, etc.). The input unit 100 can include a structural body 102 comprised of metal, such as stainless steel, aluminum, titanium, or another suitable metal alloy. To make the structural body 102, molten metal will typically be die cast and then either machined or extruded into the appropriate form.
In some embodiments, the input unit 100 includes a casing that inhibits exposure of the structural body 102 to the ambient environment. For example, the casing may prevent contamination, improve cleanability, etc. Generally, the casing encapsulates substantially all of the structural body 102 except for the conical resonator disposed along its bottom side. The conical resonator is described in greater depth below with respect to FIG. 2.
With regard to the terms “distal” and “proximal,” unless otherwise specified, the terms refer to the relative positions of the input unit 100 with reference to the body. For example, in referring to an input unit 100 suitable for fixation to the body, “distal” can refer to a first position close to where a cable suitable for conveying digital signals is connected to the input unit 100 and “proximal” can refer to a second position close to where the input unit 100 contacts the body.
To improve the clarity of acoustic sound waves collected by the conical resonator 204, the input unit 200 may be designed to simultaneously monitor sounds originating from different locations. For example, the input unit 200 may be designed to simultaneously monitor sounds originating from within a body under examination and sounds originating from the ambient environment. Thus, the input unit 200 may include at least one microphone 206 (referred to as an “auscultation microphone”) configured to produce audio data indicative of internal sounds and at least one microphone 208 (referred to as an “ambient microphone”) configured to produce audio data indicative of ambient sounds. Each of the auscultation and ambient microphones includes a transducer able to convert acoustic sound waves into an electrical signal. Thereafter, the electrical signal may be digitized prior to transmission to a hub unit. Digitization enables the hub unit to readily clock or synchronize the signals received from multiple input units. Digitization may also ensure that the signal received by the hub unit from an input unit has a higher fidelity than would otherwise be possible.
These microphones may be omnidirectional microphones designed to pick up sound from all directions or directional microphones designed to pick up sounds coming from a specific direction. For example, the input unit 200 may include auscultation microphone(s) 206 oriented to pick up sounds originating from a space adjacent to the outer opening of the conical resonator 204. In such embodiments, the ambient microphone(s) 208 may be omnidirectional or directional microphones. As another example, a set of ambient microphones 208 could be equally spaced within the structural body 202 of the input unit 200 to form a phased array able to capture highly-directional ambient sounds to reduce noise and interference. Accordingly, the auscultation microphone(s) 206 may be arranged to focus on the path of incoming internal sounds (also referred to as the “auscultation path”), while the ambient microphone(s) 208 may be arranged to focus on the paths of incoming ambient sounds (also referred to as the “ambient paths”).
Conventionally, electronic stethoscopes have subjected electrical signals indicative of acoustic sound waves to digital signal processing (DSP) algorithms responsible for filtering undesirable artifacts. However, such action may suppress nearly all of the sound within certain frequency ranges (e.g., 100-800 Hz), thereby greatly distorting internal sounds of interest (e.g., those corresponding to heartbeats, inhalations, or exhalations). Here, by contrast, a processor can employ a noise cancellation algorithm that separately examines the audio data generated by the auscultation microphone(s) 206 and the audio data generated by the ambient microphone(s) 208. More specifically, the processor may parse the audio data generated by the ambient microphone(s) 208 to determine how, if at all, the audio data generated by the auscultation microphone(s) 206 should be modified. For example, the processor may discover that certain digital features should be amplified (e.g., because they correspond to internal sounds), diminished (e.g., because they correspond to ambient sounds), or removed entirely (e.g., because they represent noise). Such a technique can be used to improve the clarity, detail, and quality of sound recorded by the input unit 200. For example, application of the noise cancellation algorithm may be an integral part of the denoising process employed by an electronic stethoscope system that includes at least one input unit 200.
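The embodiments do not prescribe a specific noise cancellation algorithm. For illustration only, the following Python sketch shows one plausible approach, spectral subtraction, in which the ambient channel is used as a noise reference for modifying the auscultation channel; the function name, frame size, and over-subtraction factor are all assumptions rather than details from the disclosure.

```python
import numpy as np


def suppress_ambient_noise(auscultation: np.ndarray, ambient: np.ndarray,
                           frame_size: int = 1024, alpha: float = 1.0) -> np.ndarray:
    """Hypothetical spectral-subtraction denoiser: energy that also appears in
    the ambient channel is attenuated in the auscultation channel."""
    out = np.zeros_like(auscultation, dtype=float)
    for start in range(0, len(auscultation) - frame_size + 1, frame_size):
        a = auscultation[start:start + frame_size]
        n = ambient[start:start + frame_size]
        A, N = np.fft.rfft(a), np.fft.rfft(n)
        # Subtract the ambient magnitude spectrum while keeping the auscultation phase.
        mag = np.maximum(np.abs(A) - alpha * np.abs(N), 0.0)
        out[start:start + frame_size] = np.fft.irfft(mag * np.exp(1j * np.angle(A)), frame_size)
    return out
```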
For privacy purposes, neither the auscultation microphone(s) 206 nor the ambient microphone(s) 208 may be permitted to record while the conical resonator 204 is directed away from the body. Thus, in some embodiments, the auscultation microphone(s) 206 and/or the ambient microphone(s) 208 do not begin recording until the input unit 200 is attached to the body. In such embodiments, the input unit 200 may include one or more attachment sensors 210a-c that are responsible for determining whether the structural body 202 has been properly secured to the surface of the body.
The input unit 200 could include any subset of the attachment sensors shown here. For example, in some embodiments, the input unit 200 only includes attachment sensors 210a-b, which are positioned near the wider opening of the conical resonator 204. As another example, in some embodiments, the input unit 200 only includes attachment sensor 210c, which is positioned near the narrower opening (also referred to as the “inner opening”) of the conical resonator 204. Moreover, the input unit 200 may include different types of attachment sensors. For example, attachment sensor 210c may be an optical proximity sensor designed to emit light (e.g., infrared light) through the conical resonator 204 and then determine, based on the light reflected back into the conical resonator 204, the distance between the input unit 200 and the surface of the body. As another example, attachment sensors 210a-c may be audio sensors designed to determine, with the assistance of an algorithm programmed to determine the drop-off of a high-frequency signal, whether the structural body 202 is securely sealed against the surface of the body based on the presence of ambient noise (also referred to as “environmental noise”). As another example, attachment sensors 210a-b may be pressure sensors designed to determine whether the structural body 202 is securely sealed against the surface of the body based on the amount of applied pressure. Some embodiments of the input unit 200 include each of these different types of attachment sensors. By considering the output of these attachment sensor(s) 210a-c in combination with the aforementioned active noise cancellation algorithm, a processor may be able to dynamically determine the adhesion state. That is, the processor may be able to determine whether the input unit 200 has formed a seal against the body based on the output of these attachment sensor(s) 210a-c.
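How the processor fuses the outputs of the different attachment sensor types is not specified. The sketch below illustrates one hedged reading, in which the adhesion state is declared only when a majority of the sensor types agree; all field names and threshold values are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class AttachmentReadings:
    # Illustrative fields; names and units are assumptions, not from the disclosure.
    proximity_mm: float        # optical proximity sensor (cf. attachment sensor 210c)
    pressure_kpa: float        # pressure sensors (cf. attachment sensors 210a-b)
    ambient_dropoff_db: float  # measured drop-off of a high-frequency reference signal


def adhesion_state(r: AttachmentReadings) -> bool:
    """Return True when the readings jointly suggest the input unit is sealed
    against the body. Thresholds are placeholders for illustration only."""
    sealed_optically = r.proximity_mm < 2.0            # resonator nearly flush with skin
    sealed_by_pressure = r.pressure_kpa > 5.0          # sufficient applied pressure
    sealed_acoustically = r.ambient_dropoff_db > 20.0  # ambient noise strongly attenuated
    # Require agreement from at least two of the three sensor types.
    return sum((sealed_optically, sealed_by_pressure, sealed_acoustically)) >= 2
```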
As shown in FIG. 3, an electronic stethoscope system 300 can include multiple input units 302a-n, which together form an input unit array 308, connected to a hub unit 304.
When all of the input units 302a-n connected to the hub unit 304 are in an auscultation mode, the electronic stethoscope system 300 can employ an adaptive gain control algorithm programmed to compare internal sounds to ambient sounds. The adaptive gain control algorithm may analyze a target auscultation sound (e.g., normal breathing, wheezing, crackling, etc.) to judge whether an adequate sound level has been achieved. For example, the adaptive gain control algorithm may determine whether the sound level exceeds a predetermined threshold. The adaptive gain control algorithm may be designed to achieve gain control of up to 100 times (e.g., in two different stages). The gain level may be adaptively adjusted based on the number of input units in the input unit array 308, as well as the level of sound recorded by the auscultation microphone(s) in each input unit. In some embodiments, the adaptive gain control algorithm is programmed for deployment as part of a feedback loop. Thus, the adaptive gain control algorithm may apply gain to audio recorded by an input unit, determine whether the audio exceeds a preprogrammed intensity threshold, and dynamically determine whether additional gain is necessary based on the determination.
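As a rough illustration of the feedback loop described above, the following sketch applies the current gain to a frame of audio, compares the result against an intensity threshold, and adjusts the gain for the next frame. The target level, step size, and use of RMS intensity are placeholder assumptions; only the roughly 100x maximum gain comes from the text.

```python
import numpy as np


def adaptive_gain(frame: np.ndarray, gain: float,
                  target_rms: float = 0.1, max_gain: float = 100.0,
                  step: float = 1.1) -> tuple[np.ndarray, float]:
    """One iteration of a hypothetical feedback loop: apply the current gain,
    compare the result against an intensity threshold, and nudge the gain."""
    boosted = frame * gain
    rms = float(np.sqrt(np.mean(boosted ** 2)))
    if rms < target_rms:                   # too quiet: apply more gain next frame
        gain = min(gain * step, max_gain)  # text mentions gain control of up to ~100x
    elif rms > 2 * target_rms:             # too loud: back off
        gain = max(gain / step, 1.0)
    return boosted, gain
```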
Because the electronic stethoscope system 300 can deploy the adaptive gain control algorithm during a post-processing procedure, the input unit array 308 may be permitted to collect information regarding a wide range of sounds caused by the heart, lungs, etc. Because the input units 302a-n in the input unit array 308 can be placed in different anatomical positions along the surface of the body (or on an entirely different body), different biometric characteristics (e.g., respiratory rate, heart rate, or degree of wheezing, crackling, etc.) can be simultaneously monitored by the electronic stethoscope system 300.
The input unit 400 can include one or more processors 404, a wireless transceiver 406, one or more microphones 408, one or more attachment sensors 410, a memory 412, and/or a power component 414 electrically coupled to a power interface 416. These components may reside within a housing 402 (also referred to as a “structural body”).
As noted above, the microphone(s) 408 can convert acoustic sound waves into an electrical signal. The microphone(s) 408 may include auscultation microphone(s) configured to produce audio data indicative of internal sounds, ambient microphone(s) configured to produce audio data indicative of ambient sounds, or any combination thereof. Audio data representative of values of the electrical signal can be stored, at least temporarily, in the memory 412. In some embodiments, the processor(s) 404 process the audio data prior to transmission downstream to the hub unit 450. For example, the processor(s) 404 may apply algorithms designed for digital signal processing, denoising, gain control, noise cancellation, artifact removal, feature identification, etc. In other embodiments, minimal processing is performed by the processor(s) 404 prior to transmission downstream to the hub unit 450. For example, the processor(s) 404 may simply append metadata to the audio data that specifies the identity of the input unit 400 or examine metadata already added to the audio data by the microphone(s) 408.
In some embodiments, the input unit 400 and the hub unit 450 transmit data between one another via a cable connected between corresponding data interfaces 418, 470. For example, audio data generated by the microphone(s) 408 may be forwarded to the data interface 418 of the input unit 400 for transmission to the data interface 470 of the hub unit 450. In other embodiments, data is transmitted wirelessly, in which case the data interface 470 may be part of the wireless transceiver 456. The wireless transceiver 406 could be configured to automatically establish a wireless connection with the wireless transceiver 456 of the hub unit 450. The wireless transceivers 406, 456 may communicate with one another via a bi-directional communication protocol, such as Near Field Communication (NFC), wireless Universal Serial Bus (USB), Bluetooth, Wi-Fi, a cellular data protocol (e.g., LTE, 3G, 4G, or 5G), or a proprietary point-to-point protocol.
The input unit 400 may include a power component 414 able to provide power to the other components residing within the housing 402, as necessary. Similarly, the hub unit 450 can include a power component 466 able to provide power to the other components residing within the housing 452. Examples of power components include rechargeable lithium-ion (Li-Ion) batteries, rechargeable nickel-metal hydride (NiMH) batteries, rechargeable nickel-cadmium (NiCad) batteries, etc. In some embodiments, the input unit 400 does not include a dedicated power component, and thus must receive power from the hub unit 450. A cable designed to facilitate the transmission of power (e.g., via a physical connection of electrical contacts) may be connected between a power interface 416 of the input unit 400 and a power interface 468 of the hub unit 450.
The power channel (i.e., the channel between power interface 416 and power interface 468) and the data channel (i.e., the channel between data interface 418 and data interface 470) have been shown as separate channels for the purpose of illustration only. Those skilled in the art will recognize that these channels could be included in the same cable. Thus, a single cable capable of carrying data and power may be coupled between the input unit 400 and the hub unit 450.
The hub unit 450 can include one or more processors 454, a wireless transceiver 456, a display 458, a codec 460, one or more light-emitting diode (LED) indicators 462, a memory 464, and a power component 466. These components may reside within a housing 452 (also referred to as a “structural body”). As noted above, embodiments of the hub unit 450 may include any subset of these components, as well as additional components not shown here. For example, some embodiments of the hub unit 450 include a display 458 for presenting information such as the respiratory status or heart rate of an individual under examination, a network connectivity status, a power connectivity status, a connectivity status for the input unit 400, etc. The display 458 may be controlled via tactile input mechanisms (e.g., buttons accessible along the surface of the housing 452), audio input mechanisms (e.g., voice commands), etc. As another example, some embodiments of the hub unit 450 include LED indicator(s) 462 for operation guidance rather than the display 458. In such embodiments, the LED indicator(s) 462 may convey similar information as the display 458. As another example, some embodiments of the hub unit 450 include a display 458 and LED indicator(s) 462.
Upon receiving audio data representative of the electrical signal generated by the microphone(s) 408 of the input unit 400, the hub unit 450 may provide the audio data to a codec 460 responsible for decoding the incoming data. The codec 460 may, for example, decode the audio data (e.g., by reversing encoding applied by the input unit 400) in preparation for editing, processing, etc. The codec 460 may be designed to sequentially or simultaneously process audio data generated by the auscultation microphone(s) in the input unit 400 and audio data generated by the ambient microphone(s) in the input unit 400.
Thereafter, the processor(s) 454 can process the audio data. Much like the processor(s) 404 of the input unit 400, the processor(s) 454 of the hub unit 450 may apply algorithms designed for digital signal processing, denoising, gain control, noise cancellation, artifact removal, feature identification, etc. Some of these algorithms may not be necessary if already applied by the processor(s) 404 of the input unit 400. For example, in some embodiments the processor(s) 454 of the hub unit 450 apply algorithm(s) to discover diagnostically relevant features in the audio data, while in other embodiments such action may not be necessary if the processor(s) 404 of the input unit 400 have already discovered the diagnostically relevant features. Alternatively, the hub unit 450 may forward the audio data to a destination (e.g., a diagnostic service running on a decentralized system) for analysis, as further discussed below. Generally, a diagnostically relevant feature will correspond to a pattern of values in the audio data matching a predetermined pattern-defining parameter. As another example, in some embodiments the processor(s) 454 of the hub unit 450 apply algorithm(s) to reduce noise in the audio data to improve the signal-to-noise ratio (SNR), while in other embodiments these algorithm(s) are applied by the processor(s) 404 of the input unit 400.
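The text characterizes a diagnostically relevant feature as a pattern of values matching a predetermined pattern-defining parameter. One hedged interpretation is template matching via normalized cross-correlation, sketched below; the method, the template, and the threshold are assumptions rather than the disclosed algorithm.

```python
import numpy as np


def find_feature(audio: np.ndarray, template: np.ndarray,
                 threshold: float = 0.8) -> list[int]:
    """Return sample offsets where the audio matches a predetermined pattern,
    scored by normalized cross-correlation (values in [-1, 1])."""
    t = (template - template.mean()) / (template.std() + 1e-9)
    hits = []
    for i in range(len(audio) - len(template)):
        w = audio[i:i + len(template)]
        w = (w - w.mean()) / (w.std() + 1e-9)
        score = float(np.dot(w, t)) / len(t)  # mean product of z-scored signals
        if score >= threshold:
            hits.append(i)
    return hits
```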
In addition to the power interface 468, the hub unit 450 may include a power port. The power port (also referred to as a “power jack”) enables the hub unit 450 to be physically connected to a power source (e.g., an electrical outlet). The power port may be capable of interfacing with different connector types (e.g., C13, C15, C19). Additionally or alternatively, the hub unit may include a power receiver having an integrated circuit (“chip”) able to wirelessly receive power from an external source. The power receiver may be configured to receive power transmitted in accordance with the Qi standard developed by the Wireless Power Consortium or some other wireless power standard.
In some embodiments, the housing 452 of the hub unit 450 includes an audio port. The audio port (also referred to as an “audio jack”) is a receptacle that can be used to transmit signals, such as audio, to an appropriate plug of an attachment, such as headphones. An audio port typically includes two, three, or four contacts that enable audio signals to be readily transmitted when an appropriate plug is inserted into the audio port. For example, most headphones include a plug designed for a 3.5-millimeter (mm) audio port. Additionally or alternatively, the wireless transceiver 456 of the hub unit 450 may be able to transmit audio signals directly to wireless headphones (e.g., via NFC, Bluetooth, etc.).
As noted above, the processor(s) 404 of the input unit 400 and/or the processor(s) 454 of the hub unit 450 can apply a variety of algorithms to support different functionalities. Examples of such functionalities include attenuation of lost data packets in the audio data, noise-dependent volume control, dynamic range compression, automatic gain control, equalization, noise suppression, and acoustic echo cancellation.
Each functionality may correspond to a separate module residing in a memory (e.g., memory 412 of the input unit 400 or memory 464 of the hub unit 450). Thus, the input unit 400 and/or the hub unit 450 may include an attenuation module, a volume control module, a compression module, a gain control module, an equalization module, a noise suppression module, an echo cancellation module, or any combination thereof.
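The module structure described above lends itself to a simple processing pipeline. The sketch below expresses that idea in Python; the module names and the example composition are illustrative only.

```python
from typing import Callable

import numpy as np

# A module here is simply a function from audio to audio; each functionality
# (noise suppression, gain control, equalization, etc.) would be one module.
Module = Callable[[np.ndarray], np.ndarray]


def run_pipeline(audio: np.ndarray, modules: list[Module]) -> np.ndarray:
    """Apply the configured modules in order."""
    for module in modules:
        audio = module(audio)
    return audio


# Example composition with trivial stand-in modules:
pipeline: list[Module] = [
    lambda x: x - np.mean(x),  # stand-in "noise suppression" (DC removal)
    lambda x: x * 2.0,         # stand-in "gain control"
]
clean = run_pipeline(np.ones(8), pipeline)
```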
Additional information on electronic stethoscope systems can be found in U.S. Pat. No. 10,555,717, which is incorporated by reference herein in its entirety.
Validation, verification, and registration are important steps in the development and then deployment of medical devices and medical software. There are difficulties in gathering the information needed for validation, verification, and registration, however. This is especially true now, as many entities have begun developing medical software that is designed to accompany a given medical device. As an example, the electronic stethoscope system discussed above may be accompanied by medical software that is designed to analyze the audio data that it generates.
Introduced here is a decentralized computing system (or simply “decentralized system”) that is designed to efficiently connect various stakeholders involved in the healthcare system. An application that runs on the decentralized system may be responsible for accelerating the accumulation and sharing of information among the various stakeholders so as to facilitate the rendering of services to individuals (also referred to as “patients”).
As shown in FIG. 8, the application 800 may be spread across various nodes of the decentralized system, and it may rely on smart contracts to govern how information is shared among the various stakeholders.
Note that smart contracts may also be used in the validation, verification, and registration of medical devices and medical software. Several examples of such smart contracts are shown in the accompanying drawings.
Note that these smart contracts could be executed by the application in series. Thus, one smart contract may be executed first, with its outcome determining whether, or how, the next smart contract is executed.
Another aspect of the application 800 is its ability to operate in compliance with a consensus protocol (or simply “protocol”) agreed upon by the various stakeholders. As discussed above, the application 800 may be spread across various nodes (e.g., computing devices) whose job is to verify and/or record transactions. Because any stakeholder could submit information to be recorded, it is important that there are rules in place that specify what information should be recorded and what information should be discarded. These rules are referred to as the “protocols” by which the application 800 operates. At a high level, the protocol defines the approach to verifying whether a transaction recorded by the application 800 is valid. The protocol may be fully defined before the diagnostic service is offered by the application 800, or the protocol may be altered while the diagnostic service is offered. Thus, the protocol could be adjusted while the application 800 is “live.”
In some embodiments, the application 800 is configured to implement a certificate issuing and validating service in order to validate the various stakeholders. As an example, the application 800 may support and/or utilize a decentralized Public Key Infrastructure (PKI) in which a blockchain is used as a key-value storage. The term “blockchain,” as used herein, refers to a distributed digital ledger containing information that can be simultaneously used and shared within the decentralized system discussed above. Decentralized PKI eliminates dependence on certificate authorities for key management, relying instead on the decentralized system to maintain digital certificates (e.g., authorizing access to data to a given stakeholder).
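To make the key-value idea concrete, here is a minimal sketch of a ledger that binds stakeholder identifiers to public keys, with each entry hash-chained to its predecessor. It deliberately omits signatures, consensus, and revocation, all of which a real decentralized PKI would require, and every name in it is an assumption.

```python
import hashlib
import json
import time


class CertificateLedger:
    """Sketch of a blockchain used as key-value storage for digital certificates."""

    def __init__(self):
        self.blocks = []  # each block binds a stakeholder ID (key) to a public key (value)

    def register(self, stakeholder_id: str, public_key: str) -> None:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {"key": stakeholder_id, "value": public_key,
                "ts": time.time(), "prev": prev_hash}
        # Hash the block contents so that tampering with any entry is detectable.
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.blocks.append(body)

    def lookup(self, stakeholder_id: str) -> str | None:
        # The latest entry for a key wins, so re-registration rotates the key.
        for block in reversed(self.blocks):
            if block["key"] == stakeholder_id:
                return block["value"]
        return None
```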
Decisions made by the application 800 may be made in accordance with an evidence-based decision protocol. To make an evidence-based decision, the application 800 may consider various forms of evidence and then assess appropriateness of the evidence given the situation. Assume, for example, that the application 800 obtains audio data that includes the respiratory sounds of a patient for whom a diagnosis is to be recommended. In such a scenario, the application 800 may consider personal health information regarding the patient in addition to any insights derived from analyzing the audio data. The audio data and personal health information may be obtained from the same source (e.g., a device client) or different sources (e.g., a device client and an administration client).
As discussed above, transactions may be recorded by the application 800 on a distributed digital ledger. The distributed digital ledger may represent the backbone of the application 800, as it represents a consensus of information spread across multiple nodes (e.g., computing devices associated with different stakeholders).
As shown in FIG. 10, data generated during an examination of a patient 1002 that is overseen by a medical professional 1004 can be uploaded to the application 1000 (e.g., by a client that resides on the medical device used in the examination).
Then, the application 1000 can determine, through the use of a smart contract, whether to permit the medical facility 1006 to access the data. If the application 1000 determines that the medical facility 1006 is authorized to access the data, then the data can be transferred to an administration client 1014 that resides on a computing device 1016 accessible to the medical facility 1006. The computing device 1016 may be, for example, a personal computer or computer server. Upon receiving the data through the administration client 1014, the medical facility 1006 may store the data in an electronic medical record that is associated with the patient 1002. The electronic medical record may also include data obtained from other sources (e.g., entered directly by the medical professional 1004).
Similarly, the application 1000 may determine, through the use of a smart contract, whether to permit the payer 1008 to access the data. If the application 1000 determines that the payer 1008 is authorized to access the data, then the data can be transferred to a reimbursement client 1018 that resides on a computing device 1020 accessible to the payer 1008. The computing device 1020 may be, for example, a personal computer or computer server. Upon receiving the data through the reimbursement client 1018, the payer 1008 may examine the data so as to determine whether reimbursement is appropriate.
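The platform and language for these smart contracts are not specified in the disclosure. The sketch below expresses, in Python, the kind of terms such a contract might encode, namely a valid certificate plus patient consent for the requesting party; the field names and terms are assumptions.

```python
from dataclasses import dataclass


@dataclass
class AccessRequest:
    requester_id: str        # e.g., a medical facility or payer
    patient_id: str
    certificate_valid: bool  # established via the certificate issuing/validating service


def access_contract(req: AccessRequest, consented_parties: set[str]) -> bool:
    """Sketch of illustrative contract terms: the requester must hold a valid
    certificate AND the patient must have consented to sharing with that party."""
    return req.certificate_valid and req.requester_id in consented_parties


# Usage: grant the medical facility access only if both terms are satisfied.
req = AccessRequest("facility-1006", "patient-1002", certificate_valid=True)
authorized = access_contract(req, consented_parties={"facility-1006"})
```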
While the embodiment shown in FIG. 10 is described in the context of sharing data with a medical facility 1006 and a payer 1008, the data could be shared with other stakeholders in a comparable manner.
In some embodiments, the stakeholders involved in this process are verified by a certificate issuing and validating service implemented by the application 1000. As shown in FIG. 10, each of these stakeholders may be associated with a digital certificate that the application 1000 can validate before any data is shared.
As mentioned above, the application 1100 may be configured to execute a diagnostic service upon receiving the audio data. For example, the application 1100 may apply, to the audio data, a model that is designed and trained to calculate the respiratory rate, identify breathing abnormalities, or recommend appropriate treatments. These outputs may be referred to as “analysis” of the audio data.
Thereafter, the application 1100 can determine whether to permit access to a stakeholder, such as a medical facility 1106 or payer 1108. Generally, this is accomplished through the use of smart contracts as discussed above. If the application 1100 determines that access is authorized for a given stakeholder, then the analysis and/or the audio data can be shared with that stakeholder (e.g., the medical facility 1106 or the payer 1108).
Methodologies for Facilitating Sharing of Data Using an Application
Initially, the application may receive first input indicative of a request to certify audio data that includes the respiratory sounds of a patient (step 1301). The first input may be received in various ways. For example, the first input may be obtained from a client (e.g., a device client) that resides on the medical device. As another example, the first input may be provided through an interface generated and/or supported by the application. Assume, for example, that the application is able to generate an interface accessible through a computer program, such as a mobile application, desktop application, or web browser. In such a scenario, an individual may seek certification by uploading the audio file to the application through the interface. The individual may be the patient or some other person (e.g., a medical professional or family member).
Then, the application can certify the audio data so as to authorize sharing of the audio data (or analysis of the audio data) with entities that have an interest in the health of the patient (step 1302). For example, the application may obtain information regarding the medical device responsible for generating the audio data and then examine the information to ensure that the medical device satisfies one or more criteria. Examples of such criteria include dynamic range, frequency response, amplification amplitude, firmware version, and decentralized application (DApp) version. Upon determining that the medical device satisfies the criteria, the application may produce the analysis of the audio file. As discussed above, the analysis may include information regarding respiratory rate, breathing abnormalities, and the like. As another example, the application may examine the audio data to ensure that the audio data meets one or more criteria. Examples of such criteria include a minimum duration, a minimum number of breathing cycles, a minimum signal-to-noise ratio (SNR), and the like. Upon determining that the audio data satisfies the criteria, the application may produce the analysis of the audio file.
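A hedged sketch of the audio-data criteria check follows. The duration, cycle-count, and SNR thresholds are placeholders, the SNR estimate is deliberately crude, and the breathing-cycle count is assumed to be computed elsewhere; none of these values come from the disclosure.

```python
import numpy as np


def certify_audio(samples: np.ndarray, sample_rate: int, breathing_cycles: int,
                  min_seconds: float = 10.0, min_cycles: int = 3,
                  min_snr_db: float = 10.0) -> bool:
    """Check audio data against illustrative certification criteria: minimum
    duration, minimum number of breathing cycles, and minimum SNR."""
    duration_ok = len(samples) / sample_rate >= min_seconds
    cycles_ok = breathing_cycles >= min_cycles
    # Crude SNR estimate: treat the quietest 10% of frames as the noise floor.
    # Assumes the recording contains at least one full 1024-sample frame.
    frames = samples[: len(samples) // 1024 * 1024].reshape(-1, 1024)
    power = np.mean(frames ** 2, axis=1)
    noise = np.mean(np.sort(power)[: max(1, len(power) // 10)])
    snr_db = 10 * np.log10(np.mean(power) / (noise + 1e-12))
    return duration_ok and cycles_ok and snr_db >= min_snr_db
```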
Thereafter, the application may receive second input indicative of a request from an entity to access the analysis of the audio data (step 1303). This second input may be received from a client that resides on a computing device separate from the medical device. The client could be an administration client associated with a medical facility or regulatory professional, or the client could be a reimbursement client associated with a government organization or insurer. The application can then execute a smart contract that grants the entity access to the analysis of the audio data responsive to a determination that the entity satisfies a term in the smart contract. As an example, the smart contract may require that the entity have a valid digital certificate stored in, or accessible to, the decentralized system on which the application is running.
Thereafter, the application can examine the audio file to identify a breathing abnormality that is discoverable in the respiratory sounds (step 1403). For example, the application may apply an algorithm to the audio file that is trained to identify instances of wheezing, crackling, stridor, and rhonchi. The application can then forward information regarding the breathing abnormality to a client executing on a second computing device associated with an entity (step 1404). The entity may be a manufacturer of the medical device responsible for generating the audio file, a medical facility that is responsible for treating the patient, or an insurer involved in paying for treatment of the patient. While the first and second computing devices may both be representative of nodes in the decentralized system on which the application is running, the first and second computing devices need not be the same type of computing device. For example, the audio file may be uploaded through an interface accessible via a web browser on a mobile phone or personal computer, and the analysis may be forwarded to a client executing on a computer server.
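At inference time, the identification step might resemble the following sketch. The label set mirrors the abnormalities named above, while the model interface and per-frame framing are assumptions; the stand-in model returns random scores and would be replaced by a trained one.

```python
import numpy as np

LABELS = ["normal", "wheezing", "crackling", "stridor", "rhonchi"]


class DummyModel:
    """Stand-in for the trained model; its architecture is not specified in the text."""

    def predict(self, x: np.ndarray) -> np.ndarray:
        rng = np.random.default_rng(0)
        return rng.random((x.shape[0], len(LABELS)))  # per-frame label scores


def classify_frames(spectrogram: np.ndarray, model=DummyModel()) -> list[str]:
    """Score each spectrogram frame and return the most likely label per frame."""
    scores = model.predict(spectrogram)  # shape: (frames, len(LABELS))
    return [LABELS[int(i)] for i in np.argmax(scores, axis=1)]
```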
The application can then perform analysis of the data to derive a diagnostically relevant insight into the health of the patient (step 1502). As an example, the application may examine audio content to identify breathing abnormalities or cardiovascular abnormalities. As another example, the application may examine visual content to identify abnormal growths and lesions. In some embodiments, the application records evidence of the diagnostically relevant insight being derived as a transaction on a digital ledger maintained by the decentralized system on which the application runs.
Moreover, the application may establish that an entity is authorized to access the diagnostically relevant insight (step 1503). For example, the application may execute a smart contract that authorizes access to the entity responsive to a determination that the entity satisfies a term contained in the smart contract. Alternatively, the application may determine that the diagnostically relevant insight is to be shared with the entity based on information related to the patient (e.g., name or date of birth), diagnostic session (e.g., location, time, supervising medical professional), or medical device (e.g., manufacturer), and then confirm that a digital certificate establishing authenticity of the entity is valid. This digital certificate may be stored in the digital ledger maintained by the decentralized system.
Thereafter, the application can provide the diagnostically relevant insight to the entity (step 1504). For example, the application may forward the diagnostically relevant insight (or information regarding the diagnostically relevant insight) to a client that resides on a computing device accessible to the entity. In some embodiments, the application records evidence of the diagnostically relevant insight being provided to the entity as a transaction on the digital ledger maintained by the decentralized system.
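Both recording steps (evidence that an insight was derived, and evidence that it was provided to an entity) could be appended to the digital ledger as transactions. The sketch below uses a local hash chain purely to show the tamper-evident structure; actual embodiments would rely on the decentralized system's consensus, and all field names are assumptions.

```python
import hashlib
import json
import time


def record_transaction(ledger: list[dict], event: str, payload: dict) -> dict:
    """Append tamper-evident evidence of an event to a digital ledger; each
    entry embeds the hash of its predecessor."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    tx = {"event": event, "payload": payload, "ts": time.time(), "prev": prev}
    tx["hash"] = hashlib.sha256(json.dumps(tx, sort_keys=True).encode()).hexdigest()
    ledger.append(tx)
    return tx


# Usage: record that an insight was derived and then provided to an entity.
ledger: list[dict] = []
record_transaction(ledger, "insight_derived", {"patient": "p-01", "insight": "wheezing"})
record_transaction(ledger, "insight_provided", {"entity": "facility-1006"})
```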
As discussed above, the medical data will be acquired by the service (step 1602) after a user has entered and/or uploaded the medical data in the requisite formats through the landing page. The user may simply upload the medical data (e.g., by selecting an audio file), or the user may first view instructional materials to ensure that the medical data is properly uploaded.
Thereafter, the medical data input by the user can be verified (step 1603) via a decentralized process that takes place, partially or entirely, on the computing device used by the user to access the landing page. If verification fails, the user may be taken back to the previous step to make a second attempt at inputting the medical data. The approaches to decentralized verification may include, but are not limited to, logistic regression. Additionally or alternatively, the service may verify the quality of the medical data uploaded by the user against one or more evaluation parameters (e.g., defined by the manufacturer of the medical device used to generate the medical data).
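Assuming the logistic regression mentioned above refers to a lightweight on-device quality classifier, a minimal sketch might look as follows; the features, weights, and threshold are invented for illustration and would in practice come from offline training.

```python
import numpy as np


def verify_quality(features: np.ndarray, weights: np.ndarray, bias: float,
                   threshold: float = 0.5) -> bool:
    """Score simple features of the uploaded recording (e.g., RMS level,
    clipping rate, estimated SNR) with logistic regression and reject
    low-probability recordings."""
    p = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))  # sigmoid
    return bool(p >= threshold)


# Hypothetical usage with three features and pre-trained parameters:
features = np.array([0.12, 0.01, 18.0])  # rms, clip_rate, snr_db
weights = np.array([2.0, -30.0, 0.15])
accepted = verify_quality(features, weights, bias=-2.0)
```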
If the medical data is determined to meet the evaluation parameter(s), then the medical data can be transmitted by the computing device to another computing device (e.g., a computer server) that is responsible for analyzing the medical data (step 1604) and generating appropriate reports (step 1605). As an example, the medical data may be transferred via the Internet to a computer server on which an algorithm driven by artificial intelligence (AI) or machine learning (ML) performs breathing sound identification and compiles reports. Reports can be communicated to the user, for example, via push notification, text message, email, and the like. Additionally or alternatively, reports may be available for download through a web browser or for online viewing only. Reports may include analysis or results as interpreted on behalf of the user, as well as recommended follow-up actions. For example, based on breathing abnormalities discovered in an audio file that contains the respiratory sounds of a patient, the report may recommend that the patient visit a healthcare provider.
Unless contrary to physical possibility, it is envisioned that the steps described above may be performed in various sequences and combinations. For example, multiple instances of the processes 1300, 1400, 1500 may be executed by the application as data pertaining to one patient is shared with a first set of entities while data pertaining to another patient is shared with a second set of entities. Other steps may also be included in some embodiments. For example, diagnostically relevant insights, such as the presence of breathing abnormalities, could be posted to an interface for review by a patient in addition to, or instead of, the medical professional responsible for diagnosing the patient.
Processing System
The processing system 1700 may include a processor 1702, main memory 1706, non-volatile memory 1710, network adapter 1712 (e.g., a network interface), video display 1718, input/output device 1720, control device 1722 (e.g., a keyboard, pointing device, or mechanical input such as a button), drive unit 1724 that includes a storage medium 1726, or signal generation device 1730 that are communicatively connected to a bus 1716. The bus 1716 is illustrated as an abstraction that represents one or more physical buses and/or point-to-point connections that are connected by appropriate bridges, adapters, or controllers. The bus 1716, therefore, can include a system bus, Peripheral Component Interconnect (PCI) bus, PCI-Express bus, HyperTransport bus, Industry Standard Architecture (ISA) bus, Small Computer System Interface (SCSI) bus, Universal Serial Bus (USB), Inter-Integrated Circuit (I2C) bus, or a bus compliant with Institute of Electrical and Electronics Engineers (IEEE) Standard 1394.
The processing system 1700 may share a similar computer processor architecture as that of a computer server, router, desktop computer, tablet computer, mobile phone, video game console, wearable electronic device (e.g., a watch or fitness tracker), network-connected (“smart”) device (e.g., a television or home assistant device), augmented or virtual reality system (e.g., a head-mounted display), or another electronic device capable of executing a set of instructions (sequential or otherwise) that specify action(s) to be taken by the processing system 1700.
While the main memory 1706, non-volatile memory 1710, and storage medium 1726 are shown to be a single medium, the terms “storage medium” and “machine-readable medium” should be taken to include a single medium or multiple media that store one or more sets of instructions 1728. The terms “storage medium” and “machine-readable medium” should also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the processing system 1700.
In general, the routines executed to implement the embodiments of the present disclosure may be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions (collectively referred to as “computer programs”). The computer programs typically comprise one or more instructions (e.g., instructions 1704, 1708, 1728) set at various times in various memories and storage devices in a computing device. When read and executed by the processor 1702, the instructions cause the processing system 1700 to perform operations to execute various aspects of the present disclosure.
While embodiments have been described in the context of fully functioning computing devices, those skilled in the art will appreciate that the various embodiments are capable of being distributed as a program product in a variety of forms. The present disclosure applies regardless of the particular type of machine- or computer-readable medium used to actually cause the distribution. Further examples of machine- and computer-readable media include recordable-type media such as volatile and non-volatile memory devices 1710, removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMS) and Digital Versatile Disks (DVDs)), cloud-based storage, and transmission-type media such as digital and analog communication links.
The network adapter 1712 enables the processing system 1700 to mediate data in a network 1714 with an entity that is external to the processing system 1700 through any communication protocol supported by the processing system 1700 and the external entity. The network adapter 1712 can include a network adapter card, a wireless network interface card, a switch, a protocol converter, a gateway, a bridge, a hub, a receiver, a repeater, or a transceiver that includes an integrated circuit (e.g., enabling communication over Bluetooth or Wi-Fi).
The foregoing description of various embodiments of the technology has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the claimed subject matter to the precise forms disclosed.
Many modifications and variations will be apparent to those skilled in the art. Embodiments were chosen and described in order to best describe the principles of the technology and its practical applications, thereby enabling others skilled in the relevant art to understand the claimed subject matter, the various embodiments, and the various modifications that are suited to the particular uses contemplated.
This application is a continuation of International Application No. PCT/US2020/58928, filed on Nov. 4, 2020, which claims priority to U.S. Provisional Application No. 62/929,962, filed on Nov. 5, 2019, each of which is incorporated by reference herein in its entirety.
Related Application Data
Provisional application: 62/929,962, filed Nov. 2019 (US)
Parent application: PCT/US2020/058928, filed Nov. 2020 (US)
Child application: 17/735,662 (US)