Medical imaging is a key tool in the practice of modern clinical medicine. Imaging is used in an extremely broad array of clinical situations, from diagnosis to delivery of therapeutics to guiding surgical procedures. While medical imaging provides an invaluable resource, it also consumes extensive resources. For example, imaging systems are expensive and are efficiently utilized only when downtime is controlled. Furthermore, imaging systems require extensive human interaction to set up and operate, and then to analyze the images and make clinical decisions.
As the fields of machine learning (“ML”) and artificial intelligence (“AI”) become more mature, researchers working on medical ML/AI aim to integrate more data to improve their models, as opposed to merely changing algorithm architecture. Ensuring high-quality annotations is critically important in medicine: imaging can be nondiagnostic, and intra- and interobserver variability is high. As many as 25% of radiologists disagree with other radiologists' diagnoses, and 30% disagree with their own previous decisions. Ultimate ground truth, such as pathology reports, is not always available, and trained models often rely on “soft” annotated ground truth. Biases from poorly annotated datasets can result in negative consequences for ML algorithms in clinical use. However, to this day, few available collaborative annotation platforms for ML systems are capable of handling medical imaging.
The present disclosure provides systems and methods that reduce the total investment of human time required for medical imaging applications. In one non-limiting example, systems and methods are provided for facilitating the acceleration of the annotation process and development of medical imaging datasets.
The disclosure relates to platforms allowing medical entities, such as, e.g., researchers, commercial vendors, etc., to accelerate the annotation and development of medical imaging datasets. More specifically, embodiments enable labeling for classification and object detection tasks and provide various data and project management tools.
Improving the quality of a database requires the participation of well-trained experts and a thorough curation process, which may be based on voluntary commitment. Crowdsourced data collection methods can easily be contaminated by mislabeling caused by undertrained participants. Consider, however, a scenario in which the value of the data or the accuracy of an annotation can be readily estimated. In this scenario, it is possible to construct a high-quality dataset with an appropriate proportion of positive features for AI training by exchanging or trading datasets between medical entities (e.g., researchers, vendors, etc.). Furthermore, such a transaction can be fairly evaluated and securely monitored. Accordingly, embodiments described herein introduce a web-based, zero-footprint collaborative annotation tool for medical imaging data. As one non-limiting example, a proof of concept can include implementing the platform with pretrained AI models and blockchain features and using them to create preliminary annotations of a chest X-ray dataset for classification tasks.
Blockchain technology has been widely recognized to deliver decentralization and transparency to solutions in many areas. Some attempts have been made in medicine to utilize those benefits, mostly when handling electronic health records, promising better management of data ownership, sharing, or authorization. Nevertheless, attempts to utilize blockchain only rarely result in a tool useful in the clinical setting. Embodiments described herein explore how blockchain can encourage transparency and trust when crowdsourced annotation is practiced by saving user activity in an immutable ledger. Blockchain technology has the advantage of defending against data manipulation without the installation of an additional security system. Embodiments described herein achieve security for image upload, annotation record modulation, etc., without compromising user convenience. Additionally, blockchain can provide incentives for annotators (via, e.g., a blockchain currency or reward), inspire anonymous data sharing, etc.
In accordance with one aspect of the disclosure, a collaborative annotation system is provided that includes an electronic processor. The electronic processor is configured to enable access to a collaborative annotation project associated with at least one medical image. The electronic processor is also configured to receive crowdsourced annotations associated with the at least one medical image from a set of annotators. The electronic processor is also configured to evaluate the crowdsourced annotations. The electronic processor is also configured to generate an annotation record associated with the at least one medical image based on the evaluation of the crowdsourced annotations.
In accordance with another aspect of the disclosure, a method is provided that provides a collaborative annotation platform. The method includes enabling, with an electronic processor, access to a collaborative annotation project associated with at least one medical image. The method also includes receiving, with the electronic processor, crowdsourced annotations associated with the at least one medical image from a set of annotators. The method also includes evaluating, with the electronic processor, the crowdsourced annotations. The method also includes generating, with the electronic processor, an annotation record associated with the at least one medical image based on the evaluation of the crowdsourced annotations.
In accordance with yet another aspect of the disclosure, a collaborative annotation system is provided that includes an electronic processor. The electronic processor is configured to define a collaborative annotation project associated with a set of medical images. The electronic processor is also configured to obtain crowdsourced annotations for the collaborative annotation project from a dispersed group of annotators. The electronic processor is also configured to evaluate the crowdsourced annotations. The electronic processor is also configured to generate at least one annotation record based on the evaluation of the crowdsourced annotations.
In accordance with yet another aspect of the disclosure, a collaborative annotation system is provided that includes an electronic processor. The electronic processor is configured to access at least one annotation record, wherein the at least one annotation record is based on crowdsourced annotations obtained for a collaborative annotation project associated with a set of medical images. The electronic processor is also configured to generate training data based on the at least one annotation record.
In accordance with yet another aspect of the disclosure, a collaborative annotation system is provided that includes an electronic processor. The electronic processor is configured to access training data associated with annotation records based on crowdsourced annotations obtained for a collaborative annotation project associated with a set of medical images. The electronic processor is also configured to develop a model using machine learning using the training data, wherein the model is associated with a medical image analysis function.
In accordance with yet another aspect of the disclosure, a collaborative annotation system is provided that includes an electronic processor. The electronic processor is configured to obtain crowdsourced annotations associated with at least one medical image from a dispersed group of annotators. The electronic processor is also configured to evaluate the crowdsourced annotations to determine an annotation contribution to the crowdsourced annotations for each annotator included in the dispersed group of annotators. The electronic processor is also configured to generate and associate a digital reward for at least one annotator included in the dispersed group of annotators based on a corresponding annotation contribution for the at least one annotator.
The foregoing and other aspects and advantages of the disclosed embodiments will appear from the following description. In the description, reference is made to the accompanying drawings which form a part hereof, and in which there is shown by way of illustration embodiments of the disclosed technology. Any such embodiment does not necessarily represent the full scope of the disclosed technology, however, and reference is made therefore to the claims and herein for interpreting the scope of the disclosed technology.
The present disclosure provides systems and methods that can reduce the human and/or trained clinician time required to analyze medical images. As one non-limiting example, the present disclosure provides examples of the inventive concepts provided herein applied to the analysis of x-rays; however, other imaging modalities beyond x-rays and applications within each modality are contemplated, such as echocardiograms, MRI, CT, PET, SPECT, optical, digital pathological images, and the like.
The server 105, the user device 110, the medical image database 115, and the annotation record database 120 communicate over one or more wired or wireless communication networks 130. Portions of the communication networks 130 may be implemented using a wide area network, such as the Internet, a local area network, such as a Bluetooth™ network or Wi-Fi, and combinations or derivatives thereof. Alternatively or in addition, in some embodiments, components of the system 100 communicate directly as compared to through the communication network 130. Also, in some embodiments, the components of the system 100 communicate through one or more intermediary devices not illustrated in
The server 105 is a computing device, such as a server, a database, or the like. As illustrated in
The communication interface 210 may include a transceiver that communicates with the user device 110, the medical image database 115, the annotation record database 120, or a combination thereof over the communication network 130 and, optionally, one or more other communication networks or connections. The electronic processor 200 includes a microprocessor, an application-specific integrated circuit (“ASIC”), or another suitable electronic device for processing data, and the memory 205 includes a non-transitory, computer-readable storage medium.
The electronic processor 200 can access and execute computer-readable instructions (“software”) stored in the memory 205. The software may include firmware, one or more applications, program data, filters, rules, one or more program modules, and other executable instructions. For example, the software may include instructions and associated data for performing a set of functions, including the methods described herein.
For example, as illustrated in
As also illustrated in
As noted above, in some embodiments, the electronic processor 200 executes the application 260 to provide a collaborative annotation platform for performing crowdsourcing of annotation data, performs an evaluation of the crowdsourced annotation data, or a combination thereof. In some embodiments, the electronic processor 200 may use the crowdsourced annotation data (e.g., annotation records stored in the annotation record database 120) as training data for one or more of the models.
Models generated by the learning engine 265 can be stored in the model database 270. In some embodiments, the models generated by the learning engine 265 can perform medical image analysis functions, such as, e.g., predictions, probabilities, feature activation maps, etc. As one example, a model may include, e.g., a classification model, an object detection model, a segmentation model, another type of medical imaging analysis model, and the like. As illustrated in
As also illustrated in
Alternatively or in addition, in some embodiments, the electronic ledger 280 is a centralized ledger maintained by the server 105, as illustrated in
In some embodiments, proof of stake or proof of work (also referred to herein as “proof of contribution”) may be used. Blocks may be created from these hashes and may be verified using a proof of work procedure before new data can be added to the electronic ledger 280. As blocks are added, the proof of work becomes more difficult because the nodes must process all of the previous blocks to add new blocks. Proof of work difficulty increases with increased blocks, nodes, and difficulty in verifying blocks. A server (e.g., the server 105) may accept blocks by creating the next block; servers that do not agree are ignored, and only honest servers with duplicate electronic ledgers may be accepted. In some embodiments, servers may be mining computers for the blockchain (or another component of the system 100 may provide the mining function) and provide proof of work or consensus. In some embodiments, conventional mining, staking pools, or the like may perform the mining, or tokenizing may be used.
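As an illustrative sketch only (not the platform's actual consensus code), a hash-based proof-of-work search over a nonce, of the kind described above, might look like the following:

```python
import hashlib

def mine_block(prev_hash: str, payload: str, difficulty: int = 4) -> tuple[int, str]:
    """Find a nonce such that the block hash starts with `difficulty` zero hex digits.

    Raising `difficulty` by one multiplies the expected search effort by 16,
    which mirrors how proof-of-work difficulty can be tuned.
    """
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}{payload}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Mine one block chaining from a (hypothetical) previous hash.
nonce, block_hash = mine_block("genesis", "annotation-activity", difficulty=3)
```

Any server can then cheaply verify the work by recomputing a single hash from the previous hash, the payload, and the claimed nonce.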
Accordingly, in some embodiments, the electronic processor 200 can implement the electronic ledger 280 as a blockchain to track user activity, including, e.g., annotating images, uploading medical image data, exporting medical image data, and the like. Implementing a blockchain can facilitate better security and traceability of medical datasets, especially when considering global platforms that deal with sensitive data, such as medical imaging data. As described in greater detail below, in some embodiments, the blockchain (or blockchain data) may be used to calculate a value for a dataset (e.g., an annotation record) and estimate a reward (e.g., a digital reward or credit) for annotators via blockchain currency (e.g., a cryptocurrency) to facilitate, among other things, accurate annotation. Accordingly, implementation of blockchain technology can encourage transparency and trust when crowdsourcing annotations by saving user activity in an immutable ledger (e.g., as hashed information in the electronic ledger 280).
Returning to
The annotation record database 120 stores one or more annotation records associated with medical imaging data (e.g., crowdsourced training data). An annotation record can include, e.g., one or more annotations (or other user activity) associated with one or more medical images (e.g., a medical image included in the medical imaging data of the medical image database 115). An annotation record may be associated with a single medical image. As one example, when a first annotator and a second annotator annotate the same medical image, the annotation record for that medical image can include the annotations provided by the first annotator and the annotations provided by the second annotator. Alternatively or in addition, in some embodiments, an annotation record may be associated with multiple medical images, where each medical image shares at least one characteristic or parameter (e.g., a type of medical image, an imaging modality or system used to capture the medical image, a viewpoint of the medical image, etc.). As one example, a first annotator annotates a first chest x-ray and a second chest x-ray, and a second annotator annotates the first chest x-ray and a third chest x-ray. Following this example, the annotation record may include the first annotator's annotations of the first chest x-ray and the second chest x-ray and the second annotator's annotations of the first chest x-ray and the third chest x-ray. Alternatively or in addition, an annotation record can be associated with an annotator. As one example, a first annotation record can be associated with user activity of a first annotator with a first medical image, and a second annotation record can be associated with user activity of the first annotator with a second medical image.
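The annotation-record structure described above might be sketched as follows. The field names and classes here are hypothetical; the actual schema of the annotation record database 120 is not specified in this disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """One annotator's label for one medical image (hypothetical schema)."""
    annotator_id: str
    image_id: str
    label: str
    value: bool  # True = feature judged present

@dataclass
class AnnotationRecord:
    """A record aggregating annotations, e.g., for one image or one annotator."""
    record_id: str
    annotations: list = field(default_factory=list)

    def add(self, annotation: Annotation) -> None:
        self.annotations.append(annotation)

    def images(self) -> set:
        """The set of medical images this record covers."""
        return {a.image_id for a in self.annotations}

# Two annotators annotating the same medical image, as in the example above.
record = AnnotationRecord("rec-1")
record.add(Annotation("annotator_A", "cxr-001", "cardiomegaly", True))
record.add(Annotation("annotator_B", "cxr-001", "cardiomegaly", False))
```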
As yet another example, a first annotation record can be associated with user activity with a first medical image of a first annotator and a second annotation record can be associated with user activity with a second medical image of a second annotator.
As noted above, in some embodiments, the annotation record database 120 may be distributed among multiple devices, such as, e.g., multiple databases. Alternatively or in addition, the annotation record database 120 may be combined with another device, such as, e.g., the medical image database 115, the server 105, the user device 110, another component of the system 100, or a combination thereof. As noted above, in some embodiments, the electronic processor 200 may use crowdsourced annotation data (e.g., one or more annotation records) as training data for one or more of the models stored in the model database 270.
The user device 110 can include a computing device, such as a desktop computer, a laptop computer, a tablet computer, a terminal, a smart telephone, a smart television, a smart wearable, or another suitable computing device that interfaces with a user. Although not illustrated in
In the illustrated example of
A user may use the user device 110 to interact with the collaborative annotation platform.
The data owner 310 can include a healthcare entity, group, or organization that manages and maintains medical data (e.g., a hospital, a healthcare clinic, an urgent care clinic, etc.). A data owner 310 can use the user device 110 to upload (or otherwise enable access to) a set of medical images (e.g., as a collaborative annotation project). As one example, as illustrated in
The project manager 305 may use the user device 110 to define a collaborative annotation project, start a collaborative annotation project, manage access to a collaborative annotation project, and the like (based on the medical images provided by the data owner 310). In some embodiments, the project manager 305 is a member of, associated with, or otherwise affiliated with the same healthcare entity, group, or organization as the data owner 310. For example, stored studies can be organized privately by users into projects (e.g., one or more collaborative annotation projects). The collaborative annotation platform can allow project managers 305 to assign access privileges to a collaborative annotation project for other users, such as, e.g., other readers, annotators, or the like, which prevents unwanted access to sensitive medical data. Specific users can have various access levels, limiting some features, such as data export, progress tracking, project statistics, and management. As one example, as illustrated in
The annotator 315 can use the user device 110 to access and annotate a collaborative annotation project (e.g., a set of medical images designated or selected for crowdsourcing annotation). An annotation record is generated based on the annotator's 315 interaction with the collaborative annotation project (or a medical image included therein). For example, all annotations (or user activity) are monitored and stored as an annotation record (e.g., in the annotation record database 120). An annotation record may include timing information for a single case (e.g., a collaborative annotation project or one or more medical images included therein), such as the time from the moment the case is completely loaded to the click of the submit button, the time of mouse clicks for labeling, or motionless durations, in order to evaluate the time spent on each label's task.
As illustrated in
Accordingly, the collaborative annotation platform can fetch (or access) medical images from a vendor-neutral DICOM storage (e.g., the medical image database 115). As one example, the collaborative annotation platform may be implemented via a connection with both standard PACS systems and DICOM web-based RESTful web services and application programming interfaces (“APIs”).
As one proof of concept, Orthanc [https://www.orthanc-server.com, Liege, Belgium] and the Google DICOM Store (through the Google Healthcare API, CA, USA) can be utilized, where image retrieval can be performed through WADO (Web Access to DICOM Persistent Objects) protocols. Connecting to standard PACS systems and fetching images with the C-GET protocol can also be implemented. Additionally, the collaborative annotation platform allows users to use non-DICOM image files, common in large-scale non-volumetric medical datasets (e.g., the National Institutes of Health (“NIH”) and Stanford chest X-ray datasets). Within the collaborative annotation platform, patient information can be anonymized.
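A WADO-URI retrieval request of the kind described above can be sketched as follows. The endpoint URL and UIDs below are hypothetical; the query-parameter names follow the WADO-URI convention:

```python
from urllib.parse import urlencode

# Hypothetical DICOM storage endpoint (not a real server).
WADO_ENDPOINT = "https://pacs.example.org/wado"

def wado_url(base: str, study_uid: str, series_uid: str, object_uid: str) -> str:
    """Build a WADO-URI request URL for retrieving a single DICOM instance."""
    params = {
        "requestType": "WADO",
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "contentType": "application/dicom",
    }
    return f"{base}?{urlencode(params)}"

url = wado_url(WADO_ENDPOINT, "1.2.3", "1.2.3.4", "1.2.3.4.5")
# The instance could then be fetched with, e.g., urllib.request.urlopen(url).read()
```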
Users can export annotations in comma-separated values (“CSV”) format (e.g., for classification results, ROI labels, or a combination thereof). It is also possible to import radiological reports to the collaborative annotation platform, matching each radiological report with specific cases. The collaborative annotation platform is developed to optimize annotation workflow, especially in large-scale datasets with multiple collaborators and stakeholders, where the collaborative annotation platform takes into account each user's role, as illustrated in
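A CSV export of classification annotations might be sketched as follows; the column names are illustrative, not the platform's actual export schema:

```python
import csv
import io

def export_annotations_csv(annotations: list[dict]) -> str:
    """Serialize classification annotations to CSV text (hypothetical columns)."""
    buffer = io.StringIO()
    writer = csv.DictWriter(
        buffer, fieldnames=["image_id", "annotator_id", "label", "value"]
    )
    writer.writeheader()
    writer.writerows(annotations)
    return buffer.getvalue()

csv_text = export_annotations_csv([
    {"image_id": "cxr-001", "annotator_id": "A", "label": "pneumonia", "value": 1},
])
```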
The collaborative annotation platform can be connected with a locally developed AI inference RESTful (Representational State Transfer) service, running on the same device (e.g., the server 105) through a Docker container [https://www.docker.com, CA, USA]. This service includes four AI classification models for chest X-ray data, predicting: view position (i.e., AP: anterior-posterior vs. PA: posterior-anterior), pathologic features, gender, and age. View position and gender predictions are framed as binary classification tasks, feature prediction as multilabel classification, and age prediction as a regression task. The collaborative annotation platform can further expand the collection of available pre-trained models and improve the performance of current models by changing datasets or model architectures.
A user of the collaborative annotation platform can request model prediction on the loaded image in real-time (or near real-time) by passing the input through a GPU-accelerated inference service. Images are sent to the service from DICOM storage via an API that evaluates sent data and returns predictions, probabilities, and feature activation maps.
Predictions are returned to users via one or more user interfaces. Feature activation maps in a form of gradient-weighted class activation mapping (“Grad-CAM”) can be overlaid over a DICOM image as a Red-Green-Blue-Alpha (“RGBA”) matrix with adjustable opacity.
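Converting a normalized Grad-CAM map to an RGBA overlay with adjustable opacity can be sketched as follows. This is a minimal illustration under assumed conventions (a simple red-to-blue colormap with activation-scaled alpha), not the platform's actual rendering code:

```python
import numpy as np

def gradcam_to_rgba(activation: np.ndarray, opacity: float = 0.5) -> np.ndarray:
    """Convert a Grad-CAM map with values in [0, 1] to an RGBA overlay.

    The alpha channel scales with both the activation strength and the
    user-adjustable opacity, so low-activation regions stay transparent.
    """
    act = np.clip(activation, 0.0, 1.0)
    rgba = np.zeros((*act.shape, 4), dtype=np.float32)
    rgba[..., 0] = act            # red channel follows activation
    rgba[..., 2] = 1.0 - act      # blue channel for low activation
    rgba[..., 3] = opacity * act  # transparent where activation is low
    return rgba

# A 1x2 toy map: no activation on the left, full activation on the right.
overlay = gradcam_to_rgba(np.array([[0.0, 1.0]]), opacity=0.8)
```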
The collaborative annotation platform can provide a “review mode” to evaluate discrepancies between annotators. The process of annotating medical imaging for machine learning purposes can differ from making a diagnosis in the actual clinical environment, and annotators may have different standards for determining whether a particular feature exists. Therefore, the collaborative annotation platform can resolve disagreements in order to maintain consistency of the dataset. Project managers may save time by running smaller sample projects before the main annotation project to assess the presence of various problems. As one example, the collaborative annotation platform may implement this function by illustrating the annotators' labeling results and reliability in the form of a graphical representation, such as, e.g., a heatmap. Accordingly, the collaborative annotation platform may allow subsequent annotators to check the agreement between the results of preceding annotators. With this mode, annotators can develop better annotating strategies and prevent trial and error in main annotation projects.
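As an illustrative sketch of the data behind such an agreement heatmap, the pairwise agreement between annotators can be computed as the fraction of cases on which each pair assigned the same label. The function name and binary label encoding below are hypothetical:

```python
def pairwise_agreement(labels: dict[str, list[int]]) -> dict[tuple[str, str], float]:
    """Fraction of cases on which each pair of annotators agrees.

    `labels` maps annotator name -> per-case binary labels; the result can
    be rendered as a heatmap of inter-annotator agreement.
    """
    names = sorted(labels)
    matrix = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            pairs = list(zip(labels[a], labels[b]))
            matrix[(a, b)] = sum(x == y for x, y in pairs) / len(pairs)
    return matrix

# Two annotators disagreeing on one of four cases.
agreement = pairwise_agreement({
    "A": [1, 0, 1, 1],
    "B": [1, 0, 0, 1],
})
```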
Furthermore, this mode can be implemented for training or education purposes. For example, before implementing the main project, the collaborative annotation platform can combine pre-training sessions with the review mode to reduce unnecessary mistakes. Users can quickly check their discordance in pre-training sessions within the collaborative annotation platform through the review mode and resolve that discordance.
Additionally, the collaborative annotation platform can include a blockchain implementation (e.g., the electronic ledger 280) to partially track user activity, including annotating images, uploading and exporting data, etc. As described above, blockchain can facilitate better security and traceability of medical datasets, especially when considering global platforms that deal with sensitive data, such as medical imaging.
Three fellowship-trained radiologists classified the chest X-ray images as a proof of concept in a private project. One thousand anonymized PA-view chest X-ray images in DICOM format from Massachusetts General Hospital were uploaded to the collaborative annotation platform. Twenty-five classification labels were determined and assigned to the project (e.g., as illustrated in
To suggest a clear analysis method, we only concentrated on seven critical labels with clinically high value (i.e., interstitial lung disease, pneumonia, pulmonary edema, pleural effusion, cardiomegaly, pneumothorax, and atelectasis). Other features can be analyzed in the same way and have similar characteristics.
Inter-rater agreement between the three annotators, measured as the proportion of labels on which all three annotators' results matched, was 0.90. We also measured Fleiss's kappa (0.63) to assess the reliability of agreement between the three raters when assigning categorical ratings (in this case, the seven pathological feature annotations). Among the 7,000 labels annotated in total (7 labels times 1,000 images), 370 were labeled by all annotators as positive and 5,954 as negative.
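Fleiss's kappa, as used above, can be computed from a subject-by-category count table. The following is a minimal sketch (not the platform's implementation), where `counts[i][j]` gives the number of raters assigning subject i to category j:

```python
def fleiss_kappa(counts: list[list[int]]) -> float:
    """Fleiss's kappa for a table of per-subject, per-category rater counts."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])  # each subject rated by the same number of raters
    # Marginal proportion of each category across all ratings.
    p_j = [
        sum(row[j] for row in counts) / (n_subjects * n_raters)
        for j in range(len(counts[0]))
    ]
    # Mean observed per-subject agreement.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_subjects
    # Chance agreement from the category marginals.
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)
```

With perfect agreement kappa equals 1; systematic disagreement pushes it toward negative values.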
The remaining 676 labels had differences in assessments between readers. The 95% confidence interval of the total mean labeling time was 6.16±0.21 seconds; cardiomegaly took the shortest labeling time (4.63±0.54 seconds), while pneumothorax took the longest (13.92±3.93 seconds). As illustrated in
When annotations on a new dataset are received, it can be important to understand the following: (1) How much is the data worth? (2) How much is any annotation worth? (3) Which annotator contributed, and how much? For that, we formulate the value of the data based on the dataset characteristics, the time cost of entering the annotation, and its annotation accuracy. The average labeling time was identified as an indicator for estimating the labor involved in annotation. To calculate accuracy, we measured agreement between each annotated label and a pseudo-ground truth, defined by the majority rule between annotators.
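The majority-rule pseudo-ground truth and per-annotator agreement described above can be sketched as follows (hypothetical helper names; binary labels assumed):

```python
from collections import Counter

def pseudo_ground_truth(votes: list[list[int]]) -> list[int]:
    """Majority vote across annotators; votes[i] is annotator i's per-case labels."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*votes)]

def annotator_accuracy(annotations: list[int], truth: list[int]) -> float:
    """Fraction of labels agreeing with the pseudo-ground truth."""
    return sum(a == t for a, t in zip(annotations, truth)) / len(truth)

# Three annotators, three cases (an odd rater count avoids ties).
votes = [[1, 0, 1], [1, 1, 1], [0, 0, 1]]
truth = pseudo_ground_truth(votes)
acc_first = annotator_accuracy(votes[0], truth)
```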
In order to evaluate each annotator's contribution to the CXR PA dataset, we exported the binary classification data and generated an annotation evaluation sheet consisting of True or False values. As illustrated in
With reference to the algorithm illustrated in
It is obvious that x∈[a, b] can be normalized oppositely (b maps to −1, a maps to 1) as
Applying this to Dkj∈[0, 1] and rkji∈[0, 0.5], we scale them as follows:
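Because the scaled forms themselves are not reproduced above, the following is a reconstruction consistent with the stated endpoint mappings (a maps to 1, b maps to −1), offered only as a sketch:

```latex
\operatorname{scale}_{[a,b]}(x) \;=\; \frac{a + b - 2x}{b - a},
\qquad
\widehat{D}_{kj} \;=\; 1 - 2\,D_{kj},
\qquad
\widehat{r}_{kj}^{\,i} \;=\; 1 - 4\,r_{kj}^{\,i}
```

One can verify the endpoints: x = a gives (b − a)/(b − a) = 1 and x = b gives (a − b)/(b − a) = −1; the two applied forms follow by setting [a, b] = [0, 1] and [0, 0.5], respectively.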
Each annotator's reward reward(i) can be formulated as a linear combination of each factor as:
This approach considers information on the dataset, the estimated annotation quality, the time required to produce the annotations, and the label-specific accuracy of each annotator.
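Because Equation 1 itself is not reproduced here, the following is only a hypothetical sketch of a reward formulated as a linear combination of normalized factors; the weights and factor names are illustrative assumptions, not the actual Equation 1:

```python
def reward(weights: dict[str, float], factors: dict[str, float]) -> float:
    """Linear combination of normalized factor scores (illustrative only)."""
    return sum(weights[name] * factors[name] for name in weights)

# Hypothetical weights and normalized per-annotator factor scores.
r = reward(
    {"dataset_value": 0.4, "accuracy": 0.4, "time": 0.2},
    {"dataset_value": 0.8, "accuracy": 0.9, "time": 0.5},
)
```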
For our experiment, we assumed the value of the entire dataset as 1000T MED Token (a cryptocurrency used in the current research) for the seed money of the data trading system and distributed the collaborative annotation platform currency to three annotators (i.e., Annotator A: 371T, Annotator B: 347T, and Annotator C: 282T MED Token) from Equation 1 (above). Although 1000T MED Token was implemented, another cryptocurrency can be implemented within the collaborative annotation platform.
Equation 1 can be modified in different ways depending on the situation. As one example, the relative importance of various factors can be adjusted through a weighted sum. In reality, the value of data will change. Initially, only a small amount of data is available, and the performance of artificial intelligence developed using this data will be correspondingly limited. However, as a large amount of data is gradually added and more annotators label the data, the value of the data as a whole will increase. In this case, the calculation in Equation 1 is repeated according to the change in the quantity and quality of the data, and the value will change accordingly.
To prevent non-expert annotators from outnumbering experts and thereby producing an incorrect ground truth, we introduced the AI as a quality controller. According to the AI result, we set a temporary ground truth and assumed that the AI performs better than a random choice (i.e., coin tossing). We calculated Cohen's kappa values between the AI and each annotator. If this value was greater than 0.05, we assumed that the annotator has better prediction power than random selection. In this example, all annotators performed better than the threshold on each label, so we used all labels for the reward calculations.
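The Cohen's kappa screening described above can be sketched as follows. This is a minimal implementation for binary labels; the 0.05 threshold follows the text, while the function names are hypothetical:

```python
def cohens_kappa(a: list[int], b: list[int]) -> float:
    """Cohen's kappa between two raters' label sequences."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    categories = set(a) | set(b)
    # Chance agreement from each rater's label marginals.
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

def passes_quality_check(ai_labels: list[int],
                         annotator_labels: list[int],
                         threshold: float = 0.05) -> bool:
    """Keep an annotator's labels only if agreement with the AI exceeds chance."""
    return cohens_kappa(ai_labels, annotator_labels) > threshold
```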
We tested the Panacea blockchain in our implementation, which is developed on top of the Cosmos SDK and the Tendermint framework, but the collaborative annotation platform can be integrated with any framework that supports blockchain implementations. Interacting with the blockchain can be executed through the RESTful API and the command-line interface (CLI) for a Go (programming language) application. In our experiments, we saved user activity as hashed information in separate transactions on the blockchain. Using this information, we calculated the dataset's value and estimated rewards for annotators via blockchain currency to facilitate accurate annotation. Additional applications and services can be built on top of this information.
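Saving user activity as hashed information can be sketched as follows. This is illustrative only; the actual Panacea transaction format and field names are not shown here:

```python
import hashlib
import json

def hash_activity(activity: dict) -> str:
    """Hash a user-activity record for storage as a transaction payload.

    Keys are sorted so the same activity always yields the same hash,
    which lets anyone later verify the record against the ledger.
    """
    canonical = json.dumps(activity, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

tx_hash = hash_activity(
    {"user": "annotator_A", "action": "annotate", "image": "cxr-001"}
)
```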
In the medical imaging field, annotation tasks require a trained radiologist's expertise, and even for simple tasks, crowdsourced annotations can be noisy or inaccurate. A successful crowdsourcing platform's benefits include, e.g., faster production of high-quality labeled datasets, a more economical cost of obtaining annotations on large datasets, and accelerated development of ML/AI for multiple medical imaging tasks.
The collaborative annotation platform described herein allows researchers and commercial vendors to accelerate the annotation and development of medical imaging datasets.
Our study confirms that quick annotation of large-scale images is possible in the above-mentioned platform. Results show high variability of the annotation speed between readers, which may help determine annotator engagement in the process.
Accordingly, embodiments described herein provide the development of a zero-footprint, web-based tool that is easy to implement on both local and global scales. Embodiments described herein suggest using a minimal number of features for the user-friendliness of the interface. Furthermore, annotators can optionally be provided with additional clinical information, including, e.g., radiological reports or patient history, in order to improve annotation accuracy. High-reliability annotation results can be obtained by providing annotators with information comparable to the real clinical environment. Providing this additional context also helps combat the widely known problem of “soft ground truth,” since imaging may not always correlate with hidden clinical information.
The embodiments described herein attempt to overcome previous research limitations, presenting a robust platform dedicated to CNN-based ML/AI research in medical imaging. Connecting multiple RESTful services allows this platform to scale and rapidly increase further functionality.
As noted above, in some embodiments, the collaborative annotation platform supports classification and object detection. However, the collaborative annotation platform may support additional, different, or fewer medical imaging tools or functions. As one example, the collaborative annotation platform may support image segmentation, including three-dimensional (“3D”) segmentation (e.g., via tools such as 3D Slicer). Additionally, the collaborative annotation platform can support various medical imaging data and data types, including, e.g., volumetric images, non-image DICOM instances (such as DICOM-SEG or DICOM-SR), NIfTI, NRRD, and the like.
As noted above, blockchain can enable a free and fair data exchange system among researchers. Currently, most data is owned by healthcare providers, major national research institutes, and other large institutions. Ultimately, sharing all data without condition would help create new value. Still, it will not be easy to actively share data without compensation for intellectual labor, such as annotation and curation, that reflects the benefit to the institution that owns the data. Accordingly, the collaborative annotation platform described herein can leverage blockchain's potential to reflect the value of the data and use it as currency for data transactions (e.g., the exchange of data among researchers). The issuance of currencies with a specific purpose for data exchange may also help as a means of data acquisition for artificial intelligence development while being less likely to cause ethical problems related to data ownership.
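The purpose-specific currency described above can be sketched as a simple token ledger that credits annotators for intellectual labor and debits researchers who acquire data. This is a minimal illustrative sketch, assuming hypothetical names (`TokenLedger`, `credit`, `debit`); the disclosure does not specify the actual blockchain or token mechanics.

```python
from dataclasses import dataclass, field

@dataclass
class TokenLedger:
    """Hypothetical in-memory ledger; a real deployment would back this with a blockchain."""
    balances: dict = field(default_factory=dict)

    def credit(self, account: str, amount: int) -> None:
        """Credit tokens, e.g., as compensation for annotation or curation work."""
        self.balances[account] = self.balances.get(account, 0) + amount

    def debit(self, account: str, amount: int) -> bool:
        """Debit tokens, e.g., to pay for access to a curated dataset."""
        if self.balances.get(account, 0) < amount:
            return False  # insufficient balance; transaction rejected
        self.balances[account] -= amount
        return True

ledger = TokenLedger()
ledger.credit("annotator_a", 50)      # reward for annotation work
ok = ledger.debit("annotator_a", 20)  # spend tokens on dataset access
```

In this sketch, tokens earned by annotating can only be spent within the data-exchange system, mirroring the "specific purpose" currency described above.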
As illustrated in
After generating or defining a collaborative annotation project, the user may request (or invite) one or more annotators to interact with the collaborative annotation project (including the one or more medical images included therein). Accordingly, in some embodiments, the electronic processor 200 may generate and transmit one or more requests for annotations (e.g., annotation data) from one or more annotators (e.g., annotators specified by the project manager 310). The one or more requests may be transmitted over the communication network 130 to a user device associated with an annotator (e.g., the user device 110). In response to receiving the request, the annotator may annotate the at least one medical image.
The electronic processor 200 can receive crowdsourced annotations associated with the at least one medical image from a set of annotators (at block 1310). The crowdsourced annotations may include annotation data (e.g., user activity with the at least one medical image) provided by each annotator in a dispersed group of annotators. Accordingly, the crowdsourced annotations may include, e.g., a set of classification labels, a set of object detection labels, other annotation-related data, etc. As noted above, in some embodiments, annotators may indicate a confidence level or metric for each annotation they provide. Accordingly, in such embodiments, the crowdsourced annotations may include a set of confidence metrics, each indicating a confidence level of a corresponding annotator in an associated crowdsourced annotation.
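The crowdsourced annotation payload described above can be represented as a simple record per annotator. The field names below (`Annotation`, `label`, `confidence`) are illustrative assumptions, not the platform's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    """Hypothetical record for one annotator's response to one image."""
    annotator_id: str
    image_id: str
    label: str         # classification label or object-detection class
    confidence: float  # annotator's self-reported confidence in [0, 1]

# A dispersed group of annotators labeling the same image:
crowdsourced = [
    Annotation("reader_1", "img_001", "pneumonia", 0.9),
    Annotation("reader_2", "img_001", "pneumonia", 0.7),
    Annotation("reader_3", "img_001", "normal", 0.4),
]
```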
In response to receiving the crowdsourced annotations (at block 1310), the electronic processor 200 may evaluate the crowdsourced annotations (at block 1315). As described in greater detail above, the electronic processor 200 may evaluate the crowdsourced annotations to determine a value associated with the annotation data, a contribution of the annotator, an accuracy of the annotation data, etc.
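One plausible evaluation strategy for block 1315 (an assumption for illustration, not necessarily the platform's actual method) is confidence-weighted voting to form a consensus label, then scoring each annotator's contribution by agreement with that consensus.

```python
from collections import defaultdict

def evaluate(annotations):
    """annotations: list of (annotator_id, label, confidence) tuples.

    Returns (consensus_label, per-annotator contribution scores)."""
    weights = defaultdict(float)
    for _, label, conf in annotations:
        weights[label] += conf  # weight each vote by reported confidence
    consensus = max(weights, key=weights.get)
    # Contribution proxy: 1.0 if the annotator agreed with the consensus.
    contributions = {annotator: (1.0 if label == consensus else 0.0)
                     for annotator, label, _ in annotations}
    return consensus, contributions

consensus, contributions = evaluate([
    ("reader_1", "pneumonia", 0.9),
    ("reader_2", "pneumonia", 0.7),
    ("reader_3", "normal", 0.4),
])
```

Weighting by self-reported confidence is one way to reconcile the high inter-observer variability noted earlier; richer schemes (e.g., weighting by an annotator's historical accuracy) would slot into the same interface.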
In some embodiments, the electronic processor 200 determines a digital reward for each annotator based on the evaluation of the crowdsourced annotations, as described in greater detail above. In some embodiments, the electronic processor 200 determines the digital reward as a blockchain cryptocurrency.
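A minimal sketch of the reward step, assuming a reward proportional to the annotator's contribution score; the actual reward schedule and cryptocurrency mechanics are implementation details the disclosure leaves open.

```python
def digital_reward(contribution: float, base_tokens: int = 10) -> int:
    """Map a contribution score in [0, 1] to a token amount (hypothetical schedule)."""
    return round(base_tokens * contribution)

# An annotator who matched the consensus earns the full base reward;
# one who did not earns nothing under this simple schedule.
full = digital_reward(1.0)
none = digital_reward(0.0)
```

The resulting token amounts would then be credited on the blockchain ledger described above.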
After evaluating the crowdsourced annotations (at block 1315), the electronic processor 200 can generate an annotation record based on the evaluation of the crowdsourced annotations (at block 1320). As described in greater detail above, the annotation record can be used as training data by the learning engine 265 for training one or more machine learning and/or artificial intelligence models associated with performing a medical imaging function. In some embodiments, the electronic processor 200 stores the annotation record. The electronic processor 200 can store the annotation record locally (e.g., in the memory 205, such as in the electronic ledger 280), as described in greater detail above. Alternatively or in addition, the electronic processor 200 can store the annotation record remotely, such as, e.g., in the annotation record database 120.
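An annotation record serving both roles above (training data and ledger entry) can be sketched as follows. The field names are assumptions for illustration; the content hash makes the record tamper-evident when chained into a ledger.

```python
import hashlib
import json

def make_record(image_id: str, consensus_label: str, contributions: dict) -> dict:
    """Build a hypothetical annotation record with a content hash for ledger storage."""
    record = {
        "image_id": image_id,
        "label": consensus_label,        # training target for the learning engine
        "contributions": contributions,  # per-annotator evaluation results
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

record = make_record("img_001", "pneumonia", {"reader_1": 1.0, "reader_3": 0.0})
```

The same record can be written locally (e.g., to an electronic ledger) or to a remote database; the hash allows either store to detect tampering.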
Thus, the present disclosure provides systems and methods for providing a collaborative artificial intelligence annotation platform leveraging blockchain for medical imaging research.
The present technology has been described in terms of one or more preferred embodiments, and it should be appreciated that many equivalents, alternatives, variations, and modifications, aside from those expressly stated, are possible and within the scope of the invention.
This application claims priority to U.S. Provisional Application No. 63/184,175 filed May 4, 2021, the entirety of which is incorporated herein by reference.
| Filing Document | Filing Date | Country | Kind |
|---|---|---|---|
| PCT/US2022/072103 | 5/4/2022 | WO | |
| Number | Date | Country | |
|---|---|---|---|
| 63184175 | May 2021 | US | |