This invention relates to systems, methods and analysis pipelines for species detection based on DNA and RNA fragments.
The advent of next generation sequencing (NGS) technology has revolutionized all aspects of biological and medical research, leading to the emergence of several previously inconceivable cutting-edge fields and applications. Among these is the study of microbiomes—ubiquitous microbial communities consisting of prokaryotic, eukaryotic, and viral organisms invisible to the naked eye. Studies of microbial communities have revealed a previously underappreciated diversity and abundance of microbial organisms in all kinds of sampled places, unraveling intricate relationships between biotic and abiotic factors in macro- and micro-ecosystems, among which the most prominent is the human body. Studies of the human microbiome (the microbial communities living on and in humans) have revealed correlations with a growing list of health problems: allergies, asthma, multiple sclerosis, diabetes (types 1 and 2), obesity, autism, inflammatory bowel diseases, and even certain types of cancer. These discoveries have led to the emergence of personalized microbiome treatments that aim to restore or harness microbial communities for the benefit of patients. Despite these remarkable advances, microbiome studies are generally lacking in at least two major aspects:
1. The Lack of Sensitivity in Rigorously Analyzing Low-Abundance Samples Collected from Air, Clear Liquids, and Certain Clinical Sources.
This is due to ubiquitous DNA/RNA contamination introduced during sample preparation, a problem commonly encountered in the field of forensic science. A rigorous decontamination protocol and careful experimental design are required to achieve higher detection sensitivity and to avoid false-positive detections.
2. The Neglect of Other Types of Organisms that Co-Exist with Bacteria in Microbial Communities.
Non-bacterial microbial organisms include fungi, protozoa, plant pollens, viruses (especially RNA viruses), and traces of small insects, invertebrate parasites, and the like. These non-bacterial organisms are known to be critical to human health and the environment. For example, fungal spores and plant pollens can cause allergic reactions and asthma. Viruses (especially RNA viruses), fungi, protozoa, insects, and other human parasites can strongly impact human health in numerous ways (for example, Zika virus and the malaria parasite are carried by mosquitoes).
In this invention we aimed to address these two aspects concurrently and advance the art with an ultra-sensitive pipeline for universal species detection.
Provided is a presumption-free pipeline that employs experimental and analytic modules to profile samples, including clinical samples, regardless of their complexity and abundance, with unparalleled detection sensitivity down to the single microbial cell level (a single microbial cell is roughly 1/500 of a typical human cell in size and 1/1000 in nucleic acid content).
This invention pertains to a computer-implemented software method, organized as a pipeline, that includes a fully custom-built genomic database and its accompanying taxonomy database. The pipeline uses the known search algorithm BLASTN to search DNA fragments against the custom-built genomic database, and then uses our implementation of the lowest common ancestor (LCA) algorithm, together with the taxonomy database, to classify the fragments.
We have developed a streamlined procedure to process any sample for ultra-sensitive sequencing analysis. Starting from any sample that contains the microbial communities of interest, our experimental pipeline can efficiently break down bacterial, fungal, plant, and animal cells, even when embedded in other scaffolds such as soil, human feces, or filters.
The pipeline allows concomitant extraction of DNA and RNA from a single sample. All reagents go through a thorough decontamination procedure to ensure that minimal foreign contaminating DNA/RNA is introduced. Based on the yield of the extraction step, we include an optional amplification step for both DNA and RNA. Specifically, for DNA, we perform isothermal multiple displacement amplification (MDA) adapted from single-cell studies. For RNA, we perform isothermal RNA linear amplification coupled with rRNA depletion. This is vastly superior to the conventional mRNA-fishing approach that uses the poly-A tail as bait, because viral RNA (genomic vRNA) does not have a poly-A tail. Finally, the DNA and the cDNA converted from RNA are subjected to an automatable single-tube protocol for efficient library preparation for a next generation sequencing (NGS) platform, and the sequencing results are fed into our analytical module.
Our analytical module is implemented as a computational pipeline that performs deduplication, quality control, in silico decontamination, assembly, and taxonomy classification. The taxonomy classification is achieved with the fully custom-built DARWIN database, the accompanying taxonomy database, and our implementation of the lowest common ancestor (LCA) algorithm. The choice of database alone is the most important step in any taxonomy-classification study, because it is much harder, if not impossible, to classify species that are simply not included in the database (or worse yet, they may be misclassified). For these reasons, our DARWIN database surveys a broad spectrum of organisms spanning all domains of life. To compensate for the potentially long computation time caused by the inclusiveness of the database, the analytic module includes three searching algorithms with different trade-offs between time and sensitivity. In addition, we include a continue option for the CPU-intensive database searching step so the user can resume this process in the event of an unexpected interruption. Finally, the CPU-intensive database searching step is deployable on cloud computing platforms such as Google Cloud through virtual system encapsulations (Docker images), to assist institutions and individuals who do not have access to the cluster computing environment on which the analytic pipeline was originally developed.
It should be noted that our experimental and analytic modules can work independently of each other if the user so desires. The experimental module for ultra-sensitive DNA/RNA extraction and sequencing can be used to extract information from any sample and feed it into analytical pipelines chosen by the user. Alternatively, the analytic module for universal species detection can be fed with data generated by other experimental pipelines and different sequencing platforms.
Our ultra-sensitive and universal species detection pipeline has very broad applications, extending well beyond its original intended purpose of studying the human and environmental microbiome. In fact, since our database surveys all domains of life, this pipeline is suitable for analyzing extremely diverse biological samples:
Some outstanding examples are:
We attribute the following advantages to this invention:
1. The ability to extract nucleotide information from very low abundance samples (on the order of 10^1 bacterial cells), owing to our strict decontamination protocols and unbiased amplification protocols.
2. The ability to classify species spanning all domains of life (broad-range detection of highly diverse samples). Previous efforts usually focus only on a sub-domain of life, mostly bacteria, viruses, and sometimes fungi.
We could adapt our experimental pipeline to clinical samples in which human tissue is dominant. In addition, our database is continually updated and curated to cover all domains of life. Finally, a visualization module can be developed for the taxonomy report using the open-source statistical software R.
In one embodiment, the invention is a detection pipeline with the following steps:
Deduplication, Quality Control, In silico decontamination, Assembly and Taxonomy classification, all implemented by software on a computer system or one or more computer processors. The steps can be regarded as computer-implemented steps executable on and by a computer system.
For Deduplication, the input to the pipeline is raw sequencing reads in fastq format from the sequencing platforms. The deduplication action or process removes exact paired-duplicate reads from the data. The sequence of each read pair is hashed directly and compared to speed up the process. The output of the action or process is de-duplicated sequencing reads in fastq format.
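By way of illustration, the hashing-based deduplication described above could be sketched in Python as follows; the function name, file handling, and hash choice are illustrative assumptions rather than the pipeline's exact implementation.

```python
# Illustrative sketch of reference-free, hash-based deduplication of paired reads.
# Function and file names are hypothetical; the production implementation may differ.
import hashlib

def deduplicate_pairs(r1_in, r2_in, r1_out, r2_out):
    seen = set()
    with open(r1_in) as f1, open(r2_in) as f2, \
         open(r1_out, "w") as o1, open(r2_out, "w") as o2:
        while True:
            rec1 = [f1.readline() for _ in range(4)]   # one fastq record = 4 lines
            rec2 = [f2.readline() for _ in range(4)]
            if not rec1[0] or not rec2[0]:
                break
            # Hash the concatenated R1+R2 sequences; exact paired duplicates collide.
            key = hashlib.md5((rec1[1] + rec2[1]).encode()).hexdigest()
            if key not in seen:
                seen.add(key)
                o1.writelines(rec1)
                o2.writelines(rec2)

deduplicate_pairs("R1.fastq", "R2.fastq", "R1.dedup.fastq", "R2.dedup.fastq")
```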
For Quality control, the input is the de-duplicated sequencing reads in fastq format. The quality control action or process uses, e.g., the software Trim_galore, which removes any remaining sequencing adapters and low-quality bases from the 5′ and 3′ ends. Trimmed reads shorter than 30 bp are removed altogether. The output of the action or process is de-duplicated, trimmed, high-quality reads in fastq format.
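By way of illustration, this step could be driven from Python roughly as shown below; the Trim_galore options (paired mode and the 30 bp length cut-off) follow the description above, and the file names are placeholders.

```python
# Hypothetical wrapper around Trim Galore; the 30 bp cut-off mirrors the step described above.
import subprocess

def run_trim_galore(r1, r2, outdir="trimmed"):
    subprocess.run(
        ["trim_galore", "--paired", "--length", "30", "--output_dir", outdir, r1, r2],
        check=True,
    )

run_trim_galore("R1.dedup.fastq", "R2.dedup.fastq")
```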
For In silico decontamination, the input is de-duplicated, trimmed, high-quality reads in fastq format. In the in silico decontamination action or process, the processed reads are mapped to the human reference genome (hg19 version) by, e.g., the bwa-mem algorithm. Reads mapped to the human reference genome are removed from the sequencing data. The output of the action or process is de-duplicated, trimmed, non-human reads in fastq format.
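By way of illustration, the mapping-and-removal step could be sketched as follows, assuming a pre-built hg19 BWA index and samtools for extracting read pairs in which neither mate maps to the human genome; the exact commands used by the pipeline may differ.

```python
# Hypothetical dehumanization step: map to hg19 with bwa-mem, keep pairs where
# neither mate aligns to the human reference (samtools flag 12 = both mates unmapped).
import subprocess

def remove_human_reads(r1, r2, hg19_index="hg19.fa", threads=8):
    bwa = subprocess.Popen(
        ["bwa", "mem", "-t", str(threads), hg19_index, r1, r2],
        stdout=subprocess.PIPE,
    )
    subprocess.run(
        ["samtools", "fastq", "-f", "12",
         "-1", "nonhuman_R1.fastq", "-2", "nonhuman_R2.fastq", "-"],
        stdin=bwa.stdout, check=True,
    )
    bwa.stdout.close()
    bwa.wait()

remove_human_reads("R1.trimmed.fastq", "R2.trimmed.fastq")
```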
For Assembly, the input is de-duplicated, trimmed, non-human reads in fastq format. In the assembly action or process, the processed reads are assembled de novo using megahit with the metagenome-sensitive preset. The cut-off for DNA contigs is 300 bp, and 200 bp for RNA contigs; anything shorter than the cut-off is removed. The output of the action or process is the contigs assembled from the input reads.
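By way of illustration, the assembly step could be invoked roughly as shown below; the preset and minimum contig length follow the description above, and the file names are placeholders.

```python
# Hypothetical invocation of MEGAHIT with the metagenome-sensitive preset.
# The 300 bp minimum applies to DNA contigs; 200 bp would be used for RNA.
import subprocess

def assemble(r1, r2, outdir="assembly", min_contig=300):
    subprocess.run(
        ["megahit", "-1", r1, "-2", r2,
         "--presets", "meta-sensitive",
         "--min-contig-len", str(min_contig),
         "-o", outdir],
        check=True,
    )

assemble("nonhuman_R1.fastq", "nonhuman_R2.fastq")
```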
For Taxonomy classification, the input is the assembled contigs. In the taxonomy classification action or process, the assembled contigs are searched against a custom-built database that covers all kingdoms of life, using, e.g., the BLASTN algorithm. A wrapper was written to introduce the continue option and to examine the integrity of the BLASTN results. The BLASTN results are parsed using a custom-implemented LCA algorithm to achieve a balance between sensitivity and specificity of classification. The results are further parsed by a custom-written taxonomy report script, which generates taxonomy abundance information at all taxonomy levels, in addition to listing species separately for each kingdom of life. Finally, the identity of contigs to reference genomes is retained and displayed at the species level to support confidence in the taxonomy assignment. The output of the action or process is the BLASTN results, the LCA results, and the taxonomy results.
In one embodiment, the invention is an experimental pipeline for pan-domain species nucleotide extraction and next generation sequencing library preparation. In this pipeline, the following steps are included:
(Figure, right: the approximate amounts of DNA and RNA in a typical E. coli cell compared with a typical mammalian cell (HeLa).)
Next-generation sequencing (NGS), also known as high-throughput sequencing, is a catch-all term used to describe a number of different modern sequencing technologies including:
These technologies allow us to sequence DNA and RNA much more quickly and cheaply than the previously used Sanger sequencing, which is why they are called "next generation sequencing". This massively parallel sequencing technology has revolutionized the biological sciences: with its ultra-high throughput, scalability, and speed, NGS enables researchers to perform a wide variety of applications and to study biological systems at a level never before possible.
Ultra-sensitive is a term relevant to the experimental part of the invention, where we show that the pipeline is able to extract sufficient information from as few as 10 bacterial cells and 200 viral particles.
Universal is a term relevant to our custom-built databases, which aim to characterize species from all kingdoms of life, including, but not limited to, bacteria, fungi, viruses, plants, animals, archaea, etc.
The experimental pipeline of this invention is unique in that it is adapted to single-cell-level amounts of nucleic acid material from a mixture of diverse organisms. It is noted that it also works if more material is provided. The details of the steps of the pipeline are provided in the Experimental Protocols section. Traditionally, single-cell experiments have been carried out only in mammalian or bacterial cells, where one or a few cells of the same species are processed at a time. Our experimental pipeline, in contrast, aims to process a diverse mixture of organisms present in a very small amount of material (equivalent to or less than 1,000 microbial cells, which is roughly the amount of material in a single mammalian cell). This seemingly contradictory situation requires novel experimental and analytical techniques to faithfully deconvolve the population structure. Therefore, preserving the signatures of diverse organisms while reducing the impact of contamination from human or reagent sources becomes a paramount task.
To this end, we employ rigorous reagent selection and specific in-lab decontamination protocols. Specifically, we tested the majority of commercially available microbiome extraction kits and adopted the one with the following two traits: (1) it efficiently breaks the cells of diverse organisms and releases their nucleic acid contents, and (2) it shows high reproducibility and minimal material loss when only a small number of cells is provided (according to the supplier, using their kit with such a small number of cells had never been done and was considered impossible). Upon receiving the extraction materials, we aliquot all reagents that do not contain enzymes into 1.5 ml plastic tubes and place them about 3 cm from the 254 nm UV radiation source inside a commercial Stratalinker 2400 UV crosslinker for 30 minutes (4000 mW/cm2). The amount of UV energy delivered is at least twice that required to break at least 99.9% of contaminating nucleic acids in the reagents into fragments shorter than 73 bp (PLOS ONE), which should have minimal impact on the downstream amplification and library preparation steps. In addition, all personnel are required to wear long-sleeve lab coats and face masks and to work in a physically separated, designated clean hood when performing the extraction process, to minimize human contamination. In a possible variation, the exact amount of UV exposure and the volume of each aliquot can be adjusted for larger-scale operations.
The successful outcome of decontamination is reflected in the qPCR quantitation results (
The amounts of DNA and RNA extracted from our samples are usually so low that instruments such as NanoDrop and Qubit cannot measure them. Thus, the second technological hurdle to overcome is to amplify the nucleic acids to a level at which sequencing libraries can be prepared. Commercially available next generation sequencing (NGS) library preparation kits require a minimum of 1 ng of input, which is approximately 1000× more than the amount we obtain from extraction. To this end, we utilize a single-cell multiple displacement amplification (MDA) kit to amplify DNA. For RNA, a single-primer isothermal amplification kit specifically designed to amplify all non-rRNA is used. Because most RNA amplification kits are tailored to mRNA and selectively enrich RNA that contains a poly-A tail, they are unsuitable for our case: almost all bacterial and viral RNA lacks a poly-A tail and therefore would not be amplified. Thus, selecting the broad amplification of all non-rRNA is important and preserves the complex community structures of our samples. Following amplification, the DNA and cDNA are converted into sequencing libraries using commercially available kits for next generation sequencing (NGS).
To test the sensitivity of our pipeline, we titrated an E. coli culture down to 1000, 100, and 10 cells and extracted these samples using our pipeline, along with a blank control to monitor the contamination background. The results show that our pipeline can accurately detect down to at least 10 E. coli cells in a sample (
We also precisely evaluated the actual amount of nucleic acid content in situations where extremely small amounts of sample are collected (samples collected with a personal device as disclosed in U.S. Provisional Applications 62/488,256 filed on Apr. 21, 2017 and 62/617,471 filed on Jan. 15, 2018). To gather samples for this part, we used a commercial RTI device intended to collect pollutants on a filter through active air sampling and to measure them using mass spectrometry. Adapting this strategy, we instead extract the biological contents from the filters using our pipeline. To our knowledge, there is no direct method to reliably measure nucleic acid amounts at sub-picogram (<10^-12 g) levels, so we resort to amplification and sequencing. Prior to DNA amplification, a known amount of E. coli phage PhiX174 (5 pg, 500 fg, and 50 fg) is spiked into our sample (in triplicate). The spike-in serves as a "ballpark estimate" of the amount of material initially present. Since our protocol uses random amplification, it is reasonable to assume that the final amount ratio between our sample and PhiX174 reflects the actual amount collected. Post-sequencing, the sequencing reads are mapped to the human and PhiX174 genomes. Sequencing reads that are non-human and non-PhiX174 are labeled as "others". The number of reads in each category is represented as a percentage of the total reads (
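The spike-in estimate reduces to simple arithmetic: if the sample and the PhiX174 spike-in are amplified with comparable efficiency under random priming, the ratio of their read counts approximates the ratio of their starting amounts. A minimal sketch, with hypothetical read counts, is shown below.

```python
# Ballpark estimate of starting material from a known PhiX174 spike-in.
# Assumes random amplification treats sample and spike-in equally.
def estimate_input_mass(sample_reads, phix_reads, phix_spike_in_fg):
    return phix_spike_in_fg * sample_reads / phix_reads

# Hypothetical example: 120,000 "others" reads vs 400,000 PhiX reads, 500 fg spike-in.
print(estimate_input_mass(120_000, 400_000, 500.0), "fg of non-human, non-PhiX material")
```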
Last but not least, with our rigorous optimizations, our pipeline is highly reproducible. This is demonstrated by our results, in which the extraction and processing of two air samples collected side by side show a correlation coefficient of up to 0.9 at the species level (
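Such a reproducibility check can be expressed as a simple correlation of species-level relative abundances between the two replicates; the sketch below uses placeholder abundance tables and assumes the SciPy library.

```python
# Species-level reproducibility check between two side-by-side samples.
# Abundance tables are hypothetical; the pipeline's actual report format may differ.
from scipy.stats import pearsonr

sample_a = {"Escherichia coli": 0.40, "Aspergillus niger": 0.25, "Cutibacterium acnes": 0.35}
sample_b = {"Escherichia coli": 0.38, "Aspergillus niger": 0.27, "Cutibacterium acnes": 0.35}

species = sorted(set(sample_a) | set(sample_b))
r, _ = pearsonr([sample_a.get(s, 0.0) for s in species],
                [sample_b.get(s, 0.0) for s in species])
print(f"species-level correlation coefficient: {r:.2f}")
```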
The analytical pipeline, or Universal Fragment Classification (UFC) pipeline, is a collection of scripts written in shell and python (
Deduplication—
Amplified DNA or RNA samples frequently suffer from data quality issues in which abnormally high coverage of certain regions of the genome/transcriptome is observed. This is due to the technical nature of amplification techniques. Conventional approaches first map reads to reference genomes and use the mapping coordinates to determine whether they are duplicates. While memory-efficient, this approach is impossible for most microbiome research because such reference genomes simply do not exist. Therefore, a reference-free deduplication method is implemented in this pipeline. A possible variation is that the program can be rewritten in C++ for extremely large input sizes.
Trimming and Quality Control—
This step is carried out using the Trim_galore wrapper, which essentially combines the adapter removal tool Cutadapt and the NGS quality control tool FastQC.
Dehumanization—
This step is performed using the publicly available BWA-MEM algorithm to map all reads to the human reference genome. The purpose is to remove the human portion of the reads (which is always present when samples need to be amplified before library preparation, possibly originating from the sample handler) from the total reads so that the following assembly step is more efficient. A possible variation is that a different version of the human reference genome could be used, which may yield slightly different results.
De Novo Assembly—
This step can be executed either by Megahit or by SPAdes, both of which are popular de novo de Bruijn graph assemblers for short NGS reads. The purpose of this step is to assemble millions or more reads into separate, information-dense "contigs", similar to piecing jigsaw puzzle pieces together into bigger clusters. This is an essential step in the pipeline because of its role in data reduction and information retention, thereby increasing confidence in the subsequent taxonomy assignment (longer sequences give better confidence in assignment). A possible variation is that the choice of assembly algorithm and parameters is subject to change depending on the length of the reads.
Searching Against the Darwin Database—
This step is carried out using a BLASTN wrapper, which takes NCBI BLAST as its core and adds functionalities that are essential to the pipeline. The BLAST algorithm is selected for this purpose because it remains the most sensitive algorithm for identifying a given DNA/RNA sequence. Different BLAST algorithms can be specified by the user depending on the size of the input or the sensitivity requirements of the analysis.
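By way of illustration, the continue (resume) behavior of the wrapper could be sketched as follows: query contigs are split into chunks, each chunk is searched independently, and chunks whose output already exists are skipped on restart. The chunking scheme, file names, and blastn parameters shown are assumptions, not the wrapper's exact logic.

```python
# Illustrative BLASTN wrapper with a simple "continue" option: finished chunks
# are detected by the presence of a non-empty output file and skipped on resume.
import os
import subprocess

def blastn_with_resume(query_chunks, db="DARWIN", outdir="blast_out", threads=8):
    os.makedirs(outdir, exist_ok=True)
    for chunk in query_chunks:
        out = os.path.join(outdir, os.path.basename(chunk) + ".tsv")
        if os.path.exists(out) and os.path.getsize(out) > 0:
            continue  # already completed in a previous run
        subprocess.run(
            ["blastn", "-query", chunk, "-db", db,
             "-outfmt", "6", "-num_threads", str(threads), "-out", out],
            check=True,
        )

blastn_with_resume(["contigs.part1.fa", "contigs.part2.fa"])
```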
The choice of database(s) is the most crucial component when it comes to nucleic acid detection and classification. This is because alignment or mapping algorithms use these so-called reference sequences to identify reads or fragments. A poorly chosen database always leads to under-classification and sometimes even false classification. Unless the sequences are very similar, it is fundamentally impossible to identify a group of species that is not included in the database (for example, a bacterial database can hardly detect any fungi). Thus, for accurate identification of organisms, a broad database encompassing all domains of life is essential. In addition, the database needs to be carefully curated. Unfortunately, public databases are often non-curated, which often translates into redundant, low-quality, and sometimes contaminated data (especially in cases where one species lives within another). We have addressed these issues by creating the DARWIN database. This database is an extensively expanded version of the NCBI BLAST NT database, hosted by the National Center for Biotechnology Information (NCBI), which contains nucleic acid information representing all domains of life. However, unlike NCBI BLAST NT, which focuses more broadly on human-health-related organisms, DARWIN was created to better represent all domains of life (
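For reference, a nucleotide database of this general kind can be compiled from curated FASTA sequences with NCBI's makeblastdb utility; the command below is a generic illustration with a hypothetical input file, not the actual DARWIN build procedure.

```python
# Generic illustration of building a BLAST nucleotide database from curated FASTA;
# the real DARWIN build and its accompanying taxonomy mapping are more involved.
import subprocess

subprocess.run(
    ["makeblastdb",
     "-in", "darwin_sequences.fasta",   # hypothetical curated FASTA file
     "-dbtype", "nucl",
     "-parse_seqids",
     "-title", "DARWIN",
     "-out", "DARWIN"],
    check=True,
)
```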
Taxonomy Analysis with LCA Method—
The BLAST results from the previous step only provide an overview of what the sequences may be, in the form of a list of potential organisms ranked by a statistical measure called the e-value. However, consideration has been given to this process, and simply picking the hit with the best e-value is not robust enough. Instead, a phylogeny-inspired algorithm called the Lowest Common Ancestor (LCA) algorithm is preferred. In our analytical pipeline this algorithm is implemented with special consideration for certain domains of life that do not conform to the usual taxonomy database structures. Accompanying the DARWIN database, a DARWIN taxonomy database specific to the DARWIN database (and beyond) is also constructed. The goal of the taxonomy database is to provide a unique taxonomy label for each entry in the DARWIN database, which enables fast and accurate evaluation of taxonomy in the LCA step. In practice, a noticeable number of contigs can be unexpectedly assigned to species belonging to different domains of life at the same time, hinting at a possible contamination source even in well-curated databases. This conflict of assignment would go unnoticed if the database did not contain species from different domains of life. A possible variation is that the exact rule of assignment is modified depending on further optimizations.
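A minimal sketch of the LCA idea follows: the candidate taxids of a contig's retained BLAST hits are traced up a parent map derived from the taxonomy database, and the deepest node shared by all candidates is reported. The toy parent map below is an assumption for illustration; the pipeline's implementation additionally handles domains of life with non-standard taxonomy structures.

```python
# Simplified lowest-common-ancestor assignment over a taxid parent map.
# `parent` maps each taxid to its parent; the root points to itself.
def lineage(taxid, parent):
    path = [taxid]
    while parent[taxid] != taxid:
        taxid = parent[taxid]
        path.append(taxid)
    return path

def lowest_common_ancestor(taxids, parent):
    shared = set.intersection(*(set(lineage(t, parent)) for t in taxids))
    # The LCA is the shared node closest to the leaf in any one lineage.
    for node in lineage(taxids[0], parent):
        if node in shared:
            return node

# Toy taxonomy: 1=root, 2=Bacteria, 561=Escherichia, 562=E. coli, 620=Shigella
parent = {1: 1, 2: 1, 561: 2, 562: 561, 620: 2}
print(lowest_common_ancestor([562, 620], parent))  # -> 2 (Bacteria)
```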
Taxonomy Report and Abundance Estimation—
The inferred taxonomy results from the LCA step are compiled and displayed in a human-readable format. Specifically, the report follows the hierarchical taxonomy rank conventions of NCBI and displays the sequencing abundance of each taxonomy rank in aggregate. Abundance estimation is handled with two approaches, the median copy number of contigs assigned to each species and the aggregate sequencing amount, which reflect different focuses of the analysis. The final report also includes a special section in which species belonging to different domains of life are listed separately, so one can quickly inspect the domains of interest. A possible variation is to introduce a graphics module in which the results from this step are turned into standardized figures.
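The two abundance measures can be illustrated as follows; the per-contig depth values and species names are placeholders, and the actual report aggregates these figures across all taxonomy ranks.

```python
# Illustrative abundance estimates per species: median contig copy number
# (depth) versus aggregate sequencing amount (total aligned bases).
from statistics import median

# Hypothetical contig records: (species, contig_length_bp, mean_depth)
contigs = [
    ("Escherichia coli", 12000, 35.0),
    ("Escherichia coli",  8000, 41.0),
    ("Aspergillus niger", 5000,  6.5),
]

by_species = {}
for species, length, depth in contigs:
    by_species.setdefault(species, []).append((length, depth))

for species, recs in by_species.items():
    median_copy = median(d for _, d in recs)        # median copy number
    aggregate_bases = sum(l * d for l, d in recs)   # aggregate sequencing amount
    print(species, median_copy, aggregate_bases)
```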
UFC Pipeline Detects Significantly More Species than Conventional Methods
A side-by-side comparison shows that our analytical pipeline can identify a far larger portion of the sequencing information (53% versus 7% in the example provided; in cases where samples are dominated by plants, the contrast can be as drastic as 95% versus 3%) than conventional packages (the FCP package is compared here,
Furthermore, in a mock community in which we mixed a panel of 12 different pathogenic viruses with bacteria and yeasts, we could reliably detect almost all of the viruses in the mixture despite their genomes being extremely small compared to those of bacteria and yeasts (
The final demonstration of the pipeline's performance is the analysis of more than 100 actual samples as part of an academic study, in which different species covering all domains of life were detected over a wide range of abundances. Several opportunistic pathogens, and in one case even a parasite, were detected in the samples (
Simultaneous Biotic DNA and RNA Extraction—
Filters that captured the biotic samples were used for simultaneous DNA and RNA extraction through a combination and modification of the MO BIO PowerWater DNA and PowerWater RNA extraction kits. We altered the original protocols to allow extraction of DNA and RNA from the same sample.
Detailed extraction protocol is as follows:
Biotic DNA samples are amplified by isothermal multiple displacement amplification (MDA) using the QIAGEN REPLI-g Single Cell kit, with modifications.
Biotic RNA samples are linearly amplified with the Ovation RNA-Seq System V2 (NuGEN Technologies, Inc.), with modifications.
Step-1: First strand cDNA synthesis
Step-2: Second strand cDNA synthesis
Step-3: Double-stranded cDNA was purified with 1.4 volumes of Agencourt RNAClean XP beads.
Step-4: Purified cDNA was amplified by Single Primer Isothermal Amplification (SPIA).
Step-5: SPIA-amplified cDNA was purified with 0.8 volumes of AMPure XP beads.
DNA library preparation was carried out with the KAPA HyperPlus Kit (KAPA Biosystems, Wilmington, Mass.) according to the manufacturer's instructions with modifications. The detailed protocol is as follows:
cDNA library preparation was carried out with the KAPA HyperPlus Kit (KAPA Biosystems, Wilmington, Mass.) according to the manufacturer's instructions with modifications. The detailed protocol is as follows:
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US18/28542 | Apr. 20, 2018 | WO | 00
Number | Date | Country
---|---|---
62/488,119 | Apr. 2017 | US
62/488,256 | Apr. 2017 | US
62/617,471 | Jan. 2018 | US