The technology disclosed relates to artificial intelligence type computers and digital data processing systems and corresponding data processing methods and products for emulation of intelligence (i.e., knowledge based systems, reasoning systems, and knowledge acquisition systems); and including systems for reasoning with uncertainty (e.g., fuzzy logic systems), adaptive systems, machine learning systems, and artificial neural networks. In particular, the technology disclosed relates to using deep convolutional neural networks to analyze ordered data.
This application is related to US Nonprovisional patent application titled “QUALITY DETECTION OF VARIANT CALLING USING A MACHINE LEARNING CLASSIFIER” (Attorney Docket No. ILLM 1064-3/IP-2297-US2), filed contemporaneously. The related application is hereby incorporated by reference for all purposes.
This application is related to US Nonprovisional patent application titled “UNIQUE MAPPER TOOL FOR EXCLUDING REGIONS WITHOUT ONE-TO-ONE MAPPING BETWEEN A SET OF TWO REFERENCE GENOMES” (Attorney Docket No. ILLM 1064-4/IP-2297-US3), filed contemporaneously. The related application is hereby incorporated by reference for all purposes.
The following are incorporated by reference for all purposes as if fully set forth herein, and should be considered part of this provisional patent filing:
Sundaram, L. et al. Predicting the clinical impact of human mutation with deep neural networks. Nat. Genet. 50, 1161-1170 (2018);
Jaganathan, K. et al. Predicting splicing from primary sequence with deep learning. Cell 176, 535-548 (2019);
U.S. Patent Application No. 62/573,144, titled “TRAINING A DEEP PATHOGENICITY CLASSIFIER USING LARGE-SCALE BENIGN TRAINING DATA,” filed Oct. 16, 2017 (Attorney Docket No. ILLM 1000-1/IP-1611-PRV);
U.S. Patent Application No. 62/573,149, titled “PATHOGENICITY CLASSIFIER BASED ON DEEP CONVOLUTIONAL NEURAL NETWORKS (CNNs),” filed Oct. 16, 2017 (Attorney Docket No. ILLM 1000-2/IP-1612-PRV);
U.S. Patent Application No. 62/573,153, titled “DEEP SEMI-SUPERVISED LEARNING THAT GENERATES LARGE-SCALE PATHOGENIC TRAINING DATA,” filed Oct. 16, 2017 (Attorney Docket No. ILLM 1000-3/IP-1613-PRV);
U.S. Patent Application No. 62/582,898, titled “PATHOGENICITY CLASSIFICATION OF GENOMIC DATA USING DEEP CONVOLUTIONAL NEURAL NETWORKS (CNNs),” filed Nov. 7, 2017 (Attorney Docket No. ILLM 1000-4/IP-1618-PRV);
U.S. patent application Ser. No. 16/160,903, titled “DEEP LEARNING-BASED TECHNIQUES FOR TRAINING DEEP CONVOLUTIONAL NEURAL NETWORKS,” filed on Oct. 15, 2018 (Attorney Docket No. ILLM 1000-5/IP-1611-US);
U.S. patent application Ser. No. 16/160,986, titled “DEEP CONVOLUTIONAL NEURAL NETWORKS FOR VARIANT CLASSIFICATION,” filed on Oct. 15, 2018 (Attorney Docket No. ILLM 1000-6/IP-1612-US);
U.S. patent application Ser. No. 16/160,968, titled “SEMI-SUPERVISED LEARNING FOR TRAINING AN ENSEMBLE OF DEEP CONVOLUTIONAL NEURAL NETWORKS,” filed on Oct. 15, 2018;
U.S. patent application Ser. No. 16/160,978, titled “DEEP LEARNING-BASED SPLICE SITE CLASSIFICATION,” filed on Oct. 15, 2018 (Attorney Docket No. ILLM 1001-4/IP-1680-US);
U.S. patent application Ser. No. 16/407,149, titled “DEEP LEARNING-BASED TECHNIQUES FOR PRE-TRAINING DEEP CONVOLUTIONAL NEURAL NETWORKS,” filed May 8, 2019 (Attorney Docket No. ILLM 1010-1/IP-1734-US);
U.S. patent application Ser. No. 17/232,056, titled “DEEP CONVOLUTIONAL NEURAL NETWORKS TO PREDICT VARIANT PATHOGENICITY USING THREE-DIMENSIONAL (3D) PROTEIN STRUCTURES,” filed on Apr. 15, 2021, (Atty. Docket No. ILLM 1037-2/IP-2051-US);
U.S. Patent Application No. 63/175,495, titled “MULTI-CHANNEL PROTEIN VOXELIZATION TO PREDICT VARIANT PATHOGENICITY USING DEEP CONVOLUTIONAL NEURAL NETWORKS,” filed on Apr. 15, 2021, (Atty. Docket No. ILLM 1047-1/IP-2142-PRV);
U.S. Patent Application No. 63/175,767, titled “EFFICIENT VOXELIZATION FOR DEEP LEARNING,” filed on Apr. 16, 2021, (Atty. Docket No. ILLM 1048-1/IP-2143-PRV);
U.S. patent application Ser. No. 17/468,411, titled “ARTIFICIAL INTELLIGENCE-BASED ANALYSIS OF PROTEIN THREE-DIMENSIONAL (3D) STRUCTURES,” filed on Sep. 7, 2021, (Atty. Docket No. ILLM 1037-3/IP-2051A-US);
U.S. Provisional Patent Application No. 63/253,122, titled “PROTEIN STRUCTURE-BASED PROTEIN LANGUAGE MODELS,” filed Oct. 6, 2021 (Attorney Docket No. ILLM 1050-1/IP-2164-PRV);
U.S. Provisional Patent Application No. 63/281,579, titled “PREDICTING VARIANT PATHOGENICITY FROM EVOLUTIONARY CONSERVATION USING THREE-DIMENSIONAL (3D) PROTEIN STRUCTURE VOXELS,” filed Nov. 19, 2021;
U.S. Provisional Patent Application No. 63/281,592, titled “COMBINED AND TRANSFER LEARNING OF A VARIANT PATHOGENICITY PREDICTOR USING GAPED AND NON-GAPED PROTEIN SAMPLES,” filed Nov. 19, 2021 (Attorney Docket No. ILLM 1061-1/IP-2271-PRV).
The subject matter discussed in this section should not be assumed to be prior art merely as a result of its mention in this section. Similarly, a problem mentioned in this section or associated with the subject matter provided as background should not be assumed to have been previously recognized in the prior art. The subject matter in this section merely represents different approaches, which in and of themselves can also correspond to implementations of the claimed technology.
The explosion of available biological sequence data has led to multiple computational approaches that infer a protein's three-dimensional structure, biological function, fitness, and evolutionary history from sequence data. So-called protein language models, such as those based on the Transformer architecture, have been trained on large ensembles of protein sequences using the masked language modeling objective of filling in masked amino acids in a sequence, given the surrounding ones.
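By way of illustration only, the following minimal sketch shows the masking step behind this objective, assuming a toy protein sequence, a generic mask token, and a 15% masking rate (all illustrative choices rather than parameters of any particular model):

import numpy as np

MASK_TOKEN = "<mask>"

def mask_sequence(seq, mask_rate=0.15, rng=None):
    """Randomly replace a fraction of residues with a mask token.

    Returns the corrupted sequence and the masked positions/targets that a
    protein language model would be trained to recover (the masked language
    modeling objective).
    """
    rng = rng or np.random.default_rng()
    tokens = list(seq)
    n_mask = max(1, int(round(mask_rate * len(tokens))))
    positions = rng.choice(len(tokens), size=n_mask, replace=False)
    targets = {int(i): tokens[i] for i in positions}
    for i in positions:
        tokens[i] = MASK_TOKEN
    return tokens, targets

corrupted, targets = mask_sequence("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
print(corrupted)   # sequence with masked residues
print(targets)     # residues the model must predict from the surrounding context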
Protein language models capture long-range dependencies, learn rich representations of protein sequences, and can be employed for multiple tasks. For example, protein language models can predict structural contacts from single sequences in an unsupervised way.
Protein sequences can be classified into families of homologous proteins that descend from an ancestral protein and share a similar structure and function. Analyzing multiple sequence alignments (MSAs) of homologous proteins provides important information about functional and structural constraints. The statistics of MSA columns, which represent amino-acid sites, identify functional residues that are conserved during evolution. Correlations of amino acid usage between the MSA columns contain important information about functional sectors and structural contacts.
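By way of illustration only, the following sketch computes per-column conservation (via Shannon entropy) and a simple pairwise covariation signal (mutual information) from a toy alignment; the alignment and column indices are illustrative, not drawn from any disclosed data set:

import numpy as np
from collections import Counter

def column_entropy(column):
    """Shannon entropy of an MSA column; low entropy indicates a conserved site."""
    counts = np.array(list(Counter(column).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information(col_i, col_j):
    """Mutual information between two MSA columns; high values suggest covariation."""
    joint = Counter(zip(col_i, col_j))
    n = sum(joint.values())
    pi, pj = Counter(col_i), Counter(col_j)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        mi += p_ab * np.log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

# Toy MSA: four homologous sequences (rows) over five aligned sites (columns).
msa = ["MKVLA", "MKILA", "MRVLA", "MKVIA"]
columns = list(zip(*msa))
for idx, col in enumerate(columns):
    print(idx, round(column_entropy(col), 3))   # 0.0 at fully conserved sites
print(round(mutual_information(columns[1], columns[3]), 3))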
Language models were initially developed for natural language processing and operate on a simple but powerful principle: they acquire linguistic understanding by learning to fill in missing words in a sentence, akin to a sentence completion task in standardized tests. Language models develop powerful reasoning capabilities by applying this principle across large text corpora. The Bidirectional Encoder Representations from Transformers (BERT) model instantiated this principle using Transformers, a class of neural networks in which attention is the primary component of the learning system. In a Transformer, each token in the input sentence can “attend” to all other tokens by exchanging activation patterns corresponding to the intermediate outputs of neurons in a neural network.
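By way of illustration only, the attention operation referred to above can be summarized with the standard scaled dot-product form; the NumPy sketch below uses random toy activations and illustrates the mechanism rather than any particular trained model:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each query token attends to all key tokens; outputs are weighted sums of values."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)   # pairwise token-to-token affinities
    weights = softmax(scores, axis=-1)               # attention distribution per token
    return weights @ V, weights

rng = np.random.default_rng(0)
n_tokens, d_model = 6, 8
x = rng.normal(size=(n_tokens, d_model))             # stand-in for token embeddings
out, attn = scaled_dot_product_attention(x, x, x)    # self-attention: Q = K = V
print(attn.shape)   # (6, 6): how strongly each token attends to every other token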
Protein language models like the MSA Transformer have been trained to perform inference from MSAs of evolutionarily related sequences. The MSA Transformer interleaves per-sequence (“row”) attention with per-site (“column”) attention to incorporate coevolution. Combinations of row attention heads in the MSA Transformer have led to state-of-the-art unsupervised structural contact predictions.
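By way of illustration only, the row/column factorization can be sketched by applying the same self-attention operation first along the site axis within each sequence (row attention) and then along the sequence axis within each site (column attention) of an MSA-shaped activation tensor; this is a schematic of the factorization, not the MSA Transformer implementation itself:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Scaled dot-product self-attention over the second-to-last axis of x."""
    d = x.shape[-1]
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(d)
    return softmax(scores, axis=-1) @ x

rng = np.random.default_rng(0)
n_seqs, n_sites, d_model = 4, 10, 8
msa_embeddings = rng.normal(size=(n_seqs, n_sites, d_model))

# Row attention: within each sequence, each position attends across sites.
row_out = self_attention(msa_embeddings)
# Column attention: transpose so that, within each site, positions attend across sequences.
col_out = self_attention(row_out.swapaxes(0, 1)).swapaxes(0, 1)
print(col_out.shape)   # (4, 10, 8): MSA-shaped tensor after interleaved attention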
End-to-end deep learning approaches for variant effect prediction have been applied to predict the pathogenicity of missense variants from protein sequence and sequence conservation data (see Sundaram, L. et al. Predicting the clinical impact of human mutation with deep neural networks. Nat. Genet. 50, 1161-1170 (2018), referred to herein as “PrimateAI”). PrimateAI uses deep neural networks trained on variants of known pathogenicity, with data augmentation using cross-species information. In particular, PrimateAI uses sequences of wild-type and mutant proteins to compare the difference and decide the pathogenicity of mutations using the trained deep neural networks. Such an approach, which relies on protein sequences for pathogenicity prediction, is promising because it can avoid the problems of circularity and overfitting to previous knowledge. However, the amount of clinical data available in ClinVar is small relative to the amount of data needed to train deep neural networks effectively. To overcome this data scarcity, PrimateAI uses common human variants and variants from primates as benign training data, while simulated variants based on trinucleotide context are used as unlabeled data.
PrimateAI outperforms prior methods when trained directly upon sequence alignments. PrimateAI learns important protein domains, conserved amino acid positions, and sequence dependencies directly from the training data consisting of about 120,000 human samples. PrimateAI substantially exceeds the performance of other variant pathogenicity prediction tools in differentiating benign and pathogenic de novo mutations in candidate developmental disorder genes, and in reproducing prior knowledge in ClinVar. These results suggest that PrimateAI is an important step forward for variant classification tools that may lessen the reliance of clinical reporting on prior knowledge.
Therefore, an opportunity arises to use protein language models and MSAs for variant pathogenicity prediction. More accurate variant pathogenicity prediction may result.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee. The color drawings also may be available in PAIR via the Supplemental Content tab.
In the drawings, like reference characters generally refer to like parts throughout the different views. Also, the drawings are not necessarily to scale, with an emphasis instead generally being placed upon illustrating the principles of the technology disclosed. In the following description, various implementations of the technology disclosed are described with reference to the following drawings, in which:
The following discussion is presented to enable any person skilled in the art to make and use the technology disclosed and is provided in the context of a particular application and its requirements. Various modifications to the disclosed implementations will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other implementations and applications without departing from the spirit and scope of the technology disclosed. Thus, the technology disclosed is not intended to be limited to the implementations shown but is to be accorded the widest scope consistent with the principles and features disclosed herein.
The detailed description of various implementations will be better understood when read in conjunction with the appended drawings. To the extent that the figures illustrate diagrams of the functional blocks of the various implementations, the functional blocks are not necessarily indicative of the division between hardware circuitry. Thus, for example, one or more of the functional blocks (e.g., modules, processors, or memories) may be implemented in a single piece of hardware (e.g., a general purpose signal processor or a block of random access memory, hard disk, or the like) or multiple pieces of hardware. Similarly, the programs may be stand-alone programs, may be incorporated as subroutines in an operating system, may be functions in an installed software package, and the like. It should be understood that the various implementations are not limited to the arrangements and instrumentality shown in the drawings.
The processing engines and databases of the figures, designated as modules, can be implemented in hardware or software, and need not be divided up in precisely the same blocks as shown in the figures. Some of the modules can also be implemented on different processors, computers, or servers, or spread among a number of different processors, computers, or servers. In addition, it will be appreciated that some of the modules can be combined, operated in parallel or in a different sequence than that shown in the figures without affecting the functions achieved. The modules in the figures can also be thought of as flowchart steps in a method. A module also need not necessarily have all its code disposed contiguously in memory; some parts of the code can be separated from other parts of the code with code from other modules or other functions disposed in between.
The technologies disclosed can be used to improve the quality of pathogenic variant calling. The technology disclosed can be used to improve the quality of variant calling in scenarios where desired reference genomes are unavailable. There are an estimated 8.7 million species worldwide, but very few have reference genome builds. In many scenarios, we need to perform variant calling in the absence of a reference genome build. In some instances, we could choose a closely-related species as a reference genome for variant calling, but this sometimes leads to many false positive calls. Thus, we developed various methods to reduce the false positives, including random forest classifiers, linear regression models, and neural network models. We also devised a unique-mapper score to identify regions that do not map one-to-one between the species, which further reduces variant calling errors.
Mapping sequenced reads from a Target Species 120 to a Pseudo-Target Reference Genome 142 detects a Second Set of Variants in the Sequenced Reads of the Target Species 144. The Pseudo-Target Reference Genome 142 is from a pseudo-target species other than the Target Species 120. In some implementations of the technology disclosed, the Pseudo-Target Reference Genome 142 is homologous with the genome of Target Species 120, as determined by a homology threshold (such as a percentage homology above 80%, 90%, or 95%, or a double-bounded range of acceptable homology percentages such as 85-90% or 80-89%). A homology threshold set to determine degree of homology between the pseudo-target species and target species may be the same as a homology threshold set to determine degree of homology between the non-target species and target species, or the respective homology thresholds may differ. In some embodiments, the homology threshold set to determine degree of homology between the non-target species and target species may be informed by the degree of homology between the pseudo-target species and target species, or vice versa. The Comparison 126 of the first set of variants and second set of variants identifies a subset of False Positive Variants 128 (i.e., overlapping variants identified by mapping to the Pseudo-Target Reference Genome 142 cannot be considered as reliable positive variants on the basis of homology when the variants are also identified by mapping to Non-Target Reference Genome 102).
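By way of illustration only, the comparison of the two variant sets can be sketched as set operations over variant keys, following the definitions used in the clauses below (true positives common to both sets; false positives present only in the set called against the pseudo-target reference). The sketch assumes variants have been lifted over to a shared coordinate system and are keyed by (contig, position, alternate allele); the feasibility threshold is an illustrative placeholder, not a disclosed parameter:

def compare_variant_sets(first_set, second_set, max_false_positive_fraction=0.05):
    """Compare variants called against the non-target reference (first_set) with
    variants called against the pseudo-target reference (second_set).

    True positives are common to both sets; false positives appear only in the
    second set. Feasibility is judged from the count of false positives.
    """
    true_positives = first_set & second_set
    false_positives = second_set - first_set
    total = len(second_set) or 1
    feasible = len(false_positives) / total <= max_false_positive_fraction
    return true_positives, false_positives, feasible

# Variants keyed as (contig, position, alternate allele) -- illustrative values only.
first = {("chr1", 101, "A"), ("chr1", 250, "T"), ("chr2", 37, "G")}
second = {("chr1", 101, "A"), ("chr2", 37, "G"), ("chr3", 12, "C")}
tp, fp, ok = compare_variant_sets(first, second)
print(len(tp), len(fp), ok)   # 2 true positives, 1 false positive, feasibility verdict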
Sequenced Read Z is mapped to Pseudo-Target Reference Genome 442 and will not be called as a variant despite the cytosine and guanine not being equivalent at position five. Due to base pairing, the complementary strand of the Pseudo-Target Reference Genome 442 possesses a cytosine at position 5 and the complementary strand of the Sequenced Read Z 414 possesses a guanine at position 5. As a result, Sequenced Read Z 414 is not called as a variant when mapped to Pseudo-Target Reference Genome 442. Sequenced Read Z 414 is also mapped to Non-Target Reference Genome 444 and will not be called as a variant due to complementary bases being present at position 5. As a result, Sequenced Read Z 414 belongs to the complement of both the called variant set from mapping to the Pseudo-Target Reference Genome 442 and the called variant set from mapping to the Non-Target Reference Genome 444; therefore, Sequenced Read Z 414 is a true negative variant.
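By way of illustration only, the strand-pairing check in this example can be expressed as follows; the helper names are illustrative, and the logic simply encodes the example's rule that a read base complementary to the reference base at the same position does not produce a variant call:

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def is_complementary(read_base, reference_base):
    """True when the read base pairs with the reference base on the opposite strand."""
    return COMPLEMENT.get(read_base.upper()) == reference_base.upper()

def call_variant(read_base, reference_base):
    """No variant is called when the bases match directly or are strand complements."""
    read_base, reference_base = read_base.upper(), reference_base.upper()
    return read_base != reference_base and not is_complementary(read_base, reference_base)

print(call_variant("G", "C"))   # False: complementary bases at position 5, no variant called
print(call_variant("G", "A"))   # True: mismatch that is not a strand complement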
Sequenced Read B from a Target Species 982, Sequenced Read B from a Target Species 984, and Sequenced Read B from a Target Species 986 are equivalent. Region Three 984, Region Four 986, and Region Five 988 belong to the non-target reference genome and are not equivalent. Sequenced Read B 982 from the Target Species maps to multiple regions within the non-target reference genome. As with Sequenced Read A 902, Sequenced Read B 982 will map to a different genomic region within the non-target species reference genome than the orthologous genomic region that Sequenced Read B 982 maps to within the pseudo-target reference genome, due to the multiplicity of variant calling within the non-target reference genome. Consequently, a sequenced read that maps to more than two genomic regions within the non-target reference genome will result in a false positive.
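By way of illustration only, reads whose calls are likely false positives because they map to more than two genomic regions in the non-target reference genome can be flagged as follows; the read names, coordinates, and data structure are illustrative placeholders:

def flag_multimapped_reads(read_to_regions, max_regions=2):
    """Return reads that map to more than `max_regions` distinct genomic regions in the
    non-target reference genome; their variant calls are treated as likely false positives."""
    return {read for read, regions in read_to_regions.items()
            if len(set(regions)) > max_regions}

# Illustrative mapping results against the non-target reference genome.
mappings = {
    "read_A": ["chr7:100200"],                                # unique mapping
    "read_B": ["chr3:55100", "chr9:120400", "chr12:78000"],   # maps to three regions
}
print(flag_multimapped_reads(mappings))   # {'read_B'}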
The Quality Classifier 1064 undergoes a Model Training Process 1040 on the Ground Truth Data 1020. The Quality Classifier 1064 takes an Input Target Variant 1062 represented as a vector {x1:xn} containing the set of variant features in the plurality of variant features, where each value of x is a variant feature describing the Target Variant 1062. In some implementations of the technology disclosed, additional variant features can be extracted from Variant Call Format (.vcf) files. The Quality Classifier 1064 is a binary classification model with output classes for High Quality 1066 and Low Quality 1068.
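By way of illustration only, such a binary quality classifier can be sketched with a random forest, assuming scikit-learn and a small set of illustrative variant features (e.g., quality score, read depth, allele balance, mapping quality); the features and training data below are placeholders rather than the disclosed ground truth data:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Feature vectors {x1:xn} per variant, e.g., QUAL, read depth, allele balance, and
# mapping quality -- illustrative features that could be extracted from a .vcf file.
X_train = rng.normal(size=(200, 4))
y_train = rng.integers(0, 2, size=200)   # 1 = high quality, 0 = low quality (ground truth labels)

quality_classifier = RandomForestClassifier(n_estimators=100, random_state=0)
quality_classifier.fit(X_train, y_train)

target_variant = rng.normal(size=(1, 4))              # feature vector of an input target variant
print(quality_classifier.predict(target_variant))     # 1 -> High Quality, 0 -> Low Quality
print(quality_classifier.predict_proba(target_variant))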
In one implementation, the random forest model 1744 is communicably linked to the storage subsystem 2910 and the user interface input devices 2938.
User interface input devices 2938 can include a keyboard; pointing devices such as a mouse, trackball, touchpad, or graphics tablet; a scanner; a touch screen incorporated into the display; audio input devices such as voice recognition systems and microphones; and other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computer system 2900.
User interface output devices 2976 can include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem can include an LED display, a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem can also provide a non-visual display such as audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computer system 2900 to the user or to another machine or computer system.
Storage subsystem 2910 stores programming and data constructs that provide the functionality of some or all of the modules and methods described herein. These software modules are generally executed by processors 2978.
Processors 2978 can be graphics processing units (GPUs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and/or coarse-grained reconfigurable architectures (CGRAs). Processors 2978 can be hosted by a deep learning cloud platform such as Google Cloud Platform™, Xilinx™, and Cirrascale™. Examples of processors 2978 include Google's Tensor Processing Unit (TPU)™, rackmount solutions like GX4 Rackmount Series™, GX29 Rackmount Series™, NVIDIA DGX-1™, Microsoft's Stratix V FPGA™, Graphcore's Intelligent Processor Unit (IPU)™, Qualcomm's Zeroth Platform™ with Snapdragon Processors™, NVIDIA's Volta™, NVIDIA's DRIVE PX™, NVIDIA's JETSON TX1/TX2 MODULE™, Intel's Nirvana™, Movidius VPU™, Fujitsu DPI™, ARM's DynamIQ™, IBM TrueNorth™, Lambda GPU Server with Tesla V100s™, and others.
Memory subsystem 2922 used in the storage subsystem 2910 can include a number of memories including a main random access memory (RAM) 2932 for storage of instructions and data during program execution and a read only memory (ROM) 2934 in which fixed instructions are stored. A file storage subsystem 2936 can provide persistent storage for program and data files, and can include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations can be stored by file storage subsystem 2936 in the storage subsystem 2910, or in other machines accessible by the processor.
Bus subsystem 2955 provides a mechanism for letting the various components and subsystems of computer system 2900 communicate with each other as intended. Although bus subsystem 2955 is shown schematically as a single bus, alternative implementations of the bus subsystem can use multiple busses.
Computer system 2900 itself can be of varying types including a personal computer, a portable computer, a workstation, a computer terminal, a network computer, a television, a mainframe, a server farm, a widely-distributed set of loosely networked computers, or any other data processing system or user device. Due to the ever-changing nature of computers and networks, the description of computer system 2900 depicted in the figures is intended only as a specific example for purposes of illustrating the technology disclosed; many other configurations of computer system 2900 are possible, having more or fewer components than the computer system depicted.
The technology disclosed, in particular the clauses disclosed in this section, can be practiced as a system, method, or article of manufacture. One or more features of an implementation can be combined with the base implementation. Implementations that are not mutually exclusive are taught to be combinable. One or more features of an implementation can be combined with other implementations. This disclosure periodically reminds the user of these options. Omission from some implementations of recitations that repeat these options should not be taken as limiting the combinations taught in the preceding sections—these recitations are hereby incorporated forward by reference into each of the following implementations.
One or more implementations and clauses of the technology disclosed or elements thereof can be implemented in the form of a computer product, including a non-transitory computer readable storage medium with computer usable program code for performing the method steps indicated. Furthermore, one or more implementations and clauses of the technology disclosed or elements thereof can be implemented in the form of an apparatus including a memory and at least one processor that is coupled to the memory and operative to perform exemplary method steps. Yet further, in another aspect, one or more implementations and clauses of the technology disclosed or elements thereof can be implemented in the form of means for carrying out one or more of the method steps described herein; the means can include (i) hardware module(s), (ii) software module(s) executing on one or more hardware processors, or (iii) a combination of hardware and software modules; any of (i)-(iii) implement the specific techniques set forth herein, and the software modules are stored in a computer readable storage medium (or multiple such media).
The clauses described in this section can be combined as features. In the interest of conciseness, the combinations of features are not individually enumerated and are not repeated with each base set of features. The reader will understand how features identified in the clauses described in this section can readily be combined with sets of base features identified as implementations in other sections of this application. These clauses are not meant to be mutually exclusive, exhaustive, or restrictive; and the technology disclosed is not limited to these clauses but rather encompasses all possible combinations, modifications, and variations within the scope of the claimed technology and its equivalents.
Other implementations of the clauses described in this section can include a non-transitory computer readable storage medium storing instructions executable by a processor to perform any of the clauses described in this section. Yet another implementation of the clauses described in this section can include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform any of the clauses described in this section.
We disclose the following clauses:
1. A computer-implemented method of determining feasibility of using a reference genome of a non-target species for variant calling a sample of a target species, including:
mapping sequenced reads of a sample of a target species to a reference genome of a non-target species to detect a first set of variants in the sequenced reads of the sample of the target species;
mapping the sequenced reads of the sample of the target species to a reference genome of a pseudo-target species to detect a second set of variants in the sequenced reads of the sample of the target species;
comparing the first set of variants and the second set of variants, and identifying a subset of true positive variants that are common between the first set of variants and the second set of variants;
comparing the first set of variants and the second set of variants, and identifying a subset of false positive variants that are present in the second set of variants but absent from the first set of variants; and based on a count of the subset of false positive variants, determining the feasibility of using the reference genome of the non-target species for variant calling the target species.
2. The computer-implemented method of clause 1, wherein the pseudo-target species is the target species.
3. The computer-implemented method of clause 1, wherein the pseudo-target species is different from the target species.
4. The computer-implemented method of clause 3, wherein the pseudo-target species is homologous with the target species.
5. The computer-implemented method of clause 1, wherein the non-target species is a human.
6. The computer-implemented method of clause 1, wherein the target species is a non-human primate.
7. The computer-implemented method of clause 1, further including detecting the second set of variants by mapping the sequenced reads of the sample of the target species to the reference genome of the target species, and then lifting-over the mapped sequenced reads of the sample of the target species to the reference genome of the non-target species.
8. The computer-implemented method of clause 1, further including applying a first filter to filter out low-quality variants from the first set of variants and the second set of variants.
9. The computer-implemented method of clause 1, further including applying a second filter to filter out, from the first set of variants and the second set of variants, fixed substitutions shared between the reference genome of the non-target species and the reference genome of the pseudo-target species.
10. The computer-implemented method of clause 1, wherein false positive variants in the subset of false positive variants occur because a particular region in the sequenced reads of the sample of the target species maps to a first region in the reference genome of the non-target species and a second region in the reference genome of the pseudo-target species, wherein the first region and the second region are different.
11. The computer-implemented method of clause 10, wherein the false positive variants occur because the particular region in the sequenced reads of the sample of the target species maps to multiple regions in the reference genome of the non-target species.
12. A system including one or more processors coupled to memory, the memory loaded with computer instructions to determine feasibility of using a reference genome of a non-target species for variant calling a sample of a target species, the instructions, when executed on the processors, implement actions comprising:
mapping sequenced reads of a sample of a target species to a reference genome of a non-target species to detect a first set of variants in the sequenced reads of the sample of the target species;
mapping the sequenced reads of the sample of the target species to a reference genome of a pseudo-target species to detect a second set of variants in the sequenced reads of the sample of the target species;
comparing the first set of variants and the second set of variants, and identifying a subset of true positive variants that are common between the first set of variants and the second set of variants;
comparing the first set of variants and the second set of variants, and identifying a subset of false positive variants that are present in the second set of variants but absent from the first set of variants; and
based on a count of the subset of false positive variants, determining the feasibility of using the reference genome of the non-target species for variant calling the target species.
13. The system of clause 12, wherein the pseudo-target species is the target species.
14. The system of clause 12, wherein the pseudo-target species is different from the target species.
15. The system of clause 14, wherein the pseudo-target species is homologous with the target species.
16. The system of clause 12, wherein the non-target species is a human.
17. The system of clause 12, wherein the target species is a non-human primate.
18. The system of clause 12, further including detecting the second set of variants by mapping the sequenced reads of the sample of the target species to the reference genome of the target species, and then lifting-over the mapped sequenced reads of the sample of the target species to the reference genome of the non-target species.
19. The system of clause 12, further including applying a first filter to filter out low-quality variants from the first set of variants and the second set of variants.
20. The system of clause 12, further including applying a second filter to filter out, from the first set of variants and the second set of variants, fixed substitutions shared between the reference genome of the non-target species and the reference genome of the pseudo-target species.
21. The system of clause 12, wherein false positive variants in the subset of false positive variants occur because a particular region in the sequenced reads of the sample of the target species maps to a first region in the reference genome of the non-target species and a second region in the reference genome of the pseudo-target species, wherein the first region and the second region are different.
22. The system of clause 12, wherein the false positive variants occur because the particular region in the sequenced reads of the sample of the target species maps to multiple regions in the reference genome of the non-target species.
23. A non-transitory computer readable storage medium impressed with computer program instructions to determine feasibility of using a reference genome of a non-target species for variant calling a sample of a target species, the instructions, when executed on a processor, implement a method comprising:
mapping sequenced reads of a sample of a target species to a reference genome of a non-target species to detect a first set of variants in the sequenced reads of the sample of the target species;
mapping the sequenced reads of the sample of the target species to a reference genome of a pseudo-target species to detect a second set of variants in the sequenced reads of the sample of the target species;
comparing the first set of variants and the second set of variants, and identifying a subset of true positive variants that are common between the first set of variants and the second set of variants;
comparing the first set of variants and the second set of variants, and identifying a subset of false positive variants that are present in the second set of variants but absent from the first set of variants; and
based on a count of the subset of false positive variants, determining the feasibility of using the reference genome of the non-target species for variant calling the target species.
24. The non-transitory computer readable storage medium of clause 23, wherein the pseudo-target species is the target species.
25. The non-transitory computer readable storage medium of clause 23, wherein the pseudo-target species is different from the target species.
26. The non-transitory computer readable storage medium of clause 25, wherein the pseudo-target species is homologous with the target species.
27. The non-transitory computer readable storage medium of clause 23, wherein the non-target species is a human.
28. The non-transitory computer readable storage medium of clause 23, wherein the target species is a non-human primate.
29. The non-transitory computer readable storage medium of clause 23, further including detecting the second set of variants by mapping the sequenced reads of the sample of the target species to the reference genome of the target species, and then lifting-over the mapped sequenced reads of the sample of the target species to the reference genome of the non-target species.
30. The non-transitory computer readable storage medium of clause 23, further including applying a first filter to filter out low-quality variants from the first set of variants and the second set of variants.
31. The non-transitory computer readable storage medium of clause 23, further including applying a second filter to filter out, from the first set of variants and the second set of variants, fixed substitutions shared between the reference genome of the non-target species and the reference genome of the pseudo-target species.
32. The non-transitory computer readable storage medium of clause 23, wherein false positive variants in the subset of false positive variants occur because a particular region in the sequenced reads of the sample of the target species maps to a first region in the reference genome of the non-target species and a second region in the reference genome of the pseudo-target species, wherein the first region and the second region are different.
33. The non-transitory computer readable storage medium of clause 23, wherein the false positive variants occur because the particular region in the sequenced reads of the sample of the target species maps to multiple regions in the reference genome of the non-target species.
This application claims the benefit of and priority to the following: U.S. Provisional Patent Application No. 63/294,813, titled “PERIODIC MASK PATTERN FOR REVELATION LANGUAGE MODELS,” filed Dec. 29, 2021 (Attorney Docket No. ILLM 1063-1/IP-2296-PRV); U.S. Provisional Patent Application No. 63/294,816, titled “CLASSIFYING MILLIONS OF VARIANTS OF UNCERTAIN SIGNIFICANCE USING PRIMATE SEQUENCING AND DEEP LEARNING,” filed Dec. 29, 2021 (Attorney Docket No. ILLM 1064-1/IP-2297-PRV); U.S. Provisional Patent Application No. 63/294,820, titled “IDENTIFYING GENES WITH DIFFERENTIAL SELECTIVE CONSTRAINT BETWEEN HUMANS AND NON-HUMAN PRIMATES,” filed Dec. 29, 2021 (Attorney Docket No. ILLM 1065-1/IP-2298-PRV); U.S. Provisional Patent Application No. 63/294,827, titled “DEEP LEARNING NETWORK FOR EVOLUTIONARY CONSERVATION,” filed Dec. 29, 2021 (Attorney Docket No. ILLM 1066-1/IP-2299-PRV); U.S. Provisional Patent Application No. 63/294,828, titled “INTER-MODEL PREDICTION SCORE RECALIBRATION,” filed Dec. 29, 2021 (Attorney Docket No. ILLM 1067-1/IP-2301-PRV); and U.S. Provisional Patent Application No. 63/294,830, titled “SPECIES-DIFFERENTIABLE EVOLUTIONARY PROFILES,” filed Dec. 29, 2021 (Attorney Docket No. ILLM 1068-1/IP-2302-PRV). The priority applications are incorporated by reference as if fully set forth herein.