MULTIPLE TAGGING OF INDIVIDUAL LONG DNA FRAGMENTS

Information

  • Patent Application
  • Publication Number
    20240287598
  • Date Filed
    October 16, 2023
  • Date Published
    August 29, 2024
Abstract
This disclosure provides methods and compositions for tagging long fragments of a target nucleic acid for sequencing and analyzing the resulting sequence information in order to reduce errors and perform haplotype phasing, for example.
Description
REFERENCE TO A “SEQUENCE LISTING,” A TABLE, OR A COMPUTER PROGRAM LISTING APPENDIX SUBMITTED AS AN ASCII TEXT FILE

The Sequence Listing written in file 092171-1411811-5043-US04_SequenceListing.pdf, created Sep. 20, 2018, 785 bytes, machine format IBM-PC, MS-Windows operating system, is hereby incorporated by reference.


BACKGROUND OF THE INVENTION

There is a need for improved methods for determining the parental contribution to the genomes of higher organisms, i.e., haplotype phasing of genomes. Methods for haplotype phasing, including computational methods and experimental phasing, are reviewed in Browning and Browning, Nature Reviews Genetics 12:703-714, 2011.


Most mammals, including humans, are diploid, with half of the homologous chromosomes being derived from each parent. Many plants have genomes that are polyploid. For example, wheat (Triticum spp.) has a ploidy ranging from diploid (einkorn wheat) to tetraploid (emmer and durum wheat) to hexaploid (spelt wheat and common wheat [T. aestivum]).


The context in which variations occur on each individual chromosome can have profound effects on the expression and regulation of genes and other transcribed regions of the genome. Further, determining if two potentially detrimental mutations occur within one or both alleles of a gene is of paramount clinical importance. For plant species, knowledge of the parental genetic contribution is important for breeding progeny with desirable traits.


Current methods for whole-genome sequencing lack the ability to separately assemble parental chromosomes in a cost-effective way and to describe the context (haplotypes) in which variations co-occur. Simulation experiments show that chromosome-level haplotyping requires allele linkage information across a range of at least 70-100 kb. This cannot be achieved with existing technologies that use amplified DNA, which are limited to reads of less than 1000 bases due to difficulties in uniform amplification of long DNA molecules and loss of linkage information in sequencing. Mate-pair technologies can provide an equivalent to the extended read length but are limited to less than 10 kb due to inefficiencies in making such DNA libraries (owing to the difficulty of circularizing DNA longer than a few kb in length). This approach also requires extreme read coverage to link all heterozygotes.


Single molecule sequencing of greater than 100 kb DNA fragments would be useful for haplotyping if processing such long molecules were feasible, if the accuracy of single molecule sequencing were high, and detection/instrument costs were low. This is very difficult to achieve on short molecules with high yield, let alone on 100 kb fragments.


Most recent human genome sequencing has been performed on short read-length (<200 bp), highly parallelized systems starting with hundreds of nanograms of DNA. These technologies are excellent at generating large volumes of data quickly and economically. Unfortunately, short reads, often paired with small mate-gap sizes (500 bp-10 kb), eliminate most SNP phase information beyond a few kilobases (McKernan et al., Genome Res. 19:1527, 2009). Furthermore, it is very difficult to maintain long DNA fragments in multiple processing steps without fragmenting as a result of shearing.


At the present time four personal genomes, those of J. Craig Venter (Levy et al., PLOS Biol. 5:e254, 2007), a Gujarati Indian (HapMap sample NA20847; Kitzman et al., Nat. Biotechnol. 29:59, 2011), and two Europeans (Max Planck One [MP1]; Suk et al., Genome Res., 2011; http://genome.cshlp.org/content/early/2011/09/02/gr.125047.111.full.pdf; and HapMap sample NA12878; Duitama et al., Nucl. Acids Res. 40:2041-2053, 2012) have been sequenced and assembled as diploid. All have involved cloning long DNA fragments into constructs in a process similar to the bacterial artificial chromosome (BAC) sequencing used during construction of the human reference genome (Venter et al., Science 291:1304, 2001; Lander et al., Nature 409:860, 2001). While these processes generate long phased contigs (N50s of 350 kb [Levy et al., PLOS Biol. 5:e254, 2007], 386 kb [Kitzman et al., Nat. Biotechnol. 29:59-63, 2011] and 1 Mb [Suk et al., Genome Res. 21:1672-1685, 2011]), they require a large amount of initial DNA and extensive library processing, and they are too expensive to use in a routine clinical environment.


Additionally, whole chromosome haplotyping has been demonstrated through direct isolation of metaphase chromosomes (Zhang et al., Nat. Genet. 38:382-387, 2006; Ma et al., Nat. Methods 7:299-301, 2010; Fan et al., Nat. Biotechnol. 29:51-57, 2011; Yang et al., Proc. Natl. Acad. Sci. USA 108:12-17, 2011). These methods are useful for long-range haplotyping but have yet to be used for whole-genome sequencing; they require preparation and isolation of whole metaphase chromosomes, which can be challenging for some clinical samples.


There is also a need for improved methods for obtaining sequence information from mixtures of organisms, such as in metagenomics (e.g., gut bacteria or other microbiomes). There is also a need for improved methods for genome sequencing and assembly, including de novo assembly with no or minimal use of a reference sequence, or assembly of genomes that include various types of repeat sequences, including resolution of pseudogenes, copy number variations and structural variations, especially in cancer genomes.


We have described long fragment read (LFR) methods that enable accurate assembly of the separate sequences of parental chromosomes (i.e., complete haplotyping) in diploid genomes at significantly reduced experimental and computational costs and without cloning into vectors or cell-based replication. LFR is based on the physical separation of long fragments of genomic DNA (or other nucleic acids) across many different aliquots such that there is a low probability of both the maternal and paternal copies of any given region of the genome being represented in the same aliquot. By placing a unique identifier in each aliquot and analyzing many aliquots in the aggregate, DNA sequence data can be assembled into a diploid genome, e.g., the sequence of each parental chromosome can be determined. LFR does not require cloning fragments of a complex nucleic acid into a vector, as in haplotyping approaches using large-fragment (e.g., BAC) libraries. Nor does LFR require direct isolation of individual chromosomes of an organism. In addition, LFR can be performed on an individual organism and does not require a population of the organism in order to accomplish haplotype phasing. LFR methods have been described in U.S. patent application Ser. Nos. 12/329,365 and 13/447,087, U.S. Pat. Publications US 2011-0033854 and 2009-0176234, and U.S. Pat. Nos. 7,901,890, 7,897,344, 7,906,285, 7,901,891, and 7,709,197, all of which are hereby incorporated by reference in their entirety.


BRIEF SUMMARY

The present invention provides methods and compositions for Multiple Tagging of Individual Long DNA Fragments (referred to herein by the abbreviation Multiple Tagging, or MT), for sequencing, and for analyzing the resulting sequence information in order to reduce errors and perform haplotype phasing, among other things.


According to one aspect of the invention, methods are provided for sequencing a target nucleic acid comprising: (a) combining in a single reaction vessel (i) a plurality of long fragments of the target nucleic acid, and (ii) a population of polynucleotides, wherein each polynucleotide comprises a tag and a majority of the polynucleotides comprise a different tag; (b) introducing into a majority of the long fragments tag-containing sequences from said population of polynucleotides to produce tagged long fragments, wherein each of the tagged long fragments comprises a plurality of the tag-containing sequences at a selected average spacing, and each tag-containing sequence comprises a tag; and (c) producing a plurality of subfragments from each tagged long fragment, wherein each subfragment comprises one or more tags. Such methods are suitable for preparing a target nucleic acid for nucleic acid sequencing. According to one embodiment, such methods further comprise sequencing the subfragments to produce a plurality of sequence reads; assigning a majority of the sequence reads to corresponding long fragments; and assembling the sequence reads to produce an assembled sequence of the target nucleic acid.


According to one embodiment, producing the subfragments in such methods comprises performing an amplification reaction to produce a plurality of amplicons from each long fragment. According to another embodiment, each amplicon comprises a tag from each of the adjacent introduced sequences and a region of the long fragment between the adjacent introduced sequences.


According to another embodiment, such methods comprise combining the long fragments with an excess of the population of tag-containing sequences.


According to another embodiment, such methods comprise combining the long fragments with the population of tag-containing sequences under conditions that are suitable for introduction of a single tag-containing sequence into a majority of the long fragments.


According to another embodiment, such methods comprise combining the long fragments with the population of tag-containing sequences under conditions that are suitable for introduction of different tag-containing sequences into a majority of the long fragments.


According to another embodiment, in such methods the population of tag-containing sequences comprises a population of beads, wherein each bead comprises multiple copies of a single tag-containing sequence.


According to another embodiment, in such methods the population of tag-containing sequences comprises a population of concatemers, each concatemer comprising multiple copies of a single tag-containing sequence.


According to another embodiment, in such methods the tag-containing sequences comprise transposon ends, the method comprising combining the long fragments and the tag-containing sequences under conditions that are suitable for transposition of the tag-containing sequences into each of the long fragments.


According to another embodiment, in such methods the tag-containing sequences comprise a hairpin sequence.


According to another embodiment, in such methods the target nucleic acid is a complex nucleic acid.


According to another embodiment, in such methods the target nucleic acid is a genome of an organism.


According to another embodiment, such methods comprise determining a haplotype of the genome.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows a method for tagging and fragmenting of long fragments of a target nucleic acid with transposon-mediated barcodes.



FIGS. 2A and 2B show a method for tagging and fragmenting of long fragments of a target nucleic acid with hairpin-mediated barcodes.



FIG. 3 shows a method for transposon-mediated tagging and fragmenting of long fragments of a target nucleic acid.



FIG. 4A shows a method for tagging long fragments of a target nucleic acid using a tagged adaptor.



FIG. 4B shows an alternative method for tagging long fragments of a target nucleic acid using a tagged adaptor.



FIGS. 4C-4D show a second alternative method for tagging long fragments of a target nucleic acid using a tagged adaptor.



FIGS. 4E-4F show methods for creating a series of tagged subfragments with shorter and shorter regions of the long DNA fragments.



FIGS. 5A and 5B show examples of sequencing systems.



FIG. 6 shows an example of a computing device that can be used in, or in conjunction with, a sequencing machine and/or a computer system.



FIG. 7 shows the general architecture of the MT algorithm.



FIG. 8 shows pairwise analysis of nearby heterozygous SNPs.



FIG. 9 shows an example of the selection of an hypothesis and the assignment of a score to the hypothesis.



FIG. 10 shows graph construction.



FIG. 11 shows graph optimization.



FIG. 12 shows contig alignment.



FIG. 13 shows parent-assisted universal phasing.



FIG. 14 shows natural contig separations.



FIG. 15 shows universal phasing. Sequence legend: Mom: C-CGCAG TAGCTTA CGAATCG (SEQ ID NO:1); Dad: G-ATTTA ACTGAGC ACTTGGC (SEQ ID NO:2).



FIG. 16 shows error detection using MT.



FIG. 17 shows an example of a method of decreasing the number of false negatives in which a confident heterozygous SNP call could be made despite a small number of reads.





DETAILED DESCRIPTION

As used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a polymerase” refers to one agent or mixtures of such agents, and reference to “the method” includes reference to equivalent steps and/or methods known to those skilled in the art, and so forth.


Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. All publications mentioned herein are incorporated herein by reference for the purpose of describing and disclosing devices, compositions, formulations and methodologies which are described in the publication and which might be used in connection with the presently described invention.


Where a range of values is provided, it is understood that each intervening value, to the tenth of the unit of the lower limit unless the context clearly dictates otherwise, between the upper and lower limit of that range and any other stated or intervening value in that stated range is encompassed within the invention. The upper and lower limits of these smaller ranges may independently be included in the smaller ranges, and each such smaller range is also encompassed within the invention, subject to any specifically excluded limit in the stated range. Where the stated range includes one or both of the limits, ranges excluding either or both of those included limits are also included in the invention.


In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one of skill in the art that the present invention may be practiced without one or more of these specific details. In other instances, features and procedures well known to those skilled in the art have not been described in order to avoid obscuring the invention.


Although the present invention is described primarily with reference to specific embodiments, it is also envisioned that other embodiments will become apparent to those skilled in the art upon reading the present disclosure, and it is intended that such embodiments be contained within the present inventive methods.


The practice of the present invention may employ, unless otherwise indicated, conventional techniques and descriptions of organic chemistry, polymer technology, molecular biology (including recombinant techniques), cell biology, biochemistry, and immunology, which are within the skill of the art. Such conventional techniques include polymer array synthesis, hybridization, ligation, and detection of hybridization using a label. Specific illustrations of suitable techniques can be had by reference to the example herein below. However, other equivalent conventional procedures can, of course, also be used. Such conventional techniques and descriptions can be found in standard laboratory manuals such as Genome Analysis: A Laboratory Manual Series (Vols. I-IV), Using Antibodies: A Laboratory Manual, Cells: A Laboratory Manual, PCR Primer: A Laboratory Manual, and Molecular Cloning: A Laboratory Manual (all from Cold Spring Harbor Laboratory Press), Stryer, L. (1995) Biochemistry (4th Ed.) Freeman, New York, Gait, “Oligonucleotide Synthesis: A Practical Approach” 1984, IRL Press, London, Nelson and Cox (2000), Lehninger, Principles of Biochemistry 3rd Ed., W. H. Freeman Pub., New York, N.Y. and Berg et al. (2002) Biochemistry, 5th Ed., W. H. Freeman Pub., New York, N.Y., all of which are herein incorporated in their entirety by reference for all purposes.


Methods for Sequencing Complex Nucleic Acids—Overview

According to one aspect of the invention, methods are provided for multiple tagging of individual long fragments of target nucleic acids, or polynucleotides, including without limitation complex nucleic acids. Long fragments of a target nucleic acid or polynucleotide (e.g., 100 kb or longer) are tagged by a method that introduces a tag or barcode into multiple sites in each fragment. Ideally, each fragment has introduced into it multiple copies of one unique tag—a fragment-specific tag—or a unique pattern of insertion of multiple tags—a fragment-specific tag pattern. However, this is not necessarily the case: in some embodiments, a long fragment may have inserted into it more than one distinct tag, and more than one fragment may have inserted into them the same tag. After tagging, subfragments of the long fragments are produced, each subfragment including a tag. Commonly, the subfragments are amplified (e.g., by PCR). The subfragments, including the tags that are part of each subfragment, are then sequenced. The tag sequence permits the sequence data obtained from each subfragment to be assigned to the long fragment from which each subfragment is derived, which facilitates sequence mapping and assembly and the ordering of alleles (or hets) into a haplotype of the target nucleic acids.
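For illustration only, the following Python sketch shows the core bookkeeping implied by this overview: grouping subfragment sequence reads by the tag they carry so that each group can be treated as deriving from a single long fragment. The read records, tag parsing, and downstream assembly are greatly simplified, and all identifiers and example sequences are hypothetical rather than part of the disclosed methods.

```python
from collections import defaultdict

def group_reads_by_tag(reads):
    """Group subfragment reads by the tag carried in each read.

    `reads` is an iterable of (tag, sequence) pairs; in practice the tag
    would be parsed out of each read at a known position relative to the
    introduced adaptor or transposon sequence.
    """
    groups = defaultdict(list)
    for tag, seq in reads:
        groups[tag].append(seq)
    return groups

# Hypothetical reads: those sharing a tag are assigned to the same long fragment.
reads = [
    ("ACGTACGTACGT", "TTGCAAGGCT"),   # subfragment A of fragment 1
    ("ACGTACGTACGT", "GGATCCTTAA"),   # subfragment B of fragment 1
    ("TGCATGCATGCA", "CCTAGGATCC"),   # subfragment A of fragment 2
]
for tag, seqs in group_reads_by_tag(reads).items():
    print(tag, len(seqs), "reads")
```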


For accurate clinical sequencing and haplotyping of individual human genomes from a small number of cells, long genomic fragments (˜100 kb or longer) are preferable, although shorter fragments may be used. Assuming 100 kb fragments, a human genome would have about 6×10⁴ fragments per cell, and ˜18 cells would generate about 1 million fragments. DNA tags that are 12 bases long (12-mers) or longer have enough sequence diversity (16 million to over one billion) to tag each fragment with a unique tag.
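The estimates in the preceding paragraph follow from simple arithmetic; the short sketch below reproduces them under the stated assumptions (a ˜6 Gb diploid genome, ˜100 kb fragments, 18 cells, and four-letter tags) and is an illustration only.

```python
GENOME_SIZE = 6.0e9          # approximate diploid human genome, in bases
FRAGMENT_LENGTH = 100_000    # ~100 kb long fragments
CELLS = 18

fragments_per_cell = GENOME_SIZE / FRAGMENT_LENGTH   # ~6 x 10^4 fragments
total_fragments = fragments_per_cell * CELLS          # ~1 million fragments

tag_space_12mer = 4 ** 12    # ~16.8 million distinct 12-base tags
tag_space_15mer = 4 ** 15    # ~1.07 billion distinct 15-base tags

print(f"fragments per cell: {fragments_per_cell:.1e}")
print(f"fragments from {CELLS} cells: {total_fragments:.1e}")
print(f"12-mer tag space: {tag_space_12mer:,}")
print(f"15-mer tag space: {tag_space_15mer:,}")
```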


We provide several methods for associating copies of the same long tag with hundreds of ˜1 kb sub-regions of ˜100 kb genomic fragments in a homogeneous reaction without any physical compartmentalization (e.g., droplets in an emulsion).


A description of various methods for practicing MT is provided below.


In some embodiments of the invention, such methods lead to most (e.g., 50%, 60%, 70%, 80%, 90% or more) of the long fragments of a target nucleic acid being tagged with multiple tag-containing sequences that include the same tag sequence. Such methods minimize tagging with different tag sequences by, for example: selecting the proper ratio of tag-containing sequences to long fragments; selecting the proper dilution or DNA concentration; minimizing molecule movement after initiation of the tagging process (for example, by mixing DNA fragments, tag-containing sequences, enzymes, and buffers at low temperature, waiting for liquid movements to stop, and then increasing the temperature of the mixture in order to activate enzymatic processes); tethering a single tag-containing sequence to a single long DNA fragment by covalent or non-covalent binding; and other techniques. There are several ways to attach or tether a single bead or nanoball with multiple copies of a particular tag-containing sequence to a single long fragment of a target nucleic acid. For example, a homopolymer sequence (e.g., an A-tail) may be added to the long fragment using terminal transferase, or an adaptor with a selected sequence may be ligated to an end or ends of the long fragment. A complementary sequence may be added to the end of or included within the tag-containing sequence or nanoball such that, under appropriate selected conditions, the tag-containing sequence or tag assembly anneals with the corresponding complementary sequence on the long fragment. Preferably, a long fragment can anneal to only one tag-containing sequence or tag assembly.


Other embodiments of the invention, rather than maximizing the number of long fragments with a single tag sequence inserted at multiple locations along the long fragment, provide conditions under which multiple tags with different sequences are inserted at multiple locations, creating a unique pattern or “fingerprint” of tag insertion for each long fragment.


By sequencing 50% of each ˜1 kb fragment, ˜1× sequence coverage would be generated for each genomic fragment, because tagged fragments are generated from both strands of dsDNA. If we sequence 25% (½ read coverage per fragment), we would observe the linkage of two regions in 25% of fragments. For the same read budget we can increase the number of fragments two-fold and have only a two-fold reduction in the observed linkages. For a 25% read (125 bases from each end of a ˜1 kb fragment) and 36 starting cells, we will observe nine linkages instead of ˜18 linkages for 18 cells if we read 50% of the DNA (250 bases from each end of a ˜1 kb fragment). If only ˜60 bases can be read from each fragment, it is better to use 300-500 bp fragments, which still make very useful mate-pairs.


If we sequence only a fraction of the DNA from each ˜1 kb fragment, we need disproportionately more initial fragments to observe the same number of linkages. For example, if we sequence half as much of each fragment, we need 4× more fragments.
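One reading of the two preceding paragraphs is that the chance of observing a given linkage scales with the square of the fraction of each ˜1 kb subfragment that is read, while it scales linearly with the number of starting cells. The sketch below only illustrates that reading; the model and the scaling constant (chosen so that the worked numbers above are reproduced) are assumptions, not part of the disclosed methods.

```python
def expected_linkages(n_cells, fraction_read, scale=4.0):
    """Rough linkage model: each of two nearby regions on a ~1 kb subfragment
    is observed with probability proportional to the fraction of the
    subfragment that is read, so joint (linked) observation scales with the
    square of that fraction and linearly with the number of starting cells.
    `scale` is a hypothetical constant fit to the examples in the text.
    """
    return scale * n_cells * fraction_read ** 2

print(expected_linkages(18, 0.50))   # ~18 linkages: 18 cells, 250 bases read from each end
print(expected_linkages(36, 0.25))   # ~9 linkages: 36 cells, 125 bases read from each end
```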


MT avoids subcloning of fragments of a complex nucleic acid into a vector and subsequent replication in a host cell, or the need to isolate individual chromosomes (e.g., metaphase chromosomes). It also does not require aliquoting fragments of a target nucleic acid. MT can be fully automated, making it suitable for high-throughput, cost-effective applications. Tagging ˜1 kb sub-regions of long (˜100 kb or longer) genomic fragments with the same unique tag has many applications, including haplotyping diploid or polyploid genomes, efficient de novo genome sequence assembly, and error correction.


The advantages of MT include:

    • A practically unlimited number of individual DNA fragments can be uniquely tagged, providing maximal information for de novo assembly, for example.
    • MT may be performed in a single reaction vessel (e.g., tube, well in a multi-well plate, etc.) in a small number of steps and is easy to scale and automate; there is no need for a large number of aliquots or nanodrops.
    • DNA amplification of shorter DNA fragments (e.g., tagged subfragments) by PCR results in less coverage bias and no loss of sequences from fragment ends compared to multiple displacement amplification (MDA) of long fragments, for example.
    • One MT method, which employs nicking and a primer-ligation process, uses both strands of the dsDNA, thereby doubling the sequence coverage per fragment (longer mate-pairs for the same read-length).
    • Dramatically reduced computational demands and associated costs of sequence mapping and assembly.
    • Substantial reduction in errors or questionable base calls that can result from current sequencing technologies, including, for example, systematic errors that are characteristic of a given sequencing platform or mutations introduced by DNA amplification. MT thereby provides a highly accurate sequence of a human genome or other complex nucleic acid, minimizing the need for follow-up confirmation of detected variants and facilitating adoption of human genome sequencing for diagnostic applications.


MT can be used as a preprocessing method with any known sequencing technology, including both short-read and longer-read methods. MT also can be used in conjunction with various types of analysis, including, for example, analysis of the transcriptome, methylome, etc. Because it requires very little input DNA, MT can be used for sequencing and haplotyping one or a small number of cells, which can be particularly important for cancer, prenatal diagnostics, and personalized medicine. This can facilitate the identification of familial genetic disease, etc. By making it possible to distinguish calls from the two sets of chromosomes in a diploid sample, MT also allows higher confidence calling of variant and non-variant positions at low coverage. Additional applications of MT include resolution of extensive rearrangements in cancer genomes and full-length sequencing of alternatively spliced transcripts.


MT can be used to process and analyze complex nucleic acids, including but not limited to genomic DNA, that are purified or unpurified, including complex nucleic acids from cells and tissues that are gently disrupted to release such complex nucleic acids without shearing or overly fragmenting them.


In one aspect, MT produces virtual read lengths of approximately 100-1000 kb or longer, for example.


In addition to being applicable to all sequencing platforms, MT-based sequencing is suitable for a wide variety of applications, including without limitation the study of structural rearrangements in cancer genomes, full methylome analysis including the haplotypes of methylated sites, and de novo assembly applications for metagenomics or novel genome sequencing, even of complex polyploid genomes like those found in plants.


MT provides the ability to obtain actual sequences of individual chromosomes as opposed to just the consensus sequences of parental or related chromosomes (in spite of their high similarities and the presence of long repeats and segmental duplications). To generate this type of data, continuity of sequence must in general be established over long DNA ranges.


A further aspect of the invention includes software and algorithms for efficiently utilizing MT data for whole chromosome haplotype and structural variation mapping and false positive/negative error correction.


In a further aspect, MT techniques of the invention reduce the complexity of the DNA to be sequenced. Complexity reduction and haplotype separation in >100 kb long DNA can enable more efficient and cost-effective sequence assembly and detection of sequence variations in human and other diploid and polyploid genomes.


Preparing Long Nucleic Acid Fragments

Target nucleic acids, including but not limited to complex nucleic acids, are isolated using conventional techniques, for example as disclosed in Sambrook and Russell, Molecular Cloning: A Laboratory Manual, cited supra. In some cases, particularly if small amounts of the nucleic acids are employed in a particular step, it is advantageous to provide carrier DNA, e.g., unrelated circular synthetic double-stranded DNA, to be mixed and used with the sample nucleic acids whenever only small amounts of sample nucleic acids are available and there is danger of losses through nonspecific binding, e.g., to container walls and the like.


According to some embodiments of the invention, genomic DNA or other complex nucleic acids are obtained from an individual cell or small number of cells with or without purification, by any known method.


Long fragments are desirable for the methods of the present invention. Long fragments of genomic DNA can be isolated from a cell by any known method. A protocol for isolation of long genomic DNA fragments from human cells is described, for example, in Peters et al., Nature 487:190-195 (2012). In one embodiment, cells are lysed and the intact nuclei are pelleted with a gentle centrifugation step. The genomic DNA is then released through proteinase K and RNase digestion for several hours. The material can be treated to lower the concentration of remaining cellular waste, e.g., by dialysis for a period of time (e.g., from 2-16 hours) and/or dilution. Since such methods need not employ many disruptive processes (such as ethanol precipitation, centrifugation, and vortexing), the genomic nucleic acid remains largely intact, yielding a majority of fragments that have lengths in excess of 150 kilobases. In some embodiments, the fragments are from about 5 to about 750 kilobases in length. In further embodiments, the fragments are from about 150 to about 600, about 200 to about 500, about 250 to about 400, and about 300 to about 350 kilobases in length. The smallest fragment that can be used for haplotyping is one containing at least two hets (approximately 2-5 kb); there is no maximum theoretical size, although fragment length can be limited by shearing resulting from manipulation of the starting nucleic acid preparation.


Once the DNA is isolated, it is advantageous to avoid loss of sequences from the ends of each fragment, since loss of such material can result in gaps in the final genome assembly. In one embodiment, sequence loss is avoided through use of an infrequent nicking enzyme, which creates starting sites for a polymerase, such as phi29 polymerase, at distances of approximately 100 kb from each other. As the polymerase creates a new DNA strand, it displaces the old strand, creating overlapping sequences near the sites of polymerase initiation. As a result, there are very few deletions of sequence.


A controlled use of a 5′ exonuclease (either before or during amplification) can promote multiple replications of the original DNA from a single cell and thus minimize propagation of early errors through copying of copies.


In other embodiments, long DNA fragments are isolated and manipulated in a manner that minimizes shearing or absorption of the DNA to a vessel, including, for example, isolating cells in agarose gel plugs or in oil, or using specially coated tubes and plates.


Fragmented DNA from a single cell can be duplicated by ligating an adaptor with a single-stranded priming overhang and using an adaptor-specific primer and phi29 polymerase to make two copies from each long fragment. This can generate four cells' worth of DNA from a single cell.


Preparation of Tags

A “barcode” or “tag” is generally a unique sequence of nucleotides. Tagging long fragments as described involves introducing into long fragments multiple copies of sequences (adaptors, transposons, etc.) that include tags. Such introduced sequences are spaced apart on the fragment; the average spacing between adjacent introduced sequences is selected to permit the creation of subfragments of the long fragments by any preferred method, e.g.: by PCR amplification using primers that have primer binding sites in adjacent introduced sequences; by restriction digestion; or by other methods known in the art. Subsequently, sequence reads are generated by sequencing subfragments of the tagged long fragments. Such sequence reads can be assigned to the individual long fragment from which they are ultimately derived.


According to one embodiment, the average spacing between adjacent introduced sequences is 100 bp, 200 bp, 300 bp, 400 bp, 500 bp, 600 bp, 700 bp, 800 bp, 900 bp, 1000 bp, 1500 bp, 2000 bp, 2500 bp, 3000 bp, 3500 bp, 4000 bp, or 5000 bp. According to another embodiment, the average spacing is between about 100 bp and about 5000 bp, or between about 200 bp and about 4000 bp, or between about 300 bp and about 3000 bp, or between about 300 bp and about 2000 bp, or between about 300 bp and about 1000 bp.


According to one embodiment, a majority of the subfragments, or 60%, 70%, 80%, 90% or more, or substantially all of the subfragments, include a tag sequence.


According to one embodiment, a barcode- or tag-containing sequence is used that has two, three or more segments, one or more regions of known sequence and one or more regions of degenerate sequence that serves as the barcode(s) or tag(s). The known sequence (B) may include, for example, PCR primer binding sites, transposon ends, restriction endonuclease recognition sequences (e.g., sites for rare cutters, e.g., Not I, Sac II, Mlu I, BssH II, etc.), or other sequences. The degenerate sequence (N) that serves as the tag is long enough to provide a population of different-sequence tags that is equal to or, preferably, greater than, the number of fragments of a target nucleic acid to be analyzed.


According to one embodiment, the tag-containing sequence comprises one region of known sequence of any selected length and one region of degenerate sequence of any selected length. According to another embodiment, the tag-containing sequence comprises two regions of known sequence of a selected length that flank a region of degenerate sequence of a selected length, i.e., BnNnBn, where N may have any length sufficient for tagging long fragments of a target nucleic acid, including, without limitation, N=10, 11, 12, 13, 14, 15, 16, 17, 18, 19 or 20, and B may have any length that accommodates desired sequences such as transposon ends, primer binding sites, etc. For example, such an embodiment may be B20N15B20.
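As an illustration of the BnNnBn layout (the flanking sequences below are placeholders, not sequences from this disclosure), the following sketch assembles a B20N15B20 tag-containing sequence from two fixed 20-base segments and a random 15-base degenerate tag.

```python
import random

BASES = "ACGT"

def make_tag_containing_sequence(b_left, b_right, n_len=15, rng=random):
    """Build a BnNnBn construct: a degenerate tag of length `n_len` flanked by
    two known segments (e.g., primer binding sites or transposon ends)."""
    tag = "".join(rng.choice(BASES) for _ in range(n_len))
    return b_left + tag + b_right, tag

# Hypothetical 20-base known segments (placeholders only).
B_LEFT = "ACGTTGCAACGTTGCAACGT"
B_RIGHT = "TGCAACGTTGCAACGTTGCA"

construct, tag = make_tag_containing_sequence(B_LEFT, B_RIGHT, n_len=15)
print(len(construct), tag)   # 55-base B20N15B20 construct and its 15-base tag
```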


Tag synthesis on beads. A population of tag-containing DNA sequences can be PCR amplified on beads in a water-in-oil (w/o) emulsion by known methods. See, e.g., Tawfik and Griffiths, Nature Biotechnology 16:652-656 (1998); Dressman et al., Proc. Natl. Acad. Sci. USA 100:8817-8820 (2003); and Shendure et al., Science 309:1728-1732 (2005). This results in many copies of a single tag-containing sequence on each bead.


Concatemers (DNA nanoballs) of tag sequences. A circular or circularized (e.g., as with padlock probes) DNA template can be amplified by rolling circle replication (RCR). RCR uses the phi 29 DNA polymerase, which is highly processive. The newly synthesized strand is released from the circular template, resulting in a long single-stranded DNA concatemer comprising many head-to-tail copies of the circular DNA template. The concatemer folds into a substantially globular ball of DNA that is called a DNA nanoball (DNB). The length of the DNB and the number of copies of the DNA template can be controlled by the length of the RCR reaction. The nanoballs remain separated from each other in solution.


In one embodiment, a two- or three-segment design is utilized for the barcodes used to tag long fragments. This design allows for a wider range of possible barcodes by allowing combinatorial barcode segments to be generated by ligating different barcode segments together to form the full barcode segment or by using a segment as a reagent in oligonucleotide synthesis. This combinatorial design provides a larger repertoire of possible barcodes while reducing the number of full-size barcodes that need to be generated. In further embodiments, unique identification of each long fragment is achieved with 8-12 base pair (or longer) barcodes.


In one embodiment, two different barcode segments are used. A and B segments can easily be modified to each contain a different half-barcode sequence to yield thousands of combinations. In a further embodiment, the barcode sequences are incorporated on the same adapter. This can be achieved by breaking the B adaptor into two parts, each with a half-barcode sequence separated by a common overlapping sequence used for ligation. The two tag components have 4-6 bases each. An 8-base (2×4 bases) tag set is capable of uniquely tagging about 65,000 sequences. Both 2×5 base and 2×6 base tags may include use of degenerate bases (i.e., “wild-cards”) to achieve optimal decoding efficiency.
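A minimal sketch of the combinatorial half-barcode idea described above (the segment length and exhaustive enumeration are illustrative only): combining two sets of 4-base half-barcodes gives 256 × 256 = 65,536 distinct 8-base tags, matching the ˜65,000 figure in the text.

```python
from itertools import product

BASES = "ACGT"

def half_barcodes(length=4):
    """Enumerate all half-barcode segments of the given length (4^length of them)."""
    return ["".join(p) for p in product(BASES, repeat=length)]

# Combine an A-segment half-barcode with a B-segment half-barcode into a full tag.
a_halves = half_barcodes(4)
b_halves = half_barcodes(4)
full_tags = [a + b for a, b in product(a_halves, b_halves)]

print(len(a_halves), len(b_halves), len(full_tags))   # 256 256 65536
```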


In further embodiments, unique identification of each sequence is achieved with 8-12 base pair error correcting barcodes.


According to one embodiment of the invention, one starts with more long fragments than are needed for sequencing to achieve adequate sequence coverage and tags only a portion of the long fragments with a limited number of tag-containing sequences, or tag assemblies—which include many, perhaps hundreds, of copies of one tag sequence—to increase the probability of unique tagging of the long fragments. Non-tagged subfragments lack introduced sequences that provide primer-binding or capture-oligo binding sites and may be eliminated in downstream processing. Such tag assemblies include, for example, end-to-end concatemers of tag-containing sequences created by rolling circle replication (DNA nanoballs), beads to which are attached many copies of the tag-containing sequences, or other embodiments.


According to another embodiment, in order to obtain uniform genome coverage in the case of samples with a small number of cells (e.g., 1, 2, 3, 4, 5, 10, 15, 20, 30, 40, 50 or 100 cells from a microbiopsy or from circulating tumor or fetal cells, for example), all long fragments obtained from the cells are tagged.


Tagging Single Long Fragments

The methods of the present invention employ various approaches to introduce multiple copies of a tag at multiple spaced-apart sites along a long fragment (e.g., 100 kb or longer) of the target nucleic acid without the need to divide the long fragments into aliquots (as in the long fragment read technology): the entire process can be performed in a single tube or well in a microtiter plate.


According to one embodiment of the invention, tags are introduced at intervals of between about 300 bp and 1000 bp along the fragment. This spacing can be shorter or longer, depending on the desired fragment size for subsequent processing, e.g., library construction and sequencing. After tagging, each subfragment of the long fragment and any sequence information derived from it can be assigned to a single long fragment.


Various exemplary methods for tagging single fragments are shown in FIGS. 1-4.

    • (1) Tagging with transposons. As shown in FIG. 1, a first approach involves in vitro transposition. A population of tagged transposons is used, i.e., DNA constructs that include transposon ends and, near each of the ends, unique tag sequences (the same tag sequence near both ends) and a common PCR primer binding site. The population of transposons is combined with long fragments of a target nucleic acid. Addition of transposase causes in vitro transposition of several of the tagged transposons into the long fragments. Each long fragment has a unique pattern of transposon insertion, and each inserted transposon has a unique bar code. In addition, the act of transposition replicates 9 bp of sequence at each end of the transposon that further distinguishes each transposon insertion event (and may be considered another form of “tagging”).


PCR is performed using primers that bind to the PCR primer binding sites of each inserted transposon. The resulting PCR amplicons include a portion of the long fragment that lies between adjacent transposons; at each end such amplicons include sequences from the end of an adjacent transposon, including the unique tag sequence for that transposon. After sequencing the PCR amplicons, it is possible not only to map the sequence reads to a reference genome, assuming such is available, but also to use the tags to build contigs to guide de novo assembly, as shown in FIG. 1 (an illustrative sketch of this tag-based chaining follows this section). Each sequence read is associated with a tag sequence, and the tag sequence (or pattern of tags) corresponds to a single fragment. Thus, sequence reads from the same fragment should map within the same region of the target nucleic acid. In most cases two different amplicons have the same unique tag from one transposon at their ends and thus adjoin one another in the large fragment from which they are derived. If more than one genome equivalent of long fragments is analyzed (e.g., 2, 3, 4, 5, 10, or 20 or more genome equivalents), building up contigs out of sequence reads derived from overlapping long fragments is straightforward.

    • (2) Tagging with hairpins. As shown in FIGS. 2A and 2B, this approach begins with long fragments of a target nucleic acid that are denatured to form two complementary single strands from each fragment. It also uses a population of oligonucleotides that form hairpins, each including tag sequences in the loop that flank PCR primer binding sites and having a short stretch of random bases (3-5 bases) at each end. The hairpin oligos are annealed to the single-stranded form of the starting long fragments, spaced apart, for example, by about 300 to 1000 bp. Each long fragment has a unique pattern of annealed hairpins. After annealing, the single-stranded region between adjacent hairpins is filled in with a 5′-3′ polymerase that lacks strand displacement activity, followed by ligase treatment to seal the remaining nick. PCR amplification using primers that bind to the PCR primer binding sites between the barcode sequences of each hairpin creates amplicons that have a portion of the long fragment that lies between adjacent hairpin oligonucleotides; at each end such amplicons include sequences from the loop of an adjacent hairpin oligonucleotide, including the unique tag sequence for that hairpin. As for method (1) above, the barcode sequences at the ends of the PCR amplicons can be used to build contigs to guide de novo mapping and assembly.
    • (3) Tagging with transposons on a DNA nanoball or bead. As shown in FIG. 3, this approach employs beads covered with transposon sequences or a concatemer of transposon sequences created by rolling circle replication of a circular DNA that includes the transposon sequence—a transposon nanoball. As in method (1) above, the “transposon sequences” are DNA constructs that include (i) transposon ends and, at a selected location in the transposon sequences between the transposon ends, e.g., near each of the transposon ends, (ii) unique tag sequences (the same tag sequence near both ends), and (iii) a common PCR primer binding site. The transposon-containing bead or nanoball is combined with the long fragments of a target nucleic acid. Conditions are selected to promote the interaction of only one tag assembly, i.e., a bead or nanoball bearing a single transposon sequence, with each long fragment. For example, at the correct dilution, only one bead or nanoball interacts with each long fragment in most cases, since diffusion is slow and most transposons do not travel far from a long fragment. Alternatively, the transposon sequence or another sequence on the transposon assembly (e.g., an adaptor ligated to an end of the transposon sequence or concatemer, or a homopolymer sequence added by a terminal transferase) can be used to tether the tag assembly to a single long fragment, as described above. Upon addition of transposase, transposition occurs. In most cases, each fragment has multiple copies of the same transposon. A minority of the long fragments receive copies of more than one transposon. Also, in some cases, a transposon with a particular tag transposes into more than one long fragment. As in method (1), PCR amplification is performed using primers that bind to the PCR primer binding sites of each inserted transposon. The resulting PCR amplicons (between about 300 bp and 1000 bp in length) include a portion of the long fragment that lies between adjacent transposons; at each end such amplicons include sequences from the end of an adjacent transposon, including the unique tag sequence for that transposon. After sequencing, sequence reads are mapped and assembled.


In this method, because most long fragments are tagged with multiple copies of a single transposon, the resulting amplicons have the same tag at each end. The tags permit each sequence read to be associated with the same long fragment, although it is not possible to build up contigs based on the ordering of the tag sequences alone as in methods (1) and (2). If more than one transposon inserts into a single long fragment, it is most likely that all of the transposons that insert into one long fragment insert only into that one long fragment and not into other fragments. As a result, sequence reads associated with each of the inserted tags map closely together in the genome (or other target nucleic acid). Even if this is not the case, and the same transposon jumps into more than one fragment, the likelihood is high that the fragments into which such transposon is inserted are non-overlapping, in which case the resulting sequence reads map to widely separated regions of the genome. Mapping and assembly software can account for these events and correctly map and assemble the sequence reads into a genome sequence and order sequence polymorphisms (hets) into a haplotype.
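To make concrete how the tags at amplicon ends can be used, the sketch below chains amplicons that share an end tag, as in method (1), into an ordered contig from one long fragment; for method (3), where both ends of an amplicon carry the same fragment-specific tag, the analogous step is simply grouping reads by that tag, as shown earlier. This is an illustration only, under the simplifying assumptions that each amplicon record already carries its two end tags in left-to-right orientation and that tags are error-free; all identifiers are hypothetical.

```python
def chain_amplicons(amplicons):
    """Order amplicons from one long fragment by walking shared end tags.

    `amplicons` is a list of (left_tag, right_tag, sequence) tuples.  Two
    amplicons that flank the same inserted transposon (or hairpin) share that
    element's tag, so they are adjacent in the original long fragment.
    """
    by_left_tag = {left: (left, right, seq) for left, right, seq in amplicons}
    right_tags = {right for _, right, _ in amplicons}
    # Start from an amplicon whose left tag is not any other amplicon's right tag.
    starts = [a for a in amplicons if a[0] not in right_tags]
    if not starts:
        return []
    chain = [starts[0]]
    while chain[-1][1] in by_left_tag:
        chain.append(by_left_tag[chain[-1][1]])
    return chain

# Hypothetical amplicons from one fragment with inserted transposons T1-T4.
amplicons = [("T1", "T2", "seqA"), ("T2", "T3", "seqB"), ("T3", "T4", "seqC")]
print([seq for _, _, seq in chain_amplicons(amplicons)])   # ['seqA', 'seqB', 'seqC']
```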


Tagging with a Tagged Adaptor Carried on a DNA Nanoball or Bead.


In this method, shown in FIG. 4A, long double-stranded fragments of a genome (or other target nucleic acid) are nicked at random locations on both strands using an agent such as DNase I that nicks DNA double strands (i.e., a “nickase”) and DNA polymerase I large (Klenow) fragment, which retains polymerization and 3′→5′ exonuclease activity but has lost 5′→3′ exonuclease activity. No dNTPs are included in the reaction. A 3′ common adaptor is ligated to the 3′ end of each strand at a nick. A bead or nanoball with many copies of a sequence that includes (i) the tag sequence and (ii) a sequence that is complementary to the 3′ common adapter is added under conditions that permit the 3′ common adaptor to hybridize to the complementary sequence. At the proper ratio of long fragments to beads or nanoballs and at the proper dilution, most of the fragments are spatially associated with one (or less frequently 2 or more) beads or nanoballs, and copies of the 3′ common adaptor hybridize to the complementary sequence on a single bead or nanoball, since a single hybridization event leads to a physical interaction between the long fragment and the bead or nanoball, bringing other complementary sequences into close proximity. Alternatively, one can use complementary DNA sequences such as an A-tail on the long fragment and a T-tail or poly-T region on the tag-containing sequences, or other interactive moieties, to force the interaction of the long fragment and tag-containing sequences in order to increase the likelihood that each long fragment has introduced into it multiple copies of a single tag-containing sequence. Next, the tag-containing nucleic acids on the bead or nanoball are fragmented, e.g., with a restriction endonuclease, which results in common adaptors ligated to the long fragment hybridizing to complementary sequences that are included in the nucleic acids released from the bead or nanoball. Primer extension using DNA polymerase I large fragment (Klenow) or a similar DNA polymerase results in the creation of 3′ tagged molecules spaced apart on the long fragment every 300-1000 bp. The long DNA molecule can now be denatured and an oligonucleotide can be hybridized to the 3′ common adaptor; extension with Klenow fragment or a similar polymerase results in a blunt-ended, double-stranded DNA molecule that can be ligated to a 5′ common adaptor and PCR amplified. The resulting PCR amplicons (effectively tagged subfragments of the long DNA fragments) are then sequenced, mapped and assembled in a fashion similar to that described in method (3).


Thus, according to this method of the invention, the MT process comprises:

    • (1) “Clonal” copying of barcode templates and required adapters, for example by (a) rolling circle replication (RCR) to make a concatemer with hundreds of copies of the same tag or by (b) emulsion PCR on beads to create thousands of copies. Optionally, the copied unit may represent a transposon.
    • (2) Mixing long genomic fragments and tag-adapter concatemers or beads in the proper ratio and in proper concentrations to have a majority, most, or almost all genomic fragments spatially associated with one concatemer and only infrequently with two or more.
    • (3) Adding a universal primer to genomic DNA by: (a) nicking the genomic DNA at a predefined frequency (e.g., every ˜1 kb) using partial nicking with a frequent nicker or other methods; controlled nick translation can be used to further randomize fragment start sites; optionally, a small gap may be created at the nicking site, e.g., by the exonuclease activity of Pol I or Klenow fragment without dNTPs; (b) ligating a primer by its 5′ end to the 3′ end of the nicked DNA by providing the primer hybridized with a short complementary dideoxy oligo at its 5′ end; this primer is complementary to an adapter next to the barcode. Optionally this step can be done before step (2), i.e., before mixing the genomic DNA with the clonal tags;
    • (4) Copying the tag from the tag donor (DNA nanoball or bead) and another adapter by primer extension using the tag templates. After DNA denaturation this results in ˜1 kb ssDNA fragments with an adapter-barcode-adapter extension at the 3′ end. These fragments can be used as sequencing templates with a primer complementary to the 3′ end adapter, or converted into dsDNA by the same primer and further processed (e.g., by ligating an adapter on the other end, amplifying, or circularizing) before sequencing.


Optionally steps 3 and 4 can be replaced by transposon insertion and fragmenting or amplification if concatemers or beads represent clones of tagged transposons.



FIG. 4B shows an alternative approach to inserting tag-containing sequences that does not rely on nicking. Long fragments are denatured (e.g., by heating) to produce complementary single strands. Random primers (N-mers) are annealed to the single strands and extended with polymerase. An alkaline phosphatase (e.g., shrimp alkaline phosphatase, SAP) is added, and a polymerase having a 3′→5′ exonuclease function (e.g., Klenow) is used to create gaps; the resulting partially double-stranded product is then handled as described above and in FIG. 4A, beginning with 3′ ligation of a common adaptor.



FIGS. 4C-4D show a second alternative approach to inserting tag-containing sequences that does not rely on nicking. In this approach, two oligonucleotides are annealed to a tag-containing sequence (carried on a bead or, as shown, as a monomer unit of a DNA concatemer or nanoball): (i) a common primer, which is annealed upstream of the tag or barcode sequence, and (ii) a common adapter that is annealed downstream of the tag. The primer is extended and ligase is added to ligate the primer extension product to the common adaptor. This ligation product thus includes the tag sequence and, at its 3′ end, the common adaptor. A population of oligonucleotides that includes (i) a degenerate sequence (random N-mer) at its 5′ end, (ii) a sequence complementary to the common adaptor, and (iii) a noncomplementary sequence (not shown in FIG. 4C) is annealed to the ligation product from the previous step and a primer extension is performed, adding to the 3′ end of the ligation product a degenerate sequence complementary to that on each oligonucleotide (which is subsequently removed, e.g., chewed away). The resulting product (a population of “tagged adaptors,” each with a degenerate sequence at its 3′ end) is then released from the bead or nanoball, e.g., by heat denaturation. The tagged adaptors are annealed to a single strand of the long fragment (produced by denaturing the double-stranded long fragment); as shown in FIG. 4D, the different degenerate sequences at the ends of various tagged adaptors anneal to complementary sequences spaced apart along the long fragment. As described above, a polymerase is added to extend the tagged adaptor, and the extension product includes a sequence complementary to a region of the long fragment. The resulting molecules, which include a tagged adaptor joined to a sequence from the long fragment, can then be used to create tagged subfragments of the long fragment as described above (FIG. 4A).


The method described in FIGS. 4A and 4B results in PCR amplicons that are, effectively, tagged subfragments of the long DNA fragments. This is advantageous if short-read sequencing methods are used. There are a variety of ways to create such a series of fragments.


For example, it is possible to create a series of such subfragments with shorter and shorter regions of the long DNA fragments as shown in FIG. 4E. Starting with the blunt-ended primer-extended tagged subfragment that results from PCR amplification, a 3′ adaptor is ligated to the tagged subfragments. One end of the adaptor includes an overhang; the other end is a blunt end that includes a blocked nucleotide (e.g., a ddNTP). After ligation of the 3′ adaptor, the subfragment is denatured and another round of primer extension is performed using controlled nick translation. The primer extension is stopped before completion such that the primer does not extend all the way to the end of the complementary strand. A 3′ adaptor is ligated to the end of the extended strand. This process can be repeated as many times as desired with the extent of primer extension varied in order to create a series of fragments having a common 5′ end that are shortened on their 3′ ends. Details of the blocked adaptor strategy and of controlled nick translation are provided, for example, in U.S. patent application Ser. No. 12/329,365 (published as U.S. 2012-0100534 A1) and Ser. No. 12/573,697 (published as US-2010-0105052-A1).


Another approach to creating a series of such subfragments with shorter and shorter regions of the long DNA fragments is shown in FIG. 4F. This approach also uses controlled nick translation. Subfragments are circularized and then split into two or more separate wells. Controlled nick translation is performed to a different extent in the various wells in order to create subfragments with a common 5′ end that are shortened on their 3′ ends to various degrees. The subfragments can then be pooled and the process continued.


Producing Subfragments of Tagged Long Fragments

After tagging, the long fragments of the target nucleic acid are subfragmented to a desired size by amplification (e.g., by PCR), restriction enzyme digestion (e.g., using a rare cutter that has a recognition site within a tag-containing sequence introduced into long fragments), or by other conventional techniques, including enzymatic digestion, shearing, sonication, etc.


Subfragment sizes can vary depending on the source target nucleic acid and the library construction methods used, but for standard whole-genome sequencing such fragments typically range from 50 to 2000 nucleotides in length. In another embodiment, the fragments are 300 to 600 or 200 to 2000 nucleotides in length. In yet another embodiment, the fragments are 10-100, 50-100, 50-300, 100-200, 200-300, 50-400, 100-400, 200-400, 300-400, 400-500, 400-600, 500-600, 50-1000, 100-1000, 200-1000, 300-1000, 400-1000, 500-1000, 600-1000, 700-1000, 700-900, 700-800, 800-1000, 900-1000, 1500-2000, 1750-2000, and 50-2000 nucleotides in length.


In a further embodiment, fragments of a particular size or in a particular range of sizes are isolated. Such methods are well known in the art. For example, gel fractionation can be used to produce a population of fragments of a particular size within a range of base pairs, for example 500 base pairs ± 50 base pairs.


Starting with about 5 to about 1,000,000 genome-equivalents of long fragment DNA ensures that the population of long fragments covers the entire genome. Libraries containing nucleic acid templates generated from such a population of overlapping fragments will provide most or all of the sequence of an entire genome.


Amplification

Before or after any step outlined herein, an amplification step can be used to ensure that enough of the nucleic acid is available for subsequent steps. According to one embodiment of the invention, methods are provided for sequencing small quantities of complex nucleic acids, including those of higher organisms, in which such complex nucleic acids are amplified in order to produce sufficient nucleic acids for sequencing by the methods described herein. A single human cell includes approximately 6.6 picograms (pg) of genomic DNA. Sequencing of complex nucleic acids of a higher organism can be accomplished using 1 pg, 5 pg, 10 pg, 30 pg, 50 pg, 100 pg, or 1 ng or more of a complex nucleic acid as the starting material, which is amplified by any nucleic acid amplification method known in the art, to produce, for example, 200 ng, 400 ng, 600 ng, 800 ng, 1 μg, 2 μg, 3 μg, 4 μg, 5 μg, 10 μg or greater quantities of the complex nucleic acid. We also disclose nucleic acid amplification protocols that minimize GC bias. However, the need for amplification and subsequent GC bias can be reduced further simply by isolating one cell or a small number of cells, culturing them for a sufficient time under suitable culture conditions known in the art, and using progeny of the starting cell or cells for sequencing.


Such amplification methods include without limitation: multiple displacement amplification (MDA), polymerase chain reaction (PCR), ligation chain reaction (sometimes referred to as oligonucleotide ligase amplification OLA), cycling probe technology (CPT), strand displacement assay (SDA), transcription mediated amplification (TMA), nucleic acid sequence based amplification (NASBA), rolling circle amplification (RCA) (for circularized fragments), and invasive cleavage technology.


Amplification can be performed after fragmenting or before or after any step outlined herein.


MDA amplification protocol with reduced GC bias. In one aspect, the present invention provides methods of nucleic acid amplification in which the nucleic acid is faithfully amplified, e.g., approximately 30,000-fold depending on the amount of starting DNA.


According to one embodiment of MT methods of the present invention, MT begins with treatment of genomic nucleic acids, usually genomic DNA, with a 5′ exonuclease to create 3′ single-stranded overhangs. Such single stranded overhangs serve as MDA initiation sites. Use of the exonuclease also eliminates the need for a heat or alkaline denaturation step prior to amplification without introducing bias into the population of fragments. In another embodiment, alkaline denaturation is combined with the 5′ exonuclease treatment, which results in a reduction in bias that is greater than what is seen with either treatment alone. The fragments are then amplified.


In one embodiment, a phi29-based multiple displacement amplification (MDA) is used. Numerous studies have examined the range of unwanted amplification biases, background product formation, and chimeric artifacts introduced via phi29-based MDA, but many of these shortcomings have occurred under extreme conditions of amplification (greater than 1 million-fold). Commonly, MT employs a substantially lower level of amplification and starts with long DNA fragments (e.g., ˜100 kb), resulting in efficient MDA and a more acceptable level of amplification biases and other amplification-related problems.


We have developed an improved MDA protocol to overcome problems associated with MDA; the protocol uses various additives (e.g., DNA modifying enzymes, sugars, and/or chemicals such as DMSO), and/or different components of the MDA reaction conditions are reduced, increased, or substituted to further improve the protocol. To minimize chimeras, reagents can also be included to reduce the availability of the displaced single-stranded DNA to act as an incorrect template for the extending DNA strand, a common mechanism of chimera formation. A major source of coverage bias introduced by MDA is the difference in amplification between GC-rich versus AT-rich regions. This can be corrected by using different reagents in the MDA reaction and/or by adjusting the primer concentration to create an environment for even priming across all % GC regions of the genome. In some embodiments, random hexamers are used to prime MDA. In other embodiments, other primer designs are utilized to reduce bias. In further embodiments, use of a 5′ exonuclease before or during MDA can help initiate low-bias successful priming, particularly with longer (i.e., 200 kb to 1 Mb) fragments that are useful for sequencing regions characterized by long segmental duplications (i.e., in some cancer cells) and complex repeats.


In some embodiments, improved, more efficient fragmentation and ligation steps are used that reduce the number of rounds of MDA amplification required for preparing samples by as much as 10,000 fold, which further reduces bias and chimera formation resulting from MDA.


In some embodiments, the MDA reaction is designed to introduce uracils into the amplification products in preparation for CORE fragmentation. In some embodiments, a standard MDA reaction utilizing random hexamers is used to amplify the fragments in each well; alternatively, random 8-mer primers can be used to reduce amplification bias (e.g., GC-bias) in the population of fragments. In further embodiments, several different enzymes can also be added to the MDA reaction to reduce the bias of the amplification. For example, low concentrations of non-processive 5′ exonucleases and/or single-stranded binding proteins can be used to create binding sites for the 8-mers. Chemical agents such as betaine, DMSO, and trehalose can also be used to reduce bias.


After amplification of the nucleic acids in a sample, the amplification products may optionally be fragmented. In some embodiments, the CoRE method is used to further fragment the fragments following amplification. In such embodiments, MDA amplification of fragments is designed to incorporate uracils into the MDA products. The MDA product is then treated with a mix of uracil DNA glycosylase (UDG), DNA glycosylase-lyase endonuclease VIII, and T4 polynucleotide kinase to excise the uracil bases and create single-base gaps with functional 5′ phosphate and 3′ hydroxyl groups. Nick translation using a polymerase such as Taq polymerase results in double-stranded blunt-end breaks, yielding ligatable fragments of a size range dependent on the concentration of dUTP added to the MDA reaction. In some embodiments, the CoRE method used involves removing uracils by polymerization and strand displacement by phi29. The fragmenting of the MDA products can also be achieved via sonication or enzymatic treatment. Enzymatic treatments that could be used in this embodiment include, without limitation, DNase I, T7 endonuclease I, micrococcal nuclease, and the like.
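The dependence of fragment size on dUTP concentration can be illustrated with a simplified model (an editorial sketch only, not a parameter of this disclosure): if uracils are incorporated independently wherever a T would be, with probability equal to the dUTP fraction of the dUTP/dTTP pool, the expected fragment length after uracil excision and nick translation is roughly the mean spacing between uracils.

# Simplified back-of-the-envelope model (not the patented protocol) relating the
# dUTP fraction in an MDA reaction to the expected CoRE fragment size. Assumes
# uracils are incorporated independently at T positions with probability equal
# to the dUTP fraction of the dUTP+dTTP pool.

def expected_fragment_length(dutp_fraction: float, t_content: float = 0.30) -> float:
    """Mean distance (in bases) between incorporated uracils, i.e., the expected
    average fragment length after uracil excision and nick translation."""
    per_base_u_probability = dutp_fraction * t_content
    return 1.0 / per_base_u_probability

# e.g., with 1% dUTP and ~30% T content, a uracil occurs roughly every ~330 bases,
# so CoRE fragmentation would yield fragments averaging a few hundred bp.
print(round(expected_fragment_length(0.01)))  # ~333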


Following fragmentation of the MDA products, the ends of the resultant fragments may be repaired. Many fragmentation techniques can result in termini with overhanging ends and termini lacking the functional groups required for later ligation reactions (a 5′ phosphate and a 3′ hydroxyl group). It may be useful to have fragments that are repaired to have blunt ends. It may also be desirable to modify the termini to add or remove phosphate and hydroxyl groups to prevent “polymerization” of the target sequences. For example, a phosphatase can be used to eliminate phosphate groups, such that all ends contain hydroxyl groups. Each end can then be selectively altered to allow ligation between the desired components. One end of the fragments can then be “activated” by treatment with alkaline phosphatase.


Nucleic Acid Sequencing

MT methods described herein can be used as a pre-processing step for sequencing diploid genomes using any sequencing method known in the art, including, for example and without limitation, polymerase-based sequencing-by-synthesis (e.g., HiSeq 2500 system, Illumina, San Diego, CA), ligation-based sequencing (e.g., SOLiD 5500, Life Technologies Corporation, Carlsbad, CA), ion semiconductor sequencing (e.g., Ion PGM or Ion Proton sequencers, Life Technologies Corporation, Carlsbad, CA), zero-mode waveguides (e.g., PacBio RS sequencer, Pacific Biosciences, Menlo Park, CA), nanopore sequencing (e.g., Oxford Nanopore Technologies Ltd., Oxford, United Kingdom), pyrosequencing (e.g., 454 Life Sciences, Branford, CT), or other sequencing technologies. Some of these sequencing technologies are short-read technologies, but others produce longer reads, e.g., the GS FLX+ (454 Life Sciences; up to 1000 bp), PacBio RS (Pacific Biosciences; approximately 1000 bp), and nanopore sequencing (Oxford Nanopore Technologies Ltd.; 100 kb). For haplotype phasing, longer reads are advantageous and require much less computation, although they tend to have a higher error rate; errors in such long reads may need to be identified and corrected according to methods set forth herein before haplotype phasing.


According to one embodiment, sequencing is performed using combinatorial probe-anchor ligation (cPAL) as described, for example, in U.S. Patent Application Publications US 2010/0105052 and US 2007/0099208, and U.S. patent application Ser. Nos. 13/448,279, 13/447,087, 11/679,124 (published as US 2009/0264299); Ser. No. 11/981,761 (US 2009/0155781); Ser. No. 11/981,661 (US 2009/0005252); Ser. No. 11/981,605 (US 2009/0011943); Ser. No. 11/981,793 (US 2009/0118488); Ser. No. 11/451,691 (US 2007/0099208); Ser. No. 11/981,607 (US 2008/0234136); Ser. No. 11/981,767 (US 2009/0137404); Ser. No. 11/982,467 (US 2009/0137414); Ser. No. 11/451,692 (US 2007/0072208); Ser. No. 11/541,225 (US 2010/0081128); Ser. No. 11/927,356 (US 2008/0318796); Ser. No. 11/927,388 (US 2009/0143235); Ser. No. 11/938,096 (US 2008/0213771); Ser. No. 11/938,106 (US 2008/0171331); Ser. No. 10/547,214 (US 2007/0037152); Ser. No. 11/981,730 (US 2009/0005259); Ser. No. 11/981,685 (US 2009/0036316); Ser. No. 11/981,797 (US 2009/0011416); Ser. No. 11/934,695 (US 2009/0075343); Ser. No. 11/934,697 (US 2009/0111705); Ser. No. 11/934,703 (US 2009/0111706); Ser. No. 12/265,593 (US 2009/0203551); Ser. No. 11/938,213 (US 2009/0105961); Ser. No. 11/938,221 (US 2008/0221832); Ser. No. 12/325,922 (US 2009/0318304); Ser. No. 12/252,280 (US 2009/0111115); Ser. No. 12/266,385 (US 2009/0176652); Ser. No. 12/335,168 (US 2009/0311691); Ser. No. 12/335,188 (US 2009/0176234); Ser. No. 12/361,507 (US 2009/0263802); Ser. No. 11/981,804 (US 2011/0004413); and Ser. No. 12/329,365; published international patent application numbers WO2007120208, WO2006073504, and WO2007133831, all of which are incorporated herein by reference in their entirety for all purposes.


Exemplary methods for calling variations in a polynucleotide sequence compared to a reference polynucleotide sequence and for polynucleotide sequence assembly (or reassembly), for example, are provided in U.S. patent publication No. 2011/0004413 (application Ser. No. 12/770,089), which is incorporated herein by reference in its entirety for all purposes. See also Drmanac et al., Science 327:78-81, 2010. Also incorporated by reference in their entirety and for all purposes are copending related application No. 61/623,876, entitled “Identification Of DNA Fragments And Structural Variations,” and Ser. No. 13/447,087, entitled “Processing and Analysis of Complex Nucleic Acid Sequence Data.”


Definitions

As used herein, the term “complex nucleic acid” refers to large populations of nonidentical nucleic acids or polynucleotides. In certain embodiments, the target nucleic acid is genomic DNA; exome DNA (a subset of whole genomic DNA enriched for transcribed sequences which contains the set of exons in a genome); a transcriptome (i.e., the set of all mRNA transcripts produced in a cell or population of cells, or cDNA produced from such mRNA); a methylome (i.e., the population of methylated sites and the pattern of methylation in a genome); an exome (i.e., protein-coding regions of a genome selected by an exon capture or enrichment method); a microbiome; a mixture of genomes of different organisms; a mixture of genomes of different cell types of an organism; and other complex nucleic acid mixtures comprising large numbers of different nucleic acid molecules (examples include, without limitation, a microbiome, a xenograft, a solid tumor biopsy comprising both normal and tumor cells, etc.), including subsets of the aforementioned types of complex nucleic acids. In one embodiment, such a complex nucleic acid has a complete sequence comprising at least one gigabase (Gb) (a diploid human genome comprises approximately 6 Gb of sequence).


Nonlimiting examples of complex nucleic acids include “circulating nucleic acids” (CNA), which are nucleic acids circulating in human blood or other body fluids, including but not limited to lymphatic fluid, liquor, ascites, milk, urine, stool and bronchial lavage, for example, and can be distinguished as either cell-free (CF) or cell-associated nucleic acids (reviewed in Pinzani et al., Methods 50:302-307, 2010); examples include nucleic acids from circulating fetal cells in the bloodstream of an expecting mother (see, e.g., Kavanagh et al., J. Chromatogr. B 878:1905-1911, 2010) or circulating tumor cells (CTC) from the bloodstream of a cancer patient (see, e.g., Allard et al., Clin Cancer Res. 10:6897-6904, 2004). Another example is genomic DNA from a single cell or a small number of cells, such as, for example, from biopsies (e.g., fetal cells biopsied from the trophectoderm of a blastocyst; cancer cells from needle aspiration of a solid tumor; etc.). Another example is nucleic acid from pathogens, e.g., bacterial cells, viruses, or other pathogens, in a tissue, in blood or other body fluids, etc.


As used herein, the term “target nucleic acid” (or polynucleotide) or “nucleic acid of interest” refers to any nucleic acid (or polynucleotide) suitable for processing and sequencing by the methods described herein. The nucleic acid may be single stranded or double stranded and may include DNA, RNA, or other known nucleic acids. The target nucleic acids may be those of any organism, including but not limited to viruses, bacteria, yeast, plants, fish, reptiles, amphibians, birds, and mammals (including, without limitation, mice, rats, dogs, cats, goats, sheep, cattle, horses, pigs, rabbits, monkeys and other non-human primates, and humans). A target nucleic acid may be obtained from an individual or from multiple individuals (i.e., a population). A sample from which the nucleic acid is obtained may contain nucleic acids from a mixture of cells or even organisms, such as: a human saliva sample that includes human cells and bacterial cells; a mouse xenograft that includes mouse cells and cells from a transplanted human tumor; etc.


Target nucleic acids may be unamplified or they may be amplified by any suitable nucleic acid amplification method known in the art. Target nucleic acids may be purified according to methods known in the art to remove cellular and subcellular contaminants (lipids, proteins, carbohydrates, nucleic acids other than those to be sequenced, etc.), or they may be unpurified, i.e., include at least some cellular and subcellular contaminants, including without limitation intact cells that are disrupted to release their nucleic acids for processing and sequencing. Target nucleic acids can be obtained from any suitable sample using methods known in the art. Such samples include but are not limited to: tissues, isolated cells or cell cultures, bodily fluids (including, but not limited to, blood, urine, serum, lymph, saliva, anal and vaginal secretions, perspiration and semen); air, agricultural, water and soil samples, etc.


High coverage in shotgun sequencing is desired because it can overcome errors in base calling and assembly. As used herein, for any given position in an assembled sequence, the term “sequence coverage redundancy,” “sequence coverage” or simply “coverage” means the number of reads representing that position. It can be calculated from the length of the original genome (G), the number of reads (N), and the average read length (L) as N×L/G. Coverage can also be calculated directly by making a tally of the bases for each reference position. For a whole-genome sequence, coverage is expressed as an average for all bases in the assembled sequence. Coverage is often expressed as “fold coverage,” for example, as in “40-fold (or 40×) coverage,” meaning that each base in the final assembled sequence is represented, on average, in 40 reads.
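The following Python sketch illustrates the two calculations just described (the function names are ours, and production pipelines use more efficient interval arithmetic; this is a minimal illustration only):

# Minimal sketch of the two coverage calculations described above.

def average_coverage(num_reads: int, avg_read_length: float, genome_length: int) -> float:
    """Average sequence coverage computed as N x L / G."""
    return num_reads * avg_read_length / genome_length

def per_position_coverage(read_intervals, genome_length: int):
    """Direct tally: count the reads covering each reference position.
    read_intervals is an iterable of (start, end) 0-based half-open intervals."""
    depth = [0] * genome_length
    for start, end in read_intervals:
        for pos in range(max(start, 0), min(end, genome_length)):
            depth[pos] += 1
    return depth

# e.g., 2.4 billion 100-base reads over a 6 Gb diploid genome give 40x coverage
print(average_coverage(2_400_000_000, 100, 6_000_000_000))  # 40.0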


As used herein, the term “call rate” means the percentage of bases of the complex nucleic acid that are fully called, commonly with reference to a suitable reference sequence such as, for example, a reference genome. Thus, for a whole human genome, the “genome call rate” (or simply “call rate”) is the percentage of the bases of the human genome that are fully called with reference to a whole human genome reference. An “exome call rate” is the percentage of the bases of the exome that are fully called with reference to an exome reference. An exome sequence may be obtained by sequencing portions of a genome that have been enriched by various known methods that selectively capture genomic regions of interest from a DNA sample prior to sequencing. Alternatively, an exome sequence may be obtained by sequencing a whole human genome, which includes exome sequences. Thus, a whole human genome sequence may have both a “genome call rate” and an “exome call rate.” There is also a “raw read call rate” that reflects the number of bases that receive an A/C/G/T designation relative to the total number of attempted bases. (Occasionally, the term “coverage” is used in place of “call rate,” but the meaning will be apparent from the context.)


As used herein, the term “haplotype” means a combination of alleles at adjacent locations (loci) on a chromosome that are transmitted together or, alternatively, a set of sequence variants on a single chromosome of a chromosome pair that are statistically associated. Every human individual has two sets of chromosomes, one paternal and the other maternal. Usually, DNA sequencing yields only genotypic information, the sequence of unordered alleles along a segment of DNA. Inferring the haplotypes for a genotype separates the alleles in each unordered pair into two separate sequences, each called a haplotype. Haplotype information is necessary for many different types of genetic analysis, including disease association studies and making inferences about population ancestries.


As used herein, the term “phasing” (or resolution) means sorting sequence data into the two sets of parental chromosomes or haplotypes. Haplotype phasing refers to the problem of receiving as input a set of genotypes for one individual or a population, i.e., more than one individual, and outputting a pair of haplotypes for each individual, one being paternal and the other maternal. Phasing can involve resolving sequence data over a region of a genome, or as little as two sequence variants in a read or contig, which may be referred to as local phasing, or microphasing. It can also involve phasing of longer contigs, generally including greater than about ten sequence variants, or even a whole genome sequence, which may be referred to as “universal phasing.” Optionally, phasing sequence variants takes place during genome assembly.


As used herein, the term “transposon” or “transposable element” means a DNA sequence that can change its position within the genome. In a classic transposition reaction, a transposase catalyzes the random insertion of excised transposons into DNA targets. During cut-and-paste transposition, a transposase makes random, staggered double-stranded breaks in the target DNA and covalently attaches the 3′ end of the transferred transposon strand to the 5′ end of the target DNA. The transposase/transposon complex inserts an arbitrary DNA sequence at the point of insertion of the transposon into the target nucleic acid. Transposons that insert randomly into the target nucleic acid sequence are preferred. Several transposons have been described and used in in vitro transposition systems, for example, in the Nextera™ technology (Nature Methods 6, November 2009; Epicentre Biotechnologies, Madison, WI). The entire complex is not necessary for insertion; free transposon ends are sufficient for integration. When free transposon ends are used, the target DNA is fragmented and the transferred strand of the transposon end oligonucleotide is covalently attached to the 5′ end of the target fragment. The transposon ends can be modified by addition of desired sequences, such as PCR primer binding sites, bar codes/tags, etc. The size distribution of the fragments can be controlled by changing the amounts of transposase and transposon ends. Exploiting transposon ends with appended sequences results in DNA libraries that can be used in high-throughput sequencing.


Use of Microdroplets and Emulsions

In some embodiments, the methods of the present invention are performed in emulsion or microfluidic devices.


A reduction of volumes down to picoliter levels can achieve an even greater reduction in reagent and computational costs. In some embodiments, this level of cost reduction is accomplished through the combination of the MT process with emulsion or microfluidic devices. The ability to perform all enzymatic steps in the same reaction without DNA purification facilitates miniaturization and automation of this process and makes it adaptable to a wide variety of platforms and sample preparation methods.


Recent studies have also suggested an improvement in GC bias after amplification (e.g., by MDA) and a reduction in background amplification by decreasing the reaction volumes down to nanoliter size.


There are currently several types of microfluidic devices (e.g., devices sold by Advanced Liquid Logic, Morrisville, NC) and pico-/nanodroplet devices (e.g., RainDance Technologies, Lexington, MA) that have pico-/nanodrop making, fusing (3,000/second), and collecting functions and could be used in such embodiments of MT.


Amplifying

According to one embodiment, the MT process begins with a short treatment of genomic DNA with a 5′ exonuclease to create 3′ single-stranded overhangs that serve as MDA initiation sites. The use of the exonuclease eliminates the need for a heat or alkaline denaturation step prior to amplification without introducing bias into the population of fragments. Alkaline denaturation can be combined with the 5′ exonuclease treatment, which results in a further reduction in bias. The fragments are amplified, e.g., using an MDA method. In certain embodiments, the MDA reaction is a modified phi29 polymerase-based amplification reaction, although another known amplification method can be used.


In some embodiments, the MDA reaction is designed to introduce uracils into the amplification products. In some embodiments, a standard MDA reaction utilizing random hexamers is used to amplify the fragments in each well. In many embodiments, rather than the random hexamers, random 8-mer primers are used to reduce amplification bias in the population of fragments. In further embodiments, several different enzymes can also be added to the MDA reaction to reduce the bias of the amplification. For example, low concentrations of non-processive 5′ exonucleases and/or single-stranded binding proteins can be used to create binding sites for the 8-mers. Chemical agents such as betaine, DMSO, and trehalose can also be used to reduce bias through similar mechanisms.


Fragmentation

According to one embodiment, after amplification of the DNA, the amplification products, or amplicons, are subjected to a round of fragmentation. In some embodiments, the CoRE method is used to further fragment the fragments in each well following amplification. In order to use the CoRE method, the MDA reaction used to amplify the fragments in each well is designed to incorporate uracils into the MDA products. The fragmenting of the MDA products can also be achieved via sonication or enzymatic treatment.


If a CoRE method is used to fragment the MDA products, amplified DNA is treated with a mix of uracil DNA glycosylase (UDG), DNA glycosylase-lyase endonuclease VIII, and T4 polynucleotide kinase to excise the uracil bases and create single-base gaps with functional 5′ phosphate and 3′ hydroxyl groups. Nick translation using a polymerase such as Taq polymerase results in double-stranded blunt-end breaks, yielding ligatable fragments of a size range dependent on the concentration of dUTP added to the MDA reaction. In some embodiments, the CoRE method used involves removing uracils by polymerization and strand displacement by phi29.


Following fragmentation of the MDA products, the ends of the resultant fragments can be repaired. Such repairs can be necessary because many fragmentation techniques can result in termini with overhanging ends and termini lacking the functional groups required for later ligation reactions (a 5′ phosphate and a 3′ hydroxyl group). In many aspects of the present invention, it is useful to have fragments that are repaired to have blunt ends, and in some cases it can be desirable to alter the chemistry of the termini such that the correct orientation of phosphate and hydroxyl groups is not present, thus preventing “polymerization” of the target sequences. The control over the chemistry of the termini can be provided using methods known in the art. For example, in some circumstances, the use of a phosphatase eliminates all the phosphate groups, such that all ends contain hydroxyl groups. Each end can then be selectively altered to allow ligation between the desired components. One end of the fragments can then be “activated,” in some embodiments by treatment with alkaline phosphatase.


MT Using One or a Small Number of Cells as the Source of Complex Nucleic Acids

According to one embodiment, an MT method is used to analyze the genome of an individual cell or a small number of cells (or a similar number of nuclei isolated from cells). The process for isolating DNA in this case is similar to the methods described above, but may occur in a smaller volume.


As discussed above, isolating long fragments of genomic nucleic acid from a cell can be accomplished by a number of different methods. In one embodiment, cells are lysed and the intact nuclei are pelleted with a gentle centrifugation step. The genomic DNA is then released through proteinase K and RNase digestion for several hours. The material can then, in some embodiments, be treated to lower the concentration of remaining cellular waste; such treatments are well known in the art and can include, without limitation, dialysis for a period of time (e.g., from 2-16 hours) and/or dilution. Because such methods of isolating the nucleic acid do not involve many disruptive processes (such as ethanol precipitation, centrifugation, and vortexing), the genomic nucleic acid remains largely intact, yielding a majority of fragments that have lengths in excess of 150 kilobases. In some embodiments, the fragments are from about 100 to about 750 kilobases in length. In further embodiments, the fragments are from about 150 to about 600, about 200 to about 500, about 250 to about 400, or about 300 to about 350 kilobases in length.


Once isolated, the genomic DNA must be carefully fragmented to avoid loss of material, particularly to avoid loss of sequence from the ends of each fragment, since loss of such material will result in gaps in the final genome assembly. In some cases, sequence loss is avoided through use of an infrequent nicking enzyme, which creates starting sites for a polymerase, such as phi29 polymerase, at distances of approximately 100 kb from each other. As the polymerase creates the new DNA strand, it displaces the old strand, with the end result being that there are overlapping sequences near the sites of polymerase initiation, resulting in very few deletions of sequence.


In some embodiments, a controlled use of a 5′ exonuclease (either before or during the MDA reaction) can promote multiple replications of the original DNA from the single cell and thus minimize propagation of early errors through copying of copies.


In one aspect, methods of the present invention produce quality genomic data from single cells. Assuming no loss of DNA, there is a benefit to starting with a low number of cells (10 or fewer) instead of using an equivalent amount of DNA from a large preparation. Starting with fewer than 10 cells ensures uniform coverage in long fragments of any given region of the genome. Starting with five or fewer cells allows four times or greater coverage for each 100 kb DNA fragment without increasing the total number of reads above 120 Gb (20 times coverage of a 6 Gb diploid genome). However, a large number of longer DNA fragments (100 kb or longer) is even more important for sequencing from a few cells, because for any given sequence there are only as many overlapping fragments as the number of starting cells, and the occurrence of overlapping fragments from both parental chromosomes can represent a substantial loss of information.
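One way to arrive at these figures is the following illustrative calculation (an editorial sketch only, assuming no DNA loss and exactly one ~100 kb fragment covering each locus per haplotype per starting cell; the variable names are ours):

# Illustrative arithmetic for the coverage claim above.

total_read_bases = 120e9        # 120 Gb of sequence reads
diploid_genome = 6e9            # 6 Gb diploid genome -> 20x diploid coverage
haploid_reference = 3e9         # reads map to a ~3 Gb haploid reference
n_cells = 5                     # starting cells

diploid_coverage = total_read_bases / diploid_genome               # 20x
fragments_per_locus = 2 * n_cells                                   # one fragment per haplotype per cell
per_fragment_coverage = (total_read_bases / haploid_reference) / fragments_per_locus

print(diploid_coverage, per_fragment_coverage)   # 20.0 4.0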


The first step in MT is generally low-bias whole genome amplification, which can be of particular use in single cell genomic analysis. Due to DNA strand breaks and DNA losses in handling, even single molecule sequencing methods would likely require some level of DNA amplification from the single cell. The difficulty in sequencing single cells comes from attempting to amplify the entire genome. Studies performed on bacteria using MDA have suffered from loss of approximately half of the genome in the final assembled sequence, with a fairly high amount of variation in coverage across the sequenced regions. This can partially be explained as a result of the initial genomic DNA having nicks and strand breaks which cannot be replicated at the ends and are thus lost during the MDA process. MT provides a solution to this problem through the creation of long overlapping fragments of the genome prior to MDA. According to one embodiment of the invention, in order to achieve this, a gentle process is used to isolate genomic DNA from the cell. The largely intact genomic DNA is then lightly treated with a frequent nickase, resulting in a semi-randomly nicked genome. The strand-displacing ability of phi29 is then used to polymerize from the nicks, creating very long (>200 kb) overlapping fragments. These fragments are then used as starting templates for MT.


Methylation Analysis Using MT

In a further aspect, methods and compositions of the present invention are used for genomic methylation analysis. There are several methods currently available for global genomic methylation analysis. One method involves bisulfite treatment of genomic DNA and sequencing of repetitive elements or a fraction of the genome obtained by methylation-specific restriction enzyme fragmenting. This technique yields information on total methylation, but provides no locus-specific data. The next higher level of resolution uses DNA arrays and is limited by the number of features on the chip. Finally, the highest-resolution and most expensive approach requires bisulfite treatment followed by sequencing of the entire genome. Using MT, it is possible to sequence all bases of the genome and assemble a complete diploid genome with digital information on levels of methylation for every cytosine position in the human genome (i.e., 5-base sequencing). Further, MT allows blocks of methylated sequence of 100 kb or greater to be linked to sequence haplotypes, providing methylation haplotyping, information that is impossible to achieve with any currently available method.


In one non-limiting exemplary embodiment, methylation status is obtained in a method in which genomic DNA is first denatured for MDA. Next the DNA is treated with bisulfite (a step that requires denatured DNA). The remaining preparation follows those methods described for example in U.S. application Ser. No. 11/451,692, filed on Jun. 13, 2006 (published as US 2007/0072208) and Ser. No. 12/335,168, filed on Dec. 15, 2008 (published as US 2009/0311691), each of which is hereby incorporated by reference in its entirety for all purposes and in particular for all teachings related to nucleic acid analysis of mixtures of fragments according to long fragment read techniques.


In one aspect, MDA will amplify each strand of a specific fragment independently, yielding for any given cytosine position 50% of the reads as unaffected by bisulfite (i.e., the base opposite cytosine, a guanine, is unaffected by bisulfite) and 50% providing methylation status. Reduced DNA complexity helps with accurate mapping and assembly of the less informative, mostly 3-base (A, T, G) reads.


Bisulfite treatment has been reported to fragment DNA. However, careful titration of denaturation and bisulfite buffers can avoid excessive fragmenting of genomic DNA. A 50% conversion of cytosine to uracil can be tolerated in MT, allowing a reduction in exposure of the DNA to bisulfite to minimize fragmenting. In some embodiments, some degree of fragmenting is acceptable, as it would not affect haplotyping.


Using MT for Analysis of Cancer Genomes

It has been suggested that more than 90% of cancers harbor significant losses or gains in regions of the human genome, termed aneuploidy, with some individual cancers having been observed to contain in excess of four copies of some chromosomes. This increased complexity in copy number of chromosomes and regions within chromosomes makes sequencing cancer genomes substantially more difficult. The ability of MT techniques to sequence and assemble very long (>100 kb) fragments of the genome makes it well suited for the sequencing of complete cancer genomes.


Error-Reduction by Sequencing a Target Nucleic Acid

According to one embodiment, even if MT-based phasing is not performed and a standard sequencing approach is used, a target nucleic acid is fragmented (if necessary), and the fragments are tagged before amplification. An advantage of MT is that errors introduced as a result of amplification (or other steps) can be identified and corrected by comparing the sequence obtained from multiple overlapping long fragments. For example, a base call (e.g., identifying a particular base such as A, C, G, or T) at a particular position (e.g., with respect to a reference) of the sequence data can be accepted as true if the base call is present in sequence data from two or more long fragments (or other threshold number), or in a substantial majority of long fragments (e.g., in at least 51, 60, 70, or 80 percent), where the denominator can be restricted to the fragments having a base call at the particular position. A base call can include changing one allele of a het or potential het. A base call at the particular position can be accepted as false if it is present in only one long fragment (or other threshold number of long fragments), or in a substantial minority of long fragments (e.g., fewer than 10, 5, or 3 fragments, or as measured with a relative number, such as 20 or 10 percent). The threshold values can be predetermined or dynamically determined based on the sequencing data. A base call at the particular position may be converted to or accepted as a “no call” if it is present in more than a substantial minority but less than a substantial majority of the expected fragments (e.g., in 40-60 percent). In some embodiments and implementations, various parameters may be used (e.g., in distribution, probability, and/or other functions or statistics) to characterize what may be considered a substantial minority or a substantial majority of fragments. Examples of such parameters include, without limitation, one or more of: the number of base calls identifying a particular base; coverage or total number of called bases at a particular position; the number and/or identities of distinct fragments that gave rise to sequence data that includes a particular base call; the total number of distinct fragments that gave rise to sequence data that includes at least one base call at a particular position; the reference base at the particular position; and others. In one embodiment, a combination of the above parameters for a particular base call can be input to a function to determine a score (e.g., a probability) for the particular base call. The scores can be compared to one or more threshold values as part of determining whether a base call is accepted (e.g., above a threshold), in error (e.g., below a threshold), or a no call (e.g., if all of the scores for the base calls are below a threshold). The determination of a base call can be dependent on the scores of the other base calls.


As one basic example, if a base call of A is found in more than 35% (an example of a score) of the fragments that contain a read for the position of interest, a base call of C is found in more than 35% of these fragments, and the other base calls each have a score of less than 20%, then the position can be considered a het composed of A and C, possibly subject to other criteria (e.g., a minimum number of fragments containing a read at the position of interest). Thus, each of the scores can be input into another function (e.g., heuristics, which may use comparative or fuzzy logic) to provide the final determination of the base call(s) for the position.
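The following Python sketch is a toy implementation of this fraction-based heuristic. The 0.35 and 0.20 cutoffs are the example values above, and the 0.80 majority cutoff is one of the example majority thresholds mentioned earlier; the function name, the minimum-fragment check, and all other details are illustrative assumptions, not part of this disclosure.

from collections import Counter

def call_position(fragment_base_calls, min_fragments: int = 4):
    """fragment_base_calls: one consensus base call ('A'/'C'/'G'/'T') per long
    fragment that has a read covering the position of interest.
    Returns a set of accepted alleles, or None for a no-call."""
    n = len(fragment_base_calls)
    if n < min_fragments:                      # too little evidence at this position
        return None
    scores = {b: c / n for b, c in Counter(fragment_base_calls).items()}
    strong = {b for b, s in scores.items() if s > 0.35}
    weak_ok = all(s < 0.20 for b, s in scores.items() if b not in strong)
    if len(strong) == 2 and weak_ok:
        return strong                          # heterozygous call, e.g., {'A', 'C'}
    if len(strong) == 1 and max(scores.values()) > 0.80:
        return strong                          # homozygous call (substantial majority)
    return None                                # ambiguous evidence -> no call

# e.g., 5 fragments report A, 4 report C, 1 reports G -> het {'A', 'C'}
print(call_position(list("AAAAACCCCG")))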


As another example, a specific number of fragments containing a base call may be used as a threshold. For instance, when analyzing a cancer sample, there may be low-prevalence somatic mutations. In such a case, the base call may appear in less than 10% of the fragments covering the position, but the base call may still be considered correct, possibly subject to other criteria. Thus, various embodiments can use absolute numbers or relative numbers, or both (e.g., as inputs into comparative or fuzzy logic). Such numbers of fragments can be input into a function (as mentioned above), along with thresholds corresponding to each number, and the function can provide a score, which can be compared to one or more thresholds to make a final determination as to the base call at the particular position.


A further example of an error correction function relates to sequencing errors in raw reads leading to a putative variant call inconsistent with other variant calls and their haplotypes. If 20 reads of variant A are found in 9 and 8 fragments belonging to the respective haplotypes, and 7 reads of variant G are found in 6 wells (5 or 6 of which are shared with fragments with A-reads), the logic can reject variant G as a sequencing error, because in a diploid genome only one variant can reside at a position in each haplotype. Variant A is supported by substantially more reads, and the G-reads substantially follow fragments with A-reads, indicating that they were most likely generated by wrongly reading G instead of A. If G-reads occur almost exclusively in separate fragments from A, this can indicate that the G-reads are wrongly mapped or come from contaminating DNA.
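A minimal sketch of this haplotype-consistency check follows (the function name and the 50% sharing cutoff are illustrative assumptions, not values from this disclosure):

# Flag a minor variant as a likely read error when its supporting fragments
# largely coincide with fragments supporting the major variant, since only one
# allele can occupy a position on a given haplotype.

def classify_minor_variant(major_fragments: set, minor_fragments: set,
                           shared_cutoff: float = 0.5):
    """major_fragments / minor_fragments: IDs of long fragments (or wells)
    containing reads supporting each variant at the same position."""
    if not minor_fragments:
        return "no minor variant"
    shared = len(major_fragments & minor_fragments) / len(minor_fragments)
    if shared >= shared_cutoff:
        return "likely sequencing error (minor reads track the major allele's fragments)"
    return "likely mismapping or contamination (minor reads occur in separate fragments)"

# Mirroring the example above: A-reads in 17 fragments, G-reads in 6 fragments,
# 5 of which also contain A-reads.
a_frags = set(range(1, 18))
g_frags = {2, 3, 4, 5, 6, 99}
print(classify_minor_variant(a_frags, g_frags))  # likely sequencing error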


Identifying Expansions in Regions with Short Tandem Repeats


A short tandem repeat (STR) in DNA is a segment of DNA with a strong periodic pattern. STRs occur when a pattern of two or more nucleotides is repeated and the repeated sequences are directly adjacent to each other; the repeats may be perfect or imperfect, i.e., there may be a few base pairs that do not match the periodic motif. The pattern generally ranges in length from 2 to 5 base pairs (bp). STRs typically are located in non-coding regions, e.g., in introns. A short tandem repeat polymorphism (STRP) occurs when homologous STR loci differ in the number of repeats between individuals. STR analysis is often used for determining genetic profiles for forensic purposes. STRs occurring in the exons of genes may represent hypermutable regions that are linked to human disease (Madsen et al., BMC Genomics 9:410, 2008).


In human genomes (and genomes of other organisms) STRs include trinucleotide repeats, e.g., CTG or CAG repeats. Trinucleotide repeat expansion, also known as triplet repeat expansion, is caused by slippage during DNA replication and is associated with certain diseases categorized as trinucleotide repeat disorders, such as Huntington disease. Generally, the larger the expansion, the more likely it is to cause disease or increase the severity of disease. This property results in the characteristic of “anticipation” seen in trinucleotide repeat disorders, that is, the tendency of the age of onset of the disease to decrease and the severity of symptoms to increase through successive generations of an affected family due to the expansion of these repeats. Identification of expansions in trinucleotide repeats may be useful for accurately predicting age of onset and disease progression for trinucleotide repeat disorders.


Expansion of STRs such as trinucleotide repeats can be difficult to identify using next-generation sequencing methods. Such expansions may not map and may be missing or underrepresented in libraries. Using MT, it is possible to see a significant drop in sequence coverage in an STR region. For example, a region with STRs will characteristically have a lower level of coverage than regions without such repeats, and there will be a substantial drop in coverage in that region if there is an expansion of the region, observable in a plot of coverage versus position in the genome.


For example, if the sequence coverage is about 20 on average, the region with the expansion will show a significant drop, e.g., to 10 if the affected haplotype has zero coverage in the expansion region. Thus, a 50% drop would occur. However, if the sequence coverage for the two haplotypes is compared, the coverage is 10 in the normal haplotype and 0 in the affected haplotype, which is a drop of 10 but an overall percentage drop of 100%. Alternatively, one can analyze the relative amounts, which is 2:1 (normal coverage vs. coverage in the expansion region) for the combined sequence coverage, but 10:0 (haplotype 1 vs. haplotype 2) for the haplotype-resolved coverage, which is infinity or zero (depending on how the ratio is formed), and thus a large distinction.
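The same comparison, written out as a short Python sketch using the example numbers above (the variable names are ours; this is an illustration of the arithmetic only):

# Combined coverage shows a ~50% dip over the expanded STR region, while
# haplotype-resolved coverage makes the affected haplotype obvious.

normal_hap_coverage = 10.0     # coverage of the unaffected haplotype in the STR region
affected_hap_coverage = 0.0    # expanded repeat fails to map -> zero coverage
flanking_coverage = 20.0       # combined coverage outside the region

combined_in_region = normal_hap_coverage + affected_hap_coverage    # 10
combined_drop = 1 - combined_in_region / flanking_coverage          # 0.5 -> 50% drop

hap_drop = (1 - affected_hap_coverage / normal_hap_coverage
            if normal_hap_coverage else None)                       # 1.0 -> 100% drop

ratio = (normal_hap_coverage / affected_hap_coverage
         if affected_hap_coverage else float("inf"))                # 10:0 -> infinite

print(combined_drop, hap_drop, ratio)   # 0.5 1.0 inf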


Diagnostic Use of Sequence Data

Sequence data generated using the methods of the present invention are useful for a wide variety of purposes. According to one embodiment, sequencing methods of the present invention are used to identify a sequence variation in a sequence of a complex nucleic acid, e.g., a whole genome sequence, that is informative regarding a characteristic or medical status of a patient or of an embryo or fetus, such as the sex of an embryo or fetus or the presence or prognosis of a disease having a genetic component, including, for example, cystic fibrosis, sickle cell anemia, Marfan syndrome, Huntington's disease, and hemochromatosis or various cancers, such as breast cancer, for example. According to another embodiment, the sequencing methods of the present invention are used to provide sequence information beginning with between one and 20 cells from a patient (including but not limited to a fetus or an embryo) and assessing a characteristic of the patient on the basis of the sequence.


Cancer Diagnostics

Whole genome sequencing is a valuable tool in assessing the genetic basis of disease. A number of diseases are known for which there is a genetic basis, e.g., cystic fibrosis, sickle cell anemia, Marfan syndrome, Huntington's disease, hemochromatosis, and various cancers.


One application of whole genome sequencing is understanding cancer. The most significant impact of next-generation sequencing on cancer genomics has been the ability to re-sequence, analyze and compare the matched tumor and normal genomes of a single patient as well as multiple patient samples of a given cancer type. Using whole genome sequencing, the entire spectrum of sequence variations can be considered, including germline susceptibility loci, somatic single nucleotide polymorphisms (SNPs), small insertion and deletion (indel) mutations, copy number variations (CNVs) and structural variants (SVs).


In general, the cancer genome comprises the patient's germline DNA, upon which somatic genomic alterations have been superimposed. Somatic mutations identified by sequencing can be classified either as “driver” or “passenger” mutations. So-called driver mutations are those that directly contribute to tumor progression by conferring a growth or survival advantage to the cell. Passenger mutations encompass neutral somatic mutations that have been acquired through errors in cell division, DNA replication, and repair; these mutations may be acquired while the cell is phenotypically normal or following evidence of a neoplastic change.


Historically, attempts have been made to elucidate the molecular mechanisms of cancer, and several “driver” mutations, or biomarkers, such as HER2/neu, have been identified. Based on such genes, therapeutic regimens have been developed to specifically target tumors with known genetic alterations. The best-defined example of this approach is the targeting of HER2/neu in breast cancer cells by trastuzumab (Herceptin). Cancers, however, are not simple monogenic diseases, but are instead characterized by combinations of genetic alterations that can differ among individuals. Consequently, these additional perturbations to the genome may render some drug regimens ineffective for certain individuals.


Cancer cells for whole genome sequencing may be obtained from biopsies of whole tumors (including microbiopsies of a small number of cells), cancer cells isolated from the bloodstream or other body fluids of a patient, or any other source known in the art.


Pre-Implantation Genetic Diagnosis

One application of the methods of the present invention is pre-implantation genetic diagnosis. About 2 to 3% of babies born have some type of major birth defect. The risk of some problems, due to abnormal separation of genetic material (chromosomes), increases with the mother's age. About 50% of the time these types of problems are due to Down syndrome, which results from a third copy of chromosome 21 (trisomy 21). The other half result from other types of chromosomal anomalies, including trisomies, point mutations, structural variations, copy number variations, etc. Many of these chromosomal problems result in a severely affected baby or one that does not survive even to delivery.


In medicine and (clinical) genetics pre-implantation genetic diagnosis (PGD or PIGD) (also known as embryo screening) refers to procedures that are performed on embryos prior to implantation, sometimes even on oocytes prior to fertilization. PGD can permit parents to avoid selective pregnancy termination. The term pre-implantation genetic screening (PGS) is used to denote procedures that do not look for a specific disease but use PGD techniques to identify embryos at risk due, for example, to a genetic condition that could lead to disease. Procedures performed on sex cells before fertilization may instead be referred to as methods of oocyte selection or sperm selection, although the methods and aims partly overlap with PGD.


Preimplantation genetic profiling (PGP) is a method of assisted reproductive technology used to select embryos that appear to have the greatest chance of a successful pregnancy. When used for women of advanced maternal age and for patients with repetitive in vitro fertilization (IVF) failure, PGP is mainly carried out as a screen for the detection of chromosomal abnormalities such as aneuploidy, reciprocal and Robertsonian translocations, and other abnormalities such as chromosomal inversions or deletions. In addition, PGP can examine genetic markers for various characteristics, including disease states. The principle behind the use of PGP is that, since numerical chromosomal abnormalities explain most cases of pregnancy loss and a large proportion of human embryos are aneuploid, the selective replacement of euploid embryos should increase the chances of a successful IVF treatment. Whole-genome sequencing provides an alternative to such comprehensive chromosome analysis methods as array-comparative genomic hybridization (aCGH), quantitative PCR, and SNP microarrays. Whole-genome sequencing can provide information regarding single base changes, insertions, deletions, structural variations, and copy number variations, for example.


As PGD can be performed on cells from different developmental stages, the biopsy procedures vary accordingly. The biopsy can be performed at all preimplantation stages, including but not limited to unfertilized and fertilized oocytes (for polar bodies, PBs), on day three cleavage-stage embryos (for blastomeres) and on blastocysts (for trophectoderm cells).


Sequencing Systems and Data Analysis

In some embodiments, sequencing of DNA samples (e.g., such as samples representing whole human genomes) may be performed by a sequencing system. Two examples of sequencing systems are illustrated in FIG. 5.



FIGS. 5A and 5B are block diagrams of example sequencing systems 190 that are configured to perform the techniques and/or methods for nucleic acid sequence analysis according to the embodiments described herein. A sequencing system 190 can include or be associated with multiple subsystems such as, for example, one or more sequencing machines such as sequencing machine 191, one or more computer systems such as computer system 197, and one or more data repositories such as data repository 195. In the embodiment illustrated in FIG. 5A, the various subsystems of system 190 may be communicatively connected over one or more networks 193, which may include packet-switching or other types of network infrastructure devices (e.g., routers, switches, etc.) that are configured to facilitate information exchange between remote systems. In the embodiment illustrated in FIG. 5B, sequencing system 190 is a sequencing device in which the various subsystems (e.g., such as sequencing machine(s) 191, computer system(s) 197, and possibly a data repository 195) are components that are communicatively and/or operatively coupled and integrated within the sequencing device.


In some operational contexts, data repository 195 and/or computer system(s) 197 of the embodiments illustrated in FIGS. 5A and 5B may be configured within a cloud computing environment 196. In a cloud computing environment, the storage devices comprising a data repository and/or the computing devices comprising a computer system may be allocated and instantiated for use as a utility and on-demand; thus, the cloud computing environment provides as services the infrastructure (e.g., physical and virtual machines, raw/block storage, firewalls, load-balancers, aggregators, networks, storage clusters, etc.), the platforms (e.g., a computing device and/or a solution stack that may include an operating system, a programming language execution environment, a database server, a web server, an application server, etc.), and the software (e.g., applications, application programming interfaces or APIs, etc.) necessary to perform any storage-related and/or computing tasks.


It is noted that in various embodiments, the techniques described herein can be performed by various systems and devices that include some or all of the above subsystems and components (e.g., such as sequencing machines, computer systems, and data repositories) in various configurations and form factors; thus, the example embodiments and configurations illustrated in FIGS. 5A and 5B are to be regarded in an illustrative rather than a restrictive sense.


Sequencing machine 191 is configured and operable to receive target nucleic acids 192 derived from fragments of a biological sample, and to perform sequencing on the target nucleic acids. Any suitable machine that can perform sequencing may be used, where such machine may use various sequencing techniques that include, without limitation, sequencing by hybridization, sequencing by ligation, sequencing by synthesis, single-molecule sequencing, optical sequence detection, electro-magnetic sequence detection, voltage-change sequence detection, and any other now-known or later-developed technique that is suitable for generating sequencing reads from DNA. In various embodiments, a sequencing machine can sequence the target nucleic acids and can generate sequencing reads that may or may not include gaps and that may or may not be mate-pair (or paired-end) reads. As illustrated in FIGS. 5A and 5B, sequencing machine 191 sequences target nucleic acids 192 and obtains sequencing reads 194, which are transmitted for (temporary and/or persistent) storage to one or more data repositories 195 and/or for processing by one or more computer systems 197.


Data repository 195 may be implemented on one or more storage devices (e.g., hard disk drives, optical disks, solid-state drives, etc.) that may be configured as an array of disks (e.g., such as a SCSI array), a storage cluster, or any other suitable storage device organization. The storage device(s) of a data repository can be configured as internal/integral components of system 190 or as external components (e.g., such as external hard drives or disk arrays) attachable to system 190 (e.g., as illustrated in FIG. 5B), and/or may be communicatively interconnected in a suitable manner such as, for example, a grid, a storage cluster, a storage area network (SAN), and/or a network attached storage (NAS) (e.g., as illustrated in FIG. 5A). In various embodiments and implementations, a data repository may be implemented on the storage devices as one or more file systems that store information as files, as one or more databases that store information in data records, and/or as any other suitable data storage organization.


Computer system 197 may include one or more computing devices that comprise general purpose processors (e.g., Central Processing Units, or CPUs), memory, and computer logic 199 which, along with configuration data and/or operating system (OS) software, can perform some or all of the techniques and methods described herein, and/or can control the operation of sequencing machine 191. For example, any of the methods described herein (e.g., for error correction, haplotype phasing, etc.) can be totally or partially performed by a computing device including a processor that can be configured to execute logic 199 for performing various steps of the methods. Further, although method steps may be presented as numbered steps, it is understood that steps of the methods described herein can be performed at the same time (e.g., in parallel by a cluster of computing devices) or in a different order. The functionalities of computer logic 199 may be implemented as a single integrated module (e.g., in an integrated logic) or may be combined in two or more software modules that may provide some additional functionalities.


In some embodiments, computer system 197 may be a single computing device. In other embodiments, computer system 197 may comprise multiple computing devices that may be communicatively and/or operatively interconnected in a grid, a cluster, or in a cloud computing environment. Such multiple computing devices may be configured in different form factors such as computing nodes, blades, or any other suitable hardware configuration. For these reasons, computer system 197 in FIGS. 5A and 5B is to be regarded in an illustrative rather than a restrictive sense.



FIG. 6 is a block diagram of an example computing device 200 that can be configured to execute instructions for performing various data-processing and/or control functionalities as part of sequencing machine(s) and/or computer system(s).


In FIG. 6, computing device 200 comprises several components that are interconnected directly or indirectly via one or more system buses such as bus 275. Such components may include, but are not limited to, keyboard 278, persistent storage device(s) 279 (e.g., such as fixed disks, solid-state disks, optical disks, and the like), and display adapter 282 to which one or more display devices (e.g., such as LCD monitors, flat-panel monitors, plasma screens, and the like) may be coupled. Peripherals and input/output (I/O) devices, which couple to I/O controller 271, can be connected to computing device 200 by any number of means known in the art including, but not limited to, one or more serial ports, one or more parallel ports, and one or more universal serial buses (USBs). External interface(s) 281 (which may include a network interface card and/or serial ports) can be used to connect computing device 200 to a network (e.g., such as the Internet or a local area network (LAN)). External interface(s) 281 may also include a number of input interfaces that can receive information from various external devices such as, for example, a sequencing machine or any component thereof. The interconnection via system bus 275 allows one or more processors (e.g., CPUs) 273 to communicate with each connected component and to execute (and/or control the execution of) instructions from system memory 272 and/or from storage device(s) 279, as well as the exchange of information between various components. System memory 272 and/or storage device(s) 279 may be embodied as one or more computer-readable non-transitory storage media that store the sequences of instructions executed by processor(s) 273, as well as other data. Such computer-readable non-transitory storage media include, but are not limited to, random access memory (RAM), read-only memory (ROM), an electro-magnetic medium (e.g., such as a hard disk drive, solid-state drive, thumb drive, floppy disk, etc.), an optical medium such as a compact disk (CD) or digital versatile disk (DVD), flash memory, and the like. Various data values and other structured or unstructured information can be output from one component or subsystem to another component or subsystem, can be presented to a user via display adapter 282 and a suitable display device, can be sent through external interface(s) 281 over a network to a remote device or a remote data repository, or can be (temporarily and/or permanently) stored on storage device(s) 279.


Any of the methods and functionalities performed by computing device 200 can be implemented in the form of logic using hardware and/or computer software in a modular or integrated manner. As used herein, “logic” refers to a set of instructions which, when executed by one or more processors (e.g., CPUs) of one or more computing devices, are operable to perform one or more functionalities and/or to return data in the form of one or more results or data that is used by other logic elements. In various embodiments and implementations, any given logic may be implemented as one or more software components that are executable by one or more processors (e.g., CPUs), as one or more hardware components such as Application-Specific Integrated Circuits (ASICs) and/or Field-Programmable Gate Arrays (FPGAs), or as any combination of one or more software components and one or more hardware components. The software component(s) of any particular logic may be implemented, without limitation, as a standalone software application, as a client in a client-server system, as a server in a client-server system, as one or more software modules, as one or more libraries of functions, and as one or more static and/or dynamically-linked libraries. During execution, the instructions of any particular logic may be embodied as one or more computer processes, threads, fibers, and any other suitable run-time entities that can be instantiated on the hardware of one or more computing devices and can be allocated computing resources that may include, without limitation, memory, CPU time, storage space, and network bandwidth.


Techniques and Algorithms for the MT Process
Basecalling

In some embodiments, data extraction will rely on two types of image data: bright-field images to demarcate the positions of all DNBs on a surface, and sets of fluorescence images acquired during each sequencing cycle. Data extraction software can be used to identify all objects in the bright-field images and then, for each such object, the software can be used to compute an average fluorescence value for each sequencing cycle. For any given cycle, there are four data points, corresponding to the four images taken at different wavelengths to query whether that base is an A, G, C or T. These raw data points (also referred to herein as “base calls”) are consolidated, yielding a discontinuous sequencing read for each DNB.


A computing device can assemble the population of identified bases to provide sequence information for the target nucleic acid and/or identify the presence of particular sequences in the target nucleic acid. For example, the computing device may assemble the population of identified bases in accordance with the techniques and algorithms described herein by executing various logic; an example of such logic is software code written in any suitable programming language such as Java, C++, Perl, Python, and any other suitable conventional and/or object-oriented programming language. When executed in the form of one or more computer processes, such logic may read, write, and/or otherwise process structured and unstructured data that may be stored in various structures on persistent storage and/or in volatile memory; examples of such storage structures include, without limitation, files, tables, database records, arrays, lists, vectors, variables, memory and/or processor registers, persistent and/or memory data objects instantiated from object-oriented classes, and any other suitable data structures. In some embodiments, the identified bases are assembled into a complete sequence through alignment of overlapping sequences obtained from multiple sequencing cycles performed on multiple DNBs. As used herein, the term “complete sequence” refers to the sequence of partial or whole genomes as well as partial or whole target nucleic acids. In further embodiments, assembly methods performed by one or more computing devices or computer logic thereof utilize algorithms that can be used to “piece together” overlapping sequences to provide a complete sequence. In still further embodiments, reference tables are used to assist in assembling the identified sequences into a complete sequence. A reference table may be compiled using existing sequencing data on the organism of choice. For example human genome data can be accessed through the National Center for Biotechnology Information at ftp.ncbi.nih.gov/refseq/release, or through the J. Craig Venter Institute at www.jcvi.org/researchhuref/. All or a subset of human genome information can be used to create a reference table for particular sequencing queries. In addition, specific reference tables can be constructed from empirical data derived from specific populations, including genetic sequence from humans with specific ethnicities, geographic heritage, religious or culturally-defined populations, as the variation within the human genome may slant the reference data depending upon the origin of the information contained therein.


In any of the embodiments of the invention discussed herein, a population of nucleic acid templates and/or DNBs may comprise a number of target nucleic acids to substantially cover a whole genome or a whole target polynucleotide. As used herein, “substantially covers” means that the amount of nucleotides (i.e., target sequences) analyzed contains an equivalent of at least two copies of the target polynucleotide, or in another aspect, at least ten copies, or in another aspect, at least twenty copies, or in another aspect, at least 100 copies. Target polynucleotides may include DNA fragments, including genomic DNA fragments and cDNA fragments, and RNA fragments. Guidance for the step of reconstructing target polynucleotide sequences can be found in the following references, which are incorporated by reference: Lander et al, Genomics, 2: 231-239 (1988); Vingron et al, J. Mol. Biol., 235: 1-12 (1994); and like references.


In some embodiments, four images, one for each color dye, are generated for each queried position of a complex nucleic acid that is sequenced. The position of each spot in an image and the resulting intensities for each of the four colors are determined by adjusting for crosstalk between dyes and background intensity. A quantitative model can be fit to the resulting four-dimensional dataset. A base is called for a given spot, with a quality score that reflects how well the four intensities fit the model.


Basecalling of the four images for each field can be performed in several steps by one or more computing devices or computer logic thereof. First, the image intensities are corrected for background using a modified morphological “image open” operation. Since the locations of the DNBs line up with the camera pixel locations, the intensity extraction is done as a simple read-out of pixel intensities from the background-corrected images. These intensities are then corrected for several sources of both optical and biological signal cross-talk, as described below. The corrected intensities are then passed to a probabilistic model that ultimately produces, for each DNB, a set of four probabilities for the four possible basecall outcomes. Several metrics are then combined to compute the basecall score using a pre-fitted logistic regression.
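The background-correction and intensity-extraction steps can be illustrated with a minimal sketch. This is not the production pipeline; it assumes a standard (unmodified) grayscale morphological opening from scipy and that DNB centers coincide with known pixel coordinates. The structuring-element size and variable names are illustrative assumptions.

    import numpy as np
    from scipy import ndimage

    def extract_dnb_intensities(image, dnb_rows, dnb_cols, open_size=9):
        """Estimate background with a grayscale morphological opening, subtract it,
        and read out per-DNB intensities at known pixel locations.
        open_size is an assumed structuring-element size, not a documented value."""
        background = ndimage.grey_opening(image, size=(open_size, open_size))
        corrected = image.astype(float) - background
        return corrected[dnb_rows, dnb_cols]

    # One such intensity vector is produced per color channel per cycle; the four
    # channel values for a DNB then go to cross-talk correction and basecalling.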


Intensity correction: Several sources of biological and optical cross-talk are corrected using a linear regression model implemented as computer logic that is executed by one or more computing devices. Linear regression was preferred over de-convolution methods, which are computationally more expensive and produced results of similar quality. The sources of optical cross-talk include filter band overlaps between the four fluorescent dye spectra and lateral cross-talk between neighboring DNBs due to light diffraction at their close proximity. The biological sources of cross-talk include incomplete washing of the previous cycle, probe synthesis errors, probe “slipping” contaminating the signals of neighboring positions, and incomplete anchor extension when interrogating “outer” (more distant) bases from the anchors. The linear regression is used to determine the part of the DNB intensities that can be predicted using intensities of neighboring DNBs, intensities from the previous cycle, or intensities of other DNB positions. The part of the intensities that can be explained by these sources of cross-talk is then subtracted from the original extracted intensities. To determine the regression coefficients, the intensities on the left side of the linear regression model need to be composed primarily of “background” intensities, i.e., intensities of DNBs that would not be called the given base for which the regression is being performed. This requires a pre-calling step that is done using the original intensities. Once the DNBs that do not have a particular basecall (with reasonable confidence) are selected, a computing device or computer logic thereof performs a simultaneous regression of the cross-talk sources:







I_background^Base = I_DNBneighbor1^Base + … + I_DNBneighborN^Base + I_DNB^Base2 + I_DNB^Base3 + I_DNB^Base4 + I_DNBpreviousCycle^Base + I_DNBotherPosition1^Base + … + I_DNBotherPositionN^Base + ε





The neighbor DNB cross-talk is corrected both by using the above regression and by correcting each DNB for its particular neighborhood using a linear model involving all neighbors over all available DNB positions.
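A minimal sketch of the cross-talk subtraction is given below, assuming the predictor intensities (neighboring DNBs, the other base channels of the same DNB, the previous cycle, and other DNB positions) have already been assembled into a design matrix. The least-squares fit on pre-called “background” DNBs and the matrix layout are illustrative assumptions, not the exact production implementation.

    import numpy as np

    def crosstalk_correct(y_all, X_all, background_mask):
        """y_all: intensities of all DNBs for one base channel.
        X_all: per-DNB predictor intensities (neighbors, other channels, previous
        cycle, other positions), one column per cross-talk source.
        background_mask: DNBs pre-called as NOT this base, used to fit the model."""
        X_bg, y_bg = X_all[background_mask], y_all[background_mask]
        coef, *_ = np.linalg.lstsq(X_bg, y_bg, rcond=None)   # simultaneous regression
        predicted_crosstalk = X_all @ coef
        return y_all - predicted_crosstalk                   # corrected intensities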


Basecall probabilities: Calling bases using maximum intensity does not account for the different shapes of the background intensity distributions of the four bases. To address such possible differences, a probabilistic model was developed based on empirical probability distributions of the background intensities. Once the intensities are corrected, a computing device or computer logic thereof pre-calls some DNBs using maximum intensities (DNBs that pass a certain confidence threshold) and uses these pre-called DNBs to derive the background intensity distributions (distributions of intensities of DNBs that are not called a given base). Upon obtaining such distributions, the computing device can compute for each DNB a tail probability under that distribution that describes the empirical probability of the intensity being background intensity. Therefore, for each DNB and each of the four intensities, the computing device or logic thereof can obtain and store the probabilities of being background (p_BG^A, p_BG^C, p_BG^G, p_BG^T). Then the computing device can compute the probabilities of all possible basecall outcomes using these probabilities. The possible basecall outcomes must also describe spots that can be doubly or, more generally, multiply occupied, or not occupied by a DNB at all. Combining the computed probabilities with their prior probabilities (a lower prior for multiple-occupied or empty spots) gives rise to the probabilities of the 16 possible outcomes:







p_A = (!p_BG^A + p_BG^C + p_BG^G + p_BG^T) * p_SingleBase^prior

p_AC = (!p_BG^A + !p_BG^C + p_BG^G + p_BG^T) * p_DoubleOccupied^prior

p_ACG = (!p_BG^A + !p_BG^C + !p_BG^G + p_BG^T) * p_TripleOccupied^prior

p_ACGT = (!p_BG^A + !p_BG^C + !p_BG^G + !p_BG^T) * p_QuadrupleOccupied^prior

p_N = (p_BG^A + p_BG^C + p_BG^G + p_BG^T) * p_EmptySpot^prior






These 16 probabilities can then be combined to obtain a reduced set of four probabilities for the four possible basecalls. That is:







p_4base^A = p_A + (1/2)(p_AC + p_AG + p_AT) + (1/3)(p_ACG + p_ACT + p_AGT) + (1/4)p_ACGT + (1/4)p_N







Score computation: Logistic regression was used to derive the score computation formula. A computing device or computer logic thereof fitted the logistic regression to the mapping outcomes of the basecalls, using several metrics as inputs. The metrics included the probability ratio between the called base and the next highest base, the called-base intensity, an indicator variable of the basecall identity, and metrics describing the overall clustering quality of the field. All metrics were transformed to be collinear with the log-odds ratio between concordant and discordant calls. The model was refined using cross-validation. The logit function with the final logistic regression coefficients was used to compute the scores in production.
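As a sketch of how such a score might be computed from pre-fitted coefficients, the snippet below applies the logistic (inverse-logit) function to a weighted sum of the metrics described above. The coefficient values, metric names, and any downstream rescaling of the resulting probability into a reported quality score are illustrative assumptions.

    import math

    def basecall_score(metrics, coefficients, intercept):
        """metrics and coefficients are dicts keyed by metric name, e.g.
        'prob_ratio', 'called_intensity', 'base_indicator', 'field_quality'
        (hypothetical names). Returns a value in (0, 1)."""
        z = intercept + sum(coefficients[k] * metrics[k] for k in coefficients)
        return 1.0 / (1.0 + math.exp(-z))   # logistic function of the linear predictor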


Mapping and Assembly

In further embodiments, read data is encoded in a compact binary format and includes both a called base and a quality score. The quality score is correlated with base accuracy. Analysis software logic, including sequence assembly software, can use the score to determine the contribution of evidence from individual bases within a read.


Reads may be “gapped” due to the DNB structure. Gap sizes vary (usually +/− 1 base) due to the variability inherent in enzyme digestion. Due to the random-access nature of cPAL, reads may occasionally have an unread base (“no-call”) in an otherwise high-quality DNB. Read pairs are mated.


Mapping software logic capable of aligning read data to a reference sequence can be used to map data generated by the sequencing methods described herein. When executed by one or more computing devices, such mapping logic will generally be tolerant of small variations from a reference sequence, such as those caused by individual genomic variation, read errors, or unread bases. This property often allows direct reconstruction of SNPs. To support assembly of larger variations, including large-scale structural changes or regions of dense variation, each arm of a DNB can be mapped separately, with mate pairing constraints applied after alignment.


As used herein, the term “sequence variant” or simply “variant” includes any variant, including but not limited to a substitution or replacement of one or more bases; an insertion or deletion of one or more bases (also referred to as an “indel”); inversion; conversion; duplication, or copy number variation (CNV); trinucleotide repeat expansion; structural variation (SV; e.g., intrachromosomal or interchromosomal rearrangement, e.g., a translocation); etc. In a diploid genome, a “heterozygosity” or “het” is two different alleles of a particular gene in a gene pair. The two alleles may be different mutants or a wild type allele paired with a mutant. The present methods can also be used in the analysis of non-diploid organisms, whether such organisms are haploid/monoploid (N=1, where N=haploid number of chromosomes), or polyploid, or aneuploid.


Assembly of sequence reads can, in some embodiments, utilize software logic that supports the DNB read structure (mated, gapped reads with non-called bases) to generate a diploid genome assembly, which in some embodiments can leverage sequence information generated by the MT methods of the present invention for phasing heterozygous sites.


Methods of the present invention can be used to reconstruct novel segments not present in a reference sequence. Algorithms utilizing a combination of evidential (Bayesian) reasoning and de Bruijn graph-based algorithms may be used in some embodiments. In some embodiments, statistical models empirically calibrated to each dataset can be used, allowing all read data to be used without pre-filtering or data trimming. Large-scale structural variations (including without limitation deletions, translocations, and the like) and copy number variations can also be detected by leveraging mated reads.


Phasing MT Data


FIG. 7 describes the main steps in the phasing of MT data. These steps are as follows:

    • (1) Graph construction using MT data: One or more computing devices or computer logic thereof generates an undirected graph, where the vertices represent the heterozygous SNPs and the edges represent the connections between those heterozygous SNPs. Each edge is composed of the orientation and the strength of the connection. The one or more computing devices may store such a graph in storage structures including, without limitation, files, tables, database records, arrays, lists, vectors, variables, memory and/or processor registers, persistent and/or memory data objects instantiated from object-oriented classes, and any other suitable temporary and/or persistent data structures.
    • (2) Graph construction using mate pair data: Step 2 is similar to step 1, where the connections are made based on the mate pair data, as opposed to the MT data. For a connection to be made, a DNB must be found with the two heterozygous SNPs of interest in the same read (same arm or mate arm).
    • (3) Graph combination: A computing device or computer logic thereof represents each of the above graphs as an N×N sparse matrix, where N is the number of candidate heterozygous SNPs on that chromosome. Two nodes can have only one connection in each of the above methods. Where the two methods are combined, there may be up to two connections for two nodes. Therefore, the computing device or computer logic thereof may use a selection algorithm to select one connection as the connection of choice. Because the quality of the mate-pair data is significantly inferior to that of the MT data, only the MT-derived connections are used.
    • (4) Graph trimming: A series of heuristics was devised and applied, by a computing device, to the stored graph data in order to remove some of the erroneous connections. More precisely, a node must have at least two connections in one direction and one connection in the other direction; otherwise, it is eliminated.
    • (5) Graph optimization: A computing device or computer logic thereof optimizes the graph by generating the minimum spanning tree (MST). The energy function was set to −|strength|. During this process, where possible, lower-strength edges are eliminated due to competition with stronger paths. Therefore, the MST provides a natural selection for the strongest and most reliable connections (a sketch of steps (1), (5), and (6) follows this list).
    • (6) Contig building: Once the minimum spanning tree is generated and/or stored in a computer-readable medium, a computing device or logic thereof can re-orient all the nodes while holding one node (here, the first node) constant. This first node is the anchor node. For each of the nodes, the computing device then finds the path to the anchor node. The orientation of the test node is the aggregate of the orientations of the edges on the path.
    • (7) Universal phasing: After the above steps, a computing device or logic thereof phases each of the contigs that are built in the previous step(s). Here, the results of this part are referred to as pre-phased, as opposed to phased, indicating that this is not the final phasing. Since the first node was chosen arbitrarily as the anchor node, the phasing of the whole contig is not necessarily in line with the parental chromosomes. For universal phasing, a few heterozygous SNPs on the contig for which trio information is available are used. These trio heterozygous SNPs are then used to identify the alignment of the contig. At the end of the universal phasing step, all the contigs have been labeled properly and can therefore be considered chromosome-wide contigs.
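The sketch below illustrates steps (1), (5), and (6) on toy data using networkx: heterozygous SNPs become graph nodes, each pairwise connection carries an orientation and a strength, the graph is reduced with a minimum spanning tree over the energy −|strength|, and node orientations are propagated from an anchor node. The data structures and attribute names are illustrative assumptions, not the production implementation.

    import networkx as nx

    def build_het_graph(connections):
        """connections: iterable of (het_a, het_b, orientation, strength),
        with orientation +1 (forward) or -1 (reverse)."""
        g = nx.Graph()
        for a, b, orientation, strength in connections:
            g.add_edge(a, b, orientation=orientation,
                       strength=strength, energy=-abs(strength))
        return g

    def phase_contig(g, anchor):
        """Reduce the graph to an MST over energy = -|strength| and propagate
        orientations from the anchor node along the unique tree paths."""
        mst = nx.minimum_spanning_tree(g, weight="energy")
        orientation = {anchor: +1}
        for u, v in nx.bfs_edges(mst, anchor):
            orientation[v] = orientation[u] * mst[u][v]["orientation"]
        return mst, orientation

    # Example: three nearby hets with pairwise connections; the weak edge is dropped.
    g = build_het_graph([("het1", "het2", +1, 9.0),
                         ("het2", "het3", -1, 7.5),
                         ("het1", "het3", -1, 2.0)])
    mst, orient = phase_contig(g, anchor="het1")
    # orient maps each het to +1/-1 relative to the anchor (a pre-phased contig).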


Contig Making

In order to make contigs, for each heterozygous SNP-pair, a computing device or computer logic thereof tests two hypotheses: the forward orientation and reverse orientation. A forward orientation means that the two heterozygous SNPs are connected the same way they are originally listed (initially alphabetically). A reverse orientation means that the two heterozygous SNPs are connected in reverse order of their original listing. FIG. 8 depicts the pairwise analysis of nearby heterozygous SNPs involving the assignment of forward and reverse orientations to a heterozygous SNP-pair.


Each orientation has a numerical support showing the validity of the corresponding hypothesis. This support is a function of the 16 cells of the connectivity matrix shown in FIG. 9, which shows an example of the selection of a hypothesis and the assignment of a score to it. To simplify the function, the 16 variables are reduced to 3: Energy1, Energy2 and Impurity. Energy1 and Energy2 are the two highest-value cells corresponding to each hypothesis. Impurity is the ratio of the sum of all the other cells (other than the two corresponding to the hypothesis) to the total sum of the cells in the matrix. The selection between the two hypotheses is made based on the sum of the corresponding cells; the hypothesis with the higher sum is the winning hypothesis. The following calculations are used only to assign the strength of that hypothesis. A strong hypothesis is one with high values for Energy1 and Energy2 and a low value for Impurity.
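A minimal sketch of reducing the 16-cell connectivity matrix to Energy1, Energy2, and Impurity is given below. Which two cells correspond to the forward and reverse hypotheses depends on the allele ordering for the het pair; the index pairs shown here are illustrative assumptions.

    def hypothesis_metrics(conn, fwd_cells, rev_cells):
        """conn: 4x4 matrix of co-observation counts for a het pair.
        fwd_cells / rev_cells: the two (row, col) cells supporting each hypothesis,
        e.g. fwd_cells = [(0, 0), (1, 1)], rev_cells = [(0, 1), (1, 0)] (assumed)."""
        fwd_sum = sum(conn[i][j] for i, j in fwd_cells)
        rev_sum = sum(conn[i][j] for i, j in rev_cells)
        winner = fwd_cells if fwd_sum >= rev_sum else rev_cells
        energy1, energy2 = sorted((conn[i][j] for i, j in winner), reverse=True)
        total = sum(sum(row) for row in conn)
        impurity = (total - energy1 - energy2) / total if total else 1.0
        orientation = "forward" if winner is fwd_cells else "reverse"
        return orientation, energy1, energy2, impurity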


The three metrics Energy1, Energy2 and Impurity are fed into a fuzzy inference system (FIG. 10) in order to reduce their effects to a single value (a score) between (and including) 0 and 1. The fuzzy inference system (FIS) is implemented as computer logic that can be executed by one or more computing devices.


The connectivity operation is done for each heterozygous SNP pair that is within a reasonable distance up to the expected contig length (e.g., 20-50 Kb). FIG. 6 shows graph construction, depicting some exemplary connectivities and strengths for three nearby heterozygous SNPs.


The rules of the fuzzy inference engine are defined as follows:

    • (1) If Energy1 is small and Energy2 is small, then Score is very small.
    • (2) If Energy1 is medium and Energy2 is small, then Score is small.
    • (3) If Energy1 is medium and Energy2 is medium, then Score is medium.
    • (4) If Energy1 is large and Energy2 is small, then Score is medium.
    • (5) If Energy1 is large and Energy2 is medium, then Score is large.
    • (6) If Energy1 is large and Energy2 is large, then Score is very large.
    • (7) If Impurity is small, then Score is large.
    • (8) If Impurity is medium, then Score is small.
    • (9) If Impurity is large, then Score is very small.


For each variable, the definition of Small, Medium and Large is different, and is governed by its specific membership functions.


After exposing the fuzzy inference system (FIS) to each variable set, the contribution of the input set to the rules is propagated through the fuzzy logic system, and a single (de-fuzzified) number, the output score, is generated. This score is limited to between 0 and 1, with 1 indicating the highest quality.
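The sketch below approximates the rule base above with simple piecewise-linear membership functions and a weighted-average defuzzification (a Sugeno-style simplification of the FIS). The membership breakpoints, the crisp output levels for “very small” through “very large”, and the assumption that all three inputs are normalized to [0, 1] are illustrative; as stated above, the actual membership functions are defined separately for each variable.

    def small(x):  return max(0.0, min(1.0, (0.5 - x) / 0.5))
    def medium(x): return max(0.0, 1.0 - abs(x - 0.5) / 0.35)
    def large(x):  return max(0.0, min(1.0, (x - 0.5) / 0.5))

    # Assumed crisp output levels for the rule consequents.
    VERY_SMALL, SMALL, MEDIUM, LARGE, VERY_LARGE = 0.05, 0.25, 0.5, 0.75, 0.95

    def fis_score(energy1, energy2, impurity):
        """Apply the nine rules (AND = min) and defuzzify by weighted average."""
        rules = [
            (min(small(energy1),  small(energy2)),  VERY_SMALL),   # rule 1
            (min(medium(energy1), small(energy2)),  SMALL),        # rule 2
            (min(medium(energy1), medium(energy2)), MEDIUM),       # rule 3
            (min(large(energy1),  small(energy2)),  MEDIUM),       # rule 4
            (min(large(energy1),  medium(energy2)), LARGE),        # rule 5
            (min(large(energy1),  large(energy2)),  VERY_LARGE),   # rule 6
            (small(impurity),  LARGE),                             # rule 7
            (medium(impurity), SMALL),                             # rule 8
            (large(impurity),  VERY_SMALL),                        # rule 9
        ]
        total_weight = sum(w for w, _ in rules)
        return sum(w * out for w, out in rules) / total_weight if total_weight else 0.0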


After the application of the FIS to each node pair, a computing device or computer logic thereof constructs a complete graph. FIG. 11 shows an example of such graph. The nodes are colored according to the orientation of the winning hypothesis. The strength of each connection is derived from the application of the FIS on the heterozygous SNP pair of interest. Once the preliminary graph is constructed (the top plot of FIG. 11), the computing device or computer logic thereof optimizes the graph (the bottom plot of FIG. 11) and reduces it to a tree. This optimization process is done by making a Minimum Spanning Tree (MST) from the original graph. The MST guarantees a unique path from each node to any other node.



FIG. 11 shows graph optimization. In this application, the first node on each contig is used as the anchor node, and all the other nodes are oriented to that node. Depending on the orientation, each het would have to either flip or not in order to match the orientation of the anchor node. FIG. 12 shows the contig alignment process for the given example. At the end of this process, a phased contig is made available.


At this point in the phasing process, the two haplotypes are separated. Although it is known that one of these haplotypes comes from the Mom and one from the Dad, it is not known exactly which one comes from which parent. In the next step of phasing, a computing device or computer logic thereof attempts to assign the correct parental label (Mom/Dad) to each haplotype. This process is referred to as Universal Phasing. In order to do so, one needs to know the association of at least a few of the heterozygous SNPs (on the contig) with the parents. This information can be obtained by doing a Trio (Mom-Dad-Child) phasing. Using the trio's sequenced genomes, some loci with known parental associations are identified, more specifically loci where at least one parent is homozygous. These associations are then used by the computing device or computer logic thereof to assign the correct parental label (Mom/Dad) to whole contigs, that is, to perform parent-assisted universal phasing (FIG. 13).


In order to guarantee high accuracy, the following may be performed: (1) when possible (e.g., in the case of NA19240), acquiring the trio information from multiple sources, and using a combination of such sources; (2) requiring the contigs to include at least two known trio-phased loci; (3) eliminating the contigs that have a series of trio-mismatches in a row (indicating a segmental error); and (4) eliminating the contigs that have a single trio-mismatch at the end of the trio loci (indicating a potential segmental error).



FIG. 14 shows natural contig separations. Whether parental data are used or not, contigs often do not continue naturally beyond a certain point. Reasons for contig separation are: (1) more than usual DNA fragmentation or lack of amplification in certain areas, (2) low heterozygous SNP density, (3) poly-N sequence on the reference genome, and (4) DNA repeat regions (prone to mis-mapping).



FIG. 15 shows Universal Phasing. One of the major advantages of Universal Phasing is the ability to obtain the full chromosomal “contigs.” This is possible because each contig (after Universal Phasing) carries haplotypes with the correct parental labels. Therefore, all the contigs that carry the label Mom can be put on the same haplotype; and a similar operation can be done for Dad's contigs.


Another of the major advantages of the MT process is the ability to dramatically increase the accuracy of heterozygous SNP calling. FIG. 16 shows two examples of error detection resulting from the use of the MT process. The first example is shown in FIG. 16 (left), in which the connectivity matrix does not support any of the expected hypotheses. This is an indication that one of the heterozygous SNPs is not really a heterozygous SNP. In this example, the A/C heterozygous SNP is in reality a homozygous locus (A/A), which was mislabeled as a heterozygous locus by the assembler. This error can be identified, and either eliminated or (in this case) corrected. The second example is shown in FIG. 16 (right), in which the connectivity matrix supports both hypotheses at the same time. This is a sign that the heterozygous SNP calls are not real.


A “healthy” heterozygous SNP-connection matrix is one that has only two high cells (at the expected heterozygous SNP positions, i.e., not on a straight line). All other possibilities point to potential problems, and can be either eliminated, or used to make alternate basecalls for the loci of interest.


Another advantage of the MT process is the ability to call heterozygous SNPs with weak support (e.g., where it was hard to map DNBs due to bias or mismatch rate). Since the MT process imposes an extra constraint on the heterozygous SNPs, one could reduce the threshold that a heterozygous SNP call requires in a non-MT assembler. FIG. 17 demonstrates an example of this case, in which a confident heterozygous SNP call could be made despite a small number of reads. In FIG. 17 (right), under a normal scenario, the low number of supporting reads would have prevented any assembler from confidently calling the corresponding heterozygous SNPs. However, since the connectivity matrix is “clean,” one can more confidently assign heterozygous SNP calls to these loci.


Annotating SNPs in Splice Sites

Introns in transcribed RNAs need to be spliced out before they become mRNA. Information for splicing is embedded within the sequence of these RNAs and is consensus based. Mutations in splice-site consensus sequences cause many human diseases (Faustino and Cooper, Genes Dev. 17:419-437, 2003). The majority of splice sites conform to a simple consensus at fixed positions around an exon. In this regard, a program was developed to annotate splice-site mutations. In this program, consensus splice position models (www.life.umd.edu/labs/mount/RNAinfo) were used. A look-up is performed for the pattern CAG|G in the 5′-end region of an exon (“|” denotes the beginning of the exon) and MAG|GTRAG in the 3′-end region of the same exon (“|” denotes the end of the exon). Here M={A,C} and R={A,G}. Further, splicing consensus positions are classified into two types: type I, where consensus to the model is 100% required; and type II, where consensus to the model is preserved in >50% of cases. Presumably, a SNP mutation in a type I position will cause the splice site to be missed, whereas a SNP in a type II position will only decrease the efficiency of the splicing event.


The program logic for annotating splice-site mutations comprises two parts. In part 1, a file containing model position sequences from the input reference genome is generated. In part 2, the SNPs from a sequencing project are compared to these model position sequences, and any type I and type II mutations are reported. The program logic is exon-centric instead of intron-centric (for convenience in parsing the genome). For a given exon, at its 5′ end a look-up is performed for the consensus “cAGg” (for positions −3, −2, −1, 0, where 0 means the start of the exon; capital letters denote type I positions and lower-case letters denote type II positions). At the 3′ end of the exon, a look-up is performed for the consensus “magGTrag” (for position sequence −3, −2, −1, 0, 1, 2, 3, 4). Exons from the genome release that do not conform to these requirements are simply ignored (~5% of all cases). These exons fall into other minor classes of splice-site consensus and are not investigated by the program logic. Any SNP from the genome sequenced is compared to the model sequence at these genomic positions. Any mismatch at a type I position is reported. Mismatches at type II positions are reported if the mutation departs from the consensus.
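A minimal sketch of part 2 of this logic is shown below: each consensus position around an exon boundary is stored with its allowed bases and its type, and a SNP falling on one of those positions is reported when the alternate base departs from the consensus. The coordinate convention (offsets relative to the exon start or end) and the data structures are illustrative assumptions.

    # Consensus models: offset -> (allowed bases, type); type 1 = 100% required,
    # type 2 = preserved in >50% of cases. IUPAC: M = {A, C}, R = {A, G}.
    ACCEPTOR = {-3: ("C", 2), -2: ("A", 1), -1: ("G", 1), 0: ("G", 2)}        # "cAGg"
    DONOR    = {-3: ("AC", 2), -2: ("A", 2), -1: ("G", 2), 0: ("G", 1),
                1: ("T", 1), 2: ("AG", 2), 3: ("A", 2), 4: ("G", 2)}          # "magGTrag"

    def annotate_splice_snp(offset, alt_base, model):
        """offset: SNP position relative to the exon boundary the model describes.
        Returns 'type I', 'type II', or None if the SNP misses the model or
        stays within the consensus."""
        if offset not in model:
            return None
        allowed, ptype = model[offset]
        if alt_base in allowed:
            return None                       # still consistent with the consensus
        return "type I" if ptype == 1 else "type II"

    # Example: a SNP changing the invariant G of the donor "GT" (offset 0) to A.
    print(annotate_splice_snp(0, "A", DONOR))   # -> "type I"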


The above program logic detects the majority of deleterious splice-site mutations. The bad SNPs that are reported are definitely problematic, but there are many other bad SNPs causing splicing problems that are not detected by this program. For example, there are many introns within the human genome that do not conform to the above-mentioned consensus. Also, mutations at branch points in the middle of an intron may cause splicing problems. These splice-site mutations are not reported.


Annotation of SNPs affecting Transcription Factor Binding Sites (TFBS). JASPAR models are used for finding TFBSs in the released human genome sequences (either build 36 or build 37). JASPAR Core is a collection of positional frequency data for 130 vertebrate TFBSs, modeled as matrices (Bryne et al., Nucl. Acids Res. 36:D102-D106, 2008; Sandelin et al., Nucl. Acids Res. 32:D91-D94, 2004). These models are downloaded from the JASPAR website (http://jaspar.genereg.net/cgi-bin/jaspar_db.pl?rm=browse&db=core&tax_group=vertebrates). These models are converted into Position Weight Matrices (PWMs) using the following formula: w_i = log2[(f_i + p·N_i^(1/2)) / (N_i + N_i^(1/2)) / p], where f_i is the observed frequency of the specific base at position i, N_i is the total number of observations at that position, and p is the background frequency of the current nucleotide, which defaults to 0.25 (Wasserman and Sandelin, Nature Reviews Genetics 5:276-287, 2004). A specific program, mast (meme.sdsc.edu/meme/mast-intro.html), is used to search sequence segments within the genome for TFBS sites. A program was run to extract TFBS sites in the reference genome. The outline of steps is as follows: (i) for each gene with mRNA, extract the [−5000, 1000] putative TFBS-containing region from the genome, with 0 being the mRNA start location; (ii) run a mast search of all PWM models against the putative TFBS-containing sequences; (iii) select those hits above a given threshold; (iv) for regions with multiple or overlapping hits, select only one hit, the one with the highest mast search score.


With the TFBS model hits from the reference genome generated and/or stored in a suitable computer-readable medium, a computing device or computer logic thereof can identify SNPs that are located within a hit region. Such SNPs will impact the model and change the hit score. A second program was written to compute such changes in the hit score: the segment containing the SNP is run through the PWM model twice, once for the reference and a second time for the sequence with the SNP substitution. A SNP causing the segment hit score to drop by more than 3 is identified as a bad SNP.
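The following sketch shows the PWM conversion formula above and the reference-versus-SNP re-scoring of a segment. The log2 conversion and the drop-by-more-than-3 rule follow the description; the data structures, segment handling, and threshold parameterization are illustrative assumptions.

    import math

    def pwm_from_counts(count_columns, p=0.25):
        """count_columns: one dict per model position mapping base -> observed count.
        Implements w_i = log2[(f_i + p*sqrt(N_i)) / (N_i + sqrt(N_i)) / p]."""
        pwm = []
        for col in count_columns:
            n = sum(col.values())
            s = math.sqrt(n)
            pwm.append({b: math.log2((col.get(b, 0) + p * s) / (n + s) / p)
                        for b in "ACGT"})
        return pwm

    def pwm_score(pwm, segment):
        return sum(col[base] for col, base in zip(pwm, segment))

    def is_bad_snp(pwm, ref_segment, snp_segment, max_drop=3.0):
        """Flag the SNP if the hit score drops by more than max_drop."""
        return pwm_score(pwm, ref_segment) - pwm_score(pwm, snp_segment) > max_drop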


Selection of genes with two bad SNPs. Genes with bad SNPs are classified into two categories: (1) those with SNPs affecting the amino-acid (AA) sequence transcribed; and (2) those with SNPs affecting a transcription factor binding site. For AA-sequence-affecting SNPs, the following subcategories are included:

    • (1) Nonsense or nonstop variations. These mutations either cause a truncated protein or an extended protein. In either situation, the function of the protein product is either completely lost or less efficient.
    • (2) Splice site variations. These mutations cause the splice site for an intron either to be destroyed (for those positions required by the model to be 100% a certain nucleotide) or to be severely diminished (for those positions required by the model to be >50% a certain nucleotide, where the SNP causes the splice-site nucleotide to mutate to another nucleotide that is below 50% of consensus as predicted by the splice-site consensus sequence model). These mutations will likely produce proteins that are truncated, missing exons, or severely diminished in protein product quantity.
    • (3) Polyphen2 annotation of AA variations. For SNPs that cause a change in the amino-acid sequence of a protein, but not its length, Polyphen2 (Adzhubei et al., Nat. Methods 7:248-249, 2010) was used as the main annotation tool. Polyphen2 annotates the SNP as “benign”, “unknown”, “possibly damaging”, or “probably damaging”. Both “possibly damaging” and “probably damaging” were identified as bad SNPs. These category assignments by Polyphen2 are based on structural predictions of the Polyphen2 software.


For transcription-binding-site mutations, 75% of the maxScore of each model, based on the reference genome, was used as a screen for TFBS-binding sites. Any model hit in the region that is <=75% of maxScore is removed. For those remaining, if a SNP causes the hit score to drop by 3 or more, it is considered a detrimental SNP.


Two classes of genes are reported. Class 1 genes are those that have at least two bad AA-affecting mutations. These mutations can all be on a single allele (Class 1.1) or spread over two distinct alleles (Class 1.2). Class 2 genes are a superset of the Class 1 set: they contain at least two bad SNPs, irrespective of whether the SNPs are AA-affecting or TFBS-affecting, with the requirement that at least one SNP is AA-affecting. That is, Class 2 genes are either in Class 1 or have one detrimental AA-affecting mutation and one or more detrimental TFBS-affecting variations. Class 2.1 means that all of these detrimental mutations are on a single allele, whereas Class 2.2 means that the detrimental SNPs come from two distinct alleles.


The foregoing techniques and algorithms are applicable to methods for sequencing complex nucleic acids, optionally in conjunction with MT processing prior to sequencing (MT in combination with sequencing may be referred to as “MT sequencing”), which are described in detail as follows. Such methods for sequencing complex nucleic acids may be performed by one or more computing devices that execute computer logic. An example of such logic is software code written in any suitable programming language such as Java, C++, Perl, Python, and any other suitable conventional and/or object-oriented programming language. When executed in the form of one or more computer processes, such logic may read, write, and/or otherwise process structured and unstructured data that may be stored in various structures on persistent storage and/or in volatile memory; examples of such storage structures include, without limitation, files, tables, database records, arrays, lists, vectors, variables, memory and/or processor registers, persistent and/or memory data objects instantiated from object-oriented classes, and any other suitable data structures.


Improving Accuracy in Long-Read Sequencing

In DNA sequencing using certain long-read technologies (e.g., nanopore sequencing), long (e.g., 10-100 kb) read lengths are available but generally have high false negative and false positive rates. The final accuracy of sequence from such long-read technologies can be significantly enhanced using haplotype information (complete or partial phasing) according to the following general process.


First, a computing device or computer logic thereof aligns reads to each other. A large number of heterozygous calls are expected to exist in the overlap. For example, if two to five 100 kb fragments overlap by a minimum of 10%, this results in >10 kb overlap, which could roughly translate to 10 heterozygous loci. Alternatively, each long read is aligned to a reference genome, by which a multiple alignment of the reads would be implicitly obtained.


Once the multiple read alignments have been achieved, the overlap region can be considered. The fact that the overlap could include a large number (e.g., N=10) of het loci can be leveraged to consider combinations of hets. This combinatorial modality results in a large space (4^N; if N=10, then 4^N is ~1 million) of possibilities for the haplotypes. Of all of these 4^N points in the N-dimensional space, only two points are expected to contain biologically viable information, i.e., those corresponding to the two haplotypes. In other words, there is a noise suppression ratio of 4^N/2 (here 1e6/2, or ~500,000). In reality, much of this 4^N space is degenerate, particularly since the sequences are already aligned (and therefore look alike), and also because each locus does not usually carry more than two possible bases (if it is a real het). Consequently, a lower bound for this space is actually 2^N (if N=10, then 2^N is ~1000). Therefore, the noise suppression ratio could be only 2^N/2 (here 1000/2=500), which is still quite impressive. As the number of false positives and false negatives grows, the size of the space expands from 2^N to 4^N, which in turn results in a higher noise suppression ratio. In other words, as the noise grows, it will automatically be more suppressed. Therefore, the output products are expected to retain only a very small (and rather constant) amount of noise, almost independently of the input noise. (The tradeoff is yield loss in the noisier conditions.) Of course, these suppression ratios are altered if (1) the errors are systematic (or there are other data idiosyncrasies), (2) the algorithms are not optimal, (3) the overlapping sections are shorter, or (4) the coverage redundancy is less. N is any integer greater than one, such as 2, 3, 5, 10, or more.
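The arithmetic behind these suppression ratios can be checked with a few lines; the helper below simply evaluates alpha^N and alpha^N/2 for the worked value N=10 used above (alpha=4 for the full space, alpha=2 for the degenerate lower bound).

    def haplotype_space(n_hets, alpha):
        """Size of the combinatorial het space and the corresponding
        noise suppression ratio (two viable points out of alpha**n_hets)."""
        space = alpha ** n_hets
        return space, space / 2

    print(haplotype_space(10, 4))   # (1048576, 524288.0)  ~1 million, ~500,000
    print(haplotype_space(10, 2))   # (1024, 512.0)        ~1000, ~500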


The following methodology is useful for increasing the accuracy of the long-read sequencing methods, which could have a large initial error rate.


First, a computing device or computer logic thereof aligns a few reads, for instance 5 reads (or more, such as 10-20 reads). Assuming the reads are ~100 kb and the shared overlap is 10%, this results in a 10 kb overlap among the reads. Also assume there is a het every 1 kb. Therefore, there would be a total of 10 hets in this common region.


Next, the computing device or computer logic thereof fills in a portion (e.g., just the non-zero elements) or the whole matrix of alpha^10 possibilities (where alpha is between 2 and 4) for the above 10 candidate hets. In one implementation, only 2 out of the alpha^10 cells of this matrix should be high density (e.g., as measured by a threshold, which can be predetermined or dynamic). These are the cells that correspond to the real hets. These two cells can be considered substantially noise-free centers. The rest should contain mostly 0 and occasionally 1 memberships, especially if the errors are not systematic. If the errors are systematic, there may be a clustering event (e.g., a third cell that has more than just 0 or 1), which makes the task more difficult. However, even in this case, the cluster membership for the false cluster should be significantly weaker (e.g., as measured by an absolute or relative amount) than that of the two expected clusters. The trade-off in this case is that the starting point should include more aligned sequences, which relates directly to having longer reads or larger coverage redundancy.


The above step assumes that the two viable clusters are observed among the overlapped reads. For a large number of false positives, this would not be the case. If so, in the alpha-dimensional space the expected two clusters will be blurred, i.e., instead of being single points with high density, they will be blurred clusters of M points around the cells of interest, where these cells of interest are the noise-free centers at the center of each cluster. This enables the clustering methods to capture the locality of the expected points, despite the fact that the exact sequence is not represented in each read. A clustering event may also occur when the clusters are blurred (i.e., there could be more than two centers), but in a similar manner as described above, a score (e.g., the total counts for the cells of a cluster) can be used to distinguish a weaker cluster from the two real clusters for a diploid organism. The two real clusters can be used to create contigs, as described herein, for various regions, and the contigs can be matched into two groups to form haplotypes for a large region of the complex nucleic acid.
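A minimal sketch of locating the two substantially noise-free centers is given below: each overlapping read contributes the combination of alleles it reports at the N shared candidate het positions, the occupancy of each cell of the combinatorial space is counted, and the two densest cells are taken as the haplotype centers (with any strong third cell flagged as a possible systematic-error cluster). The read representation (allele strings with 'N' for a no-call) and the thresholds are illustrative assumptions.

    from collections import Counter

    def haplotype_centers(read_patterns, min_ratio=2.0):
        """read_patterns: allele strings over the N shared candidate hets,
        e.g. 'ACGTAACGTA', with 'N' marking a position the read did not call."""
        counts = Counter(p for p in read_patterns if "N" not in p)
        ranked = counts.most_common()
        centers = ranked[:2]                       # the two densest cells
        suspicious = [c for c in ranked[2:]
                      if centers and c[1] * min_ratio >= centers[-1][1]]
        return centers, suspicious                 # suspicious ~ possible systematic error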


Finally, the computing device or computer logic thereof can use the population-based (known) haplotypes to increase confidence and/or to provide extra guidance in finding the actual clusters. A way to enable this method is to give each observed haplotype a weight, and to give a smaller but non-zero weight to the unobserved haplotypes. By doing so, one achieves a bias toward the natural haplotypes that have been observed in the population of interest.


Converting Long Reads to Virtual MT

The algorithms that are designed for MT (including the phasing algorithm) can be used for long reads by assigning a random virtual tag (with uniform distribution) to each of the long fragments. The virtual tag has the benefit of enabling a truly uniform distribution for each code. MT cannot achieve this level of uniformity due to differences in the pooling of the codes and differences in the decoding efficiency of the codes. A ratio of 3:1 (and up to 10:1) can easily be observed in the representation of any two codes in MT. However, the virtual MT process results in a true 1:1 ratio between any two codes.
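A sketch of the virtual-tagging step is given below: each long read is simply assigned a tag drawn uniformly at random from a tag space, after which the MT phasing algorithms can be applied unchanged. The tag-space size and the seeding are illustrative assumptions.

    import random

    def assign_virtual_tags(read_ids, num_tags=1536, seed=0):
        """Assign each long read a virtual tag drawn uniformly at random,
        so that any two tags are represented at an (expected) 1:1 ratio."""
        rng = random.Random(seed)
        return {read_id: rng.randrange(num_tags) for read_id in read_ids}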


In view of the foregoing description, according to one aspect of the invention, methods are provided for determining a sequence of a complex nucleic acid (for example, a whole genome) of one or more organisms, that is, an individual organism or a population of organisms. Such methods comprise: (a) receiving at one or more computing devices a plurality of reads of the complex nucleic acid; and (b) producing, with the computing devices, an assembled sequence of the complex nucleic acid from the reads, the assembled sequence comprising less than 1.0, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.08, 0.07, 0.06, 0.05 or 0.04 false single nucleotide variant per megabase at a call rate of 70, 75, 80, 85, 90 or 95 percent or greater, wherein the methods are performed by one or more computing devices. In some aspects, a computer-readable non-transitory storage medium stores one or more sequences of instructions that comprise instructions which, when executed by one or more computing devices, cause the one or more computing devices to perform the steps of such methods.


According to one embodiment, in which such methods involve haplotype phasing, the method further comprises identifying a plurality of sequence variants in the assembled sequence and phasing the sequence variants (e.g., 70, 75, 80, 85, 90, 95 percent or more of the sequence variants) to produce a phased sequence, i.e., a sequence wherein sequence variants are phased. Such phasing information can be used in the context of error correction. For example, according to one embodiment, such methods comprise identifying as an error a sequence variant that is inconsistent with the phasing of at least two (or three or more) phased sequence variants.


According to another such embodiment, in such methods the step of receiving the plurality of reads of the complex nucleic acid comprises a computing device and/or a computer logic thereof receiving a plurality of reads from each of a plurality of long fragments of the complex nucleic acid. Information regarding such fragments is useful for correcting errors or for calling a base that otherwise would have been a “no call.” According to one such embodiment, such methods comprise a computing device and/or a computer logic thereof calling a base at a position of said assembled sequence on the basis of preliminary base calls for the position from two or more long fragments. For example, methods may comprise calling a base at a position of said assembled sequence on the basis of preliminary base calls from at least two, at least three, at least four, or more than four long fragments. In some embodiments, such methods may comprise identifying a base call as true if it is present in at least two, at least three, at least four, or more than four long fragments. In some embodiments, such methods may comprise identifying a base call as true if it is present in at least a majority (or at least 60%, at least 75%, or at least 80%) of the fragments for which a preliminary base call is made for that position in the assembled sequence. According to another such embodiment, such methods comprise a computing device and/or a computer logic thereof identifying a base call as true if it is present three or more times in reads from two or more long fragments.


According to another such embodiment, the long fragment from which the reads originate is determined by identifying a tag (or unique pattern of tags) that is associated with the fragment. Such tags optionally comprise an error-correction or error-detection code (e.g., a Reed-Solomon error correction code). According to one embodiment of the invention, upon sequencing a fragment and tag, the resulting read comprises tag sequence data and fragment sequence data.


According to another embodiment, such methods further comprise: a computing device and/or a computer logic thereof providing a first phased sequence of a region of the complex nucleic acid in the region comprising a short tandem repeat; a computing device and/or a computer logic thereof comparing reads (e.g., regular or mate-pair reads) of the first phased sequence of the region with reads of a second phased sequence of the region (e.g., using sequence coverage); and a computing device and/or a computer logic thereof identifying an expansion of the short tandem repeat in one of the first phased sequence or the second phased sequence based on the comparison.


According to another embodiment, the method further comprises a computing device and/or a computer logic thereof obtaining genotype data from at least one parent of the organism and producing an assembled sequence of the complex nucleic acid from the reads and the genotype data.


According to another embodiment, the method further comprises a computing device and/or a computer logic thereof performing steps that comprise: aligning a plurality of the reads for a first region of the complex nucleic acid, thereby creating an overlap between the aligned reads; identifying N candidate hets within the overlap; clustering the space of 2^N to 4^N possibilities or a selected subspace thereof, thereby creating a plurality of clusters; identifying two clusters with the highest density, each identified cluster comprising a substantially noise-free center; and repeating the foregoing steps for one or more additional regions of the complex nucleic acid.


According to another embodiment, such methods further comprise providing an amount of the complex nucleic acid, and sequencing the complex nucleic acid to produce the reads.


According to another embodiment, in such methods the complex nucleic acid is selected from the group consisting of a genome, an exome, a transcriptome, a methylome, a mixture of genomes of different organisms, and a mixture of genomes of different cell types of an organism.


According to another aspect of the invention, an assembled human genome sequence is provided that is produced by any of the foregoing methods. For example, one or more computer-readable non-transitory storage media stores an assembled human genome sequence that is produced by any of the foregoing methods. According to another aspect, a computer-readable non-transitory storage medium stores one or more sequences of instructions that comprise instructions which, when executed by one or more computing devices, cause the one or more computing devices to perform any, some, or all of the foregoing methods.


According to another aspect of the invention, methods are provided for determining a whole human genome sequence, such methods comprising: (a) receiving, at one or more computing devices, a plurality of reads of the genome; and (b) producing, with the one or more computing devices, an assembled sequence of the genome from the reads comprising less than 600 false heterozygous single nucleotide variants per gigabase at a genome call rate of 70% or greater. According to one embodiment, the assembled sequence of the genome has a genome call rate of 70% or more and an exome call rate of 70% or greater. In some aspects, a computer-readable non-transitory storage medium stores one or more sequences of instructions that comprise instructions which, when executed by one or more computing devices, cause the one or more computing devices to perform any of the methods of the invention described herein.


According to another aspect of the invention, methods are provided for determining a whole human genome sequence, such methods comprising: (a) receiving, at one or more computing devices, a plurality of reads from each of a plurality of long fragments, each long fragment comprising one or more fragments of the genome; and (b) producing, with the one or more computing devices, a phased, assembled sequence of the genome from the reads that comprises less than 1000 false single nucleotide variants per gigabase at a genome call rate of 70% or greater. In some aspects, a computer-readable non-transitory storage medium stores one or more sequences of instructions that comprise instructions which, when executed by one or more computing devices, cause the one or more computing devices to perform such methods.


While this invention has been disclosed with reference to specific aspects and embodiments, it is apparent that other embodiments and variations of this invention may be devised by others skilled in the art without departing from the true spirit and scope of the invention.

Claims
  • 1-13. (canceled)
  • 14. A method for sequence analysis of a target nucleic acid comprising: (a) combining a plurality of long DNA fragments of the target nucleic acid with a population of tag-containing sequences, wherein the population comprises at least 1000 different tag sequences;(b) producing tagged long fragments, wherein each tagged long fragment comprises target nucleic acid sequence and multiple interspersed tag sequences, wherein the multiple interspersed tag sequences in an individual tagged long fragment may be the same or different;(c) producing from each tagged long fragment a plurality of tagged subfragments, wherein the tagged subfragments each comprise one or more tag sequences;(d) obtaining sequence of individual tagged subfragments, wherein the obtained sequence includes target nucleic acid sequence and at least one tag sequence;(e) combining sequences obtained in (d) to produce assembled sequence(s) of the target nucleic acid, wherein the combining comprises (i) determining that sequences obtained in (d) originated from the same long DNA fragment if said sequences comprise the same tag sequence and/or (ii) identifying pairs of sequences as being adjacent sequences in the target nucleic acid if the pair comprise the same tag sequence.
  • 15. The method of claim 14, wherein steps (a)-(c) are carried out in a single vessel or mixture.
  • 16. The method of claim 1 wherein the plurality of long DNA fragments are genomic DNA sequence.
  • 17. The method of claim 16 wherein the plurality of long DNA fragments are at least 50 kb, optionally at least 100 kb, in length.
  • 18. The method of claim 17 wherein the length of the long DNA fragments is in the range 50 kb to 200 kb.
  • 19. The method of claim 1 wherein the tagged long fragments comprise a plurality of the tag-containing sequences at a selected average spacing.
  • 20. The method of claim 18 wherein the average spacing is in the range 100 to 5000 bases.
  • 21. The method of claim 19 wherein the average spacing is in the range 200 and 1500 bases.
  • 22. The method of claim 20 wherein the average spacing is in the range 250 and 1000 bases.
  • 23. The method of claim 22, wherein steps (a)-(c) are carried out in a single vessel or mixture and the single vessel or mixture comprises more than a haploid (N) amount of genomic DNA.
  • 24. A composition comprising at least 10³ different tag-containing nucleic acid elements and at least one of (i) genomic DNA and (ii) primers that bind the tag-containing nucleic acid elements.
  • 25. The composition of claim 24 that comprises at least 5 genome equivalents of genomic DNA.
  • 26. The composition of claim 25 that comprises both genomic DNA and primers.
  • 27. The composition of claim 24 that comprises tagged long fragments comprising genomic nucleic acid sequence and multiple interspersed tag sequences.
  • 28. A kit comprising a library comprising 10³ or more distinct bar codes or sources of clonal barcodes: i) a library of barcodes associated with transposon ends, and optionally adaptor sequences; ii) a library of clonal barcodes, optionally with adaptor sequences, comprising a plurality of 10⁴ or more distinct sources of clonal bar codes; iii) a library of concatemers comprising monomers, wherein the monomers comprise bar codes; iv) a library of templates suitable for rolling circle amplification, wherein the templates comprise a monomer as described in (iii); and/or v) a library of hairpin oligonucleotides, each oligonucleotide comprising two copies of a barcode sequence, wherein the library comprises a plurality of at least about 10⁴ barcodes.
  • 29. The kit of claim 28 comprising an enzyme selected from a transposase, a polymerase, a ligase, an endonuclease and an exonuclease.
  • 30. The kit of claim 29 that comprises at least about 10⁴, at least about 10⁵, at least about 10⁶, or at least about 10⁷ different barcodes.
  • 31. The kit of claim 28 that comprises at least about 10⁴, at least about 10⁵, at least about 10⁶, or at least about 10⁷ different barcodes or sources of clonal barcodes.
  • 32. The kit of claim 28 wherein the library members comprise one or two common sequences for primer binding.
  • 33. The kit of claim 32 comprising a primer or primers that anneal to a sequence or sequences within tag-containing sequence.
RELATED APPLICATION

This application claims benefit of and is a continuation of U.S. patent application Ser. No. 18/065,567, filed Dec. 13, 2022, which is a continuation of U.S. patent application Ser. No. 15/993,418, filed May 30, 2018, now abandoned, which is a continuation of U.S. patent application Ser. No. 15/136,780, filed Apr. 22, 2016, now U.S. Pat. No. 10,023,910, which is a continuation of U.S. patent application Ser. No. 14/205,145, filed Mar. 11, 2014, now U.S. Pat. No. 9,328,382, which claims the priority benefit of U.S. provisional patent application No. 61/801,052, filed Mar. 15, 2013, which applications are incorporated herein by reference in their entirety.

Provisional Applications (1)
Number Date Country
61801052 Mar 2013 US
Continuations (4)
Number Date Country
Parent 18065567 Dec 2022 US
Child 18487926 US
Parent 15993418 May 2018 US
Child 18065567 US
Parent 15136780 Apr 2016 US
Child 15993418 US
Parent 14205145 Mar 2014 US
Child 15136780 US