The present application provides new methods for the efficient storage, access, transmission and multiplexing of bioinformatic data, in particular genomic sequencing data.
An appropriate representation of genome sequencing data is fundamental to enable efficient processing, storage and transmission of genomic data, so as to render possible and facilitate analysis applications such as genome variant calling and all analyses performed, for various purposes, by processing the sequencing data and metadata. Today, genome sequencing information is generated by High Throughput Sequencing (HTS) machines in the form of sequences of nucleotides (a.k.a. bases) represented by strings of letters from a defined vocabulary.
These sequencing machines do not read out an entire genome or gene, but they produce short random fragments of nucleotide sequences known as sequence reads.
A quality score is associated with each nucleotide in a sequence read. Such a number represents the confidence level given by the machine to the reading of a specific nucleotide at a specific location in the nucleotide sequence.
The raw sequencing data generated by NGS machines are commonly stored in FASTQ files (see also
The smallest vocabulary to represent sequences of nucleotides obtained by a sequencing process is composed of five symbols: {A, C, G, T, N}, representing the four types of nucleotides present in DNA, namely Adenine, Cytosine, Guanine and Thymine, plus the symbol N to indicate that the sequencing machine was not able to call any base with a sufficient level of confidence, so that the type of base at such position remains undetermined in the reading process. In RNA, Thymine is replaced by Uracil (U). The nucleotide sequences produced by sequencing machines are called “reads”. In the case of paired reads the term “template” is used to designate the original sequence from which the read pair has been extracted. Sequence reads can be composed of a number of nucleotides ranging from a few dozen up to several thousand. Some technologies produce sequence reads in pairs where each read can come from one of the two DNA strands.
In the genome sequencing field the term “coverage” is used to express the level of redundancy of the sequence data with respect to a reference genome. For example, to reach a coverage of 30× on a human genome (3.2 billion bases long) a sequencing machine must produce a total of about 30×3.2 billion bases so that on average each position in the reference is “covered” 30 times.
The most used genome information representations of sequencing data are based on the FASTQ and SAM file formats, which are commonly made available in zipped form to reduce their original size. These traditional file formats, respectively FASTQ for non-aligned and SAM for aligned sequencing data, are constituted by plain text characters and are thus compressed using general purpose approaches such as LZ (from Lempel and Ziv) schemes (the well-known zip, gzip, etc.). When general purpose compressors such as gzip are used, the result of the compression is usually a single blob of binary data. Information in such monolithic form is quite difficult to archive, transfer and elaborate, particularly in the case of high throughput sequencing where the volumes of data are extremely large.
After sequencing, each stage of a genomic information processing pipeline produces data represented by a completely new data structure (file format) despite the fact that in reality only a small fraction of the generated data is new with respect to the previous stage.
Commonly used solutions present several drawbacks: data archival is inefficient because a different file format is used at each stage of the genomic information processing pipeline, which implies the multiple replication of data, with the consequent rapid increase of the required storage space. This is inefficient and unnecessary, and it is also becoming unsustainable given the increase of the data volume generated by HTS machines. This has consequences in terms of available storage space and generated costs, and it is also hindering the benefits of genomic analysis in healthcare from reaching a larger portion of the population. The impact of the IT costs generated by the exponential growth of sequence data to be stored and analysed is currently one of the main challenges that the scientific community and the healthcare industry have to face (see Scott D. Kahn, “On the future of genomic data”, Science 331, 728 (2011), and Pavlichin, D. S., Weissman, T., and G. Yona, 2013, “The human genome contracts again”, Bioinformatics 29(17): 2199-2202). At the same time, several initiatives are attempting to scale genome sequencing from a few selected individuals to large populations (see Josh P. Roberts, “Million Veterans Sequenced”, Nature Biotechnology 31, 470 (2013)).
The transfer of genomic data is slow and inefficient because the currently used data formats are organized into monolithic files of up to several hundred gigabytes in size, which need to be entirely transferred to the receiving end in order to be processed. This implies that the analysis of a small segment of the data requires the transfer of the entire file, with significant costs in terms of consumed bandwidth and waiting time. Online transfer is often prohibitive due to the large volumes of the data to be transferred, and the transport of the data is performed by physically moving storage media such as hard disk drives or storage servers from one location to another.
These limitations occurring when employing state of the art approaches are overcome by the present invention. Processing the data is slow and inefficient due to the fact that the information is not structured in such a way that the portions of the different classes of data and metadata required by commonly used analysis applications can be retrieved without accessing the data in its totality. This fact implies that common analysis pipelines may need to run for days or weeks, wasting precious and costly processing resources, because of the need, at each stage, of accessing, parsing and filtering large volumes of data even if the portion of data relevant for the specific analysis purpose is much smaller.
These limitations prevent health care professionals from timely obtaining genomic analysis reports and promptly reacting to disease outbreaks. The present invention provides a solution to this need.
There is another technical limitation that is overcome by the present invention.
In fact the invention aims at providing an appropriate representation of genomic sequencing data and metadata by organizing and partitioning the data so that the compression of data and metadata is maximized and several functionalities such as selective access and support for incremental updates are efficiently enabled.
A key aspect of the invention is a specific definition of classes of data and metadata to be represented by an appropriate source model, coded (i.e. compressed) separately by being structured in specific layers. The most important achievements of this invention with respect to existing state of the art methods consist in:
The present application discloses a method and system addressing the problem of efficient manipulation, storage and transmission of very large amounts of genomic sequencing data, by employing a structured access units approach combined with multiplexing techniques.
The present application overcomes all the limitations of the prior art approaches related to the functionality of genomic data accessibility, efficient processing of data subsets, and transmission and streaming functionality, combined with efficient compression.
Today the most used representation format for genomic data is the Sequence Alignment Mapping (SAM) textual format and its binary counterpart BAM. SAM files are human readable ASCII text files, whereas BAM adopts a block based variant of gzip. BAM files can be indexed to enable a limited modality of random access. This is supported by the creation of a separate index file.
The BAM format is characterized by poor compression performance for the following reasons:
A more sophisticated approach to genomic data compression that is less commonly used, but more efficient than BAM, is CRAM (CRAM specification: https://samtools.github.io/hts-specs/CRAMv3.pdf). CRAM provides more efficient compression through the adoption of differential encoding with respect to an existing reference (it partially exploits the redundancy of the data source), but it still lacks features such as incremental updates, support for streaming and selective access to specific classes of compressed data.
CRAM relies on the concept of the CRAM record. Each CRAM record encodes a single mapped or unmapped read by encoding all the elements necessary to reconstruct it.
The main differences of the present invention with respect to the CRAM approach are:
Genomic compression algorithms used in the state of the art can be classified into these categories:
The first two categories share the disadvantage of not exploiting the specific characteristics of the data source (genomic sequence reads); they process the genomic data as strings of text to be compressed without taking into account the specific properties of such kind of information (e.g. redundancy among reads, reference to an existing sample). Two of the most advanced toolkits for genomic data compression, namely CRAM and Goby (“Compression of structured high-throughput sequencing data”, F. Campagne, K. C. Dorff, N. Chambwe, J. T. Robinson, J. P. Mesirov, T. D. Wu), make a poor use of arithmetic coding as they implicitly model data as independent and identically distributed by a Geometric distribution. Goby is slightly more sophisticated since it converts all the fields to lists of integers and each list is encoded independently using arithmetic coding without using any context. In its most efficient mode of operation, Goby is able to perform some inter-list modeling over the integer lists to improve compression. These prior art solutions yield poor compression ratios and data structures that are difficult if not impossible to selectively access and manipulate once compressed. Downstream analysis stages can turn out to be inefficient and very slow due to the necessity of handling large and rigid data structures even to perform simple operations or to access selected regions of the genomic dataset.
A simplified vision of the relation among the file formats used in genome processing pipelines is depicted in
The use of multiple file formats for the storage of genomic information is highly inefficient and costly. Having different file formats at different stages of the genomic information life cycle implies a linear growth of utilized storage space even if the incremental information is minimal. Further disadvantages of prior art solutions are listed below.
There is therefore a need for an appropriate Genomic Information Storage Layer (Genomic File Format) that enables efficient compression, supports selective access in the compressed domain, and supports the incremental addition of heterogeneous metadata in the compressed domain at all levels of the different stages of genomic data processing.
The present invention provides a solution to the limitations of the state of the art by employing the method, devices and computer programs as claimed in the accompanying set of claims.
The present invention describes a multiplexing file format and the relevant access units to be used to store, transport, access and process genomic or proteomic information in the form of sequences of symbols representing molecules.
These molecules include, for example, nucleotides, amino acids and proteins. One of the most important pieces of information represented as sequences of symbols is the data generated by high-throughput genome sequencing devices.
The genome of any living organism is usually represented as a string of symbols expressing the chain of nucleic acids (bases) characterizing that organism. Current state of the art genome sequencing technology is able to produce only a fragmented representation of the genome in the form of several (up to billions of) strings of nucleic acids associated with metadata (identifiers, level of accuracy, etc.). Such strings are usually called “sequence reads” or “reads”.
The typical steps of the genomic information life cycle comprise Sequence reads extraction, Mapping and Alignment, Variant detection, Variant annotation and Functional and Structural Analysis (see
Sequence reads extraction is the process—performed by either a human operator or a machine—of representing fragments of genetic information in the form of sequences of symbols representing the molecules composing a biological sample. In the case of nucleic acids such molecules are called “nucleotides”.
The sequences of symbols produced by the extraction are commonly referred to as “reads”. This information is usually encoded in prior art as FASTA files including a textual header and a sequence of symbols representing the sequenced molecules.
When the biological sample is sequenced to extract the DNA of a living organism, the alphabet is composed of the symbols (A,C,G,T,N).
When the biological sample is sequenced to extract the RNA of a living organism, the alphabet is composed of the symbols (A,C,G,U,N).
In case the IUPAC extended set of symbols, the so-called “ambiguity codes”, is also generated by the sequencing machine, the alphabet used for the symbols composing the reads is (A, C, G, T, U, W, S, M, K, R, Y, B, D, H, V, N or -).
When the IUPAC ambiguity codes are not used, a sequence of quality scores can be associated with each sequence read. In such a case prior art solutions encode the resulting information as a FASTQ file. Sequencing devices can introduce errors in the sequence reads such as:
The term “coverage” is used in literature to quantify the extent to which a reference genome or part thereof can be covered by the available sequence reads. Coverage is said to be:
Sequence alignment refers to the process of arranging sequence reads by finding regions of similarity that may be a consequence of functional, structural, or evolutionary relationships among the sequences. When the alignment is performed with reference to a pre-existing nucleotide sequence referred to as a “reference genome”, the process is called “mapping”. Sequence alignment can also be performed without a pre-existing sequence (i.e. reference genome); in such cases the process is known in prior art as “de novo” alignment. Prior art solutions store this information in SAM, BAM or CRAM files. The concept of aligning sequences to reconstruct a partial or complete genome is depicted in
Variant detection (a.k.a. variant calling) is the process of translating the aligned output of genome sequencing machines (sequence reads generated by NGS devices and aligned) into a summary of the unique characteristics of the organism being sequenced that cannot be found in other pre-existing sequences or can be found in a few pre-existing sequences only. These characteristics are called “variants” because they are expressed as differences between the genome of the organism under study and a reference genome. Prior art solutions store this information in a specific file format called a VCF file.
Variant annotation is the process of assigning functional information to the genomic variants identified by the process of variant calling. This implies the classification of variants according to their relationship to coding sequences in the genome and according to their impact on the coding sequence and the gene product. In prior art this is usually stored in a MAF file.
The process of analysing DNA strands (variants, CNV = copy number variation, methylation, etc.) to define their relationship with gene (and protein) functions and structure is called functional or structural analysis. Several different solutions exist in the prior art for the storage of this data.
Genomic File Format
The invention disclosed in this document consists in the definition of a compressed data structure for representing, processing, manipulating and transmitting genome sequencing data that differs from prior art solutions in at least the following aspects:
Classifying the reads according to the result of mapping, and coding them using descriptors to be stored in layers (position layer, mate distance layer, mismatch type layer, etc.), presents the following advantages:
The key elements of the invention are:
The method described in this document aims at exploiting the available a-priori knowledge on genomic data to define an alphabet of syntax elements with reduced entropy. In genomics the available knowledge is represented by an existing genomic sequence, usually—but not necessarily—of the same species as the one to be processed. As an example, human genomes of different individuals differ by only a fraction of 1%. On the other hand, that small amount of data contains enough information to enable early diagnosis, personalized medicine, customized drug synthesis, etc. This invention aims at defining a genomic information representation format where the relevant information is efficiently accessible and transportable and the weight of the redundant information is reduced.
The technical features used in the present invention are:
In order to solve all the aforementioned problems of the prior art (in terms of efficient access to random positions in the file, efficient transmission and storage, efficient compression), the present application re-orders and packs together the data that are more homogeneous and/or semantically significant, for ease of processing.
The present invention also adopts a data structure based on the concept of Access Unit and the multiplexing of the relevant data.
Genomic data are structured and encoded into different access units. Hereafter follows a description of the genomic data that are contained in the different access units.
Genomic Data Classification
The sequence reads generated by sequencing machines are classified by the disclosed invention into 5 different “Classes” according to the results of the alignment with respect to one or more reference sequences or genomes.
When aligning a DNA sequence of nucleotides with respect to a reference sequence, there are five possible results:
Unmapped reads can be assembled into a single sequence using de-novo assembly algorithms. Once the new sequence has been created, unmapped reads can be further mapped with respect to it and classified into one of the four classes P, N, M and I.
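For illustration only, the following minimal sketch shows how such a classification could be implemented. The class criteria (P = perfect match, N = mismatches only at undetermined “N” bases, M = substitution mismatches, I = insertions/deletions, U = unmapped) follow the description given in this document, while the function signature and the alignment representation are assumptions of this example.

```python
# Illustrative sketch of the five-class read classification described above.
# Class criteria follow the document; names and the alignment representation
# are assumptions made for this example only.

def classify_read(read, reference, mapping_pos, has_indels):
    """Return one of 'P', 'N', 'M', 'I', 'U' for a read."""
    if mapping_pos is None:
        return 'U'                      # unmapped: no position on the reference
    if has_indels:
        return 'I'                      # insertions/deletions present
    mismatches = [
        (i, base) for i, base in enumerate(read)
        if base != reference[mapping_pos + i]
    ]
    if not mismatches:
        return 'P'                      # perfect match
    if all(base == 'N' for _, base in mismatches):
        return 'N'                      # mismatches only where the machine called 'N'
    return 'M'                          # at least one substitution (SNP-like mismatch)

# Example: a read matching the reference except for one undetermined base.
print(classify_read("ACGNA", "TTACGTAGG", 2, has_indels=False))  # -> 'N'
```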
The data structure of said genomic data requires the storage of global parameters and metadata to be used by the decoding engine. These data are structured in a main header described in the table below.
Once the classification of reads is completed with the definition of the Classes, further processing consists in defining a set of distinct syntax elements which represent the remaining information enabling the reconstruction of the DNA read sequence when represented as being mapped on a given reference sequence. A DNA segment referred to a given reference sequence can be fully expressed by:
This classification creates groups of descriptors (syntax elements) that can be used to univocally represent genome sequence reads. The table below summarizes the syntax elements needed for each class of aligned reads.
Reads belonging to class P are characterized by, and can be perfectly reconstructed from, only a position, reverse complement information, an offset between mates (in case they have been obtained by a sequencing technology yielding mated pairs), some flags and a read length.
The next section details how these descriptors are defined.
Position Descriptors Layer
In each Access Unit, only the mapping position of the first encoded read is stored in the AU header as absolute position on the reference genome. All the other positions are expressed as a difference with respect to the previous position and are stored in a specific layer. This modeling of the information source, defined by the sequence of read positions, is in general characterized by a reduced entropy particularly for sequencing processes generating high coverage results. Once the absolute position of the first alignment has been stored, all positions of other reads are expressed as difference (distance) with respect to the first one.
For example
The same source model is used for the positions of reads belonging to classes N, M, P and I. In order to enable any combination of selective access to the data, the positions of reads belonging to the four classes are encoded in separate layers as depicted in Table I.
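A minimal sketch of this differential position coding is given below. The mechanism (the first position stored absolutely in the AU header, the following positions stored as differences) follows the description above; the function names are illustrative.

```python
# Sketch of the differential coding of mapping positions described above:
# the first position is stored absolutely (in the AU header), the others
# as differences with respect to the previous position.

def encode_positions(absolute_positions):
    first = absolute_positions[0]                      # stored in the AU header
    deltas = [b - a for a, b in zip(absolute_positions, absolute_positions[1:])]
    return first, deltas                               # deltas go into the pos layer

def decode_positions(first, deltas):
    positions = [first]
    for d in deltas:
        positions.append(positions[-1] + d)
    return positions

first, deltas = encode_positions([10000, 10001, 10003, 10010])
print(first, deltas)                    # 10000 [1, 2, 7]  (low-entropy values)
print(decode_positions(first, deltas))  # [10000, 10001, 10003, 10010]
```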
Pairing Descriptors Layer
The pairing descriptor is stored in the pair layer. Such layer stores descriptors encoding the information needed to reconstruct the originating read pairs, when the employed sequencing technology produces reads in pairs. Although at the date of the disclosure of the invention the vast majority of sequencing data is generated using a technology producing paired reads, this is not the case for all technologies. This is why this layer is not necessary to reconstruct all sequencing data information if the sequencing technology of the genomic data considered does not generate paired read information.
Definitions:
mate pair: the read associated with another read in a read pair (e.g. Read 2 is the mate pair of Read 1 in the example of
pairing distance: number of nucleotide positions on the reference sequence which separate one position in the first read (pairing anchor, e.g. last nucleotide of first read) from one position of the second read (e.g. the first nucleotide of the second read)
most probable pairing distance (MPPD): this is the most probable pairing distance expressed in number of nucleotide positions.
position pairing distance (PPD): the PPD is a way to express a pairing distance in terms of the number of reads separating one read from its respective mate present in a specific position descriptor layer.
most probable position pairing distance (MPPPD): is the most probable number of reads separating one read from its mate pair present in a specific position descriptor layer.
position pairing error (PPE): is defined as the difference between the MPPD or MPPPD and the actual position of the mate.
pairing anchor: position of first read last nucleotide in a pair used as reference to calculate the distance of the mate pair in terms of number of nucleotide positions or number of read positions.
The pair descriptor layer is the vector of pairing errors calculated as the number of reads to be skipped to reach the mate pair of the first read of a pair, with respect to the defined decoding pairing distance.
The same descriptors are used for the pairing information of reads belonging to classes N, M, P and I. In order to enable selective access to the different data classes, the pairing information of reads belonging to the four classes is encoded in different layers as depicted in.
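The following sketch illustrates, under the definitions given above, how a pairing distance and a position pairing error could be computed. The anchor convention (last nucleotide of the first read) and the distance definition are taken from the definitions above; the function interface is an assumption of this example.

```python
# Sketch of the pairing-distance descriptors defined above. The anchor is the
# last nucleotide of the first read; the distance is counted in nucleotide
# positions up to the first nucleotide of the mate. Names are illustrative.

def pairing_distance(pos_read1, len_read1, pos_read2):
    anchor = pos_read1 + len_read1 - 1          # pairing anchor (last nucleotide)
    return pos_read2 - anchor                   # nucleotide positions to the mate

def pairing_error(actual_distance, most_probable_distance):
    # Position pairing error (PPE): deviation from the expected (MPPD) value.
    return actual_distance - most_probable_distance

d = pairing_distance(pos_read1=10000, len_read1=100, pos_read2=10250)
print(d)                         # 151
print(pairing_error(d, 150))     # 1 -> small values dominate the pair layer
```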
Pairing Information in Case of Reads Mapped on Different References
In the process of mapping sequence reads on a reference sequence it is not uncommon to have the first read in a pair mapped on one reference (e.g. chromosome 1) and the second on a different reference (e.g. chromosome 4). In this case the pairing information described above has to be integrated with additional information related to the reference sequence used to map one of the reads. This is achieved by coding
1. A reserved value (flag) indicating that the pair is mapped on two different sequences (different values indicate whether read 1 or read 2 is mapped on the sequence that is not currently encoded)
2. a unique reference identifier referring to the reference identifiers encoded in the main header structure as described in Table 1.
3. a third element containing the mapping information on the reference identified at point 2 and expressed as offset with respect to the last encoded position.
In
1) One special reserved value is encoded as pairing distance (in this case 0xffffff)
2) A second descriptor provides a reference ID as listed in the main header (in this case 4)
3) The third element contains the mapping information on the concerned reference (170).
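A minimal sketch of this three-element coding, mirroring the example above, is given below. The reserved flag value and the field contents follow the example; the encoder interface is an assumption of this sketch.

```python
# Sketch of the three-element coding used when the two reads of a pair map on
# different reference sequences, as in the example above. The reserved flag
# value and field layout follow the text; the encoder interface is assumed.

SPLIT_PAIR_FLAG = 0xFFFFFF   # reserved "pair on different references" value

def encode_split_pair(mate_reference_id, mate_position, last_encoded_position):
    return [
        SPLIT_PAIR_FLAG,                       # 1) reserved pairing-distance value
        mate_reference_id,                     # 2) reference ID from the main header
        mate_position - last_encoded_position, # 3) offset w.r.t. last encoded position
    ]

# Mate mapped on reference 4, at offset 170 from the last encoded position:
print(encode_split_pair(4, 10170, 10000))  # [16777215, 4, 170]
```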
Reverse Complement Descriptor Layer
Each read of the read pairs produced by sequencing technologies can originate from either genome strand of the sequenced organic sample. However, only one of the two strands is used as reference sequence.
When strand 1 is used as reference sequence, read 2 can be encoded as the reverse complement of the corresponding fragment on strand 1. This is shown in
In the case of coupled reads, there are four possible combinations of direct and reverse complement mate pairs. This is shown in
The same coding is used for the reverse complement information of reads belonging to classes P, N, M and I. In order to enable enhanced selective access to the data, the reverse complement information of reads belonging to the four classes is coded in different layers as depicted in Table 2.
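For illustration, the sketch below shows the reverse complement operation and one possible way to enumerate the four orientation combinations of a pair; the numeric codes 0 to 3 are an assumption of this example.

```python
# Sketch of reverse-complement reconstruction: when strand 1 is the reference,
# a read originating from strand 2 can be stored as the reverse complement of
# the corresponding reference fragment. Names are illustrative.

COMPLEMENT = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C', 'N': 'N'}

def reverse_complement(sequence):
    return ''.join(COMPLEMENT[base] for base in reversed(sequence))

# The four possible direct / reverse-complement combinations of a read pair
# can then be signalled with one small code per pair (illustrative values):
PAIR_ORIENTATIONS = {
    0: (False, False),   # read 1 direct, read 2 direct
    1: (False, True),    # read 1 direct, read 2 reverse complement
    2: (True, False),    # read 1 reverse complement, read 2 direct
    3: (True, True),     # both reverse complement
}

print(reverse_complement("ACGTN"))  # NACGT
```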
Mismatches of Class N
Class N includes all reads which show mismatches where an ‘N’ is present instead of a base call. All other bases perfectly match the reference sequence.
The positions of Ns in read 1 are encoded as
The positions of Ns in read 2 are encoded as
In the nmis layer, the encoding of each read pair is terminated by a special “separator” symbol “S”. This is shown in
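The following sketch illustrates the nmis layer coding for a read pair terminated by the separator symbol. The separator mechanism follows the text; since the exact in-read offset convention is not restated here, the convention of expressing read 2 positions past the read 1 length is an assumption of this example.

```python
# Sketch of the nmis layer coding for class N: per read pair, the positions of
# the 'N' symbols are emitted, and each pair is terminated by a separator 'S'.
# Expressing read 2 positions past the read 1 length is an assumption of this
# example; the separator mechanism follows the text.

SEPARATOR = 'S'

def encode_n_positions(read1, read2):
    layer = [i for i, b in enumerate(read1) if b == 'N']
    layer += [len(read1) + i for i, b in enumerate(read2) if b == 'N']
    layer.append(SEPARATOR)          # terminates the encoding of this pair
    return layer

print(encode_n_positions("ACNTA", "GNNCA"))  # [2, 6, 7, 'S']
```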
Encoding Substitutions (Mismatches or SNPs)
A substitution is defined as the presence, in a mapped read, of a different nucleotide with respect to the one that is present in the reference sequence at the same position (see
Each substitution can be encoded as
Substitutions Positions
A substitution position is calculated as for the values of the nmis layer, i.e.:
In read 1 substitutions are encoded
In read 2 substitutions are encoded:
In the snpp layer, the encoding of each read pair is terminated by a special “separator” symbol.
Substitutions Types Descriptors
For class M (and class I, as described in the next sections), mismatches are coded by an index (moving from right to left) from the actual symbol present in the reference to the corresponding substitution symbol present in the read, over the substitution vector {A, C, G, T, N, Z}. For example, if the aligned read presents a C instead of a T which is present at the same position in the reference, the mismatch index will be denoted as “4”. The decoding process reads the encoded syntax element and the nucleotide at the given position on the reference, and moves from left to right to retrieve the decoded symbol. E.g. a “2” received for a position where a G is present in the reference will be decoded as “N”.
In case of presence of IUPAC ambiguity codes, the substitution indexes change as shown in
In case the encoding of substitution types described above presents high information entropy, an alternative method of substitution encoding consists in storing only the mismatch positions in separate layers, one per nucleotide, as depicted in
Coding of Insertions and Deletions
For class I, mismatches and deletions are coded by an index (moving from right to left) from the actual symbol present in the reference to the corresponding substitution symbol present in the read: {A, C, G, T, N, Z}. For example, if the aligned read presents a C instead of a T present at the same position in the reference, the mismatch index will be “4”. In case the read presents a deletion where an A is present in the reference, the coded symbol will be “5”. The decoding process reads the coded syntax element and the nucleotide at the given position on the reference, and moves from left to right to retrieve the decoded symbol. E.g. a “3” received for a position where a G is present in the reference will be decoded as “Z”, which indicates the presence of a deletion in the sequence read.
Inserts are coded as 6, 7, 8, 9, 10, respectively for inserted A, C, G, T and N.
In case of adoption of the IUPAC ambiguity codes, the substitution mechanism is exactly the same; however, the substitution vector is extended as: S = {A, C, G, T, N, Z, M, R, W, S, Y, K, V, H, D, B}.
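One consistent reading of the index scheme described above is a circular (modular) distance over the substitution vector; that reading reproduces all the worked examples given in this section and is adopted, as an assumption, in the following sketch.

```python
# Sketch of the substitution / deletion / insertion index coding described
# above. Interpreting the "right to left" / "left to right" counting as a
# circular (modular) distance over the substitution vector S reproduces every
# worked example in the text; that modular reading is an assumption here.

S = ['A', 'C', 'G', 'T', 'N', 'Z']          # 'Z' marks a deletion in the read
INSERT_CODES = {'A': 6, 'C': 7, 'G': 8, 'T': 9, 'N': 10}

def encode_mismatch(ref_symbol, read_symbol):
    return (S.index(read_symbol) - S.index(ref_symbol)) % len(S)

def decode_mismatch(ref_symbol, code):
    return S[(S.index(ref_symbol) + code) % len(S)]

print(encode_mismatch('T', 'C'))   # 4  (read has C where reference has T)
print(decode_mismatch('G', 2))     # N
print(encode_mismatch('A', 'Z'))   # 5  (deletion where reference has A)
print(decode_mismatch('G', 3))     # Z  (deletion)
print(INSERT_CODES['A'])           # 6  (inserted A)
```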
The following structures of file format, access units and multiplexing are described with reference to the coding elements disclosed here above. However, the access units, the file format and the multiplexing provide the same technical advantage also with other, different algorithms of source modeling and genomic data compression.
File Format: Selective Access to Regions of Genomic Data
Master Index Table
In order to support selective access to specific regions of the aligned data, the data structure described in this document implements an indexing tool called Master Index Table (MIT). This is a multi-dimensional array containing the loci at which specific reads map on the used reference sequences. The values contained in the MIT are the mapping positions of the first read in each pos layer, so that non-sequential access to each Access Unit is supported. The MIT contains one section per class of data (P, N, M and I) and per reference sequence. The MIT is contained in the Main Header of the encoded data.
The values contained in the MIT depicted in
For example, with reference to
Together with the pointers to the layers containing the data belonging to the four classes of genomic data described above, the MIT can be used as an index for additional metadata and/or annotations added to the genomic data during its life cycle.
Local Index Table
Each data layer described above is prefixed with a data structure referred to as local header. The local header contains a unique identifier of the layer, a vector of Access Units counters per each reference sequence, a Local Index Table (LIT) and optionally some layer specific metadata. The LIT is a vector of pointers to the physical position of the data belonging to each AU in the layer payload.
In the previous example, in order to access the region 150,000 to 250,000 of reads aligned on reference sequence no. 2, the decoding application retrieves positions 3 and 4 from the MIT. These values shall be used by the decoding process to access the 3rd and 4th elements of the corresponding section of the LIT. In the example shown in
position of the data blocks belonging to the requested AU=data blocks belonging to AUs of reference 1 to be skipped+position retrieved using the MIT, i.e.
First block position: 5+3=8
Last block position: 5+4=9
The blocks of data retrieved using the indexing mechanism called Local Index Table are part of the requested Access Units.
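The following sketch reproduces this two-level lookup; the MIT section contents are illustrative values chosen so that the worked example above (AU positions 3 and 4, five blocks of reference 1 to skip, block positions 8 and 9) is reproduced.

```python
# Sketch of the two-level MIT/LIT lookup from the worked example above: the
# MIT section for a class/reference gives the AU indices whose reads cover
# the queried region, and the LIT converts those indices into physical block
# positions inside the layer payload. Table contents are illustrative.

def find_au_range(mit_section, start, end):
    """Indices of AUs whose first mapping position may cover [start, end]."""
    hits = []
    for i, first_pos in enumerate(mit_section):
        next_pos = mit_section[i + 1] if i + 1 < len(mit_section) else float('inf')
        if first_pos <= end and next_pos > start:
            hits.append(i)
    return hits

# MIT section for reference 2: first mapping position of each AU's pos layer.
mit_reference_2 = [10, 85000, 150000, 200000, 310000]
au_indices = find_au_range(mit_reference_2, 150000, 250000)
print([i + 1 for i in au_indices])      # [3, 4] -> the 3rd and 4th AUs

# LIT: 5 blocks belonging to AUs of reference 1 precede them in the payload.
blocks_to_skip = 5
print([blocks_to_skip + i + 1 for i in au_indices])   # [8, 9]
```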
Access Units
The genomic data classified in data classes and structured in compressed or uncompressed layers are organized into different access units.
Genomic Access Units (AUs) are defined as sections of genome data (in a compressed or uncompressed form) that reconstruct nucleotide sequences and/or the relevant metadata, and/or sequences of DNA/RNA (e.g. the virtual reference) and/or annotation data generated by a genome sequencing machine and/or a genomic processing device or analysis application.
An Access Unit is a block of data that can be decoded either independently from other Access Units by using only globally available data (e.g. decoder configuration) or by using information contained in other Access Units. Access Units contain information related to genomic data in the form of positional information (absolute and/or relative), information related to reverse complement and possibly pairing and additional data. It is possible to identify several types of access units.
Access units are differentiated by:
Access units of any type can be further classified into different “categories”.
Hereafter follows a non-exhaustive list of definitions of different types of genomic access units:
Access units of this type can contain information of mismatching or dissimilarity or non-correspondence with respect to the information contained in the access unit of type 0.
Each Access Unit can have a different number of packets in each block, but within an Access Unit all blocks have the same number of packets.
Each data packet can be identified by the combination of 3 identifiers X Y Z where:
Access units of any type can be classified and labelled in different “categories” according to different sequencing processes. For example, but not as a limitation, classification and labelling can take place when
The access units of types 1, 2, 3 and 4 are built according to the result of a matching function applied to genome sequence fragments (a.k.a. reads) with respect to the reference sequence encoded in the Access Units of type 0 they refer to.
For example access units (AUs) of type 1 (see
With reference to the genomic data classification previously described in this document, the Access Units of type 1 described above would contain information related to genomic sequence reads of class P (perfect matches).
In case of variable read lengths and paired reads, the data contained in the AUs of type 1 mentioned in the previous example have to be integrated with the data representing the information about read pairing and read length, in order to be able to completely reconstruct the genomic data including the read pair association. With respect to the data classification previously introduced in the present document, the pair and rlen layers would be encoded in AUs of type 1.
The matching functions applied with respect to access units of type 1 to classify the content of AUs of types 2, 3 and 4 can provide results such as:
Access units of type 0 are ordered (e.g. numbered), but they do not need to be stored and/or transmitted in an ordered manner (technical advantage: parallel processing/parallel streaming, multiplexing)
Access units of type 1, 2, 3 and 4 do not need to be ordered and do not need to be stored and/or transmitted in an ordered manner (technical advantage: parallel processing/parallel streaming).
Technical Effects
The technical effect of structuring genomic information in access units as described here is that the genomic data:
1. can be selectively queried in order to access:
2. can be incrementally updated with new data that can be available when:
3. can be efficiently transcoded to a new data format in case of
With respect to prior art solutions such as SAM/BAM, the aforementioned technical features address the issue that data filtering can only happen at the application level, after the entire data has been retrieved and decompressed from the encoded format.
Hereafter follow examples of application scenarios where the access unit structure becomes instrumental for a technological advantage.
Selective Access
In particular the disclosed data structure based on Access Units of different types enables to
A further technical advantage is that querying the data is much more efficient in terms of data accessibility and execution speed, because it can be based on accessing and decoding only selected “categories”, specific regions of longer genomic sequences, and only specific layers of access units of type 1, 2, 3, 4 that match the criteria of the applied queries, and any combination thereof.
The organization of access units of type 1, 2, 3, 4 into layers allows for the efficient extraction of nucleotide sequences
Incremental Update
The access units of type 5 and 6 allow for the easy insertion of annotations without the need to depacketize/decode/decompress the whole file, thereby adding to the efficient handling of the file, which is a limitation of prior art approaches. Existing compression solutions may have to access and process a large amount of compressed data before the desired genomic data can be accessed. This causes inefficient RAM bandwidth utilization and more power consumption, also in hardware implementations. Power consumption and memory access issues may be alleviated by using the approach based on Access Units described here.
The data indexing mechanism described in the Master Index Table (see
Insertion of Additional Data
New genomic information can be periodically added to existing genomic data for several reasons. For example when:
In the above mentioned situations, structuring data using the Access Units described here and the data structure described in the file format section enables the incremental integration of the newly generated data without the need to re-encode the existing data. The incremental update process can be implemented as follows:
This mechanism is illustrated in
In the specific use case of streaming genomic data and data sets in compressed form, the incremental update of a pre-existing data set may be useful when analysing data as soon as they are generated by a sequencing machine and before the actual sequencing is completed. An encoding engine (compressor) can assemble several AUs in parallel by “clustering” sequence reads that map on the same region of the selected reference sequence. Once the first AU contains a number of reads above a pre-configured threshold/parameter, the AU is ready to be sent to the analysis application. Together with the newly encoded Access Unit, the encoding engine (the compressor) shall make sure that all Access Units the new AU depends on have already been sent to the receiving end or are sent together with it. For example, an AU of type 3 will require the appropriate AUs of type 0 and type 1 to be present at the receiving end in order to be properly decoded.
By means of the described mechanism, a receiving variant calling application would be able to start calling variants on the AU received before the sequencing process has been completed at the transmitting side. A schematic of this process is depicted in
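A minimal sketch of this threshold-driven, dependency-aware emission of Access Units is given below. The StreamingEncoder class, the threshold value and the region keys are illustrative assumptions; the rule that dependent AUs (e.g. type 0 and type 1 for a type 3 AU) must be delivered before, or together with, the new AU follows the description above.

```python
# Sketch of the streaming scenario described above: reads are clustered per
# reference region into Access Units under construction, and an AU is emitted
# as soon as its read count reaches a configured threshold, after the AUs it
# depends on. All names are illustrative.

from collections import defaultdict

AU_THRESHOLD = 3   # pre-configured reads-per-AU parameter (small for illustration)

class StreamingEncoder:
    def __init__(self, send):
        self.clusters = defaultdict(list)   # reference region -> reads mapped there
        self.sent = set()                   # dependency AUs already delivered
        self.send = send                    # callback delivering data downstream

    def add_read(self, region, read, dependencies=()):
        self.clusters[region].append(read)
        if len(self.clusters[region]) >= AU_THRESHOLD:
            # Dependencies (e.g. the type 0 / type 1 AUs) must reach the
            # receiver before, or together with, the new Access Unit.
            for dep in dependencies:
                if dep not in self.sent:
                    self.send(dep)
                    self.sent.add(dep)
            self.send(('AU', region, self.clusters.pop(region)))

encoder = StreamingEncoder(send=print)
for i in range(4):
    encoder.add_read('ref1:0-99999', f'read{i}',
                     dependencies=('AU type 0', 'AU type 1'))
```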
New Analysis of Results.
During the genome processing life cycle several iterations of genome analysis can be applied on the same data (e.g. different variant calling using different processing algorithm). The use of AUs as defined in this document and the data structure described in the file format section of this document enable incremental update of existing compressed data with the results of new analysis.
For example, new analysis performed on existing compressed data can produce new data in these cases:
The use cases described above and depicted in
Transcoding
Compressed genomic data can require transcoding, for example, in the following situations:
When genomic data is mapped on an existing public reference genome, the publication of a new version of said reference sequence, or the desire to map the data using a different processing algorithm, today requires a process of re-mapping. When remapping compressed data using prior art file formats such as SAM or CRAM, the entire compressed data has to be decompressed into its “raw” form to be mapped again with reference to the newly available reference sequence or using a different mapping algorithm. This is true even if the newly published reference is only slightly different from the previous one or the different mapping algorithm used produces a mapping that is very close (or identical) to the previous mapping.
The advantage of transcoding genomic data structured using Access Units described here is that:
Moreover, prior art compression solutions may have to access and process a large amount of compressed data before the desired genomic data can be accessed. This causes inefficient RAM bandwidth utilization and more power consumption, also in hardware implementations. Power consumption and memory access issues may be alleviated by using the approach based on Access Units described here.
A further advantage of the adoption of the genomic access units described here is the facilitation of parallel processing and the suitability for hardware implementations. Current solutions such as SAM/BAM and CRAM are conceived for single-threaded software implementations.
Selective Encryption
The approach based on Access Units organized in several types and layers as described in this document enables the implementation of content protection mechanisms otherwise not possible with state of the art monolithic solutions.
A person skilled in the art knows that the majority of the genomic information related to an organism's genetic profile resides in the differences (variants) with respect to a known sequence (e.g. a reference genome or a population of genomes). An individual genetic profile to be protected from unauthorized access will therefore be encoded in Access Units of type 3 and 4 as described in this document. The implementation of controlled access to the most sensitive genomic information produced by a sequencing and analysis process can therefore be realized by encrypting only the payload of AUs of type 3 and 4 (see
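By way of illustration, the sketch below encrypts only the payloads of Access Units of type 3 and 4, leaving the other AU types directly accessible. Fernet is used merely as one concrete cipher choice, and the tuple representation of an AU is an assumption of this example.

```python
# Sketch of the selective encryption enabled by the AU structure: only the
# payloads of Access Units of type 3 and 4 (the variant-carrying classes) are
# encrypted, while headers and the other AU types stay directly accessible.
# Fernet is one possible cipher choice; the AU representation is assumed.

from cryptography.fernet import Fernet

PROTECTED_AU_TYPES = {3, 4}

def protect_access_units(access_units, key):
    cipher = Fernet(key)
    protected = []
    for au_type, header, payload in access_units:
        if au_type in PROTECTED_AU_TYPES:
            payload = cipher.encrypt(payload)   # encrypt the payload only
        protected.append((au_type, header, payload))
    return protected

key = Fernet.generate_key()
aus = [(1, b'hdr1', b'matching reads'), (3, b'hdr3', b'variant data')]
for au_type, header, payload in protect_access_units(aus, key):
    print(au_type, header, payload[:16])
```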
Transport of Genomic Access Units
Genomic Data Multiplex
Genomic Access Units can be transported over a communication network within a Genomic Data Multiplex. A Genomic Data Multiplex is defined as a sequence of packetized genomic data and metadata represented according to the data classification disclosed as part of this invention, transmitted in network environments where errors, such as packet losses, may occur.
The Genomic Data Multiplex is conceived to ease and render more efficient the transport of genomic coded data over different environments (typically network environments) and has the following advantages not present in state of the art solutions:
An example of genomic data multiplexing is shown in
Genomic Dataset
In the context of the present invention a Genomic Dataset is defined as a structured set of Genomic Data including, for example, genomic data of a living organism, one or more sequences and metadata generated by several steps of genomic data processing, or the result of the genomic sequencing of a living organism. One Genomic Data Multiplex may include multiple Genomic Datasets (as in a multi-channel scenario) where each dataset refers to a different organism. The multiplexing mechanism of the several datasets into a single Genomic Data Multiplex is governed by information contained in data structures called Genomic Dataset List (GDL) and Genomic Dataset Mapping Table (GDMT).
Genomic Dataset List
A Genomic Dataset List (GDL) is defined as a data structure listing all Genomic Datasets available in a Genomic Data Multiplex. Each of the listed Genomic Datasets is identified by a unique value called Genomic Dataset ID (GID).
Each Genomic Dataset listed in the GDL is associated with:
The GDL is sent as the payload of a single Transport Packet at the beginning of a Genomic Data Stream transmission; it can then be periodically re-transmitted in order to enable random access to the Stream.
The syntax of the GDL data structure is provided in the table below with an indication of the data type associated to each syntax element.
The syntax elements composing the GDL described above have the following meaning and function.
Genomic Dataset Mapping Table
The Genomic Dataset Mapping Table (GDMT) is produced and transmitted at the beginning of a streaming process (and possibly periodically re-transmitted, updated or identical, in order to enable the update of correspondence points and the relevant dependencies in the streamed data). The GDMT is carried by a single Packet following the Genomic Dataset List and lists the SIDs identifying the Genomic Data Streams composing one Genomic Dataset. The GDMT is the complete collection of all identifiers of the Genomic Data Streams (e.g. the genomic sequence, reference genome, metadata, etc.) composing one Genomic Dataset carried by a Genomic Multiplex. A Genomic Dataset Mapping Table is instrumental in enabling random access to genomic sequences by providing the identifier of the stream of genomic data associated with each genomic dataset.
The syntax of the GDMT data structure is provided in the table below with an indication of the data type associated to each syntax element.
The syntax elements composing the GDMT described above have the following meaning and function.
extension_fields are optional descriptors that might be used to further describe either a Genomic Dataset or one Genomic Dataset component.
Reference ID Mapping Table
The Reference ID Mapping Table (RIDMT) is produced and transmitted at the beginning of a streaming process. The RIDMT is carried by a single Packet following the Genomic Dataset List. The RIDMT specifies a mapping between the numeric identifiers of reference sequences (REFID) contained in the Block header of an access unit and the (typically literal) reference identifiers contained in the main header specified in Table 1.
The RIDMT can be periodically re-transmitted in order to:
The syntax of the RIDMT data structure is provided in the table below with an indication of the data type associated to each syntax element.
The syntax elements composing the RIDMT described above have the following meaning and function.
Genomic Data Stream
A Genomic Data Multiplex contains one or several Genomic Data Streams where each stream can transport
A Genomic Data Stream containing genomic data is essentially a packetized version of a Genomic Data Layer where each packet is prepended with a header describing the packet content and how it is related to other elements of the Multiplex.
The Genomic Data Stream format described in this document and the File Format defined in this invention are mutually convertible. Whereas the file format can be reconstructed in full only after all data have been received, in the case of streaming a decoding tool can reconstruct, access and start processing the partial data at any time.
A Genomic Data Stream is composed of several Genomic Data Blocks, each containing one or more Genomic Data Packets. Genomic Data Blocks (GDBs) are containers of genomic information composing one genomic AU. A GDB can be split into several Genomic Data Packets, according to the communication channel requirements. Genomic access units are composed of one or more Genomic Data Blocks belonging to different Genomic Data Streams.
Genomic Data Packets (GDPs) are transmission units composing one GDB. The packet size is typically set according to the communication channel requirements.
Genomic Data Blocks are composed of a header, a payload of compressed data and padding information. The table below provides an example of implementation of a GDB header with a description of each field and a typical data type.
The use of AUID, POS and BS enables the decoder to reconstruct the data indexing mechanisms referred to as Master Index Table (MIT) and Local Index Table (LIT) in this invention. In a data streaming scenario, the use of AUID and BS enables the receiving end to dynamically re-create a LIT locally, without the need to send extra data. The use of AUID, BS and POS enables the receiving end to recreate a MIT locally without the need to send additional data. This has the technical advantage to
A Genomic Data Block can be split into one or more Genomic Data Packets, depending on network layer constraints such as maximum packet size, packet loss rate, etc. A Genomic Data Packet is composed of a header and a payload of encoded or encrypted genomic data as described in the table below.
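The following sketch illustrates how a GDB header carrying the AUID, POS and BS fields could be serialized and parsed; the field widths and their ordering are assumptions of this example, not the normative layout of the table above.

```python
# Sketch of a Genomic Data Block header carrying the AUID, POS and BS fields
# discussed above, serialized with a fixed little-endian layout. The exact
# field widths and ordering are assumptions of this example; the point is
# that AUID/BS/POS alone let a receiver rebuild LIT/MIT indexes locally.

import struct

GDB_HEADER_FMT = '<IQI'   # AUID: uint32, POS: uint64, BS (block size): uint32

def pack_gdb(auid, pos, payload):
    header = struct.pack(GDB_HEADER_FMT, auid, pos, len(payload))
    return header + payload

def unpack_gdb(blob):
    hdr_size = struct.calcsize(GDB_HEADER_FMT)
    auid, pos, bs = struct.unpack(GDB_HEADER_FMT, blob[:hdr_size])
    return auid, pos, blob[hdr_size:hdr_size + bs]

blob = pack_gdb(auid=7, pos=150000, payload=b'compressed-layer-data')
print(unpack_gdb(blob))   # (7, 150000, b'compressed-layer-data')
```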
The Genomic Multiplex can be properly decoded only when at least one Genomic Dataset List, one Genomic Dataset Mapping Table and one Reference ID Mapping Table have been received, making it possible to map every packet to a specific Genomic Dataset component.
Multiplex Encoding Process
Filing Document: PCT/EP2016/074311
Filing Date: 10/11/2016
Country: WO
Kind: 00