The present invention relates to the field of sequence similarity searching. In particular, the present invention relates to the field of searching large databases of protein biological sequences for strings that are similar to a query sequence.
Sequence analysis is a commonly used tool in computational biology to help study the evolutionary relationship between two sequences, by attempting to detect patterns of conservation and divergence. Sequence analysis measures the similarity of two sequences by performing inexact matching, using biologically meaningful mutation probabilities. As used herein, the term “sequence” refers to an ordered list of items, wherein each item is represented by a plurality of adjacent bit values. The items can be symbols from a finite symbol alphabet. In computational biology, the symbols can be DNA bases, protein residues, etc. As an example, each symbol that represents an amino acid may be represented by 5 adjacent bit values. A high-scoring alignment of the two sequences matches as many identical residues as possible while keeping differences to a minimum, thus recreating a hypothesized chain of mutational events that separates the two sequences.
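As a loose illustration of this bit-level representation (a minimal sketch; the particular residue-to-code assignment shown is hypothetical and not mandated by the invention), twenty amino-acid symbols can each be packed into a 5-bit value:

```python
# Minimal sketch: packing amino-acid symbols into 5-bit codes.
# The particular symbol-to-code assignment here is illustrative only.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"                 # 20-residue protein alphabet
CODE = {aa: i for i, aa in enumerate(AMINO_ACIDS)}   # each code fits in 5 bits (0..19 < 32)

def pack_sequence(seq):
    """Pack a residue string into an integer, 5 bits per residue."""
    bits = 0
    for aa in seq:
        bits = (bits << 5) | CODE[aa]
    return bits

print(bin(pack_sequence("ACD")))   # '0b100010': codes 0, 1, 2 in adjacent 5-bit fields
```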
Biologists use high-scoring alignments as evidence in deducing homology, i.e., that the two sequences share a common ancestor. Homology between sequences implies a possible similarity in function or structure, and information known for one sequence can be applied to the other. Sequence analysis helps to quickly understand an unidentified sequence using existing information. Considerable effort has been spent in collecting and organizing information on existing sequences. An unknown DNA or protein sequence, termed the query sequence, can be compared to a database of annotated sequences such as GenBank or Swiss-Prot to detect homologs.
Sequence databases continue to grow exponentially as entire genomes of organisms are sequenced, making sequence analysis a computationally demanding task. For example, since its release in 1982, the GenBank DNA database has doubled in size approximately every 18 months. The International Nucleotide Sequence Databases, comprising DNA and RNA sequences from GenBank, the European Molecular Biology Laboratory's European Bioinformatics Institute (EMBL-Bank), and the DNA Data Bank of Japan, recently announced a significant milestone in archiving 100 gigabases of sequence data. The Swiss-Prot protein database has experienced corresponding growth as newly sequenced genomic DNA is translated into proteins. Existing sequence analysis tools are fast becoming outdated in the post-genomic era.
The most widely used software for efficiently comparing biosequences to a database is known as BLAST (the Basic Local Alignment Search Tool). BLAST compares a query sequence to a database sequence to find sequences in the database that exactly match the query sequence (or a subportion thereof) or differ from the query sequence (or a subportion thereof) by a small number of “edits” (which may be single-character insertions, deletions or substitutions). Because direct measurement of edit distance between sequences is computationally expensive, BLAST uses a variety of heuristics to identify small portions of a large database that are worth comparing carefully to the query sequence.
In an effort to meet a need in the art for BLAST acceleration, particularly BLASTP acceleration, the inventors herein disclose the following.
According to one aspect of a preferred embodiment of the present invention, the inventors disclose a BLAST design wherein all three stages of BLAST are implemented in hardware as a data processing pipeline. Preferably, this pipeline implements three stages of BLASTP, wherein the first stage comprises a seed generation stage, the second stage comprises an ungapped extension analysis stage, and wherein the third stage comprises a gapped extension analysis stage. However, it should be noted that only a portion of the gapped extension stage may be implemented in hardware, such as a prefilter portion of the gapped extension stage as described herein. It is also preferred that the hardware logic device (or devices) on which the pipeline is deployed be a reconfigurable logic device (or devices). A preferred example of such a reconfigurable logic device is a field programmable gate array (FPGA).
According to another aspect of a preferred embodiment of the present invention, the inventors herein disclose a design for deploying the seed generation stage of BLAST, particularly BLASTP, in hardware (preferably in reconfigurable logic such as an FPGA). Two components of the seed generation stage comprise a word matching module and a hit filtering module.
As one aspect of this design for the word matching module of the seed generation stage, disclosed herein is a hit generator that uses a lookup table to find hits between a plurality of database w-mers and a plurality of query w-mers. Preferably, this lookup table includes addresses corresponding to all possible w-mers that may be present in the database sequence. Stored at each address is preferably a position identifier for each query w-mer that is deemed a match to a database w-mer whose residues are the same as those of the lookup table address. A position identifier in the lookup table preferably identifies the position in the query sequence for the “matching” query w-mer.
Given that a query w-mer may (and likely will) exist at multiple positions within the query sequence, multiple position identifiers may (and likely will) map to the same lookup table address. To accommodate situations where the number of position identifiers for a given address exceeds the storage space available for that address (e.g., 32 bits), the lookup table preferably comprises two subtables—a primary table and a duplicate table. If the storage space for addresses in the lookup table corresponds to a maximum of Z position identifiers for each address, the primary table will store position identifiers for matching query w-mers when the number of such position identifiers is less than or equal to Z. If the number of such position identifiers exceeds Z, then the duplicate table will be used to store the position identifiers, and the address of the primary table corresponding to that matching query w-mer will be populated with data that identifies where in the duplicate table all of the pertinent position identifiers can be found.
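The split between the two subtables can be illustrated with a simple software model (a sketch under the assumption Z = 3; the dictionary-based layout and names are illustrative, not the hardware memory organization):

```python
# Simplified model of the two-subtable layout (assumed Z = 3).
Z = 3  # maximum position identifiers stored directly in a primary-table entry

def build_tables(positions_by_wmer):
    """positions_by_wmer maps a w-mer key to the list of query positions at which it matches."""
    primary, duplicate = {}, []
    for key, positions in positions_by_wmer.items():
        if len(positions) <= Z:
            primary[key] = ("direct", positions)
        else:
            # Overflow: the primary entry points into the duplicate table instead.
            primary[key] = ("duplicate", len(duplicate), len(positions))
            duplicate.extend(positions)
    return primary, duplicate

primary, duplicate = build_tables({0x1234: [5, 90], 0x2AB0: [7, 41, 300, 911]})
print(primary[0x2AB0])   # ('duplicate', 0, 4): read 4 positions starting at duplicate[0]
```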
In one embodiment, this lookup table is stored in memory that is off-chip from the reconfigurable logic device. Thus, accessing the lookup table to find hits is a potential bottleneck source for the pipelined processing of the seed generation stage. Therefore, it is desirable to minimize the need to perform multiple lookups in the lookup table when retrieving the position identifiers corresponding to hits between the database w-mers and the query w-mers, particularly lookups in the duplicate table. As one solution to this problem, the inventors herein disclose a preferred embodiment wherein the position identifiers are modular delta encoded into the lookup table addresses. Consider an example where the query sequence is of residue length 2048 (or 2^11). If the w-mer length, w, were to be 3, this means that the number of query positions (q_i) for the query w-mers would be 2046 (or q = 1:2046). Thus, to represent q without encoding, 11 bits would be needed. Furthermore, in such a situation, each lookup table address would need at least Z*11 bits (plus one additional bit for flagging whether reference to the duplicate table is needed) of space to store the limit of Z position identifiers. If Z were equal to 3, this translates to a need for 34 bits. However, most memory devices such as SRAM are 32 bits or 64 bits wide. If a practitioner of the present invention were to use a 32 bit wide SRAM device to store the lookup table, there would not be sufficient room in the SRAM addresses for storing Z position identifiers. However, by modular delta encoding each position identifier, this aspect of the preferred embodiment of the present invention allows for Z position identifiers to be stored in a single address of the lookup table. This efficient storage technique enhances the throughput of the seed generation pipeline because fewer lookups into the duplicate table will need to be performed. The modular delta encoding of position identifiers can be performed in software as part of a query pre-processing operation, with the results of the modular delta encoding stored in the SRAM at compile time.
As another aspect of the preferred embodiment, optimal base selection can also be used to reduce the memory capacity needed to implement the lookup table. Continuing with the example above (where the query sequence length is 2048 and the w-mer length w is 3), it should be noted that the protein residues of the protein biosequence are preferably represented by a 20 residue alphabet. Thus, to represent a given residue, the number of bits needed would be 5 (wherein 2^5 = 32, which provides sufficient granularity for representing a 20 residue alphabet). Without optimal base selection, the number of bit values needed to represent every possible combination of residues in the w-mers would be 2^(5w) (or 32,768 when w equals 3), wherein these bit values would serve as the addresses of the lookup table. However, given the 20 residue alphabet, only 20^w (or 8,000 when w equals 3) of these addresses would specify a valid w-mer. To solve this potential problem of wasted memory space, the inventors herein disclose an optimal base selection technique based on polynomial evaluation techniques for restricting the lookup table addresses to only valid w-mers. Thus, with this aspect of the preferred design, the key used for lookups into the lookup table uses a base equal to the size of the alphabet of interest, thereby allowing an efficient use of memory resources.
According to another aspect of the preferred embodiment, disclosed herein is a hit filtering module for the seed generation stage. Given the high volume of hits produced as a result of lookups in the lookup table, and given the expectation that only a small percentage of these hits will correspond to a significant degree of alignment between the query sequence and the database sequence over a length greater than the w-mer length, it is desirable to filter out hits having a low probability of being part of a longer alignment. By filtering out such unpromising hits, the processing burden of the downstream ungapped extension stage and the gapped extension stage will be greatly reduced. As such, a hit filtering module is preferably employed in the seed generation stage to filter hits from the lookup table based at least in part upon whether a plurality of hits are determined to be sufficiently close to each other in the database sequence. In one embodiment, this hit filtering module comprises a two hit module that filters hits at least partially based upon whether two hits are determined to be sufficiently close to each other in the database sequence. To aid this determination, the two hit module preferably computes a diagonal index for each hit by calculating the difference between the query sequence position for the hit and the database sequence position for the hit. The two hit module can then decide to maintain a hit if another hit is found in the hit stream that shares the same diagonal index value and wherein the database sequence position for that another hit is within a pre-selected distance from the database sequence position of the hit under consideration.
The inventors herein further disclose that a plurality of hit filtering modules can be deployed in parallel within the seed generation stage on at least one hardware logic device (preferably at least one reconfigurable logic device such as at least one FPGA). When the hit filtering modules are replicated in the seed generation pipeline, a switch is also preferably deployed in the pipeline between the word matching module and the hit filtering modules to selectively route hits to one of the plurality of hit filtering modules. This load balancing allows the hit filtering modules to process the hit stream produced by the word matching module with minimal delays. Preferably, this switch is configured to selectively route each hit in the hit stream. With such selective routing, each hit filtering module is associated with at least one diagonal index value. The switch then routes a given hit to the hit filtering module that is associated with the diagonal index value for that hit. Preferably, this selective routing employs modulo division routing. With modulo division routing, the destination hit filtering module for a given hit is identified by computing the diagonal index for that hit, modulo the number of hit filtering modules. The result of this computation identifies the particular hit filtering module to which that hit should be routed. If the number of replicated hit filtering modules in the pipeline comprises b, wherein b = 2^t, then this modulo division routing can be implemented by having the switch check the least significant t bits of each hit's diagonal index value to determine the appropriate hit filtering module to which that hit should be routed. This switch can also be deployed on a hardware logic device, preferably a reconfigurable logic device such as an FPGA.
As yet another aspect of the seed generation stage, the inventors herein further disclose that throughput can be further enhanced by deploying a plurality of the word matching modules, or at least a plurality of the hit generators of the word matching module, in parallel within the pipeline on the hardware logic device (the hardware logic device preferably being a reconfigurable logic device such as an FPGA). A w-mer feeder upstream from the hit generators preferably selectively delivers the database w-mers of the database sequence to an appropriate one of the hit generators. With such a configuration, a plurality of the switches are also deployed in the pipeline, wherein each switch receives a hit stream from a different one of the parallel hit generators. Thus, in a preferred embodiment, if there are a plurality h of hit generators in the pipeline, then a plurality h of the above-described switches will also be deployed in the pipeline. To bridge the h switches to the b hit filtering modules, this design preferably also deploys a plurality b of buffered multiplexers. Each buffered multiplexer is connected at its output to one of the b hit filtering modules and preferably receives as inputs from each of the switches the modulo-routed hits that are destined for the downstream hit filtering module at its output. The buffered multiplexer then multiplexes the modulo-routed hits from multiple inputs to a single output stream. As disclosed herein, the buffered multiplexers are also preferably deployed in the pipeline in hardware logic, preferably reconfigurable logic such as that provided by an FPGA.
According to another aspect of a preferred embodiment of the present invention, the inventors disclose a design for deploying the ungapped extension stage of BLAST, particularly BLASTP, in hardware (preferably in reconfigurable logic such as an FPGA). The ungapped extension stage preferably passes only hits that qualify as high scoring pairs (HSPs), as determined over some extended window of the database sequence and query sequence near the hit, wherein the determination as to whether a hit qualifies as an HSP is based on a scoring matrix. From the scoring matrix, the ungapped extension stage can compute the similarity scores of nearby pairs of residues from the database and query sequences. Preferably, this scoring matrix comprises a BLOSUM-62 scoring matrix. Furthermore, the scoring matrix is preferably stored in a BRAM unit deployed on a hardware logic device (preferably a reconfigurable logic device such as an FPGA).
According to another aspect of a preferred embodiment of the present invention, the inventors herein disclose a design for deploying the gapped extension stage of BLAST, particularly BLASTP, in hardware (preferably in reconfigurable logic such as an FPGA). The gapped extension stage preferably processes high scoring pairs to identify which hits correspond to alignments of interest for reporting back to the user. The gapped extension stage of this design employs a banded Smith-Waterman algorithm to find which hits pass this test. This banded Smith-Waterman algorithm preferably uses an HSP as a seed to define a band in which the Smith-Waterman algorithm is run, wherein the band is at least partially specified by a bandwidth parameter defined at compile time.
These and other features and advantages of the present invention will be described hereinafter to those having ordinary skill in the art.
FIGS. 2(a) and (b) illustrate an exemplary system into which the BLASTP pipeline of
FIGS. 3(a)-(c) illustrate exemplary boards on which BLASTP pipeline functionality can be deployed;
FIGS. 6(d) and 6(e) depict an exemplary vector implementation of a prune-and-search algorithm that can be used for neighborhood generation;
FIGS. 12(a) and 12(b) depict a preferred algorithm for finding hits with the hit generator;
FIGS. 14(a) and (b) depict examples of functionality provided by a two hit module;
FIGS. 18(a) and (b) comparatively illustrate a load distribution of hits for two types of routing of hits to parallel hit filtering modules;
FIGS. 26(a) and (b) depict a comparison of the search space as between NCBI BLASTP employing X-drop and banded Smith-Waterman;
FIGS. 34, 35(a) and 35(b) depict exemplary process flows for creating a template to be loaded onto a hardware logic device.
As used herein, the term “stage” refers to a functional process or group of processes that transforms/converts/calculates a set of outputs from a set of inputs. It should be understood by those of ordinary skill in the art that any two or more “stages” could be combined and yet still be covered by this definition, as a stage may itself comprise a plurality of stages.
One observation in the BLASTP technique is the high likelihood of the presence of short aligned words (or w-mers) in an alignment. Seed generation stage 102 preferably comprises a word matching module 108 and a hit filtering module 110. The word matching module 108 is configured to find a plurality of hits between substrings (or words) of a query sequence (referred to as query w-mers) and substrings (or words) of a database sequence (referred to as database w-mers). The word matching module is preferably keyed with the query w-mers corresponding to the query sequence prior to the database sequence being streamed therethrough. As an input, the word matching module receives a bit stream comprising a database sequence and then operates to find hits between database w-mers produced from the database sequence and the query w-mers produced from the query sequence, as explained below in greater detail. The hit filtering module 110 receives a stream of hits from the word matching module 108 and decides whether the hits show sufficient likelihood of being part of a longer alignment between the database sequence and the query sequence. Those hits passing this test by the hit filtering module are passed along to the ungapped extension stage 104 as seeds. In a preferred embodiment, the hit filtering module is implemented as a two hit module, as explained below.
The ungapped extension stage 104 operates to process the seed stream received from the first stage 102 and determine which of those hits qualify as high scoring pairs (HSPs). An HSP is a pair of continuous subsequences of residues (identical or not, but without gaps at this stage) of equal length, at some location in the query sequence and the database sequence. Statistically significant HSPs are then passed into the gapped extension stage 106, where a Smith-Waterman-like dynamic programming algorithm is performed. An HSP that successfully passes through all three stages is reported to the user.
FIGS. 2(a) and (b) depict a preferred system 200 in which the BLASTP pipeline of
However, it should be noted that all three stages need not be fully deployed in hardware to achieve some degree of higher throughput for BLAST (particularly BLASTP) relative to conventional software-based BLAST processing. For example, a practitioner of the present invention may choose to implement only the seed generation stage in hardware. Similarly, a practitioner of the present invention may choose to implement only the ungapped extension stage in hardware (or even only a portion thereof in hardware, such as deploying a prefilter portion of the ungapped extension stage in hardware). Further still, a practitioner of the present invention may choose to implement only the gapped extension stage in hardware (or even only a portion thereof in hardware, such as deploying a prefilter portion of the gapped extension stage in hardware).
Board 250 comprises at least one hardware logic device. As used herein, “hardware logic device” refers to a logic device in which the organization of the logic is designed to specifically perform an algorithm and/or application of interest by means other than through the execution of software. For example, a general purpose processor (GPP) would not fall under the category of a hardware logic device because the instructions executed by the GPP to carry out an algorithm or application of interest are software instructions. As used herein, the term “general-purpose processor” refers to a hardware device that fetches instructions and executes those instructions (for example, an Intel Xeon processor or an AMD Opteron processor). Examples of hardware logic devices include Application Specific Integrated Circuits (ASICs) and reconfigurable logic devices, as more fully described below.
The hardware logic device(s) of board 250 is preferably a reconfigurable logic device 202 such as a field programmable gate array (FPGA). The term “reconfigurable logic” refers to any logic technology whose form and function can be significantly altered (i.e., reconfigured) in the field post-manufacture. This is to be contrasted with a GPP, whose function can change post-manufacture, but whose form is fixed at manufacture. This can also be contrasted with those hardware logic devices whose logic is not reconfigurable, in which case both the form and the function are fixed at manufacture (e.g., an ASIC).
In this system, board 250 is positioned to receive data that streams off either or both of a disk subsystem defined by disk controller 206 and data store(s) 204 (either directly or indirectly by way of system memory such as RAM 210). The board 250 is also positioned to receive data that streams in from a network data source/destination 242 (via network interface 240). Preferably, data streams into the reconfigurable logic device 202 by way of system bus 212, although other design architectures are possible (see
The data store(s) 204 can be any data storage device/system, but is preferably some form of a mass storage medium. For example, the data store(s) 204 can be a magnetic storage device such as an array of Seagate disks. However, it should be noted that other types of storage media are suitable for use in the practice of the invention. For example, the data store could also be one or more remote data storage devices that are accessed over a network such as the Internet or some local area network (LAN). Another source/destination for data streaming to or from the reconfigurable logic device 202 is network 242 by way of network interface 240, as described above.
The computer system defined by main processor 208 and RAM 210 is preferably any commodity computer system as would be understood by those having ordinary skill in the art. For example, the computer system may be an Intel Xeon system or an AMD Opteron system.
The reconfigurable logic device 202 has firmware modules deployed thereon that define its functionality. The firmware socket module 220 handles the data movement requirements (both command data and target data) into and out of the reconfigurable logic device, thereby providing a consistent application interface to the firmware application module (FAM) chain 230 that is also deployed on the reconfigurable logic device. The FAMs 230i of the FAM chain 230 are configured to perform specified data processing operations on any data that streams through the chain 230 from the firmware socket module 220. Preferred examples of FAMs that can be deployed on reconfigurable logic in accordance with a preferred embodiment of the present invention are described below. The term “firmware” will refer to data processing functionality that is deployed on reconfigurable logic. The term “software” will refer to data processing functionality that is deployed on a GPP (such as processor 208).
The specific data processing operation that is performed by a FAM is controlled/parameterized by the command data that FAM receives from the firmware socket module 220. This command data can be FAM-specific, and upon receipt of the command, the FAM will arrange itself to carry out the data processing operation controlled by the received command. For example, within a FAM that is configured to perform sequence alignment between a database sequence and a first query sequence, the FAM's modules can be parameterized to key the various FAMs to the first query sequence. If another alignment search is requested between the database sequence and a different query sequence, the FAMs can be readily re-arranged to perform the alignment for a different query sequence by sending appropriate control instructions to the FAMs to re-key them for the different query sequence.
Once a FAM has been arranged to perform the data processing operation specified by a received command, that FAM is ready to carry out its specified data processing operation on the data stream that it receives from the firmware socket module. Thus, a FAM can be arranged through an appropriate command to process a specified stream of data in a specified manner. Once the FAM has completed its data processing operation, another command can be sent to that FAM that will cause the FAM to re-arrange itself to alter the nature of the data processing operation performed thereby, as explained above. Not only will the FAM operate at hardware speeds (thereby providing a high throughput of target data through the FAM), but the FAMs can also be flexibly reprogrammed to change the parameters of their data processing operations.
The FAM chain 230 preferably comprises a plurality of firmware application modules (FAMs) 230a, 230b, . . . that are arranged in a pipelined sequence. As used herein, “pipeline”, “pipelined sequence”, or “chain” refers to an arrangement of FAMs wherein the output of one FAM is connected to the input of the next FAM in the sequence. This pipelining arrangement allows each FAM to independently operate on any data it receives during a given clock cycle and then pass its output to the next downstream FAM in the sequence during another clock cycle.
A communication path 232 connects the firmware socket module 220 with the input of the first one of the pipelined FAMs 230a. The input of the first FAM 230a serves as the entry point into the FAM chain 230. A communication path 234 connects the output of the final one of the pipelined FAMs 230m with the firmware socket module 220. The output of the final FAM 230m serves as the exit point from the FAM chain 230. Both communication path 232 and communication path 234 are preferably multi-bit paths.
It is worth noting that while a single FPGA 202 is shown on the printed circuit boards of FIGS. 3(a) and (b), it should be understood that multiple FPGAs can be supported by either including more than one FPGA on the printed circuit board 250 or by installing more than one printed circuit board 250 in the computer system.
Additional details regarding the preferred system 200, including FAM chain 230 and firmware socket module 220, for deployment of the BLASTP pipeline are found in the following patent applications: U.S. patent application Ser. No. 09/545,472 (filed Apr. 7, 2000, and entitled “Associative Database Scanning and Information Retrieval”, now U.S. Pat. No. 6,711,558), U.S. patent application Ser. No. 10/153,151 (filed May 21, 2002, and entitled “Associative Database Scanning and Information Retrieval using FPGA Devices”, now U.S. Pat. No. 7,139,743), published PCT applications WO 05/048134 and WO 05/026925 (both filed May 21, 2004, and entitled “Intelligent Data Storage and Processing Using FPGA Devices”), U.S. patent application Ser. No. 11/359,285 (filed Feb. 22, 2006, entitled “Method and Apparatus for Performing Biosequence Similarity Searching” and published as U.S. Patent Application Publication 2007/0067108), U.S. patent application Ser. No. 11/293,619 (filed Dec. 2, 2005, and entitled “Method and Device for High Performance Regular Expression Pattern Matching” and published as U.S. Patent Application Publication 2007/0130140), U.S. patent application Ser. No. 11/339,892 (filed Jan. 26, 2006, and entitled “Firmware Socket Module for FPGA-Based Pipeline Processing” and published as U.S. Patent Application Publication 2007/0174841), and U.S. patent application Ser. No. 11/381,214 (filed May 2, 2006, and entitled “Method and Apparatus for Approximate Pattern Matching”), the entire disclosures of each of which are incorporated herein by reference.
1.A. Word Matching Module 108
The w-mer feeder 502 preferably exists as a FAM 230 and receives a database stream from the data store 204 (by way of the firmware socket 220). The w-mer feeder 502 then constructs fixed length words to be scanned against a query neighborhood. Preferably, twelve 5-bit database residues are accepted in each clock cycle by the w-mer control finite state machine unit 506. The output of this stage 502 is a database w-mer and its position in the database sequence. The word length w of the w-mers is defined by the user at compile time.
The w-mer creator unit 508 is a structural module that generates the database w-mer for each database position.
W-mer creator unit 508 can readily be designed to enable various word lengths, masks (discontiguous residue position taps), or even multiple database w-mers based on different masks. Another function of the module 508 is to flag invalid database w-mers. While NCBI BLASTP supports an alphabet size of 24 (20 amino acids, 2 ambiguity characters and 2 control characters), a preferred embodiment of the present invention restricts this alphabet to only the 20 amino acids. Database w-mers that contain residues not representing the twenty amino acids are flagged as invalid and discarded by the seed generation hardware. This stage is also capable of servicing multiple consumers in a single clock cycle. Up to M consecutive database w-mers can be routed to downstream sinks based on independent read signals. This functionality is helpful to support multiple parallel hit generator modules, as described below. Care can also be taken to eliminate dead cycles; the w-mer feeder 502 is capable of satisfying up to M requests in every clock cycle.
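A software analogue of this flagging behavior (a sketch assuming a word length of w = 3 and a 20-letter valid alphabet; the generator shown is illustrative only) is:

```python
# Sketch of database w-mer generation with invalid-w-mer flagging (assumed w = 3).
VALID = set("ACDEFGHIKLMNPQRSTVWY")   # the 20 amino acids; ambiguity/control codes are rejected
W = 3

def database_wmers(db_seq):
    """Yield (database position, w-mer) pairs, discarding w-mers containing invalid residues."""
    for d in range(len(db_seq) - W + 1):
        wmer = db_seq[d:d + W]
        if set(wmer) <= VALID:
            yield d, wmer              # valid w-mer forwarded downstream
        # else: flagged invalid and discarded by the seed generation logic

print(list(database_wmers("ACDEB")))   # [(0, 'ACD'), (1, 'CDE')]; 'DEB' is discarded
```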
The hit generator 504 produces hits from an input database w-mer by querying a lookup table stored in memory 514. In a preferred embodiment, this memory 514 is off-chip SRAM (such as memory 300 in
The hardware pipeline of the hit generator 504 preferably comprises a base conversion unit 510, a table lookup unit 512, and a hit compute module 516.
A direct memory lookup table 514 stores the position(s) in the query sequence to which every possible w-mer maps. The twenty amino acids are represented using 5 bits. A direct mapping of a w-mer to the lookup table requires a large lookup table with 2^(5w) entries. However, of these 2^(5w) entries, only 20^w specify a valid w-mer. Therefore, a change of base to an optimal base is preferably performed by the base conversion unit 510 using the formula below:
Key = 20^(w−1)·r_(w−1) + 20^(w−2)·r_(w−2) + … + 20·r_1 + r_0
where r_i is the i-th residue of the w-mer. For a fixed word length (which is set during compile time), this computation is easily realized in hardware, as shown in
The base conversion unit 510 of
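For concreteness, the change of base can also be modeled in software as polynomial evaluation over base 20 (a minimal sketch, assuming residue codes in the range 0-19; the function and variable names are illustrative only):

```python
# Sketch: optimal base selection, treating the w-mer as a base-20 number.
ALPHABET_SIZE = 20

def lookup_key(residues):
    """residues is a list of residue codes in 0..19, most-significant first.
    Returns Key = 20^(w-1)*r_(w-1) + ... + 20*r_1 + r_0 via Horner evaluation."""
    key = 0
    for r in residues:
        key = key * ALPHABET_SIZE + r
    return key

# With w = 3 the keys span 0 .. 20**3 - 1 = 7999, so an 8,000-entry table
# suffices instead of the 2**15 = 32,768 entries a direct 5-bit mapping would need.
print(lookup_key([1, 0, 2]))   # 1*400 + 0*20 + 2 = 402
```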
As noted above, the hit generator 504 identifies hits, and a hit is preferably identified by a (q, d) pair that corresponds to a pair of aligned w-mers (the pair being a query w-mer and a database w-mer) at query sequence offset q and database sequence offset d. Thus, q serves as a position identifier for identifying where in the query sequence a query w-mer is located that serves as a “hit” on a database w-mer. Likewise, d serves as a position identifier for locating where in the database sequence that database w-mer serving as the basis of the “hit” is located.
To aid this process, the neighborhood of a query sequence is generated by identifying all overlapping words of a fixed length, termed a “w-mer”. A w-mer in the neighborhood acts as an index to one or more positions in the query. Linear scanning of overlapping words in the database sequence, using a lookup table constructed from the neighborhood helps in quick identification of hits, as explained below.
Due to the high degree of conservation in DNA sequences, BLASTN word matches are simply pairs of exact matches in both sequences (with the default word length being 11). Thus, with BLASTN, building the neighborhood involves identifying all N−w+1 overlapping w-mers in a query sequence of length N. However, for protein sequences, amino acids readily mutate into other, functionally similar amino acids. Hence, BLASTP looks for shorter (typically of length w=3) non-identical pairs of substrings that have a high similarity score. Thus, with word matching in BLASTP, “hits” between database w-mers and query w-mers include not only hits between a database w-mer and its exactly matching query w-mer, but also any hits between a database w-mer and any of the query w-mers within the neighborhood of the exactly matching query w-mer. In BLASTP, the neighborhood N(w, T) is preferably generated by identifying all possible amino acid subsequences of size w that match each overlapping w-mer in the query sequence. All such subsequences that score at least T (called the neighborhood threshold) when compared to the query w-mer are added to the neighborhood.
Neighborhood generation is preferably performed by software as part of a query pre-processing operation (see
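A direct software formulation of the neighborhood N(w, T) described above, which enumerates every possible w-mer against each query w-mer, can be sketched as follows (an illustrative sketch; the score(a, b) callback standing in for the scoring matrix is an assumption):

```python
# Naive neighborhood generation sketch: score every possible w-mer against each query w-mer.
from itertools import product

def neighborhood(query, w, T, alphabet, score):
    """Return {(neighbor_wmer, query_position), ...} where the neighbor scores >= T
    against the query w-mer starting at that position. `score(a, b)` is assumed to
    return the pairwise substitution score of residues a and b."""
    result = set()
    for q in range(len(query) - w + 1):
        qwmer = query[q:q + w]
        for cand in product(alphabet, repeat=w):          # all |alphabet|**w candidates
            s = sum(score(a, b) for a, b in zip(cand, qwmer))
            if s >= T:
                result.add(("".join(cand), q))
    return result
```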
However, such a naïve algorithm can be both memory- and computationally-intensive, degrading exponentially with longer word lengths. As an alternative, a prune-and-search algorithm can be used to generate the neighborhood. Such a prune-and-search algorithm has the same worst-case bound as the naïve algorithm, but is believed to show practical improvements in speed. The prune-and-search algorithm divides the search space into a number of independent partitions, each of which is inspected recursively. At each step, it is possible to determine, without costly inspection of all w-mers in the partition, whether the partition contains at least one w-mer that must be added to the neighborhood. Partitions that cannot contribute any such w-mer are pruned from the search process. Another advantage of a prune-and-search algorithm is that it can be easily parallelized.
Given a query w-mer r, the alphabet Σ, and a scoring matrix δ, the neighborhood of the w-mer can be computed using the recurrence shown below, wherein the neighborhood N(w, T) of the query Q is the union of the individual neighborhoods of every query w-mer r ∈ Q.
G_r(x, w, T) is the set of all w-mers in N_r(w, T) having the prefix x, wherein x can be termed a partial w-mer. The base is G_r(x, w, T) where |x| = w−1 and the target is to compute G_r(ε, w, T). At each step of the recurrence, the prefix x is extended by one character a ∈ Σ. The pruning process is invoked at this stage. If it can be determined that no w-mers with a prefix xa exist in the neighborhood, all such w-mers are pruned; otherwise the partition is recursively inspected. The score of xa is also computed and stored in S_r(xa). The base case of the recurrence occurs when |xa| = w−1. At this point, it is possible to determine conclusively if the w-mer scores above the neighborhood threshold.
For the pruning step, during the extension of x by a, the highest score of any w-mer in N_r(w, T) with the prefix xa is determined. This score is computed as the sum of three parts: the score of x against the first |x| residues of r, the pairwise score of a against the character r_(|x|+1), and the highest score of some suffix string y against r_(|x|+2) … r_w with |xay| = w. The three score values are computed by constant-time table lookups into S_r, δ, and C_r respectively. C_r(i) holds the score of the highest scoring suffix y of some w-mer in N_r(w, T), where |y| = w − i. This can be easily computed in linear time using the scoring matrix.
A stack implementation of the computation of G_r(ε, w, T) is shown in
As partial w-mers are extended, a larger number of partitions are discarded. The fraction of the neighborhood discarded at each step depends on the scoring matrix δ and the threshold T. While in the worst case scenario the algorithm of
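One way to realize the prune-and-search idea in software is sketched below (an illustrative recursive model, not the stack implementation of the preferred embodiment; the score(a, b) callback stands in for the scoring matrix δ):

```python
# Prune-and-search sketch for one query w-mer r (illustrative only).
def neighborhood_prune(r, w, T, alphabet, score):
    """Neighborhood of a single query w-mer r (len(r) == w), pruning hopeless partitions."""
    # C[i] = best achievable score of any suffix aligned against r[i:], computed in linear time.
    C = [0] * (w + 1)
    for i in range(w - 1, -1, -1):
        C[i] = C[i + 1] + max(score(a, r[i]) for a in alphabet)

    out = []
    def extend(prefix, s):               # s = score of `prefix` against r[:len(prefix)]
        i = len(prefix)
        if i == w:
            if s >= T:
                out.append("".join(prefix))
            return
        for a in alphabet:
            s2 = s + score(a, r[i])
            if s2 + C[i + 1] >= T:       # prune: no completion of this partition can reach T
                extend(prefix + [a], s2)
    extend([], 0)
    return out
```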
As another alternative to the naïve algorithm, a vector implementation of the prune-and-search algorithm that employs Single Instruction Multiple Data (SIMD) technology available on a host CPU can be used to accelerate the neighborhood generation. SIMD instructions exploit data parallelism in algorithms by performing the same operation on multiple data values. The instruction set architectures of most modern GPPs are augmented with SIMD instructions that offer increasingly complex functionality. Existing extensions include SSE2 on x86 architectures and AltiVec on PowerPC cores, as is known in the art.
Sample SIMD instructions are illustrated in
Prune-and-search algorithms partition a search problem into a number of subinstances that are independent of each other. With the exemplary prune-and-search algorithm, the extensions of a partial w-mer by every character in the alphabet can be performed independently of each other. The resultant data parallelism can then be exploited by vectorizing the computation in the “for” loop of the algorithm of
FIGS. 6(d) and 6(e) illustrate a vector implementation of the prune-and-search algorithm. As in the sequential version, each partial w-mer is extended by every character in the alphabet. However, each iteration of the loop performs VECTOR_SIZE such simultaneous extensions. As previously noted, a sorted alphabet list is used for extension. The sequential add operation is replaced by the vector equivalent, Vector-Add. Lines 21-27 of
SSE2 extensions available on a host CPU can be used for implementing the algorithm of FIGS. 6(d) and 6(e). A vector size of 16 and signed 8-bit integer data values can also be used. The precision afforded by such an implementation is sufficient for use with typical parameters without overflow or underflow exceptions. Saturated signed arithmetic can be used to detect overflow/underflow and clamp the result to the largest/smallest value. The alphabet size can be increased to the nearest multiple of 16 by introducing dummy characters, and the scoring matrix can be extended accordingly.
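The data-parallel extension step can be modeled as follows (a loose sketch using NumPy to stand in for the SSE2 vector unit; lane width, data types, and saturation handling are simplified, and the parameter names are illustrative):

```python
# Vectorized sketch of the extension step: all alphabet characters scored at once.
import numpy as np

def extend_all(prefix_score, delta_column, best_suffix, T):
    """prefix_score: scalar score of the partial w-mer x.
    delta_column:  vector of pairwise scores delta[a, r_(|x|+1)] for every a in the alphabet.
    best_suffix:   scalar C_r(|x|+2), the best achievable score of any completing suffix.
    Returns a boolean mask of the alphabet characters whose partitions survive pruning."""
    upper_bounds = prefix_score + delta_column + best_suffix   # one Vector-Add per lane
    return upper_bounds >= T

mask = extend_all(7, np.array([4, -2, 1, -3], dtype=np.int8), 5, T=13)
print(mask)   # [ True False  True False]
```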
Table 1 below compares the neighborhood generation times of the three neighborhood generation algorithms discussed above, wherein the NCBI-BLAST algorithm represents the naïve algorithm. The run times were averaged over twenty runs on a 2048-residue query sequence. The benchmark machine was a 2.0 GHz AMD Opteron workstation with 6 GB of memory.
As can be seen from Table 1, the prune-and-search algorithm is approximately 5× faster than the naïve NCBI-BLAST algorithm for w=4. Furthermore, it can be seen that the performance of the naïve NCBI-BLAST algorithm degrades drastically with increasing word lengths. For example, at w=6, the prune-and-search algorithm is over 60× faster. It can also be seen that the vector implementation shows a speed-up of around 3× over the sequential prune-and-search method.
It should be noted that a tradeoff exists between speed and sensitivity when selecting the neighborhood parameters. Increasing the word length or the neighborhood threshold decreases the neighborhood size, and therefore reduces the computational costs of seed generation, since fewer hits are generated. However, this comes at the cost of decreased sensitivity. Fewer word matches are generated from the smaller neighborhood, reducing the probability of a hit in a biologically relevant alignment.
The neighborhood of a query w-mer is stored in a direct lookup table 514 indexed by w-mers (preferably indirectly indexed by the w-mers when optimal base selection is used to compute a lookup table index key as described in connection with the base conversion unit 510). A linear scan of the database sequence performs a lookup in the lookup table 514 for each overlapping database w-mer r at database offset d. The table lookup yields a linked list of query offsets q_1, q_2, …, q_n which correspond to hits (q_1, d), (q_2, d), …, (q_n, d). Hits generated from a table lookup may be further processed to generate seeds for the ungapped extension stage.
Thus, as indicated, the table lookup unit 512 generates hits for each database w-mer. The query neighborhood is stored in the lookup table 514 (embodied as off-chip SRAM in one embodiment). Preferably, the lookup table 514 comprises a primary table 906 and a duplicate table 908, as described below in connection with
With reference to
Lookups into the duplicate table 908 reduce the throughput of the word matching stage 108. It is highly desirable for such lookups to be kept to a minimum, such that most w-mer lookups are satisfied by a single probe into the primary table 906. It is expected that the word matching stage 108 will generate approximately two query positions per w-mer lookup when used with the default parameters. To decrease the number of SRAM probes for each w-mer, the 11-bit query positions are preferably packed three to each primary table entry. To achieve this packing in the 31 bits available in the 32-bit SRAM, it is preferred that a modular delta encoding scheme be employed. Modular delta encoding can be defined as representing a plurality of query positions by defining one query position as a base reference into the query sequence and then using a plurality of modulo offsets that, when combined with the base reference, define the remaining actual query positions. The conditions under which such modular delta encoding is particularly advantageous can be defined as:
G + (G−1)(n−1) ≤ W−1, and
G·n > W−1,
wherein W represents the bit width of the lookup table entries, G represents the number of bits needed to represent a full query position, and n represents the maximum number of query positions stored in a single lookup table entry.
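For the preferred parameters these conditions hold: with W = 32, G = 11, and n = 3, G + (G−1)(n−1) = 11 + 10·2 = 31 ≤ W−1 = 31, while G·n = 33 > 31, so three query positions fit in a single 32-bit entry only if the second and third positions are delta encoded.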
With modular delta encoding, a first query position (qp_0) 914 for a given w-mer is stored in the first 11 bits, followed by two unsigned 10-bit offset values 916 and 918 (qo_1 and qo_2). The three query positions for hits H_1, H_2 and H_3 (wherein H_i = (q_i, d_i)) can then be decoded as follows:
q_1 = qp_0
q_2 = (qp_0 + qo_1) mod 2048
q_3 = (qp_0 + qo_1 + qo_2) mod 2048
The result of each modulo addition for q_2 and q_3 will be an 11-bit query position. Thus, the pointers 914, 916 and 918 stored in the lookup table serve as position identifiers for identifying where in the query sequence a hit with the current database w-mer is found.
Preferably, the encoding of the query positions in the lookup table is performed during the pre-processing step on the host CPU using the algorithm shown in
qp_j − qp_i > 2^(G−1) and
qp_j > qp_i
The solution to this exception is to start the encoding by storing qp_j in the first G bits 914 of the table entry (wherein G is 11 bits in the preferred embodiment). For example, query positions 10, 90, and 2000 can be encoded as (2000, 58, 80). Secondly, if there are only two query positions, with a difference of exactly 1024, a dummy value of 2047 is introduced, after which the solution to the first case applies. For example, query positions 70 and 1094 are encoded as (1094, 953, 71). Query position 2047 is recognized as a special case and ignored in the hit compute module 516 (as shown in
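A software sketch of this encoding and decoding (assuming G = 11-bit positions, 10-bit offsets, and a query length of 2048; the rotation used to find a workable base and the padding of a two-position entry are illustrative generalizations of the cases described above) reproduces the worked examples:

```python
# Modular delta encode/decode sketch (G = 11-bit positions, 10-bit offsets, query length 2048).
QLEN = 2048          # 2**11
DUMMY = 2047         # special value ignored by the hit compute module

def encode(positions):
    """Encode up to three query positions as (base, offset1, offset2)."""
    p = sorted(positions)
    if len(p) == 2 and p[1] - p[0] == 1024:
        p.append(DUMMY)                                   # exact-1024 special case
    if len(p) == 2:
        p.append(p[1])                                    # pad with a zero offset (illustrative)
    # If an offset would exceed 10 bits, rotate so a later position becomes the base.
    for shift in range(3):
        base, a, b = p[shift], p[(shift + 1) % 3], p[(shift + 2) % 3]
        off1 = (a - base) % QLEN
        off2 = (b - a) % QLEN
        if off1 < 1024 and off2 < 1024:
            return base, off1, off2
    raise ValueError("positions cannot be packed in one entry")

def decode(base, off1, off2):
    q1 = base
    q2 = (base + off1) % QLEN
    q3 = (base + off1 + off2) % QLEN
    return q1, q2, q3

print(encode([10, 90, 2000]))   # (2000, 58, 80) -- matches the example above
print(decode(2000, 58, 80))     # (2000, 10, 90)
print(encode([70, 1094]))       # (1094, 953, 71) via the dummy value 2047
```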
As a result of the encoding scheme used, query positions may be retrieved out of order by the word matching module. This, however, is of no consequence to the downstream stages, since the hits remain sorted by database position.
Table 2 reveals the effect of the modular delta encoding scheme for the query sequence on the SRAM access pattern in the word matching stage. The table displays the percentage f_i of database w-mer lookups that are satisfied in a_i or fewer probes into the SRAM. The data is averaged for a neighborhood of N(4, 13), over BLASTP searches of twenty 2048-residue query sequences compiled from the Escherichia coli K-12 proteome, against the NR database. It should be noted that 82% of the w-mer lookups can be satisfied in a single probe when using the modular delta encoded lookup table (in which a single probe is capable of returning up to three query positions). The naïve scheme (in which a single probe is capable of returning only two query positions) would satisfy only 67% of lookups with a single probe, thus reducing the overall throughput.
Note that when the duplicate bit is set, the first probe returns the duplicate table address (and zero query positions). Table 2 also indicates that all fifteen query positions are retrieved in 6 SRAM accesses when the encoding scheme is used; this increases to 9 in the naïve scheme.
Thus, with reference to
Decoding the query positions in hardware is done in the hit compute module 516. The two stage pipeline 516 is depicted in
1.B. Hit Filtering Module 110
Another component in the seed generation pipeline is the hit filtering module 110. As noted above, only a subset of the hits found in the hit stream produced by the word matching module are likely to be significant. The BLASTN heuristic and the initial version of the BLASTP heuristic consider each hit in isolation. In such a one-hit approach, a single hit is considered sufficient evidence of the presence of an HSP and is used to trigger a seed for delivery to the ungapped extension stage. A neighborhood N(4, 17) may be used to yield sufficient hits to detect similarity between typical protein sequences. A large number of these seeds, however, are spurious and must be filtered by expensive seed extension, unless an alternative solution is implemented.
Thus, to reduce the likelihood of spurious hits being passed on to the more intensive ungapped extension stage of BLASTP processing, a hit filtering module 110 is preferably employed in the seed generation stage. To pass the hit filtering module 110, a hit must be determined to be sufficiently close to another hit in the database biosequence. As a preferred embodiment, the hit filtering module 110 may be implemented as a two hit module described hereinafter.
The two-hit refinement is based on the observation that HSPs of biological interest are typically much longer than a word. Hence, there is a high likelihood of generating multiple hits in a single HSP. In the two-hit method, hits generated by the word matching module are not passed directly to ungapped extension; instead they are recorded in memory that is representative of a diagonal array. The presence of two hits in close proximity on the same diagonal (noting that there is a unique diagonal associated with any HSP that does not include gaps) is the necessary condition to trigger a seed. Upon encountering a hit (q, d) in the word matching stage, its offset in the database sequence is recorded on the diagonal D=d−q. A seed is generated when a second non-overlapping hit (q′, d′) is detected on the same diagonal within a window length of A residues, i.e. d′−q′=d−q and d′−d<A. The reduced seed generation rate provided by this technique improves filtering efficiency, drastically reducing time spent in later stages.
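The two-hit check can be modeled in software as follows (a minimal sketch; the window length A, the diagonal-array size, and the non-overlap test d − prev ≥ w are assumptions chosen for illustration):

```python
# Two-hit filtering sketch: keep a hit only if a nearby earlier hit shares its diagonal.
A = 40                      # assumed window length, in residues
NUM_DIAGONALS = 4096        # assumed size of the diagonal array

last_db_pos = [None] * NUM_DIAGONALS   # most recent database offset seen on each diagonal

def two_hit(q, d, w=3):
    """Return True if hit (q, d) triggers a seed; otherwise record it and return False."""
    diag = (d - q) % NUM_DIAGONALS
    prev = last_db_pos[diag]
    seed = prev is not None and w <= d - prev < A   # non-overlapping and within the window
    last_db_pos[diag] = d
    return seed

hits = [(5, 17), (9, 30), (10, 22)]         # (query offset, database offset)
print([two_hit(q, d) for q, d in hits])     # [False, False, True]: (10, 22) shares diagonal 12 with (5, 17)
```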
In order to attain comparable sensitivity to the one-hit algorithm, a more permissive neighborhood of N(3, 11) can be used. Although this increases the number of hits generated by the word matching stage, only a fraction pass as seeds for ungapped extension. Since far less time is spent filtering hits than extending them, there is a significant savings in the computational cost.
The algorithm conceptually illustrated by
As described below, the two-hit module is preferably capable of handling hits that are received out of order (with respect to database position), without an appreciable loss in sensitivity or an appreciable increase in the workload of downstream stages. To address this “out of order” issue, the algorithm of
NCBI BLASTP employs a redundancy filter to discard seeds present in the vicinity of HSPs inspected in the ungapped extension stage. The furthest database position examined after extension is recorded in a structure similar to the diagonal array. In addition to passing the two-hit check, a hit must be non-overlapping with this region to be forwarded to the next stage. This feedback characteristic of the redundancy filter for BLASTP (wherein the redundancy filter requires feedback from the ungapped extension stage) makes its value in a hardware implementation questionable.
The inventors herein measured the effect of the lack of the NCBI BLASTP extension feedback on the seed generation rate of the first stage. Table 3 shows the increased seed generation rate for various query sizes and neighborhoods. The data of Table 3 suggests a modest increase in workload for ungapped extension, of less than a quarter of one percent. The reason for this minimal increase in workload is that the two-hit algorithm is already an excellent filter, approximately performing the role of the redundancy filter. Based on this data, the inventors conclude that feedback from stage 2 has little effect on system throughput and prefer to not include a redundancy filter in the BLASTP pipeline. However, it should be noted that a practitioner of the present invention may nevertheless choose to include such a redundancy filter.
As previously noted, the word matching module 108 can be expected to generate hits at the rate of approximately two per database sequence position for a neighborhood of N(4, 13). The two-hit module 110, with the capacity to process only a single hit per clock cycle, then becomes the bottleneck in the pipeline. Processing multiple hits per clock cycle for the two-hit module, however, poses a substantial challenge due to the physical constraints of the implementation. Concurrent access to the diagonal array is limited by the dual-ported block RAMs 1600 on the FPGA. Since one port is used to read a diagonal and the other to update it, no more than one hit can be processed in the two-hit module at a time. In order to address this issue, the hit filtering module (preferably embodied as a two-hit module) is preferably replicated in multiple parallel hit filtering modules to process hits simultaneously. Preferably, for load balancing purposes, hits are relatively evenly distributed among the copies of the hit filtering module.
A straightforward replication of the entire diagonal array would require that all copies of the diagonal array be kept coherent, leading to a multi-cycle update phase and a corresponding loss in throughput. Efforts to time-multiplex access to block RAMs (for example, quad-porting by running them at twice the clock speed of the two-hit logic) can be used, although such a technique is less than optimal and in some instances may be impractical because the two-hit logic already runs at a high clock speed.
The inventors herein note that the two-hit computation for a w-mer is performed on a single diagonal, and the assessment by the two hit module as to whether a hit is maintained is independent of the values of all other diagonals. Rather than replicating the entire diagonal array, the diagonals can instead be evenly divided among b two-hit modules. A hit (q_i, d_i) is processed by the j-th two-hit copy if D_i mod b = j − 1. This modulo division scheme also increases the probability of equal work distribution between the b copies.
While a banded division of diagonals to two hit module copies can be used (e.g., diagonals 1-4 are assigned to a first two hit module, diagonals 5-8 are assigned to a second two hit module, and so on), it should be noted that hits generated by the word matching phase tend to be clustered around a few high scoring residues. Hence, a banded division of the diagonal array into b bands may likely lead to an uneven partitioning of hits, as shown in FIGS. 18(a) and (b).
With a modulo division routing scheme, the routing of a hit to its appropriate two-hit module is also simplified. If b is a power of two, i.e. b = 2^t, the lower t bits of D_i act as the identifier for the appropriate two hit module to serve as the destination for hit H_i. If b is not a power of 2, the modulo division operation can be pre-computed for all possible D_i values and stored in on-chip lookup tables.
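A minimal software sketch of this routing decision follows (the function and parameter names are illustrative; when b is a power of two the modulo reduces to masking the low t bits of the diagonal index):

```python
# Routing a hit to one of b parallel two-hit modules by its diagonal index.
def route(q, d, b):
    """Return the index (0..b-1) of the two-hit module that should process hit (q, d)."""
    diag = d - q
    if b & (b - 1) == 0:              # b is a power of two: modulo is a t-bit mask
        return diag & (b - 1)         # low t bits of the diagonal index
    return diag % b                   # otherwise: precomputed table in hardware, plain modulo here

print(route(q=9, d=30, b=2))   # diagonal 21 -> module 1
print(route(q=5, d=17, b=4))   # diagonal 12 -> module 0
```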
A decoder 1902 for each hit examines the t low-order bits of the diagonal index (wherein t = 1 when b is 2, given that b = 2^t). The decoded signal is passed to a priority encoder 1904 at each two-hit module to select one of the three hits. In case of a collision, priority is given to the higher-ordered hit. Information on whether a hit has been routed is stored in a register 1906 and is used to deselect a hit that has already been sent to its two-hit module. This decision is made by examining whether the hit is valid, whether its destination two-hit unit is busy, and whether the hit has already been routed. The read signal is asserted once the entire triple has been routed. Each two-hit module can thus accept at least one available hit every clock cycle. With the word matching module 108 generating two hits on average per clock cycle, b = 2 two-hit modules are likely to be sufficient to eliminate the bottleneck from this phase. However, it should be noted that other values for b can be used in the practice of this aspect of the invention.
With downstream stages capable of handling the seed generation rate of the first stage 102, the throughput of the BLASTP pipeline 100 is thus limited by the word matching module 108, wherein the throughput of the word matching module 108 is constrained by the lookup into off-chip SRAM 514. One solution to speed up the pipeline 100 is to run multiple hit generation modules 504 in parallel, each accessing an independent off-chip SRAM resource 514 with its own copy of the lookup table. Adjacent database w-mers are distributed by the feeder stage 502 to each of h hit generation modules 504. The w-mer feeder 502 preferably employs a round robin scheme to distribute database w-mers among hit generators 504 that are available for that clock cycle. Each hit generator 504 preferably has its own independent backpressure signal for assertion when that hit generator is not ready to receive a database w-mer. However, it should be noted that distribution techniques other than round robin can be used to distribute database w-mers among the hit generators. Hits generated by each copy of the hit generator 504 are then destined for the two-hit modules 110. It should be noted that the number of two-hit modules should be increased to keep up with the larger hit generation rate (e.g., the number of parallel two hit modules in the pipeline is preferably b×h).
The use of h independent hit generator modules 504 has an unintended consequence on the generated hit stream. The w-mer processing time within each hit generator 504 is variable due to the possibility of duplicate query positions. This characteristic causes the different hit generators 504 to lose synchronization with each other and generate hits that are out of order with respect to the database positions. Out-of-order hits may be discarded in the hardware stages. This, however, leads to decreased search sensitivity. Alternatively, hits that are out of order by more than a fixed window of database residues in the extension stages may be forwarded to the host CPU without inspection. This increases the false positive rate and has an adverse effect on the throughput of the pipeline.
This problem may be tackled in one of three ways. First, the h hit generator modules 504 may be deliberately kept synchronized. On encountering a duplicate, every hit generator module 504 can be controlled to pause until all duplicates are retrieved, before the next set of w-mers is accepted. This approach quickly degrades in performance: as h grows, the probability of the modules pausing increases, and the throughput decreases drastically. A second approach is to pause the hit generator modules 504 only if they get out of order by more than a downstream tolerance. A preferred third solution is slightly different. The number of duplicates for each w-mer in the lookup table 514 is limited to L, requiring a maximum processing time of l = ⌈L/3⌉ clock cycles in a preferred implementation. This automatically limits the distance the hits can get out of order in the worst case to (d_t + l) × (h − 1) database residues, without the use of additional hardware circuitry. Here, d_t is the latency of access into the duplicate table. The downstream stages can then be designed for this out-of-order tolerance level. In such a preferred implementation, d_t can be 4 and L can be 15. The loss in sensitivity due to the pruning of hits outside this window was experimentally determined to be negligible.
With the addition of multiple hit generation modules 504, additional switching circuitry can be used to route all h hit triples to their corresponding two-hit modules 110. Such a switch essentially serves as a buffered multiplexer and can also be referred to as Switch2 (wherein switch 1700 is referred to as Switch1). The switching functions of Switch2 can be achieved in two phases. In the first phase, a triple from each hit generation module 504 is routed to b queues 2104 (one for each copy of the two-hit module), using the interconnecting Switch1 (1700). A total of h×b queues, each containing a single hit per entry, are generated. In the second phase, a new interconnecting switch (Switch2) is deployed upstream from each two-hit module 110 to select hits from one of h queues. This two-phase switching mechanism successfully routes any one of the 3×h hits generated by the word matching stage to any one of the b two-hit modules.
Thus,
A final piece of the high throughput seed generation pipeline depicted in
Further still, it should be noted that a plurality of parallel ungapped extension analysis stage circuits as described hereinafter can be deployed downstream from the output queues 2108 for the multiple two-hit modules 110. Each ungapped extension analysis circuit can be configured to receive hits from one or more two-hit modules 110 through queues 2108. In such an embodiment, the seed reduction module 2100 could be eliminated.
Preferred instantiation parameters for the seed generation stage 102 of
A dual-FPGA solution is used in a preferred embodiment of a BLASTP pipeline, with seed generation and ungapped extension deployed on the first FPGA and gapped extension running on the second FPGA, as shown in
Data flowing into and out of a board 250 is preferably communicated along a single 64-bit data path having two logic channels—one for data and the other for commands. Data flowing between stages on the same board or same reconfigurable logic device may utilize separate 64-bit data and control buses. For example, the data flow between stage 108 and stage 110 may utilize separate 64-bit data and control buses if those two stages are deployed on the same board 250. Module-specific commands program the lookup tables 514 and clear the diagonal array 1600 in the two-hit modules. The seed generation and ungapped extension modules preferably communicate via two independent data paths. The standard data communication channel is used to send seed hits, while a new bus is used to stream the database sequence. All modules preferably respect backpressure signals asserted to halt an upstream stage when busy.
The architecture for the ungapped extension stage 104 of the BLASTP is preferably the same as the ungapped extension stage architecture disclosed in the incorporated Ser. No. 11/359,285 patent application for BLASTN, albeit with a different scoring technique and some additional buffering (and associated control logic) used to accommodate the increased number of bits needed to represent protein residues (as opposed to DNA bases).
As disclosed in the incorporated Ser. No. 11/359,285 patent application, the ungapped extension stage 104 can be realized as a filter circuit 2300 such as shown in
The circuit 2300 is preferably organized into three (3) pipelined stages. This includes an extension controller 2306, a window lookup module 2308, and a scoring module 2310. The extension controller 2306 is preferably configured to parse the input to demultiplex the shared w-mers/commands 2302 and database stream 2304. All w-mer matches and the database stream flow through the extension controller 2306 into the window lookup module 2308. The window lookup module 2308 is responsible for fetching the appropriate substrings of the database stream and the query to form an alignment window. A preferred embodiment of the window lookup module also employs a shifting-tree to appropriately align the data retrieved from the buffers.
After the window is fetched, it is passed into the scoring module 2310 and stored in registers. The scoring module 2310 is preferably extensively pipelined as shown in
The final pipeline stage of the scoring module 2310 is the threshold comparator 2316. The comparator 2316 takes the fully-scored segment and makes a decision to discard or keep the segment. This decision is based on the score of the alignment relative to a user-defined threshold T, as well as the position of the highest-scoring substring. If the maximum score is above the threshold, the segment is passed on. Additionally, if the maximal scoring substring intersects either boundary of the window, the segment is also passed on, regardless of the score. If neither condition holds, the segment (i.e., the substring of predetermined length) is discarded. The segments that are passed on are indicated as HSPs 2318 in
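The decision rule of the comparator can be summarized by the following illustrative sketch (Python; the helper name and arguments are hypothetical and do not correspond to the comparator 2316 itself). It simply captures the two pass conditions described above.

def keep_segment(max_score, best_span, window_len, T):
    """max_score: highest ungapped score found in the window.
    best_span: (start, end) positions (0-based, inclusive) of the maximal-scoring
    substring within the window.  T: user-defined threshold.
    Returns True if the segment should be passed on as an HSP."""
    start, end = best_span
    touches_boundary = (start == 0) or (end == window_len - 1)
    # scores strictly above the threshold pass, per the description above
    return (max_score > T) or touches_boundary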
As indicated above, when configured as a BLASTP ungapped extension stage 104, the circuit 2300 can employ a different scoring technique than that used for BLASTN applications. Whereas the preferred BLASTN scoring technique used a reward score of α for exact matches between a query sequence residue and a database sequence residue in the extension window and a penalty score of −β for a non-match between a query sequence residue and a database sequence residue in the extension window, the BLASTP scoring technique preferably uses a scoring system based on a more complex scoring matrix.
The scores found in the scoring matrix are preferably defined in accordance with the BLOSUM-62 standard. However, it should be noted that other scoring standards can readily be used in the practice of this aspect of the invention. Preferably, scoring lookup table 2408 is stored in one or more BRAM units within the FPGA on which the ungapped extension stage is deployed. Because BRAMs are dual-ported, Lw/2 BRAMs are preferably used to store the table 2408 to thereby allow each residue pair in the extension window to obtain its value in a single clock cycle. However, it should be noted that quad-ported BRAMs can be used to further reduce the total number of BRAMs needed for score lookups.
It should also be noted that the gapped extension stage 106 is preferably configured such that, to appropriately process the HSPs that are used as seeds for the gapped extension analysis, an appropriate window of the database sequence around the HSP must already be buffered by the gapped extension stage 106 when that HSP arrives. To ensure that a sufficient amount of the database sequence can be buffered by the gapped extension stage 106 prior to the arrival of each HSP, a synchronization circuit 2350 such as the one shown in
To achieve this, circuit 2350 preferably comprises a buffer 2352 for buffering the database sequence 2304 and a buffer 2354 for buffering the HSPs 2318 generated by circuit 2300. Logic 2356 also preferably receives the database sequence and the HSPs. Logic 2356 can be configured to maintain a running window threshold calculation for Tw, wherein Tw is set equal to the latest database position for the current HSP plus some window value W. Logic 2356 then compares this computed Tw value with the database positions in the database sequence 2304 to control whether database residues in the database buffer 2352 or HSPs in the HSP buffer 2354 are passed by multiplexer 2358 to the circuit output 2360, which comprises an interleaved stream of database sequence portions and HSPs. Appropriate command data can be included in the output to tag data within the stream as either database data or HSP data. Thus, the value for W can be selected such that a window of an appropriate size for the database sequence around each HSP is guaranteed. An exemplary value for W can be 500 residue positions of a sequence. However, it should be understood that other values for W could be used, and the choice as to W for a preferred embodiment can be based on the characteristics of the band used by the Stage 3 circuit to perform a banded Smith-Waterman algorithm, as explained below.
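The following behavioral sketch (Python; names are hypothetical and the circuit details are greatly simplified) illustrates the interleaving policy: each HSP is released into the output stream only after the database residue at its threshold position Tw (the HSP's latest database position plus W) has already been emitted, which guarantees that a window of at least W residues past the HSP is buffered downstream.

def interleave_hsps(db_residues, hsps, W):
    """db_residues: iterable of (position, residue) pairs in database order.
    hsps: iterator of HSP records in arrival order, each with a db_end attribute
    giving its latest (highest) database position.  W: window size in residues.
    Yields ('DB', item) and ('HSP', item) entries forming the interleaved stream."""
    hsps = iter(hsps)
    current = next(hsps, None)
    for pos, res in db_residues:
        yield ('DB', (pos, res))
        # Release every waiting HSP whose threshold Tw = db_end + W has been reached.
        while current is not None and pos >= current.db_end + W:
            yield ('HSP', current)
            current = next(hsps, None)
    while current is not None:          # flush remaining HSPs at end of stream
        yield ('HSP', current)
        current = next(hsps, None)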
As an alternative to the synchronization circuit 2350, the system can also be set up to forward any HSPs that are out of synchronization by more than W with the database sequence to an exception handling process in software.
The Smith-Waterman (SW) algorithm is a well-known algorithm for use in gapped extension analysis for BLAST. SW allows for insertions and deletions in the query sequence as well as matches and mismatches in the alignment. A common variant of SW is affine SW. Affine SW requires that the cost of a gap can be expressed in the form o+k*e, wherein o is the cost of opening a gap (the gap existence cost), wherein k is the length of the gap, and wherein e is the cost of extending the gap length by 1. In practice, o is usually costly, around −12, while e is usually less costly, around −3. Because one will never have gaps of length zero, one can define a value d as o+e, the cost of the first gap. In nature, when gaps in proteins do occur, they tend to be several residues long, so affine SW serves as a good model for the underlying biology.
If one next considers a database sequence x and a query sequence y, wherein m is the length of x, and wherein n is the length of y, affine SW will operate in an m*n grid representing the possibility of aligning any residue in x with any residue in y. Using two variables, i=0,1, . . . , n and j=0,1, . . . , m, for each pair of residues (i,j) wherein i≧1 and wherein j≧1, affine SW computes three values: (1) the highest scoring alignment which ends at the cell for (i,j), denoted M(i,j); (2) the highest scoring alignment which ends in an insertion in x, denoted I(i,j); and (3) the highest scoring alignment which ends in a deletion in x, denoted D(i,j).
As an initialization condition, one can set M(0,j)=I(0,j)=0 for all values of j, and one can set M(i,0)=D(i,0)=0 for all values of i. If xi and yj denote the ith and jth residues of the x and y sequences respectively, one can define a substitution matrix s such that s(xi,yj) gives the score of matching xi and yj, wherein the recurrence is then expressed as:
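(The recurrence itself is not reproduced above; one standard statement of it, consistent with the definitions of M, I, D, s, d, and e and with the initialization conditions given above, is the following, with any boundary value not listed above taken to be zero.)

M(i,j) = max{ 0, M(i−1,j−1) + s(xi,yj), I(i−1,j−1) + s(xi,yj), D(i−1,j−1) + s(xi,yj) }
I(i,j) = max{ M(i−1,j) + d, I(i−1,j) + e }
D(i,j) = max{ M(i,j−1) + d, D(i,j−1) + e }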
which is shown graphically by
A variety of observations can be made about this recurrence. First, each cell is dependent solely on the cells to its left, above, and upper-left. Second, M(i,j) is never negative, which allows for finding strong local alignments regardless of the strength of the global alignment because a local alignment is not penalized by a negative scoring section before it. Lastly, this algorithm runs in O(mn) time and space.
In most biology applications, the majority of alignments are not statistically significant and are discarded. Because allocating and initializing a search space of mn takes significant time, linear SW is often run as a prefilter to a full SW. Linear SW is an adaptation of SW which allows the computation to be performed in linear space, but gives only the score and not the actual alignment. Alignments with high enough scores are then recomputed with SW to get the path of the alignment. Linear SW can be computed in a way consistent with the data dependencies by computing along anti-diagonals while storing only the two most recent anti-diagonals at any time.
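For illustration, the following Python sketch (hypothetical function names; not the hardware implementation) computes a score-only affine SW value in linear space. It proceeds row by row for clarity; an anti-diagonal sweep, as discussed above, respects the same data dependencies and yields the same scores.

def affine_sw_score(x, y, s, d, e):
    """x, y: sequences; s(a, b): substitution score for residues a and b;
    d: cost of the first residue of a gap (o + e, a negative value);
    e: cost of each additional gap residue (negative).
    Returns the best local alignment score only (no traceback)."""
    m, n = len(x), len(y)
    M_prev = [0] * (m + 1)   # row i-1 of M
    I_prev = [0] * (m + 1)   # row i-1 of I
    D_prev = [0] * (m + 1)   # row i-1 of D
    best = 0
    for i in range(1, n + 1):
        M_cur = [0] * (m + 1)
        I_cur = [0] * (m + 1)
        D_cur = [0] * (m + 1)
        for j in range(1, m + 1):
            sub = s(x[j - 1], y[i - 1])
            M_cur[j] = max(0, M_prev[j - 1] + sub,
                              I_prev[j - 1] + sub,
                              D_prev[j - 1] + sub)
            I_cur[j] = max(M_prev[j] + d, I_prev[j] + e)        # gap advancing i
            D_cur[j] = max(M_cur[j - 1] + d, D_cur[j - 1] + e)  # gap advancing j
            best = max(best, M_cur[j])
        M_prev, I_prev, D_prev = M_cur, I_cur, D_cur
    return best

For example, with BLOSUM-62 substitution scores, d=−15, and e=−3 (matching the o=−12 and e=−3 values mentioned above), the value returned is the score that a subsequent full SW pass would recompute along with the alignment path.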
A variety of hardware deployments of the SW algorithm for use in Stage 3 BLAST processing are known in the art, and such known hardware designs can be used in the practice of the present invention. However, it should be noted that in a preferred embodiment of the present invention, Stage 3 for BLAST is implemented using a gapped extension prefilter 402 wherein the prefilter 402 employs a banded SW (BSW) algorithm for its gapped extension analysis. As shown in
For BSW, and with reference to
Under these conditions, a successful alignment may not be the product of the seed 2502; it may start and end before the seed or start and end after the seed. To avoid this situation, an additional constraint is preferably imposed that the alignment must start before the seed 2502 and end after the seed 2502. To enforce this constraint, the hardware logic which performs the BSW algorithm preferably specifies that only scores which come after the seed can indicate a successful alignment. After the seed 2502, scores are preferably allowed to become negative, which greatly reduces the chance of a successful alignment which starts in the second half. Even with these restrictions, however, the alignment does not have to go directly through the seed.
As with SW, each cell in BSW is dependent only on its left, upper and upper-left neighbors. Thus it is preferable to compute along the anti-diagonal 2504. The order of this computation can be a bit deceiving because the anti-diagonal computation does not proceed in a purely diagonal fashion but rather in a stair-stepping fashion. That is, after the first anti-diagonal is computed (for the anti-diagonal numbered 1 in
A preferred design of the hardware pipeline for implementing the BSW algorithm in the gapped extension prefilter stage 402 of BLASTP can be thought of in three categories: (1) control, (2) buffering and storage, and (3) BSW computation.
3.A. Control
With reference to
3.B. Storage and Buffering
There are several parameters and tables which are preferably stored by the BSW prefilter in addition to the query sequence(s) and the database sequence. The requisite parameters for storage are λ, e, and d. Each parameter, which is preferably set using a separate command from the software, is stored in the control and parameters registers 2818, which are preferably located within the hardware.
Registers 2810 preferably include a threshold table and a start table, examples of which are shown in
For an exemplary maximum query length of 2048 residues (which translates to around 1.25 Kbytes), the query sequence can readily be stored in a query table 2812 located on the hardware. Because residues are consumed sequentially starting from an initial offset, the query buffer 2812 provides a FIFO-like interface. The initial address is loaded, and then the compute state-machine can request the next residue by incrementing a counter in the query table 2812.
The database sequence, however, is expected to be too large to be stored on-chip, so the BSW hardware prefilter preferably only stores an active portion of the database sequence in a database buffer 2814 that serves as a circular buffer. Because of the Stage 1 design discussed above, HSPs will not necessarily arrive in order by ascending database position. To accommodate such out-of-order arrivals, database buffer 2814 keeps a window of X residues (preferably 2048 residues) behind the database offset of the current HSP. Given that the typical out-of-orderness is around 40 residues, the database buffer 2814 is expected to support almost all out-of-order instances. In an exceptional case where an HSP is too far behind, the BSW hardware prefilter can flag an error and send that HSP to software for further processing. Another preferred feature of the database buffer 2814 is that the buffer 2814 does not service a request until it has buffered the next ω+(λ/2) residues, thereby making buffer 2814 difficult to stall once computation has started. This can be implemented using a FIFO-like interface similar to the query buffer 2812, albeit that after loading the initial address, the compute state-machine must wait for the database buffer 2814 to signal that it is ready (which only happens once the buffer 2814 has buffered the next ω+(λ/2) residues).
3.C. BSW Computation
The BSW computation is carried out by the BSW core 2820. Preferably, the parallelism of the BSW algorithm is exploited such that each value in an anti-diagonal can be computed concurrently. To compute ω values simultaneously, the BSW core 2820 preferably employs ω SW computational cells. Because there will be ω cells, and the computation requires one clock cycle, the values for each anti-diagonal can be computed in a single clock cycle. As shown in
Rather than arrange the design in a per-cell fashion, a preferred embodiment of the BSW core 2820 can arrange the BSW computation in blocks which provide all the dependencies of one type for all cells. This allows the internal implementation of each block to change as long as it provides the same interface.
As shown in
The score block 3008 takes in the xi+1, . . . xi+1+(ω−1) and yj, . . . yj−(ω−1) residues from the shift registers 3010 and 3012 to produce the required s(xi,yj), . . . s(xi+(ω−1), yj−(ω−1)) values. The score block 3008 can be implemented using a lookup table in block RAM. To generate an address for the lookup table, the score block 3008 can concatenate the x and y value for each clock cycle. If each residue is represented with 5 bits, the total address space will be 10 bits. Each score value can be represented as a signed 8-bit value, so the total size of the table is 1 Kbyte, which is small enough to fit in one block RAM. Because each residue pair may be different, the design is preferably configured to service all requests simultaneously and independently. Since each block RAM provides two independent ports, ω/2 replicated block RAMs suffice for the lookup table to service all ω requests at once. The block RAMs take one cycle to produce data; as such, the inputs are preferably sent one clock cycle ahead.
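The score lookup described above can be modeled in software as follows (Python sketch; the encoding and matrix contents are placeholders, and the actual design uses replicated dual-ported block RAMs rather than a software list).

def build_score_table(score_matrix, encode, alphabet):
    """Flatten a residue-pair score matrix (e.g., BLOSUM-62 values) into a
    1024-entry table indexed by the concatenation of two 5-bit residue codes.
    score_matrix[a][b] is the score for residues a, b; encode(a) returns the
    5-bit code of residue a; alphabet lists the valid residue characters."""
    table = [0] * (1 << 10)                      # 2**10 entries of one byte each
    for a in alphabet:
        for b in alphabet:
            table[(encode(a) << 5) | encode(b)] = score_matrix[a][b]
    return table

def lookup_score(table, code_x, code_y):
    """One lookup per residue pair, mirroring a single block RAM read."""
    return table[(code_x << 5) | code_y]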
The pass-fail block 3002 simultaneously compares the ω cell scores in an anti-diagonal against a threshold from the threshold table. If any cell value exceeds the threshold, the HSP is deemed significant and is immediately passed through to software for further processing (by way of FIFO 2808 and the Send FSM 2806). In some circumstances, it may be desirable to terminate extension early. To achieve this, once an alignment crosses the HSP, its score is never clamped to zero, but may become negative. In instances where only negative scores are observed on all cells on two consecutive anti-diagonals, the extension process is terminated. Most HSPs that yield no high-scoring gapped alignment are rapidly discarded by this optimization.
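The pass/fail and early-termination policy can be summarized by the following behavioral sketch (Python; names are hypothetical, and the per-cell score computation is abstracted away).

def bsw_prefilter_decision(anti_diagonal_scores, thresholds):
    """anti_diagonal_scores: iterable of lists, each holding the scores of the
    (up to) omega cells of one anti-diagonal, in computation order.
    thresholds: per-anti-diagonal thresholds from the threshold table.
    Returns True if the HSP is significant and should be passed to software."""
    negative_run = 0
    for k, scores in enumerate(anti_diagonal_scores):
        if any(v > thresholds[k] for v in scores):
            return True                          # pass the HSP on immediately
        if all(v < 0 for v in scores):
            negative_run += 1
            if negative_run == 2:                # two consecutive all-negative
                return False                     # anti-diagonals: terminate early
        else:
            negative_run = 0
    return False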
With reference to the embodiment of
The NCBI codebase can be leveraged in such a design. Fundamental support routines such as I/O processing, query filtering, and the generation of sequence statistics can be re-used. Further, support for additional BLAST programs such as blastx and tblastn can be more easily added at a later stage. Furthermore, the user interface, including command-line options, input sequence format, and output alignment format from NCBI BLAST can be preserved. This facilitates transparent migration for end users and seamless integration with the large set of applications that are designed to work with NCBI BLAST.
Query pre-processing, as shown in
The NCBI Initialize code shown in
Board 2501 preferably performs the first two stages of the BLASTP pipeline. The HSPs generated by the ungapped extension can be sent back to the host CPU, where they are multiplexed with the database stream. However, it should be noted that if Stage 2 employs the synchronization circuit 2350 that is shown in
FPGA communication wrappers, device drivers, and hardware DMA engines can be those disclosed in the referenced and incorporated Ser. No. 11/339,892 patent application.
Query bin packing is an optimization that can be performed as part of the query pre-processing to accelerate the BLAST search process. With query bin packing, multiple short query sequences are concatenated and processed during a single pass over the database sequence. Query sequences larger than the maximum supported size are broken into smaller, overlapping chunks and processed over several passes of the database sequence. Query bin packing can be particularly useful for BLASTP applications when the maximum query sequence size is 2048 residues because the average protein sequence in typical sequence databases is only around 300 residues.
Sequence packing reduces the overhead of each pass, and so ensures that the resources available are fully utilized. However, a number of caveats are preferably first addressed. First, to ensure that generated alignments do not cross query sequence boundaries, an invalid sequence control character is preferably used to separate the different query sequences packed into a common bin. The word matching stage is preferably configured to detect and reject w-mers that cross these boundaries. Similar safeguards are preferably employed during downstream extension stages. Second, the HSP coordinates returned by the hardware stages must be translated back to the coordinate system of the individual query sequences. Finally, the process of packing a set of query sequences in an online configuration is preferably optimized to reduce the overhead to a minimum.
For a query bin packing problem, one starts from a list L=(q1, q2, . . . , qn) of query sequences, each of length li∈(0,2048), that must be packed into a minimum number of bins, each of capacity 2048. This is a classical one-dimensional bin-packing problem that is known to be NP-hard. A variety of algorithms can be used that guarantee to use no more than a constant factor of the bins used by the optimal solution. (See Hochbaum, D., Approximation algorithms for NP-hard problems, PWS Publishing Co., Boston, Mass., 1997, the entire disclosure of which is incorporated herein by reference).
If one lets B1, B2, . . . be a list of bins indexed by the order of their creation, then one can let fill(Bk) denote the sum of the lengths of the query sequences packed into bin Bk. With a Next Fit (NF) query bin packing algorithm, the query qi is added to the most recently created bin Bk if li≦2048−fill(Bk) is satisfied. Otherwise, Bk is closed and qi is placed in a new bin Bk+1, which now becomes the active bin. The NF algorithm is guaranteed to use not more than twice the number of bins used by the optimal solution.
A First Fit (FF) query bin packing algorithm attempts to place the query qi in the first bin in which it can fit, i.e., the lowest indexed bin Bk such that the condition li≦2048−fill(Bk) is satisfied. If no bin with sufficient room exists, a new one is created with qi as its first sequence. The FF algorithm uses no more than 17/10 the number of bins used by the optimal solution.
The NF and FF algorithms can be improved by first sorting the query list by decreasing sequence lengths before applying the packing rules. The corresponding algorithms can be termed Next Fit Decreasing (NFD) and First Fit Decreasing (FFD) respectively. It can be shown that FFD uses no more than 11/9 the number of bins used by the optimal solution.
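For illustration, a minimal First Fit Decreasing packer might be sketched in software as follows (Python; the capacity value and names are illustrative). Query lengths passed to such a routine would already include the extra residue for the separating control character, as in the experiment described below.

def first_fit_decreasing(query_lengths, capacity=2048):
    """query_lengths: lengths of the query sequences (already padded for any
    separating control character).  Returns a list of bins, each a list of lengths."""
    bins, free = [], []
    for length in sorted(query_lengths, reverse=True):     # the "decreasing" step
        for k in range(len(bins)):
            if length <= free[k]:                          # first bin that fits
                bins[k].append(length)
                free[k] -= length
                break
        else:                                              # no open bin fits
            bins.append([length])
            free.append(capacity - length)
    return bins

Omitting the initial sort yields plain First Fit, which is the applicable rule when query sequences arrive online and cannot be sorted in advance.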
The performance of the NF and FF approximation algorithms was tested over 4,241 sequences (1,348,939 residues) of the Escherichia coli k12 proteome. The length of each query sequence was increased by one to accommodate the sequence control character. The capacity of each bin was set to 2048 sequence residues. Bin packing was performed either in the original order of the sequences in the input file or after sorting by decreasing sequence length. An optimal solution for this input set uses at least ⌈1,353,180/2048⌉=661 bins. Table 4 below illustrates the results of this testing.
As can be seen from Table 4, both algorithms perform considerably better than the worst case. FF performs best on the sorted list of query sequences, using only one more bin than the optimal solution. Sorting the list of query sequences is possible when they are known in advance. In certain configurations, such as when the BLASTP pipeline is configured to service requests from a web server, such sorting will not likely be feasible. In such configurations where sequences cannot be sorted, the FF rule uses just six more bins than the optimum. Thus, in instances where the query set is known in advance, FFD (which is an improvement on FF) can serve as a good method for query bin packing.
The BLASTP pipeline is stalled during the query bin packing pre-processing computation. FF keeps every bin open until the entire query set is processed. The NF algorithm may be used if this pre-processing time becomes a major concern. Since only the most recent bin is inspected with NF, all previously closed query bins may be dispatched for processing in the pipeline. However, it should also be noted that NF increases the number of database passes required and causes a corresponding increase in execution time.
It is also worth noting that the deployment of BLAST on reconfigurable logic as described herein and as described in the related U.S. patent application Ser. No. 11/359,285 allows for BLAST users to accelerate similarity searching for a plurality of different types of sequence analysis (e.g., both BLASTN searches and BLASTP searches) while using the same board(s) 250. That is, a computer system used by a searcher can store a plurality of hardware templates in memory, wherein at least one of the templates defines a FAM chain 230 for BLASTN similarity searching while another at least one template defines a FAM chain 230 for BLASTP similarity searching. Depending on whether the user wants to perform BLASTP or BLASTN similarity searching, the processor 208 can cause an appropriate one of the prestored templates to be loaded onto the reconfigurable logic device to carry out the similarity search (or can generate an appropriate template for loading onto the reconfigurable logic device). Once the reconfigurable logic device 202 has been configured with the appropriate FAM chain, the reconfigurable logic device 202 will be ready to receive the instantiation parameters such as, for BLASTP processing, the position identifiers for storage in lookup table 514. If the searcher later wants to perform a sequence analysis using a different search methodology, he/she can then instruct the computer system to load a new template onto the reconfigurable logic device such that the reconfigurable logic device is reconfigured for the different search (e.g., reconfiguring the FPGA to perform a BLASTN search when it was previously configured for a BLASTP search).
Further still, a variety of templates for each type of BLAST processing may be developed with different pipeline characteristics (e.g., one template defining a FAM chain 230 wherein only Stages 1 and 2 of BLASTP are performed on the reconfigurable logic device 202, another template defining a FAM chain 230 wherein all stages of BLASTP are performed on the reconfigurable logic device 202, and another template defining a FAM chain 230 wherein only Stage 1 of BLASTP is performed on the reconfigurable logic device). With such a library of prestored templates available for loading onto the reconfigurable logic device, users can be provided with the flexibility to choose a type of BLAST processing that suits their particular needs. Additional details concerning the loading of templates onto reconfigurable logic devices can be found in the above-referenced patent applications: U.S. patent application Ser. No. 10/153,151, published PCT applications WO 05/048134 and WO 05/026925, and U.S. patent application Ser. No. 11/339,892.
As disclosed in the above-referenced and incorporated WO 05/048134 and WO 05/026925 patent applications, to generate a template for loading onto an FPGA, the process flow of
Thereafter, at step 3402, a synthesis tool is used to convert the HDL source code 3400 into a data structure that is a gate level logic description 3404 for the processing engines. A preferred synthesis tool is the well-known Synplify Pro software provided by Synplicity, and a preferred gate level description 3404 is an EDIF netlist. However, it should be noted that other synthesis tools and gate level descriptions can be used. Next, at step 3406, a place and route tool is used to convert the EDIF netlist 3404 into a data structure that comprises the template 3408 that is to be loaded into the FPGA. A preferred place and route tool is the Xilinx ISE toolset that includes functionality for mapping, timing analysis, and output generation, as is known in the art. However, other place and route tools can be used in the practice of the present invention. The template 3408 is a bit configuration file that can be loaded into an FPGA through the FPGA's Joint Test Action Group (JTAG) multipin interface, as is known in the art.
However, it should also be noted that the process of generating template 3408 can begin at a higher level, as shown in FIGS. 35(a) and (b). Thus, a user can create a data structure that comprises high level source code 3500. An example of a high level source code language is SystemC, an IEEE standard language; however, it should be noted that other high level languages could be used. With respect to an embodiment where stages 1 and 2 of BLASTP are deployed on a single FPGA on board 2501 (see
At step 3502, a compiler such as a SystemC compiler can be used to convert the high level source code 3500 to the HDL code 3400. Thereafter, the process flow can proceed as described in
As would be readily understood by those having ordinary skill in the art, the process flows of
It should also be noted that, while a preferred embodiment of the present invention is configured to perform BLASTP similarity searching between protein biosequences, the architecture of the present invention can also be employed to perform comparisons for other sequences. For example, the sequence can be in the form of a profile, wherein the items into which the sequence's bit values are grouped comprise profile columns, as explained below.
4.A. Profiles
A profile is a model describing a collection of sequences. A profile P describes sequences of a fixed length L over an alphabet A (e.g., DNA bases or amino acids). Profiles are represented as matrices of size |A|×L, where the jth column of P (1<=j<=L) describes the jth position of a sequence. There are several variants of profiles, which are described below.
Frequency matrix. The simplest form of profile describes the character frequencies observed in a collection C of sequences of common length L. If character c occurs at position j in m of the sequences in C, then P(c,j)=m. The total of the entries in each column is therefore the number of sequences in C. For example, a frequency matrix derived from a set of 10 sequences of length 4 might look like the following:
Probabilistic model. A probabilistic profile describes a probability distribution over sequences of length L. The profile entry P(c,j) (where c is a character from alphabet A) gives the probability of seeing character c at sequence position j. Hence, the sum of P(c,j) over all c in A is 1 for any j. The probability that a sequence sampled at random according to P is precisely the sequence s is given by
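(The expression itself is not reproduced above; consistent with the column independence noted in the next sentence, the standard product form would be Pr(s | P) = P(s1,1) × P(s2,2) × . . . × P(sL,L), wherein sj denotes the jth character of s.)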
Note that the probability of seeing character c in column j is independent of the probability of seeing character c′ in column k, for k≠j.
Typically, a probabilistic profile is derived from a frequency matrix for a collection of sequences. The probabilities are simply the counts, normalized by the total number of sequences in each column. For example, the probability version of the above profile might look like the following:
In practice, these probabilities are sometimes adjusted with prior weights or pseudocounts, e.g. to avoid having any zero entries in a profile and hence avoid assigning any sequence zero probability under the model.
Score matrix. A third use of profiles is as a matrix of similarity scores. In this formulation, each entry of P is an arbitrary real number (though the entries may be rounded to integers for efficiency). Higher scores represent greater similarity, while lower scores represent lesser similarity. The similarity score of a sequence s against profile P is then given by
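(The expression is not reproduced above; consistent with the position-wise independence noted in the next sentence, the standard form would be score(s | P) = P(s1,1) + P(s2,2) + . . . + P(sL,L), wherein sj denotes the jth character of s.)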
Again, each sequence position contributes independently to the score.
A common way to produce score-based profiles is as a log-likelihood ratio (LLR) matrix. Given two probabilistic profiles P and P0, an LLR score profile Pr can be defined as follows:
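(The definition is not reproduced above; the standard log-likelihood ratio form, consistent with the surrounding description, would be Pr(c,j) = log( P(c,j) / P0(c,j) ) for each character c and column j.)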
Higher scores in the resulting LLR matrix correspond to characters that are more likely to occur at position j under model P than under model P0. Typically, P0 is a null model describing an “uninteresting” sequence, while P describes a family of interesting sequences such as transcription factor binding sites or protein motifs.
This form of profile is sometimes called a position-specific scoring matrix (PSSM).
4.B. Comparison Tasks Involving Profiles
One may extend the pairwise sequence comparison problem to permit one or both sides of the comparison to be a profile. The following two problem statements describe well-known bioinformatics computations.
Problem (1): given a query profile P representing a score matrix, and a database D of sequences, find all substrings s of sequences from D such that score(s|P) is at least some threshold value T.
Problem (1′): given a query sequence s and a database D of profiles representing score matrices, find all profiles P in D such that for some substring s′ of s, score(s′|P) is at least some threshold value T.
Problem (2): given a query profile P representing a frequency matrix, and a database D of profiles representing frequency matrices, find all pairs of profiles (P, P′) with similarity above a threshold value T.
Problem (1) describes the core computation of PSI-BLAST, while (1′) describes the core computation of (e.g.) RPS-BLAST and the BLOCKS Searcher. (See Altschul et al., Gapped BLAST and PSI-BLAST: a new generation of protein database search programs, Nucleic Acids Res, 1997, 25(17): p. 3389-3402; Marchler-Bauer et al., CDD: a conserved domain database for interactive domain family analysis, Nucleic Acids Res, 2007, 35(Database Issue): p. D237-40; and Pietrokovski et al., The Blocks database—a system for protein classification, Nucleic Acids Res, 1996, 24(1): p. 197-200, the entire disclosures of each of which are incorporated herein by reference). These tools are all used to recognize known motifs in biosequences.
A motif is a sequence pattern that occurs (with variation) in many different sequences. Biologists collect examples of a motif and summarize the result as a frequency profile P. To use the profile P in search, it is converted to a probabilistic model and thence to an LLR score matrix using some background model P0. In Problem (1), a single profile representing a motif is scanned against a sequence database to find additional instances of the motif; in Problem (1′), a single sequence is scanned against a database of motifs to discover which motifs are present in the sequence.
Problem (2) describes the core computations of several tools, including LAMA, IMPALA, and PhyloNet. (See Pietrokovski S., Searching databases of conserved sequence regions by aligning protein multiple-alignments, Nucleic Acids Res, 1996, 24(19): p. 3836-45; Schaffer et al., IMPALA: matching a protein sequence against a collection of PSI-BLAST-constructed position-specific score matrices, Bioinformatics, 1999, 15(12): p. 1000-11; and Wang and Stormo, Identifying the conserved network of cis-regulatory sites of a eukaryotic genome, Proc. of Nat'l Acad. Sci. USA, 2005, 102(48): p. 17400-5, the entire disclosures of each of which are incorporated herein by reference). The inputs to this problem are typically collections of aligned DNA or protein sequences, where each collection has been converted to a frequency matrix. The goal is to discover occurrences of the same motif in two or more collections of sequences, which may be used as evidence that these sequences share a common function or evolutionary ancestor.
The application defines a function Z to measure the similarity of two profile columns. Given two profiles P, P′ of common length L, the similarity score of P with P′ is then
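(The expression is not reproduced above; consistent with the column-wise definition of Z, the score would be the sum of the column similarities, i.e., score(P, P′) = Z(P(*,1), P′(*,1)) + Z(P(*,2), P′(*,2)) + . . . + Z(P(*,L), P′(*,L)), wherein P(*,j) denotes the jth column of P.)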
To compare profiles of unequal length, one may compute their optimal local alignment using the same algorithms (Smith-Waterman, etc.) used to align sequences, using the function Z to score each pair of aligned columns. In practice, programs that compare profiles do not permit insertion and deletion, and thus only ungapped alignments are needed and not gapped alignments.
4.C. Implementing Profile Comparison with a Hardware BLAST Circuit:
Solutions to Problems (1), (1′), and (2) above using a BLASTP-like seeded alignment algorithm are now described. For Problems (1) and (1′), the implementation corresponds to a hardware realization for PSI-BLAST, and for Problem (2), the implementation corresponds to a hardware realization for PhyloNet. As noted above, the hardware pipeline need not employ a gapped extension stage.
The similarity search problems defined above can be implemented naively by full pairwise comparison of query and database. For Problem (1) with a query profile P of length L, this entails computing, for each sequence s in D, the similarity scores score(s[i . . . i+L−1]|P) for 1<=i<=|s|−L+1 and comparing each score to the threshold T. For Problem (1′), a comparable computation is performed between the query sequence s and each profile P in D. For Problem (2), the query profile P is compared to each other profile P′ in D by ungapped dynamic programming with score function Z, to find the optimal local alignment of P to P′. Each of these implementations is analogous to naïve comparison between a query sequence and a database of sequences; only the scoring function has changed in each case.
Just as BLAST uses seeded alignment to accelerate sequence-to-sequence comparison, one may apply seeded alignment to accelerate comparisons between sequences and profiles, or between profiles and profiles. The seeded approach has two stages, corresponding to Stage 1 and Stage 2 of the BLASTP pipeline. In Stage 1, one can apply the previously-described hashing approach after first converting the profiles to a form that permits hash-based comparison. In Stage 2, one can implement ungapped dynamic programming to extend each seed, using the full profiles with their corresponding scoring functions as described in the previous paragraph.
As shown above, stage 1 of BLASTP derives high performance from being able to scan the database in linear time to find all word matches to the query, regardless of the query's length. The linear scan is implemented by hashing each word in the query into a table; each word in the database is then looked up in this table to determine if it (or another word in its high-scoring neighborhood) is present in the query.
In Problem (1), the query is a profile P of length L. One may define the (w,T)-neighborhood of profile P to be all strings s of length w, such that for some j, 1<=j<=L−w+1,
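(The scoring condition is not reproduced above; consistent with the description in the next sentence, the condition would be P(s1,j) + P(s2,j+1) + . . . + P(sw,j+w−1) >= T, wherein sk denotes the kth character of s.)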
In other words, the neighborhood of P is the set of all w-mers that score at least T when aligned at some offset into P. This definition is precisely analogous to the (w,T)-neighborhood of a biosequence as used by BLASTP, except that the profile itself, rather than some external scoring function, supplies the scores.
Stage 1 for Problem (1) can be implemented as follows using the stage 1 hardware circuit described above: convert the query profile P to its (w,T)-neighborhood; then hash this neighborhood; and finally, scan the sequence database against the resulting hash table and forward all w-mer hits (more precisely, all two-hits) to Stage 2. This implementation corresponds to Stage 1 of PSI-BLAST.
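By way of illustration, the software pre-processing for Problem (1) might resemble the following sketch (Python; names are hypothetical). Enumerating all |A|^w candidate w-mers is practical for the short word lengths (e.g., w=3) used in protein search.

from itertools import product

def profile_neighborhood(P, alphabet, w, T):
    """P[c][j]: score of character c at column j (0-based); all columns have the
    same length L.  Returns a dict mapping each neighborhood w-mer (a string) to
    the list of query-profile offsets j at which it scores at least T; this dict
    plays the role of the Stage 1 lookup table contents."""
    L = len(next(iter(P.values())))
    table = {}
    for j in range(L - w + 1):
        for chars in product(alphabet, repeat=w):      # all |A|**w candidate w-mers
            if sum(P[c][j + k] for k, c in enumerate(chars)) >= T:
                table.setdefault(''.join(chars), []).append(j)
    return table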
For Problem (1′), the profiles form the database, while the query is a sequence. RPS-BLAST is believed to implement Stage 1 for this problem by constructing neighborhood hash tables for each profile in the database, then sequentially scanning the query against each of these hash tables to generate w-mer hits. The hash tables are precomputed and stored along with the database, then read in during the search. RPS-BLAST may choose to hash multiple profiles' neighborhoods in one table to reduce the total number of tables used.
In Problem (2), both query and database consist of profiles, with a similarity scoring function Z on their columns. Simply creating the neighborhood of the query is insufficient, because one cannot perform a hash lookup on a profile. A solution to this problem is to quantize the columns of the input profiles to create sequences, as follows. First, define a set of k centers, each of which is a valid profile column. Associate with each center Ci a code bi, and define a scoring function Y on codes by Y(bi,bj)=Z(Ci,Cj). Now, map each column of the query and of every database profile to the center that is most similar to it, and replace it by the code for that center. Finally, execute BLASTP Stage 1 to generate hits between the code sequence for the query profile and the code sequences for every database profile, using the scoring function Y, and forward those hits to Stage 2.
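The quantization step might be sketched in software as follows (Python; names are hypothetical); the resulting code sequences and the precomputed table for Y would then be consumed by the otherwise unchanged Stage 1 hardware.

def quantize_profile(profile_columns, centers, Z):
    """Replace each profile column with the index (code) of its most similar
    center, where similarity between columns is measured by Z."""
    return [max(range(len(centers)), key=lambda i: Z(col, centers[i]))
            for col in profile_columns]

def build_code_score_table(centers, Z):
    """Precompute Y(bi, bj) = Z(Ci, Cj) over all code pairs so that Stage 1 and
    the ungapped extension stage can score codes by table lookup."""
    k = len(centers)
    return [[Z(centers[i], centers[j]) for j in range(k)] for i in range(k)]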
A software realization of the above scheme may be found in the PhyloNet program. The authors therein define a set of 15 centers that were chosen to maximize the total similarity between a large database of columns from real biological profiles and the most similar center to each column. Similarity between profile columns and centers was measured using the scoring function Z on columns, which for PhyloNet is the Average Log-Likelihood Ratio (ALLR) score. (See Wang and Stormo, Combining phylogenetic data with co-regulated genes to identify regulatory motifs, Bioinformatics, 2003, 19(18): p. 2369-80, the entire disclosure of which is incorporated herein by reference).
Using the implementation techniques described above, profile-sequence and profile-profile comparison may be implemented on a BLASTP hardware pipeline, essentially configuring the BLASTP pipeline to perform PSI-BLAST and PhyloNet computations.
To implement the extended Stage 1 computation for Problem (1) above, one would extend the software-based hash table construction to implement neighborhood generation for profiles, just as is done in PSI-BLAST. The Stage 1 hardware itself would remain unchanged. For Problem (2) above, one would likely implement the conversion of profiles to code sequences offline, constructing a code database that parallels the profile database. The code database, along with a hash table generated from the encoded query, would be used by the Stage 1 hardware. The only changes required to the Stage 1 hardware would be to change its alphabet size to reflect the number k of distinct codes, rather than the number of characters in the underlying sequence alphabet.
In Stage 2, the extension algorithm currently implemented by the ungapped extension stage may be used unchanged for Problems (1) and (2), with the single exception of the scoring function. In the current Stage 2 score computation block, each pair of aligned amino acids is scored using a lookup into a fixed score matrix. In the proposed extension, this lookup would be replaced by a circuit that evaluates the necessary score function on its inputs. For Problem (1), the inputs are a sequence character c and a profile column P(*,j), and the circuit simply returns P(c,j). For Problem (2), the inputs are two profile columns Ci and Cj, and the circuit implements the scoring function Z.
The database input to the BLASTP hardware pipeline would remain a stream of characters (DNA bases or amino acids) for Problem (1). For Problem (2), there would be two parallel database streams: one with the original profile columns, and one with the corresponding codes. The first stream is used by Stage 2, while the second is used by Stage 1.
While the present invention has been described above in relation to its preferred embodiments, various modifications may be made thereto that still fall within the invention's scope. Such modifications to the invention will be recognizable upon review of the teachings herein. Accordingly, the full scope of the present invention is to be defined solely by the appended claims and their legal equivalents.
This application claims priority to U.S. provisional patent application 60/836,813, filed Aug. 10, 2006, entitled “Method and Apparatus for Protein Sequence Alignment Using FPGA Devices”, the entire disclosure of which is incorporated herein by reference. This application is related to pending U.S. patent application Ser. No. 11/359,285 filed Feb. 22, 2006, entitled “Method and Apparatus for Performing Biosequence Similarity Searching” and published as U.S. Patent Application Publication 2007/0067108, which claims the benefit of both U.S. Provisional Application No. 60/658,418, filed on Mar. 3, 2005 and U.S. Provisional Application No. 60/736,081, filed on Nov. 11, 2005, the entire disclosures of each of which are incorporated herein by reference.