This invention relates generally to processing of genomic data in the field of bio-informatics, and more particularly, to techniques for facilitating correlation analysis of nucleotide loci of one or more data sets comprising genomic data.
Through recent technology advances, systems biology and related experiments have gained wide acceptance in the biological community. Experiments in this field produce extensive amounts of data, and very often this data represents a group or groups of polynucleotides. These polynucleotides can have many attributes, including: DNA or RNA; relative quantities; length(s); nucleotide sequence; and putative function. As a result of the human genome project, another attribute can be added; that is, genomic location.
Tools have been developed to visualize genomic data, using the genomic coordinates as a common thread. One example of this is the genomic browser at UCSC (http://genome.ucsc.edu/). The UCSC genome bio-informatics site acts as a central repository for data related to the human genome project, and provides a web-based visualization tool for viewing the data.
While existing tools for visualization of genomic data are vital to the progress of the biological community, analysis of this data is also critical and has not been nearly as well addressed.
Disclosed herein are a suite of data storage, retrieval, analysis and display processes and tools which focus on the genomic location attribute of data generated by, for example, systems biology experiments. Genomic location is a set of coordinates, comprising a chromosome identification, a nucleotide start position and a nucleotide end position, which represent the point of origin and position of a nucleotide locus or nucleotide sequence. This attribute is significant because it homogenizes polynucleotide data and gives a common attribute across data set instances, regardless of source. This homogenizing attribute allows analysis of large amounts of data from many disparate sources and produces useful and relevant results. More particularly, presented herein is a gene regulation informatics platform aptly suited to support ongoing research in gene regulation and functional genomics. A need exists for innovative tools and resources in this area which can provide customized search, exploration, analysis and hypothesis generation. Such tools must keep pace with the dynamically changing world of gene regulation (ranging from transcriptional regulation, DNA methylation, chromatin remodeling and histone modification to post-transcriptional regulation by RNAs), as well as provide new perspectives and insights.
Thus, provided herein, in one aspect, is a computer-implemented method of processing genomic data, which includes: retrieving a selected nucleotide sequence locus from genomic data stored in a database as a plurality of data subsets of common nucleotide sequence size n, wherein n≧2, and wherein each data subset of common nucleotide sequence size n is separately indexed within the database, the selected nucleotide sequence locus being sized differently from the common nucleotide size n of the plurality of data subsets, and the retrieving including identifying each data subset of common nucleotide size n containing at least a portion of the selected nucleotide sequence locus and retrieving the identified data subsets; processing the retrieved, identified data subsets to remove genomic data mapped to the nucleotide positions outside of the selected nucleotide sequence locus; and outputting the selected nucleotide sequence locus.
In another aspect, a computer-implemented method of processing genomic data is provided, which includes: automatically storing nucleotide sequence data in a segmented sequence table of a database as a plurality of data subsets of common nucleotide size n. The automatically storing includes: initializing a segment buffer of size n, wherein n≧2, and obtaining at least one chromosome file comprising a genomic sequence mapped to a genomic coordinate system; processing a chromosome file of the at least one chromosome file by automatically reading in sequence characters of the chromosome file in series, skipping header characters and line break characters until the segment buffer of size n is full or an end of file character is read; adding the characters in the buffer to the segmented sequence table of the database once the segment buffer of size n is full or an end of file character is read, along with a corresponding chromosome number and start position number, the start position number being an index into the segmented sequence table for that data subset; and resetting content of the segment buffer of size n and setting a current nucleotide position to be a start position of the reset segment buffer, and determining whether an end character of the chromosome file has been reached, and if no, repeating the automatically reading in of sequence characters in series from the chromosome file of the at least one chromosome file into the segment buffer of size n, and if the end character of the chromosome file has been reached, then determining whether another chromosome file of the at least one chromosome file exists for reading into the database, and if so, repeating the processing and the adding of sequence characters from the another chromosome file into the segmented sequence table of the database.
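By way of example, the storage logic summarized above may be sketched as follows in Python. The function and parameter names, the 1-based coordinate origin, and the `store_segment` callback are illustrative assumptions only, and form no part of the claimed method.

```python
def load_chromosome(chromosome, lines, n, store_segment):
    """Read one chromosome's sequence into fixed-size segments of n
    characters, mirroring the storage method described above.

    lines: an iterable of text lines (e.g., an open FASTA-style file);
    store_segment: a callable(chromosome, start_position, sequence)
    that adds one row, with its start position index, to the segmented
    sequence table.
    """
    buffer = []   # segment buffer of size n
    start = 1     # current nucleotide position (assumed 1-based)
    for line in lines:
        if line.startswith(">"):          # skip header characters
            continue
        for ch in line.strip():           # skip line break characters
            buffer.append(ch)
            if len(buffer) == n:          # buffer full: add to the table
                store_segment(chromosome, start, "".join(buffer))
                start += n                # reset buffer; new start position
                buffer = []
    if buffer:                            # end of file: flush remainder
        store_segment(chromosome, start, "".join(buffer))
```

The same routine would be repeated for each chromosome file, with the start position serving as the index into the segmented sequence table.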
Systems and articles of manufacture corresponding to the above-summarized methods are also described and claimed herein.
Further, additional features and advantages are realized through the techniques of the present invention. Other embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed invention.
The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
By way of example,
Presented herein are various techniques for processing and analysis of genomic data in the field of bio-informatics. More particularly, a suite of data retrieval and analysis tools and processes is disclosed which focus on the genomic coordinate attribute of genomic data generated, for example, by systems biology experiments. This homogenizing attribute allows for analysis of large amounts of information from many disparate sources, while producing useful and relevant results.
Relational database array 210 may be implemented using, for example, MySQL, version 5, offered by MySQL AB (http://www.mysql.com/company/). The databases within relational database array 210, which are each contextual in one embodiment to a species and assembly (described further below), may reside within a single instance of the database engine. This instance can reside at any location that is network accessible from the application server. A JDBC connection may be used to link the application server to the database. (JDBC is a Sun Microsystems standard defining how JAVA applications access database data.) As explained further below, a sub-system database manager module may be provided within relational database array 210 to facilitate access to databases from the application server. This provides a single point of access and control over the database processes.
Application server 220 may be implemented using standard J2EE technologies (servlets and JSPs) on Jakarta Tomcat, Version 5, provided by The Apache Software Foundation (http://www.apache.org/). User interaction is session-based. However, it is also possible to store a session state at the server for later retrieval. A “model-view-controller” design may be used to control interaction and data flow within the system. The model is the current set of data and state information for a user session. As described further below, it is made up of locus set objects representing user-loaded and pre-existing data sets, as well as new data sets 221 generated during the session. The model also holds session state information, such as logic parameters and process cardinality. The controllers are the individual system tools which act as independent modules within the system. In this example, these modules or tools include a correlation analysis tool 222, a data retrieval tool 224, a control generation tool 226 and a hypothesis generation tool 228. Each modular tool represents a logic implementation (described below), which can execute individually or in succession.
Client 230 includes a display window illustrating the data sets and session states utilized by the client. As described below, the display window may illustrate a flow diagram which contains: data sets and their annotation; instances of modules used to process the data, along with the parameters used; and relationships among the data sets and processes describing the interactions. Further, the client is presented with a menu of operations which can be performed, such as uploading data, retrieving additional data from a database, or executing an analysis process on the data. There is also a section in the interface for user input which may be required for a given operation. This area may be context sensitive, and present appropriate options for a currently selected operation. As noted, this is in addition to the client interface presenting the user with a view of their data and operations performed. This data and operations information is rendered as a flow diagram, sequentially describing (for example) each data set and the operations that were performed thereon. The client interface is configured such that the user can interact with the diagram to obtain more detailed information about any of the elements, download data sets, or generate an image file for documentation purposes.
In order to utilize the processing and system capabilities disclosed herein, a data file must first contain the genomic coordinate attribute. This attribute often exists by default as part of the result of an experiment. However, the attribute may not be present for certain technologies. For example, certain micro-array results may provide accession numbers only, or require statistical analysis before coordinates can be generated. In these cases, the system can provide a means to transform the data. For example, the database manager can be used to perform simple data look-up, such as mapping accession numbers to loci, or third party tools can be integrated into the system (such as Bioconductor (http://www.biocondutor.org/) or TileMap (http://www.bioinformatics.oxfordjournals.org/cgi/content/abstract/21/18/3629)), or the system could “link out” to a third party website service for data conversion (such as offered by NetAffx (http://www.affymetrix.com/analysis/index.affx) or TileScope (http://www.tilescope/gersteinlab.org/)).
Once a data set contains genomic coordinates, it is then loaded into the system. Additional data sets can be added, for example, from the existing relational database array as desired. The user then chooses which operations are to be performed on which data sets, and resultant data sets are generated. Since all data sets are homogenous, they can be mixed and matched in any operation and in any order. The sequence of operations, data sets generated, parameters used, and all other corresponding information may be displayed in the client's flow diagram. The user can continue to perform analysis until the desired result(s) and data set(s) are generated. An example of a resultant flow diagram is presented in
To summarize, the client may advantageously be designed to be runnable from any web browser, and present a user with their data sets modeled in the above-described workflow diagram, as well as a “tool set” reflecting the executable modules within the system. The application server contains the user's session-based data and process state. Further, the application server may execute instances of analysis modules, manipulating the current data sets and user-defined parameters. As noted above, and described further below, the relational database array houses local instances of species and assembly genomes and associated annotations. The system depicted in
In
Optionally, a mapped control data set may be generated with reference to one or more characteristics of the mapped experimental data set 365, and in the embodiments disclosed herein, with reference to multiple characteristics thereof. Correlation analysis may be automatically performed on the mapped experimental data set with at least one other mapped data set, for example, retrieved from the relational database array 370. The result is a compared data set which is then output 375. In addition to performing correlation analysis on the mapped experimental data set with the at least one other mapped data set, correlation analysis of the mapped control data set (if created) may also be automatically performed with reference to the at least one other mapped data set, again with the results of the comparing process being output.
In the process flow example of
As noted briefly above, data can originate from a variety of sources. Besides the user's own data, another source of data is pre-existing databases. For example, the system disclosed herein may maintain its own database array for: providing a local, fast look-up of common data sets for user retrieval without having to depend on third party sources; and providing specially structured and accessed database tables of additional annotation, which allow a user to rapidly recover certain additional data that is normally slow and resource-intensive to generate.
As illustrated in the database example of
The database schema depicted in
Advantageously, the meta-data tables 505, 510 may be employed to add new data sets to the system on the fly, and have those data sets immediately available. In addition, uniquely structured tables of additional annotation are provided which allow for rapid retrieval of large repositories of information with minimal overhead.
The database manager utilizes database 500, as well as the databases and tables therein, and takes advantage of the schema depicted in
The database manager provides a list of species and assembly combinations that are available, and the user makes the appropriate choice. For the given species/assembly, a list of annotation sets is provided and the user chooses which sets are to be searched. For example, RefSeq 550, CCDS 555, KnownGene 560, and GenBank 565 may be included. If available, the database manager provides a list of sub-types called “locus types” (described further below), from which the user can choose to refine the results. If the selected annotation set represents genes, locus types could be exons, UTRs, etc. If the selected annotation set represents promoters, then the available locus type would be the entire locus. The user's accession numbers can be searched in the database, and all found items transformed into mapped coordinate-based data. Any accession numbers that could not be found would be reported back to the user.
As noted, each species/assembly database thus contains a number of data sets gathered from third party sources such as UCSC or others. When describing this data, the genomic location attribute (chromosomal identifier and nucleotide coordinates) is the focus of the system described herein. However, there are other attributes of significance, such as sequence, which may be part of the analysis. Thus, the database array also provides a means by which this information can quickly and easily accompany the loci in a data set. For example, additional annotation sets may include nucleotide sequences, and phylogenetic conservation (i.e., genome table 530 and PHAST_CONS table 540, respectively). In each case, an attribute of each nucleotide must be maintained, that is, a sequence “letter” (ATCG, etc.), or a conservation score. Each table is structured in a similar manner. In particular, and as described in detail below, the attributes of each nucleotide sequence may be grouped together into equal length short segments, and each segment given its own corresponding chromosomal position. In this case, only the chromosome and first nucleotide (start position) need be tracked. An index is also created based on the chromosomal coordinates, thus giving a unique index. In this way, data that was previously “horizontal” (e.g., an entire chromosome sequence) is transformed into readily indexable, vertical data. This allows extremely fast retrieval of large amounts of information using the processing described below (for example, with reference to
The data model disclosed herein can be better understood with reference to
Attributes:
Functionality:
Attributes:
Functionality:
Locus sorting can be accomplished using the specification for object sorting. The locus object fulfills the specification requirement by implementing a “compare to” function. Simple conditional logic can be used to perform a lexicographic comparison of chromosome values and numeric comparison of start position values.
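By way of example, the “compare to” function may be sketched as follows in Python, standing in for the object sorting specification (e.g., a Comparable-style interface). The class and field names are illustrative assumptions.

```python
from functools import total_ordering

@total_ordering
class Locus:
    """Minimal locus object supporting the sorting described above."""

    def __init__(self, chromosome, start, end):
        self.chromosome = chromosome  # e.g., "chr1"
        self.start = start
        self.end = end

    def compare_to(self, other):
        # Lexicographic comparison of chromosome values first ...
        if self.chromosome != other.chromosome:
            return -1 if self.chromosome < other.chromosome else 1
        # ... then numeric comparison of start position values.
        return (self.start > other.start) - (self.start < other.start)

    # Fulfill Python's sorting specification via the compare_to result.
    def __eq__(self, other):
        return self.compare_to(other) == 0

    def __lt__(self, other):
        return self.compare_to(other) < 0
```

With this in place, a standard sort routine orders loci by chromosome, then by start position.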
In the example of
Returning to
Beginning with the logic of
Continuing with the processing of
Those skilled in the art will note from the above discussion that the logic presented iterates over a provided chromosome file reading one character at a time, with each segment of characters being of a common specific size and being sequentially added to the segmented sequence table within the database. In this example, the common specific size is 255; however, other segment sizes could be employed. The chromosome and coordinate positions of each segment are also tracked and added to the database automatically.
FIGS. 10 & 11A-11C illustrate an exemplary data retrieval process from a genomic sequence table, such as described above. Processing begins with user-inputted parameters, which include the requested chromosome (REQCHROM), the requested start position (REQSTART), and the requested end position (REQEND) 1000. The logic initializes a resultant sequence buffer 1005 and sets a select_start_position variable equal to the requested start position minus 254 1010. The subtraction of 254 nucleotide positions assumes that the nucleotide sequences are stored in segments of 255 nucleotides, as in the example described above.
All records containing at least a portion of the desired sequence are retrieved. In particular, each segment is selected where the chromosome ID equals the requested chromosome (REQCHROM), the segment start is greater than or equal to the set select_start_position, and the segment start is less than the requested end position (REQEND) 1015. The result is a set of one or more selected segments.
Processing next determines whether more records exist from the set of selected segments 1020, and if “no”, processing is complete 1025. Assuming that more records exist, then processing determines whether the current record's start position is less than or equal to the requested start position (REQSTART) 1025. If “yes”, then an offset variable is defined, that is, OFFSETSTART = REQSTART − Current Record Start 1050. This can be seen in
From inquiry 1055, if the current record end is greater than or equal to the requested end position, then processing sets a variable OFFSETEND equal to the OFFSETSTART+(REQEND−REQSTART) 1065. In the example of
From inquiry 1025, if the current record start is greater than or equal to the requested start position, then processing determines whether the current record end is greater than or equal to the requested record end 1030. If “no”, then the current sequence segment is appended to the resultant sequence buffer 1035, and processing determines whether more records exist. If “yes”, then the variable REMAININGLEN is set equal to REQEND − Current Record Start 1040, and the current sequence is appended to the buffer from index 0 to REMAININGLEN 1045.
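By way of example, the retrieval and trimming steps above may be sketched as follows in Python, with an in-memory list of rows standing in for the segmented sequence table. The names, the 1-based inclusive coordinate convention, and the inclusive end-position bound are illustrative assumptions.

```python
SEGMENT_SIZE = 255  # common nucleotide size n from the storage example

def retrieve_sequence(table, req_chrom, req_start, req_end):
    """Rebuild the requested locus from fixed-size segments.

    table: list of (chromosome, start_position, sequence) rows standing
    in for the segmented sequence table, ordered by start position.
    """
    select_start = req_start - (SEGMENT_SIZE - 1)   # REQSTART - 254
    result = []                                     # resultant sequence buffer
    for chrom, seg_start, seq in table:
        # keep only records containing a portion of the desired sequence
        if chrom != req_chrom or seg_start < select_start or seg_start > req_end:
            continue
        # trim nucleotide positions outside the selected locus
        offset_start = max(req_start - seg_start, 0)
        offset_end = min(req_end - seg_start + 1, len(seq))
        result.append(seq[offset_start:offset_end])
    return "".join(result)
```

In a deployed system the row selection would be performed by an indexed database query rather than a Python loop; the trimming arithmetic is the same.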
As discussed above, the logic of
As noted above with reference to the data model discussion of
Beginning with
Assuming that locus object A's chromosome is neither before nor after locus object B's chromosome (meaning that the loci may be on the same chromosome), then processing determines whether the start position of locus object A is equal to the start position of locus object B 1220. If “yes”, then an “Equal” indication is returned 1225. Otherwise, processing determines whether the start position of locus object A is before the start position of locus object B 1230. If “yes”, then a “Before” indication is returned 1235. If “no”, then processing determines whether the start position of locus object A is after the start position of locus object B 1240. If “yes”, then an “After” indication is returned 1245. If “no”, an invalid case has been identified 1250, for example, representative of data error. In using the logic of
When intersection type is selected, correlation is defined by the first nucleotide locus and the second nucleotide locus overlapping with at least the number (n) of nucleotide positions in common, or by the first nucleotide locus and the second nucleotide locus overlapping with at least the percent number (pn) of nucleotide positions in common relative to a smaller one of the first nucleotide locus and the second nucleotide locus. When proximity type is selected, correlation is defined by the first nucleotide locus and the second nucleotide locus being within at least the number (n) of nucleotide positions. Results of the correlation analysis can be output as an indication of “Before”, “After”, or “Correlate”.
By way of example, whether two loci correlate depends in one embodiment on what the user considers a valid correlation condition. For example, if two loci share a common region of only a single nucleotide, do they correlate? Or, does the shared region need to be at least 50 nucleotide positions? The user may instead prefer that a gap of some length be allowed between the two loci, while still maintaining a correlation condition. This flexibility of correlation definition is left to the user via selection of the comparison type and comparison value parameters. In addition, or as an alternative, default comparison type and comparison value parameters could be provided and utilized within the system, for example, in place of a user pre-selecting these parameters.
Note that in a further alternate implementation, comparison type may be defined as either fixed or percent, with fixed indicating a specific number of nucleotide positions that define the correlation criteria, whether intersection or proximity. For example, two loci might be required to share a region of at least 50 nucleotides, or the loci might be required to be within 1,000 nucleotide positions of each other, etc. Percent type, in this example, is a calculated percentage of the length which defines the intersect/proximity criteria. For example, two loci might correlate by at least 50%, with the percent number of nucleotide positions being calculated from the smaller number of the two loci. In this example, the comparison value may refer to either an integer value to accompany the fixed type, or a floating point value to accompany the percent type. In this implementation, it may be assumed that intersection type or proximity type may either be inherent in the options to be selected or fixed within the system for a particular application.
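By way of example, derivation of the correlation criterion from the fixed or percent comparison type may be sketched as follows; the function name and the ceiling rounding choice are illustrative assumptions.

```python
import math

def required_positions(comparison_type, comparison_value, len_a, len_b):
    """Derive the number of nucleotide positions defining the
    intersect/proximity criterion. 'fixed' takes the integer value
    directly; 'percent' applies the floating point value to the
    smaller of the two loci being compared.
    """
    if comparison_type == "fixed":
        return int(comparison_value)
    # percent: calculated from the smaller of the two loci
    return math.ceil(comparison_value * min(len_a, len_b))
```

For instance, a 50% requirement against loci of 100 and 400 nucleotides yields a required overlap of 50 positions.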
In
In this embodiment, the coordinates of locus object A are then adjusted to facilitate the comparison process 1345. This adjustment may include increasing the start coordinate for the first nucleotide locus (i.e., locus object A) by the fixed number (n) of nucleotide positions or a number (x) of nucleotide positions, depending on the comparison type selected. In this example, and assuming intersection type selection, the number (x) is a required number derived from the percent number (pn) applied to the smaller of the two loci being compared. Additionally, the end coordinate for the first nucleotide locus is decreased by the same number (n) of nucleotide positions or number (x) of nucleotide positions to produce an adjusted start position and an adjusted end position for the first nucleotide locus. These adjusted positions are then used in the comparisons to follow. Specifically, processing determines whether the adjusted start position of locus object A is after the locus object B end position 1350. If “yes”, then an “After” indication is returned 1355. Otherwise, processing determines whether the adjusted end position of locus object A is before the start position of locus object B 1360. If “yes”, then a “Before” indication is returned 1365. If “no”, then a “Correlate” indication is returned 1370.
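By way of example, the coordinate-adjustment comparison described above may be sketched as follows for the intersection type. The function name and the exact off-by-one behavior (which depends on the coordinate convention in use) are illustrative assumptions.

```python
def compare_adjusted(a_start, a_end, b_start, b_end, n):
    """Return "Before", "After", or "Correlate" for locus A versus
    locus B on the same chromosome, requiring an overlap of roughly n
    nucleotide positions. Per the adjustment step described above,
    locus A's start coordinate is increased, and its end coordinate
    decreased, by n before the positional comparisons.
    """
    adj_start = a_start + n   # adjusted start position of locus A
    adj_end = a_end - n       # adjusted end position of locus A
    if adj_start > b_end:     # adjusted A lies entirely after B
        return "After"
    if adj_end < b_start:     # adjusted A lies entirely before B
        return "Before"
    return "Correlate"
```

For the percent type, n would be replaced by the number (x) derived from the percent number (pn) applied to the smaller locus, as in the preceding paragraph.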
More particularly,
Continuing with the processing of
If a single nucleotide locus within the region container is not to be wrapped, then from inquiry 1440 processing inquires whether the region contains greater than one child locus 1450. If “no”, then the child locus is added to the new locus set (that is, is removed from the region container) 1455. Otherwise, the new region locus is added to the new locus list 1445.
As noted above, control data set generation is also disclosed herein wherein a control generator tool/process creates matched data sets for facilitating informatic analysis. These matched data sets may include genomic loci and/or genomic sequences. The data is taken from a database of actual genomic data (including sequence and annotation data), as opposed to ad-hoc generation, sequence scrambling or the like. This produces biologically relevant and accurate results which allow for stronger controls. The controls are matched against a user-provided data set via a number of parameters, as illustrated in
In
Note that the species/assembly database parameter, annotation table parameter and locus type parameter allow for user selection of the data population to be employed in generating the control data set. Each of these parameters is essentially a filter which qualifies where the control data is to be randomly selected from. The match length parameter, min/max length parameter, concatemerize sequence parameter and match GC parameter relate to attributes of the experimental data that are to be used to either accept or reject pieces of information being randomly retrieved to create the control data set. If desired, default settings for one or more of the parameters identified in
Control data generation logic, in accordance with one aspect of the invention disclosed herein, employs a database structure and access manager, as described above, which provide the user with a list of available species, assemblies, and annotations to choose from. The database manager, via the control generation tool, retrieves random data samples and filters this data based upon the user-defined parameters noted above. As described, these parameters can be contextual to the annotation (e.g., CDS only, 5′ UTRs, etc.), and they can be matched to the user's data set for greater control accuracy.
As an overview, a first data set is loaded into the control generation tool in the form of a locus set object. This represents the genomic loci or genomic sequences to be controlled. A matched control record is produced for each record in the data set, and each evaluated criteria is contextual to the current user record being examined. First, the user chooses which species/assembly database to be employed. Once selected, the user is presented with a list of annotation tables, and again a selection is made. Examples of annotation tables are: RefSeq, KnownGene, miRNAs, Transcription Factor Binding sites, Methylation, etc.
The user then sets parameters which will act as filters on the data. The first level filtering happens during data retrieval. A random sample is selected from the user-defined table, and only the specified loci are returned. The possible loci are contextual to the annotation table selected. For example, miRNAs would just have a single locus per record, while KnownGene could return whole gene regions, CDS, UTR, etc. This sample size is configurable, and is used to maintain a pool of data, thus minimizing database look-ups. The control generation tool then uses this pool of data and applies the second set of filtering criteria.
The logic branches, depending upon whether the user requested sequences, or loci only. For the latter, the logic iterates over the loci in the pool and attempts to apply any length criteria (matching length, minimum length, maximum length, etc.). If the locus, or a subset thereof, can meet the criteria, it is saved to the control set and the next user record is examined. Otherwise, it is discarded.
If the user-requested control is for a genomic sequence, then the actual nucleotide sequence is retrieved for the loci in the pool. The user can decide whether the control sequences should originate from a single concatemerized sequence. This avoids creating any “center selection” bias when randomly selecting regions from within a given locus. If this is the case, then an appropriate length sequence is selected with a random starting point, continuing across one or more sequences as needed to complete the length. If concatemerization is not required, then the logic iterates over the loci in the pool, and attempts to apply any length criteria (as described above). Once an appropriate length sequence is found, it is checked for matching GC content. GC content can be set to match a given percentage threshold from ±100% (GC does not need to be matched) to ±5% (for example). If the locus matches required GC content, it is saved to the control set, and the next user record is examined. Otherwise, it is discarded.
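By way of example, the GC content check may be sketched as follows; the function name and the fractional tolerance representation (e.g., 0.05 for ±5%, 1.0 for the ±100% "no matching required" setting) are illustrative assumptions.

```python
def gc_matches(control_seq, target_seq, tolerance):
    """Check whether a candidate control sequence matches the GC
    content of the experimental sequence within +/- tolerance,
    expressed as a fraction. A tolerance of 1.0 means GC content need
    not be matched, mirroring the +/-100% setting described above.
    """
    def gc_fraction(seq):
        seq = seq.upper()
        return (seq.count("G") + seq.count("C")) / len(seq)

    return abs(gc_fraction(control_seq) - gc_fraction(target_seq)) <= tolerance
```

A candidate failing this check would be discarded and the next random record drawn from the pool.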
Once all records in the user-defined table set have a matched control, processing exits and the control set is output, for example, to the user.
If concatemerize sequence is not employed, then a next record is examined 1760, and processing determines whether a min/max/match length designation can be applied to the record 1765. If “no”, then the record is discarded 1750. Otherwise, the record is examined for a matching GC content 1745, as described above.
After adding a locus or sequence to the control set, processing determines whether the control set is complete 1770. If “yes”, then the control set is returned to the user or system, for example, for use in correlation analysis, as described herein. If the control set is not complete, then processing determines whether more records exist within the pool 1720. If processing is not to apply sequence parameters to the pool of records, then processing examines the next record 1780 and determines whether the record meets the minimum/maximum/match length designation set by the user 1785. If “no”, then the record is discarded 1750, and if “yes”, the record is added to the control data set. The result is a control data set wherein loci within the data set correlate to loci within the initially-loaded data set to be controlled. This intelligent selection of loci results in a control data set which is matched closely to the user-provided data set and thus produces more biologically relevant and accurate results when using the control data set, for example, for comparison purposes in correlation analysis with a third data set.
The correlation analysis tool of the system performs correlation analysis for sets of genomic loci. It performs comparisons among coordinate-based data in a high throughput manner, identifying shared or common regions. The tool allows for any number of sets of loci to be compared, with each set containing any number of loci, which may overlap within a set. A variable number of nucleotides can be defined for each minimum required correlation, or maximum allowed gap between loci. This minimum overlap or maximum gap can be set either as a fixed number, or a percentage, as described above. Also, any set can be defined as a negative set, meaning it should not be in common with the others. Further, a “bridging” criterion is allowed, where a locus can span two other loci and bridge the intervening region. The correlation analysis tool is rooted in a simple set intersection analysis. However, the data and compare conditions hold additional complexity. Each group of loci is a set which can intersect with other sets. But each set member (i.e., each nucleotide locus) is not a discrete unit which can be defined as a member of multiple sets. In fact, each locus is itself a set (of nucleotides) and the nucleotides act as the discrete unit of comparison. Thus, the requirement becomes an analysis of sets of sets.
There are caveats within the conditional comparisons as well. For instance, multiple loci within the same set are able to intersect with each other (e.g., isoforms of a gene). Also, when comparing loci, the determination of a true/false intersecting condition is variable, given the user-defined parameters. This means that loci can share any number of nucleotides, or even none at all (allowing for a proximity analysis), and still be considered a true condition. Further, a bridging criterion can be considered, which forces a simultaneous comparison among elements of three or more sets, allowing for more complex truth conditions. To maximize efficiency, the correlation analysis tool applies an ordered set and sweep concept to move through the data. (The ordered set and sweep is conceptually similar to the Bentley–Ottmann algorithm for finding the set of intersection points for a collection of line segments in two-dimensional space.) The correlation analysis tool orders loci within each input set based on their genomic coordinates. This allows the tool to organize each data set in a virtual linear model, and then “sweep” across them, minimizing the number of comparative permutations that must be generated. Due to the possibility of intersecting loci within a single set, there is a minimum number of iterative permutations that must be computed. However, by utilizing the ordered nature of the data and hierarchical data structures, these permutations are isolated to many small scopes, and the resource requirement is minimal.
In LCA (locus correlation analysis), the loci are addressed in a linear order within their context, and directionality is implicit within the coordinates. It does not matter whether the biological directionality of the loci is 5′→3′, p→q, etc.; LCA does not need to make any assumptions. However, for reference purposes, the end of the context with the lowest-numbered coordinates is referred to as the “low end”, and the end of the context with the highest-numbered coordinates is referred to as the “high end”. Thus, the locus closest to the low end is referred to as the “low-end locus”. The next locus in order is the “next low-end locus”, etc. Input data sets can be defined in two ways: they “should intersect” or they “should not intersect”. Sets that should intersect are referred to herein as “positive sets”, and sets that should not intersect are referred to herein as “negative sets”.
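For reference, the “low end” ordering may be illustrated with a minimal sketch (coordinates only; the chromosome context is omitted for brevity, and the snippet is illustrative rather than part of the disclosed system):

```python
# Loci sorted by genomic coordinate: the locus nearest the low end of the
# context comes first, regardless of biological directionality.
loci = [(500, 600), (100, 250), (300, 450)]
ordered = sorted(loci)            # lowest start coordinate first
low_end_locus = ordered[0]        # the "low-end locus"
next_low_end_locus = ordered[1]   # the "next low-end locus"
```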
Each locus set given to LCA is prepared before the comparison processing begins. First, the locus sets are copied, in order to preserve the integrity of the original sets. Then they are ordered, as described above. Lastly, the locus sets are compressed, again as described above. This is done because the sweeping process could fault in certain instances when the data sets are not linear (i.e., multiple loci overlap within the same set). For the compression process, the “Wrap All” parameter is used to tell the locus set to place all locus objects into a region container, as described above. This gives the LCA logic a consistent data structure to work with.
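The compression step may be sketched as follows; the `(start, end, members)` region representation is hypothetical and stands in for the region-container objects described above:

```python
# Illustrative compression: after sorting, overlapping loci within one set are
# wrapped into region containers, so the sweep sees a strictly linear sequence
# of non-overlapping regions, each carrying its member loci.
def compress(loci):
    """loci: list of inclusive (start, end) pairs; returns a list of regions,
    each a (start, end, [member loci]) tuple."""
    regions = []
    for locus in sorted(loci):
        if regions and locus[0] <= regions[-1][1]:   # overlaps current region
            start, end, members = regions[-1]
            regions[-1] = (start, max(end, locus[1]), members + [locus])
        else:                                        # wrap locus in a new region
            regions.append((locus[0], locus[1], [locus]))
    return regions
```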
The logic maintains a reference to one region from each set. The referenced regions are determined in an iterative fashion by virtually sweeping along the genomic data and finding which set has the next low-end region. Once it is found, that set's reference is changed to the newly discovered region, the referenced regions from the sets are evaluated for intersection, and the sweep continues.
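This reference-advancing sweep may be sketched as follows; the function name and callback interface are illustrative only:

```python
# Illustrative sweep loop: hold one current region per set, evaluate the
# currently referenced regions together, then advance whichever set holds the
# next low-end region, until no set can advance further.
def sweep(region_sets, evaluate):
    """region_sets: list of region lists, each sorted by start coordinate;
    evaluate: callback invoked with the currently referenced regions."""
    if any(len(s) == 0 for s in region_sets):
        return                                   # nothing to compare
    refs = [0] * len(region_sets)                # one reference per set
    while True:
        evaluate([s[r] for r, s in zip(refs, region_sets)])
        # which sets still have a next region to sweep to?
        candidates = [i for i in range(len(region_sets))
                      if refs[i] + 1 < len(region_sets[i])]
        if not candidates:
            break
        # advance the set whose next region has the lowest start (next low end)
        nxt = min(candidates, key=lambda i: region_sets[i][refs[i] + 1][0])
        refs[nxt] += 1
```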
For example, in
Each time regions are evaluated for intersection, the logic accounts for the user defined parameters of minimum overlap or maximum gap, and bridging. As stated previously, bridging allows for a true condition (i.e., a common region) among 3 or more loci. For example, in
Each time referenced regions are determined to be positive for intersection, the logic branches. When this occurs, all permutations for the individual loci contained within the regions are examined. Each permutation of loci is evaluated for intersection, using the same criteria as the region comparisons. If a positive condition is found, then the negative data set condition is checked.
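The branch into locus-level permutations may be sketched as follows; the region representation and callback are hypothetical, and the pairwise `correlate` test stands in for the user-parameterized criteria described above:

```python
# Illustrative branch: when referenced regions intersect, every combination of
# individual loci across the regions' containers is evaluated with the same
# overlap criteria used for the region comparison.
from itertools import product

def correlated_locus_groups(regions, correlate):
    """regions: one (start, end, members) region per positive set;
    correlate: pairwise test, e.g. a minimum-overlap check.
    Returns locus combinations in which every pair correlates."""
    hits = []
    for combo in product(*(members for _, _, members in regions)):
        if all(correlate(combo[i], combo[j])
               for i in range(len(combo)) for j in range(i + 1, len(combo))):
            hits.append(combo)   # positive condition: check negatives next
    return hits
```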
The negative locus sets are treated similarly to the positive data sets, except that they are aggregated into a single locus set to reduce the conditional load. The negative locus set maintains a reference which keeps track of the current scope (genomic coordinates) of the positive regions. This allows ‘checks’ against negative regions to be held to a minimum, since only negative regions within the current scope need to be checked. When positive intersecting regions are found, references to the negative regions are evaluated. If the currently referenced negative region is “before” the first positive region, then the reference is moved up to the next negative region. This process repeats until the current negative region is no longer before the first positive region (and thus is no longer out of scope). After the negative region reference has been updated, the permutations of loci within the positive regions are checked. When an intersection of loci is found, processing compares these loci to the negative regions. The comparison starts at the currently referenced negative region (which is now in scope), and continues to compare against consecutive negative regions, but only until the negative regions are “after” the last positive region (and thus out of scope).
As the iteration proceeds, each group of loci that has passed the criteria is processed as a positive result. This includes:
Any of the above result types can be requested from the LCA logic after a single iteration of the processing. Each presents the results in a different manner, and which type the user chooses depends on the question(s) being asked.
Those skilled in the art should note that the displays of
Referring to
Continuing with the logic of
Referring to
Returning to
If the loci correlate, then from inquiry 2065, processing compares the correlated loci with the aggregate negative data set, or more particularly, with the negative loci therein 2080 and determines whether the correlated positive loci conflict with one or more negative loci within the aggregate negative data set 2085 using, for example, the logic of
Referring to
If the current negative region is not before the positive region, then processing determines whether the current negative region is after the positive region 2435. If “yes”, then processing is complete, and a false indication is returned, meaning that there is no overlap with a negative region of the aggregate negative data set 2440.
If the current negative region is not before or after the positive correlated region, processing compares the current negative region to all loci in the positive correlated region 2445, and determines whether any positive loci overlap with the current negative region 2450. If “yes”, then a true indication is returned, meaning that the correlated loci are not to be processed 2455. If “no”, then processing loops back to determine whether more negative regions exist within the aggregate negative data set 2405.
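The before/after/overlap logic described above may be sketched as follows; the function name and the flat `(start, end)` representation are illustrative, not the disclosed data structures:

```python
# Illustrative negative-region check: walk the aggregated negative regions,
# skipping those "before" the positive scope, stopping at the first one
# "after" it, and returning True (veto the correlated loci) on any overlap.
def conflicts_with_negative(positive_loci, negative_regions, ref=0):
    """positive_loci: correlated inclusive (start, end) loci;
    negative_regions: (start, end) list sorted by start coordinate;
    ref: index of the currently referenced negative region."""
    scope_start = min(s for s, _ in positive_loci)
    scope_end = max(e for _, e in positive_loci)
    i = ref
    while i < len(negative_regions):
        n_start, n_end = negative_regions[i]
        if n_end < scope_start:          # before the scope: advance the reference
            i += 1
            continue
        if n_start > scope_end:          # after the scope: no overlap possible
            return False
        if any(min(n_end, e) >= max(n_start, s) for s, e in positive_loci):
            return True                  # a positive locus overlaps a negative region
        i += 1
    return False
```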
Returning to
The detailed description presented above is discussed in terms of program procedures executed on a computer, a network or a cluster of computers. These procedural descriptions and representations are used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. They may be implemented in hardware or software, or a combination of the two.
A procedure is here, and generally, conceived to be a sequence of steps leading to a desired result. These steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It proves convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, objects, attributes or the like. It should be noted, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
Further, the manipulations performed are often referred to in terms, such as adding or comparing, which are commonly associated with mental operations performed by a human operator. No such capability of a human operator is necessary, or desirable in most cases, in any of the operations described herein which form part of the present invention; the operations are automatic machine operations. Useful machines for performing the operations of the present invention include general purpose digital computers or similar devices.
Each step of the methods described may be executed on any general computer, such as a server, mainframe computer, personal computer or the like and pursuant to one or more, or a part of one or more, program modules or objects generated from any programming language, such as C++, Java, Fortran or the like. And still further, each step, or a file or object or the like implementing each step, may be executed by special purpose hardware or a circuit module designed for that purpose.
Aspects of the invention are preferably implemented in a high level procedural or object-oriented programming language to communicate with a computer. However, the inventive aspects can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language.
The invention may be implemented as a mechanism or a computer program product comprising a recording medium such as illustrated in
The invention may also be implemented in a system. A system may comprise a computer that includes a processor and a memory device and optionally, a storage device, an output device such as a video display and/or an input device such as a keyboard or computer mouse. Moreover, a system may comprise an interconnected network of computers. Computers may equally be in stand-alone form (such as the traditional desktop personal computer) or integrated into another environment (such as a partially clustered computing environment). The system may be specially constructed for the required purposes to perform, for example, the method steps of the invention or it may comprise one or more general purpose computers as selectively activated or reconfigured by a computer program in accordance with the teachings herein stored in the computer(s). The procedures presented herein are not inherently related to a particular computing environment. The required structure for a variety of these systems will appear from the description given.
Further, one or more aspects of the present invention can be provided, offered, deployed, managed, serviced, etc., by a service provider. For instance, the service provider can create, maintain, support, etc., computer code, a relational database array, and/or a computer infrastructure that performs one or more aspects of the present invention for one or more customers. In return, the service provider can receive payment from the customer under a subscription and/or fee arrangement, as examples. Additionally, or alternatively, the service provider can receive payment from the sale of advertising content to one or more third parties.
In one aspect of the present invention, an application can be deployed for performing one or more aspects of the invention. As one example, the deploying of the application comprises adapting computer infrastructure operable to perform one or more aspects of the present invention.
As a further aspect of the present invention, a computing infrastructure can be deployed comprising integrating computer-readable program code into a computing system, in which the code, in combination with the computing system, is capable of performing one or more aspects of the present invention.
As yet a further aspect of the present invention, a process for integrating computer infrastructure, comprising integrating computer-readable program code into a computer system may be provided. The computer system comprises a computer-usable medium, in which the computer-usable medium comprises one or more aspects of the present invention. The code, in combination with the computer system, is capable of performing one or more aspects of the present invention.
The capabilities of one or more aspects of the present invention can be implemented in software, firmware, hardware or some combination thereof. At least one program storage device readable by a machine embodying at least one program of instructions executable by the machine to perform the capabilities of the present invention can be provided.
The flow diagrams depicted herein are just examples. There may be many variations to these diagrams or the steps (or operations) described therein without departing from the spirit of the invention. For instance, the steps may be performed in a differing order, or steps may be added, deleted or modified. All of these variations are considered a part of the claimed invention.
Although preferred embodiments have been depicted and described in detail herein, it will be apparent to those skilled in the relevant art that various modifications, additions, substitutions and the like can be made without departing from the spirit of the invention and these are therefore considered to be within the scope of the invention as defined in the following claims.
This application claims the benefit of U.S. Provisional Application No. 60/917,155, filed May 10, 2007, entitled “System and Method for Data Retrieval and Analysis”, and U.S. Provisional Application No. 60/975,979, filed Sep. 28, 2007, entitled “Genomic Data Processing Utilizing Correlation Analysis of Nucleotide Loci”, both of which are hereby incorporated herein by reference in their entirety. In addition, this application contains subject matter which is related to the subject matter of the following applications, each of which is assigned to the same assignee as this application, and filed on the same day as this application. Each of the below-listed applications is hereby incorporated herein by reference in its entirety:
“Genomic Data Processing Utilizing Correlation Analysis of Nucleotide Loci”, Tenenbaum et al., Ser. No. ______, (Docket No. 0794.087A), filed herewith;
“Genomic Data Processing Utilizing Correlation Analysis of Nucleotide Loci of Multiple Data Sets”, Tenenbaum et al., Ser. No. ______, (Docket No. 0794.087B), filed herewith; and
“Non-Random Control Data Set Generation for Facilitating Genomic Data Processing”, Tenenbaum et al., Ser. No. ______, (Docket No. 0794.087D), filed herewith.
This invention was made, in part, under Grant Number 1043750 from the National Human Genome Research Institute/National Institutes of Health. Accordingly, the United States Government may have certain rights in the invention.
Number | Date | Country
--- | --- | ---
60917155 | May 2007 | US
60975979 | Sep 2007 | US