The invention relates generally to the field of relational databases and specifically to the field of optimizing queries on databases.
Most query optimizers for relational database management systems (RDBMSs) rely on a cost model to choose the best possible query execution plan for a given query. Thus, the quality of the query execution plan depends on the accuracy of cost estimates. Cost estimates, in turn, crucially depend on cardinality estimates of the various sub-plans (intermediate results) generated during optimization. Traditionally, query optimizers use statistics built over base tables for cardinality estimates and assume independence while propagating these base-table statistics through the query plans. However, it is widely recognized that such cardinality estimates can be off by orders of magnitude. Therefore, the traditional propagation of statistics, which assumes independence between attributes, can lead the query optimizer to choose significantly suboptimal execution plans.
The query optimizer is the component in a database system that transforms a parsed representation of an SQL query into an efficient execution plan for evaluating it. Optimizers examine a large number of possible query plans and choose the best one in a cost-based manner. For each incoming query, the optimizer iteratively explores the set of candidate execution plans using a rule-based enumeration engine. After each candidate plan or sub-plan is generated, the optimizer estimates its execution cost, which in turn refines the exploration of further candidate plans. Once all “interesting” plans are explored, the most efficient one is extracted and passed on to the execution engine.
The cost estimation module is critical in the optimization process, since the quality of plans produced by the optimizer is highly correlated to the accuracy of the cost estimation routines. The cost estimate for a sub-plan, in turn, depends on cardinality estimations of its sub-plans. Traditionally, query optimizers use statistics (mainly histograms) that are built over base tables to estimate cardinalities. Histograms are accurate for estimating cardinalities of simple queries, such as range queries. For complex query plans, however, the optimizer estimates cardinalities by “propagating” base-table histograms through the plan and relying on some simplifying assumptions (notably the independence assumption between attributes).
The sub-plan shown in
When the cardinality estimation technique illustrated in
The containment assumption is relied upon when estimating the cardinality of joins using histograms. The buckets of each histogram are aligned, a per-bucket estimation takes place, and the partial results are then aggregated. The containment assumption dictates that, for each pair of aligned buckets, each group of distinct-valued tuples in the bucket with the smaller number of distinct values joins with some group of tuples in the other bucket. For instance, if the number of distinct values in bucket bR is 10 and the number of distinct values in bucket bS is 15, the containment assumption states that each of the 10 groups of distinct-valued tuples in bR joins with one of the 15 groups of distinct-valued tuples in bS.
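As a worked illustration of this per-bucket estimation, consider the following sketch; it assumes the two histograms' buckets are already aligned, that each bucket is a (frequency, distinct-count) pair, and that frequencies are uniform within a bucket. The function and the frequency values are illustrative assumptions, not part of the invention:

    # Illustrative sketch: per-bucket join-size estimation under the
    # containment assumption. Each histogram is a list of aligned buckets,
    # each bucket a (frequency, distinct_count) pair; frequencies are
    # assumed uniform within a bucket.
    def join_cardinality(hist_r, hist_s):
        total = 0.0
        for (f_r, dv_r), (f_s, dv_s) in zip(hist_r, hist_s):
            # Containment: each of the min(dv_r, dv_s) value groups on the
            # smaller side joins with some group on the other side.
            groups = min(dv_r, dv_s)
            total += groups * (f_r / dv_r) * (f_s / dv_s)
        return total

    # The example above, with made-up frequencies of 100 and 150 tuples:
    # 10 matching groups, each contributing (100/10)*(150/15) = 100 tuples.
    print(join_cardinality([(100, 10)], [(150, 15)]))   # 1000.0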
Random sampling is a standard technique for constructing approximate base-table histograms. Usually the approximated histograms are of good quality with respect to the frequency distribution. However, estimating the number of distinct values inside buckets using sampling is difficult. The sampling assumption states that the number of distinct values in each bucket predicted by sampling is a good estimator of the actual number of distinct values.
Often one or more of these simplifying assumptions do not reflect real data values and distributions. For instance, many attributes are actually correlated, so the independence assumption is often inaccurate. The optimizer might therefore rely on incorrect cardinality information and consequently choose low-quality execution plans. More complex queries (e.g., n-way joins) only exacerbate this problem, since estimation errors propagate through the plans.
Statistics can be constructed on the results of join queries without executing the join by scanning one of the tables in the join and, for each scanned tuple, determining an approximate number of tuples in the other table that have a matching join attribute. A number of copies of the tuple, equal to this multiplicity value, are emitted into a stream of tuples, which is sampled on the fly to construct a statistical representation of the join result. Statistics can be constructed on the results of more complex queries by performing the method recursively, by applying filter conditions during the scan operation, and by accessing multidimensional histograms to determine multiplicity values over joint distributions. To create multiple statistical representations for different user queries on the same database tables, the scans can be shared and intermediate results of scans stored temporarily for later access. An optimal order of table scans can be computed using adaptations of shortest-common-supersequence techniques.
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which:
Exemplary Operating Environment
With reference to
A number of program modules may be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24 or RAM 25, including an operating system 35, one or more application programs 36, other program modules 37, and program data 38. A database system 55 may also be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24 or RAM 25. A user may enter commands and information into personal computer 20 through input devices such as a keyboard 40 and pointing device 42. Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to processing unit 21 through a serial port interface 46 that is coupled to system bus 23, but may be connected by other interfaces, such as a parallel port, game port or a universal serial bus (USB). A monitor 47 or other type of display device is also connected to system bus 23 via an interface, such as a video adapter 48. In addition to the monitor, personal computers typically include other peripheral output devices such as speakers and printers.
Personal computer 20 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 49. Remote computer 49 may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to personal computer 20, although only a memory storage device 50 has been illustrated in
When used in a LAN networking environment, personal computer 20 is connected to local network 51 through a network interface or adapter 53. When used in a WAN networking environment, personal computer 20 typically includes a modem 54 or other means for establishing communication over wide area network 52, such as the Internet. Modem 54, which may be internal or external, is connected to system bus 23 via serial port interface 46. In a networked environment, program modules depicted relative to personal computer 20, or portions thereof, may be stored in remote memory storage device 50. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
Cost Estimation Using Cardinality Estimates Based on Statistics on Intermediate Tables
SITs (statistics on intermediate tables) are statistics built over the results of query expressions, and their purpose is to eliminate error propagation through query plan operators. As a simple example, consider again
For the purposes of this description, a SIT is defined as follows: Let R be a table, A an attribute of R, and Q an SQL query that contains R.A in its SELECT clause. SIT(R.A|Q) is the statistic for attribute A on the result of executing query expression Q. Q is called the generating query expression of SIT(R.A|Q). This definition can be extended to multi-attribute statistics. Furthermore, the definition can be used as the basis for extending the CREATE STATISTICS statement in SQL so that, instead of specifying the table name of the query, a more general query expression, such as a table-valued expression, can be used.
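To make the definition concrete, a minimal sketch follows; the class, its field names, and the rendered extended CREATE STATISTICS string are illustrative assumptions, not syntax mandated by the invention:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SIT:
        table: str             # R
        attribute: str         # A, appearing in Q's SELECT clause
        generating_query: str  # Q, the generating query expression

        def create_statement(self) -> str:
            # Hypothetical extension of CREATE STATISTICS in which a
            # table-valued expression replaces the base-table name.
            return (f"CREATE STATISTICS ON ({self.generating_query}) "
                    f"({self.table}.{self.attribute})")

    s = SIT("S", "a", "SELECT S.a FROM R JOIN S ON R.x = S.y")
    print(s.create_statement())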
In U.S. patent application Ser. No. 10/191,822, incorporated herein by reference in its entirety, the concept of SITs, statistics that are built over intermediate nodes in query execution plans, was introduced. A particular method of adapting a prior art query optimizer to access and utilize a preexisting set of SITs for cost estimation was described in detail in this application, which method is summarized here briefly as background information.
Referring to
In general, the use of SITs is enabled by implementing a wrapper (shown in phantom in
According to the embodiment described in application Ser. No. 10/191,822, the transformed plan that is passed to the cardinality estimation module exploits applicable SITs to enable a potentially more accurate cardinality estimate. The original cardinality estimation module requires little or no modification to accept the transformed plan as input. The transformation of plans is performed efficiently, which is important because the transformation will be used for several sub-plans for a single query optimization.
An Algorithm for Creating SITs
Although SITs can be defined using arbitrary generating queries, a technique for generating SITs will be described herein in conjunction with join generating queries, and in particular SPJ generating queries yielding SITs of the form SIT(Rk.a|R1 ⋈ . . . ⋈ Rn). (Possible adaptations of the technique to enable it to handle more general queries are described later.) For this class of generating query expressions, techniques inspired by work done in approximate query processing can be leveraged to create SITs. In many cases approximate statistical distributions provide sufficient accuracy to enable efficient generation of SITs that provide a large improvement in cardinality estimation without the need to execute the underlying intermediate query expression. The technique for building approximate SITs for binary-join generating queries described herein, called Sweep, does not rely on the independence assumption, but does rely on the containment and sampling assumptions.
To create SIT(R.a|Q), Sweep attempts to efficiently generate a sample of πR.a(Q) without actually executing Q, and then use existing techniques for building histograms over this intermediate result.
In step 2 in
To estimate the multiplicity of y in R, two scenarios are considered. The numbers of distinct values in buckets bR.y and bS.y are denoted dvR.y and dvS.y, respectively. In the case that dvS.y≦dvR.y, i.e., the number of distinct values in hR is at least that in hS, under the containment assumption it can be guaranteed that value y, which belongs to one of the dvS.y groups in bS.y, matches some of the dvR.y groups in bR.y. Since a uniform distribution is assumed within buckets, the multiplicity for y in this situation would be fR.y/dvR.y, where fR.y is the frequency of bucket bR.y. However, if dvS.y>dvR.y, it can no longer be guaranteed that value y joins with some of the dvR.y groups in bR.y. If the tuples that verify the join are distributed uniformly, the probability that y belongs to one of the dvR.y (out of dvS.y) groups in bS.y that match some group in bR.y is dvR.y/dvS.y. In that case the multiplicity would be fR.y/dvR.y; otherwise (y does not match any value in R), the multiplicity would be 0. In conclusion, when dvS.y>dvR.y, the expected multiplicity of y in R is (fR.y/dvR.y)·(dvR.y/dvS.y)+0·(1−dvR.y/dvS.y) = fR.y/dvS.y.
Putting both results together, the expected multiplicity of y from S in R is given by fR.y/max(dvR.y, dvS.y). Since the bucket that contains a given tuple can be located efficiently in main memory, this histogram-based algorithm is extremely efficient.
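The estimation step and the construction of the approximate intermediate sample can be sketched as follows; this is a minimal sketch in which the histogram encoding, the bucket_for helper, and the reservoir-sampling details are illustrative assumptions, while the multiplicity formula is the one derived above:

    import random

    # Illustrative encoding: a histogram is a list of (lo, hi, freq, distinct)
    # buckets over the join attribute; bucket_for is a stand-in helper.
    def bucket_for(hist, v):
        for lo, hi, f, dv in hist:
            if lo <= v < hi:
                return f, dv
        raise ValueError("value outside histogram range")

    def expected_multiplicity(y, hist_r, hist_s):
        # f_R.y / max(dv_R.y, dv_S.y), as derived above.
        f_r, dv_r = bucket_for(hist_r, y)
        _, dv_s = bucket_for(hist_s, y)
        return f_r / max(dv_r, dv_s)

    def sweep_sample(scan_s, hist_r, hist_s, k):
        # Reservoir-sample the conceptual stream in which each scanned
        # (y, a) tuple of S appears round(multiplicity) times; a histogram
        # built over the returned sample approximates SIT(S.a | R join S).
        reservoir, seen = [], 0
        for y, a in scan_s:                   # one sequential scan of S
            for _ in range(round(expected_multiplicity(y, hist_r, hist_s))):
                seen += 1
                if len(reservoir) < k:
                    reservoir.append(a)
                elif random.randrange(seen) < k:
                    reservoir[random.randrange(k)] = a
        return reservoir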
The Sweep algorithm described above does not rely on an independence assumption to estimate cardinalities of SPJ queries; however, it does rely on the containment assumption and on sampling for building histograms. The Sweep algorithm can be modified so that it does not rely on these assumptions, obtaining more accurate SITs at the expense of introducing execution overhead to the algorithm. For example, if an index over attribute R.x is available, repeated index lookups can be issued to find exact multiplicity values rather than relying on the containment assumption to estimate multiplicity values in step 2 of
Sweep, as already discussed in conjunction with binary-join generating queries, can be extended to handle acyclic-join queries. A given query is an acyclic-join query if its corresponding join-graph is acyclic. This description will discuss a restricted class of acyclic-join queries in which, for every pair of tables t1 and t2 in the generating query q, there is at most one predicate in q joining t1 and t2. In the more general case, e.g., R ⋈R.w=S.x∧R.y=S.x S, the technique can be extended using multidimensional histograms.
A linear-join query can be expressed as R1 ⋈ . . . ⋈ Rn, where the i-th join (1≦i≦n−1) connects tables Ri and Ri+1, i.e., the corresponding join-graph is a chain. Based on the description of Sweep above, to approximate SIT(S.a|R ⋈x=y S) the following operations must be performed: (i) a sequential (or index) scan covering attributes {S.y, S.a} in table S, and (ii) histogram lookups over attributes R.x and S.y. To approximate a SIT over a linear-join query, the query joins are left-associated and the original SIT is unfolded into a set of single-join SITs as illustrated in
To extend Sweep to handle more general kinds of acyclic-join generating queries, an acyclic join-graph is converted into a join-tree that has at the root the table holding the SIT's attribute.
For an arbitrary acyclic-join generating query, the join-tree is traversed in post-order. At each leaf node a base-table histogram is built for the attribute that participates in the join with the parent table in the join-tree. For each internal node, the children's SITs produced earlier are used to compute the corresponding SIT for the attribute that participates in the join predicate with the parent (or the final attribute for the root node). As an example, for the SIT depicted in
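A minimal sketch of this post-order construction follows; the JoinNode class and the two callbacks are hypothetical names standing in for the one-join Sweep primitive and the base-table histogram builder:

    # Hypothetical join-tree node: `table` is the table at this node and
    # `join_attr` is the attribute joining it to its parent (at the root,
    # the SIT's attribute itself).
    class JoinNode:
        def __init__(self, table, join_attr, children=()):
            self.table = table
            self.join_attr = join_attr
            self.children = list(children)

    def build_sit(node, sweep_one, base_histogram):
        if not node.children:
            # Leaf: only a base-table histogram is needed.
            return base_histogram(node.table, node.join_attr)
        # Post-order: statistics for the children are produced first,
        child_stats = [build_sit(c, sweep_one, base_histogram)
                       for c in node.children]
        # then one Sweep pass over this node's table combines them into
        # the SIT on the attribute joining this node to its parent.
        return sweep_one(node.table, node.join_attr, child_stats)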
As described, Sweep can be used to create SITs with arbitrary acyclic-join generating queries. Sweep requires a sequential scan over each involved table (except for the leaf tables of the join-tree, which contribute only base-table histograms) and some additional amount of extra processing to build intermediate SITs. Sweep can be extended to cover more complex scenarios by materializing some portions of the generating query first and then applying the Sweep algorithm to the materialized portions.
Extensions to More General Queries
Sweep can be extended to handle generating queries with selection predicates. Given SIT(S.a|σS.b<5(R ⋈ S)), since the selection predicate S.b<5 is on table S, tuples that satisfy S.b<5 can be filtered during the scan of table S. If a clustered index over S.b is available, it can be used to improve execution time. In general, this filtering approach can be used for more complex queries as well. To obtain SIT(R.a|σS.b=2∧R.c<5((R ⋈w=x S) ⋈y=z T)), a sequential scan over table R (keeping tuples that satisfy R.c<5) is performed, and SIT(R.w|σR.c<5(R)) and SIT(S.x|σS.b=2(S ⋈y=z T)) are manipulated. The former SIT can be created by using an additional scan (or a sample) over table R. The latter is recursively obtained with a sequential scan over S (keeping tuples satisfying S.b=2) and manipulating SIT(S.y|σS.b=2(S)) and the base-table histogram H(T.z). If the filter predicate is defined over a table that is not scanned during Sweep, the corresponding SIT may be obtained by exploiting multidimensional histograms. For example, to obtain SIT(S.a|σR.b<10(R ⋈x=y S)), a sequential scan is performed over S, and multiplicity values are determined using a histogram over S.y and a two-dimensional histogram over {R.b, R.x}.
Sweep can also be extended to generate multidimensional SITs in which all the attributes are over the same base table. For instance, to create SIT(S.a,S.b|R ⋈x=y S), a scan over the joint distribution (S.a, S.b, S.y) is performed first. Then, multiplicity values are obtained, and a temporary table approximating πS.a,S.b(R ⋈x=y S) is created and sampled. Finally, a traditional multidimensional technique is used to materialize the SIT over the approximate sample. This technique requires more space for samples because each element in the sample is a multidimensional tuple.
Multiple SIT Creation
In many instances it will be useful to create several SITs at once. Given a set of candidate SITs that would be useful for a workload, many commonalities might exist between the SITs, so that the sequential scans required to build them can be "shared". For this reason, a one-at-a-time approach to building SITs will likely be suboptimal, as the following example illustrates.
Two SITs are to be created:
SIT(T.a|R ⋈r1=s1 S ⋈s3=t3 T) and SIT(S.b|R ⋈r2=s2 S)
A naïve approach would be to apply Sweep to each SIT separately. In that case, one sequential scan each over tables S and T would be used to build the first SIT, and a second sequential scan over table S would be used to build the second SIT. However, the sequential scans can be ordered so that a single scan of table S can be used for both SITs. A sequential scan over table S can be performed to get both SIT(S.b|R ⋈r2=s2 S) and SIT(S.s3|R ⋈r1=s1 S). This can be done by sharing the sequential scan over S (on attributes S.s2, S.b, S.s3, and S.s1) and using histograms over R.r2 and S.s2 for the first SIT, and histograms over R.r1 and S.s1 for the second SIT, to obtain the required multiplicity values. A sequential scan over T can then be performed, and the previously calculated SIT(S.s3|R ⋈r1=s1 S) can be used to obtain the required SIT(T.a|R ⋈r1=s1 S ⋈s3=t3 T). This strategy requires a single sequential scan over table S, as opposed to two scans for the naïve strategy. Of course, the memory requirements for the second strategy are larger than those for the first, since it is necessary to maintain two sets of samples in memory: one for πS.b(R ⋈r2=s2 S) and another for πS.s3(R ⋈r1=s1 S).
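A sketch of such a shared scan follows; the spec format and the multiplicity callbacks are assumptions, and the reservoir-sampling loop mirrors the single-SIT sketch above:

    import random

    def shared_scan_sweep(rows_s, specs, k):
        # rows_s: a single sequential scan of S yielding dicts with every
        # needed attribute (here S.s2, S.b, S.s3, S.s1). specs maps a SIT
        # name to (multiplicity_fn, sampled_attribute); multiplicity_fn
        # wraps the histogram pair for that SIT (e.g., over R.r2 and S.s2).
        state = {name: ([], 0) for name in specs}
        for row in rows_s:                    # the scan of S is shared
            for name, (mult, attr) in specs.items():
                sample, seen = state[name]
                for _ in range(round(mult(row))):
                    seen += 1
                    if len(sample) < k:
                        sample.append(row[attr])
                    elif random.randrange(seen) < k:
                        sample[random.randrange(k)] = row[attr]
                state[name] = (sample, seen)
        return {name: sample for name, (sample, seen) in state.items()}

For the example above, one spec would pair a multiplicity function over histograms on R.r2 and S.s2 with sampled attribute S.b, and the other a function over histograms on R.r1 and S.s1 with sampled attribute S.s3.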
The following optimization problem can be used to create a given set of SITs. Given a set of SITs S={S1, . . . , Sn}, a sampling rate s (specified as a percentage of table size, an absolute amount, or a combination of both), and the amount of available memory M, find the optimal sequence of applications of the Sweep algorithm (sharing sequential scans as explained above) such that (i) at any time the total amount of memory used for sampling is bounded by M, and (ii) the estimated execution cost for building S is minimized.
The Shortest Common Supersequence (SCS) problem, used in text editing, data compression, and robot assembly lines, can be adapted to address this optimization problem as follows. Let R = x1 . . . xn be a sequence of elements (individual elements of R can be accessed using array notation, so R[i]=xi). Given a pair of sequences R and R′, R′ is a subsequence of R if R′ can be obtained by deleting zero or more elements from R (R is then said to be a supersequence of R′). A sequence R is a common supersequence of a set of sequences ℛ={R1, . . . , Rn} if R is a supersequence of every Ri∈ℛ. A shortest common supersequence of ℛ, denoted SCS(ℛ), is a common supersequence of ℛ with minimal length.
For example, ℛ={abdc, bca} has supersequences abdcbca, aabbddccbbcaa, and abdca. SCS(ℛ)=abdca, since no sequence of length four is a common supersequence of both abdc and bca.
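These claims can be checked mechanically with a standard greedy subsequence test (an illustrative snippet, not part of the invention):

    def is_supersequence(sup, sub):
        # Greedy check: each element of `sub` must appear in `sup`, in order.
        it = iter(sup)
        return all(ch in it for ch in sub)

    for cand in ("abdcbca", "aabbddccbbcaa", "abdca"):
        print(cand, is_supersequence(cand, "abdc") and is_supersequence(cand, "bca"))
    # All three print True; abdca (length five) is a shortest one.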
Finding the SCS of a set of sequences is an NP-complete problem that can be solved using dynamic programming in O(l^n) time for n sequences of length at most l, by formulating SCS as a shortest-path problem in a directed acyclic graph with O(l^n) nodes. For a given set of sequences ℛ={R1, . . . , Rn}, the graph is constructed as follows. Each node in the graph is an n-tuple (r1, . . . , rn), where ri∈{0 . . . |Ri|} indexes a position in Ri. Node (r1, . . . , rn) encodes a solution for the common supersequence of {S1, . . . , Sn}, where Si=Ri[1]Ri[2] . . . Ri[ri], i.e., the ri-prefix of Ri. An edge is inserted from node (u1, . . . , un) to node (v1, . . . , vn) with label θ if the following properties hold: (i) for every position i, either ui=vi or ui+1=vi, (ii) at least one position j verifies uj+1=vj, and (iii) for every position j such that uj+1=vj, Rj[vj]=θ. Informally, an edge labeled θ connects nodes u and v if the state represented by v can be reached from the state represented by u by appending θ to the common supersequence encoded at u.
Any path from node O=(0, . . . , 0) to node F=(|R1|, . . . , |Rn|) in the graph corresponds to a common supersequence of R. In particular, any shortest path from O to F corresponds to a shortest common supersequence of R. Therefore, to solve SCS the induced graph is materialized and any algorithm may be used to find the shortest path between O and F.
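A sketch of the successor generation implied by this construction follows. In the unbounded-memory, unit-cost setting it suffices to advance, for each label θ, every sequence whose next element is θ (edges advancing only a subset of those positions become relevant in the bounded-memory setting discussed later); the function name is an assumption:

    def successors(node, seqs):
        # node: tuple of prefix lengths (r1, ..., rn). For each label theta
        # that extends at least one unfinished sequence, advance every
        # position whose next element is theta.
        out = {}
        for i, (r, seq) in enumerate(zip(node, seqs)):
            if r < len(seq):
                theta = seq[r]
                succ = out.setdefault(theta, list(node))
                succ[i] = r + 1
        return {theta: tuple(v) for theta, v in out.items()}

    seqs = ["abdc", "bca"]
    print(successors((0, 0), seqs))   # {'a': (1, 0), 'b': (0, 1)}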
Algorithm A* is a heuristic technique to efficiently find shortest paths in graphs that are built inductively (i.e., graphs in which the set of successors of any given node can be generated on demand). A* is applied to the SCS problem so that only a portion of the graph induced by the input set of sequences is materialized at any given time. A* searches the input graph outwards from the starting node O until it reaches the goal node F, expanding at each iteration the node that is most likely to lie along the best path from O to F. The application of A* is based on the possibility, for each node u in the induced graph, of estimating a lower bound of the length of the best path connecting O and F through u (denoted f(u)). At each step in the search, the most promising node is chosen, i.e., the node for which f(u) is the smallest among those for the nodes created so far. The chosen node is then expanded by dynamically generating all its successors in the graph. Typically, the cost function f(u) is composed of two components, f(u)=g(u)+h(u), where g(u) is the length of the shortest path found so far between O and u, and h(u) is the expected remaining cost (heuristically determined) to get from u to F. If the heuristic function h(u) is always an underestimate of the actual length from u to F, A* is guaranteed to find the optimal solution. However, if h(u) is too optimistic, A* will expand too many nodes and may run out of resources before a solution is found. Therefore, it is important to make h(u) as tight as possible. Also, if for any pair of nodes u and v that are connected by an edge in the graph, h(u)−h(v)≦d(u,v), where d(u,v) is the cost of going from u to v, the following property holds: whenever a node u is expanded, a shortest path from O to u is already known. This property allows efficient implementations of A*.
For the SCS problem, an estimate of the length of the shortest path from u to F, i.e., h(u), is equivalent to an estimate of the length of the shortest common supersequence of the suffixes of the original sequences not yet processed in state u. A good value for h(u) can then be calculated as follows. For each character c, let o(u,c) denote the maximum number of occurrences of c in any of the suffix sequences not yet processed in state u. A lower bound h(u) is then Σc o(u,c), since every common supersequence must contain at least o(u,c) occurrences of each character c. For instance, referring to node (2,1) in
A* does not affect the size of the graph, but usually results in faster executions since it does not need to generate the whole graph in advance, but only explores a small fraction guided by the heuristic function h.
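Putting the pieces together, a minimal sketch of A* for the unweighted SCS case follows, reusing the successors helper above; h is admissible by the occurrence-count argument, so the returned supersequence is shortest:

    import heapq

    def h(node, seqs):
        # Lower bound: every common supersequence of the unprocessed
        # suffixes needs at least o(node, c) copies of each character c.
        need = {}
        for r, seq in zip(node, seqs):
            counts = {}
            for ch in seq[r:]:
                counts[ch] = counts.get(ch, 0) + 1
            for ch, n in counts.items():
                need[ch] = max(need.get(ch, 0), n)
        return sum(need.values())

    def scs_astar(seqs):
        start = (0,) * len(seqs)
        goal = tuple(len(s) for s in seqs)
        open_heap = [(h(start, seqs), 0, start, "")]   # (f, g, node, prefix)
        best_g = {start: 0}
        while open_heap:
            f, g, node, prefix = heapq.heappop(open_heap)
            if node == goal:
                return prefix              # a shortest common supersequence
            if g > best_g.get(node, g):
                continue                   # stale queue entry
            for theta, succ in successors(node, seqs).items():
                if g + 1 < best_g.get(succ, float("inf")):
                    best_g[succ] = g + 1
                    heapq.heappush(open_heap,
                                   (g + 1 + h(succ, seqs), g + 1, succ,
                                    prefix + theta))

    print(scs_astar(["abdc", "bca"]))      # an optimal answer, e.g. abdca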
The A* technique can be adapted to optimally create a set of SITs. As already discussed, creating a SIT requires performing sequential scans over the set of tables referenced in the SIT's generating query (with the exception of the leaf tables in the join tree). Moreover, the sequential scans must follow the order given by some post-order traversal of the join tree. For example, to create a SIT over attribute R.a with the acyclic-join generating query of
These order restrictions can be concisely specified by using a set of dependency sequences. A dependency sequence is a sequence of tables (R1, . . . , Rn) such that, for all 1≦i<j≦n, the sequential scan over table Ri must precede the sequential scan over table Rj. For linear-join queries, a single dependency sequence is needed; it is obtained by traversing the chain of joins from the far end of the chain toward the table that originally hosts the SIT's attribute, omitting the table at the far end (which contributes only a base-table histogram). In general, for an acyclic-join query, one dependency sequence is needed for each root-to-leaf path in the join-tree (omitting leaf nodes).
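A sketch of dependency-sequence extraction, reusing the hypothetical JoinNode class from the earlier sketch; the ordering within each sequence (deeper tables first) follows the post-order scan requirement illustrated by the examples above:

    def dependency_sequences(root):
        # One dependency sequence per root-to-leaf path in the join-tree,
        # omitting the leaf and ordered so that deeper tables, which must
        # be scanned earlier, come first.
        sequences = []
        def walk(node, ancestors):
            if not node.children:
                if ancestors:
                    sequences.append(list(reversed(ancestors)))
                return
            for child in node.children:
                walk(child, ancestors + [node.table])
        walk(root, [])
        return sequences

    # Chain for SIT(T.a | R join S join T): scan S, then T; R is a leaf.
    chain = JoinNode("T", "a", [JoinNode("S", "s3", [JoinNode("R", "r1")])])
    print(dependency_sequences(chain))   # [['S', 'T']]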
To model the time and space required to execute Sweep over a single-join generating query, the following values are associated with each table T: Cost(T), the estimated cost of performing a sequential scan over T, and SampleSize(T,a), which specifies how much memory is allocated for a sample over attribute a of T. SampleSize(T,a) can be a constant value or can depend on the specific table and attribute. Therefore, if Sweep is used to create SIT(S.a|R ⋈ S), the cost of the procedure is estimated as Cost(S) and the memory requirement is estimated as SampleSize(S,a).
As illustrated above, the sequential scan over table S can be shared to create any SIT of the form SIT(S.a|R ⋈x=y S) for an arbitrary table R and attributes a, x, and y, provided there are histograms over R.x and S.y available. Note that for acyclic-join generating queries, R could represent an intermediate join result. In this situation the cost of executing Sweep remains fixed at Cost(S), since the sequential scan is shared. However, the space required for sampling increases to Σi samplesets(ai)·SampleSize(S,ai), where samplesets(ai) is the number of sample sets for attribute ai required during the sequential scan over table S. For instance, if the sequential scan over S is shared to create SIT(S.a|R ⋈x=y S), SIT(S.b|R ⋈x=y S), and SIT(S.a|T ⋈z=y S), the estimated cost will be Cost(S) and the memory requirement for sampling will be 2·SampleSize(S,a)+SampleSize(S,b).
If the amount of available memory is unbounded, the optimization problem can be mapped to a weighted version of SCS, where the input sequences to the SCS problem are all the dependency sequences of the given SITs. In this case, the A* algorithm is changed only to the extent that the definition of the distance function between nodes must incorporate weights and the heuristic function h(u) must be modified accordingly (lines 6 and 9 in the A* algorithm). In particular, d(bestN,s) is given a weight of Cost(R), where R is the label of edge (bestN,s). The definition of h(u) is changed accordingly, and the second assignment of line 9 becomes h(s)=Σc Cost(c)·o(s,c).
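Under these assumptions, the weighted variants are small changes to the earlier sketch: the edge distance becomes the scan cost of the labeling table, and h weights each remaining occurrence by that cost (single-character table names are used for brevity):

    def h_weighted(node, seqs, cost):
        # Weighted lower bound: each remaining occurrence of table c
        # contributes Cost(c), so h(u) = sum over c of Cost(c) * o(u, c).
        need = {}
        for r, seq in zip(node, seqs):
            counts = {}
            for ch in seq[r:]:
                counts[ch] = counts.get(ch, 0) + 1
            for ch, n in counts.items():
                need[ch] = max(need.get(ch, 0), n)
        return sum(cost[ch] * n for ch, n in need.items())

    # In the A* sketch above, `g + 1` becomes `g + cost[theta]` for an
    # edge labeled theta.
    print(h_weighted((0, 0), ["ST", "S"], {"S": 1000, "T": 800}))  # 1800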
Given the SCS, the elements (tables) of the SCS are iterated through, one at a time. When a table T is processed, all SITs of the form SIT(T.a|S ⋈si=tj T) are created (using Sweep) for which the histogram over S.si is already built (or, if S is a base table, the corresponding base-table histogram is created first).
Referring to
The scenario considered above assumes that any amount of memory can be allocated to create SITs. When the amount of memory M is bounded, the search space is modified to solve a constrained, weighted SCS. For instance, if SampleSize(S,b)+SampleSize(S,s3)>M, the sequential scan over S cannot be shared, and the optimal execution path would be different, as described below.
Multiple SIT Creation With Bounded Memory
If the amount of available memory is bounded, some edges in A*'s search graph are no longer valid. This is because the implicit meaning of an edge from node u=(u1, . . . , un) to node v=(v1, . . . , vn) with label θ is to "advance" by one position all input sequences for which Ri[ui+1]=θ. While creating SITs, each position that changes from ui to vi=ui+1 in transition (u,v) corresponds to an additional SIT to create, and therefore may increase the memory requirements above the given limit. When memory is limited, only subsets of all possible positions from node u using edge θ can be advanced. To ensure optimality, each possible subset must be tried. To deal with bounded memory, the successors of a given node are determined at each iteration of A* as outlined in the pseudo code below.
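One way to realize this successor generation is sketched here; the mem_required callback is an assumed stand-in for the Σi samplesets(ai)·SampleSize(·,ai) estimate of the sampling memory needed while scanning the labeling table:

    from itertools import combinations

    def bounded_successors(node, seqs, mem_required, M):
        # For each label theta, try every non-empty subset of the positions
        # whose next element is theta; keep only successors whose estimated
        # sampling memory stays within the bound M.
        labels = {seq[r] for r, seq in zip(node, seqs) if r < len(seq)}
        succs = []
        for theta in labels:
            positions = [i for i, (r, seq) in enumerate(zip(node, seqs))
                         if r < len(seq) and seq[r] == theta]
            for size in range(1, len(positions) + 1):
                for subset in combinations(positions, size):
                    succ = list(node)
                    for i in subset:
                        succ[i] += 1
                    if mem_required(theta, subset) <= M:
                        succs.append((theta, tuple(succ)))
        return succs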
The size of the search graph is bounded by O(l^n), where n is the number of input SITs and l is the size of the largest dependency sequence among the input SITs. The A* algorithm is guaranteed to find an optimal schedule. However, if there are many input SITs, or SITs with many joins, the A*-based technique may become expensive due to the increase in the number of edges to be evaluated. The worst-case time complexity of the algorithm is O(l^n·2^S), where l is the maximum length of any chain of joins, n is roughly the number of input SITs, and S is the maximum size of any candidate set. For small values of l and n, the A* algorithm is efficient, but larger values of l or n can cause executions of A* to become prohibitively expensive. The A* algorithm can be modified in a manner that balances efficiency and quality of the resulting schedule.
A simple modification is to take a greedy approach. At each iteration of A*, after the best node u is selected, the OPEN set is emptied before adding the successors of u. In this way, the greedy approach chooses at each step the element that would result in the largest local improvement. In this case, the size of OPEN at each iteration is bounded by the maximal number of successors of any given node, and the algorithm is guaranteed to finish in at most Σi|Ri| steps (since the induced search graph is always acyclic). However, due to the aggressive pruning in the search space, the greedy approach when used exclusively may result in suboptimal schedules.
A hybrid approach that combines A* and the greedy method above switches from A* to the greedy approach when appropriate by emptying OPEN at the current and every subsequent iteration. The hybrid approach starts as A* and, after a switch condition is met, greedily continues from the most promising node found so far. Several switching conditions can be used for the hybrid approach. The switch can be made after a predetermined amount of time has passed without A* returning the optimal solution, or after |OPEN ∪ CLOSE| consumes all available memory. In one particular hybrid approach, the switch is made after one second passes without A* finding an optimal solution.
It can be seen from the foregoing description that building and maintaining statistical information on intermediate query results can result in more efficient query plans. Although the present invention has been described with a degree of particularity, it is the intent that the invention include all modifications and alterations from the disclosed design falling within the spirit or scope of the appended claims.