Patent Application
Publication Number: 20040186684
Date Filed: May 10, 2004
Date Published: September 23, 2004
Abstract
The invention relates to a method for the automatic, software-driven statistical evaluation of large amounts of data assigned to statistical variables in a database. The method is characterized in that a statistical model, which approximately describes the relative frequencies of the states of the variables and the statistical dependencies between those states, is learnt from the data and is used to determine the approximate relative frequencies of states of the variables, as well as the approximate relative frequencies, and the expected values dependent thereon, of states of variables belonging to predeterminable relative frequencies of states of the variables.
Description
[0001] This invention relates to a method for the automatic, software-driven statistical evaluation of large amounts of data that is to be assigned to statistical variables in a database. The data to be evaluated can, in particular, be contained in one or several clusters.
[0002] Nowadays, databases can store immense amounts of data. In order to evaluate the stored data and to extract profitable information from it, efficient, i.e. quick and specific, database accesses are required because of the sheer volume of data.
[0003] In general, an evaluation requires finding all the data that conforms to a predeterminable condition. Often the located data itself does not have to be known; only statistical information based on that data is required.
[0004] If, for example, in a customer relationship management (CRM) system in which customer data is stored, it is to be determined what proportion of customers with specific features bought a certain product, a simple procedure would be to access all the customer entries in the database, request all the features of the customers and, among the entries that “match” the desired features, find and count those for which the customers bought the specific product. Such a request to the database could, for example, be: how often were specific mobile telephones purchased by male customers who are at least 30 years old? All the customer entries that conform to the requirements “male” and “at least 30 years old” must then be found, and the matching entries must be examined to determine which mobile telephone was purchased most often.
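A minimal sketch of this brute-force full scan, in plain Python with hypothetical field names (not taken from the patent):

```python
from collections import Counter

def phone_purchase_stats(records):
    """Full scan: count phone models bought by men aged 30 or older.

    `records` stands in for the database table; it is a list of dicts
    with hypothetical fields 'gender', 'age' and 'phone_model'.
    Every entry must be read, which is what makes this slow.
    """
    counts = Counter(
        r["phone_model"]
        for r in records
        if r["gender"] == "male" and r["age"] >= 30
    )
    return counts.most_common()  # most frequently purchased first
```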
[0005] However, a disadvantage of this procedure is the fact that the entire database has to be read to find the matching entries. This can occasionally take a very long time in the case of very large databases.
[0006] The database can be searched more skillfully and more efficiently if all the variables are provided with selective indexes that can be queried. As a rule, the more exact and sophisticated the index technique of a database is, the quicker the database can be accessed. Statistical information about the database entries can accordingly also be provided more efficiently. This applies in particular if the database is specifically prepared by a special index technique for the requests to be expected.
[0007] Alternatively, or in combination with index techniques, the results of all the statistical requests to be expected can be pre-calculated, which has the disadvantage of considerable effort for the calculations and for storing the results.
[0008] The term “online analytical processing” (OLAP) characterizes a class of methods for extracting statistical information from the data of a database. In general, such methods can be subdivided into “relational online analytical processing” (ROLAP) and “multidimensional online analytical processing” (MOLAP).
[0009] The ROLAP method performs only minor pre-calculations. When a statistic is requested, the data required for a response is accessed via the index techniques, and the statistics are then calculated from that data. The emphasis of ROLAP is therefore on a suitable organization and indexing of the data, so that the required data can be found and loaded as quickly as possible. Nevertheless, the effort for large amounts of data can still be very great, and in addition the selected indexing is sometimes not optimal for all requests.
[0010] In the MOLAP method the focus is on pre-calculating the results for many possible requests. As a result, the response time for a pre-calculated request remains very short. For requests that have not been pre-calculated, the pre-calculated values can sometimes also lead to an acceleration, if the desired quantities can be calculated from the pre-calculated results and this is cheaper than directly accessing the data. However, the number of all possible requests increases rapidly with the number of variables and the number of states of these variables, so that the pre-calculation runs up against the limits of present possibilities with regard to memory space and turnaround time. Restrictions with regard to the variables considered, the different states of these variables or the permissible requests must then be accepted.
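The combinatorial growth can be made concrete with a small back-of-the-envelope calculation; the variable and state counts below are invented purely for illustration:

```python
from math import prod

# Hypothetical schema: number of discrete states per variable.
states = [2, 5, 10, 7, 4, 12, 3, 6]

# Each variable can either be left unconstrained or fixed to one of
# its states, so the number of distinct conjunctive requests is
# prod(s + 1) - 1 (excluding the empty request).
n_requests = prod(s + 1 for s in states) - 1
print(n_requests)  # 2,882,879 requests for just these eight variables
```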
[0011] Even though the OLAP methods guarantee an increase in efficiency compared to merely accessing each database entry, it is disadvantageous that a great amount of redundant information has to be generated: statistics must be pre-calculated and extensive index lists created. In general, an efficient application of an OLAP method also requires that the method is optimized for specific requests, in which case the OLAP method is then also subject to these selected restrictions, i.e. arbitrary requests can no longer be made to the database.
[0012] In addition, it is also true for the OLAP methods that the more quickly the information is to be provided and the more this information varies, the more structures must be pre-calculated and stored. OLAP systems can therefore become very large and are far less efficient than would be desired; response times of less than one second cannot in practice be achieved for arbitrary statistical requests to a large database. Often the response times are considerably more than one second.
[0013] Therefore, there is a need for more efficient methods for the statistical evaluation of data entries. If possible, the requests should not be subject to any restrictions.
[0014] The object of this invention is to overcome the disadvantages of the methods known in the prior art, particularly, the OLAP method for the statistical evaluation of database entries.
[0015] The methods with the features of the independent claims achieve this object according to the invention. Advantageous developments of the invention are specified in the subclaims.
[0016] According to the invention, a method is provided for the automatic, software-driven statistical evaluation of large amounts of data assigned to statistical variables in a database, in particular data contained in one or more clusters. The method is characterized in that a statistical model for the approximate description of the relative frequencies of the states of the variables and of the statistical dependencies between those states is learnt by means of the data stored in the database, and in that, on the basis of the statistical model, the approximate relative frequencies of states of the variables are determined, as well as the approximate relative frequencies, and the expected values dependent thereon, of states of variables belonging to predeterminable relative frequencies of states of the variables.
[0017] Unlike the conventional methods for the statistical evaluation of data from databases, the model is not an exact image of the statistics of the data. In general, this procedure therefore yields not exact but only approximate statistical statements. However, the statistical models are subject to far fewer restrictions than, for example, the conventional OLAP methods.
[0018] In order to make approximate statistical statements, the entries in a database are thus “condensed” to a statistical model, in which case the statistical model effectively represents an approximation of the common probability distribution of the database entries. In practice, this takes place by learning the statistical model on the basis of the database entries, whereby the relative frequencies of the states of the variables of the database entries are approximately described. The variables can assume many states with different relative frequencies. As soon as such a statistical model is available, it can be used to study the statistical dependencies between the states of the variables. In this way, relative frequencies of states of the variables can be specified as a predeterminable condition and used to determine the relative frequencies, and the expected values dependent thereon, of the states of the variables belonging to the predetermined relative frequencies.
[0019] A statistical request to the database can in this way be posed as a condition on the relative frequencies of specific states of the variables, in which case the response to the statistical request consists of the relative frequencies of the states of the variables determined as belonging to the predetermined relative frequencies.
[0020] As the statistical model, a graphical probabilistic model is preferably used (see, e.g., Enrique Castillo, José Manuel Gutiérrez, Ali S. Hadi, Expert Systems and Probabilistic Network Models, Springer, New York). The graphical probabilistic models particularly include the Bayesian networks (also called belief networks) and Markov networks.
[0021] A statistical model can, for example, be generated by structure learning in Bayesian networks (see, e.g., Reimar Hofmann, Lernen der Struktur nichtlinearer Abhängigkeiten mit graphischen Modellen (learning the structure of non-linear dependencies with graphical models), dissertation, Berlin; or David Heckerman, A Tutorial on Learning with Bayesian Networks, Technical Report MSR-TR-95-06, Microsoft Research).
[0022] A further possibility is to learn the parameters for a fixed structure (see, e.g., Martin A. Tanner, Tools for Statistical Inference, Springer, New York, 1996).
[0023] Many learning methods use the likelihood function as an optimization criterion for the parameters of the model. A particular embodiment here is the expectation maximization (EM) learning method, which is explained below in detail on the basis of a special model. In principle, the generalization ability of the models is not the main concern here; it is only necessary to obtain a good adaptation of the models to the data.
[0024] As the statistical model, a statistical clustering model, preferably a Bayesian clustering model, is used, by means of which the data is subdivided into many clusters.
[0025] Similarly, a clustering model based on a distance measurement can be used together with a statistical model by means of which the data is likewise subdivided into many clusters.
[0026] By using clustering models, a very large database breaks down into smaller clusters that, for their part, can be interpreted as separate databases and, because of their comparably smaller size, can be handled more efficiently. Here the statistical evaluation of the database tests whether or not a predetermined condition can be mapped via the statistical model to one or more clusters. If this is the case, the evaluated data is restricted to that cluster or those clusters. Similarly, it is possible to restrict consideration to those clusters in which the data conforming to the predetermined condition occurs with at least a specific relative frequency. The remaining clusters, which contain only a small amount of data conforming to the predetermined condition, can be ignored, because the considered procedure only aims at approximate statements.
[0027] For example, a Bayesian clustering model (a model with a discrete latent variable) is used as a statistical clustering model.
[0028] This is described in further detail below:
[0029] Let a set of statistical variables {A, B, C, D, . . . } be given, in other words a set of fields in a database table. The corresponding lower-case letters denote the states of the variables; variable A can thus assume the states {a1, a2, . . . }. The states are assumed to be discrete, but in general continuous (real-valued) variables are also permitted.
[0030] An entry in the database table consists of values for all the variables, in which case the values belonging to an entry are combined into one data record for all the variables. For example, xπ = (aπ, bπ, cπ, dπ, . . . ) denotes the πth data record. The table has M entries, i.e. D = {xπ, π = 1, . . . , M}.
[0031] In addition, there is a hidden variable (cluster variable) that is designated with Ω. The cluster variable can accept the values {ωi, i=1, . . . , N}; i.e. there are N clusters.
[0032] Here, P(Ω|θ) denotes the a priori distribution of the clusters, in which case the a priori weight of the ith cluster is given by P(ωi|θ) and θ represents the parameters of the model. The a priori distribution describes what proportion of the data is assigned to the respective clusters.
[0033] The expression P(A, B, C, D, . . . | ωi, θ) describes the structure of the ith cluster, i.e. the conditional distribution of the variables of the variable set {A, B, C, D, . . . } within the ith cluster.
[0034] The a priori distribution and the conditional distributions of each cluster thus together parameterize a common probabilistic model on {A, B, C, D, . . . } ∪ {Ω} or on {A, B, C, D, . . . }. The probabilistic model is given by the product of the a priori distribution and the conditional distribution,
P(A, B, C, . . . , Ω | Θ) = P(Ω | Θ) P(A, B, C, . . . | Ω, Θ),
[0035] or by
P(A, B, C, . . . | Θ) = Σi P(ωi | Θ) P(A, B, C, . . . | ωi, Θ).
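As an illustration only, such a model can be represented by a prior vector and one conditional table per variable; the following Python sketch uses names and shapes of my own choosing and anticipates, for concreteness, the factorization assumption introduced in paragraph [0040] below:

```python
import numpy as np

# Hypothetical representation of the parameters theta for N clusters:
#   prior: shape (N,), the a priori cluster weights P(omega_i | theta)
#   cond:  one numpy table per variable v; cond[v] has shape
#          (N, n_states_v) and holds P(variable v = state | omega_i, theta)

def record_likelihood(prior, cond, x):
    """P(x | theta) for one data record x of discrete state indices."""
    per_cluster = np.asarray(prior, dtype=float).copy()  # P(omega_i)
    for v, state in enumerate(x):
        per_cluster *= cond[v][:, state]   # times P(x_v | omega_i)
    return per_cluster.sum()               # sum over the hidden clusters
```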
[0036] The logarithmic likelihood function L of parameter θ of the data record D is now given by
L
(Θ)=log P(D|Θ)=Σπlog P(xπ|Θ).
[0037] Within the context of the expectation maximization (EM) scheme, a sequence of parameters Θ(t) is now constructed according to the following general specification:
Θ(t+1) = arg maxΘ Σπ Σi P(ωi | xπ, Θ(t)) log P(xπ, ωi | Θ).
[0038] This iteration specification maximizes the likelihood function step by step.
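A compact sketch of one such EM iteration for the clustering model above; this is an illustrative implementation under the naive factorization of paragraph [0040], not code from the patent, and it reuses the hypothetical (prior, cond) representation sketched after paragraph [0035]:

```python
import numpy as np

def em_step(prior, cond, data):
    """One EM iteration; data is a list of records, each a tuple of
    discrete state indices (one index per variable).

    Returns updated (prior, cond) that do not decrease the likelihood.
    """
    prior = np.asarray(prior, dtype=float)
    # E-step: responsibilities P(omega_i | x_pi, theta(t)) per record.
    resp = np.empty((len(data), prior.shape[0]))
    for p, x in enumerate(data):
        joint = prior.copy()
        for v, state in enumerate(x):
            joint *= cond[v][:, state]     # P(x_pi, omega_i | theta(t))
        resp[p] = joint / joint.sum()

    # M-step: re-estimate the parameters from the responsibilities.
    new_prior = resp.mean(axis=0)
    new_cond = []
    for v, table in enumerate(cond):
        counts = np.zeros_like(table, dtype=float)
        for p, x in enumerate(data):
            counts[:, x[v]] += resp[p]     # expected state counts
        counts += 1e-12                    # avoid division by zero
        new_cond.append(counts / counts.sum(axis=1, keepdims=True))
    return new_prior, new_cond
```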
[0039] For the conditional distributions P(A, B, C, D, . . . | ωi, θ), restrictive assumptions can (and, in general, must) be made. An example of such a restrictive assumption is the following factorization assumption:
[0040] If, for example, for the conditional probability distribution P(A, B, C, D, . . . | ωi, θ) of the variables of the variable set {A, B, C, D, . . . }, the factorization P(A, B, C, D, . . . | ωi, θ) = P(A | ωi, θ) P(B | ωi, θ) P(C | ωi, θ) P(D | ωi, θ) . . . is assumed, the probabilistic model corresponds to a naive Bayesian network. Instead of one high-dimensional table, one is then only confronted with many one-dimensional tables (one table per variable).
[0041] The parameters of the distribution can, as shown above, be learnt from the data with an expectation maximization (EM) learning method. After the learning process, a cluster can be assigned to each data record xπ = (aπ, bπ, cπ, dπ, . . . ). The assignment takes place via the a posteriori distribution P(Ω | aπ, bπ, cπ, dπ, . . . , θ), in which case the cluster ωi with the highest weight P(ωi | aπ, bπ, cπ, dπ, . . . , θ) is assigned to the data record xπ.
[0042] The cluster affiliation of each entry in the database can be stored as an additional field in the database and corresponding indexes can be prepared to quickly access the data that belongs to a specific cluster.
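A minimal sketch of this assignment and of such a cluster index, again reusing the hypothetical (prior, cond) representation from above:

```python
import numpy as np

def assign_cluster(prior, cond, x):
    """MAP cluster index: argmax_i P(omega_i | x, theta)."""
    joint = np.asarray(prior, dtype=float).copy()
    for v, state in enumerate(x):
        joint *= cond[v][:, state]
    return int(joint.argmax())  # normalization does not change the argmax

def build_cluster_index(prior, cond, data):
    """Hypothetical index mapping cluster id -> row numbers of its
    records, i.e. the additional database field described above."""
    index = {}
    for row, x in enumerate(data):
        index.setdefault(assign_cluster(prior, cond, x), []).append(row)
    return index
```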
[0043] If, for example, a statistical request of the type “give all the data records with A=a1 and B=b3 as well as the relevant distributions over C and D (i.e. P(C|a1, b3) and P(D|a1, b3))” is made to the database, the procedure is as follows:
[0044] First of all, the a posteriori distribution P(Ω | a1, b3) is determined. From this (approximate) distribution it is clear what proportion of the data satisfying the set condition is to be found in which clusters of the database. In this way it is possible, in all further processing and depending on the desired accuracy, to restrict oneself to the clusters of the database that have a high a posteriori weight according to P(Ω | a1, b3).
[0045] The ideal case is when P(ωi | a1, b3) = 1 applies for one i and accordingly P(ωj | a1, b3) = 0 for all j ≠ i, i.e. all the data corresponding to the set condition lies in one cluster. In such a case it is possible to restrict oneself to the ith cluster without losing accuracy in the further evaluation.
[0046] In order to obtain (approximate) distributions for C and D, it is possible either to continue using the model, i.e. to determine the desired distributions P(C|a1, b3) and P(D|a1, b3) approximately from the parameters of the model:
P(C | a1, b3) ≅ Σi P(C | ωi, a1, b3, Θ) P(ωi | a1, b3, Θ).
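Under the naive factorization of paragraph [0040], P(C | ωi, a1, b3, Θ) collapses to P(C | ωi, Θ), so this mixture can be read directly off the tables; a hypothetical sketch in the same representation as above:

```python
import numpy as np

def conditional_distribution(prior, cond, evidence, target_v):
    """Approximate P(target variable | evidence) from the model parameters.

    evidence maps variable indices to observed state indices, e.g.
    {0: 0, 1: 2} for A=a1 and B=b3 (indices chosen for illustration).
    Returns sum_i P(target | omega_i, theta) P(omega_i | evidence, theta).
    """
    post = np.asarray(prior, dtype=float).copy()
    for v, state in evidence.items():
        post *= cond[v][:, state]
    post /= post.sum()            # a posteriori cluster weights
    return post @ cond[target_v]  # mixture of per-cluster tables
```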
[0047] Alternatively, however, the model can also be used only to determine the clusters that are relevant for the current request.
[0048] After restricting the request to these clusters, more exact methods can be used within the clusters. For example, the statistics within the clusters can be counted exactly (with the help of an additional index referring to the cluster affiliation, or based on conventional database reporting methods or OLAP methods), or further statistical models adapted to the clusters can be used. A tight interlocking with OLAP is particularly advantageous, because the statistical clustering models exploit the so-called “sparsity” of the data in high dimensions, while the OLAP methods are used effectively only within the lower-dimensional clusters.
[0049] The trade-off between speed and accuracy in the evaluation results from the amount of data excluded from the evaluation: the more clusters are excluded from the evaluation, the quicker, but also the less exact, the response to a statistical request will be. The user can determine the trade-off between accuracy and speed himself. In addition, more exact automatic methods can be initiated if the accuracy obtainable from evaluating the model appears insufficient.
[0050] In general, clusters that are below a specific minimum weight are excluded from the evaluation. Exact results can be obtained by excluding only those clusters from the evaluation that have an a posteriori weight of zero. Here, an exact “indexing” of the clusters can be achieved as a result of an exact indexing of the database, in which case the evaluation is accelerated in many cases. However, in general as many clusters as possible are used for the evaluation.
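The selection rule can be sketched in a few lines; min_weight is a hypothetical tuning parameter expressing the speed/accuracy trade-off of paragraph [0049]:

```python
def relevant_clusters(post_weights, min_weight=0.05):
    """Clusters kept for further evaluation.

    post_weights are the a posteriori cluster weights for the request.
    min_weight = 0.0 keeps every cluster with nonzero weight and thus
    loses no accuracy; larger values trade accuracy for speed by
    ignoring clusters that match the condition only weakly.
    """
    return [i for i, w in enumerate(post_weights) if w > min_weight]
```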
[0051] Overtraining of a clustering model is of no importance here; on the contrary, the aim is the most exact possible reproduction of the historical data and not a prognosis for the future. Indeed, intensely overtrained clustering models tend to supply the most unambiguous possible assignment of requests to clusters, which means that in further operation a request can very quickly be limited to small clusters of the database.
[0052] In an advantageous way, the data belonging to a cluster is stored on a data carrier in a way appropriate to the cluster affiliation. For example, the data belonging to one cluster can be stored on a section of the hard disk so that the data in a block belonging together can be read more quickly.
[0053] As has already been indicated, conventional methods for the statistical evaluation of the data from databases can also be used to supplement the method according to the invention if approximate statements are deemed insufficient. In particular, conventional database reporting or OLAP methods are then used to determine the relative frequencies of the states of the variables.
[0054] A supplementary application of conventional database techniques can, for example, be initiated automatically if a definable test variable reaches or exceeds a predetermined value.
[0055] According to the invention, a method is further provided for the automatic, software-driven statistical evaluation of large amounts of data assigned to statistical variables in a database, in particular data contained in one or several clusters, which is characterized in that the data is subdivided into many clusters by a clustering model based on a distance measure, in that, if required, the considered data is restricted to the data contained in one or several clusters, and in that database reporting methods or OLAP methods are used to determine the relative frequencies and expected values of the states of variables.
[0056] The methods shown in the invention can subdivide the data of the database into clusters and, if required, restrict the evaluation to one or several clusters. If the methods according to the invention are applied to data that is already contained in one or several clusters, the clusters are in this way subdivided into subclusters. After a restriction to one or more subclusters, the methods according to the invention can be applied to the data contained therein, in which case, if required, more exactly adapted statistical models can be used. In general, this procedure can be repeated as often as desired, i.e. the clusters can be subdivided into subclusters, the subclusters into sub-subclusters, and so on; if required, there can be a restriction to the data contained therein in each case, and the methods according to the invention can be applied (more exactly adapted) to the data contained in the considered clusters.
[0057] An embodiment of the invention in the Web reporting/Web mining area is described below in which case reference is made to the accompanying drawings.
[0058] FIG. 1 shows different monitor windows in which variables for describing the visitors to a Web site are displayed.
[0059] FIG. 2 shows different monitor windows of the variables of FIG. 1, in which case the behavior of visitors from a specific referrer is investigated.
[0060] FIG. 3 shows different monitor windows of the variables of FIG. 1, in which case the behavior of visitors that call up the homepage first, then read the news and subsequently again call up the homepage is investigated.
[0061] In general, large amounts of data have to be evaluated in the Web reporting/Web mining area. When a user visits a Web site, each action of the user is usually recorded in the Web log file. This is very data-intensive, because such Web log files can grow very rapidly to sizes in the region of several gigabytes.
[0062] In order to prepare the evaluation of the Web log files, “sessions”, i.e. visits by visitors, were extracted: all the successive entries (page retrievals or clicks) belonging to one visitor are summarized.
[0063] Each session by a visitor was characterized by a set of different variables, namely particularly “start time”, “session duration”, “number of requests”, “referrer”, “1st visited category”, “2nd visited category”, “3rd visited category” and “4th visited category”.
[0064] In addition, further variables (not shown in the figures) were specified such as “does the visitor accept cookies”, “number of sessions that the visitor had already had up to the current session”, “number of pages retrieved in the last session”, “interval in time to the last session”, “on which page did the last session end”, “time of the first session by the visitor” and “weekday”.
[0065] Altogether, each session was characterized in this way on the basis of 18 different variables.
[0066] In order to determine the relative frequencies of the states of the variables, a naive Bayesian clustering model, as described above, was used.
[0067] The specified variables were therefore integrated in the statistical model. The statistical model was then trained with the data contained in the Web log files in order to find good parameters for the model. The desired relative frequencies can then be read from the model.
[0068] The result of determining the relative frequencies of the states of the variables is displayed in FIG. 1. FIG. 1 shows different monitor windows in which the variables “start time”, “session duration”, “number of requests”, “referrer”, “1st visited category”, “2nd visited category”, “3rd visited category” and “4th visited category” to describe the visitors to a Web site are shown.
[0069] From FIG. 1 it can particularly be seen that
[0070] approximately 55% of the visitors visit the Web site during the afternoon or evening,
[0071] approximately 47% of the visitors only remain less than 1 minute on the Web site,
[0072] approximately 34% of the visitors only start one request,
[0073] approximately 56% of the visitors do not have a referrer,
[0074] approximately 45% of the visitors start on the homepage, and
[0075] approximately 57% of the visitors visit only 1 category,
[0076] approximately 74% of the visitors visit at most 2 categories, and
[0077] approximately 85% of the visitors visit at most 3 categories.
[0078] After the statistical model had been trained with an EM learning method, the dependencies between the variables could also be studied.
[0079] As can be seen in FIG. 2, the behavior of, for example, those visitors that came from a specific referrer (referred to as Endemann below) was investigated. For this, the corresponding entry in the variable “referrer” was set to 100%. By using the statistical model, it could be determined within fractions of a second that approximately 99% of these visitors first visit the homepage and that the predominant majority of them (approximately 96%) subsequently leave the Web site again immediately.
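In terms of the sketch from paragraph [0046], fixing the referrer corresponds to conditioning on one variable; the indices below are purely hypothetical:

```python
# Hypothetical indices: variable 3 is "referrer" (state 7 = Endemann),
# variable 4 is "1st visited category" (state 0 = homepage).
p_first_page = conditional_distribution(prior, cond, {3: 7}, target_v=4)
print(p_first_page[0])  # approx. 0.99 in the embodiment described here
```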
[0080] FIG. 3 displays a complicated request to the database. FIG. 3 shows different monitor windows of the variables under consideration, in which case the behavior of the visitors that call up the homepage first, then read the news and subsequently again call up the homepage is investigated. Here the corresponding entries in the variables “1st visited category”, “2nd visited category” and “3rd visited category” were set to 100%.
[0081] Again, it could particularly be determined by means of the statistical model within fractions of a second that these visitors then predominantly either again read the news (approximately 37%) or left the Web site (approximately 36%). It can also be seen in FIG. 3 that approximately 89% of these visitors have no referrer.
[0082] In a corresponding way, a multitude of further requests to the database could be answered within a short period, i.e. in general within less than 1 second. For example, it could be determined what proportion of the visitors that come from a specific referrer make more than three page requests, how these visitors are distributed over the time of day and what proportion of them are returning visitors. It could also be determined how the traffic of those visitors starting on the homepage is distributed, i.e. what proportion of the visitors continues the session in which way or subsequently aborts it.
[0083] Such a multitude of requests with many different variables on data of this size can be handled efficiently only with the method according to the invention, in contrast to the conventional database techniques, particularly the OLAP methods. Conventional OLAP methods can nevertheless also be used in addition, if exact statements are to supplement the approximate statements gained by the statistical model. However, considerably longer response times must then be expected.
[0084] To summarize, it can be established that this invention, as opposed to the conventional database techniques, particularly the database reporting and OLAP methods, can answer statistical requests made to extensive databases (approximately or exactly) considerably more efficiently by using statistical models. This does not exclude that conventional techniques for evaluating databases can additionally be used to obtain exact statements, if required. By using a clustering model, by means of which the database can be broken up into smaller clusters, it is possible for requests to restrict oneself very quickly to the relevant clusters of the database. After a restriction to clusters of the database, a renewed statistical evaluation of these clusters can be carried out according to the invention, in the course of which, if required, a renewed restriction to the subclusters contained in these clusters, as well as a renewed statistical evaluation of the data contained in the subclusters, can be made. In general, this procedure can be repeated as often as desired. In this way it is possible to create statistics or respond to statistical requests more efficiently.
[0085] Similarly, according to the invention, a clustering model based on a distance measure can be used to subdivide the data of a database into many clusters, in which case the evaluation is restricted to the relevant clusters of the database. In order to determine the relative frequencies and expected values of the states of variables, conventional database reporting methods or OLAP methods are then used.
[0086] In principle, this invention can be used everywhere where an efficient statistical evaluation of large amounts of data is required.
[0087] Therefore, a possible application is in the Web reporting/Web mining area as has already been shown in the embodiment.
[0088] Further possible applications can, for example, be found wherever customer data is obtained in large amounts, such as:
[0089] data from call centers,
[0090] data from operational customer relationship management systems,
[0091] data from the health area,
[0092] data from medical databases,
[0093] data from environmental databases,
[0094] data from genome databases,
[0095] data from the financial area.
Claims
- 1. Method for the automatic, software-driven statistical evaluation of large amounts of data that is to be assigned to statistical variables in a database, in particular, contained in one or several clusters, which is characterized in that
a statistical model for the approximate description of the relative frequencies of the states of the variables and the statistical dependencies between said states is learnt by means of the data stored in the database and is used to determine, on the basis of the statistical model, the approximate relative frequencies of states of the variables, in addition to the approximate relative frequencies belonging to pre-determinable relative frequencies of states of the variables and expected values of the states of variables dependent thereon.
- 2. Method according to claim 1, characterized in that as the statistical model, a graphical probabilistic model, in particular a Bayesian network, is used.
- 3. Method according to claim 1, characterized in that a statistical clustering model, in particular a Bayesian clustering model, is used by means of which the data is subdivided into many clusters.
- 4. Method according to claim 1, characterized in that a clustering model based on a distance measurement is used, by means of which the data is likewise subdivided into a plurality of clusters.
- 5. Method according to claim 3 or 4, characterized in that the considered data is restricted to the data contained in one cluster or a number of clusters.
- 6. Method according to claim 5, characterized in that the evaluation can be restricted to such clusters in which the data belonging to the specific states of variables occurs with at least one specific relative frequency.
- 7. Method according to one of the claims 4 to 6, characterized in that the data belonging to a cluster is stored on a data carrier in a way appropriate to the cluster affiliation.
- 8. Method according to one of the previous claims, characterized in that database reporting methods or OLAP methods are further used to determine the relative frequencies and expected values of the states of variables.
- 9. Method according to claim 8, characterized in that database reporting methods or OLAP methods are used if a test variable assumes or exceeds a predetermined value.
- 10. Method for the automatic, software-driven statistical evaluation of large amounts of data that is to be assigned to statistical variables in a database, in particular, contained in one or several clusters, which is characterized in that
the data is subdivided into many clusters by a clustering model based on distance measurement and, if required, the considered data is restricted to the data contained in one cluster or several clusters, and database reporting methods or OLAP methods are used to determine the relative frequencies and expected values of the states of variables.
- 11. Application of the method according to one of the previous claims for the statistical evaluation of customer data, in particular, in the Web reporting/Web mining area and in customer relationship management systems.
- 12. Application of the method according to one of the previous claims for the statistical evaluation of environmental databases, medical databases or genome databases.
Priority Claims (1)
Number: 101 27 914.0 | Date: Jun 2001 | Country: DE
PCT Information
Filing Document: PCT/DE02/01745 | Filing Date: 5/15/2002 | Country: WO