The present invention relates to information acquisition from a large scale database, and more particularly to a computer executable dimension reduction method, a program for causing a computer to perform the dimension reduction method, a dimension reduction device, and an information retrieval engine using the dimension reduction device, in which dimension reduction that depends on the document data stored in the database is enabled while saving computer hardware resources.
Along with the remarkable development of computer environments in recent years, techniques for finding necessary knowledge information in large scale databases via the Internet or an intranet, including so-called information retrieval, clustering, and data mining, have become increasingly important. Given a corpus of large scale document data, a method that provides information retrieval or clustering (document classification) efficiently and precisely makes a great contribution to knowledge retrieval in databases, where data accumulates ever faster as networks expand.
The following Non-patent documents are considered:
[Non-Patent Document 1]
Kenji Kita, Kazuhiko Tsuda, Masamiki Shishibori, Information retrieval algorithm, Kyoritsu Shuppan, 2002
[Non-Patent Document 2]
Richard K. Belew, Finding Out About, Cambridge University Press, Cambridge, UK, 2000
[Non-Patent Document 3]
G. Salton and M. McGill, Introduction to Modern Information Retrieval, McGraw-Hill, 1983
[Non-Patent Document 4]
Scott Deerwester, et al., “Indexing by Latent Semantic Analysis”, Journal of the American Society for Information Science, Vol. 41, (6), 391-407, 1990
[Non-Patent Document 5]
Masaki Aono, Mei Kobayashi, “Retrieval and Visualization of Large Scale Document Data by Dimension Reduction based on Vector Space Model”, Information Processing Society of Japan, Multimedia and Distributed Processing Research Meeting, 2002-DPS-108, pp. 79-84, June, 2002
[Non-Patent Document 6]
Minoru Sasaki, Kenji Kita, “Dimension Reduction of Vector Space Information Retrieval Model with Random Projection”, Natural Language Processing, Vol. 8, No. 1, pp. 5-19, 2001
[Non-Patent Document 7]
Mei Kobayashi, Masaki Aono, “Covariance matrix analysis for mining major and minor clusters”, 5-th International Congress on Industrial and Applied Mathematics (ICIAM), Sydney, Australia, pp. 188, July 2003
[Non-Patent Document 8]
K. V. Mardia, J. T. Kent and J. M. Bibby, Multivariate Analysis, Academic Press, London, 1979
[Non-Patent Document 9]
Dimitris Achlioptas, “Database-friendly Random Projections”, In Proc. ACM Symposium on the Principles of Database Systems, pp. 274-281, 2001
[Non-Patent Document 10]
Ella Bingham and Heikki Mannila, “Random projections in dimensionality reduction: Applications to image and text data”, Proc. ACM SIGKDD, pp. 245-250, San Francisco, Calif., USA, 2001
Firstly, various models have been proposed for information retrieval. For example, a so-called Query-by-Terms method of information retrieval is assumed; in the case of retrieving a document whose representation fully coincides with the query, a full text retrieval model may be suitable (non-patent document 1). On the other hand, when the information retrieval is similarity retrieval or conceptual retrieval, a so-called Query-by-Example is assumed, and a content retrieval model is effectively employed; the same model can also be applied to clustering at the same time. A vector space model is effective as an analytical model commonly employed for either kind of information retrieval (non-patent document 2). The conventional techniques referred to or employed in this invention are outlined below.
(1) Vector Space Model
In a vector space model (VSM), each document contained in a document corpus is modeled by a vector over a set of keywords. As methods for weighting the keywords in this modeling, a simple Boolean method that represents by a single bit whether or not a keyword is contained, and a TF-IDF method based on the appearance frequency of a keyword in a document or in the whole document collection, are well known (non-patent document 2). In the VSM, the document corpus is represented as an M×N numerical matrix, the so-called document keyword matrix, where the number of documents is M and the number of keywords is N (non-patent document 3).
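As an illustration of the weighting described above, a TF-IDF weighted document keyword matrix can be built as in the following minimal pure-Python sketch. This is not the implementation of the invention; the toy documents, keyword list, and the particular TF-IDF variant (tf multiplied by log(M/df)) are illustrative assumptions.

```python
import math

def tfidf_matrix(documents, keywords):
    """Build an M x N document-keyword matrix with TF-IDF weights.

    documents : list of M token lists; keywords : list of N keywords.
    Entry (i, j) = tf(i, j) * log(M / df(j)), a common TF-IDF variant.
    """
    M = len(documents)
    # document frequency: number of documents containing each keyword
    df = [sum(1 for doc in documents if kw in doc) for kw in keywords]
    matrix = []
    for doc in documents:
        row = []
        for j, kw in enumerate(keywords):
            tf = doc.count(kw)                       # term frequency in this document
            idf = math.log(M / df[j]) if df[j] else 0.0
            row.append(tf * idf)
        matrix.append(row)
    return matrix

# hypothetical toy corpus
docs = [["cat", "dog", "cat"], ["dog", "fish"], ["cat", "fish", "bird"]]
kws = ["cat", "dog", "fish", "bird"]
A = tfidf_matrix(docs, kws)
```

A keyword appearing in every document receives weight zero, while a rare keyword such as "bird" is weighted up, which is the usual motivation for TF-IDF over the Boolean weighting.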
(2) Dimension Reduction Technique
To enhance the retrieval efficiency, it is common practice that the dimension of keyword vector is reduced to a much smaller dimension k than N in the M×N numerical matrix (hereinafter referred to as A) of the document corpus. For this purpose, there are a Latent Semantic Indexing (LSI) method as proposed by Deerwester et al. (non-patent document 4) and a Covariance Matrix (COV) Method as proposed by the present inventors (non-patent document 5, non-patent document 1, non-patent document 6, non-patent document 7, non-patent document 8).
With the LSI method, the given, normally rectangular matrix A is decomposed by singular value decomposition, and the k singular vectors with the largest singular values are selected to reduce the dimension. With the COV method, a covariance matrix C is generated from the matrix A. Since the covariance matrix C is an N×N symmetric matrix, it is easily calculated at high precision using an eigenvalue decomposition, and the dimension reduction is performed by selecting the k eigenvectors with the largest eigenvalues. The COV method has the feature that highly correlated data relatively easily forms a cluster, because the covariance matrix C itself already reflects the correlation between keywords to some extent.
Besides, another method for reducing the dimension of a huge numerical matrix is the Random Projection (hereinafter referred to as RP) method. The RP method (non-patent documents 9 and 10) is primarily employed in the fields of LSI design and image noise removal, in which an N×k dimensional random matrix R is first generated and multiplied by the matrix A to reduce the dimension. In this case, it is unnecessary to perform a singular value decomposition or eigenvalue decomposition of a huge numerical matrix, so the dimension reduction is necessarily calculated faster and with a smaller amount of computer hardware resources. However, the RP method has the problem that the cluster distribution within the documents is not reflected, because the random matrix R is generated regardless of the data accumulated in the database. That is, there is a very high possibility that the dimension reduced matrix may not reflect the cluster structure.
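For comparison, the RP method can be sketched as follows. This pure-Python sketch uses the sparse entry distribution proposed by Achlioptas in non-patent document 9, where each entry is sqrt(3) times +1, 0, or -1 with probabilities 1/6, 2/3, and 1/6; the function names and toy sizes are illustrative. Note that the random matrix is generated without any reference to the stored data, which is exactly the weakness discussed above.

```python
import random

def random_projection_matrix(n, k, seed=0):
    """N x k random matrix with Achlioptas-style sparse entries:
    sqrt(3) * (+1 with prob 1/6, 0 with prob 2/3, -1 with prob 1/6)."""
    rng = random.Random(seed)
    scale = 3 ** 0.5
    R = []
    for _ in range(n):
        row = []
        for _ in range(k):
            u = rng.random()
            row.append(scale if u < 1 / 6 else (-scale if u < 1 / 3 else 0.0))
        R.append(row)
    return R

def reduce_dimension(A, R):
    """A' = A R : each row of A (length N) becomes a length-k row."""
    return [[sum(a[i] * R[i][j] for i in range(len(R))) for j in range(len(R[0]))]
            for a in A]
```

Because R is data independent, the same projection is applied whether a document belongs to a major cluster or a minor cluster, so no cluster distribution information survives the reduction.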
In most cases, the major clusters can be retrieved even when the retrieval engine is not highly specialized. In addition, the person performing the information retrieval is often interested in clusters of data with a small existence percentage, that is, non-major clusters (hereinafter referred to as minor clusters). In this regard, the RP method, though it allows calculation at high speed with modest resources, has the inconvenience that the dimension is reduced without referring to the document data, so the cluster distribution information within the documents is discarded and it is not assured that major and minor clusters are detected in accordance with their distribution. Therefore, the RP method could be used for keyword retrieval, but did not provide enough information for semantic analysis or for information retrieval represented by similarity retrieval.
Up to now, no information acquisition method, dimension reduction device, retrieval engine comprising a dimension reduction device, or computer program has been provided that satisfies precision, high speed, and resource saving at the same time; hence such an information acquisition method, retrieval engine, and computer program are needed.
Therefore, it is an aspect of this invention to provide information acquisition methods, apparatus, and systems satisfying precision, high speed, and resource saving at the same time, and a retrieval engine.
In an example embodiment of this invention, an M×N numerical matrix is generated from the data stored in the database, and the M data vectors are shuffled randomly. Thereafter, the M data vectors are divided into k chunks having a roughly equal number of vectors. A non-normalized basis vector is calculated from the vectors included in each chunk, whereby k non-normalized basis vectors are generated, one per chunk. For a document keyword numerical matrix A in which the number of documents is M and the total number of keywords is N, the k non-normalized basis vectors generated by averaging the document vectors within each chunk are made orthogonal to provide a k×N dimensional random average (RAV) matrix. The transposed matrix tRAV of N×k dimensions is multiplied by the numerical matrix A to generate a dimension reduction matrix A′ of M×k dimensions in which the keyword dimension is reduced. A retrieval engine of the invention calculates a query vector from a retrieval query input by the user, and calculates its inner product with the generated dimension reduction matrix A′. Since the inner product value corresponds to the degree of similarity between the query vector and a document, the inner product values are sorted in order of size and stored in the computer apparatus as the retrieval result with a ranking such as top 10 or top 100.
In another aspect of this invention, the random average matrix RAV is generated based on the data vectors stored in the database without performing the eigenvalue computation or singular value computation for the large scale numerical matrix. Therefore, the computational efficiency is greatly improved in terms of the computation speed and the capability and memory capacity of the processing apparatus. In addition, the random average matrix RAV is computed based on the document data stored in the database, and is applicable to the automatic classification of documents within the database, similarity retrieval, and clustering computation.
That is, the invention provides a dimension reduction method for reducing the dimension of a numerical matrix with a computer to provide the information.
Another aspect of this invention provides a computer executable program for performing a dimension reduction method for reducing the dimension of a numerical matrix with a computer to provide a dimension reduction matrix or the index data for dimension reduction.
Another aspect of this invention provides a dimension reduction device for reducing the dimension of a numerical matrix with a computer to provide a dimension reduction matrix or the index data for dimension reduction.
Another aspect of this invention provides a retrieval engine for enabling a computer to provide the information.
The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings, in which:
The present invention provides methods, systems and apparatus for dimension reduction for reducing the dimension of a numerical matrix with a computer to provide the information.
It provides for information acquisition from a large scale database. Included are a computer executable dimension reduction method, a program for causing a computer to perform the dimension reduction method, a dimension reduction device, and an information retrieval engine using the dimension reduction device, in which dimension reduction that depends on the document data stored in the database is enabled while saving computer hardware resources.
This invention has been achieved in light of the above-mentioned problems of the conventional techniques. It has been noted that basis vectors useful for dimension reduction to k dimensions can be created randomly without depending on the size of the data accumulated in the database. Thus, the present inventors have completed this invention on the basis of the idea that reliable knowledge acquisition is enabled, with high retrieval precision for both major and minor clusters at high speed and high efficiency, if the data vectors can be randomized while the cluster distribution latent in the data accumulated in a large scale database is preserved.
More specifically, in this invention, an M×N numerical matrix is generated from the data stored in the database, and the M data vectors are shuffled randomly. Thereafter, the M data vectors are divided into k chunks having a roughly equal number of vectors. A non-normalized basis vector is calculated from the vectors included in each chunk, whereby k non-normalized basis vectors are generated, one per chunk.
For a document keyword numerical matrix A in which the number of documents is M and the total number of keywords is N, the k non-normalized basis vectors generated by averaging the document vectors within each chunk are made orthogonal to provide a k×N dimensional random average (RAV) matrix. The transposed matrix tRAV of N×k dimensions is multiplied by the numerical matrix A to generate a dimension reduction matrix A′ of M×k dimensions in which the keyword dimension is reduced. A retrieval engine of the invention calculates a query vector from a retrieval query input by the user, and calculates its inner product with the generated dimension reduction matrix A′. Since the inner product value corresponds to the degree of similarity between the query vector and a document, the inner product values are sorted in order of size and stored in the computer apparatus as the retrieval result with a ranking such as top 10 or top 100.
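The shuffling and chunk averaging stage described above can be sketched in pure Python as follows. This is an illustrative approximation under the assumption that the matrix is held as a list of row vectors; the function names are hypothetical, and any remainder rows beyond k times floor(M/k) are folded into the last chunk so that no document is discarded.

```python
import random

def shuffle_vector(M, seed=0):
    """Random permutation of document identifiers 0..M-1 (the shuffle vector)."""
    ids = list(range(M))
    random.Random(seed).shuffle(ids)
    return ids

def chunk_averages(A, pi, k):
    """Average the shuffled document vectors chunk by chunk.

    A  : M x N document-keyword matrix (list of rows)
    pi : shuffle vector (a permutation of the row indices)
    k  : number of chunks = number of non-normalized basis vectors
    """
    M, N = len(A), len(A[0])
    size = M // k                       # floor(M/k) vectors per chunk
    basis = []
    for c in range(k):
        start = c * size
        end = M if c == k - 1 else start + size   # last chunk absorbs the remainder
        d = [0.0] * N
        for p in pi[start:end]:
            for j in range(N):
                d[j] += A[p][j]
        n = end - start
        basis.append([x / n for x in d])          # non-normalized basis vector
    return basis
```

Because every document contributes to exactly one chunk average, each of the k non-normalized basis vectors samples the whole corpus, major and minor clusters alike.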
In this invention, the random average matrix RAV is generated based on the data vectors stored in the database without performing the eigenvalue computation or singular value computation for the large scale numerical matrix. Therefore, the computational efficiency is greatly improved in terms of the computation speed and the capability and memory capacity of the processing apparatus. In addition, the random average matrix RAV is computed based on the document data stored in the database, and is applicable to the automatic classification of documents within the database, similarity retrieval, and clustering computation.
That is, the invention provides a dimension reduction method for reducing the dimension of a numerical matrix with a computer to provide the information, comprising: a step of generating the shuffle information by selecting randomly a data vector stored in a database and storing the shuffle information in a memory; and a step of reducing the dimension of the numerical matrix by the basis vectors that are made orthogonal using the shuffle information.
In the invention, the step of generating the shuffle information comprises a step of storing an identification value of the data vector selected randomly in a memory in the selected order and a step of generating a shuffle vector, and the step of reducing the dimension comprises a step of reading the numerical elements of the data vector specified by the shuffle vector from the database, and calculating an average value for every allocated chunk to generate the non-normalized basis vectors that are stored in a memory, a step of making the non-normalized basis vectors orthogonal to generate the normalized basis vectors that are stored as a random average matrix in a memory, and a step of multiplying the random average matrix by the data vector to generate a dimension reduction matrix with reduced dimension or the index data for dimension reduction that is stored in a storing part. Also, in the invention, the number of the chunks corresponds to the number of basis vectors. Also, in the invention, the step of calculating the average value comprises a step of averaging the elements of the data vector for every floor (M/k) with the number of data vectors (M) and the number of basis vectors (k).
Also, this invention provides a computer executable program for performing a dimension reduction method for reducing the dimension of a numerical matrix with a computer to provide a dimension reduction matrix or the index data for dimension reduction, the method comprising: a step of generating the shuffle information by selecting randomly a data vector stored in a database and storing the shuffle information in a memory; and a step of reducing the dimension of the numerical matrix by the basis vectors that are made orthogonal using the shuffle information.
Also, the invention provides a dimension reduction device for reducing the dimension of a numerical matrix with a computer to provide a dimension reduction matrix or the index data for dimension reduction, the device comprising: a processing part for generating the shuffle information by selecting randomly a data vector stored in a database to store the shuffle information in a memory; and a processing part for generating a random average matrix with the basis vectors that are made orthogonal using the shuffle information, and generating a dimension reduction matrix or the index data for dimension reduction using the random average matrix to store the dimension reduction matrix or the index data.
In the dimension reduction device of the invention, the processing parts comprise a shuffle vector generating part for generating the shuffle information as a shuffle vector by storing an identification value of the data vector selected randomly in a memory in the selected order and a non-normalized basis vector generating part for generating the non-normalized basis vectors that are stored in a memory by reading the numerical elements of the data vector specified by the shuffle vector from the database, and calculating an average value for every allocated chunk.
In the dimension reduction device of the invention, the processing parts comprise a random average matrix generating part for generating a random average matrix with the normalized basis vectors obtained by making the non-normalized basis vectors orthogonal, and a dimension reduction data storing part for generating a dimension reduction matrix with reduced dimension or the index data for dimension reduction that is stored in a storing part by reading the random average matrix, and multiplying the random average matrix by the data vector.
Also, the invention provides a retrieval engine for enabling a computer to provide the information, comprising: a processing part for generating the shuffle information by selecting randomly a data vector stored in a database to store the shuffle information in a memory; a processing part for generating a random average matrix with the basis vectors that are made orthogonal using the shuffle information, and generating a dimension reduction matrix using the random average matrix to store the dimension reduction matrix; a query vector storing part for generating and storing a query vector; an inner product calculating part for calculating an inner product between the dimension reduction matrix and the query vector; and a retrieval result storing part for storing a score of the calculated inner product.
In the retrieval engine of the invention, the processing parts comprise a shuffle vector generating part for generating the shuffle information as a shuffle vector by storing an identification value of the data vector selected randomly in a memory in the selected order and a non-normalized basis vector generating part for generating the non-normalized basis vectors that are stored in a memory by reading the numerical elements of the data vector specified by the shuffle vector from the database, and calculating an average value for every allocated chunk.
In the retrieval engine of the invention, the processing parts comprise a random average matrix generating part for generating a random average matrix with the normalized basis vectors obtained by making the non-normalized basis vectors orthogonal, and a dimension reduction data storing part for generating a dimension reduction matrix with reduced dimension or the index data for dimension reduction that is stored in a storing part by reading the random average matrix, and multiplying the random average matrix by the data vector.
In an advantageous embodiment of the invention, the data vector comprises a number vector in which a document is digitized using keywords.
Advantageous embodiments of the present invention will be described below with reference to the accompanying drawings, but the invention is not limited to the embodiments as shown in the drawings.
Consequently, a number vector whose elements digitize the title or header is generated for the document data, as shown in
The data vector has an identification value “Id” that is the same as that of the corresponding document data, or related with it for reference, as shown in
In this case, a specific basis vector would depend on the storage or generation history of the data. Thus, in this invention, the data vectors making up the document keyword matrix as shown in
In the computation process, when the shuffle vector is referenced, the shuffle vector is sequentially read from the top or end, in which the corresponding data vector is referred to, and the elements of the corresponding data vector are averaged. Also, in this invention, a chunk is set for every predetermined number of elements of the shuffle vector, and the reference of the shuffle vector is made for every number of data vectors assigned to the chunk. The number of chunks corresponds to the number of basis vectors k in this invention.
At step S16, the elements of the data vector are read for every chunk, and integrated in an appropriate memory to calculate an average value. This processing is repeated by the number of keywords N, whereby the non-normalized basis vectors di (1≦i≦k) are calculated for every chunk, and stored in memory. At step S18, the stored non-normalized basis vectors di are read, and made orthogonal, whereby the basis vectors b1, b2, b3, . . . , bk are calculated and stored in an appropriate memory.
Moreover, at step S20, the calculated basis vectors bi are read, arranged sequentially in an appropriate memory, and stored as the k×N dimensional random average matrix RAV. The RAV is produced through this process of referring to and averaging the data vectors for every chunk. Statistically, the ratio of major clusters to minor clusters included in the original document keyword matrix is therefore reflected at almost the same ratio in the basis vectors of the RAV.
Therefore, when the dimension reduction is made in this invention, the detectability of clusters, from major to minor, is not appreciably decreased. Also, the orthogonalization at step S18 is performed sequentially using a modified Gram-Schmidt (MGS) method, for example.
In block B22, a chunk of floor(M/k) elements of the given shuffle vector is assigned, and the average value of the j-th elements of the corresponding data vectors is calculated. aπ(p),j in block B22 of
With the MGS method in block B24, the number of calculated non-normalized basis vectors is counted at the first stage until at least three non-normalized basis vectors are accumulated in the specific embodiment. In Block B24, at the time when a predetermined number of non-normalized basis vectors are accumulated, the non-normalized basis vectors di are made orthogonal by applying the MGS method, whereby the normalized basis vectors are calculated and stored in memory. Thereafter, in block B26, the processing chunk is incremented such as i=i+floor(M/k), in which the calculation of the non-normalized basis vectors in block B22 and the sequential orthogonal processing in block B24 are performed again. Finally, the k normalized basis vectors are generated corresponding to all the chunks. Then, the procedure is ended.
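The MGS orthogonalization of blocks B24 through B26 and the final projection can be sketched as follows. This is an illustrative pure-Python version, assuming the non-normalized basis vectors are available as lists; the batching of vectors described above is omitted for brevity, and the function names are hypothetical.

```python
def mgs_orthonormalize(vectors):
    """Modified Gram-Schmidt: turn the k non-normalized basis vectors into
    orthonormal rows of the k x N random average (RAV) matrix."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:                          # subtract projections one at a time (MGS)
            dot = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - dot * bi for wi, bi in zip(w, b)]
        norm = sum(wi * wi for wi in w) ** 0.5
        if norm > 1e-12:                         # skip (near-)linearly-dependent vectors
            basis.append([wi / norm for wi in w])
    return basis

def project(A, rav):
    """A' = A * t(RAV): reduce each N-dimensional row of A to k dimensions."""
    return [[sum(a[j] * b[j] for j in range(len(a))) for b in rav] for a in A]
```

Subtracting the projections one basis vector at a time, rather than all at once as in classical Gram-Schmidt, is what makes MGS numerically stable enough for this sequential use.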
The number of chunks k may be automatically set corresponding to the number of data by the system, or set by the user who inputs the number of basis vectors into the system, and appropriately selected in accordance with a user's preference or the apparatus environment.
With the RAV method of the invention, data from major clusters to minor clusters are employed without exception to determine the basis vectors. Therefore, it is statistically assured that every basis vector contains an element of each cluster, whereby a dimension reduction matrix applicable to data mining or similarity retrieval, or the index data for dimension reduction, is provided while preserving high speed dimension reduction. In this invention, the index data means the set of identification values required to perform the dimension reduction and appropriately call the data vectors in the corresponding RAV process, or the data for generating the data vectors of reduced dimension on the fly when an inner product calculating process is called using the index data.
On the other hand, with the RP method as shown in
At step S34, the dimension reduced data, that is, the data vectors of reduced dimension included in the dimension reduction matrix generated by the RAV method of the invention, or the index data, is read into the buffer memory to calculate the inner product with the retrieval query. At step S36, the generated score is stored in a hash table created in an appropriate memory, keyed by the identification value of the data vector. At step S38, the results are sorted in descending order of score, and the retrieval result is displayed on the display screen. The retrieval result may be displayed in various ways, for example graphically using a graphical user interface, or on the screen as hyper text markup language (HTML) or extensible markup language (XML) in which the retrieved data vectors are hyperlinked using the identification value.
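The scoring and ranking of steps S34 through S38 can be sketched as follows; this is an illustrative pure-Python sketch with hypothetical function and variable names, in which a plain dictionary plays the role of the hash table.

```python
def retrieve(query_reduced, doc_reduced, top=10):
    """Score each dimension-reduced document vector against the reduced
    query vector by inner product and return the top-ranked (id, score) pairs."""
    scores = {}                                   # hash table: document id -> score
    for doc_id, d in enumerate(doc_reduced):
        scores[doc_id] = sum(q * x for q, x in zip(query_reduced, d))
    # sort in descending order of score and keep the ranking cutoff (e.g. top 10)
    ranked = sorted(scores.items(), key=lambda t: t[1], reverse=True)
    return ranked[:top]
```

For example, with three reduced document vectors and a reduced query, the document with the largest inner product is returned first, matching the top-10 or top-100 ranking described above.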
In the case where the computer apparatus 12 is employed as the stand alone retrieval engine, the user inputs the retrieval query via a predetermined graphical user interface (GUI) using the input/output unit 16 such as keyboard or mouse. Upon receiving the retrieval query, the computer apparatus 12 generates the query vector from the retrieval query, calculates the inner product between the data vector and the dimension reduction matrix, and performs the retrieval.
Also, in the case where the computer apparatus 12 is provided as the server, the computer apparatus 12 receives an HTTP request for retrieval via the network 26 and saves it in the buffer memory in the outside communication unit 28. Thereafter, a retrieval application program is initiated or called, and subsequently, the query vector is generated from the retrieval query transmitted from the user. Furthermore, the retrieval result is produced by performing the process as shown in
The function of the RAV processing part 32 will be described below. The RAV processing part 32 generates the shuffle vector as the shuffle information associated with the data in the database, not shown, and calculates the basis vectors according to the invention. The calculated basis vectors are sent to the random average matrix storing part 34 and stored in a predetermined format as the random average matrix RAV. Moreover, a dimension reduction matrix ARAV is calculated by multiplying the random average matrix RAV and the document keyword matrix. This ARAV matrix is stored in a dimension reduction data storing part 36, which is configured as a storage unit such as a hard disk, to calculate the inner product for the retrieval query.
Also, in this invention, the dimension reduction matrix ARAV need not be explicitly created; instead, the dimension reduction data storing part 36 may store, as the dimension reduction data, pairs of the identification value of the document keyword matrix serving as the index data and the identification value of the corresponding column vector of the random average matrix RAV corresponding to the basis vectors. On the other hand, the query vector stored in the query vector storing part 40 and the dimension reduced data vector, or the index data, in the dimension reduction data storing part 36 are read into the inner product calculating part 38 to calculate the inner product, and the calculated inner product score is stored in the retrieval result storing part 42. When the index data is employed, the inner product calculating part 38 creates the data vectors of reduced dimension directly from the index data on the fly, and these are used to calculate the inner product. Also, in this invention, a dimension reduced vector generating part may be provided on the input side of the inner product calculating part 38, downstream of the dimension reduction data storing part, and the generated dimension reduced vector is input into the inner product calculating part 38 in
The functional blocks of the invention may be configured as a software block in a computer executable program read and executed by the computer. The computer executable program is described in various languages, including C, C++, FORTRAN, and JAVA®.
Specific examples of the invention will be described below in detail.
Comparative Examination With the Conventional Method
(1) Database Used in the Experiment
The database data had a size of 332,918 documents, and 56,300 keywords, in which the dimension reduction was made to 300 dimensions.
(2) Hardware Environment Used in the Experiment
The computer apparatus was IntelliStation (manufactured by IBM) with a Pentium 4 CPU at 1.7 GHz, and the operating system was Windows® XP.
(3) Computation Time
The computation time was compared between the RAV method and the COV method under the above-mentioned conditions. The results are shown in Table 1.
As seen from Table 1, the RAV method of the invention was about 30 times faster than the COV method. Also, the computation time scaled only proportionally to M in the RAV method, but roughly proportionally to the number of keywords (N) to the third power in the COV method. That is, it was revealed that the RAV method was superior in scalability of computation time to the conventional dimension reduction method.
(4) Precision
The precision of the RAV method of the invention was examined by measuring whether or not the top 10 or top 20 retrieved documents contained query keywords of quite small document frequency, df=49 or df=29. As a result, for keywords with df=49, the precision (precision value) was 100% for the top 10, and 75% or more for the top 20. The precision (precision value) and the recall value are defined in the following expression (1).
Numerical Expression 1
I. Recall
A measure of the ability of a system to present all relevant items: the number of relevant items retrieved divided by the total number of relevant items in the collection.
II. Precision
A measure of the ability of a system to present only relevant items: the number of relevant items retrieved divided by the total number of items retrieved.
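These standard definitions can be expressed as a short sketch, assuming binary relevance judgments; this follows the usual TREC-style ratios rather than any formula specific to the invention, and the function name is hypothetical.

```python
def precision_recall(retrieved, relevant):
    """Precision = |retrieved intersect relevant| / |retrieved|;
    Recall    = |retrieved intersect relevant| / |relevant|."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

For instance, if 4 documents are retrieved and 2 of the 3 relevant documents are among them, precision is 0.5 and recall is 2/3.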
(1) Comparative Examination Between RAV Method and RP Method
For the same query, the recall-precision curve was computed by the RAV method of the invention and by the RP method, using the measure defined in Text Research Collection Volume 5, April 1997, http://trec.nist.gov/. At this time, the dimension reduction matrix R in the RP method was given in the following expression (2).
(2) Results
Typical results obtained by the RAV method and the RP method are shown in
Computer Resource Consumption
Computation experiments were conducted under the same conditions, in which the memory consumption amounts in run time were compared. The following Table 2 shows the memory use amounts as measurement data for the methods.
As shown in Table 2, the method of the invention does not perform a large scale singular value or eigenvalue decomposition, whereby the storage space in the computer apparatus is greatly decreased. Also, since the required amount of storage space at run time was smaller than that of the RP method, excellent results were obtained.
Minor Cluster Detection Ability
(1) Experiment Contents
Experiments comparing the RAV method of the invention and the RP method from the standpoint of minor cluster detection were conducted using the same database and under the same conditions as in example 2. The dimension reduction process used 300 dimensions, and the retrieval queries query1=&lt;Michael Jordan, basketball&gt; and query2=&lt;McEnroe, tennis&gt;, which were confirmed to belong to a minor cluster, were used. The percentage of the top-ranked documents matching the retrieval queries query1 and query2 was then compared between the RAV method and the RP method.
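As an illustrative sketch (not part of the claimed invention), the compared measure, namely the percentage of the top-k ranked documents containing all terms of a query, may be computed as follows when each document is represented by its set of keywords; the function name is hypothetical:

```python
def hit_percentage(ranked_docs, query_terms, k):
    """Percentage of the top-k documents that contain every query term.

    ranked_docs: ranked list of documents, each a set of keywords
    query_terms: the keywords of one query, e.g. {"McEnroe", "tennis"}
    """
    top = ranked_docs[:k]
    hits = sum(1 for doc in top if all(t in doc for t in query_terms))
    return 100.0 * hits / len(top) if top else 0.0
```

A higher percentage for a minor-cluster query indicates that the dimension reduction has preserved that minor cluster in the reduced space.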
(2) Experiment Results
The obtained experiment results are shown in Table 3 below.
As seen from Table 3, the RAV method has better minor cluster detection ability and higher precision than the RP method.
As described above, with this invention, it is possible to prevent wasteful consumption of computer resources with high efficiency, and to acquire information with a detection precision that is stable from the major clusters to the minor clusters.
The present invention can be realized in hardware, software, or a combination of hardware and software. It may be implemented as a method having steps to implement one or more functions of the invention, and/or it may be implemented as an apparatus having components and/or means to implement one or more steps of a method of the invention described above and/or known to those skilled in the art. A visualization tool according to the present invention can be realized in a centralized fashion in one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system—or other apparatus adapted for carrying out the methods and/or functions described herein—is suitable. A typical combination of hardware and software could be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein. The present invention can also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which—when loaded in a computer system—is able to carry out these methods.
Computer program means or computer program in the present context include any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after conversion to another language, code or notation, and/or after reproduction in a different material form.
Thus the invention includes an article of manufacture which comprises a computer usable medium having computer readable program code means embodied therein for causing one or more functions described above. The computer readable program code means in the article of manufacture comprises computer readable program code means for causing a computer to effect the steps of a method of this invention. Similarly, the present invention may be implemented as a computer program product comprising a computer usable medium having computer readable program code means embodied therein for causing a function described above. The computer readable program code means in the computer program product comprises computer readable program code means for causing a computer to effect one or more functions of this invention. Furthermore, the present invention may be implemented as a program storage device readable by machine, tangibly embodying a program of instructions executable by the machine to perform method steps for causing one or more functions of this invention.
It is noted that the foregoing has outlined some of the more pertinent objects and embodiments of the present invention. This invention may be used for many applications. Thus, although the description is made for particular arrangements and methods, the intent and concept of the invention is suitable and applicable to other arrangements and applications. It will be clear to those skilled in the art that modifications to the disclosed embodiments can be effected without departing from the spirit and scope of the invention. The described embodiments ought to be construed to be merely illustrative of some of the more prominent features and applications of the invention. Other beneficial results can be realized by applying the disclosed invention in a different manner or modifying the invention in ways known to those familiar with the art.
Number | Date | Country | Kind |
---|---|---|---|
2003-282690 | Jul 2003 | JP | national |