The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:
a shows part of a GO board with a chain of black stones;
b shows the part of the GO board of
FIGS. 4a, 4b and 4c each show an example of an eye in the game of GO;
Like reference numerals are used to designate like parts in the accompanying drawings.
The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
The term “board position” is used to refer to a square, grid intersection or other playing location used in a board game. For example, in GO, board positions are grid intersections.
The term “board configuration” is used to refer to information specifying the state of play at a particular stage of a board game. For example, in GO, this comprises details of locations and colors of all stones on the board.
The embodiments now described relate to the game of GO. However, it is noted that many of these embodiments may also be applicable to other territory board games where it is required to automatically determine a score for a given board configuration. Some information about the game of GO is now given to aid understanding of the embodiments described with reference to GO.
The game of Go originated in China over 4000 years ago. Its rules are straightforward. Two players, Black and White, take turns to place stones on the intersections of an N×N grid (usually N=19 but smaller boards are in use as well). All the stones of each player are identical. Players place their stones in order to create territory by occupying or surrounding areas of the board. The player with the most territory at the end of the game is the winner. A stone is captured if it has been completely surrounded (in the horizontal and vertical directions) by stones of the opponent's color.
For example,
As mentioned above, calculating the score is not straightforward because it involves assessing whether stones on the board are alive or dead. In addition, so called seki positions should be identified if possible. The concepts of “alive” and “dead” and seki positions in the game of GO are explained in detail in many publicly available materials such as “The Game of GO” referred to above. A brief explanation of these concepts is now given.
Dead stones can be thought of as those which are controlled by the opponent; they can be captured by the opponent but have not yet been captured. Chains and clusters of stones can be thought of as dead. As mentioned above, a chain of stones is solidly connected along a line of the board and shares its liberties. A cluster of stones is also solidly connected along the lines of the board and can only be captured as a unit. A group of stones is a loose cluster where the stones have not yet finished connecting up but are likely to be able to do so later in the game.
We use the term “eye” to refer to empty board positions inside the boundary of a chain of stones. It is possible for groups of stones to be formed which have two separate eyes or the ability to create two separate eyes. This type of group is said to be “alive”. In contrast, stones which are unable to form two eyes or connect up to other living stones are “dead”.
A seki position exists when two groups of stones of different color coexist and any move made disadvantages the side making it. In this situation neither player has an incentive to move.
An example of our method for automatically scoring a given board configuration in the game of GO or another suitable territory board game is now given with reference to
A board configuration to be scored is received (box 30) and an empty board position corresponding to a legal move of a player whose move is next is then selected (box 31). Details of how this selection is made are given below. A move is then executed according to the rules of the game at the selected board position. For example, as a result of the move, playing pieces may be captured. This process is repeated until no further legal moves are possible (box 33) and the resulting terminal board configuration is recorded (box 34). The process is repeated (boxes 31 to 34) until a plurality, n, of terminal board configurations are obtained (box 35). Any suitable value of n can be used and this will vary depending on the selection criteria used when selecting the empty board position for the next move (box 31).
For each board position, the method then calculates the proportion of the n terminal board configurations in which that position was associated with a given player (box 36). A “winning” player for each board position can then be determined using a threshold applied to the proportion. Any suitable threshold can be used.
If the board configuration being scored is from the end of a game the winning player can then be determined by combining the results for each board position. Any suitable method of combining the results can be used. If the board configuration being scored is not from the end of a game an estimate of the likely outcome of the game is obtained in the form of a numerical score. The results can also be thought of as a probability for each board position as to whether it will be black or white at the end of the game.
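By way of illustration, the following sketch shows one possible implementation of the sampling loop of boxes 30 to 36. It is a minimal sketch only: the board object and the legal_moves, play_move and owner helpers are assumed to be supplied by a game engine and are not defined in the present description.

```python
import random

def score_by_playouts(board, player_to_move, n, legal_moves, play_move, owner):
    """Play n continuations to completion and return, for each board position,
    the proportion of terminal configurations in which it is associated with
    Black (boxes 30-36).  legal_moves is assumed to exclude moves that fill the
    mover's own eyes, as discussed later in the description."""
    counts = {pos: 0 for pos in board.positions}
    for _ in range(n):
        b, player, passes = board.copy(), player_to_move, 0
        while passes < 2:                      # stop when neither side has a legal move
            moves = legal_moves(b, player)
            if moves:
                b = play_move(b, player, random.choice(moves))   # captures handled by the engine
                passes = 0
            else:
                passes += 1
            player = "white" if player == "black" else "black"
        for pos in board.positions:            # record ownership in the terminal configuration
            if owner(b, pos) == "black":
                counts[pos] += 1
    return {pos: c / n for pos, c in counts.items()}
```

Applying a threshold (for example 80%, as discussed below) to the returned proportions then yields a "winning" player for each board position.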
As mentioned above, a “winning” player for each board position can be determined using a threshold applied to the calculated proportion. Any suitable threshold can be used. For example, with a threshold of 80%, if the board position is occupied by black in at least 80% of the n terminal board configurations, then the board position is won by black.
It is also possible to identify situations where the player controlling the board position is not clear cut; for example, suppose that the proportion calculated for a board position is 53%. In this situation the scoring system may be arranged to prompt the players, via a graphical user interface or other means, for a decision as to which player should control the board position. For example, if the proportion for a particular board position is within a specified range, the system is arranged to prompt one or more of the players for information as to which player the particular board position is to be associated with.
In some embodiments the scoring system comprises a graphical user interface arranged to present, to one or more players of the game, information about the calculated proportions. For example, a graphical representation of the board and playing pieces is presented. Marks may be superimposed on this graphical representation, where the size of each mark represents the calculated proportion and its color the selected player. For example, if the board configuration being scored is not from the end of a game, an estimate of the probability for each board position as to whether it will be black or white at the end of the game is found. This information is advantageously presented using colored squares or other marks at the board positions, with the size of the marks indicating the estimated probability. If the next move is black's, then black marks can be used to indicate possible future moves of black together with an indication of how far each board position is already under the control of black. In this way the marks provide a hint to the player. For example, a move to a board position that is already highly likely to be controlled by black at the end of the game is arguably less advantageous than an alternative move with a lower probability of being black at the end of the game.
In the game of GO, in order to ensure that a terminal board configuration is eventually reached, it is necessary to avoid filling in one's own eyes. Otherwise a cycle or loop can be entered during the selection step (box 31) which prevents reaching a terminal board configuration. Therefore, the selection step may also comprise only selecting empty board positions which do not fill in eyes of the current player. For example, the step of selecting an empty board position further comprises selecting that board position such that it is not within an eye of the player whose move is current. We identify a potential eye as an empty board position whose four nearest neighbour board positions are occupied by the player whose move is current or are off-board positions. This is illustrated in
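A minimal sketch of this neighbour test is given below, assuming a hypothetical board.get(position) accessor that returns "black", "white", "empty" or "off" for off-board coordinates.

```python
def is_potential_eye(board, pos, player):
    """True if pos is empty and all four nearest neighbours are either off the
    board or occupied by player; such positions are excluded when selecting the
    next move for that player."""
    if board.get(pos) != "empty":
        return False
    x, y = pos
    return all(board.get(nb) in (player, "off")
               for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)))
```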
The process of selecting an empty board position (box 31) is made in a substantially random manner in some embodiments. This is advantageous in that the selection process is straightforward and computationally inexpensive.
In other embodiments the step of selecting an empty board position is made using a biased sampling technique. This involves more computation than making a random selection but typically enables fewer terminal board configurations to be determined (i.e. smaller value of n) whilst still achieving good results. This is because instead of sampling uniformly from the available moves the samples are biased such that some moves are more likely to be played than others. For example, an evaluation function for the game may be taken into account such that better moves are played more often than worse moves. As a consequence, the resulting sample may be more indicative of what would happen in a real game. While the purely random sampling typically requires anywhere between about 50 and 500 samples, the number may be reduced to 10 to 100 samples when using biased sampling.
Any suitable biased sampling technique can be used; for example, moves towards the middle of the board may be favoured over moves at the border.
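As an illustration of such a bias, the sketch below samples a move with probability proportional to a weight function; the centre-favouring weight shown is an assumed example rather than a formula given in this description.

```python
import random

def select_biased(moves, weight):
    """Sample one move with probability proportional to weight(move)."""
    return random.choices(moves, weights=[weight(m) for m in moves], k=1)[0]

def centre_weight(board_size):
    """Assumed example bias: weight decays with distance from the board centre."""
    centre = (board_size + 1) / 2.0
    return lambda move: 1.0 / (1.0 + abs(move[0] - centre) + abs(move[1] - centre))
```

Replacing the uniform random.choice of the earlier sketch with select_biased gives the biased-sampling variant.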
In a particular example we use learnt information about belief distributions associated with patterns corresponding to moves in a game. For example, the step of selecting an empty board position is made on the basis of learnt information about patterns corresponding to game moves. Details about these patterns and how the belief distribution information is learnt are given later in this document. In this example, the step 31 of
As mentioned above, a particular problem when scoring the game of GO involves identifying seki positions. We provide an accurate, simple and fast method for achieving this. We recognise that seki positions are typically characterised by two chains of opposite colour whose life/death evaluations are anti-correlated and yield a proportion of about 50% under the method of
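One possible way to operationalise this characterisation is sketched below: for two opposite-colour chains, record a survival indicator per terminal configuration, and flag a candidate seki when both survival proportions sit near 50% and the indicators are strongly anti-correlated. The 40-60% band and the correlation threshold are assumed illustrative values, not figures from this description.

```python
def looks_like_seki(surv_a, surv_b, band=(0.4, 0.6), corr_threshold=-0.5):
    """surv_a, surv_b: lists of 0/1 survival indicators for two opposite-colour
    chains, one entry per terminal board configuration."""
    n = len(surv_a)
    pa, pb = sum(surv_a) / n, sum(surv_b) / n
    var_a = sum((a - pa) ** 2 for a in surv_a) / n
    var_b = sum((b - pb) ** 2 for b in surv_b) / n
    if var_a == 0.0 or var_b == 0.0:
        return False
    cov = sum((a - pa) * (b - pb) for a, b in zip(surv_a, surv_b)) / n
    corr = cov / (var_a ** 0.5 * var_b ** 0.5)
    return (band[0] <= pa <= band[1] and band[0] <= pb <= band[1]
            and corr <= corr_threshold)
```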
In another example, the scoring system is used in conjunction with an automated system for playing the game. For example, in order to assist with determining when to offer to end the game. In the past it has been very difficult for computer GO systems to offer to end the game at sensible times. We address this by using the calculated proportions for each available board position obtained as described above.
As mentioned above, our scoring system can provide an estimate of the probability for each board position as to whether it will be black or white at the end of the game. This information can be used to assess the volatility of groups, chains or clusters of playing pieces. That is, if a chain, group or cluster has a high probability of being controlled by one player then it has a low volatility. Otherwise its volatility is high and there is a reason to continue the game; i.e. to try to take control of the territory involved. If volatility is low over the board configuration then there is reason to offer to end the game. Thus we provide an automated system for playing a territory board game which is arranged to receive information about the current board configuration; access calculated proportions for board positions (as described above); and to determine whether to offer to end the game on the basis of the calculated proportions.
For example, the automated system might check whether there are any groups, clusters or chains of stones in the current board configuration which have a high volatility. That is, any groups, clusters or chains where there are one or more associated board locations where the calculated proportions are within a specified range. This can be done when a human player offers to end the game and/or after a specified number of moves.
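A sketch of such a volatility check follows; the 0.3 to 0.7 band is an assumed example of the "specified range", and each chain is represented simply as a collection of board positions.

```python
def volatile_chains(chains, proportions, low=0.3, high=0.7):
    """Return the chains that are still contested: a chain is volatile if any of
    its positions has a calculated proportion inside the specified range."""
    return [chain for chain in chains
            if any(low < proportions[pos] < high for pos in chain)]

# An automated player might decline to offer the end of the game while
# volatile_chains(...) is non-empty, and offer (or accept an offer) otherwise.
```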
The scoring apparatus optionally comprises a user interface 54 as described above. The selector, processor and user interface are provided using any suitable computer and display apparatus. For example, the game engine, user interface, selector and processor are provided using a personal computer or game console having suitable software arranged to carry out the methods described herein.
In some embodiments, the step of selecting an empty board position is made on the basis of learnt information about patterns corresponding to game moves. Details about these patterns and how the belief distribution information is learnt are now given together with details of an apparatus and method for playing a game using the belief distributions.
We define a pattern as an exact arrangement of stones (or other playing pieces) within a sub region of a board (or other defined playing space comprising a grid), centered on an empty location where a move is to be made. The possible sub-regions are preferably specified as pattern templates as described below. By focusing on exact local patterns for move prediction we achieve advantages. We are able to match patterns very efficiently because of their local range and because the matching procedure does not have to take into account wildcards, i.e., parts of the patterns that match to more than one possible value. As a result we can train our system on a very large number of games without requiring impractical processing capacity and time. Also, we can generate moves for play very quickly because we are able to match the patterns very efficiently.
In a preferred embodiment, the pattern templates are a nested sequence of increasing size so as to be able to use large patterns with greater predictive power when possible, but to be able to match smaller patterns when necessary.
We automatically generate and learn from the patterns in two distinct processes. Firstly we harvest sufficiently frequent patterns from historical game records and then we learn from those patterns. The historical game records are preferably taken from games involving expert players although this is not essential. In one group of embodiments we learn urgencies for the patterns (or pattern classes as explained later) using a ranking model. In another group of embodiments we learn an estimate of a play probability (probability of being played) for each pattern or pattern class. Both groups of embodiments use types of Bayesian inference. Each board configuration contains a subset of the harvested move-patterns of which the player in the historical record chooses one. In the urgency embodiment, this information indicates that the chosen move-pattern has a higher urgency than that of the other possible patterns. In the play probability embodiments if an available move is observed as played its probability estimate is increased and if it is observed as not being played its probability estimate is decreased. It is this information together with the fact that typical move-patterns occur in more than one position that allows the system to generalize across specific board configurations.
In some embodiments we advantageously group patterns into pattern classes in order to reduce computational complexity, storage requirements, computation time, required processing capacity and the like. However, it is not essential to use pattern classes. Computations can be carried out for each individual member of a pattern class.
We recognize that every pattern can occur in up to 16 different ways in the case of GO and in other multiples for other games depending on allowable board configurations etc. For example, in the case of GO, the 16 different ways can be thought of as the 8 different symmetries of a square (which can be rotated through 0, 90, 180 and 270 degrees for itself and for its mirror image) and each of these 8 different symmetries can occur for either black or white playing pieces (color reversal) giving 16 different options. A pattern class is then an exact arrangement of playing pieces within a sub-region of the board, centered on an empty location where a move is to be made, and all the other equivalent patterns occurring as a result of symmetries and/or color reversal.
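The sketch below enumerates these up to 16 equivalent forms for a pattern that is represented, purely for illustration, as a mapping from offset vectors to colours; a pattern class can then be identified with any consistently chosen representative of this set.

```python
def pattern_variants(pattern):
    """Enumerate the 8 square symmetries of a pattern and the colour reversal of
    each, giving up to 16 equivalent patterns.  A pattern is a dict mapping
    offsets (dx, dy) to one of "b", "w", "e", "o"."""
    def dihedral(p):
        for _ in range(4):
            p = {(-dy, dx): c for (dx, dy), c in p.items()}      # rotate by 90 degrees
            yield p
            yield {(-dx, dy): c for (dx, dy), c in p.items()}    # mirror image of the rotation
    swap = {"b": "w", "w": "b", "e": "e", "o": "o"}
    variants = []
    for q in dihedral(pattern):
        variants.append(q)
        variants.append({v: swap[c] for v, c in q.items()})      # colour reversal
    return variants
```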
In order to learn from a huge number of known Go board moves (or board moves of other games such as Chess and the like) from historical game records we are faced with a number of problems. The large number of board moves per game (e.g. 250) leads to problems when we need to learn from large numbers of games. These problems are associated with limits on processing capacity, storage capacity and the like. In addition, we require some degree of generalization in order to learn effectively.
That is, in systems which learn by example, if all historical examples are stored without generalization, it is difficult to deal with new instances that have not been observed before. Instead, it is preferred to make generalizations where possible such that new instances can be appropriately dealt with. In the present case we achieve this generalization in the case of games such as GO, Chess and other games involving configuration of playing pieces, by selecting less than the full set of pattern classes from historical game moves for use by a learning system. In the present application we refer to this selection process as pattern harvesting. We recognize that a particular problem with such pattern harvesting lies in deciding which patterns to harvest and which to ignore. One option is to randomly select, or select in some arbitrary manner, a specified proportion of historical patterns from game records. In another embodiment we select those patterns which occur multiple times in the complete set of historical game records. For example, in a preferred embodiment, we select any pattern which occurs more than once in the complete set of game records. However, it is also possible to select only those patterns which occur 2 or more times; or 3 or more times and so on. This is based on the consideration that if a pattern is observed more than once in the training sample then it is likely to be observed (and hence useful) in new board configurations. Any suitable selection criteria can be used.
For example, we used a training set made up of 181,000 Go games, each game having about 250 moves. We used a set of 14 different pattern templates, giving about 600 million patterns at our disposal (181,000×250×14=633,500,000). Of these we selected those patterns that appear as a move at least twice in the collection. This enabled us to retain about 2% (about 12 million) of the observed patterns and discard the remaining 98%. However, these figures are examples only; other suitable collection sizes and selection rates can be used.
With huge numbers (e.g. 600 million) of patterns at our disposal it is a difficult task to identify those patterns which occur at least twice in the collection. Even if enough memory is available to store the 600 million patterns, going through this list to find patterns occurring at least twice is inefficient. This problem is general in that it is not specific to GO moves or moves of other games; the same issue arises in any situation in which a sub-set must be selected from a large number of records on the basis of a non-trivial criterion.
In order to address this problem we use a Bloom filter approach. Bloom filters are described in Bloom, B H (1970) “Space/time trade-offs in hash coding with allowable errors”, Communications of the ACM, 13, 422-426. A Bloom filter can be used to test whether an element is a member of a set using a space-efficient probabilistic data structure. It is asymmetric in the sense that false positives are possible but false negatives are not. So in the case of selecting GO move patterns which occur twice or more, a false positive involves selecting a pattern in error when it actually occurs only once in the collection. A false negative involves rejecting a pattern which should have been selected. In our GO, and other game situations, we recognize that it is much more valuable to prevent false negatives than to prevent false positives and so Bloom filters are suitable in this respect. Because there are relatively few patterns which occur twice or more we prefer to retain all such patterns. Having said that, other embodiments in which selection methods do make false negatives can be used.
In an example we use a spectral Bloom filter which is an extension of the original Bloom filter to multi-sets, allowing the filtering of elements whose multiplicities are below a threshold given at query time. Spectral Bloom filters are described in “Spectral Bloom filters”, Saar Cohen, Yossi Matias, Proceedings of the 2003 ACM SIGMOD international conference on Management of Data, 241-252, 2003.
When a pattern is found in the historical game records then it is first tested if that pattern has been stored in the Bloom filter before. If it has, then the current occurrence must be at least the second time that the pattern is observed and we can add it to our collection (e.g., store it in a hash table). If it has not been previously stored in the Bloom filter, we store it and move on to the next pattern.
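A minimal sketch of this two-pass test is given below. The Bloom filter parameters (bit-array size, number of hash functions, use of SHA-256) are illustrative assumptions rather than details taken from the cited references, and a plain set stands in for the pattern hash table.

```python
import hashlib

class BloomFilter:
    """Space-efficient probabilistic set: false positives possible, false negatives not."""
    def __init__(self, num_bits=1 << 24, num_hashes=4):
        self.bits = bytearray(num_bits // 8)
        self.num_bits, self.num_hashes = num_bits, num_hashes

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def harvest(pattern_keys):
    """Keep only patterns seen at least twice: first sightings go into the Bloom
    filter, later sightings into the collection proper."""
    seen_once, collection = BloomFilter(), set()
    for key in pattern_keys:
        if key in seen_once:
            collection.add(key)      # at least the second occurrence (or a false positive)
        else:
            seen_once.add(key)
    return collection
```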
We represent the Go board as a lattice ζ := {1, …, N}², where N is the board size and is usually 9 or 19. In order to represent patterns that extend across the edge of the board in a unified way, we expand the board lattice to include the off-board areas. The extended board lattice is ζ̂ := {v + δ : v ∈ ζ, δ ∈ D}, where the offset vectors are given by D := {−(N−1), …, (N−1)}² and we use v to denote 2-dimensional vertex vectors. We define a set of “colors” C := {b, w, e, o} (black, white, empty, off). Then a board configuration is given by a coloring function c : ζ̂ → C and we fix c(v) = o for the off-board vertices v ∈ ζ̂ \ ζ.
Our analysis is based on a fixed set 𝒯 of pattern templates T ⊂ D on which we define a set Π of patterns π : T → C that are used to represent moves made in a given local context. The patterns preferably have the following properties:
The pattern templates T are rotation and mirror symmetric with respect to their origin, i.e. we have that (v_x, v_y) ∈ T implies (−v_x, v_y) ∈ T and (v_y, −v_x) ∈ T, thus displaying an 8-fold symmetry.
Any two pattern templates T, T′ ∈ 𝒯 satisfy either T ⊂ T′ or T′ ⊂ T. For convenience, we index the templates T_i ∈ 𝒯 with the convention that i < j implies T_i ⊂ T_j, resulting in a nested sequence (see
We have π((0, 0)) = e for all patterns π ∈ Π, because for each pattern to represent a legal move the centre point must be empty.
The set of patterns Π is closed under rotation, mirroring and color reversal, i.e. if π ∈ Π and π′ is such that it can be generated from π by any of these transformations, then π′ ∈ Π. In this case, π and π′ are considered equivalent, π ∼ π′, and we define a set Π̃ of equivalence classes π̃ ⊂ Π.
Note that Π̃ is a partition of Π and is thus mutually exclusive, ∩_{π̃ ∈ Π̃} π̃ = ∅, and exhaustive, ∪_{π̃ ∈ Π̃} π̃ = Π.
We say that a pattern π ∈ Π defined on template T matches configuration c at vertex v if for all δ ∈ T we have c(v + δ) = π(δ).
In order to extend the predictive power of the smaller patterns and hence improve generalization we optionally incorporate one or more additional features into each pattern. For example, one or more of the following local features of a move can optionally be used:
Liberties of new chain (2 bits). The number of liberties of the chain of stones we produce by making the move. In a preferred embodiment we limit the values of this local feature to any of {1, 2, 3, >3}. However, this is not essential; any suitable values for this local feature can be used. The number of ‘liberties’ of a chain of stones is the lower bound on the number of opponent moves needed to capture the chain.
Liberties of opponent (2 bits). The number of liberties of the closest opponent chain after making the move. Values are preferably, but not essentially any of {1, 2, 3, >3}.
Ko (1 bit). Is there an active Ko on the board? A ‘Ko’ is a move which is illegal because it would cause an earlier board position to be repeated.
Escapes atari (1 bit). Does this move bring a chain out of Atari? A chain is in ‘atari’ if it can be captured immediately.
Distance to edge (2 bits). Distance of move to the board edge. Values are preferably but not essentially selected from {<3, 4, 5, >5}.
We define the set of labels of these features as {1, …, 8}. Given a move in position c, the function f_c : {1, …, 8} × ζ → {0, 1} maps each feature to its binary true/false value. It is worth noting that for the larger patterns these features are already seen in the arrangement of stones within the template region, so the larger patterns are less likely to be altered by the addition of these features.
In a preferred embodiment we do not use an explicit representation of the patterns but define a hash key for patterns and store their properties in a hash table. However, it is not essential to do this. The pattern properties can be stored in any suitable manner or, alternatively, the patterns themselves can be stored together with their properties. In the hash key example, we advantageously adapt a Zobrist hashing method (Zobrist, A. 1990 “A new hashing method with applications for game playing”. ICCA Journal, 13, 69-73), which has the advantage that it can be updated incrementally. For example, we generate four sets of 64 bit random numbers, h_a : ζ̂ → {0, …, 2⁶⁴−1}, a ∈ C, four for each vertex in the extended Go lattice ζ̂. We also generate a random number for each of the local features, l : {1, …, 8} → {0, …, 2⁶⁴−1}. The hash-key k(π, v, c) of a given pattern π at vertex v in board configuration c can be calculated by XORing together the corresponding random numbers,
Both adding a stone and removing a stone of color a ∈ {b, w} at position v correspond to the same operation k_π ← k_π ⊕ h_a(v). Due to the commutativity of XOR the hash-key can be calculated incrementally as stones are added or removed from a pattern. However, we prefer to store the pattern classes π̃ instead of single patterns π to take account of the relevant symmetries. This is achieved by choosing k̃_π̃ := min_{π ∈ π̃} k_π, i.e. by calculating the hash-key for every symmetry variant of the pattern and choosing the minimum of those hash-keys. (It is not essential to choose the minimum; any particular one of the hash-keys can be selected, e.g. the maximum, as long as a consistent selection method is used in creation of the hash table.) The resulting hash-table allows us to store and retrieve information associated with each pattern without an explicit representation of the pattern itself. This could be the game-record the move was found in or relevant statistics, such as the percentage of games won after playing that move.
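The sketch below illustrates the hashing scheme just described, with patterns again represented as dictionaries from offset vectors to colours. The table sizes, the fixed random seed and the helper names are assumptions made for the example.

```python
import random

random.seed(0)                      # reproducible illustrative tables
COLORS = ("b", "w", "e", "o")
N = 19
EXTENDED = [(x, y) for x in range(2 - N, 2 * N) for y in range(2 - N, 2 * N)]

# One 64-bit random number per (colour, vertex) pair of the extended lattice.
H = {(a, v): random.getrandbits(64) for a in COLORS for v in EXTENDED}

def pattern_key(pattern, vertex):
    """Hash-key of a pattern centred at vertex: XOR of the random numbers for
    the colour found at each template offset."""
    key = 0
    for (dx, dy), colour in pattern.items():
        key ^= H[(colour, (vertex[0] + dx, vertex[1] + dy))]
    return key

def toggle_stone(key, colour, vertex):
    """Adding and removing a stone are the same incremental update, k <- k XOR h."""
    return key ^ H[(colour, vertex)]

def class_key(pattern, vertex, variants):
    """Key of a pattern class: the minimum key over all symmetry and colour-reversal
    variants (any consistent choice of representative would do)."""
    return min(pattern_key(p, vertex) for p in variants(pattern))
```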
A particular example of pattern harvesting is now described in detail.
From a database of Go game records we harvest pattern classes π̃ corresponding to moves made by expert players. We let the computer play through each of the games in the collection and maintain a |𝒯| × |ζ̂| table H of hash-keys corresponding to each of the pattern templates T_i at each of the vertices v. The update after each move makes sure that if pattern class π̃ matches the resulting configuration c at vertex v then H_{i,v} = k̃(π̃). Whenever an entry in H changes, the new hash-key can be used to mark that pattern as being present in the collection.
A rough estimate shows that for 181,000 game records with an average length of 250 moves and |𝒯| = 14 different pattern templates we have about 600 million patterns at our disposal. To limit storage requirements and to ensure generalization to as yet unseen positions we only want to include in Π those patterns that appear as a move made by an expert at least twice in the collection. We use a Bloom filter (Bloom, 1970) B to keep track of patterns that have been seen at least once. For every pattern we observe we use B to check if it is new; if it is, we add it to B. If B indicates that the pattern has been seen before, we increment the count in our pattern hash-table D_{Π̃} that represents Π̃.
Once the patterns have been harvested we carry out a learning process to either learn an urgency value for each pattern class, or to learn a play probability for each pattern class. It is also possible to learn these factors for individual patterns as opposed to pattern classes.
The term “urgency” is used herein to refer to a latent variable, not directly observable, which provides an indication of the quality (or goodness) of playing a move (or pattern class).
In the case that we learn an urgency value for each pattern class a method of learning is now described with reference to
In
If a pattern class has been analyzed before using our learning process and we have stored urgency information for that pattern class, that information is accessed. In the case of a new pattern class we use a default urgency belief distribution with associated default statistics, for example an initial mean of 0 and standard deviation of 1. Any suitable default belief distribution is used.
Information about the harvested pattern classes is obtained (see box 70) including an associated board configuration for each harvested pattern class. For a given harvested pattern class (and board configuration) we determine other possible legal moves, each having an associated pattern class (see box 71). Information about the rules of the game is used to determine the other possible legal moves.
We know that the harvested pattern class was played and that the other possible legal moves (here pattern classes) were not played. This information together with the statistics is used to form a factor graph (see box 72). The factor graph comprises nodes associated with particular pattern classes, those nodes being ordered on the basis of which pattern class was played and which pattern classes were not played. Some nodes of the factor graph are instantiated with the accessed statistics information (see box 73). Message passing is then performed over the factor graph to update the statistics thus carrying out Bayesian inference (see box 74). The resulting updated statistics describe our belief of the relative urgencies of the pattern classes and these results are stored (see box 75) for example, in a hash table or other suitable store. This process is repeated (see 76) for further harvested pattern classes.
More detail about the process of forming the factor graph is now given with reference to
In
The factor nodes labeled s1, s2, . . . sn are functions which access a database or other store to obtain belief distributions for each pattern class (or use a default distribution in the case of new pattern classes). These computational units feed the parameters describing the urgency belief distributions into the corresponding variable nodes. For example, in the case of Gaussian distributions there would be two parameters stored in each variable node. The next column of variable nodes, that is the circular nodes u1, u2 . . . un, represent the urgencies of the pattern classes. These nodes each store the statistics describing the belief distribution for the associated pattern class. The next column of factor nodes are computation units g1, g2 . . . gn which compute the distribution corresponding to the effective urgency value in the observed configuration, and feed the parameters into the corresponding variable nodes x1, x2 . . . xn. The remaining factor nodes h2, . . . hn encode which pattern class was played. These are order factor nodes which implement an order constraint indicating that the effective urgency value of the move made must have been greater than the effective urgency values of the moves not played. For these nodes the associated update equations are not exact, as the true factor-to-variable messages are no longer Gaussian. The other factor nodes of
The process of message passing comprises carrying out a calculation associated with a computation node (square node of
The processing schedule is preferably divided into three phases: pre-processing, chain processing and post processing. An example pre-processing schedule is illustrated by arrows 90 in
We present general update equations for use in carrying out the computations along the arrows in the message passing process. We tailor those general update equations for use with Gaussian distributions as shown.
Consider the example of factor graph of
Suppose we would like to update the message m_{f→x} and the marginal distribution p_x. Then, the general update equations are as follows:
where MM[·] returns the distribution in the Gaussian family with the same moments as the argument and all quantities on the right are normalized to be distributions. In the following we use the exponential representation of the Gaussian, that is,

G(x; τ, π) ∝ exp(−½(πx² − 2τx)).

This density has the following relation to the standard density: N(x; μ, σ²) = G(x; μ/σ², 1/σ²), i.e. the precision is π = 1/σ² and the precision-adjusted mean is τ = μ/σ².
In the case of the exact factor nodes the update equations are given in the following table.
This is exact and should only be executed once.
In the case of the order factor nodes, the update equations are given in the following table.
In the update equations set out in the tables above a represents weightings which in a preferred example are set to 1. Also, in the update equations v and w correspond to the functions v(.,.) and w(.,.) given by
Where the symbols N and Φ represent the density of the Gaussian distribution function and the cumulative distribution function of the Gaussian, respectively. The symbols t and α are simply arguments to the functions. Any suitable numerical or analytic methods can be used to evaluate these functions such as those described in Press et al., Numerical Recipes in C: the Art of Scientific Computing (2d. ed.), Cambridge, Cambridge University Press, ISBN-0-521-43108-5.
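The functions v and w themselves are not reproduced above. The sketch below gives the standard forms of these correction functions for a greater-than order constraint under a Gaussian model, consistent with the description of N and Φ; it should be read as an assumed reconstruction rather than a quotation of the update tables.

```python
from math import erf, exp, pi, sqrt

def gauss_pdf(x):                    # N(x; 0, 1), the standard Gaussian density
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def gauss_cdf(x):                    # Phi(x), the standard Gaussian cumulative distribution
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def v(t, alpha):
    """Mean-shift term used by the order factor update."""
    denom = gauss_cdf(t - alpha)
    return gauss_pdf(t - alpha) / denom if denom > 1e-12 else alpha - t  # guard against underflow

def w(t, alpha):
    """Variance-reduction term used by the order factor update."""
    shift = v(t, alpha)
    return shift * (shift + (t - alpha))
```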
In the example shown in
In the case of exact factor nodes, for message passing from a computation node (square node) to a single variable node (circular node), the update equations of the first row of the exact factor node update equation table are used. In the case of message passing from a computation node to two variable nodes, the update equations of the second or third row of the table are used as appropriate.
In other embodiments we learn a play probability (the probability of being played given a particular board configuration) for each pattern class. This is now described with reference to
Once the learning phase has been completed to learn from the harvested patterns we are able to use the results to select a move or rank possible moves, given a particular board configuration. This is now described with reference to
At a high level, we access the current board configuration, and for each possible board location where the next move could be played (all legal potential play positions) we find any harvested patterns which match. Any suitable pattern matching technique is used. For example, we can search for exact matches only, or allow inexact matches within specified bounds. We then select one of the matched pattern classes on the basis of the learnt urgency belief or play probability information. In a preferred embodiment we carry out the pattern matching process in an order on the basis of the size of the pattern templates. This is not essential but it enables us to give preference to patterns of larger size. It would also be possible to give preference to patterns of a specified size range or patterns having particular characteristics. One could also combine patterns of different sizes.
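By way of illustration, the following sketch ranks the legal moves by the mean of the learnt urgency belief of the largest matching harvested pattern. The config object and the legal_moves, match_pattern and stats structures are assumed interfaces, not ones defined in this description.

```python
def rank_moves(config, legal_moves, templates, match_pattern, stats):
    """For each legal move, match the largest harvested pattern class centred on
    it and rank moves by the mean of its learnt urgency belief.  match_pattern
    returns a pattern-class key or None; stats maps keys to (mean, std) tuples."""
    ranked = []
    for vertex in legal_moves(config):
        for template in sorted(templates, key=len, reverse=True):   # prefer larger templates
            key = match_pattern(config, vertex, template)
            if key is not None and key in stats:
                mean, _std = stats[key]
                ranked.append((mean, vertex, key))
                break                                               # largest match wins
    ranked.sort(reverse=True)
    return [(vertex, key) for _mean, vertex, key in ranked]
```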
As shown in
As mentioned above, the learnt information can be used for other purposes besides creating an automated player of a game. For example, the learnt information can be used in study or training tools for those seeking to improve their game playing skills. It can also be provided as input to automated game playing systems which use other approaches such as tree search or Monte Carlo game playing systems.
The game engine 110 has an input arranged to receive a current board configuration 113. For example, this can be via the user interface or input received as an email, flat file or any other suitable medium. The game engine 110 also has access to learnt probability information 115 for harvested pattern classes 117 obtained from historical game records 116. The learnt probability information is stored in any suitable manner either integral with the game engine or at another entity in communication with the game engine.
The game engine provides ranked possible next moves 114 as output or one selected next move for play. It can also provide a list of legal moves with associated urgency statistics or probability of play. Optionally the system also comprises a tree search, Monte Carlo game system 118 or other game system which uses techniques different to that of the game engine itself. The learnt probability information 115 and/or the ranked possible next moves 114 are used by that optional game system 118 to tailor or focus the processing of that system.
A preferred embodiment involving urgencies is now described:
We now present an example model of the probability P(v | c) of an expert Go player making a move at vertex v ∈ ζ in board configuration c. We only consider legal moves v ∈ L(c), where L(c) ⊂ ζ is the set of legal moves in configuration c.
A move at vertex v in configuration c is represented by the largest pattern that matches c at v. In our Bayesian model, we use a Gaussian belief p(u) = N(u; μ, diag(σ²)) over the urgencies u(π̃) of pattern classes π̃. Then the predictive distribution is given by P(v | c) = ∫ P(v | c, u) p(u) du.
Our likelihood model P(v | c, u) is defined via the notion of a latent, unobserved urgency x(π̃) for each pattern class, where p(x | u) = N(x; u, β²) is also assumed to be Gaussian with mean u and a fixed variance β²; the value of β expresses the variability of the urgency depending on specific position and player characteristics. In this sense, β can also be related to the consistency of play and can be chosen smaller for stronger players. We assume that an expert makes the move with the highest effective urgency value, hence,
This model can be expressed as a factor graph of the type shown in
A goal of learning is to determine the parameters μ and σ² of the belief distribution p(u) = N(u; μ, diag(σ²)) from training data. The Bayesian posterior is given by p(u | v, c) ∝ P(v | c, u) p(u).
In general, this posterior is no longer a Gaussian and has non-zero covariance. We use a local assumed density filtering approach where we seek the best (diagonal) Gaussian approximation to the posterior, in the sense of minimum Kullback-Leibler divergence when leaving out one factor at a time. The resulting approximation is then used as the prior for the next expert move at the new board configuration. Again, inference can be performed efficiently using EP message passing.
The factor graph (see
where

s_i(u_i) = N(u_i; μ_i, σ_i²),

g_j(x_j, u_j) = N(x_j; u_j, β²),

h_k(x_i, x_k) = I(x_i > x_k),

and I(·) denotes the indicator function of its argument.
We are interested in determining the marginals p(u_i) of the joint distribution defined above. This can be accomplished by a sum-product algorithm. Examples of such an algorithm are given in Jordan & Weiss, 2002 “Graphical models: probabilistic inference”. In M. Arbib (Ed.), Handbook of neural networks and brain theory. MIT Press. 2nd edition.
For any variable v_i connected to its neighboring factors f_k ∈ neigh(v_i), the marginal distribution of v_i is given by the product of the incoming messages, p(v_i) = ∏_{f_k ∈ neigh(v_i)} m_{f_k→v_i}(v_i), where m_{f_k→v_i} denotes the message passed from factor f_k to variable v_i.
These equations derive from the fact that we can make use of the conditional independence structure of the joint distribution to rearrange the marginalization integral and thus simplify it.
We make the approximation that all messages are Gaussian densities to reduce storage requirements (messages can be represented by two numbers) and to simplify the calculations. For factors f_k of general form, the factor-to-variable messages calculated by (3) are not necessarily Gaussian. Therefore we approximate these messages by the Gaussian which minimizes the Kullback-Leibler divergence between the resulting marginal distribution p(v_i) and its Gaussian approximation; as above, MM denotes ‘Moment Match’.
The goal of learning is to determine (from training data) the parameters μ_i and σ_i² of the belief distribution p(u_i) = N(u_i; μ_i, σ_i²) for the value of each pattern. We calculate the posterior
by first propagating messages about the graph according to (3), (4) and (5) until convergence. The approximate posterior distributions we require are
p(u_i) = m_{g_i→u_i}(u_i) · m_{s_i→u_i}(u_i).
Once a move at vertex v in configuration c has been incorporated into the prior p(u), the posterior p(u | v, c) is used as the prior for the next expert move at the new board configuration. This approach is a form of assumed-density filtering.
We also consider an alternative approach where we assume that each pattern class is played independently of the other available pattern classes with probability p_{v,c}, the probability of the maximal pattern class matched at vertex v in configuration c. The probability of a move at location v in position c is then determined by the pattern probabilities p. Our uncertainty about p_{v,c} is modeled by a conjugate Beta prior, p(p_{v,c}) = Beta(p_{v,c}; α_{v,c}, β_{v,c}), so that the marginal probability of a move at v is α_{v,c}/(α_{v,c} + β_{v,c}), where α_{v,c} corresponds to the number of times this pattern class matched for a move played by an expert in the training data and β_{v,c} corresponds to the number of times the pattern class matched for moves that were available but not chosen.
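A minimal sketch of this counting scheme, assuming a uniform Beta(1, 1) prior for illustration:

```python
class PlayProbability:
    """Beta-Bernoulli estimate of how often a pattern class is played when it is
    available; alpha counts matches that were played, beta counts matches that
    were available but not chosen."""
    def __init__(self, alpha=1.0, beta=1.0):          # assumed uniform prior
        self.alpha, self.beta = alpha, beta

    def observe(self, played):
        if played:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def mean(self):
        return self.alpha / (self.alpha + self.beta)  # marginal probability of play
```

Ranking the legal moves by the mean of their maximal matched pattern class then gives the play-probability variant of move selection described earlier.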
Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. In particular, the game engine and scoring system can be provided at a central location accessible by remote game terminals over any suitable communications network. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that, by utilizing conventional techniques known to those skilled in the art, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.

The term ‘computer’ is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the term ‘computer’ includes PCs, servers, mobile telephones, personal digital assistants and many other devices.

Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.

The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate.

It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments of the invention. Although various embodiments of the invention have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this invention.
This application is a continuation-in-part of U.S. patent application Ser. No. 11/421,913, filed on Jun. 2, 2006 entitled, “Learning belief distributions for game moves” which is incorporated herein by reference.
| | Number | Date | Country |
|---|---|---|---|
| Parent | 11421913 | Jun 2006 | US |
| Child | 11532452 | | US |