Casinos offer a wide variety of gambling activities to accommodate players and their preferences. Some of those activities reward strategic thinking while others are impartial, but each one of them obeys a strict set of rules that favours the casino over its clients.
The success of a casino relies partially on the efficiency and consistency with which those rules are applied by the dealer. A pair of slow dealing hands or an undeserved payout may have substantial consequences for profitability.
Another critical factor is the consistency with which those rules are respected by the player. Large sums of money travel through the casino, tempting players to bend the rules. Again, an undetected card switch or complicity between a dealer and a player may be highly detrimental to profitability.
For those reasons among others, casinos have traditionally invested tremendous efforts in monitoring gambling activities. Initially, the task was performed manually, a solution that was both expensive and inefficient. However, technological innovations have been offering advantageous alternatives that reduce costs while increasing efficiency.
One of the most important aspects of table game monitoring consists in recognizing playing cards, or at the very least, their value with respect to the game being played. Such recognition is particularly challenging when the corner or the central region of a playing card is undetectable within an overhead image of a card hand, or more generally, within that of an amalgam of overlapping objects. Current solutions for achieving such recognition bear various weaknesses, especially when confronted with those particular situations.
U.S. patent application Ser. No. 11/052,941, titled “Automated Game Monitoring”, by Tran, discloses a method of recognizing a playing card positioned on a table within an overhead image. The method consists in detecting the contour of the card, validating the card from its contour, detecting adjacent corners of the card, projecting the boundary of the card based on the adjacent corners, binarizing pixels within the boundary, and counting the number of pips to identify the value of the card. While such a method is practical for recognizing a solitary playing card, or at least one that is not significantly overlapped by other objects, it may not be applicable in cases where the corner or central region of the card is undetectable due to the presence of overlapping objects. It also does not provide a method of distinguishing face cards. Furthermore, it does not provide a method of extracting a region of interest encompassing a card identifying symbol when only a partial card edge is available or when card corners are not available.
A paper titled “Introducing Computers to Blackjack: Implementation of a Card Recognition System Using Computer Vision Techniques”, written by G. Hollinger and N. Ward, proposes the use of neural networks to distinguish face cards. The method proposes determining a central moment of individual playing cards to determine a rotation angle. This approach of determining a rotation angle is not appropriate for overlapping cards forming a card hand. They propose counting the number of pips in the central region of the card to identify number cards. This approach of pip counting will not be feasible when a card is significantly overlapped by another object. They propose training three neural networks to recognize face card symbols extracted from an upper left region of a face card, where each of the networks would be dedicated to a distinct face card symbol. The neural network is trained using a scaled image of the card symbol. A possible disadvantage of trying to directly recognize images of a symbol using a neural network is that it may have insufficient recognition accuracy especially under conditions of stress such as image rotation, noise, insufficient resolution and lighting variations.
Several references propose to achieve such recognition by endowing each playing card with detectable and identifiable sensors. For instance, U.S. patent application Ser. No. 10/823,051, titled “Wireless monitoring of playing cards and/or wagers in gaming”, by SOLTYS, discloses playing cards bearing a conductive material that may be wirelessly interrogated to achieve recognition in any plausible situation, regardless of visual obstructions. One disadvantage of this implementation is that such cards are more expensive than normal playing cards. Furthermore, adopting casinos would be restricted to dealing such special playing cards instead of those of their choosing.
Card recognition is particularly instrumental in detecting inconsistencies on a game table, particularly those resulting from illegal procedures. However, such detection is yet to be entirely automated and seamless as it requires some form of human intervention.
MP Bacc, a product marketed by Bally Gaming for detecting an inconsistency within a game of Baccarat, consists of a card shoe reader for reading bar-coded cards as they are being dealt, a barcode reader built into a special table for reading cards that have been dealt, as well as a software module for comparing data provided by the card shoe reader and the table barcode reader.
The software module verifies that the cards that have been removed from the shoe correspond to those that have been inserted into the barcode reader on the table. It also verifies that the order in which the cards have been removed from the shoe corresponds to the order in which they were placed in the barcode reader. One disadvantage of this system is that it requires the use of bar-coded cards and barcode readers to be present in the playing area. The presence of such devices in the playing area may be intrusive to players. Furthermore, dealers may need to be trained to use the special devices and therefore the system does not appear to be seamless or natural to the existing playing environment.
It would be desirable to be provided with a system for recognizing playing cards positioned on a game table in an accurate and efficient manner.
It would be desirable to be provided with a method of recognizing standard playing cards positioned on a game table without having to detect their corner.
It would also be desirable to be provided with a seamless, automated, and reliable system for detecting inconsistencies on a game table and providing an accurate description of the context in which detected inconsistencies occurred.
An exemplary embodiment is directed to a system for identifying a gaming object on a gaming table comprising at least one overhead camera for capturing an image of the table; a detection module for detecting a feature of the object on the image; a search module for extracting a region of interest of the image that describes the object from the feature; a feature space module for transforming a feature space of the region of interest to obtain a transformed region of interest; and an identity module trained to recognize the object from the transformed region.
According to another embodiment, at least one factor attributable to casino and table game environments and gaming objects impedes reliable recognition of said object by said identity module when trained to recognize said object from said region of interest without transformation by said feature space module.
Another embodiment is directed to a method of identifying a value of a playing card placed on a game table comprising: capturing an image of the table; detecting at least one feature of the playing card on the image; delimiting a target region of the image according to the feature, wherein the target region overlaps a region of interest, and the region of interest describes the value; scanning the target region for a pattern of contrasting points; detecting the pattern; delimiting the region of interest of the image according to a position of the pattern; and analyzing the region of interest to identify the value.
Another embodiment is directed to a system for detecting an inconsistency with respect to playing cards dealt on a game table comprising: a card reader for determining an identity of each playing card as it is being dealt on the table from the shoe; an overhead camera for capturing images of the table; a recognition module for determining an identity of each card positioned on the table from the images; and a tracking module for comparing the identity determined by the card reader with the identity determined by the recognition module, and detecting the inconsistency.
For a better understanding of embodiments of the present invention, and to show more clearly how it may be carried into effect, reference will now be made, by way of example, to the accompanying drawings which aid in understanding and in which:
In the following description of exemplary embodiments we will use the card game of blackjack as an example to illustrate how the embodiments may be utilized.
Referring now to
An example of a bet being placed by player 14 is shown as chips 28a within betting region 26a. Dealer 16 utilizes chip tray 30 to receive and provide chips 28. Feature 32 is an imaging system, which is utilized by the present invention to provide overhead imaging and optional lateral imaging of game 10. An optional feature is a player identity card 34, which may be utilized by the present invention to identify a player 14.
At the beginning of every game players 14 that wish to play place their wager, usually in the form of gaming chips 28, in a betting region 26 (also known as betting circle or wagering area). Chips 28 can be added to a betting region 26 during the course of the game as per the rules of the game being played. The dealer 16 then initiates the game by dealing the playing cards 18, 22. Playing cards can be dealt either from the dealer's hand, or from a card dispensing mechanism such as a shoe 24. The shoe 24 can take different embodiments including non-electromechanical types and electromechanical types. The shoe 24 can be coupled to an apparatus (not shown) to read, scan or image cards being dealt from the shoe 24. The dealer 16 can deal the playing cards 18, 22 into dealing area 20. The dealing area 20 may have a different shape or a different size than shown in
During the progression of the game, playing cards 18, 22 may appear, move, or be removed from the dealing area 20 by the dealer 16. The dealing area 20 may have specific regions outlined on the table 12 where the cards 18, 22 are to be dealt in a certain physical organization otherwise known as card sets or “card hands”, including overlapping and non-overlapping organizations.
For the purpose of this disclosure, chips, cards, card hands, currency bills, player identity cards, dealer identity cards, lammers and dice are collectively referred to as gaming objects. In addition the term “gaming region” is meant to refer to any section of gaming table 12 including the entire gaming table 12.
Referring now to
The imaging system 32 utilizes periodic imaging to capture a video stream at a specific number of frames over a specific period of time, such as for example, thirty frames per second. Periodic imaging can also be used by an imaging system 32 when triggered via software or hardware means to capture an image upon the occurrence of a specific event. An example of a specific event would be if a stack of chips were placed in a betting region 26. An optical chip stack or chip detection method utilizing overhead imaging system 40 can detect this event and can send a trigger to lateral imaging system 42 to capture an image of the betting region 26. In an alternative embodiment overhead imaging system 40 can trigger an RFID reader to identify the chips. Should there be a discrepancy between the two means of identifying chips the discrepancy will be flagged.
Referring now to
An optional case 54 encloses overhead imaging system 40 and if so provided, includes a transparent portion 56, as shown by the dotted line, so that imaging devices 50 may view a gaming region.
Referring now to
An optional case 60 encloses lateral imaging system 42 and if so provided includes a transparent portion 62, as shown by the dotted line, so that imaging devices 50 may view a gaming region.
The examples of overhead imaging system 40 and lateral imaging system 42 are not meant by the inventors to restrict the configuration of the devices to the examples shown. Any number of imaging devices 50 may be utilized and if a case is used to house the imaging devices 50, the transparent portions 56 and 62 may be configured to scan the desired gaming regions.
According to one embodiment of the present invention, a calibration module assigns parameters for visual properties of the gaming region.
Referring back to
In step 4804, coefficients for perspective correction are calculated. Such correction consists in an image processing technique whereby an image can be warped to any desired view point. Its application is particularly useful if the overhead imagers are located in the signage and the view of the gaming region is slightly warped. A perfectly overhead view point would be best for further image analysis. A checkerboard or markers on the table may be utilized to assist with calculating the perspective correction coefficients.
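As an illustration only, the coefficient calculation of step 4804 may be sketched as a four-point homography solve. The function names, and the assumption that four table markers with known ideal positions have been detected, are inventions of the example rather than part of the disclosed system:

```python
import numpy as np

def perspective_coefficients(src, dst):
    """Solve for the 8 homography coefficients mapping src -> dst.

    src, dst: four (x, y) point pairs, e.g. table markers as seen in the
    warped overhead image and their desired positions in the corrected view.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    # Append the fixed scale term and reshape into a 3x3 homography matrix.
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply the homography to a single image point."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

Mapping the four detected marker positions onto their ideal layout coordinates yields a matrix that `warp_point` (or a full image warp) can then apply to every pixel of the overhead image.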
Subsequently, in step 4806, the resulting image is displayed to allow the user to select specific points or regions of interest within the gaming area. For instance, the user may select the position of betting spots and the region encompassing the dealer's chip tray. Other specific regions or points within the gaming area may be selected.
In the next step 4808, camera parameters such as shutter and gain values are calculated, and white balancing operations are performed. Numerous algorithms for performing such camera calibration are available to one skilled in the art.
In step 4810, additional camera calibration is performed to adjust the lens focus and aperture.
Once the camera calibration is complete and according to step 4812, an image of the table layout, clear of any objects on its surface, is captured and saved as a background image. Such an image may be used for detecting objects on the table. The background image may be re-captured at various points during system operation in order to keep a most recent background image.
In step 4814, while the table surface is still clear of objects additional points of interest such as predetermined markers are captured.
In the final step 4816, the calibration parameters are stored in memory.
It must be noted that the calibration concepts may be applied for the lateral imaging system as well as other imaging systems.
In an optional embodiment, continuous calibration checks may be utilized to ensure that the initially calibrated environment remains relevant. For instance a continuous brightness check may be performed periodically, and if it fails, an alert may be asserted through a feedback device indicating the need for re-calibration. Similar periodic, automatic checks may be performed for white balancing, perspective correction, and region of interest definition.
In an optional embodiment, a white sheet similar in shade to a playing card surface may be placed on the table during calibration in order to determine the value of the white sheet at various points on the gaming table and consequently the lighting conditions at these various points. The recorded values may be subsequently utilized to determine threshold parameters for detecting positions of objects on the table.
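One hedged sketch of how the recorded white-sheet values might feed into threshold parameters is given below; the grid granularity, the `factor` fraction, and the function name are illustrative assumptions, not details taken from the description:

```python
def local_thresholds(white_image, grid=4, factor=0.6):
    """Derive a per-cell threshold map from a white-sheet calibration image.

    white_image: 2-D list of grayscale values recorded with a white sheet
    placed on the table. Each grid cell's threshold is a fraction (factor)
    of the mean white value observed there, so dimmer regions of the table
    get proportionally lower card-detection thresholds.
    """
    h, w = len(white_image), len(white_image[0])
    ch, cw = h // grid, w // grid
    thresholds = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            cell = [white_image[y][x]
                    for y in range(gy * ch, (gy + 1) * ch)
                    for x in range(gx * cw, (gx + 1) * cw)]
            row.append(factor * sum(cell) / len(cell))
        thresholds.append(row)
    return thresholds
```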
It must be noted that not all steps of calibration need human input. Certain steps such as white balancing may be performed automatically.
In addition to the imaging systems described above, exemplary embodiments may also make use of RFID detectors for gambling chips containing an RFID.
Referring now to
Modules 80 to 94 communicate with one another through a network 96. A 100 Mbps Ethernet Local Area Network or Wireless Network can be used as a digital network. The digital network is not limited to the specified implementations, and can be of any other type, including local area network (LAN), Wide Area Network (WAN), wired or wireless Internet, or the World Wide Web, and can take the form of a proprietary extranet.
Controller 98 such as a processor or multiple processors can be employed to execute modules 80 to 94 and to coordinate their interaction amongst themselves, with the imaging system 32 and with input/output devices 100, optional shoe 24 and optional RFID detectors 70. Further, controller 98 utilizes data stored in database 102 for providing operating parameters to any of the modules 80 to 94. Modules 80 to 94 may write data to database 102 or collect stored data from database 102. Input/Output devices 100 such as a laptop computer, may be used to input operational parameters into database 102. Examples of operational parameters are the position coordinates of the betting regions 26 on the gaming table 12, position coordinates of the dealer chip tray 30, game type and game rules.
Before describing how the present invention may be implemented we first provide some preliminary definitions. Referring now to
IP module 80 may be implemented in a number of different ways. In a first embodiment, overhead imaging system 32 (see
Referring now to
Moving to step 142, the process waits to receive an overhead image of a gaming region from overhead imaging system 40. At step 144, a thresholding algorithm is applied to the overhead image in order to differentiate playing cards from the background and create a threshold image. A background subtraction algorithm may be combined with the thresholding algorithm for improved performance. Contrast information of the playing card against the background of the gaming table 12 can be utilized to determine static or adaptive threshold parameters. Static thresholds are fixed, while adaptive thresholds may vary based upon input such as the lighting on a table. The threshold operation can be performed on a gray level image or on a color image. Step 144 requires that the surface of game table 12 be visually contrasted against the card. For instance, if the surface of game table 12 is predominantly white, then a threshold may not be effective for obtaining the outlines of playing cards. The thresholded image will ideally show the playing cards as independent blobs 110, although this may not always be the case due to motion or occlusion. Other bright objects such as a dealer's hand may also be visible as blobs 110 in the thresholded image. Filtering operations such as erosion, dilation and smoothing may optionally be performed on the thresholded image in order to eliminate noise or to smooth the boundaries of a blob 110.
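A minimal sketch of the combined thresholding and background subtraction of step 144, assuming grayscale images represented as 2-D lists; the threshold values are illustrative, and in practice would come from the calibration parameters described above:

```python
def card_mask(image, background, diff_thresh=40, bright_thresh=180):
    """Combine background subtraction with a brightness threshold.

    image, background: 2-D lists of grayscale values. A pixel is kept as
    foreground only when it both differs sufficiently from the stored
    background image and is bright enough to be card stock.
    """
    return [[1 if abs(p - b) > diff_thresh and p > bright_thresh else 0
             for p, b in zip(img_row, bg_row)]
            for img_row, bg_row in zip(image, background)]
```

Connected groups of 1-pixels in the resulting mask correspond to the blobs 110 passed on to the contour detection of step 146.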
In the next step 146, the contour 112 corresponding to each blob 110 is detected. A contour 112 can be a sequence of boundary points of the blob 110 that more or less define the shape of the blob 110. The contour 112 of a blob 110 can be extracted by traversing along the boundary points of the blob 110 using a boundary following algorithm. Alternatively, a connected components algorithm may also be utilized to obtain the contour 112.
Once the contours 112 have been obtained, processing moves to step 148, where shape analysis is performed in order to identify contours that are likely not cards or card hands and to eliminate these from further analysis. By examining the area of a contour 112 and its external boundaries, a match may be made to the known size and/or dimensions of cards. If a contour 112 does not match the expected dimensions of a card or card hand, it can be discarded.
Moving next to step 150, line segments 114 forming the card and card hand boundaries are extracted. One way to extract line segments is to traverse along the boundary points of the contour 112 and test the traversed points with a line fitting algorithm. Another potential line detection algorithm that may be utilized is a Hough Transform. At the end of step 150, line segments 114 forming the card or card hand boundaries are obtained. It is to be noted that, in alternate embodiments, straight line segments 114 of the card and card hand boundaries may be obtained in other ways. For instance, straight line segments 114 can be obtained directly from an edge detected image. For example, an edge detector such as the Laplace edge detector can be applied to the source image to obtain an edge map of the image from which straight line segments 114 can be detected. These algorithms are non-limiting examples of methods to extract positioning features, and one skilled in the art might use alternate methods to extract these card and card hand positioning features.
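The line fitting test of step 150 can be sketched as a least-squares fit over traversed contour points. This is one possible fitting algorithm rather than the disclosed one, and it assumes non-vertical edges (a vertical card edge would need the x = m*y + c form instead):

```python
def fit_line(points):
    """Least-squares fit of y = m*x + c to a run of boundary points.

    Returns (m, c, max_residual); a small max_residual indicates that the
    traversed contour points form a straight card-edge segment 114.
    """
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    # Worst deviation of any point from the fitted line.
    residual = max(abs(y - (m * x + c)) for x, y in points)
    return m, c, residual
```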
Moving to step 152, one or more corners 116 of cards can be obtained from the detected straight line segments 114. Card corners 116 may be detected directly from the original image or thresholded image by applying a corner detector algorithm such as for example, using a template matching method using templates of corner points. Alternatively, the corner 116 may be detected by traversing points along contour 112 and fitting the points to a corner shape. Corner points 116, and line segments 114 are then utilized to create a position profile for cards and card hands, i.e. where they reside in the gaming region.
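One way corners 116 might be derived from detected line segments 114, offered as an illustrative sketch rather than the disclosed detector, is to intersect the infinite lines supporting two adjacent segments:

```python
def line_intersection(p1, p2, p3, p4):
    """Corner candidate: intersection of the lines through segment
    (p1, p2) and segment (p3, p4).

    Returns None for (near-)parallel segments, which cannot meet at a
    card corner.
    """
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-12:
        return None
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)
```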
Moving to step 154, card corners 116 are utilized to obtain a Region of Interest (ROI) 118 encompassing a card identifying symbol, such as the number of the card, and the suit. A card identifying symbol can also include features located in the card such as the arrangement of pips on the card, or can be some other machine readable code.
Corners of a card are highly indicative of a position of a region of interest. For this very reason, they constitute the preferred reference points for extracting regions of interest. Occasionally, corners of a card may be undetectable within an amalgam of overlapping gaming objects, such as a card hand. The present invention provides a method of identifying such cards by extracting a region of interest from any detected card feature that may constitute a valid reference point.
According to a preferred embodiment of the invention, the overhead image is analyzed to obtain the contour of the card hand 3500. Subsequently, line segments 3510, 3512, 3514, 3516, 3518, 3520, 3522, and 3524 forming the contour of the card hand 3500 are extracted. The detected line segments are thereafter utilized to detect convex corners 3530, 3532, 3534, 3536, 3538, and 3540.
As mentioned herein above, corners constitute the preferred reference points for extracting Regions of Interest. In the following description, the term “index corner” refers to a corner of a card in the vicinity of which a region of interest is located. The term “blank corner” refers to a corner of a card that is not an index corner.
The corner 3530 is the first one to be considered. A sample of pixels drawn within the contour, in the vicinity of the corner 3530, is analyzed in order to determine whether the corner 3530 is an index corner. A sufficient number of contrasting pixels are detected and the corner 3530 is identified as an index corner. Consequently, a region of interest is projected and extracted according to the position of the corner 3530, as well as the predetermined width, height, and offset of regions of interest from index corners.
Similarly, the corner 3532 is identified as an index corner and a corresponding region of interest is projected and extracted.
The corner 3534 is the third to be considered. Corner 3534 is identified as a blank corner. Due to their coordinates, the corners 3532 and 3534 are identified as belonging to a same card, and consequently, the corner 3534 is dismissed from further analysis.
Similarly to corners 3530 and 3532, the corner 3536 is identified as an index corner and a corresponding region of interest is projected and extracted.
The corners 3538 and 3540 are the last ones to be considered. Due to their coordinates, the corners 3530, 3538 and 3540 are identified as belonging to a same card, and consequently, the corners 3538 and 3540 are dismissed from further analysis.
As a result of the corner analysis, the regions of interest of the cards 3502, 3506 and 3508 of the card hand 3500 have been extracted. However, none of the corners of the card 3504 has been detected and consequently, no corresponding region of interest has been extracted.
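The index-corner test applied to corners 3530 through 3540 can be sketched as follows; the patch size, the unit-step inward direction convention, and the threshold values are assumptions made for illustration only:

```python
def is_index_corner(image, corner, inward, size=5,
                    card_white=200, min_dark=3):
    """Classify a detected card corner as an index corner or blank corner.

    Samples a size x size patch stepped inward from the corner along the
    'inward' direction (unit steps of +/-1 per axis) and counts pixels
    contrasting with white card stock; enough dark pixels imply that rank
    and suit printing is present near the corner.
    """
    cx, cy = corner
    dx, dy = inward
    dark = 0
    for oy in range(size):
        for ox in range(size):
            x, y = cx + dx * (ox + 1), cy + dy * (oy + 1)
            if 0 <= y < len(image) and 0 <= x < len(image[0]):
                if image[y][x] < card_white:
                    dark += 1
    return dark >= min_dark
```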
In order to extract any remaining regions of interest, the extracted line segments 3510, 3512, 3514, 3516, 3518, 3520, 3522, and 3524 forming the contour of the card hand 3500 are utilized according to a method provided by the present invention.
In
In step 3600, two scan line segments are determined. The scan line segments are of the same length as the analyzed line segment. Furthermore, the scan line segments are parallel to the analyzed line segment. Finally, a first of the scan line segments is offset according to a predetermined offset of the region of interest from a corresponding card edge. The second of the scan line segments is offset from the first scan line segment according to the predetermined width of the rank and suit symbols.
In step 3602, pixel rows delimited by the scan line segments are scanned, and for each of the rows a most contrasting color or brightness value is recorded.
Subsequently, in step 3604, the resulting sequence of most contrasting color or brightness values, referred to as a contrasting value scan line segment, is analyzed to identify regions that may correspond to a card rank and suit. The analysis may be performed according to pattern matching or pattern recognition algorithms.
According to the preferred embodiment, the sequence of contrasting color values is convolved with a mask of properties expected from rank characters and suit symbols. For instance, in the context of a white card having darker colored rank characters and suit symbols, the mask may consist of a stream of darker pixels corresponding to the height of rank characters, a stream of brighter pixels corresponding to the height of the space separating rank characters from suit symbols, and a final stream of darker pixels corresponding to the height of suit symbols. The result of the convolution will give rise to peaks where a sub-sequence of the contrasting color values corresponds to the expected properties described by the mask.
Several methods are available for performing such convolution, including but not limited to cross-correlation, squared difference, correlation coefficient, as well as their normalized versions.
In step 3606, the resulting peaks are detected, and the corresponding regions of interest are extracted.
First, two scan line segments, 3700 and 3702 are determined. The scan line segments 3700 and 3702 are of the same length as the line segment 3510. Furthermore, the scan line segments are parallel to the line segment 3510. Finally, the scan line segment 3700 is offset from the line segment 3510 according to a predetermined offset of the region of interest from a corresponding card edge. The scan line segment 3702 is offset from the scan line segment 3700 according to the predetermined width of the rank characters and suit symbols.
Subsequently, rows delimited by the scan line segments 3700 and 3702 are scanned. For each of the rows, a most contrasting color or brightness value is recorded to form a sequence of contrasting color or brightness values 3704, also referred to as a contrasting value scan line segment.
Once the sequence 3704 is obtained, it is convolved with a mask 3706 of properties expected from rank characters and suit symbols. The mask 3706 consists of a stream of darker pixels corresponding to the height of rank characters, a stream of brighter pixels corresponding to the height of spaces separating rank characters and suit symbols, and a final stream of darker pixels corresponding to the height of suit symbols.
A result 3708 of the convolution gives rise to a peak 3710 where a sub-sequence of sequence 3704 corresponds to the expected properties described by the mask 3706. Finally, a region of interest 3714 corresponding to the card 3502 is extracted.
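Steps 3600 to 3606 can be sketched as a one-dimensional correlation along the contrasting value scan line segment. The binarization threshold, the mask heights, and the score cutoff below are illustrative assumptions, not values taken from the description:

```python
def find_symbol_offsets(scan_values, rank_h=6, gap_h=2, suit_h=5,
                        bright_thresh=150, min_score=0.9):
    """Locate rank/suit candidates along a contrasting-value scan line.

    scan_values: the most-contrasting brightness value recorded for each
    pixel row between the two scan line segments. Rows are binarized to
    +1 (bright) / -1 (dark); correlating with a dark-bright-dark mask
    peaks at offsets where a rank character, a separating space, and a
    suit symbol line up.
    """
    mask = [-1] * rank_h + [1] * gap_h + [-1] * suit_h
    s = [1 if v > bright_thresh else -1 for v in scan_values]
    offsets = []
    for i in range(len(s) - len(mask) + 1):
        # Normalized correlation score in [-1, 1]; 1 is a perfect match.
        score = sum(m * x for m, x in zip(mask, s[i:])) / len(mask)
        if score >= min_score:
            offsets.append(i)
    return offsets
```

Each returned offset marks the top of a candidate region of interest, to be cut out of the source image at the predetermined width and height.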
In
In step 3800, several scan line segments are determined. The scan line segments are of the same length as the analyzed line segment. Furthermore, the scan line segments are parallel to the analyzed line segment. Finally, a first of the scan line segments is offset from the analyzed line segment according to a predetermined offset of the region of interest from a corresponding card edge. The other scan line segments are offset from the first scan line segment according to the predetermined width of the rank and suit symbols. The scan line segments are positioned in that manner to ensure that at least some of them would intersect any characters and symbols located along the analyzed line segment.
In step 3802, each scan line segment is scanned and points of contrasting color or brightness values are recorded to assemble a set of contrasting points, which we will refer to as seed points.
Subsequently, in step 3804, the set of contrasting points is analyzed to identify clusters that appear to be defining, at least partially, rank characters and suit symbols. The clusters can be extracted by grouping the seed points or by further analyzing the vicinity of one or more of the seed points using a region growing algorithm.
Finally, in step 3806, regions of interest are extracted from the identified clusters of contrasting points.
First, two scan line segments 3900 and 3902 are determined. The scan line segments 3900 and 3902 are of the same length as the line segment 3510. Furthermore, the scan line segments 3900 and 3902 are parallel to the line segment 3510. Finally, the scan line segment 3900 is offset from the line segment 3510 according to a predetermined offset of the region of interest from a corresponding card edge segment. The scan line segment 3902 is offset from the scan line segment 3900 according to the predetermined width of rank characters and suit symbols. The scan line segments 3900 and 3902 are positioned in that manner to ensure that at least one of them would intersect any characters and symbols located along the line segment 3510.
The scan line segments 3900 and 3902 are scanned and points of contrasting color and brightness values are recorded to assemble a sequence of contrasting points. Subsequently, the sequence is analyzed and clusters of seed points 3910, 3912 and 3914 are identified as likely to define, at least partially, rank characters and suit symbols.
Finally, regions of interest 3920, 3922, and 3924 are extracted respectively from the clusters of seed points 3910, 3912, and 3914. The method has therefore succeeded in extracting a region of interest of a card having no detectable corners.
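The clustering of seed points in step 3804 can be sketched as a simple gap-based grouping along the scan direction; the `gap` tolerance is an illustrative assumption, and a region growing algorithm could replace it, as the description notes:

```python
def cluster_seed_points(seeds, gap=3):
    """Group contrasting seed points found along the scan line segments.

    seeds: (x, y) points of contrasting color recorded in step 3802.
    A point whose x position falls within 'gap' pixels of the last point
    in the current cluster is merged into it; each resulting cluster
    approximates one rank character or suit symbol along the card edge.
    """
    clusters = []
    for x, y in sorted(seeds):
        if clusters and x - clusters[-1][-1][0] <= gap:
            clusters[-1].append((x, y))
        else:
            clusters.append([(x, y)])
    return clusters
```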
Referring back to
Although the invention has been described within the context of a hand of cards, it may be applied within the context of a single gaming object, or an amalgam of overlapping gaming objects.
Although the invention has been described as preceded by a corner analysis, it may be applied without any previous corner analysis. However, it is usually preferable to start with a corner analysis since corners are preferred over line segments as reference points.
Although the invention has been described as a method of extracting a region of interest from a card edge, it may do so from any detected card feature, provided that the feature constitutes a valid reference point for locating a region of interest. For instance, the method may be applied to extract regions of interest from detected corners, or detected pips, instead of line segments. Such versatility is a sizeable asset within the context of table games, where some playing cards may present a very limited number of detectable features.
It is important to note that the preceding corner analysis could have been performed according to the invention.
Referring back to
The present invention provides a system for identifying a gaming object on a gaming table in an efficient and seamless manner. The system comprises at least one overhead camera for capturing a plurality of images of the table; a detection module for detecting a feature of the object on an image of the plurality; a search module for extracting, from the feature, a region of interest of the image that describes the object; a feature space module for transforming a feature space of the region of interest to obtain a transformed region of interest; a dimensionality reduction module for reducing the transformed region into a reduced representation according to dimensionality reduction algorithms; and an identity module trained to recognize the object from the reduced representation.
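The module pipeline recited above can be illustrated as a simple chain of processing stages. The class and parameter names are hypothetical; each callable stands in for the corresponding module (detection, search, feature space, dimensionality reduction, identity):

```python
class GamingObjectRecognizer:
    """Hypothetical wiring of the recited pipeline: detection ->
    region-of-interest search -> feature space transform ->
    dimensionality reduction -> identity classification."""

    def __init__(self, detect, search, transform, reduce, identify):
        # Each stage consumes the previous stage's output.
        self.stages = [detect, search, transform, reduce, identify]

    def process(self, image):
        data = image
        for stage in self.stages:
            data = stage(data)
        return data
```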
Within the context of the system illustrated in
The Imager 32 provides an overhead image of the game table to a Detection module 4000. Subsequently, the Detection Module 4000 detects features of potential gaming objects placed on the game table. Such detection may be performed according to any of the aforementioned methods; for instance, it may consist of the steps 142, 144, 146, 148, 150, and 152, as illustrated in
According to one embodiment of the present invention, the Detection Module 4000 comprises a cascade of classifiers trained to recognize specific features of interest such as corners and edges.
According to another embodiment of the present invention, the system further comprises a Booster Module, and the Detection Module 4000 comprises a cascade of classifiers. The Booster module serves the purpose of combining weak classifiers of the cascade into a stronger classifier as illustrated in
Referring back to
The Search Module 4002 provides the extracted regions of interest to the Feature Space (FS) Module 4004. For each region of interest, the FS Module 4004 transforms a provided representation into a feature space, or a set of feature spaces that is more appropriate for recognition purposes.
According to one embodiment, each region of interest provided to the FS Module 4004 is represented as a grid of pixels, wherein each pixel is assigned a color or brightness value.
Prior to performing a transformation, the FS Module 4004 must select a desirable feature space according to a required type, speed, and robustness of recognition. The selection may be performed in a supervised manner, an unsupervised manner, or both.
Once a feature space is selected, the FS Module 4004 applies a corresponding feature space transformation on a corresponding image.
It is important to distinguish feature space transformations from geometrical transformations. The geometrical transformation of an image consists in reassigning the positions of pixels within a corresponding grid. While such a transformation does modify an image, it does not modify underlying semantics; the means by which the original image and its transformed version are represented is the same. On the other hand, feature space transformations modify underlying semantics.
One example of a feature space transformation consists in modifying the representation of colors within a pixel grid from RGB (Red, Green, and Blue) to HSV (Hue, Saturation, and Value or Brightness). In this particular case, the data is not modified, but its representation is. Such a transformation is advantageous in cases where it is desirable for the brightness of a pixel to be readily available. Furthermore, the HSV space is less sensitive to certain types of noise than its RGB counterpart.
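The RGB-to-HSV re-encoding can be illustrated with the standard library's colorsys module. This is a minimal sketch assuming an image represented as a grid of 8-bit RGB tuples; the function name is an assumption:

```python
import colorsys

def rgb_grid_to_hsv(grid):
    """Re-encode every pixel of an RGB pixel grid in HSV space.
    The underlying data is unchanged; only its representation changes,
    and the brightness (V) of each pixel becomes directly readable."""
    return [[colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
             for (r, g, b) in row]
            for row in grid]
```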
The Hough Line Transform is another example of a feature space transformation. It consists in transforming a binary image from a set of pixels to a set of lines. In the new feature space, each vector represents a line whereas in the original space, each vector represents the coordinates of a pixel. Consequently, such a transformation is particularly advantageous for applications where lines are to be analyzed.
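A minimal, pure-Python sketch of the Hough Line Transform follows. It maps each foreground pixel to votes in a (rho, theta) accumulator; peaks in the accumulator correspond to lines in the image. The function name and the integer rho binning are assumptions, and a production system would use an optimized implementation:

```python
import math
from collections import Counter

def hough_lines(points, angle_steps=180):
    """Accumulate (rho, theta) votes for a set of foreground pixel
    coordinates. Each pixel votes for every line passing through it;
    collinear pixels pile their votes into the same accumulator bin."""
    acc = Counter()
    for (x, y) in points:
        for t in range(angle_steps):
            theta = math.pi * t / angle_steps
            # Normal form of a line: rho = x*cos(theta) + y*sin(theta)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(rho, t)] += 1
    return acc
```

For ten pixels lying on the horizontal line y = 5, the bin (rho=5, theta=90 degrees) collects all ten votes.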
Other feature space transformations include various filtering operations such as Laplace and Sobel. Pixels resulting from such transformations store image derivative information rather than image intensity.
Canny edge detection, the Fast Fourier Transform (FFT), the Discrete Cosine Transform (DCT), and wavelet transforms are other examples of feature space transformations. Images resulting from FFT and DCT are no longer represented spatially (by a pixel grid), but rather in a frequency domain, wherein each point represents a particular frequency contained in the real-domain image. Such transformations are practical because the resulting feature space is invariant with respect to some transformations, and robust with respect to others. For instance, discarding the higher frequency components of an image resulting from a DCT makes it more resilient to noise, which is generally present in high frequencies. As a result, recognition is more reliable.
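The noise-suppression property attributed to the DCT above can be illustrated with a naive 1-D DCT-II and its inverse: discarding high-frequency coefficients and reconstructing yields a smoothed signal. The function names are assumptions, and a real system would use an optimized transform:

```python
import math

def dct(signal):
    """Naive 1-D DCT-II: express the signal as a sum of cosines."""
    n = len(signal)
    return [sum(signal[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n))
            for k in range(n)]

def idct(coeffs):
    """Inverse transform (DCT-III with the usual 2/n scaling)."""
    n = len(coeffs)
    return [(coeffs[0] / 2
             + sum(coeffs[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                   for k in range(1, n))) * 2 / n
            for i in range(n)]

def denoise(signal, keep):
    """Zero out all but the first `keep` (lowest-frequency) DCT
    coefficients, then reconstruct the signal."""
    coeffs = dct(signal)
    coeffs[keep:] = [0.0] * (len(coeffs) - keep)
    return idct(coeffs)
```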
Within the context of the present invention, the use of different feature spaces provides for additional robustness with respect to parameters such as lighting variations, brightness, image noise, image resolutions, ambient smoke, as well as geometrical transformations such as rotations and translations. As a result, the system of the present invention provides for greater training and recognition accuracy.
According to a preferred embodiment of the present invention, Principal Component Analysis (PCA) is the main feature space transformation in the arsenal of the FS Module 4004. It is a linear transform that selects a new coordinate system for a given data set, such that the greatest variance by any projection of the data set lies along the first axis, known as the principal component, the second greatest variance along the second axis, and so on.
The first step of the PCA consists in constructing a 2D matrix A of size n×wh, given n images of w×h pixels, wherein each row is an image vector. Each image vector is formed by concatenating all the pixel rows of a corresponding image into a single vector. The second step consists in computing an average image from the matrix A by summing up all the rows and dividing by n. The resulting vector of size wh is called the mean image.
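The matrix construction and mean-image computation described above can be sketched as follows, assuming images are given as row-major grids of pixel values; the helper names are hypothetical:

```python
def flatten_images(images):
    """Concatenate the pixel rows of each w x h image into a single
    vector, producing one row of the n x (w*h) data matrix A."""
    return [[px for row in img for px in row] for img in images]

def mean_vector(matrix):
    """Sum the n image vectors (the rows of A) and divide by n to
    obtain the mean image of size w*h."""
    n = len(matrix)
    return [sum(col) / n for col in zip(*matrix)]

def center(matrix, mean):
    """Subtract the mean image from every image vector, the usual
    preparation step before computing principal components."""
    return [[v - m for v, m in zip(vec, mean)] for vec in matrix]
```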
According to another embodiment, the FS Module 4004 predominantly applies one or more of the DCT, the FFT, log-polar transforms, or other techniques, including those resulting in edge images.
Referring back to
According to the preferred embodiment of the present invention, the representations provided by the FS Module 4004 result from the application of a PCA, and the DR Module 4006 reduces their dimensionality by applying a feature selection technique that consists in selecting a subset of the PCA coefficients that contain the most information.
According to one embodiment of the present invention, the representations provided by the FS Module 4004 result from the application of a DCT, and the DR Module 4006 reduces their dimensionality by applying a feature selection technique that consists in selecting a subset of the DCT coefficients that contain the most information.
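The coefficient-selection step can be sketched as follows. This hypothetical helper keeps the k coefficients with the largest average magnitude across a training set, which is one plausible reading of "the coefficients that contain the most information":

```python
def select_coefficients(coeff_vectors, k):
    """Feature selection: rank coefficient positions by their average
    magnitude over the training set and keep the top k positions,
    reducing every vector to k dimensions."""
    n = len(coeff_vectors[0])
    energy = [sum(abs(vec[j]) for vec in coeff_vectors) for j in range(n)]
    keep = sorted(range(n), key=lambda j: energy[j], reverse=True)[:k]
    keep.sort()  # preserve the original coefficient ordering
    return [[vec[j] for j in keep] for vec in coeff_vectors], keep
```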
According to another embodiment of the present invention, the DR Module 4006 reduces the dimensionality of the provided representations by applying a feature extraction technique that consists in projecting them into a feature space of fewer dimensions.
According to another embodiment of the present invention, the representations provided by the FS Module 4004 result from the application of a DCT, and the DR Module applies a combination of feature selection and feature extraction techniques that consists in selecting a subset of the DCT coefficients that contain the most information, and applying PCA on the selected coefficients.
Within the context of the present invention, the application of dimensionality reduction techniques reduces computational overhead, thereby accelerating the training and recognition procedures performed by the Identity Module 4008. Furthermore, dimensionality reduction tends to eliminate, or at the very least reduce, noise, and therefore increases recognition and training efficiency.
According to another embodiment of the invention, the FS Module 4004 provides the transformed representation or set of transformed representations to an Identity Module 4008 trained to recognize gaming objects from dimensionality reduced representations of regions of interest.
Referring back to
Still according to the preferred embodiment of the present invention, the Identity Module 4008 comprises a statistical classifier trained to recognize gaming objects from dimensionality reduced representations.
According to one embodiment of the present invention, the Identity Module 4008 comprises a Feed-forward Neural Network such as the one illustrated in
According to another embodiment of the present invention, the Identity Module 4008 comprises a cascade of classifiers.
According to another embodiment of the present invention, the system further comprises a Booster Module, and the Identity Module 4008 comprises a cascade of classifiers. The Booster module serves the purpose of combining weak classifiers of the cascade into a stronger classifier. It may operate according to one of several boosting algorithms including Discrete Adaboost, Real Adaboost, LogitBoost, and Gentle Adaboost.
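A minimal sketch of Discrete AdaBoost, the first of the listed boosting algorithms, is shown below. Weak classifiers are represented as plain functions returning labels in {-1, +1}; all names and the stump representation are assumptions, not part of the disclosed Booster Module:

```python
import math

def train_adaboost(samples, labels, stumps, rounds):
    """Discrete AdaBoost: at each round, pick the weak classifier with
    the lowest weighted error, weight it by its accuracy, and reweight
    the samples so later rounds focus on the misclassified ones."""
    n = len(samples)
    w = [1.0 / n] * n
    strong = []
    for _ in range(rounds):
        errs = [sum(wi for wi, x, y in zip(w, samples, labels) if h(x) != y)
                for h in stumps]
        best = min(range(len(stumps)), key=lambda i: errs[i])
        err = max(errs[best], 1e-10)  # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        strong.append((alpha, stumps[best]))
        # Boost the weights of misclassified samples and renormalize.
        w = [wi * math.exp(-alpha * y * stumps[best](x))
             for wi, x, y in zip(w, samples, labels)]
        total = sum(w)
        w = [wi / total for wi in w]
    return strong

def predict(strong, x):
    """The strong classifier is the sign of the weighted vote."""
    return 1 if sum(a * h(x) for a, h in strong) >= 0 else -1
```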
Referring back to
According to one embodiment of the present invention, the Detection Module 4000 recognizes a configuration of playing cards suitable for a deck verification procedure and triggers the Identity Module 4008 to provide the rank and suit of each identified card to a Deck Verification Module 4010.
According to another embodiment of the present invention, the Identity Module 4008 is manually triggered to provide the rank and suit of each identified card to the Deck Verification Module 4010.
Referring back to
At step 160, the process waits for a new image; when one is received, processing returns to step 144.
Referring now to
In this embodiment, identity data generated from the card shoe reader 24 and positioning data generated from proximity detection sensors 170 may be grouped and output to other modules. Associating positional data with cards may be performed by the IPAT module 84.
In another alternate embodiment of the IP module 80, card reading may have an RFID based implementation. For example, RFID chips embedded inside playing cards may be wirelessly interrogated by RFID antennae or scanners in order to determine the identity of the cards. Multiple antennae may be used to wirelessly interrogate and triangulate the position of the RFID chips embedded inside the cards. Card positioning data may be obtained either by wireless interrogation and triangulation, a matrix of RFID sensors, or via an array of proximity sensors as explained herein.
We shall now describe the function of the Intelligent Position Analysis and Tracking module (IPAT module) 84 (see
According to the present invention, the IPAT module 84, in combination with the Imager 32, the IP module 80, and the card shoe 24, may also detect inconsistencies that occur on a game table as a result of an illegal or erroneous manipulation of playing cards.
According to a preferred embodiment of the present invention, the system for detecting inconsistencies that occur on a game table as a result of an illegal or erroneous manipulation of playing cards comprises a card shoe for storing playing cards to be dealt on the table; a card reader for determining an identity and a dealing order of each playing card as it is being dealt on the table from the shoe; an overhead camera for capturing images of the table; a recognition module for determining an identity and a position of each card positioned on the table from the images; and a tracking module for comparing the dealing order and identity determined by the card reader with the identity and the position determined by the recognition module, and detecting the inconsistency.
Within the context of the system illustrated in
In
In the preferred embodiment of the present invention, the data is received immediately following each removal of a card from the card shoe 24. In another embodiment, the data is received following each removal of a predetermined number of cards from the card shoe 24. In yet another embodiment, the data is received periodically.
In the preferred embodiment of the present invention, the data consist of a rank and suit of a last card to be removed from the card shoe 24. In another embodiment, the data consist of a rank of a last card to be removed from the card shoe 24.
In step 4204, the IPAT module 84 receives data from the IP module 80.
In the preferred embodiment of the present invention, the data is received periodically. In another embodiment, the data is received in response to the realization of step 4202.
In the preferred embodiment of the present invention, the data consist of a rank, suit, and position of each card placed on the game table.
In another embodiment, the data consist of a rank and suit of each card placed on the game table.
In yet another embodiment, the data consist of a rank of each card placed on the game table.
In yet another embodiment, the data consist of a suit of each card placed on the game table.
In yet another embodiment of the present invention, the data consist of a likely rank, and suit, as well as a position of each card placed on the game table.
In yet another embodiment of the present invention, the data consist of a likely rank and a position of each card placed on the game table.
In yet another embodiment of the present invention, the data consist of a likely suit and a position of each card placed on the game table.
In step 4206, the IPAT module 84 compares the data provided by the card shoe 24 with those provided by the IP module 80.
In the preferred embodiment of the present invention, the IPAT module 84 verifies whether the rank and suit of cards removed from the card shoe 24 as well as the order in which they were removed correspond to the rank, suit, and position of cards placed on the game table according to a set of rules of the game being played.
In another embodiment, the IPAT module 84 verifies whether the rank and suit of cards removed from the card shoe 24 correspond to the rank and suit of those that are placed on the game table.
If an inconsistency is detected, the IPAT module 84 informs the surveillance module 92 according to step 4208. Otherwise, the IPAT module 84 returns to step 4202 as soon as subsequent data is provided by the card shoe 24.
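The comparison performed in steps 4206 and 4208 can be sketched as follows. The card encoding, the function name, and the exact mismatch messages are assumptions; the sketch distinguishes a foreign card on the table (a likely card switch) from a permutation of legitimately dealt cards:

```python
def detect_inconsistency(shoe_order, table_cards):
    """Compare card identities read by the shoe, in dealing order, with
    card identities recognized on the table, ordered by their expected
    positions. Return a description of the inconsistency, or None."""
    table_by_position = [card for _, card in sorted(table_cards)]
    if sorted(shoe_order) != sorted(table_by_position):
        # A card on the table was never dealt from the shoe.
        extra = set(table_by_position) - set(shoe_order)
        return f"unexpected card(s) on table: {sorted(extra)}"
    for slot, (dealt, placed) in enumerate(zip(shoe_order, table_by_position)):
        if dealt != placed:
            # Same cards, wrong order: a permutation on the table.
            return f"card {dealt} permuted with {placed} at slot {slot}"
    return None
```

With table cards given as (position, identity) pairs, the three exemplary Baccarat scenarios described below respectively yield no inconsistency, a switch report, and a permutation report.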
The invention will now be described within the context of monitoring a game of Baccarat. According to the rules of the game, a dealer withdraws four cards from a card shoe and deals two hands of two cards, face down; one for the player, and one for the bank. The player is required to flip the dealt cards and return them back to the dealer. The latter organizes the returned cards on the table and determines the outcome of the game. One known form of cheating consists in switching cards. More specifically, a player may hide cards of desirable value, switch a dealt card with one of the hidden cards, flip the illegally introduced card and return it back to the dealer. The present invention provides an efficient and seamless means to detect such illegal procedures.
As mentioned hereinabove, according to the rules of the Baccarat, the dealer must withdraw four cards from the card shoe. According to a first exemplary scenario, the dealer withdraws in order the Five of Spades, Six of Hearts, Queen of Clubs, and the Ace of Diamonds. The rank and suit of each of the four cards is read by the card shoe 24, and provided to the IPAT module 84.
The player flips the dealt cards and returns them to the dealer. The latter organizes the four cards on the table as illustrated in
The Imager 32 captures overhead images of the table, and sends the images to the IP module 80 for processing. The IP module 80 determines the position, suit, and rank of cards 4300, 4302, 4304, and 4306, and provides the information to the IPAT module 84. The latter compares the data received from the card shoe reader and the IP module, and finds no inconsistency. Consequently, it waits for a new set of data from the card shoe reader.
According to a second exemplary scenario, the dealer withdraws in order the Five of Spades, Six of Hearts, Queen of Clubs, and the Ace of Diamonds. The rank and suit of each of the four cards is read by the card shoe 24, and provided to the IPAT module 84.
The player switches one of the dealt cards with one of his hidden cards to form a new hand, flips the cards of the new hand, and returns them to the dealer. The latter arranges the four cards returned by the player as illustrated in
The Imager 32 captures overhead images of the table, and sends the images to the IP module 80 for processing. The IP module 80 determines the position, suit, and rank of cards 4300, 4400, 4304, and 4306, and provides the information to the IPAT module 84. The latter compares the data received from the card shoe 24 and the IP module, and finds an inconsistency; the ranks of the cards 4300, 4302, 4304, and 4306 removed from the card shoe do not correspond to the ranks of the cards 4300, 4400, 4304, and 4306 placed on the table. More specifically, the card 4302 has been replaced by the card 4400, which likely results from a card switching procedure. Consequently, the IPAT module 84 provides a detailed description of the detected inconsistency to the surveillance module 92.
According to a third exemplary scenario, the dealer withdraws in order the Five of Spades, Six of Hearts, Queen of Clubs, and the Ace of Diamonds. The rank and suit of each of the four cards is read by the card shoe 24, and provided to the IPAT module 84.
The player flips the dealt cards and returns them to the dealer. The latter organizes the four cards on the table in an erroneous manner, as illustrated in
The Imager 32 captures overhead images of the table, and sends the images to the IP module 80 for processing. The IP module 80 determines the position, suit, and rank of cards 4300, 4302, 4304, and 4306, and provides the information to the IPAT module 84. The latter compares the data received from the card shoe reader 24 and the IP module, and finds an inconsistency; while the rank and suit of the cards removed from the card shoe correspond to the rank and suit of the cards positioned on the table, the order in which the cards were removed from the card shoe does not correspond to the order in which the cards were organized on the table. More specifically, the card 4302 has been permuted with the card 4304. Consequently, the IPAT module 84 provides a detailed description of the detected inconsistency to the surveillance module 92.
While the invention has been described within the context of monitoring a game of Baccarat, it is applicable to any table game involving playing cards dealt from a card shoe.
We shall now discuss the functionality of the game tracking (GT) module 86 (see
Returning to
The bet recognition module 88 can interact with the other modules to provide more comprehensive game tracking. As an example, the game tracking module 86 can send a capture trigger to the bet recognition module 88 at the start of a game to automatically capture bets at a table game.
Referring to
Optionally the system can recognize special player identity cards with machine readable indicia printed or affixed to them (via stickers for example). The machine readable indicia can include matrix codes, barcodes or other identification indicia. Such specialty identity cards may also be utilized for identifying and registering a dealer at a table. Furthermore, specialty identity cards may be utilized to indicate game events such as a deck being shuffled or a dispute being resolved at the table.
Optionally, biometrics technologies such as face recognition can be utilized to assist with identification of players.
We will now discuss the functionality of surveillance module 92. Surveillance module 92 obtains input relating to automatically detected game events from one or more of the other modules and associates the game events to specific points in recorded video. The surveillance module 92 can include means for recording images or video of a gaming table. The recording means can include the imagers 32. The recording means can be computer or software activated, and the recordings can be stored on a digital medium such as a computer hard drive. Less preferred recording means, such as analog cameras or analog media such as video cassettes, may also be utilized.
We shall now discuss the analysis and reporting module 94 of
Output, including alerts and player compensation notifications, can be through output devices such as monitors, LCD displays, or PDAs. An output device can be of any type; it is not limited to visual displays and can include auditory or other sensory means. The software can potentially be configured to generate any type of report with respect to casino operations.
Module 94 can be configured to accept input from a user interface running on input devices. These inputs can include, without limitation, training parameters, configuration commands, dealer identity, table status, and other inputs required to operate the system.
Although not shown in
Although not shown in
The terms imagers and imaging devices have been used interchangeably in this document. The imagers can have any combination of sensor, lens and/or interface. Possible interfaces include, without limitation, 10/100 Ethernet, Gigabit Ethernet, USB, USB 2, FireWire, Optical Fiber, PAL or NTSC interfaces. For analog interfaces such as NTSC and PAL, a processor having a capture card in combination with a frame grabber can be utilized to obtain digital images or digital video.
The image processing and computer vision algorithms in the software can utilize any type or combination of color spaces or digital file formats. Possible color spaces include, without limitation, RGB, HSL, CMYK, Grayscale and binary color spaces.
The overhead imaging system may be associated with one or more display signs. Display sign(s) can be non-electronic, electronic or digital. A display sign can be an electronic display displaying game related events happening at the table in real time. A display and the housing unit for the overhead imaging devices may be integrated into a large unit. The overhead imaging system may be located on or near the ceiling above the gaming region.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
The present application claims priority from U.S. provisional patent applications No. 60/676,936, filed May 3, 2005; 60/693,406, filed Jun. 24, 2005; 60/723,481, filed Oct. 5, 2005; 60/723,452, filed Oct. 5, 2005; 60/736,334, filed Nov. 15, 2005; 60/760,365, filed Jan. 20, 2006; and 60/771,058, filed Feb. 8, 2006.