Part of game play in betting environments such as card rooms and casinos includes handling the game components, such as playing cards, betting chips, and dice, and demonstrating skill and manual dexterity by performing card maneuvers, chip tricks, artful dice throws, etc. Textual description and video demonstration of known tricks can be found online using a search engine. Card maneuvers and “chip tricks” can also be observed first-hand at casinos.
A description of a few chip tricks is now presented to underscore the manual dexterity, and sometimes the acrobatic skill, needed to execute a trick with a game component. Tricks, however, are not limited to those for betting chips.
In Mexican Jumping Chip, a chip “jumps” from one stack of chips to another. In Twirl Flick, the chip is tossed or “flicked” over the hand, across the body, and caught with the opposite hand. In Bounce Back, a chip is thrown up into the air with backspin. Once the chip bounces on a soft surface it then bounces back and is caught next to the other chips. In Drifter, a chip is slammed onto a surface so that it runs forward and the backspin created brings the chip back. In Lift Twirl, a chip is lifted onto the top of the index finger and then twirled. When the twirl is completed, the chip is lowered from the top of the index finger back down next to the other chips.
There are many other chip tricks, and also many tricks for other game components. What is needed is a way to enjoy these tricks on a digital electronic betting game platform.
Entertaining visual tricks for electronic betting games are described. In one implementation, a system receives gesture input, such as finger motions, from a user interface of a player at an electronic game table. The system maps the gesture input to known tricks and maneuvers to animate the virtual game components used in the electronic betting game, such as virtual playing cards, betting chips, dice, dice cups, tiles, and so forth. In one mode, the system divides the gesture input into segments and maps each segment to movement information for the virtual game component, enabling the player to record a custom visual trick. A motion synthesizer can apply kinematics to impart realistic or imaginary motion to the virtual game components, which can then be displayed across one or more video displays.
This summary section is not intended to give a full description of the visual tricks for electronic betting games, or to provide a list of features and elements. A detailed description of example embodiments of the electronic gaming system follows.
Overview
This disclosure describes entertaining visual tricks for electronic betting games. As used herein, a visual trick is a movement of an image or video object virtually representing a physical artifact that would be used in non-electronic versions of a particular electronic betting game. For example, the physical artifact may be betting chips and the corresponding video object is then a digital image of the betting chips, or “virtual betting chips.” Generically, these video images of the physical artifacts used in a game are referred to herein as “virtual game components.” The movement of the image or video object—i.e., the virtual game component—can have a practical purpose, an entertaining purpose, or both.
During conventional game play, it is not uncommon for poker players and many other types of gamers and gamblers to manipulate gaming components—the physical artifacts used in non-electronic versions of the game—in an artful and skillful manner. Experienced players may perform skillful and entertaining maneuvers of the playing cards (“card tricks”), the betting chips (“chip tricks”), the dice (“dice tricks”), the tiles (“tile tricks”), and so forth to pass the time, relieve tension, impress onlookers, distract opponents, demonstrate bravado, crush opponents' confidence, and generate luck. For some, skillfully manipulating cards or chips with the fingers of one hand or two during a round of play is similar to doodling in order to pass the time, while others have developed the tricks to the point of a fine art, which provides a provocative sideshow to complement the main action of the betting game. Sometimes clever maneuvers reveal the player's hidden cards to the player—for a brief peek. In some localities it is even acceptable, or at least tolerated, for players to modify a game piece, such as tearing a playing card in half upon significantly losing or missing an opportunity in the game.
Exemplary systems and methods described herein digitally simulate live casino game play, and simulate player or host interactions with the physical artifacts that constitute betting game components. That is, an exemplary system simulates interactions with virtual betting chips, virtual dice, virtual balls, virtual dice cups, virtual playing cards, virtual game tiles, and so forth. An exemplary system can simulate simple movements and elegant tricks for player-die interactions, such as rolling the dice; shaking, throwing, and setting of a single die or multiple dice; player-chip interactions, such as placing or removing bets, throwing/tossing chips, and performing chip tricks; player-card interactions such as card peeking, tearing, bending, folding, and mucking of the cards; player-tile interactions such as placing or removing tiles, re-orienting tiles, spinning or flipping the tiles, etc.
The exemplary system can simulate the physical interactions that a player would perform and experience at a live casino gaming table, i.e., rendered in a digital electronic environment. Thus, via animation, the exemplary system simulates the feel of live game play on a digital platform.
An exemplary visual trick engine synthesizes both movements and tricks of the virtual game components, such as cards, chips, dice, tiles, etc. Such a visual trick engine creates 3-dimensional (3D) graphics on a 2-dimensional (2D) display or on a 3D display, if available. Audio effects, such as live game sounds including voices, in mono, stereo, and surround-sound, may accompany the visual tricks and image movements.
In one implementation or mode, the visual trick engine translates a user's gestures that are input via a user interface such as a touch screen display into simple movements and known tricks that are applied to the virtual game component.
In another or the same implementation, the visual trick engine enables the player to leverage some skill on the available user interfaces to invent and record new tricks for a given virtual game component.
In yet another or the same implementation, the visual trick engine applies the laws of kinematics—the branch of physics or mechanics that deals with objects in motion—to digitally simulate the touch, speed, direction, momentum, rotation, drag, friction, and collisions experienced by the real physical counterparts of the virtual game components. Such kinematic effects and trajectories, which may also be exaggerated or imaginary, can then be executed by the visual trick engine across one or more video displays of an electronic game platform. This can give the entertaining appearance, e.g., of sliding virtual playing cards all the way across a table, even though a single video display does not cover the entire tabletop.
The visual movement of a given virtual game component can be initiated by player, host, or combined player-host interaction with sensors or with images of the game components on a video display, e.g., via touch screen, mechanical button, or other input device (e.g., touching, dragging, mouse gesture, etc.). Or, the visual movement can be triggered by a computer-initiated sequence based on a random event, a fixed or random timer, game state, or game sequence.
The animations mapped from a player's input can be displayed through various mechanisms, such as linking to or synthesizing a video frame sequence that virtually emulates at least one virtual kinematic maneuver of the represented virtual game component to provide an entertaining visual effect. In the same or another implementation, the animations mapped from a player's input can be displayed by applying one or more mathematical operations to a model of a 2-dimensional or 3-dimensional physical artifact. Another animation mechanism includes applying one or more mathematical operations to one or more images of a 3-dimensional physical artifact, e.g., from a single camera directed toward the 3-dimensional artifact, or via stereo images obtained in real-time from one or more pairs of cameras directed toward the 3-dimensional artifact. Thus, animation can be via a stock video clip of an entire trick; and/or stock video clips of movements that can be combined on the fly to create tricks; and/or real-time mathematical modeling of 2-dimensional or 3-dimensional objects, including application of 2-dimensional or 3-dimensional kinematics to the animated motion of the portrayed objects. The kinematic formulas applied can be leveraged to create realistic motion or imaginary motion that is physically impossible but entertaining nonetheless.
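By way of illustration only, the following sketch (hypothetical code and names, not part of the disclosure) shows how applying simple mathematical operations, here a per-frame rotation and translation, to a 2-dimensional model of a betting chip could synthesize an animation frame sequence of the kind described above:

```python
# Minimal sketch, assuming invented names: synthesizing animation frames by applying
# mathematical operations (a rotation plus a translation) to a simple 2-D model of a
# betting chip. A real engine would rasterize each frame to one or more displays.
import math

def chip_outline(radius=1.0, points=12):
    """Approximate a chip edge as a ring of 2-D points."""
    return [(radius * math.cos(2 * math.pi * k / points),
             radius * math.sin(2 * math.pi * k / points))
            for k in range(points)]

def transform(point, angle, offset):
    """Rotate a point about the origin, then translate it."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s + offset[0], x * s + y * c + offset[1])

def synthesize_frames(model, n_frames=30, spin_per_frame=0.2, drift=(0.5, 0.0)):
    """Produce per-frame point sets simulating a spinning, drifting chip."""
    frames = []
    for f in range(n_frames):
        angle = spin_per_frame * f
        offset = (drift[0] * f, drift[1] * f)
        frames.append([transform(p, angle, offset) for p in model])
    return frames

if __name__ == "__main__":
    frames = synthesize_frames(chip_outline())
    print(len(frames), "frames; first point of last frame:", frames[-1][0])
```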
The exemplary visual tricks engine to be described below may be included in electronic games, such as electronic game tables at which card games and casino games are played. For example, the games may include electronic poker, Blackjack, Baccarat, Pai Gow Poker, craps, roulette, and many other games, as played around an electronic game table in a card room or casino.
Exemplary Systems
The exemplary visual trick systems and methods to be described below can be used with wagering games, such as those games that are playable on multi-participant electronic game tables. For example, the exemplary visual trick systems and methods described herein can be used on table game platforms such as those described in U.S. Pat. No. 5,586,766 and U.S. Pat. No. 5,934,998 to Forte et al.; and U.S. Pat. No. 6,165,069, U.S. Pat. No. 7,048,629, and U.S. Pat. No. 7,255,642 to Sines et al., each of which is incorporated herein by reference.
The illustrated example game table 100 also includes a common display 118 in the center of the game table 100, for presenting visual information to all participants. The common display(s) 118 may present general information redundantly in two, four, or more visual orientations so that the displayed information is oriented correctly for each participant.
Likewise, in one implementation the visual trick engine 120 is not a discrete component or separate engine as illustrated, but may be integrated with, or distributed among, other components of the game processing system.
The exemplary game processing system 300 includes a computing device 302, which may be a desktop, server, or notebook style computer, or other device that has a processor, memory, and data storage. The computing device 302 thus includes a processor 304, memory 306, data storage 308, and interface(s) 310 to communicatively couple with the participant “1” user interface 102, the participant “2” user interface 104, . . . , and the participant “N” user interface 312. The game processing system 300 includes a gaming engine 314, game rules 316, and the exemplary visual trick engine 120, shown as software loaded into memory 306.
The interfaces 310 can be one or more hardware components that drive the visual displays and communicate with the interactive components, e.g., touch screen displays, of the multiple participant user interfaces 102, 104, . . . , 312.
The exemplary game processing system 400 includes a server computing device 402, which can be a computer or other device that has a processor, memory, and data storage. The server computing device 402 thus includes a processor 404, memory 406, data storage 408, and an interface, such as a network interface card (NIC) 410, to communicatively couple over a network 412 with remote computing devices, such as computing device “1” 414 that hosts the participant “1” user interface 416; computing device “2” 418 that hosts the participant “2” user interface 420; . . . ; and computing device “N” 422 that hosts the participant “N” user interface 424. The game processing system 400 includes a gaming engine 314, game rules 316, and the exemplary visual trick engine 120, shown as software loaded into memory 406.
The participant computing devices 414, 418, and 422 may be desktop or notebook computers, or may be workstations or other client computing devices that have a processor and memory, but may or may not have onboard data storage. Typically, a player station does not have data storage. Such modules may be “dumb” in that they have no bootable device; they communicate with the visual trick engine 120 but generally receive their images and instructions from the server 402. Thus, in one implementation, a player computing device 414 is a visual display with graphics processing power and user interface components.
Exemplary Engines
Components in the illustrated example implementation of the visual trick engine 120 include a user interfaces manager 602, a virtual game component manager 604, a mode selector 606, a user interface input interpreter 608, a database of gestures 610, a buffer for storing a recognized gesture 612, a mapper 614, a database of image tricks 616, a buffer for a mapped trick 618, an animation engine 620, and a display driver interface 622. The illustrated trick engine 120 further includes a user interface input analyzer 624, a gesture segmenting engine 626, a segment mapper 628, a database of image movement segments 630, a motion synthesizer 632, kinematic formulas 634, a learning engine 636, and a custom trick recorder 638. As mentioned, the above list of components can vary and the interrelation of these components can vary depending on implementation.
Operation of the Exemplary Engine
The visual trick engine 120 can serve multiple functions, or, from another point of view, can operate in multiple modes. Thus, a mode selector 606 can direct certain lines of operation, but in one implementation the mode selector 606 changes modes on the fly, while in another implementation there is no mode selector 606, as the various modes of operation are built into the fabric of the visual trick engine 120.
In a first mode of operation, the visual trick engine 120 maps recognized finger gestures 612 to pre-packaged tricks, which may be stored as video clips representing a virtual game component undergoing the trick.
In a second mode of operation, there may or may not be pre-packaged tricks. Instead, the UI input analyzer 624 divides the user input into gesture segments and maps each segment to a plausible movement instruction or movement instruction segment. Thus, the player's gesture drives the motion of the virtual game component from scratch, and the custom trick recorder 638 can record an accumulation of the movement segments and store the segments as a novel trick in the database of tricks 616.
In a third mode of operation, a motion synthesizer 632 applies kinematic formulas 634 (laws of physics) to a mathematical model of the virtual game component undergoing a movement or trick. An initial velocity and/or momentum is typically assigned to the virtual game component based on the player's user interface input, and subsequent display of the virtual game component follows kinematic trajectories and behavior plotted by the kinematic formulas as they relate to mechanics, such as a simulated interaction between a person's touch and a hypothetical surface of the modeled physical artifact; a velocity of the physical artifact; a momentum of the physical artifact; a friction acting on the physical artifact; a drag acting on the physical artifact; a gravitational force acting on the physical artifact; a rotational inertia possessed by the physical artifact; and one or more collision forces acting on the physical artifact.
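As an informal illustration of this third mode (constants and function names are assumptions, not the disclosed implementation), the sketch below assigns an initial speed derived from a gesture and integrates a simple friction model frame by frame to plot a virtual chip sliding to a halt:

```python
# Minimal sketch, assuming invented constants: assign an initial velocity from the
# player's gesture, then integrate simple kinematics (sliding friction) to obtain
# the frame-by-frame position of a virtual chip sliding on the virtual tabletop.
FRAME_DT = 1.0 / 30.0       # seconds per video frame
FRICTION_DECEL = 3.0        # m/s^2, assumed sliding deceleration

def slide_trajectory(x0, v0, dt=FRAME_DT, decel=FRICTION_DECEL):
    """Yield (position, speed) per frame for a chip sliding in one dimension."""
    x, v = x0, v0
    while v > 0:
        yield x, v
        v = max(0.0, v - decel * dt)   # friction removes speed each frame
        x += v * dt
    yield x, 0.0                        # final resting frame

if __name__ == "__main__":
    # Suppose the swipe speed of the gesture maps to 1.5 m/s of initial chip speed.
    frames = list(slide_trajectory(x0=0.0, v0=1.5))
    print(f"{len(frames)} frames, chip stops at x = {frames[-1][0]:.2f} m")
```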
In one implementation, an electronic game table (e.g., 100) utilizing the visual trick engine 120 can recognize an extensive repertoire of gestures 610 (e.g., finger or hand gestures) made by the human player (or dealer), and the mapper 614 can relate each recognized gesture 612 to a library of preprogrammed image tricks 616 or manipulations. For example, the image manipulations may be virtual renditions of visually entertaining playing card maneuvers (e.g., a one-hand shuffle), chip tricks, or sophisticated dice rolls and dice tricks.
The chip vortex 904 has a gesture component 902 as shown. The trick component of the chip vortex 904 is virtually displayed as one or more stacks of virtual chips that appear to “explode” as if by an invisible bomb beneath them. None of the chips is visually destroyed; instead, the chips burst out from the center in all directions of 3D space, as displayed on a 2D video display. As the chips fly outward, they come to a slow, smooth halt, as if an invisible vortex were pulling them back toward their place of origin. When they reach the maximum explosive volume, the chips continue to rotate a little longer in 3D space and are then quickly sucked by the invisible vortex back to their starting place.
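Purely as an illustration of the timing just described (all values are assumptions), the following sketch evaluates a chip's radial distance from the stack center during the explode, hover, and pull-back phases of the vortex:

```python
# Illustrative sketch (not from the specification): the "chip vortex" timing curve.
# Each virtual chip flies outward, eases to a halt, lingers while rotating, then is
# pulled quickly back to its starting position at the center of the stack.
import math

def vortex_radius(t, t_out=1.0, t_hold=0.5, t_back=0.3, max_r=1.0):
    """Radial distance of a chip from the stack center at time t (seconds)."""
    if t < t_out:                              # explode outward, easing to a stop
        u = t / t_out
        return max_r * (1 - (1 - u) ** 2)      # ease-out curve
    if t < t_out + t_hold:                     # hover at full radius, still spinning
        return max_r
    if t < t_out + t_hold + t_back:            # sucked back toward the origin
        u = (t - t_out - t_hold) / t_back
        return max_r * (1 - u)
    return 0.0

def chip_position(chip_index, n_chips, t):
    """Scatter chips evenly in all directions and evaluate the shared radius."""
    angle = 2 * math.pi * chip_index / n_chips
    r = vortex_radius(t)
    return (r * math.cos(angle), r * math.sin(angle))

if __name__ == "__main__":
    for t in (0.0, 0.5, 1.2, 1.7):
        print(t, [tuple(round(c, 2) for c in chip_position(i, 8, t)) for i in range(3)])
```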
Other chip tricks and their variations are well-known by name. Some of these known chip tricks are described above as introductory or background material.
A pre-programmed trick, e.g., a standard chip trick, may be stored in the database of tricks 616 or in the relational database 702 as one or more movement instructions for the particular virtual game component, such as virtual playing card or virtual chip. The movement instructions may be commands and screen coordinates for sequentially posting a video object in order to simulate motion, or the movement instructions may be one or more motion vectors for creating a sequence of video frames to animate the motion of the virtual game component. Or, the movement instructions may be a stored video sequence or video clip that can be played to create the visual animation that constitutes the trick.
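The following sketch suggests, using hypothetical names only, one way a stored movement instruction could accommodate the three encodings just described (screen coordinates, motion vectors, or a reference to a stored video clip):

```python
# Hypothetical sketch of how a stored movement instruction could be encoded in a
# database of tricks: as a sequence of screen coordinates, as motion vectors, or as
# a reference to a pre-rendered video clip. All names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MovementInstruction:
    component: str                                   # e.g. "virtual_chip", "virtual_card"
    coordinates: List[Tuple[int, int]] = field(default_factory=list)   # per-frame screen positions
    motion_vectors: List[Tuple[float, float]] = field(default_factory=list)  # per-frame deltas
    video_clip: Optional[str] = None                 # path to a stored clip of the whole trick

    def frame_count(self) -> int:
        return max(len(self.coordinates), len(self.motion_vectors))

# Example: a short rightward chip slide encoded as explicit screen coordinates.
slide = MovementInstruction(
    component="virtual_chip",
    coordinates=[(100 + 10 * f, 240) for f in range(12)],
)
print(slide.frame_count(), "frames")
```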
The virtual game component manager 604 may filter which user inputs correspond to particular virtual game components when an implementation enables visual tricks for more than one type of virtual game component. For example, a particular betting game may allow players to perform visual tricks with both the betting chips and the playing cards.
The mode selector 606, as described above, can direct certain lines of operation. In one implementation the mode selector 606 changes modes on the fly, while in another implementation there is no mode selector 606, as the various modes of operation can be built into the fabric of the visual trick engine 120. The mode selector 606 can change from mapping player gestures to pre-packaged tricks, to mapping the gestures directly to movements that have some correspondence with the parts of the gestures.
In one implementation or mode, upon receiving user input, the user interface input interpreter 608 may consult a database of gestures 610 to attempt determination of a recognized gesture 612. The mapper 614 aims to find a corresponding trick 618 for the recognized gesture from the database of image tricks 616. The trick to be performed is passed to the animation engine 620, which sends display control signals to the display driver interface 622.
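A minimal sketch of this lookup flow follows; the gesture names, trick names, and clip paths are invented for illustration, and a practical interpreter would use trained gesture recognition rather than exact string matching:

```python
# Minimal sketch of the gesture-to-trick lookup flow (hypothetical data and names).
GESTURE_DB = {
    "circle_clockwise": "chip_vortex",
    "two_finger_flick": "twirl_flick",
    "tap_tap_swipe":    "mexican_jumping_chip",
}

TRICK_DB = {
    "chip_vortex":          "clips/chip_vortex.mp4",
    "twirl_flick":          "clips/twirl_flick.mp4",
    "mexican_jumping_chip": "clips/mexican_jumping_chip.mp4",
}

def interpret_and_map(raw_gesture: str):
    """Recognize the gesture, then map it to a stored trick (here, a video clip reference)."""
    recognized = GESTURE_DB.get(raw_gesture)   # input interpreter consults the gesture database
    if recognized is None:
        return None                            # unrecognized input: no trick is played
    return TRICK_DB.get(recognized)            # mapper: the trick is passed to the animator

print(interpret_and_map("circle_clockwise"))   # -> clips/chip_vortex.mp4
```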
In another implementation or mode, the UI input analyzer 624 has a gesture segmenting engine 626 that divides the user input into gesture segments. The segment mapper 628 relates each gesture segment to a plausible movement instruction or movement instruction segment from the database of image movement segments 630. An accumulation of these movement segments can be passed to the animation engine 620 to perform the trick. A learning engine 636 can derive an innovative new trick from the movement segments established, and/or the custom trick recorder 638 can add the novel trick to the database of gestures 610 and the database of image tricks 616, or to the relational database 702 that relates gestures to tricks.
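The following sketch illustrates one assumed way a segmenting step could operate, splitting a touch stroke wherever its dominant direction changes and mapping each segment to a movement segment; it is not the disclosed algorithm, and all names are invented:

```python
# Sketch (assumed logic): divide a touch stroke into gesture segments wherever the
# dominant direction changes, then map each segment to a movement-instruction
# segment that an animation step could accumulate into a custom trick.
def dominant_direction(p0, p1):
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    return ("right" if dx >= 0 else "left") if abs(dx) >= abs(dy) \
        else ("down" if dy >= 0 else "up")

def segment_stroke(points):
    """Group consecutive touch samples that share a dominant direction."""
    segments, current = [], [points[0]]
    for prev, nxt in zip(points, points[1:]):
        if current[:-1] and dominant_direction(current[0], current[-1]) != dominant_direction(prev, nxt):
            segments.append(current)
            current = [prev]
        current.append(nxt)
    segments.append(current)
    return segments

MOVEMENT_SEGMENTS = {"right": "slide_right", "left": "slide_left",
                     "up": "toss_up", "down": "drop_down"}

stroke = [(0, 0), (5, 1), (10, 1), (10, 6), (9, 12)]   # rightward, then downward
moves = [MOVEMENT_SEGMENTS[dominant_direction(s[0], s[-1])] for s in segment_stroke(stroke)]
print(moves)   # the accumulated movement segments form the custom trick
```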
In another implementation or mode, as introduced above, the motion synthesizer 632 applies kinematic formulas 634 (laws of physics) to a mathematical model of the virtual game component undergoing a movement or trick. An initial velocity and/or momentum is typically assigned to the virtual game component based on the player's user input, and subsequent display of the virtual game component follows kinematic trajectories and behavior plotted by the kinematic formulas as they relate to mechanics, such as a simulated interaction between a person's touch and a hypothetical surface of the modeled physical artifact that is the subject of the virtual game component; a velocity of the physical artifact; a momentum of the physical artifact; a friction acting on the physical artifact; a drag acting on the physical artifact; a gravitational force acting on the physical artifact; a rotational inertia possessed by the physical artifact; and one or more collision forces acting on the physical artifact.
A typical system has an electronic game table 100 with multiple video displays 102 . . . 116, a visual trick engine 120 for receiving an input from a user interface (e.g., 102) of the electronic game table 100, and a mapper 614 in the visual trick engine 120 to relate the input to a movement instruction for a video object representing a physical artifact used in the betting game played on the electronic game table 100. An animation engine 620 displays the video object in multiple video frames displayed on one or more of the multiple video displays 102 according to the movement instruction.
In one implementation, the mapper 614 matches the input to a video sequence representing a 3-dimensional movement instruction for the video object representing the physical artifact.
The video object can virtually represent a playing card, a betting chip, a die, a pair of dice 1106 and 1108 as shown, a dice cup, a ball, a game tile, a domino, or a slot machine symbol, and so forth.
Likewise, the video object can represent one or more virtual playing cards, and the movement instruction retrieved from the database can comprise visual actions of a playing card maneuver for the one or more playing cards, such as shuffling the playing cards, cutting a deck of the playing cards, dealing a playing card, discarding a playing card, passing a playing card, revealing a playing card, changing a size of the playing card, or tearing a playing card.
The motion synthesizer 632 in the visual trick engine 120 can apply one or more laws of physics to the movement instruction. The motion synthesizer can derive a kinematic motion for the video object by applying the one or more laws of physics via kinematic formulas 634 to a mathematical model of the physical artifact represented by the video object.
The motion synthesizer 632 may derive the kinematic motion for the video object by applying one or more mathematical formulas describing an interaction between a person initiating the motion and a surface of the physical artifact, a velocity, a momentum, a friction, a drag, a gravitational force, a rotational inertia, and a collision force acting on the physical artifact.
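As a rough, assumed illustration of such kinematic behavior (constants and names are invented, and the model is deliberately simplified), the sketch below tosses a virtual die under gravity, rebounds it off the table with energy loss at each collision, and decays its spin at each impact:

```python
# Illustrative kinematics sketch under assumed constants: a thrown virtual die
# follows a ballistic arc under gravity, loses energy at each collision with the
# table (restitution and impact friction), and its tumbling slows with every bounce.
DT, GRAVITY, RESTITUTION, SPIN_LOSS = 1.0 / 30.0, 9.81, 0.6, 0.5

def toss_die(vx, vy, spin, height=0.3, frames=120):
    """Return per-frame (x, y, angle) samples for a die thrown from `height` meters."""
    x, y, angle = 0.0, height, 0.0
    samples = []
    for _ in range(frames):
        samples.append((x, y, angle))
        if vx == vy == spin == 0.0:
            continue                        # the die has come to rest; hold the pose
        vy -= GRAVITY * DT                  # gravity pulls the die down
        x += vx * DT
        y += vy * DT
        angle += spin * DT                  # rotational inertia keeps it tumbling
        if y <= 0.0:                        # collision with the table surface
            y = 0.0
            vy = -vy * RESTITUTION          # rebound with some energy lost
            vx *= 0.8                       # impact friction slows forward travel
            spin *= SPIN_LOSS               # tumbling slows after each impact
            if vy < 0.2:                    # too little energy to bounce again
                vx = vy = spin = 0.0
    return samples

path = toss_die(vx=1.0, vy=1.5, spin=20.0)
print("final resting sample:", tuple(round(v, 2) for v in path[-1]))
```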
Example Method
At block 1502, an input from a user interface of an electronic game table that includes one or more video displays is received. In one implementation, the method 1500 receives the input from a touch screen display of the electronic game table. The input may be one or more single touch contacts between a finger and the touch screen, and/or may be one or more movements or strokes of one or more fingers or hand parts on a given touch screen user interface.
At block 1504, the input is mapped to a movement instruction for a video object representing a physical artifact used in a betting game played on the electronic game table. For example, an electronic table game system can map segments of hand gestures to segments of image movement, so that a player may perform, or attempt, a custom or new visual trick (e.g., a chip trick) in real time. That is, the electronic table game system implements the player's arbitrary finger (or hand, etc.) gestures as a custom image trick. The appeal and sophistication of the virtual image trick depend on the player's learned skill at performing the trick, for example, via the exemplary visual trick engine of the electronic table game system. In one implementation, the exemplary visual trick engine combines mouse-like macro-movement of an image (by the player) with more subtle manipulation of the moving image, based on subtle or skilled hand or finger gestures.
The exemplary visual trick engine may record new tricks by extending a “recording” or “record trick” mode to the player. In such an implementation, the player can actuate a recording switch (e.g., a visual icon) and begin recording an image trick as the player performs finger and hand gestures that are translated by the visual trick engine into visual tricks of artifacts, such as playing cards, betting chips, or dice. Once perfected, the recorded trick may be stored as a macro composed of smaller image movements and motions in the player's (or the system's) collection, library, or database of tricks.
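A minimal sketch of such a recording mode, using invented names rather than the disclosed components, might look like the following; while recording is active, each movement segment produced from the player's gestures is appended to a macro that can later be replayed or stored:

```python
# Minimal sketch of a "record trick" mode (hypothetical API, not the disclosed engine).
class TrickRecorder:
    def __init__(self):
        self.recording = False
        self.macro = []                      # accumulated movement segments

    def start(self):
        self.recording, self.macro = True, []

    def capture(self, movement_segment):
        if self.recording:
            self.macro.append(movement_segment)

    def stop(self, name, library):
        self.recording = False
        library[name] = list(self.macro)     # store the finished trick as a macro
        return library[name]

library = {}
recorder = TrickRecorder()
recorder.start()
for segment in ("lift", "twirl", "lower"):   # e.g. segments of a lift-twirl attempt
    recorder.capture(segment)
print(recorder.stop("my_lift_twirl", library), list(library.keys()))
```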
The method 1500 may include matching the input to a video sequence representing a 3-dimensional movement instruction for the video object representing the physical artifact. The video object can virtually represent a playing card, a betting chip, a die, a pair of dice, a dice cup, a ball, a game tile, a domino, or a slot machine symbol, etc.
The mapping may include searching a database of gestures for a representation of the input and retrieving a corresponding movement instruction for the video object from the database. The video object may represent one or more virtual betting chips; and then the movement instruction retrieved from the database comprises visual actions of a trick for the betting chip, such as a five chip coin star, an abduction, an Areat shuffle, a back spin, a bounce back, a bounce, a butterfly, a caterpillar, a caterpillar star, a chip roll, a Danish twirl, a drifter, a drop bounce, a finger-to-finger twirl, a finger flip, a finger roll, a floater, a fountain, a J-factor, a Johnny Chan, a knuckle roll, a lift twirl, a lookout, a Mexican jumping chip, a moon landing, a muscle pass, a pendulum, a Phil Ivey stepup, a pick, a pickover, a reverse thumb flip, a roll down, a run around, a scissor twirl, a shuffle, a sweep, a swirl, a switch, a swivel display, a sub-zero, a thumb flip, a top spin, a twirl, a twirl hop, a twirl lift, or an unwrap-and-recapture.
The method 1500 may include simulating at least part of a human hand for display when displaying one of the tricks. When the video object represents one or more virtual playing cards, the movement instruction retrieved from the database can be visual actions of a playing card maneuver for the one or more playing cards, such as shuffling the playing cards, cutting a deck of the playing cards, dealing a playing card, discarding a playing card, passing a playing card, revealing a playing card, changing a size of the playing card, or tearing a playing card.
Further, the mapping may include applying one or more laws of physics to the movement instruction. Applying a law of physics to the movement instruction can further include deriving a kinematic motion for the video object by applying the one or more laws of physics to a mathematical model of the physical artifact represented by the video object. Mathematical formulas for imparting kinematic motion may include those describing an interaction or touch between a person and a surface of the physical artifact, a velocity of the physical artifact, a momentum of the physical artifact, a friction acting on the physical artifact, a drag acting on the physical artifact, a gravitational force acting on the physical artifact, a rotational inertia possessed by the physical artifact, and a collision force acting on the physical artifact.
In one implementation, the method 1500 includes dividing the input into input gesture segments, mapping each input gesture segment to a movement instruction segment for the video object, and displaying the video object according to an accumulation of the movement instruction segments; as well as recording the input gesture segments and the accumulation of the movement instruction segments and storing the input gesture segments with the associated accumulation of the movement instruction segments in a relational database—thus creating a custom trick addressable in the relational database.
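One assumed way to keep the recorded gesture segments and their mapped movement-instruction segments addressable together in a relational database is sketched below; the schema and names are illustrative only and are not taken from the specification:

```python
# Sketch (assumed schema): storing a recorded custom trick in a relational database
# so that the gesture segments and their movement-instruction segments are kept
# together under one trick name, making the custom trick addressable later.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tricks (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
conn.execute("""CREATE TABLE trick_segments (
                   trick_id INTEGER REFERENCES tricks(id),
                   position INTEGER,
                   gesture_segment TEXT,
                   movement_segment TEXT)""")

def store_custom_trick(name, gesture_segments, movement_segments):
    cur = conn.execute("INSERT INTO tricks (name) VALUES (?)", (name,))
    trick_id = cur.lastrowid
    rows = [(trick_id, i, g, m)
            for i, (g, m) in enumerate(zip(gesture_segments, movement_segments))]
    conn.executemany("INSERT INTO trick_segments VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    return trick_id

store_custom_trick("my_vortex", ["circle", "pinch"], ["explode_out", "pull_back"])
print(conn.execute("SELECT * FROM trick_segments").fetchall())
```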
At block 1506, the video object is displayed in multiple video frames displayed on one or more of the multiple video displays according to the movement instruction. The user interface for sensing player gestures, such as hand and finger motions, may be the same user interface that displays an instance of the visual trick being performed, i.e., when the display screen is also a touch screen for sensing user input. Or, the user interface for sensing player gestures may be composed of optical sensors, motion sensors, accelerometers, or even touch sensors worn on the hand like part of a glove.
Movement of virtual game components and visual tricks may be executed on one display, or across multiple displays. For example, a card dealer may touch a first touch screen to pass virtual playing cards from the first screen to a second screen of another card player. Likewise, a first player in a virtual dice game may touch an input device, such as a touch screen, to throw the dice from the first player's screen, across a common display positioned between all players, to a second player's display on an opposing side of the common screen.
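The following sketch, with assumed display geometry, illustrates how a single table-wide trajectory could be partitioned across several displays by mapping a global table coordinate to a particular display and a local coordinate on that display:

```python
# Illustrative sketch of executing one trajectory across several displays: a global
# table coordinate is mapped to whichever display's region contains it, so a thrown
# virtual die can travel from one player's screen, across the common display, to the
# opposing player's screen. The display layout values here are assumptions.
DISPLAYS = {                                  # name: (x_min, x_max) in table coordinates (cm)
    "player_1": (0, 40),
    "common":   (40, 120),
    "player_2": (120, 160),
}

def locate(x):
    """Return (display, local_x) for a global table x-coordinate, or (None, None)."""
    for name, (lo, hi) in DISPLAYS.items():
        if lo <= x < hi:
            return name, x - lo
    return None, None

# A die thrown from player 1 toward player 2, sampled every 20 cm along the table.
for x in range(0, 160, 20):
    print(x, locate(x))
```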
Although exemplary systems have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claimed systems, methods, and structures.