A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever. Copyright 2013, WMS Gaming, Inc.
Embodiments of the inventive subject matter relate generally to wagering game systems and networks and, more particularly, to wagering game systems and networks that detect sounds and voices.
Wagering game machines, such as slot machines, video poker machines and the like, have been a cornerstone of the gaming industry for several years. Generally, the popularity of such machines depends on the likelihood (or perceived likelihood) of winning money at the machine and the intrinsic entertainment value of the machine relative to other available gaming options. Where the available gaming options include a number of competing wagering game machines and the expectation of winning at each machine is roughly the same (or believed to be the same), players are likely to be attracted to the most entertaining and exciting machines. Shrewd operators consequently strive to employ the most entertaining and exciting machines, features, and enhancements available because such machines attract frequent play and hence increase profitability to the operator. Therefore, there is a continuing need for wagering game machine manufacturers to continuously develop new games and gaming enhancements that will attract frequent play.
Furthermore, during gaming sessions (e.g., during periods of wagering game play) players tend to communicate a variety of thoughts and emotions both verbally and non-verbally. Gaming entities, such as wagering game machine manufacturers, gaming venue operators, and wagering game providers, would like to understand players' communications and emotions to improve the gaming experience.
Embodiments are illustrated in the Figures of the accompanying drawings.
This description of the embodiments is divided into four sections. The first section provides an introduction to embodiments. The second section describes example operations performed by some embodiments while the third section describes example operating environments. The fourth section presents some general comments.
This section provides an introduction to some embodiments.
As described previously, gaming entities, such as wagering game machine manufacturers, gaming venue operators, and wagering game providers, would like to understand players' communications and emotions. Some embodiments of the present inventive subject matter include detecting audible communications made by a player, and other individuals within a gaming venue, during a gaming session. Some embodiments further include analyzing, or evaluating, the audible communications (e.g., audible words, sounds, etc.) in context of a scenario in which the audible communication was made. For instance, some embodiments include evaluating information the player communicates, and evaluating how the player communicates the information, to determine a meaning for the audible communication. Some embodiments include evaluating a history of communications that the player has previously made. Some embodiments include evaluating the information communicated by the player in context of wagering game content presented, or presentable, during the wagering game session. For instance, a player's audible communication may refer to gaming content presented during the wagering game session, such as commands for a wagering game system to perform certain wagering game actions, expressions of confusion or frustration about certain wagering game features, and so forth. In response, a wagering game system can automatically respond to the player's audible communications, such as by performing actions, suggesting alternative content, providing encouragement and/or rewards, etc. Another embodiment includes detecting passive comments made by a player, detecting background conversations between the player and other individuals near the player, or detecting other such indirect or passive communications (e.g., communications that are not directed specifically at a wagering game machine). Some embodiments include responding to the indirect communications with subtle suggestions for content, or for subtle changes of content. These are but a few examples. Many more are described in further detail below.
Further, some embodiments of the inventive subject matter describe examples of detection and response to audible communications for gaming in a network wagering venue (e.g., an online casino, a wagering game website, a wagering network, etc.) using a communication network that provides access to wagering games, such as a public network (e.g., a public wide-area network, such as the Internet), a private network (e.g., a private local-area gaming network), a file sharing network, a social network, etc., or any combination of networks. Multiple users can be connected to the networks via computing devices. The multiple users can have accounts that subscribe to specific services, such as account-based wagering systems (e.g., account-based wagering game websites, account-based casino networks, etc.).
Further, in some embodiments herein a user may be referred to as a player (i.e., of wagering games), and a player may be referred to interchangeably as a player account. Account-based wagering systems utilize player accounts when transacting and performing activities, at the computer level, that are initiated by players. Therefore, a “player account” represents the player at a computerized level. The player account can perform actions via computerized instructions. For example, in some embodiments, a player account may be referred to as performing an action, controlling an item, communicating information, etc. Although a player, or person, may be activating a game control or device to perform the action, control the item, communicate the information, etc., the player account, at the computer level, can be associated with the player, and therefore any actions associated with the player can also be associated with the player account. Therefore, for brevity, to avoid having to describe the interconnection between player and player account in every instance, a “player account” may be referred to herein in either context. Further, in some embodiments herein, the word “gaming” is used interchangeably with “gambling.”
This section describes operations associated with some embodiments. In the discussion below, some flow diagrams are described with reference to block diagrams presented herein. However, in some embodiments, the operations can be performed by logic not described in the block diagrams.
In certain embodiments, the operations can be performed by executing instructions residing on machine-readable storage media (e.g., software), while in other embodiments, the operations can be performed by hardware and/or other logic (e.g., firmware). In some embodiments, the operations can be performed in series, while in other embodiments, one or more of the operations can be performed in parallel. Moreover, some embodiments can perform more or less than all the operations shown in any flow diagram.
The flow 200 continues at processing block 204, where the system evaluates at least one characteristic of the one or more audible communications in context of one or more characteristics or conditions associated with the wagering game session. The system can detect and analyze characteristics of the audible communication (e.g., inflection, volume, words, etc.) and compare the characteristics against libraries, files, databases, or other collections of data that indicate a description or meaning for the characteristics. For instance, the system can detect a spoken phrase by an individual and cross-reference the spoken phrase to a library of known terms. Characteristics of an audible communication can include characteristics of how, when, where, and by whom, the audible communication is made. Characteristics or conditions associated with a wagering game session can include information associated with wagering game content, wagering game rules or mechanics, a wagering game machine, a history of game play, wagering game events, game play achievements, betting amounts, player-account information, group game data, secondary gaming content (e.g., secondary wagering games, community games, progressives, etc.), casino services, persistent or episodic wagering games, environmental conditions in a gaming venue, or any other information associated with gaming.
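For illustration only, the following Python sketch shows one way such a cross-reference against a library of known terms might look; the TERM_LIBRARY contents and the lookup_phrase function are hypothetical examples rather than the system's actual data or interface.

```python
# Hypothetical sketch: fuzzy-match a recognized phrase against a small
# library of known gaming terms and return the best entry, if any.
from difflib import SequenceMatcher

TERM_LIBRARY = {
    "bet max": "Wager the maximum amount on the next play.",
    "double down": "Double the wager on the next spin.",
    "bonus round": "A secondary game feature triggered by qualifying symbols.",
}

def lookup_phrase(spoken, threshold=0.8):
    """Return (term, description) for the closest library match, or None."""
    spoken = spoken.lower().strip()
    best_term, best_score = None, 0.0
    for term in TERM_LIBRARY:
        score = SequenceMatcher(None, spoken, term).ratio()
        if score > best_score:
            best_term, best_score = term, score
    return (best_term, TERM_LIBRARY[best_term]) if best_score >= threshold else None

print(lookup_phrase("bet  max"))  # fuzzy match to the "bet max" entry
```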
The flow 200 continues at processing block 206, where the system generates an automated response to the one or more audible communications based on the evaluation. In other words, the system automatically generates a response based on the evaluation of the characteristic(s) of the audible communication(s) in context of the one or more characteristics or conditions associated with the wagering game session. The automated response can take a variety of different forms, some of which may include wagering game content, encouraging messages, advertisements, suggestions for content, help tips, replays of gaming events, and so forth.
The following includes some descriptive elements pertinent to the flow 200.
In some embodiments, the system evaluates whether the audible communication should be responded to, or whether the audible communication requires clarification or authorization.
In some embodiments, the system determines whether the audible communication originates within a given proximity to a wagering game machine and, in response, determines whether to further evaluate the audible communication or generate an automated response. The system can disregard, or filter out, audible sounds that are detected from beyond the proximity (e.g., ignores sounds that occur too far away from the wagering game machine). For instance, the system filters out ambient noises that come from the wagering game machine or from nearby machines or patrons. In some embodiments, the system uses microphones in a chair of the wagering game machine to detect sounds that come from, or that are directed to, the player. In some examples, the system determines the location of the origin of the audible communication via multiple microphones and/or specialized microphones (e.g., 3D microphones) and/or via visual confirmation from cameras.
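As a rough sketch only, and assuming upstream hardware reports an estimated source position for each detected sound, proximity filtering of the kind described above might be expressed as follows; the SoundEvent record, the within_proximity function, and the radius value are illustrative assumptions.

```python
# Hypothetical sketch: discard sounds whose estimated source lies beyond a
# configurable radius around the wagering game machine.
import math
from dataclasses import dataclass

@dataclass
class SoundEvent:
    transcript: str
    source_x: float  # estimated source position (meters, machine-relative)
    source_y: float

PROXIMITY_RADIUS_M = 1.5  # ignore sounds originating farther away than this

def within_proximity(event):
    return math.hypot(event.source_x, event.source_y) <= PROXIMITY_RADIUS_M

events = [SoundEvent("I'd like to bet more", 0.4, 0.2),
          SoundEvent("background crowd noise", 6.0, 3.5)]
print([e.transcript for e in events if within_proximity(e)])  # nearby utterance only
```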
In some embodiments, the system determines whether the audible communication originates from a specific player associated with the wagering game session. For instance, the system detects unique voice characteristics of the player via biometrics, and verifies that the voice characteristics are a biometric match to the player who has logged in to the wagering game machine.
In some embodiments, the system generates a relevance score of the audible communication and, based on the relevance score, determines a degree of clarification or authorization to request within the automated response. The system can determine the degree of clarification or authorization proportional to the relevance score. For example, if the audible communication is related to a wager or an amount to wager, the system may assign a high relevance value, and may present a prompt asking the player if they intended to bet more. For instance, the system detects the words “I'd like to bet more” but, before making another wager or before increasing a betting amount, the system may ask for clarification (e.g., the system asks “Did you just say ‘I'd like to bet more’?”, “Did you want to make a bet?”, or “Did you want to increase your betting amounts?”). In some embodiments, the system may hear a grunt or groan and may ask the player additional clarifying questions (e.g., “Are you upset?” or “Is there anything I can do for you?”). In some embodiments, the system presents a prompt or confirmation message on a display for touch approval.
Furthermore, the system can determine the degree of priority to assign to the automated response based on the relevance score. For example, based on an emotion or preference detected from the audible communication, the system may determine a timing for response, a degree of encouragement to include in a response, a degree of marketing or advertising to target via the response, a type of content to present or suggest in a response, etc.
In some embodiments, the system determines whether to respond based on whether the content of the audible communication is decipherable. In some embodiments, the system responds only to comments that contain phrases or words that are similar to data within a library, knowledgebase, etc. (e.g., only data in a game knowledgebase). In other words, the system can filter the audible communications, and provide responses based only on the relevance of the audible communication to the wagering game content. For instance, the system detects and parses an audible communication, and compares the parsed components of the audible communication to words or phrases in a library of words and phrases that have been pre-determined to be relevant to the wagering game content. The system then analyzes the comparison of the parsed components of the audible communication to the library (e.g., analyzes the comparison of words from the player to the words in the library) to generate a relevance score. Based on the specific value of the relevance score, the system can determine whether to perform, or not perform, certain actions. The system can also generate an automated response to contain a degree of requests for clarification or a degree of authorization of actions (e.g., prior to performing the actions) proportional to the relevance score. For instance, the system may refrain from generating an automated response if a comment is determined to have a low relevance score (e.g., if the comment's relevance score does not exceed a lower limit or relevance threshold). As relevance scores increase, the system can respond proportionately, such as by prompting additional questions for higher relevance scores (e.g., “Can you rephrase that comment?”, “Are you speaking to me?”, “What do you mean by double-down, do you want me to double your bet on the next spin?”, etc.). Based on the value of the relevance scores, the system can present direct vocal confirmations (e.g., “Ok, I'll do that” or “I will double your next bet, please confirm by saying OK”). Further, based on the relevance scores, the system can perform direct actions (e.g., present a listing of new games, increase a wager amount, present specific content).
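A minimal sketch of this relevance-scoring idea, assuming a simple word-overlap score and invented threshold values (the GAME_WORDS set and the relevance_score and choose_response functions are not the system's actual interfaces), might look like this:

```python
# Hypothetical sketch: score an utterance by overlap with game-relevant words,
# then choose a response tier proportional to the score.
GAME_WORDS = {"bet", "wager", "spin", "bonus", "payline", "jackpot"}

def relevance_score(utterance):
    words = utterance.lower().split()
    if not words:
        return 0.0
    return sum(1 for w in words if w in GAME_WORDS) / len(words)

def choose_response(utterance):
    score = relevance_score(utterance)
    if score < 0.2:        # below the lower relevance threshold: do not respond
        return None
    if score < 0.5:        # moderate relevance: ask a clarifying question
        return "Are you speaking to me?"
    return "I will double your next bet, please confirm by saying OK"

print(choose_response("I'd like to bet more on the next spin"))
```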
In some embodiments, the system evaluates characteristics of an audible communication against data sources to determine a meaning for the audible communication.
In some embodiments, the system evaluates characteristics of audible communications against entries in a variety of different types of data sources (e.g., libraries, files, records, databases, etc.), such as, but not limited to, the following: a library of word definitions, a library of colloquialisms, a library of languages or dialects, a library of elements of speech, a library of non-verbal sounds, a library of sounds made by a device, a library of vocal tones that indicate emotion, a record of one or more additional audible communications made by an individual from whom the one or more audible communications originated, a record of one or more additional audible communications made by one of a plurality of individuals in proximity to a wagering game machine associated with the wagering game session, etc. In some embodiments, the data source is specifically tuned to a venue within which the audible communication occurs. For example, the data source may include specific references to unique elements, shops, shows, services, etc., associated with a venue. Based on the evaluation of the characteristics of audible communications against descriptions within the data sources, the system can determine a meaning of an audible communication.
In some embodiments, the system evaluates new content of the audible communication in context.
In some embodiments, the system detects an audible communication that contains new content (e.g., new words or phrases) that the system has not detected before and that is not within a library of known phrases. The system can evaluate the new content of the audible communication in context of the situation in which the audible communication was made (e.g., in context of characteristics of the individual who made the communication, in context of a mode of expression, in context of gaming information presented or presentable during a wagering game session, in context of a history of audible communications, etc.). Based on the evaluation, the system can determine how, or whether, to respond to the audible communication. For example, the system may present a wagering game called “The Great and Powerful Oz” during a wagering game session. During that session, the system detects the comment “Pay no attention to the man behind the curtain.” The system can analyze the statement (e.g., search through a database, a player profile, a network, the Internet, etc.) to determine that the phrase refers to a line from the movie “The Wizard of Oz” on which the wagering game is based. The system interprets the audible communication as being related to the wagering game, such as being related to a character in the game, and further interprets the comment as a request not to display the specific character or to perform some other action related to the character. In some embodiments, the system can present a response that asks for more information (e.g., “The Great and Powerful Oz requests that you clarify what you mean by Pay no attention to the man behind the curtain.”). In some embodiments, when the system cannot detect a meaning for the audible communication, the system can disregard the content of the communication and/or store the content for later reference. In some embodiments, the system stores any or all forms of communication, whether audible or inaudible, verbal or non-verbal, for reference and/or analytics.
In some embodiments, the system evaluates the content of the audible communication (e.g., evaluates what was said in the audible communication) as well as characteristics of an expression of the audible communication (e.g., evaluates how the communication was expressed).
In some embodiments, the system detects and evaluates nonverbal elements of speech related to the audible communication. For instance, the system detects and determines levels and/or fluctuations in volume, pitch, voice quality, rate, speaking style, rhythm, intonation, stress, inflection, etc., associated with the audible communication. In some embodiments, the system detects types of non-spoken sounds (e.g., grunts and groans). In some embodiments, the system detects body language and/or gestures. In some embodiments, the system evaluates the nonverbal elements of speech, or other expressions of the audible communication, and uses them to detect emotions or other indicators of meaning (e.g., detect a calm emotion from subdued and quiet speech, detect an excited or frustrated emotion from direct and forceful speech, etc.).
In some embodiments, the system detects an audible communication made in a passive tone, but determines that the content of the audible communication suggests a direct command or direct request of the system. For example, the system detects that a player says, “I have no idea how I won.” instead of “Why did I win?” The first comment (“I have no idea how I won”) is not a direct command or direct request for the system to tell the player why the player won, but it strongly suggests that the player is interested in knowing something about the mechanics of the game. The system can determine whether the player has a history of using the passive tone in speech and, based on that history, determine that a comment made in a passive tone is actually a direct request or command.
In some embodiments, the system detects physical aspects of an individual who makes the audible communication to determine a sense of emotion (e.g., negative, positive, or neutral) for the individual. For example, the system monitors an individual using recording equipment or other sensors (e.g., cameras, pressure sensors, heart-rate monitors, temperature sensors, etc.) that detect physical appearance, activity, biometric function, movement, etc. In some examples, when the individual makes an audible communication (e.g., whether verbal or non-verbal, such as a grunt or groan), the system can record an image of the individual's face and body to detect visible signs of heightened emotion, such as wincing, a furrowed brow, specific types of movement (e.g., shifting in a chair, jittery movement, hand-wringing, excessive tapping of the fingers, etc.), and so forth. In other examples, sensors in a chair can detect when a player is sitting on an edge of the seat or slumped down in the chair, which can provide clues to specific types of emotions. The system can analyze all physical aspects of the individual to give meaning to the audible communication.
Based on the evaluation of the expression of the audible communication, and said detection of emotion, the system can apply a meaning to the audible communication. For instance, the system refers to a library of descriptions of emotions and/or potential meanings associated with emotions, and, in context of the detected emotions, and other audio cues (e.g., verbal elements of speech such as spoken words from the communication) and/or visual cues taken from the audible communication, the system determines a most probable meaning from the library. The system can further refer to libraries associated with word definitions, languages or dialects, elements of speech, non-verbal sounds, sounds made by a device, vocal tones, etc.
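A simplified sketch of this kind of lookup, assuming upstream analysis already yields coarse prosodic cues, is shown below; the EMOTION_LIBRARY contents and the infer_emotion function are illustrative assumptions, not the disclosed library.

```python
# Hypothetical sketch: map observed prosodic cues to the library emotion with
# the largest cue overlap, and report its associated probable meaning.
EMOTION_LIBRARY = {
    "frustrated": {"cues": {"loud", "fast"}, "meaning": "dissatisfied with the outcome"},
    "excited": {"cues": {"loud", "rising_pitch"}, "meaning": "pleased with the outcome"},
    "calm": {"cues": {"quiet", "steady_pitch"}, "meaning": "neutral engagement"},
}

def infer_emotion(observed_cues):
    emotion, entry = max(EMOTION_LIBRARY.items(),
                         key=lambda kv: len(kv[1]["cues"] & observed_cues))
    return emotion, entry["meaning"]

print(infer_emotion({"loud", "fast"}))  # ('frustrated', 'dissatisfied with the outcome')
```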
In some embodiments, the system evaluates an audible communication in relation to wagering game content presentable via the wagering game machine, or other information associated with wagering games.
In some embodiments, the system detects verbal commands for a wagering game to perform an action or present content (e.g., “Play it.”, “Bet max.”, etc.). The system can evaluate the command against data associated with the wagering game. For instance, the system can detect that a command is associated with a specific object or event of the wagering game content (e.g., a character, a title, a theme, a graphic, an accomplishment, a wager, etc.).
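The following is a minimal, hypothetical sketch of dispatching a recognized verbal command to a game action; the COMMANDS table and GameSession stub are invented for illustration and are not the system's actual interface.

```python
# Hypothetical sketch: map a recognized command phrase to an action on a
# simple game-session stub.
import string

class GameSession:
    def __init__(self):
        self.bet = 1

    def bet_max(self):
        self.bet = 5
        return "Bet set to maximum."

    def spin(self):
        return "Spinning the reels."

COMMANDS = {"bet max": GameSession.bet_max, "play it": GameSession.spin}

def handle_command(session, utterance):
    key = utterance.lower().strip(string.punctuation + string.whitespace)
    action = COMMANDS.get(key)
    return action(session) if action else "Command not recognized."

session = GameSession()
print(handle_command(session, "Bet max."))  # "Bet set to maximum."
```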
In some embodiments, the system detects a query or request for information about the wagering game content (e.g., “Why did I not win?” “How do you get to the bonus options?” “How do you play the bonus round?” “Show the game rules.” “Show the pay table.” “How many lines does the game have?” “What are the betting options?” “Who is this Wizard of Oz character?”). For instance, the system can detect that the audible communication is associated with a point of game play within a wagering game and respond accordingly (e.g., when a player says “I wonder how I won?” or “How did I lose?” the system detects a point in play, as well as any recent gaming events, and generates an explanation or help tip related to game rules).
In some embodiments, the system detects a request for a specific type of content or feature (e.g., “Show me games about racing.”). The system detects queries about the player's own play or general play history for a game (e.g., “Why did I win?” “Show me my wins.” “Show me all the big wins that have occurred for this game in the last 6 weeks.” “When did this machine last go into a bonus round?” “I want to play the game I played last week.” “How long since my last bonus round?”), queries about locations of friends, queries about how to contact other social contacts (e.g., send an invitation to a friend on Facebook™ with similar interests or questions, such as someone who knows how to play the game that the player is playing, present a list of individuals who are available within a casino, etc.), a question or request for a casino service (“Please send the waiting staff.” “Where is my drink?”), and so forth.
In some embodiments, the system evaluates characteristics of the audible communication against a library of gaming terms or verbal game commands associated with the wagering game content. For example, the system includes a library of words, phrases, descriptions, metadata, etc., associated with wagering game content. In some embodiments, the system evaluates audible communications against metadata associated with wagering game content presentable during the wagering game session. In some embodiments, the system evaluates audible communications against one or more of game rules and game mechanics. In some embodiments, the system evaluates audible communications against a history of game play for a wagering game machine associated with the wagering game session.
In some embodiments, the system evaluates the audible communication in relation to a gaming event (e.g., in relation to a description of an event, in relation to metadata of the event, in relation to a timing of an event, etc.).
In some embodiments, the system determines that the audible communication is immediately followed, or preceded, by a specific game event. For example, when a player says “Aww, so close” the system detects that the wagering game had, on its last spin or play, experienced a near-win or almost resulted in a winning outcome based on game rules, game element configurations, etc. In some embodiments, the system refers to a log of wagering game events for a wagering game session to detect the specific game event. The log can be stored on a wagering game machine, a wagering game server, or any other gaming device associated with a wagering game network or other gaming venue (e.g., an online gaming server).
In some embodiments, the system evaluates audible communications against a library of descriptions of wagering game events. In some embodiments, the system evaluates audible communications against metadata associated with wagering game events of the wagering game session.
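As an illustrative sketch only, correlating a comment such as “Aww, so close” with a recently logged game event might resemble the following; the event-log records, the near_win flag, and the recent_event function are assumptions rather than the disclosed log format.

```python
# Hypothetical sketch: find the most recent logged game event within a short
# window and use it to give meaning to a player's comment.
import time

event_log = [
    {"time": time.time() - 3.0, "type": "spin_result", "near_win": True},
    {"time": time.time() - 45.0, "type": "spin_result", "near_win": False},
]

def recent_event(log, within_seconds=10.0):
    now = time.time()
    candidates = [e for e in log if now - e["time"] <= within_seconds]
    return max(candidates, key=lambda e: e["time"]) if candidates else None

event = recent_event(event_log)
if event and event.get("near_win"):
    print("Respond with an explanation of the near-win and an encouraging remark.")
```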
In some embodiments, the system evaluates the audible communication in context of additional communications or a history of communications made.
In some embodiments, the system detects additional audible communications made prior to, concurrent with, or after, an audible communication. The system analyzes the additional audible communications for clues or indications of what the first audible communication means. For instance, the system detects that a player groans, and also detects that another individual says “Too bad.” Based on the groan by the player, and the additional comment by the other individual, the system interprets the groan as a negative communication, or a communication that expresses a negative emotion by the player. If, however, the other individual had instead said “Wow, nice win!” then the system interprets the groan as a positive communication. The system further generates the automated response based on the context of the audible communication relative to the additional audible communications and the wagering game information.
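A small, hypothetical sketch of assigning polarity to a non-verbal sound from a bystander's comment, in the spirit of the groan example above, follows; the cue lists and the interpret_groan function are invented for illustration.

```python
# Hypothetical sketch: use a nearby individual's comment to decide whether a
# detected groan expresses a negative or positive emotion.
POSITIVE_CUES = {"nice win", "congrats", "wow"}
NEGATIVE_CUES = {"too bad", "so close", "ouch"}

def interpret_groan(nearby_comment):
    text = nearby_comment.lower()
    if any(c in text for c in NEGATIVE_CUES):
        return "negative"
    if any(c in text for c in POSITIVE_CUES):
        return "positive"
    return "unknown"

print(interpret_groan("Too bad."))  # negative
```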
In some embodiments, the system tracks a history of communications made by the source of the audible communication and analyzes the audible communication in context of the history of communications. For example, the system determines a meaning of a comment based on a history of communications.
In some embodiments, the system evaluates communications in context of a characteristic of the source of the communication.
In some embodiments, the system detects one or more characteristics of the source of the audible communication. For instance, the system can determine a position, orientation, or location of an individual who made the audible communication, such as whether the individual is seated in front of the wagering game machine, whether the player's eyes are looking at the display of the wagering game machine or away from the machine, etc. In some embodiments, the system utilizes player-tracking techniques, such as head tracking. In some embodiments, the system can refer to a map of an area within a gaming venue and overlay the position of individuals onto the map to determine distances from a wagering game machine or other positions relative to the wagering game machine and/or relative to other individuals. In some embodiments, the system utilizes geo-positioning and/or geo-locationing (e.g., global positioning systems, radio-frequency location systems, etc.).
In some embodiments, based on the one or more characteristics of the source of the audible communication, the system determines whether the audible communication is a direct command to perform an action or present content related to the wagering game session or whether the audible communication is an indirect comment (e.g., an off-hand remark or background conversation) that the system can utilize to control or present content or to enhance the wagering game experience.
In some embodiments, the system refers to a library of specific words or phrases that indicate direct commands as well as specific words or phrases that indicate indirect comments.
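For illustration only, a sketch of separating direct commands from indirect comments using such word and phrase libraries, together with a simple facing-the-machine flag, might be written as follows; all names and lists here are assumptions rather than the disclosed libraries.

```python
# Hypothetical sketch: classify an utterance as a direct command or an
# indirect comment based on phrase libraries and source orientation.
DIRECT_PHRASES = {"bet max", "show the pay table", "play it", "double my bet"}
INDIRECT_MARKERS = {"i wish", "i wonder", "too bad", "i have no idea"}

def classify(utterance, is_facing_machine):
    text = utterance.lower()
    if any(p in text for p in DIRECT_PHRASES) and is_facing_machine:
        return "direct_command"
    if any(m in text for m in INDIRECT_MARKERS):
        return "indirect_comment"
    return "unclassified"

print(classify("I wonder how I won", is_facing_machine=False))  # indirect_comment
```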
In some embodiments, the system generates responses to address negative player emotions.
In some embodiments, the system generates an automated response that addresses a negative emotion detected via evaluation of the audible communication. A player continuously provides user feedback to gaming events in the form of audible and physical reactions. Much of that user feedback is not intentionally directed to the system but is, nonetheless, communicated. For example, the system may detect an inflection in the vocal quality of an audible communication that indicates a negative tone or emotion of the player associated with game play. In another example, the system may determine that the language of the audible communication indicates a negative perception of wagering game content (e.g., confusion, frustration, disappointment, lack of understanding regarding game functionality, etc.). When the system detects negative user feedback, the system can provide a positive response to counteract the emotional negativity. For instance, the system can present a help tip or suggestion for play strategy if the audible communication indicates that the player is confused. In another example, the system can present an encouraging remark or a replay of a past win if the audible communication indicates disappointment with a lack of winning. In another example, the system can suggest additional content that may be easier to understand or have more entertainment value if an audible communication indicates a lack of comprehension of game mechanics. In yet another example, the system can provide a reward or compensation to lighten a player's mood.
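A minimal sketch of selecting a counteracting response for a detected negative emotion is shown below; the response table is an invented example, not the disclosed response catalogue.

```python
# Hypothetical sketch: choose a positive, counteracting response for a
# detected negative emotion, with a neutral fallback.
NEGATIVE_RESPONSES = {
    "confusion": "Here is a quick tip on how the bonus round is triggered.",
    "disappointment": "Tough break! Here is a replay of your biggest win today.",
    "incomprehension": "You might enjoy this simpler game with a similar theme.",
}

def respond_to_negative(emotion):
    return NEGATIVE_RESPONSES.get(emotion, "Good luck on the next spin!")

print(respond_to_negative("disappointment"))
```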
In some embodiments, the system generates responses in context.
In some embodiments, based on an evaluation of a scenario during a wagering game session, the system generates a response that is customized to the scenario so that the system does not always respond to the same type of audible communication the same way each time. For example, in some embodiments, the system generates an automated response that has a presentation characteristic that is in accordance with a characteristic of the source or a characteristic of the audible communication (e.g., the system detects a language, age, gender, country of origin, dialect, speech pattern, personality, mood, etc., of a player who speaks the audible communication and adapts the response to have a quality that mimics, complements, or in some other way uses the characteristic of the source). In another embodiment, the system generates an automated response with a constructed element of speech (e.g., vocabulary, language, grammar, dialect, speech pattern, etc.) that mirrors the element of speech of the audible communication. The system can parse the meaning of words based on user dialects or information from a user profile. In some examples, the system can automatically translate languages and dialects spoken by the user. In some examples, the system matches dialects to the content. In some examples, the system can detect characteristics of the player such as personality type, mood, gender, age, education, profession, country of origin, ethnicity, marital status, and demographics. The system can store files regarding the player's speech, or other characteristics, for future reference.
In some embodiments, the system generates an automated response that has characteristics of a specific person or personality, such as a celebrity, that the player prefers or that is similar to the player in some way (e.g., based on a player's age, the system responds in the voice of a celebrity who would have been popular in the player's youth). In some examples, the system can generate automated responses using an avatar. In some examples, the avatar's personality can adapt to game history or game characteristics as well as to characteristics of the player or to preferences of the player. In some embodiments, the avatar can be a concierge or a game agent. In some embodiments, the avatar agent acts as a communication facilitator. An avatar can be a representation of the system and/or of the player or other players. In some examples, the avatar can act as a personal agent to the player that performs certain actions in response to player comments and/or that represents the player based on comments from other players (e.g., another player sends a chat message, but the avatar responds saying that the player is busy). In some embodiments, the avatar can respond using voice characteristics that are similar to the player's. In some embodiments, the avatar grows and progresses according to the player's use of the system.
In some embodiments, during a chat session, the system can automatically translate a language of a first individual, who sends the chat message, to the language of a second individual, who receives the chat message.
In some embodiments, the system provides non-monetary incentives, such as incentivizing more vocal interaction with virtual rewards or types of wagering games that have specific features that occur only when the player uses the vocal interaction. Therefore, based on a player's degree of voice interaction, the system can present customization that is specific to each player or degree of interaction during the wagering game session.
In some embodiments, automated responses can vary based on location. For example, in response to the query, “Who's playing onstage tonight?” the system would have a different answer at each casino. In some embodiments, the system includes an operator interface for configuring a library of potential responses for a given facility.
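As a hypothetical sketch of such a per-venue response library (the venue identifiers, the VENUE_RESPONSES table, and the answer_for_venue function are assumptions invented for illustration):

```python
# Hypothetical sketch: look up a venue-specific answer for a spoken query,
# falling back to a generic reply when the facility has no configured entry.
VENUE_RESPONSES = {
    "casino_a": {"who's playing onstage tonight": "The Blue Notes, 8 PM in the main lounge."},
    "casino_b": {"who's playing onstage tonight": "DJ Meridian, 9 PM on the terrace."},
}

def answer_for_venue(venue_id, query):
    key = query.lower().rstrip("?")
    return VENUE_RESPONSES.get(venue_id, {}).get(key, "Let me check with the concierge.")

print(answer_for_venue("casino_a", "Who's playing onstage tonight?"))
```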
In some embodiments, the system presents an offer of a reward to discuss a topic, detects that the audible communication is associated with the topic, and presents the reward. In some examples, the system provides marketing offers, coupons, and compensations as an incentive to get the user to speak and interact with a wagering game machine. In some examples, the incentives can include offers for nearby products or services. In some embodiments, the system offers game rewards such as modified reel symbols or bonus games. In some embodiments, the system sends a communication to a vendor so that the vendor can provide rewards and/or compensations to the player for talking about a particular product or service while at the wagering game machine.
In some examples, the system listens to a chat or reads the text from a chat and responds by inviting others to play or interact with the game. In some embodiments, the system can integrate into the chat a player's voice, a voice of an avatar, or a voice of a character in a game. The system can listen in on a chat conversation and provide contextual suggestions via a chat console.
In some embodiments, the system evaluates communications in context of multiple sources of communication.
In some embodiments, the system detects audible communications from various individuals and evaluates and/or responds to any one or more of the audible communications. For example, the system determines which of a plurality of individuals expresses the audible communication, or by what manner the individual or individuals generate or express audible communications. In some embodiments, the system detects multiple audible communications and prioritizes the communications based on source, content, etc.
In some embodiments, the system generates relevance values for the plurality of sources of the communications (e.g., based on identity, location, position, etc.).
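A brief, hypothetical sketch of weighting utterances by per-source relevance values and handling the highest-weighted utterance first follows; the weights and record layout are assumptions.

```python
# Hypothetical sketch: order detected utterances by a per-source relevance
# weight (seated player highest, distant voices lowest).
SOURCE_WEIGHTS = {"seated_player": 1.0, "nearby_patron": 0.5, "distant": 0.1}

utterances = [
    {"source": "nearby_patron", "text": "What game is this?"},
    {"source": "seated_player", "text": "Show the pay table."},
]

def prioritize(items):
    return sorted(items, key=lambda u: SOURCE_WEIGHTS.get(u["source"], 0.0), reverse=True)

for u in prioritize(utterances):
    print(u["source"], "->", u["text"])  # the seated player is handled first
```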
In some embodiments, the system tracks (e.g., detects and stores) a history of communications made by the plurality of sources and analyzes audible communications in context of the history of communications to determine the meanings of certain communications, to generate customized responses, to better facilitate communications with and between individuals, etc.
In some embodiments, the system evaluates passive, or indirect, characteristics of communication in context.
In some embodiments, the system determines whether the audible communication is a passive communication spoken indirectly (i.e., not spoken directly to a wagering game device as a direct query or command), such as a background comment.
In some embodiments, the system analyzes a history of a player's communication to determine when, and how often, a player makes passive comments. In some embodiments, the system refers to a library of gaming commands and indirect comments. The library includes descriptions of specific words or phrases that indicate commands as well as specific words or phrases that indicate indirect comments.
In some embodiments, the system evaluates at least one characteristic of the one or more audible communications in context of one or more characteristics or conditions associated with the wagering game session, as similarly described previously.
The flow 500 continues at processing block 504 where the system determines an automated response to present based on the one or more audible communications. For instance, the system can passively, or subtly, present content that is related to the indirect audible communication. For instance, if the audible communication is an indirect comment that indicates a preference for content, the system can suggest related gaming content, or incorporate related content into wagering games.
The flow 500 continues at processing block 506, where the system delays a presentation of the automated response based on a determination that the one or more audible communications are indirectly communicated. In some embodiments, the system continually, and subtly, presents relevant information via the wagering game machine in a way that corresponds to the comments. However, the system may delay the presentation of content so that the response is subtle. In other words, the delay can prevent the automated response from appearing as if it is in immediate response to the indirect audible communication. Thus, the system can present relevant information in a way that does not appear to be an obvious response to the player's indirect audible communication. Prior to delaying the presentation of a response, the system determines that the audible communication does not require an immediate response (e.g., if the audible communication is indirect, then the individual is not expecting a direct response and so the system determines that it can delay the response).
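A minimal sketch of deferring the response when the communication is indirect might look like this; the schedule_response name and the delay values are illustrative assumptions.

```python
# Hypothetical sketch: present immediately for direct requests; defer the
# presentation for indirect comments so it does not look like a reaction.
import threading

def schedule_response(present, is_indirect, delay_seconds=90.0):
    if is_indirect:
        threading.Timer(delay_seconds, present).start()
    else:
        present()

schedule_response(lambda: print("Suggesting a racing-themed game..."),
                  is_indirect=True, delay_seconds=2.0)
```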
Some examples of delaying a response may include, but are not limited to, the following:
In response to detecting the meaning of, and emotional expression associated with, the audible communication (e.g., in response to detecting that the player 610 has experienced a near-win and that player 610 is in a negative emotional state), the system attempts to address the negative emotional state with a positive comment and/or reward.
This section describes example operating architectures, systems, networks, etc. and presents structural aspects of some embodiments.
The wagering game system architecture 800 can also include a wagering game server 850 configured to control wagering game content, provide random numbers, and communicate wagering game information, account information, and other information to and from a wagering game machine 860. The wagering game server 850 can include a content controller 851 configured to manage and control content for presentation on the wagering game machine 860. For example, the content controller 851 can generate game results (e.g., win/loss values), including win amounts, for games played on the wagering game machine 860. The content controller 851 can communicate the game results to the wagering game machine 860. The content controller 851 can also generate random numbers and provide them to the wagering game machine 860 so that the wagering game machine 860 can generate game results. The wagering game server 850 can also include a content store 852 configured to contain content to present on the wagering game machine 860. The wagering game server 850 can also include an account manager 853 configured to control information related to player accounts. For example, the account manager 853 can communicate wager amounts, game results amounts (e.g., win amounts), bonus game amounts, etc., to the account server 870. The wagering game server 850 can also include a communication unit 854 configured to communicate information to the wagering game machine 860 and to communicate with other systems, devices and networks. The wagering game server 850 can also include an audible communication module 855 configured to detect audible communications and generate automated responses based on audible communications. The wagering game server 850 can also include a gaming environment module 856 configured to control environmental sounds, lights, etc.
The wagering game system architecture 800 can also include the wagering game machine 860 configured to present wagering games and receive and transmit information to detect and respond to audible communications for gaming. The wagering game machine 860 can include a content controller 861 configured to manage and control content and presentation of content on the wagering game machine 860. The wagering game machine 860 can also include a content store 862 configured to contain content to present on the wagering game machine 860. The wagering game machine 860 can also include an application management module 863 configured to manage multiple instances of gaming applications. For example, the application management module 863 can be configured to launch, load, unload and control applications and instances of applications. The application management module 863 can launch different software players (e.g., a Microsoft® Silverlight™ player, an Adobe® Flash® player, etc.) and manage, coordinate, and prioritize what the software players do. The application management module 863 can also coordinate instances of server applications in addition to local copies of applications. The application management module 863 can control window locations on a wagering game screen or display for the multiple gaming applications. In some embodiments, the application management module 863 can manage window locations on multiple displays including displays on devices associated with and/or external to the wagering game machine 860 (e.g., a top display and a bottom display on the wagering game machine 860, a peripheral device connected to the wagering game machine 860, a mobile device connected to the wagering game machine 860, etc.). The application management module 863 can manage priority or precedence of client applications that compete for the same display area. For instance, the application management module 863 can determine each client application's precedence. The precedence may be static (i.e. set only when the client application first launches or connects) or dynamic. The applications may provide precedence values to the application management module 863, which the application management module 863 can use to establish order and priority. The precedence, or priority, values can be related to tilt events, administrative events, primary game events (e.g., hierarchical, levels, etc.), secondary game events, local bonus game events, advertising events, etc. As each client application runs, it can also inform the application management module 863 of its current presentation state. The applications may provide presentation state values to the application management module 863, which the application management module 863 can use to evaluate and assess priority. Examples of presentation states may include celebration states (e.g., indicates that client application is currently running a win celebration), playing states (e.g., indicates that the client application is currently playing), game starting states (e.g., indicates that the client application is showing an invitation or indication that a game is about to start), status update states (e.g., indicates that the client application is not ‘playing’ but has a change of status that should be annunciated, such as a change in progressive meter values or a change in a bonus game multiplier), idle states (e.g., indicates that the client application is idle), etc. In some embodiments, the application management module 863 can be pre-configurable. 
The system can provide controls and interfaces for operators to control screen layouts and other presentation features for the configuring of the application management module 863. The application management module 863 can communicate with, and/or be a communication mechanism for, a base game stored on a wagering game machine. For example, the application management module 863 can communicate events from the base game such as the base game state, pay line status, bet amount status, etc. The application management module 863 can also provide events that assist and/or restrict the base game, such as providing bet amounts from secondary gaming applications, inhibiting play based on gaming event priority, etc. The application management module 863 can also communicate some (or all) financial information between the base game and other applications including amounts wagered, amounts won, base game outcomes, etc. The application management module 863 can also communicate pay table information such as possible outcomes, bonus frequency, etc.
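For illustration only, ordering competing client applications by precedence and presentation state, in the spirit of the application management module described above, might be sketched as follows; the state ranks and application records are assumptions rather than the module's actual data.

```python
# Hypothetical sketch: sort competing client applications by precedence value,
# breaking ties with a presentation-state rank.
STATE_RANK = {"celebration": 3, "playing": 2, "status_update": 1, "idle": 0}

apps = [
    {"name": "base_game", "precedence": 10, "state": "playing"},
    {"name": "progressive_meter", "precedence": 5, "state": "status_update"},
    {"name": "bonus_app", "precedence": 10, "state": "celebration"},
]

def display_order(applications):
    return sorted(applications,
                  key=lambda a: (a["precedence"], STATE_RANK.get(a["state"], 0)),
                  reverse=True)

print([a["name"] for a in display_order(apps)])  # bonus_app, base_game, progressive_meter
```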
In some embodiments, the application management module 863 can control different types of applications. For example, the application management module 863 can perform rendering operations for presenting applications of varying platforms, formats, environments, programming languages, etc. For example, the application management module 863 can be written in one programming language format (e.g., Javascript, Java, C++, etc.) but can manage, and communicate data from applications that are written in other programming languages or that communicate in different data formats (e.g., Adobe® Flash®, Microsoft® Silverlight™, Adobe® Air™, hyper-text markup language, etc.). The application management module 863 can include a portable virtual machine capable of generating and executing code for the varying platforms, formats, environments, programming languages, etc. The application management module 863 can enable many-to-many messaging distribution and can enable the multiple applications to communicate with each other in a cross-manufacturer environment at the client application level. For example, multiple gaming applications on a wagering game machine may need to coordinate many different types of gaming and casino services events (e.g., financial or account access to run spins on the base game and/or run side bets, transacting drink orders, tracking player history and player loyalty points, etc.).
The wagering game machine 860 can also include an audible communication module 864 configured to detect audible communications and generate automated responses based on audible communications.
The wagering game system architecture 800 can also include a secondary content server 880 configured to provide content and control information for secondary games and other secondary content available on a wagering game network (e.g., secondary wagering game content, promotions content, advertising content, player tracking content, web content, etc.). The secondary content server 880 can provide “secondary” content, or content for “secondary” games presented on the wagering game machine 860. “Secondary” in some embodiments can refer to an application's importance or priority of the data. In some embodiments, “secondary” can refer to a distinction, or separation, from a primary application (e.g., separate application files, separate content, separate states, separate functions, separate processes, separate programming sources, separate processor threads, separate data, separate control, separate domains, etc.). Nevertheless, in some embodiments, secondary content and control can be passed between applications (e.g., via application protocol interfaces), thus becoming, or falling under the control of, primary content or primary applications, and vice versa. In some embodiments, the secondary content can be in one or more different formats, such as Adobe® Flash®, Microsoft® Silverlight™, Adobe® Air™, hyper-text markup language, etc. In some embodiments, the secondary content server 880 can provide and control content for community games, including networked games, social games, competitive games, or any other game that multiple players can participate in at the same time. In some embodiments, the secondary content server 880 can control and present an online website that hosts wagering games. The secondary content server 880 can also be configured to present multiple wagering game applications on the wagering game machine 860 via a wagering game website, or other gaming-type venue accessible via the Internet. The secondary content server 880 can host an online wagering website and/or a social networking website. The secondary content server 880 can include other devices, servers, mechanisms, etc., that provide functionality (e.g., controls, web pages, applications, etc.) that web users can use to connect to a social networking application and/or website and utilize social networking and website features (e.g., communications mechanisms, applications, etc.). In some embodiments, the secondary content server 880 can also host social networking accounts, provide social networking content, control social networking communications, store associated social contacts, etc. The secondary content server 880 can also provide chat functionality for a social networking website, a chat application, or any other social networking communications mechanism. In some embodiments, the secondary content server 880 can utilize player data to determine marketing promotions that may be of interest to a player account. The secondary content server 880 can also analyze player data and generate analytics for players, group players into demographics, integrate with third party marketing services and devices, etc. The secondary content server 880 can also provide player data to third parties that can use the player data for marketing. In some embodiments, the secondary content server 880 can provide one or more social networking communication mechanisms that publish (e.g., post, broadcast, etc.) a message to a mass (e.g., to multiple people, users, social contacts, accounts, etc.).
The social networking communication mechanism can publish the message to the mass simultaneously. Examples of the published message may include, but not be limited to, a blog post, a mass message post, a news feed post, a profile status update, a mass chat feed, a mass text message broadcast, a video blog, a forum post, etc. Multiple users and/or accounts can access the published message and/or receive automated notifications of the published message.
Each component shown in the wagering game system architecture 800 is shown as a separate and distinct element connected via a communications network 822. However, some functions performed by one component could be performed by other components. For example, the wagering game server 850 can also be configured to perform functions of the application management module 863, the audible communication module 864, and other network elements and/or system devices. Furthermore, the components shown may all be contained in one device, but some, or all, may be included in, or performed by, multiple devices.
The wagering game machines described herein (e.g., wagering game machine 860) can take any suitable form, such as floor standing models, handheld mobile units, bar-top models, workstation-type console models, surface computing machines, etc. Further, wagering game machines can be primarily dedicated for use in conducting wagering games, or can include non-dedicated devices, such as mobile phones, personal digital assistants, personal computers, etc.
In some embodiments, wagering game machines and wagering game servers work together such that wagering game machines can be operated as thin, thick, or intermediate clients. For example, one or more elements of game play may be controlled by the wagering game machines (client) or the wagering game servers (server). Game play elements can include executable game code, lookup tables, configuration files, game outcome, audio or visual representations of the game, game assets or the like. In a thin-client example, the wagering game server can perform functions such as determining game outcome or managing assets, while the wagering game machines can present a graphical representation of such outcome or asset modification to the user (e.g., player). In a thick-client example, the wagering game machines can determine game outcomes and communicate the outcomes to the wagering game server for recording or managing a player's account.
In some embodiments, either the wagering game machines (client) or the wagering game server(s) can provide functionality that is not directly related to game play. For example, account transactions and account rules may be managed centrally (e.g., by the wagering game server(s)) or locally (e.g., by the wagering game machines). Other functionality not directly related to game play may include power management, presentation of advertising, software or firmware updates, system quality or security checks, etc.
Furthermore, the wagering game system architecture 800 can be implemented as software, hardware, any combination thereof, or other forms of embodiments not listed. For example, any of the network components (e.g., the wagering game machines, servers, etc.) can include hardware and machine-readable storage media including instructions for performing the operations described herein.
The memory unit 930 may also include an I/O scheduling policy unit and I/O schedulers. The memory unit 930 can store data and/or instructions, and may comprise any suitable memory, such as a dynamic random access memory (DRAM), for example. The computer system 900 may also include one or more suitable integrated drive electronics (IDE) drive(s) 908 and/or other suitable storage devices. A graphics controller 904 controls the display of information on a display device 906, according to some embodiments.
The ICH 924 provides an interface to I/O devices or peripheral components for the computer system 900. The ICH 924 may comprise any suitable interface controller to provide for any suitable communication link to the processor unit 902, memory unit 930 and/or to any suitable device or component in communication with the ICH 924. The ICH 924 can provide suitable arbitration and buffering for each interface.
For one embodiment, the ICH 924 provides an interface to the one or more IDE drives 908, such as a hard disk drive (HDD) or compact disc read only memory (CD ROM) drive, or to suitable universal serial bus (USB) devices through one or more USB ports 910. For one embodiment, the ICH 924 also provides an interface to a keyboard 912, selection device 914 (e.g., a mouse, trackball, touchpad, etc.), CD-ROM drive 918, and one or more suitable devices through one or more firewire ports 916. For one embodiment, the ICH 924 also provides a network interface 920 through which the computer system 900 can communicate with other computers and/or devices.
The computer system 900 may also include a machine-readable storage medium that stores a set of instructions (e.g., software) embodying any one, or all, of the methodologies to detect and respond to audible communications for gaming. Furthermore, software can reside, completely or at least partially, within the memory unit 930 and/or within the processor unit 902. The computer system 900 can also include an audible communication module 937. The audible communication module 937 can process communications, commands, or other information, to detect and respond to audible communications for gaming. In some embodiments, the computer system 900 includes an environmental tracking unit 931 that includes microphones, cameras, sensors, or other devices used to capture sounds, images, or other characteristics of an environment in which the computer system 900 is situated. For example, the environmental tracking unit 931 can record sounds and images associated with individuals that make audible communications. Any component of the computer system 900 can be implemented as hardware, firmware, and/or machine-readable storage media including instructions for performing the operations described herein.
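Purely as a hedged illustration of the kind of processing the audible communication module 937 could perform, the Python sketch below maps a transcribed utterance to a gaming response. Audio capture and speech recognition, which the environmental tracking unit 931 might supply, are stubbed out, and all function names, keyword lists, and responses are hypothetical.

```python
# Hypothetical sketch: keyword-driven handling of a transcribed utterance.
# Real embodiments could capture audio via microphones in the environmental
# tracking unit and apply a speech-recognition engine; both are stubbed here.

COMMAND_RESPONSES = {
    "max bet": "placing maximum wager",
    "spin": "starting the reels",
    "help": "presenting help content for the current game",
}

FRUSTRATION_WORDS = {"confusing", "stuck", "frustrated"}


def capture_utterance() -> str:
    # Stand-in for audio capture plus speech-to-text.
    return "this bonus round is confusing"


def respond_to_utterance(utterance: str) -> str:
    text = utterance.lower()
    for phrase, action in COMMAND_RESPONSES.items():
        if phrase in text:
            return action                      # direct command -> perform the action
    if FRUSTRATION_WORDS & set(text.split()):
        return "offering a tutorial or suggesting alternative content"
    return "no action"                         # passive or unrelated speech


if __name__ == "__main__":
    print(respond_to_utterance(capture_utterance()))
```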
The CPU 1026 is also connected to an input/output (“I/O”) bus 1022, which can include any suitable bus technologies, such as an AGTL+ frontside bus and a PCI backside bus. The I/O bus 1022 is connected to a payout mechanism 1008, primary display 1010, secondary display 1012, value input device 1014, player input device 1016, information reader 1018, and storage unit 1030. The player input device 1016 can include the value input device 1014 to the extent the player input device 1016 is used to place wagers. The I/O bus 1022 is also connected to an external system interface 1024, which is connected to external systems 1004 (e.g., wagering game networks). The external system interface 1024 can include logic for exchanging information over wired and wireless networks (e.g., 802.11g transceiver, Bluetooth transceiver, Ethernet transceiver, etc.).
The I/O bus 1022 is also connected to a location unit 1038. The location unit 1038 can create player information that indicates the wagering game machine's location/movements in a casino. In some embodiments, the location unit 1038 includes a global positioning system (GPS) receiver that can determine the wagering game machine's location using GPS satellites. In other embodiments, the location unit 1038 can include a radio frequency identification (RFID) tag that can determine the wagering game machine's location using RFID readers positioned throughout a casino. Some embodiments can use a GPS receiver and RFID tags in combination, while other embodiments can use other suitable methods for determining the wagering game machine's location. Although not shown in
In some embodiments, the wagering game machine 1006 can include additional peripheral devices and/or more than one of each component shown in
In some embodiments, the wagering game machine 1006 includes an audible communication module 1037. The audible communication module 1037 can process communications, commands, or other information to detect and respond to audible communications for gaming. In some embodiments, the wagering game machine 1006 includes an environmental tracking unit 1031 that includes microphones, cameras, sensors, or other devices used to capture sounds, images, or other characteristics of an environment in which the wagering game machine 1006 is situated. For example, the environmental tracking unit 1031 can record sounds and images associated with individuals that make audible communications.
Furthermore, any component of the wagering game machine 1006 can include hardware, firmware, and/or machine-readable storage media including instructions for performing the operations described herein.
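Referring back to the location unit 1038 described above, the following assumption-laden Python sketch illustrates one way a location unit might prefer a GPS fix when one is available and otherwise fall back to the most recently seen RFID reader zone. The class, method, and zone names are hypothetical and are not taken from the figures.

```python
from typing import Optional, Tuple


class LocationUnit:
    """Hypothetical location unit combining a GPS receiver and an RFID tag."""

    def __init__(self) -> None:
        self.last_rfid_zone: Optional[str] = None

    def gps_fix(self) -> Optional[Tuple[float, float]]:
        # Stub: indoors a GPS fix is often unavailable, so report none.
        return None

    def rfid_seen(self, reader_zone: str) -> None:
        # Called when an RFID reader in the casino detects the machine's tag.
        self.last_rfid_zone = reader_zone

    def current_location(self) -> str:
        fix = self.gps_fix()
        if fix is not None:
            return f"lat/long {fix[0]:.5f}, {fix[1]:.5f}"
        if self.last_rfid_zone is not None:
            return f"zone {self.last_rfid_zone}"
        return "unknown"


if __name__ == "__main__":
    unit = LocationUnit()
    unit.rfid_seen("slot-bank-7")
    print(unit.current_location())  # -> "zone slot-bank-7"
```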
The wagering game machine 1160 illustrated in
Input devices, such as the touch screen 1118, buttons 1120, a mouse, a joystick, a gesture-sensing device, a voice-recognition device, and a virtual input device, accept player input(s) and transform the player input(s) to electronic data signals indicative of the player input(s), which correspond to an enabled feature for such input(s) at a time of activation (e.g., pressing a “Max Bet” button or soft key to indicate a player's desire to place a maximum wager to play the wagering game). The input(s), once transformed into electronic data signals, are output to a CPU for processing. The electronic data signals are selected from a group consisting essentially of an electrical current, an electrical voltage, an electrical charge, an optical signal, an optical element, a magnetic signal, and a magnetic element.
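As a minimal, non-limiting sketch of the input transformation just described, the Python example below shows a raw player input being converted into a data record that captures the feature enabled for that input at the time of activation (e.g., a “Max Bet” press). The InputSignal and InputMapper names and the binding table are hypothetical.

```python
from dataclasses import dataclass
import time


@dataclass
class InputSignal:
    """Electronic representation of a player input, ready for CPU processing."""
    source: str       # e.g., "button" or "touch_screen"
    feature: str      # feature enabled for this input at the time of activation
    timestamp: float


class InputMapper:
    def __init__(self) -> None:
        # Which feature each physical input is currently bound to;
        # bindings can change as the game state changes.
        self.bindings = {"button_1": "max_bet", "button_2": "spin"}

    def on_press(self, input_id: str) -> InputSignal:
        feature = self.bindings.get(input_id, "unassigned")
        return InputSignal(source="button", feature=feature, timestamp=time.time())


if __name__ == "__main__":
    signal = InputMapper().on_press("button_1")
    print(signal)  # InputSignal(source='button', feature='max_bet', ...)
```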
Embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, embodiments of the inventive subject matter may take the form of a computer program product embodied in any tangible medium of expression having computer readable program code embodied in the medium. The described embodiments may be provided as a computer program product that may include a machine-readable storage medium having stored thereon instructions, which may be used to program a computer system to perform a process according to embodiment(s), whether presently described or not, because every conceivable variation is not enumerated herein. A machine-readable storage medium includes any mechanism that stores information in a form readable by a machine (e.g., a wagering game machine, computer, etc.). For example, machine-readable storage media includes read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media (e.g., CD-ROM), flash memory machines, erasable programmable memory (e.g., EPROM and EEPROM), etc. Some embodiments of the invention can also include machine-readable signal media, such as any media suitable for transmitting software over a network.
This detailed description refers to specific examples in the drawings and illustrations. These examples are described in sufficient detail to enable those skilled in the art to practice the inventive subject matter. These examples also serve to illustrate how the inventive subject matter can be applied to various purposes or embodiments. Other embodiments are included within the inventive subject matter, as logical, mechanical, electrical, and other changes can be made to the example embodiments described herein. Features of various embodiments described herein, however essential to the example embodiments in which they are incorporated, do not limit the inventive subject matter as a whole, and any reference to the invention, its elements, operation, and application is not limiting as a whole, but serves only to define these example embodiments. This detailed description does not, therefore, limit embodiments, which are defined only by the appended claims. Each of the embodiments described herein is contemplated as falling within the inventive subject matter, which is set forth in the following claims.