The present disclosure relates generally to amusement parks. More specifically, embodiments of the present disclosure relate to systems and methods utilized to provide amusement park experiences.
Amusement parks and other entertainment venues contain, among many other attractions, animated characters that interact with guests. For example, the animated characters may walk around the amusement park, provide entertainment, and speak to the guests. Certain animated characters may include a performer in a costume with an animated character head that covers the performer's face. With the increasing sophistication and complexity of attractions, and the corresponding increase in expectations among guests, more creative animated character head systems and methods are needed to provide an interactive and personalized experience for guests.
Certain embodiments commensurate in scope with the originally claimed subject matter are summarized below. These embodiments are not intended to limit the scope of the disclosure, but rather these embodiments are intended only to provide a brief summary of certain disclosed embodiments. Indeed, the present disclosure may encompass a variety of forms that may be similar to or different from the embodiments set forth below.
In an embodiment, an interactive system includes one or more processors that are configured to receive a first signal indicative of an activity of a user within an environment and to receive a second signal indicative of the user approaching an animated character head. The one or more processors are also configured to provide information related to the activity of the user to a base station control system associated with the animated character head in response to receipt of the second signal to facilitate a personalized interaction between the animated character head and the user.
In an embodiment, an interactive system includes one or more identification devices configured to detect an identifier supported by a wearable device of a user. The interactive system also includes one or more processors configured to monitor activities of the user within an environment based on respective signals received from the one or more identification devices. The one or more processors are further configured to output a respective signal based on the activities of the user to an animated character head, thereby causing the animated character head to present an animation that is relevant to the activities of the user to facilitate a personalized interaction between the animated character head and the user.
In an embodiment, a method includes receiving, at one or more processors, a signal indicative of a user approaching an animated character head. The method also includes accessing, using the one or more processors, information related to a prior activity of the user. The method further includes providing, using the one or more processors, the information related to the prior activity of the user to a base station control system associated with the animated character head in response to receipt of the signal to facilitate a personalized interaction between the animated character head and the user.
These and other features, aspects, and advantages of the present disclosure will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
One or more specific embodiments of the present disclosure will be described below. In an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
Amusement parks feature a wide variety of entertainment, such as amusement park rides, games, performance shows, and animated characters. However, many of the forms of entertainment do not vary based upon a guest's previous activities (e.g., experiences and/or actions). For example, an animated character may greet each guest in a similar manner. Some guests may prefer a unique or customized interaction with the animated character that is different for each guest, different during each interaction, and/or that indicates recognition of the guest's previous activities. Accordingly, the present embodiments relate to an interactive system that monitors a guest's activities within an amusement park and provides an output to control or influence an animated character's interaction with the guest based at least in part on the guest's previous activities.
More particularly, the present disclosure relates to an interactive system that uses an identification system, such as a radio-frequency identification (RFID) system, to monitor a guest's activities within an amusement park. In an embodiment, the guest may wear or carry a device that supports an identifier, such as an RFID tag. When the guest brings the device within a range of a reader (e.g., RFID transceiver) positioned within the amusement park, the reader may detect the identifier and provide a signal to a computing system to enable the computing system to monitor and to record (e.g., in a database) the guest's activities within the amusement park. For example, the reader may be positioned at an exit of a ride (e.g., a roller coaster or other similar attraction), and the reader may detect the identifier in the device as the guest exits the ride. The reader may provide a signal indicating that the device was detected proximate to the exit of the ride to the computing system, the computing may then determine that the guest completed the ride based on the signal, and the computing system may then store the information (e.g., that the guest completed the ride) in the database.
Subsequently, the guest may visit an animated character, which may be located in another portion of the amusement park. In an embodiment, the animated character includes an animated character head worn by a performer. An additional reader (e.g., RFID transceiver) positioned proximate to the animated character (e.g., coupled to the animated character, carried by the performer, coupled to a base station control system, or coupled to a stationary structure or feature within the amusement park) may detect the identifier in the device as the guest approaches the animated character. The additional reader may provide a signal to the computing system indicating that the device was detected proximate to the animated character, the computing system may then determine that the guest is approaching the animated character based on the signal, the computing system may then access the information stored in the database, and the computing system may then provide an output to control or influence the animated character's interaction with the guest based on the information. For example, the computing system may be in communication with the base station control system, which may be a tablet or other computing device (e.g., mobile phone) operated by a handler who travels with and/or provides support to the performer wearing the animated character head. In some such cases, the computing system may provide an output to the base station control system that causes display of the information (e.g., the guest's score in a game, the rides completed by the guest) and/or a recommended interaction (e.g., congratulate the guest on winning a game) on a display screen of the base station control system. The handler may then select an appropriate phrase and/or gesture for the animated character, such as by providing an input at the base station control system (e.g., making a selection on a touch screen) that causes the animated character to speak the phrase and/or to perform the gesture. In an embodiment, the handler may suggest an appropriate phrase and/or gesture for the animated character, such as by speaking (e.g., via a two-way wireless communication system) to the performer wearing the animated character head.
Each guest may have had different, respective experiences and/or carried out different, respective actions in the amusement park. For example, one guest may experience a ride, earn virtual points by playing a game, and eat at a restaurant, while another guest may experience a different ride, earn a different number of virtual points by playing the game, and eat at a different restaurant. The disclosed interactive system may enable the animated character to carry out a unique, personalized interaction with each guest by speaking or gesturing based on each guest's particular activities. To facilitate discussion, a user of the interactive system is described as being a guest at an amusement park and the interactive system is described as being implemented in the amusement park; however, it should be appreciated that the interactive system may be implemented in other environments. Furthermore, the disclosed embodiments refer to an animated character head worn by a performer; however, it should be appreciated that the interactive system may additionally or alternatively include and affect operation of other components, such as objects (e.g., cape, hat, glasses, armor, sword, button) held, worn, or carried by the performer.
With the foregoing in mind, an embodiment of the interactive system 10 is described in detail below.
In one embodiment, the animated character system 14 includes an animated character head 30 that may be worn by a performer and that may be configured to emit sounds (e.g., speak phrases) and/or carry out various gestures (e.g., eye blinks, jaw motions, lip shapes). The animated character system 14 may also include a base station control system 32 (e.g., remote control system) that may be operated by a handler who travels with and/or provides support to the performer wearing the animated character head 30. In an embodiment, the animated character head 30 and the base station control system 32 are communicatively coupled, such that an input by the handler at the base station control system 32 causes the animated character head 30 to emit a certain sound or perform a certain gesture.
More particularly, in one embodiment, the animated character head 30 may include a controller 34 (e.g., electronic controller) with one or more processors 36 and one or more memory devices 38. In an embodiment, the memory 38 may be configured to store instructions, data, and/or information, such as a library of animations (e.g., database of available animations, including sounds and/or gestures, and corresponding control instructions for effecting the animations) for the animated character head 30. In an embodiment, the processor 36 may be configured to receive an input (e.g., signal from the base station control system 32), to identify an appropriate animation from the library of animations (e.g., a selected animation) based on the received input, and/or to provide one or more appropriate control signals to a display 42, a speaker 44, an actuator 46, and/or a light source 48 based on the received input and/or in accordance with the selected animation. In this way, the animated character head 30 may enable the handler to control the speech and/or gestures of the animated character head 30. It should be appreciated that the library of animations may include separate sounds or small sound clips (e.g., single word, beep, buzz), separate gestures (e.g., smile, frown, eye blink), and/or combinations of multiple sounds and gestures (e.g., a greeting that includes multiple words in combination with a motion profile that includes smile and eye movements). For example, the base station control system 32 may present the handler with a selection menu of available animations for the animated character head 30, and the handler may be able to provide an input at the base station control system 32 to select a smile and then select a particular greeting. Subsequently, the processor 36 of the animated character head 30 may receive a signal indicative of the handler's input from the base station control system 32, access the selected animations from the library, and control the actuators 46 to effect the smile and the particular greeting.
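For purposes of illustration only, the following minimal Python sketch suggests how a controller might map a selection from the library of animations to outputs for the speaker, actuators, and light source. The class and attribute names (e.g., `Animation`, `AnimatedHeadController`, `play`) are hypothetical assumptions and are not part of the disclosed embodiments.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class Animation:
    """Hypothetical record for one entry in the library of animations."""
    name: str
    sound_clip: Optional[str] = None                                   # stand-in for audio sent to the speaker
    actuator_targets: Dict[str, float] = field(default_factory=dict)   # actuator id -> target position
    light_pattern: Optional[str] = None                                # stand-in for a light-source pattern


class AnimatedHeadController:
    """Hypothetical sketch of a head controller that selects and plays animations."""

    def __init__(self, library):
        self.library = {animation.name: animation for animation in library}

    def play(self, selection):
        """Translate a selection received from the base station into control outputs."""
        animation = self.library.get(selection)
        if animation is None:
            return  # Unknown selection; ignore rather than drive the hardware unpredictably.
        if animation.sound_clip:
            print(f"speaker <- {animation.sound_clip}")
        for actuator_id, position in animation.actuator_targets.items():
            print(f"actuator {actuator_id} <- {position}")
        if animation.light_pattern:
            print(f"light source <- {animation.light_pattern}")


# Example: the handler selects a smile and then a greeting.
library = [
    Animation("smile", actuator_targets={"lip_left": 0.8, "lip_right": 0.8}),
    Animation("greeting", sound_clip="hello_guest.wav", light_pattern="eyes_twinkle"),
]
controller = AnimatedHeadController(library)
controller.play("smile")
controller.play("greeting")
```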
The animated character head 30 may include various features to facilitate the techniques disclosed herein. For example, the animated character head 30 may include one or more sensors 40 that are configured to monitor the performer and/or to receive inputs from the performer. The one or more sensors 40 may include eye tracking sensors that are configured to monitor eye movement of the performer, machine vision sensors that are configured to monitor movement of the performer's face, microphones or audio sensors that are configured to receive spoken inputs or other audible inputs from the performer, physical input sensors (e.g., switch, button, motion sensors, foot controls, or wearable input device, such as a myo input, ring input, or gesture gloves) that are configured to receive a physical or manual input from the performer, or any combination thereof. The inputs may be processed by the processor 36 to select an animation from the library of animations stored in the memory 38 and/or to otherwise affect the animations presented via the animated character head 30. For example, certain inputs via the one or more sensors 40 may veto or cancel a selection made by the handler and/or certain inputs may initiate a particular animation.
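As a further illustrative sketch only, a performer input captured by the sensors might veto or replace a pending handler selection; the arbitration function below is an assumption about one possible arrangement, not the disclosed implementation.

```python
def resolve_selection(handler_selection, performer_input=None):
    """Arbitrate between the handler's pending selection and a performer input.

    A performer "veto" cancels the pending animation; any other performer input
    (e.g., captured via eye tracking or a gesture glove) replaces the selection.
    """
    if performer_input == "veto":
        return None
    return performer_input or handler_selection


# The handler queued a greeting, but the performer vetoes it mid-interaction.
print(resolve_selection("greeting", performer_input="veto"))  # -> None
print(resolve_selection("greeting"))                          # -> "greeting"
```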
The actuators 46 may be any suitable actuators, such as electromechanical actuators (e.g., linear actuator, rotary actuator). The actuators 46 may be located inside the animated character head 30 and be configured to adjust certain features or portions of the animated character head 30 (e.g., the eyes, eyebrows, cheeks, mouth, lips, ears, light features). For example, a rotary actuator may be positioned inside the animated character head 30 along the outer cusps of the lips of the animated character head 30 to cause the face of the animated character head 30 to smile in response to a control signal (e.g., from the processor 36). As a further example, the animated character head 30 may contain an electric linear actuator that drives the position of the eyebrows (e.g., to frown) of the animated character head 30 in response to a control signal (e.g., from the processor 36).
As shown, the animated character head 30 may include the light source 48, and the duration, brightness, color, and/or polarity of the light emitted from the light source 48 may be controlled based on a control signal (e.g., from the processor 36). In an embodiment, the light source 48 may be configured to project light onto a screen or other surface of the animated character head 30, such as to display a still image, a moving image (e.g., a video), or other visible representation of facial features or gestures on the animated character head 30. In some embodiments, the actuators 46 and/or the light source 48 may enable the animated character head 30 to provide any of a variety of projected facial features or gestures, animatronic facial features or gestures, or combinations thereof.
In an embodiment, the processor 36 may instruct the display 42 to show an indication of available animations (e.g., a list of animations stored in the library in the memory 38), an indication of the selected animation (e.g., selected by the processor 36 from the library in the memory 38 based on an input from the base station control system 32), and/or other information (e.g., information about the user 26, such as prior activities; recommended animations) for visualization by the performer wearing the animated character head 30. For example, in operation, the display 42 may provide a list of available animations, and the one or more sensors 40 may obtain an input from the performer (e.g., an eye tracking sensor may enable the performer to provide the input with certain eye movements) to enable the performer to scroll through the list of available animations and/or to select an animation from the list of available animations. In an embodiment, a selected animation may be shown on the display 42, and the selected animation may be confirmed, changed, modified, switched, delayed, or deleted by the performer via various inputs to the one or more sensors 40 (e.g., by speaking into a microphone or actuating a physical input sensor), thereby enabling efficient updates by the performer during interactions with guests. It should be appreciated that the performer may not have control over the selections, and thus, may not be able to input a selection or change the selection made by the handler via the one or more sensors 40, for example.
The display 42 may be utilized to provide various other information. For example, in some embodiments, a camera 50 (e.g., coupled to or physically separate from the animated character head 30) may be provided to obtain images (e.g., still or moving images, such as video) of the user 26, the surrounding environment, and/or the currently playing animation (e.g., current movements or features of the animated character head 30), which may be relayed to the animated character head 30 (e.g., via wireless communication devices, such as transceivers) for display via the display 42 to provide information and/or feedback to the performer. In an embodiment, the display 42 may be part of augmented or virtual reality glasses worn by the performer.
In an embodiment, the animated character head 30 may include one or more status sensors 52 configured to monitor a component status and/or a system status (e.g., to determine whether a performed animation does not correspond to the selected animation), and an indication of the status may be provided to the performer via the display 42 and/or to the handler via the base station control system 32. For example, a status sensor 52 may be associated with each actuator 46 and may be configured to detect a position and/or movement of the respective actuator 46, which may be indicative of whether the actuator 46 is functioning properly (e.g., moving in an expected way based on the selected animation).
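A minimal sketch of such a status check is shown below, comparing the feedback from a status sensor against the motion expected for the selected animation; the tolerance value and function names are assumptions used only for illustration.

```python
def actuator_ok(expected_position, measured_position, tolerance=0.05):
    """Return True if the actuator reached (approximately) its commanded position."""
    return abs(expected_position - measured_position) <= tolerance


def check_animation_status(expected_positions, measured_positions):
    """Compare commanded and measured positions for each actuator of one animation.

    Returns the actuator ids that did not move as expected, which could be reported
    to the performer's display and to the handler's base station control system.
    """
    return [
        actuator_id
        for actuator_id, expected in expected_positions.items()
        if not actuator_ok(expected, measured_positions.get(actuator_id, float("nan")))
    ]


# Example: the "smile" animation commanded both lip actuators, but one did not respond.
faulty = check_animation_status(
    {"lip_left": 0.8, "lip_right": 0.8},
    {"lip_left": 0.79, "lip_right": 0.1},
)
print(faulty)  # -> ["lip_right"]
```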
The processor 36 may execute instructions stored in the memory 38 to perform the operations disclosed herein. As such, in an embodiment, the processor 36 may include one or more general purpose microprocessors, one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), or any combination thereof. Additionally, the memory 38 may be a tangible, non-transitory, computer-readable medium that stores instructions executable by and data to be processed by the processor 36. Thus, in some embodiments, the memory 38 may include random access memory (RAM), read only memory (ROM), rewritable non-volatile memory, flash memory, hard drives, optical discs, and the like.
The base station control system 32 may include various features to facilitate the techniques disclosed herein. In an embodiment, the handler may utilize an input device 60 (e.g., a touch screen) at the base station control system 32 to provide an input and/or to select animations. In such cases, the handler's selections and/or other data may be transmitted wirelessly or through a wired connection to the animated character head 30 via the communication devices 62, 64. In an embodiment, the handler receives system status information (e.g., an indication of component failure as detected by the status sensors 52, completed animations, images from the camera 50) from the animated character head 30. In an embodiment, if a particular actuator 46 is not functioning properly, animation selections that rely on the particular actuator 46 may be removed from the list of available animations and/or otherwise made inaccessible for selection by the handler.
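For illustration, the handler's selection menu might be filtered so that animations depending on a faulty actuator are withheld; the data shapes in the sketch below are hypothetical and not part of the disclosed embodiments.

```python
def available_animations(animation_requirements, faulty_actuators):
    """Return the animations whose required actuators are all functioning.

    animation_requirements maps an animation name to the set of actuator ids it uses;
    faulty_actuators is the set of actuator ids flagged by the status sensors.
    """
    return [
        name
        for name, required in animation_requirements.items()
        if not (required & faulty_actuators)
    ]


requirements = {
    "smile": {"lip_left", "lip_right"},
    "eye_blink": {"eyelid_left", "eyelid_right"},
    "greeting": set(),  # sound only, no actuators required
}
# With the right lip actuator flagged as faulty, "smile" is withheld from the menu.
print(available_animations(requirements, {"lip_right"}))  # -> ["eye_blink", "greeting"]
```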
In an embodiment, the animated character head 30 and the base station control system 32 may include audio communication devices 68, 70 (e.g., a headset or other devices having a microphone and/or a speaker) that enable the performer and the handler to communicate (e.g., verbally communicate via one-way or two-way communication). In such cases, the handler may be able to verbally inform the performer of the handler's current selection, the handler's next selection, information about the user 26, or the like. Additionally, the performer may be able to request a particular animation, indicate a preference to cancel a selected animation, or the like.
In the depicted embodiment, a controller 72 of the base station control system 32 contains a processor 74 that may execute instructions stored in the memory 76 to perform operations, such as receiving, accessing, and/or displaying a selection menu of available animations for the animated character head 30 on a display 66 (which may also operate as the input device 60), providing a signal indicative of a selected animation to the animated character head 30, or the like. As such, in an embodiment, the processor 74 may include one or more general purpose microprocessors, one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), or any combination thereof. Additionally, the memory 76 may be a tangible, non-transitory, computer-readable medium that stores instructions executable by and data to be processed by the processor 74. Thus, in some embodiments, the memory 76 may include random access memory (RAM), read only memory (ROM), rewritable non-volatile memory, flash memory, hard drives, optical discs, and the like.
Furthermore, the communication devices 62, 64 may enable the controllers 34, 72 to interface with one another and/or with various other electronic devices, such as the components in the identification system 12. For example, the communication devices 62, 64 may enable the controllers 34, 72 to communicatively couple to a network, such as a personal area network (PAN), a local area network (LAN), and/or a wide area network (WAN). As noted above, the base station control system 32 may also include the display 66 to enable display of information, such as the selection menu of animations, completed animations, the system status as detected by the status sensors 52, external images obtained by the camera 50, or the like.
In an embodiment, the animated character system 14 is configured to operate independently of or without the identification system 12. For example, at least at certain times, the handler and/or the performer may provide inputs to play various animations on the animated character head 30 to interact with the user 26 without any information regarding the user's 26 previous activities within the amusement park. In an embodiment, at least at certain other times, the handler and/or the performer may receive information about the user's 26 previous activities within the amusement park from the identification system 12 (e.g., information that the user 26 completed a ride, earned points in a game, or visited an attraction), and the information may be utilized to provide a unique, personalized interactive experience for the user 26. For example, the identification system 12 may provide the information for visualization on one or both of the displays 42, 66, or the identification system 12 may provide the information for visualization by the handler on the display 66 and the handler may be able to verbally communicate the information to the performer using the audio communication devices 68, 70. The information may enable the handler to select a more appropriate animation for the user 26, such as a greeting in which the animated character head 30 congratulates the user 26 on an achievement in a game or in which the animated character head 30 asks the user 26 if the user 26 enjoyed a recent ride.
Additionally or alternatively, the identification system 12 and/or another processing component of the interactive system 10 (e.g., the processor 74) may determine one or more relevant animations for the user 26 based on the information about the user's 26 previous activities within the amusement park. In some such cases, the identification system 12 and/or the other processing component may provide a recommendation to select or to play the one or more relevant animations. The recommendation may be provided by highlighting (e.g., with color, font size or style, or position on the screen or in the menu) the one or more relevant animations on one or both of the displays 42, 66, thereby facilitating selection of the one or more relevant animations. Additionally or alternatively, the identification system 12 may cause the base station control system 32 to select and/or the animated character head 30 to play a particular animation (e.g., if the user 26 recently completed the ride, the signal from the identification system 12 received at the processor 74 may cause selection of a greeting related to the ride, and the selection may or may not be overridden or changed by the handler and/or the performer).
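One way such relevance-based recommendation could be sketched is shown below; the topic-matching heuristic and the names used are assumptions for illustration only, not the claimed method of determining relevant animations.

```python
def rank_animations(animations, user_activities):
    """Order animations so those referencing the user's recent activities come first.

    Each animation is a (name, topic) pair; an animation is considered relevant if its
    topic appears in the recorded activities (e.g., "ride", "game", "restaurant").
    """
    recent_topics = {topic for _, topic in user_activities}
    relevant = [name for name, topic in animations if topic in recent_topics]
    generic = [name for name, topic in animations if topic not in recent_topics]
    return relevant + generic  # relevant animations could be highlighted on the display


animations = [
    ("generic greeting", "any"),
    ("congratulate game score", "game"),
    ("ask about roller coaster", "ride"),
]
user_activities = [("10:32", "ride"), ("11:05", "game")]
print(rank_animations(animations, user_activities))
```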
More particularly, the identification system 12 operates to monitor the user's 26 activities within an amusement park. In an embodiment, the user 26 may wear or carry the wearable device 24 that supports the identifier 22. When the user 26 brings the wearable device 24 within an area proximate to the reader 20 (e.g., within a reading range of the reader 20), the reader 20 may detect the identifier 22 and provide a signal to the computing system 16 to enable the computing system 16 to monitor and to record (e.g., in the one or more databases 18) the user's 26 activities within the amusement park. For example, one reader 20 may be positioned at an exit of a ride (e.g., a roller coaster or other similar attraction), and the reader may detect the identifier 22 in the wearable device 24 as the user 26 exits the ride. The reader 20 may provide a signal indicating that the wearable device 24 was detected proximate to the exit of the ride to the computing system 16, the computing system 16 may then determine that the user completed the ride based on the signal, and the computing system 16 may then store the information (e.g., that the user completed the ride) in the one or more databases 18. In this way, the identification system 12 may monitor the various activities of the user 26 as the user 26 travels through the amusement park.
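The following minimal sketch illustrates, under stated assumptions, how such detections might be recorded; an in-memory SQLite table stands in for the one or more databases 18, and the function and column names are hypothetical.

```python
import sqlite3
from datetime import datetime, timezone


def record_activity(connection, identifier, activity):
    """Store one detected activity (e.g., 'completed the roller coaster') for a guest."""
    connection.execute(
        "INSERT INTO activities (identifier, activity, detected_at) VALUES (?, ?, ?)",
        (identifier, activity, datetime.now(timezone.utc).isoformat()),
    )


# Example: a reader at the ride exit reports that wearable tag "tag-7F2A" was detected.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE activities (identifier TEXT, activity TEXT, detected_at TEXT)")
record_activity(db, "tag-7F2A", "completed the roller coaster")
print(db.execute("SELECT * FROM activities").fetchall())
```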
Subsequently, the user 26 may visit the animated character head 30, which may be located in another portion of the amusement park. An additional reader 20 positioned proximate to the animated character head 30 (e.g., coupled to and/or disposed inside the animated character head 30, carried by the performer, coupled to the base station control system 32, or coupled to a stationary structure or feature within the amusement park) may detect the identifier 22 in the wearable device 24 as the user 26 approaches the animated character head 30. The additional reader 20 may provide a signal indicating that the wearable device 24 was detected proximate to the animated character head 30 to the computing system 16 (e.g., via a wireless or wired connection), the computing system 16 may then determine that the user 26 is approaching the animated character head 30 based on the signal, the computing system 16 may then access the information stored in the one or more databases 18, and the computing system 16 may then provide an output to control or influence the animated character head's 30 interaction with the user 26 based on the information. For example, the computing system 16 may be in communication with the base station control system 32 (e.g., via communication devices 64, 80). In some such cases, the computing system 16 may provide an output to the base station control system 32 that causes display of the information and/or a recommended interaction on the display 66 of the base station control system 32. The handler may then select an appropriate animation for the animated character head 30, such as by providing an input at the base station control system 32 that causes the animated character head 30 to speak a particular phrase and/or to perform a particular gesture (e.g., relevant to the information about the user 26). In some such cases, the computing system 16 may provide an output to the animated character head 30 that causes display of the information and/or a recommended interaction on the display 42 of the animated character head 30. In an embodiment, the handler may convey the information and/or provide a recommendation based on the information to the performer, such as by speaking (e.g., via a two-way wireless communication system) to the performer.
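As a purely illustrative sketch (the function names and the callback stand-ins below are assumptions, not the disclosed interfaces), handling a detection near the animated character head might look like the following: look up the guest's recorded activities and push the information, along with a suggested interaction, toward the handler's base station display.

```python
def on_guest_approach(identifier, activity_lookup, notify_base_station):
    """Handle a detection from the reader positioned near the animated character head.

    activity_lookup returns the guest's recorded activities; notify_base_station is a
    stand-in for sending information (and a suggested interaction) to the handler's
    tablet for display.
    """
    activities = activity_lookup(identifier)
    if activities:
        suggestion = f"Mention that the guest recently {activities[-1]}."
    else:
        suggestion = "No recorded activities; use a generic greeting."
    notify_base_station(
        {"identifier": identifier, "activities": activities, "suggestion": suggestion}
    )


# Example usage with simple stand-ins for the database lookup and the tablet display.
history = {"tag-7F2A": ["completed the roller coaster", "earned 500 points in the dart game"]}
on_guest_approach("tag-7F2A", lambda tag: history.get(tag, []), print)
```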
In this manner, the interactive system 10 may provide a unique, personalized interactive experience between the user 26 and the animated character head 30. The interactive experience may be different for each user 26 and/or different each time the user 26 visits the animated character head 30. It should be appreciated that any of the features, functions, and/or techniques disclosed herein may be distributed between the identification system 12, the animated character head 30, and the base station control system 32 in any suitable manner. As noted above, the animated character system 14 may be able to operate independently of or without the identification system 12. Similarly, in an embodiment, the animated character head 30 may be able to operate independently of or without the base station control system 32. Thus, it should be appreciated that, in some such cases, the identification system 12 may provide outputs directly to the animated character head 30 (e.g., the processor 36 of the animated character head 30 may process signals received directly from the identification system 12 to select and play an animation from the library).
Certain examples disclosed herein relate to activities that involve interaction with attractions (e.g., rides, restaurants, characters) within the amusement park. In an embodiment, the interactive system 10 may receive and utilize information about the user 26 other than activities within the amusement park to provide the unique, personalized interaction. For example, the interactive system 10 may receive and utilize information about the user's 26 performance in a video game at a remote location (e.g., other than within the amusement park, such as at a home video console or computing system), the user's 26 name, age, or other information provided by the user 26 (e.g., during a registration process or ticket purchasing process), or the like. In an embodiment, the interactive system 10 may receive and utilize information related to the user's preferred language, any unique conditions of the user (e.g., limited mobility, limited hearing, sensitivity to loud sounds), or the like. For example, the user 26 may complete a registration process or otherwise have the opportunity to input preferences or other information that is associated with the wearable device 24. When the user 26 approaches the animated character head 30, the preferences or other information may be presented to the handler and/or the performer, may be used to select an animation, and/or may be used to determine a recommended animation. In this way, the animated character head 30 may speak to the user 26 in a language that the user 26 understands and/or interact with the user 26 in a manner that is appropriate for the user 26. Additionally, it should be understood that the illustrated interactive system 10 is merely intended to be exemplary, and that certain features and components may be omitted and various other features and components may be added to facilitate performance, in accordance with the disclosed embodiments.
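For illustration only, preference-aware filtering of candidate animations might be sketched as follows; the preference keys and animation fields are hypothetical assumptions rather than disclosed data structures.

```python
def select_candidate_animations(animations, preferences):
    """Filter the animation list using registered guest preferences.

    animations: list of dicts with 'name', 'language', and 'loud' keys.
    preferences: dict that may include 'language' and 'avoid_loud_sounds'.
    """
    language = preferences.get("language", "en")
    avoid_loud = preferences.get("avoid_loud_sounds", False)
    return [
        animation["name"]
        for animation in animations
        if animation["language"] == language and not (avoid_loud and animation["loud"])
    ]


animations = [
    {"name": "greeting_en", "language": "en", "loud": False},
    {"name": "greeting_es", "language": "es", "loud": False},
    {"name": "fanfare_es", "language": "es", "loud": True},
]
# A guest who registered Spanish as a preferred language and is sensitive to loud sounds.
print(select_candidate_animations(animations, {"language": "es", "avoid_loud_sounds": True}))
```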
As shown, the computing system 16 may include a processor 82 configured to execute instructions stored in a memory 84 to perform the operations disclosed herein. As such, in an embodiment, the processor 82 may include one or more general purpose microprocessors, one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), or any combination thereof. Additionally, the memory 84 may be a tangible, non-transitory, computer-readable medium that stores instructions executable by and data to be processed by the processor 82. Thus, in some embodiments, the memory 84 may include random access memory (RAM), read only memory (ROM), rewritable non-volatile memory, flash memory, hard drives, optical discs, and the like.
With reference to the example of the amusement park 100, a first user 26A may visit a restaurant 102 and a second user 26B may experience a ride 104, and these activities may be detected by respective readers 20 (e.g., a first reader 20A and a second reader 20B) and recorded by the computing system 16 in the one or more databases 18 in the manner described above.
At a later time, each user 26A, 26B may approach the animated character head 30. When the first user 26A is within range of a third reader 20C positioned proximate to the animated character head 30, the third reader 20C may provide a signal indicating detection of the first user 26A to the computing system 16. In response, the computing system 16 may provide information regarding the first user's 26A previous activities within the amusement park 100 to the base station control system 32. For example, the computing system 16 may provide information indicating that the first user 26A recently visited the restaurant 102, and the base station control system 32 may provide the information on the display 66 for visualization by the handler. Thus, the handler may be led to select an animation related to the first user's 26A visit to the restaurant 102, such as to ask the first user 26A whether the first user 26A enjoyed the meal at the restaurant 102. As noted above, the information may be communicated and/or utilized in various other ways to provide the unique, customized interactive experience for the first user 26A. For example, in an embodiment, the computing system 16 may additionally or alternatively determine and provide a recommended animation to the base station control system 32, or the base station control system 32 may determine one or more relevant animations to facilitate selection by the handler.
Similarly, when the second user 26B is within range of the third reader 20C positioned proximate to the animated character head 30, the third reader 20C may provide a signal indicating detection of the second user 26B to the computing system 16. In response, the computing system 16 may provide information regarding the second user's 26B previous activities within the amusement park 100 to the base station control system 32. For example, the computing system 16 may provide information indicating that the second user 26B recently visited the ride 104, and the base station control system 32 may provide the information on the display 66 for visualization by the handler. Thus, the handler may be led to select an animation related to the second user's 26B visit to the ride 104, such as to ask the second user 26B whether the second user 26B enjoyed the ride 104. As noted above, the information may be communicated and/or utilized in various other ways to provide the unique, customized interactive experience for the second user 26B. For example, in an embodiment, the computing system 16 may additionally or alternatively determine and provide a recommended animation, or the base station control system 32 may determine one or more relevant animations to facilitate selection by the handler.
It should also be appreciated that the third reader 20C may provide respective signals indicating detection of the users 26A, 26B to the computing system 16, which may determine and record the interaction with the animated character head 30 in the one or more databases 18. Accordingly, subsequent activities (e.g., at the restaurant 102, the ride 104, or at other attractions, including interactions with other animated character heads) may be varied based on the users' 26A, 26B interaction with the animated character head 30. For example, a game attraction may adjust game elements based on the users' 26A, 26B achievements, including the users' 26A, 26B interaction with the animated character head 30. In an embodiment, another handler may be led to select an animation for another animated character head based on the users' 26A, 26B previous interaction with the animated character head 30. Similarly, should the users 26A, 26B revisit the animated character head 30 (e.g., in the same day or at any later time, including one or more later years), the animated character system 14 may operate in a manner that avoids repeating the same phrase(s), builds off of a prior interaction, or indicates recognition of the user 26 (e.g., states “it is nice to see you again,” or “I have not seen you since last year”). For example, some or all of the phrases that were previously spoken may be removed from inventory (e.g., the handler is not given the option to play the phrases), may not be presented to the handler on an initial screen that is viewable to the handler via the base station control system 32 as the user 26 approaches the animated character head 30, and/or may be marked or highlighted as having been previously spoken.
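A minimal sketch of how previously spoken phrases could be withheld on a repeat visit is shown below; the phrase strings, recognition greeting, and function name are assumptions used only to illustrate the idea.

```python
def phrases_for_visit(all_phrases, previously_played):
    """Build the phrase inventory presented to the handler for a returning guest.

    Phrases already played for this guest are withheld (or could instead be marked as
    previously spoken), and a recognition phrase is offered first on a repeat visit.
    """
    fresh = [phrase for phrase in all_phrases if phrase not in previously_played]
    if previously_played:
        fresh.insert(0, "It is nice to see you again!")
    return fresh


inventory = ["Welcome to the park!", "Did you enjoy the roller coaster?", "Have a great day!"]
played_last_visit = {"Welcome to the park!"}
print(phrases_for_visit(inventory, played_last_visit))
```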
In step 112, the computing system 16 tracks (e.g., detects and records) the user's 26 activities within an amusement park. In an embodiment, the user 26 may wear or carry the wearable device 24 that supports the identifier 22. When the user 26 brings the wearable device 24 within range of the reader 20, the reader 20 may detect the identifier 22 and provide a signal to the computing system 16 to enable the computing system 16 to detect and to record (e.g., in the one or more databases 18) the user's 26 activities within the amusement park.
In step 114, the computing system 16 receives a signal indicative of detection of the identifier 22 in the wearable device 24 of the user 26 from one or more readers 20 proximate to the animated character head 30. The computing system 16 may determine that the user 26 is approaching the animated character head 30 based on the signal.
In step 116, in response to receipt of the signal at step 114, the computing system 16 may then access and provide information related to the activities of the user to the base station control system 32 to control or influence the animated character head's 30 interaction with the user 26. For example, the computing system 16 may provide an output to the base station control system 32 that causes display of the information and/or a recommended interaction on the display 66 of the base station control system 32. The handler may then select an appropriate animation for the animated character head 30, such as by providing an input at the base station control system 32 that causes the animated character head 30 to speak a particular phrase and/or to perform a particular gesture (e.g., relevant to the information about the user 26). In this manner, the interactive system 10 may provide a unique, personalized interactive experience between the user 26 and the animated character head 30.
The animated character head 30 may be used with the base station control system 32. As shown, the display 66 of the base station control system 32 shows a selection menu of available animations for the animated character head 30. In an embodiment, the animations may be arranged in order of relevance to the user 26, such as based on the information about the user's 26 previous activities within the amusement park, thereby facilitating selection of an appropriate animation by the handler.
While the identification system is disclosed as a radio-frequency identification (RFID) system to facilitate discussion, it should be appreciated that the identification system may be or include any of a variety of tracking or identification technologies, such as a Bluetooth system (e.g., Bluetooth low energy [BLE] system), that enable an identification device (e.g., transceiver, receiver, sensor, scanner) positioned within an environment (e.g., the amusement park) to detect the identifier in the device of the user. Additionally, while only certain features of the present disclosure have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. Further, it should be understood that components of various embodiments disclosed herein may be combined or exchanged with one another. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the disclosure.
The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).