The present application relates, generally, to systems and methods associated with interactive models.
The increasing proliferation of mobile computing devices, such as smartphones, has resulted in users increasingly relying on such devices for recreational purposes, including game playing. Accordingly, many electronic video games, such as multi-player video games, have overtaken traditional “physical” games, such as board games, in popularity. While electronic video games may provide many advantages over board games, such video games do not provide the same tangible, “real-world” gameplay experience that certain board games offer through the use of figurines or gameplay pieces.
An interactive apparatus is disclosed for providing actuations at at least one point of the apparatus. An audio player component is provided that is configured to receive and play an audio file having a plurality of frequencies. Further, an audio detection component is included that is configured to detect at least one of the frequencies as the audio file plays. At least one solenoid component is included that is configured to actuate at least one point of the apparatus in response to the audio detection component detecting the at least one frequency.
These and other aspects, features and advantages can be appreciated from the accompanying description of certain implementations and the accompanying figures and claims.
Aspects of the present disclosure will be more readily appreciated upon review of the detailed description of its various embodiments, described below, when taken in conjunction with the accompanying drawings, of which:
The referenced systems and methods are described herein with reference to the accompanying drawings, in which like reference numerals refer to like elements and in which one or more illustrated embodiments and/or arrangements of the systems and methods are shown. The systems and methods are not limited in any way to the illustrated embodiments and/or arrangements as the illustrated embodiments and/or arrangements described below are merely exemplary of the systems and methods, which can be embodied in various forms. Therefore, it is to be understood that any structural and functional details disclosed herein are not to be interpreted as limiting the systems and methods, but rather are provided as a representative embodiment and/or arrangement for teaching one skilled in the art one or more ways to implement the systems and methods.
The present application provides an interactive model that is operable in a network, can operate in an “always-on” mode, and that carries a very low cost of manufacture. In one or more implementations of the present application, the model includes a wireless speaker and microphone (e.g., Bluetooth-enabled speaker and microphone), and can include an animatronic structure and memory (e.g., flash memory). Information directed to kinetics can be encoded in audio, such as by playing a sound file (e.g., .WAV, .MP3, .AIFF, .DCT or other suitable format) that includes frequencies that, when detected by the model, cause its movable parts (e.g., levers and hinges) to actuate.
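The frequency-detection step described above can be sketched with the Goertzel algorithm, which measures the power of a single target frequency in a block of audio samples. The specific control frequencies, actuator names, and threshold below are illustrative assumptions, not part of this specification:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Relative power of target_freq in samples, via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)      # nearest frequency bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Hypothetical mapping of embedded control tones to actuators.
CONTROL_TONES = {17000.0: "jaw_solenoid", 18000.0: "arm_solenoid"}

def detect_commands(samples, sample_rate=44100, threshold=1e4):
    """Names of actuators whose control tone is present in the audio block."""
    return [name for freq, name in CONTROL_TONES.items()
            if goertzel_power(samples, sample_rate, freq) > threshold]
```

In a model of the kind described here, such a detector could run continuously on blocks of microphone input; when a tone's power crosses the threshold, the corresponding movable part is driven.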
In one or more implementations, a computing device, such as a smartphone, tablet, or other mobile computing device, executes an application (e.g., a mobile app) or other software. The mobile app can provide instructions for audio to play from the speaker configured with the model, which can cause one or more components of the model to actuate in response. The model can further be configured with a series of low-cost sensors that detect the proximity of physical objects, including objects configured with passive radio-frequency identification (“RFID”) tags. The model can further be configured with one or more components to send and receive associated information to and from the mobile computing device.
In one or more implementations, one or more objects external to the model can be configured with passive RFID tags, and the model can be configured with components for passive RFID reading functionality. In another implementation, one or more objects may be configured with a Bluetooth beacon or other suitable component(s) that can be detected by components configured with the model. Moreover, the present application provides for wireless tethering of the networked mobile computing device (e.g., a smartphone) and the model, thereby enabling updates of content, such as content related to the time of day, date, weather, or virtually any other information, for interactivity in connection with the present application. Moreover, the model can be configured with memory to store information, such as for interactivity, prior to or in lieu of transmission of information to and/or from a mobile computing device.
Accordingly, a model that is configured with an audio output component, such as an internal Bluetooth speaker, an audio input component, such as a microphone, and detection functionality, such as passive RFID reading functionality, can interact with a user, including as a function of data and/or sound transmissions to and/or from a mobile computing device. In one non-limiting example implementation, simple commands can be formatted as respective frequencies, which result in movement, such as via one or more potentiometers, causing actuation of one or more components of the model.
The model, which can be configured as a doll or other figurine, can be physically accessible and appear to have “Listen and Respond” functionality. Audio input, such as from a microphone, audio output, such as from a Bluetooth-enabled speaker, and kinetics, such as from a responsive animatronic framework, enable the model to appear to respond to specific verbal cues and even to converse with users. Moreover, by communicating with a smart device via Bluetooth Low Energy, new content can be passively pushed to and stored on the model. When within a significant range of the computing device, such as 30 to 50 feet (or more), the model can become highly interactive and offer contextual activity, including by responding to audio cues (voice, broadcast, etc.) and the time of day. While the model is not in communication with a computing device (e.g., a smartphone), the model may exhibit a smaller, yet still smart, range of behaviors.
An example electronic base 100 in accordance with an example implementation for a model is illustrated in
In one or more implementations, a cross-platform application (e.g., a mobile app) experience can deliver automatic content to the model, as well as give users an ability to control the experience of other users of the model (e.g., their children's experience). For example, and in an implementation, the model can be configured with a communication module. As new content is downloaded to the app via Wi-Fi, cellular, or another communications channel, and when the model is in the vicinity of an app-equipped smart device, the model can listen to broadcast content, e.g., storybooks, and respond appropriately.
In various applications, the present application includes a model that can be configured to be cross-generational and a cultural touchstone, with potential to be one of the most meaningful consumer products to date. Much more than a simple dancing plaything, a model configured in accordance with the teachings herein provides a networked, “always-on” companion and a powerful vehicle for receiving and delivering daily programming. Furthermore, the model can cost less to manufacture than a traditional animatronic model.
In operation, the present application includes a model that can listen and converse meaningfully, such as with a child. Beyond simply dancing, laughing and singing in pre-programmed ways, the model in accordance with the present application is capable of evolving and can learn a seemingly limitless number of behaviors. The model can, accordingly, be used to reinforce new habits based on both a user's life-stage and external content (e.g., the time of day). In an implementation as a child's toy, the model can be programmed by parents to deliver appropriate educational content and that can be updated daily with new lessons and interactive content.
For both the user of the model (e.g., a child, student or other entity) and a user of the mobile computing device (e.g., a parent, teacher or other entity), the present application provides a seemingly magical experience, almost as if the model were a fully aware, alive entity that is interacting. The model can appear to know the time of day, to listen and respond to conversational cues, and can engage in contextually relevant activities. Spoken words can be provided, such as via the Bluetooth speaker, and a corresponding mechanical element (such as a jaw) can actuate in response.
Further, and in connection with an example implementation in which an object includes shoes that are configured with an RFID tag, the model can appear to ask a child to put on the shoes so the model can go outside with the user. The user can be prompted to find the RFID-equipped shoes, and once the shoes are placed on the model's feet, the model can respond positively. Further in this non-limiting example implementation, the model can prompt a child to bring the model along so that they will look for the shoes together. Using iBeacon or other functionality, the model can say “warmer,” “hot,” “very hot,” “cold” or “colder” depending on the child's proximity to the shoes.
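The “warmer/colder” behavior can be sketched as a simple mapping from a beacon's received signal strength (RSSI, in dBm, where values closer to zero indicate closer proximity) to a spoken hint. The thresholds below are illustrative assumptions; a real deployment would calibrate them per environment:

```python
def proximity_word(rssi_dbm):
    """Map a beacon RSSI reading (dBm) to a spoken proximity hint.

    Thresholds are illustrative; RSSI varies with hardware and surroundings.
    """
    if rssi_dbm >= -50:
        return "very hot"
    if rssi_dbm >= -60:
        return "hot"
    if rssi_dbm >= -75:
        return "warmer"
    if rssi_dbm >= -90:
        return "cold"
    return "colder"
```

As the child carries the model toward the tagged shoes, successive RSSI readings rise and the spoken hints progress from “colder” toward “very hot.”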
Another non-limiting example is a simple listening experience in which the model chimes in as a user reads from a book. The model can interject at various points, such as to offer additional commentary while the user reads. By detecting input from the user, the model can cease adding commentary or add more, accordingly.
Thus, the present application provides a cross-platform real-world application experience that not only delivers automatic content to an interactive model, but also provides for dynamic responsiveness. Such responsiveness can be useful in many contexts, such as during long travel (car-trip- or airplane-appropriate content and interactions to educate and entertain children, photo-hunt “I Spy” games, spotting objects of certain colors, playing an alphabet game, or the like).
In accordance with an example physical model, an inner skeleton can be provided that supports a spine (e.g., its back) and provides rigidity for the workings of various mechanical parts included in the model's mouth and neck. The model's mouth can open to convey a character's personality and to be recognizable. The electronics assist with control of various parts of the model, such as hands, mouth, legs, or the like. The electronics can include one or more low-cost integrated circuits, which can translate commands to actuate the coils, as well as Bluetooth connectivity with a smartphone or other mobile computing device. Further, detection and transmission of audio sounds to the smartphone can be provided, along with a data coding algorithm that controls various parts of the model.
For example, when the model is activated (turned on), the model can greet with a hello. Sounds can cause parts of the model to actuate, such as the mouth opening and closing or the lips moving, in sequence or simultaneously. When the model's battery is running low, the model can say: “Hey, my battery is low, please charge me.” When the model is turned off, the model can recite, “Bye-bye, see you later.” These examples provide an indication of functionality in accordance with the present application.
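The behaviors just described amount to a small event-to-phrase table. A minimal sketch, using the phrases quoted above and hypothetical event names, might look like:

```python
# Phrases are quoted from the examples above; event names are illustrative.
EVENT_PHRASES = {
    "power_on": "Hello!",
    "battery_low": "Hey, my battery is low, please charge me.",
    "power_off": "Bye-bye, see you later.",
}

def phrase_for(event):
    """Return the spoken phrase for a device event, or None if unmapped."""
    return EVENT_PHRASES.get(event)
```

In practice, each returned phrase would be routed to the audio output component, with the data coding described herein driving the matching mouth actuation as the phrase plays.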
Turning now to
Continuing with reference to
Thus, as shown and described herein, kinetic information can be encoded in audio, such as a sound file, and a low-cost interactive model responds accordingly. When a given analog frequency is recognized, the corresponding movable portion of the model actuates.
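The encoding side can be sketched as writing pure control tones into a standard WAV file using Python's standard-library `wave` module. The frequencies, amplitude, and tone duration below are illustrative assumptions:

```python
import math
import struct
import wave

def write_command_tones(path, freqs, sample_rate=44100, duration=0.2):
    """Write a mono 16-bit WAV containing one control tone per frequency.

    Each frequency in freqs becomes a duration-second sine tone at half
    amplitude, played back to back.
    """
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)           # 16-bit signed samples
        w.setframerate(sample_rate)
        n = int(sample_rate * duration)
        for f in freqs:
            w.writeframes(b"".join(
                struct.pack(
                    "<h",
                    int(32767 * 0.5 * math.sin(2 * math.pi * f * i / sample_rate)))
                for i in range(n)))
```

Playing such a file through the speaker path lets a frequency detector on the model recognize each tone and drive the matching movable part.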
In one or more implementations regarding an educational model, cognitive support is provided and designed to teach someone in various ways. In one non-limiting example, a child can be taught how to describe times of day, how to track time, and how to adopt time-appropriate habits. For example, a child can be taught how to put on shoes, brush teeth, or the like. Further, the model can point to important clues and ask predictive questions, thereby supporting a child's development of reading strategies and reading comprehension and scaffolding the reading experience.
With regard to social-emotional support, the model can provide someone with a friend who can respond in a personalized, contextually appropriate way. This can provide, for example, a benefit to parents, such as by providing personalized greetings that convey the fact that the model can educate and support children in ways big and small. The model can provide context-relevant greetings, such as responding to a particular time of day. Further, the model can support child engagement and alleviate frustration by responding to the child and encouraging the child to read. Parents cannot always be there with their child when the child tries to read a book. The model can reassure parents by acting as an “educational agent,” guiding their child through the reading experience in a personalized way. In other contexts, the model can provide pre-visit information about various places, such as doctors, dentists, airports, new schools, or virtually any other place about which a user would appreciate advance knowledge.
Although illustrated embodiments of the present invention have been shown and described, it should be understood that various changes, substitutions, and alterations can be made by one of ordinary skill in the art without departing from the scope of the present invention.
This application is based on and claims priority to U.S. Provisional Patent Application 62/116,309, filed Feb. 13, 2015, the entire contents of which is incorporated by reference herein as if expressly set forth in its respective entirety herein. The present application further incorporates by reference U.S. Pat. No. 8,955,750, issued Feb. 17, 2015, in its entirety as if expressly set forth herein.