1. Field
Various software services may benefit from enhancements to interactive services. For example, context aware interactive experiences over a network may benefit from a platform for their creation.
2. Description of the Related Art
There are conventional systems for real-time perception and response to an unencumbered user. Such unencumbered interactive experiences may utilize natural user interface techniques including gesture tracking, face tracking, voice recognition, touch, inertial sensing, and other forms of input that sense and respond to the natural unencumbered actions of human beings. These systems can include application programming interfaces (APIs), integrated development environments (IDEs), and platforms, such as coding toolkits, that can be used for creating unencumbered interactive experiences. These APIs, IDEs, and platforms may be helpful for building unencumbered interactive experiences, but they do not fully address issues of building experiences that can scale from a single interactive instance to a deployed network of multiple simultaneous instances in separate physical locations such as retail chain stores, consumer “app store” markets, or healthcare networks.
These platforms do not address deployment, maintenance, retargeting to different device capabilities, organization, or data collection from interactive experiences in a network. In addition, they do not address issues specific to unencumbered interactive experiences when scaled to multiple locations.
According to certain embodiments of the present invention, a method can include receiving an application descriptor at a user device from a server, wherein the application descriptor corresponds to an interactive application. The method can also include creating an application instance on an interactive experience processing system targeted to a platform's architecture and interactive capabilities based on the application descriptor. The method can further include responding to at least one action of a user of the interactive application via interactive multimedia rendered on the user device. The application instance can include an application core including at least one interaction node and at least one media node.
In certain embodiments of the present invention, a method can include transmitting an application descriptor to a user device from a server. The application descriptor can correspond to an interactive application. The user device can be configured to create an application instance on an interactive experience processing system targeted to a platform's architecture and interactive capabilities based on the application descriptor. The method can also include providing an object-code executable version of a program targeted to the architecture and capabilities, either cached on servers or at run-time.
A method, according to certain embodiments of the present invention, can include providing an application descriptor to be transmitted to a user device from a server, wherein the application descriptor corresponds to an interactive application, wherein the user device is configured to create an application instance on an interactive experience processing system targeted to a platform's architecture and interactive capabilities based on the application descriptor. The method can also include providing a content management system configured to customize the behavior and appearance of applications by modifying the application descriptor and associated media.
An apparatus, in certain embodiments of the present invention, can include at least one processor and at least one memory including computer program code. The at least one memory and the computer program code can be configured to, with the at least one processor, cause the apparatus at least to perform any of the preceding methods.
A non-transitory computer readable medium can be encoded with instructions that, when executed in hardware, perform a process. The process can include any of the preceding methods.
For proper understanding of the invention, reference should be made to the accompanying drawings.
As an unencumbered interactive experience is scaled to multiple instances, and multiple target device types with differing device capabilities, it may be helpful to preserve the interactive behavior of the experience as well as to maintain the relationship between the interactive experience and the perceived simulation from the interactive experience without significant effort in scaling and customization.
Building an unencumbered interactive experience that can be deployed in multiple locations on many different setups and hardware platforms may involve significant effort to maintain the perceived interactive experience across different sensors and different detection schemes, and may involve adjusting to variations of the user's perception of the simulation based on the orientation, scale, and technology of the display used.
An issue sometimes faced in deploying an unencumbered interactive experience in different physical locations is that the hardware and physical configuration may need to be altered to reflect the physical constraints of a specific location, economic constraints of a buyer, or other constraints or user preferences. Depending on the environmental conditions and restrictions, different physical locations may require different user detection setups with different camera or sensor technologies. The required display technologies and display size and orientation may also vary based on obstructions and available space and light requirements.
Further complications are that different sensors and platforms have widely varying capabilities and may discern the user differently. On the other hand, the user's interaction may need to be processed such that the perceived interactive experience is the same. Likewise, a user's perception of the user's interaction may also be affected by the display size, technology, and orientation to the user. To maintain a consistent experience on different hardware platforms, the interactive experience may need to take into account the context of the input and output devices as well as the physical configuration.
Scaling an interactive experience may involve effort to retarget the media, simulation, interaction, interaction design, any shared media, and metrics for a specific interactive instance with a different configuration and location. To maintain relevancy to a user, a scaled unencumbered interactive experience may need retargeted media, messaging, simulation, shared media, and metrics to reflect changes in geography, time or season, events or venues, sponsors, and the like. An example is a single unencumbered interactive experience that is deployed at multiple stadiums around the country. Though every instance has the same interactive behavior and provides the same interactive experience, each separate experience may have media or messaging specific to the venue or location while the shared media and metrics may reflect a user's interaction with customized content from each instance.
To address these challenges, certain embodiments of the present invention provide a system and method for developing scalable networked unencumbered interactive experiences that can easily be deployed and retargeted as separate instances on different hardware, locations, and configurations while maintaining consistent interactive behavior and a context relevant experience with relevant output, with minimal customization.
More particularly, certain embodiments provide a networked platform that can define and describe how a specific interactive experience responds to a user, via interactive multimedia, using specific media assets while preserving these behaviors and interactive experiences independently of the context and input or output devices. The platform can extract metrics, logs, and shareable media that are relevant to context, input, output, media, and behavior. This platform can define these interactive experiences in such a way that the description of any specific instance of these experiences and their associated behavior, media, context, logic, assets, logs, metrics, metadata, shared media, and schedules can be stored as a file, a set of files, or in a database. The platform can collect context-specific metrics, context-aware user experience media for sharing, and logs. The data and files collected by the platform allow for targeted sharing, analytics, data mining, and maintenance of the networked interactive experiences. This networked platform can use the descriptions of these interactive experiences to allow distributed updates and purchases, customizations, retargeting, organization, and scheduling of each instance of the experience by modifying the definitions of the experience and by altering the associated media or altering the behavior parameters of the specific experience.
In short, certain embodiments provide a platform that can define and describe the interactive multimedia response of an unencumbered interactive experience, independent of the sensor devices or configuration context, to facilitate retargeting and deployment. Certain embodiments provide an interactive experience platform that can extract context relevant metrics, logs, and shared media from multiple networked instances of interactive experiences with minimal customization. Certain embodiments also provide an interactive experience platform that defines interactive experiences in such a way that the description of any specific instance of these experiences and their associated behavior, media, context, logic, assets, logs, metrics, metadata, shared media, schedules, and the like can be stored as a file, a set of files, or in a database such that these descriptions of interactive experiences can be used to distribute, maintain, customize, organize, and schedule interactive experiences in a network. Other features of certain embodiments can be seen in the following discussion.
An interactive system in certain embodiments can respond to the unencumbered interaction of a user. Such a system may be present in many different kinds of locations and environments, including home gaming systems. The system can include a sensor for detecting the user and the user's interaction (e.g. via computer vision or touch), a display, a processing system such as a computer, and the software that responds to the user via interactive multimedia, simulation, game logic, or other interactive system. The software can be written in a platform such as an application programming interface (API) or graphical user interface (GUI) based integrated development environment (IDE). Certain embodiments allow for the easy creation of interactive experiences that maintain a similar simulated behavior and rendering of this behavior in reaction to a user, independent of the devices or configuration context. Here, the configuration context can refer to the specific hardware, such as input and output interfaces and devices, with which the system is equipped.
Certain embodiments allow for automatic generation of metrics, logs, and shared media that are context relevant. Certain embodiments also define each interactive experience and what it does, what it uses, what it collects, and how it responds to the user in such a way that this description and definition can be easily distributed and generalized to multiple device types and configuration contexts.
In one embodiment of a platform, specific instances of the interactive experience can be written on a platform core that handles all of the shared functionality of an interactive experience, such as networking, device interfaces, an asset system, media rendering, an event system, a metrics system, a scripting system, a parameter system, a physics simulation system, and an interface system. The specific instance of an interactive experience can then be built with this platform core and with experience-specific code that describes the specific interaction experience and specific media usage. In one embodiment of the platform, the experience-specific code can be written in a scripting language. In another embodiment of the platform, the experience-specific code can be generated by a GUI interface. The platform core can be shared by all of the instances of interactive experiences, and the platform core can be responsible for managing the interactive experiences that are on the processing system.
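To make this split concrete, the following TypeScript sketch is purely illustrative and not part of any claimed embodiment; the interfaces and names (PlatformCore, ExperienceScript, bouncingLogoExperience) are hypothetical, and a real platform core would expose far more subsystems.

```typescript
// Illustrative sketch: a shared platform core exposing common subsystems,
// with experience-specific code layered on top of it.
interface PlatformCore {
  events: { on(type: string, handler: (data: unknown) => void): void };
  assets: { load(id: string): Promise<unknown> };
  metrics: { record(name: string, value: number): void };
  render: { draw(nodeId: string, media: unknown): void };
}

// Experience-specific code only describes the specific interaction and media usage.
interface ExperienceScript {
  setup(core: PlatformCore): void; // called once when the app instance is created
}

const bouncingLogoExperience: ExperienceScript = {
  setup(core) {
    core.events.on("user.push", () => {
      core.metrics.record("logo.pushed", 1); // shared metrics system does the bookkeeping
      core.render.draw("logo", { animation: "bounce" });
    });
  },
};
```

Under this assumed split, the experience author never touches networking, device interfaces, or rendering directly; those remain in the shared core.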
An app instance 140 can also include an app core 220. The app core 220 can be what differentiates each interactive experience from each other interactive experience. A developer who creates an interactive experience on the platform may primarily develop the app core 220. The app core 220 can be the description, logic, and media of an interactive experience that is componentized into interaction and media nodes 222. Interaction and media nodes 222 describe the behavior and logic of an interaction in a device- and device-capability-agnostic fashion. The app logic 224 and app data 226 of the app core 220 can contain the data and logic for the experience that cannot be componentized into nodes, such as a state machine for the simulation.
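As a minimal sketch of how such componentization could be expressed, assuming hypothetical TypeScript interfaces (MediaNode, InteractionNode, AppCore) that do not appear in the specification:

```typescript
// Illustrative sketch of an app core componentized into device-agnostic nodes.
interface MediaNode {
  id: string;
  media: string[]; // asset identifiers rendered by the platform core
}

interface InteractionNode extends MediaNode {
  // Behavior is expressed in terms of abstract interaction elements,
  // never in terms of a particular sensor or input device.
  onInteraction(element: { kind: "push" | "pull" | "select"; strength: number }): void;
}

interface AppCore {
  nodes: (MediaNode | InteractionNode)[];
  appLogic: { state: string }; // e.g. a state machine that cannot be componentized into nodes
  appData: Record<string, unknown>;
}
```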
The context of an app instance 140 can be contained in the app context 230. The platform developer describes the app configuration 232 and metadata 236 while the platform generates the license 234 and identifiers. The app configuration 232 includes information about the devices, configuration, layout, network, and location of the app instance 140. The metadata 236 describes the interactive experience and also includes specific descriptions of the configuration. The identifiers of an app instance 140 are generated by the platform and are used to uniquely identify a specific app instance 140 and associated files on the system of servers 160. The app context 230 also includes the license 234. The license 234 is used to preserve the app instance 140 by preventing any unintentional changes to the app context 230 or app content. The license 234 includes machine identifying data, a cryptographic hash of the app files and app content, expiration dates for the license 234, geographic restrictions, schedule restrictions, usage restrictions, and compatible device restrictions.
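One hedged way to picture the license data and an integrity check over the app files is the TypeScript sketch below; it assumes a Node.js environment, and the License shape and verifyContent function are hypothetical rather than the platform's actual format.

```typescript
import { createHash } from "crypto";

// Illustrative sketch of license data and an integrity check over app files.
interface License {
  machineId: string;        // machine identifying data
  contentHash: string;      // cryptographic hash of the app files and app content
  expires: string;          // ISO date for license expiration
  allowedRegions: string[]; // geographic restrictions
  allowedDevices: string[]; // compatible device restrictions
}

// Recompute the hash over the app files and compare it to the licensed hash,
// catching unintentional changes to the app context or app content.
function verifyContent(license: License, appFiles: Buffer[]): boolean {
  const hash = createHash("sha256");
  for (const file of appFiles) hash.update(file);
  return hash.digest("hex") === license.contentHash;
}
```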
When deploying an app instance 140 on a new interactive processing system, the platform core 210 can use the app descriptor's app files 312, which can store key files used by the interactive experience and their storage locations, to set up a base application without media, data, scripts, and interaction content. The platform core 210 can build the rest of the app instance 140 from the app content 316 of the app descriptor 150. The app content 316 includes all of the files and data and the associated metadata 236 that are used by the app instance 140. The app content 316 can be organized into collections of files, data, and collections of collections and include the associated metadata 236 to describe the collections and their content. The metadata 236 stored in the app content 316 can include descriptions of content, storage locations of content, and version information of content.
The platform can use the app template 314 to store a user-friendly description and hierarchy to content and data generated from interactions with the content. The platform can separate the system-level representation of the content, which is the app content 316, from the app template 314, which is the user-friendly description and organization of content. The app template 314 of an app descriptor 150 can represent an abstraction of the app content 316 and encompasses the usage, description, and organization of content as described by a developer or client. For example, the app template 314 could describe a collection of media as “insects.” The app template 314 could also describe a subset of “insects” as a “ladybug” and specify the parameters of the content that represent the instance of a “ladybug” as well as restrictions to the files or data that represent the “ladybug.”
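A minimal sketch of the “insects”/“ladybug” example, using a hypothetical TemplateEntry shape that is only an assumption about how such a hierarchy might be written down:

```typescript
// Illustrative sketch: a user-friendly content hierarchy with parameters and
// restrictions, separate from the system-level representation of the files.
interface TemplateEntry {
  name: string;
  description?: string;
  children?: TemplateEntry[];
  contentParams?: Record<string, string | number>;
  fileRestrictions?: { extensions: string[]; maxSizeBytes: number };
}

const insectsTemplate: TemplateEntry = {
  name: "insects",
  description: "Creatures the user can interact with",
  children: [
    {
      name: "ladybug",
      contentParams: { model: "ladybug", color: "red" },
      fileRestrictions: { extensions: [".png", ".webm"], maxSizeBytes: 2_000_000 },
    },
  ],
};
```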
The platform core 210 can use the app template 314 to generate metrics and shared media with context from a user's interaction with media. Using the app template 314, the platform core 210 can associate a user's interaction with media with the descriptions and hierarchy of the media contained in the app template 314. For example, the platform core 210 can create a metric that automatically describes the interaction of a user with a representation of a particular model and color of a toy by using the app template 314. The metric generated by the platform can capture the user interaction and the description and hierarchy of the content that the user is interested in without developer effort.
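For illustration only, a metric that carries the template hierarchy of the content might look like the following sketch; the ContextualMetric type and makeMetric helper are hypothetical names, not the platform's API.

```typescript
// Illustrative sketch: a metric that records how a user interacted and the
// template hierarchy of the content, so the content context comes for free.
interface ContextualMetric {
  interaction: string;   // e.g. "push", "select"
  contentPath: string[]; // template hierarchy, e.g. ["toys", "truck", "red"]
  timestamp: number;
}

function makeMetric(interaction: string, contentPath: string[]): ContextualMetric {
  return { interaction, contentPath, timestamp: Date.now() };
}

// Example: a user selects a particular model and color of toy.
const metric = makeMetric("select", ["toys", "truck", "red"]);
console.log(metric.contentPath.join(" > ")); // "toys > truck > red"
```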
In one embodiment of the platform, the app template 314 can be used by the system of servers 160 to dynamically generate a web-based GUI 520 for modifying, organizing, and adding content. This web-based GUI 520 can be a content management system for managing the content of interactive experiences. The app template 314 can add descriptions, organization, usage, and restrictions to the app content 316 within the content management GUI. Changes to the app content 316 and app template 314 can then be synchronized by the system of servers 160 to interactive experience processing systems, through the app descriptor 150.
In one embodiment of the platform, the app template 314 can be used to generate descriptions of an app instance 140 for a web-based store where app instances can be purchased. The app template 314 can also be used by the system of servers 160.
The platform can use the app descriptor's app configuration 232 to add contextual information of a specific configuration of an app instance 140 to interactive experiences, the output from interactive experiences, and to the management of interactive experiences. The app configuration 232 can include, but is not limited to, information of the input devices, the output devices, the physical configuration, the output and input orientation, obstructions, the network configuration, geographic location, the layout of other interactive experience processing systems, hardware configuration, any arbitrary grouping or hierarchy of processing systems or interactive experiences, and hardware restrictions.
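As a rough, assumption-laden sketch of the kind of contextual fields such a configuration could carry (the AppConfiguration shape below is hypothetical):

```typescript
// Illustrative sketch of contextual information an app configuration could hold.
interface AppConfiguration {
  inputDevices: string[]; // e.g. ["depth-camera", "touch-screen"]
  outputDevices: {
    kind: string;
    widthPx: number;
    heightPx: number;
    orientation: "landscape" | "portrait";
  }[];
  location: { venue: string; latitude: number; longitude: number };
  network: { groupId: string; peers: string[] }; // grouping/hierarchy of processing systems
  restrictions?: string[]; // e.g. hardware restrictions such as "no-audio-output"
}
```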
The platform core 210 can add such contextual information from the app descriptor's app configuration 232 to interaction components, media, metrics 324, and shared content 328. In addition, the app configuration 232 can be used by the system of servers 160.
The platform core 210 can use the license 234 in the app descriptor 150 to regulate which app instances can be used on which interactive experience processing systems, as well as to prevent inadvertent modifications to the app instance 140, the app context 230, and app content 316 on the processing system.
The platform core 210 can use a schedule 322 in the app descriptor 150 to schedule different app instances on a processing system as well as to schedule the content of an app instance 140. For example, the platform core 210 can schedule a media asset such that it is only used on weekends while the app instance 140 is only active during the evening. In one embodiment of the platform, a client can use a web-based GUI 520, hosted by the system of servers 160, to modify the schedule 322 for app instances and for app content 316. The modifications to the schedules are stored in the app descriptor 150 on the server and synced to interactive processing systems.
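A minimal sketch of such scheduling logic, under the assumption of a simple day-of-week and hour-range schedule (the Schedule type and isActive function are hypothetical), is shown below.

```typescript
// Illustrative sketch: a weekends-only media asset inside an evenings-only app instance.
interface Schedule {
  daysOfWeek?: number[];                  // 0 = Sunday ... 6 = Saturday
  hours?: { start: number; end: number }; // 24-hour clock, [start, end)
}

function isActive(schedule: Schedule, now: Date = new Date()): boolean {
  const dayOk = !schedule.daysOfWeek || schedule.daysOfWeek.includes(now.getDay());
  const hourOk =
    !schedule.hours ||
    (now.getHours() >= schedule.hours.start && now.getHours() < schedule.hours.end);
  return dayOk && hourOk;
}

const eveningApp: Schedule = { hours: { start: 18, end: 23 } };
const weekendAsset: Schedule = { daysOfWeek: [0, 6] };
// The asset is only used when both the app instance and the asset are scheduled.
const assetVisible = isActive(eveningApp) && isActive(weekendAsset);
console.log(assetVisible);
```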
The platform core 210 can generate and store these items in the app descriptor: metrics 324, logs and monitoring media 326 such as screenshots, shared media and metadata 236, and parameters and state configuration 330. By storing state information along with parameters in the app descriptor 150, the system of servers 160 can modify these parameters and alter the state of an app instance 140 remotely. In one embodiment of the platform, a client can modify parameters and state configuration of an app instance 140 remotely using a web-based GUI 520 hosted by the system of servers 160. The state information stored in an app descriptor 150 allows a system of servers 160 to redeploy an app instance 140 and preserve the current state of the app instance 140 across different interactive experience processing systems. For example, if a processing system is replaced then the system of servers 160 can redeploy the interactive experiences associated with the original processing system and maintain the current state of those interactive experiences.
The developer can then test the app instance 140, which can be generated by the platform using the app descriptor 150. The developer can choose to make more modifications after testing or send the app descriptor 150 to a system of servers 160 using the Internet 170. The system of servers 160 can use the app descriptor 150 to host a web-based GUI 520 for remote clients to license new experiences and to manage, customize, organize, re-target, and data-mine interactive experiences built with the platform. When a client licenses an interactive experience through the GUI 520, the system of servers 160 can deploy the app descriptor 150 to interactive experience processing systems 410.
Each interactive experience processing system 410 can include a platform core 210 and a basic app context 230 installed that describes the hardware and configuration of the interactive experience processing system 410. The platform core 210 on each interactive experience processing system 410 can synchronize previously installed app descriptors and new app descriptors with the system of servers 160. The platform core 210, on the interactive experience processing systems 410, can use the basic app context 230 to localize, re-target and configure a specific app instance 140 for the specific venue, location, and hardware without developer input.
The context 230 of a node can represent all of the factors outside a node that influence the behavior 630 and output of a node. User interaction and node-to-node interactions can be delivered to nodes as events. Events can be delivered through a callback system that sends data from input devices and node interactions in a standardized format. When a node is instantiated by the platform core 210, the platform core 210 can register the node for events that are relevant to the node based on the behavior 630 and the app context 230. Events can be combined with the app instance properties to provide context for the events that are received. The app instance properties can represent parameters of the specific instance and configuration of the app that are relevant for the node.
The context interpretation 620 of a node can use the context 230 of the system to provide a context relevant interpretation of the events to the behaviors 630. Event filters can apply context to events by translating data in events to node space coordinates, applying boundaries to event data, and applying any group restrictions to event data. Additionally, user interaction events can have additional context interpretation steps before being passed to the interaction behavior. Interaction behaviors can be described as sets of interaction elements. Interaction elements can describe a componentized simulated response to a user interaction, agnostic of the implementation or hardware used.
An interaction element interpreter can use the app context 230 to determine the best implementation of an interactive element that corresponds to a received interaction event. Implementations of interaction elements can belong to the interaction element system, which can be part of the platform core 210. The interaction element interpreter can pass context relevant implementation of interaction elements to the interaction behavior of a node. This dynamic translation from device specific interaction events to the appropriate device agnostic interaction behavior can be a component that facilitates context relevancy and retargeting in the platform.
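A sketch of this translation step, assuming two hypothetical device event shapes and a single "push" interaction element (none of which are taken from the specification), could look like this:

```typescript
// Illustrative sketch: device-specific events are interpreted into the same
// device-agnostic interaction element for the node's interaction behavior.
type InteractionElement = { kind: "push"; direction: [number, number]; strength: number };

type DeviceEvent =
  | { device: "hand-tracker"; handVelocity: [number, number] }
  | { device: "touch-screen"; dragDelta: [number, number] };

function interpret(event: DeviceEvent): InteractionElement {
  switch (event.device) {
    case "hand-tracker":
      // A hand movement toward the node is treated as a push at full strength.
      return { kind: "push", direction: event.handVelocity, strength: 1.0 };
    case "touch-screen":
      // A drag gesture maps to the same push element, with scaled strength.
      return { kind: "push", direction: event.dragDelta, strength: 0.5 };
  }
}

console.log(interpret({ device: "touch-screen", dragDelta: [3, 0] }).kind); // "push"
```

The design choice sketched here is that nodes only ever see the device-agnostic element, so the same node behavior can be retargeted across configurations without modification.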
To maintain a consistent user interactive experience across various configurations, or for other purposes, the platform can preserve the componentized behavior of nodes. The behavior 630 of a node can represent the response of a node to an input independent of the device and configuration. The interaction behavior can be the response and logic of a node in response to user interaction. The group behavior of a node can represent the actions and logic of a node in relation to a group. The group behavior can also represent the actions and logic of a group system contained in a single node. The media behavior can represent dynamic actions of media that are not direct responses of media to a user's interaction, such as, but not limited to, media on a path, animation easing of media, scaling of media, media sequences, spatially moving audio, and looping media. Node data 740 can be the data and parameters that a node stores to represent the node's state.
The rendering descriptor 640 of a node can describe the output of a node independent of the device or the configuration. The rendering descriptor 640 can contain the media of a node. The rendering descriptor 640 can also contain an output descriptor. The output descriptor can describe additional rendering effects that can be applied by the platform core 210 to the output. Examples of additional rendering effects are graphical effects such as iridescence, audio effects such as an echo, or video effects such as a blurred video effect.
An example of an interaction node is a box that a user pushes around. A developer using the platform can describe a box that has a behavior 630 that allows it to be pushed by a user. The interaction node of the box can listen for interaction events that represent a pushing interaction on the node. In one configuration, the interaction event can be created from a hand-tracking camera, while in another configuration the interaction events can be created from a touch-screen mobile phone or tablet display. The interaction element interpreter for the box can use the app instance 140 to determine the appropriate implementation of the interaction elements corresponding to the interaction events that will best preserve the interactive behavior 630 for the respective input. The interaction element interpreter can pass this implementation of the interaction elements representing a pushing action to the box. The interaction behavior can apply the appropriate interaction element implementation and behavior logic to the media that represents the box to simulate a box that has been pushed by a user. The media, coordinates, size, and output effects can be collected in the rendering descriptor 640 and sent to the platform core 210 for rendering.
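To make the box example more tangible, the sketch below is a hypothetical rendering of that node in TypeScript; the PushElement and BoxNode names are illustrative assumptions, and a real node would carry far more behavior logic.

```typescript
// Illustrative sketch of the pushable box: the node only sees device-agnostic
// push elements, applies its behavior, and emits a rendering descriptor.
interface PushElement { kind: "push"; direction: [number, number]; strength: number }

class BoxNode {
  private position: [number, number] = [0, 0];

  // Interaction behavior: move the box in the pushed direction.
  onInteraction(element: PushElement): void {
    this.position = [
      this.position[0] + element.direction[0] * element.strength,
      this.position[1] + element.direction[1] * element.strength,
    ];
  }

  // Rendering descriptor: media, coordinates, and size, independent of the device.
  renderingDescriptor() {
    return { media: "box.png", position: this.position, size: [100, 100] };
  }
}

const box = new BoxNode();
box.onInteraction({ kind: "push", direction: [1, 0], strength: 5 });
console.log(box.renderingDescriptor().position); // [5, 0]
```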
A media and interaction node 610 may not describe a user interaction. An example of a media node without user interaction is an animated bird that flies across the screen when a door opens at regular intervals. The bird media node can listen for media node events from the door when it opens. The bird media behavior can describe the bird animation looping sequences, the bird animation path, the bird velocity, as well as what animation triggers the bird to fly. The output descriptor for the bird can contain any additional output parameters that are not contained in media itself. For example, the bird can have an iridescent effect described in the output descriptor that uses a previously implemented effect from the platform.
Different interaction element implementations can be implemented within the platform and not in the node. A developer can use combinations of existing interaction elements to create new interaction behaviors 710 without creating new interaction elements. An interaction behavior 710 can be created by describing the behavior as sets of interaction elements in scripts and interaction behavior parameters. The interaction logic can use the interaction element implementations, from the platform core 210, to create the interaction behavior 710 in reaction to interaction and node events.
Group behavior 720 can be determined by grouping rules and parameters for the rules. Grouping rules can, for example, be created as scripts. Complex group behavior 720 can be composed of a set of simpler rules that an individual node applies to itself when in a group. Examples of grouping rules are a neighbor or target attraction rule, a neighbor or target avoidance rule, a collision avoidance rule, and a neighbor alignment rule. The grouping logic can be the logic for the behavior. The grouping logic can use the grouping rules to implement a reaction. Grouping logic can represent the logic for an individual component in reaction to its neighbors or the logic for entire systems of grouping behavior such as a system for simulating ants. In this example the ant system can be represented in a single interaction node. The individual ants can be parts of an ant system that is described in the group behavior 720. The ants can share all the interaction behavior 710, media, and grouping rules, since all ants are in the same node.
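As an illustrative sketch only, composable grouping rules of this kind could be written roughly as follows; the attraction and alignment implementations and their weights are assumptions for illustration, not values from the specification.

```typescript
// Illustrative sketch of composable grouping rules: each rule nudges one
// member's velocity relative to its neighbors, and the group behavior is the
// sum of simple rules, as in the ant system example.
type Vec = [number, number];
interface Member { position: Vec; velocity: Vec }
type GroupingRule = (self: Member, neighbors: Member[]) => Vec;

const add = (a: Vec, b: Vec): Vec => [a[0] + b[0], a[1] + b[1]];
const scale = (a: Vec, s: number): Vec => [a[0] * s, a[1] * s];

// Neighbor attraction: steer toward the average neighbor position.
const attraction: GroupingRule = (self, neighbors) => {
  if (neighbors.length === 0) return [0, 0];
  const sum = neighbors.reduce((c, n) => add(c, n.position), [0, 0] as Vec);
  const center = scale(sum, 1 / neighbors.length);
  return scale(add(center, scale(self.position, -1)), 0.01);
};

// Neighbor alignment: steer toward the average neighbor velocity.
const alignment: GroupingRule = (self, neighbors) => {
  if (neighbors.length === 0) return [0, 0];
  const sum = neighbors.reduce((v, n) => add(v, n.velocity), [0, 0] as Vec);
  return scale(scale(sum, 1 / neighbors.length), 0.05);
};

// Grouping logic: apply every rule to an individual member.
function applyRules(self: Member, neighbors: Member[], rules: GroupingRule[]): Vec {
  return rules.reduce((v, rule) => add(v, rule(self, neighbors)), self.velocity);
}

const ants: Member[] = [
  { position: [0, 0], velocity: [1, 0] },
  { position: [2, 1], velocity: [0, 1] },
];
console.log(applyRules(ants[0], [ants[1]], [attraction, alignment]));
```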
Media behavior 730 can be described in the media descriptor and by the media logic. Media descriptors can be specified for any media component attached to a node, including audio and visual media. Media descriptors can include the title and description of media along with the behaviors of media that are not direct reactions to user interaction, such as, but not limited to, animation, scaling, translation, rotation, audio effects, media looping, and other rendering and media post-processing effects. Media descriptors can be created in the platform GUI 510 and can include scripts in a scripting language. The media logic can use the media descriptor, media data, and media to implement the media behavior 730 of the node.
The platform core can automatically generate metrics 324 from interaction inputs based on the type of interaction input 920 and based on the app configuration. For example, a depth image as an input may not generate metrics 324, while a blob identity tracking interaction input may generate metrics 324 as new identities are added or removed over time, if blob identities correlate to the traffic of users for a time period. However, a blob based identity tracking system's metrics 324 may not be collected if the tracked blobs do not represent users and instead represent hands, due to the orientation of the device and the usage represented in the app configuration.
The platform core can automatically generate metrics 324 from interaction events 930 based on the type of interaction event, though most interaction events 930 may not generate metrics 324 since the interaction behavior 710 may generate a more relevant metric. An example of an interaction event that may generate a metric is a blob based velocity field event that could measure the general activity level of blobs, when blobs correlate to users, or a facial expression interaction event that produces metrics 324 based on different facial expressions regardless of whether a user is directly interacting with content.
The platform core can automatically generate metrics 324 from interaction elements based on the interaction element type and app configuration context. Interaction elements 1010 can represent a user's interaction without the relevant content, so metrics 324 from interaction elements 1010 can provide usage patterns for a specific app context. For example, interaction element metrics 324 can measure the overall engagement of users and their directed actions, such as the length of time users spend pushing or pulling content in a branded experience, compared to indirect or momentary user interactions, such as users bouncing content.
The platform core can automatically generate metrics 324 from interaction behaviors 710 based on the interaction elements 1010 composing them, the number of events 930 the node is receiving, and the number of nodes the node is influencing through node events. Metrics 324 from node behaviors can be app context relevant and can include the context of the content associated with the node behavior. The platform core can create a hierarchy of importance to distinguish interaction behaviors 710. The platform may only collect metrics 324 from the ranked interaction behaviors 710 based on preset parameters from the developer or as adjusted using a distributed parameter system. Interaction behaviors 710 that are deemed important and ranked higher than others may be the interaction behaviors 710 that will trigger another node. Highly important nodes may be ones that trigger several nodes or receive a trigger from more than one node. The platform core can record these important and highly important node metrics 324 along with the accompanying metadata of the media content associated with the node and how a user interacted with the node, which is described by the interaction elements 1010. Examples of interaction behaviors 710 that may not generate metrics 324, other than overall usage metrics 324 from the interaction elements 1010, are simply pushing or pulling an interaction node without triggering any other interaction or media nodes, unless a developer overrides this. An example of an important interaction behavior that may trigger the collection of a metric 324 may be the user positioning a selector over a selectable option to select it.
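A toy sketch of such a ranking, with a hypothetical importance score and threshold (both assumptions for illustration), is shown below.

```typescript
// Illustrative sketch of ranking interaction behaviors by importance: behaviors
// that trigger other nodes (or are triggered by several nodes) rank higher, and
// only metrics above a threshold are collected by default.
interface NodeBehaviorStats {
  nodeId: string;
  nodesTriggered: number; // how many nodes this behavior triggers
  triggeredBy: number;    // how many nodes trigger this behavior
}

function importance(stats: NodeBehaviorStats): number {
  return stats.nodesTriggered + stats.triggeredBy;
}

function shouldCollectMetric(stats: NodeBehaviorStats, threshold = 1): boolean {
  // A simple push or pull that triggers nothing falls below the threshold and
  // is only counted in overall usage metrics, unless a developer overrides this.
  return importance(stats) >= threshold;
}

console.log(shouldCollectMetric({ nodeId: "box", nodesTriggered: 0, triggeredBy: 0 }));      // false
console.log(shouldCollectMetric({ nodeId: "selector", nodesTriggered: 2, triggeredBy: 1 })); // true
```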
This ranking of interaction behavior can be used to highlight logs, messaging, and to emphasize important interaction or animation events in shared media as well. For example, shared media can be automatically edited by the platform, to emphasize important and highly important interaction or animation events while cutting out un-eventful interaction.
Networked platform cores can send the simulation rendering information to a networked rendering system 1220, which can be on the same or a different processing system than the nodes. Nodes can send the output of their rendering descriptors 640 to the networked platform core. The rendering system 1220 can output the visual and audio rendering information from the networked nodes as a simulation rendered on audio, visual, or electromechanical devices.
Various embodiments are possible. For example, certain embodiments include a system for creating unencumbered interactive experiences that maintain consistent interactive user experiences using componentized interactive behaviors while preserving these interactive behaviors and simulated experience independently of the context and the input or output devices used. The platform can extract metrics, logs, and shareable media that are relevant to location, event, input devices, output devices, simulation device, installation context, media, user interaction, and simulated experience.
The system can include an interactive experience processing system, an output device, and a sensor for determining an unencumbered user's interaction with displayed media content. The system can also include a software platform for developing context and configuration agnostic interactive experiences. Moreover, the system can include a platform that dynamically translates user interaction from various interaction setups to the best implementation of the intended simulation behavior based on the specific configuration and user interaction.
The platform developer can describe the user interactive components as media, parameters, logic, and a set of device and configuration agnostic descriptions of the intended simulation responses to user interaction. The platform can use the developer-intended simulation responses and the configuration to automatically listen for the appropriate user interaction events and to map various user interactions to simulated behavior implementations, creating the agnostically described simulation response.
The platform developer can, moreover, use a graphical user interface or a web-based interface to describe the user interactive components as media, parameters, logic, and a set of device and configuration agnostic descriptions of the intended simulation responses to user interaction.
Also, in certain embodiments, the platform can automatically reflect any changes to devices, configuration, descriptions, organization, geography, time or season, event or venues, sponsors or affiliations, and content in the messaging that the platform uses in a simulation response to a user interaction and to messaging accompanying shared media from the user interaction experience.
In certain embodiments, the platform can automatically reflect any changes to devices, configuration, descriptions, organization, geography, time or season, event or venues, sponsors or affiliations, and content in the contextual metadata that the platform appends to metrics, logs, and shared media from the user interaction experience.
Furthermore, the platform can automatically collect and append contextual metadata to metrics from user interaction based on the device type, the user interaction type, the intended simulation response to the user interaction, and a hierarchy of interaction importance derived from the intended simulation response and the number of interaction or media components triggered by the intended simulation response.
The platform can automatically collect and append contextual metadata to shared media from user interaction. Moreover, the platform can dynamically edit shared media to highlight specific user interaction moments based on a hierarchy of interaction importance derived from the intended simulation response to the user and the number of interaction or media components triggered by the intended simulation response.
The interactive experience processing system can be networked with other interactive experience processing systems, and the inputs detecting user interaction, as well as the outputs of componentized user interactions, the outputs of componentized media behaviors, the interactions between componentized media components, and the interactions between user interactive behavior components, can be transferred over a network.
In certain embodiments, the system can be networked with a system of servers and the interactive experience can be defined in such a way that the description of any specific instance of these experiences and their associated behavior, state, media, context, logic, assets, logs, metrics, metadata, shared media, and schedules can be synchronized on a network as a file, a set of files, or on a database.
Furthermore, in certain embodiments a system of servers can use the descriptions of the interactive experiences to distribute and synchronize these experiences for updates and purchases, customizations, retargeting, management of interactive processing systems, data-mining, and to schedule each instance of an interactive experience by modifying the definitions of the experience and by altering the associated media or altering the behavior parameters of the specific experience.
The system of servers can, in certain embodiments, use the descriptions of the interactive experiences to serve a web-based interface that allows a remotely located user to customize content, to customize parameters, to re-deploy a current instance of an interactive experience to another processing system, to license and purchase new interactive experiences, to apply updates to interactive experiences, to data-mine a single processing system or a network of processing systems, to schedule interactive experiences, to schedule content for interactive experiences, to monitor interactive experiences, to view logs or metrics for interactive experiences, to view shared media from interactive experiences, and to organize interactive experiences and processing systems in a hierarchy for networked interactive experience processing systems.
The application instance can include an application core that includes at least one interaction node and at least one media node. The at least one interaction node and the at least one media node can provide a description of at least one of behavior or rendering. The application core can further include application logic and application data.
The method can additionally include, at 1325, adding contextual information to the application instance based on an application configuration. The contextual information can include information regarding at least one of architecture, chips, memory, display characteristics, or interaction characteristics. Other contextual information is also permitted. The adding contextual information can include applying an event filter.
The method can also include, at 1330, regulating use of the application instance based on license information in the application descriptor. The method can further include, at 1335, synchronizing the application descriptor with at least one remote server.
The method can also include, at 1340, processing the at least one action of the user into an interaction input. The method can further include, at 1345, changing a type and quality of user interaction, interface, media, or sequence of the interactive application running on the user device based on device capabilities.
The method can additionally include, at 1350, providing a visual representation of the application descriptors and content management systems to end users.
The method can further include, at 1355, providing a scheduling application so that applications can be scheduled to run at specific times on an end-user device or network of end-user devices, with specific media elements or other specified time-targeted content. An example of such scheduling may be a hospital using the scheduling application to schedule physical therapy and entertainment.
The application instance of certain embodiments can be networked between users for remote interaction among users in, for example, multiplayer games, multi-person collaboration, multi-person learning, and the like.
The application descriptor can be configured to be encoded with one or more generic or specific interaction descriptions configured to be targeted to capabilities of user devices. The capabilities can include screen size, touch capability, mouse and keyboard capability, gesture sensing capability, voice recognition capability, and voice synthesis capability. For example, the application descriptor can be configured to target devices with small screens (such as mobile phones), devices with large screens (such as Internet-connected televisions), devices with touch capability (such as tablets and smartphones), devices with mouse and keyboard capability (such as laptop computers), devices with gesture sensing capability (such as devices equipped with 3D- or 2D cameras to sense user movement, as well as computer vision hardware, software, and algorithms), devices with voice recognition capabilities, and devices with voice synthesis capabilities.
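For illustration, targeting an interaction description to device capabilities might look like the following sketch; the DeviceCapabilities and InteractionDescription shapes, the capability values, and the selection logic are hypothetical assumptions rather than the claimed mechanism.

```typescript
// Illustrative sketch: choosing an interaction description from the app
// descriptor based on a device's capabilities (screen size, touch, gestures).
interface DeviceCapabilities {
  screenDiagonalInches: number;
  touch: boolean;
  gestureSensing: boolean;
}

interface InteractionDescription {
  name: string;
  requires?: Partial<DeviceCapabilities>; // specific descriptions declare requirements
}

const descriptions: InteractionDescription[] = [
  { name: "full-body-gesture", requires: { gestureSensing: true } },
  { name: "touch-drag", requires: { touch: true } },
  { name: "keyboard-fallback" }, // generic description with no requirements
];

function target(device: DeviceCapabilities): InteractionDescription {
  const matches = (d: InteractionDescription): boolean => {
    const req = d.requires ?? {};
    return (Object.keys(req) as (keyof DeviceCapabilities)[]).every(
      (key) => device[key] === req[key]
    );
  };
  // Pick the first description whose requirements the device satisfies,
  // falling back to the generic description.
  return descriptions.find(matches) ?? descriptions[descriptions.length - 1];
}

console.log(target({ screenDiagonalInches: 6, touch: true, gestureSensing: false }).name); // "touch-drag"
```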
The method can also include, at 1430, delivering the application descriptor in a high-level encoded format, which is interpreted by architecture-specific interpreters running on end-user devices. The high-level encoded format can be, for example, extensible markup language (XML), a scripting language, or another high-level descriptor.
The method can also include, at 1520, providing a content management system configured to customize the behavior and appearance of applications by modifying the application descriptor and associated media. The method can further include, at 1530, automatically updating user applications when the application descriptor or media is updated on the server.
The method can additionally include, at 1540, providing a visual representation of the application descriptor and content management system to an application creator. The method can further include, at 1550, providing a software development kit to developers for creating application descriptors.
The method can also include, at 1560, deploying the application instance to a consumer for individual purchase or free download as a context-aware application or application feature. Moreover, the method can include, at 1570, deploying the application instance to a business as a cloud-based service for delivering a context-aware application or application feature to end users as a free or paid service.
The end-user device 1620 can also include input devices 1626 and output devices 1628. The input devices 1626 can include devices such as a touch screen, camera, keyboard, mouse, or the like. The output devices 1628 can include such things as a display or speakers. Other input devices 1626 and output devices 1628 are also permitted.
The application descriptor can be aware of, or take into account, a context containing multiple devices. For example, certain embodiments can target apps into an environment with multiple simultaneous devices with different characteristics. Thus, if users have cell phones, tablets, large displays, and small displays all in one room, an application descriptor can be aware of the context containing multiple devices and have simultaneous, complementary behaviors on the different apps. For example, the application descriptor can permit picking photos on a phone and having those photos displayed on big screens. In another example, the application descriptor can permit choosing weapons on a phone that get mapped to a 3D avatar of the person as he is tracked by a gesture system and appears on a large display.
One having ordinary skill in the art will readily understand that the invention as discussed above may be practiced with steps in a different order, and/or with hardware elements in configurations which are different than those which are disclosed. Therefore, although the invention has been described based upon these preferred embodiments, it would be apparent to those of skill in the art that certain modifications, variations, and alternative constructions are possible, while remaining within the spirit and scope of the invention. In order to determine the metes and bounds of the invention, therefore, reference should be made to the appended claims.
The present application is related to and claims the benefit and priority of U.S. Patent Application No. 61/810,909 filed Apr. 11, 2013, which is hereby incorporated herein by reference.