The present invention relates to methods and systems for incorporating computer-controlled representations into a real world environment and, in particular, to methods and systems for using one or more mobile devices to interact with simulated phenomena.
Computerized devices, such as portable computers, wireless phones, personal digital assistants (PDAs), global positioning system devices (GPSes), etc., are becoming compact enough to be easily carried and used while a user is mobile. They are also becoming increasingly connected to communication networks over wireless connections and other portable communications media, allowing voice and data to be shared with other devices and other users while the devices are transported between locations. Interestingly enough, although such devices are also able to determine a variety of aspects of the user's surroundings, including the absolute location of the user and the relative position of other devices, these capabilities have not yet been well integrated into applications for these devices.
For example, applications such as games have been developed to be executed on such mobile devices. They are typically downloaded to the mobile device and executed solely from within that device. Alternatively, there are multi-player, network-based games, which allow a user to “log in” to a remotely controlled game from a portable or mobile device; typically, however, once the user has logged in, the narrative of such games is independent of any environment-sensing capabilities of the mobile device. At most, a user's presence may be indicated to other mobile device operators in an on-line game through the addition of an avatar that represents the user. Puzzle-type gaming applications have also been developed for use with some portable devices. These games detect a current location of a mobile device and deliver “clues” to help the user find a next physical item (as in a scavenger hunt).
GPS mobile devices have also been used with navigation system applications such as for nautical navigation. Typical of these applications is the idea that a user indicates to the navigation system a target location for which the user wishes to receive an alert. When the navigation system detects (by the GPS coordinates) that the location has been reached, the system alerts the user that the target location has been reached.
Computerized simulation applications have also been developed to simulate a nuclear, biological, or chemical weapon using a GPS. These applications mathematically represent, in a quantifiable manner, the behavior of dispersion of the weapon's damaging forces (for example, the detection area is approximated from the way the wind carries the material emanating from the weapon). A mobile device is then used to simulate detection of this damaging force when the device is transported to a location within the dispersion area.
None of these applications take advantage of or integrate a device's ability to determine a variety of aspects of the user's surroundings.
Embodiments of the present invention provide enhanced computer- and network-based methods and systems for interacting with simulated phenomena using mobile devices. Example embodiments provide a Simulated Phenomena Interaction System (“SPIS”), which enables users to enhance their real world activity with computer-generated and computer-controlled simulated entities, circumstances, or events, whose behavior is at least partially based upon the real world activity taking place. The Simulated Phenomena Interaction System is a computer-based environment that can be used to offer an enhanced gaming, training, or other simulation experience to users by allowing a user's actions to influence the behavior of the simulated phenomenon including the simulated phenomenon's simulated responses to interactions with the simulated phenomenon. In addition, the user's actions may influence or modify a simulation's narrative, which is used by the SPIS to assist in controlling interactions with the simulated phenomenon, thus providing an enriched, individualized, and dynamic experience to each user.
For the purposes of describing a Simulated Phenomena Interaction System, a simulated phenomenon includes any computer-software-controlled entity, circumstance, occurrence, or event that is associated with the user's current physical world, such as persons, objects, places, and events. For example, a simulated phenomenon may be a ghost, playmate, animal, particular person, house, thief, maze, terrorist, bomb, missile, fire, hurricane, tornado, contaminant, or other similar real or imaginary phenomenon, depending upon the context in which the SPIS is deployed. Also, a narrative is a sequence of events (a story, typically with a plot) which unfolds over time. For the purposes herein, a narrative is represented by data (e.g., the current state and behavior of the characters and the story) and logic which dictates the next “event” to occur based upon specified conditions. A narrative may be rich, such as an unfolding scenario with complex modeling capabilities that take into account physical or imaginary characteristics of a mobile device, simulated phenomena, and the like. Or, a narrative may be simpler, such as merely the unfolding of changes to the location of a particular simulated phenomenon over time.
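The narrative just described, data (current state) plus logic that dictates the next event from specified conditions, can be pictured as a small table-driven state machine. The following sketch is purely illustrative; all names and the simple condition/transition layout are assumptions for exposition, not the claimed implementation:

```python
# Hypothetical sketch of a narrative: data (state) plus event logic that
# selects the next "event" based upon specified conditions.

class Narrative:
    def __init__(self, initial_state, transitions):
        # transitions: list of (condition(state) -> bool, event, new_state)
        self.state = initial_state
        self.transitions = transitions

    def next_event(self):
        """Return the first event whose condition holds, advancing the state."""
        for condition, event, new_state in self.transitions:
            if condition(self.state):
                self.state = new_state
                return event
        return None  # no event fires under current conditions

# A minimal narrative: a ghost SP flees when the player gets close.
ghost_moves = Narrative(
    initial_state={"player_distance_m": 120},
    transitions=[
        (lambda s: s["player_distance_m"] < 50, "ghost_flees",
         {"player_distance_m": 200}),
        (lambda s: True, "ghost_waits", {"player_distance_m": 120}),
    ],
)
```

A "rich" narrative would replace the lambdas with complex models of the device, user, and phenomenon; the simplified narrative above merely updates an SP location over time, as the text contemplates.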
In one example embodiment, the Simulated Phenomena Interaction System comprises one or more functional components/modules that work together to support a single- or multi-player computer gaming environment that uses one or more mobile devices to “play” with one or more simulated phenomena according to a narrative. The narrative is potentially dynamic and influenced by players' actions, external personnel, as well as the phenomena being simulated. One skilled in the art will recognize that these components may be implemented in software or hardware or a combination of both. In another example embodiment, the Simulated Phenomena Interaction System comprises one or more functional components/modules that work together to provide a hands-on training environment that simulates real world situations, for example dangerous or hazardous situations, such as contaminant and airborne pathogen detection and containment, in a manner that safely allows operators trial experiences that more accurately reflect real world behaviors. In another example embodiment, the Simulated Phenomena Interaction System comprises one or more functional components/modules that work together to provide a commerce-enabled application that generates funds for profit and non-profit entities. For example, in one embodiment, spectators are defined that can participate in an underlying simulation experience by influencing or otherwise affecting interactions with the Simulated Phenomena Interaction System based upon financial contributions to a charity or to a for-profit entity.
For use in all such simulation environments, a Simulated Phenomena Interaction System comprises a mobile device or other mobile computing environment and a simulation engine. The mobile device is typically used by an operator to indicate interaction requests with a simulated phenomenon. The simulation engine responds to such indicated requests by determining whether the indicated interaction request is permissible and performing the interaction request if deemed permissible. The simulation engine comprises additional components, such as a narrative engine and various data repositories, which are further described below and which provide sufficient data and logic to implement the simulation experience. That is, the components of the simulation engine implement the characteristics and behavior of the simulated phenomena as influenced by a simulation narrative.
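The request/response split above, in which the mobile device indicates an interaction and the simulation engine decides whether it is permissible before performing it, might look roughly like the following. This is a sketch under assumed names (the dictionary-based request shape and the callback are illustrative assumptions, not the described implementation):

```python
# Illustrative sketch: the simulation engine receives an interaction request
# from a mobile device, checks whether it is permissible, and performs it.

def handle_interaction_request(request, permitted_interactions, perform):
    """Return the engine's response to one interaction request.

    request: dict with 'device_id', 'sp_id', and 'interaction' keys.
    permitted_interactions: set of (sp_id, interaction) pairs currently
        allowed by the narrative state.
    perform: callback that actually carries out a permitted interaction.
    """
    key = (request["sp_id"], request["interaction"])
    if key not in permitted_interactions:
        return {"status": "denied", "reason": "not permitted by narrative"}
    return {"status": "ok", "result": perform(request)}

# Example: only 'detect' is currently permitted against SP "ghost-1".
allowed = {("ghost-1", "detect")}
response = handle_interaction_request(
    {"device_id": "dev-7", "sp_id": "ghost-1", "interaction": "detect"},
    allowed,
    perform=lambda req: f"{req['sp_id']} detected",
)
```

In the full system the permissibility test and the performance of the interaction are driven by the narrative engine and data repositories described below, rather than a static set.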
In a hands-on training environment that simulates real world situations, such as a contaminant detection simulation system, the interaction requests and interaction responses processed by the mobile device are appropriately modified to reflect the needs of the simulation. For example, techniques of the Simulated Phenomena Interaction System may be used to provide training scenarios which address critical needs related to national security, world health, and the challenges of modern peacekeeping efforts. In one example embodiment, the SPIS is used to create a Biohazard Detection Training Simulator (BDTS) that can be used to train emergency medical and security personnel in the use of portable biohazard detection and identification units in a safe, convenient, affordable, and realistic environment.
This embodiment simulates the use of contagion detector devices that have been developed using new technologies to detect pathogens and contagions in a physical area. Example devices include BIOHAZ, FACSCount, LUMINEX 100, ANALYTE 2000, BioDetector (BD), ORIGEN Analyzer, and others, as described by the Bio-Detector Assessment Report prepared by the U.S. Army Edgewood Chemical, Biological Center (ERT Technical Bulletin 2001-4), which is incorporated herein by reference in its entirety. Since it is prohibitively expensive to install such devices in advance everywhere they may be needed in the future, and since removing them from commission to train emergency personnel is not practical, BDTSs can be substituted for training purposes. These BDTSs need to simulate the pathogen and contagion detection technology as well as the calibration of a real contagion detector device and any substances needed to calibrate or operate the device. In addition, the narrative needs to be constructed to simulate field conditions and provide guidance to increase the awareness of proper personnel protocol when hazardous conditions exist.
In addition to gaming and hazardous substance training simulators, one skilled in the art will recognize that the techniques of the Simulated Phenomena Interaction System may be useful to create a variety of other simulation environments, including response training environments for other naturally occurring or man-made phenomena, for example, earthquakes, floods, hurricanes, tornados, bombs, and the like. Also, these techniques may be used to enhance real world experiences with more “game-like” features. For example, a SPIS may be used to provide computerized (and narrative-based) routing in an amusement park or other facility with rides so that a user's experience is optimized to frequent rides with the shortest waiting times. In this scenario, the SPIS acts as a “guide” by placing SPs in locations (relative to the user's physical location in the park) that are strategically located relative to the desired physical destination. The narrative, as evidenced by the SPs' behavior and responses, encourages the user to go after the strategically placed SPs. The user is thus “led” by the SPIS to the desired physical destination and encouraged to engage in desired behavior (such as paying for the ride) by being “rewarded” by the SPIS according to the narrative (such as becoming eligible for some real world prize once the state of the mobile device is shown to a park operator). Many other gaming, training, and computer-aided learning experiences can be similarly presented and supported using the techniques of a Simulated Phenomena Interaction System.
Any such SPIS game (or other SPIS simulation scenario) can be augmented by placing the game in a commerce-enabled environment that integrates with the SPIS game through defined SPIS interfaces and data structures. For example, with the inclusion of additional modules and the use of a financial transaction system (such as those systems known in the art that are available to authorize and authenticate financial transactions over the Internet), spectators of various levels can affect, for a price, the interactions of a game in progress. The price paid may go to a designated charitable organization or may provide direct payment to the game provider or some other profit-seeking entity, depending upon how the commerce-enabled environment is deployed. An additional type of SPIS participant (not the operator of the mobile device), called a “spectator,” is defined. A spectator, depending upon the particular simulation scenario, authentication, etc., may have different access rights that designate what data is viewable by the spectator and which parts of the SPIS scenario or underlying environment may be affected, and in what manner. A spectator's ability to affect the simulation scenario or assist a mobile device operator is typically in proportion to the price paid. In addition, a spectator may be able to provide assistance to an individual participant or a team. For example, a narrative “hint” may be provided to the designated operator of a mobile device (the “game participant”) in exchange for the receipt of funds from the spectator. Further, the price of such assistance may vary according to the current standing of the game participant relative to the competition or some level to be attained. Thus, the spectator is given access to such information to facilitate a contribution decision.
Different “levels” of spectators may be defined, for example, by specifying a plurality of “classes” (as in the object-oriented term, or equivalents thereto) of spectators that own or inherit a set of rights. These rights dictate what types of data are viewable from, for example, the SPIS data repositories. The simulation engine is then responsible for abiding by the specified access right definitions once a spectator is recognized as belonging to a particular spectator class. One skilled in the art will recognize that other simulation participants, such as a game administrator, an operator (game participant), or a member of a team, can also be categorized as belonging to a participant level that defines the participant's access rights.
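Using the object-oriented approach just mentioned, spectator levels can be modeled as classes that inherit and extend a set of viewable-data rights. The class names and data items below are illustrative assumptions, sketching only the inheritance mechanism the text describes:

```python
# Hypothetical participant "classes" whose access rights are inherited,
# mirroring the object-oriented levels described in the text.

class Participant:
    viewable = {"own_score", "own_device_state"}

class Spectator(Participant):
    # Inherits the Participant's rights and adds read access to standings.
    viewable = Participant.viewable | {"team_standings"}

class PremiumSpectator(Spectator):
    # A higher contribution level may also view SP locations.
    viewable = Spectator.viewable | {"sp_locations"}

def may_view(role_class, datum):
    """The simulation engine abides by the role's declared rights."""
    return datum in role_class.viewable
```

The simulation engine would consult a check like `may_view` before serving repository data to a recognized participant of a given class.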
In one example embodiment of a commerce-enabled environment, five classes of spectators (roles) are defined as having the following access rights:
(1) Participant (operator(s) of a mobile device):
With the use of a commerce-enabled environment, spectators can indirectly participate in the simulation in a manner that enhances the simulation environment, while providing a source of income to the non-profit or profit-based recipient of the funds. In another example, spectators place (and pay for) wagers on simulation participants (e.g., game players) or other aspects of the underlying simulation scenario, and the proceeds are distributed accordingly.
For use in all such simulation environments, a Simulated Phenomena Interaction System comprises a mobile device or other mobile computing environment and a simulation engine.
The simulation engine may further comprise a narrative with data and event logic, a simulated phenomena characterizations data repository, and a narrative engine (e.g., to implement a state machine for the simulation). The narrative engine uses the narrative and the simulated phenomena characterizations data repository to determine whether an indicated interaction is permissible, and, if so, to perform that interaction with a simulated phenomenon. In addition, the simulation engine may comprise other data repositories or store other data that characterizes the state of the mobile device, information about the operator, the state of the narrative, etc.
Accordingly, the simulation engine 610 may comprise a number of other components for processing interaction requests and for implementing the characterizations and behavior of simulated phenomena. For example, simulation engine 610 may comprise a narrative engine 612, an input/output interface 611 for interacting with the mobile devices 601-604 and for presenting a standardized interface to control the narrative engine and/or data repositories, and one or more data repositories 620-624. In what might be considered a more minimally configured simulation engine 610, the narrative engine 612 interacts with a simulated phenomena attributes data repository 620 and a narrative data and logic data repository 621. The simulated phenomena attributes data repository 620 typically stores information that is used to characterize and implement the “behavior” of simulated phenomena (responses to interaction requests). For example, attributes may include values for location, orientation, velocity, direction, acceleration, path, size, duration, schedule, type, elasticity, mood, temperament, image, ancestry, or any other seemingly real world or imaginary characteristic of simulated phenomena. The narrative data and logic data repository 621 stores narrative information and event logic which is used to determine a next logical response to an interaction request. The narrative engine 612 uses the narrative data and logic data repository 621 and the simulated phenomena attributes data repository 620 to determine whether an indicated interaction is permissible, and, if so, to perform that interaction with the simulated phenomena. The narrative engine 612 then communicates a response or the result of the interaction to a mobile device, such as devices 601-604, through the I/O interface 611. I/O interface 611 may contain, for example, support tools and protocols for interacting with a wireless device over a wireless network.
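A simulated phenomena attributes record of the kind stored in repository 620 might take the following shape. The attribute names come from the list above; the dictionary layout and the simple mood-dependent response are illustrative assumptions only:

```python
# Illustrative shape of a simulated-phenomena attributes record, keyed by
# SP identifier. Attribute names follow the examples in the text; the
# storage layout is an assumption for this sketch.

sp_attributes = {
    "ghost-1": {
        "location": (47.61, -122.33),   # a real-world lat/lon
        "velocity": 0.0,
        "mood": "playful",              # an imaginary attribute
        "elasticity": 0.8,
    },
}

def respond_to(sp_id, interaction, repo=sp_attributes):
    """Minimal behavior lookup: the SP's response depends on its attributes."""
    sp = repo[sp_id]
    if interaction == "detect":
        return "faint reading" if sp["mood"] == "shy" else "strong reading"
    return "no response"
```

In a full system the narrative data and logic repository, not a hard-coded conditional, would supply the event logic that maps attributes to responses.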
In a less minimal configuration, the simulation engine 610 may also include one or more other data repositories 622-624 for use with different configurations of the narrative engine 612. These repositories may include, for example, a user characteristics data repository 622, which stores characterizations of each user who is interacting with the system; an environment characteristics data repository 624, which stores values sensed by sensors within the real world environment; and a device attributes data repository 623, which may be used to track the state of each mobile device being used to interact with the SPs.
One skilled in the art will recognize that many different ways are available to determine or calculate values for the attributes stored in these repositories, including, for example, determining a pre-defined constant value; evaluating a mathematical formula, including a value that is based upon the values of other attributes; human input; real-world data sampling; etc. In addition, the same or different determination techniques may be used for each of the different types of data repositories (e.g., simulated phenomena, device, user, environment, etc.), varied on a per attribute basis, per device, per SP, etc. Many other arrangements are possible and contemplated.
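The per-attribute determination techniques listed above (a pre-defined constant, a formula over other attributes, human input, real-world sampling) can be sketched as a strategy registry, where each attribute carries its own value-determination function. The attribute names and registry shape are assumptions for illustration:

```python
# Sketch of per-attribute determination techniques: each attribute value
# may come from a constant, a formula over other attributes, or a
# real-world sample, as the text enumerates.

attribute_strategies = {
    "size": lambda attrs: 3.0,                           # pre-defined constant
    "agitation": lambda attrs: attrs["velocity"] * 2.0,  # formula over others
    "temperature": lambda attrs: attrs["_sensor"](),     # real-world sampling
}

def resolve(name, attrs):
    """Compute an attribute value using its registered strategy."""
    return attribute_strategies[name](attrs)

# Example state: a current velocity plus a stand-in sensor callback
# (human input would be just another callback in this scheme).
attrs = {"velocity": 1.5, "_sensor": lambda: 21.0}
```

Because the strategy is looked up per attribute, different techniques can be mixed per SP, per device, or per repository, matching the variability the paragraph describes.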
One skilled in the art will recognize that many configurations are possible with respect to the narrative engine 612 and the various data repositories 620-624. These configurations may vary with respect to how much logic and data is contained in the narrative engine 612 itself versus stored in each data repository and whether the event logic (e.g., in the form of a narrative state machine) is stored in data repositories, as for example stored procedures, or is stored in other (not shown) code modules or as mathematical function definitions. In the embodiment exemplified in
Models 704-706 are used to implement the logic (that affects event flow and attribute values) that governs the various entities being manipulated by the system, instead of placing all of the logic into the narrative engine 702, for example. Distributing the logic into separate models allows for more complex modeling of the various entities manipulated by the simulation engine 701, such as, for example, the simulated phenomena, the narrative, and representations of the environment, users, and devices. For example, a module or subcomponent that models the simulated phenomena, the phenomenon model 704, is shown separately connected to the plurality of data repositories 708-712. This allows separate modeling of the same type of SP, depending, for example, on the mobile device, the user, the experience of the user, sensed real world environment values for a specific device, etc. Having a separate phenomenon model 704 also makes it easy to test the environment and to implement, for example, new scenarios by simply replacing the relevant modeling components. It also allows complex modeling behaviors to be implemented more easily, such as SP attributes whose values require a significant amount of computing resources to calculate; new behaviors to be dynamically added to the system (perhaps, even, on a random basis); multi-user interaction behavior (similar to a transaction processing system that coordinates between multiple users interacting with the same SP); algorithms, such as artificial intelligence based algorithms, which are better executed on a distributed server machine; or other complex requirements.
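The replaceable-model idea above can be sketched as a small interface: the engine delegates SP behavior to a model object, and swapping in a different model changes the scenario without touching the engine. The class and attribute names here are illustrative assumptions:

```python
# Sketch of a pluggable phenomenon model: the engine delegates SP behavior
# to a model component that can be replaced per scenario.

class PhenomenonModel:
    """Base interface for modeling an SP's response to an interaction."""
    def respond(self, sp_state, interaction):
        raise NotImplementedError

class GhostModel(PhenomenonModel):
    def respond(self, sp_state, interaction):
        if interaction == "detect" and sp_state.get("visible", True):
            return "apparition sighted"
        return "nothing here"

class ContagionModel(PhenomenonModel):
    def respond(self, sp_state, interaction):
        if interaction == "detect":
            return f"pathogen level {sp_state['concentration']}"
        return "no reading"

def run_interaction(model, sp_state, interaction):
    # The engine code stays the same; only the model component is swapped.
    return model.respond(sp_state, interaction)
```

Replacing `GhostModel` with `ContagionModel` converts a gaming scenario into a biohazard-training scenario, which is the testing and scenario-swapping benefit the paragraph attributes to a separate phenomenon model.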
Also, for example, the environment model 705 is shown separately connected to the plurality of data repositories 708-712. Environment model 705 may comprise state and logic that dictates how attribute values that are sensed from the environment influence the simulation engine responses. For example, the type of device requesting the interaction, the user associated with the current interaction request, or some such state may potentially influence how a sensed environment value affects an interaction response or an attribute value of an SP.
Similarly, the narrative logic model 706 is shown separately connected to the plurality of data repositories 708-712. The narrative logic model 706 may comprise narrative logic that determines the next event in the narrative but may vary the response from user to user, device to device, etc., as well as based upon the particular simulated phenomenon being interacted with.
The content of the data repositories and the logic necessary to model the various aspects of the system essentially defines each possible narrative, and hence it is beneficial to have an easy method for tailoring the SPIS for a specific scenario. In one embodiment, the various data repositories and/or the models are populated using an authoring system.
When a Simulated Phenomena Interaction System is integrated into a commerce-enabled scenario, additional components are present to handle commerce transactions and interfacing to the various other “participants” of the simulation scenario, for example, spectators, game administrators, contagion experts, etc.
In
For example, after viewing the progress of the underlying simulation scenario via spectator support module 2406, the spectator 2403 may choose to support a team that the spectator 2403 hopes will win. (In a commerce-enabled wagering environment, the spectator 2403 may choose to place “bets” on a team, a device operator, or, for example, a simulated phenomenon that the spectator 2403 believes will win.) Accordingly, spectator 2403 “orders” an assist via spectator support module 2406 by paying for it via commerce support module 2431. Once a financial transaction has been authenticated and verified (using well-known transaction processing systems such as credit card servers on the Internet), appropriate identifying data is placed by the commerce support module 2431 into the commerce data repository 2430, where it can be retrieved by the various SPIS support modules 2404-2406. The spectator support module then informs the simulation engine 2410 of the donation and instructs the simulation engine 2410 to provide assistance (for example, through a hint to the designated mobile device operator) or other activity.
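The assist-purchase flow just described (verify the payment, record it in the commerce data repository, then instruct the engine to deliver the hint) can be sketched as follows. The function names and the list-based stand-ins for the repository and the engine are hypothetical; a real deployment would use an external transaction-processing service:

```python
# Sketch of the assist-purchase flow: payment is verified, a record lands
# in a commerce data store, and the engine is told to deliver a hint.
# All names and data shapes are illustrative assumptions.

commerce_repository = []   # stand-in for the commerce data repository
delivered_hints = []       # stand-in for engine-delivered assistance

def verify_payment(amount):
    # Stand-in for a real transaction-processing service.
    return amount > 0

def order_assist(spectator_id, operator_id, amount):
    """Spectator pays; on success the engine grants the operator a hint."""
    if not verify_payment(amount):
        return False
    commerce_repository.append(
        {"spectator": spectator_id, "operator": operator_id, "amount": amount}
    )
    delivered_hints.append((operator_id, "hint: the SP is nearby"))
    return True
```

In the described system the repository write is performed by the commerce support module and the hint delivery by the simulation engine, with the spectator support module coordinating between them.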
In some scenarios, a spectator 2403 may be permitted to modify certain simulation data stored in the data repositories 2420-2422. Such capabilities are determined by the capabilities offered through the API 2411, the narrative, and the manner in which the data is stored.
In one arrangement, the SPIS support modules 2404-2406 interface with the SPIS data repositories 2420-2422 via the narrative engine 2412. One skilled in the art will recognize that rather than interface through the narrative engine 2412, other embodiments are possible that interface directly through data repositories 2420-2422. Example SPIS data repositories that can be viewed and potentially manipulated by the different participants 2401-2403 include the simulated phenomena attributes data repository 2420, the narrative data & logic data repository 2421, and the user (operator) characteristics data repository 2422. Other SPIS data repositories, although not shown, may be similarly integrated.
In some scenarios, a spectator is permitted to place wagers on particular device operators, teams, or simulated phenomena. Further, in response to such wagers, the narrative may influence aspects of the underlying simulation scenario. In such cases the commerce support 2431 includes well-known wager-related support services as well as general commerce transaction support. One skilled in the art will recognize that the possibilities abound and that the modules depicted in
Regardless of the internal configurations of the simulation engine, the components of the Simulated Phenomena Interaction System process interaction requests in a similar overall functional manner.
When the simulation engine is used in a commerce-enabled environment, such as that shown in
Although the techniques of the Simulated Phenomena Interaction System are generally applicable to any type of entity, circumstance, or event that can be modeled to incorporate a real world attribute value, the phrase “simulated phenomenon” is used generally to imply any type of imaginary or real-world place, person, entity, circumstance, event, or occurrence. In addition, one skilled in the art will recognize that the phrase “real-world” means in the physical environment or something observable as existing, whether directly or indirectly. Also, although the examples described herein often refer to an operator or user, one skilled in the art will recognize that the techniques of the present invention can also be used by any entity capable of interacting with a mobile environment, including a computer system or other automated or robotic device. In addition, the concepts and techniques described are applicable to other mobile devices and to means of communication other than wireless communications, including other types of phones, personal digital assistants, portable computers, infrared devices, etc., whether they exist today or have yet to be developed. Essentially, the concepts and techniques described are applicable to any mobile environment. Also, although certain terms are used primarily herein, one skilled in the art will recognize that other terms could be used interchangeably to yield equivalent embodiments and examples. In addition, terms may have alternate spellings which may or may not be explicitly mentioned, and one skilled in the art will recognize that all such variations of terms are intended to be included.
Example embodiments described herein provide applications, tools, data structures and other support to implement a Simulated Phenomena Interaction System to be used for games, interactive guides, hands-on training environments, and commerce-enabled simulation scenarios. One skilled in the art will recognize that other embodiments of the methods and systems of the present invention may be used for other purposes, including, for example, traveling guides, emergency protocol evaluation, and for more fanciful purposes including, for example, a matchmaker (SP makes introductions between people in a public place), traveling companions (e.g., a bus “buddy” that presents SPs to interact with to make an otherwise boring ride potentially more engaging), a driving pace coach (SP recommends what speed to attempt to maintain to optimize travel in current traffic flows), a wardrobe advisor (personal dog robot has SP “personality,” which accesses current and predicted weather conditions and suggests attire), etc. In the following description, numerous specific details are set forth, such as data formats and code sequences, etc., in order to provide a thorough understanding of the techniques of the methods and systems of the present invention. One skilled in the art will recognize, however, that the present invention also can be practiced without some of the specific details described herein, or with other specific details, such as changes with respect to the ordering of the code flow.
A variety of hardware and software configurations may be used to implement a Simulated Phenomena Interaction System. A typical configuration, as illustrated with respect to
In the embodiment shown, computer system 1000 comprises a computer memory (“memory”) 1001, an optional display 1002, a Central Processing Unit (“CPU”) 1003, and Input/Output devices 1004. The simulation engine 1010 of the Simulated Phenomena Interaction System (“SPIS”) is shown residing in the memory 1001. The components of the simulation engine 1010 preferably execute on CPU 1003 and manage the generation of and interaction with simulated phenomena, as described in previous figures. Other downloaded code 1030 and potentially other data repositories 1030 also reside in the memory 1001, and preferably execute on one or more CPUs 1003. In a typical embodiment, the simulation engine 1010 includes a narrative engine 1011, an I/O interface 1012, and one or more data repositories, including simulated phenomena attributes data repository 1013, narrative data and logic data repository 1014, and other data repositories 1015. In embodiments that include separate modeling components, these components would additionally reside in the memory 1001 and execute on the CPU 1003.
In an example embodiment, components of the simulation engine 1010 are implemented using standard programming techniques. One skilled in the art will recognize that the components lend themselves to object-oriented, distributed programming, since the values of the attributes and behavior of simulated phenomena can be individualized and parameterized to account for each device, each user, real world sensed values, etc. However, any of the simulation engine components 1011-1015 may be implemented using more monolithic programming techniques as well. In addition, programming interfaces to the data stored as part of the simulation engine 1010 can be made available by standard means such as through C, C++, C#, and Java APIs, through markup languages such as XML, or through web servers supporting such interfaces. The data repositories 1013-1015 are preferably implemented for scalability reasons as databases rather than as text files; however, any storage method for storing such information may be used. In addition, behaviors of simulated phenomena may be implemented as stored procedures, or as methods attached to SP “objects,” although other techniques are equally effective.
One skilled in the art will recognize that the simulation engine 1010 and the SPIS may be implemented in a distributed environment that is comprised of multiple, even heterogeneous, computer systems and networks. For example, in one embodiment, the narrative engine 1011, the I/O interface 1012, and each data repository 1013-1015 are all located in physically different computer systems, some of which may be on a client mobile device as described with reference to
Specifically,
Alternatively, the client device may be implemented as a fat client mobile device as shown in
Different configurations and locations of programs and data are contemplated for use with the techniques of the present invention. In example embodiments, these components may execute concurrently and asynchronously; thus, the components may communicate using well-known message passing techniques. One skilled in the art will recognize that equivalent synchronous embodiments are also supported by an SPIS implementation, especially in the case of a fat client architecture. Also, other steps could be implemented for each routine, and in different orders, and in different routines, yet still achieve the functions of the SPIS.
As described in
As indicated in
Specifically, in step 1401, the routine determines whether the detector is working, and, if so, continues in step 1404 else continues in step 1402. This determination is conducted from the point of view of the narrative, not the mobile device (the detector). In other words, although the mobile device may be working correctly, the narrative may dictate a state in which the client device (the detector) appears to be malfunctioning. In step 1402, the routine, because the detector is not working, determines whether the mobile device has designated or previously indicated in some manner that the reporting of status information is desirable. If so, the routine continues in step 1403 to report status information to the mobile device (via the narrative engine), and then returns. Otherwise, the routine simply returns without detection and without reporting information. In step 1404, when the detector is working, the routine determines whether a “sensitivity function” exists for the particular interaction routine based upon the designated SP identifier, device identifier, the type of attribute that the detection is detecting (the type of detection), and similar parameters.
A “sensitivity function” is the generic name for a routine, associated with the particular interaction requested, that determines whether an interaction can be performed and, in some embodiments, performs the interaction if it can be performed. That is, a sensitivity function determines whether the device is sufficiently “sensitive” (in “range” or some other attribute value) to interact with the SP with regard specifically to the designated attribute in the manner requested. For example, there may exist many detection routines available to detect whether a particular SP should be considered “detected” relative to the current characteristics of the requesting mobile device. The detection routine that is eventually selected as the “sensitivity function” to invoke at that moment may be particular to the type of device, some other characteristic of the device, the simulated phenomena being interacted with, or another consideration, such as an attribute value sensed in the real world environment, here shown as “attrib_type.” For example, the mobile device may indicate the need to “detect” an SP based upon a proximity attribute, or an agitation attribute, or a “mood” attribute (an example of a completely arbitrary, imaginary attribute of an SP). The routine may determine which sensitivity function to use in a variety of ways. The sensitivity functions may be stored, for example, as stored procedures in the simulated phenomena characterizations data repository, such as data repository 620 in
Once the appropriate sensitivity function is determined, then the routine continues in step 1405 to invoke the determined detection sensitivity function. Then, in step 1406, the routine determines, as a result of invoking the sensitivity function, whether the simulated phenomenon was considered detectable, and, if so, continues in step 1407, otherwise continues in step 1402 (to optionally report non-success). In step 1407, the routine indicates (in a manner that is dependent upon the particular SP or other characteristics of the routine) that the simulated phenomenon is present (detected) and modifies or updates any data repositories and state information as necessary to update the state of the SP, narrative, and potentially the simulation engine's internal representation of the mobile device, to consider the SP “detected.” In step 1408, the routine determines whether the mobile device has previously requested to be in a continuous detection mode, and, if so, continues in step 1401 to begin the detection loop again, otherwise returns.
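By way of a non-limiting illustration, the detection flow of steps 1401-1408 might be sketched as follows. The names used (the `narrative` and `repos` interfaces, `get_sensitivity_function`, `mark_detected`) are hypothetical stand-ins and not part of any specific SPIS implementation:

```python
# Hypothetical sketch of the detection interaction routine (steps 1401-1408).
# All interfaces (narrative engine, repositories, sensitivity lookup) are
# illustrative assumptions.

def detect(sp_id, device_id, attrib_type, narrative, repos,
           report_status=False, continuous=False):
    """Attempt to detect the simulated phenomenon identified by sp_id."""
    while True:
        # Step 1401: whether the detector "works" is a narrative decision,
        # not a property of the physical device.
        if not narrative.detector_working(device_id):
            # Steps 1402-1403: optionally report status, then return.
            if report_status:
                narrative.report_status(device_id, "detector malfunction")
            return None
        # Step 1404: look up a sensitivity function for this SP, device,
        # interaction type, and attribute type.
        fn = repos.get_sensitivity_function(sp_id, device_id, "detect", attrib_type)
        if fn is None:
            if report_status:
                narrative.report_status(device_id, "no sensitivity function")
            return None
        # Steps 1405-1406: invoke the function and test the result.
        if fn(sp_id, device_id):
            # Step 1407: mark the SP "detected" and update state.
            repos.mark_detected(sp_id, device_id)
            result = True
        else:
            if report_status:
                narrative.report_status(device_id, "not detected")
            result = False
        # Step 1408: loop only in continuous detection mode.
        if not continuous:
            return result
```

The measurement routine of steps 2001-2008 follows the same skeleton, substituting a measurement sensitivity function for the detection one.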
One skilled in the art will recognize that other functionality can be added and is contemplated to be added to the detection routine and the other interaction routines. For example, functions for adjustment (real or imaginary) of the mobile device from the narrative's perspective and functions for logging information could be easily integrated into these routines.
As mentioned, several different techniques can be used to determine which particular sensitivity function to invoke for a particular interaction request because, for example, there may be different sensitivity calculations based upon the type of interaction and the type of attribute to be interacted with. A separate sensitivity function may also exist on a per-attribute basis for the particular interaction on a per-simulated phenomenon basis (or additionally per device, per user, etc.). Table 1 shows the use of a single overall routine to retrieve multiple sensitivity functions for the particular simulated phenomenon and device combination, one for each attribute being interacted with. (Note that multiple attributes may be specified in the interaction request. Interaction may be a complex function of multiple attributes as well.) Thus, for example, if for a particular simulated phenomenon there are four attributes that need to be detected in order for the SP to be detected from the mobile device perspective, then there may be four separate sensitivity functions that are used to determine whether each attribute of the SP is detectable at that point. Note that, as shown in line 4, the overall routine can also include logic to invoke the sensitivity functions on the spot, as opposed to invoking the function as a separate step as shown in
Table 2 is an example sensitivity function that is returned by the routine GetSensitivityFunctionForType shown in Table 1 for a detection interaction for a particular simulated phenomenon and device pair as would be used with an agitation characteristic (attribute) of the simulated phenomenon. In essence, the sensitivity agitation function retrieves an agitation state variable value from the SP characterizations data repository, retrieves a current position of the SP from the SP characterization data repository, and retrieves a current position of the device from the device characterization data repository. The current position of the SP is typically an attribute of the SP, or calculated from such attribute. Further, it may be a function of the current actual location of the device. Note that the characteristics of the SP (e.g., the agitation state) are dependent upon which SP is being addressed by the interaction request, and may also be dependent upon the particular device interacting with the particular SP and/or the user that is interacting with the SP. Once the values are retrieved, the example sensitivity function then performs a set of calculations based upon these retrieved values to determine whether, based upon the actual location of the device relative to the programmed location of the SP, the SP agitation value is “within range.” If so, the function sends back a status of detectable; otherwise, it sends back a status of not detectable.
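Without reproducing Table 2, an agitation sensitivity function of the kind described might be sketched as below. The repository accessors, the `max_range` parameter, and the particular "within range" calculation are illustrative assumptions only:

```python
import math

# Illustrative sketch of an agitation sensitivity function, in the spirit of
# the function described for Table 2. Repository accessors are hypothetical.

def agitation_sensitivity(sp_repo, device_repo, sp_id, device_id,
                          max_range=50.0):
    """Return True ("detectable") when the SP's agitation value is "within
    range" given the device's actual distance from the SP's programmed
    location."""
    agitation = sp_repo.get_attribute(sp_id, "agitation")  # SP state variable
    sp_x, sp_y = sp_repo.get_position(sp_id)               # SP's apparent position
    dev_x, dev_y = device_repo.get_position(device_id)     # device's sensed position
    distance = math.hypot(sp_x - dev_x, sp_y - dev_y)
    # Example calculation: a more agitated SP is detectable from farther away.
    effective_range = max_range * agitation
    return distance <= effective_range
```

A different attribute (e.g., "mood") would simply substitute its own stored values and calculation while presenting the same call interface to the overall routine.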
As mentioned earlier, the response to each interaction request is in some way based upon a real world physical characteristic, such as the physical location of the mobile device submitting the interaction request. The real world physical characteristic may be sent with the interaction request, sensed from a sensor in some other way or at some other time. Responses to interaction requests can also be based upon other real world physical characteristics, such as physical orientation of the mobile device—e.g., whether the device is pointing at a particular object or at another mobile device, or, for example, how fast the operator of the device is moving (velocity) or the direction of travel (bearing). One skilled in the art will recognize that many other characteristics can be incorporated in the modeling of the simulated phenomena, provided that the physical characteristics are measurable and taken into account by the narrative or models incorporated by the simulation engine. For the purposes of ease of description, a device's physical location will be used as exemplary of how a real world physical characteristic is incorporated in SPIS.
A mobile device, depending upon its type, is capable of sensing its location in a variety of ways, some of which are described here. One skilled in the art will recognize that there are many methods for sensing location, all of which are contemplated for use with the SPIS. Once the location of the device is sensed, this location can in turn be used to model the behavior of the SP in response to the different interaction requests. For example, the position of the SP relative to the mobile device may be dictated by the narrative to always be at a set distance from the current physical location of the user's device until the user enters a particular spot, a room, for example. Alternatively, an SP may “jump away” (exhibiting behavior similar to trying to swat a fly) each time the physical location of the mobile device is computed to “coincide” with the apparent location of the SP. To perform these types of behaviors, the simulation engine typically models both the apparent location of the SP and the physical location of the device based upon sensed information.
The location of the device may be an absolute location as available with some devices, or may be computed by the simulation engine (modeled) based upon methods like triangulation techniques, the device's ability to detect electromagnetic broadcasts, and software modeling techniques such as data structures and logic that model latitude, longitude, altitude, velocity, and bearing of the device, etc. Examples of devices that can be modeled in part based upon the device's ability to detect electromagnetic broadcasts include cell phones such as the Samsung SCH W300 with the Verizon™ network and the Motorola V710, which can operate using terrestrial electromagnetic broadcasts of cell phone networks or using the electromagnetic broadcasts of satellite GPS systems, and other “location aware” cell phones, wireless networking receivers, radio receivers, photo-detectors, radiation detectors, heat detectors, and magnetic orientation or flux detectors. Examples of devices that can be modeled in part based upon triangulation techniques include GPS devices, Loran devices, and some E911 cell phones.
In the example shown in
In addition, by controlling the apparent position of an SP, the narrative may in effect “guide” the user of the mobile device to a particular location. For example, the narrative can indicate the position of an SP at a continuous relative distance to the (indicator of the) user, provided the location of the mobile device travels through and to the region desired by the narrative, for example along a path from region #2, through region #5, to region #1. If the mobile device location instead veers from this path (travels from region #2 directly to region #1, bypassing region #5), the narrative can detect this situation and communicate with the user, for example indicating that the SP has become further away or undetectable (the user might be considered “lost”).
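The path-guidance check described above can be reduced to a simple prefix test over the sequence of regions the device has visited. The region names and desired sequence below are illustrative, not prescribed by the SPIS:

```python
# Hypothetical sketch of the narrative's path-guidance check: the device is
# expected to traverse region #2, then #5, then #1. Deviating from that
# sequence marks the user "lost". Region names are illustrative.

DESIRED_PATH = ["region2", "region5", "region1"]

def guidance_state(visited_regions):
    """Return 'on path' while the visited regions form a prefix of the
    desired path, 'arrived' when the full path has been traversed, and
    'lost' otherwise."""
    if visited_regions == DESIRED_PATH[:len(visited_regions)]:
        if len(visited_regions) == len(DESIRED_PATH):
            return "arrived"
        return "on path"
    return "lost"
```

When the state becomes "lost", the narrative engine would then report the SP as farther away or undetectable, as described above.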
A device might also be able to sense its location in the physical world based upon a signal “grid” as provided, for example, by GPS-enabled systems. A GPS-enabled mobile device might be able to sense not only that it is in a physical region, such as receiving transmissions from transmitter #5, but also that it is in a particular rectangular grid within that region, as indicated by rectangular regions #6-9. This information may be used to give a GPS-enabled device a finer degree of detection than that available from cell phones, for example. One example of such a device is a Compaq iPaq H3850, with a Sierra Wireless AirCard 300 using AT&T Wireless Internet Service and a Transplant Computing GPS card. In addition, cell phones that use the Qualcomm MSM6100 chipset have the same theoretical resolution as any other GPS. Also, an example of a fat-client mobile device is the Garmin iQue 3600, which is a PDA with GPS capability.
Other devices present more complicated location modeling considerations and opportunities for integration of simulated phenomena into the real world. For example, a wearable display device, such as Wireless 3D Glasses from the eDimensional company, allows a user to “see” simulated phenomena in the same field of vision as real world objects, thus providing a kind of “augmented reality.”
PDAs with IRDA (infrared) capabilities, for example, a Tungsten T PDA manufactured by Palm Computing, also present more complicated modeling considerations and additionally allow for the detection of device orientation. Though this PDA supports multiple wireless networking functions (e.g., Bluetooth and a Wi-Fi expansion card), the IRDA version utilizes its infrared port for physical location and spatial orientation determination. By pointing the infrared transmitter at an infrared transceiver (which may be an installed transceiver, such as in a wall in a room, or another infrared device, such as another player using a PDA/IRDA device), the direction the user is facing can be supplied to the simulation engine for modeling as well. This measurement may result in more “realistic” behavior in the simulation. For example, the simulation engine may be able to better detect when a user has actually pointed the device at an SP to capture it. Similarly, the simulation engine can also better detect two users pointing their respective devices at each other (for example, in a simulated battle). Thus, depending upon the device, it may be possible for the SPIS to produce SPs that respond to orientation characteristics of the mobile device as well as location.
One skilled in the art will recognize that, in general, other devices with other types of location detection can also be incorporated into SPIS in a similar manner to incorporating detection using PDAs with IRDA. Many types of local location determination (determination local to the mobile device) can be employed. For example, a mobile device enhanced with the ability to detect radio frequency, ultrasonic, or other broadcast identification can also be incorporated. Transmitters that broadcast such signals can be placed in an environment similar to that illustrated in
One skilled in the art will also recognize that there are inherent inconsistencies and limitations as to the accuracy of sampling data from all such devices. For example, broadcasting methodologies used in location determination as described above can be blocked, reflected, or distorted by the environment or other objects within the environment. Preferably, the narrative handles such errors, inconsistencies, and ambiguities in a manner that is consistent with the narrative context. For example, in the gaming system called “Spook” described earlier, when the environmental conditions provide insufficient reliability or precision in location determination, the narrative might send an appropriate text message to the user such as “Ghosts have haunted your spectral detector! Try to shake them by walking into an open field.” Also, some devices may necessitate that different techniques be used for location determination, and the narrative may need to adjust accordingly and dynamically. For example, a device such as a GPS might have high resolution outdoors, but be virtually undetectable (and thus have low location resolution) indoors. The narrative might need to specify the detectability of an SP at that point in a manner that is independent from the actual physical location of the device, yet still gives the user information. Depending upon the narrative, the system may or may not choose to indicate that the resolution has changed.
A variety of techniques can be used to indicate detectability of an SP when location determination becomes degraded, unreliable, or lost. For example, the system can display its location in coarser detail (similar to a “zoom out” effect). Using this technique the view range is modified to cover a larger area, so that the loss of location precision does not create a view that continuously shifts even though the user is stationary. If the system loses location determination capability completely, the device can use the last known position. Moreover, if the shape of the degraded or occluded location data area is known, the estimated or last-known device position can be shown as a part of a boundary of this area. For example, if the user enters a rectangular building that blocks all location determination signals, the presentation to the user can show the location of the user as a point on the edge of a corresponding rectangle. The view presented to the user will remain based on this location until the device's location can be updated. Regardless of the ability to determine the device's precise location, SP locations can be updated relative to whatever device location the simulation uses.
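The degraded-location strategies just described might be combined as in the following sketch. The fix representation (position plus error radius), the view-range scaling factors, and the rectangular occluded-area clamp are all illustrative assumptions:

```python
# Illustrative sketch of display-location fallback when location data
# degrades. The fix format, scaling constants, and occluded-area handling
# are assumptions for illustration only.

def presentation_location(fix, last_known, occluded_rect=None):
    """Choose a display position and view range for the user.

    fix           -- (x, y, error_radius) from the sensor, or None if lost
    last_known    -- (x, y) of the last successful fix
    occluded_rect -- optional (x0, y0, x1, y1) area known to block signals
    """
    if fix is not None:
        x, y, error_radius = fix
        # Coarser precision -> larger view range ("zoom out"), so the view
        # does not continuously shift while the user is stationary.
        view_range = max(10.0, 4.0 * error_radius)
        return (x, y), view_range
    # Location lost entirely: fall back to the last known position...
    x, y = last_known
    if occluded_rect is not None:
        # ...clamped onto the occluded area's boundary (e.g., the wall of a
        # signal-blocking building).
        x0, y0, x1, y1 = occluded_rect
        x = min(max(x, x0), x1)
        y = min(max(y, y0), y1)
    return (x, y), 100.0  # widest view while the location is unknown
```

SP positions would then continue to be updated relative to whichever position this function returns, as the passage above notes.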
As mentioned, the physical location of the device may be sent with the interaction request itself or may have been sent earlier as part of some other interaction request, or may have been indicated to the simulation engine by some kind of sensor somewhere else in the environment. Once the simulation engine receives the location information, the narrative can determine or modify the behavior of an SP relative to that location.
Indications of a simulated phenomenon relative to a mobile device are also functions of both the apparent range of the device (area in which the device “operates” for the purposes of the simulation engine) and the apparent range of the sensitivity function(s) used for interactions. The latter (sensitivity range) is typically controlled by the narrative engine but may be programmed to be related to the apparent range of the device. Thus, for example, in
Although the granularity of the actual resolution of the physical device may be constrained by the technology used by the physical device, the range of interaction, such as detectability, that is supported by the narrative engine is controlled directly by the narrative engine. Thus, the relative size between what the mobile device can detect and what is detectable may be arbitrary or imaginary. For example, although a device might have an actual physical range of 3 meters for a GPS, 30 meters for a WiFi connected device, or 100-1000 meters for cell phones, the simulation engine may be able to indicate to the user of the mobile device that there is a detectable SP 200 meters away, although the user might not yet be able to use a communication interaction to ask questions of it at this point.
In Diagram B, the smaller circle indicates where the narrative has located the SP relative to the apparent detection range of the device. The larger circle in the center indicates the location of the user relative to this same range and is presumed to be a convention of the narrative in this example. When the user progresses to a location that is in the vicinity of an SP (as determined by whatever modeling technique is being used by the narrative engine), then, as shown in Diagram C, the narrative indicates to the user that a particular SP is present. (The big “X” in the center circle might indicate that the user is in the same vicinity as the SP.) This indication may need to be modified based upon the capabilities and physical limitations of the device. For example, if a user is using a device, such as a GPS, that doesn't work inside a building and the narrative has located the SP inside the building, then the narrative engine may need to change the type of display used to indicate the SP's location relative to the user. For example, the display might change to a map that shows an inside of the building and indicate an approximate location of the SP on that map even though movement of the device cannot be physically detected from that point on. One skilled in the art will recognize that a multitude of possibilities exist for displaying relative SP and user locations based upon and taking into account the physical location of the mobile device and other physical parameters and that the user will perceive the “influence” of the SP on the user's physical environment as long as it continues to be related back to that physical environment.
Specifically, in step 2001, the routine determines whether the measurement meter is working, and, if so, continues in step 2004 else continues in step 2002. This determination is conducted from the point of view of the narrative, not the mobile device (the meter). Thus, although the metering device appears to be working correctly, the narrative may dictate a state in which the device appears to be malfunctioning. In step 2002, the routine, because the meter is not working, determines whether the device has designated or previously indicated in some manner that the reporting of status information is desirable. If so, the routine continues in step 2003 to report status information to the mobile device (via the narrative engine) and then returns. Otherwise, the routine simply returns without measuring anything or reporting information. In step 2004, when the meter is working, the routine determines whether a sensitivity function exists for a measurement interaction routine based upon the designated SP identifier, device identifier, and the type of attribute that the measurement is measuring (the type of measurement), and similar parameters. As described with reference to Tables 1 and 2, there may be one sensitivity function that needs to be invoked to complete the measurement of different or multiple attributes of a particular SP for that device. Once the appropriate sensitivity function is determined, then the routine continues in step 2005 to invoke the determined measurement sensitivity function. Then, in step 2006, the routine determines as a result of invoking the measurement related sensitivity function, whether the simulated phenomenon was measurable, and if so, continues in step 2007, otherwise continues in step 2002 (to optionally report non-success). 
In step 2007, the routine indicates the various measurement values of the SP (from attributes that were measured) and modifies or updates any data repositories and state information as necessary to update the state of the SP, narrative, and potentially the simulation engine's internal representation of the mobile device, to consider the SP “measured.” In step 2008, the routine determines whether the device has previously requested to be in a continuous measurement mode, and, if so, continues in step 2001 to begin the measurement loop again, otherwise returns.
Specifically, in step 2101, the routine determines whether the SP is available to be communicated with, and if so, continues in step 2104, else continues in step 2102. This determination is conducted from the point of view of the narrative, not the mobile device. Thus, although the mobile device appears to be working correctly, the narrative may dictate a state in which the device appears to be malfunctioning. In step 2102, the routine, because the SP is not available for communication, determines whether the device has designated or previously indicated in some manner that the reporting of such status information is desirable. If so, the routine continues in step 2103 to report status information to the mobile device of the incommunicability of the SP (via the narrative engine), and then returns. Otherwise, if reporting status information is not desired, the routine simply returns without the communication completing. In step 2104, when the SP is available for communication, the routine determines whether there is a sensitivity function for communicating with the designated SP based upon the other designated parameters. If so, then the routine invokes the communication sensitivity function in step 2105 passing along the content of the desired communication and a designated output parameter to which the SP can indicate its response. By indicating a response, the SP is effectively demonstrating its behavior based upon the current state of its attributes, the designated input parameters, and the current state of the narrative. In step 2106, the routine determines whether a response has been indicated by the SP, and, if so, continues in step 2107, otherwise continues in step 2102 (to optionally report non-success). In step 2107, the routine indicates that the SP returned a response and the contents of the response, which is eventually forwarded to the mobile device by the narrative engine. 
The routine also modifies or updates any data repositories and state information to reflect the current state of the SP, the narrative, and potentially the simulation engine's internal representation of the mobile device following the recent communication interaction. The routine then returns.
Specifically, in step 2201, the routine determines whether it is possible to manipulate the designated SP given the state of the narrative, particular device and user, etc. and, if so, the routine continues in step 2204, else continues in step 2202. This determination is conducted from the point of view of the narrative, not the mobile device. Thus, although the mobile device appears to be working correctly, the narrative may dictate a state in which the device appears to be malfunctioning. In step 2202, because manipulation of the SP is not currently available, the routine determines whether the device has designated or previously indicated in some manner that the reporting of status information is desirable. If so, the routine continues in step 2203 to report the status information to the mobile device (via the narrative engine) and then returns. Otherwise, if reporting status information is not desired, the routine simply returns without communicating with the SP. In step 2204, when manipulation of the SP is available, the routine determines whether a sensitivity function exists for a manipulation interaction routine based upon a variety of factors such as those discussed with reference to prior interaction functions. In step 2205, the routine invokes the determined manipulation sensitivity function passing along any necessary parameters such as the value of an attribute of a device or a value of the SP to be manipulated. In step 2206, the routine determines as a result of invoking the manipulation sensitivity function whether the simulated phenomenon was successfully manipulated and, if so, continues in step 2207, otherwise continues in step 2202.
In step 2207, the routine indicates the results of the particular manipulation requested with the SP, for example reporting a newly set value of an attribute, modifies or updates any data repositories and state information as necessary to reflect the current state of the SP, narrative, and potentially the simulation engine's internal representation of the mobile device, and then returns.
A simple SPIS embodiment is described and referred to as a “Simple Game.” The Simple Game comprises a mobile device with user I/O hardware, software processing capability, a sensor allowing the determination of location, and the data and logic such that the following narrative functionality is provided:
In
More specifically, Interaction A Range 2602 represents a range limit imposed on the user when interacting with an SP with respect to a first type of interaction. In other words, an interaction of type A is only allowed when the SP is determined to be within the area bounded by the limit indicated by Range 2602. The narrative engine calculates an appropriate range limit and displays the range limit as a distance between the user's current real-world location and a real-world location associated with the SP.
In the Simple Game example embodiment, interaction A is a viewing function, which presents to the user a graphical view of detection (existence of an SP) and two measurements of the SP (distance between the user and the SP, and the bearing of the SP relative to the user's real-world orientation).
Simulated Phenomenon (“SP”) 2603 represents a visual indication of an SP. In the Simple Game, the position of the SP indication 2603 in the display area 2601 is relative to the user's real-world location, which is presumed to be for this example in the center of the interaction ranges. That is, the user's position (not shown) is in the center of display area 2601. The location of the SP's displayed indication 2603 on the display area 2601 depends on both the sensed or otherwise calculated user's real-world location and orientation of the user's device. However, the real-world location of an SP is typically calculated independent from the user's real-world location except during Simple Game initialization when a game defining fix (“GDF”) based on the user's location is used.
In the Simple Game embodiment described, the visual representations of SP locations are typically restricted to a single horizontal plane that corresponds to a flat surface. However, other embodiments can support more complex representations, such as representations of multi-dimensional shapes, where the SP position may be calculated in more than two dimensions. As another example, if an SP is motionless and is directly in front of the user, its indication would be shown as centered in the upper portion of display area 2601. If the user then physically moved forward in a straight line, the user would see the indication move down the middle of the display area. However, if the SP were instead in motion, the SP's indication could move in any direction, depending on its and the user's velocity and bearing.
As an example, in
In other embodiments (not shown), objects and ranges within the display area 2601 are represented from a first person point of view, instead of the “bird's-eye” perspective currently shown in
In addition, it can be advantageous to perform some “smoothing” calculations to minimize erratic displayed changes in orientation and/or bearing. This is especially useful when a location calculation can return different values even when the sensing device is stationary. For example, even a stationary GPS receiver will indicate different locations over time. This can cause the bearing and/or distance of an SP indication, such as indication 2603, to change regardless of any actual motion by the device. This display error is more noticeable when the user is moving at low velocities. To minimize this type of error, the SPIS can use a variety of dampening algorithms, such as comparing the current calculated bearing or position to previous ones calculated on a running average basis (with older data being de-emphasized). One skilled in the art will recognize that other variations for detecting real device movement versus calculation errors can also be incorporated and the display adjusted accordingly.
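One such dampening algorithm of the kind described above is a running (exponentially weighted) average, in which older fixes are progressively de-emphasized. The weighting factor below is an illustrative choice, not a prescribed value:

```python
# Sketch of a simple dampening scheme: a running average of position fixes
# with older data de-emphasized, so that sensor jitter while the device is
# stationary does not produce erratic displayed bearings or distances.
# The alpha weighting factor is an illustrative assumption.

def make_smoother(alpha=0.3):
    """Return a function that smooths successive (x, y) position fixes."""
    state = {"pos": None}

    def smooth(fix):
        if state["pos"] is None:
            state["pos"] = fix  # first fix: nothing to average against
        else:
            px, py = state["pos"]
            fx, fy = fix
            # The new fix contributes alpha; history contributes (1 - alpha),
            # so older data decays geometrically.
            state["pos"] = (alpha * fx + (1 - alpha) * px,
                            alpha * fy + (1 - alpha) * py)
        return state["pos"]

    return smooth
```

A smaller `alpha` dampens jitter more aggressively at the cost of slower response to genuine movement, which matches the trade-off noted above for users moving at low velocities.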
Interaction B Range 2604 represents the range limit imposed on the user when interacting with an SP with respect to a second type of interaction. As currently shown in
In addition to interactions that have corresponding visual representations, interactions can also have audio characteristics. For example, a predefined sound can be presented to the user when an interaction has been allowed and/or is being executed. The use of audio feedback can also allow the user to monitor for narrative conditions without watching the display area. For example, a distinct sound can be played when an SP is detected within the range of interaction A 2602.
Interactions can also have associated corresponding tactile feedback, such as vibrations of a mobile device. For example, when the user captures an SP, the device vibrates. Even further, the device may vibrate differently, depending upon the type of SP captured. One skilled in the art will recognize that combinations of visual, audio, and tactile feedback can also be incorporated.
The range limit lines displayed in
In addition to deterministic ranges defined by clearly bounded areas, the ranges can also be determined by probabilistic algorithms. For example, there may be interaction areas defined for a particular SP where the likelihood of interaction is increased or decreased, rather than being a binary allowed/not allowed circumstance. In addition, the likelihood of interaction may correspond to a non-linear increase or decrease.
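A probabilistic range of this kind might be sketched as follows. The inner and outer radii and the quadratic falloff curve are illustrative assumptions; any monotone, non-linear curve would serve equally well:

```python
import math
import random

# Sketch of a probabilistic (non-binary) interaction range: the likelihood
# of interaction falls off non-linearly with distance instead of switching
# at a hard boundary. The radii and the falloff curve are illustrative.

def interaction_probability(distance, full_range=10.0, max_range=50.0):
    """Probability of interaction: 1.0 inside full_range, 0.0 beyond
    max_range, and a smooth non-linear falloff in between."""
    if distance <= full_range:
        return 1.0
    if distance >= max_range:
        return 0.0
    t = (distance - full_range) / (max_range - full_range)
    return (1.0 - t) ** 2  # quadratic (non-linear) falloff

def interaction_allowed(distance, rng=random.random):
    """Roll against the probability to decide whether this attempt succeeds."""
    return rng() < interaction_probability(distance)
```

Inside `full_range` this degenerates to the deterministic allowed/not-allowed case; between the two radii, repeated attempts succeed with decreasing likelihood.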
Interaction ranges can be established using a variety of techniques. In one embodiment of the Simple Game, SPs have identical ranges for each of the interaction types. These can be determined by a narrative author either as fixed in distance or as a function of some other game value. In some embodiments, the ranges can be selected by the user. The selection process can occur from within the game (such as choosing specific values or general game descriptions), or before the game is invoked (such as selecting from a collection of games). For example, two Simple Games, otherwise the same, could be labeled “Backyard Spook” and “Game Field Spook”. When the former is selected, the game area, including the SP paths and interaction ranges, is determined to be smaller than the latter.
Also, the Simple Game can be enhanced to make ranges for the same type of interaction different for different SP types or specific SP instances. For example, SPs may have an attribute with values associated with their “visibility”. When the attribute has a low associated value, the corresponding SP may only be visible when the user's real-world location is close to that associated with the SP.
Establishing and Managing Location
Typically, the Simple Game uses a single GDF (game defining fix) to determine location and establish what it means in a game context. For example, when the Simple Game is initially invoked at a new location, the system determines an absolute location of the mobile device. This location is used to determine the game playing area, typically with the area centered on the GDF, although one skilled in the art will recognize that other variations are possible, for example, defining the game playing area with the GDF at a particular distance from the playing area.
Multiple GDFs are also supported. For example, two points can be determined, the first serving as the center of the game, the second the maximum distance that SPs can occur away from the first. Additional GDFs can be used to define non-circular game playing areas.
The Simple Game must have at least one SP, though multiple SPs may be created by the SPIS either simultaneously or serially. For example, the creation of a new SP may be predicated on the capture of an existing SP. The initial locations of the SP (or multiple SPs) are typically determined relative to the GDF. SP paths can be based solely on the original GDF, or can be based solely or in part on the device's real-world location. SPs can move, following paths defined in a variety of ways, including formulaically defined shapes (e.g., ellipses or polygons), explicit paths (along a series of relative or absolute locations), pseudo-randomly generated paths, or paths determined using other software algorithms that mathematically derive or select location values.
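As an illustrative sketch of a formulaically defined path (the function name, the planar meter offsets, and the default dimensions are hypothetical), an elliptical SP path can be computed directly from elapsed time:

```python
import math

def elliptical_path(t_seconds, center, semi_major_m=30.0,
                    semi_minor_m=15.0, period_s=60.0):
    """Return the planar (x, y) position, in meters, of an SP that
    follows an ellipse centered on `center`, completing one full
    loop every `period_s` seconds."""
    angle = 2 * math.pi * (t_seconds % period_s) / period_s
    cx, cy = center
    return (cx + semi_major_m * math.cos(angle),
            cy + semi_minor_m * math.sin(angle))
```

Because the position depends only on elapsed time and the center point, such a path is deterministic and repeatable across game invocations.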
Even though path-defining functions may be simple and the same for multiple invocations of the game, the resulting SP paths can significantly differ as the user's movement and interaction with SPs differ. Also, SP paths can be defined in terms of seed conditions (e.g., a pseudo-randomly generated number). They can also be defined in terms of other changing states, such as real-world phenomena (weather, the stock market), dynamic game narratives, and SP conditions, such as those defined by SP attributes stored in a data repository.
User Commands
The example Simple Game provides the user with four commands: 1) initiate game, which can be an explicit command indicated by user manipulation of device input controls, or an implicit command such as by activating the device or another software program; 2) change real-world location, which is accomplished implicitly by the user physically changing locations as determined by the SPIS using data from a location sensor; 3) capture SP, which can be automatic or manually controlled; and 4) end game, which, although typically always an option, can happen without user control, such as when a timer expires or another system-aware condition is satisfied.
Of course many other commands can be provided, such as the following example commands:
One skilled in the art will recognize that, even given the minimum narrative requirements as defined above, a Simple Game can support a variety of user commands, potentially limited only by one's imagination.
Game Playing Area
The example Simple Game described can be used in any arbitrary but constrained geographic area served by location determination technology. For example, it can be played in a neighborhood park or other safe open space that has sufficient unobstructed view of the sky to allow the reception of GPS satellite broadcasts if a GPS type mobile device is used.
The size of the game playing area can be predetermined by the game creators, or it can be selected by the user when the game begins. For example, the user can enter or select from among a set of values defining the maximum distance that SPs can be located from the GDF (the absolute game defining fix location determined at game initialization).
Simple Game Types
The example Simple Game can support various narrative configurations. One example narrative configuration is defined as a ‘sudden death’ configuration, which runs for a fixed time. For example, a quick version of the game may run for a limited period of time, such as three minutes from game initiation. At the end of the duration, the user's ability to interact with SPs is eliminated, and the user is presented with a score corresponding to points associated with the capture of SPs. Potentially, a running score tabulation can also be maintained by the SPIS.
The duration of a sudden death configuration may be predetermined and fixed by the game creator, or may be selected by the user. In either case, it may be preferable to have the duration established by the time the user is first allowed to interact with the SPs.
In a typical sudden death narrative, a fixed number of SPs are always present in the vicinity of the game defining fix (GDF). Of course, if the user strays sufficiently far from the vicinity of the GDF, these SPs may not be visible. Five SPs, each of a different type, with each type associated with unique characteristics such as velocity values, works well. When an SP is captured, the user accumulates points. SPs with higher velocities provide greater point values. When an SP is captured, a replacement of the same type is created elsewhere in the vicinity of the game defining fix. In an alternative narrative, SPs can be associated with predetermined locations; however, this may limit the ability to play the game when and where a user desires.
Another example narrative configuration supported by embodiments of the Simple Game is the “exterminator” configuration. Unlike the sudden death version, a fixed total number of SPs is made available for capture in any one game. The SPs may be available immediately and simultaneously when the game is invoked, or they may have narrative dependencies, such as a requirement that particular ones be captured before others are made available. Accordingly, the game ends when all SPs are captured. At that time, the total elapsed time necessary to capture all the SPs may be displayed.
Data Storage for a Simple Game
In general, the SPIS supports narrative constructs complex enough that it is often desirable to maintain permanent and separately maintained data stores to manage the logic and state of the narrative. However, in Simple Game narratives there is typically little data needed; thus, it can be convenient to integrate the required variables into the simulation engine's memory space. This can be done with variables created at game initiation and kept in volatile memory. For example, aside from accounting for the actions of the user and SPs, a Sudden Death narrative configuration can be implemented using simple logic based on variables that maintain a value associated with elapsed time and accumulated points. Similarly, an Exterminator narrative configuration uses elapsed time and possibly precedence rules for SPs (i.e., which SPs need to be captured before others are made available for interaction), which are easily maintained as part of the simulation engine's volatile memory.
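The volatile, in-engine state described above can be sketched as follows; the class name, the scoring rule, and the injectable clock are illustrative assumptions used to show how little storage a Sudden Death narrative requires.

```python
import time

class SuddenDeathState:
    """Minimal volatile state for a Sudden Death narrative: elapsed
    time and accumulated points, held in the simulation engine's
    memory space with no separately maintained data store."""

    def __init__(self, duration_s=180.0, now=time.monotonic):
        self._now = now          # clock is injectable for testing
        self._start = now()
        self.duration_s = duration_s
        self.points = 0

    def elapsed(self):
        return self._now() - self._start

    def interactions_allowed(self):
        return self.elapsed() < self.duration_s

    def record_capture(self, sp_velocity):
        # Illustrative rule: faster SPs yield proportionally more points.
        if self.interactions_allowed():
            self.points += int(10 * sp_velocity)
```

An Exterminator configuration could be held just as compactly, by replacing the duration with a set of uncaptured SP identifiers and optional precedence rules.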
Similar to the storage considerations for narrative related data, the SP characterizations and states for the Simple Game do not require separately maintained data stores per se. For example, SP actions can be controlled with narrative logic that relies exclusively on SP location values. Optionally, a value of an additional attribute of an SP such as its type, can be used by the simulation engine to select logic of an SP path determination formula. These values can be stored in volatile memory as well.
Enhancements for Multiple Players
When played on a single device, the Simple Game typically requires separate users to take turns (that is, they do not use the device simultaneously). However various game enhancements that facilitate multi-user play on a single device can be implemented, such as performance rankings.
Though users can keep track of their game performance themselves, it may be convenient to have the system assist them. One mechanism is to have the system display the total accumulated points for each game playing session. These totals may include all of the scores (perhaps organized by sequence or user identification) or a selection of scores (such as the top 10). The scores can be associated with particular players.
Other performance data can be presented, such as a history of the real-world path that the user followed, or other metrics describing the history of the narrative, such as when or where the SP interactions occurred and characteristics of the SPs themselves. For example, a graphic representation of the play area, with indications of the user and SP paths, time of capture, and SP characteristics, can be available during or at the completion of games. Alternatively, the information can be displayed in a textual fashion, such as using a table that lists user, SP, captures (time and location detail), etc.
Performance data, besides often being interesting and entertaining, can be used to augment multi-player scenarios. For example, the path that a participant follows can have its total length calculated, shared and displayed. The length of the path can be part of a performance rating by, among other techniques, having points deducted for length greater than the player's prior and/or established “par” value. Alternatively, the shape of the path can be meaningful. For example, the goal of the games can be, in whole or part, to achieve a path that does not cross itself.
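The path-length calculation mentioned above can be sketched simply, assuming the recorded fixes are planar (x, y) offsets in meters; the par-comparison rule shown is an illustrative example of the deduction technique, not a required scoring formula.

```python
import math

def path_length(fixes):
    """Total length, in meters, of the real-world path traced by a
    sequence of planar (x, y) fixes."""
    return sum(math.dist(fixes[i], fixes[i + 1])
               for i in range(len(fixes) - 1))

def par_penalty(fixes, par_m, points_per_meter=1):
    """Illustrative performance rule: deduct points for any path
    length in excess of the player's established 'par' value."""
    excess = max(0.0, path_length(fixes) - par_m)
    return int(excess * points_per_meter)
```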
In addition, the Simple Game can be enhanced with features that facilitate simultaneous competitive play on multiple devices, even when there is no communication between the devices. These include those for single devices described previously, and others such as creating situations where there is competition for location. For example, multiple similar games running on separate devices can be run at the same time within the same game area and thereby create competition for capturing SPs at particular locations.
In general, when multiple SPIS-enabled games are started at the same time, using the same GDF, the same deterministic SPs, and the same narrative definitions, the ranges and real-world locations of allowed SP interactions will be identical (give or take errors caused by the devices, as described above). Further, when an interaction range is relatively small (for example, capture can occur within a few meters) and occurs at the same place and time, users will often try to occupy the same location at the same time, resulting in enjoyable game conflict. Even stationary SPs can cause significant competition when the ability to interact with them is limited in time.
The multi-player enhancement just described is typically implemented with deterministic game functions, such as predetermined or formulaically repeatable SP paths. However randomization or user affected SP paths can still be used during game play, even though they may only create location competition by coincidence (the SP paths vary with the users). When randomization is used to determine SP paths, the frequency of location competition can be increased if the SP path or game area is restricted in some manner.
An important aspect to providing this type of competition between players in a convenient fashion is the use of the single game defining fix as described above. When each of the users occupies the same location when generating their respective GDFs, they can precisely share the game space.
A further enhancement is to implement for each player a delay between creating the GDF and actual game play. Techniques for delaying and synchronizing game play include having the games begin at a predetermined time, or having the users manually begin the games simultaneously.
When the devices are able to communicate, additional enhancements can be made to enhance competitive play between multiple players, each having a mobile device. Typical implementations of such enhancements rely on continuous or rapid inter-device data exchange. Alternatively, multi-user play can include features that rely on the physical exchange of memory media containing simulation-relevant data. For example, a device with removable memory media can store a digitally secure definition of a captured SP. By allowing another player with a similar system to access the device, that player can transport or morph the SP to their own device.
Additional Enhancements
Many other enhancements can be incorporated into a Simple Game or into other SPIS-enabled games. For example, another technique for determining location or for recognizing objects is to make available environmental or user information tokens that can be detected and identified optically by the mobile device and incorporated as part of the game. For instance, using built-in cameras, a physical pattern on a real-world object can be detected by the mobile device. Patterns may be stored, for example, using known bar coding techniques or other mechanisms for creating identifiable tokens. A pattern, once detected, can be used, for example, to identify a real-world location from a table or other data structure, stored as part of the game or device, that associates such patterns with locations. In addition, such detected patterns may be used to identify objects, such as SPs, that have significance within the game narrative. For example, the identification of a particular object may be interpreted by the narrative aspect to cause an SP to appear in a particular location. Other ramifications can of course result, dependent upon the narrative and other configuration aspects of the game.
Enhancements to the Simple Game can include those that maintain the user's full participation in the shared narrative for extended periods, even when the user is connected only intermittently. For example, the SP path determining logic can be simple enough to run on either the mobile device or a remote computing system running remote simulation engine software. This allows the user to interact with SPs in their vicinity using the narrative logic executing on their own machine. However, other users who maintain connection with the remote system will likely lose the ability to track the location of the disconnected mobile user.
Automatic Determination of Location (Auto-Localization)
Games, simulators, training programs, guiding programs, and other computerized systems that associate real-world locations with computer-controlled system aspects need mechanisms to unambiguously determine locations and incorporate them into system logic. This need presents a variety of challenges to system creators, including the difficulty of verifying that a particular locale is appropriate for the system's operation. In particular, verifying the safety of a locale is important and difficult without an in-person inspection of the area. The problem becomes more acute for systems that seek to rely on especially precise locations.
Currently some location-based systems rely on generating the necessary location data by making estimations based on Geographic Information System (GIS) reference data. This technique often fails due to its lack of location resolution and/or the limited or dated nature of the environmental context information. Other systems rely on in-person inspections by individuals who assess locale appropriateness and, if necessary, determine location specifications. This technique can be impractical if, for example, the system is intended to be used at many or undetermined locations.
For location-based systems that only provide relatively low location-resolution, determining an unambiguous location is less of a problem. For example, consider how, when the minimum range for a type of SP interaction of interest to the user is over 100 meters, it is easy to localize an application. An accurate city map referenced with a location grid, e.g., longitude/latitude will often suffice to determine locations on which to base an SP location in such a system. For example, the intersection of major cross streets, an object such as a fountain in a park, or a geographic feature such as a mountain peak will provide sufficient accuracy for this low level of resolution.
In contrast, location-based systems that make use of higher location-resolution (finer granularity), such as less than 20 meters, can support types of interactions that are not so easily localized. Consider an SP that can only be interacted with when a user is in close proximity. This requires the user to occupy a relatively small area where the precision of commonly available maps is typically insufficient. Further, maps rarely convey significant situation context such as attendant physical hazards like automobile or pedestrian traffic; environmental details of potential interest to game creators such as street signage; or details relevant to social context such as a gathering place for particular types of activities.
The SPIS provides an improved technique termed “auto-localization.” The Simple Game provides an example system that uses auto-localization. The techniques are also operable with similar location-based systems that employ simulated phenomena, and are suitable for use in other location-based systems that may or may not be associated with simulated phenomena.
The SPIS auto-localization techniques allow a user of the location-based system to localize the system at run time, with no explicit action by the operator of the device. Auto-localization techniques can determine location using absolute (real-world) location information or using a process termed “dead reckoning,” which deduces location based upon changes in location or other sensed measurements. Auto-localization using dead reckoning is described further below.
Auto-Localization Using Absolute Locations
When auto-localizing using absolute locations, the mobile device determines an initial (current) location in the real world, fixes a game “grid” (which defines a game area for the purpose of that game), and then executes the game, including determining and interpreting movement of the mobile device user(s) and SP(s) according to the localized grid. For example, the Simple Game requires a single point on which to establish the association between the virtual playfield and the user's immediate vicinity.
When the game is initialized, a current location is determined and stored in a data store (for example, a single variable). When used to associate the game area with the real-world, this current location is referred to as the Game Defining Fix (GDF). The GDF can be determined using a single latitude/longitude reading, by taking an average of “N” readings for a more precise GDF, or by any other method that generates a more precise latitude/longitude from several readings. A specific technique for establishing and using a GDF is described further below with respect to the pseudo-code of Table 3. In
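The averaging of “N” readings into a more precise GDF can be sketched as follows; the reading format and the simple arithmetic mean are illustrative assumptions adequate when the readings are tightly clustered (as they are around a stationary device).

```python
def establish_gdf(readings):
    """Average N (latitude, longitude) readings into a single, more
    precise Game Defining Fix. Assumes the readings are clustered
    closely enough that simple arithmetic averaging is adequate
    (i.e., no longitude wrap-around at the antimeridian)."""
    if not readings:
        raise ValueError("at least one location reading is required")
    lats, lons = zip(*readings)
    return (sum(lats) / len(lats), sum(lons) / len(lons))
```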
For example, in the Simple Game described above, a user may desire to interact with a particular SP. Whether that interaction is permitted depends upon the proximity between the user's location and that of the SP. Using the GDF and the fixed game grid, the game's narrative algorithm(s) can begin determining the locations of SPs, and thus make the determinations of what interactions are permissible by the user.
Once the initial position(s) of the SP(s) are determined, the game then determines what interactions are possible.
Because the SP location (e.g., at position A1) falls at least partially within the interaction range 2901, the user is permitted to receive all or part of a visual representation of the SP's location relative to the user's location. Thus, a display of the SP, such as that illustrated in
In a typical game, as the user moves (the mobile device moving with the user), the game narrative advances to make additional interactions and/or additional SPs available.
Note that the view range can be associated to the fixed game grid without further reference to the GDF, since the grid has been associated with the real-world and subsequent positions are detected or calculated relative to this initial location on the grid. Alternatively, a game can operate without an explicit grid, such as that shown in
The auto-localization techniques can be used with different shapes and sizes of game areas. A game area can be defined in terms of a single GDF and a pre-established shape and size—for example, a circle with a 15 m radius. Alternatively, the user can select the size, or some other method of determining or inheriting a size can be incorporated. For example, after invoking the system and allowing it sufficient time to collect location data for establishing the GDF, the user can then move to the closest hazard. At the hazard, the user can then make an indication (e.g., press a button) to establish the boundary of the game space. The distance from this location to the GDF can be used as the maximum radius of a game space. Alternatively, the user can define an arbitrarily complex game space by moving to multiple locations and making indications to establish a fix at each location. These locations can then be used as the vertices of a polygon, and an arbitrary game shape thus established. Though the shape of the game space can be arbitrarily complex, the game can still rely on a single fix (the GDF) to define the game grid. For example, the first fix can be used as the GDF, and all other points calculated relative to it.
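Once the walked vertices define a polygon, the game needs to test whether a position lies inside it. One standard way to do so is a ray-casting test, sketched here under the assumption that all coordinates are planar offsets (e.g., meters east/north of the GDF):

```python
def inside_game_space(point, vertices):
    """Ray-casting point-in-polygon test. `vertices` are the fixes the
    user established while walking the boundary, as planar (x, y)
    offsets relative to the GDF; `point` is tested in the same frame."""
    x, y = point
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Count boundary edges that a rightward ray from `point` crosses.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```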
Game spaces may also differ depending upon the number of players supported by the game. Game spaces for a single player typically include the following attributes: a GDF is determined only when needed; the GDF does not require permanent storage; there is no explicit user or game/trainer developer action necessary to establish the game space; and the game space can be determined using one or more absolute location determinations. For a multi-player game, the game logic typically uses one or more GDFs, hence the game space needs to be determined accordingly. The game space may be defined, for example, as the largest area encompassing all of the game spaces of the individual players. Other definitions are also possible.
Note that a GDF can be determined at game initialization and can be stored for many different uses. For example, it can be used for a single play of a game (a game “session”), or stored for subsequent games. Also, it can be used by other location games and thus shared. In addition, a GDF can be supplied externally, such as by the user manually entering coordinates, place names, or other location identifiers associated with a location model used by the game logic. If external data communication is supported (e.g., via a data transmission receiver or removable media reader), the game defining location can be provided by sources other than the user or game provider. Also, a GDF can be shared among users.
Table 3 is pseudo-code that illustrates an example technique for establishing and using a GDF as part of auto-localization.
Auto-Localization Using Dead Reckoning
As mentioned, auto-localization can be performed without relying on a precise location of the user in the real world. Instead, using a “dead reckoning” technique, the game system produced using SPIS can automatically and dynamically determine location by relying on knowing how fast and what direction the user has traveled since a last location request. For this technique to work, the game determines a start position, which may be based upon real world values such as latitude or longitude, or which may be an arbitrary position such as at coordinate (0,0). The game area is presumed to surround the user (the start position) when the game begins and a game grid is established. Using dead reckoning, the game then tracks the user's motion in the real world relative to this starting position in the game. As with absolute location auto-localization techniques, the game determines and interprets the movement of the mobile device user(s) and SP(s) according to the established (explicit or implicit) game grid.
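A single dead-reckoning update can be sketched as follows; the planar coordinate frame and the compass-bearing convention (degrees clockwise from north) are illustrative assumptions, and the start position may be the arbitrary (0, 0) described above.

```python
import math

def dead_reckon(position, speed_mps, bearing_deg, dt_s):
    """Advance a planar (x_east, y_north) position, in meters, by dead
    reckoning: only speed, heading, and elapsed time are needed, with
    no absolute location fix. Bearing is degrees clockwise from north."""
    x, y = position
    theta = math.radians(bearing_deg)
    return (x + speed_mps * dt_s * math.sin(theta),
            y + speed_mps * dt_s * math.cos(theta))
```

Repeated application of this update from the start position yields the user's track on the game grid; errors accumulate over time, which is why the absolute-location variant may be preferred when a location sensor is available.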
Accordingly,
Table 4 is pseudo-code that illustrates an example technique for establishing and using latitude and longitude to establish a game center (GC) for use with auto-localization.
GPS Transient Error Suppression
Though location-based systems can be conveniently localized using a single location determination, transient errors inherent in GPS systems can cause the localized reference position to suddenly shift erratically. This can cause confusion and inconvenience to the device operator by, among other effects, causing the visual representation of a location-based simulated phenomenon to move independently of the SP's intended position and/or motion. Further, GPS systems not only experience occasional erroneous location determinations, but are also known to produce erroneous latitude/longitude values in patterns. In some cases, these patterns constitute a series of erroneous readings that can cause the visual representation of location-based SPs to drift for periods extending over multiple seconds.
The SPIS location determination techniques also provide a mechanism for reducing, if not eliminating, many such GPS transient errors.
More specifically, in step 3101, the routine reads the GPS data to establish a GDF as described above. In step 3102, this coordinate value is stored in a data store, for example in variable “Previous_Loc.” Steps 3103-3108 are executed each time the game needs to determine the user's location (once the game has begun). They may, for example, be invoked as an interrupt routine, or called from or as part of an event handler for the game. In step 3103, after the game has begun, the GPS (e.g., longitude/latitude) data is again collected (detected or determined in whatever manner is appropriate to the device) and stored for use as a (potential) current location position. The GPS also typically collects velocity and bearing values along with the longitude/latitude data. In step 3104, the GPS-reported velocity and bearing are used to derive a location calculation based upon the prior stored location. Specifically, a current location position is calculated as the previous location position (stored in variable “Previous_Loc”) plus the offset implied by the reported bearing and velocity. In some embodiments, an additional test can be incorporated whereby reported velocities with low values (e.g., less than 1 knot) are rounded to zero. Then, in step 3105, the two potential current location positions (from steps 3103 and 3104) are compared. The difference between the two positions is compared against a minimum threshold. When the difference is greater than the threshold, the routine continues in step 3106 to account for GPS error; otherwise, it continues in step 3107 to use the GPS-determined location. In step 3106, the routine sets the current location (stored in “Previous_Loc”) to the calculated current location position from step 3104. In step 3107, the routine sets the current location position to the GPS detected/reported current location position from step 3103.
In step 3108, the routine determines if the end of the game has been indicated, for example, by the user, narrative, system administrator, or by some other mechanism, and, if so, ends the game, otherwise continues to the beginning of the loop in step 3103 to process the next location.
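Steps 3103-3107 above can be sketched as a single update function; the planar coordinate frame, the 0.5 m/s velocity floor, and the function name are illustrative assumptions, not limitations of the technique.

```python
import math

def next_location(previous_loc, gps_loc, speed_mps, bearing_deg, dt_s,
                  threshold_m=3.0):
    """Suppress a transient GPS error: compare the reported fix against
    a position derived from the previous location plus reported speed
    and bearing. If they disagree by more than threshold_m, trust the
    derived position instead of the raw fix. Positions are planar
    (x_east, y_north) offsets in meters."""
    if speed_mps < 0.5:          # round near-stationary velocities to zero
        speed_mps = 0.0
    theta = math.radians(bearing_deg)
    px, py = previous_loc
    derived = (px + speed_mps * dt_s * math.sin(theta),
               py + speed_mps * dt_s * math.cos(theta))
    gx, gy = gps_loc
    if math.hypot(gx - derived[0], gy - derived[1]) > threshold_m:
        return derived           # step 3106: suspect transient, use derived
    return gps_loc               # step 3107: fix agrees, use the raw GPS
```

The returned value becomes the stored previous location for the next iteration of the loop.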
Note that the minimum threshold value used in step 3105 can be a single pre-established value (such as 3 meters). Alternatively, the threshold can change over time. For instance, in games played for extended periods over large game spaces, it is possible that extended consecutive use of locations determined solely from the calculated positions of step 3104 could drift far enough from those of step 3103 that step 3107 may never execute. Such behavior is undesirable as this technique of
Testing Location-Based Systems
Location-based systems have unique development and testing challenges. One challenge is that they cannot be completely tested until actually used in the real-world locations for which they are intended, because a necessary part of their operation is the operation of at least one component that reliably determines the real-world location of mobile devices. Another challenge is that it is inconvenient to perform system tests on a device while it is moving.
SPIS offers a variety of solutions to these problems. For example, data can be collected by the system while it is being used, and then the data later incorporated for testing purposes, such as by using a simulation generated from the collected data. In general, the testing process includes:
More specifically, a game creation kit can be enabled with a data collection control setting that controls collection of device, SP, narrative, and SP interaction data generated while the game system is in actual use. The data collection control can also be activated while the simulation is running by an authorized user or remote administrator. While system attributes are being collected, there need not be any indication to the user that this is occurring. The collected data is then used to recreate the simulation activities at the convenience of a user or system developer.
One way to recreate a simulation is to run the simulation using the recorded location data in place of the data normally determined by the location determination system. By providing the recorded location data, the game uses deterministic SP paths to display the SPs in the same positions as when the game was originally played. Many advantages are realized, such as allowing the simulation engine to use all of the same data and logic during testing as is used in the field. Also, the developer can see and experience the game from a game user's point of view. The developer can even attempt to interact with the SPs as the recorded user did.
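A minimal sketch of this replay idea follows; the `simulation_step` callback and the (timestamp, location) fix format are hypothetical placeholders for whatever per-update interface a particular simulation engine exposes.

```python
def replay_game(recorded_fixes, simulation_step):
    """Feed recorded location fixes into the simulation engine in place
    of live location-sensor data, so that deterministic SP paths
    reproduce the original session for testing. Each fix is a
    (timestamp, location) pair; `simulation_step` is the engine's
    per-fix update function. Returns the resulting frames in order."""
    frames = []
    for timestamp, location in recorded_fixes:
        frames.append(simulation_step(timestamp, location))
    return frames
```

The same harness supports the single-location enhancement described below: a recording containing one repeated fix acts as a hypothetical stationary user.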
Another testing enhancement that can be incorporated is to use any valid arbitrary location data, including a single location. For example, a single latitude/longitude value can be used to establish a GDF. The game can then run indefinitely acting as if a hypothetical mobile user remained stationary. This simplifies testing of SP behavior (e.g. it is easy to observe the SP motion as presented by a mobile device) independent of movement of the user's mobile device. Other testing enhancements that use simulated or hypothetical data can similarly be incorporated.
Location-Based Systems for Non-Mobile Users
In addition to the mobile scenarios described above, the SPIS can also be used to produce games and other systems that are played or used by one or more users at fixed positions. In this case, Simulated Phenomena are defined in terms of changing real-world attributes (weather, traffic flow, the stock market). As with other SPIS configurations, these types of systems can be used in an individual or multi-player mode. Note that, although many of the examples herein are described with reference to games, other types of SPIS-based systems, as mentioned throughout (for example, complex games, simulation scenarios, and charity systems), can be similarly enhanced.
To participate in a SPIS-enabled narrative as a game player (in contrast to an audience member), a user typically needs to be associated with a real-world location. This includes users who never change position. These permanently immobile users can have their locations determined in a variety of ways, including self-reporting. That is, a user can indicate to the SPIS-based system an initial location. The system may then associate the self-reported location with a specific user and not allow them to change it.
In other situations, the initial physical location of the user is determined based upon a real-world sensor (in contrast to user self-report), and the user (device) remains motionless. For the purposes of systems such as the Simple Game, it doesn't matter if the user never moves, as long as a physical location can be at least initially established with confidence.
For example, the user may wish to interact with SPs using a non-mobile computing device. As with other SPIS-based systems, the game system can be configured as a ‘fat client’ on which all its processing occurs, or as a system in which all or part of the simulation engine is remotely accessed via a communications capability. SPs can be shown to be dispersed and moving in large geographic patterns. As any particular SP approaches a user location, the user's ability to interact with the SP increases.
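The range-limited interaction described above can be sketched as follows, assuming an illustrative linear falloff and the hypothetical names `interaction_strength` and `max_range_m` (neither is from the specification): as an SP's geographic path brings it closer to the user's fixed location, the interaction value rises from zero to one.

```python
import math

def interaction_strength(user, sp, max_range_m=500.0):
    """Return 0.0..1.0: how strongly a user can interact with an SP,
    rising linearly as the SP approaches the user's fixed location.
    Positions are (lat, lon) pairs; distance uses an equirectangular
    approximation adequate for short ranges."""
    lat1, lon1 = map(math.radians, user)
    lat2, lon2 = map(math.radians, sp)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    dist_m = 6371000.0 * math.hypot(x, y)   # Earth radius in meters
    return max(0.0, 1.0 - dist_m / max_range_m)

user = (47.6062, -122.3321)
assert interaction_strength(user, user) == 1.0               # co-located
assert interaction_strength(user, (48.0, -122.3321)) == 0.0  # far away
```

The same function works whether the simulation engine runs locally on a 'fat client' or remotely, since it depends only on the two positions.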
In addition to self-reporting techniques and techniques using real-world sensors, a user's location can be established when the user is using a mobile device capable of location determination and coordinated with a stationary device. The device can stay on, continuing to establish location. Sometime thereafter, a data exchange between the mobile device and one or more stationary computing devices can occur, verifying the location of the user to the software being accessed from the stationary device. Alternatively, an exchange of authorizing data can occur from the mobile device to the stationary device, and the mobile device can then be turned off. Alternatively, the SPIS-based system can make use of mobile device tracking to determine a location of the stationary device. It could then allow the user to operate from the fixed location, within the context of the game.
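One way such an exchange of authorizing data could be realized is sketched below; the HMAC-based scheme, the shared key, and all names (`make_location_token`, `verify_location_token`) are illustrative assumptions, not details from the specification. The mobile device signs its sensed location; the stationary device later verifies the signature before trusting the location.

```python
import hashlib
import hmac
import json
import time

SECRET = b"shared-secret"   # hypothetical key provisioned to both devices

def make_location_token(user_id, lat, lon, secret=SECRET):
    """Mobile side: sign the user's sensed location so a stationary
    device can later verify it (illustrative scheme)."""
    payload = json.dumps({"user": user_id, "lat": lat, "lon": lon,
                          "ts": int(time.time())}, sort_keys=True)
    tag = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return payload, tag

def verify_location_token(payload, tag, secret=SECRET):
    """Stationary side: accept the location only if the tag checks out."""
    expected = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = make_location_token("alice", 47.6062, -122.3321)
assert verify_location_token(payload, tag)
assert not verify_location_token(payload, "0" * 64)   # forged tag rejected
```

After the token is handed off, the mobile device can be turned off; the stationary software retains a verified last-known location.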
Some users may be immobile for significant amounts of time. A SPIS-based system such as a game can be adapted to accommodate such scenarios. For example, suppose that the immobile user is a member of a team. Team members can be associated with a particular geographic location (for example, King County in Washington State), a particular human organization or characteristic (for example, Santa Rosa Junior College Alumni), or some other arbitrary identification (a group calling itself “The Radical Empiricists”). Members of this team can behave cooperatively among themselves to gain points or other game advantages. For example, points can be gained by the capturing of SPs. Further, the ability to capture an SP may be limited in range from each user's determined or recorded location. (In this narrative example it is assumed that at least some of the users are immobile at least at some point during the game.)
Further suppose that the game narrative in this example generates SPs according to atmospheric pressure, with an increase in SP density (as measured across geographic regions) associated with lower air pressure. This produces an enhanced ability to capture SPs during stormy weather, especially for immobile users who need to wait for an SP's path to bring them within interaction range.
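A minimal sketch of such pressure-driven generation, assuming a hypothetical linear relationship and constants (`baseline`, `k`) chosen purely for illustration:

```python
def sp_density(pressure_hpa, baseline=1013.25, base_density=1.0, k=0.05):
    """SPs per square kilometer as a hypothetical linear function of
    air pressure: density rises as pressure drops below the baseline
    (standard sea-level pressure, in hectopascals)."""
    return max(0.0, base_density + k * (baseline - pressure_hpa))

stormy = sp_density(985.0)    # deep low-pressure system
fair = sp_density(1025.0)     # stable high-pressure system
assert stormy > fair          # more SPs to capture during stormy weather
```

Any monotonic mapping from pressure to density would serve the narrative equally well; the linear form is just the simplest.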
While this type of narrative provides valuable entertainment experiences, including the inducement for team members to cooperate in part by observing and anticipating weather systems, it can also create disparities in advantage between geographic regions. Consider, for example, the frequency, intensity, and duration of low-pressure systems in mid-latitude cities like Seattle, compared with cities closer to the equator that sit within longer-lasting, stable high-pressure systems, like San Diego.
One way to address these disparities is to apply explicit handicapping of regions based on historic weather patterns. For instance, the points associated with SPs captured in San Diego can be greater than those in Seattle. Another way is to allow cooperation between teams; for example, the San Diego and Seattle teams could share points or immobile player locations. A more implicit way to address these disparities is to have the rarer good-weather SPs be worth more points.
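The explicit handicapping option can be sketched as below; the regional day counts and the inverse-frequency weighting are illustrative assumptions, not figures from the specification.

```python
# Hypothetical historic counts of capture-favorable (low-pressure)
# days per year, by region -- illustrative numbers only.
HISTORIC_LOW_PRESSURE_DAYS = {"Seattle": 140, "San Diego": 35}

def capture_points(region, base_points=10):
    """Weight an SP capture inversely to how often the region's
    weather favors captures, so rarer opportunities score more."""
    reference = max(HISTORIC_LOW_PRESSURE_DAYS.values())
    handicap = reference / HISTORIC_LOW_PRESSURE_DAYS[region]
    return round(base_points * handicap)

assert capture_points("Seattle") == 10      # 140/140 -> x1 multiplier
assert capture_points("San Diego") == 40    # 140/35  -> x4 multiplier
```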
An aspect common to many such scenarios is that the geographic distribution of immobile users can affect a team's ability to easily interact with large numbers of SPs. For example, it can be advantageous to have a uniform geographic distribution of users if the SP density tends to be uniform. Alternatively, if SPs tend to cluster, then having more users in areas where the greatest density tends to occur can be advantageous. Thus, in some embodiments, a SPIS-based system is supported that allows immobile users to select a discretionary (perhaps user-specified) single fixed location to serve as their immobile location.
Since SP locations can be anywhere a narrative determines, algorithms based on human population density, travel patterns, or other historic or sensed behavior can be incorporated. For example, the number of SPs moving in New York's Central Park can be initially based on its typical daily visitor count (for example, since there are typically more people on weekends, the SP population would be higher on weekends in this case). Alternatively, the SP count could depend on how many users were currently participating in the narrative. Alternatively, the SP population can be fixed at all times (for example, when an SP is captured, the narrative generates a replacement somewhere in the park or its vicinity). In embodiments that accommodate immobile as well as mobile users, when the system loses track of a mobile device's location, the last known location can be used as the user's current location, regardless of which device was used to connect to the SPIS-based system.
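Two of these population policies can be sketched together; the visitor counts, the ratio of SPs to visitors, and the Central Park bounding box are all illustrative assumptions.

```python
import random

TYPICAL_VISITORS = {   # hypothetical daily counts for Central Park
    "weekday": 110_000,
    "weekend": 180_000,
}

def initial_sp_count(day_type, sps_per_visitor=1 / 10_000):
    """Seed the SP population from typical visitor counts, so weekends
    start with more SPs than weekdays."""
    return round(TYPICAL_VISITORS[day_type] * sps_per_visitor)

def replace_captured(population, bounds, rng=random.Random(0)):
    """Fixed-population variant: each capture spawns a replacement at a
    random point inside the park's bounding box ((lat0, lat1), (lon0, lon1))."""
    (lat0, lat1), (lon0, lon1) = bounds
    population.append((rng.uniform(lat0, lat1), rng.uniform(lon0, lon1)))
    return population

assert initial_sp_count("weekend") > initial_sp_count("weekday")
```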
Other SPIS-Based System Enhancements
As briefly mentioned, the SPIS supports non-visual presentations of SPs. For example, a SPIS-based game can be enhanced using audio features. Audio can occur or change when a detection occurs, or the audio's volume or type can change based on proximity to the user (for example, when something is found within a detectable range). Multiple synchronized audio channels can be employed to indicate bearing. For example, the volume of a right channel (for example, as played by a speaker) can be greater than the volume of a left channel when an SP is bearing to the right of the user's orientation. Alternatively, a delay of the left channel can be used to simulate propagation delay. Sounds also can differ according to SP type. Distinctive sounds can be associated with SP or device status, or with the attempted or successful occurrence of an SP interaction. For example, in a Simple Game each type of ghost (SP) may have a scream associated with it and presented when the SP is captured.
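The volume-based bearing indication can be sketched with a simple sine-law pan, assuming the hypothetical name `stereo_gains` and compass-degree inputs (neither specified in the text): an SP bearing to the right of the user's orientation plays louder in the right channel, and symmetrically for the left.

```python
import math

def stereo_gains(user_heading_deg, bearing_to_sp_deg):
    """Map the SP's bearing relative to the user's heading onto
    left/right channel gains in 0.0..1.0; an SP to the right of the
    user's orientation plays louder in the right channel."""
    rel = math.radians((bearing_to_sp_deg - user_heading_deg) % 360)
    pan = math.sin(rel)              # -1 = hard left, +1 = hard right
    right = (1 + pan) / 2
    left = (1 - pan) / 2
    return left, right

left, right = stereo_gains(user_heading_deg=0, bearing_to_sp_deg=90)
assert right > left                  # SP directly to the user's right
left, right = stereo_gains(0, 270)
assert left > right                  # SP directly to the user's left
```

The propagation-delay alternative mentioned above would instead offset the left channel's playback time rather than its gain.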
SPIS-based systems can also be similarly enhanced using tactile feedback, such as vibration frequencies, pitch, etc.
SPIS-based systems can also be enhanced to be user-modifiable. For example, in some games, a user can be allowed to create SPs. An initial location for the user-created SP could be the user's location when the SP is created. SP characteristics also can be user defined. For example, a hostile SP could be created that seeks out another user. Other variations are of course possible.
In some SPIS-based systems, SPs can be shared or generated across users. For example, previously captured SPs could be released (regardless of who captured them), providing the user an ability to populate their current location with one or more SPs that the user did not create.
In addition, SPs can, within the context of specific narratives, be unique even among multiple players. That is, there can be but a single instance of an SP that has distinguishing characteristics. For example, a unique SP can be experienced by users as an SP that is visible to any user who is within range, but that disappears when someone captures it. While visible it can be uniquely identified, and once captured never visible again unless released.
Also, by using a variety of well known digital security techniques, a unique SP can be reliably transferred between individual users. Again, since the SP is unique and non-duplicatable, there can only be one user who has control of it at any point in time.
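The single-holder property can be illustrated with a toy ledger; real systems would layer the digital security techniques mentioned above (signatures, tamper-proof transfer records) on top. All names here (`UniqueSPLedger`, the SP identifier) are hypothetical.

```python
class UniqueSPLedger:
    """Toy single-owner ledger: a unique SP has exactly one holder at
    any point in time, and a transfer succeeds only when initiated by
    that holder."""

    def __init__(self, sp_id, first_owner):
        self.sp_id = sp_id
        self.owner = first_owner

    def transfer(self, from_user, to_user):
        if from_user != self.owner:
            return False         # only the current holder may transfer
        self.owner = to_user
        return True

ledger = UniqueSPLedger("ghost-001", "alice")
assert ledger.transfer("alice", "bob")         # alice hands it to bob
assert not ledger.transfer("alice", "carol")   # alice no longer holds it
assert ledger.owner == "bob"                   # exactly one holder remains
```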
Historically based SPs provide an example of unique SPs having distinguishing characteristics. That is, an SP based on an historic figure can be created by a narrative, and then exchanged between users. Historically based SPs can incorporate a wide variety of distinguishing characteristics, including their actual names and the places they traveled while alive. Other characteristics can similarly be included.
All of the above U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet, including but not limited to U.S. Provisional Patent Application No. 60/577,438, entitled “SIMULATED PHENOMENA INTERACTION GAME,” filed Jun. 5, 2004; U.S. patent application Ser. No. 10/845,584, entitled “COMMERCE-ENABLED ENVIRONMENT FOR INTERACTING WITH SIMULATED PHENOMENA,” filed May 13, 2004; U.S. Provisional Patent Application No. 60/470,394, entitled “METHOD AND SYSTEM FOR INTERACTING WITH SIMULATED PHENOMENA,” filed May 13, 2003; and U.S. patent application Ser. No. 10/438,172, entitled “METHOD AND SYSTEM FOR INTERACTING WITH SIMULATED PHENOMENA,” filed May 13, 2003, are incorporated herein by reference, in their entirety.
From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. For example, one skilled in the art will recognize that the methods and systems for limiting the range of interacting with simulated phenomena discussed herein are applicable to architectures other than a fat client device. For example, using client-server architectures, the experience of the simulation environment can be distributed across multiple devices. In addition, although described herein with reference to a mobile device, one skilled in the art will recognize that the mobile device need not be transported to work with the system and that a non-mobile device may be used as long as there is some other means of sensing or associating information about the user's real world environment and forwarding that information to the SPIS. One skilled in the art will also recognize that the methods and systems discussed herein are applicable to differing protocols, communication media (optical, wireless, cable, etc.) and devices (such as wireless handsets, electronic organizers, personal digital assistants, portable email machines, pagers, etc.) whether or not they are explicitly mentioned herein, providing they support the capabilities of a SPIS-based system.
Provisional applications:

Number | Date | Country
---|---|---
60577438 | Jun 2004 | US
60470394 | May 2003 | US
60380552 | May 2002 | US
Parent/child application data:

Relation | Number | Date | Country
---|---|---|---
Parent | 10845584 | May 2004 | US
Child | 11147408 | Jun 2005 | US
Parent | 10438172 | May 2003 | US
Child | 11147408 | Jun 2005 | US