1. Field of the Invention
The disclosure generally relates to the field of computer interaction, and more particularly to the field of physical interfaces with computer interaction.
2. Relevant Background
The interaction between computing devices and users continues to improve as computing platforms become more powerful and able to respond to a user in many new and different ways, so that a user is not required to type on a keyboard in order to control applications and input data. The development of user interface systems has greatly improved the ease with which a user can interact with a computing device, enabling the user to input control actions and make selections in a more natural and intuitive manner.
The ease with which a user can input control actions is particularly important in electronic games and other virtual environments, because of the need to provide input quickly and efficiently. Users typically interact with virtual environments by manipulating a mouse, joystick, wheel, game pad, track ball, or other user input device to carry out some function as defined by a software program.
Another form of user input employs displays that are responsive to the touch of a user's finger or a stylus. Touch responsive displays can be pressure activated, respond to electrical capacitance or changes in magnetic field intensity, employ surface acoustic waves, or respond to other conditions that indicate the location of a finger or stylus on the display. Another type of touch sensitive display includes a plurality of optical sensors that are spaced apart around the periphery of the display screen so that the location of a finger or stylus touching the screen can be detected. Using one of these touch sensitive displays, a user can more directly control a virtual object that is being displayed. For example, the user may touch the displayed virtual object with a finger to select the virtual object and then drag the selected virtual object to a new position on the touch-sensitive display.
Capacitive, electromagnetic, optical, or other types of sensors used in conventional touch-sensitive displays typically cannot detect the location of more than one finger or object touching the display screen at a time. Capacitive, resistive, or acoustic surface wave sensing display surfaces that can detect multiple points of contact are unable to image objects on a display surface with any degree of resolution. And prior art systems of these types cannot detect patterns on an object or detailed shapes that might be used to identify each object among a plurality of different objects that are placed on a display surface.
Because human computer interfaces are applied in many different fields, a gesture recognition approach is widely sought after. Moreover, the gesture-based input interface is a more natural and direct human computer interface.
In the field of interactive computer software and systems, there are a variety of patterns and methodologies employed to allow a user to understand and interact with the software via physical inputs and outputs. Lacking in each of these approaches is the ability to physically interact with and manipulate a plurality of input devices that can individually detect proximity and orientation so as to provide to the user the ability to interface with a computer system using a gesture language.
Physical action languages used in conjunction with a distributed tangible user interface enable a user to interface with a computer using physically manipulable objects. Upon the detection of physical interaction with one or more physically manipulable objects, a determination is made whether the identified physical action matches a predefined action parameter. When the physical interaction matches the predetermined parameters, software elements associated with the physically manipulable objects that detected the physical interaction are updated.
According to one embodiment of the present invention the physically manipulable objects operate independently of each other and include a plurality of sensors operative to detect any physical interaction. Among other things, these manipulable objects include motion and proximity sensors as well as the ability to render feedback to the user via visual and auditory means.
Upon detection of a physical interaction and according to one embodiment of the present invention, the physically manipulable objects wirelessly convey data regarding the physical interaction to a software architecture. In one version of the invention the architecture is resident on a host computer while in another the software architecture is distributed among the objects and in yet another embodiment the software architecture is distributed among a host computer and the objects. This architecture is operable to process the physical interaction and determine whether a predetermined action parameter has been achieved.
According to one embodiment of the present invention the physical interaction of the one or more physically manipulable objects includes, among others, a touch, motion, location alteration, or a compound event. A touch event can include a touch, a touch release, a combined touch and release, a surface touch and drag, or a touch-release-touch event. The physical interaction can also include multiple physical interactions with two or more physically manipulable objects simultaneously or within a predetermined window of time.
Other types of physical interactions contemplated by the present invention include motion events such as tilting, shaking, translating or moving, and rotating a manipulable physical object, either in one plane or by flipping the object through multiple planes. Indeed, the present invention contemplates physical interactions that include multiple combinations of events.
In addition other embodiments of the present invention address physical interaction to include location altering events. In such a situation the location of one or more physically manipulable objects is altered. The present invention examines the location altering data to determine whether one or more objects are either rendered closer in proximity to each other or separated from each other. In the instance in which the objects are moved to be in closer proximity to each other the present invention determines whether a new group of objects has been formed or whether the newly added object(s) is merely merged into an existing group. Likewise, when object(s) are moved away from an existing group the invention determines whether two new groups have been created.
Another physical interaction addressed by the present invention includes compound events or an interaction that triggers two or more sensors. Compound events can include multiple events on a single object, substantially simultaneous events on a plurality of objects or any combination thereof. According to one embodiment of the present invention compound events can include any of several touch events combined with any of several motion events. Likewise other compound events can include any of several motion events combined with any of several location altering events. As will be apparent to one skilled in the relevant art any of the above mentioned events can be combined to form numerous permutations, all of which are contemplated by the present invention.
According to another embodiment of the present invention a computer-readable storage medium embodies a program of instructions that includes a plurality of program codes. These program codes are operative for using a plurality of physically manipulable objects to interface with a computer system. Once such program code detects physical interaction with one or more physically manipulable objects, another program code conveys data with respect to that physical interaction to a software architecture. There, processing occurs using yet another program code to determine whether the physical interactions match a predetermined action parameter. Should an action parameter be matched, another program code is operative to update a software element corresponding to the one or more physically manipulable objects and, in some embodiments, to render feedback to the user.
As with the previous embodiments, a plurality of physical interactions can occur with one or more physically manipulable objects, either singularly or in combination. Indeed, multiple permutations of combined physical interactions and action events are contemplated and addressed by embodiments of the present invention.
Another aspect of the present invention includes a distributed tangible user interface system comprising a plurality of physically manipulable objects wherein each object includes a plurality of sensors. These sensors can detect, among other things, touch, motion, surface contact, location alterations and proximity to other objects.
The system further includes, according to one embodiment, a host computer on which a software architecture resides. In one version of the present invention physical interaction with one or more of the physically manipulable objects is communicated to the software architecture resident on the host wherein software portions determine whether the physical interaction matches a predetermined action parameter. Based on this analysis an action event such as a touch, motion, location alteration, or any combination thereof can be declared. Once declared another software portion updates elements associated with the physically manipulable objects corresponding to the detected physical interactions.
The features and advantages described in this disclosure and in the following detailed description are not all-inclusive. Many additional features and advantages will be apparent to one of ordinary skill in the relevant art in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter; reference to the claims is necessary to determine such inventive subject matter.
The aforementioned and other features and objects of the present invention and the manner of attaining them will become more apparent, and the invention itself will be best understood, by reference to the following description of one or more embodiments taken in conjunction with the accompanying drawings, wherein:
The Figures depict embodiments of the present invention for purposes of illustration only. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
Embodiments of the present invention are hereafter described in detail with reference to the accompanying Figures. Although the invention has been described and illustrated with a certain degree of particularity, it is understood that the present disclosure has been made only by way of example and that numerous changes in the combination and arrangement of parts can be resorted to by those skilled in the art without departing from the spirit and scope of the invention.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the present invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to their bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention is provided for illustration purposes only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
By the term “substantially” it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
Described hereafter by way of example are computer-implemented methods, computer systems, and computer program products that allow interaction of a computing system with a distributed, tangible user interface; the interface comprising one or more physically manipulable objects that may be used singularly or in combination with other such objects. Such a system may be implemented in myriad ways. Given such a distributed tangible user interface, the software system is manipulated via a set of individual actions on individual objects, and such individual actions may be combined across one or more objects, simultaneously and/or over time, resulting in compound actions that manipulate the software. These actions and compound actions may be interpreted by the software differently depending on the configuration and state of the manipulable objects. Moreover, the resulting manipulation of the software system causes further actions that can either be perceived by a user or cause a transformation in the physically manipulable objects themselves, in an external system, or in an external object distinct from the physically manipulable objects.
According to one embodiment of the present invention interfaces can use one or more physical objects with gestural physical input. These systems can be referred to as distributed tangible user interfaces. For such systems comprised of interchangeable objects that are physically manipulated, there is a need for standardized software design solutions. These solutions form the basis of interaction and software frameworks for implementation of software requiring sophisticated physical input on such systems. Additionally, existing actions used in other interface systems (such as a mouse-driven GUI's “click,” “double click,” or “drag and drop”) do not apply to a computing interface comprising a set of graspable objects. Each object in the set of graspable objects (the interface) acts as an input, so there may be no single input device; rather, the inputs and actions of the other objects may be useful in forming the interface. Such wireless, distributed, tangible interfaces require a unique language of physical actions and system responses as well as a software system for enabling such an action language. The system and methods disclosed herein are operable to identify user actions and gestures on objects in a distributed tangible interface and map these actions to changes in state or behavior of a computer program.
In one embodiment, a physical implementation of a distributed tangible user interface comprises a set of compact manipulable tiles or devices, each tile including a microcontroller, a battery, a feedback mechanism (such as a display or auditory generator), an accelerometer sensor, onboard memory, a button/click sensor, and sensors for detecting nearby tiles (such as radio, sonic, ultrasonic, visible light, infrared, image-based (camera), capacitive, magnetic, inductive, or electromechanical contact sensors). In this example of a distributed tangible interface, triggering of these input sensors is interpreted to direct or influence the state or behavior of the controlling software system. In one embodiment, the software architecture resides on a host tile or other host computer. Each tile (also referred to herein as a graspable or manipulable object) reports sensor input events to the host and/or to other tiles via radio or other means. Other tiles can process some of the sensor input events into higher-order formulations, either to trigger immediate feedback on the tiles or for transmission to the software architecture on the host. The software architecture processes these formulations and/or sensor inputs into higher-order actions. This architecture allows actions performed by a user on an object to trigger state or behavior changes on the tiles and in the software architecture, including but not limited to executing subroutines or modifying locally or remotely stored variables or data.
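By way of a non-limiting illustration, the following sketch shows one way a tile might package a raw sensor trigger into an event message and forward it to the host over a wireless link. The names (SensorEvent, Tile, the use of JSON over a radio_send callable) are assumptions made for this example only; the disclosure does not prescribe a particular message format or transport.

```python
# Minimal sketch of a tile reporting sensor input events to a host.
# All names (SensorEvent, Tile, radio_send) are illustrative assumptions.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class SensorEvent:
    tile_id: int          # which manipulable object generated the event
    sensor: str           # e.g. "button", "accelerometer", "neighbor"
    value: object         # sensor reading, e.g. (x, y, z) or a neighbor id
    timestamp: float      # when the tile observed the event

class Tile:
    def __init__(self, tile_id, radio_send):
        self.tile_id = tile_id
        self.radio_send = radio_send   # stand-in for the wireless link to the host

    def on_sensor_trigger(self, sensor, value):
        """Package a raw sensor trigger and forward it to the host architecture."""
        event = SensorEvent(self.tile_id, sensor, value, time.time())
        self.radio_send(json.dumps(asdict(event)))

# Example: a tile reporting a button press and an accelerometer reading.
if __name__ == "__main__":
    tile = Tile(tile_id=3, radio_send=print)   # print stands in for the radio
    tile.on_sensor_trigger("button", "press")
    tile.on_sensor_trigger("accelerometer", (0.1, 0.0, 9.8))
```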
One aspect of the present invention is that each tile or graspable object forming the distributed computer interface is aware of the presence and position of the other tiles. A tile can detect nearby tiles by a variety of means including, but not limited to, proximity-detection and absolute position sensing. To detect proximity, a number of different methods may be used, including but not limited to light-based (i.e. edges of tiles have a transmitter and receiver for visible or near-visible light that is transmitted by one tile and received by the other), capacitive (i.e. edges of tiles have an antenna that is modulated by an electric signal, which induces a corresponding electric signal in a similar antenna on a neighboring tile), inductive (i.e. edges of tiles have a wound metal coil that generates a modulated electro-magnetic field unique to each tile when a modulated electric current is applied and which is received by the corresponding coil on a neighboring tile), magnetic switch based electro-magnet (i.e. edges of tiles have an electro-magnet that may be modulated and when an electro-magnet on one tile is modulated, the induced field causes the magnetic switch to open and close on the neighboring tile), and camera-based (i.e. tiles have optical “fiducial” markers on each side and a camera based system on each side that can recognize the identity, and perhaps the distance and orientation of, neighboring tiles). To detect absolute position of tiles, a number of different methods may be used, including but not limited to radio received signal-strength (RSS) triangulation, sonic or ultrasonic (i.e. time of flight) based triangulation, and surface-location sensing (i.e. the devices themselves sense their position on the surface using a camera while resting on a surface with a unique spatial pattern, or some other technique of sensing).
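As a hedged illustration of the proximity-detection approach, the sketch below assumes that each edge receiver reports (neighbor id, signal strength) pairs and that a simple strength threshold decides whether a tile is adjacent on that edge; the threshold value and data layout are purely illustrative assumptions, not part of the disclosure.

```python
# Illustrative edge-based proximity detection: each edge receiver reports
# (neighbor_id, signal_strength) pairs, and a threshold decides which tile,
# if any, is adjacent on that edge.
PROXIMITY_THRESHOLD = 0.6   # assumed normalized signal strength

def detect_neighbors(edge_readings):
    """Map each edge ("north", "east", ...) to the adjacent tile id, or None."""
    neighbors = {}
    for edge, readings in edge_readings.items():
        # Pick the strongest transmitter seen on this edge, if it is close enough.
        best_id, best_strength = None, 0.0
        for neighbor_id, strength in readings:
            if strength > best_strength:
                best_id, best_strength = neighbor_id, strength
        neighbors[edge] = best_id if best_strength >= PROXIMITY_THRESHOLD else None
    return neighbors

# Example: this tile sees tile 7 strongly on its east edge and nothing usable elsewhere.
print(detect_neighbors({
    "north": [(2, 0.2)],
    "east": [(7, 0.9), (4, 0.3)],
    "south": [],
    "west": [(5, 0.1)],
}))   # -> only the east edge reports an adjacent tile (id 7)
```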
Still referring to
Notably, the software behavior, state, or subroutines triggered by compound actions can differ from the behavior triggered by their component actions. As an example, consider
Another example could involve virtually moving a character from one object 10 to the other. Graphically depicting a gopher on a trail in a maze is a more specific implementation of this example. Without being adjacent to one another, objects 10 display a map of the maze when the user depresses the push button sensor inputs. When placed adjacent to one another, the displays remain unchanged. Conversely, upon the user placing the objects 10 adjacent to one another and depressing the push button sensor inputs, the gopher could graphically move between objects 10.
According to one embodiment of the present invention the software architecture implements a correspondence between physical interface elements and software elements. These software elements can take the form of individual software objects that reside in memory on a single system that supports and runs the architecture. In one instance these software elements may simply be a collection of variables and behaviors resident in the computational architecture of the physical interface devices themselves, or they may be situated elsewhere, for instance on a remote machine or machines on the internet. Software actions on the software elements can propagate to the physical interface elements, updating their internal state and optionally triggering output that is perceptible to the user. Additionally, actions that the user applies to the physical interface elements may update the internal state of the software elements and trigger further actions as defined by the software behavior, such as updating the physical condition of the physical objects.
In one embodiment of the present invention, software objects, for example objects implemented by object-oriented programming languages, can be used to implement the correspondence between physical interface elements and software elements. In an implementation that uses software objects, the execution of software “methods” exposed to the programmer by the object is a means of triggering state change and optionally triggering user-perceptible feedback on the physical interface element. User interaction with the physical interface elements can be reflected back in updates to the internal state of the corresponding software objects and can optionally trigger additional actions as determined by the software behavior. For example, as a result of a specific arrangement of a set of objects 10, the resulting action, as determined by the software, can be to generate a particular sound by one or more of the objects or by a computing system separate from the objects. Alternatively, the specific arrangements of the objects may result in graphical feedback being displayed by one or more of the objects or some message being presented by a computing system separate from the objects.
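The following sketch illustrates, under assumed names (TileProxy, displayOn, on_touch), how such a software object might both propagate state to its physical element and reflect user interaction back into its internal state and application-defined behaviors; it is one possible realization under stated assumptions, not a required implementation.

```python
# Hedged sketch of the correspondence between a physical interface element
# and a software object. Class and method names are illustrative.
class TileProxy:
    def __init__(self, tile_id, send_command):
        self.tile_id = tile_id
        self.send_command = send_command   # stand-in for the link to the physical tile
        self.state = {"display": "off", "touched": False}
        self.touch_handlers = []           # application-defined behaviors

    # Software -> physical: methods exposed to the programmer trigger
    # state change and user-perceptible feedback on the physical element.
    def displayOn(self, image="blank"):
        self.state["display"] = image
        self.send_command(self.tile_id, f"show:{image}")

    # Physical -> software: user interaction is reflected back into the
    # internal state and can trigger further application-defined actions.
    def on_touch(self):
        self.state["touched"] = True
        for handler in self.touch_handlers:
            handler(self)

# Example: touching the tile causes it to display a green flash.
tile = TileProxy(1, send_command=lambda tid, cmd: print(f"tile {tid}: {cmd}"))
tile.touch_handlers.append(lambda t: t.displayOn("green_flash"))
tile.on_touch()
```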
As an example, the objects 10 may correspond to pieces of a puzzle being solved by the user and specific arrangements of the objects may correspond to valid solutions of the puzzle. A user arranging the objects as a valid solution is informed of that fact by graphical feedback, audio signals, or other kinds of messages. One instance of this could be pieces of a jigsaw puzzle graphically shown on objects 10. When the images displayed on objects 10 are correctly aligned, a “ding” could sound and a flash of green color could be depicted on the objects 10. In another example, a novel music sequencing game includes objects 10 that represent sounds, as indicated by graphical feedback, audio signals, or other kinds of messages. The game plays a sequence of sounds through a speaker device either on the objects 10 or on another device, such as a PC or mobile phone. The game requires the user to physically arrange the objects 10 in a sequence that corresponds to the audio sequence and shake them to the correct rhythm. A user arranging the objects as a valid solution is informed of his or her success by graphical feedback, audio signals, or other kinds of messages.
Included in the description are flowcharts depicting examples of the methodology which may be used to update the state of one or more objects due to physical action induced by a user. In the following description, it will be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine such that the instructions that execute on the computer or other programmable apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed in the computer, or, on another programmable apparatus, to produce a computer implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks.
Accordingly, blocks of the flowchart illustrations support combinations of means for performing the specified functions and combinations of steps for performing the specified functions. It will also be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and computer instructions.
In
A click event could be used similarly to the touch event, but could also be used in conjunction with the touch and release events to indicate commitment. For example, using the answering question example from above, the touch event could highlight a particular answer, and the click event could confirm that answer selection. Another example could be a game in which an auditory cue is given and the user tries to click the object 10 at the correct moment of tempo.
A software application using the architecture of the present invention could thus be notified of this touch event, and change state accordingly. The examples illustrated above for press events apply equally to touch events. However, the touch events enable the user to select certain areas within one display on the object 10. An example of this includes a pattern recognition game. For example, multiple colored symbols can be graphically depicted within one display on the object 10 and the user can identify like-colored groupings of symbols by initiating a touch event. As a result of this event, the selected grouping is virtually replaced with another set of symbols, indicating the success of the event to the user.
Referring to
Referring to
Consider the following example: A multiple surface touch and drag event can be used to change the viewable portion of a map. Whereas the touch and drag event can move items within the display, a multiple surface touch and drag event can move the whole display area or reshape the area (expand or reduce). Notably, the gesture response can differ depending upon the direction of drag. For example, in a car-selection portion of a racing game, a top-to-bottom drag might be used to scroll through a list of possible types of cars, while a left-to-right drag could change the color and car accessories.
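A minimal sketch of such direction-dependent drag handling is shown below, assuming the architecture supplies the start and end coordinates of a surface touch and drag; the coordinate convention and action names are hypothetical.

```python
# Sketch of direction-dependent drag handling on one object's display.
def classify_drag(start, end):
    """start, end: (x, y) screen coordinates; y is assumed to grow downward."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if abs(dy) > abs(dx):
        return "scroll_car_list"               # top-to-bottom (or reverse) drag
    return "change_color_and_accessories"      # left-to-right (or reverse) drag

print(classify_drag((10, 5), (12, 40)))   # mostly vertical -> scroll_car_list
print(classify_drag((5, 20), (60, 22)))   # mostly horizontal -> change_color_and_accessories
```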
According to another embodiment of the present invention the activity or movement of the object itself can generate an event.
Notably, the software can respond differently depending on the magnitude of tilt. For example, a ball displayed graphically on the screen could be animated to roll towards the side of the object 10 that is tilted down. It could roll faster or slower depending on the magnitude of the tilt, or to varying sides of the object if the tilt involved changes in two different planes of motion simultaneously. The tilt mechanism can also be used to scroll through a menu of items. Keeping the object 10 tilted in one direction would scroll through options that would appear in succession on the screen. The tilt event can be ended by the user returning the object 10 to a flat orientation, and the option that is on the screen at the time the tilt event terminates would remain to potentially be selected. Greater angles of tilt could result in faster scrolling.
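One possible mapping from tilt magnitude to scroll rate is sketched below; the thresholds and rates are assumptions chosen only to illustrate that greater tilt can produce faster scrolling and that returning the object to a flat orientation ends the tilt event.

```python
# Illustrative mapping from tilt magnitude (degrees, derived from the
# accelerometer) to a menu scroll rate. Thresholds and rates are assumed.
def scroll_rate(tilt_degrees):
    """Return menu items scrolled per second; larger tilts scroll faster."""
    if abs(tilt_degrees) < 5:      # near flat: tilt event ends, scrolling stops
        return 0.0
    return min(abs(tilt_degrees) / 10.0, 6.0)   # cap the maximum rate

for angle in (2, 15, 30, 70):
    print(angle, "->", scroll_rate(angle), "items/sec")
```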
Another example of how this tilt action can be used as a control device, according to one embodiment of the present invention, is that tilting one object 10 can cause a graphical symbol on another object to change. If a car is depicted driving in a lane of a road on another object, a first object 10 could be tilted to change the lane in which the car is driving. Thus the motion or tilt of a first object can alter the state of a second object. Another type of motion interaction contemplated by the present invention is shaking.
One example implementation of a shake event could be an event that initiates the start of a new round of a card game. The shaking event could “shuffle” the cards displayed graphically on the objects 10 thereby allowing a new round of the game to begin. A different application of a shake event can be to change the internal state of the virtual world displayed on a plurality of objects 10. In this instance the shaking event can cause the figures graphically displayed on each object to have their locations changed within the same display or across displays. Yet another example utilizing a shake event as a control input is as a method of scanning through options. When inside a menu of options, each item could be presented singularly on an object 10. Each shake event can cause the presented option to cycle through all of the potential options.
Another aspect of the present invention, depicted in
A software application using the architecture can thus be notified of a motion event and change state accordingly. For example, an object that is placed into motion can advance an application to a new state that is represented graphically by a menu item shown on the display of the object 10. This change of state can further result in state updates to other objects in the local vicinity, including causing the original object 10 to display a graphic providing visual feedback to the user that the system recognized the motion event. For example, consider a car depicted graphically on the object 10. A motion event could cause an animation that would show the wheels on the car turning in the direction of motion. Another example could be a realistic physics simulation to depict the principle of inertia. With a ball graphic shown on the object 10, a motion event could trigger the ball's movement on the object and subsequent motion events could mimic potential physical reactions of the ball. For example, collisions of the objects in motion can be combined with the direction or orientation of the objects to depict a resulting vector.
Another physical action interpreted by the present invention, shown in
The construct of groups is a part of the described software architecture of the present invention. Some embodiments of the present invention can lack the construct of groups; however groups can provide certain utility and conveniences to an application developer. In one embodiment, a group is considered to be a specific subset of the available physical interface elements. Membership in the group can be determined by an instantaneous physical configuration of the interface objects (for example, all objects that are currently upside-down or that are currently in motion are part of the group), or group membership may be determined by the architecture independent of the instantaneous state of the objects, or some combination of these. According to another embodiment of the present invention there can be a data structure that specifies the set of interface objects currently belonging to a particular group. Alternately, the members of a group can be determined as-needed only when required by the architecture, but not stored in a persistent manner. The data that specifies the members of a group can reside in memory on a single system that runs the architecture or reside in physical or virtual state in the physical interface devices themselves. The data can also be situated elsewhere, for instance distributed across machines on the internet. Extensions to the construct of an unordered group may include ordering (as in an ordered sequence) or topology (for example, two-dimensional or three-dimensional relationships between elements).
By utilizing a group construct the architecture can operate on a group in a manner that is similar or identical to operations on an individual object. For example, the programmer can invoke selected.displayOn( ) (where selected is the name of a group) as well as obj.displayOn( ) (where obj is the name of a single object) to turn on the display on a group of interface objects, or on a single interface object, respectively. This equivalence between the manner of addressing individual objects and the manner of addressing groups of objects enables the development of sophisticated behavior for a distributed user interface.
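The sketch below illustrates this equivalence using the names from the example above (selected.displayOn( ) and obj.displayOn( )); the Group and TileObject classes are illustrative stand-ins for the architecture's actual constructs.

```python
# Minimal sketch of the group construct: a Group accepts the same calls a
# single object accepts, so selected.displayOn() and obj.displayOn() are
# interchangeable from the programmer's point of view.
class TileObject:
    def __init__(self, name):
        self.name = name

    def displayOn(self):
        print(f"{self.name}: display on")

class Group:
    def __init__(self, members=None):
        self.members = list(members or [])

    def displayOn(self):
        # Operating on a group is identical in form to operating on one object.
        for member in self.members:
            member.displayOn()

obj = TileObject("A")
selected = Group([TileObject("B"), TileObject("C")])
obj.displayOn()        # a single interface object
selected.displayOn()   # every object currently in the group
```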
Another aspect of the present invention by which user interaction can establish groups includes arrangement actions. For instance, a set of physical interface elements (objects) that are placed near each other on a surface at some predetermined distance from each other can be assigned to a single common group. Similarly, elements placed atop each other can be assigned to the same group. In either of the aforementioned cases, an element (object) that is moved away from the other(s) may be removed from the group.
Referring in addition to
A software application using the architecture described above can thus be notified of the location data and change state accordingly. For example, whereas the objects 10 previously displayed different images, when placed adjacent to one another the images on objects 10 can become the same. The location data can also be used to increase the size of the virtual space. For example, when a ball is graphically depicted bouncing around the virtual area afforded by the display of object A, placing object B adjacent to object A can increase the virtual space such that the ball can be shown bouncing “through” the physical confines of the objects 10. The software response to the adjacency event can differ depending on the location of adjacency. For example, a simple interaction between an object A with a hat displayed on it and object B with a virtual character could have different adjacency events. Placing object A vertically adjacent to object B could cause the virtual character to wear the hat, while placing them horizontally adjacent could cause the virtual character to hold the hat. In
For example, the location data can be used to indicate an action. A personal pet game can be designed having a feeding option displayed on object A and a graphical representation of the pet on object B. When placed adjacent to one another, the pet is virtually fed. In
One implementation of this diagram can include measurements by internally integrated proximity or adjacency sensors, similar to
According to another embodiment of the present invention different sections of a maze are displayed on a plurality of objects such as objects B, C, and D. Placing object A adjacent to objects B and C reveals another section of the maze now shown on object A. In
A software application using the architecture of the present invention can thus be notified of the location data, and change state accordingly. For example, a software application may advance to a new state that is represented graphically on the display of the objects A, B, C, and D providing visual feedback to the user that the system recognized the location data action. Consider a jigsaw puzzle. Two correctly matched puzzle pieces graphically depicted on objects A and B could be moved adjacent to two other correctly matched puzzle pieces C and D in order to see if the four pieces all match together. Graphical and auditory feedback could indicate when there was correct placement. In another example, this type of location data action could be useful for organizing photographs or other types of data displayed on objects A, B, C, and D.
Organizing data such as photographs into coherent collections can be accomplished by moving multiple objects simultaneously so as to be adjacent to stationary objects that were previously grouped.
The reclassification begins when the objects A and B are still in a common group with objects C and D. First the software elements corresponding to each object are separated. A software application using the architecture described above is notified of the change in location data and changes the state of the objects accordingly. For example, images that are graphically displayed across all objects 10 could change once a subset of those objects is removed, thereby providing the user with feedback that the location data action occurred. Additionally, the same examples used in
The photograph organizing concept works similarly, as grouping pictures can be a multiple-step process that could require several iterations of removal and addition. Referring again to
One example of this type of interaction is a train that is progressing on tracks, with each object 10 graphically representing a section of the train tracks. The introduction of object A allows the train to virtually move further along on the tracks, whereby empty tracks would replace the train image on object B and the train image would subsequently appear on object A. Another example is a word spelling game where each object has a different letter displayed on it. The addition of the letter presented on object A could create a word that was previously incomplete.
As shown in
In the present example the architecture determines whether the location data corresponds to a location data action by, for example, verifying that the distance between objects A and B and between objects A and C is sufficiently short. At this point, the architecture updates software elements corresponding to objects A, B, C, and D. When the architecture determines that the objects A and B are not in a common group with objects C and D, the software elements corresponding to each object are merged into a group. For example, object B could have a graphical image displayed that separates it from objects C and D. Upon the introduction of object A, the images shown on objects B, C, and D change to provide feedback to the user of the location data action.
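A minimal sketch of this merge decision is given below, assuming object positions are available to the architecture and that adjacency is declared when the separation falls under a threshold; the coordinates, threshold, and group representation are illustrative assumptions.

```python
# Sketch of the merge decision: when a newly placed object (A) is
# sufficiently close to members of previously separate groups, the
# corresponding software elements are merged into one group.
import math

ADJACENCY_THRESHOLD = 1.5   # assumed units, e.g. tile widths

def maybe_merge(groups, positions, new_obj):
    """Merge every group that has a member adjacent to new_obj; return the new grouping."""
    touching = [g for g in groups
                if any(math.dist(positions[new_obj], positions[m]) <= ADJACENCY_THRESHOLD
                       for m in g)]
    untouched = [g for g in groups if g not in touching]
    merged = {new_obj}
    for g in touching:
        merged |= g
    return untouched + [merged]

positions = {"A": (1.0, 1.0), "B": (0.0, 1.0), "C": (2.0, 1.0), "D": (3.0, 1.0)}
groups = [{"B"}, {"C", "D"}]
print(maybe_merge(groups, positions, "A"))   # -> one merged group containing A, B, C, D
```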
Using the gopher on a trail example described previously, the placement of object A could create a link between the image of a trail displayed on object B and the images of a trail displayed on objects C and D. Without object A, the software would prevent the gopher from being able to graphically transfer from one portion of the trail on object B to another portion of the trail on objects C and D. The introduction of object A also reveals another section of the trail to which the gopher can virtually travel.
The gestures of the present invention can also be applied to musical arrangements. If each object were to visually represent a different musical note, the insertion of object A between object B and objects C and D can create a musical sequence. Auditory feedback could confirm the correct, or incorrect, placement of the objects 10.
To better illustrate this approach of the present invention consider an implementation comprising an image of a building displayed on the top-most object 10. The building could graphically appear taller as more objects were placed on top of one another. Placing a different object on top with the image of a person on it can, for instance, result in the person virtually entering the building, as depicted graphically on the top-most block. Thus the building group interacts with the object representing the person.
Another example could be an addition mechanic, whereby the top-most block can display the quantity of blocks in a given stack. While seemingly trivial, this could be used to teach children how to count and could also teach multiplication when combined with adjacency events. For example, in
Another example is using the objects to display Chinese characters where text is read vertically. Furthermore, this action could also be used to transfer the image displayed on one block to the image on another, essentially “dropping” a graphical image down. For this example, an image would graphically disappear from the top object 10 and appear on the bottom object 10. It is also possible to have two objects interacting on different planes of motion. A user can place an object vertically on another, horizontally orientated object resulting in a different stack arrangement. The system architecture shown in
The physical action distributed user interface of the present invention can also result from single object compound actions. Examples of these types of actions are shown in
As one skilled in the relevant art will recognize the software architecture of the present invention can analyze and classify multiple touch events so as to properly characterize and initiate an appropriate response. In this case the event can be used for selection of multiple graphics on a display. For example, in a matching game, users could select two matching images from a multitude of images displayed graphically on the same object 10. Another example could be the same “Whac-a-mole™” type game described above, except with multiple mole holes displayed on a single object 10. This configuration would also allow multiple users to play simultaneously on one object 10 and allow users to use more than one finger to interact with the touch display.
Another example of a compound touch event on a single object is the combination of a touch with tilting the object. One application of this motion would be games that utilize guidance of the trajectory of a graphical object. A more specific implementation of this could be a bowling game in which users touch the object 10 to virtually release a bowling ball and tilt the object 10 to guide the ball's path down the alley.
Another example of the same type of trajectory-controlled interaction could be a dart game where users touch to graphically throw a dart and tilt to guide it to the bull's-eye on a virtual dartboard. Similarly the compound action could be a skateboarding game where users are able to virtually perform a variety of skateboarding tricks. The touch and tilt event could be a way to virtually jump onto different obstacles with the skateboard (touch event) while changing balance and direction on the skateboard (tilt event).
The process of such a compound action is shown in
Another example is the interaction of virtually rolling dice. With dice graphically displayed on an object 10, the compound event can signify the user's roll of the dice. Graphical and auditory feedback could indicate to the user that the roll had been completed. This compound event could also be used as an advanced combination move in a fighting game. Performing a sequence of simultaneous actions, such as touching and shaking, could allow a virtual character displayed on a different object 10 to perform special combination moves. Each of these single object compound actions may result in software behavior that differs from simply combining results of the constituent actions.
Another aspect of the present invention involves multi-object compound actions. Such actions are shown in
Examples of a multi-object compound action include causing a virtual vehicle or character to experience a boost of energy or speed. The touch event on one object 10 could graphically and audibly begin to activate a speed boost and the adjacency event with another object 10 with a spaceship displayed on it could cause the image of a spaceship to appear to move faster through space.
Another example is a “tangram” game that could display the shadow of a larger shape on one object 10 and display smaller shapes on another object 10. The user could be given the task of virtually arranging the smaller shapes to fit into the larger one. The press and adjacency event could act to select the desired smaller shape and graphically place it within the larger shape. Examples of a multi-touch compound action involving a tilting motion as shown in
In the compound action depicted in
Another multi-object compound action contemplated by the present invention and shown in
Another example is an action game in which a virtual character collects items. The collected items could all be graphically shown on object B. When a user wants to use one of these items, s/he could move the virtual character displayed on object A to be adjacent to object B, touch object B to virtually use one of the items, and then continue motion past object B to continue playing the game. This event could cause the virtual character to have increased skill or power to battle an upcoming enemy, for example.
There are several ways of implementing the architecture's detection of the described single and multi-object compound actions. In a temporal detection model, individual actions that occur together within a certain amount of time are considered by the architecture to be part of a compound action. For instance, when a button press and an adjacency (motion) action occur within 500 milliseconds of each other, these individual actions can be grouped into a single compound action. The specific timing constraints can be tuned to match the application and user audience. In a grammar-based model, individual actions occur in specific patterns that are matched against established action templates. A hybrid temporal-grammatical model combines these approaches, matching detected actions against templates, but with certain maximum tolerances for delay between actions that are treated as simultaneous and actions that are detected as sequential. Other embodiments can use alternative approaches for detecting compound actions.
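The sketch below illustrates the temporal detection model under stated assumptions: events carry millisecond timestamps, a 500 millisecond window is used as in the example above, and the compound template (a button press followed by an adjacency action) is a hypothetical example.

```python
# Hedged sketch of the temporal detection model: individual actions whose
# timestamps fall within a tolerance window are grouped into one compound action.
WINDOW_MS = 500

def detect_compound(events, template=("button_press", "adjacency")):
    """events: list of (timestamp_ms, action_name), assumed sorted by time.
    Returns the timestamps of the first pair matching the template within the window."""
    for i, (t1, a1) in enumerate(events):
        if a1 != template[0]:
            continue
        for t2, a2 in events[i + 1:]:
            if t2 - t1 > WINDOW_MS:
                break                    # too late to be part of this compound action
            if a2 == template[1]:
                return (t1, t2)          # compound action detected
    return None

stream = [(100, "tilt"), (950, "button_press"), (1200, "adjacency"), (3000, "shake")]
print(detect_compound(stream))   # -> (950, 1200): press + adjacency within 500 ms
```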
Another example is in an application in which a question is presented to the user and the set of possible answers are displayed each on a single object 10; by pressing on a single object 10 the user can indicate their desired answer and the system may provide feedback about this choice such as feedback (graphical, auditory) about whether the selection was correct. Another example is in a “Whac-a-mole™” game in which the user must press the object 10 within a certain amount of time after a specific graphical feedback is presented on the display on object 10. If the touch event is detected within the given time window, graphical feedback may be presented on the display of object 10.
In other embodiments of the present invention, elements of the software architecture can reside in the objects themselves. The data structures and the information related to the various objects can be distributed among the various objects, and an action based on a combined arrangement or compound action involving multiple objects can be determined in a coordinated and distributed fashion by the various objects. In one embodiment of the present invention a particular object may be elected by the participating objects to act as a coordinator. The role of the coordinator object may be assigned to any particular object and determined dynamically. Each object is equipped with the computational resources to be able to perform the above processing. In another embodiment there may be no elected coordinator object and each active object may compute the processing in parallel with the other objects. In a hybrid approach the software architecture may be partly executed by the objects and partly by a host machine separate from the objects. In a wholly distributed design the software architecture can be implemented on the objects themselves, wherein the objects report events to the other objects and the objects tabulate these events. When an object determines that it has received the correct set of events for a particular action, it can process that event and act on the action by, for example, sending a message to update the state of the other objects.
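The following sketch illustrates, with assumed names and a simplistic lowest-identifier election rule, how objects in a wholly distributed design might broadcast events, tabulate them, and let an elected coordinator declare a compound action; the election rule, event names, and action condition are assumptions made purely for illustration.

```python
# Illustrative sketch of a wholly distributed design: every object broadcasts
# its events to the others, each object tabulates what it hears, and the
# coordinator (here simply the lowest id) tells the others to update state
# once the correct set of events has been received.
class DistributedObject:
    def __init__(self, obj_id, peers):
        self.obj_id = obj_id
        self.peers = peers            # shared list of all participating objects
        self.seen_events = []         # events tabulated by this object

    def is_coordinator(self):
        return self.obj_id == min(p.obj_id for p in self.peers)

    def broadcast(self, event):
        for peer in self.peers:
            peer.receive(event)

    def receive(self, event):
        self.seen_events.append(event)
        # Coordinator checks whether the correct set of events for an action arrived.
        if self.is_coordinator() and {"touch", "adjacency"} <= set(self.seen_events):
            for peer in self.peers:
                peer.update_state("compound_action")

    def update_state(self, action):
        print(f"object {self.obj_id} updates state for {action}")

peers = []
objs = [DistributedObject(i, peers) for i in (4, 7, 2)]
peers.extend(objs)
objs[0].broadcast("touch")
objs[1].broadcast("adjacency")   # coordinator (id 2) now triggers the state update
```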
Some embodiments of the present invention include extensions to the software configuration that provide visual, audio, tactile, or other sensory feedback to the user in order to prompt or confirm user behavior. These extensions include but are not limited to initiating, continuing, or completing further actions or compound actions. For example, in
The process shown in
One example of this is a trivia game in which object A displays a question. Touching object A could cause a hint to display on object B, but moving object A adjacent to object B could cause the display on object B to change from a hint to the correct answer. Another example is in an adventure game where this event could cause the virtual character to take a certain action. In this case, object A has the character displayed virtually on top of an object that could initiate an action for the character. Touching object A in this circumstance can cause object B to display a question of whether or not the user wants to take that action. The adjacency event can also act as confirmation of the user's choice to take the specified action.
In each of the above object/user interactions an object is touched or moved by a user such that data is transmitted to the software architecture for analysis. In some cases the interaction is a touch to a surface of the object and in others the interaction is the movement of an object relative to another object or the object's orientation. In each case the software architecture analyzes the data to determine whether action parameters have been met so as to determine whether the state of the object should be updated.
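A minimal sketch of this host-side analysis loop, with a hypothetical table of action parameters, is provided below; it is illustrative only and not a definitive implementation of the disclosed architecture.

```python
# Minimal sketch of the analysis step: raw interaction data arrives from an
# object, it is checked against predefined action parameters, and a matching
# interaction updates the object's software element. The parameter table is
# a hypothetical example.
ACTION_PARAMETERS = {
    "touch": lambda data: data.get("pressure", 0) > 0.2,
    "shake": lambda data: data.get("acceleration", 0) > 12.0,
}

def process_interaction(obj_state, kind, data):
    """Update the software element for one object if its interaction matches."""
    matcher = ACTION_PARAMETERS.get(kind)
    if matcher and matcher(data):
        obj_state["last_action"] = kind        # update the software element
        obj_state["needs_feedback"] = True     # e.g. flash the display or play a sound
    return obj_state

state = {"last_action": None, "needs_feedback": False}
print(process_interaction(state, "shake", {"acceleration": 15.3}))
```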
The storage device 6608 is a non-transitory computer-readable storage medium such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory 6606 holds instructions and data used by the processor 6602. The pointing device 6614 is a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard 6610 to input data into the computer system 6600. The graphics adapter 6612 displays images and other information on the display 6618. The network adapter 6616 couples the computer system 6600 to one or more computer networks.
The computer 6600 is adapted to execute computer program modules for providing functionality described herein. As used herein, the term “module” refers to computer program logic used to provide the specified functionality. Thus, a module can be implemented in hardware, firmware, and/or software. In one embodiment, program modules are stored on the storage device 6608, loaded into the memory 6606, and executed by the processor 6602.
The types of computers 6600 used can vary depending upon the embodiment and requirements. For example, a computer system used for implementing the logic of a distributed tangible user interface may have limited processing power, and it may lack keyboards, and/or other devices shown in
In a preferred embodiment, the present invention can be implemented in software. Software programming code which embodies the present invention is typically accessed by a microprocessor from long-term, persistent storage media of some type, such as a flash drive or hard drive. The software programming code may be embodied on any of a variety of known media for use with a data processing system, such as a diskette, hard drive, or CD-ROM. The code may be distributed on such media, or may be distributed from the memory or storage of one computer system over a network of some type to other computer systems for use by such other systems. Alternatively, the programming code may be embodied in the memory of the device and accessed by a microprocessor using an internal bus. The techniques and methods for embodying software programming code in memory, on physical media, and/or distributing software code via networks are well known and will not be further discussed herein.
Generally, program modules include routines, programs, objects, components, data structures and the like that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the invention can be practiced with other computer system configurations, including hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers and the like. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
An exemplary system for implementing the invention includes a general purpose computing device in the form of a conventional personal computer, a personal communication device or the like, including a processing unit, a system memory, and a system bus that couples various system components including the system memory to the processing unit. The system bus may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory generally includes read-only memory (ROM) and random access memory (RAM). A basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within the personal computer, such as during start-up, is stored in ROM. The personal computer may further include a hard disk drive for reading from and writing to a hard disk and/or a magnetic disk drive for reading from or writing to a removable magnetic disk. The hard disk drive and magnetic disk drive are connected to the system bus by a hard disk drive interface and a magnetic disk drive interface respectively. The drives and their associated computer-readable media provide non-volatile storage of computer readable instructions, data structures, program modules and other data for the personal computer. Although the exemplary environment described herein employs a hard disk and a removable magnetic disk, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, random access memories (RAMs), read-only memories (ROMs) and the like may also be used in the exemplary operating environment.
A number of program modules may be stored on the hard disk, magnetic disk, ROM or RAM, including an operating system, one or more application programs or software portions, other program modules and program data. A user may enter commands and information into the personal computer through input devices such as a keyboard and pointing device. Other input devices may include a microphone, joystick, game pad, satellite dish, scanner or the like. These and other input devices are often connected to the processing unit through a serial port interface that is coupled to the system bus, but may be connected by other interfaces, such as a parallel port, game port or universal serial bus (USB). A monitor or other type of display device may also connect to the system bus via an interface, such as a video adapter.
There are several advantages to the disclosed software designs. For example, the disclosed design offers a complete, generalizable methodology and implementable solution to incorporating and handling single and multi-object actions and gestures within a multi-object distributed tangible interface. In addition, the disclosed software designs support implementing program behavior triggered by not only single actions (including but not limited to click, shake, tilt, or group), but more sophisticated compound actions, making it possible for the software developer using a distributed tangible user interface to create programs that users will find more intuitive, engaging, and expressive. Further, the disclosed software designs extend to multi-object distributed tangible user interfaces of various forms, functions, and implementations, and offer a consistent grammar of patterns and actions that will enable developers and designers to create software utilizing such interfaces with greater speed and ease, while maintaining consistency across systems.
In a broad embodiment, a software system is configured to receive input from a distributed tangible user interface, thus detecting and handling user actions on single sensor inputs, as well as detecting and handling compound user actions involving multiple sensor inputs, on one object or across multiple objects, simultaneously, serially, or in combination, and with results of any such action wholly determined by the software code utilizing this system.
Some portions of this specification are presented in terms of algorithms or symbolic representations of operations on data stored as bits or binary digital signals within machine memory (e.g., a computer memory). These algorithms or symbolic representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. As used herein, an “algorithm” is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms and operations involve the manipulation of information elements. Typically, but not necessarily, such elements may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as “data,” “content,” “bits,” “values,” “elements,” “symbols,” “characters,” “terms,” “numbers,” “numerals,” “words”, or the like. These specific words, however, are merely convenient labels and are to be associated with appropriate information elements.
Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information.
As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
In addition, use of the “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the invention. This description should be read to include one or at least one and the singular also includes the plural unless it is obvious that it is meant otherwise.
Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for a system and a process for an interaction system for a distributed tangible user interface through the disclosed principles herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims.
Claimable subject matter and additional written description includes, but is not limited to the following:
A software architecture that communicates with physical objects 10, each of which is equipped with input sensors and possibly outputs. The architecture provides an abstract software connector that allows the architecture to function with a variety of different types of objects 10. The architecture processes input events on these objects, classifies them, and aggregates them into high-level user actions. These actions can be composed of events on one or more objects over time, and can be composed of arbitrarily complex compound sets of actions. These user actions generate software events that can be used by an application, allowing a programmer to more easily create applications that the user will find more intuitive and engaging.
As will be understood by those familiar with the art, the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Likewise, the particular naming and division of the modules, managers, functions, systems, engines, layers, features, attributes, methodologies, and other aspects are not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, divisions, and/or formats. Furthermore, as will be apparent to one of ordinary skill in the relevant art, the modules, managers, functions, systems, engines, layers, features, attributes, methodologies, and other aspects of the invention can be implemented as software, hardware, firmware, or any combination of the three. Of course, wherever a component of the present invention is implemented as software, the component can be implemented as a script, as a standalone program, as part of a larger program, as a plurality of separate scripts and/or programs, as a statically or dynamically linked library, as a kernel loadable module, as a device driver, and/or in every and any other way known now or in the future to those of skill in the art of computer programming. Additionally, the present invention is in no way limited to implementation in any specific programming language, or for any specific operating system or environment. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
The present application relates to and claims the benefit of priority to U.S. Provisional Patent Application No. 61/311,716, filed 8 Mar. 2010, and U.S. Provisional Patent Application No. 61/429,420, filed 1 Jan. 2011, each of which is hereby incorporated by reference in its entirety for all purposes as if fully set forth herein.
Number | Date | Country
---|---|---
61/311,716 | Mar 2010 | US
61/429,420 | Jan 2011 | US