The present invention relates generally to providing a current indication of a user's status or activity via a computer generated avatar.
In the computing sense, an avatar is a virtual representation of a computer user. The term “avatar” can also refer to the personality connected with a screen name, or handle, of an Internet user. Avatars are often used to represent the real world user in the virtual world of computing. An avatar can be a three-dimensional model used in virtual reality applications and computer games, or a two-dimensional icon (picture) used in Internet forums and other online communities, instant messaging, gaming and non-gaming applications. Avatars may be animated or static.
The term avatar dates at least as far back as 1985, when it was used as the name for the player character in a series of computer games. Recently, the usage of avatars has spread in popularity and avatars are now often used in Internet forums. Avatars on Internet forums serve the purpose of representing users and their actions, personalizing their contributions to the forum, and may represent different parts of their persona, beliefs, interests or social status in the forum.
The traditional avatar system used on most Internet forums is a small (96×96 to 100×100 pixels, for example) square-shaped area close to the user's forum post, where the avatar is placed. Some forums allow the user to upload an avatar image that may have been designed by the user or acquired from elsewhere. Other forums allow the user to select an avatar from a preset list or use an auto-discovery algorithm to extract one from the user's homepage.
In the instant messaging (IM) context, avatars, sometimes referred to as buddy icons, are usually small images. For example, IM icons are typically 48×48 pixels, although many icons found online measure anywhere from 50×50 pixels to 100×100 pixels in size. A wide variety of these imaged avatars can be found on web sites and popular eGroups such as Yahoo! Groups. The latest use of avatars in instant messaging is dominated by dynamic avatars. The user chooses an avatar to represent him while chatting and, through the use of text-to-speech technology, the avatar can speak the text entered in the chat window. Another use for this kind of avatar is in video chats/calls. Some services, such as Skype (through some external plug-ins), allow users to use talking avatars during video calls, replacing the image from the user's camera with an animated, talking avatar.
Various embodiment systems and methods are disclosed which automatically update a user's virtual world avatar to provide a more accurate representation of the user's current real world status or activity. Embodiments may receive information from a variety of sensors located either within the user's mobile device or within close proximity to the mobile device to provide some parameters of the user's real world environment. The variety of sensors may include, but are not limited to, a location sensor (e.g., GPS coordinates), a microphone for sensing ambient noise, a camera or light sensor for sensing ambient light, accelerometers, a temperature sensor, and bio-physiological sensors such as a breathalyzer, heart rate monitor, pulse sensor, electroencephalogram (EEG), electrocardiogram (ECG/EKG), and/or blood pressure sensor. In addition, embodiments may utilize a user's calendar data as well as mobile device settings to generate an updated virtual representation via an avatar of the user's real world status or activity. Alternative embodiments may age the user's avatar over time so that the avatar grows older and more mature as the user does. Various embodiments automatically update or change the user's avatar as the user goes about his/her daily activities. Other embodiments update or change the user's avatar when a request to view the avatar is made. The user's avatar may be viewed in a singular location, such as a webpage. Alternative embodiments may allow a user's avatar to be downloaded to any requesting party. Still other embodiments may pro-actively inform selected parties of a user's current real world status or activity by sending an avatar.
The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary embodiments of the invention, and, together with the general description given above and the detailed description given below, serve to explain features of the invention.
a is an example parameter data table suitable for storing a variety of sensor data, user calendar data, and mobile device settings indicating the current status of the user.
b is an illustrative avatar selection logic table which indicates an avatar to display based on various parameters.
c is a process flow diagram of an embodiment method for calibrating an avatar selection logic table.
a is a process flow diagram of another embodiment method suitable for displaying an avatar directly on the requesting device.
b is a process flow diagram of another embodiment method suitable for displaying a new or updated avatar directly on the requesting device.
a is a process flow diagram of another embodiment method suitable for displaying an avatar directly on the requesting device which conserves battery and processor time.
b is a process flow diagram of another embodiment method suitable for displaying a new or updated avatar directly on the requesting device which conserves battery and processor time.
a is a process flow diagram of another embodiment method suitable for displaying an avatar directly on the requesting device which conserves battery and processing time by responding to a second user request.
b is a process flow diagram of another embodiment method suitable for displaying a new or updated avatar directly on the requesting device which conserves battery and processing time by responding to a second user request.
a is an example parameter data table suitable for storing a variety of sensor data, user calendar data, mobile device settings and authorization level of a user requesting an avatar.
b is an illustrative avatar selection logic table which indicates an avatar to display based on various parameters including the authorization level of the requesting user.
c is a process flow diagram of an embodiment method for calibrating an avatar selection logic table including the authorization level of the requesting user.
a is a process flow diagram of another embodiment method suitable for displaying an avatar selected based upon sensor and setting data and the authorization level of a second user directly on the requesting device.
b is a process flow diagram of another embodiment method suitable for displaying a new or updated avatar selected based upon sensor and setting data and the authorization level of a second user directly on the requesting device.
a is a process flow diagram of another embodiment method suitable for displaying an avatar selected based upon sensor and setting data and the authorization level of a second user directly on the requesting device.
b is a process flow diagram of another embodiment method suitable for displaying a new or updated avatar selected based upon sensor and setting data and the authorization level of a second user directly on the requesting device.
The various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims.
As used herein, the term mobile device may refer to any one or all of cellular telephones, personal data assistants (PDAs), palm-top computers, laptop computers, wireless electronic mail receivers (e.g., the Blackberry® and Treo® devices), multimedia Internet enabled cellular telephones (e.g., the iPhone®), and similar personal electronic devices which include a programmable processor and memory. In a preferred embodiment, the mobile device is a cellular handset that can communicate via a cellular telephone network (e.g., a cellphone). However, cellular telephone communication capability is not necessary in all embodiments. Moreover, wireless data communication may be achieved by the mobile device connecting to a wireless data network (e.g., a WiFi network) instead of a cellular telephone network.
As used herein, the term “server” refers to any of a variety of commercially available computer systems configured to operate in a client-server architecture. In particular, the term “server” refers to network servers, particularly Internet accessible servers, which typically include a processor, memory (e.g., hard disk memory), and network interface circuitry configured to connect the server processor to the network, such as the Internet.
As used herein, the term “theme” refers to the collection of user-configurable settings that may be implemented on a mobile handset to personalize the mobile handset to the user's preferences. A theme is defined by the files and settings used for any and all of the wallpaper (i.e., images presented on the mobile handset display), ring tones (i.e., audio files triggered by different events), ring settings (e.g., loud, medium, soft and silent, as well as vibrate mode), button tones (i.e., tones played when buttons are pressed), button functions, display icons, and speed dial settings (i.e., the telephone number associated with each button configured for speed dialing). A theme may also include settings for other user-configurable settings like password protections, keypad locks, carrier selections, etc. Composed of such data, a theme can be stored as a mix of files (e.g., image and audio files), as well as configuration data (e.g., telephone numbers associated with particular speed dial buttons).
With the advent of modern computing and mobile communications, individuals are able to communicate with one another in a variety of ways and at all times. In the past, if an individual wanted to communicate with another, communication could be done through face to face conversations, letters, or the telephone. Today, in addition to these conventional means of communications, individuals may communicate with one another via e-mail, SMS, instant messaging, voice over internet protocol (VoIP) calls, video over internet protocol calls, internet forum chats, and telephone communications via mobile device (handset) calls. With so many different channels of communications, individuals expect to be able to contact others whenever they desire. However, some individuals may desire not to be disturbed. For example, an individual may be conducting an important meeting and does not want his mobile device to ring during the meeting. While he may simply turn off his mobile device (or the ringer), he may also wish to inform any callers of the reason he is temporarily unavailable. With mobile communications so ubiquitous, many users expect to be able to contact their intended call recipient at all times. Thus, when an intended recipient does not answer an email, SMS, phone call, etc., the initiating caller is often left wondering why the intended recipient is not responding.
Avatars have gained increasing popularity in use as graphical representations of an individual. An avatar can be text (such as a screen name) or a two or three-dimensional graphical representation (e.g., a photograph, cartoon or machine-generated image). Avatars can be static images or dynamic (animated) images. Examples of some avatars are illustrated in
Mobile devices, particularly cellular telephones, are practically ubiquitous and indispensable. Consequently, mobile devices can be ideal platforms for housing sensors that can measure the environment and activities of a user. By properly analyzing motion, sound, location and other sensed information obtained from sensors mounted on users' mobile devices, computer systems can infer user activities, information that can be used in the various embodiments to update users' avatars to reflect their real world activities. For example, a user's mobile device may “learn” that the user's current GPS location is a conference room in the office. Accordingly, the mobile device may automatically set the mobile device to vibrate mode and also automatically update the user's avatar to depict “do not disturb.”
The various embodiments incorporate or make use of a variety of sensors housed in a user's mobile device, and use the sensor information to update or generate avatars which can be displayed to reflect the user's status, location, mood and/or activity. The various embodiments may employ a variety of sensors and access schedule or calendar information maintained within the mobile device to more accurately reflect the user's status. Such an avatar may be made available for public or private viewing, in order to quickly inform viewers of the user's current status, location, mood and/or activity. Such avatars may be sent to others proactively, such as appended to or included within an SMS or e-mail message, or posted to a server where others can access or download the avatar, such as by accessing an on-line game or website where the avatar is maintained. Avatars may be changed according to users' status on a pre-scheduled basis (e.g., periodic updating), whenever the user's status is requested (e.g., in response to a request for an avatar), or whenever the user's status changes (e.g., when sensors indicate the user's location, mood and/or activity have changed).
Users of mobile devices may communicate with one another or with any other user connected through the Internet 108. For example, a first user may send an email from his laptop 101 to a user at desktop computer 113, a user of a PDA 114, a user of a laptop computer 115, or other users via their cell phones 116, 117. In such a case the user would send an email from the laptop 101 which would be wirelessly transmitted to the base station 105. The email would be sent via a router 106 to a server 107, across the Internet 108 to a server 110 servicing the intended recipient's computing device to a router 111, where it might be sent via a wired connection to a desktop computer 113 or via a wireless base station 112 to a mobile device 114-117. Similarly, the recipient can reply or initiate communications to the user in the reverse manner.
Mobile device users may be unavailable to respond to incoming messages from time to time and may wish to provide some indication of their current status to explain why they are non-responsive. Alternatively, mobile device users may want to inform others as to their current status so that others can know if they are available to communicate. Further, some users may wish to inform their friends and family of their current status and activities as part of their social networking lifestyle. Such notification of a user's status may be accomplished efficiently using an avatar that can be accessed by or presented to selected individuals.
Such an avatar may be maintained and displayed, for example, on the user's social networking webpage (e.g., myspace.com, facebook.com, etc) or any other webpage maintained on an Internet accessible server. The avatar, along with the contents of the webpage and data contained therein, may be stored in the memory of a server 109. The server 109 is connected to the Internet 108 and may be accessed by devices with Internet 108 capabilities and proper access rights.
In an embodiment, users may access a person's avatar, such as by accessing a webpage to display the current status avatar, prior to communicating (e.g., calling, sending an e-mail or sending an SMS message). In another embodiment, when a user attempts to communicate with a person, the user may be automatically directed to the webpage containing the current status avatar if the person does not respond or if the person has selected a “do not disturb” option. In another embodiment, an avatar file may be automatically and directly sent back to a user's device 113-117 for display. In another embodiment, avatar files may be proactively sent to a pre-approved list of recipients whenever a user's status changes, or on regularly scheduled or predetermined intervals. In another embodiment, a link to a user's webpage including an avatar may be sent to a pre-approved list of recipients whenever a user's status changes, or on regularly scheduled or predetermined intervals. As one of skill in the art would appreciate, server 109 may not be necessary if avatar files are being sent directly to a user's device 113-117. In addition, avatars may be hosted on a variety of servers 107, 110, and need not be limited to a particular server 109.
In addition to automatically updating avatars based upon sensed and recorded information, an embodiment enables users to override the automatic updating in order to select a particular avatar regardless of the user's current status. Thus, users may elect to have their avatars reflect their current status at some times, while selecting particular avatars at other times. In alternative embodiments, the user may also set permission or authorization levels for avatars to control who may view a particular avatar, at what times, and during what activities. For example, a user may elect to enable the user's supervisor to view a particular avatar only during work hours. Outside work hours the avatar may be hidden, not transmitted, or reflect a general status, such as busy. In such embodiments, users can set the information and avatars that are public. Such embodiments may be used in updating an avatar on a website or in a general broadcast.
Information from sensors within a user's mobile device can be used to determine how an avatar should be updated. Each of the mobile devices 101-104 may include a variety of sensors which can provide information used to determine the respective users' status. Examples of various sensors will be described in more detail below. In addition, the day, time, and mobile device settings may be considered to provide further information regarding the respective users' status. Further, the mobile device's own operating status may provide information on the user's status.
For example, if a user regularly visits a coffee shop and uses the shop's WiFi network, this status could be determined based upon (1) the time of day and day of the week (TOD/DOW), particularly if the breaks are regular or scheduled, (2) activation of the WiFi transceiver, perhaps in combination with TOD/DOW information, (3) GPS (or other location sensor) coordinates, perhaps in combination with TOD/DOW information and activation of the WiFi transceiver, or (4) background noise picked up by the device's microphone (e.g., the unique sound of an espresso machine), perhaps in combination with TOD/DOW information, activation of the WiFi transceiver, and GPS coordinate information. Other sensors may also confirm that the user's status is consistent with a coffee break, including accelerometers (indicating little if any motion) and temperature (indicating an ambient temperature consistent with an indoor location). If the user skipped his/her coffee break, was home sick or on vacation, an avatar display based solely on TOD/DOW information would inaccurately depict the user's current status. By further referencing the mobile device's GPS sensor, the system can determine if the user is within (or close to) the coffee shop location. Using background noise sensing, the system may confirm a coffee break status by recognizing espresso machine noise. Having made this determination of the user's current status, a system (whether the user's mobile device 101 or a server 109 receiving information from the device 101) can select or generate an avatar that graphically depicts this status. In addition, if the device's microphone detected a noise level that exceeded a predetermined limit, the avatar could be altered to reflect the user's ambient environment. This may suggest to someone viewing the avatar that a non-verbal means of communication (e.g., text message) may be the best form of communication if someone wanted to contact the user.
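The multi-cue inference described above may be sketched in software. The following Python sketch is purely illustrative: the threshold of three agreeing cues, the parameter names, the coordinates, and the `infer_status` logic are all assumptions made for this example, not details specified by the embodiments.

```python
# Hypothetical sketch: combining TOD/DOW, WiFi, GPS, and ambient-noise
# cues into a single inferred status. All field names and thresholds
# are illustrative assumptions.

COFFEE_SHOP_COORDS = (40.7128, -74.0060)  # assumed shop location

def near(coords, target, tolerance=0.001):
    """Return True if two (lat, lon) pairs are within a rough tolerance."""
    return all(abs(a - b) <= tolerance for a, b in zip(coords, target))

def infer_status(params):
    """Combine several sensed parameters into one inferred user status."""
    # Cue 1: the time and day match the user's regular break schedule.
    on_break_schedule = (params["day"] in ("Mon", "Tue", "Wed", "Thu", "Fri")
                         and 10 <= params["hour"] < 11)
    # Cue 3: GPS places the user at (or close to) the coffee shop.
    at_shop = near(params["gps"], COFFEE_SHOP_COORDS)
    # Each additional agreeing cue raises confidence in the inference.
    cues = [on_break_schedule, params["wifi_active"], at_shop,
            params["espresso_noise"]]
    if sum(cues) >= 3:
        return "coffee_break"
    if params["noise_db"] > 80:
        # Loud surroundings: suggest non-verbal contact (e.g., SMS).
        return "noisy_environment"
    return "unknown"

status = infer_status({
    "day": "Tue", "hour": 10, "wifi_active": True,
    "gps": (40.7129, -74.0061), "espresso_noise": True, "noise_db": 65,
})
```

Requiring several agreeing cues, rather than any single one, reflects the point made above: TOD/DOW alone would misreport a skipped break, while corroborating GPS and microphone data make the inference robust.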
As another example, background noise may be monitored (e.g., using the mobile device's microphone) for music and other sounds that may be used to infer the user's mood. For example, if the background noise includes music with an up-tempo beat, an avatar expressing a happy mood may be selected. As this example illustrates, by increasing the number of sensors used and the variety of information considered, a system can better infer the user's current status.
As another example, a user status of flying in an airplane may be determined by comparing TOD/DOW information to the user's Calendar information. If the user is scheduled to be on an airplane flight and there is no contrary information (e.g., an open WiFi connection with the user's mobile device 102), a system (whether the user's mobile device 101 or a server 109 receiving information from the device 101) can select or generate an avatar that graphically depicts this status. If airlines begin allowing mobile devices to communicate during flights, the mobile device 102 may also report sensor information consistent with airline travel to permit confirmation of the status. For example, constantly changing GPS data from the mobile device 102, accelerometer data and/or barometric pressure data from a sensor on the mobile device 102 may all be used to confirm airline travel.
As another example, a user's status as driving or riding in a car may be determined based upon TOD/DOW (e.g., the time and day being consistent with rush hour) in combination with GPS location information and accelerometer sensor readings. Constantly changing GPS data and accelerometer data from the mobile device 103 may be used to confirm the user's status as driving.
As another example, the status of a user of a cell phone 104 in a business meeting may be inferred from TOD/DOW information compared against the user's Calendar. The TOD/DOW and calendar information may be maintained within the cell phone 104 and/or a server 109. Such a preliminary determination can be confirmed by considering other sensor information, such as GPS location information, and accelerometer and temperature sensor readings, and the cell phone 104 operating settings. For example, if the user has switched his mobile device 104 to vibrate or silent ring, these settings are consistent with the user being in a meeting and help to confirm the user's status. Similarly, if the GPS location is consistent with a conference room, the accelerometer readings show little significant movement and the temperature is consistent with an indoor location, this information can be used to confirm that the user is in a meeting. With the user's status so inferred, an appropriate avatar file can be selected or generated.
As another example, a user of a cell phone 104 may choose to display a set avatar while on vacation (e.g., an avatar sleeping in a hammock) instead of displaying current status. In this example, the cell phone 104 may be configured to not relay sensor data or communicate a set avatar, or a server 109 may be configured to ignore sensor data and TOD/DOW information and display a selected avatar.
In using a variety of sensors and information sources to infer a user's status, certain sensors or parameters may be given higher priority, particularly with respect to each other. For example, GPS location information may override TOD/DOW plus calendar information, such as when the GPS location is different from the location indicated by the calendar information. Logic rules may be generated to deal with such contradictory information. Additionally, user inputs and settings regarding current status may override all sensor, phone settings, or calendar parameter data.
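The priority scheme described above may be sketched as a short set of ordered rules. In this Python sketch, the rule ordering and the parameter names (`user_override`, `calendar_status`, `gps_status`) are illustrative assumptions; the embodiments leave the particular logic rules open.

```python
# Hypothetical sketch of prioritized conflict resolution: an explicit
# user setting beats all sensed data, and GPS-derived status beats
# TOD/DOW-plus-calendar status when the two disagree.

def resolve_status(params):
    """Apply prioritized logic rules to possibly contradictory inputs."""
    # Highest priority: a status the user set manually.
    if params.get("user_override"):
        return params["user_override"]
    calendar_status = params.get("calendar_status")
    gps_status = params.get("gps_status")
    # GPS location information overrides contradictory calendar data.
    if gps_status and calendar_status and gps_status != calendar_status:
        return gps_status
    return calendar_status or gps_status or "unknown"

# The calendar says "in_meeting" but GPS places the user elsewhere;
# under the rules above, the GPS-derived status wins.
status = resolve_status({"calendar_status": "in_meeting",
                         "gps_status": "coffee_break"})
```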
Each of the mobile device 301 sensors 350-356 are connected to the processor 391, which is in turn connected to an internal memory unit 392. In this manner, the processor 391 may collect parameter data from the various sensors 350-356 and may store the data in memory unit 392 or transmit the data via transmitter 398. It should be noted that while mobile device 301 is depicted in
The various sensors illustrated in
While
After, while, or before receiving sensor data, the processor 391 may also retrieve calendar data stored in memory 392, step 402. As discussed above, calendar data may be used to infer the activity and location of the user of the mobile device 301. Calendar data may be obtained for the particular time of day and day of week. As part of this step, the processor 391 may also obtain the current time and date that may be stored in a data register.
After, while, or before receiving sensor and calendar data, the processor 391 may retrieve various mobile device settings, step 403. Such device settings may include the selected ringer type (e.g., silent or audible), theme settings, normal or roaming communication mode, battery power level, current cellular communications status, such as whether the user is engaged in a phone call or accessing the Internet on a mobile browser, current wireless communications status, such as whether the user is presently connected to a WiFi network, and current local area wireless network status, such as whether the user is presently using a Bluetooth device. Each of these mobile device settings and operating conditions may provide information regarding the user's status. For example, if the user has set the mobile device 301 to display a casual theme (e.g., whimsical wallpaper, musical ringer, etc.) this may indicate that the user is not engaged in business or work activities.
In each of the various embodiments, the order of the method steps described herein may be varied from that illustrated in the figures. For example, the step of gathering sensor data may be performed after the step of gathering mobile device setting and calendar data. In this manner, the information provided by the device settings or indicated in the user's calendar may be used to make an initial inference regarding the user's activity which may be confirmed or further refined by selected sensor data. For example, if the device settings indicate that any cellular telephone call is in process, the mobile device processor 391 may be configured to poll only the GPS sensor, since the user's status (talking on a cell phone) is already established, and only the location remains to be determined. As another example, if the calendar data indicates that the user is in a meeting, the processor 391 may be configured with software to poll only those sensors necessary to confirm that status.
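The conditional-polling idea above can be sketched briefly. In the following Python sketch the sensor names and the selection rules are hypothetical; the point is only that settings and calendar data, gathered first, can narrow the set of sensors that need to be polled.

```python
# Hypothetical sketch: choose a minimal sensor set based on what the
# device settings and calendar already reveal about the user's status.

ALL_SENSORS = ["gps", "microphone", "accelerometer", "temperature"]

def sensors_to_poll(settings, calendar_entry):
    """Return only the sensors needed to complete the status inference."""
    if settings.get("call_in_progress"):
        # Status (talking on the phone) is already established;
        # only the user's location remains to be determined.
        return ["gps"]
    if calendar_entry == "meeting":
        # Poll just the sensors needed to confirm the calendar inference.
        return ["gps", "accelerometer", "temperature"]
    return ALL_SENSORS  # no prior hint, so gather everything

polled = sensors_to_poll({"call_in_progress": True}, None)
```

Polling fewer sensors in this way is one means of conserving battery and processor time, consistent with the power-saving embodiments described above.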
Using information gathered from mobile device sensors and stored data and settings, a processor can infer a user's current status and determine which one of a group of avatars to display, step 404. A variety of processors may be used to make this determination in the various embodiments. In one embodiment the processor 391 of the mobile device 301 is configured with software instructions to select an avatar for display based upon criteria stored in its memory 392. In such an embodiment, the avatar selected for display by the mobile device 301 may be uploaded to another computing device 113-117. For example, in an embodiment the selected avatar may be sent to another mobile computing device as an attachment to an e-mail or SMS message. In another embodiment, the mobile device can send a URL or other network address to another computing device to enable the receiving computer to obtain the avatar by accessing the provided URL or other address. In another embodiment, the mobile computing device 301 may transmit a memory pointer indicating the memory storage location of the selected avatar file to another computing device so that the receiving computing device can obtain the avatar from its own memory. This embodiment may be employed where the avatar file is stored in memory (e.g., hard disc memory) of a server 109, thereby enabling the server to load the selected avatar to the user's webpage. This embodiment may also be employed with other computing devices 113-117.
In other embodiments, the mobile computing device 301 may transmit the sensor, calendar, and settings data to another computing device, such as server 109 or other computing devices 113-117, so that the avatar determination, step 404, can be performed by the processor of the receiving computing device. In such embodiments, the sensor, calendar, and settings data are received and stored in memory before the computing device's processor makes the avatar determination.
Once the proper avatar to display has been determined, the avatar file can be made available for display on a computing device 113-117, step 405. In an embodiment, the computing device 113-117 displays the selected avatar by accessing the avatar file that was either pre-stored in memory (e.g., using an address or memory pointer communicated by the mobile device 301) or downloaded from either the mobile device 101-104 or a server 109. In another embodiment, the avatar may be accessed and displayed as part of an Internet webpage hosted by a server 109. In alternative embodiments, the avatar files may be updated annually or after some other period of time such that the avatar reflects the age of the user. As the user matures and grows older, the various avatar files may be updated to display an older and more mature avatar. For example, the avatar files may depict the user with graying hair or weight loss or gain, consistent with the actual appearance of the user. In this manner, when the appropriate avatar file is accessed or retrieved, the avatar will accurately reflect the user's age.
Specific process flow steps of the various embodiments will now be described in greater detail with reference to
The processor 391 can evaluate the data stored in the parameter data table, step 410, to determine which avatar to display, step 411. A variety of method embodiments may be used to evaluate the stored parameter data and select a particular avatar for display. In a preferred embodiment, the parameters stored in the data table are compared to values stored in a selection table, such as illustrated in
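The table-driven comparison described above may be sketched as a simple row-matching routine. In the following Python sketch, the table columns, the wildcard convention (“*” matches any value), and the avatar file names are all illustrative assumptions rather than details of the avatar selection logic table itself.

```python
# Hypothetical sketch: match current parameter values against rows of
# an avatar selection logic table and return the first matching avatar.

SELECTION_TABLE = [
    # (ringer, location, motion) -> avatar file; "*" matches anything.
    (("silent", "conference_room", "still"), "avatar_meeting.png"),
    (("*", "coffee_shop", "*"), "avatar_coffee.png"),
    (("*", "*", "driving"), "avatar_driving.png"),
]

def select_avatar(params, default="avatar_default.png"):
    """Return the avatar of the first row whose criteria all match."""
    for criteria, avatar in SELECTION_TABLE:
        if all(c == "*" or c == p for c, p in zip(criteria, params)):
            return avatar
    return default  # no row matched the current status

avatar = select_avatar(("silent", "conference_room", "still"))
```

Because the rows are data rather than code, a user could add, remove, or reorder them to tailor the selection behavior, which is the flexibility advantage noted for the table-based embodiment.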
In another embodiment, the parameter data may be evaluated in a logic tree programmed in software. In this example embodiment, the processor 391 executes a software routine in which the particular steps performed (e.g., a series of “if X, then Y” logic tests) depend upon certain parameter values. While the use of a programmed logic tree routine may operate faster, this embodiment may be more difficult for a user to configure and may allow fewer user selection options.
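The hard-coded logic-tree alternative described above can be sketched as a chain of nested conditional tests. The parameter names, thresholds, and avatar file names in this Python sketch are illustrative assumptions.

```python
# Hypothetical sketch of a programmed logic tree: a fixed series of
# "if X, then Y" tests rather than a data-driven selection table.

def select_avatar_tree(ringer, location, speed_mph):
    """Decide which avatar to display via nested conditional tests."""
    if speed_mph > 20:
        return "avatar_driving.png"
    if ringer == "silent":
        if location == "conference_room":
            return "avatar_meeting.png"
        return "avatar_busy.png"
    if location == "coffee_shop":
        return "avatar_coffee.png"
    return "avatar_default.png"

avatar = select_avatar_tree(ringer="silent",
                            location="conference_room", speed_mph=0)
```

Because the rules are fixed in code, such a routine can evaluate quickly, but changing the selection behavior requires changing the program, which illustrates why this approach may be more difficult for a user to configure than a selection table.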
Once the appropriate avatar has been selected, the processor 391 can direct the transmitter 398 to transmit the selected avatar to a server 109 via a wireless or cellular network, and the Internet 108, step 415. In this step, the processor 391 may transmit the selected avatar file, a pointer or address to memory containing the selected avatar file, or an identifier of the selected avatar that the server 109 can use to locate the corresponding avatar file within its memory.
Once the selected avatar has been transmitted to the server 109, the processor 391 may periodically repeat the steps of obtaining parameter values so as to continually update the displayed avatar. If the user has generated appropriate avatars and configured the mobile device to appropriately link individual avatars to various parameter values, this embodiment can enable the server 109 to display an avatar that reflects the user's current status. In an embodiment, the processor 391 may optionally pause for a pre-determined amount of time before repeating the steps of obtaining parameter values in order to reduce the demand on the processor 391, step 450.
The transmitted data indicating a particular avatar to display is received by the server 109, step 420. The server processor (not shown separately) of the server 109 may include the selected avatar file in a webpage hosted on the server 109 for the user for public display, step 430. When a second user accesses the user's webpage, step 440, the access request is received by the server 109, step 431. In response, the server 109 transmits the webpage with the selected avatar file as an HTML file to the computing device 113-117 of the second user, step 432. The receiving computing device 113-117 then displays the selected avatar to the second user, step 441. Since the mobile device processor 391 is continually updating the avatar selection, this embodiment ensures that whenever a second user accesses the first user's webpage, step 440, an avatar reflecting the first user's current status is displayed, step 441. Since the polling and analysis of sensor and settings data is performed autonomously, the user's avatar presented to others is kept consistent with the user's current status without input by the user.
A variety of data structures may be used with the various embodiments, an example of which is displayed in
The processed data values stored in the parameter value table 600 illustrated in
With sensor and setting data stored in a parameter value table 600, this information can easily be compared to criteria stored in an avatar selection logic table 601, an illustrative example of which is shown in
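As a minimal sketch of that comparison, each row of an avatar selection logic table can be modeled as a mapping from parameter names to (minimum, maximum) ranges; a row matches when every stored criterion contains the corresponding measured value. The parameter names are illustrative assumptions.

```python
def row_matches(criteria, params):
    """Return True when every criterion range in the table row
    contains the measured value from the parameter value table.
    A missing measurement fails the criterion."""
    return all(lo <= params.get(key, lo - 1) <= hi
               for key, (lo, hi) in criteria.items())
```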
An avatar selection logic table 601 may be stored in memory of the computing device which determines the appropriate avatar to display. Thus, if the mobile device 301 determines the avatar to be displayed, such as described above with reference to
Using an avatar selection logic table 601 to perform the avatar selection provides greater user flexibility and control over the process. The various embodiments are intended to provide users with a flexible and accurate means for presenting avatars reflecting their personal preferences and activities. Thus, there is benefit in giving users fine control over the process used to select and display the avatars of their choice. Use of an avatar selection logic table 601 also simplifies the user's setup process when many sensor and setting criteria are employed. A native application may be provided on the mobile device to enable a user to change the avatar selection criteria or otherwise manually control the avatar presented at any given time.
Users may customize the avatar selection logic table 601 such that the avatar file selected for display is chosen based upon a variety of parameters. Avatar selection criteria can be chosen and the avatar selection logic table 601 populated during a user setup process in which the user makes personal selections in order to customize the user's own avatar behavior. Such a setup process may be accomplished with the aid of an interactive menu application running on a computing device. Such an application may include user tools for creating, modifying, and storing avatars, as well as menus for programming the avatar selection logic table 601. As part of the process for creating avatars, the menu application may require the user to assign a name or descriptor to each avatar, which can be saved in the avatar selection logic table 601. Once an avatar is created, or after all avatars have been created, the menu application may then prompt the user to enter values or ranges to be used as criteria for selecting each avatar, and store the user's responses in the appropriate fields of the avatar selection logic table 601.
For example,
Alternatively, the mobile device may be programmed with a software application to enable a user to populate the avatar selection logic table by performing an activity while recording sensor and setting data, and then identifying the avatar to associate with the recorded values. For example, a user that frequently jogs on a particular track may calibrate the avatar selection logic table by activating a calibration process while jogging at the track. In a calibration process, the mobile device 301 can record the sensor values and device settings during the activity, average the values, and store the average or range within the avatar selection logic table 601. In this manner, the mobile device can record the GPS coordinates of the track, the speed range of the user while jogging, the ambient noise, light and temperature conditions, and the accelerometer readings while jogging. In particular, some sensor values may exhibit characteristic patterns during particular activities that may be recognized and recorded in a calibration process. For example, an accelerometer may be able to recognize when a user is jogging based upon periodic accelerations with values and periodicity consistent with foot falls. The mobile device 301 may also record the device settings selected by the user during the activity.
c illustrates an example embodiment calibration method suitable for completing an avatar selection logic table. To begin the process, a user may select a particular avatar to be calibrated, step 610. This avatar may have already been created by the user and given a name. Alternatively, the user may enter a name for an avatar yet to be created. The user then begins the activity and initiates the calibration, such as by pressing a particular key on the mobile handset, step 612. During the activity, the mobile device 301 records sensor data and device settings, step 614. For some sensors, this may involve recording sensor readings over a period of time along with the time of each recording in order to be able to recognize time-based patterns. After a period of time, the user may end the calibration, such as by pressing a particular key on the mobile device, step 616. Alternatively, the calibration may proceed for a preset amount of time, so that step 616 occurs automatically.
Once calibration data gathering is completed, the processor 391 of the mobile device 301 can analyze the recorded sensor data using well-known statistical processes. For example, the sensor data may be statistically analyzed to determine the average sensor value and the standard deviation of sensor values. This calculation may be used to provide a mean with range (i.e., ±) value characterizing the particular activity. Alternatively, the sensor data may be analyzed to determine the maximum and minimum value, thereby determining the actual range of measurements during the activity. This analysis may be particularly appropriate for GPS coordinate values in order to determine the boundaries of the activity (e.g., perimeter of a jogging track). Sensor data may also be analyzed over time to determine characteristics of the values, such as whether accelerometer readings vary periodically, as may be the case while jogging or walking, or randomly, as may be the case in other activities. More sophisticated analysis of data may be employed as well, such as processing recorded ambient noise to detect and record particular noise patterns, such as the sound of an espresso machine. Once analyzed, the conclusions of the analyzed sensor data and mobile device settings may be stored in the avatar selection logic table 601 in a data record including the avatar name, step 620. The avatar selection logic table 601 may be stored in memory 392 of the mobile device 301. The user may repeat the process of selecting an avatar for calibration, engaging in the associated activity while allowing the mobile device 301 to perform a calibration routine and storing the analyzed sensor data in the avatar selection logic table 601 until criteria have been saved for all of the user's avatars. Optionally, when the avatar selection logic table 601 is completed, the table may be transmitted to another computing device, such as a server 109 on which the user's avatar is hosted, step 622.
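The statistical summary described above (mean with standard deviation, plus minimum and maximum bounds) can be sketched with standard library routines. The field names of the returned record are assumptions for illustration.

```python
import statistics

def summarize_calibration(samples):
    """Summarize recorded sensor samples for one calibration run:
    mean +/- population standard deviation characterizes the activity,
    while min/max give the actual range of measurements."""
    return {
        "mean": statistics.mean(samples),
        "stdev": statistics.pstdev(samples),
        "min": min(samples),
        "max": max(samples),
    }
```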
Users may repeat the process illustrated in
Since users may not know the GPS coordinates of their various activities, and are unlikely to be able to accurately estimate ambient noise and accelerometer characteristics of an activity, this self-calibration method simplifies the process of setting up the avatar selection criteria. Further, users may not recognize how various activities impact their mobile device, and thus this embodiment method enables avatars to be selected based on as many sensors and setting values as can be recorded, even those of which the user may not be aware. For example, a coffee shop may have a number of characteristic background noises, such as the sound of an espresso machine, which the user may not notice.
The avatar selection logic table 601 shown in
To select a particular avatar based upon the values stored in the parameter value table 600, a processor (be it in the mobile device 301, a server 109, or another computing device) can compare each value to corresponding criteria in the avatar selection logic table 601. A variety of algorithms may be employed to determine which avatar's selection criteria are most closely satisfied by the values in the parameter value table 600. In some implementations, a simple sum of the number of satisfied criteria may be sufficient to determine the appropriate avatar to assign. In an embodiment, weighting factors may be applied to selected criteria so that some measured sensor values are given greater weight when selecting an avatar. In another embodiment, one or two criteria may be used to make a preliminary determination of the current status, followed by a comparison of parameter values against confirmatory criteria. For example, GPS and calendar data may be used as primary indicators of particular activities, with noise, light and accelerometer data used to confirm the activity indicated by GPS location or calendar entries. In the illustrated example, the GPS values stored in the parameter value table 600 most closely match the criteria for the running avatar, data record 618. The running avatar can then be confirmed as an appropriate selection by comparing the accelerometer data with the corresponding criteria in the avatar selection logic table 601. In this example, the accelerometer data distinguishes the activity from driving by or walking near the track. By making such comparisons of the values stored in the parameter value table 600 to the criteria in the avatar selection logic table 601, a processor can determine that the "Running" avatar should be displayed.
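The weighted-sum variant of this comparison might look as follows. The table layout, parameter names, and default weight of 1 per satisfied criterion are assumptions for this sketch.

```python
def score_avatars(table, params, weights=None):
    """For each avatar row, sum the (optionally weighted) count of
    satisfied range criteria; return the best-scoring avatar name.
    Primary sensors (e.g., GPS) can be favored via `weights`."""
    weights = weights or {}
    best, best_score = None, -1
    for name, criteria in table.items():
        score = sum(weights.get(k, 1)
                    for k, (lo, hi) in criteria.items()
                    if lo <= params.get(k, lo - 1) <= hi)
        if score > best_score:
            best, best_score = name, score
    return best
```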
In another embodiment, the mobile device 301 may be configured with software instructions to ask the user whether it has correctly diagnosed the current activity (such as running), or ask the user to name the current activity. This method can enable the mobile device 301 to "learn" when parameters meet the desired criteria. Alternatively, in embodiments where avatar files and avatar selection logic tables are hosted on a remote server, the avatar selection logic tables of previous users may be used to populate a new avatar selection logic table for a new user. For example, if a number of previous users have assigned a "jogging" avatar to a set of sensor data which includes a particular GPS location (track), accelerometer readings, and noise and light sensor readings, then an artificial intelligence routine running on the server may recommend the "jogging" avatar to the new user when the same or a similar set of sensor data is generated during a calibration routine. The artificial intelligence routine may analyze each of the hosted avatar selection logic tables to identify patterns or commonalities of assigned avatars and corresponding sensor data. By identifying these patterns, the server may recommend an avatar based upon the sensor data gathered during a calibration process.
For many activities, the GPS sensor location and velocity data will provide a good indication of the avatar to display. However, in many situations multiple avatars may be associated with the same GPS location. For example, data records 610 and 611 in the avatar selection logic table 601 both include GPS location criteria of the user's office. This ambiguity may be resolved by considering the ambient noise level, which if it exceeds 50 dB would indicate a meeting, or by considering the ringtone setting, which if it is set to "silent" would also indicate a meeting, indicating that the "Meeting" avatar should be displayed instead of the "Work" avatar. Alternative embodiments may ask the user (such as by way of a prompt presented on the display) the nature of an activity in order to learn and associate the recorded sensor and setting data with the specific activity. Instead of asking the user to define the nature of the activity, the mobile device 301 may ask the user to identify a particular avatar to be associated with the present activity in the future. In this manner, the displayed avatar may more accurately represent the user's status.
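The office disambiguation rule stated above (ambient noise over 50 dB or a silent ringtone indicates a meeting) can be encoded directly, assuming both candidate rows already match the office GPS location:

```python
def resolve_office_avatar(ambient_db, ringtone):
    """Disambiguate two table rows sharing the office GPS location:
    loud ambient noise or a silenced phone indicates a meeting."""
    if ambient_db > 50 or ringtone == "silent":
        return "Meeting"
    return "Work"
```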
Depending upon the periodicity of "pings" from the server 109, step 455, the user may complete and change activities between avatar updates. As a result, the displayed avatar may not always accurately represent the user's current status. The currency of the displayed avatar can be enhanced by increasing the frequency of avatar update requests sent to the mobile device, step 455, but at the cost of additional processing overhead. Similarly, by increasing the period between avatar update requests sent to the mobile device, step 455, the mobile device 301 may reduce processing overhead. Thus, by varying the periodicity of avatar update requests sent to the mobile device, step 455, a trade-off can be managed between currency and mobile device processor overhead.
The parameter value table 600 is received by the server 109, step 417, where the table may be stored in hard disk memory. In the alternative embodiment, mobile device 301 may transmit sensor parameter values sequentially to the server 109, which then can populate the parameter value table 600 as the data is received in step 417. Once the parameter value table 600 has been received, the values may be compared to avatar selection criteria in the avatar selection logic table 601, step 418. The process of comparing parameter values to the avatar selection criteria may be accomplished in the server 109 in a manner substantially similar to that described above with reference to step 411 and
The foregoing embodiments were described with reference to a server 109 which serves as a host or access point for a user's avatar. These embodiments make use of the current Internet architecture in which avatars are maintained on servers to provide their accessibility. However, alternative embodiments may permit the display of a user's avatar on a second user's computing device (e.g., 113-117) without the need to access a server 109. In these embodiments, avatar files are stored on either the first user's mobile device 301 or the second user's device 313-317, or both. Because the storage and display of avatar files may require significant memory storage space and processor time (particularly if the avatar file is three-dimensional or animated) pre-authorization to request and receive avatar files between the users may be desired. An illustrative embodiment of such a method to display a user's avatar is shown in
Referring to
The foregoing process steps ensure that the mobile device 301 has a current avatar selection stored in memory. A request for the avatar may be sent by a second user, step 460, by e-mail, SMS message and/or a phone call. In response to receiving a request for the avatar, step 461, the mobile device 301 processor 391 recalls the avatar file from memory and transmits the file to the requester, step 462. Upon receiving the avatar file, step 463, the requesting computing device can then display the avatar, step 441.
In an embodiment the processor 391 may compare the source (e.g., return address or telephone number) of the contact request against a list of pre-approved second user devices (e.g., 113-117) to determine if the requesting device is authorized to receive the user's avatar. This list of preapproved users can be similar to a list of friends and family telephone numbers and e-mail addresses.
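A sketch of that authorization check follows; the contact entries are hypothetical placeholders, and a real list would be populated from the user's address book or friends-and-family list.

```python
# Hypothetical pre-approved sources (return addresses / phone numbers).
APPROVED_SOURCES = {"+1-555-0100", "friend@example.com"}

def is_authorized(source):
    """Compare the request's return address or telephone number
    against the pre-approved list before releasing the avatar."""
    return source in APPROVED_SOURCES
```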
In an embodiment, a plurality of the user's avatar files can be pre-stored on the device initiating contact, such as on all computing devices that are preapproved to receive the user's avatar. Unlike the foregoing embodiment where the entire avatar file is transmitted to the requester, only the avatar name needs to be transmitted in order for any requester to recall the avatar file from its own memory. By pre-storing avatar files on pre-approved computing devices, the delay between the avatar request and its presentation to the requester is minimized since only the avatar name is transmitted. However, such embodiments may impose significant storage demands on multiple devices, as well as the time required to download all of the user's avatar files to each preapproved computing device.
In an alternative embodiment, avatar files may be either pre-stored on the pre-approved computing device or transmitted directly to it. To minimize the power consumed by both the mobile device 301 and the requesting device, only an identifier of the selected avatar file is transmitted to the second user's device. The requesting device then checks its local memory to determine whether transmission of the avatar file is necessary. If the avatar file already exists in the local memory of the requesting device, the avatar file is immediately displayed. If the avatar file does not exist in local memory, a request for the avatar file is made and the file is subsequently transmitted for display.
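The cache-then-request behavior can be sketched as follows. `request_file` is a hypothetical callback standing in for the transmission from the first user's device; it is an assumption of this sketch.

```python
def fetch_avatar(name, local_cache, request_file):
    """Display from local memory when present; otherwise request the
    full avatar file once and cache it, so only the identifier needs
    to travel on subsequent requests."""
    if name not in local_cache:
        local_cache[name] = request_file(name)
    return local_cache[name]
```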
a illustrates an alternative to the foregoing embodiment shown in
Similar to the manner in which the embodiment illustrated in
In an embodiment the processor 391 may compare the source (e.g., return address or telephone number) of the contact request against a list of pre-approved second user devices (e.g., 113-117) to determine if the requesting device is authorized to receive the user's avatar. This list of preapproved users can be similar to a list of friends and family telephone numbers and e-mail addresses.
In an embodiment, a plurality of the user's avatar files can be pre-stored on the computing device requesting an avatar. For example, the user's avatar files may be stored on all computing devices that are preapproved to receive the user's avatar. Unlike the foregoing embodiment where the entire avatar file is transmitted to the requester, only the avatar name needs to be transmitted in order for any requester to recall the avatar file from its own memory. By pre-storing avatar files on pre-approved computing devices, the delay between the avatar request and its presentation to the requester is minimized since only the avatar name is transmitted. However, such embodiments may impose significant storage demands on multiple devices, as well as the time required to download all of the user's avatar files to each preapproved computing device.
Alternatively, similar to the embodiments shown in
In an embodiment illustrated in
In alternative embodiments, the user may also set authorization selection criteria to control who may view each avatar, at what times, and during what activities. A user may provide a plurality of avatars corresponding to the same sensor settings but differing depending upon the identity of a requester (i.e., the second user making the request). Some avatars may provide less detail regarding the user's exact activity or may differ significantly from the activity in which the user is actually engaged. Further, a user may set authorization controls to override sensor information to ensure the displayed avatar does not correspond to the user's actual activity.
For example, a user may set authorization levels to ensure that the user's boss may only view an avatar detailing the user's activity during work hours. At other times the avatar may be denied or hidden, or if an avatar is present, it may be a simple avatar indicating that the user is busy without depicting the activity in which the user is engaged. In another example, a user may set authorization levels (also referred to as permission controls) so that the user's boss may view an avatar depicting the user at work, when the sensor data indicates that the user is at the gym.
Any of a variety of methods may be implemented to check a requestor's authorization level. For example, a requestor may be asked to provide some form of authentication credential, such as a user name and password, to verify that the requestor is authorized to receive an avatar and/or has a particular authorization level. Alternatively, some specific computing devices may be authorized to view selected avatars. The processor may check a static internet protocol (IP) address of the computing device submitting a request for the avatar to determine if the computing device is authorized to receive the avatar, such as by comparing the static IP address received in the avatar request message to a list of static IP addresses authorized to receive the avatar. Any method that authenticates a requestor or requesting device as authorized, or that categorizes a requestor or device into various authorization levels, may be used to perform the step of checking the authorization level of a requestor. Once the determination is made of whether the requestor is authorized to receive a particular avatar (or the level of the requestor's authorization), this criterion may be entered into the parameter table as another criterion to determine which avatar to display or transmit to the requestor.
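The static-IP variant of the check can be sketched as a lookup that returns an authorization level for entry into the parameter table. The addresses and level names are illustrative placeholders (documentation-range IPs per RFC 5737).

```python
# Hypothetical mapping of pre-authorized static IPs to levels.
AUTH_LEVELS = {"203.0.113.7": "family", "198.51.100.4": "buddy"}

def authorization_level(ip):
    """Map a requesting device's static IP address to an authorization
    level; unrecognized requestors receive the level "none"."""
    return AUTH_LEVELS.get(ip, "none")
```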
Using information gathered from the various mobile device sensors, the various mobile device settings and calendar data, a processor can infer a user's current status. Combining this inference with the requestor's authorization level, the processor may select an avatar to display in accordance with the user's preferences, step 404. As discussed above with respect to the various embodiments illustrated in
Similar to embodiments discussed above, data from mobile device sensors (e.g., 350-356), the user's calendar, and the mobile device settings can be stored in a parameter value table 602. Such data may be stored in the form of absolute values (i.e., the raw sensor information), or in the form of processed information (i.e., interpreted sensor information). This distinction turns on the amount of processing of the information that is done before it is stored in the parameter value table 602. In addition, the parameter table 602 may include an additional column for recording whether the requestor is authorized or the requestor's authorization level, if there are more than two levels. For example,
With sensor, calendar, setting data and the authorization level of the requestor stored in a parameter value table 602, this information can be compared to criteria stored in an avatar selection logic table 603. An illustrative example of an avatar selection logic table 603 is shown in
As described above with reference to
The authorization level may be simply a binary level denoting whether a requestor is authorized. In such a binary system, two different avatars may be displayed for identical selection criteria of sensor, calendar and settings data depending on whether the requestor is authorized. For example, a more general avatar may be displayed in instances where the requestor is not authorized while a detailed or more accurate avatar may be displayed for the identical selection criteria of sensor, calendar and setting data if the requestor is authorized. The first user may simply elect to assign a value of “no avatar,” meaning that no avatar is to be displayed or transmitted if the requestor is not authorized. Alternatively, a first user may set multiple levels of authorization, each resulting in the display of a different avatar for identical selection criteria of sensor, calendar and settings values.
In the example illustrated in
In contrast, if the requester is authorized (i.e., authorization level stored in table 602 is “yes”), this may mean that the requestor is a co-worker, boss, or family member (for example) to which the user wants to disclose an accurate avatar. For such requesters, the user may wish to accurately indicate the activity in which the user is engaged so that more information will be conveyed to a known requestor. Thus, if the requestor is authorized, the “meeting” avatar may be displayed showing the user engaged in a meeting or presentation.
In the case of data records 652 and 653, the selection criteria include: a location at home and low velocity as recorded by a GPS sensor; "dark" ambient light conditions; and zero accelerometer readings (e.g., consistent with sitting or sleeping). Data records 652 and 653 may omit values for some parameters which the user anticipates are not likely to help resolve the user's status for that particular avatar. For example, the user has decided that the ambient temperature, calendar data, wallpaper and ring tone values provide no additional value in selecting either avatar for display. Rather, the user may feel that if the user is at home, the user is likely sleeping. However, the user may not want to indicate to co-workers, or more specifically the user's boss, that the user is sleeping. Thus, only authorized requestors, such as friends and family members, will receive the "sleeping" avatar. Non-authorized requesters such as the user's boss will simply receive a "busy" avatar according to the selection criteria in data record 653.
A user may program the avatar selection logic table 603 with multiple levels of authorization such that more than two different avatars may be displayed for the identical sensor, calendar and settings data depending upon the authorization level of the requestor. For example, in data records 654-656, the selection criteria include: a location at the golf course and low velocity as recorded by a GPS sensor; "Bright" ambient light conditions; zero to low accelerometer readings (e.g., consistent with walking or riding in a golf cart); ambient temperature greater than 75 degrees; and a calendar setting indicating a weekday. In such circumstances, the user may not wish to inform either the user's spouse or boss that the user is golfing on a weekday during business hours. Accordingly, a requester whose authorization level indicates that the requestor is on the user's buddy list will be sent the "golfing" avatar (see data record 655). However, the user's spouse may have an authorization level of "family" which causes the "busy" avatar to be selected for display. Additionally, the user's boss may have an authorization level which would cause a "work" avatar to be selected for display.
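The golf-course example, in which identical sensor criteria map to different avatars per authorization level, reduces to a per-level lookup once the sensor criteria have matched. The level and avatar names mirror the example's data records 654-656.

```python
# Per-level avatars for one matched set of sensor criteria
# (golf course, weekday), mirroring data records 654-656.
GOLF_ROWS = {"buddy": "Golfing", "family": "Busy", "boss": "Work"}

def select_by_auth(level, rows, default="no avatar"):
    """Pick the avatar for the requestor's authorization level;
    unauthorized requestors get the default (here, no avatar)."""
    return rows.get(level, default)
```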
Users may populate the avatar selection logic table 603 in a manner similar to that described above with reference to
Once analyzed, the conclusions of the analyzed sensor data and mobile device settings may be stored in the avatar selection logic table 603 in a data record including the authorization level and avatar name, step 620. The avatar selection logic table 603 may be stored in memory 392 of the mobile device 301. The user may repeat the process of selecting an avatar for calibration, engaging in the associated activity while allowing the mobile device 301 to perform a calibration routine and storing the analyzed sensor data in the avatar selection logic table 603 until criteria have been saved for all of the user's avatars. This may include multiple avatar settings for different authorization levels. Optionally, when the avatar selection logic table 603 is completed, the table may be transmitted to another computing device, such as a server 109 on which the user's avatar is hosted, step 622. In addition, a learning method as described above with respect to avatar selection logic table 601 may be implemented.
Users may repeat the process illustrated in
Once the webpage access request is received, the processor of the server 109 can implement any of a number of methods discussed previously to check the authorization level of the person or device requesting access to the webpage (the "second user"), step 501. Data regarding the second user may be sent from the second user's device back to the server 109 processor to complete the check authorization level step, step 503.
Once the authorization level of the second user is determined, this information is stored in the parameter table held in memory of the server 109. The data record of the parameter table is compared against the criteria in an avatar selection logic table 603 stored in memory of the server 109 to determine which avatar to display, step 411. As described more fully above, a variety of method embodiments may be used to evaluate the stored parameter data and select a particular avatar for display. For example, the processor of the server 109 may select the avatar for which the greatest number of selection criteria are satisfied by the parameters stored in the parameter data table. Once the appropriate avatar has been selected, the processor of the server 109 can insert the selected avatar into the webpage and send it to the requestor as described more fully above with reference to
a is a process flow diagram of another embodiment method suitable for displaying an avatar selected based upon sensor and setting data as well as the authorization level of a second user directly on the requesting device. The embodiment illustrated in
Alternatively, similar to the embodiments shown in
Similar to the embodiment illustrated in
Again, similar to the embodiments shown in
Similar to the embodiment illustrated in
In an embodiment, artificial intelligence routines may be implemented on the mobile device to prompt users to select an avatar to display when repeated parameter patterns are recognized. For example, if a mobile device continuously polls the GPS sensor 354 and an artificial intelligence application notices that the device is at the same GPS location coordinates between 9 am and 5 pm during weekdays, the processor may prompt the user to identify the name of the location, and suggest a name of “Work”. Alternatively, if the processor 391 notices that the user's pulse detected by a pulse sensor (not shown) has risen above 100 beats per minute, the processor 391 may prompt the user to identify the user's current activity as “exercise.” At the same time, if the processor 391 recognizes from the GPS sensor that the mobile device 301 is moving at a speed greater than 3 miles per hour, the processor 391 may prompt the user to identify the user's current activity as “Running.”
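A simple form of the recurring-pattern recognition described above can be sketched as counting repeated GPS fixes during business hours. The sample format, business-hours window, and hit threshold are assumptions for this sketch; a real routine would also consider weekdays and pattern persistence across days.

```python
from collections import Counter

def recurring_location(samples, min_hits=5):
    """Find a GPS coordinate seen repeatedly between 9 am and 5 pm,
    so the device can prompt the user to name it (e.g., "Work").
    `samples` is a list of (coordinate, hour_of_day) tuples."""
    hits = Counter(coord for coord, hour in samples if 9 <= hour <= 17)
    common = hits.most_common(1)
    if common and common[0][1] >= min_hits:
        return common[0][0]
    return None
```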
In embodiments where the number of avatar files stored in memory is limited, such as when the avatar files are stored directly on the computing device that will display them, fewer and more generic avatars may be desired. In such instances, fewer parameters may be needed to accurately reflect a user's current status. Conversely, if avatar files are stored on a computing device with greater storage and processing capabilities, such as server 109, the number of avatar files may be increased, as well as the level of precision in matching an avatar with the user's current status.
In further embodiments, parameter data may cause an avatar to change consistent with changes in the user's activity. By refining the avatar selection table 601, 603, varying avatars may be displayed in response to changes in parameter data. For example, if a user is running, as inferred by the GPS sensor 354 measuring a velocity of about 6 miles per hour and an accelerometer 353 indicating a periodicity of accelerations consistent with running, the avatar selected for display may be an animated image of a runner. As the speed recorded by the GPS sensor 354 increases, the avatar selected for display may show the running image moving faster, and/or show the increased effort by displaying an avatar that is running, sweating and breathing harder. Including such additional avatars, including animated avatars, simply requires another line in the avatar selection logic table 601, 603 linking the increased speed as a requirement to display a new avatar file.
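Adding a speed-banded row is then just another entry in an ordered list of thresholds. The thresholds and avatar file names below are illustrative assumptions.

```python
# Ordered (minimum speed in mph, avatar name) rows; adding a faster
# band means adding one more row at the top of this list.
SPEED_ROWS = [(8, "runner_fast"), (4, "runner"), (0, "walker")]

def speed_avatar(speed_mph):
    """Select the animated avatar for the first speed band matched."""
    for threshold, name in SPEED_ROWS:
        if speed_mph >= threshold:
            return name
```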
By implementing the various methods disclosed herein, a first user can provide a second user with an accurate representation of the first user's current activity. For example, in an Internet forum setting, the displayed avatars could dynamically change as the user changes status. In conventional usage, a user may proactively change his/her avatar that is displayed to other members of the Internet forum. However, the avatar will only change if the user proactively changes the file selected for display. In the embodiments disclosed herein, other members of the Internet forum may observe the user's avatar change automatically as the user changes activity or status. In this way, the user no longer has to actively alter the avatar to be displayed in order to reflect his or her status. One of ordinary skill in the art would appreciate that similar applications can be implemented in instant messaging, text messaging, or even regular phone call situations.
The various embodiments provide a number of new applications for mobile devices. Such applications include improving communications with colleagues, monitoring activities of children, broadening participation in games, and medical monitoring.
As mentioned above, an avatar can quickly communicate information regarding a user since “a picture is worth a thousand words.” Users may select avatars and program the avatar selection criteria so that colleagues can quickly determine their status prior to sending an e-mail or making a telephone call. For example, if a colleague has access to a user's avatar, a quick view of the avatar prior to sending an e-mail or placing a telephone call will inform the colleague whether the user is involved in some activity that will preclude a prompt reply, such as out for a run, in a meeting, or on vacation. Access links to avatars (e.g., a hyperlink to an IP address hosting the avatar) may be incorporated into address books so that if an individual has been given access rights to a user's avatar the individual can check on the user's status prior to or as part of sending an e-mail or placing a telephone call. In this manner, the user of an avatar can proactively inform selected colleagues of the user's status. For example, by posting an avatar showing the user is in a meeting (or on travel or vacation), those who may want to contact the user will be informed that a call will not be answered and e-mail may not be promptly read.
In an application of the various embodiments, parents may be able to keep track of children when they are out of their sight. For example, children wearing a mobile device configured according to one or more of the embodiments can be tracked by parents accessing a website that displays avatars of their children including their location and present activity. For example, children involved in a game of “hide ‘n’ seek” may be monitored by their parents viewing a map of the play area which includes avatars indicating the location and movement/status of each child.
In game settings, the displayed avatar may be more closely linked to the movements and activities of the user's real world movements and activities. For example, the various embodiments may enable a user to be involved in a game of paintball while spectators watch the paintball match in a virtual world representation. Each participant can be equipped with a mobile device 301 including a suite of motion, position and location sensors, each reporting sensor data in near-real time to a central server 109. Additionally, position sensors may be outfitted on users' limbs and coupled to the mobile device by a wireless data link (e.g., Bluetooth) to provide data on the posture and movements of participants, such as the direction in which an individual is facing or aiming a paintball gun. An avatar representing each paintball participant can be generated in the virtual world representation of the match, with each user's avatar changing location and activity (e.g., running, sitting, hiding) based on the mobile device 301 sensor data which are sampled in real time. The virtual world avatar representations may therefore accurately mimic the movement and activity of the real world users carrying the mobile devices 301.
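The data flow just described can be sketched as follows; the report fields and the server-side update function are illustrative assumptions, not a defined protocol:

```python
# Hypothetical sketch of a near-real-time sensor report that a paintball
# participant's mobile device 301 might send to the central server 109,
# and the server-side update of the virtual world representation.

def make_sensor_report(user_id, gps, speed_mph, posture):
    """Bundle sampled sensor data for transmission to the server."""
    return {
        "user": user_id,
        "gps": gps,               # (latitude, longitude)
        "speed_mph": speed_mph,
        "posture": posture,       # e.g., "running", "sitting", "hiding"
    }

def update_virtual_world(world, report):
    """Move the user's avatar to match the reported location and activity."""
    world[report["user"]] = {
        "location": report["gps"],
        "activity": report["posture"],
    }
    return world
```

Repeating this update for each participant's report, at the sensor sampling rate, is what lets spectators see the virtual match track the real one.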
The same configurations that apply to games could equally be applied to training exercises of a national defense nature.
In a medical monitoring application, medical sensors on the mobile device 301 or connected to a processor by wireless data links (e.g., Bluetooth) can report their data (e.g., through the mobile device 301) to a system that uses such information to select an avatar that reflects the patient's current status. In a medical setting, the processor need not be mobile, and instead may be associated with a facility, such as an emergency room or hospital information system. Sensor data associated with patients can be received from a variety of medical sensors coupled to each patient, such as blood pressure, pulse, EKG, and EEG sensors for example. Avatar selection criteria associated with each of the sensors may be used to select an avatar that reflects a patient's medical needs or condition. For example, if a medical sensor provides data that satisfies an avatar selection criterion for a patient in distress, a processor can select an avatar consisting of the patient's photograph with a red background, and display that avatar at a nursing station. The use of such an avatar can more efficiently communicate critical information than text (e.g., the patient's name and the medical data) presented on the screen. As another example, a pacemaker may be configured to transmit information regarding the condition of the device or the patient's heart to a mobile device, such as by means of a Near Field Communications data link, which can relay the data to a server accessible by the patient's doctor. That server can use the patient's pacemaker data to select an appropriate avatar to efficiently communicate the patient's status to the doctor.
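For illustration only, the nursing-station selection just described might reduce to per-sensor criteria checks of the following form; the thresholds and identifiers are assumptions, not clinical values:

```python
# Hedged sketch of medical avatar selection: sensor readings are checked
# against distress criteria, and any match selects the distress avatar
# (the patient's photograph on a red background). Thresholds are
# illustrative assumptions, not clinical guidance.

DISTRESS_CRITERIA = {
    "pulse_bpm": lambda v: v < 40 or v > 140,
    "systolic_bp": lambda v: v < 80 or v > 190,
}

def select_patient_avatar(photo, readings):
    """Return (photo, background_color) for the nursing station display."""
    for sensor, in_distress in DISTRESS_CRITERIA.items():
        if sensor in readings and in_distress(readings[sensor]):
            return (photo, "red")      # patient in distress
    return (photo, "neutral")
```

The red-background avatar thus functions as a single-glance alarm, conveying both identity and urgency faster than a textual readout of the same data.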
The hardware used to implement the foregoing embodiments may be processing elements and memory elements configured to execute a set of instructions, wherein the set of instructions are for performing method steps corresponding to the above methods. Alternatively, some steps or methods may be performed by circuitry that is specific to a given function.
Those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in a processor readable storage medium and/or processor readable memory, both of which may be any of RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other tangible form of data storage medium known in the art. Moreover, the processor readable memory may comprise more than one memory chip, memory internal to the processor chip, memory in separate memory chips, and combinations of different types of memory such as flash memory and RAM memory. References herein to the memory of a mobile handset are intended to encompass any one or all memory modules within the mobile handset without limitation to a particular configuration, type or packaging. An exemplary storage medium is coupled to a processor in either the mobile handset or the theme server such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC.
The foregoing description of the various embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein, and instead the claims should be accorded the widest scope consistent with the principles and novel features disclosed herein.