1. Field of the Invention
The invention relates generally to providing a user-profile image of a user at a webpage and, more particularly, to providing a personalized user-profile image that changes according to context, content and mood of the user as reflected in the webpage.
2. Description of the Related Art
An avatar is an Internet user's representation of himself or herself, commonly in the form of a two-dimensional icon, that can be used in Internet forums and other virtual communities. For example, Internet users today can use avatars to communicate their activities, locations, or moods to other users. However, these avatars or user-profile images on various webpages and blogs are generally static. In order for an avatar to communicate the user's various characteristics, such as mood, location, or activity, the user must manually update the avatar each time there is a change in the user's context.
In view of the foregoing, there is a need to automatically update an avatar based on user context in a manner that ensures that a user's avatar is an accurate representation of the user's current state of mind or contextual interest.
Embodiments of the disclosure provide methods and systems that define a mechanism for generating contextual user-profile images for a user on a webpage. In some embodiments, the user-profile images are dynamically generated and automatically changed to reflect a user's current mood or contextual interest based on context and content provided in the webpage by the user. In other embodiments, changes to user-profile images may be identified and implemented only upon obtaining permission from the relevant user(s). The changing of user-profile images is implementation-specific. The mood change mechanism can be incorporated in more than one webpage that a user uses for social interaction and allows synchronization of the user-profile image amongst the different webpages. It should be appreciated that the present embodiments can be implemented in numerous ways, such as a process, an apparatus, a system, a device, or a method on a computer-readable medium. Several embodiments are described below.
In one embodiment, the present invention provides a method for generating a contextual user-profile image on a webpage. The method includes capturing textual content provided by a user at the webpage. The textual content is parsed to identify keywords related to context. The keywords are contextually analyzed to identify one or more mood indicators. The current mood or contextual interest of the user is identified based on the one or more mood indicators. One or more modifiers for applying to the user-profile image are determined. The user-profile image is updated to incorporate the modifiers so as to reflect the current mood or contextual interest of the user. The updated user-profile image is returned to the webpage for rendering, in response to the textual content received from the user.
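The steps above can be illustrated with a minimal sketch, assuming a toy keyword-to-mood table and hypothetical function names; `extract_keywords`, `identify_mood`, `update_profile_image`, and the `MOOD_KEYWORDS` entries are invented for this example and are not part of the claims:

```python
import re

# Toy keyword-to-mood lookup; a real system would use a larger mapping.
MOOD_KEYWORDS = {
    "vacation": "happy",
    "won": "happy",
    "lost": "sad",
    "delayed": "frustrated",
}

def extract_keywords(text):
    """Parse captured textual content into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def identify_mood(keywords):
    """Map keywords to mood indicators and pick the most frequent mood."""
    moods = [MOOD_KEYWORDS[k] for k in keywords if k in MOOD_KEYWORDS]
    if not moods:
        return "neutral"
    return max(set(moods), key=moods.count)

def update_profile_image(image, mood):
    """Apply mood modifiers to the image's adjustable attributes."""
    modifiers = {
        "happy": {"mouth": "smile", "eyes": "open"},
        "sad": {"mouth": "frown", "eyes": "downcast"},
        "frustrated": {"mouth": "tight", "brows": "furrowed"},
        "neutral": {"mouth": "neutral"},
    }
    updated = dict(image)
    updated.update(modifiers[mood])
    return updated

post = "Our flight was delayed but the vacation was great -- we won a prize!"
mood = identify_mood(extract_keywords(post))
avatar = update_profile_image({"face": "round"}, mood)
```

Here "happy" wins because two keywords map to it and only one maps to "frustrated"; the updated attribute dictionary stands in for the image returned to the webpage.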
In another embodiment, a method for generating a contextual user-profile image on a webpage is provided. The method includes obtaining a user identifier of a user accessing the webpage. It is then determined whether a user-profile image exists for the user associated with the user identifier. If a user-profile image does not exist for the user, then user profile data defining user attributes provided by the user is identified. A user-profile image for the user is generated from a default image using the user attributes defined in the user profile data. The generated user-profile image is dynamically modified to reflect a current mood or contextual interest of the user based on analysis of textual content provided by the user at the webpage. The modified user-profile image is returned to the webpage for rendering, in response to the textual content received from the user.
In yet another embodiment, a system for generating a contextual user-profile image on a webpage is disclosed. The system includes a server device and a client device, each coupled to the Internet. The server is equipped with server-side application programming interface (API) code for generating the contextual user-profile image on the webpage. The server-side API is configured to receive an API call from the webpage. The API call includes one or more call parameters and requests an updated user-profile image of a user. In response to the API call, the server-side API generates the updated user-profile image based on content provided by the user at the webpage. The generation of the user-profile image by the server-side API includes downloading the content provided by the user from the webpage; extracting keywords from the downloaded content; identifying one or more mood indicators by performing contextual analysis of the extracted keywords; determining a current mood or contextual interest of the user based on the one or more mood indicators; identifying modifiers for applying to the user-profile image for the current mood or contextual interest; and updating the user-profile image of the user by incorporating the identified modifiers into the user-profile image. The updated user-profile image reflects the current mood or contextual interest of the user. The client device is used to request and render the webpage, wherein the webpage includes a client-side API that is configured to interact with the server-side API for requesting and receiving content data of the webpage and the updated user-profile image data. The webpage is a primary page used by the user for social interaction.
In another embodiment, the present invention provides computer-readable media equipped with programming instructions which, when executed by a computer system, direct the computer system to generate a contextual user-profile image on a webpage. The computer-readable media comprise instructions for capturing textual content received from a user at the webpage. The computer-readable media further comprise instructions for parsing the textual content to identify keywords related to context; performing contextual analysis of the keywords to identify one or more mood indicators; identifying a current mood of the user based on the mood indicators; determining modifiers for the user-profile image for the current mood or contextual interest; and updating the user-profile image to incorporate the modifiers so as to reflect the current mood or contextual interest. The updated user-profile image is returned to the webpage for rendering, in response to the textual content received from the user.
Other aspects and advantages of the invention will become apparent from the following detailed description, taken in conjunction with the embodiments and accompanying drawings, illustrating, by way of example, the principles of the invention.
The invention, together with further advantages thereof, may best be understood by reference to the following description taken in conjunction with the accompanying drawings.
Embodiments of the present invention provide systems, methods and computer-readable media for generating a context-related user-profile image for a user on a webpage based on the context of content provided by the user. More particularly, according to various embodiments of the present invention, an image-generating mechanism of the present invention can contextually analyze content provided by a user at the webpage, identify various mood modifiers based on the contextual analysis, and dynamically update the user-profile image of the user to incorporate the identified mood modifiers. The resulting user-profile image or "avatar" of the user reflects the current mood, state of mind and/or contextual interest of the user as reflected in the content presented by the user at the webpage. The image/avatar is returned to the webpage for publishing or rendering. The webpage represents a virtual environment. Such virtual environments can include a webpage, an Internet forum, a virtual community, or any other virtual environment that is used for social interaction by the user. This approach allows a user's avatar/user-profile image to be automatically updated without requiring the user to explicitly access and update the image, thereby providing better image personalization.
According to embodiments of the present invention, the user-profile image (or "avatar") includes multiple parts that can be updated using one or more attributes including, but not limited to: the shape and color of the user's facial structure and features, such as eyes, nose, lips, ears, and forehead; the user's body type and size (upper and lower); the type of clothes worn (upper and lower, including color and style); accessories worn on different parts of the body (hats, shoes, glasses, gloves, scarves, etc.); occasion- or activity-specific backgrounds (e.g., a ski resort during a skiing vacation, a high-rise hotel during a business trip, an ocean-related theme during a cruise, a favorite breed of dog); occasion-specific props (e.g., branded t-shirts for an occasion such as the soccer World Cup or the Super Bowl); flags during a Presidential election campaign; etc. The above list is exemplary and should not be considered restrictive. Additional attributes related to the user-profile image may be captured and updated based on the content provided by the user. The various embodiments thus allow for selection of individual items for each of the above types of attributes from a large collection, and combine them at avatar-creation or avatar-update time to create the displayed user-profile image.
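As a hedged illustration of combining one individually selected item per attribute type, the following sketch assumes a small hypothetical part catalog; the attribute names and item values in `PART_CATALOG` are invented for this example, not a prescribed schema:

```python
# Toy catalog: one list of selectable items per attribute type.
PART_CATALOG = {
    "face_shape": ["round", "oval", "square"],
    "eyes": ["brown", "blue", "green"],
    "clothing_upper": ["t-shirt", "jacket", "jersey"],
    "accessory": ["none", "hat", "sunglasses", "scarf"],
    "background": ["plain", "ski-resort", "hotel", "ocean"],
}

def compose_avatar(selections):
    """Combine one item per attribute type into a display avatar,
    validating each choice against the catalog and falling back to
    the first catalog item for unspecified attributes."""
    avatar = {}
    for part, options in PART_CATALOG.items():
        choice = selections.get(part, options[0])
        if choice not in options:
            raise ValueError(f"unknown {part}: {choice}")
        avatar[part] = choice
    return avatar

# A skiing-vacation avatar: occasion-specific background plus a prop.
ski_avatar = compose_avatar({"background": "ski-resort", "accessory": "hat"})
```

At avatar-update time only the changed selections need to be supplied; every other attribute keeps a default.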
The embodiments allow for automatically changing a user's avatar on the webpage in which it is embedded, such as a primary blog page or webpage, according to the context, content and mood of the user providing input at the webpage. The embodiments also allow for automatic updating of the user's avatar in various secondary webpages that the user accesses for social interaction, by using the updates from the primary blog page or webpage. Still further, the embodiments allow synchronization of the user-profile image between a primary webpage and a secondary webpage: the avatar/user-profile image can be automatically updated on both a primary webpage/blog page and a secondary webpage/blog page, and updating the user-profile image at either one of the pages causes both to reflect the most current mood or contextual interest of the user. It should be noted herein that, for the sake of simplicity, the various embodiments described in detail are directed toward generating and updating a user-profile image to reflect current mood by identifying and adjusting attributes related to a user, such as facial and other physical features. The current mood, as used in this application, does not only include various mood-related attributes defined by the facial and physical features of a user-profile image, but may also include attributes that encompass the current state of mind and/or contextual interest of the user. Thus, the current state of mind and/or contextual interest of the user may be expressed by both mood-related attributes and non-mood-related attributes. The mood-related attributes may set a neutral expression in the user-profile image, and the non-mood-related attributes may be used to set background images, accessories, etc., to reflect the context provided by the user.
In the description set forth herein for embodiments of the present invention, numerous specific details are provided, such as examples of components and/or methods, to provide a thorough understanding of embodiments of the present invention. One skilled in the relevant art will recognize, however, that an embodiment of the invention can be practiced without one or more of the specific details, or with other apparatus, systems, assemblies, methods, components, materials, parts, and/or the like. In other instances, well-known structures, materials, or operations are not specifically shown or described in detail to avoid obscuring aspects of embodiments of the present invention. The present invention includes several aspects and is presented below and discussed in connection with the Figures and embodiments.
In one embodiment of the present invention, the server 104 is configured to create or update the context avatar of the user 101 by invoking the image generation script 116 in response to a user action at the client device 102. The user action can be an explicit user request for a webpage or entry of textual content on the webpage. The generated/updated context avatar of the user can be a composite image. The composite image can include, among other things, a virtual person image of the user with distinguishing features that can be adjusted to reflect a current mood of the user. In addition, the composite image can also include features related to a background image that is defined/adjusted by the script 116 based on contextual analysis of the content provided by the user at the webpage.
In the case where the user has a user-profile image defined, the script identifies and includes the user-profile image of the user in the webpage returned to the client. In the case where there is more than one user-profile image defined for the user, the script 116 selects the user-profile image that is currently active, or the primary image, to include in the webpage returned to the client in response to the request. The webpage returned to the client is rendered, whereupon the user is allowed to socially interact with other users.
Upon the rendering of the webpage, the user may provide comments related to content at the webpage. The comments may be in the form of textual content, such as a weblog (i.e., blog) entry in response to an article published on the webpage, a blog post in response to a comment posted by another user on the article published on the webpage, or a blog post initiated by the user. The client-side script component 116-a at the webpage includes logic to detect the input from the user and generate an event trigger. The event trigger may be initiated automatically based on user action at the webpage. The client-side script 116-a defines the various event triggers that can be initiated based on user action at the webpage. For instance, the event trigger may be initiated by an event, such as a page load event, a page save event, a page update event, etc., based on the related user action at the webpage. The above list of events is exemplary and should not be considered restrictive. The event trigger may also be initiated automatically based on a programmable variable. For instance, the variable may be a time-based variable that fires after the lapse of a certain period of time, such as after every minute, after every 5 minutes, etc., or after a certain period of inaction at the webpage; or it may be a content-based variable, such as the presence of a delimiter in the textual content provided by the user, the occurrence of a certain number of delimiters in the textual content, etc. In one embodiment, the automatic event trigger may be initiated dynamically during run-time as the user continues to enter textual content on the webpage.
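The programmable trigger variables described above might be sketched as follows; the 60-second interval, the period delimiter, and the threshold of two delimiters are illustrative assumptions, not prescribed values:

```python
def time_trigger(elapsed_seconds, interval=60):
    """Time-based variable: fire after a programmable period elapses."""
    return elapsed_seconds >= interval

def delimiter_trigger(text, delimiter=".", count=2):
    """Content-based variable: fire once the user's text contains a
    given number of delimiters (e.g. after a couple of sentences)."""
    return text.count(delimiter) >= count

def should_submit(elapsed_seconds, text):
    """Combine the triggers: any one firing initiates a submit request."""
    return time_trigger(elapsed_seconds) or delimiter_trigger(text)
```

In the embodiment described, the client-side script would evaluate such conditions at run-time as the user types and issue the submit request when one fires.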
In response to the event trigger, the script component 116-a generates a submit request and transmits the request to the server. The client-side component 116-a communicates the submit request to the server-side script 116. The submit request is routed to the script 116 through the client-side API and the server-side API. In one embodiment, the request is communicated to the server using AJAX communication. Asynchronous JavaScript and XML (AJAX) communication is a communication technique that is used by a browser on a client device to request and receive information from a server asynchronously without interfering with the rendering and behavior of the webpage.
The server 104 receives the submit request and the trigger event and, in response, invokes the script 116 to generate or update the user-profile image. The script 116 receives the request and, in response, downloads the textual content provided by the user at the webpage. The script then parses the downloaded textual content to identify keywords. In one embodiment, the script 116 may use a search algorithm, or an algorithm similar to a search algorithm, to analyze the textual content and identify the token keywords. Other algorithms may be used, so long as the algorithm is capable of analyzing the textual content and identifying the keywords. The script then performs contextual analysis of the keywords to identify certain mood and context indicators. In one embodiment, the mood indicators describe one or more facial or bodily expressions defining a mood or activity related to the user. For instance, the mood indicators may include expressions such as sad, happy, angry, frustrated, etc. The aforementioned list of expressions is exemplary and should not be considered restrictive. In one embodiment, the context indicators may describe a background to define a mood or activity associated with the user based on the contextual analysis of the content. For instance, a context indicator may define a background for the user based on a location indicated in the content, such as an exotic vacation spot or the location of a business trip, or based on an interest of the user as provided in the content, such as a pet-related background, biking, or another hobby-related background.
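A minimal sketch of separating the two indicator types follows; the small keyword tables (`MOOD_TABLE`, `CONTEXT_TABLE`) and their entries are invented for this example:

```python
# Toy lookup tables: mood indicators describe expressions, context
# indicators describe backgrounds. Real tables would be far larger.
MOOD_TABLE = {"miserable": "sad", "thrilled": "happy", "furious": "angry"}
CONTEXT_TABLE = {"beach": "ocean-theme", "meeting": "hotel", "puppy": "pet-theme"}

def classify_keywords(keywords):
    """Split extracted keywords into mood indicators (expressions)
    and context indicators (backgrounds)."""
    moods = [MOOD_TABLE[k] for k in keywords if k in MOOD_TABLE]
    contexts = [CONTEXT_TABLE[k] for k in keywords if k in CONTEXT_TABLE]
    return moods, contexts

moods, contexts = classify_keywords(["thrilled", "beach", "puppy"])
```

The mood indicators would drive the facial-expression modifiers, while the context indicators would drive background selection.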
In one embodiment, the script may rely on a mapping table to identify the appropriate mood and context indicators that relate to specific keywords. The mapping table is generated by first extracting keywords from descriptions of user-profile image sub-components. In one embodiment, a default user-profile image may be used for defining the keywords that describe the user-profile image sub-components. The mapping table is indexed by the keywords and stored in a database for subsequent matching by the script 116. The script 116 uses the keywords identified from the analysis of the textual content provided by the user at the webpage and finds matches with the corresponding keywords in the mapping table to identify the corresponding mood indicators. In one embodiment, when more than one mood indicator matches a particular keyword, the script 116 includes an algorithm to score each of the mood indicators. The scoring may be based on the type of the mood indicator and the user interaction at each of the identified mood indicators over time. Scoring based on user interactions related to the respective mood indicators for a particular keyword is indicative of the strength of the relationship of the respective mood indicators to the particular keyword, such that the mood indicator with the highest score defines the strongest match to the keyword. As a result, the script 116 selects the mood indicators with the highest scores as matches to the keywords. The context indicators are similarly selected by the script 116. It should be noted herein that although the various embodiments are directed toward identifying only mood indicators to define a current mood, the embodiments can be easily extended to identify context indicators to define a state of mind of the user.
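The keyword-indexed mapping table and the interaction-based tie-breaking might look like the following sketch; the sub-component descriptions and interaction counts are invented for illustration:

```python
from collections import defaultdict

def build_mapping_table(subcomponents):
    """Index avatar sub-components by the keywords appearing in
    their descriptions, as the mapping table is built offline."""
    table = defaultdict(list)
    for name, description in subcomponents.items():
        for word in description.lower().split():
            table[word].append(name)
    return table

def pick_indicator(candidates, interaction_counts):
    """When several indicators match one keyword, choose the one
    whose accumulated user interactions score highest; more
    interactions imply a stronger keyword-indicator relationship."""
    return max(candidates, key=lambda c: interaction_counts.get(c, 0))

# Two sub-components both described with "happy": a tie to break.
table = build_mapping_table({
    "smile": "happy cheerful grin",
    "party-hat": "happy celebration birthday",
})
best = pick_indicator(table["happy"], {"smile": 12, "party-hat": 3})
```

In a deployed system the table would be persisted in the database and the interaction counts accumulated over time, as the passage describes.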
The script 116 then identifies the current mood or state of mind of the user using the mood and context indicators. The mood and context indicators may identify more than one mood. When the identified mood indicators identify different moods, the script 116 may rely on a ranking algorithm to identify the mood indicators that define the current mood. For instance, based on the contextual analysis of the keywords, the script 116 may identify both a sad mood and a happy mood for the user. The various moods identified by the script are exemplary and should not be considered restrictive. In order to define the current mood, the script 116, in one embodiment, weighs the mood indicators related to one mood against the mood indicators related to the other mood using a ranking algorithm, and the current mood is identified based on the relative ranking of the respective mood indicators. The ranking algorithm is provided within the script 116 or is available to the script at the server 104. The ranking algorithm, in one embodiment, may include logic to rank the respective mood indicators for each of the moods based on the order in which they appear in the textual content provided by the user at the client device. For instance, the ranking algorithm may give more weight to the mood indicators associated with the last sentence in the textual content provided by the user and less weight to the mood indicators associated with earlier sentences of the textual content. Based on the relative ranking, the script 116 identifies the related mood indicators to reflect a current mood of the user. The aforementioned weighted ranking is one way of identifying the appropriate mood indicators for defining a current mood of the user. Other ranking algorithms may be used to identify the mood indicators to reflect the current mood of the user.
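One possible realization of such position-weighted ranking is sketched below, assuming a linear weight that grows with sentence position so that later sentences count more; the `MOOD_WORDS` table and the weighting scheme are illustrative assumptions:

```python
import re

# Toy mood-word table; a real system would use the mapping table.
MOOD_WORDS = {"awful": "sad", "rough": "sad", "great": "happy"}

def rank_current_mood(text):
    """Weight each sentence's mood indicators by sentence position,
    so the last sentence counts most, and return the winning mood."""
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    scores = {}
    for position, sentence in enumerate(sentences, start=1):
        for word in sentence.lower().split():
            mood = MOOD_WORDS.get(word.strip(",;"))
            if mood:
                scores[mood] = scores.get(mood, 0) + position
    return max(scores, key=scores.get) if scores else "neutral"

# A sad indicator appears early, but the final sentence is happy,
# so the happier, later indicator wins the ranking.
mood = rank_current_mood("The day started awful. But the evening was great.")
```

Other weighting schemes (e.g. exponential decay toward earlier sentences) would slot into the same structure, consistent with the passage's note that other ranking algorithms may be used.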
Once the current mood of the user is identified, the script 116 then determines the mood modifiers that relate to the current mood. The mood modifiers are identified based on the mood indicators related to the current mood. The mood modifiers, for instance, may include items or features relating to the mood indicators that can be adjusted in the image to define the current mood of the user, such as accessories (hats, sunglasses, beachwear, dress, etc.), facial features (shape/color of ear, nose, chin, forehead, mouth, etc.), background images, etc. The mood modifiers identified by the script 116 are then incorporated into the user-profile image of the user to reflect the current mood of the user. The updated user-profile image is packaged with the webpage and transmitted to the client device for rendering, in response to the trigger event initiated at the client device. The updated user-profile image is also stored in a database for subsequent rendering at the webpage or for sharing with other webpages.
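Incorporating modifiers by category and persisting the result might be sketched as follows; the category names and the in-memory `store` dictionary standing in for the database are assumptions made for this example:

```python
def apply_modifiers(image, modifiers):
    """Merge each modifier category (accessories, facial features,
    background, ...) into the stored image attributes, leaving
    untouched categories as they were."""
    updated = {category: dict(values) for category, values in image.items()}
    for category, changes in modifiers.items():
        updated.setdefault(category, {}).update(changes)
    return updated

store = {}  # stands in for the image database

def save_image(user_id, image):
    """Persist the updated image for subsequent rendering or sharing."""
    store[user_id] = image

base = {"facial": {"mouth": "neutral"}, "background": {"scene": "plain"}}
updated = apply_modifiers(base, {
    "accessories": {"hat": "beach-hat"},
    "background": {"scene": "ocean"},
})
save_image("user-1", updated)
```

Only the categories named in the modifiers change; the untouched facial attributes carry over, mirroring how a composite image is adjusted rather than rebuilt.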
In one embodiment, the updated user-profile image from the webpage can be shared with other webpages that the user uses to socially interact with other users. When the user starts a secondary webpage for social interaction, the secondary webpage may or may not accommodate rendering of the user-profile image of the user. If the secondary webpage accommodates the rendering of the user-profile image, then the HTML code defining the secondary webpage is updated to embed a URL of the initial webpage that includes the updated user-profile image, which acts as a primary webpage. When a user selects the secondary webpage for social interaction, the API at the client device detects the webpage selection and retrieves the secondary webpage for loading at the client device. During the loading of the secondary webpage, the API identifies the embedded URL and sends a request to the client-side script component 116-a at the primary webpage to provide the updated user-profile image. The client-side script 116-a receives the request and, in response, retrieves the updated user-profile image and forwards it to the secondary webpage for rendering. The HTML code may include additional logic to periodically trigger a request to obtain the updated user-profile image from the primary webpage.
In yet another embodiment, the primary webpage and the secondary webpage may both include the user-profile image of the user. The user-profile image at the respective webpages is updated as and when the user uses the respective webpages for social interaction. The update reflects the current mood of the user based on the contextual analysis of the content provided by the user at the respective webpages. In addition, the primary webpage and the secondary webpage have the ability to synchronize the user-profile image as and when the user-profile image is updated at either one of the primary and secondary webpages. For instance, the user-profile image of the primary webpage may be accessed first, and the user may have provided textual content, such as a blog entry, comment, etc. Based on the contextual analysis of the textual content, the user-profile image of the primary webpage may have been updated to reflect the user's current mood. Along with updating the user-profile image at the primary webpage, the image of the user at the secondary webpage may also be simultaneously updated to reflect the current mood. Subsequently, when the user accesses the secondary webpage, the user-profile image at the secondary webpage reflects the current mood of the user as presented in the primary webpage. Upon accessing the secondary webpage, the user may post textual content, such as a comment, at the secondary webpage. In response to the textual content, the user-profile image at the secondary webpage may be updated to reflect the user's current mood by contextually analyzing the content provided at the secondary webpage. The mood reflected at the secondary webpage may be different from the mood reflected in the primary webpage or include additional attributes. As a result, the image at the primary webpage may be automatically updated from the secondary webpage so that both the primary and the secondary webpages are synchronized and reflect the user's current mood.
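The two-way synchronization described above can be sketched with a pair of linked page objects; the `Page` class and its push-on-update behavior are an illustrative assumption, not a prescribed protocol:

```python
class Page:
    """Stands in for a primary or secondary webpage holding an avatar."""
    def __init__(self, name):
        self.name = name
        self.image = None
        self.peer = None  # the linked primary/secondary page

    def link(self, other):
        """Embed each page's reference in the other, as with the
        URLs embedded in the respective HTML code."""
        self.peer, other.peer = other, self

    def update_image(self, image):
        """Update this page's avatar and push the change to its peer,
        keeping both pages synchronized."""
        self.image = image
        if self.peer is not None:
            self.peer.image = image

primary, secondary = Page("primary"), Page("secondary")
primary.link(secondary)

# Update at the primary propagates to the secondary, and vice versa.
primary.update_image({"mood": "happy"})
secondary.update_image({"mood": "happy", "background": "ocean"})
```

Whichever page was updated most recently wins, so both pages always show the user's most current mood.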
In order to achieve this, the HTML code at the respective primary and secondary webpages is modified to include the URL of one another. In addition to the URL, the primary and the secondary pages include client-side script component 116-a at the respective webpages. The API at the client-device interacts with the script components 116-a at the respective webpages and with the server-side script 116 to request the updated user-profile image either from the server or from one another. In one embodiment, in response to the request or a trigger event, either the server-side script 116 forwards the updated user-profile image to the client-side script directly for rendering at the respective webpages or the primary/secondary webpages exchange the updated user-profile image with one another.
FIGS. 3.1-3.3 illustrate the composite user-profile image generated/updated by the script 116 based on user content provided at a webpage. As illustrated in
Thus, the various embodiments of the invention provide a way to contextually analyze textual content provided by a user and generate an appropriate user-profile image to reflect the current mood of the user. The generated user-profile image can be propagated to other webpages, and such updates to the webpages can be carried out automatically without any user interaction. Further, these updates can be carried out dynamically, in substantially real-time, making this a more efficient way of personalizing a user's profile image.
In
The mood indicators are sorted, scored and analyzed to determine a current mood of the user, as illustrated in operation 450. When more than one mood indicator is identified for a particular keyword, the script may score the mood indicators based on user interaction at each of the mood indicators and the type of user interaction to determine the most popular mood indicators for a particular keyword. Using the mood indicators that define the current mood, the script identifies the various modifiers that need to be incorporated into the user-profile image to reflect the current mood of the user, as illustrated in operation 460. The modifiers are incorporated into the user-profile image for the user so that the user-profile image reflects the current mood of the user, as illustrated in operation 470.
When there is no user-profile image defined for the user, the script identifies a default image based on the user-profile data provided by the user and updates the default image with the modifiers to reflect the user's current mood. The updated user-profile image is returned to the webpage at the client device for rendering. The updated user-profile image can be shared among other webpages that are used by the user for social interaction. In one embodiment, a user may have a defined user-profile image and this user-profile image may be used in one or more webpages during social interaction. In one embodiment, the user may access his user-profile image by logging into a related site that provides access and allows modification of the user-profile image. Upon accessing the user-profile image, the user may provide textual content, such as “I went to Tokyo for a conference,” or “I played baseball yesterday.” The script analyzes the textual content and updates the relevant attributes of the user-profile image to reflect the contextual interest or current mood of the user. In this embodiment, the user's current mood may be expressed with a neutral expression. Upon update, the user may save the updated user-profile image as the default avatar that is shared among other websites where the user-profile image is rendered.
The current embodiments provide an efficient tool to generate user-profile images (avatars) that automatically change according to the context, content, interest and mood of the content provided in the webpage in which the images are embedded or defined in a designated user-profile page. The script identifies a user's mood or state of mind as expressed in the user's blog and updates the user's profile image, if one is available, or generates a user-profile image according to the blog, leading to better personalization. To manage the context efficiently, the script builds an indexed mapping of keywords, extracted from descriptions of avatar sub-components/items, to the sub-component graphics. The mapping may be done offline and updated periodically. The automatically generated personalized user-profile image reflects the current state of mind of the user.
Embodiments of the present invention may be practiced with various computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network. A sample computer system is depicted in
In
As with the external physical configuration shown in
In
Similarly, other computers at 584 are shown utilizing a local network at a different location from the USER1 computer. The computers at 584 are coupled to the Internet via Server2. USER3 and Server3 represent yet a third installation.
Note that the concepts of "client" and "server," as used in this application and the industry, are very loosely defined and, in fact, are not fixed with respect to machines or software processes executing on the machines. Typically, a server is a machine or process that is providing information to another machine or process, i.e., the "client," that requests the information. In this respect, a computer or process can be acting as a client at one point in time (because it is requesting information) and as a server at another point in time (because it is providing information). Some computers are consistently referred to as "servers" because they usually act as a repository for a large amount of information that is often requested. For example, a World Wide Web (WWW, or simply, "Web") site is often hosted by a server computer with a large storage capacity, a high-speed processor and an Internet link having the ability to handle many high-bandwidth communication lines.
A server machine will most likely not be manually operated by a human user on a continual basis, but, instead, has software for constantly, and automatically, responding to information requests. On the other hand, some machines, such as desktop computers, are typically thought of as client machines because they are primarily used to obtain information from the Internet for a user operating the machine. Depending on the specific software executing at any point in time on these machines, the machine may actually be performing the role of a client or server, as the need may be. For example, a user's desktop computer can provide information to another desktop computer. Or a server may directly communicate with another server computer. Sometimes this is characterized as “peer-to-peer,” communication. Although processes of the present invention, and the hardware executing the processes, may be characterized by language common to a discussion of the Internet (e.g., “client,” “server,” “peer”) it should be apparent that software of the present invention can execute on any type of suitable hardware including networks other than the Internet.
Although software of the present invention may be presented as a single entity, such software is readily able to be executed on multiple machines. That is, there may be multiple instances of a given software program, a single program may be executing on different physical machines, etc. Further, two different programs, such as a client program and a server program, can be executing in a single machine or in different machines. A single program can be operating as a client for one information transaction and as a server for a different information transaction.
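By way of example only, and not by way of limitation, the following sketch illustrates a single program acting as both server and client within one information transaction, using Python's standard `http.server` and `urllib` modules (the handler name, reply text, and port selection are illustrative assumptions, not part of any claimed embodiment):

```python
# Illustrative sketch: one program performing both the server role and
# the client role, using only Python standard-library modules.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Server role: respond to an information request.
        body = b"hello from the server role"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # suppress request logging to keep the example quiet

# Bind to port 0 so the operating system chooses a free port.
server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client role: the same program now requests information from itself.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    reply = resp.read()
server.shutdown()
print(reply.decode())
```

The same pattern extends to two distinct programs on one machine, or to programs on different physical machines, by changing only the address to which the server binds.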
A “computer” for purposes of embodiments of the present invention may include any processor-containing device, such as a mainframe computer, personal computer, laptop, notebook, microcomputer, server, personal data manager or personal information manager (also referred to as a “PIM”), smart cellular or other phone, so-called smart card, set-top box, or the like. A “computer program” may include any suitable locally or remotely executable program or sequence of coded instructions which are to be inserted into a computer, as is well known to those skilled in the art. Stated more specifically, a computer program includes an organized list of instructions that, when executed, causes the computer to behave in a predetermined manner. A computer program contains a list of ingredients (called variables) and a list of directions (called statements) that tell the computer what to do with the variables. The variables may represent numeric data, text, audio or graphical images. If a computer is employed for synchronously presenting multiple video program ID streams, such as on a display screen of the computer, the computer would have suitable instructions (e.g., source code) for allowing a user to synchronously display multiple video program ID streams in accordance with the embodiments of the present invention. Similarly, if a computer is employed for presenting other media via a suitable directly or indirectly coupled input/output (I/O) device, the computer would have suitable instructions for allowing a user to input or output (e.g., present) program code and/or data information respectively in accordance with the embodiments of the present invention.
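By way of example only, the “variables and statements” description above may be illustrated by the following sketch, in which variables hold text and numeric data and statements direct the computer to combine them (the variable names and values are illustrative assumptions only, not limitations of any embodiment):

```python
# Illustrative sketch: variables (the "ingredients") hold data, and
# statements (the "directions") tell the computer what to do with them.
mood = "happy"          # a variable representing text data
image_count = 3         # a variable representing numeric data

# A statement that operates on the variables to produce a new value,
# here a caption such as might accompany a user-profile image.
caption = f"{mood} avatar x{image_count}"
print(caption)
```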
A “computer-readable medium” or “computer-readable media” for purposes of embodiments of the present invention may be any medium/media that can contain, store, communicate, propagate, or transport the computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable medium can be, by way of example only and not by limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, propagation medium, carrier wave, or computer memory. The computer readable medium may have suitable instructions for synchronously presenting multiple video program ID streams, such as on a display screen, or for providing for input or presenting in accordance with various embodiments of the present invention.
With the above embodiments in mind, it should be understood that the invention can employ various computer-implemented operations involving data stored in computer systems. These operations can include the physical transformation of data, the saving of data, and the display of data. These operations are those requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared and otherwise manipulated. Data can also be stored in the network during capture and transmission over the network. The storage can be, for example, at network nodes and memory associated with a server and other computing devices, including portable devices.
Any of the operations described herein that form part of the invention are useful machine operations. The invention also relates to a device or an apparatus for performing these operations. The apparatus can be specially constructed for the required purpose, or the apparatus can be a general-purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general-purpose machines can be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
The invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data, which can thereafter be read by a computer system. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
Although the foregoing invention has been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications can be practiced within the scope of the appended claims. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.