Mobile device and system for multi-step activities

Information

  • Patent Grant
  • Patent Number
    9,392,429
  • Date Filed
    Monday, January 6, 2014
  • Date Issued
    Tuesday, July 12, 2016
Abstract
A multistep guided system for mobile devices that facilitates the creation and dissemination of multistep guided activities from a source computer/device to a plurality of other recipient mobile devices, wherein the multistep guided activities are disseminated to the recipient mobile devices in a form that is compatible with the capabilities of the respective recipient mobile devices. The audio guided system comprises the source computer/device, the plurality of other recipient mobile devices and a server.
Description
BACKGROUND

1. Technical Field


The present invention relates generally to the interactions between a mobile handset and a server within a network, and more specifically to the ability to browse through a multi-step process or activity using a mobile handset such as a cell phone.


2. Related Art


Electronic devices, such as mobile phones and personal digital assistants (PDAs), often contain small screens with very limited viewing area. They are constrained in terms of how much information can be displayed, and in terms of user interaction capabilities. The keyboards on cell phones, for example, are not conducive to user data entry, and only brief user inputs can be solicited from a user without annoying the user.


Often a user would want to seek online help using a mobile phone for conducting an activity such as fixing a problem with a car (changing a tire, for example) or baking a cake, without having to use a bulky notebook computer that might get damaged due to the various constraints and hazards of a work area. A computer/notebook is not always available to retrieve helpful information when it is needed, such as during an accident on the highway, or while cooking in a kitchen that has limited space. The use of a mobile phone is preferable in such circumstances, but mobile phones in general are not endowed with the features or applications necessary to facilitate easy access to such information in a format that is usable and convenient. The whole process of retrieving necessary information using a mobile phone is inconvenient due to the inability of Internet websites to provide information that a typical user can easily read, browse through or view on his mobile phone. Information access from Internet-based websites on mobile devices is quite often unsatisfactory and not useful due to several factors, not least of which is the multi-media and graphics-rich format in which most Internet websites are designed and made available. A mobile phone with a small screen is not a good candidate for viewing such complicated, graphics-rich (with graphics, flash screens, video components, etc.) content.


Often, when a user is driving, he would like to access information from a remote source, such as a website maintained by the operator of the network. However, while driving it is very dangerous to read the information displayed on a cell phone. It is also almost impossible to read those small screens on a cell phone and manipulate the buttons on the cell phone while also driving. It is hard enough manipulating a cell phone keyboard when one is not driving, due to the nature of the keyboard and the tiny keys it provides and the small displays it comes with.


Online help, which typically tends to be verbose, is almost unreadable and altogether too complex and inappropriate for access from a cell phone. For example, online help for configuring a network card on a PC, or for baking a turkey for Thanksgiving, tends to involve a multi-step activity and is therefore detailed in its descriptions. Online help websites are not only ill-suited for access via cell phones, making for a bad user experience, but are also too verbose and repetitive. Thus, users of cell phones refrain from seeking online help from their cell phones.


Real-time user interaction, such as that provided for a user using a PC on the Internet, is often not possible for a user using a cell phone. For example, the amount of textual information presented cannot be the full page of text that is typically made available on a PC. Graphical information also cannot be large. A typical website provides a rich multi-media experience. The same website, when accessed from a cell phone, would not only be unreadable, due to its large amount of text, graphics and even video, but also frustrating, the design of websites often being multi-media based (predominantly providing large multi-media web pages full of text, graphics, flash content and even video). Thus, there is a problem in presenting a mobile user with information in order to solicit user input when the user is using a cell phone rather than a PC.


Cell phones are therefore a device for which traditional websites are ill prepared to provide information. In addition, surveys or questionnaires that are created for Internet based access via a PC are not appropriate for cell phone access. Asking one or more detailed questions with information on how to answer them is possible on a web page that is accessed from a PC. However, the same web page would be unmanageable and difficult to browse and navigate on a cell phone with a small LCD screen and small keyboard for user input.


Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of ordinary skill in the art through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.


BRIEF SUMMARY OF THE INVENTION

The present invention is directed to apparatus and methods of operation that are further described in the following Brief Description of the Drawings, the Detailed Description of the Invention, and the claims. Other features and advantages of the present invention will become apparent from the following detailed description of the invention made with reference to the accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The numerous objects and advantages of the present invention may be better understood by those skilled in the art by reference to the accompanying figures in which:



FIG. 1 is a perspective block diagram of an audio guided system for mobile devices that facilitates the creation and dissemination of audio guided activities (AGAs) from a source computer/device to a plurality of other recipient mobile devices, wherein the AGA is disseminated to the recipient mobile devices in a form that is compatible with the capabilities of the respective recipient mobile devices.



FIG. 2 is a perspective block diagram of a system that supports AGA creation and dissemination, the AGA creation facilitated by the use of a PC/computer, by a user.



FIG. 3A is an exemplary notification message display screen for a mobile device 307 that supports the display of AGA using a client component, or the browsing through an AGA from the mobile device.



FIG. 3B is an exemplary notification message displayed in a notification window (or pane) on a mobile device, wherein the user is provided the opportunity to start reviewing an AGA using a link provided, that a browser can retrieve, and wherein the user is also provided with the links to download the client component if necessary in order to view the AGA locally in the mobile device.



FIG. 3C is an exemplary browser window on a mobile device that is used to retrieve and display activity steps, provided as web pages, regarding an audio guided activity that is distributed by a server in an audio guided system or network.



FIG. 4 is a perspective block diagram of the layout of an exemplary audio guided activity that comprises one or more tasks, each with a preamble, a textual description and supplementary information.



FIG. 5 is an interaction diagram that depicts an exemplary interaction between a sender's PC, notebook, PDA or laptop that is used to create and upload AGAs and questionnaires and a recipient mobile device used to respond to the AGAs and questionnaire, wherein the sender's PC, notebook, PDA or laptop is used by a user to create AGAs with textual and audio components that a server enhances, if necessary, and sends it to recipients.



FIG. 6 is a schematic block diagram of a mobile device that supports presenting an audio guided activity (AGA) to a user.



FIG. 7 is a flow chart of an exemplary operation of a server that receives, stores and disseminates AGAs to mobile devices.



FIG. 8 is a flowchart of an exemplary operation of the server that facilitates tailored AGAs wherein the AGA is tailored based on device capabilities of the user's mobile device or PC.



FIG. 9 is another schematic block diagram of a mobile device that supports presenting an audio guided activity (AGA) to a user.



FIG. 10 is a perspective block diagram of a network wherein guided activities are provided to users such that a user can view them on their televisions.





DETAILED DESCRIPTION OF THE DRAWINGS


FIG. 1 is a perspective block diagram of an audio guided system 105 (AUGUST) for mobile devices that facilitates the creation and dissemination of audio guided activities (AGAs) from a source computer/device 107 to a plurality of other recipient mobile devices 111, 113, wherein the AGA is disseminated to the recipient mobile devices in a form that is compatible with the capabilities of the respective recipient mobile devices 111, 113. The audio guided system 105 comprises the source computer/device 107, the plurality of other recipient mobile devices 111, 113 and a server 109. The display of AGAs in a recipient mobile device, such as the recipient mobile device A 111, requires the use of a corresponding client component, such as a QClient, that can display one step of a multi-step activity at a time. Each AGA comprises, for each step of a multi-step audio guided activity, a textual description, an audio preamble, optional audio supplementary information, and optional textual supplementary information. An AGA may be used to describe the method of cooking a dish using an associated recipe, the process of executing an operation such as changing the tire on a car using an associated multi-step procedure, etc. The display of each step involves the display of textual descriptions, the playing of audio information such as a preamble, the optional display of supplementary information and the playing of audio supplementary information, if available. A user can view (often as text, and even graphics if available) and optionally listen to the detailed descriptions of each step of an AGA, one step at a time, and browse through each step.


Some of the plurality of other recipient mobile devices 111, 113 are legacy devices that do not have a necessary client component capable of handling the download and display of AGAs. Others of the plurality of other recipient mobile devices 111, 113 have the client component capable of handling the download and display of the AGAs.


In one embodiment, the server 109 determines which recipient mobile device can handle AGAs (because they comprise the client component capable of handling the AGAs), and which need to be sent a simpler subset of the AGAs that can be displayed/rendered without the client component, such as by the use of a browser in the recipient mobile device that may be used to browse through a hosted version of the AGAs that can present web pages of the AGAs.
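The capability-based dissemination described above can be sketched as follows. This is an illustrative Python sketch only; the names (`Device`, `dispatch_aga`, the payload fields) are assumptions made for the example and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Device:
    address: str
    has_client: bool       # client component (e.g. a QClient) installed?
    supports_audio: bool   # can the device play audio components?

def dispatch_aga(device, full_aga, hosted_url):
    """Return the payload a server might send to one recipient device."""
    if device.has_client:
        # Capable device: send the full package (text, audio, graphics).
        return {"type": "package", "aga": full_aga}
    # Legacy device: send a notification pointing at hosted web pages,
    # optionally flagged text-only if the device cannot play audio.
    payload = {"type": "notification", "url": hosted_url}
    if not device.supports_audio:
        payload["text_only"] = True
    return payload

legacy = Device("5551212", has_client=False, supports_audio=False)
print(dispatch_aga(legacy, {"steps": []}, "http://example.com/aga/1"))
```

The point of the sketch is the branch itself: the same AGA content exists once on the server, and only the delivery form varies per recipient.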


The audio guided activity is created/stored/distributed as packaged content with a multiple step activity, such as an XML file, wherein each step comprises:

    • an audio preamble, used to describe in audio form the purpose of the current step and provide an overview of the step (other types of information may also be provided if necessary, in a brief format),
    • a textual step description regarding the activity, in succinct form, with minimal text, and
    • audio supplementary information, providing additional details that may help a user better understand the step, its benefits, alternate steps, if any, and any additional detail that may aid the user's comprehension of the step.


Typically, the audio preamble and audio supplementary information are presented to a user in order to provide audio based description and additional details of specific steps of a multi-step activity. The textual step description of each step of a multi-step activity is designed to provide minimal necessary information for a user, with supplementary information in audio or textual (or video and graphics forms too, in some embodiments) form provided to aid user comprehension of the associated step in the multi-step activity.
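One possible encoding of the packaged XML content described above is sketched below, using Python's standard `xml.etree.ElementTree`. The tag and attribute names are assumptions chosen for the example; the patent does not specify a schema.

```python
import xml.etree.ElementTree as ET

def build_aga(title, steps):
    """Package a multi-step activity as XML.

    steps: list of (preamble_audio_ref, text_description, suppl_audio_ref)
    tuples; suppl_audio_ref may be None when no supplementary audio exists.
    """
    root = ET.Element("aga", {"title": title})
    for preamble, text, suppl in steps:
        step = ET.SubElement(root, "step")
        ET.SubElement(step, "preamble", {"audio": preamble})  # spoken overview
        ET.SubElement(step, "description").text = text        # succinct text
        if suppl:
            ET.SubElement(step, "supplementary", {"audio": suppl})
    return ET.tostring(root, encoding="unicode")

xml = build_aga("Changing a tire", [
    ("step1.amr", "Loosen the lug nuts before jacking up the car.", "step1_extra.amr"),
    ("step2.amr", "Jack up the car at the marked lift point.", None),
])
print(xml)
```

A client component would parse one `step` element at a time, displaying the description and playing the referenced audio on demand.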


The textual step description typically comprises a textual description (of the specific step in the multi-step activity) in the form of a small paragraph. Optionally, it also comprises a graphic or a picture (to be supported in specific embodiments, or in specific devices based on device capability information). In general, the mobile device employs a client that provides a window pane or dialog box (or other similar graphical widgets) for textual description display, with a menu-item “Info” for playing the preamble, and another menu-item “SupplInfo” for playing audio supplementary information while also displaying any supplementary textual information that may be available. In addition, a Next menu-item is provided to advance to the next step of the multi-step activity (if any), and a Cancel menu-item is provided to terminate the multi-step activity.


The source computer device 107 captures audio preambles and supplementary information provided by a user in audio form, captures text descriptions typed in or provided by user in some form, creates a package of such content associated with an audio guided activity, and sends it to a server 109 to be saved and/or disseminated to designated recipients. The recipients use their respective recipient mobile devices 111, 113 for browsing through the audio guided activity when they receive it. They can use the client component, if it is available, for such browsing. Alternatively, they can use a browser (such as a WAP browser) to browse through the AGA.


The server 109 receives AGAs from the source computer/device 107, adds boilerplate text and audio if necessary, determines which of the specified recipient mobile devices, such as recipient mobile devices 111 and 113, can handle all the contents of the AGA (audio components, textual components, graphics if any, video if any), and which need to be sent a simpler subset of the AGA, such as only the text, or only the audio components of the AGA.


The server 109 is capable of converting a recipient list to a list of phone numbers or IP addresses as needed, in order to communicate the AGA, or a notification regarding the AGA, to the recipient mobile devices 111, 113. In order to play all the components of an AGA, if required, the recipient devices, such as the recipient device 111, have a client component that can handle all the components of an AGA (audio, textual, graphics and even video components).


In one embodiment, the client component is required in a recipient mobile device to handle the components of an AGA, such as the audio and textual components. Some recipient devices, such as the recipient device 113, do not have a client component. Instead, the server 109 makes it possible for them to receive and display/play the AGA by sending them the same AGA in an alternate form, such as a simplified set of web pages, that the recipient mobile device 113 can display using a browser or some other existing client in the recipient mobile device 113. In addition, the recipient mobile device 113 is sent a notification regarding the AGA that also comprises a link that can be activated to download the client component so that it can be installed before displaying the AGA.


The recipient mobile device 113 without the client component thus gets an opportunity to download and install the necessary client component. The user can activate the download link, whereupon the client component is downloaded and installed automatically (or with user opt-in). The user of the recipient mobile device 113 is also given the option, selectively, to receive a subset of the AGA that the recipient mobile device 113 can handle without the client component.


The recipient mobile device 111 with the client component receives an AGA, lets the user browse through each step, and lets the user view the textual components and listen to the audio components for each step. It is able to play/render/display all portions of an AGA that may be provided, such as audio, text, graphics, video, etc.


The server 109 is capable of completing an incomplete AGA received from the source computer/device 107 or another server (not shown). For example, the source computer/device 107 may send an incomplete AGA with two steps, each with only the audio preamble created (by a user recording the steps of an activity in audio form that incorporates brief descriptions of the steps involved), and the server 109 incorporates a generic textual preamble and a generic textual description in order to complete the AGA. In one embodiment, the server transcribes the audio components into textual components and sends the transcribed text (perhaps along with other boilerplate text) to recipients whose devices cannot handle the audio components of an AGA. Thus, for example, spoken preambles and supplementary information in a questionnaire can be converted into equivalent textual components by the server 109, in an automated way, so that devices that cannot handle audio preambles and audio supplementary information (or video preambles and video supplementary information) can be provided with equivalent and/or relevant textual components.
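The completion step described above can be sketched as follows. The sketch assumes a caller-supplied transcription function (the patent does not name a speech-to-text component) and a generic boilerplate string; both are illustrative.

```python
# Assumed boilerplate text for steps that arrive with audio only.
GENERIC_TEXT = "Listen to the audio for this step."

def complete_aga(steps, transcribe=None):
    """Fill in missing textual descriptions for an incomplete AGA.

    steps: list of dicts with an 'audio' key and an optional 'text' key.
    transcribe: optional callable mapping an audio reference to text.
    """
    for step in steps:
        if not step.get("text"):
            if transcribe is not None:
                step["text"] = transcribe(step["audio"])  # speech-to-text
            else:
                step["text"] = GENERIC_TEXT               # generic boilerplate
    return steps

# A stand-in transcriber, used here only to make the sketch runnable.
fake_transcribe = lambda audio: f"(transcript of {audio})"
done = complete_aga([{"audio": "a1.amr"}, {"audio": "a2.amr", "text": "Mix."}],
                    transcribe=fake_transcribe)
print(done[0]["text"])
```

Steps that already carry text are left untouched; only the gaps are filled, which matches the server's role of completing rather than rewriting an AGA.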


The server 109 receives an AGA from a user, incorporates text, graphics and generic prompts to the user as needed, and sends the AGA to recipients. The recipients are either specified by the user along with the AGA or pre-configured and stored in the server 109 to be used for forwarding AGAs. The server 109 provides coupons to a user at the end of the AGA, or during the display of an AGA by means of a menu-item provided to retrieve a coupon and information associated with a coupon. Additionally, it is possible to configure the server 109 to provide one or more coupons to a recipient mobile device 111, 113 along with an AGA, such that those coupons can be saved on the recipient mobile device 111, 113 for subsequent usage.


The server 109 also supports the notification of the availability of the AGA and the dissemination of an AGA to the recipient mobile devices 111. The user interaction is facilitated by a client component in the recipient mobile device 111, which is either made available by the manufacturer of the mobile handsets or subsequently downloaded over the air, or otherwise installed by the user. The client component is able to process the received AGA (or portions thereof), playing audio portions such as audio preambles, audio supplementary information, etc. and displaying textual preambles and textual descriptions of individual steps of a multi-step activity/operation.


In one embodiment, the system 105 comprises mobile devices 107, 111, 113 which are a combination of cellular phones, PDAs, etc., and the network 115 is a wireless and/or wired network: a cellular network such as 3G, UMTS, CDMA, GSM, etc., a WLAN network, a WiMAX network, the Internet, Bluetooth, IrDA, etc.


The server 109 receives an Audio Guided Activity from the source computer/device 107, adds boilerplate text if needed, and forwards it to the specified recipients 111, 113. In one embodiment, it multicasts/broadcasts it over a multicast network 115 or a broadcast network 115.


In one embodiment, more than one version of an audio guided activity is created by the source computer/device 107, and stored for dissemination at the server 109. Particular versions of the audio guided activity are communicated to specific mobile devices 107, 111, 113, based on one or more criteria, such as user preferences, user interests, user group affiliations, membership information, etc. Such preferences, user interests, user group affiliations and membership information are stored in the server 109 in one embodiment, and in the mobile devices 107, 111, 113 in another. In a related embodiment, they are stored in both.
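Selecting among stored versions by such criteria could look like the following sketch, where versions are matched to a user profile by tag overlap. The field names (`tags`, `interests`) are assumptions for illustration; the patent leaves the matching criteria open.

```python
def select_version(versions, user_profile):
    """Pick the stored version whose tags best overlap the user's interests."""
    def score(version):
        return len(set(version["tags"]) & set(user_profile["interests"]))
    return max(versions, key=score)

versions = [
    {"id": "novice", "tags": ["beginner", "detailed"]},
    {"id": "expert", "tags": ["expert", "terse"]},
]
profile = {"interests": ["expert"]}
print(select_version(versions, profile)["id"])  # -> expert
```

Any other scoring (group affiliation, membership level) would slot into the same `score` function.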


In one embodiment, the source computer/device is used to create a multistep audio guided activity that provides at least one of an audio preamble, a short textual description, and audio supplementary information for each of the steps of the multistep audio guided activity. The multistep audio guided activity is sent to the server 109 to be disseminated to one or more recipient devices that are mobile phones, PDAs, computers, PCs, etc.


The AGAs facilitated by the present invention are used to create and disseminate multistep activity information, such as the steps necessary to diagnose and fix a problem with a machine, an electronic device or a vehicle. AGAs, in accordance with the present invention, are used to provide detailed help/guidance, in a timely fashion, to people trying to configure a device, configure a service, or work on a vehicle, etc. For example, an audio guided activity can comprise the sequence of steps necessary to configure a service, a machine, an electronic device or a vehicle.



FIG. 2 is a perspective block diagram of a system 205 that supports AGA creation and dissemination, the AGA creation facilitated by the use of a PC/computer 231, by a user. The system 205 comprises the PC/computer 231 that a user uses to create AGAs, a server 217 that receives the AGAs and sends them to one or more recipient mobile devices 211, 227, and a hosted AGA creation component 207 that facilitates AGA creation using the PC/laptop/computer 231, or via web pages provided by the server 217. The system 205 also comprises a storage 215 that is used to store AGAs and questionnaires if necessary, and a results and activity logging component 219 that can be used to track AGA creation, AGA dissemination, and other related activities. In addition, the system 205 comprises a billing system 223 that can facilitate billing for the creation of AGAs, the distribution of AGAs, etc.


AGA creation is facilitated by the hosted AGA creation component 207 that can be accessed and used by a user employing the PC/Notebook/Laptop 231. An AGA creation tool installed in the PC/Notebook/Laptop 231 may also be used by a user to create AGAs that can be uploaded to the server 217. A user with the AGA creation tool in the PC/Notebook/Laptop 231 creates an AGA and sends the created AGA to recipients or a mailing list.


The user can also employ a PC/Notebook/Laptop 231 communicatively coupled to the hosted AGA creation component 207 to create AGAs with only audio inputs and textual inputs provided by the user for the various steps of an associated activity. The AGA is likely to comprise audio and/or textual preambles for the steps of an audio guided activity, textual descriptions of the steps of the associated activity, supplementary information in audio and textual formats (even graphics and video formats) for each of the steps, etc. The user then provides a recipient list in one or more formats. The server 217 sends out the AGA to the recipients specified by the user, using their corresponding mobile phone numbers, IP addresses, email addresses, etc. A recipient user can use his recipient computer 211 to receive or browse through the AGA. A different recipient user can use the recipient mobile device 227 to do the same.


When a recipient using the recipient mobile device 227 gets the AGA on his mobile device, the steps of the AGA themselves are provided to the recipient by the server 217, starting with the first step of the multi-step activity. Thus, at the beginning of the AGA, the recipient would view the first step, perhaps with an audio preamble and appropriate textual description, and would be able to activate an Info menu item to hear the audio preamble for the first step. The user advances to the next step by activating the Next menu item.


In one embodiment, the recipient device is a legacy device 227 that is not capable of letting a recipient user work through the steps of the AGA. For such a device, the server 217 sends a voice mail notification to the recipient device 227 and, when triggered by the recipient device 227, causes the audio preambles of the steps to be played as voice information, such as by systems that employ interactive voice response (IVR) (not shown). The user is provided with the option to advance to the next step when ready. Thus, part of the AGA, the audio preamble and audio supplementary information, is played as part of an IVR based audio playback. Such a solution wherein IVR is used makes it possible to incorporate “legacy” devices and land line devices into the system 205 and have them participate in receiving AGAs and browsing through them. The server 217 thus employs the services of an IVR component to provide AGAs to recipients on legacy devices (and other devices that do not have the appropriate client software installed) in order to facilitate access from such devices.
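The IVR-style step-through can be sketched as a small state machine: play one preamble, wait for the caller's input, and advance only on an explicit "continue" keypress. The keypress convention ("1" to advance) is an assumption for the example, not something the patent specifies.

```python
def ivr_session(steps, inputs):
    """Walk a caller through audio preambles one step at a time.

    steps: ordered audio preamble references for the activity.
    inputs: sequence of caller keypresses gathered after each step.
    Returns the list of preambles 'played' during the call.
    """
    played = []
    keys = iter(inputs)
    for preamble in steps:
        played.append(preamble)    # play this step's audio to the caller
        key = next(keys, None)     # wait for the caller's response
        if key != "1":             # anything but "1" ends the session
            break
    return played

log = ivr_session(["step1.amr", "step2.amr", "step3.amr"], ["1", "9"])
print(log)  # caller advanced past step 1, then ended during step 2
```

The same loop works whether the audio is streamed by an IVR platform or served as pages to a browser; only the transport differs.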


In one embodiment, the recipient mobile device 227 is a legacy device 227 that is not capable of letting a recipient user work through the AGA, as it does not have a client component. Instead, it has a browser that can be used to browse through the steps of the AGA, the steps being provided as web pages hosted by the server 217, presenting one or more steps in each web page of the AGA.


In one embodiment, an XML audio guided activity (AGA) is created/stored by a user using a PC/notebook/laptop 231. It is created as an XML file comprising a multiple step activity, wherein each step comprises:

    • an audio preamble,
    • a textual step description, and
    • audio supplementary information.


The audio preamble and audio supplementary information are played/rendered during the display of a step of the activity, when invoked by the user using appropriate menu-items or buttons. The textual step description comprises a textual description in the form of a small paragraph. Optionally, it also comprises a graphic or a picture (that is also provided as part of the XML AGA).


In one embodiment, the PC/notebook/Laptop 231 comprises a tool called the QCreator that can operate in two modes, a questionnaire creation mode and an AGA creation mode. The output created is a questionnaire or an Audio Guided Activity (AGA) to be used by an Audio Guided System (AUGUST). If Questionnaire is the mode set during the use of the tool, a questionnaire is created, with the user interface customized for such creation. If AGA is the mode set on the tool, then screens appropriate for the creation of an AGA are provided to the user.



FIG. 3A is an exemplary notification message display screen 309 for a mobile device 307 that supports the display of an AGA using a client component, or the browsing through an AGA from the mobile device 307. A notification, such as one received as an SMS message (for example, of type Service message) on the mobile device 307, offers the user an opportunity to download a client component that is capable of displaying an AGA. If the user of the recipient mobile device does not have an appropriate client component (such as a qClient component capable of displaying AGAs and questionnaires), then the user can still view the AGA by viewing the hosted version of the AGA using a browser in the device, such a browser retrieving one or more web pages for the AGA using a link (a URL or some such reference) to the AGA provided as part of the notification message (such as a URL in an SMS message). Thus, the notification also offers a link to the AGA that a browser can use to provide access to the associated web pages for the AGA. Notification messages (such as SMS based ones) can be flagged as service messages, and are sent to mobile devices by service providers supporting/providing AGAs. They contain links through which the message content, such as AGAs and questionnaires, can be downloaded.


It is possible to automatically download the client component for an AGA (such as the qClient) and the associated AGA to the mobile device 307, if the user has configured the mobile device to download messages/content automatically.



FIG. 3B is an exemplary notification message 367 displayed in a notification window (or pane) 359 on a mobile device 357, wherein the user is provided the opportunity to start reviewing an AGA using a link provided, that a browser can retrieve, and wherein the user is also provided with the links to download the client component if necessary in order to view the AGA locally in the mobile device 357.


The user who receives a notification (such as an SMS) can open the URL link provided to review an audio guided activity, using a client such as a browser, interacting with a server that is remotely hosted. Alternatively, the user can download the client by activating the Download link and then review the AGA locally using the downloaded client.


In addition to AGAs, questionnaires can also be received and reviewed using the mobile device 357. A user who receives a notification (such as an SMS) can also open the URL link provided to respond to a questionnaire, using client software such as a browser, interacting with a remotely hosted server that provides web pages for the questionnaire. Alternatively, the user can download the client component (such as the qClient) by activating a Download link in the received notification (such as an SMS message) and then respond locally using the downloaded client (which is then installed as well).



FIG. 3C is an exemplary browser window 379 on a mobile device 377 that is used to retrieve and display activity steps, provided as web pages, regarding an audio guided activity that is distributed by a server in an audio guided system or network. For the mobile device 377 with no QClient installed, a browser in the mobile device 377 is used by a user to interact with a server that provides the web pages of the AGA. The browser in the mobile device 377 retrieves one step of the multi-step activity at a time and displays it. The audio component, if any, in each page is played by a media player (or some audio player) in the mobile device 377. Similarly, video content and graphics content, if any, are displayed using appropriate plug-ins to the browser.


Thus, in a mobile device 377 with no QClient, a browser in the mobile device 377 is used to interact with a server that provides the web pages for the various steps of an activity. The browser in the mobile device 377 retrieves one step at a time and displays it. The audio component in each page is played by a media player (or some audio player) in the mobile device 377.



FIG. 4 is a perspective block diagram of the layout of an exemplary audio guided activity 407 that comprises one or more tasks 409, 411, each with a preamble, a textual description and supplementary information. The audio guided activity 407 supports multiple levels of detail, in that specific tasks associated with specific steps of an activity may comprise subtasks. The details of these subtasks may also be specified in the AGA 407. For example, a subtask 437 of task 2 411 may be displayed if necessary by a client component of a mobile device, which would display current portions of the AGA when requested by a user. For the subtask 437, the client displaying the AGA would then display the associated textual description 443 and play the audio preamble 441. In addition, when invoked, the supplementary information 445, 447 for the subtask 437 is also played/displayed, respectively.
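The nested task/subtask layout of FIG. 4 maps naturally onto a small tree structure. The sketch below is illustrative only; the class and field names are assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    preamble: str                 # audio preamble reference
    description: str              # textual description
    supplementary: str = ""       # supplementary information (audio/text)
    subtasks: List["Task"] = field(default_factory=list)

def flatten(task, depth=0):
    """Yield (depth, description) pairs, walking subtasks depth-first,
    the order in which a client would present them to the user."""
    yield depth, task.description
    for sub in task.subtasks:
        yield from flatten(sub, depth + 1)

tire = Task("t2.amr", "Remove the flat tire",
            subtasks=[Task("t2a.amr", "Remove the lug nuts")])
print(list(flatten(tire)))
```

A client component showing "multiple levels of detail" would expand a task's subtasks only on request, but the underlying traversal order is the depth-first walk above.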



FIG. 5 is an interaction diagram that depicts an exemplary interaction between a sender's PC, notebook, PDA or laptop 507 that is used to create and upload AGAs and questionnaires and a recipient mobile device 515 used to respond to the AGAs and questionnaires, wherein the sender's PC, notebook, PDA or laptop 507 is used by a user to create AGAs with textual and audio components that a server 509 enhances, if necessary, and sends to recipients. In one embodiment, the sender's PC, notebook, PDA or laptop 507 comprises a plugin client that works with browsers to facilitate creation of AGAs and browsing through any AGAs received. The sender's PC, notebook, PDA or laptop 507 initially helps a user create an AGA and then sends it to the server 509, along with a list of recipients. The server 509 then forwards the AGA for optional storage, the storage being temporary or permanent, and retrieves it when requested by a recipient.


Then, the server 509 sends a push notification to the recipient devices, such as the recipient mobile device 515. In response, the recipient mobile device 515 initiates the browsing of the AGA starting with the first step of the multi-step activity. The server 509 sends the first segment of the AGA, which may comprise a set of steps, to the recipient mobile device 515. In one embodiment, the browser in the recipient mobile device 515 determines that the client plugin should process the AGA and invokes it, passing the AGA to it for processing. The client plugin manages the subsequent processing of the steps in the AGA.
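By way of a non-limiting illustration, delivering the first segment of an AGA as a batch of steps plus a cursor for the next request might look like the following sketch. The segment size and field names are assumptions, not part of the patent's protocol:

```python
# Illustrative sketch: after a push notification, the recipient device
# requests the first segment of the AGA; the server returns a batch of
# steps plus an index from which the next segment would continue.

def first_segment(aga_steps, segment_size=3):
    segment = aga_steps[:segment_size]
    # None signals that the segment already covers the whole activity.
    next_index = segment_size if segment_size < len(aga_steps) else None
    return {"steps": segment, "next_index": next_index}

seg = first_segment(["step 1", "step 2", "step 3", "step 4"], segment_size=2)
```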



FIG. 6 is a schematic block diagram of a mobile device 611 that supports presenting an audio guided activity (AGA) to a user. The mobile device 611 comprises a processing circuitry 621, an AGA storage 625 where the downloaded/received AGA is stored and managed, a communication circuitry 623 that facilitates AGA downloads, an authentication circuitry 615 that can be used for optional user and/or mobile device 611 authentication, and a player circuitry 617 that is used to play/render audio and/or video components of an AGA. In addition, it comprises a client component 613 that is capable of presenting an AGA to a user, gathering instructions from a user to move forward or step back across the AGA as it is being played, and responding to those user instructions. The client 613 comprises a usage monitoring component 619 that keeps track of the various AGAs that the user has accessed and reviewed, and reports on such usage as required.



FIG. 7 is a flow chart of an exemplary operation of a server that receives, stores and disseminates AGAs to mobile devices. At a start block 707, the server receives an AGA (or a subset of an AGA) from a user's PC/laptop/computer where the user creates the AGA. At a next block 709, the server determines recipients for the AGA. In one embodiment, recipients are provided as a default list by a user and reused for subsequent AGAs. In another embodiment, recipients sent along with an AGA are used instead of a default list of recipients already provided.


Then, at a next block 711, the server sends a notification, such as an SMS message or an email, to a user of a mobile device or a PC, the user being one of the recipients. The notification to recipient devices comprises a URL from which the AGA can be downloaded. It also optionally comprises a URL where the AGA can be remotely browsed from a website, such a feature being useful for access from a mobile device or PC that does not have the necessary client component. Typically, the URL referencing the AGA points to a webpage or AGA content hosted by the server, although URLs for other websites may also be used.
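By way of a non-limiting illustration, composing such a notification, a download URL plus an optional remote-browsing URL, might look like the following sketch. The URL scheme, server name, and wording are assumptions made for illustration:

```python
# Illustrative sketch: build the recipient notification text carrying a
# download URL and, optionally, a URL for browsing the AGA remotely.

def build_notification(aga_id, server="http://example-server", browsable=True):
    download_url = f"{server}/aga/{aga_id}/download"
    text = f"A new guided activity is available: {download_url}"
    if browsable:
        # Useful for devices that lack the client component.
        browse_url = f"{server}/aga/{aga_id}/browse"
        text += f" (or view it online: {browse_url})"
    return text

msg = build_notification("tire-change-42")
```

The same text could be carried in an SMS body or an email, per block 711.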


At a next decision block 713, the server determines if a client exists in a mobile device/PC that can download an AGA and present it to the user. For example, the server can determine that the mobile device is ready to download an AGA from the server when the mobile device receives a notification and the server receives a response to the notification sent to the mobile device. If the server determines that the client exists in the device and the client is ready, then, at a next block 715, the server facilitates download of the AGA to the client. Then, at a next block 717, the server monitors the usage by the client. Then, at a next block 719, the server reports usage by the mobile device/PC if necessary, such reporting being periodic, event based, etc. based on policies and preferences. Finally, at an end block, the processing terminates.


If, at the decision block 713, the server determines that the client does not exist or is not ready, then, at a next block 723, the server receives a request for webpages from the mobile device and provides webpages to the mobile device or PC, where a browser, such as an HTTP based browser, receives and presents the AGA steps to the user. The server presents the webpages sequentially to the browser in the mobile device or PC. Then, at a next block, the server monitors usage, and subsequently reports usage if necessary at a next block 719.
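By way of a non-limiting illustration, the branch at decision block 713, full download when a capable client is present, sequential webpages otherwise, can be sketched as follows. The function and field names are assumptions for this sketch:

```python
# Illustrative sketch of the decision at block 713: a device with a capable
# client downloads the whole AGA; otherwise the server falls back to serving
# the steps as sequential webpages for a plain browser.

def deliver(aga_steps, has_client):
    if has_client:
        # Client present and ready: facilitate download of the full AGA.
        return {"mode": "download", "payload": aga_steps}
    # No client: browser-based interaction, one webpage per step.
    return {"mode": "webpages", "payload": [f"<p>{s}</p>" for s in aga_steps]}

result = deliver(["Loosen lug nuts", "Jack up the car"], has_client=False)
```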



FIG. 8 is a flowchart of an exemplary operation of the server that facilitates tailored AGAs, wherein the AGA is tailored based on device capabilities of the user's mobile device or PC. At a start block 805, the server starts processing. Then, at a next block 807, the server receives a multi-media AGA and a recipient list from a sender's PC or computer. The sender might be the person who created the AGA, or another person responsible for creation and distribution of an AGA. The recipient list can be a list of phone numbers, email addresses, IP addresses or a combination of these.


Then, at a next block 809, the server processes the AGA and stores it. For example, if the server has to insert boilerplate text, pictures or boilerplate audio components into an AGA, it does so, based on preferences and policies. At a next block 811, the server notifies recipients from the recipient list provided by the user or set up as a default.


Then, at a next block 815, for each recipient in the recipient list, when the AGA is requested from the recipient mobile device or PC, the server determines the corresponding device's capabilities and tailors the AGA to the user's device. Tailoring involves reducing, shrinking or cropping images, reducing or eliminating audio content, etc. Then, at a next block 817, the server provides the tailored AGA per device capabilities. Then, at a next block 819, the server receives usage information and other statistics optionally sent by the client in the recipient mobile device or PC. Finally, processing terminates at an end block 823.
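By way of a non-limiting illustration, tailoring an AGA to a device's capabilities, dropping audio for devices without a player and shrinking images to the screen width, might be sketched as follows. The capability flags and field names are assumptions made for this sketch:

```python
# Illustrative sketch of tailoring (block 815): remove audio content for
# devices that cannot play it, and shrink oversized images to fit the
# device's maximum display width.

def tailor(aga_steps, capabilities):
    tailored = []
    for step in aga_steps:
        step = dict(step)  # leave the stored master copy untouched
        if not capabilities.get("audio", True):
            step.pop("audio_url", None)  # eliminate audio content
        max_width = capabilities.get("max_width")
        if max_width and step.get("image_width", 0) > max_width:
            step["image_width"] = max_width  # reduce/shrink the image
        tailored.append(step)
    return tailored

out = tailor([{"text": "Step 1", "audio_url": "a.mp3", "image_width": 1024}],
             {"audio": False, "max_width": 320})
```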



FIG. 9 is another schematic block diagram of a mobile device 911 that supports presenting an audio guided activity (AGA) to a user. The mobile device 911 comprises a processing circuitry 921, a storage 925 where the downloaded/received AGA is stored and managed, and a communication circuitry 923 that facilitates AGA downloads. It also comprises a questionnaire and AGA client 913 that handles audio guided questionnaires as well as audio guided activities, a media player circuitry 917 that is used to play/render audio and/or video components of an AGA or a questionnaire that may be currently presented to a user, and a notification client 929 that receives notifications for AGAs and questionnaires from a server distributing them. In addition, it comprises a recording circuitry 915 and an input circuitry 927 that can be used by a user to create audio guided activities in an ad hoc manner using the mobile device 911.


In one embodiment, a user of the mobile device 911 can create an ad hoc audio guided activity with the help of the questionnaire and AGA client 913. The user employs the recording circuitry 915 to record audio and/or video components that are incorporated into an AGA created by the user employing the questionnaire and AGA client 913. The user employs the input circuitry 927 to provide textual inputs that might be stored as textual preambles for the AGA. The questionnaire and AGA client 913 employs the communication circuitry 923 to send the ad hoc AGA created in the mobile device 911 to a server with which the mobile device is communicatively coupled.
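By way of a non-limiting illustration, assembling an ad hoc AGA on the device from recorded clips and typed preambles before uploading it might look like the following sketch. The function name, package layout, and file names are assumptions made for illustration:

```python
# Illustrative sketch: pair each recorded audio clip with its typed textual
# preamble to form the steps of an ad hoc AGA, ready to send to the server.

def assemble_adhoc_aga(title, recordings, texts):
    """Build an ad hoc AGA package from recorded clips and textual preambles."""
    steps = [{"audio": clip, "text": text}
             for clip, text in zip(recordings, texts)]
    return {"title": title, "steps": steps}

package = assemble_adhoc_aga(
    "Reset the router",
    recordings=["rec1.amr", "rec2.amr"],  # e.g. AMR clips from the recorder
    texts=["Unplug the router.", "Wait ten seconds, then plug it back in."],
)
```

The resulting package would then be handed to the communication circuitry for upload.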



FIG. 10 is a perspective block diagram of a network wherein guided activities are provided to users such that a user can view them on their televisions. A user creates a guided activity using a PC/Notebook/Computer 1031 and uploads it to a server 1009, or uses webpages provided by the server 1009 to create it. The user can use or create video clips, graphics, audio, etc. to create the guided activity. The server 1009 receives the guided activity (including multi-media and videos) from the user's PC/Notebook/Computer 1031 and adds boilerplate content if needed. The server 1009 determines which recipient TVs 1011, 1013 need to receive the guided activity and forwards it. Optionally, the server 1009 broadcasts the guided activity to all TVs and set-top-boxes (and mobile devices too).


The recipient TV A 1011 with the necessary client component receives the guided activity, lets the user browse through each step, and provides additional details as supplementary information when the user requests them. In addition, the recipient TV B 1013 that does not have a client component uses a client component provided by a STB 1023 to receive and browse through the guided activity.


The terms “circuit” and “circuitry” as used herein may refer to an independent circuit or to a portion of a multifunctional circuit that performs multiple underlying functions. For example, depending on the embodiment, processing circuitry may be implemented as a single chip processor or as a plurality of processing chips. Likewise, a first circuit and a second circuit may be combined in one embodiment into a single circuit or, in another embodiment, operate independently perhaps in separate chips. The term “chip”, as used herein, refers to an integrated circuit. Circuits and circuitry may comprise general or specific purpose hardware, or may comprise such hardware and associated software such as firmware or object code.


The terms “audio preamble” and “voice preamble” as used herein may refer to recorded voice inputs that a user records, to provide a question/prompt in human language, that also selectively incorporates responses in multiple choice format to aid selection by a recipient. The audio preamble may be captured by a mobile device in MP3 format, AMR format, WMA format, etc.


The term “audio-assisted questionnaire” as used herein may refer to a questionnaire comprising audio portions, such as audio preambles, audio supplementary information, audio descriptions of multiple choices, etc. that make it possible for a recipient to listen to most of the information of the questions in a questionnaire (employing human voices, in audible form) without having to read all of that in a small screen of a mobile device, without requiring scrolling through textual descriptions on a limited/constrained device.


As one of ordinary skill in the art will appreciate, the terms “operably coupled” and “communicatively coupled,” as may be used herein, include direct coupling and indirect coupling via another component, element, circuit, or module where, for indirect coupling, the intervening component, element, circuit, or module does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. As one of ordinary skill in the art will also appreciate, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two elements in the same manner as “operably coupled” and “communicatively coupled.”


The present invention has also been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claimed invention.


The present invention has been described above with the aid of functional building blocks illustrating the performance of certain significant functions. The boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claimed invention.


One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof.


Moreover, although described in detail for purposes of clarity and understanding by way of the aforementioned embodiments, the present invention is not limited to such embodiments. It will be obvious to one of average skill in the art that various changes and modifications may be practiced within the spirit and scope of the invention, as limited only by the scope of the appended claims.

Claims
  • 1. A mobile device communicatively coupled to a server based on communication needs, the mobile device comprising: at least one non-volatile memory having stored therein one or both of firmware and software;a communication circuitry;at least one processor operably coupled to the non-volatile memory and the communication circuitry, wherein the at least one processor, during operation, at least: receives and presents to a user of the mobile device instructions associated with an activity to be conducted by the user, wherein the instructions are provided as a multistep guided activity;presents interaction screens corresponding to specific steps of the multistep guided activity;receives user input associated with the steps of the multistep guided activity via the interaction screens;wherein the at least one processor presents supplementary information comprising additional multiple levels of details for any step of the multistep guided activity, when requested; andselectively advances to a subsequent step and to a corresponding subsequent screen, or reverts to a previous step and to a corresponding previous screen, of the multistep guided activity, in response to the user input;wherein each step of the multistep guided activity is retrieved one step at a time from a server in the network by the mobile device and presented to the user, wherein user interactions for that one step gathered by the recipient device is communicated back to the server upon completion of that one step.
  • 2. The mobile device according to claim 1 wherein the at least one processor, during operation, further at least: supports browsing for multistep guided activities by the user; andretrieves multistep guided activities from the server based on one or more criteria, wherein the one or more criteria comprises user interests, user group affiliations, user membership information and device capabilities.
  • 3. The mobile device according to claim 1 wherein the at least one processor, when it presents the steps, presents a corresponding textual information, an audio information and a graphics information for each of the steps of the multistep guided activity, and wherein the multistep guided activity comprises a sequence of steps necessary to configure a service, a machine, an electronic device or a vehicle.
  • 4. The mobile device according to claim 1 wherein for each step of the multistep activity, a purpose of the current step with an overview of the current step, a step description regarding the current step in the activity and supplementary information is presented, wherein the supplementary information comprises additional details to help a user better understand the step, its benefits, alternate steps if any, and other detail to aid the user's comprehension of the related activity.
  • 5. The mobile device according to claim 1 wherein for each step of the multistep activity, a minimal textual step description is provided designed to provide minimal necessary information for a user, and a supplementary information is provided for optional review by a user, wherein the supplementary information is in audio, textual, video and graphics forms, wherein the supplementary information is provided to aid user comprehension of the associated step in the multistep activity.
  • 6. A method of creating and disseminating step by step instructions of a multistep activity in a network to a plurality of mobile devices, the method comprising: creating a multistep guided activity package comprising instructions associated with an activity to be conducted by the user;making different versions of the multistep guided activity package for communication to different types of mobile devices and users based on one or more criteria, wherein the one or more criteria comprises device capabilities, user group affiliations, membership information, user interests and user preferences;storing the different versions of the multistep guided activity package at a server;identifying target devices among the plurality of mobile devices; andcommunicating an appropriate version of the entire multistep guided activity to the target devices;wherein each step of the multistep guided activity is retrieved one step at a time from a server in the network by a recipient device and presented to the user, wherein user interactions for that one step gathered by the recipient device is communicated back to the server upon completion of that one step.
  • 7. The method of claim 6, further comprising: transferring a newly created multistep guided activity for distribution to a server, wherein the newly created multistep guided activity provides the steps necessary to diagnose and fix a problem with a machine, an electronic device or a vehicle, configure a device, configure a service, or work on a vehicle.
  • 8. The method of claim 6 further comprising: sending a notification regarding the availability of the multistep guided activity package to the target devices wherein the notification also comprises a link that can be activated to download a client component and install it before displaying the multistep guided activity;wherein the multiple step activity package is packaged as an XML document comprising details for each step of the multistep activity, wherein the details are provided as a combination of an audio or video preamble, a textual step description, and supplementary information; andwherein the audio or video preamble is played during the display of a corresponding step of the multistep guided activity and the supplementary information is provided upon user request.
  • 9. The method of claim 6, the method further comprising: billing for the creation of multistep guided activity packages and for the distribution of multiple step guided activity packages.
  • 10. The method of claim 6 wherein the multistep guided activity is an activity comprising multiple levels of tasks and subtasks, each of the tasks and subtasks associated with a corresponding step of the multistep guided activity and comprising a corresponding level of preambles, description and supplementary information appropriate to that step in the activity.
  • 11. The method of claim 6 wherein the multistep guided activity is a multistep process of executing an operation.
  • 12. The method of claim 6 wherein the multistep guided activity package comprises, for each step, an audio or video preamble that provides a description of the corresponding step, an audio or video supplementary information that provides additional details of the corresponding steps of the multistep activity, and a textual step description for that step designed to provide minimal necessary information for that corresponding step to a user.
  • 13. The method of claim 6 wherein the multistep guided activity package also comprises a coupon to be provided to a user at the end of the multistep guided activity or during the display of the multistep guided activity.
  • 14. The method of claim 6 further comprising: notifying the target devices with information regarding the availability of the multistep guided activity package, wherein the notification comprises a link to the multistep guided activity used by the recipient mobile device among the target devices to activate a browser to present the multistep guided activity, and wherein the notification also comprises with a second link that triggers a download of a client component necessary for display of the multistep guided activity locally in the recipient mobile device without the use of the browser.
  • 15. The method of claim 6 wherein creating comprises: specifying details of the activity to be conducted as a multistep collection of tasks, each task expressed as a set of subtasks;defining each task and subtask as a step in the multistep guided activity;packaging the multistep guided activity as an XML document; andsending the XML document package of the multistep guided activity to a server for dissemination along with a list of recipients.
  • 16. A server communicatively coupled to a mobile device based on communication needs, the server comprising: a memory;a communication circuitry;at least one processor operably coupled to the memory and the communication circuitry, wherein the at least one processor, during operation, at least: receives and sends to the mobile device instructions associated with an activity to be conducted by a user, wherein the instructions are provided as a multistep guided activity;tracks creation and dissemination of the multistep guided activity for billing purposes;receives user input associated with the steps of the multistep guided activity;selectively provides content for a subsequent step or for a previous step of the multistep guided activity, in response to the user input, if the mobile device is only capable of browser based interactions;wherein the at least one processor provides supplementary information comprising additional details for any step of the multistep guided activity that helps a user better understand the step, its benefits, alternate steps if any, and any other details that aid the user's comprehension of the related activity; andwherein the at least one processor provides the multistep guided activity as a package comprising, for each step of the multistep guided activity, a purpose description of the current step with an overview of the current step, a step description regarding the current step in the activity, and the supplementary information.
  • 17. The server of claim 16, wherein the at least one processor disseminates the multistep guided activity to a plurality of mobile devices, wherein the multistep guided activity comprises a sequence of steps necessary to configure a service, a machine, an electronic device or a vehicle.
  • 18. The server of claim 16 wherein for each step of the multistep activity, a minimal textual step description is provided that is designed to provide minimal necessary information to a user, and a supplementary information for optional review by the user, wherein the supplementary information is in audio, textual, video and graphics forms.
CROSS REFERENCES TO RELATED APPLICATIONS

The present application is a continuation of, makes reference to, claims priority to, and claims benefit of U.S. Non-Provisional application Ser. No. 11/881,195, entitled AUDIO GUIDED SYSTEM FOR PROVIDING GUIDANCE TO USER OF MOBILE DEVICE ON MULTI-STEP ACTIVITIES, filed on Jul. 25, 2007, which in turn makes reference to, claims priority to, and claims benefit of U.S. Provisional Application Ser. No. 60/860,700, entitled AUDIO GUIDED SYSTEM FOR PROVIDING GUIDANCE TO USER OF MOBILE DEVICE ON MULTI-STEP ACTIVITIES, filed on Nov. 22, 2006, the complete subject matter of which is hereby incorporated herein by reference in its entirety. This patent application makes reference to U.S. Provisional Application Ser. No. 60/849,715, entitled "QUESTIONNAIRE CLIENT FOR MOBILE DEVICE", filed on Oct. 4, 2006, the complete subject matter of which is hereby incorporated herein by reference in its entirety. The present application makes reference to U.S. Provisional Application Ser. No. 60/850,084, entitled MOBILE DEVICE FOR CREATING ADHOC QUESTIONNAIRE, filed on Oct. 7, 2006, the complete subject matter of which is hereby incorporated herein by reference in its entirety. In addition, the present application makes reference to U.S. application Ser. No. 10/985,702, entitled QUESTIONNAIRE NETWORK FOR MOBILE HANDSETS, filed on Nov. 10, 2004, the complete subject matter of which is hereby incorporated herein by reference in its entirety.

20050283736 Elie Dec 2005 A1
20050288958 Eraker et al. Dec 2005 A1
20050288999 Lerner et al. Dec 2005 A1
20060015637 Chung Jan 2006 A1
20060029051 Harris Feb 2006 A1
20060031591 Hollstrom et al. Feb 2006 A1
20060034266 Harris Feb 2006 A1
20060036448 Haynie et al. Feb 2006 A1
20060047729 Yuan Mar 2006 A1
20060059174 Mese et al. Mar 2006 A1
20060068818 Leitersdorf et al. Mar 2006 A1
20060080232 Epps Apr 2006 A1
20060085816 Funk et al. Apr 2006 A1
20060085823 Bell et al. Apr 2006 A1
20060091203 Bakker et al. May 2006 A1
20060123082 Digate et al. Jun 2006 A1
20060126544 Markel et al. Jun 2006 A1
20060129455 Shah Jun 2006 A1
20060148420 Wonak et al. Jul 2006 A1
20060155513 Mizrahi et al. Jul 2006 A1
20060170956 Jung et al. Aug 2006 A1
20060178947 Zsigmond et al. Aug 2006 A1
20060190403 Lin et al. Aug 2006 A1
20060194185 Goldberg et al. Aug 2006 A1
20060195441 Julia et al. Aug 2006 A1
20060200384 Arutunian et al. Sep 2006 A1
20060203758 Tee et al. Sep 2006 A1
20060206493 Lipscomb et al. Sep 2006 A1
20060227364 Frank Oct 2006 A1
20060240851 Washburn Oct 2006 A1
20060242687 Thione et al. Oct 2006 A1
20060246915 Shrivastava Nov 2006 A1
20060259866 Prasad et al. Nov 2006 A1
20060261151 Hansen et al. Nov 2006 A1
20060262922 Margulies et al. Nov 2006 A1
20060265280 Nakada et al. Nov 2006 A1
20060265281 Sprovieri et al. Nov 2006 A1
20060277129 Johnston Dec 2006 A1
20060282304 Bedard et al. Dec 2006 A1
20060288363 Kunkel et al. Dec 2006 A1
20060290661 Innanen et al. Dec 2006 A1
20060294186 Nguyen et al. Dec 2006 A1
20070001806 Poll Jan 2007 A1
20070016472 Reznik Jan 2007 A1
20070022214 Harcourt Jan 2007 A1
20070025538 Jarske et al. Feb 2007 A1
20070036282 Engelke et al. Feb 2007 A1
20070038941 Wysocki Feb 2007 A1
20070050256 Walker et al. Mar 2007 A1
20070053513 Hoffberg Mar 2007 A1
20070061260 deGroeve Mar 2007 A1
20070086773 Ramsten Apr 2007 A1
20070099636 Roth May 2007 A1
20070101358 Ambady May 2007 A1
20070105496 Bonta et al. May 2007 A1
20070113263 Chatani May 2007 A1
20070115346 Kim et al. May 2007 A1
20070121580 Forte et al. May 2007 A1
20070121846 Altberg et al. May 2007 A1
20070130463 Law et al. Jun 2007 A1
20070130585 Perret et al. Jun 2007 A1
20070136374 Guedalia Jun 2007 A1
20070136773 O'Neil et al. Jun 2007 A1
20070150452 Tsurumaki et al. Jun 2007 A1
20070150608 Randall et al. Jun 2007 A1
20070154168 Cordray et al. Jul 2007 A1
20070155411 Morrison Jul 2007 A1
20070156828 Bramoulle Jul 2007 A1
20070157223 Cordray et al. Jul 2007 A1
20070162459 Desai et al. Jul 2007 A1
20070162502 Thomas et al. Jul 2007 A1
20070162566 Desai et al. Jul 2007 A1
20070174861 Song et al. Jul 2007 A1
20070201681 Chen et al. Aug 2007 A1
20070204003 Abramson Aug 2007 A1
20070220575 Cooper et al. Sep 2007 A1
20070233729 Inoue et al. Oct 2007 A1
20070235527 Appleyard et al. Oct 2007 A1
20070244758 Xie Oct 2007 A1
20070245365 Mitsui Oct 2007 A1
20070245366 Mitsui Oct 2007 A1
20070281692 Bucher et al. Dec 2007 A1
20070288315 Skillen et al. Dec 2007 A1
20070294354 Sylvain Dec 2007 A1
20070299681 Plastina et al. Dec 2007 A1
20080005341 Subbian Jan 2008 A1
20080009268 Ramer Jan 2008 A1
20080010351 Wardhaugh et al. Jan 2008 A1
20080013700 Butina Jan 2008 A1
20080021721 Jones et al. Jan 2008 A1
20080022325 Ober et al. Jan 2008 A1
20080027874 Monseignat et al. Jan 2008 A1
20080040303 Fogelson Feb 2008 A1
20080066080 Campbell Mar 2008 A1
20080069120 Thomas Mar 2008 A1
20080072139 Salinas et al. Mar 2008 A1
20080082394 Floyd et al. Apr 2008 A1
20080085675 Rao Apr 2008 A1
20080085682 Rao Apr 2008 A1
20080092181 Britt Apr 2008 A1
20080098071 Jones et al. Apr 2008 A1
20080107244 Setzer May 2008 A1
20080109278 Rao May 2008 A1
20080119133 Rao May 2008 A1
20080119167 Rao May 2008 A1
20080126113 Manning et al. May 2008 A1
20080126193 Robinson May 2008 A1
20080126226 Popkiewicz et al. May 2008 A1
20080132252 Altman et al. Jun 2008 A1
20080139239 O'Connor Jun 2008 A1
20080159178 Syrjanen et al. Jul 2008 A1
20080163075 Beck Jul 2008 A1
20080167946 Bezos Jul 2008 A1
20080201731 Howcroft Aug 2008 A1
20080209491 Hasek Aug 2008 A1
20080221986 Soicher et al. Sep 2008 A1
20080222046 McIsaac Sep 2008 A1
20080261524 Grushkevich Oct 2008 A1
20080261625 Hughes Oct 2008 A1
20080267155 Aragones et al. Oct 2008 A1
20080269636 Burrows et al. Oct 2008 A1
20080281687 Hurwitz et al. Nov 2008 A1
20080281711 Bridges et al. Nov 2008 A1
20080294760 Sampson et al. Nov 2008 A1
20080299953 Rao Dec 2008 A1
20080301231 Mehta et al. Dec 2008 A1
20090011748 Hotta Jan 2009 A1
20090037265 Moona Feb 2009 A1
20090063379 Kelly Mar 2009 A1
20090076882 Mei et al. Mar 2009 A1
20090117845 Rao May 2009 A1
20090119700 Sansom May 2009 A1
20090125510 Graham et al. May 2009 A1
20090176522 Kowalewski et al. Jul 2009 A1
20090187814 Raff Jul 2009 A1
20090210347 Sarcanin Aug 2009 A1
20090240569 Ramer et al. Sep 2009 A1
20090259552 Chenard et al. Oct 2009 A1
20090320077 Gazdzinski Dec 2009 A1
20100036970 Sidi et al. Feb 2010 A1
20100094878 Soroca et al. Apr 2010 A1
20100125498 Jaramillo May 2010 A1
20100128666 Masson et al. May 2010 A1
20100262923 Citrin et al. Oct 2010 A1
20110041077 Reiner Feb 2011 A1
20110113090 Peeri May 2011 A1
20110125838 Rao May 2011 A1
20110154397 Macrae et al. Jun 2011 A1
20110178877 Swix et al. Jul 2011 A1
20110197236 Rao Aug 2011 A1
20110265116 Stern et al. Oct 2011 A1
20120022905 Meyer et al. Jan 2012 A1
20120060184 Nguyen et al. Mar 2012 A1
20120079525 Ellis et al. Mar 2012 A1
20120164937 Rao Jun 2012 A1
20120233644 Rao Sep 2012 A1
20120240146 Rao Sep 2012 A1
20120265613 Ramer et al. Oct 2012 A1
20120278823 Rogers et al. Nov 2012 A1
20120284324 Jarville et al. Nov 2012 A1
20120297311 Duggal Nov 2012 A1
20130238445 Rao Sep 2013 A1
20140038159 Rao Feb 2014 A1
20150381759 Rao Dec 2015 A1
Foreign Referenced Citations (1)
Number Date Country
WO 2006051858 May 2006 WO
Non-Patent Literature Citations (108)
U.S. Appl. No. 14/985,330, filed Dec. 30, 2015, Rao.
U.S. Appl. No. 14/985,334, filed Dec. 30, 2015, Rao.
U.S. Appl. No. 14/985,336, filed Dec. 30, 2015, Rao.
U.S. Appl. No. 14/985,340, filed Dec. 30, 2015, Rao.
U.S. Appl. No. 14/985,342, filed Dec. 30, 2015, Rao.
U.S. Appl. No. 14/985,344, filed Dec. 30, 2015, Rao.
U.S. Appl. No. 14/985,351, filed Dec. 30, 2015, Rao.
U.S. Appl. No. 14/985,352, filed Dec. 30, 2015, Rao.
U.S. Appl. No. 14/985,353, filed Dec. 30, 2015, Rao.
U.S. Appl. No. 14/985,355, filed Dec. 30, 2015, Rao.
U.S. Appl. No. 10/985,702, filed Oct. 4, 2007, Office Action.
U.S. Appl. No. 10/985,702, filed Apr. 28, 2008, Office Action.
U.S. Appl. No. 10/985,702, filed Sep. 11, 2008, Office Action.
U.S. Appl. No. 10/985,702, filed Apr. 28, 2009, Office Action.
U.S. Appl. No. 10/985,702, filed Dec. 8, 2009, Office Action.
U.S. Appl. No. 10/985,702, filed Aug. 6, 2010, Office Action.
U.S. Appl. No. 11/010,985, filed Nov. 22, 2006, Office Action.
U.S. Appl. No. 11/010,985, filed May 18, 2007, Notice of Allowance.
U.S. Appl. No. 11/807,670, filed Dec. 22, 2009, Office Action.
U.S. Appl. No. 11/807,670, filed Sep. 7, 2010, Office Action.
U.S. Appl. No. 11/807,670, filed May 27, 2011, Office Action.
U.S. Appl. No. 11/807,670, filed Jan. 11, 2012, Office Action.
U.S. Appl. No. 11/807,670, filed May 17, 2012, Notice of Allowance.
U.S. Appl. No. 11/807,672, filed Jul. 9, 2009, Office Action.
U.S. Appl. No. 11/807,672, filed Jul. 29, 2010, Office Action.
U.S. Appl. No. 11/807,672, filed Apr. 27, 2011, Office Action.
U.S. Appl. No. 11/807,672, filed Mar. 20, 2012, Notice of Allowance.
U.S. Appl. No. 11/810,597, filed Jan. 28, 2010, Office Action.
U.S. Appl. No. 11/810,597, filed Oct. 13, 2010, Office Action.
U.S. Appl. No. 11/810,597, filed May 16, 2011, Office Action.
U.S. Appl. No. 11/810,597, filed Oct. 21, 2011, Office Action.
U.S. Appl. No. 11/810,597, filed Apr. 5, 2012, Office Action.
U.S. Appl. No. 11/810,597, filed Sep. 25, 2012, Office Action.
U.S. Appl. No. 11/821,771, filed Nov. 26, 2010, Office Action.
U.S. Appl. No. 11/821,771, filed Jun. 29, 2011, Office Action.
U.S. Appl. No. 11/821,771, filed Dec. 14, 2011, Notice of Allowance.
U.S. Appl. No. 11/823,006, filed Nov. 28, 2011, Office Action.
U.S. Appl. No. 11/823,006, filed Apr. 11, 2012, Office Action.
U.S. Appl. No. 11/823,006, filed Jun. 3, 2013, Office Action.
U.S. Appl. No. 11/823,006, filed Mar. 10, 2014, Office Action.
U.S. Appl. No. 11/881,195, filed Sep. 28, 2010, Office Action.
U.S. Appl. No. 11/881,195, filed Jun. 9, 2011, Office Action.
U.S. Appl. No. 11/881,195, filed May 21, 2012, Office Action.
U.S. Appl. No. 11/881,195, filed Oct. 18, 2012, Office Action.
U.S. Appl. No. 11/881,195, filed Jul. 18, 2013, Office Action.
U.S. Appl. No. 11/881,195, filed Dec. 11, 2013, Notice of Allowance.
U.S. Appl. No. 11/888,100, filed Aug. 4, 2010, Office Action.
U.S. Appl. No. 11/888,100, filed May 27, 2011, Office Action.
U.S. Appl. No. 11/888,100, filed Dec. 19, 2011, Notice of Allowance.
U.S. Appl. No. 11/891,193, filed Sep. 2, 2010, Office Action.
U.S. Appl. No. 11/891,193, filed May 16, 2011, Office Action.
U.S. Appl. No. 11/891,193, filed Jan. 27, 2012, Office Action.
U.S. Appl. No. 11/891,193, filed Apr. 13, 2012, Notice of Allowance.
U.S. Appl. No. 11/891,193, filed Jan. 4, 2013, Notice of Allowance.
U.S. Appl. No. 11/897,183, filed Oct. 5, 2010, Office Action.
U.S. Appl. No. 11/897,183, filed Mar. 15, 2011, Office Action.
U.S. Appl. No. 11/897,183, filed Dec. 16, 2011, Office Action.
U.S. Appl. No. 11/897,183, filed Jul. 2, 2012, Notice of Allowance.
U.S. Appl. No. 11/897,183, filed Oct. 16, 2012, Notice of Allowance.
U.S. Appl. No. 11/977,763, filed Aug. 4, 2010, Office Action.
U.S. Appl. No. 11/977,763, filed Apr. 4, 2011, Notice of Allowance.
U.S. Appl. No. 11/977,764, filed Sep. 2, 2010, Office Action.
U.S. Appl. No. 11/977,764, filed Feb. 22, 2011, Notice of Allowance.
U.S. Appl. No. 11/978,851, filed Feb. 24, 2011, Office Action.
U.S. Appl. No. 11/978,851, filed Nov. 2, 2011, Office Action.
U.S. Appl. No. 11/978,851, filed Jun. 18, 2012, Notice of Allowance.
U.S. Appl. No. 12/011,238, filed Jul. 8, 2010, Office Action.
U.S. Appl. No. 12/011,238, filed Feb. 9, 2011, Office Action.
U.S. Appl. No. 12/011,238, filed Sep. 14, 2011, Office Action.
U.S. Appl. No. 12/011,238, filed Aug. 14, 2012, Office Action.
U.S. Appl. No. 12/011,238, filed Feb. 27, 2013, Office Action.
U.S. Appl. No. 12/011,238, filed Sep. 19, 2013, Office Action.
U.S. Appl. No. 13/017,024, filed Nov. 21, 2012, Office Action.
U.S. Appl. No. 13/075,144, filed Aug. 25, 2011, Notice of Allowance.
U.S. Appl. No. 13/075,882, filed Mar. 25, 2013, Office Action.
U.S. Appl. No. 13/075,882, filed Oct. 8, 2013, Office Action.
U.S. Appl. No. 13/075,882, filed Oct. 17, 2014, Notice of Allowance.
U.S. Appl. No. 13/093,733, filed Sep. 14, 2011, Office Action.
U.S. Appl. No. 13/093,733, filed Jan. 26, 2012, Office Action.
U.S. Appl. No. 13/093,733, filed Mar. 19, 2012, Notice of Allowance.
U.S. Appl. No. 13/237,625, filed Oct. 15, 2012, Office Action.
U.S. Appl. No. 13/237,625, filed Nov. 30, 2012, Notice of Allowance.
U.S. Appl. No. 13/354,811, filed May 9, 2013, Notice of Allowance.
U.S. Appl. No. 13/397,136, filed Jun. 4, 2012, Office Action.
U.S. Appl. No. 13/397,136, filed Jan. 24, 2013, Notice of Allowance.
U.S. Appl. No. 13/402,880, filed Sep. 10, 2012, Office Action.
U.S. Appl. No. 13/402,880, filed Apr. 18, 2013, Office Action.
U.S. Appl. No. 13/402,880, filed Jun. 25, 2013, Notice of Allowance.
U.S. Appl. No. 13/412,574, filed Dec. 20, 2012, Office Action.
U.S. Appl. No. 13/412,574, filed Aug. 15, 2013, Office Action.
U.S. Appl. No. 13/412,574, filed Jan. 5, 2015, Office Action.
U.S. Appl. No. 13/412,574, filed Jul. 15, 2015, Office Action.
U.S. Appl. No. 13/473,603, filed Nov. 7, 2013, Office Action.
U.S. Appl. No. 13/473,606, filed May 30, 2014, Office Action.
U.S. Appl. No. 13/473,606, filed Aug. 21, 2014, Office Action.
U.S. Appl. No. 13/484,605, filed Oct. 11, 2012, Office Action.
U.S. Appl. No. 13/484,605, filed Jun. 25, 2013, Notice of Allowance.
U.S. Appl. No. 13/554,619, filed Mar. 12, 2013, Office Action.
U.S. Appl. No. 13/554,619, filed Jun. 13, 2013, Notice of Allowance.
U.S. Appl. No. 13/554,685, filed Dec. 29, 2014, Office Action.
U.S. Appl. No. 13/554,685, filed Feb. 26, 2015, Notice of Allowance.
U.S. Appl. No. 13/869,678, filed Oct. 20, 2015, Office Action.
U.S. Appl. No. 13/902,839, filed Oct. 30, 2013, Office Action.
U.S. Appl. No. 13/902,839, filed Feb. 4, 2014, Notice of Allowance.
U.S. Appl. No. 13/908,447, filed Jun. 24, 2015, Notice of Allowance.
U.S. Appl. No. 14/047,015, filed Nov. 30, 2015, Office Action.
U.S. Appl. No. 14/059,878, filed May 20, 2015, Notice of Allowance.
U.S. Appl. No. 14/059,878, filed Jun. 24, 2015, Notice of Allowance.
Related Publications (1)
Number Date Country
20140120868 A1 May 2014 US
Provisional Applications (1)
Number Date Country
60860700 Nov 2006 US
Continuations (1)
Number Date Country
Parent 11881195 Jul 2007 US
Child 14147600 US