The present disclosure relates to conducting searches for information and presenting search results and other events on a timeline user interface to organize the events. Searches may be initiated via voice commands.
Mobile computing devices such as smartphones and tablet computers are continuously evolving into more complex devices with increasing computational and storage capacity. As mobile devices become more powerful, users are storing increasing amounts of data on their mobile computing devices, executing an increasing number of applications on their mobile computing devices, and accessing an increasing number of services on their mobile computing devices. The increasing number of applications and services used to find information in a particular domain makes it increasingly difficult for users to retrieve the information they desire. A user must often navigate through a series of menus and screens associated with different applications or services to find and retrieve the desired information.
Many applications have built-in search mechanisms to search for information associated with the application. For example, some applications specialize in finding information related to certain domains such as restaurants, music, sports, stocks and so forth. Furthermore, even when a user is able to find useful results, it is often difficult to organize and retrieve the results when the user wants to view them at a later time. The user is often required to re-launch the particular application that previously found the information, navigate a history page, and select the desired entry if the user can find it. History pages often do not summarize results, so finding the desired entry from a previous search is often a challenge.
Furthermore, although voice functionality is included in some applications, such functionality is often cumbersome and frustrating for many users. Users are often reluctant to utter voice commands in a natural way, and instead, attempt to modify their natural way of speaking so that the application on the mobile computing device will accurately derive their intention.
Embodiments disclose a method, non-transitory computer readable storage medium and a system for performing commands and presenting search results associated with applications and services on a computing device such as a smartphone. The search results are provided by applications or services that are configured to retrieve and present search results to a user for a specific domain.
In one embodiment, the method includes the steps of receiving a command from the user of the computing device, the command including at least one parameter and being related to a domain and at least one task. The command may be a voice command uttered by the user such as “Find me a Chinese restaurant in San Francisco”. The domain, task and at least one parameter are identified from the command, and suitable services that are configured to perform the command are also identified. At least one service is selected and the command is performed by the service. In various embodiments, the command is executed by calling an application programming interface made available by a third party. The service returns results once the command is performed, and a results page is generated and presented to the user on the display screen of the mobile device. At least a portion of the results is stored so that the user may access the results at a later time if desired.
The results are organized on a results history page in event entries in which each result is visually indicated in a respective entry by a graphical representation identifying the domain of the result. Each result also includes a summary of details for the result, formatted to optimize the real estate available on the screen of the particular mobile computing device. The summary may include the time that the command was performed, the location and time of specific events such as sports games, the number of results that match a query such as “Chinese restaurants in San Francisco” and so forth. The results history page is displayed on the screen of the mobile computing device when a user input is received to show the results history page.
When a user is viewing the results history page, and in response to receiving a user input for selecting one of the results on the results history page, the results page associated with the selected item is displayed on the screen of the mobile computing device.
The command may be inputted by the user via any input device, such as a microphone for a voice command, a touch screen, a keyboard, a mouse, and so forth. In cases where the inputted command is a voice command uttered by the user, natural language processing is performed on the voice command to identify the domain, the at least one parameter, and the at least one task to which the voice command relates.
In some embodiments, the results are presented on the results history page in chronological or reverse-chronological order. In some embodiments, the results are grouped by domain and/or ordered by time.
This Summary is provided to introduce a selection of representative concepts in a simplified form that are further described below in the Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in any way that would limit the scope of the claimed subject matter.
Other aspects and advantages of the invention will become apparent from the following detailed description taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the invention.
The disclosed embodiments have other advantages and features which will be more readily apparent from the detailed description, the appended claims, and the accompanying drawings, in which:
For convenience, like reference numerals may refer to like parts and components in the various drawings.
The Figures (FIGS.) and the following description relate to preferred embodiments by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures, components and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles disclosed herein.
As will be appreciated by one skilled in the art, the present invention may be embodied as a method, system, apparatus or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects, all generally referred to herein as a “circuit”, “module”, “library” and the like. Furthermore, the present invention may take the form of a computer program product on a computer-usable storage medium having computer-usable program code embodied in the medium.
Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. The figures depict embodiments of the disclosed system (and method) for the purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of structures, components and methods illustrated herein may be employed without departing from the principles described herein.
Embodiments disclosed include a method, a device, a system and a non-transitory computer readable storage medium for enhancing the user experience associated with searching for and retrieving information associated with one or more applications/services on a computing device. The result of a particular search is organized on a results history page (e.g. as an event entry) grouped by the domain to which the search relates and/or ordered by time (chronologically or reverse-chronologically). Each search result event includes a visual representation (such as a graphical icon and/or color coding) and additional details related to the search result. Additional details may include the time that the search was performed, an indication of the service that was called to perform the search, and/or a summary of the results, as well as other information. A user may click on the event (e.g. a part of a particular result in the results history page), which directs the user to the original results screen that was shown to the user after the command was performed. Though the term “page” is used herein, akin to a web page (typically comprising a mark-up language and hypertext for linking to other pages) displayed on a display screen of the mobile computing device via a graphical user interface for interaction with a user, it is understood that the graphical user interface presentation of the search results may be implemented in forms/structures other than strict page-oriented technology for viewing and interacting with the search results. The term “page” or “pages” when used herein includes such other forms/structures.
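By way of illustration only, the following Python sketch shows one possible way an event entry on the results history page might be represented in software; the class name, field names, and ordering shown are assumptions and are not required by any embodiment.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class EventEntry:
    """Hypothetical record for one entry on the results history page."""
    domain: str             # domain of the result, e.g. "RESTAURANTS" or "SPORTS"
    domain_icon: str        # graphical representation identifying the domain
    performed_at: datetime  # time the command was performed
    service_name: str       # service that was called to perform the search
    summary: str            # brief summary of details for the result
    results_page_id: str    # reference back to the original results screen

def order_history(entries: List[EventEntry]) -> List[EventEntry]:
    """Present entries in reverse-chronological order (one of the orderings described above)."""
    return sorted(entries, key=lambda e: e.performed_at, reverse=True)
```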
As an example, a user may perform a search such as “When are the Yankees playing next?” using an application such as an intelligent assistant configured to find information using a variety of services and/or applications. The search may be entered by a touch screen, a keyboard, and/or may be uttered by a user in the form of a voice command. The voice command in this case includes information about the domain to which the search relates, the task the user would like performed, as well as parameters that are included for finding specific information desired by the user. In this specification, a domain refers to a general field or classification of information. In various embodiments, the example query may be classified as belonging to the domain of SPORTS. Domain classification may be performed for any command by the application so that an appropriate service may be identified that is capable of finding the information that the user wants. The command is analyzed to obtain the specific task that the user intends to have performed, in this case, finding the next Yankees game. Parameter information is also extracted from the command. In this case, some of the parameters that may be extracted from the command include the name of the team (i.e. the New York Yankees) and the date of the game (i.e. closest game in the future to the present time). The intelligent assistant may then create a software object and/or data structure containing the domain, task and parameters that were derived from the command and call an appropriate service that is configured to find and return information about sports games.
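It will be appreciated that the software object or data structure mentioned above could take many forms; the following is merely a sketch, in Python, of a hypothetical template for the example query, with assumed field names and task labels.

```python
# Illustrative only: a hypothetical template the intelligent assistant might build
# for the command "When are the Yankees playing next?". Field and task names are assumptions.
command_template = {
    "domain": "SPORTS",                # classification of the command
    "task": "find_next_game",          # specific task the user intends to have performed
    "parameters": {
        "team": "New York Yankees",    # extracted parameter: team name
        "date": "next_upcoming",       # closest game in the future to the present time
    },
}

def route(template, services_by_domain):
    """Pass the template to a service registered for the command's domain."""
    service = services_by_domain[template["domain"]]
    return service(template["task"], template["parameters"])
```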
In various embodiments, the invention is implemented on a mobile computing device that can be carried around by a user. Mobile computing devices include, among others, an MP3 player, a cellular phone, a smartphone, a PDA (Personal Digital Assistant), a set-top box, a video game console, and so forth. The invention may also be implemented with other hardware comprising a computer processor such as personal computers, notebook computers, appliances, etc.
Applications are computer programs that interact with users to allow the users to perform desired tasks on their mobile computing device. The application programs may include, among others, web browsers, media players, calendars, time and reminder applications, search programs specializing in specific domains such as restaurants and movie tickets, and so forth. Two or more applications may operate in conjunction to perform a desired task on the mobile computing device.
Services are a group of data and/or functions accessible by applications. The services are often managed independently of the applications. The services provide various useful data and perform various functions in conjunction with the applications. The services may be implemented locally on the mobile computing device or remotely in a computing device separate from the mobile computing device. An application may call external and internal services via a pre-determined interface such as an application programming interface (API). When used in the context of web development, an API is typically defined as a set of Hypertext Transfer Protocol (HTTP) request messages, along with a definition of the structure of response messages, which is usually in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. A “Web API” is often used as a synonym for web service, and includes Simple Object Access Protocol (SOAP) based services as well as direct Representational State Transfer (REST) style communications. Web APIs allow the combination of multiple services into new applications known as mash-ups.
Services that may be used with the invention include, among others, web mapping services, traffic information services, public transit services, contact management services, calendar services, news services, business finder services, global positioning system (GPS) services, and so forth. Functions conventionally provided by applications may be moved to services, where the applications provide basic user interfaces while the service performs the bulk of the functions. For example, an application may perform the functions of receiving user inputs, deriving the intent of the user, identifying and calling an appropriate service to accomplish a command according to the derived intent of the user, and generating output screen views and/or audio, while a contact information service (for example) searches contacts, manages contacts, and retrieves contact information requested from the application. In some embodiments, the user's interaction with the search results may be evaluated, for example, to identify a last results screen navigated by the user for storing for later presentation to the user.
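As a non-limiting illustration of a REST-style Web API call returning JSON, a request might resemble the following sketch; the endpoint, parameter names, and response structure are hypothetical.

```python
import json
import urllib.parse
import urllib.request

def call_rest_service(query, location):
    """Illustrative REST-style Web API call; endpoint and parameters are assumptions."""
    params = urllib.parse.urlencode({"query": query, "location": location})
    url = f"https://api.example.com/v1/search?{params}"
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))  # response body parsed from JSON
```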
A data entry is a piece of information associated with an application or service. The data entry includes, among others, a file, an entry in a database, and a string of characters in a menu or parameter setting of an application or a service. Each data entry may be associated with one or more applications or services.
The network architecture illustrated in
The local services 120 or external services 118 are accessed via applications executed on the mobile computing device 102 to perform functions requested by the user as described with reference to
The intelligent services engine 150 provides functionality relating to interpreting the desired intent of the user from user inputs (e.g. voice commands) to mobile computing device 102, identifying appropriate services to accomplish the desired intent of the user, and managing service requests with internal and external services 120, 118. The intelligent services engine 150 may be viewed as a particular type of remote service 118 that provides functionality to receive user input, interpret user intent from the user input, and, among other functionality, to accomplish what the user wants by interfacing with appropriate services 118, 120. In some embodiments, intelligent services engine 150 is not entirely a remote service but may also reside partly or entirely on mobile computing device 102. Alternatively, the data and/or results provided by intelligent services engine 150 may be cached on the mobile computing device 102 to improve speed and so that the mobile computing device 102 can perform operations when network access is unavailable.
The mobile computing device 102 includes, among others, a processor 220, input devices 230, a screen 240, a communication module 250, and a memory 260. The components of the mobile computing device 102 communicate via a bus 282. The processor 220 executes instructions stored in the memory 260 to perform various types of operations on the mobile computing device 102. Although
The input devices 230 receive various user inputs and detect user actions on the mobile computing device 102. The input devices 230 may include, among others, one or more switches, sliders, motion sensors, a touch screen 240, one or more cameras, a microphone and so forth.
The screen 240 of the mobile computing device 102 may be implemented using various display technologies such as liquid crystal display (LCD), organic light-emitting diode (OLED), light-emitting diode (LED) display, electroluminescent displays (ELDs), bistable liquid crystal displays, cholesteric displays, and field emission displays (FEDs). The screen 240 displays various screen views associated with applications or services as well as windows associated with search operations.
The communication module 250 communicates with the network 110 via conventional wired or wireless protocols including, among others, Bluetooth, Wireless Fidelity (WiFi), General Packet Radio Service (GPRS), third-generation (3G) mobile, High Speed Downlink Packet Access (HSDPA), High Speed Uplink Packet Access (HSUPA), Worldwide Interoperability for Microwave Access (WiMAX) and other technologies currently available or under development. In one embodiment, two or more distinct communication modules may be provided to communicate with the same or different networks via multiple protocols. For example, the mobile computing device 102 may include a Bluetooth communication module for short-range communication and a 3G communication module for long-range communication.
The memory 260 may be implemented as any conventional data memory including, among others, various types of volatile or non-volatile memory. Two or more types of memory may also be used in conjunction. Further, removable memory such as a memory stick may also be used.
The memory 260 includes software components including among others, local services 114, applications 264, and an operating system 268. The local services 114 are accessed by one or more applications 264 to provide various services to the user. In one embodiment, one or more local services 114 include or are associated with a database for storing data entries. The interoperation between the local services 114 and the applications 264 is described below in detail with reference to
In various embodiments, application 201 (also referred to herein as intelligent assistant 201) may act as an interface to allow users to access remote/local services via an intelligent services engine 150 by providing input (such as voice queries) to intelligent assistant 201. The intelligent services engine 150 is a special type of application dedicated to deriving user intent from the user input and performing searches on data associated with the remote/local services 408 according to the derived user intent.
The intelligent assistant 201 (i.e. a particular type of application on mobile computing device 102) operates in conjunction with the intelligent services engine 150 to organize and select the search result for presentation to the user. The search results from the remote/local services 408A may include a very large number of hits matching the derived intent of the user.
Reference is now made to
Delegate Service 308 may be chiefly responsible for receiving requests from mobile computing devices 102, coordinating the processing of components and directing data between components (e.g. 312, 330, 340, 314, 316, 350, etc.) as well as providing results to mobile computing devices 102 that made requests to delegate service 308. It will be appreciated that each of the components shown in
In some embodiments, intelligent services engine 150 may include an automated speech recognition (ASR) module 312 for converting voice-based input commands into a text string representation of the voiced input. A natural language processing (NLP) engine 314 may be provided to receive the text string representation of the voice command from ASR module 312 and derive the user's intention from the voiced (or otherwise inputted) command. NLP engine 314 may be further configured to recognize the domain (and perhaps one or more sub-domains) to which the user command relates, the specific task the user wants to have performed, as well as perform entity extraction on the user command to identify relevant parameters embodied in the user command. Services manager 330 receives data from NLP engine 314 and identifies one or more remote and/or local services 118, 120 configured to accomplish the task according to the derived user intent.
Some or all of the components of intelligent services engine 150 may be cloud-based (in that the components are stored and executed on remote servers), and in other embodiments, some or all of the components of intelligent services engine 150 are stored and executed on the mobile computing device 102. Although the components of intelligent services engine 150 are sometimes referred to herein in the singular (i.e. delegate service 308), it will be appreciated that some or all of the components may be instantiated as several web services, the number of which may be determined by a load balancer, the number of requests from other components and/or mobile computing devices 102, and so forth. Dialogue manager 316 may be provided for interacting with the user in a conversational manner to elicit additional information (such as parameters), confirm commands about to be performed, confirm results, and so forth. Timeline module 350 is for generating timeline views that allow a user to view task results, organize tasks, connect with relationships in a social network setting, etc. Display module 340 is for formatting the results from the other modules (e.g. 312, 314, 316, etc.) before the results are communicated to the mobile computing device 102 making the request. Formatting the results may involve protocol-specific formatting, phone-specific formatting, operating system specific formatting, and so forth. Database 315 is for storing long-term and short-term data that is relevant to the operations of intelligent services engine 150 and may include user history, user preferences, cached results from services manager 330, a list of appropriate services 118, 120 and their associated functionality and API calls, etc.
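Purely for illustration, the flow from ASR module 312 through NLP engine 314 to services manager 330 might be sketched as follows; the function bodies are placeholders and the returned values are assumed examples, not the actual recognition or parsing logic.

```python
def asr_module(audio_bytes):
    """ASR module 312: convert a voice command into a text string (placeholder)."""
    return "what's the weather like in toronto today"

def nlp_engine(text_command):
    """NLP engine 314: derive domain, task, and parameters from the text command (placeholder)."""
    return {"domain": "WEATHER", "task": "get_forecast",
            "parameters": {"city": "Toronto", "date": "today"}}

def services_manager(intent, services):
    """Services manager 330: pick a service able to accomplish the derived intent."""
    service = services.get(intent["domain"])
    if service is None:
        raise LookupError(f"no service registered for domain {intent['domain']}")
    return service(intent["task"], intent["parameters"])
```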
Reference is next made to
The position of the group on the timeline may be determined according to the time that a last event in the group was performed or according to the time that a first event in the group was performed. A status may represent whether a user has reviewed the action performed.
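A minimal sketch of how a group's position on the timeline might be computed from the times of its events is shown below; the function name and inputs are assumptions.

```python
from datetime import datetime

def group_position(event_times, use_last_event=True):
    """Timestamp used to place a group of events on the timeline:
    the last (most recent) event in the group, or the first, per the embodiment."""
    return max(event_times) if use_last_event else min(event_times)

# Example: a group of three related events placed by its most recent event.
times = [datetime(2012, 8, 29, 9, 0), datetime(2012, 8, 29, 11, 30), datetime(2012, 8, 29, 15, 0)]
print(group_position(times))  # 2012-08-29 15:00:00
```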
Reference is next made to
The timeline 1050 may include one or more timeline items 1060 that indicate items that have occurred, are occurring, are scheduled to occur, as well as grouped items that are related in some way. Each timeline item 1060 may include one or more details about the item such as the time 1062 corresponding to the item, a graphical representation 1066 indicating the category of the item (such as an icon), a brief description 1064 of the item, and so forth. A category (also referred to herein as a domain) in the context of this specification is a field of action, thought, or influence in which individual items that belong to a category are logically related. For example, text messages, phone calls, emails and social media communications may all be grouped together under the category of communications. Other examples of categories that may be implemented with the conversational system 300 include alarms and reminders, restaurant events, to-do items, searches (via the Internet or affiliates), and so forth. It should be appreciated that the user interfaces 1002, timelines 1050, timeline items 1060 and the categories thereof referred to in this specification are merely exemplary, and that many other embodiments are contemplated within the scope of the invention.
The description 1064 of the item may include such information as the title of the item (for example, “Wake up”), the address of the item in the case of events, the names of people attending an event (for example, “Mary Smith”), the address where the event is scheduled to take place (for example, “53 Fairview Ave.”), the number of items grouped in a particular category (for example, “4 new messages”), and so forth.
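By way of example only, a timeline item 1060 might be modeled as in the following sketch; the class and field names are hypothetical and merely chosen to mirror the reference numerals above.

```python
from dataclasses import dataclass

@dataclass
class TimelineItem:
    """Hypothetical representation of a timeline item 1060."""
    time: str         # 1062, e.g. "7:00 AM"
    category: str     # domain, e.g. "communications" or "alarms and reminders"
    icon: str         # 1066, graphical representation indicating the category
    description: str  # 1064, e.g. "4 new messages" or "Wake up"

item = TimelineItem(time="7:00 AM", category="alarms and reminders",
                    icon="alarm", description="Wake up")
```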
The timeline 1050 shown in
Continuing with the elements of user interface 1002, a cover photo 1004 is included that corresponds to a particular date, in this case, Wednesday August 29th. Cover photo 1004 may be selected to provide a visually pleasing environment for the user, and in some embodiments, the colors found in cover photo 1004 may correspond to the elements of the user interface 1002 (such as timeline toggle 1012, date display 1006, the lines between timeline items 1060, etc.) to give the user interface 1002 a pleasing color-coordinated appearance. Although not shown in the drawings, in some embodiments, a user may view past and/or future cover photos 1004 that do not correspond with the current date 1006 by clicking on visual icons such as arrows. Cover photos 1004 may also include a clickable caption that is meant to provide more information about the photo and to prompt the user to explore the cover photo 1004 in more detail. For example, a cover photo for a particular day may show the national animal of a country and include a caption such as “Do you know what country this animal represents?” In some cases the caption may be clickable so that a user can learn more about the animal, the country, or a related topic. It will be appreciated that any given cover photo 1004 may include more than one caption, whereby clicking on a particular caption takes the user to a particular destination such as a webpage.
User interface 1002 also includes a local weather display 1010 which may include useful weather information such as the temperature, an icon representing the weather conditions (e.g. sunny, cloudy, rainy, etc.), the probability of precipitation, wind conditions and so forth. The application 201 may periodically access the global positioning system (GPS) coordinates of the device 102 by calling an internal GPS service 120 to retrieve the location of the device. Once the application 201 retrieves the current GPS location of the device 102, the application may call an appropriate weather service 118 (from a list of services stored by the conversational system 300) and display the weather information on the user interface 1002. The weather location may be user configurable (not shown).
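The following sketch illustrates, under assumed service interfaces, how the application might combine the internal GPS service 120 and an external weather service 118 to populate the weather display 1010; the method names shown are not actual APIs.

```python
def update_weather_display(gps_service, weather_service):
    """Refresh the local weather display 1010; the service interfaces are hypothetical."""
    latitude, longitude = gps_service.current_coordinates()             # internal GPS service 120
    forecast = weather_service.current_conditions(latitude, longitude)  # external weather service 118
    return {
        "temperature": forecast["temperature"],
        "conditions": forecast["conditions"],                 # e.g. sunny, cloudy, rainy
        "precipitation": forecast["precipitation_probability"],
        "wind": forecast["wind"],
    }
```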
As shown in
In one embodiment, in order to display the speech entry form 2810, a user must press and hold the speech button 1014 on the touch screen 240, and drag the speech button 1014 generally along the speech line 2610 until the speech button 1014 makes contact with the target 2620, as shown in
When the user successfully drags the speech button 1014 to the target 2620, the application 201 will display a speech entry form 2810, an example of which is shown in
In various embodiments, the speech command uttered by the user is converted by ASR engine 312 into a text representation of the uttered speech command. The ASR engine 312 may direct the text representation to NLP engine 314 which is configured to identify the domain that the command relates to, at least one task that the user desires to have performed, and at least one parameter relevant to the task. In this specification, the voice input may be referred to as the “voice command” and the text representation of the voice command that is generated by the ASR engine 312 may be referred to as the “text command”.
As will be appreciated, ASR engine 312 will not always produce a text command that exactly matches the voice command uttered by the user. The conversational system 300 may include functionality that allows a user to correct a misinterpretation by the ASR engine 312 or to change their mind about the details of the task they desire to have accomplished. For example, suppose that the user utters the voice command “What's the weather like in Toronto today?” while the speech entry form 2810 is displayed, but the ASR engine 312 produces a text command of “What's the weather like in Torino today?” Once ASR engine 312 has produced a text command representing the voice command, the application 201 displays a speech correction screen 2910, an example of which is shown in
Speech correction screen 2910 is displayed to the user for a predetermined time period which may be indicated by progress bar 2920. A user may edit the text command or repeat a voice command at any time while the speech correction form 2910 is displayed. A caption 2916 of the text command is displayed on the speech correction form 2910 so that the user can view the text command produced by ASR engine 312 and make any corrections if desired. The speech correction form 2910 includes an edit button 2914 and a “resay” (i.e. repeat) button 2912 that allow a user to manually change the text command (by using the touch screen 240, for example) or to utter another voice command, respectively.
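A simplified sketch of the correction window behavior (display for a predetermined period, allow edit or resay, otherwise submit) is given below; the polling interface and timeout value are assumptions.

```python
import time

def correction_window(text_command, timeout_seconds=5.0, poll=lambda: None):
    """Show the text command for a predetermined period; the user may edit it,
    repeat ("resay") the voice command, or let it be submitted unchanged."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        action = poll()                      # e.g. "edit", "resay", or None (assumed interface)
        if action == "edit":
            return ("edit", text_command)    # user corrects the text manually
        if action == "resay":
            return ("resay", None)           # capture a new voice command
        time.sleep(0.1)
    return ("submit", text_command)          # timeout: submit the text command as-is
```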
As shown in
In various embodiments, the conversational system 300 is configured to prompt a user for additional information (i.e. parameters) where a particular task has been identified by NLP engine 314 but not enough information is derived from the voice command. For example, in the case of voice commands related to booking flights, it will be appreciated that some parameters are required in order to perform a useful search. Specifically, in some embodiments the services manager 330 may require at least the following parameters (also referred to herein as entities): departure city, departure date and arrival city. In other embodiments, the services manager 330 may require additional information such as the number of tickets, class, airline, and so forth.
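As an illustration of checking for required parameters before a flight search is performed, a sketch might look like the following; the parameter names mirror those listed above, while the function itself is hypothetical.

```python
REQUIRED_FLIGHT_PARAMETERS = ("departure_city", "departure_date", "arrival_city")

def missing_parameters(extracted, required=REQUIRED_FLIGHT_PARAMETERS):
    """Return the parameters the system still needs to prompt the user for."""
    return [name for name in required if not extracted.get(name)]

# Example: a command supplying only the arrival city and departure date would
# cause the user to be prompted for the departure city.
print(missing_parameters({"arrival_city": "New York", "departure_date": "tomorrow"}))
# -> ['departure_city']
```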
Referring to
As shown in
With reference to
In some embodiments, a speech button 3716 is placed adjacent to each field 3712 so that a user may enter additional parameter information by pressing the speech button 3716 and uttering the parameter information. For example, a user may enter the date and time of the meeting by pressing the speech button 3716a and voicing the date and time (for example, by saying “3 pm”). The speech button 3716a may be animated once the user presses the speech button 3716a so that the user is aware the application 201 is receiving voice utterances via the one or more microphones of the mobile device 102. After the user has finished voicing the parameter information, the application processes the voice utterance (by converting the voice utterance to a text representation with ASR engine 312) and fills in the field with the text representation. The user may also have the option to directly input the entity information into fields 3712 using the touch screen or another input device.
After the parameter information has been received and processed for each required field by ASR engine 312, a user may press a submit button 3718 to direct the parameters to the services manager 330. If the required fields are not filled, the user will get a message that more information is required before the user is able to press the submit button 3718. At any time the user may cancel the process and will be returned to a home screen (for example, the user interface 1002 shown in
Once the parameter information has been entered into each field 3712a, 3712b, 3712c and the user presses the submit button 3718, services manager 330 may verify that all the parameters have been provided for the particular task identified as relating to the user's intent. If services manager 330 (or another component of conversational system 300) determines that all the parameter information has been received, then application 201 may display a task confirmation screen 3810 on the mobile device 102 as shown in
It will be appreciated that calling the appropriate service may involve calling one or more methods via an API associated with the service and providing parameter information to the one or more methods. The one or more methods may return results in the form of XML, JSON, or other formats which may be processed by the conversational system 300 and presented on the mobile device 102 by the application 201. As shown in
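By way of illustration, calling a service method and normalizing a JSON response before presentation might resemble the following sketch; the service client, method signature, and response fields are assumptions rather than an actual API.

```python
import json

def perform_task(service_client, task, parameters):
    """Call a method on the service's API and normalize its JSON response (illustrative)."""
    raw = service_client.call(task, **parameters)        # hypothetical API method
    payload = json.loads(raw) if isinstance(raw, str) else raw
    results = payload.get("results", [])
    # Keep only the fields the results page needs to display.
    return [{"title": r.get("title"), "detail": r.get("detail")} for r in results]
```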
Interaction with the Timeline 1050
Reference is next made to
As shown in
Reference is next made to
In some embodiments, a user may access the mini-apps by touching the screen as shown by contact point 1310 and dragging the screen in the direction of arrow 1312. If the user drags the screen a predetermined distance the application 201 will display the mini-app user interface 1410 shown in
In some embodiments, not all of the mini-app icons will be shown on the user interface at the same time. For such occasions, the interface 1410 is scrollable up and down as shown in
With reference to
In another embodiment, clicking on a mini-app icon (for example calendar icon 1412a) causes the application 201 to display the form 3710, which allows a user to enter voice commands related to particular entities (i.e. fields) of the form 3710.
In other embodiments, pressing a mini-app icon (for example 1412g pertaining to restaurants) may cause the application 201 to display a business finder user interface 4210 (
If a user utters a voice command, the application 201 directs the voice command (which may be in any one of several audio formats such as raw audio (PCM) or various lossy or lossless audio formats supported by the exemplary ASR engine 312) to the ASR engine 312. The ASR engine 312 generates a text command representing the voice command and passes this text command to NLP engine 314, which performs entity extraction. The NLP engine extracts the relevant entities (in this example, “Sushi” and “Toronto”) and creates a template representing the derived intent of the user and provides the template to services manager 330 for processing. The services manager 330 is configured to select one or more services 118, 120 from a list of available services and calls an appropriate service 118, 120 to accomplish the derived intent of the user. In the exemplary interaction shown in
The processing of voice commands by the conversational system 300 will now be described in detail. Given an example voice command of “Schedule a meeting with Bob for 3 p.m. today at Headquarters”, NLP engine 314 may classify the command as relating to the calendar domain and further identify the desired task as scheduling a meeting. NLP engine 314 may further derive one or more parameters from the text command such as the meeting attendees (i.e. Bob), the time of the meeting (i.e. 3 p.m.) and the location of the meeting (i.e. Headquarters). The location “Headquarters” may be stored as a learned user preference in database 315 and may be associated with an actual address. Once NLP engine 314 has derived the relevant information from the text command, NLP engine 314 (or another module of intelligent services engine 150) may create a software object, template, data structure and/or the like (referred to herein as a template) to represent the intention of the user as embodied in the text command. The template may be stored in database 315 for further access, to learn from past user behavior, for analytical purposes, etc.
Once NLP engine 314 has finished processing the text command, a template that represents the processed command may be provided to services manager 330 to process the task desired by the user. Services manager 330 uses the domain and the task the user wants to perform to determine an appropriate service from a services list. Continuing with the meeting example, the services manager 330 may determine that an appropriate service for accomplishing the desired task is an internal service 120 that is provided by the operating system. In other example interactions, services manager 330 may identify one or more external services 118 for accomplishing the desired task. The internal and external services 120, 118 may be accessible via an application programming interface (API), as will be understood by a person skilled in the art. The internal/external services 120, 118 may provide results in any of several known formats such as an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. The response provided by the API called by services manager 330 may then be directed to the display module 340 for formatting of the result and communication of the result to the mobile device 102. The application 201 receives the formatted result from display module 340, and may further format the result depending on the specific capabilities and/or settings of the device 102. Application 201 displays the result to the user in the form of an exemplary user interface 1002 where the result may be interacted with by the user.
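For illustration only, the services list lookup performed by services manager 330 might resemble the following sketch; the mapping keys and service identifiers are assumptions, not part of the disclosure.

```python
# Hypothetical services list mapping (domain, task) pairs to service identifiers.
SERVICES_LIST = {
    ("CALENDAR", "schedule_meeting"): "internal_calendar_service",  # internal service 120
    ("SPORTS", "find_next_game"): "sports_scores_api",              # external service 118
}

def select_service(domain, task, services=SERVICES_LIST):
    """Pick the service configured to accomplish the derived intent of the user."""
    try:
        return services[(domain, task)]
    except KeyError:
        raise LookupError(f"no service available for domain {domain} and task {task}")
```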
Reference is next made to
Search history timeline 2100 includes one or more search entries 2110, each of which corresponds to a previous search and/or task that was conducted by the user (e.g. a search event). Each search event entry 2110 includes information that allows a user to quickly glance at the search history timeline 2100 to grasp what was accomplished in the past and to further look into an entry 2110 if desired.
For example, the first search entry 2110 includes a title caption “Hair Salons in Mississauga”, an icon 2112 that represents the business finder domain, a search detail caption 2114 that gives more information about the search results, and may also include other captions and/or icons such as navigation icon 2116 that may be pressed to find directions to various businesses that match the search performed.
As can be seen in
The user interface is configured to receive a command from a user of the mobile computing device where the command comprises at least one parameter and is related to at least one domain and at least one task or action to be performed in response. The at least one domain, at least one task, and at least one parameter are identified from the command and at least one service configured to execute the command is identified. The command is executed via the at least one service. Results from the at least one service are received and a results page summarizing the results is generated. The results page is presented to the user such as on the display screen of the mobile computing device.
The search results may be organized and presented in a summary form on a timeline oriented results history page such as in an event entry. Part of the results provided by the at least one service may be stored. Each event entry may comprise a graphical representation identifying the domain of each result and a summary of details for the result. Each event entry may be configured to present the results page when the user selects and invokes the event entry. In response to receiving a user input for selecting one of the results (event entries) on the organized results history page, the results page may be presented such as by displaying on the screen of the mobile computing device.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. For example, any of the elements associated with conversational system 300, intelligent services engine 150, and application 201 may employ any of the desired functionality set forth hereinabove. Furthermore, in various embodiments the conversational system 300, intelligent services engine 150 and application 201 may have more or fewer components than described herein to employ the desired functionality set forth herein. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described embodiments.
Headings within this patent application and the title of this patent application are for convenience only, and are not to be taken as limiting the disclosure in any way.
Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more intermediaries.
Number | Date | Country
--- | --- | ---
61662652 | Jun 2012 | US