1. Field of Invention
The present application generally relates to processing search queries and, more particularly, to methods and systems for processing search queries using local caching and predictive fetching of results from a remote server to offset network latencies during incremental searching.
2. Description of Related Art
There are many user-operated devices such as mobile phones, PDAs (personal digital assistants), personal media players, and television remote control devices that have small keypads for text input. Largely because of device size restrictions, a full “QWERTY” keyboard often cannot be provided. Instead, a small keypad is provided having only a limited number of keys, which are overloaded with alpha-numeric characters.
Text entry using such a keypad with overloaded keys can result in an ambiguous text entry, which requires some type of disambiguation action. For instance, with a multi-press interface, a user can press a particular key multiple times in quick succession to select a desired character (e.g., to choose “B”, the user would press the “2” key twice quickly, and to choose “C”, the user would press the key three times). Alternatively, text entry can be performed using T9 and other text input mechanisms that provide vocabulary-based completion choices for each word entered. Neither of these methods, however, is particularly useful for performing searches because of the number of steps needed to reach a result. One deficiency of the multi-press interface is that too many keystrokes are needed. A drawback of applying a vocabulary-based word completion interface is the need for the additional step of choosing from a list of all possible word matches generated by the ambiguous text input. Furthermore, vocabulary-based word disambiguation systems are typically designed for composition applications (as opposed to search applications), where the user explicitly disambiguates each word by performing a word completion action to resolve that word before proceeding to the next word in the composition.
The cumbersome text entry interface on mobile and other devices makes incremental searching a particularly convenient way of finding desired information. With incremental searching, the user-operated device returns results for each character of the search query entered by the user, unlike non-incremental search systems where the user has to enter the complete query string prior to initiating the search. In addition to facilitating the return of results without having to enter the full query string, incremental searching also enables the user to recover from an erroneous input even before the entire query string is fully input. This is a significant improvement over non-incremental search systems where the user often discovers an error only after submitting a fully formed query to the server.
Mobile devices such as phones and PDAs communicate over wireless networks, which typically have high network latencies, making incremental searching unfavorable. In particular, these networks have perceptible startup latencies to establish data communication links. Additionally, the network round trip latencies are perceptible even on networks with moderate to high bandwidth (>=100 kbps) from the server to the mobile device. For instance, the latency on a CDMA 1xRTT network can be greater than 600 msec (milliseconds), and a GSM EDGE network can have latency as high as 500 msec. It has been found that latency in server responses exceeding 200-300 msec after the user types a character is perceptible to users. These latencies result in a poor user experience when performing incremental searching with wireless mobile devices.
Perceptible network latencies also exist in wired networks. For instance, when using a personal computer (located, e.g., in the U.S.) for retrieving data from a server located a large distance away (e.g., in India), roundtrip latencies can be about 200 ms even with high speed network connections. These perceptible latencies diminish the user experience in performing incremental searching.
In accordance with one or more embodiments of the invention, a method and system are provided for offsetting network latencies in the incremental processing of a search query entered by a user of a device having connectivity to a remote server over a network. The search query is directed at identifying an item from a set of items. In accordance with the method and system, data expected to be of interest to the user is stored in a local memory associated with the device. Upon receiving a key entry or a browse action entry of the search query from the user, the system searches the local memory to identify results therein matching the key entry or browse action entry. The results identified in the local memory are displayed on a display associated with the device. Also upon receiving a key entry or browse action entry of the search query, the system sends the search query to the remote server and retrieves results from the remote server matching the entry. The results from the remote server are merged with the results from the local memory for display. If the user does not find the desired item on the display, the process is repeated for additional characters or browse actions entered by the user.
These and other features will become readily apparent from the following detailed description wherein embodiments of the invention are shown and described by way of illustration. As will be realized, the invention is capable of other and different embodiments and its several details may be capable of modifications in various respects, all without departing from the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not in a restrictive or limiting sense with the scope of the application being indicated in the claims.
For a more complete understanding of various embodiments of the present invention, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:
Like reference numerals generally refer to like elements in the drawings.
Briefly and as will be described in further detail below, various embodiments of the present invention are directed to methods and systems for offsetting network startup and/or roundtrip latencies during incremental searching performed using client devices connected to a remote server over a communication network. The latencies are offset using a predictive fetch scheme and local caching of results on user-operated client devices. The local cache (or other memory) on the client device can be used to store a portion of the top results from searchable data spaces in the system. This cache can be searched to allow the user to see results generally instantly upon inputting the first character or browse action of the query, even before the device has established a connection with the server. Also, upon entry of the first character or browse action of the query, the client device begins to dynamically and predictively fetch from the remote search server results pertinent to the user input and expected to be of interest to the user. The choice of results to be fetched can be based on various criteria as will be described below. The remote server results are merged with the local cache results. The remote server predictive fetching operation is continued for any subsequent user input, making use of the time gap between each character entry or browse action performed by the user. This input-driven predictive fetch operation from the server enables the user to see results with reduced latency on average. The data fetch sequence also preferably adapts over time to the user's typical information finding behavior, which could be exclusively text entry, exclusively browse actions, or a combination of the two. On devices with multiple alphabets overloaded on the same key (as shown, e.g., in
The predictive fetch method described in accordance with various embodiments of the invention can function like a continuous user-input driven media stream compensating for the network round trip latencies and fluctuations, enabling the user to see the results in real time with the entry of each character or browse action constituting the input query. Furthermore, in accordance with one or more embodiments of the invention, the predictive fetched results can serve as a cache for subsequent user queries, further reducing perceived latencies.
In accordance with one or more embodiments of the invention, the server can dispatch, during a predictive fetch or at another time, results that are predicted in advance to be highly requested during query spikes, thus reducing server overloads and response degradation during the actual occurrence of the information query spikes.
Various embodiments of the present invention are particularly suited for use with mobile devices (such as cellular phones, PDAs, digital radios, personal media players, and other devices) used in communications networks having high latency. The system, however, can also be used with various other devices communicating on a network, such as PCs, television sets, and desk phones having limited display space.
Search queries entered by users on the user-operated devices can include text input comprising a set of characters, a browse action, or both. A browse action can include a node descend through a node hierarchy (e.g., a set of categories and subcategories) or navigation of a linear list of nodes. The search queries entered by users are directed at identifying an item from a set of items. Each of the items has one or more associated descriptors or metadata. The descriptors can include words in the name of the item or other information relating to the item. For example, if the item is a restaurant, the descriptors can include the name of the restaurant, the type of food served, the price range, and the location of the restaurant. In a television application, the item can be a television content item such as a movie or television program, and the descriptors can be information on the title of the movie or program, the cast, directors, and other keywords and descriptions of the movie or program.
If the user-operated device includes an ambiguous text input interface, the user can type in a search query by pressing overloaded keys of the text input interface once to form an ambiguous query string. In accordance with one or more embodiments of the invention, in an ambiguous text input system, the search space at both the remote server and the client device can be initially indexed by performing a many-to-many mapping from the alphanumeric space of terms to numeric strings corresponding to the various prefixes of each alphanumeric term constituting the query string. In a numeric string, each alphanumeric character in the string is replaced by its corresponding numeric equivalent based on, e.g., the arrangement of characters on the commonly used twelve-key reduced keypad of the type shown in
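The numeric indexing described above can be sketched as follows. This is a minimal illustration assuming the commonly used twelve-key layout (2=ABC, 3=DEF, ..., 9=WXYZ); the function names are hypothetical and not taken from the specification.

```python
# Many-to-many mapping from alphanumeric terms to numeric prefix strings,
# assuming the standard twelve-key reduced keypad layout.
KEYPAD = {
    '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
    '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz',
}
CHAR_TO_DIGIT = {ch: digit for digit, chars in KEYPAD.items() for ch in chars}

def to_numeric(term: str) -> str:
    """Replace each alphabetic character with its keypad digit equivalent;
    characters with no mapping (digits, spaces) pass through unchanged."""
    return ''.join(CHAR_TO_DIGIT.get(ch, ch) for ch in term.lower())

def numeric_prefixes(term: str) -> list:
    """All numeric strings corresponding to prefixes of the term, used to
    index the search space for single-press ambiguous entry."""
    numeric = to_numeric(term)
    return [numeric[:i] for i in range(1, len(numeric) + 1)]
```

With this index, a user searching for "guns" need only press 4-8-6-7 once each, and the system matches the ambiguous numeric prefix against all terms sharing it.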
There are numerous possible applications for the search techniques described herein including, e.g., assisting users of mobile devices such as cell phones and PDAs in finding or identifying desired items in various databases (e.g., performing searches in directories of people or businesses, searching for and purchasing products/services like airline tickets and groceries, searching through transportation schedules such as airline schedules, searching for movies being shown at theaters, and searching for audio/video content) or for assisting television viewers in identifying desired television content items and channels.
In the context of television systems, the term “television content items” can include a wide variety of video/audio content including, but not limited to, television shows, movies, music videos, or any other identifiable content that can be selected by a television viewer. Searching for television content items can be performed across disparate content sources including, but not limited to, broadcast television, VOD, IPTV, and PVR (local and network).
The network 204 transmits data between the server 202 and the devices 206, 208, 210 operated by the users. The network 204 could use wired or wireless connections or some combination thereof. Examples of possible networks include computer networks, cable television networks, satellite television networks, IP-based television networks, and mobile communications networks (such as, e.g., wireless CDMA and GSM networks).
The search devices could have a wide range of interface capabilities. A device, e.g., could be a hand-held mobile communications device 208 such as a phone or PDA having a limited display size and a reduced keypad with overloaded keys or a full QWERTY keypad. Another type of search device is a television system 210 with a remote control device 212 having an overloaded keypad or a full QWERTY keypad. Another possible search device is a Personal Computer (PC) 206 with a full QWERTY or reduced keyboard and a computer display.
The user inputs a character or performs a browse action (e.g., descending down a node or traversing a linear list of nodes) at step 502 using, e.g., a mobile device user interface shown in
The local cache on the device is searched at step 504 to determine if there is matching data, i.e., search results for the user's input. Identifying matching data can be performed using, e.g., a trie structure search of the type shown in
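A trie-based lookup of the kind usable for the local cache search at step 504 can be sketched as follows. The class names and the top-N-per-prefix limit are illustrative assumptions, not details prescribed by the specification.

```python
# Minimal prefix trie: each node stores the top cached results whose index
# terms pass through that prefix, so a lookup is a single walk down the trie.
class TrieNode:
    def __init__(self):
        self.children = {}   # next character -> TrieNode
        self.results = []    # top results cached at this prefix

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, key: str, result: str, limit: int = 10) -> None:
        """Store a result along every prefix of its key, keeping only the
        top `limit` entries per node to bound cache memory."""
        node = self.root
        for ch in key:
            node = node.children.setdefault(ch, TrieNode())
            if len(node.results) < limit:
                node.results.append(result)

    def search(self, prefix: str) -> list:
        """Walk down the trie along the prefix; an empty list is a cache miss."""
        node = self.root
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return []
        return node.results
```

Because every prefix node carries its own result list, each incremental keystroke costs one child lookup rather than a rescan of the cache.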
If matching data are found for the user search query, then an additional optional check 506 can be performed to determine the “freshness” of the resident cache data. Certain types of cached data, such as stock quotes, may become stale and have no practical value after a given time period. If there is matching cached data that is not stale, the data are displayed to the user at 508, allowing the user to view and select a displayed result.
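The optional freshness check 506 can be sketched as a simple time-to-live comparison. The data-type names and TTL values below are hypothetical, chosen only to illustrate that different data classes stale at different rates.

```python
import time

# Illustrative TTLs: a stock quote stales in a minute, a listing in an hour.
TTL_SECONDS = {"stock_quote": 60, "movie_listing": 3600}

def is_fresh(entry, now=None):
    """An entry is fresh while its age is under the TTL for its data type;
    unknown types default to a one-day TTL."""
    now = time.time() if now is None else now
    ttl = TTL_SECONDS.get(entry["type"], 86400)
    return (now - entry["fetched_at"]) < ttl
```

Stale entries would simply be skipped at step 506, forcing the display to wait for the predictive fetch from the remote server.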
In response to the user input at step 502, and generally in parallel with the local cache search operation 504, the user input is sent to a remote server in a predictive fetch operation at step 510. The results of the search performed at the remote server are merged with any local cache results at step 512. The merging is time delayed because the results received from the remote server will typically arrive after the results from the local cache search are retrieved and displayed. The data are preferably merged and displayed in a manner that is not overly intrusive or disruptive to the usage of the device, since the user may already be viewing local cache results. One way of merging the results is to append or prepend the results of the server fetch operation to the end or beginning, respectively, of the results from the local cache. Another way is to fold the results from the remote server into the results displayed from the local cache. Duplicate results from the remote server are preferably ignored during merging.
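The append/prepend merging policies of step 512, with duplicate suppression, can be sketched in a few lines. The function name and `mode` parameter are illustrative assumptions.

```python
def merge_results(local, remote, mode="append"):
    """Merge server results into local-cache results for display.
    Server results duplicating a local result are ignored; the remainder
    are appended after, or prepended before, the local results."""
    seen = set(local)
    fresh = [r for r in remote if r not in seen]
    return local + fresh if mode == "append" else fresh + local
```

Appending is the less disruptive choice when the user is already scanning the locally cached results, since earlier list positions do not shift.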
At step 514, a check is made to determine whether the user has found the desired item in the displayed results. If so, the process terminates at step 516. If not, the user can enter an additional character in the search query text or perform another browse action again at step 502, repeating the process described above.
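The overall loop of steps 502 through 516 can be sketched as follows. This is a compressed, synchronous illustration: in practice the server fetch of step 510 runs in parallel with the local cache search, and the function arguments here are hypothetical stand-ins for the cache, the server, and the display.

```python
def incremental_search(keystrokes, search_cache, fetch_remote, display):
    """Per-keystroke incremental search: show cached results immediately,
    then merge in server results as they arrive."""
    query = ""
    for key in keystrokes:              # step 502: a character or browse action
        query += key
        local = search_cache(query)     # step 504: local cache lookup
        display(local)                  # step 508: instant display of cached hits
        remote = fetch_remote(query)    # step 510: predictive server fetch
        merged = local + [r for r in remote if r not in local]  # step 512
        display(merged)                 # updated, merged view
    return query                        # steps 514/516: user selects or continues
```

Each iteration corresponds to one pass through the flow, with the loop exiting when the user finds the desired item rather than entering another character.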
The choice of results for the predictive fetch stream from the remote server can be based on one or more given criteria. The criteria can include one or a combination of some or all of the following: (1) the personalization preferences of the user, (2) the popularity of particular items, (3) the temporal and location relevance of the items, (4) breadth of spread of results across the alphabets of the language used for searches (since in a given language certain sequences of characters will appear more frequently in words than others), and (5) the relevance of terms (in relation to the popularity of the containing item) having the character entered by the user in that ordinal position. This stream can be dynamically adapted to match the incremental query input by the user, by walking down a trie data structure along the path of the prefix string entered by the user. For example, in searching for the movie entitled “Guns of Navarone”, if the user enters the query string “GU NAV”, a trie walk can be done down the path “GU NAV” as illustrated in
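One way to combine the five criteria above into a single fetch priority is a weighted score over normalized features. The feature names, weights, and budget parameter below are hypothetical; the specification does not prescribe a particular weighting scheme.

```python
# Illustrative weights for the five predictive-fetch criteria.
WEIGHTS = {
    "personal": 0.3,    # (1) user personalization preferences
    "popularity": 0.3,  # (2) popularity of the item
    "temporal": 0.2,    # (3) temporal and location relevance
    "spread": 0.1,      # (4) spread across the alphabet
    "ordinal": 0.1,     # (5) relevance of the character at its ordinal position
}

def fetch_score(features):
    """Weighted sum of per-item features, each normalized to [0, 1]."""
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

def choose_fetch_set(candidates, budget):
    """Pick the top-`budget` candidates by score for the fetch stream."""
    ranked = sorted(candidates, key=lambda c: fetch_score(candidates[c]),
                    reverse=True)
    return ranked[:budget]
```

The chosen set would then be fetched in score order during the time gap before the user's next keystroke, with the trie walk restricting candidates to the entered prefix.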
The personalization preferences of the user can be based, e.g., on user preferences, both explicitly and implicitly defined. Preferences can be implicitly defined based on repetitive user behavior. For instance, if a given user performs a search for the price of a particular stock at a certain time every morning, the system can provide a high rank to matching results relating to said stock.
In the case of text entry, the choice of results displayed can be based upon given criteria such as the five predictive fetch criteria described above. In one or more embodiments of the invention, a server trie walk is done (as shown, e.g., in
In the case of fold descend, the local server proxy in coordination with the remote server can fetch children of all non-terminals that are rendered on the display area. These results are fetched after fetching the top results needed for displaying in the display window 501A.
In the case of a linear scroll, the local server proxy, in coordination with the remote server, can fetch results from the remote server that are not displayed in the results window. In the scenario where the displayed results are mostly folds, predictive fetching of the children of the folds can be done before the linear scroll results. In other cases, the linear scroll results are fetched before fetching the top child results of folds visible in the display window. These fetch sequences (trie walk fetch, linear scroll results fetch, folded children fetch) are preferably adapted over time to match the typical user's information finding behavior. For instance, on a device operated by a user who typically does not descend down folds but enters multi-prefix queries, the system would perform the trie walk fetch (with emphasis on results spread over all the alphabets) and the linear scroll fetch. As another example, for a user who typically browses after the first text entry, the system could prioritize the fold fetch after a text entry fetch.
In accordance with one or more embodiments of the invention, the predictive fetch sequence can also be influenced by the device capabilities and the mode of text entry. For example, on mobile devices where a 12-key keypad (e.g., of the type shown in
Latency = MTrepeat + MTkill

where MTrepeat is 165*2 = 330 msec (165 msec being the time between consecutive presses using the index finger) and MTkill is 1500 msec (the time for automatic timeout and selection of the currently entered character), giving a per-character latency of 1830 msec.
When using a single press mode of text entry with a limited keypad (e.g., the 12-key keypad shown in
In accordance with one or more embodiments of the invention, in addition to user-input driven predictive fetching, the server may also, time permitting, send on its own initiative data that are projected to be information spikes in areas of interest to the user. For instance, if a popular movie is being released, and the user is observed to have a preference for that genre of movies, information about that movie could automatically be sent to the user device. This type of predictive fetching of data, in addition to eliminating the response latency, has the benefit of reducing server overload during the actual occurrence of the information spike. Such predictive fetching and caching can also be done to address the initial startup latency inherent in most communication networks. The size of this cache can be dependent on the available client memory resources. In another scenario, when the user moves from a lower latency network such as an EVDO network to a higher latency network such as a 1xRTT network, the server could initiate a larger and prolonged download of data to offset the latency. This approach can be used even in contention-based television cable networks where the uplink could get crowded. In this case, the server could perform a broadcast/multicast/unicast of data.
In the
Methods of processing search query inputs from users in accordance with various embodiments of the invention are preferably implemented in software, and accordingly one of the preferred implementations is as a set of instructions (program code) in a code module resident in the random access memory of a user-operated computing device. Until required by the device, the set of instructions may be stored in another memory, e.g., in a hard disk drive, or in a removable memory such as an optical disk (for eventual use in a CD ROM) or floppy disk (for eventual use in a floppy disk drive), or downloaded via the Internet or some other network. In addition, although the various methods described are conveniently implemented in a computing device selectively activated or reconfigured by software, one of ordinary skill in the art would also recognize that such methods may be carried out in hardware, in firmware, or in more specialized apparatus constructed to perform the specified method steps.
Having described preferred embodiments of the present invention, it should be apparent that modifications can be made without departing from the spirit and scope of the invention.
The present application is based on and claims priority from U.S. Patent Application Ser. No. 60/727,561 filed on Oct. 17, 2005 and entitled “Method And System For Predictive Prefetch And Caching Of Results To Offset Network Latencies During Incremental Search With Reduced User Input On Mobile Devices,” which is incorporated by reference herein in its entirety.