Method and system for performing searches for television content using reduced text input

Information

  • Patent Grant
  • Patent Number
    7,895,218
  • Date Filed
    Tuesday, May 24, 2005
  • Date Issued
    Tuesday, February 22, 2011
Abstract
A method and system are provided for identifying a television content item desired by a television viewer from a set of television content items. Each of the television content items has one or more associated descriptors. The system receives from the television viewer a reduced text search entry directed at identifying the desired television content item. The search entry is a prefix substring of one or more words relating to the desired television content item. The system dynamically identifies a group of one or more television content items from the set of television content items having one or more descriptors matching the search entry as the television viewer enters each character of the search entry. The system then transmits the names of the one or more television content items of the identified group to be displayed on a device operated by the television viewer.
Description
BACKGROUND OF THE INVENTION

1. Field of Invention


The present invention generally relates to a method and system for performing searches for television content and, more particularly, to a method and system for performing searches with text entry by a user reduced to prefix substrings representing elements of a namespace containing a set of names composed of one or more words that are either ordered or unordered.


2. Description of Related Art


Search engines have become increasingly important for finding needed information on the Internet using Personal Computers (PCs). While performing searches is predominantly a PC-based activity to date, searching has begun percolating to non-PC domains such as televisions and hand-held devices, as content choices for these domains proliferate. Text input continues to be the primary input technique for search engines since speech input and other input technologies have not sufficiently matured. Though progress has been made recently for PCs with full QWERTY keyboards to reduce the amount of text input needed to arrive at a desired result, the search input process is still grossly deficient and cumbersome when it comes to searching for desired information or content in a large ten-foot-interface television environment or on a hand-held device. In these usage scenarios, the text input is ordinarily made using keys that are typically overloaded with multiple characters. Of the various device interactions (keystroke, scroll, selection, etc.) during a search process in these non-PC systems, text input remains a dominant factor in determining the usability of search. This usability criterion typically constrains text input to a single keyword (such as a name) or a few keywords to describe the item being searched for. Rich text input such as “natural language input” is generally precluded in the non-PC systems not by the limitations of search engines, but by the difficulty of entering text.


A common usage scenario for searching in these limited input capability environments could be to find information on a keyword a user has in mind, where the keyword could be the name of a person, place, object, media entity, etc. Examples of such a search could be finding the movie “Guns of Navarone” (which, as further described below, can be considered a three-word name instance from an ordered namespace) and “John Doe” (a two-word name instance from an unordered namespace). An interesting property of certain search domains is that the percentage of names in the search domain with two or more words is quite significant. For instance, in the case of searching for a person's name (e.g., John Doe) in a phone database, the search domain name size (the number of words constituting a name, which is two in the case of John Doe) is at least two. In the movie space, a random sampling of 150,000 English movie titles revealed that 86% of the titles have a name size greater than or equal to two, even with the removal of some of the most frequently occurring “article stop words” such as “a”, “an”, and “the.”


It would be desirable for search engines for devices (those with limited input capabilities in particular) to enable users to reach desired results with reduced input representing a namespace. In particular, a search method or system able to perform one or more of the following would be desirable:

    • (1) Captures information from one or more words making up a name, using a reduced number of characters to represent the original name. The number of results matched for the name entry is preferably limited to a given threshold, which can, e.g., be determined by the display space for rendering the results and the ease of scrolling through the results.
    • (2) Allows users to enter words in the namespace in any order. For example, a person lookup search such as “John Doe” should be possible either as “John Doe” or as “Doe John.” In this example, “John Doe” is a two-word instance of a name from an unordered namespace.
    • (3) Facilitates intuitive and gradual learning of efficient usage of the reduced text entry scheme. First-time users should preferably be able to enter the full string if they choose to. The system preferably provides users with cues and assistance to help them learn to key in the reduced string that gets to desired results.
    • (4) Works across search domains with diverse attributes such as (a) the size of the search domain, (b) the language used for search, (c) the clustering characteristics of names in the search domain, (d) the interface capabilities of the device used for search, and (e) the computational power, memory, and bandwidth availability of the search system.


BRIEF SUMMARY OF EMBODIMENTS OF THE INVENTION

In accordance with one or more embodiments of the invention, a method and system are provided for identifying a television content item desired by a television viewer from a set of television content items. Each of the television content items has one or more associated descriptors. The system receives from the television viewer a reduced text search entry directed at identifying the desired television content item. The search entry is a prefix substring of one or more words relating to the desired television content item. The system dynamically identifies a group of one or more television content items from the set of television content items having one or more descriptors matching the search entry as the television viewer enters each character of the search entry. The system then transmits the names of the identified group of one or more television content items to be displayed on a device operated by the television viewer.


These and other features will become readily apparent from the following detailed description wherein embodiments of the invention are shown and described by way of illustration. As will be realized, the invention is capable of other and different embodiments and its several details may be capable of modifications in various respects, all without departing from the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not in a restrictive or limiting sense with the scope of the application being indicated in the claims.





BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of various embodiments of the present invention, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:



FIG. 1 illustrates a reduced text entry search system in accordance with one or more embodiments of the invention being used in different device and network configurations.



FIG. 2 illustrates configuration options of exemplary devices for performing searches in accordance with one or more embodiments of the invention.



FIG. 3 illustrates examples of a discrete structural composition of text input to a search system in accordance with one or more embodiments of the invention.



FIG. 4 illustrates a process of a user starting a new search, entering text, and arriving at a desired result in accordance with one or more embodiments of the invention.



FIG. 5 illustrates a preprocessing step on a search space prior to indexing it in accordance with one or more embodiments of the invention.



FIG. 6 illustrates an example of a data structure to enable dynamic search leveraging off pre-indexed substring prefixes in accordance with one or more embodiments of the invention.



FIG. 7 illustrates internal steps of search as each character is input in accordance with one or more embodiments of the invention.



FIGS. 8A and 8B illustrate interface characteristics of two search devices in accordance with one or more embodiments of the invention.





In the figures, like reference numerals refer to generally like elements.


DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Briefly, as will be described in further detail below, in accordance with one or more embodiments of the invention, methods and systems are provided for identifying a television content item desired by a television viewer from a set of available television content items. Such television content items can include a wide variety of video/audio content including, but not limited to, television programs, movies, music videos, video-on-demand, or any other identifiable content that can be selected by a television viewer.


The television viewer can enter into a search device having a text input interface a reduced text search entry directed at identifying the desired television content item. The text can be one or more characters, which can be any alphanumeric character, symbol, space or character separator that can be entered by the user. Each television content item has one or more associated descriptors, particularly names in a namespace relating to the desired television content item. The descriptors specify information about the content, which can include, e.g., information on titles, cast, directors, descriptions, and key words. The names are composed of one or more words that can be either ordered or unordered. The user's search entry comprises one or more prefix substrings that represent a name or names in the namespace. A prefix substring of a word in a name captures information from the word and can be a variable length string that contains fewer than all the characters making up the word.


The system identifies a group of one or more television content items from the set of available television content items having descriptors matching the search entry. The names of the identified group of one or more television content items are then transmitted to and displayed on a device operated by the television viewer. The viewer can then select the desired content item from the group displayed, or enter further characters or edit the substring to narrow or change the results as desired.
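
The following is a minimal sketch, assuming a simple in-memory catalog, of how a reduced text entry composed of prefix substrings can be matched against descriptors. The helper names (matches, match_items) and the sample data are illustrative assumptions and do not represent the patented implementation.

    def matches(query, descriptor):
        """True if every space-delimited prefix in the query is a prefix of some
        word in the descriptor (word order is ignored in this simple sketch)."""
        words = descriptor.lower().split()
        return all(any(word.startswith(prefix) for word in words)
                   for prefix in query.lower().split())

    def match_items(query, catalog):
        """Return the names of content items with at least one matching descriptor."""
        return [name for name, descriptors in catalog.items()
                if any(matches(query, d) for d in descriptors)]

    catalog = {
        "Guns of Navarone": ["guns of navarone", "gregory peck", "war drama"],
        "Napoleon Dynamite": ["napoleon dynamite", "jon heder", "comedy"],
    }
    print(match_items("gu na", catalog))   # ['Guns of Navarone']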


The descriptors can include a preferably partial subset of pre-indexed prefix substring combinations. The prefix substrings entered by a user are input to an algorithm that can dynamically generate results leveraging off the pre-indexed prefix substring combinations. The size of the pre-indexed prefix substring combinations can be based on some balance between computational power, memory availability, and optionally bandwidth constraints of the system in which reduced text entry search is deployed.


The variable prefix substring search algorithm can allow multiple names to be entered, preferably without regard to the order. The results list for search is preferably dynamically culled by the text input of each character. The results are preferably ordered based on a relevance function that can be a domain specific combination of, e.g., popularity, temporal relevance, location relevance, personal preferences, and the number of words in the input search string.
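
A possible form of such a relevance function is sketched below, assuming illustrative weights and item fields (popularity, temporal, distance_km); the actual combination is domain specific and is not prescribed by this description.

    def relevance(item, weights=(0.5, 0.3, 0.2)):
        """Higher score = more relevant; closer items score higher on location."""
        w_pop, w_time, w_loc = weights
        location_score = 1.0 / (1.0 + item["distance_km"])
        return (w_pop * item["popularity"]
                + w_time * item["temporal"]
                + w_loc * location_score)

    results = [
        {"name": "NBA: Knicks at Nets (live)", "popularity": 0.9, "temporal": 1.0, "distance_km": 0.0},
        {"name": "NBA Hardwood Classics", "popularity": 0.6, "temporal": 0.1, "distance_km": 0.0},
    ]
    results.sort(key=relevance, reverse=True)   # the live game is listed first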


One or more embodiments of the present invention also include a system for intuitively informing and educating the user about the smallest, or generally smallest, substring for yielding a particular result, thereby empowering the user with a user-friendly search method, particularly on platforms with limited text entry capabilities. The flexibility of entering prefix substrings of variable sizes to get to the desired result makes the reduced text entry scheme intuitive and easy to use.



FIG. 1 illustrates an overall system for performing searches with reduced text entry using a wide range of devices in accordance with one or more embodiments of the invention. A server farm 101 can serve as the source of search data and relevance updates with a network 102 functioning as the distribution framework. The distribution framework could be a combination of wired and wireless connections. Examples of possible networks include cable television networks, satellite television networks, and IP-based television networks. The search devices could have a wide range of interface capabilities such as a hand-held device 103 (e.g., a phone or PDA) with limited display size and overloaded or small QWERTY or other keypad, a television 104a coupled with a remote control device 104b having an overloaded or small QWERTY or other keypad, and a Personal Computer (PC) 105 with a full QWERTY or other keyboard and a computer display.



FIG. 2 illustrates multiple exemplary configurations for search devices in accordance with one or more embodiments of the invention. In one configuration, a search device (e.g., PC 105) can have a display 201, a processor 202, volatile memory 203, a text input interface 204 (which can be on-device or through a wireless remote control 104b), remote connectivity 205 to the server 101 through the network 102, and a persistent storage 206. A device configuration for a device such as the hand-held device 103 might not include local persistent storage 206. In this case, the device 103 could have remote connectivity 205 to submit the query to the server 101 and retrieve results from it. Another configuration of the device 103 may not have remote connectivity 205. In this case, the search database may be locally resident on a local persistent storage 206. The persistent storage 206 may also be a removable storage element such as an SD, SmartMedia, or CompactFlash card. In a configuration of the device with remote connectivity 205 and persistent storage 206 for search (e.g., television 104a), the device may use the remote connectivity for search relevance data updates or for the case where the search database is distributed on the local storage 206 and on the server 101. In one or more exemplary embodiments of the invention, a television 104a may have a set-top box with a one-way link to a satellite network. In this configuration, all search data, including relevance updates, may be downloaded to the device through a satellite link to perform local searching.



FIG. 3 illustrates an exemplary structure of a reduced text entry query for search in accordance with one or more embodiments of the invention. Each query can be composed of one or more words preferably delimited by a separator such as, e.g., a space character or a symbol. Adjacent words of the query may constitute an ordered name, e.g., “Guns of Navarone” or an unordered name, e.g., “John Doe” as illustrated in example 303. Individual words can also be part of a set of ordered or unordered names such as “Malkovich” or “Casablanca,” though the ordering attribute is irrelevant in this case. A set of names that is either ordered or unordered constitutes a namespace. An example of an unordered namespace is a phone book with names of people. An example of an ordered namespace is a database of movie titles.



FIG. 4 illustrates an exemplary process of a user starting a new search, entering characters, and arriving at the desired result in accordance with one or more embodiments of the invention. A user enters one or more search string characters at 401, which could be a variable-size prefix of the intended query (e.g., to represent ‘Brad Pitt’, the user can enter B P, BR P, B PI, etc.). Results are then preferably dynamically retrieved for the cumulative substring of characters entered up to that point at 402 and displayed. The user determines at 403 whether the desired result is shown in a display window. If the result is displayed in the display window, the user can scroll to the desired result within the display window and select the desired result at 405. If the desired result is the first entry in the display window at 405, it can be selected by default, obviating the need to scroll through the display window.


The ordering of results in the display window is preferably governed by a relevance function that is a domain-specific combination of, e.g., popularity, temporal relevance, and location relevance. For example, when a user is searching for a restaurant using a phone or Personal Digital Assistant (PDA) with GPS capabilities, the listings could be ordered in descending order of the most popular restaurants in that area. If the user entered NBA, the system could list the games in order of temporal relevance, such that those in progress or scheduled to begin in the near future are listed first.


If the desired result is not in the display window at step 403, the user can decide whether or not to scroll through pages of results not currently displayed in the window at 404. If the user decides to scroll through the pages, he or she can scroll down the display window linearly or page by page at 406 to reveal more results. If the user does not want to scroll through pages, he or she can enter additional characters at 401 to narrow the results.


In the scenario where the user does not reach the result due to a misspelling, or because the uniqueness of a word in the query (e.g., Tom Brown, Todd Brown) is embedded in the suffix of that word (as opposed to its prefix), the user would have to either go back to the first word and enter more characters or erase one or more of the typed characters and re-enter characters to reach the desired result. The dynamic update of results for each character entry enables the user to recover from an error during the text entry process itself, in contrast to discovering that no results match after typing the entire text.



FIG. 5 illustrates various steps in a pre-processing phase in accordance with one or more embodiments of the invention. As illustrated in FIG. 3, the input to this phase can be a semi-structured space of any size composed of entities or descriptors (e.g., titles, cast, directors, description, key words) with their metadata values. This semi-structured search space can have a wide range of sizes, e.g., from the size of a PDA phone book to a large subspace obtained by a focused web crawl followed by relevant text processing to derive entities. In scenarios where the search space size is large, it can be possible to organize the space into smaller sub-spaces based on a categorization scheme. The first step 501 is the breakup of entities into terms (e.g., Tom Hanks, Secret Discoveries in Ancient China). A term is a set of ordered or unordered words. In accordance with one or more embodiments of the invention, multiple permutations of the words in the entity may be considered as candidate terms (e.g., Secret Discoveries of Ancient China, Discoveries of Ancient China, Ancient China, China). This allows searching for a given entity using variable prefixes of any of the candidate terms. The second step is the cleanup of the entity space at 502a. The cleanup phase involves finding the locations of stop words such as “a”, “an”, and “the”. In the next step at 502b, entity names can be duplicated for phonetic equivalence handling (e.g., Jeff and Geoff). The duplication may be implemented either by actually creating multiple variants in the data or by tagging for future algorithmic equivalence determination. A misspelling handling step 503 can address typical misspellings committed while entering text. An unordered names handling step 504 can first identify all the ordered and unordered names in a namespace, and then duplicate the unordered names (e.g., John Doe, Doe John). Duplication can involve either data duplication or tagging for algorithmic determination. The steps 501 through 504 determine a set of candidate terms T for each entity. A record is any particular prefix string of a term in T. For example, for the term “guns of navarone”, “g o navarone” and “gu of navarone” are two of the many possible records. The set of all possible records of the terms in T is denoted by P(T), and searching for the given item could potentially be accomplished by using any of the prefixes in this set.
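
A brief sketch of this pre-processing, assuming illustrative helper names and a small stop-word list, is given below; it follows the same idea as steps 501 through 504 of generating candidate terms, locating stop words, and duplicating unordered names, without reproducing the exact processing.

    from itertools import permutations

    STOP_WORDS = {"a", "an", "the"}

    def candidate_terms(entity, ordered=True):
        """Steps 501/504: generate candidate terms for an entity. Ordered names
        yield suffix terms; unordered names are duplicated in every word order."""
        words = entity.lower().split()
        if ordered:
            return sorted(" ".join(words[i:]) for i in range(len(words)))
        return sorted(" ".join(p) for p in permutations(words))

    def stop_word_positions(term):
        """Step 502a: record where stop words such as "a", "an", "the" occur."""
        return [i for i, w in enumerate(term.split()) if w in STOP_WORDS]

    print(candidate_terms("Guns of Navarone"))
    # ['guns of navarone', 'navarone', 'of navarone']
    print(candidate_terms("John Doe", ordered=False))   # ['doe john', 'john doe']
    print(stop_word_positions("the guns of navarone"))  # [0]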


At step 505, the set of variable prefix strings I(T) that will be pre-computed and stored in the index is determined. In many situations, it is not practical to pre-compute and store all the possible prefixes for all the terms due to prohibitive memory requirements. One or more embodiments of the present invention accordingly use a flexible method to store only a subset of P(T) based on different parameters such as available memory, computational power, and, in some usage scenarios, bandwidth.


While computing I(T), consider the terms that are meant to recall entity names. Denote any such term ‘T’ of length N>=1 (i.e., consisting of N words) as

    • T = W1_W2_W3 . . . _WN, where Wi denotes the ith word and ‘_’ denotes a space (which is an example of a word delimiter)


For any integer ‘k’, let Wk denote the k-character prefix of word W. If k is greater than the length of word W, Wk=W. Let W(K) denote the set of words Wk for 1<=k<=K, where K denotes the upper bound of the size of the prefix. For example, for the word “guns”, W(2) consists of the prefixes “g” and “gu”. For any term T, its corresponding indexed set I(T, K, C) of bounded multi-word prefix strings can be defined as follows

I(T, K, C) = {X1_X2_X3_X4_X5 . . . _XC_WC+1 . . . _WN}

where Xi ∈ Wi(K), Wi is the ith word in the term T, and C denotes the number of words for which prefixes are pre-computed. In a preferred embodiment of the invention, the set I(T, K, C) (also denoted by I(T)) is the set of strings pre-computed on account of term T and tunable parameters K and C. The set I(T) represents the pre-computed records corresponding to the terms in T and is usually a proper subset of P(T). The computation method indexes only the set I(T) as a part of the pre-computation, though the user could input any string in P(T) (which possibly may not belong to I(T)) to efficiently retrieve the term T. This is done by performing some appropriate computation at runtime during the search process leveraging off the set I(T).


The special case of I(T, ∞, ∞) (i.e., K=∞ and C=∞) is the scenario where each record is pre-computed for all the terms in T. In this case I(T)=P(T). It may be impractical to implement this case since the memory requirements would be high even for search spaces of modest size. The case K=0 and C=0 is the scenario where no records are pre-computed, and the search for any term is done dynamically by lookup of all possible terms matching the prefix query. In such a scenario, the runtime costs could be high due to a complete lookup during the search process, especially for small prefixes that match a large number of terms. A practical implementation would choose values of K and C that strike a balance between available memory, computational power, and, in some usage scenarios, bandwidth. For example, a practical implementation may choose K=2 and C=1. In this case, for the term “guns of navarone”, the pre-computed prefix strings (or records) would be “g_ of navarone” and “gu_ of navarone” in addition to the term “guns of navarone” itself. Though I(T) would in most practical implementations be a proper subset of P(T), the system would dynamically match terms that are not in I(T) (such as gun o nav) by leveraging off the set I(T). It may be noted that such queries that are not in I(T) contain at least K initial characters of the first word, thereby reducing the potential number of matching terms significantly. These terms may then be collected and analyzed for the matching of the remaining words in the term.
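
A short sketch of computing the bounded record set I(T, K, C) for a single term, following the definition above, is given below; the function name indexed_records is an illustrative assumption.

    from itertools import product

    def indexed_records(term, K, C):
        """Enumerate prefixes of up to K characters for the first C words, keep
        the later words whole, and also index the complete term itself."""
        words = term.lower().split()
        head, tail = words[:C], words[C:]
        # W(K) for each of the first C words, plus the whole word so that the
        # complete term is indexed as well
        choices = [{w[:k] for k in range(1, min(K, len(w)) + 1)} | {w} for w in head]
        return sorted(" ".join(list(combo) + tail) for combo in product(*choices))

    print(indexed_records("guns of navarone", K=2, C=1))
    # ['g of navarone', 'gu of navarone', 'guns of navarone']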



FIG. 6 illustrates a data structure that enables searching using variable prefix strings. This exemplary illustration shows the case of K=2 and C=1 (although subsequent words in the term are not illustrated). The illustration uses a trie data structure 601 to index the prefix strings. Each character in the trie 604 points to a set of top M 602 records that contains the most popular terms beginning with the prefix corresponding to the path from the root to that character. The ordering could be governed, e.g., by popularity, temporal relevance, location relevance, and personal preference. Single-word terms may be selectively given a boost in the ordering so that they can be discovered quickly, since they cannot leverage off the “K” factor or “C” factor. The top M records corresponding to every node in the trie may be placed in memory that enables quick access to them. The value of M may be determined by factors such as the display size of the devices from which search would be done and the available memory capacity of the server or client system where the search metadata is stored. Each character in the trie also points to a container 603 that holds all records following the top M. For the term “guns of navarone”, two new prefix strings, “g_ of navarone” and “gu_ of navarone”, are created for the case K=2 in addition to the term itself. The prefix strings “g_” and “gu_” both point to the node starting the next word “o” 605.
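
A simplified sketch of such a trie, assuming illustrative class names and a small M, is shown below; each node keeps its top M records and an overflow container, in the spirit of elements 602 and 603.

    class TrieNode:
        def __init__(self):
            self.children = {}   # next character -> TrieNode
            self.top = []        # up to M (score, record) pairs, best first
            self.rest = []       # overflow container holding records beyond the top M

    class PrefixTrie:
        def __init__(self, M=3):
            self.root = TrieNode()
            self.M = M

        def insert(self, record, score):
            node = self.root
            for ch in record:
                node = node.children.setdefault(ch, TrieNode())
                node.top.append((score, record))
                node.top.sort(reverse=True)           # most relevant first
                if len(node.top) > self.M:
                    node.rest.append(node.top.pop())  # overflow into the container

        def top_m(self, prefix):
            node = self.root
            for ch in prefix:
                if ch not in node.children:
                    return []
                node = node.children[ch]
            return [record for _, record in node.top]

    trie = PrefixTrie(M=2)
    for record, score in [("guns of navarone", 0.9), ("gu of navarone", 0.9),
                          ("g of navarone", 0.9), ("gunsmoke", 0.7)]:
        trie.insert(record, score)
    print(trie.top_m("gu"))   # the top 2 records under the prefix "gu"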



FIG. 7 illustrates a process of finding results using the variable prefix string scheme in accordance with one or more embodiments of the invention. When the user inputs a character of a prefix string at 701, the system examines whether it is a word separator at 702. If it is not a word separator, the system fetches the top M records at 704 for that character. If it is a word separator, the system examines whether the prefix with the word separator is in I(T) at 703. If it is in I(T), the system accesses the top M records for that node in the trie at 704. If the prefix with the word separator is not in I(T), the system does a complete search at 707 for the records beginning with that prefix string. Also, after step 704, if the user scrolls through the results list beyond the top M results at 705, the system performs a complete search at 707. If the user does not scroll beyond the top M results, and the user does not arrive at the result at 706, he or she can go back and enter another character at 701. So, by having just a proper subset I(T) of the prefix strings precomputed, the system can leverage off the precomputed strings. For example, if the user entered “gun_o” for the case K=2, C=1, the system would perform a complete search under strings beginning with gun and dynamically generate the top records that have the second word starting with ‘o’. Accordingly, the dynamic search process rides on top of the information provided by the precomputed prefix strings.
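
The per-character lookup flow can be sketched as follows, building on the PrefixTrie sketch above; complete_search, lookup, and the indexed_prefixes set are illustrative assumptions standing in for steps 703, 704, and 707.

    def complete_search(prefix_query, all_terms):
        """Dynamic fallback (step 707): scan all terms and keep those whose words
        begin with the corresponding prefixes of the query."""
        prefixes = prefix_query.split()
        return [term for term in all_terms
                if len(term.split()) >= len(prefixes)
                and all(w.startswith(p) for p, w in zip(prefixes, term.split()))]

    def lookup(prefix_query, trie, indexed_prefixes, all_terms):
        if prefix_query.endswith(" ") and prefix_query not in indexed_prefixes:
            # word separator entered, but this multi-word prefix is not in I(T)
            return complete_search(prefix_query.rstrip(), all_terms)
        return trie.top_m(prefix_query)      # pre-computed top M (step 704)

    all_terms = ["guns of navarone", "gunsmoke", "gandhi"]
    print(complete_search("gun o", all_terms))   # ['guns of navarone']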



FIGS. 8A and 8B illustrate two exemplary search devices in accordance with one or more further embodiments of the invention. In FIG. 8A, a television 801 is controlled by a remote control device 809 over a wireless connection 807. The device 809 has a keypad 810, a navigation interface 811, a ‘next word’ button 808a, and a ‘previous’ button 808b. A preferred interface layout for performing searches is illustrated on the television screen with a permanent text entry focus (which has only one text entry) and decoupled tab focus. This enables the user to enter text at any time without having to explicitly switch focus to the text window 803. A results window 806 is displayed with a scroll control 805. The results window 806 can be navigated using the navigation interface 811 on the remote 809. As a user types in “JE SE” at 802, the results window content 804 is dynamically culled to show the results. The remote control 809 has a prominent and easily accessible ‘next word’ button 808a that facilitates entry of a space character to delimit words. The ‘next word’ interface facilitates easy entry of multiple prefix strings. Additionally, the remote also has the “previous word” button 808b to facilitate easy traversal to the end of the previous words. This can be used in the less common scenario where the user did not enter sufficient characters for the first ‘m’ prefixes of a term and has to go back to add more characters if the desired result is not reached.


The second device illustrated in FIG. 8B is a hand-held device (e.g., a phone) 812 that has a built-in keypad 816 and navigation interface 815. The display window 813 on this device is likely to be much smaller and hence hold fewer results in a results area 817. Scrolling may be cumbersome on these devices. Aggregation of words can be used wherever applicable to reduce bucket sizes and hence scrolling.


In accordance with one or more embodiments of the invention, the system provides visual cues to users to assist in educating the user on what would be a generally optimal prefix string entry. In the illustrated examples, the visual cues are in the form of underlined prefixes beneath each result 804, 818. The user may over time learn to enter a generally optimal prefix string input due to the visual cues. The optimal prefix string that can display a term within the display space of a device without scrolling can be determined in advance by the system taking into account the number of lines in the display and the relevance factor of the term.
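
One way such a cue could be pre-computed is sketched below, under the simplifying assumption that every word's prefix grows uniformly; shortest_prefix_cue and lookup_fn are illustrative names, with lookup_fn standing for a search routine such as the lookup sketch above that returns an ordered results list.

    def shortest_prefix_cue(term, lookup_fn, M):
        """Grow the per-word prefixes one character at a time until the term
        appears within the first M results (one display page, no scrolling)."""
        words = term.split()
        for k in range(1, max(len(w) for w in words) + 1):
            query = " ".join(w[:k] for w in words)
            if term in lookup_fn(query)[:M]:
                return query        # e.g., "je se" for "jerry seinfeld"
        return term                 # fall back to the full term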


In accordance with one or more embodiments of the invention, entity and term space complexity is considered in designing a search/disambiguating mechanism and operations, in addition to device characteristics themselves. In some cases, in order to apply one or more embodiments of this invention to a given entity/term space, it is useful to appropriately partition the space and have multiple distinct computing engines to serve requests. For example, a movie search space could be broken down into smaller search spaces by categorizing it into genres. Similarly, a phone book search space can be broken down by categorizing it into cities and towns. The average size of the hash bucket would set a lower bound on the prefix size. Furthermore, the number of characters to be entered may have to be increased to keep the hash collision count within the tolerable limit of scrolling. For example, in a study done on the movie space, a random sampling of 150,000 English movie titles revealed that 99% of the search space can be covered by 6 characters with hash collisions below 10, while approximately 50% of the search space was covered by a 4-character scheme with a hash collision size below 10. It is interesting to note that while the search space was only 150,000 items, it took 6 characters, or 300 million buckets, to contain the collisions within 10. A study of a restaurant namespace in Westchester, N.Y., with a listing of 1,500 restaurants showed that 98-99% of the restaurants were listed within a display list of the top 5 restaurants with the entry of 4 characters, where 2 characters were taken from the first word and 2 from the next. A study of a phonebook namespace for the Connecticut State Govt., with 29,500 employees expanded to 58,000 to accommodate the unordered namespace, revealed that for a bucket size of 10 and with 4 characters (2 characters from the first word and 2 characters from the second word), 62% were listed in the top 10 names. When the number of characters entered increased to 6, 96.5% were listed within the top 10 names.


Methods of identifying content from reduced text input in accordance with various embodiments of the invention are preferably implemented in software, and accordingly one of the preferred implementations is as a set of instructions (program code) in a code module resident in the random access memory of a computer. Until required by the computer, the set of instructions may be stored in another computer memory, e.g., in a hard disk drive, or in a removable memory such as an optical disk (for eventual use in a CD ROM) or floppy disk (for eventual use in a floppy disk drive), or downloaded via the Internet or some other computer network. In addition, although the various methods described are conveniently implemented in a general purpose computer selectively activated or reconfigured by software, one of ordinary skill in the art would also recognize that such methods may be carried out in hardware, in firmware, or in more specialized apparatus constructed to perform the specified method steps.


Having described preferred embodiments of the present invention, it should be apparent that modifications can be made without departing from the spirit and scope of the invention.


Method claims set forth below having steps that are numbered or designated by letters should not be considered to be necessarily limited to the particular order in which the steps are recited.

Claims
  • 1. A method of incrementally identifying and selecting a television content item to be presented from a relatively large set of selectable television content items, the television content items being associated with descriptive terms that characterize the selectable television content items, the method comprising: using an ordering criteria to rank and associate subsets of television content items with corresponding strings of one or more descriptor prefix strings, each descriptor prefix string being a variable length string containing a subset of the characters of the descriptive terms that characterize the selectable television content items, wherein each descriptor prefix string contains less than all characters of the descriptive terms; subsequent to ranking and associating the television content items with strings of one or more descriptor prefix strings, receiving incremental text input entered by a user, the incremental text input including a first descriptor prefix of a word entered by the user for incrementally identifying at least one desired television content item of the relatively large set of television content items, wherein the first descriptor prefix contains less than all characters of the word the user is using to incrementally identify the at least one desired television content item; selecting and presenting on a display device the subset of television content items that is associated with the first descriptor prefix string; subsequent to receiving the first descriptor prefix, receiving subsequent incremental text input entered by the user, the subsequent incremental text input including a second descriptor prefix of a word entered by the user for incrementally identifying the at least one desired television content item and forming a string of prefixes including the first descriptor prefix and the second descriptor prefix in the order received, wherein the second descriptor prefix contains less than all characters of the word the user is using to incrementally identify the at least one desired television content item; and selecting and presenting on the display device the subset of television content items that is associated with the string of prefixes received.
  • 2. The method of claim 1, wherein the first and second prefixes are in an ordered format.
  • 3. The method of claim 1, wherein the first and second prefixes are in an unordered format.
  • 4. The method of claim 1, wherein the first and second prefixes are separated by a word separator.
  • 5. The method of claim 1, wherein the selected and presented subset of television content item comprises two or more television content items, and wherein the selected and presented subset of television content items are ordered for presentation in accordance with a given relevance function.
  • 6. The method of claim 5, wherein the given relevance function comprises popularity of the television content items.
  • 7. The method of claim 5, wherein the given relevance function comprises temporal relevance of the television content items.
  • 8. The method of claim 5, wherein the given relevance function comprises location relevance of the television content items.
  • 9. The method of claim 1, wherein the incremental text input specifies at least a portion of a title of the at least one desired television content item.
  • 10. The method of claim 1, wherein the method is implemented in a server system remote from the user.
  • 11. The method of claim 1, wherein the method is implemented in a device included in or proximate to a television set for displaying the subset of television content items.
  • 12. The method of claim 1, further comprising determining the descriptive terms prior to receiving the incremental text input from the user.
  • 13. The method of claim 12, wherein determining the descriptive terms comprises identifying a set of candidate terms comprising ordered or unordered words.
  • 14. The method of claim 13, further comprising identifying the location of stop words in the descriptive terms.
  • 15. The method of claim 12, wherein determining the descriptive terms comprises adding phonetically equivalent words to the descriptive terms.
  • 16. The method of claim 12, wherein determining the descriptive terms comprises adding commonly misspelled words of words in the descriptive terms.
  • 17. The method of claim 1, further comprising providing the user with visual cues to assist the viewer in entering generally optimal incremental text input for a search.
  • 18. The method of claim 1, wherein the descriptive terms include at least one of title, cast, director, description, and keyword information relating to the television content item.
  • 19. A system for incrementally identifying and selecting a television content item to be presented from a relatively large set of selectable television content items, the television content items being associated with descriptive terms that characterize the selectable television content items, the system comprising: a database in an electronically readable medium for storing the relatively large set of selectable television content items and associated descriptive terms that characterize the selectable television content items; a plurality of subsets of television content items, each subset being ranked and associated with corresponding strings of one or more descriptive prefix strings based on an ordering criteria, each descriptor prefix string being a variable length string containing a subset of the characters of the descriptive terms that characterize the selectable television content items, wherein each descriptor prefix string contains less than all characters of the descriptive terms; and program code on a computer-readable medium, which when executed on a computer system performs functions including: receiving incremental text input entered by a user, the incremental text input including a first descriptor prefix of a word entered by the user for incrementally identifying at least one desired television content item of the relatively large set of television content items, wherein the first descriptor prefix contains less than all characters of the word the user is using to incrementally identify the at least one desired television content item; selecting and presenting on a display device the subset of television content items that is associated with the first descriptor prefix string; subsequent to receiving the first descriptor prefix, receiving subsequent incremental text input entered by the user, the subsequent incremental text input including a second descriptor prefix of a word entered by the user for incrementally identifying the at least one desired television content item and forming a string of prefixes including the first descriptor prefix and the second descriptor prefix in the order received, wherein the second descriptor prefix contains less than all characters of the word the user is using to incrementally identify the at least one desired television content item; and selecting and presenting on the display device the subset of television content items that is associated with the string of prefixes received.
  • 20. The system of claim 19, wherein the first and second prefixes are in an ordered format.
  • 21. The system of claim 19, wherein the first and second prefixes are in an unordered format.
  • 22. The system of claim 19, wherein the first and second prefixes are separated by a word separator.
  • 23. The system of claim 19, wherein the selected and presented subset of television content item comprises two or more television content items, and wherein the selected and presented subset of television content items are ordered for presentation in accordance with a given relevance function.
  • 24. The system of claim 23, wherein the given relevance function comprises popularity of the television content items.
  • 25. The system of claim 23, wherein the given relevance function comprises temporal relevance of the television content items.
  • 26. The system of claim 23, wherein the given relevance function comprises location relevance of the television content items.
  • 27. The system of claim 19, wherein the incremental text input specifies at least a portion of a title of the at least one desired television content item.
  • 28. The system of claim 19, wherein the computer system is a server system remote from the user.
  • 29. The system of claim 19, wherein the computer system is a device included in or proximate to a television set for displaying the selected subset of television content items.
  • 30. The system of claim 19, wherein the plurality of subsets of television content items is present in the system prior to receiving the incremental text input from the user.
  • 31. The system of claim 19, wherein the descriptive terms are determined by identifying a set of candidate terms comprising ordered or unordered words.
  • 32. The system of claim 19, wherein the descriptive terms are determined by identifying the location of stop words in said terms.
  • 33. The system of claim 19, wherein the descriptive terms comprise phonetically equivalent words to the descriptive terms.
  • 34. The system of claim 19, wherein the descriptive terms comprise commonly misspelled words of words in the descriptive terms.
  • 35. The system of claim 19, wherein the program code when executed on the computer system further performs the function of providing the user with visual cues to assist the user in entering a generally optimal incremental text input for a search.
  • 36. The system of claim 19, wherein the descriptive terms include at least one of title, cast, director, description, or keyword information relating to the television content item.
RELATED APPLICATIONS

The present application is based on and claims priority from the following two U.S. provisional patent applications, the specifications of which are each incorporated herein in their entirety: (1) Ser. No. 60/626,274 filed on Nov. 9, 2004 and entitled “Television Systems and Associated Methods,” and (2) Ser. No. 60/664,879 filed on Mar. 24, 2005 and entitled “Method and System for Performing Searches for Television Programming Using Reduced Text Input.”

Related Publications (1)
Number Date Country
20060101503 A1 May 2006 US
Provisional Applications (2)
Number Date Country
60626274 Nov 2004 US
60664879 Mar 2005 US