The present disclosure relates to responding to a user's interactions based on at least one determined context, which determined context may include, without limitation, a context of content, a context of a device and/or a context of a user, and more particularly to functionality to provide a response to a user's interaction with a device based on the at least one context.
Computing devices, such as desktop computers, laptop computers, tablets, smartphones, etc., are being used more and more to access various content, including without limitation web pages, multimedia, audio, video, etc. While users' time is limited, the amount of content available for consumption by users is growing exponentially. In fact, users are unaware of a great deal of the content that is available for consumption.
The present disclosure seeks to address failings in the art and to provide functionality to facilitate a user's content experience. Embodiments of the present disclosure determine whether or not to display a menu in response to user interaction with a device, and in a case that a determination is made to display a menu in response to the user interaction, a determination may be made as to what options to make available via the menu. In accordance with one or more such embodiments, a determination whether or not to display a menu and/or a determination of what options to make available via the menu may be based on contextual determinations, e.g., a determination of a context of content that is being output by the device, a context of the device and/or a context of the user.
By way of a non-limiting example, embodiments of the present disclosure may be used on a device that is displaying web page content, such as that provided via a browser application, and the user may interact with the device to select a portion of content, e.g., text, image, video, etc., from the web page content. In response and in accordance with at least one embodiment of the present disclosure, a context of the content selected from the web page may be used in combination with a context of the device, e.g., a location determined using the device's location services, etc., and optionally a context of the user interacting with the device, e.g., information or knowledge gathered about the user such as a language preference, location, one or more histories such as and without limitation browsing, searching, purchasing, etc., to determine whether or not to display a menu of options. The menu may comprise one or more options, or actions, that are selectable by the user to perform an action using the content selected from the web page. By way of a further non-limiting example, a determination may be made not to display a menu of options but to provide another response to a user's selection of an English-language word from the web page, where it is determined from the context of the content, e.g., the web page as a whole or the selected content, together with the user's context, that the user's primary language is French. The response that is provided, rather than a menu from which the user might select a translation option, is a definition of the English-language content and/or a translation of the content from English to French, which translation may be in audio and/or visual form. By way of some non-limiting examples, such a response might be generated using results retrieved from a dictionary search, results retrieved from a translation service or services, etc.
As yet another non-limiting example, embodiments of the present disclosure may display a menu, e.g., a contextual menu, in response to a user's selection of a portion of a web page or other windowing display. The user might highlight a word, or words, being displayed at a device, and the menu appears in response with options to search photographs, images, news, web, Wikipedia, etc. A response to user selection of one of the options may be displayed in the web page, or other display component, that contains the highlighted word or words. In accordance with one or more embodiments, the menu is a contextual menu in that its contents are determined based on a determined context, or determined contexts, such as a context of content, a context of the device, a context of the user, or a combination of determined contexts.
In accordance with one or more embodiments, a method is provided, the method comprising determining, via at least one processing unit, a context of content being output by a device; determining, via the at least one processing unit, at least one other context; detecting, via the at least one processing unit, user interaction with the device, the user interaction associated with selected content output by the device; using, via the at least one processing unit, the context of the content and the at least one other context to make a determination whether or not to respond to the user interaction with a menu comprising a plurality of user-selectable options, the plurality of user-selectable options comprising at least one search option selectable to perform a search using the selected content; and instructing, via the at least one processing unit, the device to output a response to the user interaction that is based on the determination.
In accordance with other embodiments of the present disclosure, a system is provided, which system comprises at least one computing device comprising one or more processors to execute and memory to store instructions to determine a context of content being output by a device; determine at least one other context; detect user interaction with the device, the user interaction associated with selected content output by the device; use the context of the content and the at least one other context to make a determination whether or not to respond to the user interaction with a menu comprising a plurality of user-selectable options, the plurality of user-selectable options comprising at least one search option selectable to perform a search using the selected content; and instruct the device to output a response to the user interaction that is based on the determination.
In accordance with yet one or more other embodiments, a computer readable non-transitory storage medium is provided for tangibly storing thereon computer readable instructions that when executed cause at least one processor to determine a context of content being output by a device; determine at least one other context; detect user interaction with the device, the user interaction associated with selected content output by the device; use the context of the content and the at least one other context to make a determination whether or not to respond to the user interaction with a menu comprising a plurality of user-selectable options, the plurality of user-selectable options comprising at least one search option selectable to perform a search using the selected content; and instruct the device to output a response to the user interaction that is based on the determination.
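By way of a purely illustrative, non-limiting sketch of the method summarized above, and not as a description of any claimed implementation, the overall decision flow might be expressed as follows; every function name and example value below (e.g., determine_content_context, the English/French language values) is hypothetical.

```python
# Hypothetical, non-limiting sketch of the summarized method; none of
# these names or values come from the disclosure itself.

def determine_content_context(selected_content: str) -> dict:
    # Toy stand-in for determining a context of content being output
    # by a device (e.g., its language and a salient term).
    return {"language": "en", "term": selected_content.split()[0].lower()}

def determine_other_context() -> dict:
    # Toy stand-in for at least one other context, e.g., user context.
    return {"user_language": "fr", "location": "Paris"}

def respond(selected_content: str) -> str:
    content_ctx = determine_content_context(selected_content)
    other_ctx = determine_other_context()
    # Use both contexts to determine whether to respond with a menu of
    # user-selectable options or with another response, e.g., a translation.
    if content_ctx["language"] != other_ctx["user_language"]:
        return f"Translation of '{selected_content}' into {other_ctx['user_language']}"
    return "Menu: [Web search] [Image search] [News search]"

print(respond("hello"))  # -> a translation rather than a menu
```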
In accordance with one or more embodiments, a system is provided that comprises one or more computing devices configured to provide functionality in accordance with such embodiments. In accordance with one or more embodiments, functionality is embodied in steps of a method performed by at least one computing device. In accordance with one or more embodiments, program code to implement functionality in accordance with one or more such embodiments is embodied in, by and/or on a computer-readable medium.
The above-mentioned features and objects of the present disclosure will become more apparent with reference to the following description taken in conjunction with the accompanying drawings wherein like reference numerals denote like elements and in which:
Subject matter will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific example embodiments. Subject matter may, however, be embodied in a variety of different forms and, therefore, covered or claimed subject matter is intended to be construed as not being limited to any example embodiments set forth herein; example embodiments are provided merely to be illustrative. Likewise, a reasonably broad scope for claimed or covered subject matter is intended. Among other things, for example, subject matter may be embodied as methods, devices, components, or systems. Accordingly, embodiments may, for example, take the form of hardware, software, firmware or any combination thereof (other than software per se). The following detailed description is, therefore, not intended to be taken in a limiting sense.
Throughout the specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. Likewise, the phrase “in one embodiment” as used herein does not necessarily refer to the same embodiment and the phrase “in another embodiment” as used herein does not necessarily refer to a different embodiment. It is intended, for example, that claimed subject matter include combinations of example embodiments in whole or in part.
In general, terminology may be understood at least in part from usage in context. For example, terms, such as “and”, “or”, or “and/or,” as used herein may include a variety of meanings that may depend at least in part upon the context in which such terms are used. Typically, “or” if used to associate a list, such as A, B or C, is intended to mean A, B, and C, here used in the inclusive sense, as well as A, B or C, here used in the exclusive sense. In addition, the term “one or more” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense. Similarly, terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
The detailed description provided herein is not intended as an extensive or detailed discussion of known concepts, and as such, details that are known generally to those of ordinary skill in the relevant art may have been omitted or may be handled in summary fashion.
Certain embodiments of the present disclosure will now be discussed with reference to the aforementioned figures, wherein like reference numerals refer to like components.
In general, the present disclosure includes a system, method and architecture of using context in selecting a response to user interaction with a device; such context may be a context of content being output by the device, the device's context, the user's context, or some combination of contexts. In accordance with one or more embodiments, a response may comprise a menu the content of which, e.g., one or more user-selectable actions or options, can be determined based on a determined context, or contexts, or a determination may be made to provide a response other than an option menu, which determination may be made based on a determined context or combination of contexts. As yet another alternative, a determination may be made to provide a response that includes both a menu, the content of which is determined based on one or more determined contexts, and another response, such as an answer identified based on one or more determined contexts. In accordance with one or more such embodiments, a determination whether or not to display a menu and/or a determination of what options are made available in the menu may be based on contextual determinations, e.g., a determination of a context of content that is being output by the device, a context of the device and/or a context of the user.
In a non-limiting example discussed above, a response may comprise instructing a device, e.g., a device from which user interaction is received and at which content is being output, e.g., played audibly and/or visually, to provide an audio and/or visual response to a user's interaction with the device, and/or the response may comprise a menu of user-selectable options or other content. By way of some non-limiting examples, a response may comprise a pop-up menu and may be triggered by user input, such as without limitation the user selecting or highlighting text displayed at a device, by voice input that is received at the device, by a gesture detected via the device, by a hand wave or other motion detected via the device, etc., or may be triggered by other input, such as without limitation pre-selected text from a web page or other content being displayed at the device.
In accordance with one or more embodiments, the content of the menu, e.g., a pop-up menu, may comprise user-selectable options or actions, such as without limitation a search, e.g., web search, image search, video search, map search, etc. By way of some non-limiting examples, selection of an option included in the menu may trigger an overlay, a sidebar, a search and/or a display of search results, launch a new web page and/or application, etc. The menu may comprise sponsored content or advertisements, etc. By way of some non-limiting examples, the menu might display a definition of selected text, content returned from a search, sponsored content, advertisements, etc. By way of some non-limiting examples, the response may comprise audio content in addition to or in place of content that is displayed.
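As one hypothetical way of representing such a menu, each entry might pair a label with the behavior its selection triggers; the structure and field names below are assumptions made for illustration, not part of the disclosure.

```python
# Hypothetical menu representation; labels and behaviors are illustrative only.
menu = [
    {"label": "Web search",   "on_select": "open sidebar with web results"},
    {"label": "Image search", "on_select": "open overlay with image results"},
    {"label": "Map search",   "on_select": "launch a maps application"},
    {"label": "Define",       "on_select": "display an inline definition"},
    {"label": "Sponsored",    "on_select": "display sponsored content"},
]
for entry in menu:
    print(f"{entry['label']}: {entry['on_select']}")
```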
With reference to
In accordance with one or more embodiments, the computing device may include a number of mechanisms for receiving user input, including without limitation, keyboard, mouse or other pointer device, touch screen, microphone, camera, etc. If it is determined, at step 102, that user interaction is received, processing continues at step 104 to determine context of content, which content may comprise content that is being output by the device before, during and/or after the user interaction is detected, for example. By way of some further non-limiting examples, the content may be audio, video, multimedia, still image such as a photograph, streaming content, etc.
A context analysis may be used to identify a context of the content, e.g., content selected from a web page. The context analysis may be used to identify a meaning or context of the content. By way of a further non-limiting example, the content for which a context is determined may be selected content, such as content selected by the user via the user interaction and/or preselected content, and that context may be used to determine whether or not a response to the user interaction includes a menu; where a menu is used in response, the context of the content may be used to determine the content of the menu, such as and without limitation one or more user-selectable options or actions, an advertisement and/or other content. The content analysis may analyze content of any type, including without limitation audio, video, multimedia, etc. content. In addition to the content itself, the content analysis may use additional information, such as metadata, associated with the content in determining a context of the content.
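Purely as a non-limiting sketch, a context analysis of this kind might combine terms from the selected content with any available metadata; the naive keyword counting below is an assumption made for illustration and not the disclosed analysis.

```python
# Hypothetical, simplified content-context analysis.
from collections import Counter

def content_context(selected_text: str, metadata: dict = None) -> dict:
    # Derive a rough "context" from frequent non-trivial words in the
    # selection, supplemented by metadata associated with the content.
    stopwords = {"the", "a", "an", "of", "and", "or", "to", "in"}
    words = [w.lower().strip(".,!?") for w in selected_text.split()]
    ctx = {"top_terms": [t for t, _ in Counter(
        w for w in words if w and w not in stopwords).most_common(3)]}
    if metadata:
        ctx.update({k: metadata[k] for k in ("language", "category") if k in metadata})
    return ctx

print(content_context("The best coffee in Paris", {"category": "travel"}))
```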
At step 106, other contexts may be determined, such as and without limitation a context of the device and/or a context of the user. By way of some non-limiting examples, the context of the device may comprise information about the device, such as a type, model, hardware and software resources, network connectivity, location, weather, etc. The context of the device may comprise any information that may be obtained via the device. A device type may include information to identify the device as a smartphone, tablet, etc., for example. Network connectivity may include information to identify a current network connectivity, such as and without limitation whether the device is communicating via a cellular network, a wireless network, a wired network, etc., or some combination of networks. The device context may further provide information used in determining at least a portion of a user context, such as without limitation the device's location service(s) that provide information useful in determining the user's location. User context may include information about the user, which information may be explicitly supplied by the user, such as preferences and demographic information, e.g., age, gender, home and/or work address(es), social and other contacts, calendar, education, hobbies, preferred language, etc., or gathered from the user's interactions, e.g., the user's web browsing history, content playback, purchasing history, software application usage, interaction with menu options/actions previously provided to the user, etc. The device context and/or user context may be used in combination with the context of content to determine whether or not to respond to the user interaction with a menu of options and, if so, what options to include in the menu.
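By way of illustration only, device and user contexts of the kind described at step 106 might be collected into simple records; every field below is an assumed example rather than a required one.

```python
# Hypothetical device- and user-context records; fields are illustrative.
import datetime

def device_context() -> dict:
    # A real implementation would query the device's own services,
    # e.g., its location service(s) and network interfaces.
    return {
        "type": "smartphone",
        "connectivity": "wifi",
        "local_time": datetime.datetime.now().strftime("%H:%M"),
    }

def user_context() -> dict:
    # Explicitly supplied preferences plus gathered interaction history.
    return {
        "preferred_language": "fr",
        "interests": ["news", "coffee"],
        "browsing_history_topics": ["travel", "sports"],
    }

print(device_context())
print(user_context())
```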
At step 108, a determination is made whether or not to display the menu of actions/options based on one or more determined contexts. In accordance with one or more embodiments, such a determination may be made by determining, based on the one or more determined contexts, a response to the user interaction. By way of a non-limiting example, a determination whether or not to display a menu of user-selectable options might be based on a likelihood that one of the options is the one being sought by the user. Where the likelihood that the option is the one being sought by the user is sufficiently high, a response may be provided by performing the action associated with the option and providing the result of the action as a response, rather than providing the user with a menu of options from which to select an action to be taken. The determination of whether or not to display the menu may be made by determining whether or not the likelihood satisfies a threshold probability, or likelihood, that a given option/action is the option/action being sought by the user.
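The threshold test described at step 108 might, as a minimal sketch, look like the following; the scores and the 0.8 threshold are invented for illustration.

```python
# Hypothetical likelihood-threshold decision (cf. steps 108-112).

def choose_response(option_scores: dict, threshold: float = 0.8):
    """Return ('direct', option) if one option's estimated likelihood
    satisfies the threshold; otherwise ('menu', options ranked by score)."""
    best_option, best_score = max(option_scores.items(), key=lambda kv: kv[1])
    if best_score >= threshold:
        # Sufficiently likely: perform the single action instead of
        # presenting a menu of options.
        return ("direct", best_option)
    ranked = sorted(option_scores, key=option_scores.get, reverse=True)
    return ("menu", ranked)

# e.g., translation is very likely the action being sought:
print(choose_response({"translate": 0.92, "web_search": 0.05, "define": 0.03}))
# e.g., no clear favorite, so a menu is shown:
print(choose_response({"translate": 0.40, "web_search": 0.35, "define": 0.25}))
```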
At step 110, where it is determined not to display the menu, processing continues at step 112 to select an action, e.g., an action associated with the option identified at step 108, to perform in response to the user interaction; the action is performed at step 124, and a response may be provided, at step 126, based on the action taken at step 124. By way of a non-limiting example, the action might be performing a search using a portion of content selected by the user, or a portion of content selected for the user, e.g., content selected based on one or more of the determined contexts, and providing a set of search results, which set of search results may be filtered based on one or more of the determined contexts. By way of a further non-limiting example, the search conducted, e.g., web search, encyclopedia search, dictionary search, image search, etc., and/or the results provided may be determined based on one or more of the determined contexts.
At step 110, if it is determined to display a menu, processing continues at step 114 to make a determination whether or not to select actions/options and/or content for inclusion in the menu based on one or more determined contexts. If not, processing continues at step 116 to use a default menu, e.g., a menu personalized by the user and including at least one option selected by the user for inclusion in the menu. If at least some portion of the menu is to be determined based on a context, or contexts, processing continues at step 118 to determine the menu that is to be provided to the user. By way of a non-limiting example, the menu may include a menu option that, when selected, would provide a verbal pronunciation of a word, where the context of the content and the context of the user indicate that the selected content is in a foreign language, e.g., a language other than the user's native/preferred language. By way of another non-limiting example, the menu may include a menu option that, when selected, provides a search of news content, where the context of the content indicates that it relates to a current event and/or the user's context indicates that the user is interested in news content. By way of yet another non-limiting example, a menu might include a menu option that, when selected, would search for nearby coffee shops, where the context of the device includes information indicating that it is early morning, the context of the content is related to coffee and the context of the user indicates the user's preference for coffee.
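The three examples above can be read as context-to-option rules; the sketch below encodes them directly, with all rule conditions and field names assumed for illustration rather than taken from the disclosure.

```python
# Hypothetical context-driven menu construction (cf. step 118).

def build_menu(content_ctx: dict, device_ctx: dict, user_ctx: dict) -> list:
    options = ["Web search"]  # an assumed baseline option
    # Selected content in a foreign language -> offer pronunciation.
    if content_ctx.get("language") not in (None, user_ctx.get("preferred_language")):
        options.append("Pronounce")
    # Current event and/or a user interested in news -> offer a news search.
    if content_ctx.get("is_current_event") or "news" in user_ctx.get("interests", []):
        options.append("News search")
    # Early morning + coffee-related content + coffee preference -> coffee shops.
    if (device_ctx.get("local_hour", 12) < 9
            and content_ctx.get("topic") == "coffee"
            and "coffee" in user_ctx.get("interests", [])):
        options.append("Nearby coffee shops")
    return options

print(build_menu({"language": "en", "topic": "coffee"},
                 {"local_hour": 7},
                 {"preferred_language": "fr", "interests": ["coffee"]}))
# -> ['Web search', 'Pronounce', 'Nearby coffee shops']
```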
At step 120, the device is instructed to display the menu, e.g., a default menu selected at step 116 or a menu determined at step 118. At step 122, a determination is made whether or not an action is selected by the user. If not, processing awaits an action. If an action is selected, processing continues at step 124 to perform the action and at step 126 to provide a response based on the action. Processing continues at step 102 to await further user interaction with the device.
In one exemplary embodiment discussed above in connection with the process flow of
Embodiments of the present disclosure provide an ability for the user to personalize the menu. A menu may be personalized in various ways. By way of a non-limiting example, the size of icons that represent user-selectable actions may be increased or decreased. By way of a further non-limiting example, an icon representing a user-selectable option/action may be added to or removed from the menu, such as without limitation, search icon(s), multimedia icon(s), language icon(s), media/sharing icon(s), discovery icon(s), text-to-speech icon(s), dictionary icon(s), language translation icon(s), etc. By way of a further non-limiting example, a discovery icon may allow a user to explore a subject, such as a subject identified from selected text; in response to the user clicking on a discovery, or exploration, icon in the menu, the user is provided with a graph visualizing related entities, such as without limitation actors, recipes, movies, etc. By way of a further non-limiting example, a menu icon can be used for a vertical search to focus, or filter, a search on a segment of online content, such as without limitation Y! Finance®, Y! Sports®, Y! Answers®, YouTube®, Wikipedia®, etc. It should be apparent that embodiments of the present disclosure may select from these as well as other actions/options, e.g., where an action is selected at step 112 of
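As a non-limiting sketch of such personalization, a per-user settings record might be consulted when the menu is assembled; the fields and defaults below are assumptions for illustration.

```python
# Hypothetical per-user menu personalization record and its application.
personalization = {
    "icon_size": "large",                 # increase or decrease icon size
    "removed_icons": ["sponsored"],       # icons the user removed
    "added_icons": ["dictionary", "text-to-speech", "discovery"],
    "vertical_search": "Wikipedia",       # focus searches on one segment
}

def apply_personalization(default_icons: list, prefs: dict) -> list:
    icons = [i for i in default_icons if i not in prefs["removed_icons"]]
    icons += [i for i in prefs["added_icons"] if i not in icons]
    return icons

print(apply_personalization(["web search", "images", "sponsored"], personalization))
# -> ['web search', 'images', 'dictionary', 'text-to-speech', 'discovery']
```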
A set of action buttons, or icons, 302 may be displayed in window 206, which buttons may be included in the set based on one or more contexts, such as without limitation a context of content selected by the user and/or a context of other content, such as content surrounding the user-selected content, a context of the device and/or a context of the user. Scroll buttons 306 may be used to scroll through the set of buttons 302. The user may scroll through results displayed in area 304 using scroll bar 310, which scroll bar may include scroll buttons. Embodiments of the present disclosure maintain a focus of attention corresponding to the user's focus of attention, e.g., the user's content selection. An indicator, such as pointer 318 in
Positioning of window 206 may vary depending on the location of the content selected by the user, such as and without limitation window 206 might be above, below, to the right or to the left of the selection.
As a user becomes more familiar with the tool provided in accordance with embodiments of the present disclosure, the user may wish to personalize or customize it according to their needs/desires. By way of some non-limiting examples, the user may wish to turn the tool off temporarily or permanently, choose a manner in which results are displayed, such as switching between window 206 of
In the example of
With reference to
In example 802B, the user is provided with an option to select the language into which the selection is to be translated, together with a button, e.g., “Go” button, to initiate the translation. In example 802C, the user is provided with an option to select the language into which the selection is to be translated, which selection initiates the translation. In examples 802A, 802B and 802C, a default language may be provided as the language into which the selection is to be translated. By way of some non-limiting examples, the default language may be the official language of the country in which the user lives, the country in which the user is currently located, a language preferred by the user, etc. Each of the examples 802A, 802B and 802C further provides a close button should the user wish to cancel the operation.
In accordance with one or more embodiments, a popup window such as that shown in
The examples provided herein display a set of icons in a horizontal or vertical fashion. It should be apparent that other alternatives may be used in accordance with one or more embodiments of the present disclosure.
In the example shown in
In accordance with one or more embodiments, at least some portion of process flow of
Computing device 1202 can serve content to user computing devices 1204 using a browser application via a network 1206. Data store 1208 can be used to store program data for use with program code that configures a server 1202 to execute at least some portion of the process flow of
The user computing device 1204 can be any computing device, including without limitation a personal computer, personal digital assistant (PDA), wireless device, cell phone, internet appliance, media player, home theater system, media center, or the like. For the purposes of this disclosure a computing device includes a processor and memory for storing and executing program code, data and software, and may be provided with an operating system that allows the execution of software applications in order to manipulate data. In accordance with one or more embodiments, computing device 1204 may implement program code to implement at least some portion of the process flow of
In accordance with one or more embodiments, a computing device 1202 can make a user interface available to a user computing device 1204 via the network 1206. The user interface made available to the user computing device 1204 can include content items, or identifiers (e.g., URLs) selected for the user interface in accordance with one or more embodiments of the present invention. In accordance with one or more embodiments, computing device 1202 makes a user interface available to a user computing device 1204 by communicating a definition of the user interface to the user computing device 1204 via the network 1206. The user interface definition can be specified using any of a number of languages, including without limitation a markup language such as Hypertext Markup Language, scripts, applets and the like. The user interface definition can be processed by an application executing on the user computing device 1204, such as a browser application, to output the user interface on a display coupled, e.g., a display directly or indirectly connected, to the user computing device 1204.
In an embodiment the network 1206 may be the Internet, an intranet (a private version of the Internet), or any other type of network. An intranet is a computer network allowing data transfer between computing devices on the network. Such a network may comprise personal computers, mainframes, servers, network-enabled hard drives, and any other computing device capable of connecting to other computing devices via an intranet. An intranet uses the same Internet protocol suite as the Internet. Two of the most important elements in the suite are the transmission control protocol (TCP) and the Internet protocol (IP).
As discussed, a network may couple devices so that communications may be exchanged, such as between a server computing device and a client computing device or other types of devices, including between wireless devices coupled via a wireless network, for example. A network may also include mass storage, such as network attached storage (NAS), storage area network (SAN), or other forms of computer or machine readable media, for example. A network may include the Internet, one or more local area networks (LANs), one or more wide area networks (WANs), wire-line type connections, wireless type connections, or any combination thereof. Likewise, sub-networks, such as may employ differing architectures or may be compliant or compatible with differing protocols, may interoperate within a larger network. Various types of devices may, for example, be made available to provide an interoperable capability for differing architectures or protocols. As one illustrative example, a router may provide a link between otherwise separate and independent LANs. A communication link or channel may include, for example, analog telephone lines, such as a twisted wire pair, a coaxial cable, full or fractional digital lines including T1, T2, T3, or T4 type lines, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links, or other communication links or channels, such as may be known to those skilled in the art. Furthermore, a computing device or other related electronic devices may be remotely coupled to a network, such as via a telephone line or link, for example.
A wireless network may couple client devices with a network. A wireless network may employ stand-alone ad-hoc networks, mesh networks, Wireless LAN (WLAN) networks, cellular networks, or the like. A wireless network may further include a system of terminals, gateways, routers, or the like coupled by wireless radio links, or the like, which may move freely, randomly or organize themselves arbitrarily, such that network topology may change, at times even rapidly. A wireless network may further employ a plurality of network access technologies, including Long Term Evolution (LTE), WLAN, Wireless Router (WR) mesh, or 2nd, 3rd, or 4th generation (2G, 3G, or 4G) cellular technology, or the like. Network access technologies may enable wide area coverage for devices, such as client devices with varying degrees of mobility, for example. For example, a network may enable RF or wireless type communication via one or more network access technologies, such as Global System for Mobile communication (GSM), Universal Mobile Telecommunications System (UMTS), General Packet Radio Services (GPRS), Enhanced Data GSM Environment (EDGE), 3GPP Long Term Evolution (LTE), LTE Advanced, Wideband Code Division Multiple Access (WCDMA), Bluetooth, 802.11b/g/n, or the like. A wireless network may include virtually any type of wireless communication mechanism by which signals may be communicated between devices, such as a client device or a computing device, between or within a network, or the like.
Signal packets communicated via a network, such as a network of participating digital communication networks, may be compatible with or compliant with one or more protocols. Signaling formats or protocols employed may include, for example, TCP/IP, UDP, DECnet, NetBEUI, IPX, Appletalk, or the like. Versions of the Internet Protocol (IP) may include IPv4 or IPv6. The Internet refers to a decentralized global network of networks. The Internet includes local area networks (LANs), wide area networks (WANs), wireless networks, or long haul public networks that, for example, allow signal packets to be communicated between LANs. Signal packets may be communicated between nodes of a network, such as, for example, to one or more sites employing a local network address. A signal packet may, for example, be communicated over the Internet from a user site via an access node coupled to the Internet. Likewise, a signal packet may be forwarded via network nodes to a target site coupled to the network via a network access node, for example. A signal packet communicated via the Internet may, for example, be routed via a path of gateways, servers, etc. that may route the signal packet in accordance with a target address and availability of a network path to the target address.
It should be apparent that embodiments of the present disclosure can be implemented in a client-server environment such as that shown in
Memory 1304 interfaces with computer bus 1302 so as to provide information stored in memory 1304 to CPU 1312 during execution of software programs such as an operating system, application programs, device drivers, and software modules that comprise program code, and/or computer-executable process steps, incorporating functionality described herein, e.g., one or more of process flows described herein. CPU 1312 first loads computer-executable process steps from storage, e.g., memory 1304, computer-readable storage medium/media 1306, removable media drive, and/or other storage device. CPU 1312 can then execute the stored process steps in order to execute the loaded computer-executable process steps. Stored data, e.g., data stored by a storage device, can be accessed by CPU 1312 during the execution of computer-executable process steps.
Persistent storage, e.g., medium/media 1306, can be used to store an operating system and one or more application programs. Persistent storage can also be used to store device drivers, such as one or more of a digital camera driver, monitor driver, printer driver, scanner driver, or other device drivers, web pages, content files, playlists and other files. Persistent storage can further include program modules and data files used to implement one or more embodiments of the present disclosure, e.g., listing selection module(s), targeting information collection module(s), and listing notification module(s), the functionality and use of which in the implementation of the present disclosure are discussed in detail herein.
For the purposes of this disclosure a computer readable medium stores computer data, which data can include computer program code that is executable by a computer, in machine readable form. By way of example, and not limitation, a computer readable medium may comprise computer readable storage media, for tangible or fixed storage of data, or communication media for transient interpretation of code-containing signals. Computer readable storage media, as used herein, refers to physical or tangible storage (as opposed to signals) and includes without limitation volatile and non-volatile, removable and non-removable media implemented in any method or technology for the tangible storage of information such as computer-readable instructions, data structures, program modules or other data. Computer readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other physical or material medium which can be used to tangibly store the desired information or data or instructions and which can be accessed by a computer or processor.
Those skilled in the art will recognize that the methods and systems of the present disclosure may be implemented in many manners and as such are not to be limited by the foregoing exemplary embodiments and examples. In other words, functional elements may be performed by single or multiple components, in various combinations of hardware and software or firmware, and individual functions may be distributed among software applications at either the client or server or both. In this regard, any number of the features of the different embodiments described herein may be combined into single or multiple embodiments, and alternate embodiments having fewer than, or more than, all of the features described herein are possible. Functionality may also be, in whole or in part, distributed among multiple components, in manners now known or to become known. Thus, myriad software/hardware/firmware combinations are possible in achieving the functions, features, interfaces and preferences described herein. Moreover, the scope of the present disclosure covers conventionally known manners for carrying out the described features and functions and interfaces, as well as those variations and modifications that may be made to the hardware or software or firmware components described herein as would be understood by those skilled in the art now and hereafter.
While the system and method have been described in terms of one or more embodiments, it is to be understood that the disclosure need not be limited to the disclosed embodiments. It is intended to cover various modifications and similar arrangements included within the spirit and scope of the claims, the scope of which should be accorded the broadest interpretation so as to encompass all such modifications and similar structures. The present disclosure includes any and all embodiments of the following claims.