System and method for context-launching of applications

Information

  • Patent Grant
  • Patent Number
    10,713,312
  • Date Filed
    Tuesday, December 1, 2015
  • Date Issued
    Tuesday, July 14, 2020
  • CPC
    • G06F16/9535
  • Field of Search
    • US
    • 707/760
    • 707/770
    • CPC
    • G06F16/34
    • G06F16/951
    • G06F16/9535
    • G06F16/24575
    • G06F16/245
  • International Classifications
    • G06F16/34
    • G06F16/9535
    • G06F16/2457
    • G06F16/951
    • Term Extension
      1192
Abstract
A system, method, and user device for executing actions respective of contextual scenarios. The method comprises: determining at least one variable based in part on at least one signal captured by at least one sensor of the user device; generating at least one insight based on the at least one variable; generating a context for the user device based on the at least one insight; determining, based on the context, a user intent, wherein the user intent includes at least one action; and causing execution of the at least one action on the user device.
Description
TECHNICAL FIELD

The present disclosure relates generally to analysis of user behavior, and more specifically to systems and methods for generating contextual scenarios respective of user device activity.


BACKGROUND

Mobile launchers are used for navigating through content existing on mobile devices as well as content available over the World Wide Web. The mobile launchers usually configure user interfaces and/or provide content upon receiving requests from a user of the mobile device. Such requests are usually received as a query that a user provides via the interfaces.


As content providers are usually motivated by financial incentives, the content may not be user-oriented. As a result, users may not receive content that efficiently meets their requirements. Specifically, the content may be arranged or otherwise presented so as to emphasize content that provides a financial benefit for content providers rather than emphasizing content that best matches the user's needs at a given time. Consequently, users would be required to wade through significant amounts of irrelevant content before finding relevant content, wasting user time and computing resources and potentially resulting in user frustration.


Further, even when content is intended to be user-oriented, existing mobile launchers face several drawbacks with respect to user convenience. For example, existing mobile launchers typically require interaction with the user to obtain desired content. The interaction with the user device may be undesirable when, e.g., the user is driving or otherwise engaged in activity that makes interaction difficult or unsafe. Additionally, due to the increasing number of sources of content, a user may not be aware of which content source will provide appropriate content based on the user's current intent.


It would therefore be advantageous to provide a solution that would overcome the deficiencies of the prior art.


SUMMARY

A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term “some embodiments” may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.


The disclosed embodiments include a method for executing actions respective of contextual scenarios. The method comprises determining at least one variable based in part on at least one signal captured by at least one sensor of a user device; generating at least one insight based on the at least one variable; generating a context for the user device based on the at least one insight; determining, based on the context, a user intent, wherein the user intent includes at least one action; and causing execution of the at least one action on the user device.


The disclosed embodiments also include a user device configured to execute actions respective of contextual scenarios related to the user device. The user device comprises at least one sensor; a processing unit; and a memory, the memory containing instructions that, when executed by the processing unit, configure the user device to: determine at least one variable based in part on at least one signal captured by the at least one sensor; generate at least one insight based on the at least one variable; generate a context for the user device based on the at least one insight; determine, based on the context, a user intent, wherein the user intent includes at least one action; and execute the at least one action.


The disclosed embodiments also include a system for executing actions respective of contextual scenarios related to a user device. The system comprises a processing unit; and a memory, the memory containing instructions that, when executed by the processing unit, configure the system to: determine at least one variable based in part on at least one signal captured by at least one sensor of the user device; generate at least one insight based on the at least one variable; generate a context for the user device based on the at least one insight; determine, based on the context, a user intent, wherein the user intent includes at least one action; and cause execution of the at least one action on the user device.





BRIEF DESCRIPTION OF THE DRAWINGS

The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the disclosed embodiments will be apparent from the following detailed description taken in conjunction with the accompanying drawings.



FIG. 1 is a schematic diagram of a system utilized to describe the various embodiments.



FIG. 2 is a flow diagram illustrating operation of a system utilized to describe the various embodiments.



FIG. 3 is a flowchart illustrating generating contextual scenarios and executing actions respective thereof based on use of a user device according to an embodiment.





DETAILED DESCRIPTION

It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts through several views.


The various disclosed embodiments include a method and system for executing actions on a user device respective of context. Insights are generated based on signals collected by the user device. The generated insights are analyzed and aggregated. Contextual scenarios are generated respective of the analysis and aggregation. Based on the contextual scenarios, actions are executed.



FIG. 1 shows an exemplary and non-limiting schematic diagram of a networked system 100 utilized to describe the various disclosed embodiments. The system 100 includes a user device 110, a network 120, a context enrichment server (CES) 130, a contextual database (DB) 140, and a variables database (DB) 150. The user device 110, the databases 140 and 150, and the CES 130 are communicatively connected to the network 120. The network 120 may be, but is not limited to, a local area network (LAN), a wide area network (WAN), a metro area network (MAN), the world wide web (WWW), the Internet, a wired network, a wireless network, and the like, as well as any combination thereof. The user device 110 may be a smart phone, a mobile phone, a laptop, a tablet computer, a wearable computing device, a personal computer (PC), a smart television, a portable device having a processing unit integrated therein, and the like. In certain implementations, the contextual database (DB) 140 and a variables database (DB) 150 may be implemented in a single storage device.


The user device may allow access to a plurality of applications 113 (hereinafter referred to individually as an application (app) 113 and collectively as applications 113, merely for simplicity purposes). An application 113 may include a native application and/or a virtual application. A native application (or app) is installed and executed on the user device 110. A virtual application (app) is executed on a server and only relevant content is rendered and sent to the user device 110. That is, the virtual application typically runs within a browser embedded in another program, thereby permitting users to utilize virtual versions of the application without any download to the user device 110 directly.


The user device 110 has an agent 115 installed therein. The agent 115 may be implemented as an application program having instructions that reside in a memory of the respective user device 110. The agent 115 is communicatively connected to the CES 130 over the network 120. The user device 110 further includes a plurality of sensors 111-1 through 111-N (hereinafter referred to collectively as sensors 111 and individually as a sensor 111, merely for simplicity purposes) implemented thereon. The sensors 111 are configured to collect signals related to a current state of a user of the user device 110. The sensors 111 may include, but are not limited to, a global positioning system (GPS), a temperature sensor, an accelerometer, a gyroscope, a microphone, a speech recognizer, a camera, a web camera, a touch screen, and so on. The signals may include, but are not limited to, raw data related to a user of the user device and/or to the user device 110.


In an embodiment, the agent 115 may be configured to determine one or more variables based on the sensor signals and to send the determined variables to the CES 130. Such variables may include environmental variables that indicate, for example, temporal information (e.g., a time of day, a day, a week, a month, a year), location information, motion information, weather information, sounds, images, user interactions with the user device 110, connections to external devices, and so on. In an embodiment, any of the variables may be received from a resource external to the user device 110, e.g., from the variables database 150. Optionally, the received variables may further include personal variables such as, e.g., a user profile, demographic information related to the user, user preferences, and so on.
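
As a non-limiting illustration only, the following Python sketch shows one way an agent might derive such environmental variables from raw sensor signals. The function and field names (e.g., determine_variables) are hypothetical assumptions and do not represent the claimed implementation.

```python
# Illustrative sketch only; hypothetical names, not the claimed implementation.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Variable:
    name: str      # e.g., "time_of_day", "location", "is_moving"
    value: object  # raw or lightly processed sensor reading

def determine_variables(gps_fix, accelerometer_g, now=None):
    """Derive environmental variables from raw sensor signals."""
    now = now or datetime.now()
    return [
        Variable("time_of_day", now.strftime("%H:%M")),
        Variable("day_of_week", now.strftime("%A")),
        Variable("location", gps_fix),                  # (latitude, longitude) tuple
        Variable("is_moving", accelerometer_g > 0.2),   # crude motion heuristic
    ]
```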


Respective of the variables, the CES 130 is configured to generate one or more insights related to a user of the user device 110. The insights may include, but are not limited to, classifications of variables as well as conclusions based on the variables. The insights may include environmental insights and/or personal insights. Environmental insights are generally generated based on variables over which users typically have limited or no control. Examples of such variables include, but are not limited to, a time of day, location, motion information, weather information, sounds, images, and more. Personal insights may include, but are not limited to, user profile information of the user, demographic information related to the user, actions executed by the user on the user device 110, and so on.


As a non-limiting example, an environmental insight related to a variable indicating a time of 1:00 P.M. on a Monday may classify the variable as a temporal variable and may include a conclusion that the user is at work. As another non-limiting example, an environmental insight related to a variable indicating a Bluetooth connection to a computing system in a car may classify the variable as a connection and include a conclusion that the user is driving or is about to drive.
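
As a further non-limiting illustration, a hypothetical "insighter" could be sketched as follows; the classification labels and conclusions loosely mirror the examples above, and all names are assumptions rather than the disclosed implementation.

```python
# Illustrative sketch only; reuses the Variable sketch above, all names hypothetical.
from dataclasses import dataclass

@dataclass
class Insight:
    classification: str  # e.g., "temporal", "connection"
    conclusion: str      # conclusion drawn from the variable

def temporal_insighter(variable):
    """Classify a time-of-day variable and conclude a likely user state."""
    hour = int(variable.value.split(":")[0])
    if 9 <= hour <= 17:
        return Insight("temporal", "user is likely at work")
    if hour >= 22 or hour < 6:
        return Insight("temporal", "user is likely at home")
    return Insight("temporal", "user is likely commuting or at leisure")

def connection_insighter(variable):
    """Classify a connection variable such as a Bluetooth link to a car."""
    if variable.value == "car_bluetooth":
        return Insight("connection", "user is driving or is about to drive")
    return Insight("connection", "no driving context detected")
```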


Based on the insights, the CES 130 is configured to determine a contextual scenario defining a context related to the user device 110. The determination of the contextual scenario is performed by the CES 130 using a contextual database 140 having stored therein a plurality of contexts associated with a plurality of respective insights. The context represents a current state of the user as demonstrated via the insights. In an embodiment, based on the contextual scenario, the CES 130 is configured to predict an intent of the user.


The intent may represent the type of content, the content, and/or actions that may be of an interest to the user at a current time period. For example, if the time is 8 A.M. and it is determined that the user device 110 is communicatively connected to a car via Bluetooth, the user intent may be related to “traffic update.” In an embodiment, the sensor signals may be further monitored to determine any changes that may in turn change the intent. In a further embodiment, the changed signals may be analyzed to determine a current (updated) intent of the user.
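
As a non-limiting sketch only, mapping a generated context to a predicted intent could look like the following; the context keys and intent labels are invented for illustration and are not part of the disclosure.

```python
# Illustrative sketch only; context keys and intent labels are invented examples.
CONTEXT_TO_INTENT = {
    "commuting_by_car_morning": "traffic update",
    "weekly_grocery_shopping":  "shopping list",
    "saturday_night_out":       "ride hailing",
}

def predict_intent(context, previous_intent=None):
    """Return the intent for the current context, keeping the prior intent
    when the context is unknown or has not changed meaningfully."""
    return CONTEXT_TO_INTENT.get(context, previous_intent)
```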


The CES 130 is further configured to perform a plurality of actions on the user device 110 via the agent 115 based on the contextual scenarios. The actions may include, but are not limited to, displaying and/or launching application programs, displaying and/or providing certain content from the web and/or from the user device 110, and so on.


In an embodiment, performing of an action may include selecting resources (not shown) that would serve the user intent. The resources may be, but are not limited to, web search engines, servers of content providers, vertical comparison engines, servers of content publishers, mobile applications, widgets installed on the user device 110, and so on. In certain configurations, the resources may include “cloud-based” applications, that is, applications executed by servers in a cloud-computing infrastructure, such as, but not limited to, a private cloud, a public cloud, or any combination thereof. Selecting resources based on user intent is described further in the above-referenced U.S. patent application Ser. No. 13/712,563, assigned to the common assignee, which is hereby incorporated by reference.


The actions to be performed may be determined so as to fulfill the determined user intent(s). For example, if the user intent is determined to be “soccer game update,” the actions performed may be to launch a sports news application program and/or to display a video clip showing highlights of the most recent soccer game. As another example, if the user intent is determined to be “weather forecast,” the actions performed may be to launch a weather application and/or to display an article from the Internet regarding significant meteorological events.
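
The translation of a determined intent into actions executed via the agent could be sketched, purely as a non-limiting illustration, as follows; the action names and the agent.perform interface are hypothetical assumptions.

```python
# Illustrative sketch only; action names and the agent interface are hypothetical.
INTENT_TO_ACTIONS = {
    "traffic update":     [("launch_app", "navigation"), ("show_content", "traffic_report")],
    "soccer game update": [("launch_app", "sports_news"), ("show_content", "match_highlights")],
    "weather forecast":   [("launch_app", "weather"), ("show_content", "forecast_article")],
}

def execute_intent(intent, agent):
    """Send each action associated with the determined intent to the on-device agent."""
    for action, target in INTENT_TO_ACTIONS.get(intent, []):
        agent.perform(action, target)  # the agent applies the action on the user device
```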


The CES 130 typically includes a processing unit (PU) 138 and a memory (mem) 139 containing instructions to be executed by the processing unit 138. The processing unit 138 may comprise, or be a component of, a larger processing unit implemented with one or more processors. The one or more processors may be implemented with any combination of general-purpose microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, dedicated hardware finite state machines, or any other suitable entities that can perform calculations or other manipulations of information.


The memory 139 may also include machine-readable media for storing software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the processing unit 138, cause the processing unit 138 to perform the various functions described herein.


In some implementations, the agent 115 can operate and be implemented as a stand-alone program or, alternatively, can communicate and be integrated with other programs or applications executed in the user device 110.



FIG. 2 depicts an exemplary and non-limiting flow diagram 200 illustrating an operation of the CES 130 based on sensor signals according to an embodiment. In an embodiment, the CES 130 includes a plurality of insighters 131-1 through 131-O, an insight aggregator 132, a scenario engine 133, a prediction engine 134, an application predictor 135, an action predictor 136, and a contact predictor 137.


The operation of the CES 130 starts when one or more of a plurality of sensors 111-1 through 111-N collects a plurality of signals 201-1 through 201-M (hereinafter referred to individually as a signal 201 and collectively as signals 201, merely for simplicity purposes). The signals 201 are received by the CES 130. Based on the collected signals 201, the plurality of insighters 131-1 through 131-O are configured to generate a plurality of insights 202-1 through 202-P (hereinafter referred to individually as an insight 202 or collectively as insights 202, merely for simplicity purposes). Each insight 202 relates to one of the collected signals 201.


The insight aggregator 132 is configured to differentiate between the plurality of insights 202 generated by the insighters 131. The differentiation may include, but is not limited to, distinguishing common behavior patterns from merely frequent uses, thereby increasing the efficiency of insight generation.


According to the disclosed embodiments, a common behavior pattern may be identified when, for example, a particular signal is received at approximately regular intervals. For example, a common behavior pattern may be identified when a GPS signal indicates that a user is at a particular location between 8 A.M. and 10 A.M. every business day (Monday through Friday). Such a GPS signal may not be identified as a common behavior pattern when the signal is determined at sporadic intervals. For example, a user occupying the same location on a Monday morning one week, a Friday afternoon the next week, and a Saturday evening on a third week may not be identified as a common behavior pattern. As another example, a common behavior pattern may be identified when an accelerometer signal indicates that a user is moving at 10 miles per hour every Saturday morning. As yet another example, a common behavior pattern may be identified when a user calls a particular contact in the evening on the first day of each month.
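
One possible, non-limiting heuristic for identifying such approximately regular intervals is sketched below; the tolerance threshold, minimum-occurrence count, and function name are assumptions rather than part of the disclosure.

```python
# Illustrative sketch only; the tolerance and minimum-occurrence thresholds are arbitrary.
from statistics import mean, pstdev

def is_common_pattern(timestamps, tolerance_hours=2.0, min_occurrences=3):
    """Return True if the timestamps (sorted datetimes) recur at roughly regular intervals."""
    if len(timestamps) < min_occurrences:
        return False
    gaps = [
        (later - earlier).total_seconds() / 3600.0
        for earlier, later in zip(timestamps, timestamps[1:])
    ]
    # A small spread around the mean gap suggests a regular (e.g., weekly) habit;
    # sporadic occurrences produce widely varying gaps and are rejected.
    return mean(gaps) > 0 and pstdev(gaps) <= tolerance_hours
```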


The differentiated insights 202 are sent to a contextual scenario engine (CSE) 133 and/or to a prediction engine 134. The CSE 133 is configured to generate one or more contextual scenarios 203 associated with the insights 202 using the contextual database 140 and to execute actions 204 on the user device 110 using the agent 115 respective thereof. Generating contextual scenarios and executing actions respective thereof are described further herein below with respect to FIG. 3.


The prediction engine 134 is configured to predict future behavior of the user device 110 based on the insights 202. Based on the predicted future behavior, the prediction engine 134 may be configured to generate a prediction model. The prediction model may be utilized to determine actions indicating user intents that may be taken by the user in response to particular contextual scenarios. Further, the prediction model may include a probability that a particular contextual scenario will result in a particular action. For example, if user interactions used to generate the prediction model indicate that a user launched an application for seeking drivers 3 out of the last 4 Saturday nights, an action of launching the driver application may be associated with a contextual scenario for Saturday nights and with a 75% probability that the user intends to launch the application on a given Saturday night.
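
As a non-limiting illustration, a simple prediction model of this kind could estimate the probability of an action given a contextual scenario from observed frequencies, as in the following sketch; the (scenario, action) pair layout is a hypothetical assumption.

```python
# Illustrative sketch only; the (scenario, action) pair layout is an assumption.
from collections import Counter

def build_prediction_model(interactions):
    """interactions: list of (scenario, action) pairs observed on the user device.
    Returns a mapping of (scenario, action) to the observed conditional probability."""
    scenario_counts = Counter(scenario for scenario, _ in interactions)
    pair_counts = Counter(interactions)
    return {
        (scenario, action): count / scenario_counts[scenario]
        for (scenario, action), count in pair_counts.items()
    }

# A driver-seeking app launched on 3 of the last 4 Saturday nights yields 0.75.
model = build_prediction_model(
    [("saturday_night", "driver_app")] * 3 + [("saturday_night", "no_action")]
)
print(model[("saturday_night", "driver_app")])  # 0.75
```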


The prediction engine 134 may include an application program predictor (app. predictor) 135 for identifying application programs that are likely to be launched on the user device. The prediction engine 134 may further include an actions predictor 136 for identifying actions that a user is likely to perform on the user device 110. The prediction engine 134 may further include a contact predictor 137 used to identify data related to persons that a user of the user device 110 is likely to contact.


The various elements discussed with reference to FIG. 2 may be realized in software, firmware, hardware or any combination thereof.


It should be noted that FIG. 2 is described with respect to received signals merely for simplicity purposes and without limitation on the disclosed embodiments. The insighters 131 may, alternatively or collectively, generate insights based on variables (e.g., environmental variables and/or personal variables) without departing from the scope of the disclosure.


It should be further noted that the embodiments described herein above with respect to FIGS. 1 and 2 are described with respect to a single user device 110 having one agent 115 merely for simplicity purposes and without limitation on the disclosed embodiments. Multiple user devices, each equipped with an agent, may be used and actions may be executed respective of contextual scenarios for the user of each user device without departing from the scope of the disclosure.



FIG. 3 depicts an exemplary and non-limiting flowchart 300 of a method for generating contextual scenarios and executing actions based on use of a user device (e.g., the user device 110) according to an embodiment.


In S310, environmental and/or personal variables are determined. In an embodiment, the variables may be determined based on sensor signals, retrieved from a database, and so on.


In S320, insights are generated respective of the variables. The insight generation may include, but is not limited to, classifying the variables, generating a conclusion based on the variables, and so on. Generating conclusions about the variables may further be based on past user interactions. Such generated conclusions may indicate, for example, whether a particular variable is related to a user behavior pattern.


In optional S330, a weighted factor is generated respective of each insight. Each weighted factor indicates the level of confidence in each insight, i.e., the likelihood that the insight is accurate for the user's current intent. The weighted factors may be adapted over time. To this end, the weighted factors may be based on, for example, previous user interactions with the user device. Specifically, the weighted factors may indicate a probability, based on previous user interactions, that a particular insight is associated with the determined variables.
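
A minimal, non-limiting sketch of such a weighted factor, derived from how often an insight was confirmed by previous user interactions, follows; the additive-smoothing rule and the history layout are assumptions rather than part of the disclosure.

```python
# Illustrative sketch only; the additive-smoothing rule is an assumption.
def weight_factor(insight_key, history, smoothing=1):
    """Return a confidence weight in (0, 1) for a given insight.

    history maps an insight key to (times_confirmed, times_observed) counts from
    previous user interactions, e.g., {"at_work_1pm_monday": (18, 20)} -> ~0.86.
    Unseen insights default to a neutral 0.5.
    """
    confirmed, observed = history.get(insight_key, (0, 0))
    return (confirmed + smoothing) / (observed + 2 * smoothing)
```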


In S340, a context is generated respective of the insights and their respective weighted factors. Generating contexts may be performed, for example, by matching the insights to insights stored in a contextual database (e.g., the contextual database 140). The generated context may be based on the contextual scenarios associated with each matching insight. In an embodiment, the matching may further include matching textual descriptions of the insights.
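
Purely as a non-limiting sketch, matching weighted insights against a contextual database to select a context might be implemented as follows; the scoring rule and data structures are assumptions.

```python
# Illustrative sketch only; the scoring rule and data structures are assumptions.
def generate_context(insights_with_weights, contextual_db):
    """Select the stored context whose associated insights best match the input.

    insights_with_weights: {insight_text: weight}
    contextual_db: {context_name: set of insight_texts associated with that context}
    """
    best_context, best_score = None, 0.0
    for context, stored_insights in contextual_db.items():
        score = sum(
            weight
            for insight, weight in insights_with_weights.items()
            if insight in stored_insights  # simple textual match of insights
        )
        if score > best_score:
            best_context, best_score = context, score
    return best_context
```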


In S350, based on the generated context, a user intent and one or more corresponding actions are determined. The user intent and actions may be determined based on a prediction model. In an embodiment, the prediction model may have been generated as described further herein above with respect to FIG. 2.


In S360, the determined actions are executed on the user device. It should be noted that the actions are performed without user intervention, i.e., the user may not be required to enter any queries, activate any function on the user device, and so on. In an embodiment, the actions are triggered by the agent independently or under the control of the CES 130.


In S370, it is checked whether additional variables have been determined and, if so, execution continues with S310; otherwise, execution terminates. The checks for additional variables may be performed, e.g., continuously, at regular intervals, and/or upon determination that one or more signals have changed.


In an embodiment, the method of FIG. 3 may be performed by a context enrichment server (e.g., the CES 130). In another embodiment, the method may be performed by an agent (e.g., the agent 115) operable in the user device.


As a non-limiting example, a GPS signal is used to determine environmental variables indicating that the user is at the address of a supermarket on a Saturday morning. Based on the variables, an insight is generated indicating that the variables are related to a location and including a conclusion that the variables are in accordance with the user's typical behavior pattern. The insight is further matched to insights stored in a contextual database to identify contextual scenarios associated therewith, and a context is generated based on the contextual scenarios. The context indicates that the user is performing his weekly grocery shopping. A prediction model is applied to the context to determine that there is a 90% probability that the user launches a grocery shopping application when at this location on a Saturday morning. Accordingly, the grocery shopping application is launched.


The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.


All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosed embodiment and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosed embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Claims
  • 1. A method for executing actions based on contextual scenarios related to a user device, comprising: determining at least one variable based on at least one signal captured by at least one sensor of the user device, wherein at least one of the at least one variable is a personal variable; generating at least one insight based on the at least one variable; generating a weight factor for at least one of the at least one insight, wherein each generated weight factor indicates a level of confidence in each insight; generating a context for the user device based on the at least one insight and the generated weight factors, wherein the context represents a current state of the user as demonstrated by the at least one insight; determining, based on the context, a user intent, wherein the user intent includes at least one action; and causing execution of the at least one action on the user device based on the determined context and the determined user intent.
  • 2. The method of claim 1, wherein at least one of the at least one variable is any of: an environmental variable.
  • 3. The method of claim 2, wherein the environmental variable is determined based on the at least one sensor signal collected by the at least one sensor of the user device.
  • 4. The method of claim 1, wherein generating the context further comprises: matching each insight to a contextual database insight; and determining, based on the matching, at least one contextual scenario, wherein each determined contextual scenario is associated with one of the matching contextual database insights.
  • 5. The method of claim 1, wherein generating the at least one insight further comprises: classifying the variables; and generating a conclusion based on the variables.
  • 6. The method of claim 1, further comprising: wherein at least one of the at least one action is launching an application program.
  • 7. The method of claim 1, wherein each of the at least one action is any of: displaying an application program, launching an application program, displaying content from the Internet, providing content from the Internet, displaying content from the user device, and providing content from the user device.
  • 8. The method of claim 1, further comprising: identifying at least one insight demonstrating a common behavior pattern, wherein the context is generated based on the at least one common behavior pattern insight.
  • 9. The method of claim 1, wherein the user intent is determined based on a prediction model.
  • 10. The method of claim 9, wherein the prediction model is generated based on at least one user interaction and at least one insight.
  • 11. A non-transitory computer readable medium having stored thereon instructions for causing one or more processing units to execute the method according to claim 1.
  • 12. A user device configured to execute actions based on contextual scenarios related to the user device, comprising: at least one sensor; a processing unit; and a memory, the memory containing instructions that, when executed by the processing unit, configure the user device to: determine at least one variable based on at least one signal captured by at least one sensor of the user device, wherein at least one of the at least one variable is a personal variable; generate at least one insight based on the at least one variable generated by the at least one sensor; generate a weight factor for at least one of the at least one insight, wherein each generated weight factor indicates a level of confidence in each insight; generate a context for the user device based on the at least one insight and the generated weight factors, wherein the context represents a current state of the user as demonstrated by the at least one insight; determine, based on the context, a user intent, wherein the user intent includes at least one action; and execute the at least one action on the user device, wherein the at least one action is selected based on the determined context and the determined user intent.
  • 13. A system for executing actions based on contextual scenarios related to a user device, comprising: a processing unit; and a memory, the memory containing instructions that, when executed by the processing unit, configure the system to: determine at least one variable based in part on at least one signal captured by at least one sensor of the user device, wherein at least one of the at least one variable is a personal variable; generate at least one insight based on the at least one variable; generate a weight factor for at least one of the at least one insight, wherein each generated weight factor indicates a level of confidence in each insight; generate a context for the user device based on the at least one insight and the generated weight factors, wherein the context represents a current state of the user as demonstrated by the at least one insight; determine, based on the context, a user intent, wherein the user intent includes at least one action; and cause execution of the at least one action on the user device, wherein the at least one action is selected based on the determined context and the determined user intent.
  • 14. The system of claim 13, wherein at least one of the at least one variable is an environmental variable.
  • 15. The system of claim 14, wherein the environmental variable is determined based on the at least one sensor signal collected by the at least one sensor of the user device.
  • 16. The system of claim 13, wherein the system is further configured to: match each insight to a contextual database insight; and determine, based on the matching, at least one contextual scenario, wherein each determined contextual scenario is associated with one of the matching contextual database insights.
  • 17. The system of claim 13, wherein the system is further configured to: classify the variables; and generate a conclusion based on the variables.
  • 18. The system of claim 13, wherein the system is further configured to: wherein at least one of the at least one action is launching an application program.
  • 19. The system of claim 13, wherein each of the at least one action is any of: displaying an application program, launching an application program, displaying content from the Internet, providing content from the Internet, displaying content from the user device, and providing content from the user device.
  • 20. The system of claim 13, wherein the system is further configured to: identify at least one insight demonstrating a common behavior pattern, wherein the context is generated based on the at least one common behavior pattern insight.
  • 21. The system of claim 13, wherein the user intent is determined based on a prediction model.
  • 22. The system of claim 21, wherein the prediction model is generated based on at least one user interaction and at least one insight.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 62/086,728 filed on Dec. 3, 2014. This application is also a continuation-in-part of U.S. patent application Ser. No. 14/850,200 filed on Sep. 10, 2015 which is a continuation of U.S. patent application Ser. No. 13/712,563 filed on Dec. 12, 2012, now U.S. Pat. No. 9,141,702. The Ser. No. 13/712,563 application is a continuation-in-part of: (a) U.S. patent application Ser. No. 13/156,999 filed on Jun. 9, 2011, which claims the benefit of U.S. provisional patent application No. 61/468,095 filed Mar. 28, 2011 and U.S. provisional application No. 61/354,022, filed Jun. 11, 2010; and (b) U.S. patent application Ser. No. 13/296,619 filed on Nov. 15, 2011, now pending. This application is also a continuation-in-part of U.S. patent application Ser. No. 14/583,310 filed on Dec. 26, 2014, now pending. The Ser. No. 14/583,310 application claims the benefit of U.S. Provisional Patent Application No. 61/920,784 filed on Dec. 26, 2013. The Ser. No. 14/583,310 application is also a continuation-in-part of the Ser. No. 13/712,563 application. The contents of the above-referenced applications are hereby incorporated by reference.

US Referenced Citations (177)
Number Name Date Kind
5911043 Duffy et al. Jun 1999 A
5924090 Krellenstein Jul 1999 A
6101529 Chrabaszcz Aug 2000 A
6484162 Edlund et al. Nov 2002 B1
6546388 Edlund et al. Apr 2003 B1
6560590 Shwe et al. May 2003 B1
6564213 Ortega et al. May 2003 B1
6605121 Roderick Aug 2003 B1
6668177 Salmimaa et al. Dec 2003 B2
7181438 Szabo Feb 2007 B1
7266588 Oku Sep 2007 B2
7302272 Ackley Nov 2007 B2
7359893 Sadri et al. Apr 2008 B2
7376594 Nigrin May 2008 B2
7461061 Aravamudan et al. Dec 2008 B2
7529741 Aravamudan et al. May 2009 B2
7533084 Holloway et al. May 2009 B2
7565383 Gebhart et al. Jul 2009 B2
7594121 Eytchison et al. Sep 2009 B2
7599925 Larson et al. Oct 2009 B2
7636900 Xia Dec 2009 B2
7707142 Ionescu Apr 2010 B1
7721192 Milic-Frayling et al. May 2010 B2
7774003 Ortega et al. Aug 2010 B1
7783419 Taniguchi et al. Aug 2010 B2
7792815 Aravamudan et al. Sep 2010 B2
7797298 Sareen et al. Sep 2010 B2
7899829 Malla Mar 2011 B1
7958141 Sundaresan et al. Jun 2011 B2
7966321 Wolosin et al. Jun 2011 B2
7974976 Yahia et al. Jul 2011 B2
8032666 Srinivansan et al. Oct 2011 B2
8073860 Venkataraman et al. Dec 2011 B2
8086604 Arrouye et al. Dec 2011 B2
8271333 Grigsby et al. Sep 2012 B1
8312484 McCarty et al. Nov 2012 B1
8392449 Pelenur et al. Mar 2013 B2
8571538 Sprigg et al. Oct 2013 B2
8572129 Lee et al. Oct 2013 B1
8606725 Agichtein et al. Dec 2013 B1
8626589 Sengupta et al. Jan 2014 B2
8700804 Meyers et al. Apr 2014 B1
8718633 Sprigg et al. May 2014 B2
8793265 Song et al. Jul 2014 B2
8799273 Chang et al. Aug 2014 B1
8825597 Houston et al. Sep 2014 B1
8843853 Smoak et al. Sep 2014 B1
20030018778 Martin et al. Jan 2003 A1
20030182394 Ryngler Sep 2003 A1
20040186989 Clapper Sep 2004 A1
20040229601 Zabawskyj et al. Nov 2004 A1
20050060297 Najork Mar 2005 A1
20050071328 Lawrence Mar 2005 A1
20050076367 Johnson et al. Apr 2005 A1
20050102407 Clapper May 2005 A1
20050108406 Lee et al. May 2005 A1
20050138043 Lokken Jun 2005 A1
20050149496 Mukherjee et al. Jul 2005 A1
20050232423 Horvitz et al. Oct 2005 A1
20050243019 Fuller et al. Nov 2005 A1
20050283468 Kamvar et al. Dec 2005 A1
20060004675 Bennett et al. Jan 2006 A1
20060031529 Keith Feb 2006 A1
20060064411 Gross et al. Mar 2006 A1
20060085408 Morsa Apr 2006 A1
20060089945 Paval Apr 2006 A1
20060095389 Hirota et al. May 2006 A1
20060112081 Qureshi May 2006 A1
20060129931 Simons et al. Jun 2006 A1
20060136403 Koo Jun 2006 A1
20060167896 Kapur et al. Jul 2006 A1
20060190439 Chowdhury et al. Aug 2006 A1
20060200761 Judd et al. Sep 2006 A1
20060206803 Smith Sep 2006 A1
20060217953 Parikh Sep 2006 A1
20060224448 Herf Oct 2006 A1
20060224593 Benton et al. Oct 2006 A1
20060248062 Libes et al. Nov 2006 A1
20060265394 Raman et al. Nov 2006 A1
20060271520 Ragan Nov 2006 A1
20060277167 Gross et al. Dec 2006 A1
20070011167 Krishnaprasad et al. Jan 2007 A1
20070055652 Hood et al. Mar 2007 A1
20070082707 Flynt et al. Apr 2007 A1
20070112739 Burns et al. May 2007 A1
20070136244 MacLaurin et al. Jun 2007 A1
20070174900 Marueli et al. Jul 2007 A1
20070195105 Koberg Aug 2007 A1
20070204039 Inamdar Aug 2007 A1
20070226242 Wang et al. Sep 2007 A1
20070239724 Ramer et al. Oct 2007 A1
20070255831 Hayashi et al. Nov 2007 A1
20070300185 Macbeth et al. Dec 2007 A1
20080065685 Frank Mar 2008 A1
20080077883 Kim et al. Mar 2008 A1
20080082464 Ozzie et al. Apr 2008 A1
20080104195 Hawkins et al. May 2008 A1
20080114759 Yahia et al. May 2008 A1
20080133605 MacVarish Jun 2008 A1
20080172362 Shacham et al. Jul 2008 A1
20080172374 Wolosin et al. Jul 2008 A1
20080222140 Lagad et al. Sep 2008 A1
20080256443 Li et al. Oct 2008 A1
20080306913 Newman et al. Dec 2008 A1
20080306937 Whilte et al. Dec 2008 A1
20080307343 Robert et al. Dec 2008 A1
20090013052 Robarts et al. Jan 2009 A1
20090013285 Blyth et al. Jan 2009 A1
20090031236 Robertson et al. Jan 2009 A1
20090049052 Sharma et al. Feb 2009 A1
20090063491 Barclay et al. Mar 2009 A1
20090070318 Song et al. Mar 2009 A1
20090077034 Kim et al. Mar 2009 A1
20090077047 Cooper et al. Mar 2009 A1
20090094213 Wang Apr 2009 A1
20090125374 Deaton et al. May 2009 A1
20090125482 Peregrine et al. May 2009 A1
20090150792 Laakso et al. Jun 2009 A1
20090210403 Reinshmidt et al. Aug 2009 A1
20090228439 Manolescu et al. Sep 2009 A1
20090234811 Jamil et al. Sep 2009 A1
20090234814 Boerries et al. Sep 2009 A1
20090239587 Negron et al. Sep 2009 A1
20090240680 Tankovich et al. Sep 2009 A1
20090265328 Parekh et al. Oct 2009 A1
20090277322 Cai et al. Nov 2009 A1
20090327261 Hawkins Dec 2009 A1
20100030753 Nad et al. Feb 2010 A1
20100042912 Whitaker Feb 2010 A1
20100082661 Beaudreau Apr 2010 A1
20100094854 Rouhani-Kalleh Apr 2010 A1
20100106706 Rorex et al. Apr 2010 A1
20100162183 Crolley Jun 2010 A1
20100184422 Ahrens Jul 2010 A1
20100228715 Lawrence Sep 2010 A1
20100257552 Sharan et al. Oct 2010 A1
20100262597 Han Oct 2010 A1
20100268673 Quadracci Oct 2010 A1
20100274775 Fontes et al. Oct 2010 A1
20100280983 Cho Nov 2010 A1
20100299325 Tzvi et al. Nov 2010 A1
20100306191 Lebeau et al. Dec 2010 A1
20100312782 Li et al. Dec 2010 A1
20100332958 Weinberger et al. Dec 2010 A1
20110029541 Schulman Feb 2011 A1
20110029925 Robert et al. Feb 2011 A1
20110035699 Robert et al. Feb 2011 A1
20110041094 Robert et al. Feb 2011 A1
20110047145 Ershov Feb 2011 A1
20110047510 Yoon Feb 2011 A1
20110055759 Robert et al. Mar 2011 A1
20110058046 Yoshida et al. Mar 2011 A1
20110072492 Mohler et al. Mar 2011 A1
20110078767 Cai et al. Mar 2011 A1
20110093488 Amacker et al. Apr 2011 A1
20110131205 Iyer et al. Jun 2011 A1
20110225145 Greene et al. Sep 2011 A1
20110252329 Broman Oct 2011 A1
20110264656 Dumais et al. Oct 2011 A1
20110295700 Gilbane et al. Dec 2011 A1
20110314419 Dunn et al. Dec 2011 A1
20120158685 White et al. Jun 2012 A1
20120166411 MacLaurin et al. Jun 2012 A1
20120198347 Hirvonen et al. Aug 2012 A1
20120284247 Jiang et al. Nov 2012 A1
20120284256 Mahajan et al. Nov 2012 A1
20130132896 Lee et al. May 2013 A1
20130166525 Naranjo et al. Jun 2013 A1
20130173513 Chu et al. Jul 2013 A1
20130219319 Park et al. Aug 2013 A1
20130290319 Glover et al. Oct 2013 A1
20130290321 Shapira et al. Oct 2013 A1
20140007057 Gill et al. Jan 2014 A1
20140025502 Ramer et al. Jan 2014 A1
20140049651 Voth Feb 2014 A1
20140279013 Chelly et al. Sep 2014 A1
20150032714 Simhon et al. Jan 2015 A1
Foreign Referenced Citations (12)
Number Date Country
2288113 Feb 2011 EP
2009278342 Nov 2009 JP
20090285550 Nov 2009 JP
2011044147 Mar 2011 JP
20030069127 Aug 2003 KR
20070014595 Feb 2007 KR
20110009955 Jan 2011 KR
2007047464 Apr 2007 WO
2009117582 Sep 2009 WO
2010014954 Feb 2010 WO
2011016665 Feb 2011 WO
2012083540 Jun 2012 WO
Non-Patent Literature Citations (18)
Entry
Alice Corp V. CLS Bank International, 573 US___ 134 S. Ct. 2347 (2014).
“Categories App Helps Organize iPhone Apps on your iPhone's Home Screen,” iPhoneHacks, url: http://www.phonehacks.com/2008/10/categoriesapp.html, pp. 1-4, date of publication: Oct. 5, 2008.
“iOS 4.2 for iPad New Features: Folders,” Purcell, url: http://notebooks.com/2010/11/22/ios-4-2-foripad-new-features-folders/, pp. 1-5, date of publication Nov. 22, 2010.
Foreign Office Action for Patent Application No. 201380000403.X dated Jun. 2, 2017 by the State Intellectual Property Office of the P.R.C.
Chinese Foreign Action dated Mar. 13, 2017 from the State Intellectual Property of the P.R.C. for Chinese Patent Application: 201280004301.0, China.
The Second Office Action for Chinese Patent Application No. 201280004301.0 dated Jan. 19, 2018, SIPO.
Second Office Action for Chinese Patent Application No. 201280004300.6 dated Aug. 23, 2017, SIPO.
Foreign Office Action for JP2015-537680 dated Dec. 6, 2016 from the Japanese Patent Office.
Kurihara, et al., “How to Solve Beginner's Problem, Mac Fan Supports” Mac Fan, Mainichi Communications Cooperation, Dec. 1, 2009, vol. 17, 12th issue, p. 92.
Notice of the First Office Action for Chinese Patent Application No. 201280004300.6, State Intellectual Property Office of the P.R.C., dated Oct. 26, 2016.
Chinese Foreign Action dated Sep. 3, 2018 from the State Intellectual Property of the P.R.C. for Chinese Patent Application: 201280004301.0, China.
Currie, Brenton, Apple adds search filters, previous purchases to iPad App Store, Neowin.net, Feb. 5, 2011, http://www.neowin.net/news/apple-adds-search-filters-previous-purchases-to-ipad-app-store.
International Search Authority: “Written Opinion of the International Searching Authority” (PCT Rule 43bis.1) including International Search Report for International Patent Application No. PCT/US2012/059548, dated Mar. 26, 2013.
International Search Authority: “Written Opinion of the International Searching Authority” (PCT Rule 43bis.1) including International Search Report for corresponding International Patent Application No. PCT/US2012/069250, dated Mar. 29, 2013.
International Searching Authority: International Search Report including “Written Opinion of the International Searching Authority” (PCT Rule 43bis.1) for the related International Patent Application No. PCT/US2011/039808, dated Feb. 9, 2012.
Nie et al., “Object-Level Ranking: Bringing Order to Web Objects”, International World Wide Web Conference 2005; May 10-14, 2005; Chiba, Japan.
Qin et al., “Learning to Rank Relationship Objects and Its Application to Web Search”, International World Wide Web Conference 2008 / Refereed Track: Search—Ranking & Retrieval Enhancement; Apr. 21-25, 2008; Beijing, China.
Kurihara, et al., “How to Solve Beginner's Problem, Mac Fan Supports” Mac Fan, Mainichi Communications Cooperation, Dec. 1, 2009, vol. 17, 12th issue, p. 92, Translated.
Related Publications (1)
Number Date Country
20160077715 A1 Mar 2016 US
Provisional Applications (4)
Number Date Country
62086728 Dec 2014 US
61920784 Dec 2013 US
61468095 Mar 2011 US
61354022 Jun 2010 US
Continuations (1)
Number Date Country
Parent 13712563 Dec 2012 US
Child 14850200 US
Continuation in Parts (5)
Number Date Country
Parent 14850200 Sep 2015 US
Child 14955831 US
Parent 14583310 Dec 2014 US
Child 14850200 US
Parent 13296619 Nov 2011 US
Child 13712563 US
Parent 13156999 Jun 2011 US
Child 13296619 US
Parent 13712563 Dec 2012 US
Child 14583310 US