This disclosure relates to user interfaces, and in particular a user interface that is adaptive based on the context of the environment.
The Internet of Things (IoT) allows for the internetworking of devices to exchange data among themselves to enable sophisticated functionality. For example, devices configured for home automation can exchange data to allow for the control and automation of lighting, air conditioning systems, security, etc. In the smart home environment, this can also include home assistant devices providing an intelligent personal assistant to respond to speech. For example, a home assistant device can include a microphone array to receive voice input and provide the corresponding voice data to a server for analysis to provide an answer to a question asked by a user. The server can provide that answer to the home assistant device, which can provide the answer as voice output using a speaker. As another example, the user can provide a voice command to the home assistant device to control another device in the home, for example, a command to turn a light bulb on or off. As such, the user and the home assistant device can interact with each other using voice, and the interaction can be supplemented by a server outside of the home providing the answers. However, homes can have different users interacting with the home assistant device within different contextual environments (e.g., from different locations and at different times) within the home.
Some of the subject matter described herein includes a method for providing a graphical user interface (GUI) on a touchscreen of a home assistant device with artificial intelligence (AI) capabilities, the GUI providing content related to playback of media content within an environment of the home assistant device. The method can include identifying, by a processor of the home assistant device, infrared (IR) signals generated by a remote control for selecting channels for playback on a television. Based on the identification of the IR signals, it can be determined that the playback of media content on the television is switching between different channels of the television. The different channels can include a first channel, a second channel, and a third channel. Image data depicting a user watching the playback of media content on the television can be received. It can then be determined that the user is depicted in the image data as watching the playback of media content on the television while the first channel and the second channel are selected for playback, and that the user is depicted as not watching the playback of media content on the television while the third channel is selected for playback. Watched channel information can then be provided to a server, the watched channel information representing that the first channel and the second channel are of interest to the user based on the depiction of the user in the image data. The processor can then receive similar channel information from the server, the similar channel information indicating a fourth channel providing playback of media content similar to one or both of the first channel or the second channel. The first channel can be determined to be currently selected to provide playback of media content on the television. A first hot button can also be generated for display on the GUI of the touchscreen of the home assistant device based on the determination that the first channel is currently selected to provide playback of media content, the first hot button configured to instruct the television to switch playback to the second channel. Based on the determination that the first channel is currently selected to provide playback of media content, a second hot button can be generated for display on the GUI of the touchscreen of the home assistant device, the second hot button configured to instruct the television to switch playback to the fourth channel, the first hot button and second hot button displayed on the GUI at a same time, and the first hot button being larger in size than the second hot button. It can then be determined that the first hot button or the second hot button was selected via a touch on the touchscreen of the home assistant device. This results in transmitting an IR signal instructing the television to switch playback from the first channel to one of the second channel or the fourth channel based on the selection of the first hot button or the second hot button.
Some of the subject matter described herein also includes a method, including: determining, by a processor of an assistant device, that playback of media content on a display device is switching among different sources including a first source and a second source; receiving image content depicting a user watching the display device as it provides the playback of media content; determining, by the processor, that the user is depicted in the image content as watching the first source for a threshold time period, and that the user is depicted as not watching the second source for the threshold time period; providing watched channel information to a server, the watched channel information representing that the user is more interested in playback of the first source than playback of the second source based on the first source being watched for the threshold time period; receiving similar source information from the server, the similar source information representing a third source providing playback of media content that is similar to playback of media content of the first source; generating, by the processor, a first selectable item on a graphical user interface (GUI) of the assistant device, the selectable item indicating the third source; and instructing the display device to switch playback from the first source or the second source to the third source based on a selection of the first selectable item.
In some implementations, the method includes receiving audio content corresponding to speech of the user as the user watches the playback of media content on the display device, wherein the watched channel information is further based on the speech of the user.
In some implementations, the third source providing playback of media content that is similar to playback of media content of the first source includes identifying similar characteristics between media content being played back via the third source and media content being played back via the first source.
In some implementations, the different sources correspond to different channels of a television, each of the channels providing playback of different media content.
In some implementations, determining that playback of media content is switching among the different sources includes identifying infrared (IR) signals generated by a remote control.
In some implementations, the method includes determining that the user is depicted in the image content as watching a fourth source within the threshold time period; and generating a second selectable item on the GUI, the second selectable item indicating the fourth source and configured to switch playback to the fourth source upon selection.
In some implementations, a size of the second selectable item is larger than a size of the first selectable item.
In some implementations, instructing the display device to switch playback from the first source or the second source to the third source based on the selection of the first selectable item includes transmitting an infrared (IR) signal instructing the display device to switch playback to the third source.
Some of the subject matter described herein also includes a computer program product, comprising one or more non-transitory computer-readable media having computer program instructions stored therein, the computer program instructions being configured such that, when executed by one or more computing devices, the computer program instructions cause the one or more computing devices to: determine that playback of media content on a display device is switching among different sources including a first source and a second source; receive image content depicting a user watching the display device as it provides the playback of media content; determine that the user is depicted in the image content as watching the first source for a threshold time period, and that the user is depicted as not watching the second source for the threshold time period; provide watched channel information to a server, the watched channel information representing that the user is more interested in playback of the first source than playback of the second source based on the first source being watched for the threshold time period; receive similar source information from the server, the similar source information representing a third source providing playback of media content that is similar to playback of media content of the first source; generate a first selectable item on a graphical user interface (GUI), the selectable item indicating the third source; and instruct the display device to switch playback from the first source or the second source to the third source based on a selection of the first selectable item.
In some implementations, the computer program instructions cause the one or more computing devices to: receive audio content corresponding to speech of the user as the user watches the playback of media content on the display device, wherein the watched channel information is further based on the speech of the user.
In some implementations, the third source providing playback of media content that is similar to playback of media content of the first source includes identifying similar characteristics between media content being played back via the third source and media content being played back via the first source.
In some implementations, the different sources correspond to different channels of a television, each of the channels providing playback of different media content.
In some implementations, determining that playback of media content is switching among the different sources includes identifying infrared (IR) signals generated by a remote control.
In some implementations, the computer program instructions cause the one or more computing devices to: determine that the user is depicted in the image content as watching a fourth source within the threshold time period; and generate a second selectable item on the GUI, the second selectable item indicating the fourth source and configured to switch playback to the fourth source upon selection.
In some implementations, a size of the second selectable item is larger than a size of the first selectable item.
In some implementations, instructing the display device to switch playback from the first source or the second source to the third source based on the selection of the first selectable item includes transmitting an infrared (IR) signal instructing the display device to switch playback to the third source.
Some of the subject matter described herein also includes an electronic device, comprising: a display screen; one or more processors; and memory storing instructions, wherein the one or more processors are configured to execute the instructions such that the one or more processors and memory are configured to: determine that playback of media content on a display screen of a display device is alternating among different sources including a first source and a second source; receive image content depicting a user watching the display device as it provides the playback of media content; determine that the user is depicted in the image content as watching the first source for a threshold time period, and that the user is depicted as not watching the second source for the threshold time period; provide watched channel information to a server, the watched channel information representing that the user is more interested in playback of the first source than playback of the second source based on the first source being watched for the threshold time period; receive similar source information from the server, the similar source information representing a third source providing playback of media content that is similar to playback of media content of the first source; generate a first selectable item on a graphical user interface (GUI) on the display screen of the display device, the selectable item indicating the third source; and instruct the display device to switch playback from the first source or the second source to the third source based on a selection of the first selectable item.
In some implementations, the one or more processors are configured to execute the instructions such that the one or more processors and memory are configured to: receive audio content corresponding to speech of the user as the user watches the playback of media content on the display device, wherein the watched channel information is further based on the speech of the user.
In some implementations, the third source providing playback of media content that is similar to playback of media content of the first source includes identifying similar characteristics between media content being played back via the third source and media content being played back via the first source.
In some implementations, the different sources correspond to different channels of a television, each of the channels providing playback of different media content.
In some implementations, determining that playback of media content is switching among the different sources includes identifying infrared (IR) signals generated by a remote control.
In some implementations, the one or more processors are configured to execute the instructions such that the one or more processors and memory are configured to: determine that the user is depicted in the image content as watching a fourth source within the threshold time period; and generate a second selectable item on the GUI, the second selectable item indicating the fourth source and configured to switch playback to the fourth source upon selection.
In some implementations, a size of the second selectable item is larger than a size of the first selectable item.
In some implementations, instructing the display device to switch playback from the first source or the second source to the third source based on the selection of the first selectable item includes transmitting an infrared (IR) signal instructing the display device to switch playback to the third source.
This disclosure describes devices and techniques for providing a user interface for a home assistant device based on the context, or characteristics, of its surrounding environment. In one example, the user interface of the home assistant device (e.g., a graphical user interface (GUI) generated for display on a display screen of the home assistant device) can be different based on a combination of contextual factors of the surrounding environment including the person interacting with the home assistant device, the people in the surrounding environment, the time, the location of the home assistant device within the home, the location of the person interacting with the home assistant device, the presence of strangers, interests of the users, etc. As a result, based on the contextual factors, different content (e.g., information, graphical icons providing access to functionality of the home assistant device, etc.) can be displayed by the home assistant device.
Additionally, the same content can be displayed differently. For example, different languages, visual effects, etc. can be provided based on the context of the environment. In another example, two different users (or even the same user at different times) might ask the same question to the home assistant device. Based on differences within the context of the environment when the question is asked, the user interface can provide the same answers to the question differently.
This disclosure also describes devices and techniques for providing a user interface based on playback of media content within the environment. In one example, the home assistant device can determine that a user is switching among different television channels to watch different media content. For example, the user might switch between two different channels of the television using a remote control, each of the different channels providing playback of different media content based on the user's cable television package. When the user switches the channels, the remote control can generate and transmit infrared (IR) light (e.g., using a light-emitting diode (LED)) that can be received by the television (e.g., using a photodiode) to cause it to change channels. The home assistant device can also detect the transmission of IR light as signals (based on pulses of the IR light) indicating the channel that the television is to switch to. Thus, the home assistant device can determine that the user is toggling between the two channels, and provide information regarding the channels that are being watched (i.e., toggled among) to a server. That server can then determine information regarding the channels that are being watched (e.g., if one or both of the two channels is playing back a basketball game, then the teams that are playing, the current score, or other information regarding the media content being played back) and provide that information to the home assistant device. Additionally, information regarding other channels that might be of interest to the user (e.g., other channels that are playing back similar or related content such as another basketball game) can be provided as well.
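As a concrete illustration of how IR pulses could be turned into channel selections, the following is a minimal sketch in Python, assuming a simplified pulse-distance encoding in which each captured button press is a list of (mark, space) durations in microseconds; the timing constants, the tolerance, and the key-code interpretation are illustrative assumptions and not the protocol of any particular remote control.

```python
# Sketch: decode a captured IR pulse train into the key code the remote sent.
# Assumes a simplified pulse-distance code: a fixed header, then bits where a
# short space means 0 and a long space means 1. All constants are illustrative.

HEADER_US = (9000, 4500)   # (mark, space) of the assumed header
SHORT_US = 560             # mark length, and space length for a logical 0
LONG_SPACE_US = 1690       # space length for a logical 1
TOLERANCE = 0.25           # +/-25% timing tolerance

def _close(measured: int, expected: int) -> bool:
    return abs(measured - expected) <= expected * TOLERANCE

def decode_bits(pulses: list[tuple[int, int]]) -> list[int]:
    """Turn (mark, space) pairs into a bit list, skipping the header."""
    if not pulses or not (_close(pulses[0][0], HEADER_US[0]) and _close(pulses[0][1], HEADER_US[1])):
        raise ValueError("no recognizable header in the captured pulses")
    bits = []
    for mark, space in pulses[1:]:
        if not _close(mark, SHORT_US):
            break                      # unexpected mark length ends the frame
        if _close(space, SHORT_US):
            bits.append(0)
        elif _close(space, LONG_SPACE_US):
            bits.append(1)
        else:
            break                      # trailing gap or noise ends the frame
    return bits

def bits_to_keycode(bits: list[int]) -> int:
    """Interpret the first 8 bits as an integer key code."""
    code = 0
    for b in bits[:8]:
        code = (code << 1) | b
    return code

# Example capture: header followed by eight bits encoding key code 66; a
# device-specific table would then map the key code to a digit or channel command.
pulses = [(9000, 4500), (560, 560), (560, 1690), (560, 560), (560, 560),
          (560, 560), (560, 560), (560, 1690), (560, 560)]
print(bits_to_keycode(decode_bits(pulses)))   # -> 66
```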
The home assistant device can then generate “hot buttons” on a GUI providing information regarding the media content played back on channels that are currently not being watched and allowing the user to quickly change the television to that channel. For example, one of the hot buttons can display the score, team names, and time left for a basketball game that is playing back on a channel that the user was previously watching. Thus, the user can be provided information related to the other channels. If the user wants to quickly switch to a channel due to the information provided on the hot button, then the user can quickly and easily select the button (e.g., touch the button on a touchscreen display of the home assistant device), and the home assistant device can transmit the IR signals to the television to emulate the remote control such that the channel can be changed.
In more detail, home assistant device 110 can include a display screen, a microphone, and a speaker, and can generate a user interface based on the context of its surrounding environment. For example, user 130a might interact with home assistant device 110 from one room of the home in the evening, and home assistant device 110 can generate a user interface 120a providing a GUI with content selected for that user, time, and location.
By contrast, user 130b can be in the living room of the same home as home assistant device 110 at 8:30 AM. Because the user, time, and location of the user are different, home assistant device 110 can generate a different user interface 120b providing a different GUI having different content.
In another example, because user interface 120a was generated in the evening, it can have different colors, brightness, or other visual characteristics than user interface 120b. This might be done because the user interface should not be too disruptive in different lighting situations. For example, a light sensor (e.g., a photodiode) can be used to determine that a room is dark. Home assistant device 110 can then adjust the brightness of the display screen based on the determined lighting situation in the environment.
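The brightness adjustment described above can be reduced to a simple mapping from an ambient-light reading to a display brightness level. The sketch below assumes a lux reading from a photodiode and a 0.0 to 1.0 brightness scale; the thresholds are illustrative.

```python
# Sketch: pick a display brightness from an ambient light reading (in lux).
# The thresholds and the 0.0-1.0 brightness scale are illustrative assumptions.

def brightness_for_ambient_light(lux: float) -> float:
    """Return a brightness level between 0.1 (dim room) and 1.0 (bright room)."""
    if lux < 10:        # dark room, e.g., evening with the lights off
        return 0.1
    if lux < 200:       # typical indoor lighting
        return 0.5
    return 1.0          # bright daylight

# Example: a photodiode reading of 5 lux in the evening yields a dim screen.
assert brightness_for_ambient_light(5.0) == 0.1
```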
Additionally, because users 130a and 130b are in different rooms and, therefore, at different distances from home assistant device 110, the user interfaces 120a and 120b can be different to take that into account. For example, content can be displayed at a larger size for the user who is farther away from home assistant device 110 so that it remains easy to see from that distance.
User interfaces 120a and 120b can also be different in other ways. For example, the location of content, the amount of content, and other characteristics of the layout can differ between the two user interfaces.
Next, home assistant device 110 can pick up more speech at a different time. However, if the context of the environment is different, then a different user interface than that generated at block 210 can be generated. Thus, even if the content of the speech at the two different times was the same, the user interfaces generated can be different if the context of the environment was different.
Other details can include the skill level of the user at block 320. For example, home assistant device 110 can determine the skill level of a user as they interact more with the user interface. If the user uses more functionality, more complicated functionality, or requests a significant amount of detail regarding functionality, then the user can be identified by home assistant device 110 as a more sophisticated user. By contrast, if another user tends to ask the same repetitive tasks or questions of home assistant device 110, then that user can be identified as a less sophisticated user. If the user tends to use less complicated functionality, less functionality, or does not request significant detail, then the user can also be identified as a less sophisticated user.
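One crude way to realize the skill-level determination above is to score a few interaction statistics and bucket the result; the feature names and thresholds in this sketch are illustrative assumptions.

```python
# Sketch: classify a user's skill level from simple interaction statistics.
# The feature names, weights, and threshold are illustrative assumptions.

def skill_level(distinct_features_used: int,
                detail_requests: int,
                repeated_simple_requests: int) -> str:
    score = distinct_features_used + 2 * detail_requests - repeated_simple_requests
    return "sophisticated" if score >= 10 else "less sophisticated"

print(skill_level(distinct_features_used=12, detail_requests=3, repeated_simple_requests=2))
# -> "sophisticated"
```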
As previously discussed, the user interface can include visual components other than content displayed as part of a GUI on a display screen. In one example, home assistant device 110 can include a ring of lights that can be illuminated in different colors to convey information at a glance.
In one example, the ring can be a color corresponding to the traffic or commute status for the user to go to their next expected location, such as the workplace in the morning or a coffee meeting scheduled on their calendar. If the ring is set to a green color, then this can indicate to the user that the traffic is relatively light. By contrast, a red color can indicate that the traffic is relatively heavy. This type of user interface can provide a user with information while they are far away from home assistant device 110 because the colors can be easily seen from a distance. In some implementations, the ring can also indicate whether the user needs to leave soon or immediately if they want to make the next appointment on their schedule. For example, the intensity or brightness of the color can be increased, the ring can be blinking, etc. This can provide further detail from a distance for a user. In some implementations, the user interface can also display on the display screen a route to the location of the next event on their schedule, provide a time estimate, etc. As a result, if the user decides that they want more detail and walks closer to home assistant device 110, information can be readily displayed and available. In some implementations, home assistant device 110 can determine that the user is walking closer after the ring has been activated and then process information and display the additional information on the display screen so that the information is available when they are closer. In some implementations, the color of the ring can indicate other determinations, for example, an unexpected situation such as a window or door being open, water flooding being detected, or the temperature being within a range corresponding to an anomaly.
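A minimal sketch of the commute-status ring follows, assuming an estimated traffic delay and the minutes remaining before the user must leave are already known; the thresholds and the (color, blink) representation are illustrative.

```python
# Sketch: choose the ring color and blink behavior from commute conditions.
# Thresholds and the (color, blink) return format are illustrative assumptions.

def ring_state(traffic_delay_min: float, minutes_until_departure: float) -> tuple[str, bool]:
    """Return (color, blink) for the light ring."""
    color = "green" if traffic_delay_min < 10 else "red"   # light vs. heavy traffic
    blink = minutes_until_departure <= 5                    # leave soon or immediately
    return color, blink

# Example: heavy traffic and only 3 minutes of slack -> blinking red ring.
print(ring_state(traffic_delay_min=25, minutes_until_departure=3))  # ('red', True)
```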
The user interface can also include audio sounds for playback. For example, in addition to the content displayed on the display screen, user interface 120a can include the playback of sounds (e.g., alerts, chimes, or music) using a speaker of home assistant device 110.
Characteristics regarding the speech received by home assistant device 110 can also be determined at block 330. For example, home assistant device 110 can determine the volume, speed, accent, language, tone, etc. of speech and use that as a contextual factor in providing a user interface. In one example, if a user is speaking quickly (e.g., at a speed or rate determined to be within a words per minute range corresponding to speaking quickly), then content of the user interface may be updated faster than if the user was speaking slowly, for example, by updating the GUI of the user interface sooner. In another example, if the user's speech is determined to be indicative of stress or frustration, then the user interface might provide content differently than if the user's speech is determined to be relatively free of stress or frustration. As an example, if the user is stressed or frustrated, then the amount of content provided on the user interface can be reduced in comparison with the user not being stressed or frustrated.
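The speaking-rate factor can be sketched as estimating words per minute from a transcribed utterance and its duration, then choosing how soon to refresh the GUI; the words-per-minute boundaries and refresh intervals below are illustrative assumptions.

```python
# Sketch: estimate words per minute and pick a GUI update interval from it.
# The WPM boundaries and the interval values are illustrative assumptions.

def words_per_minute(transcript: str, duration_seconds: float) -> float:
    words = len(transcript.split())
    return words / (duration_seconds / 60.0) if duration_seconds > 0 else 0.0

def gui_update_interval(wpm: float) -> float:
    """Faster speech -> update the GUI sooner (seconds between refreshes)."""
    if wpm > 160:      # speaking quickly
        return 0.5
    if wpm > 110:      # average pace
        return 1.0
    return 2.0         # speaking slowly

print(gui_update_interval(words_per_minute("turn on the kitchen lights please", 1.5)))
```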
In some implementations, if the user is determined to be stressed or frustrated, then the user interface can include the playback of music. For example, calming music can be played back using the speaker of home assistant device 110.
In some implementations, the lighting of home assistant device 110 can be different based on what is provided on the user interface. For example, different types of content can result in different brightness, colors, etc.
The user interface can also be changed to account for privacy expectations of a user when the context of the environment changes (i.e., the conditions or characteristics of the environment change).
For example, user 130a might want some content to be provided on a user interface if he is alone, but might not want that content to be displayed if others are within the home. Likewise, user 130b also might not want some content to be provided. In some implementations, user 130a might find it acceptable to have the content provided on the user interface even if the presence of user 130b is detected because user 130b is a member of the same household. However, user 130a might want that content to not be displayed if strangers or guests are in the home. User 130c can be a stranger or newcomer to the home environment who has never interacted with home assistant device 110 and, therefore, is unrecognized by home assistant device 110.
Home assistant device 110 can recognize the different users or persons within the home and generate user interface 120c based on the users 130a-c. For example, home assistant device 110 can take some details of user interfaces 120a and 120b (e.g., user interfaces normally for users 130a and 130b, respectively) and generate user interface 120c providing a combination of that content, adjusted to account for the presence of user 130c.
In some implementations, upon detection of user 130c (i.e., a stranger or guest in the environment), the user interface can also be adapted to take into account an unrecognized user. For example, upon detection of an unrecognized user, some content might be removed from a user interface. When the unrecognized user leaves, this can be detected, and therefore, home assistant device 110 can then provide the removed content back with the user interface. As a result, the user's privacy expectations can be maintained when guests are nearby.
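The privacy behavior described above amounts to filtering content items by who is currently detected in the environment. The sketch below assumes content items tagged with a privacy flag and a set of recognized household member identifiers; both are illustrative data models, not a required implementation.

```python
# Sketch: hide private content when an unrecognized person is present, and
# restore it when only household members remain. The content/privacy model
# here is an illustrative assumption.

from dataclasses import dataclass

HOUSEHOLD = {"user_130a", "user_130b"}   # recognized members (assumed IDs)

@dataclass
class ContentItem:
    title: str
    private: bool   # True if the owner would hide it from guests or strangers

def visible_content(items: list[ContentItem], present_people: set[str]) -> list[ContentItem]:
    """Return the items that can be shown given who is currently detected."""
    strangers_present = bool(present_people - HOUSEHOLD)
    return [item for item in items if not (item.private and strangers_present)]

items = [ContentItem("Calendar: doctor appointment", private=True),
         ContentItem("Weather forecast", private=False)]
print([i.title for i in visible_content(items, {"user_130a", "unknown_person"})])
# Only the weather remains while the unrecognized person is present.
```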
Other types of changes in context of the environment other than detection of strangers or guests can include determining differences in time. For example, a user might find it acceptable to display some content on the GUI late at night or early in the morning, but might not want that content displayed during the daytime because the likelihood of others seeing that content might be higher. Another example can include activities of persons within the environment. For example, if several people in the environment are discussing a particular topic, a social gathering is taking place, etc. then perhaps a user's privacy expectations can be elevated and, therefore, some of the content that would otherwise be displayed can be removed.
In some implementations, a user's privacy expectations can be set by that user or learned by home assistant device 110 over time, or a combination of both. For example, the user can indicate that certain content should not be displayed when unrecognized persons are in the environment. As another example, the user might remove content from the GUI and home assistant device 110 can identify the context in the environment when the user removed the content to determine the user's privacy expectations.
When the stranger or guest leaves, this can be determined as a change in the context of the environment and, therefore, also a change in the privacy expectations for the user. Because the user might be the only person within the environment, the GUI can be modified again to include the content that was previously removed. Thus, if the context of the environment changes and, therefore, the user for whom the GUI is provided has a change in privacy expectations, then the GUI can be adapted.
Many of the examples disclosed herein discuss visual adaptations for the user interface. However, audio adaptations can also be performed based on the context situations described above. For example, the type of voice, accent, volume, etc. can also be adjusted for different user interfaces using the techniques described herein.
Many of the examples disclosed herein discuss speech being recognized. However, other types of audio can also be used with the techniques. For example, noise from objects such as a television or radio, a doorbell ringing, a door opening, glass shattering, etc. can also be detected as occurrences of activity other than speech.
In some implementations, the content of the user interface can also be changed based on whether or not it is determined that a user is looking at home assistant device 110 or speaking to home assistant device 110. For example, the display screen of home assistant device 110 might be turned off, but can turn on when it is determined that a user is looking at it.
In some implementations, the volume of playback of audio provided by home assistant device 110 can be adjusted (e.g., lowered) upon detection of an incoming phone call or page (e.g., via a mobile phone within the home environment). In another example, the content displayed can be adjusted based on the status of another device. For example, a recipe displayed on the display screen of home assistant device 110 can be changed based on determined statuses of a kitchen appliance (e.g., oven, timer, etc.) used for the recipe.
In some implementations, the content provided via the user interface can be based on how a user is using another device within the home. For example, the infrared signals of a television and/or remote control of the television can be detected to indicate which channels are being switched among. This information can be provided to a cloud server by home assistant device 110, which can provide home assistant device 110 with information regarding the media content on those channels being watched. For example, the media content to be provided via the user interface can include “hot buttons” that can show information regarding the channels (e.g., schedule, current programming, popularity ratings for what is currently being played on the channel, etc.). In another example, if a channel is determined to be playing a sports game, then the score and team information (e.g., team rosters) can be displayed. In some implementations, if the user is determined to be switching between three channels within a short period of time and repeating some of the channels during that short period of time (e.g., each channel is visited at least twice in a five minute period), then hot buttons can be generated for each of those channels. The hot buttons can be displayed in different parts of the display screen and each button can include content representing information corresponding to the channel. For example, the user can be switching between three channels playing three different basketball games. Each of the hot buttons can include the scores and time (e.g., 3:23 left in the fourth quarter) of the game played on that channel. Thus, switching between the different channels can be determined and content can be displayed via the hot buttons even for the channels that are not currently being watched. The user can then select one of those buttons and the television can switch to the channel corresponding to the selected button. This can be done with home assistant device 110 communicating with the television either via the wireless network or by generating infrared signals to simulate a remote control.
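A minimal sketch of the toggle-detection rule mentioned above (a hot button for each channel visited at least twice within a five-minute window) follows; the (timestamp, channel) event format and the in-memory bookkeeping are assumptions for illustration.

```python
# Sketch: track channel-switch events and pick the channels that were each
# visited at least `min_visits` times within the last `window_seconds`.
# The (timestamp, channel) event format is an illustrative assumption.

from collections import Counter

def toggled_channels(events: list[tuple[float, int]],
                     now: float,
                     window_seconds: float = 300.0,
                     min_visits: int = 2) -> list[int]:
    """Return the channels that qualify for a hot button."""
    recent = [channel for ts, channel in events if now - ts <= window_seconds]
    counts = Counter(recent)
    return sorted(ch for ch, n in counts.items() if n >= min_visits)

# Example: the user bounces between channels 12 and 32 and briefly visits 7.
events = [(0, 12), (40, 32), (95, 12), (150, 32), (200, 7)]
print(toggled_channels(events, now=210.0))   # -> [12, 32]
```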
In more detail, home assistant device 110 can be placed within the home where it is easily accessible to users and the user interface displayed on its display screen is easily seen. For example, some users might place home assistant device 110 in the living room where they also watch media content played back on their television, engage in social activities, etc. Thus, home assistant device 110 might be placed on a coffee table or end table in the living room where it is close to where people are engaged in a variety of activities in the home. In some implementations, home assistant device 110 can determine information regarding the playback of media content on the television (or other display device such as a computer monitor, tablet, smartphone, etc.) and then generate content for its user interface based on the playback of the media content.
In one example, a user can use a remote control to switch the channels being played back on television 715, and the remote control transmits IR signals to television 715 to cause the channel changes. Home assistant device 110 can detect those IR signals, determine the channels that the user is switching among, and provide watched channels information 730 identifying those channels to server 725.
Server 725 can be a cloud server that tracks information regarding the media content being played back on channels. For example, server 725 can receive or generate real-time information regarding the media content being played back on different channels. If channels 12 and 32 are playing back different basketball games (or other types of sports), then server 725 can store information indicating the teams playing, the time left in the game or portion of the game (e.g., how much time is left in a period or quarter), the score, team logos, team records (e.g., wins, losses, ties), etc. Other types of media content can include different types of information. For example, if a movie is being played back on another channel, then ratings, reviews, box office revenue, time left to finish playback of the movie, actors and actresses starring in the movie, director and/or other filmmaker credits, etc. can be stored by server 725. This information can be maintained in channel information database 740.
If server 725 receives watched channels information 730, then it can provide channel information 735 using information from channel information database 740. For example, if channels 12 and 32 are playing back different basketball games, then the information indicated above (e.g., scores, etc.) can be provided to home assistant device 110. Home assistant device 110 can then generate content on user interface 720 displayed upon its display screen using that information. Thus, characteristics of a live broadcast (e.g., the score of a live basketball game) can be provided to home assistant device 110 to be depicted upon its display screen.
For example, user interface 720 can include hot buttons displaying the current scores of the basketball games being played back on channels 12 and 32 along with the time remaining in each game.
The user might also not be aware of other channels providing playback of media content that they might be interested in. For example, the user might be watching the basketball games being played back on channels 12 and 32. However, those basketball games might be part of a larger basketball tournament and other games related to that tournament might be playing back on other channels. Server 725 can determine this (e.g., by determining similarities in the media content being played back on the different channels) and include information regarding those other channels in channel information 735. For example, user interface 720 can include a hot button for another channel that is playing back a related game in the same tournament, even though the user has not yet watched that channel.
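On the server side, determining that other channels are similar can be as simple as comparing descriptive tags of what each channel is currently playing. The sketch below uses an assumed tag-based channel guide and an overlap rule; it is an illustration, not a description of how server 725 is actually implemented.

```python
# Sketch: find channels whose current programming shares descriptive tags with
# the channels the user has been watching. The channel records are assumptions.

CHANNEL_GUIDE = {
    12: {"sport": "basketball", "event": "tournament_a"},
    32: {"sport": "basketball", "event": "tournament_a"},
    56: {"sport": "basketball", "event": "tournament_a"},   # hypothetical channel
    8:  {"genre": "news"},
}

def similar_channels(watched: list[int], guide: dict[int, dict[str, str]]) -> list[int]:
    """Return channels not already watched that share at least one tag value."""
    watched_tags = set()
    for ch in watched:
        watched_tags.update(guide.get(ch, {}).items())
    return sorted(ch for ch, tags in guide.items()
                  if ch not in watched and watched_tags & set(tags.items()))

print(similar_channels([12, 32], CHANNEL_GUIDE))   # -> [56]
```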
The channels being watched can be determined based on the IR signals (810). For example, home assistant device 110 can decode the IR signals to determine the channel numbers being selected with the remote control and, therefore, the channels that the television is switching among.
In some implementations, the channels can be identified based on how often the user is switching among the channels within a time duration (e.g., a particular channel is watched or switched to a threshold number of times within a threshold time duration, such as four times in twenty minutes). If the user is switching among five channels within the time duration, then those five channels can be identified as channels being watched. In other implementations, if the IR signals are for specific channels (e.g., indicating to television 715 to switch directly to a particular channel rather than incrementing the channel up or down), then those specific channels can be identified as channels being watched.
In another implementation, the user's speech can be detected, for example, using microphones of home assistant device 110. If the user is determined to be talking regarding the media content (e.g., the subject that is the target of the media content) of the channel, then the channel can be one of the identified channels. This might be done because if the user is discussing the media content being played back, then they might be interested in watching that channel. In another example, the user's activity can be detected using video cameras to generate image frames of the environment depicting what the user is doing while watching the channels. If the user is perceived (via image recognition algorithms) to be staring at the television for a time period (e.g., for at least thirty seconds), then the channel being played can be an identified channel. If the user is not staring at the television for at least that time period, then that channel might not be of interest to the user. Other visual characteristics of the user can also be identified. For example, if the user or others in the environment are wearing attire or holding paraphernalia such as a cap, jersey, flag, etc. indicating a sports team, then that can be recognized (including the sports team in some implementations) and be used to identify channels playing games related to that sport.
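The gaze-based signal described above (treating a channel as watched only if the user looks at the television for at least thirty seconds) can be sketched as counting consecutive "looking" frames; the frame rate and the per-frame boolean gaze result are assumptions, and the image-recognition step itself is not shown.

```python
# Sketch: decide whether a channel counts as "watched" from per-frame gaze
# results. Each element of `gaze_frames` is True if the user was looking at the
# television in that frame (the gaze classifier itself is assumed, not shown).

def watched_long_enough(gaze_frames: list[bool],
                        fps: float = 10.0,
                        required_seconds: float = 30.0) -> bool:
    """True if any continuous run of 'looking' frames lasts required_seconds."""
    needed = int(required_seconds * fps)
    run = 0
    for looking in gaze_frames:
        run = run + 1 if looking else 0
        if run >= needed:
            return True
    return False

# Example: 35 seconds of continuous attention at 10 frames per second.
print(watched_long_enough([True] * 350))   # -> True
```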
In another implementation, characteristics of the channel or the media content played back on that channel can be used. For example, channels can be marked as favorites by a user, indicating that they are channels that the user prefers to watch and, therefore, should be identified as being watched. The type of content being played back can also be determined. For example, news, sports, or other programming or media content that are generally played back in real-time (i.e., currently ongoing) can be identified as a watched channel if the user selects it.
The watched channel information can then be provided to a server (815). For example, home assistant device 110 can provide watched channels information 730, identifying the channels determined to be watched, to server 725.
The server can then receive the watched channel information and determine information regarding the media content being played back on the watched channels. For example, server 725 can use channel information database 740 to determine the scores of and time remaining in the basketball games being played back on channels 12 and 32.
Additionally, other channels providing media content similar to the media content of the watched channels (i.e., identified channels) can be determined (830). For example, server 725 can determine the type of media content being played back (e.g., news, sports, a movie, a comedy television show, stand-up comedy routine, etc.), people acting in the media content, the director or filmmaker of the media content, or other characteristics of the media content itself to identify similar media content currently being played back on other channels (e.g., as indicated in channel information database 740). As previously discussed, these other channels might be of interest for the user to watch. The media content information for the watched channels and the similar channels can then be provided (835) to the home assistant device (840).
The home assistant device can then generate hot buttons for the channels on the user interface displayed upon the display screen of the home assistant device (845). For example, home assistant device 110 can generate hot buttons on user interface 720 for the channels identified as being watched as well as for the other channels that server 725 determined to be similar, with each hot button displaying information regarding the media content being played back on its corresponding channel.
If the user selects one of the hot buttons, then this can be determined (e.g., by detecting touches on the touchscreen that are at coordinates corresponding to the position of the hot button) and the home assistant device can generate or transmit an IR signal for the television to switch the channel based on the selected hot button. For example, if the user selects a hot button corresponding to a channel that is not currently being played back, then home assistant device 110 can transmit IR signals emulating the remote control so that television 715 switches to the channel indicated by that hot button.
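Putting the selection step together: when a touch lands inside a hot button's bounds, the device looks up that button's channel and emits the corresponding IR command. In the sketch below, the hit test and the IRTransmitter class are hypothetical stand-ins for whatever touchscreen and IR-emitter interfaces the device actually provides.

```python
# Sketch: map a touch on the touchscreen to a hot button, then ask an assumed
# IR-transmitter interface to switch the television to that button's channel.
# `IRTransmitter.transmit_channel` is a hypothetical hardware abstraction.

from dataclasses import dataclass

@dataclass
class HotButton:
    channel: int
    x: int
    y: int
    width: int
    height: int   # button bounds in screen pixels

    def contains(self, touch_x: int, touch_y: int) -> bool:
        return (self.x <= touch_x < self.x + self.width and
                self.y <= touch_y < self.y + self.height)

class IRTransmitter:
    def transmit_channel(self, channel: int) -> None:
        # Placeholder: a real device would pulse an IR LED with the remote's code.
        print(f"IR: switch television to channel {channel}")

def handle_touch(buttons: list[HotButton], ir: IRTransmitter, x: int, y: int) -> None:
    for button in buttons:
        if button.contains(x, y):
            ir.transmit_channel(button.channel)
            return

buttons = [HotButton(channel=32, x=0, y=0, width=200, height=120),
           HotButton(channel=56, x=0, y=130, width=160, height=100)]
handle_touch(buttons, IRTransmitter(), x=50, y=150)   # -> switches to channel 56
```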
In some implementations, server 725 can also provide other information regarding the channels, for example, schedules, current programming, or popularity ratings, and home assistant device 110 can include that information on the corresponding hot buttons.
Any of the techniques described elsewhere herein can also be used for the content (e.g., hot buttons, etc.). For example, different users might result in different sizes of hot buttons, positions of hot buttons, number of hot buttons, etc. In another example, hot buttons for channels that the user was identified as watching can be a different size (e.g., larger) than other channels recommended by server 725.
Additionally, the techniques described herein discuss switching among different channels of a television. However, switching among different items of streaming media content can also be performed using similar techniques.
Many of the aforementioned examples discuss a home environment. In other examples, the devices and techniques discussed herein can also be set up in an office, public facility, etc.
Those skilled in the art will appreciate that the logic and process steps illustrated in the various flow diagrams discussed herein may be altered in a variety of ways. For example, the order of the logic may be rearranged, sub-steps may be performed in parallel, illustrated logic may be omitted, other logic may be included, etc. One will recognize that certain steps may be consolidated into a single step and that actions represented by a single step may be alternatively represented as a collection of substeps. The figures are designed to make the disclosed concepts more comprehensible to a human reader. Those skilled in the art will appreciate that actual data structures used to store this information may differ from the figures and/or tables shown, in that they, for example, may be organized in a different manner; may contain more or less information than shown; may be compressed, scrambled and/or encrypted; etc.
From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications can be made without deviating from the scope of the invention. Accordingly, the invention is not limited except as by the appended claims.
This application is a continuation of U.S. patent application Ser. No. 15/600,563, entitled, “Contextual User Interface Based on Media Playback,” by Roman et al., and filed on May 19, 2017. U.S. patent application Ser. No. 15/600,563 claims priority to U.S. Provisional Patent Application No. 62/506,168, entitled “Contextual User Interface Based on Media Playback,” by Roman et al., and filed on May 15, 2017. U.S. patent application Ser. No. 15/600,563 is also a continuation-in-part of U.S. patent application Ser. No. 15/587,201, entitled “Contextual User Interface Based on Environment,” by Roman et al., and filed on May 4, 2017, which claims priority to U.S. Provisional Patent Application No. 62/448,912, entitled “Contextual User Interface Based on Environment,” by Roman et al., and filed on Jan. 20, 2017, U.S. Provisional Patent Application No. 62/486,359, entitled “Contextual User Interface Based on Environment,” by Roman et al., and filed on Apr. 17, 2017, and U.S. Provisional Patent Application No. 62/486,365, entitled “Contextual User Interface Based on Changes in Environment,” by Roman et al., and filed on Apr. 17, 2017. The content of the above-identified applications are incorporated herein by reference in their entirety.
| Number | Date | Country |
| --- | --- | --- |
| 62506168 | May 2017 | US |
| 62448912 | Jan 2017 | US |
| 62486359 | Apr 2017 | US |
| 62486365 | Apr 2017 | US |
|   | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 15600563 | May 2017 | US |
| Child | 15673911 |   | US |
|   | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 15587201 | May 2017 | US |
| Child | 15600563 |   | US |