This application describes, among other things, user interface objects, as well as systems and devices associated with user interfaces which employ such user interface objects.
Technologies associated with the communication of information have evolved rapidly over the last several decades. Television, cellular telephony, the Internet and optical communication techniques (to name just a few things) combine to inundate consumers with available information and entertainment options. Taking television as an example, the last three decades have seen the introduction of cable television service, satellite television service, pay-per-view movies and video-on-demand. Whereas television viewers of the 1960s could typically receive perhaps four or five over-the-air TV channels on their television sets, today's TV watchers have the opportunity to select from hundreds, thousands, and potentially millions of channels of shows and information. Video-on-demand technology, currently used primarily in hotels and the like, provides the potential for in-home entertainment selection from among thousands of movie titles.
The technological ability to provide so much information and content to end users provides both opportunities and challenges to system designers and service providers. One challenge is that while end users typically prefer having more choices rather than fewer, this preference is counterweighted by their desire that the selection process be both fast and simple. Unfortunately, the development of the systems and interfaces by which end users access media items has resulted in selection processes which are neither fast nor simple. Consider again the example of television programs. When television was in its infancy, determining which program to watch was a relatively simple process primarily due to the small number of choices. One would consult a printed guide which was formatted, for example, as a series of columns and rows which showed the correspondence between (1) nearby television channels, (2) programs being transmitted on those channels and (3) date and time. The television was tuned to the desired channel by adjusting a tuner knob and the viewer watched the selected program. Later, remote control devices were introduced that permitted viewers to tune the television from a distance. This addition to the user-television interface created the phenomenon known as “channel surfing” whereby a viewer could rapidly view short segments being broadcast on a number of channels to quickly learn what programs were available at any given time.
Despite the fact that the number of channels and amount of viewable content has dramatically increased, the generally available user interface, control device options and frameworks for televisions have not changed much over the last 30 years. Printed guides are still the most prevalent mechanism for conveying programming information. The multiple button remote control with up and down arrows is still the most prevalent channel/content selection mechanism. The reaction of those who design and implement the TV user interface to the increase in available media content has been a straightforward extension of the existing selection procedures and interface objects. Thus, the number of rows in the printed guides has been increased to accommodate more channels. The number of buttons on the remote control devices has been increased to support additional functionality and content handling, e.g., as shown in
In addition to increases in bandwidth and content, the user interface bottleneck problem is being exacerbated by the aggregation of technologies. Consumers are reacting positively to having the option of buying integrated systems rather than a number of segregable components. An example of this trend is the combination television/VCR/DVD in which three previously independent components are frequently sold today as an integrated unit. This trend is likely to continue, potentially with an end result that most, if not all, of the communication devices currently found in the household will be packaged together as an integrated unit, e.g., a television/VCR/DVD/internet access/radio/stereo unit. Even those who continue to buy separate components will likely desire seamless control of, and interworking between, the separate components. With this increased aggregation comes the potential for more complexity in the user interface. For example, when so-called “universal” remote units were introduced, e.g., to combine the functionality of TV remote units and VCR remote units, the number of buttons on these universal remote units was typically more than the number of buttons on either the TV remote unit or VCR remote unit individually. This added number of buttons and functionality makes it very difficult to control anything but the simplest aspects of a TV or VCR without hunting for exactly the right button on the remote. Many times, these universal remotes do not provide enough buttons to access many levels of control or features unique to certain TVs. In these cases, the original device remote unit is still needed, and the original hassle of handling multiple remotes remains due to user interface issues arising from the complexity of aggregation. Some remote units have addressed this problem by adding “soft” buttons that can be programmed with the expert commands. These soft buttons sometimes have accompanying LCD displays to indicate their action. These too have the flaw that they are difficult to use without looking away from the TV to the remote control. Yet another flaw in these remote units is the use of modes in an attempt to reduce the number of buttons. In these “moded” universal remote units, a special button exists to select whether the remote should communicate with the TV, DVD player, cable set-top box, VCR, etc. This causes many usability issues, including sending commands to the wrong device, forcing the user to look at the remote to make sure that it is in the right mode, and providing no simplification to the integration of multiple devices. The most advanced of these universal remote units provide some integration by allowing the user to program sequences of commands to multiple devices into the remote. This is such a difficult task that many users hire professional installers to program their universal remote units.
Some attempts have also been made to modernize the screen interface between end users and media systems. However, these attempts typically suffer from, among other drawbacks, an inability to easily scale between large collections of media items and small collections of media items. For example, interfaces which rely on lists of items may work well for small collections of media items, but are tedious to browse for large collections of media items. Interfaces which rely on hierarchical navigation (e.g., tree structures) may be speedier to traverse than list interfaces for large collections of media items, but are not readily adaptable to small collections of media items. Additionally, users tend to lose interest in selection processes wherein the user has to move through three or more layers in a tree structure. For all of these cases, current remote units make this selection process even more tedious by forcing the user to repeatedly depress the up and down buttons to navigate the list or hierarchies. When selection-skipping controls are available, such as page up and page down, the user usually has to look at the remote to find these special buttons or be trained to know that they even exist. Accordingly, organizing frameworks, techniques and systems which simplify the control and screen interface between users and media systems as well as accelerate the selection process, while at the same time permitting service providers to take advantage of the increases in available bandwidth to end user equipment by facilitating the supply of a large number of media items and new services to the user, have been proposed in U.S. patent application Ser. No. 10/768,432, filed on Jan. 30, 2004, entitled “A Control Framework with a Zoomable Graphical User Interface for Organizing, Selecting and Launching Media Items”, the disclosure of which is incorporated here by reference.
As mentioned in the above-incorporated application, various different types of remote devices can be used with such frameworks including, for example, trackballs, “mouse”-type pointing devices, light pens, etc. However, another category of remote devices which can be used with such frameworks (and other applications) is 3D pointing devices with scroll wheels. The phrase “3D pointing” is used in this specification to refer to the ability of an input device to move in three (or more) dimensions in the air in front of, e.g., a display screen, and the corresponding ability of the user interface to translate those motions directly into user interface commands, e.g., movement of a cursor on the display screen. The transfer of data between the 3D pointing device and another device may be performed wirelessly or via a wire connecting the two devices. Thus “3D pointing” differs from, e.g., conventional computer mouse pointing techniques which use a surface, e.g., a desk surface or mousepad, as a proxy surface from which relative movement of the mouse is translated into cursor movement on the computer display screen. An example of a 3D pointing device can be found in U.S. patent application Ser. No. 11/119,663, the disclosure of which is incorporated here by reference.
Of particular interest for this specification is how these remote devices interact with information and objects in a graphical user interface (GUI). A currently popular mechanism for interacting with objects in a GUI is the dropdown list. Typically, a remote device moves a cursor over an object of interest and a dropdown list 200 appears as shown in
Firstly, a visual browser (or bookshelf view as seen in
Thus, these drawbacks demonstrate that there is significant room for improvement in the area of handheld device interactions with GUIs generally, and interactions between 3D pointing devices and zoomable GUIs using hover-buttons specifically.
Systems and methods according to the present invention address these needs and others by providing systems and methods for interacting with user-selectable objects in a graphical user interface.
According to one exemplary embodiment of the present invention, a method for interacting with primary and secondary user-selectable objects in a graphical user interface includes the steps of: associating secondary user-selectable objects with primary user-selectable objects; displaying the secondary user-selectable objects associated with a respective primary user-selectable object when that primary user-selectable object is selected; and selecting one of the secondary user-selectable objects when a cursor is proximate to that secondary user-selectable object.
According to another exemplary embodiment of the present invention, a user interface for interfacing with primary and secondary user-selectable objects includes: primary and secondary user-selectable objects, wherein the secondary user-selectable objects are associated with respective primary user-selectable objects; a display, wherein the secondary user-selectable objects associated with a respective primary user-selectable object are displayed upon the display when that primary user-selectable object is selected; and a cursor, wherein when the cursor is proximate to a secondary user-selectable object, that secondary user-selectable object is selected.
The accompanying drawings illustrate exemplary embodiments of the present invention, wherein:
The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims.
In order to provide some context for this discussion, an exemplary aggregated media system 400 in which the present invention can be implemented will first be described with respect to
In this exemplary embodiment, the media system 400 includes a television/monitor 412, a video cassette recorder (VCR) 414, digital video disk (DVD) recorder/playback device 416, audio/video tuner 418 and compact disk player 420 coupled to the I/O bus 410. The VCR 414, DVD 416 and compact disk player 420 may be single disk or single cassette devices, or alternatively may be multiple disk or multiple cassette devices. They may be independent units or integrated together. In addition, the media system 400 includes a microphone/speaker system 422, video camera 424 and a wireless I/O control device 426. According to exemplary embodiments of the present invention, the wireless I/O control device 426 is a 3D pointing device although the present invention is not limited thereto. The wireless I/O control device 426 can communicate with the entertainment system 400 using, e.g., an IR or RF transmitter or transceiver. Alternatively, the I/O control device can be connected to the entertainment system 400 via a wire.
The entertainment system 400 also includes a system controller 428. According to one exemplary embodiment of the present invention, the system controller 428 operates to store and display entertainment system data available from a plurality of entertainment system data sources and to control a wide variety of features associated with each of the system components. As shown in
As further illustrated in
More details regarding this exemplary entertainment system and frameworks associated therewith can be found in the above-incorporated U.S. Patent Application entitled “A Control Framework with a Zoomable Graphical User Interface for Organizing, Selecting and Launching Media Items”. Alternatively, remote devices in accordance with the present invention can be used in conjunction with other systems, for example computer systems including, e.g., a display, a processor and a memory system, or with various other systems and applications.
3D pointing devices enable the translation of movement, e.g., gestures, into commands to a user interface. An exemplary 3D pointing device 500 is depicted in
Hover-Buttons
Exemplary embodiments of the present invention describe how to improve interaction with objects in a graphical user interface (GUI) through the use of secondary user-selectable objects, some of which are referred to herein as “hover-buttons”.
Prior to describing specific details of these secondary user-selectable objects, a brief description of an exemplary GUI in which they can be deployed is presented. The GUI contains one or more target objects (also referred to herein as graphical objects or primary user-selectable objects). The target objects can be presented and organized in many different ways on a display such as: (1) single buttons or zoomable objects arbitrarily positioned on the screen, (2) one-dimensional lists of buttons or zoomable objects which may be scrollable, (3) two-dimensional grids of objects, possibly scrollable and pannable, (4) three-dimensional matrices of objects, possibly scrollable, and (5) various combinations of the above. It may be desirable for some GUI objects to be immediately available at all times because of their functionality. In the exemplary GUIs described herein, objects with hover-buttons are presented in a bookshelf format; however, as described above, other presentations are possible.
According to exemplary embodiments of the present invention, a cursor is used to indicate the current location of interest in the user interface associated with movement of a corresponding pointing device. When the cursor enters the area occupied by a target object and hovers within the area for a predetermined amount of time, such as 100 ms to 1000 ms, that object is highlighted. Note that hovering includes, but is not limited to, pausing, such that the cursor can still be moving and trigger a change in object focus. Highlighting is visible through a color change, a hover-zoom effect, enlargement or any other visual method that makes the object over which the cursor has paused distinguishable from other objects on the display. The highlighted object is the object on the GUI that has the focus of both the user and the system. Hover-button(s) can be associated with, and attached to, the currently highlighted (or focused) object to enable the user to actuate, or otherwise further interact with, that object. These attached hover-buttons make it clear to a user which object the hover-buttons are associated with.
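By way of illustration, the dwell-based highlighting described above could be implemented along the lines of the following sketch. This is a minimal example, not the specification's implementation: the `FocusTracker` and `Rect` names, the callback shape, and the 300 ms default dwell (one value within the 100 ms to 1000 ms range mentioned above) are all assumptions made for the example.

```typescript
// Minimal sketch of dwell-based focus: an object gains focus when the
// cursor stays inside its bounds for a configurable dwell time.

interface Rect { x: number; y: number; width: number; height: number; }

function contains(r: Rect, x: number, y: number): boolean {
  return x >= r.x && x <= r.x + r.width && y >= r.y && y <= r.y + r.height;
}

class FocusTracker {
  private timer: ReturnType<typeof setTimeout> | null = null;
  private pending: Rect | null = null;   // candidate awaiting its dwell time
  private focused: Rect | null = null;   // object currently holding focus

  // 300 ms is an assumed default within the 100-1000 ms range named above.
  constructor(private dwellMs = 300,
              private onFocus: (target: Rect) => void) {}

  // Call on every cursor move; `targets` are the primary objects on screen.
  update(targets: Rect[], x: number, y: number): void {
    const hit = targets.find(t => contains(t, x, y)) ?? null;
    // Movement within the pending object does not restart the dwell timer,
    // matching the note above that hovering is not limited to pausing.
    if (hit !== null && hit === this.pending && this.timer !== null) return;
    if (this.timer !== null) { clearTimeout(this.timer); this.timer = null; }
    this.pending = null;
    if (hit === null || hit === this.focused) return;
    this.pending = hit;
    this.timer = setTimeout(() => {
      this.focused = hit;   // the previous object implicitly loses focus
      this.pending = null;
      this.onFocus(hit);    // highlight: color change, hover-zoom, etc.
    }, this.dwellMs);
  }
}
```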
In this specification, an object can gain the focus of the system and the user, e.g., by having a cursor hover thereover, which may be different from selection of that object. Selecting an object typically involves some form of actuation which can, for example, execute a function related to the object which currently has the focus of the system. According to some exemplary embodiments described herein, a cursor moves over an object and the object enlarges, or otherwise provides feedback to the user that the object has gained focus (e.g., it is highlighted). The user may then perform an action such as, for example, “clicking” on the object. This clicking selects the object and activates a function associated with the object. For example, if the focused object were a movie cover and the user clicked on it, an action such as playing the movie could occur. Alternatively, a user may change the system's focus to another object on the user interface without selecting or actuating the object which previously had the user's and the system's focus.
Prior to describing examples using hover-buttons with user-selectable objects, a description of some of the exemplary features of hover-buttons is presented. According to exemplary embodiments of the present invention, hover-buttons are a type of secondary user-selectable object that are associated with, and often geographically attached to, a primary user-selectable object, such as a picture in a picture organizing portion of a user interface. Hover-buttons can be geographically dispersed around the edge of the associated target object in order to increase the distance between the hover-buttons associated with the same target object so that it is easier for a user to point and gain the focus of one hover-button over another hover-button. To achieve this geographic dispersal, hover-buttons can, for example, be located at geographic corners on the edge of an object. A typical pattern of cursor movement is from the center of the hovered target object to one of the corners where a hover-button is located. The effect generated is a single vector movement in one of four directions relative to the hovered object. These same relative movements towards corners of target objects tend to become a habit-forming gesture that simplifies using the GUI. Another exemplary feature of hover-buttons is that hover-buttons can become visible only when the object to which they are attached has the focus. Upon losing the focus of the object, the hover-buttons then become invisible. Also, as a cursor comes near a hover-button, the hover-button enlarges, and upon the cursor moving away from the hover-button, the hover-button shrinks in size to allow the associated object to become clearly visible. Additionally, only one hover-button tends to be enlarged at a time to increase the ease of selection for a user. Using combinations of these exemplary features of hover-buttons, examples of using hover-buttons are presented below.
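As an illustration of the corner placement described above, the following sketch computes up to four hover-button anchor positions on the edge of a focused target. The `hoverButtonAnchors` helper name and the types are assumptions for the example; a real interface would also inset the buttons from the edge and show or hide them as the target gains or loses focus.

```typescript
// Sketch of corner placement: dispersing up to four hover-buttons to the
// corners of the focused target maximizes the distance between them, so
// each is easier to point at individually.

interface Rect { x: number; y: number; width: number; height: number; }
type Point = { x: number; y: number };

function hoverButtonAnchors(target: Rect, count: number): Point[] {
  const { x, y, width: w, height: h } = target;
  const corners: Point[] = [
    { x: x,     y: y     },  // top-left
    { x: x + w, y: y     },  // top-right
    { x: x,     y: y + h },  // bottom-left
    { x: x + w, y: y + h },  // bottom-right
  ];
  // Render these anchors only while the target has focus; hide on blur.
  return corners.slice(0, Math.min(count, 4));
}
```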
According to exemplary embodiments of the present invention, hover-buttons can be associated with objects in a GUI. As shown in
According to an exemplary embodiment of the present invention, an animation sequence is used to illustrate the flow of actions from having an object on the screen to enabling or actuating a hover-button. This exemplary animation sequence is illustrated in
According to another exemplary embodiment of the present invention, hover-buttons can be applied to text objects as shown in
One benefit of the afore-described techniques is to create a simple GUI. One expectation of a simple GUI is to have a reduced set of needed functions for use in the simple GUI. Accordingly, in one exemplary embodiment of the present invention, each target object will have a maximum of four hover-buttons associated with it. Each hover-button corresponds to a different function that can be performed in association with the object.
According to other exemplary embodiments of the present invention, more than four functions can be associated with an object. To achieve this functionality, an exemplary embodiment of the present invention allows a hover-button to have a sub-menu. An exemplary animation sequence involving a hover-button with a sub-menu is shown in
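A hypothetical data model for a hover-button carrying a sub-menu is sketched below. The field names and the example sub-menu entries are assumptions for illustration, not taken from the specification; the idea is simply that selecting a non-leaf hover-button opens its sub-menu rather than actuating a function, allowing a target to offer more than four actions.

```typescript
// Assumed model: a hover-button is either a leaf with an action, or a
// node whose selection expands a sub-menu of further hover-buttons.

interface HoverButton {
  label: string;
  action?: () => void;       // leaf: runs a function when selected
  subMenu?: HoverButton[];   // non-leaf: expands into more choices
}

function select(button: HoverButton,
                showSubMenu: (items: HoverButton[]) => void): void {
  if (button.subMenu && button.subMenu.length > 0) {
    showSubMenu(button.subMenu);  // open the sub-menu rather than actuate
  } else if (button.action) {
    button.action();
  }
}

// Illustrative usage: a "Share" hover-button expanding into two choices.
const share: HoverButton = {
  label: "Share",
  subMenu: [
    { label: "E-mail", action: () => console.log("e-mail selected") },
    { label: "Print",  action: () => console.log("print selected") },
  ],
};
```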
According to another exemplary embodiment of the present invention, instead of using the animation sequence described above, a hover-button can reach its maximum or minimum size instantaneously based upon the cursor's location.
As described above, hover-buttons can become enlarged when a cursor moves towards a hover-button. Hover-buttons can have associated area thresholds that, when crossed, trigger actions related to the hover-button. As illustrated in
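Because the exact threshold geometry of the referenced figure is not reproduced here, the following sketch assumes two concentric distance thresholds around a hover-button's center; both radii are illustrative values. The gap between them provides hysteresis, so the button does not flicker between enlarged and shrunken states while the cursor lingers near a single boundary.

```typescript
// Sketch of distance-based size thresholds with hysteresis: the button
// enlarges when the cursor crosses the inner threshold and only shrinks
// again once the cursor retreats past the larger one.

type Point = { x: number; y: number };

const ENLARGE_RADIUS = 40;  // px from button center: grow once inside
const SHRINK_RADIUS  = 60;  // px from button center: shrink once outside

function nextSizeState(enlarged: boolean,
                       cursor: Point,
                       buttonCenter: Point): boolean {
  const d = Math.hypot(cursor.x - buttonCenter.x,
                       cursor.y - buttonCenter.y);
  if (!enlarged && d <= ENLARGE_RADIUS) return true;   // crossed inward
  if (enlarged && d >= SHRINK_RADIUS) return false;    // crossed outward
  return enlarged;                                     // in the hysteresis band
}
```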
According to other exemplary embodiments of the present invention, hover-buttons can gain focus based on a movement gesture made by the user, as depicted by the cursor motion on the screen. For example, after an object has gained the focus, when the cursor is moved towards a hover-button, that hover-button gains the focus and becomes enlarged.
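One plausible implementation of such gesture-based focus, sketched below, compares the cursor's motion vector with the direction from the cursor to each hover-button and enlarges the best-aligned one. The cosine alignment cutoff of 0.8 is an assumed tuning value, not a figure from the specification.

```typescript
// Sketch of gesture-based focus: pick the hover-button whose direction
// best matches the cursor's current motion, if the motion is direct enough.

type Point = { x: number; y: number };

function gestureTarget(prev: Point, curr: Point,
                       buttons: { center: Point }[]): number {
  const mx = curr.x - prev.x, my = curr.y - prev.y;
  const speed = Math.hypot(mx, my);
  if (speed === 0) return -1;              // no motion, no gesture
  let best = -1;
  let bestDot = 0.8;                       // require cos(angle) > 0.8
  buttons.forEach((b, i) => {
    const dx = b.center.x - curr.x, dy = b.center.y - curr.y;
    const dist = Math.hypot(dx, dy);
    if (dist === 0) return;
    const dot = (mx * dx + my * dy) / (speed * dist); // cosine of angle
    if (dot > bestDot) { bestDot = dot; best = i; }
  });
  return best;  // index of hover-button to enlarge, or -1 for none
}
```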
According to another exemplary embodiment, scrolling can be used in conjunction with hover-buttons. Each primary user-selectable object in, e.g., a bookshelf view would have a scrolling order number assigned to it, with one of the objects in each view being considered the starting object for scrolling. Additionally, the hover-buttons associated with each object in the bookshelf view would be part of the predetermined scrolling sequence. In an exemplary scrolling order, the sequence would visit a primary object, then visit each hover-button associated with that object, and then move to the next primary object. The next object in the scrolling order would gain the focus of the system and the user with one index rotation of the scroll wheel.
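The scrolling order just described can be modeled as a flattened sequence, as in the following sketch. This is an assumed implementation, not the specification's: it interleaves each primary object with its hover-buttons, and each index rotation of the scroll wheel advances the focus by one entry, wrapping at the ends.

```typescript
// Sketch of the scroll-wheel traversal: primary object, then each of its
// hover-buttons, then the next primary object, and so on.

interface ScrollableObject { id: string; hoverButtons: string[]; }

function buildScrollOrder(objects: ScrollableObject[]): string[] {
  const order: string[] = [];
  for (const obj of objects) {
    order.push(obj.id);               // the primary object itself
    order.push(...obj.hoverButtons);  // then each attached hover-button
  }
  return order;
}

// Advance or retreat one entry per wheel detent, wrapping at the ends.
function step(order: string[], index: number, delta: 1 | -1): number {
  return (index + delta + order.length) % order.length;
}
```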
Numerous variations of the afore-described exemplary embodiments are contemplated. The above-described exemplary embodiments are intended to be illustrative in all respects, rather than restrictive, of the present invention. Thus the present invention is capable of many variations in detailed implementation that can be derived from the description contained herein by a person skilled in the art. All such variations and modifications are considered to be within the scope and spirit of the present invention as defined by the following claims. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items.
This application is related to, and claims priority from, U.S. Provisional Patent Application Ser. No. 60/708,851 filed on Aug. 17, 2005, entitled “Hover-Buttons for a Zoomable Interface”, the disclosure of which is incorporated here by reference.