The present invention deals with user interfaces and more particularly with providing a dynamic user interface on a second screen control device to control media content on a primary display screen.
Recent progress in Internet-based distribution and consumption of media content has led to an abundance of available media content, and this abundance is only going to increase in the future. This explosion of content production and distribution has created interesting issues for the end user in the selection of content. Conventional set top boxes and home gateways are also evolving to enable consumption of media content through both media pipes and data pipes coming into the home. This will enable the user to consume media from multiple sources regardless of the distribution channel behind the scenes. In this situation, a conventional remote control or any other existing static navigation or control device proves insufficient for navigating these choices.
In addition to set top boxes and home gateways, the remotes for these systems are also evolving. There are several types of remote control devices available to control home entertainment systems. Some have a touch screen in addition to the normal hard buttons, displaying a small-scale mapping of the television screen and a control panel. Other types include gesture-based remote controls, which depend on camera-based gesture detection schemes. Still others are second screen devices, such as tablets or smart phones, running remote control software. But none of these devices incorporates a completely dynamic UI-based control. A remote control that does not have access to the program meta-information or the context of the currently watched program cannot adapt its interface dynamically according to that context. In other words, almost all of the available remote controls are static in nature as far as their interfaces are concerned.
This disclosure provides a solution to this problem by introducing an adaptable user interface system to allow a second screen control device to control content on a primary display screen.
In accordance with one embodiment, a method is provided for creating a dynamic user interface on a second screen control device to control content on a primary display screen. The method includes the steps of monitoring the content being displayed on the primary display screen; obtaining additional information about the content being displayed on the primary display screen; generating a view context based on the content being monitored, the additional information, and the functionality of the second screen control device; and providing the view context to the second screen control device.
In accordance with another embodiment, a system is provided for controlling content on a primary display screen using a dynamically created user interface on a second screen control device. The system includes a client and a server. The client includes a first display control and an event listener. The first display control is configured to control a display of the second screen control device. The event listener is configured to receive commands from a user on the second screen control device. The server is in communication with the client and includes a view context creator and an event interpreter. The view context creator is configured to generate a view context based on the content being displayed on the primary display screen, additional information, and functionality of the second screen control device. The event interpreter is configured to receive the commands from the user provided by the event listener and interpret the commands in view of the view context generated by the view context creator.
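By way of illustration only, the interplay of the four recited components can be sketched in a few lines of Python. All class, method, and field names below are assumptions introduced for the example; the disclosure does not prescribe any particular implementation.

```python
# Minimal sketch (assumed names throughout) of the client/server split:
# the server builds a view context and interprets events against it;
# the client renders the view context and captures user events.

class ViewContextCreator:
    """Server side: combines monitored content, additional information,
    and device functionality into a view context."""
    def create(self, program, extra_info, device_caps):
        controls = ["play", "pause", "rewind", "fast_forward"]
        if not device_caps.get("touch"):
            controls = controls[:2]          # simpler layout for basic devices
        return {"program": program, "info": extra_info, "controls": controls}

class EventInterpreter:
    """Server side: interprets client events in view of the view context."""
    def interpret(self, event, view_context):
        if event["type"] == "touch" and event["target"] in view_context["controls"]:
            return event["target"]           # e.g. "pause" -> command to execute
        return None

class DisplayControl:
    """Client side: controls the display of the second screen device."""
    def render(self, view_context):
        print("showing:", view_context["program"], view_context["controls"])

class EventListener:
    """Client side: receives commands from the user and forwards them."""
    def __init__(self, send_to_server):
        self.send = send_to_server
    def on_touch(self, target):
        self.send({"type": "touch", "target": target})

# Wiring the pieces together in-process for the sake of the sketch:
creator, interpreter = ViewContextCreator(), EventInterpreter()
ctx = creator.create("news", {"synopsis": "..."}, {"touch": True})
DisplayControl().render(ctx)
listener = EventListener(lambda ev: print("command:", interpreter.interpret(ev, ctx)))
listener.on_touch("pause")                   # -> command: pause
```

In a real deployment the listener would send its event over the link between the second screen device and the set top box rather than calling the interpreter in-process.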
The present principles may be better understood in accordance with the following exemplary figures.
The present principles are directed to user interfaces and more particularly to a software system which provides a dynamic user interface for the navigation and control of media content.
It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the present invention and are included within its spirit and scope.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the present invention and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the present invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the present invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read-only memory (“ROM”) for storing software, random access memory (“RAM”), and non-volatile storage.
Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The present invention as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
Reference in the specification to “one embodiment” or “an embodiment” of the present invention, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment”, as well any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment.
Turning now to FIG. 1, a block diagram of an embodiment of a system for delivering content to a home or end user is shown. One form of content is broadcast content, which is provided to a broadcast affiliate manager 104 and delivered over a first delivery network, delivery network 1 (106).
A second form of content is referred to as special content. Special content may include content delivered as premium viewing, pay-per-view, or other content otherwise not provided to the broadcast affiliate manager, e.g., movies, video games or other video elements. In many cases, the special content may be content requested by the user. The special content may be delivered to a content manager 110. The content manager 110 may be a service provider, such as an Internet website, affiliated, for instance, with a content provider, broadcast service, or delivery network service. The content manager 110 may also incorporate Internet content into the delivery system. The content manager 110 may deliver the content to the user's receiving device 108 over a separate delivery network, delivery network 2 (112). Delivery network 2 (112) may include high-speed broadband Internet type communications systems. It is important to note that the content from the broadcast affiliate manager 104 may also be delivered using all or parts of delivery network 2 (112) and content from the content manager 110 may be delivered using all or parts of delivery network 1 (106). In addition, the user may also obtain content directly from the Internet via delivery network 2 (112) without necessarily having the content managed by the content manager 110.
Several adaptations for utilizing the separately delivered content may be possible. In one possible approach, the special content is provided as an augmentation to the broadcast content, providing alternative displays, purchase and merchandising options, enhancement material, etc. In another embodiment, the special content may completely replace some programming content provided as broadcast content. Finally, the special content may be completely separate from the broadcast content, and may simply be a media alternative that the user may choose to utilize. For instance, the special content may be a library of movies that are not yet available as broadcast content.
The receiving device 108 may receive different types of content from one or both of delivery network 1 and delivery network 2. The receiving device 108 processes the content, and provides a separation of the content based on user preferences and commands. The receiving device 108 may also include a storage device, such as a hard drive or optical disk drive, for recording and playing back audio and video content. Further details of the operation of the receiving device 108 and features associated with playing back stored content will be described below in relation to FIG. 2.
The receiving device 108 may also be interfaced to a second screen control device 116, such as a touch screen control device. The second screen control device 116 may be adapted to provide user control for the receiving device 108 and/or the display device 114. The second screen device 116 may also be capable of displaying video content. The video content may be graphics entries, such as user interface entries, or may be a portion of the video content that is delivered to the display device 114. The second screen control device 116 may interface to the receiving device 108 using any well known signal transmission system, such as infra-red (IR) or radio frequency (RF) communications, and may include standard protocols such as the infra-red data association (IRDA) standard, Wi-Fi, Bluetooth and the like, or any other proprietary protocols. Operations of the touch screen control device 116 will be described in further detail below.
In the example of FIG. 1, the second screen control device 116 interfaces with the receiving device 108 to control the content presented on the display device 114.
Turning now to FIG. 2, a block diagram of an embodiment of a receiving device 200 is shown.
In the device 200 shown in FIG. 2, the content is received by an input signal receiver 202. The input signal receiver 202 may be one of several known receiver circuits used for receiving, demodulating, and decoding signals provided over one of several possible networks, including over-the-air, cable, satellite, Ethernet, fiber, and phone line networks.
The decoded output signal is provided to an input stream processor 204. The input stream processor 204 performs the final signal selection and processing, and includes separation of video content from audio content for the content stream. The audio content is provided to an audio processor 206 for conversion from the received format, such as a compressed digital signal, to an analog waveform signal. The analog waveform signal is provided to an audio interface 208 and further to the display device or audio amplifier. Alternatively, the audio interface 208 may provide a digital signal to an audio output device or display device using a High-Definition Multimedia Interface (HDMI) cable or alternate audio interface such as a Sony/Philips Digital Interconnect Format (SPDIF) interface. The audio interface may also include amplifiers for driving one or more sets of speakers. The audio processor 206 also performs any necessary conversion for the storage of the audio signals.
The video output from the input stream processor 204 is provided to a video processor 210. The video signal may be one of several formats. The video processor 210 provides, as necessary, a conversion of the video content based on the input signal format. The video processor 210 also performs any necessary conversion for the storage of the video signals.
A storage device 212 stores audio and video content received at the input. The storage device 212 allows later retrieval and playback of the content under the control of a controller 214 and also based on commands, e.g., navigation instructions such as fast-forward (FF) and rewind (Rew), received from a user interface 216 and/or control interface 222. The storage device 212 may be a hard disk drive, one or more large capacity integrated electronic memories, such as static RAM (SRAM), or dynamic RAM (DRAM), or may be an interchangeable optical disk storage system such as a compact disk (CD) drive or digital video disk (DVD) drive.
The converted video signal, from the video processor 210, either originating from the input or from the storage device 212, is provided to the display interface 218. The display interface 218 further provides the display signal to a display device of the type described above. The display interface 218 may be an analog signal interface such as red-green-blue (RGB) or may be a digital interface such as HDMI. It is to be appreciated that the display interface 218 will generate the various screens for presenting the search results in a three-dimensional grid as will be described in more detail below.
The controller 214 is interconnected via a bus to several of the components of the device 200, including the input stream processor 204, audio processor 206, video processor 210, storage device 212, and a user interface 216. The controller 214 manages the conversion process for converting the input stream signal into a signal for storage on the storage device or for display. The controller 214 also manages the retrieval and playback of stored content. Furthermore, as will be described below, the controller 214 performs searching of content and the creation and adjusting of the grid display representing the content, either stored or to be delivered via the delivery networks described above.
The controller 214 is further coupled to control memory 220 (e.g., volatile or non-volatile memory, including RAM, SRAM, DRAM, ROM, programmable ROM (PROM), flash memory, erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), etc.) for storing information and instruction code for controller 214. Control memory 220 may also store a database of elements, such as graphic elements containing content. The database may be stored as a pattern of graphic elements. Alternatively, the memory may store the graphic elements in identified or grouped memory locations and use an access or location table to identify the memory locations for the various portions of information related to the graphic elements. Additional details related to the storage of the graphic elements will be described below. Further, the implementation of the control memory 220 may include several possible embodiments, such as a single memory device or, alternatively, more than one memory circuit communicatively connected or coupled together to form a shared or common memory. Still further, the memory may be included with other circuitry, such as portions of bus communications circuitry, in a larger circuit.
The user interface process of the present disclosure employs an input device that can be used to express functions, such as fast forward, rewind, etc. To allow for this, a second screen control device such as a touch panel device 300 may be interfaced via the user interface 216 and/or control interface 222 of the receiving device 200, as shown in FIG. 3.
Turning now to FIG. 4, a number of gestures that may be used to control the system are illustrated.
Bumping 420 is defined by a two-stroke drawing indicating pointing in one direction, either up, down, left or right. The bumping gesture is associated with specific commands in context. For example, in a TimeShifting mode, a left-bump gesture 420 indicates rewinding, and a right-bump gesture indicates fast-forwarding. In other contexts, a bump gesture 420 is interpreted to increment a particular value in the direction designated by the bump. Checking 430 is defined as drawing a checkmark. It is similar to a downward bump gesture 420. Checking is identified in context to designate a reminder, a user tag, or the selection of an item or element. Circling 440 is defined as drawing a circle in either direction. It is possible that both directions could be distinguished. However, to avoid confusion, a circle is identified as a single command regardless of direction. Dragging 450 is defined as an angular movement of the controller (a change in pitch and/or yaw) while pressing a button (virtual or physical) on the tablet 300 (i.e., a "trigger drag"). The dragging gesture 450 may be used for navigation, speed, distance, time-shifting, rewinding, and forwarding. Dragging 450 can be used to move a cursor, a virtual cursor, or a change of state, such as highlighting, outlining, or selecting on the display. Dragging 450 can be in any direction and is generally used to navigate in two dimensions. However, in certain interfaces, it is preferred to modify the response to the dragging command. For example, in some interfaces, operation in one dimension or direction is favored with respect to other dimensions or directions depending upon the position of the virtual cursor or the direction of movement. Nodding 460 is defined by two fast trigger-drag up-and-down vertical movements. Nodding 460 is used to indicate "Yes" or "Accept." X-ing 470 is defined as drawing the letter "X." X-ing 470 is used for "Delete" or "Block" commands. Wagging 480 is defined by two trigger-drag fast back-and-forth horizontal movements. The wagging gesture 480 is used to indicate "No" or "Cancel."
Depending on the complexity of the sensor system, only simple one-dimensional motions or gestures may be allowed. For instance, a simple right or left movement on the sensor may produce a fast forward or rewind function. In addition, multiple sensors could be included and placed at different locations on the touch screen. For instance, a horizontal sensor for left/right movement may be placed in one spot and used for volume up/down, while a vertical sensor for up/down movement may be placed in a different spot and used for channel up/down. In this way, specific gesture mappings may be used, as illustrated in the sketch below.
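The context-dependent gesture mapping described above can be illustrated with a short sketch that resolves a gesture first against the current viewing context and then against a set of context-independent gestures. The gesture names follow the description above; the context names, table layout, and function name are assumptions made for the example.

```python
# Hedged sketch: resolving gestures to commands per viewing context.
# Context names and the exact command strings are assumptions.

GESTURE_COMMANDS = {
    "timeshift": {                      # while time-shifting, bumps seek
        "bump_left": "rewind",
        "bump_right": "fast_forward",
    },
    "guide": {                          # while browsing a guide, bumps page
        "bump_left": "previous_page",
        "bump_right": "next_page",
        "check": "set_reminder",        # checking designates a reminder
    },
}

# Context-independent gestures from the description above.
GLOBAL_GESTURES = {"nod": "accept", "x": "delete", "wag": "cancel"}

def interpret_gesture(gesture, context):
    """Resolve a gesture against the current context, then globally."""
    command = GESTURE_COMMANDS.get(context, {}).get(gesture)
    return command or GLOBAL_GESTURES.get(gesture)

print(interpret_gesture("bump_left", "timeshift"))  # rewind
print(interpret_gesture("bump_left", "guide"))      # previous_page
print(interpret_gesture("wag", "guide"))            # cancel
```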
In one embodiment, the system is a receiving device 108 based software system. The system primarily makes use of the electronic program guide provided by the service provider (e.g., Comcast, Verizon, etc.) to retrieve information related to the program. In an Internet-enabled receiving device 108, the system can also query different web services to get additional information about the program. The major components of the system are shown in FIG. 5.
In currently available receiving devices 108, the user interface is configured statically. In other words, the user interface is prebuilt and is activated on a remote control key press. For example, if the user is watching a sports program, the interface by which the user selects the program will be the same regardless of whether multiple angles of the event are available. User options will multiply with the availability of content from cloud services (the Internet), in which case a statically prebuilt interface will make navigation and selection more complex.
The software system 500 as shown in FIG. 5 is divided into a client side 510, which runs on the second screen control device 540, and a server side 520, which runs on the set top box/gateway 550. The first part of the system is the view context module.
The view context creator 522 is the central piece of the system. The basic idea behind the functionality of the system is the creation of user interface components according to the view context. The view context may depend upon several factors, such as the currently displayed program or content, the user's personal preferences, or the device used as the second screen control device. The tuner component 524 of the system provides the channel identification or program identification of the event to which the set top box or gateway device 550 is currently tuned. The EPG component 526 provides the program guide information available for that particular program. The related data extractor component 528 parses the EPG information further and produces context information for the currently consumed program. This component can optionally contact several cloud services through the data pipe (Internet) and extract more context information. A user profiler 530, which provides user data, can also be used by this component to enrich the context information further.
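A compact way to picture this pipeline is as a single function that queries each component in turn. The interfaces below (method names and return shapes) are assumptions made for illustration; only the component roles come from the description above.

```python
# Sketch of the server-side view context pipeline: tuner -> EPG ->
# related data extractor (optionally cloud services) -> user profiler.
# All interfaces are assumptions; only the roles follow the text above.

def build_view_context(tuner, epg, extractor, profiler, device_caps):
    program_id = tuner.current_program_id()        # tuner component 524
    guide_info = epg.lookup(program_id)            # EPG component 526
    context = extractor.extract(guide_info)        # related data extractor 528
    context["preferences"] = profiler.user_data()  # user profiler 530
    return {
        "program": program_id,
        "context": context,
        # rendering hints depend on the second screen device in use
        "layout": "rich" if device_caps.get("touch") else "basic",
    }
```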
In one embodiment, the view context represents a smaller iconic view of the primary screen content enhanced with background information and navigational controls. For example, the view context of a live sports event could contain a downscaled smaller viewport of the live video plus iconic representations of the other available view angles of the event. The view context created by the set top box 550 is sent over to the display control module 512 in the second screen control device 540. The display control module 512 takes care of the rendering of the view context and adapts the rendering according to the device specifics. By having this module, multiple devices varying in display size and capabilities can be used as the second screen control device 540. The set top box/gateway 550 can also have a default display controller 532 which takes care of rendering the view context on the primary display screen 560, such as a television, so that a rudimentary remote control without a display can also be used.
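As a purely illustrative example of such a view context for a live sports event, the structure below pairs a downscaled viewport with iconic entries for the other available camera angles. Every field name and value here is an assumption; the disclosure does not fix a data format.

```python
# Assumed structure of a view context for a live sports event:
# a downscaled viewport plus icons for alternate camera angles.

sports_view_context = {
    "viewport": {"source": "main_feed", "scale": 0.25},  # smaller live view
    "angles": [                                          # iconic alternate views
        {"id": "angle_1", "label": "Behind goal", "icon": "angle_1.png"},
        {"id": "angle_2", "label": "Sideline",    "icon": "angle_2.png"},
    ],
    "controls": ["pause", "rewind", "fast_forward"],
}
```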
The second part of the system is the event module. This also has client side 510 and server side 520 components. The client side 510 component is an event listener 514 running on the second screen control device 540 to capture events happening on the device 540 and transfer the event data to the event interpreter 534 running in the set top box 550. The event data includes all peripheral user events plus associated data, i.e., events raised through the touch screen, accelerometer, compass, proximity sensor, and the like, such as single touch, multi-touch, scroll, tilt, spin, and proximity events.
The functionality of the system is detailed with example scenarios in the following section. These example scenarios explain how the view context or the user interface could be different according to the context of the program.
Suppose the user is watching a wildlife documentary. The system can collect information about the program, such as its program guide entry and related data from web services, and use this information to build the view context.
Consider television programs such as discussion forums or competition events in which the viewers also participate.
Once the view context is created, it is passed to the display control module 512. The view context information will be used by the display controller 512 to form the user interface. The display controller 512 is a functional module in the second screen control device 540 which adapts the user interface according to the capability of the device. The set top box/gateway 550 can also have a default display controller 532 which provides the user interface displayed on the television or primary display screen 560. The second screen control device 540 should also have an event listener component 514 which captures events and sends them back to the event interpreter 534 in the set top box 550. The event interpreter 534 in the set top box 550 executes the event in the current view context and updates the display.
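This round trip can be sketched as two small functions, one per side. The event payload fields and the in-process handoff are assumptions; in practice the event would travel over the IR/RF/Wi-Fi/Bluetooth link mentioned earlier.

```python
# Sketch of the event round trip: the listener (client) packages the
# event; the interpreter (server) executes it in the view context.
# Payload fields and the in-process handoff are assumptions.

def listen(raw_event, send):
    """Event listener 514: capture a peripheral event and forward it."""
    send({"type": raw_event["type"], "data": raw_event.get("data", {})})

def interpret(event, view_context, render):
    """Event interpreter 534: execute the event, then update the display."""
    if event["type"] == "touch":
        target = event["data"].get("target")
        angles = {a["id"] for a in view_context.get("angles", [])}
        if target in angles:
            view_context["viewport"]["source"] = target  # switch camera angle
    render(view_context)

# Example handoff using a minimal sports view context:
ctx = {"viewport": {"source": "main_feed"}, "angles": [{"id": "angle_1"}]}
listen({"type": "touch", "data": {"target": "angle_1"}},
       lambda ev: interpret(ev, ctx, print))
```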
The view context can be represented using HTML/XML or any other compatible format. If the view context is converted to HTML, a browser can be used as the event listener and event interpreter. An example of this can be seen in FIG. 6.
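For instance, a server-side helper might serialize the view context into a small HTML page whose buttons post events back to the event interpreter. This is a sketch under stated assumptions: the markup, the /event endpoint, and the helper name are all invented for illustration.

```python
# Sketch: serializing a view context to HTML so a browser on the
# second screen acts as display control and event listener.
# The markup and the /event endpoint are assumptions.

def view_context_to_html(view_context):
    buttons = "\n".join(
        f"<button onclick=\"send('{c}')\">{c}</button>"
        for c in view_context["controls"]
    )
    return f"""<html><body>
<p>{view_context["program"]}</p>
{buttons}
<script>
  // post the user's command back to the event interpreter
  function send(cmd) {{ fetch('/event', {{method: 'POST', body: cmd}}); }}
</script>
</body></html>"""

print(view_context_to_html({"program": "news", "controls": ["pause", "rewind"]}))
```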
In the examples of FIGS. 7 and 8, the operation of the system is depicted as a flowchart of method steps (FIG. 7) and as a data flow between the system components (FIG. 8).
The related data extractor 840 obtains the program guide info and user data as well as additional data related to the content from the Internet (step 740) as indicated by arrow 842. All this data is then used by the related data extractor 840 to create a context for the content being displayed, which is provided to the view context creator 850 as indicated by arrow 844.
The view context creator 850 generates a view context (step 760) as well as any updates to the view context necessitated by detected and interpreted events (step 770). The view context is provided to the display controller 860 as indicated by arrow 852. The display controller 860 uses the view context to generate the displayed user interface as indicated by arrow 862.
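Read together, these steps form one loop: gather data and build the view context, render it, then update it as events arrive. The sketch below reuses the assumed helpers sketched earlier (build_view_context and interpret) and is, again, only an illustration.

```python
# Sketch of the overall flow implied by steps 740-770: build the view
# context, render it, then update it on each detected event.
# build_view_context / interpret are the assumed helpers sketched above.

def run(tuner, epg, extractor, profiler, device_caps, render, events):
    ctx = build_view_context(tuner, epg, extractor, profiler, device_caps)
    render(ctx)                        # display controller 860 renders the UI
    for event in events:               # events detected on the second screen
        interpret(event, ctx, render)  # update the view context, re-render
```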
These and other features and advantages of the present principles may be readily ascertained by one of ordinary skill in the pertinent art based on the teachings herein. It is to be understood that the teachings of the present principles may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof.
Most preferably, the teachings of the present principles are implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPU”), a random access memory (“RAM”), and input/output (“I/O”) interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.
It is to be further understood that, because some of the constituent system components and methods depicted in the accompanying drawings are preferably implemented in software, the actual connections between the system components or the process function blocks may differ depending upon the manner in which the present principles are programmed. Given the teachings herein, one of ordinary skill in the pertinent art will be able to contemplate these and similar implementations or configurations of the present principles.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present principles are not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present principles. All such changes and modifications are intended to be included within the scope of the present principles as set forth in the appended claims.
This application claims the benefit of U.S. Provisional Application Ser. No. 61/343,546 filed Apr. 30, 2010, which is incorporated by reference herein in its entirety.
Filing Document: PCT/US11/00753; Filing Date: Apr. 29, 2011; Country: WO; Kind: 00; 371(c) Date: Sep. 14, 2012.