Customization of 3DTV user interface element positions may be provided. In conventional systems, user interface elements are required to share a video plane in the 3D television environment with other elements, such as a content stream. For example, a program information interface element may overlap with portions of the content, causing visual interference, or closed captioning text may be obscured by the content. Furthermore, the depth of user interface elements cannot be changed in existing systems, causing comfort and readability issues for many users.
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate various embodiments of the present disclosure.
Consistent with embodiments of the present disclosure, systems and methods are disclosed for providing a customization of a 3DTV user interface. A content stream, such as a three-dimensional television signal, comprising a plurality of video planes may be displayed. In response to receiving a request to adjust a depth of at least one of the video planes, the display depth of the requested video plane may be adjusted relative to at least one other video plane.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only, and should not be considered to restrict the disclosure's scope, as described and claimed. Further, features and/or variations may be provided in addition to those set forth herein. For example, embodiments of the disclosure may be directed to various feature combinations and sub-combinations described in the detailed description.
The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar elements. While embodiments of the disclosure may be described, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the elements illustrated in the drawings, and the methods described herein may be modified by substituting, reordering, or adding stages to the disclosed methods. Accordingly, the following detailed description does not limit the disclosure. Instead, the proper scope of the disclosure is defined by the appended claims.
A 3D television (3D-TV) is a television set that employs techniques of 3D presentation, such as stereoscopic capture, multi-view capture, or 2D plus depth, and a 3D display—a special viewing device to project a television program into a realistic three-dimensional field. In a 3D-TV signal such as that described in the 3D portion of the High Definition Multimedia Interface HDMI 1.4a specification, three-dimensional images may be displayed to viewing users using stereoscopic images. That is, two slightly different images may be presented to a viewer to create an illusion of depth in an otherwise two-dimensional image. These images may be presented as right-eye and left-eye images that may be viewed through lenses such as anaglyphic (with passive red-cyan lenses), polarizing (with passive polarized lenses), and/or alternate-frame sequencing (with active shutter lenses).
The 3D-TV signal may comprise multiple planes of content. For example, main content may be included on one or more video planes, a channel guide may occupy another plane, and closed-captioning may be displayed on another plane. Consistent with embodiments of the disclosure, each of these planes may be displayed at different relative depths to a viewing user, such as where the closed-captioning plane appears “closer” to the user than the main content.
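The relationship between a plane's left-eye/right-eye image separation and its apparent depth can be sketched as follows. This is a minimal illustrative model, not part of the disclosure: the sign convention (negative disparity appears closer to the viewer), the pixel units, and the plane names are all assumptions.

```python
def eye_offsets(disparity_px: int) -> tuple[int, int]:
    """Split a plane's horizontal disparity into symmetric left/right shifts.

    Negative disparity (crossed images) makes the plane appear in front of
    the screen; positive disparity makes it appear behind the screen.
    """
    half = disparity_px // 2
    return (-half, disparity_px - half)  # (left-image shift, right-image shift)

# Hypothetical per-plane disparities, e.g. captions rendered closer to the
# viewer than the main content, as described above.
plane_disparity = {
    "main_content": 0,       # at screen depth
    "program_guide": -6,     # slightly in front of the content
    "closed_caption": -12,   # closest to the viewer
}

def composite_order(disparities: dict[str, int]) -> list[str]:
    """Render farther planes first so nearer planes overlay them."""
    return sorted(disparities, key=lambda name: disparities[name], reverse=True)
```

With this convention, the total disparity of a plane is the right-image shift minus the left-image shift, so the two symmetric offsets always reconstruct the requested value.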
An out-of-band (OOB) channel coupled with an upstream transmitter may enable STB 100 to interface with the network so that STB 100 may provide upstream data to the network, for example via quadrature phase-shift keying (QPSK) or quadrature amplitude modulation (QAM) channels. This allows a subscriber to interact with the network. Encryption may be added to the OOB channels to provide privacy.
Additionally, STB 100 may comprise a receiver 140 for receiving externally generated information, such as user inputs or commands for other devices. STB 100 may also include one or more wireless or wired communication interfaces (not shown) for receiving and/or transmitting data to other devices. For instance, STB 100 may feature USB (Universal Serial Bus) (for connection to a USB camera or microphone), Ethernet (for connection to a computer), IEEE-1394 (for connection to media devices in an entertainment center), serial, and/or parallel ports. A computer or transmitter may, for example, provide the user inputs via buttons or keys located on the exterior of the terminal, or via a hand-held remote control device 150 or keyboard that includes user-actuated buttons. In the case of bi-directional services, a user input device may capture audiovisual information, such as a camera, microphone, or videophone. As a non-limiting example, STB 100 may feature USB or IEEE-1394 for connection of an infrared wireless remote control, a wired or wireless keyboard, a camcorder with an integrated microphone, or a video camera with a separate microphone.
STB 100 may simultaneously decompress and reconstruct video, audio, graphics and textual data that may, for example, correspond to a live program service. This may permit STB 100 to store video and audio in memory in real-time, to scale down the spatial resolution of the video pictures, as necessary, and to composite and display a graphical user interface (GUI) presentation of the video with respective graphical and textual data while simultaneously playing the audio that corresponds to the video. The same process may apply in reverse and STB 100 may, for example, digitize and compress pictures from a camera for upstream transmission.
A memory 155 of STB 100 may comprise a dynamic random access memory (DRAM) and/or a flash memory for storing executable programs and related data components of various applications and modules for execution by STB 100. Memory 155 may be coupled to processor 125 for storing configuration data and operational parameters, such as commands that are recognized by processor 125. Memory 155 may also be configured to store user preference profiles associated with viewing users.
Method 500 may then advance to stage 520 where computing device 500 may identify a viewer of the plurality of video content planes. For example, STB 100 may store a plurality of user preference profiles in memory 155 each associated with a viewing user. An appropriate profile may be selected for a currently viewing user, such as by receiving a sign-in by the user, receiving a user's selection from a displayed list of profiles, and/or detecting the presence of a user identifier such as a personalized control. For example, some and/or all of the users may be associated with a smartphone-based remote control application that may allow STB 100 to identify which user(s) are viewing the content. Consistent with embodiments of the disclosure, a default preference profile may be used and/or STB 100 may select a most recently used preference profile.
Method 500 may then advance to stage 530 where computing device 500 may adjust an apparent depth of at least one of the plurality of video content planes. For example, in over-under configuration 300, STB 100 may adjust the separation between left-eye image 310 and right-eye image 320 to create an apparent depth that matches a user's preferred setting. This may comprise, for example, setting the apparent depth of a closed-captioning plane to a depth that allows the viewing user to comfortably focus on and read the closed-captioning text.
Method 500 may then advance to stage 540 where computing device 500 may determine whether a request to adjust the depth of at least one of the video content planes has been received. For example, the viewing user may use a remote control to select one of the plurality of video planes 210(A)-(D) and then use the remote control to request a change to the depth of the selected plane.
Method 500 may then advance to stage 550 where computing device 500 may adjust the depth of the at least one of the video content planes. For example, if the viewing user selects video plane 210(A) and then requests that the depth be decreased, the separation between left-eye image 310 and right-eye image 320 may be decreased.
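The depth adjustment at this stage can be sketched as a small helper that changes the left/right image separation. The pixel units, step size, and clamping at zero are illustrative assumptions, not details from the disclosure.

```python
def adjust_separation(separation_px: int, decrease: bool, step_px: int = 2) -> int:
    """Change the left/right image separation for a selected video plane.

    Decreasing the separation reduces the plane's apparent depth, as
    described above. Separation is clamped at zero in this simple model so
    the two eye images never cross over.
    """
    if decrease:
        return max(0, separation_px - step_px)
    return separation_px + step_px
```

Repeated remote-control presses would then be modeled as repeated calls, each moving the plane one step closer to or farther from the screen plane.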
Method 500 may then advance to stage 560 where computing device 500 may determine if the request was received from a new user. For example, STB 100 may determine that the current display depths of plurality of video planes 210(A)-(D) are at a default depth and/or no user profiles have previously been stored. Consistent with embodiments of the disclosure, STB 100 may display a request to the user for confirmation that the user does not have a current profile. Conversely, STB 100 may have already identified the user and selected the appropriate user preference profile, as described above.
If the request to adjust the plane's depth was not received from a new user, method 500 may advance to stage 570 where computing device 500 may update a preference profile associated with the user. For example, STB 100 may store a new preferred depth for the selected video plane in a preference profile in memory 155.
If the request to adjust the plane's depth was received from a new user, method 500 may advance to stage 580 where computing device 500 may create a new preference profile associated with the user. For example, STB 100 may create a preference profile comprising values associated with the current depth for each of plurality of video planes 210(A)-(D). Consistent with embodiments of the disclosure, the preference profile may comprise only those depth values that deviate from a default depth value for the respective video plane.
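The deviations-only preference profile described above can be sketched as a dictionary comprehension. The plane identifiers and default depth values below are hypothetical placeholders, not values from the disclosure.

```python
# Hypothetical default depths for planes 210(A)-(D); not from the disclosure.
DEFAULT_DEPTHS = {"210A": 0, "210B": 0, "210C": -4, "210D": -8}

def build_profile(current: dict[str, int],
                  defaults: dict[str, int] = DEFAULT_DEPTHS) -> dict[str, int]:
    """Record only the depth values that deviate from each plane's default,
    keeping the stored preference profile minimal."""
    return {plane: depth for plane, depth in current.items()
            if depth != defaults.get(plane)}
```

Planes left at their defaults then occupy no storage, and restoring a profile reduces to overlaying the stored deviations onto the default table.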
After the appropriate preference profile is updated at stage 570 or created at stage 580, or if no request to adjust a plane depth was received at stage 540, method 500 may then end at stage 590.
An embodiment consistent with the disclosure may comprise a system for providing a customized interface depth. The system may comprise a memory storage and a processing unit coupled to the memory storage. The processing unit may be operative to display a content stream comprising a plurality of video planes, receive a request to adjust a depth of a first video plane of the plurality of video planes, and, in response to receiving the request, modify the display depth of the first video plane relative to at least one second video plane of the plurality of video planes. The request may be received, for example, from a remote control device. The video planes may comprise interface planes such as a program guide, a closed caption, a channel identifier, a list of recorded programs (e.g., a list of programs stored on a digital video recorder (DVR) and/or a list of programs scheduled for recording in the future), a playback status indicator (e.g., a “play”, “pause”, and/or “record” symbol), and an information banner.
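As a non-authoritative sketch of such a system, the processing unit's behavior can be modeled as a small controller that tracks a relative depth per interface plane and applies adjustment requests received, e.g., from a remote control. The plane names, depth units, and method names are assumptions for illustration, not elements of the claims.

```python
class DepthController:
    """Illustrative sketch of the embodiment above: tracks a relative depth
    for each interface plane and applies remote-control adjustment requests."""

    PLANES = ("program_guide", "closed_caption", "channel_id",
              "recorded_list", "playback_status", "info_banner")

    def __init__(self) -> None:
        # All planes start at the default (screen-level) depth.
        self.depth = {plane: 0 for plane in self.PLANES}

    def on_request(self, plane: str, delta: int) -> int:
        """Apply a depth-adjustment request for one plane and return the
        plane's new depth relative to the other planes."""
        if plane not in self.depth:
            raise KeyError(f"unknown plane: {plane}")
        self.depth[plane] += delta
        return self.depth[plane]
```

Each request modifies only the selected plane, so its display depth changes relative to every other plane, as in the embodiment described above.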
Another embodiment consistent with the disclosure may comprise a system for providing a customized interface depth. The system may comprise a memory storage and a processing unit coupled to the memory storage. The processing unit may be operative to display a content stream comprising a plurality of video planes, identify a viewer of the content stream, and adjust a depth of a first video plane of the plurality of video planes relative to a second video plane of the plurality of video planes according to a preference profile associated with the identified user. The processing unit may be further operative to receive a request to adjust the depth of the first video plane relative to the second video plane and, in response to receiving the request, modify the display depth of the first video plane relative to the second video plane.
Yet another embodiment consistent with the disclosure may comprise a system for providing a customized interface depth. The system may comprise a memory storage and a processing unit coupled to the memory storage. The processing unit may be operative to display a content stream comprising a plurality of video planes associated with a three-dimensional television program, receive a selection of at least one first video plane of the plurality of video planes, receive a request to adjust a depth of the at least one first video plane, and in response to receiving the request, modifying the display depth of the at least one first video plane. The processing unit may be further operative to receive a selection of at least one second video plane of the plurality of video planes, receive a request to adjust a depth of the at least one second video plane, and in response to receiving the request, modify the display depth of the at least one second video plane.
Computing device 600 may be implemented using a personal computer, a network computer, a mainframe, a computing appliance, or other similar microcomputer-based workstation. The processor may comprise any computer operating environment, such as hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronic devices, minicomputers, mainframe computers, and the like. The processor may also be practiced in distributed computing environments where tasks are performed by remote processing devices. Furthermore, the processor may comprise a mobile terminal, such as a smart phone, a cellular telephone, a cellular telephone utilizing wireless application protocol (WAP), personal digital assistant (PDA), intelligent pager, portable computer, a handheld computer, a conventional telephone, a wireless fidelity (Wi-Fi) access point, or a facsimile machine. The aforementioned systems and devices are examples and the processor may comprise other systems or devices.
Embodiments of the present disclosure, for example, are described above with reference to block diagrams and/or operational illustrations of methods, systems, and computer program products according to embodiments of the disclosure. The functions/acts noted in the blocks may occur out of the order as shown in any flowchart. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
While certain embodiments of the disclosure have been described, other embodiments may exist. Furthermore, although embodiments of the present disclosure have been described as being associated with data stored in memory and other storage mediums, data can also be stored on or read from other types of computer-readable media, such as secondary storage devices, like hard disks, floppy disks, or a CD-ROM, a carrier wave from the Internet, or other forms of RAM or ROM. Further, the disclosed methods' stages may be modified in any manner, including by reordering stages and/or inserting or deleting stages, without departing from the disclosure.
All rights including copyrights in the code included herein are vested in and the property of the Applicant. The Applicant retains and reserves all rights in the code included herein, and grants permission to reproduce the material only in connection with reproduction of the granted patent and for no other purpose.
While the specification includes examples, the disclosure's scope is indicated by the following claims. Furthermore, while the specification has been described in language specific to structural features and/or methodological acts, the claims are not limited to the features or acts described above. Rather, the specific features and acts described above are disclosed as examples for embodiments of the disclosure.