1. Field
Embodiments of the present invention may relate to a method for displaying a three-dimensional (3D) User Interface (UI) and an apparatus for processing an image signal. More particularly, embodiments of the present invention may relate to a 3D UI displaying method and apparatus for indicating an item selected from a hierarchical 3D UI with increased visibility.
2. Background
With the growth of digital broadcasting, digital content has increased in amount and diversified, as compared to analog broadcasting.
While most digital content is two-dimensional (2D), more and more 3D content has been produced to allow a viewer to enjoy a 3D effect and a sense of reality. Given this trend, it is expected that 3D content will be enjoyed through a receiver at home.
A User Interface (UI) for accessing and controlling a broadcast receiver and content may be provided as a two-dimensional (2D) UI. However, if a viewer selects 3D content or a 3D mode, the viewer may feel confusion or inconvenience because of the provided 2D UI.
When 2D content coexists with 3D content among received content, a receiver may not identify and indicate the 3D content when providing content information to a viewer. Therefore, if the viewer selects 3D content based on the content information provided by the receiver, without knowing that it is 3D content, the viewer may see broken and/or flickering images.
Arrangements and embodiments may be described in detail with reference to the accompanying drawings, in which like reference numerals refer to like elements.
Reference may now be made in detail to preferred embodiments of the present invention, examples of which may be illustrated in the accompanying drawings. The same reference numbers may be used throughout the drawings to refer to the same or like parts. In addition, although the terms are selected from generally known and used terms, some of the terms mentioned in the description of embodiments have been selected by the applicant at his or her discretion, the detailed meanings of which may be described in relevant parts of the description herein. Further, embodiments of the present invention may be understood, not simply by the actual terms used but by the meaning of each term lying within.
A method for displaying a three-dimensional (3D) User Interface (UI) and an apparatus for processing a signal according to exemplary embodiments of the present invention may be described below in detail.
3D Menu UI Displaying Method
A signal processing apparatus may process 3D image data based on a stereoscopic principle. More specifically, left image data (or first image data) and right image data (or second image data) of one object may be created by capturing the object with two cameras at different positions and may be input perpendicularly to the human left and right eyes, respectively. Thus, the image data input through the left and right eyes may be combined into a 3D image in the brain of the viewer. Here, inputting the left image data and the right image data perpendicularly means that the input data do not interfere with each other.
As shown in
As shown in
The distance d1 is greater (or larger) than the distance d2. This implies that the image is formed farther from the left and right eyes in
The distance from the image to the left and right eyes may depend on lateral spacing between the left and right image data along a horizontal direction in
For example, the spacing between the left image data 102 and the right image data 101 in
Accordingly, as spacing between the left image data and the right image data is decreased, an image obtained by combining the left image data and the right image data may be perceived as being farther from the human eye.
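For illustration only, the spacing-to-depth relationship described above may be sketched as follows. This sketch is not part of the original disclosure; the function name, the linear mapping, and the numeric ranges are assumptions chosen purely for clarity.

```python
# Illustrative sketch of the stereoscopic principle described above: the
# lateral spacing between left image data and right image data determines
# how near or far the combined image is perceived. The linear model and
# the pixel/level ranges below are assumptions, not part of the disclosure.

def perceived_depth_level(spacing_px, max_spacing_px=100, max_level=100):
    """Map lateral spacing (in pixels) to a perceived depth level.

    Zero spacing corresponds to the screen plane; a larger spacing makes
    the combined image appear to protrude more toward the viewer, while a
    smaller spacing makes it appear farther away.
    """
    spacing_px = max(0, min(spacing_px, max_spacing_px))
    return max_level * spacing_px / max_spacing_px

print(perceived_depth_level(10))  # small spacing -> low depth level (far)
print(perceived_depth_level(80))  # large spacing -> high depth level (near)
```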
Based on this principle, a sense of depth may be applied when a 3D menu UI is configured.
The spacing between left image data 210 (or first image data) and right image data 220 (or second image data) that form a 3D menu UI in
An image resulting from combining the left image data and the right image data may give a different sense of perspective to the human eye depending on spacing between the left image data and the right image data.
An image formed by combining the left image data 210 with the right image data 220 may look like a 3D menu UI 230 to the human eye based on spacing between the left image data 210 and the right image data 220 in
An image formed by combining the left image data 310 with the right image data 320 may look like a 3D menu UI 330 to the human eye based on spacing between the left image data 310 and the right image data 320 in
Compared with the 3D menu UI 230 shown in
The 3D menu UI shown in
When a user selects, for example, a color calibration item from the menu UI, only the left image data and the right image data of the color calibration item may be laterally spaced from each other as shown in
Referring to
A 2D menu UI may be compared with a 3D menu UI based on the description of
When the user selects Menu 1, the selection of Menu 1 may be shown by changing a color of Menu 1 or a shading of Menu 1 in the 2D menu UI. In the presence of lower menu items of the selected menu item 511 (i.e., Menu 1), a secondary menu 610 may be displayed as shown in
If the user selects Menu 1, Menu 1 (i.e., menu item 551′ in
Menu items 651 to 654 of the secondary menu 650 (i.e., Menu 1-1 to Menu 1-4) may also have a specific perceived depth. The depth level of the secondary menu 650 may not necessarily be equal to the depth level of the primary menu 550. When Menu 1-1 is selected, Menu 1-1 may have a different perceived depth level from the other menu items Menu 1-2, Menu 1-3 and Menu 1-4, as shown in
A selected menu item may be identified, on the whole, in a color on a plane in the 2D menu UI, whereas a selected menu item may be indicated by a perceived depth that is realized based on spacing between the left image data and the right image data of the selected menu item in the 3D menu UI.
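As a purely illustrative sketch (the names, spacings, and data structures are assumptions, not part of the disclosure), this difference between the selected item and the other items could be expressed by assigning each item a pair of left/right horizontal offsets:

```python
# Sketch (assumed names and values) of giving a selected menu item a larger
# left/right spacing, and therefore a greater perceived depth, than the
# other items of the same menu.

def menu_item_offsets(items, selected, base_spacing=4, selected_spacing=20):
    """Return per-item horizontal offsets (left_dx, right_dx) in pixels."""
    offsets = {}
    for name in items:
        spacing = selected_spacing if name == selected else base_spacing
        # Shifting the left image data to the right and the right image data
        # to the left makes the fused item appear to protrude toward the viewer.
        offsets[name] = (+spacing // 2, -spacing // 2)
    return offsets

print(menu_item_offsets(["Menu 1", "Menu 2", "Menu 3", "Menu 4"], "Menu 1"))
```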
As shown in
While the values of many menu items may be changed, changing the sound volume is described here as an example.
A sound volume may be set to a level 10 as shown in
When the user changes the sound volume, circular cylinder (or cone-shaped) left image data and right image data representing the changed sound volume may be spaced from each other by an amount proportional to the sound volume variation. Thus, the image (or menu item) representing sound volume may be perceived by a viewer as protruding.
A sense of volume corresponding to a value may be expressed in a form of a circular cylinder image as shown in
As shown in
As shown in
Assuming that sound volume ranges from level 0 to level 100, the spacing between the left image data and the right image data of a menu item may be set according to the value set for the menu item by the user, and the perceived depth of the menu item may thus change based on the spacing, providing a sense of volume to the user (or viewer). For example, as the value of a menu item increases, the menu item may get a higher depth level and may be perceived as protruding more (as the left image data and the right image data are laterally spaced further apart from each other).
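A minimal sketch of this value-to-depth relationship is given below; the linear mapping and the pixel range are assumptions used only to make the idea concrete.

```python
# Sketch (assumed) of mapping a menu item's value, such as a sound volume
# between 0 and 100, to the lateral spacing of its left and right image
# data, so that a larger value is perceived as protruding more.

def spacing_for_value(value, max_value=100, max_spacing_px=60):
    """Return the lateral spacing (in pixels) for a given item value."""
    value = max(0, min(value, max_value))
    return max_spacing_px * value / max_value

for volume in (0, 10, 50, 100):
    print(f"volume {volume} -> spacing {spacing_for_value(volume)} px")
```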
In
While menu items may be represented in circular cylinders or bars, the menu items may take a form of a tetrahedron, and/or the like. That is, other embodiments may be provided.
In
In
The weights may be pre-mapped to depth levels, and specifically weight 1 to depth level 25 (i.e., 3D effect level 25), weight 2 to depth level 50 (i.e., 3D effect level 50), weight 3 to depth level 75 (i.e., 3D effect level 75), and weight 4 to depth level 100 (i.e., 3D effect level 100).
When a menu item with weight 1 (i.e., 3D effect level 25) is selected, the menu item may be perceived by a viewer as protruding more and more gradually as its depth increases stepwise from level 0 to level 25, rather than reaching level 25 all at once.
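The weight-to-depth mapping and the stepwise increase described above could be sketched as follows; only the weight-to-level mapping comes from the text, while the step count and the helper names are assumptions for illustration.

```python
# Illustrative sketch of the weight-to-depth mapping described above and of
# the stepwise (gradual) increase of a selected item's perceived depth.

WEIGHT_TO_DEPTH_LEVEL = {1: 25, 2: 50, 3: 75, 4: 100}

def depth_steps(weight, steps=5):
    """Yield intermediate depth levels from 0 up to the weight's target level."""
    target = WEIGHT_TO_DEPTH_LEVEL[weight]
    for i in range(1, steps + 1):
        yield target * i / steps

# A menu item with weight 1 (target level 25) protrudes gradually:
print(list(depth_steps(1)))  # [5.0, 10.0, 15.0, 20.0, 25.0]
```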
As circular left image data 1010 and circular right image data 1020 are spaced apart from each other and toward both sides (i.e., their lateral spacing is increased), as shown in
In
If a menu item 1120-1 under Menu 1-1, namely Menu 1-2, is selected through a remote controller as shown in
The selected menu item 1120-1, Menu 1-2, may have a perceived depth that increases gradually over time. Therefore, the resulting menu item 1120-2, Menu 1-2, may be perceived as being more protruding gradually in
Signal Processing Apparatus
As shown in
The DTV main board 1210 may primarily process image data in a received signal. For example, the DTV main board 1210 may be a general DTV for processing a digital broadcast signal. The primary processing may refer to a series of processes including tuning of a channel carrying a digital broadcast signal including image data, reception of the digital broadcast signal through the tuned channel, demodulation and demultiplexing of the digital broadcast signal, and decoding the image data demultiplexed from the digital broadcast signal.
The DTV main board 1210 may include a UI controller 1211 for configuring a 3D menu UI and a Frame Rate Converter (FRC) 1212. The FRC 1212 may be configured separately from the DTV main board 1210. The FRC 1212 may be incorporated into the 3D formatter 1220 in a single module.
The UI controller 1211 may control a 3D menu UI to be configured on the display module 1230. As described above with reference to
The FRC 1212 may process an input image signal and a control signal received from the UI controller 1211 in correspondence with an output frequency of the display module 1230. For example, if the image signal from the DTV main board 1210 is a 60 Hz signal and the output frequency of the display module 1230 is 240 Hz, the FRC 1212 may process the image signal in a predetermined scheme such that the frequency of the image signal becomes 240 Hz. The predetermined scheme may be temporal interpolation of the input image signal or simple repetition of each frame of the input image signal.
For ease of description, it is assumed that the output frequency of the display module 1230 is 240 Hz, although embodiments are not limited thereto.
The temporal interpolation scheme may convert the input 60 Hz image signal into a 240 Hz image signal by generating frames at intermediate temporal positions (0, 0.25, 0.5 and 0.75) within each input frame interval.
The simple repetition scheme may repeat each frame of the input 60 Hz image signal three times (i.e., four occurrences of each frame in total) so that the frame rate becomes 240 Hz.
The temporal interpolation or the simple repetition may be appropriately selected according to format of the input 3D image in the FRC 1212.
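For illustration only, the two schemes could be sketched as below. Frames are reduced to plain numbers, the linear blend merely stands in for real motion-compensated interpolation, and all names are assumptions rather than part of the disclosure.

```python
# Sketch of the two 60 Hz -> 240 Hz frame-rate conversion schemes mentioned
# above. A real FRC operates on image frames and typically uses
# motion-compensated interpolation rather than the simple blend shown.

def simple_repetition(frames, factor=4):
    """Repeat each input frame `factor` times (four occurrences per frame)."""
    return [f for f in frames for _ in range(factor)]

def temporal_interpolation(frames, factor=4):
    """Insert frames at fractional positions 0, 1/4, 2/4, 3/4 of each interval."""
    out = []
    for cur, nxt in zip(frames, frames[1:] + frames[-1:]):
        for k in range(factor):
            t = k / factor
            out.append(cur * (1 - t) + nxt * t)  # stand-in for interpolation
    return out

frames_60hz = [0, 1, 2]
print(simple_repetition(frames_60hz))       # 12 output frames ("240 Hz")
print(temporal_interpolation(frames_60hz))  # 12 output frames with intermediates
```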
A description may now be provided of processing the 3D image data that was primarily processed in the DTV main board 1210.
The 3D formatter 1220 may include a switch 1221 and a core 1222 for configuring the 3D image data that was matched to the output frequency by the FRC 1212, according to an output format (i.e., a 3D format).
The switch 1221 may process 2D image data as well as 3D image data, received from the DTV main board 1210. Upon receipt of the 2D image data from the DTV main board 1210, the switch 1221 may switch the 2D image data to bypass the core 1222. On the other hand, if the received image data is of a 3D format, the switch 1221 may switch it to the core 1222.
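A minimal sketch of this switching behavior is shown below; the function names and the string representation are assumptions, and the core formatting step is only a placeholder.

```python
# Illustrative sketch (assumed names) of the switch behavior described
# above: 2D image data bypasses the formatter core, while 3D image data is
# routed through the core to be arranged in the output 3D format.

def format_for_display(image_data, is_3d):
    if not is_3d:
        return image_data              # 2D data bypasses the core
    return core_format(image_data)     # 3D data is formatted by the core

def core_format(image_data):
    # Placeholder for arranging left/right image data in the panel's output
    # 3D format; the actual arrangement depends on the display module.
    return f"3D-formatted({image_data})"

print(format_for_display("2D frame", is_3d=False))
print(format_for_display("L/R frame pair", is_3d=True))
```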
The 3D formatter 1220 may output the configured 3D image data to the display module 1230. The 3D formatter 1220 may generate a synchronization signal V_sync for the 3D image data, for synchronization during viewing, and output the synchronization signal V_sync to an InfraRed (IR) emitter (not shown) so that the user may view the image with shutter glasses (or goggles) in synchronization with the display module 1230. The IR emitter may output the synchronization signal V_sync to a light receiver of the shutter glasses. The shutter glasses may be synchronized to the 3D image data output from the display module 1230 by adjusting a shutter open period according to the synchronization signal V_sync.
As shown in
The tuner 1301 may tune a specific channel to receive a broadcast signal including image data.
The demodulator 1302 may demodulate the received broadcast signal.
The DEMUX 1303 may demultiplex the demodulated broadcast signal by Packet Identifier (PID) filtering.
The external input receiver 1304 may be connected to an external input device in various manners and receive a signal from the external input device. The external input may be High Definition Multimedia Interface/Digital Video Interface (HDMI/DVI) input, Component input, or Red, Green, Blue (RGB) input.
The key input receiver 1306 may receive a user input through a remote controller or a local key.
The controller/MPEG decoder 1305 may decode the image data in the signal received via the tuner 1301 or the external input receiver 1304 and output a control signal according to a user command received from the key input receiver 1306.
The 3D UI processor 1307 may generate depth information and display time information about a UI and may make the UI look three-dimensional based on the generated depth information and the generated display time information about the highlighted UI. That is, the 3D UI processor 1307 may increase visibility of a menu item selected from a hierarchical menu by differentiating the perceived depth of the selected menu item over time. The 3D UI processor 1307 may maximize visibility of a 3D UI by differentiating the perceived depth of the 3D UI based on a user setting value. A major function of the 3D UI processor 1307 may be to add weight information, based on a UI bitmap and a depth, to a 2D UI in order to give various senses of depth to the 2D UI.
The mixer 1308 may mix the output of the controller/MPEG decoder 1305 with the output of the 3D UI processor 1307 and output the mixed signal to the FRC 1309.
The FRC 1309 may convert a 2-channel Low Voltage Differential Signaling (2CH LVDS) signal to a 120 Hz 4CH LVDS signal by motion compensation.
Upon receipt of 2D image data, the 3D formatter 1310 may simply output the received 2D image data. Upon receipt of 3D image data, the 3D formatter 1310 may mix the left image data and the right image data constituting the 3D image data into an LVDS signal and output the LVDS signal through the display module 1311.
Referring to
In operation S1403, the demultiplexed image signal may be MPEG-decoded.
In operation S1404, 3D UI information may be configured so that the UI looks three-dimensional, based on depth information and display time information about the UI information to be highlighted.
The configured 3D UI information may increase visibility of a menu item selected from a hierarchical menu by changing a perceived depth of the selected item over time, and may maximize visibility of the 3D UI by differentiating the perceived depth of the 3D UI based on a user setting value.
In operation S1405, the MPEG-decoded image signal may be mixed with the configured 3D UI information. The mixed image signal and the 3D UI information may be configured in a 3D format in operation S1406 and may be output through the LCD module in operation S1407.
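Purely for illustration, the ordering of these operations could be sketched with placeholder functions as below. Every name and the string-based data representation are assumptions, and the earlier operations (reception through demultiplexing) are summarized in a single stub because they are not detailed in this section.

```python
# Hypothetical sketch of the processing flow described above, with stub
# functions standing in for the apparatus blocks. Only the ordering of the
# operations reflects the text; all names are assumptions for illustration.

def receive_and_demultiplex(channel):   # tuning, demodulation, demultiplexing
    return f"transport stream (channel {channel})"

def mpeg_decode(stream):                # operation S1403
    return f"decoded image signal from {stream}"

def configure_3d_ui(request):           # operation S1404: depth information
    return f"3D UI for '{request}'"     # and display time information

def mix(video, ui):                     # operation S1405
    return f"{video} mixed with {ui}"

def format_3d(mixed):                   # operation S1406
    return f"3D-formatted output: {mixed}"

def display(output):                    # operation S1407
    print(output)

display(format_3d(mix(mpeg_decode(receive_and_demultiplex(7)),
                      configure_3d_ui("volume menu"))))
```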
Accordingly, embodiments of the present invention may be directed to a 3D UI displaying method and an image signal processing apparatus for increasing visibility of an item selected from a hierarchical 3D UI by applying a different perceived depth level to the selected item in the 3D UI.
An embodiment of the present invention may provide a method and apparatus for indicating determined 3D contents distinguishably from other contents.
A method may be provided for displaying a User Interface (UI). The method may include (a) receiving, at a receiving unit, a request for displaying a UI including one or more items, (b) changing, at a UI controller, a (perceived) depth of the UI according to the request, and (c) displaying, at a display unit, the UI with the changed (perceived) depth.
The UI may include changed (perceived) depths of all items in the UI.
The UI may include only a changed (perceived) depth of a selected item in the UI.
The perceived depth of the UI may be changed by controlling a spacing between first image data and second image data that configure the UI.
The perceived depth of the UI may be changed stepwise over time.
The perceived depth of each item in the UI may be differentiated according to a value set for each item.
A method for displaying a User Interface (UI) may include (a) displaying, at a display unit, a first UI among hierarchical UIs, (b) changing, at a UI controller, a (perceived) depth of an item, upon selection of the item from the displayed first UI, and (c) displaying, at the display unit, the changed first UI and a second UI associated with the selected item, the second UI having a (perceived) depth different from a (perceived) depth of the first UI.
The perceived depth of the selected item in the first UI may be changed by adjusting a spacing between first image data and second image data that configure the first UI.
The perceived depth of the selected item in the first UI may be changed stepwise over time.
The perceived depth of the selected item in the first UI may be different from a perceived depth of other items in the first UI.
The perceived depth of the second UI may be larger than the perceived depth of the first UI.
The method may further include changing, at the UI controller, a perceived depth of an item, upon selection of the item from the displayed second UI and displaying, at the display unit, the second UI including the item with the changed perceived depth.
The perceived depth of each item in the first UI and the second UI may be differentiated based on a value set for the item.
An apparatus may be provided for processing a signal. The apparatus may include a receiving unit for receiving a request for displaying a User Interface (UI) including one or more items, a UI controller for generating a control signal according to the request to apply a predetermined depth to the UI, a formatter for configuring the UI to have the predetermined depth in a three-dimensional (3D) format, and a display unit for displaying the UI configured in the 3D format.
The UI controller may control a spacing between first image data and second image data that configure the UI so that the UI has the predetermined depth.
The UI controller may control the spacing stepwise over time.
The UI controller may control a perceived depth of each item in the UI to be different based on a value set for the item.
Upon receipt of a request for selecting a specific item in the displayed UI from the receiver, the UI controller may control a perceived depth of the selected specific item to be different from other items in the UI.
Any reference in this specification to “one embodiment,” “an embodiment,” “example embodiment,” etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with any embodiment, it is submitted that it is within the purview of one skilled in the art to effect such feature, structure, or characteristic in connection with other ones of the embodiments.
Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.
This application claims priority and benefit from U.S. Provisional Application No. 61/223,385, filed Jul. 7, 2009, the subject matter of which is hereby incorporated by reference.
Number | Date | Country
---|---|---
20110010666 A1 | Jan 2011 | US

Number | Date | Country
---|---|---
61223385 | Jul 2009 | US