1. Field of the Invention
The present invention relates to techniques for formatting video frames.
2. Background Art
Images may be generated for display in various forms. For instance, television (TV) is a widely used telecommunication medium for transmitting and displaying images in monochromatic (“black and white”) or color form. Conventionally, images are provided in analog form and are displayed by display devices in two dimensions. More recently, images are being provided in digital form for display in two dimensions on display devices having improved resolution (e.g., “high definition” or “HD”). Even more recently, images capable of being displayed in three dimensions are being generated.
Conventional displays may use a variety of techniques to achieve three-dimensional image viewing functionality. For example, various types of glasses have been developed that may be worn by users to view three-dimensional images displayed by a conventional display. Examples of such glasses include glasses that utilize color filters or polarized filters. In each case, the lenses of the glasses pass two-dimensional images of differing perspective to the user's left and right eyes. The images are combined in the visual center of the brain of the user to be perceived as a three-dimensional image. In another example, synchronized left eye, right eye LCD (liquid crystal display) shutter glasses may be used with conventional two-dimensional displays to create a three-dimensional viewing illusion. In still another example, LCD display glasses are being used to display three-dimensional images to a user. The lenses of the LCD display glasses include corresponding displays that provide images of differing perspective to the user's eyes, to be perceived by the user as three-dimensional.
Some displays are configured for viewing three-dimensional images without the user having to wear special glasses, such as by using techniques of autostereoscopy. For example, a display may include a parallax barrier that has a layer of material with a series of precision slits. The parallax barrier is placed proximal to a display so that a user's eyes each see a different set of pixels to create a sense of depth through parallax. Another type of display for viewing three-dimensional images is one that includes a lenticular lens. A lenticular lens includes an array of magnifying lenses configured so that when viewed from slightly different angles, different images are magnified. Displays are being developed that use lenticular lenses to enable autostereoscopic images to be generated.
Techniques for achieving three-dimensional image viewing functionality often use predefined frame formats in an attempt to ensure that displays and other components in display systems are capable of interpreting the data that are included in frame sequences. For instance, a High-Definition Multimedia Interface (HDMI) industry standard currently provides support for three-dimensional video communication cabling. However, HDMI and other such standards are directed to rigid frame structures supporting, for example, only a single full-screen left eye frame sequence and a single full-screen right eye frame sequence to be consumed by using shutter lens glasses.
Methods, systems, and apparatuses are described for frame formatting supporting mixed two and three dimensional video data communication, substantially as shown in and/or described herein in connection with at least one of the figures, as set forth more completely in the claims.
The accompanying drawings, which are incorporated herein and form part of the specification, illustrate embodiments of the present invention and, together with the description, further serve to explain the principles involved and to enable a person skilled in the relevant art(s) to make and use the disclosed technologies.
The features and advantages of the disclosed technologies will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
The following detailed description refers to the accompanying drawings that illustrate exemplary embodiments of the present invention. However, the scope of the present invention is not limited to these embodiments, but is instead defined by the appended claims. Thus, embodiments beyond those shown in the accompanying drawings, such as modified versions of the illustrated embodiments, may nevertheless be encompassed by the present invention.
References in the specification to “one embodiment,” “an embodiment,” “an exemplary embodiment,” or the like, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Furthermore, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the relevant art(s) to implement such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Furthermore, it should be understood that spatial descriptions (e.g., “above,” “below,” “up,” “left,” “right,” “down,” “top,” “bottom,” “vertical,” “horizontal,” etc.) used herein are for purposes of illustration only, and that practical implementations of the structures described herein can be spatially arranged in any orientation or manner.
Exemplary embodiments relate to frame formatting supporting mixed two and three dimensional video data communication. For example, frames in frame sequence(s) may be formatted to indicate that a first screen configuration (a.k.a. display configuration) is to be used for displaying first video content, that a second screen configuration is to be used for displaying second video content, and so on.
Any suitable screen configuration may be used to display video content. For example, a two-dimensional (2D) configuration is used to display a 2D representation of the video content. In another example, a three-dimensional (3D) configuration is used to display a 3D representation of the video content. A 3D configuration may include any number of viewpoints (a.k.a. perspectives), two of which may be combined to provide a three-dimensional viewing experience. For instance, a 3D configuration that includes n viewpoints is said to be a 3Dn configuration, where n is a positive integer greater than or equal to two. The configurations that are used to display the different video contents may be different or the same.
Different video contents may be associated with respective regions of a screen. For instance, the first video content may be associated with a first region of the screen; the second video content may be associated with a second region of the screen, and so on. The frames in the frame sequence(s) may be formatted to indicate the associations of the video contents with the regions of the screen. The regions of the screen may partially overlap, fully overlap, not overlap, be configured such that one or more regions are within one or more other regions, etc.
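For illustration only, the following C sketch shows one way that a per-content descriptor might pair a screen configuration (2D or 3Dn, represented here simply by a count of perspectives) with the screen region assigned to that content, and how the relationship between two regions (no overlap, partial overlap, or full overlap) might be classified. The structure names (ScreenConfig, ScreenRegion, ContentDescriptor) and the rectangle-based region model are assumptions made for this sketch and are not defined by the frame formats described herein.

/* Hypothetical sketch (not the patented frame format): one way to pair a
 * screen configuration (2D or 3Dn) with the screen region assigned to a
 * piece of video content, and to classify how two regions relate. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int perspectives;          /* 1 => 2D; n >= 2 => 3Dn (e.g., 2, 4, 8) */
} ScreenConfig;

typedef struct {
    int x, y, width, height;   /* region of the screen, in pixels */
} ScreenRegion;

typedef struct {
    ScreenConfig config;       /* how the content is to be displayed  */
    ScreenRegion region;       /* where on the screen it is displayed */
} ContentDescriptor;

/* Classify the relationship between two regions: no overlap, partial
 * overlap, or full overlap (one region contained within the other). */
static const char *classify_overlap(ScreenRegion a, ScreenRegion b) {
    int ax2 = a.x + a.width,  ay2 = a.y + a.height;
    int bx2 = b.x + b.width,  by2 = b.y + b.height;
    bool disjoint = ax2 <= b.x || bx2 <= a.x || ay2 <= b.y || by2 <= a.y;
    if (disjoint) return "no overlap";
    bool a_in_b = a.x >= b.x && a.y >= b.y && ax2 <= bx2 && ay2 <= by2;
    bool b_in_a = b.x >= a.x && b.y >= a.y && bx2 <= ax2 && by2 <= ay2;
    if (a_in_b || b_in_a) return "full overlap (one region within the other)";
    return "partial overlap";
}

int main(void) {
    ContentDescriptor movie = { { 2 }, { 0, 0, 1920, 1080 } };    /* 3D2 full screen       */
    ContentDescriptor inset = { { 1 }, { 1280, 720, 640, 360 } }; /* 2D picture-in-picture */
    printf("movie: 3D%d, inset: 2D, regions: %s\n",
           movie.config.perspectives,
           classify_overlap(movie.region, inset.region));
    return 0;
}

Running this sketch reports that the 2D picture-in-picture region lies entirely within the full-screen 3D2 region, i.e., one region is within the other.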
Data that corresponds to the different video contents may be received from a common source or from different sources. The frame formatting may support simultaneous streamed display of the different video contents.
The following subsections describe numerous exemplary embodiments of the present invention. For instance, the next subsection describes embodiments for frame formatting supporting mixed two and three dimensional video data communication, followed by a subsection that describes exemplary display device screen environments, a subsection that describes exemplary display environments, and a subsection that describes exemplary electronic devices.
It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made to the embodiments described herein without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention should not be limited by any of the exemplary embodiments described herein.
A. Exemplary Frame Formatting Embodiments
Embodiments for frame formatting supporting mixed two and three dimensional video data communication may be implemented in a variety of environments. For instance,
As shown in
Source 102 includes an encoder 122. Encoder 122 encodes frame(s) of frame sequence(s) 126 that include video data 128. For instance, encoder 122 may format the frame(s) by placing field(s) therein to indicate screen configurations that are to be used to display the differing video content 128 and/or screen regions in which the differing video content 128 is to be displayed.
In one example, encoder 122 may format each of one or more frames to indicate that first video content that corresponds to first video data is to be displayed using a first screen configuration, to indicate that second video content that corresponds to second video data is to be displayed using a second screen configuration, and so on. In another example, encoder 122 may format first frame(s) to indicate that the first video content is to be displayed using the first screen configuration; encoder 122 may format second frame(s) to indicate that the second video content is to be displayed using the second screen configuration, and so on.
In yet another example, encoder 122 may format each of one or more frame(s) to indicate that the first video content is to be displayed at a first region of a screen (e.g., screen 114), that the second video content is to be displayed at a second region of the screen, and so on. In still another example, the encoder 122 may format first frame(s) to indicate that the first video content is to be displayed at the first region of the screen; encoder 122 may format second frame(s) to indicate that the second video content is to be displayed at the second region of the screen, and so on. Source 102 delivers the frame sequence(s) 126 to intermediate device 104 via communication pathway 108 using well-known communication protocols.
Source 102 is shown to include an encoder 122 for illustrative purposes and is not intended to be limiting. It will be recognized that source 102 need not necessarily include encoder 122. For instance, source 102 may store the video data in an encoded state.
Intermediate device 104 decodes the frame(s) of the frame sequence(s) 126 that include the video data 128 upon receipt of the frame sequence(s) 126 from source 102. Intermediate device 104 determines the screen configuration and/or screen region with which each of the differing video data is associated based on the format of the frame(s). For instance, intermediate device 104 may determine the screen configurations and/or screen regions based on fields that encoder 122 places in the frame(s). Intermediate device 104 provides the video data 128 to display device 106, along with indicators 130 that specify the screen configurations and/or screen regions that are associated with the respective video data 128. For instance, indicators 130 may include fields that encoder 122 places in the frame(s) or information that is based on the fields. In some embodiments, intermediate device 104 is included in display device 106.
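A minimal sketch, assuming a hypothetical fixed-width field layout, of how an intermediate device might parse per-content fields placed in a frame header into indicator records of the kind intermediate device 104 passes to display device 106. The entry layout (one byte identifying the content, one byte giving a perspective count, and four 16-bit region values) is invented for illustration and is not the frame format defined in connection with the figures.

/* Hypothetical sketch: parsing per-content fields from a frame header into
 * "indicator" records of the kind intermediate device 104 could pass to
 * display device 106.  The byte layout is an assumption for illustration. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t  content_id;     /* which video data the indicator describes    */
    uint8_t  perspectives;   /* 1 => 2D, n >= 2 => 3Dn screen configuration */
    uint16_t x, y, w, h;     /* screen region associated with the content   */
} Indicator;

/* Assumed layout per header entry: 1-byte content id, 1-byte perspective
 * count, then four 16-bit big-endian region values (x, y, width, height). */
#define ENTRY_BYTES 10

static uint16_t be16(const uint8_t *p) { return (uint16_t)((p[0] << 8) | p[1]); }

static size_t parse_indicators(const uint8_t *hdr, size_t len,
                               Indicator *out, size_t max) {
    size_t n = 0;
    for (size_t off = 0; off + ENTRY_BYTES <= len && n < max; off += ENTRY_BYTES) {
        out[n].content_id   = hdr[off];
        out[n].perspectives = hdr[off + 1];
        out[n].x = be16(hdr + off + 2);
        out[n].y = be16(hdr + off + 4);
        out[n].w = be16(hdr + off + 6);
        out[n].h = be16(hdr + off + 8);
        n++;
    }
    return n;
}

int main(void) {
    /* Two entries: content 0 is 2D in a 640x360 region at (1280,720);
       content 1 is 3D2 covering the full 1920x1080 screen.            */
    uint8_t header[] = {
        0, 1, 0x05, 0x00, 0x02, 0xD0, 0x02, 0x80, 0x01, 0x68,
        1, 2, 0x00, 0x00, 0x00, 0x00, 0x07, 0x80, 0x04, 0x38,
    };
    Indicator ind[4];
    size_t n = parse_indicators(header, sizeof header, ind, 4);
    for (size_t i = 0; i < n; i++)
        printf("content %d: %s, %d perspective(s), region %dx%d at (%d,%d)\n",
               ind[i].content_id, ind[i].perspectives > 1 ? "3D" : "2D",
               ind[i].perspectives, ind[i].w, ind[i].h, ind[i].x, ind[i].y);
    return 0;
}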
Display device 106 provides the video content that corresponds to the video data 128 based on the indicators 130 that are received from intermediate device 104. Display device 106 may be implemented in various ways. For instance, display device 106 may be a television display (e.g., a liquid crystal display (LCD) television, a plasma television, etc.), a computer monitor, a projection system, or any other type of display device.
Display device 106 includes interface circuitry 110, display circuitry 112, and a screen 114. Interface circuitry 110 provides the video data 128 and the indicators 130 that are received from intermediate device 104 to display circuitry 112 for further processing. The video data 128 is described below as including first video data and second video data for illustrative purposes and is not intended to be limiting. The video data 128 may include any number of differing video data.
Display circuitry 112 directs display of the video content 128 among regions of screen 114 based on the indicators 130. For example, as shown in
In one embodiment, the first video content and the second video content are unrelated. For instance, the first video content may depict a scene from a movie, and the second video content may depict a scene from another movie, a television show, a home video, etc. In another embodiment, the first video content and the second video content are related. For example, the first video content may depict a scene from a first perspective (a.k.a. viewpoint or orientation), and the second video content may depict the scene from a second perspective.
Each of the first and second video data may be two-dimensional video data or three-dimensional video data. Two-dimensional video data corresponds to video content that is configured to be perceived as a sequence of two-dimensional images. Three-dimensional video data corresponds to video content that is configured to be perceived as a sequence of three-dimensional images. For example, the first video data may be two-dimensional video data, and the second video data may be three-dimensional video data. In accordance with this example, the first video data may include video frames that correspond to respective sequential two-dimensional images, and the second video data may include second video frames and respective third video frames to provide respective frame pairs that correspond to respective sequential three-dimensional images.
In another example, the first video data may be three-dimensional video data, and the second video data may be two-dimensional video data. In yet another example, both the first video data and the second video data may be two-dimensional video data. In still another example, both the first video data and the second video data may be three-dimensional video data.
Screen 114 displays the video content among screen regions 116A and 116B as directed by display circuitry 112. Screen 114 is capable of simultaneously supporting multiple screen configurations. Accordingly, screen 114 is capable of simultaneously displaying video content that corresponds to two-dimensional video data and other video content that corresponds to three-dimensional video data. Screen 114 is also capable of simultaneously displaying video content that corresponds to first three-dimensional video data that represents a first number of perspectives and other video content that corresponds to second three-dimensional video data that represents a second number of perspectives that is different from the first number.
As shown in
Source 102 and intermediate device 104 may be implemented as any suitable source and any suitable intermediate device, respectively.
As shown in
As shown in
As shown in
As shown in
As shown in
The exemplary implementations of source 102 and intermediate device 104 described above are provided for illustrative purposes and are not intended to be limiting. For instance, source 102 may be implemented as a network attached storage (NAS) device, which may provide the frame sequences 126 to any suitable intermediate device 104.
It will be recognized that an intermediate device may decode frame sequences that are received from any number of sources. For example,
For example, encoder 822A may format frame(s) of frame sequence 826A that includes first video data and second video data to indicate that the first video data is to be displayed in a first region 816A of screen 814 using a first screen configuration and that the second video data is to be displayed in a second region 816B of screen 814 using a second screen configuration. Encoder 822B may format frame(s) of frame sequence 826B that includes third video data to indicate that the third video data is to be displayed in a third region 816C of screen 814 using a third screen configuration, and so on.
Intermediate device 804 decodes the frames of the frame sequences 826A-826N that include the respective portions of the video data 828 upon receipt thereof from sources 802A-802N. Intermediate device 804 determines the screen configuration and screen region with which each of the differing video data from among the frame sequences 826A-826N is associated based on the format of the frames. Intermediate device 804 provides the video data 828 to display device 806, along with indicators 830 that specify the screen configurations and screen regions that are associated with the respective video data 828. In some embodiments, intermediate device 804 is included in display device 806.
Display device 806 provides the video content that corresponds to the video data 828 based on the indicators 830 that are received from intermediate device 804. Display device 806 includes interface circuitry 810, display circuitry 812, and a screen 814. Interface circuitry 810 provides the video data 828 and the indicators 830 that are received from intermediate device 804 to display circuitry 812 for further processing. The video data 828 is described below as including first video data, second video data, and third video data for illustrative purposes and is not intended to be limiting.
Display circuitry 812 directs display of the video content 828 among regions of screen 814 based on the indicators 830. For example, as shown in
Any of the first video content, the second video content, and/or the third video content may be related or unrelated. Moreover, any of the first video content, the second video content, and/or the third video content may be two-dimensional content or three-dimensional content.
Screen 814 displays the video content among screen regions 816A, 816B, and 816C as directed by display circuitry 812. Screen 814 is capable of simultaneously supporting multiple screen configurations. Accordingly, screen 814 is capable of simultaneously displaying video content that corresponds to two-dimensional video data and other video content that corresponds to three-dimensional video data. Screen 814 is also capable of simultaneously displaying video content that corresponds to first three-dimensional video data that represents a first number of perspectives and other video content that corresponds to second three-dimensional video data that represents a second number of perspectives that is different from the first number.
As shown in
Each of sources 802A-802N may be a remote source or a local source. Exemplary implementations of a set top box that supports processing of mixed 2D/3D video data from both remote and local sources are provided in commonly-owned, co-pending U.S. patent application Ser. No. 12/982,062, filed on Dec. 30, 2010, titled “Set-Top Box Circuitry Supporting 2D and 3D Content Reductions to Accommodate Viewing Environment Constraints,” the entirety of which is incorporated by reference herein.
The exemplary systems (e.g., systems 100 and 800) described herein may include any number of intermediate devices. Such intermediate devices are capable of formatting and delivering video data that corresponds to various regions of a screen (i.e., regional data) separately. Such intermediate devices are also capable of combining regional data and then formatting the combined data for delivery. For example,
As shown in
First intermediate device 904A includes a first transcoder 924A. First transcoder 924A combines the regional data 902A-902N and the combined regional data 942 in accordance with the second format standard to provide cumulative data 952. Cumulative data 952 is a formatted combination of the regional data 902A-902N and the multiple regional data from combined regional data 942. First intermediate device 904A delivers the cumulative data 952 to second intermediate device 904B in accordance with a second communication standard.
Second intermediate device 904B is shown to include a second transcoder 924B. In a first example, second transcoder 924B may re-format the cumulative data 952 to have a third format that is defined by a third format standard. In accordance with this example, second intermediate device 904B may deliver the re-formatted cumulative data as processed data 962 to display 906 in accordance with a third communication standard. In a second example, second intermediate device 904B may not re-format the cumulative data 952. In accordance with this example, second intermediate device 904B may deliver the non-re-formatted cumulative data 952 as the processed data 962 to display 906 in accordance with the third communication standard. In further accordance with this example, second intermediate device 904B need not necessarily include second transcoder 924B.
Display 906 is shown to include a decoder 924C. In accordance with the first example above in which second transcoder 924B re-formats the cumulative data 952 to have the third format, decoder 924C determines the screen configuration and/or screen region with which each regional data that is included in the processed data 962 is associated based on the third format. In accordance with the second example above in which second transcoder 924B does not re-format the cumulative data 952 from the second format, decoder 924C determines the screen configuration and/or screen region with which each regional data that is included in the processed data 962 is associated based on the second format. Display 906 directs display of video contents that correspond to the respective regional data that are included in the processed data 962 among regions of a screen based on the determined screen configurations and/or screen regions.
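The combine-then-transcode chain performed by first intermediate device 904A and second intermediate device 904B may be illustrated, under stated assumptions, by the following C sketch: regional payloads are wrapped with a small header (format version, region identifier, payload length) to form cumulative data, and a downstream node re-formats the cumulative data by rewriting only the header version while leaving the payloads untouched. The header layout and version numbering are hypothetical and are not taken from any of the format standards mentioned above.

/* Hypothetical sketch of the combine-then-transcode chain.  Header fields
 * and version numbers are illustrative assumptions, not a defined standard. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define HDR_BYTES 4   /* [version][region id][length hi][length lo] */

/* Append one regional payload, wrapped with a header, to the cumulative buffer. */
static size_t append_regional(uint8_t *cum, size_t off, uint8_t version,
                              uint8_t region_id, const uint8_t *payload, uint16_t len) {
    cum[off]     = version;
    cum[off + 1] = region_id;
    cum[off + 2] = (uint8_t)(len >> 8);
    cum[off + 3] = (uint8_t)(len & 0xFF);
    memcpy(cum + off + HDR_BYTES, payload, len);
    return off + HDR_BYTES + len;
}

/* Re-format cumulative data in place by rewriting each header's version field. */
static void transcode_version(uint8_t *cum, size_t total, uint8_t new_version) {
    size_t off = 0;
    while (off + HDR_BYTES <= total) {
        uint16_t len = (uint16_t)((cum[off + 2] << 8) | cum[off + 3]);
        cum[off] = new_version;
        off += HDR_BYTES + len;
    }
}

int main(void) {
    uint8_t regionA[] = { 0xAA, 0xAB };       /* stand-ins for regional video data */
    uint8_t regionB[] = { 0xBB, 0xBC, 0xBD };
    uint8_t cumulative[64];
    size_t total = append_regional(cumulative, 0, 2, 0, regionA, sizeof regionA);
    total = append_regional(cumulative, total, 2, 1, regionB, sizeof regionB);
    transcode_version(cumulative, total, 3);  /* second format -> third format */
    printf("cumulative data: %zu bytes, headers now version %d\n",
           total, cumulative[0]);
    return 0;
}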
Formatting techniques described herein may reduce an amount of bandwidth that is consumed with respect to delivery of video content. For example, assume that various regional data correspond to respective regions of a screen that overlap in areas of overlap, as described above with reference to
In another example, assume that a communication pathway supports an overall maximum bandwidth corresponding to 3D4 HD. In such an environment, an attempt to send video data corresponding to full-screen 3D4 HD movie content along with media data that represents a 2D interactive interface element, for example, may not be supported because of the bandwidth limitation. Similarly, video data corresponding to 3D2 HD full-screen content plus other video data that corresponds to regional 3D4 HD content may or may not be supported, depending on how big the region happens to be. Lastly, 3D2 full-screen data that is sent three times, in the form of three full-screen 3D2 movies that the viewer decides to regionalize in some manner, may not be supported.
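The bandwidth arithmetic in the preceding example can be made concrete with a rough model. Assuming, purely for illustration, that a 3Dn stream costs roughly n times the bandwidth of a 2D stream covering the same screen area, and that the pathway budget equals one full-screen 3D4 HD stream (four units), the following C sketch evaluates the three scenarios described above.

/* Back-of-the-envelope bandwidth check under a simplified cost model:
 * a 3Dn stream of a region costs roughly n times a 2D stream of that region. */
#include <stdio.h>

/* Cost in "2D full-screen HD" units: perspectives scaled by screen fraction. */
static double cost(int perspectives, double screen_fraction) {
    return perspectives * screen_fraction;
}

int main(void) {
    const double budget = 4.0;                /* pathway limit: one full-screen 3D4 HD stream */

    double a = cost(4, 1.0) + cost(1, 0.1);   /* full-screen 3D4 movie + small 2D interface element */
    double b = cost(2, 1.0) + cost(4, 0.4);   /* full-screen 3D2 + regional 3D4 (40% of the screen)  */
    double c = 3 * cost(2, 1.0);              /* three full-screen 3D2 movies                        */

    printf("3D4 movie + 2D element:  %.1f units (%s)\n", a, a <= budget ? "fits" : "exceeds budget");
    printf("3D2 full + 3D4 region:   %.1f units (%s)\n", b, b <= budget ? "fits" : "exceeds budget");
    printf("three 3D2 full screens:  %.1f units (%s)\n", c, c <= budget ? "fits" : "exceeds budget");
    return 0;
}

With these assumed numbers, the 3D4 movie plus a small 2D element slightly exceeds the budget, the 3D2 full-screen content plus a regional 3D4 window fits only because the region is modest, and three full-screen 3D2 movies clearly do not fit.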
In yet another example, a source or intermediate device sends three or four sources of media content of various 2D/3D configurations, some for full-screen HD and others for lesser full-screen or regional presentation. In accordance with this example, the viewer configures the regions and overlaps via a remote control, either live or during setup. A display may send the configuration information upstream to an intermediate device or a source of the video data to assist in an overall construction of the content for final delivery to the display.
For example,
Intermediate device 1004 reformats the various video data based on the configuration information 1010. For example, intermediate device 1004 may remove one or more of multiple data portions that correspond to an area of overlap so that a single data portion corresponds to the area of overlap. In another example, intermediate device 1004 may reduce resolution of any one or more of the various video data. In accordance with this example, intermediate device 1004 may reduce resolution of video data based on a reduction in a size of a screen region in which video data is to be displayed. In yet another example, intermediate device 1004 may reduce a number of perspectives that are represented by any one or more of the various video data. In accordance with this example, intermediate device 1004 may reformat video data from being 3D16 data to 3D2 data, from 3D8 data to 3D4 data, from 3D4 data to 3D2 data, from 3D2 data to 2D data, and so on. Display 1006 displays video content that corresponds to the reformatted data 1012.
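A minimal sketch, assuming a simple cost model in which bandwidth scales with perspective count, region size, and resolution, of the kind of reformatting intermediate device 1004 might apply: perspectives are stepped down (e.g., 3D16 to 3D8 to 3D4 to 3D2 to 2D) and resolution is reduced until the stream fits a given budget. The Stream structure, the thresholds, and the order of reductions are illustrative assumptions only.

/* Hypothetical reformatting sketch: reduce perspectives, then resolution,
 * until the stream fits an assumed downstream bandwidth budget. */
#include <stdio.h>

typedef struct {
    int    perspectives;     /* 16, 8, 4, 2, or 1 (1 meaning 2D)        */
    double screen_fraction;  /* fraction of the screen the region uses  */
    double resolution_scale; /* 1.0 = full resolution                   */
} Stream;

/* Assumed cost model: perspectives x region size x resolution, in 2D-HD units. */
static double cost(const Stream *s) {
    return s->perspectives * s->screen_fraction * s->resolution_scale;
}

static void reformat_to_fit(Stream *s, double budget) {
    while (cost(s) > budget && s->perspectives > 1)
        s->perspectives /= 2;                 /* e.g., 3D16 -> 3D8 -> ... -> 2D */
    while (cost(s) > budget && s->resolution_scale > 0.25)
        s->resolution_scale *= 0.5;           /* then trade away resolution     */
}

int main(void) {
    Stream s = { 16, 0.5, 1.0 };              /* 3D16 content in half the screen  */
    reformat_to_fit(&s, 2.0);                 /* downstream link allows 2.0 units */
    if (s.perspectives > 1)
        printf("reformatted to 3D%d at %.0f%% resolution (cost %.2f)\n",
               s.perspectives, s.resolution_scale * 100.0, cost(&s));
    else
        printf("reformatted to 2D at %.0f%% resolution (cost %.2f)\n",
               s.resolution_scale * 100.0, cost(&s));
    return 0;
}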
In one example, assume that intermediate device 1004 reformats the video data to take into consideration that display 1006 is capable of supporting full-screen 3D4 data. If the reformatting involves removing overlapping data, it can be appreciated that combining any number of overlapping 3D4 portions will not exceed the supported bandwidth, so long as overlapping data is removed and a number of perspectives that are represented by the reformatted data 1012 is reduced if necessary. On the other hand, if no removal of overlapping data occurs, intermediate device 1004 may remove overlapping data and/or reduce a number of perspectives that are represented by the reformatted data 1012 more often or more aggressively so as not to exceed the overall bandwidth limitations associated with the downstream link toward the display.
It will be recognized that limitations of the capabilities of display 1006 (e.g., bandwidth capability) may change over time, such as when other communications flow in competition (e.g., network communication flow). Accordingly, intermediate device 1004 or any other node involved in the process may remove overlapping data, reduce resolution, and/or reduce a number of perspectives represented by the reformatted data 1012 more or less aggressively to meet the current conditions. It will be recognized that intermediate device 1004 need not perform the reformatting of the video data. For instance, intermediate device 1004 may forward the configuration information 1010, or information regarding the configuration information 1010, to another intermediate device or a source of the video data for reformatting of the video data in accordance with the configuration information 1010.
Video data may be formatted and/or delivered in a variety of ways according to embodiments. For instance,
Flowchart 1100 includes step 1102. In step 1102, a first frame sequence and a second frame sequence are delivered via a channelized, single communication pathway (e.g., pathway 108 of
The first video requires a first screen region (e.g., first screen region 116A or 816A) having a first configuration. The second video requires a second screen region (e.g., second screen region 116B or 816B) having a second configuration. The first frame sequence relates to the first video and has first field content that at least assists in identifying the first configuration. The second frame sequence relates to the second video and has second field content that at least assists in identifying the second configuration.
The first configuration and the second configuration may be the same or different. For example, the first and second configurations may be two-dimensional configurations. In another example, the first and second configurations may be three-dimensional configurations that include a common number of perspectives. In yet another example, the first configuration may be a two-dimensional configuration and the second configuration may be a three-dimensional configuration, or vice versa. In still another example, the first configuration may be a three-dimensional configuration that includes a first number of perspectives, and the second configuration may be another three-dimensional configuration that includes a second number of perspectives that is different from the first number of perspectives.
Flowchart 1200 will be described with continued reference to exemplary frame format 1300 shown in
Flowchart 1200 begins with step 1202. In step 1202, a first entry is placed in a first region field. The first entry at least assists in identifying a first display region associated with first video data. For example, the first entry may specify the first display region. In another example, the first entry may include information that corresponds to the first display region (e.g., a location in storage at which the identity of the first display region is stored). As shown in
At step 1204, a second entry is placed in a second region field. The second entry at least assists in identifying a second display region associated with second video data. As shown in
At step 1206, a third entry is placed in a first configuration field. The third entry at least assists in identifying a first screen configuration to be used with the first video data. The first video data may be two-dimensional data (or a part thereof) or three-dimensional data (or a part thereof). As shown in
At step 1208, a fourth entry is placed in a second configuration field. The fourth entry at least assists in identifying a second screen configuration to be used with the second video data. The second video data is at least a part of three-dimensional video data. The second screen configuration is different from the first screen configuration. As shown in
At step 1210, portion(s) of the first video data are placed in respective first payload field(s). As shown in
At step 1212, portion(s) of the second video data are placed in respective second payload field(s). As shown in
At step 1214, frame(s) of a series of frames are constructed. The frame(s) contain at least content(s) of the respective first payload field(s) and content(s) of the respective second payload field(s). For example, the frame(s) may include at least the portions 1322A-1322N of the first video data, which are included in the first payload fields 1306A-1306N, and the portions 1324A-1324N of the second video data, which are included in the second payload fields 1316A-1316N. In one variant, multiple instances of the first, second, third, and fourth entries are placed into frame(s) of the series of frames. For example, an instance of the first and third entries may be placed into the frame(s) for each portion (or subset of the portions) of the first video data. In another example, an instance of the second and fourth entries may be placed into the frame(s) for each portion (or subset of the portions) of the second video data.
In some example embodiments, one or more steps 1202, 1204, 1206, 1208, 1210, 1212, and/or 1214 of flowchart 1200 may not be performed. Moreover, steps in addition to or in lieu of steps 1202, 1204, 1206, 1208, 1210, 1212, and/or 1214 may be performed.
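For illustration, the following C sketch walks through steps 1202-1214 for a single frame: a region field, a configuration field, and a payload length are written for each of the two video data, followed by the corresponding payload portion, and the resulting byte buffer stands in for one constructed frame of the series. The field widths, their ordering, and the helper names are assumptions of this sketch; frame format 1300 itself is defined by the figures, not by this code.

/* Illustrative serialization of steps 1202-1214 with assumed field widths. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

static size_t put16(uint8_t *buf, size_t off, uint16_t v) {
    buf[off]     = (uint8_t)(v >> 8);
    buf[off + 1] = (uint8_t)(v & 0xFF);
    return off + 2;
}

/* Append one substream: a region field, a configuration field (perspective
 * count), a payload length, and the payload portion itself.  This stands in
 * for steps 1202/1206/1210 (first video data) and 1204/1208/1212 (second). */
static size_t put_substream(uint8_t *frame, size_t off, uint16_t region_id,
                            uint16_t perspectives,
                            const uint8_t *payload, uint16_t len) {
    off = put16(frame, off, region_id);      /* region field        */
    off = put16(frame, off, perspectives);   /* configuration field */
    off = put16(frame, off, len);            /* payload length      */
    memcpy(frame + off, payload, len);       /* payload field       */
    return off + len;
}

int main(void) {
    uint8_t frame[256];
    uint8_t portion1[] = { 0x11, 0x12 };       /* portion of first video data (2D)   */
    uint8_t portion2[] = { 0x21, 0x22, 0x23 }; /* portion of second video data (3D2) */

    size_t off = 0;
    off = put_substream(frame, off, /*region*/ 1, /*2D*/ 1, portion1, sizeof portion1);
    off = put_substream(frame, off, /*region*/ 2, /*3D2*/ 2, portion2, sizeof portion2);

    /* Step 1214: the buffer now holds one constructed frame of the series. */
    printf("constructed one frame of %zu bytes carrying both payloads\n", off);
    return 0;
}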
Flowchart 1400 will be described with continued reference to exemplary frame format 1500 shown in
Flowchart 1400 begins with step 1402. In step 1402, a first parameter is placed in at least a first field. The first parameter identifies a first display region associated with first video data. As shown in
At step 1404, a second parameter is placed in at least a second field. The second parameter identifies a second display region associated with second video data. The second display region and the first display region overlap at least in part. For example, the second display region and the first display region may overlap at least in part on a single screen. As shown in
In one variant, the first display region and the second display region fully overlap. In another variant, the first video data corresponds to first video capture at a first perspective, and the second video data corresponds to second video capture at a second perspective. In accordance with this variant, together the first video data and the second video data are intended for a three-dimensional visual presentation in the overlapping region.
At step 1406, portion(s) of the first video data are placed in respective first payload field(s). As shown in
At step 1408, portion(s) of the second video data are placed in respective second payload field(s). As shown in
At step 1410, a series of frames is assembled. The series of frames includes the first video data and the second video data. For example, each frame in the series of frames may include a respective portion of the first video data or the second video data.
At step 1412, a frame identifier is provided. For example, the frame identifier may specify a beginning and/or an end of one or more frames that are included in the series of frames. In another example, the frame identifier may specify portion(s) of the first and second video data that correspond to respective frame(s) of the series of frames.
In some example embodiments, one or more steps 1402, 1404, 1406, 1408, 1410, and/or 1412 of flowchart 1400 may not be performed. Moreover, steps in addition to or in lieu of steps 1402, 1404, 1406, 1408, 1410, and/or 1412 may be performed.
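The fully overlapping variant of flowchart 1400, in which two captures of the same scene share one display region and together form a three-dimensional presentation, together with the frame identifier of step 1412, may be sketched as follows. The start-of-frame marker value, the perspective tag, and the field layout are hypothetical choices made only for this example.

/* Hypothetical sketch: two perspectives of one region, each carried in a
 * frame that begins with an assumed start-of-frame identifier (step 1412). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FRAME_ID 0xA55A  /* assumed start-of-frame identifier */

typedef struct {
    uint16_t region_id;    /* both perspectives reference the same region */
    uint8_t  perspective;  /* 0 = left-eye capture, 1 = right-eye capture */
    uint8_t  payload_len;
} SubstreamHeader;

static size_t write_frame(uint8_t *buf, size_t off, const SubstreamHeader *h,
                          const uint8_t *payload) {
    buf[off]     = (uint8_t)(FRAME_ID >> 8);
    buf[off + 1] = (uint8_t)(FRAME_ID & 0xFF);
    buf[off + 2] = (uint8_t)(h->region_id >> 8);
    buf[off + 3] = (uint8_t)(h->region_id & 0xFF);
    buf[off + 4] = h->perspective;
    buf[off + 5] = h->payload_len;
    memcpy(buf + off + 6, payload, h->payload_len);
    return off + 6 + h->payload_len;
}

int main(void) {
    uint8_t series[128];
    uint8_t left[]  = { 0x01, 0x02 };   /* portion of first video data  */
    uint8_t right[] = { 0x03, 0x04 };   /* portion of second video data */

    SubstreamHeader lh = { 7, 0, sizeof left  };
    SubstreamHeader rh = { 7, 1, sizeof right };  /* same region: full overlap */
    size_t total = write_frame(series, 0, &lh, left);
    total = write_frame(series, total, &rh, right);

    /* A receiver can scan for FRAME_ID to locate frame boundaries in the series. */
    int frames = 0;
    for (size_t i = 0; i + 1 < total; i++)
        if (series[i] == (FRAME_ID >> 8) && series[i + 1] == (FRAME_ID & 0xFF))
            frames++;
    printf("assembled %zu bytes containing %d frames for region %d\n",
           total, frames, lh.region_id);
    return 0;
}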
Communication interface 1600 includes input circuitry 1602, processing circuitry 1604, and transmitter circuitry 1606. Input circuitry 1602 delivers input 1608 to processing circuitry 1604. The input 1608 is at least related to the first video data and the second video data. The first video data requires the first screen configuration, which is different from the second screen configuration required by the second video data. The first screen configuration may be a two-dimensional configuration or a three-dimensional configuration. The second screen configuration is a three-dimensional configuration.
Processing circuitry 1604 generates output data that corresponds to other data that is within fields of each frame of the sequence of frames.
Transmitter circuitry 1606 receives at least portions 1610 (a.k.a. data portions) of the output data from processing circuitry 1604. Transmitter circuitry 1606 manages transmission of the data portions 1610.
In a variant, the input 1608 includes the sequence of frames that define the simultaneous visual presentation on the screen of the first type and the second type of video content. In another variant, processing circuitry 1604 produces the sequence of frames from the input 1608. In yet another variant, at least a first frame of the sequence of frames contains at least a portion of the first video data and at least a portion of the second video data.
A frame sequence may be structured in a variety of ways according to embodiments. For instance,
Flowchart 1700 begins with step 1702. In step 1702, first video data is received from a first source. The first video data may be associated with a first screen configuration and/or a first screen region, though the scope of the embodiments is not limited in this respect.
At step 1704, second video data is received from a second source that is different from the first source. The second video data may be associated with a second screen configuration that is different from or the same as the first screen configuration. The second video data may be associated with a second screen region that is different from or the same as the first screen region. It will be recognized, however, that the second video data need not necessarily be associated with a second screen configuration and/or a second screen region.
At step 1706, a frame sequence is structured to support simultaneous streamed display of first video content and second video content via a single screen that supports regional configuration. The first video content corresponds to the first video data. The second video content corresponds to the second video data. For example, fields may be incorporated into one or more frames of the frame sequence to indicate screen configurations and/or screen regions that correspond to the respective first and second video data. In accordance with this example, the frame sequence may be structured to support simultaneous streamed display of the first video content at a first region of the screen and the second video content at a second region of the screen that is different from the first region.
In one variant, the first video data is two-dimensional video data, and the second video data is three-dimensional video data, or vice versa. In another variant, the first video data is first three-dimensional video data, and the second video data is second three-dimensional video data.
In some example embodiments, one or more steps 1702, 1704, and/or 1706 of flowchart 1700 may not be performed. Moreover, steps in addition to or in lieu of steps 1702, 1704, and/or 1706 may be performed.
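As a rough sketch of step 1706, the following C code interleaves frames received from two independent sources into a single output sequence, tagging each slot with the screen region and configuration (perspective count) associated with its source, so that a single screen supporting regional configuration could stream both contents simultaneously. The round-robin scheduling and the TaggedFrame fields are assumptions for illustration.

/* Hypothetical sketch: structuring one frame sequence from two sources. */
#include <stdio.h>

typedef struct {
    const char *source;     /* which source supplied the frame            */
    int region_id;          /* screen region associated with that source  */
    int perspectives;       /* 1 => 2D, n >= 2 => 3Dn                     */
    int frame_index;        /* index of the frame within its source feed  */
} TaggedFrame;

int main(void) {
    const int frames_per_source = 3;
    TaggedFrame sequence[2 * frames_per_source];
    int out = 0;

    /* Round-robin interleave: source 1 (2D, region 1) and source 2 (3D2, region 2). */
    for (int i = 0; i < frames_per_source; i++) {
        sequence[out++] = (TaggedFrame){ "source 1", 1, 1, i };
        sequence[out++] = (TaggedFrame){ "source 2", 2, 2, i };
    }

    for (int i = 0; i < out; i++)
        printf("slot %d: %s frame %d -> region %d, perspectives %d\n", i,
               sequence[i].source, sequence[i].frame_index,
               sequence[i].region_id, sequence[i].perspectives);
    return 0;
}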
B. Exemplary Display Device Screen Embodiments
Embodiments described herein for frame formatting supporting mixed two and three dimensional video data communication may be implemented with respect to various types of display devices. For example, as described above, some display screens are configured for displaying two-dimensional content, although they may display two-dimensional images that may be combined to form three-dimensional images by special glasses worn by users. Some other types of display screens are capable of displaying two-dimensional content and three-dimensional content without the users having to wear special glasses, by using techniques of autostereoscopy.
As described above, display devices, such as display device 106 or 806 or display 906 or 1006, may be implemented in various ways. For instance, display device 106 or 806 or display 906 or 1006 may be a television display (e.g., an LCD (liquid crystal display) television, a plasma television, etc.), a computer monitor, or any other type of display device. Display device 106 or 806 or display 906 or 1006 may include any suitable type or combination of light and image generating devices, including an LCD screen, a plasma screen, an LED (light emitting diode) screen (e.g., an OLED (organic LED) screen), etc. Furthermore, display device 106 or 806 or display 906 or 1006 may include any suitable type of light filtering device, such as a parallax barrier (e.g., an LCD filter, a mechanical filter (e.g., that incorporates individually controllable shutters), etc.) and/or a lenticular lens, and may be configured in any manner, including as a thin-film device (e.g., formed of a stack of thin film layers), etc. Furthermore, display device 106 or 806 or display 906 or 1006 may include any suitable light emitting device as backlighting, including a panel of LEDs or other light emitting elements.
For instance,
Examples of light manipulator 1804 include a parallax barrier and a lenticular lens. For instance, light manipulator 1804 may be a parallax barrier that has a layer of material with a series of precision slits. The parallax barrier is placed proximal to a light emitting pixel array so that a user's eyes each see a different set of pixels to create a sense of depth through parallax. In another embodiment, light manipulator 1804 may be a lenticular lens that includes an array of magnifying lenses configured so that when viewed from slightly different angles, different images are magnified. Such a lenticular lens may be used to deliver light from a different set of pixels of a pixel array to each of the user's eyes to create a sense of depth. Embodiments are applicable to display devices that include such light manipulators, to display devices that include other types of light manipulators, and to display devices that include multiple light manipulators.
As shown in
In contrast,
Device 1900 receives one or more control signals 1906 that are configured to place screen 1902 in a desired display mode (e.g., either a two-dimensional display mode or a three-dimensional display mode), and/or to configure three-dimensional characteristics of any number and type as described above, such as configuring adaptable light manipulator 1904 to deliver different types of three-dimensional images, to deliver three-dimensional images to different/moving regions of a viewing space, and to deliver two-dimensional and/or three-dimensional images from any number of regions of screen 1902 to the viewing space.
As shown in
Content signals 1808 and 1908 may include video content according to any suitable format. For example, content signals 1808 and 1908 may include video content delivered over an HDMI (High-Definition Multimedia Interface) interface, over a coaxial cable, as composite video, as S-Video, via a VGA (video graphics array) interface, etc.
Exemplary embodiments for display devices 1900 and 2000 of
Display devices 1800 and 1900 may include parallax barriers as light manipulators 1804 and 1904, respectively. For instance,
Pixel array 2008 includes a two-dimensional array of pixels (e.g., arranged in a grid or other distribution). Pixel array 2008 is a self-illuminating or light-generating pixel array such that the pixels of pixel array 2008 each emit light included in light 2052 emitted from image generator 2012. Each pixel may be a separately addressable light source (e.g., a pixel of a plasma display, an LCD display, an LED display such as an OLED display, or of other type of display). Each pixel of pixel array 2008 may be individually controllable to vary color and intensity. In an embodiment, each pixel of pixel array 2008 may include a plurality of sub-pixels that correspond to separate color channels, such as a trio of red, green, and blue sub-pixels included in each pixel.
Parallax barrier 2020 is positioned proximate to a surface of pixel array 2008. Barrier element array 2010 is a layer of parallax barrier 2020 that includes a plurality of barrier elements or blocking regions arranged in an array. Each barrier element of the array is configured to be selectively opaque or transparent. Combinations of barrier elements may be configured to be selectively opaque or transparent to enable various effects. For example, in one embodiment, each barrier element may have a round, square, or rectangular shape, and barrier element array 2010 may have any number of rows of barrier elements that extend a vertical length of barrier element array 2010. In another embodiment, each barrier element may have a “band” shape that extends a vertical length of barrier element array 2010, such that barrier element array 2010 includes a single horizontal row of barrier elements. Each barrier element may include one or more of such bands, and different regions of barrier element array may include barrier elements that include different numbers of such bands.
One advantage of such a configuration where barrier elements extend a vertical length of barrier element array 2010 is that such barrier elements do not need to have spacing between them because there is no need for drive signal routing in such space. For instance, in a two-dimensional LCD array configuration, such as a TFT (thin film transistor) display, a transistor-plus-capacitor circuit is typically placed onsite at the corner of a single pixel in the array, and drive signals for such transistors are routed between the LCD pixels (row-column control, for example). In a pixel configuration for a parallax barrier, local transistor control may not be necessary because barrier elements may not need to be changing as rapidly as display pixels (e.g., pixels of pixel array 2008). For a single row of vertical bands of barrier elements, drive signals may be routed to the top and/or bottom of barrier elements. Because in such a configuration drive signal routing between rows is not needed, the vertical bands can be arranged side-by-side with little-to-no space in between. Thus, if the vertical bands are thin and oriented edge-to-edge, one band or multiple adjacent bands (e.g., five bands) may comprise a barrier element in a blocking state, followed by one band or multiple adjacent bands (e.g., two bands) that comprise a barrier element in a non-blocking state (a slit), and so on. In the example of five bands in a blocking state and two bands in a non-blocking state, the five bands may combine to offer a single black barrier element of approximately 2.5 times the width of a single transparent slit with no spaces therein.
It is noted that in some embodiments, barrier elements may be capable of being completely transparent or opaque, and in other embodiments, barrier elements may not be capable of being fully transparent or opaque. For instance, such barrier elements may be capable of being 95% transparent when considered to be “transparent” and may be capable of being 5% transparent when considered to be “opaque.” “Transparent” and “opaque” as used herein are intended to encompass barrier elements being substantially transparent (e.g., greater than 75% transparent, including completely transparent) and substantially opaque (e.g., less than 25% transparent, including completely opaque), respectively.
Display driver circuit 2002 receives control signal 2022 and content signal 2024. As described below, content signal 2024 includes two-dimensional and/or three-dimensional content for display. Control signal 2022 may be control signal 1806 of
For example, drive signal 2014 may control sets of pixels of pixel array 2008 to each emit light representative of a respective image, to provide a plurality of images. Drive signal 2016 may control barrier elements of barrier element array 2010 to filter the light received from pixel array 2008 according to the provided images such that one or more of the images are received by users 2018 in two-dimensional form. For instance, drive signal 2016 may select one or more sets of barrier elements of barrier element array 2010 to be transparent, to transmit one or more corresponding two-dimensional images or views to users 2018. Furthermore, drive signal 2016 may control sections of barrier element array 2010 to include opaque and transparent barrier elements to filter the light received from pixel array 2008 so that one or more pairs of images or views provided by pixel array 2008 are each received by users 2018 as a corresponding three-dimensional image or view. For example, drive signal 2016 may select parallel strips of barrier elements of barrier element array 2010 to be transparent to form slits that enable three-dimensional images to be received by users 2018.
In embodiments, drive signal 2016 may be generated by barrier array driver circuit 2006 to configure one or more characteristics of barrier element array 2010. For example, drive signal 2016 may be generated to form any number of parallel strips of barrier elements of barrier element array 2010 to be transparent, to modify the number and/or spacing of parallel strips of barrier elements of barrier element array 2010 that are transparent, to select and/or modify a width and/or a length (in barrier elements) of one or more strips of barrier elements of barrier element array 2010 that are transparent or opaque, to select and/or modify an orientation of one or more strips of barrier elements of barrier element array 2010 that are transparent, to select one or more areas of barrier element array 2010 to include all transparent or all opaque barrier elements, etc.
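The band-based barrier configuration described above can be illustrated with a short sketch that fills a state array for one row of vertical bands using a repeating pattern of opaque bands followed by transparent bands. With five opaque bands and two transparent bands per period, as in the earlier example, each opaque barrier element is roughly 2.5 times the width of a slit; changing the two parameters changes the slit width and spacing, in the spirit of the adaptations listed above. The array size and the helper name are arbitrary choices for this sketch.

/* Illustrative sketch (not the actual driver circuitry): generating a state
 * array for one row of vertical barrier "bands", where a repeating pattern
 * of opaque bands followed by transparent bands forms the slits. */
#include <stdio.h>

#define NUM_BANDS 28

typedef enum { OPAQUE = 0, TRANSPARENT = 1 } BandState;

/* Fill the band array: each period is `opaque_bands` blocking bands followed
 * by `slit_bands` non-blocking bands.  Changing these two parameters changes
 * the slit width and spacing, as an adaptable barrier might do. */
static void configure_bands(BandState *bands, int n, int opaque_bands, int slit_bands) {
    int period = opaque_bands + slit_bands;
    for (int i = 0; i < n; i++)
        bands[i] = (i % period) < opaque_bands ? OPAQUE : TRANSPARENT;
}

int main(void) {
    BandState bands[NUM_BANDS];
    configure_bands(bands, NUM_BANDS, 5, 2);    /* 5 blocking : 2 transparent */
    for (int i = 0; i < NUM_BANDS; i++)
        putchar(bands[i] == OPAQUE ? '#' : '.');
    putchar('\n');                              /* prints #####..#####..#####..#####.. */
    return 0;
}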
Backlighting 2116 is a backlight panel that emits light 2138. Light element array 2136 (or “backlight array”) of backlighting 2116 includes a two-dimensional array of light sources. Such light sources may be arranged, for example, in a rectangular grid. Each light source in light element array 2136 is individually addressable and controllable to select an amount of light emitted thereby. A single light source may comprise one or more light-emitting elements depending upon the implementation. In one embodiment, each light source in light element array 2136 comprises a single light-emitting diode (LED) although this example is not intended to be limiting.
Parallax barrier 2020 is positioned proximate to a surface of backlighting 2116 (e.g., a surface of the backlight panel). As described above, barrier element array 2010 is a layer of parallax barrier 2020 that includes a plurality of barrier elements or blocking regions arranged in an array. Each barrier element of the array is configured to be selectively opaque or transparent. Barrier element array 2010 filters light 2138 received from backlighting 2116 to generate filtered light 2140. Filtered light 2140 is configured to enable a two-dimensional image or a three-dimensional image (e.g., formed by a pair of two-dimensional images in filtered light 2072) to be formed based on images subsequently imposed on filtered light 2140 by pixel array 2122.
Similarly to pixel array 2008 of
Display driver circuit 2002 of
For example, drive signal 2134 may control sets of light sources of light element array 2136 to emit light 2138. Drive signal 2016 may control barrier elements of barrier element array 2010 to filter light 2138 received from light element array 2136 so that filtered light 2140 enables two- and/or three-dimensionality. Drive signal 2132 may control sets of pixels of pixel array 2122 to filter filtered light 2140 according to respective images, to provide a plurality of images. For instance, drive signal 2016 may select one or more sets of the barrier elements of barrier element array 2010 to be transparent, to enable one or more corresponding two-dimensional images to be delivered to users 2018. Furthermore, drive signal 2016 may control sections of barrier element array 2010 to include opaque and transparent barrier elements to filter the light received from light element array 2136 so that one or more pairs of images provided by pixel array 2122 are each enabled to be received by users 2018 as a corresponding three-dimensional image. For example, drive signal 2016 may select parallel strips of barrier elements of barrier element array 2010 to be transparent to form slits that enable three-dimensional images to be received by users 2018.
Flowchart 2200 begins with step 2202. In step 2202, light is received at an array of barrier elements. For example, as shown in
In step 2204, a first set of the barrier elements of the array of barrier elements is configured in the blocking state and a second set of the barrier elements of the array of barrier elements is configured in the non-blocking state to enable a viewer to be delivered a three-dimensional view. Three-dimensional image content may be provided for viewing in viewing space 2070. In such case, referring to
For instance,
Referring back to
For example, as shown in
Furthermore, light emanating from pixel array 2302 is filtered by barrier element array 2304 to form a plurality of images in a viewing space 2326, including a first image 2306a at a first location 2308a and a second image 2306b at a second location 2308b. A portion of the light emanating from pixel array 2302 is blocked by blocking barrier elements 2310, while another portion of the light emanating from pixel array 2302 passes through non-blocking barrier elements 2312, according to the filtering by barrier element array 2304. For instance, light 2324a from pixel 2314a is blocked by blocking barrier element 2310a, and light 2324b and light 2324c from pixel 2314b are blocked by blocking barrier elements 2310b and 2310c, respectively. In contrast, light 2318a from pixel 2314a is passed by non-blocking barrier element 2312a and light 2318b from pixel 2314b is passed by non-blocking barrier element 2312b.
By forming parallel non-blocking slits in a barrier element array, light from a pixel array can be filtered to form multiple images or views in a viewing space. For instance, system 2300 shown in
First and second images 2306a and 2306b are configured to be perceived by a user as a three-dimensional image or view. For example, a viewer may receive first image 2306a at a first eye location and second image 2306b at a second eye location, according to an exemplary embodiment. First and second images 2306a and 2306b may be generated by first set of pixels 2314a-2314d and second set of pixels 2316a-2316d as images that are of slightly different perspective from each other. Images 2306a and 2306b are combined in the visual center of the brain of the viewer to be perceived as a three-dimensional image or view. In such an embodiment, first and second images 2306a and 2306b may be formed by display system 2300 such that their centers are spaced apart by a width of a user's pupils (e.g., an “interocular distance”).
Note that in the embodiments of
For instance,
In another example,
Furthermore, as shown in
As such, in
As described above, in an embodiment, display device 1902 of
In embodiments, display systems may be configured to generate multiple two-dimensional images or views for viewing by users in a viewing space. For example,
As such, display system 2800 of
In an embodiment, display system 2300 may be configured to generate multiple three-dimensional images that include related image content (e.g., each three-dimensional image is a different viewpoint of a common scene), or that each include unrelated image content, for viewing by users in a viewing space. Each of the three-dimensional images may correspond to a pair of images generated by pixels of the pixel array. The barrier element array filters light from the pixel array to form the image pairs in a viewing space to be perceived by users as three-dimensional images.
For instance,
Flowchart 2900 begins with step 2902. In step 2902, light is received from an array of pixels that includes a plurality of pairs of sets of pixels. For instance, in the example of
As described above, in the current embodiment, pixel array 3002 is segmented into a plurality of pairs of sets of pixels. For instance, in the example of
In step 2904, a plurality of strips of barrier elements of a barrier element array is selected to be non-blocking to form a plurality of parallel non-blocking slits. As shown in
In step 2906, the light is filtered at the barrier element array to form a plurality of pairs of images in a viewing space corresponding to the plurality of pairs of sets of pixels, each pair of images of the plurality of pairs of images being configured to be perceived as a corresponding three-dimensional image of a plurality of three-dimensional images. As shown in
In the embodiment of
In the example of
In
Further description regarding using a parallax barrier to deliver three-dimensional views, including adaptable versions of parallax barriers, is provided in pending U.S. Ser. No. 12/845,409, titled “Display With Adaptable Parallax Barrier,” in pending U.S. Ser. No. 12/845,440, titled “Adaptable Parallax Barrier Supporting Mixed 2D And Stereoscopic 3D Display Regions,” and in pending U.S. Ser. No. 12/845,461, titled “Display Supporting Multiple Simultaneous 3D Views,” which are each incorporated by reference herein in their entireties.
In embodiments, as described herein, display devices 1800 and 1900 of
In one embodiment, lenticular lens 3100 may be fixed in size. For example, light manipulator 1804 of
Further description regarding using a lenticular lens to deliver three-dimensional views, including adaptable versions of lenticular lenses, is provided in pending U.S. Ser. No. 12/774,307, titled “Display with Elastic Light Manipulator,” which is incorporated by reference herein in its entirety.
Display devices 1800 and 1900 may include multiple layers of light manipulators in embodiments. Multiple three-dimensional images may be displayed in a viewing space using multiple light manipulator layers, according to embodiments. In embodiments, the multiple light manipulating layers may enable spatial separation of the images. For instance, in such an embodiment, a display device that includes multiple light manipulator layers may be configured to display a first three-dimensional image in a first region of a viewing space (e.g., a left-side area), a second three-dimensional image in a second region of the viewing space (e.g., a central area), a third three-dimensional image in a third region of the viewing space (e.g., a right-side area), etc. In embodiments, a display device may be configured to display any number of spatially separated three-dimensional images, as desired for a particular application (e.g., according to a number and spacing of viewers in the viewing space, etc.).
For instance,
Flowchart 3300 begins with step 3302. In step 3302, light is received from an array of pixels that includes a plurality of pairs of sets of pixels. For example, as shown in
In step 3304, the light from the array of pixels is manipulated with a first light manipulator. For example, first light manipulator 3414a may be configured to manipulate light 2052 received from pixel array 2008. As shown in
In step 3306, the light manipulated by the first light manipulator is manipulated with a second light manipulator to form a plurality of pairs of images corresponding to the plurality of pairs of sets of pixels in a viewing space. For example, as shown in
As such, display system 3400 has a single viewing plane or surface (e.g., a plane or surface of pixel array 2008, first light manipulator 3414a, second light manipulator 3414b) that supports multiple viewers with media content in the form of three-dimensional images or views. The single viewing plane of display system 3400 may provide a first three-dimensional view based on first three-dimensional media content to a first viewer, a second three-dimensional view based on second three-dimensional media content to a second viewer, and optionally further three-dimensional views based on further three-dimensional media content to further viewers. First and second light manipulators 3414a and 3414b each cause three-dimensional media content to be presented to a corresponding viewer via a corresponding area of the single viewing plane, with each viewer being enabled to view corresponding media content without viewing media content directed to other viewers. Furthermore, the areas of the single viewing plane that provide the various three-dimensional views of media content overlap each other at least in part. In the embodiment of
Display system 3400 may be configured in various ways to generate multiple three-dimensional images according to flowchart 3300, in embodiments. Furthermore, as described below, embodiments of display system 3400 may be configured to generate two-dimensional views, as well as any combination of one or more two-dimensional views simultaneously with one or more three-dimensional views.
For instance, in an embodiment, delivery of three-dimensional images may be performed in system 3400 using multiple parallax barriers.
As shown in the example of
Each of pixels 3514a-3514c, 3516a-3516c, 3518a-3518c, and 3520a-3520c is configured to generate light, which emanates from the surface of pixel array 3502 towards first barrier element array 3504. Each set of pixels is configured to generate a corresponding image. For example,
First-fourth images 3606a-3606d may be formed in viewing space 3602 at a distance from pixel array 3502 and at a lateral location of viewing space 3602 as determined by a configuration of display system 3500 of
In an embodiment, system 3400 of
As shown in
C. Exemplary Display Environments
As described above, light manipulators may be reconfigured to change the locations of delivered views based on changing viewer positions. As such, a position of a viewer may be determined/tracked so that a parallax barrier and/or light manipulator may be reconfigured to deliver views consistent with the changing position of the viewer. For instance, with regard to a parallax barrier, a spacing, number, arrangement, and/or other characteristic of slits may be adapted according to the changing viewer position. With regard to a lenticular lens, a size of the lenticular lens may be adapted (e.g., stretched, compressed) according to the changing viewer position. In embodiments, a position of a viewer may be determined/tracked by determining a position of the viewer directly, or by determining a position of a device associated with the viewer (e.g., a device worn by the viewer, held by the viewer, sitting in the viewer's lap, in the viewer's pocket, sitting next to the viewer, etc.).
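For illustration only, the following Python sketch shows one way the geometry of a simple two-view parallax barrier could be recomputed from a tracked viewer distance. It uses a textbook similar-triangles model with hypothetical names (barrier_geometry, pixel_pitch_mm, eye_separation_mm) and is not the specific adaptation logic of the embodiments.

def barrier_geometry(pixel_pitch_mm, viewer_distance_mm, eye_separation_mm=65.0):
    """Return (gap, slit_pitch) for a simplified two-view parallax barrier.

    Similar-triangles model: the barrier sits a distance 'gap' in front of
    the pixel plane so that adjacent left/right pixels project through one
    slit to the viewer's left and right eyes at 'viewer_distance_mm'.
    """
    gap = pixel_pitch_mm * viewer_distance_mm / eye_separation_mm
    # The slit pitch is slightly less than two pixel pitches so that the
    # two views converge at the viewer distance.
    slit_pitch = 2.0 * pixel_pitch_mm * viewer_distance_mm / (viewer_distance_mm + gap)
    return gap, slit_pitch

# Recompute the barrier configuration as the tracked viewer distance changes.
for tracked_distance_mm in (2000.0, 2500.0, 3000.0):
    gap, pitch = barrier_geometry(pixel_pitch_mm=0.1, viewer_distance_mm=tracked_distance_mm)
    print(f"{tracked_distance_mm} mm -> gap {gap:.2f} mm, slit pitch {pitch:.4f} mm")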
For instance,
Remote control 3804a is a device that viewer 3806a may use to interact with display device 3802, and remote control 3804b is a device that viewer 3806b may use to interact with display device 3802. For example, as shown in
Headsets 3812a and 3812b are worn by viewers 3806a and 3806b, respectively. Headsets 3812a and 3812b each include one or two speakers (e.g., earphones) that enable viewers 3806a and 3806b to hear audio associated with the media content of views 3808a and 3808b. Headsets 3812a and 3812b enable viewers 3806a and 3806b to hear audio of their respective media content without hearing audio associated with the media content of the other of viewers 3806a and 3806b. Headsets 3812a and 3812b may each optionally include a microphone to enable viewers 3806a and 3806b to interact with display device 3802 using voice commands.
Display device 3802, headset 3812a, and/or remote control 3804a may operate to provide position information 3810a regarding viewer 3806a to display device 3802, and display device 3802, headset 3812b, and/or remote control 3804b may operate to provide position information 3810b regarding viewer 3806b to display device 3802. Display device 3802 may use position information 3810a and 3810b to reconfigure one or more light manipulators (e.g., parallax barriers and/or lenticular lenses) of display device 3802 to enable views 3808a and 3808b to be delivered to viewers 3806a and 3806b, respectively, at various locations. For example, display device 3802, headset 3812a, and/or remote control 3804a may use positioning techniques to track the position of viewer 3806a, and display device 3802, headset 3812b, and/or remote control 3804b may use positioning techniques to track the position of viewer 3806b.
As shown in
Control circuitry 3902 may also include one or more secondary storage devices (not shown in
Control circuitry 3902 further includes a user input interface 3918, and a media interface 3920. User input interface 3918 is intended to generally represent any type of interface that may be used to receive user input, including but not limited to a remote control device, a traditional computer input device such as a keyboard or mouse, a touch screen, a gamepad or other type of gaming console input device, or one or more sensors including but not limited to video cameras, microphones and motion sensors.
Media interface 3920 is intended to represent any type of interface that is capable of receiving media content such as video content or image content. In certain implementations, media interface 3920 may comprise an interface for receiving media content from a remote source such as a broadcast media server, an on-demand media server, or the like. In such implementations, media interface 3920 may comprise, for example and without limitation, a wired or wireless internet or intranet connection, a satellite interface, a fiber interface, a coaxial cable interface, or a fiber-coaxial cable interface. Media interface 3920 may also comprise an interface for receiving media content from a local source such as a DVD or Blu-Ray disc player, a personal computer, a personal media player, smart phone, or the like. Media interface 3920 may be capable of retrieving video content from multiple sources.
Control circuitry 3902 further includes a communication interface 3922. Communication interface 3922 enables control circuitry 3902 to send control signals via a communication medium 3952 to another communication interface 3930 within driver circuitry 3904, thereby enabling control circuitry 3902 to control the operation of driver circuitry 3904. Communication medium 3952 may comprise any kind of wired or wireless communication medium suitable for transmitting such control signals.
As shown in
In one example mode of operation, processing unit 3914 operates pursuant to control logic to receive video content via media interface 3920 and to generate control signals necessary to cause driver circuitry 3904 to render such video content to screen 3906 in accordance with a selected viewing configuration. The control logic that is executed by processing unit 3914 may be retrieved, for example, from a primary memory or a secondary storage device connected to processing unit 3914 via communication infrastructure 3912 as discussed above. The control logic may also be retrieved from some other local or remote source. Where the control logic is stored on a computer readable medium, that computer readable medium may be referred to herein as a computer program product.
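As a minimal sketch of this mode of operation (Python, with hypothetical object and method names such as next_frame and apply_configuration), the control logic can be pictured as a loop that pulls content from the media interface, selects a viewing configuration, and directs the driver circuitry accordingly:

def render_loop(media_interface, driver, select_viewing_configuration):
    """Illustrative control loop: fetch content, choose a viewing
    configuration, and send corresponding commands to driver circuitry."""
    while True:
        frame = media_interface.next_frame()           # hypothetical media-interface call
        if frame is None:                              # end of content
            break
        config = select_viewing_configuration(frame)   # e.g., 2D, 3D2, or mixed regions
        driver.apply_configuration(config)             # control signals for pixel array/manipulator
        driver.render(frame)                           # drive the screen for this frame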
Among other features, driver circuitry 3904 may be controlled in a manner previously described to send coordinated drive signals necessary for simultaneously displaying two-dimensional images, three-dimensional images and multi-view three-dimensional content via different display regions of the screen. The manner in which pixel array 3942, adaptable light manipulator 3944 (e.g., an adaptable parallax barrier), and light generator 3946 may be manipulated in a coordinated fashion to perform this function was described previously herein. Note that in accordance with certain implementations (e.g., implementations in which pixel array 3942 comprises an OLED/PLED pixel array), screen 3906 need not include light generator 3946.
In one embodiment, at least part of the function of generating control signals necessary to cause pixel array 3942, adaptable light manipulator 3944 and light generator 3946 to render video content to screen 3906 in accordance with a selected viewing configuration is performed by drive signal processing circuitry 3938 which is integrated within driver circuitry 3904. Such circuitry may operate, for example, in conjunction with and/or under the control of processing unit 3914 to generate the necessary control signals.
In certain implementations, control circuitry 3902, driver circuitry 3904, and screen elements 3906 are all included within a single housing. For example and without limitation, all these elements may exist within a television, a laptop computer, a tablet computer, or a telephone. In accordance with such an implementation, the communication link formed over communication medium 3952 between communication interfaces 3922 and 3930 may be replaced by a direct connection between driver circuitry 3904 and communication infrastructure 3912. In an alternate implementation, control circuitry 3902 is disposed within a first housing, such as a set top box or personal computer, and driver circuitry 3904 and screen 3906 are disposed within a second housing, such as a television or computer monitor. The set top box may be any type of set top box including but not limited to fiber, Internet, cable, satellite, or terrestrial digital.
Media node 4000 includes processing circuitry 4004, communication interface circuitry 4028, built-in media storage 4030, and a built-in screen assembly 4032. In one example, communication interface circuitry 4028 receives the 2D and/or 3Dx media data from the source(s) via link(s) 4034E. In another example, communication interface circuitry 4028 receives the 2D and/or 3Dx media data from built-in media storage 4030 via link 4034C. In yet another example, communication interface circuitry 4028 receives a first portion of the 2D and/or 3Dx media data from the source(s) via link(s) 4034E and a second portion of the 2D and/or 3Dx media data from built-in media storage 4030 via link 4034C. Communication interface circuitry 4028 forwards the various 2D and/or 3Dx media data to processing circuitry 4004 via link 4034A to be processed. Communication interface circuitry 4028 forwards the processed media data that is received from processing circuitry 4004 to built-in screen assembly 4032 via link 4034D and/or toward a single external screen via link 4034F.
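The data flow just described can be sketched as follows (Python, hypothetical names): media portions gathered from external links and/or built-in storage are processed and then forwarded to a built-in screen, an external screen, or both. This is an illustration of the routing, not an implementation of the circuitry.

def route_media(external_portions, stored_portions, process, to_built_in_screen, to_external_screen):
    """Illustrative media-node flow: gather, process, then forward."""
    # Gather whichever portions of the 2D and/or 3Dx media data are available.
    portions = list(external_portions) + list(stored_portions)

    processed = process(portions)       # stand-in for processing circuitry

    # Forward the processed media data along the enabled output paths.
    destinations = []
    if to_built_in_screen:
        destinations.append(("built-in screen", processed))
    if to_external_screen:
        destinations.append(("external screen", processed))
    return destinations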
Processing circuitry 4004 adjusts the 2D and/or 3Dx media data and/or a frame structure associated therewith to provide output data having a resource requirement that accommodates constraint(s) of one or more communication links (e.g., any one or more of communication links 4034A-4034F). For example, a constraint may be a maximum bandwidth that a communication link or a device that is connected to the communication link is capable of supporting. Processing circuitry 4004 includes content resource requirement adjustment (CRRA) circuitry 4006 and frame structure selection (FSS) circuitry 4020. CRRA circuitry 4006 adjusts the media data based on the constraint(s) of the communication link(s). CRRA circuitry 4006 includes decryption/encryption circuitry 4008, decoding/encoding circuitry 4010, overlap removal circuitry 4012, resolution modification circuitry 4014, camera view circuitry 4016, and frame rate circuitry 4018.
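A minimal sketch of this constraint-driven adjustment, in Python with hypothetical names (estimated_bandwidth_mbps, and stage callables standing in for the overlap-removal, resolution, camera-view, and frame-rate circuits):

def fit_to_link(media, link_capacity_mbps, stages):
    """Apply adjustment stages in order until the media's estimated
    bandwidth fits the capacity of the communication link.

    'media' is assumed to expose estimated_bandwidth_mbps(); each stage is
    a callable that returns a (possibly reduced) version of the media."""
    for stage in stages:
        if media.estimated_bandwidth_mbps() <= link_capacity_mbps:
            break
        media = stage(media)    # e.g., remove overlap, drop camera views, lower resolution
    return media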
Decryption/encryption circuitry 4008 is configured to select a type of encryption that is to be used to encrypt the media data based on the constraint(s). For example, decryption/encryption circuitry 4008 may encrypt each of the various 2D and/or 3Dx media data individually based on the constraint(s). In another example, decryption/encryption circuitry 4008 may combine two or more (e.g., all) of the various 2D and/or 3Dx media data and perform an encryption operation on the combined media data.
Decoding/encoding circuitry 4010 is configured to modify a type of encoding that is associated with the media data based on the constraint(s). For example, decoding/encoding circuitry 4010 may encode each of the various 2D and/or 3Dx media data individually based on the constraint(s). In another example, decoding/encoding circuitry 4010 may combine two or more (e.g., all) of the various 2D and/or 3Dx media data and perform an encoding operation on the combined media data.
The various 2D and/or 3Dx video media data may correspond to respective regions of a screen. Some (or all) of the regions may overlap in areas of overlap. Accordingly, multiple portions of data may correspond to each area of overlap. Overlap removal circuitry 4012 is configured to remove one or more of such data portions for each area of overlap, so that a single portion of data represents each area of overlap, based on the constraint(s). For instance, a first area of overlap may be represented by a first single data portion; a second area of overlap may be represented by a second single data portion, and so on. Overlap removal circuitry 4012 need not necessarily remove data portion(s) for every area of overlap. For instance, overlap removal circuitry 4012 may determine for which areas of overlap and/or for how many areas of overlap to remove data portion(s) based on the constraint(s).
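Purely as an illustration of the overlap-removal idea (Python, hypothetical data layout): given a mapping from each area of overlap to the data portions that cover it, a single portion is kept per area and the rest are dropped. Which portion is kept, and for how many areas removal is performed, could in practice depend on the constraint(s).

def remove_overlap(portions_by_area):
    """Keep one media-data portion per area of overlap and drop the rest."""
    kept = {}
    for area, portions in portions_by_area.items():
        # Keep the first portion here; a constraint-driven choice is also possible.
        kept[area] = portions[0] if portions else None
    return kept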
Resolution modification circuitry 4014 is configured to modify (e.g., increase or decrease) resolution of one or more of the 2D and/or 3Dx media data based on the constraint(s). For example, resolution modification circuitry 4014 may reduce a resolution of media data in response to a decrease in size of a screen region in which media content that is associated with the data is to be displayed. In another example, resolution modification circuitry 4014 may decrease resolution that is associated with a first subset of the 2D and/or 3Dx media data and increase resolution that is associated with a second subset of the 2D and/or 3Dx media data.
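As a simple illustrative sketch (Python, hypothetical names), a delivered resolution can be scaled with the size of the screen region the content now occupies, never exceeding the source resolution:

def scale_to_region(source_resolution, region_size, full_screen_size):
    """Scale a delivered resolution in proportion to its screen region."""
    src_w, src_h = source_resolution
    region_w, region_h = region_size
    full_w, full_h = full_screen_size
    new_w = min(src_w, max(1, src_w * region_w // full_w))
    new_h = min(src_h, max(1, src_h * region_h // full_h))
    return (new_w, new_h)

# Example: halving the region halves the delivered resolution.
# scale_to_region((1920, 1080), (960, 540), (1920, 1080)) -> (960, 540)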
Camera view circuitry 4016 is configured to modify a number of camera views that are represented by one or more of the 2D and/or 3Dx media data based on the constraint(s). For example, camera view circuitry 4016 may remove two perspectives from 3D4 media data to provide 3D2 media data to reduce a bandwidth that is used to deliver the data. In another example, camera view circuitry 4016 may remove one perspective from 3D2 media data to provide 2D media data. In yet another example, camera view circuitry 4016 may add four perspectives to 3D4 media data to provide 3D8 media data, so as to more thoroughly utilize available bandwidth.
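An illustrative sketch of camera-view reduction (Python, hypothetical names): a target number of evenly spaced perspectives is kept, e.g. two of four views for 3D4-to-3D2 or one view for 3D2-to-2D; adding views (as in 3D4-to-3D8) would instead require interpolation and is not shown.

def reduce_camera_views(camera_views, target_count):
    """Keep 'target_count' evenly spaced perspective views to cut bandwidth."""
    if target_count >= len(camera_views):
        return list(camera_views)     # no reduction needed; adding views needs interpolation
    step = len(camera_views) / float(target_count)
    return [camera_views[int(i * step)] for i in range(target_count)]

# Example: reduce_camera_views(["v0", "v1", "v2", "v3"], 2) -> ["v0", "v2"]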
Frame rate circuitry 4018 is configured to modify a frame rate that is associated with one or more of the various 2D and/or 3Dx media data based on the constraint(s). For example, frame rate circuitry 4018 may reduce a frame rate that is associated with data if consecutive frames of the data are substantially the same. In another example, frame rate circuitry 4018 may decrease the frame rate that is associated with the data to satisfy a bandwidth constraint of a communication link.
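A minimal sketch of the first example (Python, with a hypothetical frame-difference callable): frames that are substantially the same as the previously kept frame are dropped, lowering the effective frame rate of nearly static content.

def drop_static_frames(frames, difference, threshold):
    """Drop frames whose difference from the last kept frame is below threshold.

    'difference' is a callable returning a numeric dissimilarity between two
    frames (for instance, a mean absolute pixel difference)."""
    kept = []
    for frame in frames:
        if not kept or difference(kept[-1], frame) >= threshold:
            kept.append(frame)
    return kept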
Any of decryption/encryption circuitry 4008, decoding/encoding circuitry 4010, overlap removal circuitry 4012, resolution modification circuitry 4014, camera view circuitry 4016, and/or frame rate circuitry 4018 may be connected by one or more links.
FSS circuitry 4020 includes adaptive structure circuitry 4022 and predefined selection circuitry 4024. Predefined selection circuitry 4024 is configured to select a frame structure from among a plurality of fixed frame structures to be used with respect to the various 2D and/or 3Dx media data based on the constraint(s). Adaptive structure circuitry 4022 is configured to modify aspect(s) of a frame structure for use with respect to the various 2D and/or 3Dx media data based on the constraint(s).
Adaptive structure circuitry 4022 and predefined selection circuitry 4024 may be used in combination in some embodiments. For example, FSS circuitry 4020 may select a frame structure from among adaptive frame structures that are provided by adaptive structure circuitry 4022 and fixed frame structures that are provided by predefined selection circuitry 4024 based on the constraint(s). In another example, predefined selection circuitry 4024 may select a frame structure based on the constraint(s), and adaptive structure circuitry 4022 may modify the selected frame structure based on the constraint(s).
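For illustration, the combined behavior can be sketched as follows (Python, hypothetical names): a predefined structure is selected first and then optionally adapted; if no fixed structure satisfies the constraint(s), the caller may fall back to a fully adaptive structure.

def select_frame_structure(fixed_structures, fits_constraints, adapt=None):
    """Pick the first predefined frame structure that satisfies the
    constraints, then optionally let adaptive logic tailor it further."""
    for structure in fixed_structures:
        if fits_constraints(structure):
            return adapt(structure) if adapt else structure
    return None    # no predefined structure fits the constraint(s)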
Links 4034A-4034F are shown in
Media node 4000 may exist in any network element or node anywhere in an end-to-end pathway. For instance, media node 4000 may be included in a source (e.g., source 102 or any of sources 802A-802N), an intermediate device (e.g., intermediate device 104, 804, 904A, 904B, or 1004), or a display device (e.g., display device 106 or 806 or display 906 or 1006).
Source nodes 4102A-4102F are connected to media nodes 4106A-4106E via communication links 4104. As shown in
Each of the source nodes 4102A-4102F, media nodes 4106A-4106E, and intermediate network nodes 4108 may include CRRA circuitry, FSS circuitry, and/or CIF circuitry as described above with reference to
It will be recognized that any of source nodes 4102A-4102F may be connected by one or more links. Moreover, any of media nodes 4106A-4106E may be connected by one or more links. Each link, whether it is a dedicated link between two devices or a shared link, and whether it is external to device housings or part of an internal bus structure between circuit elements, may operate pursuant to a proprietary or industry standard protocol that takes bandwidth constraints into consideration. Thus, each processing circuit that manages a communication interface through which a combination of two or more different portions of media data is transferred may make adjustments to the underlying media portions in order to satisfy a bandwidth constraint. Such adjustments may be made with respect to each internal or external cable or bus structure, as well as every shared or point-to-point wired or wireless pathway between any two devices or between any two circuit elements.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those skilled in the relevant art(s) that various changes in form and details may be made to the embodiments described herein without departing from the spirit and scope of the invention. Accordingly, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
This application claims the benefit of U.S. Provisional Application No. 61/291,818, filed on Dec. 31, 2009, which is incorporated by reference herein in its entirety. This application also claims the benefit of U.S. Provisional Application No. 61/303,119, filed on Feb. 10, 2010, which is incorporated by reference herein in its entirety. This application is a continuation-in-part of the following pending U.S. Patent Applications, which are each incorporated by reference herein in their entireties: U.S. patent application Ser. No. 12/845,409, titled "Display With Adaptable Parallax Barrier," filed on Jul. 28, 2010; U.S. patent application Ser. No. 12/845,440, titled "Adaptable Parallax Barrier Supporting Mixed 2D And Stereoscopic 3D Display Regions," filed on Jul. 28, 2010; U.S. patent application Ser. No. 12/845,461, titled "Display Supporting Multiple Simultaneous 3D Views," filed on Jul. 28, 2010; and U.S. patent application Ser. No. 12/774,307, titled "Display with Elastic Light Manipulator," filed on May 5, 2010. This application is also related to the following U.S. Patent Applications, each of which also claims the benefit of U.S. Provisional Patent Application Nos. 61/291,818 and 61/303,119 and each of which is incorporated by reference herein: U.S. patent application Ser. No. 12/982,053, titled "Hierarchical Video Compression Supporting Selective Delivery of Two-Dimensional and Three-Dimensional Video Content," filed on Dec. 30, 2010; U.S. patent application Ser. No. 12/982,199, titled "Transcoder Supporting Selective Delivery of 2D, Stereoscopic 3D and Multi-View 3D Content from Source Video," filed on Dec. 30, 2010; U.S. patent application Ser. No. 12/982,248, titled "Interpolation of Three-Dimensional Video Content," filed on Dec. 30, 2010; U.S. patent application Ser. No. 12/982,062, titled "Set-Top Box Circuitry Supporting 2D and 3D Content Reductions to Accommodate Viewing Environment Constraints"; and U.S. patent application Ser. No. 12/982,330, titled "Multi-path and Multi-Source 3D Content Storage, Retrieval and Delivery," filed on Dec. 30, 2010.
Number | Date | Country | |
---|---|---|---|
20110169919 A1 | Jul 2011 | US |
Number | Date | Country | |
---|---|---|---|
61303119 | Feb 2010 | US | |
61291818 | Dec 2009 | US |
Relation | Number | Date | Country
---|---|---|---
Parent | 12774307 | May 2010 | US
Child | 12982289 | | US
Parent | 12845409 | Jul 2010 | US
Child | 12774307 | | US
Parent | 12845440 | Jul 2010 | US
Child | 12845409 | | US
Parent | 12845461 | Jul 2010 | US
Child | 12845440 | | US