Certain embodiments of the invention relate to the processing of three-dimensional (3D) video. More specifically, certain embodiments of the invention relate to a method and system for utilizing mosaic mode to create 3D video.
The availability and access to 3D video content continues to grow. Such growth has brought about challenges regarding the handling of 3D video content from different types of sources and/or the reproduction of 3D video content on different types of displays.
Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.
A system and/or method is provided for utilizing a mosaic mode to create 3D video, as set forth more completely in the claims.
Various advantages, aspects and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.
Certain embodiments of the invention may be found in a method and system for utilizing a mosaic mode to create 3D video. Various embodiments of the invention may relate to a processor comprising a video feeder, such as an MPEG video feeder, for example, wherein the processor may receive video data from multiple sources through the video feeder. The video data from one or more of those sources may comprise 3D video data. The video data from each source may be stored in a corresponding different area in memory during a capture time for a single picture. Each of the different areas in memory may correspond to a different window of multiple windows in an output video picture. The processor may store the video data from each source in memory in either a 2D format or a 3D format, based on a format of the output video picture. For example, when a 3D format is to be used, left-eye and right-eye information may be stored in different portions of memory. The video data may be read from the different areas in memory to a single buffer during a feed time for a single picture before being utilized to generate the output video picture.
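As a rough illustration of this capture-and-feed sequence, the following minimal Python sketch models each memory area as one entry in a canvas dictionary. Every name here (capture, feed, canvas, the window keys) is a hypothetical illustration and not part of any actual implementation.

```python
# Minimal sketch of the capture-and-feed sequence described above.
# Every name here is a hypothetical illustration, not an actual API.

def capture(sources, output_is_3d):
    """Capture each source's picture into its own area of memory.

    Each `canvas` entry stands for the separate memory area that backs
    one window of the output video picture.
    """
    canvas = {}
    for window, picture in sources.items():
        if output_is_3d:
            # Left-eye and right-eye information land in different
            # portions of memory; 2D sources supply both eyes.
            canvas[window] = {"L": picture.get("L", picture.get("2D")),
                              "R": picture.get("R", picture.get("2D"))}
        else:
            # For a 2D output picture, only one view per window is kept.
            canvas[window] = picture.get("2D", picture.get("L"))
    return canvas

def feed(canvas):
    """Read every memory area into a single buffer, in window order."""
    return [canvas[window] for window in sorted(canvas)]

# Two 3D sources (L/R pairs) and one 2D source feeding a 3D output.
sources = {
    "win0": {"L": "L0", "R": "R0"},
    "win1": {"2D": "flat1"},
    "win2": {"L": "L2", "R": "R2"},
}
single_buffer = feed(capture(sources, output_is_3d=True))
print(single_buffer)
```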
By utilizing a mosaic mode to process video data as described herein, a user may be provided with multiple windows in an output video picture to concurrently display video from different sources, including 3D video. The multi-windowed output video picture may have a 2D output format or a 3D output format based on, for example, the characteristics of the device in which the output video picture is to be displayed, reproduced, and/or stored. That is, while the different sources may provide 2D video data and/or 3D video data, the video data in each of the windows in the output video picture may be in the same format.
The SoC 100 may generate one or more output signals that may be provided to one or more output devices for display, reproduction, and/or storage. For example, output signals from the SoC 100 may be provided to display devices such as cathode ray tubes (CRTs), liquid crystal displays (LCDs), plasma display panels (PDPs), thin film transistor LCDs (TFT-LCDs), light-emitting diode (LED) displays, organic LED (OLED) displays, or other flat-screen display technologies. The characteristics of the output signals, such as pixel rate, resolution, and/or whether the output format is a 2D output format or a 3D output format, for example, may be based on the type of output device to which those signals are to be provided. Moreover, the output signals may comprise one or more output video pictures, each of which may comprise multiple windows to concurrently display video from different sources.
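As an illustrative sketch only, the choice between a 2D and a 3D output format based on the attached display might be modeled as follows; the capability table and all names are assumptions invented here, not actual device data.

```python
# Hypothetical sketch: deriving the output format from the attached
# display's capabilities. The capability table is invented for
# illustration and does not describe actual devices.
DISPLAY_CAPS = {
    "CRT":   {"supports_3d": False, "max_resolution": (720, 576)},
    "LCD":   {"supports_3d": False, "max_resolution": (1920, 1080)},
    "3D_TV": {"supports_3d": True,  "max_resolution": (1920, 1080)},
}

def output_format(display_type):
    return "3D" if DISPLAY_CAPS[display_type]["supports_3d"] else "2D"

print(output_format("3D_TV"))  # -> 3D
```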
The host processor module 120 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100. For example, parameters and/or other information, including but not limited to configuration data, may be provided to the SoC 100 by the host processor module 120 at various times during the operation of the SoC 100. The host processor module 120 may be operable to control and/or select a mode of operation for the SoC 100. For example, the host processor module 120 may enable a mosaic mode for the SoC 100.
The memory module 130 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to store information associated with the operation of the SoC 100. For example, the memory module 130 may store intermediate values that result during the processing of video data, including those values associated with the processing of video data during mosaic mode. Moreover, the memory module 130 may store graphics data that may be retrieved by the SoC 100 for mixing with video data. For example, the graphics data may comprise 2D graphics data and/or 3D graphics data for mixing with video data in the SoC 100.
The SoC 100 may comprise an interface module 102, a video processor module 104, and a core processor module 106. The SoC 100 may be implemented as a single integrated circuit comprising the components listed above. The interface module 102 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to receive multiple signals that comprise video content and/or graphics content. Similarly, the interface module 102 may be operable to communicate one or more signals comprising video content to output devices communicatively coupled to the SoC 100. For example, the SoC 100 may communicate one or more signals that comprise a sequence of output video pictures comprising multiple windows to concurrently display video from different sources. The format of the multi-windowed output video pictures may be based on, for example, the characteristics of the output devices.
The video processor module 104 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to process video data and/or graphics data. The video processor module 104 may be operable to support multiple formats for video data and/or graphics data, including multiple input formats and/or multiple output formats. The video processor module 104 may be operable to perform various types of operations on 2D video data and/or 3D video data. For example, when the video processor module 104 receives video data from several sources, any one of which may provide 2D video data or 3D video data, the video processor module 104 may generate output video comprising a sequence of output video pictures having multiple windows, wherein each of the windows in an output video picture corresponds to a particular source of video data. In this regard, the output video pictures that are generated by the video processor module 104 may be in a 2D output format or in a 3D output format in accordance with the device in which the output video pictures are to be displayed, reproduced, and/or stored. That is, even when a portion of the sources provide 2D video data and another portion of the sources provide 3D video data, the output video pictures generated by the video processor module 104 may be generated in either a 2D output format or a 3D output format. In some embodiments, when the video content comprises audio data, the video processor module 104, and/or another module in the SoC 100, may be operable to handle the audio data.
The core processor module 106 may comprise suitable logic, circuitry, code, and/or interfaces that may be operable to control and/or configure the operation of the SoC 100. For example, the core processor module 106 may be operable to control and/or configure operations of the SoC 100 that are associated with processing video content and/or graphics content. In some embodiments of the invention, the core processor module 106 may comprise memory (not shown) that may be utilized in connection with the operations performed by the SoC 100. For example, the core processor module 106 may comprise memory that may be utilized during the processing of video data and/or graphics data by the video processor module 104. The core processor module 106 may be operable to control and/or enable mosaic mode in the SoC 100. Such control and/or enabling may be performed in coordination with the host processor module 120, for example.
In operation, mosaic mode may be enabled in the SoC 100 by the host processor module 120 and/or the core processor module 106, for example. In this mode, the SoC 100 may receive video data from more than one source through the interface module 102. The video data from any one of the sources may comprise 2D video data or 3D video data. The video processor module 104 may decode the video data and may store the decoded video data from each source in a different buffer in memory. In this regard, the video processor module 104 may comprise a video decoder (not shown), such as an MPEG decoder, for example. The memory may be dynamic random access memory (DRAM), which may be part of the memory module 130, for example, and/or part of the SoC 100, such as embedded memory in the core processor module 106, for example.
A video feeder (not shown) within the video processor module 104, such as an MPEG video feeder, for example, may be utilized to obtain the video data from the buffers and feed the video data for capture into memory. The video data from each of the buffers may be fed and captured separately into a corresponding area in memory. Moreover, the capture of the video data from the various buffers may occur during a single picture capture time, that is, may occur during the time it takes to capture a single picture into memory by the SoC 100.
The capture of the video data allows for the generation of the multiple windows in the output video picture by storing the video data from a particular source in an area of memory that corresponds to a particular window in the output video picture. In other words, the windowing operation associated with mosaic mode occurs in connection with the capture of the video data to memory.
During capture, the video data may be stored in memory in a 2D format or in a 3D format, based on a format of the output video picture. When the video data is stored in memory in a 3D format, left-eye and right-eye information may be stored in different portions of memory. The video data may be read from memory to a single buffer during a video feed process for a single picture, that is, during the time it takes to feed a single picture from memory by the SoC 100. Once the video data has been placed in the single buffer, it may be subsequently processed to generate the output video picture with multiple windows for communication to an output device through the interface module 102.
Both the first format 200 and the second format 210 may be utilized as native formats by the SoC 100 to process 3D video data. The SoC 100 may also be operable to utilize the sequential format as a native format, which may be typically handled by the SoC 100 in a manner that is substantially similar to the handling of the second format 210. The SoC 100 may also support converting from the first format 200 to the second format 210 and converting from the second format 210 to the first format 200. Such conversion may be associated with various operations performed by the SoC 100, including but not limited to operations associated with mosaic mode. The SoC 100 may support additional native formats other than the first format 200, the second format 210, and the sequential format, for example.
Each of the crossbar modules 310a and 310b may comprise multiple input ports and multiple output ports. The crossbar modules 310a and 310b may be configured such that any one of the input ports may be connected to one or more of the output ports. The crossbar modules 310a and 310b may enable pass-through connections 316 between one or more output ports of the crossbar module 310a and corresponding input ports of the crossbar module 310b. Moreover, the crossbar modules 310a and 310b may enable feedback connections 318 between one or more output ports of the crossbar module 310b and corresponding input ports of the crossbar module 310a. The configuration of the crossbar modules 310a and/or 310b may result in one or more processing paths being configured within the processing network 300 in accordance with the manner and/or order in which video data is to be processed. For example, one or more processing paths may be configured in accordance with a mode of operation, such as mosaic mode, for example.
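A minimal sketch of such a crossbar, assuming a simple routing table in which any input port may fan out to one or more output ports, might look as follows; the class, port, and route names are hypothetical illustrations only.

```python
# Minimal sketch of a crossbar whose routing table lets any input port
# drive one or more output ports. Class, port, and route names are
# hypothetical illustrations only.
class Crossbar:
    def __init__(self, name):
        self.name = name
        self.routes = {}  # input port -> set of connected output ports

    def connect(self, in_port, out_port):
        self.routes.setdefault(in_port, set()).add(out_port)

    def fan_out(self, in_port):
        return sorted(self.routes.get(in_port, ()))

# One illustrative mosaic-mode path: feeder -> crossbar A -> pass-through
# -> crossbar B -> capture, plus a feedback route from B back to A.
xbar_a, xbar_b = Crossbar("310a"), Crossbar("310b")
xbar_a.connect("mfd_in", "passthrough_0")    # into the pass-through link
xbar_b.connect("passthrough_0", "cap_out")   # on to a capture port
xbar_b.connect("scl_out", "feedback_0")      # feedback toward crossbar A
print(xbar_a.fan_out("mfd_in"), xbar_b.fan_out("scl_out"))
```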
The MFD module 302 may be operable to read video data from memory and provide such video data to the crossbar module 310a. The video data read by the MFD module 302 may have been stored in memory after being generated by the MPEG decoder module 324. During mosaic mode, the MFD module 302 may be utilized to feed video data from multiple sources to one of the CAP modules 320, for example.
The MPEG decoder module 324 may be operable to decode video data received through one or more bit streams. For example, the MPEG decoder module 324 may receive up to N bit streams, BS1, . . . , BSN, corresponding to N different sources of video data. Each bit stream may comprise 2D video data or 3D video data, depending on the source. When the MPEG decoder module 324 decodes a single bit stream, the decoded video data may be stored in a single buffer or area in memory. When the MPEG decoder module 324 decodes more than one bit stream, the decoded video data from each bit stream may be stored in a separate buffer or area in memory. In this regard, the MPEG decoder module 324 may be utilized during mosaic mode to decode video data from several sources and to store the decoded video data in separate buffers.
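By way of illustration, decoding N bit streams into N separate buffers might be sketched as follows; fake_decode is a stand-in invented here for illustration and is not the actual MPEG decoder.

```python
# Sketch of per-bit-stream buffering: N streams decode into N separate
# buffers. `fake_decode` is a stand-in invented for illustration.
def fake_decode(bitstream):
    return f"decoded<{bitstream}>"

def decode_streams(bitstreams):
    # One buffer per source; a single stream would likewise use a
    # single buffer.
    return {name: fake_decode(name) for name in bitstreams}

buffers = decode_streams(["BS1", "BS2", "BS3"])
print(buffers)  # {'BS1': 'decoded<BS1>', 'BS2': 'decoded<BS2>', ...}
```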
The MPEG decoder module 324 may be operable to provide the decoded video data in one or more formats supported by the processing network 300. For example, the MPEG decoder module 324 may provide decoded video data in a 2D format and/or in a 3D format, such as an L/R format, an O/U format, and a sequential format. The decoded video data may be stored in memory in accordance with the format in which it is provided by the MPEG decoder module 324.
Since mosaic mode is utilized to generate an output video picture with multiple windows, the amount of video data that is displayed in one of the windows in the output video picture is a small portion of the amount of video data that is needed to display the entire output video picture. Therefore, the size of each buffer utilized in mosaic mode may be smaller than the size of a single buffer utilized to store decoded video data from a single source during a different mode of operation.
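A back-of-the-envelope example, assuming an illustrative 1920x1080, 8-bit, 4:2:0 picture divided into a 2x2 mosaic, shows why each per-window buffer may be roughly a quarter of the size of a full-picture buffer; the numbers are examples only.

```python
# Back-of-the-envelope buffer sizing for an illustrative 1920x1080,
# 8-bit, 4:2:0 picture split into a 2x2 mosaic. Numbers are examples.
full_w, full_h = 1920, 1080
bytes_full = full_w * full_h * 3 // 2      # luma plus subsampled chroma
win_w, win_h = full_w // 2, full_h // 2    # one of four equal windows
bytes_window = win_w * win_h * 3 // 2
print(bytes_full, bytes_window, bytes_window / bytes_full)
# 3110400 777600 0.25 -> each mosaic buffer is a quarter of a full one
```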
Each VFD module 304 may be operable to read video data from memory and provide such video data to the crossbar module 310a. The video data read by the VFD module 304 may have been stored in memory in connection with one or more operations and/or processes associated with the processing network 300. The HDMI module 306 may be operable to provide a live feed of high-definition video data to the crossbar module 310a. The HDMI module 306 may comprise a buffer (not shown) that may enable the HDMI module 306 to receive the live feed at one data rate and provide the live feed to the crossbar module 310a at another data rate.
Each SCL module 308 may be operable to scale video data received from the crossbar module 310a and provide the scaled video data to the crossbar module 310b. The MAD module 312 may be operable to perform motion-adaptive deinterlacing operations on interlaced video data received from the crossbar module 310a, including operations related to inverse telecine (IT), and provide progressive video data to the crossbar module 310b. The DNR module 314 may be operable to perform artifact reduction operations on video data received from the crossbar module 310a, including block noise reduction and mosquito noise reduction, for example, and provide the noise-reduced video data to the crossbar module 310b. In some embodiments of the invention, the operations performed by the DNR module 314 may be utilized before the operations of the MAD module 312 and/or the operations of the SCL module 308.
Each CAP module 320 may be operable to capture video data from the crossbar module 310b and store the captured video data in memory. One of the CAP modules 320 may be utilized during mosaic mode to capture video data stored in one or more buffers and fed to the CAP module 320 by the MFD module 302. In this regard, the video data in one of the buffers is fed and captured separately from the video data in another buffer. That is, instead of a single capture being turned on for all buffers, a separate capture is turned on for each buffer, with all the captures occurring during a single picture capture time. The video data stored in each buffer may be captured to different areas in memory that correspond to different windows in the output video picture.
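A minimal sketch of turning on one capture per buffer, with all captures completing within a single picture capture time, might look as follows; the even time split and all names are assumptions made purely for illustration.

```python
# Sketch of one capture per buffer, all completing within a single
# picture capture time. The even time split is assumed for illustration.
def schedule_captures(buffers, picture_capture_time_ms=16.7):
    slot = picture_capture_time_ms / len(buffers)
    return [(name, round(i * slot, 2))
            for i, name in enumerate(sorted(buffers))]

buffers = {"B1": "w0", "B2": "w1", "B3": "w2", "B4": "w3"}
for name, start_ms in schedule_captures(buffers):
    print(f"capture of {name} starts at {start_ms} ms")  # separate captures
```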
Each of the CMP modules 322a and 322b may be operable to combine or mix video data received from the crossbar module 310b. Moreover, each of the CMP modules 322a and 322b may be operable to combine or mix video data received from the crossbar module 310b with graphics data. For example, the CMP module 322a may be provided with a graphics feed, Gfxa, for mixing with video data received from the crossbar module 310b. Similarly, the CMP module 322b may be provided with a graphics feed, Gfxb, for mixing with video data received from the crossbar module 310b.
The processing path 400 illustrates a flow of data within the processing network 300 when operating in mosaic mode. For example, bit streams, BS1, . . . , BSN, are provided to the MPEG decoder module 324. Each bit stream may correspond to a different source of video data and to a different window in the output video picture. The video data in the bit streams may be decoded by the MPEG decoder module 324 and stored in separate buffers in a memory 410. The decoded video data associated with each bit stream may be in one of the multiple formats supported by the processing network 300. The memory 410 may correspond to the memory described above.
The video data may be captured in such a manner that the captured video data from each source is stored within the memory 410 in an area of memory that corresponds to a window in the output video picture. When the output video picture is to have a 2D output format, the captured video data may be stored in the memory 410 in a 2D format. When the output video picture is to have a 3D output format, the captured video data may be stored in the memory 410 in a 3D format. In this regard, the left-eye information and the right-eye information in the video data may be stored in different portions of the memory 410, for example.
The VFD module 304 may read and feed the captured video data to a single buffer. In this regard, the operation of the VFD module 304 during mosaic mode may be substantially the same as in other modes of operation. When the captured video data is stored in a 2D format in the memory 410, the VFD module 304 may read and feed the captured video data to the single buffer in the appropriate order to enable the generation of the output video picture in a 2D output format. When the captured video data is stored in a 3D format in the memory 410, the VFD module 304 may read and feed the captured video data to the single buffer in the appropriate order to enable the generation of the output video picture in a 3D output format. The 3D output format may be a 3D L/R output format or a 3D O/U output format. The video data in the single buffer may be communicated to a compositor module, such as the CMP module 322a, for example, which in turn provides an input to a video encoder to generate the output video picture in the appropriate output format.
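The different feed orders might be sketched as follows, modeling pictures as lists of rows; the side-by-side and over/under layouts shown are common conventions, and the function name is hypothetical.

```python
# Sketch of the feed order into the single buffer for each output
# format; pictures are modeled as lists of rows, and the side-by-side
# and over/under layouts shown are assumptions for illustration.
def feed_to_single_buffer(left, right, fmt):
    if fmt == "2D":
        return left                       # only one view exists
    if fmt == "L/R":
        return [l + r for l, r in zip(left, right)]  # side by side
    if fmt == "O/U":
        return left + right               # left rows over right rows
    raise ValueError(f"unsupported format: {fmt}")

left, right = ["L-row0", "L-row1"], ["R-row0", "R-row1"]
print(feed_to_single_buffer(left, right, "L/R"))  # ['L-row0R-row0', ...]
print(feed_to_single_buffer(left, right, "O/U"))  # ['L-row0', ..., 'R-row1']
```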
The processing path 400 is provided by way of illustration and not of limitation. Other data flow paths may be implemented during mosaic mode that may comprise more or fewer of the various devices, components, modules, blocks, circuits, or the like, of the processing network 300 to enable the generation of output video pictures comprising multiple windows for concurrently displaying video data from different sources.
The output video picture 500 is provided by way of illustration and not of limitation. For example, the SoC 100 may generate output video pictures that may comprise more or fewer windows than those shown in the output video picture 500. The size, layout, and/or arrangement of the video windows need not follow the size, layout, and/or arrangement of the output video picture 500. For example, the windows in an output video picture may be of different sizes. In another example, the windows in an output video picture need not be in a grid pattern. Moreover, the SoC 100 may be operable to dynamically change the characteristics and/or the video data associated with any one window in the output video picture 500.
As noted above, the MPEG decoder module 324 may be operable to provide the decoded video data in one or more formats. For example, the MPEG decoder module 324 may provide decoded video data in a 2D format and/or in a 3D format, such as an L/R format, an O/U format, and a sequential format. The decoded video data may be stored in the buffers 610 in the memory 410 in accordance with the format in which it is provided by the MPEG decoder module 324.
The video data from the sources S1, S2, S3, and S4 is provided to the MPEG decoder module 324 through bit streams BS1, BS2, BS3, and BS4, respectively. The MPEG decoder module 324 may decode the video data and provide the decoded video data to the buffers 610 in the memory 410 for storage.
The windowing, that is, the arrangement of the video data in the memory 410 to construct or lay out the windows in the output video picture 730, may be carried out by capturing the video data from a particular source in an area of the memory 410 that corresponds to the window in the output video picture 730 in which the video data from that particular source is to be displayed.
The video data from each of the buffers 610 may be stored in a corresponding memory area in the canvas 720. For example, the 2D video data, 2D1, associated with the first source, S1, may be stored in the top-left memory area of the canvas 720. The L2 video data and the R2 video data associated with the second source, S2, are not in 2D format. In such instances, the L2 video data may be used and stored in the top-right memory area of the canvas 720 while the R2 video data may be discarded. Similarly, the L3 video data and the R3 video data associated with the third source, S3, are not in 2D format. As before, the L3 video data may be used and stored in the bottom-left memory area of the canvas 720 while the R3 video data may be discarded. Moreover, the 2D video data, 2D4, associated with the fourth source, S4, may be stored in the bottom-right memory area of the canvas 720.
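A minimal sketch of this 2D-canvas fill, assuming the four-source, 2x2 window layout described above, might look as follows; the dictionary-based canvas is an illustration only.

```python
# Sketch of filling a 2D canvas from mixed sources: 2D data is stored
# as-is, while for 3D sources only the left-eye data is kept and the
# right-eye data is discarded. The 2x2 layout mirrors the example above.
buffers = {
    "S1": {"2D": "2D1"}, "S2": {"L": "L2", "R": "R2"},
    "S3": {"L": "L3", "R": "R3"}, "S4": {"2D": "2D4"},
}
positions = {"S1": "top-left", "S2": "top-right",
             "S3": "bottom-left", "S4": "bottom-right"}

canvas_2d = {}
for src, pos in positions.items():
    data = buffers[src]
    canvas_2d[pos] = data.get("2D", data.get("L"))  # drop any R data
print(canvas_2d)
```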
Once the video data has been captured to the memory 410 as described above, the VFD module 304 may be utilized to read and feed the entire contents of the canvas 720 to a single buffer for further processing and to subsequently generate the output video picture 730. In this regard, a compositor module, such as the CMP module 322a or the CMP module 322b, for example, may be utilized to perform a masking operation in connection with the generation of the output video picture 730 in mosaic mode.
The video data from the buffers 610 may be stored in the left picture canvas 810 and the right picture canvas 820 as appropriate. For example, the 2D video data, 2D1, associated with the first source, S1, which is not in a 3D format, may be stored in the top-left memory area of both the left picture canvas 810 and the right picture canvas 820. The L2 video data and the R2 video data associated with the second source, S2, may be stored in the top-right memory area of the left picture canvas 810 and in the top-right memory area of the right picture canvas 820, respectively. Similarly, the L3 video data and the R3 video data associated with the third source, S3, may be stored in the bottom-left memory area of the left picture canvas 810 and in the bottom-left memory area of the right picture canvas 820, respectively. Moreover, the 2D video data, 2D4, associated with the fourth source, S4, which is not in a 3D format, may be stored in the bottom-right memory area of both the left picture canvas 810 and the right picture canvas 820.
Once the video data has been captured to the memory 410 as described above, the VFD module 304 may be utilized to read and feed the entire contents of the left picture canvas 810 and of the right picture canvas 820 to a single buffer for further processing and to subsequently generate the output video picture 830. The output video picture 830 comprises a left picture 832L and a right picture 832R in a side-by-side configuration, each of which comprises four windows corresponding to the four sources of the video data. In this regard, a compositor module, such as the CMP module 322a or the CMP module 322b, for example, may be utilized to perform a masking operation in connection with the generation of the output video picture 830 in mosaic mode.
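A companion sketch for a 3D output, again assuming the four-source layout, might look as follows; 2D sources supply both eyes, while L and R data from 3D sources go to the matching canvas.

```python
# Companion sketch for a 3D output: every window is written to both a
# left and a right picture canvas; 2D sources supply both eyes, while
# L and R data from 3D sources go to the matching canvas.
buffers = {
    "S1": {"2D": "2D1"}, "S2": {"L": "L2", "R": "R2"},
    "S3": {"L": "L3", "R": "R3"}, "S4": {"2D": "2D4"},
}
positions = {"S1": "top-left", "S2": "top-right",
             "S3": "bottom-left", "S4": "bottom-right"}

left_canvas, right_canvas = {}, {}
for src, pos in positions.items():
    left_canvas[pos] = buffers[src].get("L", buffers[src].get("2D"))
    right_canvas[pos] = buffers[src].get("R", buffers[src].get("2D"))
print(left_canvas)   # drives the left picture 832L
print(right_canvas)  # drives the right picture 832R
```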
In one embodiment of the invention, the feeding and capturing of video data into the left picture canvas 810 may occur before feeding and capturing of video data into the right picture canvas 820. In another embodiment of the invention, the video data in the buffers 610 may be fed and captured sequentially to the memory 410.
For example, the video data from the first of the buffers 610, B1, may be fed and captured to the memory 410 first, the video data from the second of the buffers 610, B2, may be fed and captured to the memory 410 second, the video data from the third of the buffers 610, B3, may be fed and captured to the memory 410 third, and the video data from the fourth of the buffers 610, B4, may be fed and captured to the memory 410 last. In this regard, the left-eye information stored in one of the buffers 610 may be fed and captured to the memory 410 before the right-eye information stored in that same buffer is fed and captured to the memory 410.
At step 930, the video data stored in the various buffers may be fed and captured utilizing the MFD module 302 and one of the CAP modules 320. Multiple captures may be turned on based on the number of buffers from which video data is being captured. Moreover, the capture of the video data in all of the buffers may occur during a single picture capture time. The video data may be captured to memory according to the output format of the output video picture. For example, when the output video picture is to be in a 2D output format, the video data associated with a source, whether it is 2D video data or 3D video data, may be captured into memory utilizing a 2D canvas such as the 2D canvas 720. When the output video picture is to be in a 3D output format, the video data associated with a source, whether it is 2D video data or 3D video data, may be captured into memory utilizing a 3D canvas such as the left picture canvas 810 and the right picture canvas 820. When video data is captured into memory in a 3D format, the left-eye information and the right-eye information may be stored in different portions of the memory. Moreover, the windowing associated with the output video picture may be achieved by capturing the video data into portions of memory that correspond to a particular window in the output video picture.
At step 940, the VFD module 304 may be utilized to read and feed the captured video data stored in memory to a single buffer for further processing. The video feed may be performed in a single picture feed time. At step 950, the output video picture may be generated in the appropriate output format from the video data in the single buffer.
Various embodiments of the invention relate to a processor, such as the SoC 100 described above, that may be operable to receive video data from a plurality of sources and to store the video data from each source in a different area in memory during a capture time for a single picture, wherein each of the different areas in memory corresponds to a different window of a plurality of windows in an output video picture.
The SoC 100 may be operable to store the received video data from each source of the plurality of sources in a 2D format or in a 3D format. The SoC 100 may be operable to store left-eye information associated with a current source of the plurality of sources before storage of right-eye information associated with the current source. The SoC 100 may subsequently store left-eye information associated with a next source of the plurality of sources before storage of right-eye information associated with the next source. Moreover, the SoC 100 may be operable to store left-eye information associated with two or more sources of the plurality of sources before storage of right-eye information associated with the two or more sources.
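The two eye-ordering variants might be sketched as follows; the source and picture labels are purely illustrative.

```python
# Sketch of the two eye-ordering variants described above; the source
# and picture labels are purely illustrative.
sources = ["S1", "S2"]

# Variant 1: per source, left-eye data is stored before right-eye data.
interleaved = [f"{s}-{eye}" for s in sources for eye in ("L", "R")]
# Variant 2: left-eye data for all sources, then all right-eye data.
batched = [f"{s}-L" for s in sources] + [f"{s}-R" for s in sources]

print(interleaved)  # ['S1-L', 'S1-R', 'S2-L', 'S2-R']
print(batched)      # ['S1-L', 'S2-L', 'S1-R', 'S2-R']
```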
The SoC 100 may be operable to read the stored video data from the memory to a single buffer during a feed time for a single picture. The single buffer may be comprised within a device, component, module, block, circuit, or the like, that follows or is subsequent in operation to one of the VFD modules 304, such as the CMP modules 322a and 322b, and/or a video encoder, for example. When the stored video data is stored in the memory in a 3D format, the processor may be operable to read the stored video data from the memory to the single buffer in an L/R format or an O/U format.
In another embodiment of the invention, a non-transitory machine and/or computer readable storage and/or medium may be provided, having stored thereon a machine code and/or a computer program having at least one code section executable by a machine and/or a computer, thereby causing the machine and/or computer to perform the steps as described herein for utilizing mosaic mode to create 3D video.
Accordingly, the present invention may be realized in hardware, software, or a combination of hardware and software. The present invention may be realized in a centralized fashion in at least one computer system or in a distributed fashion where different elements may be spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
The present invention may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.
This application makes reference to, claims priority to, and claims the benefit of: U.S. Provisional Patent Application Ser. No. 61/267,729 (Attorney Docket No. 20428US01) filed on Dec. 8, 2009; U.S. Provisional Patent Application Ser. No. 61/296,851 (Attorney Docket No. 22866US01) filed on Jan. 20, 2010; and U.S. Provisional Patent Application Ser. No. 61/330,456 (Attorney Docket No. 23028US01) filed on May 3, 2010. This application also makes reference to: U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 20428US02) filed on Dec. 8, 2010; U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 23437US02) filed on Dec. 8, 2010; U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 23438US02) filed on Dec. 8, 2010; and U.S. Provisional Patent Application Ser. No. ______ (Attorney Docket No. 23440US02) filed on Dec. 8, 2010. Each of the above referenced applications is hereby incorporated herein by reference in its entirety.