The present disclosure relates generally to video signal coding and processing and, in particular, to coding and processing of chrominance information. Aspects of the disclosure relate to video standards and to broadcast video equipment and systems, including chroma key systems such as virtual sets, cameras, and video production switchers.
Y′CbCr is a method of color encoding that is used in digital video. Y′ is the luminance (luma) component, Cb is the blue-difference chrominance (chroma) component and Cr is the red-difference chroma component. For example, for High Definition TeleVision (HDTV) video formats, the following definitions hold:
Y′=0.2126R+0.7152G+0.0722B
Cb=B−Y′
Cr=R−Y′,
where R, G and B refer to Red, Green and Blue color components of an original image.
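The relations above can be sketched in Python. This is a simplified conversion following the definitions as given; full standard conversions also scale the color-difference components and add offsets, and those steps are omitted here.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert R, G, B to Y', Cb, Cr using the simplified HDTV
    relations given above. Full standard conversions also scale the
    color differences and add offsets; those steps are omitted."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b  # BT.709 luma weights
    cb = b - y   # blue-difference chroma
    cr = r - y   # red-difference chroma
    return y, cb, cr
```

For example, pure white (1, 1, 1) yields a luma of 1 and zero chroma, while a saturated blue drives Cb strongly positive.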
Some video coding approaches compress video signals in order to lower required video bandwidth, by limiting the bandwidth of the chroma information. A loss of chroma bandwidth is acceptable because the human eye is less sensitive to chroma position, resolution, and movement than it is to luma position, resolution and movement. However, in such approaches the Cb and Cr components are typically bandwidth limited at the video source (camera). It is therefore impossible to recover all chroma information that is lost at the source.
According to an aspect of the present disclosure, a video encoder includes: an interface to receive luminance information and chrominance information that is associated with a video signal; a first encoder to encode the received luminance information and a subset of the received chrominance information into a first encoded video signal, the subset of the chrominance information including less than all of the received chrominance information; and a second encoder to encode, into a second encoded video signal, at least the received chrominance information that is not encoded into the first encoded video signal.
In an embodiment, the second encoder is configured to encode all of the received chrominance information into the second encoded video signal.
The second encoder could be configured to map the received chrominance information to two data streams and to interlace the two data streams to generate the second encoded video signal.
The second encoder could instead be configured to map the received luminance information to a first data stream; to map, to a second data stream, the received chrominance information that is not encoded into the first encoded video signal; and to interlace the first data stream and the second data stream to generate the second encoded video signal.
In an embodiment, a video camera includes: an image detector to capture video image information for the video signal; a color space converter, operatively coupled to the image detector, to generate luminance information and chrominance information from the video signal; and a video encoder as disclosed herein, operatively coupled to receive the luminance information and the chrominance information from the color space converter.
A video production system is also provided, and includes such a video camera and a video processor, operatively coupled to the video camera, to receive the first encoded video signal and the second encoded video signal from the video camera, to decode chrominance information from the second encoded video signal, and to use the chrominance information that is decoded from the second encoded video signal in video processing for the video signal.
The video processor could be further configured to decode chrominance information from the first encoded video signal, and to use the chrominance information that is decoded from the first encoded video signal and the chrominance information that is decoded from the second encoded video signal in the video processing for the video signal.
In an embodiment, the video processor includes a video keyer and the video processing includes alpha generation for video keying.
A method according to another aspect includes: receiving luminance information and chrominance information that is associated with a video signal; encoding the received luminance information and a subset of the received chrominance information into a first encoded video signal, the subset of the chrominance information including less than all of the received chrominance information; and encoding, into a second encoded video signal, at least the received chrominance information that is not encoded into the first encoded video signal.
In an embodiment, the encoding into a second encoded video signal involves encoding all of the received chrominance information into the second encoded video signal.
The encoding into a second encoded video signal could involve mapping the received chrominance information to two data streams and interlacing the two data streams to generate the second encoded video signal.
The encoding into a second encoded video signal could instead involve: mapping the received luminance information to a first data stream; mapping, to a second data stream, the received chrominance information that is not encoded into the first encoded video signal; and interlacing the first data stream and the second data stream to generate the second encoded video signal.
The method could also include capturing the video signal and generating the luminance information and the chrominance information from the video signal.
In an embodiment, the method also involves receiving the first encoded video signal and the second encoded video signal, decoding chrominance information from the second encoded video signal, and using the chrominance information that is decoded from the second encoded video signal in video processing for the video signal.
The method could additionally include decoding chrominance information from the first encoded video signal. In this case, using chrominance information in video processing could involve using both the chrominance information that is decoded from the first encoded video signal and the chrominance information that is decoded from the second encoded video signal in the video processing for the video signal.
The video processing could include alpha generation for video keying.
Such a method, and/or possibly other methods disclosed herein, could be embodied, for example, in a non-transitory computer-readable medium storing instructions which when executed by a processor cause the processor to perform the method.
According to another aspect, a video decoder includes an interface to receive a first encoded video signal and a second encoded video signal. The first encoded video signal has encoded therein luminance information associated with a video signal and a subset of chrominance information associated with the video signal. The subset of chrominance information includes less than all chrominance information associated with the video signal, and the second encoded video signal has encoded therein at least chrominance information that is associated with the video signal but not encoded into the first encoded video signal.
The video decoder also includes: a first decoder, operatively coupled to the interface, to decode at least the luminance information from the first encoded video signal; and a second decoder, operatively coupled to the interface, to decode from the second encoded video signal at least the chrominance information that is associated with the video signal but not encoded into the first encoded video signal. All of the chrominance information associated with the video signal is decoded either from the second encoded video signal by the second decoder, or partially from the first encoded video signal by the first decoder and partially from the second encoded video signal by the second decoder.
In an embodiment, the second encoded video signal has all of the chrominance information that is associated with the video signal encoded in it, and the second decoder is configured to decode all of the chrominance information that is associated with the video signal from the second encoded video signal.
A video processing system could include such a video decoder, and a video processor, operatively coupled to the video decoder, to receive all of the chrominance information associated with the video signal, and to use the decoded chrominance information that is associated with the video signal in video processing for the video signal.
The video processor, as noted above, could include a video keyer, and the video processing could include alpha generation for video keying.
A method according to another aspect includes: receiving a first encoded video signal and a second encoded video signal, the first encoded video signal having encoded therein luminance information associated with a video signal and a subset of chrominance information associated with the video signal, the subset of chrominance information including less than all chrominance information associated with the video signal, the second encoded video signal having encoded therein at least chrominance information that is associated with the video signal but not encoded into the first encoded video signal; decoding the luminance information from the first encoded video signal; and decoding all of the chrominance information associated with the video signal either from the second encoded video signal, or partially from the first encoded video signal and partially from the second encoded video signal.
In an embodiment, the second encoded video signal has all of the chrominance information that is associated with the video signal encoded in it, and decoding all of the chrominance information involves decoding all of the chrominance information from the second encoded video signal.
The method could also include: using all of the chrominance information that is associated with the video signal in video processing for the video signal.
The video processing could include alpha generation for video keying.
A method according to yet another aspect includes: receiving chrominance information that is associated with a video signal but is not encoded into a first encoded video signal with luminance information that is associated with the video signal; and encoding the received chrominance information into a second encoded video signal.
Other aspects and features of embodiments of the present disclosure will become apparent to those skilled in the art upon review of the following description.
Examples of embodiments of the invention will now be described in greater detail with reference to the accompanying drawings.
As noted above, some video coding approaches compress video signals in order to lower required video bandwidth, by limiting the bandwidth of the chroma information. So-called “4:2:2” video coding, for example, is a subsampling scheme and digital video encoding method specified in Appendix D of SMPTE 274M from the Society of Motion Picture & Television Engineers (SMPTE).
This and similar schemes are represented as a three-part ratio, J:a:b (4:2:2, for example), that describes the number of luma and chroma samples in a conceptual region that is J pixels wide and 2 pixels high. The parts are (in their respective order): J, the horizontal sampling reference, which is the width of the conceptual region and is conventionally 4; a, the number of chroma samples (Cb, Cr) in the first row of J pixels; and b, the number of changes of chroma samples (Cb, Cr) between the first and second rows of J pixels.
Video encoded with 4:2:2 coding has the chroma information encoded at half the horizontal resolution of an original video image. That is, exactly half of the horizontal chroma information is discarded in this subsampling scheme. Each pixel contains full Y′ information, but alternating pixels contain only Cb or Cr information.
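The subsampling described above can be sketched as follows. The word order mimics a multiplexed 4:2:2 line (Cb0 Y′0 Cr0 Y′1 . . . ), and the sketch simply discards odd-pixel chroma; a real encoder low-pass filters the chroma before subsampling, and the function name is illustrative.

```python
def encode_422(line):
    """Sketch of a 4:2:2 multiplexed word stream (Cb0 Y'0 Cr0 Y'1 ...).
    Chroma from odd-numbered pixels is dropped, halving horizontal
    chroma resolution. `line` is a list of (y, cb, cr) tuples."""
    stream = []
    pending_cr = None
    for i, (y, cb, cr) in enumerate(line):
        if i % 2 == 0:
            pending_cr = cr              # even pixel's Cr travels with the next Y'
            stream.extend([cb, y])       # even pixel: Cb word, then Y' word
        else:
            stream.extend([pending_cr, y])  # odd pixel: its own chroma is discarded
    return stream
```

Each pixel's Y′ word survives, but only the even pixels' (Cb, Cr) pairs appear in the output, exactly half of the original chroma.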
4:2:2 coding is used in many modern broadcast video formats, including: 480i, 576i, 720p50, 720p59.94, 1080i50, 1080i59.94, 1080p50, and 1080p59.94. With this coding method, 1080p video formats can be transmitted on a 3 Gb/s serial link, for example. Other types of links could also or instead be used.
Cameras of the type shown in
Thus, in
A virtual set is a video studio environment where actors (talent) perform in front of a specially painted monochromatic background, which is usually blue or green. During production, the monochromatic background is removed using video processing and replaced with a completely different background, either computer generated or video from another source. This process is called chroma keying. Using this method, the talent can be virtually placed in any background, real or imagined.
In
Chroma keying is a keying technique in which video pixels in the foreground video that have a pre-selected chroma are removed and replaced with the background video. Chroma keys are used to replace the monochromatic background with a completely different background.
The industry standard chroma keyer 700 shown in
The resulting full chroma bandwidth video output of the chroma interpolator 702 is analyzed by the alpha generator 704 and used to generate the chroma key alpha based on pixel color. The alpha generator 704 generates the alpha by identifying pixels that are a pre-selected color, usually the color used in the monochromatic screen, and assigning these pixels to the background. The remaining pixels are assigned to the foreground. This alpha, along with the foreground video encoded in Y′CbCr format, is used by the foreground processor 706 to generate the foreground image. Finally, the key processor 708 takes both the intended background and the processed foreground image and combines them together to create the final output. The key processor 708 uses the key alpha to remove the monochromatic portion of the foreground video and layer the result over the background video. This key processor 708 thus uses the key alpha to decide whether each pixel is foreground, background, or a mix, on a pixel by pixel basis.
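The alpha-generation step can be illustrated with a minimal sketch. The distance metric, parameter names, and thresholds here are assumptions for illustration only; a production keyer uses more sophisticated color analysis.

```python
def generate_alpha(cb, cr, key_cb, key_cr, tolerance=10.0, softness=10.0):
    """Illustrative alpha generator: pixels whose chroma is close to the
    pre-selected key color become background (alpha 0), distant pixels
    become foreground (alpha 1), with a soft ramp in between. The
    Euclidean distance metric and the parameters are assumptions."""
    distance = ((cb - key_cb) ** 2 + (cr - key_cr) ** 2) ** 0.5
    alpha = (distance - tolerance) / softness
    return max(0.0, min(1.0, alpha))  # clamp alpha to [0, 1]
```

A pixel matching the key color exactly yields alpha 0 (pure background), a pixel far from it yields alpha 1 (pure foreground), and pixels near the tolerance boundary receive intermediate values, which is what lets the key processor mix edges smoothly.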
Because the Cb and Cr signals are bandwidth limited at the video source (camera) in 4:2:2 coding, it is impossible to recover all information that is lost in the source filtering and subsampling of the chroma channels. This may make it difficult to produce a high resolution key alpha, which has a dramatic impact on the quality and realism of the combined output. Alpha signals generated using these methods can have a high degree of horizontal aliasing.
Horizontal aliasing may result in a final image that has many defects along the edges of a foreground object.
A new method of chroma keying is proposed herein. The new method utilizes full bandwidth chroma information, which may help avoid horizontal aliasing errors that are common with systems designed around industry standard 4:2:2 video. Using full bandwidth chroma information may provide a superior combined image with heightened realism that requires fewer subsequent image processing steps to correct for horizontal aliasing errors.
Aspects of the present disclosure include, for example:
1. A novel digital video encoder and encoding method that encode full bandwidth chroma data without subsampling, which can be transported using industry standard 3 Gb/s infrastructure. One encoding method is called 0:4:4 encoding herein, for ease of reference. The 0:4:4 designation is intended to convey the notion that full chroma information is encoded without luma information, and does not follow the standard nomenclature noted above for 4:2:2 coding. Another encoding method is also disclosed, and is referred to herein as 4:2′:2′ encoding.
2. A novel full bandwidth chroma video camera that, in addition to generating industry standard 4:2:2 video, also generates a second encoded video signal, such as a full chroma bandwidth 0:4:4 encoded video signal or a 4:2′:2′ encoded video signal.
3. A novel full bandwidth chroma keying technique that utilizes both the industry standard 4:2:2 video and a second encoded video signal such as a 0:4:4 or a 4:2′:2′ encoded video signal, to generate a chroma key that may be visibly superior and use fewer resources compared to key generation using only a 4:2:2 encoded video signal.
4. A novel virtual set environment that utilizes, for example, 0:4:4 or 4:2′:2′ video coding, a full bandwidth chroma video camera, and a full bandwidth chroma keyer to generate virtualized productions which may have heightened realism compared to standard 4:2:2-based video systems.
In
Data stream one (a)=Cr0 Cr1 Cr2 Cr3 . . . .
Data stream two (b)=Cb0 Cb1 Cb2 Cb3 . . . .
These are combined into a single data stream (c), in a manner similar to the method described in SMPTE 425M section 4.2.1 for example, with the exception that all luma information is replaced with chroma information. The bit rate of the encoded video signal stream (c) is 3 Gb/s in an embodiment. The EAV, Line number, CRC, Ancillary data, and SAV sections shown in
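The stream mapping above can be sketched as follows. The function name and the exact word-interleaving order are illustrative assumptions, and the EAV/SAV framing, line numbers, CRC, and ancillary data are omitted.

```python
def encode_044(cb_samples, cr_samples):
    """Sketch of the 0:4:4 mapping: full-bandwidth Cr and Cb are placed
    on two data streams and interlaced into one signal, with no luma
    and no subsampling. Framing words are omitted from this sketch."""
    assert len(cb_samples) == len(cr_samples)
    stream_a = list(cr_samples)   # data stream one: Cr0 Cr1 Cr2 ...
    stream_b = list(cb_samples)   # data stream two: Cb0 Cb1 Cb2 ...
    combined = []
    for cr, cb in zip(stream_a, stream_b):
        combined.extend([cb, cr])  # word-interleave the two streams
    return combined
```

Because every pixel contributes both its Cb and its Cr word, the combined stream carries the same word count as a 4:2:2 line, which is what allows it to ride on the same 3 Gb/s infrastructure.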
On a comparison of
0:4:4 video encoding, in combination with 4:2:2 video encoding, provides two encoded video signals which together provide full luma information and full chroma information associated with a video signal. Half of the chroma information is transmitted twice in this embodiment, since a 4:2:2 encoded video signal includes half of the original chroma information, and full chroma information (including the half of the original chroma information in the 4:2:2 video) is encoded into the 0:4:4 encoded video signal.
Other coding techniques are also possible. For example, according to a 4:2′:2′ encoding technique, luma information could be encoded along with only the chroma information that was not used to generate the 4:2:2 encoded video signal, in order to generate a second encoded video signal. In this case, the luma information is transmitted twice, in both the 4:2:2 encoded video signal and the 4:2′:2′ encoded video signal. The 4:2:2 encoded video signal and the 4:2′:2′ encoded video signal together provide full luma information and full chroma information. In a 4:2:2/4:2′:2′ system, the 4:2:2 encoded video signal could be as shown at (c) in
In both of these examples, 0:4:4 encoded video signals and 4:2′:2′ encoded video signals are compatible with industry standard video transmitters and receivers that are capable of transmitting and receiving 4:2:2 encoded video.
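The 4:2′:2′ mapping described above can be sketched as follows. The subsampling phase (even pixels carried by the 4:2:2 signal, odd pixels by this one), the even-length-line assumption, and the word order are all illustrative assumptions.

```python
def encode_42p2p(line):
    """Sketch of the 4:2':2' mapping: full luma on one data stream and
    the odd-pixel chroma that 4:2:2 discards on the other, interlaced
    into a second encoded signal. `line` is a list of (y, cb, cr)
    tuples for one scan line with an even number of pixels."""
    luma = [y for (y, _, _) in line]            # data stream one: all Y'
    chroma = []                                 # data stream two
    for i, (_, cb, cr) in enumerate(line):
        if i % 2 == 1:                          # chroma not carried by 4:2:2
            chroma.extend([cb, cr])
    # Interlace one chroma word with one luma word, as in a 4:2:2 frame.
    return [w for pair in zip(chroma, luma) for w in pair]
```

The result is structurally identical to a 4:2:2 line, so standard transmitters and receivers can carry it, while the chroma payload is exactly the half that the first signal lacks.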
Generating a second encoded video signal as disclosed herein is not simply a reversal of the compression that is used in 4:2:2 video coding. In embodiments disclosed herein, there are two encoded video signals instead of just the typical, single 4:2:2 encoded video signal, and the two encoded video signals, together and in combination with each other, provide full luma information and full chroma information. As noted above, at least some chroma information or luma information could actually be encoded twice, into both of the encoded video signals. Although this duplication of encoding and generation of a second encoded video signal increases the amount of information that is transferred between a video source and a video processor, this approach may be preferred because it keeps both of the encoded video signals compatible with industry standard 4:2:2 transmitters and receivers.
A new digital video camera according to an aspect of the present disclosure produces two outputs, which in an embodiment are as follows:
Output 1 is intended to be used both with a full bandwidth chroma keyer and in non-virtual set applications. Output formats include but are not limited to 480i, 576i, 720p50, 720p59.94, 1080i50, 1080i59.94, 1080p50, 1080p59.94, 2160p50, 2160p59.94, 2160p120.
Output 2 is intended to be used with a full bandwidth chroma keyer and the novel virtual set environment as described herein. In 0:4:4 format, this output includes no luma information, but includes full bandwidth chroma information as described above. In 4:2′:2′ format, this output includes luma information and the other half of the chroma information, which was not included in Output 1. Output formats include but are not limited to 480i, 576i, 720p50, 720p59.94, 1080i50, 1080i59.94, 1080p50, 1080p59.94, 2160p50, 2160p59.94, 2160p120.
The example video camera 1100 includes a lens 1102, detectors 1104, a color space converter 1106, an LPF 1108, and multiplexers 1110, 1112. These parts of the example video camera are also shown in the industry standard video camera 300 in
However, the example video camera 1100 also includes a multiplexer 1114. In addition to generating an industry standard 4:2:2 video output, a second encoded video output such as a 0:4:4 video output is generated as Output 2. This is achieved by encoding the Cb and Cr information into a separate data stream. In the example shown, all of the Cb and Cr information is encoded by the multiplexer 1114 without first being subsampled. The multiplexer 1114 is coupled to the color space converter 1106 to receive all of the chroma information, before it is subsampled by the LPF 1108. This preserves all of the original chroma information in the 0:4:4 output.
4:2′:2′ encoding could be implemented using a set of two multiplexers as shown at 1110 and 1112, but with one of the multiplexers coupled to receive the half of the chroma information that is to be encoded into the second encoded video signal. This could involve a second LPF (not shown) that selects Cr and Cb samples that are not selected by the LPF 1108. Another possible option for 4:2′:2′ encoding could be to replace the LPF 1108 with a distributor that distributes chroma information for the 4:2:2 encoded video signal to a first set of Cr/Cb outputs coupled to the multiplexer 1110 and distributes remaining chroma information for the 4:2′:2′ encoded video signal to a second set of Cr/Cb outputs coupled to another multiplexer (not shown). Other implementations are also possible.
The LPF 1108 and the multiplexers 1110, 1112, 1114 are an example of one embodiment of a video encoder 1120. The video encoder 1120 receives luma information Y′ and chroma information Cb and Cr that is associated with a video signal. Y′, Cb, and Cr are received in the example shown through an interface to the color space converter 1106. This interface could be or include any of various types of physical connections and/or connectors, and the type of connection(s)/connector(s) could be implementation-dependent.
The video encoder 1120 generates two encoded video signals at Output 1 and Output 2. The LPF 1108 and the multiplexers 1110, 1112 could be considered a form of a first encoder to encode the received luma information Y′ and a subset of the received chroma information Cb and Cr into a first encoded video signal, which is a 4:2:2 encoded video signal in the example shown. The subset of the chroma information that is encoded into the first encoded video signal includes less than all of the received chroma information. For 4:2:2 encoding, only half of the chroma information is encoded into the 4:2:2 encoded video signal. The other half of the chroma information is removed by the LPF 1108 and by subsampling during 4:2:2 encoding.
The multiplexer 1114 is an example of a second encoder to encode, into a second encoded video signal at Output 2 in
Therefore, at least the received chroma information that is not encoded into the first encoded video signal is encoded into the second encoded video signal. Other information, including the chroma information that is encoded into the first encoded video signal (for 0:4:4 encoding) or luma information (for 4:2′:2′ encoding) for example, could also be encoded into the second encoded video signal.
In an embodiment, the second encoder (multiplexer 1114) is configured to map the received chroma information to two data streams and to multiplex or interlace the two data streams to generate the second encoded video signal. In this case, the second encoder is configured to encode all of the received chroma information into the second encoded video signal. The second encoder could be implemented using hardware, firmware, one or more components that execute software, or some combination thereof. Electronic devices that might be suitable for implementing the multiplexer 1114 and/or other forms of a second encoder include, among others, microprocessors, microcontrollers, Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), and other types of “intelligent” integrated circuits. Such devices are configured for operation by executing software that is stored in an integrated or separate memory (not shown).
The second encoder such as the multiplexer 1114 could map the received chroma information to data stream one and data stream two as shown in
For 4:2′:2′ encoding, the second encoder could be configured to map the received luma information to a first data stream such as data stream one. The second encoder could then map the received chroma information that is not encoded into the first encoded video signal to a second data stream, such as data stream two. Data stream one and data stream two could then be multiplexed or interlaced to generate the second encoded video signal at Output 2 in
A video encoder 1120 could be part of a video capture or recording device, such as a video camera as shown in
A full bandwidth digital video chroma keyer uses two encoded video signals, such as 4:2:2 encoded video and 0:4:4 encoded video in one embodiment. The full bandwidth chroma information in the 0:4:4 video stream in this example, in conjunction with the original Y′ information in the 4:2:2 encoded video, can be used to create the chroma key alpha signal directly, without a chroma interpolation step. This allows for an implementation with fewer resources, since chroma interpolation is not used.
There is no chroma interpolator in the example full bandwidth chroma keyer 1200. Chroma interpolation is eliminated because the chroma information is already at full bandwidth, in either the foreground video 2 input itself (in 0:4:4 encoding for example) or a combination of the foreground video 1 input and the foreground video 2 input (in 4:2′:2′ encoding for example).
The full bandwidth chroma information is analyzed directly by the alpha generator 1204 to generate the chroma key alpha based on pixel color. This alpha, along with the foreground video 1 input (encoded in Y′CbCr format) is used by the foreground processor 1206 to generate the foreground image. The key processor 1208 takes both the intended background and the processed foreground image and combines them together to create the final output. The key processor 1208 uses the key alpha to decide which pixel is foreground and which pixel is background.
In
A first decoder, which could be integrated with either or both of the alpha generator 1204 and the foreground processor 1206 or provided as a separate component in another embodiment, is operatively coupled to the interface to decode at least luma information from the first encoded video signal. The foreground video 1 input in
A second decoder could similarly be integrated with the alpha generator 1204 or implemented separately. The second decoder is operatively coupled to the interface, to decode from the second encoded video signal (the foreground video 2 input in the example shown) at least chroma information that is associated with the video signal but not encoded into the first encoded video signal. Some chroma information could be decoded from both encoded video signals in the case of a combination of 4:2:2/4:2′:2′ coding for example. The second decoder would then decode, from the second encoded video signal, chroma information that is not encoded into the first encoded video signal. In an embodiment that uses 4:2:2 coding in combination with 0:4:4 coding, the second decoder could decode all chroma information from the second encoded video signal, including chroma information that is also encoded into the first encoded video signal. Thus, all of the chrominance information associated with the video signal is decoded either from the second encoded video signal by the second decoder, or partially from the first encoded video signal by the first decoder and partially from the second encoded video signal by the second decoder. In either case, full bandwidth chroma information is available.
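The recombination performed by the two decoders can be sketched for the 4:2:2/4:2′:2′ case as follows. The word-multiplexed line layout (Cb Y′ Cr Y′ . . . ) and the subsampling phase are assumptions for illustration.

```python
def decode_full_chroma(stream_422, stream_second):
    """Sketch of the decode step for a 4:2:2 / 4:2':2' pair. Each
    argument is a word-multiplexed line (Cb Y' Cr Y' ...); the first
    carries even-pixel chroma and the second carries the odd-pixel
    chroma that 4:2:2 discarded. Recombining restores full-bandwidth
    chroma. The layout and phase are illustrative assumptions."""
    def chroma_pairs(stream):
        words = stream[0::2]                        # chroma words sit at even indices
        return list(zip(words[0::2], words[1::2]))  # group into (Cb, Cr) pairs

    even = chroma_pairs(stream_422)     # chroma for pixels 0, 2, 4, ...
    odd = chroma_pairs(stream_second)   # chroma for pixels 1, 3, 5, ...
    full = []
    for e, o in zip(even, odd):
        full.extend([e, o])             # re-interleave per-pixel chroma
    return full                         # [(Cb0, Cr0), (Cb1, Cr1), ...]
```

Half of the chroma pairs come from each signal, which is the "partially from the first encoded video signal and partially from the second" case described above; in the 0:4:4 case, all pairs would instead be read from the second signal alone.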
In a video processing system, a video processor could be operatively coupled to the video decoder, to receive all of the decoded chroma information associated with the video signal, and to use the decoded chroma information in video processing for the video signal. A video keyer such as the full bandwidth chroma keyer 1200 is an example of a video processor. The video processing by the video processor could include alpha generation for video keying, as in the example full bandwidth chroma keyer 1200.
Video coding devices, including encoders and decoders, are described above. Such devices could be used in a video production system such as a virtual set environment, for example. In one embodiment, a full bandwidth chroma camera and a full bandwidth chroma keyer are implemented to enable creation of a virtual reality where talent captured in a monochromatic (typically green or blue) screen environment can be keyed onto different backgrounds with a realism that might not be attainable with limited chroma bandwidth systems and methods.
Thus, a video production system could include a camera with a video encoder such as the video encoder 1120 shown in
Alpha signals generated using a full bandwidth chroma keyer may have reduced aliasing effects that are common in industry standard, limited bandwidth chroma keyers. This may produce a higher quality result as shown in
A higher quality alpha may allow the creation of a higher quality final video output.
Embodiments are described above primarily in the context of encoded signals and devices. Other embodiments such as methods are also contemplated.
The example method 1600 includes an operation 1602 of receiving luma information and chroma information that is associated with a video signal. The received luma information and a subset of the received chroma information are encoded at 1604 into a first encoded video signal. The subset of the chroma information includes less than all of the received chroma information. Another encoding operation at 1606 involves encoding chroma information into a second encoded video signal. At least the received chroma information that is not encoded into the first encoded video signal is encoded into the second encoded video signal at 1606. In one embodiment, all of the received chroma information is encoded into the second encoded video signal, as in the 0:4:4 example herein. In another embodiment, the remaining chroma information that has not already been encoded into the first encoded video signal is encoded into the second encoded video signal, as in the 4:2′:2′ example herein.
Although shown as serial operations in
The operations at 1602, 1604, 1606 could be performed, for example, by a video camera. Other operations could also be performed by a video camera, such as capturing the video signal and generating the luma information and the chroma information from the video signal.
The example method 1700 includes operations related to processing encoded video signals. At 1702, a first encoded video signal and a second encoded video signal are received. The first encoded video signal has encoded therein luma information associated with a video signal and a subset of chroma information associated with the video signal. The subset of chroma information includes less than all chroma information associated with the video signal. The second encoded video signal has encoded therein at least chroma information that is associated with the video signal but not encoded into the first encoded video signal.
The luma information is decoded from the first encoded video signal at 1704. At 1706, all of the chroma information associated with the video signal is decoded either from the second encoded video signal, or partially from the first encoded video signal and partially from the second encoded video signal. The second encoded video signal could include all of the chroma information associated with the video signal, in which case the decoding at 1706 involves decoding all of the chroma information from the second encoded video signal.
Although the decoding operations 1704, 1706 are shown as serial operations in the drawings, they could be performed in a different order, or at least partially at the same time.
In an embodiment, the example method 1700 is implemented in conjunction with additional video processing in which all of the chroma information that is associated with the video signal is used in such video processing for the video signal. The video processing could include alpha generation, video keying, or both, for example.
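To illustrate why full-resolution chroma matters for such processing, the following sketches one common form of alpha generation for chroma keying: alpha is derived from each pixel's (Cb, Cr) distance to the key color. The function name, the linear ramp, and the `tolerance`/`softness` parameters are assumptions for illustration; the disclosure does not prescribe a particular keying algorithm:

```python
import numpy as np

def chroma_key_alpha(cb, cr, key_cb, key_cr, tolerance=20.0, softness=20.0):
    """Illustrative alpha generation from full-resolution chroma planes.

    Each pixel's (Cb, Cr) distance from the key color is mapped to alpha:
    0 inside the tolerance radius (fully keyed out), ramping linearly to
    1 beyond tolerance + softness (fully opaque foreground).
    """
    dist = np.hypot(cb - key_cb, cr - key_cr)
    return np.clip((dist - tolerance) / softness, 0.0, 1.0)
```

Because alpha is computed per chroma sample, chroma that was bandwidth-limited at the source would soften key edges; full-resolution chroma recovered per the method 1700 avoids that loss.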
The example methods 1600, 1700 are illustrative of embodiments. Examples of how operations could be performed and additional operations that may be performed will be apparent from the description and drawings relating to encoded signals and device or apparatus implementations, for example.
The encoding at 1606, for instance, could involve mapping the received chroma information to two data streams, and interlacing the two data streams to generate the second encoded video signal. Another possible option involves mapping the received luma information to a first data stream; mapping, to a second data stream, the received chroma information that is not encoded into the first encoded video signal; and interlacing the first data stream and the second data stream to generate the second encoded video signal.
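The interlacing of two data streams mentioned above can be sketched minimally as sample-by-sample interleaving; the function name is an assumption, and a real encoder would interleave words or lines according to the transport format, which the disclosure leaves open:

```python
def interlace_streams(stream_a, stream_b):
    """Interlace two equal-length data streams sample by sample.

    Output order is a0, b0, a1, b1, ..., so both constituent streams
    travel in a single second encoded video signal.
    """
    out = []
    for a, b in zip(stream_a, stream_b):
        out.extend((a, b))
    return out
```

De-interlacing at the decoder is the inverse: even-indexed samples recover the first stream and odd-indexed samples recover the second.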
Additional operations that are not explicitly shown in the drawings could also be performed.
Encoding and decoding operations could be implemented together in a virtual set environment or other type of video production system. Such an implementation could combine the operations shown in the example methods 1600, 1700.
Further variations may be or become apparent.
What has been described is merely illustrative of the application of principles of embodiments of the present disclosure. Other arrangements and methods can be implemented by those skilled in the art.
For example, the present disclosure is not limited to the particular example RGB/Y′CbCr transfer function noted above. Transfer functions for different video formats, such as Standard Definition TeleVision (SDTV) and Ultra-High Definition TeleVision (UHDTV) are also contemplated.
The embodiments shown in the drawings and described above are intended for illustrative purposes. The present disclosure is in no way limited to the particular example embodiments explicitly shown in the drawings and described herein. Other embodiments may include additional, fewer, and/or different device or apparatus components, for example, which are interconnected or coupled together as shown in the drawings or in a different manner.
Similar comments also apply in respect of the example methods shown in the drawings and described above. There could be additional, fewer, and/or different operations performed in a similar or different order. For example, not all of the illustrated operations might necessarily be performed in every embodiment. Some embodiments could concentrate on generating just the second encoded video signal, for instance. In this case, chroma information that is associated with a video signal but is not encoded into a first encoded video signal with luma information that is associated with the same video signal could be received and encoded into a second encoded video signal. An encoder could be provided, in a video camera, for example, to encode the chroma information into the second encoded video signal, and corresponding decoding and a decoder could be provided to decode the chroma information.
In addition, although described primarily in the context of signals, devices, systems, or methods, other implementations are also contemplated, such as instructions stored on a non-transitory computer-readable medium for execution by a processor. Such instructions, when executed by a processor, cause the processor to perform a method as disclosed herein. The electronic devices described above are examples of a processor that could be used to execute such instructions.