This application relates generally to video encoding and decoding, including high dynamic range (HDR) non-constant luminance video encoding and decoding.
Each pixel of a color image is typically sensed and displayed as three color components, such as red (R), green (G), and blue (B). However, between the time a pixel's color components are sensed and the time they are displayed, a video encoder is often used to transform the color components into another set of components to provide for more efficient storage and/or transmission of the pixel data.
More specifically, the human visual system has less sensitivity to variations in color than to variations in brightness (or luminance). Digital video encoders are designed to exploit this fact by transforming the R, G, and B components of a pixel into a luminance component (Y) that represents the brightness of the pixel and two color difference (chroma) components (CB and CR) that respectively represent the B and R components of the pixel separate from the brightness. Once the R, G, and B components of a color image's pixels are transformed into Y, CB, and CR components, the CB and CR components of the color image's pixels can be subsampled (relative to the luminance component Y) to reduce the amount of space required to store the color image and/or the amount of bandwidth needed to transmit the color image to another device. Assuming the CB and CR components are properly subsampled, the quality of the image as perceived by the human eye should not be affected to a large or even noticeable degree because of the human visual system's lesser sensitivity to variations in color.
In addition to subsampling of the chroma components, digital video encoders typically use perceptual quantization to further reduce the amount of space required to store a color image and/or the amount of bandwidth required to transmit the color image to another device. More specifically, the human visual system has been shown to be more sensitive to differences between smaller (darker) luminance values than to differences between larger (brighter) luminance values. Thus, rather than coding luminance linearly with a large number of bits, a smaller number of code values can be assigned nonlinearly on a perceptual scale. Ideally, the code values are assigned such that each step between adjacent code values corresponds to a just noticeable difference in luminance. To this end, perceptual transfer functions have been defined to provide for such perceptual quantization of the luminance Y of a pixel. These transfer functions are generally power functions, such as the perceptual transfer function defined by the Society of Motion Picture and Television Engineers (SMPTE) and referred to as SMPTE ST-2084.
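For illustration only, the ST-2084 curve can be sketched in a few lines of Python. The constants below are the ones published in SMPTE ST-2084; the function names and the sample values printed are merely illustrative:

```python
# Minimal sketch of the SMPTE ST-2084 perceptual quantizer (PQ).
# Constants are those published in ST-2084.
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32     # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_encode(luminance_nits: float) -> float:
    """Map absolute luminance (0..10000 cd/m^2) to a 0..1 code value."""
    y = max(luminance_nits, 0.0) / 10000.0
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

def pq_decode(code: float) -> float:
    """Inverse: map a 0..1 code value back to absolute luminance."""
    p = code ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

# Equal code-value steps correspond to roughly equal perceptual steps:
# the curve spends many more codes on dark values than on bright ones.
print(pq_encode(0.1), pq_encode(100), pq_encode(1000))  # ~0.063, ~0.51, ~0.75
```

Note how roughly half of the code range is spent on luminances below 100 cd/m², reflecting the eye's greater sensitivity to differences between darker values.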
The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate the present disclosure and, together with the description, further serve to explain the principles of the disclosure and to enable a person skilled in the pertinent art to make and use the disclosure.
The present disclosure will be described with reference to the accompanying drawings. The drawing in which an element first appears is typically indicated by the leftmost digit(s) in the corresponding reference number.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the disclosure. However, it will be apparent to those skilled in the art that the disclosure, including structures, systems, and methods, may be practiced without these specific details. The description and representation herein are the common means used by those experienced or skilled in the art to most effectively convey the substance of their work to others skilled in the art. In other instances, well-known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the disclosure.
References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
For purposes of this discussion, the term “module” shall be understood to include software, firmware, or hardware (such as one or more circuits, microchips, processors, and/or devices), or any combination thereof. In addition, it will be understood that each module can include one, or more than one, component within an actual device, and each component that forms a part of the described module can function either cooperatively or independently of any other component forming a part of the module. Conversely, multiple modules described herein can represent a single component within an actual device. Further, components within a module can be in a single device or distributed among multiple devices in a wired or wireless manner.
Before describing specific embodiments of the present disclosure, it is instructive to first consider the difference between non-constant and constant luminance video encoding. To this end, FIG. 1 illustrates an example of a constant luminance video encoder 100 and a corresponding video decoder 102.
As illustrated in FIG. 1, video encoder 100 first transforms the R, G, and B color components of a pixel into a luminance component Y and two chroma components CB and CR using decomposition transformation matrix 104, for example according to:
Y = 0.2126*R + 0.7152*G + 0.0722*B    (1)
CB = 0.5389*(B − Y)    (2)
CR = 0.6350*(R − Y)    (3)
It should be noted that the three equations above represent only one possible implementation of decomposition transformation matrix 104 in FIG. 1.
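As a concrete sketch, the three equations can be expressed as follows (the function name is illustrative):

```python
def rgb_to_ycbcr(r: float, g: float, b: float):
    """Constant luminance decomposition per Eqs. (1)-(3)."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Eq. (1): luminance
    cb = 0.5389 * (b - y)                      # Eq. (2): blue-difference chroma
    cr = 0.6350 * (r - y)                      # Eq. (3): red-difference chroma
    return y, cb, cr

print(rgb_to_ycbcr(1.0, 1.0, 1.0))   # white: Y = 1, CB = CR = 0
```

The luma weights sum to one, and the chroma scale factors (0.5389 ≈ 0.5/(1 − 0.0722) and 0.6350 ≈ 0.5/(1 − 0.2126)) normalize CB and CR to the range ±0.5 for color components in [0, 1].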
After the luminance component Y and two chroma components CB and CR are obtained, the luminance component Y undergoes perceptual quantization by perceptual transfer function 106 and the two chroma components CB and CR are respectively subsampled by subsampling filters 108 and 110. In general, subsampling filters 108 and 110 may respectively filter a group of CB components and a group of CR components that correspond to a rectangular region of pixels and then discard one or more of the CB and CR chroma components from their respective groups. The filtering can be implemented as a weighted average calculation, for example. Subsampling filters 108 and 110 pass the filtered and/or remaining CB and CR chroma component(s) to the decoder. Subsampling filters can implement one of the common 4:2:2 or 4:2:0 subsampling schemes, for example.
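A minimal sketch of one such scheme is shown below, assuming a simple 2×2 box average for 4:2:0 subsampling (real encoders typically use longer weighted filters):

```python
import numpy as np

def subsample_420(chroma: np.ndarray) -> np.ndarray:
    """Sketch of 4:2:0 subsampling: average each 2x2 block of chroma
    samples and keep one value per block (a simple box filter)."""
    h, w = chroma.shape
    blocks = chroma[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

cb = np.random.rand(4, 4)
print(subsample_420(cb).shape)   # (2, 2): one chroma sample per 2x2 pixels
```

This reduces the number of stored chroma samples by a factor of four while leaving the luminance samples untouched.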
Video decoder 102 receives the perceptually quantized luminance component PQ(Y), where PQ( ) represents perceptual transfer function 106, and transforms the perceptually quantized luminance component PQ(Y) back into the luminance component Y using an inverse perceptual transfer function 112. Inverse perceptual transfer function 112 can implement a power function with an exponent equal (or approximately equal) to the reciprocal of the exponent of the power function of perceptual transfer function 106. Video decoder 102 further receives the subsampled chroma components and uses interpolation filters 114 and 116 to recover, at least approximately (e.g., via interpolation), the samples of chroma components CB and CR that were discarded by subsampling filters 108 and 110 at video encoder 100. The luminance component Y and the recovered chroma components CBrec and CRrec are then transformed back into color components Rrec, Grec, and Brec using an inverse decomposition transformation matrix 118 that implements the inverse 3×3 matrix of decomposition transformation matrix 104.
One issue with old CRT displays, as well as with other, more modern display technologies, is that the displays introduce their own power function. This power function is represented by display transformation matrix 120 in FIG. 1. To compensate for it, video decoder 102 applies an inverse display transformation matrix 122 to the recovered color components before they are displayed, leaving the decoder with two non-linear transfer functions: inverse perceptual transfer function 112 and inverse display transformation matrix 122.
At least historically, to avoid having to implement two such non-linear transfer functions, a simplification to video decoder 102 was often made. In particular, the power functions implemented by inverse perceptual transfer function 112 and inverse display transformation matrix 122 were typically very close to being inverses of each other. As a result, by moving inverse display transformation matrix 122 in front of inverse decomposition transformation matrix 118 (as indicated by the right-most dark arrow in FIG. 1), the two non-linear transfer functions become adjacent, approximately cancel, and can both be removed from video decoder 102. Rearranging the encoder correspondingly, by moving perceptual transfer function 106 in front of decomposition transformation matrix 104, yields the rearranged video encoder 200 shown in FIG. 2, which computes the luma component Y′ and the chroma components CB and CR from the perceptually quantized color components as follows:
Y′ = 0.2126*PQ(R) + 0.7152*PQ(G) + 0.0722*PQ(B)    (4)
CB = 0.5389*(PQ(B) − Y′)    (5)
CR = 0.6350*(PQ(R) − Y′)    (6)
It should again be noted that the three equations above represent only one possible implementation of decomposition transformation matrix 104 in FIG. 2.
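A corresponding sketch of Eqs. (4)-(6) is shown below; here PQ( ) is a stand-in power curve rather than the actual ST-2084 function, and the function names are illustrative:

```python
def pq(x: float) -> float:
    """Stand-in perceptual transfer function PQ( ); any monotone
    power-like curve (e.g., ST-2084) could be substituted here."""
    return x ** (1 / 2.4)   # illustrative gamma-style curve, not ST-2084

def rgb_to_ycbcr_ncl(r: float, g: float, b: float):
    """Non-constant luminance decomposition per Eqs. (4)-(6): the color
    components are perceptually quantized BEFORE the matrix is applied."""
    y_prime = 0.2126 * pq(r) + 0.7152 * pq(g) + 0.0722 * pq(b)  # Eq. (4)
    cb = 0.5389 * (pq(b) - y_prime)                             # Eq. (5)
    cr = 0.6350 * (pq(r) - y_prime)                             # Eq. (6)
    return y_prime, cb, cr
```

The only structural change from the constant luminance case is the order of operations: the non-linearity is applied before, rather than after, the decomposition matrix.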
The implication of the changes in the rearranged video encoder 200 in FIG. 2 is that the luma component Y′ is computed from the perceptually quantized color components PQ(R), PQ(G), and PQ(B), rather than the luminance Y being computed from the linear color components and then perceptually quantized. This arrangement is referred to as non-constant luminance encoding because the luma component Y′ no longer carries all of the luminance information of a pixel.

In the rearranged video encoder 200 in FIG. 2, part of the luminance information of a pixel is instead carried by the chroma components CB and CR. Consequently, when the chroma components are subsampled by subsampling filters 108 and 110, luminance information is discarded along with them, and errors can appear in the luminance reproduced at the video decoder.
Compounding the reproduction errors in the luminance of a pixel encoded by a non-constant luminance video encoder is High Dynamic Range (HDR) video content, which is becoming more widely supported by commercially available display devices. HDR video content contains information that covers a wider luminance range (e.g., the full luminance range visible to the human eye, or a dynamic range on the order of 100,000:1) than traditional, non-HDR video content, known as Standard Dynamic Range (SDR) video content. As will be explained further below, the present disclosure is directed to an apparatus and method for reducing errors in the reproduced luminance of HDR video content (and other types of video content) at a video decoder and/or encoder due to non-constant luminance video encoding.
It should be noted that, in FIGS. 1 and 2, the video encoder and video decoder are shown in simplified form, with blocks not pertinent to the present discussion omitted for clarity.
To provide further context as to the errors in reproduced luminance that the apparatus and method of the present disclosure are directed to reducing, an example of a simplified non-constant luminance video encoding and decoding operation is provided with respect to FIG. 3.
Referring now to FIG. 3, two pixels 302 and 304 that are located near each other in a color image are shown, together with a color gamut 310 of an example video system. Color gamut 310 includes an area 312 in which colors have large blue components and small green components, and an area 314 in which colors have large red components and small green components.
The above-mentioned errors in luminance reproduced at a video decoder generally occur when pixels located near each other in a color image, such as pixels 302 and 304, both have either large red color component values or large blue color component values and both have small green color component values. HDR video systems make such errors more likely because they generally provide for larger and smaller possible color component values of a pixel than SDR video systems. In other words, the color gamut of an HDR video system is generally wider.
For example, assume that the respective colors of pixels 302 and 304 are both within area 314, as shown in color gamut 310 by the two points or x's, and have the same large value of red (i.e., R1=R2), the same value of blue (i.e., B1=B2), and have small values of green that differ by at least some amount (i.e., G2=G1+ΔG). Despite having nearly identical colors and therefore nearly identical values of luminance as given by Eq. (1), the small difference between the small green component values of pixels 302 and 304 causes a large difference in their respective luma values, Y1′ and Y2′, as given by Eq. (4).
More specifically, from Eq. (4) above, the two luma values Y1′ and Y2′ can be written out as follows:
Y1′ = 0.2126*PQ(R1) + 0.7152*PQ(G1) + 0.0722*PQ(B1)    (7)
Y2′ = 0.2126*PQ(R2) + 0.7152*PQ(G2) + 0.0722*PQ(B2)    (8)
As can be seen, the components of luma values Y1′ and Y2′ that are dependent on red and blue will be identical because R1=R2 and B1=B2 as assumed above. The respective components of luma values Y1′ and Y2′ that are dependent on green, however, will vary by a large amount because of the small difference in G1 and G2 and the typically large slope of the perceptual quantization function PQ( ) for small input values.
For example, because the slope of PQ( ) is large for small input values, even the small difference ΔG between G1 and G2 produces a large difference between PQ(G1) and PQ(G2), and thus a large difference between luma values Y1′ and Y2′.
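This effect can be checked numerically. The sketch below uses a simple 1/2.4 power curve as a stand-in for PQ( ) and illustrative component values; the exact numbers depend on the transfer function used:

```python
def pq(x: float) -> float:
    # Illustrative stand-in for the perceptual transfer function PQ( ):
    # a power curve that, like ST-2084, is very steep near zero.
    return x ** (1 / 2.4)

# Two neighboring pixels: identical large R and B, tiny difference in G.
R1 = R2 = 0.9
B1 = B2 = 0.9
G1, G2 = 0.001, 0.005

# True (linear) luminance per Eq. (1): nearly identical.
Y1 = 0.2126 * R1 + 0.7152 * G1 + 0.0722 * B1
Y2 = 0.2126 * R2 + 0.7152 * G2 + 0.0722 * B2

# Luma per Eq. (4): the steep slope of PQ( ) near zero amplifies the
# small green difference.
Y1p = 0.2126 * pq(R1) + 0.7152 * pq(G1) + 0.0722 * pq(B1)
Y2p = 0.2126 * pq(R2) + 0.7152 * pq(G2) + 0.0722 * pq(B2)

print(f"relative luminance difference: {abs(Y2 - Y1) / Y1:.1%}")     # ~1.1%
print(f"relative luma difference:      {abs(Y2p - Y1p) / Y1p:.1%}")  # ~12.3%
```

Under this stand-in curve, a roughly 1% difference in true luminance becomes a roughly 12% difference in luma; steeper perceptual curves such as ST-2084 amplify small differences between dark green values even more.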
Taking the example further, the large difference between luma values Y1′ and Y2′ also results in a large difference between the respective red chroma components, CR1 and CR2, of pixels 302 and 304, which can be written out, based on Eq. (6) above, as follows:
CR1 = 0.6350*(PQ(R1) − Y1′)    (9)
CR2 = 0.6350*(PQ(R2) − Y2′)    (10)
Because of the manner in which the red chroma components are often subsampled by subsampling filter 110 in FIG. 2, this large difference between CR1 and CR2 can introduce a large error into the red chroma component that is ultimately recovered for pixel 302 at the video decoder.
In one instance, for example, the weighted average calculated by subsampling filter 110 can lean toward the red chroma component CR2 of pixel 304. After calculating the weighted average, subsampling filter 110 can pass the weighted average on to video decoder 102 in FIG. 2, which then uses it (via interpolation) as the recovered red chroma component CR1rec of pixel 302, even though it differs significantly from the original red chroma component CR1.
It should be noted that subsampling filters 108 and 110, in general, are spatial low-pass filters. For example, in the case where subsampling filter 110 implements a weighted average of the red chroma components of a group of pixels within a common neighborhood (e.g., pixels within a 4×1 or 4×2 rectangular region), the weighted average is a form of spatial low-pass filtering, as would be appreciated by one of ordinary skill in the art.
Referring back to FIG. 2, video decoder 102 transforms the luma component Y′ and the recovered chroma components CBrec and CRrec of a pixel back into recovered color components Rrec, Grec, and Brec as follows:
Rrec = PQ−1(1/0.6350*CRrec + Y′)    (11)
Brec = PQ−1(1/0.5389*CBrec + Y′)    (12)
Grec = PQ−1(1/0.7152*(Y′ − 0.2126*PQ(Rrec) − 0.0722*PQ(Brec)))    (13)
Because of the error in the recovered red chroma component CR1rec, as explained above, the recovered red color component R1rec of pixel 302 will have an error. In fact, the error in the recovered red chroma component CR1rec may be further amplified due to the large potential gain of the inverse perceptual quantization function PQ−1( ) used in the calculation of the recovered red color component R1rec. The gain of inverse perceptual quantization function PQ−1( ) is typically larger for large encoded red component values, like those of pixels 302 and 304 in the example above.
For example, the actual recovered red color component of pixel 302, which is computed from the erroneous recovered red chroma component CR1rec, and the ideal recovered red color component of pixel 302, which would be computed from the original red chroma component CR1, can be written out, based on Eq. (11), as follows:

Actual R1rec = PQ−1(1/0.6350*CR1rec + Y1′)    (14)
Ideal R1rec = PQ−1(1/0.6350*CR1 + Y1′)    (15)
Visually, errors of this type can result in “dots” being displayed that are objectionable to viewers.
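Continuing the numerical sketch from above (again with a 1/2.4 stand-in for PQ( ) and hypothetical 25/75 averaging weights chosen only for illustration), the error in the recovered red component can be quantified:

```python
def pq(x: float) -> float:
    return x ** (1 / 2.4)     # stand-in perceptual transfer function

def pq_inv(x: float) -> float:
    return x ** 2.4           # its inverse; gain is large for large inputs

# Luma values from the example above (R = B = 0.9, G = 0.001 vs. 0.005).
Y1p = 0.2126 * pq(0.9) + 0.7152 * pq(0.001) + 0.0722 * pq(0.9)
Y2p = 0.2126 * pq(0.9) + 0.7152 * pq(0.005) + 0.0722 * pq(0.9)

# Red chroma per Eq. (6) for each pixel.
CR1 = 0.6350 * (pq(0.9) - Y1p)
CR2 = 0.6350 * (pq(0.9) - Y2p)

# Subsampling replaces CR1 with a weighted average leaning toward CR2.
CR1_rec = 0.25 * CR1 + 0.75 * CR2

# Recovered red per Eq. (11): actual (Eq. (14)) vs. ideal (Eq. (15)).
actual = pq_inv(CR1_rec / 0.6350 + Y1p)
ideal = pq_inv(CR1 / 0.6350 + Y1p)
print(f"actual: {actual:.3f}, ideal: {ideal:.3f}")   # ~0.836 vs. 0.900
```

The ideal path recovers R1 = 0.9 exactly, while the averaged chroma yields roughly 0.836, an error of about 7% in a single color component.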
The above description provided one example of how the color values of two closely located pixels in a color image may result in a large error in the luminance reproduced at a video decoder for one or more of the pixels. In general, when the color values of two closely located pixels in a color image are within either area 312 (i.e., have large blue components and small green components) or area 314 (i.e., have large red components and small green components), there is a potential for a large error, similar to the one described above, in the luminance reproduced at a video decoder for at least one of the pixels. Because of the larger color component values possible for HDR video content, such content is more prone to these large errors in reproduced luminance. Even more generally, the same potential for a large error exists when the color values of two closely located pixels in a color image are within a border region of a color gamut of a video system, such as border region 502 of color gamut 500 in FIG. 5.
Referring now to FIG. 6, a video encoder 600 configured to reduce errors of the type described above, in accordance with embodiments of the present disclosure, is illustrated. As shown in FIG. 6, video encoder 600 includes a filter controller 602 and spatial low-pass filters 604, 606, and 608 for respectively filtering the red, green, and blue color components of a pixel prior to perceptual quantization and decomposition.
Filter controller 602 is configured to determine if the color of a pixel being processed by video encoder 600 falls within a region of a color gamut that may result in a large error in the reproduced luminance of the pixel at a video decoder due to non-constant luminance encoding. For example, filter controller 602 can determine if the color of a pixel being processed by video encoder 600 falls within either region 312 or 314 of color gamut 310 in FIG. 3.
In one embodiment, filter controller 602 determines that the color of a pixel being processed by video encoder 600 falls within region 312 of color gamut 310 in FIG. 3 if the blue color component B of the pixel is above a first threshold and the green color component G of the pixel is below a second threshold, and determines that the color falls within region 314 if the red color component R of the pixel is above a first threshold and the green color component G of the pixel is below a second threshold.
In another embodiment, filter controller 602 determines that the color of a pixel being processed by video encoder 600 falls within region 312 if the ratio of the blue color component B of the pixel to the green color component G of the pixel is above a threshold. Similarly, filter controller 602 determines that the color falls within region 314 if the ratio of the red color component R of the pixel to the green color component G of the pixel is above a threshold.
Upon determining that the color of a pixel being processed by video encoder 600 falls within region 312 or region 314, filter controller 602 can activate spatial low-pass filter 606 to spatially low-pass filter the green color component of the pixel being processed. Spatial low-pass filter 606 is configured to spatially smooth the green component of the pixel being processed by, for example, replacing the green component of the pixel with a weighted average of the green component of the pixel and the green components of pixels in a surrounding neighborhood of the pixel being processed. The neighborhood can be formed by a rectangular region of pixels, such as a 4×1 or a 4×2 region of pixels. The weights (or distribution of the weights) used to perform the weighted average by spatial low-pass filter 606 can be adjusted by filter controller 602 to increase or decrease the amount of spatial smoothing of the green component of the pixel being processed. For example, for larger ratios of the blue color component B of the pixel to the green color component G of the pixel, filter controller 602 can adjust the weights used by spatial low-pass filter 606 to increase the amount of spatial smoothing of the green component of the pixel.
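A rough sketch of this behavior for one row of pixels is shown below. The threshold value, the 3-tap kernel, and the function name are all hypothetical choices made for illustration; the disclosure contemplates other neighborhoods (e.g., 4×1 or 4×2) and controller-adjusted weights:

```python
import numpy as np

RATIO_THRESHOLD = 8.0                    # hypothetical value
WEIGHTS = np.array([0.25, 0.5, 0.25])    # hypothetical 3-tap kernel

def smooth_green_row(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Smooth G only where B/G or R/G is large (regions 312/314)."""
    g_avg = np.convolve(g, WEIGHTS, mode="same")   # weighted neighborhood
                                                   # average (zero-padded edges)
    eps = 1e-12                                    # guard against G = 0
    risky = (b / (g + eps) > RATIO_THRESHOLD) | (r / (g + eps) > RATIO_THRESHOLD)
    g_out = g.copy()
    g_out[risky] = g_avg[risky]                    # smooth only risky pixels
    return g_out
```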
By spatially smoothing the green component of the pixel being processed, the difference in the green component value of the pixel being processed as compared to the green components of the pixels in its neighborhood is reduced, which, in turn, should help to reduce the extent of any error of the type described above being produced.
With regard to spatial low-pass filter 604, filter controller 602 can control spatial low-pass filter 604 in a similar manner as spatial low-pass filter 606. In one embodiment, filter controller 602 can control spatial low-pass filter 604 to spatially low-pass filter the red color component R of the pixel being processed if the red color component R of the pixel is below a first threshold and the blue color component B of the pixel is above a second threshold. Similarly, filter controller 602 can control spatial low-pass filter 604 to spatially low-pass filter the red color component R of the pixel being processed if the red color component R of the pixel is below a first threshold and the green color component G of the pixel is above a second threshold.
In another embodiment, filter controller 602 can control spatial low-pass filter 604 to spatially low-pass filter the red color component R of the pixel being processed if the ratio of the blue color component B of the pixel to the red color component R of the pixel is above a threshold. Similarly, filter controller 602 can control spatial low-pass filter 604 to spatially low-pass filter the red color component R of the pixel being processed if the ratio of the green color component G of the pixel to the red color component R of the pixel is above a threshold.
With regard to spatial low-pass filter 608, filter controller 602 can control spatial low-pass filter 608 in a similar manner as spatial low-pass filter 606. In one embodiment, filter controller 602 can control spatial low-pass filter 608 to spatially low-pass filter the blue color component B of the pixel being processed if the blue color component B of the pixel is below a first threshold and the red color component R of the pixel is above a second threshold. Similarly, filter controller 602 can control spatial low-pass filter 608 to spatially low-pass filter the blue color component B of the pixel being processed if the blue color component B of the pixel is below a first threshold and the green color component G of the pixel is above a second threshold.
In another embodiment, filter controller 602 can control spatial low-pass filter 608 to spatially low-pass filter the blue color component B of the pixel being processed if the ratio of the red color component R of the pixel to the blue color component B of the pixel is above a threshold. Similarly, filter controller 602 can control spatial low-pass filter 608 to spatially low-pass filter the blue color component B of the pixel being processed if the ratio of the green color component G of the pixel to the blue color component B of the pixel is above a threshold.
It should be noted that, in some embodiments, only one or two of spatial low-pass filters 604, 606, and 608 are used in video encoder 600. For example, in one embodiment, only spatial low-pass filter 606 is used and spatial low-pass filters 604 and 608 are omitted. It should be further noted that video encoder 600 can be implemented in any number of devices, including video recording devices, such as video cameras and smart phones with video recording capabilities.
Referring now to FIG. 7, a flowchart 700 of an example method for encoding video in accordance with embodiments of the present disclosure is illustrated. The method of flowchart 700 is described below with continued reference to video encoder 600 of FIG. 6.
The method of flowchart 700 begins at step 702. At step 702, a first color component of a pixel being encoded is spatially low-pass filtered based on the first color component of the pixel and at least one of a second or third color component of the pixel to provide a filtered first color component. For example, the first color component can be a green color component, the second color component can be a red color component, and the third color component can be a blue color component. The green color component can be spatially filtered if the color of the pixel falls within a region of a color gamut that may result in a large error in the reproduced luminance of the pixel at a video decoder due to non-constant luminance encoding. For example, if the color of the pixel being processed by video encoder 600 falls within either region 312 or 314 of color gamut 310 in FIG. 3, the green color component can be spatially low-pass filtered by spatial low-pass filter 606 in FIG. 6, as described above.
After step 702, the method of flowchart 700 proceeds to step 704. At step 704, the filtered first color component can be perceptually quantized. For example, the filtered first color component can be perceptually quantized using one of the perceptual transfer functions 106 in FIG. 6.
After step 704, the method of flowchart 700 proceeds to step 706. At step 706, the perceptually quantized first color component, together with perceptually quantized second and third color components, can be transformed into a luma component and chroma components. For example, decomposition transformation matrix 104 in FIG. 6 can be used to perform this transformation in accordance with Eqs. (4)-(6) above.
After step 706, the method of flowchart 700 proceeds to step 708. At step 708, the chroma components can be subsampled. For example, the chroma components can be subsampled using subsampling filters 108 and 110 in FIG. 6.
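Putting steps 702 through 708 together, a compact sketch of the whole encoding path might look as follows (the 2:1 decimation stands in for subsampling filters 108 and 110, and the helper names are illustrative):

```python
import numpy as np

def pq(x):
    return np.power(x, 1 / 2.4)    # stand-in perceptual transfer function

def encode_row(r, g, b, filter_green):
    """Sketch of flowchart 700 for one row of pixels; r, g, b are 1-D
    arrays and filter_green is a callable such as smooth_green_row above."""
    g = filter_green(r, g, b)                       # step 702: spatial filter
    pr, pg, pb = pq(r), pq(g), pq(b)                # step 704: perceptual quantization
    y = 0.2126 * pr + 0.7152 * pg + 0.0722 * pb     # step 706: decomposition
    cb = 0.5389 * (pb - y)
    cr = 0.6350 * (pr - y)
    return y, cb[::2], cr[::2]                      # step 708: 2:1 subsampling
                                                    # (decimation shown without
                                                    # the usual pre-filter)
```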
Referring now to FIG. 8, a video decoder 800 configured to reduce errors of the type described above, in accordance with embodiments of the present disclosure, is illustrated. As shown in FIG. 8, video decoder 800 includes a filter controller 802 and a spatial low-pass filter 804 for filtering the luma component Y′ of a pixel being decoded.
Filter controller 802 is configured to determine if the color of a non-constant luminance encoded pixel being processed by video decoder 800 falls within a region of a color gamut that may result in a large error in the reproduced luminance of the pixel at video decoder 800 due to non-constant luminance encoding. For example, filter controller 802 can determine if the color of a pixel being processed by video decoder 800 falls within either region 312 or 314 of color gamut 310 in FIG. 3.
As shown in FIG. 8, video decoder 800 receives the luma component Y′ and the subsampled chroma components of the pixel being processed and recovers, at least approximately, the R, G, and B color components of the pixel, which filter controller 802 can use to make this determination.
In one embodiment, once the R, G, and B color components for the pixel being processed by video decoder 800 are obtained, filter controller 802 determines that the color of the pixel falls within region 312 of color gamut 310 in FIG. 3 if the blue color component B of the pixel is above a first threshold and the green color component G of the pixel is below a second threshold, and determines that the color falls within region 314 if the red color component R of the pixel is above a first threshold and the green color component G of the pixel is below a second threshold.
In another embodiment, filter controller 802 determines that the color of the pixel being processed by video decoder 800 falls within region 312 if the ratio of the blue color component B of the pixel to the green color component G of the pixel is above a threshold. Similarly, filter controller 802 determines that the color falls within region 314 if the ratio of the red color component R of the pixel to the green color component G of the pixel is above a threshold.
In yet another embodiment, filter controller 802 determines that the color of the pixel being processed by video decoder 800 falls within the bottom part of border region 502 if the product of the green color component G of the pixel and the perceptually quantized red color component PQ(R) of the pixel is smaller than a given threshold.
Upon determining that the color of the pixel being processed by video decoder 800 falls within region 312, region 314, and/or within the bottom part of border region 502, filter controller 802 can activate spatial low-pass filter 804 to spatially low-pass filter the luma component Y′ of the pixel being processed. Spatial low-pass filter 804 is configured to spatially smooth the luma component Y′ of the pixel being processed by, for example, replacing the luma component Y′ of the pixel with a weighted average of the luma component Y′ of the pixel and the luma components of pixels in a surrounding neighborhood of the pixel being processed. The neighborhood can be formed by a rectangular region of pixels, such as a 4×1 or a 4×2 region of pixels. The weights (or distribution of the weights) used to perform the weighted average by spatial low-pass filter 804 can be adjusted by filter controller 802 to increase or decrease the amount of spatial smoothing of the luma component Y′ of the pixel being processed. For example, for larger ratios of the blue color component B of the pixel to the green color component G of the pixel, filter controller 802 can adjust the weights used by spatial low-pass filter 804 to increase the amount of spatial smoothing of the luma component Y′ of the pixel.
By spatially smoothing the luma component Y′ of the pixel being processed, the difference in the luma component Y′ of the pixel being processed as compared to the luma components of the pixels in its neighborhood is reduced, which, in turn, should help to reduce the extent of any error of the type described above being produced.
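A sketch mirroring the encoder-side filter is shown below, again with a hypothetical 3-tap kernel; the boolean mask would come from the gamut-region tests described above:

```python
import numpy as np

WEIGHTS = np.array([0.25, 0.5, 0.25])    # hypothetical 3-tap kernel

def smooth_luma_row(y_prime: np.ndarray, risky: np.ndarray) -> np.ndarray:
    """risky is a boolean mask from the gamut-region tests described above."""
    y_avg = np.convolve(y_prime, WEIGHTS, mode="same")
    y_out = y_prime.copy()
    y_out[risky] = y_avg[risky]     # smooth Y' only at risky pixels
    return y_out
```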
In another embodiment, upon determining that the color of the pixel being processed by video decoder 800 falls within region 312, region 314, and/or within the bottom part of border region 502, filter controller 802 can further check that the spatial variability of the green component G of the pixel being processed is above a threshold before activating spatial low-pass filter 804 as described above. The spatial variability of the green component G of the pixel being processed can be determined, for example, using the following equation:
S.V. of G = max[abs(G_ctr − G_left)/G_ctr, abs(G_ctr − G_right)/G_ctr]    (16)
where G_ctr is the value of the green component G of the pixel being processed, G_left is the value of the green component of the pixel to the left of the pixel being processed, and G_right is the value of the green component of the pixel to the right of the pixel being processed.
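Eq. (16) translates directly into code (a sketch; a real implementation would also guard against G_ctr being zero):

```python
def spatial_variability_of_green(g_left: float, g_ctr: float, g_right: float) -> float:
    """Eq. (16): largest relative change in G between a pixel and its
    horizontal neighbors (assumes g_ctr > 0)."""
    return max(abs(g_ctr - g_left) / g_ctr, abs(g_ctr - g_right) / g_ctr)
```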
Referring now to FIG. 9, a flowchart 900 of an example method for decoding video in accordance with embodiments of the present disclosure is illustrated. The method of flowchart 900 is described below with continued reference to video decoder 800 of FIG. 8.
The method of flowchart 900 begins at step 902. At step 902, a luma component of a pixel being decoded is spatially low-pass filtered based on a first color component of the pixel and at least one of a second or third color component of the pixel to provide a filtered luma component. For example, the first color component can be a green color component, the second color component can be a red color component, and the third color component can be a blue color component. The luma component can be spatially filtered if the color of the pixel falls within a region of a color gamut that may result in a large error in the reproduced luminance of the pixel at the video decoder due to non-constant luminance encoding. For example, if the color of the pixel being processed by video decoder 800 falls within either region 312 or 314 of color gamut 310 in FIG. 3, the luma component can be spatially low-pass filtered by spatial low-pass filter 804 in FIG. 8, as described above.
After step 902, the method of flowchart 900 proceeds to step 904. At step 904, the filtered luma component and chroma components of the pixel being processed can be transformed into color components. For example, the filtered luma component and the chroma components of the pixel being processed can be transformed into recovered red, green, and blue color components using inverse decomposition transformation matrix 118 in FIG. 8.
After step 904, the method of flowchart 900 proceeds to step 906. At step 906, the recovered red, green, and blue color components can be inverse perceptually quantized. For example, the recovered red, green, and blue color components can be inverse perceptually quantized using display transformation matrix 120 in FIG. 8.
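Putting steps 902 through 906 together, a compact sketch of the decoding path might look as follows (the chroma components are assumed to have already been interpolated back to full resolution, and the stand-in inverse curve and helper names are illustrative):

```python
import numpy as np

def pq_inv(x):
    # Stand-in inverse perceptual transfer function (clipped for safety).
    return np.power(np.clip(x, 0.0, None), 2.4)

def decode_row(y_prime, cb, cr, risky):
    """Sketch of flowchart 900 for one row; cb and cr are assumed already
    interpolated to full resolution, and risky is the gamut-region mask."""
    w = np.array([0.25, 0.5, 0.25])
    y_avg = np.convolve(y_prime, w, mode="same")
    y_s = y_prime.copy()
    y_s[risky] = y_avg[risky]                       # step 902: smooth luma
    pr = cr / 0.6350 + y_s                          # step 904: Eqs. (11)-(13),
    pb = cb / 0.5389 + y_s                          # still in the PQ domain
    pg = (y_s - 0.2126 * pr - 0.0722 * pb) / 0.7152
    return pq_inv(pr), pq_inv(pg), pq_inv(pb)       # step 906: inverse PQ
```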
It will be apparent to persons skilled in the relevant art(s) that various elements and features of the present disclosure, as described herein, can be implemented in hardware using analog and/or digital circuits, in software, through the execution of instructions by one or more general purpose or special-purpose processors, or as a combination of hardware and software.
The following description of a general purpose computer system is provided for the sake of completeness. Embodiments of the present disclosure can be implemented in hardware, or as a combination of software and hardware. Consequently, embodiments of the disclosure may be implemented in the environment of a computer system or other processing system. An example of such a computer system 1000 is shown in FIG. 10.
Computer system 1000 includes one or more processors, such as processor 1004. Processor 1004 can be a special purpose or a general purpose digital signal processor. Processor 1004 is connected to a communication infrastructure 1002 (for example, a bus or network). Various software implementations are described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art(s) how to implement the disclosure using other computer systems and/or computer architectures.
Computer system 1000 also includes a main memory 1006, preferably random access memory (RAM), and may also include a secondary memory 1008. Secondary memory 1008 may include, for example, a hard disk drive 1010 and/or a removable storage drive 1012, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, or the like. Removable storage drive 1012 reads from and/or writes to a removable storage unit 1016 in a well-known manner. Removable storage unit 1016 represents a floppy disk, magnetic tape, optical disk, or the like, which is read by and written to by removable storage drive 1012. As will be appreciated by persons skilled in the relevant art(s), removable storage unit 1016 includes a computer usable storage medium having stored therein computer software and/or data.
In alternative implementations, secondary memory 1008 may include other similar means for allowing computer programs or other instructions to be loaded into computer system 1000. Such means may include, for example, a removable storage unit 1018 and an interface 1014. Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, a thumb drive and USB port, and other removable storage units 1018 and interfaces 1014 which allow software and data to be transferred from removable storage unit 1018 to computer system 1000.
Computer system 1000 may also include a communications interface 1020. Communications interface 1020 allows software and data to be transferred between computer system 1000 and external devices. Examples of communications interface 1020 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, etc. Software and data transferred via communications interface 1020 are in the form of signals which may be electronic, electromagnetic, optical, or other signals capable of being received by communications interface 1020. These signals are provided to communications interface 1020 via a communications path 1022. Communications path 1022 carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link and other communications channels.
As used herein, the terms “computer program medium” and “computer readable medium” are used to generally refer to tangible storage media such as removable storage units 1016 and 1018 or a hard disk installed in hard disk drive 1010. These computer program products are means for providing software to computer system 1000.
Computer programs (also called computer control logic) are stored in main memory 1006 and/or secondary memory 1008. Computer programs may also be received via communications interface 1020. Such computer programs, when executed, enable the computer system 1000 to implement the present disclosure as discussed herein. In particular, the computer programs, when executed, enable processor 1004 to implement the processes of the present disclosure, such as any of the methods described herein. Accordingly, such computer programs represent controllers of the computer system 1000. Where the disclosure is implemented using software, the software may be stored in a computer program product and loaded into computer system 1000 using removable storage drive 1012, interface 1014, or communications interface 1020.
In another embodiment, features of the disclosure are implemented primarily in hardware using, for example, hardware components such as application-specific integrated circuits (ASICs) and gate arrays. Implementation of a hardware state machine so as to perform the functions described herein will also be apparent to persons skilled in the relevant art(s).
Embodiments have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
The foregoing description of the specific embodiments will so fully reveal the general nature of the disclosure that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
This application claims the benefit of U.S. Provisional Patent Application No. 62/245,368, filed Oct. 23, 2015, which is incorporated by reference herein in its entirety.