The invention relates to displays, such as, for example, LED (light-emitting diode) displays, although it is not limited to this particular display technology. More specifically, the invention relates to obtaining a more accurate color and/or greyscale representation in such displays by means of (non-linear) correction or compensation methods, which may particularly be applied after calibration (based on linear equations), but also when no calibration is applied.
From previous disclosures and applications, it has become clear that calibrating a display for color and brightness uniformity has to be performed in a real-time fashion. For LED displays—up to the present day—the calculations are made in the linear color space, and it is not taken into account that non-linearities exist in generating the respective light output of the individual R(ed), G(reen) and B(lue) colors per individual pixel.
A traditional LED's color light output is generated using PWM (Pulse Width Modulation) signals that act on constant current drivers (in general, pixel drivers, possibly also referred to as PWM drivers or LED drivers). It is known in the LED industry (see e.g. “Handbook of Visual Display Technology”: Thielemans R. (2012) LED Display Applications and Design Considerations—Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-79567-4_76) that LEDs change slightly in color when the drive current is changed. Hence, a constant current is applied. However, due to several reasons, the linearity between the digitally generated PWM and the light output measured on the individual LEDs is not guaranteed.
Hence, the inventors of the present application have found that the ‘traditional’ X, Y, Z color correction calculations (e.g. known as standard calibration, usually calculated in linear space) that act on the PWM and the supposed ‘constant current drivers’ are not entirely correct. A further correction or compensation (for non-linearities) is therefore needed.
The aim of the invention is to provide a practical way to determine deviations due to non-linearities (caused by the pixel drivers), and subsequently, once determined, to correct or compensate (in real-time) for these deviations. It is a further aim to define a physical (constant current driver) implementation (of the correction or compensation) that also reduces the computing complexity in LED driving systems.
In brief, the present disclosure provides a method and system for improved color and/or greyscale representation of light-emitting displays by means of non-linear compensation, and moreover, electronics systems for implementing such method, either global (FPGA based), or local (Chip Based).
In particular, the inventors of the present application have found that the use of calibration as known from the art (which in essence assumes linear relationships) is insufficient e.g. for high-quality display performance; an additional correction or compensation (due to non-linearities) is therefore preferably determined and used. Further, the inventors have found that even in the case no calibration took place, non-linearities occur, and hence a correction or compensation is in any case required for achieving better color and/or greyscale output of video or images being displayed. Contrary to the standard calibration steps, which use (3×3) matrices due to the interplay between (primary) colors and are performed per display pixel, for the additional correction or compensation it may be sufficient to do this for only one (primary) color and per cluster of display pixels. It is noted that the calibration may depend on a display content context and/or display set-up; hence content dependent calibration may be used as known in the art. The non-linear compensation may be based on the brightness (Y-value of color space coordinates) defined by a mathematical formula. Alternatively, the compensation may rely on what is stored in one or more lookup tables (or memory), each comprising input values and corresponding output values taking into account the non-linearities. Considering the amount of storage space required for the non-linear compensation, the use of a plurality of small lookup tables (having reduced bit-representation) instead of one single large one may be suggested, including performing interpolation calculations amongst these small lookup tables. Further acknowledging that the above still might be insufficient for high-quality display performance, an additional temperature correction may be applied.
For some aspects of the present invention further described below, referral could be made to earlier applications from the same Applicant, amongst which for example WO2019/215219 A1, entitled “STANDALONE LIGHT-EMITTING ELEMENT DISPLAY TILE AND METHOD” and published 14 Nov. 2019, and US2020/0286424, entitled “REAL-TIME DEFORMABLE AND TRANSPARENT DISPLAY” and published 10 Sep. 2020, both of which are herein incorporated by reference. Whenever appearing relevant for one of the aspects of the present invention, this referral will be particularly and explicitly made below.
According to a first aspect of the invention, a method is provided for determining non-linear display pixel driver compensation, performed by a processing system of (or for) a light-emitting display characterized by one or more colors, said light-emitting display comprising pixels being controlled by pixel drivers. The ‘determination’ method comprises the following steps: (i) measuring (real-time) (on-display) (color) values for at least one of the one or more colors (and/or one or more pixels, i.e. a pixel or cluster of pixels); (ii) calculating (theoretical) (color) values for the at least one of the one or more colors (based on a linear relationship) (and/or one or more pixels); (iii) comparing measured and corresponding calculated values; (iv) observing (per color and for at least one color) a deviation in the measured values due to non-linearities (caused by the pixel drivers), and determining said deviation as the non-linear pixel driver compensation. The values refer to color space or linear space, defined by three coordinates (x, y, Y) or (X, Y, Z). In general, and preferably, the processing system will be internally provided in the display. However, it is technically possible to have a processing system outside of or external to the display. In general, one pixel driver can be used for a cluster of pixels, for example being a multiple of 16, such as e.g. 64 or 256 pixels (per cluster).
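The determination steps (i) to (iv) can be sketched in a few lines, by way of a minimal illustration only (the numeric values and the function name are hypothetical and not part of the claimed method; a real measurement would come from e.g. a spectrometer):

```python
def determine_compensation(measured, calculated):
    """Steps (iii)-(iv): compare measured vs. calculated (linear) values
    and determine the deviation as the non-linear compensation."""
    # One deviation entry per drive level (or per pixel/cluster).
    return [m - c for m, c in zip(measured, calculated)]

# Hypothetical example: 'calculated' assumes a linear PWM-to-light relation,
# while 'measured' shows a slight droop at higher drive levels.
calculated = [0.0, 25.0, 50.0, 75.0, 100.0]   # theoretical Y values
measured   = [0.0, 24.8, 49.1, 72.9, 96.5]    # on-display measurements
compensation = determine_compensation(measured, calculated)
```

The resulting deviation list is what is later applied (with opposite sign) as the compensation in the second aspect.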
According to an embodiment, the light-emitting display is characterized by at least three (primary) colors. The determining of the non-linear display pixel driver compensation (and hence all steps being part thereof, or involved here) may be performed for each display pixel or cluster of display pixels.
According to an embodiment, prior to all steps (i) to (iv), calibration is performed by means of the following steps: (a) reading, loading or inputting the (native) (color) values (in 3-coordinate representation) measured (which may be interpreted here as a pre-calibration measurement, i.e. a measurement before the calibration calculations or computations take place) (with a spectrometer) for the one or more colors of (each of the pixels of) the display; (b) reading, loading or inputting the target values (as being perceived by a human eye and/or a camera recording the display output) for the one or more colors of (each of the pixels of) the display; and (c) for the one or more colors, computing (via matrix operations) corresponding calibration matrices based on the measured and target values (in particular, based on the difference between them). Taking this calibration into account, and referring back to the ‘determination’ method itself, when step (ii) of calculating (theoretical) (color) values is now performed, the calibration matrices can be used. The calibration matrices can be based on display content contexts and/or display set-ups, as known in content dependent calibration (see e.g. earlier patent applications WO2019/215219 A1 and US2020/0286424 mentioned above, from the same Applicant).
According to a second aspect of the invention, a method is provided for implementing non-linear display pixel driver compensation, performed by a processing system of (or for) a light-emitting display characterized by one or more colors, said light-emitting display comprising pixels being controlled by pixel drivers. The ‘implementation’ method comprises the following steps: (i) determining the non-linear display pixel driver compensation based on the method of the first aspect, or reading, loading or inputting the non-linear display pixel driver compensation, determined based on the method of the first aspect, and (ii) compensating (or correcting) for said deviation determined as the non-linear display pixel driver compensation.
According to an embodiment, said compensating is based on the brightness (Y-value of color space coordinates) defined by a mathematical formula.
According to an embodiment, said compensating is based on the use of one or more lookup tables (of which the data is stored in (non-volatile) memory), in particular on what is stored in (or represented by) the one or more lookup tables, each comprising input values and corresponding output values taking into account the non-linearities. Moreover, said compensating can be based on what is stored in a plurality (i.e. at least two) of lookup tables having reduced bit-representation (in order to reduce the amount of memory required), in particular said compensating being defined from interpolation computations performed amongst these.
According to an embodiment, said compensating is performed for each display pixel or cluster of display pixels.
According to a third aspect of the invention, a method is provided for displaying an image on a light-emitting display with non-linear display pixel driver compensation. The ‘displaying’ method comprises the following steps: (i) determining the non-linear display pixel driver compensation based on the method of first aspect, or reading, loading or inputting the non-linear display pixel driver compensation, determined based on the method of first aspect; (ii) implementing the non-linear display pixel driver compensation based on the method of second aspect; and (iii) displaying the image.
According to an embodiment, an additional temperature correction is applied, for further improving the displaying of the image.
According to a fourth aspect of the invention, a system is provided for a light-emitting display, in particular for driving light-emitting elements or pixels thereof. The system is possibly part of the light-emitting display, or could be incorporated in or attached thereto. The system comprises an input protocol for receiving (video) input (to be displayed) and a PWM generating module for transferring said input into signals to be delivered to pixel drivers (e.g. one or more), herewith defining and controlling the light-emitting elements or pixels in the (light) output to be emitted (and displayed in the form of video) by them. The system also comprises an (additional) module for determining and implementing non-linear display pixel driver compensation (due to non-linearities caused by the pixel drivers) according to the method of the first and second aspect respectively.
According to an embodiment, the system further comprises a module for performing calibration (for example as referred to in an embodiment of the method of first aspect) and herewith determining calibration matrices to be used in defining (eventually) the output to be emitted by the light-emitting elements or pixels (of the display).
According to an embodiment, said compensating of the method of second aspect, for implementing non-linear display pixel driver compensation, is particularly based on the use of one or more lookup tables and the data for this one or more lookup tables being stored in and hence to be fetched from a non-volatile memory of the processing system. Moreover, herewith, said one or more lookup tables each comprise input values and corresponding output values which take into account the non-linearities that need to be incorporated in the signals for the pixel drivers.
Detailed Description on Calibration
Applicant's earlier patent application is herein referred to, U.S. patent application Ser. No. 16/813,113, filed on Mar. 9, 2020, which is published as US patent application publication US2020/0286424, entitled “REAL-TIME DEFORMABLE AND TRANSPARENT DISPLAY” and published 10 Sep. 2020, which is herein incorporated by reference.
Whereas each individual LED may deviate in e.g. color or brightness, calibration is considered important. The traditional calibration video pipeline in accordance with the art is shown in
Assume all RGB LEDs have been measured at a certain defined current and defined temperature. This measurement can happen with e.g. a spectrometer. This yields x, y and Y measurement values for each of the R, G and B colors in one LED. In case of an RGB (Red, Green, Blue) display, the measurements, e.g. performed in the CIE 1931 color space, wherein every color is represented in (x, y) and Y, are for example (x, y, Y are converted to X, Y, Z for working in linear space):
Rin=(Rinx,Riny,RinY)=(RinX,RinY,RinZ)
Gin=(Ginx,Giny,GinY)=(GinX,GinY,GinZ)
Bin=(Binx,Biny,BinY)=(BinX,BinY,BinZ)
It is noted that (x, y) are normalized values of X, Y, and Z being the so-called tristimulus values, whereas Y is a measure for the luminance of a color.
Or in matrix format:
Color conversions are performed using following formulas:
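These are the standard CIE colorimetry conversions between (x, y, Y) and the tristimulus values (X, Y, Z); a minimal sketch (function names are illustrative only):

```python
def xyY_to_XYZ(x, y, Y):
    """Convert CIE chromaticity (x, y) plus luminance Y to tristimulus XYZ."""
    X = (x / y) * Y
    Z = ((1.0 - x - y) / y) * Y
    return X, Y, Z

def XYZ_to_xyY(X, Y, Z):
    """Convert tristimulus XYZ back to chromaticity (x, y) plus luminance Y."""
    s = X + Y + Z
    return X / s, Y / s, Y
```

For example, a measured Red primary at (x, y) = (0.64, 0.33) with Y normalized to 1 converts to XYZ and back without loss.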
It can now be defined what the color targets should be. (There are standards defined for e.g. HDTV, NTSC, PAL, REC2020 . . . ) These are the real colors that should be shown in the display.
So, all LEDs need to be ‘calibrated’ to these individual points as explained before. One can set the colors to these standards, but it isn't a must as the mathematics are general.
Target Red color can be defined as:
Rtarg=(Rtargx,Rtargy,RtargY)=(RtargX,RtargY,RtargZ)
Next, the linear relationship can be defined between the target values and the ‘measured values’ (example here for Red channel).
RonR means how much contribution of Red from the native LED needs to be used in the desired (target) color of Red. GonR means how much Green of the original LED color needs to be added to this Red and so on.
RtargX=RinX×RonR+GinX×GonR+BinX×BonR
RtargY=RinY×RonR+GinY×GonR+BinY×BonR
RtargZ=RinZ×RonR+GinZ×GonR+BinZ×BonR
Or in matrix form this becomes:
Performing this also for Green and Blue yields the following matrix formula:
Since the input is known and targets are known, the matrix can be solved for RonR etc.:
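By way of illustration, the linear system above (here for the Red target) can be solved numerically; all numeric values below are hypothetical measured/target data, purely for illustration:

```python
import numpy as np

# Columns are the measured (native) XYZ values of the R, G and B primaries.
M_in = np.array([[0.60, 0.30, 0.15],   # RinX, GinX, BinX
                 [0.30, 0.60, 0.10],   # RinY, GinY, BinY
                 [0.05, 0.10, 0.80]])  # RinZ, GinZ, BinZ

Rtarg = np.array([0.64, 0.33, 0.03])   # hypothetical target Red (X, Y, Z)

# Solve M_in @ [RonR, GonR, BonR] = Rtarg for the contribution weights;
# repeating this for the Green and Blue targets yields the full matrix (A).
RonR, GonR, BonR = np.linalg.solve(M_in, Rtarg)
```

The three weights state how much of each native primary must be mixed to reproduce the target Red.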
The targets could look as follows:
(note here that the brightness Y for all the targets has been normalized to 1)
We already know from above that
And the final outcome for (A) then becomes:
Since the Y values in the targets have been normalized, extra information can now be added to not only set the LED calibrated colors to a target color, but also use the brightness of each individual target color to set the display to a fixed white point when all RGB colors are on.
In this example, the display is set to the following white point:
WX, WY, WZ is the white point. The RX, GX, . . . is in this case the target color matrix, as these target colors are going to be used, and the brightness of the targets is changed to get to the right white point. Hence, the equation needs to be solved for RGB.
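Solving for the RGB brightness scaling factors that realize a chosen white point can be sketched as follows (the target-color and white-point numbers are hypothetical, merely illustrative; in practice the computed target color matrix and e.g. a D65-like white point would be used):

```python
import numpy as np

# Hypothetical target color matrix: columns are target R, G, B in XYZ,
# with the luminance Y of each target normalized to 1 (as in the text).
M_targ = np.array([[0.64/0.33, 0.30/0.60, 0.15/0.06],   # X row
                   [1.0,       1.0,       1.0      ],   # Y row (normalized)
                   [0.03/0.33, 0.10/0.60, 0.79/0.06]])  # Z row

W = np.array([0.95, 1.00, 1.09])  # hypothetical white point (WX, WY, WZ)

# Solve M_targ @ [R, G, B] = W for the per-color brightness factors,
# which then scale the target luminances (cf. the scaling applied to (A)).
R, G, B = np.linalg.solve(M_targ, W)
```

With these factors applied, driving all three calibrated primaries at full level reproduces the chosen white point.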
So now, the final target with Y information added is:
The final thing to do is to apply (i.e. scale by) the final multiplication factors to (A):
It is noted
And this is the matrix (A) that can be used in the video pipeline of
It is noted that the scheme of
Herewith, the math is set straight for the ‘straightforward’ calibration.
The next step is to make the calibration content dependent by means of so-called content dependent calibration.
The reason why this was previously invented, as for example described in earlier patent application US2020/0286424 from the same Applicant (and in the meantime frequently tested and implemented), is twofold: 1/ a visual (perceptual) issue when calibrating blue colors, resulting in perceptual color depth improvement, and 2/ issues related to Blue calibration. Both are now further discussed.
1/ Visual (Perceptual Issue) when Calibrating Blue Colors, Resulting in Perceptual Color Depth Improvement
The background of this invention is derived from the visual parameters explained in “Handbook of Visual Display Technology”: Thielemans R. (2012) LED Display Applications and Design Considerations—Springer, Berlin, Heidelberg, which is incorporated herein by reference. Here is a small summary of the human eye factors that are quantified, but (as later discussed) not to the full extent.
2/ Issues Related to Blue Calibration
Assume there are two different blue LEDs that one wants to calibrate toward the same Blue color target. In order to achieve the same target, some minor Red and Green additions need to be applied to get to the right color. When measured with a spectrometer (and also with some cameras), these LEDs will be perfectly adjusted, both having the same X, Y and Z. However, when viewed by a human, the Blue LED with the most ‘Red’ addition will be perceived as almost dark, reddish in comparison to the ‘whitish’ Blue LED where more Green is added. This phenomenon is also viewing distance dependent. The physical phenomenon happening is that the ‘human’ eye ‘locks’ onto the narrow band red emitter (remember the human eye is sensitive to ‘resolution’). This ‘lock’ to the Red frequency has the effect that perceived Blue components are not ‘visible’ anymore, thus yielding a totally different visual perception. The CIE standard is made for wide band color emitters and not narrow band emitters, and hence does not take this ‘perception’ into account. Also—as stated earlier—it does not take luminance into account. So, although there is lots of (proven) math available on colors, human perception is still ‘king’. As a conclusion, we can say that although one can calibrate Blue LEDs to be totally equal according to CIE, the perception of color is still different.
On the other hand, a variation of brightness in Blue is perceived as a color variation and this is shown in
This means—for calibration purposes of displays—that in the case Blue batches of colors need to be calibrated, brightness variations will yield much better ‘perceived’ uniformity. As a result, LEDs that need this Blue brightness variation will need a different matrix. Hence the need for two matrices. But varying the brightness of a target color has the effect that (when used with also R and G) the white point changes. So, the LEDs that need Blue calibration by changing their Blue brightness will give a totally different white point. This is not at all desirable. Therefore, the solution consists in using a factor that is content dependent. This factor will define when to use the ‘Blue’ matrix or the ‘normal’ matrix.
We can now better understand that this principle is not only useful for ‘calibrating’ a display, but it can also be used to improve color depth perception (and this also works on Green, Red . . . ). As an example, it is known from the LED industry that traditional Blue LEDs are (often) not ‘deep’ enough in color. Usually they are above 470 nm, while 460 nm is desirable. Changing a 470 nm Blue LED to play at low brightness gives the visual impression that it plays at 460 nm.
An important note is made in that, since now ‘visual perception’ is brought into the picture, the strict wording of ‘calibrating’ a display is no longer applicable here since the word ‘calibration’ implies perfect/measurable uniform settings based upon straight measurements and mathematically correct adjustments.
We end up with an example of how content dependent calibration can be achieved while making use of the video pipeline as depicted in
It is noted that the matrices (due to variations in LEDs) are different for every individual RGB LED. The example below is explained using one LED.
One defines a matrix which is to be used when only Blue needs to be shown (MBonly) and one defines a matrix for when Blue is used whilst also showing Green and Red (MBmix). The matrices are derived in the same way as previously explained in the ‘traditional’ example. For the sake of the example here, the matrices can be derived by altering the brightness of the target color Blue.
The target for MBmix will use the same target values as in previous example (cfr. white point):
And this yields the MBmix matrix:
For the Blue only matrix (MBonly), the targets are set to:
This will give a totally wrong white point, but the target Blue will be set to 50% in this example.
And this yields the MBonly matrix:
Since only the luminance of target Blue has been changed, BonR, BonG and BonB only are affected in this example. It is to the user's imagination to modify and play with all kind of settings. One can even gradually change e.g. Blue to Green using this pipeline when a certain parameter in the content changes.
The next question to answer is, how to define when to use what matrix. Since in this example we want to show a perceived deep Blue color, the following is assumed:
So, the final matrix to be used is:
Mfinal=Factor×MBmix+(1−Factor)×MBonly
When Factor=1, this means Mfinal=MBmix. When Factor=0, Mfinal=MBonly.
Next, we define a formula that takes the above assumptions into account and that we can also implement in real-time:
Factor=min(2×(R+G)/(R+G+B); 1), i.e. the value 2×(R+G)/(R+G+B) clipped at a maximum of 1.
Factor=0 when there is only Blue to be shown.
0<Factor<1 when Blue has ‘slight’ involvement in the content to be shown.
Factor=1 (clipped) when Blue is substantial in the mix of colors.
All kinds of other formulas can be imagined, but this example has the advantage of being fairly easily implemented in real-time FPGA computation. Examples of Factors can be found in the table given in
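The factor computation and matrix blending can be sketched as follows, a pure illustration (the matrices passed in are hypothetical calibration matrices; the clipping described in the text is interpreted as a min(·, 1), and the behavior for all-black input is an assumption, as the text does not define it):

```python
def blend_factor(R, G, B):
    """Content-dependent factor: 0 for Blue only, clipped at a maximum of 1."""
    if R + G + B == 0:
        return 1.0  # assumption: black content, choice of matrix is irrelevant
    return min(2.0 * (R + G) / (R + G + B), 1.0)

def m_final(factor, MBmix, MBonly):
    """Mfinal = Factor x MBmix + (1 - Factor) x MBonly, element-wise."""
    return [[factor * a + (1.0 - factor) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(MBmix, MBonly)]

# Blue only -> Factor = 0 -> MBonly is used.
assert blend_factor(0, 0, 255) == 0.0
# Substantial R and G in the mix -> clipped to 1 -> MBmix is used.
assert blend_factor(200, 200, 50) == 1.0
```

Since Factor depends only on the per-pixel R, G, B content values, it can be evaluated per pixel in real time, exactly as the text suggests for an FPGA pipeline.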
While referring to the table of
The final calculation Mfinal then becomes:
Detailed Description of Embodiments and Examples Providing Solutions to Problems
As noted above, reference is made to earlier application from the same Applicant, amongst which for example WO2019/215219 A1, entitled “STANDALONE LIGHT-EMITTING ELEMENT DISPLAY TILE AND METHOD” and published 14 Nov. 2019, and US2020/0286424, entitled “REAL-TIME DEFORMABLE AND TRANSPARENT DISPLAY” and published 10 Sep. 2020. Whenever appearing relevant for one of the aspects of the present invention, this referral will be particularly and explicitly made below.
The Inventors' Identified Problem
When applying the calibration principle as described earlier (in the detailed background description), measurement of the final result of e.g. Rtarg, Gtarg, Btarg and/or Wtarg (but also other calculated output values) shows that there is a deviation between what is measured and what is really desired. This means that e.g. Rtarg-measured≠Rtarg-calculated. This is due to system non-linearities in the display after the PWM generation, through the constant current driver, to the LEDs.
Assuming the PWM is perfectly calculated, this means that non-linearities are introduced.
A Proposed Solution
According to the above, an extra calculation step for these non-linearities has to be implemented.
A potential solution can be to define 3 factors and to add multiple matrices to be applied and acted upon by these. A drawback of this solution is that it augments the digital implementation complexity (latency and speed) for full recalculation. The factors then determine the amount or weight to take from e.g. multiple matrices defined in the XYZ (linear) space.
The number of matrices and/or matrix elements can be arbitrarily chosen dependent on the accuracy required, and the factors then need to determine which matrices to interpolate from. All these computations need to be done on top of the earlier described visual perception improvement, should one require it. A further drawback is the amount of memory and fetch operations that need to take place to compute one pixel. Hence, this also has a drawback on system performance in case one needs to process a lot of pixels.
A simplification could be to assume the color (x,y) coordinates aren't changing (too much) due to these non-linearities. In that case, we only need to act on the Y value of the real primary color. The full pipeline then looks as in
At this point we added 3 compensation blocks Comp R, Comp G, Comp B acting on all individual primary colors RGB respectively. There are multiple ways on how to implement the compensation blocks.
Both approaches A and B require substantial hardware resources, being either a computation pipeline or a substantially large memory. The next implementation, C, also called sub-delta, is an approach that is limited in computation resources and RAM resources and can be performed with only a few clocks of latency.
As mentioned, the example diagram of
Assume a digital bitstream of R (Red channel) values of 16 bits: Rin (15 to 0), having 16 parallel bit lanes, and n=15. We split this R channel up into 2 parts: a top-bit channel Rt of n to a, wherein a=4 here by means of example. So, we end up with a top-bit channel Rt of 12 bits, i.e. Rt (15 to 4), and a bottom-bit channel of 4 bits, i.e. Rb (3 to 0). So, in this example, the top part Rt has a width of 12 bits (values between 0 and 4095, or in total 4096=2^12 values) and the bottom part Rb has a width of 4 bits (values between 0 and 15, or in total 16=2^4 values). Whereas Rt is defined from “n to a” bits, and Rb comprises “a” bits (0 being included), Rin can be defined as Rt×2^a+Rb.
As depicted in the diagram of
It is noted that these 2 lookup tables (Toplut and Bottomlut) can be used to make an interpolation between the values contained in their respective and corresponding locations determined by the Rt value. In addition, the Rb value can be used to determine such interpolation, in that the Rb value can be used to interpolate between a value from the Toplut (i.e. at a certain location thereof) and a value from the Bottomlut (at corresponding location thereof).
Assume that for the Red channel e.g. Rb=10 and Rt=1500, by means of example. We assume the R Toplut has at the address 1500 (Rt) the value 2900 and the R Bottomlut has the value 2800 at the corresponding location 1500 (Rt). Both values will emerge at the outputs of the lookup tables when they are addressed with Rt=1500, being 2900 and 2800 respectively. As shown in the diagram of
At this point, we end up with (2900×10+2800×6), which is then divided by 2^a=16 for normalization purposes as part of the interpolation. The formula becomes (2900×10+2800×6)/16, which we call a delta value, to be added to the original color value to compensate for the non-linearities.
Eventually, for the output Rout, we add this delta value to the original input value Rin, which is Rt×2^a+Rb. Hence, the output value Rout becomes Rout=Rin+(Toplut(Rt)×Rb+Bottomlut(Rt)×(2^a−Rb))/2^a.
As a result, Rout=(1500×16+10)+(2900×10+2800×6)/16=24010+2862=26872.
The value 2862 is the delta and is moreover the second compensation on the color, and hence further referred to as the sub-delta.
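The sub-delta computation described above can be sketched as follows, a minimal illustration (the lookup-table contents are the hypothetical values of the worked example; the right-shift implements the division by 2^a with integer truncation):

```python
def sub_delta(rin, toplut, bottomlut, a=4):
    """Sub-delta compensation (approach C): split the input into top and
    bottom bit channels, interpolate between the two LUTs with the bottom
    bits as weight, and add the resulting delta to the original value."""
    rt = rin >> a                    # top-bit channel: LUT address
    rb = rin & ((1 << a) - 1)        # bottom-bit channel: interpolation weight
    delta = (toplut[rt] * rb + bottomlut[rt] * ((1 << a) - rb)) >> a
    return rin + delta

# Worked example from the text: Rt = 1500, Rb = 10, a = 4.
toplut = {1500: 2900}                # R Toplut value at address 1500
bottomlut = {1500: 2800}             # R Bottomlut value at address 1500
rin = 1500 * 16 + 10                 # = 24010
print(sub_delta(rin, toplut, bottomlut))  # prints 26872
```

In hardware, the two LUT reads, the two multiplications, the addition and the shift map onto a short pipeline, which matches the few-clocks latency claimed for approach C.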
In case all individual LED primary colors have the same behavior, only 3 sets of lookup tables have to be made wherein every individual LED computation passes through the same table. It is noted that—in case wherein the compensation needs to be different—sets of computation lookup tables can be made and selected according to the LED that needs a certain compensation.
FPGA Implementation
All of the video pipeline above can be implemented in an FPGA (Field Programmable Gate Array) or a standard ASIC. However, when the number of pixels to be computed with calibration and/or the sub-delta correction is large, this requires lots of memory accesses per pixel, especially for the calibration part. In case of e.g. using the visual perception calibration, this means that for every RGB pixel to be computed, at least 18 values (2 matrices of 9 values) need to be fetched from a precomputed location in memory. Although this can be implemented, it has a drawback on computation speeds and/or a pixel amount limitation. In order to reduce FPGA or ASIC complexity, part of the computation and non-linearity compensation can be done more locally at the LED level or around clusters of multiple LEDs, as described in the next paragraph.
Chip Implementation
Single pixel LEDs with PWM generation have already existed for a long time. Examples of these are e.g. LC8823 LEDs. These are called LEDs with intelligent control. Instead of having just an anode and cathode for the LEDs, they have digital inputs and outputs, as can be seen in
However, it is part of the invention to apply the calibration and/or sub-delta correction also as a local solution by means of chip implementation (for example an LED chip or driver chip). This can either be A, the mathematical approach wherein all the factors are determined by a protocol to fill in the values, or B, a total lookup table for every color, wherein the lookup table is also loaded by an input protocol. Solution C can also be implemented in exactly the same way as described earlier, for example acting on the individual LED.
In general, the invention of adding a lookup table for (brightness and/or color) compensation in this kind of LEDs is considered to be inventive and new to overcome all the above-mentioned issues, such as for example sequential processing being more time consuming (and possibly requiring more resources) than the distributed and/or parallel processing (e.g. locally) in accordance with the invention. Furthermore, moving the complexity closer to the LEDs reduces the complexity (and size) of the control for an LED display.
The LED does not necessarily need to be in the same package. Constant current drivers (common cathode or common anode) are readily available, such as e.g. the TLC59731, a 3-channel 8-bit PWM driver with a single wire interface, wherein the out0, out1 and out2 pins are to be connected to the LEDs as depicted in
And more complex constant current drivers exist that can address multiple arrays of LEDs. An example here is e.g. the MB15759 from Macroblock, i.e. an advanced LED driver built with 48 constant current source output channels and 32 switches packed into a compact BGA package. The block diagram of this (patented) 48-Channel PWM Constant Current LED Source Driver with Embedded Switch for 1:32 Time-multiplexing Applications is shown in
Oversimplified, the block diagrams are similar to all existing (integrated or not) LED drivers, as illustrated in
Apart from the general methods described earlier, it is part of the invention to add functionality blocks in the ‘constant current driver’ chipsets with or without integrated LEDs, as illustrated in
Further referring to
For the additional blocks 82, 83, it may be (as shown here) that multiple layers (or versions or phases) are foreseen for the calibration 82 and for the non-linear compensation 83 (also called sub-delta) respectively. Possibly such multiple layers need to be provided because the calibration and/or the non-linear compensation has to be performed for multiple colors and/or multiple types of LEDs. It is noted that for each kind or set of colors, multiple instances can be added dependent on the complexity to solve. In general, one can state that every individual RGB pixel needs at least one set of calibration data (9 values). In case of visual perception enhancement, a set of 18 values is needed, and in case of additional temperature compensation, a set of 3 extra values for each color is needed. If all the individual colors R, G, B (or others) have the same non-linearity behavior, at least 3 non-linearity compensation blocks need to be added.
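As a hedged back-of-the-envelope illustration of the data amounts mentioned above (the pixel count and storage word size are assumptions, not from the disclosure), the per-pixel value counts translate into memory requirements as follows:

```python
# Per-pixel calibration data counts taken from the text:
BASIC = 9            # one 3x3 calibration matrix
PERCEPTION = 18      # two matrices for visual perception enhancement
TEMP_EXTRA = 3 * 3   # 3 extra temperature values per color, for R, G and B

pixels = 1920 * 1080           # assumed full-HD-resolution LED display
bytes_per_value = 2            # assumed 16-bit storage per value

# Perception enhancement plus temperature compensation, per pixel:
total = pixels * (PERCEPTION + TEMP_EXTRA) * bytes_per_value
print(total // (1024 * 1024), "MiB")  # prints "106 MiB"
```

Figures of this order explain why fetching all values per pixel per frame strains a central FPGA, and why distributing the data into local driver chips is attractive.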
Reason of Non-Linearities
Multiple reasons can exist for non-linearities. Some reasons (but not limited thereto) are:
It is known from the industry that perfect constant current drivers don't exist. Most of these have a dependency on the current needed and also on the supply voltage, as can be found in the datasheets.
Example excerpt from constant current driver (MB15759 from Macroblock) datasheet:
As depicted in
Combinability of Embodiments and Features
This disclosure provides various examples, embodiments, and features which, unless expressly stated or which would be mutually exclusive, should be understood to be combinable with other examples, embodiments, or features described herein.
In addition to the above, further embodiments and examples include the following:
Number | Date | Country
---|---|---
63317178 | Mar 2022 | US