The present invention relates generally to video transport. More specifically, the present invention relates to transporting analog video samples within a display unit or to a display unit, for example.
Image sensors, display panels, and video processors are continually racing to achieve larger formats, greater color depth, higher frame rates, and higher resolutions. Local-site video transport suffers from performance-scaling bottlenecks that throttle throughput and compromise performance while consuming ever more cost and power. Eliminating these bottlenecks can provide advantages.
For instance, with increasing display resolution, the data rate of video information transferred from the video source to the display screen is increasing rapidly: from 3 Gbps a decade ago for full HD, to 160 Gbps for new 8K screens. Typically, a display having a 4K resolution requires about 18 Gbps of bandwidth at 60 Hz, while at 120 Hz 36 Gbps are needed (divided across P physical channels); an 8K display requires 72 Gbps at 60 Hz and 144 Gbps at 120 Hz.
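By way of illustration only, these figures can be checked with simple arithmetic. The sketch below (Python) assumes 24 bits per pixel; the quoted link rates are reproduced by further assuming typical blanking intervals (total timings of roughly 4400×2250 for 4K and 8800×4500 for 8K) and 8b/10b-style line coding. These timing and coding assumptions are part of the illustration, not figures from the specification.

```python
# Rough check of the data rates quoted above. "Active" figures assume 24
# bits per pixel and no overhead; "link" figures add assumed blanking
# (total timings ~4400x2250 for 4K, ~8800x4500 for 8K) and 8b/10b-style
# line coding (x1.25), which together land near the quoted numbers.
def gbps(w, h, fps, bpp=24, overhead=1.0):
    return w * h * bpp * fps * overhead / 1e9

print(f"Full HD active: {gbps(1920, 1080, 60):5.1f} Gbps")            # ~3
print(f"4K60 link:      {gbps(4400, 2250, 60, overhead=1.25):5.1f}")  # ~18
print(f"8K60 link:      {gbps(8800, 4500, 60, overhead=1.25):5.1f}")  # ~72
print(f"8K120 link:     {gbps(8800, 4500, 120, overhead=1.25):5.1f}") # ~144
```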
Until now, the data has been transferred digitally using variants of low-voltage differential signaling (LVDS) data transfer, using bit rates of 16 Gbps per signal pair and parallelizing the pairs to achieve the required total bit rate. With a wiring delay of 5 ns/m, the physical length of each bit on the digital connection is only 12.5 mm, which is close to the limit of this type of connection and requires extensive data synchronization to obtain usable data. This digital information then needs to be converted to the analog pixel information on the fly, using ultra-fast digital-to-analog (D-to-A) conversion at the source drivers of the display or using ultra-parallel slow conversion.
Nowadays, D-to-A converters use 8 bits; soon, D-to-A conversion may need 10 or even 12 bits, and then it will become very difficult to convert accurately at a fast enough data rate. Thus, displays must perform the D-to-A conversion in a very short amount of time, and the time available for the conversion keeps shrinking, so stabilization of the D-to-A conversion also becomes an issue.
Accordingly, new apparatuses and techniques are desirable to eliminate the need for D-to-A conversion at a source driver of a display, to increase bandwidth, to utilize an analog video signal within a display unit, and to transport video signals in other locations.
To achieve the foregoing, and in accordance with the purpose of the present invention, a sampled analog video transport (SAVT) technique is disclosed that addresses the above deficiencies in the prior art. The technique may also be referred to as “clocked-analog video transport” or CAVT.
It is realized that the requirements for bit-perfect communication (e.g., text, spreadsheets) between computing devices are very different from those for communicating video content to humans for viewing. Fundamentally, a video signal is a list of brightness values. It is realized that precisely maintaining fixed-bit-width (i.e., digital) brightness values is inefficient for video transport; because there is no requirement for bit-accurate reproduction of these brightness values, analog voltages offer greater resolution. The unnecessary requirement for bit-perfect video transmission imposes a costly burden, a "digital overhead." Therefore, the present invention proposes to transport video signals as analog signals rather than as digital signals.
Whereas conventional digital transport uses expensive, mixed-signal processes for high-speed digital circuits, embodiments of the present invention make use of fully depreciated analog processes for greater flexibility and lower production cost. Further, using an analog signal for data transfer between a display controller (for example) and source drivers of a display panel reduces complexity when compared to traditional transport between a signal source (via LVDS or Vx1 transmitter) and a source driver receiver having D-to-A converters.
In one embodiment, a transmitter is disclosed that processes incoming digital video samples, converts them to analog, and transports them to a display panel; also disclosed is a source driver of a display panel that receives the analog samples and drives them on to the display panel. An analog signal is used to transmit the digital video data received from a video source (or storage device) to a video sink for display. The analog signal may originate at a transmitter of a computer (or other processor) and be delivered to source drivers of a display unit for display upon a display panel, thus originating outside of the display unit, or the analog signal may be generated at a transmitter within the display unit itself.
In an alternative embodiment, portions of the source driver, or the entire source driver, may be integrated with the glass substrate of the display panel, given the necessary analog speed and accuracy. Prior art source drivers have been mounted at the edge of the display panel (but not integrated with it) because of the complexity of high-speed digital circuits, as well as the large area required for D-to-A conversion. The present invention is able to integrate source drivers with the glass itself because no D-to-A converters are required in the source drivers, no decoders are needed, and because of the lower frequency sample transfer of an SAVT signal; e.g., the SAVT video signal arrives at the source drivers at one-tenth the data rate of a 3 GHz digital video signal.
The invention may be used on any active-matrix display substrate. Best suited are substrates with high mobility (e.g., low-temperature poly-silicon (LTPS) or oxide (IGZO) TFTs). The resulting display panel can be connected to the GPU by only an arbitrary length of signal cable and a power supply when the entire source driver is integrated. There is no need for further electronics connected to the glass, providing great opportunity for further edge width reduction and module thinning.
The invention is especially applicable to displays used in computer systems, televisions, monitors, game displays, home theater displays, retail signage, outdoor signage, etc. Embodiments of the invention are also applicable to video transport within vehicles such as within automobiles, trains, airplanes, ships, etc., and applies not only to video transport from a transmitter to displays or monitors of the vehicle, but also to video transport within such a display or monitor. The invention is also applicable to video transport to or within a mobile device such as a telephone. In a particular embodiment, the invention is useful within a display unit where it is used to transmit and receive video signals. By way of example, a transmitter of the invention may be used to implement the transmitter as described in U.S. Pat. No. 11,769,468 (HYFYPO13), and a receiver of the invention may be used to implement the receiver as described in U.S. application Ser. No. 17/900,570 (HYFYPO09).
The invention, together with further advantages thereof, may best be understood by reference to the following description taken in conjunction with the accompanying drawings in which:
It is realized that the wiring loom in a display unit conforms closely to its design values, such that the resilience afforded by the use of spreading codes (to encode and decode video samples for transport within the display unit, such as is described in U.S. Pat. No. 10,158,396) may be outweighed by the circuit overhead of decoding at the source drivers. In particular, the use of spreading codes affords a degree of resilience against thermal noise in a transmitter's DAC and in the sample and hold amplifiers of a source driver. Nevertheless, it is realized that such thermal noise is stochastic and therefore should be imperceptible. Accordingly, in some applications spreading codes are not strictly necessary, obviating the need for encoding and then decoding in the source drivers. Accordingly, it is proposed to transmit video data as analog signals from a transmitter to any number of source drivers of a display panel.
It is further realized that digitization of a video signal typically takes place at the signal source of the system (often at a GPU), and the digital signal is then transferred, usually using a combination of high-performance wiring systems, to the display panel source drivers, where the digital signal is returned to an analog signal to be loaded onto the display pixels. So, the only purpose of the digitization is data transfer from video source to display pixel. Therefore, we realize that it is more beneficial to avoid digitization altogether (to the extent possible), and to directly transfer the analog data from the video source (or from a suitable transmitter) to the display source drivers. Such an analog signal has high accuracy (subject to circuit imperfections) and is a continuous value, so its resolution is not limited to a fixed bit width as a digital representation is. Because each analog sample thus replaces what would otherwise be roughly ten bits, the sample rate is at least a factor of ten lower than in the case of digital transfer, leaving further bandwidth for expansion.
Further, it can be easier to perform the D-to-A conversion at a point where less power is needed than at the end point where the display panel is driven. Thus, instead of transporting a digital signal from the video source (or from an SoC or timing controller) to the location where the analog signal needs to be generated, we convert to analog near the SoC or timing controller within a transmitter and then transport the analog signal to the display panel at a much lower sample rate than one would normally have with digitization. That means that instead of having to send gigabits per second over a number of lines, we send only a few hundred megasamples per second in the analog case, thus reducing the bandwidth of the channel that has to be used; the rate is approximately one-tenth of the digital rate required for the same number of physical communication paths. Further, with prior art digital transport, every bit occupies just about 1.25 cm of cable (propagation in cable is approximately 0.2 m/ns, and 16 Gbps means 1/16 ns per bit, so one bit is 0.2/16 meter long), whereas transporting analog data provides roughly ten times more time per symbol, meaning extra bandwidth available. And further, a bit in digital data must be well defined; this definition is fairly sensitive to errors and noise, and one must be able to detect the high point and the low point very accurately.
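By way of a worked example, the arithmetic above can be restated directly; the tenfold-slower analog symbol rate is taken from the discussion above, and the snippet simply recomputes the quoted lengths.

```python
# Physical length of one bit on the cable, per the figures above:
# propagation ~0.2 m/ns (5 ns/m wiring delay), 16 Gbps -> 1/16 ns per bit.
propagation_m_per_ns = 0.2
bit_length_m = propagation_m_per_ns * (1 / 16)
print(f"one digital bit occupies {bit_length_m * 1000:.1f} mm")  # 12.5 mm

# An analog sample at roughly one-tenth the rate occupies ~10x the length,
# i.e. roughly ten times more settling time per symbol.
sample_length_m = propagation_m_per_ns * (10 / 16)
print(f"one analog sample occupies {sample_length_m * 1000:.0f} mm")  # 125 mm
```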
The invention is especially applicable to high-resolution, high-dynamic range display units used in computer systems, televisions, monitors, machine vision, automotive displays, aeronautical displays, virtual or augmented reality displays, mobile telephones, billboards, scoreboards, etc.
Shown is a video signal 110 being delivered to the display unit using an HDMI interface (an LVDS, HDBaseT, MIPI, IP video, etc., interface may also be used). Shown generally are the system-on-chip (SoC) 120 and the timing controller (TCON) 130 which deliver digital video samples from the video signal to the transmitter 140. SoC 120 performs functions such as a display controller, reverse compression, certain digital signal processing and outputs the video signal to the TCON. Typically, LVDS or V-by-One will be used to deliver the digital video data 122 from the SoC to the TCON. If via LVDS pairs (for example), the number of pairs is implementation specific and depends upon the data rate per pair as well as upon panel resolution, frame rate, bandwidth etc. Furthermore, a variety of physical layers may be used to transport the video data from SoC 120 to TCON 130 including a serial-deserializer or SerDes layer, as is known in the art; if transmitter 140 is integrated with TCON 130, then this physical layer delivers the video data from SoC 120 to the integrated TCON and transmitter as shown in
It is also possible that some or all digital or image processing is performed in the SoC, in which case there is no image processing performed after the line buffer and before the DAC in
Various embodiments are possible: a discrete implementation in which the transmitter 140 is embedded in a mixed-signal integrated circuit and the TCON and SoC are discrete components; a mixed implementation in which the transmitter 140 is integrated with the TCON in a single IC and the SoC is discrete; and a fully-integrated implementation in which as many functions as possible are integrated in a custom mixed-signal integrated circuit in which the transmitter is integrated with the TCON and the SoC.
In this example of
There is a significant advantage to using analog signals for transport within a display unit even if the signal input to the display unit is a digital video signal. In prior art display units, one decompresses the HDMI signal and then has the full-fledged, full-bit-rate digital data that must be transferred from the receiving point of the display unit to all source drivers within the display unit. Those connections can be quite long for a 65- or 80-inch display; one must transfer that digital data from one position inside the unit where the input is to another position (perhaps on the other side) where the final source driver is. Therefore, there is an advantage, such as the use of lower-frequency signals, to converting the digital signal to analog signals internally and then sending those analog signals to the source drivers.
Also shown within
Typically, a transmitter 140 and a receiver (in this case, source drivers 186) are connected by a transmission medium. In various embodiments, the transmission medium can be a cable (such as HDMI, flat cable, fiber optic cable, metallic cable, non-metallic carbon-track flex cables, metallic traces, etc.), or can be wireless. There may be numerous EM pathways of the transmission medium, one pathway per EM signal 192. The transmitter includes a distributor that distributes the incoming video samples to the EM pathways. The number of pathways may widely range from one to any number more than one. In this example, the transmission medium will be a combination of cable, traces on PCBs, IC internal connections, and other mediums used by those of skill in the art.
During operation, a stream of time-ordered digital video samples 110 containing color values and pixel-related information is received from a video source at display unit 100 and delivered to the transmitter 140 via the SoC and TCON. The number and content of the input video samples received from the video source depends upon the color space in operation at the source (and the samples may be in black and white). Regardless of which color space is used, each video sample is representative of a sensed or measured amount of light in the designated color space.
The signal from the SoC (typically an LVDS digital signal, but others may be used) delivers the pixel values in row-major order through successive video frames. More than one pixel value may arrive at a time (e.g., two, four, etc.); they are serial in the sense that groups of pixels are transmitted progressively, from one side of the line to the other. A processing unit such as an unpacker of a timing controller may be used to unpack (or expose) these serial pixel values into parallel RGB values, for example. Also, it should be understood that the exposed color information for each set of samples can be any color information (e.g., Y, C, Cr, Cb, etc.) and is not limited to RGB. Use of color information other than RGB sub-pixels may require additional processing before the source drivers can drive the columns (which natively take sub-pixel intensity values). The number of output sample values S in each set of pixel samples is determined by the color space applied by the video source. With RGB, S=3, and with YCbCr 4:2:2, S=2. In other situations, the number of sample values S in each set can be just one, or more than three.
The unpacker may also unpack from the digital signal framing information in the form of framing flags that come along with the pixel values. Framing flags indicate the location of pixels in a particular video frame; they mark the start of a line, the end of the line, the active video section, the horizontal and vertical blanking sections, etc., as is known in the art. Framing flags are used to tell the gate drivers which line is currently sent to the display panel and will also control the timing of gate drivers' action. Framing flags may be included within gate driver control signals 190 as is known in the art. In general, symbol and sampling synchronization occurs before extracting framing information such as Hsync and Vsync (and other line control information).
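By way of illustration only, the following sketch shows the kind of unpacking step described above. The two-pixels-per-word grouping and the bit layout (pixel 0 in the low 24 bits, R in the high byte of each pixel) are hypothetical assumptions for this sketch, not the actual LVDS framing.

```python
# Illustrative unpacker: exposes serial pixel groups as parallel (R, G, B)
# sets. Word layout is an assumption: pixel 0 occupies the low 24 bits,
# with R in the high byte of each 24-bit pixel.
def unpack(words, pixels_per_word=2):
    for word in words:
        for px in range(pixels_per_word):
            r = (word >> (px * 24 + 16)) & 0xFF
            g = (word >> (px * 24 + 8)) & 0xFF
            b = (word >> (px * 24)) & 0xFF
            yield (r, g, b)

# e.g. list(unpack([0x112233445566]))
#   -> [(0x44, 0x55, 0x66), (0x11, 0x22, 0x33)]
```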
TCON 130 provides a reference clock 170 to each of source drivers 186, i.e., each source driver chip (e.g., a Hyphy HY1002 chip) has a clock input that is provided by the TCON (whether it is an FPGA or IC). Clock 170 is shown input only to the first source driver for clarity, but each source driver receives the reference clock. This reference clock may be relatively low frequency, around 10.5 MHz, for example. More detail on the reference clock is provided in
Controller 630 stores a line of pixels for the display into one of the line buffers, and that line is then output (into the DACs or into the other line buffer, as explained below) when the line is complete. Typically, pixels for a line of the display panel arrive serially from the SoC, but because the gate drivers will enable a line of pixels to be displayed at the same time, the source drivers need pixels for an entire line to be ready at the same time. Thus, each line buffer provides storage for a line of pixels. Furthermore, at times only half of a line of pixels is enabled on the display panel by the gate drivers; in that case a line is stored in a line buffer and then extracted half at a time to be transmitted while a new line is being stored.
In general, as a stream of input digital video samples is received within the transmitter 140 in row-major order, the input digital video samples are repeatedly (1) distributed to one of the EM pathways according to a predetermined permutation (in this example, row major order, i.e., the identity permutation) (2) converted into analog, and (3) sent as an analog EM signal over the transmission medium, one EM signal per EM pathway. At each source driver 186 the incoming analog EM signal is received at an input terminal and each analog sample in turn is distributed via sampling circuitry to a storage cell of a particular column driver using the inverse of the predetermined permutation used in the transmitter. Once all samples for that source driver are in place they are driven onto the display panel. As a result, the original stream of time-ordered video samples containing color and pixel-related information is conveyed from video source to video sink. The inverse permutation effectively stores the incoming samples as a row in the storage array (for display on the panel) in the same order that the row of samples was received at the distributor. The samples may arrive serially, e.g., R then G then B, or in parallel i.e., RGB in parallel as three separate signals. Using distributor 240, we can reorder the samples as needed.
In one embodiment, four control signals for every 60 video samples are inserted into the stream of samples in the distributor to be sent to the source driver. As shown, each input vector 280 in the line buffer includes a total of 1024 values, including the four control signals per every 60 video samples. The control signals may be inserted at various positions in the input vector; by way of example, "samples" 960-1023 of the input vectors 280-288 may actually be control signals. Any arbitrary but finite number of control signals per input vector may be used, although the more control signals that are transmitted, the higher the data transmission rate needed. Ideally, the number of control signals is limited to what fits into the blanking periods so that there can be a correspondence between transmit rate and displayed lines (thus reducing the amount of storage required, or any additional resynchronization). Further, the control signals may be inserted into the stream of samples at the distributor, or insertion may be performed in another location.
Distributor 240 is arranged to receive the pixel color information (e.g., R, G, and B values) exposed in the input sets of samples. The distributor 240 takes the exposed color information and writes multiple input vectors 280-288 into the first line buffer 241 (one input vector per EM pathway) according to the predefined permutation. Once line buffer 241 is full, each input vector 280-288 is read out via its corresponding output port 281-289 into its corresponding DAC, or optionally into its corresponding image processor 250-259. As these input vectors from line buffer 241 are being read out (or once line buffer 241 is full), the next line of RGB input samples is written into input vectors 290-298 in the second line buffer 242. Thus, once the second line buffer 242 is full (and the DACs or image processors have finished reading input vectors from the first line buffer 241), the DACs or image processors begin reading samples from the second line buffer 242 via their output ports 291-299. This writing to, and reading from, the first and second line buffers continues in this "ping-pong" fashion as long as input samples arrive at the transmitter. Output ports 281-289 and 291-299 may possibly be bit-serial communications, but are more likely to be sequential word-wide samples or even parallel word-wide samples.
In a preferred embodiment for writing into and reading out from the line buffers, samples are written only into one of the line buffers, e.g., into buffer 241, as they arrive at the transmitter 140. Once that buffer is full, all samples are written in parallel from buffer 241 into line buffer 242. Samples are then output into the DACs (or image processors) only from buffer 242. The process is continuous: buffer 241 fills as buffer 242 outputs its samples; once buffer 242 is depleted, all samples of buffer 241 are written into buffer 242, and so on. The samples can be written from buffer 241 into buffer 242 during the horizontal blanking period.
The number of line buffers required depends on the relative time required to load the buffers and then to unload them. There is a continuous stream of data coming in on the RGB inputs. If it takes time T to load all the samples into a buffer and the same time T to unload them, we use two buffers (so that we can unload one while the other is being loaded). If the time taken to unload becomes shorter or longer, the buffer length can always be adjusted (i.e., adjust the number of input vectors or adjust N of each input vector) so that the number of line buffers required is always two. Nevertheless, more than two buffers may be used if desired and either embodiment described above may be used for writing into and reading from the buffers.
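For illustration, a minimal software model of the two-buffer scheme is sketched below. It assumes, as in the embodiments above, that the draining buffer is empty by the time a new line completes (e.g., the transfer happens during horizontal blanking).

```python
# Minimal software model of the two-line-buffer "ping-pong": one buffer
# fills as the other drains; roles swap when a full line has been written.
class PingPongLineBuffer:
    def __init__(self, line_len):
        self.line_len = line_len
        self.fill, self.drain = [], []

    def write(self, sample):
        self.fill.append(sample)
        if len(self.fill) == self.line_len:   # line complete: swap roles
            self.fill, self.drain = [], self.fill

    def read_line(self):                      # serial feed toward the DACs
        line, self.drain = self.drain, []
        return line
```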
Distributor controller 230 controls the operation and timing of the line buffers. In particular, the controller is responsible for defining the permutation used and the number of samples N when building the four input vectors; in this example, N=1024. Controller 230 may also include a permutation controller that controls distribution of the RGB samples to locations in the input vectors. The controller is also responsible for coordinating the clock domain crossing from a first clock frequency to a second clock frequency. In one particular embodiment, the samples are clocked in at a frequency of FPIXEL and clocked out serially from each input vector at a sampled analog video transport (SAVT) frequency of Fsavt. It is also possible to clock in two samples at a time instead of one, or three at a time, etc. The analog samples are transmitted along an electromagnetic pathway of a transmission medium as an analog EM signal 270-279 to the SAVT receiver.
In one particular embodiment, each line buffer 241 or 242 has three input ports for the incoming RGB samples and the samples are clocked in at a frequency of FPIXEL; each line buffer also has 24 output ports, e.g., 281 or 291 (in the case where there are 24 EM signals, each being sent to one of 24 source drivers) and the samples are clocked out from each input vector at a sampled analog video transport (SAVT) frequency of Fsavt. It is also possible to clock in two R, two G and two B samples at a time instead of one each, or three at a time, etc. In one embodiment, Fsavt=663.552 MHz for 24 channels.
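By way of a worked example, the Fsavt figure can be derived from assumed panel parameters. The assumptions below (an 8K panel split across 24 source drivers, 1,024 samples per line per driver including control signals, 4,500 total lines per frame including blanking, and a 144 Hz refresh) are consistent with the 8K144 example discussed later, but the derivation itself is an illustration, not a statement from the specification.

```python
# Worked example: deriving Fsavt = 663.552 MHz from assumed parameters
# (8K panel, 24 source drivers, 64 control signals per line per driver,
# 4,500 total lines per frame including blanking, 144 Hz refresh).
sub_pixels_per_line = 7680 * 3          # 8K: 7680 pixels x RGB
per_driver = sub_pixels_per_line // 24  # 960 columns per source driver
samples_per_line = per_driver + 64      # plus control signals = 1,024
fsavt_hz = samples_per_line * 4500 * 144
print(fsavt_hz / 1e6, "MHz")            # 663.552
```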
For purposes of explanation, one possible permutation is one in which each of the input vectors includes N samples of color information and control signals. The exposed RGB samples of the sets of samples in this example are assigned to input vectors from left to right. In other words, the "R", "G" and "B" values of the first set of samples, the "R", "G" and "B" values of the next set of samples, etc. are assigned to input vector 280 in that order (i.e., RGBRGB, etc.). Once input vector 280 has been assigned its N samples and control signals, the above process is repeated for the other input vectors in order until each of the input vectors has N values. The number of values N per input vector may vary widely. As shown in this example, this predetermined permutation preserves the row-major order of the incoming samples; that is, the first input vector 280 includes sample 0 through sample 1023 of the first row in that order, and the succeeding input vectors continue that permutation (including control signals). Thus, distributor controller 230 performs a permutation by assigning the incoming samples to particular addresses within the line buffer. It should also be understood that any permutation scheme may be used by the distributor 240, and, whichever permutation scheme is used by the transmitter, its inverse will be used by control logic in each source driver in order to distribute the incoming samples to the column drivers. In the situation where only one electromagnetic pathway is used and where the video samples are received at the SAVT transmitter, the distributor writes into one input vector in each line buffer.
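A minimal sketch of this identity (row-major) permutation and its inverse follows; the pathway count and vector length are taken from the example above, and the code is illustrative only.

```python
# Sketch of the identity (row-major) permutation and its inverse. A full
# display line here is PATHWAYS x N samples (video plus control); sample i
# goes to input vector i // N at position i % N, and the source driver's
# control logic applies the inverse mapping to recover column order.
N, PATHWAYS = 1024, 24

def distribute(line_samples):
    assert len(line_samples) == N * PATHWAYS
    return [line_samples[v * N:(v + 1) * N] for v in range(PATHWAYS)]

def recover(vector, pathway):       # inverse permutation at the receiver
    return {pathway * N + pos: s for pos, s in enumerate(vector)}
```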
Image processors 250-259 are shown after the line buffers and before the DACs, although it is preferable to have an image processor (or processors) before the line buffers thus reducing the number needed, i.e., as the RGB samples arrive image processing is performed and then the samples are distributed into the line buffers. Shown are pixels arriving one at a time; if pixels arrive one at a time then one image processor is used, if two at a time then two are used, and so on. Certain processing such as gain management may be performed after the line buffers even if the image processors are located before the line buffers.
Typically, image processing: a) applies gamma correction to each sample; b) level shifts each gamma-corrected sample, mapping the range (0 . . . 255) to (−128 . . . 127), in order to remove the DC component from the signal; c) applies the path-specific amplifier variance correction to each gamma-corrected, level-shifted sample; d) performs gain compensation for each sample; e) performs offset adjustment for each sample; and f) performs demura correction for each sample. Other corrections and adjustments may also be made depending upon the target display panel. An individual image processor 250-259 may process each output stream of samples (e.g., 281 and 291), or a single, monolithic image processor may handle all outputs (e.g., 281 and 291, 285 and 295, etc.) at once. In order to avoid performing image processing on the control signals in the line buffer, the control signal timing and positions in the buffers are known, so that logic can determine that image processing of control signals should not be done. As mentioned above, image processing need not occur within transmitter 140 but may occur in SoC 120, in the TCON, or in another location such as in the receiver; e.g., gamma correction is traditionally done in the receiver (source driver), but demura and more complex image processing are not feasible in a source driver.
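The per-sample steps above can be summarized in a short sketch. The gamma exponent and the correction terms below are placeholders standing in for per-path calibration data, not values from the specification.

```python
# Illustrative per-sample pipeline following steps a)-f) above; all
# coefficients are placeholder assumptions for this sketch.
def process(sample, gamma=2.2, variance=1.0, gain=1.0, offset=0.0, demura=0.0):
    corrected = 255.0 * (sample / 255.0) ** (1.0 / gamma)  # a) gamma correction
    shifted = corrected - 128.0        # b) map (0..255) to (-128..127), no DC
    shifted *= variance                # c) path-specific amplifier variance
    return shifted * gain + offset + demura   # d), e), f) remaining corrections
```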
The processed digital samples of each input vector are input serially into one of DACs 260-269 (whether image processing happens before or after the line buffers); each DAC converts these modified digital samples at a frequency of Fsavt and transmits the modified analog samples along an electromagnetic pathway of a transmission medium as an analog EM signal 270-279 to a source driver of the display unit. Each DAC converts its received sample from the digital domain into a single analog level, which may be transmitted as a differential pair of voltage signals having a magnitude that is proportional to its incoming digital value, the analog levels being sent serially as they are output from each DAC. The output of the DACs may range from a maximum voltage to a minimum voltage, the range being about 1 volt to 4 volts peak-to-peak (Vpp); about 2 volts Vpp works well. In one particular embodiment, we represent signals in the range of +/−500 mV, i.e., a 1 V dynamic range (in reality the dynamic range at the input is about 30% higher, or about 1.3 V).
Although two line buffers are shown within distributor 240 (which is preferable), it is possible to use a single line buffer: as samples from a particular input vector are being read into its image processor (or its DAC), the distributor backfills that input vector with incoming samples such that there is no pause in the serial delivery of samples from the line buffer to the DAC or image processor. Further, although less desirable, it is also possible to place each DAC (or a number of DACs per EM pathway) after the distributor and before the image processors (if any), thus performing image processing on analog samples.
The samples of input vectors 380-388 are then output from line buffer 245 into image processors 250-259 via output ports 381-389. As in the distributor of
In another embodiment (not shown), the predetermined permutation used by distributor 240 orders the samples by color for each input vector, i.e., it sends all 320 red sub-pixels, followed by all 320 green sub-pixels, followed by all 320 blue sub-pixels, followed by all 64 sub-band signals. Thus, using the first input vector as an example, sample positions 0-319 contain the red sub-pixels, sample positions 320-639 contain the green sub-pixels, sample positions 640-959 contain the blue sub-pixels, and positions 960-1023 contain the 64 sub-band signals for a particular row. The samples are then sent out to the image processor (all red, all green, all blue, all sub-band). The other input vectors use the same permutation of grouping the samples by color. Of course, the color groupings of sub-pixels in an input vector may be in any order (not necessarily red, green, blue), and the 64 sub-band signals may be inserted anywhere in the groupings. The reason for this ordering is to exploit a heuristic of natural images, namely that individual color components tend not to exhibit high spatial frequency, thereby reducing potential electromagnetic interference generated by the system when the samples are grouped in this fashion. In fact, substantial EMI reduction is achieved as long as substantially all of the sub-pixels of a particular color are grouped together. Further, this ordering not only allows slower S/H amplifiers to be used in the source driver but also lowers the bandwidth requirement for the transmitter-to-receiver communication channels. Control logic in each source driver will then use the inverse of this permutation in order to direct the incoming samples to the correct column driver.
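A minimal sketch of this color-grouped permutation for one input vector follows, using the 320/320/320/64 split described above; it is an illustration, not the actual distributor logic.

```python
# Sketch of the color-grouped permutation for one input vector: 320 red,
# then 320 green, then 320 blue sub-pixels, then the 64 sub-band values.
def group_by_color(rgb_line, sub_band):     # rgb_line: 320 (r, g, b) tuples
    assert len(rgb_line) == 320 and len(sub_band) == 64
    reds   = [r for r, g, b in rgb_line]
    greens = [g for r, g, b in rgb_line]
    blues  = [b for r, g, b in rgb_line]
    return reds + greens + blues + list(sub_band)   # 1,024 samples total
```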
Input into source driver 400 at input terminal 410 is one of the EM signals from transmitter 140. In this example, terminal 410 serially receives 1,024 analog values at a time, which are then stored into either the row A or row B storage arrays 434 or 436 via S/H amplifiers 420-429. The analog video samples arrive in their natural order according to the predetermined permutation shown in the example of
The source driver 400 consists of 16 interleaved sample-and-hold (S/H) input amplifiers 420-429 that sample the input 410 at Fsavt/16. There are 16 blocks (430 being the first block) of 60 video samples and 4 control signals each. Each of the S/H amplifiers 420-429 samples a particular analog sample in turn and stores it into one of the 64 storage arrays 434 or 436, 60 of which directly feed the column drivers 440. Because input amplifiers 420-429 are interleaved, they may run 16 times slower than the input signal, each one being phase shifted by one SAVT interval. As shown, S/H amplifier #0 (420) drives columns 0, 16, 32, etc., S/H amplifier #1 (421) drives columns 1, 17, 33, etc., and so on. Therefore, each S/H amplifier output spans the range of all 960 columns (an amplifier output every 16 columns).
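The interleave schedule just described can be stated compactly; the sketch below is an illustration of the sample-to-amplifier and amplifier-to-column mapping, not driver firmware.

```python
# Timing sketch of the 16-way interleave: consecutive input samples go to
# consecutive S/H amplifiers, so each amplifier runs at Fsavt/16 and
# amplifier a serves every 16th column starting at column a.
AMPS = 16

def amp_for_sample(i):
    return i % AMPS                      # amplifier that samples input i

def columns_for_amp(a, total_columns=960):
    return list(range(a, total_columns, AMPS))   # amp 0 -> 0, 16, 32, ...
```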
In one embodiment, each of these storage elements in storage array A or B is a storage cell, such as a switched capacitor. Other terms for the storage cell are "sampler capacitor" or "analog latch." There are also 960 high-voltage column drivers 440, each of which drives a column 450 (via output pins), providing the voltage that the display panel 480 requires. As shown, there are 16 blocks of 60 video samples plus four synchronization signals 460 (such as Hsync, Vsync, CTRL) per block.
Once stored (in row A, for example), these 960 samples are driven to each output column 450 via column drivers 440 while at the same time the next set of 960 analog samples is being stored into the other row (B, for example). Thus, while one set of 960 incoming samples is being driven to the columns from one of the A or B rows, the next set of 960 samples is being stored in the other row. In one particular implementation, each analog level is a differential signal arriving at an S/H amplifier 420 that swings between about +0.5 V and −0.5 V, and has a maximum swing of about 18 V around a mid-range voltage in each single-ended column driver, thus requiring amplification. Note that in addition to amplification, at some point the differential signal (−full scale represents dark and +full scale represents bright) is converted to single-ended, and drive polarity is applied so that dark sub-pixels are at, e.g., 9 V and bright sub-pixels are either full-scale positive (e.g., 18 V) or minimum voltage (e.g., 1 V) depending upon the polarity setting.
Control logic 470 implements the inverse of the predetermined permutation used in the transmitter 140 and controls the timing of the S/H amplifiers (when each samples a particular incoming analog sample) via control lines 471, controls the timing of the storage elements of rows A and B (when each row latches a particular analog sample) via control lines 472, and controls the timing of when each column driver 440 drives each column via control lines 473. For example, if the transmitter uses the predetermined permutation described above, in which particular sub-pixel colors are grouped together in an input vector, and then transmits those groups within an EM signal, then control logic 470 will use the inverse of this predetermined permutation in order to route incoming samples to the correct column driver. At the very least, control logic 470 is aware of how the distributor controller has placed the incoming samples into an input vector and uses this a priori knowledge in order to direct the incoming samples to the correct column driver so that the samples are displayed as they appeared in the original source image.
Note that the number of S/H amplifiers 420-429 is a tradeoff between number and quality. The more amplifiers that are added, the slower they run, the smaller and noisier they become, and the smaller the load each one drives. The load presented to the input terminal 410, however, grows with the number of S/H amplifiers, which impacts the quality of the transfer. Therefore, it is a design decision as to how many input amplifiers to use. It is possible to vary the intervals of each clock period slightly in order to address any RFI/EMI emissions issues. The inputs to the S/H amplifiers have only a +/−250 mV swing around their common-mode voltage (on each of the positive and negative inputs), leading to a +/−500 mV signal (1 V dynamic range). This is a similar voltage swing to conventional digital signaling such as CEDS or LVDS. The clock modulation may be done to reduce the RFI/EMI emissions in both cases, although this modulation eats into the sampling window and is not preferred. In addition, in order to optimize the performance of the source driver (to counteract any process variations in the S/H amplifiers as implemented), a low-frequency feedback network may be added off-chip in order to characterize the gain and offset of every amplifier of the source driver, although this technique is not preferred due to area and performance constraints.
An alternative method to optimize the performance of the source driver outputs is to utilize existing compensation techniques of the display unit itself. Modern OLED (and micro-LED) manufacturing techniques characterize the response of every sub-pixel in the array and pre-compensate for the individual offsets from a table of manufacturing data stored in the TCON and used when generating samples. Thus, based upon the physics of the entire display unit (including transmitter, amplifiers, source drivers, each pixel, etc.) each sub-pixel may have a different characteristic response, i.e., it might be too bright or too dark. This table includes an individual offset for each characteristic response.
Note that in the source driver architecture 400 or 500, a predetermined one of the interleaved sampling amplifiers 420-429 or 520-529 stores pixel voltages into the switched capacitors that are then amplified into a given column. Thus, every column is driven through the same amplifiers on each row. Any linear errors in the amplifiers as manufactured, such as gain errors, will be overlaid as a regular pattern onto any other errors measured for the individual sub-pixels along the column via the existing compensation techniques. Therefore, these existing OLED error compensation techniques will compensate also for all linear errors in the proposed source driver's amplifiers. This observation suggests that it may be possible to relax the design requirements (for example with respect to gain accuracy) and thereby enable lower-cost implementations. In one particular preferred embodiment, there are three amplifier stages and the amplifiers include common-mode feedback amplifiers.
In this embodiment each S/H amplifier 520-529 drives 64 neighboring locations (60 columns plus four sub-band values), thus reducing the wiring complexity in the source driver, reducing the physical distance over which each of the amplifiers 520-529 must drive, and also making demura correction easier. For example, a first block 530 of 60 video and four sub-band signals is driven by amplifier 520, block 531 is driven by amplifier 521, and block 539 is driven by amplifier 529. Because this configuration also causes the gain error from any given input sampling amplifier to manifest in 60 neighboring columns, it facilitates conventional high-MTF Mura compensation solutions, making the error easier for the demura system to detect.
In order to implement this embodiment, as illustrated in
Above,
Although a desirable architecture for certain applications, when the control signals are spread across all S/H amplifiers (as in
Even though grouping by color has some advantages, and although it is a desirable architecture for certain applications, the video data must be padded because 960 sub-pixels / 15 amplifiers / 3 colors is not an integer. The additional overhead for padding means that 66 samples per amplifier are sent per line instead of 64. This means that the transmission frequency needs to be increased by a factor of 66/64, which partially defeats the purpose of reducing the transmission bandwidth by grouping colors. And driving across 320 columns is not as desirable as driving only 64 columns.
This HY1002 chip minimizes SAVT bandwidth requirements and thus uses the permutation shown, whereby all sub-pixels of each color are transmitted as a group, with a blanked transition band between groups (i.e., a band of blanking transition signals) in order to lower the bandwidth required between groups. SHA amplifier 0 is the control channel 818, showing control signals 0 to 64, i.e., 65 samples are transmitted per line. As shown, the red sub-pixel indices extend from 0 to 957, the green from 1 to 958, and the blue from 2 to 959. Samples 815 are the red blanking transition signals (tr0 . . . tr4), samples 816 are the green blanking transition signals (tg0 . . . tg4), and samples 817 are the blue blanking transition signals (tb0 . . . tb4). These bands 815, 816 and 817 provide a blanked transition between the colors.
In view of the above, realizing that bandwidth limitations may not be critical in certain applications, that the control information is effectively random, that padding can be undesirable, and that grouping control signals on one channel is advantageous, another architecture is proposed.
Thus, 15 interleaved S/H amplifiers receive the incoming pixel data and each drives 64 adjacent columns, i.e., 64 video tracks, thereby minimizing the span of columns driven by each amplifier. This architecture provides 15 blocks of 64 video samples plus one sub-band channel (control signals) of 64 bits per display line (per source driver). For example, amplifier 0 drives columns 0-63, the second amplifier drives columns 64-127, etc.; the 15th amplifier drives columns 896-959, and amplifier 826 drives the control signals. Having all control signals on one channel means there is no difference in amplitude, delay, or the like from one signal to the next (as there would be if they were on different channels). It is also possible that the control signals arrive on channel zero (i.e., amplifier 0) instead of amplifier 15; that is advantageous in that the control information arrives earlier than the pixel data. Another advantage of this architecture is that control signal extraction needs to look at only one de-interleaving amplifier output rather than being distributed across all amplifiers, simplifying synchronization.
In this figure there are 15 video amplifiers, each driving 64 sub-pixels, for 960 sub-pixels per chip. There is one channel devoted to control, carrying 64 symbols per line (per source driver). By using MFM for timing synchronization (as described below), the 64 symbols are transition encoded, and after accounting for flag and command bits, that leaves 24 or 25 control bits per line.
As shown, the control channel receives a control signal at amplifier 826 which is input to comparator 836 having a reference voltage of 0 V and operating at a 16th of Fsavt or approximately 41.5 MHz. Assuming that the control signals are in the range of −0.5 V up to +0.5 V, the comparator will detect if the control signal is greater than 0 V (meaning a digital 1) or if the control signal is less than 0 V (meaning a digital zero). This digital data is then output at 838 and thus provides a single control bit every 16 samples. Control signals provide synchronization and phase alignment as described below.
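By way of illustration, the comparator's slicing behavior can be modeled in a couple of lines; the voltages below are examples consistent with the range described above.

```python
# Sketch of the control-channel slicer: a zero-crossing comparator turns
# each analog control sample (nominally -0.5 V to +0.5 V) into one bit.
def slice_control(samples_v, threshold_v=0.0):
    return [1 if v > threshold_v else 0 for v in samples_v]

print(slice_control([0.4, -0.3, 0.1, -0.45]))   # -> [1, 0, 1, 0]
```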
This particular embodiment is for an 8K144 display and example parameter values are shown in Table 1 above. One of skill in the art will find it straightforward to modify the architecture to suit other display sizes and speeds. By reordering the samples in the transmitter, each interleaved S/H amplifier can drive adjacent columns while operating in rotation as is described below.
In this permutation, 15 of the amplifiers (0-14) each drive 64 adjacent columns with sub-pixel values, while amplifier 15 handles all 64 of the control signals. This variation minimizes the hardware in the source driver and also minimizes the wiring load on the input amplifiers. Further, this variation allows for the slowest possible SAVT (sampled analog video transport) transmission rate (64×16 samples per line) because padding is not required in the data sequences. In order to best display text and other sharp transitions in intensity, it is preferable that the sampling amplifiers be able to settle to a new value every 1/Fsavt, or approximately 1.5 ns per sample. In order to implement this architecture, the sequence of sub-pixel indices for transmission in a transmitter is: 0, 64, 128, . . . 832, 896; 1, 65, . . . 897; . . . ; 63, 127, 191, . . . 895, 959.
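The transmission sequence quoted above can be generated programmatically, as in this short illustration:

```python
# The transmission order quoted above, generated programmatically:
# 0, 64, 128, ... 896; 1, 65, ... 897; ...; 63, 127, ... 959.
order = [col for phase in range(64) for col in range(phase, 960, 64)]
assert order[:3] == [0, 64, 128] and order[-1] == 959 and len(order) == 960
```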
The above architecture of source driver 820 of
Transmitter Integrated with Timing Controller
As mentioned above, in an alternative embodiment the transmitter is integrated with the timing controller, rather than the discrete implementation shown in
The integrated transmitter/timing controller 640 receives the digital video signal 664, distributes it into a line buffer or buffers, performs image processing and converts the digital samples into analog samples and transmits EM signals to source drivers as described above. Typically, EM signals 192 are delivered to the source drivers 186 using differential pairs of wires (or metallic traces), e.g., one pair per source driver. The gate driver control signals 190 control the gate drivers 160 so that the correct line of the display is enabled in synchronization with the source drivers. A single reference clock 170 from transmitter and timing controller 640 may be fanned out to all source drivers because each source driver chip performs its own synchronization, but practical realities in drive strength may mean that it is preferable that multiple clocks are distributed. In any case, frequency lock between source driver chips is maintained.
As above, controller 630 coordinates storage and retrieval of pixel values into and from the line buffers.
As mentioned earlier, framing flags 627 come from the unpacker 620 and are input into distributor controller 630, which uses these flags to determine the location of pixels in a line in order to store them and then place them into the correct input vectors. After the framing flags are output from the controller 630 (typically delayed), they are input into gate driver controller 650, which then generates numerous gate driver control signals 671 to control the timing of the gate drivers. These signals 671 include at least one clock signal, at least one frame-strobe signal, and at least one line-strobe signal. Once the pixel values have been pushed into the source drivers for a specific line, the line-strobe signal drives the selected line, which has been enabled by the panel gate driver controller, at the right time. Control of the timing of the gate drivers may be performed as is known by a person skilled in the art. Also shown is bidirectional communication 637 between controller 630 and gate driver controller 650; this communication is used for timing management between the source and gate drivers.
Operation of the two line buffers, image processors 250-259 and DACs 260-269 may occur as described above. Preferably, image processing occurs after unpacker 620 and before the line buffers, in which case image processing blocks 250-259 are removed and replaced with a single image processing block between unpacker 620 and line buffers 241 and 242. And, as mentioned above, image processing need not occur within transmitter 640 but may occur in SoC 120 or in another location.
Transmitter Integrated with Timing Controller and System-on-Chip
As mentioned above, in an alternative embodiment the transmitter is integrated with the timing controller and SoC, rather than the discrete implementation shown in
Shown is an input of a digital video signal 110 via an HDMI connector (or via LVDS, HDBaseT, MIPI, IP video, etc.) into the display unit 680, which is then transmitted internally 111 to the integrated SoC. The SoC performs its traditional functions such as display controller, reverse compression, brightness, contrast, overlays, etc. After the SoC performs its traditional functions, the modified digital video signal (not shown) is then delivered internally to the integrated transmitter and timing controller using a suitable protocol such as LVDS, V-by-one, etc. In this embodiment, the timing controller and transmitter are both integrated with the SoC and all three are implemented within a single circuit, preferably an integrated circuit on a semiconductor chip.
The transmitter within circuit 684 converts the modified digital video signal into analog EM signals 192 which are transported to display panel 690. Preferably, signals 192 are delivered to the source drivers 186 using differential pairs of wires (or metallic traces), e.g., one pair per source driver. Gate driver control signals 190 control the gate drivers 160 so that the correct line of the display is enabled in synchronization with the source drivers. Typically, the distance between chip 684 and source drivers 186 is in the range of about 5 cm to about 1.5 m, depending upon the panel size. A single reference clock 170 from transmitter, timing controller and SoC 684 may be fanned out to all source drivers.
The integrated chip 684 may be implemented as herein described, i.e., as shown in
A distributor of the transmitter includes line buffer 720, any number of input vectors (or banks) 722-726, and a distributor controller 728. The RGB samples (or black-and-white, or any other color space) are received continuously at the distributor and are distributed into the input vectors according to a predetermined permutation controlled by the distributor controller 728. In this example, a row-major order permutation is used: the first portion of the row of the incoming video frame (or image), from left to right, is stored into input vector 722, and so on, with the last portion of the row being stored in input vector 726. Accordingly, line buffer 720, when full, contains all of the pixel information from the first row of the video frame, which will then be transported and displayed in the first line of a video frame upon display panel 710. Each input vector is read out serially into its corresponding DAC 732-736, and each sample is converted into analog for transport. As samples arrive continuously from timing controller 702, they are distributed, converted, transported and eventually displayed as video upon display panel 710. There may be two or more line buffers, as shown and described in
Connecting the transmitter 704 to the source driver array 708 is a low-voltage wiring harness 706 consisting of differential wire pairs 742-746, each wire pair transporting a continuous stream of analog samples (an electromagnetic or EM signal) from one of the DACs 732-736. Each differential wire pair terminates at the input 760 of one of the source drivers 752-756. Other transmission media (e.g., wireless, optical) instead of a wiring harness are also possible.
Each source driver of the source driver array such as source driver 752 includes an input terminal 760, a collector 762 and a number of column drivers 764 (corresponding to the number of samples in each input vector, in this example, 1,024). Samples are received serially at the terminal 760 and then are collected into collector 762 which may be implemented as a one-dimensional storage array or arrays having a length equal to the size of the input vector. Each collector may be implemented using the A/B samplers (storage arrays) shown in
Synchronization may be used to provide for horizontal synchronization (beginning of a display line), vertical synchronization (first display line of a frame) and sample phase alignment (when to sample incoming sub-pixel samples). In other words, a receiver such as a source driver receiving a stream of video pixels needs information from a transmitter telling it where the start of a frame is, where the start of a line is, and at what point to sample data representing a particular sub-pixel. For example, each switch 842 needs to know when a sample on the input is valid and stable so that the correct value can be sampled; a process referred to as sample phase alignment determines when to sample. Sample phase alignment accounts for different delays from the TCON to geographically-distributed source drivers of a display; we optimize the phase of the locally-generated clock 171 (derived from reference clock 170) relative to the locally-delivered samples as described in more detail below.
Synchronization is useful (and can be made difficult) for a variety of reasons. For one, a constant stream of video sub-pixels does not inherently have information indicating the start of a frame, the start of a line or phase alignment data. Also, the delay along cables from a transmitter to receiver is potentially variable and is typically not known. Further, attenuation on the cables (which can be different between paths to the various source driver chips) can also be problematic. Finally, the wave shape of the incoming sub-pixel value may not be known or can be variable due to ringing, overshoot, filtering, rate of change of the input value, or other kinds of distortion of the signal.
Realizing that synchronization is important and can be made difficult by the above factors, a technique herein described provides commands for synchronization and phase alignment of analog samples.
We use a timing reference (i.e., a special timing violation not occurring during normal data transmission) referred to herein as a “flag” that informs the receiver, i.e. the source driver, that it may reset its clock and that what follows are known commands or data for synchronization. For instance, once the flag has been received we then receive a command to begin phase alignment and then determine the optimal sampling phase to sample an incoming sub-pixel. To begin with, we are aware of the frequency of the transmission of samples (i.e., the rate at which samples are arriving at the input terminal of the source driver); in the example herein Fsavt is approximately 664 MHz (673.92 MHz for the HY1002). As it can be impractical to transmit that high-frequency clock, in one embodiment we transmit a slower clock, Fsavt/64=10.375 MHz (10.53 MHz for the HY1002) to each source driver and each source driver uses a phase-locked loop to multiply that frequency up to the higher frequency clock. We also know that there are 15 sub-pixel data streams arriving at Fsavt/16 at each input amplifier, that each of these input amplifiers delivers 64 samples and that there will be a control stream (either analog or digital control signals) arriving at the 16th amplifier and having 64 control signals per line.
The central comparator will be reliable (as it is a zero-crossing detector), and generally the data extracted from this central comparator will correspond to the data extracted on the high and low channels, assuming all is working correctly (i.e., if the control signal is +0.4 V, both the central comparator 836 and the high comparator 835 will detect a logical "1"). But if sampling is occurring at the wrong time, it is very likely that the central comparator will still provide the correct bit while the high and low values from the other two comparators disappear. The concept here is that the high and low data require the input to have nearly settled in order to read the correct value, whereas the zero-crossing detector will be right even if the input sample is still slewing. Using only one of the high or low comparators along with the central comparator is also possible. Use of this information is discussed below with regard to phase alignment.
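A minimal sketch of this settled-versus-slewing test follows; the threshold voltages are illustrative assumptions, not the actual comparator references.

```python
# Sketch of the settled-vs-slewing test described above: the central
# (zero-crossing) comparator yields the correct bit even mid-slew, while
# the high/low comparators agree only once the input has nearly settled.
def classify(v, high_ref=0.35, low_ref=-0.35):   # assumed thresholds
    bit = 1 if v > 0.0 else 0                    # central comparator
    settled = v > high_ref if bit else v < low_ref
    return bit, settled    # (bit value, sampling phase looks correct?)

print(classify(0.4))    # (1, True): settled logical "1"
print(classify(0.1))    # (1, False): correct bit, but still slewing
```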
In this embodiment, it is realized that synchronization requires only a single comparator 836 (a zero crossing detector) on a single SHA channel and does not need DACs to set comparison thresholds. The algorithm for synchronization runs in the digital domain (the zero crossing detector output) and can perform both clock-level synchronization (alignment of SHA outputs so that the side-channel is seen on one particular SHA output) and phase-level synchronization (choosing the optimal sampling phase within a clock cycle).
At input terminal 822, there is one differential analog input with matched termination and ESD protection. This input is driven by a 50R source impedance per side through a 50R transmission line; hence, there is a 50% reduction in the voltage received compared to the voltage transmitted. The PLL 821 multiplies the relatively slow reference clock 170 from the TCON (e.g., Fsavt/64) up to the full-speed Fsavt clock 171 (e.g., approximately 675 MHz in the HY1002) with 11 phases (for example), selectable per clock cycle. There is also high-speed timing generation to generate sampling strobes, reset signals and output transfer strobes for the SHA amplifiers 0-15. A 16-way de-interleaver 840 is built using the SHA amplifiers as shown in
This sub-pixel order minimizes the hardware in the source driver and also minimizes the wiring load on the input amplifiers. In order to best display text and other sharp transitions in intensity, it is preferable that the sampling amplifiers be able to settle to a new value every 1/Fsavt, or approximately 1.5 ns per sample. As shown, SHA 0 carries control and timing; SHAs 1-15 carry video data such that each SHA drives 64 adjacent columns of the display. Since the SHAs are sequentially sampled, this leads to a transmission order of: CTL[0], V[0], V[64], . . . V[896], CTL[1], V[1], V[65], . . . V[897], . . . , CTL[63], V[63], V[127], . . . V[959]. The order provides 64 control bits per line and 960 video samples per line, for a total of 1,024 samples transmitted per line (per source driver).
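The transmission order above can be expressed compactly; the following sketch (ours, for illustration only) generates the order and confirms the per-line sample count:

```python
# Reproduce the transmission order described above:
# CTL[0], V[0], V[64], ... V[896], CTL[1], V[1], V[65], ...
def transmission_order(n_sha=16, samples_per_sha=64):
    order = []
    for cell in range(samples_per_sha):     # 64 time slots per line
        order.append(f"CTL[{cell}]")        # SHA 0 carries control and timing
        for sha in range(1, n_sha):         # SHAs 1-15 carry video;
            # SHA k drives the 64 adjacent columns starting at (k-1)*64
            order.append(f"V[{(sha - 1) * samples_per_sha + cell}]")
    return order

seq = transmission_order()
assert len(seq) == 1024                     # 64 control + 960 video samples
assert seq[:4] == ["CTL[0]", "V[0]", "V[64]", "V[128]"]
```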
The timing reference indicates a point in time after which what follows are commands and data in the control sequence; the timing reference is an MFM flag, which is a deliberate timing violation. We assume that the wire length from Tx to Rx is >1 SAVT cycle and that the wire length may vary. We extract digital data on the control channel, so the channel is robust even if the analog samples are not perfect. Further, level values are irrelevant (only the transitions are important), and true and complement values have the same meaning.
Timing signal 852 is Fsavt/16 which corresponds to the timing of the output from the input amplifiers, i.e., the rate at which each amplifier outputs data. Control bit cells 854 represent a sequence of 64 bits received as a control signal at output 838 of one of the source drivers. MFM cells 856 represent the MFM-encoded bits, one MFM cell for every two control bits and payload 858 is a command and data. Control sequence 860 is a sequence of control bits received on a control channel at one of the source drivers and output at control 838 of source driver 820′″, for example.
As shown, the control sequence includes an MFM flag 862, which is a sequence that does not normally occur in the stream of control bits. Flag 862 consists of a sequence of transitions spaced 4-3-4 control bits apart, and the end of the fifth transition denotes the end of the flag (the timing violation). Then there is a trailing zero 864, an MFM-encoded zero ignored by the data receiver before the actual payload begins. The payload is then sent, typically LSB first, although sending MSB first may also be used; the LSB 866 in the 0 position of MFM bit cells 856 is shown in the control sequence as having the value "0." A total of 25 MFM-encoded cells are sent, and the payload 867 is shown to the right of the control sequence. Shown at 868 is a second control sequence different from sequence 860 but having the same MFM flag and trailing zero; the payload it sends is different, reflecting different commands and data that may be sent over a single control channel. Another example of a control sequence is at 869. The payload sent may represent commands or parameters, or be reserved for future use.
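Because only the spacing of transitions matters, flag detection can be sketched as a search for the 4-3-4 gap signature in the recovered control bits. The following is our illustrative sketch, not the document's implementation; details such as handling the trailing zero follow the figure:

```python
# Detect the MFM flag: a deliberate timing violation consisting of
# transitions spaced 4-3-4 control bits apart, a pattern that does not
# occur in valid MFM data. Levels are irrelevant; true and complement
# streams produce the same transition positions.
FLAG_GAPS = (4, 3, 4)

def find_flag(bits):
    """Return the index just after the flag's final transition, or None."""
    edges = [i for i in range(1, len(bits)) if bits[i] != bits[i - 1]]
    for n in range(len(edges) - len(FLAG_GAPS)):
        gaps = tuple(edges[n + k + 1] - edges[n + k]
                     for k in range(len(FLAG_GAPS)))
        if gaps == FLAG_GAPS:
            return edges[n + len(FLAG_GAPS)] + 1  # end of the flag
    return None                                   # no timing violation seen
```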
Synchronization is complicated because we receive data over 16 channels from the de-interleaving amplifiers. A source driver will not know which channel holds the control sequence until synchronization occurs. One proposed method is to use a flag sequence that appears on just one channel output. Identification of the MFM flag tells the source driver chip to resynchronize and be ready for commands or data, such as a horizontal or vertical synchronization command, or phase alignment mode. At power on (or after an outage) the transmitter will transmit the MFM flag and control sequence on all channels and the correct control channel (in this example, the 16th channel) will recognize the flag, resynchronize the timing and recognize commands and parameters. Once resynchronization has occurred, the control sequence need only be sent on the control channel and video data may be sent on the other 15 channels.
Another, more preferable, method is to transmit the MFM flag on one channel, not on all 16 channels initially. The receiver looks for the flag on one channel (and before synchronization is complete, this may be the wrong channel). If after one line time (~1.5 us) the flag is not detected, the clock is slipped (skipped) by one cycle, effectively rotating the amplifier usage. Synchronization to the clock cycle can therefore take up to 16 display line times.
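That search can be summarized as a simple loop; the sketch below is ours, with `watch_line` and `slip_clock` standing in for the hardware behaviors described above:

```python
# Channel-level synchronization by clock slipping: watch one candidate
# channel for the MFM flag; if it does not appear within one line time
# (~1.5 us), slip the clock one Fsavt cycle, rotating which SHA output
# carries the control stream. Worst case: 16 display line times.
def synchronize(watch_line, slip_clock, n_channels=16):
    """watch_line() returns True if the flag was seen during one line time;
    slip_clock() skips one Fsavt cycle, rotating amplifier usage."""
    for _ in range(n_channels):
        if watch_line():
            return True    # control stream now lands on the control SHA
        slip_clock()       # rotate the channel assignment by one
    return False           # no flag after 16 skips; advance the sampling phase
```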
Because the control stream is continuous, the commands and parameters may be extended over multiple display lines if necessary. And even though conventional CEDS transmits approximately 28-32 control bits per display line, we realize that some of these bits do not need to be conveyed each line (some, e.g., low temperature mode, power control, etc., may be frame parameters); i.e., less frequent transmission is adequate. The control channel disclosed herein is robust enough not to require a CRC, since we extract digital data over the same channels that convey analog video samples to 10-bit accuracy. Nevertheless, a CRC may be added. Another advantage of using MFM in the context of video transmission is that when the flag occurs it provides an immediate and accurate timing reference; there is no waiting for correlation as is the case for other techniques such as Kronecker. Further, this control channel is adequate to convey commands such as frame synchronization, line synchronization, parameter data, and phase alignment information. The commands are sent distributed (sent sequentially, one symbol at a time, over one line period), and the received control information applies to the next display line. Typically, the control sequence never ends. One control packet is sent every line period. Information such as the polarity of the column driver, driver strength, etc. is carried per line.
Synchronization stream 878 is a stream transmitted on the control channel continuously during the phase alignment mode; it is also contemplated for use in determining the threshold of the upper comparator 835.
VSYNC 881, when asserted, indicates that the current line being received is in the vertical blanking period, so no data will be displayed and the video controller state machine is re-initialized. Polarity control bits 882 determine the polarity in which pairs of columns are driven relative to the dark level. Each pair of columns is driven in a complementary fashion (one column output is driven more positive than the dark level and the adjacent column is driven more negative), and the polarity control for a column-pair determines the direction. The four polarity controls independently control the four column pairs in an 8-column group, and the pattern is repeated every eight columns for all sub-pixel columns in a line. The polarity control can be updated each line. In practice it is likely that polarity control bits will be changed at most once per line (to reduce power consumption).
Shorting control bits 883 include "short_gena," which, when asserted, causes adjacent columns to be shorted only if the corresponding polarity control bits have been changed, unless short_all is also asserted; when de-asserted, no shorting is performed at all. "Short_all" enables shorting of all column pairs irrespective of the state of the polarity control changes, but only has effect if short_gena is also asserted. Drive Time Control 884 specifies the number of Fsavt/16 cycles from the start of the high-voltage driver's drive period until the driver is tri-stated or charge-shared (depending on SHORTCTL). High Voltage Driver Sampling Phase 885 is a chopper clock that swaps both the inputs and outputs of the main amplifier to cancel its offset. SHA Calibration Control Signals 886 include two SHA calibration control signals: sha_video (sha_cal[0]) and sha_meas (sha_cal[1]), both directly controllable from side-channel control bits. These signals control the SHA's Calibration_Phase_1 and Calibration_Phase_2 signals, respectively.
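For illustration, the per-line control payload might be unpacked as below. This is a hedged sketch of ours: the text gives the field names and some widths (four polarity bits, two shorting bits, two SHA calibration bits), but the bit ordering and the remaining widths here are our assumptions, not the document's:

```python
# Unpack a received control payload into the fields named above.
# Bit positions and the drive-time width are ASSUMED for illustration.
def parse_control_payload(p):
    return {
        "vsync":      (p >> 0) & 0x1,   # 881: line is in vertical blanking
        "polarity":   (p >> 1) & 0xF,   # 882: four column-pair polarity bits
        "short_gena": (p >> 5) & 0x1,   # 883: short on polarity change
        "short_all":  (p >> 6) & 0x1,   # 883: short all pairs (needs short_gena)
        "drive_time": (p >> 7) & 0x3F,  # 884: Fsavt/16 cycles of drive (width assumed)
        "chop_phase": (p >> 13) & 0x1,  # 885: HV driver chopper clock phase
        "sha_cal":    (p >> 14) & 0x3,  # 886: sha_video (bit 0), sha_meas (bit 1)
    }
```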
As mentioned above, due to delays and attenuation in the cables, the presence of ringing on the input samples, inter-symbol interference, etc., it is desirable to determine the optimal sampling phase of the incoming samples for a particular source driver. This phase alignment may be performed at power-on or at regular intervals such as at frame synchronization, during frame blanking, etc., and the phase alignment mode may be entered by issuing the “set phase alignment mode” described above. Two phase alignment techniques are proposed below.
Once phase alignment mode has been entered we send a synchronization stream 878 along the control channel of the source driver, e.g., the synchronization stream arrives at input amplifier 826, and the stream is detected by a central comparator 836 as well as an upper threshold comparator 835 and a lower threshold comparator 837. This synchronization stream is preferably a valid MFM data stream (i.e., MFM zeros and MFM ones) with a regular 50% duty pattern of positive and negative values of known amplitude. Because this is a valid MFM data stream, the “exit phase alignment mode” command may be issued at any time. Comparators 835 and 837 should be sufficiently fast, but do not need absolute accuracy as long as the offset is less than the difference of the last two amplitude levels.
Basically, central comparator 836 provides zero-crossing detection and indicates whether the input detected is positive or negative. When the sample is positive and the sampling phase is correct, upper threshold comparator 835 should also produce a positive value; if not, sampling has occurred too early or too late, i.e., before the transition to a positive value or after it. Lower threshold comparator 837 provides similar information when the sample is negative. If the upper comparator or the lower comparator does not agree with the central comparator, then the sampling phase is adjusted. A detailed technique for adjusting the phase to correctly sample positive input is described below, and one of skill in the art will be able to apply the technique to negative input.
The upper comparator (for example) should report the same value as detected by the zero crossing detector. As the sampling phase is rotated (by advancing the phase from the PLL) we will eventually get to a point where the next transition starts to occur. That transition will cause the upper comparator to provide a result that disagrees with the zero crossing detector. We then know that we have detected the transition, and we can set the sampling phase back by one (or two, for safety) phases, so that we sample late in the symbol period after the sample has settled, but before the transition to the next sample. Note that if the transition is very quick, it is possible that the zero crossing detector will also flip when the symbol transition occurs, in which case looking at the upper comparator is not required, so “the transition” may be determined by the OR of these two events.
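The search just described reduces to a short loop; the sketch below is ours, assumes sampling on positive pulses (the negative case uses the lower comparator symmetrically), and backs off two phases for safety:

```python
# Rotate the sampling phase until "the transition" is detected, i.e. the
# upper comparator disagrees with the zero-crossing detector OR the
# zero-crossing output itself flips, then back off two phases so we
# sample late in the settled symbol, before the next transition.
def find_sampling_phase(set_phase, sample_comparators, n_phases=11):
    """sample_comparators() returns (zero_cross, upper) booleans for one
    sample taken at the currently selected PLL phase."""
    set_phase(0)
    prev_zc, _ = sample_comparators()
    for step in range(1, n_phases + 1):
        set_phase(step % n_phases)
        zc, upper = sample_comparators()
        if (zc and not upper) or (zc != prev_zc):
            return (step - 2) % n_phases   # two phases back, for safety
        prev_zc = zc
    return 0                               # no transition seen; keep default
```

In practice the result would be averaged over multiple measurements, as noted below, rather than taken from a single pass.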
As mentioned, the first step is to enter phase alignment mode and to send synchronization stream 890 along the control channel. Preferably, amplitudes 891 and 892 are set far enough apart to handle any ringing, etc., of the input and to provide a window (in this example, approximately 0.2 V) in which upper threshold 893 can be set so that, when the sampling phase is roughly correct, pulse 891 does not trigger the upper comparator but pulse 892 does. In one example, if the expected amplitudes of a control signal are approximately 1.5 V and approximately −1.5 V, then the amplitudes of pulses 891, 892 are set below that expected amplitude as shown. Corresponding amplitudes for pulses 897 and 898 may be set in the same manner.
In order to choose initial voltages for these two pulse amplitudes one aim is to set the modulation levels so that we can detect the transition. One embodiment uses 50% amplitude and 75% amplitude (of a positive pulse) for these two amplitudes of the synchronization stream. That makes it easy to set the DAC threshold between the two amplitudes (with allowance for noise, etc.), yet still provides a good indication of when transitions occur (when the 75% amplitude pulse drops below the DAC upper threshold).
Selecting an initial sampling phase may be a random selection, the reset value (e.g., phase 0) or some other phase selection. Because the zero crossing detector is used to determine the expected signal level, it would be unlikely (~1/16 chance, but possible) to select a sampling phase at the symbol transition where the zero crossing output appears to be random. If that occurs, though, and we do not see a flag after all 16 clock skips, we advance the phase, which puts us into a position where the zero crossing detector will work. There are 11 phases in our implementation; it is expected that shifting the phase about two or three positions will be sufficient. Other implementations will differ.
Once the synchronization stream arrives, logic and circuitry (not shown) in the source driver may adjust the upper threshold by sliding it up and down to determine its optimal voltage. For instance, if the upper threshold is too low, the upper comparator will trigger on both pulses 891 and 892; if the upper threshold is too high, it will not trigger on pulse 892; when the upper threshold is placed correctly, it will not trigger on pulse 891 but will trigger on pulse 892. The source driver will not know what the amplitudes of pulses 891, 892 will be due to attenuation, etc., but as long as the amplitudes are far enough apart to place the upper threshold accurately, the source driver does not need to know the precise values. This adjustment process uses a sampling phase that is roughly correct, but not optimal. Once the upper threshold is correctly placed it will not be triggered by any ringing of pulse 891 but will be triggered by pulse 892. A DAC may be used to adjust the upper threshold.
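One way to picture the placement is a downward slide of the DAC code until the comparator fires on pulse 892 but not on pulse 891; the helper names below are ours, not the document's:

```python
# Place the upper threshold between the two pulse amplitudes by sliding
# the DAC code downward from the top of its range.
def place_upper_threshold(set_dac, triggers_on, dac_codes=range(256)):
    """triggers_on(pulse) reports whether the upper comparator fired while
    the named pulse ('891' or '892') was being sampled."""
    for code in reversed(dac_codes):       # start high, slide downward
        set_dac(code)
        if triggers_on("892"):             # low enough to catch pulse 892...
            if not triggers_on("891"):     # ...yet still above pulse 891
                return code                # threshold sits in the window
            break                          # too low: it triggers on both
    return None                            # no valid window found
```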
Once the upper threshold has been placed, logic and circuitry in the source driver starts rotating the sampling phase around the eleven different phase positions to determine the best phase at which to sample. By way of example, an initial sampling phase may occur roughly in the middle of pulse 892, but this may not be the optimal point to sample because the pulse may not be stable at this point and may yield an incorrect value. Typically, the best point at which to sample is immediately before the transition to the next pulse, when the signal is most settled, i.e., before the trailing edge of the pulse. When the sampling phase is rotated to point 894, both the central comparator and the upper comparator trigger and signal that a positive value is received; when rotated to point 895, both trigger again. But when the sampling phase is rotated to point 896, suddenly the upper comparator will not trigger, and we will know that we have just passed the transition: there will not be correspondence between the upper comparator and the central comparator. (Even though a perfect square wave is shown, the central comparator will still signal a positive value, as the transition is not a steep drop down to −1.2 V but rather a more gradual descent.) Accordingly, by going back one or two phase taps earlier, i.e., to point 895 or 894, we will have found the best sampling phase. Another quantity of phase positions may be used; eleven was determined by the process (the number of stages of inversion that fit within the 1.5 ns clock period). An odd number of positions may work well, depending on the VCO structure.
Use of the two pulses 891, 892 to set the upper threshold provides certainty that the upper threshold is high enough to perform the search for the optimal sampling phase as described below. Providing an upper threshold that is below the amplitude of pulse 892 guarantees that, when sampling occurs after the transition of pulse 892, there will not be correspondence between the upper comparator and the central comparator, thus facilitating choosing the optimal sampling phase. Although it is possible to use a single sampling phase once detected, it is preferable to average over multiple measurements in order to handle noise and overshoot.
Above is described a technique for determining the optimal sampling phase of positive pulses. The same technique may be applied to the negative pulses as well, and the results can be averaged. In one particular embodiment, the lower threshold comparator 837 is not necessary and only comparators 835 and 836 are used to determine the optimal sampling phase using the positive pulses as described above. In another embodiment, the upper threshold comparator is not used, and only the lower threshold comparator and the central comparator are used with negative pulses in order to determine the optimal sampling phase. In yet another embodiment, the upper threshold comparator is used exclusively with only positive pulses in the synchronization stream, all having the same amplitude; the upper threshold is set to be below the amplitude of these positive pulses and the sampling phase is rotated forward and back depending upon when this upper comparator ceases to trigger. The upper threshold comparator may also be used exclusively when the synchronization stream includes alternating positive pulses of different amplitudes, as shown in the figure.
At step 974, if no flag is detected and the detector output = 1, the process moves to step 972.
Once the optimal phase is determined, it is implemented within the source drivers by sending the output of the sampling phase adjustment circuit as a sampling clock to the SHA amplifiers. Preferably, all amplifiers of all source drivers act in unison; i.e., there is only one sampling phase alignment circuit and one clock cycle alignment that controls all SHA amplifiers. Within each source driver, the input SHA amplifiers are time-interleaved and generate sample outputs that are skewed in time by one Fsavt cycle between adjacent channels. The SHA amplifiers then transfer these samples to the collectors (A/B samplers) with skewed timing as well, but after all samples for a line are gathered by the collectors, the pre-amplifiers transfer all samples to the next stage (the level converters) in unison. This effectively "wastes" 16 Fsavt cycles of the transfer time at the pre-amplifier outputs, but as there is a decimation of the sampling rate, there is sufficient time for this to occur.
Other synchronization techniques may also be used. By way of example, we can provide for horizontal and vertical synchronization by forwarding a low-frequency clock. In order to phase adjust for where we sample, another technique is to send known black/white references in the sub-band and adjust the receiver's PLL until we find the blackest black and whitest white.
In another synchronization technique, the reference clock is more than simply a reference clock; it also includes data (such as parameters), but at a lower frequency. The clock and its parameters are sent via a wire separate from the SAVT samples (which wire already exists). There is no need to intermingle the side channel data with the video data, thus the SAVT rate is reduced and it is only necessary to send 60*16=960 samples per line, thus requiring lower bandwidth for communication. By using sub-pixel color grouping, the bandwidth requirements are reduced even more. It is also possible to introduce color-transition blanking into this technique; since there are no side channel bits embedded in the video stream, there are no issues with bleeding of the side channel bits into the video bits.
Shown is an input terminal 902 and one of the 16 input distribution amplifiers, in this case SHA[0] 824, shown at 904; not shown are switches 842 of the input terminal and the other 15 distribution amplifiers, which drive video samples. The input sampling is illustrated by the switches sampling into capacitors; the input sampling switches are controlled symbolically by the signals b and t. There are 15 other identical amplifiers (with skewed timing) for carrying video, while SHA[0] carries the side channel information. It is arbitrary which SHA channel carries the side channel, but the advantage of using SHA[0] is that control information arrives before the video samples, not after them, giving some time for setup before the control information is needed. Each amplifier 904 has a nominal gain of one, which may vary.
Each SHA channel drives 64 columns 920 via a series of sampling blocks/collectors 908, preamplifiers 910, level converters 912, HV drivers 914 and column shorting switches 918 as indicated by the array notation [63:0] used in the component designators in the figures. Level converters 912 may also be referred to as differential-to-single-ended converters. Preamplifiers 910 provide the gain required for the signals coming from a transmission medium.
Shown also is one of 64 level converters 912 of the channel, which converts the differential signal into a single-ended signal, adds an offset, changes the polarity of the signal, and provides amplification. Output out_p 913 = vmax + 0.5*(Vinp − Vinn) if pol = 0; output out_p 913 = vmin − 0.5*(Vinp − Vinn) if pol = 1. High-voltage driver 914 is one of 64 such drivers of the channel that multiplies the incoming signal to provide the voltage (plus or minus) expected by the display. Column shorting switch 918 provides shorting for LCD displays as is known in the art. Finally, the expected voltage is output to the column at 920. The preamplifier 910, level converter 912 and HV driver 914 may be considered an amplification stage before each column, in this case a pipeline amplifier, or simply "an amplifier."
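The transfer function just given (reading the garbled second case as vmin − 0.5*(Vinp − Vinn)) is simple enough to state in one line; this sketch is ours, for illustration:

```python
# Level converter: differential input to single-ended output, with the
# polarity bit selecting a swing above vmax or below vmin.
def level_convert(vinp, vinn, pol, vmax, vmin):
    diff = 0.5 * (vinp - vinn)
    return vmax + diff if pol == 0 else vmin - diff
```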
A split OLED DDIC architecture, as shown in the accompanying figure, may also be used.
Shown is a mobile telephone (or smartphone) 980 which may be any similar handheld, mobile device used for communication and display of images or video. Device 980 includes a display panel 982, a traditional mobile SoC 984, an integrated DDIC-TCON (Display Driver IC-Timing Controller) and transmitter module 988, and an integrated analog DDIC-SD (DDIC-source driver) and receiver 992. Mobile SoC 984 and module 988 are shown external to the mobile telephone for ease of explanation although they are internal components of the telephone.
Mobile SoC 984 is any standard SoC used in mobile devices and delivers digital video samples via MIPI DSI 986 (Mobile Industry Processor Interface Display Serial Interface) to the module 988 in a manner similar to the Vx1 input signals discussed above. Included within module 988 is the DDIC-TCON integrated with a transmitter, for example the transmitter described above.
These analog signals 990 are received at the integrated analog DDIC-SD and receiver 992. DDIC-SD receiver 992 receives any number of analog signal pairs and generates voltages for driving display panel 982, and may be implemented as described above.
Analog DDIC-SD Rx 992 may be a single integrated circuit having 12 source drivers within it (each handling a single pair) or may be 12 discrete integrated circuits, each being a source driver and handling one of the 12 signal pairs. Of course, there may be fewer signal pairs, meaning correspondingly fewer source drivers.
Analog Video Transport Source Driver Integration with Display Panel
As discussed above, analog video transport is used within a display unit to deliver video information to source drivers of the display panel. It is further realized that large display architectures nowadays consist of a large area of active-matrix display pixels. In earlier days, display drivers (source and gate) would be mounted at the glass edges, but not on the glass, providing source- and gate-driving circuits. Further integration of the driving electronics onto the glass has stagnated due to the complexity of high-speed digital circuits, as well as the large area required for D-to-A conversion. By way of example, digital transport to the source-driving circuits operates at around 3 GHz, a frequency much too high to allow integration with the glass. It is further realized that many display drivers must be attached to the display edge in order to drive a complete, high-resolution LCD or OLED screen. A typical driver has approximately 1,000 outputs, so a typical 4K display requires 4,000×RGB = 12,000 connections, meaning twelve source drivers. Increasing the panel resolution to 8K increases this number to 24 source drivers. Data rate, synchronization difficulties and bonding logistics make it difficult to continue in this direction.
A display panel (such as an LCD panel) is made from a glass substrate with thin-film transistors (TFTs) formed upon that glass substrate, i.e., field-effect transistors made by thin-film deposition techniques. These TFTs are used to implement the pixels of the display. It is therefore realized that those TFTs (along with appropriate capacitors and resistors, and other suitable analog components) can also be used to create logic circuitry to implement elements of the novel source drivers described herein which are then integrated with the glass. These elements are integrated at the extreme edges of the glass, just outside the pixel display area, but inside the perimeter seal of the glass. Thus, the source drivers disclosed herein may be integrated with the glass using these transistors, capacitors, resistors, and other analog components required, and may do so in the embodiments described below. Accordingly, the source drivers (or elements thereof) which had previously been located outside of and at the edge of the display panel glass are now moved onto the display panel glass itself. In addition, the gate driver functionality for the gate drivers may also be moved onto the display panel glass.
The SAVT video signal may be transported along the edge of the display glass using relatively simple wiring, and is less sensitive to interference than existing Vx1 interfaces. The lower sample rate makes it possible to design the required analog electronics (which are less complex) of the source drivers on the edge of the TFT panel, on the display panel glass itself. Building the source driver circuitry on the glass edge allows the following elements of a source driver to be integrated with the glass along with their typical functions: input terminal and switches (receive analog samples via the SAVT signal and distribute them to the collector); collector (receives the analog samples via input amplifiers and collects the samples in a storage array or line buffer); level converters (convert to single-ended, provide voltage inversion and voltage offset); and amplifiers such as high-voltage drivers (provide an amplified voltage and the current required to charge the display source lines' capacitance).
If faster TFT transistors of higher quality are used, then higher-frequency portions of the source driver may be integrated with the glass. Also, smaller device sizes will allow the transistors to switch faster, thus enabling implementation on glass of elements using those devices. For example, the channel length of a TFT affects its size; preferably the channel length for oxide TFTs is less than about 0.2 um, and the preferable channel length for LTPS TFTs is less than about 0.5 um. Reducing the channel length by 50% yields an increase in speed by a factor of four. Further, implementation may depend upon the type of display; displays of lower resolution (2K, 1K and smaller) may use elements that do not require the high frequency of 4K and 8K displays. Typically, amorphous silicon transistors would not be used, as they have a tendency to threshold shift and are not stable. Note that the source driver disclosed herein requires neither digital-to-analog converters to convert video samples nor any decoder to decode incoming video samples.
In a first embodiment 102, level converters 620 and amplifiers 621 are integrated with the glass because the level converters only require a relatively low-frequency clock. As the level converters switch once per line they require a switching frequency of about 50 kHz for a 2K display, 100 kHz for a 4K display, etc. Thus, the first embodiment of integration may use TFTs that can operate at a clock frequency of at least about 50 kHz, assuming a 2K panel (100 kHz for a 4K panel, etc.). Thus, IGZO or LTPS TFTs may be used in the first embodiment.
In a second embodiment 104 using faster transistors, level converters 620, amplifiers 621 and collector 786 may also be integrated with the glass, thus integrating the entire source driver. Collector 786 requires a higher-frequency clock as each collector is manipulating the pixel sequence and requires a switching frequency of about 50 MHz for a 2K display, 100 MHz for a 4K display, etc. Thus, the second embodiment of integration may use TFTs that can operate at a clock frequency of at least about 50 MHz, assuming a 2K panel. Thus, LTPS TFTs may be used in the second embodiment for 2K panels.
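As a rough numerical check (ours; the text quotes round figures of about 50 kHz and 50 MHz), the required switching frequencies follow directly from the line rate:

```python
# Level converters switch once per display line; collectors run near the
# per-line sample rate (1,024 samples per line per source driver).
def line_rate(lines, refresh_hz=60):
    return lines * refresh_hz

for name, lines in [("2K", 1080), ("4K", 2160)]:
    f_line = line_rate(lines)            # level-converter clock
    f_collector = f_line * 1024          # collector clock, order of magnitude
    print(f"{name}: level converter ~{f_line / 1e3:.0f} kHz, "
          f"collector ~{f_collector / 1e6:.0f} MHz")
# 2K: ~65 kHz and ~66 MHz, the same order as the ~50 kHz / ~50 MHz
# figures quoted above.
```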
Shown also is rectangular area 140, also located upon the glass itself, in which elements of the source drivers may be located. The source driver functionality may be partially or fully integrated with the glass by making use of TFT switches on the glass in this area 140. The first embodiment integrates the amplifiers and level converters onto the glass (formed in region 140), while the second embodiment integrates the amplifiers, level converters and collector (also formed in region 140).
As the source drivers disclosed herein do not receive digital signals, have no D-to-A converters and related circuitry for processing the digital video samples, nor decoders, the lower processing frequencies and smaller dimensions of these drivers allow for them to be integrated onto the glass. Thus, for example, since a typical 64-inch 4K television panel has a pixel width of 80 um (40 um in case of an 8K display), there is also sufficient width to integrate the drivers directly onto the glass because the dimensions of the output amplifiers are expected to fit within this space. Depending upon the pixel width of a particular implementation, specific TFTs may be chosen.
An interconnect printed circuit board 182 receives and passes the EM signals 602 via flexible PCBs 184 to the source drivers, located on integrated circuits 186a and partially integrated with the glass in TFTs 186b. Passing the EM signal in this fashion is implemented for embodiment 1, as a portion of each source driver (at least the collector) will still be located on IC 186a within flexible PCBs 184, while the level converters and amplifiers will be located on glass in TFTs 186b. As shown, each integrated circuit 186a passes analog signals 187 to its corresponding circuitry on glass 186b. The nature of these analog signals will depend upon whether embodiment 1 or embodiment 2 is being implemented; an implementation for embodiment 2 is shown below. Gate clocks 190 and 192 are delivered to the gate drivers via circuit board 182 and flexible PCBs 184. PCBs 184 attach to panel glass 150 as is known in the art.
Returning now to the example source driver described above, its elements may likewise be divided between the source driver chip and the glass in various ways.
Alternatively, all column drivers 916 of the source driver are implemented on glass while the rest of the upstream elements (i.e., collector 915) are implemented outside the edge of the glass. Or, all column drivers 916 and collector 915 of the source driver are implemented on glass, and no elements of the source driver are implemented on flexible PCB 184.
In one particular embodiment, the SHA input amplifiers operate at an input rate of about 664 MHz, the A/B sampling blocks operate at 1/16 of the input rate, and the preamplifiers and downstream components operate at 1/1024 of the input rate (1/64 of 1/16 of the input rate). Of course, the input rate may be different, and the fraction of the input rate at which the downstream components operate may vary depending upon the implementation, number of columns, interleaving technique used, etc. In another embodiment, only the HV driver is implemented on glass, as the output from the level converter 912 is single-ended (making implementation easier). Or, the preamplifiers, level converters and HV drivers are implemented on glass, as they require a lower frequency than the SHA amplifiers and A/B blocks. It is also possible to implement only SHA amplifiers 904 in the source driver chip and all other downstream components on glass, as the amplifiers 904 operate at the greatest frequency.
Improving Video Images Via Feedback from Column Amplifiers of a Display
We disclose a technique to compensate for that variation by sending appropriate feedback to the timing controller (TCON) of the display unit. This invention thus allows the 24 (or however many) demultiplexing/source driver chips to feed back the level of a single column amplifier to the TCON. Upon gathering performance information from all 23,000 column amplifiers, the TCON can pre-scale values intended for a given column to equalize the performance between columns. Advantageously, there is no high-speed performance requirement: the invention may be used pre-sale for screen calibration purposes. In other words, the technique may be used during a production test where all columns are available and drive characteristics may be measured without an additional area penalty on chip (due to an area overhead per column for sampling). This pre-scaling of values based upon individual column feedback is in addition to any pre-scaling the TCON may do to equalize the performance of different rows in the display; rows farther away from the column drivers may require additional current to achieve the same light output.
There are two main embodiments: analog feedback and digital feedback. Both may use an interface like JTAG, I2C or SPI so that the TCON can issue a command for a particular column driver to send back the value coming out of its column amplifier. The command may also be issued using an MFM command as described above. In the analog version, that value is sampled through an analog switch to an analog bus (a single analog connection returning to the TCON) shared by all source driver chips. The TCON then does analog-to-digital conversion (ADC) and digital processing of the result. As the source drivers' outputs are high voltage, the multiplexing requires the use of high-voltage transistors, or a low-voltage representation of the column voltage may be generated before multiplexing (by resistor or capacitor dividers). In any case, there is an area overhead per column.
In the digital version, each source driver chip has its own A-to-D converter, so it does the column amplifier sampling locally (thus avoiding any loading effects of a long analog return path), and returns the digital value to the TCON, over the same JTAG, I2C or SPI path as the command.
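The digital-feedback calibration loop might look like the following sketch (ours; the interface names are hypothetical stand-ins for the JTAG/I2C/SPI transactions described above):

```python
# Gather per-column measurements, derive per-column gains, and pre-scale
# video in the TCON before SAVT transmission.
def calibrate_columns(read_column_level, n_columns, test_level=0.5):
    """read_column_level(col) returns the locally digitized amplifier output
    for column `col` while it is driven with `test_level`."""
    gains = []
    for col in range(n_columns):
        measured = read_column_level(col)
        gains.append(test_level / measured if measured else 1.0)
    return gains

def pre_scale(line_samples, gains):
    # applied per line in the TCON, equalizing column-to-column response
    return [v * g for v, g in zip(line_samples, gains)]
```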
In a separate embodiment, we model how the performance of a pixel varies with its neighbors (e.g., in the same column) and use the model to pre-scale sub-pixel inputs before they are sent to the source driver chips in order to produce the desired brightness. In general, it can be difficult for each sub-pixel to report its light output; such measurements require an elaborate testing setup. Instead, we measure the current through each sub-pixel at specific input values and then use that measured current as a proxy for the light emitted in order to model the performance of a pixel (or sub-pixel).
In a variation, we do use real screen metrology to model the performance of the display, especially how neighboring sub-pixel values affect brightness (due to slew rate issues, etc.). We then use this model to pre-scale sub-pixel inputs before they are sent to the source driver chips in order to produce the desired brightness.
Above are described embodiments for transmitting video signals to a display panel and within a display unit. The present invention also includes embodiments for transmission of video signals using SAVT in other environments such as directly from a camera or other image sensor, from an SoC or other processor, and for receiving SAVT signals at an SAVT receiver that is not necessarily integrated with a display panel (as shown above), such as at an SoC, at a processor of a computer, or at an SAVT receiver that is not integrated within a legacy display panel. U.S. patent application Nos. 63/611,274 and 63/625,473 (HYFYPO17P2 and HYFYPO18P) incorporated by reference above disclose examples of such other environments, respectively within a mobile device and within a vehicle.
In this example there are multiple EM pathways; in general, there may be a single EM pathway or multiple EM pathways. Depending upon the implementation and design decisions, multiple outputs may increase performance but require more pathways. In order to have as few wires as possible from transmitter 1240, only a single pathway transporting a single EM signal 1270 may be used. SAVT transmitter 1240 may be implemented substantially as described above with respect to the transmitters already discussed.
Depending upon the embodiment discussed immediately below, analog RGGB video samples 1239a may be input, analog or digital RGB samples 1239b may be input, digital G samples 1239c may be input, or analog BGBG . . . RGRG samples 1239d may be input. If the samples are digital, then DACs 1260-1269 are used. In general, the transmitter can accept analog or digital video samples from any color space, not necessarily RGB. The samples may arrive serially (e.g., R, then G, then B) or in parallel (i.e., RGB as three separate signals). Using distributor 1241, we can reorder the samples as needed.
As mentioned, the inputs may vary depending upon the implementation. Input 1239d may originate as follows. An image sensor may output raw analog samples without using ADCs or performing "demosaicing" using interpolation. Thus, the image sensor output is a continuous serial stream of time-ordered analog video samples, each representative of a pixel in a row, from left to right, in row-major order (for example), frame after frame, so long as the image sensor is sensing. Of course, a different ordering may also be used. When Bayer filtering is used, the samples are output as a row of BGBG . . . followed by a row of RGRG . . . , often referred to as RGGB format, as each 2×2 pattern includes one each of RGGB. These rows of analog video samples 1239d are input into SAVT transmitter 1240 and transmitted as EM signals 1270-1279 to the SAVT receiver described below.
Input 1239c may originate as follows. Raw analog samples coming from an image sensor are converted to digital in ADCs, and then "demosaicing" is performed within an image signal processor (ISP), resulting in digital RGB samples per pixel. Only the green channel (i.e., one G sample per element of the array) from each set of RGB samples per pixel is selected and sent to become input 1239c. These rows of G digital video samples 1239c are input into SAVT transmitter 1240 and transmitted as EM signals 1270-1279 to the SAVT receiver described below.
Alternatively, as only the green channel will be sent, interpolation only need be performed at the R and B elements of the sensor in order to obtain their G sample; no interpolation is needed at the G elements because the G sample already exists and the R and B sample at those G elements are not needed, thus making interpolation simpler and quicker. As the green channel corresponds to the luminance (or “luma”) channel there will be no loss of perceived resolution, although any downstream display will show a monochrome image.
Input 1239a may originate as follows. We modify the readout from an image sensor and read at least two rows simultaneously. By way of example, the first two bottom rows of an image sensor are read out simultaneously which then outputs a serial stream of values such as BGRGBGRG . . . or GBGRGBGR . . . . The readout order is thus: first a blue value from the first row, then green and red values from the second row, followed by a green value from the first row, etc., resulting in a serial output BGRGBGRG. Or, an alternative readout order: first a green value from the second row, then blue and green values from the first row, followed by a red value from the second row, etc., resulting in serial output GBGRGBGR. Other readout orders may be used that intermix color values from two adjacent rows and the order of the pixel values may vary depending upon whether a particular row starts with a red, green or blue value.
Since two rows are read out at a time, every four values of those two rows (e.g., BG from the beginning of the first row and GR from the beginning of the second row, i.e., two Gs, an R and a B) are available to output serially, thus resulting in a serial pattern such as BGRG . . . or GBGR . . . as shown. After the first two rows are read out, the next two rows are read out, etc. Other similar outputs are possible where each grouping of four values includes two green values, a red value and a blue value. The image sensor may be read starting from any particular corner, may be read from top-to-bottom or from bottom-to-top, may be read by rows or by columns, or in other similar manners. Thus, the output from the video source is a series of values BGRGBGRG . . . or GBGRGBGR . . . or similar. "Demosaicing" may then occur in the analog domain in the SoC using this series of values, without the need to convert these values to digital or use any digital processing.
Such an ordering of color values facilitates interpolation in the analog domain. Other color spaces may be used in which reading out two or more rows at a time and intermixing the color values from different rows in the serial output also facilitates color interpolation in the analog domain. The video source including the image sensor then outputs a pattern such as BGRGBGRG . . . or GBGRGBGR . . . , shown and referred to as "RGGB . . . " 1239a.
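The two readout orders can be reproduced from a Bayer row pair as follows; this sketch is ours, and the exact pairing of elements within each group of four is our reading of the description above:

```python
# Intermix one BGBG... row (row1) with its paired RGRG... row (row2),
# four values at a time, as described in the text.
def interleave_bgrg(row1, row2):
    """Returns the serial order B, G, R, G, B, G, R, G, ..."""
    out = []
    for i in range(0, len(row1), 2):
        out += [row1[i], row2[i + 1], row2[i], row1[i + 1]]
    return out

def interleave_gbgr(row1, row2):
    """Returns the alternative serial order G, B, G, R, ..."""
    out = []
    for i in range(0, len(row1), 2):
        out += [row2[i + 1], row1[i], row1[i + 1], row2[i]]
    return out

row1 = ["B0", "G0", "B1", "G1"]   # first row:  BGBG...
row2 = ["R0", "g0", "R1", "g1"]   # second row: RGRG... (lowercase g = row-2 green)
assert interleave_bgrg(row1, row2) == ["B0", "g0", "R0", "G0",
                                       "B1", "g1", "R1", "G1"]
```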
Input 1239b (analog or digital RGB samples) may originate as shown in the accompanying figure.
As with the SAVT transmitter described above, collector controller 1330 sequences the loading of samples from the inputs 1270 . . . 1279, and controls the timing for unloading the samples for further processing. Since the input stream is continuous, the collector controller loads samples into one line buffer while the other line buffer's samples are transferred to the output for further processing.
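The double-buffered behavior can be sketched as a ping-pong pair of line buffers (our illustration; sizes and interfaces are placeholders):

```python
# Fill one line buffer from the continuous input stream while the other
# is drained toward the output stage, swapping roles at each line boundary.
class Collector:
    def __init__(self, line_len=1024):
        self.buffers = [[None] * line_len, [None] * line_len]
        self.fill = 0     # index of the buffer currently being filled
        self.pos = 0      # next write position within that buffer

    def load(self, sample):
        """Store one sample; returns a completed line when one is ready."""
        buf = self.buffers[self.fill]
        buf[self.pos] = sample
        self.pos += 1
        if self.pos == len(buf):          # line complete:
            self.fill ^= 1                # swap which buffer is filled
            self.pos = 0
            return self.buffers[self.fill ^ 1]  # the just-completed line
        return None
```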
Shown is an embodiment of the receiver suitable for use with the embodiments discussed above.
Although SAVT receiver 1300 only shows BG . . . RG . . . outputs, it may also receive and output samples input using the other embodiments discussed above.
As output 1360 consists of analog samples, an ADC 1362 (or multiple ADCs) may be used, if desired, to convert the samples to digital. Thus, each output vector may output its samples one at a time via analog-to-digital converter (ADC) 1362 in order to provide a continuous stream of digital samples 1364.
The present invention includes these additional embodiments.
This application claims priority to U.S. provisional patent application Nos. 63/500,341 (Docket No. HYFYP0015P2) filed May 5, 2023, entitled “ANALOG VIDEO TRANSPORT TO A DISPLAY PANEL AND SOURCE DRIVER INTEGRATION WITH A DISPLAY PANEL” and 63/447,241 (Docket No. HYFYP0015P) filed Feb. 21, 2023, entitled “ANALOG VIDEO TRANSPORT TO A DISPLAY PANEL” which are both hereby incorporated by reference. This application claims priority to U.S. provisional patent application Nos. 63/611,274 (Docket No. HYFYP0017P2) filed Dec. 18, 2023, entitled “VIDEO TRANSPORT WITHIN A MOBILE DEVICE” and 63/516,220 (Docket No. HYFYPO017P) filed Jul. 28, 2023, which are both hereby incorporated by reference. This application claims priority to U.S. provisional patent application No. 63/625,473 (Docket No. HYFYP0018P) filed Jan. 26, 2024, entitled “SIGNAL TRANSPORT WITHIN VEHICLES” which is hereby incorporated by reference. This application incorporates by reference U.S. patent application Ser. No. 17/900,570 (HYFYPO09), filed Aug. 31, 2022, U.S. patent application Ser. No. 18/098,612 (HYFYPO13), filed Jan. 18, 2023, now U.S. Pat. No. 11,769,468, and U.S. application Ser. No. 18/117,288 filed on Mar. 3, 2023 (Docket No. HYFYPO14), now U.S. Pat. No. 11,842,671. This application incorporates by reference U.S. patent application Ser. No. 18/442,447 (HYFYPO17), filed on an even date herewith.
Number | Date | Country
--- | --- | ---
63447241 | Feb 2023 | US
63500341 | May 2023 | US
63516220 | Jul 2023 | US
63611274 | Dec 2023 | US
63625473 | Jan 2024 | US