The present invention relates generally to video transport. More specifically, the present invention relates to transporting analog video samples within a display unit or to a display unit, for example.
Image sensors, display panels, and video processors are continually racing to achieve larger formats, greater color depth, higher frame rates, and higher resolutions. Local-site video transport suffers from performance-scaling bottlenecks that throttle throughput and compromise performance while consuming ever more cost and power. Eliminating these bottlenecks can provide advantages.
For instance, with increasing display resolution, the data rate of video information transferred from the video source to the display screen is increasing exponentially: from 3 Gbps a decade ago for full HD, to 160 Gbps for new 8K screens. Typically, a display having 4K resolution requires about 18 Gbps of bandwidth at 60 Hz, while at 120 Hz 36 Gbps are needed (divided across P physical channels). An 8K display requires 72 Gbps at 60 Hz and 144 Gbps at 120 Hz.
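By way of illustration, the bandwidth figures above can be reproduced with simple arithmetic. The following sketch assumes 24 bits per pixel, CTA-861-style blanking totals (4400×2250 for 4K, 8800×4500 for 8K) and a 10/8 line-coding overhead; none of these parameters are specified above, so they should be read as plausible assumptions rather than as part of the invention.

```python
def transport_rate_gbps(h_total, v_total, frame_rate, bits_per_pixel=24, encode=10 / 8):
    """Bits on the wire per second, including blanking and line-code overhead."""
    return h_total * v_total * frame_rate * bits_per_pixel * encode / 1e9

print(transport_rate_gbps(4400, 2250, 60))    # ~17.8 -> "about 18 Gbps" (4K60)
print(transport_rate_gbps(4400, 2250, 120))   # ~35.6 -> 36 Gbps (4K120)
print(transport_rate_gbps(8800, 4500, 60))    # ~71.3 -> 72 Gbps (8K60)
print(transport_rate_gbps(8800, 4500, 120))   # ~142.6 -> 144 Gbps (8K120)
```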
Until now, the data has been transferred digitally using variants of low-voltage differential signaling (LVDS) data transfer, using bit rates of 16 Gbps per signal pair and parallelizing the pairs to achieve the required total bit rate. With a wiring delay of 5 ns/m, the wavelength of every bit on the digital connection is 12.5 mm, which is close to the limit of this type of connection and requires extensive data synchronization to obtain usable data. This digital information then needs to be converted to the analog pixel information on the fly using ultra-fast digital-to-analog (D-to-A) conversion at the source drivers of the display or using ultra-parallel slow conversion.
Nowadays, D-to-A converters use 8 bits; soon, D-to-A conversion may need 10 or even 12 bits, and then it will become very difficult to convert accurately at a fast enough data rate. Thus, displays must do the D-to-A conversion in a very short amount of time, and the time available for the conversion is also becoming shorter, resulting in stabilization of the D-to-A conversion also being an issue.
Accordingly, new apparatuses and techniques are desirable to eliminate the need for D-to-A conversion at a source driver of a display, to increase bandwidth, to utilize an analog video signal within a display unit, and to transport video signals in other locations.
To achieve the foregoing, and in accordance with the purpose of the present invention, a sampled analog video transport (SAVT) technique is disclosed that addresses the above deficiencies in the prior art. The technique may also be referred to as “clocked-analog video transport” or CAVT.
It is realized that the requirements for bit-perfect communication (e.g., text, spreadsheets) between computing devices are very different from those for communicating video content to humans for viewing. Fundamentally, a video signal is a list of brightness values. It is realized that precisely maintaining fixed-bit-width (i.e., digital) brightness values is inefficient for video transport; because there is no requirement for bit-accurate reproduction of these brightness values, analog voltages offer greater resolution. The unnecessary requirement for bit-perfect video transmission imposes a costly burden, a "digital overhead." Therefore, the present invention proposes to transport video signals as analog signals rather than as digital signals.
Whereas conventional digital transport uses expensive, mixed-signal processes for high-speed digital circuits, embodiments of the present invention make use of fully depreciated analog processes for greater flexibility and lower production cost. Further, using an analog signal for data transfer between a display controller (for example) and source drivers of a display panel reduces complexity when compared to traditional transport between a signal source (via LVDS or Vx1 transmitter) and a source driver receiver having D-to-A converters.
In one embodiment, a transmitter is disclosed that processes incoming digital video samples, converts them to analog, and transports them to a display panel; also disclosed is a source driver of a display panel that receives the analog samples and drives them on to the display panel. An analog signal is used to transmit the digital video data received from a video source (or storage device) to a video sink for display. The analog signal may originate at a transmitter of a computer (or other processor) and be delivered to source drivers of a display unit for display upon a display panel, thus originating outside of the display unit, or the analog signal may be generated at a transmitter within the display unit itself.
In an alternative embodiment, portions of the source driver, or the entire source driver, may be integrated with the glass substrate of the display panel given the necessary analog speed and accuracies. Prior art source drivers have been mounted at the edge of the display panel (but not integrated with it) because of the complexity of high-speed digital circuits, as well as the large area required for D-to-A conversion. The present invention is able to integrate source drivers with the glass itself because no D-to-A converters are required in the source drivers, no decoders are needed, and because of the lower frequency sample transfer of an SAVT signal; e.g., the SAVT video signal arrives at the source drivers at a frequency of one-tenth the data rate of a 3 GHz digital video signal.
The invention may be used on any active-matrix display substrate. Best suited are substrates with high mobility (e.g., low-temperature poly-silicon (LTPS) or oxide (IGZO) TFTs). The resulting display panel can be connected to the GPU by only an arbitrary length of signal cable and a power supply when the entire source driver is integrated. There is no need for further electronics connected to the glass, providing great opportunity for further edge width reduction and module thinning.
The invention is especially applicable to displays used in computer systems, televisions, monitors, game displays, home theater displays, retail signage, outdoor signage, etc. Embodiments of the invention are also applicable to video transport within vehicles such as within automobiles, trains, airplanes, ships, etc., and applies not only to video transport from a transmitter to displays or monitors of the vehicle, but also to video transport within such a display or monitor. The invention is also applicable to video transport to or within a mobile device such as a telephone. In a particular embodiment, the invention is useful within a display unit where it is used to transmit and receive video signals. By way of example, a transmitter of the invention may be used to implement the transmitter as described in U.S. Pat. No. 11,769,468 (HYFYP013), and a receiver of the invention may be used to implement the receiver as described in U.S. application Ser. No. 17/900,570 (HYFYP009).
The invention, together with further advantages thereof, may best be understood by reference to the following description taken in conjunction with the accompanying drawings in which:
This application incorporates by reference U.S. patent application Ser. No. 17/887,849 (docket No. HYFYP006), filed Aug. 15, 2022, U.S. patent application Ser. No. 17/946,479 (docket No. HYFYP010), filed Sep. 16, 2022, U.S. patent application Ser. No. 18/095,801 (HYFYP011), filed Jan. 11, 2023, U.S. patent application Ser. No. 18/098,612 (HYFYP013), filed Jan. 18, 2023, now U.S. Pat. No. 11,769,468, and U.S. application Ser. No. 18/117,288 filed on Mar. 3, 2023 (Docket No. HYFYP014), now U.S. Pat. No. 11,842,671.
It is realized that the wiring loom in a display unit conforms closely to its design values, such that the resilience afforded by the use of spreading codes (to encode and decode video samples for transport within the display unit, such as is described in U.S. Pat. No. 10,158,396) may be outweighed by the circuit overhead of decoding at the source drivers. In particular, the use of spreading codes affords a degree of resilience against thermal noise in a transmitter's DAC and in the sample and hold amplifiers of a source driver. Nevertheless, it is realized that such thermal noise is stochastic and therefore should be imperceptible. Accordingly, in some applications spreading codes are not strictly necessary, obviating the need for encoding and then decoding in the source drivers. Accordingly, it is proposed to transmit video data as analog signals from a transmitter to any number of source drivers of a display panel.
It is further realized that digitization of a video signal typically takes place at the signal source of the system (often at a GPU) and then the digital signal is transferred, usually using a combination of high-performance wiring systems, to the display panel source drivers, where the digital signal is returned to an analog signal again, to be loaded onto the display pixels. So, the only purpose of the digitization is data transfer from video source to display pixel. Therefore, we realize that it is more beneficial to avoid digitization altogether (to the extent possible), and to directly transfer the analog data from the video source (or from a suitable transmitter) to the display source drivers. Such an analog signal has high accuracy (subject to circuit imperfections) and is a continuous value, meaning that its possible resolution in value is always higher than can be represented by an arbitrarily long digital representation. Because each analog sample conveys an entire brightness value rather than a series of individual bits, the sample rate is at least a factor of ten lower than in the case of digital transfer, leaving further bandwidth for expansion.
Further, it can be easier to perform the D-to-A conversion at a point where less power is needed than at the end point where the display panel is driven. Thus, instead of transporting a digital signal from the video source (or from an SoC or timing controller) to the location where the analog signal needs to be generated, we convert to analog near the SoC or timing controller within a transmitter and then transport the analog signal to the display panel at a much lower sample rate than one would normally have with digitization. That means that instead of having to send gigabits per second over a number of lines, we send only a few hundred megasamples per second in the case of the analog signal, thus reducing the bandwidth of the channel that has to be used. The rate is approximately one-tenth of the digital rate required for the same number of physical communication paths. Further, with prior art digital transport, every bit occupies just about 1.25 cm on the wire (considering that propagation in cable is approximately 0.2 m/ns, 16 Gbps means 1/16 ns per bit, so one bit spans 0.2/16 meter), whereas transporting analog data yields roughly a tenfold increase in the time available per symbol, meaning extra bandwidth available. And further, a bit in digital data must be well defined; this definition is fairly sensitive to errors and noise, and one needs to be able to detect the high point and the low point very accurately.
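The wire-length arithmetic above can be checked with a short sketch, assuming the stated 0.2 m/ns propagation speed; the 663.552 MHz SAVT rate used for comparison is taken from the embodiment described later, so the exact ratio to a 16 Gbps digital pair depends on the rates chosen.

```python
PROPAGATION_M_PER_NS = 0.2  # 5 ns/m wiring delay, as stated above

def symbol_length_cm(symbol_rate_hz):
    """Physical length one symbol occupies on the wire, in centimeters."""
    symbol_time_ns = 1e9 / symbol_rate_hz
    return PROPAGATION_M_PER_NS * symbol_time_ns * 100

print(symbol_length_cm(16e9))       # 1.25 cm per digital bit at 16 Gbps
print(symbol_length_cm(663.552e6))  # ~30 cm per analog sample at Fsavt
```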
The invention is especially applicable to high-resolution, high-dynamic range display units used in computer systems, televisions, monitors, machine vision, automotive displays, aeronautical displays, virtual or augmented reality displays, mobile telephones, billboards, scoreboards, etc.
Shown is a video signal 110 being delivered to the display unit using an HDMI interface (an LVDS, HDBaseT, MIPI, IP video, etc., interface may also be used). Shown generally are the system-on-chip (SoC) 120 and the timing controller (TCON) 130 which deliver digital video samples from the video signal to the transmitter 140. SoC 120 performs functions such as display control, reverse compression and certain digital signal processing, and outputs the video signal to the TCON. Typically, LVDS or V-by-One will be used to deliver the digital video data 122 from the SoC to the TCON. If LVDS pairs are used (for example), the number of pairs is implementation specific and depends upon the data rate per pair as well as upon panel resolution, frame rate, bandwidth, etc. Furthermore, a variety of physical layers may be used to transport the video data from SoC 120 to TCON 130, including a serializer-deserializer or SerDes layer, as is known in the art; if transmitter 140 is integrated with TCON 130, then this physical layer delivers the video data from SoC 120 to the integrated TCON and transmitter as shown in
It is also possible that some or all digital or image processing is performed in the SoC, in which case there is no image processing performed after the line buffer and before the DAC in
Various embodiments are possible: a discrete implementation in which the transmitter 140 is embedded in a mixed-signal integrated circuit and the TCON and SoC are discrete components; a mixed implementation in which the transmitter 140 is integrated with the TCON in a single IC and the SoC is discrete; and a fully-integrated implementation in which as many functions as possible are integrated in a custom mixed-signal integrated circuit in which the transmitter is integrated with the TCON and the SoC.
In this example of
There is a significant advantage to using analog signals for transport within a display unit even if the signal input to the display unit is a digital video signal. In prior art display units, one decompresses the HDMI signal and then one has the full-fledged, full-bit-rate digital data that must then be transferred from the receiving point of the display unit to all source drivers within the display unit. Those connections can be quite long for a 65- or 80-inch display; one must transfer that digital data from one position inside of the unit where the input is to another position (perhaps on the other side) where the final source driver is. Therefore, there is an advantage to converting the digital signal to analog signals internally and then sending those analog signals to the source drivers, one such advantage being the ability to use lower frequency signals.
Also shown within
Typically, a transmitter 140 and a receiver (in this case, source drivers 186) are connected by a transmission medium. In various embodiments, the transmission medium can be a cable (such as HDMI, flat cable, fiber optic cable, metallic cable, non-metallic carbon-track flex cables, metallic traces, etc.), or can be wireless. There may be numerous EM pathways of the transmission medium, one pathway per EM signal 192. The transmitter includes a distributor that distributes the incoming video samples to the EM pathways. The number of pathways may widely range from one to any number more than one. In this example, the transmission medium will be a combination of cable, traces on PCBs, IC internal connections, and other mediums used by those of skill in the art.
During operation, a stream of time-ordered digital video samples 110 containing color values and pixel-related information is received from a video source at display unit 100 and delivered to the transmitter 140 via the SoC and TCON. The number and content of the input video samples received from the video source depends upon the color space in operation at the source (and the samples may be in black and white). Regardless of which color space is used, each video sample is representative of a sensed or measured amount of light in the designated color space.
The signal from the SoC (typically an LVDS digital signal, but others may be used) delivers the pixel values in row-major order through successive video frames. More than one pixel value may arrive at a time (e.g., two, four, etc.); they are serial in the sense that groups of pixels are transmitted progressively, from one side of the line to the other. A processing unit such as an unpacker of a timing controller may be used to unpack (or expose) these serial pixel values into parallel RGB values, for example. Also, it should be understood that the exposed color information for each set of samples can be any color information (e.g., Y, C, Cr, Cb, etc.) and is not limited to RGB. Use of color information other than RGB sub-pixels may require additional processing before the source drivers can drive the columns (which natively take sub-pixel intensity values). The number of output sample values S in each set of pixel samples is determined by the color space applied by the video source. With RGB, S=3, and with YCbCr 4:2:2, S=2. In other situations, the number of sample values S in each set of samples can be just one or more than three.
The unpacker may also unpack from the digital signal framing information in the form of framing flags that come along with the pixel values. Framing flags indicate the location of pixels in a particular video frame; they mark the start of a line, the end of the line, the active video section, the horizontal and vertical blanking sections, etc., as is known in the art. Framing flags are used to tell the gate drivers which line is currently sent to the display panel and will also control the timing of gate drivers' action. Framing flags may be included within gate driver control signals 190 as is known in the art. In general, symbol and sampling synchronization occurs before extracting framing information such as Hsync and Vsync (and other line control information).
TCON 130 provides a reference clock 170 to each of source drivers 186, i.e., each source driver chip (e.g., a Hyphy HY1002 chip) has a clock input that is provided by the TCON (whether it is an FPGA or IC). Clock 170 is shown input only to the first source driver for clarity, but each source driver receives the reference clock. This reference clock may be relatively low frequency, around 10.5 MHz, for example. More detail on the reference clock is provided in
Controller 630 stores a line of pixels for the display into one of the line buffers and then that line is output (into the DACs or into the other line buffer as explained below) when the line is complete. Typically, pixels for a line of the display panel arrive serially from the SoC, but as the gate drivers will enable a line of pixels to be displayed at the same time, the source drivers will need pixels for an entire line to be ready at the same time. Thus, each line buffer provides storage for a line of pixels. Furthermore, at times only half of a line of pixels is enabled on the display panel by the gate drivers, thus a line is stored in a line buffer, and then extracted half-by-half to be transmitted, while a new line is being stored.
In general, as a stream of input digital video samples is received within the transmitter 140 in row-major order, the input digital video samples are repeatedly (1) distributed to one of the EM pathways according to a predetermined permutation (in this example, row major order, i.e., the identity permutation) (2) converted into analog, and (3) sent as an analog EM signal over the transmission medium, one EM signal per EM pathway. At each source driver 186 the incoming analog EM signal is received at an input terminal and each analog sample in turn is distributed via sampling circuitry to a storage cell of a particular column driver using the inverse of the predetermined permutation used in the transmitter. Once all samples for that source driver are in place they are driven onto the display panel. As a result, the original stream of time-ordered video samples containing color and pixel-related information is conveyed from video source to video sink. The inverse permutation effectively stores the incoming samples as a row in the storage array (for display on the panel) in the same order that the row of samples was received at the distributor. The samples may arrive serially, e.g., R then G then B, or in parallel i.e., RGB in parallel as three separate signals. Using distributor 240, we can reorder the samples as needed.
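A minimal sketch of this distribute/inverse-permute round trip follows; the permutation, vector size and sample names are illustrative only, not taken from the figures.

```python
N = 8  # samples per pathway (1024 in the example described herein)

def distribute(row, perm):
    """Transmitter side: reorder one row of samples for serial transmission."""
    return [row[p] for p in perm]

def collect(received, perm):
    """Receiver side: apply the inverse permutation to restore row order."""
    row = [None] * len(received)
    for i, p in enumerate(perm):
        row[p] = received[i]
    return row

perm = list(range(N))             # identity permutation = row-major order
row = [f"s{i}" for i in range(N)]
assert collect(distribute(row, perm), perm) == row  # round trip restores order
```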
In one embodiment, four control signals for every 60 video samples are inserted into the stream of samples in the distributor to be sent to the source driver. As shown, each input vector 280 in the line buffer includes a total of 1024 values, including the four control signals per every 60 video samples. The control signals may be inserted into various positions in the input vector; by way of example, "samples" 960-1023 of the input vectors 280-288 may actually be control signals. Any arbitrary but finite number of control signals in each input vector may be used. The more control signals that are transmitted, the higher the data transmission rate needed. Ideally, the number of control signals is limited to what fits into the blanking periods so that there can be a correspondence between transmit rate and displayed lines (thus reducing the amount of storage required, or any additional re-synchronization). And further, the control signals may be inserted into the stream of samples at the distributor, or insertion of the control signals may be performed in another location.
Distributor 240 is arranged to receive the pixel color information (e.g., R, G, and B values) exposed in the input sets of samples. The distributor 240 takes the exposed color information and writes multiple input vectors 280-288 into the first line buffer 241 (one input vector per EM pathway) according to the predefined permutation. Once line buffer 241 is full then each input vector 280-288 is read out via its corresponding output port 281-289 into its corresponding DAC or optionally into its corresponding image processor 250-259. As these input vectors from line buffer 241 are being read out (or once line buffer 241 is full) then the next line of RGB input samples are written into input vectors 290-298 in the second line buffer 242. Thus, once the second line buffer 242 is full (and the DACs or image processors have finished reading input vectors from the first line buffer 241) the DACs or image processors begin reading samples from the second line buffer 242 via their output ports 291-299. This writing to, and reading from, the first and second line buffers continues in this “ping-pong” fashion as long as input samples arrive at the transmitter. Output ports 281-289 and 291-299 may possibly be bit-serial communications, but are more likely to be sequential word-wide samples or even parallel word-wide samples.
In a preferred embodiment for writing into and reading out from the line buffers, samples are only written into one of the line buffers, e.g., into buffer 241, as they arrive at the transmitter 140. Once that buffer is full, all samples are written in parallel from buffer 241 into line buffer 242. Samples are then output into the DACs (or image processors) only from buffer 242. The process is continuous: buffer 241 is filled as buffer 242 outputs its samples; once buffer 242 is depleted, all samples of buffer 241 are written into buffer 242, and so on. The samples can be written from buffer 241 into buffer 242 during the horizontal blanking period.
The number of line buffers required depends on the relative time required to load the buffers and then to unload them. There is a continuous stream of data coming in on the RGB inputs. If it takes time T to load all the samples into a buffer and the same time T to unload them, we use two buffers (so that we can unload one while the other is being loaded). If the time taken to unload becomes shorter or longer, the buffer length can always be adjusted (i.e., adjust the number of input vectors or adjust N of each input vector) so that the number of line buffers required is always two. Nevertheless, more than two buffers may be used if desired and either embodiment described above may be used for writing into and reading from the buffers.
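The following sketch models the two-buffer "ping-pong" behavior described in this discussion, with one buffer filling from the pixel stream while the other drains into the DACs; the buffer length and class name are illustrative.

```python
class PingPongLineBuffer:
    """Two line buffers that swap fill/drain roles when a line completes."""

    def __init__(self, line_length):
        self.buffers = [[None] * line_length, [None] * line_length]
        self.fill, self.drain = 0, 1   # indices of the filling/draining buffer
        self.write_pos = 0

    def write_sample(self, sample):
        """Store one arriving sample; swap buffers when the line is full."""
        self.buffers[self.fill][self.write_pos] = sample
        self.write_pos += 1
        if self.write_pos == len(self.buffers[self.fill]):
            self.fill, self.drain = self.drain, self.fill  # swap roles
            self.write_pos = 0

    def read_line(self):
        """Hand the completed line to the DACs or image processors."""
        return self.buffers[self.drain]
```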
Distributor controller 230 controls the operation and timing of the line buffers. In particular, the controller is responsible for defining the permutation used and the number of samples N when building the four input vectors. In this example, N=1024. Controller 230 may also include a permutation controller that controls distribution of the RGB samples to locations in the input vectors.
The controller is also responsible for coordinating the clock domain crossing from a first clock frequency to a second clock frequency. In one particular embodiment, the samples are clocked in at a frequency of FPIXEL and the samples are clocked out serially from each input vector at a sampled analog video transport (SAVT) frequency of Fsavt. It is also possible to clock in two samples at a time instead of one each, or three at a time, etc. The analog samples are transmitted along an electromagnetic pathway of a transmission medium as an analog EM signal 270-279 to the SAVT receiver.
In one particular embodiment, each line buffer 241 or 242 has three input ports for the incoming RGB samples and the samples are clocked in at a frequency of FPIXEL; each line buffer also has 24 output ports, e.g., 281 or 291 (in the case where there are 24 EM signals, each being sent to one of 24 source drivers) and the samples are clocked out from each input vector at a sampled analog video transport (SAVT) frequency of Fsavt. It is also possible to clock in two R, two G and two B samples at a time instead of one each, or three at a time, etc. In one embodiment, Fsavt=663.552 MHz for 24 channels.
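As a consistency check on the Fsavt figure, the arithmetic below assumes an 8K144 panel, 24 EM pathways, 64 control signals per line per driver, and 4500 total line periods per frame (4320 active lines plus assumed blanking); only the final 663.552 MHz value is given above.

```python
subpixels_per_line = 7680 * 3              # 8K width, RGB sub-pixels
pathways = 24                              # one EM signal per source driver
video_per_driver = subpixels_per_line // pathways   # 960 video samples
samples_per_line = video_per_driver + 64   # plus 64 control signals = 1024
lines_per_frame = 4500                     # 4320 active + assumed blanking lines
frame_rate = 144                           # 8K144 example

fsavt_hz = samples_per_line * lines_per_frame * frame_rate
print(fsavt_hz)    # 663552000 -> Fsavt = 663.552 MHz
```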
For purposes of explanation, one possible permutation is one in which each of the input vectors includes N samples of color information and control signals. The exposed RGB samples of the sets of samples in this example are assigned to input vectors from left to right. In other words, the "R", "G" and "B" values of the first set of samples, the "R", "G" and "B" values of the next set of samples, etc. are assigned to input vector 280 in that order (i.e., RGBRGB, etc.). Once input vector 280 has been assigned its N samples and control signals, the above process is repeated for the other input vectors in order until each of the input vectors has N values. The number of values N per input vector may vary widely. As shown in this example, this predetermined permutation preserves the row-major order of the incoming samples; that is, the first input vector 280 includes sample0 through sample1023 of the first row in that order and the succeeding input vectors continue that permutation (including control signals). Thus, distributor controller 230 performs a permutation by assigning the incoming samples to particular addresses within the line buffer. It should also be understood that any permutation scheme may be used by the distributor 240, and, whichever permutation scheme is used by the transmitter, its inverse will be used by control logic in each source driver in order to distribute the incoming samples to the column drivers. In the situation where only one electromagnetic pathway is used and where the video samples are received at the SAVT transmitter, the distributor writes into one input vector in each line buffer.
Image processors 250-259 are shown after the line buffers and before the DACs, although it is preferable to have an image processor (or processors) before the line buffers thus reducing the number needed, i.e., as the RGB samples arrive image processing is performed and then the samples are distributed into the line buffers. Shown are pixels arriving one at a time; if pixels arrive one at a time then one image processor is used, if two at a time then two are used, and so on. Certain processing such as gain management may be performed after the line buffers even if the image processors are located before the line buffers.
Typically, image processing: a) applies gamma correction to each sample; b) level shifts each gamma-corrected sample, mapping the range (0 . . . 255) to (−128 . . . 127), in order to remove the DC component from the signal; c) applies the path-specific amplifier variance correction to each gamma-corrected, level-shifted sample; d) performs gain compensation for each sample; e) performs offset adjustment for each sample; and f) performs demura correction for each sample. Other corrections and adjustments may also be made depending upon the target display panel. An individual image processor 250-259 may process each output stream of samples (e.g., 281 and 291) or a single, monolithic image processor may handle all outputs (e.g., 281 and 291, 285 and 295, etc.) at once. In order to avoid performing image processing on the control signals in the line buffer, the control signal timing and positions in the buffers are known so that logic can determine that image processing of the control signals should not be done. As mentioned above, image processing need not occur within transmitter 140 but may occur in SoC 120, in the TCON, or in another location such as in the receiver. For example, gamma correction is traditionally done in the receiver (source driver), but demura and more complex image processing are not feasible in a source driver.
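An illustrative per-sample version of steps (a) through (f) follows; the gamma table, gain, offset and demura values are placeholders standing in for panel-specific calibration data, and steps (c) through (e) are collapsed into one linear correction for brevity.

```python
def process_sample(code, gamma_lut, gain, offset, demura_offset):
    """Apply steps (a)-(f) to one 8-bit sample; returns a signed value."""
    v = gamma_lut[code]            # (a) gamma correction, result in 0..255
    v -= 128                       # (b) level shift: (0..255) -> (-128..127)
    v = v * gain + offset          # (c)-(e) amplifier variance/gain/offset (combined)
    v += demura_offset             # (f) per-sub-pixel demura correction
    return v

# Placeholder gamma curve; a real table comes from panel characterization.
gamma_lut = [round(255 * (i / 255) ** 2.2) for i in range(256)]
print(process_sample(200, gamma_lut, gain=1.01, offset=0.5, demura_offset=-1.2))
```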
The processed digital samples of each input vector are input serially into one of DACs 260-269 (whether image processing happens before or after the line buffers); each DAC converts these modified digital samples at a frequency of Fsavt and transmits the modified analog samples along an electromagnetic pathway of a transmission medium as an analog EM signal 270-279 to a source driver of the display unit. Each DAC converts its received sample from the digital domain into a single analog level, which may be transmitted as a differential pair of voltage signals having a magnitude that is proportional to its incoming digital value, the analog levels being sent serially as they are output from each DAC. The output of the DACs may range from a maximum voltage to a minimum voltage, the range being about 1 volt to 4 volts peak-to-peak (Vpp); about 2 volts Vpp works well. In one particular embodiment, we represent signals in the range of +/−500 mV, or a 1 V dynamic range (in reality the dynamic range at the input is about 30% higher, or about 1.3 V).
Although two line buffers are shown within distributor 240 (which is preferable), it is possible to use a single line buffer and as samples from a particular input vector are being read into its image processor (or its DAC) the distributor back fills that input vector with incoming samples such that there is no pause in the serial delivery of samples from the line buffer to the DAC or image processor. Further, and also less desirable, it is also possible to place each DAC (or a number of DACs per EM pathway) after the distributor and before the image processors (if any), thus performing image processing on analog samples.
The samples of input vectors 380-388 are then output from line buffer 245 into image processors 250-259 via output ports 381-389. As in the distributor of
In another embodiment (not shown), the predetermined permutation used by distributor 240 orders the samples by color for each input vector, i.e., send all 320 red sub-pixels, followed by all 320 green sub-pixels, followed by all 320 blue sub-pixels, followed by all 64 sub-band signals. Thus, using the first input vector as an example, sample positions 0-319 will contain the red sub-pixels, sample positions 320-639 contain the green sub-pixels, sample positions 640-959 contain the blue sub-pixels, and positions 960-1023 contain the 64 sub-band signals for a particular row. The samples are then sent out to the image processor (all red, all green, all blue, all sub-band). The other input vectors use the same permutation of grouping the samples by color. Of course, the color groupings of sub-pixels in an input vector may be in any order (not necessarily red, green, blue) and the 64 sub-band signals may be inserted anywhere in the groupings. The reason for this ordering is to exploit a heuristic of natural images that individual color components tend not to exhibit high spatial frequency, thereby reducing potential electromagnetic interference signals generated by the system when the samples are grouped in this fashion. In fact, substantial EMI is reduced as long as substantially all of the sub-pixels of a particular color are grouped together. Further, this ordering not only allows for slower S/H amplifiers in the source driver to be used but also for a lower bandwidth requirement for the transmitter to receiver communication channels. Control logic in each source driver will then use the inverse of this permutation in order to direct the incoming samples to the correct column driver.
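The color-grouped ordering can be sketched as follows; only the positions matter, and the sample names are placeholders.

```python
def build_color_grouped_vector(reds, greens, blues, subband):
    """1024-sample vector: 320 R, 320 G, 320 B, then 64 sub-band values."""
    assert len(reds) == len(greens) == len(blues) == 320 and len(subband) == 64
    return reds + greens + blues + subband  # positions 0-319, 320-639, 640-959, 960-1023

vector = build_color_grouped_vector(
    [f"R{i}" for i in range(320)],
    [f"G{i}" for i in range(320)],
    [f"B{i}" for i in range(320)],
    [f"ctl{i}" for i in range(64)],
)
assert vector[0] == "R0" and vector[320] == "G0" and vector[960] == "ctl0"
```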
Input into source driver 400 at input terminal 410 is one of the EM signals from transmitter 140. In this example, terminal 410 serially receives 1,024 analog values at a time which are then stored into either the A or B row of storage arrays 434 and 436 via S/H amplifiers 420-429. The analog video samples arrive in their natural order according to the predetermined permutation shown in the example of
The source driver 400 consists of 16 interleaved sample/hold (S/H) input amplifiers 420-429 that sample the input 410 at Fsavt/16. There are 16 blocks (430 being the first block) of 60 video signals and 4 control signals. Each of the S/H amplifiers 420-429 samples a particular analog sample in turn and stores it into one of the 64 storage arrays 434 or 436, 60 of which directly feed the column drivers 440. Because input amplifiers 420-429 are interleaved they may be run 16 times slower than the input signal, each one being phase shifted by one SAVT interval. As shown, S/H amplifier #0 420 drives columns 0, 16, 32, etc., S/H amplifier #1 421 drives columns 1, 17, 33, etc., and so on. Therefore, each S/H amplifier output spans the range of all 960 columns (an amplifier output every 16 columns).
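The interleave can be summarized in a few lines: serial sample n is captured by amplifier n mod 16, so each amplifier runs at Fsavt/16 and drives every sixteenth column. This is a sketch of the mapping only; control samples are omitted.

```python
N_AMPS = 16  # interleaved S/H amplifiers, each running at Fsavt/16

def amplifier_for_sample(n):
    """Serial sample n is captured by amplifier n mod 16."""
    return n % N_AMPS

def columns_driven(amp, n_columns=960):
    """Amplifier amp drives every 16th column: amp, amp+16, amp+32, ..."""
    return list(range(amp, n_columns, N_AMPS))

assert amplifier_for_sample(17) == 1
assert columns_driven(0)[:3] == [0, 16, 32]   # matches S/H amplifier #0 above
assert columns_driven(1)[:3] == [1, 17, 33]   # matches S/H amplifier #1 above
```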
In one embodiment, each of these storage elements in storage array A or B is a storage cell, such as a switched capacitor. Other terms for the storage cell are “sampler capacitor” or “analog latch.” There are also 960 high-voltage column drivers 440 which each drive a column 450 (via output pins) providing the voltage that the display panel 480 requires. As shown, there are 16 blocks of 60 video plus four synchronization signals 460 (such as Hsync, Vsync, CTRL) per block.
Once stored (in row A, for example) these 960 samples are driven to each output column 450 via column drivers 440 while at the same time the next set of 960 analog samples are being stored into the other row (B, for example). Thus, while one set of incoming 960 samples are being driven to the columns from one of the A or B rows, the next set of 960 samples are being stored in the other row. In one particular implementation, each analog level is a differential signal arriving at an S/H amplifier 420 that swings between about +0.5 and −0.5 V, and has a maximum swing of about 18 V around a mid-range voltage in each single-ended column driver, thus requiring amplification. Note that in addition to amplification, at some point the differential signal (−full scale represents dark and +full scale represents bright) is converted to single-ended and drive polarity is applied so that dark sub-pixels are at, e.g., 9V and bright sub-pixels are either full-scale positive (e.g., 18V) or minimum voltage (e.g., 1V) depending upon the polarity setting.
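A simple linear model of the differential-to-single-ended conversion with polarity follows, using the example voltages above (9 V mid-range for dark, 18 V full-scale positive, 1 V minimum); the actual transfer curve is panel-specific, so this is illustrative only.

```python
def column_voltage(diff_v, positive_polarity, v_mid=9.0, v_max=18.0, v_min=1.0):
    """Map a differential sample in [-0.5, +0.5] V to a single-ended column drive."""
    brightness = diff_v + 0.5                  # 0.0 = dark, 1.0 = bright
    if positive_polarity:
        return v_mid + brightness * (v_max - v_mid)
    return v_mid - brightness * (v_mid - v_min)

assert column_voltage(-0.5, True) == 9.0    # dark lands at mid-range
assert column_voltage(+0.5, True) == 18.0   # bright, positive polarity
assert column_voltage(+0.5, False) == 1.0   # bright, negative polarity
```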
Control logic 470 implements the inverse of the predetermined permutation used in the transmitter 140 and controls the timing of the S/H amplifiers (when each samples a particular incoming analog sample) via control lines 471, controls the timing of the storage elements of rows A and B (when each row latches a particular analog sample) via control lines 472, and controls the timing of when each column driver 440 drives each column via control lines 473. For example, if the transmitter uses the predetermined permutation described above in which particular sub-pixel colors are grouped together in an input vector and then transmits those groups within an EM signal, then control logic 470 will use the inverse of this predetermined permutation in order to route incoming samples to the correct column driver. At the very least, control logic 470 is aware of how distributor controller 230 has placed the incoming samples into an input vector and uses this a priori knowledge in order to direct the incoming samples to the correct column driver so that the samples are displayed as they appeared in the original source image.
Note that the number of S/H amplifiers 420-429 is a tradeoff between number and quality. The more amplifiers that are added, the slower they run, the smaller and more noisy they become, and the smaller load each one drives. The load presented to the input terminal 410, however, grows with the number of S/H amplifiers, which will impact the quality of the transfer. Therefore, it is a design decision as to how many input amplifiers to use. It is possible to vary the intervals of each clock period slightly in order to address any RFI/EMI emissions issues. The inputs to the SHA amplifiers only have a +/−250 mV swing around their common mode voltage (each of positive and negative inputs) leading to a +/−500 mV signal (1 V dynamic range). This is a similar voltage swing to conventional digital signaling such as CEDS or LVDS. The clock modulation may be done to reduce the RFI/EMI emissions in both cases, although this modulation eats into the sampling window and is not preferred. In addition, in order to optimize the performance of the source driver (to counteract any process variations in the S/H amplifiers as implemented), a low-frequency feedback network may be added off-chip in order to characterize the gain and offset of every amplifier of the source driver, although this technique is not preferred due to area and performance constraints.
An alternative method to optimize the performance of the source driver outputs is to utilize existing compensation techniques of the display unit itself: modern OLED (and micro-LED) manufacturing techniques characterize the response of every sub-pixel in the array and pre-compensate for the individual offsets from a table of manufacturing data stored in the TCON and used when generating samples. Thus, based upon the physics of the entire display unit (including transmitter, amplifiers, source drivers, each pixel, etc.) each sub-pixel may have a different characteristic response, i.e., it might be too bright or too dark. This table includes an individual offset for each characteristic response.
Note that in the source driver architecture 400 or 500, a predetermined one of the interleaved sampling amplifiers 420-429 or 520-529 stores pixel voltages into the switched capacitors that are then amplified into a given column. Thus, every column is driven through the same amplifiers on each row. Any linear errors in the amplifiers as manufactured, such as gain errors, will be overlaid as a regular pattern onto any other errors measured for the individual sub-pixels along the column via the existing compensation techniques. Therefore, these existing OLED error compensation techniques will compensate also for all linear errors in the proposed source driver's amplifiers. This observation suggests that it may be possible to relax the design requirements (for example with respect to gain accuracy) and thereby enable lower-cost implementations. In one particular preferred embodiment, there are three amplifier stages and the amplifiers include common-mode feedback amplifiers.
In this embodiment each S/H amplifier 520-529 drives 64 neighboring locations (60 columns plus four sub-band values), thus reducing the wiring complexity in the source driver, reducing the physical distance over which each of the amplifiers 520-529 must drive, and also making demura correction easier. For example, a first block 530 of 60 video and four sub-band signals is driven by amplifier 520, block 531 is driven by amplifier 521, and block 539 is driven by amplifier 529. Because this configuration also causes the gain error from any given input sampling amplifier to manifest in 60 neighboring columns, it facilitates conventional high-MTF Mura compensation solutions, making the errors easier for the demura system to detect.
In order to implement this embodiment, as illustrated in
Above,
Although a desirable architecture for certain applications, when the control signals are spread across all S/H amplifiers (as in
Even though grouping by color has some advantages, and although it is a desirable architecture for certain applications, the video data must be padded because 960 sub-pixels divided among 15 amplifiers and 3 colors does not yield an integer number of samples per color per amplifier. The additional overhead for padding means that 66 samples per amplifier are sent per line instead of 64. This means that the transmission frequency needs to be increased by a factor of 66/64, which partially defeats the purpose of reducing the transmission bandwidth by grouping colors. And driving across 320 columns is not as desirable as driving only 64 columns.
This HY1002 chip minimizes SAVT bandwidth requirements and thus uses the permutation shown whereby all sub-pixels of each color are transmitted as a group, with a blanked transition band between groups (i.e., a band of blanking transition signals) in order to lower the bandwidth required between groups. SHA amplifier 0 is the control channel 818 showing control signals 0 to 64, i.e., 65 samples are transmitted per line. As shown, the red sub-pixel indices extend from 0 to 957, the green from 1 to 958, and the blue from 2 to 959. Samples 815 are the red blanking transition signals (tr0 . . . tr4), samples 816 are the green blanking transition signals (tg0 . . . tg4), and samples 817 are the blue blanking transition signals (tb0 . . . tb4). These bands 815, 816 and 817 provide a blanked transition between the colors.
In view of the above, realizing that bandwidth limitations may not be critical in certain applications, that the control information is effectively random, that padding can be undesirable, and that grouping control signals on one channel is advantageous, another architecture is proposed.
Thus, 15 interleaved S/H amplifiers receive the incoming pixel data and each drives 64 columns which are adjacent, i.e., 64 video tracks, thereby minimizing the span of columns that are driven by each amplifier. This architecture provides 15 blocks of 64 video samples plus one sub-band channel (control signals) of 64 bits per display line (per source driver). For example, amplifier 0 drives columns 0-63, the second amplifier drives columns 64-127, etc., the 15th amplifier drives columns 896-959 and amplifier 826 drives the control signals. Having all control signals on one channel means no difference in amplitude, delays or other from one signal to the next (if they were on different channels). It is also possible that the control signals arrive on channel zero (i.e., amplifier 0) instead of amplifier 15; that is advantageous in that the control information arrives earlier than the pixel data. Another advantage of this architecture is that control signal extraction needs to look at only one de-interleaving amplifier output rather than be distributed across all amplifiers, simplifying synchronization.
In this figure there are 15 video amplifiers, each driving 64 sub-pixels, for 960 sub-pixels per chip. There is one channel devoted to control, carrying 64 symbols per line (per source driver). By using MFM for timing synchronization (as described below), the 64 symbols will be transition encoded, and after accounting for flag and command bits, that will leave 24 or 25 control bits per line.
As shown, the control channel receives a control signal at amplifier 826 which is input to comparator 836 having a reference voltage of 0 V and operating at a 16th of Fsavt or approximately 41.5 MHz. Assuming that the control signals are in the range of −0.5 V up to +0.5 V, the comparator will detect if the control signal is greater than 0 V (meaning a digital 1) or if the control signal is less than 0 V (meaning a digital zero). This digital data is then output at 838 and thus provides a single control bit every 16 samples. Control signals provide synchronization and phase alignment as described below.
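The comparator's slicing behavior reduces to a threshold test, sketched below with the voltage range stated above.

```python
def control_bits(samples_v, threshold_v=0.0):
    """Zero-crossing detector: one bit per control sample at Fsavt/16."""
    return [1 if v > threshold_v else 0 for v in samples_v]

# Control samples nominally swing between -0.5 V and +0.5 V.
print(control_bits([+0.4, -0.45, +0.38, -0.02]))  # [1, 0, 1, 0]
```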
This particular embodiment is for an 8K144 display and example parameter values are shown in Table 1 above. One of skill in the art will find it straightforward to modify the architecture to suit other display sizes and speeds. By reordering the samples in the transmitter, each interleaved S/H amplifier can drive adjacent columns while operating in rotation as is described below.
In this permutation, 15 of the amplifiers (0-14) each drive 64 adjacent columns with sub-pixel values, while amplifier 15 handles all 64 of the control signals. This variation minimizes the hardware in the source driver and also minimizes the wiring load on the input amplifiers. Further, this variation allows for the slowest possible SAVT (sampled analog video transport) transmission rate (64×16 samples per line) as padding is not required in the data sequences. In order to best display text and other sharp transitions in intensity, it is preferable that the sampling amplifiers be able to settle to a new value every 1/Fsavt, or approximately 1.5 ns per sample. In order to implement this architecture, the sequence of sub-pixel indices for transmission in a transmitter is: 0, 64, 128, . . . 832, 896; 1, 65, . . . 897; . . . ; 63, 127, 191, . . . 895, 959.
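The transmission sequence quoted above can be generated programmatically, as sketched below for 15 amplifiers of 64 adjacent columns each (the control channel is omitted).

```python
def transmission_order(blocks=15, block_size=64):
    """Sub-pixel indices in send order: one sample per 64-column block per slot."""
    order = []
    for phase in range(block_size):      # 64 time slots per line
        for block in range(blocks):      # one sample per amplifier per slot
            order.append(block * block_size + phase)
    return order

seq = transmission_order()
assert seq[:3] == [0, 64, 128]           # matches "0, 64, 128, ..."
assert seq[14] == 896 and seq[-1] == 959 # first slot ends at 896; line ends at 959
```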
The above architecture of source driver 820 of
Transmitter Integrated with Timing Controller
As mentioned above, in an alternative embodiment the transmitter is integrated with the timing controller, rather than the discrete implementation shown in
The integrated transmitter/timing controller 640 receives the digital video signal 664, distributes it into a line buffer or buffers, performs image processing and converts the digital samples into analog samples and transmits EM signals to source drivers as described above. Typically, EM signals 192 are delivered to the source drivers 186 using differential pairs of wires (or metallic traces), e.g., one pair per source driver. The gate driver control signals 190 control the gate drivers 160 so that the correct line of the display is enabled in synchronization with the source drivers. A single reference clock 170 from transmitter and timing controller 640 may be fanned out to all source drivers because each source driver chip performs its own synchronization, but practical realities in drive strength may mean that it is preferable that multiple clocks are distributed. In any case, frequency lock between source driver chips is maintained.
As above, controller 630 coordinates storage and retrieval of pixel values into and from the line buffers.
As mentioned earlier, framing flags 627 come from the unpacker 620 and are input into distributor controller 630 which uses these flags to determine the location of pixels in a line in order to store and then place them into the correct input vectors. After the framing flags are output from the controller 630 (typically delayed) they are input into gate driver controller 650 which will then generate numerous gate driver control signals 671 for control of the timing of the gate drivers. These signals 671 will include at least one clock signal, at least one frame-strobe signal, and at least one line-strobe signal. Once the pixel values have been pushed into the source drivers for a specific line, the line-strobe signal is asserted for the particular line that has been enabled by the panel gate driver controller. The line-strobe signal thus drives the selected line at the right time. Control of the timing of the gate drivers may be performed as is known by a person skilled in the art. Also shown is bidirectional communication 637 between controller 630 and gate driver controller 650; this communication is used for timing management between the source and gate drivers.
Operation of the two line buffers, image processors 250-259 and DACs 260-269 may occur as has been described above. Preferably, image processing occurs after unpacker 620 and before the line buffers, in which case image processing blocks 250-259 are removed and replaced with a single image processing block between 620 and 241, 242. And, as mentioned above, image processing need not occur within transmitter 640 but may occur in SoC 120 or in another location.
Transmitter Integrated with Timing Controller and System-On-Chip
As mentioned above, in an alternative embodiment the transmitter is integrated with the timing controller and SoC, rather than the discrete implementation shown in
Shown is an input of a digital video signal 110 via an HDMI connector (or via LVDS, HDBaseT, MIPI, IP video, etc.) into the display unit 680, which is then transmitted internally 111 to the integrated SoC. The SoC performs its traditional functions such as display control, reverse compression, brightness, contrast, overlays, etc. After the SoC performs these functions, the modified digital video signal (not shown) is then delivered internally to the integrated transmitter and timing controller using a suitable protocol such as LVDS, V-by-One, etc. In this embodiment, the timing controller and transmitter are both integrated with the SoC and all three are implemented within a single circuit, preferably an integrated circuit on a semiconductor chip.
The transmitter within circuit 684 converts the modified digital video signal into analog EM signals 192 which are transported to display panel 690. Preferably, signals 192 are delivered to the source drivers 186 using differential pairs of wires (or metallic traces), e.g., one pair per source driver. Gate driver control signals 190 control the gate drivers 160 so that the correct line of the display is enabled in synchronization with the source drivers. Typically, the distance between chip 684 and source drivers 186 is in the range of about 5 cm to about 1.5 m, depending upon the panel size. A single reference clock 170 from transmitter, timing controller and SoC 684 may be fanned out to all source drivers.
The integrated chip 684 may be implemented as herein described, i.e., as shown in
A distributor of the transmitter includes line buffer 720, any number of input vectors (or banks) 722-726, and a distributor controller 728. The RGB samples (or black-and-white, or any other color space) are received continuously at the distributor and are distributed into the input vectors according to a predetermined permutation which is controlled by the distributor controller 728. In this example, a row-major order permutation is used and the first portion of the row of the incoming video frame (or image) from left to right is stored into input vector 722, and so on, with the last portion of the row being stored in input vector 726. Accordingly, line buffer 720 when full, contains all of the pixel information from the first row of the video frame which will then be transported and displayed in the first line of a video frame upon display panel 710. Each input vector is read out serially into its corresponding DAC 732-736 and each sample is converted into analog for transport. As samples arrive continuously from timing controller 702 they are distributed, converted, transported and eventually displayed as video upon display panel 710. There may be two or more line buffers, as shown and described in
Connecting the transmitter 704 to the source driver array 708 is a low-voltage wiring harness 706 consisting of differential wire pairs 742-746, each wire pair transporting a continuous stream of analog samples (an electromagnetic or EM signal) from one of the DACs 732-736. Each differential wire pair terminates at the input 760 of one of the source drivers 752-756. Other transmission media (e.g., wireless, optical) instead of a wiring harness are also possible.
Each source driver of the source driver array such as source driver 752 includes an input terminal 760, a collector 762 and a number of column drivers 764 (corresponding to the number of samples in each input vector, in this example, 1,024). Samples are received serially at the terminal 760 and then are collected into collector 762 which may be implemented as a one-dimensional storage array or arrays having a length equal to the size of the input vector. Each collector may be implemented using the A/B samplers (storage arrays) shown in
Synchronization may be used to provide for horizontal synchronization (beginning of a display line), vertical synchronization (first display line of a frame) and sample phase alignment (when to sample incoming sub-pixel samples). In other words, a receiver such as a source driver receiving a stream of video pixels needs information from a transmitter telling it where the start of a frame is, where the start of a line is, and at what point to sample data representing a particular sub-pixel. For example, each switch 842 needs to know when a sample on the input is valid and stable so that the correct value can be sampled; a process referred to as sample phase alignment determines when to sample. Sample phase alignment accounts for different delays from the TCON to geographically-distributed source drivers of a display; we optimize the phase of the locally-generated clock 171 (derived from reference clock 170) relative to the locally-delivered samples as described in more detail below.
Synchronization is useful (and can be made difficult) for a variety of reasons. For one, a constant stream of video sub-pixels does not inherently have information indicating the start of a frame, the start of a line or phase alignment data. Also, the delay along cables from a transmitter to receiver is potentially variable and is typically not known. Further, attenuation on the cables (which can be different between paths to the various source driver chips) can also be problematic. Finally, the wave shape of the incoming sub-pixel value may not be known or can be variable due to ringing, overshoot, filtering, rate of change of the input value, or other kinds of distortion of the signal.
Realizing that synchronization is important and can be made difficult by the above factors, a technique herein described provides commands for synchronization and phase alignment of analog samples.
We use a timing reference (i.e., a special timing violation not occurring during normal data transmission) referred to herein as a "flag" that informs the receiver, i.e., the source driver, that it may reset its clock and that what follows are known commands or data for synchronization. For instance, once the flag has been received we then receive a command to begin phase alignment and then determine the optimal sampling phase at which to sample an incoming sub-pixel. To begin with, we are aware of the frequency of the transmission of samples (i.e., the rate at which samples arrive at the input terminal of the source driver); in the example herein Fsavt is approximately 664 MHz (673.92 MHz for the HY1002). As it can be impractical to transmit that high-frequency clock, in one embodiment we transmit a slower clock, Fsavt/64=10.375 MHz (10.53 MHz for the HY1002), to each source driver and each source driver uses a phase-locked loop to multiply that frequency up to the higher frequency clock. We also know that there are 15 sub-pixel data streams arriving at Fsavt/16 at each input amplifier, that each of these input amplifiers delivers 64 samples per line, and that there will be a control stream (either analog or digital control signals) arriving at the 16th amplifier and having 64 control signals per line.
The central comparator will be reliable (as it is a zero-crossing detector) and generally the data extracted from this central comparator will correspond to the data extracted on the high and low channels assuming all is working correctly (i.e., if the control signal is +0.4 V, both the central comparator 836 and the high comparator 835 will detect a logical "1"). But, if sampling is occurring at the wrong time, it is very likely that the central comparator will provide the correct bit but the high and the low values from the other two comparators will disappear. The concept here is that the high and low data require the input to have nearly settled in order to receive the correct value, whereas the zero-crossing detector will be right even if the input sample is still slewing. Using only one of the high or low comparators along with the central comparator is also possible. Use of this information is discussed below with regard to phase alignment.
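The three-comparator heuristic can be sketched as follows; the +/−0.3 V "settled" thresholds are illustrative assumptions, as the text does not specify the high and low comparator levels.

```python
def classify_sample(v, high_thresh=0.3, low_thresh=-0.3):
    """Return (bit, settled): the zero-crossing bit and a phase-quality flag."""
    central = v > 0.0            # zero-crossing detector: valid even while slewing
    high = v > high_thresh       # high comparator: settled logical "1"
    low = v < low_thresh         # low comparator: settled logical "0"
    settled = high or low        # a settled sample trips exactly one of the pair
    return central, settled

bit, ok = classify_sample(0.4)    # (True, True): settled "1" -> sampling phase good
bit2, ok2 = classify_sample(0.05) # (True, False): still slewing -> adjust phase
```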
In this embodiment, it is realized that synchronization requires only a single comparator 836 (a zero crossing detector) on a single SHA channel and does not need DACs to set comparison thresholds. The algorithm for synchronization runs in the digital domain (the zero crossing detector output) and can perform both clock-level synchronization (alignment of SHA outputs so that the side-channel is seen on one particular SHA output) and phase-level synchronization (choosing the optimal sampling phase within a clock cycle).
At input terminal 822, there is one analog input differential with matched termination and ESD protection. This is driven by a 50R source impedance per side through a 50R transmission line. Hence, there will be a 50% reduction in voltage received compared to the voltage transmitted. The PLL of 821 multiplies the relatively slow reference clock 170 from the TCON (e.g., Fsavt/64) up to the full speed Fsavt clock 171 (e.g., approximately 675 MHz in HY1002) with 11 phases (for example), selectable per clock cycle. There is also high-speed timing generation to generate sampling strobes, reset signals and output transfer strobes for the SHA amplifiers 0-15. A 16-way de-interleaver 840 is built using the SHA amplifiers as shown in
This sub-pixel order minimizes the hardware in the source driver and also minimizes the wiring load on the input amplifiers. In order to best display text and other sharp transitions in intensity, it is preferable that the sampling amplifiers be able to settle to a new value every 1/Fsavt, or approximately 1.5 ns per sample. As shown, SHA 0 carries control and timing; SHAs 1-15 carry video data such that each SHA drives 64 adjacent columns of the display. Since the SHAs are sequentially sampled, this leads to a transmission order of: CTL[0], V[0], V[64], . . . V[896], CTL[1], V[1], V[65], . . . V[897], . . . , CTL[63], V[63], V[127], . . . V[959]. The order provides 64 control bits per line and 960 video samples per line, for a total of 1,024 samples transmitted per line (per source driver).
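The following sketch (a hypothetical generator, not taken from any implementation) reproduces that transmission order and confirms the per-line totals:

```python
# Sketch of the per-line transmission order described above: in each of the
# 64 time slots, one control sample (SHA 0) is followed by one video sample
# from each of the 15 video SHAs.
def transmission_order(n_video_shas=15, samples_per_sha=64):
    for t in range(samples_per_sha):            # 64 slots per display line
        yield f"CTL[{t}]"                       # SHA 0: control and timing
        for sha in range(n_video_shas):         # SHAs 1-15: video data
            yield f"V[{t + sha * samples_per_sha}]"

order = list(transmission_order())
assert order[:4] == ["CTL[0]", "V[0]", "V[64]", "V[128]"]
assert order[-1] == "V[959]" and len(order) == 1024  # 64 CTL + 960 video
```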
The timing reference marks the point in time after which what follows are commands and data in the control sequence; the timing reference is an MFM flag, which is a deliberate timing violation. We assume that the wire delay from Tx to Rx is greater than one SAVT cycle and that this delay may vary. We extract digital data on the control channel so it is robust, even if the analog samples are not perfect. Further, level values are irrelevant, only the transitions are important, and true and complement values have the same meaning.
Timing signal 852 is Fsavt/16 which corresponds to the timing of the output from the input amplifiers, i.e., the rate at which each amplifier outputs data. Control bit cells 854 represent a sequence of 64 bits received as a control signal at output 838 of one of the source drivers. MFM cells 856 represent the MFM-encoded bits, one MFM cell for every two control bits and payload 858 is a command and data. Control sequence 860 is a sequence of control bits received on a control channel at one of the source drivers and output at control 838 of source driver 820′″, for example.
As shown, the control sequence includes an MFM flag 862, which is a sequence that does not normally occur in the stream of control bits. Flag 862 consists of a sequence of transitions, spaced 4-3-4 control bits, and the end of the fifth transition denotes the end of the flag (the timing violation). Then there is a trailing zero 864, which is an MFM-encoded zero ignored by the data receiver before the actual payload begins. The payload is typically sent LSB first, although sending MSB first may also be used; the LSB 866 in the 0 position of MFM bit cells 856 is shown in the control sequence as having the value "0." A total of 25 MFM-encoded cells are sent and the payload 867 is shown to the right of the control sequence. Shown at 868 is a second control sequence different from sequence 860 but having the same MFM flag and trailing zero; the payload it sends is different, reflecting different commands and data that may be sent over a single control channel. Another example of a control sequence is at 869. The payload sent may represent commands or parameters, or be reserved for future use.
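By way of example only, a flag detector might scan the recovered control bits for transitions with the 4-3-4 spacing. The sketch below makes that spacing an explicit assumption and ignores absolute levels, since only transitions carry meaning; a real receiver would implement the equivalent in hardware on the comparator output.

```python
# Illustrative flag detector: find transitions in the control-bit stream
# spaced 4-3-4 bits apart (the spacing tuple mirrors the description above).
FLAG_SPACING = (4, 3, 4)

def find_mfm_flag(bits):
    """Return the index just past the flag, or None if no flag is present."""
    transitions = [i for i in range(1, len(bits)) if bits[i] != bits[i - 1]]
    for k in range(len(transitions) - len(FLAG_SPACING)):
        gaps = tuple(transitions[k + j + 1] - transitions[k + j]
                     for j in range(len(FLAG_SPACING)))
        if gaps == FLAG_SPACING:
            return transitions[k + len(FLAG_SPACING)]  # end of the flag
    return None
```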
Synchronization is complicated because we receive data over 16 channels from the de-interleaving amplifiers. A source driver will not know which channel holds the control sequence until synchronization occurs. One proposed method is to use a flag sequence that appears on just one channel output. Identification of the MFM flag tells the source driver chip to resynchronize and be ready for commands or data, such as a horizontal or vertical synchronization command, or phase alignment mode. At power on (or after an outage) the transmitter will transmit the MFM flag and control sequence on all channels and the correct control channel (in this example, the 16th channel) will recognize the flag, resynchronize the timing and recognize commands and parameters. Once resynchronization has occurred, the control sequence need only be sent on the control channel and video data may be sent on the other 15 channels.
Another, more preferable, method is to transmit the MFM flag on one channel only, not on all 16 channels initially. The receiver looks for the flag on one channel (and before synchronization is complete, this may be the wrong channel). If after one line time (approximately 1.5 µs) the flag is not detected, the clock is slipped (skipped) by one cycle, effectively rotating the amplifier usage. Synchronization to the clock cycle can therefore take up to 16 display line times.
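A sketch of this search loop follows (the receiver helpers `wait_for_flag` and `slip_clock` are hypothetical names, standing in for the flag detector and the clock-slip mechanism):

```python
# Sketch of clock-cycle synchronization by clock slipping. Each failed line
# time rotates which SHA channel the receiver treats as the control channel.
def synchronize(receiver, n_channels=16, line_time_us=1.5):
    for slips in range(n_channels):              # at most 16 line times
        if receiver.wait_for_flag(timeout_us=line_time_us):
            return slips                         # flag found: aligned
        receiver.slip_clock(cycles=1)            # rotate amplifier usage
    # No flag after all 16 slips: the sampling phase may sit on a symbol
    # transition; advance the phase (as discussed below) and retry.
    raise TimeoutError("no MFM flag detected after 16 clock slips")
```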
Because the control stream is continuous, the commands and parameters may be extended over multiple display lines if necessary. And even though conventional CEDS transmits approximately 28-32 control bits per display line, we realize that some of these bits do not need to be conveyed each line (some, e.g., low temperature mode, power control, etc., may be frame parameters), i.e., less frequent transmission is adequate. And such a control channel disclosed herein is robust enough to not require CRC since we extract digital data and the same channels convey analog video samples to 10-bit accuracy. Nevertheless, CRC may be added. Another advantage of using MFM in the context of video transmission is that when the flag occurs it provides an immediate and accurate timing reference; there is no waiting for correlation as is the case for other techniques such as Kronecker. Further, this control channel is adequate to convey commands such as frame synchronization, line synchronization, parameter data, and phase alignment information. The commands are sent distributed (sent sequentially one symbol at a time over one line period) and the received control information applies for the next display line. Typically, the control sequence never ends. One control packet is sent every line period. Information such as the polarity of the column driver, driver strength, etc. is carried per line.
Synchronization stream 878 is a stream transmitted on the control channel continuously during the phase alignment mode.
VSYNC 881, when asserted, indicates that the current line being received is in the vertical blanking period, so no data will be displayed and the video controller state machine is re-initialized. Polarity control bits 882 determine the polarity in which pairs of columns are driven relative to the dark level. Each pair of columns is driven in a complementary fashion (one column output is driven more positive than the dark level and the adjacent column is driven more negative), and the polarity control for a column-pair determines the direction. The four polarity controls independently control the four column pairs in an 8-column group and the pattern is repeated every eight columns for all sub-pixel columns in a line. The polarity control can be updated each line. In practice it is likely that polarity control bits will be changed at most once per line (to reduce power consumption).
Shorting control bits 883 include "short_gena" which, when asserted, causes adjacent columns to be shorted only if the corresponding polarity control bits have been changed, unless short_all is also asserted. When de-asserted, no shorting is performed at all. "Short_all" enables shorting of all column pairs irrespective of the state of the polarity control changes, but only has effect if short_gena is also asserted. Drive Time Control 884 specifies the number of Fsavt/16 cycles from the start of the high-voltage driver's drive period until the driver is tri-stated or charge-shared (depending on SHORTCTL). High-Voltage Driver Sampling Phase 885 is a chopper clock that swaps both the inputs and outputs of the main amplifier to cancel the offset. SHA Calibration Control Signals 886 include two SHA calibration control signals: sha_video (sha_cal[0]) and sha_meas (sha_cal[1]), both directly controllable from side-channel control bits. These signals control the SHA's Calibration_Phase1 and Calibration_Phase_2 signals respectively.
As mentioned above, due to delays and attenuation in the cables, the presence of ringing on the input samples, inter-symbol interference, etc., it is desirable to determine the optimal sampling phase of the incoming samples for a particular source driver. This phase alignment may be performed at power-on or at regular intervals such as at frame synchronization, during frame blanking, etc., and the phase alignment mode may be entered by issuing the "set phase alignment mode" command described above. Two phase alignment techniques are proposed below.
Once phase alignment mode has been entered we send a synchronization stream 878 along the control channel of the source driver, e.g., the synchronization stream arrives at input amplifier 826, and the stream is detected by a central comparator 836 as well as an upper threshold comparator 835 and a lower threshold comparator 837. This synchronization stream is preferably a valid MFM data stream (i.e., MFM zeros and MFM ones) with a regular 50% duty pattern of positive and negative values of known amplitude. Because this is a valid MFM data stream, the “exit phase alignment mode” command may be issued at any time. Comparators 835 and 837 should be sufficiently fast, but do not need absolute accuracy as long as the offset is less than the difference of the last two amplitude levels.
Basically, central comparator 836 provides zero crossing detection and indicates whether the input detected is positive or negative. When the sample is positive and the sampling phase is correct, upper threshold comparator 835 should also produce a positive value; if not, this means that sampling has occurred too early or too late, i.e., before the transition to a positive value or after the transition. Lower threshold comparator 837 provides similar information when the sample is negative. If the upper comparator or the lower comparator does not agree with the central comparator, then the sampling phase is adjusted. A detailed technique for adjusting the phase to correctly sample positive input is described below, and one of skill in the art will be able to apply the technique to negative input.
The upper comparator (for example) should report the same value as detected by the zero crossing detector. As the sampling phase is rotated (by advancing the phase from the PLL) we will eventually get to a point where the next transition starts to occur. That transition will cause the upper comparator to provide a result that disagrees with the zero crossing detector. We then know that we have detected the transition, and we can set the sampling phase back by one (or two, for safety) phases, so that we sample late in the symbol period after the sample has settled, but before the transition to the next sample. Note that if the transition is very quick, it is possible that the zero crossing detector will also flip when the symbol transition occurs, in which case looking at the upper comparator is not required, so “the transition” may be determined by the OR of these two events.
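A sketch of this search follows (Python; `set_phase` and `read_comparators` are hypothetical helpers standing in for the PLL phase select and the comparator outputs, and the eleven phases are the example value from this implementation):

```python
# Sketch of the phase search: rotate the sampling phase until the upper
# comparator disagrees with the zero-crossing detector (or the detector
# itself flips), then back off two phases so sampling lands late in the
# settled portion of the symbol period.
def find_sampling_phase(set_phase, read_comparators, n_phases=11):
    prev_zc = None
    for step in range(n_phases):
        set_phase(step)
        zc, upper = read_comparators()      # central 836 and upper 835
        zc_flipped = prev_zc is not None and zc != prev_zc
        if upper != zc or zc_flipped:       # the OR of the two events
            best = (step - 2) % n_phases    # two taps back, for safety
            set_phase(best)
            return best
        prev_zc = zc
    return None  # no transition seen in one rotation; retry and/or average
```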
As mentioned, the first step is to enter phase alignment mode and to send synchronization stream 890 along the control channel. Preferably, amplitudes 891 and 892 are set far enough apart to handle any ringing, etc., of the input and to provide a window (in this example, approximately 0.2 V) in which upper threshold 893 can be set so that, when the sampling phase is roughly correct, pulse 891 does not trigger the upper comparator but pulse 892 does. In one example, if the expected amplitudes of a control signal are approximately 1.5 V and approximately −1.5 V, then the amplitudes of pulses 891, 892 are set below that expected amplitude as shown. Corresponding amplitudes for pulses 897 and 898 may be set in the same manner.
In order to choose initial voltages for these two pulse amplitudes one aim is to set the modulation levels so that we can detect the transition. One embodiment uses 50% amplitude and 75% amplitude (of a positive pulse) for these two amplitudes of the synchronization stream. That makes it easy to set the DAC threshold between the two amplitudes (with allowance for noise, etc.), yet still provides a good indication of when transitions occur (when the 75% amplitude pulse drops below the DAC upper threshold).
Selecting an initial sampling phase may be a random selection, the reset value (e.g., phase 0) or some other phase selection. Because the zero crossing detector is used to determine the expected signal level, it would be unlikely (~1/16 chance, but possible) to select a sampling phase at the symbol transition where the zero crossing output appears to be random. If that occurs, though, and we do not see a flag after all 16 clock skips, we advance the phase, and that puts us into a position where the zero crossing detector will work. There are 11 phases in our implementation; it is expected that shifting the phase about two or three positions will be sufficient. Other implementations will differ.
Once the synchronization stream arrives, logic and circuitry (not shown) in the source driver may adjust the upper threshold by sliding it up and down to determine its optimal voltage. For instance, if the upper threshold is too low the upper comparator will trigger on both pulses 891 and 892; if the upper threshold is too high it will not trigger on pulse 892; when the upper threshold is placed correctly it will not trigger on pulse 891 but will trigger on pulse 892. The source driver will not know what the amplitudes of pulses 891, 892 will be due to attenuation, etc., but as long as the amplitudes are far enough apart to place the upper threshold accurately then the source driver does not need to know the precise values. This adjustment process uses a sampling phase that is roughly correct, but not optimal. Once the upper threshold is correctly placed it will not be triggered by any ringing of pulse 891 but will be triggered by pulse 892. A DAC may be used to adjust the upper threshold.
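For illustration, the threshold placement can be viewed as a search for a DAC code that triggers on pulse 892 but not on pulse 891; the sketch below substitutes a binary search for the up-and-down sliding described above (`set_dac` and `observe_triggers` are hypothetical helpers):

```python
# Sketch of placing the upper threshold between the two pulse amplitudes.
def place_upper_threshold(set_dac, observe_triggers, dac_min=0, dac_max=255):
    lo, hi = dac_min, dac_max
    while lo <= hi:
        mid = (lo + hi) // 2
        set_dac(mid)
        on_891, on_892 = observe_triggers()  # observed over many pulses
        if on_891:              # too low: comparator triggers on both pulses
            lo = mid + 1
        elif not on_892:        # too high: comparator misses pulse 892
            hi = mid - 1
        else:
            return mid          # triggers on 892 only: correctly placed
    return None                 # amplitudes too close together to separate
```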
Once the upper threshold has been placed, logic and circuitry in the source driver starts rotating the sampling phase around the eleven different phase positions to determine the best phase at which to sample. By way of example, an initial sampling phase may occur roughly in the middle of pulse 892, but this may not be the optimal point to sample because the pulse may not be stable at this point and may yield an incorrect value. Typically, the best point at which to sample is immediately before the transition to the next pulse when the signal is most settled, i.e., before the trailing edge of the pulse. When the sampling phase is rotated to point 894 both the central comparator and the upper comparator trigger and signal that a positive value is received; when rotated to point 895 both trigger again. But when the sampling phase is rotated to point 896 suddenly the upper comparator will not trigger and we will know that we have just passed the transition. There will not be correspondence between the upper comparator and the central comparator. (Even though a perfect square wave is shown, the central comparator will still signal a positive value as the transition is not a steep drop down to −1.2 V but rather a more gradual descent.) Accordingly, by going back one or two phase taps earlier, i.e., to point 895 or 894, we will have found the best sampling phase. Another quantity of phase positions may be used; eleven was determined by the process technology (the number of stages of inversion that fit within the 1.5 ns clock period). An odd number of positions may work well, depending on the VCO structure.
Use of the two pulses 891, 892 to set the upper threshold provides certainty that the upper threshold is high enough to perform the search for the optimal sampling phase as described above. Providing an upper threshold that is below the amplitude of pulse 892 guarantees that when sampling occurs after the transition of pulse 892 there will not be correspondence between the upper comparator and the central comparator, thus facilitating choosing the optimal sampling phase. Although it is possible to use a single sampling phase once detected, it is preferable to average over multiple measurements in order to handle noise and overshoot.
Above is described a technique for determining the optimal sampling phase of positive pulses. The same technique may be applied to the negative pulses as well and the results can be averaged. In one particular embodiment, the lower threshold comparator 837 is not necessary and only comparators 835 and 836 are used to determine the optimal sampling phase using the positive pulses as described above. In another embodiment, the upper threshold comparator is not used and only the lower threshold comparator and the central comparator are used with respect to negative pulses in order to determine the optimal sampling phase. In yet another embodiment, the upper threshold comparator is used exclusively with only positive pulses in the synchronization stream all having the same amplitude; the upper threshold is set to be below the amplitude of these positive pulses and the sampling phase is rotated forward and back depending upon when this upper comparator ceases to trigger. The upper threshold comparator may also be used exclusively when the synchronization stream includes alternating positive pulses of different amplitudes as shown in
At step 974, if no flag is detected and the detector output = 1, then move to step 972, shown in
Once the optimal phase is determined it is implemented within the source drivers by sending the output of the sampling phase adjustment circuit as a sampling clock to the SHA amplifiers. Preferably, all amplifiers of all source drivers act in unison; i.e., there is only one sampling phase alignment circuit and one clock cycle alignment that controls all SHA amplifiers. Within each source driver, these input SHA amplifiers are time interleaved and generate sample outputs that are skewed in time by one Fsavt cycle between adjacent channels. The SHA amplifiers then transfer these samples to the collectors (A/B samplers) with skewed timing also, but after all samples for a line are gathered by the collectors, the pre-amplifiers transfer all samples to the next stage (the level converters) in unison. This effectively “wastes” 16 Fsavt cycles of the transfer time at the pre-amplifier outputs, but as there is a decimation of the sampling rate, there is sufficient time for this to occur.
Other synchronization techniques may also be used. By way of example, we can provide for horizontal and vertical synchronization by forwarding a low-frequency clock. In order to phase adjust for where we sample, another technique is to send known black/white references in the sub-band and adjust the receiver's PLL until we find the blackest black and whitest white.
In another synchronization technique, the reference clock is more than simply a reference clock; it also includes data (such as parameters), but at a lower frequency. The clock and its parameters are sent via a wire separate from the SAVT samples (which wire already exists). There is no need to intermingle the side channel data with the video data, thus the SAVT rate is reduced and it is only necessary to send 60*16=960 samples per line, thus requiring lower bandwidth for communication. By using sub-pixel color grouping, the bandwidth requirements are reduced even more. It is also possible to introduce color-transition blanking into this technique; since there are no side channel bits embedded in the video stream, there are no issues with bleeding of the side channel bits into the video bits.
Shown is an input terminal 902 and one of the 16 input distribution amplifiers, in this case SHA[0] 824, shown at 904; not shown are the switches 842 of the input terminal and the other 15 distribution amplifiers, which drive video samples. The input sampling is illustrated by the switches sampling into capacitors. The input sampling switches are controlled symbolically by the signals b and t. There are 15 other identical amplifiers (with skewed timing) for carrying video, while SHA[0] carries the side channel information. It is arbitrary which SHA channel carries the side channel, but the advantage of using SHA[0] is that control information arrives before the video samples, not after them, giving some time for setup before the control information is needed. Each amplifier 904 has a nominal gain of one, which may vary.
Each SHA channel drives 64 columns 920 via a series of sampling blocks/collectors 908, preamplifiers 910, level converters 912, HV drivers 914 and column shorting switches 918 as indicated by the array notation [63:0] used in the component designators in the figures. Level converters 912 may also be referred to as differential-to-single-ended converters. Preamplifiers 910 provide the gain required for the signals coming from a transmission medium.
Shown also is one of 64 level converters 912 of the channel that converts the differential signal into a single-ended signal, adds an offset, changes the polarity of the signal, and provides amplification. Output out_p 913 = vmax + 0.5*(Vinp − Vinn) if pol = 0. Output out_p 913 = vmin − 0.5*(Vinp − Vinn) if pol = 1. High-voltage driver 914 is one of 64 such drivers of the channel that multiplies the incoming signal to provide the voltage (plus or minus) expected by the display. Column shorting switch 918 provides shorting for LCD displays as is known in the art. Finally, the expected voltage is output to the column at 920. The preamplifier 910, level converter 912 and HV driver 914 may be considered an amplification stage before each column, and in this case is a pipeline amplifier, or simply "an amplifier."
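Expressed as a sketch, the level-converter transfer function above is simply:

```python
# Direct transcription of the level-converter transfer function given above.
def level_convert(vinp, vinn, pol, vmax, vmin):
    """Differential to single-ended, with offset and selectable polarity."""
    diff = 0.5 * (vinp - vinn)
    return vmax + diff if pol == 0 else vmin - diff
```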
A split OLED DDIC architecture may also be used within a mobile device, as described next.
Shown is a mobile telephone (or smartphone) 980 which may be any similar handheld, mobile device used for communication and display of images or video. Device 980 includes a display panel 982, a traditional mobile SoC 984, an integrated DDIC-TCON (Display Driver IC-Timing Controller) and transmitter module 988, and an integrated analog DDIC-SD (DDIC-source driver) and receiver 992. Mobile SoC 984 and module 988 are shown external to the mobile telephone for ease of explanation although they are internal components of the telephone.
Mobile SoC 984 is any standard SoC used in mobile devices and delivers digital video samples via MIPI DSI 986 (Mobile Industry Processor Interface Display Serial Interface) to the module 988 in a manner similar to Vx1 input signals discussed above. Included within module 988 is the DDIC-TCON integrated with a transmitter as is described above, for example the transmitter of
These analog signals 990 are received at the integrated analog DDIC-SD and receiver 992. DDIC-SD receiver 992 receives any number of analog signal pairs and generates voltages for driving display panel 982 and may be implemented as shown in
Analog DDIC-SD Rx 992 may be a single integrated circuit having 12 source drivers within it (each handling a single pair) or may be 12 discrete integrated circuits each being a source driver and handling one of the 12 signal pairs. Of course, there may be fewer signal pairs meaning correspondingly fewer source drivers.
Analog Video Transport Source Driver Integration with Display Panel
As discussed above, analog video transport is used within a display unit to deliver video information to source drivers of the display panel. It is further realized that a large display nowadays consists of a large area of active-matrix display pixels. In the early days, display drivers (source and gate) would be mounted at the glass edges, but not on the glass, providing source- and gate-driving circuits. Further integration of the driving electronics onto the glass has stagnated due to the complexity of high-speed digital circuits, as well as the large area required for D-to-A conversion. By way of example, digital transport to the source-driving circuits operates at around 3 GHz, a frequency much too high to allow integration with the glass. It is further realized that many display drivers have to be attached to the display edge in order to drive a complete, high-resolution LCD or OLED screen. A typical driver has approximately 1,000 outputs, so a typical 4K display requires 4,000×RGB = 12,000 connections, meaning twelve source drivers. Increasing the panel resolution to 8K increases this number to 24 source drivers. Data rate, synchronization difficulties and bonding logistics make it difficult to continue in this direction.
A display panel (such as an LCD panel) is made from a glass substrate with thin-film transistors (TFTs) formed upon that glass substrate, i.e., field-effect transistors made by thin-film deposition techniques. These TFTs are used to implement the pixels of the display. It is therefore realized that those TFTs (along with appropriate capacitors and resistors, and other suitable analog components) can also be used to create logic circuitry to implement elements of the novel source drivers described herein which are then integrated with the glass. These elements are integrated at the extreme edges of the glass, just outside the pixel display area, but inside the perimeter seal of the glass. Thus, the source drivers disclosed herein may be integrated with the glass using these transistors, capacitors, resistors, and other analog components required, and may do so in the embodiments described below. Accordingly, the source drivers (or elements thereof) which had previously been located outside of and at the edge of the display panel glass are now moved onto the display panel glass itself. In addition, the gate driver functionality for the gate drivers may also be moved onto the display panel glass.
The SAVT video signal may be transported along the edge of the display glass using relatively simple wiring, and is less sensitive to interference than existing Vx1 interfaces. The lower sample rate makes it possible to design the required analog electronics (which are less complex) of the source drivers on the edge of the TFT panel, i.e., on the display panel glass itself. Building the source driver circuitry on the glass edge allows the following elements of a source driver to be integrated with the glass along with their typical functions: input terminal and switches (receive analog samples via the SAVT signal and distribute to the collector); collector (receives the analog samples via input amplifiers and collects the samples in a storage array or line buffer); level converters (convert to single-ended, provide voltage inversion and voltage offset); and amplifiers such as high-voltage drivers (provide an amplified voltage and the current required to charge the display source line capacitance).
If faster TFTs of higher quality are used, then higher-frequency portions of the source driver may be integrated with the glass. Also, smaller device sizes will allow the transistors to switch faster, thus enabling implementation on glass of elements using those devices. For example, the channel length of a TFT affects its size; preferably the channel length for oxide TFTs is less than about 0.2 um and the preferable channel length for LTPS TFTs is less than about 0.5 um. Reducing the channel length by 50% yields an increase in speed by a factor of four. Further, implementation may depend upon the type of display; displays of lower resolution (2K, 1K and below) may use elements that do not require the high frequencies of 4K and 8K displays. Typically, amorphous silicon transistors would not be used as they have a tendency to threshold shift and are not stable. Note that the source driver disclosed herein does not require any digital-to-analog converters to convert video samples nor any decoder to decode incoming video samples.
In a first embodiment 102, level converters 620 and amplifiers 621 are integrated with the glass because the level converters only require a relatively low-frequency clock. As the level converters switch once per line they require a switching frequency of about 50 kHz for a 2K display, 100 kHz for a 4K display, etc. Thus, the first embodiment of integration may use TFTs that can operate at a clock frequency of at least about 50 kHz, assuming a 2K panel (100 kHz for a 4K panel, etc.). Thus, IGZO or LTPS TFTs may be used in the first embodiment.
In a second embodiment 104 using faster transistors, level converters 620, amplifiers 621 and collector 786 may also be integrated with the glass, thus integrating the entire source driver. Collector 786 requires a higher-frequency clock as each collector is manipulating the pixel sequence and requires a switching frequency of about 50 MHz for a 2K display, 100 MHz for a 4K display, etc. Thus, the second embodiment of integration may use TFTs that can operate at a clock frequency of at least about 50 MHz, assuming a 2K panel. Thus, LTPS TFTs may be used in the second embodiment for 2K panels.
Shown also is rectangular area 140, also located upon the glass itself, in which elements of the source drivers may be located. The source driver functionality may be partially or fully integrated with the glass by making use of TFT switches on the glass in this area 140. The first embodiment integrates the amplifiers and level converters onto the glass (formed in region 140), while the second embodiment integrates the amplifiers, level converters and collector (also formed in region 140).
As the source drivers disclosed herein do not receive digital signals, have no D-to-A converters and related circuitry for processing the digital video samples, nor decoders, the lower processing frequencies and smaller dimensions of these drivers allow for them to be integrated onto the glass. Thus, for example, since a typical 64-inch 4K television panel has a pixel width of 80 um (40 um in case of an 8K display), there is also sufficient width to integrate the drivers directly onto the glass because the dimensions of the output amplifiers are expected to fit within this space. Depending upon the pixel width of a particular implementation, specific TFTs may be chosen.
An interconnect printed circuit board 182 receives and passes the EM signals 602 via flexible PCBs 184 to the source drivers located on integrated circuits 186a and partially integrated with the glass in TFTs 186b. Passing the EM signal in this fashion is implemented for embodiment 1 because a portion of each source driver (at least the collector) will still be located within flexible PCBs 184 on the IC 186a, while the level converters and amplifiers will be located on glass in TFTs 186b. As shown, each integrated circuit 186a passes analog signals 187 to its corresponding circuitry on glass 186b. The nature of these analog signals will depend upon whether embodiment 1 or embodiment 2 is being implemented. An implementation for embodiment 2 is shown below. Gate clocks 190 and 192 are delivered to the gate drivers via circuit board 182 and flexible PCBs 184. PCBs 184 attach to panel glass 150 as is known in the art.
Returning now to the example source driver described above, the elements of that driver may be partitioned between the source driver chip and the glass in various ways.
Alternatively, all column drivers 916 of the source driver are implemented on glass while the rest of the upstream elements (i.e., collector 915) are implemented outside the edge of the glass. Or, all column drivers 916 and collector 915 of the source driver are all implemented on glass and there are no elements of the source driver implemented on flexible PCB 184 as shown in
In one particular embodiment, the SHA input amplifiers operate at an input rate of about 664 MHz, the A/B sampling blocks operate at 1/16 of the input rate, and the preamplifiers and downstream components operate at 1/1024 of the input rate (1/64 of 1/16 of the input rate). Of course, the input rate may be different, and the fraction of the input rate at which the downstream components operate may vary depending upon the implementation, number of columns, interleaving technique used, etc. In another embodiment, only the HV driver is implemented on glass as the output from the level converter 912 is single-ended (making implementation easier). Or, the preamplifiers, level converters and HV drivers are implemented on glass as they require a lower frequency than the SHA amplifiers and A/B blocks. It is also possible to implement only SHA amplifiers 904 in the source driver chip and all other downstream components on glass, as the amplifiers 904 operate at the greatest frequency.
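As a quick check of these figures (values from the example above; this is arithmetic only, not an implementation):

```python
# Rate decimation through one source driver, per the example above.
FSAVT_HZ = 664_000_000
sha_rate_hz = FSAVT_HZ                   # SHA input amplifiers: full rate
ab_rate_hz = FSAVT_HZ / 16               # A/B sampling blocks: Fsavt/16
downstream_hz = FSAVT_HZ / 1024          # preamps onward: 1/64 of 1/16

assert downstream_hz == ab_rate_hz / 64  # ~648 kHz for this example
```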
Improving Video Images Via Feedback from Column Amplifiers of a Display
We disclose a technique to compensate for performance variation among the column amplifiers of a display by sending appropriate feedback to the timing controller (TCON) of the display unit. This invention thus allows the 24 (or however many) demultiplexing/source driver chips to feed the level of a single column amplifier back to the TCON. Upon gathering performance information from all 23,000 column amplifiers, the TCON can pre-scale values intended for a given column to equalize the performance between columns. Advantageously, there is no high-speed performance requirement: the invention may be used pre-sale for screen calibration purposes. In other words, the technique may be used during a production test where all columns are available and drive characteristics may be measured without an additional area penalty on chip (due to an area overhead per column for sampling). This pre-scaling of values based upon individual column feedback is in addition to any pre-scaling the TCON may do to equalize the performance of different rows in the display; rows farther away from the column drivers may require additional current to achieve the same light output.
There are two main embodiments: analog feedback and digital feedback. Both may use an interface like JTAG, I2C or SPI so that the TCON can issue a command for a particular column driver to send back the value coming out of its column amplifier. The command may also be issued using an MFM command as described above. In the analog version, that value is sampled through an analog switch to an analog bus (a single analog connector returning to the TCON) shared by all source driver chips. The TCON then does analog-to-digital conversion (ADC) and digital processing of the result. As the source drivers' outputs are high voltage, the multiplexing requires the use of high-voltage transistors, or a low-voltage representation of the column voltage may be generated before multiplexing (done by resistor or capacitor dividers). In any case, there is an area overhead per column.
In the digital version, each source driver chip has its own A-to-D converter, so it does the column amplifier sampling locally (thus avoiding any loading effects of a long analog return path), and returns the digital value to the TCON, over the same JTAG, I2C or SPI path as the command.
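A sketch of how the TCON might apply the gathered feedback follows (the linear-gain model and the names here are illustrative assumptions, not the disclosed circuit):

```python
# Per-column equalization from measured feedback: drive every column with a
# known test level, record each column amplifier's output, and derive a
# pre-scale factor that flattens the column-to-column response.
def compute_column_scales(measured_levels, test_level):
    return [test_level / m if m else 1.0 for m in measured_levels]

def prescale(samples, scales):
    return [v * s for v, s in zip(samples, scales)]

scales = compute_column_scales([0.98, 1.02, 1.00], test_level=1.0)
print(prescale([0.5, 0.5, 0.5], scales))   # equalized drive values
```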
In a separate embodiment, we model how the performance of a pixel varies with its neighbors (e.g., in the same column) and use the model to pre-scale sub-pixel inputs before they are sent to the source driver chips in order to produce the desired brightness. In general, it can be difficult for each sub-pixel to report its light output; such measurements require an elaborate testing setup. Instead, we measure the current through each sub-pixel at specific input values and then use that measured current as a proxy for the light emitted in order to model the performance of a pixel (or sub-pixel).
In a variation, we do use real screen metrology to model the performance of the display, especially how neighboring sub-pixel values affect brightness (due to slew rate issues, etc.). We then use this model to pre-scale sub-pixel inputs before they are sent to the source driver chips in order to produce the desired brightness.
Above are described embodiments for transmitting video signals to a display panel and within a display unit. The present invention also includes embodiments for transmission of video signals using SAVT in other environments such as directly from a camera or other image sensor, from an SoC or other processor, and for receiving SAVT signals at an SAVT receiver that is not necessarily integrated with a display panel (as shown above), such as at an SoC, at a processor of a computer, or at an SAVT receiver that is not integrated within a legacy display panel. U.S. patent application Nos. 63/611,274 and 63/625,473 (HYFYP017P2 and HYFYP018P) incorporated by reference above disclose examples of such other environments, respectively within a mobile device and within a vehicle.
In this example there are multiple EM pathways, although a single EM pathway may also be used. Depending upon the implementation and design decisions, multiple outputs may increase performance but require more pathways. In order to have as few wires as possible from transmitter 1240, only a single pathway transporting a single EM signal 1270 may be used. SAVT transmitter 1240 may be implemented substantially as described above with respect to the transmitter of
Depending upon the embodiment discussed immediately below, analog RGGB video samples 1239a may be input, analog or digital RGB samples 1239b may be input, digital G samples 1239c may be input, or analog BGBG . . . RGRG samples 1239d may be input. If the samples are digital then DACs 1260-1269 are used. In general, the transmitter can accept analog or digital video samples from any color space used, and not necessarily RGB. The samples may arrive serially, e.g., R then G then B, or in parallel i.e., RGB in parallel as three separate signals. Using distributor 1241, we can reorder the samples as needed.
As mentioned, the inputs may vary depending upon the implementation. Input 1239d may originate as follows. An image sensor may output raw analog samples without using ADCs or performing "demosaicing" using interpolation. Thus, the image sensor output is a continuous serial stream of time-ordered analog video samples, each representative of a pixel in a row, from left to right, in row-major order (for example), frame after frame, so long as the image sensor is sensing. Of course, a different ordering may also be used. When Bayer filtering is used, the samples are output as a row of BGBG . . . followed by a row of RGRG . . . , often referred to as RGGB format as each 2×2 pattern includes one each of RGGB. These rows of analog video samples 1239d are input into SAVT transmitter 1240, transmitted as EM signals 1270-1279 to the SAVT receiver of
Input 1239c may originate as follows. Raw analog samples coming from an image sensor are converted to digital in ADCs and then “demosaicing” is performed within an image signal processor (ISP), resulting in digital RGB samples per pixel. Only the green channel (i.e., one G sample per element of the array) from each set of RGB samples per pixel is selected and sent to become input 1239c. These rows of G digital video samples 1239c are input into SAVT transmitter 1240, transmitted as EM signals 1270-1279 to the SAVT receiver of
Alternatively, as only the green channel will be sent, interpolation only need be performed at the R and B elements of the sensor in order to obtain their G sample; no interpolation is needed at the G elements because the G sample already exists and the R and B sample at those G elements are not needed, thus making interpolation simpler and quicker. As the green channel corresponds to the luminance (or “luma”) channel there will be no loss of perceived resolution, although any downstream display will show a monochrome image.
Input 1239a may originate as follows. We modify the readout from an image sensor and read at least two rows simultaneously. By way of example, the first two bottom rows of an image sensor are read out simultaneously which then outputs a serial stream of values such as BGRGBGRG . . . or GBGRGBGR . . . . The readout order is thus: first a blue value from the first row, then green and red values from the second row, followed by a green value from the first row, etc., resulting in a serial output BGRGBGRG. Or, an alternative readout order: first a green value from the second row, then blue and green values from the first row, followed by a red value from the second row, etc., resulting in serial output GBGRGBGR. Other readout orders may be used that intermix color values from two adjacent rows and the order of the pixel values may vary depending upon whether a particular row starts with a red, green or blue value.
Since two rows are read out at a time, every four values of those two rows (e.g., BG from the beginning of the first row and GR from the beginning of the second row, i.e., two Gs, an R and a B) are available to output serially, thus resulting in a serial pattern such as BGRG . . . or GBGR . . . as shown. After the first two rows are read out, the next two rows are read out, etc. Other similar outputs are possible where each grouping of four values includes two green values, a red value and a blue value. The image sensor may be read starting from any particular corner, may be read from top-to-bottom or from bottom-to-top, may be read by rows or by columns, or in other similar manners. Thus, the output from the video source is a series of values BGRGBGRG . . . or GBGRGBGR or similar. "Demosaicing" may then occur in the analog domain in the SoC using this series of values without the need to convert these values to digital or use any digital processing.
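The first readout order above can be sketched as follows (row contents assumed to follow the Bayer pattern BGBG . . . / RGRG . . . as described):

```python
# Interleave two simultaneously-read rows (BGBG... and RGRG...) into the
# serial pattern BGRGBGRG... described above.
def two_row_readout(row_bg, row_rg):
    out = []
    for i in range(0, len(row_bg), 2):
        out += [row_bg[i],       # B from the first row
                row_rg[i + 1],   # G from the second row
                row_rg[i],       # R from the second row
                row_bg[i + 1]]   # G from the first row
    return out

assert two_row_readout(["B0", "G0", "B1", "G1"],
                       ["R0", "g0", "R1", "g1"]) == \
       ["B0", "g0", "R0", "G0", "B1", "g1", "R1", "G1"]
```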
Such an ordering of color values facilitates interpolation in the analog domain. Other color spaces may be used in which reading out two or more rows at a time and intermixing the color values from different rows in the serial output also facilitates color interpolation in the analog domain. The video source including the image sensor then outputs a pattern such as BGRGBGRG . . . or GBGRGBGR, shown and referred to as “RGGB . . . ” 1239a in
Input 1239b may originate as follows. As shown in
The collector controller 1330 sequences the loading of samples from the inputs 1270 . . . 1279, as well as controls the timing for unloading the samples for further processing. Since the input stream is continuous, the collector controller loads samples into one line buffer while the other line buffer samples are transferred to the output for further processing.
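A minimal sketch of such double buffering (illustrative only; the class and its names are ours):

```python
# Ping-pong line buffers: load the incoming stream into one buffer while the
# previously completed line is handed downstream from the other.
class Collector:
    def __init__(self, line_len=1024):
        self.buffers = [[0.0] * line_len, [0.0] * line_len]
        self.active = 0                    # buffer currently being loaded
        self.pos = 0

    def load(self, sample):
        """Store one sample; return a full line when one completes."""
        self.buffers[self.active][self.pos] = sample
        self.pos += 1
        if self.pos == len(self.buffers[self.active]):
            self.active ^= 1               # swap: start filling the other
            self.pos = 0
            return self.buffers[self.active ^ 1]  # the just-finished line
        return None
```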
Shown is an embodiment of the receiver suitable for use with the embodiment discussed in
Although SAVT receiver 1300 only shows BG . . . RG . . . outputs, it may also receive and output samples input using the other embodiments discussed in
As outputs 1360 are analog samples, an ADC 1362 (or multiple ADCs) may be used, if desired, to convert the samples to digital. Thus, each output vector may output its samples one at a time via analog-to-digital converter (ADC) 1362 in order to provide a continuous stream of digital samples 1364.
The inventions are applicable to high resolution, high dynamic range displays used in computer systems, televisions, monitors, machine vision, automotive displays, virtual or augmented reality displays, etcetera. As mentioned above at the beginning of the detailed description, embodiments may use SSVT (encoding and decoding), although SSVT may not be necessary. Accordingly, above are described SAVT techniques in which an encoder and decoder are not necessary. Or, an identity matrix may be used with SSVT encoding and decoding (preferably when chip values in the code set are constrained to be “+1” or “0”), thus transmitting an analog signal as if the encoder and decoder were not present. The below figures show use of either SSVT or SAVT techniques; SAVT is described in more detail above, while the below figures provide more details on SSVT techniques.
The GPU 1410 where the video data is processed may be within a computer. Once converted and encoded by the SSVT transmitter 1414 the analog signal 1416 is transported to the display unit 1401. That display unit may be nearby, 10 meters away, or even farther. Thus, the information path from the graphics or video processor, which may be effectively the computer, goes over a number of transfer connections directly to the display unit without ever being digital anywhere in that data path. Originally, the video signal may begin at a camera or similar device as shown in
Advantageously, the farther upstream of a display unit that we perform the D-to-A conversion and the encoding into an SSVT signal (i.e., not performing the conversion and encoding within the display unit itself), the more benefits we obtain, because we do not need to perform compression to transfer a compressed digital video signal across an HDMI cable. In this particular embodiment, we handle the full resolution display information in the GPU, then perform the conversion and encoding on a chip at the GPU, then all the transfer is via a relatively low-frequency SSVT signal until that signal reaches the display unit. In this case, we have handled the entire display resolution at full frame rate from the GPU source to the display unit endpoint without any internal compression.
There is a significant advantage to using an SSVT signal internally in a display unit even if the input signal is not SSVT, i.e., it is a digital video signal. In prior art display units, one decompresses the HDMI signal and then one has the full-fledged, full bit rate digital data that must then be transferred from the receiving end of the display unit to all locations within the display unit. Those connections can be quite long for a 64- or 80-inch display; one must transfer that digital data from one side of the unit where the input is to the other side where the final display source driver is. Therefore, there is an advantage to converting the digital signal to SSVT internally and then sending that SSVT signal to all locations of the display unit where the source drivers are located. Specifically, the advantages are that it is possible to use lower frequency, lower EMI signals, and benefit from embedded synchronization/low latency initialization.
Typically, an SSVT transmitter and an SSVT receiver (in this case, source drivers 1586) are connected by a transmission medium. In various embodiments, the transmission medium can be a cable (such as HDMI, flat cable, fiber optic cable, metallic cable, non-metallic carbon-track flex cables), or can be wireless. There may be numerous EM pathways of the transmission medium, one pathway per encoder. The SSVT transmitter includes a distributor and multiple encoders. The SSVT receiver will include multiple decoders, the same number as the encoders. The number of pathways on the transmission medium may widely range from one to any number more than one. In this example, the medium will be a combination of cable, traces on PCBs, IC internal connections, and other mediums used by those of skill in the art.
During operation, a stream of time-ordered video samples containing color values and pixel-related information is received from a video source at the display unit 1500 and delivered to the SSVT transmitter 1540 via the SoC and TCON (processing by the SoC may be performed as is known in the art). The number and content of the input video samples received from the video source depends upon the color space in operation at the source (and the samples may be in black and white). Regardless of which color space is used, each video sample is representative of a sensed or measured amount of light in the designated color space.
As a stream of input digital video samples is received within the SSVT transmitter, the input digital video samples are repeatedly (1) distributed by assigning the video samples into encoder input vectors according to a predetermined permutation (one vector per encoder) and (2) encoded by applying an SSDS-based modulation to each of the multiple encoder input vectors, using orthogonal codes, to generate multiple composite EM signals with noise-like properties (one analog signal from each encoder). The analog EM signals are then transmitted (3) over a transmission medium, one signal per pathway.
The number of samples N may be more or less than 60. Also, it should be understood that the exposed color information for each set of samples can be any color information (e.g., Y, C, Cr, Cb, etc.) and is not limited to RGB. The number of EM pathways over the transmission medium can also widely vary. Accordingly, the number of vectors V and the number of encoders may also widely vary from one to any number larger than one. It should also be understood that the permutation scheme used to construct the vectors, regardless of the number, is arbitrary. Any permutation scheme may be used, limited only by whichever permutation scheme that is used on the transmit side is also used on the receive side.
Each vector of N samples is then encoded by its corresponding encoder and produces L output levels in parallel. Preferably, L>=N>=2. As described, the encoding may be analog (DACs placed before the encoders) or digital (in which the L levels are converted to analog by a DAC before being transmitted). The L analog output levels are then transmitted over its EM pathway as part of the SSVT signal to an SSVT receiver, which in this case are the source drivers 1586. Advantageously, the SSVT signal is an analog signal and no DACs are required at the source drivers. Although not shown in
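By way of a simplified sketch of the encode/decode arithmetic (a Hadamard code book is used here purely as one convenient set of orthogonal codes, with L = N for simplicity; the disclosed system is not limited to either choice):

```python
import numpy as np

def hadamard(n):                 # n must be a power of two
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

N = 4                            # samples per encoder input vector
CODES = hadamard(N)              # N orthogonal codes, each of length L = N

def encode(samples):             # N samples -> L output levels in parallel
    return CODES.T @ samples

def decode(levels):              # correlate the levels against each code
    return (CODES @ levels) / N

x = np.array([0.1, 0.5, -0.3, 0.8])
assert np.allclose(decode(encode(x)), x)   # lossless with orthogonal codes
```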
Multiple source drivers are cascaded as shown and as known in the art; these multiple source drivers then drive the display panel. As shown, a source driver 1586 does not require a DAC (in the signal path for converting digital samples into analog samples for display) as required in prior art source drivers. Input to a decoding unit 1610 of each source driver is an analog SSVT signal 1592 that has been encoded upstream either within the display unit itself or external to the display unit as is described herein. As shown, SSVT signal 1592 is daisy chained between source drivers. In an alternative embodiment, each source driver will have its own SSVT signal and the TCON provides timing information to each source driver chip.
Decoding unit 1610 may have any number (P) of decoders and having only a single decoder is also possible. Unit 1610 decodes the SSVT signal or signals (described in greater detail below) and outputs numerous reconstructed analog sample streams 1612, i.e., analog voltages (the number of samples corresponding to the number of outputs of the source driver). Because these analog outputs 1612 may not be in the voltage range required by the display panel, they may require scaling, and may be input into a level shifter 1620 which shifts the voltages into a voltage range for driving the display panel using an analog transformation. Any suitable level shifters may be used as known in the art, such as latch type or inverter type. Level shifters may also be referred to as amplifiers.
By way of example, the voltage range coming out of the decoding unit might be 0 to 1 V and the voltage range coming out of the level shifter may be −8 V up to +8 V (using the inversion signal 1622 to inform the level shifter to flip the voltage every other frame, i.e., the range will be −8 to 0 V for one frame and then 0 V to +8 V for the next frame). In this way, the SSVT signals do not need to have their voltages flipped every frame; the decoding unit provides a positive voltage range (for example) and the level shifter flips the voltage every other frame as expected by the display panel. The decoding unit may also implement line inversion and dot inversion. The inversion signal tells the level shifter which voltages to switch. Some display panels such as OLED do not require this voltage flipping every other frame, in which case the inversion signal is not needed and the level shifter would not flip voltages every other frame. Display panels such as LCD do require this voltage flipping. The inversion signal 1622 is recovered from the decoding unit as will be explained below.
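Using the example ranges above, the shift-with-inversion behaves as in this sketch:

```python
# Map a decoder output in [0, 1] V into the panel's range, flipping the
# polarity every other frame when the inversion signal is asserted.
def level_shift(v, invert_frame, vmax=8.0):
    """Return a drive voltage in [0, +8] V or [-8, 0] V (example ranges)."""
    return -vmax * v if invert_frame else vmax * v
```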
Also input into the level shifter 1620 can be a gain and a gamma value; gain determines how much amplification is applied, and the gamma curve relates the luminous flux to perceived brightness, linearizing the human perception of luminous flux. Typically, in prior art source drivers both gain and gamma are set values determined by the manufactured characteristics of a display panel. In the analog level shifter 1620, gain and gamma may be implemented as follows. In one embodiment, gamma is implemented in the digital part of the system, and level shifting and gain are implemented in the driver by setting the output stage amplification. In the case of gamma, implementation is also possible in the output driver, by implementing a non-linear amplification characteristic. Once shifted, the samples are output onto outputs 1634, which are used to drive the source electrodes in their corresponding column of the display panel as is known in the art.
In order to properly encode an SSVT signal for eventual display on a particular display panel (whether encoded within the display unit itself or farther upstream outside of that display unit) various physical characteristics or properties of that display panel are needed by the GPU (or other display controller) or whichever entity performs the SSVT encoding. These physical characteristics are labeled as 1608 and include, among others, resolution, tessellation, backlight layout, color profile, aspect ratio, and gamma curve. Resolution is a constant for a particular display panel; tessellation refers to the way of fracturing the plane of the panel into regions in a regular, predetermined way and is in units of pixels; backlight layout refers to the resolution and diffusing characteristic of the backlight panel; color profile is the precise luminance response of all primary colors, providing accurate colors for the image; and the aspect ratio of a display panel will have discrete, known values.
These physical characteristics of a particular display panel may be delivered to, hardwired into, or provided to a particular display controller in a variety of manners. In one example as shown in
Input to the display panel can also be a backlight signal 1604 that instructs the LEDs of the backlight, i.e., when to be switched on and at which level. In other words, it is typically a low-resolution representation of an image meaning that the backlight LEDs light up where the display needs to be bright and they are dimmed where the display needs to be dim. The backlight signal is a monochrome signal that can also be embedded within the SSVT signal, i.e., it can be another parallel and independent video signal traveling along with the other parallel video signals, R, G and B (for example), and may be low or high resolution.
Output from decoding unit 1610 is a gate driver control signal 1606 that shares timing control information with gate drivers 1560 on the left edge of the display panel in order to synchronize the gate drivers with the source drivers. Typically, each decoding unit includes a timing acquisition circuit that obtains the same timing control information for the gate drivers and one or more of the source driver flex foils (typically leftmost and/or rightmost source driver) will conduct that timing control information to the gate drivers. The timing control information for the gate drivers is embedded within the SSVT signal and is recovered from that signal using established spread spectrum techniques.
Typically, a conventional display driver is connected directly to glass using “COF” (Chip-on-Flex or Chip-on-Foil) IC packages; conventional COG (chip-on-glass) is also possible but is not common on large displays. It is possible to replace these drivers by the novel source drivers of
The P decoders 1780 (labeled 0 through P−1) are arranged to receive differential EM level signals Level0 through LevelP-1 respectively, 1702-1704. In response, each of the decoders 1780 generates N differential pairs of reconstructed samples (Sample0 through SampleN-1). In the case where there are four decoders 1780 (P=4), four vectors V0, V1, V2 and V3 are constructed respectively. The number of samples, N, is exactly equal to the number of orthogonal codes used for the earlier encoding i.e., there are N orthogonal codes used, meaning N codes from the code book.
Reconstruction banks 1782 sample and hold each of the N differential pairs of reconstructed samples (Sample0 through SampleN-1) for each of the four decoder output vectors V0, V1, V2 and V3 at the end of each decoding interval, respectively. These received differential pairs of voltage signals are then output as samples (SampleN-1 through Sample0) for each of the four vectors V0, V1, V2 and V3, respectively; essentially, each reconstruction bank reconstructs a single voltage from each differential pair. The staging bank 1786 receives all of the reconstructed samples (SampleN-1 through Sample0) for each of the four decoder output vectors V0, V1, V2 and V3 and serves as an analog output buffer, as will be described in greater detail below. Once the samples are moved into staging bank 1786, they are triggered by a latch signal 1632 derived from the decoded SSVT signal. The latch signal may be daisy-chained between source drivers. Once the samples are released from the staging bank, they are sent to level shifter 1620.
Decoding unit 1610 also includes a channel aligner 1787 and a staging controller 1789, which receive framing information and aperture information from each decoder 1780. In response, the staging controller 1789 coordinates the timing of the staging bank 1786 to ensure that all the samples come from a common time interval in which the level signals were sent by the SSVT transmitter. As a result, the individual channels of the transmission medium do not all have to be the same length, since the channel aligner 1787 and staging controller 1789 compensate for any timing differences. The gate driver control signal 1606, which may originate from channel aligner 1787, provides the timing information to the gate drivers (or to intermediate circuitry which in turn provides the correct timing and control signals to the gate drivers). Note that
Array 1650 is suitable for use with a display panel having 8K resolution and a 144 Hz refresh rate, i.e., an “8K144” panel.
Theoretically, the amplifiers or level shifters may be omitted if the encoded SSVT signals are at higher voltages and decoding yields the sample voltages required by a display. But, as the SSVT signal will typically be low voltage (and a higher voltage output is required for a display), amplification is necessary. Note that
Augmented Reality or Virtual Reality Headset with Analog Transport
Until now, data has been transferred within a VR headset, and to and from that headset, using digital video signals. This digital information then needs to be converted to analog pixel information on-the-fly using D-to-A conversion at the source drivers of the displays. Transport using digital video signals requires compression, entails higher power consumption (generating extra heat) and more EMI emissions, adds latency, and struggles to provide the color depth, high frame rates and high resolution desired. Latency (the time required to perform all of the computation needed for digital transport) is a particularly critical concern in VR systems, in that any user-perceptible delay can induce nausea and make the system unusable. In addition, D-to-A conversion at the source drivers requires more space and expense. What is desirable is a VR or AR headset that uses an improved technique for video transport that addresses the above concerns.
It is realized that digitization of the video signal intended for a virtual reality (VR) or augmented reality (AR) visor may take place at the signal source of the system (e.g., at the GPU in the headset processor); the digital signal is then transferred to the displays in the visor, where it is converted back to analog to be loaded onto the displays. Or, the video content of a system may be originally digital. Either way, the only purpose of this digital signal is data transfer to the displays of the visor. Therefore, we realize that it is much more beneficial to avoid digitization or digital signals altogether and directly transfer the analog data from the video source to the displays. This can be done using SAVT, resulting in accurate analog voltages being transmitted to the display drivers. The analog data has high accuracy, so there is no need for high bit depth. This means the sample rate is at least a factor of ten lower than in the case of digital data transfer, leaving further bandwidth for expansion.
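As a rough check of this factor-of-ten figure (the bit widths are assumptions for illustration): serial digital transport must send one line symbol per bit, whereas SAVT sends one analog level per sample, so for b-bit samples the required symbol rates differ by the bit width b:

\[
\frac{f_{\text{symbol,digital}}}{f_{\text{symbol,analog}}} = \frac{b \cdot f_{\text{sample}}}{f_{\text{sample}}} = b \;\ge\; 10 \qquad \text{for } b = 10 \text{ or } 12 \text{ bits per sample.}
\]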
In one embodiment, a video stream at a VR headset processor is sent as an SAVT analog signal to a display or displays of the VR visor where a source driver receives the SAVT analog signal and drives the display with the original video stream. Multiple displays may be driven in the same manner.
In a second embodiment, after the SAVT analog signal is created at the headset processor, the SAVT analog signal is sent wirelessly to the display or displays of the visor where it is received at a wireless receiver, converted back to wired format, received at an SAVT receiver and then displayed.
In a third embodiment, a wireless SAVT analog signal is received at the headset processor and then forwarded to the VR visor for reception and display.
In a fourth embodiment, a wireless SAVT analog signal is received at the headset processor, converted back to wired format, sent wirelessly to the display or displays of the visor where it is received at a wireless receiver, converted back to wired format, and then displayed.
In a fifth embodiment, a video stream is stored in persistent storage on the headset processor using SAVT techniques. The stored analog data may then be read from persistent storage and then transmitted from storage as the original video stream.
These wireless inventions apply to uncompressed video samples; the resulting compression-free video transport enables advanced virtual reality displays. Advantages include: negligible latency (one reason being that compression of a video signal is not required); low display chipset power consumption (less heat, longer battery life, lighter, less expensive, more robust cabling); greater field of view; greater color depth; high frame rates and resolutions; increased noise immunity; ready EMI emissions compliance; longer signal reach; greater video throughput; and SWaP-C advantages (size, weight, power and cost). The invention is especially applicable to displays used in VR headsets, such as LCD and OLED panels. The advantage of low power consumption is particularly important for "untethered" VR systems, which rely on batteries in the headset itself for power rather than on a cable to which the headset is tethered. With respect to the video throughput advantage, a wider field of view (usually expressed in degrees) provides a more immersive experience, but this wider field of view requires more video information; therefore, higher throughput also enables a greater field of view.
As known in the art, a VR headset is typically worn on the head of the user and includes a visor 20 that covers the user's eyes and a processor 60 typically integrated with the visor or mounted on the back of the user's head. Visor 20 has a left display 32 and a right display 34 for displaying the virtual-reality images or augmented-reality images to the user. Once the left and right displays receive the images to be seen by the user, different techniques may be used to display those images to the user. In one straightforward technique, left and right displays 32 and 34 are placed in front of the user's eyes. In another technique, typically referred to as a heads-up display (HUD), displays 32 and 34 are not viewed directly by the user; rather, their images are projected and reflect off of a glass or other surface within the visor in front of the user's eyes.
Processor 60 includes a core AI/ML module 62 (including a processor for executing artificial intelligence or machine learning applications, as well as other suitable processors, programs, memory, etc.), a GPU 64, SAVT storage 66 and any suitable interface to the outside world, such as an RF access point 72 used to communicate wirelessly (using digital or SAVT signals) with a network, the Internet, other computers, etc. A USB port 74 may also be provided to communicate with another computer. Processor 60 may be mounted on the user's head and communicate with visor 20 via wires, cables or wirelessly. Or, processor 60 may be mounted anywhere else on the user's body (such as in a backpack or on a belt) or may be remote from the user (such as in a nearby computer, vehicle, building, etc.) and communicate with the visor 20 wirelessly.
During operation, headset 10 provides numerous advantages (such as less heat dissipation, less power consumption, greater noise immunity, fewer EMI emissions, negligible latency, greater image quality, etc.) by using the novel sampled analog video transport (SAVT) technique to transport video signals to the visor from the processor, as well as to transport video signals between the processor and another computer wirelessly.
As shown, an SAVT transmitter 82 within the processor transmits an SAVT signal 92 to each of displays 32 and 34 using either a wired or a wireless connection. A technique for inputting a digital video signal, transmitting an SAVT signal to a display and integrating source drivers of that display with the SAVT signal is described in detail above in
The VR visor may include only a single display, in which case SAVT transmitter 82 sends a single SAVT signal 92 to that single display. In the case of multiple displays (most often, two displays), there may be two (or more) SAVT transmitters 82, each receiving a video stream from GPU 64 (or from a VR bridge, a video board, a combined SoC/TCON/GPU, a video splitter, etc., depending upon the implementation of the particular VR headset) and each transmitting an SAVT signal 92 to one of the displays 32 and 34. Typically, the video stream sent to each SAVT transmitter will be the same video stream in order to display the same images in front of each eye, although depending upon the implementation, the video stream sent to each SAVT transmitter may be different.
In an alternative embodiment in which only a single SAVT transmitter 82 is used, the input will be a single video stream and the output (i.e., each EM Signal) will be split or duplicated and transmitted to each of the two display panels. One of skill in the art will find it straightforward to split or duplicate a signal in order to send the same synchronized signal to two display panels. In this embodiment, each panel will display the same images based upon the input video stream.
As mentioned above, any of the SAVT signals shown in
Further, processor 60 may also include SAVT storage 66 which stores video or other data in a technique using an SAVT representation. A technique for implementing storage 66 is described in U.S. patent application Ser. No. 17/887,849 (docket No. HYFYP006) incorporated by reference above. In addition, the integrated source driver described herein may be fully or partially implemented directly upon the glass of either or both displays 32, 34 as described in U.S. patent application Ser. No. 18/117,288 (docket No. HYFYP014) incorporated by reference above.
Described below are embodiments providing various levels of integration of an SAVT transmitter 82 with a GPU 64. These embodiments provide the advantages discussed above. In each of these embodiments, an SAVT signal is generated within processor 60 near GPU 64 and then delivered to source drivers of displays 32, 34 for displaying video data. Compared to conventional digital video transport techniques, these embodiments provide greater reach, greater noise immunity and lower power consumption (depending upon the level of integration).
Alternative to
Or, alternative to
Returning now to
Also shown is SAVT transmitter 82, which generates SAVT signals 92 for the source drivers as well as power and control signals for the gate drivers (not shown). Also not shown are a rigid PCB as well as individual flexible PCBs, each holding a source driver that generates source voltages for the display panel. Optional signals (not shown) provide information concerning the display panel back to the transmitter 82 to assist with the SAVT signals. Generation of the gate driver control signals may be performed by the timing controller (or by other specific hardware) based on synchronization information from the source drivers. Typically, panels having more than about 1,024 columns are implemented with an array of source driver chips due to pin count constraints, one source driver per chip; for panels with fewer columns, it is contemplated that only a single source driver is needed. Typically, an SAVT transmitter and an SAVT receiver are connected by a transmission medium. In various embodiments, the transmission medium can be a cable (such as HDMI, flat cable, fiber optic cable, metallic cable, or non-metallic carbon-track flex cable), or can be wireless. There may be numerous EM pathways of the transmission medium, one pathway per encoder.
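As illustrative arithmetic (the outputs-per-chip figure is an assumption, not from the specification): an 8K panel with 7680 columns of RGB pixels has 7680 × 3 = 23,040 source lines, so with source driver chips of, say, 960 outputs each,

\[
\left\lceil \frac{7680 \times 3}{960} \right\rceil = 24 \text{ source driver chips}
\]

are required, which is why panels beyond roughly 1,024 columns use an array of chips rather than a single chip.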
As previously noted, one of the possible options for the transmission medium of the P EM signals from an SAVT transmitter 82 is wireless. As described in detail below, a wireless embodiment for transmitting and receiving SAVT electromagnetic signals is provided.
Referring to
Referring to
During operation, one or more electromagnetic (EM) signals (P), generated by the SAVT transmitter 82, are provided to the one or more modulators 50. In response, the modulators 50 each modulate one of the electromagnetic signals onto one of (P) different carrier frequency signals, respectively. Preferably, the (P) carrier signals are at different frequencies, but are all derived from the same base sine frequency. By performing the modulations, the (P) electromagnetic signals are essentially each superimposed onto the (P) carrier frequency signals, respectively. The bandpass filters 52 then filter each of the modulated carrier frequency signals, respectively. The bandpass filter outputs are next summed together at the summing node 53, which effectively sums all of the P voltage waveforms to produce a composite signal. The amplifier 54 amplifies the composite signal for the antenna 44. In response, the antenna 44 wirelessly broadcasts the composite signal (i.e., the amplified, summed, filtered and modulated carrier frequency signals). Preferably, both the amplifier and antenna are selected to be able to handle the additional bandwidth created by the composite signal. The above modulation and broadcasting operations are performed continually so long as the SAVT transmitter 82 is generating (P) electromagnetic signals from a stream of video samples. As a result, a wireless signal representing the stream of video samples is continually broadcast.
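A minimal baseband model of this transmit path is sketched below (in Python, assuming ideal mixers; the bandpass filters 52 are omitted for brevity, the carriers are taken as harmonics of the base frequency as one possible choice, and all frequencies are illustrative only).

```python
import numpy as np

def composite(em_signals, base_freq, fs):
    """Modulate P baseband EM signals onto P distinct carriers (all
    derived from one base sine frequency), then sum them into a single
    composite waveform for one amplifier 54 and one antenna 44.
    Sketch only: ideal mixers, bandpass filters 52 omitted."""
    P, n = em_signals.shape
    t = np.arange(n) / fs
    out = np.zeros(n)
    for p in range(P):
        carrier = np.cos(2 * np.pi * base_freq * (p + 1) * t)  # derived carrier
        out += em_signals[p] * carrier     # mixer 50: superimpose onto carrier
    return out                             # summing node 53

fs = 1e9                                   # model sample rate (assumed)
em = np.random.randn(4, 4096)              # P = 4 level signals
tx = composite(em, base_freq=50e6, fs=fs)  # fed to amplifier 54 / antenna 44
```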
Referring to
During operation, the composite signal broadcast by the transmitter 42 is received by the antenna 48. The gain controller 55 adjusts the gain of the received composite signal; the gain controller may be implemented using either an Automatic Gain Controller (AGC) or a Programmable Gain Amplifier (PGA). Either way, the gain-adjusted composite signal is provided to each of the demodulators 56.
In response, each demodulator 56 demodulates and produces one of the (P) electromagnetic signals from the composite signal. In one embodiment, each of the demodulators 56 is a superheterodyne receiver, which uses frequency mixing to convert the received signal to an Intermediate Frequency (IF) that can be more readily processed than the original incoming composite signal. Alternatively, each of the demodulators 56 is a Direct Conversion Receiver (DCR), a radio receiver designed to demodulate the incoming composite signal using synchronous detection driven by a local oscillator whose frequency is the same as, or very close to, the carrier frequencies of the incoming composite signal. Regardless of the type of demodulator used, each of the (P) demodulated signals is provided to one of the low pass filters 57, respectively. Each lowpass filter filters its received demodulated electromagnetic signal and provides its output to the SSVT receiver 89 as previously described.
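Continuing the transmit sketch above, one demodulator 56 may be modeled as direct conversion: mixing with a local oscillator at the carrier frequency, followed by the low pass filter 57. A windowed-sinc FIR stands in for the filter, and all parameters are illustrative.

```python
import numpy as np

def demodulate(composite_sig, carrier_freq, fs, taps=129):
    """Direct-conversion model of one demodulator 56: mix the received
    composite signal with a local oscillator at (or very near) the
    carrier frequency, then low-pass filter (filter 57) to recover one
    baseband EM signal. The factor of 2 compensates the mixing loss."""
    t = np.arange(composite_sig.size) / fs
    mixed = 2.0 * composite_sig * np.cos(2 * np.pi * carrier_freq * t)
    fc = carrier_freq / 2                    # cutoff below the carrier spacing
    k = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(2 * fc / fs * k) * np.hamming(taps)  # windowed-sinc low-pass
    h /= h.sum()                                     # unity DC gain
    return np.convolve(mixed, h, mode="same")

# Self-contained demo: a 2 MHz baseband tone on a 50 MHz carrier (all
# figures illustrative), recovered by synchronous detection.
fs = 1e9
t = np.arange(8192) / fs
baseband = np.sin(2 * np.pi * 2e6 * t)
rx = baseband * np.cos(2 * np.pi * 50e6 * t)     # received (noise-free) signal
recovered = demodulate(rx, carrier_freq=50e6, fs=fs)  # approximately baseband
```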
The discriminator circuit 58 provides a feedback loop between an output (P) from the demodulators and filters 56, 57 and the VCFS 59. In the event one or more of the frequencies used by a demodulator 56 for demodulation drifts, the discriminator circuit 58 acts to adjust the demodulation frequency (or frequencies) so that it locks onto and is the same as the received carrier frequency.
Described above is a wireless embodiment in which any number of electromagnetic signals are modulated, filtered and then summed in order to be amplified by an amplifier and output by an antenna, resulting in lower cost, as only a single amplifier and a single antenna are needed. In an alternative embodiment of the wireless transmitter 42, there is no summing node 53, and each of the (P) electromagnetic signals from the SAVT transmitter 82 is modulated and filtered as described, and then amplified and output using a power amplifier and an antenna per signal. In other words, instead of a single power amplifier and antenna, there will be (P) amplifiers and antennas. Similarly, the wireless receiver 46 may be implemented using (P) antennas, (P) gain controllers, and a demodulator and filter per signal, as described.
Although the above description describes video samples that are encoded and transported via SAVT in order to display images upon a panel of a VR visor, reference is also made herein to chemical samples or haptic samples that may also be encoded and delivered via SAVT. In other words, any sample value arriving at an SAVT transmitter 82 or at an SAVT transmitter producing an SAVT signal received at the headset processor 60 (e.g., arriving via access points 72 or 74, via paths 81, etc.) may represent a chemical (such as smell) or haptic sensation (such as touch). By way of example, analog samples have been described as being video samples, representing light, but a particular sample value may represent a certain chemical or a haptic sensation. By convention, it may be predetermined between an SAVT transmitter and an SAVT receiver (or between a video source and a destination processor) that certain sample positions within a frame of video (e.g., the last dozen positions of the first line of the frame) will hold chemical or haptic sample values instead of video sample values.
For instance, sample values representing the chemicals associated with the smell of a particular tree (e.g., value "0.1" means eucalyptus tree, value "0.2" means Jeffrey pine, etc.) may be embedded within the frame or frames in which the image of that tree appears in a video stream presented to the user on the display of the VR visor. When the user wearing the VR visor then turns to look at that tree, approaches the tree, attempts to touch the tree, etc., the headset processor or VR visor may make use of those chemical sample values sent via SAVT to synthesize the odor of that particular tree at that time. An olfactometer associated with the VR headset then presents the synthesized odor to the user via a mask using controllable flow valves. One example of such a mask to deliver synthesized odors is described in "The Smell Engine: a System for Artificial Odor Synthesis in Virtual Environments," published by IEEE in the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). Other odors may include: fresh water, contaminated water, smoke, combinations of odors, etc. Advantageously, identification of specific odors is embedded along with the SAVT-encoded images of the object that produces those odors for easy synthesis of a particular odor in conjunction with the object that the user is viewing on a VR visor.
Further, sample values representing the haptic sensation (e.g., touch) associated with a particular tree (e.g., value "0.6" means smooth eucalyptus bark, value "0.7" means rough Jeffrey pine bark, etc.) may be embedded within the frame or frames in which the image of that tree appears in a video stream presented to the user on the display of the VR visor. When the user wearing the VR visor then attempts to touch the tree, the headset processor or VR visor may make use of those haptic sample values to reproduce that sensation on the surface that the user is actually touching (e.g., a haptic pad, joystick, handheld controller, etc.) or on a surface in close proximity to the user's hand. Other examples of haptic sensations that may be transmitted via particular sample values encoded via SAVT include: heat, cold, wind, humidity, dryness, etc. Advantageously, identification of specific haptic sensations is embedded along with the SAVT-encoded images of the object that produces those sensations for easy synthesis of a particular sensation in conjunction with the object that the user is viewing or touching on a VR visor.
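The convention described above may be sketched as follows (the sample values follow the examples given, but the positions, reserved-region size, and helper names are hypothetical): the last dozen positions of a frame's first line are reserved for chemical or haptic codes that the receiver looks up by prior agreement.

```python
# Hypothetical convention: the last dozen sample positions of a frame's
# first line carry chemical/haptic values rather than video, as agreed
# in advance between SAVT transmitter and receiver. Code values follow
# the examples in the text; everything else is illustrative.
ODORS = {0.1: "eucalyptus tree", 0.2: "Jeffrey pine"}
HAPTICS = {0.6: "smooth eucalyptus bark", 0.7: "rough Jeffrey pine bark"}

def embed_sidecar(first_line, values, n_reserved=12):
    """Transmitter side: overwrite the reserved tail of the first line
    with non-video sample values (padded with 0.0)."""
    padded = (values + [0.0] * n_reserved)[:n_reserved]
    first_line[-n_reserved:] = padded
    return first_line

def extract_sidecar(first_line, n_reserved=12):
    """Receiver side: pull the reserved samples back out and look up
    any known odor or haptic codes."""
    tail = first_line[-n_reserved:]
    return [(v, ODORS.get(v) or HAPTICS.get(v)) for v in tail if v != 0.0]

line = [0.5] * 3840                     # one line of video samples
line = embed_sidecar(line, [0.1, 0.6])  # scene contains a eucalyptus tree
print(extract_sidecar(line))  # [(0.1, 'eucalyptus tree'), (0.6, 'smooth ...')]
```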
The inventions include these additional embodiments.
C1. An apparatus that integrates a DDIC-TCON (Display Driver Integrated Circuit—Timing Controller) with a transmitter, said apparatus comprising:
C2. A transmitter as recited in claim C1 wherein said distributor further includes
C3. A transmitter as recited in claim C1 wherein said digital video samples distributed into said input vectors make up a line of an image.
C4. A transmitter as recited in claim C1 wherein said digital video samples are distributed into said input vectors at a first frequency and wherein said digital video samples are serially output from each of said input vectors at a second frequency different from said first frequency.
C8. An integrated transmitter and timing controller as recited in claim C1 wherein said apparatus is located within about 2 cm of said system-on-chip.
C9. An apparatus as recited in claim C1 wherein said apparatus is integrated within a single integrated circuit of said mobile telephone.
C10. A transmitter as recited in claim C9 wherein said apparatus is also integrated with said system-on-chip of said mobile telephone.
C11. An apparatus as recited in claim C1 further comprising:
D1. An analog DDIC-SD (Display Driver Integrated Circuit—Source Driver) of a mobile telephone comprising:
D2. An analog DDIC-SD as recited in claim D1 further comprising a second storage array having positions designated for each sampling amplifier, wherein said sampling amplifiers are further arranged to alternately write said respective portions of said analog video samples into said storage array or into said second storage array, and wherein said column drivers alternately read from said storage array while said sampling amplifiers write into said second storage array and read from said second storage array while said sampling amplifiers write into said storage array.
D3. An analog DDIC-SD as recited in claim D2 further comprising:
D4. An analog DDIC-SD as recited in claim D1 wherein a portion of said analog video samples are used for synchronization and are not driven into columns of said display panel.
D5. An analog DDIC-SD as recited in claim D1 wherein said analog DDIC-SD does not include any digital-to-analog converters (DACs) used to convert video samples.
D6. An analog DDIC-SD as recited in claim D2 wherein said column drivers are further arranged to read in parallel from said storage array when said storage array is full or to read in parallel from said second storage array when said second storage array is full.
D7. An analog DDIC-SD as recited in claim D1 wherein said series of analog video samples arrive in a predetermined permutation that dictates that each sampling amplifier outputs its respective portion of analog video samples to contiguous storage locations in said storage array.
D8. An analog DDIC-SD as recited in claim D1 wherein said electromagnetic signal includes control signals that are used for synchronization and are not driven into columns of said display panel, said source driver further comprising:
F1. A video transport apparatus comprising:
F2. An apparatus as recited in claim F1 wherein said transmitter is integrated with a DDIC-TCON, said apparatus further comprising:
F3. An apparatus as recited in claim F2 wherein said transmitter is located within about 2 cm of said system-on-chip.
F4. An apparatus as recited in claim F2 wherein said transmitter is integrated within a single integrated circuit of said mobile telephone.
F5. An apparatus as recited in claim F2 wherein said transmitter is also integrated with said system-on-chip of said mobile telephone.
F6. An apparatus as recited in claim F1 wherein each source driver is an analog DDIC-SD (Display Driver Integrated Circuit—Source Driver) of a mobile telephone.
F7. An apparatus as recited in claim F6 wherein said analog DDIC-SD does not include any digital-to-analog-converters (DACs) used to convert video samples.
I1. A source driver of a display unit comprising:
I2. A source driver as recited in claim I1 wherein said analog sample values are an ordered sequence of continuous-amplitude analog levels.
I3. A source driver as recited in claim I1 wherein said plurality of column drivers are located on said glass substrate of said display panel between a pixel display area and a perimeter of said glass substrate.
I4. A source driver as recited in claim I1 wherein said source driver does not include a digital-to-analog converter for converting video samples.
I5. A source driver as recited in claim I1 wherein said transistors are able to operate at a clock frequency required by said column drivers.
I6. A source driver as recited in claim I5 wherein said transistors are thin-film transistors (TFTs), and are either low-temperature poly-silicon (LTPS) transistors or are indium-gallium-zinc oxide (IGZO) transistors.
I7. A source driver as recited in claim I1 wherein pixels of said display panel are implemented using the same type of transistors used to implement said column drivers.
I8. A source driver as recited in claim I1 wherein said display unit is a display of a mobile telephone.
I9. A source driver as recited in claim I1 wherein each of said column drivers includes a high-voltage driver.
I10. A source driver as recited in claim I9 wherein each of said column drivers further includes a level converter, each level converter operating at least to shift said output voltage of said column driver.
I11. A source driver as recited in claim I1 wherein said collector is a two-stage collector.
J1. A source driver of a display unit comprising:
J2. A source driver as recited in claim J1 wherein said analog sample values are an ordered sequence of continuous-amplitude analog levels.
J3. A source driver as recited in claim J1 wherein said collector and said plurality of column drivers are located on said glass substrate of said display panel between a pixel display area and a perimeter of said glass substrate.
J4. A source driver as recited in claim J1 wherein said source driver does not include a digital-to-analog converter for converting video samples.
J5. A source driver as recited in claim J1 wherein said transistors are able to operate at a first clock frequency required by said column drivers and at a second clock frequency required by said collector.
J6. A source driver as recited in claim J5 wherein said transistors are thin-film transistors (TFTs), and are either low-temperature poly-silicon (LTPS) transistors or are indium-gallium-zinc oxide (IGZO) transistors.
J7. A source driver as recited in claim J1 wherein pixels of said display panel are implemented using the same type of transistors used to implement said collector and said column drivers.
J8. A source driver as recited in claim J1 wherein said display unit is a display of a mobile telephone.
J9. A source driver as recited in claim J1 wherein each of said column drivers includes a high-voltage driver.
J10. A source driver as recited in claim J9 wherein each of said column drivers further includes a level converter, each level converter operating at least to shift said output voltage of said column driver.
J11. A source driver as recited in claim J1 wherein said input terminal is implemented outside an edge of a display panel of said display unit.
J12. A source driver as recited in claim J1 wherein said input terminal is implemented on said glass substrate of said display panel.
J13. A source driver as recited in claim J1 wherein said collector is a two-stage collector.
K1. A display unit comprising:
K2. A display unit as recited in claim K1 wherein said collector of each source driver is implemented outside an edge of said display panel of said display unit, wherein said column drivers are implemented using at least transistors on said glass substrate of said display panel, and wherein said transistors are able to operate at a clock frequency required by said column drivers.
K3. A display unit as recited in claim K1 wherein said collector and said column drivers of said each source driver are implemented using at least transistors on said glass substrate of said display panel, and wherein said transistors are able to operate at clock frequencies required by said collector and by said column drivers.
K4. A display unit as recited in claim K1 wherein said analog sample values are an ordered sequence of continuous-amplitude analog levels.
K5. A display unit as recited in claim K2 wherein said plurality of column drivers of said each source driver are located on said glass substrate of said display panel between a pixel display area and a perimeter of said glass substrate.
K6. A display unit as recited in claim K1 wherein said each source driver does not include a digital-to-analog converter for converting video samples.
K7. A display unit as recited in claim K2 wherein said transistors are thin-film transistors (TFTs), and are either low-temperature poly-silicon (LTPS) transistors or are indium-gallium-zinc oxide (IGZO) transistors.
K8. A display unit as recited in claim K3 wherein said transistors are thin-film transistors (TFTs), and are either low-temperature poly-silicon (LTPS) transistors or are indium-gallium-zinc oxide (IGZO) transistors.
K9. A display unit as recited in claim K1 wherein said display unit is a display of a mobile telephone.
K10. A display unit as recited in claim K1 wherein each collector of said each source driver is a two-stage collector.
P1. A feedback apparatus of a source driver of a display unit, said feedback apparatus comprising:
P2. A feedback apparatus as recited in claim P1 wherein said control signal commands said analog switch to latch said voltage value when said column driver is driving a column of said display.
P3. A feedback apparatus as recited in claim P1 further comprising:
P4. A feedback apparatus as recited in claim P1 wherein said voltage value delivered to said timing controller is an analog voltage value.
P5. A source driver of said display unit comprising:
Q1. A method for providing feedback from a source driver to a timing controller of a display unit, said method comprising:
Q2. A method as recited in claim Q1 wherein said received voltage value is an analog voltage value, said method further comprising:
Q3. A method as recited in claim Q1 wherein said voltage value is an analog voltage value at said output of said amplifier, said method further comprising:
Q4. A method as recited in claim Q3, said method further comprising:
Q5. A method as recited in claim Q2, said method further comprising:
Q6. A method as recited in claim Q1, said method further comprising:
R1. A transmitter comprising:
R2. A transmitter as recited in claim R1 wherein said video samples are analog RGB video samples.
R3. A transmitter as recited in claim R1 wherein said video samples are digital RGB video samples, said transmitter further comprising:
R4. A transmitter as recited in claim R1 wherein said video samples are raw analog BGRG video samples.
R5. A transmitter as recited in claim R1 wherein said video samples are raw analog RGGB video samples output from said image sensor in modified form.
R6. A transmitter as recited in claim R1 wherein said video samples are digital G video samples, said transmitter further comprising:
S1. A receiver comprising:
S2. A receiver as recited in claim S1 further comprising:
S3. A receiver as recited in claim S1 wherein said first line buffer outputs analog video samples from said first output vectors once said first line buffer is full.
S4. A receiver as recited in claim S1 wherein said receiver does not include any digital-to-analog-converters (DACs) used to convert video samples.
S5. A receiver as recited in claim S1 wherein said receiver includes at least one analog-to-digital converter (ADC) to convert said stream of analog video samples to digital video samples.
G1. A method of sending a command to a source driver of a display unit, said method comprising:
G2. A method as recited in claim G1 further comprising:
G3. A method as recited in claim G1 further comprising:
G4. A method as recited in claim G1 further comprising:
G5. A method as recited in claim G1 further comprising:
G6. A method as recited in claim G1 further comprising:
H1. A method of receiving a command at a source driver of a display unit, said method comprising:
H2. A method as recited in claim H1 further comprising:
H3. A method as recited in claim H1 wherein said control sequence is received at a single input amplifier of said source driver.
H4. A method as recited in claim H1 further comprising:
H5. A method as recited in claim H1 wherein said source driver receives said control sequence including said introduced MFM flag on all input channels of said source driver until resynchronization has occurred; and
H6. A method as recited in claim H1 further comprising:
L1. A method for performing phase alignment to determine a source driver sampling phase, said method comprising:
L2. A method as recited in claim L1, wherein said synchronization stream includes said positive pulses alternating with second positive pulses having a second amplitude smaller than said first amplitude, said method further comprising:
L3. A method as recited in claim L1 further comprising:
L4. A method as recited in claim L1 wherein said source driver includes a central comparator that indicates whether a pulse of said synchronization stream is positive or negative, said method further comprising:
L5. A method as recited in claim L1 wherein said command is an MFM (modified frequency modulation)-encoded command and said synchronization stream is an MFM-encoded stream.
L6. A method as recited in claim L1 wherein said command and said synchronization stream are all received on a single channel of said source driver.
L7. A method as recited in claim L1 wherein said synchronization stream includes negative pulses alternating with said positive pulses, said negative pulses having a negative amplitude, said method further comprising:
M1. A sample phase adjustment circuit of a source driver of a display unit comprising:
M2. A sample phase adjustment circuit as recited in claim M1 further comprising:
N1. A method of sending a command to a source driver of a display unit, said method comprising:
N2. A method as recited in claim N1 further comprising:
N3. A method as recited in claim N1 further comprising:
N4. A method as recited in claim N1 further comprising:
N5. A method as recited in claim N1 further comprising:
O1. A method of performing phase alignment within a display unit for sampling video signals, said method comprising:
O2. A method as recited in claim O1 further comprising:
O3. A method as recited in claim O1 further comprising:
O4. A method as recited in claim O1 further comprising:
T1. A virtual reality (VR) headset comprising:
T3. A VR headset as recited in claim T1 wherein said source driver further includes
T5. A VR headset as recited in claim T1 wherein said source driver does not include a digital-to-analog converter (DAC) for purposes of converting digital pixel data to analog pixel data.
T8. A VR headset as recited in claim T1 wherein said transmitter also transmits said sets of analog samples as a second analog waveform over a second electromagnetic pathway, and wherein said visor further includes a second display having a second source driver that receives said sets of analog samples from said second analog waveform for display on said second display.
T9. A VR headset as recited in claim T1 wherein said transmitter is a wireless transmitter, wherein said receiver is a wireless receiver, wherein said electromagnetic pathway is an RF electromagnetic pathway, and wherein said headset processor is arranged to transmit said analog waveform over said RF electromagnetic pathway from said wireless transmitter to said wireless receiver.
T10. A VR headset as recited in claim T1 wherein one of said analog samples has a value that represents a chemical odor or a haptic sensation, said VR visor further including a device to reproduce said chemical odor or said haptic sensation for a user based upon the value of said one of said analog samples.
This application is a continuation-in-part of U.S. patent application Ser. No. 18/442,491 (Attorney Docket No. HYFYP015) filed Feb. 15, 2024, entitled “ANALOG VIDEO TRANSPORT TO A DISPLAY PANEL AND SOURCE DRIVER INTEGRATION WITH DISPLAY PANEL,” which claims priority to U.S. provisional patent application Nos. 63/500,341 (Docket No. HYFYP0015P2) filed May 5, 2023, entitled “ANALOG VIDEO TRANSPORT TO A DISPLAY PANEL AND SOURCE DRIVER INTEGRATION WITH A DISPLAY PANEL” and 63/447,241 (Docket No. HYFYP0015P) filed Feb. 21, 2023, entitled “ANALOG VIDEO TRANSPORT TO A DISPLAY PANEL,” all of which are hereby incorporated by reference. This application is a continuation-in-part of U.S. patent application Ser. No. 18/334,692 (Docket No. HYFYP009C1) filed Jun. 14, 2023, entitled “ANALOG VIDEO TRANSPORT INTEGRATION WITH DISPLAY DRIVERS,” which is a continuation of U.S. patent application Ser. No. 17/900,570 (Docket No. HYFYP009) filed Aug. 31, 2022, entitled “SPREAD-SPECTRUM VIDEO TRANSPORT INTEGRATION WITH DISPLAY DRIVERS,” which claims priority of application Nos. 63/280,017 (Docket No. HYFYP009P2) filed Nov. 16, 2021, entitled “SPREAD-SPECTRUM VIDEO TRANSPORT INTEGRATION WITH DISPLAY DRIVERS” and 63/240,630 (Docket No. HYFYP009P1) filed Sep. 3, 2021, entitled “SPREAD SPECTRUM VIDEO TRANSPORT AND DISPLAY INTEGRATION,” all of which are hereby incorporated by reference. This application is a continuation-in-part of U.S. patent application Ser. No. 18/442,447 (HYFYP017), filed Feb. 15, 2024, which claims priority to U.S. provisional patent application Nos. 63/611,274 (Docket No. HYFYP0017P2) filed Dec. 18, 2023, entitled “VIDEO TRANSPORT WITHIN A MOBILE DEVICE” and 63/516,220 (Docket No. HYFYP0017P) filed Jul. 28, 2023, all of which are hereby incorporated by reference. This application claims priority to U.S. provisional patent application No. 63/625,473 (Docket No. HYFYP0018P) filed Jan. 26, 2024, entitled “SIGNAL TRANSPORT WITHIN VEHICLES” which is hereby incorporated by reference.
Provisional Applications:

Number | Date | Country
---|---|---
63500341 | May 2023 | US
63447241 | Feb 2023 | US
63280017 | Nov 2021 | US
63240630 | Sep 2021 | US
63611274 | Dec 2023 | US
63516220 | Jul 2023 | US
63625473 | Jan 2024 | US
Continuations:

 | Number | Date | Country
---|---|---|---
Parent | 17900570 | Aug 2022 | US
Child | 18334692 | | US
Continuation-in-Parts:

 | Number | Date | Country
---|---|---|---
Parent | 18442491 | Feb 2024 | US
Child | 18821542 | | US
Parent | 18334692 | Jun 2023 | US
Child | 18821542 | | US
Parent | 18442447 | Feb 2024 | US
Child | 18821542 | | US