The present disclosure relates generally to techniques for facilitating communication between a video source and a display panel and, more particularly, to techniques for using data-centric rather than frame-centric video information to facilitate efficient communication between the video source and the display panel.
This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present disclosure, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
In the marketplace today, there are a wide variety of electronic devices available for a wide variety of purposes. Such devices include cellular telephones, tablet computers, laptop computers, personal computers, televisions, headphones, Bluetooth® enabled watches, printers, and cameras, just to name a few. As display technology becomes more and more sophisticated, more and more data is communicated between a video source and the display panel that presents video information. For example, ever increasing screen resolutions are resulting in significant increases in data that is presented for display on these higher resolution displays. Oftentimes, the bandwidth constraints for transporting this data from the video source to the display panel may be a limiting factor for such increased data transports.
Additionally, there is a trend towards connector convergence, whereby the same connectors may be used for a variety of purposes, such as power, asynchronous data, and isochronous video data simultaneously. However, such converged connectors may offer a limited amount of communications bandwidth, as bandwidth considerations must be made for both video transport and the additional features (e.g., asynchronous data communications). Traditionally, when dedicated interfaces are used for video data (e.g., VGA for analog video data or DVI/DisplayPort/HDMI for digital video data), a full frame of video data is transmitted at the frame rate, requiring significant bandwidth.
Further, as higher-bandwidth data transmission becomes more desirable (e.g., for higher-resolution applications), it may be beneficial to incorporate forward error correction (FEC). FEC is a technique used to control errors in data transmission, in which the source of the transmission provides redundant data to the destination of the transmission. Using this redundant data, the destination may correct any erroneous data of the transmission. Unfortunately, traditional display interfaces use low level symbol encoding schemes (e.g., 8B10B encoding) that incur significant overhead (e.g., 20% overhead in 8B10B encoding).
To address some of the concerns mentioned above, it is proposed to allow video sources to communicate with display panels using a “data centric” approach, rather than the more traditional “frame centric” approach. Traditional display interfaces have historically moved fixed sized frames at fixed frame rates from the source to the display. This has included horizontal and vertical blanking intervals that consume transport bandwidth. This creates inefficiency, especially considering the modernization of display technologies, which allow for untethering from this fixed frame transmission of display data.
In some embodiments, deviations from traditional fixed frame rate schemes may be accomplished by enabling display panels to self-refresh based upon data provided to the display panel. However, in non-static image display, power utilization efficiencies are not greatly affected by such schemes. In non-static image rendering, frame updates are sent frequently. Thus, relatively little, if any, additional power savings are realized, because the display panel maintains a local frame buffer for the self-refresh.
Thus, concepts disclosed herein relate to a transport mechanism between the source and the display that is more data centric, rather than frame centric. In some embodiments, data is transported in a block that is sized around 200 symbols (e.g., 198 bytes) to 1000 symbols (e.g., 966 bytes). Such sizing may allow particular overhead efficiencies related to the transport to become much greater than those of traditional transport encoding (e.g., 8B10B encoding with control characters as special codes). For example, the block scheme described herein provides greater efficiencies than the encoding schemes of traditional display interfaces. For instance, as explained above, the 8B10B encoding of traditional display interfaces may have an overhead of 20%. Additionally, adding control symbols for FEC may increase the overhead (e.g., by 2%), resulting in overhead that is greater than 20% (e.g., 22%). Thus, the overall efficiency may be reduced (e.g., to 78%). In contrast, by using the block encoding scheme and the FEC blocks described herein, a minimal amount of overhead may be used to transport the stream as FEC-protected data. For example, block headers of the schemes described herein may use less overhead (e.g., sub-1% to 4% overhead), resulting in increased overall efficiencies (e.g., 99+% to 96% efficiency).
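The efficiency comparison above can be expressed as a back-of-envelope calculation. The figures are taken from the text; the function names are illustrative only and do not appear in the disclosure:

```python
# Illustrative comparison of transport overhead: 8B10B line coding
# with FEC control symbols versus the block scheme described above.

def efficiency_8b10b(fec_control_overhead=0.02):
    # 8B10B encodes 8 data bits into 10 line bits -> 20% line overhead;
    # extra FEC control symbols add roughly 2% more.
    line_overhead = 1 - 8 / 10
    return 1 - (line_overhead + fec_control_overhead)

def efficiency_block(total_symbols, payload_symbols):
    # Block scheme: the only overhead is header and FEC syndrome symbols.
    return payload_symbols / total_symbols

print(efficiency_8b10b())           # ≈ 0.78 (78% efficiency)
print(efficiency_block(198, 192))   # ≈ 0.97 for a 198-symbol block, 192 payload
print(efficiency_block(966, 960))   # ≈ 0.99 for a 966-symbol block, 960 payload
```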
The techniques provided herein, relating to removal of historical dependencies on isochronous video formats and scheduling, allow for reduced bandwidth requirements over traditional frame-centric video data communications, as an entire frame of data need not be provided, especially when portions of the video data have not changed. This may result in an increased effective bandwidth that may support higher resolutions of video data, greater color ranges (e.g., “deep color”), high dynamic range, etc., which may result in an improved user viewing experience.
Additionally, the techniques provided herein may result in error reduction via Forward Error Correction (FEC) and other techniques that are associated with “visually lossless” compression of the video data.
Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings in which:
One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
As mentioned above, data transmission bandwidth between video hosts and electronic displays is becoming an increasing concern as video display becomes more complex and/or display resolutions increase. To address some of the concerns mentioned above, it is proposed to allow video sources to communicate with display panels using a “data centric” approach, rather than the more traditional “frame centric” approach. Traditional display interfaces have historically moved fixed sized frames at fixed frame rates from the source to the display. This has included horizontal and vertical blanking intervals that consume transport bandwidth. This creates inefficiency, especially considering the modernization of display technologies, which allow for untethering from this fixed frame transmission of display data.
In some embodiments, deviations from traditional fixed frame rate schemes may be accomplished by enabling display panels to self-refresh based upon data provided to the display panel. However, such schemes may save little power in the display, because the display panel may maintain a local frame buffer for self-refresh. Moreover, when video data is provided (e.g., outside the periods of self refresh) the video data is still provided within the limitations of a fixed frame size at a fixed frame rate.
Thus, concepts disclosed herein relate to a transport mechanism between the source and the display that is more data centric, rather than frame centric. In other words, source data provided to a display may be provided as one or more data-centric blocks free of a fixed-frame size imposition, fixed-frame rate imposition, or both from the display. The source data may provide the presentation data to a display in a manner that does not comport with traditional frame-based requirements of displays. For example, source data may be provided for only a portion of a frame that has changed, without providing source data for portions of the frame that have not changed. Further, source data may be provided upon changes, rather than based upon fixed frame timings expected by the display.
In some embodiments, data is transported in a block that is sized around 200 bytes (e.g., 198 bytes). Such sizing may allow for particular overhead efficiencies related to the transport to become much more efficient than traditional transport encoding (e.g., 8B10B encoding with control characters as special codes). For example, encoding and packetization of the display data may be optimized towards an efficient implementation of Forward Error Correction schemes, which may be beneficial in embodiments where video compression schemes are used.
In certain embodiments, particular overhead elements desirable for the transport mechanism are described. For example, as may be appreciated, in certain embodiments, the overhead elements may include an indication of the particular data to be presented on the display. Additionally, timing indications for presentation of the particular data may be provided as an overhead element. Additionally or alternatively, Forward Error Correcting (FEC) parity (or Syndrome) bits may be included to reduce an error rate in presentation of the particular data at the display panel. Indeed, such FEC may be important in embodiments where compression is used, because a single bit error can result in undesirable visible artifacts during the presentation of the particular data at the display panel.
Turning now to the overhead elements, the discussion begins with the Forward Error Correction (FEC). Error correcting codes, such as Reed-Solomon codes, may be used to overcome erroneous data transmissions. Specifically, redundant information is added to data to ensure that the data may be reconstructed at a transmission destination, regardless of errors (e.g., transmission errors, access errors, storage errors, etc.). Using error-correcting codes, data is encoded as a sequence of symbols making up a codeword. Specifically, the Reed-Solomon encoding process assumes a code of RS(N, K), which generates code words of length N symbols, each storing K symbols of data. The corresponding Reed-Solomon decoding process, when presented with a code word containing N (possibly corrupted) symbols, can reconstruct the K symbols of data (unless the N symbols are so badly corrupted that they are beyond the reconstruction capability of the code). Accordingly, (N−K)/2 symbol errors may be corrected.
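The RS(N, K) bookkeeping above can be sketched directly; function names are illustrative only:

```python
# An RS(N, K) code carries K data symbols in an N-symbol codeword.
# The remaining N - K symbols are redundant (syndrome) symbols, and
# up to (N - K) / 2 symbol errors may be corrected.

def rs_parity_symbols(n, k):
    # Redundant symbols appended by the encoder.
    return n - k

def rs_correctable_errors(n, k):
    # Each correctable symbol error costs two parity symbols.
    return (n - k) // 2

# For the RS(198, 194) code discussed below: 4 parity symbols,
# correcting up to 2 symbol errors per block.
assert rs_parity_symbols(198, 194) == 4
assert rs_correctable_errors(198, 194) == 2
```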
In some embodiments, the symbol size may be selected based upon the color component size. For example, many systems use 8-bit color. Accordingly, symbols that are 8-bit in size may be efficient. Further, in deeper-color systems (e.g., high dynamic range (HDR) displays), 10-bit or 12-bit data per color per pixel may be used. Accordingly, the symbol size may be matched to 10-bit or 12-bit for these embodiments.
The implementation may be fixed to a single symbol size. Accordingly, the implementation may be designed such that 10-bit and 12-bit pixel sizes may be packed into larger block sizes of 8-bit symbols with a defined packing process. In such embodiments, only one symbol size may be used to encode different source content color depths.
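One such packing process can be sketched as follows. This is a hypothetical MSB-first packing of 10-bit color components into 8-bit symbols; the disclosure only requires that some defined packing process exist, and the bit ordering here is an assumption:

```python
# Pack 10-bit color components into a stream of 8-bit symbols,
# most-significant bits first (illustrative packing order).

def pack_10bit_to_bytes(components):
    """Pack a list of 10-bit values into bytes, MSB first."""
    bits = 0
    nbits = 0
    out = bytearray()
    for c in components:
        assert 0 <= c < 1 << 10, "each component must fit in 10 bits"
        bits = (bits << 10) | c
        nbits += 10
        while nbits >= 8:
            nbits -= 8
            out.append((bits >> nbits) & 0xFF)
    if nbits:
        out.append((bits << (8 - nbits)) & 0xFF)  # zero-pad final byte
    return bytes(out)

# Four 10-bit components (40 bits) fit exactly into five 8-bit symbols.
assert len(pack_10bit_to_bytes([0x3FF, 0, 0x155, 0x2AA])) == 5
```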
In some embodiments, 8-bit data and 12-bit data may be packed into 10-bit fields. Alternatively, 8-bit data and 10-bit data may be packed into 12-bit fields. Further, a forward error correction (FEC) code may be added on top of traditional encoding schemes (e.g., 8B10B), resulting in encoding of 8B10B symbols rather than pixel colors.
The Reed-Solomon example below is useful for 8-bit pixel depth per color. 8-bit symbols can support blocks up to 255 symbols in size. If 10-bit pixel depth per color is used, then a symbol size of 10 bits may be used. This enables block sizes of approximately 1000 symbols (e.g., up to 1023).
RS(198,194) for 8-bit symbols and RS(966,960) for 10-bit symbols are large enough that forward error correcting codes may be efficient, with 4 syndrome bytes being able to correct two symbol errors. Such a scheme may greatly reduce error rates in the presented data (e.g., 1 error in 1E9 bits may be reduced to one uncorrectable error event in 1E20 bits).
Thus, in embodiments using this RS(198,194) scheme, 198 byte blocks of data may be used for transmission from the source to the display panel. Of the 198 symbols of data transmitted, there are 194 symbols of input data. In embodiments using this RS(966,960) scheme, 966 symbol blocks of data may be used for transmission from the source to the display panel. Of the 966 symbols of data transmitted, there are 960 symbols of input data. In alternative embodiments, other RS(N, K) schemes may be used.
As previously discussed, the block schemes described herein may result in increased efficiencies over traditional encoding schemes. FEC may be used to protect against errors. The FEC may use encoding blocks that are sized to enable proper error correction. Further, data may be labeled as pixel payload data or control data (e.g., timing information), e.g., by utilizing headers to identify particular data in a block of data. Previous interfaces have embedded timing control within the data stream. To identify pixel payload from timing control, these traditional techniques used 2 bits of data to identify whether symbols were pixel payload or timing control. This traditional technique results in high overhead. For example, 8B10B encoding has an overhead of 20%. When desired, FEC may be implemented at the symbol level, by adding additional control signals to identify the start and/or end of FEC data. Thus, control symbols used for framing, etc. may not be protected by FEC. Further, additional overhead may be needed for these extra control symbols, resulting in increased overhead (e.g., 2%). Thus, the overhead for such implementations may be greater than 20% (e.g., 22%), resulting in decreased overall efficiency (e.g., 78%).
In contrast, by using the block scheme with FEC described herein, a minimal amount of overhead may be used to identify the various data of the blocks. Accordingly, the block data schemes described herein may use less overhead (e.g., 4% overhead), resulting in increased overall efficiencies (e.g., 96% efficiency).

4K Mapping
There are several sizes of data labeled as “4K.” The most common may be 3840 pixels per line, while another may include 4096 pixels per line (4096 = 2^12). Each pixel may have 3 colors. If each color is mapped to a symbol, then there are 3×3840=11,520 symbols per line. If FEC blocks are created with an integral number of pixel payload symbols per line, certain efficiencies may be achieved. For example, a FEC block with a pixel payload of 960 symbols may occur 12 times in the 4K line of 3840 pixels. The same code with 4096 pixels per line would have 12.8 blocks (of size 960) per line. The unused space in the last block can simply wrap to the next line, so the efficiency loss due to adding extra headers per FEC block is quite small.
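The mapping arithmetic above can be checked directly (960 payload symbols per FEC block, one symbol per color component, as stated in the text):

```python
# Number of 960-symbol FEC payload blocks needed per "4K" line.

SYMBOLS_PER_PIXEL = 3   # one symbol per color component
PAYLOAD_SYMBOLS = 960   # pixel payload symbols per FEC block

def blocks_per_line(pixels_per_line):
    return pixels_per_line * SYMBOLS_PER_PIXEL / PAYLOAD_SYMBOLS

assert blocks_per_line(3840) == 12.0   # integral: blocks align with lines
assert blocks_per_line(4096) == 12.8   # fractional part wraps to next line
```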
To construct an FEC block from 960 payload symbols, symbols may be added for packet headers (e.g., at least 2 symbols). Accordingly, the number of data symbols may equal pixel data + header data = 962. The syndrome bytes for error detection and correction would have a count of 4 for correcting double errors. Thus, the entire FEC block would have size = pixel data + header data + syndrome = 960 + 2 + 4 = 966 symbols. The 960 symbol count is 15 occurrences of a 64 symbol count, consistent with block sizes of approximately 1000.
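The block accounting above, written out as a check:

```python
# Accounting for the 966-symbol FEC block construction described above.

PIXEL_PAYLOAD = 960   # payload symbols
HEADER = 2            # packet header symbols
SYNDROME = 4          # syndrome symbols, correcting double symbol errors

data_symbols = PIXEL_PAYLOAD + HEADER
block_size = data_symbols + SYNDROME
assert data_symbols == 962
assert block_size == 966
assert PIXEL_PAYLOAD == 15 * 64   # fifteen 64-symbol groups
```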
Audio may be encoded such that its header bytes, control bytes, and audio data are packed to fit in a 64 symbol sized block. This may enable the audio to be placed at the start or end of any FEC block, without significant loss of efficiency. Audio can be a media stream type or conveyed in a special control character scheme within the methods described herein. Even when the audio data is not optimally packed within the audio protocol itself, audio is infrequent enough that overall efficiency is not negatively impacted.
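The 64-symbol audio packing constraint above can be sanity-checked as follows; the individual field sizes used in the example are assumptions for illustration:

```python
# Check that an audio packet (header bytes, control bytes, and audio
# data) fits within a 64 symbol sized block, per the scheme above.

AUDIO_BLOCK_SYMBOLS = 64

def audio_packet_fits(header_bytes, control_bytes, audio_bytes):
    return header_bytes + control_bytes + audio_bytes <= AUDIO_BLOCK_SYMBOLS

assert audio_packet_fits(2, 2, 60)        # exactly fills the block
assert not audio_packet_fits(2, 2, 61)    # one byte too many
```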
In some embodiments, the input data may include a header byte and/or a length byte. For example, using the RS(198, 194) embodiment described above, of the 194 bytes of input data, one header byte and/or one length byte may be present. The header byte may be used to indicate the data type of the input data. The length byte may provide an indication of how long the data field (e.g., the input data) is. Thus, when one header byte and one length byte are present in an RS(198, 194) embodiment, 192 bytes remain for input data. As may be appreciated, the 192 byte payload is a very convenient size, as an integral number of 192 byte payloads may transport lines of multiple video resolutions (e.g., HD, 4K, 5K, 8K video).
In situations where the data field (e.g., the input data) is less than the amount allotted for the data field (e.g., 192 bytes in RS(198, 194) embodiments), the length field may provide an indication of the number of bytes used. A byte following the last data field byte associated with the first header may be used for an additional header with an associated length. These additional headers may be useful for additional data, such as audio data or other auxiliary data. In some embodiments, alternate video streams or other supplementary data payloads may be facilitated via these additional headers and associated data fields. The overhead for identifying data and data lengths is relatively small. For example, in the RS(198, 194) embodiment described above, the overhead to identify the data and its length is 2 bytes of 194 bytes, or roughly 1% overhead.
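The header/length chaining described above can be sketched as a small parser. This is an illustrative model only: the element encoding (one header byte, one length byte, then the data field) follows the text, but the header values and the use of a zero header byte as a terminator are assumptions:

```python
# Parse chained (header, length, data) elements out of the 194 input
# bytes of an RS(198, 194) block. Header values and the zero-byte
# terminator are hypothetical choices for illustration.

def parse_elements(input_bytes, region_size=194):
    elements = []
    data = input_bytes[:region_size]
    i = 0
    while i + 2 <= len(data):
        header, length = data[i], data[i + 1]
        if header == 0:               # assume 0 marks "no more elements"
            break
        payload = data[i + 2 : i + 2 + length]
        elements.append((header, bytes(payload)))
        i += 2 + length               # next header follows the data field
    return elements

block = bytes([0x10, 3, 1, 2, 3,      # hypothetical video element, 3 bytes
               0x20, 2, 9, 9,         # hypothetical audio element, 2 bytes
               0x00]) + bytes(184)    # terminator + padding to 194 bytes
assert parse_elements(block) == [(0x10, b"\x01\x02\x03"),
                                 (0x20, b"\x09\x09")]
```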
The headers can be defined to include a start-of-frame bit and start-of-line bits to convey these timings as required. For example, in some embodiments, special control headers with a location byte may be used. These special control headers may be implemented as an auxiliary header byte and may be used to indicate when given control symbol functions should be invoked (e.g., on a time or per byte timing basis). Accordingly, the current embodiments may be used to convey timing of presentation data, while reducing overhead. For example, using the current embodiments, it may not be necessary for each byte to contain overhead bits to provide an indication of whether the byte is a data or control byte.
With these features in mind, a general description of suitable electronic devices that may act as data centric display communication sources and/or display panels is provided. Turning first to
The electronic device 10 may act as a host device 30 that sources video data to the display 20. For example, data-centric block data 31 may be generated (e.g., by the processor 12) and may be provided to the display 20 (e.g., via the display interface 18, such as a High-Definition Multimedia Interface (HDMI) port and/or a Universal Serial Bus (USB) port, such as a USB Type C port). Additionally and/or alternatively, the data-centric block data 31 may be provided via the I/O interface 24. In some embodiments, multiple streams may primarily fill different data-centric blocks 31.
By way of example, the electronic device 10 may represent a block diagram of the notebook computer depicted in
In the electronic device 10 of
In certain embodiments, the display 20 may be a liquid crystal display (LCD), which may allow users to view images generated on the electronic device 10. In some embodiments, the display 20 may include a touch screen, which may allow users to interact with a user interface of the electronic device 10. Furthermore, it should be appreciated that, in some embodiments, the display 20 may include other display technologies, such as light emitting diode (e.g., LED, OLED, AMOLED, etc.) displays, or some combination of LCD panels and LED panels.
The input structures 22 of the electronic device 10 may enable a user to interact with the electronic device 10 (e.g., pressing a button to increase or decrease a volume level). The I/O interface 24 may enable the electronic device 10 to interface with various other electronic devices. The I/O interface 24 may include various types of ports that may be connected to cabling. These ports may include standardized and/or proprietary ports, such as USB, RS232, Apple's Lightning® connector, as well as one or more ports for a conducted RF link. The I/O interface 24 may also include, for example, interfaces for a personal area network (PAN), such as a Bluetooth network, for a local area network (LAN) or wireless local area network (WLAN), such as an 802.11a/b/g/n Wi-Fi network, and/or for a wide area network (WAN), such as a 3rd generation (3G) cellular network, 4th generation (4G) cellular network, or long term evolution (LTE) cellular network. The I/O interface 24 may also include interfaces for, for example, broadband fixed wireless access networks (e.g., WiMAX), mobile broadband wireless networks (e.g., mobile WiMAX), and so forth.
As further illustrated, the electronic device 10 may include a power source 26. The power source 26 may include any suitable source of power, such as a rechargeable lithium polymer (Li-poly) battery and/or an alternating current (AC) power converter. The power source 26 may be removable, such as a replaceable battery cell.
In certain embodiments, the electronic device 10 may take the form of a computer, a portable electronic device, a wearable electronic device, or other type of electronic device. Such computers may include computers that are generally portable (e.g., such as laptop, notebook, and tablet computers) as well as computers that are generally used in one place (e.g., such as conventional desktop computers, workstations and/or servers). In certain embodiments, the electronic device 10 in the form of a computer may be a model of a MacBook®, MacBook® Pro, MacBook Air®, iMac®, Mac® mini, or Mac Pro® available from Apple Inc. By way of example, the electronic device 10, taking the form of a notebook computer 30A, is illustrated in
As may be appreciated, in certain embodiments, the display interface 18 may be internal to the electronic device 10 (e.g., the computer 30A). For example, the computer 30A may include an internal display interface 18 that provides data to the display 20. Additionally and/or alternatively, an external display interface 18 (as depicted in
The handheld device 30B may include an enclosure 36 to protect interior components from physical damage and to shield them from electromagnetic interference. The enclosure 36 may surround the display 20, which may display indicator icons 39. The indicator icons 39 may indicate, among other things, a cellular signal strength, Bluetooth connection, and/or battery life. The I/O interfaces 24 may open through the enclosure 36 and may include, for example, an I/O port for a hard wired connection for charging and/or content manipulation using a connector and protocol, such as the Lightning connector provided by Apple Inc., a universal serial bus (USB) connector, one or more conducted RF connectors, or other connectors and protocols. In some embodiments, the I/O interfaces 24 may also act as a display interface 18 for providing data to an external display (not depicted).
User input structures 40 and 42, in combination with the display 20, may allow a user to control the handheld device 30B. For example, the input structure 40 may activate or deactivate the handheld device 30B, one of the input structures 42 may navigate a user interface to a home screen, a user-configurable application screen, and/or activate a voice-recognition feature of the handheld device 30B, while others of the input structures 42 may provide volume control or may toggle between vibrate and ring modes. Additional input structures 42 may also include a microphone that may obtain a user's voice for various voice-related features, and a speaker to allow for audio playback and/or certain phone capabilities. The input structures 42 may also include a headphone input to provide a connection to external speakers and/or headphones.
Turning to
Similarly,
As mentioned previously, it is often desirable for host devices to provide video data to a display 20 for presentation on the display 20. In view of the various disadvantages associated with traditional frame-centric data being provided to the display 20, many situations may arise where more data-centric display data would be a desirable alternative to facilitate such communication. Indeed, in a communication scenario that would benefit from relatively high bandwidth and relatively low power consumption, data-centric video data represents a good option. One such example is illustrated in
An entire block 31 may be presented to the receiving system (e.g., the display 20), rather than streaming data to the receiving system (e.g., the display 20). As will be discussed in more detail below, control field elements/indicators may be used to provide control information useful for presentation of display data on the display 20. For example, control location indicator bytes may be used to satisfy timing needs of the receiving system (e.g., the display 20). For example, a control location indicator may be used to drive associated media stream timing of frame sync and/or line sync to the receiving system (e.g., the display 20). Using the control location indicators, the receiving system (e.g., the display 20) may regenerate timings for the block 31 data.
As may be appreciated, the blocks 31 reduce an amount of data to be sent to the display 20 by moving away from the traditional stream-based provision of video data. In other words, in contrast to traditional systems that provided streams of video data that included horizontal and/or vertical blanking to ensure proper placement and timing of a stream of video data, the data-centric block data 31 may provide a particular indication of placement of the data and/or timing of the presentation data, such that blanking data may no longer be necessary for proper presentation of video data. In fact, in certain situations where there is static video presentation in certain portions of the presented content, data pertaining to unchanged video data may not need to be transferred via the transmission lines 54, freeing up bandwidth of the transmission lines 54 for dynamically changing video data portions. In some embodiments, using this data-centric data, different refresh rates and/or other characteristics of the video data may be utilized for varied regions (e.g., lines) of the display 20.
As illustrated in
Turning now to a discussion of embodiments of the data centric block 31 structure,
As will be discussed in more detail below with regard to
Turning now to a more detailed discussion of the input bytes 92 composition,
The auxiliary header bytes 114 may be useful for expanding the functionalities of the media data elements 110. For example, in one embodiment, the auxiliary header bytes 114 may be used to provide a media data field length specification. Accordingly, an explicit indication of a length of the media data field 116 may be made by using an auxiliary header byte 114 as a media data field length indicator. As may be appreciated, the maximum length for a 198-symbol block 31 would be 198, minus 4 bytes for the FEC syndrome bytes 94, minus 1 byte for the media data header 112, minus 1 byte for the auxiliary header byte 114 (e.g., used as a media data field length header), equaling 192 bytes.
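The subtraction above, restated as a check with the reference numerals noted in comments:

```python
# Maximum media data field 116 length in a 198-symbol block 31.

BLOCK_SYMBOLS = 198
FEC_SYNDROME_BYTES = 4    # FEC syndrome bytes 94
MEDIA_DATA_HEADER = 1     # media data header 112
AUX_LENGTH_HEADER = 1     # auxiliary header byte 114 (length header)

max_media_field = (BLOCK_SYMBOLS - FEC_SYNDROME_BYTES
                   - MEDIA_DATA_HEADER - AUX_LENGTH_HEADER)
assert max_media_field == 192
```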
For example,
As may be seen, the auxiliary header bytes 114 may be useful for future functionality of the display system/transport. While usage as a media data field length header has been provided as an example, these bytes may be used for many other uses.
The input bytes 92 may also include control field elements 130.
The auxiliary control header bytes 134 may be useful for expanding the functionalities of the control field elements 130. For example, in one embodiment, the auxiliary control header bytes 134 may be used to provide a control indicator location byte. Accordingly, by using the control indicator location byte, the receiver may drive respective media stream timing of frame sync and/or line sync pings to the receiving subsystem.
For example,
As may be seen, the auxiliary control header bytes 134 may be useful for future functionality of the display system/transport. While usage as a control indicator location byte has been provided as an example, these bytes may be used for many other uses.
Knowing now that the input bytes 92 of the blocks 31 may include one or more media data elements 110 and/or control field elements 130,
Additional media data elements 110 and/or control field elements 130 may be included in the data centric block 31, as long as there are enough bytes available in the block 31 (e.g., the new element 110 and/or 130 does not cause the number of used bytes to exceed the 194 input bytes 92). Accordingly, as illustrated in
In other words, the byte following a completed media data field 116 and/or a control field 136 may contain a new header byte (e.g., a media data header 112 of
Turning now to a discussion of the contents of the headers,
In some embodiments, the expanded function bit may be set to “0” to indicate a full frame mode. A setting of “1” may be used to indicate the expanded function mode, which may include indication of a sub-region, sub-line, etc. to update with the media data. Additional auxiliary bytes may be defined for the expanded function mode. For example, a line number indication may occupy 2 bytes, a start pixel indication may occupy 2 bytes, a log length may occupy 1 byte, and/or an associated presentation time stamp may occupy up to 8 bytes. As may be appreciated, using this expansion, media updates may be coordinate based, time based, etc.
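One hypothetical serialization of the expanded-function auxiliary bytes listed above (2-byte line number, 2-byte start pixel, 1-byte length, and an up-to-8-byte presentation time stamp); the field order and big-endian layout are assumptions for illustration, as the disclosure does not specify them:

```python
import struct

# Pack/unpack a hypothetical expanded-function sub-region update
# header. ">HHBQ": big-endian 2-byte line, 2-byte start pixel,
# 1-byte length, 8-byte presentation time stamp (no padding).

def pack_expanded_update(line, start_pixel, length, timestamp):
    return struct.pack(">HHBQ", line, start_pixel, length, timestamp)

def unpack_expanded_update(raw):
    return struct.unpack(">HHBQ", raw)

raw = pack_expanded_update(1080, 256, 128, 123456789)
assert len(raw) == 13   # 2 + 2 + 1 + 8 bytes
assert unpack_expanded_update(raw) == (1080, 256, 128, 123456789)
```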
As mentioned above for control field elements 130, a control indicator header 132 is used.
The frame start bit 184 may indicate that the control field element 130 includes a frame start and the line start bit 186 may indicate that the control field element 130 includes a line start. If both the frame start bit 184 and the line start bit 186 are “0”, then neither a frame start nor a line start occurs. This may be the most common intra-line header that is used by the data centric transport mechanism.
Media stream indicator bits 188 may be set to “00000” to indicate an expanded header definition. For example, the expanded header definition may include a fill type control indicator header (e.g., the auxiliary header having a control indicator location, as discussed above). Further, the expanded header definition may include a time stamp control character with an associated time stamp data field. Additionally, a configuration control field may be provided that is useful for start up functions for the display 20.
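The bit fields described above (frame start bit 184, line start bit 186, and five media stream indicator bits 188) can be modeled as follows; the specific bit positions chosen are assumptions for illustration:

```python
# Bit-level sketch of one possible layout for the control indicator
# header 132: frame start bit 184 and line start bit 186 in the two
# high bits, media stream indicator bits 188 in the low five bits.

def make_control_header(frame_start, line_start, stream_bits):
    assert 0 <= stream_bits < 1 << 5
    return (frame_start << 7) | (line_start << 6) | stream_bits

def is_intra_line(header):
    # Neither frame start nor line start set: the common intra-line case.
    return (header >> 6) == 0

def is_expanded(header):
    # Media stream indicator bits of "00000" select the expanded
    # header definition.
    return (header & 0b11111) == 0

assert is_intra_line(make_control_header(0, 0, 0b00011))
assert not is_intra_line(make_control_header(0, 1, 0b00011))
assert is_expanded(make_control_header(1, 0, 0b00000))
```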
In some embodiments, expanded control headers may be defined using programmable tables. Thus, short headers may be used for new functions developed over time, by setting the transmitter and/or receiver tables at the start of use of the data centric transmission mechanism using the expanded control headers. As may be appreciated, this may allow lower level transport mechanisms to adapt to new modes as they are developed, without needing to restrict the low level transport mechanisms.
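A programmable header table of this kind might look like the following sketch, where transmitter and receiver agree on a code-to-function mapping at the start of a session. The codes and function names are invented for illustration.

```python
# Sketch of a programmable expanded-header table: short codes are bound
# to functions when the data centric mechanism is brought up, so new
# modes need no change to the low level transport.
class HeaderTable:
    def __init__(self):
        self._table = {}

    def program(self, code, function_name):
        # set at the start of use, mirrored on transmitter and receiver
        self._table[code] = function_name

    def lookup(self, code):
        return self._table.get(code, "unknown")

rx = HeaderTable()
rx.program(0x01, "fill")       # hypothetical fill-type control indicator
rx.program(0x02, "timestamp")  # hypothetical time stamp control character
```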
As mentioned above, the block data scheme may be useful for reducing overhead and facilitating forward error correction more effectively. However, additional applications may be efficiently implemented by using the block data scheme described herein. For example, as mentioned above, timestamps may be included in the data. These timestamps may provide an indication as to when a particular piece of data should be presented on a display. Accordingly, data may be sent prior to invocation on the display, allowing for pre-processing and/or data transmission of future content at a time when bandwidth constraints may be lower. For example, if the display is currently displaying static content or is idle, less data may be transmitted to the display (because in some embodiments only changes to the data need to be provided to the display). Accordingly, during that time of decreased bandwidth, power consumption may be reduced, as less processing power is needed to render the static image. In some embodiments, the low-bandwidth data transmission may be used to transfer subsequent content changes that may be implemented later in time. This may enable the display to pre-process the data prior to presenting the data at the display, and may enable the lower bandwidth transmission period to be used to transmit data for use at a future time. This may help to reduce bandwidth bottlenecking, by enabling a portion of data that would be transferred during a high-bandwidth transfer to occur prior to the high-bandwidth transfer.
Further, certain embodiments may implement data scrambling to counteract resonance excitement of the display interface. Cross talk in connectors may be dominated by resonant peaks. Resonances are often caused by ground loops and/or power loops via receptacles and/or corresponding plugs that are paired by a consumer of the electronic device. To avoid resonance excitement, the block data may be scrambled in a manner that avoids bit sequences that would concentrate energy at any possible resonant frequencies. These possible resonant frequencies may be limited by a range of practical lengths. For example, if a connector is 20 mm long with a delay of 6 psec per mm, there is a 120 psec delay, with 240 psec per wavelength (e.g., about a 4 GHz resonant frequency). A typical shortest connector might resonate at about 10 GHz, due to having a minimum length.
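The resonance estimate above can be checked directly. Note that the 120 ps delay quoted in the text corresponds, at 6 ps/mm, to a 20 mm path; one wavelength is twice the one-way delay.

```python
# Worked version of the resonance estimate: a 20 mm connector at
# 6 ps/mm gives a 120 ps one-way delay; one wavelength is twice that
# (240 ps), i.e. a resonant frequency a little above 4 GHz.
length_mm = 20
delay_ps_per_mm = 6
one_way_ps = length_mm * delay_ps_per_mm  # 120 ps
wavelength_ps = 2 * one_way_ps            # 240 ps
resonant_ghz = 1e3 / wavelength_ps        # 1 / (240 ps) ~= 4.17 GHz
```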
A resonance at 5 GHz would be excited by a 1010101010 . . . bit pattern, where the energy climbs with the length of the sequence. To limit this, the data could be scrambled. For example, the system could choose between several scrambler combinations of data, depending on a determination of the scrambler combination with the best properties. Thus, the system could avoid strings of 0101 or 1010 by choosing a scrambler combination that minimizes such patterns.
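Choosing among scrambler combinations by their worst alternating run can be sketched as follows. The XOR whitening scheme and the candidate scrambler words are assumptions for illustration; the disclosure does not specify a particular scrambler.

```python
# Pick the scrambler whose output has the shortest 1010... run,
# since long alternating runs concentrate energy at the resonance.

def longest_alternating_run(bits):
    best = run = 1
    for a, b in zip(bits, bits[1:]):
        run = run + 1 if a != b else 1
        best = max(best, run)
    return best

def scramble(bits, word):
    # simple XOR whitening with a repeating word (illustrative only)
    return [b ^ word[i % len(word)] for i, b in enumerate(bits)]

def pick_scrambler(bits, candidates):
    return min(candidates, key=lambda w: longest_alternating_run(scramble(bits, w)))
```

For a worst-case alternating input, XORing with a matching alternating word flattens the pattern entirely, so that candidate wins.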
Further, to be more exhaustive in avoiding sequences that have resonant peaks, the system may take the Fast Fourier Transform (FFT) of the bits to be sent to the display. An FFT of a 1-bit resolution sequence may be much easier to compute than one over higher bit count “samples.” Further, such computations may only need to be calculated across a particular frequency range of interest.
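The idea of screening a 1-bit sequence only over the frequency range of interest can be sketched with a direct DFT evaluated at selected bins (rather than a full FFT). This is a simplified stand-in for illustration: with 1-bit samples each term of the sum is just a signed complex exponential, which keeps the computation cheap.

```python
import cmath

# Evaluate the spectrum of a 1-bit sequence only at the bins of interest.
def energy_at_bins(bits, bins):
    n = len(bits)
    # map 0/1 to -1/+1 so a constant sequence carries no AC energy
    x = [2 * b - 1 for b in bits]
    return {
        k: abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                   for t in range(n)))
        for k in bins
    }

# An alternating 1010... pattern concentrates all its energy at the
# highest (Nyquist) bin and has essentially none at low frequencies.
e = energy_at_bins([1, 0] * 16, bins=[1, 16])
```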
The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.
This application is a Non-Provisional Patent Application of U.S. Provisional Patent Application No. 62/212,287, entitled “Data Centric Display Communications”, filed Aug. 31, 2015, which is herein incorporated by reference.
Number | Date | Country
---|---|---
62212287 | Aug 2015 | US