The disclosure relates generally to display technologies, and more particularly, to a system and a method for calibrating a display.
Wearable devices sit much closer to the user's eyes than other electronic devices such as cell phones, computers, and TVs. As a result, distortions appear on the display panel of a wearable device, caused by the physical structure of the device and by human eye perception during use. To counteract these distortions, reverse distortions are generally applied in advance to the input images of the wearable device. For high-resolution wearable devices, pixels in the peripheral area of input images are also compressed to reduce the amount of data transmitted, thereby improving transmission efficiency. After being received by the display panel, the compressed pixels need to be reconstructed, conventionally by linear interpolation, to restore the resolution they had before compression. However, in the distorted region, linear interpolation produces obvious jaggies in the reconstructed image.
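By way of illustration only, the following Python sketch shows the conventional linear-interpolation reconstruction described above for a single row of greyscale values, assuming a uniform 2x compression of the peripheral pixels; the function name and values are hypothetical and not part of the disclosure.

```python
# Hypothetical sketch: restoring compressed peripheral pixels by linear
# interpolation along one axis (a real panel operates on 2-D frames).
def reconstruct_row(compressed, factor=2):
    """Upscale a 1-D run of compressed greyscale values by linear interpolation."""
    restored = []
    for i in range(len(compressed) - 1):
        left, right = compressed[i], compressed[i + 1]
        for step in range(factor):
            t = step / factor
            restored.append(left + t * (right - left))  # linear blend
    restored.append(compressed[-1])
    return restored

print(reconstruct_row([0, 128, 255]))  # [0.0, 64.0, 128.0, 191.5, 255]
```

Because this interpolation runs strictly along the grid axes, it cannot follow the rotated pixel rows of a distorted region, which is the source of the jaggies addressed by the present disclosure.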
In one example, a system for display is provided. The system includes a display and a processor. The display has a plurality of pixels, at least some of which are distorted. The processor is configured to calibrate a distorted pixel by obtaining a coordinate position of the distorted pixel after distortion, obtaining at least one distortion scale parameter of the distorted pixel, and determining a calibrating window centered at the distorted pixel based on the at least one distortion scale parameter. The calibrating window includes at least one pixel located along at least one distorted direction. The processor is further configured to calculate at least one calibrated greyscale value of at least one simulated pixel based on the at least one pixel within the calibrating window and the at least one distortion scale parameter. The processor is further configured to calculate a calibrated greyscale value of the distorted pixel based on the calibrated greyscale value of the at least one simulated pixel.
In one implementation, the at least one distortion scale parameter includes a first scale HS and a first rotate scale HrS. The first scale HS is configured to illustrate a change in distance between the distorted pixel and a first pixel adjacent to the distorted pixel in a first direction. The first rotate scale HrS is configured to illustrate a change in orientation between the distorted pixel and the first pixel after distortion.
In one implementation, a coordinate position of the distorted pixel after distortion is (x0, y0), and a coordinate position of the first pixel after distortion is (x1, y1). The first scale HS is calculated by HS=x1−x0, and the first rotate scale HrS is calculated by HrS=y1−y0.
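For illustration, a minimal Python sketch of this parameter computation follows; the function name and coordinate values are hypothetical, and the second-direction parameters VS and VrS (described below) follow the same pattern with the vertically adjacent pixel.

```python
# Sketch of the first-direction distortion scale parameters defined above.
# Inputs are post-distortion coordinates.
def first_direction_scales(p0, p1):
    """p0 = (x0, y0), the distorted pixel; p1 = (x1, y1), the adjacent pixel."""
    x0, y0 = p0
    x1, y1 = p1
    hs = x1 - x0   # first scale HS: change in spacing along the first direction
    hrs = y1 - y0  # first rotate scale HrS: change in orientation
    return hs, hrs

hs, hrs = first_direction_scales((10.0, 5.0), (11.5, 5.25))  # hypothetical values
print(hs, hrs)  # 1.5 0.25 -> stretched spacing and a slight rotation
```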
In one implementation, the system further includes a memory configured to store a look-up table. The look-up table is configured to store the first scale HS and the first rotate scale HrS of each distorted pixel.
In one implementation, for each distorted frame input into the display, the first scale HS and the first rotate scale HrS of each distorted pixel are extracted from the look-up table.
In one implementation, the calibrating window includes a first scope H extending along the first direction, and H=k1×HS×M+1, where k1 is a first calibrating coefficient, and M is a downscale unit of distortion of the distorted pixel along the first direction.
In one implementation, k1=2.
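By way of illustration, a minimal Python sketch of the window-size computation under these definitions follows; the numeric values are hypothetical.

```python
# First scope H of the calibrating window, per H = k1 * HS * M + 1.
def first_scope(hs, m, k1=2):
    """hs: first scale HS; m: downscale unit M along the first direction."""
    return k1 * hs * m + 1

print(first_scope(hs=1.5, m=2))  # 7.0: the scope widens as the distortion grows
```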
In one implementation, calculating the at least one calibrated greyscale value of the at least one simulated pixel within the calibrating window includes calculating a coordinate position of the at least one simulated pixel along the first direction within the first scope H by xi=x0+i and yi=y0+i×HrS, where (xi, yi) is the coordinate position of an ith simulated pixel within the first scope H, and i is an integer other than 0.
In one implementation, calculating the at least one calibrated greyscale value for the at least one simulated pixel within the calibrating window includes processing linear interpolation based on greyscale values of the at least one distorted pixel adjacent to the simulated pixel. A distance between the simulated pixel and the pixel adjacent to the simulated pixel is less than 1.
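One plausible reading of this interpolation step is sketched below in Python: the simulated pixel lands at a fractional coordinate, and its value is blended linearly from the stored pixels around it, each of which lies less than 1 away. The dict-based frame and the function name are assumptions made for illustration, not the disclosure's data layout.

```python
import math

# Sketch: linear interpolation of a greyscale value at a fractional coordinate
# (x, y) from the surrounding stored pixels. `frame` is assumed to map integer
# (x, y) tuples to greyscale values and to cover the needed neighborhood.
def sample_greyscale(frame, x, y):
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    top = frame[(x0, y0)] * (1 - fx) + frame[(x0 + 1, y0)] * fx
    bottom = frame[(x0, y0 + 1)] * (1 - fx) + frame[(x0 + 1, y0 + 1)] * fx
    return top * (1 - fy) + bottom * fy

frame = {(0, 0): 10, (1, 0): 20, (0, 1): 30, (1, 1): 40}
print(sample_greyscale(frame, 0.5, 0.25))  # 20.0
```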
In one implementation, the greyscale value of the distorted pixel is calculated based on the at least one calibrated greyscale value of the at least one simulated pixel along the first direction within the first scope H.
In one implementation, the greyscale value of the distorted pixel is an average of the at least one calibrated greyscale value of the at least one simulated pixel along the first direction within the first scope H.
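A small sketch of this averaging step, assuming the calibrated values of the simulated pixels were obtained as above (the Gaussian alternative mentioned later in the disclosure would replace the plain mean):

```python
# Calibrated greyscale of the distorted pixel: the average of the calibrated
# values of the simulated pixels within the first scope H (sketch).
def calibrated_value(simulated_values):
    return sum(simulated_values) / len(simulated_values)

print(calibrated_value([18.0, 19.5, 21.0, 22.5]))  # 20.25
```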
In one implementation, the at least one distortion scale parameter includes a second scale VS and a second rotate scale VrS. The second scale VS is configured to illustrate a change in distance between the distorted pixel and a second pixel adjacent to the distorted pixel in a second direction. The second rotate scale VrS is configured to illustrate a change in orientation between the distorted pixel and the second pixel.
In one implementation, a coordinate position of the distorted pixel after distortion is (x0, y0), and a coordinate position of the second pixel after distortion is (x2, y2). The second scale VS is calculated by VS=x2−x0, and the second rotate scale VrS is calculated by VrS=y2−y0.
In one implementation, the system further includes a memory configured to store a look-up table. The look-up table is configured to store the second scale VS and the second rotate scale VrS of each distorted pixel.
In one implementation, for each distorted frame input into the display, the second scale VS and the second rotate scale VrS of each distorted pixel are extracted from the look-up table.
In one implementation, the calibrating window includes a second scope V extending along the second direction, and V=k2×VS×N+1. k2 is a second calibrating coefficient, and N is a downscale unit of distortion of the distorted pixel along the second direction.
In one implementation, k2=2.
In one implementation, calculating the at least one greyscale value of the at least one simulated pixel within the calibrating window includes calculating a coordinate position of the at least one simulated pixel along the second direction within the second scope V by xj=x0+j and yj=y0+j×VrS, where (xj, yj) is the calibrated coordinate position of a jth simulated pixel within the second scope V, and j is an integer other than 0.
In one implementation, calculating the at least one calibrated greyscale value of the at least one simulated pixel within the calibrating window includes processing linear interpolation based on greyscale values of at least one distorted pixel adjacent to the simulated pixel. A distance between the simulated pixel and the pixel adjacent to the simulated pixel is less than 1.
In one implementation, the greyscale value of the distorted pixel is calculated based on the at least one calibrated greyscale value of the at least one simulated pixel along the second direction within the second scope V.
In one implementation, the greyscale value of the distorted pixel is an average of the at least one greyscale value of the at least one simulated pixel along the second direction within the second scope V.
In one implementation, the display is a near-to-eye display.
In one implementation, the distorted pixels are distributed in a non-focused region of the display.
In another example, a method for calibrating a display is provided. The display has a plurality of pixels, at least some of which are distorted. The method includes obtaining a coordinate position of a distorted pixel after distortion, obtaining at least one distortion scale parameter of the distorted pixel, and determining a calibrating window centered at the distorted pixel based on the at least one distortion scale parameter. The calibrating window includes at least one pixel located along at least one distorted direction. The method further includes calculating at least one calibrated greyscale value for at least one simulated pixel based on the at least one pixel within the calibrating window and the at least one distortion scale parameter. The method further includes calculating a calibrated greyscale value of the distorted pixel based on the calibrated greyscale value of the at least one simulated pixel.
In one implementation, the at least one distortion scale parameter includes a first scale HS and a first rotate scale HrS. The first scale HS is configured to illustrate a change in distance between the distorted pixel and a first pixel adjacent to the distorted pixel in a first direction after distortion. The first rotate scale HrS is configured to illustrate a change in orientation between the distorted pixel and the first pixel after distortion.
In one implementation, a coordinate position of the distorted pixel after distortion is (x0, y0), and a coordinate position of the first pixel after distortion is (x1, y1). The first scale HS is calculated by HS=x1−x0, and the first rotate scale HrS is calculated by HrS=y1−y0.
In one implementation, the method further includes storing the first scale HS and the first rotate scale HrS of each distorted pixel in a look-up table.
In one implementation, for each distorted frame input into the display, the first scale HS and the first rotate scale HrS of each distorted pixel are extracted from the look-up table.
In one implementation, the calibrating window includes a first scope H extending along the first direction, and H=k1×HS×M+1, where k1 is a first calibrating coefficient, and M is a downscale unit of distortion of the distorted pixel along the first direction.
In one implementation, k1=2.
In one implementation, calculating the at least one greyscale value of the at least one simulated pixel within the calibrating window includes calculating a calibrated coordinate position of the at least one simulated pixel along the first direction within the first scope H by xi=x0+i, yi=y0+i×HrS, where (xi, yi) is the calibrated coordinate position of an ith simulated pixel within the first scope H, and i is an integer other than 0.
In one implementation, calculating the at least one greyscale value of the at least one simulated pixel within the calibrating window includes processing linear interpolation based on greyscale values of at least one distorted pixel adjacent to the simulated pixel. A distance between the simulated pixel and the pixel adjacent to the simulated pixel is less than 1.
In one implementation, the greyscale value of the distorted pixel is calculated based on the at least one greyscale value of the at least one simulated pixel along the first direction within the first scope H.
In one implementation, the greyscale value of the distorted pixel is an average of the at least one greyscale value of the at least one simulated pixel along the first direction within the first scope H.
In one implementation, the at least one distortion scale parameter includes a second scale VS configured to illustrate a change in distance between the distorted pixel and a second pixel adjacent to the distorted pixel in a second direction, and a second rotate scale VrS configured to illustrate a change in orientation between the distorted pixel and the second pixel.
In one implementation, a coordinate position of the distorted pixel after distortion is (x0, y0), and a coordinate position of the second pixel after distortion is (x2, y2). The second scale VS is calculated by VS=x2−x0, and the second rotate scale VrS is calculated by VrS=y2−y0.
In one implementation, the method further includes storing a look-up table in a memory. The look-up table is configured to store the second scale VS and the second rotate scale VrS of each distorted pixel.
In one implementation, for each distorted frame input into the display, the second scale VS and the second rotate scale VrS of each distorted pixel are extracted from the look-up table.
In one implementation, the calibrating window includes a second scope V extending along the second direction, and V=k2×VS×N+1, where k2 is a second calibrating coefficient, and N is a downscale unit of distortion of the distorted pixel along the second direction.
In one implementation, k2=2.
In one implementation, calculating the at least one calibrated greyscale value of the at least one simulated pixel within the calibrating window includes calculating a coordinate position of the at least one simulated pixel along the second direction within the second scope V by xj=x0+j and yj=y0+j×VrS, where (xj, yj) is the coordinate position of a jth simulated pixel within the second scope V, and j is an integer other than 0.
In one implementation, calculating the at least one greyscale value for the at least one simulated pixel within the calibrating window includes processing linear interpolation based on greyscale values of at least one distorted pixel adjacent to the simulated pixel. A distance between the simulated pixel and the pixel adjacent to the simulated pixel is less than 1.
In one implementation, the greyscale value of the distorted pixel is calculated based on the at least one greyscale value of the at least one simulated pixel along the second direction within the second scope V.
In one implementation, the greyscale value of the distorted pixel is an average of the at least one greyscale value of the at least one simulated pixel along the second direction within the second scope V.
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosures. However, it should be apparent to those skilled in the art that the present disclosure may be practiced without such details. In other instances, well known methods, procedures, systems, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present disclosure.
Throughout the specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. Likewise, the phrase “in one embodiment/example” as used herein does not necessarily refer to the same embodiment and the phrase “in another embodiment/example” as used herein does not necessarily refer to a different embodiment. It is intended, for example, that claimed subject matter include combinations of example embodiments in whole or in part.
In general, terminology may be understood at least in part from usage in context. For example, terms, such as “and”, “or”, or “and/or,” as used herein may include a variety of meanings that may depend at least in part upon the context in which such terms are used. Typically, “or” if used to associate a list, such as A, B or C, is intended to mean A, B, and C, here used in the inclusive sense, as well as A, B or C, here used in the exclusive sense. In addition, the term “one or more” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense. Similarly, terms, such as “a,” “an,” or “the,” again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term “based on” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
Wearable devices sit much closer to the user's eyes than other electronic devices such as cell phones, computers, and TVs. As a result, distortions appear on the display panel of a wearable device, caused by the physical structure of the device and by human eye perception during use.
For high-resolution wearable devices, such as 4K or 8K devices, pixels in the peripheral area of input images are compressed to reduce the amount of data transmitted, thereby improving transmission efficiency. After being received by the display panel, the compressed pixels need to be reconstructed, conventionally by linear interpolation, to restore the resolution they had before compression. However, in the distorted region, linear interpolation produces obvious jaggies in the image, impairing the display effect.
To solve the above problem, the present disclosure provides a system and a method for calibrating a display. At least one distortion scale parameter is employed to illustrate changes in distance and orientation between the distorted pixel data and the pixel data adjacent to it in at least one direction.
Additional novel features will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by the production or operation of the examples. The novel features of the present disclosure may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities, and combinations set forth in the detailed examples discussed below.
For ease of description, as used herein, “data”, “a piece of data”, or the like refers to a set of data (e.g., compensation data or display data) that can include one or more values. In the present disclosure, for example, “pixel data” or “a piece of pixel data” refers to any number of values used for compensating one pixel. The pixel data may include at least one value each for compensating a subpixel. When a piece of data includes a single value, the “piece of data” and “value” are interchangeable. The specific number of values included in a piece of data should not be limited.
Control logic 104 may be any suitable hardware, software, firmware, or a combination thereof, configured to receive display data 106 (e.g., pixel data and compensation data) and generate control signals 108 for driving the subpixels on display 102. Control signals 108 are used for controlling the writing of display data to the subpixels and directing operations of display 102. For example, subpixel rendering algorithms for various subpixel arrangements may be part of control logic 104 or implemented by control logic 104. Control logic 104 may include any other suitable components, such as an encoder, a decoder, one or more processors, controllers, and storage devices. Control logic 104 may be implemented as a standalone integrated circuit (IC) chip, such as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). Apparatus 100 may also include any other suitable component, such as, but not limited to, tracking devices 110 (e.g., inertial sensors, camera, eye tracker, GPS, or any other suitable devices for tracking motion of eyeballs, facial expression, head motion, body motion, and hand gesture) and input devices 112 (e.g., a mouse, keyboard, remote controller, handwriting device, microphone, scanner, etc.).
In this embodiment, apparatus 100 may be a handheld or a VR/AR device, such as a smart phone, a tablet, or a VR headset. Apparatus 100 may also include a processor 114 and memory 116. Processor 114 may be, for example, a graphics processor (e.g., graphics processing unit (GPU)), an application processor (AP), a general processor (e.g., APU, accelerated processing unit; GPGPU, general-purpose computing on GPU), or any other suitable processor. Memory 116 may be, for example, a discrete frame buffer or a unified memory. Processor 114 is configured to generate display data 106 in display frames and may temporarily store display data 106 in memory 116 before sending it to control logic 104. Processor 114 may also generate other data, such as but not limited to, control instructions 118 or test signals, and provide them to control logic 104 directly or through memory 116. Control logic 104 then receives display data 106 from memory 116 or from processor 114 directly. In some embodiments, no control instructions 118 are directly transmitted from processor 114 to control logic 104. In some embodiments, compensation data transmitted from processor 114 to memory 116 and/or from memory 116 to control logic 104 may be compressed.
In some embodiments, control logic 104 is part of apparatus 100, processor 114 is part of an external device of apparatus 100, and memory 116 is an external storage device that is used to store data computed by processor 114. The data stored in memory 116 may be inputted into control logic 104 for further processing. In some embodiments, no control instructions 118 are transmitted from processor 114 to control logic 104. For example, apparatus 100 may be a smart phone or tablet, and control logic 104 may be part of apparatus 100. Processor 114 may be part of an external computer that is different from apparatus 100/control logic 104. Display data 106 may include any suitable data computed by and transmitted from processor 114 to control logic 104. For example, display data may include compressed compensation data. In some embodiments, display data 106 includes no pixel data. Memory 116 may include a flash drive that stores the compressed compensation data processed by processor 114. Memory 116 may be coupled to control logic 104 to input the compressed compensation data into apparatus 100 such that control logic 104 can decompress the compressed compensation data and generate corresponding control signals 108 for display 102.
In this embodiment, display panel 210 includes a light emitting layer 214 and a driving circuit layer 216.
In this embodiment, driving circuit layer 216 includes a plurality of pixel circuits 228, 230, 232, and 234, each of which includes one or more thin film transistors (TFTs), corresponding to OLEDs 218, 220, 222, and 224 of subpixels 202, 204, 206, and 208, respectively. Pixel circuits 228, 230, 232, and 234 may be individually addressed by control signals 108 from control logic 104 and configured to drive corresponding subpixels 202, 204, 206, and 208 by controlling the light emitted from respective OLEDs 218, 220, 222, and 224, according to control signals 108. Driving circuit layer 216 may further include one or more drivers (not shown) formed on the same substrate as pixel circuits 228, 230, 232, and 234. The on-panel drivers may include circuits for controlling light emission, gate scanning, and data writing, as described below in detail. Scan lines and data lines are also formed in driving circuit layer 216 for transmitting scan signals and data signals, respectively, from the drivers to each pixel circuit 228, 230, 232, and 234. Display panel 210 may include any other suitable component, such as one or more glass substrates, polarization layers, or a touch panel (not shown). Pixel circuits 228, 230, 232, and 234 and other components in driving circuit layer 216 in this embodiment are formed on a low temperature polycrystalline silicon (LTPS) layer deposited on a glass substrate, and the TFTs in each pixel circuit 228, 230, 232, and 234 are p-type transistors (e.g., PMOS LTPS-TFTs). In some embodiments, the components in driving circuit layer 216 may be formed on an amorphous silicon (a-Si) layer, and the TFTs in each pixel circuit may be n-type transistors (e.g., NMOS TFTs). In some embodiments, the TFTs in each pixel circuit may be organic TFTs (OTFT) or indium gallium zinc oxide (IGZO) TFTs.
Gate scanning driver 304 in this embodiment applies a plurality of scan signals S0-Sn, which are generated based on control signals 108 from control logic 104, to the scan lines (a.k.a. gate lines) for each row of subpixels in array of subpixels 300 in a sequence. The scan signals S0-Sn are applied to the gate electrode of a switching transistor of each pixel circuit during the scan/charging period to turn on the switching transistor so that the data signal for the corresponding subpixel can be written by source writing driver 306. As will be described below in detail, the sequence of applying the scan signals to each row of array of subpixels 300 (i.e., the gate scanning order) may vary in different embodiments. In some embodiments, not all the rows of subpixels are scanned in each frame. It is to be appreciated that although one gate scanning driver 304 is illustrated, more than one gate scanning driver may be included in some embodiments.
Source writing driver 306 in this embodiment is configured to write display data received from control logic 104 into array of subpixels 300 in each frame. For example, source writing driver 306 may simultaneously apply data signals D0-Dm to the data lines (a.k.a. source lines) for each column of subpixels. That is, source writing driver 306 may include one or more shift registers, digital-analog converters (DACs), multiplexers (MUXes), and arithmetic circuits for controlling the timing of application of voltage to the source electrode of the switching transistor of each pixel circuit (i.e., during the scan/charging period in each frame) and a magnitude of the applied voltage according to gradations of display data 106. It is to be appreciated that although one source writing driver 306 is illustrated, more than one source writing driver may be included in some embodiments.
As described above, the system and method for calibrating a display panel may be performed by processor 114 or control logic 104. Processor 114 may be any processor that can generate display data 106, e.g., pixel data, in each frame and provide display data 106 to control logic 104. Processor 114 may be, for example, a GPU, AP, APU, or GPGPU. Control logic 104 may receive other data, such as but not limited to, control instructions 118 (optional).
As described above, processor 114 may be any processor that can generate display data 106, e.g., pixel data and/or compensation data, in each frame and provide display data 106 to control logic 104. Processor 114 may be, for example, a GPU, AP, APU, or GPGPU. Processor 114 may also generate other data, such as but not limited to, control instructions 118 (optional).
Processor 114 is configured to calibrate distorted pixel data by obtaining a coordinate position of the distorted pixel data and obtaining at least one distortion scale parameter of the distorted pixel data. A calibrating window centered at the distorted pixel data is then determined based on the at least one distortion scale parameter, the calibrating window comprising a plurality of pixel data located along at least one distorted direction. A calibrated greyscale value for at least one simulated pixel data is then calculated based on the plurality of pixel data within the calibrating window and the at least one distortion scale parameter. A calibrated greyscale value of the distorted pixel data is then calculated based on the calibrated greyscale value of the at least one simulated pixel data.
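Tying these steps together, the following is a minimal end-to-end sketch of the horizontal pass in Python; the vertical pass mirrors it with VS, VrS, k2, and N. The dict-based frame, the helper, and all names are assumptions made for illustration, and the disclosure does not prescribe this implementation.

```python
import math

def _sample(frame, x, y):
    # Linear interpolation from the stored pixels around (x, y); `frame` is
    # assumed to map integer (x, y) tuples to greyscale values.
    xf, yf = math.floor(x), math.floor(y)
    fx, fy = x - xf, y - yf
    top = frame[(xf, yf)] * (1 - fx) + frame[(xf + 1, yf)] * fx
    bot = frame[(xf, yf + 1)] * (1 - fx) + frame[(xf + 1, yf + 1)] * fx
    return top * (1 - fy) + bot * fy

def calibrate_pixel(frame, x0, y0, hs, hrs, m=1, k1=2):
    """Sketch of the horizontal calibration pass for one distorted pixel."""
    h = math.floor(k1 * hs * m + 1)     # first scope H, rounded down
    half = h // 2
    values = []
    for i in range(-half, half + 1):
        if i == 0:                      # i ranges over integers other than 0
            continue
        # Simulated pixel along first direction X, tilted by HrS.
        values.append(_sample(frame, x0 + i, y0 + i * hrs))
    if not values:                      # degenerate window: keep the original
        return frame[(x0, y0)]
    return sum(values) / len(values)    # average over the simulated pixels
```

For hs = 1.5, m = 1, and k1 = 2, for example, the scope rounds down to 4 and the pass averages the four simulated pixels at i = -2, -1, 1, and 2, matching the B2, B1, B−1, and B−2 example discussed below.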
The at least one distortion scale parameter is configured to represent the changes in distance and orientation along at least one direction centered at the distorted pixel. In the present disclosure, two directions are illustrated.
In a wearable device, a plurality of pixels positioned in the peripheral region of display 102 are distorted, and for each distorted pixel, the distortion scale parameters are calculated for data reconstruction. In the present embodiment, memory 116 of the display system is further configured to store a look-up table. The look-up table is configured to store the first scale HS, the first rotate scale HrS, the second scale VS, and the second rotate scale VrS of each distorted pixel. For a first frame input into display 102, the four distortion scale parameters of each distorted pixel are calculated based on the changes in distance and orientation between the distorted pixel and the pixels adjacent to it in the first and second directions, as discussed above. The calculated distortion scale parameters are then stored in memory 116. For frames continuously input into the same wearable device, the distortion and reverse distortion of each frame are usually the same; therefore, for the distorted pixels in the distorted frames input into display 102 after the first distorted frame, the distortion scale parameters of each distorted pixel may be extracted from the look-up table directly, without recalculation, to improve display efficiency.
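A sketch of this caching scheme follows; the module-level dict and function name are hypothetical. The parameters are computed while the first distorted frame is processed and read directly from the table for every later frame.

```python
# Look-up table for distortion scale parameters: computed once for the first
# distorted frame, reused for every later frame of the same device (sketch).
_scale_lut = {}  # (x0, y0) -> (HS, HrS, VS, VrS)

def get_scales(pos, first_neighbor, second_neighbor):
    """All arguments are post-distortion (x, y) coordinates."""
    if pos not in _scale_lut:                 # first frame: compute and store
        x0, y0 = pos
        x1, y1 = first_neighbor
        x2, y2 = second_neighbor
        _scale_lut[pos] = (x1 - x0, y1 - y0,  # HS, HrS
                           x2 - x0, y2 - y0)  # VS, VrS
    return _scale_lut[pos]                    # later frames: direct table hit
```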
Still taking distorted pixel A as an example, after the distortion scale parameters are generated, a calibrating window centered at distorted pixel A is determined.
The calibrating window includes a first scope H extending along first direction X, where H=k1×HS×M+1, k1 is a first calibrating coefficient, and M is a downscale unit of distortion of distorted pixel A along first direction X. In one example, k1=2.
The calibrating window further includes a second scope V extending along second direction Y, where V=k2×VS×N+1, k2 is a second calibrating coefficient, and N is a downscale unit of distortion of distorted pixel A along second direction Y. In one example, k2=2.
In other embodiments, for example, in a three-dimensional color space, the calibrating window may also be three-dimensional, and a third direction Z and a corresponding third scope may be used in the calibrating window (not shown in the figures). The present embodiments are provided to illustrate the present disclosure and should not be construed as limiting it.
The coordinate positions of simulated pixels B along first direction X within first scope H are then calculated by xi=x0+i and yi=y0+i×HrS, where (xi, yi) is the coordinate position of the ith simulated pixel B within first scope H, and i is an integer other than 0.
After the coordinate positions of simulated pixels B along first direction X within first scope H are determined, the greyscale values of simulated pixels B within the calibrating window are calculated. In the present embodiment, linear interpolation is processed based on the greyscale values of the distorted pixels adjacent to each simulated pixel B, where the distance between simulated pixel B and each pixel used for the interpolation is less than 1.
The greyscale value of distorted pixel A is then calculated based on the greyscale values of the simulated pixels along first direction X within first scope H, in this embodiment, the greyscale values of simulated pixels B2, B1, B−1, and B−2. In an embodiment, the greyscale value of distorted pixel A is an average of the greyscale values of the simulated pixels along first direction X within first scope H. In other embodiments, before calculating the greyscale value of distorted pixel A, first scope H is rounded down by a rounding function to obtain the number of simulated pixels within first scope H, in case H is not an integer. In still other embodiments, Gaussian operations are processed to obtain the greyscale value of distorted pixel A. As the distortion along first direction X is accurately reflected by first scale HS and first rotate scale HrS, which are critical parameters for calculating the calibrated greyscale value of distorted pixel A, the distortion along first direction X is fully considered during the reconstruction; thus, the jaggies along first direction X can be eliminated in the present display system.
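For the rounding and the Gaussian alternative just mentioned, a brief sketch follows; the plain rounding mirrors the embodiment above, while the Gaussian weights are an illustrative assumption, since the disclosure does not specify them.

```python
import math

def simulated_count(h):
    """Round first scope H down when it is not an integer."""
    return math.floor(h)

def gaussian_weighted(values, sigma=1.0):
    # Hypothetical Gaussian weighting: simulated pixels nearer the window
    # center contribute more than those at its edges.
    center = (len(values) - 1) / 2
    weights = [math.exp(-((k - center) ** 2) / (2 * sigma ** 2))
               for k in range(len(values))]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

print(simulated_count(7.5))                         # 7
print(gaussian_weighted([18.0, 19.5, 21.0, 22.5]))  # ~20.25 for symmetric input
```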
Similarly, the coordinate positions of simulated pixels C along second direction Y within second scope V are calculated by xj=x0+j and yj=y0+j×VrS, where (xj, yj) is the coordinate position of the jth simulated pixel C within second scope V, and j is an integer other than 0.
After the coordinate positions of simulated pixels C along second direction Y within second scope V are determined, the greyscale values of simulated pixels C within the calibrating window are calculated. In the present embodiment, linear interpolation is processed based on the greyscale values of the calibrated pixels adjacent to each simulated pixel C, where the distance between simulated pixel C and each pixel used for the interpolation is less than 1.
The greyscale value of distorted pixel A is then calculated based on the greyscale values of the simulated pixels along second direction Y within second scope V, in this embodiment, the greyscale values of simulated pixels C2, C1, C−1, and C−2. In an embodiment, the greyscale value of distorted pixel A is an average of the greyscale values of the simulated pixels along second direction Y within second scope V. In another embodiment, before calculating the greyscale value of distorted pixel A, second scope V is rounded down by a rounding function to obtain the number of simulated pixels within second scope V, in case V is not an integer. In other embodiments, Gaussian operations are processed to obtain the greyscale value of distorted pixel A. As the distortion along second direction Y is accurately reflected by second scale VS and second rotate scale VrS, which are critical parameters for calculating the calibrated greyscale value of distorted pixel A, the distortion along second direction Y is fully considered during the reconstruction; thus, the jaggies along second direction Y can be eliminated in the present display system.
Starting at 1002, a coordinate position of the distorted pixel is obtained from the input device. For a given pixel, the coordinate position of the pixel after distortion is the same as its coordinate position before distortion. The undistorted coordinate position and greyscale value of each pixel are input into processor 114 or control logic 104 through an input device. The coordinate position of each distorted pixel may be obtained from the input device as needed.
At 1004, at least one distortion scale parameter of the distorted pixel is obtained. The at least one distortion scale parameter is configured to represent the changes in distance and orientation along at least one direction centered at the distorted pixel. In the present disclosure, two directions are illustrated. In first direction X, first scale HS is configured to illustrate a change in distance between the distorted pixel and the first pixel adjacent to the distorted pixel, and first rotate scale HrS is configured to illustrate a change in orientation between the distorted pixel and the first pixel. In second direction Y, second scale VS is configured to illustrate a change in distance between the distorted pixel and the second pixel adjacent to the distorted pixel, and second rotate scale VrS is configured to illustrate a change in orientation between the distorted pixel and the second pixel.
In a wearable device, a plurality of pixels positioned in the peripheral region of display 102 are distorted, and for each distorted pixel, the distortion scale parameters are calculated for data reconstruction. In the present embodiment, first scale HS, first rotate scale HrS, second scale VS, and second rotate scale VrS are calculated for each of the plurality of distorted pixels, and a look-up table is generated to store the four parameters of each distorted pixel. For a first frame input into display 102, the four distortion scale parameters of each distorted pixel are calculated based on the changes in distance and orientation between the distorted pixel and the pixels adjacent to it in the first and second directions, as discussed above. The calculated distortion scale parameters are then stored into the look-up table. For frames continuously input into the same wearable device, the distortion and reverse distortion of each frame are usually the same; therefore, for the distorted pixels in the distorted frames input into display 102 after the first distorted frame, the distortion scale parameters of each distorted pixel may be extracted from the look-up table directly, without recalculation, to improve display efficiency.
At 1006, a calibrating window centered at the distorted pixel is determined based on the at least one distortion scale parameter, the calibrating window comprising a plurality of pixels located along at least one distorted direction.
At 1008, the calibrated greyscale value for at least one simulated pixel is calculated based on the pixels within the calibrating window and the at least one distortion scale parameter, and then at 1010, the calibrated greyscale value of the distorted pixel is calculated based on the calibrated greyscale value of the at least one simulated pixel. The methods for determining the first and second scopes and calculating the greyscale values are detailed above and will not be repeated here.
The above detailed description of the disclosure and the examples described therein have been presented for the purposes of illustration and description only and not by limitation. It is therefore contemplated that the present disclosure covers any and all modifications, variations or equivalents that fall within the spirit and scope of the basic underlying principles disclosed above and claimed herein.
This application is a continuation of International Application No. PCT/CN2023/125625, filed on Oct. 20, 2023, which is incorporated herein by reference in its entirety.
Number | Date | Country
Parent: PCT/CN2023/125625 | Oct 2023 | WO
Child: 18385479 | | US